Oct 09 15:50:58 localhost kernel: Linux version 5.14.0-620.el9.x86_64 (mockbuild@x86-05.stream.rdu2.redhat.com) (gcc (GCC) 11.5.0 20240719 (Red Hat 11.5.0-11), GNU ld version 2.35.2-67.el9) #1 SMP PREEMPT_DYNAMIC Fri Sep 26 01:13:23 UTC 2025
Oct 09 15:50:58 localhost kernel: The list of certified hardware and cloud instances for Red Hat Enterprise Linux 9 can be viewed at the Red Hat Ecosystem Catalog, https://catalog.redhat.com.
Oct 09 15:50:58 localhost kernel: Command line: BOOT_IMAGE=(hd0,msdos1)/boot/vmlinuz-5.14.0-620.el9.x86_64 root=UUID=1631a6ad-43b8-436d-ae76-16fa14b94458 ro console=ttyS0,115200n8 no_timer_check net.ifnames=0 crashkernel=1G-2G:192M,2G-64G:256M,64G-:512M
Oct 09 15:50:58 localhost kernel: BIOS-provided physical RAM map:
Oct 09 15:50:58 localhost kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009fbff] usable
Oct 09 15:50:58 localhost kernel: BIOS-e820: [mem 0x000000000009fc00-0x000000000009ffff] reserved
Oct 09 15:50:58 localhost kernel: BIOS-e820: [mem 0x00000000000f0000-0x00000000000fffff] reserved
Oct 09 15:50:58 localhost kernel: BIOS-e820: [mem 0x0000000000100000-0x00000000bffdafff] usable
Oct 09 15:50:58 localhost kernel: BIOS-e820: [mem 0x00000000bffdb000-0x00000000bfffffff] reserved
Oct 09 15:50:58 localhost kernel: BIOS-e820: [mem 0x00000000feffc000-0x00000000feffffff] reserved
Oct 09 15:50:58 localhost kernel: BIOS-e820: [mem 0x00000000fffc0000-0x00000000ffffffff] reserved
Oct 09 15:50:58 localhost kernel: BIOS-e820: [mem 0x0000000100000000-0x000000023fffffff] usable
Oct 09 15:50:58 localhost kernel: NX (Execute Disable) protection: active
Oct 09 15:50:58 localhost kernel: APIC: Static calls initialized
Oct 09 15:50:58 localhost kernel: SMBIOS 2.8 present.
Oct 09 15:50:58 localhost kernel: DMI: OpenStack Foundation OpenStack Nova, BIOS 1.15.0-1 04/01/2014
Oct 09 15:50:58 localhost kernel: Hypervisor detected: KVM
Oct 09 15:50:58 localhost kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00
Oct 09 15:50:58 localhost kernel: kvm-clock: using sched offset of 2241525101734 cycles
Oct 09 15:50:58 localhost kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns
Oct 09 15:50:58 localhost kernel: tsc: Detected 2800.000 MHz processor
Oct 09 15:50:58 localhost kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved
Oct 09 15:50:58 localhost kernel: e820: remove [mem 0x000a0000-0x000fffff] usable
Oct 09 15:50:58 localhost kernel: last_pfn = 0x240000 max_arch_pfn = 0x400000000
Oct 09 15:50:58 localhost kernel: MTRR map: 4 entries (3 fixed + 1 variable; max 19), built from 8 variable MTRRs
Oct 09 15:50:58 localhost kernel: x86/PAT: Configuration [0-7]: WB  WC  UC- UC  WB  WP  UC- WT  
Oct 09 15:50:58 localhost kernel: last_pfn = 0xbffdb max_arch_pfn = 0x400000000
Oct 09 15:50:58 localhost kernel: found SMP MP-table at [mem 0x000f5ae0-0x000f5aef]
Oct 09 15:50:58 localhost kernel: Using GB pages for direct mapping
Oct 09 15:50:58 localhost kernel: RAMDISK: [mem 0x2d7c4000-0x32bd9fff]
Oct 09 15:50:58 localhost kernel: ACPI: Early table checksum verification disabled
Oct 09 15:50:58 localhost kernel: ACPI: RSDP 0x00000000000F5AA0 000014 (v00 BOCHS )
Oct 09 15:50:58 localhost kernel: ACPI: RSDT 0x00000000BFFE16C4 000030 (v01 BOCHS  BXPC     00000001 BXPC 00000001)
Oct 09 15:50:58 localhost kernel: ACPI: FACP 0x00000000BFFE1578 000074 (v01 BOCHS  BXPC     00000001 BXPC 00000001)
Oct 09 15:50:58 localhost kernel: ACPI: DSDT 0x00000000BFFDFC80 0018F8 (v01 BOCHS  BXPC     00000001 BXPC 00000001)
Oct 09 15:50:58 localhost kernel: ACPI: FACS 0x00000000BFFDFC40 000040
Oct 09 15:50:58 localhost kernel: ACPI: APIC 0x00000000BFFE15EC 0000B0 (v01 BOCHS  BXPC     00000001 BXPC 00000001)
Oct 09 15:50:58 localhost kernel: ACPI: WAET 0x00000000BFFE169C 000028 (v01 BOCHS  BXPC     00000001 BXPC 00000001)
Oct 09 15:50:58 localhost kernel: ACPI: Reserving FACP table memory at [mem 0xbffe1578-0xbffe15eb]
Oct 09 15:50:58 localhost kernel: ACPI: Reserving DSDT table memory at [mem 0xbffdfc80-0xbffe1577]
Oct 09 15:50:58 localhost kernel: ACPI: Reserving FACS table memory at [mem 0xbffdfc40-0xbffdfc7f]
Oct 09 15:50:58 localhost kernel: ACPI: Reserving APIC table memory at [mem 0xbffe15ec-0xbffe169b]
Oct 09 15:50:58 localhost kernel: ACPI: Reserving WAET table memory at [mem 0xbffe169c-0xbffe16c3]
Oct 09 15:50:58 localhost kernel: No NUMA configuration found
Oct 09 15:50:58 localhost kernel: Faking a node at [mem 0x0000000000000000-0x000000023fffffff]
Oct 09 15:50:58 localhost kernel: NODE_DATA(0) allocated [mem 0x23ffd5000-0x23fffffff]
Oct 09 15:50:58 localhost kernel: crashkernel reserved: 0x00000000af000000 - 0x00000000bf000000 (256 MB)
Oct 09 15:50:58 localhost kernel: Zone ranges:
Oct 09 15:50:58 localhost kernel:   DMA      [mem 0x0000000000001000-0x0000000000ffffff]
Oct 09 15:50:58 localhost kernel:   DMA32    [mem 0x0000000001000000-0x00000000ffffffff]
Oct 09 15:50:58 localhost kernel:   Normal   [mem 0x0000000100000000-0x000000023fffffff]
Oct 09 15:50:58 localhost kernel:   Device   empty
Oct 09 15:50:58 localhost kernel: Movable zone start for each node
Oct 09 15:50:58 localhost kernel: Early memory node ranges
Oct 09 15:50:58 localhost kernel:   node   0: [mem 0x0000000000001000-0x000000000009efff]
Oct 09 15:50:58 localhost kernel:   node   0: [mem 0x0000000000100000-0x00000000bffdafff]
Oct 09 15:50:58 localhost kernel:   node   0: [mem 0x0000000100000000-0x000000023fffffff]
Oct 09 15:50:58 localhost kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000023fffffff]
Oct 09 15:50:58 localhost kernel: On node 0, zone DMA: 1 pages in unavailable ranges
Oct 09 15:50:58 localhost kernel: On node 0, zone DMA: 97 pages in unavailable ranges
Oct 09 15:50:58 localhost kernel: On node 0, zone Normal: 37 pages in unavailable ranges
Oct 09 15:50:58 localhost kernel: ACPI: PM-Timer IO Port: 0x608
Oct 09 15:50:58 localhost kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1])
Oct 09 15:50:58 localhost kernel: IOAPIC[0]: apic_id 0, version 17, address 0xfec00000, GSI 0-23
Oct 09 15:50:58 localhost kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 dfl dfl)
Oct 09 15:50:58 localhost kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level)
Oct 09 15:50:58 localhost kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level)
Oct 09 15:50:58 localhost kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level)
Oct 09 15:50:58 localhost kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level)
Oct 09 15:50:58 localhost kernel: ACPI: Using ACPI (MADT) for SMP configuration information
Oct 09 15:50:58 localhost kernel: TSC deadline timer available
Oct 09 15:50:58 localhost kernel: CPU topo: Max. logical packages:   8
Oct 09 15:50:58 localhost kernel: CPU topo: Max. logical dies:       8
Oct 09 15:50:58 localhost kernel: CPU topo: Max. dies per package:   1
Oct 09 15:50:58 localhost kernel: CPU topo: Max. threads per core:   1
Oct 09 15:50:58 localhost kernel: CPU topo: Num. cores per package:     1
Oct 09 15:50:58 localhost kernel: CPU topo: Num. threads per package:   1
Oct 09 15:50:58 localhost kernel: CPU topo: Allowing 8 present CPUs plus 0 hotplug CPUs
Oct 09 15:50:58 localhost kernel: kvm-guest: APIC: eoi() replaced with kvm_guest_apic_eoi_write()
Oct 09 15:50:58 localhost kernel: PM: hibernation: Registered nosave memory: [mem 0x00000000-0x00000fff]
Oct 09 15:50:58 localhost kernel: PM: hibernation: Registered nosave memory: [mem 0x0009f000-0x0009ffff]
Oct 09 15:50:58 localhost kernel: PM: hibernation: Registered nosave memory: [mem 0x000a0000-0x000effff]
Oct 09 15:50:58 localhost kernel: PM: hibernation: Registered nosave memory: [mem 0x000f0000-0x000fffff]
Oct 09 15:50:58 localhost kernel: PM: hibernation: Registered nosave memory: [mem 0xbffdb000-0xbfffffff]
Oct 09 15:50:58 localhost kernel: PM: hibernation: Registered nosave memory: [mem 0xc0000000-0xfeffbfff]
Oct 09 15:50:58 localhost kernel: PM: hibernation: Registered nosave memory: [mem 0xfeffc000-0xfeffffff]
Oct 09 15:50:58 localhost kernel: PM: hibernation: Registered nosave memory: [mem 0xff000000-0xfffbffff]
Oct 09 15:50:58 localhost kernel: PM: hibernation: Registered nosave memory: [mem 0xfffc0000-0xffffffff]
Oct 09 15:50:58 localhost kernel: [mem 0xc0000000-0xfeffbfff] available for PCI devices
Oct 09 15:50:58 localhost kernel: Booting paravirtualized kernel on KVM
Oct 09 15:50:58 localhost kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns
Oct 09 15:50:58 localhost kernel: setup_percpu: NR_CPUS:8192 nr_cpumask_bits:8 nr_cpu_ids:8 nr_node_ids:1
Oct 09 15:50:58 localhost kernel: percpu: Embedded 64 pages/cpu s225280 r8192 d28672 u262144
Oct 09 15:50:58 localhost kernel: pcpu-alloc: s225280 r8192 d28672 u262144 alloc=1*2097152
Oct 09 15:50:58 localhost kernel: pcpu-alloc: [0] 0 1 2 3 4 5 6 7 
Oct 09 15:50:58 localhost kernel: kvm-guest: PV spinlocks disabled, no host support
Oct 09 15:50:58 localhost kernel: Kernel command line: BOOT_IMAGE=(hd0,msdos1)/boot/vmlinuz-5.14.0-620.el9.x86_64 root=UUID=1631a6ad-43b8-436d-ae76-16fa14b94458 ro console=ttyS0,115200n8 no_timer_check net.ifnames=0 crashkernel=1G-2G:192M,2G-64G:256M,64G-:512M
Oct 09 15:50:58 localhost kernel: Unknown kernel command line parameters "BOOT_IMAGE=(hd0,msdos1)/boot/vmlinuz-5.14.0-620.el9.x86_64", will be passed to user space.
Oct 09 15:50:58 localhost kernel: random: crng init done
Oct 09 15:50:58 localhost kernel: Dentry cache hash table entries: 1048576 (order: 11, 8388608 bytes, linear)
Oct 09 15:50:58 localhost kernel: Inode-cache hash table entries: 524288 (order: 10, 4194304 bytes, linear)
Oct 09 15:50:58 localhost kernel: Fallback order for Node 0: 0 
Oct 09 15:50:58 localhost kernel: Built 1 zonelists, mobility grouping on.  Total pages: 2064091
Oct 09 15:50:58 localhost kernel: Policy zone: Normal
Oct 09 15:50:58 localhost kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Oct 09 15:50:58 localhost kernel: software IO TLB: area num 8.
Oct 09 15:50:58 localhost kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=8, Nodes=1
Oct 09 15:50:58 localhost kernel: ftrace: allocating 49370 entries in 193 pages
Oct 09 15:50:58 localhost kernel: ftrace: allocated 193 pages with 3 groups
Oct 09 15:50:58 localhost kernel: Dynamic Preempt: voluntary
Oct 09 15:50:58 localhost kernel: rcu: Preemptible hierarchical RCU implementation.
Oct 09 15:50:58 localhost kernel: rcu:         RCU event tracing is enabled.
Oct 09 15:50:58 localhost kernel: rcu:         RCU restricting CPUs from NR_CPUS=8192 to nr_cpu_ids=8.
Oct 09 15:50:58 localhost kernel:         Trampoline variant of Tasks RCU enabled.
Oct 09 15:50:58 localhost kernel:         Rude variant of Tasks RCU enabled.
Oct 09 15:50:58 localhost kernel:         Tracing variant of Tasks RCU enabled.
Oct 09 15:50:58 localhost kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Oct 09 15:50:58 localhost kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=8
Oct 09 15:50:58 localhost kernel: RCU Tasks: Setting shift to 3 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=8.
Oct 09 15:50:58 localhost kernel: RCU Tasks Rude: Setting shift to 3 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=8.
Oct 09 15:50:58 localhost kernel: RCU Tasks Trace: Setting shift to 3 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=8.
Oct 09 15:50:58 localhost kernel: NR_IRQS: 524544, nr_irqs: 488, preallocated irqs: 16
Oct 09 15:50:58 localhost kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
Oct 09 15:50:58 localhost kernel: kfence: initialized - using 2097152 bytes for 255 objects at 0x(____ptrval____)-0x(____ptrval____)
Oct 09 15:50:58 localhost kernel: Console: colour VGA+ 80x25
Oct 09 15:50:58 localhost kernel: printk: console [ttyS0] enabled
Oct 09 15:50:58 localhost kernel: ACPI: Core revision 20230331
Oct 09 15:50:58 localhost kernel: APIC: Switch to symmetric I/O mode setup
Oct 09 15:50:58 localhost kernel: x2apic enabled
Oct 09 15:50:58 localhost kernel: APIC: Switched APIC routing to: physical x2apic
Oct 09 15:50:58 localhost kernel: tsc: Marking TSC unstable due to TSCs unsynchronized
Oct 09 15:50:58 localhost kernel: Calibrating delay loop (skipped) preset value.. 5600.00 BogoMIPS (lpj=2800000)
Oct 09 15:50:58 localhost kernel: x86/cpu: User Mode Instruction Prevention (UMIP) activated
Oct 09 15:50:58 localhost kernel: Last level iTLB entries: 4KB 512, 2MB 255, 4MB 127
Oct 09 15:50:58 localhost kernel: Last level dTLB entries: 4KB 512, 2MB 255, 4MB 127, 1GB 0
Oct 09 15:50:58 localhost kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization
Oct 09 15:50:58 localhost kernel: Spectre V2 : Mitigation: Retpolines
Oct 09 15:50:58 localhost kernel: Spectre V2 : Spectre v2 / SpectreRSB: Filling RSB on context switch and VMEXIT
Oct 09 15:50:58 localhost kernel: Spectre V2 : Enabling Speculation Barrier for firmware calls
Oct 09 15:50:58 localhost kernel: RETBleed: Mitigation: untrained return thunk
Oct 09 15:50:58 localhost kernel: Spectre V2 : mitigation: Enabling conditional Indirect Branch Prediction Barrier
Oct 09 15:50:58 localhost kernel: Speculative Store Bypass: Mitigation: Speculative Store Bypass disabled via prctl
Oct 09 15:50:58 localhost kernel: Speculative Return Stack Overflow: IBPB-extending microcode not applied!
Oct 09 15:50:58 localhost kernel: Speculative Return Stack Overflow: WARNING: See https://kernel.org/doc/html/latest/admin-guide/hw-vuln/srso.html for mitigation options.
Oct 09 15:50:58 localhost kernel: x86/bugs: return thunk changed
Oct 09 15:50:58 localhost kernel: Speculative Return Stack Overflow: Vulnerable: Safe RET, no microcode
Oct 09 15:50:58 localhost kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers'
Oct 09 15:50:58 localhost kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers'
Oct 09 15:50:58 localhost kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers'
Oct 09 15:50:58 localhost kernel: x86/fpu: xstate_offset[2]:  576, xstate_sizes[2]:  256
Oct 09 15:50:58 localhost kernel: x86/fpu: Enabled xstate features 0x7, context size is 832 bytes, using 'compacted' format.
Oct 09 15:50:58 localhost kernel: Freeing SMP alternatives memory: 40K
Oct 09 15:50:58 localhost kernel: pid_max: default: 32768 minimum: 301
Oct 09 15:50:58 localhost kernel: LSM: initializing lsm=lockdown,capability,landlock,yama,integrity,selinux,bpf
Oct 09 15:50:58 localhost kernel: landlock: Up and running.
Oct 09 15:50:58 localhost kernel: Yama: becoming mindful.
Oct 09 15:50:58 localhost kernel: SELinux:  Initializing.
Oct 09 15:50:58 localhost kernel: LSM support for eBPF active
Oct 09 15:50:58 localhost kernel: Mount-cache hash table entries: 16384 (order: 5, 131072 bytes, linear)
Oct 09 15:50:58 localhost kernel: Mountpoint-cache hash table entries: 16384 (order: 5, 131072 bytes, linear)
Oct 09 15:50:58 localhost kernel: smpboot: CPU0: AMD EPYC-Rome Processor (family: 0x17, model: 0x31, stepping: 0x0)
Oct 09 15:50:58 localhost kernel: Performance Events: Fam17h+ core perfctr, AMD PMU driver.
Oct 09 15:50:58 localhost kernel: ... version:                0
Oct 09 15:50:58 localhost kernel: ... bit width:              48
Oct 09 15:50:58 localhost kernel: ... generic registers:      6
Oct 09 15:50:58 localhost kernel: ... value mask:             0000ffffffffffff
Oct 09 15:50:58 localhost kernel: ... max period:             00007fffffffffff
Oct 09 15:50:58 localhost kernel: ... fixed-purpose events:   0
Oct 09 15:50:58 localhost kernel: ... event mask:             000000000000003f
Oct 09 15:50:58 localhost kernel: signal: max sigframe size: 1776
Oct 09 15:50:58 localhost kernel: rcu: Hierarchical SRCU implementation.
Oct 09 15:50:58 localhost kernel: rcu:         Max phase no-delay instances is 400.
Oct 09 15:50:58 localhost kernel: smp: Bringing up secondary CPUs ...
Oct 09 15:50:58 localhost kernel: smpboot: x86: Booting SMP configuration:
Oct 09 15:50:58 localhost kernel: .... node  #0, CPUs:      #1 #2 #3 #4 #5 #6 #7
Oct 09 15:50:58 localhost kernel: smp: Brought up 1 node, 8 CPUs
Oct 09 15:50:58 localhost kernel: smpboot: Total of 8 processors activated (44800.00 BogoMIPS)
Oct 09 15:50:58 localhost kernel: node 0 deferred pages initialised in 26ms
Oct 09 15:50:58 localhost kernel: Memory: 7765604K/8388068K available (16384K kernel code, 5784K rwdata, 13996K rodata, 4068K init, 7304K bss, 616504K reserved, 0K cma-reserved)
Oct 09 15:50:58 localhost kernel: devtmpfs: initialized
Oct 09 15:50:58 localhost kernel: x86/mm: Memory block size: 128MB
Oct 09 15:50:58 localhost kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Oct 09 15:50:58 localhost kernel: futex hash table entries: 2048 (order: 5, 131072 bytes, linear)
Oct 09 15:50:58 localhost kernel: pinctrl core: initialized pinctrl subsystem
Oct 09 15:50:58 localhost kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Oct 09 15:50:58 localhost kernel: DMA: preallocated 1024 KiB GFP_KERNEL pool for atomic allocations
Oct 09 15:50:58 localhost kernel: DMA: preallocated 1024 KiB GFP_KERNEL|GFP_DMA pool for atomic allocations
Oct 09 15:50:58 localhost kernel: DMA: preallocated 1024 KiB GFP_KERNEL|GFP_DMA32 pool for atomic allocations
Oct 09 15:50:58 localhost kernel: audit: initializing netlink subsys (disabled)
Oct 09 15:50:58 localhost kernel: audit: type=2000 audit(1760025055.811:1): state=initialized audit_enabled=0 res=1
Oct 09 15:50:58 localhost kernel: thermal_sys: Registered thermal governor 'fair_share'
Oct 09 15:50:58 localhost kernel: thermal_sys: Registered thermal governor 'step_wise'
Oct 09 15:50:58 localhost kernel: thermal_sys: Registered thermal governor 'user_space'
Oct 09 15:50:58 localhost kernel: cpuidle: using governor menu
Oct 09 15:50:58 localhost kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Oct 09 15:50:58 localhost kernel: PCI: Using configuration type 1 for base access
Oct 09 15:50:58 localhost kernel: PCI: Using configuration type 1 for extended access
Oct 09 15:50:58 localhost kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible.
Oct 09 15:50:58 localhost kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages
Oct 09 15:50:58 localhost kernel: HugeTLB: 16380 KiB vmemmap can be freed for a 1.00 GiB page
Oct 09 15:50:58 localhost kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
Oct 09 15:50:58 localhost kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page
Oct 09 15:50:58 localhost kernel: Demotion targets for Node 0: null
Oct 09 15:50:58 localhost kernel: cryptd: max_cpu_qlen set to 1000
Oct 09 15:50:58 localhost kernel: ACPI: Added _OSI(Module Device)
Oct 09 15:50:58 localhost kernel: ACPI: Added _OSI(Processor Device)
Oct 09 15:50:58 localhost kernel: ACPI: Added _OSI(3.0 _SCP Extensions)
Oct 09 15:50:58 localhost kernel: ACPI: Added _OSI(Processor Aggregator Device)
Oct 09 15:50:58 localhost kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
Oct 09 15:50:58 localhost kernel: ACPI: _OSC evaluation for CPUs failed, trying _PDC
Oct 09 15:50:58 localhost kernel: ACPI: Interpreter enabled
Oct 09 15:50:58 localhost kernel: ACPI: PM: (supports S0 S3 S4 S5)
Oct 09 15:50:58 localhost kernel: ACPI: Using IOAPIC for interrupt routing
Oct 09 15:50:58 localhost kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug
Oct 09 15:50:58 localhost kernel: PCI: Using E820 reservations for host bridge windows
Oct 09 15:50:58 localhost kernel: ACPI: Enabled 2 GPEs in block 00 to 0F
Oct 09 15:50:58 localhost kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff])
Oct 09 15:50:58 localhost kernel: acpi PNP0A03:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI EDR HPX-Type3]
Oct 09 15:50:58 localhost kernel: acpiphp: Slot [3] registered
Oct 09 15:50:58 localhost kernel: acpiphp: Slot [4] registered
Oct 09 15:50:58 localhost kernel: acpiphp: Slot [5] registered
Oct 09 15:50:58 localhost kernel: acpiphp: Slot [6] registered
Oct 09 15:50:58 localhost kernel: acpiphp: Slot [7] registered
Oct 09 15:50:58 localhost kernel: acpiphp: Slot [8] registered
Oct 09 15:50:58 localhost kernel: acpiphp: Slot [9] registered
Oct 09 15:50:58 localhost kernel: acpiphp: Slot [10] registered
Oct 09 15:50:58 localhost kernel: acpiphp: Slot [11] registered
Oct 09 15:50:58 localhost kernel: acpiphp: Slot [12] registered
Oct 09 15:50:58 localhost kernel: acpiphp: Slot [13] registered
Oct 09 15:50:58 localhost kernel: acpiphp: Slot [14] registered
Oct 09 15:50:58 localhost kernel: acpiphp: Slot [15] registered
Oct 09 15:50:58 localhost kernel: acpiphp: Slot [16] registered
Oct 09 15:50:58 localhost kernel: acpiphp: Slot [17] registered
Oct 09 15:50:58 localhost kernel: acpiphp: Slot [18] registered
Oct 09 15:50:58 localhost kernel: acpiphp: Slot [19] registered
Oct 09 15:50:58 localhost kernel: acpiphp: Slot [20] registered
Oct 09 15:50:58 localhost kernel: acpiphp: Slot [21] registered
Oct 09 15:50:58 localhost kernel: acpiphp: Slot [22] registered
Oct 09 15:50:58 localhost kernel: acpiphp: Slot [23] registered
Oct 09 15:50:58 localhost kernel: acpiphp: Slot [24] registered
Oct 09 15:50:58 localhost kernel: acpiphp: Slot [25] registered
Oct 09 15:50:58 localhost kernel: acpiphp: Slot [26] registered
Oct 09 15:50:58 localhost kernel: acpiphp: Slot [27] registered
Oct 09 15:50:58 localhost kernel: acpiphp: Slot [28] registered
Oct 09 15:50:58 localhost kernel: acpiphp: Slot [29] registered
Oct 09 15:50:58 localhost kernel: acpiphp: Slot [30] registered
Oct 09 15:50:58 localhost kernel: acpiphp: Slot [31] registered
Oct 09 15:50:58 localhost kernel: PCI host bridge to bus 0000:00
Oct 09 15:50:58 localhost kernel: pci_bus 0000:00: root bus resource [io  0x0000-0x0cf7 window]
Oct 09 15:50:58 localhost kernel: pci_bus 0000:00: root bus resource [io  0x0d00-0xffff window]
Oct 09 15:50:58 localhost kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window]
Oct 09 15:50:58 localhost kernel: pci_bus 0000:00: root bus resource [mem 0xc0000000-0xfebfffff window]
Oct 09 15:50:58 localhost kernel: pci_bus 0000:00: root bus resource [mem 0x240000000-0x2bfffffff window]
Oct 09 15:50:58 localhost kernel: pci_bus 0000:00: root bus resource [bus 00-ff]
Oct 09 15:50:58 localhost kernel: pci 0000:00:00.0: [8086:1237] type 00 class 0x060000 conventional PCI endpoint
Oct 09 15:50:58 localhost kernel: pci 0000:00:01.0: [8086:7000] type 00 class 0x060100 conventional PCI endpoint
Oct 09 15:50:58 localhost kernel: pci 0000:00:01.1: [8086:7010] type 00 class 0x010180 conventional PCI endpoint
Oct 09 15:50:58 localhost kernel: pci 0000:00:01.1: BAR 4 [io  0xc180-0xc18f]
Oct 09 15:50:58 localhost kernel: pci 0000:00:01.1: BAR 0 [io  0x01f0-0x01f7]: legacy IDE quirk
Oct 09 15:50:58 localhost kernel: pci 0000:00:01.1: BAR 1 [io  0x03f6]: legacy IDE quirk
Oct 09 15:50:58 localhost kernel: pci 0000:00:01.1: BAR 2 [io  0x0170-0x0177]: legacy IDE quirk
Oct 09 15:50:58 localhost kernel: pci 0000:00:01.1: BAR 3 [io  0x0376]: legacy IDE quirk
Oct 09 15:50:58 localhost kernel: pci 0000:00:01.2: [8086:7020] type 00 class 0x0c0300 conventional PCI endpoint
Oct 09 15:50:58 localhost kernel: pci 0000:00:01.2: BAR 4 [io  0xc140-0xc15f]
Oct 09 15:50:58 localhost kernel: pci 0000:00:01.3: [8086:7113] type 00 class 0x068000 conventional PCI endpoint
Oct 09 15:50:58 localhost kernel: pci 0000:00:01.3: quirk: [io  0x0600-0x063f] claimed by PIIX4 ACPI
Oct 09 15:50:58 localhost kernel: pci 0000:00:01.3: quirk: [io  0x0700-0x070f] claimed by PIIX4 SMB
Oct 09 15:50:58 localhost kernel: pci 0000:00:02.0: [1af4:1050] type 00 class 0x030000 conventional PCI endpoint
Oct 09 15:50:58 localhost kernel: pci 0000:00:02.0: BAR 0 [mem 0xfe000000-0xfe7fffff pref]
Oct 09 15:50:58 localhost kernel: pci 0000:00:02.0: BAR 2 [mem 0xfe800000-0xfe803fff 64bit pref]
Oct 09 15:50:58 localhost kernel: pci 0000:00:02.0: BAR 4 [mem 0xfeb90000-0xfeb90fff]
Oct 09 15:50:58 localhost kernel: pci 0000:00:02.0: ROM [mem 0xfeb80000-0xfeb8ffff pref]
Oct 09 15:50:58 localhost kernel: pci 0000:00:02.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff]
Oct 09 15:50:58 localhost kernel: pci 0000:00:03.0: [1af4:1000] type 00 class 0x020000 conventional PCI endpoint
Oct 09 15:50:58 localhost kernel: pci 0000:00:03.0: BAR 0 [io  0xc080-0xc0bf]
Oct 09 15:50:58 localhost kernel: pci 0000:00:03.0: BAR 1 [mem 0xfeb91000-0xfeb91fff]
Oct 09 15:50:58 localhost kernel: pci 0000:00:03.0: BAR 4 [mem 0xfe804000-0xfe807fff 64bit pref]
Oct 09 15:50:58 localhost kernel: pci 0000:00:03.0: ROM [mem 0xfea80000-0xfeafffff pref]
Oct 09 15:50:58 localhost kernel: pci 0000:00:04.0: [1af4:1001] type 00 class 0x010000 conventional PCI endpoint
Oct 09 15:50:58 localhost kernel: pci 0000:00:04.0: BAR 0 [io  0xc000-0xc07f]
Oct 09 15:50:58 localhost kernel: pci 0000:00:04.0: BAR 1 [mem 0xfeb92000-0xfeb92fff]
Oct 09 15:50:58 localhost kernel: pci 0000:00:04.0: BAR 4 [mem 0xfe808000-0xfe80bfff 64bit pref]
Oct 09 15:50:58 localhost kernel: pci 0000:00:05.0: [1af4:1002] type 00 class 0x00ff00 conventional PCI endpoint
Oct 09 15:50:58 localhost kernel: pci 0000:00:05.0: BAR 0 [io  0xc0c0-0xc0ff]
Oct 09 15:50:58 localhost kernel: pci 0000:00:05.0: BAR 4 [mem 0xfe80c000-0xfe80ffff 64bit pref]
Oct 09 15:50:58 localhost kernel: pci 0000:00:06.0: [1af4:1005] type 00 class 0x00ff00 conventional PCI endpoint
Oct 09 15:50:58 localhost kernel: pci 0000:00:06.0: BAR 0 [io  0xc160-0xc17f]
Oct 09 15:50:58 localhost kernel: pci 0000:00:06.0: BAR 4 [mem 0xfe810000-0xfe813fff 64bit pref]
Oct 09 15:50:58 localhost kernel: pci 0000:00:07.0: [1af4:1000] type 00 class 0x020000 conventional PCI endpoint
Oct 09 15:50:58 localhost kernel: pci 0000:00:07.0: BAR 0 [io  0xc100-0xc13f]
Oct 09 15:50:58 localhost kernel: pci 0000:00:07.0: BAR 1 [mem 0xfeb93000-0xfeb93fff]
Oct 09 15:50:58 localhost kernel: pci 0000:00:07.0: BAR 4 [mem 0xfe814000-0xfe817fff 64bit pref]
Oct 09 15:50:58 localhost kernel: pci 0000:00:07.0: ROM [mem 0xfeb00000-0xfeb7ffff pref]
Oct 09 15:50:58 localhost kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10
Oct 09 15:50:58 localhost kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10
Oct 09 15:50:58 localhost kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11
Oct 09 15:50:58 localhost kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11
Oct 09 15:50:58 localhost kernel: ACPI: PCI: Interrupt link LNKS configured for IRQ 9
Oct 09 15:50:58 localhost kernel: iommu: Default domain type: Translated
Oct 09 15:50:58 localhost kernel: iommu: DMA domain TLB invalidation policy: lazy mode
Oct 09 15:50:58 localhost kernel: SCSI subsystem initialized
Oct 09 15:50:58 localhost kernel: ACPI: bus type USB registered
Oct 09 15:50:58 localhost kernel: usbcore: registered new interface driver usbfs
Oct 09 15:50:58 localhost kernel: usbcore: registered new interface driver hub
Oct 09 15:50:58 localhost kernel: usbcore: registered new device driver usb
Oct 09 15:50:58 localhost kernel: pps_core: LinuxPPS API ver. 1 registered
Oct 09 15:50:58 localhost kernel: pps_core: Software ver. 5.3.6 - Copyright 2005-2007 Rodolfo Giometti <giometti@linux.it>
Oct 09 15:50:58 localhost kernel: PTP clock support registered
Oct 09 15:50:58 localhost kernel: EDAC MC: Ver: 3.0.0
Oct 09 15:50:58 localhost kernel: NetLabel: Initializing
Oct 09 15:50:58 localhost kernel: NetLabel:  domain hash size = 128
Oct 09 15:50:58 localhost kernel: NetLabel:  protocols = UNLABELED CIPSOv4 CALIPSO
Oct 09 15:50:58 localhost kernel: NetLabel:  unlabeled traffic allowed by default
Oct 09 15:50:58 localhost kernel: PCI: Using ACPI for IRQ routing
Oct 09 15:50:58 localhost kernel: PCI: pci_cache_line_size set to 64 bytes
Oct 09 15:50:58 localhost kernel: e820: reserve RAM buffer [mem 0x0009fc00-0x0009ffff]
Oct 09 15:50:58 localhost kernel: e820: reserve RAM buffer [mem 0xbffdb000-0xbfffffff]
Oct 09 15:50:58 localhost kernel: pci 0000:00:02.0: vgaarb: setting as boot VGA device
Oct 09 15:50:58 localhost kernel: pci 0000:00:02.0: vgaarb: bridge control possible
Oct 09 15:50:58 localhost kernel: pci 0000:00:02.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none
Oct 09 15:50:58 localhost kernel: vgaarb: loaded
Oct 09 15:50:58 localhost kernel: clocksource: Switched to clocksource kvm-clock
Oct 09 15:50:58 localhost kernel: VFS: Disk quotas dquot_6.6.0
Oct 09 15:50:58 localhost kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Oct 09 15:50:58 localhost kernel: pnp: PnP ACPI init
Oct 09 15:50:58 localhost kernel: pnp 00:03: [dma 2]
Oct 09 15:50:58 localhost kernel: pnp: PnP ACPI: found 5 devices
Oct 09 15:50:58 localhost kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns
Oct 09 15:50:58 localhost kernel: NET: Registered PF_INET protocol family
Oct 09 15:50:58 localhost kernel: IP idents hash table entries: 131072 (order: 8, 1048576 bytes, linear)
Oct 09 15:50:58 localhost kernel: tcp_listen_portaddr_hash hash table entries: 4096 (order: 4, 65536 bytes, linear)
Oct 09 15:50:58 localhost kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Oct 09 15:50:58 localhost kernel: TCP established hash table entries: 65536 (order: 7, 524288 bytes, linear)
Oct 09 15:50:58 localhost kernel: TCP bind hash table entries: 65536 (order: 8, 1048576 bytes, linear)
Oct 09 15:50:58 localhost kernel: TCP: Hash tables configured (established 65536 bind 65536)
Oct 09 15:50:58 localhost kernel: MPTCP token hash table entries: 8192 (order: 5, 196608 bytes, linear)
Oct 09 15:50:58 localhost kernel: UDP hash table entries: 4096 (order: 5, 131072 bytes, linear)
Oct 09 15:50:58 localhost kernel: UDP-Lite hash table entries: 4096 (order: 5, 131072 bytes, linear)
Oct 09 15:50:58 localhost kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Oct 09 15:50:58 localhost kernel: NET: Registered PF_XDP protocol family
Oct 09 15:50:58 localhost kernel: pci_bus 0000:00: resource 4 [io  0x0000-0x0cf7 window]
Oct 09 15:50:58 localhost kernel: pci_bus 0000:00: resource 5 [io  0x0d00-0xffff window]
Oct 09 15:50:58 localhost kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window]
Oct 09 15:50:58 localhost kernel: pci_bus 0000:00: resource 7 [mem 0xc0000000-0xfebfffff window]
Oct 09 15:50:58 localhost kernel: pci_bus 0000:00: resource 8 [mem 0x240000000-0x2bfffffff window]
Oct 09 15:50:58 localhost kernel: pci 0000:00:01.0: PIIX3: Enabling Passive Release
Oct 09 15:50:58 localhost kernel: pci 0000:00:00.0: Limiting direct PCI/PCI transfers
Oct 09 15:50:58 localhost kernel: ACPI: \_SB_.LNKD: Enabled at IRQ 11
Oct 09 15:50:58 localhost kernel: pci 0000:00:01.2: quirk_usb_early_handoff+0x0/0x140 took 75218 usecs
Oct 09 15:50:58 localhost kernel: PCI: CLS 0 bytes, default 64
Oct 09 15:50:58 localhost kernel: PCI-DMA: Using software bounce buffering for IO (SWIOTLB)
Oct 09 15:50:58 localhost kernel: software IO TLB: mapped [mem 0x00000000ab000000-0x00000000af000000] (64MB)
Oct 09 15:50:58 localhost kernel: Trying to unpack rootfs image as initramfs...
Oct 09 15:50:58 localhost kernel: ACPI: bus type thunderbolt registered
Oct 09 15:50:58 localhost kernel: Initialise system trusted keyrings
Oct 09 15:50:58 localhost kernel: Key type blacklist registered
Oct 09 15:50:58 localhost kernel: workingset: timestamp_bits=36 max_order=21 bucket_order=0
Oct 09 15:50:58 localhost kernel: zbud: loaded
Oct 09 15:50:58 localhost kernel: integrity: Platform Keyring initialized
Oct 09 15:50:58 localhost kernel: integrity: Machine keyring initialized
Oct 09 15:50:58 localhost kernel: Freeing initrd memory: 86104K
Oct 09 15:50:58 localhost kernel: NET: Registered PF_ALG protocol family
Oct 09 15:50:58 localhost kernel: xor: automatically using best checksumming function   avx       
Oct 09 15:50:58 localhost kernel: Key type asymmetric registered
Oct 09 15:50:58 localhost kernel: Asymmetric key parser 'x509' registered
Oct 09 15:50:58 localhost kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 246)
Oct 09 15:50:58 localhost kernel: io scheduler mq-deadline registered
Oct 09 15:50:58 localhost kernel: io scheduler kyber registered
Oct 09 15:50:58 localhost kernel: io scheduler bfq registered
Oct 09 15:50:58 localhost kernel: atomic64_test: passed for x86-64 platform with CX8 and with SSE
Oct 09 15:50:58 localhost kernel: shpchp: Standard Hot Plug PCI Controller Driver version: 0.4
Oct 09 15:50:58 localhost kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input0
Oct 09 15:50:58 localhost kernel: ACPI: button: Power Button [PWRF]
Oct 09 15:50:58 localhost kernel: ACPI: \_SB_.LNKB: Enabled at IRQ 10
Oct 09 15:50:58 localhost kernel: ACPI: \_SB_.LNKC: Enabled at IRQ 11
Oct 09 15:50:58 localhost kernel: ACPI: \_SB_.LNKA: Enabled at IRQ 10
Oct 09 15:50:58 localhost kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Oct 09 15:50:58 localhost kernel: 00:00: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A
Oct 09 15:50:58 localhost kernel: Non-volatile memory driver v1.3
Oct 09 15:50:58 localhost kernel: rdac: device handler registered
Oct 09 15:50:58 localhost kernel: hp_sw: device handler registered
Oct 09 15:50:58 localhost kernel: emc: device handler registered
Oct 09 15:50:58 localhost kernel: alua: device handler registered
Oct 09 15:50:58 localhost kernel: uhci_hcd 0000:00:01.2: UHCI Host Controller
Oct 09 15:50:58 localhost kernel: uhci_hcd 0000:00:01.2: new USB bus registered, assigned bus number 1
Oct 09 15:50:58 localhost kernel: uhci_hcd 0000:00:01.2: detected 2 ports
Oct 09 15:50:58 localhost kernel: uhci_hcd 0000:00:01.2: irq 11, io port 0x0000c140
Oct 09 15:50:58 localhost kernel: usb usb1: New USB device found, idVendor=1d6b, idProduct=0001, bcdDevice= 5.14
Oct 09 15:50:58 localhost kernel: usb usb1: New USB device strings: Mfr=3, Product=2, SerialNumber=1
Oct 09 15:50:58 localhost kernel: usb usb1: Product: UHCI Host Controller
Oct 09 15:50:58 localhost kernel: usb usb1: Manufacturer: Linux 5.14.0-620.el9.x86_64 uhci_hcd
Oct 09 15:50:58 localhost kernel: usb usb1: SerialNumber: 0000:00:01.2
Oct 09 15:50:58 localhost kernel: hub 1-0:1.0: USB hub found
Oct 09 15:50:58 localhost kernel: hub 1-0:1.0: 2 ports detected
Oct 09 15:50:58 localhost kernel: usbcore: registered new interface driver usbserial_generic
Oct 09 15:50:58 localhost kernel: usbserial: USB Serial support registered for generic
Oct 09 15:50:58 localhost kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12
Oct 09 15:50:58 localhost kernel: serio: i8042 KBD port at 0x60,0x64 irq 1
Oct 09 15:50:58 localhost kernel: serio: i8042 AUX port at 0x60,0x64 irq 12
Oct 09 15:50:58 localhost kernel: mousedev: PS/2 mouse device common for all mice
Oct 09 15:50:58 localhost kernel: rtc_cmos 00:04: RTC can wake from S4
Oct 09 15:50:58 localhost kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input1
Oct 09 15:50:58 localhost kernel: input: VirtualPS/2 VMware VMMouse as /devices/platform/i8042/serio1/input/input4
Oct 09 15:50:58 localhost kernel: rtc_cmos 00:04: registered as rtc0
Oct 09 15:50:58 localhost kernel: rtc_cmos 00:04: setting system clock to 2025-10-09T15:50:57 UTC (1760025057)
Oct 09 15:50:58 localhost kernel: input: VirtualPS/2 VMware VMMouse as /devices/platform/i8042/serio1/input/input3
Oct 09 15:50:58 localhost kernel: rtc_cmos 00:04: alarms up to one day, y3k, 242 bytes nvram
Oct 09 15:50:58 localhost kernel: amd_pstate: the _CPC object is not present in SBIOS or ACPI disabled
Oct 09 15:50:58 localhost kernel: hid: raw HID events driver (C) Jiri Kosina
Oct 09 15:50:58 localhost kernel: usbcore: registered new interface driver usbhid
Oct 09 15:50:58 localhost kernel: usbhid: USB HID core driver
Oct 09 15:50:58 localhost kernel: drop_monitor: Initializing network drop monitor service
Oct 09 15:50:58 localhost kernel: Initializing XFRM netlink socket
Oct 09 15:50:58 localhost kernel: NET: Registered PF_INET6 protocol family
Oct 09 15:50:58 localhost kernel: Segment Routing with IPv6
Oct 09 15:50:58 localhost kernel: NET: Registered PF_PACKET protocol family
Oct 09 15:50:58 localhost kernel: mpls_gso: MPLS GSO support
Oct 09 15:50:58 localhost kernel: IPI shorthand broadcast: enabled
Oct 09 15:50:58 localhost kernel: AVX2 version of gcm_enc/dec engaged.
Oct 09 15:50:58 localhost kernel: AES CTR mode by8 optimization enabled
Oct 09 15:50:58 localhost kernel: sched_clock: Marking stable (1187002657, 144498049)->(1446604974, -115104268)
Oct 09 15:50:58 localhost kernel: registered taskstats version 1
Oct 09 15:50:58 localhost kernel: Loading compiled-in X.509 certificates
Oct 09 15:50:58 localhost kernel: Loaded X.509 cert 'The CentOS Project: CentOS Stream kernel signing key: 4ff821c4997fbb659836adb05f5bc400c914e148'
Oct 09 15:50:58 localhost kernel: Loaded X.509 cert 'Red Hat Enterprise Linux Driver Update Program (key 3): bf57f3e87362bc7229d9f465321773dfd1f77a80'
Oct 09 15:50:58 localhost kernel: Loaded X.509 cert 'Red Hat Enterprise Linux kpatch signing key: 4d38fd864ebe18c5f0b72e3852e2014c3a676fc8'
Oct 09 15:50:58 localhost kernel: Loaded X.509 cert 'RH-IMA-CA: Red Hat IMA CA: fb31825dd0e073685b264e3038963673f753959a'
Oct 09 15:50:58 localhost kernel: Loaded X.509 cert 'Nvidia GPU OOT signing 001: 55e1cef88193e60419f0b0ec379c49f77545acf0'
Oct 09 15:50:58 localhost kernel: Demotion targets for Node 0: null
Oct 09 15:50:58 localhost kernel: page_owner is disabled
Oct 09 15:50:58 localhost kernel: Key type .fscrypt registered
Oct 09 15:50:58 localhost kernel: Key type fscrypt-provisioning registered
Oct 09 15:50:58 localhost kernel: Key type big_key registered
Oct 09 15:50:58 localhost kernel: Key type encrypted registered
Oct 09 15:50:58 localhost kernel: ima: No TPM chip found, activating TPM-bypass!
Oct 09 15:50:58 localhost kernel: Loading compiled-in module X.509 certificates
Oct 09 15:50:58 localhost kernel: Loaded X.509 cert 'The CentOS Project: CentOS Stream kernel signing key: 4ff821c4997fbb659836adb05f5bc400c914e148'
Oct 09 15:50:58 localhost kernel: ima: Allocated hash algorithm: sha256
Oct 09 15:50:58 localhost kernel: ima: No architecture policies found
Oct 09 15:50:58 localhost kernel: evm: Initialising EVM extended attributes:
Oct 09 15:50:58 localhost kernel: evm: security.selinux
Oct 09 15:50:58 localhost kernel: evm: security.SMACK64 (disabled)
Oct 09 15:50:58 localhost kernel: evm: security.SMACK64EXEC (disabled)
Oct 09 15:50:58 localhost kernel: evm: security.SMACK64TRANSMUTE (disabled)
Oct 09 15:50:58 localhost kernel: evm: security.SMACK64MMAP (disabled)
Oct 09 15:50:58 localhost kernel: evm: security.apparmor (disabled)
Oct 09 15:50:58 localhost kernel: evm: security.ima
Oct 09 15:50:58 localhost kernel: evm: security.capability
Oct 09 15:50:58 localhost kernel: evm: HMAC attrs: 0x1
Oct 09 15:50:58 localhost kernel: usb 1-1: new full-speed USB device number 2 using uhci_hcd
Oct 09 15:50:58 localhost kernel: Running certificate verification RSA selftest
Oct 09 15:50:58 localhost kernel: Loaded X.509 cert 'Certificate verification self-testing key: f58703bb33ce1b73ee02eccdee5b8817518fe3db'
Oct 09 15:50:58 localhost kernel: Running certificate verification ECDSA selftest
Oct 09 15:50:58 localhost kernel: Loaded X.509 cert 'Certificate verification ECDSA self-testing key: 2900bcea1deb7bc8479a84a23d758efdfdd2b2d3'
Oct 09 15:50:58 localhost kernel: clk: Disabling unused clocks
Oct 09 15:50:58 localhost kernel: Freeing unused decrypted memory: 2028K
Oct 09 15:50:58 localhost kernel: Freeing unused kernel image (initmem) memory: 4068K
Oct 09 15:50:58 localhost kernel: Write protecting the kernel read-only data: 30720k
Oct 09 15:50:58 localhost kernel: Freeing unused kernel image (rodata/data gap) memory: 340K
Oct 09 15:50:58 localhost kernel: x86/mm: Checked W+X mappings: passed, no W+X pages found.
Oct 09 15:50:58 localhost kernel: Run /init as init process
Oct 09 15:50:58 localhost kernel:   with arguments:
Oct 09 15:50:58 localhost kernel:     /init
Oct 09 15:50:58 localhost kernel:   with environment:
Oct 09 15:50:58 localhost kernel:     HOME=/
Oct 09 15:50:58 localhost kernel:     TERM=linux
Oct 09 15:50:58 localhost kernel:     BOOT_IMAGE=(hd0,msdos1)/boot/vmlinuz-5.14.0-620.el9.x86_64
Oct 09 15:50:58 localhost kernel: usb 1-1: New USB device found, idVendor=0627, idProduct=0001, bcdDevice= 0.00
Oct 09 15:50:58 localhost kernel: usb 1-1: New USB device strings: Mfr=1, Product=3, SerialNumber=10
Oct 09 15:50:58 localhost kernel: usb 1-1: Product: QEMU USB Tablet
Oct 09 15:50:58 localhost kernel: usb 1-1: Manufacturer: QEMU
Oct 09 15:50:58 localhost kernel: usb 1-1: SerialNumber: 28754-0000:00:01.2-1
Oct 09 15:50:58 localhost kernel: input: QEMU QEMU USB Tablet as /devices/pci0000:00/0000:00:01.2/usb1/1-1/1-1:1.0/0003:0627:0001.0001/input/input5
Oct 09 15:50:58 localhost kernel: hid-generic 0003:0627:0001.0001: input,hidraw0: USB HID v0.01 Mouse [QEMU QEMU USB Tablet] on usb-0000:00:01.2-1/input0
Oct 09 15:50:58 localhost systemd[1]: systemd 252-55.el9 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT +GNUTLS +OPENSSL +ACL +BLKID +CURL +ELFUTILS +FIDO2 +IDN2 -IDN -IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY +P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK +XKBCOMMON +UTMP +SYSVINIT default-hierarchy=unified)
Oct 09 15:50:58 localhost systemd[1]: Detected virtualization kvm.
Oct 09 15:50:58 localhost systemd[1]: Detected architecture x86-64.
Oct 09 15:50:58 localhost systemd[1]: Running in initrd.
Oct 09 15:50:58 localhost systemd[1]: No hostname configured, using default hostname.
Oct 09 15:50:58 localhost systemd[1]: Hostname set to <localhost>.
Oct 09 15:50:58 localhost systemd[1]: Initializing machine ID from VM UUID.
Oct 09 15:50:58 localhost systemd[1]: Queued start job for default target Initrd Default Target.
Oct 09 15:50:58 localhost systemd[1]: Started Dispatch Password Requests to Console Directory Watch.
Oct 09 15:50:58 localhost systemd[1]: Reached target Local Encrypted Volumes.
Oct 09 15:50:58 localhost systemd[1]: Reached target Initrd /usr File System.
Oct 09 15:50:58 localhost systemd[1]: Reached target Local File Systems.
Oct 09 15:50:58 localhost systemd[1]: Reached target Path Units.
Oct 09 15:50:58 localhost systemd[1]: Reached target Slice Units.
Oct 09 15:50:58 localhost systemd[1]: Reached target Swaps.
Oct 09 15:50:58 localhost systemd[1]: Reached target Timer Units.
Oct 09 15:50:58 localhost systemd[1]: Listening on D-Bus System Message Bus Socket.
Oct 09 15:50:58 localhost systemd[1]: Listening on Journal Socket (/dev/log).
Oct 09 15:50:58 localhost systemd[1]: Listening on Journal Socket.
Oct 09 15:50:58 localhost systemd[1]: Listening on udev Control Socket.
Oct 09 15:50:58 localhost systemd[1]: Listening on udev Kernel Socket.
Oct 09 15:50:58 localhost systemd[1]: Reached target Socket Units.
Oct 09 15:50:58 localhost systemd[1]: Starting Create List of Static Device Nodes...
Oct 09 15:50:58 localhost systemd[1]: Starting Journal Service...
Oct 09 15:50:58 localhost systemd[1]: Load Kernel Modules was skipped because no trigger condition checks were met.
Oct 09 15:50:58 localhost systemd[1]: Starting Apply Kernel Variables...
Oct 09 15:50:58 localhost systemd[1]: Starting Create System Users...
Oct 09 15:50:58 localhost systemd[1]: Starting Setup Virtual Console...
Oct 09 15:50:58 localhost systemd[1]: Finished Create List of Static Device Nodes.
Oct 09 15:50:58 localhost systemd[1]: Finished Apply Kernel Variables.
Oct 09 15:50:58 localhost systemd-journald[309]: Journal started
Oct 09 15:50:58 localhost systemd-journald[309]: Runtime Journal (/run/log/journal/656dbd2758bc413da3c8085dadc82fd6) is 8.0M, max 153.5M, 145.5M free.
Oct 09 15:50:58 localhost systemd-sysusers[313]: Creating group 'users' with GID 100.
Oct 09 15:50:58 localhost systemd-sysusers[313]: Creating group 'dbus' with GID 81.
Oct 09 15:50:58 localhost systemd-sysusers[313]: Creating user 'dbus' (System Message Bus) with UID 81 and GID 81.
Oct 09 15:50:58 localhost systemd[1]: Started Journal Service.
Oct 09 15:50:58 localhost systemd[1]: Finished Create System Users.
Oct 09 15:50:58 localhost systemd[1]: Starting Create Static Device Nodes in /dev...
Oct 09 15:50:58 localhost systemd[1]: Starting Create Volatile Files and Directories...
Oct 09 15:50:58 localhost systemd[1]: Finished Create Static Device Nodes in /dev.
Oct 09 15:50:58 localhost systemd[1]: Finished Create Volatile Files and Directories.
Oct 09 15:50:58 localhost systemd[1]: Finished Setup Virtual Console.
Oct 09 15:50:58 localhost systemd[1]: dracut ask for additional cmdline parameters was skipped because no trigger condition checks were met.
Oct 09 15:50:58 localhost systemd[1]: Starting dracut cmdline hook...
Oct 09 15:50:58 localhost dracut-cmdline[330]: dracut-9 dracut-057-102.git20250818.el9
Oct 09 15:50:58 localhost dracut-cmdline[330]: Using kernel command line parameters:    BOOT_IMAGE=(hd0,msdos1)/boot/vmlinuz-5.14.0-620.el9.x86_64 root=UUID=1631a6ad-43b8-436d-ae76-16fa14b94458 ro console=ttyS0,115200n8 no_timer_check net.ifnames=0 crashkernel=1G-2G:192M,2G-64G:256M,64G-:512M
Oct 09 15:50:58 localhost systemd[1]: Finished dracut cmdline hook.
Oct 09 15:50:58 localhost systemd[1]: Starting dracut pre-udev hook...
Oct 09 15:50:58 localhost kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
Oct 09 15:50:58 localhost kernel: device-mapper: uevent: version 1.0.3
Oct 09 15:50:58 localhost kernel: device-mapper: ioctl: 4.50.0-ioctl (2025-04-28) initialised: dm-devel@lists.linux.dev
Oct 09 15:50:58 localhost kernel: RPC: Registered named UNIX socket transport module.
Oct 09 15:50:58 localhost kernel: RPC: Registered udp transport module.
Oct 09 15:50:58 localhost kernel: RPC: Registered tcp transport module.
Oct 09 15:50:58 localhost kernel: RPC: Registered tcp-with-tls transport module.
Oct 09 15:50:58 localhost kernel: RPC: Registered tcp NFSv4.1 backchannel transport module.
Oct 09 15:50:58 localhost rpc.statd[446]: Version 2.5.4 starting
Oct 09 15:50:58 localhost rpc.statd[446]: Initializing NSM state
Oct 09 15:50:58 localhost rpc.idmapd[451]: Setting log level to 0
Oct 09 15:50:58 localhost systemd[1]: Finished dracut pre-udev hook.
Oct 09 15:50:58 localhost systemd[1]: Starting Rule-based Manager for Device Events and Files...
Oct 09 15:50:58 localhost systemd-udevd[464]: Using default interface naming scheme 'rhel-9.0'.
Oct 09 15:50:58 localhost systemd[1]: Started Rule-based Manager for Device Events and Files.
Oct 09 15:50:58 localhost systemd[1]: Starting dracut pre-trigger hook...
Oct 09 15:50:58 localhost systemd[1]: Finished dracut pre-trigger hook.
Oct 09 15:50:58 localhost systemd[1]: Starting Coldplug All udev Devices...
Oct 09 15:50:58 localhost systemd[1]: Created slice Slice /system/modprobe.
Oct 09 15:50:58 localhost systemd[1]: Starting Load Kernel Module configfs...
Oct 09 15:50:58 localhost systemd[1]: Finished Coldplug All udev Devices.
Oct 09 15:50:58 localhost systemd[1]: modprobe@configfs.service: Deactivated successfully.
Oct 09 15:50:58 localhost systemd[1]: Finished Load Kernel Module configfs.
Oct 09 15:50:58 localhost systemd[1]: nm-initrd.service was skipped because of an unmet condition check (ConditionPathExists=/run/NetworkManager/initrd/neednet).
Oct 09 15:50:58 localhost systemd[1]: Reached target Network.
Oct 09 15:50:58 localhost systemd[1]: nm-wait-online-initrd.service was skipped because of an unmet condition check (ConditionPathExists=/run/NetworkManager/initrd/neednet).
Oct 09 15:50:58 localhost systemd[1]: Starting dracut initqueue hook...
Oct 09 15:50:58 localhost kernel: virtio_blk virtio2: 8/0/0 default/read/poll queues
Oct 09 15:50:58 localhost kernel: virtio_blk virtio2: [vda] 167772160 512-byte logical blocks (85.9 GB/80.0 GiB)
Oct 09 15:50:59 localhost kernel: libata version 3.00 loaded.
Oct 09 15:50:59 localhost kernel:  vda: vda1
Oct 09 15:50:59 localhost kernel: ata_piix 0000:00:01.1: version 2.13
Oct 09 15:50:59 localhost kernel: scsi host0: ata_piix
Oct 09 15:50:59 localhost kernel: scsi host1: ata_piix
Oct 09 15:50:59 localhost kernel: ata1: PATA max MWDMA2 cmd 0x1f0 ctl 0x3f6 bmdma 0xc180 irq 14 lpm-pol 0
Oct 09 15:50:59 localhost kernel: ata2: PATA max MWDMA2 cmd 0x170 ctl 0x376 bmdma 0xc188 irq 15 lpm-pol 0
Oct 09 15:50:59 localhost systemd[1]: Found device /dev/disk/by-uuid/1631a6ad-43b8-436d-ae76-16fa14b94458.
Oct 09 15:50:59 localhost systemd[1]: Reached target Initrd Root Device.
Oct 09 15:50:59 localhost systemd[1]: Mounting Kernel Configuration File System...
Oct 09 15:50:59 localhost kernel: ata1: found unknown device (class 0)
Oct 09 15:50:59 localhost kernel: ata1.00: ATAPI: QEMU DVD-ROM, 2.5+, max UDMA/100
Oct 09 15:50:59 localhost kernel: scsi 0:0:0:0: CD-ROM            QEMU     QEMU DVD-ROM     2.5+ PQ: 0 ANSI: 5
Oct 09 15:50:59 localhost systemd[1]: Mounted Kernel Configuration File System.
Oct 09 15:50:59 localhost systemd[1]: Reached target System Initialization.
Oct 09 15:50:59 localhost systemd[1]: Reached target Basic System.
Oct 09 15:50:59 localhost systemd-udevd[476]: Network interface NamePolicy= disabled on kernel command line.
Oct 09 15:50:59 localhost systemd-udevd[490]: Network interface NamePolicy= disabled on kernel command line.
Oct 09 15:50:59 localhost kernel: scsi 0:0:0:0: Attached scsi generic sg0 type 5
Oct 09 15:50:59 localhost kernel: sr 0:0:0:0: [sr0] scsi3-mmc drive: 4x/4x cd/rw xa/form2 tray
Oct 09 15:50:59 localhost kernel: cdrom: Uniform CD-ROM driver Revision: 3.20
Oct 09 15:50:59 localhost kernel: sr 0:0:0:0: Attached scsi CD-ROM sr0
Oct 09 15:50:59 localhost systemd[1]: Finished dracut initqueue hook.
Oct 09 15:50:59 localhost systemd[1]: Reached target Preparation for Remote File Systems.
Oct 09 15:50:59 localhost systemd[1]: Reached target Remote Encrypted Volumes.
Oct 09 15:50:59 localhost systemd[1]: Reached target Remote File Systems.
Oct 09 15:50:59 localhost systemd[1]: Starting dracut pre-mount hook...
Oct 09 15:50:59 localhost systemd[1]: Finished dracut pre-mount hook.
Oct 09 15:50:59 localhost systemd[1]: Starting File System Check on /dev/disk/by-uuid/1631a6ad-43b8-436d-ae76-16fa14b94458...
Oct 09 15:50:59 localhost systemd-fsck[559]: /usr/sbin/fsck.xfs: XFS file system.
Oct 09 15:50:59 localhost systemd[1]: Finished File System Check on /dev/disk/by-uuid/1631a6ad-43b8-436d-ae76-16fa14b94458.
Oct 09 15:50:59 localhost systemd[1]: Mounting /sysroot...
Oct 09 15:50:59 localhost kernel: SGI XFS with ACLs, security attributes, scrub, quota, no debug enabled
Oct 09 15:50:59 localhost kernel: XFS (vda1): Mounting V5 Filesystem 1631a6ad-43b8-436d-ae76-16fa14b94458
Oct 09 15:50:59 localhost kernel: XFS (vda1): Ending clean mount
Oct 09 15:50:59 localhost systemd[1]: Mounted /sysroot.
Oct 09 15:50:59 localhost systemd[1]: Reached target Initrd Root File System.
Oct 09 15:50:59 localhost systemd[1]: Starting Mountpoints Configured in the Real Root...
Oct 09 15:50:59 localhost systemd[1]: initrd-parse-etc.service: Deactivated successfully.
Oct 09 15:50:59 localhost systemd[1]: Finished Mountpoints Configured in the Real Root.
Oct 09 15:50:59 localhost systemd[1]: Reached target Initrd File Systems.
Oct 09 15:50:59 localhost systemd[1]: Reached target Initrd Default Target.
Oct 09 15:50:59 localhost systemd[1]: Starting dracut mount hook...
Oct 09 15:50:59 localhost systemd[1]: Finished dracut mount hook.
Oct 09 15:51:00 localhost systemd[1]: Starting dracut pre-pivot and cleanup hook...
Oct 09 15:51:00 localhost rpc.idmapd[451]: exiting on signal 15
Oct 09 15:51:00 localhost systemd[1]: var-lib-nfs-rpc_pipefs.mount: Deactivated successfully.
Oct 09 15:51:00 localhost systemd[1]: Finished dracut pre-pivot and cleanup hook.
Oct 09 15:51:00 localhost systemd[1]: Starting Cleaning Up and Shutting Down Daemons...
Oct 09 15:51:00 localhost systemd[1]: Stopped target Network.
Oct 09 15:51:00 localhost systemd[1]: Stopped target Remote Encrypted Volumes.
Oct 09 15:51:00 localhost systemd[1]: Stopped target Timer Units.
Oct 09 15:51:00 localhost systemd[1]: dbus.socket: Deactivated successfully.
Oct 09 15:51:00 localhost systemd[1]: Closed D-Bus System Message Bus Socket.
Oct 09 15:51:00 localhost systemd[1]: dracut-pre-pivot.service: Deactivated successfully.
Oct 09 15:51:00 localhost systemd[1]: Stopped dracut pre-pivot and cleanup hook.
Oct 09 15:51:00 localhost systemd[1]: Stopped target Initrd Default Target.
Oct 09 15:51:00 localhost systemd[1]: Stopped target Basic System.
Oct 09 15:51:00 localhost systemd[1]: Stopped target Initrd Root Device.
Oct 09 15:51:00 localhost systemd[1]: Stopped target Initrd /usr File System.
Oct 09 15:51:00 localhost systemd[1]: Stopped target Path Units.
Oct 09 15:51:00 localhost systemd[1]: Stopped target Remote File Systems.
Oct 09 15:51:00 localhost systemd[1]: Stopped target Preparation for Remote File Systems.
Oct 09 15:51:00 localhost systemd[1]: Stopped target Slice Units.
Oct 09 15:51:00 localhost systemd[1]: Stopped target Socket Units.
Oct 09 15:51:00 localhost systemd[1]: Stopped target System Initialization.
Oct 09 15:51:00 localhost systemd[1]: Stopped target Local File Systems.
Oct 09 15:51:00 localhost systemd[1]: Stopped target Swaps.
Oct 09 15:51:00 localhost systemd[1]: dracut-mount.service: Deactivated successfully.
Oct 09 15:51:00 localhost systemd[1]: Stopped dracut mount hook.
Oct 09 15:51:00 localhost systemd[1]: dracut-pre-mount.service: Deactivated successfully.
Oct 09 15:51:00 localhost systemd[1]: Stopped dracut pre-mount hook.
Oct 09 15:51:00 localhost systemd[1]: Stopped target Local Encrypted Volumes.
Oct 09 15:51:00 localhost systemd[1]: systemd-ask-password-console.path: Deactivated successfully.
Oct 09 15:51:00 localhost systemd[1]: Stopped Dispatch Password Requests to Console Directory Watch.
Oct 09 15:51:00 localhost systemd[1]: dracut-initqueue.service: Deactivated successfully.
Oct 09 15:51:00 localhost systemd[1]: Stopped dracut initqueue hook.
Oct 09 15:51:00 localhost systemd[1]: systemd-sysctl.service: Deactivated successfully.
Oct 09 15:51:00 localhost systemd[1]: Stopped Apply Kernel Variables.
Oct 09 15:51:00 localhost systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully.
Oct 09 15:51:00 localhost systemd[1]: Stopped Create Volatile Files and Directories.
Oct 09 15:51:00 localhost systemd[1]: systemd-udev-trigger.service: Deactivated successfully.
Oct 09 15:51:00 localhost systemd[1]: Stopped Coldplug All udev Devices.
Oct 09 15:51:00 localhost systemd[1]: dracut-pre-trigger.service: Deactivated successfully.
Oct 09 15:51:00 localhost systemd[1]: Stopped dracut pre-trigger hook.
Oct 09 15:51:00 localhost systemd[1]: Stopping Rule-based Manager for Device Events and Files...
Oct 09 15:51:00 localhost systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Oct 09 15:51:00 localhost systemd[1]: Stopped Setup Virtual Console.
Oct 09 15:51:00 localhost systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup.service.mount: Deactivated successfully.
Oct 09 15:51:00 localhost systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully.
Oct 09 15:51:00 localhost systemd[1]: initrd-cleanup.service: Deactivated successfully.
Oct 09 15:51:00 localhost systemd[1]: Finished Cleaning Up and Shutting Down Daemons.
Oct 09 15:51:00 localhost systemd[1]: systemd-udevd.service: Deactivated successfully.
Oct 09 15:51:00 localhost systemd[1]: Stopped Rule-based Manager for Device Events and Files.
Oct 09 15:51:00 localhost systemd[1]: systemd-udevd-control.socket: Deactivated successfully.
Oct 09 15:51:00 localhost systemd[1]: Closed udev Control Socket.
Oct 09 15:51:00 localhost systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully.
Oct 09 15:51:00 localhost systemd[1]: Closed udev Kernel Socket.
Oct 09 15:51:00 localhost systemd[1]: dracut-pre-udev.service: Deactivated successfully.
Oct 09 15:51:00 localhost systemd[1]: Stopped dracut pre-udev hook.
Oct 09 15:51:00 localhost systemd[1]: dracut-cmdline.service: Deactivated successfully.
Oct 09 15:51:00 localhost systemd[1]: Stopped dracut cmdline hook.
Oct 09 15:51:00 localhost systemd[1]: Starting Cleanup udev Database...
Oct 09 15:51:00 localhost systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully.
Oct 09 15:51:00 localhost systemd[1]: Stopped Create Static Device Nodes in /dev.
Oct 09 15:51:00 localhost systemd[1]: kmod-static-nodes.service: Deactivated successfully.
Oct 09 15:51:00 localhost systemd[1]: Stopped Create List of Static Device Nodes.
Oct 09 15:51:00 localhost systemd[1]: systemd-sysusers.service: Deactivated successfully.
Oct 09 15:51:00 localhost systemd[1]: Stopped Create System Users.
Oct 09 15:51:00 localhost systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup\x2ddev.service.mount: Deactivated successfully.
Oct 09 15:51:00 localhost systemd[1]: run-credentials-systemd\x2dsysusers.service.mount: Deactivated successfully.
Oct 09 15:51:00 localhost systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully.
Oct 09 15:51:00 localhost systemd[1]: Finished Cleanup udev Database.
Oct 09 15:51:00 localhost systemd[1]: Reached target Switch Root.
Oct 09 15:51:00 localhost systemd[1]: Starting Switch Root...
Oct 09 15:51:00 localhost systemd[1]: Switching root.
Oct 09 15:51:00 localhost systemd-journald[309]: Journal stopped
Oct 09 15:51:01 compute-0 systemd-journald[309]: Received SIGTERM from PID 1 (systemd).
Oct 09 15:51:01 compute-0 kernel: audit: type=1404 audit(1760025060.392:2): enforcing=1 old_enforcing=0 auid=4294967295 ses=4294967295 enabled=1 old-enabled=1 lsm=selinux res=1
Oct 09 15:51:01 compute-0 kernel: SELinux:  policy capability network_peer_controls=1
Oct 09 15:51:01 compute-0 kernel: SELinux:  policy capability open_perms=1
Oct 09 15:51:01 compute-0 kernel: SELinux:  policy capability extended_socket_class=1
Oct 09 15:51:01 compute-0 kernel: SELinux:  policy capability always_check_network=0
Oct 09 15:51:01 compute-0 kernel: SELinux:  policy capability cgroup_seclabel=1
Oct 09 15:51:01 compute-0 kernel: SELinux:  policy capability nnp_nosuid_transition=1
Oct 09 15:51:01 compute-0 kernel: SELinux:  policy capability genfs_seclabel_symlinks=1
Oct 09 15:51:01 compute-0 kernel: audit: type=1403 audit(1760025060.554:3): auid=4294967295 ses=4294967295 lsm=selinux res=1
Oct 09 15:51:01 compute-0 systemd[1]: Successfully loaded SELinux policy in 166.863ms.
Oct 09 15:51:01 compute-0 systemd[1]: Relabelled /dev, /dev/shm, /run, /sys/fs/cgroup in 38.394ms.
Oct 09 15:51:01 compute-0 systemd[1]: systemd 252-57.el9 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT +GNUTLS +OPENSSL +ACL +BLKID +CURL +ELFUTILS +FIDO2 +IDN2 -IDN -IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY +P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK +XKBCOMMON +UTMP +SYSVINIT default-hierarchy=unified)
Oct 09 15:51:01 compute-0 systemd[1]: Detected virtualization kvm.
Oct 09 15:51:01 compute-0 systemd[1]: Detected architecture x86-64.
Oct 09 15:51:01 compute-0 systemd[1]: Hostname set to <compute-0>.
Oct 09 15:51:01 compute-0 systemd-rc-local-generator[638]: /etc/rc.d/rc.local is not marked executable, skipping.
Oct 09 15:51:01 compute-0 systemd-sysv-generator[641]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct 09 15:51:01 compute-0 systemd[1]: initrd-switch-root.service: Deactivated successfully.
Oct 09 15:51:01 compute-0 systemd[1]: Stopped Switch Root.
Oct 09 15:51:01 compute-0 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1.
Oct 09 15:51:01 compute-0 systemd[1]: Created slice Slice /system/getty.
Oct 09 15:51:01 compute-0 systemd[1]: Created slice Slice /system/serial-getty.
Oct 09 15:51:01 compute-0 systemd[1]: Created slice Slice /system/sshd-keygen.
Oct 09 15:51:01 compute-0 systemd[1]: Created slice User and Session Slice.
Oct 09 15:51:01 compute-0 systemd[1]: Started Dispatch Password Requests to Console Directory Watch.
Oct 09 15:51:01 compute-0 systemd[1]: Started Forward Password Requests to Wall Directory Watch.
Oct 09 15:51:01 compute-0 systemd[1]: Set up automount Arbitrary Executable File Formats File System Automount Point.
Oct 09 15:51:01 compute-0 systemd[1]: Reached target Local Encrypted Volumes.
Oct 09 15:51:01 compute-0 systemd[1]: Stopped target Switch Root.
Oct 09 15:51:01 compute-0 systemd[1]: Stopped target Initrd File Systems.
Oct 09 15:51:01 compute-0 systemd[1]: Stopped target Initrd Root File System.
Oct 09 15:51:01 compute-0 systemd[1]: Reached target Local Integrity Protected Volumes.
Oct 09 15:51:01 compute-0 systemd[1]: Reached target Path Units.
Oct 09 15:51:01 compute-0 systemd[1]: Reached target rpc_pipefs.target.
Oct 09 15:51:01 compute-0 systemd[1]: Reached target Slice Units.
Oct 09 15:51:01 compute-0 systemd[1]: Reached target Local Verity Protected Volumes.
Oct 09 15:51:01 compute-0 systemd[1]: Listening on Device-mapper event daemon FIFOs.
Oct 09 15:51:01 compute-0 systemd[1]: Listening on LVM2 poll daemon socket.
Oct 09 15:51:01 compute-0 systemd[1]: Listening on RPCbind Server Activation Socket.
Oct 09 15:51:01 compute-0 systemd[1]: Reached target RPC Port Mapper.
Oct 09 15:51:01 compute-0 systemd[1]: Listening on Process Core Dump Socket.
Oct 09 15:51:01 compute-0 systemd[1]: Listening on initctl Compatibility Named Pipe.
Oct 09 15:51:01 compute-0 systemd[1]: Listening on udev Control Socket.
Oct 09 15:51:01 compute-0 systemd[1]: Listening on udev Kernel Socket.
Oct 09 15:51:01 compute-0 systemd[1]: Mounting Huge Pages File System...
Oct 09 15:51:01 compute-0 systemd[1]: Mounting /dev/hugepages1G...
Oct 09 15:51:01 compute-0 systemd[1]: Mounting /dev/hugepages2M...
Oct 09 15:51:01 compute-0 systemd[1]: Mounting POSIX Message Queue File System...
Oct 09 15:51:01 compute-0 systemd[1]: Mounting Kernel Debug File System...
Oct 09 15:51:01 compute-0 systemd[1]: Mounting Kernel Trace File System...
Oct 09 15:51:01 compute-0 systemd[1]: Kernel Module supporting RPCSEC_GSS was skipped because of an unmet condition check (ConditionPathExists=/etc/krb5.keytab).
Oct 09 15:51:01 compute-0 systemd[1]: Starting Create List of Static Device Nodes...
Oct 09 15:51:01 compute-0 systemd[1]: Load legacy module configuration was skipped because no trigger condition checks were met.
Oct 09 15:51:01 compute-0 systemd[1]: Starting Monitoring of LVM2 mirrors, snapshots etc. using dmeventd or progress polling...
Oct 09 15:51:01 compute-0 systemd[1]: Starting Load Kernel Module configfs...
Oct 09 15:51:01 compute-0 systemd[1]: Starting Load Kernel Module drm...
Oct 09 15:51:01 compute-0 systemd[1]: Starting Load Kernel Module efi_pstore...
Oct 09 15:51:01 compute-0 systemd[1]: Starting Load Kernel Module fuse...
Oct 09 15:51:01 compute-0 systemd[1]: Starting Read and set NIS domainname from /etc/sysconfig/network...
Oct 09 15:51:01 compute-0 systemd[1]: systemd-fsck-root.service: Deactivated successfully.
Oct 09 15:51:01 compute-0 systemd[1]: Stopped File System Check on Root Device.
Oct 09 15:51:01 compute-0 systemd[1]: Stopped Journal Service.
Oct 09 15:51:01 compute-0 systemd[1]: Starting Journal Service...
Oct 09 15:51:01 compute-0 kernel: fuse: init (API version 7.37)
Oct 09 15:51:01 compute-0 systemd[1]: Starting Load Kernel Modules...
Oct 09 15:51:01 compute-0 systemd[1]: Starting Generate network units from Kernel command line...
Oct 09 15:51:01 compute-0 systemd[1]: TPM2 PCR Machine ID Measurement was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/StubPcrKernelImage-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f).
Oct 09 15:51:01 compute-0 systemd[1]: Starting Remount Root and Kernel File Systems...
Oct 09 15:51:01 compute-0 systemd[1]: Repartition Root Disk was skipped because no trigger condition checks were met.
Oct 09 15:51:01 compute-0 systemd[1]: Starting Coldplug All udev Devices...
Oct 09 15:51:01 compute-0 systemd[1]: Mounted Huge Pages File System.
Oct 09 15:51:01 compute-0 systemd-journald[687]: Journal started
Oct 09 15:51:01 compute-0 systemd-journald[687]: Runtime Journal (/run/log/journal/42833e1b511a402df82cb9cb2fc36491) is 8.0M, max 153.5M, 145.5M free.
Oct 09 15:51:00 compute-0 systemd[1]: Queued start job for default target Multi-User System.
Oct 09 15:51:00 compute-0 systemd[1]: systemd-journald.service: Deactivated successfully.
Oct 09 15:51:01 compute-0 systemd[1]: Started Journal Service.
Oct 09 15:51:01 compute-0 systemd[1]: Mounted /dev/hugepages1G.
Oct 09 15:51:01 compute-0 systemd[1]: Mounted /dev/hugepages2M.
Oct 09 15:51:01 compute-0 systemd[1]: Mounted POSIX Message Queue File System.
Oct 09 15:51:01 compute-0 systemd[1]: Mounted Kernel Debug File System.
Oct 09 15:51:01 compute-0 systemd[1]: Mounted Kernel Trace File System.
Oct 09 15:51:01 compute-0 systemd[1]: Finished Create List of Static Device Nodes.
Oct 09 15:51:01 compute-0 kernel: ACPI: bus type drm_connector registered
Oct 09 15:51:01 compute-0 systemd[1]: modprobe@configfs.service: Deactivated successfully.
Oct 09 15:51:01 compute-0 systemd[1]: Finished Load Kernel Module configfs.
Oct 09 15:51:01 compute-0 systemd[1]: modprobe@drm.service: Deactivated successfully.
Oct 09 15:51:01 compute-0 systemd[1]: Finished Load Kernel Module drm.
Oct 09 15:51:01 compute-0 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Oct 09 15:51:01 compute-0 systemd[1]: Finished Load Kernel Module efi_pstore.
Oct 09 15:51:01 compute-0 systemd[1]: modprobe@fuse.service: Deactivated successfully.
Oct 09 15:51:01 compute-0 systemd[1]: Finished Load Kernel Module fuse.
Oct 09 15:51:01 compute-0 systemd[1]: Finished Read and set NIS domainname from /etc/sysconfig/network.
Oct 09 15:51:01 compute-0 systemd[1]: Finished Generate network units from Kernel command line.
Oct 09 15:51:01 compute-0 systemd[1]: Mounting FUSE Control File System...
Oct 09 15:51:01 compute-0 systemd[1]: Mounted FUSE Control File System.
Oct 09 15:51:01 compute-0 kernel: xfs filesystem being remounted at / supports timestamps until 2038 (0x7fffffff)
Oct 09 15:51:01 compute-0 systemd[1]: Finished Remount Root and Kernel File Systems.
Oct 09 15:51:01 compute-0 systemd[1]: Activating swap /swap...
Oct 09 15:51:01 compute-0 systemd[1]: First Boot Wizard was skipped because of an unmet condition check (ConditionFirstBoot=yes).
Oct 09 15:51:01 compute-0 systemd[1]: Rebuild Hardware Database was skipped because of an unmet condition check (ConditionNeedsUpdate=/etc).
Oct 09 15:51:01 compute-0 systemd[1]: Starting Flush Journal to Persistent Storage...
Oct 09 15:51:01 compute-0 systemd[1]: Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Oct 09 15:51:01 compute-0 systemd[1]: Starting Load/Save OS Random Seed...
Oct 09 15:51:01 compute-0 systemd[1]: Create System Users was skipped because no trigger condition checks were met.
Oct 09 15:51:01 compute-0 kernel: Adding 1048572k swap on /swap.  Priority:-2 extents:1 across:1048572k 
Oct 09 15:51:01 compute-0 systemd[1]: Starting Create Static Device Nodes in /dev...
Oct 09 15:51:01 compute-0 systemd[1]: Activated swap /swap.
Oct 09 15:51:01 compute-0 systemd-journald[687]: Time spent on flushing to /var/log/journal/42833e1b511a402df82cb9cb2fc36491 is 8.300ms for 776 entries.
Oct 09 15:51:01 compute-0 systemd-journald[687]: System Journal (/var/log/journal/42833e1b511a402df82cb9cb2fc36491) is 8.0M, max 4.0G, 3.9G free.
Oct 09 15:51:01 compute-0 systemd-journald[687]: Received client request to flush runtime journal.
Oct 09 15:51:01 compute-0 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
Oct 09 15:51:01 compute-0 kernel: Bridge firewalling registered
Oct 09 15:51:01 compute-0 systemd[1]: Finished Monitoring of LVM2 mirrors, snapshots etc. using dmeventd or progress polling.
Oct 09 15:51:01 compute-0 systemd[1]: Finished Load/Save OS Random Seed.
Oct 09 15:51:01 compute-0 systemd[1]: First Boot Complete was skipped because of an unmet condition check (ConditionFirstBoot=yes).
Oct 09 15:51:01 compute-0 systemd[1]: Reached target Swaps.
Oct 09 15:51:01 compute-0 systemd-modules-load[688]: Inserted module 'br_netfilter'
Oct 09 15:51:01 compute-0 systemd[1]: Finished Flush Journal to Persistent Storage.
Oct 09 15:51:01 compute-0 systemd[1]: Finished Coldplug All udev Devices.
Oct 09 15:51:01 compute-0 systemd-modules-load[688]: Inserted module 'nf_conntrack'
Oct 09 15:51:01 compute-0 systemd[1]: Finished Load Kernel Modules.
Oct 09 15:51:01 compute-0 systemd[1]: Starting Apply Kernel Variables...
Oct 09 15:51:01 compute-0 systemd[1]: Finished Create Static Device Nodes in /dev.
Oct 09 15:51:01 compute-0 systemd[1]: Reached target Preparation for Local File Systems.
Oct 09 15:51:01 compute-0 systemd[1]: Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw).
Oct 09 15:51:01 compute-0 systemd[1]: Reached target Local File Systems.
Oct 09 15:51:01 compute-0 systemd[1]: Starting Import network configuration from initramfs...
Oct 09 15:51:01 compute-0 systemd[1]: Rebuild Dynamic Linker Cache was skipped because no trigger condition checks were met.
Oct 09 15:51:01 compute-0 systemd[1]: Mark the need to relabel after reboot was skipped because of an unmet condition check (ConditionSecurity=!selinux).
Oct 09 15:51:01 compute-0 systemd[1]: Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Oct 09 15:51:01 compute-0 systemd[1]: Update Boot Loader Random Seed was skipped because no trigger condition checks were met.
Oct 09 15:51:01 compute-0 systemd[1]: Starting Automatic Boot Loader Update...
Oct 09 15:51:01 compute-0 systemd[1]: Commit a transient machine-id on disk was skipped because of an unmet condition check (ConditionPathIsMountPoint=/etc/machine-id).
Oct 09 15:51:01 compute-0 systemd[1]: Starting Rule-based Manager for Device Events and Files...
Oct 09 15:51:01 compute-0 systemd[1]: Finished Apply Kernel Variables.
Oct 09 15:51:01 compute-0 bootctl[705]: Couldn't find EFI system partition, skipping.
Oct 09 15:51:01 compute-0 systemd[1]: Finished Automatic Boot Loader Update.
Oct 09 15:51:01 compute-0 systemd[1]: Finished Import network configuration from initramfs.
Oct 09 15:51:01 compute-0 systemd-udevd[707]: Using default interface naming scheme 'rhel-9.0'.
Oct 09 15:51:01 compute-0 systemd[1]: Starting Create Volatile Files and Directories...
Oct 09 15:51:01 compute-0 systemd[1]: Started Rule-based Manager for Device Events and Files.
Oct 09 15:51:01 compute-0 systemd[1]: Starting Load Kernel Module configfs...
Oct 09 15:51:01 compute-0 systemd[1]: Condition check resulted in /dev/ttyS0 being skipped.
Oct 09 15:51:01 compute-0 systemd[1]: modprobe@configfs.service: Deactivated successfully.
Oct 09 15:51:01 compute-0 systemd[1]: Finished Load Kernel Module configfs.
Oct 09 15:51:01 compute-0 systemd-udevd[727]: Network interface NamePolicy= disabled on kernel command line.
Oct 09 15:51:01 compute-0 kernel: piix4_smbus 0000:00:01.3: SMBus Host Controller at 0x700, revision 0
Oct 09 15:51:01 compute-0 kernel: i2c i2c-0: 1/1 memory slots populated (from DMI)
Oct 09 15:51:01 compute-0 kernel: i2c i2c-0: Memory type 0x07 not supported yet, not instantiating SPD
Oct 09 15:51:01 compute-0 kernel: input: PC Speaker as /devices/platform/pcspkr/input/input6
Oct 09 15:51:01 compute-0 systemd[1]: Finished Create Volatile Files and Directories.
Oct 09 15:51:01 compute-0 systemd[1]: Starting Security Auditing Service...
Oct 09 15:51:01 compute-0 systemd[1]: Starting RPC Bind...
Oct 09 15:51:01 compute-0 systemd[1]: Rebuild Journal Catalog was skipped because of an unmet condition check (ConditionNeedsUpdate=/var).
Oct 09 15:51:01 compute-0 systemd[1]: Update is Completed was skipped because no trigger condition checks were met.
Oct 09 15:51:01 compute-0 systemd-udevd[734]: Network interface NamePolicy= disabled on kernel command line.
Oct 09 15:51:01 compute-0 auditd[771]: audit dispatcher initialized with q_depth=2000 and 1 active plugins
Oct 09 15:51:01 compute-0 auditd[771]: Init complete, auditd 3.1.5 listening for events (startup state enable)
Oct 09 15:51:01 compute-0 systemd[1]: Started RPC Bind.
Oct 09 15:51:01 compute-0 kernel: kvm_amd: TSC scaling supported
Oct 09 15:51:01 compute-0 kernel: kvm_amd: Nested Virtualization enabled
Oct 09 15:51:01 compute-0 kernel: kvm_amd: Nested Paging enabled
Oct 09 15:51:01 compute-0 kernel: kvm_amd: LBR virtualization supported
Oct 09 15:51:01 compute-0 kernel: [drm] pci: virtio-vga detected at 0000:00:02.0
Oct 09 15:51:01 compute-0 kernel: virtio-pci 0000:00:02.0: vgaarb: deactivate vga console
Oct 09 15:51:01 compute-0 kernel: Console: switching to colour dummy device 80x25
Oct 09 15:51:01 compute-0 kernel: [drm] features: -virgl +edid -resource_blob -host_visible
Oct 09 15:51:01 compute-0 kernel: [drm] features: -context_init
Oct 09 15:51:01 compute-0 kernel: [drm] number of scanouts: 1
Oct 09 15:51:01 compute-0 kernel: [drm] number of cap sets: 0
Oct 09 15:51:01 compute-0 augenrules[777]: /sbin/augenrules: No change
Oct 09 15:51:01 compute-0 kernel: [drm] Initialized virtio_gpu 0.1.0 for 0000:00:02.0 on minor 0
Oct 09 15:51:01 compute-0 kernel: fbcon: virtio_gpudrmfb (fb0) is primary device
Oct 09 15:51:01 compute-0 kernel: Console: switching to colour frame buffer device 128x48
Oct 09 15:51:01 compute-0 kernel: virtio-pci 0000:00:02.0: [drm] fb0: virtio_gpudrmfb frame buffer device
Oct 09 15:51:01 compute-0 augenrules[800]: No rules
Oct 09 15:51:01 compute-0 augenrules[800]: enabled 1
Oct 09 15:51:01 compute-0 augenrules[800]: failure 1
Oct 09 15:51:01 compute-0 augenrules[800]: pid 771
Oct 09 15:51:01 compute-0 augenrules[800]: rate_limit 0
Oct 09 15:51:01 compute-0 augenrules[800]: backlog_limit 8192
Oct 09 15:51:01 compute-0 augenrules[800]: lost 0
Oct 09 15:51:01 compute-0 augenrules[800]: backlog 3
Oct 09 15:51:01 compute-0 augenrules[800]: backlog_wait_time 60000
Oct 09 15:51:01 compute-0 augenrules[800]: backlog_wait_time_actual 0
Oct 09 15:51:01 compute-0 augenrules[800]: enabled 1
Oct 09 15:51:01 compute-0 augenrules[800]: failure 1
Oct 09 15:51:01 compute-0 augenrules[800]: pid 771
Oct 09 15:51:01 compute-0 augenrules[800]: rate_limit 0
Oct 09 15:51:01 compute-0 augenrules[800]: backlog_limit 8192
Oct 09 15:51:01 compute-0 augenrules[800]: lost 0
Oct 09 15:51:01 compute-0 augenrules[800]: backlog 4
Oct 09 15:51:01 compute-0 augenrules[800]: backlog_wait_time 60000
Oct 09 15:51:01 compute-0 augenrules[800]: backlog_wait_time_actual 0
Oct 09 15:51:01 compute-0 augenrules[800]: enabled 1
Oct 09 15:51:01 compute-0 augenrules[800]: failure 1
Oct 09 15:51:01 compute-0 augenrules[800]: pid 771
Oct 09 15:51:01 compute-0 augenrules[800]: rate_limit 0
Oct 09 15:51:01 compute-0 augenrules[800]: backlog_limit 8192
Oct 09 15:51:01 compute-0 augenrules[800]: lost 0
Oct 09 15:51:01 compute-0 augenrules[800]: backlog 8
Oct 09 15:51:01 compute-0 augenrules[800]: backlog_wait_time 60000
Oct 09 15:51:01 compute-0 augenrules[800]: backlog_wait_time_actual 0
Oct 09 15:51:01 compute-0 systemd[1]: Started Security Auditing Service.
Oct 09 15:51:01 compute-0 systemd[1]: Starting Record System Boot/Shutdown in UTMP...
Oct 09 15:51:01 compute-0 systemd[1]: Finished Record System Boot/Shutdown in UTMP.
Oct 09 15:51:02 compute-0 systemd[1]: Reached target System Initialization.
Oct 09 15:51:02 compute-0 systemd[1]: Started dnf makecache --timer.
Oct 09 15:51:02 compute-0 systemd[1]: Started Daily rotation of log files.
Oct 09 15:51:02 compute-0 systemd[1]: Started Run system activity accounting tool every 10 minutes.
Oct 09 15:51:02 compute-0 systemd[1]: Started Generate summary of yesterday's process accounting.
Oct 09 15:51:02 compute-0 systemd[1]: Started Daily Cleanup of Temporary Directories.
Oct 09 15:51:02 compute-0 systemd[1]: Started daily update of the root trust anchor for DNSSEC.
Oct 09 15:51:02 compute-0 systemd[1]: Reached target Timer Units.
Oct 09 15:51:02 compute-0 systemd[1]: Listening on D-Bus System Message Bus Socket.
Oct 09 15:51:02 compute-0 systemd[1]: Listening on SSSD Kerberos Cache Manager responder socket.
Oct 09 15:51:02 compute-0 systemd[1]: Reached target Socket Units.
Oct 09 15:51:02 compute-0 systemd[1]: Starting D-Bus System Message Bus...
Oct 09 15:51:02 compute-0 systemd[1]: TPM2 PCR Barrier (Initialization) was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/StubPcrKernelImage-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f).
Oct 09 15:51:02 compute-0 systemd[1]: Started D-Bus System Message Bus.
Oct 09 15:51:02 compute-0 systemd[1]: Reached target Basic System.
Oct 09 15:51:02 compute-0 dbus-broker-lau[832]: Ready
Oct 09 15:51:02 compute-0 systemd[1]: Starting NTP client/server...
Oct 09 15:51:02 compute-0 systemd[1]: Starting Cloud-init: Local Stage (pre-network)...
Oct 09 15:51:02 compute-0 systemd[1]: Starting Restore /run/initramfs on shutdown...
Oct 09 15:51:02 compute-0 systemd[1]: Started irqbalance daemon.
Oct 09 15:51:02 compute-0 systemd[1]: Load CPU microcode update was skipped because of an unmet condition check (ConditionPathExists=/sys/devices/system/cpu/microcode/reload).
Oct 09 15:51:02 compute-0 systemd[1]: Starting Create netns directory...
Oct 09 15:51:02 compute-0 systemd[1]: Starting Netfilter Tables...
Oct 09 15:51:02 compute-0 systemd[1]: OpenSSH ecdsa Server Key Generation was skipped because of an unmet condition check (ConditionPathExists=!/run/systemd/generator.early/multi-user.target.wants/cloud-init.target).
Oct 09 15:51:02 compute-0 systemd[1]: OpenSSH ed25519 Server Key Generation was skipped because of an unmet condition check (ConditionPathExists=!/run/systemd/generator.early/multi-user.target.wants/cloud-init.target).
Oct 09 15:51:02 compute-0 systemd[1]: OpenSSH rsa Server Key Generation was skipped because of an unmet condition check (ConditionPathExists=!/run/systemd/generator.early/multi-user.target.wants/cloud-init.target).
Oct 09 15:51:02 compute-0 systemd[1]: Reached target sshd-keygen.target.
Oct 09 15:51:02 compute-0 systemd[1]: System Security Services Daemon was skipped because no trigger condition checks were met.
Oct 09 15:51:02 compute-0 systemd[1]: Reached target User and Group Name Lookups.
Oct 09 15:51:02 compute-0 systemd[1]: Starting Resets System Activity Logs...
Oct 09 15:51:02 compute-0 systemd[1]: Starting User Login Management...
Oct 09 15:51:02 compute-0 systemd[1]: Finished Restore /run/initramfs on shutdown.
Oct 09 15:51:02 compute-0 systemd[1]: Finished Resets System Activity Logs.
Oct 09 15:51:02 compute-0 systemd[1]: run-netns-placeholder.mount: Deactivated successfully.
Oct 09 15:51:02 compute-0 systemd[1]: netns-placeholder.service: Deactivated successfully.
Oct 09 15:51:02 compute-0 systemd[1]: Finished Create netns directory.
Oct 09 15:51:02 compute-0 chronyd[847]: chronyd version 4.6.1 starting (+CMDMON +NTP +REFCLOCK +RTC +PRIVDROP +SCFILTER +SIGND +ASYNCDNS +NTS +SECHASH +IPV6 +DEBUG)
Oct 09 15:51:02 compute-0 chronyd[847]: Frequency -32.388 +/- 0.604 ppm read from /var/lib/chrony/drift
Oct 09 15:51:02 compute-0 chronyd[847]: Loaded seccomp filter (level 2)
Oct 09 15:51:02 compute-0 systemd-logind[841]: New seat seat0.
Oct 09 15:51:02 compute-0 systemd-logind[841]: Watching system buttons on /dev/input/event0 (Power Button)
Oct 09 15:51:02 compute-0 systemd-logind[841]: Watching system buttons on /dev/input/event1 (AT Translated Set 2 keyboard)
Oct 09 15:51:02 compute-0 systemd[1]: Started NTP client/server.
Oct 09 15:51:02 compute-0 systemd[1]: Started User Login Management.
Oct 09 15:51:02 compute-0 systemd[1]: Finished Netfilter Tables.
Oct 09 15:51:02 compute-0 cloud-init[867]: Cloud-init v. 24.4-7.el9 running 'init-local' at Thu, 09 Oct 2025 15:51:02 +0000. Up 6.57 seconds.
Oct 09 15:51:03 compute-0 systemd[1]: Finished Cloud-init: Local Stage (pre-network).
Oct 09 15:51:03 compute-0 systemd[1]: Reached target Preparation for Network.
Oct 09 15:51:03 compute-0 systemd[1]: Starting Open vSwitch Database Unit...
Oct 09 15:51:03 compute-0 chown[870]: /usr/bin/chown: cannot access '/run/openvswitch': No such file or directory
Oct 09 15:51:03 compute-0 ovs-ctl[875]: Starting ovsdb-server [  OK  ]
Oct 09 15:51:03 compute-0 ovs-vsctl[924]: ovs|00001|vsctl|INFO|Called as ovs-vsctl --no-wait -- init -- set Open_vSwitch . db-version=8.5.1
Oct 09 15:51:03 compute-0 ovs-vsctl[934]: ovs|00001|vsctl|INFO|Called as ovs-vsctl --no-wait set Open_vSwitch . ovs-version=3.3.5-115.el9s "external-ids:system-id=\"9954897f-aa83-45dd-8e84-289816676c2a\"" "external-ids:rundir=\"/var/run/openvswitch\"" "system-type=\"centos\"" "system-version=\"9\""
Oct 09 15:51:03 compute-0 ovs-ctl[875]: Configuring Open vSwitch system IDs [  OK  ]
Oct 09 15:51:03 compute-0 ovs-vsctl[939]: ovs|00001|vsctl|INFO|Called as ovs-vsctl --no-wait add Open_vSwitch . external-ids hostname=compute-0
Oct 09 15:51:03 compute-0 ovs-ctl[875]: Enabling remote OVSDB managers [  OK  ]
Oct 09 15:51:03 compute-0 systemd[1]: Started Open vSwitch Database Unit.
Oct 09 15:51:03 compute-0 systemd[1]: Starting Open vSwitch Delete Transient Ports...
Oct 09 15:51:03 compute-0 systemd[1]: Finished Open vSwitch Delete Transient Ports.
Oct 09 15:51:03 compute-0 systemd[1]: Starting Open vSwitch Forwarding Unit...
Oct 09 15:51:03 compute-0 kernel: openvswitch: Open vSwitch switching datapath
Oct 09 15:51:03 compute-0 ovs-ctl[984]: Inserting openvswitch module [  OK  ]
Oct 09 15:51:03 compute-0 kernel: ovs-system: entered promiscuous mode
Oct 09 15:51:03 compute-0 systemd-udevd[738]: Network interface NamePolicy= disabled on kernel command line.
Oct 09 15:51:03 compute-0 kernel: Timeout policy base is empty
Oct 09 15:51:03 compute-0 kernel: virtio_net virtio5 eth1: entered promiscuous mode
Oct 09 15:51:03 compute-0 kernel: vlan22: entered promiscuous mode
Oct 09 15:51:03 compute-0 systemd-udevd[722]: Network interface NamePolicy= disabled on kernel command line.
Oct 09 15:51:03 compute-0 kernel: vlan20: entered promiscuous mode
Oct 09 15:51:03 compute-0 kernel: vlan21: entered promiscuous mode
Oct 09 15:51:03 compute-0 ovs-ctl[953]: Starting ovs-vswitchd [  OK  ]
Oct 09 15:51:03 compute-0 ovs-vsctl[1025]: ovs|00001|vsctl|INFO|Called as ovs-vsctl --no-wait add Open_vSwitch . external-ids hostname=compute-0
Oct 09 15:51:03 compute-0 ovs-ctl[953]: Enabling remote OVSDB managers [  OK  ]
Oct 09 15:51:03 compute-0 systemd[1]: Started Open vSwitch Forwarding Unit.
Oct 09 15:51:03 compute-0 systemd[1]: Starting Open vSwitch...
Oct 09 15:51:03 compute-0 systemd[1]: Finished Open vSwitch.
Oct 09 15:51:03 compute-0 systemd[1]: Starting Network Manager...
Oct 09 15:51:03 compute-0 NetworkManager[1028]: <info>  [1760025063.9384] NetworkManager (version 1.54.1-1.el9) is starting... (boot:c81e391b-c8ef-402c-a80d-26cfabfb8ce0)
Oct 09 15:51:03 compute-0 NetworkManager[1028]: <info>  [1760025063.9392] Read config: /etc/NetworkManager/NetworkManager.conf, /usr/lib/NetworkManager/conf.d/00-server.conf, /run/NetworkManager/conf.d/15-carrier-timeout.conf, /var/lib/NetworkManager/NetworkManager-intern.conf
Oct 09 15:51:03 compute-0 NetworkManager[1028]: <info>  [1760025063.9522] manager[0x55d6d5f21040]: monitoring kernel firmware directory '/lib/firmware'.
Oct 09 15:51:03 compute-0 systemd[1]: Starting Hostname Service...
Oct 09 15:51:04 compute-0 systemd[1]: Started Hostname Service.
Oct 09 15:51:04 compute-0 NetworkManager[1028]: <info>  [1760025064.0306] hostname: hostname: using hostnamed
Oct 09 15:51:04 compute-0 NetworkManager[1028]: <info>  [1760025064.0306] hostname: static hostname changed from (none) to "compute-0"
Oct 09 15:51:04 compute-0 NetworkManager[1028]: <info>  [1760025064.0311] dns-mgr: init: dns=default,systemd-resolved rc-manager=symlink (auto)
Oct 09 15:51:04 compute-0 NetworkManager[1028]: <info>  [1760025064.0424] manager[0x55d6d5f21040]: rfkill: Wi-Fi hardware radio set enabled
Oct 09 15:51:04 compute-0 NetworkManager[1028]: <info>  [1760025064.0424] manager[0x55d6d5f21040]: rfkill: WWAN hardware radio set enabled
Oct 09 15:51:04 compute-0 systemd[1]: Listening on Load/Save RF Kill Switch Status /dev/rfkill Watch.
Oct 09 15:51:04 compute-0 NetworkManager[1028]: <info>  [1760025064.0484] Loaded device plugin: NMOvsFactory (/usr/lib64/NetworkManager/1.54.1-1.el9/libnm-device-plugin-ovs.so)
Oct 09 15:51:04 compute-0 NetworkManager[1028]: <info>  [1760025064.0504] Loaded device plugin: NMTeamFactory (/usr/lib64/NetworkManager/1.54.1-1.el9/libnm-device-plugin-team.so)
Oct 09 15:51:04 compute-0 NetworkManager[1028]: <info>  [1760025064.0505] manager: rfkill: Wi-Fi enabled by radio killswitch; enabled by state file
Oct 09 15:51:04 compute-0 NetworkManager[1028]: <info>  [1760025064.0505] manager: rfkill: WWAN enabled by radio killswitch; enabled by state file
Oct 09 15:51:04 compute-0 NetworkManager[1028]: <info>  [1760025064.0505] manager: Networking is enabled by state file
Oct 09 15:51:04 compute-0 NetworkManager[1028]: <info>  [1760025064.0511] settings: Loaded settings plugin: keyfile (internal)
Oct 09 15:51:04 compute-0 NetworkManager[1028]: <info>  [1760025064.0546] settings: Loaded settings plugin: ifcfg-rh ("/usr/lib64/NetworkManager/1.54.1-1.el9/libnm-settings-plugin-ifcfg-rh.so")
Oct 09 15:51:04 compute-0 systemd[1]: Starting Network Manager Script Dispatcher Service...
Oct 09 15:51:04 compute-0 NetworkManager[1028]: <info>  [1760025064.0627] Warning: the ifcfg-rh plugin is deprecated, please migrate connections to the keyfile format using "nmcli connection migrate"
Oct 09 15:51:04 compute-0 NetworkManager[1028]: <info>  [1760025064.0652] dhcp: init: Using DHCP client 'internal'
Oct 09 15:51:04 compute-0 NetworkManager[1028]: <info>  [1760025064.0654] manager: (lo): new Loopback device (/org/freedesktop/NetworkManager/Devices/1)
Oct 09 15:51:04 compute-0 NetworkManager[1028]: <info>  [1760025064.0665] device (lo): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Oct 09 15:51:04 compute-0 NetworkManager[1028]: <info>  [1760025064.0675] device (lo): state change: unavailable -> disconnected (reason 'connection-assumed', managed-type: 'external')
Oct 09 15:51:04 compute-0 NetworkManager[1028]: <info>  [1760025064.0681] device (lo): Activation: starting connection 'lo' (c942e5d5-2e6c-4711-93ed-c7a578a430d9)
Oct 09 15:51:04 compute-0 NetworkManager[1028]: <info>  [1760025064.0688] manager: (eth0): new Ethernet device (/org/freedesktop/NetworkManager/Devices/2)
Oct 09 15:51:04 compute-0 NetworkManager[1028]: <info>  [1760025064.0690] device (eth0): state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Oct 09 15:51:04 compute-0 NetworkManager[1028]: <info>  [1760025064.0729] manager: (vlan20): new Open vSwitch Interface device (/org/freedesktop/NetworkManager/Devices/3)
Oct 09 15:51:04 compute-0 NetworkManager[1028]: <info>  [1760025064.0731] device (vlan20)[Open vSwitch Interface]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Oct 09 15:51:04 compute-0 NetworkManager[1028]: <info>  [1760025064.0743] manager: (vlan21): new Open vSwitch Interface device (/org/freedesktop/NetworkManager/Devices/4)
Oct 09 15:51:04 compute-0 NetworkManager[1028]: <info>  [1760025064.0745] device (vlan21)[Open vSwitch Interface]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Oct 09 15:51:04 compute-0 NetworkManager[1028]: <info>  [1760025064.0757] manager: (vlan22): new Open vSwitch Interface device (/org/freedesktop/NetworkManager/Devices/5)
Oct 09 15:51:04 compute-0 NetworkManager[1028]: <info>  [1760025064.0758] device (vlan22)[Open vSwitch Interface]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Oct 09 15:51:04 compute-0 NetworkManager[1028]: <info>  [1760025064.0771] manager: (eth1): new Ethernet device (/org/freedesktop/NetworkManager/Devices/6)
Oct 09 15:51:04 compute-0 NetworkManager[1028]: <info>  [1760025064.0774] device (eth1): state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Oct 09 15:51:04 compute-0 systemd[1]: Started Network Manager Script Dispatcher Service.
Oct 09 15:51:04 compute-0 NetworkManager[1028]: <info>  [1760025064.0798] manager: (vlan21): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/7)
Oct 09 15:51:04 compute-0 NetworkManager[1028]: <info>  [1760025064.0799] device (vlan21)[Open vSwitch Port]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Oct 09 15:51:04 compute-0 NetworkManager[1028]: <info>  [1760025064.0805] manager: (vlan20): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/8)
Oct 09 15:51:04 compute-0 NetworkManager[1028]: <info>  [1760025064.0806] device (vlan20)[Open vSwitch Port]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Oct 09 15:51:04 compute-0 NetworkManager[1028]: <info>  [1760025064.0810] manager: (br-ex): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/9)
Oct 09 15:51:04 compute-0 NetworkManager[1028]: <info>  [1760025064.0812] device (br-ex)[Open vSwitch Port]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Oct 09 15:51:04 compute-0 NetworkManager[1028]: <info>  [1760025064.0816] manager: (br-ex): new Open vSwitch Interface device (/org/freedesktop/NetworkManager/Devices/10)
Oct 09 15:51:04 compute-0 NetworkManager[1028]: <info>  [1760025064.0817] device (br-ex)[Open vSwitch Interface]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Oct 09 15:51:04 compute-0 NetworkManager[1028]: <info>  [1760025064.0821] manager: (eth1): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/11)
Oct 09 15:51:04 compute-0 NetworkManager[1028]: <info>  [1760025064.0823] device (eth1)[Open vSwitch Port]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Oct 09 15:51:04 compute-0 NetworkManager[1028]: <info>  [1760025064.0829] manager: (br-ex): new Open vSwitch Bridge device (/org/freedesktop/NetworkManager/Devices/12)
Oct 09 15:51:04 compute-0 NetworkManager[1028]: <info>  [1760025064.0831] device (br-ex)[Open vSwitch Bridge]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Oct 09 15:51:04 compute-0 NetworkManager[1028]: <info>  [1760025064.0837] manager: (vlan22): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/13)
Oct 09 15:51:04 compute-0 NetworkManager[1028]: <info>  [1760025064.0839] device (vlan22)[Open vSwitch Port]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Oct 09 15:51:04 compute-0 systemd[1]: Started Network Manager.
Oct 09 15:51:04 compute-0 NetworkManager[1028]: <info>  [1760025064.0849] bus-manager: acquired D-Bus service "org.freedesktop.NetworkManager"
Oct 09 15:51:04 compute-0 NetworkManager[1028]: <info>  [1760025064.0856] device (lo): state change: disconnected -> prepare (reason 'none', managed-type: 'external')
Oct 09 15:51:04 compute-0 NetworkManager[1028]: <info>  [1760025064.0858] device (lo): state change: prepare -> config (reason 'none', managed-type: 'external')
Oct 09 15:51:04 compute-0 NetworkManager[1028]: <info>  [1760025064.0859] device (lo): state change: config -> ip-config (reason 'none', managed-type: 'external')
Oct 09 15:51:04 compute-0 NetworkManager[1028]: <info>  [1760025064.0861] device (eth0): carrier: link connected
Oct 09 15:51:04 compute-0 NetworkManager[1028]: <info>  [1760025064.0862] device (vlan20)[Open vSwitch Interface]: carrier: link connected
Oct 09 15:51:04 compute-0 NetworkManager[1028]: <info>  [1760025064.0862] device (vlan21)[Open vSwitch Interface]: carrier: link connected
Oct 09 15:51:04 compute-0 NetworkManager[1028]: <info>  [1760025064.0863] device (vlan22)[Open vSwitch Interface]: carrier: link connected
Oct 09 15:51:04 compute-0 NetworkManager[1028]: <info>  [1760025064.0864] device (eth1): carrier: link connected
Oct 09 15:51:04 compute-0 systemd[1]: Reached target Network.
Oct 09 15:51:04 compute-0 NetworkManager[1028]: <info>  [1760025064.0869] device (lo): state change: ip-config -> ip-check (reason 'none', managed-type: 'external')
Oct 09 15:51:04 compute-0 NetworkManager[1028]: <info>  [1760025064.0874] device (eth0): state change: unavailable -> disconnected (reason 'carrier-changed', managed-type: 'full')
Oct 09 15:51:04 compute-0 NetworkManager[1028]: <info>  [1760025064.0878] device (vlan20)[Open vSwitch Interface]: state change: unavailable -> disconnected (reason 'carrier-changed', managed-type: 'full')
Oct 09 15:51:04 compute-0 NetworkManager[1028]: <info>  [1760025064.0882] device (vlan21)[Open vSwitch Interface]: state change: unavailable -> disconnected (reason 'carrier-changed', managed-type: 'full')
Oct 09 15:51:04 compute-0 NetworkManager[1028]: <info>  [1760025064.0885] device (vlan22)[Open vSwitch Interface]: state change: unavailable -> disconnected (reason 'carrier-changed', managed-type: 'full')
Oct 09 15:51:04 compute-0 kernel: vlan21: left promiscuous mode
Oct 09 15:51:04 compute-0 NetworkManager[1028]: <info>  [1760025064.0905] device (eth1): state change: unavailable -> disconnected (reason 'carrier-changed', managed-type: 'full')
Oct 09 15:51:04 compute-0 NetworkManager[1028]: <info>  [1760025064.0909] device (vlan21)[Open vSwitch Port]: state change: unavailable -> disconnected (reason 'none', managed-type: 'full')
Oct 09 15:51:04 compute-0 NetworkManager[1028]: <info>  [1760025064.0910] device (vlan20)[Open vSwitch Port]: state change: unavailable -> disconnected (reason 'none', managed-type: 'full')
Oct 09 15:51:04 compute-0 NetworkManager[1028]: <info>  [1760025064.0912] device (br-ex)[Open vSwitch Port]: state change: unavailable -> disconnected (reason 'none', managed-type: 'full')
Oct 09 15:51:04 compute-0 NetworkManager[1028]: <info>  [1760025064.0913] device (eth1)[Open vSwitch Port]: state change: unavailable -> disconnected (reason 'none', managed-type: 'full')
Oct 09 15:51:04 compute-0 NetworkManager[1028]: <info>  [1760025064.0914] device (br-ex)[Open vSwitch Bridge]: state change: unavailable -> disconnected (reason 'none', managed-type: 'full')
Oct 09 15:51:04 compute-0 NetworkManager[1028]: <info>  [1760025064.0915] device (vlan22)[Open vSwitch Port]: state change: unavailable -> disconnected (reason 'none', managed-type: 'full')
Oct 09 15:51:04 compute-0 NetworkManager[1028]: <info>  [1760025064.0923] policy: auto-activating connection 'System eth0' (5fb06bd0-0bb0-7ffb-45f1-d6edd65f3e03)
Oct 09 15:51:04 compute-0 NetworkManager[1028]: <info>  [1760025064.0925] policy: auto-activating connection 'ci-private-network' (3de2f3e4-96ea-5906-9def-9316ae1ab5fa)
Oct 09 15:51:04 compute-0 NetworkManager[1028]: <info>  [1760025064.0926] policy: auto-activating connection 'vlan21-port' (0bb82e47-0723-4e2b-8243-39fd125847a6)
Oct 09 15:51:04 compute-0 NetworkManager[1028]: <info>  [1760025064.0926] policy: auto-activating connection 'vlan20-port' (16c35609-4f08-4c7e-81ea-65c716bdb76c)
Oct 09 15:51:04 compute-0 NetworkManager[1028]: <info>  [1760025064.0927] policy: auto-activating connection 'br-ex-port' (72368789-afd6-43b0-8790-3eb32e3c6759)
Oct 09 15:51:04 compute-0 NetworkManager[1028]: <info>  [1760025064.0927] policy: auto-activating connection 'eth1-port' (c187801f-f702-42fd-be1e-94ed37a9f893)
Oct 09 15:51:04 compute-0 NetworkManager[1028]: <info>  [1760025064.0928] policy: auto-activating connection 'br-ex-br' (c9eaccc1-bd5d-4226-b104-e64b0cd8c01f)
Oct 09 15:51:04 compute-0 NetworkManager[1028]: <info>  [1760025064.0929] policy: auto-activating connection 'vlan22-port' (f63ac56b-b648-4958-a9f0-340f0ce75aef)
Oct 09 15:51:04 compute-0 NetworkManager[1028]: <info>  [1760025064.0932] device (vlan21)[Open vSwitch Interface]: state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Oct 09 15:51:04 compute-0 NetworkManager[1028]: <info>  [1760025064.0936] device (lo): state change: ip-check -> secondaries (reason 'none', managed-type: 'external')
Oct 09 15:51:04 compute-0 NetworkManager[1028]: <info>  [1760025064.0939] device (eth0): Activation: starting connection 'System eth0' (5fb06bd0-0bb0-7ffb-45f1-d6edd65f3e03)
Oct 09 15:51:04 compute-0 NetworkManager[1028]: <info>  [1760025064.0940] device (eth1): Activation: starting connection 'ci-private-network' (3de2f3e4-96ea-5906-9def-9316ae1ab5fa)
Oct 09 15:51:04 compute-0 NetworkManager[1028]: <info>  [1760025064.0942] device (vlan21)[Open vSwitch Port]: Activation: starting connection 'vlan21-port' (0bb82e47-0723-4e2b-8243-39fd125847a6)
Oct 09 15:51:04 compute-0 NetworkManager[1028]: <info>  [1760025064.0944] device (vlan20)[Open vSwitch Port]: Activation: starting connection 'vlan20-port' (16c35609-4f08-4c7e-81ea-65c716bdb76c)
Oct 09 15:51:04 compute-0 NetworkManager[1028]: <info>  [1760025064.0946] device (br-ex)[Open vSwitch Port]: Activation: starting connection 'br-ex-port' (72368789-afd6-43b0-8790-3eb32e3c6759)
Oct 09 15:51:04 compute-0 NetworkManager[1028]: <info>  [1760025064.0948] device (eth1)[Open vSwitch Port]: Activation: starting connection 'eth1-port' (c187801f-f702-42fd-be1e-94ed37a9f893)
Oct 09 15:51:04 compute-0 NetworkManager[1028]: <info>  [1760025064.0952] device (br-ex)[Open vSwitch Bridge]: Activation: starting connection 'br-ex-br' (c9eaccc1-bd5d-4226-b104-e64b0cd8c01f)
Oct 09 15:51:04 compute-0 NetworkManager[1028]: <info>  [1760025064.0953] device (vlan22)[Open vSwitch Port]: Activation: starting connection 'vlan22-port' (f63ac56b-b648-4958-a9f0-340f0ce75aef)
Oct 09 15:51:04 compute-0 NetworkManager[1028]: <info>  [1760025064.0954] device (lo): state change: secondaries -> activated (reason 'none', managed-type: 'external')
Oct 09 15:51:04 compute-0 systemd[1]: Starting Network Manager Wait Online...
Oct 09 15:51:04 compute-0 NetworkManager[1028]: <info>  [1760025064.1009] device (lo): Activation: successful, device activated.
Oct 09 15:51:04 compute-0 NetworkManager[1028]: <info>  [1760025064.1015] device (eth0): state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Oct 09 15:51:04 compute-0 NetworkManager[1028]: <info>  [1760025064.1016] manager: NetworkManager state is now CONNECTING
Oct 09 15:51:04 compute-0 NetworkManager[1028]: <info>  [1760025064.1017] device (eth0): state change: prepare -> config (reason 'none', managed-type: 'full')
Oct 09 15:51:04 compute-0 NetworkManager[1028]: <info>  [1760025064.1024] device (eth1): state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Oct 09 15:51:04 compute-0 NetworkManager[1028]: <info>  [1760025064.1026] device (vlan21)[Open vSwitch Port]: state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Oct 09 15:51:04 compute-0 NetworkManager[1028]: <info>  [1760025064.1028] device (vlan20)[Open vSwitch Port]: state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Oct 09 15:51:04 compute-0 NetworkManager[1028]: <info>  [1760025064.1030] device (br-ex)[Open vSwitch Port]: state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Oct 09 15:51:04 compute-0 NetworkManager[1028]: <info>  [1760025064.1032] device (eth1): state change: prepare -> deactivating (reason 'new-activation', managed-type: 'full')
Oct 09 15:51:04 compute-0 systemd[1]: Starting GSSAPI Proxy Daemon...
Oct 09 15:51:04 compute-0 NetworkManager[1028]: <info>  [1760025064.1036] device (eth1): disconnecting for new activation request.
Oct 09 15:51:04 compute-0 NetworkManager[1028]: <info>  [1760025064.1036] device (eth1)[Open vSwitch Port]: state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Oct 09 15:51:04 compute-0 NetworkManager[1028]: <info>  [1760025064.1038] device (br-ex)[Open vSwitch Port]: state change: prepare -> deactivating (reason 'new-activation', managed-type: 'full')
Oct 09 15:51:04 compute-0 NetworkManager[1028]: <info>  [1760025064.1042] device (br-ex)[Open vSwitch Port]: disconnecting for new activation request.
Oct 09 15:51:04 compute-0 NetworkManager[1028]: <info>  [1760025064.1043] device (eth1)[Open vSwitch Port]: state change: prepare -> deactivating (reason 'new-activation', managed-type: 'full')
Oct 09 15:51:04 compute-0 NetworkManager[1028]: <info>  [1760025064.1048] device (eth1)[Open vSwitch Port]: disconnecting for new activation request.
Oct 09 15:51:04 compute-0 NetworkManager[1028]: <info>  [1760025064.1049] device (vlan20)[Open vSwitch Port]: state change: prepare -> deactivating (reason 'new-activation', managed-type: 'full')
Oct 09 15:51:04 compute-0 NetworkManager[1028]: <info>  [1760025064.1052] device (vlan20)[Open vSwitch Port]: disconnecting for new activation request.
Oct 09 15:51:04 compute-0 NetworkManager[1028]: <info>  [1760025064.1053] device (vlan21)[Open vSwitch Port]: state change: prepare -> deactivating (reason 'new-activation', managed-type: 'full')
Oct 09 15:51:04 compute-0 NetworkManager[1028]: <info>  [1760025064.1057] device (vlan21)[Open vSwitch Port]: disconnecting for new activation request.
Oct 09 15:51:04 compute-0 NetworkManager[1028]: <info>  [1760025064.1058] device (vlan22)[Open vSwitch Port]: state change: disconnected -> deactivating (reason 'new-activation', managed-type: 'full')
Oct 09 15:51:04 compute-0 NetworkManager[1028]: <info>  [1760025064.1061] device (vlan22)[Open vSwitch Port]: disconnecting for new activation request.
Oct 09 15:51:04 compute-0 NetworkManager[1028]: <info>  [1760025064.1062] device (br-ex)[Open vSwitch Bridge]: state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Oct 09 15:51:04 compute-0 NetworkManager[1028]: <info>  [1760025064.1064] device (br-ex)[Open vSwitch Bridge]: state change: prepare -> config (reason 'none', managed-type: 'full')
Oct 09 15:51:04 compute-0 NetworkManager[1028]: <info>  [1760025064.1066] device (br-ex)[Open vSwitch Bridge]: state change: config -> ip-config (reason 'none', managed-type: 'full')
Oct 09 15:51:04 compute-0 NetworkManager[1028]: <info>  [1760025064.1070] device (eth0): state change: config -> ip-config (reason 'none', managed-type: 'full')
Oct 09 15:51:04 compute-0 NetworkManager[1028]: <info>  [1760025064.1073] dhcp4 (eth0): activation: beginning transaction (no timeout)
Oct 09 15:51:04 compute-0 systemd[1]: Starting Dynamic System Tuning Daemon...
Oct 09 15:51:04 compute-0 NetworkManager[1028]: <info>  [1760025064.1084] device (eth1): disconnecting for new activation request.
Oct 09 15:51:04 compute-0 NetworkManager[1028]: <info>  [1760025064.1087] device (br-ex)[Open vSwitch Bridge]: state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Oct 09 15:51:04 compute-0 NetworkManager[1028]: <info>  [1760025064.1091] device (eth1): state change: deactivating -> disconnected (reason 'new-activation', managed-type: 'full')
Oct 09 15:51:04 compute-0 NetworkManager[1028]: <info>  [1760025064.1100] device (eth1): Activation: starting connection 'ci-private-network' (3de2f3e4-96ea-5906-9def-9316ae1ab5fa)
Oct 09 15:51:04 compute-0 kernel: vlan20: left promiscuous mode
Oct 09 15:51:04 compute-0 NetworkManager[1028]: <info>  [1760025064.1104] device (br-ex)[Open vSwitch Port]: state change: deactivating -> disconnected (reason 'new-activation', managed-type: 'full')
Oct 09 15:51:04 compute-0 NetworkManager[1028]: <info>  [1760025064.1108] device (br-ex)[Open vSwitch Port]: Activation: starting connection 'br-ex-port' (72368789-afd6-43b0-8790-3eb32e3c6759)
Oct 09 15:51:04 compute-0 NetworkManager[1028]: <info>  [1760025064.1120] device (eth1): state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Oct 09 15:51:04 compute-0 NetworkManager[1028]: <info>  [1760025064.1124] device (br-ex)[Open vSwitch Port]: state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Oct 09 15:51:04 compute-0 NetworkManager[1028]: <info>  [1760025064.1128] device (br-ex)[Open vSwitch Port]: state change: prepare -> config (reason 'none', managed-type: 'full')
Oct 09 15:51:04 compute-0 NetworkManager[1028]: <info>  [1760025064.1129] device (br-ex)[Open vSwitch Port]: state change: config -> ip-config (reason 'none', managed-type: 'full')
Oct 09 15:51:04 compute-0 NetworkManager[1028]: <info>  [1760025064.1130] device (br-ex)[Open vSwitch Port]: Activation: connection 'br-ex-port' attached as port, continuing activation
Oct 09 15:51:04 compute-0 NetworkManager[1028]: <info>  [1760025064.1133] device (vlan20)[Open vSwitch Interface]: state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Oct 09 15:51:04 compute-0 NetworkManager[1028]: <info>  [1760025064.1138] device (eth1)[Open vSwitch Port]: state change: deactivating -> disconnected (reason 'new-activation', managed-type: 'full')
Oct 09 15:51:04 compute-0 NetworkManager[1028]: <info>  [1760025064.1142] device (eth1)[Open vSwitch Port]: Activation: starting connection 'eth1-port' (c187801f-f702-42fd-be1e-94ed37a9f893)
Oct 09 15:51:04 compute-0 NetworkManager[1028]: <info>  [1760025064.1144] device (vlan20)[Open vSwitch Port]: state change: deactivating -> disconnected (reason 'new-activation', managed-type: 'full')
Oct 09 15:51:04 compute-0 NetworkManager[1028]: <info>  [1760025064.1148] device (vlan20)[Open vSwitch Port]: Activation: starting connection 'vlan20-port' (16c35609-4f08-4c7e-81ea-65c716bdb76c)
Oct 09 15:51:04 compute-0 NetworkManager[1028]: <info>  [1760025064.1149] device (vlan21)[Open vSwitch Port]: state change: deactivating -> disconnected (reason 'new-activation', managed-type: 'full')
Oct 09 15:51:04 compute-0 NetworkManager[1028]: <info>  [1760025064.1153] device (vlan21)[Open vSwitch Port]: Activation: starting connection 'vlan21-port' (0bb82e47-0723-4e2b-8243-39fd125847a6)
Oct 09 15:51:04 compute-0 NetworkManager[1028]: <info>  [1760025064.1155] device (vlan22)[Open vSwitch Port]: state change: deactivating -> disconnected (reason 'new-activation', managed-type: 'full')
Oct 09 15:51:04 compute-0 NetworkManager[1028]: <info>  [1760025064.1158] device (vlan22)[Open vSwitch Port]: Activation: starting connection 'vlan22-port' (f63ac56b-b648-4958-a9f0-340f0ce75aef)
Oct 09 15:51:04 compute-0 NetworkManager[1028]: <info>  [1760025064.1162] dhcp4 (eth0): state changed new lease, address=38.102.83.110
Oct 09 15:51:04 compute-0 NetworkManager[1028]: <info>  [1760025064.1165] device (br-ex)[Open vSwitch Port]: state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Oct 09 15:51:04 compute-0 NetworkManager[1028]: <info>  [1760025064.1171] device (eth1)[Open vSwitch Port]: state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Oct 09 15:51:04 compute-0 NetworkManager[1028]: <info>  [1760025064.1175] device (eth1)[Open vSwitch Port]: state change: prepare -> config (reason 'none', managed-type: 'full')
Oct 09 15:51:04 compute-0 NetworkManager[1028]: <info>  [1760025064.1176] device (eth1)[Open vSwitch Port]: state change: config -> ip-config (reason 'none', managed-type: 'full')
Oct 09 15:51:04 compute-0 NetworkManager[1028]: <info>  [1760025064.1177] device (eth1)[Open vSwitch Port]: Activation: connection 'eth1-port' attached as port, continuing activation
Oct 09 15:51:04 compute-0 NetworkManager[1028]: <info>  [1760025064.1179] device (vlan20)[Open vSwitch Port]: state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Oct 09 15:51:04 compute-0 NetworkManager[1028]: <info>  [1760025064.1181] device (vlan20)[Open vSwitch Port]: state change: prepare -> config (reason 'none', managed-type: 'full')
Oct 09 15:51:04 compute-0 NetworkManager[1028]: <info>  [1760025064.1183] device (vlan20)[Open vSwitch Port]: state change: config -> ip-config (reason 'none', managed-type: 'full')
Oct 09 15:51:04 compute-0 NetworkManager[1028]: <info>  [1760025064.1185] device (vlan20)[Open vSwitch Port]: Activation: connection 'vlan20-port' attached as port, continuing activation
Oct 09 15:51:04 compute-0 NetworkManager[1028]: <info>  [1760025064.1186] device (vlan21)[Open vSwitch Port]: state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Oct 09 15:51:04 compute-0 NetworkManager[1028]: <info>  [1760025064.1189] device (vlan21)[Open vSwitch Port]: state change: prepare -> config (reason 'none', managed-type: 'full')
Oct 09 15:51:04 compute-0 NetworkManager[1028]: <info>  [1760025064.1190] device (vlan21)[Open vSwitch Port]: state change: config -> ip-config (reason 'none', managed-type: 'full')
Oct 09 15:51:04 compute-0 NetworkManager[1028]: <info>  [1760025064.1192] device (vlan21)[Open vSwitch Port]: Activation: connection 'vlan21-port' attached as port, continuing activation
Oct 09 15:51:04 compute-0 NetworkManager[1028]: <info>  [1760025064.1193] device (vlan22)[Open vSwitch Port]: state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Oct 09 15:51:04 compute-0 NetworkManager[1028]: <info>  [1760025064.1196] device (vlan22)[Open vSwitch Port]: state change: prepare -> config (reason 'none', managed-type: 'full')
Oct 09 15:51:04 compute-0 NetworkManager[1028]: <info>  [1760025064.1198] device (vlan22)[Open vSwitch Port]: state change: config -> ip-config (reason 'none', managed-type: 'full')
Oct 09 15:51:04 compute-0 NetworkManager[1028]: <info>  [1760025064.1199] device (vlan22)[Open vSwitch Port]: Activation: connection 'vlan22-port' attached as port, continuing activation
Oct 09 15:51:04 compute-0 NetworkManager[1028]: <info>  [1760025064.1207] policy: set 'System eth0' (eth0) as default for IPv4 routing and DNS
Oct 09 15:51:04 compute-0 kernel: virtio_net virtio5 eth1: left promiscuous mode
Oct 09 15:51:04 compute-0 kernel: vlan22: left promiscuous mode
Oct 09 15:51:04 compute-0 NetworkManager[1028]: <info>  [1760025064.1251] device (br-ex)[Open vSwitch Bridge]: state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Oct 09 15:51:04 compute-0 NetworkManager[1028]: <info>  [1760025064.1286] device (eth1): state change: prepare -> config (reason 'none', managed-type: 'full')
Oct 09 15:51:04 compute-0 NetworkManager[1028]: <info>  [1760025064.1296] device (eth1)[Open vSwitch Port]: state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Oct 09 15:51:04 compute-0 NetworkManager[1028]: <info>  [1760025064.1302] device (vlan20)[Open vSwitch Port]: state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Oct 09 15:51:04 compute-0 NetworkManager[1028]: <info>  [1760025064.1308] device (vlan21)[Open vSwitch Port]: state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Oct 09 15:51:04 compute-0 NetworkManager[1028]: <info>  [1760025064.1315] device (vlan22)[Open vSwitch Port]: state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Oct 09 15:51:04 compute-0 NetworkManager[1028]: <info>  [1760025064.1327] device (eth0): state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Oct 09 15:51:04 compute-0 NetworkManager[1028]: <info>  [1760025064.1336] device (br-ex)[Open vSwitch Bridge]: state change: secondaries -> activated (reason 'none', managed-type: 'full')
Oct 09 15:51:04 compute-0 systemd[1]: Started GSSAPI Proxy Daemon.
Oct 09 15:51:04 compute-0 NetworkManager[1028]: <info>  [1760025064.1349] device (br-ex)[Open vSwitch Bridge]: Activation: successful, device activated.
Oct 09 15:51:04 compute-0 NetworkManager[1028]: <info>  [1760025064.1357] device (vlan22)[Open vSwitch Interface]: state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Oct 09 15:51:04 compute-0 NetworkManager[1028]: <info>  [1760025064.1362] policy: auto-activating connection 'vlan20-if' (4801f3f1-b086-4044-9c96-6176f9fd5dad)
Oct 09 15:51:04 compute-0 kernel: ovs-system: left promiscuous mode
Oct 09 15:51:04 compute-0 systemd[1]: RPC security service for NFS client and server was skipped because of an unmet condition check (ConditionPathExists=/etc/krb5.keytab).
Oct 09 15:51:04 compute-0 NetworkManager[1028]: <info>  [1760025064.1367] policy: auto-activating connection 'vlan21-if' (36c4cc2e-8a31-445f-b45f-cdce710c4578)
Oct 09 15:51:04 compute-0 systemd[1]: Reached target NFS client services.
Oct 09 15:51:04 compute-0 NetworkManager[1028]: <info>  [1760025064.1370] policy: auto-activating connection 'vlan22-if' (62c58123-d944-4924-98de-47fd6223cda1)
Oct 09 15:51:04 compute-0 NetworkManager[1028]: <info>  [1760025064.1373] device (br-ex)[Open vSwitch Port]: state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Oct 09 15:51:04 compute-0 systemd[1]: Reached target Preparation for Remote File Systems.
Oct 09 15:51:04 compute-0 systemd[1]: Reached target Remote File Systems.
Oct 09 15:51:04 compute-0 NetworkManager[1028]: <info>  [1760025064.1416] device (eth1): state change: config -> ip-config (reason 'none', managed-type: 'full')
Oct 09 15:51:04 compute-0 systemd[1]: TPM2 PCR Barrier (User) was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/StubPcrKernelImage-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f).
Oct 09 15:51:04 compute-0 NetworkManager[1028]: <info>  [1760025064.1426] device (vlan20)[Open vSwitch Interface]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Oct 09 15:51:04 compute-0 NetworkManager[1028]: <info>  [1760025064.1431] device (vlan20)[Open vSwitch Interface]: state change: unavailable -> disconnected (reason 'user-requested', managed-type: 'full')
Oct 09 15:51:04 compute-0 NetworkManager[1028]: <info>  [1760025064.1437] device (vlan20)[Open vSwitch Interface]: Activation: starting connection 'vlan20-if' (4801f3f1-b086-4044-9c96-6176f9fd5dad)
Oct 09 15:51:04 compute-0 NetworkManager[1028]: <info>  [1760025064.1440] device (vlan21)[Open vSwitch Interface]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Oct 09 15:51:04 compute-0 NetworkManager[1028]: <info>  [1760025064.1445] device (vlan21)[Open vSwitch Interface]: state change: unavailable -> disconnected (reason 'user-requested', managed-type: 'full')
Oct 09 15:51:04 compute-0 NetworkManager[1028]: <info>  [1760025064.1450] device (vlan21)[Open vSwitch Interface]: Activation: starting connection 'vlan21-if' (36c4cc2e-8a31-445f-b45f-cdce710c4578)
Oct 09 15:51:04 compute-0 NetworkManager[1028]: <info>  [1760025064.1453] device (vlan22)[Open vSwitch Interface]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Oct 09 15:51:04 compute-0 NetworkManager[1028]: <info>  [1760025064.1458] device (vlan22)[Open vSwitch Interface]: state change: unavailable -> disconnected (reason 'user-requested', managed-type: 'full')
Oct 09 15:51:04 compute-0 NetworkManager[1028]: <info>  [1760025064.1464] device (vlan22)[Open vSwitch Interface]: Activation: starting connection 'vlan22-if' (62c58123-d944-4924-98de-47fd6223cda1)
Oct 09 15:51:04 compute-0 NetworkManager[1028]: <info>  [1760025064.1465] device (br-ex)[Open vSwitch Interface]: state change: unavailable -> disconnected (reason 'none', managed-type: 'full')
Oct 09 15:51:04 compute-0 NetworkManager[1028]: <info>  [1760025064.1468] device (br-ex)[Open vSwitch Port]: state change: secondaries -> activated (reason 'none', managed-type: 'full')
Oct 09 15:51:04 compute-0 NetworkManager[1028]: <info>  [1760025064.1474] device (br-ex)[Open vSwitch Port]: Activation: successful, device activated.
Oct 09 15:51:04 compute-0 NetworkManager[1028]: <info>  [1760025064.1478] device (eth1)[Open vSwitch Port]: state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Oct 09 15:51:04 compute-0 NetworkManager[1028]: <info>  [1760025064.1481] device (vlan20)[Open vSwitch Port]: state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Oct 09 15:51:04 compute-0 NetworkManager[1028]: <info>  [1760025064.1483] device (vlan21)[Open vSwitch Port]: state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Oct 09 15:51:04 compute-0 NetworkManager[1028]: <info>  [1760025064.1486] device (vlan22)[Open vSwitch Port]: state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Oct 09 15:51:04 compute-0 NetworkManager[1028]: <info>  [1760025064.1488] device (eth0): state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Oct 09 15:51:04 compute-0 NetworkManager[1028]: <info>  [1760025064.1492] device (eth1): Activation: connection 'ci-private-network' attached as port, continuing activation
Oct 09 15:51:04 compute-0 NetworkManager[1028]: <info>  [1760025064.1496] device (eth1): state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Oct 09 15:51:04 compute-0 NetworkManager[1028]: <info>  [1760025064.1502] device (vlan20)[Open vSwitch Interface]: state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Oct 09 15:51:04 compute-0 NetworkManager[1028]: <info>  [1760025064.1507] device (vlan20)[Open vSwitch Interface]: state change: prepare -> config (reason 'none', managed-type: 'full')
Oct 09 15:51:04 compute-0 NetworkManager[1028]: <info>  [1760025064.1510] device (vlan20)[Open vSwitch Interface]: state change: config -> ip-config (reason 'none', managed-type: 'full')
Oct 09 15:51:04 compute-0 NetworkManager[1028]: <info>  [1760025064.1518] device (vlan21)[Open vSwitch Interface]: state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Oct 09 15:51:04 compute-0 NetworkManager[1028]: <info>  [1760025064.1523] device (vlan21)[Open vSwitch Interface]: state change: prepare -> config (reason 'none', managed-type: 'full')
Oct 09 15:51:04 compute-0 NetworkManager[1028]: <info>  [1760025064.1526] device (vlan21)[Open vSwitch Interface]: state change: config -> ip-config (reason 'none', managed-type: 'full')
Oct 09 15:51:04 compute-0 NetworkManager[1028]: <info>  [1760025064.1532] device (vlan22)[Open vSwitch Interface]: state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Oct 09 15:51:04 compute-0 NetworkManager[1028]: <info>  [1760025064.1537] device (vlan22)[Open vSwitch Interface]: state change: prepare -> config (reason 'none', managed-type: 'full')
Oct 09 15:51:04 compute-0 NetworkManager[1028]: <info>  [1760025064.1540] device (vlan22)[Open vSwitch Interface]: state change: config -> ip-config (reason 'none', managed-type: 'full')
Oct 09 15:51:04 compute-0 NetworkManager[1028]: <info>  [1760025064.1545] policy: auto-activating connection 'br-ex-if' (8bb31f04-e6ed-45a6-9ff3-4f7d45ebbd7c)
Oct 09 15:51:04 compute-0 NetworkManager[1028]: <info>  [1760025064.1548] device (eth1)[Open vSwitch Port]: state change: secondaries -> activated (reason 'none', managed-type: 'full')
Oct 09 15:51:04 compute-0 NetworkManager[1028]: <info>  [1760025064.1552] device (eth1)[Open vSwitch Port]: Activation: successful, device activated.
Oct 09 15:51:04 compute-0 NetworkManager[1028]: <info>  [1760025064.1557] device (vlan20)[Open vSwitch Port]: state change: secondaries -> activated (reason 'none', managed-type: 'full')
Oct 09 15:51:04 compute-0 NetworkManager[1028]: <info>  [1760025064.1564] device (vlan20)[Open vSwitch Port]: Activation: successful, device activated.
Oct 09 15:51:04 compute-0 NetworkManager[1028]: <info>  [1760025064.1568] device (vlan21)[Open vSwitch Port]: state change: secondaries -> activated (reason 'none', managed-type: 'full')
Oct 09 15:51:04 compute-0 NetworkManager[1028]: <info>  [1760025064.1572] device (vlan21)[Open vSwitch Port]: Activation: successful, device activated.
Oct 09 15:51:04 compute-0 NetworkManager[1028]: <info>  [1760025064.1576] device (vlan22)[Open vSwitch Port]: state change: secondaries -> activated (reason 'none', managed-type: 'full')
Oct 09 15:51:04 compute-0 NetworkManager[1028]: <info>  [1760025064.1579] device (vlan22)[Open vSwitch Port]: Activation: successful, device activated.
Oct 09 15:51:04 compute-0 NetworkManager[1028]: <info>  [1760025064.1582] device (eth0): state change: secondaries -> activated (reason 'none', managed-type: 'full')
Oct 09 15:51:04 compute-0 NetworkManager[1028]: <info>  [1760025064.1587] device (eth0): Activation: successful, device activated.
Oct 09 15:51:04 compute-0 NetworkManager[1028]: <info>  [1760025064.1592] manager: NetworkManager state is now CONNECTED_GLOBAL
Oct 09 15:51:04 compute-0 kernel: ovs-system: entered promiscuous mode
Oct 09 15:51:04 compute-0 NetworkManager[1028]: <info>  [1760025064.1598] device (vlan20)[Open vSwitch Interface]: Activation: connection 'vlan20-if' attached as port, continuing activation
Oct 09 15:51:04 compute-0 NetworkManager[1028]: <info>  [1760025064.1604] device (br-ex)[Open vSwitch Interface]: Activation: starting connection 'br-ex-if' (8bb31f04-e6ed-45a6-9ff3-4f7d45ebbd7c)
Oct 09 15:51:04 compute-0 NetworkManager[1028]: <info>  [1760025064.1605] device (eth1): state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Oct 09 15:51:04 compute-0 NetworkManager[1028]: <info>  [1760025064.1608] device (vlan21)[Open vSwitch Interface]: Activation: connection 'vlan21-if' attached as port, continuing activation
Oct 09 15:51:04 compute-0 NetworkManager[1028]: <info>  [1760025064.1610] device (br-ex)[Open vSwitch Interface]: state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Oct 09 15:51:04 compute-0 NetworkManager[1028]: <info>  [1760025064.1613] device (br-ex)[Open vSwitch Interface]: state change: prepare -> config (reason 'none', managed-type: 'full')
Oct 09 15:51:04 compute-0 kernel: No such timeout policy "ovs_test_tp"
Oct 09 15:51:04 compute-0 NetworkManager[1028]: <info>  [1760025064.1615] device (br-ex)[Open vSwitch Interface]: state change: config -> ip-config (reason 'none', managed-type: 'full')
Oct 09 15:51:04 compute-0 NetworkManager[1028]: <info>  [1760025064.1620] device (eth1): state change: secondaries -> activated (reason 'none', managed-type: 'full')
Oct 09 15:51:04 compute-0 kernel: virtio_net virtio5 eth1: entered promiscuous mode
Oct 09 15:51:04 compute-0 NetworkManager[1028]: <info>  [1760025064.1632] device (eth1): Activation: successful, device activated.
Oct 09 15:51:04 compute-0 NetworkManager[1028]: <info>  [1760025064.1642] device (vlan22)[Open vSwitch Interface]: Activation: connection 'vlan22-if' attached as port, continuing activation
Oct 09 15:51:04 compute-0 NetworkManager[1028]: <info>  [1760025064.1677] device (br-ex)[Open vSwitch Interface]: Activation: connection 'br-ex-if' attached as port, continuing activation
Oct 09 15:51:04 compute-0 kernel: vlan20: entered promiscuous mode
Oct 09 15:51:04 compute-0 NetworkManager[1028]: <info>  [1760025064.1786] device (vlan20)[Open vSwitch Interface]: carrier: link connected
Oct 09 15:51:04 compute-0 NetworkManager[1028]: <info>  [1760025064.1798] device (vlan20)[Open vSwitch Interface]: state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Oct 09 15:51:04 compute-0 NetworkManager[1028]: <info>  [1760025064.1818] device (vlan20)[Open vSwitch Interface]: state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Oct 09 15:51:04 compute-0 NetworkManager[1028]: <info>  [1760025064.1819] device (vlan20)[Open vSwitch Interface]: state change: secondaries -> activated (reason 'none', managed-type: 'full')
Oct 09 15:51:04 compute-0 NetworkManager[1028]: <info>  [1760025064.1824] device (vlan20)[Open vSwitch Interface]: Activation: successful, device activated.
Oct 09 15:51:04 compute-0 kernel: br-ex: entered promiscuous mode
Oct 09 15:51:04 compute-0 kernel: vlan22: entered promiscuous mode
Oct 09 15:51:04 compute-0 kernel: vlan21: entered promiscuous mode
Oct 09 15:51:04 compute-0 NetworkManager[1028]: <info>  [1760025064.1978] device (br-ex)[Open vSwitch Interface]: carrier: link connected
Oct 09 15:51:04 compute-0 NetworkManager[1028]: <info>  [1760025064.1987] device (br-ex)[Open vSwitch Interface]: state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Oct 09 15:51:04 compute-0 NetworkManager[1028]: <info>  [1760025064.2020] device (vlan22)[Open vSwitch Interface]: carrier: link connected
Oct 09 15:51:04 compute-0 NetworkManager[1028]: <info>  [1760025064.2021] device (br-ex)[Open vSwitch Interface]: state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Oct 09 15:51:04 compute-0 NetworkManager[1028]: <info>  [1760025064.2023] device (br-ex)[Open vSwitch Interface]: state change: secondaries -> activated (reason 'none', managed-type: 'full')
Oct 09 15:51:04 compute-0 NetworkManager[1028]: <info>  [1760025064.2027] device (br-ex)[Open vSwitch Interface]: Activation: successful, device activated.
Oct 09 15:51:04 compute-0 NetworkManager[1028]: <info>  [1760025064.2037] device (vlan22)[Open vSwitch Interface]: state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Oct 09 15:51:04 compute-0 NetworkManager[1028]: <info>  [1760025064.2065] device (vlan22)[Open vSwitch Interface]: state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Oct 09 15:51:04 compute-0 NetworkManager[1028]: <info>  [1760025064.2066] device (vlan22)[Open vSwitch Interface]: state change: secondaries -> activated (reason 'none', managed-type: 'full')
Oct 09 15:51:04 compute-0 NetworkManager[1028]: <info>  [1760025064.2075] device (vlan22)[Open vSwitch Interface]: Activation: successful, device activated.
Oct 09 15:51:04 compute-0 NetworkManager[1028]: <info>  [1760025064.2083] device (vlan21)[Open vSwitch Interface]: carrier: link connected
Oct 09 15:51:04 compute-0 NetworkManager[1028]: <info>  [1760025064.2091] device (vlan21)[Open vSwitch Interface]: state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Oct 09 15:51:04 compute-0 NetworkManager[1028]: <info>  [1760025064.2115] device (vlan21)[Open vSwitch Interface]: state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Oct 09 15:51:04 compute-0 NetworkManager[1028]: <info>  [1760025064.2116] device (vlan21)[Open vSwitch Interface]: state change: secondaries -> activated (reason 'none', managed-type: 'full')
Oct 09 15:51:04 compute-0 NetworkManager[1028]: <info>  [1760025064.2121] device (vlan21)[Open vSwitch Interface]: Activation: successful, device activated.
Oct 09 15:51:04 compute-0 NetworkManager[1028]: <info>  [1760025064.2126] manager: startup complete
Oct 09 15:51:04 compute-0 systemd[1]: Finished Network Manager Wait Online.
Oct 09 15:51:04 compute-0 systemd[1]: Starting Cloud-init: Network Stage...
Oct 09 15:51:04 compute-0 systemd[1]: Starting Authorization Manager...
Oct 09 15:51:04 compute-0 systemd[1]: Started Dynamic System Tuning Daemon.
Oct 09 15:51:04 compute-0 polkitd[1187]: Started polkitd version 0.117
Oct 09 15:51:04 compute-0 polkitd[1187]: Loading rules from directory /etc/polkit-1/rules.d
Oct 09 15:51:04 compute-0 polkitd[1187]: Loading rules from directory /usr/share/polkit-1/rules.d
Oct 09 15:51:04 compute-0 polkitd[1187]: Finished loading, compiling and executing 3 rules
Oct 09 15:51:04 compute-0 polkitd[1187]: Acquired the name org.freedesktop.PolicyKit1 on the system bus
Oct 09 15:51:04 compute-0 systemd[1]: Started Authorization Manager.
Oct 09 15:51:04 compute-0 cloud-init[1248]: Cloud-init v. 24.4-7.el9 running 'init' at Thu, 09 Oct 2025 15:51:04 +0000. Up 8.17 seconds.
Oct 09 15:51:04 compute-0 cloud-init[1248]: ci-info: +++++++++++++++++++++++++++++++++++Net device info+++++++++++++++++++++++++++++++++++
Oct 09 15:51:04 compute-0 cloud-init[1248]: ci-info: +------------+-------+-----------------+---------------+--------+-------------------+
Oct 09 15:51:04 compute-0 cloud-init[1248]: ci-info: |   Device   |   Up  |     Address     |      Mask     | Scope  |     Hw-Address    |
Oct 09 15:51:04 compute-0 cloud-init[1248]: ci-info: +------------+-------+-----------------+---------------+--------+-------------------+
Oct 09 15:51:04 compute-0 cloud-init[1248]: ci-info: |   br-ex    |  True | 192.168.122.100 | 255.255.255.0 | global | fa:16:3e:58:15:49 |
Oct 09 15:51:04 compute-0 cloud-init[1248]: ci-info: |    eth0    |  True |  38.102.83.110  | 255.255.255.0 | global | fa:16:3e:3a:81:b8 |
Oct 09 15:51:04 compute-0 cloud-init[1248]: ci-info: |    eth1    |  True |        .        |       .       |   .    | fa:16:3e:58:15:49 |
Oct 09 15:51:04 compute-0 cloud-init[1248]: ci-info: |     lo     |  True |    127.0.0.1    |   255.0.0.0   |  host  |         .         |
Oct 09 15:51:04 compute-0 cloud-init[1248]: ci-info: |     lo     |  True |     ::1/128     |       .       |  host  |         .         |
Oct 09 15:51:04 compute-0 cloud-init[1248]: ci-info: | ovs-system | False |        .        |       .       |   .    | de:49:71:f0:1f:69 |
Oct 09 15:51:04 compute-0 cloud-init[1248]: ci-info: |   vlan20   |  True |   172.17.0.101  | 255.255.255.0 | global | c2:40:fe:21:c4:7d |
Oct 09 15:51:04 compute-0 cloud-init[1248]: ci-info: |   vlan21   |  True |   172.18.0.101  | 255.255.255.0 | global | 12:f0:f4:60:49:4a |
Oct 09 15:51:04 compute-0 cloud-init[1248]: ci-info: |   vlan22   |  True |   172.19.0.101  | 255.255.255.0 | global | ea:ba:9c:c5:9d:13 |
Oct 09 15:51:04 compute-0 cloud-init[1248]: ci-info: +------------+-------+-----------------+---------------+--------+-------------------+
Oct 09 15:51:04 compute-0 cloud-init[1248]: ci-info: +++++++++++++++++++++++++++++++++Route IPv4 info+++++++++++++++++++++++++++++++++
Oct 09 15:51:04 compute-0 cloud-init[1248]: ci-info: +-------+-----------------+---------------+-----------------+-----------+-------+
Oct 09 15:51:04 compute-0 cloud-init[1248]: ci-info: | Route |   Destination   |    Gateway    |     Genmask     | Interface | Flags |
Oct 09 15:51:04 compute-0 cloud-init[1248]: ci-info: +-------+-----------------+---------------+-----------------+-----------+-------+
Oct 09 15:51:04 compute-0 cloud-init[1248]: ci-info: |   0   |     0.0.0.0     |  38.102.83.1  |     0.0.0.0     |    eth0   |   UG  |
Oct 09 15:51:04 compute-0 cloud-init[1248]: ci-info: |   1   |   38.102.83.0   |    0.0.0.0    |  255.255.255.0  |    eth0   |   U   |
Oct 09 15:51:04 compute-0 cloud-init[1248]: ci-info: |   2   | 169.254.169.254 | 38.102.83.126 | 255.255.255.255 |    eth0   |  UGH  |
Oct 09 15:51:04 compute-0 cloud-init[1248]: ci-info: |   3   |    172.17.0.0   |    0.0.0.0    |  255.255.255.0  |   vlan20  |   U   |
Oct 09 15:51:04 compute-0 cloud-init[1248]: ci-info: |   4   |    172.18.0.0   |    0.0.0.0    |  255.255.255.0  |   vlan21  |   U   |
Oct 09 15:51:04 compute-0 cloud-init[1248]: ci-info: |   5   |    172.19.0.0   |    0.0.0.0    |  255.255.255.0  |   vlan22  |   U   |
Oct 09 15:51:04 compute-0 cloud-init[1248]: ci-info: |   6   |  192.168.122.0  |    0.0.0.0    |  255.255.255.0  |   br-ex   |   U   |
Oct 09 15:51:04 compute-0 cloud-init[1248]: ci-info: +-------+-----------------+---------------+-----------------+-----------+-------+
Oct 09 15:51:04 compute-0 cloud-init[1248]: ci-info: +++++++++++++++++++Route IPv6 info+++++++++++++++++++
Oct 09 15:51:04 compute-0 cloud-init[1248]: ci-info: +-------+-------------+---------+-----------+-------+
Oct 09 15:51:04 compute-0 cloud-init[1248]: ci-info: | Route | Destination | Gateway | Interface | Flags |
Oct 09 15:51:04 compute-0 cloud-init[1248]: ci-info: +-------+-------------+---------+-----------+-------+
Oct 09 15:51:04 compute-0 cloud-init[1248]: ci-info: |   2   |  multicast  |    ::   |    eth1   |   U   |
Oct 09 15:51:04 compute-0 cloud-init[1248]: ci-info: +-------+-------------+---------+-----------+-------+
Oct 09 15:51:04 compute-0 systemd[1]: Finished Cloud-init: Network Stage.
Oct 09 15:51:04 compute-0 systemd[1]: Reached target Cloud-config availability.
Oct 09 15:51:04 compute-0 systemd[1]: Reached target Network is Online.
Oct 09 15:51:04 compute-0 systemd[1]: Starting Cloud-init: Config Stage...
Oct 09 15:51:04 compute-0 systemd[1]: Starting EDPM Container Shutdown...
Oct 09 15:51:04 compute-0 systemd[1]: Starting Notify NFS peers of a restart...
Oct 09 15:51:04 compute-0 systemd[1]: Starting System Logging Service...
Oct 09 15:51:04 compute-0 systemd[1]: Starting OpenSSH server daemon...
Oct 09 15:51:04 compute-0 sm-notify[1281]: Version 2.5.4 starting
Oct 09 15:51:04 compute-0 systemd[1]: Starting Permit User Sessions...
Oct 09 15:51:04 compute-0 systemd[1]: Finished EDPM Container Shutdown.
Oct 09 15:51:04 compute-0 systemd[1]: Started Notify NFS peers of a restart.
Oct 09 15:51:04 compute-0 systemd[1]: Finished Permit User Sessions.
Oct 09 15:51:04 compute-0 sshd[1283]: Server listening on 0.0.0.0 port 22.
Oct 09 15:51:04 compute-0 sshd[1283]: Server listening on :: port 22.
Oct 09 15:51:04 compute-0 systemd[1]: Started Command Scheduler.
Oct 09 15:51:04 compute-0 systemd[1]: Started Getty on tty1.
Oct 09 15:51:04 compute-0 crond[1285]: (CRON) STARTUP (1.5.7)
Oct 09 15:51:04 compute-0 crond[1285]: (CRON) INFO (Syslog will be used instead of sendmail.)
Oct 09 15:51:04 compute-0 systemd[1]: Started Serial Getty on ttyS0.
Oct 09 15:51:04 compute-0 crond[1285]: (CRON) INFO (RANDOM_DELAY will be scaled with factor 63% if used.)
Oct 09 15:51:04 compute-0 crond[1285]: (CRON) INFO (running with inotify support)
Oct 09 15:51:04 compute-0 systemd[1]: Reached target Login Prompts.
Oct 09 15:51:04 compute-0 systemd[1]: Started OpenSSH server daemon.
Oct 09 15:51:04 compute-0 rsyslogd[1282]: [origin software="rsyslogd" swVersion="8.2506.0-2.el9" x-pid="1282" x-info="https://www.rsyslog.com"] start
Oct 09 15:51:04 compute-0 systemd[1]: Started System Logging Service.
Oct 09 15:51:04 compute-0 systemd[1]: Reached target Multi-User System.
Oct 09 15:51:04 compute-0 systemd[1]: Starting Record Runlevel Change in UTMP...
Oct 09 15:51:04 compute-0 systemd[1]: systemd-update-utmp-runlevel.service: Deactivated successfully.
Oct 09 15:51:04 compute-0 systemd[1]: Finished Record Runlevel Change in UTMP.
Oct 09 15:51:05 compute-0 rsyslogd[1282]: imjournal: journal files changed, reloading...  [v8.2506.0-2.el9 try https://www.rsyslog.com/e/0 ]
Oct 09 15:51:05 compute-0 cloud-init[1295]: Cloud-init v. 24.4-7.el9 running 'modules:config' at Thu, 09 Oct 2025 15:51:05 +0000. Up 8.74 seconds.
Oct 09 15:51:05 compute-0 systemd[1]: Finished Cloud-init: Config Stage.
Oct 09 15:51:05 compute-0 systemd[1]: Starting Cloud-init: Final Stage...
Oct 09 15:51:05 compute-0 cloud-init[1299]: Cloud-init v. 24.4-7.el9 running 'modules:final' at Thu, 09 Oct 2025 15:51:05 +0000. Up 9.14 seconds.
Oct 09 15:51:05 compute-0 cloud-init[1299]: Cloud-init v. 24.4-7.el9 finished at Thu, 09 Oct 2025 15:51:05 +0000. Datasource DataSourceConfigDrive [net,ver=2][source=/dev/sr0].  Up 9.20 seconds
Oct 09 15:51:05 compute-0 systemd[1]: Finished Cloud-init: Final Stage.
Oct 09 15:51:05 compute-0 systemd[1]: Reached target Cloud-init target.
Oct 09 15:51:05 compute-0 systemd[1]: Startup finished in 1.552s (kernel) + 2.487s (initrd) + 5.227s (userspace) = 9.266s.
Oct 09 15:51:12 compute-0 irqbalance[837]: Cannot change IRQ 25 affinity: Operation not permitted
Oct 09 15:51:12 compute-0 irqbalance[837]: IRQ 25 affinity is now unmanaged
Oct 09 15:51:12 compute-0 irqbalance[837]: Cannot change IRQ 31 affinity: Operation not permitted
Oct 09 15:51:12 compute-0 irqbalance[837]: IRQ 31 affinity is now unmanaged
Oct 09 15:51:12 compute-0 irqbalance[837]: Cannot change IRQ 28 affinity: Operation not permitted
Oct 09 15:51:12 compute-0 irqbalance[837]: IRQ 28 affinity is now unmanaged
Oct 09 15:51:12 compute-0 irqbalance[837]: Cannot change IRQ 26 affinity: Operation not permitted
Oct 09 15:51:12 compute-0 irqbalance[837]: IRQ 26 affinity is now unmanaged
Oct 09 15:51:12 compute-0 irqbalance[837]: Cannot change IRQ 32 affinity: Operation not permitted
Oct 09 15:51:12 compute-0 irqbalance[837]: IRQ 32 affinity is now unmanaged
Oct 09 15:51:12 compute-0 irqbalance[837]: Cannot change IRQ 30 affinity: Operation not permitted
Oct 09 15:51:12 compute-0 irqbalance[837]: IRQ 30 affinity is now unmanaged
Oct 09 15:51:12 compute-0 irqbalance[837]: Cannot change IRQ 29 affinity: Operation not permitted
Oct 09 15:51:12 compute-0 irqbalance[837]: IRQ 29 affinity is now unmanaged
Oct 09 15:51:12 compute-0 irqbalance[837]: Cannot change IRQ 27 affinity: Operation not permitted
Oct 09 15:51:12 compute-0 irqbalance[837]: IRQ 27 affinity is now unmanaged
Oct 09 15:51:14 compute-0 systemd[1]: NetworkManager-dispatcher.service: Deactivated successfully.
Oct 09 15:51:34 compute-0 systemd[1]: systemd-hostnamed.service: Deactivated successfully.
Oct 09 15:51:49 compute-0 sshd-session[1305]: Accepted publickey for zuul from 192.168.122.30 port 35038 ssh2: ECDSA SHA256:2Vdz7kVNDZnmAnEBdeIC9De7MGoQwU7bxSCyJABiYXo
Oct 09 15:51:49 compute-0 systemd[1]: Created slice User Slice of UID 1000.
Oct 09 15:51:49 compute-0 systemd[1]: Starting User Runtime Directory /run/user/1000...
Oct 09 15:51:49 compute-0 systemd-logind[841]: New session 1 of user zuul.
Oct 09 15:51:49 compute-0 systemd[1]: Finished User Runtime Directory /run/user/1000.
Oct 09 15:51:49 compute-0 systemd[1]: Starting User Manager for UID 1000...
Oct 09 15:51:49 compute-0 systemd[1309]: pam_unix(systemd-user:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Oct 09 15:51:49 compute-0 systemd[1309]: Queued start job for default target Main User Target.
Oct 09 15:51:49 compute-0 rsyslogd[1282]: imjournal: journal files changed, reloading...  [v8.2506.0-2.el9 try https://www.rsyslog.com/e/0 ]
Oct 09 15:51:49 compute-0 systemd[1309]: Created slice User Application Slice.
Oct 09 15:51:49 compute-0 systemd[1309]: Started Mark boot as successful after the user session has run 2 minutes.
Oct 09 15:51:49 compute-0 systemd[1309]: Started Daily Cleanup of User's Temporary Directories.
Oct 09 15:51:49 compute-0 systemd[1309]: Reached target Paths.
Oct 09 15:51:49 compute-0 systemd[1309]: Reached target Timers.
Oct 09 15:51:49 compute-0 systemd[1309]: Starting D-Bus User Message Bus Socket...
Oct 09 15:51:49 compute-0 systemd[1309]: Starting Create User's Volatile Files and Directories...
Oct 09 15:51:49 compute-0 systemd[1309]: Finished Create User's Volatile Files and Directories.
Oct 09 15:51:49 compute-0 systemd[1309]: Listening on D-Bus User Message Bus Socket.
Oct 09 15:51:49 compute-0 systemd[1309]: Reached target Sockets.
Oct 09 15:51:49 compute-0 systemd[1309]: Reached target Basic System.
Oct 09 15:51:49 compute-0 systemd[1309]: Reached target Main User Target.
Oct 09 15:51:49 compute-0 systemd[1309]: Startup finished in 124ms.
Oct 09 15:51:49 compute-0 systemd[1]: Started User Manager for UID 1000.
Oct 09 15:51:49 compute-0 systemd[1]: Started Session 1 of User zuul.
Oct 09 15:51:49 compute-0 sshd-session[1305]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Oct 09 15:51:49 compute-0 sudo[1351]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-lxqsvzmfblzaanomzsduzlgfismbupul ; cat /proc/sys/kernel/random/boot_id'
Oct 09 15:51:49 compute-0 sudo[1351]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 09 15:51:49 compute-0 sudo[1351]: pam_unix(sudo:session): session closed for user root
Oct 09 15:51:50 compute-0 sudo[1380]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-morbrynlmojnxrjobywuktmremyqjjts ; whoami'
Oct 09 15:51:50 compute-0 sudo[1380]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 09 15:51:50 compute-0 sudo[1380]: pam_unix(sudo:session): session closed for user root
Oct 09 15:51:50 compute-0 sudo[1532]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qhukjukklkklgswlalsslqoxvcnupwva ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760025110.0235357-230-228511724966905/AnsiballZ_file.py'
Oct 09 15:51:50 compute-0 sudo[1532]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 09 15:51:50 compute-0 python3.9[1534]: ansible-ansible.builtin.file Invoked with path=/var/lib/openstack/reboot_required state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 09 15:51:50 compute-0 sudo[1532]: pam_unix(sudo:session): session closed for user root
Oct 09 15:51:51 compute-0 sshd-session[1324]: Connection closed by 192.168.122.30 port 35038
Oct 09 15:51:51 compute-0 sshd-session[1305]: pam_unix(sshd:session): session closed for user zuul
Oct 09 15:51:51 compute-0 systemd[1]: session-1.scope: Deactivated successfully.
Oct 09 15:51:51 compute-0 systemd-logind[841]: Session 1 logged out. Waiting for processes to exit.
Oct 09 15:51:51 compute-0 systemd-logind[841]: Removed session 1.
Oct 09 15:51:56 compute-0 sshd-session[1559]: Accepted publickey for zuul from 192.168.122.30 port 54516 ssh2: ECDSA SHA256:2Vdz7kVNDZnmAnEBdeIC9De7MGoQwU7bxSCyJABiYXo
Oct 09 15:51:56 compute-0 systemd-logind[841]: New session 3 of user zuul.
Oct 09 15:51:56 compute-0 systemd[1]: Started Session 3 of User zuul.
Oct 09 15:51:56 compute-0 sshd-session[1559]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Oct 09 15:51:57 compute-0 python3.9[1712]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Oct 09 15:51:58 compute-0 sudo[1866]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hrsvwttzunqyklymgohorryguunjfzuh ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760025118.2356453-80-200264690010812/AnsiballZ_file.py'
Oct 09 15:51:58 compute-0 sudo[1866]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 09 15:51:59 compute-0 python3.9[1868]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/var/lib/openstack/certs/libvirt/default setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Oct 09 15:51:59 compute-0 sudo[1866]: pam_unix(sudo:session): session closed for user root
Oct 09 15:51:59 compute-0 sudo[2018]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wpvycdlagcmqhwepykkmiwyfgurnhomh ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760025119.1730406-80-149088509584271/AnsiballZ_file.py'
Oct 09 15:51:59 compute-0 sudo[2018]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 09 15:51:59 compute-0 python3.9[2020]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/var/lib/openstack/certs/libvirt/default setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Oct 09 15:51:59 compute-0 sudo[2018]: pam_unix(sudo:session): session closed for user root
Oct 09 15:52:00 compute-0 sudo[2170]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rvejvnpnzvralimymkzoipzkfvbveznt ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760025119.7901855-109-229696223357916/AnsiballZ_stat.py'
Oct 09 15:52:00 compute-0 sudo[2170]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 09 15:52:00 compute-0 python3.9[2172]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/certs/libvirt/default/tls.crt follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 09 15:52:00 compute-0 sudo[2170]: pam_unix(sudo:session): session closed for user root
Oct 09 15:52:01 compute-0 sudo[2293]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xiuzvooepefpmudzjhwcfneokajjzzav ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760025119.7901855-109-229696223357916/AnsiballZ_copy.py'
Oct 09 15:52:01 compute-0 sudo[2293]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 09 15:52:01 compute-0 python3.9[2295]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/certs/libvirt/default/tls.crt group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1760025119.7901855-109-229696223357916/.source.crt _original_basename=compute-0.ctlplane.example.com-tls.crt follow=False checksum=0639fd7594a53e4dfc6a855c89b869bcc9edab28 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 09 15:52:01 compute-0 sudo[2293]: pam_unix(sudo:session): session closed for user root
Oct 09 15:52:01 compute-0 sudo[2445]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ofciindxneceynbozrvswkhpgammrzdj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760025121.2303035-109-126538250530380/AnsiballZ_stat.py'
Oct 09 15:52:01 compute-0 sudo[2445]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 09 15:52:01 compute-0 python3.9[2447]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/certs/libvirt/default/ca.crt follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 09 15:52:01 compute-0 sudo[2445]: pam_unix(sudo:session): session closed for user root
Oct 09 15:52:02 compute-0 sudo[2568]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-opjmfoerjqfmuribpllkhhiqlnudtrzw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760025121.2303035-109-126538250530380/AnsiballZ_copy.py'
Oct 09 15:52:02 compute-0 sudo[2568]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 09 15:52:02 compute-0 python3.9[2570]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/certs/libvirt/default/ca.crt group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1760025121.2303035-109-126538250530380/.source.crt _original_basename=compute-0.ctlplane.example.com-ca.crt follow=False checksum=48af746c7c0f8b5929bfd4e3b4e018a82f622903 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 09 15:52:02 compute-0 sudo[2568]: pam_unix(sudo:session): session closed for user root
Oct 09 15:52:02 compute-0 sudo[2720]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fkosoazwgsqkdekbsrvdxbaynxcfbugv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760025122.3970573-109-46078885581804/AnsiballZ_stat.py'
Oct 09 15:52:02 compute-0 sudo[2720]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 09 15:52:03 compute-0 python3.9[2722]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/certs/libvirt/default/tls.key follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 09 15:52:03 compute-0 sudo[2720]: pam_unix(sudo:session): session closed for user root
Oct 09 15:52:03 compute-0 sudo[2843]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xwvlsbuknbhutuunhqzpzxzblbfrloif ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760025122.3970573-109-46078885581804/AnsiballZ_copy.py'
Oct 09 15:52:03 compute-0 sudo[2843]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 09 15:52:03 compute-0 python3.9[2845]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/certs/libvirt/default/tls.key group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1760025122.3970573-109-46078885581804/.source.key _original_basename=compute-0.ctlplane.example.com-tls.key follow=False checksum=b32472702b9f7234d29ef9305674a6dee2d429aa backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 09 15:52:03 compute-0 sudo[2843]: pam_unix(sudo:session): session closed for user root
Oct 09 15:52:04 compute-0 sudo[2995]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-pntbdudscpurntjofcgupaakwljdikoe ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760025123.585652-200-141914066446275/AnsiballZ_file.py'
Oct 09 15:52:04 compute-0 sudo[2995]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 09 15:52:04 compute-0 python3.9[2997]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/var/lib/openstack/certs/telemetry/default setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Oct 09 15:52:04 compute-0 sudo[2995]: pam_unix(sudo:session): session closed for user root
Oct 09 15:52:04 compute-0 sudo[3147]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hbpnniybxefscpjcpvigdkhsmkptlrbu ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760025124.3006694-200-33505456886271/AnsiballZ_file.py'
Oct 09 15:52:04 compute-0 sudo[3147]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 09 15:52:05 compute-0 python3.9[3149]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/var/lib/openstack/certs/telemetry/default setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Oct 09 15:52:05 compute-0 sudo[3147]: pam_unix(sudo:session): session closed for user root
Oct 09 15:52:05 compute-0 sudo[3299]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dvwhteupqobwckeeamwlquedfzdcuuul ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760025124.9026914-230-232759108470636/AnsiballZ_stat.py'
Oct 09 15:52:05 compute-0 sudo[3299]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 09 15:52:05 compute-0 python3.9[3301]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/certs/telemetry/default/tls.crt follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 09 15:52:05 compute-0 sudo[3299]: pam_unix(sudo:session): session closed for user root
Oct 09 15:52:05 compute-0 sudo[3422]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jxxqgjufwkvvyfoeodlyytxvtxmgeiou ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760025124.9026914-230-232759108470636/AnsiballZ_copy.py'
Oct 09 15:52:05 compute-0 sudo[3422]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 09 15:52:06 compute-0 python3.9[3424]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/certs/telemetry/default/tls.crt group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1760025124.9026914-230-232759108470636/.source.crt _original_basename=compute-0.ctlplane.example.com-tls.crt follow=False checksum=7c63721df9424a360d2ed9340dd033ee59282f0b backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 09 15:52:06 compute-0 sudo[3422]: pam_unix(sudo:session): session closed for user root
Oct 09 15:52:06 compute-0 sudo[3574]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-azlraamtwkxhdvbynbdsvjxlqnamasyy ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760025126.0320005-230-65030648692627/AnsiballZ_stat.py'
Oct 09 15:52:06 compute-0 sudo[3574]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 09 15:52:06 compute-0 python3.9[3576]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/certs/telemetry/default/ca.crt follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 09 15:52:06 compute-0 sudo[3574]: pam_unix(sudo:session): session closed for user root
Oct 09 15:52:07 compute-0 sudo[3697]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-lgdjxanqmtcruxvhnaxgojuniddeuuov ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760025126.0320005-230-65030648692627/AnsiballZ_copy.py'
Oct 09 15:52:07 compute-0 sudo[3697]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 09 15:52:07 compute-0 python3.9[3699]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/certs/telemetry/default/ca.crt group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1760025126.0320005-230-65030648692627/.source.crt _original_basename=compute-0.ctlplane.example.com-ca.crt follow=False checksum=a5268c279264853a7f073bf039fb37aad2c9fb52 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 09 15:52:07 compute-0 sudo[3697]: pam_unix(sudo:session): session closed for user root
Oct 09 15:52:07 compute-0 sudo[3849]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-cmfbbejruidkpverqdgvntjskafdkpbt ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760025127.156488-230-275803589441161/AnsiballZ_stat.py'
Oct 09 15:52:07 compute-0 sudo[3849]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 09 15:52:07 compute-0 python3.9[3851]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/certs/telemetry/default/tls.key follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 09 15:52:07 compute-0 sudo[3849]: pam_unix(sudo:session): session closed for user root
Oct 09 15:52:08 compute-0 sudo[3972]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-yjjsxdnpelojnljmggxgdxefgwlupxcw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760025127.156488-230-275803589441161/AnsiballZ_copy.py'
Oct 09 15:52:08 compute-0 sudo[3972]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 09 15:52:08 compute-0 python3.9[3974]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/certs/telemetry/default/tls.key group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1760025127.156488-230-275803589441161/.source.key _original_basename=compute-0.ctlplane.example.com-tls.key follow=False checksum=b2d749afbf797ef86f74a22d8e9c2739e1df9a82 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 09 15:52:08 compute-0 sudo[3972]: pam_unix(sudo:session): session closed for user root
Oct 09 15:52:08 compute-0 sudo[4124]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vajbpiophwubqfriyhkbolypmbfxxcmx ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760025128.4094365-315-74360906768130/AnsiballZ_file.py'
Oct 09 15:52:08 compute-0 sudo[4124]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 09 15:52:09 compute-0 python3.9[4126]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/var/lib/openstack/certs/neutron-metadata/default setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Oct 09 15:52:09 compute-0 sudo[4124]: pam_unix(sudo:session): session closed for user root
Oct 09 15:52:09 compute-0 sudo[4276]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-cqcmxnjqojeionxeleuvnesslmyfbrbp ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760025129.142711-315-255669761537904/AnsiballZ_file.py'
Oct 09 15:52:09 compute-0 sudo[4276]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 09 15:52:09 compute-0 python3.9[4278]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/var/lib/openstack/certs/neutron-metadata/default setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Oct 09 15:52:09 compute-0 sudo[4276]: pam_unix(sudo:session): session closed for user root
Oct 09 15:52:10 compute-0 sudo[4428]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rhonervhhtqtosaajupnkjydzogymhwn ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760025129.8055308-346-225077481782877/AnsiballZ_stat.py'
Oct 09 15:52:10 compute-0 sudo[4428]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 09 15:52:10 compute-0 python3.9[4430]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/certs/neutron-metadata/default/tls.crt follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 09 15:52:10 compute-0 sudo[4428]: pam_unix(sudo:session): session closed for user root
Oct 09 15:52:10 compute-0 sudo[4551]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-czziahvayjmvtisrzszkcxkuwlkamwpi ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760025129.8055308-346-225077481782877/AnsiballZ_copy.py'
Oct 09 15:52:10 compute-0 sudo[4551]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 09 15:52:11 compute-0 python3.9[4553]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/certs/neutron-metadata/default/tls.crt group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1760025129.8055308-346-225077481782877/.source.crt _original_basename=compute-0.ctlplane.example.com-tls.crt follow=False checksum=e21f9e3699db33739110aae88e5c8675130ac192 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 09 15:52:11 compute-0 sudo[4551]: pam_unix(sudo:session): session closed for user root
Oct 09 15:52:11 compute-0 sudo[4703]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jtcmjnjslvnutdtnqriourpegpcjqtjn ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760025130.8931887-346-174890949633135/AnsiballZ_stat.py'
Oct 09 15:52:11 compute-0 sudo[4703]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 09 15:52:11 compute-0 python3.9[4705]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/certs/neutron-metadata/default/ca.crt follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 09 15:52:11 compute-0 sudo[4703]: pam_unix(sudo:session): session closed for user root
Oct 09 15:52:11 compute-0 sudo[4826]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-uucdikqcdrbqplnxxjlfnfmpjpjuvxat ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760025130.8931887-346-174890949633135/AnsiballZ_copy.py'
Oct 09 15:52:11 compute-0 sudo[4826]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 09 15:52:12 compute-0 python3.9[4828]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/certs/neutron-metadata/default/ca.crt group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1760025130.8931887-346-174890949633135/.source.crt _original_basename=compute-0.ctlplane.example.com-ca.crt follow=False checksum=d4c0cb284627ecb2f1ac67b90d497d7aa001f746 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 09 15:52:12 compute-0 sudo[4826]: pam_unix(sudo:session): session closed for user root
Oct 09 15:52:12 compute-0 sudo[4978]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-avlwglyafpkepprzehselcthqhtbbxve ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760025131.9222462-346-16039190236083/AnsiballZ_stat.py'
Oct 09 15:52:12 compute-0 sudo[4978]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 09 15:52:12 compute-0 python3.9[4980]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/certs/neutron-metadata/default/tls.key follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 09 15:52:12 compute-0 sudo[4978]: pam_unix(sudo:session): session closed for user root
Oct 09 15:52:12 compute-0 sudo[5101]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bdvuzyuxxarvdadeqcpmxgxehrqycpop ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760025131.9222462-346-16039190236083/AnsiballZ_copy.py'
Oct 09 15:52:12 compute-0 sudo[5101]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 09 15:52:13 compute-0 python3.9[5103]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/certs/neutron-metadata/default/tls.key group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1760025131.9222462-346-16039190236083/.source.key _original_basename=compute-0.ctlplane.example.com-tls.key follow=False checksum=d0968006736d1f3363c87aaf1ab4b36875688f3e backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 09 15:52:13 compute-0 sudo[5101]: pam_unix(sudo:session): session closed for user root
Oct 09 15:52:13 compute-0 sudo[5253]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-nwzlcleutttncmvbkjudyhuykklryloi ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760025133.0460024-432-125996693061164/AnsiballZ_file.py'
Oct 09 15:52:13 compute-0 sudo[5253]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 09 15:52:13 compute-0 python3.9[5255]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/var/lib/openstack/certs/ovn/default setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Oct 09 15:52:13 compute-0 sudo[5253]: pam_unix(sudo:session): session closed for user root
Oct 09 15:52:14 compute-0 sudo[5405]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hblyzdqiawvvrzskvkhelbegbfhsxmlv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760025133.6918423-432-97646189705280/AnsiballZ_file.py'
Oct 09 15:52:14 compute-0 sudo[5405]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 09 15:52:14 compute-0 python3.9[5407]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/var/lib/openstack/certs/ovn/default setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Oct 09 15:52:14 compute-0 sudo[5405]: pam_unix(sudo:session): session closed for user root
Oct 09 15:52:15 compute-0 sudo[5557]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-isshifcbgbwqhhrljqebsfiufabyiukd ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760025134.5566342-463-30917652485811/AnsiballZ_stat.py'
Oct 09 15:52:15 compute-0 sudo[5557]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 09 15:52:15 compute-0 python3.9[5559]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/certs/ovn/default/tls.crt follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 09 15:52:15 compute-0 sudo[5557]: pam_unix(sudo:session): session closed for user root
Oct 09 15:52:15 compute-0 sudo[5680]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jnaqrbxmrmzuwhwgiiosutqxufmzgcvn ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760025134.5566342-463-30917652485811/AnsiballZ_copy.py'
Oct 09 15:52:15 compute-0 sudo[5680]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 09 15:52:15 compute-0 python3.9[5682]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/certs/ovn/default/tls.crt group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1760025134.5566342-463-30917652485811/.source.crt _original_basename=compute-0.ctlplane.example.com-tls.crt follow=False checksum=866ce8a0be30852a073d9cc0be4890413ccda788 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 09 15:52:15 compute-0 sudo[5680]: pam_unix(sudo:session): session closed for user root
Oct 09 15:52:16 compute-0 sudo[5832]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-efhpldfnpydzveeugnnncgodgzwqpixl ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760025135.6587296-463-256389107060756/AnsiballZ_stat.py'
Oct 09 15:52:16 compute-0 sudo[5832]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 09 15:52:16 compute-0 python3.9[5834]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/certs/ovn/default/ca.crt follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 09 15:52:16 compute-0 sudo[5832]: pam_unix(sudo:session): session closed for user root
Oct 09 15:52:16 compute-0 sudo[5955]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-cjxscyjqltlxcmzgsjxcjzqswpgoxixz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760025135.6587296-463-256389107060756/AnsiballZ_copy.py'
Oct 09 15:52:16 compute-0 sudo[5955]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 09 15:52:16 compute-0 python3.9[5957]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/certs/ovn/default/ca.crt group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1760025135.6587296-463-256389107060756/.source.crt _original_basename=compute-0.ctlplane.example.com-ca.crt follow=False checksum=d4c0cb284627ecb2f1ac67b90d497d7aa001f746 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 09 15:52:16 compute-0 sudo[5955]: pam_unix(sudo:session): session closed for user root
Oct 09 15:52:17 compute-0 sudo[6107]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rxjthvkzaecjxsrufrwzeyduaquatohp ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760025136.8264432-463-10784065129989/AnsiballZ_stat.py'
Oct 09 15:52:17 compute-0 sudo[6107]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 09 15:52:17 compute-0 python3.9[6109]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/certs/ovn/default/tls.key follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 09 15:52:17 compute-0 sudo[6107]: pam_unix(sudo:session): session closed for user root
Oct 09 15:52:17 compute-0 sudo[6230]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ipbqmrzyavadlpvoaoorivrkhyvckizn ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760025136.8264432-463-10784065129989/AnsiballZ_copy.py'
Oct 09 15:52:17 compute-0 sudo[6230]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 09 15:52:18 compute-0 python3.9[6232]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/certs/ovn/default/tls.key group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1760025136.8264432-463-10784065129989/.source.key _original_basename=compute-0.ctlplane.example.com-tls.key follow=False checksum=806fbdfd2b761e0ab47e6dcbcc013ec481c9a626 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 09 15:52:18 compute-0 sudo[6230]: pam_unix(sudo:session): session closed for user root
Oct 09 15:52:19 compute-0 sudo[6382]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-sbaqgfslnfjhefnoenfwbkcbxwtxxxgk ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760025138.659992-580-129638833178338/AnsiballZ_file.py'
Oct 09 15:52:19 compute-0 sudo[6382]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 09 15:52:19 compute-0 python3.9[6384]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/var/lib/openstack/cacerts/telemetry setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Oct 09 15:52:19 compute-0 sudo[6382]: pam_unix(sudo:session): session closed for user root
Oct 09 15:52:20 compute-0 sudo[6534]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-drfmigdxxjlonivzugxelabmsbnfzpah ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760025139.4801269-598-267146247805380/AnsiballZ_stat.py'
Oct 09 15:52:20 compute-0 sudo[6534]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 09 15:52:20 compute-0 python3.9[6536]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 09 15:52:20 compute-0 sudo[6534]: pam_unix(sudo:session): session closed for user root
Oct 09 15:52:20 compute-0 sudo[6657]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ynfbraqpxvwxwfpfehdpacokqyilgvuf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760025139.4801269-598-267146247805380/AnsiballZ_copy.py'
Oct 09 15:52:20 compute-0 sudo[6657]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 09 15:52:20 compute-0 python3.9[6659]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1760025139.4801269-598-267146247805380/.source.pem _original_basename=tls-ca-bundle.pem follow=False checksum=1676a7865dd6d232d9ec6442ae64ac56e99a1ba9 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 09 15:52:20 compute-0 sudo[6657]: pam_unix(sudo:session): session closed for user root
Oct 09 15:52:21 compute-0 sudo[6809]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fftllxxqtzrzbxpzmwjxyagujikjjvqr ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760025140.588333-626-201990665997207/AnsiballZ_file.py'
Oct 09 15:52:21 compute-0 sudo[6809]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 09 15:52:21 compute-0 python3.9[6811]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/var/lib/openstack/cacerts/libvirt setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Oct 09 15:52:21 compute-0 sudo[6809]: pam_unix(sudo:session): session closed for user root
Oct 09 15:52:21 compute-0 sudo[6961]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ccqimlwzxmbruymyrmbzqbfrvubtuolg ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760025141.212945-642-230263798048751/AnsiballZ_stat.py'
Oct 09 15:52:21 compute-0 sudo[6961]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 09 15:52:21 compute-0 python3.9[6963]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/cacerts/libvirt/tls-ca-bundle.pem follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 09 15:52:21 compute-0 sudo[6961]: pam_unix(sudo:session): session closed for user root
Oct 09 15:52:22 compute-0 sudo[7084]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dajualqolomrwfqyuafviizvyaaalmbn ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760025141.212945-642-230263798048751/AnsiballZ_copy.py'
Oct 09 15:52:22 compute-0 sudo[7084]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 09 15:52:22 compute-0 python3.9[7086]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/cacerts/libvirt/tls-ca-bundle.pem group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1760025141.212945-642-230263798048751/.source.pem _original_basename=tls-ca-bundle.pem follow=False checksum=1676a7865dd6d232d9ec6442ae64ac56e99a1ba9 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 09 15:52:22 compute-0 sudo[7084]: pam_unix(sudo:session): session closed for user root
Oct 09 15:52:22 compute-0 sudo[7236]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dhvbldyzilmvhqemhjazilfxhxbstnzg ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760025142.393154-671-165801110397969/AnsiballZ_file.py'
Oct 09 15:52:22 compute-0 sudo[7236]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 09 15:52:23 compute-0 python3.9[7238]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/var/lib/openstack/cacerts/repo-setup setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Oct 09 15:52:23 compute-0 sudo[7236]: pam_unix(sudo:session): session closed for user root
Oct 09 15:52:23 compute-0 sudo[7388]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fsfnicotnjjosbbkvxqfbpqordicwqmh ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760025142.9819274-686-105448755650560/AnsiballZ_stat.py'
Oct 09 15:52:23 compute-0 sudo[7388]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 09 15:52:23 compute-0 python3.9[7390]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/cacerts/repo-setup/tls-ca-bundle.pem follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 09 15:52:23 compute-0 sudo[7388]: pam_unix(sudo:session): session closed for user root
Oct 09 15:52:24 compute-0 sudo[7511]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-pyzkatldjascviqkydqijetbbozmytxq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760025142.9819274-686-105448755650560/AnsiballZ_copy.py'
Oct 09 15:52:24 compute-0 sudo[7511]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 09 15:52:24 compute-0 python3.9[7513]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/cacerts/repo-setup/tls-ca-bundle.pem group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1760025142.9819274-686-105448755650560/.source.pem _original_basename=tls-ca-bundle.pem follow=False checksum=1676a7865dd6d232d9ec6442ae64ac56e99a1ba9 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 09 15:52:24 compute-0 sudo[7511]: pam_unix(sudo:session): session closed for user root
Oct 09 15:52:24 compute-0 sudo[7663]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ghyzutazwnbddcwzkzwyqisvjlezmuyq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760025144.1330366-719-193381779444767/AnsiballZ_file.py'
Oct 09 15:52:24 compute-0 sudo[7663]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 09 15:52:24 compute-0 python3.9[7665]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/var/lib/openstack/cacerts/neutron-metadata setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Oct 09 15:52:24 compute-0 sudo[7663]: pam_unix(sudo:session): session closed for user root
Oct 09 15:52:25 compute-0 sudo[7815]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-pggooewqatienernrhywzofsreknrasl ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760025144.7253878-734-127368743874281/AnsiballZ_stat.py'
Oct 09 15:52:25 compute-0 sudo[7815]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 09 15:52:25 compute-0 python3.9[7817]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 09 15:52:25 compute-0 sudo[7815]: pam_unix(sudo:session): session closed for user root
Oct 09 15:52:25 compute-0 sudo[7938]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-siykcatuzecfcqopkubseutjpcugdvmn ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760025144.7253878-734-127368743874281/AnsiballZ_copy.py'
Oct 09 15:52:25 compute-0 sudo[7938]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 09 15:52:25 compute-0 python3.9[7940]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1760025144.7253878-734-127368743874281/.source.pem _original_basename=tls-ca-bundle.pem follow=False checksum=1676a7865dd6d232d9ec6442ae64ac56e99a1ba9 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 09 15:52:25 compute-0 sudo[7938]: pam_unix(sudo:session): session closed for user root
Oct 09 15:52:26 compute-0 sudo[8090]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-pbxahjxzwnedbfzhqussawajgejjvcfh ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760025145.8538363-763-30708428671936/AnsiballZ_file.py'
Oct 09 15:52:26 compute-0 sudo[8090]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 09 15:52:26 compute-0 python3.9[8092]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/var/lib/openstack/cacerts/ovn setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Oct 09 15:52:26 compute-0 sudo[8090]: pam_unix(sudo:session): session closed for user root
Oct 09 15:52:27 compute-0 sudo[8242]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fgcpkkcfeuyadbvjccadsxrpghymkxhd ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760025146.4646711-777-138440195410534/AnsiballZ_stat.py'
Oct 09 15:52:27 compute-0 sudo[8242]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 09 15:52:27 compute-0 python3.9[8244]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 09 15:52:27 compute-0 sudo[8242]: pam_unix(sudo:session): session closed for user root
Oct 09 15:52:27 compute-0 sudo[8365]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-aobrlkwnnhtvbgmpqjjolwlornqxojop ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760025146.4646711-777-138440195410534/AnsiballZ_copy.py'
Oct 09 15:52:27 compute-0 sudo[8365]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 09 15:52:27 compute-0 python3.9[8367]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1760025146.4646711-777-138440195410534/.source.pem _original_basename=tls-ca-bundle.pem follow=False checksum=1676a7865dd6d232d9ec6442ae64ac56e99a1ba9 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 09 15:52:27 compute-0 sudo[8365]: pam_unix(sudo:session): session closed for user root
Oct 09 15:52:28 compute-0 sudo[8517]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tndxooajrixisisnxgzuayyajulgazlk ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760025147.5994327-807-95568440846528/AnsiballZ_file.py'
Oct 09 15:52:28 compute-0 sudo[8517]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 09 15:52:28 compute-0 python3.9[8519]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/var/lib/openstack/cacerts/bootstrap setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Oct 09 15:52:28 compute-0 sudo[8517]: pam_unix(sudo:session): session closed for user root
Oct 09 15:52:28 compute-0 sudo[8669]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fevxbcgrwilaoawkrchtubrhkdwmkure ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760025148.2483318-823-150635068156122/AnsiballZ_stat.py'
Oct 09 15:52:28 compute-0 sudo[8669]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 09 15:52:28 compute-0 python3.9[8671]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/cacerts/bootstrap/tls-ca-bundle.pem follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 09 15:52:29 compute-0 sudo[8669]: pam_unix(sudo:session): session closed for user root
Oct 09 15:52:29 compute-0 sudo[8792]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-btfolktivdyeczwhgdodgimdygqyfhoh ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760025148.2483318-823-150635068156122/AnsiballZ_copy.py'
Oct 09 15:52:29 compute-0 sudo[8792]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 09 15:52:29 compute-0 python3.9[8794]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/cacerts/bootstrap/tls-ca-bundle.pem group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1760025148.2483318-823-150635068156122/.source.pem _original_basename=tls-ca-bundle.pem follow=False checksum=1676a7865dd6d232d9ec6442ae64ac56e99a1ba9 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 09 15:52:29 compute-0 sudo[8792]: pam_unix(sudo:session): session closed for user root
Oct 09 15:52:30 compute-0 sudo[8944]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bhcthjszzcbroaptgglnasblwoohepro ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760025149.5517497-855-245197755909957/AnsiballZ_file.py'
Oct 09 15:52:30 compute-0 sudo[8944]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 09 15:52:30 compute-0 python3.9[8946]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/var/lib/openstack/cacerts/nova setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Oct 09 15:52:30 compute-0 sudo[8944]: pam_unix(sudo:session): session closed for user root
Oct 09 15:52:30 compute-0 sudo[9096]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-uzkotcyrtplaixbvapqxmhdhqdzdvmxl ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760025150.4353225-871-214671182821145/AnsiballZ_stat.py'
Oct 09 15:52:30 compute-0 sudo[9096]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 09 15:52:31 compute-0 python3.9[9098]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/cacerts/nova/tls-ca-bundle.pem follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 09 15:52:31 compute-0 sudo[9096]: pam_unix(sudo:session): session closed for user root
Oct 09 15:52:31 compute-0 sudo[9219]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gfgqntmfkltynrdervfmfkzhwrqnagcn ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760025150.4353225-871-214671182821145/AnsiballZ_copy.py'
Oct 09 15:52:31 compute-0 sudo[9219]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 09 15:52:31 compute-0 python3.9[9221]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/cacerts/nova/tls-ca-bundle.pem group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1760025150.4353225-871-214671182821145/.source.pem _original_basename=tls-ca-bundle.pem follow=False checksum=1676a7865dd6d232d9ec6442ae64ac56e99a1ba9 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 09 15:52:31 compute-0 sudo[9219]: pam_unix(sudo:session): session closed for user root
Oct 09 15:52:33 compute-0 sshd-session[1562]: Connection closed by 192.168.122.30 port 54516
Oct 09 15:52:33 compute-0 sshd-session[1559]: pam_unix(sshd:session): session closed for user zuul
Oct 09 15:52:33 compute-0 systemd[1]: session-3.scope: Deactivated successfully.
Oct 09 15:52:33 compute-0 systemd[1]: session-3.scope: Consumed 27.517s CPU time.
Oct 09 15:52:33 compute-0 systemd-logind[841]: Session 3 logged out. Waiting for processes to exit.
Oct 09 15:52:33 compute-0 systemd-logind[841]: Removed session 3.
Oct 09 15:52:38 compute-0 sshd-session[9246]: Accepted publickey for zuul from 192.168.122.30 port 59054 ssh2: ECDSA SHA256:2Vdz7kVNDZnmAnEBdeIC9De7MGoQwU7bxSCyJABiYXo
Oct 09 15:52:38 compute-0 systemd-logind[841]: New session 4 of user zuul.
Oct 09 15:52:38 compute-0 systemd[1]: Started Session 4 of User zuul.
Oct 09 15:52:38 compute-0 sshd-session[9246]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Oct 09 15:52:39 compute-0 python3.9[9399]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Oct 09 15:52:40 compute-0 sudo[9553]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-nflxrzwzaapqyytgnjihlngeitlyrsih ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760025160.0972815-48-159318722361795/AnsiballZ_file.py'
Oct 09 15:52:40 compute-0 sudo[9553]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 09 15:52:41 compute-0 python3.9[9555]: ansible-ansible.builtin.file Invoked with group=zuul mode=0750 owner=zuul path=/var/lib/edpm-config/firewall setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Oct 09 15:52:41 compute-0 sudo[9553]: pam_unix(sudo:session): session closed for user root
Oct 09 15:52:41 compute-0 sudo[9705]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mlhuuecjwuxugsexaaxuhrrkqozlefnt ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760025160.8999062-48-95635232154838/AnsiballZ_file.py'
Oct 09 15:52:41 compute-0 sudo[9705]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 09 15:52:41 compute-0 python3.9[9707]: ansible-ansible.builtin.file Invoked with group=openvswitch owner=openvswitch path=/var/lib/openvswitch/ovn setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None seuser=None serole=None selevel=None attributes=None
Oct 09 15:52:41 compute-0 sudo[9705]: pam_unix(sudo:session): session closed for user root
Oct 09 15:52:42 compute-0 python3.9[9857]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'selinux'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Oct 09 15:52:42 compute-0 sudo[10007]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rgbodoqzzentcgqgnlmcubqynsxdtctr ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760025162.2735608-94-141088525061214/AnsiballZ_seboolean.py'
Oct 09 15:52:42 compute-0 sudo[10007]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 09 15:52:43 compute-0 python3.9[10009]: ansible-ansible.posix.seboolean Invoked with name=virt_sandbox_use_netlink persistent=True state=True ignore_selinux_state=False
Oct 09 15:52:44 compute-0 sudo[10007]: pam_unix(sudo:session): session closed for user root
Oct 09 15:52:45 compute-0 sudo[10166]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zeiwsvaruktbrazgmznoyvbzvtkifyxu ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760025165.0141335-114-181984309203858/AnsiballZ_setup.py'
Oct 09 15:52:45 compute-0 dbus-broker-launch[833]: avc:  op=load_policy lsm=selinux seqno=2 res=1
Oct 09 15:52:45 compute-0 sudo[10166]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 09 15:52:45 compute-0 python3.9[10168]: ansible-ansible.legacy.setup Invoked with filter=['ansible_pkg_mgr'] gather_subset=['!all'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Oct 09 15:52:46 compute-0 sudo[10166]: pam_unix(sudo:session): session closed for user root
Oct 09 15:52:46 compute-0 sudo[10250]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jdyksdoomvlurjjwvtuiiziqbjwgvgdm ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760025165.0141335-114-181984309203858/AnsiballZ_dnf.py'
Oct 09 15:52:46 compute-0 sudo[10250]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 09 15:52:46 compute-0 python3.9[10252]: ansible-ansible.legacy.dnf Invoked with name=['openvswitch'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_repoquery=True install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Oct 09 15:52:48 compute-0 sudo[10250]: pam_unix(sudo:session): session closed for user root
Oct 09 15:52:49 compute-0 sudo[10403]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hvxxtzicshknqezjjlrcotniumaqiycu ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760025168.4016762-138-55863325530243/AnsiballZ_systemd.py'
Oct 09 15:52:49 compute-0 sudo[10403]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 09 15:52:49 compute-0 python3.9[10405]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=openvswitch.service state=started daemon_reload=False daemon_reexec=False scope=system no_block=False force=None
Oct 09 15:52:49 compute-0 sudo[10403]: pam_unix(sudo:session): session closed for user root
Oct 09 15:52:50 compute-0 sudo[10558]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dgfmiieeapucjcfwiqptmrqfmbxprlve ; /usr/bin/python3 /home/zuul/.ansible/tmp/ansible-tmp-1760025169.5485454-154-235924527020406/AnsiballZ_edpm_nftables_snippet.py'
Oct 09 15:52:50 compute-0 sudo[10558]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 09 15:52:50 compute-0 python3[10560]: ansible-osp.edpm.edpm_nftables_snippet Invoked with content=- rule_name: 118 neutron vxlan networks
                                            rule:
                                              proto: udp
                                              dport: 4789
                                          - rule_name: 119 neutron geneve networks
                                            rule:
                                              proto: udp
                                              dport: 6081
                                              state: ["UNTRACKED"]
                                          - rule_name: 120 neutron geneve networks no conntrack
                                            rule:
                                              proto: udp
                                              dport: 6081
                                              table: raw
                                              chain: OUTPUT
                                              jump: NOTRACK
                                              action: append
                                              state: []
                                          - rule_name: 121 neutron geneve networks no conntrack
                                            rule:
                                              proto: udp
                                              dport: 6081
                                              table: raw
                                              chain: PREROUTING
                                              jump: NOTRACK
                                              action: append
                                              state: []
                                           dest=/var/lib/edpm-config/firewall/ovn.yaml state=present
Oct 09 15:52:50 compute-0 sudo[10558]: pam_unix(sudo:session): session closed for user root
Oct 09 15:52:51 compute-0 sudo[10710]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-lktmgmmmtsxqlcdopfcggwsrdtkeluib ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760025170.4798791-172-251826547070486/AnsiballZ_file.py'
Oct 09 15:52:51 compute-0 sudo[10710]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 09 15:52:51 compute-0 python3.9[10712]: ansible-ansible.builtin.file Invoked with group=root mode=0750 owner=root path=/var/lib/edpm-config/firewall state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 09 15:52:51 compute-0 sudo[10710]: pam_unix(sudo:session): session closed for user root
Oct 09 15:52:51 compute-0 sudo[10862]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ypswikzpwejytwcxwitaxmtyyfxoilrt ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760025171.123154-188-276346034620044/AnsiballZ_stat.py'
Oct 09 15:52:51 compute-0 sudo[10862]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 09 15:52:52 compute-0 python3.9[10864]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/firewall/edpm-nftables-base.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 09 15:52:52 compute-0 sudo[10862]: pam_unix(sudo:session): session closed for user root
Oct 09 15:52:52 compute-0 sudo[10940]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jhkapkffgsqycxzkudijqkmdwuhvmdzm ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760025171.123154-188-276346034620044/AnsiballZ_file.py'
Oct 09 15:52:52 compute-0 sudo[10940]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 09 15:52:52 compute-0 python3.9[10942]: ansible-ansible.legacy.file Invoked with mode=0644 dest=/var/lib/edpm-config/firewall/edpm-nftables-base.yaml _original_basename=base-rules.yaml.j2 recurse=False state=file path=/var/lib/edpm-config/firewall/edpm-nftables-base.yaml force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 09 15:52:52 compute-0 sudo[10940]: pam_unix(sudo:session): session closed for user root
Oct 09 15:52:52 compute-0 sudo[11092]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-slgglgofozhsstuasptepahiwhomsxqt ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760025172.336754-212-233125731627700/AnsiballZ_stat.py'
Oct 09 15:52:52 compute-0 sudo[11092]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 09 15:52:53 compute-0 python3.9[11094]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/firewall/edpm-nftables-user-rules.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 09 15:52:53 compute-0 sudo[11092]: pam_unix(sudo:session): session closed for user root
Oct 09 15:52:53 compute-0 sudo[11170]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-elqsdzyotpouxeukrrcjlvmzozwlntgx ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760025172.336754-212-233125731627700/AnsiballZ_file.py'
Oct 09 15:52:53 compute-0 sudo[11170]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 09 15:52:53 compute-0 python3.9[11172]: ansible-ansible.legacy.file Invoked with mode=0644 dest=/var/lib/edpm-config/firewall/edpm-nftables-user-rules.yaml _original_basename=.77jobb2a recurse=False state=file path=/var/lib/edpm-config/firewall/edpm-nftables-user-rules.yaml force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 09 15:52:53 compute-0 sudo[11170]: pam_unix(sudo:session): session closed for user root
Oct 09 15:52:53 compute-0 sudo[11322]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kmhinmevvyjqrymiadfsznqykswmofbq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760025173.3973444-236-6397187834274/AnsiballZ_stat.py'
Oct 09 15:52:53 compute-0 sudo[11322]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 09 15:52:54 compute-0 python3.9[11324]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/iptables.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 09 15:52:54 compute-0 sudo[11322]: pam_unix(sudo:session): session closed for user root
Oct 09 15:52:54 compute-0 sudo[11400]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kippgmstyxdvxjmzaqxtpicgbwcygrgo ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760025173.3973444-236-6397187834274/AnsiballZ_file.py'
Oct 09 15:52:54 compute-0 sudo[11400]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 09 15:52:54 compute-0 python3.9[11402]: ansible-ansible.legacy.file Invoked with group=root mode=0600 owner=root dest=/etc/nftables/iptables.nft _original_basename=iptables.nft recurse=False state=file path=/etc/nftables/iptables.nft force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 09 15:52:54 compute-0 sudo[11400]: pam_unix(sudo:session): session closed for user root
Oct 09 15:52:55 compute-0 sudo[11552]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-skwoivhfaijtfpkbzzjkqlhmfubjbovd ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760025174.5861247-262-240112288784236/AnsiballZ_command.py'
Oct 09 15:52:55 compute-0 sudo[11552]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 09 15:52:55 compute-0 python3.9[11554]: ansible-ansible.legacy.command Invoked with _raw_params=nft -j list ruleset _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 09 15:52:55 compute-0 sudo[11552]: pam_unix(sudo:session): session closed for user root
Oct 09 15:52:56 compute-0 sudo[11705]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-aicckemagdkcbwjfjefuelbjzznrmsfz ; /usr/bin/python3 /home/zuul/.ansible/tmp/ansible-tmp-1760025175.4388425-278-76652738367196/AnsiballZ_edpm_nftables_from_files.py'
Oct 09 15:52:56 compute-0 sudo[11705]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 09 15:52:56 compute-0 python3[11707]: ansible-edpm_nftables_from_files Invoked with src=/var/lib/edpm-config/firewall
Oct 09 15:52:56 compute-0 sudo[11705]: pam_unix(sudo:session): session closed for user root
Oct 09 15:52:56 compute-0 sudo[11857]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kpcytovdwryeealhqapycjdldjrxfzit ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760025176.3666096-294-128965179243444/AnsiballZ_stat.py'
Oct 09 15:52:56 compute-0 sudo[11857]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 09 15:52:57 compute-0 python3.9[11859]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-jumps.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 09 15:52:57 compute-0 sudo[11857]: pam_unix(sudo:session): session closed for user root
Oct 09 15:52:57 compute-0 sudo[11982]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-izqzzvddgtuemkgwkswledmhkgxzncni ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760025176.3666096-294-128965179243444/AnsiballZ_copy.py'
Oct 09 15:52:57 compute-0 sudo[11982]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 09 15:52:57 compute-0 python3.9[11984]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/edpm-jumps.nft group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1760025176.3666096-294-128965179243444/.source.nft follow=False _original_basename=jump-chain.j2 checksum=81c2fc96c23335ffe374f9b064e885d5d971ddf9 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 09 15:52:57 compute-0 sudo[11982]: pam_unix(sudo:session): session closed for user root
Oct 09 15:52:58 compute-0 sudo[12134]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tutgfzdqaktwxfdpjewyqzpgyqjfyswc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760025177.8257773-324-45251035440419/AnsiballZ_stat.py'
Oct 09 15:52:58 compute-0 sudo[12134]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 09 15:52:58 compute-0 python3.9[12136]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-update-jumps.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 09 15:52:58 compute-0 sudo[12134]: pam_unix(sudo:session): session closed for user root
Oct 09 15:52:58 compute-0 sudo[12259]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gbsoxwitgypkrkezitcnnzxujjqxktig ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760025177.8257773-324-45251035440419/AnsiballZ_copy.py'
Oct 09 15:52:58 compute-0 sudo[12259]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 09 15:52:59 compute-0 python3.9[12261]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/edpm-update-jumps.nft group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1760025177.8257773-324-45251035440419/.source.nft follow=False _original_basename=jump-chain.j2 checksum=81c2fc96c23335ffe374f9b064e885d5d971ddf9 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 09 15:52:59 compute-0 sudo[12259]: pam_unix(sudo:session): session closed for user root
Oct 09 15:52:59 compute-0 sudo[12411]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bafujisxniywxfsveoikrdbungshuyqs ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760025179.0538476-354-10715054195139/AnsiballZ_stat.py'
Oct 09 15:52:59 compute-0 sudo[12411]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 09 15:52:59 compute-0 python3.9[12413]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-flushes.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 09 15:52:59 compute-0 sudo[12411]: pam_unix(sudo:session): session closed for user root
Oct 09 15:53:00 compute-0 sudo[12536]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vlukdcywgnocfikupkzbjgaxozdvrgxl ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760025179.0538476-354-10715054195139/AnsiballZ_copy.py'
Oct 09 15:53:00 compute-0 sudo[12536]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 09 15:53:00 compute-0 python3.9[12538]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/edpm-flushes.nft group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1760025179.0538476-354-10715054195139/.source.nft follow=False _original_basename=flush-chain.j2 checksum=4d3ffec49c8eb1a9b80d2f1e8cd64070063a87b4 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 09 15:53:00 compute-0 sudo[12536]: pam_unix(sudo:session): session closed for user root
Oct 09 15:53:00 compute-0 sudo[12688]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hwqhunosmfcwbsixefptkldrztvcsmjg ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760025180.3449483-384-29415899873630/AnsiballZ_stat.py'
Oct 09 15:53:00 compute-0 sudo[12688]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 09 15:53:01 compute-0 python3.9[12690]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-chains.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 09 15:53:01 compute-0 sudo[12688]: pam_unix(sudo:session): session closed for user root
Oct 09 15:53:01 compute-0 sudo[12813]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fdzvddqqgaurrbkgajkfxijqcmudbabn ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760025180.3449483-384-29415899873630/AnsiballZ_copy.py'
Oct 09 15:53:01 compute-0 sudo[12813]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 09 15:53:01 compute-0 python3.9[12815]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/edpm-chains.nft group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1760025180.3449483-384-29415899873630/.source.nft follow=False _original_basename=chains.j2 checksum=298ada419730ec15df17ded0cc50c97a4014a591 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 09 15:53:01 compute-0 sudo[12813]: pam_unix(sudo:session): session closed for user root
Oct 09 15:53:02 compute-0 sudo[12965]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ipmigjbbcptpbisbicoknlhhpiymctxv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760025181.606578-414-236347644977240/AnsiballZ_stat.py'
Oct 09 15:53:02 compute-0 sudo[12965]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 09 15:53:02 compute-0 python3.9[12967]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-rules.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 09 15:53:02 compute-0 sudo[12965]: pam_unix(sudo:session): session closed for user root
Oct 09 15:53:02 compute-0 sudo[13090]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qvrgdhhxpksempxqpkgidcjzmmmynozc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760025181.606578-414-236347644977240/AnsiballZ_copy.py'
Oct 09 15:53:02 compute-0 sudo[13090]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 09 15:53:02 compute-0 python3.9[13092]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/edpm-rules.nft group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1760025181.606578-414-236347644977240/.source.nft follow=False _original_basename=ruleset.j2 checksum=eb691bdb7d792c5f8ff0d719e807fe1c95b09438 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 09 15:53:02 compute-0 sudo[13090]: pam_unix(sudo:session): session closed for user root
Oct 09 15:53:03 compute-0 sudo[13242]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ytxznrphxzzwsahmdosswpbokvpcsien ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760025182.9361587-444-50572583820919/AnsiballZ_file.py'
Oct 09 15:53:03 compute-0 sudo[13242]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 09 15:53:03 compute-0 python3.9[13244]: ansible-ansible.builtin.file Invoked with group=root mode=0600 owner=root path=/etc/nftables/edpm-rules.nft.changed state=touch recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 09 15:53:03 compute-0 sudo[13242]: pam_unix(sudo:session): session closed for user root
Oct 09 15:53:04 compute-0 sudo[13394]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vpuczoihlsqpcftmtlcbmfapkjpgwrfv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760025183.6164873-460-164419611073014/AnsiballZ_command.py'
Oct 09 15:53:04 compute-0 sudo[13394]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 09 15:53:04 compute-0 python3.9[13396]: ansible-ansible.legacy.command Invoked with _raw_params=set -o pipefail; cat /etc/nftables/edpm-chains.nft /etc/nftables/edpm-flushes.nft /etc/nftables/edpm-rules.nft /etc/nftables/edpm-update-jumps.nft /etc/nftables/edpm-jumps.nft | nft -c -f - _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 09 15:53:04 compute-0 sudo[13394]: pam_unix(sudo:session): session closed for user root
Oct 09 15:53:05 compute-0 sudo[13549]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-eannuxkyxeslscboyfvzetazrvckxkix ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760025184.393703-476-59762978993908/AnsiballZ_blockinfile.py'
Oct 09 15:53:05 compute-0 sudo[13549]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 09 15:53:05 compute-0 python3.9[13551]: ansible-ansible.builtin.blockinfile Invoked with backup=False block=include "/etc/nftables/iptables.nft"
                                            include "/etc/nftables/edpm-chains.nft"
                                            include "/etc/nftables/edpm-rules.nft"
                                            include "/etc/nftables/edpm-jumps.nft"
                                             path=/etc/sysconfig/nftables.conf validate=nft -c -f %s state=present marker=# {mark} ANSIBLE MANAGED BLOCK create=False marker_begin=BEGIN marker_end=END append_newline=False prepend_newline=False unsafe_writes=False insertafter=None insertbefore=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 09 15:53:05 compute-0 sudo[13549]: pam_unix(sudo:session): session closed for user root
Oct 09 15:53:05 compute-0 sudo[13701]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mngwqgxoeqdpmgrpzqnjstzmbmrjmigo ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760025185.2887788-494-216963569213402/AnsiballZ_command.py'
Oct 09 15:53:05 compute-0 sudo[13701]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 09 15:53:06 compute-0 python3.9[13703]: ansible-ansible.legacy.command Invoked with _raw_params=nft -f /etc/nftables/edpm-chains.nft _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 09 15:53:06 compute-0 sudo[13701]: pam_unix(sudo:session): session closed for user root
Oct 09 15:53:06 compute-0 sudo[13854]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xbgevejimroqnkwkppuhsuoolznvvkwk ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760025186.0001159-510-93055612553025/AnsiballZ_stat.py'
Oct 09 15:53:06 compute-0 sudo[13854]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 09 15:53:06 compute-0 python3.9[13856]: ansible-ansible.builtin.stat Invoked with path=/etc/nftables/edpm-rules.nft.changed follow=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Oct 09 15:53:06 compute-0 sudo[13854]: pam_unix(sudo:session): session closed for user root
Oct 09 15:53:07 compute-0 sudo[14008]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mpyygbwimgtuiydlntxzzkncdkebyjkf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760025186.6719038-526-8174651240827/AnsiballZ_command.py'
Oct 09 15:53:07 compute-0 sudo[14008]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 09 15:53:07 compute-0 python3.9[14010]: ansible-ansible.legacy.command Invoked with _raw_params=set -o pipefail; cat /etc/nftables/edpm-flushes.nft /etc/nftables/edpm-rules.nft /etc/nftables/edpm-update-jumps.nft | nft -f - _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 09 15:53:07 compute-0 sudo[14008]: pam_unix(sudo:session): session closed for user root
Oct 09 15:53:07 compute-0 sudo[14163]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vwcotnotqcxaityjbvlzbgvrksjtlpkl ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760025187.44557-542-135124774575552/AnsiballZ_file.py'
Oct 09 15:53:07 compute-0 sudo[14163]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 09 15:53:08 compute-0 python3.9[14165]: ansible-ansible.builtin.file Invoked with path=/etc/nftables/edpm-rules.nft.changed state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 09 15:53:08 compute-0 sudo[14163]: pam_unix(sudo:session): session closed for user root
Oct 09 15:53:09 compute-0 python3.9[14315]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'machine'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Oct 09 15:53:10 compute-0 sudo[14466]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-eskdjtcqntuvkrkuvfhcizeoteltgcsp ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760025189.5547607-622-235937617647562/AnsiballZ_command.py'
Oct 09 15:53:10 compute-0 sudo[14466]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 09 15:53:10 compute-0 python3.9[14468]: ansible-ansible.legacy.command Invoked with _raw_params=ovs-vsctl set open . external_ids:hostname=compute-0.ctlplane.example.com external_ids:ovn-bridge=br-int external_ids:ovn-bridge-mappings=datacentre:br-ex external_ids:ovn-chassis-mac-mappings="datacentre:0e:0a:d8:76:c8:90" external_ids:ovn-encap-ip=172.19.0.101 external_ids:ovn-encap-type=geneve external_ids:ovn-encap-tos=0 external_ids:ovn-match-northd-version=False external_ids:ovn-monitor-all=True external_ids:ovn-remote=ssl:ovsdbserver-sb.openstack.svc:6642 external_ids:ovn-remote-probe-interval=60000 external_ids:ovn-ofctrl-wait-before-clear=8000 external_ids:rundir=/var/run/openvswitch 
                                             _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 09 15:53:10 compute-0 ovs-vsctl[14469]: ovs|00001|vsctl|INFO|Called as ovs-vsctl set open . external_ids:hostname=compute-0.ctlplane.example.com external_ids:ovn-bridge=br-int external_ids:ovn-bridge-mappings=datacentre:br-ex external_ids:ovn-chassis-mac-mappings=datacentre:0e:0a:d8:76:c8:90 external_ids:ovn-encap-ip=172.19.0.101 external_ids:ovn-encap-type=geneve external_ids:ovn-encap-tos=0 external_ids:ovn-match-northd-version=False external_ids:ovn-monitor-all=True external_ids:ovn-remote=ssl:ovsdbserver-sb.openstack.svc:6642 external_ids:ovn-remote-probe-interval=60000 external_ids:ovn-ofctrl-wait-before-clear=8000 external_ids:rundir=/var/run/openvswitch
Oct 09 15:53:10 compute-0 sudo[14466]: pam_unix(sudo:session): session closed for user root
Oct 09 15:53:10 compute-0 sudo[14619]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-keyhmhiduprekmrsxbmslgktedqevjpj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760025190.3456116-640-265350189115148/AnsiballZ_command.py'
Oct 09 15:53:10 compute-0 sudo[14619]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 09 15:53:11 compute-0 python3.9[14621]: ansible-ansible.legacy.command Invoked with _raw_params=set -o pipefail
                                            ovs-vsctl show | grep -q "Manager"
                                             _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 09 15:53:11 compute-0 sudo[14619]: pam_unix(sudo:session): session closed for user root
Oct 09 15:53:11 compute-0 sudo[14774]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fmatwpwpebrdrxywuyurglgidiexejyg ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760025190.9871757-656-199226381224527/AnsiballZ_command.py'
Oct 09 15:53:11 compute-0 sudo[14774]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 09 15:53:11 compute-0 python3.9[14776]: ansible-ansible.legacy.command Invoked with _raw_params=ovs-vsctl --timeout=5 --id=@manager -- create Manager target=\"ptcp:********@manager
                                             _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 09 15:53:11 compute-0 ovs-vsctl[14777]: ovs|00001|vsctl|INFO|Called as ovs-vsctl --timeout=5 --id=@manager -- create Manager "target=\"ptcp:6640:127.0.0.1\"" -- add Open_vSwitch . manager_options @manager
Oct 09 15:53:11 compute-0 sudo[14774]: pam_unix(sudo:session): session closed for user root
Oct 09 15:53:12 compute-0 python3.9[14927]: ansible-ansible.builtin.stat Invoked with path=/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem follow=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Oct 09 15:53:12 compute-0 sudo[15079]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ifoindxgznkanjkibbrpzcawcacvounl ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760025192.4084992-690-56606953005083/AnsiballZ_file.py'
Oct 09 15:53:12 compute-0 sudo[15079]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 09 15:53:13 compute-0 python3.9[15081]: ansible-ansible.builtin.file Invoked with path=/var/local/libexec recurse=True setype=container_file_t state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Oct 09 15:53:13 compute-0 sudo[15079]: pam_unix(sudo:session): session closed for user root
Oct 09 15:53:13 compute-0 sudo[15231]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-namlbmgispdaaidohjgtykknqssefhnb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760025193.1065576-706-239778149247291/AnsiballZ_stat.py'
Oct 09 15:53:13 compute-0 sudo[15231]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 09 15:53:13 compute-0 python3.9[15233]: ansible-ansible.legacy.stat Invoked with path=/var/local/libexec/edpm-container-shutdown follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 09 15:53:13 compute-0 sudo[15231]: pam_unix(sudo:session): session closed for user root
Oct 09 15:53:14 compute-0 sudo[15309]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wmhczciyxquuewvxumsohrdygybrsqsr ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760025193.1065576-706-239778149247291/AnsiballZ_file.py'
Oct 09 15:53:14 compute-0 sudo[15309]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 09 15:53:14 compute-0 python3.9[15311]: ansible-ansible.legacy.file Invoked with group=root mode=0700 owner=root setype=container_file_t dest=/var/local/libexec/edpm-container-shutdown _original_basename=edpm-container-shutdown recurse=False state=file path=/var/local/libexec/edpm-container-shutdown force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Oct 09 15:53:14 compute-0 sudo[15309]: pam_unix(sudo:session): session closed for user root
Oct 09 15:53:14 compute-0 sudo[15461]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vjrlzkycksrfqcferexdrjkwtwyywqve ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760025194.2803733-706-178777231798097/AnsiballZ_stat.py'
Oct 09 15:53:14 compute-0 sudo[15461]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 09 15:53:15 compute-0 python3.9[15463]: ansible-ansible.legacy.stat Invoked with path=/var/local/libexec/edpm-start-podman-container follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 09 15:53:15 compute-0 sudo[15461]: pam_unix(sudo:session): session closed for user root
Oct 09 15:53:15 compute-0 sudo[15539]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-uwgcijoecrwckylpucjimvizfabxediq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760025194.2803733-706-178777231798097/AnsiballZ_file.py'
Oct 09 15:53:15 compute-0 sudo[15539]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 09 15:53:15 compute-0 python3.9[15541]: ansible-ansible.legacy.file Invoked with group=root mode=0700 owner=root setype=container_file_t dest=/var/local/libexec/edpm-start-podman-container _original_basename=edpm-start-podman-container recurse=False state=file path=/var/local/libexec/edpm-start-podman-container force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Oct 09 15:53:15 compute-0 sudo[15539]: pam_unix(sudo:session): session closed for user root
Oct 09 15:53:16 compute-0 sudo[15691]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-yxzxwijumekavactolzwmlfwzxrmjtjb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760025195.5052981-752-212335741044498/AnsiballZ_file.py'
Oct 09 15:53:16 compute-0 sudo[15691]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 09 15:53:16 compute-0 python3.9[15693]: ansible-ansible.builtin.file Invoked with mode=420 path=/etc/systemd/system-preset state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 09 15:53:16 compute-0 sudo[15691]: pam_unix(sudo:session): session closed for user root
Oct 09 15:53:16 compute-0 sudo[15843]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xdswjeoocuxwlpaxpivhimvrylttkcxq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760025196.182724-768-124649849984868/AnsiballZ_stat.py'
Oct 09 15:53:16 compute-0 sudo[15843]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 09 15:53:16 compute-0 python3.9[15845]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/edpm-container-shutdown.service follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 09 15:53:16 compute-0 sudo[15843]: pam_unix(sudo:session): session closed for user root
Oct 09 15:53:17 compute-0 sudo[15921]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-oixgexrwergoukqpdcizgkdgtqscwceb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760025196.182724-768-124649849984868/AnsiballZ_file.py'
Oct 09 15:53:17 compute-0 sudo[15921]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 09 15:53:17 compute-0 python3.9[15923]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system/edpm-container-shutdown.service _original_basename=edpm-container-shutdown-service recurse=False state=file path=/etc/systemd/system/edpm-container-shutdown.service force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 09 15:53:17 compute-0 sudo[15921]: pam_unix(sudo:session): session closed for user root
Oct 09 15:53:17 compute-0 sudo[16073]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ynmhqahbpsvysrmyeszxbusgamyejzts ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760025197.343823-792-210016446245380/AnsiballZ_stat.py'
Oct 09 15:53:17 compute-0 sudo[16073]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 09 15:53:18 compute-0 python3.9[16075]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system-preset/91-edpm-container-shutdown.preset follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 09 15:53:18 compute-0 sudo[16073]: pam_unix(sudo:session): session closed for user root
Oct 09 15:53:18 compute-0 sudo[16151]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hznuhhphucnpczuvpvlvuxyxvjziuagt ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760025197.343823-792-210016446245380/AnsiballZ_file.py'
Oct 09 15:53:18 compute-0 sudo[16151]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 09 15:53:18 compute-0 python3.9[16153]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system-preset/91-edpm-container-shutdown.preset _original_basename=91-edpm-container-shutdown-preset recurse=False state=file path=/etc/systemd/system-preset/91-edpm-container-shutdown.preset force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 09 15:53:18 compute-0 sudo[16151]: pam_unix(sudo:session): session closed for user root
Oct 09 15:53:19 compute-0 sudo[16303]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fpklwtufdnokbyuewxxlcmjpuemhawqe ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760025198.5774617-816-168044716243057/AnsiballZ_systemd.py'
Oct 09 15:53:19 compute-0 sudo[16303]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 09 15:53:19 compute-0 python3.9[16305]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True enabled=True name=edpm-container-shutdown state=started daemon_reexec=False scope=system no_block=False force=None masked=None
Oct 09 15:53:19 compute-0 systemd[1]: Reloading.
Oct 09 15:53:19 compute-0 systemd-sysv-generator[16336]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct 09 15:53:19 compute-0 systemd-rc-local-generator[16333]: /etc/rc.d/rc.local is not marked executable, skipping.
Oct 09 15:53:19 compute-0 sudo[16303]: pam_unix(sudo:session): session closed for user root
Oct 09 15:53:20 compute-0 sudo[16493]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-glebyrrprkmsznutuzgyywjpheozmxdm ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760025199.6826713-832-121074506245181/AnsiballZ_stat.py'
Oct 09 15:53:20 compute-0 sudo[16493]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 09 15:53:20 compute-0 python3.9[16495]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/netns-placeholder.service follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 09 15:53:20 compute-0 sudo[16493]: pam_unix(sudo:session): session closed for user root
Oct 09 15:53:20 compute-0 sudo[16571]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rhdqtjoeztxqnlxaktaqqfifbjnjrcke ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760025199.6826713-832-121074506245181/AnsiballZ_file.py'
Oct 09 15:53:20 compute-0 sudo[16571]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 09 15:53:20 compute-0 python3.9[16573]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system/netns-placeholder.service _original_basename=netns-placeholder-service recurse=False state=file path=/etc/systemd/system/netns-placeholder.service force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 09 15:53:20 compute-0 sudo[16571]: pam_unix(sudo:session): session closed for user root
Oct 09 15:53:21 compute-0 sudo[16723]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-uazhipwoybwisrddspvpestgtqwisddg ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760025200.830716-856-53842054368928/AnsiballZ_stat.py'
Oct 09 15:53:21 compute-0 sudo[16723]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 09 15:53:21 compute-0 python3.9[16725]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system-preset/91-netns-placeholder.preset follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 09 15:53:21 compute-0 sudo[16723]: pam_unix(sudo:session): session closed for user root
Oct 09 15:53:21 compute-0 sudo[16801]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-thxjgcmwpvaywtovluakynjcdasaysxn ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760025200.830716-856-53842054368928/AnsiballZ_file.py'
Oct 09 15:53:21 compute-0 sudo[16801]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 09 15:53:22 compute-0 python3.9[16803]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system-preset/91-netns-placeholder.preset _original_basename=91-netns-placeholder-preset recurse=False state=file path=/etc/systemd/system-preset/91-netns-placeholder.preset force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 09 15:53:22 compute-0 sudo[16801]: pam_unix(sudo:session): session closed for user root
Oct 09 15:53:22 compute-0 sudo[16953]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-iosuapsudqvzjvhizyivrpzglltqsjya ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760025202.004834-880-268868407677141/AnsiballZ_systemd.py'
Oct 09 15:53:22 compute-0 sudo[16953]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 09 15:53:22 compute-0 python3.9[16955]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True enabled=True name=netns-placeholder state=started daemon_reexec=False scope=system no_block=False force=None masked=None
Oct 09 15:53:22 compute-0 systemd[1]: Reloading.
Oct 09 15:53:22 compute-0 systemd-rc-local-generator[16984]: /etc/rc.d/rc.local is not marked executable, skipping.
Oct 09 15:53:22 compute-0 systemd-sysv-generator[16989]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct 09 15:53:23 compute-0 systemd[1]: Starting Create netns directory...
Oct 09 15:53:23 compute-0 systemd[1]: run-netns-placeholder.mount: Deactivated successfully.
Oct 09 15:53:23 compute-0 systemd[1]: netns-placeholder.service: Deactivated successfully.
Oct 09 15:53:23 compute-0 systemd[1]: Finished Create netns directory.
Oct 09 15:53:23 compute-0 sudo[16953]: pam_unix(sudo:session): session closed for user root
Oct 09 15:53:23 compute-0 sudo[17147]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rhofhgtllkmsdvptmlznkmznejohuvyn ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760025203.1236043-900-99236346319194/AnsiballZ_file.py'
Oct 09 15:53:23 compute-0 sudo[17147]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 09 15:53:23 compute-0 python3.9[17149]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/openstack/healthchecks setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Oct 09 15:53:23 compute-0 sudo[17147]: pam_unix(sudo:session): session closed for user root
Oct 09 15:53:24 compute-0 sudo[17299]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jssuuvebjnuqeekcrtlxbnprwsfsfdeg ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760025203.8119206-916-44155487600295/AnsiballZ_stat.py'
Oct 09 15:53:24 compute-0 sudo[17299]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 09 15:53:24 compute-0 python3.9[17301]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/healthchecks/ovn_controller/healthcheck follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 09 15:53:24 compute-0 sudo[17299]: pam_unix(sudo:session): session closed for user root
Oct 09 15:53:24 compute-0 sudo[17422]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-reldftmqmvsaoslzbhgbqapszpzlmbsh ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760025203.8119206-916-44155487600295/AnsiballZ_copy.py'
Oct 09 15:53:24 compute-0 sudo[17422]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 09 15:53:24 compute-0 chronyd[847]: Selected source 162.159.200.123 (pool.ntp.org)
Oct 09 15:53:25 compute-0 python3.9[17424]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/healthchecks/ovn_controller/ group=zuul mode=0700 owner=zuul setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1760025203.8119206-916-44155487600295/.source _original_basename=healthcheck follow=False checksum=4098dd010265fabdf5c26b97d169fc4e575ff457 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None attributes=None
Oct 09 15:53:25 compute-0 sudo[17422]: pam_unix(sudo:session): session closed for user root
Oct 09 15:53:25 compute-0 sudo[17574]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rwkiorqffdmuomyhkwmqfjpwfpcmmwys ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760025205.3461912-950-241829850437129/AnsiballZ_file.py'
Oct 09 15:53:25 compute-0 sudo[17574]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 09 15:53:25 compute-0 python3.9[17576]: ansible-ansible.builtin.file Invoked with path=/var/lib/kolla/config_files recurse=True setype=container_file_t state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Oct 09 15:53:25 compute-0 sudo[17574]: pam_unix(sudo:session): session closed for user root
Oct 09 15:53:26 compute-0 sudo[17726]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-nbwfpsventhnomhqkjxwlysadjmiqcpd ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760025206.0026035-966-116572532158168/AnsiballZ_stat.py'
Oct 09 15:53:26 compute-0 sudo[17726]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 09 15:53:26 compute-0 python3.9[17728]: ansible-ansible.legacy.stat Invoked with path=/var/lib/kolla/config_files/ovn_controller.json follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 09 15:53:26 compute-0 sudo[17726]: pam_unix(sudo:session): session closed for user root
Oct 09 15:53:26 compute-0 sudo[17849]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rxscfrkwxcpeotffverxlpuxipcyxclc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760025206.0026035-966-116572532158168/AnsiballZ_copy.py'
Oct 09 15:53:26 compute-0 sudo[17849]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 09 15:53:27 compute-0 python3.9[17851]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/kolla/config_files/ovn_controller.json mode=0600 src=/home/zuul/.ansible/tmp/ansible-tmp-1760025206.0026035-966-116572532158168/.source.json _original_basename=.2em5ksfk follow=False checksum=2328fc98619beeb08ee32b01f15bb43094c10b61 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 09 15:53:27 compute-0 sudo[17849]: pam_unix(sudo:session): session closed for user root
Oct 09 15:53:27 compute-0 sudo[18001]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dpwhxpcuomdpnfxmgluaehampaiswqig ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760025207.1687362-996-190262780178690/AnsiballZ_file.py'
Oct 09 15:53:27 compute-0 sudo[18001]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 09 15:53:27 compute-0 python3.9[18003]: ansible-ansible.builtin.file Invoked with mode=0755 path=/var/lib/edpm-config/container-startup-config/ovn_controller state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 09 15:53:27 compute-0 sudo[18001]: pam_unix(sudo:session): session closed for user root
Oct 09 15:53:28 compute-0 sudo[18153]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ejhvpodwoooacjhctjivnrlqwqeiybbs ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760025207.8898501-1012-121696505619339/AnsiballZ_stat.py'
Oct 09 15:53:28 compute-0 sudo[18153]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 09 15:53:28 compute-0 sudo[18153]: pam_unix(sudo:session): session closed for user root
Oct 09 15:53:28 compute-0 sudo[18276]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-axloubmnipkyagyypowgjpfrbzvofase ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760025207.8898501-1012-121696505619339/AnsiballZ_copy.py'
Oct 09 15:53:28 compute-0 sudo[18276]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 09 15:53:28 compute-0 sudo[18276]: pam_unix(sudo:session): session closed for user root
Oct 09 15:53:29 compute-0 sudo[18428]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vbmgzlospvgaoewpznnjcxxsnvxfeadi ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760025209.2280357-1046-151386819819045/AnsiballZ_container_config_data.py'
Oct 09 15:53:29 compute-0 sudo[18428]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 09 15:53:29 compute-0 python3.9[18430]: ansible-container_config_data Invoked with config_overrides={} config_path=/var/lib/edpm-config/container-startup-config/ovn_controller config_pattern=*.json debug=False
Oct 09 15:53:29 compute-0 sudo[18428]: pam_unix(sudo:session): session closed for user root
Oct 09 15:53:30 compute-0 sudo[18580]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-yzztkaxlykbltjaiulpvqognmilgluvq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760025210.0758014-1064-140184225588989/AnsiballZ_container_config_hash.py'
Oct 09 15:53:30 compute-0 sudo[18580]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 09 15:53:30 compute-0 python3.9[18582]: ansible-container_config_hash Invoked with check_mode=False config_vol_prefix=/var/lib/config-data
Oct 09 15:53:30 compute-0 sudo[18580]: pam_unix(sudo:session): session closed for user root
Oct 09 15:53:31 compute-0 sudo[18732]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xxzasmucysapogyxyeeotzzeodfklsoh ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760025211.166559-1082-129784818004784/AnsiballZ_podman_container_info.py'
Oct 09 15:53:31 compute-0 sudo[18732]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 09 15:53:31 compute-0 python3.9[18734]: ansible-containers.podman.podman_container_info Invoked with executable=podman name=None
Oct 09 15:53:31 compute-0 kernel: evm: overlay not supported
Oct 09 15:53:31 compute-0 systemd[1]: var-lib-containers-storage-overlay-metacopy\x2dcheck3946279890-merged.mount: Deactivated successfully.
Oct 09 15:53:31 compute-0 podman[18735]: 2025-10-09 15:53:31.914390692 +0000 UTC m=+0.115562352 system refresh
Oct 09 15:53:31 compute-0 sudo[18732]: pam_unix(sudo:session): session closed for user root
Oct 09 15:53:32 compute-0 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Oct 09 15:53:33 compute-0 sudo[18901]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-icccxkhqkhoezrywvqajuffbvyccsezs ; /usr/bin/python3 /home/zuul/.ansible/tmp/ansible-tmp-1760025212.4915292-1108-74987089332979/AnsiballZ_edpm_container_manage.py'
Oct 09 15:53:33 compute-0 sudo[18901]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 09 15:53:33 compute-0 python3[18903]: ansible-edpm_container_manage Invoked with concurrency=1 config_dir=/var/lib/edpm-config/container-startup-config/ovn_controller config_id=ovn_controller config_overrides={} config_patterns=*.json log_base_path=/var/log/containers/stdouts debug=False
Oct 09 15:53:33 compute-0 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Oct 09 15:53:33 compute-0 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Oct 09 15:53:33 compute-0 podman[18937]: 2025-10-09 15:53:33.52536417 +0000 UTC m=+0.060437806 container create d615e4f24ff62fbb14be4a63e6da568ec5b22309773c1c66103b6b4de64a8a2f (image=38.102.83.66:5001/podified-master-centos10/openstack-ovn-controller:watcher_latest, name=ovn_controller, tcib_build_tag=watcher_latest, config_id=ovn_controller, org.label-schema.schema-version=1.0, tcib_managed=true, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251007, managed_by=edpm_ansible, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': '38.102.83.66:5001/podified-master-centos10/openstack-ovn-controller:watcher_latest', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.vendor=CentOS, io.buildah.version=1.41.4, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.license=GPLv2)
Oct 09 15:53:33 compute-0 podman[18937]: 2025-10-09 15:53:33.4875584 +0000 UTC m=+0.022632056 image pull 6af723f1aecfe1dfd1b61f3dbaa8401ede4395a376dac8bc472196f01505fea6 38.102.83.66:5001/podified-master-centos10/openstack-ovn-controller:watcher_latest
Oct 09 15:53:33 compute-0 python3[18903]: ansible-edpm_container_manage PODMAN-CONTAINER-DEBUG: podman create --name ovn_controller --conmon-pidfile /run/ovn_controller.pid --env KOLLA_CONFIG_STRATEGY=COPY_ALWAYS --healthcheck-command /openstack/healthcheck --label config_id=ovn_controller --label container_name=ovn_controller --label managed_by=edpm_ansible --label config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': '38.102.83.66:5001/podified-master-centos10/openstack-ovn-controller:watcher_latest', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']} --log-driver journald --log-level info --network host --privileged=True --user root --volume /lib/modules:/lib/modules:ro --volume /run:/run --volume /var/lib/openvswitch/ovn:/run/ovn:shared,z --volume /var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro --volume /var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z --volume /var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z --volume /var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z --volume 
/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z --volume /var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z 38.102.83.66:5001/podified-master-centos10/openstack-ovn-controller:watcher_latest
Oct 09 15:53:33 compute-0 sudo[18901]: pam_unix(sudo:session): session closed for user root
Oct 09 15:53:34 compute-0 sudo[19127]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-iazelpywtmhwarjpgehcvxyhlpggizfk ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760025213.8879647-1124-98180388371713/AnsiballZ_stat.py'
Oct 09 15:53:34 compute-0 sudo[19127]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 09 15:53:34 compute-0 python3.9[19129]: ansible-ansible.builtin.stat Invoked with path=/etc/sysconfig/podman_drop_in follow=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Oct 09 15:53:34 compute-0 sudo[19127]: pam_unix(sudo:session): session closed for user root
Oct 09 15:53:34 compute-0 sudo[19281]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-djnrmegahwhnwnphtdzsovbztdrerzib ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760025214.6631432-1142-25170633694335/AnsiballZ_file.py'
Oct 09 15:53:34 compute-0 sudo[19281]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 09 15:53:35 compute-0 python3.9[19283]: ansible-file Invoked with path=/etc/systemd/system/edpm_ovn_controller.requires state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 09 15:53:35 compute-0 sudo[19281]: pam_unix(sudo:session): session closed for user root
Oct 09 15:53:35 compute-0 sudo[19357]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tjrzyxxfjfjtxqrhrgtdoakqnoercdbk ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760025214.6631432-1142-25170633694335/AnsiballZ_stat.py'
Oct 09 15:53:35 compute-0 sudo[19357]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 09 15:53:35 compute-0 python3.9[19359]: ansible-stat Invoked with path=/etc/systemd/system/edpm_ovn_controller_healthcheck.timer follow=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Oct 09 15:53:35 compute-0 sudo[19357]: pam_unix(sudo:session): session closed for user root
Oct 09 15:53:35 compute-0 sudo[19508]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wnqgixuxiblxecalnfqchojwzeteeiza ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760025215.609403-1142-7926208696834/AnsiballZ_copy.py'
Oct 09 15:53:35 compute-0 sudo[19508]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 09 15:53:36 compute-0 python3.9[19510]: ansible-copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1760025215.609403-1142-7926208696834/source dest=/etc/systemd/system/edpm_ovn_controller.service mode=0644 owner=root group=root backup=False force=True remote_src=False follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 09 15:53:36 compute-0 sudo[19508]: pam_unix(sudo:session): session closed for user root
Oct 09 15:53:36 compute-0 sudo[19584]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hthjnibtixulrrraxlrrrndksaaxtxtj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760025215.609403-1142-7926208696834/AnsiballZ_systemd.py'
Oct 09 15:53:36 compute-0 sudo[19584]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 09 15:53:36 compute-0 python3.9[19586]: ansible-systemd Invoked with daemon_reload=True daemon_reexec=False scope=system no_block=False name=None state=None enabled=None force=None masked=None
Oct 09 15:53:36 compute-0 systemd[1]: Reloading.
Oct 09 15:53:36 compute-0 systemd-rc-local-generator[19610]: /etc/rc.d/rc.local is not marked executable, skipping.
Oct 09 15:53:36 compute-0 systemd-sysv-generator[19613]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct 09 15:53:36 compute-0 sudo[19584]: pam_unix(sudo:session): session closed for user root
Oct 09 15:53:37 compute-0 sudo[19695]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ivifiqooxfsijdrquwkaumsrqmawiqmt ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760025215.609403-1142-7926208696834/AnsiballZ_systemd.py'
Oct 09 15:53:37 compute-0 sudo[19695]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 09 15:53:37 compute-0 python3.9[19697]: ansible-systemd Invoked with state=restarted name=edpm_ovn_controller.service enabled=True daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Oct 09 15:53:37 compute-0 systemd[1]: Reloading.
Oct 09 15:53:37 compute-0 systemd-sysv-generator[19727]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct 09 15:53:37 compute-0 systemd-rc-local-generator[19722]: /etc/rc.d/rc.local is not marked executable, skipping.
Oct 09 15:53:37 compute-0 systemd[1]: Starting ovn_controller container...
Oct 09 15:53:37 compute-0 systemd[1]: Created slice Virtual Machine and Container Slice.
Oct 09 15:53:37 compute-0 systemd[1]: Started libcrun container.
Oct 09 15:53:37 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/953e86986741ac4a4a022eeac462d113dfc50e4797bc84ed203fd6642a04007b/merged/run/ovn supports timestamps until 2038 (0x7fffffff)
Oct 09 15:53:37 compute-0 systemd[1]: Started /usr/bin/podman healthcheck run d615e4f24ff62fbb14be4a63e6da568ec5b22309773c1c66103b6b4de64a8a2f.
Oct 09 15:53:37 compute-0 podman[19737]: 2025-10-09 15:53:37.881150653 +0000 UTC m=+0.148205728 container init d615e4f24ff62fbb14be4a63e6da568ec5b22309773c1c66103b6b4de64a8a2f (image=38.102.83.66:5001/podified-master-centos10/openstack-ovn-controller:watcher_latest, name=ovn_controller, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': '38.102.83.66:5001/podified-master-centos10/openstack-ovn-controller:watcher_latest', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, config_id=ovn_controller, container_name=ovn_controller, tcib_build_tag=watcher_latest, tcib_managed=true, io.buildah.version=1.41.4, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251007, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.vendor=CentOS)
Oct 09 15:53:37 compute-0 podman[19737]: 2025-10-09 15:53:37.907790767 +0000 UTC m=+0.174845812 container start d615e4f24ff62fbb14be4a63e6da568ec5b22309773c1c66103b6b4de64a8a2f (image=38.102.83.66:5001/podified-master-centos10/openstack-ovn-controller:watcher_latest, name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251007, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.schema-version=1.0, config_id=ovn_controller, container_name=ovn_controller, org.label-schema.vendor=CentOS, tcib_build_tag=watcher_latest, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': '38.102.83.66:5001/podified-master-centos10/openstack-ovn-controller:watcher_latest', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.4)
Oct 09 15:53:37 compute-0 edpm-start-podman-container[19737]: ovn_controller
Oct 09 15:53:37 compute-0 ovn_controller[19752]: + sudo -E kolla_set_configs
Oct 09 15:53:37 compute-0 edpm-start-podman-container[19736]: Creating additional drop-in dependency for "ovn_controller" (d615e4f24ff62fbb14be4a63e6da568ec5b22309773c1c66103b6b4de64a8a2f)
Oct 09 15:53:37 compute-0 systemd[1]: Reloading.
Oct 09 15:53:38 compute-0 podman[19757]: 2025-10-09 15:53:38.04152265 +0000 UTC m=+0.124108635 container health_status d615e4f24ff62fbb14be4a63e6da568ec5b22309773c1c66103b6b4de64a8a2f (image=38.102.83.66:5001/podified-master-centos10/openstack-ovn-controller:watcher_latest, name=ovn_controller, health_status=starting, health_failing_streak=1, health_log=, tcib_managed=true, io.buildah.version=1.41.4, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 10 Base Image, managed_by=edpm_ansible, tcib_build_tag=watcher_latest, container_name=ovn_controller, org.label-schema.build-date=20251007, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': '38.102.83.66:5001/podified-master-centos10/openstack-ovn-controller:watcher_latest', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller)
Oct 09 15:53:38 compute-0 systemd-rc-local-generator[19826]: /etc/rc.d/rc.local is not marked executable, skipping.
Oct 09 15:53:38 compute-0 systemd-sysv-generator[19830]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct 09 15:53:38 compute-0 systemd[1]: Started ovn_controller container.
Oct 09 15:53:38 compute-0 systemd[1]: d615e4f24ff62fbb14be4a63e6da568ec5b22309773c1c66103b6b4de64a8a2f-5a978ce9f876958d.service: Main process exited, code=exited, status=1/FAILURE
Oct 09 15:53:38 compute-0 systemd[1]: d615e4f24ff62fbb14be4a63e6da568ec5b22309773c1c66103b6b4de64a8a2f-5a978ce9f876958d.service: Failed with result 'exit-code'.
Oct 09 15:53:38 compute-0 systemd[1]: Created slice User Slice of UID 0.
Oct 09 15:53:38 compute-0 systemd[1]: Starting User Runtime Directory /run/user/0...
Oct 09 15:53:38 compute-0 sudo[19695]: pam_unix(sudo:session): session closed for user root
Oct 09 15:53:38 compute-0 systemd[1]: Finished User Runtime Directory /run/user/0.
Oct 09 15:53:38 compute-0 systemd[1]: Starting User Manager for UID 0...
Oct 09 15:53:38 compute-0 systemd[19839]: pam_unix(systemd-user:session): session opened for user root(uid=0) by root(uid=0)
Oct 09 15:53:38 compute-0 systemd[19839]: Queued start job for default target Main User Target.
Oct 09 15:53:38 compute-0 systemd[19839]: Created slice User Application Slice.
Oct 09 15:53:38 compute-0 systemd[19839]: Mark boot as successful after the user session has run 2 minutes was skipped because of an unmet condition check (ConditionUser=!@system).
Oct 09 15:53:38 compute-0 systemd[19839]: Started Daily Cleanup of User's Temporary Directories.
Oct 09 15:53:38 compute-0 systemd[19839]: Reached target Paths.
Oct 09 15:53:38 compute-0 systemd[19839]: Reached target Timers.
Oct 09 15:53:38 compute-0 systemd[19839]: Starting D-Bus User Message Bus Socket...
Oct 09 15:53:38 compute-0 systemd[19839]: Starting Create User's Volatile Files and Directories...
Oct 09 15:53:38 compute-0 systemd[19839]: Finished Create User's Volatile Files and Directories.
Oct 09 15:53:38 compute-0 systemd[19839]: Listening on D-Bus User Message Bus Socket.
Oct 09 15:53:38 compute-0 systemd[19839]: Reached target Sockets.
Oct 09 15:53:38 compute-0 systemd[19839]: Reached target Basic System.
Oct 09 15:53:38 compute-0 systemd[19839]: Reached target Main User Target.
Oct 09 15:53:38 compute-0 systemd[19839]: Startup finished in 110ms.
Oct 09 15:53:38 compute-0 systemd[1]: Started User Manager for UID 0.
Oct 09 15:53:38 compute-0 systemd[1]: Started Session c1 of User root.
Oct 09 15:53:38 compute-0 ovn_controller[19752]: INFO:__main__:Loading config file at /var/lib/kolla/config_files/config.json
Oct 09 15:53:38 compute-0 ovn_controller[19752]: INFO:__main__:Validating config file
Oct 09 15:53:38 compute-0 ovn_controller[19752]: INFO:__main__:Kolla config strategy set to: COPY_ALWAYS
Oct 09 15:53:38 compute-0 ovn_controller[19752]: INFO:__main__:Writing out command to execute
Oct 09 15:53:38 compute-0 systemd[1]: session-c1.scope: Deactivated successfully.
Oct 09 15:53:38 compute-0 ovn_controller[19752]: ++ cat /run_command
Oct 09 15:53:38 compute-0 ovn_controller[19752]: + CMD='/usr/bin/ovn-controller --pidfile unix:/run/openvswitch/db.sock  -p /etc/pki/tls/private/ovndb.key -c /etc/pki/tls/certs/ovndb.crt -C /etc/pki/tls/certs/ovndbca.crt '
Oct 09 15:53:38 compute-0 ovn_controller[19752]: + ARGS=
Oct 09 15:53:38 compute-0 ovn_controller[19752]: + sudo kolla_copy_cacerts
Oct 09 15:53:38 compute-0 systemd[1]: Started Session c2 of User root.
Oct 09 15:53:38 compute-0 systemd[1]: session-c2.scope: Deactivated successfully.
Oct 09 15:53:38 compute-0 ovn_controller[19752]: + [[ ! -n '' ]]
Oct 09 15:53:38 compute-0 ovn_controller[19752]: + . kolla_extend_start
Oct 09 15:53:38 compute-0 ovn_controller[19752]: + echo 'Running command: '\''/usr/bin/ovn-controller --pidfile unix:/run/openvswitch/db.sock  -p /etc/pki/tls/private/ovndb.key -c /etc/pki/tls/certs/ovndb.crt -C /etc/pki/tls/certs/ovndbca.crt '\'''
Oct 09 15:53:38 compute-0 ovn_controller[19752]: Running command: '/usr/bin/ovn-controller --pidfile unix:/run/openvswitch/db.sock  -p /etc/pki/tls/private/ovndb.key -c /etc/pki/tls/certs/ovndb.crt -C /etc/pki/tls/certs/ovndbca.crt '
Oct 09 15:53:38 compute-0 ovn_controller[19752]: + umask 0022
Oct 09 15:53:38 compute-0 ovn_controller[19752]: + exec /usr/bin/ovn-controller --pidfile unix:/run/openvswitch/db.sock -p /etc/pki/tls/private/ovndb.key -c /etc/pki/tls/certs/ovndb.crt -C /etc/pki/tls/certs/ovndbca.crt
Oct 09 15:53:38 compute-0 ovn_controller[19752]: 2025-10-09T15:53:38Z|00001|reconnect|INFO|unix:/run/openvswitch/db.sock: connecting...
Oct 09 15:53:38 compute-0 ovn_controller[19752]: 2025-10-09T15:53:38Z|00002|reconnect|INFO|unix:/run/openvswitch/db.sock: connected
Oct 09 15:53:38 compute-0 ovn_controller[19752]: 2025-10-09T15:53:38Z|00003|main|INFO|OVN internal version is : [24.09.4-20.37.0-77.8]
Oct 09 15:53:38 compute-0 ovn_controller[19752]: 2025-10-09T15:53:38Z|00004|main|INFO|OVS IDL reconnected, force recompute.
Oct 09 15:53:38 compute-0 ovn_controller[19752]: 2025-10-09T15:53:38Z|00005|stream_ssl|ERR|ssl:ovsdbserver-sb.openstack.svc:6642: connect: Address family not supported by protocol
Oct 09 15:53:38 compute-0 ovn_controller[19752]: 2025-10-09T15:53:38Z|00006|reconnect|INFO|ssl:ovsdbserver-sb.openstack.svc:6642: connecting...
Oct 09 15:53:38 compute-0 ovn_controller[19752]: 2025-10-09T15:53:38Z|00007|reconnect|INFO|ssl:ovsdbserver-sb.openstack.svc:6642: connection attempt failed (Address family not supported by protocol)
Oct 09 15:53:38 compute-0 ovn_controller[19752]: 2025-10-09T15:53:38Z|00008|main|INFO|OVNSB IDL reconnected, force recompute.
Oct 09 15:53:38 compute-0 ovn_controller[19752]: 2025-10-09T15:53:38Z|00009|ovn_util|INFO|statctrl: connecting to switch: "unix:/var/run/openvswitch/br-int.mgmt"
Oct 09 15:53:38 compute-0 ovn_controller[19752]: 2025-10-09T15:53:38Z|00010|rconn|INFO|unix:/var/run/openvswitch/br-int.mgmt: connecting...
Oct 09 15:53:38 compute-0 ovn_controller[19752]: 2025-10-09T15:53:38Z|00011|rconn|WARN|unix:/var/run/openvswitch/br-int.mgmt: connection failed (No such file or directory)
Oct 09 15:53:38 compute-0 ovn_controller[19752]: 2025-10-09T15:53:38Z|00012|rconn|INFO|unix:/var/run/openvswitch/br-int.mgmt: waiting 1 seconds before reconnect
Oct 09 15:53:38 compute-0 ovn_controller[19752]: 2025-10-09T15:53:38Z|00013|ovn_util|INFO|pinctrl: connecting to switch: "unix:/var/run/openvswitch/br-int.mgmt"
Oct 09 15:53:38 compute-0 ovn_controller[19752]: 2025-10-09T15:53:38Z|00014|rconn|INFO|unix:/var/run/openvswitch/br-int.mgmt: connecting...
Oct 09 15:53:38 compute-0 ovn_controller[19752]: 2025-10-09T15:53:38Z|00015|rconn|WARN|unix:/var/run/openvswitch/br-int.mgmt: connection failed (No such file or directory)
Oct 09 15:53:38 compute-0 ovn_controller[19752]: 2025-10-09T15:53:38Z|00016|rconn|INFO|unix:/var/run/openvswitch/br-int.mgmt: waiting 1 seconds before reconnect
Oct 09 15:53:38 compute-0 NetworkManager[1028]: <info>  [1760025218.6110] manager: (br-int): new Open vSwitch Interface device (/org/freedesktop/NetworkManager/Devices/14)
Oct 09 15:53:38 compute-0 NetworkManager[1028]: <info>  [1760025218.6116] device (br-int)[Open vSwitch Interface]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Oct 09 15:53:38 compute-0 NetworkManager[1028]: <info>  [1760025218.6125] manager: (br-int): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/15)
Oct 09 15:53:38 compute-0 NetworkManager[1028]: <info>  [1760025218.6128] manager: (br-int): new Open vSwitch Bridge device (/org/freedesktop/NetworkManager/Devices/16)
Oct 09 15:53:38 compute-0 NetworkManager[1028]: <info>  [1760025218.6130] device (br-int)[Open vSwitch Interface]: state change: unavailable -> disconnected (reason 'none', managed-type: 'full')
Oct 09 15:53:38 compute-0 kernel: br-int: entered promiscuous mode
Oct 09 15:53:38 compute-0 systemd-udevd[19913]: Network interface NamePolicy= disabled on kernel command line.
Oct 09 15:53:38 compute-0 sudo[20018]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qxcnmrreklchsdealsxixcjbnyososxf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760025218.6409411-1198-232017216624787/AnsiballZ_command.py'
Oct 09 15:53:38 compute-0 sudo[20018]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 09 15:53:39 compute-0 python3.9[20020]: ansible-ansible.legacy.command Invoked with _raw_params=ovs-vsctl remove open . other_config hw-offload _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 09 15:53:39 compute-0 ovs-vsctl[20021]: ovs|00001|vsctl|INFO|Called as ovs-vsctl remove open . other_config hw-offload
Oct 09 15:53:39 compute-0 sudo[20018]: pam_unix(sudo:session): session closed for user root
Oct 09 15:53:39 compute-0 sudo[20171]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dzuybwsvlqpxwmsqomuvmyjctwrttqlh ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760025219.2765925-1214-18628608011183/AnsiballZ_command.py'
Oct 09 15:53:39 compute-0 sudo[20171]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 09 15:53:39 compute-0 ovn_controller[19752]: 2025-10-09T15:53:39Z|00001|rconn(ovn_pinctrl0)|INFO|unix:/var/run/openvswitch/br-int.mgmt: connecting...
Oct 09 15:53:39 compute-0 ovn_controller[19752]: 2025-10-09T15:53:39Z|00001|rconn(ovn_statctrl3)|INFO|unix:/var/run/openvswitch/br-int.mgmt: connecting...
Oct 09 15:53:39 compute-0 ovn_controller[19752]: 2025-10-09T15:53:39Z|00002|rconn(ovn_pinctrl0)|INFO|unix:/var/run/openvswitch/br-int.mgmt: connected
Oct 09 15:53:39 compute-0 ovn_controller[19752]: 2025-10-09T15:53:39Z|00002|rconn(ovn_statctrl3)|INFO|unix:/var/run/openvswitch/br-int.mgmt: connected
Oct 09 15:53:39 compute-0 ovn_controller[19752]: 2025-10-09T15:53:39Z|00017|reconnect|INFO|ssl:ovsdbserver-sb.openstack.svc:6642: connecting...
Oct 09 15:53:39 compute-0 ovn_controller[19752]: 2025-10-09T15:53:39Z|00018|reconnect|INFO|ssl:ovsdbserver-sb.openstack.svc:6642: connected
Oct 09 15:53:39 compute-0 ovn_controller[19752]: 2025-10-09T15:53:39Z|00019|ovn_util|INFO|features: connecting to switch: "unix:/var/run/openvswitch/br-int.mgmt"
Oct 09 15:53:39 compute-0 ovn_controller[19752]: 2025-10-09T15:53:39Z|00020|rconn|INFO|unix:/var/run/openvswitch/br-int.mgmt: connecting...
Oct 09 15:53:39 compute-0 ovn_controller[19752]: 2025-10-09T15:53:39Z|00021|features|INFO|OVS Feature: ct_zero_snat, state: supported
Oct 09 15:53:39 compute-0 ovn_controller[19752]: 2025-10-09T15:53:39Z|00022|features|INFO|OVS Feature: ct_flush, state: supported
Oct 09 15:53:39 compute-0 ovn_controller[19752]: 2025-10-09T15:53:39Z|00023|features|INFO|OVS Feature: dp_hash_l4_sym_support, state: supported
Oct 09 15:53:39 compute-0 ovn_controller[19752]: 2025-10-09T15:53:39Z|00024|reconnect|INFO|unix:/run/openvswitch/db.sock: connecting...
Oct 09 15:53:39 compute-0 ovn_controller[19752]: 2025-10-09T15:53:39Z|00025|main|INFO|OVS feature set changed, force recompute.
Oct 09 15:53:39 compute-0 ovn_controller[19752]: 2025-10-09T15:53:39Z|00026|ovn_util|INFO|ofctrl: connecting to switch: "unix:/var/run/openvswitch/br-int.mgmt"
Oct 09 15:53:39 compute-0 ovn_controller[19752]: 2025-10-09T15:53:39Z|00027|rconn|INFO|unix:/var/run/openvswitch/br-int.mgmt: connecting...
Oct 09 15:53:39 compute-0 ovn_controller[19752]: 2025-10-09T15:53:39Z|00028|rconn|INFO|unix:/var/run/openvswitch/br-int.mgmt: connected
Oct 09 15:53:39 compute-0 ovn_controller[19752]: 2025-10-09T15:53:39Z|00029|ofctrl|INFO|ofctrl-wait-before-clear is now 8000 ms (was 0 ms)
Oct 09 15:53:39 compute-0 ovn_controller[19752]: 2025-10-09T15:53:39Z|00030|main|INFO|OVS OpenFlow connection reconnected,force recompute.
Oct 09 15:53:39 compute-0 ovn_controller[19752]: 2025-10-09T15:53:39Z|00031|rconn|INFO|unix:/var/run/openvswitch/br-int.mgmt: connected
Oct 09 15:53:39 compute-0 ovn_controller[19752]: 2025-10-09T15:53:39Z|00032|reconnect|INFO|unix:/run/openvswitch/db.sock: connected
Oct 09 15:53:39 compute-0 ovn_controller[19752]: 2025-10-09T15:53:39Z|00033|features|INFO|OVS Feature: meter_support, state: supported
Oct 09 15:53:39 compute-0 ovn_controller[19752]: 2025-10-09T15:53:39Z|00034|features|INFO|OVS Feature: group_support, state: supported
Oct 09 15:53:39 compute-0 ovn_controller[19752]: 2025-10-09T15:53:39Z|00035|main|INFO|OVS feature set changed, force recompute.
Oct 09 15:53:39 compute-0 ovn_controller[19752]: 2025-10-09T15:53:39Z|00036|features|INFO|OVS DB schema supports 4 flow table prefixes, our IDL supports: 4
Oct 09 15:53:39 compute-0 ovn_controller[19752]: 2025-10-09T15:53:39Z|00037|main|INFO|Setting flow table prefixes: ip_src, ip_dst, ipv6_src, ipv6_dst.
Oct 09 15:53:39 compute-0 NetworkManager[1028]: <info>  [1760025219.6509] manager: (ovn-c525cc-0): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/17)
Oct 09 15:53:39 compute-0 kernel: genev_sys_6081: entered promiscuous mode
Oct 09 15:53:39 compute-0 NetworkManager[1028]: <info>  [1760025219.6664] device (genev_sys_6081): carrier: link connected
Oct 09 15:53:39 compute-0 NetworkManager[1028]: <info>  [1760025219.6667] manager: (genev_sys_6081): new Generic device (/org/freedesktop/NetworkManager/Devices/18)
Oct 09 15:53:39 compute-0 python3.9[20173]: ansible-ansible.legacy.command Invoked with _raw_params=ovs-vsctl get Open_vSwitch . external_ids:ovn-cms-options | sed 's/\"//g' _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 09 15:53:39 compute-0 ovs-vsctl[20180]: ovs|00001|db_ctl_base|ERR|no key "ovn-cms-options" in Open_vSwitch record "." column external_ids
Oct 09 15:53:39 compute-0 sudo[20171]: pam_unix(sudo:session): session closed for user root
Oct 09 15:53:39 compute-0 NetworkManager[1028]: <info>  [1760025219.8726] manager: (ovn-2bd8bf-0): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/19)
Oct 09 15:53:40 compute-0 sudo[20331]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-pgcogmopsryrdpbnpxelpahaanbfxirs ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760025220.1079965-1242-89227175936380/AnsiballZ_command.py'
Oct 09 15:53:40 compute-0 sudo[20331]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 09 15:53:40 compute-0 python3.9[20333]: ansible-ansible.legacy.command Invoked with _raw_params=ovs-vsctl remove Open_vSwitch . external_ids ovn-cms-options _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 09 15:53:40 compute-0 ovs-vsctl[20334]: ovs|00001|vsctl|INFO|Called as ovs-vsctl remove Open_vSwitch . external_ids ovn-cms-options
Oct 09 15:53:40 compute-0 sudo[20331]: pam_unix(sudo:session): session closed for user root
Oct 09 15:53:40 compute-0 sshd-session[9249]: Connection closed by 192.168.122.30 port 59054
Oct 09 15:53:40 compute-0 sshd-session[9246]: pam_unix(sshd:session): session closed for user zuul
Oct 09 15:53:41 compute-0 systemd[1]: session-4.scope: Deactivated successfully.
Oct 09 15:53:41 compute-0 systemd[1]: session-4.scope: Consumed 44.765s CPU time.
Oct 09 15:53:41 compute-0 systemd-logind[841]: Session 4 logged out. Waiting for processes to exit.
Oct 09 15:53:41 compute-0 systemd-logind[841]: Removed session 4.
Oct 09 15:53:46 compute-0 sshd-session[20359]: Accepted publickey for zuul from 192.168.122.30 port 37140 ssh2: ECDSA SHA256:2Vdz7kVNDZnmAnEBdeIC9De7MGoQwU7bxSCyJABiYXo
Oct 09 15:53:46 compute-0 systemd-logind[841]: New session 6 of user zuul.
Oct 09 15:53:46 compute-0 systemd[1]: Started Session 6 of User zuul.
Oct 09 15:53:46 compute-0 sshd-session[20359]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Oct 09 15:53:47 compute-0 python3.9[20512]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Oct 09 15:53:48 compute-0 systemd[1]: Stopping User Manager for UID 0...
Oct 09 15:53:48 compute-0 systemd[19839]: Activating special unit Exit the Session...
Oct 09 15:53:48 compute-0 systemd[19839]: Stopped target Main User Target.
Oct 09 15:53:48 compute-0 systemd[19839]: Stopped target Basic System.
Oct 09 15:53:48 compute-0 systemd[19839]: Stopped target Paths.
Oct 09 15:53:48 compute-0 systemd[19839]: Stopped target Sockets.
Oct 09 15:53:48 compute-0 systemd[19839]: Stopped target Timers.
Oct 09 15:53:48 compute-0 systemd[19839]: Stopped Daily Cleanup of User's Temporary Directories.
Oct 09 15:53:48 compute-0 systemd[19839]: Closed D-Bus User Message Bus Socket.
Oct 09 15:53:48 compute-0 systemd[19839]: Stopped Create User's Volatile Files and Directories.
Oct 09 15:53:48 compute-0 systemd[19839]: Removed slice User Application Slice.
Oct 09 15:53:48 compute-0 systemd[19839]: Reached target Shutdown.
Oct 09 15:53:48 compute-0 systemd[19839]: Finished Exit the Session.
Oct 09 15:53:48 compute-0 systemd[19839]: Reached target Exit the Session.
Oct 09 15:53:48 compute-0 systemd[1]: user@0.service: Deactivated successfully.
Oct 09 15:53:48 compute-0 systemd[1]: Stopped User Manager for UID 0.
Oct 09 15:53:48 compute-0 systemd[1]: Stopping User Runtime Directory /run/user/0...
Oct 09 15:53:48 compute-0 systemd[1]: run-user-0.mount: Deactivated successfully.
Oct 09 15:53:48 compute-0 systemd[1]: user-runtime-dir@0.service: Deactivated successfully.
Oct 09 15:53:48 compute-0 systemd[1]: Stopped User Runtime Directory /run/user/0.
Oct 09 15:53:48 compute-0 systemd[1]: Removed slice User Slice of UID 0.
Oct 09 15:53:48 compute-0 sudo[20667]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kcyjeanhavcyafepuusyauxhxzrnhblf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760025228.2733216-48-130498278750800/AnsiballZ_file.py'
Oct 09 15:53:48 compute-0 sudo[20667]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 09 15:53:48 compute-0 python3.9[20670]: ansible-ansible.builtin.file Invoked with group=zuul owner=zuul path=/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None seuser=None serole=None selevel=None attributes=None
Oct 09 15:53:48 compute-0 sudo[20667]: pam_unix(sudo:session): session closed for user root
Oct 09 15:53:49 compute-0 sudo[20820]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zebbdigwidnjxjyialdbsauubqfhwzzi ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760025229.0878212-48-28584634678306/AnsiballZ_file.py'
Oct 09 15:53:49 compute-0 sudo[20820]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 09 15:53:49 compute-0 python3.9[20822]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/neutron setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Oct 09 15:53:49 compute-0 sudo[20820]: pam_unix(sudo:session): session closed for user root
Oct 09 15:53:50 compute-0 sudo[20972]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-eybmlrwrlakgkawdewghnslwxsqtwulm ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760025229.719487-48-102954837461418/AnsiballZ_file.py'
Oct 09 15:53:50 compute-0 sudo[20972]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 09 15:53:50 compute-0 python3.9[20974]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/neutron/kill_scripts setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Oct 09 15:53:50 compute-0 sudo[20972]: pam_unix(sudo:session): session closed for user root
Oct 09 15:53:50 compute-0 sudo[21124]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hwaklnzdksgcvctykhpukptisnlgsabr ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760025230.3874729-48-181314977726508/AnsiballZ_file.py'
Oct 09 15:53:50 compute-0 sudo[21124]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 09 15:53:50 compute-0 python3.9[21126]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/neutron/ovn-metadata-proxy setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Oct 09 15:53:50 compute-0 sudo[21124]: pam_unix(sudo:session): session closed for user root
Oct 09 15:53:51 compute-0 sudo[21276]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bgtyzrlsiwiaxfdrvozsmpgfpqovdssu ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760025230.9501-48-155064489079839/AnsiballZ_file.py'
Oct 09 15:53:51 compute-0 sudo[21276]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 09 15:53:51 compute-0 python3.9[21278]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/neutron/external/pids setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Oct 09 15:53:51 compute-0 sudo[21276]: pam_unix(sudo:session): session closed for user root
Oct 09 15:53:52 compute-0 python3.9[21428]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'selinux'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Oct 09 15:53:53 compute-0 sudo[21578]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mdcoyazoipohtjlneijocywfearmotrz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760025232.5800853-136-234506382907428/AnsiballZ_seboolean.py'
Oct 09 15:53:53 compute-0 sudo[21578]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 09 15:53:53 compute-0 python3.9[21580]: ansible-ansible.posix.seboolean Invoked with name=virt_sandbox_use_netlink persistent=True state=True ignore_selinux_state=False
Oct 09 15:53:53 compute-0 sudo[21578]: pam_unix(sudo:session): session closed for user root
Oct 09 15:53:54 compute-0 python3.9[21730]: ansible-ansible.legacy.stat Invoked with path=/var/lib/neutron/ovn_metadata_haproxy_wrapper follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 09 15:53:55 compute-0 python3.9[21851]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/neutron/ovn_metadata_haproxy_wrapper mode=0755 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1760025234.0778255-152-66801383889727/.source follow=False _original_basename=haproxy.j2 checksum=adfb50215f5396efe78532817f66e82d7b28a68f backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Oct 09 15:53:55 compute-0 python3.9[22001]: ansible-ansible.legacy.stat Invoked with path=/var/lib/neutron/kill_scripts/haproxy-kill follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 09 15:53:56 compute-0 python3.9[22122]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/neutron/kill_scripts/haproxy-kill mode=0755 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1760025235.4996333-182-202604519089683/.source follow=False _original_basename=kill-script.j2 checksum=2dfb5489f491f61b95691c3bf95fa1fe48ff3700 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Oct 09 15:53:57 compute-0 sudo[22272]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qcyjwdyuueuqhpzanhmukqlmxspzuvvo ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760025236.8149977-216-137455873242396/AnsiballZ_setup.py'
Oct 09 15:53:57 compute-0 sudo[22272]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 09 15:53:57 compute-0 python3.9[22274]: ansible-ansible.legacy.setup Invoked with filter=['ansible_pkg_mgr'] gather_subset=['!all'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Oct 09 15:53:57 compute-0 sudo[22272]: pam_unix(sudo:session): session closed for user root
Oct 09 15:53:57 compute-0 sudo[22357]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-lbrdzkvpydczlwgvjpdhjwybwumgmybz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760025236.8149977-216-137455873242396/AnsiballZ_dnf.py'
Oct 09 15:53:57 compute-0 sudo[22357]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 09 15:53:58 compute-0 python3.9[22359]: ansible-ansible.legacy.dnf Invoked with name=['openvswitch'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_repoquery=True install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Oct 09 15:53:59 compute-0 sudo[22357]: pam_unix(sudo:session): session closed for user root
Oct 09 15:54:00 compute-0 sudo[22510]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qvkfhobgtemrxyhlminwlmeghdxvstfm ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760025239.509212-240-93431087297707/AnsiballZ_systemd.py'
Oct 09 15:54:00 compute-0 sudo[22510]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 09 15:54:00 compute-0 python3.9[22512]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=openvswitch.service state=started daemon_reload=False daemon_reexec=False scope=system no_block=False force=None
Oct 09 15:54:00 compute-0 sudo[22510]: pam_unix(sudo:session): session closed for user root
Oct 09 15:54:01 compute-0 python3.9[22665]: ansible-ansible.legacy.stat Invoked with path=/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent/01-rootwrap.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 09 15:54:01 compute-0 python3.9[22786]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent/01-rootwrap.conf mode=0644 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1760025240.6255388-256-70167927487584/.source.conf follow=False _original_basename=rootwrap.conf.j2 checksum=11f2cfb4b7d97b2cef3c2c2d88089e6999cffe22 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Oct 09 15:54:02 compute-0 python3.9[22936]: ansible-ansible.legacy.stat Invoked with path=/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent/01-neutron-ovn-metadata-agent.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 09 15:54:02 compute-0 python3.9[23057]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent/01-neutron-ovn-metadata-agent.conf mode=0644 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1760025241.7447097-256-172297576629084/.source.conf follow=False _original_basename=neutron-ovn-metadata-agent.conf.j2 checksum=8bc979abbe81c2cf3993a225517a7e2483e20443 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Oct 09 15:54:03 compute-0 python3.9[23207]: ansible-ansible.legacy.stat Invoked with path=/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent/10-neutron-metadata.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 09 15:54:04 compute-0 python3.9[23328]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent/10-neutron-metadata.conf mode=0644 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1760025243.480584-344-60293250235901/.source.conf _original_basename=10-neutron-metadata.conf follow=False checksum=ca7d4d155f5b812fab1a3b70e34adb495d291b8d backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Oct 09 15:54:04 compute-0 python3.9[23478]: ansible-ansible.legacy.stat Invoked with path=/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent/05-nova-metadata.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 09 15:54:05 compute-0 python3.9[23599]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent/05-nova-metadata.conf mode=0644 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1760025244.496891-344-1442371839527/.source.conf _original_basename=05-nova-metadata.conf follow=False checksum=a14d6b38898a379cd37fc0bf365d17f10859446f backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Oct 09 15:54:06 compute-0 python3.9[23749]: ansible-ansible.builtin.stat Invoked with path=/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem follow=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Oct 09 15:54:06 compute-0 sudo[23901]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zxuqrwqekpgnulsepiimbmvjpbinutja ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760025246.451526-420-148699999163845/AnsiballZ_file.py'
Oct 09 15:54:06 compute-0 sudo[23901]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 09 15:54:06 compute-0 python3.9[23903]: ansible-ansible.builtin.file Invoked with path=/var/local/libexec recurse=True setype=container_file_t state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Oct 09 15:54:06 compute-0 sudo[23901]: pam_unix(sudo:session): session closed for user root
Oct 09 15:54:07 compute-0 sudo[24053]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xcsupbbhaklgocjpzntnhyusxzdwdhby ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760025247.174025-436-122390238088078/AnsiballZ_stat.py'
Oct 09 15:54:07 compute-0 sudo[24053]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 09 15:54:07 compute-0 python3.9[24055]: ansible-ansible.legacy.stat Invoked with path=/var/local/libexec/edpm-container-shutdown follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 09 15:54:07 compute-0 sudo[24053]: pam_unix(sudo:session): session closed for user root
Oct 09 15:54:07 compute-0 sudo[24131]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vhsrhziwcimopgtxmmayscsjmkhtzeic ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760025247.174025-436-122390238088078/AnsiballZ_file.py'
Oct 09 15:54:07 compute-0 sudo[24131]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 09 15:54:08 compute-0 python3.9[24133]: ansible-ansible.legacy.file Invoked with group=root mode=0700 owner=root setype=container_file_t dest=/var/local/libexec/edpm-container-shutdown _original_basename=edpm-container-shutdown recurse=False state=file path=/var/local/libexec/edpm-container-shutdown force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Oct 09 15:54:08 compute-0 sudo[24131]: pam_unix(sudo:session): session closed for user root
Oct 09 15:54:08 compute-0 sudo[24294]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vxgevfkfevhgpzlqfzomahbwxvtefxuq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760025248.1709878-436-103741265469100/AnsiballZ_stat.py'
Oct 09 15:54:08 compute-0 sudo[24294]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 09 15:54:08 compute-0 ovn_controller[19752]: 2025-10-09T15:54:08Z|00038|memory|INFO|15992 kB peak resident set size after 29.9 seconds
Oct 09 15:54:08 compute-0 ovn_controller[19752]: 2025-10-09T15:54:08Z|00039|memory|INFO|idl-cells-OVN_Southbound:256 idl-cells-Open_vSwitch:528 ofctrl_desired_flow_usage-KB:6 ofctrl_installed_flow_usage-KB:5 ofctrl_sb_flow_ref_usage-KB:2
Oct 09 15:54:08 compute-0 podman[24257]: 2025-10-09 15:54:08.503250992 +0000 UTC m=+0.090885931 container health_status d615e4f24ff62fbb14be4a63e6da568ec5b22309773c1c66103b6b4de64a8a2f (image=38.102.83.66:5001/podified-master-centos10/openstack-ovn-controller:watcher_latest, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=watcher_latest, tcib_managed=true, container_name=ovn_controller, io.buildah.version=1.41.4, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 10 Base Image, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': '38.102.83.66:5001/podified-master-centos10/openstack-ovn-controller:watcher_latest', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.build-date=20251007, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_id=ovn_controller)
Oct 09 15:54:08 compute-0 python3.9[24299]: ansible-ansible.legacy.stat Invoked with path=/var/local/libexec/edpm-start-podman-container follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 09 15:54:08 compute-0 sudo[24294]: pam_unix(sudo:session): session closed for user root
Oct 09 15:54:08 compute-0 sudo[24385]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wbbaeyuqgghzjaurzxpklgtlxkzqdmyo ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760025248.1709878-436-103741265469100/AnsiballZ_file.py'
Oct 09 15:54:08 compute-0 sudo[24385]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 09 15:54:09 compute-0 python3.9[24387]: ansible-ansible.legacy.file Invoked with group=root mode=0700 owner=root setype=container_file_t dest=/var/local/libexec/edpm-start-podman-container _original_basename=edpm-start-podman-container recurse=False state=file path=/var/local/libexec/edpm-start-podman-container force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Oct 09 15:54:09 compute-0 sudo[24385]: pam_unix(sudo:session): session closed for user root
Oct 09 15:54:09 compute-0 sudo[24537]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xmykssdzbwqtgsmjzqtzaoiflaxyevia ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760025249.4558136-482-6473284409494/AnsiballZ_file.py'
Oct 09 15:54:09 compute-0 sudo[24537]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 09 15:54:09 compute-0 python3.9[24539]: ansible-ansible.builtin.file Invoked with mode=420 path=/etc/systemd/system-preset state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 09 15:54:09 compute-0 sudo[24537]: pam_unix(sudo:session): session closed for user root
Oct 09 15:54:10 compute-0 sudo[24689]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ivncrhejuqpmnfsiravsbrcbnmeryoim ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760025250.1401503-498-232590740281785/AnsiballZ_stat.py'
Oct 09 15:54:10 compute-0 sudo[24689]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 09 15:54:10 compute-0 python3.9[24691]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/edpm-container-shutdown.service follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 09 15:54:10 compute-0 sudo[24689]: pam_unix(sudo:session): session closed for user root
Oct 09 15:54:10 compute-0 sudo[24767]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vjmtwjqrfwigefbmxyoywkpbphbqgnyz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760025250.1401503-498-232590740281785/AnsiballZ_file.py'
Oct 09 15:54:10 compute-0 sudo[24767]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 09 15:54:10 compute-0 python3.9[24769]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system/edpm-container-shutdown.service _original_basename=edpm-container-shutdown-service recurse=False state=file path=/etc/systemd/system/edpm-container-shutdown.service force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 09 15:54:11 compute-0 sudo[24767]: pam_unix(sudo:session): session closed for user root
Oct 09 15:54:11 compute-0 sudo[24919]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qavlgroadwtdocobftxuziqzvfjelzju ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760025251.3278-522-229425824644583/AnsiballZ_stat.py'
Oct 09 15:54:11 compute-0 sudo[24919]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 09 15:54:11 compute-0 python3.9[24921]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system-preset/91-edpm-container-shutdown.preset follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 09 15:54:11 compute-0 sudo[24919]: pam_unix(sudo:session): session closed for user root
Oct 09 15:54:11 compute-0 sudo[24997]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xtnoxylaofcvwcfuzywqrpcihdiytsvm ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760025251.3278-522-229425824644583/AnsiballZ_file.py'
Oct 09 15:54:11 compute-0 sudo[24997]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 09 15:54:12 compute-0 python3.9[24999]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system-preset/91-edpm-container-shutdown.preset _original_basename=91-edpm-container-shutdown-preset recurse=False state=file path=/etc/systemd/system-preset/91-edpm-container-shutdown.preset force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 09 15:54:12 compute-0 sudo[24997]: pam_unix(sudo:session): session closed for user root
Oct 09 15:54:12 compute-0 sudo[25149]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rjrkzevqgpzwiizruciyycowzmsmtlcd ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760025252.532979-546-228309546276245/AnsiballZ_systemd.py'
Oct 09 15:54:12 compute-0 sudo[25149]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 09 15:54:13 compute-0 python3.9[25151]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True enabled=True name=edpm-container-shutdown state=started daemon_reexec=False scope=system no_block=False force=None masked=None
Oct 09 15:54:13 compute-0 systemd[1]: Reloading.
Oct 09 15:54:13 compute-0 systemd-rc-local-generator[25174]: /etc/rc.d/rc.local is not marked executable, skipping.
Oct 09 15:54:13 compute-0 systemd-sysv-generator[25181]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct 09 15:54:13 compute-0 sudo[25149]: pam_unix(sudo:session): session closed for user root
Oct 09 15:54:13 compute-0 sudo[25338]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ehvfmbvtmthlbhhptuvzzrogkrkiloth ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760025253.6449263-562-49643365166852/AnsiballZ_stat.py'
Oct 09 15:54:13 compute-0 sudo[25338]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 09 15:54:14 compute-0 python3.9[25340]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/netns-placeholder.service follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 09 15:54:14 compute-0 sudo[25338]: pam_unix(sudo:session): session closed for user root
Oct 09 15:54:14 compute-0 sudo[25416]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jjdxrrafjmoycsuhgtawjkpqogpehcxq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760025253.6449263-562-49643365166852/AnsiballZ_file.py'
Oct 09 15:54:14 compute-0 sudo[25416]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 09 15:54:14 compute-0 python3.9[25418]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system/netns-placeholder.service _original_basename=netns-placeholder-service recurse=False state=file path=/etc/systemd/system/netns-placeholder.service force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 09 15:54:14 compute-0 sudo[25416]: pam_unix(sudo:session): session closed for user root
Oct 09 15:54:15 compute-0 sudo[25568]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gcfxidwmofnpxcumdfjgiilznqaiaowh ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760025254.886572-586-30002187076491/AnsiballZ_stat.py'
Oct 09 15:54:15 compute-0 sudo[25568]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 09 15:54:15 compute-0 python3.9[25570]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system-preset/91-netns-placeholder.preset follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 09 15:54:15 compute-0 sudo[25568]: pam_unix(sudo:session): session closed for user root
Oct 09 15:54:15 compute-0 sudo[25646]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-twzqgsqmhtbjyxfulliccycwybicyxvb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760025254.886572-586-30002187076491/AnsiballZ_file.py'
Oct 09 15:54:15 compute-0 sudo[25646]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 09 15:54:15 compute-0 python3.9[25648]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system-preset/91-netns-placeholder.preset _original_basename=91-netns-placeholder-preset recurse=False state=file path=/etc/systemd/system-preset/91-netns-placeholder.preset force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 09 15:54:15 compute-0 sudo[25646]: pam_unix(sudo:session): session closed for user root
Oct 09 15:54:16 compute-0 sudo[25798]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rdiujoivdyvsygkzeedkswoxuuknicsu ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760025256.0488856-610-111778495030085/AnsiballZ_systemd.py'
Oct 09 15:54:16 compute-0 sudo[25798]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 09 15:54:16 compute-0 python3.9[25800]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True enabled=True name=netns-placeholder state=started daemon_reexec=False scope=system no_block=False force=None masked=None
Oct 09 15:54:16 compute-0 systemd[1]: Reloading.
Oct 09 15:54:16 compute-0 systemd-rc-local-generator[25827]: /etc/rc.d/rc.local is not marked executable, skipping.
Oct 09 15:54:16 compute-0 systemd-sysv-generator[25831]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct 09 15:54:16 compute-0 systemd[1]: Starting Create netns directory...
Oct 09 15:54:16 compute-0 systemd[1309]: Starting Mark boot as successful...
Oct 09 15:54:16 compute-0 systemd[1]: run-netns-placeholder.mount: Deactivated successfully.
Oct 09 15:54:16 compute-0 systemd[1]: netns-placeholder.service: Deactivated successfully.
Oct 09 15:54:16 compute-0 systemd[1]: Finished Create netns directory.
Oct 09 15:54:16 compute-0 systemd[1309]: Finished Mark boot as successful.
Oct 09 15:54:16 compute-0 sudo[25798]: pam_unix(sudo:session): session closed for user root
Oct 09 15:54:17 compute-0 sudo[25992]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-thdabfheprqtkuxbedukkdladnvzkqwo ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760025257.2558196-630-277776759263676/AnsiballZ_file.py'
Oct 09 15:54:17 compute-0 sudo[25992]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 09 15:54:17 compute-0 python3.9[25994]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/openstack/healthchecks setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Oct 09 15:54:17 compute-0 sudo[25992]: pam_unix(sudo:session): session closed for user root
Oct 09 15:54:18 compute-0 sudo[26144]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-eiyfkkvdxmwvrfcffppebcgtaplwnejf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760025257.872152-646-17980348692849/AnsiballZ_stat.py'
Oct 09 15:54:18 compute-0 sudo[26144]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 09 15:54:18 compute-0 python3.9[26146]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/healthchecks/ovn_metadata_agent/healthcheck follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 09 15:54:18 compute-0 sudo[26144]: pam_unix(sudo:session): session closed for user root
Oct 09 15:54:18 compute-0 sudo[26267]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ckxnqtrmfvdqvhbvbkbbvddcygmevvsp ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760025257.872152-646-17980348692849/AnsiballZ_copy.py'
Oct 09 15:54:18 compute-0 sudo[26267]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 09 15:54:18 compute-0 python3.9[26269]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/healthchecks/ovn_metadata_agent/ group=zuul mode=0700 owner=zuul setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1760025257.872152-646-17980348692849/.source _original_basename=healthcheck follow=False checksum=898a5a1fcd473cf731177fc866e3bd7ebf20a131 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None attributes=None
Oct 09 15:54:18 compute-0 sudo[26267]: pam_unix(sudo:session): session closed for user root
Oct 09 15:54:19 compute-0 sudo[26419]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hxftdtpmtpjpxekrxysknqpmxrsepwzn ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760025259.328244-680-269664055157900/AnsiballZ_file.py'
Oct 09 15:54:19 compute-0 sudo[26419]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 09 15:54:19 compute-0 python3.9[26421]: ansible-ansible.builtin.file Invoked with path=/var/lib/kolla/config_files recurse=True setype=container_file_t state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Oct 09 15:54:19 compute-0 sudo[26419]: pam_unix(sudo:session): session closed for user root
Oct 09 15:54:20 compute-0 sudo[26571]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gytwaxymqzgwuaugkmwgdoybspmggikm ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760025260.02293-696-205385626561418/AnsiballZ_stat.py'
Oct 09 15:54:20 compute-0 sudo[26571]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 09 15:54:20 compute-0 python3.9[26573]: ansible-ansible.legacy.stat Invoked with path=/var/lib/kolla/config_files/ovn_metadata_agent.json follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 09 15:54:20 compute-0 sudo[26571]: pam_unix(sudo:session): session closed for user root
Oct 09 15:54:20 compute-0 sudo[26694]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qpallxbfdtibogqbjnyzzsnluupicjiy ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760025260.02293-696-205385626561418/AnsiballZ_copy.py'
Oct 09 15:54:20 compute-0 sudo[26694]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 09 15:54:20 compute-0 python3.9[26696]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/kolla/config_files/ovn_metadata_agent.json mode=0600 src=/home/zuul/.ansible/tmp/ansible-tmp-1760025260.02293-696-205385626561418/.source.json _original_basename=.d9mfs4fa follow=False checksum=a908ef151ded3a33ae6c9ac8be72a35e5e33b9dc backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 09 15:54:20 compute-0 sudo[26694]: pam_unix(sudo:session): session closed for user root
Oct 09 15:54:21 compute-0 sudo[26846]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dknhetmupalwhkgvdljufpucllotidmp ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760025261.226771-726-96817269526884/AnsiballZ_file.py'
Oct 09 15:54:21 compute-0 sudo[26846]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 09 15:54:21 compute-0 python3.9[26848]: ansible-ansible.builtin.file Invoked with mode=0755 path=/var/lib/edpm-config/container-startup-config/ovn_metadata_agent state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 09 15:54:21 compute-0 sudo[26846]: pam_unix(sudo:session): session closed for user root
Oct 09 15:54:22 compute-0 sudo[26998]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vzikpmqtgquvldxsqixptwdlcnhlnbet ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760025261.9381886-742-237579258152516/AnsiballZ_stat.py'
Oct 09 15:54:22 compute-0 sudo[26998]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 09 15:54:22 compute-0 sudo[26998]: pam_unix(sudo:session): session closed for user root
Oct 09 15:54:22 compute-0 sudo[27121]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-maonxpuxeyitpuynozxkjfzrtegqlnrv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760025261.9381886-742-237579258152516/AnsiballZ_copy.py'
Oct 09 15:54:22 compute-0 sudo[27121]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 09 15:54:22 compute-0 sudo[27121]: pam_unix(sudo:session): session closed for user root
Oct 09 15:54:23 compute-0 sudo[27273]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-nghzvchbswvktfmjvaahuwzlmtgxxolw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760025263.370081-776-183206905375182/AnsiballZ_container_config_data.py'
Oct 09 15:54:23 compute-0 sudo[27273]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 09 15:54:23 compute-0 python3.9[27275]: ansible-container_config_data Invoked with config_overrides={} config_path=/var/lib/edpm-config/container-startup-config/ovn_metadata_agent config_pattern=*.json debug=False
Oct 09 15:54:23 compute-0 sudo[27273]: pam_unix(sudo:session): session closed for user root
Oct 09 15:54:24 compute-0 sudo[27425]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dnjyalczigqqysiqkluyuolnqpkbvdlq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760025264.2368512-794-186973302656566/AnsiballZ_container_config_hash.py'
Oct 09 15:54:24 compute-0 sudo[27425]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 09 15:54:24 compute-0 python3.9[27427]: ansible-container_config_hash Invoked with check_mode=False config_vol_prefix=/var/lib/config-data
Oct 09 15:54:24 compute-0 sudo[27425]: pam_unix(sudo:session): session closed for user root
Oct 09 15:54:25 compute-0 sudo[27577]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xvnyikuappiaiitaanihbxrlhcszqctp ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760025265.1056042-812-185274297481739/AnsiballZ_podman_container_info.py'
Oct 09 15:54:25 compute-0 sudo[27577]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 09 15:54:25 compute-0 python3.9[27579]: ansible-containers.podman.podman_container_info Invoked with executable=podman name=None
Oct 09 15:54:25 compute-0 sudo[27577]: pam_unix(sudo:session): session closed for user root
Oct 09 15:54:26 compute-0 sudo[27755]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fqrlhyxkjijreifshsbkqizvkpvhgvmk ; /usr/bin/python3 /home/zuul/.ansible/tmp/ansible-tmp-1760025266.3910437-838-267667370620604/AnsiballZ_edpm_container_manage.py'
Oct 09 15:54:26 compute-0 sudo[27755]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 09 15:54:27 compute-0 python3[27757]: ansible-edpm_container_manage Invoked with concurrency=1 config_dir=/var/lib/edpm-config/container-startup-config/ovn_metadata_agent config_id=ovn_metadata_agent config_overrides={} config_patterns=*.json log_base_path=/var/log/containers/stdouts debug=False
Oct 09 15:54:27 compute-0 podman[27795]: 2025-10-09 15:54:27.340666633 +0000 UTC m=+0.055156618 container create 0e966c7a283689daf23c5db5017f23f685ee836319240188a9f70ddd246563c5 (image=38.102.83.66:5001/podified-master-centos10/openstack-neutron-metadata-agent-ovn:watcher_latest, name=ovn_metadata_agent, org.label-schema.license=GPLv2, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': '38.102.83.66:5001/podified-master-centos10/openstack-neutron-metadata-agent-ovn:watcher_latest', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, tcib_build_tag=watcher_latest, tcib_managed=true, container_name=ovn_metadata_agent, managed_by=edpm_ansible, io.buildah.version=1.41.4, org.label-schema.build-date=20251007, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent)
Oct 09 15:54:27 compute-0 podman[27795]: 2025-10-09 15:54:27.308268165 +0000 UTC m=+0.022758170 image pull 4e15c189a95c5dcd1203c76fe304de5c8f8616da9a258ca168cc54a517f49b87 38.102.83.66:5001/podified-master-centos10/openstack-neutron-metadata-agent-ovn:watcher_latest
Oct 09 15:54:27 compute-0 python3[27757]: ansible-edpm_container_manage PODMAN-CONTAINER-DEBUG: podman create --name ovn_metadata_agent --cgroupns=host --conmon-pidfile /run/ovn_metadata_agent.pid --env KOLLA_CONFIG_STRATEGY=COPY_ALWAYS --env EDPM_CONFIG_HASH=0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d --healthcheck-command /openstack/healthcheck --label config_id=ovn_metadata_agent --label container_name=ovn_metadata_agent --label managed_by=edpm_ansible --label config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': '38.102.83.66:5001/podified-master-centos10/openstack-neutron-metadata-agent-ovn:watcher_latest', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']} --log-driver journald --log-level info --network host --pid host --privileged=True --user root --volume /run/openvswitch:/run/openvswitch:z --volume /var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z --volume /run/netns:/run/netns:shared --volume /var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro --volume /var/lib/neutron:/var/lib/neutron:shared,z --volume /var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro --volume /var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro --volume /var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z --volume /var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z --volume /var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z --volume /var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z --volume /var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z 38.102.83.66:5001/podified-master-centos10/openstack-neutron-metadata-agent-ovn:watcher_latest
Oct 09 15:54:27 compute-0 sudo[27755]: pam_unix(sudo:session): session closed for user root
Oct 09 15:54:27 compute-0 sudo[27984]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jzjxuujnsmgsmrhgvhkngsiiocieftdh ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760025267.722068-854-75576133051034/AnsiballZ_stat.py'
Oct 09 15:54:27 compute-0 sudo[27984]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 09 15:54:28 compute-0 python3.9[27986]: ansible-ansible.builtin.stat Invoked with path=/etc/sysconfig/podman_drop_in follow=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Oct 09 15:54:28 compute-0 sudo[27984]: pam_unix(sudo:session): session closed for user root
Oct 09 15:54:28 compute-0 sudo[28138]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vqvfgnejkdplvoerzxvrdvwlczufbqvb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760025268.4819176-872-97657295399818/AnsiballZ_file.py'
Oct 09 15:54:28 compute-0 sudo[28138]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 09 15:54:28 compute-0 python3.9[28140]: ansible-file Invoked with path=/etc/systemd/system/edpm_ovn_metadata_agent.requires state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 09 15:54:28 compute-0 sudo[28138]: pam_unix(sudo:session): session closed for user root
Oct 09 15:54:29 compute-0 sudo[28214]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-elcelopixwcyqezahliipbikbfvezcgx ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760025268.4819176-872-97657295399818/AnsiballZ_stat.py'
Oct 09 15:54:29 compute-0 sudo[28214]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 09 15:54:29 compute-0 python3.9[28216]: ansible-stat Invoked with path=/etc/systemd/system/edpm_ovn_metadata_agent_healthcheck.timer follow=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Oct 09 15:54:29 compute-0 sudo[28214]: pam_unix(sudo:session): session closed for user root
Oct 09 15:54:29 compute-0 sudo[28365]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-okfoqmstembdhuxgbhpekqjgtkeerdag ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760025269.4665487-872-208270934322813/AnsiballZ_copy.py'
Oct 09 15:54:29 compute-0 sudo[28365]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 09 15:54:30 compute-0 python3.9[28367]: ansible-copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1760025269.4665487-872-208270934322813/source dest=/etc/systemd/system/edpm_ovn_metadata_agent.service mode=0644 owner=root group=root backup=False force=True remote_src=False follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 09 15:54:30 compute-0 sudo[28365]: pam_unix(sudo:session): session closed for user root
Oct 09 15:54:30 compute-0 sudo[28441]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-voojwqhbzvaxeortxbxzfqpyoikebgqe ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760025269.4665487-872-208270934322813/AnsiballZ_systemd.py'
Oct 09 15:54:30 compute-0 sudo[28441]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 09 15:54:30 compute-0 python3.9[28443]: ansible-systemd Invoked with daemon_reload=True daemon_reexec=False scope=system no_block=False name=None state=None enabled=None force=None masked=None
Oct 09 15:54:30 compute-0 systemd[1]: Reloading.
Oct 09 15:54:30 compute-0 systemd-rc-local-generator[28470]: /etc/rc.d/rc.local is not marked executable, skipping.
Oct 09 15:54:30 compute-0 systemd-sysv-generator[28473]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct 09 15:54:30 compute-0 sudo[28441]: pam_unix(sudo:session): session closed for user root
Oct 09 15:54:31 compute-0 sudo[28551]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hdahbleuiyzjvvfvaxbcoiyziqgibubh ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760025269.4665487-872-208270934322813/AnsiballZ_systemd.py'
Oct 09 15:54:31 compute-0 sudo[28551]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 09 15:54:31 compute-0 python3.9[28553]: ansible-systemd Invoked with state=restarted name=edpm_ovn_metadata_agent.service enabled=True daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Oct 09 15:54:31 compute-0 systemd[1]: Reloading.
Oct 09 15:54:31 compute-0 systemd-rc-local-generator[28582]: /etc/rc.d/rc.local is not marked executable, skipping.
Oct 09 15:54:31 compute-0 systemd-sysv-generator[28585]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct 09 15:54:31 compute-0 systemd[1]: Starting ovn_metadata_agent container...
Oct 09 15:54:31 compute-0 systemd[1]: Started libcrun container.
Oct 09 15:54:31 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e91c99095f7d948245c578eda45733259d49099698a87f1a4144f01c7f537d71/merged/etc/neutron.conf.d supports timestamps until 2038 (0x7fffffff)
Oct 09 15:54:31 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e91c99095f7d948245c578eda45733259d49099698a87f1a4144f01c7f537d71/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Oct 09 15:54:31 compute-0 systemd[1]: Started /usr/bin/podman healthcheck run 0e966c7a283689daf23c5db5017f23f685ee836319240188a9f70ddd246563c5.
Oct 09 15:54:31 compute-0 podman[28593]: 2025-10-09 15:54:31.770834717 +0000 UTC m=+0.118278202 container init 0e966c7a283689daf23c5db5017f23f685ee836319240188a9f70ddd246563c5 (image=38.102.83.66:5001/podified-master-centos10/openstack-neutron-metadata-agent-ovn:watcher_latest, name=ovn_metadata_agent, org.label-schema.vendor=CentOS, tcib_build_tag=watcher_latest, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': '38.102.83.66:5001/podified-master-centos10/openstack-neutron-metadata-agent-ovn:watcher_latest', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, tcib_managed=true, io.buildah.version=1.41.4, managed_by=edpm_ansible, org.label-schema.build-date=20251007, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 10 Base Image)
Oct 09 15:54:31 compute-0 ovn_metadata_agent[28608]: + sudo -E kolla_set_configs
Oct 09 15:54:31 compute-0 podman[28593]: 2025-10-09 15:54:31.796990924 +0000 UTC m=+0.144434399 container start 0e966c7a283689daf23c5db5017f23f685ee836319240188a9f70ddd246563c5 (image=38.102.83.66:5001/podified-master-centos10/openstack-neutron-metadata-agent-ovn:watcher_latest, name=ovn_metadata_agent, container_name=ovn_metadata_agent, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.license=GPLv2, tcib_managed=true, io.buildah.version=1.41.4, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251007, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=watcher_latest, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': '38.102.83.66:5001/podified-master-centos10/openstack-neutron-metadata-agent-ovn:watcher_latest', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent)
Oct 09 15:54:31 compute-0 edpm-start-podman-container[28593]: ovn_metadata_agent
Oct 09 15:54:31 compute-0 edpm-start-podman-container[28592]: Creating additional drop-in dependency for "ovn_metadata_agent" (0e966c7a283689daf23c5db5017f23f685ee836319240188a9f70ddd246563c5)
Oct 09 15:54:31 compute-0 podman[28614]: 2025-10-09 15:54:31.856731177 +0000 UTC m=+0.050269168 container health_status 0e966c7a283689daf23c5db5017f23f685ee836319240188a9f70ddd246563c5 (image=38.102.83.66:5001/podified-master-centos10/openstack-neutron-metadata-agent-ovn:watcher_latest, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=watcher_latest, org.label-schema.name=CentOS Stream 10 Base Image, tcib_managed=true, io.buildah.version=1.41.4, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': '38.102.83.66:5001/podified-master-centos10/openstack-neutron-metadata-agent-ovn:watcher_latest', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251007, org.label-schema.schema-version=1.0, config_id=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.license=GPLv2)
Oct 09 15:54:31 compute-0 ovn_metadata_agent[28608]: INFO:__main__:Loading config file at /var/lib/kolla/config_files/config.json
Oct 09 15:54:31 compute-0 ovn_metadata_agent[28608]: INFO:__main__:Validating config file
Oct 09 15:54:31 compute-0 ovn_metadata_agent[28608]: INFO:__main__:Kolla config strategy set to: COPY_ALWAYS
Oct 09 15:54:31 compute-0 ovn_metadata_agent[28608]: INFO:__main__:Copying service configuration files
Oct 09 15:54:31 compute-0 ovn_metadata_agent[28608]: INFO:__main__:Deleting /etc/neutron/rootwrap.conf
Oct 09 15:54:31 compute-0 ovn_metadata_agent[28608]: INFO:__main__:Copying /etc/neutron.conf.d/01-rootwrap.conf to /etc/neutron/rootwrap.conf
Oct 09 15:54:31 compute-0 ovn_metadata_agent[28608]: INFO:__main__:Setting permission for /etc/neutron/rootwrap.conf
Oct 09 15:54:31 compute-0 ovn_metadata_agent[28608]: INFO:__main__:Writing out command to execute
Oct 09 15:54:31 compute-0 ovn_metadata_agent[28608]: INFO:__main__:Setting permission for /var/lib/neutron
Oct 09 15:54:31 compute-0 ovn_metadata_agent[28608]: INFO:__main__:Setting permission for /var/lib/neutron/kill_scripts
Oct 09 15:54:31 compute-0 ovn_metadata_agent[28608]: INFO:__main__:Setting permission for /var/lib/neutron/ovn-metadata-proxy
Oct 09 15:54:31 compute-0 ovn_metadata_agent[28608]: INFO:__main__:Setting permission for /var/lib/neutron/external
Oct 09 15:54:31 compute-0 ovn_metadata_agent[28608]: INFO:__main__:Setting permission for /var/lib/neutron/ovn_metadata_haproxy_wrapper
Oct 09 15:54:31 compute-0 ovn_metadata_agent[28608]: INFO:__main__:Setting permission for /var/lib/neutron/kill_scripts/haproxy-kill
Oct 09 15:54:31 compute-0 ovn_metadata_agent[28608]: INFO:__main__:Setting permission for /var/lib/neutron/external/pids
Oct 09 15:54:31 compute-0 systemd[1]: Reloading.
Oct 09 15:54:31 compute-0 ovn_metadata_agent[28608]: ++ cat /run_command
Oct 09 15:54:31 compute-0 ovn_metadata_agent[28608]: + CMD=neutron-ovn-metadata-agent
Oct 09 15:54:31 compute-0 ovn_metadata_agent[28608]: + ARGS=
Oct 09 15:54:31 compute-0 ovn_metadata_agent[28608]: + sudo kolla_copy_cacerts
Oct 09 15:54:31 compute-0 ovn_metadata_agent[28608]: + [[ ! -n '' ]]
Oct 09 15:54:31 compute-0 ovn_metadata_agent[28608]: + . kolla_extend_start
Oct 09 15:54:31 compute-0 ovn_metadata_agent[28608]: + echo 'Running command: '\''neutron-ovn-metadata-agent'\'''
Oct 09 15:54:31 compute-0 ovn_metadata_agent[28608]: Running command: 'neutron-ovn-metadata-agent'
Oct 09 15:54:31 compute-0 ovn_metadata_agent[28608]: + umask 0022
Oct 09 15:54:31 compute-0 ovn_metadata_agent[28608]: + exec neutron-ovn-metadata-agent
Oct 09 15:54:31 compute-0 systemd-rc-local-generator[28682]: /etc/rc.d/rc.local is not marked executable, skipping.
Oct 09 15:54:31 compute-0 systemd-sysv-generator[28686]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct 09 15:54:32 compute-0 systemd[1]: Started ovn_metadata_agent container.
Oct 09 15:54:32 compute-0 sudo[28551]: pam_unix(sudo:session): session closed for user root
Oct 09 15:54:32 compute-0 sshd-session[20362]: Connection closed by 192.168.122.30 port 37140
Oct 09 15:54:32 compute-0 sshd-session[20359]: pam_unix(sshd:session): session closed for user zuul
Oct 09 15:54:32 compute-0 systemd[1]: session-6.scope: Deactivated successfully.
Oct 09 15:54:32 compute-0 systemd[1]: session-6.scope: Consumed 32.595s CPU time.
Oct 09 15:54:32 compute-0 systemd-logind[841]: Session 6 logged out. Waiting for processes to exit.
Oct 09 15:54:32 compute-0 systemd-logind[841]: Removed session 6.
Oct 09 15:54:35 compute-0 ovn_metadata_agent[28608]: 2025-10-09 15:54:35.225 28613 INFO neutron.common.config [-] Logging enabled!
Oct 09 15:54:35 compute-0 ovn_metadata_agent[28608]: 2025-10-09 15:54:35.225 28613 INFO neutron.common.config [-] /usr/bin/neutron-ovn-metadata-agent version 26.1.0.dev268
Oct 09 15:54:35 compute-0 ovn_metadata_agent[28608]: 2025-10-09 15:54:35.225 28613 DEBUG neutron.common.config [-] command line: /usr/bin/neutron-ovn-metadata-agent setup_logging /usr/lib/python3.12/site-packages/neutron/common/config.py:124
Oct 09 15:54:35 compute-0 ovn_metadata_agent[28608]: 2025-10-09 15:54:35.225 28613 DEBUG neutron.agent.ovn.metadata_agent [-] ******************************************************************************** log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2804
Oct 09 15:54:35 compute-0 ovn_metadata_agent[28608]: 2025-10-09 15:54:35.225 28613 DEBUG neutron.agent.ovn.metadata_agent [-] Configuration options gathered from: log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2805
Oct 09 15:54:35 compute-0 ovn_metadata_agent[28608]: 2025-10-09 15:54:35.226 28613 DEBUG neutron.agent.ovn.metadata_agent [-] command line args: [] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2806
Oct 09 15:54:35 compute-0 ovn_metadata_agent[28608]: 2025-10-09 15:54:35.226 28613 DEBUG neutron.agent.ovn.metadata_agent [-] config files: ['/etc/neutron/neutron.conf'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2807
Oct 09 15:54:35 compute-0 ovn_metadata_agent[28608]: 2025-10-09 15:54:35.226 28613 DEBUG neutron.agent.ovn.metadata_agent [-] ================================================================================ log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2809
Oct 09 15:54:35 compute-0 ovn_metadata_agent[28608]: 2025-10-09 15:54:35.226 28613 DEBUG neutron.agent.ovn.metadata_agent [-] agent_down_time                = 75 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Oct 09 15:54:35 compute-0 ovn_metadata_agent[28608]: 2025-10-09 15:54:35.226 28613 DEBUG neutron.agent.ovn.metadata_agent [-] allow_bulk                     = True log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Oct 09 15:54:35 compute-0 ovn_metadata_agent[28608]: 2025-10-09 15:54:35.226 28613 DEBUG neutron.agent.ovn.metadata_agent [-] api_extensions_path            =  log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Oct 09 15:54:35 compute-0 ovn_metadata_agent[28608]: 2025-10-09 15:54:35.226 28613 DEBUG neutron.agent.ovn.metadata_agent [-] api_paste_config               = api-paste.ini log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Oct 09 15:54:35 compute-0 ovn_metadata_agent[28608]: 2025-10-09 15:54:35.226 28613 DEBUG neutron.agent.ovn.metadata_agent [-] api_workers                    = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Oct 09 15:54:35 compute-0 ovn_metadata_agent[28608]: 2025-10-09 15:54:35.226 28613 DEBUG neutron.agent.ovn.metadata_agent [-] auth_ca_cert                   = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Oct 09 15:54:35 compute-0 ovn_metadata_agent[28608]: 2025-10-09 15:54:35.226 28613 DEBUG neutron.agent.ovn.metadata_agent [-] auth_strategy                  = keystone log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Oct 09 15:54:35 compute-0 ovn_metadata_agent[28608]: 2025-10-09 15:54:35.226 28613 DEBUG neutron.agent.ovn.metadata_agent [-] backlog                        = 4096 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Oct 09 15:54:35 compute-0 ovn_metadata_agent[28608]: 2025-10-09 15:54:35.226 28613 DEBUG neutron.agent.ovn.metadata_agent [-] base_mac                       = fa:16:3e:00:00:00 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Oct 09 15:54:35 compute-0 ovn_metadata_agent[28608]: 2025-10-09 15:54:35.226 28613 DEBUG neutron.agent.ovn.metadata_agent [-] bind_host                      = 0.0.0.0 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Oct 09 15:54:35 compute-0 ovn_metadata_agent[28608]: 2025-10-09 15:54:35.227 28613 DEBUG neutron.agent.ovn.metadata_agent [-] bind_port                      = 9696 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Oct 09 15:54:35 compute-0 ovn_metadata_agent[28608]: 2025-10-09 15:54:35.227 28613 DEBUG neutron.agent.ovn.metadata_agent [-] client_socket_timeout          = 900 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Oct 09 15:54:35 compute-0 ovn_metadata_agent[28608]: 2025-10-09 15:54:35.227 28613 DEBUG neutron.agent.ovn.metadata_agent [-] config_dir                     = ['/etc/neutron.conf.d'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Oct 09 15:54:35 compute-0 ovn_metadata_agent[28608]: 2025-10-09 15:54:35.227 28613 DEBUG neutron.agent.ovn.metadata_agent [-] config_file                    = ['/etc/neutron/neutron.conf'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Oct 09 15:54:35 compute-0 ovn_metadata_agent[28608]: 2025-10-09 15:54:35.227 28613 DEBUG neutron.agent.ovn.metadata_agent [-] config_source                  = [] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Oct 09 15:54:35 compute-0 ovn_metadata_agent[28608]: 2025-10-09 15:54:35.227 28613 DEBUG neutron.agent.ovn.metadata_agent [-] control_exchange               = neutron log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Oct 09 15:54:35 compute-0 ovn_metadata_agent[28608]: 2025-10-09 15:54:35.227 28613 DEBUG neutron.agent.ovn.metadata_agent [-] core_plugin                    = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Oct 09 15:54:35 compute-0 ovn_metadata_agent[28608]: 2025-10-09 15:54:35.227 28613 DEBUG neutron.agent.ovn.metadata_agent [-] debug                          = True log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Oct 09 15:54:35 compute-0 ovn_metadata_agent[28608]: 2025-10-09 15:54:35.227 28613 DEBUG neutron.agent.ovn.metadata_agent [-] default_availability_zones     = [] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Oct 09 15:54:35 compute-0 ovn_metadata_agent[28608]: 2025-10-09 15:54:35.227 28613 DEBUG neutron.agent.ovn.metadata_agent [-] default_log_levels             = ['amqp=WARN', 'amqplib=WARN', 'boto=WARN', 'qpid=WARN', 'sqlalchemy=WARN', 'suds=INFO', 'oslo.messaging=INFO', 'oslo_messaging=INFO', 'iso8601=WARN', 'requests.packages.urllib3.connectionpool=WARN', 'urllib3.connectionpool=WARN', 'websocket=WARN', 'requests.packages.urllib3.util.retry=WARN', 'urllib3.util.retry=WARN', 'keystonemiddleware=WARN', 'routes.middleware=WARN', 'stevedore=WARN', 'taskflow=WARN', 'keystoneauth=WARN', 'oslo.cache=INFO', 'oslo_policy=INFO', 'dogpile.core.dogpile=INFO', 'OFPHandler=INFO', 'OfctlService=INFO', 'os_ken.base.app_manager=INFO', 'os_ken.controller.controller=INFO'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Oct 09 15:54:35 compute-0 ovn_metadata_agent[28608]: 2025-10-09 15:54:35.227 28613 DEBUG neutron.agent.ovn.metadata_agent [-] dhcp_agent_notification        = True log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Oct 09 15:54:35 compute-0 ovn_metadata_agent[28608]: 2025-10-09 15:54:35.227 28613 DEBUG neutron.agent.ovn.metadata_agent [-] dhcp_lease_duration            = 86400 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Oct 09 15:54:35 compute-0 ovn_metadata_agent[28608]: 2025-10-09 15:54:35.227 28613 DEBUG neutron.agent.ovn.metadata_agent [-] dhcp_load_type                 = networks log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Oct 09 15:54:35 compute-0 ovn_metadata_agent[28608]: 2025-10-09 15:54:35.228 28613 DEBUG neutron.agent.ovn.metadata_agent [-] dns_domain                     = openstacklocal log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Oct 09 15:54:35 compute-0 ovn_metadata_agent[28608]: 2025-10-09 15:54:35.228 28613 DEBUG neutron.agent.ovn.metadata_agent [-] enable_new_agents              = True log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Oct 09 15:54:35 compute-0 ovn_metadata_agent[28608]: 2025-10-09 15:54:35.228 28613 DEBUG neutron.agent.ovn.metadata_agent [-] enable_signals                 = True log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Oct 09 15:54:35 compute-0 ovn_metadata_agent[28608]: 2025-10-09 15:54:35.228 28613 DEBUG neutron.agent.ovn.metadata_agent [-] enable_traditional_dhcp        = True log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Oct 09 15:54:35 compute-0 ovn_metadata_agent[28608]: 2025-10-09 15:54:35.228 28613 DEBUG neutron.agent.ovn.metadata_agent [-] external_dns_driver            = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Oct 09 15:54:35 compute-0 ovn_metadata_agent[28608]: 2025-10-09 15:54:35.228 28613 DEBUG neutron.agent.ovn.metadata_agent [-] external_pids                  = /var/lib/neutron/external/pids log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Oct 09 15:54:35 compute-0 ovn_metadata_agent[28608]: 2025-10-09 15:54:35.228 28613 DEBUG neutron.agent.ovn.metadata_agent [-] filter_validation              = True log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Oct 09 15:54:35 compute-0 ovn_metadata_agent[28608]: 2025-10-09 15:54:35.228 28613 DEBUG neutron.agent.ovn.metadata_agent [-] global_physnet_mtu             = 1500 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Oct 09 15:54:35 compute-0 ovn_metadata_agent[28608]: 2025-10-09 15:54:35.228 28613 DEBUG neutron.agent.ovn.metadata_agent [-] host                           = compute-0 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Oct 09 15:54:35 compute-0 ovn_metadata_agent[28608]: 2025-10-09 15:54:35.228 28613 DEBUG neutron.agent.ovn.metadata_agent [-] http_retries                   = 3 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Oct 09 15:54:35 compute-0 ovn_metadata_agent[28608]: 2025-10-09 15:54:35.228 28613 DEBUG neutron.agent.ovn.metadata_agent [-] instance_format                = [instance: %(uuid)s]  log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Oct 09 15:54:35 compute-0 ovn_metadata_agent[28608]: 2025-10-09 15:54:35.228 28613 DEBUG neutron.agent.ovn.metadata_agent [-] instance_uuid_format           = [instance: %(uuid)s]  log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Oct 09 15:54:35 compute-0 ovn_metadata_agent[28608]: 2025-10-09 15:54:35.228 28613 DEBUG neutron.agent.ovn.metadata_agent [-] ipam_driver                    = internal log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Oct 09 15:54:35 compute-0 ovn_metadata_agent[28608]: 2025-10-09 15:54:35.229 28613 DEBUG neutron.agent.ovn.metadata_agent [-] ipv6_pd_enabled                = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Oct 09 15:54:35 compute-0 ovn_metadata_agent[28608]: 2025-10-09 15:54:35.229 28613 DEBUG neutron.agent.ovn.metadata_agent [-] log_color                      = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Oct 09 15:54:35 compute-0 ovn_metadata_agent[28608]: 2025-10-09 15:54:35.229 28613 DEBUG neutron.agent.ovn.metadata_agent [-] log_config_append              = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Oct 09 15:54:35 compute-0 ovn_metadata_agent[28608]: 2025-10-09 15:54:35.229 28613 DEBUG neutron.agent.ovn.metadata_agent [-] log_date_format                = %Y-%m-%d %H:%M:%S log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Oct 09 15:54:35 compute-0 ovn_metadata_agent[28608]: 2025-10-09 15:54:35.229 28613 DEBUG neutron.agent.ovn.metadata_agent [-] log_dir                        = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Oct 09 15:54:35 compute-0 ovn_metadata_agent[28608]: 2025-10-09 15:54:35.229 28613 DEBUG neutron.agent.ovn.metadata_agent [-] log_file                       = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Oct 09 15:54:35 compute-0 ovn_metadata_agent[28608]: 2025-10-09 15:54:35.229 28613 DEBUG neutron.agent.ovn.metadata_agent [-] log_rotate_interval            = 1 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Oct 09 15:54:35 compute-0 ovn_metadata_agent[28608]: 2025-10-09 15:54:35.229 28613 DEBUG neutron.agent.ovn.metadata_agent [-] log_rotate_interval_type       = days log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Oct 09 15:54:35 compute-0 ovn_metadata_agent[28608]: 2025-10-09 15:54:35.229 28613 DEBUG neutron.agent.ovn.metadata_agent [-] log_rotation_type              = none log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Oct 09 15:54:35 compute-0 ovn_metadata_agent[28608]: 2025-10-09 15:54:35.229 28613 DEBUG neutron.agent.ovn.metadata_agent [-] logging_context_format_string  = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [%(global_request_id)s %(request_id)s %(user_identity)s] %(instance)s%(message)s log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Oct 09 15:54:35 compute-0 ovn_metadata_agent[28608]: 2025-10-09 15:54:35.229 28613 DEBUG neutron.agent.ovn.metadata_agent [-] logging_debug_format_suffix    = %(funcName)s %(pathname)s:%(lineno)d log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Oct 09 15:54:35 compute-0 ovn_metadata_agent[28608]: 2025-10-09 15:54:35.229 28613 DEBUG neutron.agent.ovn.metadata_agent [-] logging_default_format_string  = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [-] %(instance)s%(message)s log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Oct 09 15:54:35 compute-0 ovn_metadata_agent[28608]: 2025-10-09 15:54:35.229 28613 DEBUG neutron.agent.ovn.metadata_agent [-] logging_exception_prefix       = %(asctime)s.%(msecs)03d %(process)d ERROR %(name)s %(instance)s log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Oct 09 15:54:35 compute-0 ovn_metadata_agent[28608]: 2025-10-09 15:54:35.229 28613 DEBUG neutron.agent.ovn.metadata_agent [-] logging_user_identity_format   = %(user)s %(project)s %(domain)s %(system_scope)s %(user_domain)s %(project_domain)s log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Oct 09 15:54:35 compute-0 ovn_metadata_agent[28608]: 2025-10-09 15:54:35.229 28613 DEBUG neutron.agent.ovn.metadata_agent [-] max_dns_nameservers            = 5 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Oct 09 15:54:35 compute-0 ovn_metadata_agent[28608]: 2025-10-09 15:54:35.230 28613 DEBUG neutron.agent.ovn.metadata_agent [-] max_header_line                = 16384 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Oct 09 15:54:35 compute-0 ovn_metadata_agent[28608]: 2025-10-09 15:54:35.230 28613 DEBUG neutron.agent.ovn.metadata_agent [-] max_logfile_count              = 30 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Oct 09 15:54:35 compute-0 ovn_metadata_agent[28608]: 2025-10-09 15:54:35.230 28613 DEBUG neutron.agent.ovn.metadata_agent [-] max_logfile_size_mb            = 200 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Oct 09 15:54:35 compute-0 ovn_metadata_agent[28608]: 2025-10-09 15:54:35.230 28613 DEBUG neutron.agent.ovn.metadata_agent [-] max_subnet_host_routes         = 20 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Oct 09 15:54:35 compute-0 ovn_metadata_agent[28608]: 2025-10-09 15:54:35.230 28613 DEBUG neutron.agent.ovn.metadata_agent [-] metadata_backlog               = 4096 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Oct 09 15:54:35 compute-0 ovn_metadata_agent[28608]: 2025-10-09 15:54:35.230 28613 DEBUG neutron.agent.ovn.metadata_agent [-] metadata_proxy_group           =  log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Oct 09 15:54:35 compute-0 ovn_metadata_agent[28608]: 2025-10-09 15:54:35.230 28613 DEBUG neutron.agent.ovn.metadata_agent [-] metadata_proxy_shared_secret   = **** log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Oct 09 15:54:35 compute-0 ovn_metadata_agent[28608]: 2025-10-09 15:54:35.230 28613 DEBUG neutron.agent.ovn.metadata_agent [-] metadata_proxy_socket          = /var/lib/neutron/metadata_proxy log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Oct 09 15:54:35 compute-0 ovn_metadata_agent[28608]: 2025-10-09 15:54:35.230 28613 DEBUG neutron.agent.ovn.metadata_agent [-] metadata_proxy_socket_mode     = deduce log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Oct 09 15:54:35 compute-0 ovn_metadata_agent[28608]: 2025-10-09 15:54:35.230 28613 DEBUG neutron.agent.ovn.metadata_agent [-] metadata_proxy_user            =  log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Oct 09 15:54:35 compute-0 ovn_metadata_agent[28608]: 2025-10-09 15:54:35.230 28613 DEBUG neutron.agent.ovn.metadata_agent [-] metadata_workers               = 1 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Oct 09 15:54:35 compute-0 ovn_metadata_agent[28608]: 2025-10-09 15:54:35.230 28613 DEBUG neutron.agent.ovn.metadata_agent [-] my_ip                          = 38.102.83.110 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Oct 09 15:54:35 compute-0 ovn_metadata_agent[28608]: 2025-10-09 15:54:35.230 28613 DEBUG neutron.agent.ovn.metadata_agent [-] my_ipv6                        = ::1 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Oct 09 15:54:35 compute-0 ovn_metadata_agent[28608]: 2025-10-09 15:54:35.230 28613 DEBUG neutron.agent.ovn.metadata_agent [-] network_link_prefix            = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Oct 09 15:54:35 compute-0 ovn_metadata_agent[28608]: 2025-10-09 15:54:35.231 28613 DEBUG neutron.agent.ovn.metadata_agent [-] notify_nova_on_port_data_changes = True log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Oct 09 15:54:35 compute-0 ovn_metadata_agent[28608]: 2025-10-09 15:54:35.231 28613 DEBUG neutron.agent.ovn.metadata_agent [-] notify_nova_on_port_status_changes = True log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Oct 09 15:54:35 compute-0 ovn_metadata_agent[28608]: 2025-10-09 15:54:35.231 28613 DEBUG neutron.agent.ovn.metadata_agent [-] nova_client_cert               =  log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Oct 09 15:54:35 compute-0 ovn_metadata_agent[28608]: 2025-10-09 15:54:35.231 28613 DEBUG neutron.agent.ovn.metadata_agent [-] nova_client_priv_key           =  log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Oct 09 15:54:35 compute-0 ovn_metadata_agent[28608]: 2025-10-09 15:54:35.231 28613 DEBUG neutron.agent.ovn.metadata_agent [-] nova_metadata_host             = nova-metadata-internal.openstack.svc log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Oct 09 15:54:35 compute-0 ovn_metadata_agent[28608]: 2025-10-09 15:54:35.231 28613 DEBUG neutron.agent.ovn.metadata_agent [-] nova_metadata_insecure         = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Oct 09 15:54:35 compute-0 ovn_metadata_agent[28608]: 2025-10-09 15:54:35.231 28613 DEBUG neutron.agent.ovn.metadata_agent [-] nova_metadata_port             = 8775 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Oct 09 15:54:35 compute-0 ovn_metadata_agent[28608]: 2025-10-09 15:54:35.231 28613 DEBUG neutron.agent.ovn.metadata_agent [-] nova_metadata_protocol         = https log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Oct 09 15:54:35 compute-0 ovn_metadata_agent[28608]: 2025-10-09 15:54:35.231 28613 DEBUG neutron.agent.ovn.metadata_agent [-] pagination_max_limit           = -1 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Oct 09 15:54:35 compute-0 ovn_metadata_agent[28608]: 2025-10-09 15:54:35.231 28613 DEBUG neutron.agent.ovn.metadata_agent [-] periodic_fuzzy_delay           = 5 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Oct 09 15:54:35 compute-0 ovn_metadata_agent[28608]: 2025-10-09 15:54:35.231 28613 DEBUG neutron.agent.ovn.metadata_agent [-] periodic_interval              = 40 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Oct 09 15:54:35 compute-0 ovn_metadata_agent[28608]: 2025-10-09 15:54:35.231 28613 DEBUG neutron.agent.ovn.metadata_agent [-] publish_errors                 = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Oct 09 15:54:35 compute-0 ovn_metadata_agent[28608]: 2025-10-09 15:54:35.231 28613 DEBUG neutron.agent.ovn.metadata_agent [-] rate_limit_burst               = 0 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Oct 09 15:54:35 compute-0 ovn_metadata_agent[28608]: 2025-10-09 15:54:35.232 28613 DEBUG neutron.agent.ovn.metadata_agent [-] rate_limit_except_level        = CRITICAL log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Oct 09 15:54:35 compute-0 ovn_metadata_agent[28608]: 2025-10-09 15:54:35.232 28613 DEBUG neutron.agent.ovn.metadata_agent [-] rate_limit_interval            = 0 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Oct 09 15:54:35 compute-0 ovn_metadata_agent[28608]: 2025-10-09 15:54:35.232 28613 DEBUG neutron.agent.ovn.metadata_agent [-] retry_until_window             = 30 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Oct 09 15:54:35 compute-0 ovn_metadata_agent[28608]: 2025-10-09 15:54:35.232 28613 DEBUG neutron.agent.ovn.metadata_agent [-] rpc_resources_processing_step  = 20 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Oct 09 15:54:35 compute-0 ovn_metadata_agent[28608]: 2025-10-09 15:54:35.232 28613 DEBUG neutron.agent.ovn.metadata_agent [-] rpc_response_max_timeout       = 600 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Oct 09 15:54:35 compute-0 ovn_metadata_agent[28608]: 2025-10-09 15:54:35.232 28613 DEBUG neutron.agent.ovn.metadata_agent [-] rpc_state_report_workers       = 1 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Oct 09 15:54:35 compute-0 ovn_metadata_agent[28608]: 2025-10-09 15:54:35.232 28613 DEBUG neutron.agent.ovn.metadata_agent [-] rpc_workers                    = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Oct 09 15:54:35 compute-0 ovn_metadata_agent[28608]: 2025-10-09 15:54:35.232 28613 DEBUG neutron.agent.ovn.metadata_agent [-] send_events_interval           = 2 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Oct 09 15:54:35 compute-0 ovn_metadata_agent[28608]: 2025-10-09 15:54:35.232 28613 DEBUG neutron.agent.ovn.metadata_agent [-] service_plugins                = [] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Oct 09 15:54:35 compute-0 ovn_metadata_agent[28608]: 2025-10-09 15:54:35.232 28613 DEBUG neutron.agent.ovn.metadata_agent [-] setproctitle                   = on log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Oct 09 15:54:35 compute-0 ovn_metadata_agent[28608]: 2025-10-09 15:54:35.232 28613 DEBUG neutron.agent.ovn.metadata_agent [-] shell_completion               = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Oct 09 15:54:35 compute-0 ovn_metadata_agent[28608]: 2025-10-09 15:54:35.232 28613 DEBUG neutron.agent.ovn.metadata_agent [-] state_path                     = /var/lib/neutron log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Oct 09 15:54:35 compute-0 ovn_metadata_agent[28608]: 2025-10-09 15:54:35.232 28613 DEBUG neutron.agent.ovn.metadata_agent [-] syslog_log_facility            = syslog log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Oct 09 15:54:35 compute-0 ovn_metadata_agent[28608]: 2025-10-09 15:54:35.233 28613 DEBUG neutron.agent.ovn.metadata_agent [-] tcp_keepidle                   = 600 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Oct 09 15:54:35 compute-0 ovn_metadata_agent[28608]: 2025-10-09 15:54:35.233 28613 DEBUG neutron.agent.ovn.metadata_agent [-] transport_url                  = **** log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Oct 09 15:54:35 compute-0 ovn_metadata_agent[28608]: 2025-10-09 15:54:35.233 28613 DEBUG neutron.agent.ovn.metadata_agent [-] use_journal                    = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Oct 09 15:54:35 compute-0 ovn_metadata_agent[28608]: 2025-10-09 15:54:35.233 28613 DEBUG neutron.agent.ovn.metadata_agent [-] use_json                       = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Oct 09 15:54:35 compute-0 ovn_metadata_agent[28608]: 2025-10-09 15:54:35.233 28613 DEBUG neutron.agent.ovn.metadata_agent [-] use_ssl                        = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Oct 09 15:54:35 compute-0 ovn_metadata_agent[28608]: 2025-10-09 15:54:35.233 28613 DEBUG neutron.agent.ovn.metadata_agent [-] use_stderr                     = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Oct 09 15:54:35 compute-0 ovn_metadata_agent[28608]: 2025-10-09 15:54:35.233 28613 DEBUG neutron.agent.ovn.metadata_agent [-] use_syslog                     = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Oct 09 15:54:35 compute-0 ovn_metadata_agent[28608]: 2025-10-09 15:54:35.233 28613 DEBUG neutron.agent.ovn.metadata_agent [-] vlan_qinq                      = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Oct 09 15:54:35 compute-0 ovn_metadata_agent[28608]: 2025-10-09 15:54:35.233 28613 DEBUG neutron.agent.ovn.metadata_agent [-] vlan_transparent               = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Oct 09 15:54:35 compute-0 ovn_metadata_agent[28608]: 2025-10-09 15:54:35.233 28613 DEBUG neutron.agent.ovn.metadata_agent [-] watch_log_file                 = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Oct 09 15:54:35 compute-0 ovn_metadata_agent[28608]: 2025-10-09 15:54:35.233 28613 DEBUG neutron.agent.ovn.metadata_agent [-] wsgi_default_pool_size         = 100 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Oct 09 15:54:35 compute-0 ovn_metadata_agent[28608]: 2025-10-09 15:54:35.233 28613 DEBUG neutron.agent.ovn.metadata_agent [-] wsgi_keep_alive                = True log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Oct 09 15:54:35 compute-0 ovn_metadata_agent[28608]: 2025-10-09 15:54:35.234 28613 DEBUG neutron.agent.ovn.metadata_agent [-] wsgi_log_format                = %(client_ip)s "%(request_line)s" status: %(status_code)s  len: %(body_length)s time: %(wall_seconds).7f log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Oct 09 15:54:35 compute-0 ovn_metadata_agent[28608]: 2025-10-09 15:54:35.234 28613 DEBUG neutron.agent.ovn.metadata_agent [-] wsgi_server_debug              = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Oct 09 15:54:35 compute-0 ovn_metadata_agent[28608]: 2025-10-09 15:54:35.234 28613 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_concurrency.disable_process_locking = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 09 15:54:35 compute-0 ovn_metadata_agent[28608]: 2025-10-09 15:54:35.234 28613 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_concurrency.lock_path     = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 09 15:54:35 compute-0 ovn_metadata_agent[28608]: 2025-10-09 15:54:35.234 28613 DEBUG neutron.agent.ovn.metadata_agent [-] profiler.connection_string     = messaging:// log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 09 15:54:35 compute-0 ovn_metadata_agent[28608]: 2025-10-09 15:54:35.234 28613 DEBUG neutron.agent.ovn.metadata_agent [-] profiler.enabled               = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 09 15:54:35 compute-0 ovn_metadata_agent[28608]: 2025-10-09 15:54:35.234 28613 DEBUG neutron.agent.ovn.metadata_agent [-] profiler.es_doc_type           = notification log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 09 15:54:35 compute-0 ovn_metadata_agent[28608]: 2025-10-09 15:54:35.234 28613 DEBUG neutron.agent.ovn.metadata_agent [-] profiler.es_scroll_size        = 10000 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 09 15:54:35 compute-0 ovn_metadata_agent[28608]: 2025-10-09 15:54:35.234 28613 DEBUG neutron.agent.ovn.metadata_agent [-] profiler.es_scroll_time        = 2m log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 09 15:54:35 compute-0 ovn_metadata_agent[28608]: 2025-10-09 15:54:35.234 28613 DEBUG neutron.agent.ovn.metadata_agent [-] profiler.filter_error_trace    = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 09 15:54:35 compute-0 ovn_metadata_agent[28608]: 2025-10-09 15:54:35.234 28613 DEBUG neutron.agent.ovn.metadata_agent [-] profiler.hmac_keys             = **** log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 09 15:54:35 compute-0 ovn_metadata_agent[28608]: 2025-10-09 15:54:35.234 28613 DEBUG neutron.agent.ovn.metadata_agent [-] profiler.sentinel_service_name = mymaster log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 09 15:54:35 compute-0 ovn_metadata_agent[28608]: 2025-10-09 15:54:35.234 28613 DEBUG neutron.agent.ovn.metadata_agent [-] profiler.socket_timeout        = 0.1 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 09 15:54:35 compute-0 ovn_metadata_agent[28608]: 2025-10-09 15:54:35.235 28613 DEBUG neutron.agent.ovn.metadata_agent [-] profiler.trace_requests        = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 09 15:54:35 compute-0 ovn_metadata_agent[28608]: 2025-10-09 15:54:35.235 28613 DEBUG neutron.agent.ovn.metadata_agent [-] profiler.trace_sqlalchemy      = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 09 15:54:35 compute-0 ovn_metadata_agent[28608]: 2025-10-09 15:54:35.235 28613 DEBUG neutron.agent.ovn.metadata_agent [-] profiler_jaeger.process_tags   = {} log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 09 15:54:35 compute-0 ovn_metadata_agent[28608]: 2025-10-09 15:54:35.235 28613 DEBUG neutron.agent.ovn.metadata_agent [-] profiler_jaeger.service_name_prefix = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 09 15:54:35 compute-0 ovn_metadata_agent[28608]: 2025-10-09 15:54:35.235 28613 DEBUG neutron.agent.ovn.metadata_agent [-] profiler_otlp.service_name_prefix = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 09 15:54:35 compute-0 ovn_metadata_agent[28608]: 2025-10-09 15:54:35.235 28613 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_metrics.metrics_buffer_size = 1000 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 09 15:54:35 compute-0 ovn_metadata_agent[28608]: 2025-10-09 15:54:35.235 28613 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_metrics.metrics_enabled = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 09 15:54:35 compute-0 ovn_metadata_agent[28608]: 2025-10-09 15:54:35.235 28613 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_metrics.metrics_process_name =  log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 09 15:54:35 compute-0 ovn_metadata_agent[28608]: 2025-10-09 15:54:35.235 28613 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_metrics.metrics_socket_file = /var/tmp/metrics_collector.sock log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 09 15:54:35 compute-0 ovn_metadata_agent[28608]: 2025-10-09 15:54:35.235 28613 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_metrics.metrics_thread_stop_timeout = 10 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 09 15:54:35 compute-0 ovn_metadata_agent[28608]: 2025-10-09 15:54:35.235 28613 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_policy.enforce_new_defaults = True log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 09 15:54:35 compute-0 ovn_metadata_agent[28608]: 2025-10-09 15:54:35.235 28613 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_policy.enforce_scope      = True log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 09 15:54:35 compute-0 ovn_metadata_agent[28608]: 2025-10-09 15:54:35.235 28613 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_policy.policy_default_rule = default log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 09 15:54:35 compute-0 ovn_metadata_agent[28608]: 2025-10-09 15:54:35.235 28613 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_policy.policy_dirs        = ['policy.d'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 09 15:54:35 compute-0 ovn_metadata_agent[28608]: 2025-10-09 15:54:35.236 28613 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_policy.policy_file        = policy.yaml log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 09 15:54:35 compute-0 ovn_metadata_agent[28608]: 2025-10-09 15:54:35.236 28613 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_policy.remote_content_type = application/x-www-form-urlencoded log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 09 15:54:35 compute-0 ovn_metadata_agent[28608]: 2025-10-09 15:54:35.236 28613 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_policy.remote_ssl_ca_crt_file = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 09 15:54:35 compute-0 ovn_metadata_agent[28608]: 2025-10-09 15:54:35.236 28613 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_policy.remote_ssl_client_crt_file = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 09 15:54:35 compute-0 ovn_metadata_agent[28608]: 2025-10-09 15:54:35.236 28613 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_policy.remote_ssl_client_key_file = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 09 15:54:35 compute-0 ovn_metadata_agent[28608]: 2025-10-09 15:54:35.236 28613 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_policy.remote_ssl_verify_server_crt = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 09 15:54:35 compute-0 ovn_metadata_agent[28608]: 2025-10-09 15:54:35.236 28613 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_policy.remote_timeout     = 60.0 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 09 15:54:35 compute-0 ovn_metadata_agent[28608]: 2025-10-09 15:54:35.236 28613 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_middleware.http_basic_auth_user_file = /etc/htpasswd log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 09 15:54:35 compute-0 ovn_metadata_agent[28608]: 2025-10-09 15:54:35.236 28613 DEBUG neutron.agent.ovn.metadata_agent [-] service_providers.service_provider = [] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 09 15:54:35 compute-0 ovn_metadata_agent[28608]: 2025-10-09 15:54:35.236 28613 DEBUG neutron.agent.ovn.metadata_agent [-] privsep.capabilities           = [21, 12, 1, 2, 19] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 09 15:54:35 compute-0 ovn_metadata_agent[28608]: 2025-10-09 15:54:35.236 28613 DEBUG neutron.agent.ovn.metadata_agent [-] privsep.group                  = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 09 15:54:35 compute-0 ovn_metadata_agent[28608]: 2025-10-09 15:54:35.236 28613 DEBUG neutron.agent.ovn.metadata_agent [-] privsep.helper_command         = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 09 15:54:35 compute-0 ovn_metadata_agent[28608]: 2025-10-09 15:54:35.236 28613 DEBUG neutron.agent.ovn.metadata_agent [-] privsep.log_daemon_traceback   = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 09 15:54:35 compute-0 ovn_metadata_agent[28608]: 2025-10-09 15:54:35.236 28613 DEBUG neutron.agent.ovn.metadata_agent [-] privsep.logger_name            = oslo_privsep.daemon log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 09 15:54:35 compute-0 ovn_metadata_agent[28608]: 2025-10-09 15:54:35.237 28613 DEBUG neutron.agent.ovn.metadata_agent [-] privsep.thread_pool_size       = 8 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 09 15:54:35 compute-0 ovn_metadata_agent[28608]: 2025-10-09 15:54:35.237 28613 DEBUG neutron.agent.ovn.metadata_agent [-] privsep.user                   = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 09 15:54:35 compute-0 ovn_metadata_agent[28608]: 2025-10-09 15:54:35.237 28613 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_dhcp_release.capabilities = [21, 12] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 09 15:54:35 compute-0 ovn_metadata_agent[28608]: 2025-10-09 15:54:35.237 28613 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_dhcp_release.group     = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 09 15:54:35 compute-0 ovn_metadata_agent[28608]: 2025-10-09 15:54:35.237 28613 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_dhcp_release.helper_command = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 09 15:54:35 compute-0 ovn_metadata_agent[28608]: 2025-10-09 15:54:35.237 28613 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_dhcp_release.log_daemon_traceback = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 09 15:54:35 compute-0 ovn_metadata_agent[28608]: 2025-10-09 15:54:35.237 28613 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_dhcp_release.logger_name = oslo_privsep.daemon log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 09 15:54:35 compute-0 ovn_metadata_agent[28608]: 2025-10-09 15:54:35.237 28613 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_dhcp_release.thread_pool_size = 8 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 09 15:54:35 compute-0 ovn_metadata_agent[28608]: 2025-10-09 15:54:35.237 28613 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_dhcp_release.user      = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 09 15:54:35 compute-0 ovn_metadata_agent[28608]: 2025-10-09 15:54:35.237 28613 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_ovs_vsctl.capabilities = [21, 12] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 09 15:54:35 compute-0 ovn_metadata_agent[28608]: 2025-10-09 15:54:35.237 28613 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_ovs_vsctl.group        = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 09 15:54:35 compute-0 ovn_metadata_agent[28608]: 2025-10-09 15:54:35.237 28613 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_ovs_vsctl.helper_command = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 09 15:54:35 compute-0 ovn_metadata_agent[28608]: 2025-10-09 15:54:35.237 28613 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_ovs_vsctl.log_daemon_traceback = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 09 15:54:35 compute-0 ovn_metadata_agent[28608]: 2025-10-09 15:54:35.237 28613 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_ovs_vsctl.logger_name  = oslo_privsep.daemon log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 09 15:54:35 compute-0 ovn_metadata_agent[28608]: 2025-10-09 15:54:35.237 28613 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_ovs_vsctl.thread_pool_size = 8 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 09 15:54:35 compute-0 ovn_metadata_agent[28608]: 2025-10-09 15:54:35.238 28613 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_ovs_vsctl.user         = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 09 15:54:35 compute-0 ovn_metadata_agent[28608]: 2025-10-09 15:54:35.238 28613 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_namespace.capabilities = [21] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 09 15:54:35 compute-0 ovn_metadata_agent[28608]: 2025-10-09 15:54:35.238 28613 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_namespace.group        = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 09 15:54:35 compute-0 ovn_metadata_agent[28608]: 2025-10-09 15:54:35.238 28613 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_namespace.helper_command = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 09 15:54:35 compute-0 ovn_metadata_agent[28608]: 2025-10-09 15:54:35.238 28613 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_namespace.log_daemon_traceback = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 09 15:54:35 compute-0 ovn_metadata_agent[28608]: 2025-10-09 15:54:35.238 28613 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_namespace.logger_name  = oslo_privsep.daemon log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 09 15:54:35 compute-0 ovn_metadata_agent[28608]: 2025-10-09 15:54:35.238 28613 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_namespace.thread_pool_size = 8 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 09 15:54:35 compute-0 ovn_metadata_agent[28608]: 2025-10-09 15:54:35.238 28613 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_namespace.user         = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 09 15:54:35 compute-0 ovn_metadata_agent[28608]: 2025-10-09 15:54:35.238 28613 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_conntrack.capabilities = [12] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 09 15:54:35 compute-0 ovn_metadata_agent[28608]: 2025-10-09 15:54:35.238 28613 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_conntrack.group        = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 09 15:54:35 compute-0 ovn_metadata_agent[28608]: 2025-10-09 15:54:35.239 28613 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_conntrack.helper_command = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 09 15:54:35 compute-0 ovn_metadata_agent[28608]: 2025-10-09 15:54:35.239 28613 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_conntrack.log_daemon_traceback = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 09 15:54:35 compute-0 ovn_metadata_agent[28608]: 2025-10-09 15:54:35.239 28613 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_conntrack.logger_name  = oslo_privsep.daemon log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 09 15:54:35 compute-0 ovn_metadata_agent[28608]: 2025-10-09 15:54:35.239 28613 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_conntrack.thread_pool_size = 8 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 09 15:54:35 compute-0 ovn_metadata_agent[28608]: 2025-10-09 15:54:35.239 28613 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_conntrack.user         = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 09 15:54:35 compute-0 ovn_metadata_agent[28608]: 2025-10-09 15:54:35.239 28613 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_link.capabilities      = [12, 21] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 09 15:54:35 compute-0 ovn_metadata_agent[28608]: 2025-10-09 15:54:35.239 28613 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_link.group             = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 09 15:54:35 compute-0 ovn_metadata_agent[28608]: 2025-10-09 15:54:35.239 28613 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_link.helper_command    = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 09 15:54:35 compute-0 ovn_metadata_agent[28608]: 2025-10-09 15:54:35.239 28613 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_link.log_daemon_traceback = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 09 15:54:35 compute-0 ovn_metadata_agent[28608]: 2025-10-09 15:54:35.239 28613 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_link.logger_name       = oslo_privsep.daemon log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 09 15:54:35 compute-0 ovn_metadata_agent[28608]: 2025-10-09 15:54:35.239 28613 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_link.thread_pool_size  = 8 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 09 15:54:35 compute-0 ovn_metadata_agent[28608]: 2025-10-09 15:54:35.239 28613 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_link.user              = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 09 15:54:35 compute-0 ovn_metadata_agent[28608]: 2025-10-09 15:54:35.240 28613 DEBUG neutron.agent.ovn.metadata_agent [-] AGENT.check_child_processes_action = respawn log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 09 15:54:35 compute-0 ovn_metadata_agent[28608]: 2025-10-09 15:54:35.240 28613 DEBUG neutron.agent.ovn.metadata_agent [-] AGENT.check_child_processes_interval = 60 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 09 15:54:35 compute-0 ovn_metadata_agent[28608]: 2025-10-09 15:54:35.240 28613 DEBUG neutron.agent.ovn.metadata_agent [-] AGENT.comment_iptables_rules   = True log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 09 15:54:35 compute-0 ovn_metadata_agent[28608]: 2025-10-09 15:54:35.240 28613 DEBUG neutron.agent.ovn.metadata_agent [-] AGENT.debug_iptables_rules     = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 09 15:54:35 compute-0 ovn_metadata_agent[28608]: 2025-10-09 15:54:35.240 28613 DEBUG neutron.agent.ovn.metadata_agent [-] AGENT.kill_scripts_path        = /etc/neutron/kill_scripts/ log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 09 15:54:35 compute-0 ovn_metadata_agent[28608]: 2025-10-09 15:54:35.240 28613 DEBUG neutron.agent.ovn.metadata_agent [-] AGENT.root_helper              = sudo neutron-rootwrap /etc/neutron/rootwrap.conf log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 09 15:54:35 compute-0 ovn_metadata_agent[28608]: 2025-10-09 15:54:35.240 28613 DEBUG neutron.agent.ovn.metadata_agent [-] AGENT.root_helper_daemon       = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 09 15:54:35 compute-0 ovn_metadata_agent[28608]: 2025-10-09 15:54:35.240 28613 DEBUG neutron.agent.ovn.metadata_agent [-] AGENT.use_helper_for_ns_read   = True log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 09 15:54:35 compute-0 ovn_metadata_agent[28608]: 2025-10-09 15:54:35.240 28613 DEBUG neutron.agent.ovn.metadata_agent [-] AGENT.use_random_fully         = True log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 09 15:54:35 compute-0 ovn_metadata_agent[28608]: 2025-10-09 15:54:35.240 28613 DEBUG neutron.agent.ovn.metadata_agent [-] OVS.bridge_mac_table_size      = 50000 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 09 15:54:35 compute-0 ovn_metadata_agent[28608]: 2025-10-09 15:54:35.240 28613 DEBUG neutron.agent.ovn.metadata_agent [-] OVS.bridge_mappings            = [] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 09 15:54:35 compute-0 ovn_metadata_agent[28608]: 2025-10-09 15:54:35.240 28613 DEBUG neutron.agent.ovn.metadata_agent [-] OVS.datapath_type              = system log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 09 15:54:35 compute-0 ovn_metadata_agent[28608]: 2025-10-09 15:54:35.240 28613 DEBUG neutron.agent.ovn.metadata_agent [-] OVS.igmp_flood                 = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 09 15:54:35 compute-0 ovn_metadata_agent[28608]: 2025-10-09 15:54:35.240 28613 DEBUG neutron.agent.ovn.metadata_agent [-] OVS.igmp_flood_reports         = True log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 09 15:54:35 compute-0 ovn_metadata_agent[28608]: 2025-10-09 15:54:35.241 28613 DEBUG neutron.agent.ovn.metadata_agent [-] OVS.igmp_flood_unregistered    = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 09 15:54:35 compute-0 ovn_metadata_agent[28608]: 2025-10-09 15:54:35.241 28613 DEBUG neutron.agent.ovn.metadata_agent [-] OVS.igmp_snooping_enable       = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 09 15:54:35 compute-0 ovn_metadata_agent[28608]: 2025-10-09 15:54:35.241 28613 DEBUG neutron.agent.ovn.metadata_agent [-] OVS.int_peer_patch_port        = patch-tun log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 09 15:54:35 compute-0 ovn_metadata_agent[28608]: 2025-10-09 15:54:35.241 28613 DEBUG neutron.agent.ovn.metadata_agent [-] OVS.integration_bridge         = br-int log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 09 15:54:35 compute-0 ovn_metadata_agent[28608]: 2025-10-09 15:54:35.241 28613 DEBUG neutron.agent.ovn.metadata_agent [-] OVS.local_ip                   = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 09 15:54:35 compute-0 ovn_metadata_agent[28608]: 2025-10-09 15:54:35.241 28613 DEBUG neutron.agent.ovn.metadata_agent [-] OVS.of_connect_timeout         = 300 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 09 15:54:35 compute-0 ovn_metadata_agent[28608]: 2025-10-09 15:54:35.241 28613 DEBUG neutron.agent.ovn.metadata_agent [-] OVS.of_inactivity_probe        = 10 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 09 15:54:35 compute-0 ovn_metadata_agent[28608]: 2025-10-09 15:54:35.241 28613 DEBUG neutron.agent.ovn.metadata_agent [-] OVS.of_listen_address          = 127.0.0.1 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 09 15:54:35 compute-0 ovn_metadata_agent[28608]: 2025-10-09 15:54:35.241 28613 DEBUG neutron.agent.ovn.metadata_agent [-] OVS.of_listen_port             = 6633 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 09 15:54:35 compute-0 ovn_metadata_agent[28608]: 2025-10-09 15:54:35.241 28613 DEBUG neutron.agent.ovn.metadata_agent [-] OVS.of_request_timeout         = 300 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 09 15:54:35 compute-0 ovn_metadata_agent[28608]: 2025-10-09 15:54:35.241 28613 DEBUG neutron.agent.ovn.metadata_agent [-] OVS.openflow_processed_per_port = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 09 15:54:35 compute-0 ovn_metadata_agent[28608]: 2025-10-09 15:54:35.241 28613 DEBUG neutron.agent.ovn.metadata_agent [-] OVS.ovsdb_connection           = tcp:127.0.0.1:6640 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 09 15:54:35 compute-0 ovn_metadata_agent[28608]: 2025-10-09 15:54:35.241 28613 DEBUG neutron.agent.ovn.metadata_agent [-] OVS.ovsdb_debug                = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 09 15:54:35 compute-0 ovn_metadata_agent[28608]: 2025-10-09 15:54:35.241 28613 DEBUG neutron.agent.ovn.metadata_agent [-] OVS.ovsdb_timeout              = 10 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 09 15:54:35 compute-0 ovn_metadata_agent[28608]: 2025-10-09 15:54:35.241 28613 DEBUG neutron.agent.ovn.metadata_agent [-] OVS.qos_meter_bandwidth        = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 09 15:54:35 compute-0 ovn_metadata_agent[28608]: 2025-10-09 15:54:35.242 28613 DEBUG neutron.agent.ovn.metadata_agent [-] OVS.resource_provider_bandwidths = [] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 09 15:54:35 compute-0 ovn_metadata_agent[28608]: 2025-10-09 15:54:35.242 28613 DEBUG neutron.agent.ovn.metadata_agent [-] OVS.resource_provider_default_hypervisor = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 09 15:54:35 compute-0 ovn_metadata_agent[28608]: 2025-10-09 15:54:35.242 28613 DEBUG neutron.agent.ovn.metadata_agent [-] OVS.resource_provider_hypervisors = {} log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 09 15:54:35 compute-0 ovn_metadata_agent[28608]: 2025-10-09 15:54:35.242 28613 DEBUG neutron.agent.ovn.metadata_agent [-] OVS.resource_provider_inventory_defaults = {'allocation_ratio': 1.0, 'min_unit': 1, 'step_size': 1, 'reserved': 0} log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 09 15:54:35 compute-0 ovn_metadata_agent[28608]: 2025-10-09 15:54:35.242 28613 DEBUG neutron.agent.ovn.metadata_agent [-] OVS.resource_provider_packet_processing_inventory_defaults = {'allocation_ratio': 1.0, 'min_unit': 1, 'step_size': 1, 'reserved': 0} log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 09 15:54:35 compute-0 ovn_metadata_agent[28608]: 2025-10-09 15:54:35.242 28613 DEBUG neutron.agent.ovn.metadata_agent [-] OVS.resource_provider_packet_processing_with_direction = [] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 09 15:54:35 compute-0 ovn_metadata_agent[28608]: 2025-10-09 15:54:35.242 28613 DEBUG neutron.agent.ovn.metadata_agent [-] OVS.resource_provider_packet_processing_without_direction = [] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 09 15:54:35 compute-0 ovn_metadata_agent[28608]: 2025-10-09 15:54:35.242 28613 DEBUG neutron.agent.ovn.metadata_agent [-] OVS.ssl_ca_cert_file           = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 09 15:54:35 compute-0 ovn_metadata_agent[28608]: 2025-10-09 15:54:35.242 28613 DEBUG neutron.agent.ovn.metadata_agent [-] OVS.ssl_cert_file              = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 09 15:54:35 compute-0 ovn_metadata_agent[28608]: 2025-10-09 15:54:35.242 28613 DEBUG neutron.agent.ovn.metadata_agent [-] OVS.ssl_key_file               = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 09 15:54:35 compute-0 ovn_metadata_agent[28608]: 2025-10-09 15:54:35.242 28613 DEBUG neutron.agent.ovn.metadata_agent [-] OVS.tun_peer_patch_port        = patch-int log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 09 15:54:35 compute-0 ovn_metadata_agent[28608]: 2025-10-09 15:54:35.242 28613 DEBUG neutron.agent.ovn.metadata_agent [-] OVS.tunnel_bridge              = br-tun log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 09 15:54:35 compute-0 ovn_metadata_agent[28608]: 2025-10-09 15:54:35.242 28613 DEBUG neutron.agent.ovn.metadata_agent [-] OVS.vhostuser_socket_dir       = /var/run/openvswitch log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 09 15:54:35 compute-0 ovn_metadata_agent[28608]: 2025-10-09 15:54:35.242 28613 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_versionedobjects.fatal_exception_format_errors = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 09 15:54:35 compute-0 ovn_metadata_agent[28608]: 2025-10-09 15:54:35.243 28613 DEBUG neutron.agent.ovn.metadata_agent [-] QUOTAS.default_quota           = -1 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 09 15:54:35 compute-0 ovn_metadata_agent[28608]: 2025-10-09 15:54:35.243 28613 DEBUG neutron.agent.ovn.metadata_agent [-] QUOTAS.quota_driver            = neutron.db.quota.driver_nolock.DbQuotaNoLockDriver log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 09 15:54:35 compute-0 ovn_metadata_agent[28608]: 2025-10-09 15:54:35.243 28613 DEBUG neutron.agent.ovn.metadata_agent [-] QUOTAS.quota_network           = 100 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 09 15:54:35 compute-0 ovn_metadata_agent[28608]: 2025-10-09 15:54:35.243 28613 DEBUG neutron.agent.ovn.metadata_agent [-] QUOTAS.quota_port              = 500 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 09 15:54:35 compute-0 ovn_metadata_agent[28608]: 2025-10-09 15:54:35.243 28613 DEBUG neutron.agent.ovn.metadata_agent [-] QUOTAS.quota_security_group    = 10 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 09 15:54:35 compute-0 ovn_metadata_agent[28608]: 2025-10-09 15:54:35.243 28613 DEBUG neutron.agent.ovn.metadata_agent [-] QUOTAS.quota_security_group_rule = 100 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 09 15:54:35 compute-0 ovn_metadata_agent[28608]: 2025-10-09 15:54:35.243 28613 DEBUG neutron.agent.ovn.metadata_agent [-] QUOTAS.quota_subnet            = 100 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 09 15:54:35 compute-0 ovn_metadata_agent[28608]: 2025-10-09 15:54:35.243 28613 DEBUG neutron.agent.ovn.metadata_agent [-] QUOTAS.track_quota_usage       = True log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 09 15:54:35 compute-0 ovn_metadata_agent[28608]: 2025-10-09 15:54:35.243 28613 DEBUG neutron.agent.ovn.metadata_agent [-] agent.extensions               = [] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 09 15:54:35 compute-0 ovn_metadata_agent[28608]: 2025-10-09 15:54:35.243 28613 DEBUG neutron.agent.ovn.metadata_agent [-] nova.auth_section              = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 09 15:54:35 compute-0 ovn_metadata_agent[28608]: 2025-10-09 15:54:35.243 28613 DEBUG neutron.agent.ovn.metadata_agent [-] nova.auth_type                 = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 09 15:54:35 compute-0 ovn_metadata_agent[28608]: 2025-10-09 15:54:35.243 28613 DEBUG neutron.agent.ovn.metadata_agent [-] nova.cafile                    = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 09 15:54:35 compute-0 ovn_metadata_agent[28608]: 2025-10-09 15:54:35.243 28613 DEBUG neutron.agent.ovn.metadata_agent [-] nova.certfile                  = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 09 15:54:35 compute-0 ovn_metadata_agent[28608]: 2025-10-09 15:54:35.243 28613 DEBUG neutron.agent.ovn.metadata_agent [-] nova.collect_timing            = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 09 15:54:35 compute-0 ovn_metadata_agent[28608]: 2025-10-09 15:54:35.244 28613 DEBUG neutron.agent.ovn.metadata_agent [-] nova.endpoint_type             = public log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 09 15:54:35 compute-0 ovn_metadata_agent[28608]: 2025-10-09 15:54:35.244 28613 DEBUG neutron.agent.ovn.metadata_agent [-] nova.insecure                  = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 09 15:54:35 compute-0 ovn_metadata_agent[28608]: 2025-10-09 15:54:35.244 28613 DEBUG neutron.agent.ovn.metadata_agent [-] nova.keyfile                   = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 09 15:54:35 compute-0 ovn_metadata_agent[28608]: 2025-10-09 15:54:35.244 28613 DEBUG neutron.agent.ovn.metadata_agent [-] nova.region_name               = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 09 15:54:35 compute-0 ovn_metadata_agent[28608]: 2025-10-09 15:54:35.244 28613 DEBUG neutron.agent.ovn.metadata_agent [-] nova.split_loggers             = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 09 15:54:35 compute-0 ovn_metadata_agent[28608]: 2025-10-09 15:54:35.244 28613 DEBUG neutron.agent.ovn.metadata_agent [-] nova.timeout                   = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 09 15:54:35 compute-0 ovn_metadata_agent[28608]: 2025-10-09 15:54:35.244 28613 DEBUG neutron.agent.ovn.metadata_agent [-] placement.auth_section         = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 09 15:54:35 compute-0 ovn_metadata_agent[28608]: 2025-10-09 15:54:35.244 28613 DEBUG neutron.agent.ovn.metadata_agent [-] placement.auth_type            = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 09 15:54:35 compute-0 ovn_metadata_agent[28608]: 2025-10-09 15:54:35.244 28613 DEBUG neutron.agent.ovn.metadata_agent [-] placement.cafile               = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 09 15:54:35 compute-0 ovn_metadata_agent[28608]: 2025-10-09 15:54:35.244 28613 DEBUG neutron.agent.ovn.metadata_agent [-] placement.certfile             = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 09 15:54:35 compute-0 ovn_metadata_agent[28608]: 2025-10-09 15:54:35.244 28613 DEBUG neutron.agent.ovn.metadata_agent [-] placement.collect_timing       = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 09 15:54:35 compute-0 ovn_metadata_agent[28608]: 2025-10-09 15:54:35.244 28613 DEBUG neutron.agent.ovn.metadata_agent [-] placement.endpoint_type        = public log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 09 15:54:35 compute-0 ovn_metadata_agent[28608]: 2025-10-09 15:54:35.244 28613 DEBUG neutron.agent.ovn.metadata_agent [-] placement.insecure             = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 09 15:54:35 compute-0 ovn_metadata_agent[28608]: 2025-10-09 15:54:35.244 28613 DEBUG neutron.agent.ovn.metadata_agent [-] placement.keyfile              = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 09 15:54:35 compute-0 ovn_metadata_agent[28608]: 2025-10-09 15:54:35.245 28613 DEBUG neutron.agent.ovn.metadata_agent [-] placement.region_name          = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 09 15:54:35 compute-0 ovn_metadata_agent[28608]: 2025-10-09 15:54:35.245 28613 DEBUG neutron.agent.ovn.metadata_agent [-] placement.split_loggers        = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 09 15:54:35 compute-0 ovn_metadata_agent[28608]: 2025-10-09 15:54:35.245 28613 DEBUG neutron.agent.ovn.metadata_agent [-] placement.timeout              = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 09 15:54:35 compute-0 ovn_metadata_agent[28608]: 2025-10-09 15:54:35.245 28613 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.auth_section            = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 09 15:54:35 compute-0 ovn_metadata_agent[28608]: 2025-10-09 15:54:35.245 28613 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.auth_type               = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 09 15:54:35 compute-0 ovn_metadata_agent[28608]: 2025-10-09 15:54:35.245 28613 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.cafile                  = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 09 15:54:35 compute-0 ovn_metadata_agent[28608]: 2025-10-09 15:54:35.245 28613 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.certfile                = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 09 15:54:35 compute-0 ovn_metadata_agent[28608]: 2025-10-09 15:54:35.245 28613 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.collect_timing          = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 09 15:54:35 compute-0 ovn_metadata_agent[28608]: 2025-10-09 15:54:35.245 28613 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.connect_retries         = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 09 15:54:35 compute-0 ovn_metadata_agent[28608]: 2025-10-09 15:54:35.245 28613 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.connect_retry_delay     = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 09 15:54:35 compute-0 ovn_metadata_agent[28608]: 2025-10-09 15:54:35.245 28613 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.enable_notifications    = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 09 15:54:35 compute-0 ovn_metadata_agent[28608]: 2025-10-09 15:54:35.245 28613 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.endpoint_override       = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 09 15:54:35 compute-0 ovn_metadata_agent[28608]: 2025-10-09 15:54:35.245 28613 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.insecure                = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 09 15:54:35 compute-0 ovn_metadata_agent[28608]: 2025-10-09 15:54:35.245 28613 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.interface               = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 09 15:54:35 compute-0 ovn_metadata_agent[28608]: 2025-10-09 15:54:35.245 28613 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.keyfile                 = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 09 15:54:35 compute-0 ovn_metadata_agent[28608]: 2025-10-09 15:54:35.246 28613 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.max_version             = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 09 15:54:35 compute-0 ovn_metadata_agent[28608]: 2025-10-09 15:54:35.246 28613 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.min_version             = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 09 15:54:35 compute-0 ovn_metadata_agent[28608]: 2025-10-09 15:54:35.246 28613 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.region_name             = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 09 15:54:35 compute-0 ovn_metadata_agent[28608]: 2025-10-09 15:54:35.246 28613 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.retriable_status_codes  = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 09 15:54:35 compute-0 ovn_metadata_agent[28608]: 2025-10-09 15:54:35.246 28613 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.service_name            = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 09 15:54:35 compute-0 ovn_metadata_agent[28608]: 2025-10-09 15:54:35.246 28613 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.service_type            = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 09 15:54:35 compute-0 ovn_metadata_agent[28608]: 2025-10-09 15:54:35.246 28613 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.split_loggers           = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 09 15:54:35 compute-0 ovn_metadata_agent[28608]: 2025-10-09 15:54:35.246 28613 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.status_code_retries     = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 09 15:54:35 compute-0 ovn_metadata_agent[28608]: 2025-10-09 15:54:35.246 28613 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.status_code_retry_delay = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 09 15:54:35 compute-0 ovn_metadata_agent[28608]: 2025-10-09 15:54:35.246 28613 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.timeout                 = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 09 15:54:35 compute-0 ovn_metadata_agent[28608]: 2025-10-09 15:54:35.246 28613 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.valid_interfaces        = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 09 15:54:35 compute-0 ovn_metadata_agent[28608]: 2025-10-09 15:54:35.246 28613 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.version                 = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 09 15:54:35 compute-0 ovn_metadata_agent[28608]: 2025-10-09 15:54:35.246 28613 DEBUG neutron.agent.ovn.metadata_agent [-] cli_script.dry_run             = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 09 15:54:35 compute-0 ovn_metadata_agent[28608]: 2025-10-09 15:54:35.246 28613 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.broadcast_arps_to_all_routers = True log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 09 15:54:35 compute-0 ovn_metadata_agent[28608]: 2025-10-09 15:54:35.247 28613 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.dhcp_default_lease_time    = 43200 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 09 15:54:35 compute-0 ovn_metadata_agent[28608]: 2025-10-09 15:54:35.247 28613 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.disable_ovn_dhcp_for_baremetal_ports = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 09 15:54:35 compute-0 ovn_metadata_agent[28608]: 2025-10-09 15:54:35.247 28613 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.dns_records_ovn_owned      = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 09 15:54:35 compute-0 ovn_metadata_agent[28608]: 2025-10-09 15:54:35.247 28613 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.dns_servers                = [] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 09 15:54:35 compute-0 ovn_metadata_agent[28608]: 2025-10-09 15:54:35.247 28613 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.enable_distributed_floating_ip = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 09 15:54:35 compute-0 ovn_metadata_agent[28608]: 2025-10-09 15:54:35.247 28613 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.fdb_age_threshold          = 0 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 09 15:54:35 compute-0 ovn_metadata_agent[28608]: 2025-10-09 15:54:35.247 28613 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.live_migration_activation_strategy = rarp log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 09 15:54:35 compute-0 ovn_metadata_agent[28608]: 2025-10-09 15:54:35.247 28613 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.localnet_learn_fdb         = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 09 15:54:35 compute-0 ovn_metadata_agent[28608]: 2025-10-09 15:54:35.247 28613 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.mac_binding_age_threshold  = 0 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 09 15:54:35 compute-0 ovn_metadata_agent[28608]: 2025-10-09 15:54:35.247 28613 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.neutron_sync_mode          = log log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 09 15:54:35 compute-0 ovn_metadata_agent[28608]: 2025-10-09 15:54:35.247 28613 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_dhcp4_global_options   = {} log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 09 15:54:35 compute-0 ovn_metadata_agent[28608]: 2025-10-09 15:54:35.247 28613 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_dhcp6_global_options   = {} log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 09 15:54:35 compute-0 ovn_metadata_agent[28608]: 2025-10-09 15:54:35.247 28613 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_emit_need_to_frag      = True log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 09 15:54:35 compute-0 ovn_metadata_agent[28608]: 2025-10-09 15:54:35.248 28613 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_l3_scheduler           = leastloaded log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 09 15:54:35 compute-0 ovn_metadata_agent[28608]: 2025-10-09 15:54:35.248 28613 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_metadata_enabled       = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 09 15:54:35 compute-0 ovn_metadata_agent[28608]: 2025-10-09 15:54:35.248 28613 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_nb_ca_cert             =  log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 09 15:54:35 compute-0 ovn_metadata_agent[28608]: 2025-10-09 15:54:35.248 28613 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_nb_certificate         =  log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 09 15:54:35 compute-0 ovn_metadata_agent[28608]: 2025-10-09 15:54:35.248 28613 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_nb_connection          = ['tcp:127.0.0.1:6641'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 09 15:54:35 compute-0 ovn_metadata_agent[28608]: 2025-10-09 15:54:35.248 28613 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_nb_private_key         =  log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 09 15:54:35 compute-0 ovn_metadata_agent[28608]: 2025-10-09 15:54:35.248 28613 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_router_indirect_snat   = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 09 15:54:35 compute-0 ovn_metadata_agent[28608]: 2025-10-09 15:54:35.248 28613 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_sb_ca_cert             = /etc/pki/tls/certs/ovndbca.crt log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 09 15:54:35 compute-0 ovn_metadata_agent[28608]: 2025-10-09 15:54:35.248 28613 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_sb_certificate         = /etc/pki/tls/certs/ovndb.crt log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 09 15:54:35 compute-0 ovn_metadata_agent[28608]: 2025-10-09 15:54:35.248 28613 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_sb_connection          = ['ssl:ovsdbserver-sb.openstack.svc:6642'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 09 15:54:35 compute-0 ovn_metadata_agent[28608]: 2025-10-09 15:54:35.248 28613 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_sb_private_key         = /etc/pki/tls/private/ovndb.key log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 09 15:54:35 compute-0 ovn_metadata_agent[28608]: 2025-10-09 15:54:35.248 28613 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovsdb_connection_timeout   = 180 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 09 15:54:35 compute-0 ovn_metadata_agent[28608]: 2025-10-09 15:54:35.249 28613 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovsdb_log_level            = INFO log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 09 15:54:35 compute-0 ovn_metadata_agent[28608]: 2025-10-09 15:54:35.249 28613 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovsdb_probe_interval       = 60000 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 09 15:54:35 compute-0 ovn_metadata_agent[28608]: 2025-10-09 15:54:35.249 28613 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovsdb_retry_max_interval   = 180 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 09 15:54:35 compute-0 ovn_metadata_agent[28608]: 2025-10-09 15:54:35.249 28613 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.vhost_sock_dir             = /var/run/openvswitch log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 09 15:54:35 compute-0 ovn_metadata_agent[28608]: 2025-10-09 15:54:35.249 28613 DEBUG neutron.agent.ovn.metadata_agent [-] ovn_nb_global.fdb_removal_limit = 0 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 09 15:54:35 compute-0 ovn_metadata_agent[28608]: 2025-10-09 15:54:35.249 28613 DEBUG neutron.agent.ovn.metadata_agent [-] ovn_nb_global.ignore_lsp_down  = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 09 15:54:35 compute-0 ovn_metadata_agent[28608]: 2025-10-09 15:54:35.249 28613 DEBUG neutron.agent.ovn.metadata_agent [-] ovn_nb_global.mac_binding_removal_limit = 0 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 09 15:54:35 compute-0 ovn_metadata_agent[28608]: 2025-10-09 15:54:35.249 28613 DEBUG neutron.agent.ovn.metadata_agent [-] metadata_rate_limiting.base_query_rate_limit = 10 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 09 15:54:35 compute-0 ovn_metadata_agent[28608]: 2025-10-09 15:54:35.249 28613 DEBUG neutron.agent.ovn.metadata_agent [-] metadata_rate_limiting.base_window_duration = 10 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 09 15:54:35 compute-0 ovn_metadata_agent[28608]: 2025-10-09 15:54:35.249 28613 DEBUG neutron.agent.ovn.metadata_agent [-] metadata_rate_limiting.burst_query_rate_limit = 10 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 09 15:54:35 compute-0 ovn_metadata_agent[28608]: 2025-10-09 15:54:35.249 28613 DEBUG neutron.agent.ovn.metadata_agent [-] metadata_rate_limiting.burst_window_duration = 10 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 09 15:54:35 compute-0 ovn_metadata_agent[28608]: 2025-10-09 15:54:35.249 28613 DEBUG neutron.agent.ovn.metadata_agent [-] metadata_rate_limiting.ip_versions = [4] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 09 15:54:35 compute-0 ovn_metadata_agent[28608]: 2025-10-09 15:54:35.249 28613 DEBUG neutron.agent.ovn.metadata_agent [-] metadata_rate_limiting.rate_limit_enabled = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 09 15:54:35 compute-0 ovn_metadata_agent[28608]: 2025-10-09 15:54:35.249 28613 DEBUG neutron.agent.ovn.metadata_agent [-] ovs.ovsdb_connection           = tcp:127.0.0.1:6640 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 09 15:54:35 compute-0 ovn_metadata_agent[28608]: 2025-10-09 15:54:35.250 28613 DEBUG neutron.agent.ovn.metadata_agent [-] ovs.ovsdb_connection_timeout   = 180 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 09 15:54:35 compute-0 ovn_metadata_agent[28608]: 2025-10-09 15:54:35.250 28613 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.amqp_auto_delete = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 09 15:54:35 compute-0 ovn_metadata_agent[28608]: 2025-10-09 15:54:35.250 28613 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.amqp_durable_queues = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 09 15:54:35 compute-0 ovn_metadata_agent[28608]: 2025-10-09 15:54:35.250 28613 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.conn_pool_min_size = 2 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 09 15:54:35 compute-0 ovn_metadata_agent[28608]: 2025-10-09 15:54:35.250 28613 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.conn_pool_ttl = 1200 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 09 15:54:35 compute-0 ovn_metadata_agent[28608]: 2025-10-09 15:54:35.250 28613 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.direct_mandatory_flag = True log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 09 15:54:35 compute-0 ovn_metadata_agent[28608]: 2025-10-09 15:54:35.250 28613 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.enable_cancel_on_failover = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 09 15:54:35 compute-0 ovn_metadata_agent[28608]: 2025-10-09 15:54:35.250 28613 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.heartbeat_in_pthread = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 09 15:54:35 compute-0 ovn_metadata_agent[28608]: 2025-10-09 15:54:35.250 28613 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.heartbeat_rate = 3 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 09 15:54:35 compute-0 ovn_metadata_agent[28608]: 2025-10-09 15:54:35.250 28613 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.heartbeat_timeout_threshold = 60 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 09 15:54:35 compute-0 ovn_metadata_agent[28608]: 2025-10-09 15:54:35.250 28613 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.hostname = compute-0 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 09 15:54:35 compute-0 ovn_metadata_agent[28608]: 2025-10-09 15:54:35.250 28613 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.kombu_compression = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 09 15:54:35 compute-0 ovn_metadata_agent[28608]: 2025-10-09 15:54:35.250 28613 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.kombu_failover_strategy = round-robin log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 09 15:54:35 compute-0 ovn_metadata_agent[28608]: 2025-10-09 15:54:35.251 28613 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.kombu_missing_consumer_retry_timeout = 60 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 09 15:54:35 compute-0 ovn_metadata_agent[28608]: 2025-10-09 15:54:35.251 28613 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.kombu_reconnect_delay = 1.0 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 09 15:54:35 compute-0 ovn_metadata_agent[28608]: 2025-10-09 15:54:35.251 28613 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.kombu_reconnect_splay = 0.0 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 09 15:54:35 compute-0 ovn_metadata_agent[28608]: 2025-10-09 15:54:35.251 28613 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.processname = neutron-ovn-metadata-agent log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 09 15:54:35 compute-0 ovn_metadata_agent[28608]: 2025-10-09 15:54:35.251 28613 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.rabbit_ha_queues = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 09 15:54:35 compute-0 ovn_metadata_agent[28608]: 2025-10-09 15:54:35.251 28613 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.rabbit_interval_max = 30 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 09 15:54:35 compute-0 ovn_metadata_agent[28608]: 2025-10-09 15:54:35.251 28613 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.rabbit_login_method = AMQPLAIN log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 09 15:54:35 compute-0 ovn_metadata_agent[28608]: 2025-10-09 15:54:35.251 28613 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.rabbit_qos_prefetch_count = 0 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 09 15:54:35 compute-0 ovn_metadata_agent[28608]: 2025-10-09 15:54:35.251 28613 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.rabbit_quorum_delivery_limit = 0 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 09 15:54:35 compute-0 ovn_metadata_agent[28608]: 2025-10-09 15:54:35.251 28613 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.rabbit_quorum_max_memory_bytes = 0 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 09 15:54:35 compute-0 ovn_metadata_agent[28608]: 2025-10-09 15:54:35.251 28613 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.rabbit_quorum_max_memory_length = 0 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 09 15:54:35 compute-0 ovn_metadata_agent[28608]: 2025-10-09 15:54:35.251 28613 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.rabbit_quorum_queue = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 09 15:54:35 compute-0 ovn_metadata_agent[28608]: 2025-10-09 15:54:35.251 28613 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.rabbit_retry_backoff = 2 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 09 15:54:35 compute-0 ovn_metadata_agent[28608]: 2025-10-09 15:54:35.251 28613 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.rabbit_retry_interval = 1 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 09 15:54:35 compute-0 ovn_metadata_agent[28608]: 2025-10-09 15:54:35.251 28613 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.rabbit_stream_fanout = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 09 15:54:35 compute-0 ovn_metadata_agent[28608]: 2025-10-09 15:54:35.252 28613 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.rabbit_transient_queues_ttl = 1800 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 09 15:54:35 compute-0 ovn_metadata_agent[28608]: 2025-10-09 15:54:35.252 28613 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.rabbit_transient_quorum_queue = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 09 15:54:35 compute-0 ovn_metadata_agent[28608]: 2025-10-09 15:54:35.252 28613 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.rpc_conn_pool_size = 30 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 09 15:54:35 compute-0 ovn_metadata_agent[28608]: 2025-10-09 15:54:35.252 28613 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.ssl      = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 09 15:54:35 compute-0 ovn_metadata_agent[28608]: 2025-10-09 15:54:35.252 28613 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.ssl_ca_file =  log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 09 15:54:35 compute-0 ovn_metadata_agent[28608]: 2025-10-09 15:54:35.252 28613 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.ssl_cert_file =  log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 09 15:54:35 compute-0 ovn_metadata_agent[28608]: 2025-10-09 15:54:35.252 28613 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.ssl_enforce_fips_mode = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 09 15:54:35 compute-0 ovn_metadata_agent[28608]: 2025-10-09 15:54:35.252 28613 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.ssl_key_file =  log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 09 15:54:35 compute-0 ovn_metadata_agent[28608]: 2025-10-09 15:54:35.252 28613 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.ssl_version =  log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 09 15:54:35 compute-0 ovn_metadata_agent[28608]: 2025-10-09 15:54:35.252 28613 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.use_queue_manager = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 09 15:54:35 compute-0 ovn_metadata_agent[28608]: 2025-10-09 15:54:35.252 28613 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_notifications.driver = [] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 09 15:54:35 compute-0 ovn_metadata_agent[28608]: 2025-10-09 15:54:35.252 28613 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_notifications.retry = -1 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 09 15:54:35 compute-0 ovn_metadata_agent[28608]: 2025-10-09 15:54:35.252 28613 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_notifications.topics = ['notifications'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 09 15:54:35 compute-0 ovn_metadata_agent[28608]: 2025-10-09 15:54:35.252 28613 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_notifications.transport_url = **** log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 09 15:54:35 compute-0 ovn_metadata_agent[28608]: 2025-10-09 15:54:35.252 28613 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_reports.file_event_handler = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 09 15:54:35 compute-0 ovn_metadata_agent[28608]: 2025-10-09 15:54:35.253 28613 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_reports.file_event_handler_interval = 1 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 09 15:54:35 compute-0 ovn_metadata_agent[28608]: 2025-10-09 15:54:35.253 28613 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_reports.log_dir           = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 09 15:54:35 compute-0 ovn_metadata_agent[28608]: 2025-10-09 15:54:35.253 28613 DEBUG neutron.agent.ovn.metadata_agent [-] ******************************************************************************** log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2828
Oct 09 15:54:35 compute-0 ovn_metadata_agent[28608]: 2025-10-09 15:54:35.260 28613 DEBUG ovsdbapp.backend.ovs_idl [-] Created schema index Bridge.name autocreate_indices /usr/lib/python3.12/site-packages/ovsdbapp/backend/ovs_idl/__init__.py:106
Oct 09 15:54:35 compute-0 ovn_metadata_agent[28608]: 2025-10-09 15:54:35.260 28613 DEBUG ovsdbapp.backend.ovs_idl [-] Created schema index Port.name autocreate_indices /usr/lib/python3.12/site-packages/ovsdbapp/backend/ovs_idl/__init__.py:106
Oct 09 15:54:35 compute-0 ovn_metadata_agent[28608]: 2025-10-09 15:54:35.260 28613 DEBUG ovsdbapp.backend.ovs_idl [-] Created schema index Interface.name autocreate_indices /usr/lib/python3.12/site-packages/ovsdbapp/backend/ovs_idl/__init__.py:106
Oct 09 15:54:35 compute-0 ovn_metadata_agent[28608]: 2025-10-09 15:54:35.260 28613 INFO ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: connecting...
Oct 09 15:54:35 compute-0 ovn_metadata_agent[28608]: 2025-10-09 15:54:35.260 28613 INFO ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: connected
Oct 09 15:54:35 compute-0 ovn_metadata_agent[28608]: 2025-10-09 15:54:35.268 28613 DEBUG neutron.agent.ovn.metadata.agent [-] Loaded chassis name 9954897f-aa83-45dd-8e84-289816676c2a (UUID: 9954897f-aa83-45dd-8e84-289816676c2a) and ovn bridge br-int. _load_config /usr/lib/python3.12/site-packages/neutron/agent/ovn/metadata/agent.py:419
Oct 09 15:54:35 compute-0 ovn_metadata_agent[28608]: 2025-10-09 15:54:35.287 28613 INFO neutron.agent.ovn.metadata.ovsdb [-] Getting OvsdbSbOvnIdl for MetadataAgent with retry
Oct 09 15:54:35 compute-0 ovn_metadata_agent[28608]: 2025-10-09 15:54:35.287 28613 DEBUG ovsdbapp.backend.ovs_idl [-] Created lookup_table index Chassis.name autocreate_indices /usr/lib/python3.12/site-packages/ovsdbapp/backend/ovs_idl/__init__.py:87
Oct 09 15:54:35 compute-0 ovn_metadata_agent[28608]: 2025-10-09 15:54:35.287 28613 DEBUG ovsdbapp.backend.ovs_idl [-] Created lookup_table index Port_Binding.logical_port autocreate_indices /usr/lib/python3.12/site-packages/ovsdbapp/backend/ovs_idl/__init__.py:87
Oct 09 15:54:35 compute-0 ovn_metadata_agent[28608]: 2025-10-09 15:54:35.287 28613 DEBUG ovsdbapp.backend.ovs_idl [-] Created schema index Datapath_Binding.tunnel_key autocreate_indices /usr/lib/python3.12/site-packages/ovsdbapp/backend/ovs_idl/__init__.py:106
Oct 09 15:54:35 compute-0 ovn_metadata_agent[28608]: 2025-10-09 15:54:35.287 28613 DEBUG ovsdbapp.backend.ovs_idl [-] Created schema index Chassis_Private.name autocreate_indices /usr/lib/python3.12/site-packages/ovsdbapp/backend/ovs_idl/__init__.py:106
Oct 09 15:54:35 compute-0 ovn_metadata_agent[28608]: 2025-10-09 15:54:35.290 28613 INFO ovsdbapp.backend.ovs_idl.vlog [-] ssl:ovsdbserver-sb.openstack.svc:6642: connecting...
Oct 09 15:54:35 compute-0 ovn_metadata_agent[28608]: 2025-10-09 15:54:35.294 28613 INFO ovsdbapp.backend.ovs_idl.vlog [-] ssl:ovsdbserver-sb.openstack.svc:6642: connected
Oct 09 15:54:35 compute-0 ovn_metadata_agent[28608]: 2025-10-09 15:54:35.299 28613 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched CREATE: ChassisPrivateCreateEvent(events=('create',), table='Chassis_Private', conditions=(('name', '=', '9954897f-aa83-45dd-8e84-289816676c2a'),), old_conditions=None), priority=20 to row=Chassis_Private(chassis=[<ovs.db.idl.Row object at 0x7f37b2576840>], external_ids={}, name=9954897f-aa83-45dd-8e84-289816676c2a, nb_cfg_timestamp=1760025227652, nb_cfg=1) old= matches /usr/lib/python3.12/site-packages/ovsdbapp/backend/ovs_idl/event.py:55
Oct 09 15:54:35 compute-0 ovn_metadata_agent[28608]: 2025-10-09 15:54:35.302 28613 INFO oslo.privsep.daemon [-] Running privsep helper: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'privsep-helper', '--config-file', '/etc/neutron/neutron.conf', '--config-dir', '/etc/neutron.conf.d', '--privsep_context', 'neutron.privileged.namespace_cmd', '--privsep_sock_path', '/tmp/tmpjcg65636/privsep.sock']
Oct 09 15:54:35 compute-0 kernel: capability: warning: `privsep-helper' uses deprecated v2 capabilities in a way that may be insecure
Oct 09 15:54:36 compute-0 ovn_metadata_agent[28608]: 2025-10-09 15:54:36.007 28613 INFO oslo.privsep.daemon [-] Spawned new privsep daemon via rootwrap
Oct 09 15:54:36 compute-0 ovn_metadata_agent[28608]: 2025-10-09 15:54:36.007 28613 DEBUG oslo.privsep.daemon [-] Accepted privsep connection to /tmp/tmpjcg65636/privsep.sock __init__ /usr/lib/python3.12/site-packages/oslo_privsep/daemon.py:377
Oct 09 15:54:36 compute-0 ovn_metadata_agent[28608]: 2025-10-09 15:54:35.874 28727 INFO oslo.privsep.daemon [-] privsep daemon starting
Oct 09 15:54:36 compute-0 ovn_metadata_agent[28608]: 2025-10-09 15:54:35.879 28727 INFO oslo.privsep.daemon [-] privsep process running with uid/gid: 0/0
Oct 09 15:54:36 compute-0 ovn_metadata_agent[28608]: 2025-10-09 15:54:35.881 28727 INFO oslo.privsep.daemon [-] privsep process running with capabilities (eff/prm/inh): CAP_SYS_ADMIN/CAP_SYS_ADMIN/none
Oct 09 15:54:36 compute-0 ovn_metadata_agent[28608]: 2025-10-09 15:54:35.882 28727 INFO oslo.privsep.daemon [-] privsep daemon running as pid 28727
Oct 09 15:54:36 compute-0 ovn_metadata_agent[28608]: 2025-10-09 15:54:36.009 28727 DEBUG oslo.privsep.daemon [-] privsep: reply[ae7f8623-0854-46e8-9507-c76a1dc621ad]: (2,) _call_back /usr/lib/python3.12/site-packages/oslo_privsep/daemon.py:515
Oct 09 15:54:36 compute-0 ovn_metadata_agent[28608]: 2025-10-09 15:54:36.474 28727 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "context-manager" by "neutron_lib.db.api._create_context_manager" inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:405
Oct 09 15:54:36 compute-0 ovn_metadata_agent[28608]: 2025-10-09 15:54:36.474 28727 DEBUG oslo_concurrency.lockutils [-] Lock "context-manager" acquired by "neutron_lib.db.api._create_context_manager" :: waited 0.000s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:410
Oct 09 15:54:36 compute-0 ovn_metadata_agent[28608]: 2025-10-09 15:54:36.475 28727 DEBUG oslo_concurrency.lockutils [-] Lock "context-manager" "released" by "neutron_lib.db.api._create_context_manager" :: held 0.000s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:424
Oct 09 15:54:36 compute-0 ovn_metadata_agent[28608]: 2025-10-09 15:54:36.937 28727 INFO oslo_service.backend [-] Loading backend: eventlet
Oct 09 15:54:36 compute-0 ovn_metadata_agent[28608]: 2025-10-09 15:54:36.950 28727 INFO oslo_service.backend [-] Backend 'eventlet' successfully loaded and cached.
Oct 09 15:54:36 compute-0 ovn_metadata_agent[28608]: 2025-10-09 15:54:36.997 28727 DEBUG oslo.privsep.daemon [-] privsep: reply[6e0ff857-03df-4788-b2a0-31bea94e2120]: (4, []) _call_back /usr/lib/python3.12/site-packages/oslo_privsep/daemon.py:515
Oct 09 15:54:36 compute-0 ovn_metadata_agent[28608]: 2025-10-09 15:54:36.998 28613 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbAddCommand(_result=None, table=Chassis_Private, record=9954897f-aa83-45dd-8e84-289816676c2a, column=external_ids, values=({'neutron:ovn-metadata-id': '49c6caf4-4208-55bb-be99-4da185cb43f8'},)) do_commit /usr/lib/python3.12/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 09 15:54:37 compute-0 ovn_metadata_agent[28608]: 2025-10-09 15:54:37.016 28613 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=9954897f-aa83-45dd-8e84-289816676c2a, col_values=(('external_ids', {'neutron:ovn-bridge': 'br-int'}),), if_exists=True) do_commit /usr/lib/python3.12/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 09 15:54:37 compute-0 ovn_metadata_agent[28608]: 2025-10-09 15:54:37.022 28613 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=9954897f-aa83-45dd-8e84-289816676c2a, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '1'}),), if_exists=True) do_commit /usr/lib/python3.12/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 09 15:54:37 compute-0 sshd-session[28732]: Accepted publickey for zuul from 192.168.122.30 port 35488 ssh2: ECDSA SHA256:2Vdz7kVNDZnmAnEBdeIC9De7MGoQwU7bxSCyJABiYXo
Oct 09 15:54:37 compute-0 systemd-logind[841]: New session 7 of user zuul.
Oct 09 15:54:37 compute-0 systemd[1]: Started Session 7 of User zuul.
Oct 09 15:54:37 compute-0 sshd-session[28732]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Oct 09 15:54:38 compute-0 python3.9[28885]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Oct 09 15:54:38 compute-0 podman[28886]: 2025-10-09 15:54:38.919286182 +0000 UTC m=+0.144779771 container health_status d615e4f24ff62fbb14be4a63e6da568ec5b22309773c1c66103b6b4de64a8a2f (image=38.102.83.66:5001/podified-master-centos10/openstack-ovn-controller:watcher_latest, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251007, org.label-schema.vendor=CentOS, tcib_build_tag=watcher_latest, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': '38.102.83.66:5001/podified-master-centos10/openstack-ovn-controller:watcher_latest', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, config_id=ovn_controller, container_name=ovn_controller, io.buildah.version=1.41.4)
Oct 09 15:54:39 compute-0 sudo[29065]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hmnekadkvshewrfqwwkjiepylsoznngh ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760025279.357447-48-23305991839726/AnsiballZ_command.py'
Oct 09 15:54:39 compute-0 sudo[29065]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 09 15:54:39 compute-0 python3.9[29067]: ansible-ansible.legacy.command Invoked with _raw_params=podman ps -a --filter name=^nova_virtlogd$ --format \{\{.Names\}\} _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 09 15:54:39 compute-0 sudo[29065]: pam_unix(sudo:session): session closed for user root
Oct 09 15:54:40 compute-0 sudo[29230]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fbbqfqexzdulcaivqtqemhfdgqyjeuky ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760025280.3392937-70-14865036195305/AnsiballZ_systemd_service.py'
Oct 09 15:54:40 compute-0 sudo[29230]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 09 15:54:41 compute-0 python3.9[29232]: ansible-ansible.builtin.systemd_service Invoked with daemon_reload=True daemon_reexec=False scope=system no_block=False name=None state=None enabled=None force=None masked=None
Oct 09 15:54:41 compute-0 systemd[1]: Reloading.
Oct 09 15:54:41 compute-0 systemd-rc-local-generator[29259]: /etc/rc.d/rc.local is not marked executable, skipping.
Oct 09 15:54:41 compute-0 systemd-sysv-generator[29262]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct 09 15:54:41 compute-0 sudo[29230]: pam_unix(sudo:session): session closed for user root
Oct 09 15:54:42 compute-0 python3.9[29416]: ansible-ansible.builtin.service_facts Invoked
Oct 09 15:54:42 compute-0 network[29433]: You are using 'network' service provided by 'network-scripts', which are now deprecated.
Oct 09 15:54:42 compute-0 network[29434]: 'network-scripts' will be removed from distribution in near future.
Oct 09 15:54:42 compute-0 network[29435]: It is advised to switch to 'NetworkManager' instead for network management.
Oct 09 15:54:48 compute-0 sudo[29697]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-yraizdarpzjamupbuvhmgeqkwhohqpyj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760025287.8779233-108-262414611096737/AnsiballZ_systemd_service.py'
Oct 09 15:54:48 compute-0 sudo[29697]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 09 15:54:48 compute-0 python3.9[29699]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_libvirt.target state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Oct 09 15:54:48 compute-0 sudo[29697]: pam_unix(sudo:session): session closed for user root
Oct 09 15:54:48 compute-0 sudo[29850]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rywjaoqyxmmqxxmokiwdrkvgsqrjrcog ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760025288.6921992-108-38446526409223/AnsiballZ_systemd_service.py'
Oct 09 15:54:48 compute-0 sudo[29850]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 09 15:54:49 compute-0 python3.9[29852]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_virtlogd_wrapper.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Oct 09 15:54:49 compute-0 sudo[29850]: pam_unix(sudo:session): session closed for user root
Oct 09 15:54:49 compute-0 sudo[30003]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jjwuaddrorrmrgijslmrnvwsibxsxkcg ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760025289.3565824-108-23295499019755/AnsiballZ_systemd_service.py'
Oct 09 15:54:49 compute-0 sudo[30003]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 09 15:54:49 compute-0 python3.9[30005]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_virtnodedevd.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Oct 09 15:54:49 compute-0 sudo[30003]: pam_unix(sudo:session): session closed for user root
Oct 09 15:54:50 compute-0 sudo[30156]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jjjaiplvusaduizzfcmjxsqcnmgajvlz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760025290.0319107-108-20578100985270/AnsiballZ_systemd_service.py'
Oct 09 15:54:50 compute-0 sudo[30156]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 09 15:54:50 compute-0 python3.9[30158]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_virtproxyd.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Oct 09 15:54:50 compute-0 sudo[30156]: pam_unix(sudo:session): session closed for user root
Oct 09 15:54:50 compute-0 sudo[30309]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fvhkkfkcwykapmzybrbvtkkogywaqndp ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760025290.712738-108-169110190493736/AnsiballZ_systemd_service.py'
Oct 09 15:54:50 compute-0 sudo[30309]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 09 15:54:51 compute-0 python3.9[30311]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_virtqemud.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Oct 09 15:54:51 compute-0 sudo[30309]: pam_unix(sudo:session): session closed for user root
Oct 09 15:54:51 compute-0 sudo[30462]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-faelbaodjqjlbhyleuywqljoskblbpzj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760025291.4138136-108-127452375943991/AnsiballZ_systemd_service.py'
Oct 09 15:54:51 compute-0 sudo[30462]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 09 15:54:51 compute-0 python3.9[30464]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_virtsecretd.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Oct 09 15:54:51 compute-0 sudo[30462]: pam_unix(sudo:session): session closed for user root
Oct 09 15:54:52 compute-0 sudo[30615]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gadiowrwlrrmlqgexhdpnjiezpljkipd ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760025292.0558772-108-182981396746691/AnsiballZ_systemd_service.py'
Oct 09 15:54:52 compute-0 sudo[30615]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 09 15:54:52 compute-0 python3.9[30617]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_virtstoraged.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Oct 09 15:54:52 compute-0 sudo[30615]: pam_unix(sudo:session): session closed for user root
Oct 09 15:54:53 compute-0 sudo[30768]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-venscgzuclxvscvdtwptylvtysgaonpy ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760025293.452318-212-140680926193431/AnsiballZ_file.py'
Oct 09 15:54:53 compute-0 sudo[30768]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 09 15:54:54 compute-0 python3.9[30770]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_libvirt.target state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 09 15:54:54 compute-0 sudo[30768]: pam_unix(sudo:session): session closed for user root
Oct 09 15:54:54 compute-0 sudo[30920]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ckifwuntbzspeitkdvuvvcbfvozclxyk ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760025294.200444-212-273156315139503/AnsiballZ_file.py'
Oct 09 15:54:54 compute-0 sudo[30920]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 09 15:54:54 compute-0 python3.9[30922]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_virtlogd_wrapper.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 09 15:54:54 compute-0 sudo[30920]: pam_unix(sudo:session): session closed for user root
Oct 09 15:54:54 compute-0 sudo[31072]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-iphvkedrcczxgognakcqwoyjdyuromhp ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760025294.7520108-212-230683688227353/AnsiballZ_file.py'
Oct 09 15:54:54 compute-0 sudo[31072]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 09 15:54:55 compute-0 python3.9[31074]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_virtnodedevd.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 09 15:54:55 compute-0 sudo[31072]: pam_unix(sudo:session): session closed for user root
Oct 09 15:54:55 compute-0 sudo[31224]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-lusgasutudpfxvmfktkwwjfdmfxyweug ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760025295.290369-212-12251193321992/AnsiballZ_file.py'
Oct 09 15:54:55 compute-0 sudo[31224]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 09 15:54:55 compute-0 python3.9[31226]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_virtproxyd.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 09 15:54:55 compute-0 sudo[31224]: pam_unix(sudo:session): session closed for user root
Oct 09 15:54:56 compute-0 sudo[31376]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-sgpsclgmwiaglfrfrfxiayhtbgnljegv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760025295.8244781-212-96037229230499/AnsiballZ_file.py'
Oct 09 15:54:56 compute-0 sudo[31376]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 09 15:54:56 compute-0 python3.9[31378]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_virtqemud.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 09 15:54:56 compute-0 sudo[31376]: pam_unix(sudo:session): session closed for user root
Oct 09 15:54:56 compute-0 sudo[31528]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-exgxgupiifiamsvhjxamkiubdvettbsl ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760025296.411757-212-69544071964451/AnsiballZ_file.py'
Oct 09 15:54:56 compute-0 sudo[31528]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 09 15:54:56 compute-0 python3.9[31530]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_virtsecretd.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 09 15:54:56 compute-0 sudo[31528]: pam_unix(sudo:session): session closed for user root
Oct 09 15:54:57 compute-0 sudo[31680]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tqkcppesjucldvvosbipjiljnomalrip ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760025296.9660242-212-28870930528393/AnsiballZ_file.py'
Oct 09 15:54:57 compute-0 sudo[31680]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 09 15:54:57 compute-0 python3.9[31682]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_virtstoraged.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 09 15:54:57 compute-0 sudo[31680]: pam_unix(sudo:session): session closed for user root
Oct 09 15:54:58 compute-0 sudo[31832]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-lripcqtcnttfjlbnwxvugfcjcvfdhchw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760025297.9701743-312-58489980064918/AnsiballZ_file.py'
Oct 09 15:54:58 compute-0 sudo[31832]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 09 15:54:58 compute-0 python3.9[31834]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_libvirt.target state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 09 15:54:58 compute-0 sudo[31832]: pam_unix(sudo:session): session closed for user root
Oct 09 15:54:58 compute-0 sudo[31984]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xqsjshqkhmauaqgckdrfouoqwvzqcwul ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760025298.589267-312-81099566130575/AnsiballZ_file.py'
Oct 09 15:54:58 compute-0 sudo[31984]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 09 15:54:59 compute-0 python3.9[31986]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_virtlogd_wrapper.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 09 15:54:59 compute-0 sudo[31984]: pam_unix(sudo:session): session closed for user root
Oct 09 15:54:59 compute-0 sudo[32136]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fkffanarzghlwlxsggawdbikfgydysbq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760025299.2073805-312-128267943971782/AnsiballZ_file.py'
Oct 09 15:54:59 compute-0 sudo[32136]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 09 15:54:59 compute-0 python3.9[32138]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_virtnodedevd.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 09 15:54:59 compute-0 sudo[32136]: pam_unix(sudo:session): session closed for user root
Oct 09 15:54:59 compute-0 sudo[32288]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-lqsylgxtebxwowsqezyalzajdijyhkry ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760025299.766276-312-62373025967482/AnsiballZ_file.py'
Oct 09 15:54:59 compute-0 sudo[32288]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 09 15:55:00 compute-0 python3.9[32290]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_virtproxyd.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 09 15:55:00 compute-0 sudo[32288]: pam_unix(sudo:session): session closed for user root
Oct 09 15:55:00 compute-0 sudo[32440]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ndksykwwiaetafmskrysnrfmwsdbnjdc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760025300.330106-312-127906970780189/AnsiballZ_file.py'
Oct 09 15:55:00 compute-0 sudo[32440]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 09 15:55:00 compute-0 python3.9[32442]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_virtqemud.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 09 15:55:00 compute-0 sudo[32440]: pam_unix(sudo:session): session closed for user root
Oct 09 15:55:01 compute-0 sudo[32592]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-digvcwnqiiszmlgegvzeazxawtmuiasa ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760025300.89302-312-168414360488579/AnsiballZ_file.py'
Oct 09 15:55:01 compute-0 sudo[32592]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 09 15:55:01 compute-0 python3.9[32594]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_virtsecretd.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 09 15:55:01 compute-0 sudo[32592]: pam_unix(sudo:session): session closed for user root
Oct 09 15:55:01 compute-0 sudo[32744]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ghpzszclmizcyuprvmlmxzjbspciortl ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760025301.4780278-312-263997943087759/AnsiballZ_file.py'
Oct 09 15:55:01 compute-0 sudo[32744]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 09 15:55:01 compute-0 python3.9[32746]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_virtstoraged.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 09 15:55:01 compute-0 sudo[32744]: pam_unix(sudo:session): session closed for user root
Oct 09 15:55:02 compute-0 sudo[32906]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-skxbaxyzuitffhrnppglsxmrgqqaktvb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760025302.401894-414-241052551087330/AnsiballZ_command.py'
Oct 09 15:55:02 compute-0 sudo[32906]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 09 15:55:02 compute-0 podman[32870]: 2025-10-09 15:55:02.673705674 +0000 UTC m=+0.056991471 container health_status 0e966c7a283689daf23c5db5017f23f685ee836319240188a9f70ddd246563c5 (image=38.102.83.66:5001/podified-master-centos10/openstack-neutron-metadata-agent-ovn:watcher_latest, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, tcib_build_tag=watcher_latest, config_id=ovn_metadata_agent, io.buildah.version=1.41.4, org.label-schema.build-date=20251007, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': '38.102.83.66:5001/podified-master-centos10/openstack-neutron-metadata-agent-ovn:watcher_latest', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team)
Oct 09 15:55:02 compute-0 python3.9[32918]: ansible-ansible.legacy.command Invoked with _raw_params=if systemctl is-active certmonger.service; then
                                              systemctl disable --now certmonger.service
                                              test -f /etc/systemd/system/certmonger.service || systemctl mask certmonger.service
                                            fi
                                             _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 09 15:55:02 compute-0 sudo[32906]: pam_unix(sudo:session): session closed for user root
Oct 09 15:55:03 compute-0 python3.9[33071]: ansible-ansible.builtin.find Invoked with file_type=any hidden=True paths=['/var/lib/certmonger/requests'] patterns=[] read_whole_file=False age_stamp=mtime recurse=False follow=False get_checksum=False checksum_algorithm=sha1 use_regex=False exact_mode=True excludes=None contains=None age=None size=None depth=None mode=None encoding=None limit=None
Oct 09 15:55:04 compute-0 sudo[33221]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tqkoyxrkbkzgqfazwbsbqahgrhmsuqeb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760025303.972678-450-98188633670800/AnsiballZ_systemd_service.py'
Oct 09 15:55:04 compute-0 sudo[33221]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 09 15:55:04 compute-0 python3.9[33223]: ansible-ansible.builtin.systemd_service Invoked with daemon_reload=True daemon_reexec=False scope=system no_block=False name=None state=None enabled=None force=None masked=None
Oct 09 15:55:04 compute-0 systemd[1]: Reloading.
Oct 09 15:55:04 compute-0 systemd-rc-local-generator[33250]: /etc/rc.d/rc.local is not marked executable, skipping.
Oct 09 15:55:04 compute-0 systemd-sysv-generator[33253]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct 09 15:55:04 compute-0 sudo[33221]: pam_unix(sudo:session): session closed for user root
Oct 09 15:55:05 compute-0 sudo[33408]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mzufgbswjyydywthudsxklmyzuzifmlk ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760025305.0196147-466-122896422773171/AnsiballZ_command.py'
Oct 09 15:55:05 compute-0 sudo[33408]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 09 15:55:05 compute-0 python3.9[33410]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_libvirt.target _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 09 15:55:05 compute-0 sudo[33408]: pam_unix(sudo:session): session closed for user root
Oct 09 15:55:05 compute-0 sudo[33561]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-npimyfodaemwdyqcxkwzncuzsswkiatd ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760025305.672294-466-48514465217545/AnsiballZ_command.py'
Oct 09 15:55:05 compute-0 sudo[33561]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 09 15:55:06 compute-0 python3.9[33563]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_virtlogd_wrapper.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 09 15:55:06 compute-0 sudo[33561]: pam_unix(sudo:session): session closed for user root
Oct 09 15:55:06 compute-0 sudo[33714]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vwqqkihwhahcxszxznyqvduhvmhgakjf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760025306.2520766-466-114848352692973/AnsiballZ_command.py'
Oct 09 15:55:06 compute-0 sudo[33714]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 09 15:55:06 compute-0 python3.9[33716]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_virtnodedevd.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 09 15:55:06 compute-0 sudo[33714]: pam_unix(sudo:session): session closed for user root
Oct 09 15:55:07 compute-0 sudo[33867]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ybgrjojjsgcuwzurtzzhgliusrhwdxof ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760025306.9045396-466-121660141821018/AnsiballZ_command.py'
Oct 09 15:55:07 compute-0 sudo[33867]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 09 15:55:07 compute-0 python3.9[33869]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_virtproxyd.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 09 15:55:07 compute-0 sudo[33867]: pam_unix(sudo:session): session closed for user root
Oct 09 15:55:07 compute-0 sudo[34020]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-uftmluoisfsyslilueqsnzzcaxihzzrz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760025307.4416196-466-250987684503769/AnsiballZ_command.py'
Oct 09 15:55:07 compute-0 sudo[34020]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 09 15:55:07 compute-0 python3.9[34022]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_virtqemud.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 09 15:55:07 compute-0 sudo[34020]: pam_unix(sudo:session): session closed for user root
Oct 09 15:55:08 compute-0 sudo[34173]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-undrgvqlwutchnftjmiqmmhudogkofkd ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760025308.0189168-466-123749249668561/AnsiballZ_command.py'
Oct 09 15:55:08 compute-0 sudo[34173]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 09 15:55:08 compute-0 python3.9[34175]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_virtsecretd.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 09 15:55:08 compute-0 sudo[34173]: pam_unix(sudo:session): session closed for user root
Oct 09 15:55:08 compute-0 sudo[34326]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-trdjnrwyvpmvcsoicmucyeflhxzzulso ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760025308.58565-466-89923374416227/AnsiballZ_command.py'
Oct 09 15:55:08 compute-0 sudo[34326]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 09 15:55:09 compute-0 python3.9[34328]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_virtstoraged.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 09 15:55:09 compute-0 sudo[34326]: pam_unix(sudo:session): session closed for user root
Oct 09 15:55:09 compute-0 podman[34330]: 2025-10-09 15:55:09.168731675 +0000 UTC m=+0.110602503 container health_status d615e4f24ff62fbb14be4a63e6da568ec5b22309773c1c66103b6b4de64a8a2f (image=38.102.83.66:5001/podified-master-centos10/openstack-ovn-controller:watcher_latest, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.4, org.label-schema.build-date=20251007, org.label-schema.license=GPLv2, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, tcib_build_tag=watcher_latest, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': '38.102.83.66:5001/podified-master-centos10/openstack-ovn-controller:watcher_latest', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 10 Base Image, tcib_managed=true, container_name=ovn_controller)
Oct 09 15:55:10 compute-0 sudo[34503]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-lmmjdzpbktgmflqlikqegawqrokwwrzo ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760025310.0819335-574-158453743851379/AnsiballZ_getent.py'
Oct 09 15:55:10 compute-0 sudo[34503]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 09 15:55:10 compute-0 python3.9[34505]: ansible-ansible.builtin.getent Invoked with database=passwd key=libvirt fail_key=True service=None split=None
Oct 09 15:55:10 compute-0 sudo[34503]: pam_unix(sudo:session): session closed for user root
Oct 09 15:55:11 compute-0 sudo[34656]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-pfgcyzikcbipoelidxdqubzrngpofclq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760025310.9014573-590-216676793450110/AnsiballZ_group.py'
Oct 09 15:55:11 compute-0 sudo[34656]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 09 15:55:11 compute-0 python3.9[34658]: ansible-ansible.builtin.group Invoked with gid=42473 name=libvirt state=present force=False system=False local=False non_unique=False gid_min=None gid_max=None
Oct 09 15:55:11 compute-0 groupadd[34659]: group added to /etc/group: name=libvirt, GID=42473
Oct 09 15:55:11 compute-0 groupadd[34659]: group added to /etc/gshadow: name=libvirt
Oct 09 15:55:11 compute-0 groupadd[34659]: new group: name=libvirt, GID=42473
Oct 09 15:55:11 compute-0 sudo[34656]: pam_unix(sudo:session): session closed for user root
Oct 09 15:55:12 compute-0 sudo[34814]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kezhlsoznlwutwdfqnuozwzlcdaylowo ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760025311.8602464-606-136265255028619/AnsiballZ_user.py'
Oct 09 15:55:12 compute-0 sudo[34814]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 09 15:55:12 compute-0 python3.9[34816]: ansible-ansible.builtin.user Invoked with comment=libvirt user group=libvirt groups=[''] name=libvirt shell=/sbin/nologin state=present uid=42473 non_unique=False force=False remove=False create_home=True system=False move_home=False append=False ssh_key_bits=0 ssh_key_type=rsa ssh_key_comment=ansible-generated on compute-0 update_password=always home=None password=NOT_LOGGING_PARAMETER login_class=None password_expire_max=None password_expire_min=None password_expire_warn=None hidden=None seuser=None skeleton=None generate_ssh_key=None ssh_key_file=None ssh_key_passphrase=NOT_LOGGING_PARAMETER expires=None password_lock=None local=None profile=None authorization=None role=None umask=None password_expire_account_disable=None uid_min=None uid_max=None
Oct 09 15:55:12 compute-0 useradd[34818]: new user: name=libvirt, UID=42473, GID=42473, home=/home/libvirt, shell=/sbin/nologin, from=/dev/pts/0
Oct 09 15:55:12 compute-0 sudo[34814]: pam_unix(sudo:session): session closed for user root
Oct 09 15:55:13 compute-0 sudo[34974]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-uklqkpvlagshykujymyryabrxrdzheqw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760025313.0945904-628-110582571119374/AnsiballZ_setup.py'
Oct 09 15:55:13 compute-0 sudo[34974]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 09 15:55:13 compute-0 python3.9[34976]: ansible-ansible.legacy.setup Invoked with filter=['ansible_pkg_mgr'] gather_subset=['!all'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Oct 09 15:55:13 compute-0 sudo[34974]: pam_unix(sudo:session): session closed for user root
Oct 09 15:55:14 compute-0 sudo[35058]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vcjavxkdnlqdzkpcgfjqtlsdmangxlfj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760025313.0945904-628-110582571119374/AnsiballZ_dnf.py'
Oct 09 15:55:14 compute-0 sudo[35058]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 09 15:55:14 compute-0 python3.9[35060]: ansible-ansible.legacy.dnf Invoked with name=['libvirt ', 'libvirt-admin ', 'libvirt-client ', 'libvirt-daemon ', 'qemu-kvm', 'qemu-img', 'libguestfs', 'libseccomp', 'swtpm', 'swtpm-tools', 'edk2-ovmf', 'ceph-common', 'cyrus-sasl-scram'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_repoquery=True install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Oct 09 15:55:32 compute-0 podman[35250]: 2025-10-09 15:55:32.84190139 +0000 UTC m=+0.068213856 container health_status 0e966c7a283689daf23c5db5017f23f685ee836319240188a9f70ddd246563c5 (image=38.102.83.66:5001/podified-master-centos10/openstack-neutron-metadata-agent-ovn:watcher_latest, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 10 Base Image, tcib_managed=true, config_id=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, tcib_build_tag=watcher_latest, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': '38.102.83.66:5001/podified-master-centos10/openstack-neutron-metadata-agent-ovn:watcher_latest', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.4, org.label-schema.build-date=20251007, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Oct 09 15:55:35 compute-0 ovn_metadata_agent[28608]: 2025-10-09 15:55:35.255 28613 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:405
Oct 09 15:55:35 compute-0 ovn_metadata_agent[28608]: 2025-10-09 15:55:35.255 28613 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:410
Oct 09 15:55:35 compute-0 ovn_metadata_agent[28608]: 2025-10-09 15:55:35.255 28613 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:424
Oct 09 15:55:39 compute-0 podman[35272]: 2025-10-09 15:55:39.861172555 +0000 UTC m=+0.092731318 container health_status d615e4f24ff62fbb14be4a63e6da568ec5b22309773c1c66103b6b4de64a8a2f (image=38.102.83.66:5001/podified-master-centos10/openstack-ovn-controller:watcher_latest, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_controller, managed_by=edpm_ansible, org.label-schema.build-date=20251007, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, container_name=ovn_controller, org.label-schema.vendor=CentOS, io.buildah.version=1.41.4, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 10 Base Image, tcib_build_tag=watcher_latest, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': '38.102.83.66:5001/podified-master-centos10/openstack-ovn-controller:watcher_latest', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']})
Oct 09 15:55:42 compute-0 kernel: SELinux:  Converting 430 SID table entries...
Oct 09 15:55:42 compute-0 kernel: SELinux:  policy capability network_peer_controls=1
Oct 09 15:55:42 compute-0 kernel: SELinux:  policy capability open_perms=1
Oct 09 15:55:42 compute-0 kernel: SELinux:  policy capability extended_socket_class=1
Oct 09 15:55:42 compute-0 kernel: SELinux:  policy capability always_check_network=0
Oct 09 15:55:42 compute-0 kernel: SELinux:  policy capability cgroup_seclabel=1
Oct 09 15:55:42 compute-0 kernel: SELinux:  policy capability nnp_nosuid_transition=1
Oct 09 15:55:42 compute-0 kernel: SELinux:  policy capability genfs_seclabel_symlinks=1
Oct 09 15:55:52 compute-0 kernel: SELinux:  Converting 430 SID table entries...
Oct 09 15:55:52 compute-0 kernel: SELinux:  policy capability network_peer_controls=1
Oct 09 15:55:52 compute-0 kernel: SELinux:  policy capability open_perms=1
Oct 09 15:55:52 compute-0 kernel: SELinux:  policy capability extended_socket_class=1
Oct 09 15:55:52 compute-0 kernel: SELinux:  policy capability always_check_network=0
Oct 09 15:55:52 compute-0 kernel: SELinux:  policy capability cgroup_seclabel=1
Oct 09 15:55:52 compute-0 kernel: SELinux:  policy capability nnp_nosuid_transition=1
Oct 09 15:55:52 compute-0 kernel: SELinux:  policy capability genfs_seclabel_symlinks=1
Oct 09 15:56:03 compute-0 dbus-broker-launch[833]: avc:  op=load_policy lsm=selinux seqno=4 res=1
Oct 09 15:56:03 compute-0 podman[35316]: 2025-10-09 15:56:03.86630098 +0000 UTC m=+0.055807831 container health_status 0e966c7a283689daf23c5db5017f23f685ee836319240188a9f70ddd246563c5 (image=38.102.83.66:5001/podified-master-centos10/openstack-neutron-metadata-agent-ovn:watcher_latest, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.4, tcib_build_tag=watcher_latest, tcib_managed=true, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': '38.102.83.66:5001/podified-master-centos10/openstack-neutron-metadata-agent-ovn:watcher_latest', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.build-date=20251007, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Oct 09 15:56:10 compute-0 podman[39315]: 2025-10-09 15:56:10.862019723 +0000 UTC m=+0.085819675 container health_status d615e4f24ff62fbb14be4a63e6da568ec5b22309773c1c66103b6b4de64a8a2f (image=38.102.83.66:5001/podified-master-centos10/openstack-ovn-controller:watcher_latest, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': '38.102.83.66:5001/podified-master-centos10/openstack-ovn-controller:watcher_latest', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, io.buildah.version=1.41.4, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.vendor=CentOS, config_id=ovn_controller, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251007, tcib_build_tag=watcher_latest)
Oct 09 15:56:34 compute-0 podman[52110]: 2025-10-09 15:56:34.805417869 +0000 UTC m=+0.042214273 container health_status 0e966c7a283689daf23c5db5017f23f685ee836319240188a9f70ddd246563c5 (image=38.102.83.66:5001/podified-master-centos10/openstack-neutron-metadata-agent-ovn:watcher_latest, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, io.buildah.version=1.41.4, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251007, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': '38.102.83.66:5001/podified-master-centos10/openstack-neutron-metadata-agent-ovn:watcher_latest', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 10 Base Image, tcib_build_tag=watcher_latest)
Oct 09 15:56:35 compute-0 ovn_metadata_agent[28608]: 2025-10-09 15:56:35.256 28613 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:405
Oct 09 15:56:35 compute-0 ovn_metadata_agent[28608]: 2025-10-09 15:56:35.256 28613 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.000s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:410
Oct 09 15:56:35 compute-0 ovn_metadata_agent[28608]: 2025-10-09 15:56:35.256 28613 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:424
Oct 09 15:56:36 compute-0 sshd-session[52129]: Invalid user  from 64.62.156.107 port 5853
Oct 09 15:56:39 compute-0 sshd-session[52129]: Connection closed by invalid user  64.62.156.107 port 5853 [preauth]
Oct 09 15:56:41 compute-0 kernel: SELinux:  Converting 431 SID table entries...
Oct 09 15:56:41 compute-0 kernel: SELinux:  policy capability network_peer_controls=1
Oct 09 15:56:41 compute-0 kernel: SELinux:  policy capability open_perms=1
Oct 09 15:56:41 compute-0 kernel: SELinux:  policy capability extended_socket_class=1
Oct 09 15:56:41 compute-0 kernel: SELinux:  policy capability always_check_network=0
Oct 09 15:56:41 compute-0 kernel: SELinux:  policy capability cgroup_seclabel=1
Oct 09 15:56:41 compute-0 kernel: SELinux:  policy capability nnp_nosuid_transition=1
Oct 09 15:56:41 compute-0 kernel: SELinux:  policy capability genfs_seclabel_symlinks=1
Oct 09 15:56:41 compute-0 dbus-broker-launch[833]: avc:  op=load_policy lsm=selinux seqno=5 res=1
Oct 09 15:56:41 compute-0 podman[52137]: 2025-10-09 15:56:41.909607382 +0000 UTC m=+0.137609635 container health_status d615e4f24ff62fbb14be4a63e6da568ec5b22309773c1c66103b6b4de64a8a2f (image=38.102.83.66:5001/podified-master-centos10/openstack-ovn-controller:watcher_latest, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251007, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': '38.102.83.66:5001/podified-master-centos10/openstack-ovn-controller:watcher_latest', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.4, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true, config_id=ovn_controller, container_name=ovn_controller, tcib_build_tag=watcher_latest, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.vendor=CentOS)
Oct 09 15:56:42 compute-0 groupadd[52170]: group added to /etc/group: name=dnsmasq, GID=992
Oct 09 15:56:42 compute-0 groupadd[52170]: group added to /etc/gshadow: name=dnsmasq
Oct 09 15:56:42 compute-0 groupadd[52170]: new group: name=dnsmasq, GID=992
Oct 09 15:56:42 compute-0 useradd[52177]: new user: name=dnsmasq, UID=992, GID=992, home=/var/lib/dnsmasq, shell=/usr/sbin/nologin, from=none
Oct 09 15:56:42 compute-0 dbus-broker-launch[832]: Noticed file-system modification, trigger reload.
Oct 09 15:56:42 compute-0 dbus-broker-launch[832]: Noticed file-system modification, trigger reload.
Oct 09 15:56:43 compute-0 groupadd[52190]: group added to /etc/group: name=clevis, GID=991
Oct 09 15:56:43 compute-0 groupadd[52190]: group added to /etc/gshadow: name=clevis
Oct 09 15:56:43 compute-0 groupadd[52190]: new group: name=clevis, GID=991
Oct 09 15:56:43 compute-0 useradd[52197]: new user: name=clevis, UID=991, GID=991, home=/var/cache/clevis, shell=/usr/sbin/nologin, from=none
Oct 09 15:56:43 compute-0 usermod[52207]: add 'clevis' to group 'tss'
Oct 09 15:56:43 compute-0 usermod[52207]: add 'clevis' to shadow group 'tss'
Oct 09 15:56:45 compute-0 polkitd[1187]: Reloading rules
Oct 09 15:56:45 compute-0 polkitd[1187]: Collecting garbage unconditionally...
Oct 09 15:56:45 compute-0 polkitd[1187]: Loading rules from directory /etc/polkit-1/rules.d
Oct 09 15:56:45 compute-0 polkitd[1187]: Loading rules from directory /usr/share/polkit-1/rules.d
Oct 09 15:56:45 compute-0 polkitd[1187]: Finished loading, compiling and executing 4 rules
Oct 09 15:56:45 compute-0 polkitd[1187]: Reloading rules
Oct 09 15:56:45 compute-0 polkitd[1187]: Collecting garbage unconditionally...
Oct 09 15:56:45 compute-0 polkitd[1187]: Loading rules from directory /etc/polkit-1/rules.d
Oct 09 15:56:45 compute-0 polkitd[1187]: Loading rules from directory /usr/share/polkit-1/rules.d
Oct 09 15:56:45 compute-0 polkitd[1187]: Finished loading, compiling and executing 4 rules
Oct 09 15:56:46 compute-0 groupadd[52394]: group added to /etc/group: name=ceph, GID=167
Oct 09 15:56:46 compute-0 groupadd[52394]: group added to /etc/gshadow: name=ceph
Oct 09 15:56:46 compute-0 groupadd[52394]: new group: name=ceph, GID=167
Oct 09 15:56:46 compute-0 useradd[52400]: new user: name=ceph, UID=167, GID=167, home=/var/lib/ceph, shell=/sbin/nologin, from=none
Oct 09 15:56:49 compute-0 systemd[1]: Stopping OpenSSH server daemon...
Oct 09 15:56:49 compute-0 sshd[1283]: Received signal 15; terminating.
Oct 09 15:56:49 compute-0 systemd[1]: sshd.service: Deactivated successfully.
Oct 09 15:56:49 compute-0 systemd[1]: Stopped OpenSSH server daemon.
Oct 09 15:56:49 compute-0 systemd[1]: Stopped target sshd-keygen.target.
Oct 09 15:56:49 compute-0 systemd[1]: Stopping sshd-keygen.target...
Oct 09 15:56:49 compute-0 systemd[1]: OpenSSH ecdsa Server Key Generation was skipped because of an unmet condition check (ConditionPathExists=!/run/systemd/generator.early/multi-user.target.wants/cloud-init.target).
Oct 09 15:56:49 compute-0 systemd[1]: OpenSSH ed25519 Server Key Generation was skipped because of an unmet condition check (ConditionPathExists=!/run/systemd/generator.early/multi-user.target.wants/cloud-init.target).
Oct 09 15:56:49 compute-0 systemd[1]: OpenSSH rsa Server Key Generation was skipped because of an unmet condition check (ConditionPathExists=!/run/systemd/generator.early/multi-user.target.wants/cloud-init.target).
Oct 09 15:56:49 compute-0 systemd[1]: Reached target sshd-keygen.target.
Oct 09 15:56:49 compute-0 systemd[1]: Starting OpenSSH server daemon...
Oct 09 15:56:49 compute-0 sshd[52903]: Server listening on 0.0.0.0 port 22.
Oct 09 15:56:49 compute-0 sshd[52903]: Server listening on :: port 22.
Oct 09 15:56:49 compute-0 systemd[1]: Started OpenSSH server daemon.
Oct 09 15:56:51 compute-0 systemd[1]: Started /usr/bin/systemctl start man-db-cache-update.
Oct 09 15:56:51 compute-0 systemd[1]: Starting man-db-cache-update.service...
Oct 09 15:56:51 compute-0 systemd[1]: Reloading.
Oct 09 15:56:51 compute-0 systemd-rc-local-generator[53159]: /etc/rc.d/rc.local is not marked executable, skipping.
Oct 09 15:56:51 compute-0 systemd-sysv-generator[53163]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct 09 15:56:51 compute-0 systemd[1]: Queuing reload/restart jobs for marked units…
Oct 09 15:56:53 compute-0 systemd[1]: Starting PackageKit Daemon...
Oct 09 15:56:53 compute-0 PackageKit[53823]: daemon start
Oct 09 15:56:53 compute-0 systemd[1]: Started PackageKit Daemon.
Oct 09 15:56:54 compute-0 sudo[35058]: pam_unix(sudo:session): session closed for user root
Oct 09 15:56:56 compute-0 sudo[56367]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-eswekgpzwceddrdbvoajushyzahlwvzk ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760025415.5908709-652-217128516030331/AnsiballZ_systemd.py'
Oct 09 15:56:56 compute-0 sudo[56367]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 09 15:56:56 compute-0 python3.9[56395]: ansible-ansible.builtin.systemd Invoked with enabled=False masked=True name=libvirtd state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None
Oct 09 15:56:56 compute-0 systemd[1]: Reloading.
Oct 09 15:56:56 compute-0 systemd-rc-local-generator[56975]: /etc/rc.d/rc.local is not marked executable, skipping.
Oct 09 15:56:56 compute-0 systemd-sysv-generator[56980]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct 09 15:56:57 compute-0 sudo[56367]: pam_unix(sudo:session): session closed for user root
Oct 09 15:56:58 compute-0 sudo[59093]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dpctejirsmvrsmkawkepqrjkjvuodqnt ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760025418.034164-652-15689158023875/AnsiballZ_systemd.py'
Oct 09 15:56:58 compute-0 sudo[59093]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 09 15:56:58 compute-0 python3.9[59129]: ansible-ansible.builtin.systemd Invoked with enabled=False masked=True name=libvirtd-tcp.socket state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None
Oct 09 15:56:58 compute-0 systemd[1]: Reloading.
Oct 09 15:56:58 compute-0 systemd-rc-local-generator[59653]: /etc/rc.d/rc.local is not marked executable, skipping.
Oct 09 15:56:58 compute-0 systemd-sysv-generator[59657]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct 09 15:56:58 compute-0 sudo[59093]: pam_unix(sudo:session): session closed for user root
Oct 09 15:56:59 compute-0 sudo[60492]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-uaidggcszraktmmiklltqrvwysvfipje ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760025419.1159425-652-26120761054879/AnsiballZ_systemd.py'
Oct 09 15:56:59 compute-0 sudo[60492]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 09 15:56:59 compute-0 python3.9[60510]: ansible-ansible.builtin.systemd Invoked with enabled=False masked=True name=libvirtd-tls.socket state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None
Oct 09 15:56:59 compute-0 systemd[1]: Reloading.
Oct 09 15:56:59 compute-0 systemd-rc-local-generator[60990]: /etc/rc.d/rc.local is not marked executable, skipping.
Oct 09 15:56:59 compute-0 systemd-sysv-generator[60995]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct 09 15:56:59 compute-0 sudo[60492]: pam_unix(sudo:session): session closed for user root
Oct 09 15:57:00 compute-0 sudo[61756]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tktadgysgvvldzpaludnxvbnxgldhwvw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760025420.16458-652-101324597112124/AnsiballZ_systemd.py'
Oct 09 15:57:00 compute-0 sudo[61756]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 09 15:57:00 compute-0 python3.9[61771]: ansible-ansible.builtin.systemd Invoked with enabled=False masked=True name=virtproxyd-tcp.socket state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None
Oct 09 15:57:00 compute-0 systemd[1]: Reloading.
Oct 09 15:57:00 compute-0 systemd-rc-local-generator[62162]: /etc/rc.d/rc.local is not marked executable, skipping.
Oct 09 15:57:00 compute-0 systemd-sysv-generator[62166]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct 09 15:57:01 compute-0 systemd[1]: man-db-cache-update.service: Deactivated successfully.
Oct 09 15:57:01 compute-0 systemd[1]: Finished man-db-cache-update.service.
Oct 09 15:57:01 compute-0 systemd[1]: man-db-cache-update.service: Consumed 9.504s CPU time.
Oct 09 15:57:01 compute-0 systemd[1]: run-r35047c7e15234f49a17a8d74e0510bed.service: Deactivated successfully.
Oct 09 15:57:01 compute-0 sudo[61756]: pam_unix(sudo:session): session closed for user root
Oct 09 15:57:01 compute-0 sudo[62319]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dvlfobmgferxdgdzwhtgueknzodurfpc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760025421.2057354-710-221239437675216/AnsiballZ_systemd.py'
Oct 09 15:57:01 compute-0 sudo[62319]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 09 15:57:01 compute-0 python3.9[62321]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtlogd.service daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Oct 09 15:57:01 compute-0 systemd[1]: Reloading.
Oct 09 15:57:01 compute-0 systemd-sysv-generator[62356]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct 09 15:57:01 compute-0 systemd-rc-local-generator[62353]: /etc/rc.d/rc.local is not marked executable, skipping.
Oct 09 15:57:02 compute-0 sudo[62319]: pam_unix(sudo:session): session closed for user root
Oct 09 15:57:02 compute-0 sudo[62510]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-udjtsadllynglvndugotdijqzusmffhz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760025422.2337112-710-89031698168067/AnsiballZ_systemd.py'
Oct 09 15:57:02 compute-0 sudo[62510]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 09 15:57:02 compute-0 python3.9[62512]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtnodedevd.service daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Oct 09 15:57:02 compute-0 systemd[1]: Reloading.
Oct 09 15:57:02 compute-0 systemd-sysv-generator[62547]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct 09 15:57:02 compute-0 systemd-rc-local-generator[62541]: /etc/rc.d/rc.local is not marked executable, skipping.
Oct 09 15:57:03 compute-0 sudo[62510]: pam_unix(sudo:session): session closed for user root
Oct 09 15:57:03 compute-0 sudo[62700]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zhqyuxnazdekgilhtwlwyczhmpmvutzk ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760025423.20032-710-35519610062174/AnsiballZ_systemd.py'
Oct 09 15:57:03 compute-0 sudo[62700]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 09 15:57:03 compute-0 python3.9[62702]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtproxyd.service daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Oct 09 15:57:03 compute-0 systemd[1]: Reloading.
Oct 09 15:57:03 compute-0 systemd-rc-local-generator[62730]: /etc/rc.d/rc.local is not marked executable, skipping.
Oct 09 15:57:03 compute-0 systemd-sysv-generator[62735]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct 09 15:57:04 compute-0 sudo[62700]: pam_unix(sudo:session): session closed for user root
Oct 09 15:57:04 compute-0 sudo[62890]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vbkixqkraaermafawtitmzgulvtdouzx ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760025424.2375412-710-281545810105/AnsiballZ_systemd.py'
Oct 09 15:57:04 compute-0 sudo[62890]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 09 15:57:04 compute-0 python3.9[62892]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtqemud.service daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Oct 09 15:57:04 compute-0 sudo[62890]: pam_unix(sudo:session): session closed for user root
Oct 09 15:57:04 compute-0 podman[62894]: 2025-10-09 15:57:04.968672479 +0000 UTC m=+0.087121819 container health_status 0e966c7a283689daf23c5db5017f23f685ee836319240188a9f70ddd246563c5 (image=38.102.83.66:5001/podified-master-centos10/openstack-neutron-metadata-agent-ovn:watcher_latest, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, tcib_build_tag=watcher_latest, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 10 Base Image, tcib_managed=true, org.label-schema.vendor=CentOS, io.buildah.version=1.41.4, org.label-schema.build-date=20251007, org.label-schema.schema-version=1.0, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': '38.102.83.66:5001/podified-master-centos10/openstack-neutron-metadata-agent-ovn:watcher_latest', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent)
Oct 09 15:57:05 compute-0 sudo[63064]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vjmzsurryaraxxkcvyqhmtpwybgnbbrh ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760025425.0494156-710-105457469095398/AnsiballZ_systemd.py'
Oct 09 15:57:05 compute-0 sudo[63064]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 09 15:57:05 compute-0 python3.9[63066]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtsecretd.service daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Oct 09 15:57:05 compute-0 systemd[1]: Reloading.
Oct 09 15:57:05 compute-0 systemd-rc-local-generator[63096]: /etc/rc.d/rc.local is not marked executable, skipping.
Oct 09 15:57:05 compute-0 systemd-sysv-generator[63101]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct 09 15:57:05 compute-0 sudo[63064]: pam_unix(sudo:session): session closed for user root
Oct 09 15:57:06 compute-0 sudo[63254]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mruvqlqskyuxqknwgiginyspfnsqbrud ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760025426.5916078-782-7775021564253/AnsiballZ_systemd.py'
Oct 09 15:57:06 compute-0 sudo[63254]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 09 15:57:07 compute-0 python3.9[63256]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtproxyd-tls.socket state=started daemon_reload=False daemon_reexec=False scope=system no_block=False force=None
Oct 09 15:57:07 compute-0 systemd[1]: Reloading.
Oct 09 15:57:07 compute-0 systemd-rc-local-generator[63284]: /etc/rc.d/rc.local is not marked executable, skipping.
Oct 09 15:57:07 compute-0 systemd-sysv-generator[63287]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct 09 15:57:07 compute-0 systemd[1]: Listening on libvirt proxy daemon socket.
Oct 09 15:57:07 compute-0 systemd[1]: Listening on libvirt proxy daemon TLS IP socket.
Oct 09 15:57:07 compute-0 sudo[63254]: pam_unix(sudo:session): session closed for user root
Oct 09 15:57:08 compute-0 sudo[63446]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-nfwwiiyioxjrhmgevdlpcvrncrsinlfy ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760025427.7619312-798-243478078962043/AnsiballZ_systemd.py'
Oct 09 15:57:08 compute-0 sudo[63446]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 09 15:57:08 compute-0 python3.9[63448]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtlogd.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Oct 09 15:57:08 compute-0 sudo[63446]: pam_unix(sudo:session): session closed for user root
Oct 09 15:57:08 compute-0 sudo[63601]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-iurcjezayfzvychbcbfwoumsetvzawqf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760025428.4861054-798-150703724466048/AnsiballZ_systemd.py'
Oct 09 15:57:08 compute-0 sudo[63601]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 09 15:57:09 compute-0 python3.9[63603]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtlogd-admin.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Oct 09 15:57:09 compute-0 sudo[63601]: pam_unix(sudo:session): session closed for user root
Oct 09 15:57:09 compute-0 sudo[63756]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rnbcrkmluwvgskwugmogdxvuuyizhkfi ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760025429.2357905-798-236967012920979/AnsiballZ_systemd.py'
Oct 09 15:57:09 compute-0 sudo[63756]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 09 15:57:09 compute-0 python3.9[63758]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtnodedevd.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Oct 09 15:57:09 compute-0 sudo[63756]: pam_unix(sudo:session): session closed for user root
Oct 09 15:57:10 compute-0 sudo[63911]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dtthbnmzkmlqdmlplunsuikkhavbekil ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760025429.9612467-798-105635313776531/AnsiballZ_systemd.py'
Oct 09 15:57:10 compute-0 sudo[63911]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 09 15:57:10 compute-0 python3.9[63913]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtnodedevd-ro.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Oct 09 15:57:10 compute-0 sudo[63911]: pam_unix(sudo:session): session closed for user root
Oct 09 15:57:10 compute-0 sudo[64066]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hqrfjthxxtteaerrabalnlhstvuwnxwq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760025430.6945245-798-127942700050327/AnsiballZ_systemd.py'
Oct 09 15:57:10 compute-0 sudo[64066]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 09 15:57:11 compute-0 python3.9[64068]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtnodedevd-admin.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Oct 09 15:57:11 compute-0 sudo[64066]: pam_unix(sudo:session): session closed for user root
Oct 09 15:57:11 compute-0 sudo[64221]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hkhsgtcgscnngrmlppgowdqcfphtbuxf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760025431.4547842-798-156883268247252/AnsiballZ_systemd.py'
Oct 09 15:57:11 compute-0 sudo[64221]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 09 15:57:11 compute-0 python3.9[64223]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtproxyd.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Oct 09 15:57:12 compute-0 sudo[64221]: pam_unix(sudo:session): session closed for user root
Oct 09 15:57:12 compute-0 podman[64225]: 2025-10-09 15:57:12.124220034 +0000 UTC m=+0.111283713 container health_status d615e4f24ff62fbb14be4a63e6da568ec5b22309773c1c66103b6b4de64a8a2f (image=38.102.83.66:5001/podified-master-centos10/openstack-ovn-controller:watcher_latest, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, tcib_build_tag=watcher_latest, tcib_managed=true, io.buildah.version=1.41.4, org.label-schema.license=GPLv2, config_id=ovn_controller, container_name=ovn_controller, org.label-schema.name=CentOS Stream 10 Base Image, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': '38.102.83.66:5001/podified-master-centos10/openstack-ovn-controller:watcher_latest', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251007, org.label-schema.schema-version=1.0)
Oct 09 15:57:12 compute-0 sudo[64403]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wxyluvbpkpqxbpzadyxfypvtrbrkemlx ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760025432.1949701-798-147061539044133/AnsiballZ_systemd.py'
Oct 09 15:57:12 compute-0 sudo[64403]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 09 15:57:12 compute-0 python3.9[64405]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtproxyd-ro.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Oct 09 15:57:12 compute-0 sudo[64403]: pam_unix(sudo:session): session closed for user root
Oct 09 15:57:13 compute-0 sudo[64558]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-exwjgutgfjkilztsfducjfqqewcfgkog ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760025432.9656792-798-79157532271446/AnsiballZ_systemd.py'
Oct 09 15:57:13 compute-0 sudo[64558]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 09 15:57:13 compute-0 python3.9[64560]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtproxyd-admin.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Oct 09 15:57:13 compute-0 sudo[64558]: pam_unix(sudo:session): session closed for user root
Oct 09 15:57:14 compute-0 sudo[64713]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-estuphoharkxayazowcoiqkcbiikfnda ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760025433.8474383-798-243680421499389/AnsiballZ_systemd.py'
Oct 09 15:57:14 compute-0 sudo[64713]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 09 15:57:14 compute-0 python3.9[64715]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtqemud.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Oct 09 15:57:14 compute-0 sudo[64713]: pam_unix(sudo:session): session closed for user root
Oct 09 15:57:14 compute-0 sudo[64868]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dzxnikorheyrvbgxpvwiaejqhuqnvvll ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760025434.545793-798-80451679662559/AnsiballZ_systemd.py'
Oct 09 15:57:14 compute-0 sudo[64868]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 09 15:57:15 compute-0 python3.9[64870]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtqemud-ro.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Oct 09 15:57:15 compute-0 sudo[64868]: pam_unix(sudo:session): session closed for user root
Oct 09 15:57:15 compute-0 sudo[65023]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-lwpoignhwmzjyzrsunfbheflzlaassrf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760025435.2610915-798-224915177717951/AnsiballZ_systemd.py'
Oct 09 15:57:15 compute-0 sudo[65023]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 09 15:57:15 compute-0 python3.9[65025]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtqemud-admin.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Oct 09 15:57:15 compute-0 sudo[65023]: pam_unix(sudo:session): session closed for user root
Oct 09 15:57:16 compute-0 sudo[65178]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qcbkbxgmrqokkylbtezbasprrewcpgvf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760025436.00921-798-145941332311751/AnsiballZ_systemd.py'
Oct 09 15:57:16 compute-0 sudo[65178]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 09 15:57:16 compute-0 python3.9[65180]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtsecretd.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Oct 09 15:57:16 compute-0 sudo[65178]: pam_unix(sudo:session): session closed for user root
Oct 09 15:57:17 compute-0 sudo[65333]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-oehwbbnkiheexswqxeettheqkkertbez ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760025436.7825522-798-269777900083530/AnsiballZ_systemd.py'
Oct 09 15:57:17 compute-0 sudo[65333]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 09 15:57:17 compute-0 python3.9[65335]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtsecretd-ro.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Oct 09 15:57:17 compute-0 sudo[65333]: pam_unix(sudo:session): session closed for user root
Oct 09 15:57:17 compute-0 sudo[65488]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qcnambqoyeucscjhsbwygqkigavbadox ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760025437.5762088-798-143623539613945/AnsiballZ_systemd.py'
Oct 09 15:57:17 compute-0 sudo[65488]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 09 15:57:18 compute-0 python3.9[65490]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtsecretd-admin.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Oct 09 15:57:18 compute-0 sudo[65488]: pam_unix(sudo:session): session closed for user root
Oct 09 15:57:20 compute-0 sudo[65643]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-lfstoourspjralvmxggexdwuueaqgjzj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760025440.7107096-1002-231496144557670/AnsiballZ_file.py'
Oct 09 15:57:20 compute-0 sudo[65643]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 09 15:57:21 compute-0 python3.9[65645]: ansible-ansible.builtin.file Invoked with group=root owner=root path=/etc/tmpfiles.d/ setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None seuser=None serole=None selevel=None attributes=None
Oct 09 15:57:21 compute-0 sudo[65643]: pam_unix(sudo:session): session closed for user root
Oct 09 15:57:21 compute-0 sudo[65795]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kxhgveamfsgvcdpzniuqxsswrydblfxg ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760025441.310706-1002-103997161744992/AnsiballZ_file.py'
Oct 09 15:57:21 compute-0 sudo[65795]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 09 15:57:21 compute-0 python3.9[65797]: ansible-ansible.builtin.file Invoked with group=root owner=root path=/var/lib/edpm-config/firewall setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None seuser=None serole=None selevel=None attributes=None
Oct 09 15:57:21 compute-0 sudo[65795]: pam_unix(sudo:session): session closed for user root
Oct 09 15:57:22 compute-0 sudo[65947]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-txjzpnelhakiyosvmbbzwcjkoxzlenlt ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760025441.9294412-1002-8716786978461/AnsiballZ_file.py'
Oct 09 15:57:22 compute-0 sudo[65947]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 09 15:57:22 compute-0 python3.9[65949]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/pki/libvirt setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Oct 09 15:57:22 compute-0 sudo[65947]: pam_unix(sudo:session): session closed for user root
Oct 09 15:57:22 compute-0 sudo[66099]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mvkbqlmyvdclxlcgmqrfuwkbypfnbgly ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760025442.5081127-1002-238997674257598/AnsiballZ_file.py'
Oct 09 15:57:22 compute-0 sudo[66099]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 09 15:57:22 compute-0 python3.9[66101]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/pki/libvirt/private setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Oct 09 15:57:22 compute-0 sudo[66099]: pam_unix(sudo:session): session closed for user root
Oct 09 15:57:23 compute-0 sudo[66251]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ipbpwetjdapibkpuohahlnxxuvtrceqg ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760025443.0772855-1002-60944352525045/AnsiballZ_file.py'
Oct 09 15:57:23 compute-0 sudo[66251]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 09 15:57:23 compute-0 python3.9[66253]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/pki/CA setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Oct 09 15:57:23 compute-0 sudo[66251]: pam_unix(sudo:session): session closed for user root
Oct 09 15:57:24 compute-0 sudo[66403]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-frbhfxlekklybizclpovtmosolwzolya ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760025443.6837845-1002-147958914258873/AnsiballZ_file.py'
Oct 09 15:57:24 compute-0 sudo[66403]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 09 15:57:24 compute-0 python3.9[66405]: ansible-ansible.builtin.file Invoked with group=qemu owner=root path=/etc/pki/qemu setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None seuser=None serole=None selevel=None attributes=None
Oct 09 15:57:24 compute-0 sudo[66403]: pam_unix(sudo:session): session closed for user root
Oct 09 15:57:25 compute-0 sudo[66555]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fsjktndqfvpdqmgynpkxixagrcwrmwzr ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760025444.6197145-1088-210642167654684/AnsiballZ_stat.py'
Oct 09 15:57:25 compute-0 sudo[66555]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 09 15:57:25 compute-0 python3.9[66557]: ansible-ansible.legacy.stat Invoked with path=/etc/libvirt/virtlogd.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 09 15:57:25 compute-0 sudo[66555]: pam_unix(sudo:session): session closed for user root
Oct 09 15:57:25 compute-0 sudo[66680]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kfifxcciyydmkqakkuyufmqfudwyexjc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760025444.6197145-1088-210642167654684/AnsiballZ_copy.py'
Oct 09 15:57:25 compute-0 sudo[66680]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 09 15:57:25 compute-0 python3.9[66682]: ansible-ansible.legacy.copy Invoked with dest=/etc/libvirt/virtlogd.conf group=libvirt mode=0640 owner=libvirt src=/home/zuul/.ansible/tmp/ansible-tmp-1760025444.6197145-1088-210642167654684/.source.conf follow=False _original_basename=virtlogd.conf checksum=d7a72ae92c2c205983b029473e05a6aa4c58ec24 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 09 15:57:25 compute-0 sudo[66680]: pam_unix(sudo:session): session closed for user root
Oct 09 15:57:26 compute-0 sudo[66832]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-egetxewrlvmooabkllindpsgftuvqyrq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760025446.0908742-1088-22229079651149/AnsiballZ_stat.py'
Oct 09 15:57:26 compute-0 sudo[66832]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 09 15:57:26 compute-0 python3.9[66834]: ansible-ansible.legacy.stat Invoked with path=/etc/libvirt/virtnodedevd.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 09 15:57:26 compute-0 sudo[66832]: pam_unix(sudo:session): session closed for user root
Oct 09 15:57:26 compute-0 sudo[66957]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-orqjxuijovklletfmtupzowtblitqeou ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760025446.0908742-1088-22229079651149/AnsiballZ_copy.py'
Oct 09 15:57:26 compute-0 sudo[66957]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 09 15:57:27 compute-0 python3.9[66959]: ansible-ansible.legacy.copy Invoked with dest=/etc/libvirt/virtnodedevd.conf group=libvirt mode=0640 owner=libvirt src=/home/zuul/.ansible/tmp/ansible-tmp-1760025446.0908742-1088-22229079651149/.source.conf follow=False _original_basename=virtnodedevd.conf checksum=7a604468adb2868f1ab6ebd0fd4622286e6373e2 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 09 15:57:27 compute-0 sudo[66957]: pam_unix(sudo:session): session closed for user root
Oct 09 15:57:27 compute-0 sudo[67109]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-nvhrxscbogisdjhowfedhwozstveyozk ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760025447.280746-1088-251277671241989/AnsiballZ_stat.py'
Oct 09 15:57:27 compute-0 sudo[67109]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 09 15:57:27 compute-0 python3.9[67111]: ansible-ansible.legacy.stat Invoked with path=/etc/libvirt/virtproxyd.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 09 15:57:27 compute-0 sudo[67109]: pam_unix(sudo:session): session closed for user root
Oct 09 15:57:28 compute-0 sudo[67234]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dckucwwdlfqxhyzhikmlvvagrbjytqnb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760025447.280746-1088-251277671241989/AnsiballZ_copy.py'
Oct 09 15:57:28 compute-0 sudo[67234]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 09 15:57:28 compute-0 python3.9[67236]: ansible-ansible.legacy.copy Invoked with dest=/etc/libvirt/virtproxyd.conf group=libvirt mode=0640 owner=libvirt src=/home/zuul/.ansible/tmp/ansible-tmp-1760025447.280746-1088-251277671241989/.source.conf follow=False _original_basename=virtproxyd.conf checksum=28bc484b7c9988e03de49d4fcc0a088ea975f716 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 09 15:57:28 compute-0 sudo[67234]: pam_unix(sudo:session): session closed for user root
Oct 09 15:57:28 compute-0 sudo[67386]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fxngdgeuvfmpszzqqbfukoqodckzyuag ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760025448.4151475-1088-273678093952846/AnsiballZ_stat.py'
Oct 09 15:57:28 compute-0 sudo[67386]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 09 15:57:28 compute-0 python3.9[67388]: ansible-ansible.legacy.stat Invoked with path=/etc/libvirt/virtqemud.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 09 15:57:28 compute-0 sudo[67386]: pam_unix(sudo:session): session closed for user root
Oct 09 15:57:29 compute-0 sudo[67511]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ohpomljkyipaltgeqqrtzxpihmxewwyr ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760025448.4151475-1088-273678093952846/AnsiballZ_copy.py'
Oct 09 15:57:29 compute-0 sudo[67511]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 09 15:57:29 compute-0 python3.9[67513]: ansible-ansible.legacy.copy Invoked with dest=/etc/libvirt/virtqemud.conf group=libvirt mode=0640 owner=libvirt src=/home/zuul/.ansible/tmp/ansible-tmp-1760025448.4151475-1088-273678093952846/.source.conf follow=False _original_basename=virtqemud.conf checksum=7a604468adb2868f1ab6ebd0fd4622286e6373e2 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 09 15:57:29 compute-0 sudo[67511]: pam_unix(sudo:session): session closed for user root
Oct 09 15:57:29 compute-0 sudo[67663]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ryhdopzdlbqvkvshprypljnjdlbgljyk ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760025449.5399258-1088-219907395431954/AnsiballZ_stat.py'
Oct 09 15:57:29 compute-0 sudo[67663]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 09 15:57:29 compute-0 python3.9[67665]: ansible-ansible.legacy.stat Invoked with path=/etc/libvirt/qemu.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 09 15:57:29 compute-0 sudo[67663]: pam_unix(sudo:session): session closed for user root
Oct 09 15:57:30 compute-0 sudo[67788]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-acscsrfrewaraebazxvmncihjnsmakio ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760025449.5399258-1088-219907395431954/AnsiballZ_copy.py'
Oct 09 15:57:30 compute-0 sudo[67788]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 09 15:57:30 compute-0 python3.9[67790]: ansible-ansible.legacy.copy Invoked with dest=/etc/libvirt/qemu.conf group=libvirt mode=0640 owner=libvirt src=/home/zuul/.ansible/tmp/ansible-tmp-1760025449.5399258-1088-219907395431954/.source.conf follow=False _original_basename=qemu.conf.j2 checksum=c44de21af13c90603565570f09ff60c6a41ed8df backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 09 15:57:30 compute-0 sudo[67788]: pam_unix(sudo:session): session closed for user root
Oct 09 15:57:30 compute-0 sudo[67940]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-apouialakuowbxgkkgbytzjzsyzwysao ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760025450.5890596-1088-274756053448000/AnsiballZ_stat.py'
Oct 09 15:57:30 compute-0 sudo[67940]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 09 15:57:31 compute-0 python3.9[67942]: ansible-ansible.legacy.stat Invoked with path=/etc/libvirt/virtsecretd.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 09 15:57:31 compute-0 sudo[67940]: pam_unix(sudo:session): session closed for user root
Oct 09 15:57:31 compute-0 sudo[68065]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hspleqavqqpklfremsqfgqykhtnabwkt ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760025450.5890596-1088-274756053448000/AnsiballZ_copy.py'
Oct 09 15:57:31 compute-0 sudo[68065]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 09 15:57:31 compute-0 python3.9[68067]: ansible-ansible.legacy.copy Invoked with dest=/etc/libvirt/virtsecretd.conf group=libvirt mode=0640 owner=libvirt src=/home/zuul/.ansible/tmp/ansible-tmp-1760025450.5890596-1088-274756053448000/.source.conf follow=False _original_basename=virtsecretd.conf checksum=7a604468adb2868f1ab6ebd0fd4622286e6373e2 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 09 15:57:31 compute-0 sudo[68065]: pam_unix(sudo:session): session closed for user root
Oct 09 15:57:31 compute-0 sudo[68217]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ktkxwtwgbtmjqhrbzsavqhhfvjqibcmn ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760025451.699482-1088-192980390267637/AnsiballZ_stat.py'
Oct 09 15:57:31 compute-0 sudo[68217]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 09 15:57:32 compute-0 python3.9[68219]: ansible-ansible.legacy.stat Invoked with path=/etc/libvirt/auth.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 09 15:57:32 compute-0 sudo[68217]: pam_unix(sudo:session): session closed for user root
Oct 09 15:57:32 compute-0 sudo[68340]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-nxhofbpglyexvdzxsblzaoqqqtpzmoiv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760025451.699482-1088-192980390267637/AnsiballZ_copy.py'
Oct 09 15:57:32 compute-0 sudo[68340]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 09 15:57:32 compute-0 python3.9[68342]: ansible-ansible.legacy.copy Invoked with dest=/etc/libvirt/auth.conf group=libvirt mode=0600 owner=libvirt src=/home/zuul/.ansible/tmp/ansible-tmp-1760025451.699482-1088-192980390267637/.source.conf follow=False _original_basename=auth.conf checksum=a94cd818c374cec2c8425b70d2e0e2f41b743ae4 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 09 15:57:32 compute-0 sudo[68340]: pam_unix(sudo:session): session closed for user root
Oct 09 15:57:32 compute-0 sudo[68492]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hifyxikxixbjwmptxhmsptksckospejw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760025452.7355628-1088-33219046759034/AnsiballZ_stat.py'
Oct 09 15:57:32 compute-0 sudo[68492]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 09 15:57:33 compute-0 python3.9[68494]: ansible-ansible.legacy.stat Invoked with path=/etc/sasl2/libvirt.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 09 15:57:33 compute-0 sudo[68492]: pam_unix(sudo:session): session closed for user root
Oct 09 15:57:33 compute-0 sudo[68617]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-urmshpnunhgsngyqeopudmsonpwxvdjw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760025452.7355628-1088-33219046759034/AnsiballZ_copy.py'
Oct 09 15:57:33 compute-0 sudo[68617]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 09 15:57:33 compute-0 python3.9[68619]: ansible-ansible.legacy.copy Invoked with dest=/etc/sasl2/libvirt.conf group=libvirt mode=0640 owner=libvirt src=/home/zuul/.ansible/tmp/ansible-tmp-1760025452.7355628-1088-33219046759034/.source.conf follow=False _original_basename=sasl_libvirt.conf checksum=652e4d404bf79253d06956b8e9847c9364979d4a backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 09 15:57:33 compute-0 sudo[68617]: pam_unix(sudo:session): session closed for user root
Oct 09 15:57:35 compute-0 sudo[68783]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hddsxcfhihhdxwdiookmgfdegbkhiqhj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760025454.9207368-1314-14578606775808/AnsiballZ_command.py'
Oct 09 15:57:35 compute-0 sudo[68783]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 09 15:57:35 compute-0 podman[68744]: 2025-10-09 15:57:35.181158045 +0000 UTC m=+0.050270299 container health_status 0e966c7a283689daf23c5db5017f23f685ee836319240188a9f70ddd246563c5 (image=38.102.83.66:5001/podified-master-centos10/openstack-neutron-metadata-agent-ovn:watcher_latest, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': '38.102.83.66:5001/podified-master-centos10/openstack-neutron-metadata-agent-ovn:watcher_latest', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent, io.buildah.version=1.41.4, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251007, org.label-schema.schema-version=1.0, tcib_build_tag=watcher_latest, tcib_managed=true)
Oct 09 15:57:35 compute-0 ovn_metadata_agent[28608]: 2025-10-09 15:57:35.257 28613 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:405
Oct 09 15:57:35 compute-0 ovn_metadata_agent[28608]: 2025-10-09 15:57:35.257 28613 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.000s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:410
Oct 09 15:57:35 compute-0 ovn_metadata_agent[28608]: 2025-10-09 15:57:35.258 28613 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:424
Oct 09 15:57:35 compute-0 python3.9[68791]: ansible-ansible.legacy.command Invoked with cmd=saslpasswd2 -f /etc/libvirt/passwd.db -p -a libvirt -u openstack migration stdin=12345678 _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None
Oct 09 15:57:35 compute-0 sudo[68783]: pam_unix(sudo:session): session closed for user root
Oct 09 15:57:35 compute-0 systemd[1309]: Created slice User Background Tasks Slice.
Oct 09 15:57:35 compute-0 systemd[1309]: Starting Cleanup of User's Temporary Files and Directories...
Oct 09 15:57:35 compute-0 systemd[1309]: Finished Cleanup of User's Temporary Files and Directories.
Oct 09 15:57:35 compute-0 sudo[68945]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-sbfxzacvmukhkigxzxumdrgvjdguqjgu ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760025455.6060963-1332-210362340521746/AnsiballZ_file.py'
Oct 09 15:57:35 compute-0 sudo[68945]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 09 15:57:36 compute-0 python3.9[68947]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtlogd.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 09 15:57:36 compute-0 sudo[68945]: pam_unix(sudo:session): session closed for user root
Oct 09 15:57:36 compute-0 sudo[69097]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-glfwqkuluujgxjktexezccdjlxyplwfa ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760025456.1896994-1332-153092482702745/AnsiballZ_file.py'
Oct 09 15:57:36 compute-0 sudo[69097]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 09 15:57:36 compute-0 python3.9[69099]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtlogd-admin.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 09 15:57:36 compute-0 sudo[69097]: pam_unix(sudo:session): session closed for user root
Oct 09 15:57:36 compute-0 sudo[69249]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-yjgemtzhuorcobdtksgfklcnyfsaihlt ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760025456.7414587-1332-24944011207569/AnsiballZ_file.py'
Oct 09 15:57:36 compute-0 sudo[69249]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 09 15:57:37 compute-0 python3.9[69251]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtnodedevd.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 09 15:57:37 compute-0 sudo[69249]: pam_unix(sudo:session): session closed for user root
Oct 09 15:57:37 compute-0 sudo[69401]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-pdznrqzfcolecsbdctfgkpomqwcdpdio ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760025457.3105564-1332-253006479474353/AnsiballZ_file.py'
Oct 09 15:57:37 compute-0 sudo[69401]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 09 15:57:37 compute-0 python3.9[69403]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtnodedevd-ro.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 09 15:57:37 compute-0 sudo[69401]: pam_unix(sudo:session): session closed for user root
Oct 09 15:57:38 compute-0 sudo[69553]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dcgttixirkpzlhvotqhjznjscjehyval ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760025458.00164-1332-101713499375292/AnsiballZ_file.py'
Oct 09 15:57:38 compute-0 sudo[69553]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 09 15:57:38 compute-0 python3.9[69555]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtnodedevd-admin.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 09 15:57:38 compute-0 sudo[69553]: pam_unix(sudo:session): session closed for user root
Oct 09 15:57:38 compute-0 sudo[69705]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-aemcwemscmpogizotujcoprcafdhkqzg ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760025458.623472-1332-92783245711420/AnsiballZ_file.py'
Oct 09 15:57:38 compute-0 sudo[69705]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 09 15:57:39 compute-0 python3.9[69707]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtproxyd.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 09 15:57:39 compute-0 sudo[69705]: pam_unix(sudo:session): session closed for user root
Oct 09 15:57:39 compute-0 sudo[69857]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qafejjszzjoevwdfdfbjaqiqqkbeozfg ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760025459.3356411-1332-72628252058441/AnsiballZ_file.py'
Oct 09 15:57:39 compute-0 sudo[69857]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 09 15:57:39 compute-0 python3.9[69859]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtproxyd-ro.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 09 15:57:39 compute-0 sudo[69857]: pam_unix(sudo:session): session closed for user root
Oct 09 15:57:40 compute-0 sudo[70009]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hsusgebrqntcfqcqarkdfvgelcphmnsh ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760025459.9592328-1332-164539035356434/AnsiballZ_file.py'
Oct 09 15:57:40 compute-0 sudo[70009]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 09 15:57:40 compute-0 python3.9[70011]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtproxyd-admin.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 09 15:57:40 compute-0 sudo[70009]: pam_unix(sudo:session): session closed for user root
Oct 09 15:57:40 compute-0 sudo[70161]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rjmarulhugzlfwtgnuapeyetdjimyubk ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760025460.535281-1332-265934370307189/AnsiballZ_file.py'
Oct 09 15:57:40 compute-0 sudo[70161]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 09 15:57:40 compute-0 python3.9[70163]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtqemud.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 09 15:57:40 compute-0 sudo[70161]: pam_unix(sudo:session): session closed for user root
Oct 09 15:57:41 compute-0 sudo[70313]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-cjnannhqitthewomzxwzgmwtqgdmzqam ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760025461.089559-1332-197183395088939/AnsiballZ_file.py'
Oct 09 15:57:41 compute-0 sudo[70313]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 09 15:57:41 compute-0 python3.9[70315]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtqemud-ro.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 09 15:57:41 compute-0 sudo[70313]: pam_unix(sudo:session): session closed for user root
Oct 09 15:57:41 compute-0 sudo[70465]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-itczabhgcptxxibnpjgimjxjhqpxxbnu ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760025461.6647108-1332-108053573396049/AnsiballZ_file.py'
Oct 09 15:57:41 compute-0 sudo[70465]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 09 15:57:42 compute-0 python3.9[70467]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtqemud-admin.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 09 15:57:42 compute-0 sudo[70465]: pam_unix(sudo:session): session closed for user root
Oct 09 15:57:42 compute-0 sudo[70627]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dyzhhvhdddxgcaqdttsmbcnsjfluxfcy ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760025462.2421274-1332-146948507047741/AnsiballZ_file.py'
Oct 09 15:57:42 compute-0 sudo[70627]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 09 15:57:42 compute-0 podman[70591]: 2025-10-09 15:57:42.540306323 +0000 UTC m=+0.080467221 container health_status d615e4f24ff62fbb14be4a63e6da568ec5b22309773c1c66103b6b4de64a8a2f (image=38.102.83.66:5001/podified-master-centos10/openstack-ovn-controller:watcher_latest, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=watcher_latest, config_id=ovn_controller, container_name=ovn_controller, managed_by=edpm_ansible, io.buildah.version=1.41.4, org.label-schema.build-date=20251007, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': '38.102.83.66:5001/podified-master-centos10/openstack-ovn-controller:watcher_latest', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team)
Oct 09 15:57:42 compute-0 python3.9[70632]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtsecretd.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 09 15:57:42 compute-0 sudo[70627]: pam_unix(sudo:session): session closed for user root
Oct 09 15:57:43 compute-0 sudo[70794]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-oquddygrtdbwmjerspoyvugicukfrkgo ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760025462.8062513-1332-70552398222240/AnsiballZ_file.py'
Oct 09 15:57:43 compute-0 sudo[70794]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 09 15:57:43 compute-0 python3.9[70796]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtsecretd-ro.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 09 15:57:43 compute-0 sudo[70794]: pam_unix(sudo:session): session closed for user root
Oct 09 15:57:43 compute-0 sudo[70946]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-smphwwiqhukvdzpqehdjnksjxotncyge ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760025463.3786347-1332-135333706849768/AnsiballZ_file.py'
Oct 09 15:57:43 compute-0 sudo[70946]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 09 15:57:43 compute-0 python3.9[70948]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtsecretd-admin.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 09 15:57:43 compute-0 sudo[70946]: pam_unix(sudo:session): session closed for user root
Oct 09 15:57:44 compute-0 sudo[71098]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dadwkfmhddrwfwrrghcfvvtzoocfylaf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760025464.7462451-1530-257405709146078/AnsiballZ_stat.py'
Oct 09 15:57:44 compute-0 sudo[71098]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 09 15:57:45 compute-0 python3.9[71100]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtlogd.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 09 15:57:45 compute-0 sudo[71098]: pam_unix(sudo:session): session closed for user root
Oct 09 15:57:45 compute-0 sudo[71221]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wnxrsywczimjlvvenykctwvnpwshxcxv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760025464.7462451-1530-257405709146078/AnsiballZ_copy.py'
Oct 09 15:57:45 compute-0 sudo[71221]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 09 15:57:45 compute-0 python3.9[71223]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtlogd.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1760025464.7462451-1530-257405709146078/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 09 15:57:45 compute-0 sudo[71221]: pam_unix(sudo:session): session closed for user root
Oct 09 15:57:46 compute-0 sudo[71373]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-cqwelrwykhyctglspxvgkevomyjrxgki ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760025465.8554924-1530-86115517594504/AnsiballZ_stat.py'
Oct 09 15:57:46 compute-0 sudo[71373]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 09 15:57:46 compute-0 python3.9[71375]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtlogd-admin.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 09 15:57:46 compute-0 sudo[71373]: pam_unix(sudo:session): session closed for user root
Oct 09 15:57:46 compute-0 sudo[71496]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-anstxtxwhovxdlfswxyhfhcudmlnqeyb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760025465.8554924-1530-86115517594504/AnsiballZ_copy.py'
Oct 09 15:57:46 compute-0 sudo[71496]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 09 15:57:46 compute-0 python3.9[71498]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtlogd-admin.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1760025465.8554924-1530-86115517594504/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 09 15:57:46 compute-0 sudo[71496]: pam_unix(sudo:session): session closed for user root
Oct 09 15:57:47 compute-0 sudo[71648]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ekbyvjiyhjxazvodbaysjkyvpbdqiumd ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760025466.9460838-1530-90579908410970/AnsiballZ_stat.py'
Oct 09 15:57:47 compute-0 sudo[71648]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 09 15:57:47 compute-0 python3.9[71650]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtnodedevd.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 09 15:57:47 compute-0 sudo[71648]: pam_unix(sudo:session): session closed for user root
Oct 09 15:57:47 compute-0 sudo[71771]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-nioxvqvdaqjyiqiybfhqekyutwqrmpvh ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760025466.9460838-1530-90579908410970/AnsiballZ_copy.py'
Oct 09 15:57:47 compute-0 sudo[71771]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 09 15:57:47 compute-0 python3.9[71773]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtnodedevd.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1760025466.9460838-1530-90579908410970/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 09 15:57:47 compute-0 sudo[71771]: pam_unix(sudo:session): session closed for user root
Oct 09 15:57:48 compute-0 sudo[71923]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kebqcwkmbytipwuzhuqveinedjqnbgym ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760025468.104126-1530-188175111060053/AnsiballZ_stat.py'
Oct 09 15:57:48 compute-0 sudo[71923]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 09 15:57:48 compute-0 python3.9[71925]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtnodedevd-ro.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 09 15:57:48 compute-0 sudo[71923]: pam_unix(sudo:session): session closed for user root
Oct 09 15:57:48 compute-0 sudo[72046]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-cyawhyzaruneyezogtjoahxbnmtyndtn ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760025468.104126-1530-188175111060053/AnsiballZ_copy.py'
Oct 09 15:57:48 compute-0 sudo[72046]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 09 15:57:49 compute-0 python3.9[72048]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtnodedevd-ro.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1760025468.104126-1530-188175111060053/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 09 15:57:49 compute-0 sudo[72046]: pam_unix(sudo:session): session closed for user root
Oct 09 15:57:49 compute-0 sudo[72198]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qghbkrgfeenvlxiuofrcylfwxnicvect ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760025469.1974227-1530-203433306591134/AnsiballZ_stat.py'
Oct 09 15:57:49 compute-0 sudo[72198]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 09 15:57:49 compute-0 python3.9[72200]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtnodedevd-admin.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 09 15:57:49 compute-0 sudo[72198]: pam_unix(sudo:session): session closed for user root
Oct 09 15:57:50 compute-0 sudo[72321]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-asdvfcuczaucxrqsgzhxzoeulwllfuwm ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760025469.1974227-1530-203433306591134/AnsiballZ_copy.py'
Oct 09 15:57:50 compute-0 sudo[72321]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 09 15:57:50 compute-0 python3.9[72323]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtnodedevd-admin.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1760025469.1974227-1530-203433306591134/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 09 15:57:50 compute-0 sudo[72321]: pam_unix(sudo:session): session closed for user root
Oct 09 15:57:50 compute-0 sudo[72473]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-irupybgrxnspcpatbkuwmcdrxioifxua ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760025470.410991-1530-159824267520667/AnsiballZ_stat.py'
Oct 09 15:57:50 compute-0 sudo[72473]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 09 15:57:50 compute-0 python3.9[72475]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtproxyd.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 09 15:57:50 compute-0 sudo[72473]: pam_unix(sudo:session): session closed for user root
Oct 09 15:57:51 compute-0 sudo[72596]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-eimqmcxdyoijqllzvcdbwmbcszsousbs ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760025470.410991-1530-159824267520667/AnsiballZ_copy.py'
Oct 09 15:57:51 compute-0 sudo[72596]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 09 15:57:51 compute-0 python3.9[72598]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtproxyd.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1760025470.410991-1530-159824267520667/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 09 15:57:51 compute-0 sudo[72596]: pam_unix(sudo:session): session closed for user root
Oct 09 15:57:51 compute-0 sudo[72748]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jfsruuvijzlekjudlenkzsqnqtpykrgc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760025471.448443-1530-56427258123576/AnsiballZ_stat.py'
Oct 09 15:57:51 compute-0 sudo[72748]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 09 15:57:51 compute-0 python3.9[72750]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtproxyd-ro.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 09 15:57:51 compute-0 sudo[72748]: pam_unix(sudo:session): session closed for user root
Oct 09 15:57:52 compute-0 sudo[72871]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jjkpxxqnvpddoeolnfyhcrhtaewedfch ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760025471.448443-1530-56427258123576/AnsiballZ_copy.py'
Oct 09 15:57:52 compute-0 sudo[72871]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 09 15:57:52 compute-0 python3.9[72873]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtproxyd-ro.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1760025471.448443-1530-56427258123576/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 09 15:57:52 compute-0 sudo[72871]: pam_unix(sudo:session): session closed for user root
Oct 09 15:57:52 compute-0 sudo[73023]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tuptpjzesvajapaovbsctvvrwqjfziuf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760025472.48105-1530-53997653191576/AnsiballZ_stat.py'
Oct 09 15:57:52 compute-0 sudo[73023]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 09 15:57:52 compute-0 python3.9[73025]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtproxyd-admin.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 09 15:57:52 compute-0 sudo[73023]: pam_unix(sudo:session): session closed for user root
Oct 09 15:57:53 compute-0 sudo[73146]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-prirhckrziihckclbiizhxfuwmomwrqn ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760025472.48105-1530-53997653191576/AnsiballZ_copy.py'
Oct 09 15:57:53 compute-0 sudo[73146]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 09 15:57:53 compute-0 python3.9[73148]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtproxyd-admin.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1760025472.48105-1530-53997653191576/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 09 15:57:53 compute-0 sudo[73146]: pam_unix(sudo:session): session closed for user root
Oct 09 15:57:53 compute-0 sudo[73298]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ckgyrqnlcjuciieuusvakktpmwyzvwdh ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760025473.5686016-1530-71994163198148/AnsiballZ_stat.py'
Oct 09 15:57:53 compute-0 sudo[73298]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 09 15:57:54 compute-0 python3.9[73300]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtqemud.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 09 15:57:54 compute-0 sudo[73298]: pam_unix(sudo:session): session closed for user root
Oct 09 15:57:54 compute-0 sudo[73421]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-cwgzzzswwehaeyksgddoqrzfptjeagca ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760025473.5686016-1530-71994163198148/AnsiballZ_copy.py'
Oct 09 15:57:54 compute-0 sudo[73421]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 09 15:57:54 compute-0 python3.9[73423]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtqemud.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1760025473.5686016-1530-71994163198148/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 09 15:57:54 compute-0 sudo[73421]: pam_unix(sudo:session): session closed for user root
Oct 09 15:57:54 compute-0 sudo[73573]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-pgrrkfsxxbgyjvywxytydqdmxpjzksqi ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760025474.6762762-1530-150120514282421/AnsiballZ_stat.py'
Oct 09 15:57:54 compute-0 sudo[73573]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 09 15:57:55 compute-0 python3.9[73575]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtqemud-ro.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 09 15:57:55 compute-0 sudo[73573]: pam_unix(sudo:session): session closed for user root
Oct 09 15:57:55 compute-0 sudo[73696]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gpckxgdazfdszmgakauiwfhoyibbxevi ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760025474.6762762-1530-150120514282421/AnsiballZ_copy.py'
Oct 09 15:57:55 compute-0 sudo[73696]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 09 15:57:55 compute-0 python3.9[73698]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtqemud-ro.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1760025474.6762762-1530-150120514282421/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 09 15:57:55 compute-0 sudo[73696]: pam_unix(sudo:session): session closed for user root
Oct 09 15:57:56 compute-0 sudo[73848]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hhihowivjuaiosicqhfxvhgpfvmamwzy ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760025475.7425678-1530-161895115974274/AnsiballZ_stat.py'
Oct 09 15:57:56 compute-0 sudo[73848]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 09 15:57:56 compute-0 python3.9[73850]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtqemud-admin.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 09 15:57:56 compute-0 sudo[73848]: pam_unix(sudo:session): session closed for user root
Oct 09 15:57:56 compute-0 sudo[73971]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-cufdvlukmypwhpfgarfjgnqpsxaliwcm ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760025475.7425678-1530-161895115974274/AnsiballZ_copy.py'
Oct 09 15:57:56 compute-0 sudo[73971]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 09 15:57:56 compute-0 python3.9[73973]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtqemud-admin.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1760025475.7425678-1530-161895115974274/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 09 15:57:56 compute-0 sudo[73971]: pam_unix(sudo:session): session closed for user root
Oct 09 15:57:57 compute-0 sudo[74123]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hzrcrnyzdmoisflynswgnipykshxyllb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760025476.9708352-1530-47088221387150/AnsiballZ_stat.py'
Oct 09 15:57:57 compute-0 sudo[74123]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 09 15:57:57 compute-0 python3.9[74125]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtsecretd.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 09 15:57:57 compute-0 sudo[74123]: pam_unix(sudo:session): session closed for user root
Oct 09 15:57:57 compute-0 sudo[74246]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zssiopjozxpsxqdegstndykkcgsrpqxa ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760025476.9708352-1530-47088221387150/AnsiballZ_copy.py'
Oct 09 15:57:57 compute-0 sudo[74246]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 09 15:57:57 compute-0 python3.9[74248]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtsecretd.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1760025476.9708352-1530-47088221387150/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 09 15:57:57 compute-0 sudo[74246]: pam_unix(sudo:session): session closed for user root
Oct 09 15:57:58 compute-0 sudo[74398]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rnuqpvhrhuisooeectspuvxjeomvxmqj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760025478.0485995-1530-61444629446413/AnsiballZ_stat.py'
Oct 09 15:57:58 compute-0 sudo[74398]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 09 15:57:58 compute-0 python3.9[74400]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtsecretd-ro.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 09 15:57:58 compute-0 sudo[74398]: pam_unix(sudo:session): session closed for user root
Oct 09 15:57:58 compute-0 sudo[74521]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-obzyypksmplpphtosuqibaicpqkgtarb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760025478.0485995-1530-61444629446413/AnsiballZ_copy.py'
Oct 09 15:57:58 compute-0 sudo[74521]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 09 15:57:58 compute-0 python3.9[74523]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtsecretd-ro.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1760025478.0485995-1530-61444629446413/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 09 15:57:58 compute-0 sudo[74521]: pam_unix(sudo:session): session closed for user root
Oct 09 15:57:59 compute-0 sudo[74673]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rpwpfxdpipdsodjvekriwkhmpaxnqprx ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760025479.0627525-1530-19786599482152/AnsiballZ_stat.py'
Oct 09 15:57:59 compute-0 sudo[74673]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 09 15:57:59 compute-0 python3.9[74675]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtsecretd-admin.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 09 15:57:59 compute-0 sudo[74673]: pam_unix(sudo:session): session closed for user root
Oct 09 15:57:59 compute-0 sudo[74796]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jckwmcgjkwdsfrndpwwdgwdrqrynnfcn ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760025479.0627525-1530-19786599482152/AnsiballZ_copy.py'
Oct 09 15:57:59 compute-0 sudo[74796]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 09 15:58:00 compute-0 python3.9[74798]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtsecretd-admin.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1760025479.0627525-1530-19786599482152/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 09 15:58:00 compute-0 sudo[74796]: pam_unix(sudo:session): session closed for user root
Oct 09 15:58:02 compute-0 python3.9[74948]: ansible-ansible.legacy.command Invoked with _raw_params=set -o pipefail
                                            ls -lRZ /run/libvirt | grep -E ':container_\S+_t'
                                             _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 09 15:58:03 compute-0 sudo[75101]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ukzewaeuoobqfjyudigkrvxlihcvsmiw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760025482.6681972-1942-167662962148904/AnsiballZ_seboolean.py'
Oct 09 15:58:03 compute-0 sudo[75101]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 09 15:58:03 compute-0 python3.9[75103]: ansible-ansible.posix.seboolean Invoked with name=os_enable_vtpm persistent=True state=True ignore_selinux_state=False
Oct 09 15:58:04 compute-0 sudo[75101]: pam_unix(sudo:session): session closed for user root
Oct 09 15:58:04 compute-0 sudo[75257]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-oneakekcpqxsaqiudcqybxtkgbdubumn ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760025484.5839977-1958-261566256843751/AnsiballZ_copy.py'
Oct 09 15:58:04 compute-0 dbus-broker-launch[833]: avc:  op=load_policy lsm=selinux seqno=6 res=1
Oct 09 15:58:04 compute-0 sudo[75257]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 09 15:58:05 compute-0 python3.9[75259]: ansible-ansible.legacy.copy Invoked with dest=/etc/pki/libvirt/servercert.pem group=root mode=0644 owner=root remote_src=True src=/var/lib/openstack/certs/libvirt/default/tls.crt backup=False force=True follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 09 15:58:05 compute-0 sudo[75257]: pam_unix(sudo:session): session closed for user root
Oct 09 15:58:05 compute-0 sudo[75420]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-allikrwjfpblfdnzmebhcrhyodymurur ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760025485.194123-1958-223809908923231/AnsiballZ_copy.py'
Oct 09 15:58:05 compute-0 sudo[75420]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 09 15:58:05 compute-0 podman[75383]: 2025-10-09 15:58:05.506508809 +0000 UTC m=+0.065467730 container health_status 0e966c7a283689daf23c5db5017f23f685ee836319240188a9f70ddd246563c5 (image=38.102.83.66:5001/podified-master-centos10/openstack-neutron-metadata-agent-ovn:watcher_latest, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251007, org.label-schema.vendor=CentOS, tcib_build_tag=watcher_latest, io.buildah.version=1.41.4, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 10 Base Image, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': '38.102.83.66:5001/podified-master-centos10/openstack-neutron-metadata-agent-ovn:watcher_latest', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, org.label-schema.license=GPLv2, tcib_managed=true)
Oct 09 15:58:05 compute-0 python3.9[75428]: ansible-ansible.legacy.copy Invoked with dest=/etc/pki/libvirt/private/serverkey.pem group=root mode=0600 owner=root remote_src=True src=/var/lib/openstack/certs/libvirt/default/tls.key backup=False force=True follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 09 15:58:05 compute-0 sudo[75420]: pam_unix(sudo:session): session closed for user root
Oct 09 15:58:06 compute-0 sudo[75578]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-pjvnokqwakfzebsswadtluziulsltmlb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760025485.8931525-1958-30476169298057/AnsiballZ_copy.py'
Oct 09 15:58:06 compute-0 sudo[75578]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 09 15:58:06 compute-0 python3.9[75580]: ansible-ansible.legacy.copy Invoked with dest=/etc/pki/libvirt/clientcert.pem group=root mode=0644 owner=root remote_src=True src=/var/lib/openstack/certs/libvirt/default/tls.crt backup=False force=True follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 09 15:58:06 compute-0 sudo[75578]: pam_unix(sudo:session): session closed for user root
Oct 09 15:58:06 compute-0 sudo[75730]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fmirfbpoxrebxfytabvnicihwqvgvjyh ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760025486.6473944-1958-265230851586217/AnsiballZ_copy.py'
Oct 09 15:58:06 compute-0 sudo[75730]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 09 15:58:07 compute-0 python3.9[75732]: ansible-ansible.legacy.copy Invoked with dest=/etc/pki/libvirt/private/clientkey.pem group=root mode=0644 owner=root remote_src=True src=/var/lib/openstack/certs/libvirt/default/tls.key backup=False force=True follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 09 15:58:07 compute-0 sudo[75730]: pam_unix(sudo:session): session closed for user root
Oct 09 15:58:07 compute-0 sudo[75882]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fepigqrvvvjaaxakgraqzgygsprbfoxc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760025487.3354087-1958-41874597644330/AnsiballZ_copy.py'
Oct 09 15:58:07 compute-0 sudo[75882]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 09 15:58:07 compute-0 python3.9[75884]: ansible-ansible.legacy.copy Invoked with dest=/etc/pki/CA/cacert.pem group=root mode=0644 owner=root remote_src=True src=/var/lib/openstack/certs/libvirt/default/ca.crt backup=False force=True follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 09 15:58:07 compute-0 sudo[75882]: pam_unix(sudo:session): session closed for user root
Oct 09 15:58:08 compute-0 sudo[76034]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qbmwplidcpmrtvipxajpgmkeqmilygiy ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760025487.9620955-2030-220000465686748/AnsiballZ_copy.py'
Oct 09 15:58:08 compute-0 sudo[76034]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 09 15:58:08 compute-0 python3.9[76036]: ansible-ansible.legacy.copy Invoked with dest=/etc/pki/qemu/server-cert.pem group=qemu mode=0640 owner=root remote_src=True src=/var/lib/openstack/certs/libvirt/default/tls.crt backup=False force=True follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 09 15:58:08 compute-0 sudo[76034]: pam_unix(sudo:session): session closed for user root
Oct 09 15:58:08 compute-0 sudo[76186]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ekngskzqxcvxqdeyhlclkkqojckcgxty ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760025488.5185637-2030-269528565582552/AnsiballZ_copy.py'
Oct 09 15:58:08 compute-0 sudo[76186]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 09 15:58:08 compute-0 python3.9[76188]: ansible-ansible.legacy.copy Invoked with dest=/etc/pki/qemu/server-key.pem group=qemu mode=0640 owner=root remote_src=True src=/var/lib/openstack/certs/libvirt/default/tls.key backup=False force=True follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 09 15:58:08 compute-0 sudo[76186]: pam_unix(sudo:session): session closed for user root
Oct 09 15:58:09 compute-0 sudo[76338]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-cvlnynnymsljugnqpnkpjrsgftjgihjo ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760025489.1414504-2030-2052546280559/AnsiballZ_copy.py'
Oct 09 15:58:09 compute-0 sudo[76338]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 09 15:58:09 compute-0 python3.9[76340]: ansible-ansible.legacy.copy Invoked with dest=/etc/pki/qemu/client-cert.pem group=qemu mode=0640 owner=root remote_src=True src=/var/lib/openstack/certs/libvirt/default/tls.crt backup=False force=True follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 09 15:58:09 compute-0 sudo[76338]: pam_unix(sudo:session): session closed for user root
Oct 09 15:58:09 compute-0 sudo[76490]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-peudjsvuwslmljvkvpaoapouqxtdidoe ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760025489.7227635-2030-14981621624099/AnsiballZ_copy.py'
Oct 09 15:58:10 compute-0 sudo[76490]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 09 15:58:10 compute-0 python3.9[76492]: ansible-ansible.legacy.copy Invoked with dest=/etc/pki/qemu/client-key.pem group=qemu mode=0640 owner=root remote_src=True src=/var/lib/openstack/certs/libvirt/default/tls.key backup=False force=True follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 09 15:58:10 compute-0 sudo[76490]: pam_unix(sudo:session): session closed for user root
Oct 09 15:58:10 compute-0 sudo[76642]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-puxewzunbjihrdnpiibhopkijqhstveo ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760025490.3496997-2030-160858133248523/AnsiballZ_copy.py'
Oct 09 15:58:10 compute-0 sudo[76642]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 09 15:58:10 compute-0 python3.9[76644]: ansible-ansible.legacy.copy Invoked with dest=/etc/pki/qemu/ca-cert.pem group=qemu mode=0640 owner=root remote_src=True src=/var/lib/openstack/certs/libvirt/default/ca.crt backup=False force=True follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 09 15:58:10 compute-0 sudo[76642]: pam_unix(sudo:session): session closed for user root
Oct 09 15:58:11 compute-0 sudo[76794]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-sanpsdnnhwwskzsgbtzidskshfwrexiw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760025491.2152798-2102-3885330038604/AnsiballZ_systemd.py'
Oct 09 15:58:11 compute-0 sudo[76794]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 09 15:58:11 compute-0 python3.9[76796]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True name=virtlogd.service state=restarted daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Oct 09 15:58:11 compute-0 systemd[1]: Reloading.
Oct 09 15:58:11 compute-0 systemd-sysv-generator[76821]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct 09 15:58:11 compute-0 systemd-rc-local-generator[76817]: /etc/rc.d/rc.local is not marked executable, skipping.
Oct 09 15:58:12 compute-0 systemd[1]: Starting libvirt logging daemon socket...
Oct 09 15:58:12 compute-0 systemd[1]: Listening on libvirt logging daemon socket.
Oct 09 15:58:12 compute-0 systemd[1]: Starting libvirt logging daemon admin socket...
Oct 09 15:58:12 compute-0 systemd[1]: Listening on libvirt logging daemon admin socket.
Oct 09 15:58:12 compute-0 systemd[1]: Starting libvirt logging daemon...
Oct 09 15:58:12 compute-0 systemd[1]: Started libvirt logging daemon.
Oct 09 15:58:12 compute-0 sudo[76794]: pam_unix(sudo:session): session closed for user root
Oct 09 15:58:12 compute-0 sudo[76986]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ccbuoxegihzrzdammuxaqkmdmkffxofg ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760025492.2673078-2102-176014478285309/AnsiballZ_systemd.py'
Oct 09 15:58:12 compute-0 sudo[76986]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 09 15:58:12 compute-0 python3.9[76988]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True name=virtnodedevd.service state=restarted daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Oct 09 15:58:12 compute-0 systemd[1]: Reloading.
Oct 09 15:58:12 compute-0 podman[76989]: 2025-10-09 15:58:12.874286755 +0000 UTC m=+0.104301315 container health_status d615e4f24ff62fbb14be4a63e6da568ec5b22309773c1c66103b6b4de64a8a2f (image=38.102.83.66:5001/podified-master-centos10/openstack-ovn-controller:watcher_latest, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251007, container_name=ovn_controller, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=watcher_latest, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': '38.102.83.66:5001/podified-master-centos10/openstack-ovn-controller:watcher_latest', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, io.buildah.version=1.41.4, maintainer=OpenStack Kubernetes Operator team)
Oct 09 15:58:12 compute-0 systemd-rc-local-generator[77040]: /etc/rc.d/rc.local is not marked executable, skipping.
Oct 09 15:58:12 compute-0 systemd-sysv-generator[77044]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct 09 15:58:13 compute-0 systemd[1]: Starting libvirt nodedev daemon socket...
Oct 09 15:58:13 compute-0 systemd[1]: Listening on libvirt nodedev daemon socket.
Oct 09 15:58:13 compute-0 systemd[1]: Starting libvirt nodedev daemon admin socket...
Oct 09 15:58:13 compute-0 systemd[1]: Starting libvirt nodedev daemon read-only socket...
Oct 09 15:58:13 compute-0 systemd[1]: Listening on libvirt nodedev daemon admin socket.
Oct 09 15:58:13 compute-0 systemd[1]: Listening on libvirt nodedev daemon read-only socket.
Oct 09 15:58:13 compute-0 systemd[1]: Starting libvirt nodedev daemon...
Oct 09 15:58:13 compute-0 systemd[1]: Started libvirt nodedev daemon.
Oct 09 15:58:13 compute-0 sudo[76986]: pam_unix(sudo:session): session closed for user root
Oct 09 15:58:13 compute-0 sudo[77228]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wqhpqxdakobiiqgirygqtcznjkaddwkz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760025493.302776-2102-100589224690702/AnsiballZ_systemd.py'
Oct 09 15:58:13 compute-0 sudo[77228]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 09 15:58:13 compute-0 python3.9[77230]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True name=virtproxyd.service state=restarted daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Oct 09 15:58:13 compute-0 systemd[1]: Reloading.
Oct 09 15:58:13 compute-0 systemd-sysv-generator[77260]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct 09 15:58:13 compute-0 systemd-rc-local-generator[77257]: /etc/rc.d/rc.local is not marked executable, skipping.
Oct 09 15:58:14 compute-0 systemd[1]: Starting SETroubleshoot daemon for processing new SELinux denial logs...
Oct 09 15:58:14 compute-0 systemd[1]: Starting libvirt proxy daemon admin socket...
Oct 09 15:58:14 compute-0 systemd[1]: Starting libvirt proxy daemon read-only socket...
Oct 09 15:58:14 compute-0 systemd[1]: Listening on libvirt proxy daemon admin socket.
Oct 09 15:58:14 compute-0 systemd[1]: Listening on libvirt proxy daemon read-only socket.
Oct 09 15:58:14 compute-0 systemd[1]: Starting libvirt proxy daemon...
Oct 09 15:58:14 compute-0 systemd[1]: Started libvirt proxy daemon.
Oct 09 15:58:14 compute-0 sudo[77228]: pam_unix(sudo:session): session closed for user root
Oct 09 15:58:14 compute-0 systemd[1]: Started SETroubleshoot daemon for processing new SELinux denial logs.
Oct 09 15:58:14 compute-0 sudo[77440]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-sgiutzfeiavosbgdlxoxhylfkmqxyiss ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760025494.3602567-2102-249693341095996/AnsiballZ_systemd.py'
Oct 09 15:58:14 compute-0 sudo[77440]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 09 15:58:14 compute-0 systemd[1]: Created slice Slice /system/dbus-:1.1-org.fedoraproject.SetroubleshootPrivileged.
Oct 09 15:58:14 compute-0 systemd[1]: Started dbus-:1.1-org.fedoraproject.SetroubleshootPrivileged@0.service.
Oct 09 15:58:14 compute-0 python3.9[77444]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True name=virtqemud.service state=restarted daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Oct 09 15:58:14 compute-0 systemd[1]: Reloading.
Oct 09 15:58:15 compute-0 systemd-rc-local-generator[77474]: /etc/rc.d/rc.local is not marked executable, skipping.
Oct 09 15:58:15 compute-0 systemd-sysv-generator[77477]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct 09 15:58:15 compute-0 systemd[1]: Listening on libvirt locking daemon socket.
Oct 09 15:58:15 compute-0 systemd[1]: Starting libvirt QEMU daemon socket...
Oct 09 15:58:15 compute-0 systemd[1]: Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw).
Oct 09 15:58:15 compute-0 systemd[1]: Starting Virtual Machine and Container Registration Service...
Oct 09 15:58:15 compute-0 systemd[1]: Listening on libvirt QEMU daemon socket.
Oct 09 15:58:15 compute-0 systemd[1]: Starting libvirt QEMU daemon admin socket...
Oct 09 15:58:15 compute-0 systemd[1]: Starting libvirt QEMU daemon read-only socket...
Oct 09 15:58:15 compute-0 systemd[1]: Listening on libvirt QEMU daemon admin socket.
Oct 09 15:58:15 compute-0 systemd[1]: Listening on libvirt QEMU daemon read-only socket.
Oct 09 15:58:15 compute-0 systemd[1]: Started Virtual Machine and Container Registration Service.
Oct 09 15:58:15 compute-0 systemd[1]: Starting libvirt QEMU daemon...
Oct 09 15:58:15 compute-0 systemd[1]: Started libvirt QEMU daemon.
Oct 09 15:58:15 compute-0 sudo[77440]: pam_unix(sudo:session): session closed for user root
Oct 09 15:58:15 compute-0 setroubleshoot[77266]: SELinux is preventing /usr/sbin/virtlogd from using the dac_read_search capability. For complete SELinux messages run: sealert -l 446be4c9-bb15-49e8-8fb1-a90d7d2c4442
Oct 09 15:58:15 compute-0 setroubleshoot[77266]: SELinux is preventing /usr/sbin/virtlogd from using the dac_read_search capability.
                                                 
                                                 *****  Plugin dac_override (91.4 confidence) suggests   **********************
                                                 
                                                 If you want to help identify if domain needs this access or you have a file with the wrong permissions on your system
                                                 Then turn on full auditing to get path information about the offending file and generate the error again.
                                                 Do
                                                 
                                                 Turn on full auditing
                                                 # auditctl -w /etc/shadow -p w
                                                 Try to recreate AVC. Then execute
                                                 # ausearch -m avc -ts recent
                                                 If you see PATH record check ownership/permissions on file, and fix it,
                                                 otherwise report as a bugzilla.
                                                 
                                                 *****  Plugin catchall (9.59 confidence) suggests   **************************
                                                 
                                                 If you believe that virtlogd should have the dac_read_search capability by default.
                                                 Then you should report this as a bug.
                                                 You can generate a local policy module to allow this access.
                                                 Do
                                                 allow this access for now by executing:
                                                 # ausearch -c 'virtlogd' --raw | audit2allow -M my-virtlogd
                                                 # semodule -X 300 -i my-virtlogd.pp
                                                 
Oct 09 15:58:15 compute-0 sudo[77659]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hnlmjgnfyhrrwgmxcnelyhyeaamwztvr ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760025495.4808092-2102-218589093001273/AnsiballZ_systemd.py'
Oct 09 15:58:15 compute-0 sudo[77659]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 09 15:58:16 compute-0 python3.9[77661]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True name=virtsecretd.service state=restarted daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Oct 09 15:58:16 compute-0 systemd[1]: Reloading.
Oct 09 15:58:16 compute-0 systemd-rc-local-generator[77687]: /etc/rc.d/rc.local is not marked executable, skipping.
Oct 09 15:58:16 compute-0 systemd-sysv-generator[77690]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct 09 15:58:16 compute-0 systemd[1]: Starting libvirt secret daemon socket...
Oct 09 15:58:16 compute-0 systemd[1]: Listening on libvirt secret daemon socket.
Oct 09 15:58:16 compute-0 systemd[1]: Starting libvirt secret daemon admin socket...
Oct 09 15:58:16 compute-0 systemd[1]: Starting libvirt secret daemon read-only socket...
Oct 09 15:58:16 compute-0 systemd[1]: Listening on libvirt secret daemon read-only socket.
Oct 09 15:58:16 compute-0 systemd[1]: Listening on libvirt secret daemon admin socket.
Oct 09 15:58:16 compute-0 systemd[1]: Starting libvirt secret daemon...
Oct 09 15:58:16 compute-0 systemd[1]: Started libvirt secret daemon.
Oct 09 15:58:16 compute-0 sudo[77659]: pam_unix(sudo:session): session closed for user root
Oct 09 15:58:17 compute-0 sudo[77869]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ajmaclqwzkpdccmgyohepoxkbdgqspsn ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760025496.9564493-2176-248874835196861/AnsiballZ_file.py'
Oct 09 15:58:17 compute-0 sudo[77869]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 09 15:58:17 compute-0 python3.9[77871]: ansible-ansible.builtin.file Invoked with mode=0755 path=/var/lib/openstack/config/ceph state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 09 15:58:17 compute-0 sudo[77869]: pam_unix(sudo:session): session closed for user root
Oct 09 15:58:17 compute-0 sudo[78021]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zvonvjxezzswwhoccnhpalfxwpwslvvg ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760025497.690972-2192-119899511644846/AnsiballZ_find.py'
Oct 09 15:58:17 compute-0 sudo[78021]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 09 15:58:18 compute-0 python3.9[78023]: ansible-ansible.builtin.find Invoked with paths=['/var/lib/openstack/config/ceph'] patterns=['*.conf'] read_whole_file=False file_type=file age_stamp=mtime recurse=False hidden=False follow=False get_checksum=False checksum_algorithm=sha1 use_regex=False exact_mode=True excludes=None contains=None age=None size=None depth=None mode=None encoding=None limit=None
Oct 09 15:58:18 compute-0 sudo[78021]: pam_unix(sudo:session): session closed for user root
Oct 09 15:58:18 compute-0 sudo[78173]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ixhtcaqjsulbgmipmbazgrymsnrxubwi ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760025498.5084004-2220-133078191539009/AnsiballZ_stat.py'
Oct 09 15:58:18 compute-0 sudo[78173]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 09 15:58:18 compute-0 python3.9[78175]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/firewall/libvirt.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 09 15:58:18 compute-0 rsyslogd[1282]: imjournal: journal files changed, reloading...  [v8.2506.0-2.el9 try https://www.rsyslog.com/e/0 ]
Oct 09 15:58:18 compute-0 sudo[78173]: pam_unix(sudo:session): session closed for user root
Oct 09 15:58:19 compute-0 sudo[78297]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jgefjrzooihragnvneggpnqrqpdcjmjw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760025498.5084004-2220-133078191539009/AnsiballZ_copy.py'
Oct 09 15:58:19 compute-0 sudo[78297]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 09 15:58:19 compute-0 python3.9[78299]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/edpm-config/firewall/libvirt.yaml mode=0640 src=/home/zuul/.ansible/tmp/ansible-tmp-1760025498.5084004-2220-133078191539009/.source.yaml follow=False _original_basename=firewall.yaml.j2 checksum=5ca83b1310a74c5e48c4c3d4640e1cb8fdac1061 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 09 15:58:19 compute-0 sudo[78297]: pam_unix(sudo:session): session closed for user root
Oct 09 15:58:20 compute-0 sudo[78449]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hmcljxxnbenluxqjcezhwvlgddtztrry ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760025499.7578611-2252-176603043505774/AnsiballZ_file.py'
Oct 09 15:58:20 compute-0 sudo[78449]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 09 15:58:20 compute-0 python3.9[78451]: ansible-ansible.builtin.file Invoked with group=root mode=0750 owner=root path=/var/lib/edpm-config/firewall state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 09 15:58:20 compute-0 sudo[78449]: pam_unix(sudo:session): session closed for user root
Oct 09 15:58:20 compute-0 sudo[78601]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kqilzosaadjifwvhkrfgoooxycpppubd ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760025500.400872-2268-151528103918984/AnsiballZ_stat.py'
Oct 09 15:58:20 compute-0 sudo[78601]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 09 15:58:20 compute-0 python3.9[78603]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/firewall/edpm-nftables-base.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 09 15:58:20 compute-0 sudo[78601]: pam_unix(sudo:session): session closed for user root
Oct 09 15:58:21 compute-0 sudo[78679]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-idwohseecznvswkpasvgsyefafshupfe ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760025500.400872-2268-151528103918984/AnsiballZ_file.py'
Oct 09 15:58:21 compute-0 sudo[78679]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 09 15:58:21 compute-0 python3.9[78681]: ansible-ansible.legacy.file Invoked with mode=0644 dest=/var/lib/edpm-config/firewall/edpm-nftables-base.yaml _original_basename=base-rules.yaml.j2 recurse=False state=file path=/var/lib/edpm-config/firewall/edpm-nftables-base.yaml force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 09 15:58:21 compute-0 sudo[78679]: pam_unix(sudo:session): session closed for user root
Oct 09 15:58:21 compute-0 sudo[78831]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kwyzhrtjzfegsghuzohooljptycuxbbt ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760025501.516242-2292-219688982326514/AnsiballZ_stat.py'
Oct 09 15:58:21 compute-0 sudo[78831]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 09 15:58:21 compute-0 python3.9[78833]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/firewall/edpm-nftables-user-rules.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 09 15:58:22 compute-0 sudo[78831]: pam_unix(sudo:session): session closed for user root
Oct 09 15:58:22 compute-0 sudo[78909]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zxbayikyeelrdvsoibhwuzchhexlmuvu ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760025501.516242-2292-219688982326514/AnsiballZ_file.py'
Oct 09 15:58:22 compute-0 sudo[78909]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 09 15:58:22 compute-0 python3.9[78911]: ansible-ansible.legacy.file Invoked with mode=0644 dest=/var/lib/edpm-config/firewall/edpm-nftables-user-rules.yaml _original_basename=.ee4yf2ko recurse=False state=file path=/var/lib/edpm-config/firewall/edpm-nftables-user-rules.yaml force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 09 15:58:22 compute-0 sudo[78909]: pam_unix(sudo:session): session closed for user root
Oct 09 15:58:22 compute-0 sudo[79061]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-pxixnefwijjdstrreiyrnxxlziorlwfs ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760025502.6321769-2316-125994403221806/AnsiballZ_stat.py'
Oct 09 15:58:22 compute-0 sudo[79061]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 09 15:58:23 compute-0 python3.9[79063]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/iptables.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 09 15:58:23 compute-0 sudo[79061]: pam_unix(sudo:session): session closed for user root
Oct 09 15:58:23 compute-0 sudo[79139]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wpvvapdniciuuarknsiylyqjeaoubujh ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760025502.6321769-2316-125994403221806/AnsiballZ_file.py'
Oct 09 15:58:23 compute-0 sudo[79139]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 09 15:58:23 compute-0 python3.9[79141]: ansible-ansible.legacy.file Invoked with group=root mode=0600 owner=root dest=/etc/nftables/iptables.nft _original_basename=iptables.nft recurse=False state=file path=/etc/nftables/iptables.nft force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 09 15:58:23 compute-0 sudo[79139]: pam_unix(sudo:session): session closed for user root
Oct 09 15:58:24 compute-0 sudo[79291]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tqllckanhoqxrsljmdruobfqrgudguyd ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760025503.829595-2342-70006395601053/AnsiballZ_command.py'
Oct 09 15:58:24 compute-0 sudo[79291]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 09 15:58:24 compute-0 python3.9[79293]: ansible-ansible.legacy.command Invoked with _raw_params=nft -j list ruleset _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 09 15:58:24 compute-0 sudo[79291]: pam_unix(sudo:session): session closed for user root
Oct 09 15:58:25 compute-0 sudo[79444]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vkexavkfgpyhanvhnsjsrkgntlkqtkpi ; /usr/bin/python3 /home/zuul/.ansible/tmp/ansible-tmp-1760025504.5796392-2358-1044802715683/AnsiballZ_edpm_nftables_from_files.py'
Oct 09 15:58:25 compute-0 sudo[79444]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 09 15:58:25 compute-0 python3[79446]: ansible-edpm_nftables_from_files Invoked with src=/var/lib/edpm-config/firewall
Oct 09 15:58:25 compute-0 sudo[79444]: pam_unix(sudo:session): session closed for user root
Oct 09 15:58:25 compute-0 sudo[79596]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vxaxmvzgtqvdvfklerupxxoxhnuzbsdx ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760025505.4201121-2374-103689138632145/AnsiballZ_stat.py'
Oct 09 15:58:25 compute-0 sudo[79596]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 09 15:58:25 compute-0 systemd[1]: dbus-:1.1-org.fedoraproject.SetroubleshootPrivileged@0.service: Deactivated successfully.
Oct 09 15:58:25 compute-0 systemd[1]: setroubleshootd.service: Deactivated successfully.
Oct 09 15:58:25 compute-0 python3.9[79598]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-jumps.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 09 15:58:25 compute-0 sudo[79596]: pam_unix(sudo:session): session closed for user root
Oct 09 15:58:26 compute-0 sudo[79674]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kygxovxwpfzfpdoettarsjbwoifvnitq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760025505.4201121-2374-103689138632145/AnsiballZ_file.py'
Oct 09 15:58:26 compute-0 sudo[79674]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 09 15:58:26 compute-0 python3.9[79676]: ansible-ansible.legacy.file Invoked with group=root mode=0600 owner=root dest=/etc/nftables/edpm-jumps.nft _original_basename=jump-chain.j2 recurse=False state=file path=/etc/nftables/edpm-jumps.nft force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 09 15:58:26 compute-0 sudo[79674]: pam_unix(sudo:session): session closed for user root
Oct 09 15:58:26 compute-0 sudo[79826]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-eqzqcvnlyfotkplzfmruyupwzkycajli ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760025506.5104325-2398-199857091626073/AnsiballZ_stat.py'
Oct 09 15:58:26 compute-0 sudo[79826]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 09 15:58:26 compute-0 python3.9[79828]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-update-jumps.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 09 15:58:27 compute-0 sudo[79826]: pam_unix(sudo:session): session closed for user root
Oct 09 15:58:27 compute-0 sudo[79904]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hxxyzomzebtbfjhfyltyofpnlbzyymwe ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760025506.5104325-2398-199857091626073/AnsiballZ_file.py'
Oct 09 15:58:27 compute-0 sudo[79904]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 09 15:58:27 compute-0 python3.9[79906]: ansible-ansible.legacy.file Invoked with group=root mode=0600 owner=root dest=/etc/nftables/edpm-update-jumps.nft _original_basename=jump-chain.j2 recurse=False state=file path=/etc/nftables/edpm-update-jumps.nft force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 09 15:58:27 compute-0 sudo[79904]: pam_unix(sudo:session): session closed for user root
Oct 09 15:58:27 compute-0 sudo[80056]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-sixoecpspupvurfdumsjscirenvjvzih ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760025507.6549015-2422-16653114350677/AnsiballZ_stat.py'
Oct 09 15:58:27 compute-0 sudo[80056]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 09 15:58:28 compute-0 python3.9[80058]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-flushes.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 09 15:58:28 compute-0 sudo[80056]: pam_unix(sudo:session): session closed for user root
Oct 09 15:58:28 compute-0 sudo[80134]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bsjhmyixedxcvmbbuwtsgwhwcmgvkbxm ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760025507.6549015-2422-16653114350677/AnsiballZ_file.py'
Oct 09 15:58:28 compute-0 sudo[80134]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 09 15:58:28 compute-0 python3.9[80136]: ansible-ansible.legacy.file Invoked with group=root mode=0600 owner=root dest=/etc/nftables/edpm-flushes.nft _original_basename=flush-chain.j2 recurse=False state=file path=/etc/nftables/edpm-flushes.nft force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 09 15:58:28 compute-0 sudo[80134]: pam_unix(sudo:session): session closed for user root
Oct 09 15:58:28 compute-0 sudo[80286]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-oughzhyrzdesmiwbosnrjeujimpeqwfw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760025508.7150419-2446-213365642228128/AnsiballZ_stat.py'
Oct 09 15:58:28 compute-0 sudo[80286]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 09 15:58:29 compute-0 python3.9[80288]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-chains.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 09 15:58:29 compute-0 sudo[80286]: pam_unix(sudo:session): session closed for user root
Oct 09 15:58:29 compute-0 sudo[80364]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-psgkjheqatwgklzjqagphrnbhnmioawf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760025508.7150419-2446-213365642228128/AnsiballZ_file.py'
Oct 09 15:58:29 compute-0 sudo[80364]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 09 15:58:29 compute-0 python3.9[80366]: ansible-ansible.legacy.file Invoked with group=root mode=0600 owner=root dest=/etc/nftables/edpm-chains.nft _original_basename=chains.j2 recurse=False state=file path=/etc/nftables/edpm-chains.nft force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 09 15:58:29 compute-0 sudo[80364]: pam_unix(sudo:session): session closed for user root
Oct 09 15:58:30 compute-0 sudo[80516]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jodidckqgbccsdasqtcmjilxkvgrzcbx ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760025509.8587208-2470-212425766407961/AnsiballZ_stat.py'
Oct 09 15:58:30 compute-0 sudo[80516]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 09 15:58:30 compute-0 python3.9[80518]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-rules.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 09 15:58:30 compute-0 sudo[80516]: pam_unix(sudo:session): session closed for user root
Oct 09 15:58:30 compute-0 sudo[80641]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vdlwkgjoapsxmewlfggmftobupfeuuok ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760025509.8587208-2470-212425766407961/AnsiballZ_copy.py'
Oct 09 15:58:30 compute-0 sudo[80641]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 09 15:58:31 compute-0 python3.9[80643]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/edpm-rules.nft group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1760025509.8587208-2470-212425766407961/.source.nft follow=False _original_basename=ruleset.j2 checksum=8a12d4eb5149b6e500230381c1359a710881e9b0 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 09 15:58:31 compute-0 sudo[80641]: pam_unix(sudo:session): session closed for user root
Oct 09 15:58:31 compute-0 sudo[80793]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-iydmmzftlrkudmmatldbuwvnlatiyyhi ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760025511.3798323-2500-193346448862883/AnsiballZ_file.py'
Oct 09 15:58:31 compute-0 sudo[80793]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 09 15:58:31 compute-0 python3.9[80795]: ansible-ansible.builtin.file Invoked with group=root mode=0600 owner=root path=/etc/nftables/edpm-rules.nft.changed state=touch recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 09 15:58:31 compute-0 sudo[80793]: pam_unix(sudo:session): session closed for user root
Oct 09 15:58:32 compute-0 sudo[80945]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-favparvcslplqghvtxhpgvfutlvrlsor ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760025512.030448-2516-104594940228387/AnsiballZ_command.py'
Oct 09 15:58:32 compute-0 sudo[80945]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 09 15:58:32 compute-0 python3.9[80947]: ansible-ansible.legacy.command Invoked with _raw_params=set -o pipefail; cat /etc/nftables/edpm-chains.nft /etc/nftables/edpm-flushes.nft /etc/nftables/edpm-rules.nft /etc/nftables/edpm-update-jumps.nft /etc/nftables/edpm-jumps.nft | nft -c -f - _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 09 15:58:32 compute-0 sudo[80945]: pam_unix(sudo:session): session closed for user root
Oct 09 15:58:33 compute-0 sudo[81100]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ujppvsehptqirpqmhdixqhhumyxznxcx ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760025512.7645872-2532-118831196615352/AnsiballZ_blockinfile.py'
Oct 09 15:58:33 compute-0 sudo[81100]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 09 15:58:33 compute-0 python3.9[81102]: ansible-ansible.builtin.blockinfile Invoked with backup=False block=include "/etc/nftables/iptables.nft"
                                            include "/etc/nftables/edpm-chains.nft"
                                            include "/etc/nftables/edpm-rules.nft"
                                            include "/etc/nftables/edpm-jumps.nft"
                                             path=/etc/sysconfig/nftables.conf validate=nft -c -f %s state=present marker=# {mark} ANSIBLE MANAGED BLOCK create=False marker_begin=BEGIN marker_end=END append_newline=False prepend_newline=False unsafe_writes=False insertafter=None insertbefore=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 09 15:58:33 compute-0 sudo[81100]: pam_unix(sudo:session): session closed for user root
Oct 09 15:58:33 compute-0 sudo[81252]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zgecmekhbcyzrcsebkfiahelkqfjjrjc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760025513.7183342-2550-235572431882700/AnsiballZ_command.py'
Oct 09 15:58:33 compute-0 sudo[81252]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 09 15:58:34 compute-0 python3.9[81254]: ansible-ansible.legacy.command Invoked with _raw_params=nft -f /etc/nftables/edpm-chains.nft _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 09 15:58:34 compute-0 sudo[81252]: pam_unix(sudo:session): session closed for user root
Oct 09 15:58:34 compute-0 sudo[81405]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fvruzlsgpkmbbpjgriqwumwtxrwsrvhc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760025514.4863832-2566-168878762067638/AnsiballZ_stat.py'
Oct 09 15:58:34 compute-0 sudo[81405]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 09 15:58:34 compute-0 python3.9[81407]: ansible-ansible.builtin.stat Invoked with path=/etc/nftables/edpm-rules.nft.changed follow=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Oct 09 15:58:34 compute-0 sudo[81405]: pam_unix(sudo:session): session closed for user root
Oct 09 15:58:35 compute-0 ovn_metadata_agent[28608]: 2025-10-09 15:58:35.259 28613 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:405
Oct 09 15:58:35 compute-0 ovn_metadata_agent[28608]: 2025-10-09 15:58:35.260 28613 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:410
Oct 09 15:58:35 compute-0 ovn_metadata_agent[28608]: 2025-10-09 15:58:35.260 28613 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:424
Oct 09 15:58:35 compute-0 sudo[81560]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-njjajvrtagddrjnagxluynkrcaldqeop ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760025515.1358452-2582-53587666734563/AnsiballZ_command.py'
Oct 09 15:58:35 compute-0 sudo[81560]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 09 15:58:35 compute-0 python3.9[81562]: ansible-ansible.legacy.command Invoked with _raw_params=set -o pipefail; cat /etc/nftables/edpm-flushes.nft /etc/nftables/edpm-rules.nft /etc/nftables/edpm-update-jumps.nft | nft -f - _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 09 15:58:35 compute-0 sudo[81560]: pam_unix(sudo:session): session closed for user root
Oct 09 15:58:35 compute-0 podman[81590]: 2025-10-09 15:58:35.827928622 +0000 UTC m=+0.057092841 container health_status 0e966c7a283689daf23c5db5017f23f685ee836319240188a9f70ddd246563c5 (image=38.102.83.66:5001/podified-master-centos10/openstack-neutron-metadata-agent-ovn:watcher_latest, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.4, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=watcher_latest, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': '38.102.83.66:5001/podified-master-centos10/openstack-neutron-metadata-agent-ovn:watcher_latest', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, 
managed_by=edpm_ansible, org.label-schema.build-date=20251007, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, container_name=ovn_metadata_agent, org.label-schema.name=CentOS Stream 10 Base Image)
Oct 09 15:58:36 compute-0 sudo[81734]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-nhphijagketkcrrxhoqbgefdsybrfjdp ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760025515.8384717-2598-71569974193359/AnsiballZ_file.py'
Oct 09 15:58:36 compute-0 sudo[81734]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 09 15:58:36 compute-0 python3.9[81736]: ansible-ansible.builtin.file Invoked with path=/etc/nftables/edpm-rules.nft.changed state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 09 15:58:36 compute-0 sudo[81734]: pam_unix(sudo:session): session closed for user root
Oct 09 15:58:36 compute-0 sudo[81886]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qouflokugdlcjyskfdkjcicrkiyswtbk ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760025516.6075575-2614-2423023504799/AnsiballZ_stat.py'
Oct 09 15:58:36 compute-0 sudo[81886]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 09 15:58:37 compute-0 python3.9[81888]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/edpm_libvirt.target follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 09 15:58:37 compute-0 sudo[81886]: pam_unix(sudo:session): session closed for user root
Oct 09 15:58:37 compute-0 sudo[82009]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-cbcplbwnwivxnqhtobvxjoclwnncinlo ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760025516.6075575-2614-2423023504799/AnsiballZ_copy.py'
Oct 09 15:58:37 compute-0 sudo[82009]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 09 15:58:37 compute-0 python3.9[82011]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/edpm_libvirt.target mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1760025516.6075575-2614-2423023504799/.source.target follow=False _original_basename=edpm_libvirt.target checksum=13035a1aa0f414c677b14be9a5a363b6623d393c backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 09 15:58:37 compute-0 sudo[82009]: pam_unix(sudo:session): session closed for user root
Oct 09 15:58:38 compute-0 sudo[82161]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-cqjiwfkkoywyikmuvuubgjgmxypeehiq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760025517.9535997-2644-64541673537428/AnsiballZ_stat.py'
Oct 09 15:58:38 compute-0 sudo[82161]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 09 15:58:38 compute-0 python3.9[82163]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/edpm_libvirt_guests.service follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 09 15:58:38 compute-0 sudo[82161]: pam_unix(sudo:session): session closed for user root
Oct 09 15:58:38 compute-0 sudo[82284]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-szxcvaksvpvwlguqhajjqbvythonkkyx ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760025517.9535997-2644-64541673537428/AnsiballZ_copy.py'
Oct 09 15:58:38 compute-0 sudo[82284]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 09 15:58:39 compute-0 python3.9[82286]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/edpm_libvirt_guests.service mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1760025517.9535997-2644-64541673537428/.source.service follow=False _original_basename=edpm_libvirt_guests.service checksum=db83430a42fc2ccfd6ed8b56ebf04f3dff9cd0cf backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 09 15:58:39 compute-0 sudo[82284]: pam_unix(sudo:session): session closed for user root
Oct 09 15:58:39 compute-0 sudo[82436]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wxfowdwdtxstsvlpuwahamekougdxfah ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760025519.2037897-2674-179858854214707/AnsiballZ_stat.py'
Oct 09 15:58:39 compute-0 sudo[82436]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 09 15:58:39 compute-0 python3.9[82438]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virt-guest-shutdown.target follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 09 15:58:39 compute-0 sudo[82436]: pam_unix(sudo:session): session closed for user root
Oct 09 15:58:40 compute-0 sudo[82559]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kaxnkrftbxtjisqorasfeyaxzsxcbinf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760025519.2037897-2674-179858854214707/AnsiballZ_copy.py'
Oct 09 15:58:40 compute-0 sudo[82559]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 09 15:58:40 compute-0 python3.9[82561]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virt-guest-shutdown.target mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1760025519.2037897-2674-179858854214707/.source.target follow=False _original_basename=virt-guest-shutdown.target checksum=49ca149619c596cbba877418629d2cf8f7b0f5cf backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 09 15:58:40 compute-0 sudo[82559]: pam_unix(sudo:session): session closed for user root
Oct 09 15:58:40 compute-0 sudo[82711]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tvttqayqudgpkkdgpvvzxkngzwnscdol ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760025520.4916203-2704-194966013748546/AnsiballZ_systemd.py'
Oct 09 15:58:40 compute-0 sudo[82711]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 09 15:58:41 compute-0 python3.9[82713]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True enabled=True name=edpm_libvirt.target state=restarted daemon_reexec=False scope=system no_block=False force=None masked=None
Oct 09 15:58:41 compute-0 systemd[1]: Reloading.
Oct 09 15:58:41 compute-0 systemd-rc-local-generator[82738]: /etc/rc.d/rc.local is not marked executable, skipping.
Oct 09 15:58:41 compute-0 systemd-sysv-generator[82744]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct 09 15:58:41 compute-0 systemd[1]: Reached target edpm_libvirt.target.
Oct 09 15:58:41 compute-0 sudo[82711]: pam_unix(sudo:session): session closed for user root
Oct 09 15:58:41 compute-0 sudo[82902]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-omjkbzgpeigzdgplcqyhscvvrmpueacj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760025521.6413424-2720-116549745335870/AnsiballZ_systemd.py'
Oct 09 15:58:41 compute-0 sudo[82902]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 09 15:58:42 compute-0 python3.9[82904]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True enabled=True name=edpm_libvirt_guests daemon_reexec=False scope=system no_block=False state=None force=None masked=None
Oct 09 15:58:42 compute-0 systemd[1]: Reloading.
Oct 09 15:58:42 compute-0 systemd-rc-local-generator[82928]: /etc/rc.d/rc.local is not marked executable, skipping.
Oct 09 15:58:42 compute-0 systemd-sysv-generator[82934]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct 09 15:58:42 compute-0 systemd[1]: Reloading.
Oct 09 15:58:42 compute-0 systemd-rc-local-generator[82966]: /etc/rc.d/rc.local is not marked executable, skipping.
Oct 09 15:58:42 compute-0 systemd-sysv-generator[82969]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct 09 15:58:42 compute-0 sudo[82902]: pam_unix(sudo:session): session closed for user root
Oct 09 15:58:43 compute-0 sshd-session[28735]: Connection closed by 192.168.122.30 port 35488
Oct 09 15:58:43 compute-0 sshd-session[28732]: pam_unix(sshd:session): session closed for user zuul
Oct 09 15:58:43 compute-0 systemd[1]: session-7.scope: Deactivated successfully.
Oct 09 15:58:43 compute-0 systemd[1]: session-7.scope: Consumed 3min 11.970s CPU time.
Oct 09 15:58:43 compute-0 systemd-logind[841]: Session 7 logged out. Waiting for processes to exit.
Oct 09 15:58:43 compute-0 systemd-logind[841]: Removed session 7.
Oct 09 15:58:43 compute-0 podman[83001]: 2025-10-09 15:58:43.377331276 +0000 UTC m=+0.088244967 container health_status d615e4f24ff62fbb14be4a63e6da568ec5b22309773c1c66103b6b4de64a8a2f (image=38.102.83.66:5001/podified-master-centos10/openstack-ovn-controller:watcher_latest, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, tcib_build_tag=watcher_latest, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.41.4, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': '38.102.83.66:5001/podified-master-centos10/openstack-ovn-controller:watcher_latest', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.build-date=20251007, org.label-schema.license=GPLv2, container_name=ovn_controller)
Oct 09 15:58:48 compute-0 sshd-session[83027]: Accepted publickey for zuul from 192.168.122.30 port 54682 ssh2: ECDSA SHA256:2Vdz7kVNDZnmAnEBdeIC9De7MGoQwU7bxSCyJABiYXo
Oct 09 15:58:48 compute-0 systemd-logind[841]: New session 8 of user zuul.
Oct 09 15:58:48 compute-0 systemd[1]: Started Session 8 of User zuul.
Oct 09 15:58:48 compute-0 sshd-session[83027]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Oct 09 15:58:49 compute-0 python3.9[83180]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Oct 09 15:58:50 compute-0 sudo[83334]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kexifznrrcdvwnqsfrfrndkttqxhqdgv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760025529.7538283-48-279247389663265/AnsiballZ_file.py'
Oct 09 15:58:50 compute-0 sudo[83334]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 09 15:58:50 compute-0 python3.9[83336]: ansible-ansible.builtin.file Invoked with mode=0755 path=/etc/iscsi setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Oct 09 15:58:50 compute-0 sudo[83334]: pam_unix(sudo:session): session closed for user root
Oct 09 15:58:50 compute-0 sudo[83486]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-opdzajfxrrzralwalthiqfaouyhdvpap ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760025530.6031156-48-71968136289574/AnsiballZ_file.py'
Oct 09 15:58:50 compute-0 sudo[83486]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 09 15:58:51 compute-0 python3.9[83488]: ansible-ansible.builtin.file Invoked with path=/etc/target setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Oct 09 15:58:51 compute-0 sudo[83486]: pam_unix(sudo:session): session closed for user root
Oct 09 15:58:51 compute-0 sudo[83638]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gcsoykbkfwvclxmgnatqalbetpeoxbbf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760025531.2352686-48-88405845800879/AnsiballZ_file.py'
Oct 09 15:58:51 compute-0 sudo[83638]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 09 15:58:51 compute-0 python3.9[83640]: ansible-ansible.builtin.file Invoked with path=/var/lib/iscsi setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Oct 09 15:58:51 compute-0 sudo[83638]: pam_unix(sudo:session): session closed for user root
Oct 09 15:58:52 compute-0 sudo[83790]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bgwchpnopthgovjimphhvyidxrzmtgid ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760025531.8537986-48-112755287987963/AnsiballZ_file.py'
Oct 09 15:58:52 compute-0 sudo[83790]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 09 15:58:52 compute-0 python3.9[83792]: ansible-ansible.builtin.file Invoked with mode=0755 path=/var/lib/config-data selevel=s0 setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None attributes=None
Oct 09 15:58:52 compute-0 sudo[83790]: pam_unix(sudo:session): session closed for user root
Oct 09 15:58:52 compute-0 sudo[83942]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-agiieieyjqtasyhfhkpiswuejdciihdi ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760025532.531854-48-265241058952520/AnsiballZ_file.py'
Oct 09 15:58:52 compute-0 sudo[83942]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 09 15:58:53 compute-0 python3.9[83944]: ansible-ansible.builtin.file Invoked with mode=0755 path=/var/lib/config-data/ansible-generated/iscsid setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Oct 09 15:58:53 compute-0 sudo[83942]: pam_unix(sudo:session): session closed for user root
Oct 09 15:58:53 compute-0 sudo[84094]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jtpdijdmhdkkwzmmkaylairogrhdwtph ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760025533.2016494-120-66831909185073/AnsiballZ_stat.py'
Oct 09 15:58:53 compute-0 sudo[84094]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 09 15:58:53 compute-0 python3.9[84096]: ansible-ansible.builtin.stat Invoked with path=/lib/systemd/system/iscsid.socket follow=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Oct 09 15:58:53 compute-0 sudo[84094]: pam_unix(sudo:session): session closed for user root
Oct 09 15:58:54 compute-0 sudo[84248]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fxsgrolnlvdmlogjdcgevsbyglyyhkie ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760025534.0525403-136-68279686497233/AnsiballZ_systemd.py'
Oct 09 15:58:54 compute-0 sudo[84248]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 09 15:58:55 compute-0 python3.9[84250]: ansible-ansible.builtin.systemd Invoked with enabled=False name=iscsid.socket state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Oct 09 15:58:55 compute-0 systemd[1]: Reloading.
Oct 09 15:58:55 compute-0 systemd-rc-local-generator[84275]: /etc/rc.d/rc.local is not marked executable, skipping.
Oct 09 15:58:55 compute-0 systemd-sysv-generator[84280]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct 09 15:58:55 compute-0 sudo[84248]: pam_unix(sudo:session): session closed for user root
Oct 09 15:58:56 compute-0 sudo[84437]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hxhyjxjchloiqeqajefjoykuqbghkmdb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760025535.5965574-152-30661835305491/AnsiballZ_service_facts.py'
Oct 09 15:58:56 compute-0 sudo[84437]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 09 15:58:56 compute-0 python3.9[84439]: ansible-ansible.builtin.service_facts Invoked
Oct 09 15:58:56 compute-0 network[84456]: You are using 'network' service provided by 'network-scripts', which are now deprecated.
Oct 09 15:58:56 compute-0 network[84457]: 'network-scripts' will be removed from distribution in near future.
Oct 09 15:58:56 compute-0 network[84458]: It is advised to switch to 'NetworkManager' instead for network management.
Oct 09 15:59:01 compute-0 sudo[84437]: pam_unix(sudo:session): session closed for user root
Oct 09 15:59:01 compute-0 sudo[84729]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vsyfaqykgkdwvktgpjewnaehivndykqx ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760025541.5322247-168-213353358028263/AnsiballZ_systemd.py'
Oct 09 15:59:01 compute-0 sudo[84729]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 09 15:59:02 compute-0 python3.9[84731]: ansible-ansible.builtin.systemd Invoked with enabled=False name=iscsi-starter.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Oct 09 15:59:02 compute-0 systemd[1]: Reloading.
Oct 09 15:59:02 compute-0 systemd-sysv-generator[84761]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct 09 15:59:02 compute-0 systemd-rc-local-generator[84756]: /etc/rc.d/rc.local is not marked executable, skipping.
Oct 09 15:59:02 compute-0 sudo[84729]: pam_unix(sudo:session): session closed for user root
Oct 09 15:59:03 compute-0 python3.9[84918]: ansible-ansible.builtin.stat Invoked with path=/etc/iscsi/.initiator_reset follow=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Oct 09 15:59:03 compute-0 sudo[85068]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mbkqghqbjegpglllbzsmrswjfuhmchmb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760025543.4350073-202-195603893347040/AnsiballZ_podman_container.py'
Oct 09 15:59:03 compute-0 sudo[85068]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 09 15:59:04 compute-0 python3.9[85070]: ansible-containers.podman.podman_container Invoked with command=/usr/sbin/iscsi-iname detach=False image=38.102.83.66:5001/podified-master-centos10/openstack-iscsid:watcher_latest name=iscsid_config rm=True tty=True executable=podman state=started debug=False force_restart=False force_delete=True generate_systemd={} image_strict=False recreate=False annotation=None arch=None attach=None authfile=None blkio_weight=None blkio_weight_device=None cap_add=None cap_drop=None cgroup_conf=None cgroup_parent=None cgroupns=None cgroups=None chrootdirs=None cidfile=None cmd_args=None conmon_pidfile=None cpu_period=None cpu_quota=None cpu_rt_period=None cpu_rt_runtime=None cpu_shares=None cpus=None cpuset_cpus=None cpuset_mems=None decryption_key=None delete_depend=None delete_time=None delete_volumes=None detach_keys=None device=None device_cgroup_rule=None device_read_bps=None device_read_iops=None device_write_bps=None device_write_iops=None dns=None dns_option=None dns_search=None entrypoint=None env=None env_file=None env_host=None env_merge=None etc_hosts=None expose=None gidmap=None gpus=None group_add=None group_entry=None healthcheck=None healthcheck_interval=None healthcheck_retries=None healthcheck_start_period=None health_startup_cmd=None health_startup_interval=None health_startup_retries=None health_startup_success=None health_startup_timeout=None healthcheck_timeout=None healthcheck_failure_action=None hooks_dir=None hostname=None hostuser=None http_proxy=None image_volume=None init=None init_ctr=None init_path=None interactive=None ip=None ip6=None ipc=None kernel_memory=None label=None label_file=None log_driver=None log_level=None log_opt=None mac_address=None memory=None memory_reservation=None memory_swap=None memory_swappiness=None mount=None network=None network_aliases=None no_healthcheck=None no_hosts=None oom_kill_disable=None oom_score_adj=None os=None passwd=None passwd_entry=None personality=None pid=None 
pid_file=None pids_limit=None platform=None pod=None pod_id_file=None preserve_fd=None preserve_fds=None privileged=None publish=None publish_all=None pull=None quadlet_dir=None quadlet_filename=None quadlet_file_mode=None quadlet_options=None rdt_class=None read_only=None read_only_tmpfs=None requires=None restart_policy=None restart_time=None retry=None retry_delay=None rmi=None rootfs=None seccomp_policy=None secrets=NOT_LOGGING_PARAMETER sdnotify=None security_opt=None shm_size=None shm_size_systemd=None sig_proxy=None stop_signal=None stop_timeout=None stop_time=None subgidname=None subuidname=None sysctl=None systemd=None timeout=None timezone=None tls_verify=None tmpfs=None uidmap=None ulimit=None umask=None unsetenv=None unsetenv_all=None user=None userns=None uts=None variant=None volume=None volumes_from=None workdir=None
Oct 09 15:59:04 compute-0 podman[85104]: 2025-10-09 15:59:04.431545083 +0000 UTC m=+0.071602941 container create 8bd3bce4b9d45ad6492219fce820e7d87ab681f70dac3de1993b8dc03534b069 (image=38.102.83.66:5001/podified-master-centos10/openstack-iscsid:watcher_latest, name=iscsid_config, org.label-schema.build-date=20251007, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.41.4, org.label-schema.name=CentOS Stream 10 Base Image, tcib_build_tag=watcher_latest, tcib_managed=true, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team)
Oct 09 15:59:04 compute-0 kernel: podman0: port 1(veth0) entered blocking state
Oct 09 15:59:04 compute-0 kernel: podman0: port 1(veth0) entered disabled state
Oct 09 15:59:04 compute-0 kernel: veth0: entered allmulticast mode
Oct 09 15:59:04 compute-0 kernel: veth0: entered promiscuous mode
Oct 09 15:59:04 compute-0 kernel: podman0: port 1(veth0) entered blocking state
Oct 09 15:59:04 compute-0 kernel: podman0: port 1(veth0) entered forwarding state
Oct 09 15:59:04 compute-0 podman[85104]: 2025-10-09 15:59:04.381284834 +0000 UTC m=+0.021342692 image pull 350c9f95f5081e17dae32f2e34a10922b473e3eeecb503bbb4233dbe58df2c45 38.102.83.66:5001/podified-master-centos10/openstack-iscsid:watcher_latest
Oct 09 15:59:04 compute-0 rsyslogd[1282]: imjournal: journal files changed, reloading...  [v8.2506.0-2.el9 try https://www.rsyslog.com/e/0 ]
Oct 09 15:59:04 compute-0 NetworkManager[1028]: <info>  [1760025544.4833] manager: (podman0): new Bridge device (/org/freedesktop/NetworkManager/Devices/20)
Oct 09 15:59:04 compute-0 rsyslogd[1282]: imjournal: journal files changed, reloading...  [v8.2506.0-2.el9 try https://www.rsyslog.com/e/0 ]
Oct 09 15:59:04 compute-0 NetworkManager[1028]: <info>  [1760025544.4979] manager: (veth0): new Veth device (/org/freedesktop/NetworkManager/Devices/21)
Oct 09 15:59:04 compute-0 rsyslogd[1282]: imjournal: journal files changed, reloading...  [v8.2506.0-2.el9 try https://www.rsyslog.com/e/0 ]
Oct 09 15:59:04 compute-0 NetworkManager[1028]: <info>  [1760025544.5004] device (veth0): carrier: link connected
Oct 09 15:59:04 compute-0 NetworkManager[1028]: <info>  [1760025544.5007] device (podman0): carrier: link connected
Oct 09 15:59:04 compute-0 systemd-udevd[85131]: Network interface NamePolicy= disabled on kernel command line.
Oct 09 15:59:04 compute-0 systemd-udevd[85134]: Network interface NamePolicy= disabled on kernel command line.
Oct 09 15:59:04 compute-0 NetworkManager[1028]: <info>  [1760025544.5315] device (podman0): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Oct 09 15:59:04 compute-0 NetworkManager[1028]: <info>  [1760025544.5325] device (podman0): state change: unavailable -> disconnected (reason 'connection-assumed', managed-type: 'external')
Oct 09 15:59:04 compute-0 NetworkManager[1028]: <info>  [1760025544.5335] device (podman0): Activation: starting connection 'podman0' (07435d8c-e68d-4650-b0f0-bd63d0efff78)
Oct 09 15:59:04 compute-0 NetworkManager[1028]: <info>  [1760025544.5337] device (podman0): state change: disconnected -> prepare (reason 'none', managed-type: 'external')
Oct 09 15:59:04 compute-0 NetworkManager[1028]: <info>  [1760025544.5339] device (podman0): state change: prepare -> config (reason 'none', managed-type: 'external')
Oct 09 15:59:04 compute-0 NetworkManager[1028]: <info>  [1760025544.5341] device (podman0): state change: config -> ip-config (reason 'none', managed-type: 'external')
Oct 09 15:59:04 compute-0 NetworkManager[1028]: <info>  [1760025544.5344] device (podman0): state change: ip-config -> ip-check (reason 'none', managed-type: 'external')
Oct 09 15:59:04 compute-0 kernel: Warning: Deprecated Driver is detected: nft_compat will not be maintained in a future major release and may be disabled
Oct 09 15:59:04 compute-0 kernel: Warning: Deprecated Driver is detected: nft_compat_module_init will not be maintained in a future major release and may be disabled
Oct 09 15:59:04 compute-0 systemd[1]: Starting Network Manager Script Dispatcher Service...
Oct 09 15:59:04 compute-0 systemd[1]: Started Network Manager Script Dispatcher Service.
Oct 09 15:59:04 compute-0 NetworkManager[1028]: <info>  [1760025544.5680] device (podman0): state change: ip-check -> secondaries (reason 'none', managed-type: 'external')
Oct 09 15:59:04 compute-0 NetworkManager[1028]: <info>  [1760025544.5684] device (podman0): state change: secondaries -> activated (reason 'none', managed-type: 'external')
Oct 09 15:59:04 compute-0 NetworkManager[1028]: <info>  [1760025544.5692] device (podman0): Activation: successful, device activated.
Oct 09 15:59:04 compute-0 systemd[1]: iscsi.service: Unit cannot be reloaded because it is inactive.
Oct 09 15:59:04 compute-0 systemd[1]: Started libpod-conmon-8bd3bce4b9d45ad6492219fce820e7d87ab681f70dac3de1993b8dc03534b069.scope.
Oct 09 15:59:04 compute-0 systemd[1]: Started libcrun container.
Oct 09 15:59:04 compute-0 podman[85104]: 2025-10-09 15:59:04.929358727 +0000 UTC m=+0.569416585 container init 8bd3bce4b9d45ad6492219fce820e7d87ab681f70dac3de1993b8dc03534b069 (image=38.102.83.66:5001/podified-master-centos10/openstack-iscsid:watcher_latest, name=iscsid_config, org.label-schema.schema-version=1.0, tcib_build_tag=watcher_latest, io.buildah.version=1.41.4, tcib_managed=true, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251007, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 10 Base Image)
Oct 09 15:59:04 compute-0 podman[85104]: 2025-10-09 15:59:04.940343367 +0000 UTC m=+0.580401225 container start 8bd3bce4b9d45ad6492219fce820e7d87ab681f70dac3de1993b8dc03534b069 (image=38.102.83.66:5001/podified-master-centos10/openstack-iscsid:watcher_latest, name=iscsid_config, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=watcher_latest, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.4, tcib_managed=true, org.label-schema.build-date=20251007)
Oct 09 15:59:04 compute-0 iscsid_config[85271]: iqn.1994-05.com.redhat:e0e4bf961d5d
Oct 09 15:59:04 compute-0 systemd[1]: libpod-8bd3bce4b9d45ad6492219fce820e7d87ab681f70dac3de1993b8dc03534b069.scope: Deactivated successfully.
Oct 09 15:59:04 compute-0 podman[85104]: 2025-10-09 15:59:04.960748079 +0000 UTC m=+0.600805967 container attach 8bd3bce4b9d45ad6492219fce820e7d87ab681f70dac3de1993b8dc03534b069 (image=38.102.83.66:5001/podified-master-centos10/openstack-iscsid:watcher_latest, name=iscsid_config, org.label-schema.build-date=20251007, org.label-schema.name=CentOS Stream 10 Base Image, tcib_build_tag=watcher_latest, io.buildah.version=1.41.4, tcib_managed=true, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team)
Oct 09 15:59:04 compute-0 podman[85104]: 2025-10-09 15:59:04.962602477 +0000 UTC m=+0.602660335 container died 8bd3bce4b9d45ad6492219fce820e7d87ab681f70dac3de1993b8dc03534b069 (image=38.102.83.66:5001/podified-master-centos10/openstack-iscsid:watcher_latest, name=iscsid_config, org.label-schema.build-date=20251007, tcib_managed=true, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=watcher_latest, io.buildah.version=1.41.4, maintainer=OpenStack Kubernetes Operator team)
Oct 09 15:59:05 compute-0 kernel: podman0: port 1(veth0) entered disabled state
Oct 09 15:59:05 compute-0 kernel: veth0 (unregistering): left allmulticast mode
Oct 09 15:59:05 compute-0 kernel: veth0 (unregistering): left promiscuous mode
Oct 09 15:59:05 compute-0 kernel: podman0: port 1(veth0) entered disabled state
Oct 09 15:59:05 compute-0 NetworkManager[1028]: <info>  [1760025545.0677] device (podman0): state change: activated -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Oct 09 15:59:05 compute-0 systemd[1]: run-netns-netns\x2d05d7754b\x2dcbf1\x2da810\x2df680\x2d3361a3d1c1fc.mount: Deactivated successfully.
Oct 09 15:59:05 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-8bd3bce4b9d45ad6492219fce820e7d87ab681f70dac3de1993b8dc03534b069-userdata-shm.mount: Deactivated successfully.
Oct 09 15:59:05 compute-0 systemd[1]: var-lib-containers-storage-overlay-b555f9fe8f619ae91520f750395bb883ca102036903e576d0b822505e81ae65e-merged.mount: Deactivated successfully.
Oct 09 15:59:05 compute-0 podman[85104]: 2025-10-09 15:59:05.496811359 +0000 UTC m=+1.136869217 container remove 8bd3bce4b9d45ad6492219fce820e7d87ab681f70dac3de1993b8dc03534b069 (image=38.102.83.66:5001/podified-master-centos10/openstack-iscsid:watcher_latest, name=iscsid_config, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251007, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.4, tcib_managed=true, org.label-schema.name=CentOS Stream 10 Base Image, tcib_build_tag=watcher_latest)
Oct 09 15:59:05 compute-0 systemd[1]: libpod-conmon-8bd3bce4b9d45ad6492219fce820e7d87ab681f70dac3de1993b8dc03534b069.scope: Deactivated successfully.
Oct 09 15:59:05 compute-0 python3.9[85070]: ansible-containers.podman.podman_container PODMAN-CONTAINER-DEBUG: podman run --name iscsid_config --detach=False --rm --tty=True 38.102.83.66:5001/podified-master-centos10/openstack-iscsid:watcher_latest /usr/sbin/iscsi-iname
Oct 09 15:59:05 compute-0 python3.9[85070]: ansible-containers.podman.podman_container PODMAN-CONTAINER-DEBUG: Error generating systemd: 
                                            DEPRECATED command:
                                            It is recommended to use Quadlets for running containers and pods under systemd.
                                            
                                            Please refer to podman-systemd.unit(5) for details.
                                            Error: iscsid_config does not refer to a container or pod: no pod with name or ID iscsid_config found: no such pod: no container with name or ID "iscsid_config" found: no such container
Oct 09 15:59:05 compute-0 sudo[85068]: pam_unix(sudo:session): session closed for user root
Oct 09 15:59:06 compute-0 sudo[85525]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-panvopmcnailwfjryeryyvkxayuizvhv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760025545.8117406-218-174082587644706/AnsiballZ_stat.py'
Oct 09 15:59:06 compute-0 sudo[85525]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 09 15:59:06 compute-0 podman[85487]: 2025-10-09 15:59:06.122646961 +0000 UTC m=+0.066995628 container health_status 0e966c7a283689daf23c5db5017f23f685ee836319240188a9f70ddd246563c5 (image=38.102.83.66:5001/podified-master-centos10/openstack-neutron-metadata-agent-ovn:watcher_latest, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': '38.102.83.66:5001/podified-master-centos10/openstack-neutron-metadata-agent-ovn:watcher_latest', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.4, org.label-schema.build-date=20251007, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, managed_by=edpm_ansible, 
org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 10 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=watcher_latest, tcib_managed=true)
Oct 09 15:59:06 compute-0 python3.9[85533]: ansible-ansible.legacy.stat Invoked with path=/etc/iscsi/initiatorname.iscsi follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 09 15:59:06 compute-0 sudo[85525]: pam_unix(sudo:session): session closed for user root
Oct 09 15:59:06 compute-0 sudo[85654]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-nfylbpzcwkmuntfkmxswaussjzpqvvad ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760025545.8117406-218-174082587644706/AnsiballZ_copy.py'
Oct 09 15:59:06 compute-0 sudo[85654]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 09 15:59:07 compute-0 python3.9[85656]: ansible-ansible.legacy.copy Invoked with dest=/etc/iscsi/initiatorname.iscsi mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1760025545.8117406-218-174082587644706/.source.iscsi _original_basename=.ewq8_97m follow=False checksum=3336b25557fcab9819dc5cdb47aafeeb4d3ca664 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 09 15:59:07 compute-0 sudo[85654]: pam_unix(sudo:session): session closed for user root
Oct 09 15:59:07 compute-0 sudo[85806]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-sgjviiyhzwdnowkzvppusydmgnncyydz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760025547.3017354-248-168814135200965/AnsiballZ_file.py'
Oct 09 15:59:07 compute-0 sudo[85806]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 09 15:59:07 compute-0 python3.9[85808]: ansible-ansible.builtin.file Invoked with mode=0600 path=/etc/iscsi/.initiator_reset state=touch recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 09 15:59:07 compute-0 sudo[85806]: pam_unix(sudo:session): session closed for user root
Oct 09 15:59:08 compute-0 python3.9[85958]: ansible-ansible.builtin.stat Invoked with path=/etc/iscsi/iscsid.conf follow=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Oct 09 15:59:09 compute-0 sudo[86110]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-yyohkoukjrdgjrncwhckpyajxkuouiiu ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760025548.792961-282-55329923970370/AnsiballZ_lineinfile.py'
Oct 09 15:59:09 compute-0 sudo[86110]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 09 15:59:09 compute-0 python3.9[86112]: ansible-ansible.builtin.lineinfile Invoked with insertafter=^#node.session.auth.chap.algs line=node.session.auth.chap_algs = SHA3-256,SHA256,SHA1,MD5 path=/etc/iscsi/iscsid.conf regexp=^node.session.auth.chap_algs state=present backrefs=False create=False backup=False firstmatch=False unsafe_writes=False search_string=None insertbefore=None validate=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 09 15:59:09 compute-0 sudo[86110]: pam_unix(sudo:session): session closed for user root
Oct 09 15:59:10 compute-0 sudo[86262]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wqphkfhmzmghwndpsnhzifgtjpictpqf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760025549.8425272-300-118922503867262/AnsiballZ_file.py'
Oct 09 15:59:10 compute-0 sudo[86262]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 09 15:59:10 compute-0 python3.9[86264]: ansible-ansible.builtin.file Invoked with path=/var/local/libexec recurse=True setype=container_file_t state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Oct 09 15:59:10 compute-0 sudo[86262]: pam_unix(sudo:session): session closed for user root
Oct 09 15:59:10 compute-0 sudo[86414]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-soceokczkianhtreueahmhbuxdjxtltf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760025550.5695634-316-59626363592038/AnsiballZ_stat.py'
Oct 09 15:59:10 compute-0 sudo[86414]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 09 15:59:11 compute-0 python3.9[86416]: ansible-ansible.legacy.stat Invoked with path=/var/local/libexec/edpm-container-shutdown follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 09 15:59:11 compute-0 sudo[86414]: pam_unix(sudo:session): session closed for user root
Oct 09 15:59:11 compute-0 sudo[86492]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gsibzdmhqnoechbotimanxairlzrcbrv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760025550.5695634-316-59626363592038/AnsiballZ_file.py'
Oct 09 15:59:11 compute-0 sudo[86492]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 09 15:59:11 compute-0 python3.9[86494]: ansible-ansible.legacy.file Invoked with group=root mode=0700 owner=root setype=container_file_t dest=/var/local/libexec/edpm-container-shutdown _original_basename=edpm-container-shutdown recurse=False state=file path=/var/local/libexec/edpm-container-shutdown force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Oct 09 15:59:11 compute-0 sudo[86492]: pam_unix(sudo:session): session closed for user root
Oct 09 15:59:11 compute-0 sudo[86644]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-djhkvbvheotxeayfcwnwosvysicuvbpg ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760025551.6040158-316-279180719092757/AnsiballZ_stat.py'
Oct 09 15:59:11 compute-0 sudo[86644]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 09 15:59:12 compute-0 python3.9[86646]: ansible-ansible.legacy.stat Invoked with path=/var/local/libexec/edpm-start-podman-container follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 09 15:59:12 compute-0 sudo[86644]: pam_unix(sudo:session): session closed for user root
Oct 09 15:59:12 compute-0 sudo[86722]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-remeqqbyoxkpazrenxvocqvnzhzaypzy ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760025551.6040158-316-279180719092757/AnsiballZ_file.py'
Oct 09 15:59:12 compute-0 sudo[86722]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 09 15:59:12 compute-0 python3.9[86724]: ansible-ansible.legacy.file Invoked with group=root mode=0700 owner=root setype=container_file_t dest=/var/local/libexec/edpm-start-podman-container _original_basename=edpm-start-podman-container recurse=False state=file path=/var/local/libexec/edpm-start-podman-container force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Oct 09 15:59:12 compute-0 sudo[86722]: pam_unix(sudo:session): session closed for user root
Oct 09 15:59:13 compute-0 sudo[86874]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fuvhtyieodgnmqyswtussvwtceprkjao ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760025552.9009933-362-276163538128916/AnsiballZ_file.py'
Oct 09 15:59:13 compute-0 sudo[86874]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 09 15:59:13 compute-0 python3.9[86876]: ansible-ansible.builtin.file Invoked with mode=420 path=/etc/systemd/system-preset state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 09 15:59:13 compute-0 sudo[86874]: pam_unix(sudo:session): session closed for user root
Oct 09 15:59:13 compute-0 podman[86925]: 2025-10-09 15:59:13.886400587 +0000 UTC m=+0.104175441 container health_status d615e4f24ff62fbb14be4a63e6da568ec5b22309773c1c66103b6b4de64a8a2f (image=38.102.83.66:5001/podified-master-centos10/openstack-ovn-controller:watcher_latest, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, tcib_build_tag=watcher_latest, container_name=ovn_controller, org.label-schema.schema-version=1.0, io.buildah.version=1.41.4, org.label-schema.name=CentOS Stream 10 Base Image, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': '38.102.83.66:5001/podified-master-centos10/openstack-ovn-controller:watcher_latest', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, org.label-schema.build-date=20251007)
Oct 09 15:59:14 compute-0 sudo[87053]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-nilvtbcpihooihfutltgiqxtzurpuhfd ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760025553.7294319-378-148623013376862/AnsiballZ_stat.py'
Oct 09 15:59:14 compute-0 sudo[87053]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 09 15:59:14 compute-0 python3.9[87055]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/edpm-container-shutdown.service follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 09 15:59:14 compute-0 sudo[87053]: pam_unix(sudo:session): session closed for user root
Oct 09 15:59:14 compute-0 sudo[87131]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ukkezqmlrweufulpsagghuiswytctpat ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760025553.7294319-378-148623013376862/AnsiballZ_file.py'
Oct 09 15:59:14 compute-0 sudo[87131]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 09 15:59:14 compute-0 python3.9[87133]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system/edpm-container-shutdown.service _original_basename=edpm-container-shutdown-service recurse=False state=file path=/etc/systemd/system/edpm-container-shutdown.service force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 09 15:59:14 compute-0 sudo[87131]: pam_unix(sudo:session): session closed for user root
Oct 09 15:59:15 compute-0 systemd[1]: NetworkManager-dispatcher.service: Deactivated successfully.
Oct 09 15:59:15 compute-0 sudo[87283]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-nkpfmjpprvazojbdyeapdndsjfxvczdl ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760025555.0652318-402-222605041277532/AnsiballZ_stat.py'
Oct 09 15:59:15 compute-0 sudo[87283]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 09 15:59:15 compute-0 python3.9[87285]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system-preset/91-edpm-container-shutdown.preset follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 09 15:59:15 compute-0 sudo[87283]: pam_unix(sudo:session): session closed for user root
Oct 09 15:59:15 compute-0 sudo[87361]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wkolhsoyyyxneqndsazvovwkjpiqinug ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760025555.0652318-402-222605041277532/AnsiballZ_file.py'
Oct 09 15:59:15 compute-0 sudo[87361]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 09 15:59:16 compute-0 python3.9[87363]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system-preset/91-edpm-container-shutdown.preset _original_basename=91-edpm-container-shutdown-preset recurse=False state=file path=/etc/systemd/system-preset/91-edpm-container-shutdown.preset force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 09 15:59:16 compute-0 sudo[87361]: pam_unix(sudo:session): session closed for user root
Oct 09 15:59:16 compute-0 sudo[87513]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-nspprbgeikjkbgovgljrobgougthcfdu ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760025556.3446655-426-132939767838788/AnsiballZ_systemd.py'
Oct 09 15:59:16 compute-0 sudo[87513]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 09 15:59:16 compute-0 python3.9[87515]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True enabled=True name=edpm-container-shutdown state=started daemon_reexec=False scope=system no_block=False force=None masked=None
Oct 09 15:59:16 compute-0 systemd[1]: Reloading.
Oct 09 15:59:17 compute-0 systemd-rc-local-generator[87544]: /etc/rc.d/rc.local is not marked executable, skipping.
Oct 09 15:59:17 compute-0 systemd-sysv-generator[87548]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct 09 15:59:17 compute-0 sudo[87513]: pam_unix(sudo:session): session closed for user root
Oct 09 15:59:17 compute-0 sudo[87703]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-twxqtybbprtjghhbstomopdotlboezaf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760025557.5562775-442-270728118284969/AnsiballZ_stat.py'
Oct 09 15:59:17 compute-0 sudo[87703]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 09 15:59:18 compute-0 python3.9[87705]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/netns-placeholder.service follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 09 15:59:18 compute-0 sudo[87703]: pam_unix(sudo:session): session closed for user root
Oct 09 15:59:18 compute-0 sudo[87781]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-taglnvqrnvjanflhsgpctbbwubjhefij ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760025557.5562775-442-270728118284969/AnsiballZ_file.py'
Oct 09 15:59:18 compute-0 sudo[87781]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 09 15:59:18 compute-0 python3.9[87783]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system/netns-placeholder.service _original_basename=netns-placeholder-service recurse=False state=file path=/etc/systemd/system/netns-placeholder.service force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 09 15:59:18 compute-0 sudo[87781]: pam_unix(sudo:session): session closed for user root
Oct 09 15:59:19 compute-0 sudo[87933]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tmugebtyfffgmhrdrsmgjlmuhnhcrrzr ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760025558.8828175-466-117220984375060/AnsiballZ_stat.py'
Oct 09 15:59:19 compute-0 sudo[87933]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 09 15:59:19 compute-0 python3.9[87935]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system-preset/91-netns-placeholder.preset follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 09 15:59:19 compute-0 sudo[87933]: pam_unix(sudo:session): session closed for user root
Oct 09 15:59:19 compute-0 sudo[88011]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ykkbsaeiblhwozkrnwcpqjgmvjzupcbv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760025558.8828175-466-117220984375060/AnsiballZ_file.py'
Oct 09 15:59:19 compute-0 sudo[88011]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 09 15:59:19 compute-0 python3.9[88013]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system-preset/91-netns-placeholder.preset _original_basename=91-netns-placeholder-preset recurse=False state=file path=/etc/systemd/system-preset/91-netns-placeholder.preset force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 09 15:59:19 compute-0 sudo[88011]: pam_unix(sudo:session): session closed for user root
Oct 09 15:59:20 compute-0 sudo[88163]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-akfxxhvyqzvajxfpyzyuzuiyrwxyifzs ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760025560.142625-490-156779058619325/AnsiballZ_systemd.py'
Oct 09 15:59:20 compute-0 sudo[88163]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 09 15:59:20 compute-0 python3.9[88165]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True enabled=True name=netns-placeholder state=started daemon_reexec=False scope=system no_block=False force=None masked=None
Oct 09 15:59:20 compute-0 systemd[1]: Reloading.
Oct 09 15:59:20 compute-0 systemd-sysv-generator[88195]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct 09 15:59:20 compute-0 systemd-rc-local-generator[88191]: /etc/rc.d/rc.local is not marked executable, skipping.
Oct 09 15:59:21 compute-0 systemd[1]: Starting Create netns directory...
Oct 09 15:59:21 compute-0 systemd[1]: run-netns-placeholder.mount: Deactivated successfully.
Oct 09 15:59:21 compute-0 systemd[1]: netns-placeholder.service: Deactivated successfully.
Oct 09 15:59:21 compute-0 systemd[1]: Finished Create netns directory.
Oct 09 15:59:21 compute-0 sudo[88163]: pam_unix(sudo:session): session closed for user root
Oct 09 15:59:21 compute-0 sudo[88357]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zempqgnmkqwacdueyrisfzyqtzqypryt ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760025561.5606284-510-21019892450164/AnsiballZ_file.py'
Oct 09 15:59:21 compute-0 sudo[88357]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 09 15:59:22 compute-0 python3.9[88359]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/openstack/healthchecks setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Oct 09 15:59:22 compute-0 sudo[88357]: pam_unix(sudo:session): session closed for user root
Oct 09 15:59:22 compute-0 sudo[88509]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bxaylwvnvnqvlhuxcjizccgjzohgwukc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760025562.3167844-526-65052158555203/AnsiballZ_stat.py'
Oct 09 15:59:22 compute-0 sudo[88509]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 09 15:59:22 compute-0 python3.9[88511]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/healthchecks/iscsid/healthcheck follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 09 15:59:22 compute-0 sudo[88509]: pam_unix(sudo:session): session closed for user root
Oct 09 15:59:23 compute-0 sudo[88632]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tpeapuadcanezidjjwdqudwcdxvtvxtr ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760025562.3167844-526-65052158555203/AnsiballZ_copy.py'
Oct 09 15:59:23 compute-0 sudo[88632]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 09 15:59:23 compute-0 python3.9[88634]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/healthchecks/iscsid/ group=zuul mode=0700 owner=zuul setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1760025562.3167844-526-65052158555203/.source _original_basename=healthcheck follow=False checksum=2e1237e7fe015c809b173c52e24cfb87132f4344 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None attributes=None
Oct 09 15:59:23 compute-0 sudo[88632]: pam_unix(sudo:session): session closed for user root
Oct 09 15:59:24 compute-0 sudo[88784]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gcqibcivstftqjahzypmhqqalvssjnhg ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760025564.0653746-560-273893903440974/AnsiballZ_file.py'
Oct 09 15:59:24 compute-0 sudo[88784]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 09 15:59:24 compute-0 python3.9[88786]: ansible-ansible.builtin.file Invoked with path=/var/lib/kolla/config_files recurse=True setype=container_file_t state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Oct 09 15:59:24 compute-0 sudo[88784]: pam_unix(sudo:session): session closed for user root
Oct 09 15:59:25 compute-0 sudo[88936]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-elvqcwrtxismmkmlszosaxyeqsvqvfcy ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760025564.8349257-576-70700122659236/AnsiballZ_stat.py'
Oct 09 15:59:25 compute-0 sudo[88936]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 09 15:59:25 compute-0 python3.9[88938]: ansible-ansible.legacy.stat Invoked with path=/var/lib/kolla/config_files/iscsid.json follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 09 15:59:25 compute-0 sudo[88936]: pam_unix(sudo:session): session closed for user root
Oct 09 15:59:25 compute-0 sudo[89059]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qxfjnexngmqicwzqhcfdgwepwrrsslvp ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760025564.8349257-576-70700122659236/AnsiballZ_copy.py'
Oct 09 15:59:25 compute-0 sudo[89059]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 09 15:59:25 compute-0 python3.9[89061]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/kolla/config_files/iscsid.json mode=0600 src=/home/zuul/.ansible/tmp/ansible-tmp-1760025564.8349257-576-70700122659236/.source.json _original_basename=.ba9qrwa3 follow=False checksum=80e4f97460718c7e5c66b21ef8b846eba0e0dbc8 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 09 15:59:25 compute-0 sudo[89059]: pam_unix(sudo:session): session closed for user root
Oct 09 15:59:26 compute-0 sudo[89211]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-eptuaqgwfblngsnhvulwmifpoxiyaixo ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760025566.2416105-606-268429165840857/AnsiballZ_file.py'
Oct 09 15:59:26 compute-0 sudo[89211]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 09 15:59:26 compute-0 python3.9[89213]: ansible-ansible.builtin.file Invoked with mode=0755 path=/var/lib/edpm-config/container-startup-config/iscsid state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 09 15:59:26 compute-0 sudo[89211]: pam_unix(sudo:session): session closed for user root
Oct 09 15:59:27 compute-0 sudo[89363]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xhrqdnzeitoorqfviekedoygmbxyvmkm ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760025567.0338-622-43521182737349/AnsiballZ_stat.py'
Oct 09 15:59:27 compute-0 sudo[89363]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 09 15:59:27 compute-0 sudo[89363]: pam_unix(sudo:session): session closed for user root
Oct 09 15:59:27 compute-0 sudo[89486]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ngqgndhngemwixxihzewatnnjctdobhi ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760025567.0338-622-43521182737349/AnsiballZ_copy.py'
Oct 09 15:59:27 compute-0 sudo[89486]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 09 15:59:28 compute-0 sudo[89486]: pam_unix(sudo:session): session closed for user root
Oct 09 15:59:28 compute-0 sudo[89638]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-svasvepwvydneybttpkybriavwgccvbn ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760025568.4818528-656-238591435010236/AnsiballZ_container_config_data.py'
Oct 09 15:59:28 compute-0 sudo[89638]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 09 15:59:29 compute-0 python3.9[89640]: ansible-container_config_data Invoked with config_overrides={} config_path=/var/lib/edpm-config/container-startup-config/iscsid config_pattern=*.json debug=False
Oct 09 15:59:29 compute-0 sudo[89638]: pam_unix(sudo:session): session closed for user root
Oct 09 15:59:29 compute-0 sudo[89790]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dhqugsnhzyjeyyhorsxdsjtdxedbrpzu ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760025569.4665575-674-81339630291280/AnsiballZ_container_config_hash.py'
Oct 09 15:59:29 compute-0 sudo[89790]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 09 15:59:30 compute-0 python3.9[89792]: ansible-container_config_hash Invoked with check_mode=False config_vol_prefix=/var/lib/config-data
Oct 09 15:59:30 compute-0 sudo[89790]: pam_unix(sudo:session): session closed for user root
Oct 09 15:59:30 compute-0 sudo[89942]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-yltbrjzxnwkgqmefevgargtzdfibjhhs ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760025570.4083405-692-145768142424464/AnsiballZ_podman_container_info.py'
Oct 09 15:59:30 compute-0 sudo[89942]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 09 15:59:31 compute-0 python3.9[89944]: ansible-containers.podman.podman_container_info Invoked with executable=podman name=None
Oct 09 15:59:31 compute-0 sudo[89942]: pam_unix(sudo:session): session closed for user root
Oct 09 15:59:32 compute-0 sudo[90120]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zdyjzdfcaojiynekooklbdlqwxlszpcu ; /usr/bin/python3 /home/zuul/.ansible/tmp/ansible-tmp-1760025571.8756547-718-2175982243901/AnsiballZ_edpm_container_manage.py'
Oct 09 15:59:32 compute-0 sudo[90120]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 09 15:59:32 compute-0 python3[90122]: ansible-edpm_container_manage Invoked with concurrency=1 config_dir=/var/lib/edpm-config/container-startup-config/iscsid config_id=iscsid config_overrides={} config_patterns=*.json log_base_path=/var/log/containers/stdouts debug=False
Oct 09 15:59:32 compute-0 podman[90157]: 2025-10-09 15:59:32.811270506 +0000 UTC m=+0.022287722 image pull 350c9f95f5081e17dae32f2e34a10922b473e3eeecb503bbb4233dbe58df2c45 38.102.83.66:5001/podified-master-centos10/openstack-iscsid:watcher_latest
Oct 09 15:59:33 compute-0 podman[90157]: 2025-10-09 15:59:33.034858557 +0000 UTC m=+0.245875763 container create 8227728d23f374cb9528be6ddf01dff38d8c792912a17b0d137932a18390b820 (image=38.102.83.66:5001/podified-master-centos10/openstack-iscsid:watcher_latest, name=iscsid, config_id=iscsid, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, managed_by=edpm_ansible, tcib_managed=true, tcib_build_tag=watcher_latest, io.buildah.version=1.41.4, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.schema-version=1.0, container_name=iscsid, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': '38.102.83.66:5001/podified-master-centos10/openstack-iscsid:watcher_latest', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, org.label-schema.build-date=20251007)
Oct 09 15:59:33 compute-0 python3[90122]: ansible-edpm_container_manage PODMAN-CONTAINER-DEBUG: podman create --name iscsid --conmon-pidfile /run/iscsid.pid --env KOLLA_CONFIG_STRATEGY=COPY_ALWAYS --healthcheck-command /openstack/healthcheck --label config_id=iscsid --label container_name=iscsid --label managed_by=edpm_ansible --label config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': '38.102.83.66:5001/podified-master-centos10/openstack-iscsid:watcher_latest', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']} --log-driver journald --log-level info --network host --privileged=True --volume /etc/hosts:/etc/hosts:ro --volume /etc/localtime:/etc/localtime:ro --volume /etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro --volume /etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro --volume /etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro --volume /etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro --volume /etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro --volume /dev/log:/dev/log --volume /var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro --volume /dev:/dev --volume /run:/run --volume /sys:/sys --volume /lib/modules:/lib/modules:ro --volume /etc/iscsi:/etc/iscsi:z --volume /etc/target:/etc/target:z --volume /var/lib/iscsi:/var/lib/iscsi:z --volume /var/lib/openstack/healthchecks/iscsid:/openstack:ro,z 38.102.83.66:5001/podified-master-centos10/openstack-iscsid:watcher_latest
Oct 09 15:59:33 compute-0 sudo[90120]: pam_unix(sudo:session): session closed for user root
Oct 09 15:59:33 compute-0 sudo[90345]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ljewxsiggomzojxfnaihmlkufkurpncu ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760025573.3378212-734-67880289714902/AnsiballZ_stat.py'
Oct 09 15:59:33 compute-0 sudo[90345]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 09 15:59:33 compute-0 python3.9[90347]: ansible-ansible.builtin.stat Invoked with path=/etc/sysconfig/podman_drop_in follow=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Oct 09 15:59:33 compute-0 sudo[90345]: pam_unix(sudo:session): session closed for user root
Oct 09 15:59:34 compute-0 sudo[90499]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-unovlebaetnlkgrrrsohyhaspetrutvv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760025574.0947692-752-57017755577937/AnsiballZ_file.py'
Oct 09 15:59:34 compute-0 sudo[90499]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 09 15:59:34 compute-0 python3.9[90501]: ansible-file Invoked with path=/etc/systemd/system/edpm_iscsid.requires state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 09 15:59:34 compute-0 sudo[90499]: pam_unix(sudo:session): session closed for user root
Oct 09 15:59:34 compute-0 sudo[90575]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-aeecbovtizweocmiqgnfapcdjwesnrya ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760025574.0947692-752-57017755577937/AnsiballZ_stat.py'
Oct 09 15:59:34 compute-0 sudo[90575]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 09 15:59:35 compute-0 python3.9[90577]: ansible-stat Invoked with path=/etc/systemd/system/edpm_iscsid_healthcheck.timer follow=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Oct 09 15:59:35 compute-0 sudo[90575]: pam_unix(sudo:session): session closed for user root
Oct 09 15:59:35 compute-0 ovn_metadata_agent[28608]: 2025-10-09 15:59:35.262 28613 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:405
Oct 09 15:59:35 compute-0 ovn_metadata_agent[28608]: 2025-10-09 15:59:35.263 28613 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:410
Oct 09 15:59:35 compute-0 ovn_metadata_agent[28608]: 2025-10-09 15:59:35.263 28613 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:424
Oct 09 15:59:35 compute-0 sudo[90727]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dhslkbtnbvpypdwkndlndifulvepyqqy ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760025575.0856612-752-231702379761976/AnsiballZ_copy.py'
Oct 09 15:59:35 compute-0 sudo[90727]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 09 15:59:35 compute-0 python3.9[90729]: ansible-copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1760025575.0856612-752-231702379761976/source dest=/etc/systemd/system/edpm_iscsid.service mode=0644 owner=root group=root backup=False force=True remote_src=False follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 09 15:59:35 compute-0 sudo[90727]: pam_unix(sudo:session): session closed for user root
Oct 09 15:59:35 compute-0 sudo[90803]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hwzcdkdzajstdtfrlxuazakooeqfnkfg ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760025575.0856612-752-231702379761976/AnsiballZ_systemd.py'
Oct 09 15:59:35 compute-0 sudo[90803]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 09 15:59:36 compute-0 python3.9[90805]: ansible-systemd Invoked with daemon_reload=True daemon_reexec=False scope=system no_block=False name=None state=None enabled=None force=None masked=None
Oct 09 15:59:36 compute-0 systemd[1]: Reloading.
Oct 09 15:59:36 compute-0 systemd-rc-local-generator[90851]: /etc/rc.d/rc.local is not marked executable, skipping.
Oct 09 15:59:36 compute-0 systemd-sysv-generator[90855]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct 09 15:59:36 compute-0 podman[90807]: 2025-10-09 15:59:36.373352259 +0000 UTC m=+0.086996468 container health_status 0e966c7a283689daf23c5db5017f23f685ee836319240188a9f70ddd246563c5 (image=38.102.83.66:5001/podified-master-centos10/openstack-neutron-metadata-agent-ovn:watcher_latest, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.schema-version=1.0, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': '38.102.83.66:5001/podified-master-centos10/openstack-neutron-metadata-agent-ovn:watcher_latest', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, tcib_build_tag=watcher_latest, tcib_managed=true, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, io.buildah.version=1.41.4, managed_by=edpm_ansible, org.label-schema.build-date=20251007, org.label-schema.license=GPLv2)
Oct 09 15:59:36 compute-0 sudo[90803]: pam_unix(sudo:session): session closed for user root
Oct 09 15:59:36 compute-0 sudo[90933]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-trmpivizqjajvdlxhcbxavfuaxndltwq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760025575.0856612-752-231702379761976/AnsiballZ_systemd.py'
Oct 09 15:59:36 compute-0 sudo[90933]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 09 15:59:37 compute-0 python3.9[90935]: ansible-systemd Invoked with state=restarted name=edpm_iscsid.service enabled=True daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Oct 09 15:59:37 compute-0 systemd[1]: Reloading.
Oct 09 15:59:37 compute-0 systemd-sysv-generator[90968]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct 09 15:59:37 compute-0 systemd-rc-local-generator[90963]: /etc/rc.d/rc.local is not marked executable, skipping.
Oct 09 15:59:37 compute-0 systemd[1]: Starting iscsid container...
Oct 09 15:59:37 compute-0 systemd[1]: Started libcrun container.
Oct 09 15:59:37 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/468d89e9c9419a347ea4703444dcccd0de3ba5cd0e03696125e0859528805366/merged/etc/iscsi supports timestamps until 2038 (0x7fffffff)
Oct 09 15:59:37 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/468d89e9c9419a347ea4703444dcccd0de3ba5cd0e03696125e0859528805366/merged/etc/target supports timestamps until 2038 (0x7fffffff)
Oct 09 15:59:37 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/468d89e9c9419a347ea4703444dcccd0de3ba5cd0e03696125e0859528805366/merged/var/lib/iscsi supports timestamps until 2038 (0x7fffffff)
Oct 09 15:59:37 compute-0 systemd[1]: Started /usr/bin/podman healthcheck run 8227728d23f374cb9528be6ddf01dff38d8c792912a17b0d137932a18390b820.
Oct 09 15:59:37 compute-0 podman[90975]: 2025-10-09 15:59:37.548233683 +0000 UTC m=+0.109702002 container init 8227728d23f374cb9528be6ddf01dff38d8c792912a17b0d137932a18390b820 (image=38.102.83.66:5001/podified-master-centos10/openstack-iscsid:watcher_latest, name=iscsid, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': '38.102.83.66:5001/podified-master-centos10/openstack-iscsid:watcher_latest', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251007, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 10 Base Image, io.buildah.version=1.41.4, org.label-schema.schema-version=1.0, config_id=iscsid, container_name=iscsid, org.label-schema.vendor=CentOS, tcib_build_tag=watcher_latest, tcib_managed=true)
Oct 09 15:59:37 compute-0 iscsid[90990]: + sudo -E kolla_set_configs
Oct 09 15:59:37 compute-0 podman[90975]: 2025-10-09 15:59:37.570035679 +0000 UTC m=+0.131503968 container start 8227728d23f374cb9528be6ddf01dff38d8c792912a17b0d137932a18390b820 (image=38.102.83.66:5001/podified-master-centos10/openstack-iscsid:watcher_latest, name=iscsid, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': '38.102.83.66:5001/podified-master-centos10/openstack-iscsid:watcher_latest', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, config_id=iscsid, container_name=iscsid, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251007, org.label-schema.license=GPLv2, io.buildah.version=1.41.4, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=watcher_latest, tcib_managed=true)
Oct 09 15:59:37 compute-0 sudo[90996]:     root : PWD=/ ; USER=root ; COMMAND=/usr/local/bin/kolla_set_configs
Oct 09 15:59:37 compute-0 podman[90975]: iscsid
Oct 09 15:59:37 compute-0 systemd[1]: Started iscsid container.
Oct 09 15:59:37 compute-0 systemd[1]: Created slice User Slice of UID 0.
Oct 09 15:59:37 compute-0 systemd[1]: Starting User Runtime Directory /run/user/0...
Oct 09 15:59:37 compute-0 systemd[1]: Finished User Runtime Directory /run/user/0.
Oct 09 15:59:37 compute-0 sudo[90933]: pam_unix(sudo:session): session closed for user root
Oct 09 15:59:37 compute-0 systemd[1]: Starting User Manager for UID 0...
Oct 09 15:59:37 compute-0 systemd[91012]: pam_unix(systemd-user:session): session opened for user root(uid=0) by root(uid=0)
Oct 09 15:59:37 compute-0 podman[90997]: 2025-10-09 15:59:37.660664099 +0000 UTC m=+0.079027382 container health_status 8227728d23f374cb9528be6ddf01dff38d8c792912a17b0d137932a18390b820 (image=38.102.83.66:5001/podified-master-centos10/openstack-iscsid:watcher_latest, name=iscsid, health_status=starting, health_failing_streak=1, health_log=, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=watcher_latest, config_id=iscsid, container_name=iscsid, org.label-schema.build-date=20251007, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': '38.102.83.66:5001/podified-master-centos10/openstack-iscsid:watcher_latest', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, io.buildah.version=1.41.4, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible)
Oct 09 15:59:37 compute-0 systemd[1]: 8227728d23f374cb9528be6ddf01dff38d8c792912a17b0d137932a18390b820-2920dcd99ecf44c1.service: Main process exited, code=exited, status=1/FAILURE
Oct 09 15:59:37 compute-0 systemd[1]: 8227728d23f374cb9528be6ddf01dff38d8c792912a17b0d137932a18390b820-2920dcd99ecf44c1.service: Failed with result 'exit-code'.
Oct 09 15:59:37 compute-0 systemd[91012]: Queued start job for default target Main User Target.
Oct 09 15:59:37 compute-0 systemd[91012]: Created slice User Application Slice.
Oct 09 15:59:37 compute-0 systemd[91012]: Mark boot as successful after the user session has run 2 minutes was skipped because of an unmet condition check (ConditionUser=!@system).
Oct 09 15:59:37 compute-0 systemd[91012]: Started Daily Cleanup of User's Temporary Directories.
Oct 09 15:59:37 compute-0 systemd[91012]: Reached target Paths.
Oct 09 15:59:37 compute-0 systemd[91012]: Reached target Timers.
Oct 09 15:59:37 compute-0 systemd[91012]: Starting D-Bus User Message Bus Socket...
Oct 09 15:59:37 compute-0 systemd[91012]: Starting Create User's Volatile Files and Directories...
Oct 09 15:59:37 compute-0 systemd[91012]: Finished Create User's Volatile Files and Directories.
Oct 09 15:59:37 compute-0 systemd[91012]: Listening on D-Bus User Message Bus Socket.
Oct 09 15:59:37 compute-0 systemd[91012]: Reached target Sockets.
Oct 09 15:59:37 compute-0 systemd[91012]: Reached target Basic System.
Oct 09 15:59:37 compute-0 systemd[91012]: Reached target Main User Target.
Oct 09 15:59:37 compute-0 systemd[91012]: Startup finished in 125ms.
Oct 09 15:59:37 compute-0 systemd[1]: Started User Manager for UID 0.
Oct 09 15:59:37 compute-0 systemd[1]: Started Session c3 of User root.
Oct 09 15:59:37 compute-0 sudo[90996]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=0)
Oct 09 15:59:37 compute-0 iscsid[90990]: INFO:__main__:Loading config file at /var/lib/kolla/config_files/config.json
Oct 09 15:59:37 compute-0 iscsid[90990]: INFO:__main__:Validating config file
Oct 09 15:59:37 compute-0 iscsid[90990]: INFO:__main__:Kolla config strategy set to: COPY_ALWAYS
Oct 09 15:59:37 compute-0 iscsid[90990]: INFO:__main__:Writing out command to execute
Oct 09 15:59:37 compute-0 sudo[90996]: pam_unix(sudo:session): session closed for user root
Oct 09 15:59:37 compute-0 systemd[1]: session-c3.scope: Deactivated successfully.
Oct 09 15:59:37 compute-0 iscsid[90990]: ++ cat /run_command
Oct 09 15:59:37 compute-0 iscsid[90990]: + CMD='/usr/sbin/iscsid -f'
Oct 09 15:59:37 compute-0 iscsid[90990]: + ARGS=
Oct 09 15:59:37 compute-0 iscsid[90990]: + sudo kolla_copy_cacerts
Oct 09 15:59:37 compute-0 sudo[91112]:     root : PWD=/ ; USER=root ; COMMAND=/usr/local/bin/kolla_copy_cacerts
Oct 09 15:59:37 compute-0 systemd[1]: Started Session c4 of User root.
Oct 09 15:59:37 compute-0 sudo[91112]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=0)
Oct 09 15:59:37 compute-0 sudo[91112]: pam_unix(sudo:session): session closed for user root
Oct 09 15:59:37 compute-0 iscsid[90990]: + [[ ! -n '' ]]
Oct 09 15:59:37 compute-0 iscsid[90990]: + . kolla_extend_start
Oct 09 15:59:37 compute-0 iscsid[90990]: ++ [[ ! -f /etc/iscsi/initiatorname.iscsi ]]
Oct 09 15:59:37 compute-0 iscsid[90990]: Running command: '/usr/sbin/iscsid -f'
Oct 09 15:59:37 compute-0 iscsid[90990]: + echo 'Running command: '\''/usr/sbin/iscsid -f'\'''
Oct 09 15:59:37 compute-0 iscsid[90990]: + umask 0022
Oct 09 15:59:37 compute-0 iscsid[90990]: + exec /usr/sbin/iscsid -f
Oct 09 15:59:37 compute-0 systemd[1]: session-c4.scope: Deactivated successfully.
Oct 09 15:59:37 compute-0 kernel: Loading iSCSI transport class v2.0-870.
Oct 09 15:59:38 compute-0 python3.9[91195]: ansible-ansible.builtin.stat Invoked with path=/etc/iscsi/.iscsid_restart_required follow=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Oct 09 15:59:38 compute-0 sudo[91345]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bqmbzfdrxjohcjlwxdjoyhtzgultdekz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760025578.4089804-826-159329672314382/AnsiballZ_file.py'
Oct 09 15:59:38 compute-0 sudo[91345]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 09 15:59:38 compute-0 python3.9[91347]: ansible-ansible.builtin.file Invoked with path=/etc/iscsi/.iscsid_restart_required state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 09 15:59:38 compute-0 sudo[91345]: pam_unix(sudo:session): session closed for user root
Oct 09 15:59:39 compute-0 sudo[91497]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vamcoaeybbbwhzaiejnmyjwxzbiyymdc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760025579.2047846-848-187022747896998/AnsiballZ_service_facts.py'
Oct 09 15:59:39 compute-0 sudo[91497]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 09 15:59:39 compute-0 python3.9[91499]: ansible-ansible.builtin.service_facts Invoked
Oct 09 15:59:39 compute-0 network[91516]: You are using 'network' service provided by 'network-scripts', which are now deprecated.
Oct 09 15:59:39 compute-0 network[91517]: 'network-scripts' will be removed from distribution in near future.
Oct 09 15:59:39 compute-0 network[91518]: It is advised to switch to 'NetworkManager' instead for network management.
Oct 09 15:59:42 compute-0 sudo[91497]: pam_unix(sudo:session): session closed for user root
Oct 09 15:59:43 compute-0 sudo[91790]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tdoqttmaxtdlnbjdrcidcmpryonzwwms ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760025582.8584514-868-9689667463641/AnsiballZ_file.py'
Oct 09 15:59:43 compute-0 sudo[91790]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 09 15:59:43 compute-0 python3.9[91792]: ansible-ansible.builtin.file Invoked with mode=0755 path=/etc/modules-load.d selevel=s0 setype=etc_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None attributes=None
Oct 09 15:59:43 compute-0 sudo[91790]: pam_unix(sudo:session): session closed for user root
Oct 09 15:59:43 compute-0 sudo[91953]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dlhbdwnesbwxcgempcinjyjjapjiukfu ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760025583.515987-884-114665600141912/AnsiballZ_modprobe.py'
Oct 09 15:59:44 compute-0 sudo[91953]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 09 15:59:44 compute-0 podman[91916]: 2025-10-09 15:59:44.071429858 +0000 UTC m=+0.105543652 container health_status d615e4f24ff62fbb14be4a63e6da568ec5b22309773c1c66103b6b4de64a8a2f (image=38.102.83.66:5001/podified-master-centos10/openstack-ovn-controller:watcher_latest, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, tcib_build_tag=watcher_latest, tcib_managed=true, config_id=ovn_controller, managed_by=edpm_ansible, container_name=ovn_controller, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': '38.102.83.66:5001/podified-master-centos10/openstack-ovn-controller:watcher_latest', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.4, org.label-schema.build-date=20251007, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team)
Oct 09 15:59:44 compute-0 python3.9[91959]: ansible-community.general.modprobe Invoked with name=dm-multipath state=present params= persistent=disabled
Oct 09 15:59:44 compute-0 sudo[91953]: pam_unix(sudo:session): session closed for user root
Oct 09 15:59:44 compute-0 sudo[92124]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-npacaknaaotwrqsnqtlvcixsfbuuhpfp ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760025584.3801568-900-153656632539007/AnsiballZ_stat.py'
Oct 09 15:59:44 compute-0 sudo[92124]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 09 15:59:44 compute-0 python3.9[92126]: ansible-ansible.legacy.stat Invoked with path=/etc/modules-load.d/dm-multipath.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 09 15:59:44 compute-0 sudo[92124]: pam_unix(sudo:session): session closed for user root
Oct 09 15:59:45 compute-0 sudo[92247]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ryckgyxhytqthdnszxfwpexzkgoicwsp ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760025584.3801568-900-153656632539007/AnsiballZ_copy.py'
Oct 09 15:59:45 compute-0 sudo[92247]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 09 15:59:45 compute-0 python3.9[92249]: ansible-ansible.legacy.copy Invoked with dest=/etc/modules-load.d/dm-multipath.conf mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1760025584.3801568-900-153656632539007/.source.conf follow=False _original_basename=module-load.conf.j2 checksum=065061c60917e4f67cecc70d12ce55e42f9d0b3f backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 09 15:59:45 compute-0 sudo[92247]: pam_unix(sudo:session): session closed for user root
Oct 09 15:59:45 compute-0 sudo[92399]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-aeajopsohudfqoskhqnsrqomvhkzuxnm ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760025585.6418753-932-67811331935155/AnsiballZ_lineinfile.py'
Oct 09 15:59:45 compute-0 sudo[92399]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 09 15:59:46 compute-0 python3.9[92401]: ansible-ansible.builtin.lineinfile Invoked with create=True dest=/etc/modules line=dm-multipath  mode=0644 state=present path=/etc/modules backrefs=False backup=False firstmatch=False unsafe_writes=False regexp=None search_string=None insertafter=None insertbefore=None validate=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 09 15:59:46 compute-0 sudo[92399]: pam_unix(sudo:session): session closed for user root
Oct 09 15:59:46 compute-0 sudo[92551]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bmjwluewsrsgeaixnxpqnoojcxwuxryd ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760025586.2761025-948-128916368152805/AnsiballZ_systemd.py'
Oct 09 15:59:46 compute-0 sudo[92551]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 09 15:59:46 compute-0 python3.9[92553]: ansible-ansible.builtin.systemd Invoked with name=systemd-modules-load.service state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Oct 09 15:59:46 compute-0 systemd[1]: systemd-modules-load.service: Deactivated successfully.
Oct 09 15:59:46 compute-0 systemd[1]: Stopped Load Kernel Modules.
Oct 09 15:59:46 compute-0 systemd[1]: Stopping Load Kernel Modules...
Oct 09 15:59:46 compute-0 systemd[1]: Starting Load Kernel Modules...
Oct 09 15:59:46 compute-0 systemd[1]: Finished Load Kernel Modules.
Oct 09 15:59:46 compute-0 sudo[92551]: pam_unix(sudo:session): session closed for user root
Oct 09 15:59:47 compute-0 sudo[92707]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gyxyssarcfdvrzznniufijrodplkyqmp ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760025587.161779-964-150614309321854/AnsiballZ_file.py'
Oct 09 15:59:47 compute-0 sudo[92707]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 09 15:59:47 compute-0 python3.9[92709]: ansible-ansible.builtin.file Invoked with mode=0755 path=/etc/multipath setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Oct 09 15:59:47 compute-0 sudo[92707]: pam_unix(sudo:session): session closed for user root
Oct 09 15:59:47 compute-0 systemd[1]: Stopping User Manager for UID 0...
Oct 09 15:59:47 compute-0 systemd[91012]: Activating special unit Exit the Session...
Oct 09 15:59:47 compute-0 systemd[91012]: Stopped target Main User Target.
Oct 09 15:59:47 compute-0 systemd[91012]: Stopped target Basic System.
Oct 09 15:59:47 compute-0 systemd[91012]: Stopped target Paths.
Oct 09 15:59:47 compute-0 systemd[91012]: Stopped target Sockets.
Oct 09 15:59:47 compute-0 systemd[91012]: Stopped target Timers.
Oct 09 15:59:47 compute-0 systemd[91012]: Stopped Daily Cleanup of User's Temporary Directories.
Oct 09 15:59:47 compute-0 systemd[91012]: Closed D-Bus User Message Bus Socket.
Oct 09 15:59:47 compute-0 systemd[91012]: Stopped Create User's Volatile Files and Directories.
Oct 09 15:59:47 compute-0 systemd[91012]: Removed slice User Application Slice.
Oct 09 15:59:47 compute-0 systemd[91012]: Reached target Shutdown.
Oct 09 15:59:47 compute-0 systemd[91012]: Finished Exit the Session.
Oct 09 15:59:47 compute-0 systemd[91012]: Reached target Exit the Session.
Oct 09 15:59:48 compute-0 systemd[1]: user@0.service: Deactivated successfully.
Oct 09 15:59:48 compute-0 systemd[1]: Stopped User Manager for UID 0.
Oct 09 15:59:48 compute-0 systemd[1]: Stopping User Runtime Directory /run/user/0...
Oct 09 15:59:48 compute-0 systemd[1]: run-user-0.mount: Deactivated successfully.
Oct 09 15:59:48 compute-0 systemd[1]: user-runtime-dir@0.service: Deactivated successfully.
Oct 09 15:59:48 compute-0 systemd[1]: Stopped User Runtime Directory /run/user/0.
Oct 09 15:59:48 compute-0 systemd[1]: Removed slice User Slice of UID 0.
Oct 09 15:59:48 compute-0 sudo[92861]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mnirsapircexhugqbuyyhvdkngjcqkrz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760025587.8873763-982-36714901194069/AnsiballZ_stat.py'
Oct 09 15:59:48 compute-0 sudo[92861]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 09 15:59:48 compute-0 python3.9[92863]: ansible-ansible.builtin.stat Invoked with path=/etc/multipath.conf follow=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Oct 09 15:59:48 compute-0 sudo[92861]: pam_unix(sudo:session): session closed for user root
Oct 09 15:59:48 compute-0 sudo[93013]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vzouexavzhvmqcluepivoiuueozomlar ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760025588.556774-1000-116633324684267/AnsiballZ_stat.py'
Oct 09 15:59:48 compute-0 sudo[93013]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 09 15:59:48 compute-0 python3.9[93015]: ansible-ansible.builtin.stat Invoked with path=/etc/multipath.conf follow=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Oct 09 15:59:49 compute-0 sudo[93013]: pam_unix(sudo:session): session closed for user root
Oct 09 15:59:49 compute-0 sudo[93165]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vznmamwjwumydjisdpuedufmstawnooo ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760025589.205248-1016-148287738544918/AnsiballZ_stat.py'
Oct 09 15:59:49 compute-0 sudo[93165]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 09 15:59:49 compute-0 python3.9[93167]: ansible-ansible.legacy.stat Invoked with path=/etc/multipath.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 09 15:59:49 compute-0 sudo[93165]: pam_unix(sudo:session): session closed for user root
Oct 09 15:59:50 compute-0 sudo[93288]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-aeywbysxwvwrxhmdfjykuosopulvbfuv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760025589.205248-1016-148287738544918/AnsiballZ_copy.py'
Oct 09 15:59:50 compute-0 sudo[93288]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 09 15:59:50 compute-0 python3.9[93290]: ansible-ansible.legacy.copy Invoked with dest=/etc/multipath.conf mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1760025589.205248-1016-148287738544918/.source.conf _original_basename=multipath.conf follow=False checksum=bf02ab264d3d648048a81f3bacec8bc58db93162 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 09 15:59:50 compute-0 sudo[93288]: pam_unix(sudo:session): session closed for user root
Oct 09 15:59:50 compute-0 sudo[93440]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wbrctygnrmrfxmvictvzmxyjthxhwukx ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760025590.394748-1046-32885875666036/AnsiballZ_command.py'
Oct 09 15:59:50 compute-0 sudo[93440]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 09 15:59:50 compute-0 python3.9[93442]: ansible-ansible.legacy.command Invoked with _raw_params=grep -q '^blacklist\s*{' /etc/multipath.conf _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 09 15:59:51 compute-0 sudo[93440]: pam_unix(sudo:session): session closed for user root
Oct 09 15:59:51 compute-0 sudo[93593]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fxbintvytimscotovtskqvfpvjziehvc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760025591.1596-1062-66372747674459/AnsiballZ_lineinfile.py'
Oct 09 15:59:51 compute-0 sudo[93593]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 09 15:59:51 compute-0 python3.9[93595]: ansible-ansible.builtin.lineinfile Invoked with line=blacklist { path=/etc/multipath.conf state=present backrefs=False create=False backup=False firstmatch=False unsafe_writes=False regexp=None search_string=None insertafter=None insertbefore=None validate=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 09 15:59:51 compute-0 sudo[93593]: pam_unix(sudo:session): session closed for user root
Oct 09 15:59:52 compute-0 sudo[93745]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hplzehgkxpzeoickxnufkyprgufvlkap ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760025591.8245876-1078-256459357890131/AnsiballZ_replace.py'
Oct 09 15:59:52 compute-0 sudo[93745]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 09 15:59:52 compute-0 python3.9[93747]: ansible-ansible.builtin.replace Invoked with path=/etc/multipath.conf regexp=^(blacklist {) replace=\1\n} backup=False encoding=utf-8 unsafe_writes=False after=None before=None validate=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 09 15:59:52 compute-0 sudo[93745]: pam_unix(sudo:session): session closed for user root
Oct 09 15:59:52 compute-0 sudo[93897]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bicyecwzockmesxyhtxikvqadpqvrtmq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760025592.588726-1094-115440039200310/AnsiballZ_replace.py'
Oct 09 15:59:52 compute-0 sudo[93897]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 09 15:59:53 compute-0 python3.9[93899]: ansible-ansible.builtin.replace Invoked with path=/etc/multipath.conf regexp=^blacklist\s*{\n[\s]+devnode \"\.\*\" replace=blacklist { backup=False encoding=utf-8 unsafe_writes=False after=None before=None validate=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 09 15:59:53 compute-0 sudo[93897]: pam_unix(sudo:session): session closed for user root
Oct 09 15:59:53 compute-0 sudo[94049]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vlezztpfyxxyhdchgtmbspmnfgsbkdlp ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760025593.2979898-1112-33398230583245/AnsiballZ_lineinfile.py'
Oct 09 15:59:53 compute-0 sudo[94049]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 09 15:59:53 compute-0 python3.9[94051]: ansible-ansible.builtin.lineinfile Invoked with firstmatch=True insertafter=^defaults line=        find_multipaths yes path=/etc/multipath.conf regexp=^\s+find_multipaths state=present backrefs=False create=False backup=False unsafe_writes=False search_string=None insertbefore=None validate=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 09 15:59:53 compute-0 sudo[94049]: pam_unix(sudo:session): session closed for user root
Oct 09 15:59:54 compute-0 sudo[94201]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dnahxugdyoogggzozkapcvzhckbvxbnl ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760025593.906215-1112-59329068445116/AnsiballZ_lineinfile.py'
Oct 09 15:59:54 compute-0 sudo[94201]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 09 15:59:54 compute-0 python3.9[94203]: ansible-ansible.builtin.lineinfile Invoked with firstmatch=True insertafter=^defaults line=        recheck_wwid yes path=/etc/multipath.conf regexp=^\s+recheck_wwid state=present backrefs=False create=False backup=False unsafe_writes=False search_string=None insertbefore=None validate=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 09 15:59:54 compute-0 sudo[94201]: pam_unix(sudo:session): session closed for user root
Oct 09 15:59:54 compute-0 sudo[94353]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-elquxvluzdexyncvwhljcfsuenzsrtwc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760025594.533264-1112-168877534172560/AnsiballZ_lineinfile.py'
Oct 09 15:59:54 compute-0 sudo[94353]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 09 15:59:54 compute-0 python3.9[94355]: ansible-ansible.builtin.lineinfile Invoked with firstmatch=True insertafter=^defaults line=        skip_kpartx yes path=/etc/multipath.conf regexp=^\s+skip_kpartx state=present backrefs=False create=False backup=False unsafe_writes=False search_string=None insertbefore=None validate=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 09 15:59:55 compute-0 sudo[94353]: pam_unix(sudo:session): session closed for user root
Oct 09 15:59:55 compute-0 sudo[94505]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-htrtpnpcoiaekqgogovwrmseyyvdmqzw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760025595.1344597-1112-132455598454973/AnsiballZ_lineinfile.py'
Oct 09 15:59:55 compute-0 sudo[94505]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 09 15:59:55 compute-0 python3.9[94507]: ansible-ansible.builtin.lineinfile Invoked with firstmatch=True insertafter=^defaults line=        user_friendly_names no path=/etc/multipath.conf regexp=^\s+user_friendly_names state=present backrefs=False create=False backup=False unsafe_writes=False search_string=None insertbefore=None validate=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 09 15:59:55 compute-0 sudo[94505]: pam_unix(sudo:session): session closed for user root
Oct 09 15:59:55 compute-0 sudo[94657]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-cftzyqosmxelawrhiozuecgkcncaclpa ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760025595.7384262-1170-141005869684099/AnsiballZ_stat.py'
Oct 09 15:59:55 compute-0 sudo[94657]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 09 15:59:56 compute-0 python3.9[94659]: ansible-ansible.builtin.stat Invoked with path=/etc/multipath.conf follow=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Oct 09 15:59:56 compute-0 sudo[94657]: pam_unix(sudo:session): session closed for user root
Oct 09 15:59:56 compute-0 sudo[94811]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-lxjuchmnvxqiusamdahmkozgrzuccikf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760025596.3666387-1186-89824159640377/AnsiballZ_file.py'
Oct 09 15:59:56 compute-0 sudo[94811]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 09 15:59:56 compute-0 python3.9[94813]: ansible-ansible.builtin.file Invoked with mode=0644 path=/etc/multipath/.multipath_restart_required state=touch recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 09 15:59:56 compute-0 sudo[94811]: pam_unix(sudo:session): session closed for user root
Oct 09 15:59:57 compute-0 sudo[94963]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zokhxhjfkwpqyucpxyzopycrkweqjtkm ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760025597.1065412-1204-57769085328717/AnsiballZ_file.py'
Oct 09 15:59:57 compute-0 sudo[94963]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 09 15:59:57 compute-0 python3.9[94965]: ansible-ansible.builtin.file Invoked with path=/var/local/libexec recurse=True setype=container_file_t state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Oct 09 15:59:57 compute-0 sudo[94963]: pam_unix(sudo:session): session closed for user root
Oct 09 15:59:57 compute-0 sudo[95115]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-eyhiyrjngzprkelchjirrqxpkwjqrwuj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760025597.7450335-1220-47077977039344/AnsiballZ_stat.py'
Oct 09 15:59:57 compute-0 sudo[95115]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 09 15:59:58 compute-0 python3.9[95117]: ansible-ansible.legacy.stat Invoked with path=/var/local/libexec/edpm-container-shutdown follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 09 15:59:58 compute-0 sudo[95115]: pam_unix(sudo:session): session closed for user root
Oct 09 15:59:58 compute-0 sudo[95193]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-sxpffobbbvhbdsjopsayfvpbimgqhjkc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760025597.7450335-1220-47077977039344/AnsiballZ_file.py'
Oct 09 15:59:58 compute-0 sudo[95193]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 09 15:59:58 compute-0 python3.9[95195]: ansible-ansible.legacy.file Invoked with group=root mode=0700 owner=root setype=container_file_t dest=/var/local/libexec/edpm-container-shutdown _original_basename=edpm-container-shutdown recurse=False state=file path=/var/local/libexec/edpm-container-shutdown force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Oct 09 15:59:58 compute-0 sudo[95193]: pam_unix(sudo:session): session closed for user root
Oct 09 15:59:58 compute-0 sudo[95345]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qsbaunxrzjgzfitcrmhlisorxwhzcvpr ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760025598.731202-1220-248205350619894/AnsiballZ_stat.py'
Oct 09 15:59:58 compute-0 sudo[95345]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 09 15:59:59 compute-0 python3.9[95347]: ansible-ansible.legacy.stat Invoked with path=/var/local/libexec/edpm-start-podman-container follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 09 15:59:59 compute-0 sudo[95345]: pam_unix(sudo:session): session closed for user root
Oct 09 15:59:59 compute-0 sudo[95423]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gkxhmpitvadmcpxfmwqnzrkntkrfegvj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760025598.731202-1220-248205350619894/AnsiballZ_file.py'
Oct 09 15:59:59 compute-0 sudo[95423]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 09 15:59:59 compute-0 python3.9[95425]: ansible-ansible.legacy.file Invoked with group=root mode=0700 owner=root setype=container_file_t dest=/var/local/libexec/edpm-start-podman-container _original_basename=edpm-start-podman-container recurse=False state=file path=/var/local/libexec/edpm-start-podman-container force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Oct 09 15:59:59 compute-0 sudo[95423]: pam_unix(sudo:session): session closed for user root
Oct 09 16:00:00 compute-0 systemd[1]: Starting system activity accounting tool...
Oct 09 16:00:00 compute-0 systemd[1]: sysstat-collect.service: Deactivated successfully.
Oct 09 16:00:00 compute-0 systemd[1]: Finished system activity accounting tool.
Oct 09 16:00:00 compute-0 sudo[95576]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qvqwzdfqfxncoiikihibrcpctbjargbq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760025599.8427184-1266-100337417814790/AnsiballZ_file.py'
Oct 09 16:00:00 compute-0 sudo[95576]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 09 16:00:00 compute-0 python3.9[95578]: ansible-ansible.builtin.file Invoked with mode=420 path=/etc/systemd/system-preset state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 09 16:00:00 compute-0 sudo[95576]: pam_unix(sudo:session): session closed for user root
Oct 09 16:00:00 compute-0 sudo[95728]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-atdwgtmmfxaefqndjvdxaxkzolbynibr ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760025600.5896497-1282-37764263634141/AnsiballZ_stat.py'
Oct 09 16:00:00 compute-0 sudo[95728]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 09 16:00:01 compute-0 python3.9[95730]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/edpm-container-shutdown.service follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 09 16:00:01 compute-0 sudo[95728]: pam_unix(sudo:session): session closed for user root
Oct 09 16:00:01 compute-0 sudo[95806]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xglqfgsymquboqpwajovkjpwwcwxljjh ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760025600.5896497-1282-37764263634141/AnsiballZ_file.py'
Oct 09 16:00:01 compute-0 sudo[95806]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 09 16:00:01 compute-0 python3.9[95808]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system/edpm-container-shutdown.service _original_basename=edpm-container-shutdown-service recurse=False state=file path=/etc/systemd/system/edpm-container-shutdown.service force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 09 16:00:01 compute-0 sudo[95806]: pam_unix(sudo:session): session closed for user root
Oct 09 16:00:02 compute-0 sudo[95958]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-nocwovytfhacgyltdklcmpfzdjpgzpnf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760025601.887435-1306-274723185495509/AnsiballZ_stat.py'
Oct 09 16:00:02 compute-0 sudo[95958]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 09 16:00:02 compute-0 python3.9[95960]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system-preset/91-edpm-container-shutdown.preset follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 09 16:00:02 compute-0 sudo[95958]: pam_unix(sudo:session): session closed for user root
Oct 09 16:00:02 compute-0 sudo[96036]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dqhzwmfozvqmnevneoefmroufenrferm ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760025601.887435-1306-274723185495509/AnsiballZ_file.py'
Oct 09 16:00:02 compute-0 sudo[96036]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 09 16:00:02 compute-0 python3.9[96038]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system-preset/91-edpm-container-shutdown.preset _original_basename=91-edpm-container-shutdown-preset recurse=False state=file path=/etc/systemd/system-preset/91-edpm-container-shutdown.preset force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 09 16:00:02 compute-0 sudo[96036]: pam_unix(sudo:session): session closed for user root
Oct 09 16:00:03 compute-0 sudo[96188]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-otudlybhwdqwromejijvlqiupcuuxliw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760025603.0488787-1330-31877100068849/AnsiballZ_systemd.py'
Oct 09 16:00:03 compute-0 sudo[96188]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 09 16:00:03 compute-0 python3.9[96190]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True enabled=True name=edpm-container-shutdown state=started daemon_reexec=False scope=system no_block=False force=None masked=None
Oct 09 16:00:03 compute-0 systemd[1]: Reloading.
Oct 09 16:00:03 compute-0 systemd-sysv-generator[96223]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct 09 16:00:03 compute-0 systemd-rc-local-generator[96219]: /etc/rc.d/rc.local is not marked executable, skipping.
Oct 09 16:00:04 compute-0 sudo[96188]: pam_unix(sudo:session): session closed for user root
Oct 09 16:00:04 compute-0 sudo[96378]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mblesnsppttsyzbkveqzrvvyiozzhuxu ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760025604.2667725-1346-260808629933221/AnsiballZ_stat.py'
Oct 09 16:00:04 compute-0 sudo[96378]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 09 16:00:04 compute-0 python3.9[96380]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/netns-placeholder.service follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 09 16:00:04 compute-0 sudo[96378]: pam_unix(sudo:session): session closed for user root
Oct 09 16:00:04 compute-0 sudo[96456]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ikgsqeomabdmsgudipmtkfrcbmeczlfw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760025604.2667725-1346-260808629933221/AnsiballZ_file.py'
Oct 09 16:00:04 compute-0 sudo[96456]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 09 16:00:05 compute-0 python3.9[96458]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system/netns-placeholder.service _original_basename=netns-placeholder-service recurse=False state=file path=/etc/systemd/system/netns-placeholder.service force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 09 16:00:05 compute-0 sudo[96456]: pam_unix(sudo:session): session closed for user root
Oct 09 16:00:05 compute-0 sudo[96608]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jddehkdwzsrjpkmxmhcoapmjofkgrslg ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760025605.455582-1370-61591875393596/AnsiballZ_stat.py'
Oct 09 16:00:05 compute-0 sudo[96608]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 09 16:00:05 compute-0 python3.9[96610]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system-preset/91-netns-placeholder.preset follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 09 16:00:05 compute-0 sudo[96608]: pam_unix(sudo:session): session closed for user root
Oct 09 16:00:06 compute-0 sudo[96686]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-puvygmozrqwfabcigvvwwfmrgqznmokd ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760025605.455582-1370-61591875393596/AnsiballZ_file.py'
Oct 09 16:00:06 compute-0 sudo[96686]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 09 16:00:06 compute-0 python3.9[96688]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system-preset/91-netns-placeholder.preset _original_basename=91-netns-placeholder-preset recurse=False state=file path=/etc/systemd/system-preset/91-netns-placeholder.preset force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 09 16:00:06 compute-0 sudo[96686]: pam_unix(sudo:session): session closed for user root
Oct 09 16:00:06 compute-0 podman[96788]: 2025-10-09 16:00:06.84060758 +0000 UTC m=+0.060096557 container health_status 0e966c7a283689daf23c5db5017f23f685ee836319240188a9f70ddd246563c5 (image=38.102.83.66:5001/podified-master-centos10/openstack-neutron-metadata-agent-ovn:watcher_latest, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251007, org.label-schema.name=CentOS Stream 10 Base Image, config_id=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.license=GPLv2, io.buildah.version=1.41.4, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.schema-version=1.0, tcib_build_tag=watcher_latest, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': '38.102.83.66:5001/podified-master-centos10/openstack-neutron-metadata-agent-ovn:watcher_latest', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent)
Oct 09 16:00:06 compute-0 sudo[96858]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-sqamzxzigzwpxcansoaviqemqjcgsibv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760025606.6191788-1394-148798772893136/AnsiballZ_systemd.py'
Oct 09 16:00:06 compute-0 sudo[96858]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 09 16:00:07 compute-0 python3.9[96860]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True enabled=True name=netns-placeholder state=started daemon_reexec=False scope=system no_block=False force=None masked=None
Oct 09 16:00:07 compute-0 systemd[1]: Reloading.
Oct 09 16:00:07 compute-0 systemd-rc-local-generator[96885]: /etc/rc.d/rc.local is not marked executable, skipping.
Oct 09 16:00:07 compute-0 systemd-sysv-generator[96889]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct 09 16:00:07 compute-0 systemd[1]: Starting Create netns directory...
Oct 09 16:00:07 compute-0 systemd[1]: run-netns-placeholder.mount: Deactivated successfully.
Oct 09 16:00:07 compute-0 systemd[1]: netns-placeholder.service: Deactivated successfully.
Oct 09 16:00:07 compute-0 systemd[1]: Finished Create netns directory.
Oct 09 16:00:07 compute-0 sudo[96858]: pam_unix(sudo:session): session closed for user root
Oct 09 16:00:07 compute-0 podman[96926]: 2025-10-09 16:00:07.847197057 +0000 UTC m=+0.083644198 container health_status 8227728d23f374cb9528be6ddf01dff38d8c792912a17b0d137932a18390b820 (image=38.102.83.66:5001/podified-master-centos10/openstack-iscsid:watcher_latest, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=watcher_latest, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': '38.102.83.66:5001/podified-master-centos10/openstack-iscsid:watcher_latest', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 10 Base Image, io.buildah.version=1.41.4, org.label-schema.license=GPLv2, config_id=iscsid, managed_by=edpm_ansible, tcib_managed=true, container_name=iscsid, org.label-schema.build-date=20251007, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Oct 09 16:00:08 compute-0 sudo[97072]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qkhsiqggralhwstyfeceuvcdnitrevfp ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760025607.8639886-1414-68828025289765/AnsiballZ_file.py'
Oct 09 16:00:08 compute-0 sudo[97072]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 09 16:00:08 compute-0 python3.9[97074]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/openstack/healthchecks setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Oct 09 16:00:08 compute-0 sudo[97072]: pam_unix(sudo:session): session closed for user root
Oct 09 16:00:08 compute-0 sudo[97224]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qobhxnpvwvaiwmjxutzrnlzpedknupsi ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760025608.603549-1430-72213198612904/AnsiballZ_stat.py'
Oct 09 16:00:08 compute-0 sudo[97224]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 09 16:00:09 compute-0 python3.9[97226]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/healthchecks/multipathd/healthcheck follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 09 16:00:09 compute-0 sudo[97224]: pam_unix(sudo:session): session closed for user root
Oct 09 16:00:09 compute-0 sudo[97347]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-eyxdtgwhgkpgjqnggeipigktqsovqdme ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760025608.603549-1430-72213198612904/AnsiballZ_copy.py'
Oct 09 16:00:09 compute-0 sudo[97347]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 09 16:00:09 compute-0 python3.9[97349]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/healthchecks/multipathd/ group=zuul mode=0700 owner=zuul setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1760025608.603549-1430-72213198612904/.source _original_basename=healthcheck follow=False checksum=af9d0c1c8f3cb0e30ce9609be9d5b01924d0d23f backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None attributes=None
Oct 09 16:00:09 compute-0 sudo[97347]: pam_unix(sudo:session): session closed for user root
Oct 09 16:00:10 compute-0 sudo[97499]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xrolekgelduppqbqnvlcqzdlpcmhdmwh ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760025610.2508152-1464-174172528161149/AnsiballZ_file.py'
Oct 09 16:00:10 compute-0 sudo[97499]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 09 16:00:10 compute-0 python3.9[97501]: ansible-ansible.builtin.file Invoked with path=/var/lib/kolla/config_files recurse=True setype=container_file_t state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Oct 09 16:00:10 compute-0 sudo[97499]: pam_unix(sudo:session): session closed for user root
Oct 09 16:00:11 compute-0 sudo[97651]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jkzwtcjieaqgglbhgppohvszjpndddmr ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760025611.0210505-1480-93165121657301/AnsiballZ_stat.py'
Oct 09 16:00:11 compute-0 sudo[97651]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 09 16:00:11 compute-0 python3.9[97653]: ansible-ansible.legacy.stat Invoked with path=/var/lib/kolla/config_files/multipathd.json follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 09 16:00:11 compute-0 sudo[97651]: pam_unix(sudo:session): session closed for user root
Oct 09 16:00:11 compute-0 sudo[97774]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-frhuaebpkwfomwhajuzedyfdsdmforcs ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760025611.0210505-1480-93165121657301/AnsiballZ_copy.py'
Oct 09 16:00:11 compute-0 sudo[97774]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 09 16:00:11 compute-0 python3.9[97776]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/kolla/config_files/multipathd.json mode=0600 src=/home/zuul/.ansible/tmp/ansible-tmp-1760025611.0210505-1480-93165121657301/.source.json _original_basename=.bbx9vzl_ follow=False checksum=3f7959ee8ac9757398adcc451c3b416c957d7c14 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 09 16:00:11 compute-0 sudo[97774]: pam_unix(sudo:session): session closed for user root
Oct 09 16:00:12 compute-0 sudo[97926]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gkemegwpfupxaxmxniqeqlduosllupph ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760025612.3074062-1510-96185724963650/AnsiballZ_file.py'
Oct 09 16:00:12 compute-0 sudo[97926]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 09 16:00:12 compute-0 python3.9[97928]: ansible-ansible.builtin.file Invoked with mode=0755 path=/var/lib/edpm-config/container-startup-config/multipathd state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 09 16:00:12 compute-0 sudo[97926]: pam_unix(sudo:session): session closed for user root
Oct 09 16:00:13 compute-0 systemd[1]: virtnodedevd.service: Deactivated successfully.
Oct 09 16:00:13 compute-0 sudo[98079]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bvmduqidurejrcfjxwqdgyumdxrokadx ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760025613.0095155-1526-90289830735501/AnsiballZ_stat.py'
Oct 09 16:00:13 compute-0 sudo[98079]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 09 16:00:13 compute-0 sudo[98079]: pam_unix(sudo:session): session closed for user root
Oct 09 16:00:13 compute-0 sudo[98202]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-oglgekxdiuncjidzipxkvcnpooaoycpg ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760025613.0095155-1526-90289830735501/AnsiballZ_copy.py'
Oct 09 16:00:13 compute-0 sudo[98202]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 09 16:00:14 compute-0 sudo[98202]: pam_unix(sudo:session): session closed for user root
Oct 09 16:00:14 compute-0 systemd[1]: virtproxyd.service: Deactivated successfully.
Oct 09 16:00:14 compute-0 podman[98230]: 2025-10-09 16:00:14.333318984 +0000 UTC m=+0.075689757 container health_status d615e4f24ff62fbb14be4a63e6da568ec5b22309773c1c66103b6b4de64a8a2f (image=38.102.83.66:5001/podified-master-centos10/openstack-ovn-controller:watcher_latest, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': '38.102.83.66:5001/podified-master-centos10/openstack-ovn-controller:watcher_latest', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 10 Base Image, tcib_build_tag=watcher_latest, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251007, io.buildah.version=1.41.4, org.label-schema.schema-version=1.0)
Oct 09 16:00:14 compute-0 sudo[98381]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ueqzuljgvjmwnmwgcnwixzcroowjtlym ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760025614.563121-1560-192327578205814/AnsiballZ_container_config_data.py'
Oct 09 16:00:14 compute-0 sudo[98381]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 09 16:00:15 compute-0 python3.9[98383]: ansible-container_config_data Invoked with config_overrides={} config_path=/var/lib/edpm-config/container-startup-config/multipathd config_pattern=*.json debug=False
Oct 09 16:00:15 compute-0 sudo[98381]: pam_unix(sudo:session): session closed for user root
Oct 09 16:00:15 compute-0 systemd[1]: virtqemud.service: Deactivated successfully.
Oct 09 16:00:15 compute-0 sudo[98534]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ssflbospubnfvikkeouwtpewiwnsphvn ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760025615.408451-1578-267018286515158/AnsiballZ_container_config_hash.py'
Oct 09 16:00:15 compute-0 sudo[98534]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 09 16:00:15 compute-0 python3.9[98536]: ansible-container_config_hash Invoked with check_mode=False config_vol_prefix=/var/lib/config-data
Oct 09 16:00:15 compute-0 sudo[98534]: pam_unix(sudo:session): session closed for user root
Oct 09 16:00:16 compute-0 systemd[1]: virtsecretd.service: Deactivated successfully.
Oct 09 16:00:16 compute-0 sudo[98687]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wtsmhkjcbayeuuprppkdtakwjfnjbzrv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760025616.186535-1596-178462426243312/AnsiballZ_podman_container_info.py'
Oct 09 16:00:16 compute-0 sudo[98687]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 09 16:00:16 compute-0 python3.9[98689]: ansible-containers.podman.podman_container_info Invoked with executable=podman name=None
Oct 09 16:00:16 compute-0 sudo[98687]: pam_unix(sudo:session): session closed for user root
Oct 09 16:00:17 compute-0 sudo[98866]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zrqcrecubuuhwjonxgumzwqngoavehic ; /usr/bin/python3 /home/zuul/.ansible/tmp/ansible-tmp-1760025617.4916952-1622-275616601188618/AnsiballZ_edpm_container_manage.py'
Oct 09 16:00:17 compute-0 sudo[98866]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 09 16:00:17 compute-0 python3[98868]: ansible-edpm_container_manage Invoked with concurrency=1 config_dir=/var/lib/edpm-config/container-startup-config/multipathd config_id=multipathd config_overrides={} config_patterns=*.json log_base_path=/var/log/containers/stdouts debug=False
Oct 09 16:00:18 compute-0 podman[98908]: 2025-10-09 16:00:18.153867336 +0000 UTC m=+0.046596954 container create 270830b5167cafbab735d8118980f26987e673cd0fa464dd2202394830bcca66 (image=38.102.83.66:5001/podified-master-centos10/openstack-multipathd:watcher_latest, name=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.schema-version=1.0, container_name=multipathd, config_id=multipathd, managed_by=edpm_ansible, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': '38.102.83.66:5001/podified-master-centos10/openstack-multipathd:watcher_latest', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.build-date=20251007, tcib_build_tag=watcher_latest, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.41.4)
Oct 09 16:00:18 compute-0 podman[98908]: 2025-10-09 16:00:18.130002047 +0000 UTC m=+0.022731685 image pull 5cb4431d7fffc452f0f5acdd80f896b4af98d5c10c372715124512aeb368f770 38.102.83.66:5001/podified-master-centos10/openstack-multipathd:watcher_latest
Oct 09 16:00:18 compute-0 python3[98868]: ansible-edpm_container_manage PODMAN-CONTAINER-DEBUG: podman create --name multipathd --conmon-pidfile /run/multipathd.pid --env KOLLA_CONFIG_STRATEGY=COPY_ALWAYS --healthcheck-command /openstack/healthcheck --label config_id=multipathd --label container_name=multipathd --label managed_by=edpm_ansible --label config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': '38.102.83.66:5001/podified-master-centos10/openstack-multipathd:watcher_latest', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']} --log-driver journald --log-level info --network host --privileged=True --volume /etc/hosts:/etc/hosts:ro --volume /etc/localtime:/etc/localtime:ro --volume /etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro --volume /etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro --volume /etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro --volume /etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro --volume /etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro --volume /dev/log:/dev/log --volume /var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro --volume /dev:/dev --volume /run/udev:/run/udev --volume /sys:/sys --volume /lib/modules:/lib/modules:ro --volume /etc/iscsi:/etc/iscsi:ro --volume /var/lib/iscsi:/var/lib/iscsi:z --volume /etc/multipath:/etc/multipath:z --volume /etc/multipath.conf:/etc/multipath.conf:ro --volume /var/lib/openstack/healthchecks/multipathd:/openstack:ro,z 38.102.83.66:5001/podified-master-centos10/openstack-multipathd:watcher_latest
Oct 09 16:00:18 compute-0 sudo[98866]: pam_unix(sudo:session): session closed for user root
Oct 09 16:00:18 compute-0 sudo[99093]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-aszjroupyrpuywenoitndfxiydcvhuln ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760025618.536923-1638-119134735893675/AnsiballZ_stat.py'
Oct 09 16:00:18 compute-0 sudo[99093]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 09 16:00:18 compute-0 python3.9[99095]: ansible-ansible.builtin.stat Invoked with path=/etc/sysconfig/podman_drop_in follow=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Oct 09 16:00:18 compute-0 sudo[99093]: pam_unix(sudo:session): session closed for user root
Oct 09 16:00:19 compute-0 sudo[99247]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wrwksgdacssccdazrkxcqiksodezgapa ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760025619.2642946-1656-173551533841370/AnsiballZ_file.py'
Oct 09 16:00:19 compute-0 sudo[99247]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 09 16:00:19 compute-0 python3.9[99249]: ansible-file Invoked with path=/etc/systemd/system/edpm_multipathd.requires state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 09 16:00:19 compute-0 sudo[99247]: pam_unix(sudo:session): session closed for user root
Oct 09 16:00:19 compute-0 sudo[99323]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-yvqyevtaafwweqofsbiaoymjtxmtfbjf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760025619.2642946-1656-173551533841370/AnsiballZ_stat.py'
Oct 09 16:00:19 compute-0 sudo[99323]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 09 16:00:20 compute-0 python3.9[99325]: ansible-stat Invoked with path=/etc/systemd/system/edpm_multipathd_healthcheck.timer follow=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Oct 09 16:00:20 compute-0 sudo[99323]: pam_unix(sudo:session): session closed for user root
Oct 09 16:00:20 compute-0 sudo[99474]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-whqpqonoehmbuqiduntnnikcninvcgfc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760025620.1563993-1656-116764386408909/AnsiballZ_copy.py'
Oct 09 16:00:20 compute-0 sudo[99474]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 09 16:00:20 compute-0 python3.9[99476]: ansible-copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1760025620.1563993-1656-116764386408909/source dest=/etc/systemd/system/edpm_multipathd.service mode=0644 owner=root group=root backup=False force=True remote_src=False follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 09 16:00:20 compute-0 sudo[99474]: pam_unix(sudo:session): session closed for user root
Oct 09 16:00:20 compute-0 sudo[99550]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wwdceneeklfkskferhaalwdoxwvdyleb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760025620.1563993-1656-116764386408909/AnsiballZ_systemd.py'
Oct 09 16:00:20 compute-0 sudo[99550]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 09 16:00:21 compute-0 python3.9[99552]: ansible-systemd Invoked with daemon_reload=True daemon_reexec=False scope=system no_block=False name=None state=None enabled=None force=None masked=None
Oct 09 16:00:21 compute-0 systemd[1]: Reloading.
Oct 09 16:00:21 compute-0 systemd-rc-local-generator[99580]: /etc/rc.d/rc.local is not marked executable, skipping.
Oct 09 16:00:21 compute-0 systemd-sysv-generator[99583]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct 09 16:00:21 compute-0 sudo[99550]: pam_unix(sudo:session): session closed for user root
Oct 09 16:00:21 compute-0 sudo[99661]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-muyeqisniuguvkktdufyfsvgqvqmybxe ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760025620.1563993-1656-116764386408909/AnsiballZ_systemd.py'
Oct 09 16:00:21 compute-0 sudo[99661]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 09 16:00:22 compute-0 python3.9[99663]: ansible-systemd Invoked with state=restarted name=edpm_multipathd.service enabled=True daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Oct 09 16:00:22 compute-0 systemd[1]: Reloading.
Oct 09 16:00:22 compute-0 systemd-rc-local-generator[99693]: /etc/rc.d/rc.local is not marked executable, skipping.
Oct 09 16:00:22 compute-0 systemd-sysv-generator[99696]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct 09 16:00:22 compute-0 systemd[1]: Starting multipathd container...
Oct 09 16:00:22 compute-0 systemd[1]: Started libcrun container.
Oct 09 16:00:22 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/93c0f5fa81cf88861a80c562ed08c1ce05eb122f0655942102dc986de5c5aa54/merged/etc/multipath supports timestamps until 2038 (0x7fffffff)
Oct 09 16:00:22 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/93c0f5fa81cf88861a80c562ed08c1ce05eb122f0655942102dc986de5c5aa54/merged/var/lib/iscsi supports timestamps until 2038 (0x7fffffff)
Oct 09 16:00:22 compute-0 systemd[1]: Started /usr/bin/podman healthcheck run 270830b5167cafbab735d8118980f26987e673cd0fa464dd2202394830bcca66.
Oct 09 16:00:22 compute-0 podman[99703]: 2025-10-09 16:00:22.477306287 +0000 UTC m=+0.095480969 container init 270830b5167cafbab735d8118980f26987e673cd0fa464dd2202394830bcca66 (image=38.102.83.66:5001/podified-master-centos10/openstack-multipathd:watcher_latest, name=multipathd, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': '38.102.83.66:5001/podified-master-centos10/openstack-multipathd:watcher_latest', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.build-date=20251007, tcib_build_tag=watcher_latest, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, container_name=multipathd, io.buildah.version=1.41.4, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_id=multipathd, maintainer=OpenStack Kubernetes Operator team)
Oct 09 16:00:22 compute-0 multipathd[99718]: + sudo -E kolla_set_configs
Oct 09 16:00:22 compute-0 sudo[99724]:     root : PWD=/ ; USER=root ; COMMAND=/usr/local/bin/kolla_set_configs
Oct 09 16:00:22 compute-0 sudo[99724]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=0)
Oct 09 16:00:22 compute-0 podman[99703]: 2025-10-09 16:00:22.512082748 +0000 UTC m=+0.130257440 container start 270830b5167cafbab735d8118980f26987e673cd0fa464dd2202394830bcca66 (image=38.102.83.66:5001/podified-master-centos10/openstack-multipathd:watcher_latest, name=multipathd, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': '38.102.83.66:5001/podified-master-centos10/openstack-multipathd:watcher_latest', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd, tcib_build_tag=watcher_latest, config_id=multipathd, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.41.4, managed_by=edpm_ansible, org.label-schema.build-date=20251007, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team)
Oct 09 16:00:22 compute-0 podman[99703]: multipathd
Oct 09 16:00:22 compute-0 systemd[1]: Started multipathd container.
Oct 09 16:00:22 compute-0 multipathd[99718]: INFO:__main__:Loading config file at /var/lib/kolla/config_files/config.json
Oct 09 16:00:22 compute-0 multipathd[99718]: INFO:__main__:Validating config file
Oct 09 16:00:22 compute-0 multipathd[99718]: INFO:__main__:Kolla config strategy set to: COPY_ALWAYS
Oct 09 16:00:22 compute-0 multipathd[99718]: INFO:__main__:Writing out command to execute
Oct 09 16:00:22 compute-0 sudo[99661]: pam_unix(sudo:session): session closed for user root
Oct 09 16:00:22 compute-0 sudo[99724]: pam_unix(sudo:session): session closed for user root
Oct 09 16:00:22 compute-0 multipathd[99718]: ++ cat /run_command
Oct 09 16:00:22 compute-0 multipathd[99718]: + CMD='/usr/sbin/multipathd -d'
Oct 09 16:00:22 compute-0 multipathd[99718]: + ARGS=
Oct 09 16:00:22 compute-0 multipathd[99718]: + sudo kolla_copy_cacerts
Oct 09 16:00:22 compute-0 sudo[99745]:     root : PWD=/ ; USER=root ; COMMAND=/usr/local/bin/kolla_copy_cacerts
Oct 09 16:00:22 compute-0 sudo[99745]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=0)
Oct 09 16:00:22 compute-0 sudo[99745]: pam_unix(sudo:session): session closed for user root
Oct 09 16:00:22 compute-0 multipathd[99718]: + [[ ! -n '' ]]
Oct 09 16:00:22 compute-0 multipathd[99718]: + . kolla_extend_start
Oct 09 16:00:22 compute-0 multipathd[99718]: Running command: '/usr/sbin/multipathd -d'
Oct 09 16:00:22 compute-0 multipathd[99718]: + echo 'Running command: '\''/usr/sbin/multipathd -d'\'''
Oct 09 16:00:22 compute-0 multipathd[99718]: + umask 0022
Oct 09 16:00:22 compute-0 multipathd[99718]: + exec /usr/sbin/multipathd -d
Oct 09 16:00:22 compute-0 podman[99725]: 2025-10-09 16:00:22.599906576 +0000 UTC m=+0.075912735 container health_status 270830b5167cafbab735d8118980f26987e673cd0fa464dd2202394830bcca66 (image=38.102.83.66:5001/podified-master-centos10/openstack-multipathd:watcher_latest, name=multipathd, health_status=starting, health_failing_streak=1, health_log=, org.label-schema.schema-version=1.0, tcib_managed=true, config_id=multipathd, container_name=multipathd, org.label-schema.name=CentOS Stream 10 Base Image, tcib_build_tag=watcher_latest, io.buildah.version=1.41.4, managed_by=edpm_ansible, org.label-schema.build-date=20251007, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': '38.102.83.66:5001/podified-master-centos10/openstack-multipathd:watcher_latest', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2)
Oct 09 16:00:22 compute-0 multipathd[99718]: 566.262766 | multipathd v0.9.9: start up
Oct 09 16:00:22 compute-0 systemd[1]: 270830b5167cafbab735d8118980f26987e673cd0fa464dd2202394830bcca66-60a33dcc951f2a9d.service: Main process exited, code=exited, status=1/FAILURE
Oct 09 16:00:22 compute-0 systemd[1]: 270830b5167cafbab735d8118980f26987e673cd0fa464dd2202394830bcca66-60a33dcc951f2a9d.service: Failed with result 'exit-code'.
Oct 09 16:00:22 compute-0 multipathd[99718]: 566.272182 | reconfigure: setting up paths and maps
Oct 09 16:00:22 compute-0 multipathd[99718]: 566.273690 | _check_bindings_file: failed to read header from /etc/multipath/bindings
Oct 09 16:00:22 compute-0 multipathd[99718]: 566.275405 | updated bindings file /etc/multipath/bindings
Oct 09 16:00:23 compute-0 python3.9[99907]: ansible-ansible.builtin.stat Invoked with path=/etc/multipath/.multipath_restart_required follow=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Oct 09 16:00:23 compute-0 sudo[100059]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qvdwedjzqarqafkckzhqjupaeourrzrv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760025623.6825192-1728-147963795528839/AnsiballZ_command.py'
Oct 09 16:00:23 compute-0 sudo[100059]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 09 16:00:24 compute-0 python3.9[100061]: ansible-ansible.legacy.command Invoked with _raw_params=podman ps --filter volume=/etc/multipath.conf --format {{.Names}} _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 09 16:00:24 compute-0 sudo[100059]: pam_unix(sudo:session): session closed for user root
Oct 09 16:00:24 compute-0 sudo[100224]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ceawgrhppldlvpndvrajnwvfezkkqvna ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760025624.4551601-1744-65900634612172/AnsiballZ_systemd.py'
Oct 09 16:00:24 compute-0 sudo[100224]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 09 16:00:25 compute-0 python3.9[100226]: ansible-ansible.builtin.systemd Invoked with name=edpm_multipathd state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Oct 09 16:00:25 compute-0 systemd[1]: Stopping multipathd container...
Oct 09 16:00:25 compute-0 multipathd[99718]: 568.807652 | multipathd: shut down
Oct 09 16:00:25 compute-0 systemd[1]: libpod-270830b5167cafbab735d8118980f26987e673cd0fa464dd2202394830bcca66.scope: Deactivated successfully.
Oct 09 16:00:25 compute-0 podman[100230]: 2025-10-09 16:00:25.18259727 +0000 UTC m=+0.079323931 container died 270830b5167cafbab735d8118980f26987e673cd0fa464dd2202394830bcca66 (image=38.102.83.66:5001/podified-master-centos10/openstack-multipathd:watcher_latest, name=multipathd, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 10 Base Image, tcib_build_tag=watcher_latest, tcib_managed=true, config_id=multipathd, container_name=multipathd, io.buildah.version=1.41.4, org.label-schema.build-date=20251007, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': '38.102.83.66:5001/podified-master-centos10/openstack-multipathd:watcher_latest', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']})
Oct 09 16:00:25 compute-0 systemd[1]: 270830b5167cafbab735d8118980f26987e673cd0fa464dd2202394830bcca66-60a33dcc951f2a9d.timer: Deactivated successfully.
Oct 09 16:00:25 compute-0 systemd[1]: Stopped /usr/bin/podman healthcheck run 270830b5167cafbab735d8118980f26987e673cd0fa464dd2202394830bcca66.
Oct 09 16:00:25 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-270830b5167cafbab735d8118980f26987e673cd0fa464dd2202394830bcca66-userdata-shm.mount: Deactivated successfully.
Oct 09 16:00:25 compute-0 systemd[1]: var-lib-containers-storage-overlay-93c0f5fa81cf88861a80c562ed08c1ce05eb122f0655942102dc986de5c5aa54-merged.mount: Deactivated successfully.
Oct 09 16:00:25 compute-0 podman[100230]: 2025-10-09 16:00:25.228309775 +0000 UTC m=+0.125036436 container cleanup 270830b5167cafbab735d8118980f26987e673cd0fa464dd2202394830bcca66 (image=38.102.83.66:5001/podified-master-centos10/openstack-multipathd:watcher_latest, name=multipathd, tcib_build_tag=watcher_latest, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': '38.102.83.66:5001/podified-master-centos10/openstack-multipathd:watcher_latest', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 10 Base Image, config_id=multipathd, container_name=multipathd, io.buildah.version=1.41.4, org.label-schema.build-date=20251007, org.label-schema.schema-version=1.0)
Oct 09 16:00:25 compute-0 podman[100230]: multipathd
Oct 09 16:00:25 compute-0 podman[100259]: multipathd
Oct 09 16:00:25 compute-0 systemd[1]: edpm_multipathd.service: Deactivated successfully.
Oct 09 16:00:25 compute-0 systemd[1]: Stopped multipathd container.
Oct 09 16:00:25 compute-0 systemd[1]: Starting multipathd container...
Oct 09 16:00:25 compute-0 systemd[1]: Started libcrun container.
Oct 09 16:00:25 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/93c0f5fa81cf88861a80c562ed08c1ce05eb122f0655942102dc986de5c5aa54/merged/etc/multipath supports timestamps until 2038 (0x7fffffff)
Oct 09 16:00:25 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/93c0f5fa81cf88861a80c562ed08c1ce05eb122f0655942102dc986de5c5aa54/merged/var/lib/iscsi supports timestamps until 2038 (0x7fffffff)
Oct 09 16:00:25 compute-0 systemd[1]: Started /usr/bin/podman healthcheck run 270830b5167cafbab735d8118980f26987e673cd0fa464dd2202394830bcca66.
Oct 09 16:00:25 compute-0 podman[100272]: 2025-10-09 16:00:25.429641427 +0000 UTC m=+0.111204752 container init 270830b5167cafbab735d8118980f26987e673cd0fa464dd2202394830bcca66 (image=38.102.83.66:5001/podified-master-centos10/openstack-multipathd:watcher_latest, name=multipathd, org.label-schema.build-date=20251007, tcib_build_tag=watcher_latest, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': '38.102.83.66:5001/podified-master-centos10/openstack-multipathd:watcher_latest', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, io.buildah.version=1.41.4, maintainer=OpenStack Kubernetes Operator team, config_id=multipathd, container_name=multipathd, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.vendor=CentOS)
Oct 09 16:00:25 compute-0 multipathd[100288]: + sudo -E kolla_set_configs
Oct 09 16:00:25 compute-0 sudo[100294]:     root : PWD=/ ; USER=root ; COMMAND=/usr/local/bin/kolla_set_configs
Oct 09 16:00:25 compute-0 podman[100272]: 2025-10-09 16:00:25.455510539 +0000 UTC m=+0.137073874 container start 270830b5167cafbab735d8118980f26987e673cd0fa464dd2202394830bcca66 (image=38.102.83.66:5001/podified-master-centos10/openstack-multipathd:watcher_latest, name=multipathd, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.vendor=CentOS, config_id=multipathd, org.label-schema.license=GPLv2, tcib_build_tag=watcher_latest, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': '38.102.83.66:5001/podified-master-centos10/openstack-multipathd:watcher_latest', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, container_name=multipathd, managed_by=edpm_ansible, org.label-schema.build-date=20251007, org.label-schema.schema-version=1.0, tcib_managed=true, io.buildah.version=1.41.4)
Oct 09 16:00:25 compute-0 sudo[100294]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=0)
Oct 09 16:00:25 compute-0 podman[100272]: multipathd
Oct 09 16:00:25 compute-0 systemd[1]: Started multipathd container.
Oct 09 16:00:25 compute-0 sudo[100224]: pam_unix(sudo:session): session closed for user root
Oct 09 16:00:25 compute-0 multipathd[100288]: INFO:__main__:Loading config file at /var/lib/kolla/config_files/config.json
Oct 09 16:00:25 compute-0 multipathd[100288]: INFO:__main__:Validating config file
Oct 09 16:00:25 compute-0 multipathd[100288]: INFO:__main__:Kolla config strategy set to: COPY_ALWAYS
Oct 09 16:00:25 compute-0 multipathd[100288]: INFO:__main__:Writing out command to execute
Oct 09 16:00:25 compute-0 sudo[100294]: pam_unix(sudo:session): session closed for user root
Oct 09 16:00:25 compute-0 multipathd[100288]: ++ cat /run_command
Oct 09 16:00:25 compute-0 multipathd[100288]: + CMD='/usr/sbin/multipathd -d'
Oct 09 16:00:25 compute-0 multipathd[100288]: + ARGS=
Oct 09 16:00:25 compute-0 multipathd[100288]: + sudo kolla_copy_cacerts
Oct 09 16:00:25 compute-0 podman[100295]: 2025-10-09 16:00:25.518685993 +0000 UTC m=+0.049328800 container health_status 270830b5167cafbab735d8118980f26987e673cd0fa464dd2202394830bcca66 (image=38.102.83.66:5001/podified-master-centos10/openstack-multipathd:watcher_latest, name=multipathd, health_status=starting, health_failing_streak=1, health_log=, org.label-schema.license=GPLv2, tcib_managed=true, container_name=multipathd, io.buildah.version=1.41.4, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251007, tcib_build_tag=watcher_latest, config_id=multipathd, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': '38.102.83.66:5001/podified-master-centos10/openstack-multipathd:watcher_latest', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']})
Oct 09 16:00:25 compute-0 sudo[100317]:     root : PWD=/ ; USER=root ; COMMAND=/usr/local/bin/kolla_copy_cacerts
Oct 09 16:00:25 compute-0 sudo[100317]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=0)
Oct 09 16:00:25 compute-0 sudo[100317]: pam_unix(sudo:session): session closed for user root
Oct 09 16:00:25 compute-0 systemd[1]: 270830b5167cafbab735d8118980f26987e673cd0fa464dd2202394830bcca66-4df5741bfef30d7e.service: Main process exited, code=exited, status=1/FAILURE
Oct 09 16:00:25 compute-0 systemd[1]: 270830b5167cafbab735d8118980f26987e673cd0fa464dd2202394830bcca66-4df5741bfef30d7e.service: Failed with result 'exit-code'.
Oct 09 16:00:25 compute-0 multipathd[100288]: Running command: '/usr/sbin/multipathd -d'
Oct 09 16:00:25 compute-0 multipathd[100288]: + [[ ! -n '' ]]
Oct 09 16:00:25 compute-0 multipathd[100288]: + . kolla_extend_start
Oct 09 16:00:25 compute-0 multipathd[100288]: + echo 'Running command: '\''/usr/sbin/multipathd -d'\'''
Oct 09 16:00:25 compute-0 multipathd[100288]: + umask 0022
Oct 09 16:00:25 compute-0 multipathd[100288]: + exec /usr/sbin/multipathd -d
Oct 09 16:00:25 compute-0 multipathd[100288]: 569.198443 | multipathd v0.9.9: start up
Oct 09 16:00:25 compute-0 multipathd[100288]: 569.205878 | reconfigure: setting up paths and maps
Oct 09 16:00:25 compute-0 sudo[100477]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-lzqivinsyqhthwynhxrypdyozfkveqcg ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760025625.6375659-1760-145209290185983/AnsiballZ_file.py'
Oct 09 16:00:25 compute-0 sudo[100477]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 09 16:00:26 compute-0 python3.9[100479]: ansible-ansible.builtin.file Invoked with path=/etc/multipath/.multipath_restart_required state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 09 16:00:26 compute-0 sudo[100477]: pam_unix(sudo:session): session closed for user root
Oct 09 16:00:26 compute-0 sudo[100629]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-azkjawsafyfqvbakphcomaucjvzavqcj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760025626.5898235-1784-281069178550139/AnsiballZ_file.py'
Oct 09 16:00:26 compute-0 sudo[100629]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 09 16:00:27 compute-0 python3.9[100631]: ansible-ansible.builtin.file Invoked with mode=0755 path=/etc/modules-load.d selevel=s0 setype=etc_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None attributes=None
Oct 09 16:00:27 compute-0 sudo[100629]: pam_unix(sudo:session): session closed for user root
Oct 09 16:00:27 compute-0 sudo[100781]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qfgnbrjjueawvptzcdndsyfrwofoaiff ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760025627.2617285-1800-45331947731325/AnsiballZ_modprobe.py'
Oct 09 16:00:27 compute-0 sudo[100781]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 09 16:00:27 compute-0 python3.9[100783]: ansible-community.general.modprobe Invoked with name=nvme-fabrics state=present params= persistent=disabled
Oct 09 16:00:27 compute-0 kernel: Key type psk registered
Oct 09 16:00:27 compute-0 sudo[100781]: pam_unix(sudo:session): session closed for user root
Oct 09 16:00:28 compute-0 sudo[100944]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-lhhvozkdlmvtexsjwslbcvnpzcilgqhu ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760025627.9924617-1816-238528007328213/AnsiballZ_stat.py'
Oct 09 16:00:28 compute-0 sudo[100944]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 09 16:00:28 compute-0 python3.9[100946]: ansible-ansible.legacy.stat Invoked with path=/etc/modules-load.d/nvme-fabrics.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 09 16:00:28 compute-0 sudo[100944]: pam_unix(sudo:session): session closed for user root
Oct 09 16:00:28 compute-0 sudo[101067]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ncdohaawueunlbilwavyisajwwpqpfnu ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760025627.9924617-1816-238528007328213/AnsiballZ_copy.py'
Oct 09 16:00:28 compute-0 sudo[101067]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 09 16:00:29 compute-0 python3.9[101069]: ansible-ansible.legacy.copy Invoked with dest=/etc/modules-load.d/nvme-fabrics.conf mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1760025627.9924617-1816-238528007328213/.source.conf follow=False _original_basename=module-load.conf.j2 checksum=783c778f0c68cc414f35486f234cbb1cf3f9bbff backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 09 16:00:29 compute-0 sudo[101067]: pam_unix(sudo:session): session closed for user root
Oct 09 16:00:29 compute-0 sudo[101219]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hdpdpluaemfhmtvztryfmazrsshrpapu ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760025629.3529434-1848-62227682495686/AnsiballZ_lineinfile.py'
Oct 09 16:00:29 compute-0 sudo[101219]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 09 16:00:29 compute-0 python3.9[101221]: ansible-ansible.builtin.lineinfile Invoked with create=True dest=/etc/modules line=nvme-fabrics  mode=0644 state=present path=/etc/modules backrefs=False backup=False firstmatch=False unsafe_writes=False regexp=None search_string=None insertafter=None insertbefore=None validate=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 09 16:00:29 compute-0 sudo[101219]: pam_unix(sudo:session): session closed for user root
Oct 09 16:00:30 compute-0 sudo[101371]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-aujvtkimksyzsttxqojoafmgzytqzyns ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760025630.088693-1864-125905335687037/AnsiballZ_systemd.py'
Oct 09 16:00:30 compute-0 sudo[101371]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 09 16:00:30 compute-0 python3.9[101373]: ansible-ansible.builtin.systemd Invoked with name=systemd-modules-load.service state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Oct 09 16:00:30 compute-0 systemd[1]: systemd-modules-load.service: Deactivated successfully.
Oct 09 16:00:30 compute-0 systemd[1]: Stopped Load Kernel Modules.
Oct 09 16:00:30 compute-0 systemd[1]: Stopping Load Kernel Modules...
Oct 09 16:00:30 compute-0 systemd[1]: Starting Load Kernel Modules...
Oct 09 16:00:30 compute-0 systemd[1]: Finished Load Kernel Modules.
Oct 09 16:00:30 compute-0 sudo[101371]: pam_unix(sudo:session): session closed for user root
Oct 09 16:00:31 compute-0 sudo[101527]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bcipiipbmvvfeqpterthuvbbkjlzkewf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760025631.0284214-1880-112934630765700/AnsiballZ_setup.py'
Oct 09 16:00:31 compute-0 sudo[101527]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 09 16:00:31 compute-0 python3.9[101529]: ansible-ansible.legacy.setup Invoked with filter=['ansible_pkg_mgr'] gather_subset=['!all'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Oct 09 16:00:31 compute-0 sudo[101527]: pam_unix(sudo:session): session closed for user root
Oct 09 16:00:32 compute-0 sudo[101611]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tcqdoubwugkawdfrfuvffnjtdlxqlvcz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760025631.0284214-1880-112934630765700/AnsiballZ_dnf.py'
Oct 09 16:00:32 compute-0 sudo[101611]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 09 16:00:32 compute-0 python3.9[101613]: ansible-ansible.legacy.dnf Invoked with name=['nvme-cli'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_repoquery=True install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Oct 09 16:00:35 compute-0 ovn_metadata_agent[28608]: 2025-10-09 16:00:35.265 28613 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:405
Oct 09 16:00:35 compute-0 ovn_metadata_agent[28608]: 2025-10-09 16:00:35.265 28613 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:410
Oct 09 16:00:35 compute-0 ovn_metadata_agent[28608]: 2025-10-09 16:00:35.265 28613 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:424
Oct 09 16:00:37 compute-0 podman[101617]: 2025-10-09 16:00:37.82248348 +0000 UTC m=+0.053514791 container health_status 0e966c7a283689daf23c5db5017f23f685ee836319240188a9f70ddd246563c5 (image=38.102.83.66:5001/podified-master-centos10/openstack-neutron-metadata-agent-ovn:watcher_latest, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.build-date=20251007, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 10 Base Image, tcib_build_tag=watcher_latest, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': '38.102.83.66:5001/podified-master-centos10/openstack-neutron-metadata-agent-ovn:watcher_latest', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.4, org.label-schema.schema-version=1.0, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team)
Oct 09 16:00:38 compute-0 podman[101638]: 2025-10-09 16:00:38.81805244 +0000 UTC m=+0.051993394 container health_status 8227728d23f374cb9528be6ddf01dff38d8c792912a17b0d137932a18390b820 (image=38.102.83.66:5001/podified-master-centos10/openstack-iscsid:watcher_latest, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=watcher_latest, io.buildah.version=1.41.4, org.label-schema.build-date=20251007, org.label-schema.license=GPLv2, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': '38.102.83.66:5001/podified-master-centos10/openstack-iscsid:watcher_latest', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, container_name=iscsid, tcib_managed=true, config_id=iscsid, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.schema-version=1.0)
Oct 09 16:00:38 compute-0 systemd[1]: Reloading.
Oct 09 16:00:38 compute-0 systemd-rc-local-generator[101681]: /etc/rc.d/rc.local is not marked executable, skipping.
Oct 09 16:00:38 compute-0 systemd-sysv-generator[101687]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct 09 16:00:39 compute-0 systemd[1]: Reloading.
Oct 09 16:00:39 compute-0 systemd-rc-local-generator[101719]: /etc/rc.d/rc.local is not marked executable, skipping.
Oct 09 16:00:39 compute-0 systemd-sysv-generator[101724]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct 09 16:00:39 compute-0 systemd-logind[841]: Watching system buttons on /dev/input/event0 (Power Button)
Oct 09 16:00:39 compute-0 systemd-logind[841]: Watching system buttons on /dev/input/event1 (AT Translated Set 2 keyboard)
Oct 09 16:00:39 compute-0 systemd[1]: Started /usr/bin/systemctl start man-db-cache-update.
Oct 09 16:00:39 compute-0 systemd[1]: Starting man-db-cache-update.service...
Oct 09 16:00:39 compute-0 systemd[1]: Reloading.
Oct 09 16:00:39 compute-0 systemd-rc-local-generator[101815]: /etc/rc.d/rc.local is not marked executable, skipping.
Oct 09 16:00:39 compute-0 systemd-sysv-generator[101818]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct 09 16:00:40 compute-0 systemd[1]: Queuing reload/restart jobs for marked units…
Oct 09 16:00:40 compute-0 sudo[101611]: pam_unix(sudo:session): session closed for user root
Oct 09 16:00:41 compute-0 systemd[1]: man-db-cache-update.service: Deactivated successfully.
Oct 09 16:00:41 compute-0 systemd[1]: Finished man-db-cache-update.service.
Oct 09 16:00:41 compute-0 systemd[1]: man-db-cache-update.service: Consumed 1.689s CPU time.
Oct 09 16:00:41 compute-0 systemd[1]: run-r8f02ef9e144f485c98ab22bb68b04e12.service: Deactivated successfully.
Oct 09 16:00:41 compute-0 sudo[103105]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-iglythhlpnlfbbqoroankmvguonwxcbp ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760025641.11382-1904-193260299477419/AnsiballZ_file.py'
Oct 09 16:00:41 compute-0 sudo[103105]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 09 16:00:41 compute-0 python3.9[103107]: ansible-ansible.builtin.file Invoked with mode=0600 path=/etc/iscsi/.iscsid_restart_required state=touch recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 09 16:00:41 compute-0 sudo[103105]: pam_unix(sudo:session): session closed for user root
Oct 09 16:00:42 compute-0 python3.9[103257]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Oct 09 16:00:43 compute-0 sudo[103411]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rbimmyiqjepbdmhjgdjleaifmrtgwgjq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760025642.8208559-1939-48495229830669/AnsiballZ_file.py'
Oct 09 16:00:43 compute-0 sudo[103411]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 09 16:00:43 compute-0 python3.9[103413]: ansible-ansible.builtin.file Invoked with mode=0644 path=/etc/ssh/ssh_known_hosts state=touch recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 09 16:00:43 compute-0 sudo[103411]: pam_unix(sudo:session): session closed for user root
Oct 09 16:00:44 compute-0 sudo[103563]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-yslyrirafbxbawlsssdpwoylhmeriemk ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760025643.773375-1961-189984050982391/AnsiballZ_systemd_service.py'
Oct 09 16:00:44 compute-0 sudo[103563]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 09 16:00:44 compute-0 podman[103565]: 2025-10-09 16:00:44.498112839 +0000 UTC m=+0.090904756 container health_status d615e4f24ff62fbb14be4a63e6da568ec5b22309773c1c66103b6b4de64a8a2f (image=38.102.83.66:5001/podified-master-centos10/openstack-ovn-controller:watcher_latest, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, config_id=ovn_controller, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': '38.102.83.66:5001/podified-master-centos10/openstack-ovn-controller:watcher_latest', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 10 Base Image, tcib_build_tag=watcher_latest, tcib_managed=true, io.buildah.version=1.41.4, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251007)
Oct 09 16:00:44 compute-0 python3.9[103566]: ansible-ansible.builtin.systemd_service Invoked with daemon_reload=True daemon_reexec=False scope=system no_block=False name=None state=None enabled=None force=None masked=None
Oct 09 16:00:44 compute-0 systemd[1]: Reloading.
Oct 09 16:00:44 compute-0 systemd-rc-local-generator[103617]: /etc/rc.d/rc.local is not marked executable, skipping.
Oct 09 16:00:44 compute-0 systemd-sysv-generator[103621]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct 09 16:00:45 compute-0 sudo[103563]: pam_unix(sudo:session): session closed for user root
Oct 09 16:00:45 compute-0 python3.9[103776]: ansible-ansible.builtin.service_facts Invoked
Oct 09 16:00:45 compute-0 network[103793]: You are using 'network' service provided by 'network-scripts', which are now deprecated.
Oct 09 16:00:45 compute-0 network[103794]: 'network-scripts' will be removed from distribution in near future.
Oct 09 16:00:45 compute-0 network[103795]: It is advised to switch to 'NetworkManager' instead for network management.
Oct 09 16:00:49 compute-0 sudo[104070]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gvqgixhvsnotjakpwrcggwkpnbrfalmw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760025649.2647808-1999-141365841256697/AnsiballZ_systemd_service.py'
Oct 09 16:00:49 compute-0 sudo[104070]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 09 16:00:49 compute-0 python3.9[104072]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_compute.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Oct 09 16:00:49 compute-0 sudo[104070]: pam_unix(sudo:session): session closed for user root
Oct 09 16:00:50 compute-0 sudo[104223]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jgaaasjfracrmdqvtbcijhfvglpcqklq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760025650.0535321-1999-66651139720365/AnsiballZ_systemd_service.py'
Oct 09 16:00:50 compute-0 sudo[104223]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 09 16:00:50 compute-0 python3.9[104225]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_migration_target.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Oct 09 16:00:50 compute-0 sudo[104223]: pam_unix(sudo:session): session closed for user root
Oct 09 16:00:51 compute-0 sudo[104376]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-useribbgnyawiyyftdxudyxiyxvcmsih ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760025650.833929-1999-121654974692670/AnsiballZ_systemd_service.py'
Oct 09 16:00:51 compute-0 sudo[104376]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 09 16:00:51 compute-0 python3.9[104378]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_api_cron.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Oct 09 16:00:51 compute-0 sudo[104376]: pam_unix(sudo:session): session closed for user root
Oct 09 16:00:51 compute-0 sudo[104529]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rujvnrskidjtjsauceuvsifexxjxpuoi ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760025651.627493-1999-248358065178428/AnsiballZ_systemd_service.py'
Oct 09 16:00:51 compute-0 sudo[104529]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 09 16:00:52 compute-0 python3.9[104531]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_api.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Oct 09 16:00:52 compute-0 sudo[104529]: pam_unix(sudo:session): session closed for user root
Oct 09 16:00:52 compute-0 sudo[104682]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rgobjgeinarpbgnjtdpcnpgbrnbvepoc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760025652.294136-1999-166889925703359/AnsiballZ_systemd_service.py'
Oct 09 16:00:52 compute-0 sudo[104682]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 09 16:00:52 compute-0 python3.9[104684]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_conductor.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Oct 09 16:00:52 compute-0 sudo[104682]: pam_unix(sudo:session): session closed for user root
Oct 09 16:00:53 compute-0 sudo[104835]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ldmulgphlpuiqqedlztpnukzxgtpzpiw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760025653.0062258-1999-133724565709658/AnsiballZ_systemd_service.py'
Oct 09 16:00:53 compute-0 sudo[104835]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 09 16:00:53 compute-0 python3.9[104837]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_metadata.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Oct 09 16:00:53 compute-0 sudo[104835]: pam_unix(sudo:session): session closed for user root
Oct 09 16:00:53 compute-0 sudo[104988]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dqavqwlrmrlnbscnghdkqywzrbevmsqw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760025653.736639-1999-123502147318172/AnsiballZ_systemd_service.py'
Oct 09 16:00:53 compute-0 sudo[104988]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 09 16:00:54 compute-0 python3.9[104990]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_scheduler.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Oct 09 16:00:54 compute-0 sudo[104988]: pam_unix(sudo:session): session closed for user root
Oct 09 16:00:54 compute-0 sudo[105141]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-labdnhvyseuifhsrpzvmmstlqqmfqhiw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760025654.4020433-1999-100223929337456/AnsiballZ_systemd_service.py'
Oct 09 16:00:54 compute-0 sudo[105141]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 09 16:00:54 compute-0 python3.9[105143]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_vnc_proxy.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Oct 09 16:00:55 compute-0 sudo[105141]: pam_unix(sudo:session): session closed for user root
Oct 09 16:00:55 compute-0 sudo[105306]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-nzxfhopjkhupalgddkoqwgwwfpgtpekl ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760025655.446917-2117-115775330641612/AnsiballZ_file.py'
Oct 09 16:00:55 compute-0 sudo[105306]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 09 16:00:55 compute-0 podman[105268]: 2025-10-09 16:00:55.73349128 +0000 UTC m=+0.063779855 container health_status 270830b5167cafbab735d8118980f26987e673cd0fa464dd2202394830bcca66 (image=38.102.83.66:5001/podified-master-centos10/openstack-multipathd:watcher_latest, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_id=multipathd, org.label-schema.license=GPLv2, io.buildah.version=1.41.4, org.label-schema.build-date=20251007, org.label-schema.name=CentOS Stream 10 Base Image, tcib_build_tag=watcher_latest, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': '38.102.83.66:5001/podified-master-centos10/openstack-multipathd:watcher_latest', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.schema-version=1.0)
Oct 09 16:00:55 compute-0 python3.9[105315]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_compute.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 09 16:00:55 compute-0 sudo[105306]: pam_unix(sudo:session): session closed for user root
Oct 09 16:00:56 compute-0 sudo[105465]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rdgatuahwuafmesotwmavhfozlejgvsu ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760025656.0742598-2117-231189133663295/AnsiballZ_file.py'
Oct 09 16:00:56 compute-0 sudo[105465]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 09 16:00:56 compute-0 python3.9[105467]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_migration_target.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 09 16:00:56 compute-0 sudo[105465]: pam_unix(sudo:session): session closed for user root
Oct 09 16:00:56 compute-0 sudo[105617]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tbmwygmzpgvukvtoeqvnzkuualfirlcm ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760025656.6237652-2117-190104378092911/AnsiballZ_file.py'
Oct 09 16:00:56 compute-0 sudo[105617]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 09 16:00:57 compute-0 python3.9[105619]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_api_cron.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 09 16:00:57 compute-0 sudo[105617]: pam_unix(sudo:session): session closed for user root
Oct 09 16:00:57 compute-0 sudo[105769]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ruhqppwhodeuccnjimytjkstwecqfupc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760025657.1767366-2117-221013542261308/AnsiballZ_file.py'
Oct 09 16:00:57 compute-0 sudo[105769]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 09 16:00:57 compute-0 python3.9[105771]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_api.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 09 16:00:57 compute-0 sudo[105769]: pam_unix(sudo:session): session closed for user root
Oct 09 16:00:57 compute-0 sudo[105921]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hxucrtoebwgtslwyfllalucztmrumrdl ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760025657.7444437-2117-102099693197320/AnsiballZ_file.py'
Oct 09 16:00:57 compute-0 sudo[105921]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 09 16:00:58 compute-0 python3.9[105923]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_conductor.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 09 16:00:58 compute-0 sudo[105921]: pam_unix(sudo:session): session closed for user root
Oct 09 16:00:58 compute-0 sudo[106073]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-nugarzifjmhqcbnrmowqcmwtpsjeuyzq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760025658.3396823-2117-140513730827712/AnsiballZ_file.py'
Oct 09 16:00:58 compute-0 sudo[106073]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 09 16:00:58 compute-0 python3.9[106075]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_metadata.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 09 16:00:58 compute-0 sudo[106073]: pam_unix(sudo:session): session closed for user root
Oct 09 16:00:59 compute-0 sudo[106225]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zhxupvximottzoqwehwqifexrewblwoo ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760025658.979511-2117-236899587179746/AnsiballZ_file.py'
Oct 09 16:00:59 compute-0 sudo[106225]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 09 16:00:59 compute-0 python3.9[106227]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_scheduler.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 09 16:00:59 compute-0 sudo[106225]: pam_unix(sudo:session): session closed for user root
Oct 09 16:00:59 compute-0 sudo[106377]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qxhahujlmxrncdmocbkcewgpiflggrir ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760025659.5759811-2117-215440785070615/AnsiballZ_file.py'
Oct 09 16:00:59 compute-0 sudo[106377]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 09 16:01:00 compute-0 python3.9[106379]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_vnc_proxy.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 09 16:01:00 compute-0 sudo[106377]: pam_unix(sudo:session): session closed for user root
Oct 09 16:01:00 compute-0 sudo[106529]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ubnlvnocllukncsycevddvqhlwyuscqu ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760025660.233634-2231-188715535745857/AnsiballZ_file.py'
Oct 09 16:01:00 compute-0 sudo[106529]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 09 16:01:00 compute-0 python3.9[106531]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_compute.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 09 16:01:00 compute-0 sudo[106529]: pam_unix(sudo:session): session closed for user root
Oct 09 16:01:01 compute-0 sudo[106681]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-iqqkpzkiefgzavcpvrywdmgyimziooeu ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760025660.8153994-2231-58196880374934/AnsiballZ_file.py'
Oct 09 16:01:01 compute-0 sudo[106681]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 09 16:01:01 compute-0 python3.9[106683]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_migration_target.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 09 16:01:01 compute-0 sudo[106681]: pam_unix(sudo:session): session closed for user root
Oct 09 16:01:01 compute-0 sudo[106833]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kcuhhepcafajrsbkzotrilfnismhzsba ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760025661.3822768-2231-200503661828984/AnsiballZ_file.py'
Oct 09 16:01:01 compute-0 sudo[106833]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 09 16:01:01 compute-0 python3.9[106835]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_api_cron.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 09 16:01:01 compute-0 sudo[106833]: pam_unix(sudo:session): session closed for user root
Oct 09 16:01:01 compute-0 CROND[106861]: (root) CMD (run-parts /etc/cron.hourly)
Oct 09 16:01:01 compute-0 run-parts[106864]: (/etc/cron.hourly) starting 0anacron
Oct 09 16:01:01 compute-0 anacron[106873]: Anacron started on 2025-10-09
Oct 09 16:01:01 compute-0 anacron[106873]: Will run job `cron.daily' in 18 min.
Oct 09 16:01:01 compute-0 anacron[106873]: Will run job `cron.weekly' in 38 min.
Oct 09 16:01:01 compute-0 anacron[106873]: Will run job `cron.monthly' in 58 min.
Oct 09 16:01:01 compute-0 anacron[106873]: Jobs will be executed sequentially
Oct 09 16:01:01 compute-0 run-parts[106875]: (/etc/cron.hourly) finished 0anacron
Oct 09 16:01:01 compute-0 CROND[106859]: (root) CMDEND (run-parts /etc/cron.hourly)
Oct 09 16:01:02 compute-0 sudo[107000]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-cpzfrazdhkzxvkvlxwrlsjycwqeargwl ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760025661.9549813-2231-149273728507618/AnsiballZ_file.py'
Oct 09 16:01:02 compute-0 sudo[107000]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 09 16:01:02 compute-0 python3.9[107002]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_api.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 09 16:01:02 compute-0 sudo[107000]: pam_unix(sudo:session): session closed for user root
Oct 09 16:01:02 compute-0 sudo[107152]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-nehseincscmutiiouvpwnlwmgjttmxok ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760025662.569316-2231-124871976605523/AnsiballZ_file.py'
Oct 09 16:01:02 compute-0 sudo[107152]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 09 16:01:03 compute-0 python3.9[107154]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_conductor.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 09 16:01:03 compute-0 sudo[107152]: pam_unix(sudo:session): session closed for user root
Oct 09 16:01:03 compute-0 sudo[107304]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xejormovksllotxyxgqyvinrgprpyvka ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760025663.18806-2231-136640881351592/AnsiballZ_file.py'
Oct 09 16:01:03 compute-0 sudo[107304]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 09 16:01:03 compute-0 python3.9[107306]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_metadata.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 09 16:01:03 compute-0 sudo[107304]: pam_unix(sudo:session): session closed for user root
Oct 09 16:01:04 compute-0 sudo[107456]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-acnpbitopmbfqdhrzbomyhihzhemylqe ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760025663.7860954-2231-104907164286925/AnsiballZ_file.py'
Oct 09 16:01:04 compute-0 sudo[107456]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 09 16:01:04 compute-0 python3.9[107458]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_scheduler.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 09 16:01:04 compute-0 sudo[107456]: pam_unix(sudo:session): session closed for user root
Oct 09 16:01:04 compute-0 sudo[107608]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tkxwzhormdcmpdvbdekovnjlwncbtfbw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760025664.4252524-2231-253609483306928/AnsiballZ_file.py'
Oct 09 16:01:04 compute-0 sudo[107608]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 09 16:01:04 compute-0 python3.9[107610]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_vnc_proxy.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 09 16:01:04 compute-0 sudo[107608]: pam_unix(sudo:session): session closed for user root
Oct 09 16:01:05 compute-0 sudo[107760]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-irqlkmuklbgbtdhfvisefjiqmpevrpbd ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760025665.4988744-2347-84755416175074/AnsiballZ_command.py'
Oct 09 16:01:05 compute-0 sudo[107760]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 09 16:01:05 compute-0 python3.9[107762]: ansible-ansible.legacy.command Invoked with _raw_params=if systemctl is-active certmonger.service; then
                                               systemctl disable --now certmonger.service
                                               test -f /etc/systemd/system/certmonger.service || systemctl mask certmonger.service
                                             fi
                                              _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 09 16:01:05 compute-0 sudo[107760]: pam_unix(sudo:session): session closed for user root
Oct 09 16:01:06 compute-0 python3.9[107914]: ansible-ansible.builtin.find Invoked with file_type=any hidden=True paths=['/var/lib/certmonger/requests'] patterns=[] read_whole_file=False age_stamp=mtime recurse=False follow=False get_checksum=False checksum_algorithm=sha1 use_regex=False exact_mode=True excludes=None contains=None age=None size=None depth=None mode=None encoding=None limit=None
Oct 09 16:01:07 compute-0 sudo[108064]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vktytzsgirvyhmsobduuurjvqghvsuts ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760025667.0995314-2383-221034554139628/AnsiballZ_systemd_service.py'
Oct 09 16:01:07 compute-0 sudo[108064]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 09 16:01:07 compute-0 python3.9[108066]: ansible-ansible.builtin.systemd_service Invoked with daemon_reload=True daemon_reexec=False scope=system no_block=False name=None state=None enabled=None force=None masked=None
Oct 09 16:01:07 compute-0 systemd[1]: Reloading.
Oct 09 16:01:07 compute-0 systemd-rc-local-generator[108093]: /etc/rc.d/rc.local is not marked executable, skipping.
Oct 09 16:01:07 compute-0 systemd-sysv-generator[108096]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct 09 16:01:07 compute-0 sudo[108064]: pam_unix(sudo:session): session closed for user root
Oct 09 16:01:08 compute-0 podman[108101]: 2025-10-09 16:01:08.034079987 +0000 UTC m=+0.072727345 container health_status 0e966c7a283689daf23c5db5017f23f685ee836319240188a9f70ddd246563c5 (image=38.102.83.66:5001/podified-master-centos10/openstack-neutron-metadata-agent-ovn:watcher_latest, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_metadata_agent, io.buildah.version=1.41.4, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251007, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': '38.102.83.66:5001/podified-master-centos10/openstack-neutron-metadata-agent-ovn:watcher_latest', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_build_tag=watcher_latest, org.label-schema.schema-version=1.0, config_id=ovn_metadata_agent)
Oct 09 16:01:08 compute-0 sudo[108267]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dcyzhxvoqdfitlnnkiaruzbodknjbvdg ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760025668.1450384-2399-213537190233540/AnsiballZ_command.py'
Oct 09 16:01:08 compute-0 sudo[108267]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 09 16:01:08 compute-0 python3.9[108269]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_compute.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 09 16:01:08 compute-0 sudo[108267]: pam_unix(sudo:session): session closed for user root
Oct 09 16:01:08 compute-0 sudo[108437]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-uhttvbommhufbkarxbktxscedjcytfxf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760025668.7320118-2399-202006746694995/AnsiballZ_command.py'
Oct 09 16:01:08 compute-0 sudo[108437]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 09 16:01:08 compute-0 podman[108394]: 2025-10-09 16:01:08.97333986 +0000 UTC m=+0.048537148 container health_status 8227728d23f374cb9528be6ddf01dff38d8c792912a17b0d137932a18390b820 (image=38.102.83.66:5001/podified-master-centos10/openstack-iscsid:watcher_latest, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 10 Base Image, container_name=iscsid, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=watcher_latest, tcib_managed=true, config_id=iscsid, maintainer=OpenStack Kubernetes Operator team, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': '38.102.83.66:5001/podified-master-centos10/openstack-iscsid:watcher_latest', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, io.buildah.version=1.41.4, org.label-schema.build-date=20251007)
Oct 09 16:01:09 compute-0 python3.9[108442]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_migration_target.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 09 16:01:09 compute-0 sudo[108437]: pam_unix(sudo:session): session closed for user root
Oct 09 16:01:09 compute-0 sudo[108594]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mbdxgkyihbbzvjgceewgwtmvesxxhqzp ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760025669.2985404-2399-82278751191660/AnsiballZ_command.py'
Oct 09 16:01:09 compute-0 sudo[108594]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 09 16:01:09 compute-0 python3.9[108596]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_api_cron.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 09 16:01:09 compute-0 sudo[108594]: pam_unix(sudo:session): session closed for user root
Oct 09 16:01:10 compute-0 sudo[108747]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-koyaluryfmauifjbxlsacsjbtcgvcdqp ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760025670.0012434-2399-253267375628132/AnsiballZ_command.py'
Oct 09 16:01:10 compute-0 sudo[108747]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 09 16:01:10 compute-0 python3.9[108749]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_api.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 09 16:01:10 compute-0 sudo[108747]: pam_unix(sudo:session): session closed for user root
Oct 09 16:01:10 compute-0 sudo[108900]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-oanpikallnxyalyeddycrwmlwwpeqocl ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760025670.5478334-2399-171660174157776/AnsiballZ_command.py'
Oct 09 16:01:10 compute-0 sudo[108900]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 09 16:01:10 compute-0 python3.9[108902]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_conductor.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 09 16:01:11 compute-0 sudo[108900]: pam_unix(sudo:session): session closed for user root
Oct 09 16:01:11 compute-0 sudo[109053]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-glmxuueepqkgzpzmdwnegxunkgeumjcq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760025671.1309955-2399-116020578424410/AnsiballZ_command.py'
Oct 09 16:01:11 compute-0 sudo[109053]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 09 16:01:11 compute-0 python3.9[109055]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_metadata.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 09 16:01:11 compute-0 sudo[109053]: pam_unix(sudo:session): session closed for user root
Oct 09 16:01:11 compute-0 sudo[109206]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-uootvsmesmruvcgquumdlpaidvhsxuqt ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760025671.6673658-2399-223838592174732/AnsiballZ_command.py'
Oct 09 16:01:11 compute-0 sudo[109206]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 09 16:01:12 compute-0 python3.9[109208]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_scheduler.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 09 16:01:12 compute-0 sudo[109206]: pam_unix(sudo:session): session closed for user root
Oct 09 16:01:12 compute-0 sudo[109359]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zsphthnkiurpjhuzkvswhgffwbtakkdj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760025672.2219057-2399-125497001642814/AnsiballZ_command.py'
Oct 09 16:01:12 compute-0 sudo[109359]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 09 16:01:12 compute-0 python3.9[109361]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_vnc_proxy.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 09 16:01:12 compute-0 sudo[109359]: pam_unix(sudo:session): session closed for user root
Oct 09 16:01:14 compute-0 sudo[109512]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ixfnirdfqzgctnqrvlcppkrcckfjgrrj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760025674.2711027-2542-65364937648566/AnsiballZ_file.py'
Oct 09 16:01:14 compute-0 sudo[109512]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 09 16:01:14 compute-0 python3.9[109514]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/openstack/config/nova setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Oct 09 16:01:14 compute-0 sudo[109512]: pam_unix(sudo:session): session closed for user root
Oct 09 16:01:14 compute-0 podman[109515]: 2025-10-09 16:01:14.79076021 +0000 UTC m=+0.069015747 container health_status d615e4f24ff62fbb14be4a63e6da568ec5b22309773c1c66103b6b4de64a8a2f (image=38.102.83.66:5001/podified-master-centos10/openstack-ovn-controller:watcher_latest, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': '38.102.83.66:5001/podified-master-centos10/openstack-ovn-controller:watcher_latest', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, container_name=ovn_controller, io.buildah.version=1.41.4, managed_by=edpm_ansible, org.label-schema.build-date=20251007, org.label-schema.license=GPLv2, config_id=ovn_controller, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=watcher_latest)
Oct 09 16:01:15 compute-0 sudo[109691]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jhodkbocnarxketponoswgxkgabizfkg ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760025674.8411796-2542-61448280730435/AnsiballZ_file.py'
Oct 09 16:01:15 compute-0 sudo[109691]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 09 16:01:15 compute-0 python3.9[109693]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/openstack/config/containers setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Oct 09 16:01:15 compute-0 sudo[109691]: pam_unix(sudo:session): session closed for user root
Oct 09 16:01:15 compute-0 sudo[109843]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tlhmcgmhyytbmenovdgorcmamedyzctv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760025675.3725405-2542-263608381926062/AnsiballZ_file.py'
Oct 09 16:01:15 compute-0 sudo[109843]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 09 16:01:15 compute-0 python3.9[109845]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/openstack/config/nova_nvme_cleaner setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Oct 09 16:01:15 compute-0 sudo[109843]: pam_unix(sudo:session): session closed for user root
Oct 09 16:01:16 compute-0 sudo[109995]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fcmwbtjqbyvzsnrmlezidhpvmhszqlfa ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760025676.2807677-2586-55947539183274/AnsiballZ_file.py'
Oct 09 16:01:16 compute-0 sudo[109995]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 09 16:01:16 compute-0 python3.9[109997]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/nova setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Oct 09 16:01:16 compute-0 sudo[109995]: pam_unix(sudo:session): session closed for user root
Oct 09 16:01:17 compute-0 sudo[110147]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-cunrwvayuiopmvevzdqukpoktalanefr ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760025676.851052-2586-13630878704343/AnsiballZ_file.py'
Oct 09 16:01:17 compute-0 sudo[110147]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 09 16:01:17 compute-0 python3.9[110149]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/_nova_secontext setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Oct 09 16:01:17 compute-0 sudo[110147]: pam_unix(sudo:session): session closed for user root
Oct 09 16:01:17 compute-0 sudo[110299]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tydapkyyhfawhhkxeskzpubmhgucgzsa ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760025677.4232316-2586-72116614417679/AnsiballZ_file.py'
Oct 09 16:01:17 compute-0 sudo[110299]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 09 16:01:17 compute-0 python3.9[110301]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/nova/instances setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Oct 09 16:01:17 compute-0 sudo[110299]: pam_unix(sudo:session): session closed for user root
Oct 09 16:01:18 compute-0 sudo[110451]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xckoxbqmwwkoaiowsncovgmjduxxttfo ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760025677.9993117-2586-216396207813218/AnsiballZ_file.py'
Oct 09 16:01:18 compute-0 sudo[110451]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 09 16:01:18 compute-0 python3.9[110453]: ansible-ansible.builtin.file Invoked with group=root mode=0750 owner=root path=/etc/ceph setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Oct 09 16:01:18 compute-0 sudo[110451]: pam_unix(sudo:session): session closed for user root
Oct 09 16:01:18 compute-0 sudo[110603]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-encydcdbyshxblctzljzxvmffzuedbkf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760025678.577922-2586-267325593438081/AnsiballZ_file.py'
Oct 09 16:01:18 compute-0 sudo[110603]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 09 16:01:19 compute-0 python3.9[110605]: ansible-ansible.builtin.file Invoked with group=zuul owner=zuul path=/etc/multipath setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None seuser=None serole=None selevel=None attributes=None
Oct 09 16:01:19 compute-0 sudo[110603]: pam_unix(sudo:session): session closed for user root
Oct 09 16:01:19 compute-0 sudo[110755]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ohasgawjizomohxquwdtfatrglnxzctd ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760025679.1428857-2586-9998670709360/AnsiballZ_file.py'
Oct 09 16:01:19 compute-0 sudo[110755]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 09 16:01:19 compute-0 python3.9[110757]: ansible-ansible.builtin.file Invoked with group=zuul owner=zuul path=/etc/iscsi setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None seuser=None serole=None selevel=None attributes=None
Oct 09 16:01:19 compute-0 sudo[110755]: pam_unix(sudo:session): session closed for user root
Oct 09 16:01:19 compute-0 sudo[110907]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fspkkrkkaodcxlyzqdvlxilhbvpbvohu ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760025679.7799485-2586-198651240505634/AnsiballZ_file.py'
Oct 09 16:01:19 compute-0 sudo[110907]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 09 16:01:20 compute-0 python3.9[110909]: ansible-ansible.builtin.file Invoked with group=zuul owner=zuul path=/var/lib/iscsi setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None seuser=None serole=None selevel=None attributes=None
Oct 09 16:01:20 compute-0 sudo[110907]: pam_unix(sudo:session): session closed for user root
Oct 09 16:01:20 compute-0 sudo[111059]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kvzpjtbzrrkalmckmvxgfogaivxpffzu ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760025680.3001223-2586-80665447972299/AnsiballZ_file.py'
Oct 09 16:01:20 compute-0 sudo[111059]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 09 16:01:20 compute-0 python3.9[111061]: ansible-ansible.builtin.file Invoked with group=zuul owner=zuul path=/etc/nvme setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None seuser=None serole=None selevel=None attributes=None
Oct 09 16:01:20 compute-0 sudo[111059]: pam_unix(sudo:session): session closed for user root
Oct 09 16:01:21 compute-0 sudo[111211]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wwwnaloseprdwyukdxbeabrkypmxftaz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760025680.8794558-2586-164331591817678/AnsiballZ_file.py'
Oct 09 16:01:21 compute-0 sudo[111211]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 09 16:01:21 compute-0 python3.9[111213]: ansible-ansible.builtin.file Invoked with group=zuul owner=zuul path=/run/openvswitch setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None seuser=None serole=None selevel=None attributes=None
Oct 09 16:01:21 compute-0 sudo[111211]: pam_unix(sudo:session): session closed for user root
Oct 09 16:01:25 compute-0 podman[111238]: 2025-10-09 16:01:25.839301735 +0000 UTC m=+0.078325462 container health_status 270830b5167cafbab735d8118980f26987e673cd0fa464dd2202394830bcca66 (image=38.102.83.66:5001/podified-master-centos10/openstack-multipathd:watcher_latest, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': '38.102.83.66:5001/podified-master-centos10/openstack-multipathd:watcher_latest', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.build-date=20251007, org.label-schema.license=GPLv2, tcib_build_tag=watcher_latest, tcib_managed=true, config_id=multipathd, io.buildah.version=1.41.4, org.label-schema.schema-version=1.0, managed_by=edpm_ansible)
Oct 09 16:01:26 compute-0 sudo[111383]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-inovljexqyyfpcxrsvodoyhrdtmwukna ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760025686.2201195-2851-64083902160929/AnsiballZ_getent.py'
Oct 09 16:01:26 compute-0 sudo[111383]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 09 16:01:26 compute-0 python3.9[111385]: ansible-ansible.builtin.getent Invoked with database=passwd key=nova fail_key=True service=None split=None
Oct 09 16:01:26 compute-0 sudo[111383]: pam_unix(sudo:session): session closed for user root
Oct 09 16:01:27 compute-0 sudo[111536]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fevsmwuktgbamacwkqlpkuitonirjzsc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760025687.1328928-2867-161815352322503/AnsiballZ_group.py'
Oct 09 16:01:27 compute-0 sudo[111536]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 09 16:01:27 compute-0 python3.9[111538]: ansible-ansible.builtin.group Invoked with gid=42436 name=nova state=present force=False system=False local=False non_unique=False gid_min=None gid_max=None
Oct 09 16:01:27 compute-0 groupadd[111539]: group added to /etc/group: name=nova, GID=42436
Oct 09 16:01:27 compute-0 groupadd[111539]: group added to /etc/gshadow: name=nova
Oct 09 16:01:27 compute-0 groupadd[111539]: new group: name=nova, GID=42436
Oct 09 16:01:27 compute-0 sudo[111536]: pam_unix(sudo:session): session closed for user root
Oct 09 16:01:28 compute-0 sudo[111694]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-nhcgqlljcfavfwmkbsuywwaoecaidirz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760025688.0427697-2883-50714177472003/AnsiballZ_user.py'
Oct 09 16:01:28 compute-0 sudo[111694]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 09 16:01:28 compute-0 python3.9[111696]: ansible-ansible.builtin.user Invoked with comment=nova user group=nova groups=['libvirt'] name=nova shell=/bin/sh state=present uid=42436 non_unique=False force=False remove=False create_home=True system=False move_home=False append=False ssh_key_bits=0 ssh_key_type=rsa ssh_key_comment=ansible-generated on compute-0 update_password=always home=None password=NOT_LOGGING_PARAMETER login_class=None password_expire_max=None password_expire_min=None password_expire_warn=None hidden=None seuser=None skeleton=None generate_ssh_key=None ssh_key_file=None ssh_key_passphrase=NOT_LOGGING_PARAMETER expires=None password_lock=None local=None profile=None authorization=None role=None umask=None password_expire_account_disable=None uid_min=None uid_max=None
Oct 09 16:01:28 compute-0 useradd[111698]: new user: name=nova, UID=42436, GID=42436, home=/home/nova, shell=/bin/sh, from=/dev/pts/0
Oct 09 16:01:28 compute-0 useradd[111698]: add 'nova' to group 'libvirt'
Oct 09 16:01:28 compute-0 useradd[111698]: add 'nova' to shadow group 'libvirt'
Oct 09 16:01:28 compute-0 sudo[111694]: pam_unix(sudo:session): session closed for user root
Oct 09 16:01:29 compute-0 sshd-session[111729]: Accepted publickey for zuul from 192.168.122.30 port 51412 ssh2: ECDSA SHA256:2Vdz7kVNDZnmAnEBdeIC9De7MGoQwU7bxSCyJABiYXo
Oct 09 16:01:29 compute-0 systemd-logind[841]: New session 10 of user zuul.
Oct 09 16:01:29 compute-0 systemd[1]: Started Session 10 of User zuul.
Oct 09 16:01:29 compute-0 sshd-session[111729]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Oct 09 16:01:29 compute-0 sshd-session[111732]: Received disconnect from 192.168.122.30 port 51412:11: disconnected by user
Oct 09 16:01:29 compute-0 sshd-session[111732]: Disconnected from user zuul 192.168.122.30 port 51412
Oct 09 16:01:29 compute-0 sshd-session[111729]: pam_unix(sshd:session): session closed for user zuul
Oct 09 16:01:29 compute-0 systemd[1]: session-10.scope: Deactivated successfully.
Oct 09 16:01:29 compute-0 systemd-logind[841]: Session 10 logged out. Waiting for processes to exit.
Oct 09 16:01:29 compute-0 systemd-logind[841]: Removed session 10.
Oct 09 16:01:30 compute-0 python3.9[111882]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/nova/config.json follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 09 16:01:30 compute-0 python3.9[112003]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/config/nova/config.json mode=0644 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1760025689.9506352-2933-237307831240288/.source.json follow=False _original_basename=config.json.j2 checksum=2c2474b5f24ef7c9ed37f49680082593e0d1100b backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Oct 09 16:01:31 compute-0 python3.9[112153]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/nova/nova-blank.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 09 16:01:32 compute-0 python3.9[112229]: ansible-ansible.legacy.file Invoked with mode=0644 setype=container_file_t dest=/var/lib/openstack/config/nova/nova-blank.conf _original_basename=nova-blank.conf recurse=False state=file path=/var/lib/openstack/config/nova/nova-blank.conf force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Oct 09 16:01:32 compute-0 python3.9[112379]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/nova/ssh-config follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 09 16:01:33 compute-0 python3.9[112500]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/config/nova/ssh-config mode=0644 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1760025692.2198231-2933-238731932474779/.source follow=False _original_basename=ssh-config checksum=4297f735c41bdc1ff52d72e6f623a02242f37958 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Oct 09 16:01:33 compute-0 python3.9[112650]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/nova/02-nova-host-specific.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 09 16:01:34 compute-0 python3.9[112771]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/config/nova/02-nova-host-specific.conf mode=0644 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1760025693.3331103-2933-154211597958775/.source.conf follow=False _original_basename=02-nova-host-specific.conf.j2 checksum=1feba546d0beacad9258164ab79b8a747685ccc8 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Oct 09 16:01:34 compute-0 python3.9[112921]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/nova/nova_statedir_ownership.py follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 09 16:01:35 compute-0 python3.9[113042]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/config/nova/nova_statedir_ownership.py mode=0644 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1760025694.356828-2933-229099816100054/.source.py follow=False _original_basename=nova_statedir_ownership.py checksum=c6c8a3cfefa5efd60ceb1408c4e977becedb71e2 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Oct 09 16:01:35 compute-0 ovn_metadata_agent[28608]: 2025-10-09 16:01:35.266 28613 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:405
Oct 09 16:01:35 compute-0 ovn_metadata_agent[28608]: 2025-10-09 16:01:35.267 28613 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.000s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:410
Oct 09 16:01:35 compute-0 ovn_metadata_agent[28608]: 2025-10-09 16:01:35.267 28613 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:424
Oct 09 16:01:35 compute-0 sudo[113193]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ndwczmmyojgronsmcfbnhbcbikqscrpq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760025695.7167778-3071-226908612284857/AnsiballZ_file.py'
Oct 09 16:01:35 compute-0 sudo[113193]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 09 16:01:36 compute-0 python3.9[113195]: ansible-ansible.builtin.file Invoked with group=nova mode=0700 owner=nova path=/home/nova/.ssh state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 09 16:01:36 compute-0 sudo[113193]: pam_unix(sudo:session): session closed for user root
Oct 09 16:01:36 compute-0 sudo[113345]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rhsijvljrqbpmadcyyyfsjiwjffdfgza ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760025696.4094026-3087-214287523577576/AnsiballZ_copy.py'
Oct 09 16:01:36 compute-0 sudo[113345]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 09 16:01:37 compute-0 python3.9[113347]: ansible-ansible.legacy.copy Invoked with dest=/home/nova/.ssh/authorized_keys group=nova mode=0600 owner=nova remote_src=True src=/var/lib/openstack/config/nova/ssh-publickey backup=False force=True follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 09 16:01:37 compute-0 sudo[113345]: pam_unix(sudo:session): session closed for user root
Oct 09 16:01:37 compute-0 sudo[113497]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kpbhfniowqmcrfacwbyyutbqmmvidicm ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760025697.2792356-3103-154878146389697/AnsiballZ_stat.py'
Oct 09 16:01:37 compute-0 sudo[113497]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 09 16:01:37 compute-0 python3.9[113499]: ansible-ansible.builtin.stat Invoked with path=/var/lib/nova/compute_id follow=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Oct 09 16:01:37 compute-0 sudo[113497]: pam_unix(sudo:session): session closed for user root
Oct 09 16:01:38 compute-0 sudo[113659]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mqubnrtzesgsxibyzrapcchpndtcvsht ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760025697.9977345-3119-159427192102444/AnsiballZ_stat.py'
Oct 09 16:01:38 compute-0 sudo[113659]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 09 16:01:38 compute-0 podman[113623]: 2025-10-09 16:01:38.293622326 +0000 UTC m=+0.062424837 container health_status 0e966c7a283689daf23c5db5017f23f685ee836319240188a9f70ddd246563c5 (image=38.102.83.66:5001/podified-master-centos10/openstack-neutron-metadata-agent-ovn:watcher_latest, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.4, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.build-date=20251007, org.label-schema.name=CentOS Stream 10 Base Image, tcib_build_tag=watcher_latest, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': '38.102.83.66:5001/podified-master-centos10/openstack-neutron-metadata-agent-ovn:watcher_latest', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0)
Oct 09 16:01:38 compute-0 python3.9[113668]: ansible-ansible.legacy.stat Invoked with path=/var/lib/nova/compute_id follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 09 16:01:38 compute-0 sudo[113659]: pam_unix(sudo:session): session closed for user root
Oct 09 16:01:38 compute-0 sudo[113791]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ycxcxmwmbmsbeodlhjnnbxonlwyhitaq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760025697.9977345-3119-159427192102444/AnsiballZ_copy.py'
Oct 09 16:01:38 compute-0 sudo[113791]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 09 16:01:38 compute-0 python3.9[113793]: ansible-ansible.legacy.copy Invoked with attributes=+i dest=/var/lib/nova/compute_id group=nova mode=0400 owner=nova src=/home/zuul/.ansible/tmp/ansible-tmp-1760025697.9977345-3119-159427192102444/.source _original_basename=.y8bq0a52 follow=False checksum=dc91d341c8b2a292dd94fd91b5bca7a266365563 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None
Oct 09 16:01:38 compute-0 sudo[113791]: pam_unix(sudo:session): session closed for user root
Oct 09 16:01:39 compute-0 podman[113919]: 2025-10-09 16:01:39.740215386 +0000 UTC m=+0.069151342 container health_status 8227728d23f374cb9528be6ddf01dff38d8c792912a17b0d137932a18390b820 (image=38.102.83.66:5001/podified-master-centos10/openstack-iscsid:watcher_latest, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, container_name=iscsid, org.label-schema.build-date=20251007, org.label-schema.schema-version=1.0, tcib_build_tag=watcher_latest, tcib_managed=true, org.label-schema.name=CentOS Stream 10 Base Image, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': '38.102.83.66:5001/podified-master-centos10/openstack-iscsid:watcher_latest', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_id=iscsid, io.buildah.version=1.41.4)
Oct 09 16:01:39 compute-0 python3.9[113952]: ansible-ansible.builtin.stat Invoked with path=/var/lib/openstack/cacerts/nova/tls-ca-bundle.pem follow=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Oct 09 16:01:40 compute-0 python3.9[114118]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/containers/nova_compute.json follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 09 16:01:41 compute-0 python3.9[114239]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/config/containers/nova_compute.json mode=0644 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1760025700.128827-3171-100578295185416/.source.json follow=False _original_basename=nova_compute.json.j2 checksum=cc0a521e5aad4e060694b2a2142b67ab34d4d1ca backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Oct 09 16:01:41 compute-0 python3.9[114389]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/containers/nova_compute_init.json follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 09 16:01:42 compute-0 python3.9[114510]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/config/containers/nova_compute_init.json mode=0700 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1760025701.2872388-3201-275886802009199/.source.json follow=False _original_basename=nova_compute_init.json.j2 checksum=87426ce5bc30fc4bf0528016c02e1955556966e8 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Oct 09 16:01:42 compute-0 sudo[114660]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jlsdrcrwdkncxoraanwaspgemcggwaes ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760025702.7510786-3235-148696312194542/AnsiballZ_container_config_data.py'
Oct 09 16:01:42 compute-0 sudo[114660]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 09 16:01:43 compute-0 python3.9[114662]: ansible-container_config_data Invoked with config_overrides={} config_path=/var/lib/openstack/config/containers config_pattern=nova_compute_init.json debug=False
Oct 09 16:01:43 compute-0 sudo[114660]: pam_unix(sudo:session): session closed for user root
Oct 09 16:01:43 compute-0 sudo[114812]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kjcoebmzutghvznjlohlqtulkanvjjrl ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760025703.477739-3253-15299837396302/AnsiballZ_container_config_hash.py'
Oct 09 16:01:43 compute-0 sudo[114812]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 09 16:01:44 compute-0 python3.9[114814]: ansible-container_config_hash Invoked with check_mode=False config_vol_prefix=/var/lib/config-data
Oct 09 16:01:44 compute-0 sudo[114812]: pam_unix(sudo:session): session closed for user root
Oct 09 16:01:44 compute-0 sudo[114964]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xfagslofdwglrdrkvldsbelbyjojnfdv ; /usr/bin/python3 /home/zuul/.ansible/tmp/ansible-tmp-1760025704.3047414-3273-16581713327568/AnsiballZ_edpm_container_manage.py'
Oct 09 16:01:44 compute-0 sudo[114964]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 09 16:01:44 compute-0 python3[114966]: ansible-edpm_container_manage Invoked with concurrency=1 config_dir=/var/lib/openstack/config/containers config_id=edpm config_overrides={} config_patterns=nova_compute_init.json log_base_path=/var/log/containers/stdouts debug=False
Oct 09 16:01:44 compute-0 podman[115004]: 2025-10-09 16:01:44.953776443 +0000 UTC m=+0.057858813 container create 9bcd9f4459a13a02afd9a7b9e923231faae7c9aa70a3053016bee2cac7e1c7ae (image=38.102.83.66:5001/podified-master-centos10/openstack-nova-compute:watcher_latest, name=nova_compute_init, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, container_name=nova_compute_init, tcib_managed=true, io.buildah.version=1.41.4, config_id=edpm, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, tcib_build_tag=watcher_latest, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.build-date=20251007, managed_by=edpm_ansible, config_data={'image': '38.102.83.66:5001/podified-master-centos10/openstack-nova-compute:watcher_latest', 'privileged': False, 'user': 'root', 'restart': 'never', 'command': 'bash -c $* -- eval python3 /sbin/nova_statedir_ownership.py | logger -t nova_compute_init', 'net': 'none', 'security_opt': ['label=disable'], 'detach': False, 'environment': {'NOVA_STATEDIR_OWNERSHIP_SKIP': '/var/lib/nova/compute_id', '__OS_DEBUG': False}, 'volumes': ['/dev/log:/dev/log', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/_nova_secontext:/var/lib/_nova_secontext:shared,z', '/var/lib/openstack/config/nova/nova_statedir_ownership.py:/sbin/nova_statedir_ownership.py:z']})
Oct 09 16:01:44 compute-0 podman[115004]: 2025-10-09 16:01:44.92020176 +0000 UTC m=+0.024284200 image pull cd5905b6fc0830f85f5c0879ee6a298b3c8fd434f4a69d7b34a31ab89642dfe8 38.102.83.66:5001/podified-master-centos10/openstack-nova-compute:watcher_latest
Oct 09 16:01:44 compute-0 python3[114966]: ansible-edpm_container_manage PODMAN-CONTAINER-DEBUG: podman create --name nova_compute_init --conmon-pidfile /run/nova_compute_init.pid --env NOVA_STATEDIR_OWNERSHIP_SKIP=/var/lib/nova/compute_id --env __OS_DEBUG=False --label config_id=edpm --label container_name=nova_compute_init --label managed_by=edpm_ansible --label config_data={'image': '38.102.83.66:5001/podified-master-centos10/openstack-nova-compute:watcher_latest', 'privileged': False, 'user': 'root', 'restart': 'never', 'command': 'bash -c $* -- eval python3 /sbin/nova_statedir_ownership.py | logger -t nova_compute_init', 'net': 'none', 'security_opt': ['label=disable'], 'detach': False, 'environment': {'NOVA_STATEDIR_OWNERSHIP_SKIP': '/var/lib/nova/compute_id', '__OS_DEBUG': False}, 'volumes': ['/dev/log:/dev/log', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/_nova_secontext:/var/lib/_nova_secontext:shared,z', '/var/lib/openstack/config/nova/nova_statedir_ownership.py:/sbin/nova_statedir_ownership.py:z']} --log-driver journald --log-level info --network none --privileged=False --security-opt label=disable --user root --volume /dev/log:/dev/log --volume /var/lib/nova:/var/lib/nova:shared --volume /var/lib/_nova_secontext:/var/lib/_nova_secontext:shared,z --volume /var/lib/openstack/config/nova/nova_statedir_ownership.py:/sbin/nova_statedir_ownership.py:z 38.102.83.66:5001/podified-master-centos10/openstack-nova-compute:watcher_latest bash -c $* -- eval python3 /sbin/nova_statedir_ownership.py | logger -t nova_compute_init
Oct 09 16:01:45 compute-0 sudo[114964]: pam_unix(sudo:session): session closed for user root
Oct 09 16:01:45 compute-0 sudo[115205]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-yakhdtwzqtgsypkdagsedqfjzcxblfgd ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760025705.325962-3289-153434528854395/AnsiballZ_stat.py'
Oct 09 16:01:45 compute-0 sudo[115205]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 09 16:01:45 compute-0 podman[115166]: 2025-10-09 16:01:45.645338503 +0000 UTC m=+0.095206056 container health_status d615e4f24ff62fbb14be4a63e6da568ec5b22309773c1c66103b6b4de64a8a2f (image=38.102.83.66:5001/podified-master-centos10/openstack-ovn-controller:watcher_latest, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251007, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.schema-version=1.0, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': '38.102.83.66:5001/podified-master-centos10/openstack-ovn-controller:watcher_latest', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, managed_by=edpm_ansible, tcib_managed=true, config_id=ovn_controller, container_name=ovn_controller, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, tcib_build_tag=watcher_latest, io.buildah.version=1.41.4, maintainer=OpenStack Kubernetes Operator team)
Oct 09 16:01:45 compute-0 python3.9[115213]: ansible-ansible.builtin.stat Invoked with path=/etc/sysconfig/podman_drop_in follow=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Oct 09 16:01:45 compute-0 sudo[115205]: pam_unix(sudo:session): session closed for user root
Oct 09 16:01:46 compute-0 sudo[115372]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jkxreovvcmwrpxwrfcjsyxmrbjpwkpkf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760025706.3022797-3313-136946371853691/AnsiballZ_container_config_data.py'
Oct 09 16:01:46 compute-0 sudo[115372]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 09 16:01:46 compute-0 python3.9[115374]: ansible-container_config_data Invoked with config_overrides={} config_path=/var/lib/openstack/config/containers config_pattern=nova_compute.json debug=False
Oct 09 16:01:46 compute-0 sudo[115372]: pam_unix(sudo:session): session closed for user root
Oct 09 16:01:47 compute-0 sudo[115524]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-itprlvxnftyccbljtzbkfuikpuuttgzm ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760025707.1161366-3331-180036829727326/AnsiballZ_container_config_hash.py'
Oct 09 16:01:47 compute-0 sudo[115524]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 09 16:01:47 compute-0 python3.9[115526]: ansible-container_config_hash Invoked with check_mode=False config_vol_prefix=/var/lib/config-data
Oct 09 16:01:47 compute-0 sudo[115524]: pam_unix(sudo:session): session closed for user root
Oct 09 16:01:48 compute-0 sudo[115676]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-buwmqgkdzdbjjkpnocgumuklpptxcgww ; /usr/bin/python3 /home/zuul/.ansible/tmp/ansible-tmp-1760025708.1378744-3351-166919890151818/AnsiballZ_edpm_container_manage.py'
Oct 09 16:01:48 compute-0 sudo[115676]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 09 16:01:48 compute-0 python3[115678]: ansible-edpm_container_manage Invoked with concurrency=1 config_dir=/var/lib/openstack/config/containers config_id=edpm config_overrides={} config_patterns=nova_compute.json log_base_path=/var/log/containers/stdouts debug=False
Oct 09 16:01:48 compute-0 podman[115714]: 2025-10-09 16:01:48.808350846 +0000 UTC m=+0.044361915 container create b42118ce4ffd81615ea0a837a99c15cbee63b4dacfe6edacbcebdb2d724fe231 (image=38.102.83.66:5001/podified-master-centos10/openstack-nova-compute:watcher_latest, name=nova_compute, config_data={'image': '38.102.83.66:5001/podified-master-centos10/openstack-nova-compute:watcher_latest', 'privileged': True, 'user': 'nova', 'restart': 'always', 'command': 'kolla_start', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/var/lib/openstack/config/nova:/var/lib/kolla/config_files:ro', '/var/lib/openstack/cacerts/nova/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/etc/localtime:/etc/localtime:ro', '/lib/modules:/lib/modules:ro', '/dev:/dev', '/var/lib/libvirt:/var/lib/libvirt', '/run/libvirt:/run/libvirt:shared', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/etc/iscsi:/etc/iscsi:ro', '/etc/nvme:/etc/nvme', '/var/lib/openstack/config/ceph:/var/lib/kolla/config_files/ceph:ro', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro']}, tcib_build_tag=watcher_latest, org.label-schema.build-date=20251007, org.label-schema.name=CentOS Stream 10 Base Image, config_id=edpm, org.label-schema.vendor=CentOS, io.buildah.version=1.41.4, container_name=nova_compute, managed_by=edpm_ansible, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.license=GPLv2)
Oct 09 16:01:48 compute-0 podman[115714]: 2025-10-09 16:01:48.7866584 +0000 UTC m=+0.022669499 image pull cd5905b6fc0830f85f5c0879ee6a298b3c8fd434f4a69d7b34a31ab89642dfe8 38.102.83.66:5001/podified-master-centos10/openstack-nova-compute:watcher_latest
Oct 09 16:01:48 compute-0 python3[115678]: ansible-edpm_container_manage PODMAN-CONTAINER-DEBUG: podman create --name nova_compute --conmon-pidfile /run/nova_compute.pid --env KOLLA_CONFIG_STRATEGY=COPY_ALWAYS --label config_id=edpm --label container_name=nova_compute --label managed_by=edpm_ansible --label config_data={'image': '38.102.83.66:5001/podified-master-centos10/openstack-nova-compute:watcher_latest', 'privileged': True, 'user': 'nova', 'restart': 'always', 'command': 'kolla_start', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/var/lib/openstack/config/nova:/var/lib/kolla/config_files:ro', '/var/lib/openstack/cacerts/nova/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/etc/localtime:/etc/localtime:ro', '/lib/modules:/lib/modules:ro', '/dev:/dev', '/var/lib/libvirt:/var/lib/libvirt', '/run/libvirt:/run/libvirt:shared', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/etc/iscsi:/etc/iscsi:ro', '/etc/nvme:/etc/nvme', '/var/lib/openstack/config/ceph:/var/lib/kolla/config_files/ceph:ro', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro']} --log-driver journald --log-level info --network host --privileged=True --user nova --volume /var/lib/openstack/config/nova:/var/lib/kolla/config_files:ro --volume /var/lib/openstack/cacerts/nova/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z --volume /etc/localtime:/etc/localtime:ro --volume /lib/modules:/lib/modules:ro --volume /dev:/dev --volume /var/lib/libvirt:/var/lib/libvirt --volume /run/libvirt:/run/libvirt:shared --volume /var/lib/nova:/var/lib/nova:shared --volume /var/lib/iscsi:/var/lib/iscsi:z --volume /etc/multipath:/etc/multipath:z --volume /etc/multipath.conf:/etc/multipath.conf:ro --volume /etc/iscsi:/etc/iscsi:ro --volume /etc/nvme:/etc/nvme --volume /var/lib/openstack/config/ceph:/var/lib/kolla/config_files/ceph:ro --volume /etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro 38.102.83.66:5001/podified-master-centos10/openstack-nova-compute:watcher_latest kolla_start
Oct 09 16:01:48 compute-0 sudo[115676]: pam_unix(sudo:session): session closed for user root
Oct 09 16:01:49 compute-0 sudo[115902]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mxxsunubginfzqewvpyicvqxbxffbtzi ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760025709.2951481-3367-150545161113119/AnsiballZ_stat.py'
Oct 09 16:01:49 compute-0 sudo[115902]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 09 16:01:49 compute-0 python3.9[115904]: ansible-ansible.builtin.stat Invoked with path=/etc/sysconfig/podman_drop_in follow=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Oct 09 16:01:49 compute-0 sudo[115902]: pam_unix(sudo:session): session closed for user root
Oct 09 16:01:50 compute-0 sudo[116056]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qavvyafqdovkyufiktetobaipnncqzzk ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760025710.0984464-3385-53670724471079/AnsiballZ_file.py'
Oct 09 16:01:50 compute-0 sudo[116056]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 09 16:01:50 compute-0 python3.9[116058]: ansible-file Invoked with path=/etc/systemd/system/edpm_nova_compute.requires state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 09 16:01:50 compute-0 sudo[116056]: pam_unix(sudo:session): session closed for user root
Oct 09 16:01:50 compute-0 sudo[116207]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-lmqyhcamccbeublpmhkqohrzwfpvojwh ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760025710.5807557-3385-113440592585401/AnsiballZ_copy.py'
Oct 09 16:01:50 compute-0 sudo[116207]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 09 16:01:51 compute-0 python3.9[116209]: ansible-copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1760025710.5807557-3385-113440592585401/source dest=/etc/systemd/system/edpm_nova_compute.service mode=0644 owner=root group=root backup=False force=True remote_src=False follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 09 16:01:51 compute-0 sudo[116207]: pam_unix(sudo:session): session closed for user root
Oct 09 16:01:51 compute-0 sudo[116283]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wkhxudfpjtgnbgkjydkntmxztbiuhasr ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760025710.5807557-3385-113440592585401/AnsiballZ_systemd.py'
Oct 09 16:01:51 compute-0 sudo[116283]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 09 16:01:51 compute-0 python3.9[116285]: ansible-systemd Invoked with daemon_reload=True daemon_reexec=False scope=system no_block=False name=None state=None enabled=None force=None masked=None
Oct 09 16:01:51 compute-0 systemd[1]: Reloading.
Oct 09 16:01:51 compute-0 systemd-rc-local-generator[116309]: /etc/rc.d/rc.local is not marked executable, skipping.
Oct 09 16:01:51 compute-0 systemd-sysv-generator[116316]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct 09 16:01:52 compute-0 sudo[116283]: pam_unix(sudo:session): session closed for user root
Oct 09 16:01:52 compute-0 sudo[116394]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rvudsukabaipunjbqqkbxhncsoerehak ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760025710.5807557-3385-113440592585401/AnsiballZ_systemd.py'
Oct 09 16:01:52 compute-0 sudo[116394]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 09 16:01:52 compute-0 python3.9[116396]: ansible-systemd Invoked with state=restarted name=edpm_nova_compute.service enabled=True daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Oct 09 16:01:52 compute-0 systemd[1]: Reloading.
Oct 09 16:01:52 compute-0 systemd-rc-local-generator[116427]: /etc/rc.d/rc.local is not marked executable, skipping.
Oct 09 16:01:52 compute-0 systemd-sysv-generator[116431]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct 09 16:01:53 compute-0 systemd[1]: Starting nova_compute container...
Oct 09 16:01:53 compute-0 systemd[1]: Started libcrun container.
Oct 09 16:01:53 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a3553245eaf8532ca6ca9d7e0bdbde954830234175ed80db890cde684d1b7ad7/merged/etc/multipath supports timestamps until 2038 (0x7fffffff)
Oct 09 16:01:53 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a3553245eaf8532ca6ca9d7e0bdbde954830234175ed80db890cde684d1b7ad7/merged/etc/nvme supports timestamps until 2038 (0x7fffffff)
Oct 09 16:01:53 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a3553245eaf8532ca6ca9d7e0bdbde954830234175ed80db890cde684d1b7ad7/merged/var/lib/iscsi supports timestamps until 2038 (0x7fffffff)
Oct 09 16:01:53 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a3553245eaf8532ca6ca9d7e0bdbde954830234175ed80db890cde684d1b7ad7/merged/var/lib/nova supports timestamps until 2038 (0x7fffffff)
Oct 09 16:01:53 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a3553245eaf8532ca6ca9d7e0bdbde954830234175ed80db890cde684d1b7ad7/merged/var/lib/libvirt supports timestamps until 2038 (0x7fffffff)
Oct 09 16:01:53 compute-0 podman[116436]: 2025-10-09 16:01:53.226679322 +0000 UTC m=+0.146570721 container init b42118ce4ffd81615ea0a837a99c15cbee63b4dacfe6edacbcebdb2d724fe231 (image=38.102.83.66:5001/podified-master-centos10/openstack-nova-compute:watcher_latest, name=nova_compute, org.label-schema.vendor=CentOS, container_name=nova_compute, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 10 Base Image, config_id=edpm, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251007, tcib_build_tag=watcher_latest, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'image': '38.102.83.66:5001/podified-master-centos10/openstack-nova-compute:watcher_latest', 'privileged': True, 'user': 'nova', 'restart': 'always', 'command': 'kolla_start', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/var/lib/openstack/config/nova:/var/lib/kolla/config_files:ro', '/var/lib/openstack/cacerts/nova/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/etc/localtime:/etc/localtime:ro', '/lib/modules:/lib/modules:ro', '/dev:/dev', '/var/lib/libvirt:/var/lib/libvirt', '/run/libvirt:/run/libvirt:shared', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/etc/iscsi:/etc/iscsi:ro', '/etc/nvme:/etc/nvme', '/var/lib/openstack/config/ceph:/var/lib/kolla/config_files/ceph:ro', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro']}, io.buildah.version=1.41.4)
Oct 09 16:01:53 compute-0 podman[116436]: 2025-10-09 16:01:53.23765341 +0000 UTC m=+0.157544789 container start b42118ce4ffd81615ea0a837a99c15cbee63b4dacfe6edacbcebdb2d724fe231 (image=38.102.83.66:5001/podified-master-centos10/openstack-nova-compute:watcher_latest, name=nova_compute, tcib_build_tag=watcher_latest, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_data={'image': '38.102.83.66:5001/podified-master-centos10/openstack-nova-compute:watcher_latest', 'privileged': True, 'user': 'nova', 'restart': 'always', 'command': 'kolla_start', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/var/lib/openstack/config/nova:/var/lib/kolla/config_files:ro', '/var/lib/openstack/cacerts/nova/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/etc/localtime:/etc/localtime:ro', '/lib/modules:/lib/modules:ro', '/dev:/dev', '/var/lib/libvirt:/var/lib/libvirt', '/run/libvirt:/run/libvirt:shared', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/etc/iscsi:/etc/iscsi:ro', '/etc/nvme:/etc/nvme', '/var/lib/openstack/config/ceph:/var/lib/kolla/config_files/ceph:ro', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro']}, config_id=edpm, io.buildah.version=1.41.4, managed_by=edpm_ansible, org.label-schema.build-date=20251007, container_name=nova_compute, org.label-schema.name=CentOS Stream 10 Base Image, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS)
Oct 09 16:01:53 compute-0 podman[116436]: nova_compute
Oct 09 16:01:53 compute-0 nova_compute[116452]: + sudo -E kolla_set_configs
Oct 09 16:01:53 compute-0 systemd[1]: Started nova_compute container.
Oct 09 16:01:53 compute-0 sudo[116394]: pam_unix(sudo:session): session closed for user root
Oct 09 16:01:53 compute-0 nova_compute[116452]: INFO:__main__:Loading config file at /var/lib/kolla/config_files/config.json
Oct 09 16:01:53 compute-0 nova_compute[116452]: INFO:__main__:Validating config file
Oct 09 16:01:53 compute-0 nova_compute[116452]: INFO:__main__:Kolla config strategy set to: COPY_ALWAYS
Oct 09 16:01:53 compute-0 nova_compute[116452]: INFO:__main__:Copying service configuration files
Oct 09 16:01:53 compute-0 nova_compute[116452]: INFO:__main__:Deleting /etc/nova/nova.conf
Oct 09 16:01:53 compute-0 nova_compute[116452]: INFO:__main__:Copying /var/lib/kolla/config_files/nova-blank.conf to /etc/nova/nova.conf
Oct 09 16:01:53 compute-0 nova_compute[116452]: INFO:__main__:Setting permission for /etc/nova/nova.conf
Oct 09 16:01:53 compute-0 nova_compute[116452]: INFO:__main__:Copying /var/lib/kolla/config_files/01-nova.conf to /etc/nova/nova.conf.d/01-nova.conf
Oct 09 16:01:53 compute-0 nova_compute[116452]: INFO:__main__:Setting permission for /etc/nova/nova.conf.d/01-nova.conf
Oct 09 16:01:53 compute-0 nova_compute[116452]: INFO:__main__:Copying /var/lib/kolla/config_files/25-nova-extra.conf to /etc/nova/nova.conf.d/25-nova-extra.conf
Oct 09 16:01:53 compute-0 nova_compute[116452]: INFO:__main__:Setting permission for /etc/nova/nova.conf.d/25-nova-extra.conf
Oct 09 16:01:53 compute-0 nova_compute[116452]: INFO:__main__:Copying /var/lib/kolla/config_files/nova-blank.conf to /etc/nova/nova.conf.d/nova-blank.conf
Oct 09 16:01:53 compute-0 nova_compute[116452]: INFO:__main__:Setting permission for /etc/nova/nova.conf.d/nova-blank.conf
Oct 09 16:01:53 compute-0 nova_compute[116452]: INFO:__main__:Copying /var/lib/kolla/config_files/02-nova-host-specific.conf to /etc/nova/nova.conf.d/02-nova-host-specific.conf
Oct 09 16:01:53 compute-0 nova_compute[116452]: INFO:__main__:Setting permission for /etc/nova/nova.conf.d/02-nova-host-specific.conf
Oct 09 16:01:53 compute-0 nova_compute[116452]: INFO:__main__:Deleting /etc/ceph
Oct 09 16:01:53 compute-0 nova_compute[116452]: INFO:__main__:Creating directory /etc/ceph
Oct 09 16:01:53 compute-0 nova_compute[116452]: INFO:__main__:Setting permission for /etc/ceph
Oct 09 16:01:53 compute-0 nova_compute[116452]: INFO:__main__:Copying /var/lib/kolla/config_files/ssh-privatekey to /var/lib/nova/.ssh/ssh-privatekey
Oct 09 16:01:53 compute-0 nova_compute[116452]: INFO:__main__:Setting permission for /var/lib/nova/.ssh/ssh-privatekey
Oct 09 16:01:53 compute-0 nova_compute[116452]: INFO:__main__:Copying /var/lib/kolla/config_files/ssh-config to /var/lib/nova/.ssh/config
Oct 09 16:01:53 compute-0 nova_compute[116452]: INFO:__main__:Setting permission for /var/lib/nova/.ssh/config
Oct 09 16:01:53 compute-0 nova_compute[116452]: INFO:__main__:Writing out command to execute
Oct 09 16:01:53 compute-0 nova_compute[116452]: INFO:__main__:Setting permission for /var/lib/nova/.ssh/
Oct 09 16:01:53 compute-0 nova_compute[116452]: INFO:__main__:Setting permission for /var/lib/nova/.ssh/ssh-privatekey
Oct 09 16:01:53 compute-0 nova_compute[116452]: INFO:__main__:Setting permission for /var/lib/nova/.ssh/config
Oct 09 16:01:53 compute-0 nova_compute[116452]: ++ cat /run_command
Oct 09 16:01:53 compute-0 nova_compute[116452]: + CMD=nova-compute
Oct 09 16:01:53 compute-0 nova_compute[116452]: + ARGS=
Oct 09 16:01:53 compute-0 nova_compute[116452]: + sudo kolla_copy_cacerts
Oct 09 16:01:53 compute-0 nova_compute[116452]: + [[ ! -n '' ]]
Oct 09 16:01:53 compute-0 nova_compute[116452]: + . kolla_extend_start
Oct 09 16:01:53 compute-0 nova_compute[116452]: + echo 'Running command: '\''nova-compute'\'''
Oct 09 16:01:53 compute-0 nova_compute[116452]: Running command: 'nova-compute'
Oct 09 16:01:53 compute-0 nova_compute[116452]: + umask 0022
Oct 09 16:01:53 compute-0 nova_compute[116452]: + exec nova-compute
Oct 09 16:01:54 compute-0 python3.9[116613]: ansible-ansible.builtin.stat Invoked with path=/etc/systemd/system/edpm_nova_nvme_cleaner_healthcheck.service follow=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Oct 09 16:01:55 compute-0 python3.9[116763]: ansible-ansible.builtin.stat Invoked with path=/etc/systemd/system/edpm_nova_nvme_cleaner.service follow=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Oct 09 16:01:55 compute-0 python3.9[116913]: ansible-ansible.builtin.stat Invoked with path=/etc/systemd/system/edpm_nova_nvme_cleaner.service.requires follow=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Oct 09 16:01:56 compute-0 nova_compute[116452]: 2025-10-09 16:01:56.173 2 DEBUG os_vif [-] Loaded VIF plugin class '<class 'vif_plug_linux_bridge.linux_bridge.LinuxBridgePlugin'>' with name 'linux_bridge' initialize /usr/lib/python3.12/site-packages/os_vif/__init__.py:44
Oct 09 16:01:56 compute-0 nova_compute[116452]: 2025-10-09 16:01:56.174 2 DEBUG os_vif [-] Loaded VIF plugin class '<class 'vif_plug_noop.noop.NoOpPlugin'>' with name 'noop' initialize /usr/lib/python3.12/site-packages/os_vif/__init__.py:44
Oct 09 16:01:56 compute-0 nova_compute[116452]: 2025-10-09 16:01:56.174 2 DEBUG os_vif [-] Loaded VIF plugin class '<class 'vif_plug_ovs.ovs.OvsPlugin'>' with name 'ovs' initialize /usr/lib/python3.12/site-packages/os_vif/__init__.py:44
Oct 09 16:01:56 compute-0 nova_compute[116452]: 2025-10-09 16:01:56.174 2 INFO os_vif [-] Loaded VIF plugins: linux_bridge, noop, ovs
Oct 09 16:01:56 compute-0 sudo[117076]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ztrpgrxznawqaxoaqkklwmouecxlreew ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760025716.0890417-3505-192190548603123/AnsiballZ_podman_container.py'
Oct 09 16:01:56 compute-0 sudo[117076]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 09 16:01:56 compute-0 podman[117039]: 2025-10-09 16:01:56.356190194 +0000 UTC m=+0.056620303 container health_status 270830b5167cafbab735d8118980f26987e673cd0fa464dd2202394830bcca66 (image=38.102.83.66:5001/podified-master-centos10/openstack-multipathd:watcher_latest, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, tcib_build_tag=watcher_latest, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': '38.102.83.66:5001/podified-master-centos10/openstack-multipathd:watcher_latest', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, io.buildah.version=1.41.4, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.build-date=20251007, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_id=multipathd, container_name=multipathd)
Oct 09 16:01:56 compute-0 nova_compute[116452]: 2025-10-09 16:01:56.360 2 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): grep -F node.session.scan /sbin/iscsiadm execute /usr/lib/python3.12/site-packages/oslo_concurrency/processutils.py:349
Oct 09 16:01:56 compute-0 nova_compute[116452]: 2025-10-09 16:01:56.379 2 DEBUG oslo_concurrency.processutils [-] CMD "grep -F node.session.scan /sbin/iscsiadm" returned: 0 in 0.019s execute /usr/lib/python3.12/site-packages/oslo_concurrency/processutils.py:372
Oct 09 16:01:56 compute-0 nova_compute[116452]: 2025-10-09 16:01:56.452 2 INFO oslo_service.periodic_task [-] Skipping periodic task _heal_instance_info_cache because its interval is negative
Oct 09 16:01:56 compute-0 nova_compute[116452]: 2025-10-09 16:01:56.454 2 WARNING oslo_config.cfg [None req-6dbb48b0-74b6-4d67-a093-740292f25c55 - - - - - -] Deprecated: Option "heartbeat_in_pthread" from group "oslo_messaging_rabbit" is deprecated for removal (The option is related to Eventlet which will be removed. In addition this has never worked as expected with services using eventlet for core service framework.).  Its value may be silently ignored in the future.
Oct 09 16:01:56 compute-0 python3.9[117086]: ansible-containers.podman.podman_container Invoked with name=nova_nvme_cleaner state=absent executable=podman detach=True debug=False force_restart=False force_delete=True generate_systemd={} image_strict=False recreate=False image=None annotation=None arch=None attach=None authfile=None blkio_weight=None blkio_weight_device=None cap_add=None cap_drop=None cgroup_conf=None cgroup_parent=None cgroupns=None cgroups=None chrootdirs=None cidfile=None cmd_args=None conmon_pidfile=None command=None cpu_period=None cpu_quota=None cpu_rt_period=None cpu_rt_runtime=None cpu_shares=None cpus=None cpuset_cpus=None cpuset_mems=None decryption_key=None delete_depend=None delete_time=None delete_volumes=None detach_keys=None device=None device_cgroup_rule=None device_read_bps=None device_read_iops=None device_write_bps=None device_write_iops=None dns=None dns_option=None dns_search=None entrypoint=None env=None env_file=None env_host=None env_merge=None etc_hosts=None expose=None gidmap=None gpus=None group_add=None group_entry=None healthcheck=None healthcheck_interval=None healthcheck_retries=None healthcheck_start_period=None health_startup_cmd=None health_startup_interval=None health_startup_retries=None health_startup_success=None health_startup_timeout=None healthcheck_timeout=None healthcheck_failure_action=None hooks_dir=None hostname=None hostuser=None http_proxy=None image_volume=None init=None init_ctr=None init_path=None interactive=None ip=None ip6=None ipc=None kernel_memory=None label=None label_file=None log_driver=None log_level=None log_opt=None mac_address=None memory=None memory_reservation=None memory_swap=None memory_swappiness=None mount=None network=None network_aliases=None no_healthcheck=None no_hosts=None oom_kill_disable=None oom_score_adj=None os=None passwd=None passwd_entry=None personality=None pid=None pid_file=None pids_limit=None platform=None pod=None pod_id_file=None preserve_fd=None preserve_fds=None privileged=None publish=None publish_all=None pull=None quadlet_dir=None quadlet_filename=None quadlet_file_mode=None quadlet_options=None rdt_class=None read_only=None read_only_tmpfs=None requires=None restart_policy=None restart_time=None retry=None retry_delay=None rm=None rmi=None rootfs=None seccomp_policy=None secrets=NOT_LOGGING_PARAMETER sdnotify=None security_opt=None shm_size=None shm_size_systemd=None sig_proxy=None stop_signal=None stop_timeout=None stop_time=None subgidname=None subuidname=None sysctl=None systemd=None timeout=None timezone=None tls_verify=None tmpfs=None tty=None uidmap=None ulimit=None umask=None unsetenv=None unsetenv_all=None user=None userns=None uts=None variant=None volume=None volumes_from=None workdir=None
Oct 09 16:01:56 compute-0 sudo[117076]: pam_unix(sudo:session): session closed for user root
Oct 09 16:01:57 compute-0 sudo[117262]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bybhxknkewfslyggurknzvwjgabsljqb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760025717.169919-3521-71887631599136/AnsiballZ_systemd.py'
Oct 09 16:01:57 compute-0 sudo[117262]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 09 16:01:57 compute-0 nova_compute[116452]: 2025-10-09 16:01:57.530 2 INFO nova.virt.driver [None req-6dbb48b0-74b6-4d67-a093-740292f25c55 - - - - - -] Loading compute driver 'libvirt.LibvirtDriver'
Oct 09 16:01:57 compute-0 rsyslogd[1282]: imjournal: journal files changed, reloading...  [v8.2506.0-2.el9 try https://www.rsyslog.com/e/0 ]
Oct 09 16:01:57 compute-0 python3.9[117264]: ansible-ansible.builtin.systemd Invoked with name=edpm_nova_compute.service state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Oct 09 16:01:57 compute-0 systemd[1]: Stopping nova_compute container...
Oct 09 16:01:57 compute-0 systemd[1]: libpod-b42118ce4ffd81615ea0a837a99c15cbee63b4dacfe6edacbcebdb2d724fe231.scope: Deactivated successfully.
Oct 09 16:01:57 compute-0 systemd[1]: libpod-b42118ce4ffd81615ea0a837a99c15cbee63b4dacfe6edacbcebdb2d724fe231.scope: Consumed 2.615s CPU time.
Oct 09 16:01:57 compute-0 podman[117271]: 2025-10-09 16:01:57.808827664 +0000 UTC m=+0.053487434 container died b42118ce4ffd81615ea0a837a99c15cbee63b4dacfe6edacbcebdb2d724fe231 (image=38.102.83.66:5001/podified-master-centos10/openstack-nova-compute:watcher_latest, name=nova_compute, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=watcher_latest, tcib_managed=true, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.4, managed_by=edpm_ansible, org.label-schema.build-date=20251007, config_data={'image': '38.102.83.66:5001/podified-master-centos10/openstack-nova-compute:watcher_latest', 'privileged': True, 'user': 'nova', 'restart': 'always', 'command': 'kolla_start', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/var/lib/openstack/config/nova:/var/lib/kolla/config_files:ro', '/var/lib/openstack/cacerts/nova/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/etc/localtime:/etc/localtime:ro', '/lib/modules:/lib/modules:ro', '/dev:/dev', '/var/lib/libvirt:/var/lib/libvirt', '/run/libvirt:/run/libvirt:shared', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/etc/iscsi:/etc/iscsi:ro', '/etc/nvme:/etc/nvme', '/var/lib/openstack/config/ceph:/var/lib/kolla/config_files/ceph:ro', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro']}, container_name=nova_compute, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 10 Base Image, config_id=edpm)
Oct 09 16:01:57 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-b42118ce4ffd81615ea0a837a99c15cbee63b4dacfe6edacbcebdb2d724fe231-userdata-shm.mount: Deactivated successfully.
Oct 09 16:01:57 compute-0 systemd[1]: var-lib-containers-storage-overlay-a3553245eaf8532ca6ca9d7e0bdbde954830234175ed80db890cde684d1b7ad7-merged.mount: Deactivated successfully.
Oct 09 16:01:57 compute-0 podman[117271]: 2025-10-09 16:01:57.871119857 +0000 UTC m=+0.115779627 container cleanup b42118ce4ffd81615ea0a837a99c15cbee63b4dacfe6edacbcebdb2d724fe231 (image=38.102.83.66:5001/podified-master-centos10/openstack-nova-compute:watcher_latest, name=nova_compute, config_id=edpm, io.buildah.version=1.41.4, managed_by=edpm_ansible, org.label-schema.build-date=20251007, tcib_managed=true, config_data={'image': '38.102.83.66:5001/podified-master-centos10/openstack-nova-compute:watcher_latest', 'privileged': True, 'user': 'nova', 'restart': 'always', 'command': 'kolla_start', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/var/lib/openstack/config/nova:/var/lib/kolla/config_files:ro', '/var/lib/openstack/cacerts/nova/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/etc/localtime:/etc/localtime:ro', '/lib/modules:/lib/modules:ro', '/dev:/dev', '/var/lib/libvirt:/var/lib/libvirt', '/run/libvirt:/run/libvirt:shared', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/etc/iscsi:/etc/iscsi:ro', '/etc/nvme:/etc/nvme', '/var/lib/openstack/config/ceph:/var/lib/kolla/config_files/ceph:ro', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro']}, container_name=nova_compute, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=watcher_latest, org.label-schema.vendor=CentOS)
Oct 09 16:01:57 compute-0 podman[117271]: nova_compute
Oct 09 16:01:57 compute-0 podman[117302]: nova_compute
Oct 09 16:01:57 compute-0 systemd[1]: edpm_nova_compute.service: Deactivated successfully.
Oct 09 16:01:57 compute-0 systemd[1]: Stopped nova_compute container.
Oct 09 16:01:57 compute-0 systemd[1]: Starting nova_compute container...
Oct 09 16:01:58 compute-0 systemd[1]: Started libcrun container.
Oct 09 16:01:58 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a3553245eaf8532ca6ca9d7e0bdbde954830234175ed80db890cde684d1b7ad7/merged/etc/multipath supports timestamps until 2038 (0x7fffffff)
Oct 09 16:01:58 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a3553245eaf8532ca6ca9d7e0bdbde954830234175ed80db890cde684d1b7ad7/merged/etc/nvme supports timestamps until 2038 (0x7fffffff)
Oct 09 16:01:58 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a3553245eaf8532ca6ca9d7e0bdbde954830234175ed80db890cde684d1b7ad7/merged/var/lib/iscsi supports timestamps until 2038 (0x7fffffff)
Oct 09 16:01:58 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a3553245eaf8532ca6ca9d7e0bdbde954830234175ed80db890cde684d1b7ad7/merged/var/lib/nova supports timestamps until 2038 (0x7fffffff)
Oct 09 16:01:58 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a3553245eaf8532ca6ca9d7e0bdbde954830234175ed80db890cde684d1b7ad7/merged/var/lib/libvirt supports timestamps until 2038 (0x7fffffff)
Oct 09 16:01:58 compute-0 podman[117315]: 2025-10-09 16:01:58.031873868 +0000 UTC m=+0.076260256 container init b42118ce4ffd81615ea0a837a99c15cbee63b4dacfe6edacbcebdb2d724fe231 (image=38.102.83.66:5001/podified-master-centos10/openstack-nova-compute:watcher_latest, name=nova_compute, org.label-schema.schema-version=1.0, io.buildah.version=1.41.4, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_build_tag=watcher_latest, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.build-date=20251007, org.label-schema.vendor=CentOS, config_data={'image': '38.102.83.66:5001/podified-master-centos10/openstack-nova-compute:watcher_latest', 'privileged': True, 'user': 'nova', 'restart': 'always', 'command': 'kolla_start', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/var/lib/openstack/config/nova:/var/lib/kolla/config_files:ro', '/var/lib/openstack/cacerts/nova/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/etc/localtime:/etc/localtime:ro', '/lib/modules:/lib/modules:ro', '/dev:/dev', '/var/lib/libvirt:/var/lib/libvirt', '/run/libvirt:/run/libvirt:shared', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/etc/iscsi:/etc/iscsi:ro', '/etc/nvme:/etc/nvme', '/var/lib/openstack/config/ceph:/var/lib/kolla/config_files/ceph:ro', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro']}, config_id=edpm, container_name=nova_compute, org.label-schema.name=CentOS Stream 10 Base Image)
Oct 09 16:01:58 compute-0 podman[117315]: 2025-10-09 16:01:58.037175265 +0000 UTC m=+0.081561623 container start b42118ce4ffd81615ea0a837a99c15cbee63b4dacfe6edacbcebdb2d724fe231 (image=38.102.83.66:5001/podified-master-centos10/openstack-nova-compute:watcher_latest, name=nova_compute, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, config_data={'image': '38.102.83.66:5001/podified-master-centos10/openstack-nova-compute:watcher_latest', 'privileged': True, 'user': 'nova', 'restart': 'always', 'command': 'kolla_start', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/var/lib/openstack/config/nova:/var/lib/kolla/config_files:ro', '/var/lib/openstack/cacerts/nova/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/etc/localtime:/etc/localtime:ro', '/lib/modules:/lib/modules:ro', '/dev:/dev', '/var/lib/libvirt:/var/lib/libvirt', '/run/libvirt:/run/libvirt:shared', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/etc/iscsi:/etc/iscsi:ro', '/etc/nvme:/etc/nvme', '/var/lib/openstack/config/ceph:/var/lib/kolla/config_files/ceph:ro', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro']}, config_id=edpm, container_name=nova_compute, org.label-schema.name=CentOS Stream 10 Base Image, io.buildah.version=1.41.4, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251007, org.label-schema.license=GPLv2, tcib_build_tag=watcher_latest, org.label-schema.schema-version=1.0, tcib_managed=true)
Oct 09 16:01:58 compute-0 podman[117315]: nova_compute
Oct 09 16:01:58 compute-0 nova_compute[117331]: + sudo -E kolla_set_configs
Oct 09 16:01:58 compute-0 systemd[1]: Started nova_compute container.
Oct 09 16:01:58 compute-0 sudo[117262]: pam_unix(sudo:session): session closed for user root
Oct 09 16:01:58 compute-0 nova_compute[117331]: INFO:__main__:Loading config file at /var/lib/kolla/config_files/config.json
Oct 09 16:01:58 compute-0 nova_compute[117331]: INFO:__main__:Validating config file
Oct 09 16:01:58 compute-0 nova_compute[117331]: INFO:__main__:Kolla config strategy set to: COPY_ALWAYS
Oct 09 16:01:58 compute-0 nova_compute[117331]: INFO:__main__:Copying service configuration files
Oct 09 16:01:58 compute-0 nova_compute[117331]: INFO:__main__:Deleting /etc/nova/nova.conf
Oct 09 16:01:58 compute-0 nova_compute[117331]: INFO:__main__:Copying /var/lib/kolla/config_files/nova-blank.conf to /etc/nova/nova.conf
Oct 09 16:01:58 compute-0 nova_compute[117331]: INFO:__main__:Setting permission for /etc/nova/nova.conf
Oct 09 16:01:58 compute-0 nova_compute[117331]: INFO:__main__:Deleting /etc/nova/nova.conf.d/01-nova.conf
Oct 09 16:01:58 compute-0 nova_compute[117331]: INFO:__main__:Copying /var/lib/kolla/config_files/01-nova.conf to /etc/nova/nova.conf.d/01-nova.conf
Oct 09 16:01:58 compute-0 nova_compute[117331]: INFO:__main__:Setting permission for /etc/nova/nova.conf.d/01-nova.conf
Oct 09 16:01:58 compute-0 nova_compute[117331]: INFO:__main__:Deleting /etc/nova/nova.conf.d/25-nova-extra.conf
Oct 09 16:01:58 compute-0 nova_compute[117331]: INFO:__main__:Copying /var/lib/kolla/config_files/25-nova-extra.conf to /etc/nova/nova.conf.d/25-nova-extra.conf
Oct 09 16:01:58 compute-0 nova_compute[117331]: INFO:__main__:Setting permission for /etc/nova/nova.conf.d/25-nova-extra.conf
Oct 09 16:01:58 compute-0 nova_compute[117331]: INFO:__main__:Deleting /etc/nova/nova.conf.d/nova-blank.conf
Oct 09 16:01:58 compute-0 nova_compute[117331]: INFO:__main__:Copying /var/lib/kolla/config_files/nova-blank.conf to /etc/nova/nova.conf.d/nova-blank.conf
Oct 09 16:01:58 compute-0 nova_compute[117331]: INFO:__main__:Setting permission for /etc/nova/nova.conf.d/nova-blank.conf
Oct 09 16:01:58 compute-0 nova_compute[117331]: INFO:__main__:Deleting /etc/nova/nova.conf.d/02-nova-host-specific.conf
Oct 09 16:01:58 compute-0 nova_compute[117331]: INFO:__main__:Copying /var/lib/kolla/config_files/02-nova-host-specific.conf to /etc/nova/nova.conf.d/02-nova-host-specific.conf
Oct 09 16:01:58 compute-0 nova_compute[117331]: INFO:__main__:Setting permission for /etc/nova/nova.conf.d/02-nova-host-specific.conf
Oct 09 16:01:58 compute-0 nova_compute[117331]: INFO:__main__:Deleting /etc/ceph
Oct 09 16:01:58 compute-0 nova_compute[117331]: INFO:__main__:Creating directory /etc/ceph
Oct 09 16:01:58 compute-0 nova_compute[117331]: INFO:__main__:Setting permission for /etc/ceph
Oct 09 16:01:58 compute-0 nova_compute[117331]: INFO:__main__:Deleting /var/lib/nova/.ssh/ssh-privatekey
Oct 09 16:01:58 compute-0 nova_compute[117331]: INFO:__main__:Copying /var/lib/kolla/config_files/ssh-privatekey to /var/lib/nova/.ssh/ssh-privatekey
Oct 09 16:01:58 compute-0 nova_compute[117331]: INFO:__main__:Setting permission for /var/lib/nova/.ssh/ssh-privatekey
Oct 09 16:01:58 compute-0 nova_compute[117331]: INFO:__main__:Deleting /var/lib/nova/.ssh/config
Oct 09 16:01:58 compute-0 nova_compute[117331]: INFO:__main__:Copying /var/lib/kolla/config_files/ssh-config to /var/lib/nova/.ssh/config
Oct 09 16:01:58 compute-0 nova_compute[117331]: INFO:__main__:Setting permission for /var/lib/nova/.ssh/config
Oct 09 16:01:58 compute-0 nova_compute[117331]: INFO:__main__:Writing out command to execute
Oct 09 16:01:58 compute-0 nova_compute[117331]: INFO:__main__:Setting permission for /var/lib/nova/.ssh/
Oct 09 16:01:58 compute-0 nova_compute[117331]: INFO:__main__:Setting permission for /var/lib/nova/.ssh/ssh-privatekey
Oct 09 16:01:58 compute-0 nova_compute[117331]: INFO:__main__:Setting permission for /var/lib/nova/.ssh/config
Oct 09 16:01:58 compute-0 nova_compute[117331]: ++ cat /run_command
Oct 09 16:01:58 compute-0 nova_compute[117331]: + CMD=nova-compute
Oct 09 16:01:58 compute-0 nova_compute[117331]: + ARGS=
Oct 09 16:01:58 compute-0 nova_compute[117331]: + sudo kolla_copy_cacerts
Oct 09 16:01:58 compute-0 nova_compute[117331]: + [[ ! -n '' ]]
Oct 09 16:01:58 compute-0 nova_compute[117331]: + . kolla_extend_start
Oct 09 16:01:58 compute-0 nova_compute[117331]: Running command: 'nova-compute'
Oct 09 16:01:58 compute-0 nova_compute[117331]: + echo 'Running command: '\''nova-compute'\'''
Oct 09 16:01:58 compute-0 nova_compute[117331]: + umask 0022
Oct 09 16:01:58 compute-0 nova_compute[117331]: + exec nova-compute
Oct 09 16:01:58 compute-0 sudo[117492]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hmxzvhalwrzhrvueeedocejeojkaklvc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760025718.3543398-3539-148357482983785/AnsiballZ_podman_container.py'
Oct 09 16:01:58 compute-0 sudo[117492]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 09 16:01:58 compute-0 python3.9[117494]: ansible-containers.podman.podman_container Invoked with name=nova_compute_init state=started executable=podman detach=True debug=False force_restart=False force_delete=True generate_systemd={} image_strict=False recreate=False image=None annotation=None arch=None attach=None authfile=None blkio_weight=None blkio_weight_device=None cap_add=None cap_drop=None cgroup_conf=None cgroup_parent=None cgroupns=None cgroups=None chrootdirs=None cidfile=None cmd_args=None conmon_pidfile=None command=None cpu_period=None cpu_quota=None cpu_rt_period=None cpu_rt_runtime=None cpu_shares=None cpus=None cpuset_cpus=None cpuset_mems=None decryption_key=None delete_depend=None delete_time=None delete_volumes=None detach_keys=None device=None device_cgroup_rule=None device_read_bps=None device_read_iops=None device_write_bps=None device_write_iops=None dns=None dns_option=None dns_search=None entrypoint=None env=None env_file=None env_host=None env_merge=None etc_hosts=None expose=None gidmap=None gpus=None group_add=None group_entry=None healthcheck=None healthcheck_interval=None healthcheck_retries=None healthcheck_start_period=None health_startup_cmd=None health_startup_interval=None health_startup_retries=None health_startup_success=None health_startup_timeout=None healthcheck_timeout=None healthcheck_failure_action=None hooks_dir=None hostname=None hostuser=None http_proxy=None image_volume=None init=None init_ctr=None init_path=None interactive=None ip=None ip6=None ipc=None kernel_memory=None label=None label_file=None log_driver=None log_level=None log_opt=None mac_address=None memory=None memory_reservation=None memory_swap=None memory_swappiness=None mount=None network=None network_aliases=None no_healthcheck=None no_hosts=None oom_kill_disable=None oom_score_adj=None os=None passwd=None passwd_entry=None personality=None pid=None pid_file=None pids_limit=None platform=None pod=None pod_id_file=None preserve_fd=None preserve_fds=None privileged=None publish=None publish_all=None pull=None quadlet_dir=None quadlet_filename=None quadlet_file_mode=None quadlet_options=None rdt_class=None read_only=None read_only_tmpfs=None requires=None restart_policy=None restart_time=None retry=None retry_delay=None rm=None rmi=None rootfs=None seccomp_policy=None secrets=NOT_LOGGING_PARAMETER sdnotify=None security_opt=None shm_size=None shm_size_systemd=None sig_proxy=None stop_signal=None stop_timeout=None stop_time=None subgidname=None subuidname=None sysctl=None systemd=None timeout=None timezone=None tls_verify=None tmpfs=None tty=None uidmap=None ulimit=None umask=None unsetenv=None unsetenv_all=None user=None userns=None uts=None variant=None volume=None volumes_from=None workdir=None
Oct 09 16:01:59 compute-0 systemd[1]: Started libpod-conmon-9bcd9f4459a13a02afd9a7b9e923231faae7c9aa70a3053016bee2cac7e1c7ae.scope.
Oct 09 16:01:59 compute-0 systemd[1]: Started libcrun container.
Oct 09 16:01:59 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8f775ef9caa6e8bab190f33f4f315509d53a1ba6aab5aea9df39f155463c359b/merged/usr/sbin/nova_statedir_ownership.py supports timestamps until 2038 (0x7fffffff)
Oct 09 16:01:59 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8f775ef9caa6e8bab190f33f4f315509d53a1ba6aab5aea9df39f155463c359b/merged/var/lib/nova supports timestamps until 2038 (0x7fffffff)
Oct 09 16:01:59 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8f775ef9caa6e8bab190f33f4f315509d53a1ba6aab5aea9df39f155463c359b/merged/var/lib/_nova_secontext supports timestamps until 2038 (0x7fffffff)
Oct 09 16:01:59 compute-0 podman[117516]: 2025-10-09 16:01:59.070524259 +0000 UTC m=+0.116960795 container init 9bcd9f4459a13a02afd9a7b9e923231faae7c9aa70a3053016bee2cac7e1c7ae (image=38.102.83.66:5001/podified-master-centos10/openstack-nova-compute:watcher_latest, name=nova_compute_init, tcib_managed=true, config_data={'image': '38.102.83.66:5001/podified-master-centos10/openstack-nova-compute:watcher_latest', 'privileged': False, 'user': 'root', 'restart': 'never', 'command': 'bash -c $* -- eval python3 /sbin/nova_statedir_ownership.py | logger -t nova_compute_init', 'net': 'none', 'security_opt': ['label=disable'], 'detach': False, 'environment': {'NOVA_STATEDIR_OWNERSHIP_SKIP': '/var/lib/nova/compute_id', '__OS_DEBUG': False}, 'volumes': ['/dev/log:/dev/log', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/_nova_secontext:/var/lib/_nova_secontext:shared,z', '/var/lib/openstack/config/nova/nova_statedir_ownership.py:/sbin/nova_statedir_ownership.py:z']}, config_id=edpm, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, container_name=nova_compute_init, io.buildah.version=1.41.4, org.label-schema.build-date=20251007, org.label-schema.schema-version=1.0, tcib_build_tag=watcher_latest)
Oct 09 16:01:59 compute-0 podman[117516]: 2025-10-09 16:01:59.084596854 +0000 UTC m=+0.131033340 container start 9bcd9f4459a13a02afd9a7b9e923231faae7c9aa70a3053016bee2cac7e1c7ae (image=38.102.83.66:5001/podified-master-centos10/openstack-nova-compute:watcher_latest, name=nova_compute_init, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, tcib_managed=true, config_id=edpm, container_name=nova_compute_init, io.buildah.version=1.41.4, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251007, tcib_build_tag=watcher_latest, config_data={'image': '38.102.83.66:5001/podified-master-centos10/openstack-nova-compute:watcher_latest', 'privileged': False, 'user': 'root', 'restart': 'never', 'command': 'bash -c $* -- eval python3 /sbin/nova_statedir_ownership.py | logger -t nova_compute_init', 'net': 'none', 'security_opt': ['label=disable'], 'detach': False, 'environment': {'NOVA_STATEDIR_OWNERSHIP_SKIP': '/var/lib/nova/compute_id', '__OS_DEBUG': False}, 'volumes': ['/dev/log:/dev/log', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/_nova_secontext:/var/lib/_nova_secontext:shared,z', '/var/lib/openstack/config/nova/nova_statedir_ownership.py:/sbin/nova_statedir_ownership.py:z']}, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 10 Base Image)
Oct 09 16:01:59 compute-0 python3.9[117494]: ansible-containers.podman.podman_container PODMAN-CONTAINER-DEBUG: podman start nova_compute_init
Oct 09 16:01:59 compute-0 nova_compute_init[117537]: INFO:nova_statedir:Applying nova statedir ownership
Oct 09 16:01:59 compute-0 nova_compute_init[117537]: INFO:nova_statedir:Target ownership for /var/lib/nova: 42436:42436
Oct 09 16:01:59 compute-0 nova_compute_init[117537]: INFO:nova_statedir:Checking uid: 1000 gid: 1000 path: /var/lib/nova/
Oct 09 16:01:59 compute-0 nova_compute_init[117537]: INFO:nova_statedir:Changing ownership of /var/lib/nova from 1000:1000 to 42436:42436
Oct 09 16:01:59 compute-0 nova_compute_init[117537]: INFO:nova_statedir:Setting selinux context of /var/lib/nova to system_u:object_r:container_file_t:s0
Oct 09 16:01:59 compute-0 nova_compute_init[117537]: INFO:nova_statedir:Checking uid: 1000 gid: 1000 path: /var/lib/nova/instances/
Oct 09 16:01:59 compute-0 nova_compute_init[117537]: INFO:nova_statedir:Changing ownership of /var/lib/nova/instances from 1000:1000 to 42436:42436
Oct 09 16:01:59 compute-0 nova_compute_init[117537]: INFO:nova_statedir:Setting selinux context of /var/lib/nova/instances to system_u:object_r:container_file_t:s0
Oct 09 16:01:59 compute-0 nova_compute_init[117537]: INFO:nova_statedir:Checking uid: 42436 gid: 42436 path: /var/lib/nova/.ssh/
Oct 09 16:01:59 compute-0 nova_compute_init[117537]: INFO:nova_statedir:Ownership of /var/lib/nova/.ssh already 42436:42436
Oct 09 16:01:59 compute-0 nova_compute_init[117537]: INFO:nova_statedir:Setting selinux context of /var/lib/nova/.ssh to system_u:object_r:container_file_t:s0
Oct 09 16:01:59 compute-0 nova_compute_init[117537]: INFO:nova_statedir:Checking uid: 42436 gid: 42436 path: /var/lib/nova/.ssh/ssh-privatekey
Oct 09 16:01:59 compute-0 nova_compute_init[117537]: INFO:nova_statedir:Checking uid: 42436 gid: 42436 path: /var/lib/nova/.ssh/config
Oct 09 16:01:59 compute-0 nova_compute_init[117537]: INFO:nova_statedir:Nova statedir ownership complete
Oct 09 16:01:59 compute-0 systemd[1]: libpod-9bcd9f4459a13a02afd9a7b9e923231faae7c9aa70a3053016bee2cac7e1c7ae.scope: Deactivated successfully.
Oct 09 16:01:59 compute-0 podman[117538]: 2025-10-09 16:01:59.174491201 +0000 UTC m=+0.032903904 container died 9bcd9f4459a13a02afd9a7b9e923231faae7c9aa70a3053016bee2cac7e1c7ae (image=38.102.83.66:5001/podified-master-centos10/openstack-nova-compute:watcher_latest, name=nova_compute_init, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=edpm, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, container_name=nova_compute_init, managed_by=edpm_ansible, tcib_build_tag=watcher_latest, config_data={'image': '38.102.83.66:5001/podified-master-centos10/openstack-nova-compute:watcher_latest', 'privileged': False, 'user': 'root', 'restart': 'never', 'command': 'bash -c $* -- eval python3 /sbin/nova_statedir_ownership.py | logger -t nova_compute_init', 'net': 'none', 'security_opt': ['label=disable'], 'detach': False, 'environment': {'NOVA_STATEDIR_OWNERSHIP_SKIP': '/var/lib/nova/compute_id', '__OS_DEBUG': False}, 'volumes': ['/dev/log:/dev/log', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/_nova_secontext:/var/lib/_nova_secontext:shared,z', '/var/lib/openstack/config/nova/nova_statedir_ownership.py:/sbin/nova_statedir_ownership.py:z']}, io.buildah.version=1.41.4, org.label-schema.build-date=20251007)
Oct 09 16:01:59 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-9bcd9f4459a13a02afd9a7b9e923231faae7c9aa70a3053016bee2cac7e1c7ae-userdata-shm.mount: Deactivated successfully.
Oct 09 16:01:59 compute-0 systemd[1]: var-lib-containers-storage-overlay-8f775ef9caa6e8bab190f33f4f315509d53a1ba6aab5aea9df39f155463c359b-merged.mount: Deactivated successfully.
Oct 09 16:01:59 compute-0 podman[117548]: 2025-10-09 16:01:59.233521831 +0000 UTC m=+0.064044080 container cleanup 9bcd9f4459a13a02afd9a7b9e923231faae7c9aa70a3053016bee2cac7e1c7ae (image=38.102.83.66:5001/podified-master-centos10/openstack-nova-compute:watcher_latest, name=nova_compute_init, org.label-schema.build-date=20251007, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, tcib_managed=true, config_id=edpm, container_name=nova_compute_init, io.buildah.version=1.41.4, maintainer=OpenStack Kubernetes Operator team, config_data={'image': '38.102.83.66:5001/podified-master-centos10/openstack-nova-compute:watcher_latest', 'privileged': False, 'user': 'root', 'restart': 'never', 'command': 'bash -c $* -- eval python3 /sbin/nova_statedir_ownership.py | logger -t nova_compute_init', 'net': 'none', 'security_opt': ['label=disable'], 'detach': False, 'environment': {'NOVA_STATEDIR_OWNERSHIP_SKIP': '/var/lib/nova/compute_id', '__OS_DEBUG': False}, 'volumes': ['/dev/log:/dev/log', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/_nova_secontext:/var/lib/_nova_secontext:shared,z', '/var/lib/openstack/config/nova/nova_statedir_ownership.py:/sbin/nova_statedir_ownership.py:z']}, org.label-schema.schema-version=1.0, tcib_build_tag=watcher_latest)
Oct 09 16:01:59 compute-0 systemd[1]: libpod-conmon-9bcd9f4459a13a02afd9a7b9e923231faae7c9aa70a3053016bee2cac7e1c7ae.scope: Deactivated successfully.
Oct 09 16:01:59 compute-0 sudo[117492]: pam_unix(sudo:session): session closed for user root
Oct 09 16:01:59 compute-0 sshd-session[83030]: Connection closed by 192.168.122.30 port 54682
Oct 09 16:01:59 compute-0 sshd-session[83027]: pam_unix(sshd:session): session closed for user zuul
Oct 09 16:01:59 compute-0 systemd[1]: session-8.scope: Deactivated successfully.
Oct 09 16:01:59 compute-0 systemd[1]: session-8.scope: Consumed 2min 10.437s CPU time.
Oct 09 16:01:59 compute-0 systemd-logind[841]: Session 8 logged out. Waiting for processes to exit.
Oct 09 16:01:59 compute-0 systemd-logind[841]: Removed session 8.
Oct 09 16:02:00 compute-0 nova_compute[117331]: 2025-10-09 16:02:00.138 2 DEBUG os_vif [-] Loaded VIF plugin class '<class 'vif_plug_linux_bridge.linux_bridge.LinuxBridgePlugin'>' with name 'linux_bridge' initialize /usr/lib/python3.12/site-packages/os_vif/__init__.py:44
Oct 09 16:02:00 compute-0 nova_compute[117331]: 2025-10-09 16:02:00.138 2 DEBUG os_vif [-] Loaded VIF plugin class '<class 'vif_plug_noop.noop.NoOpPlugin'>' with name 'noop' initialize /usr/lib/python3.12/site-packages/os_vif/__init__.py:44
Oct 09 16:02:00 compute-0 nova_compute[117331]: 2025-10-09 16:02:00.139 2 DEBUG os_vif [-] Loaded VIF plugin class '<class 'vif_plug_ovs.ovs.OvsPlugin'>' with name 'ovs' initialize /usr/lib/python3.12/site-packages/os_vif/__init__.py:44
Oct 09 16:02:00 compute-0 nova_compute[117331]: 2025-10-09 16:02:00.139 2 INFO os_vif [-] Loaded VIF plugins: linux_bridge, noop, ovs
Oct 09 16:02:00 compute-0 nova_compute[117331]: 2025-10-09 16:02:00.260 2 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): grep -F node.session.scan /sbin/iscsiadm execute /usr/lib/python3.12/site-packages/oslo_concurrency/processutils.py:349
Oct 09 16:02:00 compute-0 nova_compute[117331]: 2025-10-09 16:02:00.272 2 DEBUG oslo_concurrency.processutils [-] CMD "grep -F node.session.scan /sbin/iscsiadm" returned: 0 in 0.013s execute /usr/lib/python3.12/site-packages/oslo_concurrency/processutils.py:372
Oct 09 16:02:00 compute-0 nova_compute[117331]: 2025-10-09 16:02:00.305 2 INFO oslo_service.periodic_task [-] Skipping periodic task _heal_instance_info_cache because its interval is negative
Oct 09 16:02:00 compute-0 nova_compute[117331]: 2025-10-09 16:02:00.306 2 WARNING oslo_config.cfg [None req-ab701943-9b40-45b6-962a-b912807dc6b0 - - - - - -] Deprecated: Option "heartbeat_in_pthread" from group "oslo_messaging_rabbit" is deprecated for removal (The option is related to Eventlet which will be removed. In addition this has never worked as expected with services using eventlet for core service framework.).  Its value may be silently ignored in the future.
Oct 09 16:02:01 compute-0 nova_compute[117331]: 2025-10-09 16:02:01.244 2 INFO nova.virt.driver [None req-ab701943-9b40-45b6-962a-b912807dc6b0 - - - - - -] Loading compute driver 'libvirt.LibvirtDriver'
Oct 09 16:02:01 compute-0 nova_compute[117331]: 2025-10-09 16:02:01.347 2 INFO nova.compute.provider_config [None req-ab701943-9b40-45b6-962a-b912807dc6b0 - - - - - -] No provider configs found in /etc/nova/provider_config/. If files are present, ensure the Nova process has access.
Oct 09 16:02:01 compute-0 nova_compute[117331]: 2025-10-09 16:02:01.854 2 DEBUG oslo_concurrency.lockutils [None req-ab701943-9b40-45b6-962a-b912807dc6b0 - - - - - -] Acquiring lock "singleton_lock" lock /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:313
Oct 09 16:02:01 compute-0 nova_compute[117331]: 2025-10-09 16:02:01.854 2 DEBUG oslo_concurrency.lockutils [None req-ab701943-9b40-45b6-962a-b912807dc6b0 - - - - - -] Acquired lock "singleton_lock" lock /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:316
Oct 09 16:02:01 compute-0 nova_compute[117331]: 2025-10-09 16:02:01.855 2 DEBUG oslo_concurrency.lockutils [None req-ab701943-9b40-45b6-962a-b912807dc6b0 - - - - - -] Releasing lock "singleton_lock" lock /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:334
Oct 09 16:02:01 compute-0 nova_compute[117331]: 2025-10-09 16:02:01.855 2 DEBUG oslo_service.backend._eventlet.service [None req-ab701943-9b40-45b6-962a-b912807dc6b0 - - - - - -] Full set of CONF: _wait_for_exit_or_signal /usr/lib/python3.12/site-packages/oslo_service/backend/_eventlet/service.py:274
Oct 09 16:02:01 compute-0 nova_compute[117331]: 2025-10-09 16:02:01.855 2 DEBUG oslo_service.backend._eventlet.service [None req-ab701943-9b40-45b6-962a-b912807dc6b0 - - - - - -] ******************************************************************************** log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2804
Oct 09 16:02:01 compute-0 nova_compute[117331]: 2025-10-09 16:02:01.855 2 DEBUG oslo_service.backend._eventlet.service [None req-ab701943-9b40-45b6-962a-b912807dc6b0 - - - - - -] Configuration options gathered from: log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2805
Oct 09 16:02:01 compute-0 nova_compute[117331]: 2025-10-09 16:02:01.856 2 DEBUG oslo_service.backend._eventlet.service [None req-ab701943-9b40-45b6-962a-b912807dc6b0 - - - - - -] command line args: [] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2806
Oct 09 16:02:01 compute-0 nova_compute[117331]: 2025-10-09 16:02:01.856 2 DEBUG oslo_service.backend._eventlet.service [None req-ab701943-9b40-45b6-962a-b912807dc6b0 - - - - - -] config files: ['/etc/nova/nova.conf', '/etc/nova/nova-compute.conf'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2807
Oct 09 16:02:01 compute-0 nova_compute[117331]: 2025-10-09 16:02:01.856 2 DEBUG oslo_service.backend._eventlet.service [None req-ab701943-9b40-45b6-962a-b912807dc6b0 - - - - - -] ================================================================================ log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2809
Oct 09 16:02:01 compute-0 nova_compute[117331]: 2025-10-09 16:02:01.856 2 DEBUG oslo_service.backend._eventlet.service [None req-ab701943-9b40-45b6-962a-b912807dc6b0 - - - - - -] allow_resize_to_same_host      = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Oct 09 16:02:01 compute-0 nova_compute[117331]: 2025-10-09 16:02:01.857 2 DEBUG oslo_service.backend._eventlet.service [None req-ab701943-9b40-45b6-962a-b912807dc6b0 - - - - - -] arq_binding_timeout            = 300 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Oct 09 16:02:01 compute-0 nova_compute[117331]: 2025-10-09 16:02:01.857 2 DEBUG oslo_service.backend._eventlet.service [None req-ab701943-9b40-45b6-962a-b912807dc6b0 - - - - - -] backdoor_port                  = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Oct 09 16:02:01 compute-0 nova_compute[117331]: 2025-10-09 16:02:01.857 2 DEBUG oslo_service.backend._eventlet.service [None req-ab701943-9b40-45b6-962a-b912807dc6b0 - - - - - -] backdoor_socket                = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Oct 09 16:02:01 compute-0 nova_compute[117331]: 2025-10-09 16:02:01.857 2 DEBUG oslo_service.backend._eventlet.service [None req-ab701943-9b40-45b6-962a-b912807dc6b0 - - - - - -] block_device_allocate_retries  = 60 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Oct 09 16:02:01 compute-0 nova_compute[117331]: 2025-10-09 16:02:01.857 2 DEBUG oslo_service.backend._eventlet.service [None req-ab701943-9b40-45b6-962a-b912807dc6b0 - - - - - -] block_device_allocate_retries_interval = 3 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Oct 09 16:02:01 compute-0 nova_compute[117331]: 2025-10-09 16:02:01.857 2 DEBUG oslo_service.backend._eventlet.service [None req-ab701943-9b40-45b6-962a-b912807dc6b0 - - - - - -] cell_worker_thread_pool_size   = 5 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Oct 09 16:02:01 compute-0 nova_compute[117331]: 2025-10-09 16:02:01.858 2 DEBUG oslo_service.backend._eventlet.service [None req-ab701943-9b40-45b6-962a-b912807dc6b0 - - - - - -] cert                           = self.pem log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Oct 09 16:02:01 compute-0 nova_compute[117331]: 2025-10-09 16:02:01.858 2 DEBUG oslo_service.backend._eventlet.service [None req-ab701943-9b40-45b6-962a-b912807dc6b0 - - - - - -] compute_driver                 = libvirt.LibvirtDriver log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Oct 09 16:02:01 compute-0 nova_compute[117331]: 2025-10-09 16:02:01.858 2 DEBUG oslo_service.backend._eventlet.service [None req-ab701943-9b40-45b6-962a-b912807dc6b0 - - - - - -] compute_monitors               = [] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Oct 09 16:02:01 compute-0 nova_compute[117331]: 2025-10-09 16:02:01.858 2 DEBUG oslo_service.backend._eventlet.service [None req-ab701943-9b40-45b6-962a-b912807dc6b0 - - - - - -] config_dir                     = ['/etc/nova/nova.conf.d'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Oct 09 16:02:01 compute-0 nova_compute[117331]: 2025-10-09 16:02:01.858 2 DEBUG oslo_service.backend._eventlet.service [None req-ab701943-9b40-45b6-962a-b912807dc6b0 - - - - - -] config_drive_format            = iso9660 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Oct 09 16:02:01 compute-0 nova_compute[117331]: 2025-10-09 16:02:01.859 2 DEBUG oslo_service.backend._eventlet.service [None req-ab701943-9b40-45b6-962a-b912807dc6b0 - - - - - -] config_file                    = ['/etc/nova/nova.conf', '/etc/nova/nova-compute.conf'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Oct 09 16:02:01 compute-0 nova_compute[117331]: 2025-10-09 16:02:01.859 2 DEBUG oslo_service.backend._eventlet.service [None req-ab701943-9b40-45b6-962a-b912807dc6b0 - - - - - -] config_source                  = [] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Oct 09 16:02:01 compute-0 nova_compute[117331]: 2025-10-09 16:02:01.859 2 DEBUG oslo_service.backend._eventlet.service [None req-ab701943-9b40-45b6-962a-b912807dc6b0 - - - - - -] console_host                   = compute-0 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Oct 09 16:02:01 compute-0 nova_compute[117331]: 2025-10-09 16:02:01.859 2 DEBUG oslo_service.backend._eventlet.service [None req-ab701943-9b40-45b6-962a-b912807dc6b0 - - - - - -] control_exchange               = nova log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Oct 09 16:02:01 compute-0 nova_compute[117331]: 2025-10-09 16:02:01.859 2 DEBUG oslo_service.backend._eventlet.service [None req-ab701943-9b40-45b6-962a-b912807dc6b0 - - - - - -] cpu_allocation_ratio           = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Oct 09 16:02:01 compute-0 nova_compute[117331]: 2025-10-09 16:02:01.859 2 DEBUG oslo_service.backend._eventlet.service [None req-ab701943-9b40-45b6-962a-b912807dc6b0 - - - - - -] daemon                         = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Oct 09 16:02:01 compute-0 nova_compute[117331]: 2025-10-09 16:02:01.860 2 DEBUG oslo_service.backend._eventlet.service [None req-ab701943-9b40-45b6-962a-b912807dc6b0 - - - - - -] debug                          = True log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Oct 09 16:02:01 compute-0 nova_compute[117331]: 2025-10-09 16:02:01.860 2 DEBUG oslo_service.backend._eventlet.service [None req-ab701943-9b40-45b6-962a-b912807dc6b0 - - - - - -] default_access_ip_network_name = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Oct 09 16:02:01 compute-0 nova_compute[117331]: 2025-10-09 16:02:01.860 2 DEBUG oslo_service.backend._eventlet.service [None req-ab701943-9b40-45b6-962a-b912807dc6b0 - - - - - -] default_availability_zone      = nova log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Oct 09 16:02:01 compute-0 nova_compute[117331]: 2025-10-09 16:02:01.860 2 DEBUG oslo_service.backend._eventlet.service [None req-ab701943-9b40-45b6-962a-b912807dc6b0 - - - - - -] default_ephemeral_format       = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Oct 09 16:02:01 compute-0 nova_compute[117331]: 2025-10-09 16:02:01.860 2 DEBUG oslo_service.backend._eventlet.service [None req-ab701943-9b40-45b6-962a-b912807dc6b0 - - - - - -] default_green_pool_size        = 1000 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Oct 09 16:02:01 compute-0 nova_compute[117331]: 2025-10-09 16:02:01.860 2 DEBUG oslo_service.backend._eventlet.service [None req-ab701943-9b40-45b6-962a-b912807dc6b0 - - - - - -] default_log_levels             = ['amqp=WARN', 'amqplib=WARN', 'boto=WARN', 'qpid=WARN', 'sqlalchemy=WARN', 'suds=INFO', 'oslo.messaging=INFO', 'oslo_messaging=INFO', 'iso8601=WARN', 'requests.packages.urllib3.connectionpool=WARN', 'urllib3.connectionpool=WARN', 'websocket=WARN', 'requests.packages.urllib3.util.retry=WARN', 'urllib3.util.retry=WARN', 'keystonemiddleware=WARN', 'routes.middleware=WARN', 'stevedore=WARN', 'taskflow=WARN', 'keystoneauth=WARN', 'oslo.cache=INFO', 'oslo_policy=INFO', 'dogpile.core.dogpile=INFO', 'glanceclient=WARN', 'oslo.privsep.daemon=INFO'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Oct 09 16:02:01 compute-0 nova_compute[117331]: 2025-10-09 16:02:01.861 2 DEBUG oslo_service.backend._eventlet.service [None req-ab701943-9b40-45b6-962a-b912807dc6b0 - - - - - -] default_schedule_zone          = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Oct 09 16:02:01 compute-0 nova_compute[117331]: 2025-10-09 16:02:01.861 2 DEBUG oslo_service.backend._eventlet.service [None req-ab701943-9b40-45b6-962a-b912807dc6b0 - - - - - -] default_thread_pool_size       = 10 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Oct 09 16:02:01 compute-0 nova_compute[117331]: 2025-10-09 16:02:01.861 2 DEBUG oslo_service.backend._eventlet.service [None req-ab701943-9b40-45b6-962a-b912807dc6b0 - - - - - -] disk_allocation_ratio          = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Oct 09 16:02:01 compute-0 nova_compute[117331]: 2025-10-09 16:02:01.861 2 DEBUG oslo_service.backend._eventlet.service [None req-ab701943-9b40-45b6-962a-b912807dc6b0 - - - - - -] enable_new_services            = True log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Oct 09 16:02:01 compute-0 nova_compute[117331]: 2025-10-09 16:02:01.861 2 DEBUG oslo_service.backend._eventlet.service [None req-ab701943-9b40-45b6-962a-b912807dc6b0 - - - - - -] fatal_deprecations             = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Oct 09 16:02:01 compute-0 nova_compute[117331]: 2025-10-09 16:02:01.861 2 DEBUG oslo_service.backend._eventlet.service [None req-ab701943-9b40-45b6-962a-b912807dc6b0 - - - - - -] flat_injected                  = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Oct 09 16:02:01 compute-0 nova_compute[117331]: 2025-10-09 16:02:01.862 2 DEBUG oslo_service.backend._eventlet.service [None req-ab701943-9b40-45b6-962a-b912807dc6b0 - - - - - -] force_config_drive             = True log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Oct 09 16:02:01 compute-0 nova_compute[117331]: 2025-10-09 16:02:01.862 2 DEBUG oslo_service.backend._eventlet.service [None req-ab701943-9b40-45b6-962a-b912807dc6b0 - - - - - -] force_raw_images               = True log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Oct 09 16:02:01 compute-0 nova_compute[117331]: 2025-10-09 16:02:01.862 2 DEBUG oslo_service.backend._eventlet.service [None req-ab701943-9b40-45b6-962a-b912807dc6b0 - - - - - -] graceful_shutdown_timeout      = 60 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Oct 09 16:02:01 compute-0 nova_compute[117331]: 2025-10-09 16:02:01.862 2 DEBUG oslo_service.backend._eventlet.service [None req-ab701943-9b40-45b6-962a-b912807dc6b0 - - - - - -] heal_instance_info_cache_interval = -1 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Oct 09 16:02:01 compute-0 nova_compute[117331]: 2025-10-09 16:02:01.862 2 DEBUG oslo_service.backend._eventlet.service [None req-ab701943-9b40-45b6-962a-b912807dc6b0 - - - - - -] host                           = compute-0.ctlplane.example.com log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Oct 09 16:02:01 compute-0 nova_compute[117331]: 2025-10-09 16:02:01.863 2 DEBUG oslo_service.backend._eventlet.service [None req-ab701943-9b40-45b6-962a-b912807dc6b0 - - - - - -] initial_cpu_allocation_ratio   = 4.0 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Oct 09 16:02:01 compute-0 nova_compute[117331]: 2025-10-09 16:02:01.863 2 DEBUG oslo_service.backend._eventlet.service [None req-ab701943-9b40-45b6-962a-b912807dc6b0 - - - - - -] initial_disk_allocation_ratio  = 0.9 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Oct 09 16:02:01 compute-0 nova_compute[117331]: 2025-10-09 16:02:01.863 2 DEBUG oslo_service.backend._eventlet.service [None req-ab701943-9b40-45b6-962a-b912807dc6b0 - - - - - -] initial_ram_allocation_ratio   = 1.0 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Oct 09 16:02:01 compute-0 nova_compute[117331]: 2025-10-09 16:02:01.863 2 DEBUG oslo_service.backend._eventlet.service [None req-ab701943-9b40-45b6-962a-b912807dc6b0 - - - - - -] injected_network_template      = /usr/lib/python3.12/site-packages/nova/virt/interfaces.template log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Oct 09 16:02:01 compute-0 nova_compute[117331]: 2025-10-09 16:02:01.864 2 DEBUG oslo_service.backend._eventlet.service [None req-ab701943-9b40-45b6-962a-b912807dc6b0 - - - - - -] instance_build_timeout         = 0 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Oct 09 16:02:01 compute-0 nova_compute[117331]: 2025-10-09 16:02:01.864 2 DEBUG oslo_service.backend._eventlet.service [None req-ab701943-9b40-45b6-962a-b912807dc6b0 - - - - - -] instance_delete_interval       = 300 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Oct 09 16:02:01 compute-0 nova_compute[117331]: 2025-10-09 16:02:01.864 2 DEBUG oslo_service.backend._eventlet.service [None req-ab701943-9b40-45b6-962a-b912807dc6b0 - - - - - -] instance_format                = [instance: %(uuid)s]  log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Oct 09 16:02:01 compute-0 nova_compute[117331]: 2025-10-09 16:02:01.864 2 DEBUG oslo_service.backend._eventlet.service [None req-ab701943-9b40-45b6-962a-b912807dc6b0 - - - - - -] instance_name_template         = instance-%08x log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Oct 09 16:02:01 compute-0 nova_compute[117331]: 2025-10-09 16:02:01.864 2 DEBUG oslo_service.backend._eventlet.service [None req-ab701943-9b40-45b6-962a-b912807dc6b0 - - - - - -] instance_usage_audit           = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Oct 09 16:02:01 compute-0 nova_compute[117331]: 2025-10-09 16:02:01.864 2 DEBUG oslo_service.backend._eventlet.service [None req-ab701943-9b40-45b6-962a-b912807dc6b0 - - - - - -] instance_usage_audit_period    = month log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Oct 09 16:02:01 compute-0 nova_compute[117331]: 2025-10-09 16:02:01.865 2 DEBUG oslo_service.backend._eventlet.service [None req-ab701943-9b40-45b6-962a-b912807dc6b0 - - - - - -] instance_uuid_format           = [instance: %(uuid)s]  log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Oct 09 16:02:01 compute-0 nova_compute[117331]: 2025-10-09 16:02:01.865 2 DEBUG oslo_service.backend._eventlet.service [None req-ab701943-9b40-45b6-962a-b912807dc6b0 - - - - - -] instances_path                 = /var/lib/nova/instances log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Oct 09 16:02:01 compute-0 nova_compute[117331]: 2025-10-09 16:02:01.865 2 DEBUG oslo_service.backend._eventlet.service [None req-ab701943-9b40-45b6-962a-b912807dc6b0 - - - - - -] internal_service_availability_zone = internal log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Oct 09 16:02:01 compute-0 nova_compute[117331]: 2025-10-09 16:02:01.865 2 DEBUG oslo_service.backend._eventlet.service [None req-ab701943-9b40-45b6-962a-b912807dc6b0 - - - - - -] key                            = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Oct 09 16:02:01 compute-0 nova_compute[117331]: 2025-10-09 16:02:01.865 2 DEBUG oslo_service.backend._eventlet.service [None req-ab701943-9b40-45b6-962a-b912807dc6b0 - - - - - -] live_migration_retry_count     = 30 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Oct 09 16:02:01 compute-0 nova_compute[117331]: 2025-10-09 16:02:01.865 2 DEBUG oslo_service.backend._eventlet.service [None req-ab701943-9b40-45b6-962a-b912807dc6b0 - - - - - -] log_color                      = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Oct 09 16:02:01 compute-0 nova_compute[117331]: 2025-10-09 16:02:01.866 2 DEBUG oslo_service.backend._eventlet.service [None req-ab701943-9b40-45b6-962a-b912807dc6b0 - - - - - -] log_config_append              = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Oct 09 16:02:01 compute-0 nova_compute[117331]: 2025-10-09 16:02:01.866 2 DEBUG oslo_service.backend._eventlet.service [None req-ab701943-9b40-45b6-962a-b912807dc6b0 - - - - - -] log_date_format                = %Y-%m-%d %H:%M:%S log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Oct 09 16:02:01 compute-0 nova_compute[117331]: 2025-10-09 16:02:01.866 2 DEBUG oslo_service.backend._eventlet.service [None req-ab701943-9b40-45b6-962a-b912807dc6b0 - - - - - -] log_dir                        = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Oct 09 16:02:01 compute-0 nova_compute[117331]: 2025-10-09 16:02:01.866 2 DEBUG oslo_service.backend._eventlet.service [None req-ab701943-9b40-45b6-962a-b912807dc6b0 - - - - - -] log_file                       = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Oct 09 16:02:01 compute-0 nova_compute[117331]: 2025-10-09 16:02:01.866 2 DEBUG oslo_service.backend._eventlet.service [None req-ab701943-9b40-45b6-962a-b912807dc6b0 - - - - - -] log_options                    = True log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Oct 09 16:02:01 compute-0 nova_compute[117331]: 2025-10-09 16:02:01.866 2 DEBUG oslo_service.backend._eventlet.service [None req-ab701943-9b40-45b6-962a-b912807dc6b0 - - - - - -] log_rotate_interval            = 1 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Oct 09 16:02:01 compute-0 nova_compute[117331]: 2025-10-09 16:02:01.867 2 DEBUG oslo_service.backend._eventlet.service [None req-ab701943-9b40-45b6-962a-b912807dc6b0 - - - - - -] log_rotate_interval_type       = days log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Oct 09 16:02:01 compute-0 nova_compute[117331]: 2025-10-09 16:02:01.867 2 DEBUG oslo_service.backend._eventlet.service [None req-ab701943-9b40-45b6-962a-b912807dc6b0 - - - - - -] log_rotation_type              = size log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Oct 09 16:02:01 compute-0 nova_compute[117331]: 2025-10-09 16:02:01.867 2 DEBUG oslo_service.backend._eventlet.service [None req-ab701943-9b40-45b6-962a-b912807dc6b0 - - - - - -] logging_context_format_string  = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [%(global_request_id)s %(request_id)s %(user_identity)s] %(instance)s%(message)s log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Oct 09 16:02:01 compute-0 nova_compute[117331]: 2025-10-09 16:02:01.867 2 DEBUG oslo_service.backend._eventlet.service [None req-ab701943-9b40-45b6-962a-b912807dc6b0 - - - - - -] logging_debug_format_suffix    = %(funcName)s %(pathname)s:%(lineno)d log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Oct 09 16:02:01 compute-0 nova_compute[117331]: 2025-10-09 16:02:01.867 2 DEBUG oslo_service.backend._eventlet.service [None req-ab701943-9b40-45b6-962a-b912807dc6b0 - - - - - -] logging_default_format_string  = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [-] %(instance)s%(message)s log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Oct 09 16:02:01 compute-0 nova_compute[117331]: 2025-10-09 16:02:01.867 2 DEBUG oslo_service.backend._eventlet.service [None req-ab701943-9b40-45b6-962a-b912807dc6b0 - - - - - -] logging_exception_prefix       = %(asctime)s.%(msecs)03d %(process)d ERROR %(name)s %(instance)s log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Oct 09 16:02:01 compute-0 nova_compute[117331]: 2025-10-09 16:02:01.868 2 DEBUG oslo_service.backend._eventlet.service [None req-ab701943-9b40-45b6-962a-b912807dc6b0 - - - - - -] logging_user_identity_format   = %(user)s %(project)s %(domain)s %(system_scope)s %(user_domain)s %(project_domain)s log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Oct 09 16:02:01 compute-0 nova_compute[117331]: 2025-10-09 16:02:01.868 2 DEBUG oslo_service.backend._eventlet.service [None req-ab701943-9b40-45b6-962a-b912807dc6b0 - - - - - -] long_rpc_timeout               = 1800 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Oct 09 16:02:01 compute-0 nova_compute[117331]: 2025-10-09 16:02:01.868 2 DEBUG oslo_service.backend._eventlet.service [None req-ab701943-9b40-45b6-962a-b912807dc6b0 - - - - - -] max_concurrent_builds          = 10 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Oct 09 16:02:01 compute-0 nova_compute[117331]: 2025-10-09 16:02:01.868 2 DEBUG oslo_service.backend._eventlet.service [None req-ab701943-9b40-45b6-962a-b912807dc6b0 - - - - - -] max_concurrent_live_migrations = 1 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Oct 09 16:02:01 compute-0 nova_compute[117331]: 2025-10-09 16:02:01.868 2 DEBUG oslo_service.backend._eventlet.service [None req-ab701943-9b40-45b6-962a-b912807dc6b0 - - - - - -] max_concurrent_snapshots       = 5 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Oct 09 16:02:01 compute-0 nova_compute[117331]: 2025-10-09 16:02:01.868 2 DEBUG oslo_service.backend._eventlet.service [None req-ab701943-9b40-45b6-962a-b912807dc6b0 - - - - - -] max_local_block_devices        = 3 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Oct 09 16:02:01 compute-0 nova_compute[117331]: 2025-10-09 16:02:01.869 2 DEBUG oslo_service.backend._eventlet.service [None req-ab701943-9b40-45b6-962a-b912807dc6b0 - - - - - -] max_logfile_count              = 1 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Oct 09 16:02:01 compute-0 nova_compute[117331]: 2025-10-09 16:02:01.869 2 DEBUG oslo_service.backend._eventlet.service [None req-ab701943-9b40-45b6-962a-b912807dc6b0 - - - - - -] max_logfile_size_mb            = 20 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Oct 09 16:02:01 compute-0 nova_compute[117331]: 2025-10-09 16:02:01.869 2 DEBUG oslo_service.backend._eventlet.service [None req-ab701943-9b40-45b6-962a-b912807dc6b0 - - - - - -] maximum_instance_delete_attempts = 5 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Oct 09 16:02:01 compute-0 nova_compute[117331]: 2025-10-09 16:02:01.869 2 DEBUG oslo_service.backend._eventlet.service [None req-ab701943-9b40-45b6-962a-b912807dc6b0 - - - - - -] migrate_max_retries            = -1 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Oct 09 16:02:01 compute-0 nova_compute[117331]: 2025-10-09 16:02:01.869 2 DEBUG oslo_service.backend._eventlet.service [None req-ab701943-9b40-45b6-962a-b912807dc6b0 - - - - - -] mkisofs_cmd                    = /usr/bin/mkisofs log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Oct 09 16:02:01 compute-0 nova_compute[117331]: 2025-10-09 16:02:01.869 2 DEBUG oslo_service.backend._eventlet.service [None req-ab701943-9b40-45b6-962a-b912807dc6b0 - - - - - -] my_block_storage_ip            = 192.168.122.100 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Oct 09 16:02:01 compute-0 nova_compute[117331]: 2025-10-09 16:02:01.870 2 DEBUG oslo_service.backend._eventlet.service [None req-ab701943-9b40-45b6-962a-b912807dc6b0 - - - - - -] my_ip                          = 192.168.122.100 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Oct 09 16:02:01 compute-0 nova_compute[117331]: 2025-10-09 16:02:01.870 2 DEBUG oslo_service.backend._eventlet.service [None req-ab701943-9b40-45b6-962a-b912807dc6b0 - - - - - -] my_shared_fs_storage_ip        = 192.168.122.100 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Oct 09 16:02:01 compute-0 nova_compute[117331]: 2025-10-09 16:02:01.870 2 DEBUG oslo_service.backend._eventlet.service [None req-ab701943-9b40-45b6-962a-b912807dc6b0 - - - - - -] network_allocate_retries       = 0 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Oct 09 16:02:01 compute-0 nova_compute[117331]: 2025-10-09 16:02:01.870 2 DEBUG oslo_service.backend._eventlet.service [None req-ab701943-9b40-45b6-962a-b912807dc6b0 - - - - - -] non_inheritable_image_properties = ['cache_in_nova', 'bittorrent'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Oct 09 16:02:01 compute-0 nova_compute[117331]: 2025-10-09 16:02:01.871 2 DEBUG oslo_service.backend._eventlet.service [None req-ab701943-9b40-45b6-962a-b912807dc6b0 - - - - - -] osapi_compute_unique_server_name_scope =  log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Oct 09 16:02:01 compute-0 nova_compute[117331]: 2025-10-09 16:02:01.871 2 DEBUG oslo_service.backend._eventlet.service [None req-ab701943-9b40-45b6-962a-b912807dc6b0 - - - - - -] password_length                = 12 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Oct 09 16:02:01 compute-0 nova_compute[117331]: 2025-10-09 16:02:01.871 2 DEBUG oslo_service.backend._eventlet.service [None req-ab701943-9b40-45b6-962a-b912807dc6b0 - - - - - -] periodic_enable                = True log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Oct 09 16:02:01 compute-0 nova_compute[117331]: 2025-10-09 16:02:01.871 2 DEBUG oslo_service.backend._eventlet.service [None req-ab701943-9b40-45b6-962a-b912807dc6b0 - - - - - -] periodic_fuzzy_delay           = 60 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Oct 09 16:02:01 compute-0 nova_compute[117331]: 2025-10-09 16:02:01.871 2 DEBUG oslo_service.backend._eventlet.service [None req-ab701943-9b40-45b6-962a-b912807dc6b0 - - - - - -] pointer_model                  = usbtablet log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Oct 09 16:02:01 compute-0 nova_compute[117331]: 2025-10-09 16:02:01.872 2 DEBUG oslo_service.backend._eventlet.service [None req-ab701943-9b40-45b6-962a-b912807dc6b0 - - - - - -] preallocate_images             = none log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Oct 09 16:02:01 compute-0 nova_compute[117331]: 2025-10-09 16:02:01.872 2 DEBUG oslo_service.backend._eventlet.service [None req-ab701943-9b40-45b6-962a-b912807dc6b0 - - - - - -] publish_errors                 = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Oct 09 16:02:01 compute-0 nova_compute[117331]: 2025-10-09 16:02:01.872 2 DEBUG oslo_service.backend._eventlet.service [None req-ab701943-9b40-45b6-962a-b912807dc6b0 - - - - - -] pybasedir                      = /usr/lib/python3.12/site-packages log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Oct 09 16:02:01 compute-0 nova_compute[117331]: 2025-10-09 16:02:01.872 2 DEBUG oslo_service.backend._eventlet.service [None req-ab701943-9b40-45b6-962a-b912807dc6b0 - - - - - -] ram_allocation_ratio           = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Oct 09 16:02:01 compute-0 nova_compute[117331]: 2025-10-09 16:02:01.872 2 DEBUG oslo_service.backend._eventlet.service [None req-ab701943-9b40-45b6-962a-b912807dc6b0 - - - - - -] rate_limit_burst               = 0 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Oct 09 16:02:01 compute-0 nova_compute[117331]: 2025-10-09 16:02:01.872 2 DEBUG oslo_service.backend._eventlet.service [None req-ab701943-9b40-45b6-962a-b912807dc6b0 - - - - - -] rate_limit_except_level        = CRITICAL log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Oct 09 16:02:01 compute-0 nova_compute[117331]: 2025-10-09 16:02:01.873 2 DEBUG oslo_service.backend._eventlet.service [None req-ab701943-9b40-45b6-962a-b912807dc6b0 - - - - - -] rate_limit_interval            = 0 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Oct 09 16:02:01 compute-0 nova_compute[117331]: 2025-10-09 16:02:01.873 2 DEBUG oslo_service.backend._eventlet.service [None req-ab701943-9b40-45b6-962a-b912807dc6b0 - - - - - -] reboot_timeout                 = 0 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Oct 09 16:02:01 compute-0 nova_compute[117331]: 2025-10-09 16:02:01.873 2 DEBUG oslo_service.backend._eventlet.service [None req-ab701943-9b40-45b6-962a-b912807dc6b0 - - - - - -] reclaim_instance_interval      = 0 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Oct 09 16:02:01 compute-0 nova_compute[117331]: 2025-10-09 16:02:01.873 2 DEBUG oslo_service.backend._eventlet.service [None req-ab701943-9b40-45b6-962a-b912807dc6b0 - - - - - -] record                         = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Oct 09 16:02:01 compute-0 nova_compute[117331]: 2025-10-09 16:02:01.873 2 DEBUG oslo_service.backend._eventlet.service [None req-ab701943-9b40-45b6-962a-b912807dc6b0 - - - - - -] reimage_timeout_per_gb         = 20 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Oct 09 16:02:01 compute-0 nova_compute[117331]: 2025-10-09 16:02:01.873 2 DEBUG oslo_service.backend._eventlet.service [None req-ab701943-9b40-45b6-962a-b912807dc6b0 - - - - - -] report_interval                = 10 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Oct 09 16:02:01 compute-0 nova_compute[117331]: 2025-10-09 16:02:01.874 2 DEBUG oslo_service.backend._eventlet.service [None req-ab701943-9b40-45b6-962a-b912807dc6b0 - - - - - -] rescue_timeout                 = 0 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Oct 09 16:02:01 compute-0 nova_compute[117331]: 2025-10-09 16:02:01.874 2 DEBUG oslo_service.backend._eventlet.service [None req-ab701943-9b40-45b6-962a-b912807dc6b0 - - - - - -] reserved_host_cpus             = 0 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Oct 09 16:02:01 compute-0 nova_compute[117331]: 2025-10-09 16:02:01.874 2 DEBUG oslo_service.backend._eventlet.service [None req-ab701943-9b40-45b6-962a-b912807dc6b0 - - - - - -] reserved_host_disk_mb          = 0 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Oct 09 16:02:01 compute-0 nova_compute[117331]: 2025-10-09 16:02:01.874 2 DEBUG oslo_service.backend._eventlet.service [None req-ab701943-9b40-45b6-962a-b912807dc6b0 - - - - - -] reserved_host_memory_mb        = 512 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Oct 09 16:02:01 compute-0 nova_compute[117331]: 2025-10-09 16:02:01.874 2 DEBUG oslo_service.backend._eventlet.service [None req-ab701943-9b40-45b6-962a-b912807dc6b0 - - - - - -] reserved_huge_pages            = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Oct 09 16:02:01 compute-0 nova_compute[117331]: 2025-10-09 16:02:01.875 2 DEBUG oslo_service.backend._eventlet.service [None req-ab701943-9b40-45b6-962a-b912807dc6b0 - - - - - -] resize_confirm_window          = 0 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Oct 09 16:02:01 compute-0 nova_compute[117331]: 2025-10-09 16:02:01.875 2 DEBUG oslo_service.backend._eventlet.service [None req-ab701943-9b40-45b6-962a-b912807dc6b0 - - - - - -] resize_fs_using_block_device   = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Oct 09 16:02:01 compute-0 nova_compute[117331]: 2025-10-09 16:02:01.875 2 DEBUG oslo_service.backend._eventlet.service [None req-ab701943-9b40-45b6-962a-b912807dc6b0 - - - - - -] resume_guests_state_on_host_boot = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Oct 09 16:02:01 compute-0 nova_compute[117331]: 2025-10-09 16:02:01.875 2 DEBUG oslo_service.backend._eventlet.service [None req-ab701943-9b40-45b6-962a-b912807dc6b0 - - - - - -] rootwrap_config                = /etc/nova/rootwrap.conf log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Oct 09 16:02:01 compute-0 nova_compute[117331]: 2025-10-09 16:02:01.875 2 DEBUG oslo_service.backend._eventlet.service [None req-ab701943-9b40-45b6-962a-b912807dc6b0 - - - - - -] rpc_response_timeout           = 60 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Oct 09 16:02:01 compute-0 nova_compute[117331]: 2025-10-09 16:02:01.875 2 DEBUG oslo_service.backend._eventlet.service [None req-ab701943-9b40-45b6-962a-b912807dc6b0 - - - - - -] run_external_periodic_tasks    = True log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Oct 09 16:02:01 compute-0 nova_compute[117331]: 2025-10-09 16:02:01.876 2 DEBUG oslo_service.backend._eventlet.service [None req-ab701943-9b40-45b6-962a-b912807dc6b0 - - - - - -] running_deleted_instance_action = reap log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Oct 09 16:02:01 compute-0 nova_compute[117331]: 2025-10-09 16:02:01.876 2 DEBUG oslo_service.backend._eventlet.service [None req-ab701943-9b40-45b6-962a-b912807dc6b0 - - - - - -] running_deleted_instance_poll_interval = 1800 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Oct 09 16:02:01 compute-0 nova_compute[117331]: 2025-10-09 16:02:01.876 2 DEBUG oslo_service.backend._eventlet.service [None req-ab701943-9b40-45b6-962a-b912807dc6b0 - - - - - -] running_deleted_instance_timeout = 0 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Oct 09 16:02:01 compute-0 nova_compute[117331]: 2025-10-09 16:02:01.876 2 DEBUG oslo_service.backend._eventlet.service [None req-ab701943-9b40-45b6-962a-b912807dc6b0 - - - - - -] scheduler_instance_sync_interval = 120 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Oct 09 16:02:01 compute-0 nova_compute[117331]: 2025-10-09 16:02:01.877 2 DEBUG oslo_service.backend._eventlet.service [None req-ab701943-9b40-45b6-962a-b912807dc6b0 - - - - - -] service_down_time              = 60 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Oct 09 16:02:01 compute-0 nova_compute[117331]: 2025-10-09 16:02:01.877 2 DEBUG oslo_service.backend._eventlet.service [None req-ab701943-9b40-45b6-962a-b912807dc6b0 - - - - - -] servicegroup_driver            = db log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Oct 09 16:02:01 compute-0 nova_compute[117331]: 2025-10-09 16:02:01.877 2 DEBUG oslo_service.backend._eventlet.service [None req-ab701943-9b40-45b6-962a-b912807dc6b0 - - - - - -] shell_completion               = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Oct 09 16:02:01 compute-0 nova_compute[117331]: 2025-10-09 16:02:01.877 2 DEBUG oslo_service.backend._eventlet.service [None req-ab701943-9b40-45b6-962a-b912807dc6b0 - - - - - -] shelved_offload_time           = 0 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Oct 09 16:02:01 compute-0 nova_compute[117331]: 2025-10-09 16:02:01.878 2 DEBUG oslo_service.backend._eventlet.service [None req-ab701943-9b40-45b6-962a-b912807dc6b0 - - - - - -] shelved_poll_interval          = 3600 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Oct 09 16:02:01 compute-0 nova_compute[117331]: 2025-10-09 16:02:01.878 2 DEBUG oslo_service.backend._eventlet.service [None req-ab701943-9b40-45b6-962a-b912807dc6b0 - - - - - -] shutdown_timeout               = 60 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Oct 09 16:02:01 compute-0 nova_compute[117331]: 2025-10-09 16:02:01.878 2 DEBUG oslo_service.backend._eventlet.service [None req-ab701943-9b40-45b6-962a-b912807dc6b0 - - - - - -] source_is_ipv6                 = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Oct 09 16:02:01 compute-0 nova_compute[117331]: 2025-10-09 16:02:01.878 2 DEBUG oslo_service.backend._eventlet.service [None req-ab701943-9b40-45b6-962a-b912807dc6b0 - - - - - -] ssl_only                       = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Oct 09 16:02:01 compute-0 nova_compute[117331]: 2025-10-09 16:02:01.878 2 DEBUG oslo_service.backend._eventlet.service [None req-ab701943-9b40-45b6-962a-b912807dc6b0 - - - - - -] state_path                     = /var/lib/nova log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Oct 09 16:02:01 compute-0 nova_compute[117331]: 2025-10-09 16:02:01.878 2 DEBUG oslo_service.backend._eventlet.service [None req-ab701943-9b40-45b6-962a-b912807dc6b0 - - - - - -] sync_power_state_interval      = 600 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Oct 09 16:02:01 compute-0 nova_compute[117331]: 2025-10-09 16:02:01.879 2 DEBUG oslo_service.backend._eventlet.service [None req-ab701943-9b40-45b6-962a-b912807dc6b0 - - - - - -] sync_power_state_pool_size     = 1000 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Oct 09 16:02:01 compute-0 nova_compute[117331]: 2025-10-09 16:02:01.879 2 DEBUG oslo_service.backend._eventlet.service [None req-ab701943-9b40-45b6-962a-b912807dc6b0 - - - - - -] syslog_log_facility            = LOG_USER log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Oct 09 16:02:01 compute-0 nova_compute[117331]: 2025-10-09 16:02:01.879 2 DEBUG oslo_service.backend._eventlet.service [None req-ab701943-9b40-45b6-962a-b912807dc6b0 - - - - - -] tempdir                        = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Oct 09 16:02:01 compute-0 nova_compute[117331]: 2025-10-09 16:02:01.879 2 DEBUG oslo_service.backend._eventlet.service [None req-ab701943-9b40-45b6-962a-b912807dc6b0 - - - - - -] thread_pool_statistic_period   = -1 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Oct 09 16:02:01 compute-0 nova_compute[117331]: 2025-10-09 16:02:01.879 2 DEBUG oslo_service.backend._eventlet.service [None req-ab701943-9b40-45b6-962a-b912807dc6b0 - - - - - -] timeout_nbd                    = 10 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Oct 09 16:02:01 compute-0 nova_compute[117331]: 2025-10-09 16:02:01.880 2 DEBUG oslo_service.backend._eventlet.service [None req-ab701943-9b40-45b6-962a-b912807dc6b0 - - - - - -] transport_url                  = **** log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Oct 09 16:02:01 compute-0 nova_compute[117331]: 2025-10-09 16:02:01.880 2 DEBUG oslo_service.backend._eventlet.service [None req-ab701943-9b40-45b6-962a-b912807dc6b0 - - - - - -] update_resources_interval      = 0 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Oct 09 16:02:01 compute-0 nova_compute[117331]: 2025-10-09 16:02:01.880 2 DEBUG oslo_service.backend._eventlet.service [None req-ab701943-9b40-45b6-962a-b912807dc6b0 - - - - - -] use_cow_images                 = True log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Oct 09 16:02:01 compute-0 nova_compute[117331]: 2025-10-09 16:02:01.880 2 DEBUG oslo_service.backend._eventlet.service [None req-ab701943-9b40-45b6-962a-b912807dc6b0 - - - - - -] use_journal                    = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Oct 09 16:02:01 compute-0 nova_compute[117331]: 2025-10-09 16:02:01.881 2 DEBUG oslo_service.backend._eventlet.service [None req-ab701943-9b40-45b6-962a-b912807dc6b0 - - - - - -] use_json                       = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Oct 09 16:02:01 compute-0 nova_compute[117331]: 2025-10-09 16:02:01.881 2 DEBUG oslo_service.backend._eventlet.service [None req-ab701943-9b40-45b6-962a-b912807dc6b0 - - - - - -] use_rootwrap_daemon            = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Oct 09 16:02:01 compute-0 nova_compute[117331]: 2025-10-09 16:02:01.881 2 DEBUG oslo_service.backend._eventlet.service [None req-ab701943-9b40-45b6-962a-b912807dc6b0 - - - - - -] use_stderr                     = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Oct 09 16:02:01 compute-0 nova_compute[117331]: 2025-10-09 16:02:01.881 2 DEBUG oslo_service.backend._eventlet.service [None req-ab701943-9b40-45b6-962a-b912807dc6b0 - - - - - -] use_syslog                     = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Oct 09 16:02:01 compute-0 nova_compute[117331]: 2025-10-09 16:02:01.881 2 DEBUG oslo_service.backend._eventlet.service [None req-ab701943-9b40-45b6-962a-b912807dc6b0 - - - - - -] vcpu_pin_set                   = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Oct 09 16:02:01 compute-0 nova_compute[117331]: 2025-10-09 16:02:01.882 2 DEBUG oslo_service.backend._eventlet.service [None req-ab701943-9b40-45b6-962a-b912807dc6b0 - - - - - -] vif_plugging_is_fatal          = True log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Oct 09 16:02:01 compute-0 nova_compute[117331]: 2025-10-09 16:02:01.882 2 DEBUG oslo_service.backend._eventlet.service [None req-ab701943-9b40-45b6-962a-b912807dc6b0 - - - - - -] vif_plugging_timeout           = 300 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Oct 09 16:02:01 compute-0 nova_compute[117331]: 2025-10-09 16:02:01.882 2 DEBUG oslo_service.backend._eventlet.service [None req-ab701943-9b40-45b6-962a-b912807dc6b0 - - - - - -] virt_mkfs                      = [] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Oct 09 16:02:01 compute-0 nova_compute[117331]: 2025-10-09 16:02:01.882 2 DEBUG oslo_service.backend._eventlet.service [None req-ab701943-9b40-45b6-962a-b912807dc6b0 - - - - - -] volume_usage_poll_interval     = 0 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Oct 09 16:02:01 compute-0 nova_compute[117331]: 2025-10-09 16:02:01.882 2 DEBUG oslo_service.backend._eventlet.service [None req-ab701943-9b40-45b6-962a-b912807dc6b0 - - - - - -] watch_log_file                 = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Oct 09 16:02:01 compute-0 nova_compute[117331]: 2025-10-09 16:02:01.883 2 DEBUG oslo_service.backend._eventlet.service [None req-ab701943-9b40-45b6-962a-b912807dc6b0 - - - - - -] web                            = /usr/share/spice-html5 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Oct 09 16:02:01 compute-0 nova_compute[117331]: 2025-10-09 16:02:01.883 2 DEBUG oslo_service.backend._eventlet.service [None req-ab701943-9b40-45b6-962a-b912807dc6b0 - - - - - -] oslo_concurrency.disable_process_locking = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 09 16:02:01 compute-0 nova_compute[117331]: 2025-10-09 16:02:01.883 2 DEBUG oslo_service.backend._eventlet.service [None req-ab701943-9b40-45b6-962a-b912807dc6b0 - - - - - -] oslo_concurrency.lock_path     = /var/lib/nova/tmp log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 09 16:02:01 compute-0 nova_compute[117331]: 2025-10-09 16:02:01.883 2 DEBUG oslo_service.backend._eventlet.service [None req-ab701943-9b40-45b6-962a-b912807dc6b0 - - - - - -] os_brick.lock_path             = /var/lib/nova/tmp log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 09 16:02:01 compute-0 nova_compute[117331]: 2025-10-09 16:02:01.883 2 DEBUG oslo_service.backend._eventlet.service [None req-ab701943-9b40-45b6-962a-b912807dc6b0 - - - - - -] os_brick.wait_mpath_device_attempts = 4 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 09 16:02:01 compute-0 nova_compute[117331]: 2025-10-09 16:02:01.883 2 DEBUG oslo_service.backend._eventlet.service [None req-ab701943-9b40-45b6-962a-b912807dc6b0 - - - - - -] os_brick.wait_mpath_device_interval = 1 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 09 16:02:01 compute-0 nova_compute[117331]: 2025-10-09 16:02:01.884 2 DEBUG oslo_service.backend._eventlet.service [None req-ab701943-9b40-45b6-962a-b912807dc6b0 - - - - - -] oslo_messaging_metrics.metrics_buffer_size = 1000 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 09 16:02:01 compute-0 nova_compute[117331]: 2025-10-09 16:02:01.884 2 DEBUG oslo_service.backend._eventlet.service [None req-ab701943-9b40-45b6-962a-b912807dc6b0 - - - - - -] oslo_messaging_metrics.metrics_enabled = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 09 16:02:01 compute-0 nova_compute[117331]: 2025-10-09 16:02:01.884 2 DEBUG oslo_service.backend._eventlet.service [None req-ab701943-9b40-45b6-962a-b912807dc6b0 - - - - - -] oslo_messaging_metrics.metrics_process_name =  log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 09 16:02:01 compute-0 nova_compute[117331]: 2025-10-09 16:02:01.884 2 DEBUG oslo_service.backend._eventlet.service [None req-ab701943-9b40-45b6-962a-b912807dc6b0 - - - - - -] oslo_messaging_metrics.metrics_socket_file = /var/tmp/metrics_collector.sock log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 09 16:02:01 compute-0 nova_compute[117331]: 2025-10-09 16:02:01.885 2 DEBUG oslo_service.backend._eventlet.service [None req-ab701943-9b40-45b6-962a-b912807dc6b0 - - - - - -] oslo_messaging_metrics.metrics_thread_stop_timeout = 10 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 09 16:02:01 compute-0 nova_compute[117331]: 2025-10-09 16:02:01.885 2 DEBUG oslo_service.backend._eventlet.service [None req-ab701943-9b40-45b6-962a-b912807dc6b0 - - - - - -] api.compute_link_prefix        = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 09 16:02:01 compute-0 nova_compute[117331]: 2025-10-09 16:02:01.885 2 DEBUG oslo_service.backend._eventlet.service [None req-ab701943-9b40-45b6-962a-b912807dc6b0 - - - - - -] api.config_drive_skip_versions = 1.0 2007-01-19 2007-03-01 2007-08-29 2007-10-10 2007-12-15 2008-02-01 2008-09-01 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 09 16:02:01 compute-0 nova_compute[117331]: 2025-10-09 16:02:01.885 2 DEBUG oslo_service.backend._eventlet.service [None req-ab701943-9b40-45b6-962a-b912807dc6b0 - - - - - -] api.dhcp_domain                =  log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 09 16:02:01 compute-0 nova_compute[117331]: 2025-10-09 16:02:01.885 2 DEBUG oslo_service.backend._eventlet.service [None req-ab701943-9b40-45b6-962a-b912807dc6b0 - - - - - -] api.enable_instance_password   = True log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 09 16:02:01 compute-0 nova_compute[117331]: 2025-10-09 16:02:01.885 2 DEBUG oslo_service.backend._eventlet.service [None req-ab701943-9b40-45b6-962a-b912807dc6b0 - - - - - -] api.glance_link_prefix         = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 09 16:02:01 compute-0 nova_compute[117331]: 2025-10-09 16:02:01.886 2 DEBUG oslo_service.backend._eventlet.service [None req-ab701943-9b40-45b6-962a-b912807dc6b0 - - - - - -] api.instance_list_cells_batch_fixed_size = 100 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 09 16:02:01 compute-0 nova_compute[117331]: 2025-10-09 16:02:01.886 2 DEBUG oslo_service.backend._eventlet.service [None req-ab701943-9b40-45b6-962a-b912807dc6b0 - - - - - -] api.instance_list_cells_batch_strategy = distributed log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 09 16:02:01 compute-0 nova_compute[117331]: 2025-10-09 16:02:01.886 2 DEBUG oslo_service.backend._eventlet.service [None req-ab701943-9b40-45b6-962a-b912807dc6b0 - - - - - -] api.instance_list_per_project_cells = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 09 16:02:01 compute-0 nova_compute[117331]: 2025-10-09 16:02:01.886 2 DEBUG oslo_service.backend._eventlet.service [None req-ab701943-9b40-45b6-962a-b912807dc6b0 - - - - - -] api.list_records_by_skipping_down_cells = True log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 09 16:02:01 compute-0 nova_compute[117331]: 2025-10-09 16:02:01.886 2 DEBUG oslo_service.backend._eventlet.service [None req-ab701943-9b40-45b6-962a-b912807dc6b0 - - - - - -] api.local_metadata_per_cell    = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 09 16:02:01 compute-0 nova_compute[117331]: 2025-10-09 16:02:01.887 2 DEBUG oslo_service.backend._eventlet.service [None req-ab701943-9b40-45b6-962a-b912807dc6b0 - - - - - -] api.max_limit                  = 1000 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 09 16:02:01 compute-0 nova_compute[117331]: 2025-10-09 16:02:01.887 2 DEBUG oslo_service.backend._eventlet.service [None req-ab701943-9b40-45b6-962a-b912807dc6b0 - - - - - -] api.metadata_cache_expiration  = 15 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 09 16:02:01 compute-0 nova_compute[117331]: 2025-10-09 16:02:01.887 2 DEBUG oslo_service.backend._eventlet.service [None req-ab701943-9b40-45b6-962a-b912807dc6b0 - - - - - -] api.neutron_default_project_id = default log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 09 16:02:01 compute-0 nova_compute[117331]: 2025-10-09 16:02:01.887 2 DEBUG oslo_service.backend._eventlet.service [None req-ab701943-9b40-45b6-962a-b912807dc6b0 - - - - - -] api.response_validation        = warn log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 09 16:02:01 compute-0 nova_compute[117331]: 2025-10-09 16:02:01.887 2 DEBUG oslo_service.backend._eventlet.service [None req-ab701943-9b40-45b6-962a-b912807dc6b0 - - - - - -] api.use_neutron_default_nets   = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 09 16:02:01 compute-0 nova_compute[117331]: 2025-10-09 16:02:01.887 2 DEBUG oslo_service.backend._eventlet.service [None req-ab701943-9b40-45b6-962a-b912807dc6b0 - - - - - -] api.vendordata_dynamic_connect_timeout = 5 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 09 16:02:01 compute-0 nova_compute[117331]: 2025-10-09 16:02:01.888 2 DEBUG oslo_service.backend._eventlet.service [None req-ab701943-9b40-45b6-962a-b912807dc6b0 - - - - - -] api.vendordata_dynamic_failure_fatal = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 09 16:02:01 compute-0 nova_compute[117331]: 2025-10-09 16:02:01.888 2 DEBUG oslo_service.backend._eventlet.service [None req-ab701943-9b40-45b6-962a-b912807dc6b0 - - - - - -] api.vendordata_dynamic_read_timeout = 5 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 09 16:02:01 compute-0 nova_compute[117331]: 2025-10-09 16:02:01.888 2 DEBUG oslo_service.backend._eventlet.service [None req-ab701943-9b40-45b6-962a-b912807dc6b0 - - - - - -] api.vendordata_dynamic_ssl_certfile =  log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 09 16:02:01 compute-0 nova_compute[117331]: 2025-10-09 16:02:01.888 2 DEBUG oslo_service.backend._eventlet.service [None req-ab701943-9b40-45b6-962a-b912807dc6b0 - - - - - -] api.vendordata_dynamic_targets = [] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 09 16:02:01 compute-0 nova_compute[117331]: 2025-10-09 16:02:01.888 2 DEBUG oslo_service.backend._eventlet.service [None req-ab701943-9b40-45b6-962a-b912807dc6b0 - - - - - -] api.vendordata_jsonfile_path   = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 09 16:02:01 compute-0 nova_compute[117331]: 2025-10-09 16:02:01.889 2 DEBUG oslo_service.backend._eventlet.service [None req-ab701943-9b40-45b6-962a-b912807dc6b0 - - - - - -] api.vendordata_providers       = ['StaticJSON'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 09 16:02:01 compute-0 nova_compute[117331]: 2025-10-09 16:02:01.889 2 DEBUG oslo_service.backend._eventlet.service [None req-ab701943-9b40-45b6-962a-b912807dc6b0 - - - - - -] cache.backend                  = oslo_cache.dict log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 09 16:02:01 compute-0 nova_compute[117331]: 2025-10-09 16:02:01.889 2 DEBUG oslo_service.backend._eventlet.service [None req-ab701943-9b40-45b6-962a-b912807dc6b0 - - - - - -] cache.backend_argument         = **** log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 09 16:02:01 compute-0 nova_compute[117331]: 2025-10-09 16:02:01.889 2 DEBUG oslo_service.backend._eventlet.service [None req-ab701943-9b40-45b6-962a-b912807dc6b0 - - - - - -] cache.backend_expiration_time  = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 09 16:02:01 compute-0 nova_compute[117331]: 2025-10-09 16:02:01.889 2 DEBUG oslo_service.backend._eventlet.service [None req-ab701943-9b40-45b6-962a-b912807dc6b0 - - - - - -] cache.config_prefix            = cache.oslo log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 09 16:02:01 compute-0 nova_compute[117331]: 2025-10-09 16:02:01.889 2 DEBUG oslo_service.backend._eventlet.service [None req-ab701943-9b40-45b6-962a-b912807dc6b0 - - - - - -] cache.dead_timeout             = 60.0 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 09 16:02:01 compute-0 nova_compute[117331]: 2025-10-09 16:02:01.890 2 DEBUG oslo_service.backend._eventlet.service [None req-ab701943-9b40-45b6-962a-b912807dc6b0 - - - - - -] cache.debug_cache_backend      = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 09 16:02:01 compute-0 nova_compute[117331]: 2025-10-09 16:02:01.890 2 DEBUG oslo_service.backend._eventlet.service [None req-ab701943-9b40-45b6-962a-b912807dc6b0 - - - - - -] cache.enable_retry_client      = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 09 16:02:01 compute-0 nova_compute[117331]: 2025-10-09 16:02:01.890 2 DEBUG oslo_service.backend._eventlet.service [None req-ab701943-9b40-45b6-962a-b912807dc6b0 - - - - - -] cache.enable_socket_keepalive  = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 09 16:02:01 compute-0 nova_compute[117331]: 2025-10-09 16:02:01.890 2 DEBUG oslo_service.backend._eventlet.service [None req-ab701943-9b40-45b6-962a-b912807dc6b0 - - - - - -] cache.enabled                  = True log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 09 16:02:01 compute-0 nova_compute[117331]: 2025-10-09 16:02:01.891 2 DEBUG oslo_service.backend._eventlet.service [None req-ab701943-9b40-45b6-962a-b912807dc6b0 - - - - - -] cache.enforce_fips_mode        = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 09 16:02:01 compute-0 nova_compute[117331]: 2025-10-09 16:02:01.891 2 DEBUG oslo_service.backend._eventlet.service [None req-ab701943-9b40-45b6-962a-b912807dc6b0 - - - - - -] cache.expiration_time          = 600 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 09 16:02:01 compute-0 nova_compute[117331]: 2025-10-09 16:02:01.891 2 DEBUG oslo_service.backend._eventlet.service [None req-ab701943-9b40-45b6-962a-b912807dc6b0 - - - - - -] cache.hashclient_retry_attempts = 2 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 09 16:02:01 compute-0 nova_compute[117331]: 2025-10-09 16:02:01.891 2 DEBUG oslo_service.backend._eventlet.service [None req-ab701943-9b40-45b6-962a-b912807dc6b0 - - - - - -] cache.hashclient_retry_delay   = 1.0 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 09 16:02:01 compute-0 nova_compute[117331]: 2025-10-09 16:02:01.891 2 DEBUG oslo_service.backend._eventlet.service [None req-ab701943-9b40-45b6-962a-b912807dc6b0 - - - - - -] cache.memcache_dead_retry      = 300 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 09 16:02:01 compute-0 nova_compute[117331]: 2025-10-09 16:02:01.892 2 DEBUG oslo_service.backend._eventlet.service [None req-ab701943-9b40-45b6-962a-b912807dc6b0 - - - - - -] cache.memcache_password        = **** log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 09 16:02:01 compute-0 nova_compute[117331]: 2025-10-09 16:02:01.892 2 DEBUG oslo_service.backend._eventlet.service [None req-ab701943-9b40-45b6-962a-b912807dc6b0 - - - - - -] cache.memcache_pool_connection_get_timeout = 10 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 09 16:02:01 compute-0 nova_compute[117331]: 2025-10-09 16:02:01.892 2 DEBUG oslo_service.backend._eventlet.service [None req-ab701943-9b40-45b6-962a-b912807dc6b0 - - - - - -] cache.memcache_pool_flush_on_reconnect = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 09 16:02:01 compute-0 nova_compute[117331]: 2025-10-09 16:02:01.892 2 DEBUG oslo_service.backend._eventlet.service [None req-ab701943-9b40-45b6-962a-b912807dc6b0 - - - - - -] cache.memcache_pool_maxsize    = 10 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 09 16:02:01 compute-0 nova_compute[117331]: 2025-10-09 16:02:01.892 2 DEBUG oslo_service.backend._eventlet.service [None req-ab701943-9b40-45b6-962a-b912807dc6b0 - - - - - -] cache.memcache_pool_unused_timeout = 60 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 09 16:02:01 compute-0 nova_compute[117331]: 2025-10-09 16:02:01.893 2 DEBUG oslo_service.backend._eventlet.service [None req-ab701943-9b40-45b6-962a-b912807dc6b0 - - - - - -] cache.memcache_sasl_enabled    = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 09 16:02:01 compute-0 nova_compute[117331]: 2025-10-09 16:02:01.893 2 DEBUG oslo_service.backend._eventlet.service [None req-ab701943-9b40-45b6-962a-b912807dc6b0 - - - - - -] cache.memcache_servers         = ['localhost:11211'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 09 16:02:01 compute-0 nova_compute[117331]: 2025-10-09 16:02:01.893 2 DEBUG oslo_service.backend._eventlet.service [None req-ab701943-9b40-45b6-962a-b912807dc6b0 - - - - - -] cache.memcache_socket_timeout  = 1.0 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 09 16:02:01 compute-0 nova_compute[117331]: 2025-10-09 16:02:01.893 2 DEBUG oslo_service.backend._eventlet.service [None req-ab701943-9b40-45b6-962a-b912807dc6b0 - - - - - -] cache.memcache_username        = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 09 16:02:01 compute-0 nova_compute[117331]: 2025-10-09 16:02:01.893 2 DEBUG oslo_service.backend._eventlet.service [None req-ab701943-9b40-45b6-962a-b912807dc6b0 - - - - - -] cache.proxies                  = [] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 09 16:02:01 compute-0 nova_compute[117331]: 2025-10-09 16:02:01.894 2 DEBUG oslo_service.backend._eventlet.service [None req-ab701943-9b40-45b6-962a-b912807dc6b0 - - - - - -] cache.redis_db                 = 0 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 09 16:02:01 compute-0 nova_compute[117331]: 2025-10-09 16:02:01.894 2 DEBUG oslo_service.backend._eventlet.service [None req-ab701943-9b40-45b6-962a-b912807dc6b0 - - - - - -] cache.redis_password           = **** log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 09 16:02:01 compute-0 nova_compute[117331]: 2025-10-09 16:02:01.894 2 DEBUG oslo_service.backend._eventlet.service [None req-ab701943-9b40-45b6-962a-b912807dc6b0 - - - - - -] cache.redis_sentinel_service_name = mymaster log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 09 16:02:01 compute-0 nova_compute[117331]: 2025-10-09 16:02:01.894 2 DEBUG oslo_service.backend._eventlet.service [None req-ab701943-9b40-45b6-962a-b912807dc6b0 - - - - - -] cache.redis_sentinels          = ['localhost:26379'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 09 16:02:01 compute-0 nova_compute[117331]: 2025-10-09 16:02:01.894 2 DEBUG oslo_service.backend._eventlet.service [None req-ab701943-9b40-45b6-962a-b912807dc6b0 - - - - - -] cache.redis_server             = localhost:6379 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 09 16:02:01 compute-0 nova_compute[117331]: 2025-10-09 16:02:01.894 2 DEBUG oslo_service.backend._eventlet.service [None req-ab701943-9b40-45b6-962a-b912807dc6b0 - - - - - -] cache.redis_socket_timeout     = 1.0 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 09 16:02:01 compute-0 nova_compute[117331]: 2025-10-09 16:02:01.895 2 DEBUG oslo_service.backend._eventlet.service [None req-ab701943-9b40-45b6-962a-b912807dc6b0 - - - - - -] cache.redis_username           = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 09 16:02:01 compute-0 nova_compute[117331]: 2025-10-09 16:02:01.895 2 DEBUG oslo_service.backend._eventlet.service [None req-ab701943-9b40-45b6-962a-b912807dc6b0 - - - - - -] cache.retry_attempts           = 2 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 09 16:02:01 compute-0 nova_compute[117331]: 2025-10-09 16:02:01.895 2 DEBUG oslo_service.backend._eventlet.service [None req-ab701943-9b40-45b6-962a-b912807dc6b0 - - - - - -] cache.retry_delay              = 0.0 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 09 16:02:01 compute-0 nova_compute[117331]: 2025-10-09 16:02:01.895 2 DEBUG oslo_service.backend._eventlet.service [None req-ab701943-9b40-45b6-962a-b912807dc6b0 - - - - - -] cache.socket_keepalive_count   = 1 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 09 16:02:01 compute-0 nova_compute[117331]: 2025-10-09 16:02:01.895 2 DEBUG oslo_service.backend._eventlet.service [None req-ab701943-9b40-45b6-962a-b912807dc6b0 - - - - - -] cache.socket_keepalive_idle    = 1 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 09 16:02:01 compute-0 nova_compute[117331]: 2025-10-09 16:02:01.895 2 DEBUG oslo_service.backend._eventlet.service [None req-ab701943-9b40-45b6-962a-b912807dc6b0 - - - - - -] cache.socket_keepalive_interval = 1 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 09 16:02:01 compute-0 nova_compute[117331]: 2025-10-09 16:02:01.896 2 DEBUG oslo_service.backend._eventlet.service [None req-ab701943-9b40-45b6-962a-b912807dc6b0 - - - - - -] cache.tls_allowed_ciphers      = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 09 16:02:01 compute-0 nova_compute[117331]: 2025-10-09 16:02:01.896 2 DEBUG oslo_service.backend._eventlet.service [None req-ab701943-9b40-45b6-962a-b912807dc6b0 - - - - - -] cache.tls_cafile               = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 09 16:02:01 compute-0 nova_compute[117331]: 2025-10-09 16:02:01.896 2 DEBUG oslo_service.backend._eventlet.service [None req-ab701943-9b40-45b6-962a-b912807dc6b0 - - - - - -] cache.tls_certfile             = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 09 16:02:01 compute-0 nova_compute[117331]: 2025-10-09 16:02:01.896 2 DEBUG oslo_service.backend._eventlet.service [None req-ab701943-9b40-45b6-962a-b912807dc6b0 - - - - - -] cache.tls_enabled              = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 09 16:02:01 compute-0 nova_compute[117331]: 2025-10-09 16:02:01.896 2 DEBUG oslo_service.backend._eventlet.service [None req-ab701943-9b40-45b6-962a-b912807dc6b0 - - - - - -] cache.tls_keyfile              = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 09 16:02:01 compute-0 nova_compute[117331]: 2025-10-09 16:02:01.897 2 DEBUG oslo_service.backend._eventlet.service [None req-ab701943-9b40-45b6-962a-b912807dc6b0 - - - - - -] cinder.auth_section            = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 09 16:02:01 compute-0 nova_compute[117331]: 2025-10-09 16:02:01.897 2 DEBUG oslo_service.backend._eventlet.service [None req-ab701943-9b40-45b6-962a-b912807dc6b0 - - - - - -] cinder.auth_type               = password log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 09 16:02:01 compute-0 nova_compute[117331]: 2025-10-09 16:02:01.897 2 DEBUG oslo_service.backend._eventlet.service [None req-ab701943-9b40-45b6-962a-b912807dc6b0 - - - - - -] cinder.cafile                  = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 09 16:02:01 compute-0 nova_compute[117331]: 2025-10-09 16:02:01.897 2 DEBUG oslo_service.backend._eventlet.service [None req-ab701943-9b40-45b6-962a-b912807dc6b0 - - - - - -] cinder.catalog_info            = volumev3:cinderv3:internalURL log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 09 16:02:01 compute-0 nova_compute[117331]: 2025-10-09 16:02:01.897 2 DEBUG oslo_service.backend._eventlet.service [None req-ab701943-9b40-45b6-962a-b912807dc6b0 - - - - - -] cinder.certfile                = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 09 16:02:01 compute-0 nova_compute[117331]: 2025-10-09 16:02:01.897 2 DEBUG oslo_service.backend._eventlet.service [None req-ab701943-9b40-45b6-962a-b912807dc6b0 - - - - - -] cinder.collect_timing          = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 09 16:02:01 compute-0 nova_compute[117331]: 2025-10-09 16:02:01.898 2 DEBUG oslo_service.backend._eventlet.service [None req-ab701943-9b40-45b6-962a-b912807dc6b0 - - - - - -] cinder.cross_az_attach         = True log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 09 16:02:01 compute-0 nova_compute[117331]: 2025-10-09 16:02:01.898 2 DEBUG oslo_service.backend._eventlet.service [None req-ab701943-9b40-45b6-962a-b912807dc6b0 - - - - - -] cinder.debug                   = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 09 16:02:01 compute-0 nova_compute[117331]: 2025-10-09 16:02:01.898 2 DEBUG oslo_service.backend._eventlet.service [None req-ab701943-9b40-45b6-962a-b912807dc6b0 - - - - - -] cinder.endpoint_template       = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 09 16:02:01 compute-0 nova_compute[117331]: 2025-10-09 16:02:01.898 2 DEBUG oslo_service.backend._eventlet.service [None req-ab701943-9b40-45b6-962a-b912807dc6b0 - - - - - -] cinder.http_retries            = 3 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 09 16:02:01 compute-0 nova_compute[117331]: 2025-10-09 16:02:01.898 2 DEBUG oslo_service.backend._eventlet.service [None req-ab701943-9b40-45b6-962a-b912807dc6b0 - - - - - -] cinder.insecure                = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 09 16:02:01 compute-0 nova_compute[117331]: 2025-10-09 16:02:01.898 2 DEBUG oslo_service.backend._eventlet.service [None req-ab701943-9b40-45b6-962a-b912807dc6b0 - - - - - -] cinder.keyfile                 = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 09 16:02:01 compute-0 nova_compute[117331]: 2025-10-09 16:02:01.899 2 DEBUG oslo_service.backend._eventlet.service [None req-ab701943-9b40-45b6-962a-b912807dc6b0 - - - - - -] cinder.os_region_name          = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 09 16:02:01 compute-0 nova_compute[117331]: 2025-10-09 16:02:01.899 2 DEBUG oslo_service.backend._eventlet.service [None req-ab701943-9b40-45b6-962a-b912807dc6b0 - - - - - -] cinder.split_loggers           = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 09 16:02:01 compute-0 nova_compute[117331]: 2025-10-09 16:02:01.899 2 DEBUG oslo_service.backend._eventlet.service [None req-ab701943-9b40-45b6-962a-b912807dc6b0 - - - - - -] cinder.timeout                 = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 09 16:02:01 compute-0 nova_compute[117331]: 2025-10-09 16:02:01.899 2 DEBUG oslo_service.backend._eventlet.service [None req-ab701943-9b40-45b6-962a-b912807dc6b0 - - - - - -] compute.consecutive_build_service_disable_threshold = 10 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 09 16:02:01 compute-0 nova_compute[117331]: 2025-10-09 16:02:01.899 2 DEBUG oslo_service.backend._eventlet.service [None req-ab701943-9b40-45b6-962a-b912807dc6b0 - - - - - -] compute.cpu_dedicated_set      = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 09 16:02:01 compute-0 nova_compute[117331]: 2025-10-09 16:02:01.900 2 DEBUG oslo_service.backend._eventlet.service [None req-ab701943-9b40-45b6-962a-b912807dc6b0 - - - - - -] compute.cpu_shared_set         = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 09 16:02:01 compute-0 nova_compute[117331]: 2025-10-09 16:02:01.900 2 DEBUG oslo_service.backend._eventlet.service [None req-ab701943-9b40-45b6-962a-b912807dc6b0 - - - - - -] compute.image_type_exclude_list = [] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 09 16:02:01 compute-0 nova_compute[117331]: 2025-10-09 16:02:01.900 2 DEBUG oslo_service.backend._eventlet.service [None req-ab701943-9b40-45b6-962a-b912807dc6b0 - - - - - -] compute.live_migration_wait_for_vif_plug = True log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 09 16:02:01 compute-0 nova_compute[117331]: 2025-10-09 16:02:01.900 2 DEBUG oslo_service.backend._eventlet.service [None req-ab701943-9b40-45b6-962a-b912807dc6b0 - - - - - -] compute.max_concurrent_disk_ops = 0 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 09 16:02:01 compute-0 nova_compute[117331]: 2025-10-09 16:02:01.901 2 DEBUG oslo_service.backend._eventlet.service [None req-ab701943-9b40-45b6-962a-b912807dc6b0 - - - - - -] compute.max_disk_devices_to_attach = -1 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 09 16:02:01 compute-0 nova_compute[117331]: 2025-10-09 16:02:01.901 2 DEBUG oslo_service.backend._eventlet.service [None req-ab701943-9b40-45b6-962a-b912807dc6b0 - - - - - -] compute.packing_host_numa_cells_allocation_strategy = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 09 16:02:01 compute-0 nova_compute[117331]: 2025-10-09 16:02:01.901 2 DEBUG oslo_service.backend._eventlet.service [None req-ab701943-9b40-45b6-962a-b912807dc6b0 - - - - - -] compute.provider_config_location = /etc/nova/provider_config/ log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 09 16:02:01 compute-0 nova_compute[117331]: 2025-10-09 16:02:01.901 2 DEBUG oslo_service.backend._eventlet.service [None req-ab701943-9b40-45b6-962a-b912807dc6b0 - - - - - -] compute.resource_provider_association_refresh = 300 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 09 16:02:01 compute-0 nova_compute[117331]: 2025-10-09 16:02:01.901 2 DEBUG oslo_service.backend._eventlet.service [None req-ab701943-9b40-45b6-962a-b912807dc6b0 - - - - - -] compute.sharing_providers_max_uuids_per_request = 200 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 09 16:02:01 compute-0 nova_compute[117331]: 2025-10-09 16:02:01.901 2 DEBUG oslo_service.backend._eventlet.service [None req-ab701943-9b40-45b6-962a-b912807dc6b0 - - - - - -] compute.shutdown_retry_interval = 10 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 09 16:02:01 compute-0 nova_compute[117331]: 2025-10-09 16:02:01.902 2 DEBUG oslo_service.backend._eventlet.service [None req-ab701943-9b40-45b6-962a-b912807dc6b0 - - - - - -] compute.vmdk_allowed_types     = ['streamOptimized', 'monolithicSparse'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 09 16:02:01 compute-0 nova_compute[117331]: 2025-10-09 16:02:01.902 2 DEBUG oslo_service.backend._eventlet.service [None req-ab701943-9b40-45b6-962a-b912807dc6b0 - - - - - -] conductor.workers              = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 09 16:02:01 compute-0 nova_compute[117331]: 2025-10-09 16:02:01.902 2 DEBUG oslo_service.backend._eventlet.service [None req-ab701943-9b40-45b6-962a-b912807dc6b0 - - - - - -] console.allowed_origins        = [] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 09 16:02:01 compute-0 nova_compute[117331]: 2025-10-09 16:02:01.902 2 DEBUG oslo_service.backend._eventlet.service [None req-ab701943-9b40-45b6-962a-b912807dc6b0 - - - - - -] console.ssl_ciphers            = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 09 16:02:01 compute-0 nova_compute[117331]: 2025-10-09 16:02:01.903 2 DEBUG oslo_service.backend._eventlet.service [None req-ab701943-9b40-45b6-962a-b912807dc6b0 - - - - - -] console.ssl_minimum_version    = default log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 09 16:02:01 compute-0 nova_compute[117331]: 2025-10-09 16:02:01.903 2 DEBUG oslo_service.backend._eventlet.service [None req-ab701943-9b40-45b6-962a-b912807dc6b0 - - - - - -] consoleauth.enforce_session_timeout = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 09 16:02:01 compute-0 nova_compute[117331]: 2025-10-09 16:02:01.903 2 DEBUG oslo_service.backend._eventlet.service [None req-ab701943-9b40-45b6-962a-b912807dc6b0 - - - - - -] consoleauth.token_ttl          = 600 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 09 16:02:01 compute-0 nova_compute[117331]: 2025-10-09 16:02:01.903 2 DEBUG oslo_service.backend._eventlet.service [None req-ab701943-9b40-45b6-962a-b912807dc6b0 - - - - - -] cyborg.cafile                  = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 09 16:02:01 compute-0 nova_compute[117331]: 2025-10-09 16:02:01.903 2 DEBUG oslo_service.backend._eventlet.service [None req-ab701943-9b40-45b6-962a-b912807dc6b0 - - - - - -] cyborg.certfile                = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 09 16:02:01 compute-0 nova_compute[117331]: 2025-10-09 16:02:01.903 2 DEBUG oslo_service.backend._eventlet.service [None req-ab701943-9b40-45b6-962a-b912807dc6b0 - - - - - -] cyborg.collect_timing          = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 09 16:02:01 compute-0 nova_compute[117331]: 2025-10-09 16:02:01.904 2 DEBUG oslo_service.backend._eventlet.service [None req-ab701943-9b40-45b6-962a-b912807dc6b0 - - - - - -] cyborg.connect_retries         = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 09 16:02:01 compute-0 nova_compute[117331]: 2025-10-09 16:02:01.904 2 DEBUG oslo_service.backend._eventlet.service [None req-ab701943-9b40-45b6-962a-b912807dc6b0 - - - - - -] cyborg.connect_retry_delay     = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 09 16:02:01 compute-0 nova_compute[117331]: 2025-10-09 16:02:01.904 2 DEBUG oslo_service.backend._eventlet.service [None req-ab701943-9b40-45b6-962a-b912807dc6b0 - - - - - -] cyborg.endpoint_override       = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 09 16:02:01 compute-0 nova_compute[117331]: 2025-10-09 16:02:01.904 2 DEBUG oslo_service.backend._eventlet.service [None req-ab701943-9b40-45b6-962a-b912807dc6b0 - - - - - -] cyborg.insecure                = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 09 16:02:01 compute-0 nova_compute[117331]: 2025-10-09 16:02:01.904 2 DEBUG oslo_service.backend._eventlet.service [None req-ab701943-9b40-45b6-962a-b912807dc6b0 - - - - - -] cyborg.keyfile                 = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 09 16:02:01 compute-0 nova_compute[117331]: 2025-10-09 16:02:01.905 2 DEBUG oslo_service.backend._eventlet.service [None req-ab701943-9b40-45b6-962a-b912807dc6b0 - - - - - -] cyborg.max_version             = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 09 16:02:01 compute-0 nova_compute[117331]: 2025-10-09 16:02:01.905 2 DEBUG oslo_service.backend._eventlet.service [None req-ab701943-9b40-45b6-962a-b912807dc6b0 - - - - - -] cyborg.min_version             = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 09 16:02:01 compute-0 nova_compute[117331]: 2025-10-09 16:02:01.905 2 DEBUG oslo_service.backend._eventlet.service [None req-ab701943-9b40-45b6-962a-b912807dc6b0 - - - - - -] cyborg.region_name             = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 09 16:02:01 compute-0 nova_compute[117331]: 2025-10-09 16:02:01.905 2 DEBUG oslo_service.backend._eventlet.service [None req-ab701943-9b40-45b6-962a-b912807dc6b0 - - - - - -] cyborg.retriable_status_codes  = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 09 16:02:01 compute-0 nova_compute[117331]: 2025-10-09 16:02:01.905 2 DEBUG oslo_service.backend._eventlet.service [None req-ab701943-9b40-45b6-962a-b912807dc6b0 - - - - - -] cyborg.service_name            = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 09 16:02:01 compute-0 nova_compute[117331]: 2025-10-09 16:02:01.905 2 DEBUG oslo_service.backend._eventlet.service [None req-ab701943-9b40-45b6-962a-b912807dc6b0 - - - - - -] cyborg.service_type            = accelerator log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 09 16:02:01 compute-0 nova_compute[117331]: 2025-10-09 16:02:01.906 2 DEBUG oslo_service.backend._eventlet.service [None req-ab701943-9b40-45b6-962a-b912807dc6b0 - - - - - -] cyborg.split_loggers           = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 09 16:02:01 compute-0 nova_compute[117331]: 2025-10-09 16:02:01.906 2 DEBUG oslo_service.backend._eventlet.service [None req-ab701943-9b40-45b6-962a-b912807dc6b0 - - - - - -] cyborg.status_code_retries     = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 09 16:02:01 compute-0 nova_compute[117331]: 2025-10-09 16:02:01.906 2 DEBUG oslo_service.backend._eventlet.service [None req-ab701943-9b40-45b6-962a-b912807dc6b0 - - - - - -] cyborg.status_code_retry_delay = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 09 16:02:01 compute-0 nova_compute[117331]: 2025-10-09 16:02:01.906 2 DEBUG oslo_service.backend._eventlet.service [None req-ab701943-9b40-45b6-962a-b912807dc6b0 - - - - - -] cyborg.timeout                 = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 09 16:02:01 compute-0 nova_compute[117331]: 2025-10-09 16:02:01.906 2 DEBUG oslo_service.backend._eventlet.service [None req-ab701943-9b40-45b6-962a-b912807dc6b0 - - - - - -] cyborg.valid_interfaces        = ['internal', 'public'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 09 16:02:01 compute-0 nova_compute[117331]: 2025-10-09 16:02:01.906 2 DEBUG oslo_service.backend._eventlet.service [None req-ab701943-9b40-45b6-962a-b912807dc6b0 - - - - - -] cyborg.version                 = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 09 16:02:01 compute-0 nova_compute[117331]: 2025-10-09 16:02:01.907 2 DEBUG oslo_service.backend._eventlet.service [None req-ab701943-9b40-45b6-962a-b912807dc6b0 - - - - - -] database.asyncio_connection    = **** log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 09 16:02:01 compute-0 nova_compute[117331]: 2025-10-09 16:02:01.907 2 DEBUG oslo_service.backend._eventlet.service [None req-ab701943-9b40-45b6-962a-b912807dc6b0 - - - - - -] database.asyncio_slave_connection = **** log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 09 16:02:01 compute-0 nova_compute[117331]: 2025-10-09 16:02:01.907 2 DEBUG oslo_service.backend._eventlet.service [None req-ab701943-9b40-45b6-962a-b912807dc6b0 - - - - - -] database.backend               = sqlalchemy log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 09 16:02:01 compute-0 nova_compute[117331]: 2025-10-09 16:02:01.907 2 DEBUG oslo_service.backend._eventlet.service [None req-ab701943-9b40-45b6-962a-b912807dc6b0 - - - - - -] database.connection            = **** log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 09 16:02:01 compute-0 nova_compute[117331]: 2025-10-09 16:02:01.908 2 DEBUG oslo_service.backend._eventlet.service [None req-ab701943-9b40-45b6-962a-b912807dc6b0 - - - - - -] database.connection_debug      = 0 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 09 16:02:01 compute-0 nova_compute[117331]: 2025-10-09 16:02:01.908 2 DEBUG oslo_service.backend._eventlet.service [None req-ab701943-9b40-45b6-962a-b912807dc6b0 - - - - - -] database.connection_parameters =  log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 09 16:02:01 compute-0 nova_compute[117331]: 2025-10-09 16:02:01.908 2 DEBUG oslo_service.backend._eventlet.service [None req-ab701943-9b40-45b6-962a-b912807dc6b0 - - - - - -] database.connection_recycle_time = 3600 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 09 16:02:01 compute-0 nova_compute[117331]: 2025-10-09 16:02:01.908 2 DEBUG oslo_service.backend._eventlet.service [None req-ab701943-9b40-45b6-962a-b912807dc6b0 - - - - - -] database.connection_trace      = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 09 16:02:01 compute-0 nova_compute[117331]: 2025-10-09 16:02:01.908 2 DEBUG oslo_service.backend._eventlet.service [None req-ab701943-9b40-45b6-962a-b912807dc6b0 - - - - - -] database.db_inc_retry_interval = True log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 09 16:02:01 compute-0 nova_compute[117331]: 2025-10-09 16:02:01.909 2 DEBUG oslo_service.backend._eventlet.service [None req-ab701943-9b40-45b6-962a-b912807dc6b0 - - - - - -] database.db_max_retries        = 20 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 09 16:02:01 compute-0 nova_compute[117331]: 2025-10-09 16:02:01.909 2 DEBUG oslo_service.backend._eventlet.service [None req-ab701943-9b40-45b6-962a-b912807dc6b0 - - - - - -] database.db_max_retry_interval = 10 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 09 16:02:01 compute-0 nova_compute[117331]: 2025-10-09 16:02:01.909 2 DEBUG oslo_service.backend._eventlet.service [None req-ab701943-9b40-45b6-962a-b912807dc6b0 - - - - - -] database.db_retry_interval     = 1 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 09 16:02:01 compute-0 nova_compute[117331]: 2025-10-09 16:02:01.909 2 DEBUG oslo_service.backend._eventlet.service [None req-ab701943-9b40-45b6-962a-b912807dc6b0 - - - - - -] database.max_overflow          = 50 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 09 16:02:01 compute-0 nova_compute[117331]: 2025-10-09 16:02:01.910 2 DEBUG oslo_service.backend._eventlet.service [None req-ab701943-9b40-45b6-962a-b912807dc6b0 - - - - - -] database.max_pool_size         = 5 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 09 16:02:01 compute-0 nova_compute[117331]: 2025-10-09 16:02:01.910 2 DEBUG oslo_service.backend._eventlet.service [None req-ab701943-9b40-45b6-962a-b912807dc6b0 - - - - - -] database.max_retries           = 10 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 09 16:02:01 compute-0 nova_compute[117331]: 2025-10-09 16:02:01.910 2 DEBUG oslo_service.backend._eventlet.service [None req-ab701943-9b40-45b6-962a-b912807dc6b0 - - - - - -] database.mysql_sql_mode        = TRADITIONAL log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 09 16:02:01 compute-0 nova_compute[117331]: 2025-10-09 16:02:01.910 2 DEBUG oslo_service.backend._eventlet.service [None req-ab701943-9b40-45b6-962a-b912807dc6b0 - - - - - -] database.mysql_wsrep_sync_wait = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 09 16:02:01 compute-0 nova_compute[117331]: 2025-10-09 16:02:01.911 2 DEBUG oslo_service.backend._eventlet.service [None req-ab701943-9b40-45b6-962a-b912807dc6b0 - - - - - -] database.pool_timeout          = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 09 16:02:01 compute-0 nova_compute[117331]: 2025-10-09 16:02:01.911 2 DEBUG oslo_service.backend._eventlet.service [None req-ab701943-9b40-45b6-962a-b912807dc6b0 - - - - - -] database.retry_interval        = 10 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 09 16:02:01 compute-0 nova_compute[117331]: 2025-10-09 16:02:01.911 2 DEBUG oslo_service.backend._eventlet.service [None req-ab701943-9b40-45b6-962a-b912807dc6b0 - - - - - -] database.slave_connection      = **** log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 09 16:02:01 compute-0 nova_compute[117331]: 2025-10-09 16:02:01.911 2 DEBUG oslo_service.backend._eventlet.service [None req-ab701943-9b40-45b6-962a-b912807dc6b0 - - - - - -] database.sqlite_synchronous    = True log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 09 16:02:01 compute-0 nova_compute[117331]: 2025-10-09 16:02:01.911 2 DEBUG oslo_service.backend._eventlet.service [None req-ab701943-9b40-45b6-962a-b912807dc6b0 - - - - - -] api_database.asyncio_connection = **** log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 09 16:02:01 compute-0 nova_compute[117331]: 2025-10-09 16:02:01.912 2 DEBUG oslo_service.backend._eventlet.service [None req-ab701943-9b40-45b6-962a-b912807dc6b0 - - - - - -] api_database.asyncio_slave_connection = **** log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 09 16:02:01 compute-0 nova_compute[117331]: 2025-10-09 16:02:01.912 2 DEBUG oslo_service.backend._eventlet.service [None req-ab701943-9b40-45b6-962a-b912807dc6b0 - - - - - -] api_database.backend           = sqlalchemy log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 09 16:02:01 compute-0 nova_compute[117331]: 2025-10-09 16:02:01.912 2 DEBUG oslo_service.backend._eventlet.service [None req-ab701943-9b40-45b6-962a-b912807dc6b0 - - - - - -] api_database.connection        = **** log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 09 16:02:01 compute-0 nova_compute[117331]: 2025-10-09 16:02:01.912 2 DEBUG oslo_service.backend._eventlet.service [None req-ab701943-9b40-45b6-962a-b912807dc6b0 - - - - - -] api_database.connection_debug  = 0 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 09 16:02:01 compute-0 nova_compute[117331]: 2025-10-09 16:02:01.913 2 DEBUG oslo_service.backend._eventlet.service [None req-ab701943-9b40-45b6-962a-b912807dc6b0 - - - - - -] api_database.connection_parameters =  log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 09 16:02:01 compute-0 nova_compute[117331]: 2025-10-09 16:02:01.913 2 DEBUG oslo_service.backend._eventlet.service [None req-ab701943-9b40-45b6-962a-b912807dc6b0 - - - - - -] api_database.connection_recycle_time = 3600 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 09 16:02:01 compute-0 nova_compute[117331]: 2025-10-09 16:02:01.913 2 DEBUG oslo_service.backend._eventlet.service [None req-ab701943-9b40-45b6-962a-b912807dc6b0 - - - - - -] api_database.connection_trace  = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 09 16:02:01 compute-0 nova_compute[117331]: 2025-10-09 16:02:01.913 2 DEBUG oslo_service.backend._eventlet.service [None req-ab701943-9b40-45b6-962a-b912807dc6b0 - - - - - -] api_database.db_inc_retry_interval = True log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 09 16:02:01 compute-0 nova_compute[117331]: 2025-10-09 16:02:01.914 2 DEBUG oslo_service.backend._eventlet.service [None req-ab701943-9b40-45b6-962a-b912807dc6b0 - - - - - -] api_database.db_max_retries    = 20 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 09 16:02:01 compute-0 nova_compute[117331]: 2025-10-09 16:02:01.914 2 DEBUG oslo_service.backend._eventlet.service [None req-ab701943-9b40-45b6-962a-b912807dc6b0 - - - - - -] api_database.db_max_retry_interval = 10 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 09 16:02:01 compute-0 nova_compute[117331]: 2025-10-09 16:02:01.914 2 DEBUG oslo_service.backend._eventlet.service [None req-ab701943-9b40-45b6-962a-b912807dc6b0 - - - - - -] api_database.db_retry_interval = 1 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 09 16:02:01 compute-0 nova_compute[117331]: 2025-10-09 16:02:01.914 2 DEBUG oslo_service.backend._eventlet.service [None req-ab701943-9b40-45b6-962a-b912807dc6b0 - - - - - -] api_database.max_overflow      = 50 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 09 16:02:01 compute-0 nova_compute[117331]: 2025-10-09 16:02:01.915 2 DEBUG oslo_service.backend._eventlet.service [None req-ab701943-9b40-45b6-962a-b912807dc6b0 - - - - - -] api_database.max_pool_size     = 5 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 09 16:02:01 compute-0 nova_compute[117331]: 2025-10-09 16:02:01.915 2 DEBUG oslo_service.backend._eventlet.service [None req-ab701943-9b40-45b6-962a-b912807dc6b0 - - - - - -] api_database.max_retries       = 10 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 09 16:02:01 compute-0 nova_compute[117331]: 2025-10-09 16:02:01.915 2 DEBUG oslo_service.backend._eventlet.service [None req-ab701943-9b40-45b6-962a-b912807dc6b0 - - - - - -] api_database.mysql_sql_mode    = TRADITIONAL log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 09 16:02:01 compute-0 nova_compute[117331]: 2025-10-09 16:02:01.915 2 DEBUG oslo_service.backend._eventlet.service [None req-ab701943-9b40-45b6-962a-b912807dc6b0 - - - - - -] api_database.mysql_wsrep_sync_wait = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 09 16:02:01 compute-0 nova_compute[117331]: 2025-10-09 16:02:01.916 2 DEBUG oslo_service.backend._eventlet.service [None req-ab701943-9b40-45b6-962a-b912807dc6b0 - - - - - -] api_database.pool_timeout      = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 09 16:02:01 compute-0 nova_compute[117331]: 2025-10-09 16:02:01.916 2 DEBUG oslo_service.backend._eventlet.service [None req-ab701943-9b40-45b6-962a-b912807dc6b0 - - - - - -] api_database.retry_interval    = 10 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 09 16:02:01 compute-0 nova_compute[117331]: 2025-10-09 16:02:01.916 2 DEBUG oslo_service.backend._eventlet.service [None req-ab701943-9b40-45b6-962a-b912807dc6b0 - - - - - -] api_database.slave_connection  = **** log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 09 16:02:01 compute-0 nova_compute[117331]: 2025-10-09 16:02:01.916 2 DEBUG oslo_service.backend._eventlet.service [None req-ab701943-9b40-45b6-962a-b912807dc6b0 - - - - - -] api_database.sqlite_synchronous = True log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 09 16:02:01 compute-0 nova_compute[117331]: 2025-10-09 16:02:01.916 2 DEBUG oslo_service.backend._eventlet.service [None req-ab701943-9b40-45b6-962a-b912807dc6b0 - - - - - -] devices.enabled_mdev_types     = [] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 09 16:02:01 compute-0 nova_compute[117331]: 2025-10-09 16:02:01.917 2 DEBUG oslo_service.backend._eventlet.service [None req-ab701943-9b40-45b6-962a-b912807dc6b0 - - - - - -] ephemeral_storage_encryption.cipher = aes-xts-plain64 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 09 16:02:01 compute-0 nova_compute[117331]: 2025-10-09 16:02:01.917 2 DEBUG oslo_service.backend._eventlet.service [None req-ab701943-9b40-45b6-962a-b912807dc6b0 - - - - - -] ephemeral_storage_encryption.default_format = luks log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 09 16:02:01 compute-0 nova_compute[117331]: 2025-10-09 16:02:01.917 2 DEBUG oslo_service.backend._eventlet.service [None req-ab701943-9b40-45b6-962a-b912807dc6b0 - - - - - -] ephemeral_storage_encryption.enabled = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 09 16:02:01 compute-0 nova_compute[117331]: 2025-10-09 16:02:01.917 2 DEBUG oslo_service.backend._eventlet.service [None req-ab701943-9b40-45b6-962a-b912807dc6b0 - - - - - -] ephemeral_storage_encryption.key_size = 512 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 09 16:02:01 compute-0 nova_compute[117331]: 2025-10-09 16:02:01.918 2 DEBUG oslo_service.backend._eventlet.service [None req-ab701943-9b40-45b6-962a-b912807dc6b0 - - - - - -] glance.api_servers             = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 09 16:02:01 compute-0 nova_compute[117331]: 2025-10-09 16:02:01.918 2 DEBUG oslo_service.backend._eventlet.service [None req-ab701943-9b40-45b6-962a-b912807dc6b0 - - - - - -] glance.cafile                  = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 09 16:02:01 compute-0 nova_compute[117331]: 2025-10-09 16:02:01.918 2 DEBUG oslo_service.backend._eventlet.service [None req-ab701943-9b40-45b6-962a-b912807dc6b0 - - - - - -] glance.certfile                = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 09 16:02:01 compute-0 nova_compute[117331]: 2025-10-09 16:02:01.918 2 DEBUG oslo_service.backend._eventlet.service [None req-ab701943-9b40-45b6-962a-b912807dc6b0 - - - - - -] glance.collect_timing          = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 09 16:02:01 compute-0 nova_compute[117331]: 2025-10-09 16:02:01.918 2 DEBUG oslo_service.backend._eventlet.service [None req-ab701943-9b40-45b6-962a-b912807dc6b0 - - - - - -] glance.connect_retries         = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 09 16:02:01 compute-0 nova_compute[117331]: 2025-10-09 16:02:01.918 2 DEBUG oslo_service.backend._eventlet.service [None req-ab701943-9b40-45b6-962a-b912807dc6b0 - - - - - -] glance.connect_retry_delay     = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 09 16:02:01 compute-0 nova_compute[117331]: 2025-10-09 16:02:01.919 2 DEBUG oslo_service.backend._eventlet.service [None req-ab701943-9b40-45b6-962a-b912807dc6b0 - - - - - -] glance.debug                   = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 09 16:02:01 compute-0 nova_compute[117331]: 2025-10-09 16:02:01.919 2 DEBUG oslo_service.backend._eventlet.service [None req-ab701943-9b40-45b6-962a-b912807dc6b0 - - - - - -] glance.default_trusted_certificate_ids = [] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 09 16:02:01 compute-0 nova_compute[117331]: 2025-10-09 16:02:01.919 2 DEBUG oslo_service.backend._eventlet.service [None req-ab701943-9b40-45b6-962a-b912807dc6b0 - - - - - -] glance.enable_certificate_validation = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 09 16:02:01 compute-0 nova_compute[117331]: 2025-10-09 16:02:01.919 2 DEBUG oslo_service.backend._eventlet.service [None req-ab701943-9b40-45b6-962a-b912807dc6b0 - - - - - -] glance.enable_rbd_download     = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 09 16:02:01 compute-0 nova_compute[117331]: 2025-10-09 16:02:01.921 2 DEBUG oslo_service.backend._eventlet.service [None req-ab701943-9b40-45b6-962a-b912807dc6b0 - - - - - -] glance.endpoint_override       = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 09 16:02:01 compute-0 nova_compute[117331]: 2025-10-09 16:02:01.921 2 DEBUG oslo_service.backend._eventlet.service [None req-ab701943-9b40-45b6-962a-b912807dc6b0 - - - - - -] glance.insecure                = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 09 16:02:01 compute-0 nova_compute[117331]: 2025-10-09 16:02:01.921 2 DEBUG oslo_service.backend._eventlet.service [None req-ab701943-9b40-45b6-962a-b912807dc6b0 - - - - - -] glance.keyfile                 = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 09 16:02:01 compute-0 nova_compute[117331]: 2025-10-09 16:02:01.921 2 DEBUG oslo_service.backend._eventlet.service [None req-ab701943-9b40-45b6-962a-b912807dc6b0 - - - - - -] glance.max_version             = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 09 16:02:01 compute-0 nova_compute[117331]: 2025-10-09 16:02:01.922 2 DEBUG oslo_service.backend._eventlet.service [None req-ab701943-9b40-45b6-962a-b912807dc6b0 - - - - - -] glance.min_version             = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 09 16:02:01 compute-0 nova_compute[117331]: 2025-10-09 16:02:01.922 2 DEBUG oslo_service.backend._eventlet.service [None req-ab701943-9b40-45b6-962a-b912807dc6b0 - - - - - -] glance.num_retries             = 3 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 09 16:02:01 compute-0 nova_compute[117331]: 2025-10-09 16:02:01.922 2 DEBUG oslo_service.backend._eventlet.service [None req-ab701943-9b40-45b6-962a-b912807dc6b0 - - - - - -] glance.rbd_ceph_conf           =  log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 09 16:02:01 compute-0 nova_compute[117331]: 2025-10-09 16:02:01.922 2 DEBUG oslo_service.backend._eventlet.service [None req-ab701943-9b40-45b6-962a-b912807dc6b0 - - - - - -] glance.rbd_connect_timeout     = 5 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 09 16:02:01 compute-0 nova_compute[117331]: 2025-10-09 16:02:01.922 2 DEBUG oslo_service.backend._eventlet.service [None req-ab701943-9b40-45b6-962a-b912807dc6b0 - - - - - -] glance.rbd_pool                =  log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 09 16:02:01 compute-0 nova_compute[117331]: 2025-10-09 16:02:01.923 2 DEBUG oslo_service.backend._eventlet.service [None req-ab701943-9b40-45b6-962a-b912807dc6b0 - - - - - -] glance.rbd_user                =  log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 09 16:02:01 compute-0 nova_compute[117331]: 2025-10-09 16:02:01.923 2 DEBUG oslo_service.backend._eventlet.service [None req-ab701943-9b40-45b6-962a-b912807dc6b0 - - - - - -] glance.region_name             = regionOne log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 09 16:02:01 compute-0 nova_compute[117331]: 2025-10-09 16:02:01.923 2 DEBUG oslo_service.backend._eventlet.service [None req-ab701943-9b40-45b6-962a-b912807dc6b0 - - - - - -] glance.retriable_status_codes  = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 09 16:02:01 compute-0 nova_compute[117331]: 2025-10-09 16:02:01.923 2 DEBUG oslo_service.backend._eventlet.service [None req-ab701943-9b40-45b6-962a-b912807dc6b0 - - - - - -] glance.service_name            = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 09 16:02:01 compute-0 nova_compute[117331]: 2025-10-09 16:02:01.923 2 DEBUG oslo_service.backend._eventlet.service [None req-ab701943-9b40-45b6-962a-b912807dc6b0 - - - - - -] glance.service_type            = image log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 09 16:02:01 compute-0 nova_compute[117331]: 2025-10-09 16:02:01.924 2 DEBUG oslo_service.backend._eventlet.service [None req-ab701943-9b40-45b6-962a-b912807dc6b0 - - - - - -] glance.split_loggers           = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 09 16:02:01 compute-0 nova_compute[117331]: 2025-10-09 16:02:01.924 2 DEBUG oslo_service.backend._eventlet.service [None req-ab701943-9b40-45b6-962a-b912807dc6b0 - - - - - -] glance.status_code_retries     = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 09 16:02:01 compute-0 nova_compute[117331]: 2025-10-09 16:02:01.924 2 DEBUG oslo_service.backend._eventlet.service [None req-ab701943-9b40-45b6-962a-b912807dc6b0 - - - - - -] glance.status_code_retry_delay = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 09 16:02:01 compute-0 nova_compute[117331]: 2025-10-09 16:02:01.924 2 DEBUG oslo_service.backend._eventlet.service [None req-ab701943-9b40-45b6-962a-b912807dc6b0 - - - - - -] glance.timeout                 = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 09 16:02:01 compute-0 nova_compute[117331]: 2025-10-09 16:02:01.924 2 DEBUG oslo_service.backend._eventlet.service [None req-ab701943-9b40-45b6-962a-b912807dc6b0 - - - - - -] glance.valid_interfaces        = ['internal'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 09 16:02:01 compute-0 nova_compute[117331]: 2025-10-09 16:02:01.924 2 DEBUG oslo_service.backend._eventlet.service [None req-ab701943-9b40-45b6-962a-b912807dc6b0 - - - - - -] glance.verify_glance_signatures = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 09 16:02:01 compute-0 nova_compute[117331]: 2025-10-09 16:02:01.924 2 DEBUG oslo_service.backend._eventlet.service [None req-ab701943-9b40-45b6-962a-b912807dc6b0 - - - - - -] glance.version                 = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 09 16:02:01 compute-0 nova_compute[117331]: 2025-10-09 16:02:01.925 2 DEBUG oslo_service.backend._eventlet.service [None req-ab701943-9b40-45b6-962a-b912807dc6b0 - - - - - -] guestfs.debug                  = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 09 16:02:01 compute-0 nova_compute[117331]: 2025-10-09 16:02:01.925 2 DEBUG oslo_service.backend._eventlet.service [None req-ab701943-9b40-45b6-962a-b912807dc6b0 - - - - - -] manila.auth_section            = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 09 16:02:01 compute-0 nova_compute[117331]: 2025-10-09 16:02:01.925 2 DEBUG oslo_service.backend._eventlet.service [None req-ab701943-9b40-45b6-962a-b912807dc6b0 - - - - - -] manila.auth_type               = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 09 16:02:01 compute-0 nova_compute[117331]: 2025-10-09 16:02:01.925 2 DEBUG oslo_service.backend._eventlet.service [None req-ab701943-9b40-45b6-962a-b912807dc6b0 - - - - - -] manila.cafile                  = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 09 16:02:01 compute-0 nova_compute[117331]: 2025-10-09 16:02:01.925 2 DEBUG oslo_service.backend._eventlet.service [None req-ab701943-9b40-45b6-962a-b912807dc6b0 - - - - - -] manila.certfile                = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 09 16:02:01 compute-0 nova_compute[117331]: 2025-10-09 16:02:01.925 2 DEBUG oslo_service.backend._eventlet.service [None req-ab701943-9b40-45b6-962a-b912807dc6b0 - - - - - -] manila.collect_timing          = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 09 16:02:01 compute-0 nova_compute[117331]: 2025-10-09 16:02:01.925 2 DEBUG oslo_service.backend._eventlet.service [None req-ab701943-9b40-45b6-962a-b912807dc6b0 - - - - - -] manila.connect_retries         = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 09 16:02:01 compute-0 nova_compute[117331]: 2025-10-09 16:02:01.925 2 DEBUG oslo_service.backend._eventlet.service [None req-ab701943-9b40-45b6-962a-b912807dc6b0 - - - - - -] manila.connect_retry_delay     = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 09 16:02:01 compute-0 nova_compute[117331]: 2025-10-09 16:02:01.926 2 DEBUG oslo_service.backend._eventlet.service [None req-ab701943-9b40-45b6-962a-b912807dc6b0 - - - - - -] manila.endpoint_override       = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 09 16:02:01 compute-0 nova_compute[117331]: 2025-10-09 16:02:01.926 2 DEBUG oslo_service.backend._eventlet.service [None req-ab701943-9b40-45b6-962a-b912807dc6b0 - - - - - -] manila.insecure                = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 09 16:02:01 compute-0 nova_compute[117331]: 2025-10-09 16:02:01.926 2 DEBUG oslo_service.backend._eventlet.service [None req-ab701943-9b40-45b6-962a-b912807dc6b0 - - - - - -] manila.keyfile                 = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 09 16:02:01 compute-0 nova_compute[117331]: 2025-10-09 16:02:01.926 2 DEBUG oslo_service.backend._eventlet.service [None req-ab701943-9b40-45b6-962a-b912807dc6b0 - - - - - -] manila.max_version             = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 09 16:02:01 compute-0 nova_compute[117331]: 2025-10-09 16:02:01.926 2 DEBUG oslo_service.backend._eventlet.service [None req-ab701943-9b40-45b6-962a-b912807dc6b0 - - - - - -] manila.min_version             = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 09 16:02:01 compute-0 nova_compute[117331]: 2025-10-09 16:02:01.926 2 DEBUG oslo_service.backend._eventlet.service [None req-ab701943-9b40-45b6-962a-b912807dc6b0 - - - - - -] manila.region_name             = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 09 16:02:01 compute-0 nova_compute[117331]: 2025-10-09 16:02:01.926 2 DEBUG oslo_service.backend._eventlet.service [None req-ab701943-9b40-45b6-962a-b912807dc6b0 - - - - - -] manila.retriable_status_codes  = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 09 16:02:01 compute-0 nova_compute[117331]: 2025-10-09 16:02:01.926 2 DEBUG oslo_service.backend._eventlet.service [None req-ab701943-9b40-45b6-962a-b912807dc6b0 - - - - - -] manila.service_name            = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 09 16:02:01 compute-0 nova_compute[117331]: 2025-10-09 16:02:01.926 2 DEBUG oslo_service.backend._eventlet.service [None req-ab701943-9b40-45b6-962a-b912807dc6b0 - - - - - -] manila.service_type            = shared-file-system log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 09 16:02:01 compute-0 nova_compute[117331]: 2025-10-09 16:02:01.927 2 DEBUG oslo_service.backend._eventlet.service [None req-ab701943-9b40-45b6-962a-b912807dc6b0 - - - - - -] manila.share_apply_policy_timeout = 10 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 09 16:02:01 compute-0 nova_compute[117331]: 2025-10-09 16:02:01.927 2 DEBUG oslo_service.backend._eventlet.service [None req-ab701943-9b40-45b6-962a-b912807dc6b0 - - - - - -] manila.split_loggers           = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 09 16:02:01 compute-0 nova_compute[117331]: 2025-10-09 16:02:01.927 2 DEBUG oslo_service.backend._eventlet.service [None req-ab701943-9b40-45b6-962a-b912807dc6b0 - - - - - -] manila.status_code_retries     = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 09 16:02:01 compute-0 nova_compute[117331]: 2025-10-09 16:02:01.927 2 DEBUG oslo_service.backend._eventlet.service [None req-ab701943-9b40-45b6-962a-b912807dc6b0 - - - - - -] manila.status_code_retry_delay = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 09 16:02:01 compute-0 nova_compute[117331]: 2025-10-09 16:02:01.927 2 DEBUG oslo_service.backend._eventlet.service [None req-ab701943-9b40-45b6-962a-b912807dc6b0 - - - - - -] manila.timeout                 = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 09 16:02:01 compute-0 nova_compute[117331]: 2025-10-09 16:02:01.927 2 DEBUG oslo_service.backend._eventlet.service [None req-ab701943-9b40-45b6-962a-b912807dc6b0 - - - - - -] manila.valid_interfaces        = ['internal', 'public'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 09 16:02:01 compute-0 nova_compute[117331]: 2025-10-09 16:02:01.927 2 DEBUG oslo_service.backend._eventlet.service [None req-ab701943-9b40-45b6-962a-b912807dc6b0 - - - - - -] manila.version                 = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 09 16:02:01 compute-0 nova_compute[117331]: 2025-10-09 16:02:01.928 2 DEBUG oslo_service.backend._eventlet.service [None req-ab701943-9b40-45b6-962a-b912807dc6b0 - - - - - -] mks.enabled                    = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 09 16:02:01 compute-0 nova_compute[117331]: 2025-10-09 16:02:01.928 2 DEBUG oslo_service.backend._eventlet.service [None req-ab701943-9b40-45b6-962a-b912807dc6b0 - - - - - -] mks.mksproxy_base_url          = http://127.0.0.1:6090/ log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 09 16:02:01 compute-0 nova_compute[117331]: 2025-10-09 16:02:01.928 2 DEBUG oslo_service.backend._eventlet.service [None req-ab701943-9b40-45b6-962a-b912807dc6b0 - - - - - -] image_cache.manager_interval   = 2400 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 09 16:02:01 compute-0 nova_compute[117331]: 2025-10-09 16:02:01.928 2 DEBUG oslo_service.backend._eventlet.service [None req-ab701943-9b40-45b6-962a-b912807dc6b0 - - - - - -] image_cache.precache_concurrency = 1 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 09 16:02:01 compute-0 nova_compute[117331]: 2025-10-09 16:02:01.929 2 DEBUG oslo_service.backend._eventlet.service [None req-ab701943-9b40-45b6-962a-b912807dc6b0 - - - - - -] image_cache.remove_unused_base_images = True log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 09 16:02:01 compute-0 nova_compute[117331]: 2025-10-09 16:02:01.929 2 DEBUG oslo_service.backend._eventlet.service [None req-ab701943-9b40-45b6-962a-b912807dc6b0 - - - - - -] image_cache.remove_unused_original_minimum_age_seconds = 86400 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 09 16:02:01 compute-0 nova_compute[117331]: 2025-10-09 16:02:01.929 2 DEBUG oslo_service.backend._eventlet.service [None req-ab701943-9b40-45b6-962a-b912807dc6b0 - - - - - -] image_cache.remove_unused_resized_minimum_age_seconds = 3600 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 09 16:02:01 compute-0 nova_compute[117331]: 2025-10-09 16:02:01.929 2 DEBUG oslo_service.backend._eventlet.service [None req-ab701943-9b40-45b6-962a-b912807dc6b0 - - - - - -] image_cache.subdirectory_name  = _base log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 09 16:02:01 compute-0 nova_compute[117331]: 2025-10-09 16:02:01.929 2 DEBUG oslo_service.backend._eventlet.service [None req-ab701943-9b40-45b6-962a-b912807dc6b0 - - - - - -] ironic.api_max_retries         = 60 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 09 16:02:01 compute-0 nova_compute[117331]: 2025-10-09 16:02:01.929 2 DEBUG oslo_service.backend._eventlet.service [None req-ab701943-9b40-45b6-962a-b912807dc6b0 - - - - - -] ironic.api_retry_interval      = 2 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 09 16:02:01 compute-0 nova_compute[117331]: 2025-10-09 16:02:01.929 2 DEBUG oslo_service.backend._eventlet.service [None req-ab701943-9b40-45b6-962a-b912807dc6b0 - - - - - -] ironic.auth_section            = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 09 16:02:01 compute-0 nova_compute[117331]: 2025-10-09 16:02:01.929 2 DEBUG oslo_service.backend._eventlet.service [None req-ab701943-9b40-45b6-962a-b912807dc6b0 - - - - - -] ironic.auth_type               = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 09 16:02:01 compute-0 nova_compute[117331]: 2025-10-09 16:02:01.929 2 DEBUG oslo_service.backend._eventlet.service [None req-ab701943-9b40-45b6-962a-b912807dc6b0 - - - - - -] ironic.cafile                  = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 09 16:02:01 compute-0 nova_compute[117331]: 2025-10-09 16:02:01.930 2 DEBUG oslo_service.backend._eventlet.service [None req-ab701943-9b40-45b6-962a-b912807dc6b0 - - - - - -] ironic.certfile                = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 09 16:02:01 compute-0 nova_compute[117331]: 2025-10-09 16:02:01.930 2 DEBUG oslo_service.backend._eventlet.service [None req-ab701943-9b40-45b6-962a-b912807dc6b0 - - - - - -] ironic.collect_timing          = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 09 16:02:01 compute-0 nova_compute[117331]: 2025-10-09 16:02:01.930 2 DEBUG oslo_service.backend._eventlet.service [None req-ab701943-9b40-45b6-962a-b912807dc6b0 - - - - - -] ironic.conductor_group         = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 09 16:02:01 compute-0 nova_compute[117331]: 2025-10-09 16:02:01.930 2 DEBUG oslo_service.backend._eventlet.service [None req-ab701943-9b40-45b6-962a-b912807dc6b0 - - - - - -] ironic.connect_retries         = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 09 16:02:01 compute-0 nova_compute[117331]: 2025-10-09 16:02:01.930 2 DEBUG oslo_service.backend._eventlet.service [None req-ab701943-9b40-45b6-962a-b912807dc6b0 - - - - - -] ironic.connect_retry_delay     = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 09 16:02:01 compute-0 nova_compute[117331]: 2025-10-09 16:02:01.930 2 DEBUG oslo_service.backend._eventlet.service [None req-ab701943-9b40-45b6-962a-b912807dc6b0 - - - - - -] ironic.endpoint_override       = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 09 16:02:01 compute-0 nova_compute[117331]: 2025-10-09 16:02:01.930 2 DEBUG oslo_service.backend._eventlet.service [None req-ab701943-9b40-45b6-962a-b912807dc6b0 - - - - - -] ironic.insecure                = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 09 16:02:01 compute-0 nova_compute[117331]: 2025-10-09 16:02:01.930 2 DEBUG oslo_service.backend._eventlet.service [None req-ab701943-9b40-45b6-962a-b912807dc6b0 - - - - - -] ironic.keyfile                 = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 09 16:02:01 compute-0 nova_compute[117331]: 2025-10-09 16:02:01.930 2 DEBUG oslo_service.backend._eventlet.service [None req-ab701943-9b40-45b6-962a-b912807dc6b0 - - - - - -] ironic.max_version             = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 09 16:02:01 compute-0 nova_compute[117331]: 2025-10-09 16:02:01.931 2 DEBUG oslo_service.backend._eventlet.service [None req-ab701943-9b40-45b6-962a-b912807dc6b0 - - - - - -] ironic.min_version             = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 09 16:02:01 compute-0 nova_compute[117331]: 2025-10-09 16:02:01.931 2 DEBUG oslo_service.backend._eventlet.service [None req-ab701943-9b40-45b6-962a-b912807dc6b0 - - - - - -] ironic.peer_list               = [] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 09 16:02:01 compute-0 nova_compute[117331]: 2025-10-09 16:02:01.931 2 DEBUG oslo_service.backend._eventlet.service [None req-ab701943-9b40-45b6-962a-b912807dc6b0 - - - - - -] ironic.region_name             = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 09 16:02:01 compute-0 nova_compute[117331]: 2025-10-09 16:02:01.931 2 DEBUG oslo_service.backend._eventlet.service [None req-ab701943-9b40-45b6-962a-b912807dc6b0 - - - - - -] ironic.retriable_status_codes  = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 09 16:02:01 compute-0 nova_compute[117331]: 2025-10-09 16:02:01.931 2 DEBUG oslo_service.backend._eventlet.service [None req-ab701943-9b40-45b6-962a-b912807dc6b0 - - - - - -] ironic.serial_console_state_timeout = 10 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 09 16:02:01 compute-0 nova_compute[117331]: 2025-10-09 16:02:01.931 2 DEBUG oslo_service.backend._eventlet.service [None req-ab701943-9b40-45b6-962a-b912807dc6b0 - - - - - -] ironic.service_name            = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 09 16:02:01 compute-0 nova_compute[117331]: 2025-10-09 16:02:01.931 2 DEBUG oslo_service.backend._eventlet.service [None req-ab701943-9b40-45b6-962a-b912807dc6b0 - - - - - -] ironic.service_type            = baremetal log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 09 16:02:01 compute-0 nova_compute[117331]: 2025-10-09 16:02:01.931 2 DEBUG oslo_service.backend._eventlet.service [None req-ab701943-9b40-45b6-962a-b912807dc6b0 - - - - - -] ironic.shard                   = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 09 16:02:01 compute-0 nova_compute[117331]: 2025-10-09 16:02:01.932 2 DEBUG oslo_service.backend._eventlet.service [None req-ab701943-9b40-45b6-962a-b912807dc6b0 - - - - - -] ironic.split_loggers           = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 09 16:02:01 compute-0 nova_compute[117331]: 2025-10-09 16:02:01.932 2 DEBUG oslo_service.backend._eventlet.service [None req-ab701943-9b40-45b6-962a-b912807dc6b0 - - - - - -] ironic.status_code_retries     = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 09 16:02:01 compute-0 nova_compute[117331]: 2025-10-09 16:02:01.932 2 DEBUG oslo_service.backend._eventlet.service [None req-ab701943-9b40-45b6-962a-b912807dc6b0 - - - - - -] ironic.status_code_retry_delay = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 09 16:02:01 compute-0 nova_compute[117331]: 2025-10-09 16:02:01.932 2 DEBUG oslo_service.backend._eventlet.service [None req-ab701943-9b40-45b6-962a-b912807dc6b0 - - - - - -] ironic.timeout                 = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 09 16:02:01 compute-0 nova_compute[117331]: 2025-10-09 16:02:01.932 2 DEBUG oslo_service.backend._eventlet.service [None req-ab701943-9b40-45b6-962a-b912807dc6b0 - - - - - -] ironic.valid_interfaces        = ['internal', 'public'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 09 16:02:01 compute-0 nova_compute[117331]: 2025-10-09 16:02:01.932 2 DEBUG oslo_service.backend._eventlet.service [None req-ab701943-9b40-45b6-962a-b912807dc6b0 - - - - - -] ironic.version                 = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 09 16:02:01 compute-0 nova_compute[117331]: 2025-10-09 16:02:01.932 2 DEBUG oslo_service.backend._eventlet.service [None req-ab701943-9b40-45b6-962a-b912807dc6b0 - - - - - -] key_manager.backend            = barbican log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 09 16:02:01 compute-0 nova_compute[117331]: 2025-10-09 16:02:01.932 2 DEBUG oslo_service.backend._eventlet.service [None req-ab701943-9b40-45b6-962a-b912807dc6b0 - - - - - -] key_manager.fixed_key          = **** log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 09 16:02:01 compute-0 nova_compute[117331]: 2025-10-09 16:02:01.933 2 DEBUG oslo_service.backend._eventlet.service [None req-ab701943-9b40-45b6-962a-b912807dc6b0 - - - - - -] barbican.auth_endpoint         = http://localhost/identity/v3 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 09 16:02:01 compute-0 nova_compute[117331]: 2025-10-09 16:02:01.933 2 DEBUG oslo_service.backend._eventlet.service [None req-ab701943-9b40-45b6-962a-b912807dc6b0 - - - - - -] barbican.barbican_api_version  = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 09 16:02:01 compute-0 nova_compute[117331]: 2025-10-09 16:02:01.933 2 DEBUG oslo_service.backend._eventlet.service [None req-ab701943-9b40-45b6-962a-b912807dc6b0 - - - - - -] barbican.barbican_endpoint     = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 09 16:02:01 compute-0 nova_compute[117331]: 2025-10-09 16:02:01.933 2 DEBUG oslo_service.backend._eventlet.service [None req-ab701943-9b40-45b6-962a-b912807dc6b0 - - - - - -] barbican.barbican_endpoint_type = internal log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 09 16:02:01 compute-0 nova_compute[117331]: 2025-10-09 16:02:01.933 2 DEBUG oslo_service.backend._eventlet.service [None req-ab701943-9b40-45b6-962a-b912807dc6b0 - - - - - -] barbican.barbican_region_name  = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 09 16:02:01 compute-0 nova_compute[117331]: 2025-10-09 16:02:01.933 2 DEBUG oslo_service.backend._eventlet.service [None req-ab701943-9b40-45b6-962a-b912807dc6b0 - - - - - -] barbican.cafile                = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 09 16:02:01 compute-0 nova_compute[117331]: 2025-10-09 16:02:01.933 2 DEBUG oslo_service.backend._eventlet.service [None req-ab701943-9b40-45b6-962a-b912807dc6b0 - - - - - -] barbican.certfile              = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 09 16:02:01 compute-0 nova_compute[117331]: 2025-10-09 16:02:01.933 2 DEBUG oslo_service.backend._eventlet.service [None req-ab701943-9b40-45b6-962a-b912807dc6b0 - - - - - -] barbican.collect_timing        = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 09 16:02:01 compute-0 nova_compute[117331]: 2025-10-09 16:02:01.933 2 DEBUG oslo_service.backend._eventlet.service [None req-ab701943-9b40-45b6-962a-b912807dc6b0 - - - - - -] barbican.insecure              = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 09 16:02:01 compute-0 nova_compute[117331]: 2025-10-09 16:02:01.934 2 DEBUG oslo_service.backend._eventlet.service [None req-ab701943-9b40-45b6-962a-b912807dc6b0 - - - - - -] barbican.keyfile               = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 09 16:02:01 compute-0 nova_compute[117331]: 2025-10-09 16:02:01.934 2 DEBUG oslo_service.backend._eventlet.service [None req-ab701943-9b40-45b6-962a-b912807dc6b0 - - - - - -] barbican.number_of_retries     = 60 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 09 16:02:01 compute-0 nova_compute[117331]: 2025-10-09 16:02:01.934 2 DEBUG oslo_service.backend._eventlet.service [None req-ab701943-9b40-45b6-962a-b912807dc6b0 - - - - - -] barbican.retry_delay           = 1 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 09 16:02:01 compute-0 nova_compute[117331]: 2025-10-09 16:02:01.934 2 DEBUG oslo_service.backend._eventlet.service [None req-ab701943-9b40-45b6-962a-b912807dc6b0 - - - - - -] barbican.send_service_user_token = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 09 16:02:01 compute-0 nova_compute[117331]: 2025-10-09 16:02:01.934 2 DEBUG oslo_service.backend._eventlet.service [None req-ab701943-9b40-45b6-962a-b912807dc6b0 - - - - - -] barbican.split_loggers         = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 09 16:02:01 compute-0 nova_compute[117331]: 2025-10-09 16:02:01.934 2 DEBUG oslo_service.backend._eventlet.service [None req-ab701943-9b40-45b6-962a-b912807dc6b0 - - - - - -] barbican.timeout               = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 09 16:02:01 compute-0 nova_compute[117331]: 2025-10-09 16:02:01.934 2 DEBUG oslo_service.backend._eventlet.service [None req-ab701943-9b40-45b6-962a-b912807dc6b0 - - - - - -] barbican.verify_ssl            = True log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 09 16:02:01 compute-0 nova_compute[117331]: 2025-10-09 16:02:01.934 2 DEBUG oslo_service.backend._eventlet.service [None req-ab701943-9b40-45b6-962a-b912807dc6b0 - - - - - -] barbican.verify_ssl_path       = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 09 16:02:01 compute-0 nova_compute[117331]: 2025-10-09 16:02:01.935 2 DEBUG oslo_service.backend._eventlet.service [None req-ab701943-9b40-45b6-962a-b912807dc6b0 - - - - - -] barbican_service_user.auth_section = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 09 16:02:01 compute-0 nova_compute[117331]: 2025-10-09 16:02:01.935 2 DEBUG oslo_service.backend._eventlet.service [None req-ab701943-9b40-45b6-962a-b912807dc6b0 - - - - - -] barbican_service_user.auth_type = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 09 16:02:01 compute-0 nova_compute[117331]: 2025-10-09 16:02:01.935 2 DEBUG oslo_service.backend._eventlet.service [None req-ab701943-9b40-45b6-962a-b912807dc6b0 - - - - - -] barbican_service_user.cafile   = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 09 16:02:01 compute-0 nova_compute[117331]: 2025-10-09 16:02:01.935 2 DEBUG oslo_service.backend._eventlet.service [None req-ab701943-9b40-45b6-962a-b912807dc6b0 - - - - - -] barbican_service_user.certfile = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 09 16:02:01 compute-0 nova_compute[117331]: 2025-10-09 16:02:01.935 2 DEBUG oslo_service.backend._eventlet.service [None req-ab701943-9b40-45b6-962a-b912807dc6b0 - - - - - -] barbican_service_user.collect_timing = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 09 16:02:01 compute-0 nova_compute[117331]: 2025-10-09 16:02:01.935 2 DEBUG oslo_service.backend._eventlet.service [None req-ab701943-9b40-45b6-962a-b912807dc6b0 - - - - - -] barbican_service_user.insecure = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 09 16:02:01 compute-0 nova_compute[117331]: 2025-10-09 16:02:01.935 2 DEBUG oslo_service.backend._eventlet.service [None req-ab701943-9b40-45b6-962a-b912807dc6b0 - - - - - -] barbican_service_user.keyfile  = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 09 16:02:01 compute-0 nova_compute[117331]: 2025-10-09 16:02:01.935 2 DEBUG oslo_service.backend._eventlet.service [None req-ab701943-9b40-45b6-962a-b912807dc6b0 - - - - - -] barbican_service_user.split_loggers = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 09 16:02:01 compute-0 nova_compute[117331]: 2025-10-09 16:02:01.935 2 DEBUG oslo_service.backend._eventlet.service [None req-ab701943-9b40-45b6-962a-b912807dc6b0 - - - - - -] barbican_service_user.timeout  = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 09 16:02:01 compute-0 nova_compute[117331]: 2025-10-09 16:02:01.936 2 DEBUG oslo_service.backend._eventlet.service [None req-ab701943-9b40-45b6-962a-b912807dc6b0 - - - - - -] vault.approle_role_id          = **** log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 09 16:02:01 compute-0 nova_compute[117331]: 2025-10-09 16:02:01.936 2 DEBUG oslo_service.backend._eventlet.service [None req-ab701943-9b40-45b6-962a-b912807dc6b0 - - - - - -] vault.approle_secret_id        = **** log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 09 16:02:01 compute-0 nova_compute[117331]: 2025-10-09 16:02:01.936 2 DEBUG oslo_service.backend._eventlet.service [None req-ab701943-9b40-45b6-962a-b912807dc6b0 - - - - - -] vault.kv_mountpoint            = secret log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 09 16:02:01 compute-0 nova_compute[117331]: 2025-10-09 16:02:01.936 2 DEBUG oslo_service.backend._eventlet.service [None req-ab701943-9b40-45b6-962a-b912807dc6b0 - - - - - -] vault.kv_path                  = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 09 16:02:01 compute-0 nova_compute[117331]: 2025-10-09 16:02:01.936 2 DEBUG oslo_service.backend._eventlet.service [None req-ab701943-9b40-45b6-962a-b912807dc6b0 - - - - - -] vault.kv_version               = 2 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 09 16:02:01 compute-0 nova_compute[117331]: 2025-10-09 16:02:01.936 2 DEBUG oslo_service.backend._eventlet.service [None req-ab701943-9b40-45b6-962a-b912807dc6b0 - - - - - -] vault.namespace                = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 09 16:02:01 compute-0 nova_compute[117331]: 2025-10-09 16:02:01.936 2 DEBUG oslo_service.backend._eventlet.service [None req-ab701943-9b40-45b6-962a-b912807dc6b0 - - - - - -] vault.root_token_id            = **** log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 09 16:02:01 compute-0 nova_compute[117331]: 2025-10-09 16:02:01.936 2 DEBUG oslo_service.backend._eventlet.service [None req-ab701943-9b40-45b6-962a-b912807dc6b0 - - - - - -] vault.ssl_ca_crt_file          = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 09 16:02:01 compute-0 nova_compute[117331]: 2025-10-09 16:02:01.937 2 DEBUG oslo_service.backend._eventlet.service [None req-ab701943-9b40-45b6-962a-b912807dc6b0 - - - - - -] vault.timeout                  = 60.0 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 09 16:02:01 compute-0 nova_compute[117331]: 2025-10-09 16:02:01.937 2 DEBUG oslo_service.backend._eventlet.service [None req-ab701943-9b40-45b6-962a-b912807dc6b0 - - - - - -] vault.use_ssl                  = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 09 16:02:01 compute-0 nova_compute[117331]: 2025-10-09 16:02:01.937 2 DEBUG oslo_service.backend._eventlet.service [None req-ab701943-9b40-45b6-962a-b912807dc6b0 - - - - - -] vault.vault_url                = http://127.0.0.1:8200 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 09 16:02:01 compute-0 nova_compute[117331]: 2025-10-09 16:02:01.937 2 DEBUG oslo_service.backend._eventlet.service [None req-ab701943-9b40-45b6-962a-b912807dc6b0 - - - - - -] keystone.cafile                = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 09 16:02:01 compute-0 nova_compute[117331]: 2025-10-09 16:02:01.937 2 DEBUG oslo_service.backend._eventlet.service [None req-ab701943-9b40-45b6-962a-b912807dc6b0 - - - - - -] keystone.certfile              = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 09 16:02:01 compute-0 nova_compute[117331]: 2025-10-09 16:02:01.937 2 DEBUG oslo_service.backend._eventlet.service [None req-ab701943-9b40-45b6-962a-b912807dc6b0 - - - - - -] keystone.collect_timing        = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 09 16:02:01 compute-0 nova_compute[117331]: 2025-10-09 16:02:01.937 2 DEBUG oslo_service.backend._eventlet.service [None req-ab701943-9b40-45b6-962a-b912807dc6b0 - - - - - -] keystone.connect_retries       = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 09 16:02:01 compute-0 nova_compute[117331]: 2025-10-09 16:02:01.937 2 DEBUG oslo_service.backend._eventlet.service [None req-ab701943-9b40-45b6-962a-b912807dc6b0 - - - - - -] keystone.connect_retry_delay   = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 09 16:02:01 compute-0 nova_compute[117331]: 2025-10-09 16:02:01.938 2 DEBUG oslo_service.backend._eventlet.service [None req-ab701943-9b40-45b6-962a-b912807dc6b0 - - - - - -] keystone.endpoint_override     = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 09 16:02:01 compute-0 nova_compute[117331]: 2025-10-09 16:02:01.938 2 DEBUG oslo_service.backend._eventlet.service [None req-ab701943-9b40-45b6-962a-b912807dc6b0 - - - - - -] keystone.insecure              = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 09 16:02:01 compute-0 nova_compute[117331]: 2025-10-09 16:02:01.938 2 DEBUG oslo_service.backend._eventlet.service [None req-ab701943-9b40-45b6-962a-b912807dc6b0 - - - - - -] keystone.keyfile               = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 09 16:02:01 compute-0 nova_compute[117331]: 2025-10-09 16:02:01.938 2 DEBUG oslo_service.backend._eventlet.service [None req-ab701943-9b40-45b6-962a-b912807dc6b0 - - - - - -] keystone.max_version           = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 09 16:02:01 compute-0 nova_compute[117331]: 2025-10-09 16:02:01.938 2 DEBUG oslo_service.backend._eventlet.service [None req-ab701943-9b40-45b6-962a-b912807dc6b0 - - - - - -] keystone.min_version           = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 09 16:02:01 compute-0 nova_compute[117331]: 2025-10-09 16:02:01.938 2 DEBUG oslo_service.backend._eventlet.service [None req-ab701943-9b40-45b6-962a-b912807dc6b0 - - - - - -] keystone.region_name           = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 09 16:02:01 compute-0 nova_compute[117331]: 2025-10-09 16:02:01.938 2 DEBUG oslo_service.backend._eventlet.service [None req-ab701943-9b40-45b6-962a-b912807dc6b0 - - - - - -] keystone.retriable_status_codes = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 09 16:02:01 compute-0 nova_compute[117331]: 2025-10-09 16:02:01.939 2 DEBUG oslo_service.backend._eventlet.service [None req-ab701943-9b40-45b6-962a-b912807dc6b0 - - - - - -] keystone.service_name          = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 09 16:02:01 compute-0 nova_compute[117331]: 2025-10-09 16:02:01.939 2 DEBUG oslo_service.backend._eventlet.service [None req-ab701943-9b40-45b6-962a-b912807dc6b0 - - - - - -] keystone.service_type          = identity log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 09 16:02:01 compute-0 nova_compute[117331]: 2025-10-09 16:02:01.939 2 DEBUG oslo_service.backend._eventlet.service [None req-ab701943-9b40-45b6-962a-b912807dc6b0 - - - - - -] keystone.split_loggers         = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 09 16:02:01 compute-0 nova_compute[117331]: 2025-10-09 16:02:01.939 2 DEBUG oslo_service.backend._eventlet.service [None req-ab701943-9b40-45b6-962a-b912807dc6b0 - - - - - -] keystone.status_code_retries   = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 09 16:02:01 compute-0 nova_compute[117331]: 2025-10-09 16:02:01.939 2 DEBUG oslo_service.backend._eventlet.service [None req-ab701943-9b40-45b6-962a-b912807dc6b0 - - - - - -] keystone.status_code_retry_delay = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 09 16:02:01 compute-0 nova_compute[117331]: 2025-10-09 16:02:01.939 2 DEBUG oslo_service.backend._eventlet.service [None req-ab701943-9b40-45b6-962a-b912807dc6b0 - - - - - -] keystone.timeout               = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 09 16:02:01 compute-0 nova_compute[117331]: 2025-10-09 16:02:01.939 2 DEBUG oslo_service.backend._eventlet.service [None req-ab701943-9b40-45b6-962a-b912807dc6b0 - - - - - -] keystone.valid_interfaces      = ['internal', 'public'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 09 16:02:01 compute-0 nova_compute[117331]: 2025-10-09 16:02:01.939 2 DEBUG oslo_service.backend._eventlet.service [None req-ab701943-9b40-45b6-962a-b912807dc6b0 - - - - - -] keystone.version               = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 09 16:02:01 compute-0 nova_compute[117331]: 2025-10-09 16:02:01.940 2 DEBUG oslo_service.backend._eventlet.service [None req-ab701943-9b40-45b6-962a-b912807dc6b0 - - - - - -] libvirt.ceph_mount_options     = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 09 16:02:01 compute-0 nova_compute[117331]: 2025-10-09 16:02:01.940 2 DEBUG oslo_service.backend._eventlet.service [None req-ab701943-9b40-45b6-962a-b912807dc6b0 - - - - - -] libvirt.ceph_mount_point_base  = /var/lib/nova/mnt log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 09 16:02:01 compute-0 nova_compute[117331]: 2025-10-09 16:02:01.940 2 DEBUG oslo_service.backend._eventlet.service [None req-ab701943-9b40-45b6-962a-b912807dc6b0 - - - - - -] libvirt.connection_uri         =  log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 09 16:02:01 compute-0 nova_compute[117331]: 2025-10-09 16:02:01.940 2 DEBUG oslo_service.backend._eventlet.service [None req-ab701943-9b40-45b6-962a-b912807dc6b0 - - - - - -] libvirt.cpu_mode               = host-model log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 09 16:02:01 compute-0 nova_compute[117331]: 2025-10-09 16:02:01.940 2 DEBUG oslo_service.backend._eventlet.service [None req-ab701943-9b40-45b6-962a-b912807dc6b0 - - - - - -] libvirt.cpu_model_extra_flags  = [] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 09 16:02:01 compute-0 nova_compute[117331]: 2025-10-09 16:02:01.940 2 DEBUG oslo_service.backend._eventlet.service [None req-ab701943-9b40-45b6-962a-b912807dc6b0 - - - - - -] libvirt.cpu_models             = [] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 09 16:02:01 compute-0 nova_compute[117331]: 2025-10-09 16:02:01.940 2 DEBUG oslo_service.backend._eventlet.service [None req-ab701943-9b40-45b6-962a-b912807dc6b0 - - - - - -] libvirt.cpu_power_governor_high = performance log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 09 16:02:01 compute-0 nova_compute[117331]: 2025-10-09 16:02:01.941 2 DEBUG oslo_service.backend._eventlet.service [None req-ab701943-9b40-45b6-962a-b912807dc6b0 - - - - - -] libvirt.cpu_power_governor_low = powersave log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 09 16:02:01 compute-0 nova_compute[117331]: 2025-10-09 16:02:01.941 2 DEBUG oslo_service.backend._eventlet.service [None req-ab701943-9b40-45b6-962a-b912807dc6b0 - - - - - -] libvirt.cpu_power_management   = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 09 16:02:01 compute-0 nova_compute[117331]: 2025-10-09 16:02:01.941 2 DEBUG oslo_service.backend._eventlet.service [None req-ab701943-9b40-45b6-962a-b912807dc6b0 - - - - - -] libvirt.cpu_power_management_strategy = cpu_state log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 09 16:02:01 compute-0 nova_compute[117331]: 2025-10-09 16:02:01.941 2 DEBUG oslo_service.backend._eventlet.service [None req-ab701943-9b40-45b6-962a-b912807dc6b0 - - - - - -] libvirt.device_detach_attempts = 8 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 09 16:02:01 compute-0 nova_compute[117331]: 2025-10-09 16:02:01.941 2 DEBUG oslo_service.backend._eventlet.service [None req-ab701943-9b40-45b6-962a-b912807dc6b0 - - - - - -] libvirt.device_detach_timeout  = 20 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 09 16:02:01 compute-0 nova_compute[117331]: 2025-10-09 16:02:01.941 2 DEBUG oslo_service.backend._eventlet.service [None req-ab701943-9b40-45b6-962a-b912807dc6b0 - - - - - -] libvirt.disk_cachemodes        = [] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 09 16:02:01 compute-0 nova_compute[117331]: 2025-10-09 16:02:01.942 2 DEBUG oslo_service.backend._eventlet.service [None req-ab701943-9b40-45b6-962a-b912807dc6b0 - - - - - -] libvirt.disk_prefix            = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 09 16:02:01 compute-0 nova_compute[117331]: 2025-10-09 16:02:01.942 2 DEBUG oslo_service.backend._eventlet.service [None req-ab701943-9b40-45b6-962a-b912807dc6b0 - - - - - -] libvirt.enabled_perf_events    = [] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 09 16:02:01 compute-0 nova_compute[117331]: 2025-10-09 16:02:01.942 2 DEBUG oslo_service.backend._eventlet.service [None req-ab701943-9b40-45b6-962a-b912807dc6b0 - - - - - -] libvirt.file_backed_memory     = 0 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 09 16:02:01 compute-0 nova_compute[117331]: 2025-10-09 16:02:01.942 2 DEBUG oslo_service.backend._eventlet.service [None req-ab701943-9b40-45b6-962a-b912807dc6b0 - - - - - -] libvirt.gid_maps               = [] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 09 16:02:01 compute-0 nova_compute[117331]: 2025-10-09 16:02:01.942 2 DEBUG oslo_service.backend._eventlet.service [None req-ab701943-9b40-45b6-962a-b912807dc6b0 - - - - - -] libvirt.hw_disk_discard        = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 09 16:02:01 compute-0 nova_compute[117331]: 2025-10-09 16:02:01.942 2 DEBUG oslo_service.backend._eventlet.service [None req-ab701943-9b40-45b6-962a-b912807dc6b0 - - - - - -] libvirt.hw_machine_type        = ['x86_64=q35'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 09 16:02:01 compute-0 nova_compute[117331]: 2025-10-09 16:02:01.942 2 DEBUG oslo_service.backend._eventlet.service [None req-ab701943-9b40-45b6-962a-b912807dc6b0 - - - - - -] libvirt.images_rbd_ceph_conf   =  log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 09 16:02:01 compute-0 nova_compute[117331]: 2025-10-09 16:02:01.942 2 DEBUG oslo_service.backend._eventlet.service [None req-ab701943-9b40-45b6-962a-b912807dc6b0 - - - - - -] libvirt.images_rbd_glance_copy_poll_interval = 15 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 09 16:02:01 compute-0 nova_compute[117331]: 2025-10-09 16:02:01.943 2 DEBUG oslo_service.backend._eventlet.service [None req-ab701943-9b40-45b6-962a-b912807dc6b0 - - - - - -] libvirt.images_rbd_glance_copy_timeout = 600 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 09 16:02:01 compute-0 nova_compute[117331]: 2025-10-09 16:02:01.943 2 DEBUG oslo_service.backend._eventlet.service [None req-ab701943-9b40-45b6-962a-b912807dc6b0 - - - - - -] libvirt.images_rbd_glance_store_name =  log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 09 16:02:01 compute-0 nova_compute[117331]: 2025-10-09 16:02:01.943 2 DEBUG oslo_service.backend._eventlet.service [None req-ab701943-9b40-45b6-962a-b912807dc6b0 - - - - - -] libvirt.images_rbd_pool        = rbd log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 09 16:02:01 compute-0 nova_compute[117331]: 2025-10-09 16:02:01.943 2 DEBUG oslo_service.backend._eventlet.service [None req-ab701943-9b40-45b6-962a-b912807dc6b0 - - - - - -] libvirt.images_type            = qcow2 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 09 16:02:01 compute-0 nova_compute[117331]: 2025-10-09 16:02:01.943 2 DEBUG oslo_service.backend._eventlet.service [None req-ab701943-9b40-45b6-962a-b912807dc6b0 - - - - - -] libvirt.images_volume_group    = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 09 16:02:01 compute-0 nova_compute[117331]: 2025-10-09 16:02:01.943 2 DEBUG oslo_service.backend._eventlet.service [None req-ab701943-9b40-45b6-962a-b912807dc6b0 - - - - - -] libvirt.inject_key             = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 09 16:02:01 compute-0 nova_compute[117331]: 2025-10-09 16:02:01.943 2 DEBUG oslo_service.backend._eventlet.service [None req-ab701943-9b40-45b6-962a-b912807dc6b0 - - - - - -] libvirt.inject_partition       = -2 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 09 16:02:01 compute-0 nova_compute[117331]: 2025-10-09 16:02:01.944 2 DEBUG oslo_service.backend._eventlet.service [None req-ab701943-9b40-45b6-962a-b912807dc6b0 - - - - - -] libvirt.inject_password        = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 09 16:02:01 compute-0 nova_compute[117331]: 2025-10-09 16:02:01.944 2 DEBUG oslo_service.backend._eventlet.service [None req-ab701943-9b40-45b6-962a-b912807dc6b0 - - - - - -] libvirt.iscsi_iface            = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 09 16:02:01 compute-0 nova_compute[117331]: 2025-10-09 16:02:01.944 2 DEBUG oslo_service.backend._eventlet.service [None req-ab701943-9b40-45b6-962a-b912807dc6b0 - - - - - -] libvirt.iser_use_multipath     = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 09 16:02:01 compute-0 nova_compute[117331]: 2025-10-09 16:02:01.944 2 DEBUG oslo_service.backend._eventlet.service [None req-ab701943-9b40-45b6-962a-b912807dc6b0 - - - - - -] libvirt.live_migration_bandwidth = 0 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 09 16:02:01 compute-0 nova_compute[117331]: 2025-10-09 16:02:01.944 2 DEBUG oslo_service.backend._eventlet.service [None req-ab701943-9b40-45b6-962a-b912807dc6b0 - - - - - -] libvirt.live_migration_completion_timeout = 800 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 09 16:02:01 compute-0 nova_compute[117331]: 2025-10-09 16:02:01.944 2 DEBUG oslo_service.backend._eventlet.service [None req-ab701943-9b40-45b6-962a-b912807dc6b0 - - - - - -] libvirt.live_migration_downtime = 500 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 09 16:02:01 compute-0 nova_compute[117331]: 2025-10-09 16:02:01.944 2 DEBUG oslo_service.backend._eventlet.service [None req-ab701943-9b40-45b6-962a-b912807dc6b0 - - - - - -] libvirt.live_migration_downtime_delay = 75 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 09 16:02:01 compute-0 nova_compute[117331]: 2025-10-09 16:02:01.944 2 DEBUG oslo_service.backend._eventlet.service [None req-ab701943-9b40-45b6-962a-b912807dc6b0 - - - - - -] libvirt.live_migration_downtime_steps = 10 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 09 16:02:01 compute-0 nova_compute[117331]: 2025-10-09 16:02:01.945 2 DEBUG oslo_service.backend._eventlet.service [None req-ab701943-9b40-45b6-962a-b912807dc6b0 - - - - - -] libvirt.live_migration_inbound_addr = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 09 16:02:01 compute-0 nova_compute[117331]: 2025-10-09 16:02:01.945 2 DEBUG oslo_service.backend._eventlet.service [None req-ab701943-9b40-45b6-962a-b912807dc6b0 - - - - - -] libvirt.live_migration_permit_auto_converge = True log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 09 16:02:01 compute-0 nova_compute[117331]: 2025-10-09 16:02:01.945 2 DEBUG oslo_service.backend._eventlet.service [None req-ab701943-9b40-45b6-962a-b912807dc6b0 - - - - - -] libvirt.live_migration_permit_post_copy = True log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 09 16:02:01 compute-0 nova_compute[117331]: 2025-10-09 16:02:01.945 2 DEBUG oslo_service.backend._eventlet.service [None req-ab701943-9b40-45b6-962a-b912807dc6b0 - - - - - -] libvirt.live_migration_scheme  = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 09 16:02:01 compute-0 nova_compute[117331]: 2025-10-09 16:02:01.945 2 DEBUG oslo_service.backend._eventlet.service [None req-ab701943-9b40-45b6-962a-b912807dc6b0 - - - - - -] libvirt.live_migration_timeout_action = force_complete log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 09 16:02:01 compute-0 nova_compute[117331]: 2025-10-09 16:02:01.945 2 DEBUG oslo_service.backend._eventlet.service [None req-ab701943-9b40-45b6-962a-b912807dc6b0 - - - - - -] libvirt.live_migration_tunnelled = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 09 16:02:01 compute-0 nova_compute[117331]: 2025-10-09 16:02:01.946 2 WARNING oslo_config.cfg [None req-ab701943-9b40-45b6-962a-b912807dc6b0 - - - - - -] Deprecated: Option "live_migration_uri" from group "libvirt" is deprecated for removal (
Oct 09 16:02:01 compute-0 nova_compute[117331]: live_migration_uri is deprecated for removal in favor of two other options that
Oct 09 16:02:01 compute-0 nova_compute[117331]: allow to change live migration scheme and target URI: ``live_migration_scheme``
Oct 09 16:02:01 compute-0 nova_compute[117331]: and ``live_migration_inbound_addr`` respectively.
Oct 09 16:02:01 compute-0 nova_compute[117331]: ).  Its value may be silently ignored in the future.
Oct 09 16:02:01 compute-0 nova_compute[117331]: 2025-10-09 16:02:01.946 2 DEBUG oslo_service.backend._eventlet.service [None req-ab701943-9b40-45b6-962a-b912807dc6b0 - - - - - -] libvirt.live_migration_uri     = qemu+tls://%s/system log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 09 16:02:01 compute-0 nova_compute[117331]: 2025-10-09 16:02:01.946 2 DEBUG oslo_service.backend._eventlet.service [None req-ab701943-9b40-45b6-962a-b912807dc6b0 - - - - - -] libvirt.live_migration_with_native_tls = True log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 09 16:02:01 compute-0 nova_compute[117331]: 2025-10-09 16:02:01.946 2 DEBUG oslo_service.backend._eventlet.service [None req-ab701943-9b40-45b6-962a-b912807dc6b0 - - - - - -] libvirt.max_queues             = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 09 16:02:01 compute-0 nova_compute[117331]: 2025-10-09 16:02:01.946 2 DEBUG oslo_service.backend._eventlet.service [None req-ab701943-9b40-45b6-962a-b912807dc6b0 - - - - - -] libvirt.mem_stats_period_seconds = 10 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 09 16:02:01 compute-0 nova_compute[117331]: 2025-10-09 16:02:01.946 2 DEBUG oslo_service.backend._eventlet.service [None req-ab701943-9b40-45b6-962a-b912807dc6b0 - - - - - -] libvirt.migration_inbound_addr = 192.168.122.100 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 09 16:02:01 compute-0 nova_compute[117331]: 2025-10-09 16:02:01.946 2 DEBUG oslo_service.backend._eventlet.service [None req-ab701943-9b40-45b6-962a-b912807dc6b0 - - - - - -] libvirt.nfs_mount_options      = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 09 16:02:01 compute-0 nova_compute[117331]: 2025-10-09 16:02:01.947 2 DEBUG oslo_service.backend._eventlet.service [None req-ab701943-9b40-45b6-962a-b912807dc6b0 - - - - - -] libvirt.nfs_mount_point_base   = /var/lib/nova/mnt log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 09 16:02:01 compute-0 nova_compute[117331]: 2025-10-09 16:02:01.947 2 DEBUG oslo_service.backend._eventlet.service [None req-ab701943-9b40-45b6-962a-b912807dc6b0 - - - - - -] libvirt.num_aoe_discover_tries = 3 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 09 16:02:01 compute-0 nova_compute[117331]: 2025-10-09 16:02:01.947 2 DEBUG oslo_service.backend._eventlet.service [None req-ab701943-9b40-45b6-962a-b912807dc6b0 - - - - - -] libvirt.num_iser_scan_tries    = 5 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 09 16:02:01 compute-0 nova_compute[117331]: 2025-10-09 16:02:01.947 2 DEBUG oslo_service.backend._eventlet.service [None req-ab701943-9b40-45b6-962a-b912807dc6b0 - - - - - -] libvirt.num_memory_encrypted_guests = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 09 16:02:01 compute-0 nova_compute[117331]: 2025-10-09 16:02:01.947 2 DEBUG oslo_service.backend._eventlet.service [None req-ab701943-9b40-45b6-962a-b912807dc6b0 - - - - - -] libvirt.num_nvme_discover_tries = 5 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 09 16:02:01 compute-0 nova_compute[117331]: 2025-10-09 16:02:01.947 2 DEBUG oslo_service.backend._eventlet.service [None req-ab701943-9b40-45b6-962a-b912807dc6b0 - - - - - -] libvirt.num_pcie_ports         = 24 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 09 16:02:01 compute-0 nova_compute[117331]: 2025-10-09 16:02:01.947 2 DEBUG oslo_service.backend._eventlet.service [None req-ab701943-9b40-45b6-962a-b912807dc6b0 - - - - - -] libvirt.num_volume_scan_tries  = 5 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 09 16:02:01 compute-0 nova_compute[117331]: 2025-10-09 16:02:01.947 2 DEBUG oslo_service.backend._eventlet.service [None req-ab701943-9b40-45b6-962a-b912807dc6b0 - - - - - -] libvirt.pmem_namespaces        = [] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 09 16:02:01 compute-0 nova_compute[117331]: 2025-10-09 16:02:01.948 2 DEBUG oslo_service.backend._eventlet.service [None req-ab701943-9b40-45b6-962a-b912807dc6b0 - - - - - -] libvirt.quobyte_client_cfg     = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 09 16:02:01 compute-0 nova_compute[117331]: 2025-10-09 16:02:01.948 2 DEBUG oslo_service.backend._eventlet.service [None req-ab701943-9b40-45b6-962a-b912807dc6b0 - - - - - -] libvirt.quobyte_mount_point_base = /var/lib/nova/mnt log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 09 16:02:01 compute-0 nova_compute[117331]: 2025-10-09 16:02:01.948 2 DEBUG oslo_service.backend._eventlet.service [None req-ab701943-9b40-45b6-962a-b912807dc6b0 - - - - - -] libvirt.rbd_connect_timeout    = 5 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 09 16:02:01 compute-0 nova_compute[117331]: 2025-10-09 16:02:01.948 2 DEBUG oslo_service.backend._eventlet.service [None req-ab701943-9b40-45b6-962a-b912807dc6b0 - - - - - -] libvirt.rbd_destroy_volume_retries = 12 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 09 16:02:01 compute-0 nova_compute[117331]: 2025-10-09 16:02:01.948 2 DEBUG oslo_service.backend._eventlet.service [None req-ab701943-9b40-45b6-962a-b912807dc6b0 - - - - - -] libvirt.rbd_destroy_volume_retry_interval = 5 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 09 16:02:01 compute-0 nova_compute[117331]: 2025-10-09 16:02:01.948 2 DEBUG oslo_service.backend._eventlet.service [None req-ab701943-9b40-45b6-962a-b912807dc6b0 - - - - - -] libvirt.rbd_secret_uuid        = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 09 16:02:01 compute-0 nova_compute[117331]: 2025-10-09 16:02:01.948 2 DEBUG oslo_service.backend._eventlet.service [None req-ab701943-9b40-45b6-962a-b912807dc6b0 - - - - - -] libvirt.rbd_user               = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 09 16:02:01 compute-0 nova_compute[117331]: 2025-10-09 16:02:01.948 2 DEBUG oslo_service.backend._eventlet.service [None req-ab701943-9b40-45b6-962a-b912807dc6b0 - - - - - -] libvirt.realtime_scheduler_priority = 1 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 09 16:02:01 compute-0 nova_compute[117331]: 2025-10-09 16:02:01.949 2 DEBUG oslo_service.backend._eventlet.service [None req-ab701943-9b40-45b6-962a-b912807dc6b0 - - - - - -] libvirt.remote_filesystem_transport = ssh log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 09 16:02:01 compute-0 nova_compute[117331]: 2025-10-09 16:02:01.949 2 DEBUG oslo_service.backend._eventlet.service [None req-ab701943-9b40-45b6-962a-b912807dc6b0 - - - - - -] libvirt.rescue_image_id        = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 09 16:02:01 compute-0 nova_compute[117331]: 2025-10-09 16:02:01.949 2 DEBUG oslo_service.backend._eventlet.service [None req-ab701943-9b40-45b6-962a-b912807dc6b0 - - - - - -] libvirt.rescue_kernel_id       = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 09 16:02:01 compute-0 nova_compute[117331]: 2025-10-09 16:02:01.949 2 DEBUG oslo_service.backend._eventlet.service [None req-ab701943-9b40-45b6-962a-b912807dc6b0 - - - - - -] libvirt.rescue_ramdisk_id      = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 09 16:02:01 compute-0 nova_compute[117331]: 2025-10-09 16:02:01.949 2 DEBUG oslo_service.backend._eventlet.service [None req-ab701943-9b40-45b6-962a-b912807dc6b0 - - - - - -] libvirt.rng_dev_path           = /dev/urandom log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 09 16:02:01 compute-0 nova_compute[117331]: 2025-10-09 16:02:01.949 2 DEBUG oslo_service.backend._eventlet.service [None req-ab701943-9b40-45b6-962a-b912807dc6b0 - - - - - -] libvirt.rx_queue_size          = 512 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 09 16:02:01 compute-0 nova_compute[117331]: 2025-10-09 16:02:01.949 2 DEBUG oslo_service.backend._eventlet.service [None req-ab701943-9b40-45b6-962a-b912807dc6b0 - - - - - -] libvirt.smbfs_mount_options    =  log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 09 16:02:01 compute-0 nova_compute[117331]: 2025-10-09 16:02:01.949 2 DEBUG oslo_service.backend._eventlet.service [None req-ab701943-9b40-45b6-962a-b912807dc6b0 - - - - - -] libvirt.smbfs_mount_point_base = /var/lib/nova/mnt log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 09 16:02:01 compute-0 nova_compute[117331]: 2025-10-09 16:02:01.950 2 DEBUG oslo_service.backend._eventlet.service [None req-ab701943-9b40-45b6-962a-b912807dc6b0 - - - - - -] libvirt.snapshot_compression   = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 09 16:02:01 compute-0 nova_compute[117331]: 2025-10-09 16:02:01.950 2 DEBUG oslo_service.backend._eventlet.service [None req-ab701943-9b40-45b6-962a-b912807dc6b0 - - - - - -] libvirt.snapshot_image_format  = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 09 16:02:01 compute-0 nova_compute[117331]: 2025-10-09 16:02:01.950 2 DEBUG oslo_service.backend._eventlet.service [None req-ab701943-9b40-45b6-962a-b912807dc6b0 - - - - - -] libvirt.snapshots_directory    = /var/lib/nova/instances/snapshots log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 09 16:02:01 compute-0 nova_compute[117331]: 2025-10-09 16:02:01.950 2 DEBUG oslo_service.backend._eventlet.service [None req-ab701943-9b40-45b6-962a-b912807dc6b0 - - - - - -] libvirt.sparse_logical_volumes = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 09 16:02:01 compute-0 nova_compute[117331]: 2025-10-09 16:02:01.950 2 DEBUG oslo_service.backend._eventlet.service [None req-ab701943-9b40-45b6-962a-b912807dc6b0 - - - - - -] libvirt.swtpm_enabled          = True log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 09 16:02:01 compute-0 nova_compute[117331]: 2025-10-09 16:02:01.950 2 DEBUG oslo_service.backend._eventlet.service [None req-ab701943-9b40-45b6-962a-b912807dc6b0 - - - - - -] libvirt.swtpm_group            = tss log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 09 16:02:01 compute-0 nova_compute[117331]: 2025-10-09 16:02:01.950 2 DEBUG oslo_service.backend._eventlet.service [None req-ab701943-9b40-45b6-962a-b912807dc6b0 - - - - - -] libvirt.swtpm_user             = tss log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 09 16:02:01 compute-0 nova_compute[117331]: 2025-10-09 16:02:01.950 2 DEBUG oslo_service.backend._eventlet.service [None req-ab701943-9b40-45b6-962a-b912807dc6b0 - - - - - -] libvirt.sysinfo_serial         = unique log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 09 16:02:01 compute-0 nova_compute[117331]: 2025-10-09 16:02:01.951 2 DEBUG oslo_service.backend._eventlet.service [None req-ab701943-9b40-45b6-962a-b912807dc6b0 - - - - - -] libvirt.tb_cache_size          = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 09 16:02:01 compute-0 nova_compute[117331]: 2025-10-09 16:02:01.951 2 DEBUG oslo_service.backend._eventlet.service [None req-ab701943-9b40-45b6-962a-b912807dc6b0 - - - - - -] libvirt.tx_queue_size          = 512 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 09 16:02:01 compute-0 nova_compute[117331]: 2025-10-09 16:02:01.951 2 DEBUG oslo_service.backend._eventlet.service [None req-ab701943-9b40-45b6-962a-b912807dc6b0 - - - - - -] libvirt.uid_maps               = [] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 09 16:02:01 compute-0 nova_compute[117331]: 2025-10-09 16:02:01.951 2 DEBUG oslo_service.backend._eventlet.service [None req-ab701943-9b40-45b6-962a-b912807dc6b0 - - - - - -] libvirt.use_virtio_for_bridges = True log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 09 16:02:01 compute-0 nova_compute[117331]: 2025-10-09 16:02:01.951 2 DEBUG oslo_service.backend._eventlet.service [None req-ab701943-9b40-45b6-962a-b912807dc6b0 - - - - - -] libvirt.virt_type              = kvm log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 09 16:02:01 compute-0 nova_compute[117331]: 2025-10-09 16:02:01.951 2 DEBUG oslo_service.backend._eventlet.service [None req-ab701943-9b40-45b6-962a-b912807dc6b0 - - - - - -] libvirt.volume_clear           = zero log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 09 16:02:01 compute-0 nova_compute[117331]: 2025-10-09 16:02:01.951 2 DEBUG oslo_service.backend._eventlet.service [None req-ab701943-9b40-45b6-962a-b912807dc6b0 - - - - - -] libvirt.volume_clear_size      = 0 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 09 16:02:01 compute-0 nova_compute[117331]: 2025-10-09 16:02:01.951 2 DEBUG oslo_service.backend._eventlet.service [None req-ab701943-9b40-45b6-962a-b912807dc6b0 - - - - - -] libvirt.volume_enforce_multipath = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 09 16:02:01 compute-0 nova_compute[117331]: 2025-10-09 16:02:01.952 2 DEBUG oslo_service.backend._eventlet.service [None req-ab701943-9b40-45b6-962a-b912807dc6b0 - - - - - -] libvirt.volume_use_multipath   = True log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 09 16:02:01 compute-0 nova_compute[117331]: 2025-10-09 16:02:01.952 2 DEBUG oslo_service.backend._eventlet.service [None req-ab701943-9b40-45b6-962a-b912807dc6b0 - - - - - -] libvirt.vzstorage_cache_path   = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 09 16:02:01 compute-0 nova_compute[117331]: 2025-10-09 16:02:01.952 2 DEBUG oslo_service.backend._eventlet.service [None req-ab701943-9b40-45b6-962a-b912807dc6b0 - - - - - -] libvirt.vzstorage_log_path     = /var/log/vstorage/%(cluster_name)s/nova.log.gz log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 09 16:02:01 compute-0 nova_compute[117331]: 2025-10-09 16:02:01.952 2 DEBUG oslo_service.backend._eventlet.service [None req-ab701943-9b40-45b6-962a-b912807dc6b0 - - - - - -] libvirt.vzstorage_mount_group  = qemu log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 09 16:02:01 compute-0 nova_compute[117331]: 2025-10-09 16:02:01.952 2 DEBUG oslo_service.backend._eventlet.service [None req-ab701943-9b40-45b6-962a-b912807dc6b0 - - - - - -] libvirt.vzstorage_mount_opts   = [] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 09 16:02:01 compute-0 nova_compute[117331]: 2025-10-09 16:02:01.952 2 DEBUG oslo_service.backend._eventlet.service [None req-ab701943-9b40-45b6-962a-b912807dc6b0 - - - - - -] libvirt.vzstorage_mount_perms  = 0770 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 09 16:02:01 compute-0 nova_compute[117331]: 2025-10-09 16:02:01.952 2 DEBUG oslo_service.backend._eventlet.service [None req-ab701943-9b40-45b6-962a-b912807dc6b0 - - - - - -] libvirt.vzstorage_mount_point_base = /var/lib/nova/mnt log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 09 16:02:01 compute-0 nova_compute[117331]: 2025-10-09 16:02:01.952 2 DEBUG oslo_service.backend._eventlet.service [None req-ab701943-9b40-45b6-962a-b912807dc6b0 - - - - - -] libvirt.vzstorage_mount_user   = stack log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 09 16:02:01 compute-0 nova_compute[117331]: 2025-10-09 16:02:01.952 2 DEBUG oslo_service.backend._eventlet.service [None req-ab701943-9b40-45b6-962a-b912807dc6b0 - - - - - -] libvirt.wait_soft_reboot_seconds = 120 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 09 16:02:01 compute-0 nova_compute[117331]: 2025-10-09 16:02:01.953 2 DEBUG oslo_service.backend._eventlet.service [None req-ab701943-9b40-45b6-962a-b912807dc6b0 - - - - - -] neutron.auth_section           = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 09 16:02:01 compute-0 nova_compute[117331]: 2025-10-09 16:02:01.953 2 DEBUG oslo_service.backend._eventlet.service [None req-ab701943-9b40-45b6-962a-b912807dc6b0 - - - - - -] neutron.auth_type              = password log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 09 16:02:01 compute-0 nova_compute[117331]: 2025-10-09 16:02:01.953 2 DEBUG oslo_service.backend._eventlet.service [None req-ab701943-9b40-45b6-962a-b912807dc6b0 - - - - - -] neutron.cafile                 = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 09 16:02:01 compute-0 nova_compute[117331]: 2025-10-09 16:02:01.953 2 DEBUG oslo_service.backend._eventlet.service [None req-ab701943-9b40-45b6-962a-b912807dc6b0 - - - - - -] neutron.certfile               = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 09 16:02:01 compute-0 nova_compute[117331]: 2025-10-09 16:02:01.953 2 DEBUG oslo_service.backend._eventlet.service [None req-ab701943-9b40-45b6-962a-b912807dc6b0 - - - - - -] neutron.collect_timing         = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 09 16:02:01 compute-0 nova_compute[117331]: 2025-10-09 16:02:01.953 2 DEBUG oslo_service.backend._eventlet.service [None req-ab701943-9b40-45b6-962a-b912807dc6b0 - - - - - -] neutron.connect_retries        = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 09 16:02:01 compute-0 nova_compute[117331]: 2025-10-09 16:02:01.954 2 DEBUG oslo_service.backend._eventlet.service [None req-ab701943-9b40-45b6-962a-b912807dc6b0 - - - - - -] neutron.connect_retry_delay    = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 09 16:02:01 compute-0 nova_compute[117331]: 2025-10-09 16:02:01.954 2 DEBUG oslo_service.backend._eventlet.service [None req-ab701943-9b40-45b6-962a-b912807dc6b0 - - - - - -] neutron.default_floating_pool  = nova log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 09 16:02:01 compute-0 nova_compute[117331]: 2025-10-09 16:02:01.954 2 DEBUG oslo_service.backend._eventlet.service [None req-ab701943-9b40-45b6-962a-b912807dc6b0 - - - - - -] neutron.endpoint_override      = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 09 16:02:01 compute-0 nova_compute[117331]: 2025-10-09 16:02:01.954 2 DEBUG oslo_service.backend._eventlet.service [None req-ab701943-9b40-45b6-962a-b912807dc6b0 - - - - - -] neutron.extension_sync_interval = 600 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 09 16:02:01 compute-0 nova_compute[117331]: 2025-10-09 16:02:01.954 2 DEBUG oslo_service.backend._eventlet.service [None req-ab701943-9b40-45b6-962a-b912807dc6b0 - - - - - -] neutron.http_retries           = 3 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 09 16:02:01 compute-0 nova_compute[117331]: 2025-10-09 16:02:01.954 2 DEBUG oslo_service.backend._eventlet.service [None req-ab701943-9b40-45b6-962a-b912807dc6b0 - - - - - -] neutron.insecure               = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 09 16:02:01 compute-0 nova_compute[117331]: 2025-10-09 16:02:01.954 2 DEBUG oslo_service.backend._eventlet.service [None req-ab701943-9b40-45b6-962a-b912807dc6b0 - - - - - -] neutron.keyfile                = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 09 16:02:01 compute-0 nova_compute[117331]: 2025-10-09 16:02:01.954 2 DEBUG oslo_service.backend._eventlet.service [None req-ab701943-9b40-45b6-962a-b912807dc6b0 - - - - - -] neutron.max_version            = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 09 16:02:01 compute-0 nova_compute[117331]: 2025-10-09 16:02:01.955 2 DEBUG oslo_service.backend._eventlet.service [None req-ab701943-9b40-45b6-962a-b912807dc6b0 - - - - - -] neutron.metadata_proxy_shared_secret = **** log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 09 16:02:01 compute-0 nova_compute[117331]: 2025-10-09 16:02:01.955 2 DEBUG oslo_service.backend._eventlet.service [None req-ab701943-9b40-45b6-962a-b912807dc6b0 - - - - - -] neutron.min_version            = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 09 16:02:01 compute-0 nova_compute[117331]: 2025-10-09 16:02:01.955 2 DEBUG oslo_service.backend._eventlet.service [None req-ab701943-9b40-45b6-962a-b912807dc6b0 - - - - - -] neutron.ovs_bridge             = br-int log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 09 16:02:01 compute-0 nova_compute[117331]: 2025-10-09 16:02:01.955 2 DEBUG oslo_service.backend._eventlet.service [None req-ab701943-9b40-45b6-962a-b912807dc6b0 - - - - - -] neutron.physnets               = [] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 09 16:02:01 compute-0 nova_compute[117331]: 2025-10-09 16:02:01.955 2 DEBUG oslo_service.backend._eventlet.service [None req-ab701943-9b40-45b6-962a-b912807dc6b0 - - - - - -] neutron.region_name            = regionOne log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 09 16:02:01 compute-0 nova_compute[117331]: 2025-10-09 16:02:01.955 2 DEBUG oslo_service.backend._eventlet.service [None req-ab701943-9b40-45b6-962a-b912807dc6b0 - - - - - -] neutron.retriable_status_codes = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 09 16:02:01 compute-0 nova_compute[117331]: 2025-10-09 16:02:01.955 2 DEBUG oslo_service.backend._eventlet.service [None req-ab701943-9b40-45b6-962a-b912807dc6b0 - - - - - -] neutron.service_metadata_proxy = True log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 09 16:02:01 compute-0 nova_compute[117331]: 2025-10-09 16:02:01.955 2 DEBUG oslo_service.backend._eventlet.service [None req-ab701943-9b40-45b6-962a-b912807dc6b0 - - - - - -] neutron.service_name           = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 09 16:02:01 compute-0 nova_compute[117331]: 2025-10-09 16:02:01.955 2 DEBUG oslo_service.backend._eventlet.service [None req-ab701943-9b40-45b6-962a-b912807dc6b0 - - - - - -] neutron.service_type           = network log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 09 16:02:01 compute-0 nova_compute[117331]: 2025-10-09 16:02:01.956 2 DEBUG oslo_service.backend._eventlet.service [None req-ab701943-9b40-45b6-962a-b912807dc6b0 - - - - - -] neutron.split_loggers          = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 09 16:02:01 compute-0 nova_compute[117331]: 2025-10-09 16:02:01.956 2 DEBUG oslo_service.backend._eventlet.service [None req-ab701943-9b40-45b6-962a-b912807dc6b0 - - - - - -] neutron.status_code_retries    = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 09 16:02:01 compute-0 nova_compute[117331]: 2025-10-09 16:02:01.956 2 DEBUG oslo_service.backend._eventlet.service [None req-ab701943-9b40-45b6-962a-b912807dc6b0 - - - - - -] neutron.status_code_retry_delay = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 09 16:02:01 compute-0 nova_compute[117331]: 2025-10-09 16:02:01.956 2 DEBUG oslo_service.backend._eventlet.service [None req-ab701943-9b40-45b6-962a-b912807dc6b0 - - - - - -] neutron.timeout                = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 09 16:02:01 compute-0 nova_compute[117331]: 2025-10-09 16:02:01.956 2 DEBUG oslo_service.backend._eventlet.service [None req-ab701943-9b40-45b6-962a-b912807dc6b0 - - - - - -] neutron.valid_interfaces       = ['internal'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 09 16:02:01 compute-0 nova_compute[117331]: 2025-10-09 16:02:01.956 2 DEBUG oslo_service.backend._eventlet.service [None req-ab701943-9b40-45b6-962a-b912807dc6b0 - - - - - -] neutron.version                = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 09 16:02:01 compute-0 nova_compute[117331]: 2025-10-09 16:02:01.956 2 DEBUG oslo_service.backend._eventlet.service [None req-ab701943-9b40-45b6-962a-b912807dc6b0 - - - - - -] notifications.bdms_in_notifications = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 09 16:02:01 compute-0 nova_compute[117331]: 2025-10-09 16:02:01.956 2 DEBUG oslo_service.backend._eventlet.service [None req-ab701943-9b40-45b6-962a-b912807dc6b0 - - - - - -] notifications.default_level    = INFO log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 09 16:02:01 compute-0 nova_compute[117331]: 2025-10-09 16:02:01.956 2 DEBUG oslo_service.backend._eventlet.service [None req-ab701943-9b40-45b6-962a-b912807dc6b0 - - - - - -] notifications.include_share_mapping = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 09 16:02:01 compute-0 nova_compute[117331]: 2025-10-09 16:02:01.957 2 DEBUG oslo_service.backend._eventlet.service [None req-ab701943-9b40-45b6-962a-b912807dc6b0 - - - - - -] notifications.notification_format = both log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 09 16:02:01 compute-0 nova_compute[117331]: 2025-10-09 16:02:01.957 2 DEBUG oslo_service.backend._eventlet.service [None req-ab701943-9b40-45b6-962a-b912807dc6b0 - - - - - -] notifications.notify_on_state_change = vm_and_task_state log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 09 16:02:01 compute-0 nova_compute[117331]: 2025-10-09 16:02:01.957 2 DEBUG oslo_service.backend._eventlet.service [None req-ab701943-9b40-45b6-962a-b912807dc6b0 - - - - - -] notifications.versioned_notifications_topics = ['versioned_notifications'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 09 16:02:01 compute-0 nova_compute[117331]: 2025-10-09 16:02:01.957 2 DEBUG oslo_service.backend._eventlet.service [None req-ab701943-9b40-45b6-962a-b912807dc6b0 - - - - - -] pci.alias                      = [] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 09 16:02:01 compute-0 nova_compute[117331]: 2025-10-09 16:02:01.957 2 DEBUG oslo_service.backend._eventlet.service [None req-ab701943-9b40-45b6-962a-b912807dc6b0 - - - - - -] pci.device_spec                = [] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 09 16:02:01 compute-0 nova_compute[117331]: 2025-10-09 16:02:01.957 2 DEBUG oslo_service.backend._eventlet.service [None req-ab701943-9b40-45b6-962a-b912807dc6b0 - - - - - -] pci.report_in_placement        = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 09 16:02:01 compute-0 nova_compute[117331]: 2025-10-09 16:02:01.957 2 DEBUG oslo_service.backend._eventlet.service [None req-ab701943-9b40-45b6-962a-b912807dc6b0 - - - - - -] placement.auth_section         = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 09 16:02:01 compute-0 nova_compute[117331]: 2025-10-09 16:02:01.958 2 DEBUG oslo_service.backend._eventlet.service [None req-ab701943-9b40-45b6-962a-b912807dc6b0 - - - - - -] placement.auth_type            = password log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 09 16:02:01 compute-0 nova_compute[117331]: 2025-10-09 16:02:01.958 2 DEBUG oslo_service.backend._eventlet.service [None req-ab701943-9b40-45b6-962a-b912807dc6b0 - - - - - -] placement.auth_url             = https://keystone-internal.openstack.svc:5000 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 09 16:02:01 compute-0 nova_compute[117331]: 2025-10-09 16:02:01.958 2 DEBUG oslo_service.backend._eventlet.service [None req-ab701943-9b40-45b6-962a-b912807dc6b0 - - - - - -] placement.cafile               = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 09 16:02:01 compute-0 nova_compute[117331]: 2025-10-09 16:02:01.958 2 DEBUG oslo_service.backend._eventlet.service [None req-ab701943-9b40-45b6-962a-b912807dc6b0 - - - - - -] placement.certfile             = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 09 16:02:01 compute-0 nova_compute[117331]: 2025-10-09 16:02:01.958 2 DEBUG oslo_service.backend._eventlet.service [None req-ab701943-9b40-45b6-962a-b912807dc6b0 - - - - - -] placement.collect_timing       = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 09 16:02:01 compute-0 nova_compute[117331]: 2025-10-09 16:02:01.958 2 DEBUG oslo_service.backend._eventlet.service [None req-ab701943-9b40-45b6-962a-b912807dc6b0 - - - - - -] placement.connect_retries      = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 09 16:02:01 compute-0 nova_compute[117331]: 2025-10-09 16:02:01.958 2 DEBUG oslo_service.backend._eventlet.service [None req-ab701943-9b40-45b6-962a-b912807dc6b0 - - - - - -] placement.connect_retry_delay  = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 09 16:02:01 compute-0 nova_compute[117331]: 2025-10-09 16:02:01.958 2 DEBUG oslo_service.backend._eventlet.service [None req-ab701943-9b40-45b6-962a-b912807dc6b0 - - - - - -] placement.default_domain_id    = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 09 16:02:01 compute-0 nova_compute[117331]: 2025-10-09 16:02:01.959 2 DEBUG oslo_service.backend._eventlet.service [None req-ab701943-9b40-45b6-962a-b912807dc6b0 - - - - - -] placement.default_domain_name  = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 09 16:02:01 compute-0 nova_compute[117331]: 2025-10-09 16:02:01.959 2 DEBUG oslo_service.backend._eventlet.service [None req-ab701943-9b40-45b6-962a-b912807dc6b0 - - - - - -] placement.domain_id            = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 09 16:02:01 compute-0 nova_compute[117331]: 2025-10-09 16:02:01.959 2 DEBUG oslo_service.backend._eventlet.service [None req-ab701943-9b40-45b6-962a-b912807dc6b0 - - - - - -] placement.domain_name          = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 09 16:02:01 compute-0 nova_compute[117331]: 2025-10-09 16:02:01.959 2 DEBUG oslo_service.backend._eventlet.service [None req-ab701943-9b40-45b6-962a-b912807dc6b0 - - - - - -] placement.endpoint_override    = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 09 16:02:01 compute-0 nova_compute[117331]: 2025-10-09 16:02:01.959 2 DEBUG oslo_service.backend._eventlet.service [None req-ab701943-9b40-45b6-962a-b912807dc6b0 - - - - - -] placement.insecure             = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 09 16:02:01 compute-0 nova_compute[117331]: 2025-10-09 16:02:01.959 2 DEBUG oslo_service.backend._eventlet.service [None req-ab701943-9b40-45b6-962a-b912807dc6b0 - - - - - -] placement.keyfile              = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 09 16:02:01 compute-0 nova_compute[117331]: 2025-10-09 16:02:01.959 2 DEBUG oslo_service.backend._eventlet.service [None req-ab701943-9b40-45b6-962a-b912807dc6b0 - - - - - -] placement.max_version          = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 09 16:02:01 compute-0 nova_compute[117331]: 2025-10-09 16:02:01.959 2 DEBUG oslo_service.backend._eventlet.service [None req-ab701943-9b40-45b6-962a-b912807dc6b0 - - - - - -] placement.min_version          = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 09 16:02:01 compute-0 nova_compute[117331]: 2025-10-09 16:02:01.960 2 DEBUG oslo_service.backend._eventlet.service [None req-ab701943-9b40-45b6-962a-b912807dc6b0 - - - - - -] placement.password             = **** log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 09 16:02:01 compute-0 nova_compute[117331]: 2025-10-09 16:02:01.960 2 DEBUG oslo_service.backend._eventlet.service [None req-ab701943-9b40-45b6-962a-b912807dc6b0 - - - - - -] placement.project_domain_id    = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 09 16:02:01 compute-0 nova_compute[117331]: 2025-10-09 16:02:01.960 2 DEBUG oslo_service.backend._eventlet.service [None req-ab701943-9b40-45b6-962a-b912807dc6b0 - - - - - -] placement.project_domain_name  = Default log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 09 16:02:01 compute-0 nova_compute[117331]: 2025-10-09 16:02:01.960 2 DEBUG oslo_service.backend._eventlet.service [None req-ab701943-9b40-45b6-962a-b912807dc6b0 - - - - - -] placement.project_id           = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 09 16:02:01 compute-0 nova_compute[117331]: 2025-10-09 16:02:01.960 2 DEBUG oslo_service.backend._eventlet.service [None req-ab701943-9b40-45b6-962a-b912807dc6b0 - - - - - -] placement.project_name         = service log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 09 16:02:01 compute-0 nova_compute[117331]: 2025-10-09 16:02:01.960 2 DEBUG oslo_service.backend._eventlet.service [None req-ab701943-9b40-45b6-962a-b912807dc6b0 - - - - - -] placement.region_name          = regionOne log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 09 16:02:01 compute-0 nova_compute[117331]: 2025-10-09 16:02:01.960 2 DEBUG oslo_service.backend._eventlet.service [None req-ab701943-9b40-45b6-962a-b912807dc6b0 - - - - - -] placement.retriable_status_codes = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 09 16:02:01 compute-0 nova_compute[117331]: 2025-10-09 16:02:01.961 2 DEBUG oslo_service.backend._eventlet.service [None req-ab701943-9b40-45b6-962a-b912807dc6b0 - - - - - -] placement.service_name         = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 09 16:02:01 compute-0 nova_compute[117331]: 2025-10-09 16:02:01.961 2 DEBUG oslo_service.backend._eventlet.service [None req-ab701943-9b40-45b6-962a-b912807dc6b0 - - - - - -] placement.service_type         = placement log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 09 16:02:01 compute-0 nova_compute[117331]: 2025-10-09 16:02:01.961 2 DEBUG oslo_service.backend._eventlet.service [None req-ab701943-9b40-45b6-962a-b912807dc6b0 - - - - - -] placement.split_loggers        = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 09 16:02:01 compute-0 nova_compute[117331]: 2025-10-09 16:02:01.961 2 DEBUG oslo_service.backend._eventlet.service [None req-ab701943-9b40-45b6-962a-b912807dc6b0 - - - - - -] placement.status_code_retries  = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 09 16:02:01 compute-0 nova_compute[117331]: 2025-10-09 16:02:01.961 2 DEBUG oslo_service.backend._eventlet.service [None req-ab701943-9b40-45b6-962a-b912807dc6b0 - - - - - -] placement.status_code_retry_delay = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 09 16:02:01 compute-0 nova_compute[117331]: 2025-10-09 16:02:01.961 2 DEBUG oslo_service.backend._eventlet.service [None req-ab701943-9b40-45b6-962a-b912807dc6b0 - - - - - -] placement.system_scope         = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 09 16:02:01 compute-0 nova_compute[117331]: 2025-10-09 16:02:01.961 2 DEBUG oslo_service.backend._eventlet.service [None req-ab701943-9b40-45b6-962a-b912807dc6b0 - - - - - -] placement.timeout              = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 09 16:02:01 compute-0 nova_compute[117331]: 2025-10-09 16:02:01.962 2 DEBUG oslo_service.backend._eventlet.service [None req-ab701943-9b40-45b6-962a-b912807dc6b0 - - - - - -] placement.trust_id             = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 09 16:02:01 compute-0 nova_compute[117331]: 2025-10-09 16:02:01.962 2 DEBUG oslo_service.backend._eventlet.service [None req-ab701943-9b40-45b6-962a-b912807dc6b0 - - - - - -] placement.user_domain_id       = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 09 16:02:01 compute-0 nova_compute[117331]: 2025-10-09 16:02:01.962 2 DEBUG oslo_service.backend._eventlet.service [None req-ab701943-9b40-45b6-962a-b912807dc6b0 - - - - - -] placement.user_domain_name     = Default log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 09 16:02:01 compute-0 nova_compute[117331]: 2025-10-09 16:02:01.962 2 DEBUG oslo_service.backend._eventlet.service [None req-ab701943-9b40-45b6-962a-b912807dc6b0 - - - - - -] placement.user_id              = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 09 16:02:01 compute-0 nova_compute[117331]: 2025-10-09 16:02:01.962 2 DEBUG oslo_service.backend._eventlet.service [None req-ab701943-9b40-45b6-962a-b912807dc6b0 - - - - - -] placement.username             = nova log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 09 16:02:01 compute-0 nova_compute[117331]: 2025-10-09 16:02:01.962 2 DEBUG oslo_service.backend._eventlet.service [None req-ab701943-9b40-45b6-962a-b912807dc6b0 - - - - - -] placement.valid_interfaces     = ['internal'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 09 16:02:01 compute-0 nova_compute[117331]: 2025-10-09 16:02:01.962 2 DEBUG oslo_service.backend._eventlet.service [None req-ab701943-9b40-45b6-962a-b912807dc6b0 - - - - - -] placement.version              = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 09 16:02:01 compute-0 nova_compute[117331]: 2025-10-09 16:02:01.962 2 DEBUG oslo_service.backend._eventlet.service [None req-ab701943-9b40-45b6-962a-b912807dc6b0 - - - - - -] quota.cores                    = 20 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 09 16:02:01 compute-0 nova_compute[117331]: 2025-10-09 16:02:01.963 2 DEBUG oslo_service.backend._eventlet.service [None req-ab701943-9b40-45b6-962a-b912807dc6b0 - - - - - -] quota.count_usage_from_placement = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 09 16:02:01 compute-0 nova_compute[117331]: 2025-10-09 16:02:01.963 2 DEBUG oslo_service.backend._eventlet.service [None req-ab701943-9b40-45b6-962a-b912807dc6b0 - - - - - -] quota.driver                   = nova.quota.DbQuotaDriver log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 09 16:02:01 compute-0 nova_compute[117331]: 2025-10-09 16:02:01.963 2 DEBUG oslo_service.backend._eventlet.service [None req-ab701943-9b40-45b6-962a-b912807dc6b0 - - - - - -] quota.injected_file_content_bytes = 10240 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 09 16:02:01 compute-0 nova_compute[117331]: 2025-10-09 16:02:01.963 2 DEBUG oslo_service.backend._eventlet.service [None req-ab701943-9b40-45b6-962a-b912807dc6b0 - - - - - -] quota.injected_file_path_length = 255 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 09 16:02:01 compute-0 nova_compute[117331]: 2025-10-09 16:02:01.963 2 DEBUG oslo_service.backend._eventlet.service [None req-ab701943-9b40-45b6-962a-b912807dc6b0 - - - - - -] quota.injected_files           = 5 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 09 16:02:01 compute-0 nova_compute[117331]: 2025-10-09 16:02:01.963 2 DEBUG oslo_service.backend._eventlet.service [None req-ab701943-9b40-45b6-962a-b912807dc6b0 - - - - - -] quota.instances                = 10 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 09 16:02:01 compute-0 nova_compute[117331]: 2025-10-09 16:02:01.963 2 DEBUG oslo_service.backend._eventlet.service [None req-ab701943-9b40-45b6-962a-b912807dc6b0 - - - - - -] quota.key_pairs                = 100 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 09 16:02:01 compute-0 nova_compute[117331]: 2025-10-09 16:02:01.963 2 DEBUG oslo_service.backend._eventlet.service [None req-ab701943-9b40-45b6-962a-b912807dc6b0 - - - - - -] quota.metadata_items           = 128 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 09 16:02:01 compute-0 nova_compute[117331]: 2025-10-09 16:02:01.964 2 DEBUG oslo_service.backend._eventlet.service [None req-ab701943-9b40-45b6-962a-b912807dc6b0 - - - - - -] quota.ram                      = 51200 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 09 16:02:01 compute-0 nova_compute[117331]: 2025-10-09 16:02:01.964 2 DEBUG oslo_service.backend._eventlet.service [None req-ab701943-9b40-45b6-962a-b912807dc6b0 - - - - - -] quota.recheck_quota            = True log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 09 16:02:01 compute-0 nova_compute[117331]: 2025-10-09 16:02:01.964 2 DEBUG oslo_service.backend._eventlet.service [None req-ab701943-9b40-45b6-962a-b912807dc6b0 - - - - - -] quota.server_group_members     = 10 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 09 16:02:01 compute-0 nova_compute[117331]: 2025-10-09 16:02:01.964 2 DEBUG oslo_service.backend._eventlet.service [None req-ab701943-9b40-45b6-962a-b912807dc6b0 - - - - - -] quota.server_groups            = 10 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 09 16:02:01 compute-0 nova_compute[117331]: 2025-10-09 16:02:01.964 2 DEBUG oslo_service.backend._eventlet.service [None req-ab701943-9b40-45b6-962a-b912807dc6b0 - - - - - -] quota.unified_limits_resource_list = ['servers'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 09 16:02:01 compute-0 nova_compute[117331]: 2025-10-09 16:02:01.964 2 DEBUG oslo_service.backend._eventlet.service [None req-ab701943-9b40-45b6-962a-b912807dc6b0 - - - - - -] quota.unified_limits_resource_strategy = require log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 09 16:02:01 compute-0 nova_compute[117331]: 2025-10-09 16:02:01.964 2 DEBUG oslo_service.backend._eventlet.service [None req-ab701943-9b40-45b6-962a-b912807dc6b0 - - - - - -] scheduler.discover_hosts_in_cells_interval = -1 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 09 16:02:01 compute-0 nova_compute[117331]: 2025-10-09 16:02:01.964 2 DEBUG oslo_service.backend._eventlet.service [None req-ab701943-9b40-45b6-962a-b912807dc6b0 - - - - - -] scheduler.enable_isolated_aggregate_filtering = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 09 16:02:01 compute-0 nova_compute[117331]: 2025-10-09 16:02:01.964 2 DEBUG oslo_service.backend._eventlet.service [None req-ab701943-9b40-45b6-962a-b912807dc6b0 - - - - - -] scheduler.image_metadata_prefilter = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 09 16:02:01 compute-0 nova_compute[117331]: 2025-10-09 16:02:01.965 2 DEBUG oslo_service.backend._eventlet.service [None req-ab701943-9b40-45b6-962a-b912807dc6b0 - - - - - -] scheduler.limit_tenants_to_placement_aggregate = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 09 16:02:01 compute-0 nova_compute[117331]: 2025-10-09 16:02:01.965 2 DEBUG oslo_service.backend._eventlet.service [None req-ab701943-9b40-45b6-962a-b912807dc6b0 - - - - - -] scheduler.max_attempts         = 3 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 09 16:02:01 compute-0 nova_compute[117331]: 2025-10-09 16:02:01.965 2 DEBUG oslo_service.backend._eventlet.service [None req-ab701943-9b40-45b6-962a-b912807dc6b0 - - - - - -] scheduler.max_placement_results = 1000 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 09 16:02:01 compute-0 nova_compute[117331]: 2025-10-09 16:02:01.965 2 DEBUG oslo_service.backend._eventlet.service [None req-ab701943-9b40-45b6-962a-b912807dc6b0 - - - - - -] scheduler.placement_aggregate_required_for_tenants = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 09 16:02:01 compute-0 nova_compute[117331]: 2025-10-09 16:02:01.965 2 DEBUG oslo_service.backend._eventlet.service [None req-ab701943-9b40-45b6-962a-b912807dc6b0 - - - - - -] scheduler.query_placement_for_image_type_support = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 09 16:02:01 compute-0 nova_compute[117331]: 2025-10-09 16:02:01.965 2 DEBUG oslo_service.backend._eventlet.service [None req-ab701943-9b40-45b6-962a-b912807dc6b0 - - - - - -] scheduler.query_placement_for_routed_network_aggregates = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 09 16:02:01 compute-0 nova_compute[117331]: 2025-10-09 16:02:01.965 2 DEBUG oslo_service.backend._eventlet.service [None req-ab701943-9b40-45b6-962a-b912807dc6b0 - - - - - -] scheduler.workers              = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 09 16:02:01 compute-0 nova_compute[117331]: 2025-10-09 16:02:01.965 2 DEBUG oslo_service.backend._eventlet.service [None req-ab701943-9b40-45b6-962a-b912807dc6b0 - - - - - -] filter_scheduler.aggregate_image_properties_isolation_namespace = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 09 16:02:01 compute-0 nova_compute[117331]: 2025-10-09 16:02:01.966 2 DEBUG oslo_service.backend._eventlet.service [None req-ab701943-9b40-45b6-962a-b912807dc6b0 - - - - - -] filter_scheduler.aggregate_image_properties_isolation_separator = . log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 09 16:02:01 compute-0 nova_compute[117331]: 2025-10-09 16:02:01.966 2 DEBUG oslo_service.backend._eventlet.service [None req-ab701943-9b40-45b6-962a-b912807dc6b0 - - - - - -] filter_scheduler.available_filters = ['nova.scheduler.filters.all_filters'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 09 16:02:01 compute-0 nova_compute[117331]: 2025-10-09 16:02:01.966 2 DEBUG oslo_service.backend._eventlet.service [None req-ab701943-9b40-45b6-962a-b912807dc6b0 - - - - - -] filter_scheduler.build_failure_weight_multiplier = 1000000.0 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 09 16:02:01 compute-0 nova_compute[117331]: 2025-10-09 16:02:01.966 2 DEBUG oslo_service.backend._eventlet.service [None req-ab701943-9b40-45b6-962a-b912807dc6b0 - - - - - -] filter_scheduler.cpu_weight_multiplier = 1.0 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 09 16:02:01 compute-0 nova_compute[117331]: 2025-10-09 16:02:01.966 2 DEBUG oslo_service.backend._eventlet.service [None req-ab701943-9b40-45b6-962a-b912807dc6b0 - - - - - -] filter_scheduler.cross_cell_move_weight_multiplier = 1000000.0 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 09 16:02:01 compute-0 nova_compute[117331]: 2025-10-09 16:02:01.966 2 DEBUG oslo_service.backend._eventlet.service [None req-ab701943-9b40-45b6-962a-b912807dc6b0 - - - - - -] filter_scheduler.disk_weight_multiplier = 1.0 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 09 16:02:01 compute-0 nova_compute[117331]: 2025-10-09 16:02:01.966 2 DEBUG oslo_service.backend._eventlet.service [None req-ab701943-9b40-45b6-962a-b912807dc6b0 - - - - - -] filter_scheduler.enabled_filters = ['ComputeFilter', 'ComputeCapabilitiesFilter', 'ImagePropertiesFilter', 'ServerGroupAntiAffinityFilter', 'ServerGroupAffinityFilter'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 09 16:02:01 compute-0 nova_compute[117331]: 2025-10-09 16:02:01.966 2 DEBUG oslo_service.backend._eventlet.service [None req-ab701943-9b40-45b6-962a-b912807dc6b0 - - - - - -] filter_scheduler.host_subset_size = 1 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 09 16:02:01 compute-0 nova_compute[117331]: 2025-10-09 16:02:01.967 2 DEBUG oslo_service.backend._eventlet.service [None req-ab701943-9b40-45b6-962a-b912807dc6b0 - - - - - -] filter_scheduler.hypervisor_version_weight_multiplier = 1.0 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 09 16:02:01 compute-0 nova_compute[117331]: 2025-10-09 16:02:01.967 2 DEBUG oslo_service.backend._eventlet.service [None req-ab701943-9b40-45b6-962a-b912807dc6b0 - - - - - -] filter_scheduler.image_properties_default_architecture = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 09 16:02:01 compute-0 nova_compute[117331]: 2025-10-09 16:02:01.967 2 DEBUG oslo_service.backend._eventlet.service [None req-ab701943-9b40-45b6-962a-b912807dc6b0 - - - - - -] filter_scheduler.image_props_weight_multiplier = 0.0 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 09 16:02:01 compute-0 nova_compute[117331]: 2025-10-09 16:02:01.967 2 DEBUG oslo_service.backend._eventlet.service [None req-ab701943-9b40-45b6-962a-b912807dc6b0 - - - - - -] filter_scheduler.image_props_weight_setting = [] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 09 16:02:01 compute-0 nova_compute[117331]: 2025-10-09 16:02:01.967 2 DEBUG oslo_service.backend._eventlet.service [None req-ab701943-9b40-45b6-962a-b912807dc6b0 - - - - - -] filter_scheduler.io_ops_weight_multiplier = -1.0 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 09 16:02:01 compute-0 nova_compute[117331]: 2025-10-09 16:02:01.967 2 DEBUG oslo_service.backend._eventlet.service [None req-ab701943-9b40-45b6-962a-b912807dc6b0 - - - - - -] filter_scheduler.isolated_hosts = [] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 09 16:02:01 compute-0 nova_compute[117331]: 2025-10-09 16:02:01.967 2 DEBUG oslo_service.backend._eventlet.service [None req-ab701943-9b40-45b6-962a-b912807dc6b0 - - - - - -] filter_scheduler.isolated_images = [] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 09 16:02:01 compute-0 nova_compute[117331]: 2025-10-09 16:02:01.968 2 DEBUG oslo_service.backend._eventlet.service [None req-ab701943-9b40-45b6-962a-b912807dc6b0 - - - - - -] filter_scheduler.max_instances_per_host = 50 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 09 16:02:01 compute-0 nova_compute[117331]: 2025-10-09 16:02:01.968 2 DEBUG oslo_service.backend._eventlet.service [None req-ab701943-9b40-45b6-962a-b912807dc6b0 - - - - - -] filter_scheduler.max_io_ops_per_host = 8 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 09 16:02:01 compute-0 nova_compute[117331]: 2025-10-09 16:02:01.968 2 DEBUG oslo_service.backend._eventlet.service [None req-ab701943-9b40-45b6-962a-b912807dc6b0 - - - - - -] filter_scheduler.num_instances_weight_multiplier = 0.0 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 09 16:02:01 compute-0 nova_compute[117331]: 2025-10-09 16:02:01.968 2 DEBUG oslo_service.backend._eventlet.service [None req-ab701943-9b40-45b6-962a-b912807dc6b0 - - - - - -] filter_scheduler.pci_in_placement = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 09 16:02:01 compute-0 nova_compute[117331]: 2025-10-09 16:02:01.968 2 DEBUG oslo_service.backend._eventlet.service [None req-ab701943-9b40-45b6-962a-b912807dc6b0 - - - - - -] filter_scheduler.pci_weight_multiplier = 1.0 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 09 16:02:01 compute-0 nova_compute[117331]: 2025-10-09 16:02:01.968 2 DEBUG oslo_service.backend._eventlet.service [None req-ab701943-9b40-45b6-962a-b912807dc6b0 - - - - - -] filter_scheduler.ram_weight_multiplier = 1.0 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 09 16:02:01 compute-0 nova_compute[117331]: 2025-10-09 16:02:01.968 2 DEBUG oslo_service.backend._eventlet.service [None req-ab701943-9b40-45b6-962a-b912807dc6b0 - - - - - -] filter_scheduler.restrict_isolated_hosts_to_isolated_images = True log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 09 16:02:01 compute-0 nova_compute[117331]: 2025-10-09 16:02:01.969 2 DEBUG oslo_service.backend._eventlet.service [None req-ab701943-9b40-45b6-962a-b912807dc6b0 - - - - - -] filter_scheduler.shuffle_best_same_weighed_hosts = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 09 16:02:01 compute-0 nova_compute[117331]: 2025-10-09 16:02:01.969 2 DEBUG oslo_service.backend._eventlet.service [None req-ab701943-9b40-45b6-962a-b912807dc6b0 - - - - - -] filter_scheduler.soft_affinity_weight_multiplier = 1.0 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 09 16:02:01 compute-0 nova_compute[117331]: 2025-10-09 16:02:01.969 2 DEBUG oslo_service.backend._eventlet.service [None req-ab701943-9b40-45b6-962a-b912807dc6b0 - - - - - -] filter_scheduler.soft_anti_affinity_weight_multiplier = 1.0 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 09 16:02:01 compute-0 nova_compute[117331]: 2025-10-09 16:02:01.969 2 DEBUG oslo_service.backend._eventlet.service [None req-ab701943-9b40-45b6-962a-b912807dc6b0 - - - - - -] filter_scheduler.track_instance_changes = True log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 09 16:02:01 compute-0 nova_compute[117331]: 2025-10-09 16:02:01.969 2 DEBUG oslo_service.backend._eventlet.service [None req-ab701943-9b40-45b6-962a-b912807dc6b0 - - - - - -] filter_scheduler.weight_classes = ['nova.scheduler.weights.all_weighers'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 09 16:02:01 compute-0 nova_compute[117331]: 2025-10-09 16:02:01.969 2 DEBUG oslo_service.backend._eventlet.service [None req-ab701943-9b40-45b6-962a-b912807dc6b0 - - - - - -] metrics.required               = True log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 09 16:02:01 compute-0 nova_compute[117331]: 2025-10-09 16:02:01.969 2 DEBUG oslo_service.backend._eventlet.service [None req-ab701943-9b40-45b6-962a-b912807dc6b0 - - - - - -] metrics.weight_multiplier      = 1.0 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 09 16:02:01 compute-0 nova_compute[117331]: 2025-10-09 16:02:01.970 2 DEBUG oslo_service.backend._eventlet.service [None req-ab701943-9b40-45b6-962a-b912807dc6b0 - - - - - -] metrics.weight_of_unavailable  = -10000.0 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 09 16:02:01 compute-0 nova_compute[117331]: 2025-10-09 16:02:01.970 2 DEBUG oslo_service.backend._eventlet.service [None req-ab701943-9b40-45b6-962a-b912807dc6b0 - - - - - -] metrics.weight_setting         = [] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 09 16:02:01 compute-0 nova_compute[117331]: 2025-10-09 16:02:01.970 2 DEBUG oslo_service.backend._eventlet.service [None req-ab701943-9b40-45b6-962a-b912807dc6b0 - - - - - -] serial_console.base_url        = ws://127.0.0.1:6083/ log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 09 16:02:01 compute-0 nova_compute[117331]: 2025-10-09 16:02:01.970 2 DEBUG oslo_service.backend._eventlet.service [None req-ab701943-9b40-45b6-962a-b912807dc6b0 - - - - - -] serial_console.enabled         = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 09 16:02:01 compute-0 nova_compute[117331]: 2025-10-09 16:02:01.970 2 DEBUG oslo_service.backend._eventlet.service [None req-ab701943-9b40-45b6-962a-b912807dc6b0 - - - - - -] serial_console.port_range      = 10000:20000 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 09 16:02:01 compute-0 nova_compute[117331]: 2025-10-09 16:02:01.971 2 DEBUG oslo_service.backend._eventlet.service [None req-ab701943-9b40-45b6-962a-b912807dc6b0 - - - - - -] serial_console.proxyclient_address = 127.0.0.1 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 09 16:02:01 compute-0 nova_compute[117331]: 2025-10-09 16:02:01.971 2 DEBUG oslo_service.backend._eventlet.service [None req-ab701943-9b40-45b6-962a-b912807dc6b0 - - - - - -] serial_console.serialproxy_host = 0.0.0.0 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 09 16:02:01 compute-0 nova_compute[117331]: 2025-10-09 16:02:01.971 2 DEBUG oslo_service.backend._eventlet.service [None req-ab701943-9b40-45b6-962a-b912807dc6b0 - - - - - -] serial_console.serialproxy_port = 6083 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 09 16:02:01 compute-0 nova_compute[117331]: 2025-10-09 16:02:01.971 2 DEBUG oslo_service.backend._eventlet.service [None req-ab701943-9b40-45b6-962a-b912807dc6b0 - - - - - -] service_user.auth_section      = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 09 16:02:01 compute-0 nova_compute[117331]: 2025-10-09 16:02:01.971 2 DEBUG oslo_service.backend._eventlet.service [None req-ab701943-9b40-45b6-962a-b912807dc6b0 - - - - - -] service_user.auth_type         = password log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 09 16:02:01 compute-0 nova_compute[117331]: 2025-10-09 16:02:01.971 2 DEBUG oslo_service.backend._eventlet.service [None req-ab701943-9b40-45b6-962a-b912807dc6b0 - - - - - -] service_user.cafile            = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 09 16:02:01 compute-0 nova_compute[117331]: 2025-10-09 16:02:01.971 2 DEBUG oslo_service.backend._eventlet.service [None req-ab701943-9b40-45b6-962a-b912807dc6b0 - - - - - -] service_user.certfile          = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 09 16:02:01 compute-0 nova_compute[117331]: 2025-10-09 16:02:01.971 2 DEBUG oslo_service.backend._eventlet.service [None req-ab701943-9b40-45b6-962a-b912807dc6b0 - - - - - -] service_user.collect_timing    = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 09 16:02:01 compute-0 nova_compute[117331]: 2025-10-09 16:02:01.972 2 DEBUG oslo_service.backend._eventlet.service [None req-ab701943-9b40-45b6-962a-b912807dc6b0 - - - - - -] service_user.insecure          = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 09 16:02:01 compute-0 nova_compute[117331]: 2025-10-09 16:02:01.972 2 DEBUG oslo_service.backend._eventlet.service [None req-ab701943-9b40-45b6-962a-b912807dc6b0 - - - - - -] service_user.keyfile           = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 09 16:02:01 compute-0 nova_compute[117331]: 2025-10-09 16:02:01.972 2 DEBUG oslo_service.backend._eventlet.service [None req-ab701943-9b40-45b6-962a-b912807dc6b0 - - - - - -] service_user.send_service_user_token = True log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 09 16:02:01 compute-0 nova_compute[117331]: 2025-10-09 16:02:01.972 2 DEBUG oslo_service.backend._eventlet.service [None req-ab701943-9b40-45b6-962a-b912807dc6b0 - - - - - -] service_user.split_loggers     = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 09 16:02:01 compute-0 nova_compute[117331]: 2025-10-09 16:02:01.972 2 DEBUG oslo_service.backend._eventlet.service [None req-ab701943-9b40-45b6-962a-b912807dc6b0 - - - - - -] service_user.timeout           = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 09 16:02:01 compute-0 nova_compute[117331]: 2025-10-09 16:02:01.972 2 DEBUG oslo_service.backend._eventlet.service [None req-ab701943-9b40-45b6-962a-b912807dc6b0 - - - - - -] spice.agent_enabled            = True log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 09 16:02:01 compute-0 nova_compute[117331]: 2025-10-09 16:02:01.972 2 DEBUG oslo_service.backend._eventlet.service [None req-ab701943-9b40-45b6-962a-b912807dc6b0 - - - - - -] spice.enabled                  = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 09 16:02:01 compute-0 nova_compute[117331]: 2025-10-09 16:02:01.972 2 DEBUG oslo_service.backend._eventlet.service [None req-ab701943-9b40-45b6-962a-b912807dc6b0 - - - - - -] spice.html5proxy_base_url      = http://127.0.0.1:6082/spice_auto.html log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 09 16:02:01 compute-0 nova_compute[117331]: 2025-10-09 16:02:01.973 2 DEBUG oslo_service.backend._eventlet.service [None req-ab701943-9b40-45b6-962a-b912807dc6b0 - - - - - -] spice.html5proxy_host          = 0.0.0.0 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 09 16:02:01 compute-0 nova_compute[117331]: 2025-10-09 16:02:01.973 2 DEBUG oslo_service.backend._eventlet.service [None req-ab701943-9b40-45b6-962a-b912807dc6b0 - - - - - -] spice.html5proxy_port          = 6082 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 09 16:02:01 compute-0 nova_compute[117331]: 2025-10-09 16:02:01.973 2 DEBUG oslo_service.backend._eventlet.service [None req-ab701943-9b40-45b6-962a-b912807dc6b0 - - - - - -] spice.image_compression        = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 09 16:02:01 compute-0 nova_compute[117331]: 2025-10-09 16:02:01.973 2 DEBUG oslo_service.backend._eventlet.service [None req-ab701943-9b40-45b6-962a-b912807dc6b0 - - - - - -] spice.jpeg_compression         = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 09 16:02:01 compute-0 nova_compute[117331]: 2025-10-09 16:02:01.973 2 DEBUG oslo_service.backend._eventlet.service [None req-ab701943-9b40-45b6-962a-b912807dc6b0 - - - - - -] spice.playback_compression     = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 09 16:02:01 compute-0 nova_compute[117331]: 2025-10-09 16:02:01.973 2 DEBUG oslo_service.backend._eventlet.service [None req-ab701943-9b40-45b6-962a-b912807dc6b0 - - - - - -] spice.require_secure           = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 09 16:02:01 compute-0 nova_compute[117331]: 2025-10-09 16:02:01.973 2 DEBUG oslo_service.backend._eventlet.service [None req-ab701943-9b40-45b6-962a-b912807dc6b0 - - - - - -] spice.server_listen            = 127.0.0.1 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 09 16:02:01 compute-0 nova_compute[117331]: 2025-10-09 16:02:01.973 2 DEBUG oslo_service.backend._eventlet.service [None req-ab701943-9b40-45b6-962a-b912807dc6b0 - - - - - -] spice.server_proxyclient_address = 127.0.0.1 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 09 16:02:01 compute-0 nova_compute[117331]: 2025-10-09 16:02:01.974 2 DEBUG oslo_service.backend._eventlet.service [None req-ab701943-9b40-45b6-962a-b912807dc6b0 - - - - - -] spice.spice_direct_proxy_base_url = http://127.0.0.1:13002/nova log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 09 16:02:01 compute-0 nova_compute[117331]: 2025-10-09 16:02:01.974 2 DEBUG oslo_service.backend._eventlet.service [None req-ab701943-9b40-45b6-962a-b912807dc6b0 - - - - - -] spice.streaming_mode           = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 09 16:02:01 compute-0 nova_compute[117331]: 2025-10-09 16:02:01.974 2 DEBUG oslo_service.backend._eventlet.service [None req-ab701943-9b40-45b6-962a-b912807dc6b0 - - - - - -] spice.zlib_compression         = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 09 16:02:01 compute-0 nova_compute[117331]: 2025-10-09 16:02:01.974 2 DEBUG oslo_service.backend._eventlet.service [None req-ab701943-9b40-45b6-962a-b912807dc6b0 - - - - - -] upgrade_levels.baseapi         = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 09 16:02:01 compute-0 nova_compute[117331]: 2025-10-09 16:02:01.974 2 DEBUG oslo_service.backend._eventlet.service [None req-ab701943-9b40-45b6-962a-b912807dc6b0 - - - - - -] upgrade_levels.compute         = auto log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 09 16:02:01 compute-0 nova_compute[117331]: 2025-10-09 16:02:01.975 2 DEBUG oslo_service.backend._eventlet.service [None req-ab701943-9b40-45b6-962a-b912807dc6b0 - - - - - -] upgrade_levels.conductor       = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 09 16:02:01 compute-0 nova_compute[117331]: 2025-10-09 16:02:01.975 2 DEBUG oslo_service.backend._eventlet.service [None req-ab701943-9b40-45b6-962a-b912807dc6b0 - - - - - -] upgrade_levels.scheduler       = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 09 16:02:01 compute-0 nova_compute[117331]: 2025-10-09 16:02:01.975 2 DEBUG oslo_service.backend._eventlet.service [None req-ab701943-9b40-45b6-962a-b912807dc6b0 - - - - - -] vendordata_dynamic_auth.auth_section = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 09 16:02:01 compute-0 nova_compute[117331]: 2025-10-09 16:02:01.975 2 DEBUG oslo_service.backend._eventlet.service [None req-ab701943-9b40-45b6-962a-b912807dc6b0 - - - - - -] vendordata_dynamic_auth.auth_type = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 09 16:02:01 compute-0 nova_compute[117331]: 2025-10-09 16:02:01.975 2 DEBUG oslo_service.backend._eventlet.service [None req-ab701943-9b40-45b6-962a-b912807dc6b0 - - - - - -] vendordata_dynamic_auth.cafile = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 09 16:02:01 compute-0 nova_compute[117331]: 2025-10-09 16:02:01.975 2 DEBUG oslo_service.backend._eventlet.service [None req-ab701943-9b40-45b6-962a-b912807dc6b0 - - - - - -] vendordata_dynamic_auth.certfile = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 09 16:02:01 compute-0 nova_compute[117331]: 2025-10-09 16:02:01.975 2 DEBUG oslo_service.backend._eventlet.service [None req-ab701943-9b40-45b6-962a-b912807dc6b0 - - - - - -] vendordata_dynamic_auth.collect_timing = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 09 16:02:01 compute-0 nova_compute[117331]: 2025-10-09 16:02:01.975 2 DEBUG oslo_service.backend._eventlet.service [None req-ab701943-9b40-45b6-962a-b912807dc6b0 - - - - - -] vendordata_dynamic_auth.insecure = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 09 16:02:01 compute-0 nova_compute[117331]: 2025-10-09 16:02:01.975 2 DEBUG oslo_service.backend._eventlet.service [None req-ab701943-9b40-45b6-962a-b912807dc6b0 - - - - - -] vendordata_dynamic_auth.keyfile = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 09 16:02:01 compute-0 nova_compute[117331]: 2025-10-09 16:02:01.976 2 DEBUG oslo_service.backend._eventlet.service [None req-ab701943-9b40-45b6-962a-b912807dc6b0 - - - - - -] vendordata_dynamic_auth.split_loggers = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 09 16:02:01 compute-0 nova_compute[117331]: 2025-10-09 16:02:01.976 2 DEBUG oslo_service.backend._eventlet.service [None req-ab701943-9b40-45b6-962a-b912807dc6b0 - - - - - -] vendordata_dynamic_auth.timeout = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 09 16:02:01 compute-0 nova_compute[117331]: 2025-10-09 16:02:01.976 2 DEBUG oslo_service.backend._eventlet.service [None req-ab701943-9b40-45b6-962a-b912807dc6b0 - - - - - -] vmware.api_retry_count         = 10 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 09 16:02:01 compute-0 nova_compute[117331]: 2025-10-09 16:02:01.976 2 DEBUG oslo_service.backend._eventlet.service [None req-ab701943-9b40-45b6-962a-b912807dc6b0 - - - - - -] vmware.ca_file                 = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 09 16:02:01 compute-0 nova_compute[117331]: 2025-10-09 16:02:01.976 2 DEBUG oslo_service.backend._eventlet.service [None req-ab701943-9b40-45b6-962a-b912807dc6b0 - - - - - -] vmware.cache_prefix            = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 09 16:02:01 compute-0 nova_compute[117331]: 2025-10-09 16:02:01.976 2 DEBUG oslo_service.backend._eventlet.service [None req-ab701943-9b40-45b6-962a-b912807dc6b0 - - - - - -] vmware.cluster_name            = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 09 16:02:01 compute-0 nova_compute[117331]: 2025-10-09 16:02:01.976 2 DEBUG oslo_service.backend._eventlet.service [None req-ab701943-9b40-45b6-962a-b912807dc6b0 - - - - - -] vmware.connection_pool_size    = 10 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 09 16:02:01 compute-0 nova_compute[117331]: 2025-10-09 16:02:01.976 2 DEBUG oslo_service.backend._eventlet.service [None req-ab701943-9b40-45b6-962a-b912807dc6b0 - - - - - -] vmware.console_delay_seconds   = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 09 16:02:01 compute-0 nova_compute[117331]: 2025-10-09 16:02:01.977 2 DEBUG oslo_service.backend._eventlet.service [None req-ab701943-9b40-45b6-962a-b912807dc6b0 - - - - - -] vmware.datastore_regex         = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 09 16:02:01 compute-0 nova_compute[117331]: 2025-10-09 16:02:01.977 2 DEBUG oslo_service.backend._eventlet.service [None req-ab701943-9b40-45b6-962a-b912807dc6b0 - - - - - -] vmware.host_ip                 = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 09 16:02:01 compute-0 nova_compute[117331]: 2025-10-09 16:02:01.977 2 DEBUG oslo_service.backend._eventlet.service [None req-ab701943-9b40-45b6-962a-b912807dc6b0 - - - - - -] vmware.host_password           = **** log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 09 16:02:01 compute-0 nova_compute[117331]: 2025-10-09 16:02:01.977 2 DEBUG oslo_service.backend._eventlet.service [None req-ab701943-9b40-45b6-962a-b912807dc6b0 - - - - - -] vmware.host_port               = 443 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 09 16:02:01 compute-0 nova_compute[117331]: 2025-10-09 16:02:01.977 2 DEBUG oslo_service.backend._eventlet.service [None req-ab701943-9b40-45b6-962a-b912807dc6b0 - - - - - -] vmware.host_username           = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 09 16:02:01 compute-0 nova_compute[117331]: 2025-10-09 16:02:01.977 2 DEBUG oslo_service.backend._eventlet.service [None req-ab701943-9b40-45b6-962a-b912807dc6b0 - - - - - -] vmware.insecure                = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 09 16:02:01 compute-0 nova_compute[117331]: 2025-10-09 16:02:01.977 2 DEBUG oslo_service.backend._eventlet.service [None req-ab701943-9b40-45b6-962a-b912807dc6b0 - - - - - -] vmware.integration_bridge      = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 09 16:02:01 compute-0 nova_compute[117331]: 2025-10-09 16:02:01.977 2 DEBUG oslo_service.backend._eventlet.service [None req-ab701943-9b40-45b6-962a-b912807dc6b0 - - - - - -] vmware.maximum_objects         = 100 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 09 16:02:01 compute-0 nova_compute[117331]: 2025-10-09 16:02:01.977 2 DEBUG oslo_service.backend._eventlet.service [None req-ab701943-9b40-45b6-962a-b912807dc6b0 - - - - - -] vmware.pbm_default_policy      = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 09 16:02:01 compute-0 nova_compute[117331]: 2025-10-09 16:02:01.978 2 DEBUG oslo_service.backend._eventlet.service [None req-ab701943-9b40-45b6-962a-b912807dc6b0 - - - - - -] vmware.pbm_enabled             = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 09 16:02:01 compute-0 nova_compute[117331]: 2025-10-09 16:02:01.978 2 DEBUG oslo_service.backend._eventlet.service [None req-ab701943-9b40-45b6-962a-b912807dc6b0 - - - - - -] vmware.pbm_wsdl_location       = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 09 16:02:01 compute-0 nova_compute[117331]: 2025-10-09 16:02:01.978 2 DEBUG oslo_service.backend._eventlet.service [None req-ab701943-9b40-45b6-962a-b912807dc6b0 - - - - - -] vmware.serial_log_dir          = /opt/vmware/vspc log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 09 16:02:01 compute-0 nova_compute[117331]: 2025-10-09 16:02:01.978 2 DEBUG oslo_service.backend._eventlet.service [None req-ab701943-9b40-45b6-962a-b912807dc6b0 - - - - - -] vmware.serial_port_proxy_uri   = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 09 16:02:01 compute-0 nova_compute[117331]: 2025-10-09 16:02:01.978 2 DEBUG oslo_service.backend._eventlet.service [None req-ab701943-9b40-45b6-962a-b912807dc6b0 - - - - - -] vmware.serial_port_service_uri = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 09 16:02:01 compute-0 nova_compute[117331]: 2025-10-09 16:02:01.978 2 DEBUG oslo_service.backend._eventlet.service [None req-ab701943-9b40-45b6-962a-b912807dc6b0 - - - - - -] vmware.task_poll_interval      = 0.5 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 09 16:02:01 compute-0 nova_compute[117331]: 2025-10-09 16:02:01.978 2 DEBUG oslo_service.backend._eventlet.service [None req-ab701943-9b40-45b6-962a-b912807dc6b0 - - - - - -] vmware.use_linked_clone        = True log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 09 16:02:01 compute-0 nova_compute[117331]: 2025-10-09 16:02:01.978 2 DEBUG oslo_service.backend._eventlet.service [None req-ab701943-9b40-45b6-962a-b912807dc6b0 - - - - - -] vmware.vnc_keymap              = en-us log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 09 16:02:01 compute-0 nova_compute[117331]: 2025-10-09 16:02:01.979 2 DEBUG oslo_service.backend._eventlet.service [None req-ab701943-9b40-45b6-962a-b912807dc6b0 - - - - - -] vmware.vnc_port                = 5900 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 09 16:02:01 compute-0 nova_compute[117331]: 2025-10-09 16:02:01.979 2 DEBUG oslo_service.backend._eventlet.service [None req-ab701943-9b40-45b6-962a-b912807dc6b0 - - - - - -] vmware.vnc_port_total          = 10000 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 09 16:02:01 compute-0 nova_compute[117331]: 2025-10-09 16:02:01.979 2 DEBUG oslo_service.backend._eventlet.service [None req-ab701943-9b40-45b6-962a-b912807dc6b0 - - - - - -] vnc.auth_schemes               = ['none'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 09 16:02:01 compute-0 nova_compute[117331]: 2025-10-09 16:02:01.979 2 DEBUG oslo_service.backend._eventlet.service [None req-ab701943-9b40-45b6-962a-b912807dc6b0 - - - - - -] vnc.enabled                    = True log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 09 16:02:01 compute-0 nova_compute[117331]: 2025-10-09 16:02:01.979 2 DEBUG oslo_service.backend._eventlet.service [None req-ab701943-9b40-45b6-962a-b912807dc6b0 - - - - - -] vnc.novncproxy_base_url        = https://nova-novncproxy-cell1-public-openstack.apps-crc.testing/vnc_lite.html log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 09 16:02:01 compute-0 nova_compute[117331]: 2025-10-09 16:02:01.979 2 DEBUG oslo_service.backend._eventlet.service [None req-ab701943-9b40-45b6-962a-b912807dc6b0 - - - - - -] vnc.novncproxy_host            = 0.0.0.0 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 09 16:02:01 compute-0 nova_compute[117331]: 2025-10-09 16:02:01.979 2 DEBUG oslo_service.backend._eventlet.service [None req-ab701943-9b40-45b6-962a-b912807dc6b0 - - - - - -] vnc.novncproxy_port            = 6080 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 09 16:02:01 compute-0 nova_compute[117331]: 2025-10-09 16:02:01.980 2 DEBUG oslo_service.backend._eventlet.service [None req-ab701943-9b40-45b6-962a-b912807dc6b0 - - - - - -] vnc.server_listen              = ::0 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 09 16:02:01 compute-0 nova_compute[117331]: 2025-10-09 16:02:01.980 2 DEBUG oslo_service.backend._eventlet.service [None req-ab701943-9b40-45b6-962a-b912807dc6b0 - - - - - -] vnc.server_proxyclient_address = 192.168.122.100 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 09 16:02:01 compute-0 nova_compute[117331]: 2025-10-09 16:02:01.980 2 DEBUG oslo_service.backend._eventlet.service [None req-ab701943-9b40-45b6-962a-b912807dc6b0 - - - - - -] vnc.vencrypt_ca_certs          = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 09 16:02:01 compute-0 nova_compute[117331]: 2025-10-09 16:02:01.980 2 DEBUG oslo_service.backend._eventlet.service [None req-ab701943-9b40-45b6-962a-b912807dc6b0 - - - - - -] vnc.vencrypt_client_cert       = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 09 16:02:01 compute-0 nova_compute[117331]: 2025-10-09 16:02:01.980 2 DEBUG oslo_service.backend._eventlet.service [None req-ab701943-9b40-45b6-962a-b912807dc6b0 - - - - - -] vnc.vencrypt_client_key        = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 09 16:02:01 compute-0 nova_compute[117331]: 2025-10-09 16:02:01.980 2 DEBUG oslo_service.backend._eventlet.service [None req-ab701943-9b40-45b6-962a-b912807dc6b0 - - - - - -] workarounds.disable_compute_service_check_for_ffu = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 09 16:02:01 compute-0 nova_compute[117331]: 2025-10-09 16:02:01.980 2 DEBUG oslo_service.backend._eventlet.service [None req-ab701943-9b40-45b6-962a-b912807dc6b0 - - - - - -] workarounds.disable_deep_image_inspection = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 09 16:02:01 compute-0 nova_compute[117331]: 2025-10-09 16:02:01.980 2 DEBUG oslo_service.backend._eventlet.service [None req-ab701943-9b40-45b6-962a-b912807dc6b0 - - - - - -] workarounds.disable_fallback_pcpu_query = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 09 16:02:01 compute-0 nova_compute[117331]: 2025-10-09 16:02:01.981 2 DEBUG oslo_service.backend._eventlet.service [None req-ab701943-9b40-45b6-962a-b912807dc6b0 - - - - - -] workarounds.disable_group_policy_check_upcall = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 09 16:02:01 compute-0 nova_compute[117331]: 2025-10-09 16:02:01.981 2 DEBUG oslo_service.backend._eventlet.service [None req-ab701943-9b40-45b6-962a-b912807dc6b0 - - - - - -] workarounds.disable_libvirt_livesnapshot = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 09 16:02:01 compute-0 nova_compute[117331]: 2025-10-09 16:02:01.981 2 DEBUG oslo_service.backend._eventlet.service [None req-ab701943-9b40-45b6-962a-b912807dc6b0 - - - - - -] workarounds.disable_rootwrap   = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 09 16:02:01 compute-0 nova_compute[117331]: 2025-10-09 16:02:01.981 2 DEBUG oslo_service.backend._eventlet.service [None req-ab701943-9b40-45b6-962a-b912807dc6b0 - - - - - -] workarounds.enable_numa_live_migration = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 09 16:02:01 compute-0 nova_compute[117331]: 2025-10-09 16:02:01.981 2 DEBUG oslo_service.backend._eventlet.service [None req-ab701943-9b40-45b6-962a-b912807dc6b0 - - - - - -] workarounds.enable_qemu_monitor_announce_self = True log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 09 16:02:01 compute-0 nova_compute[117331]: 2025-10-09 16:02:01.981 2 DEBUG oslo_service.backend._eventlet.service [None req-ab701943-9b40-45b6-962a-b912807dc6b0 - - - - - -] workarounds.ensure_libvirt_rbd_instance_dir_cleanup = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 09 16:02:01 compute-0 nova_compute[117331]: 2025-10-09 16:02:01.981 2 DEBUG oslo_service.backend._eventlet.service [None req-ab701943-9b40-45b6-962a-b912807dc6b0 - - - - - -] workarounds.handle_virt_lifecycle_events = True log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 09 16:02:01 compute-0 nova_compute[117331]: 2025-10-09 16:02:01.981 2 DEBUG oslo_service.backend._eventlet.service [None req-ab701943-9b40-45b6-962a-b912807dc6b0 - - - - - -] workarounds.libvirt_disable_apic = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 09 16:02:01 compute-0 nova_compute[117331]: 2025-10-09 16:02:01.982 2 DEBUG oslo_service.backend._eventlet.service [None req-ab701943-9b40-45b6-962a-b912807dc6b0 - - - - - -] workarounds.never_download_image_if_on_rbd = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 09 16:02:01 compute-0 nova_compute[117331]: 2025-10-09 16:02:01.982 2 DEBUG oslo_service.backend._eventlet.service [None req-ab701943-9b40-45b6-962a-b912807dc6b0 - - - - - -] workarounds.qemu_monitor_announce_self_count = 3 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 09 16:02:01 compute-0 nova_compute[117331]: 2025-10-09 16:02:01.982 2 DEBUG oslo_service.backend._eventlet.service [None req-ab701943-9b40-45b6-962a-b912807dc6b0 - - - - - -] workarounds.qemu_monitor_announce_self_interval = 1 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 09 16:02:01 compute-0 nova_compute[117331]: 2025-10-09 16:02:01.982 2 DEBUG oslo_service.backend._eventlet.service [None req-ab701943-9b40-45b6-962a-b912807dc6b0 - - - - - -] workarounds.reserve_disk_resource_for_image_cache = True log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 09 16:02:01 compute-0 nova_compute[117331]: 2025-10-09 16:02:01.982 2 DEBUG oslo_service.backend._eventlet.service [None req-ab701943-9b40-45b6-962a-b912807dc6b0 - - - - - -] workarounds.skip_cpu_compare_at_startup = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 09 16:02:01 compute-0 nova_compute[117331]: 2025-10-09 16:02:01.982 2 DEBUG oslo_service.backend._eventlet.service [None req-ab701943-9b40-45b6-962a-b912807dc6b0 - - - - - -] workarounds.skip_cpu_compare_on_dest = True log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 09 16:02:01 compute-0 nova_compute[117331]: 2025-10-09 16:02:01.982 2 DEBUG oslo_service.backend._eventlet.service [None req-ab701943-9b40-45b6-962a-b912807dc6b0 - - - - - -] workarounds.skip_hypervisor_version_check_on_lm = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 09 16:02:01 compute-0 nova_compute[117331]: 2025-10-09 16:02:01.982 2 DEBUG oslo_service.backend._eventlet.service [None req-ab701943-9b40-45b6-962a-b912807dc6b0 - - - - - -] workarounds.skip_reserve_in_use_ironic_nodes = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 09 16:02:01 compute-0 nova_compute[117331]: 2025-10-09 16:02:01.982 2 DEBUG oslo_service.backend._eventlet.service [None req-ab701943-9b40-45b6-962a-b912807dc6b0 - - - - - -] workarounds.unified_limits_count_pcpu_as_vcpu = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 09 16:02:01 compute-0 nova_compute[117331]: 2025-10-09 16:02:01.983 2 DEBUG oslo_service.backend._eventlet.service [None req-ab701943-9b40-45b6-962a-b912807dc6b0 - - - - - -] workarounds.wait_for_vif_plugged_event_during_hard_reboot = [] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 09 16:02:01 compute-0 nova_compute[117331]: 2025-10-09 16:02:01.983 2 DEBUG oslo_service.backend._eventlet.service [None req-ab701943-9b40-45b6-962a-b912807dc6b0 - - - - - -] wsgi.api_paste_config          = api-paste.ini log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 09 16:02:01 compute-0 nova_compute[117331]: 2025-10-09 16:02:01.983 2 DEBUG oslo_service.backend._eventlet.service [None req-ab701943-9b40-45b6-962a-b912807dc6b0 - - - - - -] wsgi.secure_proxy_ssl_header   = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 09 16:02:01 compute-0 nova_compute[117331]: 2025-10-09 16:02:01.983 2 DEBUG oslo_service.backend._eventlet.service [None req-ab701943-9b40-45b6-962a-b912807dc6b0 - - - - - -] zvm.ca_file                    = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 09 16:02:01 compute-0 nova_compute[117331]: 2025-10-09 16:02:01.983 2 DEBUG oslo_service.backend._eventlet.service [None req-ab701943-9b40-45b6-962a-b912807dc6b0 - - - - - -] zvm.cloud_connector_url        = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 09 16:02:01 compute-0 nova_compute[117331]: 2025-10-09 16:02:01.983 2 DEBUG oslo_service.backend._eventlet.service [None req-ab701943-9b40-45b6-962a-b912807dc6b0 - - - - - -] zvm.image_tmp_path             = /var/lib/nova/images log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 09 16:02:01 compute-0 nova_compute[117331]: 2025-10-09 16:02:01.983 2 DEBUG oslo_service.backend._eventlet.service [None req-ab701943-9b40-45b6-962a-b912807dc6b0 - - - - - -] zvm.reachable_timeout          = 300 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 09 16:02:01 compute-0 nova_compute[117331]: 2025-10-09 16:02:01.983 2 DEBUG oslo_service.backend._eventlet.service [None req-ab701943-9b40-45b6-962a-b912807dc6b0 - - - - - -] oslo_versionedobjects.fatal_exception_format_errors = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 09 16:02:01 compute-0 nova_compute[117331]: 2025-10-09 16:02:01.984 2 DEBUG oslo_service.backend._eventlet.service [None req-ab701943-9b40-45b6-962a-b912807dc6b0 - - - - - -] oslo_middleware.http_basic_auth_user_file = /etc/htpasswd log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 09 16:02:01 compute-0 nova_compute[117331]: 2025-10-09 16:02:01.984 2 DEBUG oslo_service.backend._eventlet.service [None req-ab701943-9b40-45b6-962a-b912807dc6b0 - - - - - -] oslo_messaging_rabbit.amqp_auto_delete = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 09 16:02:01 compute-0 nova_compute[117331]: 2025-10-09 16:02:01.984 2 DEBUG oslo_service.backend._eventlet.service [None req-ab701943-9b40-45b6-962a-b912807dc6b0 - - - - - -] oslo_messaging_rabbit.amqp_durable_queues = True log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 09 16:02:01 compute-0 nova_compute[117331]: 2025-10-09 16:02:01.984 2 DEBUG oslo_service.backend._eventlet.service [None req-ab701943-9b40-45b6-962a-b912807dc6b0 - - - - - -] oslo_messaging_rabbit.conn_pool_min_size = 2 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 09 16:02:01 compute-0 nova_compute[117331]: 2025-10-09 16:02:01.984 2 DEBUG oslo_service.backend._eventlet.service [None req-ab701943-9b40-45b6-962a-b912807dc6b0 - - - - - -] oslo_messaging_rabbit.conn_pool_ttl = 1200 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 09 16:02:01 compute-0 nova_compute[117331]: 2025-10-09 16:02:01.984 2 DEBUG oslo_service.backend._eventlet.service [None req-ab701943-9b40-45b6-962a-b912807dc6b0 - - - - - -] oslo_messaging_rabbit.direct_mandatory_flag = True log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 09 16:02:01 compute-0 nova_compute[117331]: 2025-10-09 16:02:01.984 2 DEBUG oslo_service.backend._eventlet.service [None req-ab701943-9b40-45b6-962a-b912807dc6b0 - - - - - -] oslo_messaging_rabbit.enable_cancel_on_failover = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 09 16:02:01 compute-0 nova_compute[117331]: 2025-10-09 16:02:01.984 2 DEBUG oslo_service.backend._eventlet.service [None req-ab701943-9b40-45b6-962a-b912807dc6b0 - - - - - -] oslo_messaging_rabbit.heartbeat_in_pthread = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 09 16:02:01 compute-0 nova_compute[117331]: 2025-10-09 16:02:01.985 2 DEBUG oslo_service.backend._eventlet.service [None req-ab701943-9b40-45b6-962a-b912807dc6b0 - - - - - -] oslo_messaging_rabbit.heartbeat_rate = 3 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 09 16:02:01 compute-0 nova_compute[117331]: 2025-10-09 16:02:01.985 2 DEBUG oslo_service.backend._eventlet.service [None req-ab701943-9b40-45b6-962a-b912807dc6b0 - - - - - -] oslo_messaging_rabbit.heartbeat_timeout_threshold = 60 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 09 16:02:01 compute-0 nova_compute[117331]: 2025-10-09 16:02:01.985 2 DEBUG oslo_service.backend._eventlet.service [None req-ab701943-9b40-45b6-962a-b912807dc6b0 - - - - - -] oslo_messaging_rabbit.hostname = compute-0 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 09 16:02:01 compute-0 nova_compute[117331]: 2025-10-09 16:02:01.985 2 DEBUG oslo_service.backend._eventlet.service [None req-ab701943-9b40-45b6-962a-b912807dc6b0 - - - - - -] oslo_messaging_rabbit.kombu_compression = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 09 16:02:01 compute-0 nova_compute[117331]: 2025-10-09 16:02:01.985 2 DEBUG oslo_service.backend._eventlet.service [None req-ab701943-9b40-45b6-962a-b912807dc6b0 - - - - - -] oslo_messaging_rabbit.kombu_failover_strategy = round-robin log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 09 16:02:01 compute-0 nova_compute[117331]: 2025-10-09 16:02:01.985 2 DEBUG oslo_service.backend._eventlet.service [None req-ab701943-9b40-45b6-962a-b912807dc6b0 - - - - - -] oslo_messaging_rabbit.kombu_missing_consumer_retry_timeout = 60 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 09 16:02:01 compute-0 nova_compute[117331]: 2025-10-09 16:02:01.985 2 DEBUG oslo_service.backend._eventlet.service [None req-ab701943-9b40-45b6-962a-b912807dc6b0 - - - - - -] oslo_messaging_rabbit.kombu_reconnect_delay = 1.0 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 09 16:02:01 compute-0 nova_compute[117331]: 2025-10-09 16:02:01.985 2 DEBUG oslo_service.backend._eventlet.service [None req-ab701943-9b40-45b6-962a-b912807dc6b0 - - - - - -] oslo_messaging_rabbit.kombu_reconnect_splay = 0.0 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 09 16:02:01 compute-0 nova_compute[117331]: 2025-10-09 16:02:01.986 2 DEBUG oslo_service.backend._eventlet.service [None req-ab701943-9b40-45b6-962a-b912807dc6b0 - - - - - -] oslo_messaging_rabbit.processname = nova-compute log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 09 16:02:01 compute-0 nova_compute[117331]: 2025-10-09 16:02:01.986 2 DEBUG oslo_service.backend._eventlet.service [None req-ab701943-9b40-45b6-962a-b912807dc6b0 - - - - - -] oslo_messaging_rabbit.rabbit_ha_queues = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 09 16:02:01 compute-0 nova_compute[117331]: 2025-10-09 16:02:01.986 2 DEBUG oslo_service.backend._eventlet.service [None req-ab701943-9b40-45b6-962a-b912807dc6b0 - - - - - -] oslo_messaging_rabbit.rabbit_interval_max = 30 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 09 16:02:01 compute-0 nova_compute[117331]: 2025-10-09 16:02:01.986 2 DEBUG oslo_service.backend._eventlet.service [None req-ab701943-9b40-45b6-962a-b912807dc6b0 - - - - - -] oslo_messaging_rabbit.rabbit_login_method = AMQPLAIN log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 09 16:02:01 compute-0 nova_compute[117331]: 2025-10-09 16:02:01.986 2 DEBUG oslo_service.backend._eventlet.service [None req-ab701943-9b40-45b6-962a-b912807dc6b0 - - - - - -] oslo_messaging_rabbit.rabbit_qos_prefetch_count = 0 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 09 16:02:01 compute-0 nova_compute[117331]: 2025-10-09 16:02:01.986 2 DEBUG oslo_service.backend._eventlet.service [None req-ab701943-9b40-45b6-962a-b912807dc6b0 - - - - - -] oslo_messaging_rabbit.rabbit_quorum_delivery_limit = 0 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 09 16:02:01 compute-0 nova_compute[117331]: 2025-10-09 16:02:01.986 2 DEBUG oslo_service.backend._eventlet.service [None req-ab701943-9b40-45b6-962a-b912807dc6b0 - - - - - -] oslo_messaging_rabbit.rabbit_quorum_max_memory_bytes = 0 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 09 16:02:01 compute-0 nova_compute[117331]: 2025-10-09 16:02:01.986 2 DEBUG oslo_service.backend._eventlet.service [None req-ab701943-9b40-45b6-962a-b912807dc6b0 - - - - - -] oslo_messaging_rabbit.rabbit_quorum_max_memory_length = 0 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 09 16:02:01 compute-0 nova_compute[117331]: 2025-10-09 16:02:01.986 2 DEBUG oslo_service.backend._eventlet.service [None req-ab701943-9b40-45b6-962a-b912807dc6b0 - - - - - -] oslo_messaging_rabbit.rabbit_quorum_queue = True log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 09 16:02:01 compute-0 nova_compute[117331]: 2025-10-09 16:02:01.987 2 DEBUG oslo_service.backend._eventlet.service [None req-ab701943-9b40-45b6-962a-b912807dc6b0 - - - - - -] oslo_messaging_rabbit.rabbit_retry_backoff = 2 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 09 16:02:01 compute-0 nova_compute[117331]: 2025-10-09 16:02:01.987 2 DEBUG oslo_service.backend._eventlet.service [None req-ab701943-9b40-45b6-962a-b912807dc6b0 - - - - - -] oslo_messaging_rabbit.rabbit_retry_interval = 1 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 09 16:02:01 compute-0 nova_compute[117331]: 2025-10-09 16:02:01.987 2 DEBUG oslo_service.backend._eventlet.service [None req-ab701943-9b40-45b6-962a-b912807dc6b0 - - - - - -] oslo_messaging_rabbit.rabbit_stream_fanout = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 09 16:02:01 compute-0 nova_compute[117331]: 2025-10-09 16:02:01.987 2 DEBUG oslo_service.backend._eventlet.service [None req-ab701943-9b40-45b6-962a-b912807dc6b0 - - - - - -] oslo_messaging_rabbit.rabbit_transient_queues_ttl = 1800 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 09 16:02:01 compute-0 nova_compute[117331]: 2025-10-09 16:02:01.987 2 DEBUG oslo_service.backend._eventlet.service [None req-ab701943-9b40-45b6-962a-b912807dc6b0 - - - - - -] oslo_messaging_rabbit.rabbit_transient_quorum_queue = True log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 09 16:02:01 compute-0 nova_compute[117331]: 2025-10-09 16:02:01.987 2 DEBUG oslo_service.backend._eventlet.service [None req-ab701943-9b40-45b6-962a-b912807dc6b0 - - - - - -] oslo_messaging_rabbit.rpc_conn_pool_size = 30 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 09 16:02:01 compute-0 nova_compute[117331]: 2025-10-09 16:02:01.987 2 DEBUG oslo_service.backend._eventlet.service [None req-ab701943-9b40-45b6-962a-b912807dc6b0 - - - - - -] oslo_messaging_rabbit.ssl      = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 09 16:02:01 compute-0 nova_compute[117331]: 2025-10-09 16:02:01.987 2 DEBUG oslo_service.backend._eventlet.service [None req-ab701943-9b40-45b6-962a-b912807dc6b0 - - - - - -] oslo_messaging_rabbit.ssl_ca_file =  log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 09 16:02:01 compute-0 nova_compute[117331]: 2025-10-09 16:02:01.988 2 DEBUG oslo_service.backend._eventlet.service [None req-ab701943-9b40-45b6-962a-b912807dc6b0 - - - - - -] oslo_messaging_rabbit.ssl_cert_file =  log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 09 16:02:01 compute-0 nova_compute[117331]: 2025-10-09 16:02:01.988 2 DEBUG oslo_service.backend._eventlet.service [None req-ab701943-9b40-45b6-962a-b912807dc6b0 - - - - - -] oslo_messaging_rabbit.ssl_enforce_fips_mode = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 09 16:02:01 compute-0 nova_compute[117331]: 2025-10-09 16:02:01.988 2 DEBUG oslo_service.backend._eventlet.service [None req-ab701943-9b40-45b6-962a-b912807dc6b0 - - - - - -] oslo_messaging_rabbit.ssl_key_file =  log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 09 16:02:01 compute-0 nova_compute[117331]: 2025-10-09 16:02:01.988 2 DEBUG oslo_service.backend._eventlet.service [None req-ab701943-9b40-45b6-962a-b912807dc6b0 - - - - - -] oslo_messaging_rabbit.ssl_version =  log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 09 16:02:01 compute-0 nova_compute[117331]: 2025-10-09 16:02:01.988 2 DEBUG oslo_service.backend._eventlet.service [None req-ab701943-9b40-45b6-962a-b912807dc6b0 - - - - - -] oslo_messaging_rabbit.use_queue_manager = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 09 16:02:01 compute-0 nova_compute[117331]: 2025-10-09 16:02:01.988 2 DEBUG oslo_service.backend._eventlet.service [None req-ab701943-9b40-45b6-962a-b912807dc6b0 - - - - - -] oslo_messaging_notifications.driver = ['messagingv2'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 09 16:02:01 compute-0 nova_compute[117331]: 2025-10-09 16:02:01.988 2 DEBUG oslo_service.backend._eventlet.service [None req-ab701943-9b40-45b6-962a-b912807dc6b0 - - - - - -] oslo_messaging_notifications.retry = -1 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 09 16:02:01 compute-0 nova_compute[117331]: 2025-10-09 16:02:01.988 2 DEBUG oslo_service.backend._eventlet.service [None req-ab701943-9b40-45b6-962a-b912807dc6b0 - - - - - -] oslo_messaging_notifications.topics = ['notifications'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 09 16:02:01 compute-0 nova_compute[117331]: 2025-10-09 16:02:01.989 2 DEBUG oslo_service.backend._eventlet.service [None req-ab701943-9b40-45b6-962a-b912807dc6b0 - - - - - -] oslo_messaging_notifications.transport_url = **** log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 09 16:02:01 compute-0 nova_compute[117331]: 2025-10-09 16:02:01.989 2 DEBUG oslo_service.backend._eventlet.service [None req-ab701943-9b40-45b6-962a-b912807dc6b0 - - - - - -] oslo_limit.auth_section        = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 09 16:02:01 compute-0 nova_compute[117331]: 2025-10-09 16:02:01.989 2 DEBUG oslo_service.backend._eventlet.service [None req-ab701943-9b40-45b6-962a-b912807dc6b0 - - - - - -] oslo_limit.auth_type           = password log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 09 16:02:01 compute-0 nova_compute[117331]: 2025-10-09 16:02:01.989 2 DEBUG oslo_service.backend._eventlet.service [None req-ab701943-9b40-45b6-962a-b912807dc6b0 - - - - - -] oslo_limit.auth_url            = https://keystone-internal.openstack.svc:5000 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 09 16:02:01 compute-0 nova_compute[117331]: 2025-10-09 16:02:01.989 2 DEBUG oslo_service.backend._eventlet.service [None req-ab701943-9b40-45b6-962a-b912807dc6b0 - - - - - -] oslo_limit.cafile              = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 09 16:02:01 compute-0 nova_compute[117331]: 2025-10-09 16:02:01.989 2 DEBUG oslo_service.backend._eventlet.service [None req-ab701943-9b40-45b6-962a-b912807dc6b0 - - - - - -] oslo_limit.certfile            = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 09 16:02:01 compute-0 nova_compute[117331]: 2025-10-09 16:02:01.989 2 DEBUG oslo_service.backend._eventlet.service [None req-ab701943-9b40-45b6-962a-b912807dc6b0 - - - - - -] oslo_limit.collect_timing      = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 09 16:02:01 compute-0 nova_compute[117331]: 2025-10-09 16:02:01.990 2 DEBUG oslo_service.backend._eventlet.service [None req-ab701943-9b40-45b6-962a-b912807dc6b0 - - - - - -] oslo_limit.connect_retries     = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 09 16:02:01 compute-0 nova_compute[117331]: 2025-10-09 16:02:01.990 2 DEBUG oslo_service.backend._eventlet.service [None req-ab701943-9b40-45b6-962a-b912807dc6b0 - - - - - -] oslo_limit.connect_retry_delay = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 09 16:02:01 compute-0 nova_compute[117331]: 2025-10-09 16:02:01.990 2 DEBUG oslo_service.backend._eventlet.service [None req-ab701943-9b40-45b6-962a-b912807dc6b0 - - - - - -] oslo_limit.default_domain_id   = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 09 16:02:01 compute-0 nova_compute[117331]: 2025-10-09 16:02:01.990 2 DEBUG oslo_service.backend._eventlet.service [None req-ab701943-9b40-45b6-962a-b912807dc6b0 - - - - - -] oslo_limit.default_domain_name = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 09 16:02:01 compute-0 nova_compute[117331]: 2025-10-09 16:02:01.990 2 DEBUG oslo_service.backend._eventlet.service [None req-ab701943-9b40-45b6-962a-b912807dc6b0 - - - - - -] oslo_limit.domain_id           = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 09 16:02:01 compute-0 nova_compute[117331]: 2025-10-09 16:02:01.990 2 DEBUG oslo_service.backend._eventlet.service [None req-ab701943-9b40-45b6-962a-b912807dc6b0 - - - - - -] oslo_limit.domain_name         = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 09 16:02:01 compute-0 nova_compute[117331]: 2025-10-09 16:02:01.990 2 DEBUG oslo_service.backend._eventlet.service [None req-ab701943-9b40-45b6-962a-b912807dc6b0 - - - - - -] oslo_limit.endpoint_id         = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 09 16:02:01 compute-0 nova_compute[117331]: 2025-10-09 16:02:01.990 2 DEBUG oslo_service.backend._eventlet.service [None req-ab701943-9b40-45b6-962a-b912807dc6b0 - - - - - -] oslo_limit.endpoint_interface  = internal log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 09 16:02:01 compute-0 nova_compute[117331]: 2025-10-09 16:02:01.990 2 DEBUG oslo_service.backend._eventlet.service [None req-ab701943-9b40-45b6-962a-b912807dc6b0 - - - - - -] oslo_limit.endpoint_override   = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 09 16:02:01 compute-0 nova_compute[117331]: 2025-10-09 16:02:01.991 2 DEBUG oslo_service.backend._eventlet.service [None req-ab701943-9b40-45b6-962a-b912807dc6b0 - - - - - -] oslo_limit.endpoint_region_name = regionOne log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 09 16:02:01 compute-0 nova_compute[117331]: 2025-10-09 16:02:01.991 2 DEBUG oslo_service.backend._eventlet.service [None req-ab701943-9b40-45b6-962a-b912807dc6b0 - - - - - -] oslo_limit.endpoint_service_name = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 09 16:02:01 compute-0 nova_compute[117331]: 2025-10-09 16:02:01.991 2 DEBUG oslo_service.backend._eventlet.service [None req-ab701943-9b40-45b6-962a-b912807dc6b0 - - - - - -] oslo_limit.endpoint_service_type = compute log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 09 16:02:01 compute-0 nova_compute[117331]: 2025-10-09 16:02:01.991 2 DEBUG oslo_service.backend._eventlet.service [None req-ab701943-9b40-45b6-962a-b912807dc6b0 - - - - - -] oslo_limit.insecure            = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 09 16:02:01 compute-0 nova_compute[117331]: 2025-10-09 16:02:01.991 2 DEBUG oslo_service.backend._eventlet.service [None req-ab701943-9b40-45b6-962a-b912807dc6b0 - - - - - -] oslo_limit.keyfile             = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 09 16:02:01 compute-0 nova_compute[117331]: 2025-10-09 16:02:01.991 2 DEBUG oslo_service.backend._eventlet.service [None req-ab701943-9b40-45b6-962a-b912807dc6b0 - - - - - -] oslo_limit.max_version         = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 09 16:02:01 compute-0 nova_compute[117331]: 2025-10-09 16:02:01.991 2 DEBUG oslo_service.backend._eventlet.service [None req-ab701943-9b40-45b6-962a-b912807dc6b0 - - - - - -] oslo_limit.min_version         = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 09 16:02:01 compute-0 nova_compute[117331]: 2025-10-09 16:02:01.991 2 DEBUG oslo_service.backend._eventlet.service [None req-ab701943-9b40-45b6-962a-b912807dc6b0 - - - - - -] oslo_limit.password            = **** log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 09 16:02:01 compute-0 nova_compute[117331]: 2025-10-09 16:02:01.992 2 DEBUG oslo_service.backend._eventlet.service [None req-ab701943-9b40-45b6-962a-b912807dc6b0 - - - - - -] oslo_limit.project_domain_id   = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 09 16:02:01 compute-0 nova_compute[117331]: 2025-10-09 16:02:01.992 2 DEBUG oslo_service.backend._eventlet.service [None req-ab701943-9b40-45b6-962a-b912807dc6b0 - - - - - -] oslo_limit.project_domain_name = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 09 16:02:01 compute-0 nova_compute[117331]: 2025-10-09 16:02:01.992 2 DEBUG oslo_service.backend._eventlet.service [None req-ab701943-9b40-45b6-962a-b912807dc6b0 - - - - - -] oslo_limit.project_id          = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 09 16:02:01 compute-0 nova_compute[117331]: 2025-10-09 16:02:01.992 2 DEBUG oslo_service.backend._eventlet.service [None req-ab701943-9b40-45b6-962a-b912807dc6b0 - - - - - -] oslo_limit.project_name        = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 09 16:02:01 compute-0 nova_compute[117331]: 2025-10-09 16:02:01.992 2 DEBUG oslo_service.backend._eventlet.service [None req-ab701943-9b40-45b6-962a-b912807dc6b0 - - - - - -] oslo_limit.region_name         = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 09 16:02:01 compute-0 nova_compute[117331]: 2025-10-09 16:02:01.992 2 DEBUG oslo_service.backend._eventlet.service [None req-ab701943-9b40-45b6-962a-b912807dc6b0 - - - - - -] oslo_limit.retriable_status_codes = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 09 16:02:01 compute-0 nova_compute[117331]: 2025-10-09 16:02:01.992 2 DEBUG oslo_service.backend._eventlet.service [None req-ab701943-9b40-45b6-962a-b912807dc6b0 - - - - - -] oslo_limit.service_name        = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 09 16:02:01 compute-0 nova_compute[117331]: 2025-10-09 16:02:01.992 2 DEBUG oslo_service.backend._eventlet.service [None req-ab701943-9b40-45b6-962a-b912807dc6b0 - - - - - -] oslo_limit.service_type        = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 09 16:02:01 compute-0 nova_compute[117331]: 2025-10-09 16:02:01.992 2 DEBUG oslo_service.backend._eventlet.service [None req-ab701943-9b40-45b6-962a-b912807dc6b0 - - - - - -] oslo_limit.split_loggers       = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 09 16:02:01 compute-0 nova_compute[117331]: 2025-10-09 16:02:01.993 2 DEBUG oslo_service.backend._eventlet.service [None req-ab701943-9b40-45b6-962a-b912807dc6b0 - - - - - -] oslo_limit.status_code_retries = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 09 16:02:01 compute-0 nova_compute[117331]: 2025-10-09 16:02:01.993 2 DEBUG oslo_service.backend._eventlet.service [None req-ab701943-9b40-45b6-962a-b912807dc6b0 - - - - - -] oslo_limit.status_code_retry_delay = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 09 16:02:01 compute-0 nova_compute[117331]: 2025-10-09 16:02:01.993 2 DEBUG oslo_service.backend._eventlet.service [None req-ab701943-9b40-45b6-962a-b912807dc6b0 - - - - - -] oslo_limit.system_scope        = all log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 09 16:02:01 compute-0 nova_compute[117331]: 2025-10-09 16:02:01.993 2 DEBUG oslo_service.backend._eventlet.service [None req-ab701943-9b40-45b6-962a-b912807dc6b0 - - - - - -] oslo_limit.timeout             = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 09 16:02:01 compute-0 nova_compute[117331]: 2025-10-09 16:02:01.993 2 DEBUG oslo_service.backend._eventlet.service [None req-ab701943-9b40-45b6-962a-b912807dc6b0 - - - - - -] oslo_limit.trust_id            = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 09 16:02:01 compute-0 nova_compute[117331]: 2025-10-09 16:02:01.993 2 DEBUG oslo_service.backend._eventlet.service [None req-ab701943-9b40-45b6-962a-b912807dc6b0 - - - - - -] oslo_limit.user_domain_id      = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 09 16:02:01 compute-0 nova_compute[117331]: 2025-10-09 16:02:01.994 2 DEBUG oslo_service.backend._eventlet.service [None req-ab701943-9b40-45b6-962a-b912807dc6b0 - - - - - -] oslo_limit.user_domain_name    = Default log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 09 16:02:01 compute-0 nova_compute[117331]: 2025-10-09 16:02:01.994 2 DEBUG oslo_service.backend._eventlet.service [None req-ab701943-9b40-45b6-962a-b912807dc6b0 - - - - - -] oslo_limit.user_id             = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 09 16:02:01 compute-0 nova_compute[117331]: 2025-10-09 16:02:01.994 2 DEBUG oslo_service.backend._eventlet.service [None req-ab701943-9b40-45b6-962a-b912807dc6b0 - - - - - -] oslo_limit.username            = nova log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 09 16:02:01 compute-0 nova_compute[117331]: 2025-10-09 16:02:01.994 2 DEBUG oslo_service.backend._eventlet.service [None req-ab701943-9b40-45b6-962a-b912807dc6b0 - - - - - -] oslo_limit.valid_interfaces    = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 09 16:02:01 compute-0 nova_compute[117331]: 2025-10-09 16:02:01.994 2 DEBUG oslo_service.backend._eventlet.service [None req-ab701943-9b40-45b6-962a-b912807dc6b0 - - - - - -] oslo_limit.version             = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 09 16:02:01 compute-0 nova_compute[117331]: 2025-10-09 16:02:01.994 2 DEBUG oslo_service.backend._eventlet.service [None req-ab701943-9b40-45b6-962a-b912807dc6b0 - - - - - -] oslo_reports.file_event_handler = /var/lib/nova log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 09 16:02:01 compute-0 nova_compute[117331]: 2025-10-09 16:02:01.994 2 DEBUG oslo_service.backend._eventlet.service [None req-ab701943-9b40-45b6-962a-b912807dc6b0 - - - - - -] oslo_reports.file_event_handler_interval = 1 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 09 16:02:01 compute-0 nova_compute[117331]: 2025-10-09 16:02:01.995 2 DEBUG oslo_service.backend._eventlet.service [None req-ab701943-9b40-45b6-962a-b912807dc6b0 - - - - - -] oslo_reports.log_dir           = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 09 16:02:01 compute-0 nova_compute[117331]: 2025-10-09 16:02:01.995 2 DEBUG oslo_service.backend._eventlet.service [None req-ab701943-9b40-45b6-962a-b912807dc6b0 - - - - - -] vif_plug_linux_bridge_privileged.capabilities = [12] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 09 16:02:01 compute-0 nova_compute[117331]: 2025-10-09 16:02:01.995 2 DEBUG oslo_service.backend._eventlet.service [None req-ab701943-9b40-45b6-962a-b912807dc6b0 - - - - - -] vif_plug_linux_bridge_privileged.group = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 09 16:02:01 compute-0 nova_compute[117331]: 2025-10-09 16:02:01.995 2 DEBUG oslo_service.backend._eventlet.service [None req-ab701943-9b40-45b6-962a-b912807dc6b0 - - - - - -] vif_plug_linux_bridge_privileged.helper_command = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 09 16:02:01 compute-0 nova_compute[117331]: 2025-10-09 16:02:01.995 2 DEBUG oslo_service.backend._eventlet.service [None req-ab701943-9b40-45b6-962a-b912807dc6b0 - - - - - -] vif_plug_linux_bridge_privileged.log_daemon_traceback = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 09 16:02:01 compute-0 nova_compute[117331]: 2025-10-09 16:02:01.995 2 DEBUG oslo_service.backend._eventlet.service [None req-ab701943-9b40-45b6-962a-b912807dc6b0 - - - - - -] vif_plug_linux_bridge_privileged.logger_name = oslo_privsep.daemon log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 09 16:02:01 compute-0 nova_compute[117331]: 2025-10-09 16:02:01.995 2 DEBUG oslo_service.backend._eventlet.service [None req-ab701943-9b40-45b6-962a-b912807dc6b0 - - - - - -] vif_plug_linux_bridge_privileged.thread_pool_size = 8 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 09 16:02:01 compute-0 nova_compute[117331]: 2025-10-09 16:02:01.996 2 DEBUG oslo_service.backend._eventlet.service [None req-ab701943-9b40-45b6-962a-b912807dc6b0 - - - - - -] vif_plug_linux_bridge_privileged.user = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 09 16:02:01 compute-0 nova_compute[117331]: 2025-10-09 16:02:01.996 2 DEBUG oslo_service.backend._eventlet.service [None req-ab701943-9b40-45b6-962a-b912807dc6b0 - - - - - -] vif_plug_ovs_privileged.capabilities = [12, 1] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 09 16:02:01 compute-0 nova_compute[117331]: 2025-10-09 16:02:01.996 2 DEBUG oslo_service.backend._eventlet.service [None req-ab701943-9b40-45b6-962a-b912807dc6b0 - - - - - -] vif_plug_ovs_privileged.group  = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 09 16:02:01 compute-0 nova_compute[117331]: 2025-10-09 16:02:01.996 2 DEBUG oslo_service.backend._eventlet.service [None req-ab701943-9b40-45b6-962a-b912807dc6b0 - - - - - -] vif_plug_ovs_privileged.helper_command = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 09 16:02:01 compute-0 nova_compute[117331]: 2025-10-09 16:02:01.996 2 DEBUG oslo_service.backend._eventlet.service [None req-ab701943-9b40-45b6-962a-b912807dc6b0 - - - - - -] vif_plug_ovs_privileged.log_daemon_traceback = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 09 16:02:01 compute-0 nova_compute[117331]: 2025-10-09 16:02:01.996 2 DEBUG oslo_service.backend._eventlet.service [None req-ab701943-9b40-45b6-962a-b912807dc6b0 - - - - - -] vif_plug_ovs_privileged.logger_name = oslo_privsep.daemon log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 09 16:02:01 compute-0 nova_compute[117331]: 2025-10-09 16:02:01.996 2 DEBUG oslo_service.backend._eventlet.service [None req-ab701943-9b40-45b6-962a-b912807dc6b0 - - - - - -] vif_plug_ovs_privileged.thread_pool_size = 8 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 09 16:02:01 compute-0 nova_compute[117331]: 2025-10-09 16:02:01.997 2 DEBUG oslo_service.backend._eventlet.service [None req-ab701943-9b40-45b6-962a-b912807dc6b0 - - - - - -] vif_plug_ovs_privileged.user   = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 09 16:02:01 compute-0 nova_compute[117331]: 2025-10-09 16:02:01.997 2 DEBUG oslo_service.backend._eventlet.service [None req-ab701943-9b40-45b6-962a-b912807dc6b0 - - - - - -] os_vif_linux_bridge.flat_interface = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 09 16:02:01 compute-0 nova_compute[117331]: 2025-10-09 16:02:01.997 2 DEBUG oslo_service.backend._eventlet.service [None req-ab701943-9b40-45b6-962a-b912807dc6b0 - - - - - -] os_vif_linux_bridge.forward_bridge_interface = ['all'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 09 16:02:01 compute-0 nova_compute[117331]: 2025-10-09 16:02:01.997 2 DEBUG oslo_service.backend._eventlet.service [None req-ab701943-9b40-45b6-962a-b912807dc6b0 - - - - - -] os_vif_linux_bridge.iptables_bottom_regex =  log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 09 16:02:01 compute-0 nova_compute[117331]: 2025-10-09 16:02:01.997 2 DEBUG oslo_service.backend._eventlet.service [None req-ab701943-9b40-45b6-962a-b912807dc6b0 - - - - - -] os_vif_linux_bridge.iptables_drop_action = DROP log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 09 16:02:01 compute-0 nova_compute[117331]: 2025-10-09 16:02:01.997 2 DEBUG oslo_service.backend._eventlet.service [None req-ab701943-9b40-45b6-962a-b912807dc6b0 - - - - - -] os_vif_linux_bridge.iptables_top_regex =  log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 09 16:02:01 compute-0 nova_compute[117331]: 2025-10-09 16:02:01.997 2 DEBUG oslo_service.backend._eventlet.service [None req-ab701943-9b40-45b6-962a-b912807dc6b0 - - - - - -] os_vif_linux_bridge.network_device_mtu = 1500 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 09 16:02:01 compute-0 nova_compute[117331]: 2025-10-09 16:02:01.997 2 DEBUG oslo_service.backend._eventlet.service [None req-ab701943-9b40-45b6-962a-b912807dc6b0 - - - - - -] os_vif_linux_bridge.use_ipv6   = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 09 16:02:01 compute-0 nova_compute[117331]: 2025-10-09 16:02:01.998 2 DEBUG oslo_service.backend._eventlet.service [None req-ab701943-9b40-45b6-962a-b912807dc6b0 - - - - - -] os_vif_linux_bridge.vlan_interface = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 09 16:02:01 compute-0 nova_compute[117331]: 2025-10-09 16:02:01.998 2 DEBUG oslo_service.backend._eventlet.service [None req-ab701943-9b40-45b6-962a-b912807dc6b0 - - - - - -] os_vif_ovs.default_qos_type    = linux-noop log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 09 16:02:01 compute-0 nova_compute[117331]: 2025-10-09 16:02:01.998 2 DEBUG oslo_service.backend._eventlet.service [None req-ab701943-9b40-45b6-962a-b912807dc6b0 - - - - - -] os_vif_ovs.isolate_vif         = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 09 16:02:01 compute-0 nova_compute[117331]: 2025-10-09 16:02:01.998 2 DEBUG oslo_service.backend._eventlet.service [None req-ab701943-9b40-45b6-962a-b912807dc6b0 - - - - - -] os_vif_ovs.network_device_mtu  = 1500 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 09 16:02:01 compute-0 nova_compute[117331]: 2025-10-09 16:02:01.998 2 DEBUG oslo_service.backend._eventlet.service [None req-ab701943-9b40-45b6-962a-b912807dc6b0 - - - - - -] os_vif_ovs.ovs_vsctl_timeout   = 120 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 09 16:02:01 compute-0 nova_compute[117331]: 2025-10-09 16:02:01.998 2 DEBUG oslo_service.backend._eventlet.service [None req-ab701943-9b40-45b6-962a-b912807dc6b0 - - - - - -] os_vif_ovs.ovsdb_connection    = tcp:127.0.0.1:6640 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 09 16:02:01 compute-0 nova_compute[117331]: 2025-10-09 16:02:01.998 2 DEBUG oslo_service.backend._eventlet.service [None req-ab701943-9b40-45b6-962a-b912807dc6b0 - - - - - -] os_vif_ovs.ovsdb_interface     = native log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 09 16:02:01 compute-0 nova_compute[117331]: 2025-10-09 16:02:01.998 2 DEBUG oslo_service.backend._eventlet.service [None req-ab701943-9b40-45b6-962a-b912807dc6b0 - - - - - -] os_vif_ovs.per_port_bridge     = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 09 16:02:01 compute-0 nova_compute[117331]: 2025-10-09 16:02:01.999 2 DEBUG oslo_service.backend._eventlet.service [None req-ab701943-9b40-45b6-962a-b912807dc6b0 - - - - - -] privsep_osbrick.capabilities   = [21, 2] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 09 16:02:01 compute-0 nova_compute[117331]: 2025-10-09 16:02:01.999 2 DEBUG oslo_service.backend._eventlet.service [None req-ab701943-9b40-45b6-962a-b912807dc6b0 - - - - - -] privsep_osbrick.group          = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 09 16:02:01 compute-0 nova_compute[117331]: 2025-10-09 16:02:01.999 2 DEBUG oslo_service.backend._eventlet.service [None req-ab701943-9b40-45b6-962a-b912807dc6b0 - - - - - -] privsep_osbrick.helper_command = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 09 16:02:01 compute-0 nova_compute[117331]: 2025-10-09 16:02:01.999 2 DEBUG oslo_service.backend._eventlet.service [None req-ab701943-9b40-45b6-962a-b912807dc6b0 - - - - - -] privsep_osbrick.log_daemon_traceback = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 09 16:02:01 compute-0 nova_compute[117331]: 2025-10-09 16:02:01.999 2 DEBUG oslo_service.backend._eventlet.service [None req-ab701943-9b40-45b6-962a-b912807dc6b0 - - - - - -] privsep_osbrick.logger_name    = os_brick.privileged log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 09 16:02:01 compute-0 nova_compute[117331]: 2025-10-09 16:02:01.999 2 DEBUG oslo_service.backend._eventlet.service [None req-ab701943-9b40-45b6-962a-b912807dc6b0 - - - - - -] privsep_osbrick.thread_pool_size = 8 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 09 16:02:01 compute-0 nova_compute[117331]: 2025-10-09 16:02:01.999 2 DEBUG oslo_service.backend._eventlet.service [None req-ab701943-9b40-45b6-962a-b912807dc6b0 - - - - - -] privsep_osbrick.user           = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 09 16:02:01 compute-0 nova_compute[117331]: 2025-10-09 16:02:01.999 2 DEBUG oslo_service.backend._eventlet.service [None req-ab701943-9b40-45b6-962a-b912807dc6b0 - - - - - -] nova_sys_admin.capabilities    = [0, 1, 2, 3, 12, 21] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 09 16:02:02 compute-0 nova_compute[117331]: 2025-10-09 16:02:01.999 2 DEBUG oslo_service.backend._eventlet.service [None req-ab701943-9b40-45b6-962a-b912807dc6b0 - - - - - -] nova_sys_admin.group           = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 09 16:02:02 compute-0 nova_compute[117331]: 2025-10-09 16:02:02.000 2 DEBUG oslo_service.backend._eventlet.service [None req-ab701943-9b40-45b6-962a-b912807dc6b0 - - - - - -] nova_sys_admin.helper_command  = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 09 16:02:02 compute-0 nova_compute[117331]: 2025-10-09 16:02:02.000 2 DEBUG oslo_service.backend._eventlet.service [None req-ab701943-9b40-45b6-962a-b912807dc6b0 - - - - - -] nova_sys_admin.log_daemon_traceback = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 09 16:02:02 compute-0 nova_compute[117331]: 2025-10-09 16:02:02.000 2 DEBUG oslo_service.backend._eventlet.service [None req-ab701943-9b40-45b6-962a-b912807dc6b0 - - - - - -] nova_sys_admin.logger_name     = oslo_privsep.daemon log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 09 16:02:02 compute-0 nova_compute[117331]: 2025-10-09 16:02:02.000 2 DEBUG oslo_service.backend._eventlet.service [None req-ab701943-9b40-45b6-962a-b912807dc6b0 - - - - - -] nova_sys_admin.thread_pool_size = 8 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 09 16:02:02 compute-0 nova_compute[117331]: 2025-10-09 16:02:02.000 2 DEBUG oslo_service.backend._eventlet.service [None req-ab701943-9b40-45b6-962a-b912807dc6b0 - - - - - -] nova_sys_admin.user            = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 09 16:02:02 compute-0 nova_compute[117331]: 2025-10-09 16:02:02.000 2 DEBUG oslo_service.backend._eventlet.service [None req-ab701943-9b40-45b6-962a-b912807dc6b0 - - - - - -] ******************************************************************************** log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2828
Oct 09 16:02:02 compute-0 nova_compute[117331]: 2025-10-09 16:02:02.001 2 INFO nova.service [-] Starting compute node (version 32.1.0-0.20251008143712.076498e.el10)
Oct 09 16:02:02 compute-0 nova_compute[117331]: 2025-10-09 16:02:02.508 2 DEBUG nova.virt.libvirt.host [None req-d385ee3e-f913-4a9a-bb24-31bd7762fadb - - - - - -] Connecting to libvirt: qemu:///system _get_new_connection /usr/lib/python3.12/site-packages/nova/virt/libvirt/host.py:498
Oct 09 16:02:02 compute-0 systemd[1]: Starting libvirt QEMU daemon...
Oct 09 16:02:02 compute-0 systemd[1]: Started libvirt QEMU daemon.
Oct 09 16:02:02 compute-0 nova_compute[117331]: 2025-10-09 16:02:02.573 2 DEBUG nova.virt.libvirt.host [None req-d385ee3e-f913-4a9a-bb24-31bd7762fadb - - - - - -] Registering for lifecycle events <nova.virt.libvirt.host.Host object at 0x7f332341e810> _get_new_connection /usr/lib/python3.12/site-packages/nova/virt/libvirt/host.py:504
Oct 09 16:02:02 compute-0 nova_compute[117331]: libvirt:  error : internal error: could not initialize domain event timer
Oct 09 16:02:02 compute-0 nova_compute[117331]: 2025-10-09 16:02:02.574 2 WARNING nova.virt.libvirt.host [None req-d385ee3e-f913-4a9a-bb24-31bd7762fadb - - - - - -] URI qemu:///system does not support events: internal error: could not initialize domain event timer: libvirt.libvirtError: internal error: could not initialize domain event timer
Oct 09 16:02:02 compute-0 nova_compute[117331]: 2025-10-09 16:02:02.575 2 DEBUG nova.virt.libvirt.host [None req-d385ee3e-f913-4a9a-bb24-31bd7762fadb - - - - - -] Registering for connection events: <nova.virt.libvirt.host.Host object at 0x7f332341e810> _get_new_connection /usr/lib/python3.12/site-packages/nova/virt/libvirt/host.py:525
Oct 09 16:02:02 compute-0 nova_compute[117331]: 2025-10-09 16:02:02.576 2 DEBUG nova.virt.libvirt.host [None req-d385ee3e-f913-4a9a-bb24-31bd7762fadb - - - - - -] Starting native event thread _init_events /usr/lib/python3.12/site-packages/nova/virt/libvirt/host.py:484
Oct 09 16:02:02 compute-0 nova_compute[117331]: 2025-10-09 16:02:02.577 2 DEBUG nova.virt.libvirt.host [None req-d385ee3e-f913-4a9a-bb24-31bd7762fadb - - - - - -] Starting green dispatch thread _init_events /usr/lib/python3.12/site-packages/nova/virt/libvirt/host.py:490
Oct 09 16:02:02 compute-0 nova_compute[117331]: 2025-10-09 16:02:02.577 2 INFO nova.utils [None req-d385ee3e-f913-4a9a-bb24-31bd7762fadb - - - - - -] The default thread pool MainProcess.default is initialized
Oct 09 16:02:02 compute-0 nova_compute[117331]: 2025-10-09 16:02:02.577 2 DEBUG nova.virt.libvirt.host [None req-d385ee3e-f913-4a9a-bb24-31bd7762fadb - - - - - -] Starting connection event dispatch thread _init_events /usr/lib/python3.12/site-packages/nova/virt/libvirt/host.py:493
Oct 09 16:02:02 compute-0 nova_compute[117331]: 2025-10-09 16:02:02.578 2 INFO nova.virt.libvirt.driver [None req-d385ee3e-f913-4a9a-bb24-31bd7762fadb - - - - - -] Connection event '1' reason 'None'
Oct 09 16:02:03 compute-0 nova_compute[117331]: 2025-10-09 16:02:03.083 2 WARNING nova.virt.libvirt.driver [None req-d385ee3e-f913-4a9a-bb24-31bd7762fadb - - - - - -] Cannot update service status on host "compute-0.ctlplane.example.com" since it is not registered.: nova.exception_Remote.ComputeHostNotFound_Remote: Compute host compute-0.ctlplane.example.com could not be found.
Oct 09 16:02:03 compute-0 nova_compute[117331]: 2025-10-09 16:02:03.084 2 DEBUG nova.virt.libvirt.volume.mount [None req-d385ee3e-f913-4a9a-bb24-31bd7762fadb - - - - - -] Initialising _HostMountState generation 0 host_up /usr/lib/python3.12/site-packages/nova/virt/libvirt/volume/mount.py:130
Oct 09 16:02:03 compute-0 nova_compute[117331]: 2025-10-09 16:02:03.368 2 INFO nova.virt.libvirt.host [None req-d385ee3e-f913-4a9a-bb24-31bd7762fadb - - - - - -] Libvirt host capabilities <capabilities>
Oct 09 16:02:03 compute-0 nova_compute[117331]: 
Oct 09 16:02:03 compute-0 nova_compute[117331]:   <host>
Oct 09 16:02:03 compute-0 nova_compute[117331]:     <uuid>656dbd27-58bc-413d-a3c8-085dadc82fd6</uuid>
Oct 09 16:02:03 compute-0 nova_compute[117331]:     <cpu>
Oct 09 16:02:03 compute-0 nova_compute[117331]:       <arch>x86_64</arch>
Oct 09 16:02:03 compute-0 nova_compute[117331]:       <model>EPYC-Rome-v4</model>
Oct 09 16:02:03 compute-0 nova_compute[117331]:       <vendor>AMD</vendor>
Oct 09 16:02:03 compute-0 nova_compute[117331]:       <microcode version='16777317'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:       <signature family='23' model='49' stepping='0'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:       <topology sockets='8' dies='1' clusters='1' cores='1' threads='1'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:       <maxphysaddr mode='emulate' bits='40'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:       <feature name='x2apic'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:       <feature name='tsc-deadline'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:       <feature name='osxsave'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:       <feature name='hypervisor'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:       <feature name='tsc_adjust'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:       <feature name='spec-ctrl'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:       <feature name='stibp'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:       <feature name='arch-capabilities'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:       <feature name='ssbd'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:       <feature name='cmp_legacy'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:       <feature name='topoext'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:       <feature name='virt-ssbd'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:       <feature name='lbrv'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:       <feature name='tsc-scale'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:       <feature name='vmcb-clean'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:       <feature name='pause-filter'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:       <feature name='pfthreshold'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:       <feature name='svme-addr-chk'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:       <feature name='rdctl-no'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:       <feature name='skip-l1dfl-vmentry'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:       <feature name='mds-no'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:       <feature name='pschange-mc-no'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:       <pages unit='KiB' size='4'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:       <pages unit='KiB' size='2048'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:       <pages unit='KiB' size='1048576'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:     </cpu>
Oct 09 16:02:03 compute-0 nova_compute[117331]:     <power_management>
Oct 09 16:02:03 compute-0 nova_compute[117331]:       <suspend_mem/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:       <suspend_disk/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:       <suspend_hybrid/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:     </power_management>
Oct 09 16:02:03 compute-0 nova_compute[117331]:     <iommu support='no'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:     <migration_features>
Oct 09 16:02:03 compute-0 nova_compute[117331]:       <live/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:       <uri_transports>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <uri_transport>tcp</uri_transport>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <uri_transport>rdma</uri_transport>
Oct 09 16:02:03 compute-0 nova_compute[117331]:       </uri_transports>
Oct 09 16:02:03 compute-0 nova_compute[117331]:     </migration_features>
Oct 09 16:02:03 compute-0 nova_compute[117331]:     <topology>
Oct 09 16:02:03 compute-0 nova_compute[117331]:       <cells num='1'>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <cell id='0'>
Oct 09 16:02:03 compute-0 nova_compute[117331]:           <memory unit='KiB'>7864104</memory>
Oct 09 16:02:03 compute-0 nova_compute[117331]:           <pages unit='KiB' size='4'>1966026</pages>
Oct 09 16:02:03 compute-0 nova_compute[117331]:           <pages unit='KiB' size='2048'>0</pages>
Oct 09 16:02:03 compute-0 nova_compute[117331]:           <pages unit='KiB' size='1048576'>0</pages>
Oct 09 16:02:03 compute-0 nova_compute[117331]:           <distances>
Oct 09 16:02:03 compute-0 nova_compute[117331]:             <sibling id='0' value='10'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:           </distances>
Oct 09 16:02:03 compute-0 nova_compute[117331]:           <cpus num='8'>
Oct 09 16:02:03 compute-0 nova_compute[117331]:             <cpu id='0' socket_id='0' die_id='0' cluster_id='65535' core_id='0' siblings='0'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:             <cpu id='1' socket_id='1' die_id='1' cluster_id='65535' core_id='0' siblings='1'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:             <cpu id='2' socket_id='2' die_id='2' cluster_id='65535' core_id='0' siblings='2'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:             <cpu id='3' socket_id='3' die_id='3' cluster_id='65535' core_id='0' siblings='3'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:             <cpu id='4' socket_id='4' die_id='4' cluster_id='65535' core_id='0' siblings='4'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:             <cpu id='5' socket_id='5' die_id='5' cluster_id='65535' core_id='0' siblings='5'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:             <cpu id='6' socket_id='6' die_id='6' cluster_id='65535' core_id='0' siblings='6'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:             <cpu id='7' socket_id='7' die_id='7' cluster_id='65535' core_id='0' siblings='7'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:           </cpus>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         </cell>
Oct 09 16:02:03 compute-0 nova_compute[117331]:       </cells>
Oct 09 16:02:03 compute-0 nova_compute[117331]:     </topology>
Oct 09 16:02:03 compute-0 nova_compute[117331]:     <cache>
Oct 09 16:02:03 compute-0 nova_compute[117331]:       <bank id='0' level='2' type='both' size='512' unit='KiB' cpus='0'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:       <bank id='1' level='2' type='both' size='512' unit='KiB' cpus='1'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:       <bank id='2' level='2' type='both' size='512' unit='KiB' cpus='2'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:       <bank id='3' level='2' type='both' size='512' unit='KiB' cpus='3'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:       <bank id='4' level='2' type='both' size='512' unit='KiB' cpus='4'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:       <bank id='5' level='2' type='both' size='512' unit='KiB' cpus='5'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:       <bank id='6' level='2' type='both' size='512' unit='KiB' cpus='6'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:       <bank id='7' level='2' type='both' size='512' unit='KiB' cpus='7'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:       <bank id='0' level='3' type='both' size='16' unit='MiB' cpus='0'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:       <bank id='1' level='3' type='both' size='16' unit='MiB' cpus='1'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:       <bank id='2' level='3' type='both' size='16' unit='MiB' cpus='2'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:       <bank id='3' level='3' type='both' size='16' unit='MiB' cpus='3'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:       <bank id='4' level='3' type='both' size='16' unit='MiB' cpus='4'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:       <bank id='5' level='3' type='both' size='16' unit='MiB' cpus='5'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:       <bank id='6' level='3' type='both' size='16' unit='MiB' cpus='6'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:       <bank id='7' level='3' type='both' size='16' unit='MiB' cpus='7'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:     </cache>
Oct 09 16:02:03 compute-0 nova_compute[117331]:     <secmodel>
Oct 09 16:02:03 compute-0 nova_compute[117331]:       <model>selinux</model>
Oct 09 16:02:03 compute-0 nova_compute[117331]:       <doi>0</doi>
Oct 09 16:02:03 compute-0 nova_compute[117331]:       <baselabel type='kvm'>system_u:system_r:svirt_t:s0</baselabel>
Oct 09 16:02:03 compute-0 nova_compute[117331]:       <baselabel type='qemu'>system_u:system_r:svirt_tcg_t:s0</baselabel>
Oct 09 16:02:03 compute-0 nova_compute[117331]:     </secmodel>
Oct 09 16:02:03 compute-0 nova_compute[117331]:     <secmodel>
Oct 09 16:02:03 compute-0 nova_compute[117331]:       <model>dac</model>
Oct 09 16:02:03 compute-0 nova_compute[117331]:       <doi>0</doi>
Oct 09 16:02:03 compute-0 nova_compute[117331]:       <baselabel type='kvm'>+107:+107</baselabel>
Oct 09 16:02:03 compute-0 nova_compute[117331]:       <baselabel type='qemu'>+107:+107</baselabel>
Oct 09 16:02:03 compute-0 nova_compute[117331]:     </secmodel>
Oct 09 16:02:03 compute-0 nova_compute[117331]:   </host>
Oct 09 16:02:03 compute-0 nova_compute[117331]: 
Oct 09 16:02:03 compute-0 nova_compute[117331]:   <guest>
Oct 09 16:02:03 compute-0 nova_compute[117331]:     <os_type>hvm</os_type>
Oct 09 16:02:03 compute-0 nova_compute[117331]:     <arch name='i686'>
Oct 09 16:02:03 compute-0 nova_compute[117331]:       <wordsize>32</wordsize>
Oct 09 16:02:03 compute-0 nova_compute[117331]:       <emulator>/usr/libexec/qemu-kvm</emulator>
Oct 09 16:02:03 compute-0 nova_compute[117331]:       <machine maxCpus='240' deprecated='yes'>pc-i440fx-rhel7.6.0</machine>
Oct 09 16:02:03 compute-0 nova_compute[117331]:       <machine canonical='pc-i440fx-rhel7.6.0' maxCpus='240' deprecated='yes'>pc</machine>
Oct 09 16:02:03 compute-0 nova_compute[117331]:       <machine maxCpus='4096'>pc-q35-rhel9.6.0</machine>
Oct 09 16:02:03 compute-0 nova_compute[117331]:       <machine canonical='pc-q35-rhel9.6.0' maxCpus='4096'>q35</machine>
Oct 09 16:02:03 compute-0 nova_compute[117331]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.6.0</machine>
Oct 09 16:02:03 compute-0 nova_compute[117331]:       <machine maxCpus='710'>pc-q35-rhel9.4.0</machine>
Oct 09 16:02:03 compute-0 nova_compute[117331]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.5.0</machine>
Oct 09 16:02:03 compute-0 nova_compute[117331]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.3.0</machine>
Oct 09 16:02:03 compute-0 nova_compute[117331]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel7.6.0</machine>
Oct 09 16:02:03 compute-0 nova_compute[117331]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.4.0</machine>
Oct 09 16:02:03 compute-0 nova_compute[117331]:       <machine maxCpus='710'>pc-q35-rhel9.2.0</machine>
Oct 09 16:02:03 compute-0 nova_compute[117331]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.2.0</machine>
Oct 09 16:02:03 compute-0 nova_compute[117331]:       <machine maxCpus='710'>pc-q35-rhel9.0.0</machine>
Oct 09 16:02:03 compute-0 nova_compute[117331]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.0.0</machine>
Oct 09 16:02:03 compute-0 nova_compute[117331]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.1.0</machine>
Oct 09 16:02:03 compute-0 nova_compute[117331]:       <domain type='qemu'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:       <domain type='kvm'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:     </arch>
Oct 09 16:02:03 compute-0 nova_compute[117331]:     <features>
Oct 09 16:02:03 compute-0 nova_compute[117331]:       <pae/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:       <nonpae/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:       <acpi default='on' toggle='yes'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:       <apic default='on' toggle='no'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:       <cpuselection/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:       <deviceboot/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:       <disksnapshot default='on' toggle='no'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:       <externalSnapshot/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:     </features>
Oct 09 16:02:03 compute-0 nova_compute[117331]:   </guest>
Oct 09 16:02:03 compute-0 nova_compute[117331]: 
Oct 09 16:02:03 compute-0 nova_compute[117331]:   <guest>
Oct 09 16:02:03 compute-0 nova_compute[117331]:     <os_type>hvm</os_type>
Oct 09 16:02:03 compute-0 nova_compute[117331]:     <arch name='x86_64'>
Oct 09 16:02:03 compute-0 nova_compute[117331]:       <wordsize>64</wordsize>
Oct 09 16:02:03 compute-0 nova_compute[117331]:       <emulator>/usr/libexec/qemu-kvm</emulator>
Oct 09 16:02:03 compute-0 nova_compute[117331]:       <machine maxCpus='240' deprecated='yes'>pc-i440fx-rhel7.6.0</machine>
Oct 09 16:02:03 compute-0 nova_compute[117331]:       <machine canonical='pc-i440fx-rhel7.6.0' maxCpus='240' deprecated='yes'>pc</machine>
Oct 09 16:02:03 compute-0 nova_compute[117331]:       <machine maxCpus='4096'>pc-q35-rhel9.6.0</machine>
Oct 09 16:02:03 compute-0 nova_compute[117331]:       <machine canonical='pc-q35-rhel9.6.0' maxCpus='4096'>q35</machine>
Oct 09 16:02:03 compute-0 nova_compute[117331]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.6.0</machine>
Oct 09 16:02:03 compute-0 nova_compute[117331]:       <machine maxCpus='710'>pc-q35-rhel9.4.0</machine>
Oct 09 16:02:03 compute-0 nova_compute[117331]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.5.0</machine>
Oct 09 16:02:03 compute-0 nova_compute[117331]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.3.0</machine>
Oct 09 16:02:03 compute-0 nova_compute[117331]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel7.6.0</machine>
Oct 09 16:02:03 compute-0 nova_compute[117331]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.4.0</machine>
Oct 09 16:02:03 compute-0 nova_compute[117331]:       <machine maxCpus='710'>pc-q35-rhel9.2.0</machine>
Oct 09 16:02:03 compute-0 nova_compute[117331]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.2.0</machine>
Oct 09 16:02:03 compute-0 nova_compute[117331]:       <machine maxCpus='710'>pc-q35-rhel9.0.0</machine>
Oct 09 16:02:03 compute-0 nova_compute[117331]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.0.0</machine>
Oct 09 16:02:03 compute-0 nova_compute[117331]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.1.0</machine>
Oct 09 16:02:03 compute-0 nova_compute[117331]:       <domain type='qemu'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:       <domain type='kvm'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:     </arch>
Oct 09 16:02:03 compute-0 nova_compute[117331]:     <features>
Oct 09 16:02:03 compute-0 nova_compute[117331]:       <acpi default='on' toggle='yes'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:       <apic default='on' toggle='no'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:       <cpuselection/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:       <deviceboot/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:       <disksnapshot default='on' toggle='no'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:       <externalSnapshot/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:     </features>
Oct 09 16:02:03 compute-0 nova_compute[117331]:   </guest>
Oct 09 16:02:03 compute-0 nova_compute[117331]: 
Oct 09 16:02:03 compute-0 nova_compute[117331]: </capabilities>
Oct 09 16:02:03 compute-0 nova_compute[117331]: 
Oct 09 16:02:03 compute-0 nova_compute[117331]: 2025-10-09 16:02:03.374 2 DEBUG nova.virt.libvirt.host [None req-d385ee3e-f913-4a9a-bb24-31bd7762fadb - - - - - -] Getting domain capabilities for i686 via machine types: {'pc', 'q35'} _get_machine_types /usr/lib/python3.12/site-packages/nova/virt/libvirt/host.py:944
Oct 09 16:02:03 compute-0 nova_compute[117331]: 2025-10-09 16:02:03.394 2 DEBUG nova.virt.libvirt.host [None req-d385ee3e-f913-4a9a-bb24-31bd7762fadb - - - - - -] Libvirt host hypervisor capabilities for arch=i686 and machine_type=pc:
Oct 09 16:02:03 compute-0 nova_compute[117331]: <domainCapabilities>
Oct 09 16:02:03 compute-0 nova_compute[117331]:   <path>/usr/libexec/qemu-kvm</path>
Oct 09 16:02:03 compute-0 nova_compute[117331]:   <domain>kvm</domain>
Oct 09 16:02:03 compute-0 nova_compute[117331]:   <machine>pc-i440fx-rhel7.6.0</machine>
Oct 09 16:02:03 compute-0 nova_compute[117331]:   <arch>i686</arch>
Oct 09 16:02:03 compute-0 nova_compute[117331]:   <vcpu max='240'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:   <iothreads supported='yes'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:   <os supported='yes'>
Oct 09 16:02:03 compute-0 nova_compute[117331]:     <enum name='firmware'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:     <loader supported='yes'>
Oct 09 16:02:03 compute-0 nova_compute[117331]:       <value>/usr/share/OVMF/OVMF_CODE.secboot.fd</value>
Oct 09 16:02:03 compute-0 nova_compute[117331]:       <enum name='type'>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <value>rom</value>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <value>pflash</value>
Oct 09 16:02:03 compute-0 nova_compute[117331]:       </enum>
Oct 09 16:02:03 compute-0 nova_compute[117331]:       <enum name='readonly'>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <value>yes</value>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <value>no</value>
Oct 09 16:02:03 compute-0 nova_compute[117331]:       </enum>
Oct 09 16:02:03 compute-0 nova_compute[117331]:       <enum name='secure'>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <value>no</value>
Oct 09 16:02:03 compute-0 nova_compute[117331]:       </enum>
Oct 09 16:02:03 compute-0 nova_compute[117331]:     </loader>
Oct 09 16:02:03 compute-0 nova_compute[117331]:   </os>
Oct 09 16:02:03 compute-0 nova_compute[117331]:   <cpu>
Oct 09 16:02:03 compute-0 nova_compute[117331]:     <mode name='host-passthrough' supported='yes'>
Oct 09 16:02:03 compute-0 nova_compute[117331]:       <enum name='hostPassthroughMigratable'>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <value>on</value>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <value>off</value>
Oct 09 16:02:03 compute-0 nova_compute[117331]:       </enum>
Oct 09 16:02:03 compute-0 nova_compute[117331]:     </mode>
Oct 09 16:02:03 compute-0 nova_compute[117331]:     <mode name='maximum' supported='yes'>
Oct 09 16:02:03 compute-0 nova_compute[117331]:       <enum name='maximumMigratable'>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <value>on</value>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <value>off</value>
Oct 09 16:02:03 compute-0 nova_compute[117331]:       </enum>
Oct 09 16:02:03 compute-0 nova_compute[117331]:     </mode>
Oct 09 16:02:03 compute-0 nova_compute[117331]:     <mode name='host-model' supported='yes'>
Oct 09 16:02:03 compute-0 nova_compute[117331]:       <model fallback='forbid'>EPYC-Rome</model>
Oct 09 16:02:03 compute-0 nova_compute[117331]:       <vendor>AMD</vendor>
Oct 09 16:02:03 compute-0 nova_compute[117331]:       <maxphysaddr mode='passthrough' limit='40'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:       <feature policy='require' name='x2apic'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:       <feature policy='require' name='tsc-deadline'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:       <feature policy='require' name='hypervisor'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:       <feature policy='require' name='tsc_adjust'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:       <feature policy='require' name='spec-ctrl'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:       <feature policy='require' name='stibp'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:       <feature policy='require' name='arch-capabilities'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:       <feature policy='require' name='ssbd'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:       <feature policy='require' name='cmp_legacy'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:       <feature policy='require' name='overflow-recov'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:       <feature policy='require' name='succor'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:       <feature policy='require' name='ibrs'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:       <feature policy='require' name='amd-ssbd'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:       <feature policy='require' name='virt-ssbd'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:       <feature policy='require' name='lbrv'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:       <feature policy='require' name='tsc-scale'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:       <feature policy='require' name='vmcb-clean'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:       <feature policy='require' name='flushbyasid'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:       <feature policy='require' name='pause-filter'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:       <feature policy='require' name='pfthreshold'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:       <feature policy='require' name='svme-addr-chk'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:       <feature policy='require' name='lfence-always-serializing'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:       <feature policy='require' name='rdctl-no'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:       <feature policy='require' name='skip-l1dfl-vmentry'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:       <feature policy='require' name='mds-no'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:       <feature policy='require' name='pschange-mc-no'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:       <feature policy='require' name='gds-no'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:       <feature policy='require' name='rfds-no'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:       <feature policy='disable' name='xsaves'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:     </mode>
Oct 09 16:02:03 compute-0 nova_compute[117331]:     <mode name='custom' supported='yes'>
Oct 09 16:02:03 compute-0 nova_compute[117331]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='486-v1'>486</model>
Oct 09 16:02:03 compute-0 nova_compute[117331]:       <model usable='yes' deprecated='yes' vendor='unknown'>486-v1</model>
Oct 09 16:02:03 compute-0 nova_compute[117331]:       <model usable='no' vendor='Intel' canonical='Broadwell-v1'>Broadwell</model>
Oct 09 16:02:03 compute-0 nova_compute[117331]:       <blockers model='Broadwell'>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='erms'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='hle'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='invpcid'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='pcid'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='rtm'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:       </blockers>
Oct 09 16:02:03 compute-0 nova_compute[117331]:       <model usable='no' vendor='Intel' canonical='Broadwell-v3'>Broadwell-IBRS</model>
Oct 09 16:02:03 compute-0 nova_compute[117331]:       <blockers model='Broadwell-IBRS'>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='erms'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='hle'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='invpcid'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='pcid'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='rtm'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:       </blockers>
Oct 09 16:02:03 compute-0 nova_compute[117331]:       <model usable='no' vendor='Intel' canonical='Broadwell-v2'>Broadwell-noTSX</model>
Oct 09 16:02:03 compute-0 nova_compute[117331]:       <blockers model='Broadwell-noTSX'>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='erms'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='invpcid'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='pcid'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:       </blockers>
Oct 09 16:02:03 compute-0 nova_compute[117331]:       <model usable='no' vendor='Intel' canonical='Broadwell-v4'>Broadwell-noTSX-IBRS</model>
Oct 09 16:02:03 compute-0 nova_compute[117331]:       <blockers model='Broadwell-noTSX-IBRS'>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='erms'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='invpcid'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='pcid'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:       </blockers>
Oct 09 16:02:03 compute-0 nova_compute[117331]:       <model usable='no' vendor='Intel'>Broadwell-v1</model>
Oct 09 16:02:03 compute-0 nova_compute[117331]:       <blockers model='Broadwell-v1'>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='erms'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='hle'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='invpcid'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='pcid'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='rtm'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:       </blockers>
Oct 09 16:02:03 compute-0 nova_compute[117331]:       <model usable='no' vendor='Intel'>Broadwell-v2</model>
Oct 09 16:02:03 compute-0 nova_compute[117331]:       <blockers model='Broadwell-v2'>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='erms'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='invpcid'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='pcid'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:       </blockers>
Oct 09 16:02:03 compute-0 nova_compute[117331]:       <model usable='no' vendor='Intel'>Broadwell-v3</model>
Oct 09 16:02:03 compute-0 nova_compute[117331]:       <blockers model='Broadwell-v3'>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='erms'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='hle'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='invpcid'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='pcid'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='rtm'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:       </blockers>
Oct 09 16:02:03 compute-0 nova_compute[117331]:       <model usable='no' vendor='Intel'>Broadwell-v4</model>
Oct 09 16:02:03 compute-0 nova_compute[117331]:       <blockers model='Broadwell-v4'>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='erms'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='invpcid'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='pcid'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:       </blockers>
Oct 09 16:02:03 compute-0 nova_compute[117331]:       <model usable='no' vendor='Intel' canonical='Cascadelake-Server-v1'>Cascadelake-Server</model>
Oct 09 16:02:03 compute-0 nova_compute[117331]:       <blockers model='Cascadelake-Server'>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='avx512bw'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='avx512cd'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='avx512dq'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='avx512f'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='avx512vl'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='avx512vnni'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='erms'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='hle'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='invpcid'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='pcid'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='pku'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='rtm'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:       </blockers>
Oct 09 16:02:03 compute-0 nova_compute[117331]:       <model usable='no' vendor='Intel' canonical='Cascadelake-Server-v3'>Cascadelake-Server-noTSX</model>
Oct 09 16:02:03 compute-0 nova_compute[117331]:       <blockers model='Cascadelake-Server-noTSX'>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='avx512bw'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='avx512cd'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='avx512dq'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='avx512f'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='avx512vl'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='avx512vnni'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='erms'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='ibrs-all'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='invpcid'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='pcid'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='pku'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:       </blockers>
Oct 09 16:02:03 compute-0 nova_compute[117331]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v1</model>
Oct 09 16:02:03 compute-0 nova_compute[117331]:       <blockers model='Cascadelake-Server-v1'>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='avx512bw'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='avx512cd'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='avx512dq'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='avx512f'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='avx512vl'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='avx512vnni'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='erms'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='hle'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='invpcid'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='pcid'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='pku'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='rtm'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:       </blockers>
Oct 09 16:02:03 compute-0 nova_compute[117331]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v2</model>
Oct 09 16:02:03 compute-0 nova_compute[117331]:       <blockers model='Cascadelake-Server-v2'>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='avx512bw'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='avx512cd'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='avx512dq'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='avx512f'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='avx512vl'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='avx512vnni'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='erms'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='hle'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='ibrs-all'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='invpcid'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='pcid'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='pku'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='rtm'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:       </blockers>
Oct 09 16:02:03 compute-0 nova_compute[117331]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v3</model>
Oct 09 16:02:03 compute-0 nova_compute[117331]:       <blockers model='Cascadelake-Server-v3'>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='avx512bw'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='avx512cd'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='avx512dq'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='avx512f'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='avx512vl'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='avx512vnni'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='erms'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='ibrs-all'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='invpcid'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='pcid'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='pku'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:       </blockers>
Oct 09 16:02:03 compute-0 nova_compute[117331]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v4</model>
Oct 09 16:02:03 compute-0 nova_compute[117331]:       <blockers model='Cascadelake-Server-v4'>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='avx512bw'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='avx512cd'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='avx512dq'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='avx512f'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='avx512vl'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='avx512vnni'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='erms'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='ibrs-all'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='invpcid'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='pcid'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='pku'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:       </blockers>
Oct 09 16:02:03 compute-0 nova_compute[117331]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v5</model>
Oct 09 16:02:03 compute-0 nova_compute[117331]:       <blockers model='Cascadelake-Server-v5'>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='avx512bw'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='avx512cd'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='avx512dq'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='avx512f'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='avx512vl'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='avx512vnni'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='erms'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='ibrs-all'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='invpcid'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='pcid'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='pku'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='xsaves'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:       </blockers>
Oct 09 16:02:03 compute-0 nova_compute[117331]:       <model usable='yes' deprecated='yes' vendor='Intel' canonical='Conroe-v1'>Conroe</model>
Oct 09 16:02:03 compute-0 nova_compute[117331]:       <model usable='yes' deprecated='yes' vendor='Intel'>Conroe-v1</model>
Oct 09 16:02:03 compute-0 nova_compute[117331]:       <model usable='no' vendor='Intel' canonical='Cooperlake-v1'>Cooperlake</model>
Oct 09 16:02:03 compute-0 nova_compute[117331]:       <blockers model='Cooperlake'>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='avx512-bf16'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='avx512bw'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='avx512cd'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='avx512dq'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='avx512f'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='avx512vl'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='avx512vnni'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='erms'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='hle'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='ibrs-all'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='invpcid'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='pcid'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='pku'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='rtm'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='taa-no'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:       </blockers>
Oct 09 16:02:03 compute-0 nova_compute[117331]:       <model usable='no' vendor='Intel'>Cooperlake-v1</model>
Oct 09 16:02:03 compute-0 nova_compute[117331]:       <blockers model='Cooperlake-v1'>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='avx512-bf16'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='avx512bw'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='avx512cd'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='avx512dq'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='avx512f'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='avx512vl'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='avx512vnni'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='erms'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='hle'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='ibrs-all'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='invpcid'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='pcid'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='pku'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='rtm'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='taa-no'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:       </blockers>
Oct 09 16:02:03 compute-0 nova_compute[117331]:       <model usable='no' vendor='Intel'>Cooperlake-v2</model>
Oct 09 16:02:03 compute-0 nova_compute[117331]:       <blockers model='Cooperlake-v2'>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='avx512-bf16'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='avx512bw'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='avx512cd'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='avx512dq'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='avx512f'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='avx512vl'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='avx512vnni'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='erms'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='hle'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='ibrs-all'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='invpcid'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='pcid'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='pku'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='rtm'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='taa-no'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='xsaves'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:       </blockers>
Oct 09 16:02:03 compute-0 nova_compute[117331]:       <model usable='no' vendor='Intel' canonical='Denverton-v1'>Denverton</model>
Oct 09 16:02:03 compute-0 nova_compute[117331]:       <blockers model='Denverton'>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='erms'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='mpx'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:       </blockers>
Oct 09 16:02:03 compute-0 nova_compute[117331]:       <model usable='no' vendor='Intel'>Denverton-v1</model>
Oct 09 16:02:03 compute-0 nova_compute[117331]:       <blockers model='Denverton-v1'>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='erms'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='mpx'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:       </blockers>
Oct 09 16:02:03 compute-0 nova_compute[117331]:       <model usable='no' vendor='Intel'>Denverton-v2</model>
Oct 09 16:02:03 compute-0 nova_compute[117331]:       <blockers model='Denverton-v2'>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='erms'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:       </blockers>
Oct 09 16:02:03 compute-0 nova_compute[117331]:       <model usable='no' vendor='Intel'>Denverton-v3</model>
Oct 09 16:02:03 compute-0 nova_compute[117331]:       <blockers model='Denverton-v3'>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='erms'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='xsaves'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:       </blockers>
Oct 09 16:02:03 compute-0 nova_compute[117331]:       <model usable='yes' vendor='Hygon' canonical='Dhyana-v1'>Dhyana</model>
Oct 09 16:02:03 compute-0 nova_compute[117331]:       <model usable='yes' vendor='Hygon'>Dhyana-v1</model>
Oct 09 16:02:03 compute-0 nova_compute[117331]:       <model usable='no' vendor='Hygon'>Dhyana-v2</model>
Oct 09 16:02:03 compute-0 nova_compute[117331]:       <blockers model='Dhyana-v2'>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='xsaves'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:       </blockers>
Oct 09 16:02:03 compute-0 nova_compute[117331]:       <model usable='yes' vendor='AMD' canonical='EPYC-v1'>EPYC</model>
Oct 09 16:02:03 compute-0 nova_compute[117331]:       <model usable='no' vendor='AMD' canonical='EPYC-Genoa-v1'>EPYC-Genoa</model>
Oct 09 16:02:03 compute-0 nova_compute[117331]:       <blockers model='EPYC-Genoa'>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='amd-psfd'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='auto-ibrs'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='avx512-bf16'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='avx512-vpopcntdq'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='avx512bitalg'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='avx512bw'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='avx512cd'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='avx512dq'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='avx512f'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='avx512ifma'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='avx512vbmi'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='avx512vbmi2'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='avx512vl'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='avx512vnni'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='erms'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='fsrm'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='gfni'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='invpcid'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='la57'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='no-nested-data-bp'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='null-sel-clr-base'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='pcid'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='pku'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='stibp-always-on'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='vaes'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='vpclmulqdq'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='xsaves'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:       </blockers>
Oct 09 16:02:03 compute-0 nova_compute[117331]:       <model usable='no' vendor='AMD'>EPYC-Genoa-v1</model>
Oct 09 16:02:03 compute-0 nova_compute[117331]:       <blockers model='EPYC-Genoa-v1'>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='amd-psfd'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='auto-ibrs'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='avx512-bf16'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='avx512-vpopcntdq'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='avx512bitalg'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='avx512bw'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='avx512cd'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='avx512dq'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='avx512f'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='avx512ifma'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='avx512vbmi'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='avx512vbmi2'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='avx512vl'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='avx512vnni'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='erms'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='fsrm'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='gfni'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='invpcid'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='la57'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='no-nested-data-bp'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='null-sel-clr-base'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='pcid'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='pku'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='stibp-always-on'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='vaes'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='vpclmulqdq'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='xsaves'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:       </blockers>
Oct 09 16:02:03 compute-0 nova_compute[117331]:       <model usable='yes' vendor='AMD' canonical='EPYC-v2'>EPYC-IBPB</model>
Oct 09 16:02:03 compute-0 nova_compute[117331]:       <model usable='no' vendor='AMD' canonical='EPYC-Milan-v1'>EPYC-Milan</model>
Oct 09 16:02:03 compute-0 nova_compute[117331]:       <blockers model='EPYC-Milan'>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='erms'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='fsrm'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='invpcid'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='pcid'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='pku'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='xsaves'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:       </blockers>
Oct 09 16:02:03 compute-0 nova_compute[117331]:       <model usable='no' vendor='AMD'>EPYC-Milan-v1</model>
Oct 09 16:02:03 compute-0 nova_compute[117331]:       <blockers model='EPYC-Milan-v1'>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='erms'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='fsrm'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='invpcid'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='pcid'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='pku'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='xsaves'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:       </blockers>
Oct 09 16:02:03 compute-0 nova_compute[117331]:       <model usable='no' vendor='AMD'>EPYC-Milan-v2</model>
Oct 09 16:02:03 compute-0 nova_compute[117331]:       <blockers model='EPYC-Milan-v2'>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='amd-psfd'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='erms'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='fsrm'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='invpcid'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='no-nested-data-bp'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='null-sel-clr-base'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='pcid'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='pku'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='stibp-always-on'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='vaes'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='vpclmulqdq'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='xsaves'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:       </blockers>
Oct 09 16:02:03 compute-0 nova_compute[117331]:       <model usable='no' vendor='AMD' canonical='EPYC-Rome-v1'>EPYC-Rome</model>
Oct 09 16:02:03 compute-0 nova_compute[117331]:       <blockers model='EPYC-Rome'>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='xsaves'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:       </blockers>
Oct 09 16:02:03 compute-0 nova_compute[117331]:       <model usable='no' vendor='AMD'>EPYC-Rome-v1</model>
Oct 09 16:02:03 compute-0 nova_compute[117331]:       <blockers model='EPYC-Rome-v1'>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='xsaves'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:       </blockers>
Oct 09 16:02:03 compute-0 nova_compute[117331]:       <model usable='no' vendor='AMD'>EPYC-Rome-v2</model>
Oct 09 16:02:03 compute-0 nova_compute[117331]:       <blockers model='EPYC-Rome-v2'>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='xsaves'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:       </blockers>
Oct 09 16:02:03 compute-0 nova_compute[117331]:       <model usable='no' vendor='AMD'>EPYC-Rome-v3</model>
Oct 09 16:02:03 compute-0 nova_compute[117331]:       <blockers model='EPYC-Rome-v3'>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='xsaves'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:       </blockers>
Oct 09 16:02:03 compute-0 nova_compute[117331]:       <model usable='yes' vendor='AMD'>EPYC-Rome-v4</model>
Oct 09 16:02:03 compute-0 nova_compute[117331]:       <model usable='yes' vendor='AMD'>EPYC-v1</model>
Oct 09 16:02:03 compute-0 nova_compute[117331]:       <model usable='yes' vendor='AMD'>EPYC-v2</model>
Oct 09 16:02:03 compute-0 nova_compute[117331]:       <model usable='no' vendor='AMD'>EPYC-v3</model>
Oct 09 16:02:03 compute-0 nova_compute[117331]:       <blockers model='EPYC-v3'>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='xsaves'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:       </blockers>
Oct 09 16:02:03 compute-0 nova_compute[117331]:       <model usable='no' vendor='AMD'>EPYC-v4</model>
Oct 09 16:02:03 compute-0 nova_compute[117331]:       <blockers model='EPYC-v4'>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='xsaves'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:       </blockers>
Oct 09 16:02:03 compute-0 nova_compute[117331]:       <model usable='no' vendor='Intel' canonical='GraniteRapids-v1'>GraniteRapids</model>
Oct 09 16:02:03 compute-0 nova_compute[117331]:       <blockers model='GraniteRapids'>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='amx-bf16'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='amx-fp16'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='amx-int8'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='amx-tile'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='avx-vnni'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='avx512-bf16'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='avx512-fp16'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='avx512-vpopcntdq'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='avx512bitalg'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='avx512bw'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='avx512cd'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='avx512dq'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='avx512f'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='avx512ifma'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='avx512vbmi'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='avx512vbmi2'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='avx512vl'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='avx512vnni'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='bus-lock-detect'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='erms'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='fbsdp-no'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='fsrc'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='fsrm'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='fsrs'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='fzrm'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='gfni'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='hle'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='ibrs-all'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='invpcid'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='la57'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='mcdt-no'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='pbrsb-no'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='pcid'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='pku'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='prefetchiti'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='psdp-no'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='rtm'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='sbdr-ssdp-no'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='serialize'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='taa-no'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='tsx-ldtrk'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='vaes'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='vpclmulqdq'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='xfd'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='xsaves'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:       </blockers>
Oct 09 16:02:03 compute-0 nova_compute[117331]:       <model usable='no' vendor='Intel'>GraniteRapids-v1</model>
Oct 09 16:02:03 compute-0 nova_compute[117331]:       <blockers model='GraniteRapids-v1'>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='amx-bf16'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='amx-fp16'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='amx-int8'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='amx-tile'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='avx-vnni'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='avx512-bf16'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='avx512-fp16'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='avx512-vpopcntdq'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='avx512bitalg'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='avx512bw'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='avx512cd'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='avx512dq'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='avx512f'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='avx512ifma'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='avx512vbmi'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='avx512vbmi2'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='avx512vl'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='avx512vnni'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='bus-lock-detect'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='erms'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='fbsdp-no'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='fsrc'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='fsrm'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='fsrs'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='fzrm'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='gfni'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='hle'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='ibrs-all'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='invpcid'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='la57'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='mcdt-no'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='pbrsb-no'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='pcid'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='pku'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='prefetchiti'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='psdp-no'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='rtm'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='sbdr-ssdp-no'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='serialize'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='taa-no'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='tsx-ldtrk'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='vaes'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='vpclmulqdq'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='xfd'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='xsaves'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:       </blockers>
Oct 09 16:02:03 compute-0 nova_compute[117331]:       <model usable='no' vendor='Intel'>GraniteRapids-v2</model>
Oct 09 16:02:03 compute-0 nova_compute[117331]:       <blockers model='GraniteRapids-v2'>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='amx-bf16'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='amx-fp16'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='amx-int8'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='amx-tile'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='avx-vnni'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='avx10'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='avx10-128'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='avx10-256'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='avx10-512'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='avx512-bf16'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='avx512-fp16'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='avx512-vpopcntdq'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='avx512bitalg'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='avx512bw'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='avx512cd'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='avx512dq'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='avx512f'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='avx512ifma'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='avx512vbmi'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='avx512vbmi2'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='avx512vl'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='avx512vnni'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='bus-lock-detect'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='cldemote'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='erms'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='fbsdp-no'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='fsrc'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='fsrm'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='fsrs'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='fzrm'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='gfni'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='hle'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='ibrs-all'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='invpcid'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='la57'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='mcdt-no'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='movdir64b'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='movdiri'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='pbrsb-no'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='pcid'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='pku'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='prefetchiti'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='psdp-no'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='rtm'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='sbdr-ssdp-no'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='serialize'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='ss'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='taa-no'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='tsx-ldtrk'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='vaes'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='vpclmulqdq'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='xfd'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='xsaves'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:       </blockers>
Oct 09 16:02:03 compute-0 nova_compute[117331]:       <model usable='no' vendor='Intel' canonical='Haswell-v1'>Haswell</model>
Oct 09 16:02:03 compute-0 nova_compute[117331]:       <blockers model='Haswell'>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='erms'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='hle'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='invpcid'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='pcid'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='rtm'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:       </blockers>
Oct 09 16:02:03 compute-0 nova_compute[117331]:       <model usable='no' vendor='Intel' canonical='Haswell-v3'>Haswell-IBRS</model>
Oct 09 16:02:03 compute-0 nova_compute[117331]:       <blockers model='Haswell-IBRS'>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='erms'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='hle'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='invpcid'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='pcid'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='rtm'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:       </blockers>
Oct 09 16:02:03 compute-0 nova_compute[117331]:       <model usable='no' vendor='Intel' canonical='Haswell-v2'>Haswell-noTSX</model>
Oct 09 16:02:03 compute-0 nova_compute[117331]:       <blockers model='Haswell-noTSX'>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='erms'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='invpcid'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='pcid'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:       </blockers>
Oct 09 16:02:03 compute-0 nova_compute[117331]:       <model usable='no' vendor='Intel' canonical='Haswell-v4'>Haswell-noTSX-IBRS</model>
Oct 09 16:02:03 compute-0 nova_compute[117331]:       <blockers model='Haswell-noTSX-IBRS'>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='erms'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='invpcid'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='pcid'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:       </blockers>
Oct 09 16:02:03 compute-0 nova_compute[117331]:       <model usable='no' vendor='Intel'>Haswell-v1</model>
Oct 09 16:02:03 compute-0 nova_compute[117331]:       <blockers model='Haswell-v1'>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='erms'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='hle'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='invpcid'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='pcid'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='rtm'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:       </blockers>
Oct 09 16:02:03 compute-0 nova_compute[117331]:       <model usable='no' vendor='Intel'>Haswell-v2</model>
Oct 09 16:02:03 compute-0 nova_compute[117331]:       <blockers model='Haswell-v2'>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='erms'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='invpcid'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='pcid'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:       </blockers>
Oct 09 16:02:03 compute-0 nova_compute[117331]:       <model usable='no' vendor='Intel'>Haswell-v3</model>
Oct 09 16:02:03 compute-0 nova_compute[117331]:       <blockers model='Haswell-v3'>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='erms'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='hle'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='invpcid'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='pcid'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='rtm'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:       </blockers>
Oct 09 16:02:03 compute-0 nova_compute[117331]:       <model usable='no' vendor='Intel'>Haswell-v4</model>
Oct 09 16:02:03 compute-0 nova_compute[117331]:       <blockers model='Haswell-v4'>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='erms'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='invpcid'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='pcid'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:       </blockers>
Oct 09 16:02:03 compute-0 nova_compute[117331]:       <model usable='no' vendor='Intel' canonical='Icelake-Server-v1'>Icelake-Server</model>
Oct 09 16:02:03 compute-0 nova_compute[117331]:       <blockers model='Icelake-Server'>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='avx512-vpopcntdq'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='avx512bitalg'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='avx512bw'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='avx512cd'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='avx512dq'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='avx512f'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='avx512vbmi'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='avx512vbmi2'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='avx512vl'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='avx512vnni'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='erms'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='gfni'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='hle'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='invpcid'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='la57'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='pcid'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='pku'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='rtm'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='vaes'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='vpclmulqdq'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:       </blockers>
Oct 09 16:02:03 compute-0 nova_compute[117331]:       <model usable='no' vendor='Intel' canonical='Icelake-Server-v2'>Icelake-Server-noTSX</model>
Oct 09 16:02:03 compute-0 nova_compute[117331]:       <blockers model='Icelake-Server-noTSX'>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='avx512-vpopcntdq'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='avx512bitalg'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='avx512bw'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='avx512cd'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='avx512dq'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='avx512f'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='avx512vbmi'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='avx512vbmi2'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='avx512vl'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='avx512vnni'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='erms'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='gfni'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='invpcid'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='la57'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='pcid'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='pku'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='vaes'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='vpclmulqdq'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:       </blockers>
Oct 09 16:02:03 compute-0 nova_compute[117331]:       <model usable='no' vendor='Intel'>Icelake-Server-v1</model>
Oct 09 16:02:03 compute-0 nova_compute[117331]:       <blockers model='Icelake-Server-v1'>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='avx512-vpopcntdq'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='avx512bitalg'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='avx512bw'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='avx512cd'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='avx512dq'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='avx512f'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='avx512vbmi'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='avx512vbmi2'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='avx512vl'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='avx512vnni'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='erms'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='gfni'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='hle'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='invpcid'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='la57'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='pcid'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='pku'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='rtm'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='vaes'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='vpclmulqdq'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:       </blockers>
Oct 09 16:02:03 compute-0 nova_compute[117331]:       <model usable='no' vendor='Intel'>Icelake-Server-v2</model>
Oct 09 16:02:03 compute-0 nova_compute[117331]:       <blockers model='Icelake-Server-v2'>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='avx512-vpopcntdq'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='avx512bitalg'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='avx512bw'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='avx512cd'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='avx512dq'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='avx512f'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='avx512vbmi'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='avx512vbmi2'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='avx512vl'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='avx512vnni'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='erms'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='gfni'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='invpcid'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='la57'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='pcid'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='pku'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='vaes'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='vpclmulqdq'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:       </blockers>
Oct 09 16:02:03 compute-0 nova_compute[117331]:       <model usable='no' vendor='Intel'>Icelake-Server-v3</model>
Oct 09 16:02:03 compute-0 nova_compute[117331]:       <blockers model='Icelake-Server-v3'>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='avx512-vpopcntdq'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='avx512bitalg'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='avx512bw'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='avx512cd'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='avx512dq'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='avx512f'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='avx512vbmi'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='avx512vbmi2'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='avx512vl'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='avx512vnni'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='erms'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='gfni'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='ibrs-all'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='invpcid'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='la57'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='pcid'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='pku'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='taa-no'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='vaes'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='vpclmulqdq'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:       </blockers>
Oct 09 16:02:03 compute-0 nova_compute[117331]:       <model usable='no' vendor='Intel'>Icelake-Server-v4</model>
Oct 09 16:02:03 compute-0 nova_compute[117331]:       <blockers model='Icelake-Server-v4'>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='avx512-vpopcntdq'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='avx512bitalg'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='avx512bw'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='avx512cd'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='avx512dq'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='avx512f'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='avx512ifma'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='avx512vbmi'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='avx512vbmi2'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='avx512vl'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='avx512vnni'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='erms'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='fsrm'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='gfni'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='ibrs-all'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='invpcid'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='la57'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='pcid'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='pku'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='taa-no'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='vaes'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='vpclmulqdq'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:       </blockers>
Oct 09 16:02:03 compute-0 nova_compute[117331]:       <model usable='no' vendor='Intel'>Icelake-Server-v5</model>
Oct 09 16:02:03 compute-0 nova_compute[117331]:       <blockers model='Icelake-Server-v5'>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='avx512-vpopcntdq'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='avx512bitalg'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='avx512bw'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='avx512cd'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='avx512dq'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='avx512f'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='avx512ifma'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='avx512vbmi'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='avx512vbmi2'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='avx512vl'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='avx512vnni'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='erms'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='fsrm'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='gfni'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='ibrs-all'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='invpcid'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='la57'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='pcid'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='pku'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='taa-no'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='vaes'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='vpclmulqdq'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='xsaves'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:       </blockers>
Oct 09 16:02:03 compute-0 nova_compute[117331]:       <model usable='no' vendor='Intel'>Icelake-Server-v6</model>
Oct 09 16:02:03 compute-0 nova_compute[117331]:       <blockers model='Icelake-Server-v6'>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='avx512-vpopcntdq'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='avx512bitalg'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='avx512bw'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='avx512cd'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='avx512dq'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='avx512f'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='avx512ifma'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='avx512vbmi'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='avx512vbmi2'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='avx512vl'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='avx512vnni'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='erms'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='fsrm'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='gfni'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='ibrs-all'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='invpcid'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='la57'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='pcid'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='pku'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='taa-no'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='vaes'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='vpclmulqdq'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='xsaves'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:       </blockers>
Oct 09 16:02:03 compute-0 nova_compute[117331]:       <model usable='no' vendor='Intel'>Icelake-Server-v7</model>
Oct 09 16:02:03 compute-0 nova_compute[117331]:       <blockers model='Icelake-Server-v7'>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='avx512-vpopcntdq'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='avx512bitalg'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='avx512bw'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='avx512cd'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='avx512dq'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='avx512f'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='avx512ifma'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='avx512vbmi'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='avx512vbmi2'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='avx512vl'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='avx512vnni'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='erms'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='fsrm'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='gfni'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='hle'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='ibrs-all'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='invpcid'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='la57'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='pcid'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='pku'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='rtm'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='taa-no'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='vaes'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='vpclmulqdq'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='xsaves'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:       </blockers>
Oct 09 16:02:03 compute-0 nova_compute[117331]:       <model usable='no' vendor='Intel' canonical='IvyBridge-v1'>IvyBridge</model>
Oct 09 16:02:03 compute-0 nova_compute[117331]:       <blockers model='IvyBridge'>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='erms'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:       </blockers>
Oct 09 16:02:03 compute-0 nova_compute[117331]:       <model usable='no' vendor='Intel' canonical='IvyBridge-v2'>IvyBridge-IBRS</model>
Oct 09 16:02:03 compute-0 nova_compute[117331]:       <blockers model='IvyBridge-IBRS'>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='erms'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:       </blockers>
Oct 09 16:02:03 compute-0 nova_compute[117331]:       <model usable='no' vendor='Intel'>IvyBridge-v1</model>
Oct 09 16:02:03 compute-0 nova_compute[117331]:       <blockers model='IvyBridge-v1'>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='erms'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:       </blockers>
Oct 09 16:02:03 compute-0 nova_compute[117331]:       <model usable='no' vendor='Intel'>IvyBridge-v2</model>
Oct 09 16:02:03 compute-0 nova_compute[117331]:       <blockers model='IvyBridge-v2'>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='erms'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:       </blockers>
Oct 09 16:02:03 compute-0 nova_compute[117331]:       <model usable='no' vendor='Intel' canonical='KnightsMill-v1'>KnightsMill</model>
Oct 09 16:02:03 compute-0 nova_compute[117331]:       <blockers model='KnightsMill'>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='avx512-4fmaps'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='avx512-4vnniw'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='avx512-vpopcntdq'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='avx512cd'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='avx512er'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='avx512f'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='avx512pf'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='erms'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='ss'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:       </blockers>
Oct 09 16:02:03 compute-0 nova_compute[117331]:       <model usable='no' vendor='Intel'>KnightsMill-v1</model>
Oct 09 16:02:03 compute-0 nova_compute[117331]:       <blockers model='KnightsMill-v1'>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='avx512-4fmaps'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='avx512-4vnniw'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='avx512-vpopcntdq'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='avx512cd'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='avx512er'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='avx512f'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='avx512pf'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='erms'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='ss'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:       </blockers>
Oct 09 16:02:03 compute-0 nova_compute[117331]:       <model usable='yes' vendor='Intel' canonical='Nehalem-v1'>Nehalem</model>
Oct 09 16:02:03 compute-0 nova_compute[117331]:       <model usable='yes' vendor='Intel' canonical='Nehalem-v2'>Nehalem-IBRS</model>
Oct 09 16:02:03 compute-0 nova_compute[117331]:       <model usable='yes' vendor='Intel'>Nehalem-v1</model>
Oct 09 16:02:03 compute-0 nova_compute[117331]:       <model usable='yes' vendor='Intel'>Nehalem-v2</model>
Oct 09 16:02:03 compute-0 nova_compute[117331]:       <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G1-v1'>Opteron_G1</model>
Oct 09 16:02:03 compute-0 nova_compute[117331]:       <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G1-v1</model>
Oct 09 16:02:03 compute-0 nova_compute[117331]:       <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G2-v1'>Opteron_G2</model>
Oct 09 16:02:03 compute-0 nova_compute[117331]:       <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G2-v1</model>
Oct 09 16:02:03 compute-0 nova_compute[117331]:       <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G3-v1'>Opteron_G3</model>
Oct 09 16:02:03 compute-0 nova_compute[117331]:       <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G3-v1</model>
Oct 09 16:02:03 compute-0 nova_compute[117331]:       <model usable='no' vendor='AMD' canonical='Opteron_G4-v1'>Opteron_G4</model>
Oct 09 16:02:03 compute-0 nova_compute[117331]:       <blockers model='Opteron_G4'>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='fma4'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='xop'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:       </blockers>
Oct 09 16:02:03 compute-0 nova_compute[117331]:       <model usable='no' vendor='AMD'>Opteron_G4-v1</model>
Oct 09 16:02:03 compute-0 nova_compute[117331]:       <blockers model='Opteron_G4-v1'>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='fma4'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='xop'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:       </blockers>
Oct 09 16:02:03 compute-0 nova_compute[117331]:       <model usable='no' vendor='AMD' canonical='Opteron_G5-v1'>Opteron_G5</model>
Oct 09 16:02:03 compute-0 nova_compute[117331]:       <blockers model='Opteron_G5'>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='fma4'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='tbm'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='xop'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:       </blockers>
Oct 09 16:02:03 compute-0 nova_compute[117331]:       <model usable='no' vendor='AMD'>Opteron_G5-v1</model>
Oct 09 16:02:03 compute-0 nova_compute[117331]:       <blockers model='Opteron_G5-v1'>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='fma4'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='tbm'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='xop'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:       </blockers>
Oct 09 16:02:03 compute-0 nova_compute[117331]:       <model usable='yes' deprecated='yes' vendor='Intel' canonical='Penryn-v1'>Penryn</model>
Oct 09 16:02:03 compute-0 nova_compute[117331]:       <model usable='yes' deprecated='yes' vendor='Intel'>Penryn-v1</model>
Oct 09 16:02:03 compute-0 nova_compute[117331]:       <model usable='yes' vendor='Intel' canonical='SandyBridge-v1'>SandyBridge</model>
Oct 09 16:02:03 compute-0 nova_compute[117331]:       <model usable='yes' vendor='Intel' canonical='SandyBridge-v2'>SandyBridge-IBRS</model>
Oct 09 16:02:03 compute-0 nova_compute[117331]:       <model usable='yes' vendor='Intel'>SandyBridge-v1</model>
Oct 09 16:02:03 compute-0 nova_compute[117331]:       <model usable='yes' vendor='Intel'>SandyBridge-v2</model>
Oct 09 16:02:03 compute-0 nova_compute[117331]:       <model usable='no' vendor='Intel' canonical='SapphireRapids-v1'>SapphireRapids</model>
Oct 09 16:02:03 compute-0 nova_compute[117331]:       <blockers model='SapphireRapids'>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='amx-bf16'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='amx-int8'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='amx-tile'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='avx-vnni'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='avx512-bf16'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='avx512-fp16'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='avx512-vpopcntdq'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='avx512bitalg'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='avx512bw'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='avx512cd'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='avx512dq'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='avx512f'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='avx512ifma'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='avx512vbmi'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='avx512vbmi2'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='avx512vl'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='avx512vnni'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='bus-lock-detect'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='erms'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='fsrc'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='fsrm'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='fsrs'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='fzrm'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='gfni'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='hle'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='ibrs-all'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='invpcid'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='la57'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='pcid'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='pku'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='rtm'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='serialize'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='taa-no'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='tsx-ldtrk'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='vaes'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='vpclmulqdq'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='xfd'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='xsaves'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:       </blockers>
Oct 09 16:02:03 compute-0 nova_compute[117331]:       <model usable='no' vendor='Intel'>SapphireRapids-v1</model>
Oct 09 16:02:03 compute-0 nova_compute[117331]:       <blockers model='SapphireRapids-v1'>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='amx-bf16'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='amx-int8'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='amx-tile'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='avx-vnni'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='avx512-bf16'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='avx512-fp16'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='avx512-vpopcntdq'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='avx512bitalg'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='avx512bw'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='avx512cd'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='avx512dq'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='avx512f'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='avx512ifma'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='avx512vbmi'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='avx512vbmi2'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='avx512vl'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='avx512vnni'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='bus-lock-detect'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='erms'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='fsrc'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='fsrm'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='fsrs'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='fzrm'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='gfni'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='hle'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='ibrs-all'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='invpcid'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='la57'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='pcid'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='pku'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='rtm'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='serialize'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='taa-no'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='tsx-ldtrk'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='vaes'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='vpclmulqdq'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='xfd'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='xsaves'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:       </blockers>
Oct 09 16:02:03 compute-0 nova_compute[117331]:       <model usable='no' vendor='Intel'>SapphireRapids-v2</model>
Oct 09 16:02:03 compute-0 nova_compute[117331]:       <blockers model='SapphireRapids-v2'>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='amx-bf16'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='amx-int8'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='amx-tile'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='avx-vnni'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='avx512-bf16'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='avx512-fp16'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='avx512-vpopcntdq'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='avx512bitalg'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='avx512bw'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='avx512cd'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='avx512dq'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='avx512f'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='avx512ifma'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='avx512vbmi'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='avx512vbmi2'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='avx512vl'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='avx512vnni'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='bus-lock-detect'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='erms'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='fbsdp-no'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='fsrc'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='fsrm'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='fsrs'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='fzrm'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='gfni'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='hle'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='ibrs-all'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='invpcid'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='la57'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='pcid'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='pku'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='psdp-no'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='rtm'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='sbdr-ssdp-no'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='serialize'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='taa-no'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='tsx-ldtrk'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='vaes'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='vpclmulqdq'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='xfd'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='xsaves'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:       </blockers>
Oct 09 16:02:03 compute-0 nova_compute[117331]:       <model usable='no' vendor='Intel'>SapphireRapids-v3</model>
Oct 09 16:02:03 compute-0 nova_compute[117331]:       <blockers model='SapphireRapids-v3'>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='amx-bf16'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='amx-int8'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='amx-tile'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='avx-vnni'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='avx512-bf16'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='avx512-fp16'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='avx512-vpopcntdq'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='avx512bitalg'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='avx512bw'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='avx512cd'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='avx512dq'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='avx512f'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='avx512ifma'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='avx512vbmi'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='avx512vbmi2'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='avx512vl'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='avx512vnni'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='bus-lock-detect'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='cldemote'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='erms'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='fbsdp-no'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='fsrc'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='fsrm'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='fsrs'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='fzrm'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='gfni'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='hle'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='ibrs-all'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='invpcid'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='la57'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='movdir64b'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='movdiri'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='pcid'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='pku'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='psdp-no'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='rtm'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='sbdr-ssdp-no'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='serialize'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='ss'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='taa-no'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='tsx-ldtrk'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='vaes'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='vpclmulqdq'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='xfd'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='xsaves'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:       </blockers>
Oct 09 16:02:03 compute-0 nova_compute[117331]:       <model usable='no' vendor='Intel' canonical='SierraForest-v1'>SierraForest</model>
Oct 09 16:02:03 compute-0 nova_compute[117331]:       <blockers model='SierraForest'>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='avx-ifma'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='avx-ne-convert'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='avx-vnni'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='avx-vnni-int8'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='bus-lock-detect'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='cmpccxadd'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='erms'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='fbsdp-no'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='fsrm'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='fsrs'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='gfni'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='ibrs-all'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='invpcid'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='mcdt-no'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='pbrsb-no'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='pcid'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='pku'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='psdp-no'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='sbdr-ssdp-no'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='serialize'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='vaes'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='vpclmulqdq'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='xsaves'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:       </blockers>
Oct 09 16:02:03 compute-0 nova_compute[117331]:       <model usable='no' vendor='Intel'>SierraForest-v1</model>
Oct 09 16:02:03 compute-0 nova_compute[117331]:       <blockers model='SierraForest-v1'>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='avx-ifma'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='avx-ne-convert'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='avx-vnni'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='avx-vnni-int8'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='bus-lock-detect'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='cmpccxadd'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='erms'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='fbsdp-no'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='fsrm'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='fsrs'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='gfni'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='ibrs-all'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='invpcid'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='mcdt-no'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='pbrsb-no'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='pcid'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='pku'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='psdp-no'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='sbdr-ssdp-no'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='serialize'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='vaes'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='vpclmulqdq'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='xsaves'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:       </blockers>
Oct 09 16:02:03 compute-0 nova_compute[117331]:       <model usable='no' vendor='Intel' canonical='Skylake-Client-v1'>Skylake-Client</model>
Oct 09 16:02:03 compute-0 nova_compute[117331]:       <blockers model='Skylake-Client'>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='erms'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='hle'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='invpcid'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='pcid'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='rtm'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:       </blockers>
Oct 09 16:02:03 compute-0 nova_compute[117331]:       <model usable='no' vendor='Intel' canonical='Skylake-Client-v2'>Skylake-Client-IBRS</model>
Oct 09 16:02:03 compute-0 nova_compute[117331]:       <blockers model='Skylake-Client-IBRS'>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='erms'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='hle'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='invpcid'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='pcid'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='rtm'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:       </blockers>
Oct 09 16:02:03 compute-0 nova_compute[117331]:       <model usable='no' vendor='Intel' canonical='Skylake-Client-v3'>Skylake-Client-noTSX-IBRS</model>
Oct 09 16:02:03 compute-0 nova_compute[117331]:       <blockers model='Skylake-Client-noTSX-IBRS'>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='erms'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='invpcid'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='pcid'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:       </blockers>
Oct 09 16:02:03 compute-0 nova_compute[117331]:       <model usable='no' vendor='Intel'>Skylake-Client-v1</model>
Oct 09 16:02:03 compute-0 nova_compute[117331]:       <blockers model='Skylake-Client-v1'>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='erms'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='hle'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='invpcid'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='pcid'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='rtm'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:       </blockers>
Oct 09 16:02:03 compute-0 nova_compute[117331]:       <model usable='no' vendor='Intel'>Skylake-Client-v2</model>
Oct 09 16:02:03 compute-0 nova_compute[117331]:       <blockers model='Skylake-Client-v2'>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='erms'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='hle'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='invpcid'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='pcid'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='rtm'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:       </blockers>
Oct 09 16:02:03 compute-0 nova_compute[117331]:       <model usable='no' vendor='Intel'>Skylake-Client-v3</model>
Oct 09 16:02:03 compute-0 nova_compute[117331]:       <blockers model='Skylake-Client-v3'>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='erms'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='invpcid'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='pcid'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:       </blockers>
Oct 09 16:02:03 compute-0 nova_compute[117331]:       <model usable='no' vendor='Intel'>Skylake-Client-v4</model>
Oct 09 16:02:03 compute-0 nova_compute[117331]:       <blockers model='Skylake-Client-v4'>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='erms'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='invpcid'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='pcid'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='xsaves'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:       </blockers>
Oct 09 16:02:03 compute-0 nova_compute[117331]:       <model usable='no' vendor='Intel' canonical='Skylake-Server-v1'>Skylake-Server</model>
Oct 09 16:02:03 compute-0 nova_compute[117331]:       <blockers model='Skylake-Server'>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='avx512bw'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='avx512cd'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='avx512dq'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='avx512f'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='avx512vl'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='erms'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='hle'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='invpcid'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='pcid'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='pku'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='rtm'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:       </blockers>
Oct 09 16:02:03 compute-0 nova_compute[117331]:       <model usable='no' vendor='Intel' canonical='Skylake-Server-v2'>Skylake-Server-IBRS</model>
Oct 09 16:02:03 compute-0 nova_compute[117331]:       <blockers model='Skylake-Server-IBRS'>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='avx512bw'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='avx512cd'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='avx512dq'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='avx512f'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='avx512vl'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='erms'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='hle'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='invpcid'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='pcid'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='pku'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='rtm'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:       </blockers>
Oct 09 16:02:03 compute-0 nova_compute[117331]:       <model usable='no' vendor='Intel' canonical='Skylake-Server-v3'>Skylake-Server-noTSX-IBRS</model>
Oct 09 16:02:03 compute-0 nova_compute[117331]:       <blockers model='Skylake-Server-noTSX-IBRS'>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='avx512bw'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='avx512cd'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='avx512dq'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='avx512f'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='avx512vl'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='erms'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='invpcid'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='pcid'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='pku'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:       </blockers>
Oct 09 16:02:03 compute-0 nova_compute[117331]:       <model usable='no' vendor='Intel'>Skylake-Server-v1</model>
Oct 09 16:02:03 compute-0 nova_compute[117331]:       <blockers model='Skylake-Server-v1'>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='avx512bw'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='avx512cd'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='avx512dq'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='avx512f'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='avx512vl'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='erms'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='hle'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='invpcid'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='pcid'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='pku'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='rtm'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:       </blockers>
Oct 09 16:02:03 compute-0 nova_compute[117331]:       <model usable='no' vendor='Intel'>Skylake-Server-v2</model>
Oct 09 16:02:03 compute-0 nova_compute[117331]:       <blockers model='Skylake-Server-v2'>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='avx512bw'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='avx512cd'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='avx512dq'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='avx512f'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='avx512vl'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='erms'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='hle'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='invpcid'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='pcid'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='pku'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='rtm'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:       </blockers>
Oct 09 16:02:03 compute-0 nova_compute[117331]:       <model usable='no' vendor='Intel'>Skylake-Server-v3</model>
Oct 09 16:02:03 compute-0 nova_compute[117331]:       <blockers model='Skylake-Server-v3'>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='avx512bw'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='avx512cd'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='avx512dq'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='avx512f'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='avx512vl'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='erms'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='invpcid'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='pcid'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='pku'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:       </blockers>
Oct 09 16:02:03 compute-0 nova_compute[117331]:       <model usable='no' vendor='Intel'>Skylake-Server-v4</model>
Oct 09 16:02:03 compute-0 nova_compute[117331]:       <blockers model='Skylake-Server-v4'>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='avx512bw'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='avx512cd'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='avx512dq'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='avx512f'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='avx512vl'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='erms'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='invpcid'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='pcid'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='pku'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:       </blockers>
Oct 09 16:02:03 compute-0 nova_compute[117331]:       <model usable='no' vendor='Intel'>Skylake-Server-v5</model>
Oct 09 16:02:03 compute-0 nova_compute[117331]:       <blockers model='Skylake-Server-v5'>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='avx512bw'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='avx512cd'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='avx512dq'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='avx512f'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='avx512vl'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='erms'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='invpcid'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='pcid'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='pku'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='xsaves'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:       </blockers>
Oct 09 16:02:03 compute-0 nova_compute[117331]:       <model usable='no' vendor='Intel' canonical='Snowridge-v1'>Snowridge</model>
Oct 09 16:02:03 compute-0 nova_compute[117331]:       <blockers model='Snowridge'>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='cldemote'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='core-capability'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='erms'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='gfni'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='movdir64b'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='movdiri'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='mpx'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='split-lock-detect'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:       </blockers>
Oct 09 16:02:03 compute-0 nova_compute[117331]:       <model usable='no' vendor='Intel'>Snowridge-v1</model>
Oct 09 16:02:03 compute-0 nova_compute[117331]:       <blockers model='Snowridge-v1'>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='cldemote'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='core-capability'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='erms'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='gfni'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='movdir64b'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='movdiri'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='mpx'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='split-lock-detect'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:       </blockers>
Oct 09 16:02:03 compute-0 nova_compute[117331]:       <model usable='no' vendor='Intel'>Snowridge-v2</model>
Oct 09 16:02:03 compute-0 nova_compute[117331]:       <blockers model='Snowridge-v2'>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='cldemote'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='core-capability'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='erms'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='gfni'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='movdir64b'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='movdiri'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='split-lock-detect'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:       </blockers>
Oct 09 16:02:03 compute-0 nova_compute[117331]:       <model usable='no' vendor='Intel'>Snowridge-v3</model>
Oct 09 16:02:03 compute-0 nova_compute[117331]:       <blockers model='Snowridge-v3'>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='cldemote'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='core-capability'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='erms'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='gfni'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='movdir64b'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='movdiri'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='split-lock-detect'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='xsaves'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:       </blockers>
Oct 09 16:02:03 compute-0 nova_compute[117331]:       <model usable='no' vendor='Intel'>Snowridge-v4</model>
Oct 09 16:02:03 compute-0 nova_compute[117331]:       <blockers model='Snowridge-v4'>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='cldemote'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='erms'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='gfni'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='movdir64b'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='movdiri'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='xsaves'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:       </blockers>
Oct 09 16:02:03 compute-0 nova_compute[117331]:       <model usable='yes' vendor='Intel' canonical='Westmere-v1'>Westmere</model>
Oct 09 16:02:03 compute-0 nova_compute[117331]:       <model usable='yes' vendor='Intel' canonical='Westmere-v2'>Westmere-IBRS</model>
Oct 09 16:02:03 compute-0 nova_compute[117331]:       <model usable='yes' vendor='Intel'>Westmere-v1</model>
Oct 09 16:02:03 compute-0 nova_compute[117331]:       <model usable='yes' vendor='Intel'>Westmere-v2</model>
Oct 09 16:02:03 compute-0 nova_compute[117331]:       <model usable='no' deprecated='yes' vendor='AMD' canonical='athlon-v1'>athlon</model>
Oct 09 16:02:03 compute-0 nova_compute[117331]:       <blockers model='athlon'>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='3dnow'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='3dnowext'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:       </blockers>
Oct 09 16:02:03 compute-0 nova_compute[117331]:       <model usable='no' deprecated='yes' vendor='AMD'>athlon-v1</model>
Oct 09 16:02:03 compute-0 nova_compute[117331]:       <blockers model='athlon-v1'>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='3dnow'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='3dnowext'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:       </blockers>
Oct 09 16:02:03 compute-0 nova_compute[117331]:       <model usable='no' deprecated='yes' vendor='Intel' canonical='core2duo-v1'>core2duo</model>
Oct 09 16:02:03 compute-0 nova_compute[117331]:       <blockers model='core2duo'>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='ss'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:       </blockers>
Oct 09 16:02:03 compute-0 nova_compute[117331]:       <model usable='no' deprecated='yes' vendor='Intel'>core2duo-v1</model>
Oct 09 16:02:03 compute-0 nova_compute[117331]:       <blockers model='core2duo-v1'>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='ss'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:       </blockers>
Oct 09 16:02:03 compute-0 nova_compute[117331]:       <model usable='no' deprecated='yes' vendor='Intel' canonical='coreduo-v1'>coreduo</model>
Oct 09 16:02:03 compute-0 nova_compute[117331]:       <blockers model='coreduo'>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='ss'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:       </blockers>
Oct 09 16:02:03 compute-0 nova_compute[117331]:       <model usable='no' deprecated='yes' vendor='Intel'>coreduo-v1</model>
Oct 09 16:02:03 compute-0 nova_compute[117331]:       <blockers model='coreduo-v1'>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='ss'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:       </blockers>
Oct 09 16:02:03 compute-0 nova_compute[117331]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='kvm32-v1'>kvm32</model>
Oct 09 16:02:03 compute-0 nova_compute[117331]:       <model usable='yes' deprecated='yes' vendor='unknown'>kvm32-v1</model>
Oct 09 16:02:03 compute-0 nova_compute[117331]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='kvm64-v1'>kvm64</model>
Oct 09 16:02:03 compute-0 nova_compute[117331]:       <model usable='yes' deprecated='yes' vendor='unknown'>kvm64-v1</model>
Oct 09 16:02:03 compute-0 nova_compute[117331]:       <model usable='no' deprecated='yes' vendor='Intel' canonical='n270-v1'>n270</model>
Oct 09 16:02:03 compute-0 nova_compute[117331]:       <blockers model='n270'>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='ss'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:       </blockers>
Oct 09 16:02:03 compute-0 nova_compute[117331]:       <model usable='no' deprecated='yes' vendor='Intel'>n270-v1</model>
Oct 09 16:02:03 compute-0 nova_compute[117331]:       <blockers model='n270-v1'>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='ss'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:       </blockers>
Oct 09 16:02:03 compute-0 nova_compute[117331]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium-v1'>pentium</model>
Oct 09 16:02:03 compute-0 nova_compute[117331]:       <model usable='yes' deprecated='yes' vendor='unknown'>pentium-v1</model>
Oct 09 16:02:03 compute-0 nova_compute[117331]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium2-v1'>pentium2</model>
Oct 09 16:02:03 compute-0 nova_compute[117331]:       <model usable='yes' deprecated='yes' vendor='unknown'>pentium2-v1</model>
Oct 09 16:02:03 compute-0 nova_compute[117331]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium3-v1'>pentium3</model>
Oct 09 16:02:03 compute-0 nova_compute[117331]:       <model usable='yes' deprecated='yes' vendor='unknown'>pentium3-v1</model>
Oct 09 16:02:03 compute-0 nova_compute[117331]:       <model usable='no' deprecated='yes' vendor='AMD' canonical='phenom-v1'>phenom</model>
Oct 09 16:02:03 compute-0 nova_compute[117331]:       <blockers model='phenom'>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='3dnow'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='3dnowext'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:       </blockers>
Oct 09 16:02:03 compute-0 nova_compute[117331]:       <model usable='no' deprecated='yes' vendor='AMD'>phenom-v1</model>
Oct 09 16:02:03 compute-0 nova_compute[117331]:       <blockers model='phenom-v1'>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='3dnow'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='3dnowext'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:       </blockers>
Oct 09 16:02:03 compute-0 nova_compute[117331]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='qemu32-v1'>qemu32</model>
Oct 09 16:02:03 compute-0 nova_compute[117331]:       <model usable='yes' deprecated='yes' vendor='unknown'>qemu32-v1</model>
Oct 09 16:02:03 compute-0 nova_compute[117331]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='qemu64-v1'>qemu64</model>
Oct 09 16:02:03 compute-0 nova_compute[117331]:       <model usable='yes' deprecated='yes' vendor='unknown'>qemu64-v1</model>
Oct 09 16:02:03 compute-0 nova_compute[117331]:     </mode>
Oct 09 16:02:03 compute-0 nova_compute[117331]:   </cpu>
Oct 09 16:02:03 compute-0 nova_compute[117331]:   <memoryBacking supported='yes'>
Oct 09 16:02:03 compute-0 nova_compute[117331]:     <enum name='sourceType'>
Oct 09 16:02:03 compute-0 nova_compute[117331]:       <value>file</value>
Oct 09 16:02:03 compute-0 nova_compute[117331]:       <value>anonymous</value>
Oct 09 16:02:03 compute-0 nova_compute[117331]:       <value>memfd</value>
Oct 09 16:02:03 compute-0 nova_compute[117331]:     </enum>
Oct 09 16:02:03 compute-0 nova_compute[117331]:   </memoryBacking>
Oct 09 16:02:03 compute-0 nova_compute[117331]:   <devices>
Oct 09 16:02:03 compute-0 nova_compute[117331]:     <disk supported='yes'>
Oct 09 16:02:03 compute-0 nova_compute[117331]:       <enum name='diskDevice'>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <value>disk</value>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <value>cdrom</value>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <value>floppy</value>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <value>lun</value>
Oct 09 16:02:03 compute-0 nova_compute[117331]:       </enum>
Oct 09 16:02:03 compute-0 nova_compute[117331]:       <enum name='bus'>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <value>ide</value>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <value>fdc</value>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <value>scsi</value>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <value>virtio</value>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <value>usb</value>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <value>sata</value>
Oct 09 16:02:03 compute-0 nova_compute[117331]:       </enum>
Oct 09 16:02:03 compute-0 nova_compute[117331]:       <enum name='model'>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <value>virtio</value>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <value>virtio-transitional</value>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <value>virtio-non-transitional</value>
Oct 09 16:02:03 compute-0 nova_compute[117331]:       </enum>
Oct 09 16:02:03 compute-0 nova_compute[117331]:     </disk>
Oct 09 16:02:03 compute-0 nova_compute[117331]:     <graphics supported='yes'>
Oct 09 16:02:03 compute-0 nova_compute[117331]:       <enum name='type'>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <value>vnc</value>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <value>egl-headless</value>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <value>dbus</value>
Oct 09 16:02:03 compute-0 nova_compute[117331]:       </enum>
Oct 09 16:02:03 compute-0 nova_compute[117331]:     </graphics>
Oct 09 16:02:03 compute-0 nova_compute[117331]:     <video supported='yes'>
Oct 09 16:02:03 compute-0 nova_compute[117331]:       <enum name='modelType'>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <value>vga</value>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <value>cirrus</value>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <value>virtio</value>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <value>none</value>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <value>bochs</value>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <value>ramfb</value>
Oct 09 16:02:03 compute-0 nova_compute[117331]:       </enum>
Oct 09 16:02:03 compute-0 nova_compute[117331]:     </video>
Oct 09 16:02:03 compute-0 nova_compute[117331]:     <hostdev supported='yes'>
Oct 09 16:02:03 compute-0 nova_compute[117331]:       <enum name='mode'>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <value>subsystem</value>
Oct 09 16:02:03 compute-0 nova_compute[117331]:       </enum>
Oct 09 16:02:03 compute-0 nova_compute[117331]:       <enum name='startupPolicy'>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <value>default</value>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <value>mandatory</value>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <value>requisite</value>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <value>optional</value>
Oct 09 16:02:03 compute-0 nova_compute[117331]:       </enum>
Oct 09 16:02:03 compute-0 nova_compute[117331]:       <enum name='subsysType'>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <value>usb</value>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <value>pci</value>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <value>scsi</value>
Oct 09 16:02:03 compute-0 nova_compute[117331]:       </enum>
Oct 09 16:02:03 compute-0 nova_compute[117331]:       <enum name='capsType'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:       <enum name='pciBackend'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:     </hostdev>
Oct 09 16:02:03 compute-0 nova_compute[117331]:     <rng supported='yes'>
Oct 09 16:02:03 compute-0 nova_compute[117331]:       <enum name='model'>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <value>virtio</value>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <value>virtio-transitional</value>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <value>virtio-non-transitional</value>
Oct 09 16:02:03 compute-0 nova_compute[117331]:       </enum>
Oct 09 16:02:03 compute-0 nova_compute[117331]:       <enum name='backendModel'>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <value>random</value>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <value>egd</value>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <value>builtin</value>
Oct 09 16:02:03 compute-0 nova_compute[117331]:       </enum>
Oct 09 16:02:03 compute-0 nova_compute[117331]:     </rng>
Oct 09 16:02:03 compute-0 nova_compute[117331]:     <filesystem supported='yes'>
Oct 09 16:02:03 compute-0 nova_compute[117331]:       <enum name='driverType'>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <value>path</value>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <value>handle</value>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <value>virtiofs</value>
Oct 09 16:02:03 compute-0 nova_compute[117331]:       </enum>
Oct 09 16:02:03 compute-0 nova_compute[117331]:     </filesystem>
Oct 09 16:02:03 compute-0 nova_compute[117331]:     <tpm supported='yes'>
Oct 09 16:02:03 compute-0 nova_compute[117331]:       <enum name='model'>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <value>tpm-tis</value>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <value>tpm-crb</value>
Oct 09 16:02:03 compute-0 nova_compute[117331]:       </enum>
Oct 09 16:02:03 compute-0 nova_compute[117331]:       <enum name='backendModel'>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <value>emulator</value>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <value>external</value>
Oct 09 16:02:03 compute-0 nova_compute[117331]:       </enum>
Oct 09 16:02:03 compute-0 nova_compute[117331]:       <enum name='backendVersion'>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <value>2.0</value>
Oct 09 16:02:03 compute-0 nova_compute[117331]:       </enum>
Oct 09 16:02:03 compute-0 nova_compute[117331]:     </tpm>
Oct 09 16:02:03 compute-0 nova_compute[117331]:     <redirdev supported='yes'>
Oct 09 16:02:03 compute-0 nova_compute[117331]:       <enum name='bus'>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <value>usb</value>
Oct 09 16:02:03 compute-0 nova_compute[117331]:       </enum>
Oct 09 16:02:03 compute-0 nova_compute[117331]:     </redirdev>
Oct 09 16:02:03 compute-0 nova_compute[117331]:     <channel supported='yes'>
Oct 09 16:02:03 compute-0 nova_compute[117331]:       <enum name='type'>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <value>pty</value>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <value>unix</value>
Oct 09 16:02:03 compute-0 nova_compute[117331]:       </enum>
Oct 09 16:02:03 compute-0 nova_compute[117331]:     </channel>
Oct 09 16:02:03 compute-0 nova_compute[117331]:     <crypto supported='yes'>
Oct 09 16:02:03 compute-0 nova_compute[117331]:       <enum name='model'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:       <enum name='type'>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <value>qemu</value>
Oct 09 16:02:03 compute-0 nova_compute[117331]:       </enum>
Oct 09 16:02:03 compute-0 nova_compute[117331]:       <enum name='backendModel'>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <value>builtin</value>
Oct 09 16:02:03 compute-0 nova_compute[117331]:       </enum>
Oct 09 16:02:03 compute-0 nova_compute[117331]:     </crypto>
Oct 09 16:02:03 compute-0 nova_compute[117331]:     <interface supported='yes'>
Oct 09 16:02:03 compute-0 nova_compute[117331]:       <enum name='backendType'>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <value>default</value>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <value>passt</value>
Oct 09 16:02:03 compute-0 nova_compute[117331]:       </enum>
Oct 09 16:02:03 compute-0 nova_compute[117331]:     </interface>
Oct 09 16:02:03 compute-0 nova_compute[117331]:     <panic supported='yes'>
Oct 09 16:02:03 compute-0 nova_compute[117331]:       <enum name='model'>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <value>isa</value>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <value>hyperv</value>
Oct 09 16:02:03 compute-0 nova_compute[117331]:       </enum>
Oct 09 16:02:03 compute-0 nova_compute[117331]:     </panic>
Oct 09 16:02:03 compute-0 nova_compute[117331]:   </devices>
Oct 09 16:02:03 compute-0 nova_compute[117331]:   <features>
Oct 09 16:02:03 compute-0 nova_compute[117331]:     <gic supported='no'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:     <vmcoreinfo supported='yes'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:     <genid supported='yes'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:     <backingStoreInput supported='yes'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:     <backup supported='yes'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:     <async-teardown supported='yes'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:     <ps2 supported='yes'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:     <sev supported='no'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:     <sgx supported='no'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:     <hyperv supported='yes'>
Oct 09 16:02:03 compute-0 nova_compute[117331]:       <enum name='features'>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <value>relaxed</value>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <value>vapic</value>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <value>spinlocks</value>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <value>vpindex</value>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <value>runtime</value>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <value>synic</value>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <value>stimer</value>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <value>reset</value>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <value>vendor_id</value>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <value>frequencies</value>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <value>reenlightenment</value>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <value>tlbflush</value>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <value>ipi</value>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <value>avic</value>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <value>emsr_bitmap</value>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <value>xmm_input</value>
Oct 09 16:02:03 compute-0 nova_compute[117331]:       </enum>
Oct 09 16:02:03 compute-0 nova_compute[117331]:     </hyperv>
Oct 09 16:02:03 compute-0 nova_compute[117331]:     <launchSecurity supported='no'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:   </features>
Oct 09 16:02:03 compute-0 nova_compute[117331]: </domainCapabilities>
Oct 09 16:02:03 compute-0 nova_compute[117331]:  _get_domain_capabilities /usr/lib/python3.12/site-packages/nova/virt/libvirt/host.py:1029
Oct 09 16:02:03 compute-0 nova_compute[117331]: 2025-10-09 16:02:03.400 2 DEBUG nova.virt.libvirt.host [None req-d385ee3e-f913-4a9a-bb24-31bd7762fadb - - - - - -] Libvirt host hypervisor capabilities for arch=i686 and machine_type=q35:
Oct 09 16:02:03 compute-0 nova_compute[117331]: <domainCapabilities>
Oct 09 16:02:03 compute-0 nova_compute[117331]:   <path>/usr/libexec/qemu-kvm</path>
Oct 09 16:02:03 compute-0 nova_compute[117331]:   <domain>kvm</domain>
Oct 09 16:02:03 compute-0 nova_compute[117331]:   <machine>pc-q35-rhel9.6.0</machine>
Oct 09 16:02:03 compute-0 nova_compute[117331]:   <arch>i686</arch>
Oct 09 16:02:03 compute-0 nova_compute[117331]:   <vcpu max='4096'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:   <iothreads supported='yes'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:   <os supported='yes'>
Oct 09 16:02:03 compute-0 nova_compute[117331]:     <enum name='firmware'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:     <loader supported='yes'>
Oct 09 16:02:03 compute-0 nova_compute[117331]:       <value>/usr/share/OVMF/OVMF_CODE.secboot.fd</value>
Oct 09 16:02:03 compute-0 nova_compute[117331]:       <enum name='type'>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <value>rom</value>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <value>pflash</value>
Oct 09 16:02:03 compute-0 nova_compute[117331]:       </enum>
Oct 09 16:02:03 compute-0 nova_compute[117331]:       <enum name='readonly'>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <value>yes</value>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <value>no</value>
Oct 09 16:02:03 compute-0 nova_compute[117331]:       </enum>
Oct 09 16:02:03 compute-0 nova_compute[117331]:       <enum name='secure'>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <value>no</value>
Oct 09 16:02:03 compute-0 nova_compute[117331]:       </enum>
Oct 09 16:02:03 compute-0 nova_compute[117331]:     </loader>
Oct 09 16:02:03 compute-0 nova_compute[117331]:   </os>
Oct 09 16:02:03 compute-0 nova_compute[117331]:   <cpu>
Oct 09 16:02:03 compute-0 nova_compute[117331]:     <mode name='host-passthrough' supported='yes'>
Oct 09 16:02:03 compute-0 nova_compute[117331]:       <enum name='hostPassthroughMigratable'>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <value>on</value>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <value>off</value>
Oct 09 16:02:03 compute-0 nova_compute[117331]:       </enum>
Oct 09 16:02:03 compute-0 nova_compute[117331]:     </mode>
Oct 09 16:02:03 compute-0 nova_compute[117331]:     <mode name='maximum' supported='yes'>
Oct 09 16:02:03 compute-0 nova_compute[117331]:       <enum name='maximumMigratable'>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <value>on</value>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <value>off</value>
Oct 09 16:02:03 compute-0 nova_compute[117331]:       </enum>
Oct 09 16:02:03 compute-0 nova_compute[117331]:     </mode>
Oct 09 16:02:03 compute-0 nova_compute[117331]:     <mode name='host-model' supported='yes'>
Oct 09 16:02:03 compute-0 nova_compute[117331]:       <model fallback='forbid'>EPYC-Rome</model>
Oct 09 16:02:03 compute-0 nova_compute[117331]:       <vendor>AMD</vendor>
Oct 09 16:02:03 compute-0 nova_compute[117331]:       <maxphysaddr mode='passthrough' limit='40'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:       <feature policy='require' name='x2apic'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:       <feature policy='require' name='tsc-deadline'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:       <feature policy='require' name='hypervisor'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:       <feature policy='require' name='tsc_adjust'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:       <feature policy='require' name='spec-ctrl'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:       <feature policy='require' name='stibp'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:       <feature policy='require' name='arch-capabilities'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:       <feature policy='require' name='ssbd'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:       <feature policy='require' name='cmp_legacy'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:       <feature policy='require' name='overflow-recov'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:       <feature policy='require' name='succor'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:       <feature policy='require' name='ibrs'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:       <feature policy='require' name='amd-ssbd'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:       <feature policy='require' name='virt-ssbd'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:       <feature policy='require' name='lbrv'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:       <feature policy='require' name='tsc-scale'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:       <feature policy='require' name='vmcb-clean'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:       <feature policy='require' name='flushbyasid'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:       <feature policy='require' name='pause-filter'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:       <feature policy='require' name='pfthreshold'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:       <feature policy='require' name='svme-addr-chk'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:       <feature policy='require' name='lfence-always-serializing'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:       <feature policy='require' name='rdctl-no'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:       <feature policy='require' name='skip-l1dfl-vmentry'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:       <feature policy='require' name='mds-no'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:       <feature policy='require' name='pschange-mc-no'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:       <feature policy='require' name='gds-no'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:       <feature policy='require' name='rfds-no'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:       <feature policy='disable' name='xsaves'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:     </mode>
Oct 09 16:02:03 compute-0 nova_compute[117331]:     <mode name='custom' supported='yes'>
Oct 09 16:02:03 compute-0 nova_compute[117331]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='486-v1'>486</model>
Oct 09 16:02:03 compute-0 nova_compute[117331]:       <model usable='yes' deprecated='yes' vendor='unknown'>486-v1</model>
Oct 09 16:02:03 compute-0 nova_compute[117331]:       <model usable='no' vendor='Intel' canonical='Broadwell-v1'>Broadwell</model>
Oct 09 16:02:03 compute-0 nova_compute[117331]:       <blockers model='Broadwell'>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='erms'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='hle'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='invpcid'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='pcid'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='rtm'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:       </blockers>
Oct 09 16:02:03 compute-0 nova_compute[117331]:       <model usable='no' vendor='Intel' canonical='Broadwell-v3'>Broadwell-IBRS</model>
Oct 09 16:02:03 compute-0 nova_compute[117331]:       <blockers model='Broadwell-IBRS'>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='erms'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='hle'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='invpcid'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='pcid'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='rtm'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:       </blockers>
Oct 09 16:02:03 compute-0 nova_compute[117331]:       <model usable='no' vendor='Intel' canonical='Broadwell-v2'>Broadwell-noTSX</model>
Oct 09 16:02:03 compute-0 nova_compute[117331]:       <blockers model='Broadwell-noTSX'>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='erms'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='invpcid'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='pcid'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:       </blockers>
Oct 09 16:02:03 compute-0 nova_compute[117331]:       <model usable='no' vendor='Intel' canonical='Broadwell-v4'>Broadwell-noTSX-IBRS</model>
Oct 09 16:02:03 compute-0 nova_compute[117331]:       <blockers model='Broadwell-noTSX-IBRS'>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='erms'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='invpcid'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='pcid'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:       </blockers>
Oct 09 16:02:03 compute-0 nova_compute[117331]:       <model usable='no' vendor='Intel'>Broadwell-v1</model>
Oct 09 16:02:03 compute-0 nova_compute[117331]:       <blockers model='Broadwell-v1'>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='erms'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='hle'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='invpcid'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='pcid'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='rtm'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:       </blockers>
Oct 09 16:02:03 compute-0 nova_compute[117331]:       <model usable='no' vendor='Intel'>Broadwell-v2</model>
Oct 09 16:02:03 compute-0 nova_compute[117331]:       <blockers model='Broadwell-v2'>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='erms'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='invpcid'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='pcid'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:       </blockers>
Oct 09 16:02:03 compute-0 nova_compute[117331]:       <model usable='no' vendor='Intel'>Broadwell-v3</model>
Oct 09 16:02:03 compute-0 nova_compute[117331]:       <blockers model='Broadwell-v3'>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='erms'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='hle'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='invpcid'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='pcid'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='rtm'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:       </blockers>
Oct 09 16:02:03 compute-0 nova_compute[117331]:       <model usable='no' vendor='Intel'>Broadwell-v4</model>
Oct 09 16:02:03 compute-0 nova_compute[117331]:       <blockers model='Broadwell-v4'>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='erms'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='invpcid'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='pcid'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:       </blockers>
Oct 09 16:02:03 compute-0 nova_compute[117331]:       <model usable='no' vendor='Intel' canonical='Cascadelake-Server-v1'>Cascadelake-Server</model>
Oct 09 16:02:03 compute-0 nova_compute[117331]:       <blockers model='Cascadelake-Server'>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='avx512bw'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='avx512cd'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='avx512dq'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='avx512f'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='avx512vl'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='avx512vnni'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='erms'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='hle'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='invpcid'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='pcid'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='pku'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='rtm'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:       </blockers>
Oct 09 16:02:03 compute-0 nova_compute[117331]:       <model usable='no' vendor='Intel' canonical='Cascadelake-Server-v3'>Cascadelake-Server-noTSX</model>
Oct 09 16:02:03 compute-0 nova_compute[117331]:       <blockers model='Cascadelake-Server-noTSX'>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='avx512bw'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='avx512cd'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='avx512dq'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='avx512f'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='avx512vl'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='avx512vnni'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='erms'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='ibrs-all'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='invpcid'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='pcid'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='pku'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:       </blockers>
Oct 09 16:02:03 compute-0 nova_compute[117331]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v1</model>
Oct 09 16:02:03 compute-0 nova_compute[117331]:       <blockers model='Cascadelake-Server-v1'>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='avx512bw'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='avx512cd'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='avx512dq'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='avx512f'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='avx512vl'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='avx512vnni'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='erms'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='hle'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='invpcid'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='pcid'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='pku'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='rtm'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:       </blockers>
Oct 09 16:02:03 compute-0 nova_compute[117331]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v2</model>
Oct 09 16:02:03 compute-0 nova_compute[117331]:       <blockers model='Cascadelake-Server-v2'>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='avx512bw'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='avx512cd'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='avx512dq'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='avx512f'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='avx512vl'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='avx512vnni'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='erms'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='hle'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='ibrs-all'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='invpcid'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='pcid'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='pku'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='rtm'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:       </blockers>
Oct 09 16:02:03 compute-0 nova_compute[117331]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v3</model>
Oct 09 16:02:03 compute-0 nova_compute[117331]:       <blockers model='Cascadelake-Server-v3'>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='avx512bw'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='avx512cd'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='avx512dq'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='avx512f'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='avx512vl'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='avx512vnni'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='erms'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='ibrs-all'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='invpcid'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='pcid'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='pku'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:       </blockers>
Oct 09 16:02:03 compute-0 nova_compute[117331]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v4</model>
Oct 09 16:02:03 compute-0 nova_compute[117331]:       <blockers model='Cascadelake-Server-v4'>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='avx512bw'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='avx512cd'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='avx512dq'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='avx512f'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='avx512vl'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='avx512vnni'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='erms'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='ibrs-all'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='invpcid'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='pcid'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='pku'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:       </blockers>
Oct 09 16:02:03 compute-0 nova_compute[117331]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v5</model>
Oct 09 16:02:03 compute-0 nova_compute[117331]:       <blockers model='Cascadelake-Server-v5'>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='avx512bw'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='avx512cd'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='avx512dq'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='avx512f'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='avx512vl'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='avx512vnni'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='erms'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='ibrs-all'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='invpcid'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='pcid'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='pku'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='xsaves'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:       </blockers>
Oct 09 16:02:03 compute-0 nova_compute[117331]:       <model usable='yes' deprecated='yes' vendor='Intel' canonical='Conroe-v1'>Conroe</model>
Oct 09 16:02:03 compute-0 nova_compute[117331]:       <model usable='yes' deprecated='yes' vendor='Intel'>Conroe-v1</model>
Oct 09 16:02:03 compute-0 nova_compute[117331]:       <model usable='no' vendor='Intel' canonical='Cooperlake-v1'>Cooperlake</model>
Oct 09 16:02:03 compute-0 nova_compute[117331]:       <blockers model='Cooperlake'>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='avx512-bf16'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='avx512bw'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='avx512cd'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='avx512dq'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='avx512f'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='avx512vl'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='avx512vnni'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='erms'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='hle'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='ibrs-all'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='invpcid'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='pcid'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='pku'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='rtm'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='taa-no'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:       </blockers>
Oct 09 16:02:03 compute-0 nova_compute[117331]:       <model usable='no' vendor='Intel'>Cooperlake-v1</model>
Oct 09 16:02:03 compute-0 nova_compute[117331]:       <blockers model='Cooperlake-v1'>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='avx512-bf16'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='avx512bw'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='avx512cd'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='avx512dq'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='avx512f'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='avx512vl'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='avx512vnni'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='erms'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='hle'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='ibrs-all'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='invpcid'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='pcid'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='pku'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='rtm'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='taa-no'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:       </blockers>
Oct 09 16:02:03 compute-0 nova_compute[117331]:       <model usable='no' vendor='Intel'>Cooperlake-v2</model>
Oct 09 16:02:03 compute-0 nova_compute[117331]:       <blockers model='Cooperlake-v2'>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='avx512-bf16'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='avx512bw'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='avx512cd'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='avx512dq'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='avx512f'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='avx512vl'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='avx512vnni'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='erms'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='hle'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='ibrs-all'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='invpcid'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='pcid'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='pku'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='rtm'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='taa-no'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='xsaves'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:       </blockers>
Oct 09 16:02:03 compute-0 nova_compute[117331]:       <model usable='no' vendor='Intel' canonical='Denverton-v1'>Denverton</model>
Oct 09 16:02:03 compute-0 nova_compute[117331]:       <blockers model='Denverton'>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='erms'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='mpx'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:       </blockers>
Oct 09 16:02:03 compute-0 nova_compute[117331]:       <model usable='no' vendor='Intel'>Denverton-v1</model>
Oct 09 16:02:03 compute-0 nova_compute[117331]:       <blockers model='Denverton-v1'>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='erms'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='mpx'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:       </blockers>
Oct 09 16:02:03 compute-0 nova_compute[117331]:       <model usable='no' vendor='Intel'>Denverton-v2</model>
Oct 09 16:02:03 compute-0 nova_compute[117331]:       <blockers model='Denverton-v2'>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='erms'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:       </blockers>
Oct 09 16:02:03 compute-0 nova_compute[117331]:       <model usable='no' vendor='Intel'>Denverton-v3</model>
Oct 09 16:02:03 compute-0 nova_compute[117331]:       <blockers model='Denverton-v3'>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='erms'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='xsaves'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:       </blockers>
Oct 09 16:02:03 compute-0 nova_compute[117331]:       <model usable='yes' vendor='Hygon' canonical='Dhyana-v1'>Dhyana</model>
Oct 09 16:02:03 compute-0 nova_compute[117331]:       <model usable='yes' vendor='Hygon'>Dhyana-v1</model>
Oct 09 16:02:03 compute-0 nova_compute[117331]:       <model usable='no' vendor='Hygon'>Dhyana-v2</model>
Oct 09 16:02:03 compute-0 nova_compute[117331]:       <blockers model='Dhyana-v2'>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='xsaves'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:       </blockers>
Oct 09 16:02:03 compute-0 nova_compute[117331]:       <model usable='yes' vendor='AMD' canonical='EPYC-v1'>EPYC</model>
Oct 09 16:02:03 compute-0 nova_compute[117331]:       <model usable='no' vendor='AMD' canonical='EPYC-Genoa-v1'>EPYC-Genoa</model>
Oct 09 16:02:03 compute-0 nova_compute[117331]:       <blockers model='EPYC-Genoa'>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='amd-psfd'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='auto-ibrs'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='avx512-bf16'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='avx512-vpopcntdq'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='avx512bitalg'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='avx512bw'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='avx512cd'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='avx512dq'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='avx512f'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='avx512ifma'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='avx512vbmi'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='avx512vbmi2'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='avx512vl'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='avx512vnni'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='erms'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='fsrm'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='gfni'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='invpcid'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='la57'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='no-nested-data-bp'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='null-sel-clr-base'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='pcid'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='pku'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='stibp-always-on'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='vaes'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='vpclmulqdq'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='xsaves'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:       </blockers>
Oct 09 16:02:03 compute-0 nova_compute[117331]:       <model usable='no' vendor='AMD'>EPYC-Genoa-v1</model>
Oct 09 16:02:03 compute-0 nova_compute[117331]:       <blockers model='EPYC-Genoa-v1'>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='amd-psfd'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='auto-ibrs'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='avx512-bf16'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='avx512-vpopcntdq'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='avx512bitalg'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='avx512bw'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='avx512cd'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='avx512dq'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='avx512f'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='avx512ifma'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='avx512vbmi'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='avx512vbmi2'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='avx512vl'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='avx512vnni'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='erms'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='fsrm'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='gfni'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='invpcid'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='la57'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='no-nested-data-bp'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='null-sel-clr-base'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='pcid'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='pku'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='stibp-always-on'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='vaes'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='vpclmulqdq'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='xsaves'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:       </blockers>
Oct 09 16:02:03 compute-0 nova_compute[117331]:       <model usable='yes' vendor='AMD' canonical='EPYC-v2'>EPYC-IBPB</model>
Oct 09 16:02:03 compute-0 nova_compute[117331]:       <model usable='no' vendor='AMD' canonical='EPYC-Milan-v1'>EPYC-Milan</model>
Oct 09 16:02:03 compute-0 nova_compute[117331]:       <blockers model='EPYC-Milan'>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='erms'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='fsrm'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='invpcid'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='pcid'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='pku'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='xsaves'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:       </blockers>
Oct 09 16:02:03 compute-0 nova_compute[117331]:       <model usable='no' vendor='AMD'>EPYC-Milan-v1</model>
Oct 09 16:02:03 compute-0 nova_compute[117331]:       <blockers model='EPYC-Milan-v1'>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='erms'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='fsrm'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='invpcid'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='pcid'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='pku'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='xsaves'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:       </blockers>
Oct 09 16:02:03 compute-0 nova_compute[117331]:       <model usable='no' vendor='AMD'>EPYC-Milan-v2</model>
Oct 09 16:02:03 compute-0 nova_compute[117331]:       <blockers model='EPYC-Milan-v2'>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='amd-psfd'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='erms'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='fsrm'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='invpcid'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='no-nested-data-bp'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='null-sel-clr-base'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='pcid'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='pku'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='stibp-always-on'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='vaes'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='vpclmulqdq'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='xsaves'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:       </blockers>
Oct 09 16:02:03 compute-0 nova_compute[117331]:       <model usable='no' vendor='AMD' canonical='EPYC-Rome-v1'>EPYC-Rome</model>
Oct 09 16:02:03 compute-0 nova_compute[117331]:       <blockers model='EPYC-Rome'>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='xsaves'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:       </blockers>
Oct 09 16:02:03 compute-0 nova_compute[117331]:       <model usable='no' vendor='AMD'>EPYC-Rome-v1</model>
Oct 09 16:02:03 compute-0 nova_compute[117331]:       <blockers model='EPYC-Rome-v1'>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='xsaves'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:       </blockers>
Oct 09 16:02:03 compute-0 nova_compute[117331]:       <model usable='no' vendor='AMD'>EPYC-Rome-v2</model>
Oct 09 16:02:03 compute-0 nova_compute[117331]:       <blockers model='EPYC-Rome-v2'>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='xsaves'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:       </blockers>
Oct 09 16:02:03 compute-0 nova_compute[117331]:       <model usable='no' vendor='AMD'>EPYC-Rome-v3</model>
Oct 09 16:02:03 compute-0 nova_compute[117331]:       <blockers model='EPYC-Rome-v3'>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='xsaves'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:       </blockers>
Oct 09 16:02:03 compute-0 nova_compute[117331]:       <model usable='yes' vendor='AMD'>EPYC-Rome-v4</model>
Oct 09 16:02:03 compute-0 nova_compute[117331]:       <model usable='yes' vendor='AMD'>EPYC-v1</model>
Oct 09 16:02:03 compute-0 nova_compute[117331]:       <model usable='yes' vendor='AMD'>EPYC-v2</model>
Oct 09 16:02:03 compute-0 nova_compute[117331]:       <model usable='no' vendor='AMD'>EPYC-v3</model>
Oct 09 16:02:03 compute-0 nova_compute[117331]:       <blockers model='EPYC-v3'>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='xsaves'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:       </blockers>
Oct 09 16:02:03 compute-0 nova_compute[117331]:       <model usable='no' vendor='AMD'>EPYC-v4</model>
Oct 09 16:02:03 compute-0 nova_compute[117331]:       <blockers model='EPYC-v4'>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='xsaves'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:       </blockers>
Oct 09 16:02:03 compute-0 nova_compute[117331]:       <model usable='no' vendor='Intel' canonical='GraniteRapids-v1'>GraniteRapids</model>
Oct 09 16:02:03 compute-0 nova_compute[117331]:       <blockers model='GraniteRapids'>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='amx-bf16'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='amx-fp16'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='amx-int8'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='amx-tile'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='avx-vnni'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='avx512-bf16'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='avx512-fp16'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='avx512-vpopcntdq'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='avx512bitalg'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='avx512bw'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='avx512cd'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='avx512dq'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='avx512f'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='avx512ifma'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='avx512vbmi'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='avx512vbmi2'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='avx512vl'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='avx512vnni'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='bus-lock-detect'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='erms'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='fbsdp-no'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='fsrc'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='fsrm'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='fsrs'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='fzrm'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='gfni'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='hle'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='ibrs-all'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='invpcid'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='la57'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='mcdt-no'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='pbrsb-no'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='pcid'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='pku'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='prefetchiti'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='psdp-no'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='rtm'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='sbdr-ssdp-no'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='serialize'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='taa-no'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='tsx-ldtrk'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='vaes'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='vpclmulqdq'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='xfd'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='xsaves'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:       </blockers>
Oct 09 16:02:03 compute-0 nova_compute[117331]:       <model usable='no' vendor='Intel'>GraniteRapids-v1</model>
Oct 09 16:02:03 compute-0 nova_compute[117331]:       <blockers model='GraniteRapids-v1'>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='amx-bf16'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='amx-fp16'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='amx-int8'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='amx-tile'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='avx-vnni'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='avx512-bf16'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='avx512-fp16'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='avx512-vpopcntdq'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='avx512bitalg'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='avx512bw'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='avx512cd'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='avx512dq'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='avx512f'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='avx512ifma'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='avx512vbmi'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='avx512vbmi2'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='avx512vl'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='avx512vnni'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='bus-lock-detect'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='erms'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='fbsdp-no'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='fsrc'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='fsrm'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='fsrs'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='fzrm'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='gfni'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='hle'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='ibrs-all'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='invpcid'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='la57'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='mcdt-no'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='pbrsb-no'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='pcid'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='pku'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='prefetchiti'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='psdp-no'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='rtm'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='sbdr-ssdp-no'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='serialize'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='taa-no'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='tsx-ldtrk'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='vaes'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='vpclmulqdq'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='xfd'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='xsaves'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:       </blockers>
Oct 09 16:02:03 compute-0 nova_compute[117331]:       <model usable='no' vendor='Intel'>GraniteRapids-v2</model>
Oct 09 16:02:03 compute-0 nova_compute[117331]:       <blockers model='GraniteRapids-v2'>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='amx-bf16'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='amx-fp16'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='amx-int8'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='amx-tile'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='avx-vnni'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='avx10'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='avx10-128'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='avx10-256'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='avx10-512'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='avx512-bf16'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='avx512-fp16'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='avx512-vpopcntdq'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='avx512bitalg'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='avx512bw'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='avx512cd'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='avx512dq'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='avx512f'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='avx512ifma'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='avx512vbmi'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='avx512vbmi2'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='avx512vl'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='avx512vnni'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='bus-lock-detect'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='cldemote'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='erms'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='fbsdp-no'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='fsrc'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='fsrm'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='fsrs'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='fzrm'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='gfni'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='hle'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='ibrs-all'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='invpcid'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='la57'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='mcdt-no'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='movdir64b'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='movdiri'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='pbrsb-no'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='pcid'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='pku'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='prefetchiti'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='psdp-no'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='rtm'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='sbdr-ssdp-no'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='serialize'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='ss'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='taa-no'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='tsx-ldtrk'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='vaes'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='vpclmulqdq'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='xfd'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='xsaves'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:       </blockers>
Oct 09 16:02:03 compute-0 nova_compute[117331]:       <model usable='no' vendor='Intel' canonical='Haswell-v1'>Haswell</model>
Oct 09 16:02:03 compute-0 nova_compute[117331]:       <blockers model='Haswell'>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='erms'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='hle'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='invpcid'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='pcid'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='rtm'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:       </blockers>
Oct 09 16:02:03 compute-0 nova_compute[117331]:       <model usable='no' vendor='Intel' canonical='Haswell-v3'>Haswell-IBRS</model>
Oct 09 16:02:03 compute-0 nova_compute[117331]:       <blockers model='Haswell-IBRS'>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='erms'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='hle'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='invpcid'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='pcid'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='rtm'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:       </blockers>
Oct 09 16:02:03 compute-0 nova_compute[117331]:       <model usable='no' vendor='Intel' canonical='Haswell-v2'>Haswell-noTSX</model>
Oct 09 16:02:03 compute-0 nova_compute[117331]:       <blockers model='Haswell-noTSX'>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='erms'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='invpcid'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='pcid'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:       </blockers>
Oct 09 16:02:03 compute-0 nova_compute[117331]:       <model usable='no' vendor='Intel' canonical='Haswell-v4'>Haswell-noTSX-IBRS</model>
Oct 09 16:02:03 compute-0 nova_compute[117331]:       <blockers model='Haswell-noTSX-IBRS'>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='erms'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='invpcid'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='pcid'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:       </blockers>
Oct 09 16:02:03 compute-0 nova_compute[117331]:       <model usable='no' vendor='Intel'>Haswell-v1</model>
Oct 09 16:02:03 compute-0 nova_compute[117331]:       <blockers model='Haswell-v1'>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='erms'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='hle'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='invpcid'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='pcid'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='rtm'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:       </blockers>
Oct 09 16:02:03 compute-0 nova_compute[117331]:       <model usable='no' vendor='Intel'>Haswell-v2</model>
Oct 09 16:02:03 compute-0 nova_compute[117331]:       <blockers model='Haswell-v2'>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='erms'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='invpcid'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='pcid'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:       </blockers>
Oct 09 16:02:03 compute-0 nova_compute[117331]:       <model usable='no' vendor='Intel'>Haswell-v3</model>
Oct 09 16:02:03 compute-0 nova_compute[117331]:       <blockers model='Haswell-v3'>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='erms'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='hle'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='invpcid'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='pcid'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='rtm'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:       </blockers>
Oct 09 16:02:03 compute-0 nova_compute[117331]:       <model usable='no' vendor='Intel'>Haswell-v4</model>
Oct 09 16:02:03 compute-0 nova_compute[117331]:       <blockers model='Haswell-v4'>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='erms'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='invpcid'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='pcid'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:       </blockers>
Oct 09 16:02:03 compute-0 nova_compute[117331]:       <model usable='no' vendor='Intel' canonical='Icelake-Server-v1'>Icelake-Server</model>
Oct 09 16:02:03 compute-0 nova_compute[117331]:       <blockers model='Icelake-Server'>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='avx512-vpopcntdq'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='avx512bitalg'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='avx512bw'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='avx512cd'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='avx512dq'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='avx512f'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='avx512vbmi'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='avx512vbmi2'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='avx512vl'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='avx512vnni'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='erms'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='gfni'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='hle'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='invpcid'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='la57'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='pcid'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='pku'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='rtm'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='vaes'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='vpclmulqdq'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:       </blockers>
Oct 09 16:02:03 compute-0 nova_compute[117331]:       <model usable='no' vendor='Intel' canonical='Icelake-Server-v2'>Icelake-Server-noTSX</model>
Oct 09 16:02:03 compute-0 nova_compute[117331]:       <blockers model='Icelake-Server-noTSX'>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='avx512-vpopcntdq'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='avx512bitalg'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='avx512bw'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='avx512cd'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='avx512dq'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='avx512f'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='avx512vbmi'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='avx512vbmi2'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='avx512vl'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='avx512vnni'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='erms'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='gfni'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='invpcid'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='la57'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='pcid'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='pku'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='vaes'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='vpclmulqdq'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:       </blockers>
Oct 09 16:02:03 compute-0 nova_compute[117331]:       <model usable='no' vendor='Intel'>Icelake-Server-v1</model>
Oct 09 16:02:03 compute-0 nova_compute[117331]:       <blockers model='Icelake-Server-v1'>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='avx512-vpopcntdq'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='avx512bitalg'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='avx512bw'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='avx512cd'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='avx512dq'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='avx512f'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='avx512vbmi'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='avx512vbmi2'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='avx512vl'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='avx512vnni'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='erms'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='gfni'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='hle'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='invpcid'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='la57'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='pcid'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='pku'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='rtm'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='vaes'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='vpclmulqdq'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:       </blockers>
Oct 09 16:02:03 compute-0 nova_compute[117331]:       <model usable='no' vendor='Intel'>Icelake-Server-v2</model>
Oct 09 16:02:03 compute-0 nova_compute[117331]:       <blockers model='Icelake-Server-v2'>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='avx512-vpopcntdq'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='avx512bitalg'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='avx512bw'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='avx512cd'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='avx512dq'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='avx512f'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='avx512vbmi'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='avx512vbmi2'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='avx512vl'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='avx512vnni'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='erms'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='gfni'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='invpcid'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='la57'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='pcid'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='pku'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='vaes'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='vpclmulqdq'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:       </blockers>
Oct 09 16:02:03 compute-0 nova_compute[117331]:       <model usable='no' vendor='Intel'>Icelake-Server-v3</model>
Oct 09 16:02:03 compute-0 nova_compute[117331]:       <blockers model='Icelake-Server-v3'>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='avx512-vpopcntdq'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='avx512bitalg'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='avx512bw'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='avx512cd'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='avx512dq'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='avx512f'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='avx512vbmi'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='avx512vbmi2'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='avx512vl'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='avx512vnni'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='erms'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='gfni'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='ibrs-all'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='invpcid'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='la57'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='pcid'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='pku'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='taa-no'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='vaes'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='vpclmulqdq'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:       </blockers>
Oct 09 16:02:03 compute-0 nova_compute[117331]:       <model usable='no' vendor='Intel'>Icelake-Server-v4</model>
Oct 09 16:02:03 compute-0 nova_compute[117331]:       <blockers model='Icelake-Server-v4'>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='avx512-vpopcntdq'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='avx512bitalg'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='avx512bw'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='avx512cd'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='avx512dq'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='avx512f'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='avx512ifma'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='avx512vbmi'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='avx512vbmi2'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='avx512vl'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='avx512vnni'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='erms'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='fsrm'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='gfni'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='ibrs-all'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='invpcid'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='la57'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='pcid'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='pku'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='taa-no'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='vaes'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='vpclmulqdq'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:       </blockers>
Oct 09 16:02:03 compute-0 nova_compute[117331]:       <model usable='no' vendor='Intel'>Icelake-Server-v5</model>
Oct 09 16:02:03 compute-0 nova_compute[117331]:       <blockers model='Icelake-Server-v5'>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='avx512-vpopcntdq'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='avx512bitalg'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='avx512bw'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='avx512cd'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='avx512dq'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='avx512f'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='avx512ifma'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='avx512vbmi'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='avx512vbmi2'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='avx512vl'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='avx512vnni'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='erms'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='fsrm'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='gfni'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='ibrs-all'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='invpcid'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='la57'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='pcid'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='pku'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='taa-no'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='vaes'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='vpclmulqdq'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='xsaves'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:       </blockers>
Oct 09 16:02:03 compute-0 nova_compute[117331]:       <model usable='no' vendor='Intel'>Icelake-Server-v6</model>
Oct 09 16:02:03 compute-0 nova_compute[117331]:       <blockers model='Icelake-Server-v6'>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='avx512-vpopcntdq'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='avx512bitalg'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='avx512bw'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='avx512cd'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='avx512dq'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='avx512f'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='avx512ifma'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='avx512vbmi'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='avx512vbmi2'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='avx512vl'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='avx512vnni'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='erms'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='fsrm'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='gfni'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='ibrs-all'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='invpcid'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='la57'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='pcid'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='pku'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='taa-no'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='vaes'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='vpclmulqdq'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='xsaves'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:       </blockers>
Oct 09 16:02:03 compute-0 nova_compute[117331]:       <model usable='no' vendor='Intel'>Icelake-Server-v7</model>
Oct 09 16:02:03 compute-0 nova_compute[117331]:       <blockers model='Icelake-Server-v7'>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='avx512-vpopcntdq'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='avx512bitalg'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='avx512bw'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='avx512cd'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='avx512dq'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='avx512f'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='avx512ifma'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='avx512vbmi'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='avx512vbmi2'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='avx512vl'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='avx512vnni'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='erms'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='fsrm'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='gfni'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='hle'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='ibrs-all'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='invpcid'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='la57'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='pcid'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='pku'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='rtm'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='taa-no'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='vaes'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='vpclmulqdq'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='xsaves'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:       </blockers>
Oct 09 16:02:03 compute-0 nova_compute[117331]:       <model usable='no' vendor='Intel' canonical='IvyBridge-v1'>IvyBridge</model>
Oct 09 16:02:03 compute-0 nova_compute[117331]:       <blockers model='IvyBridge'>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='erms'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:       </blockers>
Oct 09 16:02:03 compute-0 nova_compute[117331]:       <model usable='no' vendor='Intel' canonical='IvyBridge-v2'>IvyBridge-IBRS</model>
Oct 09 16:02:03 compute-0 nova_compute[117331]:       <blockers model='IvyBridge-IBRS'>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='erms'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:       </blockers>
Oct 09 16:02:03 compute-0 nova_compute[117331]:       <model usable='no' vendor='Intel'>IvyBridge-v1</model>
Oct 09 16:02:03 compute-0 nova_compute[117331]:       <blockers model='IvyBridge-v1'>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='erms'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:       </blockers>
Oct 09 16:02:03 compute-0 nova_compute[117331]:       <model usable='no' vendor='Intel'>IvyBridge-v2</model>
Oct 09 16:02:03 compute-0 nova_compute[117331]:       <blockers model='IvyBridge-v2'>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='erms'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:       </blockers>
Oct 09 16:02:03 compute-0 nova_compute[117331]:       <model usable='no' vendor='Intel' canonical='KnightsMill-v1'>KnightsMill</model>
Oct 09 16:02:03 compute-0 nova_compute[117331]:       <blockers model='KnightsMill'>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='avx512-4fmaps'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='avx512-4vnniw'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='avx512-vpopcntdq'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='avx512cd'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='avx512er'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='avx512f'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='avx512pf'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='erms'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='ss'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:       </blockers>
Oct 09 16:02:03 compute-0 nova_compute[117331]:       <model usable='no' vendor='Intel'>KnightsMill-v1</model>
Oct 09 16:02:03 compute-0 nova_compute[117331]:       <blockers model='KnightsMill-v1'>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='avx512-4fmaps'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='avx512-4vnniw'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='avx512-vpopcntdq'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='avx512cd'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='avx512er'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='avx512f'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='avx512pf'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='erms'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='ss'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:       </blockers>
Oct 09 16:02:03 compute-0 nova_compute[117331]:       <model usable='yes' vendor='Intel' canonical='Nehalem-v1'>Nehalem</model>
Oct 09 16:02:03 compute-0 nova_compute[117331]:       <model usable='yes' vendor='Intel' canonical='Nehalem-v2'>Nehalem-IBRS</model>
Oct 09 16:02:03 compute-0 nova_compute[117331]:       <model usable='yes' vendor='Intel'>Nehalem-v1</model>
Oct 09 16:02:03 compute-0 nova_compute[117331]:       <model usable='yes' vendor='Intel'>Nehalem-v2</model>
Oct 09 16:02:03 compute-0 nova_compute[117331]:       <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G1-v1'>Opteron_G1</model>
Oct 09 16:02:03 compute-0 nova_compute[117331]:       <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G1-v1</model>
Oct 09 16:02:03 compute-0 nova_compute[117331]:       <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G2-v1'>Opteron_G2</model>
Oct 09 16:02:03 compute-0 nova_compute[117331]:       <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G2-v1</model>
Oct 09 16:02:03 compute-0 nova_compute[117331]:       <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G3-v1'>Opteron_G3</model>
Oct 09 16:02:03 compute-0 nova_compute[117331]:       <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G3-v1</model>
Oct 09 16:02:03 compute-0 nova_compute[117331]:       <model usable='no' vendor='AMD' canonical='Opteron_G4-v1'>Opteron_G4</model>
Oct 09 16:02:03 compute-0 nova_compute[117331]:       <blockers model='Opteron_G4'>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='fma4'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='xop'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:       </blockers>
Oct 09 16:02:03 compute-0 nova_compute[117331]:       <model usable='no' vendor='AMD'>Opteron_G4-v1</model>
Oct 09 16:02:03 compute-0 nova_compute[117331]:       <blockers model='Opteron_G4-v1'>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='fma4'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='xop'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:       </blockers>
Oct 09 16:02:03 compute-0 nova_compute[117331]:       <model usable='no' vendor='AMD' canonical='Opteron_G5-v1'>Opteron_G5</model>
Oct 09 16:02:03 compute-0 nova_compute[117331]:       <blockers model='Opteron_G5'>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='fma4'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='tbm'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='xop'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:       </blockers>
Oct 09 16:02:03 compute-0 nova_compute[117331]:       <model usable='no' vendor='AMD'>Opteron_G5-v1</model>
Oct 09 16:02:03 compute-0 nova_compute[117331]:       <blockers model='Opteron_G5-v1'>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='fma4'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='tbm'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='xop'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:       </blockers>
Oct 09 16:02:03 compute-0 nova_compute[117331]:       <model usable='yes' deprecated='yes' vendor='Intel' canonical='Penryn-v1'>Penryn</model>
Oct 09 16:02:03 compute-0 nova_compute[117331]:       <model usable='yes' deprecated='yes' vendor='Intel'>Penryn-v1</model>
Oct 09 16:02:03 compute-0 nova_compute[117331]:       <model usable='yes' vendor='Intel' canonical='SandyBridge-v1'>SandyBridge</model>
Oct 09 16:02:03 compute-0 nova_compute[117331]:       <model usable='yes' vendor='Intel' canonical='SandyBridge-v2'>SandyBridge-IBRS</model>
Oct 09 16:02:03 compute-0 nova_compute[117331]:       <model usable='yes' vendor='Intel'>SandyBridge-v1</model>
Oct 09 16:02:03 compute-0 nova_compute[117331]:       <model usable='yes' vendor='Intel'>SandyBridge-v2</model>
Oct 09 16:02:03 compute-0 nova_compute[117331]:       <model usable='no' vendor='Intel' canonical='SapphireRapids-v1'>SapphireRapids</model>
Oct 09 16:02:03 compute-0 nova_compute[117331]:       <blockers model='SapphireRapids'>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='amx-bf16'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='amx-int8'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='amx-tile'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='avx-vnni'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='avx512-bf16'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='avx512-fp16'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='avx512-vpopcntdq'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='avx512bitalg'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='avx512bw'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='avx512cd'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='avx512dq'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='avx512f'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='avx512ifma'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='avx512vbmi'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='avx512vbmi2'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='avx512vl'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='avx512vnni'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='bus-lock-detect'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='erms'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='fsrc'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='fsrm'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='fsrs'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='fzrm'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='gfni'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='hle'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='ibrs-all'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='invpcid'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='la57'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='pcid'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='pku'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='rtm'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='serialize'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='taa-no'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='tsx-ldtrk'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='vaes'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='vpclmulqdq'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='xfd'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='xsaves'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:       </blockers>
Oct 09 16:02:03 compute-0 nova_compute[117331]:       <model usable='no' vendor='Intel'>SapphireRapids-v1</model>
Oct 09 16:02:03 compute-0 nova_compute[117331]:       <blockers model='SapphireRapids-v1'>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='amx-bf16'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='amx-int8'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='amx-tile'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='avx-vnni'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='avx512-bf16'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='avx512-fp16'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='avx512-vpopcntdq'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='avx512bitalg'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='avx512bw'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='avx512cd'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='avx512dq'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='avx512f'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='avx512ifma'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='avx512vbmi'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='avx512vbmi2'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='avx512vl'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='avx512vnni'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='bus-lock-detect'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='erms'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='fsrc'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='fsrm'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='fsrs'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='fzrm'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='gfni'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='hle'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='ibrs-all'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='invpcid'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='la57'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='pcid'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='pku'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='rtm'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='serialize'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='taa-no'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='tsx-ldtrk'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='vaes'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='vpclmulqdq'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='xfd'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='xsaves'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:       </blockers>
Oct 09 16:02:03 compute-0 nova_compute[117331]:       <model usable='no' vendor='Intel'>SapphireRapids-v2</model>
Oct 09 16:02:03 compute-0 nova_compute[117331]:       <blockers model='SapphireRapids-v2'>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='amx-bf16'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='amx-int8'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='amx-tile'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='avx-vnni'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='avx512-bf16'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='avx512-fp16'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='avx512-vpopcntdq'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='avx512bitalg'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='avx512bw'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='avx512cd'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='avx512dq'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='avx512f'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='avx512ifma'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='avx512vbmi'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='avx512vbmi2'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='avx512vl'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='avx512vnni'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='bus-lock-detect'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='erms'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='fbsdp-no'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='fsrc'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='fsrm'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='fsrs'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='fzrm'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='gfni'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='hle'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='ibrs-all'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='invpcid'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='la57'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='pcid'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='pku'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='psdp-no'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='rtm'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='sbdr-ssdp-no'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='serialize'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='taa-no'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='tsx-ldtrk'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='vaes'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='vpclmulqdq'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='xfd'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='xsaves'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:       </blockers>
Oct 09 16:02:03 compute-0 nova_compute[117331]:       <model usable='no' vendor='Intel'>SapphireRapids-v3</model>
Oct 09 16:02:03 compute-0 nova_compute[117331]:       <blockers model='SapphireRapids-v3'>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='amx-bf16'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='amx-int8'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='amx-tile'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='avx-vnni'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='avx512-bf16'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='avx512-fp16'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='avx512-vpopcntdq'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='avx512bitalg'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='avx512bw'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='avx512cd'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='avx512dq'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='avx512f'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='avx512ifma'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='avx512vbmi'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='avx512vbmi2'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='avx512vl'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='avx512vnni'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='bus-lock-detect'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='cldemote'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='erms'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='fbsdp-no'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='fsrc'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='fsrm'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='fsrs'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='fzrm'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='gfni'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='hle'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='ibrs-all'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='invpcid'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='la57'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='movdir64b'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='movdiri'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='pcid'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='pku'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='psdp-no'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='rtm'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='sbdr-ssdp-no'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='serialize'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='ss'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='taa-no'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='tsx-ldtrk'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='vaes'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='vpclmulqdq'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='xfd'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='xsaves'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:       </blockers>
Oct 09 16:02:03 compute-0 nova_compute[117331]:       <model usable='no' vendor='Intel' canonical='SierraForest-v1'>SierraForest</model>
Oct 09 16:02:03 compute-0 nova_compute[117331]:       <blockers model='SierraForest'>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='avx-ifma'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='avx-ne-convert'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='avx-vnni'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='avx-vnni-int8'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='bus-lock-detect'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='cmpccxadd'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='erms'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='fbsdp-no'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='fsrm'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='fsrs'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='gfni'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='ibrs-all'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='invpcid'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='mcdt-no'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='pbrsb-no'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='pcid'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='pku'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='psdp-no'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='sbdr-ssdp-no'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='serialize'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='vaes'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='vpclmulqdq'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='xsaves'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:       </blockers>
Oct 09 16:02:03 compute-0 nova_compute[117331]:       <model usable='no' vendor='Intel'>SierraForest-v1</model>
Oct 09 16:02:03 compute-0 nova_compute[117331]:       <blockers model='SierraForest-v1'>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='avx-ifma'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='avx-ne-convert'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='avx-vnni'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='avx-vnni-int8'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='bus-lock-detect'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='cmpccxadd'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='erms'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='fbsdp-no'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='fsrm'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='fsrs'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='gfni'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='ibrs-all'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='invpcid'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='mcdt-no'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='pbrsb-no'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='pcid'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='pku'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='psdp-no'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='sbdr-ssdp-no'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='serialize'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='vaes'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='vpclmulqdq'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='xsaves'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:       </blockers>
Oct 09 16:02:03 compute-0 nova_compute[117331]:       <model usable='no' vendor='Intel' canonical='Skylake-Client-v1'>Skylake-Client</model>
Oct 09 16:02:03 compute-0 nova_compute[117331]:       <blockers model='Skylake-Client'>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='erms'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='hle'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='invpcid'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='pcid'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='rtm'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:       </blockers>
Oct 09 16:02:03 compute-0 nova_compute[117331]:       <model usable='no' vendor='Intel' canonical='Skylake-Client-v2'>Skylake-Client-IBRS</model>
Oct 09 16:02:03 compute-0 nova_compute[117331]:       <blockers model='Skylake-Client-IBRS'>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='erms'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='hle'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='invpcid'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='pcid'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='rtm'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:       </blockers>
Oct 09 16:02:03 compute-0 nova_compute[117331]:       <model usable='no' vendor='Intel' canonical='Skylake-Client-v3'>Skylake-Client-noTSX-IBRS</model>
Oct 09 16:02:03 compute-0 nova_compute[117331]:       <blockers model='Skylake-Client-noTSX-IBRS'>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='erms'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='invpcid'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='pcid'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:       </blockers>
Oct 09 16:02:03 compute-0 nova_compute[117331]:       <model usable='no' vendor='Intel'>Skylake-Client-v1</model>
Oct 09 16:02:03 compute-0 nova_compute[117331]:       <blockers model='Skylake-Client-v1'>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='erms'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='hle'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='invpcid'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='pcid'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='rtm'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:       </blockers>
Oct 09 16:02:03 compute-0 nova_compute[117331]:       <model usable='no' vendor='Intel'>Skylake-Client-v2</model>
Oct 09 16:02:03 compute-0 nova_compute[117331]:       <blockers model='Skylake-Client-v2'>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='erms'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='hle'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='invpcid'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='pcid'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='rtm'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:       </blockers>
Oct 09 16:02:03 compute-0 nova_compute[117331]:       <model usable='no' vendor='Intel'>Skylake-Client-v3</model>
Oct 09 16:02:03 compute-0 nova_compute[117331]:       <blockers model='Skylake-Client-v3'>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='erms'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='invpcid'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='pcid'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:       </blockers>
Oct 09 16:02:03 compute-0 nova_compute[117331]:       <model usable='no' vendor='Intel'>Skylake-Client-v4</model>
Oct 09 16:02:03 compute-0 nova_compute[117331]:       <blockers model='Skylake-Client-v4'>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='erms'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='invpcid'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='pcid'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='xsaves'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:       </blockers>
Oct 09 16:02:03 compute-0 nova_compute[117331]:       <model usable='no' vendor='Intel' canonical='Skylake-Server-v1'>Skylake-Server</model>
Oct 09 16:02:03 compute-0 nova_compute[117331]:       <blockers model='Skylake-Server'>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='avx512bw'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='avx512cd'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='avx512dq'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='avx512f'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='avx512vl'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='erms'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='hle'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='invpcid'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='pcid'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='pku'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='rtm'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:       </blockers>
Oct 09 16:02:03 compute-0 nova_compute[117331]:       <model usable='no' vendor='Intel' canonical='Skylake-Server-v2'>Skylake-Server-IBRS</model>
Oct 09 16:02:03 compute-0 nova_compute[117331]:       <blockers model='Skylake-Server-IBRS'>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='avx512bw'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='avx512cd'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='avx512dq'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='avx512f'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='avx512vl'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='erms'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='hle'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='invpcid'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='pcid'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='pku'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='rtm'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:       </blockers>
Oct 09 16:02:03 compute-0 nova_compute[117331]:       <model usable='no' vendor='Intel' canonical='Skylake-Server-v3'>Skylake-Server-noTSX-IBRS</model>
Oct 09 16:02:03 compute-0 nova_compute[117331]:       <blockers model='Skylake-Server-noTSX-IBRS'>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='avx512bw'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='avx512cd'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='avx512dq'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='avx512f'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='avx512vl'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='erms'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='invpcid'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='pcid'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='pku'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:       </blockers>
Oct 09 16:02:03 compute-0 nova_compute[117331]:       <model usable='no' vendor='Intel'>Skylake-Server-v1</model>
Oct 09 16:02:03 compute-0 nova_compute[117331]:       <blockers model='Skylake-Server-v1'>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='avx512bw'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='avx512cd'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='avx512dq'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='avx512f'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='avx512vl'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='erms'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='hle'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='invpcid'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='pcid'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='pku'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='rtm'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:       </blockers>
Oct 09 16:02:03 compute-0 nova_compute[117331]:       <model usable='no' vendor='Intel'>Skylake-Server-v2</model>
Oct 09 16:02:03 compute-0 nova_compute[117331]:       <blockers model='Skylake-Server-v2'>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='avx512bw'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='avx512cd'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='avx512dq'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='avx512f'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='avx512vl'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='erms'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='hle'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='invpcid'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='pcid'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='pku'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='rtm'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:       </blockers>
Oct 09 16:02:03 compute-0 nova_compute[117331]:       <model usable='no' vendor='Intel'>Skylake-Server-v3</model>
Oct 09 16:02:03 compute-0 nova_compute[117331]:       <blockers model='Skylake-Server-v3'>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='avx512bw'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='avx512cd'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='avx512dq'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='avx512f'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='avx512vl'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='erms'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='invpcid'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='pcid'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='pku'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:       </blockers>
Oct 09 16:02:03 compute-0 nova_compute[117331]:       <model usable='no' vendor='Intel'>Skylake-Server-v4</model>
Oct 09 16:02:03 compute-0 nova_compute[117331]:       <blockers model='Skylake-Server-v4'>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='avx512bw'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='avx512cd'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='avx512dq'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='avx512f'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='avx512vl'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='erms'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='invpcid'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='pcid'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='pku'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:       </blockers>
Oct 09 16:02:03 compute-0 nova_compute[117331]:       <model usable='no' vendor='Intel'>Skylake-Server-v5</model>
Oct 09 16:02:03 compute-0 nova_compute[117331]:       <blockers model='Skylake-Server-v5'>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='avx512bw'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='avx512cd'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='avx512dq'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='avx512f'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='avx512vl'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='erms'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='invpcid'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='pcid'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='pku'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='xsaves'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:       </blockers>
Oct 09 16:02:03 compute-0 nova_compute[117331]:       <model usable='no' vendor='Intel' canonical='Snowridge-v1'>Snowridge</model>
Oct 09 16:02:03 compute-0 nova_compute[117331]:       <blockers model='Snowridge'>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='cldemote'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='core-capability'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='erms'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='gfni'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='movdir64b'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='movdiri'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='mpx'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='split-lock-detect'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:       </blockers>
Oct 09 16:02:03 compute-0 nova_compute[117331]:       <model usable='no' vendor='Intel'>Snowridge-v1</model>
Oct 09 16:02:03 compute-0 nova_compute[117331]:       <blockers model='Snowridge-v1'>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='cldemote'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='core-capability'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='erms'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='gfni'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='movdir64b'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='movdiri'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='mpx'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='split-lock-detect'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:       </blockers>
Oct 09 16:02:03 compute-0 nova_compute[117331]:       <model usable='no' vendor='Intel'>Snowridge-v2</model>
Oct 09 16:02:03 compute-0 nova_compute[117331]:       <blockers model='Snowridge-v2'>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='cldemote'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='core-capability'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='erms'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='gfni'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='movdir64b'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='movdiri'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='split-lock-detect'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:       </blockers>
Oct 09 16:02:03 compute-0 nova_compute[117331]:       <model usable='no' vendor='Intel'>Snowridge-v3</model>
Oct 09 16:02:03 compute-0 nova_compute[117331]:       <blockers model='Snowridge-v3'>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='cldemote'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='core-capability'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='erms'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='gfni'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='movdir64b'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='movdiri'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='split-lock-detect'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='xsaves'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:       </blockers>
Oct 09 16:02:03 compute-0 nova_compute[117331]:       <model usable='no' vendor='Intel'>Snowridge-v4</model>
Oct 09 16:02:03 compute-0 nova_compute[117331]:       <blockers model='Snowridge-v4'>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='cldemote'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='erms'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='gfni'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='movdir64b'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='movdiri'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='xsaves'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:       </blockers>
Oct 09 16:02:03 compute-0 nova_compute[117331]:       <model usable='yes' vendor='Intel' canonical='Westmere-v1'>Westmere</model>
Oct 09 16:02:03 compute-0 nova_compute[117331]:       <model usable='yes' vendor='Intel' canonical='Westmere-v2'>Westmere-IBRS</model>
Oct 09 16:02:03 compute-0 nova_compute[117331]:       <model usable='yes' vendor='Intel'>Westmere-v1</model>
Oct 09 16:02:03 compute-0 nova_compute[117331]:       <model usable='yes' vendor='Intel'>Westmere-v2</model>
Oct 09 16:02:03 compute-0 nova_compute[117331]:       <model usable='no' deprecated='yes' vendor='AMD' canonical='athlon-v1'>athlon</model>
Oct 09 16:02:03 compute-0 nova_compute[117331]:       <blockers model='athlon'>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='3dnow'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='3dnowext'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:       </blockers>
Oct 09 16:02:03 compute-0 nova_compute[117331]:       <model usable='no' deprecated='yes' vendor='AMD'>athlon-v1</model>
Oct 09 16:02:03 compute-0 nova_compute[117331]:       <blockers model='athlon-v1'>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='3dnow'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='3dnowext'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:       </blockers>
Oct 09 16:02:03 compute-0 nova_compute[117331]:       <model usable='no' deprecated='yes' vendor='Intel' canonical='core2duo-v1'>core2duo</model>
Oct 09 16:02:03 compute-0 nova_compute[117331]:       <blockers model='core2duo'>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='ss'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:       </blockers>
Oct 09 16:02:03 compute-0 nova_compute[117331]:       <model usable='no' deprecated='yes' vendor='Intel'>core2duo-v1</model>
Oct 09 16:02:03 compute-0 nova_compute[117331]:       <blockers model='core2duo-v1'>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='ss'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:       </blockers>
Oct 09 16:02:03 compute-0 nova_compute[117331]:       <model usable='no' deprecated='yes' vendor='Intel' canonical='coreduo-v1'>coreduo</model>
Oct 09 16:02:03 compute-0 nova_compute[117331]:       <blockers model='coreduo'>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='ss'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:       </blockers>
Oct 09 16:02:03 compute-0 nova_compute[117331]:       <model usable='no' deprecated='yes' vendor='Intel'>coreduo-v1</model>
Oct 09 16:02:03 compute-0 nova_compute[117331]:       <blockers model='coreduo-v1'>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='ss'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:       </blockers>
Oct 09 16:02:03 compute-0 nova_compute[117331]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='kvm32-v1'>kvm32</model>
Oct 09 16:02:03 compute-0 nova_compute[117331]:       <model usable='yes' deprecated='yes' vendor='unknown'>kvm32-v1</model>
Oct 09 16:02:03 compute-0 nova_compute[117331]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='kvm64-v1'>kvm64</model>
Oct 09 16:02:03 compute-0 nova_compute[117331]:       <model usable='yes' deprecated='yes' vendor='unknown'>kvm64-v1</model>
Oct 09 16:02:03 compute-0 nova_compute[117331]:       <model usable='no' deprecated='yes' vendor='Intel' canonical='n270-v1'>n270</model>
Oct 09 16:02:03 compute-0 nova_compute[117331]:       <blockers model='n270'>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='ss'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:       </blockers>
Oct 09 16:02:03 compute-0 nova_compute[117331]:       <model usable='no' deprecated='yes' vendor='Intel'>n270-v1</model>
Oct 09 16:02:03 compute-0 nova_compute[117331]:       <blockers model='n270-v1'>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='ss'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:       </blockers>
Oct 09 16:02:03 compute-0 nova_compute[117331]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium-v1'>pentium</model>
Oct 09 16:02:03 compute-0 nova_compute[117331]:       <model usable='yes' deprecated='yes' vendor='unknown'>pentium-v1</model>
Oct 09 16:02:03 compute-0 nova_compute[117331]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium2-v1'>pentium2</model>
Oct 09 16:02:03 compute-0 nova_compute[117331]:       <model usable='yes' deprecated='yes' vendor='unknown'>pentium2-v1</model>
Oct 09 16:02:03 compute-0 nova_compute[117331]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium3-v1'>pentium3</model>
Oct 09 16:02:03 compute-0 nova_compute[117331]:       <model usable='yes' deprecated='yes' vendor='unknown'>pentium3-v1</model>
Oct 09 16:02:03 compute-0 nova_compute[117331]:       <model usable='no' deprecated='yes' vendor='AMD' canonical='phenom-v1'>phenom</model>
Oct 09 16:02:03 compute-0 nova_compute[117331]:       <blockers model='phenom'>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='3dnow'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='3dnowext'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:       </blockers>
Oct 09 16:02:03 compute-0 nova_compute[117331]:       <model usable='no' deprecated='yes' vendor='AMD'>phenom-v1</model>
Oct 09 16:02:03 compute-0 nova_compute[117331]:       <blockers model='phenom-v1'>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='3dnow'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='3dnowext'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:       </blockers>
Oct 09 16:02:03 compute-0 nova_compute[117331]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='qemu32-v1'>qemu32</model>
Oct 09 16:02:03 compute-0 nova_compute[117331]:       <model usable='yes' deprecated='yes' vendor='unknown'>qemu32-v1</model>
Oct 09 16:02:03 compute-0 nova_compute[117331]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='qemu64-v1'>qemu64</model>
Oct 09 16:02:03 compute-0 nova_compute[117331]:       <model usable='yes' deprecated='yes' vendor='unknown'>qemu64-v1</model>
Oct 09 16:02:03 compute-0 nova_compute[117331]:     </mode>
Oct 09 16:02:03 compute-0 nova_compute[117331]:   </cpu>
Oct 09 16:02:03 compute-0 nova_compute[117331]:   <memoryBacking supported='yes'>
Oct 09 16:02:03 compute-0 nova_compute[117331]:     <enum name='sourceType'>
Oct 09 16:02:03 compute-0 nova_compute[117331]:       <value>file</value>
Oct 09 16:02:03 compute-0 nova_compute[117331]:       <value>anonymous</value>
Oct 09 16:02:03 compute-0 nova_compute[117331]:       <value>memfd</value>
Oct 09 16:02:03 compute-0 nova_compute[117331]:     </enum>
Oct 09 16:02:03 compute-0 nova_compute[117331]:   </memoryBacking>
Oct 09 16:02:03 compute-0 nova_compute[117331]:   <devices>
Oct 09 16:02:03 compute-0 nova_compute[117331]:     <disk supported='yes'>
Oct 09 16:02:03 compute-0 nova_compute[117331]:       <enum name='diskDevice'>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <value>disk</value>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <value>cdrom</value>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <value>floppy</value>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <value>lun</value>
Oct 09 16:02:03 compute-0 nova_compute[117331]:       </enum>
Oct 09 16:02:03 compute-0 nova_compute[117331]:       <enum name='bus'>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <value>fdc</value>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <value>scsi</value>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <value>virtio</value>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <value>usb</value>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <value>sata</value>
Oct 09 16:02:03 compute-0 nova_compute[117331]:       </enum>
Oct 09 16:02:03 compute-0 nova_compute[117331]:       <enum name='model'>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <value>virtio</value>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <value>virtio-transitional</value>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <value>virtio-non-transitional</value>
Oct 09 16:02:03 compute-0 nova_compute[117331]:       </enum>
Oct 09 16:02:03 compute-0 nova_compute[117331]:     </disk>
Oct 09 16:02:03 compute-0 nova_compute[117331]:     <graphics supported='yes'>
Oct 09 16:02:03 compute-0 nova_compute[117331]:       <enum name='type'>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <value>vnc</value>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <value>egl-headless</value>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <value>dbus</value>
Oct 09 16:02:03 compute-0 nova_compute[117331]:       </enum>
Oct 09 16:02:03 compute-0 nova_compute[117331]:     </graphics>
Oct 09 16:02:03 compute-0 nova_compute[117331]:     <video supported='yes'>
Oct 09 16:02:03 compute-0 nova_compute[117331]:       <enum name='modelType'>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <value>vga</value>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <value>cirrus</value>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <value>virtio</value>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <value>none</value>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <value>bochs</value>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <value>ramfb</value>
Oct 09 16:02:03 compute-0 nova_compute[117331]:       </enum>
Oct 09 16:02:03 compute-0 nova_compute[117331]:     </video>
Oct 09 16:02:03 compute-0 nova_compute[117331]:     <hostdev supported='yes'>
Oct 09 16:02:03 compute-0 nova_compute[117331]:       <enum name='mode'>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <value>subsystem</value>
Oct 09 16:02:03 compute-0 nova_compute[117331]:       </enum>
Oct 09 16:02:03 compute-0 nova_compute[117331]:       <enum name='startupPolicy'>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <value>default</value>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <value>mandatory</value>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <value>requisite</value>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <value>optional</value>
Oct 09 16:02:03 compute-0 nova_compute[117331]:       </enum>
Oct 09 16:02:03 compute-0 nova_compute[117331]:       <enum name='subsysType'>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <value>usb</value>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <value>pci</value>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <value>scsi</value>
Oct 09 16:02:03 compute-0 nova_compute[117331]:       </enum>
Oct 09 16:02:03 compute-0 nova_compute[117331]:       <enum name='capsType'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:       <enum name='pciBackend'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:     </hostdev>
Oct 09 16:02:03 compute-0 nova_compute[117331]:     <rng supported='yes'>
Oct 09 16:02:03 compute-0 nova_compute[117331]:       <enum name='model'>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <value>virtio</value>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <value>virtio-transitional</value>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <value>virtio-non-transitional</value>
Oct 09 16:02:03 compute-0 nova_compute[117331]:       </enum>
Oct 09 16:02:03 compute-0 nova_compute[117331]:       <enum name='backendModel'>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <value>random</value>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <value>egd</value>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <value>builtin</value>
Oct 09 16:02:03 compute-0 nova_compute[117331]:       </enum>
Oct 09 16:02:03 compute-0 nova_compute[117331]:     </rng>
Oct 09 16:02:03 compute-0 nova_compute[117331]:     <filesystem supported='yes'>
Oct 09 16:02:03 compute-0 nova_compute[117331]:       <enum name='driverType'>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <value>path</value>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <value>handle</value>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <value>virtiofs</value>
Oct 09 16:02:03 compute-0 nova_compute[117331]:       </enum>
Oct 09 16:02:03 compute-0 nova_compute[117331]:     </filesystem>
Oct 09 16:02:03 compute-0 nova_compute[117331]:     <tpm supported='yes'>
Oct 09 16:02:03 compute-0 nova_compute[117331]:       <enum name='model'>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <value>tpm-tis</value>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <value>tpm-crb</value>
Oct 09 16:02:03 compute-0 nova_compute[117331]:       </enum>
Oct 09 16:02:03 compute-0 nova_compute[117331]:       <enum name='backendModel'>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <value>emulator</value>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <value>external</value>
Oct 09 16:02:03 compute-0 nova_compute[117331]:       </enum>
Oct 09 16:02:03 compute-0 nova_compute[117331]:       <enum name='backendVersion'>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <value>2.0</value>
Oct 09 16:02:03 compute-0 nova_compute[117331]:       </enum>
Oct 09 16:02:03 compute-0 nova_compute[117331]:     </tpm>
Oct 09 16:02:03 compute-0 nova_compute[117331]:     <redirdev supported='yes'>
Oct 09 16:02:03 compute-0 nova_compute[117331]:       <enum name='bus'>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <value>usb</value>
Oct 09 16:02:03 compute-0 nova_compute[117331]:       </enum>
Oct 09 16:02:03 compute-0 nova_compute[117331]:     </redirdev>
Oct 09 16:02:03 compute-0 nova_compute[117331]:     <channel supported='yes'>
Oct 09 16:02:03 compute-0 nova_compute[117331]:       <enum name='type'>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <value>pty</value>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <value>unix</value>
Oct 09 16:02:03 compute-0 nova_compute[117331]:       </enum>
Oct 09 16:02:03 compute-0 nova_compute[117331]:     </channel>
Oct 09 16:02:03 compute-0 nova_compute[117331]:     <crypto supported='yes'>
Oct 09 16:02:03 compute-0 nova_compute[117331]:       <enum name='model'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:       <enum name='type'>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <value>qemu</value>
Oct 09 16:02:03 compute-0 nova_compute[117331]:       </enum>
Oct 09 16:02:03 compute-0 nova_compute[117331]:       <enum name='backendModel'>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <value>builtin</value>
Oct 09 16:02:03 compute-0 nova_compute[117331]:       </enum>
Oct 09 16:02:03 compute-0 nova_compute[117331]:     </crypto>
Oct 09 16:02:03 compute-0 nova_compute[117331]:     <interface supported='yes'>
Oct 09 16:02:03 compute-0 nova_compute[117331]:       <enum name='backendType'>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <value>default</value>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <value>passt</value>
Oct 09 16:02:03 compute-0 nova_compute[117331]:       </enum>
Oct 09 16:02:03 compute-0 nova_compute[117331]:     </interface>
Oct 09 16:02:03 compute-0 nova_compute[117331]:     <panic supported='yes'>
Oct 09 16:02:03 compute-0 nova_compute[117331]:       <enum name='model'>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <value>isa</value>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <value>hyperv</value>
Oct 09 16:02:03 compute-0 nova_compute[117331]:       </enum>
Oct 09 16:02:03 compute-0 nova_compute[117331]:     </panic>
Oct 09 16:02:03 compute-0 nova_compute[117331]:   </devices>
Oct 09 16:02:03 compute-0 nova_compute[117331]:   <features>
Oct 09 16:02:03 compute-0 nova_compute[117331]:     <gic supported='no'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:     <vmcoreinfo supported='yes'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:     <genid supported='yes'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:     <backingStoreInput supported='yes'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:     <backup supported='yes'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:     <async-teardown supported='yes'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:     <ps2 supported='yes'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:     <sev supported='no'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:     <sgx supported='no'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:     <hyperv supported='yes'>
Oct 09 16:02:03 compute-0 nova_compute[117331]:       <enum name='features'>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <value>relaxed</value>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <value>vapic</value>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <value>spinlocks</value>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <value>vpindex</value>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <value>runtime</value>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <value>synic</value>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <value>stimer</value>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <value>reset</value>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <value>vendor_id</value>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <value>frequencies</value>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <value>reenlightenment</value>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <value>tlbflush</value>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <value>ipi</value>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <value>avic</value>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <value>emsr_bitmap</value>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <value>xmm_input</value>
Oct 09 16:02:03 compute-0 nova_compute[117331]:       </enum>
Oct 09 16:02:03 compute-0 nova_compute[117331]:     </hyperv>
Oct 09 16:02:03 compute-0 nova_compute[117331]:     <launchSecurity supported='no'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:   </features>
Oct 09 16:02:03 compute-0 nova_compute[117331]: </domainCapabilities>
Oct 09 16:02:03 compute-0 nova_compute[117331]:  _get_domain_capabilities /usr/lib/python3.12/site-packages/nova/virt/libvirt/host.py:1029
Oct 09 16:02:03 compute-0 nova_compute[117331]: 2025-10-09 16:02:03.456 2 DEBUG nova.virt.libvirt.host [None req-d385ee3e-f913-4a9a-bb24-31bd7762fadb - - - - - -] Getting domain capabilities for x86_64 via machine types: {'pc', 'q35'} _get_machine_types /usr/lib/python3.12/site-packages/nova/virt/libvirt/host.py:944
Oct 09 16:02:03 compute-0 nova_compute[117331]: 2025-10-09 16:02:03.462 2 DEBUG nova.virt.libvirt.host [None req-d385ee3e-f913-4a9a-bb24-31bd7762fadb - - - - - -] Libvirt host hypervisor capabilities for arch=x86_64 and machine_type=pc:
Oct 09 16:02:03 compute-0 nova_compute[117331]: <domainCapabilities>
Oct 09 16:02:03 compute-0 nova_compute[117331]:   <path>/usr/libexec/qemu-kvm</path>
Oct 09 16:02:03 compute-0 nova_compute[117331]:   <domain>kvm</domain>
Oct 09 16:02:03 compute-0 nova_compute[117331]:   <machine>pc-i440fx-rhel7.6.0</machine>
Oct 09 16:02:03 compute-0 nova_compute[117331]:   <arch>x86_64</arch>
Oct 09 16:02:03 compute-0 nova_compute[117331]:   <vcpu max='240'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:   <iothreads supported='yes'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:   <os supported='yes'>
Oct 09 16:02:03 compute-0 nova_compute[117331]:     <enum name='firmware'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:     <loader supported='yes'>
Oct 09 16:02:03 compute-0 nova_compute[117331]:       <value>/usr/share/OVMF/OVMF_CODE.secboot.fd</value>
Oct 09 16:02:03 compute-0 nova_compute[117331]:       <enum name='type'>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <value>rom</value>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <value>pflash</value>
Oct 09 16:02:03 compute-0 nova_compute[117331]:       </enum>
Oct 09 16:02:03 compute-0 nova_compute[117331]:       <enum name='readonly'>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <value>yes</value>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <value>no</value>
Oct 09 16:02:03 compute-0 nova_compute[117331]:       </enum>
Oct 09 16:02:03 compute-0 nova_compute[117331]:       <enum name='secure'>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <value>no</value>
Oct 09 16:02:03 compute-0 nova_compute[117331]:       </enum>
Oct 09 16:02:03 compute-0 nova_compute[117331]:     </loader>
Oct 09 16:02:03 compute-0 nova_compute[117331]:   </os>
Oct 09 16:02:03 compute-0 nova_compute[117331]:   <cpu>
Oct 09 16:02:03 compute-0 nova_compute[117331]:     <mode name='host-passthrough' supported='yes'>
Oct 09 16:02:03 compute-0 nova_compute[117331]:       <enum name='hostPassthroughMigratable'>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <value>on</value>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <value>off</value>
Oct 09 16:02:03 compute-0 nova_compute[117331]:       </enum>
Oct 09 16:02:03 compute-0 nova_compute[117331]:     </mode>
Oct 09 16:02:03 compute-0 nova_compute[117331]:     <mode name='maximum' supported='yes'>
Oct 09 16:02:03 compute-0 nova_compute[117331]:       <enum name='maximumMigratable'>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <value>on</value>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <value>off</value>
Oct 09 16:02:03 compute-0 nova_compute[117331]:       </enum>
Oct 09 16:02:03 compute-0 nova_compute[117331]:     </mode>
Oct 09 16:02:03 compute-0 nova_compute[117331]:     <mode name='host-model' supported='yes'>
Oct 09 16:02:03 compute-0 nova_compute[117331]:       <model fallback='forbid'>EPYC-Rome</model>
Oct 09 16:02:03 compute-0 nova_compute[117331]:       <vendor>AMD</vendor>
Oct 09 16:02:03 compute-0 nova_compute[117331]:       <maxphysaddr mode='passthrough' limit='40'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:       <feature policy='require' name='x2apic'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:       <feature policy='require' name='tsc-deadline'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:       <feature policy='require' name='hypervisor'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:       <feature policy='require' name='tsc_adjust'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:       <feature policy='require' name='spec-ctrl'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:       <feature policy='require' name='stibp'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:       <feature policy='require' name='arch-capabilities'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:       <feature policy='require' name='ssbd'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:       <feature policy='require' name='cmp_legacy'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:       <feature policy='require' name='overflow-recov'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:       <feature policy='require' name='succor'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:       <feature policy='require' name='ibrs'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:       <feature policy='require' name='amd-ssbd'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:       <feature policy='require' name='virt-ssbd'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:       <feature policy='require' name='lbrv'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:       <feature policy='require' name='tsc-scale'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:       <feature policy='require' name='vmcb-clean'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:       <feature policy='require' name='flushbyasid'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:       <feature policy='require' name='pause-filter'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:       <feature policy='require' name='pfthreshold'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:       <feature policy='require' name='svme-addr-chk'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:       <feature policy='require' name='lfence-always-serializing'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:       <feature policy='require' name='rdctl-no'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:       <feature policy='require' name='skip-l1dfl-vmentry'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:       <feature policy='require' name='mds-no'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:       <feature policy='require' name='pschange-mc-no'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:       <feature policy='require' name='gds-no'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:       <feature policy='require' name='rfds-no'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:       <feature policy='disable' name='xsaves'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:     </mode>
Oct 09 16:02:03 compute-0 nova_compute[117331]:     <mode name='custom' supported='yes'>
Oct 09 16:02:03 compute-0 nova_compute[117331]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='486-v1'>486</model>
Oct 09 16:02:03 compute-0 nova_compute[117331]:       <model usable='yes' deprecated='yes' vendor='unknown'>486-v1</model>
Oct 09 16:02:03 compute-0 nova_compute[117331]:       <model usable='no' vendor='Intel' canonical='Broadwell-v1'>Broadwell</model>
Oct 09 16:02:03 compute-0 nova_compute[117331]:       <blockers model='Broadwell'>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='erms'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='hle'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='invpcid'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='pcid'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='rtm'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:       </blockers>
Oct 09 16:02:03 compute-0 nova_compute[117331]:       <model usable='no' vendor='Intel' canonical='Broadwell-v3'>Broadwell-IBRS</model>
Oct 09 16:02:03 compute-0 nova_compute[117331]:       <blockers model='Broadwell-IBRS'>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='erms'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='hle'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='invpcid'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='pcid'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='rtm'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:       </blockers>
Oct 09 16:02:03 compute-0 nova_compute[117331]:       <model usable='no' vendor='Intel' canonical='Broadwell-v2'>Broadwell-noTSX</model>
Oct 09 16:02:03 compute-0 nova_compute[117331]:       <blockers model='Broadwell-noTSX'>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='erms'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='invpcid'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='pcid'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:       </blockers>
Oct 09 16:02:03 compute-0 nova_compute[117331]:       <model usable='no' vendor='Intel' canonical='Broadwell-v4'>Broadwell-noTSX-IBRS</model>
Oct 09 16:02:03 compute-0 nova_compute[117331]:       <blockers model='Broadwell-noTSX-IBRS'>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='erms'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='invpcid'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='pcid'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:       </blockers>
Oct 09 16:02:03 compute-0 nova_compute[117331]:       <model usable='no' vendor='Intel'>Broadwell-v1</model>
Oct 09 16:02:03 compute-0 nova_compute[117331]:       <blockers model='Broadwell-v1'>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='erms'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='hle'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='invpcid'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='pcid'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='rtm'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:       </blockers>
Oct 09 16:02:03 compute-0 nova_compute[117331]:       <model usable='no' vendor='Intel'>Broadwell-v2</model>
Oct 09 16:02:03 compute-0 nova_compute[117331]:       <blockers model='Broadwell-v2'>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='erms'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='invpcid'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='pcid'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:       </blockers>
Oct 09 16:02:03 compute-0 nova_compute[117331]:       <model usable='no' vendor='Intel'>Broadwell-v3</model>
Oct 09 16:02:03 compute-0 nova_compute[117331]:       <blockers model='Broadwell-v3'>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='erms'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='hle'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='invpcid'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='pcid'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='rtm'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:       </blockers>
Oct 09 16:02:03 compute-0 nova_compute[117331]:       <model usable='no' vendor='Intel'>Broadwell-v4</model>
Oct 09 16:02:03 compute-0 nova_compute[117331]:       <blockers model='Broadwell-v4'>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='erms'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='invpcid'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='pcid'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:       </blockers>
Oct 09 16:02:03 compute-0 nova_compute[117331]:       <model usable='no' vendor='Intel' canonical='Cascadelake-Server-v1'>Cascadelake-Server</model>
Oct 09 16:02:03 compute-0 nova_compute[117331]:       <blockers model='Cascadelake-Server'>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='avx512bw'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='avx512cd'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='avx512dq'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='avx512f'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='avx512vl'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='avx512vnni'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='erms'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='hle'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='invpcid'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='pcid'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='pku'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='rtm'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:       </blockers>
Oct 09 16:02:03 compute-0 nova_compute[117331]:       <model usable='no' vendor='Intel' canonical='Cascadelake-Server-v3'>Cascadelake-Server-noTSX</model>
Oct 09 16:02:03 compute-0 nova_compute[117331]:       <blockers model='Cascadelake-Server-noTSX'>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='avx512bw'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='avx512cd'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='avx512dq'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='avx512f'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='avx512vl'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='avx512vnni'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='erms'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='ibrs-all'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='invpcid'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='pcid'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='pku'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:       </blockers>
Oct 09 16:02:03 compute-0 nova_compute[117331]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v1</model>
Oct 09 16:02:03 compute-0 nova_compute[117331]:       <blockers model='Cascadelake-Server-v1'>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='avx512bw'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='avx512cd'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='avx512dq'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='avx512f'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='avx512vl'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='avx512vnni'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='erms'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='hle'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='invpcid'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='pcid'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='pku'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='rtm'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:       </blockers>
Oct 09 16:02:03 compute-0 nova_compute[117331]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v2</model>
Oct 09 16:02:03 compute-0 nova_compute[117331]:       <blockers model='Cascadelake-Server-v2'>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='avx512bw'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='avx512cd'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='avx512dq'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='avx512f'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='avx512vl'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='avx512vnni'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='erms'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='hle'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='ibrs-all'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='invpcid'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='pcid'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='pku'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='rtm'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:       </blockers>
Oct 09 16:02:03 compute-0 nova_compute[117331]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v3</model>
Oct 09 16:02:03 compute-0 nova_compute[117331]:       <blockers model='Cascadelake-Server-v3'>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='avx512bw'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='avx512cd'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='avx512dq'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='avx512f'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='avx512vl'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='avx512vnni'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='erms'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='ibrs-all'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='invpcid'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='pcid'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='pku'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:       </blockers>
Oct 09 16:02:03 compute-0 nova_compute[117331]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v4</model>
Oct 09 16:02:03 compute-0 nova_compute[117331]:       <blockers model='Cascadelake-Server-v4'>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='avx512bw'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='avx512cd'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='avx512dq'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='avx512f'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='avx512vl'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='avx512vnni'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='erms'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='ibrs-all'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='invpcid'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='pcid'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='pku'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:       </blockers>
Oct 09 16:02:03 compute-0 nova_compute[117331]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v5</model>
Oct 09 16:02:03 compute-0 nova_compute[117331]:       <blockers model='Cascadelake-Server-v5'>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='avx512bw'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='avx512cd'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='avx512dq'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='avx512f'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='avx512vl'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='avx512vnni'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='erms'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='ibrs-all'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='invpcid'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='pcid'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='pku'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='xsaves'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:       </blockers>
Oct 09 16:02:03 compute-0 nova_compute[117331]:       <model usable='yes' deprecated='yes' vendor='Intel' canonical='Conroe-v1'>Conroe</model>
Oct 09 16:02:03 compute-0 nova_compute[117331]:       <model usable='yes' deprecated='yes' vendor='Intel'>Conroe-v1</model>
Oct 09 16:02:03 compute-0 nova_compute[117331]:       <model usable='no' vendor='Intel' canonical='Cooperlake-v1'>Cooperlake</model>
Oct 09 16:02:03 compute-0 nova_compute[117331]:       <blockers model='Cooperlake'>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='avx512-bf16'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='avx512bw'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='avx512cd'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='avx512dq'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='avx512f'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='avx512vl'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='avx512vnni'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='erms'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='hle'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='ibrs-all'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='invpcid'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='pcid'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='pku'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='rtm'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='taa-no'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:       </blockers>
Oct 09 16:02:03 compute-0 nova_compute[117331]:       <model usable='no' vendor='Intel'>Cooperlake-v1</model>
Oct 09 16:02:03 compute-0 nova_compute[117331]:       <blockers model='Cooperlake-v1'>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='avx512-bf16'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='avx512bw'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='avx512cd'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='avx512dq'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='avx512f'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='avx512vl'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='avx512vnni'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='erms'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='hle'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='ibrs-all'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='invpcid'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='pcid'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='pku'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='rtm'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='taa-no'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:       </blockers>
Oct 09 16:02:03 compute-0 nova_compute[117331]:       <model usable='no' vendor='Intel'>Cooperlake-v2</model>
Oct 09 16:02:03 compute-0 nova_compute[117331]:       <blockers model='Cooperlake-v2'>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='avx512-bf16'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='avx512bw'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='avx512cd'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='avx512dq'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='avx512f'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='avx512vl'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='avx512vnni'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='erms'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='hle'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='ibrs-all'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='invpcid'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='pcid'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='pku'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='rtm'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='taa-no'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='xsaves'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:       </blockers>
Oct 09 16:02:03 compute-0 nova_compute[117331]:       <model usable='no' vendor='Intel' canonical='Denverton-v1'>Denverton</model>
Oct 09 16:02:03 compute-0 nova_compute[117331]:       <blockers model='Denverton'>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='erms'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='mpx'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:       </blockers>
Oct 09 16:02:03 compute-0 nova_compute[117331]:       <model usable='no' vendor='Intel'>Denverton-v1</model>
Oct 09 16:02:03 compute-0 nova_compute[117331]:       <blockers model='Denverton-v1'>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='erms'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='mpx'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:       </blockers>
Oct 09 16:02:03 compute-0 nova_compute[117331]:       <model usable='no' vendor='Intel'>Denverton-v2</model>
Oct 09 16:02:03 compute-0 nova_compute[117331]:       <blockers model='Denverton-v2'>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='erms'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:       </blockers>
Oct 09 16:02:03 compute-0 nova_compute[117331]:       <model usable='no' vendor='Intel'>Denverton-v3</model>
Oct 09 16:02:03 compute-0 nova_compute[117331]:       <blockers model='Denverton-v3'>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='erms'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='xsaves'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:       </blockers>
Oct 09 16:02:03 compute-0 nova_compute[117331]:       <model usable='yes' vendor='Hygon' canonical='Dhyana-v1'>Dhyana</model>
Oct 09 16:02:03 compute-0 nova_compute[117331]:       <model usable='yes' vendor='Hygon'>Dhyana-v1</model>
Oct 09 16:02:03 compute-0 nova_compute[117331]:       <model usable='no' vendor='Hygon'>Dhyana-v2</model>
Oct 09 16:02:03 compute-0 nova_compute[117331]:       <blockers model='Dhyana-v2'>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='xsaves'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:       </blockers>
Oct 09 16:02:03 compute-0 nova_compute[117331]:       <model usable='yes' vendor='AMD' canonical='EPYC-v1'>EPYC</model>
Oct 09 16:02:03 compute-0 nova_compute[117331]:       <model usable='no' vendor='AMD' canonical='EPYC-Genoa-v1'>EPYC-Genoa</model>
Oct 09 16:02:03 compute-0 nova_compute[117331]:       <blockers model='EPYC-Genoa'>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='amd-psfd'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='auto-ibrs'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='avx512-bf16'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='avx512-vpopcntdq'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='avx512bitalg'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='avx512bw'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='avx512cd'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='avx512dq'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='avx512f'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='avx512ifma'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='avx512vbmi'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='avx512vbmi2'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='avx512vl'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='avx512vnni'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='erms'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='fsrm'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='gfni'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='invpcid'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='la57'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='no-nested-data-bp'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='null-sel-clr-base'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='pcid'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='pku'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='stibp-always-on'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='vaes'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='vpclmulqdq'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='xsaves'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:       </blockers>
Oct 09 16:02:03 compute-0 nova_compute[117331]:       <model usable='no' vendor='AMD'>EPYC-Genoa-v1</model>
Oct 09 16:02:03 compute-0 nova_compute[117331]:       <blockers model='EPYC-Genoa-v1'>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='amd-psfd'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='auto-ibrs'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='avx512-bf16'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='avx512-vpopcntdq'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='avx512bitalg'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='avx512bw'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='avx512cd'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='avx512dq'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='avx512f'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='avx512ifma'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='avx512vbmi'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='avx512vbmi2'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='avx512vl'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='avx512vnni'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='erms'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='fsrm'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='gfni'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='invpcid'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='la57'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='no-nested-data-bp'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='null-sel-clr-base'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='pcid'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='pku'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='stibp-always-on'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='vaes'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='vpclmulqdq'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='xsaves'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:       </blockers>
Oct 09 16:02:03 compute-0 nova_compute[117331]:       <model usable='yes' vendor='AMD' canonical='EPYC-v2'>EPYC-IBPB</model>
Oct 09 16:02:03 compute-0 nova_compute[117331]:       <model usable='no' vendor='AMD' canonical='EPYC-Milan-v1'>EPYC-Milan</model>
Oct 09 16:02:03 compute-0 nova_compute[117331]:       <blockers model='EPYC-Milan'>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='erms'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='fsrm'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='invpcid'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='pcid'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='pku'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='xsaves'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:       </blockers>
Oct 09 16:02:03 compute-0 nova_compute[117331]:       <model usable='no' vendor='AMD'>EPYC-Milan-v1</model>
Oct 09 16:02:03 compute-0 nova_compute[117331]:       <blockers model='EPYC-Milan-v1'>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='erms'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='fsrm'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='invpcid'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='pcid'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='pku'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='xsaves'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:       </blockers>
Oct 09 16:02:03 compute-0 nova_compute[117331]:       <model usable='no' vendor='AMD'>EPYC-Milan-v2</model>
Oct 09 16:02:03 compute-0 nova_compute[117331]:       <blockers model='EPYC-Milan-v2'>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='amd-psfd'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='erms'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='fsrm'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='invpcid'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='no-nested-data-bp'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='null-sel-clr-base'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='pcid'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='pku'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='stibp-always-on'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='vaes'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='vpclmulqdq'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='xsaves'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:       </blockers>
Oct 09 16:02:03 compute-0 nova_compute[117331]:       <model usable='no' vendor='AMD' canonical='EPYC-Rome-v1'>EPYC-Rome</model>
Oct 09 16:02:03 compute-0 nova_compute[117331]:       <blockers model='EPYC-Rome'>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='xsaves'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:       </blockers>
Oct 09 16:02:03 compute-0 nova_compute[117331]:       <model usable='no' vendor='AMD'>EPYC-Rome-v1</model>
Oct 09 16:02:03 compute-0 nova_compute[117331]:       <blockers model='EPYC-Rome-v1'>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='xsaves'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:       </blockers>
Oct 09 16:02:03 compute-0 nova_compute[117331]:       <model usable='no' vendor='AMD'>EPYC-Rome-v2</model>
Oct 09 16:02:03 compute-0 nova_compute[117331]:       <blockers model='EPYC-Rome-v2'>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='xsaves'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:       </blockers>
Oct 09 16:02:03 compute-0 nova_compute[117331]:       <model usable='no' vendor='AMD'>EPYC-Rome-v3</model>
Oct 09 16:02:03 compute-0 nova_compute[117331]:       <blockers model='EPYC-Rome-v3'>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='xsaves'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:       </blockers>
Oct 09 16:02:03 compute-0 nova_compute[117331]:       <model usable='yes' vendor='AMD'>EPYC-Rome-v4</model>
Oct 09 16:02:03 compute-0 nova_compute[117331]:       <model usable='yes' vendor='AMD'>EPYC-v1</model>
Oct 09 16:02:03 compute-0 nova_compute[117331]:       <model usable='yes' vendor='AMD'>EPYC-v2</model>
Oct 09 16:02:03 compute-0 nova_compute[117331]:       <model usable='no' vendor='AMD'>EPYC-v3</model>
Oct 09 16:02:03 compute-0 nova_compute[117331]:       <blockers model='EPYC-v3'>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='xsaves'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:       </blockers>
Oct 09 16:02:03 compute-0 nova_compute[117331]:       <model usable='no' vendor='AMD'>EPYC-v4</model>
Oct 09 16:02:03 compute-0 nova_compute[117331]:       <blockers model='EPYC-v4'>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='xsaves'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:       </blockers>
Oct 09 16:02:03 compute-0 nova_compute[117331]:       <model usable='no' vendor='Intel' canonical='GraniteRapids-v1'>GraniteRapids</model>
Oct 09 16:02:03 compute-0 nova_compute[117331]:       <blockers model='GraniteRapids'>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='amx-bf16'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='amx-fp16'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='amx-int8'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='amx-tile'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='avx-vnni'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='avx512-bf16'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='avx512-fp16'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='avx512-vpopcntdq'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='avx512bitalg'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='avx512bw'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='avx512cd'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='avx512dq'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='avx512f'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='avx512ifma'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='avx512vbmi'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='avx512vbmi2'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='avx512vl'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='avx512vnni'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='bus-lock-detect'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='erms'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='fbsdp-no'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='fsrc'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='fsrm'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='fsrs'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='fzrm'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='gfni'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='hle'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='ibrs-all'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='invpcid'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='la57'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='mcdt-no'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='pbrsb-no'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='pcid'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='pku'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='prefetchiti'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='psdp-no'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='rtm'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='sbdr-ssdp-no'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='serialize'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='taa-no'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='tsx-ldtrk'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='vaes'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='vpclmulqdq'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='xfd'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='xsaves'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:       </blockers>
Oct 09 16:02:03 compute-0 nova_compute[117331]:       <model usable='no' vendor='Intel'>GraniteRapids-v1</model>
Oct 09 16:02:03 compute-0 nova_compute[117331]:       <blockers model='GraniteRapids-v1'>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='amx-bf16'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='amx-fp16'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='amx-int8'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='amx-tile'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='avx-vnni'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='avx512-bf16'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='avx512-fp16'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='avx512-vpopcntdq'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='avx512bitalg'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='avx512bw'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='avx512cd'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='avx512dq'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='avx512f'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='avx512ifma'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='avx512vbmi'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='avx512vbmi2'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='avx512vl'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='avx512vnni'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='bus-lock-detect'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='erms'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='fbsdp-no'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='fsrc'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='fsrm'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='fsrs'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='fzrm'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='gfni'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='hle'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='ibrs-all'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='invpcid'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='la57'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='mcdt-no'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='pbrsb-no'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='pcid'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='pku'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='prefetchiti'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='psdp-no'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='rtm'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='sbdr-ssdp-no'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='serialize'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='taa-no'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='tsx-ldtrk'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='vaes'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='vpclmulqdq'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='xfd'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='xsaves'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:       </blockers>
Oct 09 16:02:03 compute-0 nova_compute[117331]:       <model usable='no' vendor='Intel'>GraniteRapids-v2</model>
Oct 09 16:02:03 compute-0 nova_compute[117331]:       <blockers model='GraniteRapids-v2'>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='amx-bf16'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='amx-fp16'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='amx-int8'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='amx-tile'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='avx-vnni'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='avx10'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='avx10-128'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='avx10-256'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='avx10-512'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='avx512-bf16'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='avx512-fp16'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='avx512-vpopcntdq'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='avx512bitalg'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='avx512bw'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='avx512cd'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='avx512dq'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='avx512f'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='avx512ifma'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='avx512vbmi'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='avx512vbmi2'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='avx512vl'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='avx512vnni'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='bus-lock-detect'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='cldemote'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='erms'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='fbsdp-no'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='fsrc'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='fsrm'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='fsrs'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='fzrm'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='gfni'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='hle'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='ibrs-all'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='invpcid'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='la57'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='mcdt-no'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='movdir64b'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='movdiri'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='pbrsb-no'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='pcid'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='pku'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='prefetchiti'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='psdp-no'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='rtm'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='sbdr-ssdp-no'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='serialize'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='ss'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='taa-no'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='tsx-ldtrk'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='vaes'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='vpclmulqdq'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='xfd'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='xsaves'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:       </blockers>
Oct 09 16:02:03 compute-0 nova_compute[117331]:       <model usable='no' vendor='Intel' canonical='Haswell-v1'>Haswell</model>
Oct 09 16:02:03 compute-0 nova_compute[117331]:       <blockers model='Haswell'>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='erms'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='hle'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='invpcid'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='pcid'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='rtm'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:       </blockers>
Oct 09 16:02:03 compute-0 nova_compute[117331]:       <model usable='no' vendor='Intel' canonical='Haswell-v3'>Haswell-IBRS</model>
Oct 09 16:02:03 compute-0 nova_compute[117331]:       <blockers model='Haswell-IBRS'>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='erms'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='hle'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='invpcid'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='pcid'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='rtm'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:       </blockers>
Oct 09 16:02:03 compute-0 nova_compute[117331]:       <model usable='no' vendor='Intel' canonical='Haswell-v2'>Haswell-noTSX</model>
Oct 09 16:02:03 compute-0 nova_compute[117331]:       <blockers model='Haswell-noTSX'>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='erms'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='invpcid'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='pcid'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:       </blockers>
Oct 09 16:02:03 compute-0 nova_compute[117331]:       <model usable='no' vendor='Intel' canonical='Haswell-v4'>Haswell-noTSX-IBRS</model>
Oct 09 16:02:03 compute-0 nova_compute[117331]:       <blockers model='Haswell-noTSX-IBRS'>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='erms'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='invpcid'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='pcid'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:       </blockers>
Oct 09 16:02:03 compute-0 nova_compute[117331]:       <model usable='no' vendor='Intel'>Haswell-v1</model>
Oct 09 16:02:03 compute-0 nova_compute[117331]:       <blockers model='Haswell-v1'>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='erms'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='hle'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='invpcid'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='pcid'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='rtm'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:       </blockers>
Oct 09 16:02:03 compute-0 nova_compute[117331]:       <model usable='no' vendor='Intel'>Haswell-v2</model>
Oct 09 16:02:03 compute-0 nova_compute[117331]:       <blockers model='Haswell-v2'>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='erms'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='invpcid'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='pcid'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:       </blockers>
Oct 09 16:02:03 compute-0 nova_compute[117331]:       <model usable='no' vendor='Intel'>Haswell-v3</model>
Oct 09 16:02:03 compute-0 nova_compute[117331]:       <blockers model='Haswell-v3'>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='erms'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='hle'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='invpcid'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='pcid'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='rtm'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:       </blockers>
Oct 09 16:02:03 compute-0 nova_compute[117331]:       <model usable='no' vendor='Intel'>Haswell-v4</model>
Oct 09 16:02:03 compute-0 nova_compute[117331]:       <blockers model='Haswell-v4'>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='erms'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='invpcid'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='pcid'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:       </blockers>
Oct 09 16:02:03 compute-0 nova_compute[117331]:       <model usable='no' vendor='Intel' canonical='Icelake-Server-v1'>Icelake-Server</model>
Oct 09 16:02:03 compute-0 nova_compute[117331]:       <blockers model='Icelake-Server'>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='avx512-vpopcntdq'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='avx512bitalg'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='avx512bw'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='avx512cd'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='avx512dq'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='avx512f'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='avx512vbmi'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='avx512vbmi2'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='avx512vl'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='avx512vnni'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='erms'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='gfni'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='hle'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='invpcid'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='la57'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='pcid'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='pku'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='rtm'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='vaes'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='vpclmulqdq'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:       </blockers>
Oct 09 16:02:03 compute-0 nova_compute[117331]:       <model usable='no' vendor='Intel' canonical='Icelake-Server-v2'>Icelake-Server-noTSX</model>
Oct 09 16:02:03 compute-0 nova_compute[117331]:       <blockers model='Icelake-Server-noTSX'>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='avx512-vpopcntdq'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='avx512bitalg'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='avx512bw'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='avx512cd'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='avx512dq'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='avx512f'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='avx512vbmi'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='avx512vbmi2'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='avx512vl'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='avx512vnni'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='erms'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='gfni'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='invpcid'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='la57'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='pcid'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='pku'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='vaes'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='vpclmulqdq'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:       </blockers>
Oct 09 16:02:03 compute-0 nova_compute[117331]:       <model usable='no' vendor='Intel'>Icelake-Server-v1</model>
Oct 09 16:02:03 compute-0 nova_compute[117331]:       <blockers model='Icelake-Server-v1'>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='avx512-vpopcntdq'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='avx512bitalg'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='avx512bw'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='avx512cd'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='avx512dq'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='avx512f'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='avx512vbmi'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='avx512vbmi2'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='avx512vl'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='avx512vnni'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='erms'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='gfni'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='hle'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='invpcid'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='la57'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='pcid'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='pku'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='rtm'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='vaes'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='vpclmulqdq'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:       </blockers>
Oct 09 16:02:03 compute-0 nova_compute[117331]:       <model usable='no' vendor='Intel'>Icelake-Server-v2</model>
Oct 09 16:02:03 compute-0 nova_compute[117331]:       <blockers model='Icelake-Server-v2'>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='avx512-vpopcntdq'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='avx512bitalg'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='avx512bw'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='avx512cd'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='avx512dq'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='avx512f'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='avx512vbmi'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='avx512vbmi2'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='avx512vl'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='avx512vnni'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='erms'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='gfni'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='invpcid'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='la57'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='pcid'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='pku'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='vaes'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='vpclmulqdq'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:       </blockers>
Oct 09 16:02:03 compute-0 nova_compute[117331]:       <model usable='no' vendor='Intel'>Icelake-Server-v3</model>
Oct 09 16:02:03 compute-0 nova_compute[117331]:       <blockers model='Icelake-Server-v3'>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='avx512-vpopcntdq'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='avx512bitalg'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='avx512bw'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='avx512cd'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='avx512dq'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='avx512f'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='avx512vbmi'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='avx512vbmi2'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='avx512vl'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='avx512vnni'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='erms'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='gfni'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='ibrs-all'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='invpcid'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='la57'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='pcid'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='pku'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='taa-no'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='vaes'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='vpclmulqdq'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:       </blockers>
Oct 09 16:02:03 compute-0 nova_compute[117331]:       <model usable='no' vendor='Intel'>Icelake-Server-v4</model>
Oct 09 16:02:03 compute-0 nova_compute[117331]:       <blockers model='Icelake-Server-v4'>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='avx512-vpopcntdq'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='avx512bitalg'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='avx512bw'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='avx512cd'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='avx512dq'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='avx512f'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='avx512ifma'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='avx512vbmi'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='avx512vbmi2'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='avx512vl'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='avx512vnni'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='erms'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='fsrm'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='gfni'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='ibrs-all'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='invpcid'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='la57'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='pcid'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='pku'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='taa-no'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='vaes'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='vpclmulqdq'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:       </blockers>
Oct 09 16:02:03 compute-0 nova_compute[117331]:       <model usable='no' vendor='Intel'>Icelake-Server-v5</model>
Oct 09 16:02:03 compute-0 nova_compute[117331]:       <blockers model='Icelake-Server-v5'>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='avx512-vpopcntdq'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='avx512bitalg'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='avx512bw'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='avx512cd'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='avx512dq'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='avx512f'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='avx512ifma'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='avx512vbmi'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='avx512vbmi2'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='avx512vl'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='avx512vnni'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='erms'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='fsrm'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='gfni'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='ibrs-all'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='invpcid'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='la57'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='pcid'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='pku'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='taa-no'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='vaes'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='vpclmulqdq'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='xsaves'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:       </blockers>
Oct 09 16:02:03 compute-0 nova_compute[117331]:       <model usable='no' vendor='Intel'>Icelake-Server-v6</model>
Oct 09 16:02:03 compute-0 nova_compute[117331]:       <blockers model='Icelake-Server-v6'>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='avx512-vpopcntdq'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='avx512bitalg'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='avx512bw'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='avx512cd'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='avx512dq'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='avx512f'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='avx512ifma'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='avx512vbmi'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='avx512vbmi2'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='avx512vl'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='avx512vnni'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='erms'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='fsrm'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='gfni'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='ibrs-all'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='invpcid'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='la57'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='pcid'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='pku'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='taa-no'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='vaes'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='vpclmulqdq'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='xsaves'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:       </blockers>
Oct 09 16:02:03 compute-0 nova_compute[117331]:       <model usable='no' vendor='Intel'>Icelake-Server-v7</model>
Oct 09 16:02:03 compute-0 nova_compute[117331]:       <blockers model='Icelake-Server-v7'>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='avx512-vpopcntdq'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='avx512bitalg'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='avx512bw'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='avx512cd'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='avx512dq'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='avx512f'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='avx512ifma'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='avx512vbmi'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='avx512vbmi2'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='avx512vl'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='avx512vnni'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='erms'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='fsrm'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='gfni'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='hle'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='ibrs-all'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='invpcid'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='la57'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='pcid'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='pku'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='rtm'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='taa-no'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='vaes'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='vpclmulqdq'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='xsaves'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:       </blockers>
Oct 09 16:02:03 compute-0 nova_compute[117331]:       <model usable='no' vendor='Intel' canonical='IvyBridge-v1'>IvyBridge</model>
Oct 09 16:02:03 compute-0 nova_compute[117331]:       <blockers model='IvyBridge'>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='erms'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:       </blockers>
Oct 09 16:02:03 compute-0 nova_compute[117331]:       <model usable='no' vendor='Intel' canonical='IvyBridge-v2'>IvyBridge-IBRS</model>
Oct 09 16:02:03 compute-0 nova_compute[117331]:       <blockers model='IvyBridge-IBRS'>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='erms'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:       </blockers>
Oct 09 16:02:03 compute-0 nova_compute[117331]:       <model usable='no' vendor='Intel'>IvyBridge-v1</model>
Oct 09 16:02:03 compute-0 nova_compute[117331]:       <blockers model='IvyBridge-v1'>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='erms'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:       </blockers>
Oct 09 16:02:03 compute-0 nova_compute[117331]:       <model usable='no' vendor='Intel'>IvyBridge-v2</model>
Oct 09 16:02:03 compute-0 nova_compute[117331]:       <blockers model='IvyBridge-v2'>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='erms'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:       </blockers>
Oct 09 16:02:03 compute-0 nova_compute[117331]:       <model usable='no' vendor='Intel' canonical='KnightsMill-v1'>KnightsMill</model>
Oct 09 16:02:03 compute-0 nova_compute[117331]:       <blockers model='KnightsMill'>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='avx512-4fmaps'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='avx512-4vnniw'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='avx512-vpopcntdq'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='avx512cd'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='avx512er'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='avx512f'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='avx512pf'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='erms'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='ss'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:       </blockers>
Oct 09 16:02:03 compute-0 nova_compute[117331]:       <model usable='no' vendor='Intel'>KnightsMill-v1</model>
Oct 09 16:02:03 compute-0 nova_compute[117331]:       <blockers model='KnightsMill-v1'>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='avx512-4fmaps'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='avx512-4vnniw'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='avx512-vpopcntdq'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='avx512cd'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='avx512er'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='avx512f'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='avx512pf'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='erms'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='ss'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:       </blockers>
Oct 09 16:02:03 compute-0 nova_compute[117331]:       <model usable='yes' vendor='Intel' canonical='Nehalem-v1'>Nehalem</model>
Oct 09 16:02:03 compute-0 nova_compute[117331]:       <model usable='yes' vendor='Intel' canonical='Nehalem-v2'>Nehalem-IBRS</model>
Oct 09 16:02:03 compute-0 nova_compute[117331]:       <model usable='yes' vendor='Intel'>Nehalem-v1</model>
Oct 09 16:02:03 compute-0 nova_compute[117331]:       <model usable='yes' vendor='Intel'>Nehalem-v2</model>
Oct 09 16:02:03 compute-0 nova_compute[117331]:       <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G1-v1'>Opteron_G1</model>
Oct 09 16:02:03 compute-0 nova_compute[117331]:       <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G1-v1</model>
Oct 09 16:02:03 compute-0 nova_compute[117331]:       <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G2-v1'>Opteron_G2</model>
Oct 09 16:02:03 compute-0 nova_compute[117331]:       <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G2-v1</model>
Oct 09 16:02:03 compute-0 nova_compute[117331]:       <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G3-v1'>Opteron_G3</model>
Oct 09 16:02:03 compute-0 nova_compute[117331]:       <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G3-v1</model>
Oct 09 16:02:03 compute-0 nova_compute[117331]:       <model usable='no' vendor='AMD' canonical='Opteron_G4-v1'>Opteron_G4</model>
Oct 09 16:02:03 compute-0 nova_compute[117331]:       <blockers model='Opteron_G4'>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='fma4'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='xop'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:       </blockers>
Oct 09 16:02:03 compute-0 nova_compute[117331]:       <model usable='no' vendor='AMD'>Opteron_G4-v1</model>
Oct 09 16:02:03 compute-0 nova_compute[117331]:       <blockers model='Opteron_G4-v1'>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='fma4'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='xop'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:       </blockers>
Oct 09 16:02:03 compute-0 nova_compute[117331]:       <model usable='no' vendor='AMD' canonical='Opteron_G5-v1'>Opteron_G5</model>
Oct 09 16:02:03 compute-0 nova_compute[117331]:       <blockers model='Opteron_G5'>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='fma4'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='tbm'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='xop'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:       </blockers>
Oct 09 16:02:03 compute-0 nova_compute[117331]:       <model usable='no' vendor='AMD'>Opteron_G5-v1</model>
Oct 09 16:02:03 compute-0 nova_compute[117331]:       <blockers model='Opteron_G5-v1'>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='fma4'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='tbm'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='xop'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:       </blockers>
Oct 09 16:02:03 compute-0 nova_compute[117331]:       <model usable='yes' deprecated='yes' vendor='Intel' canonical='Penryn-v1'>Penryn</model>
Oct 09 16:02:03 compute-0 nova_compute[117331]:       <model usable='yes' deprecated='yes' vendor='Intel'>Penryn-v1</model>
Oct 09 16:02:03 compute-0 nova_compute[117331]:       <model usable='yes' vendor='Intel' canonical='SandyBridge-v1'>SandyBridge</model>
Oct 09 16:02:03 compute-0 nova_compute[117331]:       <model usable='yes' vendor='Intel' canonical='SandyBridge-v2'>SandyBridge-IBRS</model>
Oct 09 16:02:03 compute-0 nova_compute[117331]:       <model usable='yes' vendor='Intel'>SandyBridge-v1</model>
Oct 09 16:02:03 compute-0 nova_compute[117331]:       <model usable='yes' vendor='Intel'>SandyBridge-v2</model>
Oct 09 16:02:03 compute-0 nova_compute[117331]:       <model usable='no' vendor='Intel' canonical='SapphireRapids-v1'>SapphireRapids</model>
Oct 09 16:02:03 compute-0 nova_compute[117331]:       <blockers model='SapphireRapids'>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='amx-bf16'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='amx-int8'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='amx-tile'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='avx-vnni'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='avx512-bf16'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='avx512-fp16'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='avx512-vpopcntdq'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='avx512bitalg'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='avx512bw'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='avx512cd'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='avx512dq'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='avx512f'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='avx512ifma'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='avx512vbmi'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='avx512vbmi2'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='avx512vl'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='avx512vnni'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='bus-lock-detect'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='erms'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='fsrc'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='fsrm'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='fsrs'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='fzrm'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='gfni'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='hle'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='ibrs-all'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='invpcid'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='la57'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='pcid'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='pku'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='rtm'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='serialize'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='taa-no'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='tsx-ldtrk'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='vaes'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='vpclmulqdq'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='xfd'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='xsaves'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:       </blockers>
Oct 09 16:02:03 compute-0 nova_compute[117331]:       <model usable='no' vendor='Intel'>SapphireRapids-v1</model>
Oct 09 16:02:03 compute-0 nova_compute[117331]:       <blockers model='SapphireRapids-v1'>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='amx-bf16'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='amx-int8'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='amx-tile'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='avx-vnni'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='avx512-bf16'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='avx512-fp16'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='avx512-vpopcntdq'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='avx512bitalg'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='avx512bw'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='avx512cd'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='avx512dq'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='avx512f'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='avx512ifma'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='avx512vbmi'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='avx512vbmi2'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='avx512vl'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='avx512vnni'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='bus-lock-detect'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='erms'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='fsrc'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='fsrm'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='fsrs'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='fzrm'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='gfni'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='hle'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='ibrs-all'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='invpcid'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='la57'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='pcid'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='pku'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='rtm'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='serialize'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='taa-no'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='tsx-ldtrk'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='vaes'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='vpclmulqdq'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='xfd'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='xsaves'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:       </blockers>
Oct 09 16:02:03 compute-0 nova_compute[117331]:       <model usable='no' vendor='Intel'>SapphireRapids-v2</model>
Oct 09 16:02:03 compute-0 nova_compute[117331]:       <blockers model='SapphireRapids-v2'>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='amx-bf16'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='amx-int8'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='amx-tile'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='avx-vnni'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='avx512-bf16'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='avx512-fp16'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='avx512-vpopcntdq'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='avx512bitalg'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='avx512bw'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='avx512cd'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='avx512dq'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='avx512f'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='avx512ifma'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='avx512vbmi'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='avx512vbmi2'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='avx512vl'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='avx512vnni'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='bus-lock-detect'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='erms'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='fbsdp-no'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='fsrc'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='fsrm'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='fsrs'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='fzrm'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='gfni'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='hle'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='ibrs-all'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='invpcid'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='la57'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='pcid'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='pku'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='psdp-no'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='rtm'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='sbdr-ssdp-no'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='serialize'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='taa-no'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='tsx-ldtrk'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='vaes'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='vpclmulqdq'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='xfd'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='xsaves'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:       </blockers>
Oct 09 16:02:03 compute-0 nova_compute[117331]:       <model usable='no' vendor='Intel'>SapphireRapids-v3</model>
Oct 09 16:02:03 compute-0 nova_compute[117331]:       <blockers model='SapphireRapids-v3'>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='amx-bf16'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='amx-int8'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='amx-tile'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='avx-vnni'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='avx512-bf16'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='avx512-fp16'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='avx512-vpopcntdq'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='avx512bitalg'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='avx512bw'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='avx512cd'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='avx512dq'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='avx512f'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='avx512ifma'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='avx512vbmi'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='avx512vbmi2'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='avx512vl'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='avx512vnni'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='bus-lock-detect'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='cldemote'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='erms'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='fbsdp-no'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='fsrc'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='fsrm'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='fsrs'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='fzrm'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='gfni'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='hle'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='ibrs-all'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='invpcid'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='la57'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='movdir64b'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='movdiri'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='pcid'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='pku'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='psdp-no'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='rtm'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='sbdr-ssdp-no'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='serialize'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='ss'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='taa-no'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='tsx-ldtrk'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='vaes'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='vpclmulqdq'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='xfd'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='xsaves'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:       </blockers>
Oct 09 16:02:03 compute-0 nova_compute[117331]:       <model usable='no' vendor='Intel' canonical='SierraForest-v1'>SierraForest</model>
Oct 09 16:02:03 compute-0 nova_compute[117331]:       <blockers model='SierraForest'>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='avx-ifma'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='avx-ne-convert'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='avx-vnni'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='avx-vnni-int8'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='bus-lock-detect'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='cmpccxadd'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='erms'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='fbsdp-no'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='fsrm'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='fsrs'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='gfni'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='ibrs-all'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='invpcid'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='mcdt-no'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='pbrsb-no'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='pcid'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='pku'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='psdp-no'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='sbdr-ssdp-no'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='serialize'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='vaes'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='vpclmulqdq'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='xsaves'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:       </blockers>
Oct 09 16:02:03 compute-0 nova_compute[117331]:       <model usable='no' vendor='Intel'>SierraForest-v1</model>
Oct 09 16:02:03 compute-0 nova_compute[117331]:       <blockers model='SierraForest-v1'>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='avx-ifma'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='avx-ne-convert'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='avx-vnni'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='avx-vnni-int8'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='bus-lock-detect'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='cmpccxadd'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='erms'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='fbsdp-no'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='fsrm'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='fsrs'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='gfni'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='ibrs-all'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='invpcid'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='mcdt-no'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='pbrsb-no'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='pcid'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='pku'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='psdp-no'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='sbdr-ssdp-no'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='serialize'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='vaes'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='vpclmulqdq'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='xsaves'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:       </blockers>
Oct 09 16:02:03 compute-0 nova_compute[117331]:       <model usable='no' vendor='Intel' canonical='Skylake-Client-v1'>Skylake-Client</model>
Oct 09 16:02:03 compute-0 nova_compute[117331]:       <blockers model='Skylake-Client'>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='erms'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='hle'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='invpcid'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='pcid'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='rtm'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:       </blockers>
Oct 09 16:02:03 compute-0 nova_compute[117331]:       <model usable='no' vendor='Intel' canonical='Skylake-Client-v2'>Skylake-Client-IBRS</model>
Oct 09 16:02:03 compute-0 nova_compute[117331]:       <blockers model='Skylake-Client-IBRS'>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='erms'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='hle'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='invpcid'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='pcid'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='rtm'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:       </blockers>
Oct 09 16:02:03 compute-0 nova_compute[117331]:       <model usable='no' vendor='Intel' canonical='Skylake-Client-v3'>Skylake-Client-noTSX-IBRS</model>
Oct 09 16:02:03 compute-0 nova_compute[117331]:       <blockers model='Skylake-Client-noTSX-IBRS'>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='erms'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='invpcid'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='pcid'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:       </blockers>
Oct 09 16:02:03 compute-0 nova_compute[117331]:       <model usable='no' vendor='Intel'>Skylake-Client-v1</model>
Oct 09 16:02:03 compute-0 nova_compute[117331]:       <blockers model='Skylake-Client-v1'>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='erms'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='hle'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='invpcid'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='pcid'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='rtm'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:       </blockers>
Oct 09 16:02:03 compute-0 nova_compute[117331]:       <model usable='no' vendor='Intel'>Skylake-Client-v2</model>
Oct 09 16:02:03 compute-0 nova_compute[117331]:       <blockers model='Skylake-Client-v2'>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='erms'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='hle'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='invpcid'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='pcid'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='rtm'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:       </blockers>
Oct 09 16:02:03 compute-0 nova_compute[117331]:       <model usable='no' vendor='Intel'>Skylake-Client-v3</model>
Oct 09 16:02:03 compute-0 nova_compute[117331]:       <blockers model='Skylake-Client-v3'>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='erms'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='invpcid'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='pcid'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:       </blockers>
Oct 09 16:02:03 compute-0 nova_compute[117331]:       <model usable='no' vendor='Intel'>Skylake-Client-v4</model>
Oct 09 16:02:03 compute-0 nova_compute[117331]:       <blockers model='Skylake-Client-v4'>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='erms'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='invpcid'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='pcid'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='xsaves'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:       </blockers>
Oct 09 16:02:03 compute-0 nova_compute[117331]:       <model usable='no' vendor='Intel' canonical='Skylake-Server-v1'>Skylake-Server</model>
Oct 09 16:02:03 compute-0 nova_compute[117331]:       <blockers model='Skylake-Server'>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='avx512bw'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='avx512cd'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='avx512dq'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='avx512f'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='avx512vl'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='erms'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='hle'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='invpcid'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='pcid'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='pku'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='rtm'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:       </blockers>
Oct 09 16:02:03 compute-0 nova_compute[117331]:       <model usable='no' vendor='Intel' canonical='Skylake-Server-v2'>Skylake-Server-IBRS</model>
Oct 09 16:02:03 compute-0 nova_compute[117331]:       <blockers model='Skylake-Server-IBRS'>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='avx512bw'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='avx512cd'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='avx512dq'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='avx512f'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='avx512vl'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='erms'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='hle'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='invpcid'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='pcid'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='pku'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='rtm'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:       </blockers>
Oct 09 16:02:03 compute-0 nova_compute[117331]:       <model usable='no' vendor='Intel' canonical='Skylake-Server-v3'>Skylake-Server-noTSX-IBRS</model>
Oct 09 16:02:03 compute-0 nova_compute[117331]:       <blockers model='Skylake-Server-noTSX-IBRS'>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='avx512bw'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='avx512cd'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='avx512dq'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='avx512f'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='avx512vl'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='erms'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='invpcid'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='pcid'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='pku'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:       </blockers>
Oct 09 16:02:03 compute-0 nova_compute[117331]:       <model usable='no' vendor='Intel'>Skylake-Server-v1</model>
Oct 09 16:02:03 compute-0 nova_compute[117331]:       <blockers model='Skylake-Server-v1'>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='avx512bw'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='avx512cd'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='avx512dq'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='avx512f'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='avx512vl'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='erms'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='hle'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='invpcid'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='pcid'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='pku'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='rtm'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:       </blockers>
Oct 09 16:02:03 compute-0 nova_compute[117331]:       <model usable='no' vendor='Intel'>Skylake-Server-v2</model>
Oct 09 16:02:03 compute-0 nova_compute[117331]:       <blockers model='Skylake-Server-v2'>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='avx512bw'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='avx512cd'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='avx512dq'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='avx512f'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='avx512vl'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='erms'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='hle'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='invpcid'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='pcid'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='pku'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='rtm'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:       </blockers>
Oct 09 16:02:03 compute-0 nova_compute[117331]:       <model usable='no' vendor='Intel'>Skylake-Server-v3</model>
Oct 09 16:02:03 compute-0 nova_compute[117331]:       <blockers model='Skylake-Server-v3'>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='avx512bw'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='avx512cd'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='avx512dq'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='avx512f'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='avx512vl'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='erms'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='invpcid'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='pcid'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='pku'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:       </blockers>
Oct 09 16:02:03 compute-0 nova_compute[117331]:       <model usable='no' vendor='Intel'>Skylake-Server-v4</model>
Oct 09 16:02:03 compute-0 nova_compute[117331]:       <blockers model='Skylake-Server-v4'>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='avx512bw'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='avx512cd'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='avx512dq'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='avx512f'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='avx512vl'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='erms'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='invpcid'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='pcid'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='pku'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:       </blockers>
Oct 09 16:02:03 compute-0 nova_compute[117331]:       <model usable='no' vendor='Intel'>Skylake-Server-v5</model>
Oct 09 16:02:03 compute-0 nova_compute[117331]:       <blockers model='Skylake-Server-v5'>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='avx512bw'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='avx512cd'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='avx512dq'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='avx512f'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='avx512vl'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='erms'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='invpcid'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='pcid'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='pku'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='xsaves'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:       </blockers>
Oct 09 16:02:03 compute-0 nova_compute[117331]:       <model usable='no' vendor='Intel' canonical='Snowridge-v1'>Snowridge</model>
Oct 09 16:02:03 compute-0 nova_compute[117331]:       <blockers model='Snowridge'>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='cldemote'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='core-capability'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='erms'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='gfni'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='movdir64b'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='movdiri'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='mpx'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='split-lock-detect'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:       </blockers>
Oct 09 16:02:03 compute-0 nova_compute[117331]:       <model usable='no' vendor='Intel'>Snowridge-v1</model>
Oct 09 16:02:03 compute-0 nova_compute[117331]:       <blockers model='Snowridge-v1'>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='cldemote'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='core-capability'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='erms'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='gfni'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='movdir64b'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='movdiri'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='mpx'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='split-lock-detect'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:       </blockers>
Oct 09 16:02:03 compute-0 nova_compute[117331]:       <model usable='no' vendor='Intel'>Snowridge-v2</model>
Oct 09 16:02:03 compute-0 nova_compute[117331]:       <blockers model='Snowridge-v2'>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='cldemote'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='core-capability'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='erms'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='gfni'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='movdir64b'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='movdiri'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='split-lock-detect'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:       </blockers>
Oct 09 16:02:03 compute-0 nova_compute[117331]:       <model usable='no' vendor='Intel'>Snowridge-v3</model>
Oct 09 16:02:03 compute-0 nova_compute[117331]:       <blockers model='Snowridge-v3'>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='cldemote'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='core-capability'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='erms'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='gfni'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='movdir64b'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='movdiri'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='split-lock-detect'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='xsaves'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:       </blockers>
Oct 09 16:02:03 compute-0 nova_compute[117331]:       <model usable='no' vendor='Intel'>Snowridge-v4</model>
Oct 09 16:02:03 compute-0 nova_compute[117331]:       <blockers model='Snowridge-v4'>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='cldemote'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='erms'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='gfni'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='movdir64b'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='movdiri'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='xsaves'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:       </blockers>
Oct 09 16:02:03 compute-0 nova_compute[117331]:       <model usable='yes' vendor='Intel' canonical='Westmere-v1'>Westmere</model>
Oct 09 16:02:03 compute-0 nova_compute[117331]:       <model usable='yes' vendor='Intel' canonical='Westmere-v2'>Westmere-IBRS</model>
Oct 09 16:02:03 compute-0 nova_compute[117331]:       <model usable='yes' vendor='Intel'>Westmere-v1</model>
Oct 09 16:02:03 compute-0 nova_compute[117331]:       <model usable='yes' vendor='Intel'>Westmere-v2</model>
Oct 09 16:02:03 compute-0 nova_compute[117331]:       <model usable='no' deprecated='yes' vendor='AMD' canonical='athlon-v1'>athlon</model>
Oct 09 16:02:03 compute-0 nova_compute[117331]:       <blockers model='athlon'>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='3dnow'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='3dnowext'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:       </blockers>
Oct 09 16:02:03 compute-0 nova_compute[117331]:       <model usable='no' deprecated='yes' vendor='AMD'>athlon-v1</model>
Oct 09 16:02:03 compute-0 nova_compute[117331]:       <blockers model='athlon-v1'>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='3dnow'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='3dnowext'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:       </blockers>
Oct 09 16:02:03 compute-0 nova_compute[117331]:       <model usable='no' deprecated='yes' vendor='Intel' canonical='core2duo-v1'>core2duo</model>
Oct 09 16:02:03 compute-0 nova_compute[117331]:       <blockers model='core2duo'>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='ss'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:       </blockers>
Oct 09 16:02:03 compute-0 nova_compute[117331]:       <model usable='no' deprecated='yes' vendor='Intel'>core2duo-v1</model>
Oct 09 16:02:03 compute-0 nova_compute[117331]:       <blockers model='core2duo-v1'>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='ss'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:       </blockers>
Oct 09 16:02:03 compute-0 nova_compute[117331]:       <model usable='no' deprecated='yes' vendor='Intel' canonical='coreduo-v1'>coreduo</model>
Oct 09 16:02:03 compute-0 nova_compute[117331]:       <blockers model='coreduo'>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='ss'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:       </blockers>
Oct 09 16:02:03 compute-0 nova_compute[117331]:       <model usable='no' deprecated='yes' vendor='Intel'>coreduo-v1</model>
Oct 09 16:02:03 compute-0 nova_compute[117331]:       <blockers model='coreduo-v1'>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='ss'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:       </blockers>
Oct 09 16:02:03 compute-0 nova_compute[117331]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='kvm32-v1'>kvm32</model>
Oct 09 16:02:03 compute-0 nova_compute[117331]:       <model usable='yes' deprecated='yes' vendor='unknown'>kvm32-v1</model>
Oct 09 16:02:03 compute-0 nova_compute[117331]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='kvm64-v1'>kvm64</model>
Oct 09 16:02:03 compute-0 nova_compute[117331]:       <model usable='yes' deprecated='yes' vendor='unknown'>kvm64-v1</model>
Oct 09 16:02:03 compute-0 nova_compute[117331]:       <model usable='no' deprecated='yes' vendor='Intel' canonical='n270-v1'>n270</model>
Oct 09 16:02:03 compute-0 nova_compute[117331]:       <blockers model='n270'>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='ss'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:       </blockers>
Oct 09 16:02:03 compute-0 nova_compute[117331]:       <model usable='no' deprecated='yes' vendor='Intel'>n270-v1</model>
Oct 09 16:02:03 compute-0 nova_compute[117331]:       <blockers model='n270-v1'>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='ss'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:       </blockers>
Oct 09 16:02:03 compute-0 nova_compute[117331]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium-v1'>pentium</model>
Oct 09 16:02:03 compute-0 nova_compute[117331]:       <model usable='yes' deprecated='yes' vendor='unknown'>pentium-v1</model>
Oct 09 16:02:03 compute-0 nova_compute[117331]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium2-v1'>pentium2</model>
Oct 09 16:02:03 compute-0 nova_compute[117331]:       <model usable='yes' deprecated='yes' vendor='unknown'>pentium2-v1</model>
Oct 09 16:02:03 compute-0 nova_compute[117331]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium3-v1'>pentium3</model>
Oct 09 16:02:03 compute-0 nova_compute[117331]:       <model usable='yes' deprecated='yes' vendor='unknown'>pentium3-v1</model>
Oct 09 16:02:03 compute-0 nova_compute[117331]:       <model usable='no' deprecated='yes' vendor='AMD' canonical='phenom-v1'>phenom</model>
Oct 09 16:02:03 compute-0 nova_compute[117331]:       <blockers model='phenom'>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='3dnow'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='3dnowext'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:       </blockers>
Oct 09 16:02:03 compute-0 nova_compute[117331]:       <model usable='no' deprecated='yes' vendor='AMD'>phenom-v1</model>
Oct 09 16:02:03 compute-0 nova_compute[117331]:       <blockers model='phenom-v1'>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='3dnow'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='3dnowext'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:       </blockers>
Oct 09 16:02:03 compute-0 nova_compute[117331]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='qemu32-v1'>qemu32</model>
Oct 09 16:02:03 compute-0 nova_compute[117331]:       <model usable='yes' deprecated='yes' vendor='unknown'>qemu32-v1</model>
Oct 09 16:02:03 compute-0 nova_compute[117331]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='qemu64-v1'>qemu64</model>
Oct 09 16:02:03 compute-0 nova_compute[117331]:       <model usable='yes' deprecated='yes' vendor='unknown'>qemu64-v1</model>
Oct 09 16:02:03 compute-0 nova_compute[117331]:     </mode>
Oct 09 16:02:03 compute-0 nova_compute[117331]:   </cpu>
Oct 09 16:02:03 compute-0 nova_compute[117331]:   <memoryBacking supported='yes'>
Oct 09 16:02:03 compute-0 nova_compute[117331]:     <enum name='sourceType'>
Oct 09 16:02:03 compute-0 nova_compute[117331]:       <value>file</value>
Oct 09 16:02:03 compute-0 nova_compute[117331]:       <value>anonymous</value>
Oct 09 16:02:03 compute-0 nova_compute[117331]:       <value>memfd</value>
Oct 09 16:02:03 compute-0 nova_compute[117331]:     </enum>
Oct 09 16:02:03 compute-0 nova_compute[117331]:   </memoryBacking>
Oct 09 16:02:03 compute-0 nova_compute[117331]:   <devices>
Oct 09 16:02:03 compute-0 nova_compute[117331]:     <disk supported='yes'>
Oct 09 16:02:03 compute-0 nova_compute[117331]:       <enum name='diskDevice'>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <value>disk</value>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <value>cdrom</value>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <value>floppy</value>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <value>lun</value>
Oct 09 16:02:03 compute-0 nova_compute[117331]:       </enum>
Oct 09 16:02:03 compute-0 nova_compute[117331]:       <enum name='bus'>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <value>ide</value>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <value>fdc</value>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <value>scsi</value>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <value>virtio</value>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <value>usb</value>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <value>sata</value>
Oct 09 16:02:03 compute-0 nova_compute[117331]:       </enum>
Oct 09 16:02:03 compute-0 nova_compute[117331]:       <enum name='model'>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <value>virtio</value>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <value>virtio-transitional</value>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <value>virtio-non-transitional</value>
Oct 09 16:02:03 compute-0 nova_compute[117331]:       </enum>
Oct 09 16:02:03 compute-0 nova_compute[117331]:     </disk>
Oct 09 16:02:03 compute-0 nova_compute[117331]:     <graphics supported='yes'>
Oct 09 16:02:03 compute-0 nova_compute[117331]:       <enum name='type'>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <value>vnc</value>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <value>egl-headless</value>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <value>dbus</value>
Oct 09 16:02:03 compute-0 nova_compute[117331]:       </enum>
Oct 09 16:02:03 compute-0 nova_compute[117331]:     </graphics>
Oct 09 16:02:03 compute-0 nova_compute[117331]:     <video supported='yes'>
Oct 09 16:02:03 compute-0 nova_compute[117331]:       <enum name='modelType'>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <value>vga</value>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <value>cirrus</value>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <value>virtio</value>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <value>none</value>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <value>bochs</value>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <value>ramfb</value>
Oct 09 16:02:03 compute-0 nova_compute[117331]:       </enum>
Oct 09 16:02:03 compute-0 nova_compute[117331]:     </video>
Oct 09 16:02:03 compute-0 nova_compute[117331]:     <hostdev supported='yes'>
Oct 09 16:02:03 compute-0 nova_compute[117331]:       <enum name='mode'>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <value>subsystem</value>
Oct 09 16:02:03 compute-0 nova_compute[117331]:       </enum>
Oct 09 16:02:03 compute-0 nova_compute[117331]:       <enum name='startupPolicy'>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <value>default</value>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <value>mandatory</value>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <value>requisite</value>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <value>optional</value>
Oct 09 16:02:03 compute-0 nova_compute[117331]:       </enum>
Oct 09 16:02:03 compute-0 nova_compute[117331]:       <enum name='subsysType'>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <value>usb</value>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <value>pci</value>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <value>scsi</value>
Oct 09 16:02:03 compute-0 nova_compute[117331]:       </enum>
Oct 09 16:02:03 compute-0 nova_compute[117331]:       <enum name='capsType'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:       <enum name='pciBackend'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:     </hostdev>
Oct 09 16:02:03 compute-0 nova_compute[117331]:     <rng supported='yes'>
Oct 09 16:02:03 compute-0 nova_compute[117331]:       <enum name='model'>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <value>virtio</value>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <value>virtio-transitional</value>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <value>virtio-non-transitional</value>
Oct 09 16:02:03 compute-0 nova_compute[117331]:       </enum>
Oct 09 16:02:03 compute-0 nova_compute[117331]:       <enum name='backendModel'>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <value>random</value>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <value>egd</value>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <value>builtin</value>
Oct 09 16:02:03 compute-0 nova_compute[117331]:       </enum>
Oct 09 16:02:03 compute-0 nova_compute[117331]:     </rng>
Oct 09 16:02:03 compute-0 nova_compute[117331]:     <filesystem supported='yes'>
Oct 09 16:02:03 compute-0 nova_compute[117331]:       <enum name='driverType'>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <value>path</value>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <value>handle</value>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <value>virtiofs</value>
Oct 09 16:02:03 compute-0 nova_compute[117331]:       </enum>
Oct 09 16:02:03 compute-0 nova_compute[117331]:     </filesystem>
Oct 09 16:02:03 compute-0 nova_compute[117331]:     <tpm supported='yes'>
Oct 09 16:02:03 compute-0 nova_compute[117331]:       <enum name='model'>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <value>tpm-tis</value>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <value>tpm-crb</value>
Oct 09 16:02:03 compute-0 nova_compute[117331]:       </enum>
Oct 09 16:02:03 compute-0 nova_compute[117331]:       <enum name='backendModel'>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <value>emulator</value>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <value>external</value>
Oct 09 16:02:03 compute-0 nova_compute[117331]:       </enum>
Oct 09 16:02:03 compute-0 nova_compute[117331]:       <enum name='backendVersion'>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <value>2.0</value>
Oct 09 16:02:03 compute-0 nova_compute[117331]:       </enum>
Oct 09 16:02:03 compute-0 nova_compute[117331]:     </tpm>
Oct 09 16:02:03 compute-0 nova_compute[117331]:     <redirdev supported='yes'>
Oct 09 16:02:03 compute-0 nova_compute[117331]:       <enum name='bus'>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <value>usb</value>
Oct 09 16:02:03 compute-0 nova_compute[117331]:       </enum>
Oct 09 16:02:03 compute-0 nova_compute[117331]:     </redirdev>
Oct 09 16:02:03 compute-0 nova_compute[117331]:     <channel supported='yes'>
Oct 09 16:02:03 compute-0 nova_compute[117331]:       <enum name='type'>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <value>pty</value>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <value>unix</value>
Oct 09 16:02:03 compute-0 nova_compute[117331]:       </enum>
Oct 09 16:02:03 compute-0 nova_compute[117331]:     </channel>
Oct 09 16:02:03 compute-0 nova_compute[117331]:     <crypto supported='yes'>
Oct 09 16:02:03 compute-0 nova_compute[117331]:       <enum name='model'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:       <enum name='type'>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <value>qemu</value>
Oct 09 16:02:03 compute-0 nova_compute[117331]:       </enum>
Oct 09 16:02:03 compute-0 nova_compute[117331]:       <enum name='backendModel'>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <value>builtin</value>
Oct 09 16:02:03 compute-0 nova_compute[117331]:       </enum>
Oct 09 16:02:03 compute-0 nova_compute[117331]:     </crypto>
Oct 09 16:02:03 compute-0 nova_compute[117331]:     <interface supported='yes'>
Oct 09 16:02:03 compute-0 nova_compute[117331]:       <enum name='backendType'>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <value>default</value>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <value>passt</value>
Oct 09 16:02:03 compute-0 nova_compute[117331]:       </enum>
Oct 09 16:02:03 compute-0 nova_compute[117331]:     </interface>
Oct 09 16:02:03 compute-0 nova_compute[117331]:     <panic supported='yes'>
Oct 09 16:02:03 compute-0 nova_compute[117331]:       <enum name='model'>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <value>isa</value>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <value>hyperv</value>
Oct 09 16:02:03 compute-0 nova_compute[117331]:       </enum>
Oct 09 16:02:03 compute-0 nova_compute[117331]:     </panic>
Oct 09 16:02:03 compute-0 nova_compute[117331]:   </devices>
Oct 09 16:02:03 compute-0 nova_compute[117331]:   <features>
Oct 09 16:02:03 compute-0 nova_compute[117331]:     <gic supported='no'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:     <vmcoreinfo supported='yes'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:     <genid supported='yes'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:     <backingStoreInput supported='yes'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:     <backup supported='yes'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:     <async-teardown supported='yes'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:     <ps2 supported='yes'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:     <sev supported='no'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:     <sgx supported='no'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:     <hyperv supported='yes'>
Oct 09 16:02:03 compute-0 nova_compute[117331]:       <enum name='features'>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <value>relaxed</value>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <value>vapic</value>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <value>spinlocks</value>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <value>vpindex</value>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <value>runtime</value>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <value>synic</value>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <value>stimer</value>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <value>reset</value>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <value>vendor_id</value>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <value>frequencies</value>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <value>reenlightenment</value>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <value>tlbflush</value>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <value>ipi</value>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <value>avic</value>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <value>emsr_bitmap</value>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <value>xmm_input</value>
Oct 09 16:02:03 compute-0 nova_compute[117331]:       </enum>
Oct 09 16:02:03 compute-0 nova_compute[117331]:     </hyperv>
Oct 09 16:02:03 compute-0 nova_compute[117331]:     <launchSecurity supported='no'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:   </features>
Oct 09 16:02:03 compute-0 nova_compute[117331]: </domainCapabilities>
Oct 09 16:02:03 compute-0 nova_compute[117331]:  _get_domain_capabilities /usr/lib/python3.12/site-packages/nova/virt/libvirt/host.py:1029
Oct 09 16:02:03 compute-0 nova_compute[117331]: 2025-10-09 16:02:03.527 2 DEBUG nova.virt.libvirt.host [None req-d385ee3e-f913-4a9a-bb24-31bd7762fadb - - - - - -] Libvirt host hypervisor capabilities for arch=x86_64 and machine_type=q35:
Oct 09 16:02:03 compute-0 nova_compute[117331]: <domainCapabilities>
Oct 09 16:02:03 compute-0 nova_compute[117331]:   <path>/usr/libexec/qemu-kvm</path>
Oct 09 16:02:03 compute-0 nova_compute[117331]:   <domain>kvm</domain>
Oct 09 16:02:03 compute-0 nova_compute[117331]:   <machine>pc-q35-rhel9.6.0</machine>
Oct 09 16:02:03 compute-0 nova_compute[117331]:   <arch>x86_64</arch>
Oct 09 16:02:03 compute-0 nova_compute[117331]:   <vcpu max='4096'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:   <iothreads supported='yes'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:   <os supported='yes'>
Oct 09 16:02:03 compute-0 nova_compute[117331]:     <enum name='firmware'>
Oct 09 16:02:03 compute-0 nova_compute[117331]:       <value>efi</value>
Oct 09 16:02:03 compute-0 nova_compute[117331]:     </enum>
Oct 09 16:02:03 compute-0 nova_compute[117331]:     <loader supported='yes'>
Oct 09 16:02:03 compute-0 nova_compute[117331]:       <value>/usr/share/edk2/ovmf/OVMF_CODE.secboot.fd</value>
Oct 09 16:02:03 compute-0 nova_compute[117331]:       <value>/usr/share/edk2/ovmf/OVMF_CODE.fd</value>
Oct 09 16:02:03 compute-0 nova_compute[117331]:       <value>/usr/share/edk2/ovmf/OVMF.amdsev.fd</value>
Oct 09 16:02:03 compute-0 nova_compute[117331]:       <value>/usr/share/edk2/ovmf/OVMF.inteltdx.secboot.fd</value>
Oct 09 16:02:03 compute-0 nova_compute[117331]:       <enum name='type'>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <value>rom</value>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <value>pflash</value>
Oct 09 16:02:03 compute-0 nova_compute[117331]:       </enum>
Oct 09 16:02:03 compute-0 nova_compute[117331]:       <enum name='readonly'>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <value>yes</value>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <value>no</value>
Oct 09 16:02:03 compute-0 nova_compute[117331]:       </enum>
Oct 09 16:02:03 compute-0 nova_compute[117331]:       <enum name='secure'>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <value>yes</value>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <value>no</value>
Oct 09 16:02:03 compute-0 nova_compute[117331]:       </enum>
Oct 09 16:02:03 compute-0 nova_compute[117331]:     </loader>
Oct 09 16:02:03 compute-0 nova_compute[117331]:   </os>
Oct 09 16:02:03 compute-0 nova_compute[117331]:   <cpu>
Oct 09 16:02:03 compute-0 nova_compute[117331]:     <mode name='host-passthrough' supported='yes'>
Oct 09 16:02:03 compute-0 nova_compute[117331]:       <enum name='hostPassthroughMigratable'>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <value>on</value>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <value>off</value>
Oct 09 16:02:03 compute-0 nova_compute[117331]:       </enum>
Oct 09 16:02:03 compute-0 nova_compute[117331]:     </mode>
Oct 09 16:02:03 compute-0 nova_compute[117331]:     <mode name='maximum' supported='yes'>
Oct 09 16:02:03 compute-0 nova_compute[117331]:       <enum name='maximumMigratable'>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <value>on</value>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <value>off</value>
Oct 09 16:02:03 compute-0 nova_compute[117331]:       </enum>
Oct 09 16:02:03 compute-0 nova_compute[117331]:     </mode>
Oct 09 16:02:03 compute-0 nova_compute[117331]:     <mode name='host-model' supported='yes'>
Oct 09 16:02:03 compute-0 nova_compute[117331]:       <model fallback='forbid'>EPYC-Rome</model>
Oct 09 16:02:03 compute-0 nova_compute[117331]:       <vendor>AMD</vendor>
Oct 09 16:02:03 compute-0 nova_compute[117331]:       <maxphysaddr mode='passthrough' limit='40'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:       <feature policy='require' name='x2apic'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:       <feature policy='require' name='tsc-deadline'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:       <feature policy='require' name='hypervisor'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:       <feature policy='require' name='tsc_adjust'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:       <feature policy='require' name='spec-ctrl'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:       <feature policy='require' name='stibp'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:       <feature policy='require' name='arch-capabilities'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:       <feature policy='require' name='ssbd'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:       <feature policy='require' name='cmp_legacy'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:       <feature policy='require' name='overflow-recov'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:       <feature policy='require' name='succor'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:       <feature policy='require' name='ibrs'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:       <feature policy='require' name='amd-ssbd'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:       <feature policy='require' name='virt-ssbd'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:       <feature policy='require' name='lbrv'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:       <feature policy='require' name='tsc-scale'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:       <feature policy='require' name='vmcb-clean'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:       <feature policy='require' name='flushbyasid'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:       <feature policy='require' name='pause-filter'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:       <feature policy='require' name='pfthreshold'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:       <feature policy='require' name='svme-addr-chk'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:       <feature policy='require' name='lfence-always-serializing'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:       <feature policy='require' name='rdctl-no'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:       <feature policy='require' name='skip-l1dfl-vmentry'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:       <feature policy='require' name='mds-no'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:       <feature policy='require' name='pschange-mc-no'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:       <feature policy='require' name='gds-no'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:       <feature policy='require' name='rfds-no'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:       <feature policy='disable' name='xsaves'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:     </mode>
Oct 09 16:02:03 compute-0 nova_compute[117331]:     <mode name='custom' supported='yes'>
Oct 09 16:02:03 compute-0 nova_compute[117331]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='486-v1'>486</model>
Oct 09 16:02:03 compute-0 nova_compute[117331]:       <model usable='yes' deprecated='yes' vendor='unknown'>486-v1</model>
Oct 09 16:02:03 compute-0 nova_compute[117331]:       <model usable='no' vendor='Intel' canonical='Broadwell-v1'>Broadwell</model>
Oct 09 16:02:03 compute-0 nova_compute[117331]:       <blockers model='Broadwell'>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='erms'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='hle'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='invpcid'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='pcid'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='rtm'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:       </blockers>
Oct 09 16:02:03 compute-0 nova_compute[117331]:       <model usable='no' vendor='Intel' canonical='Broadwell-v3'>Broadwell-IBRS</model>
Oct 09 16:02:03 compute-0 nova_compute[117331]:       <blockers model='Broadwell-IBRS'>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='erms'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='hle'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='invpcid'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='pcid'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='rtm'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:       </blockers>
Oct 09 16:02:03 compute-0 nova_compute[117331]:       <model usable='no' vendor='Intel' canonical='Broadwell-v2'>Broadwell-noTSX</model>
Oct 09 16:02:03 compute-0 nova_compute[117331]:       <blockers model='Broadwell-noTSX'>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='erms'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='invpcid'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='pcid'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:       </blockers>
Oct 09 16:02:03 compute-0 nova_compute[117331]:       <model usable='no' vendor='Intel' canonical='Broadwell-v4'>Broadwell-noTSX-IBRS</model>
Oct 09 16:02:03 compute-0 nova_compute[117331]:       <blockers model='Broadwell-noTSX-IBRS'>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='erms'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='invpcid'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='pcid'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:       </blockers>
Oct 09 16:02:03 compute-0 nova_compute[117331]:       <model usable='no' vendor='Intel'>Broadwell-v1</model>
Oct 09 16:02:03 compute-0 nova_compute[117331]:       <blockers model='Broadwell-v1'>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='erms'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='hle'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='invpcid'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='pcid'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='rtm'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:       </blockers>
Oct 09 16:02:03 compute-0 nova_compute[117331]:       <model usable='no' vendor='Intel'>Broadwell-v2</model>
Oct 09 16:02:03 compute-0 nova_compute[117331]:       <blockers model='Broadwell-v2'>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='erms'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='invpcid'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='pcid'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:       </blockers>
Oct 09 16:02:03 compute-0 nova_compute[117331]:       <model usable='no' vendor='Intel'>Broadwell-v3</model>
Oct 09 16:02:03 compute-0 nova_compute[117331]:       <blockers model='Broadwell-v3'>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='erms'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='hle'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='invpcid'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='pcid'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='rtm'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:       </blockers>
Oct 09 16:02:03 compute-0 nova_compute[117331]:       <model usable='no' vendor='Intel'>Broadwell-v4</model>
Oct 09 16:02:03 compute-0 nova_compute[117331]:       <blockers model='Broadwell-v4'>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='erms'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='invpcid'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='pcid'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:       </blockers>
Oct 09 16:02:03 compute-0 nova_compute[117331]:       <model usable='no' vendor='Intel' canonical='Cascadelake-Server-v1'>Cascadelake-Server</model>
Oct 09 16:02:03 compute-0 nova_compute[117331]:       <blockers model='Cascadelake-Server'>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='avx512bw'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='avx512cd'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='avx512dq'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='avx512f'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='avx512vl'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='avx512vnni'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='erms'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='hle'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='invpcid'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='pcid'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='pku'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='rtm'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:       </blockers>
Oct 09 16:02:03 compute-0 nova_compute[117331]:       <model usable='no' vendor='Intel' canonical='Cascadelake-Server-v3'>Cascadelake-Server-noTSX</model>
Oct 09 16:02:03 compute-0 nova_compute[117331]:       <blockers model='Cascadelake-Server-noTSX'>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='avx512bw'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='avx512cd'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='avx512dq'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='avx512f'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='avx512vl'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='avx512vnni'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='erms'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='ibrs-all'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='invpcid'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='pcid'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='pku'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:       </blockers>
Oct 09 16:02:03 compute-0 nova_compute[117331]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v1</model>
Oct 09 16:02:03 compute-0 nova_compute[117331]:       <blockers model='Cascadelake-Server-v1'>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='avx512bw'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='avx512cd'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='avx512dq'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='avx512f'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='avx512vl'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='avx512vnni'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='erms'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='hle'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='invpcid'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='pcid'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='pku'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='rtm'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:       </blockers>
Oct 09 16:02:03 compute-0 nova_compute[117331]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v2</model>
Oct 09 16:02:03 compute-0 nova_compute[117331]:       <blockers model='Cascadelake-Server-v2'>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='avx512bw'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='avx512cd'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='avx512dq'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='avx512f'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='avx512vl'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='avx512vnni'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='erms'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='hle'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='ibrs-all'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='invpcid'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='pcid'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='pku'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='rtm'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:       </blockers>
Oct 09 16:02:03 compute-0 nova_compute[117331]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v3</model>
Oct 09 16:02:03 compute-0 nova_compute[117331]:       <blockers model='Cascadelake-Server-v3'>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='avx512bw'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='avx512cd'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='avx512dq'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='avx512f'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='avx512vl'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='avx512vnni'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='erms'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='ibrs-all'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='invpcid'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='pcid'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='pku'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:       </blockers>
Oct 09 16:02:03 compute-0 nova_compute[117331]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v4</model>
Oct 09 16:02:03 compute-0 nova_compute[117331]:       <blockers model='Cascadelake-Server-v4'>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='avx512bw'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='avx512cd'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='avx512dq'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='avx512f'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='avx512vl'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='avx512vnni'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='erms'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='ibrs-all'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='invpcid'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='pcid'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='pku'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:       </blockers>
Oct 09 16:02:03 compute-0 nova_compute[117331]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v5</model>
Oct 09 16:02:03 compute-0 nova_compute[117331]:       <blockers model='Cascadelake-Server-v5'>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='avx512bw'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='avx512cd'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='avx512dq'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='avx512f'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='avx512vl'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='avx512vnni'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='erms'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='ibrs-all'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='invpcid'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='pcid'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='pku'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='xsaves'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:       </blockers>
Oct 09 16:02:03 compute-0 nova_compute[117331]:       <model usable='yes' deprecated='yes' vendor='Intel' canonical='Conroe-v1'>Conroe</model>
Oct 09 16:02:03 compute-0 nova_compute[117331]:       <model usable='yes' deprecated='yes' vendor='Intel'>Conroe-v1</model>
Oct 09 16:02:03 compute-0 nova_compute[117331]:       <model usable='no' vendor='Intel' canonical='Cooperlake-v1'>Cooperlake</model>
Oct 09 16:02:03 compute-0 nova_compute[117331]:       <blockers model='Cooperlake'>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='avx512-bf16'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='avx512bw'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='avx512cd'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='avx512dq'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='avx512f'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='avx512vl'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='avx512vnni'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='erms'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='hle'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='ibrs-all'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='invpcid'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='pcid'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='pku'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='rtm'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='taa-no'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:       </blockers>
Oct 09 16:02:03 compute-0 nova_compute[117331]:       <model usable='no' vendor='Intel'>Cooperlake-v1</model>
Oct 09 16:02:03 compute-0 nova_compute[117331]:       <blockers model='Cooperlake-v1'>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='avx512-bf16'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='avx512bw'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='avx512cd'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='avx512dq'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='avx512f'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='avx512vl'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='avx512vnni'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='erms'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='hle'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='ibrs-all'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='invpcid'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='pcid'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='pku'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='rtm'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='taa-no'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:       </blockers>
Oct 09 16:02:03 compute-0 nova_compute[117331]:       <model usable='no' vendor='Intel'>Cooperlake-v2</model>
Oct 09 16:02:03 compute-0 nova_compute[117331]:       <blockers model='Cooperlake-v2'>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='avx512-bf16'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='avx512bw'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='avx512cd'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='avx512dq'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='avx512f'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='avx512vl'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='avx512vnni'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='erms'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='hle'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='ibrs-all'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='invpcid'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='pcid'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='pku'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='rtm'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='taa-no'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='xsaves'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:       </blockers>
Oct 09 16:02:03 compute-0 nova_compute[117331]:       <model usable='no' vendor='Intel' canonical='Denverton-v1'>Denverton</model>
Oct 09 16:02:03 compute-0 nova_compute[117331]:       <blockers model='Denverton'>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='erms'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='mpx'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:       </blockers>
Oct 09 16:02:03 compute-0 nova_compute[117331]:       <model usable='no' vendor='Intel'>Denverton-v1</model>
Oct 09 16:02:03 compute-0 nova_compute[117331]:       <blockers model='Denverton-v1'>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='erms'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='mpx'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:       </blockers>
Oct 09 16:02:03 compute-0 nova_compute[117331]:       <model usable='no' vendor='Intel'>Denverton-v2</model>
Oct 09 16:02:03 compute-0 nova_compute[117331]:       <blockers model='Denverton-v2'>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='erms'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:       </blockers>
Oct 09 16:02:03 compute-0 nova_compute[117331]:       <model usable='no' vendor='Intel'>Denverton-v3</model>
Oct 09 16:02:03 compute-0 nova_compute[117331]:       <blockers model='Denverton-v3'>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='erms'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='xsaves'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:       </blockers>
Oct 09 16:02:03 compute-0 nova_compute[117331]:       <model usable='yes' vendor='Hygon' canonical='Dhyana-v1'>Dhyana</model>
Oct 09 16:02:03 compute-0 nova_compute[117331]:       <model usable='yes' vendor='Hygon'>Dhyana-v1</model>
Oct 09 16:02:03 compute-0 nova_compute[117331]:       <model usable='no' vendor='Hygon'>Dhyana-v2</model>
Oct 09 16:02:03 compute-0 nova_compute[117331]:       <blockers model='Dhyana-v2'>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='xsaves'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:       </blockers>
Oct 09 16:02:03 compute-0 nova_compute[117331]:       <model usable='yes' vendor='AMD' canonical='EPYC-v1'>EPYC</model>
Oct 09 16:02:03 compute-0 nova_compute[117331]:       <model usable='no' vendor='AMD' canonical='EPYC-Genoa-v1'>EPYC-Genoa</model>
Oct 09 16:02:03 compute-0 nova_compute[117331]:       <blockers model='EPYC-Genoa'>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='amd-psfd'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='auto-ibrs'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='avx512-bf16'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='avx512-vpopcntdq'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='avx512bitalg'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='avx512bw'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='avx512cd'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='avx512dq'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='avx512f'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='avx512ifma'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='avx512vbmi'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='avx512vbmi2'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='avx512vl'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='avx512vnni'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='erms'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='fsrm'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='gfni'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='invpcid'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='la57'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='no-nested-data-bp'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='null-sel-clr-base'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='pcid'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='pku'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='stibp-always-on'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='vaes'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='vpclmulqdq'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='xsaves'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:       </blockers>
Oct 09 16:02:03 compute-0 nova_compute[117331]:       <model usable='no' vendor='AMD'>EPYC-Genoa-v1</model>
Oct 09 16:02:03 compute-0 nova_compute[117331]:       <blockers model='EPYC-Genoa-v1'>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='amd-psfd'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='auto-ibrs'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='avx512-bf16'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='avx512-vpopcntdq'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='avx512bitalg'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='avx512bw'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='avx512cd'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='avx512dq'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='avx512f'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='avx512ifma'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='avx512vbmi'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='avx512vbmi2'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='avx512vl'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='avx512vnni'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='erms'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='fsrm'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='gfni'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='invpcid'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='la57'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='no-nested-data-bp'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='null-sel-clr-base'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='pcid'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='pku'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='stibp-always-on'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='vaes'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='vpclmulqdq'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='xsaves'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:       </blockers>
Oct 09 16:02:03 compute-0 nova_compute[117331]:       <model usable='yes' vendor='AMD' canonical='EPYC-v2'>EPYC-IBPB</model>
Oct 09 16:02:03 compute-0 nova_compute[117331]:       <model usable='no' vendor='AMD' canonical='EPYC-Milan-v1'>EPYC-Milan</model>
Oct 09 16:02:03 compute-0 nova_compute[117331]:       <blockers model='EPYC-Milan'>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='erms'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='fsrm'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='invpcid'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='pcid'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='pku'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='xsaves'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:       </blockers>
Oct 09 16:02:03 compute-0 nova_compute[117331]:       <model usable='no' vendor='AMD'>EPYC-Milan-v1</model>
Oct 09 16:02:03 compute-0 nova_compute[117331]:       <blockers model='EPYC-Milan-v1'>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='erms'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='fsrm'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='invpcid'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='pcid'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='pku'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='xsaves'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:       </blockers>
Oct 09 16:02:03 compute-0 nova_compute[117331]:       <model usable='no' vendor='AMD'>EPYC-Milan-v2</model>
Oct 09 16:02:03 compute-0 nova_compute[117331]:       <blockers model='EPYC-Milan-v2'>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='amd-psfd'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='erms'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='fsrm'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='invpcid'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='no-nested-data-bp'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='null-sel-clr-base'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='pcid'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='pku'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='stibp-always-on'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='vaes'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='vpclmulqdq'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='xsaves'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:       </blockers>
Oct 09 16:02:03 compute-0 nova_compute[117331]:       <model usable='no' vendor='AMD' canonical='EPYC-Rome-v1'>EPYC-Rome</model>
Oct 09 16:02:03 compute-0 nova_compute[117331]:       <blockers model='EPYC-Rome'>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='xsaves'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:       </blockers>
Oct 09 16:02:03 compute-0 nova_compute[117331]:       <model usable='no' vendor='AMD'>EPYC-Rome-v1</model>
Oct 09 16:02:03 compute-0 nova_compute[117331]:       <blockers model='EPYC-Rome-v1'>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='xsaves'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:       </blockers>
Oct 09 16:02:03 compute-0 nova_compute[117331]:       <model usable='no' vendor='AMD'>EPYC-Rome-v2</model>
Oct 09 16:02:03 compute-0 nova_compute[117331]:       <blockers model='EPYC-Rome-v2'>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='xsaves'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:       </blockers>
Oct 09 16:02:03 compute-0 nova_compute[117331]:       <model usable='no' vendor='AMD'>EPYC-Rome-v3</model>
Oct 09 16:02:03 compute-0 nova_compute[117331]:       <blockers model='EPYC-Rome-v3'>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='xsaves'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:       </blockers>
Oct 09 16:02:03 compute-0 nova_compute[117331]:       <model usable='yes' vendor='AMD'>EPYC-Rome-v4</model>
Oct 09 16:02:03 compute-0 nova_compute[117331]:       <model usable='yes' vendor='AMD'>EPYC-v1</model>
Oct 09 16:02:03 compute-0 nova_compute[117331]:       <model usable='yes' vendor='AMD'>EPYC-v2</model>
Oct 09 16:02:03 compute-0 nova_compute[117331]:       <model usable='no' vendor='AMD'>EPYC-v3</model>
Oct 09 16:02:03 compute-0 nova_compute[117331]:       <blockers model='EPYC-v3'>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='xsaves'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:       </blockers>
Oct 09 16:02:03 compute-0 nova_compute[117331]:       <model usable='no' vendor='AMD'>EPYC-v4</model>
Oct 09 16:02:03 compute-0 nova_compute[117331]:       <blockers model='EPYC-v4'>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='xsaves'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:       </blockers>
Oct 09 16:02:03 compute-0 nova_compute[117331]:       <model usable='no' vendor='Intel' canonical='GraniteRapids-v1'>GraniteRapids</model>
Oct 09 16:02:03 compute-0 nova_compute[117331]:       <blockers model='GraniteRapids'>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='amx-bf16'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='amx-fp16'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='amx-int8'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='amx-tile'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='avx-vnni'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='avx512-bf16'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='avx512-fp16'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='avx512-vpopcntdq'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='avx512bitalg'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='avx512bw'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='avx512cd'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='avx512dq'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='avx512f'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='avx512ifma'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='avx512vbmi'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='avx512vbmi2'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='avx512vl'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='avx512vnni'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='bus-lock-detect'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='erms'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='fbsdp-no'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='fsrc'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='fsrm'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='fsrs'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='fzrm'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='gfni'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='hle'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='ibrs-all'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='invpcid'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='la57'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='mcdt-no'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='pbrsb-no'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='pcid'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='pku'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='prefetchiti'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='psdp-no'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='rtm'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='sbdr-ssdp-no'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='serialize'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='taa-no'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='tsx-ldtrk'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='vaes'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='vpclmulqdq'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='xfd'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='xsaves'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:       </blockers>
Oct 09 16:02:03 compute-0 nova_compute[117331]:       <model usable='no' vendor='Intel'>GraniteRapids-v1</model>
Oct 09 16:02:03 compute-0 nova_compute[117331]:       <blockers model='GraniteRapids-v1'>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='amx-bf16'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='amx-fp16'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='amx-int8'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='amx-tile'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='avx-vnni'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='avx512-bf16'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='avx512-fp16'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='avx512-vpopcntdq'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='avx512bitalg'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='avx512bw'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='avx512cd'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='avx512dq'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='avx512f'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='avx512ifma'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='avx512vbmi'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='avx512vbmi2'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='avx512vl'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='avx512vnni'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='bus-lock-detect'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='erms'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='fbsdp-no'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='fsrc'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='fsrm'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='fsrs'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='fzrm'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='gfni'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='hle'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='ibrs-all'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='invpcid'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='la57'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='mcdt-no'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='pbrsb-no'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='pcid'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='pku'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='prefetchiti'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='psdp-no'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='rtm'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='sbdr-ssdp-no'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='serialize'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='taa-no'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='tsx-ldtrk'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='vaes'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='vpclmulqdq'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='xfd'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='xsaves'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:       </blockers>
Oct 09 16:02:03 compute-0 nova_compute[117331]:       <model usable='no' vendor='Intel'>GraniteRapids-v2</model>
Oct 09 16:02:03 compute-0 nova_compute[117331]:       <blockers model='GraniteRapids-v2'>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='amx-bf16'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='amx-fp16'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='amx-int8'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='amx-tile'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='avx-vnni'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='avx10'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='avx10-128'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='avx10-256'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='avx10-512'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='avx512-bf16'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='avx512-fp16'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='avx512-vpopcntdq'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='avx512bitalg'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='avx512bw'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='avx512cd'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='avx512dq'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='avx512f'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='avx512ifma'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='avx512vbmi'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='avx512vbmi2'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='avx512vl'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='avx512vnni'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='bus-lock-detect'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='cldemote'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='erms'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='fbsdp-no'/>
Oct 09 16:02:03 compute-0 systemd[1]: Starting libvirt nodedev daemon...
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='fsrc'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='fsrm'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='fsrs'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='fzrm'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='gfni'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='hle'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='ibrs-all'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='invpcid'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='la57'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='mcdt-no'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='movdir64b'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='movdiri'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='pbrsb-no'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='pcid'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='pku'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='prefetchiti'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='psdp-no'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='rtm'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='sbdr-ssdp-no'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='serialize'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='ss'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='taa-no'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='tsx-ldtrk'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='vaes'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='vpclmulqdq'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='xfd'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='xsaves'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:       </blockers>
Oct 09 16:02:03 compute-0 nova_compute[117331]:       <model usable='no' vendor='Intel' canonical='Haswell-v1'>Haswell</model>
Oct 09 16:02:03 compute-0 nova_compute[117331]:       <blockers model='Haswell'>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='erms'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='hle'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='invpcid'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='pcid'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='rtm'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:       </blockers>
Oct 09 16:02:03 compute-0 nova_compute[117331]:       <model usable='no' vendor='Intel' canonical='Haswell-v3'>Haswell-IBRS</model>
Oct 09 16:02:03 compute-0 nova_compute[117331]:       <blockers model='Haswell-IBRS'>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='erms'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='hle'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='invpcid'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='pcid'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='rtm'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:       </blockers>
Oct 09 16:02:03 compute-0 nova_compute[117331]:       <model usable='no' vendor='Intel' canonical='Haswell-v2'>Haswell-noTSX</model>
Oct 09 16:02:03 compute-0 nova_compute[117331]:       <blockers model='Haswell-noTSX'>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='erms'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='invpcid'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='pcid'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:       </blockers>
Oct 09 16:02:03 compute-0 nova_compute[117331]:       <model usable='no' vendor='Intel' canonical='Haswell-v4'>Haswell-noTSX-IBRS</model>
Oct 09 16:02:03 compute-0 nova_compute[117331]:       <blockers model='Haswell-noTSX-IBRS'>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='erms'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='invpcid'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='pcid'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:       </blockers>
Oct 09 16:02:03 compute-0 nova_compute[117331]:       <model usable='no' vendor='Intel'>Haswell-v1</model>
Oct 09 16:02:03 compute-0 nova_compute[117331]:       <blockers model='Haswell-v1'>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='erms'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='hle'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='invpcid'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='pcid'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='rtm'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:       </blockers>
Oct 09 16:02:03 compute-0 nova_compute[117331]:       <model usable='no' vendor='Intel'>Haswell-v2</model>
Oct 09 16:02:03 compute-0 nova_compute[117331]:       <blockers model='Haswell-v2'>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='erms'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='invpcid'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='pcid'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:       </blockers>
Oct 09 16:02:03 compute-0 nova_compute[117331]:       <model usable='no' vendor='Intel'>Haswell-v3</model>
Oct 09 16:02:03 compute-0 nova_compute[117331]:       <blockers model='Haswell-v3'>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='erms'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='hle'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='invpcid'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='pcid'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='rtm'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:       </blockers>
Oct 09 16:02:03 compute-0 nova_compute[117331]:       <model usable='no' vendor='Intel'>Haswell-v4</model>
Oct 09 16:02:03 compute-0 nova_compute[117331]:       <blockers model='Haswell-v4'>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='erms'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='invpcid'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='pcid'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:       </blockers>
Oct 09 16:02:03 compute-0 nova_compute[117331]:       <model usable='no' vendor='Intel' canonical='Icelake-Server-v1'>Icelake-Server</model>
Oct 09 16:02:03 compute-0 nova_compute[117331]:       <blockers model='Icelake-Server'>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='avx512-vpopcntdq'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='avx512bitalg'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='avx512bw'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='avx512cd'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='avx512dq'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='avx512f'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='avx512vbmi'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='avx512vbmi2'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='avx512vl'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='avx512vnni'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='erms'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='gfni'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='hle'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='invpcid'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='la57'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='pcid'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='pku'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='rtm'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='vaes'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='vpclmulqdq'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:       </blockers>
Oct 09 16:02:03 compute-0 nova_compute[117331]:       <model usable='no' vendor='Intel' canonical='Icelake-Server-v2'>Icelake-Server-noTSX</model>
Oct 09 16:02:03 compute-0 nova_compute[117331]:       <blockers model='Icelake-Server-noTSX'>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='avx512-vpopcntdq'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='avx512bitalg'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='avx512bw'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='avx512cd'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='avx512dq'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='avx512f'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='avx512vbmi'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='avx512vbmi2'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='avx512vl'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='avx512vnni'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='erms'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='gfni'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='invpcid'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='la57'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='pcid'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='pku'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='vaes'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='vpclmulqdq'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:       </blockers>
Oct 09 16:02:03 compute-0 nova_compute[117331]:       <model usable='no' vendor='Intel'>Icelake-Server-v1</model>
Oct 09 16:02:03 compute-0 nova_compute[117331]:       <blockers model='Icelake-Server-v1'>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='avx512-vpopcntdq'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='avx512bitalg'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='avx512bw'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='avx512cd'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='avx512dq'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='avx512f'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='avx512vbmi'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='avx512vbmi2'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='avx512vl'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='avx512vnni'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='erms'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='gfni'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='hle'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='invpcid'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='la57'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='pcid'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='pku'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='rtm'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='vaes'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='vpclmulqdq'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:       </blockers>
Oct 09 16:02:03 compute-0 nova_compute[117331]:       <model usable='no' vendor='Intel'>Icelake-Server-v2</model>
Oct 09 16:02:03 compute-0 nova_compute[117331]:       <blockers model='Icelake-Server-v2'>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='avx512-vpopcntdq'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='avx512bitalg'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='avx512bw'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='avx512cd'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='avx512dq'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='avx512f'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='avx512vbmi'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='avx512vbmi2'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='avx512vl'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='avx512vnni'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='erms'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='gfni'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='invpcid'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='la57'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='pcid'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='pku'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='vaes'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='vpclmulqdq'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:       </blockers>
Oct 09 16:02:03 compute-0 nova_compute[117331]:       <model usable='no' vendor='Intel'>Icelake-Server-v3</model>
Oct 09 16:02:03 compute-0 nova_compute[117331]:       <blockers model='Icelake-Server-v3'>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='avx512-vpopcntdq'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='avx512bitalg'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='avx512bw'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='avx512cd'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='avx512dq'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='avx512f'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='avx512vbmi'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='avx512vbmi2'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='avx512vl'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='avx512vnni'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='erms'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='gfni'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='ibrs-all'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='invpcid'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='la57'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='pcid'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='pku'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='taa-no'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='vaes'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='vpclmulqdq'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:       </blockers>
Oct 09 16:02:03 compute-0 nova_compute[117331]:       <model usable='no' vendor='Intel'>Icelake-Server-v4</model>
Oct 09 16:02:03 compute-0 nova_compute[117331]:       <blockers model='Icelake-Server-v4'>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='avx512-vpopcntdq'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='avx512bitalg'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='avx512bw'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='avx512cd'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='avx512dq'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='avx512f'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='avx512ifma'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='avx512vbmi'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='avx512vbmi2'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='avx512vl'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='avx512vnni'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='erms'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='fsrm'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='gfni'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='ibrs-all'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='invpcid'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='la57'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='pcid'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='pku'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='taa-no'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='vaes'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='vpclmulqdq'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:       </blockers>
Oct 09 16:02:03 compute-0 nova_compute[117331]:       <model usable='no' vendor='Intel'>Icelake-Server-v5</model>
Oct 09 16:02:03 compute-0 nova_compute[117331]:       <blockers model='Icelake-Server-v5'>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='avx512-vpopcntdq'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='avx512bitalg'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='avx512bw'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='avx512cd'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='avx512dq'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='avx512f'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='avx512ifma'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='avx512vbmi'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='avx512vbmi2'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='avx512vl'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='avx512vnni'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='erms'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='fsrm'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='gfni'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='ibrs-all'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='invpcid'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='la57'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='pcid'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='pku'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='taa-no'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='vaes'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='vpclmulqdq'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='xsaves'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:       </blockers>
Oct 09 16:02:03 compute-0 nova_compute[117331]:       <model usable='no' vendor='Intel'>Icelake-Server-v6</model>
Oct 09 16:02:03 compute-0 nova_compute[117331]:       <blockers model='Icelake-Server-v6'>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='avx512-vpopcntdq'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='avx512bitalg'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='avx512bw'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='avx512cd'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='avx512dq'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='avx512f'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='avx512ifma'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='avx512vbmi'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='avx512vbmi2'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='avx512vl'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='avx512vnni'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='erms'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='fsrm'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='gfni'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='ibrs-all'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='invpcid'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='la57'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='pcid'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='pku'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='taa-no'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='vaes'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='vpclmulqdq'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='xsaves'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:       </blockers>
Oct 09 16:02:03 compute-0 nova_compute[117331]:       <model usable='no' vendor='Intel'>Icelake-Server-v7</model>
Oct 09 16:02:03 compute-0 nova_compute[117331]:       <blockers model='Icelake-Server-v7'>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='avx512-vpopcntdq'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='avx512bitalg'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='avx512bw'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='avx512cd'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='avx512dq'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='avx512f'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='avx512ifma'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='avx512vbmi'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='avx512vbmi2'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='avx512vl'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='avx512vnni'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='erms'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='fsrm'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='gfni'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='hle'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='ibrs-all'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='invpcid'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='la57'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='pcid'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='pku'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='rtm'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='taa-no'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='vaes'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='vpclmulqdq'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='xsaves'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:       </blockers>
Oct 09 16:02:03 compute-0 nova_compute[117331]:       <model usable='no' vendor='Intel' canonical='IvyBridge-v1'>IvyBridge</model>
Oct 09 16:02:03 compute-0 nova_compute[117331]:       <blockers model='IvyBridge'>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='erms'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:       </blockers>
Oct 09 16:02:03 compute-0 nova_compute[117331]:       <model usable='no' vendor='Intel' canonical='IvyBridge-v2'>IvyBridge-IBRS</model>
Oct 09 16:02:03 compute-0 nova_compute[117331]:       <blockers model='IvyBridge-IBRS'>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='erms'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:       </blockers>
Oct 09 16:02:03 compute-0 nova_compute[117331]:       <model usable='no' vendor='Intel'>IvyBridge-v1</model>
Oct 09 16:02:03 compute-0 nova_compute[117331]:       <blockers model='IvyBridge-v1'>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='erms'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:       </blockers>
Oct 09 16:02:03 compute-0 nova_compute[117331]:       <model usable='no' vendor='Intel'>IvyBridge-v2</model>
Oct 09 16:02:03 compute-0 nova_compute[117331]:       <blockers model='IvyBridge-v2'>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='erms'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:       </blockers>
Oct 09 16:02:03 compute-0 nova_compute[117331]:       <model usable='no' vendor='Intel' canonical='KnightsMill-v1'>KnightsMill</model>
Oct 09 16:02:03 compute-0 nova_compute[117331]:       <blockers model='KnightsMill'>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='avx512-4fmaps'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='avx512-4vnniw'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='avx512-vpopcntdq'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='avx512cd'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='avx512er'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='avx512f'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='avx512pf'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='erms'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='ss'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:       </blockers>
Oct 09 16:02:03 compute-0 nova_compute[117331]:       <model usable='no' vendor='Intel'>KnightsMill-v1</model>
Oct 09 16:02:03 compute-0 nova_compute[117331]:       <blockers model='KnightsMill-v1'>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='avx512-4fmaps'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='avx512-4vnniw'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='avx512-vpopcntdq'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='avx512cd'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='avx512er'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='avx512f'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='avx512pf'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='erms'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='ss'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:       </blockers>
Oct 09 16:02:03 compute-0 nova_compute[117331]:       <model usable='yes' vendor='Intel' canonical='Nehalem-v1'>Nehalem</model>
Oct 09 16:02:03 compute-0 nova_compute[117331]:       <model usable='yes' vendor='Intel' canonical='Nehalem-v2'>Nehalem-IBRS</model>
Oct 09 16:02:03 compute-0 nova_compute[117331]:       <model usable='yes' vendor='Intel'>Nehalem-v1</model>
Oct 09 16:02:03 compute-0 nova_compute[117331]:       <model usable='yes' vendor='Intel'>Nehalem-v2</model>
Oct 09 16:02:03 compute-0 nova_compute[117331]:       <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G1-v1'>Opteron_G1</model>
Oct 09 16:02:03 compute-0 nova_compute[117331]:       <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G1-v1</model>
Oct 09 16:02:03 compute-0 nova_compute[117331]:       <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G2-v1'>Opteron_G2</model>
Oct 09 16:02:03 compute-0 nova_compute[117331]:       <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G2-v1</model>
Oct 09 16:02:03 compute-0 nova_compute[117331]:       <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G3-v1'>Opteron_G3</model>
Oct 09 16:02:03 compute-0 nova_compute[117331]:       <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G3-v1</model>
Oct 09 16:02:03 compute-0 nova_compute[117331]:       <model usable='no' vendor='AMD' canonical='Opteron_G4-v1'>Opteron_G4</model>
Oct 09 16:02:03 compute-0 nova_compute[117331]:       <blockers model='Opteron_G4'>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='fma4'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='xop'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:       </blockers>
Oct 09 16:02:03 compute-0 nova_compute[117331]:       <model usable='no' vendor='AMD'>Opteron_G4-v1</model>
Oct 09 16:02:03 compute-0 nova_compute[117331]:       <blockers model='Opteron_G4-v1'>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='fma4'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='xop'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:       </blockers>
Oct 09 16:02:03 compute-0 nova_compute[117331]:       <model usable='no' vendor='AMD' canonical='Opteron_G5-v1'>Opteron_G5</model>
Oct 09 16:02:03 compute-0 nova_compute[117331]:       <blockers model='Opteron_G5'>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='fma4'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='tbm'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='xop'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:       </blockers>
Oct 09 16:02:03 compute-0 nova_compute[117331]:       <model usable='no' vendor='AMD'>Opteron_G5-v1</model>
Oct 09 16:02:03 compute-0 nova_compute[117331]:       <blockers model='Opteron_G5-v1'>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='fma4'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='tbm'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='xop'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:       </blockers>
Oct 09 16:02:03 compute-0 nova_compute[117331]:       <model usable='yes' deprecated='yes' vendor='Intel' canonical='Penryn-v1'>Penryn</model>
Oct 09 16:02:03 compute-0 nova_compute[117331]:       <model usable='yes' deprecated='yes' vendor='Intel'>Penryn-v1</model>
Oct 09 16:02:03 compute-0 nova_compute[117331]:       <model usable='yes' vendor='Intel' canonical='SandyBridge-v1'>SandyBridge</model>
Oct 09 16:02:03 compute-0 nova_compute[117331]:       <model usable='yes' vendor='Intel' canonical='SandyBridge-v2'>SandyBridge-IBRS</model>
Oct 09 16:02:03 compute-0 nova_compute[117331]:       <model usable='yes' vendor='Intel'>SandyBridge-v1</model>
Oct 09 16:02:03 compute-0 nova_compute[117331]:       <model usable='yes' vendor='Intel'>SandyBridge-v2</model>
Oct 09 16:02:03 compute-0 nova_compute[117331]:       <model usable='no' vendor='Intel' canonical='SapphireRapids-v1'>SapphireRapids</model>
Oct 09 16:02:03 compute-0 nova_compute[117331]:       <blockers model='SapphireRapids'>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='amx-bf16'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='amx-int8'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='amx-tile'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='avx-vnni'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='avx512-bf16'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='avx512-fp16'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='avx512-vpopcntdq'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='avx512bitalg'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='avx512bw'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='avx512cd'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='avx512dq'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='avx512f'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='avx512ifma'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='avx512vbmi'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='avx512vbmi2'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='avx512vl'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='avx512vnni'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='bus-lock-detect'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='erms'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='fsrc'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='fsrm'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='fsrs'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='fzrm'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='gfni'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='hle'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='ibrs-all'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='invpcid'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='la57'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='pcid'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='pku'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='rtm'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='serialize'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='taa-no'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='tsx-ldtrk'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='vaes'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='vpclmulqdq'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='xfd'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='xsaves'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:       </blockers>
Oct 09 16:02:03 compute-0 nova_compute[117331]:       <model usable='no' vendor='Intel'>SapphireRapids-v1</model>
Oct 09 16:02:03 compute-0 nova_compute[117331]:       <blockers model='SapphireRapids-v1'>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='amx-bf16'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='amx-int8'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='amx-tile'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='avx-vnni'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='avx512-bf16'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='avx512-fp16'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='avx512-vpopcntdq'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='avx512bitalg'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='avx512bw'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='avx512cd'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='avx512dq'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='avx512f'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='avx512ifma'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='avx512vbmi'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='avx512vbmi2'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='avx512vl'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='avx512vnni'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='bus-lock-detect'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='erms'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='fsrc'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='fsrm'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='fsrs'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='fzrm'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='gfni'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='hle'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='ibrs-all'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='invpcid'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='la57'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='pcid'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='pku'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='rtm'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='serialize'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='taa-no'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='tsx-ldtrk'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='vaes'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='vpclmulqdq'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='xfd'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='xsaves'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:       </blockers>
Oct 09 16:02:03 compute-0 nova_compute[117331]:       <model usable='no' vendor='Intel'>SapphireRapids-v2</model>
Oct 09 16:02:03 compute-0 nova_compute[117331]:       <blockers model='SapphireRapids-v2'>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='amx-bf16'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='amx-int8'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='amx-tile'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='avx-vnni'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='avx512-bf16'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='avx512-fp16'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='avx512-vpopcntdq'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='avx512bitalg'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='avx512bw'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='avx512cd'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='avx512dq'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='avx512f'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='avx512ifma'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='avx512vbmi'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='avx512vbmi2'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='avx512vl'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='avx512vnni'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='bus-lock-detect'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='erms'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='fbsdp-no'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='fsrc'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='fsrm'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='fsrs'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='fzrm'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='gfni'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='hle'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='ibrs-all'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='invpcid'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='la57'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='pcid'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='pku'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='psdp-no'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='rtm'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='sbdr-ssdp-no'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='serialize'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='taa-no'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='tsx-ldtrk'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='vaes'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='vpclmulqdq'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='xfd'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='xsaves'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:       </blockers>
Oct 09 16:02:03 compute-0 nova_compute[117331]:       <model usable='no' vendor='Intel'>SapphireRapids-v3</model>
Oct 09 16:02:03 compute-0 nova_compute[117331]:       <blockers model='SapphireRapids-v3'>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='amx-bf16'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='amx-int8'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='amx-tile'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='avx-vnni'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='avx512-bf16'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='avx512-fp16'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='avx512-vpopcntdq'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='avx512bitalg'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='avx512bw'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='avx512cd'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='avx512dq'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='avx512f'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='avx512ifma'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='avx512vbmi'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='avx512vbmi2'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='avx512vl'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='avx512vnni'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='bus-lock-detect'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='cldemote'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='erms'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='fbsdp-no'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='fsrc'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='fsrm'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='fsrs'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='fzrm'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='gfni'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='hle'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='ibrs-all'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='invpcid'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='la57'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='movdir64b'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='movdiri'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='pcid'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='pku'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='psdp-no'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='rtm'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='sbdr-ssdp-no'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='serialize'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='ss'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='taa-no'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='tsx-ldtrk'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='vaes'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='vpclmulqdq'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='xfd'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='xsaves'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:       </blockers>
Oct 09 16:02:03 compute-0 nova_compute[117331]:       <model usable='no' vendor='Intel' canonical='SierraForest-v1'>SierraForest</model>
Oct 09 16:02:03 compute-0 nova_compute[117331]:       <blockers model='SierraForest'>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='avx-ifma'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='avx-ne-convert'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='avx-vnni'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='avx-vnni-int8'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='bus-lock-detect'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='cmpccxadd'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='erms'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='fbsdp-no'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='fsrm'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='fsrs'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='gfni'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='ibrs-all'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='invpcid'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='mcdt-no'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='pbrsb-no'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='pcid'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='pku'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='psdp-no'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='sbdr-ssdp-no'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='serialize'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='vaes'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='vpclmulqdq'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='xsaves'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:       </blockers>
Oct 09 16:02:03 compute-0 nova_compute[117331]:       <model usable='no' vendor='Intel'>SierraForest-v1</model>
Oct 09 16:02:03 compute-0 nova_compute[117331]:       <blockers model='SierraForest-v1'>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='avx-ifma'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='avx-ne-convert'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='avx-vnni'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='avx-vnni-int8'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='bus-lock-detect'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='cmpccxadd'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='erms'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='fbsdp-no'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='fsrm'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='fsrs'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='gfni'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='ibrs-all'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='invpcid'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='mcdt-no'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='pbrsb-no'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='pcid'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='pku'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='psdp-no'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='sbdr-ssdp-no'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='serialize'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='vaes'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='vpclmulqdq'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='xsaves'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:       </blockers>
Oct 09 16:02:03 compute-0 nova_compute[117331]:       <model usable='no' vendor='Intel' canonical='Skylake-Client-v1'>Skylake-Client</model>
Oct 09 16:02:03 compute-0 nova_compute[117331]:       <blockers model='Skylake-Client'>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='erms'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='hle'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='invpcid'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='pcid'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='rtm'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:       </blockers>
Oct 09 16:02:03 compute-0 nova_compute[117331]:       <model usable='no' vendor='Intel' canonical='Skylake-Client-v2'>Skylake-Client-IBRS</model>
Oct 09 16:02:03 compute-0 nova_compute[117331]:       <blockers model='Skylake-Client-IBRS'>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='erms'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='hle'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='invpcid'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='pcid'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='rtm'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:       </blockers>
Oct 09 16:02:03 compute-0 nova_compute[117331]:       <model usable='no' vendor='Intel' canonical='Skylake-Client-v3'>Skylake-Client-noTSX-IBRS</model>
Oct 09 16:02:03 compute-0 nova_compute[117331]:       <blockers model='Skylake-Client-noTSX-IBRS'>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='erms'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='invpcid'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='pcid'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:       </blockers>
Oct 09 16:02:03 compute-0 nova_compute[117331]:       <model usable='no' vendor='Intel'>Skylake-Client-v1</model>
Oct 09 16:02:03 compute-0 nova_compute[117331]:       <blockers model='Skylake-Client-v1'>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='erms'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='hle'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='invpcid'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='pcid'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='rtm'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:       </blockers>
Oct 09 16:02:03 compute-0 nova_compute[117331]:       <model usable='no' vendor='Intel'>Skylake-Client-v2</model>
Oct 09 16:02:03 compute-0 nova_compute[117331]:       <blockers model='Skylake-Client-v2'>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='erms'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='hle'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='invpcid'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='pcid'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='rtm'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:       </blockers>
Oct 09 16:02:03 compute-0 nova_compute[117331]:       <model usable='no' vendor='Intel'>Skylake-Client-v3</model>
Oct 09 16:02:03 compute-0 nova_compute[117331]:       <blockers model='Skylake-Client-v3'>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='erms'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='invpcid'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='pcid'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:       </blockers>
Oct 09 16:02:03 compute-0 nova_compute[117331]:       <model usable='no' vendor='Intel'>Skylake-Client-v4</model>
Oct 09 16:02:03 compute-0 nova_compute[117331]:       <blockers model='Skylake-Client-v4'>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='erms'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='invpcid'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='pcid'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='xsaves'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:       </blockers>
Oct 09 16:02:03 compute-0 nova_compute[117331]:       <model usable='no' vendor='Intel' canonical='Skylake-Server-v1'>Skylake-Server</model>
Oct 09 16:02:03 compute-0 nova_compute[117331]:       <blockers model='Skylake-Server'>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='avx512bw'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='avx512cd'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='avx512dq'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='avx512f'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='avx512vl'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='erms'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='hle'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='invpcid'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='pcid'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='pku'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='rtm'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:       </blockers>
Oct 09 16:02:03 compute-0 nova_compute[117331]:       <model usable='no' vendor='Intel' canonical='Skylake-Server-v2'>Skylake-Server-IBRS</model>
Oct 09 16:02:03 compute-0 nova_compute[117331]:       <blockers model='Skylake-Server-IBRS'>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='avx512bw'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='avx512cd'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='avx512dq'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='avx512f'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='avx512vl'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='erms'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='hle'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='invpcid'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='pcid'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='pku'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='rtm'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:       </blockers>
Oct 09 16:02:03 compute-0 nova_compute[117331]:       <model usable='no' vendor='Intel' canonical='Skylake-Server-v3'>Skylake-Server-noTSX-IBRS</model>
Oct 09 16:02:03 compute-0 nova_compute[117331]:       <blockers model='Skylake-Server-noTSX-IBRS'>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='avx512bw'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='avx512cd'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='avx512dq'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='avx512f'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='avx512vl'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='erms'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='invpcid'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='pcid'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='pku'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:       </blockers>
Oct 09 16:02:03 compute-0 nova_compute[117331]:       <model usable='no' vendor='Intel'>Skylake-Server-v1</model>
Oct 09 16:02:03 compute-0 nova_compute[117331]:       <blockers model='Skylake-Server-v1'>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='avx512bw'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='avx512cd'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='avx512dq'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='avx512f'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='avx512vl'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='erms'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='hle'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='invpcid'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='pcid'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='pku'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='rtm'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:       </blockers>
Oct 09 16:02:03 compute-0 nova_compute[117331]:       <model usable='no' vendor='Intel'>Skylake-Server-v2</model>
Oct 09 16:02:03 compute-0 nova_compute[117331]:       <blockers model='Skylake-Server-v2'>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='avx512bw'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='avx512cd'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='avx512dq'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='avx512f'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='avx512vl'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='erms'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='hle'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='invpcid'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='pcid'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='pku'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='rtm'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:       </blockers>
Oct 09 16:02:03 compute-0 nova_compute[117331]:       <model usable='no' vendor='Intel'>Skylake-Server-v3</model>
Oct 09 16:02:03 compute-0 nova_compute[117331]:       <blockers model='Skylake-Server-v3'>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='avx512bw'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='avx512cd'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='avx512dq'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='avx512f'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='avx512vl'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='erms'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='invpcid'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='pcid'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='pku'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:       </blockers>
Oct 09 16:02:03 compute-0 nova_compute[117331]:       <model usable='no' vendor='Intel'>Skylake-Server-v4</model>
Oct 09 16:02:03 compute-0 nova_compute[117331]:       <blockers model='Skylake-Server-v4'>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='avx512bw'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='avx512cd'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='avx512dq'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='avx512f'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='avx512vl'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='erms'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='invpcid'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='pcid'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='pku'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:       </blockers>
Oct 09 16:02:03 compute-0 nova_compute[117331]:       <model usable='no' vendor='Intel'>Skylake-Server-v5</model>
Oct 09 16:02:03 compute-0 nova_compute[117331]:       <blockers model='Skylake-Server-v5'>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='avx512bw'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='avx512cd'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='avx512dq'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='avx512f'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='avx512vl'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='erms'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='invpcid'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='pcid'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='pku'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='xsaves'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:       </blockers>
Oct 09 16:02:03 compute-0 nova_compute[117331]:       <model usable='no' vendor='Intel' canonical='Snowridge-v1'>Snowridge</model>
Oct 09 16:02:03 compute-0 nova_compute[117331]:       <blockers model='Snowridge'>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='cldemote'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='core-capability'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='erms'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='gfni'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='movdir64b'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='movdiri'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='mpx'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='split-lock-detect'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:       </blockers>
Oct 09 16:02:03 compute-0 nova_compute[117331]:       <model usable='no' vendor='Intel'>Snowridge-v1</model>
Oct 09 16:02:03 compute-0 nova_compute[117331]:       <blockers model='Snowridge-v1'>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='cldemote'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='core-capability'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='erms'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='gfni'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='movdir64b'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='movdiri'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='mpx'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='split-lock-detect'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:       </blockers>
Oct 09 16:02:03 compute-0 nova_compute[117331]:       <model usable='no' vendor='Intel'>Snowridge-v2</model>
Oct 09 16:02:03 compute-0 nova_compute[117331]:       <blockers model='Snowridge-v2'>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='cldemote'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='core-capability'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='erms'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='gfni'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='movdir64b'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='movdiri'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='split-lock-detect'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:       </blockers>
Oct 09 16:02:03 compute-0 nova_compute[117331]:       <model usable='no' vendor='Intel'>Snowridge-v3</model>
Oct 09 16:02:03 compute-0 nova_compute[117331]:       <blockers model='Snowridge-v3'>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='cldemote'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='core-capability'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='erms'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='gfni'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='movdir64b'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='movdiri'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='split-lock-detect'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='xsaves'/>
Oct 09 16:02:03 compute-0 systemd[1]: Started libvirt nodedev daemon.
Oct 09 16:02:03 compute-0 nova_compute[117331]:       </blockers>
Oct 09 16:02:03 compute-0 nova_compute[117331]:       <model usable='no' vendor='Intel'>Snowridge-v4</model>
Oct 09 16:02:03 compute-0 nova_compute[117331]:       <blockers model='Snowridge-v4'>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='cldemote'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='erms'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='gfni'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='movdir64b'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='movdiri'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='xsaves'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:       </blockers>
Oct 09 16:02:03 compute-0 nova_compute[117331]:       <model usable='yes' vendor='Intel' canonical='Westmere-v1'>Westmere</model>
Oct 09 16:02:03 compute-0 nova_compute[117331]:       <model usable='yes' vendor='Intel' canonical='Westmere-v2'>Westmere-IBRS</model>
Oct 09 16:02:03 compute-0 nova_compute[117331]:       <model usable='yes' vendor='Intel'>Westmere-v1</model>
Oct 09 16:02:03 compute-0 nova_compute[117331]:       <model usable='yes' vendor='Intel'>Westmere-v2</model>
Oct 09 16:02:03 compute-0 nova_compute[117331]:       <model usable='no' deprecated='yes' vendor='AMD' canonical='athlon-v1'>athlon</model>
Oct 09 16:02:03 compute-0 nova_compute[117331]:       <blockers model='athlon'>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='3dnow'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='3dnowext'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:       </blockers>
Oct 09 16:02:03 compute-0 nova_compute[117331]:       <model usable='no' deprecated='yes' vendor='AMD'>athlon-v1</model>
Oct 09 16:02:03 compute-0 nova_compute[117331]:       <blockers model='athlon-v1'>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='3dnow'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='3dnowext'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:       </blockers>
Oct 09 16:02:03 compute-0 nova_compute[117331]:       <model usable='no' deprecated='yes' vendor='Intel' canonical='core2duo-v1'>core2duo</model>
Oct 09 16:02:03 compute-0 nova_compute[117331]:       <blockers model='core2duo'>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='ss'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:       </blockers>
Oct 09 16:02:03 compute-0 nova_compute[117331]:       <model usable='no' deprecated='yes' vendor='Intel'>core2duo-v1</model>
Oct 09 16:02:03 compute-0 nova_compute[117331]:       <blockers model='core2duo-v1'>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='ss'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:       </blockers>
Oct 09 16:02:03 compute-0 nova_compute[117331]:       <model usable='no' deprecated='yes' vendor='Intel' canonical='coreduo-v1'>coreduo</model>
Oct 09 16:02:03 compute-0 nova_compute[117331]:       <blockers model='coreduo'>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='ss'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:       </blockers>
Oct 09 16:02:03 compute-0 nova_compute[117331]:       <model usable='no' deprecated='yes' vendor='Intel'>coreduo-v1</model>
Oct 09 16:02:03 compute-0 nova_compute[117331]:       <blockers model='coreduo-v1'>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='ss'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:       </blockers>
Oct 09 16:02:03 compute-0 nova_compute[117331]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='kvm32-v1'>kvm32</model>
Oct 09 16:02:03 compute-0 nova_compute[117331]:       <model usable='yes' deprecated='yes' vendor='unknown'>kvm32-v1</model>
Oct 09 16:02:03 compute-0 nova_compute[117331]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='kvm64-v1'>kvm64</model>
Oct 09 16:02:03 compute-0 nova_compute[117331]:       <model usable='yes' deprecated='yes' vendor='unknown'>kvm64-v1</model>
Oct 09 16:02:03 compute-0 nova_compute[117331]:       <model usable='no' deprecated='yes' vendor='Intel' canonical='n270-v1'>n270</model>
Oct 09 16:02:03 compute-0 nova_compute[117331]:       <blockers model='n270'>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='ss'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:       </blockers>
Oct 09 16:02:03 compute-0 nova_compute[117331]:       <model usable='no' deprecated='yes' vendor='Intel'>n270-v1</model>
Oct 09 16:02:03 compute-0 nova_compute[117331]:       <blockers model='n270-v1'>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='ss'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:       </blockers>
Oct 09 16:02:03 compute-0 nova_compute[117331]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium-v1'>pentium</model>
Oct 09 16:02:03 compute-0 nova_compute[117331]:       <model usable='yes' deprecated='yes' vendor='unknown'>pentium-v1</model>
Oct 09 16:02:03 compute-0 nova_compute[117331]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium2-v1'>pentium2</model>
Oct 09 16:02:03 compute-0 nova_compute[117331]:       <model usable='yes' deprecated='yes' vendor='unknown'>pentium2-v1</model>
Oct 09 16:02:03 compute-0 nova_compute[117331]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium3-v1'>pentium3</model>
Oct 09 16:02:03 compute-0 nova_compute[117331]:       <model usable='yes' deprecated='yes' vendor='unknown'>pentium3-v1</model>
Oct 09 16:02:03 compute-0 nova_compute[117331]:       <model usable='no' deprecated='yes' vendor='AMD' canonical='phenom-v1'>phenom</model>
Oct 09 16:02:03 compute-0 nova_compute[117331]:       <blockers model='phenom'>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='3dnow'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='3dnowext'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:       </blockers>
Oct 09 16:02:03 compute-0 nova_compute[117331]:       <model usable='no' deprecated='yes' vendor='AMD'>phenom-v1</model>
Oct 09 16:02:03 compute-0 nova_compute[117331]:       <blockers model='phenom-v1'>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='3dnow'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <feature name='3dnowext'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:       </blockers>
Oct 09 16:02:03 compute-0 nova_compute[117331]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='qemu32-v1'>qemu32</model>
Oct 09 16:02:03 compute-0 nova_compute[117331]:       <model usable='yes' deprecated='yes' vendor='unknown'>qemu32-v1</model>
Oct 09 16:02:03 compute-0 nova_compute[117331]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='qemu64-v1'>qemu64</model>
Oct 09 16:02:03 compute-0 nova_compute[117331]:       <model usable='yes' deprecated='yes' vendor='unknown'>qemu64-v1</model>
Oct 09 16:02:03 compute-0 nova_compute[117331]:     </mode>
Oct 09 16:02:03 compute-0 nova_compute[117331]:   </cpu>
Oct 09 16:02:03 compute-0 nova_compute[117331]:   <memoryBacking supported='yes'>
Oct 09 16:02:03 compute-0 nova_compute[117331]:     <enum name='sourceType'>
Oct 09 16:02:03 compute-0 nova_compute[117331]:       <value>file</value>
Oct 09 16:02:03 compute-0 nova_compute[117331]:       <value>anonymous</value>
Oct 09 16:02:03 compute-0 nova_compute[117331]:       <value>memfd</value>
Oct 09 16:02:03 compute-0 nova_compute[117331]:     </enum>
Oct 09 16:02:03 compute-0 nova_compute[117331]:   </memoryBacking>
Oct 09 16:02:03 compute-0 nova_compute[117331]:   <devices>
Oct 09 16:02:03 compute-0 nova_compute[117331]:     <disk supported='yes'>
Oct 09 16:02:03 compute-0 nova_compute[117331]:       <enum name='diskDevice'>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <value>disk</value>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <value>cdrom</value>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <value>floppy</value>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <value>lun</value>
Oct 09 16:02:03 compute-0 nova_compute[117331]:       </enum>
Oct 09 16:02:03 compute-0 nova_compute[117331]:       <enum name='bus'>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <value>fdc</value>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <value>scsi</value>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <value>virtio</value>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <value>usb</value>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <value>sata</value>
Oct 09 16:02:03 compute-0 nova_compute[117331]:       </enum>
Oct 09 16:02:03 compute-0 nova_compute[117331]:       <enum name='model'>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <value>virtio</value>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <value>virtio-transitional</value>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <value>virtio-non-transitional</value>
Oct 09 16:02:03 compute-0 nova_compute[117331]:       </enum>
Oct 09 16:02:03 compute-0 nova_compute[117331]:     </disk>
Oct 09 16:02:03 compute-0 nova_compute[117331]:     <graphics supported='yes'>
Oct 09 16:02:03 compute-0 nova_compute[117331]:       <enum name='type'>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <value>vnc</value>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <value>egl-headless</value>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <value>dbus</value>
Oct 09 16:02:03 compute-0 nova_compute[117331]:       </enum>
Oct 09 16:02:03 compute-0 nova_compute[117331]:     </graphics>
Oct 09 16:02:03 compute-0 nova_compute[117331]:     <video supported='yes'>
Oct 09 16:02:03 compute-0 nova_compute[117331]:       <enum name='modelType'>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <value>vga</value>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <value>cirrus</value>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <value>virtio</value>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <value>none</value>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <value>bochs</value>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <value>ramfb</value>
Oct 09 16:02:03 compute-0 nova_compute[117331]:       </enum>
Oct 09 16:02:03 compute-0 nova_compute[117331]:     </video>
Oct 09 16:02:03 compute-0 nova_compute[117331]:     <hostdev supported='yes'>
Oct 09 16:02:03 compute-0 nova_compute[117331]:       <enum name='mode'>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <value>subsystem</value>
Oct 09 16:02:03 compute-0 nova_compute[117331]:       </enum>
Oct 09 16:02:03 compute-0 nova_compute[117331]:       <enum name='startupPolicy'>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <value>default</value>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <value>mandatory</value>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <value>requisite</value>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <value>optional</value>
Oct 09 16:02:03 compute-0 nova_compute[117331]:       </enum>
Oct 09 16:02:03 compute-0 nova_compute[117331]:       <enum name='subsysType'>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <value>usb</value>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <value>pci</value>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <value>scsi</value>
Oct 09 16:02:03 compute-0 nova_compute[117331]:       </enum>
Oct 09 16:02:03 compute-0 nova_compute[117331]:       <enum name='capsType'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:       <enum name='pciBackend'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:     </hostdev>
Oct 09 16:02:03 compute-0 nova_compute[117331]:     <rng supported='yes'>
Oct 09 16:02:03 compute-0 nova_compute[117331]:       <enum name='model'>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <value>virtio</value>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <value>virtio-transitional</value>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <value>virtio-non-transitional</value>
Oct 09 16:02:03 compute-0 nova_compute[117331]:       </enum>
Oct 09 16:02:03 compute-0 nova_compute[117331]:       <enum name='backendModel'>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <value>random</value>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <value>egd</value>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <value>builtin</value>
Oct 09 16:02:03 compute-0 nova_compute[117331]:       </enum>
Oct 09 16:02:03 compute-0 nova_compute[117331]:     </rng>
Oct 09 16:02:03 compute-0 nova_compute[117331]:     <filesystem supported='yes'>
Oct 09 16:02:03 compute-0 nova_compute[117331]:       <enum name='driverType'>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <value>path</value>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <value>handle</value>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <value>virtiofs</value>
Oct 09 16:02:03 compute-0 nova_compute[117331]:       </enum>
Oct 09 16:02:03 compute-0 nova_compute[117331]:     </filesystem>
Oct 09 16:02:03 compute-0 nova_compute[117331]:     <tpm supported='yes'>
Oct 09 16:02:03 compute-0 nova_compute[117331]:       <enum name='model'>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <value>tpm-tis</value>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <value>tpm-crb</value>
Oct 09 16:02:03 compute-0 nova_compute[117331]:       </enum>
Oct 09 16:02:03 compute-0 nova_compute[117331]:       <enum name='backendModel'>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <value>emulator</value>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <value>external</value>
Oct 09 16:02:03 compute-0 nova_compute[117331]:       </enum>
Oct 09 16:02:03 compute-0 nova_compute[117331]:       <enum name='backendVersion'>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <value>2.0</value>
Oct 09 16:02:03 compute-0 nova_compute[117331]:       </enum>
Oct 09 16:02:03 compute-0 nova_compute[117331]:     </tpm>
Oct 09 16:02:03 compute-0 nova_compute[117331]:     <redirdev supported='yes'>
Oct 09 16:02:03 compute-0 nova_compute[117331]:       <enum name='bus'>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <value>usb</value>
Oct 09 16:02:03 compute-0 nova_compute[117331]:       </enum>
Oct 09 16:02:03 compute-0 nova_compute[117331]:     </redirdev>
Oct 09 16:02:03 compute-0 nova_compute[117331]:     <channel supported='yes'>
Oct 09 16:02:03 compute-0 nova_compute[117331]:       <enum name='type'>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <value>pty</value>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <value>unix</value>
Oct 09 16:02:03 compute-0 nova_compute[117331]:       </enum>
Oct 09 16:02:03 compute-0 nova_compute[117331]:     </channel>
Oct 09 16:02:03 compute-0 nova_compute[117331]:     <crypto supported='yes'>
Oct 09 16:02:03 compute-0 nova_compute[117331]:       <enum name='model'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:       <enum name='type'>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <value>qemu</value>
Oct 09 16:02:03 compute-0 nova_compute[117331]:       </enum>
Oct 09 16:02:03 compute-0 nova_compute[117331]:       <enum name='backendModel'>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <value>builtin</value>
Oct 09 16:02:03 compute-0 nova_compute[117331]:       </enum>
Oct 09 16:02:03 compute-0 nova_compute[117331]:     </crypto>
Oct 09 16:02:03 compute-0 nova_compute[117331]:     <interface supported='yes'>
Oct 09 16:02:03 compute-0 nova_compute[117331]:       <enum name='backendType'>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <value>default</value>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <value>passt</value>
Oct 09 16:02:03 compute-0 nova_compute[117331]:       </enum>
Oct 09 16:02:03 compute-0 nova_compute[117331]:     </interface>
Oct 09 16:02:03 compute-0 nova_compute[117331]:     <panic supported='yes'>
Oct 09 16:02:03 compute-0 nova_compute[117331]:       <enum name='model'>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <value>isa</value>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <value>hyperv</value>
Oct 09 16:02:03 compute-0 nova_compute[117331]:       </enum>
Oct 09 16:02:03 compute-0 nova_compute[117331]:     </panic>
Oct 09 16:02:03 compute-0 nova_compute[117331]:   </devices>
Oct 09 16:02:03 compute-0 nova_compute[117331]:   <features>
Oct 09 16:02:03 compute-0 nova_compute[117331]:     <gic supported='no'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:     <vmcoreinfo supported='yes'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:     <genid supported='yes'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:     <backingStoreInput supported='yes'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:     <backup supported='yes'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:     <async-teardown supported='yes'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:     <ps2 supported='yes'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:     <sev supported='no'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:     <sgx supported='no'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:     <hyperv supported='yes'>
Oct 09 16:02:03 compute-0 nova_compute[117331]:       <enum name='features'>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <value>relaxed</value>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <value>vapic</value>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <value>spinlocks</value>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <value>vpindex</value>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <value>runtime</value>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <value>synic</value>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <value>stimer</value>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <value>reset</value>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <value>vendor_id</value>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <value>frequencies</value>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <value>reenlightenment</value>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <value>tlbflush</value>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <value>ipi</value>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <value>avic</value>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <value>emsr_bitmap</value>
Oct 09 16:02:03 compute-0 nova_compute[117331]:         <value>xmm_input</value>
Oct 09 16:02:03 compute-0 nova_compute[117331]:       </enum>
Oct 09 16:02:03 compute-0 nova_compute[117331]:     </hyperv>
Oct 09 16:02:03 compute-0 nova_compute[117331]:     <launchSecurity supported='no'/>
Oct 09 16:02:03 compute-0 nova_compute[117331]:   </features>
Oct 09 16:02:03 compute-0 nova_compute[117331]: </domainCapabilities>
Oct 09 16:02:03 compute-0 nova_compute[117331]:  _get_domain_capabilities /usr/lib/python3.12/site-packages/nova/virt/libvirt/host.py:1029
Oct 09 16:02:03 compute-0 nova_compute[117331]: 2025-10-09 16:02:03.583 2 DEBUG nova.virt.libvirt.host [None req-d385ee3e-f913-4a9a-bb24-31bd7762fadb - - - - - -] Checking secure boot support for host arch (x86_64) supports_secure_boot /usr/lib/python3.12/site-packages/nova/virt/libvirt/host.py:1877
Oct 09 16:02:03 compute-0 nova_compute[117331]: 2025-10-09 16:02:03.583 2 DEBUG nova.virt.libvirt.host [None req-d385ee3e-f913-4a9a-bb24-31bd7762fadb - - - - - -] Checking secure boot support for host arch (x86_64) supports_secure_boot /usr/lib/python3.12/site-packages/nova/virt/libvirt/host.py:1877
Oct 09 16:02:03 compute-0 nova_compute[117331]: 2025-10-09 16:02:03.584 2 DEBUG nova.virt.libvirt.host [None req-d385ee3e-f913-4a9a-bb24-31bd7762fadb - - - - - -] Checking secure boot support for host arch (x86_64) supports_secure_boot /usr/lib/python3.12/site-packages/nova/virt/libvirt/host.py:1877
Oct 09 16:02:03 compute-0 nova_compute[117331]: 2025-10-09 16:02:03.584 2 INFO nova.virt.libvirt.host [None req-d385ee3e-f913-4a9a-bb24-31bd7762fadb - - - - - -] Secure Boot support detected
Oct 09 16:02:03 compute-0 nova_compute[117331]: 2025-10-09 16:02:03.593 2 INFO nova.virt.libvirt.driver [None req-d385ee3e-f913-4a9a-bb24-31bd7762fadb - - - - - -] The live_migration_permit_post_copy is set to True and post copy live migration is available so auto-converge will not be in use.
Oct 09 16:02:03 compute-0 nova_compute[117331]: 2025-10-09 16:02:03.593 2 INFO nova.virt.libvirt.driver [None req-d385ee3e-f913-4a9a-bb24-31bd7762fadb - - - - - -] The live_migration_permit_post_copy is set to True and post copy live migration is available so auto-converge will not be in use.
Oct 09 16:02:03 compute-0 nova_compute[117331]: 2025-10-09 16:02:03.754 2 DEBUG nova.virt.libvirt.driver [None req-d385ee3e-f913-4a9a-bb24-31bd7762fadb - - - - - -] Enabling emulated TPM support _check_vtpm_support /usr/lib/python3.12/site-packages/nova/virt/libvirt/driver.py:1177
Oct 09 16:02:04 compute-0 nova_compute[117331]: 2025-10-09 16:02:04.266 2 INFO nova.virt.node [None req-d385ee3e-f913-4a9a-bb24-31bd7762fadb - - - - - -] Determined node identity 593051b8-2000-437f-a915-2616fc8b1671 from /var/lib/nova/compute_id
Oct 09 16:02:04 compute-0 nova_compute[117331]: 2025-10-09 16:02:04.775 2 WARNING nova.compute.manager [None req-d385ee3e-f913-4a9a-bb24-31bd7762fadb - - - - - -] Compute nodes ['593051b8-2000-437f-a915-2616fc8b1671'] for host compute-0.ctlplane.example.com were not found in the database. If this is the first time this service is starting on this host, then you can ignore this warning.
Oct 09 16:02:05 compute-0 sshd-session[117695]: Accepted publickey for zuul from 192.168.122.30 port 60592 ssh2: ECDSA SHA256:2Vdz7kVNDZnmAnEBdeIC9De7MGoQwU7bxSCyJABiYXo
Oct 09 16:02:05 compute-0 systemd-logind[841]: New session 11 of user zuul.
Oct 09 16:02:05 compute-0 systemd[1]: Started Session 11 of User zuul.
Oct 09 16:02:05 compute-0 sshd-session[117695]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Oct 09 16:02:05 compute-0 nova_compute[117331]: 2025-10-09 16:02:05.786 2 INFO nova.compute.manager [None req-d385ee3e-f913-4a9a-bb24-31bd7762fadb - - - - - -] Looking for unclaimed instances stuck in BUILDING status for nodes managed by this host
Oct 09 16:02:06 compute-0 python3.9[117848]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Oct 09 16:02:06 compute-0 nova_compute[117331]: 2025-10-09 16:02:06.800 2 WARNING nova.compute.manager [None req-d385ee3e-f913-4a9a-bb24-31bd7762fadb - - - - - -] No compute node record found for host compute-0.ctlplane.example.com. If this is the first time this service is starting on this host, then you can ignore this warning.: nova.exception_Remote.ComputeHostNotFound_Remote: Compute host compute-0.ctlplane.example.com could not be found.
Oct 09 16:02:06 compute-0 nova_compute[117331]: 2025-10-09 16:02:06.800 2 DEBUG oslo_concurrency.lockutils [None req-d385ee3e-f913-4a9a-bb24-31bd7762fadb - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:405
Oct 09 16:02:06 compute-0 nova_compute[117331]: 2025-10-09 16:02:06.800 2 DEBUG oslo_concurrency.lockutils [None req-d385ee3e-f913-4a9a-bb24-31bd7762fadb - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:410
Oct 09 16:02:06 compute-0 nova_compute[117331]: 2025-10-09 16:02:06.801 2 DEBUG oslo_concurrency.lockutils [None req-d385ee3e-f913-4a9a-bb24-31bd7762fadb - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:424
Oct 09 16:02:06 compute-0 nova_compute[117331]: 2025-10-09 16:02:06.801 2 DEBUG nova.compute.resource_tracker [None req-d385ee3e-f913-4a9a-bb24-31bd7762fadb - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.12/site-packages/nova/compute/resource_tracker.py:937
Oct 09 16:02:06 compute-0 nova_compute[117331]: 2025-10-09 16:02:06.928 2 WARNING nova.virt.libvirt.driver [None req-d385ee3e-f913-4a9a-bb24-31bd7762fadb - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Oct 09 16:02:06 compute-0 nova_compute[117331]: 2025-10-09 16:02:06.929 2 DEBUG oslo_concurrency.processutils [None req-d385ee3e-f913-4a9a-bb24-31bd7762fadb - - - - - -] Running cmd (subprocess): env LANG=C uptime execute /usr/lib/python3.12/site-packages/oslo_concurrency/processutils.py:349
Oct 09 16:02:06 compute-0 nova_compute[117331]: 2025-10-09 16:02:06.946 2 DEBUG oslo_concurrency.processutils [None req-d385ee3e-f913-4a9a-bb24-31bd7762fadb - - - - - -] CMD "env LANG=C uptime" returned: 0 in 0.017s execute /usr/lib/python3.12/site-packages/oslo_concurrency/processutils.py:372
Oct 09 16:02:06 compute-0 nova_compute[117331]: 2025-10-09 16:02:06.946 2 DEBUG nova.compute.resource_tracker [None req-d385ee3e-f913-4a9a-bb24-31bd7762fadb - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=6551MB free_disk=73.47844696044922GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": 
"label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.12/site-packages/nova/compute/resource_tracker.py:1136
Oct 09 16:02:06 compute-0 nova_compute[117331]: 2025-10-09 16:02:06.947 2 DEBUG oslo_concurrency.lockutils [None req-d385ee3e-f913-4a9a-bb24-31bd7762fadb - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:405
Oct 09 16:02:06 compute-0 nova_compute[117331]: 2025-10-09 16:02:06.947 2 DEBUG oslo_concurrency.lockutils [None req-d385ee3e-f913-4a9a-bb24-31bd7762fadb - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:410
Oct 09 16:02:07 compute-0 sudo[118003]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jguwxsmrkejajnokdhbcqdegkspvmzqr ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760025726.6588879-52-106074558705669/AnsiballZ_systemd_service.py'
Oct 09 16:02:07 compute-0 sudo[118003]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 09 16:02:07 compute-0 nova_compute[117331]: 2025-10-09 16:02:07.453 2 WARNING nova.compute.resource_tracker [None req-d385ee3e-f913-4a9a-bb24-31bd7762fadb - - - - - -] No compute node record for compute-0.ctlplane.example.com:593051b8-2000-437f-a915-2616fc8b1671: nova.exception_Remote.ComputeHostNotFound_Remote: Compute host 593051b8-2000-437f-a915-2616fc8b1671 could not be found.
Oct 09 16:02:07 compute-0 python3.9[118005]: ansible-ansible.builtin.systemd_service Invoked with daemon_reload=True daemon_reexec=False scope=system no_block=False name=None state=None enabled=None force=None masked=None
Oct 09 16:02:07 compute-0 systemd[1]: Reloading.
Oct 09 16:02:07 compute-0 systemd-sysv-generator[118035]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct 09 16:02:07 compute-0 systemd-rc-local-generator[118032]: /etc/rc.d/rc.local is not marked executable, skipping.
Oct 09 16:02:07 compute-0 sudo[118003]: pam_unix(sudo:session): session closed for user root
Oct 09 16:02:07 compute-0 nova_compute[117331]: 2025-10-09 16:02:07.960 2 INFO nova.compute.resource_tracker [None req-d385ee3e-f913-4a9a-bb24-31bd7762fadb - - - - - -] Compute node record created for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com with uuid: 593051b8-2000-437f-a915-2616fc8b1671
Oct 09 16:02:08 compute-0 podman[118163]: 2025-10-09 16:02:08.513213946 +0000 UTC m=+0.059457260 container health_status 0e966c7a283689daf23c5db5017f23f685ee836319240188a9f70ddd246563c5 (image=38.102.83.66:5001/podified-master-centos10/openstack-neutron-metadata-agent-ovn:watcher_latest, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': '38.102.83.66:5001/podified-master-centos10/openstack-neutron-metadata-agent-ovn:watcher_latest', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, io.buildah.version=1.41.4, org.label-schema.license=GPLv2, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator 
team, org.label-schema.vendor=CentOS, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.build-date=20251007, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=watcher_latest)
Oct 09 16:02:08 compute-0 python3.9[118204]: ansible-ansible.builtin.service_facts Invoked
Oct 09 16:02:08 compute-0 network[118225]: You are using 'network' service provided by 'network-scripts', which are now deprecated.
Oct 09 16:02:08 compute-0 network[118226]: 'network-scripts' will be removed from distribution in near future.
Oct 09 16:02:08 compute-0 network[118227]: It is advised to switch to 'NetworkManager' instead for network management.
Oct 09 16:02:09 compute-0 nova_compute[117331]: 2025-10-09 16:02:09.488 2 DEBUG nova.compute.resource_tracker [None req-d385ee3e-f913-4a9a-bb24-31bd7762fadb - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.12/site-packages/nova/compute/resource_tracker.py:1159
Oct 09 16:02:09 compute-0 nova_compute[117331]: 2025-10-09 16:02:09.488 2 DEBUG nova.compute.resource_tracker [None req-d385ee3e-f913-4a9a-bb24-31bd7762fadb - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=79GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] stats={'failed_builds': '0', 'uptime': ' 16:02:06 up 11 min,  0 user,  load average: 0.79, 0.71, 0.40\n'} _report_final_resource_view /usr/lib/python3.12/site-packages/nova/compute/resource_tracker.py:1168
Oct 09 16:02:09 compute-0 podman[118244]: 2025-10-09 16:02:09.846448564 +0000 UTC m=+0.053079569 container health_status 8227728d23f374cb9528be6ddf01dff38d8c792912a17b0d137932a18390b820 (image=38.102.83.66:5001/podified-master-centos10/openstack-iscsid:watcher_latest, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_id=iscsid, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251007, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=watcher_latest, io.buildah.version=1.41.4, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': '38.102.83.66:5001/podified-master-centos10/openstack-iscsid:watcher_latest', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, container_name=iscsid, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 10 Base Image)
Oct 09 16:02:10 compute-0 nova_compute[117331]: 2025-10-09 16:02:10.309 2 INFO nova.scheduler.client.report [None req-d385ee3e-f913-4a9a-bb24-31bd7762fadb - - - - - -] [req-dd117a01-5152-49b5-92e3-b9368983f1f9] Created resource provider record via placement API for resource provider with UUID 593051b8-2000-437f-a915-2616fc8b1671 and name compute-0.ctlplane.example.com.
Oct 09 16:02:10 compute-0 nova_compute[117331]: 2025-10-09 16:02:10.353 2 DEBUG nova.virt.libvirt.host [None req-d385ee3e-f913-4a9a-bb24-31bd7762fadb - - - - - -] /sys/module/kvm_amd/parameters/sev contains [N
Oct 09 16:02:10 compute-0 nova_compute[117331]: ] _kernel_supports_amd_sev /usr/lib/python3.12/site-packages/nova/virt/libvirt/host.py:1953
Oct 09 16:02:10 compute-0 nova_compute[117331]: 2025-10-09 16:02:10.353 2 INFO nova.virt.libvirt.host [None req-d385ee3e-f913-4a9a-bb24-31bd7762fadb - - - - - -] kernel doesn't support AMD SEV
Oct 09 16:02:10 compute-0 nova_compute[117331]: 2025-10-09 16:02:10.353 2 DEBUG nova.compute.provider_tree [None req-d385ee3e-f913-4a9a-bb24-31bd7762fadb - - - - - -] Updating inventory in ProviderTree for provider 593051b8-2000-437f-a915-2616fc8b1671 with inventory: {'MEMORY_MB': {'total': 7679, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0, 'reserved': 512}, 'VCPU': {'total': 8, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0, 'reserved': 0}, 'DISK_GB': {'total': 79, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9, 'reserved': 0}} update_inventory /usr/lib/python3.12/site-packages/nova/compute/provider_tree.py:176
Oct 09 16:02:10 compute-0 nova_compute[117331]: 2025-10-09 16:02:10.354 2 DEBUG nova.virt.libvirt.driver [None req-d385ee3e-f913-4a9a-bb24-31bd7762fadb - - - - - -] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.12/site-packages/nova/virt/libvirt/driver.py:5809
Oct 09 16:02:10 compute-0 nova_compute[117331]: 2025-10-09 16:02:10.891 2 DEBUG nova.scheduler.client.report [None req-d385ee3e-f913-4a9a-bb24-31bd7762fadb - - - - - -] Updated inventory for provider 593051b8-2000-437f-a915-2616fc8b1671 with generation 0 in Placement from set_inventory_for_provider using data: {'MEMORY_MB': {'total': 7679, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0, 'reserved': 512}, 'VCPU': {'total': 8, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0, 'reserved': 0}, 'DISK_GB': {'total': 79, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9, 'reserved': 0}} set_inventory_for_provider /usr/lib/python3.12/site-packages/nova/scheduler/client/report.py:975
Oct 09 16:02:10 compute-0 nova_compute[117331]: 2025-10-09 16:02:10.891 2 DEBUG nova.compute.provider_tree [None req-d385ee3e-f913-4a9a-bb24-31bd7762fadb - - - - - -] Updating resource provider 593051b8-2000-437f-a915-2616fc8b1671 generation from 0 to 1 during operation: update_inventory _update_generation /usr/lib/python3.12/site-packages/nova/compute/provider_tree.py:164
Oct 09 16:02:10 compute-0 nova_compute[117331]: 2025-10-09 16:02:10.891 2 DEBUG nova.compute.provider_tree [None req-d385ee3e-f913-4a9a-bb24-31bd7762fadb - - - - - -] Updating inventory in ProviderTree for provider 593051b8-2000-437f-a915-2616fc8b1671 with inventory: {'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 79, 'reserved': 0, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} update_inventory /usr/lib/python3.12/site-packages/nova/compute/provider_tree.py:176
Oct 09 16:02:11 compute-0 nova_compute[117331]: 2025-10-09 16:02:11.024 2 DEBUG nova.compute.provider_tree [None req-d385ee3e-f913-4a9a-bb24-31bd7762fadb - - - - - -] Updating resource provider 593051b8-2000-437f-a915-2616fc8b1671 generation from 1 to 2 during operation: update_traits _update_generation /usr/lib/python3.12/site-packages/nova/compute/provider_tree.py:164
Oct 09 16:02:11 compute-0 nova_compute[117331]: 2025-10-09 16:02:11.532 2 DEBUG nova.compute.resource_tracker [None req-d385ee3e-f913-4a9a-bb24-31bd7762fadb - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.12/site-packages/nova/compute/resource_tracker.py:1097
Oct 09 16:02:11 compute-0 nova_compute[117331]: 2025-10-09 16:02:11.533 2 DEBUG oslo_concurrency.lockutils [None req-d385ee3e-f913-4a9a-bb24-31bd7762fadb - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 4.586s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:424
Oct 09 16:02:11 compute-0 nova_compute[117331]: 2025-10-09 16:02:11.533 2 DEBUG nova.service [None req-d385ee3e-f913-4a9a-bb24-31bd7762fadb - - - - - -] Creating RPC server for service compute start /usr/lib/python3.12/site-packages/nova/service.py:177
Oct 09 16:02:11 compute-0 nova_compute[117331]: 2025-10-09 16:02:11.650 2 DEBUG nova.service [None req-d385ee3e-f913-4a9a-bb24-31bd7762fadb - - - - - -] Join ServiceGroup membership for this service compute start /usr/lib/python3.12/site-packages/nova/service.py:194
Oct 09 16:02:11 compute-0 nova_compute[117331]: 2025-10-09 16:02:11.651 2 DEBUG nova.servicegroup.drivers.db [None req-d385ee3e-f913-4a9a-bb24-31bd7762fadb - - - - - -] DB_Driver: join new ServiceGroup member compute-0.ctlplane.example.com to the compute group, service = <Service: host=compute-0.ctlplane.example.com, binary=nova-compute, manager_class_name=nova.compute.manager.ComputeManager> join /usr/lib/python3.12/site-packages/nova/servicegroup/drivers/db.py:44
Oct 09 16:02:12 compute-0 auditd[771]: Audit daemon rotating log files
Oct 09 16:02:13 compute-0 sudo[118522]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qvgasruapqaalvcdcrsogryzjsajesrc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760025732.8269877-90-97507050340771/AnsiballZ_systemd_service.py'
Oct 09 16:02:13 compute-0 sudo[118522]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 09 16:02:13 compute-0 python3.9[118524]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_ceilometer_agent_compute.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Oct 09 16:02:13 compute-0 sudo[118522]: pam_unix(sudo:session): session closed for user root
Oct 09 16:02:14 compute-0 sudo[118675]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kptgahpiexoqhkgtvvccaiagegtzditd ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760025733.6912951-110-233371888272513/AnsiballZ_file.py'
Oct 09 16:02:14 compute-0 sudo[118675]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 09 16:02:14 compute-0 python3.9[118677]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_ceilometer_agent_compute.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 09 16:02:14 compute-0 sudo[118675]: pam_unix(sudo:session): session closed for user root
Oct 09 16:02:14 compute-0 sudo[118827]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-addlumxvxwyikzqvwuuljbbmovfzgxeo ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760025734.537819-126-279224614296307/AnsiballZ_file.py'
Oct 09 16:02:14 compute-0 sudo[118827]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 09 16:02:15 compute-0 python3.9[118829]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_ceilometer_agent_compute.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 09 16:02:15 compute-0 sudo[118827]: pam_unix(sudo:session): session closed for user root
Oct 09 16:02:15 compute-0 sudo[118990]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ddqmolvzklggodgbkqwgoilydowjoqdp ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760025735.3766992-144-239104051295158/AnsiballZ_command.py'
Oct 09 16:02:15 compute-0 sudo[118990]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 09 16:02:15 compute-0 podman[118953]: 2025-10-09 16:02:15.828060412 +0000 UTC m=+0.081191272 container health_status d615e4f24ff62fbb14be4a63e6da568ec5b22309773c1c66103b6b4de64a8a2f (image=38.102.83.66:5001/podified-master-centos10/openstack-ovn-controller:watcher_latest, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_build_tag=watcher_latest, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': '38.102.83.66:5001/podified-master-centos10/openstack-ovn-controller:watcher_latest', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 10 Base Image, tcib_managed=true, io.buildah.version=1.41.4, org.label-schema.build-date=20251007, org.label-schema.schema-version=1.0)
Oct 09 16:02:15 compute-0 python3.9[119000]: ansible-ansible.legacy.command Invoked with _raw_params=if systemctl is-active certmonger.service; then
                                               systemctl disable --now certmonger.service
                                               test -f /etc/systemd/system/certmonger.service || systemctl mask certmonger.service
                                             fi
                                              _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 09 16:02:15 compute-0 sudo[118990]: pam_unix(sudo:session): session closed for user root
Oct 09 16:02:16 compute-0 python3.9[119159]: ansible-ansible.builtin.find Invoked with file_type=any hidden=True paths=['/var/lib/certmonger/requests'] patterns=[] read_whole_file=False age_stamp=mtime recurse=False follow=False get_checksum=False checksum_algorithm=sha1 use_regex=False exact_mode=True excludes=None contains=None age=None size=None depth=None mode=None encoding=None limit=None
Oct 09 16:02:17 compute-0 sudo[119309]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tarqtcyotgxgocpoairwyricirrsywae ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760025737.1565852-180-154742685251447/AnsiballZ_systemd_service.py'
Oct 09 16:02:17 compute-0 sudo[119309]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 09 16:02:17 compute-0 python3.9[119311]: ansible-ansible.builtin.systemd_service Invoked with daemon_reload=True daemon_reexec=False scope=system no_block=False name=None state=None enabled=None force=None masked=None
Oct 09 16:02:17 compute-0 systemd[1]: Reloading.
Oct 09 16:02:17 compute-0 rsyslogd[1282]: imjournal: journal files changed, reloading...  [v8.2506.0-2.el9 try https://www.rsyslog.com/e/0 ]
Oct 09 16:02:17 compute-0 rsyslogd[1282]: imjournal: journal files changed, reloading...  [v8.2506.0-2.el9 try https://www.rsyslog.com/e/0 ]
Oct 09 16:02:17 compute-0 systemd-rc-local-generator[119339]: /etc/rc.d/rc.local is not marked executable, skipping.
Oct 09 16:02:17 compute-0 systemd-sysv-generator[119344]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct 09 16:02:18 compute-0 sudo[119309]: pam_unix(sudo:session): session closed for user root
Oct 09 16:02:18 compute-0 sudo[119498]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dblyywwjjwvgjhhtobxkonnjtnxuuufx ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760025738.2706702-196-148914456425910/AnsiballZ_command.py'
Oct 09 16:02:18 compute-0 sudo[119498]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 09 16:02:18 compute-0 python3.9[119500]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_ceilometer_agent_compute.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 09 16:02:18 compute-0 sudo[119498]: pam_unix(sudo:session): session closed for user root
Oct 09 16:02:19 compute-0 sudo[119651]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-usyqzwttsbmrxddweqmyacbhmjdosybd ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760025739.1840694-214-66018453268004/AnsiballZ_file.py'
Oct 09 16:02:19 compute-0 sudo[119651]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 09 16:02:19 compute-0 python3.9[119653]: ansible-ansible.builtin.file Invoked with group=zuul mode=0750 owner=zuul path=/var/lib/openstack/config/telemetry recurse=True setype=container_file_t state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Oct 09 16:02:19 compute-0 sudo[119651]: pam_unix(sudo:session): session closed for user root
Oct 09 16:02:20 compute-0 python3.9[119803]: ansible-ansible.builtin.stat Invoked with path=/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem follow=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Oct 09 16:02:21 compute-0 python3.9[119955]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/telemetry/ceilometer-host-specific.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 09 16:02:21 compute-0 python3.9[120076]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/config/telemetry/ceilometer-host-specific.conf mode=0644 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1760025740.8003044-246-35768678264384/.source.conf follow=False _original_basename=ceilometer-host-specific.conf.j2 checksum=e86e0e43000ce9ccfe5aefbf8e8f2e3d15d05584 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Oct 09 16:02:22 compute-0 sudo[120226]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tjhormnqpgjpjmezntemtwwbmprxscip ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760025742.1689782-276-110969919645265/AnsiballZ_group.py'
Oct 09 16:02:22 compute-0 sudo[120226]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 09 16:02:22 compute-0 python3.9[120228]: ansible-ansible.builtin.group Invoked with name=libvirt state=present force=False system=False local=False non_unique=False gid=None gid_min=None gid_max=None
Oct 09 16:02:22 compute-0 sudo[120226]: pam_unix(sudo:session): session closed for user root
Oct 09 16:02:23 compute-0 sudo[120378]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zwunlxnokqfhdbqdwcuufsgqebbaucys ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760025743.1804821-298-193826477645120/AnsiballZ_getent.py'
Oct 09 16:02:23 compute-0 sudo[120378]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 09 16:02:23 compute-0 python3.9[120380]: ansible-ansible.builtin.getent Invoked with database=passwd key=ceilometer fail_key=True service=None split=None
Oct 09 16:02:23 compute-0 sudo[120378]: pam_unix(sudo:session): session closed for user root
Oct 09 16:02:24 compute-0 sudo[120531]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qibkrhyofvqsrkewvhiokvrffhxhmnvw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760025744.045291-314-213633559709650/AnsiballZ_group.py'
Oct 09 16:02:24 compute-0 sudo[120531]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 09 16:02:24 compute-0 python3.9[120533]: ansible-ansible.builtin.group Invoked with gid=42405 name=ceilometer state=present force=False system=False local=False non_unique=False gid_min=None gid_max=None
Oct 09 16:02:24 compute-0 groupadd[120534]: group added to /etc/group: name=ceilometer, GID=42405
Oct 09 16:02:24 compute-0 groupadd[120534]: group added to /etc/gshadow: name=ceilometer
Oct 09 16:02:24 compute-0 groupadd[120534]: new group: name=ceilometer, GID=42405
Oct 09 16:02:24 compute-0 sudo[120531]: pam_unix(sudo:session): session closed for user root
Oct 09 16:02:25 compute-0 sudo[120689]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-sqbuhjpiqsqbwikiimmbpguwizactwtj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760025745.0343301-330-5581867229271/AnsiballZ_user.py'
Oct 09 16:02:25 compute-0 sudo[120689]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 09 16:02:25 compute-0 python3.9[120691]: ansible-ansible.builtin.user Invoked with comment=ceilometer user group=ceilometer groups=['libvirt'] name=ceilometer shell=/sbin/nologin state=present uid=42405 non_unique=False force=False remove=False create_home=True system=False move_home=False append=False ssh_key_bits=0 ssh_key_type=rsa ssh_key_comment=ansible-generated on compute-0 update_password=always home=None password=NOT_LOGGING_PARAMETER login_class=None password_expire_max=None password_expire_min=None password_expire_warn=None hidden=None seuser=None skeleton=None generate_ssh_key=None ssh_key_file=None ssh_key_passphrase=NOT_LOGGING_PARAMETER expires=None password_lock=None local=None profile=None authorization=None role=None umask=None password_expire_account_disable=None uid_min=None uid_max=None
Oct 09 16:02:25 compute-0 useradd[120693]: new user: name=ceilometer, UID=42405, GID=42405, home=/home/ceilometer, shell=/sbin/nologin, from=/dev/pts/0
Oct 09 16:02:25 compute-0 useradd[120693]: add 'ceilometer' to group 'libvirt'
Oct 09 16:02:25 compute-0 useradd[120693]: add 'ceilometer' to shadow group 'libvirt'
Oct 09 16:02:25 compute-0 sudo[120689]: pam_unix(sudo:session): session closed for user root
Oct 09 16:02:26 compute-0 podman[120776]: 2025-10-09 16:02:26.820452429 +0000 UTC m=+0.055955939 container health_status 270830b5167cafbab735d8118980f26987e673cd0fa464dd2202394830bcca66 (image=38.102.83.66:5001/podified-master-centos10/openstack-multipathd:watcher_latest, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, tcib_build_tag=watcher_latest, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': '38.102.83.66:5001/podified-master-centos10/openstack-multipathd:watcher_latest', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd, org.label-schema.build-date=20251007, org.label-schema.schema-version=1.0, config_id=multipathd, tcib_managed=true, io.buildah.version=1.41.4, org.label-schema.license=GPLv2)
Oct 09 16:02:27 compute-0 python3.9[120869]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/telemetry/ceilometer.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 09 16:02:27 compute-0 python3.9[120990]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/config/telemetry/ceilometer.conf mode=0640 remote_src=False src=/home/zuul/.ansible/tmp/ansible-tmp-1760025746.6934292-382-256096499785174/.source.conf _original_basename=ceilometer.conf follow=False checksum=f74f01c63e6cdeca5458ef9aff2a1db5d6a4e4b9 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 09 16:02:28 compute-0 python3.9[121140]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/telemetry/polling.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 09 16:02:28 compute-0 python3.9[121261]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/config/telemetry/polling.yaml mode=0640 remote_src=False src=/home/zuul/.ansible/tmp/ansible-tmp-1760025747.7012963-382-191227396851410/.source.yaml _original_basename=polling.yaml follow=False checksum=6c8680a286285f2e0ef9fa528ca754765e5ed0e5 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 09 16:02:29 compute-0 python3.9[121411]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/telemetry/custom.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 09 16:02:29 compute-0 python3.9[121532]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/config/telemetry/custom.conf mode=0640 remote_src=False src=/home/zuul/.ansible/tmp/ansible-tmp-1760025748.7717898-382-178469615183285/.source.conf _original_basename=custom.conf follow=False checksum=838b8b0a7d7f72e55ab67d39f32e3cb3eca2139b backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 09 16:02:30 compute-0 python3.9[121682]: ansible-ansible.builtin.stat Invoked with path=/var/lib/openstack/certs/telemetry/default/tls.crt follow=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Oct 09 16:02:31 compute-0 python3.9[121834]: ansible-ansible.builtin.stat Invoked with path=/var/lib/openstack/certs/telemetry/default/tls.key follow=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Oct 09 16:02:32 compute-0 python3.9[121986]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 09 16:02:32 compute-0 python3.9[122107]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json mode=420 src=/home/zuul/.ansible/tmp/ansible-tmp-1760025751.7716331-500-241809734218629/.source.json follow=False _original_basename=ceilometer-agent-compute.json.j2 checksum=264d11e8d3809e7ef745878dce7edd46098e25b2 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 09 16:02:33 compute-0 python3.9[122257]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/telemetry/ceilometer-host-specific.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 09 16:02:33 compute-0 python3.9[122333]: ansible-ansible.legacy.file Invoked with mode=420 dest=/var/lib/openstack/config/telemetry/ceilometer-host-specific.conf _original_basename=ceilometer-host-specific.conf.j2 recurse=False state=file path=/var/lib/openstack/config/telemetry/ceilometer-host-specific.conf force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 09 16:02:34 compute-0 python3.9[122483]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/telemetry/ceilometer_agent_compute.json follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 09 16:02:34 compute-0 python3.9[122604]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/config/telemetry/ceilometer_agent_compute.json mode=420 src=/home/zuul/.ansible/tmp/ansible-tmp-1760025753.973124-500-176590731268930/.source.json follow=False _original_basename=ceilometer_agent_compute.json.j2 checksum=b0f01f0c732568ee1c59a02728aad039889a4d22 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 09 16:02:35 compute-0 ovn_metadata_agent[28608]: 2025-10-09 16:02:35.267 28613 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:405
Oct 09 16:02:35 compute-0 ovn_metadata_agent[28608]: 2025-10-09 16:02:35.268 28613 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.000s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:410
Oct 09 16:02:35 compute-0 ovn_metadata_agent[28608]: 2025-10-09 16:02:35.268 28613 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:424
Oct 09 16:02:35 compute-0 python3.9[122755]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 09 16:02:35 compute-0 python3.9[122876]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml mode=420 src=/home/zuul/.ansible/tmp/ansible-tmp-1760025755.041899-500-81087653768288/.source.yaml follow=False _original_basename=ceilometer_prom_exporter.yaml.j2 checksum=10157c879411ee6023e506dc85a343cedc52700f backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 09 16:02:36 compute-0 python3.9[123026]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/telemetry/firewall.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 09 16:02:36 compute-0 python3.9[123147]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/config/telemetry/firewall.yaml mode=420 src=/home/zuul/.ansible/tmp/ansible-tmp-1760025756.0410109-500-82702534273305/.source.yaml follow=False _original_basename=firewall.yaml.j2 checksum=d942d984493b214bda2913f753ff68cdcedff00e backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 09 16:02:37 compute-0 python3.9[123297]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/telemetry/node_exporter.json follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 09 16:02:38 compute-0 python3.9[123418]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/config/telemetry/node_exporter.json mode=420 src=/home/zuul/.ansible/tmp/ansible-tmp-1760025757.1227176-500-85350321572993/.source.json follow=False _original_basename=node_exporter.json.j2 checksum=6e4982940d2bfae88404914dfaf72552f6356d81 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 09 16:02:38 compute-0 python3.9[123568]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/telemetry/node_exporter.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 09 16:02:38 compute-0 podman[123622]: 2025-10-09 16:02:38.820485863 +0000 UTC m=+0.054364699 container health_status 0e966c7a283689daf23c5db5017f23f685ee836319240188a9f70ddd246563c5 (image=38.102.83.66:5001/podified-master-centos10/openstack-neutron-metadata-agent-ovn:watcher_latest, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251007, tcib_build_tag=watcher_latest, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': '38.102.83.66:5001/podified-master-centos10/openstack-neutron-metadata-agent-ovn:watcher_latest', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.schema-version=1.0, container_name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=ovn_metadata_agent, io.buildah.version=1.41.4)
Oct 09 16:02:39 compute-0 python3.9[123711]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/config/telemetry/node_exporter.yaml mode=420 src=/home/zuul/.ansible/tmp/ansible-tmp-1760025758.2055833-500-200153106656575/.source.yaml follow=False _original_basename=node_exporter.yaml.j2 checksum=81d906d3e1e8c4f8367276f5d3a67b80ca7e989e backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 09 16:02:39 compute-0 python3.9[123861]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/telemetry/openstack_network_exporter.json follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 09 16:02:40 compute-0 podman[123956]: 2025-10-09 16:02:40.003536632 +0000 UTC m=+0.059607834 container health_status 8227728d23f374cb9528be6ddf01dff38d8c792912a17b0d137932a18390b820 (image=38.102.83.66:5001/podified-master-centos10/openstack-iscsid:watcher_latest, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': '38.102.83.66:5001/podified-master-centos10/openstack-iscsid:watcher_latest', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.build-date=20251007, org.label-schema.license=GPLv2, io.buildah.version=1.41.4, org.label-schema.name=CentOS Stream 10 Base Image, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=watcher_latest, tcib_managed=true, config_id=iscsid, container_name=iscsid, org.label-schema.schema-version=1.0)
Oct 09 16:02:40 compute-0 python3.9[123993]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/config/telemetry/openstack_network_exporter.json mode=420 src=/home/zuul/.ansible/tmp/ansible-tmp-1760025759.2457166-500-269218430718210/.source.json follow=False _original_basename=openstack_network_exporter.json.j2 checksum=d474f1e4c3dbd24762592c51cbe5311f0a037273 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 09 16:02:40 compute-0 python3.9[124152]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 09 16:02:41 compute-0 python3.9[124273]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml mode=420 src=/home/zuul/.ansible/tmp/ansible-tmp-1760025760.31432-500-85985144735788/.source.yaml follow=False _original_basename=openstack_network_exporter.yaml.j2 checksum=2b6bd0891e609bf38a73282f42888052b750bed6 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 09 16:02:41 compute-0 python3.9[124423]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/telemetry/podman_exporter.json follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 09 16:02:42 compute-0 python3.9[124544]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/config/telemetry/podman_exporter.json mode=420 src=/home/zuul/.ansible/tmp/ansible-tmp-1760025761.3502946-500-259511277652815/.source.json follow=False _original_basename=podman_exporter.json.j2 checksum=e342121a88f67e2bae7ebc05d1e6d350470198a5 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 09 16:02:42 compute-0 python3.9[124694]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/telemetry/podman_exporter.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 09 16:02:43 compute-0 python3.9[124815]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/config/telemetry/podman_exporter.yaml mode=420 src=/home/zuul/.ansible/tmp/ansible-tmp-1760025762.441757-500-251377844870180/.source.yaml follow=False _original_basename=podman_exporter.yaml.j2 checksum=7ccb5eca2ff1dc337c3f3ecbbff5245af7149c47 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 09 16:02:45 compute-0 python3.9[124965]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/telemetry/node_exporter.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 09 16:02:45 compute-0 python3.9[125041]: ansible-ansible.legacy.file Invoked with mode=0644 dest=/var/lib/openstack/config/telemetry/node_exporter.yaml _original_basename=node_exporter.yaml.j2 recurse=False state=file path=/var/lib/openstack/config/telemetry/node_exporter.yaml force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 09 16:02:46 compute-0 podman[125165]: 2025-10-09 16:02:46.268272479 +0000 UTC m=+0.099423176 container health_status d615e4f24ff62fbb14be4a63e6da568ec5b22309773c1c66103b6b4de64a8a2f (image=38.102.83.66:5001/podified-master-centos10/openstack-ovn-controller:watcher_latest, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_controller, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, container_name=ovn_controller, org.label-schema.build-date=20251007, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': '38.102.83.66:5001/podified-master-centos10/openstack-ovn-controller:watcher_latest', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.4, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 10 Base Image, tcib_build_tag=watcher_latest, tcib_managed=true)
Oct 09 16:02:46 compute-0 python3.9[125208]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/telemetry/podman_exporter.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 09 16:02:46 compute-0 python3.9[125293]: ansible-ansible.legacy.file Invoked with mode=0644 dest=/var/lib/openstack/config/telemetry/podman_exporter.yaml _original_basename=podman_exporter.yaml.j2 recurse=False state=file path=/var/lib/openstack/config/telemetry/podman_exporter.yaml force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 09 16:02:47 compute-0 python3.9[125443]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 09 16:02:47 compute-0 python3.9[125519]: ansible-ansible.legacy.file Invoked with mode=0644 dest=/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml _original_basename=ceilometer_prom_exporter.yaml.j2 recurse=False state=file path=/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 09 16:02:48 compute-0 sudo[125669]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-sjpkjaqfwbmlythxslukhnbmxdtzgxcs ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760025768.0781174-878-261135427940725/AnsiballZ_file.py'
Oct 09 16:02:48 compute-0 sudo[125669]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 09 16:02:48 compute-0 python3.9[125671]: ansible-ansible.builtin.file Invoked with group=ceilometer mode=0644 owner=ceilometer path=/var/lib/openstack/certs/telemetry/default/tls.crt recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False state=None _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 09 16:02:48 compute-0 sudo[125669]: pam_unix(sudo:session): session closed for user root
Oct 09 16:02:48 compute-0 sudo[125821]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-lwvoiaeqsfnrbercsfpvhizhlwybludh ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760025768.729914-894-116800821954331/AnsiballZ_file.py'
Oct 09 16:02:48 compute-0 sudo[125821]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 09 16:02:49 compute-0 python3.9[125823]: ansible-ansible.builtin.file Invoked with group=ceilometer mode=0644 owner=ceilometer path=/var/lib/openstack/certs/telemetry/default/tls.key recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False state=None _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 09 16:02:49 compute-0 sudo[125821]: pam_unix(sudo:session): session closed for user root
Oct 09 16:02:49 compute-0 sudo[125973]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vsfekgriqetecluddncieswijebhmqtr ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760025769.4087741-910-52159528384679/AnsiballZ_file.py'
Oct 09 16:02:49 compute-0 sudo[125973]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 09 16:02:49 compute-0 python3.9[125975]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/openstack/healthchecks setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Oct 09 16:02:49 compute-0 sudo[125973]: pam_unix(sudo:session): session closed for user root
Oct 09 16:02:50 compute-0 sudo[126125]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fkbgvoghbtyqybknqwbkzjogakxewecu ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760025770.0796869-926-112887139800793/AnsiballZ_systemd_service.py'
Oct 09 16:02:50 compute-0 sudo[126125]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 09 16:02:50 compute-0 python3.9[126127]: ansible-ansible.builtin.systemd_service Invoked with enabled=True name=podman.socket state=started daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Oct 09 16:02:50 compute-0 systemd[1]: Reloading.
Oct 09 16:02:50 compute-0 systemd-rc-local-generator[126156]: /etc/rc.d/rc.local is not marked executable, skipping.
Oct 09 16:02:50 compute-0 systemd-sysv-generator[126159]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct 09 16:02:50 compute-0 systemd[1]: Listening on Podman API Socket.
Oct 09 16:02:51 compute-0 sudo[126125]: pam_unix(sudo:session): session closed for user root
Oct 09 16:02:51 compute-0 sudo[126316]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dyfyqadwmcfbwatlgmpladshlcgjhrnd ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760025771.4014938-944-22775590227260/AnsiballZ_stat.py'
Oct 09 16:02:51 compute-0 sudo[126316]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 09 16:02:51 compute-0 python3.9[126318]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/healthchecks/podman_exporter/healthcheck follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 09 16:02:51 compute-0 sudo[126316]: pam_unix(sudo:session): session closed for user root
Oct 09 16:02:52 compute-0 sudo[126439]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dhdvyjwkfwxhvsvyhzfuowiqcrowvwus ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760025771.4014938-944-22775590227260/AnsiballZ_copy.py'
Oct 09 16:02:52 compute-0 sudo[126439]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 09 16:02:52 compute-0 python3.9[126441]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/healthchecks/podman_exporter/ group=zuul mode=0700 owner=zuul setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1760025771.4014938-944-22775590227260/.source _original_basename=healthcheck follow=False checksum=e380c11c36804bfc65a818f2960cfa663daacfe5 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None attributes=None
Oct 09 16:02:52 compute-0 sudo[126439]: pam_unix(sudo:session): session closed for user root
Oct 09 16:02:53 compute-0 sudo[126591]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fzdhhryumuydsvfbioujyamgzotdialn ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760025772.8058937-978-101490791566244/AnsiballZ_container_config_data.py'
Oct 09 16:02:53 compute-0 sudo[126591]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 09 16:02:53 compute-0 python3.9[126593]: ansible-container_config_data Invoked with config_overrides={} config_path=/var/lib/openstack/config/telemetry config_pattern=podman_exporter.json debug=False
Oct 09 16:02:53 compute-0 sudo[126591]: pam_unix(sudo:session): session closed for user root
Oct 09 16:02:54 compute-0 sudo[126743]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ompsfckasookzckgrlnhzqnqaelhikgy ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760025773.7596242-996-35666292311892/AnsiballZ_container_config_hash.py'
Oct 09 16:02:54 compute-0 sudo[126743]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 09 16:02:54 compute-0 python3.9[126745]: ansible-container_config_hash Invoked with check_mode=False config_vol_prefix=/var/lib/config-data
Oct 09 16:02:54 compute-0 sudo[126743]: pam_unix(sudo:session): session closed for user root
Oct 09 16:02:55 compute-0 sudo[126895]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-aysoivakcuiuvjkpzrcovprpjrlehudy ; /usr/bin/python3 /home/zuul/.ansible/tmp/ansible-tmp-1760025774.7555537-1016-100594140389501/AnsiballZ_edpm_container_manage.py'
Oct 09 16:02:55 compute-0 sudo[126895]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 09 16:02:55 compute-0 python3[126897]: ansible-edpm_container_manage Invoked with concurrency=1 config_dir=/var/lib/openstack/config/telemetry config_id=edpm config_overrides={} config_patterns=podman_exporter.json log_base_path=/var/log/containers/stdouts debug=False
Oct 09 16:02:56 compute-0 podman[126909]: 2025-10-09 16:02:56.938501443 +0000 UTC m=+1.332589389 image pull e56d40e393eb5ea8704d9af8cf0d74665df83747106713fda91530f201837815 quay.io/navidys/prometheus-podman-exporter:v1.10.1
Oct 09 16:02:57 compute-0 podman[127006]: 2025-10-09 16:02:57.093018398 +0000 UTC m=+0.048112462 container create 86aa93e887899e54d383b0fc17fb3c5637680d1d8e146a130a557c837f0260fe (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, container_name=podman_exporter, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, config_id=edpm)
Oct 09 16:02:57 compute-0 podman[127006]: 2025-10-09 16:02:57.067171737 +0000 UTC m=+0.022265831 image pull e56d40e393eb5ea8704d9af8cf0d74665df83747106713fda91530f201837815 quay.io/navidys/prometheus-podman-exporter:v1.10.1
Oct 09 16:02:57 compute-0 python3[126897]: ansible-edpm_container_manage PODMAN-CONTAINER-DEBUG: podman create --name podman_exporter --conmon-pidfile /run/podman_exporter.pid --env OS_ENDPOINT_TYPE=internal --env CONTAINER_HOST=unix:///run/podman/podman.sock --healthcheck-command /openstack/healthcheck podman_exporter --label config_id=edpm --label container_name=podman_exporter --label managed_by=edpm_ansible --label config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']} --log-driver journald --log-level info --network host --privileged=True --publish 9882:9882 --user root --volume /var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z --volume /var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z --volume /run/podman/podman.sock:/run/podman/podman.sock:rw,z --volume /var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z quay.io/navidys/prometheus-podman-exporter:v1.10.1 --web.config.file=/etc/podman_exporter/podman_exporter.yaml
Oct 09 16:02:57 compute-0 sudo[126895]: pam_unix(sudo:session): session closed for user root
Oct 09 16:02:57 compute-0 sudo[127206]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-whjekhdurwajhpvqhywpkjrxqmmdywsb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760025777.3618188-1032-81993358123676/AnsiballZ_stat.py'
Oct 09 16:02:57 compute-0 sudo[127206]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 09 16:02:57 compute-0 podman[127169]: 2025-10-09 16:02:57.631230293 +0000 UTC m=+0.067480642 container health_status 270830b5167cafbab735d8118980f26987e673cd0fa464dd2202394830bcca66 (image=38.102.83.66:5001/podified-master-centos10/openstack-multipathd:watcher_latest, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251007, org.label-schema.name=CentOS Stream 10 Base Image, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': '38.102.83.66:5001/podified-master-centos10/openstack-multipathd:watcher_latest', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, tcib_build_tag=watcher_latest, config_id=multipathd, container_name=multipathd, org.label-schema.license=GPLv2, io.buildah.version=1.41.4)
Oct 09 16:02:57 compute-0 python3.9[127213]: ansible-ansible.builtin.stat Invoked with path=/etc/sysconfig/podman_drop_in follow=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Oct 09 16:02:57 compute-0 sudo[127206]: pam_unix(sudo:session): session closed for user root
Oct 09 16:02:58 compute-0 sudo[127369]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ecusehqpqhrozkvbnleaaporezdfqajt ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760025778.2513113-1050-42212137474208/AnsiballZ_file.py'
Oct 09 16:02:58 compute-0 sudo[127369]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 09 16:02:58 compute-0 python3.9[127371]: ansible-file Invoked with path=/etc/systemd/system/edpm_podman_exporter.requires state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 09 16:02:58 compute-0 sudo[127369]: pam_unix(sudo:session): session closed for user root
Oct 09 16:02:59 compute-0 sudo[127520]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wawychuqtgqngzjzqueszatyprwugfze ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760025778.7617848-1050-7922988206963/AnsiballZ_copy.py'
Oct 09 16:02:59 compute-0 sudo[127520]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 09 16:02:59 compute-0 python3.9[127522]: ansible-copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1760025778.7617848-1050-7922988206963/source dest=/etc/systemd/system/edpm_podman_exporter.service mode=0644 owner=root group=root backup=False force=True remote_src=False follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 09 16:02:59 compute-0 sudo[127520]: pam_unix(sudo:session): session closed for user root
Oct 09 16:02:59 compute-0 sudo[127596]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-uqfuwkyacnwokxvantbvqfqtoclqvtsm ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760025778.7617848-1050-7922988206963/AnsiballZ_systemd.py'
Oct 09 16:02:59 compute-0 sudo[127596]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 09 16:03:00 compute-0 python3.9[127598]: ansible-systemd Invoked with daemon_reload=True daemon_reexec=False scope=system no_block=False name=None state=None enabled=None force=None masked=None
Oct 09 16:03:00 compute-0 systemd[1]: Reloading.
Oct 09 16:03:00 compute-0 systemd-rc-local-generator[127626]: /etc/rc.d/rc.local is not marked executable, skipping.
Oct 09 16:03:00 compute-0 systemd-sysv-generator[127629]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct 09 16:03:00 compute-0 sudo[127596]: pam_unix(sudo:session): session closed for user root
Oct 09 16:03:00 compute-0 sudo[127707]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jkgutzhvokmasjmxmglsactfybmdqbbs ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760025778.7617848-1050-7922988206963/AnsiballZ_systemd.py'
Oct 09 16:03:00 compute-0 sudo[127707]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 09 16:03:01 compute-0 python3.9[127709]: ansible-systemd Invoked with state=restarted name=edpm_podman_exporter.service enabled=True daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Oct 09 16:03:01 compute-0 systemd[1]: Reloading.
Oct 09 16:03:01 compute-0 systemd-rc-local-generator[127739]: /etc/rc.d/rc.local is not marked executable, skipping.
Oct 09 16:03:01 compute-0 systemd-sysv-generator[127742]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct 09 16:03:01 compute-0 systemd[1]: Starting podman_exporter container...
Oct 09 16:03:01 compute-0 systemd[1]: Started libcrun container.
Oct 09 16:03:01 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/120fb35d62b6362000d6f268d10aa7c1827bd092fe9aff8a50cfc6abf41a1969/merged/etc/podman_exporter/podman_exporter.yaml supports timestamps until 2038 (0x7fffffff)
Oct 09 16:03:01 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/120fb35d62b6362000d6f268d10aa7c1827bd092fe9aff8a50cfc6abf41a1969/merged/etc/podman_exporter/tls supports timestamps until 2038 (0x7fffffff)
Oct 09 16:03:01 compute-0 systemd[1]: Started /usr/bin/podman healthcheck run 86aa93e887899e54d383b0fc17fb3c5637680d1d8e146a130a557c837f0260fe.
Oct 09 16:03:01 compute-0 podman[127749]: 2025-10-09 16:03:01.531997629 +0000 UTC m=+0.123424190 container init 86aa93e887899e54d383b0fc17fb3c5637680d1d8e146a130a557c837f0260fe (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter)
Oct 09 16:03:01 compute-0 podman_exporter[127764]: ts=2025-10-09T16:03:01.548Z caller=exporter.go:68 level=info msg="Starting podman-prometheus-exporter" version="(version=1.10.1, branch=HEAD, revision=1)"
Oct 09 16:03:01 compute-0 podman_exporter[127764]: ts=2025-10-09T16:03:01.548Z caller=exporter.go:69 level=info msg=metrics enhanced=false
Oct 09 16:03:01 compute-0 podman_exporter[127764]: ts=2025-10-09T16:03:01.548Z caller=handler.go:94 level=info msg="enabled collectors"
Oct 09 16:03:01 compute-0 podman_exporter[127764]: ts=2025-10-09T16:03:01.548Z caller=handler.go:105 level=info collector=container
Oct 09 16:03:01 compute-0 podman[127749]: 2025-10-09 16:03:01.560920667 +0000 UTC m=+0.152347208 container start 86aa93e887899e54d383b0fc17fb3c5637680d1d8e146a130a557c837f0260fe (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']})
Oct 09 16:03:01 compute-0 systemd[1]: Starting Podman API Service...
Oct 09 16:03:01 compute-0 systemd[1]: Started Podman API Service.
Oct 09 16:03:01 compute-0 podman[127749]: podman_exporter
Oct 09 16:03:01 compute-0 systemd[1]: Started podman_exporter container.
Oct 09 16:03:01 compute-0 podman[127775]: time="2025-10-09T16:03:01Z" level=info msg="/usr/bin/podman filtering at log level info"
Oct 09 16:03:01 compute-0 podman[127775]: time="2025-10-09T16:03:01Z" level=info msg="Setting parallel job count to 25"
Oct 09 16:03:01 compute-0 podman[127775]: time="2025-10-09T16:03:01Z" level=info msg="Using sqlite as database backend"
Oct 09 16:03:01 compute-0 podman[127775]: time="2025-10-09T16:03:01Z" level=info msg="Not using native diff for overlay, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled"
Oct 09 16:03:01 compute-0 podman[127775]: time="2025-10-09T16:03:01Z" level=info msg="Using systemd socket activation to determine API endpoint"
Oct 09 16:03:01 compute-0 podman[127775]: time="2025-10-09T16:03:01Z" level=info msg="API service listening on \"/run/podman/podman.sock\". URI: \"unix:///run/podman/podman.sock\""
Oct 09 16:03:01 compute-0 podman[127775]: @ - - [09/Oct/2025:16:03:01 +0000] "GET /v4.9.3/libpod/_ping HTTP/1.1" 200 2 "" "Go-http-client/1.1"
Oct 09 16:03:01 compute-0 sudo[127707]: pam_unix(sudo:session): session closed for user root
Oct 09 16:03:01 compute-0 podman[127775]: time="2025-10-09T16:03:01Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Oct 09 16:03:01 compute-0 podman[127775]: @ - - [09/Oct/2025:16:03:01 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=true&sync=false HTTP/1.1" 200 16535 "" "Go-http-client/1.1"
Oct 09 16:03:01 compute-0 podman_exporter[127764]: ts=2025-10-09T16:03:01.632Z caller=exporter.go:96 level=info msg="Listening on" address=:9882
Oct 09 16:03:01 compute-0 podman_exporter[127764]: ts=2025-10-09T16:03:01.634Z caller=tls_config.go:313 level=info msg="Listening on" address=[::]:9882
Oct 09 16:03:01 compute-0 podman_exporter[127764]: ts=2025-10-09T16:03:01.635Z caller=tls_config.go:349 level=info msg="TLS is enabled." http2=true address=[::]:9882
Oct 09 16:03:01 compute-0 podman[127773]: 2025-10-09 16:03:01.635285514 +0000 UTC m=+0.065008704 container health_status 86aa93e887899e54d383b0fc17fb3c5637680d1d8e146a130a557c837f0260fe (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=starting, health_failing_streak=1, health_log=, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible)
Oct 09 16:03:01 compute-0 systemd[1]: 86aa93e887899e54d383b0fc17fb3c5637680d1d8e146a130a557c837f0260fe-382d00bc7547a470.service: Main process exited, code=exited, status=1/FAILURE
Oct 09 16:03:01 compute-0 systemd[1]: 86aa93e887899e54d383b0fc17fb3c5637680d1d8e146a130a557c837f0260fe-382d00bc7547a470.service: Failed with result 'exit-code'.
Oct 09 16:03:02 compute-0 sudo[127961]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fduiuilinezbeiomusjgouggzmcfjtjo ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760025781.8186696-1098-270071078243576/AnsiballZ_systemd.py'
Oct 09 16:03:02 compute-0 sudo[127961]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 09 16:03:02 compute-0 python3.9[127963]: ansible-ansible.builtin.systemd Invoked with name=edpm_podman_exporter.service state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Oct 09 16:03:02 compute-0 systemd[1]: Stopping podman_exporter container...
Oct 09 16:03:02 compute-0 podman[127775]: @ - - [09/Oct/2025:16:03:01 +0000] "GET /v4.9.3/libpod/events?filters=%7B%7D&since=&stream=true&until= HTTP/1.1" 200 1449 "" "Go-http-client/1.1"
Oct 09 16:03:02 compute-0 systemd[1]: libpod-86aa93e887899e54d383b0fc17fb3c5637680d1d8e146a130a557c837f0260fe.scope: Deactivated successfully.
Oct 09 16:03:02 compute-0 podman[127967]: 2025-10-09 16:03:02.448835801 +0000 UTC m=+0.039920965 container died 86aa93e887899e54d383b0fc17fb3c5637680d1d8e146a130a557c837f0260fe (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm)
Oct 09 16:03:02 compute-0 systemd[1]: 86aa93e887899e54d383b0fc17fb3c5637680d1d8e146a130a557c837f0260fe-382d00bc7547a470.timer: Deactivated successfully.
Oct 09 16:03:02 compute-0 systemd[1]: Stopped /usr/bin/podman healthcheck run 86aa93e887899e54d383b0fc17fb3c5637680d1d8e146a130a557c837f0260fe.
Oct 09 16:03:02 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-86aa93e887899e54d383b0fc17fb3c5637680d1d8e146a130a557c837f0260fe-userdata-shm.mount: Deactivated successfully.
Oct 09 16:03:02 compute-0 systemd[1]: var-lib-containers-storage-overlay-120fb35d62b6362000d6f268d10aa7c1827bd092fe9aff8a50cfc6abf41a1969-merged.mount: Deactivated successfully.
Oct 09 16:03:02 compute-0 podman[127967]: 2025-10-09 16:03:02.605317668 +0000 UTC m=+0.196402842 container cleanup 86aa93e887899e54d383b0fc17fb3c5637680d1d8e146a130a557c837f0260fe (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']})
Oct 09 16:03:02 compute-0 podman[127967]: podman_exporter
Oct 09 16:03:02 compute-0 systemd[1]: edpm_podman_exporter.service: Main process exited, code=exited, status=2/INVALIDARGUMENT
Oct 09 16:03:02 compute-0 podman[127996]: podman_exporter
Oct 09 16:03:02 compute-0 systemd[1]: edpm_podman_exporter.service: Failed with result 'exit-code'.
Oct 09 16:03:02 compute-0 systemd[1]: Stopped podman_exporter container.
Oct 09 16:03:02 compute-0 systemd[1]: Starting podman_exporter container...
Oct 09 16:03:02 compute-0 systemd[1]: Started libcrun container.
Oct 09 16:03:02 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/120fb35d62b6362000d6f268d10aa7c1827bd092fe9aff8a50cfc6abf41a1969/merged/etc/podman_exporter/podman_exporter.yaml supports timestamps until 2038 (0x7fffffff)
Oct 09 16:03:02 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/120fb35d62b6362000d6f268d10aa7c1827bd092fe9aff8a50cfc6abf41a1969/merged/etc/podman_exporter/tls supports timestamps until 2038 (0x7fffffff)
Oct 09 16:03:02 compute-0 systemd[1]: Started /usr/bin/podman healthcheck run 86aa93e887899e54d383b0fc17fb3c5637680d1d8e146a130a557c837f0260fe.
Oct 09 16:03:02 compute-0 podman[128009]: 2025-10-09 16:03:02.826720626 +0000 UTC m=+0.136883732 container init 86aa93e887899e54d383b0fc17fb3c5637680d1d8e146a130a557c837f0260fe (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']})
Oct 09 16:03:02 compute-0 podman_exporter[128025]: ts=2025-10-09T16:03:02.837Z caller=exporter.go:68 level=info msg="Starting podman-prometheus-exporter" version="(version=1.10.1, branch=HEAD, revision=1)"
Oct 09 16:03:02 compute-0 podman_exporter[128025]: ts=2025-10-09T16:03:02.838Z caller=exporter.go:69 level=info msg=metrics enhanced=false
Oct 09 16:03:02 compute-0 podman_exporter[128025]: ts=2025-10-09T16:03:02.838Z caller=handler.go:94 level=info msg="enabled collectors"
Oct 09 16:03:02 compute-0 podman_exporter[128025]: ts=2025-10-09T16:03:02.838Z caller=handler.go:105 level=info collector=container
Oct 09 16:03:02 compute-0 podman[127775]: @ - - [09/Oct/2025:16:03:02 +0000] "GET /v4.9.3/libpod/_ping HTTP/1.1" 200 2 "" "Go-http-client/1.1"
Oct 09 16:03:02 compute-0 podman[127775]: time="2025-10-09T16:03:02Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Oct 09 16:03:02 compute-0 podman[128009]: 2025-10-09 16:03:02.847046405 +0000 UTC m=+0.157209511 container start 86aa93e887899e54d383b0fc17fb3c5637680d1d8e146a130a557c837f0260fe (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible)
Oct 09 16:03:02 compute-0 podman[128009]: podman_exporter
Oct 09 16:03:02 compute-0 podman[127775]: @ - - [09/Oct/2025:16:03:02 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=true&sync=false HTTP/1.1" 200 16537 "" "Go-http-client/1.1"
Oct 09 16:03:02 compute-0 podman_exporter[128025]: ts=2025-10-09T16:03:02.852Z caller=exporter.go:96 level=info msg="Listening on" address=:9882
Oct 09 16:03:02 compute-0 podman_exporter[128025]: ts=2025-10-09T16:03:02.852Z caller=tls_config.go:313 level=info msg="Listening on" address=[::]:9882
Oct 09 16:03:02 compute-0 podman_exporter[128025]: ts=2025-10-09T16:03:02.853Z caller=tls_config.go:349 level=info msg="TLS is enabled." http2=true address=[::]:9882
Oct 09 16:03:02 compute-0 systemd[1]: Started podman_exporter container.
Oct 09 16:03:02 compute-0 sudo[127961]: pam_unix(sudo:session): session closed for user root
Oct 09 16:03:02 compute-0 podman[128035]: 2025-10-09 16:03:02.906524215 +0000 UTC m=+0.049868368 container health_status 86aa93e887899e54d383b0fc17fb3c5637680d1d8e146a130a557c837f0260fe (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible)
Oct 09 16:03:03 compute-0 sudo[128209]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xrhtmhfumdxadnmikvxamrzyemfkmnec ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760025783.2695715-1114-155238078739880/AnsiballZ_stat.py'
Oct 09 16:03:03 compute-0 sudo[128209]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 09 16:03:03 compute-0 python3.9[128211]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/healthchecks/openstack_network_exporter/healthcheck follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 09 16:03:03 compute-0 sudo[128209]: pam_unix(sudo:session): session closed for user root
Oct 09 16:03:04 compute-0 sudo[128332]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gtriogiaaycpccmgmdhvfmeeygumiato ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760025783.2695715-1114-155238078739880/AnsiballZ_copy.py'
Oct 09 16:03:04 compute-0 sudo[128332]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 09 16:03:04 compute-0 python3.9[128334]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/healthchecks/openstack_network_exporter/ group=zuul mode=0700 owner=zuul setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1760025783.2695715-1114-155238078739880/.source _original_basename=healthcheck follow=False checksum=e380c11c36804bfc65a818f2960cfa663daacfe5 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None attributes=None
Oct 09 16:03:04 compute-0 sudo[128332]: pam_unix(sudo:session): session closed for user root
Oct 09 16:03:04 compute-0 sudo[128484]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-nrbqwshwopkmkkipgxocbpmfvxdajtjv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760025784.6447573-1148-213329481416845/AnsiballZ_container_config_data.py'
Oct 09 16:03:04 compute-0 sudo[128484]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 09 16:03:05 compute-0 python3.9[128486]: ansible-container_config_data Invoked with config_overrides={} config_path=/var/lib/openstack/config/telemetry config_pattern=openstack_network_exporter.json debug=False
Oct 09 16:03:05 compute-0 sudo[128484]: pam_unix(sudo:session): session closed for user root
Oct 09 16:03:05 compute-0 sudo[128636]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mzxfjaxgjqmkarluwkwpmwcdeteebqau ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760025785.3729916-1166-60759203079560/AnsiballZ_container_config_hash.py'
Oct 09 16:03:05 compute-0 sudo[128636]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 09 16:03:05 compute-0 python3.9[128638]: ansible-container_config_hash Invoked with check_mode=False config_vol_prefix=/var/lib/config-data
Oct 09 16:03:05 compute-0 sudo[128636]: pam_unix(sudo:session): session closed for user root
Oct 09 16:03:06 compute-0 sudo[128788]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kgcdhkmjevjbmmcmaouotbtjolhfsptk ; /usr/bin/python3 /home/zuul/.ansible/tmp/ansible-tmp-1760025786.167382-1186-113462021737151/AnsiballZ_edpm_container_manage.py'
Oct 09 16:03:06 compute-0 sudo[128788]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 09 16:03:06 compute-0 python3[128790]: ansible-edpm_container_manage Invoked with concurrency=1 config_dir=/var/lib/openstack/config/telemetry config_id=edpm config_overrides={} config_patterns=openstack_network_exporter.json log_base_path=/var/log/containers/stdouts debug=False
Oct 09 16:03:08 compute-0 podman[128862]: 2025-10-09 16:03:08.961949068 +0000 UTC m=+0.100275811 container health_status 0e966c7a283689daf23c5db5017f23f685ee836319240188a9f70ddd246563c5 (image=38.102.83.66:5001/podified-master-centos10/openstack-neutron-metadata-agent-ovn:watcher_latest, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': '38.102.83.66:5001/podified-master-centos10/openstack-neutron-metadata-agent-ovn:watcher_latest', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.vendor=CentOS, tcib_build_tag=watcher_latest, container_name=ovn_metadata_agent, io.buildah.version=1.41.4, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.schema-version=1.0, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251007, tcib_managed=true, managed_by=edpm_ansible)
Oct 09 16:03:09 compute-0 podman[128804]: 2025-10-09 16:03:09.090984191 +0000 UTC m=+2.345161023 image pull 186c5e97c6f6912533851a0044ea6da23938910e7bddfb4a6c0be9b48ab2a1d1 quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified
Oct 09 16:03:09 compute-0 podman[128920]: 2025-10-09 16:03:09.218259708 +0000 UTC m=+0.045275187 container create 64fa251112d01abb34ec1bb8e22019815ddcaaaa391ce5ec91517147ac5a9994 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., maintainer=Red Hat, Inc., url=https://catalog.redhat.com/en/search?searchType=containers, name=ubi9-minimal, vendor=Red Hat, Inc., com.redhat.component=ubi9-minimal-container, io.openshift.expose-services=, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.buildah.version=1.33.7, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, architecture=x86_64, version=9.6, vcs-type=git, release=1755695350, distribution-scope=public, io.openshift.tags=minimal rhel9, managed_by=edpm_ansible, config_id=edpm, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, build-date=2025-08-20T13:12:41, container_name=openstack_network_exporter)
Oct 09 16:03:09 compute-0 podman[128920]: 2025-10-09 16:03:09.193567945 +0000 UTC m=+0.020583454 image pull 186c5e97c6f6912533851a0044ea6da23938910e7bddfb4a6c0be9b48ab2a1d1 quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified
Oct 09 16:03:09 compute-0 python3[128790]: ansible-edpm_container_manage PODMAN-CONTAINER-DEBUG: podman create --name openstack_network_exporter --conmon-pidfile /run/openstack_network_exporter.pid --env OS_ENDPOINT_TYPE=internal --env OPENSTACK_NETWORK_EXPORTER_YAML=/etc/openstack_network_exporter/openstack_network_exporter.yaml --healthcheck-command /openstack/healthcheck openstack-netwo --label config_id=edpm --label container_name=openstack_network_exporter --label managed_by=edpm_ansible --label config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']} --log-driver journald --log-level info --network host --privileged=True --publish 9105:9105 --volume /var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z --volume /var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z --volume /var/run/openvswitch:/run/openvswitch:rw,z --volume /var/lib/openvswitch/ovn:/run/ovn:rw,z --volume /proc:/host/proc:ro --volume /var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified
Oct 09 16:03:09 compute-0 sudo[128788]: pam_unix(sudo:session): session closed for user root
Oct 09 16:03:09 compute-0 nova_compute[117331]: 2025-10-09 16:03:09.653 2 DEBUG oslo_service.periodic_task [None req-92a13bed-c081-4c7b-87f3-bafebefb79f2 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.12/site-packages/oslo_service/periodic_task.py:210
Oct 09 16:03:09 compute-0 nova_compute[117331]: 2025-10-09 16:03:09.654 2 DEBUG oslo_service.periodic_task [None req-92a13bed-c081-4c7b-87f3-bafebefb79f2 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.12/site-packages/oslo_service/periodic_task.py:210
Oct 09 16:03:09 compute-0 nova_compute[117331]: 2025-10-09 16:03:09.655 2 DEBUG oslo_service.periodic_task [None req-92a13bed-c081-4c7b-87f3-bafebefb79f2 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.12/site-packages/oslo_service/periodic_task.py:210
Oct 09 16:03:09 compute-0 nova_compute[117331]: 2025-10-09 16:03:09.655 2 DEBUG oslo_service.periodic_task [None req-92a13bed-c081-4c7b-87f3-bafebefb79f2 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.12/site-packages/oslo_service/periodic_task.py:210
Oct 09 16:03:09 compute-0 nova_compute[117331]: 2025-10-09 16:03:09.655 2 DEBUG oslo_service.periodic_task [None req-92a13bed-c081-4c7b-87f3-bafebefb79f2 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.12/site-packages/oslo_service/periodic_task.py:210
Oct 09 16:03:09 compute-0 nova_compute[117331]: 2025-10-09 16:03:09.655 2 DEBUG oslo_service.periodic_task [None req-92a13bed-c081-4c7b-87f3-bafebefb79f2 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.12/site-packages/oslo_service/periodic_task.py:210
Oct 09 16:03:09 compute-0 nova_compute[117331]: 2025-10-09 16:03:09.655 2 DEBUG oslo_service.periodic_task [None req-92a13bed-c081-4c7b-87f3-bafebefb79f2 - - - - - -] Running periodic task ComputeManager._sync_power_states run_periodic_tasks /usr/lib/python3.12/site-packages/oslo_service/periodic_task.py:210
Oct 09 16:03:09 compute-0 sudo[129108]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-poypmviczjbwcpsbwebhsgiigwftdfsy ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760025789.4817297-1202-86837289772986/AnsiballZ_stat.py'
Oct 09 16:03:09 compute-0 sudo[129108]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 09 16:03:09 compute-0 python3.9[129110]: ansible-ansible.builtin.stat Invoked with path=/etc/sysconfig/podman_drop_in follow=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Oct 09 16:03:09 compute-0 sudo[129108]: pam_unix(sudo:session): session closed for user root
Oct 09 16:03:10 compute-0 nova_compute[117331]: 2025-10-09 16:03:10.161 2 DEBUG oslo_service.periodic_task [None req-92a13bed-c081-4c7b-87f3-bafebefb79f2 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.12/site-packages/oslo_service/periodic_task.py:210
Oct 09 16:03:10 compute-0 nova_compute[117331]: 2025-10-09 16:03:10.162 2 DEBUG nova.compute.manager [None req-92a13bed-c081-4c7b-87f3-bafebefb79f2 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.12/site-packages/nova/compute/manager.py:11228
Oct 09 16:03:10 compute-0 nova_compute[117331]: 2025-10-09 16:03:10.162 2 DEBUG oslo_service.periodic_task [None req-92a13bed-c081-4c7b-87f3-bafebefb79f2 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.12/site-packages/oslo_service/periodic_task.py:210
Oct 09 16:03:10 compute-0 sudo[129279]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dctgilcxaxjvnnqsirhpzjacjtfydzar ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760025790.2713373-1220-250040612199746/AnsiballZ_file.py'
Oct 09 16:03:10 compute-0 sudo[129279]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 09 16:03:10 compute-0 podman[129236]: 2025-10-09 16:03:10.523404187 +0000 UTC m=+0.050148191 container health_status 8227728d23f374cb9528be6ddf01dff38d8c792912a17b0d137932a18390b820 (image=38.102.83.66:5001/podified-master-centos10/openstack-iscsid:watcher_latest, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': '38.102.83.66:5001/podified-master-centos10/openstack-iscsid:watcher_latest', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, config_id=iscsid, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 10 Base Image, tcib_build_tag=watcher_latest, container_name=iscsid, io.buildah.version=1.41.4, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, org.label-schema.build-date=20251007, org.label-schema.schema-version=1.0, tcib_managed=true)
Oct 09 16:03:10 compute-0 nova_compute[117331]: 2025-10-09 16:03:10.672 2 DEBUG oslo_concurrency.lockutils [None req-92a13bed-c081-4c7b-87f3-bafebefb79f2 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:405
Oct 09 16:03:10 compute-0 nova_compute[117331]: 2025-10-09 16:03:10.672 2 DEBUG oslo_concurrency.lockutils [None req-92a13bed-c081-4c7b-87f3-bafebefb79f2 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:410
Oct 09 16:03:10 compute-0 nova_compute[117331]: 2025-10-09 16:03:10.672 2 DEBUG oslo_concurrency.lockutils [None req-92a13bed-c081-4c7b-87f3-bafebefb79f2 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:424
Oct 09 16:03:10 compute-0 nova_compute[117331]: 2025-10-09 16:03:10.673 2 DEBUG nova.compute.resource_tracker [None req-92a13bed-c081-4c7b-87f3-bafebefb79f2 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.12/site-packages/nova/compute/resource_tracker.py:937
Oct 09 16:03:10 compute-0 python3.9[129284]: ansible-file Invoked with path=/etc/systemd/system/edpm_openstack_network_exporter.requires state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 09 16:03:10 compute-0 sudo[129279]: pam_unix(sudo:session): session closed for user root
Oct 09 16:03:10 compute-0 nova_compute[117331]: 2025-10-09 16:03:10.796 2 WARNING nova.virt.libvirt.driver [None req-92a13bed-c081-4c7b-87f3-bafebefb79f2 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Oct 09 16:03:10 compute-0 nova_compute[117331]: 2025-10-09 16:03:10.797 2 DEBUG oslo_concurrency.processutils [None req-92a13bed-c081-4c7b-87f3-bafebefb79f2 - - - - - -] Running cmd (subprocess): env LANG=C uptime execute /usr/lib/python3.12/site-packages/oslo_concurrency/processutils.py:349
Oct 09 16:03:10 compute-0 nova_compute[117331]: 2025-10-09 16:03:10.812 2 DEBUG oslo_concurrency.processutils [None req-92a13bed-c081-4c7b-87f3-bafebefb79f2 - - - - - -] CMD "env LANG=C uptime" returned: 0 in 0.015s execute /usr/lib/python3.12/site-packages/oslo_concurrency/processutils.py:372
Oct 09 16:03:10 compute-0 nova_compute[117331]: 2025-10-09 16:03:10.813 2 DEBUG nova.compute.resource_tracker [None req-92a13bed-c081-4c7b-87f3-bafebefb79f2 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=6344MB free_disk=73.27735137939453GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.12/site-packages/nova/compute/resource_tracker.py:1136
Oct 09 16:03:10 compute-0 nova_compute[117331]: 2025-10-09 16:03:10.813 2 DEBUG oslo_concurrency.lockutils [None req-92a13bed-c081-4c7b-87f3-bafebefb79f2 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:405
Oct 09 16:03:10 compute-0 nova_compute[117331]: 2025-10-09 16:03:10.813 2 DEBUG oslo_concurrency.lockutils [None req-92a13bed-c081-4c7b-87f3-bafebefb79f2 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:410
Oct 09 16:03:11 compute-0 sudo[129434]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rhctasretmonparpzdckczaegbadbiyw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760025790.8111596-1220-238549804191597/AnsiballZ_copy.py'
Oct 09 16:03:11 compute-0 sudo[129434]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 09 16:03:11 compute-0 python3.9[129436]: ansible-copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1760025790.8111596-1220-238549804191597/source dest=/etc/systemd/system/edpm_openstack_network_exporter.service mode=0644 owner=root group=root backup=False force=True remote_src=False follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 09 16:03:11 compute-0 sudo[129434]: pam_unix(sudo:session): session closed for user root
Oct 09 16:03:11 compute-0 sudo[129510]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-malabhdgplncecymnxcamxjpnxvapbhp ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760025790.8111596-1220-238549804191597/AnsiballZ_systemd.py'
Oct 09 16:03:11 compute-0 sudo[129510]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 09 16:03:11 compute-0 nova_compute[117331]: 2025-10-09 16:03:11.854 2 DEBUG nova.compute.resource_tracker [None req-92a13bed-c081-4c7b-87f3-bafebefb79f2 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.12/site-packages/nova/compute/resource_tracker.py:1159
Oct 09 16:03:11 compute-0 nova_compute[117331]: 2025-10-09 16:03:11.854 2 DEBUG nova.compute.resource_tracker [None req-92a13bed-c081-4c7b-87f3-bafebefb79f2 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=79GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] stats={'failed_builds': '0', 'uptime': ' 16:03:10 up 12 min,  0 user,  load average: 0.86, 0.72, 0.42\n'} _report_final_resource_view /usr/lib/python3.12/site-packages/nova/compute/resource_tracker.py:1168
Oct 09 16:03:11 compute-0 nova_compute[117331]: 2025-10-09 16:03:11.927 2 DEBUG nova.compute.provider_tree [None req-92a13bed-c081-4c7b-87f3-bafebefb79f2 - - - - - -] Inventory has not changed in ProviderTree for provider: 593051b8-2000-437f-a915-2616fc8b1671 update_inventory /usr/lib/python3.12/site-packages/nova/compute/provider_tree.py:180
Oct 09 16:03:12 compute-0 python3.9[129512]: ansible-systemd Invoked with daemon_reload=True daemon_reexec=False scope=system no_block=False name=None state=None enabled=None force=None masked=None
Oct 09 16:03:12 compute-0 systemd[1]: Reloading.
Oct 09 16:03:12 compute-0 systemd-rc-local-generator[129538]: /etc/rc.d/rc.local is not marked executable, skipping.
Oct 09 16:03:12 compute-0 systemd-sysv-generator[129541]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct 09 16:03:12 compute-0 sudo[129510]: pam_unix(sudo:session): session closed for user root
Oct 09 16:03:12 compute-0 nova_compute[117331]: 2025-10-09 16:03:12.435 2 DEBUG nova.scheduler.client.report [None req-92a13bed-c081-4c7b-87f3-bafebefb79f2 - - - - - -] Inventory has not changed for provider 593051b8-2000-437f-a915-2616fc8b1671 based on inventory data: {'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 79, 'reserved': 0, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.12/site-packages/nova/scheduler/client/report.py:958
Oct 09 16:03:12 compute-0 sudo[129620]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ubjkadwfycnqnsihhamczgnoeytuzvcj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760025790.8111596-1220-238549804191597/AnsiballZ_systemd.py'
Oct 09 16:03:12 compute-0 sudo[129620]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 09 16:03:12 compute-0 python3.9[129622]: ansible-systemd Invoked with state=restarted name=edpm_openstack_network_exporter.service enabled=True daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Oct 09 16:03:12 compute-0 systemd[1]: Reloading.
Oct 09 16:03:12 compute-0 nova_compute[117331]: 2025-10-09 16:03:12.942 2 DEBUG nova.compute.resource_tracker [None req-92a13bed-c081-4c7b-87f3-bafebefb79f2 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.12/site-packages/nova/compute/resource_tracker.py:1097
Oct 09 16:03:12 compute-0 nova_compute[117331]: 2025-10-09 16:03:12.942 2 DEBUG oslo_concurrency.lockutils [None req-92a13bed-c081-4c7b-87f3-bafebefb79f2 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 2.129s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:424
Oct 09 16:03:12 compute-0 nova_compute[117331]: 2025-10-09 16:03:12.943 2 DEBUG oslo_service.periodic_task [None req-92a13bed-c081-4c7b-87f3-bafebefb79f2 - - - - - -] Running periodic task ComputeManager._cleanup_running_deleted_instances run_periodic_tasks /usr/lib/python3.12/site-packages/oslo_service/periodic_task.py:210
Oct 09 16:03:12 compute-0 systemd-rc-local-generator[129650]: /etc/rc.d/rc.local is not marked executable, skipping.
Oct 09 16:03:12 compute-0 systemd-sysv-generator[129655]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct 09 16:03:13 compute-0 systemd[1]: Starting openstack_network_exporter container...
Oct 09 16:03:13 compute-0 systemd[1]: Started libcrun container.
Oct 09 16:03:13 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c7d87b1cdd56d183ce278c454c18facd87727fbcfe663347ac9f69d72e742dad/merged/run/ovn supports timestamps until 2038 (0x7fffffff)
Oct 09 16:03:13 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c7d87b1cdd56d183ce278c454c18facd87727fbcfe663347ac9f69d72e742dad/merged/etc/openstack_network_exporter/openstack_network_exporter.yaml supports timestamps until 2038 (0x7fffffff)
Oct 09 16:03:13 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c7d87b1cdd56d183ce278c454c18facd87727fbcfe663347ac9f69d72e742dad/merged/etc/openstack_network_exporter/tls supports timestamps until 2038 (0x7fffffff)
Oct 09 16:03:13 compute-0 systemd[1]: Started /usr/bin/podman healthcheck run 64fa251112d01abb34ec1bb8e22019815ddcaaaa391ce5ec91517147ac5a9994.
Oct 09 16:03:13 compute-0 podman[129663]: 2025-10-09 16:03:13.307861479 +0000 UTC m=+0.116370582 container init 64fa251112d01abb34ec1bb8e22019815ddcaaaa391ce5ec91517147ac5a9994 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, com.redhat.component=ubi9-minimal-container, managed_by=edpm_ansible, release=1755695350, vcs-type=git, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., version=9.6, build-date=2025-08-20T13:12:41, container_name=openstack_network_exporter, maintainer=Red Hat, Inc., name=ubi9-minimal, architecture=x86_64, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, url=https://catalog.redhat.com/en/search?searchType=containers, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, vendor=Red Hat, Inc., distribution-scope=public, io.buildah.version=1.33.7, io.openshift.tags=minimal rhel9, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_id=edpm, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., io.openshift.expose-services=, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal)
Oct 09 16:03:13 compute-0 openstack_network_exporter[129679]: INFO    16:03:13 main.go:48: registering *bridge.Collector
Oct 09 16:03:13 compute-0 openstack_network_exporter[129679]: INFO    16:03:13 main.go:48: registering *coverage.Collector
Oct 09 16:03:13 compute-0 openstack_network_exporter[129679]: INFO    16:03:13 main.go:48: registering *datapath.Collector
Oct 09 16:03:13 compute-0 openstack_network_exporter[129679]: INFO    16:03:13 main.go:48: registering *iface.Collector
Oct 09 16:03:13 compute-0 openstack_network_exporter[129679]: INFO    16:03:13 main.go:48: registering *memory.Collector
Oct 09 16:03:13 compute-0 openstack_network_exporter[129679]: INFO    16:03:13 main.go:48: registering *ovnnorthd.Collector
Oct 09 16:03:13 compute-0 openstack_network_exporter[129679]: INFO    16:03:13 main.go:48: registering *ovn.Collector
Oct 09 16:03:13 compute-0 openstack_network_exporter[129679]: INFO    16:03:13 main.go:48: registering *ovsdbserver.Collector
Oct 09 16:03:13 compute-0 openstack_network_exporter[129679]: INFO    16:03:13 main.go:48: registering *pmd_perf.Collector
Oct 09 16:03:13 compute-0 openstack_network_exporter[129679]: INFO    16:03:13 main.go:48: registering *pmd_rxq.Collector
Oct 09 16:03:13 compute-0 openstack_network_exporter[129679]: INFO    16:03:13 main.go:48: registering *vswitch.Collector
Oct 09 16:03:13 compute-0 openstack_network_exporter[129679]: NOTICE  16:03:13 main.go:76: listening on https://:9105/metrics
Oct 09 16:03:13 compute-0 podman[129663]: 2025-10-09 16:03:13.329753454 +0000 UTC m=+0.138262547 container start 64fa251112d01abb34ec1bb8e22019815ddcaaaa391ce5ec91517147ac5a9994 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, vendor=Red Hat, Inc., build-date=2025-08-20T13:12:41, io.buildah.version=1.33.7, url=https://catalog.redhat.com/en/search?searchType=containers, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., version=9.6, distribution-scope=public, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., release=1755695350, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, container_name=openstack_network_exporter, maintainer=Red Hat, Inc., config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, managed_by=edpm_ansible, vcs-type=git, config_id=edpm, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., com.redhat.component=ubi9-minimal-container, io.openshift.tags=minimal rhel9, name=ubi9-minimal, architecture=x86_64, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, io.openshift.expose-services=)
Oct 09 16:03:13 compute-0 podman[129663]: openstack_network_exporter
Oct 09 16:03:13 compute-0 systemd[1]: Started openstack_network_exporter container.
Oct 09 16:03:13 compute-0 sudo[129620]: pam_unix(sudo:session): session closed for user root
Oct 09 16:03:13 compute-0 podman[129689]: 2025-10-09 16:03:13.4087791 +0000 UTC m=+0.069458454 container health_status 64fa251112d01abb34ec1bb8e22019815ddcaaaa391ce5ec91517147ac5a9994 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, com.redhat.component=ubi9-minimal-container, managed_by=edpm_ansible, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, io.buildah.version=1.33.7, version=9.6, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, distribution-scope=public, build-date=2025-08-20T13:12:41, maintainer=Red Hat, Inc., url=https://catalog.redhat.com/en/search?searchType=containers, config_id=edpm, io.openshift.tags=minimal rhel9, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., release=1755695350, architecture=x86_64, io.openshift.expose-services=, vendor=Red Hat, Inc., container_name=openstack_network_exporter, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., name=ubi9-minimal, vcs-type=git)
Oct 09 16:03:13 compute-0 sudo[129861]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-trduteuwyuovckbochhfdhfpdmmudqfs ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760025793.7301724-1268-29538082481325/AnsiballZ_systemd.py'
Oct 09 16:03:13 compute-0 sudo[129861]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 09 16:03:14 compute-0 python3.9[129863]: ansible-ansible.builtin.systemd Invoked with name=edpm_openstack_network_exporter.service state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Oct 09 16:03:14 compute-0 systemd[1]: Stopping openstack_network_exporter container...
Oct 09 16:03:14 compute-0 systemd[1]: libpod-64fa251112d01abb34ec1bb8e22019815ddcaaaa391ce5ec91517147ac5a9994.scope: Deactivated successfully.
Oct 09 16:03:14 compute-0 podman[129867]: 2025-10-09 16:03:14.357803783 +0000 UTC m=+0.043744018 container died 64fa251112d01abb34ec1bb8e22019815ddcaaaa391ce5ec91517147ac5a9994 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, io.openshift.tags=minimal rhel9, distribution-scope=public, managed_by=edpm_ansible, version=9.6, maintainer=Red Hat, Inc., summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., com.redhat.component=ubi9-minimal-container, build-date=2025-08-20T13:12:41, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.buildah.version=1.33.7, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, io.openshift.expose-services=, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, url=https://catalog.redhat.com/en/search?searchType=containers, vcs-type=git, architecture=x86_64, name=ubi9-minimal, config_id=edpm, release=1755695350, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, container_name=openstack_network_exporter)
Oct 09 16:03:14 compute-0 systemd[1]: 64fa251112d01abb34ec1bb8e22019815ddcaaaa391ce5ec91517147ac5a9994-652cc6b3cb0946d7.timer: Deactivated successfully.
Oct 09 16:03:14 compute-0 systemd[1]: Stopped /usr/bin/podman healthcheck run 64fa251112d01abb34ec1bb8e22019815ddcaaaa391ce5ec91517147ac5a9994.
Oct 09 16:03:14 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-64fa251112d01abb34ec1bb8e22019815ddcaaaa391ce5ec91517147ac5a9994-userdata-shm.mount: Deactivated successfully.
Oct 09 16:03:14 compute-0 systemd[1]: var-lib-containers-storage-overlay-c7d87b1cdd56d183ce278c454c18facd87727fbcfe663347ac9f69d72e742dad-merged.mount: Deactivated successfully.
Oct 09 16:03:15 compute-0 podman[129867]: 2025-10-09 16:03:15.039700305 +0000 UTC m=+0.725640540 container cleanup 64fa251112d01abb34ec1bb8e22019815ddcaaaa391ce5ec91517147ac5a9994 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, container_name=openstack_network_exporter, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vcs-type=git, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.buildah.version=1.33.7, io.openshift.expose-services=, version=9.6, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., vendor=Red Hat, Inc., io.openshift.tags=minimal rhel9, config_id=edpm, com.redhat.component=ubi9-minimal-container, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, build-date=2025-08-20T13:12:41, maintainer=Red Hat, Inc., name=ubi9-minimal, release=1755695350, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, url=https://catalog.redhat.com/en/search?searchType=containers, distribution-scope=public, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., managed_by=edpm_ansible, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']})
Oct 09 16:03:15 compute-0 podman[129867]: openstack_network_exporter
Oct 09 16:03:15 compute-0 systemd[1]: edpm_openstack_network_exporter.service: Main process exited, code=exited, status=2/INVALIDARGUMENT
Oct 09 16:03:15 compute-0 podman[129899]: openstack_network_exporter
Oct 09 16:03:15 compute-0 systemd[1]: edpm_openstack_network_exporter.service: Failed with result 'exit-code'.
Oct 09 16:03:15 compute-0 systemd[1]: Stopped openstack_network_exporter container.
Oct 09 16:03:15 compute-0 systemd[1]: Starting openstack_network_exporter container...
Oct 09 16:03:15 compute-0 systemd[1]: Started libcrun container.
Oct 09 16:03:15 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c7d87b1cdd56d183ce278c454c18facd87727fbcfe663347ac9f69d72e742dad/merged/run/ovn supports timestamps until 2038 (0x7fffffff)
Oct 09 16:03:15 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c7d87b1cdd56d183ce278c454c18facd87727fbcfe663347ac9f69d72e742dad/merged/etc/openstack_network_exporter/openstack_network_exporter.yaml supports timestamps until 2038 (0x7fffffff)
Oct 09 16:03:15 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c7d87b1cdd56d183ce278c454c18facd87727fbcfe663347ac9f69d72e742dad/merged/etc/openstack_network_exporter/tls supports timestamps until 2038 (0x7fffffff)
Oct 09 16:03:15 compute-0 systemd[1]: Started /usr/bin/podman healthcheck run 64fa251112d01abb34ec1bb8e22019815ddcaaaa391ce5ec91517147ac5a9994.
Oct 09 16:03:15 compute-0 podman[129910]: 2025-10-09 16:03:15.315928985 +0000 UTC m=+0.172157561 container init 64fa251112d01abb34ec1bb8e22019815ddcaaaa391ce5ec91517147ac5a9994 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, io.buildah.version=1.33.7, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., version=9.6, name=ubi9-minimal, config_id=edpm, build-date=2025-08-20T13:12:41, managed_by=edpm_ansible, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., distribution-scope=public, release=1755695350, com.redhat.component=ubi9-minimal-container, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, io.openshift.tags=minimal rhel9, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, container_name=openstack_network_exporter, maintainer=Red Hat, Inc., url=https://catalog.redhat.com/en/search?searchType=containers, vendor=Red Hat, Inc., description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vcs-type=git, architecture=x86_64, io.openshift.expose-services=, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI)
Oct 09 16:03:15 compute-0 openstack_network_exporter[129925]: INFO    16:03:15 main.go:48: registering *bridge.Collector
Oct 09 16:03:15 compute-0 openstack_network_exporter[129925]: INFO    16:03:15 main.go:48: registering *coverage.Collector
Oct 09 16:03:15 compute-0 openstack_network_exporter[129925]: INFO    16:03:15 main.go:48: registering *datapath.Collector
Oct 09 16:03:15 compute-0 openstack_network_exporter[129925]: INFO    16:03:15 main.go:48: registering *iface.Collector
Oct 09 16:03:15 compute-0 openstack_network_exporter[129925]: INFO    16:03:15 main.go:48: registering *memory.Collector
Oct 09 16:03:15 compute-0 openstack_network_exporter[129925]: INFO    16:03:15 main.go:48: registering *ovnnorthd.Collector
Oct 09 16:03:15 compute-0 openstack_network_exporter[129925]: INFO    16:03:15 main.go:48: registering *ovn.Collector
Oct 09 16:03:15 compute-0 openstack_network_exporter[129925]: INFO    16:03:15 main.go:48: registering *ovsdbserver.Collector
Oct 09 16:03:15 compute-0 openstack_network_exporter[129925]: INFO    16:03:15 main.go:48: registering *pmd_perf.Collector
Oct 09 16:03:15 compute-0 openstack_network_exporter[129925]: INFO    16:03:15 main.go:48: registering *pmd_rxq.Collector
Oct 09 16:03:15 compute-0 openstack_network_exporter[129925]: INFO    16:03:15 main.go:48: registering *vswitch.Collector
Oct 09 16:03:15 compute-0 openstack_network_exporter[129925]: NOTICE  16:03:15 main.go:76: listening on https://:9105/metrics
Oct 09 16:03:15 compute-0 podman[129910]: 2025-10-09 16:03:15.345760631 +0000 UTC m=+0.201989197 container start 64fa251112d01abb34ec1bb8e22019815ddcaaaa391ce5ec91517147ac5a9994 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, managed_by=edpm_ansible, io.openshift.expose-services=, release=1755695350, config_id=edpm, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., distribution-scope=public, io.openshift.tags=minimal rhel9, url=https://catalog.redhat.com/en/search?searchType=containers, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.buildah.version=1.33.7, version=9.6, architecture=x86_64, container_name=openstack_network_exporter, name=ubi9-minimal, com.redhat.component=ubi9-minimal-container, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, vcs-type=git, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., maintainer=Red Hat, Inc., build-date=2025-08-20T13:12:41)
Oct 09 16:03:15 compute-0 podman[129910]: openstack_network_exporter
Oct 09 16:03:15 compute-0 systemd[1]: Started openstack_network_exporter container.
Oct 09 16:03:15 compute-0 sudo[129861]: pam_unix(sudo:session): session closed for user root
Oct 09 16:03:15 compute-0 podman[129935]: 2025-10-09 16:03:15.410266837 +0000 UTC m=+0.053241490 container health_status 64fa251112d01abb34ec1bb8e22019815ddcaaaa391ce5ec91517147ac5a9994 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, vcs-type=git, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., release=1755695350, com.redhat.component=ubi9-minimal-container, container_name=openstack_network_exporter, io.buildah.version=1.33.7, url=https://catalog.redhat.com/en/search?searchType=containers, vendor=Red Hat, Inc., config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, distribution-scope=public, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, build-date=2025-08-20T13:12:41, config_id=edpm, io.openshift.tags=minimal rhel9, name=ubi9-minimal, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., managed_by=edpm_ansible, maintainer=Red Hat, Inc., version=9.6, io.openshift.expose-services=)
Oct 09 16:03:15 compute-0 sudo[130106]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-nxehvsvymkkbyaylhhojerhstmwelpcv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760025795.7333179-1284-250483276916441/AnsiballZ_find.py'
Oct 09 16:03:15 compute-0 sudo[130106]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 09 16:03:16 compute-0 python3.9[130108]: ansible-ansible.builtin.find Invoked with file_type=directory paths=['/var/lib/openstack/healthchecks/'] patterns=[] read_whole_file=False age_stamp=mtime recurse=False hidden=False follow=False get_checksum=False checksum_algorithm=sha1 use_regex=False exact_mode=True excludes=None contains=None age=None size=None depth=None mode=None encoding=None limit=None
Oct 09 16:03:16 compute-0 sudo[130106]: pam_unix(sudo:session): session closed for user root
Oct 09 16:03:16 compute-0 podman[130185]: 2025-10-09 16:03:16.834061131 +0000 UTC m=+0.070514987 container health_status d615e4f24ff62fbb14be4a63e6da568ec5b22309773c1c66103b6b4de64a8a2f (image=38.102.83.66:5001/podified-master-centos10/openstack-ovn-controller:watcher_latest, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_controller, managed_by=edpm_ansible, org.label-schema.build-date=20251007, org.label-schema.license=GPLv2, tcib_build_tag=watcher_latest, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': '38.102.83.66:5001/podified-master-centos10/openstack-ovn-controller:watcher_latest', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.4, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.schema-version=1.0, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS)
Oct 09 16:03:17 compute-0 sudo[130285]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-yieujoixkzqvfsrrsukzetyvfxvhublr ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760025796.6028364-1303-154937308562836/AnsiballZ_podman_container_info.py'
Oct 09 16:03:17 compute-0 sudo[130285]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 09 16:03:17 compute-0 python3.9[130287]: ansible-containers.podman.podman_container_info Invoked with name=['ovn_controller'] executable=podman
Oct 09 16:03:17 compute-0 sudo[130285]: pam_unix(sudo:session): session closed for user root
Oct 09 16:03:17 compute-0 sudo[130450]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-skggpipwfeuwwteonfjayobrbmxvulnf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760025797.4040496-1311-195114816860950/AnsiballZ_podman_container_exec.py'
Oct 09 16:03:17 compute-0 sudo[130450]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 09 16:03:18 compute-0 python3.9[130452]: ansible-containers.podman.podman_container_exec Invoked with command=id -u name=ovn_controller detach=False executable=podman privileged=False tty=False argv=None env=None user=None workdir=None
Oct 09 16:03:18 compute-0 systemd[1]: Started libpod-conmon-d615e4f24ff62fbb14be4a63e6da568ec5b22309773c1c66103b6b4de64a8a2f.scope.
Oct 09 16:03:18 compute-0 podman[130453]: 2025-10-09 16:03:18.196265279 +0000 UTC m=+0.091778982 container exec d615e4f24ff62fbb14be4a63e6da568ec5b22309773c1c66103b6b4de64a8a2f (image=38.102.83.66:5001/podified-master-centos10/openstack-ovn-controller:watcher_latest, name=ovn_controller, config_id=ovn_controller, container_name=ovn_controller, org.label-schema.name=CentOS Stream 10 Base Image, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': '38.102.83.66:5001/podified-master-centos10/openstack-ovn-controller:watcher_latest', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.build-date=20251007, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, managed_by=edpm_ansible, tcib_build_tag=watcher_latest, io.buildah.version=1.41.4, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS)
Oct 09 16:03:18 compute-0 podman[130453]: 2025-10-09 16:03:18.233911163 +0000 UTC m=+0.129424836 container exec_died d615e4f24ff62fbb14be4a63e6da568ec5b22309773c1c66103b6b4de64a8a2f (image=38.102.83.66:5001/podified-master-centos10/openstack-ovn-controller:watcher_latest, name=ovn_controller, config_id=ovn_controller, tcib_build_tag=watcher_latest, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': '38.102.83.66:5001/podified-master-centos10/openstack-ovn-controller:watcher_latest', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.4, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251007, container_name=ovn_controller, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 10 Base Image, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Oct 09 16:03:18 compute-0 systemd[1]: libpod-conmon-d615e4f24ff62fbb14be4a63e6da568ec5b22309773c1c66103b6b4de64a8a2f.scope: Deactivated successfully.
Oct 09 16:03:18 compute-0 sudo[130450]: pam_unix(sudo:session): session closed for user root
Oct 09 16:03:18 compute-0 sudo[130634]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jmraiuszwaprscrvowuymjqixrlraetm ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760025798.4273806-1319-106759449679913/AnsiballZ_podman_container_exec.py'
Oct 09 16:03:18 compute-0 sudo[130634]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 09 16:03:18 compute-0 python3.9[130636]: ansible-containers.podman.podman_container_exec Invoked with command=id -g name=ovn_controller detach=False executable=podman privileged=False tty=False argv=None env=None user=None workdir=None
Oct 09 16:03:18 compute-0 systemd[1]: Started libpod-conmon-d615e4f24ff62fbb14be4a63e6da568ec5b22309773c1c66103b6b4de64a8a2f.scope.
Oct 09 16:03:18 compute-0 podman[130637]: 2025-10-09 16:03:18.978562303 +0000 UTC m=+0.084902374 container exec d615e4f24ff62fbb14be4a63e6da568ec5b22309773c1c66103b6b4de64a8a2f (image=38.102.83.66:5001/podified-master-centos10/openstack-ovn-controller:watcher_latest, name=ovn_controller, config_id=ovn_controller, container_name=ovn_controller, io.buildah.version=1.41.4, tcib_build_tag=watcher_latest, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.build-date=20251007, org.label-schema.license=GPLv2, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': '38.102.83.66:5001/podified-master-centos10/openstack-ovn-controller:watcher_latest', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']})
Oct 09 16:03:19 compute-0 podman[130637]: 2025-10-09 16:03:19.013975156 +0000 UTC m=+0.120315217 container exec_died d615e4f24ff62fbb14be4a63e6da568ec5b22309773c1c66103b6b4de64a8a2f (image=38.102.83.66:5001/podified-master-centos10/openstack-ovn-controller:watcher_latest, name=ovn_controller, tcib_managed=true, org.label-schema.build-date=20251007, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': '38.102.83.66:5001/podified-master-centos10/openstack-ovn-controller:watcher_latest', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, io.buildah.version=1.41.4, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=watcher_latest)
Oct 09 16:03:19 compute-0 systemd[1]: libpod-conmon-d615e4f24ff62fbb14be4a63e6da568ec5b22309773c1c66103b6b4de64a8a2f.scope: Deactivated successfully.
Oct 09 16:03:19 compute-0 sudo[130634]: pam_unix(sudo:session): session closed for user root
Oct 09 16:03:19 compute-0 sudo[130819]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wwwgsqvtjpoqdtjitmvycceivscvdyxu ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760025799.1938043-1327-178778375082535/AnsiballZ_file.py'
Oct 09 16:03:19 compute-0 sudo[130819]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 09 16:03:19 compute-0 python3.9[130821]: ansible-ansible.builtin.file Invoked with group=0 mode=0700 owner=0 path=/var/lib/openstack/healthchecks/ovn_controller recurse=True state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 09 16:03:19 compute-0 sudo[130819]: pam_unix(sudo:session): session closed for user root
Oct 09 16:03:20 compute-0 sudo[130971]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zppuhyxprdxmzikdfocpwnqxzbwowolh ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760025799.9248-1336-161231416953299/AnsiballZ_podman_container_info.py'
Oct 09 16:03:20 compute-0 sudo[130971]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 09 16:03:20 compute-0 python3.9[130973]: ansible-containers.podman.podman_container_info Invoked with name=['ovn_metadata_agent'] executable=podman
Oct 09 16:03:20 compute-0 sudo[130971]: pam_unix(sudo:session): session closed for user root
Oct 09 16:03:20 compute-0 sudo[131137]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-pqnqfzwdchbvbdiafaevqszrjlnfqsym ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760025800.651855-1344-15901338507759/AnsiballZ_podman_container_exec.py'
Oct 09 16:03:20 compute-0 sudo[131137]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 09 16:03:21 compute-0 python3.9[131139]: ansible-containers.podman.podman_container_exec Invoked with command=id -u name=ovn_metadata_agent detach=False executable=podman privileged=False tty=False argv=None env=None user=None workdir=None
Oct 09 16:03:21 compute-0 systemd[1]: Started libpod-conmon-0e966c7a283689daf23c5db5017f23f685ee836319240188a9f70ddd246563c5.scope.
Oct 09 16:03:21 compute-0 podman[131140]: 2025-10-09 16:03:21.14616448 +0000 UTC m=+0.064185577 container exec 0e966c7a283689daf23c5db5017f23f685ee836319240188a9f70ddd246563c5 (image=38.102.83.66:5001/podified-master-centos10/openstack-neutron-metadata-agent-ovn:watcher_latest, name=ovn_metadata_agent, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=watcher_latest, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': '38.102.83.66:5001/podified-master-centos10/openstack-neutron-metadata-agent-ovn:watcher_latest', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, io.buildah.version=1.41.4, config_id=ovn_metadata_agent, org.label-schema.build-date=20251007, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible)
Oct 09 16:03:21 compute-0 podman[131140]: 2025-10-09 16:03:21.175402696 +0000 UTC m=+0.093423793 container exec_died 0e966c7a283689daf23c5db5017f23f685ee836319240188a9f70ddd246563c5 (image=38.102.83.66:5001/podified-master-centos10/openstack-neutron-metadata-agent-ovn:watcher_latest, name=ovn_metadata_agent, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': '38.102.83.66:5001/podified-master-centos10/openstack-neutron-metadata-agent-ovn:watcher_latest', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.build-date=20251007, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 10 Base Image, tcib_build_tag=watcher_latest, container_name=ovn_metadata_agent, io.buildah.version=1.41.4, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent, org.label-schema.schema-version=1.0, tcib_managed=true)
Oct 09 16:03:21 compute-0 systemd[1]: libpod-conmon-0e966c7a283689daf23c5db5017f23f685ee836319240188a9f70ddd246563c5.scope: Deactivated successfully.
Oct 09 16:03:21 compute-0 sudo[131137]: pam_unix(sudo:session): session closed for user root
Oct 09 16:03:21 compute-0 sudo[131321]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-grftlwuemjyjvklmvhetvjcqhpmkwmru ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760025801.3347733-1352-243428428675370/AnsiballZ_podman_container_exec.py'
Oct 09 16:03:21 compute-0 sudo[131321]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 09 16:03:21 compute-0 python3.9[131323]: ansible-containers.podman.podman_container_exec Invoked with command=id -g name=ovn_metadata_agent detach=False executable=podman privileged=False tty=False argv=None env=None user=None workdir=None
Oct 09 16:03:21 compute-0 systemd[1]: Started libpod-conmon-0e966c7a283689daf23c5db5017f23f685ee836319240188a9f70ddd246563c5.scope.
Oct 09 16:03:21 compute-0 podman[131324]: 2025-10-09 16:03:21.84607114 +0000 UTC m=+0.060966705 container exec 0e966c7a283689daf23c5db5017f23f685ee836319240188a9f70ddd246563c5 (image=38.102.83.66:5001/podified-master-centos10/openstack-neutron-metadata-agent-ovn:watcher_latest, name=ovn_metadata_agent, io.buildah.version=1.41.4, org.label-schema.schema-version=1.0, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': '38.102.83.66:5001/podified-master-centos10/openstack-neutron-metadata-agent-ovn:watcher_latest', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251007, tcib_build_tag=watcher_latest, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, container_name=ovn_metadata_agent, org.label-schema.name=CentOS Stream 10 Base Image, config_id=ovn_metadata_agent)
Oct 09 16:03:21 compute-0 podman[131343]: 2025-10-09 16:03:21.906525258 +0000 UTC m=+0.049306115 container exec_died 0e966c7a283689daf23c5db5017f23f685ee836319240188a9f70ddd246563c5 (image=38.102.83.66:5001/podified-master-centos10/openstack-neutron-metadata-agent-ovn:watcher_latest, name=ovn_metadata_agent, io.buildah.version=1.41.4, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': '38.102.83.66:5001/podified-master-centos10/openstack-neutron-metadata-agent-ovn:watcher_latest', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, managed_by=edpm_ansible, tcib_build_tag=watcher_latest, org.label-schema.build-date=20251007, org.label-schema.name=CentOS Stream 10 Base Image, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent)
Oct 09 16:03:21 compute-0 podman[131324]: 2025-10-09 16:03:21.911855877 +0000 UTC m=+0.126751442 container exec_died 0e966c7a283689daf23c5db5017f23f685ee836319240188a9f70ddd246563c5 (image=38.102.83.66:5001/podified-master-centos10/openstack-neutron-metadata-agent-ovn:watcher_latest, name=ovn_metadata_agent, tcib_managed=true, io.buildah.version=1.41.4, managed_by=edpm_ansible, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': '38.102.83.66:5001/podified-master-centos10/openstack-neutron-metadata-agent-ovn:watcher_latest', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251007, config_id=ovn_metadata_agent, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 10 Base Image, tcib_build_tag=watcher_latest)
Oct 09 16:03:21 compute-0 systemd[1]: libpod-conmon-0e966c7a283689daf23c5db5017f23f685ee836319240188a9f70ddd246563c5.scope: Deactivated successfully.
Oct 09 16:03:21 compute-0 sudo[131321]: pam_unix(sudo:session): session closed for user root
Oct 09 16:03:22 compute-0 sudo[131505]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vwjnalzvjtvkddzbkbxmgmgbvahuhbev ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760025802.0940678-1360-192987701797474/AnsiballZ_file.py'
Oct 09 16:03:22 compute-0 sudo[131505]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 09 16:03:22 compute-0 python3.9[131507]: ansible-ansible.builtin.file Invoked with group=0 mode=0700 owner=0 path=/var/lib/openstack/healthchecks/ovn_metadata_agent recurse=True state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 09 16:03:22 compute-0 sudo[131505]: pam_unix(sudo:session): session closed for user root
Oct 09 16:03:23 compute-0 sudo[131657]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-atqjgcmpltsbtwifkkgtmmshbzbzagtg ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760025802.805808-1369-247142738346271/AnsiballZ_podman_container_info.py'
Oct 09 16:03:23 compute-0 sudo[131657]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 09 16:03:23 compute-0 python3.9[131659]: ansible-containers.podman.podman_container_info Invoked with name=['iscsid'] executable=podman
Oct 09 16:03:23 compute-0 sudo[131657]: pam_unix(sudo:session): session closed for user root
Oct 09 16:03:23 compute-0 sudo[131822]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-icdtftoousvpawpqkjxinicghhvvkdgk ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760025803.4845657-1377-96957567152201/AnsiballZ_podman_container_exec.py'
Oct 09 16:03:23 compute-0 sudo[131822]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 09 16:03:23 compute-0 python3.9[131824]: ansible-containers.podman.podman_container_exec Invoked with command=id -u name=iscsid detach=False executable=podman privileged=False tty=False argv=None env=None user=None workdir=None
Oct 09 16:03:24 compute-0 systemd[1]: Started libpod-conmon-8227728d23f374cb9528be6ddf01dff38d8c792912a17b0d137932a18390b820.scope.
Oct 09 16:03:24 compute-0 podman[131825]: 2025-10-09 16:03:24.025677376 +0000 UTC m=+0.100325303 container exec 8227728d23f374cb9528be6ddf01dff38d8c792912a17b0d137932a18390b820 (image=38.102.83.66:5001/podified-master-centos10/openstack-iscsid:watcher_latest, name=iscsid, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251007, org.label-schema.schema-version=1.0, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, tcib_build_tag=watcher_latest, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': '38.102.83.66:5001/podified-master-centos10/openstack-iscsid:watcher_latest', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 10 Base Image, container_name=iscsid, config_id=iscsid, io.buildah.version=1.41.4)
Oct 09 16:03:24 compute-0 podman[131844]: 2025-10-09 16:03:24.087548099 +0000 UTC m=+0.051248827 container exec_died 8227728d23f374cb9528be6ddf01dff38d8c792912a17b0d137932a18390b820 (image=38.102.83.66:5001/podified-master-centos10/openstack-iscsid:watcher_latest, name=iscsid, org.label-schema.build-date=20251007, io.buildah.version=1.41.4, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_managed=true, config_id=iscsid, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 10 Base Image, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, tcib_build_tag=watcher_latest, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': '38.102.83.66:5001/podified-master-centos10/openstack-iscsid:watcher_latest', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, container_name=iscsid)
Oct 09 16:03:24 compute-0 podman[131825]: 2025-10-09 16:03:24.139079375 +0000 UTC m=+0.213727262 container exec_died 8227728d23f374cb9528be6ddf01dff38d8c792912a17b0d137932a18390b820 (image=38.102.83.66:5001/podified-master-centos10/openstack-iscsid:watcher_latest, name=iscsid, tcib_build_tag=watcher_latest, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': '38.102.83.66:5001/podified-master-centos10/openstack-iscsid:watcher_latest', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, config_id=iscsid, io.buildah.version=1.41.4, org.label-schema.license=GPLv2, container_name=iscsid, org.label-schema.vendor=CentOS, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251007, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.schema-version=1.0, managed_by=edpm_ansible)
Oct 09 16:03:24 compute-0 systemd[1]: libpod-conmon-8227728d23f374cb9528be6ddf01dff38d8c792912a17b0d137932a18390b820.scope: Deactivated successfully.
Oct 09 16:03:24 compute-0 sudo[131822]: pam_unix(sudo:session): session closed for user root
Oct 09 16:03:24 compute-0 sudo[132006]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vjczodydlqtnmnurnorhrkyngshdcdif ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760025804.3135216-1385-80311755963008/AnsiballZ_podman_container_exec.py'
Oct 09 16:03:24 compute-0 sudo[132006]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 09 16:03:24 compute-0 python3.9[132008]: ansible-containers.podman.podman_container_exec Invoked with command=id -g name=iscsid detach=False executable=podman privileged=False tty=False argv=None env=None user=None workdir=None
Oct 09 16:03:24 compute-0 systemd[1]: Started libpod-conmon-8227728d23f374cb9528be6ddf01dff38d8c792912a17b0d137932a18390b820.scope.
Oct 09 16:03:24 compute-0 podman[132009]: 2025-10-09 16:03:24.850914433 +0000 UTC m=+0.078741119 container exec 8227728d23f374cb9528be6ddf01dff38d8c792912a17b0d137932a18390b820 (image=38.102.83.66:5001/podified-master-centos10/openstack-iscsid:watcher_latest, name=iscsid, tcib_managed=true, io.buildah.version=1.41.4, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251007, org.label-schema.name=CentOS Stream 10 Base Image, tcib_build_tag=watcher_latest, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': '38.102.83.66:5001/podified-master-centos10/openstack-iscsid:watcher_latest', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, container_name=iscsid, config_id=iscsid, org.label-schema.schema-version=1.0)
Oct 09 16:03:24 compute-0 podman[132009]: 2025-10-09 16:03:24.885104047 +0000 UTC m=+0.112930693 container exec_died 8227728d23f374cb9528be6ddf01dff38d8c792912a17b0d137932a18390b820 (image=38.102.83.66:5001/podified-master-centos10/openstack-iscsid:watcher_latest, name=iscsid, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_id=iscsid, org.label-schema.build-date=20251007, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, container_name=iscsid, io.buildah.version=1.41.4, tcib_build_tag=watcher_latest, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': '38.102.83.66:5001/podified-master-centos10/openstack-iscsid:watcher_latest', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team)
Oct 09 16:03:24 compute-0 systemd[1]: libpod-conmon-8227728d23f374cb9528be6ddf01dff38d8c792912a17b0d137932a18390b820.scope: Deactivated successfully.
Oct 09 16:03:24 compute-0 sudo[132006]: pam_unix(sudo:session): session closed for user root
Oct 09 16:03:25 compute-0 sudo[132191]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-sxkcjoitfnlsskbibknzndqvruldzbae ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760025805.0673704-1393-102783794849150/AnsiballZ_file.py'
Oct 09 16:03:25 compute-0 sudo[132191]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 09 16:03:25 compute-0 python3.9[132193]: ansible-ansible.builtin.file Invoked with group=0 mode=0700 owner=0 path=/var/lib/openstack/healthchecks/iscsid recurse=True state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 09 16:03:25 compute-0 sudo[132191]: pam_unix(sudo:session): session closed for user root
Oct 09 16:03:25 compute-0 sudo[132343]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-cvipxsaxxqmtykbvstzdgrvmcvpzfqpq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760025805.7068038-1402-14031416884509/AnsiballZ_podman_container_info.py'
Oct 09 16:03:25 compute-0 sudo[132343]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 09 16:03:26 compute-0 python3.9[132345]: ansible-containers.podman.podman_container_info Invoked with name=['multipathd'] executable=podman
Oct 09 16:03:26 compute-0 sudo[132343]: pam_unix(sudo:session): session closed for user root
Oct 09 16:03:26 compute-0 sudo[132508]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qjgwtwwniwxhfmbpeinzpnzdvkuhcysz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760025806.428572-1410-95494955650436/AnsiballZ_podman_container_exec.py'
Oct 09 16:03:26 compute-0 sudo[132508]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 09 16:03:26 compute-0 python3.9[132510]: ansible-containers.podman.podman_container_exec Invoked with command=id -u name=multipathd detach=False executable=podman privileged=False tty=False argv=None env=None user=None workdir=None
Oct 09 16:03:26 compute-0 systemd[1]: Started libpod-conmon-270830b5167cafbab735d8118980f26987e673cd0fa464dd2202394830bcca66.scope.
Oct 09 16:03:26 compute-0 podman[132511]: 2025-10-09 16:03:26.955776678 +0000 UTC m=+0.068066709 container exec 270830b5167cafbab735d8118980f26987e673cd0fa464dd2202394830bcca66 (image=38.102.83.66:5001/podified-master-centos10/openstack-multipathd:watcher_latest, name=multipathd, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.41.4, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 10 Base Image, tcib_build_tag=watcher_latest, container_name=multipathd, org.label-schema.license=GPLv2, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': '38.102.83.66:5001/podified-master-centos10/openstack-multipathd:watcher_latest', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, org.label-schema.build-date=20251007, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0)
Oct 09 16:03:26 compute-0 podman[132511]: 2025-10-09 16:03:26.984696566 +0000 UTC m=+0.096986577 container exec_died 270830b5167cafbab735d8118980f26987e673cd0fa464dd2202394830bcca66 (image=38.102.83.66:5001/podified-master-centos10/openstack-multipathd:watcher_latest, name=multipathd, config_id=multipathd, io.buildah.version=1.41.4, org.label-schema.build-date=20251007, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': '38.102.83.66:5001/podified-master-centos10/openstack-multipathd:watcher_latest', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, managed_by=edpm_ansible, tcib_build_tag=watcher_latest, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true)
Oct 09 16:03:27 compute-0 systemd[1]: libpod-conmon-270830b5167cafbab735d8118980f26987e673cd0fa464dd2202394830bcca66.scope: Deactivated successfully.
Oct 09 16:03:27 compute-0 sudo[132508]: pam_unix(sudo:session): session closed for user root
Oct 09 16:03:27 compute-0 sudo[132692]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hwwgbeffcywgtucbhwexczkwuagljshq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760025807.1754725-1418-233830450113841/AnsiballZ_podman_container_exec.py'
Oct 09 16:03:27 compute-0 sudo[132692]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 09 16:03:27 compute-0 python3.9[132694]: ansible-containers.podman.podman_container_exec Invoked with command=id -g name=multipathd detach=False executable=podman privileged=False tty=False argv=None env=None user=None workdir=None
Oct 09 16:03:27 compute-0 systemd[1]: Started libpod-conmon-270830b5167cafbab735d8118980f26987e673cd0fa464dd2202394830bcca66.scope.
Oct 09 16:03:27 compute-0 podman[132695]: 2025-10-09 16:03:27.708317069 +0000 UTC m=+0.065375495 container exec 270830b5167cafbab735d8118980f26987e673cd0fa464dd2202394830bcca66 (image=38.102.83.66:5001/podified-master-centos10/openstack-multipathd:watcher_latest, name=multipathd, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251007, org.label-schema.schema-version=1.0, tcib_build_tag=watcher_latest, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': '38.102.83.66:5001/podified-master-centos10/openstack-multipathd:watcher_latest', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 10 Base Image, tcib_managed=true, container_name=multipathd, org.label-schema.vendor=CentOS, io.buildah.version=1.41.4, org.label-schema.license=GPLv2, config_id=multipathd)
Oct 09 16:03:27 compute-0 podman[132695]: 2025-10-09 16:03:27.75468879 +0000 UTC m=+0.111747216 container exec_died 270830b5167cafbab735d8118980f26987e673cd0fa464dd2202394830bcca66 (image=38.102.83.66:5001/podified-master-centos10/openstack-multipathd:watcher_latest, name=multipathd, org.label-schema.license=GPLv2, config_id=multipathd, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251007, tcib_build_tag=watcher_latest, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': '38.102.83.66:5001/podified-master-centos10/openstack-multipathd:watcher_latest', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 10 Base Image, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.4)
Oct 09 16:03:27 compute-0 podman[132712]: 2025-10-09 16:03:27.768214159 +0000 UTC m=+0.055693488 container health_status 270830b5167cafbab735d8118980f26987e673cd0fa464dd2202394830bcca66 (image=38.102.83.66:5001/podified-master-centos10/openstack-multipathd:watcher_latest, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, io.buildah.version=1.41.4, org.label-schema.build-date=20251007, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, config_id=multipathd, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=watcher_latest, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': '38.102.83.66:5001/podified-master-centos10/openstack-multipathd:watcher_latest', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']})
Oct 09 16:03:27 compute-0 systemd[1]: libpod-conmon-270830b5167cafbab735d8118980f26987e673cd0fa464dd2202394830bcca66.scope: Deactivated successfully.
Oct 09 16:03:27 compute-0 sudo[132692]: pam_unix(sudo:session): session closed for user root
Oct 09 16:03:28 compute-0 sudo[132891]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-cxokgtwiibqlawrkgwvczelpdofticmp ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760025807.9397678-1426-64952157501177/AnsiballZ_file.py'
Oct 09 16:03:28 compute-0 sudo[132891]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 09 16:03:28 compute-0 python3.9[132893]: ansible-ansible.builtin.file Invoked with group=0 mode=0700 owner=0 path=/var/lib/openstack/healthchecks/multipathd recurse=True state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 09 16:03:28 compute-0 sudo[132891]: pam_unix(sudo:session): session closed for user root
Oct 09 16:03:28 compute-0 sudo[133043]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-lwalnezqspztrklqpcodjcqykfmesbth ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760025808.580403-1435-100226976107452/AnsiballZ_podman_container_info.py'
Oct 09 16:03:28 compute-0 sudo[133043]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 09 16:03:28 compute-0 python3.9[133045]: ansible-containers.podman.podman_container_info Invoked with name=['podman_exporter'] executable=podman
Oct 09 16:03:29 compute-0 sudo[133043]: pam_unix(sudo:session): session closed for user root
Oct 09 16:03:29 compute-0 sudo[133208]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-epzffvvbweddyzgbfypjbzctyowpbpup ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760025809.216417-1443-212406223457995/AnsiballZ_podman_container_exec.py'
Oct 09 16:03:29 compute-0 sudo[133208]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 09 16:03:29 compute-0 python3.9[133210]: ansible-containers.podman.podman_container_exec Invoked with command=id -u name=podman_exporter detach=False executable=podman privileged=False tty=False argv=None env=None user=None workdir=None
Oct 09 16:03:29 compute-0 systemd[1]: Started libpod-conmon-86aa93e887899e54d383b0fc17fb3c5637680d1d8e146a130a557c837f0260fe.scope.
Oct 09 16:03:29 compute-0 podman[133211]: 2025-10-09 16:03:29.729349045 +0000 UTC m=+0.084752929 container exec 86aa93e887899e54d383b0fc17fb3c5637680d1d8e146a130a557c837f0260fe (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>)
Oct 09 16:03:29 compute-0 podman[133211]: 2025-10-09 16:03:29.762350362 +0000 UTC m=+0.117754256 container exec_died 86aa93e887899e54d383b0fc17fb3c5637680d1d8e146a130a557c837f0260fe (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter)
Oct 09 16:03:29 compute-0 systemd[1]: libpod-conmon-86aa93e887899e54d383b0fc17fb3c5637680d1d8e146a130a557c837f0260fe.scope: Deactivated successfully.
Oct 09 16:03:29 compute-0 sudo[133208]: pam_unix(sudo:session): session closed for user root
Oct 09 16:03:30 compute-0 sudo[133388]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vypuaaejyqiugpciyvackmiljrdbrbve ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760025810.0290046-1451-75058317631083/AnsiballZ_podman_container_exec.py'
Oct 09 16:03:30 compute-0 sudo[133388]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 09 16:03:30 compute-0 python3.9[133390]: ansible-containers.podman.podman_container_exec Invoked with command=id -g name=podman_exporter detach=False executable=podman privileged=False tty=False argv=None env=None user=None workdir=None
Oct 09 16:03:30 compute-0 systemd[1]: Started libpod-conmon-86aa93e887899e54d383b0fc17fb3c5637680d1d8e146a130a557c837f0260fe.scope.
Oct 09 16:03:30 compute-0 podman[133391]: 2025-10-09 16:03:30.622192436 +0000 UTC m=+0.073291116 container exec 86aa93e887899e54d383b0fc17fb3c5637680d1d8e146a130a557c837f0260fe (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter)
Oct 09 16:03:30 compute-0 podman[133391]: 2025-10-09 16:03:30.651784184 +0000 UTC m=+0.102882844 container exec_died 86aa93e887899e54d383b0fc17fb3c5637680d1d8e146a130a557c837f0260fe (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible)
Oct 09 16:03:30 compute-0 systemd[1]: libpod-conmon-86aa93e887899e54d383b0fc17fb3c5637680d1d8e146a130a557c837f0260fe.scope: Deactivated successfully.
Oct 09 16:03:30 compute-0 sudo[133388]: pam_unix(sudo:session): session closed for user root
Oct 09 16:03:31 compute-0 sudo[133573]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-aftgexcxwwenkrxfzzxtlcmlhbrhdzbq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760025810.8698635-1459-156971026503358/AnsiballZ_file.py'
Oct 09 16:03:31 compute-0 sudo[133573]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 09 16:03:31 compute-0 python3.9[133575]: ansible-ansible.builtin.file Invoked with group=0 mode=0700 owner=0 path=/var/lib/openstack/healthchecks/podman_exporter recurse=True state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 09 16:03:31 compute-0 sudo[133573]: pam_unix(sudo:session): session closed for user root
Oct 09 16:03:31 compute-0 sudo[133725]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-sggfovqaxavtajtwcpmxtqazfvjikswd ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760025811.6352847-1468-85251888374656/AnsiballZ_podman_container_info.py'
Oct 09 16:03:31 compute-0 sudo[133725]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 09 16:03:32 compute-0 python3.9[133727]: ansible-containers.podman.podman_container_info Invoked with name=['openstack_network_exporter'] executable=podman
Oct 09 16:03:32 compute-0 sudo[133725]: pam_unix(sudo:session): session closed for user root
Oct 09 16:03:32 compute-0 sudo[133890]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-eynnzqzyfrlaovwangosyvakqcrgscba ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760025812.2789748-1476-174888952107123/AnsiballZ_podman_container_exec.py'
Oct 09 16:03:32 compute-0 sudo[133890]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 09 16:03:32 compute-0 python3.9[133892]: ansible-containers.podman.podman_container_exec Invoked with command=id -u name=openstack_network_exporter detach=False executable=podman privileged=False tty=False argv=None env=None user=None workdir=None
Oct 09 16:03:32 compute-0 systemd[1]: Started libpod-conmon-64fa251112d01abb34ec1bb8e22019815ddcaaaa391ce5ec91517147ac5a9994.scope.
Oct 09 16:03:32 compute-0 podman[133893]: 2025-10-09 16:03:32.822420436 +0000 UTC m=+0.070132195 container exec 64fa251112d01abb34ec1bb8e22019815ddcaaaa391ce5ec91517147ac5a9994 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, version=9.6, com.redhat.component=ubi9-minimal-container, release=1755695350, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, io.openshift.expose-services=, name=ubi9-minimal, architecture=x86_64, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., config_id=edpm, container_name=openstack_network_exporter, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, io.openshift.tags=minimal rhel9, managed_by=edpm_ansible, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., build-date=2025-08-20T13:12:41, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, maintainer=Red Hat, Inc., vcs-type=git, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, distribution-scope=public, vendor=Red Hat, Inc., io.buildah.version=1.33.7, url=https://catalog.redhat.com/en/search?searchType=containers)
Oct 09 16:03:32 compute-0 podman[133893]: 2025-10-09 16:03:32.85153994 +0000 UTC m=+0.099251699 container exec_died 64fa251112d01abb34ec1bb8e22019815ddcaaaa391ce5ec91517147ac5a9994 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, build-date=2025-08-20T13:12:41, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, container_name=openstack_network_exporter, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, io.openshift.tags=minimal rhel9, managed_by=edpm_ansible, vcs-type=git, distribution-scope=public, com.redhat.component=ubi9-minimal-container, io.buildah.version=1.33.7, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, io.openshift.expose-services=, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., vendor=Red Hat, Inc., version=9.6, architecture=x86_64, config_id=edpm, maintainer=Red Hat, Inc., description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package 
manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., release=1755695350, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., url=https://catalog.redhat.com/en/search?searchType=containers, name=ubi9-minimal)
Oct 09 16:03:32 compute-0 systemd[1]: libpod-conmon-64fa251112d01abb34ec1bb8e22019815ddcaaaa391ce5ec91517147ac5a9994.scope: Deactivated successfully.
Oct 09 16:03:32 compute-0 sudo[133890]: pam_unix(sudo:session): session closed for user root
Oct 09 16:03:33 compute-0 sudo[134087]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-lrwidxpknqdyhrusrgemmitgmdkarzck ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760025813.032994-1484-228623879274861/AnsiballZ_podman_container_exec.py'
Oct 09 16:03:33 compute-0 sudo[134087]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 09 16:03:33 compute-0 podman[134047]: 2025-10-09 16:03:33.354173313 +0000 UTC m=+0.056616047 container health_status 86aa93e887899e54d383b0fc17fb3c5637680d1d8e146a130a557c837f0260fe (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible)
Oct 09 16:03:33 compute-0 python3.9[134100]: ansible-containers.podman.podman_container_exec Invoked with command=id -g name=openstack_network_exporter detach=False executable=podman privileged=False tty=False argv=None env=None user=None workdir=None
Oct 09 16:03:33 compute-0 systemd[1]: Started libpod-conmon-64fa251112d01abb34ec1bb8e22019815ddcaaaa391ce5ec91517147ac5a9994.scope.
Oct 09 16:03:33 compute-0 podman[134101]: 2025-10-09 16:03:33.623763235 +0000 UTC m=+0.071094487 container exec 64fa251112d01abb34ec1bb8e22019815ddcaaaa391ce5ec91517147ac5a9994 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, container_name=openstack_network_exporter, io.openshift.tags=minimal rhel9, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, vendor=Red Hat, Inc., io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., name=ubi9-minimal, build-date=2025-08-20T13:12:41, maintainer=Red Hat, Inc., io.buildah.version=1.33.7, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, vcs-type=git, config_id=edpm, distribution-scope=public, release=1755695350, url=https://catalog.redhat.com/en/search?searchType=containers, version=9.6, io.openshift.expose-services=, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, com.redhat.component=ubi9-minimal-container, managed_by=edpm_ansible, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, architecture=x86_64)
Oct 09 16:03:33 compute-0 podman[134101]: 2025-10-09 16:03:33.65861949 +0000 UTC m=+0.105950512 container exec_died 64fa251112d01abb34ec1bb8e22019815ddcaaaa391ce5ec91517147ac5a9994 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, io.openshift.expose-services=, name=ubi9-minimal, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, distribution-scope=public, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., build-date=2025-08-20T13:12:41, io.openshift.tags=minimal rhel9, managed_by=edpm_ansible, release=1755695350, config_id=edpm, version=9.6, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., maintainer=Red Hat, Inc., com.redhat.component=ubi9-minimal-container, vcs-type=git, vendor=Red Hat, Inc., io.buildah.version=1.33.7, url=https://catalog.redhat.com/en/search?searchType=containers, architecture=x86_64, container_name=openstack_network_exporter)
Oct 09 16:03:33 compute-0 systemd[1]: libpod-conmon-64fa251112d01abb34ec1bb8e22019815ddcaaaa391ce5ec91517147ac5a9994.scope: Deactivated successfully.
Oct 09 16:03:33 compute-0 sudo[134087]: pam_unix(sudo:session): session closed for user root
Oct 09 16:03:34 compute-0 sudo[134283]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vwequhdiubsopjydbihcvghtugsoqhyn ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760025813.8342853-1492-31027994925351/AnsiballZ_file.py'
Oct 09 16:03:34 compute-0 sudo[134283]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 09 16:03:34 compute-0 python3.9[134285]: ansible-ansible.builtin.file Invoked with group=0 mode=0700 owner=0 path=/var/lib/openstack/healthchecks/openstack_network_exporter recurse=True state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 09 16:03:34 compute-0 sudo[134283]: pam_unix(sudo:session): session closed for user root
Oct 09 16:03:35 compute-0 ovn_metadata_agent[28608]: 2025-10-09 16:03:35.270 28613 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:405
Oct 09 16:03:35 compute-0 ovn_metadata_agent[28608]: 2025-10-09 16:03:35.270 28613 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:410
Oct 09 16:03:35 compute-0 ovn_metadata_agent[28608]: 2025-10-09 16:03:35.270 28613 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:424
Oct 09 16:03:39 compute-0 podman[134311]: 2025-10-09 16:03:39.820241985 +0000 UTC m=+0.052993462 container health_status 0e966c7a283689daf23c5db5017f23f685ee836319240188a9f70ddd246563c5 (image=38.102.83.66:5001/podified-master-centos10/openstack-neutron-metadata-agent-ovn:watcher_latest, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251007, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': '38.102.83.66:5001/podified-master-centos10/openstack-neutron-metadata-agent-ovn:watcher_latest', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, io.buildah.version=1.41.4, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=watcher_latest)
Oct 09 16:03:40 compute-0 podman[134330]: 2025-10-09 16:03:40.838530826 +0000 UTC m=+0.077579242 container health_status 8227728d23f374cb9528be6ddf01dff38d8c792912a17b0d137932a18390b820 (image=38.102.83.66:5001/podified-master-centos10/openstack-iscsid:watcher_latest, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': '38.102.83.66:5001/podified-master-centos10/openstack-iscsid:watcher_latest', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, config_id=iscsid, container_name=iscsid, org.label-schema.name=CentOS Stream 10 Base Image, tcib_managed=true, io.buildah.version=1.41.4, org.label-schema.build-date=20251007, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=watcher_latest, maintainer=OpenStack Kubernetes Operator team)
Oct 09 16:03:45 compute-0 podman[134350]: 2025-10-09 16:03:45.815154102 +0000 UTC m=+0.052953961 container health_status 64fa251112d01abb34ec1bb8e22019815ddcaaaa391ce5ec91517147ac5a9994 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, com.redhat.component=ubi9-minimal-container, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, vcs-type=git, build-date=2025-08-20T13:12:41, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., release=1755695350, vendor=Red Hat, Inc., io.openshift.tags=minimal rhel9, name=ubi9-minimal, url=https://catalog.redhat.com/en/search?searchType=containers, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., maintainer=Red Hat, Inc., config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, container_name=openstack_network_exporter, io.openshift.expose-services=, architecture=x86_64, config_id=edpm, io.buildah.version=1.33.7, version=9.6, distribution-scope=public)
Oct 09 16:03:47 compute-0 podman[134370]: 2025-10-09 16:03:47.901151289 +0000 UTC m=+0.135945303 container health_status d615e4f24ff62fbb14be4a63e6da568ec5b22309773c1c66103b6b4de64a8a2f (image=38.102.83.66:5001/podified-master-centos10/openstack-ovn-controller:watcher_latest, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251007, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.schema-version=1.0, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': '38.102.83.66:5001/podified-master-centos10/openstack-ovn-controller:watcher_latest', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=watcher_latest, tcib_managed=true, config_id=ovn_controller, container_name=ovn_controller, io.buildah.version=1.41.4)
Oct 09 16:03:54 compute-0 sudo[134521]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mdfhvjbddyguvmmizlmbaaucnysuvbhz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760025833.743784-1700-274486195175132/AnsiballZ_file.py'
Oct 09 16:03:54 compute-0 sudo[134521]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 09 16:03:54 compute-0 python3.9[134523]: ansible-ansible.builtin.file Invoked with group=root mode=0750 owner=root path=/var/lib/edpm-config/firewall/ state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 09 16:03:54 compute-0 sudo[134521]: pam_unix(sudo:session): session closed for user root
Oct 09 16:03:54 compute-0 sudo[134673]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-cdgeuarsauamgrnarqzqskauqitidpxs ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760025834.4551904-1716-221693127968274/AnsiballZ_stat.py'
Oct 09 16:03:54 compute-0 sudo[134673]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 09 16:03:54 compute-0 python3.9[134675]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/firewall/telemetry.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 09 16:03:54 compute-0 sudo[134673]: pam_unix(sudo:session): session closed for user root
Oct 09 16:03:55 compute-0 sudo[134796]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ujoltslmivzgcierlvhpbqppkdtzqhxn ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760025834.4551904-1716-221693127968274/AnsiballZ_copy.py'
Oct 09 16:03:55 compute-0 sudo[134796]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 09 16:03:55 compute-0 python3.9[134798]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/edpm-config/firewall/telemetry.yaml mode=0640 src=/home/zuul/.ansible/tmp/ansible-tmp-1760025834.4551904-1716-221693127968274/.source.yaml follow=False _original_basename=firewall.yaml.j2 checksum=d942d984493b214bda2913f753ff68cdcedff00e backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 09 16:03:55 compute-0 sudo[134796]: pam_unix(sudo:session): session closed for user root
Oct 09 16:03:56 compute-0 sudo[134948]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xrwvbbluussnlcpveljwnlddewatukni ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760025835.855181-1748-140052670233410/AnsiballZ_file.py'
Oct 09 16:03:56 compute-0 sudo[134948]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 09 16:03:56 compute-0 python3.9[134950]: ansible-ansible.builtin.file Invoked with group=root mode=0750 owner=root path=/var/lib/edpm-config/firewall state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 09 16:03:56 compute-0 sudo[134948]: pam_unix(sudo:session): session closed for user root
Oct 09 16:03:56 compute-0 sudo[135100]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zgdiwghwzlzzwdtlexwsrvlbcckvpjgx ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760025836.5709724-1764-273253125335273/AnsiballZ_stat.py'
Oct 09 16:03:56 compute-0 sudo[135100]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 09 16:03:57 compute-0 python3.9[135102]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/firewall/edpm-nftables-base.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 09 16:03:57 compute-0 sudo[135100]: pam_unix(sudo:session): session closed for user root
Oct 09 16:03:57 compute-0 sudo[135178]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dyrsjaefwyixpmsogzewbpadnahmexys ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760025836.5709724-1764-273253125335273/AnsiballZ_file.py'
Oct 09 16:03:57 compute-0 sudo[135178]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 09 16:03:57 compute-0 python3.9[135180]: ansible-ansible.legacy.file Invoked with mode=0644 dest=/var/lib/edpm-config/firewall/edpm-nftables-base.yaml _original_basename=base-rules.yaml.j2 recurse=False state=file path=/var/lib/edpm-config/firewall/edpm-nftables-base.yaml force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 09 16:03:57 compute-0 sudo[135178]: pam_unix(sudo:session): session closed for user root
Oct 09 16:03:57 compute-0 sudo[135345]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fojjexpjxttesdqlqhbyytztsiqpuisj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760025837.7069778-1788-98032406082271/AnsiballZ_stat.py'
Oct 09 16:03:58 compute-0 sudo[135345]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 09 16:03:58 compute-0 podman[135304]: 2025-10-09 16:03:58.013264593 +0000 UTC m=+0.061643036 container health_status 270830b5167cafbab735d8118980f26987e673cd0fa464dd2202394830bcca66 (image=38.102.83.66:5001/podified-master-centos10/openstack-multipathd:watcher_latest, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, container_name=multipathd, org.label-schema.build-date=20251007, tcib_managed=true, config_id=multipathd, io.buildah.version=1.41.4, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_build_tag=watcher_latest, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': '38.102.83.66:5001/podified-master-centos10/openstack-multipathd:watcher_latest', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']})
Oct 09 16:03:58 compute-0 python3.9[135352]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/firewall/edpm-nftables-user-rules.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 09 16:03:58 compute-0 sudo[135345]: pam_unix(sudo:session): session closed for user root
Oct 09 16:03:58 compute-0 sudo[135428]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mlkziwhqvpnpspasdlszlzizrvnclsqq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760025837.7069778-1788-98032406082271/AnsiballZ_file.py'
Oct 09 16:03:58 compute-0 sudo[135428]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 09 16:03:58 compute-0 python3.9[135430]: ansible-ansible.legacy.file Invoked with mode=0644 dest=/var/lib/edpm-config/firewall/edpm-nftables-user-rules.yaml _original_basename=.02fhgzh_ recurse=False state=file path=/var/lib/edpm-config/firewall/edpm-nftables-user-rules.yaml force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 09 16:03:58 compute-0 sudo[135428]: pam_unix(sudo:session): session closed for user root
Oct 09 16:03:59 compute-0 sudo[135580]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gidtjilpvwiveagmcxtcphdzrfshpmam ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760025838.8335593-1812-184794605216199/AnsiballZ_stat.py'
Oct 09 16:03:59 compute-0 sudo[135580]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 09 16:03:59 compute-0 python3.9[135582]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/iptables.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 09 16:03:59 compute-0 sudo[135580]: pam_unix(sudo:session): session closed for user root
Oct 09 16:03:59 compute-0 sudo[135658]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-eaqbryljpzozjeplkrynkanxcwmicbyz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760025838.8335593-1812-184794605216199/AnsiballZ_file.py'
Oct 09 16:03:59 compute-0 sudo[135658]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 09 16:03:59 compute-0 python3.9[135660]: ansible-ansible.legacy.file Invoked with group=root mode=0600 owner=root dest=/etc/nftables/iptables.nft _original_basename=iptables.nft recurse=False state=file path=/etc/nftables/iptables.nft force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 09 16:03:59 compute-0 sudo[135658]: pam_unix(sudo:session): session closed for user root
Oct 09 16:04:00 compute-0 sudo[135810]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ofkveatgdyfdgreddpuzqaiourmbcsvy ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760025840.0036807-1838-80760263991259/AnsiballZ_command.py'
Oct 09 16:04:00 compute-0 sudo[135810]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 09 16:04:00 compute-0 python3.9[135812]: ansible-ansible.legacy.command Invoked with _raw_params=nft -j list ruleset _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 09 16:04:00 compute-0 sudo[135810]: pam_unix(sudo:session): session closed for user root
Oct 09 16:04:01 compute-0 sudo[135963]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-paaizisoyhldanzyjmwzgozqweaisads ; /usr/bin/python3 /home/zuul/.ansible/tmp/ansible-tmp-1760025840.7937763-1854-59581105217410/AnsiballZ_edpm_nftables_from_files.py'
Oct 09 16:04:01 compute-0 sudo[135963]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 09 16:04:01 compute-0 python3[135965]: ansible-edpm_nftables_from_files Invoked with src=/var/lib/edpm-config/firewall
Oct 09 16:04:01 compute-0 sudo[135963]: pam_unix(sudo:session): session closed for user root
Oct 09 16:04:01 compute-0 sudo[136115]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-txarerhyvpundklecpkarsmlxyemqcrg ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760025841.6361382-1870-55815009208074/AnsiballZ_stat.py'
Oct 09 16:04:01 compute-0 sudo[136115]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 09 16:04:02 compute-0 python3.9[136117]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-jumps.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 09 16:04:02 compute-0 sudo[136115]: pam_unix(sudo:session): session closed for user root
Oct 09 16:04:02 compute-0 sudo[136193]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kqzgepskvvamvytevmcexdrxynghtonj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760025841.6361382-1870-55815009208074/AnsiballZ_file.py'
Oct 09 16:04:02 compute-0 sudo[136193]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 09 16:04:02 compute-0 python3.9[136195]: ansible-ansible.legacy.file Invoked with group=root mode=0600 owner=root dest=/etc/nftables/edpm-jumps.nft _original_basename=jump-chain.j2 recurse=False state=file path=/etc/nftables/edpm-jumps.nft force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 09 16:04:02 compute-0 sudo[136193]: pam_unix(sudo:session): session closed for user root
Oct 09 16:04:03 compute-0 sudo[136345]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bcxyffzldkgbbjnsmejzvicgcqrudylu ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760025842.8115885-1894-142963736393913/AnsiballZ_stat.py'
Oct 09 16:04:03 compute-0 sudo[136345]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 09 16:04:03 compute-0 python3.9[136347]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-update-jumps.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 09 16:04:03 compute-0 sudo[136345]: pam_unix(sudo:session): session closed for user root
Oct 09 16:04:03 compute-0 sudo[136436]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fenconiwjopsmavopelwyimvgonnngla ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760025842.8115885-1894-142963736393913/AnsiballZ_file.py'
Oct 09 16:04:03 compute-0 sudo[136436]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 09 16:04:03 compute-0 podman[136397]: 2025-10-09 16:04:03.538227423 +0000 UTC m=+0.073281816 container health_status 86aa93e887899e54d383b0fc17fb3c5637680d1d8e146a130a557c837f0260fe (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter)
Oct 09 16:04:03 compute-0 nova_compute[117331]: 2025-10-09 16:04:03.592 2 DEBUG oslo_service.periodic_task [None req-92a13bed-c081-4c7b-87f3-bafebefb79f2 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.12/site-packages/oslo_service/periodic_task.py:210
Oct 09 16:04:03 compute-0 nova_compute[117331]: 2025-10-09 16:04:03.592 2 DEBUG oslo_service.periodic_task [None req-92a13bed-c081-4c7b-87f3-bafebefb79f2 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.12/site-packages/oslo_service/periodic_task.py:210
Oct 09 16:04:03 compute-0 python3.9[136449]: ansible-ansible.legacy.file Invoked with group=root mode=0600 owner=root dest=/etc/nftables/edpm-update-jumps.nft _original_basename=jump-chain.j2 recurse=False state=file path=/etc/nftables/edpm-update-jumps.nft force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 09 16:04:03 compute-0 sudo[136436]: pam_unix(sudo:session): session closed for user root
Oct 09 16:04:04 compute-0 nova_compute[117331]: 2025-10-09 16:04:04.105 2 DEBUG oslo_service.periodic_task [None req-92a13bed-c081-4c7b-87f3-bafebefb79f2 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.12/site-packages/oslo_service/periodic_task.py:210
Oct 09 16:04:04 compute-0 nova_compute[117331]: 2025-10-09 16:04:04.105 2 DEBUG oslo_service.periodic_task [None req-92a13bed-c081-4c7b-87f3-bafebefb79f2 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.12/site-packages/oslo_service/periodic_task.py:210
Oct 09 16:04:04 compute-0 nova_compute[117331]: 2025-10-09 16:04:04.105 2 DEBUG oslo_service.periodic_task [None req-92a13bed-c081-4c7b-87f3-bafebefb79f2 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.12/site-packages/oslo_service/periodic_task.py:210
Oct 09 16:04:04 compute-0 nova_compute[117331]: 2025-10-09 16:04:04.105 2 DEBUG oslo_service.periodic_task [None req-92a13bed-c081-4c7b-87f3-bafebefb79f2 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.12/site-packages/oslo_service/periodic_task.py:210
Oct 09 16:04:04 compute-0 nova_compute[117331]: 2025-10-09 16:04:04.105 2 DEBUG oslo_service.periodic_task [None req-92a13bed-c081-4c7b-87f3-bafebefb79f2 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.12/site-packages/oslo_service/periodic_task.py:210
Oct 09 16:04:04 compute-0 nova_compute[117331]: 2025-10-09 16:04:04.105 2 DEBUG oslo_service.periodic_task [None req-92a13bed-c081-4c7b-87f3-bafebefb79f2 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.12/site-packages/oslo_service/periodic_task.py:210
Oct 09 16:04:04 compute-0 nova_compute[117331]: 2025-10-09 16:04:04.106 2 DEBUG nova.compute.manager [None req-92a13bed-c081-4c7b-87f3-bafebefb79f2 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.12/site-packages/nova/compute/manager.py:11228
Oct 09 16:04:04 compute-0 nova_compute[117331]: 2025-10-09 16:04:04.106 2 DEBUG oslo_service.periodic_task [None req-92a13bed-c081-4c7b-87f3-bafebefb79f2 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.12/site-packages/oslo_service/periodic_task.py:210
Oct 09 16:04:04 compute-0 sudo[136599]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mflvvsfczcoefhnlztbvwvzojcchjurf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760025843.9807134-1918-262990455279724/AnsiballZ_stat.py'
Oct 09 16:04:04 compute-0 sudo[136599]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 09 16:04:04 compute-0 python3.9[136601]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-flushes.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 09 16:04:04 compute-0 sudo[136599]: pam_unix(sudo:session): session closed for user root
Oct 09 16:04:04 compute-0 nova_compute[117331]: 2025-10-09 16:04:04.620 2 DEBUG oslo_concurrency.lockutils [None req-92a13bed-c081-4c7b-87f3-bafebefb79f2 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:405
Oct 09 16:04:04 compute-0 nova_compute[117331]: 2025-10-09 16:04:04.620 2 DEBUG oslo_concurrency.lockutils [None req-92a13bed-c081-4c7b-87f3-bafebefb79f2 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:410
Oct 09 16:04:04 compute-0 nova_compute[117331]: 2025-10-09 16:04:04.620 2 DEBUG oslo_concurrency.lockutils [None req-92a13bed-c081-4c7b-87f3-bafebefb79f2 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:424
Oct 09 16:04:04 compute-0 nova_compute[117331]: 2025-10-09 16:04:04.620 2 DEBUG nova.compute.resource_tracker [None req-92a13bed-c081-4c7b-87f3-bafebefb79f2 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.12/site-packages/nova/compute/resource_tracker.py:937
Oct 09 16:04:04 compute-0 sudo[136679]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wlbzvgqxnwmiqussftibfzrhjvuzygzx ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760025843.9807134-1918-262990455279724/AnsiballZ_file.py'
Oct 09 16:04:04 compute-0 sudo[136679]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 09 16:04:04 compute-0 nova_compute[117331]: 2025-10-09 16:04:04.752 2 WARNING nova.virt.libvirt.driver [None req-92a13bed-c081-4c7b-87f3-bafebefb79f2 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Oct 09 16:04:04 compute-0 nova_compute[117331]: 2025-10-09 16:04:04.753 2 DEBUG oslo_concurrency.processutils [None req-92a13bed-c081-4c7b-87f3-bafebefb79f2 - - - - - -] Running cmd (subprocess): env LANG=C uptime execute /usr/lib/python3.12/site-packages/oslo_concurrency/processutils.py:349
Oct 09 16:04:04 compute-0 nova_compute[117331]: 2025-10-09 16:04:04.770 2 DEBUG oslo_concurrency.processutils [None req-92a13bed-c081-4c7b-87f3-bafebefb79f2 - - - - - -] CMD "env LANG=C uptime" returned: 0 in 0.017s execute /usr/lib/python3.12/site-packages/oslo_concurrency/processutils.py:372
Oct 09 16:04:04 compute-0 nova_compute[117331]: 2025-10-09 16:04:04.771 2 DEBUG nova.compute.resource_tracker [None req-92a13bed-c081-4c7b-87f3-bafebefb79f2 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=6449MB free_disk=73.30110168457031GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.12/site-packages/nova/compute/resource_tracker.py:1136
Oct 09 16:04:04 compute-0 nova_compute[117331]: 2025-10-09 16:04:04.771 2 DEBUG oslo_concurrency.lockutils [None req-92a13bed-c081-4c7b-87f3-bafebefb79f2 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:405
Oct 09 16:04:04 compute-0 nova_compute[117331]: 2025-10-09 16:04:04.771 2 DEBUG oslo_concurrency.lockutils [None req-92a13bed-c081-4c7b-87f3-bafebefb79f2 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:410
Oct 09 16:04:04 compute-0 python3.9[136681]: ansible-ansible.legacy.file Invoked with group=root mode=0600 owner=root dest=/etc/nftables/edpm-flushes.nft _original_basename=flush-chain.j2 recurse=False state=file path=/etc/nftables/edpm-flushes.nft force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 09 16:04:04 compute-0 sudo[136679]: pam_unix(sudo:session): session closed for user root
Oct 09 16:04:05 compute-0 sudo[136832]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-nkfeisfxlmgetqixykepqxukixrxpcik ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760025845.1304789-1942-274229260418890/AnsiballZ_stat.py'
Oct 09 16:04:05 compute-0 sudo[136832]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 09 16:04:05 compute-0 python3.9[136834]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-chains.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 09 16:04:05 compute-0 sudo[136832]: pam_unix(sudo:session): session closed for user root
Oct 09 16:04:05 compute-0 nova_compute[117331]: 2025-10-09 16:04:05.821 2 DEBUG nova.compute.resource_tracker [None req-92a13bed-c081-4c7b-87f3-bafebefb79f2 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.12/site-packages/nova/compute/resource_tracker.py:1159
Oct 09 16:04:05 compute-0 nova_compute[117331]: 2025-10-09 16:04:05.821 2 DEBUG nova.compute.resource_tracker [None req-92a13bed-c081-4c7b-87f3-bafebefb79f2 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=79GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] stats={'failed_builds': '0', 'uptime': ' 16:04:04 up 13 min,  0 user,  load average: 0.85, 0.74, 0.45\n'} _report_final_resource_view /usr/lib/python3.12/site-packages/nova/compute/resource_tracker.py:1168
Oct 09 16:04:05 compute-0 nova_compute[117331]: 2025-10-09 16:04:05.838 2 DEBUG nova.compute.provider_tree [None req-92a13bed-c081-4c7b-87f3-bafebefb79f2 - - - - - -] Inventory has not changed in ProviderTree for provider: 593051b8-2000-437f-a915-2616fc8b1671 update_inventory /usr/lib/python3.12/site-packages/nova/compute/provider_tree.py:180
Oct 09 16:04:05 compute-0 sudo[136910]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wukwoocuckkuremucjsvwelwzbrbimfa ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760025845.1304789-1942-274229260418890/AnsiballZ_file.py'
Oct 09 16:04:05 compute-0 sudo[136910]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 09 16:04:06 compute-0 python3.9[136912]: ansible-ansible.legacy.file Invoked with group=root mode=0600 owner=root dest=/etc/nftables/edpm-chains.nft _original_basename=chains.j2 recurse=False state=file path=/etc/nftables/edpm-chains.nft force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 09 16:04:06 compute-0 sudo[136910]: pam_unix(sudo:session): session closed for user root
Oct 09 16:04:06 compute-0 nova_compute[117331]: 2025-10-09 16:04:06.348 2 DEBUG nova.scheduler.client.report [None req-92a13bed-c081-4c7b-87f3-bafebefb79f2 - - - - - -] Inventory has not changed for provider 593051b8-2000-437f-a915-2616fc8b1671 based on inventory data: {'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 79, 'reserved': 0, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.12/site-packages/nova/scheduler/client/report.py:958
Oct 09 16:04:06 compute-0 sudo[137062]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bhlurzvrfqjelzudpxlcigepkhklrjok ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760025846.253097-1966-135648767052339/AnsiballZ_stat.py'
Oct 09 16:04:06 compute-0 sudo[137062]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 09 16:04:06 compute-0 python3.9[137064]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-rules.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 09 16:04:06 compute-0 nova_compute[117331]: 2025-10-09 16:04:06.856 2 DEBUG nova.compute.resource_tracker [None req-92a13bed-c081-4c7b-87f3-bafebefb79f2 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.12/site-packages/nova/compute/resource_tracker.py:1097
Oct 09 16:04:06 compute-0 nova_compute[117331]: 2025-10-09 16:04:06.856 2 DEBUG oslo_concurrency.lockutils [None req-92a13bed-c081-4c7b-87f3-bafebefb79f2 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 2.085s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:424
Oct 09 16:04:06 compute-0 sudo[137062]: pam_unix(sudo:session): session closed for user root
Oct 09 16:04:07 compute-0 sudo[137187]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-csvpblcnjehyetbavpmjyjvojptsjbhg ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760025846.253097-1966-135648767052339/AnsiballZ_copy.py'
Oct 09 16:04:07 compute-0 sudo[137187]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 09 16:04:07 compute-0 python3.9[137189]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/edpm-rules.nft group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1760025846.253097-1966-135648767052339/.source.nft follow=False _original_basename=ruleset.j2 checksum=fb3275eced3a2e06312143189928124e1b2df34a backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 09 16:04:07 compute-0 sudo[137187]: pam_unix(sudo:session): session closed for user root
Oct 09 16:04:07 compute-0 sudo[137339]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-empimakiejkcfxtnwaakbwmfnpeqrcde ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760025847.5543847-1996-108739913762947/AnsiballZ_file.py'
Oct 09 16:04:07 compute-0 sudo[137339]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 09 16:04:08 compute-0 python3.9[137341]: ansible-ansible.builtin.file Invoked with group=root mode=0600 owner=root path=/etc/nftables/edpm-rules.nft.changed state=touch recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 09 16:04:08 compute-0 sudo[137339]: pam_unix(sudo:session): session closed for user root
Oct 09 16:04:08 compute-0 sudo[137491]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ayfdzqjzywgqatkyzgadgwqasleuzjvu ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760025848.2682185-2012-25905095145336/AnsiballZ_command.py'
Oct 09 16:04:08 compute-0 sudo[137491]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 09 16:04:08 compute-0 python3.9[137493]: ansible-ansible.legacy.command Invoked with _raw_params=set -o pipefail; cat /etc/nftables/edpm-chains.nft /etc/nftables/edpm-flushes.nft /etc/nftables/edpm-rules.nft /etc/nftables/edpm-update-jumps.nft /etc/nftables/edpm-jumps.nft | nft -c -f - _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 09 16:04:08 compute-0 sudo[137491]: pam_unix(sudo:session): session closed for user root
Oct 09 16:04:09 compute-0 sudo[137646]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rythbyyzyrhdkidduqimqxdqygokkxxk ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760025848.9689775-2028-77574071566400/AnsiballZ_blockinfile.py'
Oct 09 16:04:09 compute-0 sudo[137646]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 09 16:04:09 compute-0 python3.9[137648]: ansible-ansible.builtin.blockinfile Invoked with backup=False block=include "/etc/nftables/iptables.nft"
                                             include "/etc/nftables/edpm-chains.nft"
                                             include "/etc/nftables/edpm-rules.nft"
                                             include "/etc/nftables/edpm-jumps.nft"
                                              path=/etc/sysconfig/nftables.conf validate=nft -c -f %s state=present marker=# {mark} ANSIBLE MANAGED BLOCK create=False marker_begin=BEGIN marker_end=END append_newline=False prepend_newline=False unsafe_writes=False insertafter=None insertbefore=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 09 16:04:09 compute-0 sudo[137646]: pam_unix(sudo:session): session closed for user root
Oct 09 16:04:10 compute-0 sudo[137815]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-pdhyvfmlqxdchkwdjygitrrrzfzmbnci ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760025849.9072342-2046-47160647632170/AnsiballZ_command.py'
Oct 09 16:04:10 compute-0 sudo[137815]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 09 16:04:10 compute-0 podman[137772]: 2025-10-09 16:04:10.163275068 +0000 UTC m=+0.048856621 container health_status 0e966c7a283689daf23c5db5017f23f685ee836319240188a9f70ddd246563c5 (image=38.102.83.66:5001/podified-master-centos10/openstack-neutron-metadata-agent-ovn:watcher_latest, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_metadata_agent, org.label-schema.name=CentOS Stream 10 Base Image, config_id=ovn_metadata_agent, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': '38.102.83.66:5001/podified-master-centos10/openstack-neutron-metadata-agent-ovn:watcher_latest', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.4, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251007, tcib_build_tag=watcher_latest, tcib_managed=true)
Oct 09 16:04:10 compute-0 python3.9[137820]: ansible-ansible.legacy.command Invoked with _raw_params=nft -f /etc/nftables/edpm-chains.nft _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 09 16:04:10 compute-0 sudo[137815]: pam_unix(sudo:session): session closed for user root
Oct 09 16:04:10 compute-0 sudo[137971]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jsudmxcaujfxfkfspynendpwnqjwioax ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760025850.5569172-2062-265382064280976/AnsiballZ_stat.py'
Oct 09 16:04:10 compute-0 sudo[137971]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 09 16:04:10 compute-0 python3.9[137973]: ansible-ansible.builtin.stat Invoked with path=/etc/nftables/edpm-rules.nft.changed follow=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Oct 09 16:04:10 compute-0 sudo[137971]: pam_unix(sudo:session): session closed for user root
Oct 09 16:04:11 compute-0 sudo[138142]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vdkhbaajxbuqhvzqprfmacdonshoygtw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760025851.2033138-2078-101992574896876/AnsiballZ_command.py'
Oct 09 16:04:11 compute-0 sudo[138142]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 09 16:04:11 compute-0 podman[138099]: 2025-10-09 16:04:11.471077671 +0000 UTC m=+0.054592773 container health_status 8227728d23f374cb9528be6ddf01dff38d8c792912a17b0d137932a18390b820 (image=38.102.83.66:5001/podified-master-centos10/openstack-iscsid:watcher_latest, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, config_id=iscsid, io.buildah.version=1.41.4, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': '38.102.83.66:5001/podified-master-centos10/openstack-iscsid:watcher_latest', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=watcher_latest, tcib_managed=true, container_name=iscsid, org.label-schema.build-date=20251007, org.label-schema.license=GPLv2)
Oct 09 16:04:11 compute-0 python3.9[138147]: ansible-ansible.legacy.command Invoked with _raw_params=set -o pipefail; cat /etc/nftables/edpm-flushes.nft /etc/nftables/edpm-rules.nft /etc/nftables/edpm-update-jumps.nft | nft -f - _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 09 16:04:11 compute-0 sudo[138142]: pam_unix(sudo:session): session closed for user root
Oct 09 16:04:12 compute-0 sudo[138300]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xhljxivbvmfuxyzprttaezlelluyofdi ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760025851.9108005-2094-30302087937830/AnsiballZ_file.py'
Oct 09 16:04:12 compute-0 sudo[138300]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 09 16:04:12 compute-0 python3.9[138302]: ansible-ansible.builtin.file Invoked with path=/etc/nftables/edpm-rules.nft.changed state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 09 16:04:12 compute-0 sudo[138300]: pam_unix(sudo:session): session closed for user root
Oct 09 16:04:12 compute-0 sshd-session[117698]: Connection closed by 192.168.122.30 port 60592
Oct 09 16:04:12 compute-0 sshd-session[117695]: pam_unix(sshd:session): session closed for user zuul
Oct 09 16:04:12 compute-0 systemd[1]: session-11.scope: Deactivated successfully.
Oct 09 16:04:12 compute-0 systemd[1]: session-11.scope: Consumed 1min 17.686s CPU time.
Oct 09 16:04:12 compute-0 systemd-logind[841]: Session 11 logged out. Waiting for processes to exit.
Oct 09 16:04:12 compute-0 systemd-logind[841]: Removed session 11.
Oct 09 16:04:16 compute-0 podman[138327]: 2025-10-09 16:04:16.816746311 +0000 UTC m=+0.050574809 container health_status 64fa251112d01abb34ec1bb8e22019815ddcaaaa391ce5ec91517147ac5a9994 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.buildah.version=1.33.7, vendor=Red Hat, Inc., managed_by=edpm_ansible, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., name=ubi9-minimal, config_id=edpm, container_name=openstack_network_exporter, io.openshift.expose-services=, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., architecture=x86_64, url=https://catalog.redhat.com/en/search?searchType=containers, release=1755695350, build-date=2025-08-20T13:12:41, com.redhat.component=ubi9-minimal-container, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, version=9.6, distribution-scope=public, vcs-type=git, io.openshift.tags=minimal rhel9, maintainer=Red Hat, Inc.)
Oct 09 16:04:18 compute-0 podman[138348]: 2025-10-09 16:04:18.838490014 +0000 UTC m=+0.073718133 container health_status d615e4f24ff62fbb14be4a63e6da568ec5b22309773c1c66103b6b4de64a8a2f (image=38.102.83.66:5001/podified-master-centos10/openstack-ovn-controller:watcher_latest, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=watcher_latest, managed_by=edpm_ansible, tcib_managed=true, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': '38.102.83.66:5001/podified-master-centos10/openstack-ovn-controller:watcher_latest', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.4, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, container_name=ovn_controller, org.label-schema.build-date=20251007)
Oct 09 16:04:19 compute-0 sshd-session[138374]: Accepted publickey for zuul from 38.102.83.98 port 44404 ssh2: RSA SHA256:rwu8V+okkVqRZd2KVO8nfSAEzATmBOTdXXeo6Lc6rjo
Oct 09 16:04:19 compute-0 systemd-logind[841]: New session 12 of user zuul.
Oct 09 16:04:19 compute-0 systemd[1]: Started Session 12 of User zuul.
Oct 09 16:04:19 compute-0 sshd-session[138374]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Oct 09 16:04:19 compute-0 sudo[138401]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-anurrpadnjhnittcfsqoyavehznntjte ; /usr/bin/python3'
Oct 09 16:04:19 compute-0 sudo[138401]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 09 16:04:19 compute-0 python3[138403]: ansible-ansible.legacy.dnf Invoked with name=['nfs-utils', 'iptables'] allow_downgrade=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_repoquery=True install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 allowerasing=False nobest=False use_backend=auto conf_file=None disable_excludes=None download_dir=None list=None releasever=None state=None
Oct 09 16:04:20 compute-0 sudo[138401]: pam_unix(sudo:session): session closed for user root
Oct 09 16:04:20 compute-0 sudo[138428]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-nrycmdubrkeesxzhkpepnburpwinqzod ; /usr/bin/python3'
Oct 09 16:04:20 compute-0 sudo[138428]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 09 16:04:21 compute-0 python3[138430]: ansible-community.general.ini_file Invoked with path=/etc/nfs.conf section=nfsd option=vers3 value=n backup=True mode=0644 state=present exclusive=True no_extra_spaces=False ignore_spaces=False allow_no_value=False modify_inactive_option=True create=True follow=False unsafe_writes=False section_has_values=None values=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 09 16:04:21 compute-0 sudo[138428]: pam_unix(sudo:session): session closed for user root
Oct 09 16:04:21 compute-0 sudo[138456]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-uxppkmztejscreyfutrojzqusygmkkgs ; /usr/bin/python3'
Oct 09 16:04:21 compute-0 sudo[138456]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 09 16:04:21 compute-0 python3[138458]: ansible-ansible.builtin.systemd_service Invoked with name=rpc-statd.service masked=True daemon_reload=False daemon_reexec=False scope=system no_block=False state=None enabled=None force=None
Oct 09 16:04:21 compute-0 systemd[1]: Reloading.
Oct 09 16:04:21 compute-0 systemd-rc-local-generator[138486]: /etc/rc.d/rc.local is not marked executable, skipping.
Oct 09 16:04:21 compute-0 systemd-sysv-generator[138490]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct 09 16:04:22 compute-0 sudo[138456]: pam_unix(sudo:session): session closed for user root
Oct 09 16:04:22 compute-0 sudo[138519]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vxjirupuykalbmewvwlorzwpyqtqirjj ; /usr/bin/python3'
Oct 09 16:04:22 compute-0 sudo[138519]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 09 16:04:22 compute-0 python3[138521]: ansible-ansible.builtin.systemd_service Invoked with name=rpcbind.service masked=True daemon_reload=False daemon_reexec=False scope=system no_block=False state=None enabled=None force=None
Oct 09 16:04:22 compute-0 systemd[1]: Reloading.
Oct 09 16:04:22 compute-0 systemd-sysv-generator[138556]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct 09 16:04:22 compute-0 systemd-rc-local-generator[138552]: /etc/rc.d/rc.local is not marked executable, skipping.
Oct 09 16:04:22 compute-0 systemd[1]: rpcbind.service: Current command vanished from the unit file, execution of the command list won't be resumed.
Oct 09 16:04:22 compute-0 sudo[138519]: pam_unix(sudo:session): session closed for user root
Oct 09 16:04:22 compute-0 sudo[138582]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mbceamlimaptlgpulcmclhvoidljdhst ; /usr/bin/python3'
Oct 09 16:04:22 compute-0 sudo[138582]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 09 16:04:23 compute-0 python3[138584]: ansible-ansible.builtin.systemd_service Invoked with name=rpcbind.socket masked=True daemon_reload=False daemon_reexec=False scope=system no_block=False state=None enabled=None force=None
Oct 09 16:04:23 compute-0 systemd[1]: Reloading.
Oct 09 16:04:23 compute-0 systemd-sysv-generator[138616]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct 09 16:04:23 compute-0 systemd-rc-local-generator[138612]: /etc/rc.d/rc.local is not marked executable, skipping.
Oct 09 16:04:23 compute-0 systemd[1]: rpcbind.socket: Socket unit configuration has changed while unit has been running, no open socket file descriptor left. The socket unit is not functional until restarted.
Oct 09 16:04:23 compute-0 sudo[138582]: pam_unix(sudo:session): session closed for user root
Oct 09 16:04:23 compute-0 sudo[138644]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-sitokqstwarvrgucmjgblbfsyudlmxpg ; /usr/bin/python3'
Oct 09 16:04:23 compute-0 sudo[138644]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 09 16:04:23 compute-0 python3[138646]: ansible-ansible.builtin.file Invoked with path=/data/cinder_backend_1 state=directory mode=755 recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 09 16:04:23 compute-0 sudo[138644]: pam_unix(sudo:session): session closed for user root
Oct 09 16:04:23 compute-0 sudo[138670]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xgarwenznqgrznhjanfqdjmnrqacyktj ; /usr/bin/python3'
Oct 09 16:04:23 compute-0 sudo[138670]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 09 16:04:24 compute-0 python3[138672]: ansible-ansible.builtin.file Invoked with path=/data/cinder_backend_2 state=directory mode=755 recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 09 16:04:24 compute-0 sudo[138670]: pam_unix(sudo:session): session closed for user root
Oct 09 16:04:24 compute-0 sudo[138696]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-epudrfxbyoicebiamrnaxbjdnvniagbf ; /usr/bin/python3'
Oct 09 16:04:24 compute-0 sudo[138696]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 09 16:04:24 compute-0 python3[138698]: ansible-ansible.builtin.file Invoked with path=/data/cinderbackup state=directory mode=755 recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 09 16:04:24 compute-0 sudo[138696]: pam_unix(sudo:session): session closed for user root
Oct 09 16:04:26 compute-0 sudo[138774]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qkmdandukqghbyfvcmdsjwajixokcess ; /usr/bin/python3'
Oct 09 16:04:26 compute-0 sudo[138774]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 09 16:04:26 compute-0 python3[138776]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/nfs-server.nft follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Oct 09 16:04:26 compute-0 sudo[138774]: pam_unix(sudo:session): session closed for user root
Oct 09 16:04:26 compute-0 sudo[138847]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mmtcyzerrodbfzfwytnwtzetkxturuqz ; /usr/bin/python3'
Oct 09 16:04:26 compute-0 sudo[138847]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 09 16:04:26 compute-0 python3[138849]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/nfs-server.nft mode=0666 src=/home/zuul/.ansible/tmp/ansible-tmp-1760025866.0130696-33382-195876923547795/source _original_basename=tmpdoeq1brz follow=False checksum=f91e6a2e98f3d3c48705976f5b33f9e81e7cf7f4 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 09 16:04:26 compute-0 sudo[138847]: pam_unix(sudo:session): session closed for user root
Oct 09 16:04:26 compute-0 sudo[138897]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-sxnwkpobfpktivxmvhublfyzirckuoax ; /usr/bin/python3'
Oct 09 16:04:26 compute-0 sudo[138897]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 09 16:04:27 compute-0 python3[138899]: ansible-ansible.builtin.lineinfile Invoked with path=/etc/sysconfig/nftables.conf line=include "/etc/nftables/nfs-server.nft" insertafter=EOF state=present backrefs=False create=False backup=False firstmatch=False unsafe_writes=False regexp=None search_string=None insertbefore=None validate=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 09 16:04:27 compute-0 sudo[138897]: pam_unix(sudo:session): session closed for user root
Oct 09 16:04:27 compute-0 sudo[138923]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wghgbqozykqffpfwklizvinskcqtqzgk ; /usr/bin/python3'
Oct 09 16:04:27 compute-0 sudo[138923]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 09 16:04:27 compute-0 python3[138925]: ansible-ansible.builtin.systemd Invoked with name=nftables state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Oct 09 16:04:27 compute-0 systemd[1]: Stopping Netfilter Tables...
Oct 09 16:04:27 compute-0 systemd[1]: nftables.service: Deactivated successfully.
Oct 09 16:04:27 compute-0 systemd[1]: Stopped Netfilter Tables.
Oct 09 16:04:27 compute-0 systemd[1]: Starting Netfilter Tables...
Oct 09 16:04:27 compute-0 systemd[1]: Finished Netfilter Tables.
Oct 09 16:04:27 compute-0 sudo[138923]: pam_unix(sudo:session): session closed for user root
Oct 09 16:04:28 compute-0 sudo[138953]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-npaatzcmipgcxkfadsexolqjdncghsei ; /usr/bin/python3'
Oct 09 16:04:28 compute-0 sudo[138953]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 09 16:04:28 compute-0 podman[138955]: 2025-10-09 16:04:28.175638555 +0000 UTC m=+0.063182706 container health_status 270830b5167cafbab735d8118980f26987e673cd0fa464dd2202394830bcca66 (image=38.102.83.66:5001/podified-master-centos10/openstack-multipathd:watcher_latest, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 10 Base Image, tcib_build_tag=watcher_latest, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': '38.102.83.66:5001/podified-master-centos10/openstack-multipathd:watcher_latest', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, container_name=multipathd, tcib_managed=true, config_id=multipathd, io.buildah.version=1.41.4, managed_by=edpm_ansible, org.label-schema.build-date=20251007, org.label-schema.vendor=CentOS)
Oct 09 16:04:28 compute-0 python3[138956]: ansible-community.general.ini_file Invoked with path=/etc/nfs.conf section=nfsd option=host value=172.18.0.101 backup=True mode=0644 state=present exclusive=True no_extra_spaces=False ignore_spaces=False allow_no_value=False modify_inactive_option=True create=True follow=False unsafe_writes=False section_has_values=None values=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 09 16:04:28 compute-0 sudo[138953]: pam_unix(sudo:session): session closed for user root
Oct 09 16:04:28 compute-0 sudo[139002]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-znliqqcasphqqyvzwvhnfvlxtjslaazp ; /usr/bin/python3'
Oct 09 16:04:28 compute-0 sudo[139002]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 09 16:04:28 compute-0 python3[139004]: ansible-ansible.builtin.systemd Invoked with name=nfs-server state=restarted enabled=True daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Oct 09 16:04:28 compute-0 systemd[1]: Reloading.
Oct 09 16:04:28 compute-0 systemd-rc-local-generator[139034]: /etc/rc.d/rc.local is not marked executable, skipping.
Oct 09 16:04:28 compute-0 systemd-sysv-generator[139037]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct 09 16:04:28 compute-0 systemd[1]: rpcbind.socket: Socket unit configuration has changed while unit has been running, no open socket file descriptor left. The socket unit is not functional until restarted.
Oct 09 16:04:28 compute-0 systemd[1]: Mounting NFSD configuration filesystem...
Oct 09 16:04:28 compute-0 systemd[1]: Kernel Module supporting RPCSEC_GSS was skipped because of an unmet condition check (ConditionPathExists=/etc/krb5.keytab).
Oct 09 16:04:29 compute-0 systemd[1]: Starting NFSv4 ID-name mapping service...
Oct 09 16:04:29 compute-0 systemd[1]: RPC security service for NFS client and server was skipped because of an unmet condition check (ConditionPathExists=/etc/krb5.keytab).
Oct 09 16:04:29 compute-0 rpc.idmapd[139046]: Setting log level to 0
Oct 09 16:04:29 compute-0 systemd[1]: Started NFSv4 ID-name mapping service.
Oct 09 16:04:29 compute-0 systemd[1]: Mounted NFSD configuration filesystem.
Oct 09 16:04:29 compute-0 systemd[1]: Starting NFS Mount Daemon...
Oct 09 16:04:29 compute-0 systemd[1]: Starting NFSv4 Client Tracking Daemon...
Oct 09 16:04:29 compute-0 systemd[1]: Started NFSv4 Client Tracking Daemon.
Oct 09 16:04:29 compute-0 rpc.mountd[139053]: Version 2.5.4 starting
Oct 09 16:04:29 compute-0 systemd[1]: Started NFS Mount Daemon.
Oct 09 16:04:29 compute-0 systemd[1]: Starting NFS server and services...
Oct 09 16:04:29 compute-0 kernel: RPC: Registered rdma transport module.
Oct 09 16:04:29 compute-0 kernel: RPC: Registered rdma backchannel transport module.
Oct 09 16:04:29 compute-0 kernel: NFSD: Using nfsdcld client tracking operations.
Oct 09 16:04:29 compute-0 kernel: NFSD: no clients to reclaim, skipping NFSv4 grace period (net f0000000)
Oct 09 16:04:29 compute-0 systemd[1]: Reloading GSSAPI Proxy Daemon...
Oct 09 16:04:29 compute-0 systemd[1]: Reloaded GSSAPI Proxy Daemon.
Oct 09 16:04:29 compute-0 systemd[1]: Finished NFS server and services.
Oct 09 16:04:29 compute-0 sudo[139002]: pam_unix(sudo:session): session closed for user root
Oct 09 16:04:29 compute-0 sudo[139094]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vrhexhkvxjcwenwrmfouwpfetfhjihuv ; /usr/bin/python3'
Oct 09 16:04:29 compute-0 sudo[139094]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 09 16:04:29 compute-0 podman[127775]: time="2025-10-09T16:04:29Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Oct 09 16:04:29 compute-0 podman[127775]: @ - - [09/Oct/2025:16:04:29 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 19526 "" "Go-http-client/1.1"
Oct 09 16:04:29 compute-0 podman[127775]: @ - - [09/Oct/2025:16:04:29 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 2991 "" "Go-http-client/1.1"
Oct 09 16:04:29 compute-0 python3[139096]: ansible-ansible.builtin.lineinfile Invoked with path=/etc/exports line=/data/cinder_backend_1 172.18.0.0/24(rw,sync,no_root_squash) state=present backrefs=False create=False backup=False firstmatch=False unsafe_writes=False regexp=None search_string=None insertafter=None insertbefore=None validate=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 09 16:04:29 compute-0 sudo[139094]: pam_unix(sudo:session): session closed for user root
Oct 09 16:04:29 compute-0 sudo[139120]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-cfcxuzeswqyvwdojijzgwynewyoldrff ; /usr/bin/python3'
Oct 09 16:04:29 compute-0 sudo[139120]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 09 16:04:30 compute-0 python3[139122]: ansible-ansible.builtin.lineinfile Invoked with path=/etc/exports line=/data/cinder_backend_2 172.18.0.0/24(rw,sync,no_root_squash) state=present backrefs=False create=False backup=False firstmatch=False unsafe_writes=False regexp=None search_string=None insertafter=None insertbefore=None validate=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 09 16:04:30 compute-0 sudo[139120]: pam_unix(sudo:session): session closed for user root
Oct 09 16:04:30 compute-0 sudo[139146]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vxbcvlmqknpcpiviedfcpklelbveujzz ; /usr/bin/python3'
Oct 09 16:04:30 compute-0 sudo[139146]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 09 16:04:30 compute-0 python3[139148]: ansible-ansible.builtin.lineinfile Invoked with path=/etc/exports line=/data/cinderbackup 172.18.0.0/24(rw,sync,no_root_squash) state=present backrefs=False create=False backup=False firstmatch=False unsafe_writes=False regexp=None search_string=None insertafter=None insertbefore=None validate=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 09 16:04:30 compute-0 sudo[139146]: pam_unix(sudo:session): session closed for user root
Oct 09 16:04:30 compute-0 sudo[139172]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-itvratepkujcvkqxrsicqeomrbhtqsws ; /usr/bin/python3'
Oct 09 16:04:30 compute-0 sudo[139172]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 09 16:04:30 compute-0 python3[139174]: ansible-ansible.legacy.command Invoked with _raw_params=exportfs -a _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 09 16:04:30 compute-0 sudo[139172]: pam_unix(sudo:session): session closed for user root
Oct 09 16:04:31 compute-0 openstack_network_exporter[129925]: ERROR   16:04:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Oct 09 16:04:31 compute-0 openstack_network_exporter[129925]: ERROR   16:04:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Oct 09 16:04:31 compute-0 openstack_network_exporter[129925]: ERROR   16:04:31 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Oct 09 16:04:31 compute-0 openstack_network_exporter[129925]: ERROR   16:04:31 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Oct 09 16:04:31 compute-0 openstack_network_exporter[129925]: 
Oct 09 16:04:31 compute-0 openstack_network_exporter[129925]: ERROR   16:04:31 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Oct 09 16:04:31 compute-0 openstack_network_exporter[129925]: 
Oct 09 16:04:33 compute-0 podman[139180]: 2025-10-09 16:04:33.832149539 +0000 UTC m=+0.062460531 container health_status 86aa93e887899e54d383b0fc17fb3c5637680d1d8e146a130a557c837f0260fe (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible)
Oct 09 16:04:35 compute-0 ovn_metadata_agent[28608]: 2025-10-09 16:04:35.271 28613 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:405
Oct 09 16:04:35 compute-0 ovn_metadata_agent[28608]: 2025-10-09 16:04:35.272 28613 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.000s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:410
Oct 09 16:04:35 compute-0 ovn_metadata_agent[28608]: 2025-10-09 16:04:35.272 28613 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:424
Oct 09 16:04:40 compute-0 podman[139206]: 2025-10-09 16:04:40.812903839 +0000 UTC m=+0.050619951 container health_status 0e966c7a283689daf23c5db5017f23f685ee836319240188a9f70ddd246563c5 (image=38.102.83.66:5001/podified-master-centos10/openstack-neutron-metadata-agent-ovn:watcher_latest, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=watcher_latest, tcib_managed=true, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251007, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.schema-version=1.0, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': '38.102.83.66:5001/podified-master-centos10/openstack-neutron-metadata-agent-ovn:watcher_latest', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.4, org.label-schema.license=GPLv2, managed_by=edpm_ansible)
Oct 09 16:04:41 compute-0 podman[139225]: 2025-10-09 16:04:41.834165948 +0000 UTC m=+0.069856576 container health_status 8227728d23f374cb9528be6ddf01dff38d8c792912a17b0d137932a18390b820 (image=38.102.83.66:5001/podified-master-centos10/openstack-iscsid:watcher_latest, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, container_name=iscsid, io.buildah.version=1.41.4, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251007, org.label-schema.schema-version=1.0, tcib_build_tag=watcher_latest, config_id=iscsid, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': '38.102.83.66:5001/podified-master-centos10/openstack-iscsid:watcher_latest', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true)
Oct 09 16:04:47 compute-0 podman[139246]: 2025-10-09 16:04:47.819246493 +0000 UTC m=+0.057558770 container health_status 64fa251112d01abb34ec1bb8e22019815ddcaaaa391ce5ec91517147ac5a9994 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, name=ubi9-minimal, release=1755695350, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., distribution-scope=public, url=https://catalog.redhat.com/en/search?searchType=containers, vcs-type=git, version=9.6, container_name=openstack_network_exporter, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, architecture=x86_64, config_id=edpm, build-date=2025-08-20T13:12:41, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, com.redhat.component=ubi9-minimal-container, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vendor=Red Hat, Inc., description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., managed_by=edpm_ansible, io.openshift.expose-services=, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, maintainer=Red Hat, Inc., io.openshift.tags=minimal rhel9, io.buildah.version=1.33.7)
Oct 09 16:04:49 compute-0 podman[139269]: 2025-10-09 16:04:49.857413448 +0000 UTC m=+0.091338145 container health_status d615e4f24ff62fbb14be4a63e6da568ec5b22309773c1c66103b6b4de64a8a2f (image=38.102.83.66:5001/podified-master-centos10/openstack-ovn-controller:watcher_latest, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251007, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.4, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=watcher_latest, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': '38.102.83.66:5001/podified-master-centos10/openstack-ovn-controller:watcher_latest', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_managed=true, container_name=ovn_controller)
Oct 09 16:04:57 compute-0 rpc.mountd[139053]: v4.1 client attached: 0xf5d108f568e7dd0d from "172.18.0.34:795"
Oct 09 16:04:58 compute-0 podman[139304]: 2025-10-09 16:04:58.833183309 +0000 UTC m=+0.060500947 container health_status 270830b5167cafbab735d8118980f26987e673cd0fa464dd2202394830bcca66 (image=38.102.83.66:5001/podified-master-centos10/openstack-multipathd:watcher_latest, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, config_id=multipathd, io.buildah.version=1.41.4, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.schema-version=1.0, container_name=multipathd, org.label-schema.build-date=20251007, org.label-schema.vendor=CentOS, tcib_build_tag=watcher_latest, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': '38.102.83.66:5001/podified-master-centos10/openstack-multipathd:watcher_latest', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.license=GPLv2, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team)
Oct 09 16:04:59 compute-0 podman[127775]: time="2025-10-09T16:04:59Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Oct 09 16:04:59 compute-0 podman[127775]: @ - - [09/Oct/2025:16:04:59 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 19526 "" "Go-http-client/1.1"
Oct 09 16:04:59 compute-0 podman[127775]: @ - - [09/Oct/2025:16:04:59 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 3001 "" "Go-http-client/1.1"
Oct 09 16:05:01 compute-0 openstack_network_exporter[129925]: ERROR   16:05:01 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Oct 09 16:05:01 compute-0 openstack_network_exporter[129925]: ERROR   16:05:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Oct 09 16:05:01 compute-0 openstack_network_exporter[129925]: ERROR   16:05:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Oct 09 16:05:01 compute-0 openstack_network_exporter[129925]: ERROR   16:05:01 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Oct 09 16:05:01 compute-0 openstack_network_exporter[129925]: 
Oct 09 16:05:01 compute-0 openstack_network_exporter[129925]: ERROR   16:05:01 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Oct 09 16:05:01 compute-0 openstack_network_exporter[129925]: 
Oct 09 16:05:04 compute-0 podman[139327]: 2025-10-09 16:05:04.821469789 +0000 UTC m=+0.054694605 container health_status 86aa93e887899e54d383b0fc17fb3c5637680d1d8e146a130a557c837f0260fe (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm)
Oct 09 16:05:06 compute-0 nova_compute[117331]: 2025-10-09 16:05:06.858 2 DEBUG oslo_service.periodic_task [None req-92a13bed-c081-4c7b-87f3-bafebefb79f2 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.12/site-packages/oslo_service/periodic_task.py:210
Oct 09 16:05:06 compute-0 nova_compute[117331]: 2025-10-09 16:05:06.858 2 DEBUG oslo_service.periodic_task [None req-92a13bed-c081-4c7b-87f3-bafebefb79f2 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.12/site-packages/oslo_service/periodic_task.py:210
Oct 09 16:05:06 compute-0 nova_compute[117331]: 2025-10-09 16:05:06.859 2 DEBUG oslo_service.periodic_task [None req-92a13bed-c081-4c7b-87f3-bafebefb79f2 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.12/site-packages/oslo_service/periodic_task.py:210
Oct 09 16:05:06 compute-0 nova_compute[117331]: 2025-10-09 16:05:06.859 2 DEBUG oslo_service.periodic_task [None req-92a13bed-c081-4c7b-87f3-bafebefb79f2 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.12/site-packages/oslo_service/periodic_task.py:210
Oct 09 16:05:06 compute-0 nova_compute[117331]: 2025-10-09 16:05:06.859 2 DEBUG oslo_service.periodic_task [None req-92a13bed-c081-4c7b-87f3-bafebefb79f2 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.12/site-packages/oslo_service/periodic_task.py:210
Oct 09 16:05:06 compute-0 nova_compute[117331]: 2025-10-09 16:05:06.859 2 DEBUG oslo_service.periodic_task [None req-92a13bed-c081-4c7b-87f3-bafebefb79f2 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.12/site-packages/oslo_service/periodic_task.py:210
Oct 09 16:05:06 compute-0 nova_compute[117331]: 2025-10-09 16:05:06.859 2 DEBUG oslo_service.periodic_task [None req-92a13bed-c081-4c7b-87f3-bafebefb79f2 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.12/site-packages/oslo_service/periodic_task.py:210
Oct 09 16:05:06 compute-0 nova_compute[117331]: 2025-10-09 16:05:06.859 2 DEBUG nova.compute.manager [None req-92a13bed-c081-4c7b-87f3-bafebefb79f2 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.12/site-packages/nova/compute/manager.py:11228
Oct 09 16:05:06 compute-0 nova_compute[117331]: 2025-10-09 16:05:06.859 2 DEBUG oslo_service.periodic_task [None req-92a13bed-c081-4c7b-87f3-bafebefb79f2 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.12/site-packages/oslo_service/periodic_task.py:210
Oct 09 16:05:07 compute-0 nova_compute[117331]: 2025-10-09 16:05:07.375 2 DEBUG oslo_concurrency.lockutils [None req-92a13bed-c081-4c7b-87f3-bafebefb79f2 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:405
Oct 09 16:05:07 compute-0 nova_compute[117331]: 2025-10-09 16:05:07.375 2 DEBUG oslo_concurrency.lockutils [None req-92a13bed-c081-4c7b-87f3-bafebefb79f2 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:410
Oct 09 16:05:07 compute-0 nova_compute[117331]: 2025-10-09 16:05:07.375 2 DEBUG oslo_concurrency.lockutils [None req-92a13bed-c081-4c7b-87f3-bafebefb79f2 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:424
Oct 09 16:05:07 compute-0 nova_compute[117331]: 2025-10-09 16:05:07.375 2 DEBUG nova.compute.resource_tracker [None req-92a13bed-c081-4c7b-87f3-bafebefb79f2 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.12/site-packages/nova/compute/resource_tracker.py:937
Oct 09 16:05:07 compute-0 nova_compute[117331]: 2025-10-09 16:05:07.517 2 WARNING nova.virt.libvirt.driver [None req-92a13bed-c081-4c7b-87f3-bafebefb79f2 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Oct 09 16:05:07 compute-0 nova_compute[117331]: 2025-10-09 16:05:07.518 2 DEBUG oslo_concurrency.processutils [None req-92a13bed-c081-4c7b-87f3-bafebefb79f2 - - - - - -] Running cmd (subprocess): env LANG=C uptime execute /usr/lib/python3.12/site-packages/oslo_concurrency/processutils.py:349
Oct 09 16:05:07 compute-0 nova_compute[117331]: 2025-10-09 16:05:07.537 2 DEBUG oslo_concurrency.processutils [None req-92a13bed-c081-4c7b-87f3-bafebefb79f2 - - - - - -] CMD "env LANG=C uptime" returned: 0 in 0.019s execute /usr/lib/python3.12/site-packages/oslo_concurrency/processutils.py:372
Oct 09 16:05:07 compute-0 nova_compute[117331]: 2025-10-09 16:05:07.537 2 DEBUG nova.compute.resource_tracker [None req-92a13bed-c081-4c7b-87f3-bafebefb79f2 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=6496MB free_disk=73.30141067504883GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": 
"label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.12/site-packages/nova/compute/resource_tracker.py:1136
Oct 09 16:05:07 compute-0 nova_compute[117331]: 2025-10-09 16:05:07.538 2 DEBUG oslo_concurrency.lockutils [None req-92a13bed-c081-4c7b-87f3-bafebefb79f2 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:405
Oct 09 16:05:07 compute-0 nova_compute[117331]: 2025-10-09 16:05:07.538 2 DEBUG oslo_concurrency.lockutils [None req-92a13bed-c081-4c7b-87f3-bafebefb79f2 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:410
Oct 09 16:05:08 compute-0 nova_compute[117331]: 2025-10-09 16:05:08.633 2 DEBUG nova.compute.resource_tracker [None req-92a13bed-c081-4c7b-87f3-bafebefb79f2 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.12/site-packages/nova/compute/resource_tracker.py:1159
Oct 09 16:05:08 compute-0 nova_compute[117331]: 2025-10-09 16:05:08.633 2 DEBUG nova.compute.resource_tracker [None req-92a13bed-c081-4c7b-87f3-bafebefb79f2 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=79GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] stats={'failed_builds': '0', 'uptime': ' 16:05:07 up 14 min,  0 user,  load average: 0.48, 0.66, 0.44\n'} _report_final_resource_view /usr/lib/python3.12/site-packages/nova/compute/resource_tracker.py:1168
Oct 09 16:05:08 compute-0 nova_compute[117331]: 2025-10-09 16:05:08.658 2 DEBUG nova.compute.provider_tree [None req-92a13bed-c081-4c7b-87f3-bafebefb79f2 - - - - - -] Inventory has not changed in ProviderTree for provider: 593051b8-2000-437f-a915-2616fc8b1671 update_inventory /usr/lib/python3.12/site-packages/nova/compute/provider_tree.py:180
Oct 09 16:05:09 compute-0 nova_compute[117331]: 2025-10-09 16:05:09.165 2 DEBUG nova.scheduler.client.report [None req-92a13bed-c081-4c7b-87f3-bafebefb79f2 - - - - - -] Inventory has not changed for provider 593051b8-2000-437f-a915-2616fc8b1671 based on inventory data: {'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 79, 'reserved': 0, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.12/site-packages/nova/scheduler/client/report.py:958
Oct 09 16:05:09 compute-0 nova_compute[117331]: 2025-10-09 16:05:09.674 2 DEBUG nova.compute.resource_tracker [None req-92a13bed-c081-4c7b-87f3-bafebefb79f2 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.12/site-packages/nova/compute/resource_tracker.py:1097
Oct 09 16:05:09 compute-0 nova_compute[117331]: 2025-10-09 16:05:09.675 2 DEBUG oslo_concurrency.lockutils [None req-92a13bed-c081-4c7b-87f3-bafebefb79f2 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 2.137s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:424
Oct 09 16:05:11 compute-0 podman[139353]: 2025-10-09 16:05:11.840215898 +0000 UTC m=+0.072303808 container health_status 0e966c7a283689daf23c5db5017f23f685ee836319240188a9f70ddd246563c5 (image=38.102.83.66:5001/podified-master-centos10/openstack-neutron-metadata-agent-ovn:watcher_latest, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_metadata_agent, io.buildah.version=1.41.4, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': '38.102.83.66:5001/podified-master-centos10/openstack-neutron-metadata-agent-ovn:watcher_latest', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, 
container_name=ovn_metadata_agent, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251007, tcib_build_tag=watcher_latest, maintainer=OpenStack Kubernetes Operator team)
Oct 09 16:05:11 compute-0 podman[139371]: 2025-10-09 16:05:11.923583778 +0000 UTC m=+0.055126400 container health_status 8227728d23f374cb9528be6ddf01dff38d8c792912a17b0d137932a18390b820 (image=38.102.83.66:5001/podified-master-centos10/openstack-iscsid:watcher_latest, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': '38.102.83.66:5001/podified-master-centos10/openstack-iscsid:watcher_latest', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, container_name=iscsid, org.label-schema.build-date=20251007, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=watcher_latest, tcib_managed=true, config_id=iscsid, io.buildah.version=1.41.4, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 10 Base Image, maintainer=OpenStack Kubernetes Operator team)
Oct 09 16:05:18 compute-0 podman[139393]: 2025-10-09 16:05:18.815426488 +0000 UTC m=+0.053402244 container health_status 64fa251112d01abb34ec1bb8e22019815ddcaaaa391ce5ec91517147ac5a9994 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, url=https://catalog.redhat.com/en/search?searchType=containers, com.redhat.component=ubi9-minimal-container, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vendor=Red Hat, Inc., architecture=x86_64, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, container_name=openstack_network_exporter, version=9.6, maintainer=Red Hat, Inc., managed_by=edpm_ansible, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, build-date=2025-08-20T13:12:41, description=The Universal Base Image Minimal 
is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.buildah.version=1.33.7, config_id=edpm, release=1755695350, io.openshift.tags=minimal rhel9, name=ubi9-minimal, distribution-scope=public, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, vcs-type=git, io.openshift.expose-services=, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI)
Oct 09 16:05:20 compute-0 podman[139415]: 2025-10-09 16:05:20.846350276 +0000 UTC m=+0.082069300 container health_status d615e4f24ff62fbb14be4a63e6da568ec5b22309773c1c66103b6b4de64a8a2f (image=38.102.83.66:5001/podified-master-centos10/openstack-ovn-controller:watcher_latest, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, config_id=ovn_controller, managed_by=edpm_ansible, org.label-schema.build-date=20251007, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=watcher_latest, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': '38.102.83.66:5001/podified-master-centos10/openstack-ovn-controller:watcher_latest', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, io.buildah.version=1.41.4, maintainer=OpenStack Kubernetes Operator team)
Oct 09 16:05:29 compute-0 podman[127775]: time="2025-10-09T16:05:29Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Oct 09 16:05:29 compute-0 podman[127775]: @ - - [09/Oct/2025:16:05:29 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 19526 "" "Go-http-client/1.1"
Oct 09 16:05:29 compute-0 podman[127775]: @ - - [09/Oct/2025:16:05:29 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 3000 "" "Go-http-client/1.1"
Oct 09 16:05:29 compute-0 podman[139442]: 2025-10-09 16:05:29.81910802 +0000 UTC m=+0.048688912 container health_status 270830b5167cafbab735d8118980f26987e673cd0fa464dd2202394830bcca66 (image=38.102.83.66:5001/podified-master-centos10/openstack-multipathd:watcher_latest, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': '38.102.83.66:5001/podified-master-centos10/openstack-multipathd:watcher_latest', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.build-date=20251007, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.4, org.label-schema.name=CentOS Stream 10 Base Image, config_id=multipathd, container_name=multipathd, tcib_build_tag=watcher_latest)
Oct 09 16:05:31 compute-0 openstack_network_exporter[129925]: ERROR   16:05:31 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Oct 09 16:05:31 compute-0 openstack_network_exporter[129925]: ERROR   16:05:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Oct 09 16:05:31 compute-0 openstack_network_exporter[129925]: ERROR   16:05:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Oct 09 16:05:31 compute-0 openstack_network_exporter[129925]: ERROR   16:05:31 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Oct 09 16:05:31 compute-0 openstack_network_exporter[129925]: 
Oct 09 16:05:31 compute-0 openstack_network_exporter[129925]: ERROR   16:05:31 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Oct 09 16:05:31 compute-0 openstack_network_exporter[129925]: 
Oct 09 16:05:35 compute-0 ovn_metadata_agent[28608]: 2025-10-09 16:05:35.273 28613 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:405
Oct 09 16:05:35 compute-0 ovn_metadata_agent[28608]: 2025-10-09 16:05:35.273 28613 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.000s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:410
Oct 09 16:05:35 compute-0 ovn_metadata_agent[28608]: 2025-10-09 16:05:35.273 28613 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:424
Oct 09 16:05:35 compute-0 podman[139463]: 2025-10-09 16:05:35.814742029 +0000 UTC m=+0.052654700 container health_status 86aa93e887899e54d383b0fc17fb3c5637680d1d8e146a130a557c837f0260fe (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']})
Oct 09 16:05:42 compute-0 podman[139488]: 2025-10-09 16:05:42.813217056 +0000 UTC m=+0.044631343 container health_status 0e966c7a283689daf23c5db5017f23f685ee836319240188a9f70ddd246563c5 (image=38.102.83.66:5001/podified-master-centos10/openstack-neutron-metadata-agent-ovn:watcher_latest, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251007, org.label-schema.schema-version=1.0, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': '38.102.83.66:5001/podified-master-centos10/openstack-neutron-metadata-agent-ovn:watcher_latest', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.license=GPLv2, tcib_managed=true, config_id=ovn_metadata_agent, io.buildah.version=1.41.4, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 10 Base Image, tcib_build_tag=watcher_latest, container_name=ovn_metadata_agent, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team)
Oct 09 16:05:42 compute-0 podman[139489]: 2025-10-09 16:05:42.830598966 +0000 UTC m=+0.055508642 container health_status 8227728d23f374cb9528be6ddf01dff38d8c792912a17b0d137932a18390b820 (image=38.102.83.66:5001/podified-master-centos10/openstack-iscsid:watcher_latest, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': '38.102.83.66:5001/podified-master-centos10/openstack-iscsid:watcher_latest', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, config_id=iscsid, io.buildah.version=1.41.4, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251007, tcib_build_tag=watcher_latest, container_name=iscsid, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 10 Base Image)
Oct 09 16:05:48 compute-0 PackageKit[53823]: daemon quit
Oct 09 16:05:48 compute-0 systemd[1]: packagekit.service: Deactivated successfully.
Oct 09 16:05:49 compute-0 podman[139525]: 2025-10-09 16:05:49.82934877 +0000 UTC m=+0.058790249 container health_status 64fa251112d01abb34ec1bb8e22019815ddcaaaa391ce5ec91517147ac5a9994 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, architecture=x86_64, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., config_id=edpm, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., version=9.6, vcs-type=git, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, distribution-scope=public, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., url=https://catalog.redhat.com/en/search?searchType=containers, com.redhat.component=ubi9-minimal-container, io.buildah.version=1.33.7, name=ubi9-minimal, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.tags=minimal rhel9, io.openshift.expose-services=, managed_by=edpm_ansible, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, maintainer=Red Hat, Inc., vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, vendor=Red Hat, Inc., build-date=2025-08-20T13:12:41, container_name=openstack_network_exporter, release=1755695350)
Oct 09 16:05:51 compute-0 podman[139549]: 2025-10-09 16:05:51.840122358 +0000 UTC m=+0.076270863 container health_status d615e4f24ff62fbb14be4a63e6da568ec5b22309773c1c66103b6b4de64a8a2f (image=38.102.83.66:5001/podified-master-centos10/openstack-ovn-controller:watcher_latest, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=watcher_latest, container_name=ovn_controller, io.buildah.version=1.41.4, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251007, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': '38.102.83.66:5001/podified-master-centos10/openstack-ovn-controller:watcher_latest', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, managed_by=edpm_ansible)
Oct 09 16:06:00 compute-0 systemd[1]: Starting Cleanup of Temporary Directories...
Oct 09 16:06:00 compute-0 systemd[1]: systemd-tmpfiles-clean.service: Deactivated successfully.
Oct 09 16:06:00 compute-0 systemd[1]: Finished Cleanup of Temporary Directories.
Oct 09 16:06:00 compute-0 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dclean.service.mount: Deactivated successfully.
Oct 09 16:06:00 compute-0 podman[139576]: 2025-10-09 16:06:00.861313036 +0000 UTC m=+0.078481234 container health_status 270830b5167cafbab735d8118980f26987e673cd0fa464dd2202394830bcca66 (image=38.102.83.66:5001/podified-master-centos10/openstack-multipathd:watcher_latest, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, container_name=multipathd, managed_by=edpm_ansible, org.label-schema.build-date=20251007, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': '38.102.83.66:5001/podified-master-centos10/openstack-multipathd:watcher_latest', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=watcher_latest, tcib_managed=true, io.buildah.version=1.41.4, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_id=multipathd, org.label-schema.license=GPLv2)
Oct 09 16:06:06 compute-0 nova_compute[117331]: 2025-10-09 16:06:06.119 2 DEBUG oslo_service.periodic_task [None req-92a13bed-c081-4c7b-87f3-bafebefb79f2 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.12/site-packages/oslo_service/periodic_task.py:210
Oct 09 16:06:06 compute-0 nova_compute[117331]: 2025-10-09 16:06:06.120 2 DEBUG oslo_service.periodic_task [None req-92a13bed-c081-4c7b-87f3-bafebefb79f2 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.12/site-packages/oslo_service/periodic_task.py:210
Oct 09 16:06:06 compute-0 nova_compute[117331]: 2025-10-09 16:06:06.631 2 DEBUG oslo_service.periodic_task [None req-92a13bed-c081-4c7b-87f3-bafebefb79f2 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.12/site-packages/oslo_service/periodic_task.py:210
Oct 09 16:06:06 compute-0 nova_compute[117331]: 2025-10-09 16:06:06.631 2 DEBUG oslo_service.periodic_task [None req-92a13bed-c081-4c7b-87f3-bafebefb79f2 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.12/site-packages/oslo_service/periodic_task.py:210
Oct 09 16:06:06 compute-0 nova_compute[117331]: 2025-10-09 16:06:06.632 2 DEBUG oslo_service.periodic_task [None req-92a13bed-c081-4c7b-87f3-bafebefb79f2 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.12/site-packages/oslo_service/periodic_task.py:210
Oct 09 16:06:06 compute-0 nova_compute[117331]: 2025-10-09 16:06:06.632 2 DEBUG oslo_service.periodic_task [None req-92a13bed-c081-4c7b-87f3-bafebefb79f2 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.12/site-packages/oslo_service/periodic_task.py:210
Oct 09 16:06:06 compute-0 nova_compute[117331]: 2025-10-09 16:06:06.632 2 DEBUG oslo_service.periodic_task [None req-92a13bed-c081-4c7b-87f3-bafebefb79f2 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.12/site-packages/oslo_service/periodic_task.py:210
Oct 09 16:06:06 compute-0 nova_compute[117331]: 2025-10-09 16:06:06.632 2 DEBUG oslo_service.periodic_task [None req-92a13bed-c081-4c7b-87f3-bafebefb79f2 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.12/site-packages/oslo_service/periodic_task.py:210
Oct 09 16:06:06 compute-0 nova_compute[117331]: 2025-10-09 16:06:06.632 2 DEBUG nova.compute.manager [None req-92a13bed-c081-4c7b-87f3-bafebefb79f2 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.12/site-packages/nova/compute/manager.py:11228
Oct 09 16:06:06 compute-0 nova_compute[117331]: 2025-10-09 16:06:06.632 2 DEBUG oslo_service.periodic_task [None req-92a13bed-c081-4c7b-87f3-bafebefb79f2 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.12/site-packages/oslo_service/periodic_task.py:210
Oct 09 16:06:06 compute-0 podman[139598]: 2025-10-09 16:06:06.825752277 +0000 UTC m=+0.055522563 container health_status 86aa93e887899e54d383b0fc17fb3c5637680d1d8e146a130a557c837f0260fe (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>)
Oct 09 16:06:07 compute-0 nova_compute[117331]: 2025-10-09 16:06:07.180 2 DEBUG oslo_concurrency.lockutils [None req-92a13bed-c081-4c7b-87f3-bafebefb79f2 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:405
Oct 09 16:06:07 compute-0 nova_compute[117331]: 2025-10-09 16:06:07.180 2 DEBUG oslo_concurrency.lockutils [None req-92a13bed-c081-4c7b-87f3-bafebefb79f2 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:410
Oct 09 16:06:07 compute-0 nova_compute[117331]: 2025-10-09 16:06:07.181 2 DEBUG oslo_concurrency.lockutils [None req-92a13bed-c081-4c7b-87f3-bafebefb79f2 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:424
Oct 09 16:06:07 compute-0 nova_compute[117331]: 2025-10-09 16:06:07.181 2 DEBUG nova.compute.resource_tracker [None req-92a13bed-c081-4c7b-87f3-bafebefb79f2 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.12/site-packages/nova/compute/resource_tracker.py:937
Oct 09 16:06:07 compute-0 nova_compute[117331]: 2025-10-09 16:06:07.331 2 WARNING nova.virt.libvirt.driver [None req-92a13bed-c081-4c7b-87f3-bafebefb79f2 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Oct 09 16:06:07 compute-0 nova_compute[117331]: 2025-10-09 16:06:07.332 2 DEBUG oslo_concurrency.processutils [None req-92a13bed-c081-4c7b-87f3-bafebefb79f2 - - - - - -] Running cmd (subprocess): env LANG=C uptime execute /usr/lib/python3.12/site-packages/oslo_concurrency/processutils.py:349
Oct 09 16:06:07 compute-0 nova_compute[117331]: 2025-10-09 16:06:07.347 2 DEBUG oslo_concurrency.processutils [None req-92a13bed-c081-4c7b-87f3-bafebefb79f2 - - - - - -] CMD "env LANG=C uptime" returned: 0 in 0.015s execute /usr/lib/python3.12/site-packages/oslo_concurrency/processutils.py:372
Oct 09 16:06:07 compute-0 nova_compute[117331]: 2025-10-09 16:06:07.347 2 DEBUG nova.compute.resource_tracker [None req-92a13bed-c081-4c7b-87f3-bafebefb79f2 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=6510MB free_disk=73.30526351928711GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.12/site-packages/nova/compute/resource_tracker.py:1136
Oct 09 16:06:07 compute-0 nova_compute[117331]: 2025-10-09 16:06:07.348 2 DEBUG oslo_concurrency.lockutils [None req-92a13bed-c081-4c7b-87f3-bafebefb79f2 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:405
Oct 09 16:06:07 compute-0 nova_compute[117331]: 2025-10-09 16:06:07.348 2 DEBUG oslo_concurrency.lockutils [None req-92a13bed-c081-4c7b-87f3-bafebefb79f2 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:410
Oct 09 16:06:08 compute-0 nova_compute[117331]: 2025-10-09 16:06:08.410 2 DEBUG nova.compute.resource_tracker [None req-92a13bed-c081-4c7b-87f3-bafebefb79f2 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.12/site-packages/nova/compute/resource_tracker.py:1159
Oct 09 16:06:08 compute-0 nova_compute[117331]: 2025-10-09 16:06:08.411 2 DEBUG nova.compute.resource_tracker [None req-92a13bed-c081-4c7b-87f3-bafebefb79f2 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=79GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] stats={'failed_builds': '0', 'uptime': ' 16:06:07 up 15 min,  0 user,  load average: 0.21, 0.56, 0.42\n'} _report_final_resource_view /usr/lib/python3.12/site-packages/nova/compute/resource_tracker.py:1168
Oct 09 16:06:08 compute-0 nova_compute[117331]: 2025-10-09 16:06:08.428 2 DEBUG nova.compute.provider_tree [None req-92a13bed-c081-4c7b-87f3-bafebefb79f2 - - - - - -] Inventory has not changed in ProviderTree for provider: 593051b8-2000-437f-a915-2616fc8b1671 update_inventory /usr/lib/python3.12/site-packages/nova/compute/provider_tree.py:180
Oct 09 16:06:08 compute-0 nova_compute[117331]: 2025-10-09 16:06:08.978 2 DEBUG nova.scheduler.client.report [None req-92a13bed-c081-4c7b-87f3-bafebefb79f2 - - - - - -] Inventory has not changed for provider 593051b8-2000-437f-a915-2616fc8b1671 based on inventory data: {'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 79, 'reserved': 0, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.12/site-packages/nova/scheduler/client/report.py:958
Oct 09 16:06:09 compute-0 nova_compute[117331]: 2025-10-09 16:06:09.554 2 DEBUG nova.compute.resource_tracker [None req-92a13bed-c081-4c7b-87f3-bafebefb79f2 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.12/site-packages/nova/compute/resource_tracker.py:1097
Oct 09 16:06:09 compute-0 nova_compute[117331]: 2025-10-09 16:06:09.554 2 DEBUG oslo_concurrency.lockutils [None req-92a13bed-c081-4c7b-87f3-bafebefb79f2 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 2.206s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:424
Oct 09 16:06:13 compute-0 podman[139624]: 2025-10-09 16:06:13.836439447 +0000 UTC m=+0.067776879 container health_status 8227728d23f374cb9528be6ddf01dff38d8c792912a17b0d137932a18390b820 (image=38.102.83.66:5001/podified-master-centos10/openstack-iscsid:watcher_latest, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': '38.102.83.66:5001/podified-master-centos10/openstack-iscsid:watcher_latest', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, org.label-schema.build-date=20251007, org.label-schema.vendor=CentOS, tcib_build_tag=watcher_latest, tcib_managed=true, config_id=iscsid, container_name=iscsid, io.buildah.version=1.41.4)
Oct 09 16:06:13 compute-0 podman[139623]: 2025-10-09 16:06:13.856265817 +0000 UTC m=+0.092390143 container health_status 0e966c7a283689daf23c5db5017f23f685ee836319240188a9f70ddd246563c5 (image=38.102.83.66:5001/podified-master-centos10/openstack-neutron-metadata-agent-ovn:watcher_latest, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': '38.102.83.66:5001/podified-master-centos10/openstack-neutron-metadata-agent-ovn:watcher_latest', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 10 Base Image, tcib_build_tag=watcher_latest, io.buildah.version=1.41.4, org.label-schema.build-date=20251007)
Oct 09 16:06:15 compute-0 ovn_metadata_agent[28608]: 2025-10-09 16:06:15.637 28613 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=2, options={'arp_ns_explicit_output': 'true', 'fdb_removal_limit': '0', 'ignore_lsp_down': 'false', 'mac_binding_removal_limit': '0', 'mac_prefix': '4a:65:5f', 'max_tunid': '16711680', 'northd_internal_version': '24.09.4-20.37.0-77.8', 'svc_monitor_mac': '32:24:1e:e3:5d:e8'}, ipsec=False) old=SB_Global(nb_cfg=1) matches /usr/lib/python3.12/site-packages/ovsdbapp/backend/ovs_idl/event.py:55
Oct 09 16:06:15 compute-0 ovn_metadata_agent[28608]: 2025-10-09 16:06:15.639 28613 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 0 seconds run /usr/lib/python3.12/site-packages/neutron/agent/ovn/metadata/agent.py:367
Oct 09 16:06:15 compute-0 ovn_metadata_agent[28608]: 2025-10-09 16:06:15.641 28613 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=9954897f-aa83-45dd-8e84-289816676c2a, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '2'}),), if_exists=True) do_commit /usr/lib/python3.12/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 09 16:06:20 compute-0 ovn_metadata_agent[28608]: 2025-10-09 16:06:20.542 28613 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:76:3e:c9 192.168.122.171'], port_security=[], type=localport, nat_addresses=[], virtual_parent=[], up=[False], options={}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '192.168.122.171/24', 'neutron:device_id': 'ovnmeta-ec7747ab-5c40-4207-84d7-e07e4f8f1795', 'neutron:device_owner': 'network:distributed', 'neutron:mtu': '', 'neutron:network_name': 'neutron-ec7747ab-5c40-4207-84d7-e07e4f8f1795', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'b30e8cf5e10742f190212b4cb97ce2c9', 'neutron:revision_number': '2', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=bbb0e797-538c-4220-b984-a28bebd03a5a, chassis=[], tunnel_key=2, gateway_chassis=[], requested_chassis=[], logical_port=f4b87b8a-f779-4ce2-964d-335a89070d5e) old=Port_Binding(mac=['fa:16:3e:76:3e:c9'], external_ids={'neutron:cidrs': '', 'neutron:device_id': 'ovnmeta-ec7747ab-5c40-4207-84d7-e07e4f8f1795', 'neutron:device_owner': 'network:distributed', 'neutron:mtu': '', 'neutron:network_name': 'neutron-ec7747ab-5c40-4207-84d7-e07e4f8f1795', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'b30e8cf5e10742f190212b4cb97ce2c9', 'neutron:revision_number': '1', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}) matches /usr/lib/python3.12/site-packages/ovsdbapp/backend/ovs_idl/event.py:55
Oct 09 16:06:20 compute-0 ovn_metadata_agent[28608]: 2025-10-09 16:06:20.544 28613 INFO neutron.agent.ovn.metadata.agent [-] Metadata Port f4b87b8a-f779-4ce2-964d-335a89070d5e in datapath ec7747ab-5c40-4207-84d7-e07e4f8f1795 updated
Oct 09 16:06:20 compute-0 ovn_metadata_agent[28608]: 2025-10-09 16:06:20.546 28613 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network ec7747ab-5c40-4207-84d7-e07e4f8f1795, tearing the namespace down if needed _get_provision_params /usr/lib/python3.12/site-packages/neutron/agent/ovn/metadata/agent.py:756
Oct 09 16:06:20 compute-0 ovn_metadata_agent[28608]: 2025-10-09 16:06:20.547 28613 INFO oslo.privsep.daemon [-] Running privsep helper: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'privsep-helper', '--config-file', '/etc/neutron/neutron.conf', '--config-dir', '/etc/neutron.conf.d', '--privsep_context', 'neutron.privileged.default', '--privsep_sock_path', '/tmp/tmpluzfbj2e/privsep.sock']
Oct 09 16:06:20 compute-0 podman[139665]: 2025-10-09 16:06:20.808963415 +0000 UTC m=+0.048065252 container health_status 64fa251112d01abb34ec1bb8e22019815ddcaaaa391ce5ec91517147ac5a9994 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, vendor=Red Hat, Inc., version=9.6, distribution-scope=public, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, io.openshift.expose-services=, container_name=openstack_network_exporter, architecture=x86_64, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., name=ubi9-minimal, vcs-type=git, com.redhat.component=ubi9-minimal-container, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., config_id=edpm, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.buildah.version=1.33.7, io.openshift.tags=minimal rhel9, release=1755695350, build-date=2025-08-20T13:12:41, url=https://catalog.redhat.com/en/search?searchType=containers, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, maintainer=Red Hat, Inc.)
Oct 09 16:06:21 compute-0 ovn_metadata_agent[28608]: 2025-10-09 16:06:21.287 28613 INFO oslo.privsep.daemon [-] Spawned new privsep daemon via rootwrap
Oct 09 16:06:21 compute-0 ovn_metadata_agent[28608]: 2025-10-09 16:06:21.287 28613 DEBUG oslo.privsep.daemon [-] Accepted privsep connection to /tmp/tmpluzfbj2e/privsep.sock __init__ /usr/lib/python3.12/site-packages/oslo_privsep/daemon.py:377
Oct 09 16:06:21 compute-0 ovn_metadata_agent[28608]: 2025-10-09 16:06:21.145 139687 INFO oslo.privsep.daemon [-] privsep daemon starting
Oct 09 16:06:21 compute-0 ovn_metadata_agent[28608]: 2025-10-09 16:06:21.151 139687 INFO oslo.privsep.daemon [-] privsep process running with uid/gid: 0/0
Oct 09 16:06:21 compute-0 ovn_metadata_agent[28608]: 2025-10-09 16:06:21.155 139687 INFO oslo.privsep.daemon [-] privsep process running with capabilities (eff/prm/inh): CAP_DAC_OVERRIDE|CAP_DAC_READ_SEARCH|CAP_NET_ADMIN|CAP_SYS_ADMIN|CAP_SYS_PTRACE/CAP_DAC_OVERRIDE|CAP_DAC_READ_SEARCH|CAP_NET_ADMIN|CAP_SYS_ADMIN|CAP_SYS_PTRACE/none
Oct 09 16:06:21 compute-0 ovn_metadata_agent[28608]: 2025-10-09 16:06:21.155 139687 INFO oslo.privsep.daemon [-] privsep daemon running as pid 139687
Oct 09 16:06:21 compute-0 ovn_metadata_agent[28608]: 2025-10-09 16:06:21.288 139687 DEBUG oslo.privsep.daemon [-] privsep: reply[599f1f7e-e0c9-488d-b0bc-d372c56e6391]: (2,) _call_back /usr/lib/python3.12/site-packages/oslo_privsep/daemon.py:515
Oct 09 16:06:21 compute-0 ovn_metadata_agent[28608]: 2025-10-09 16:06:21.710 139687 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "context-manager" by "neutron_lib.db.api._create_context_manager" inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:405
Oct 09 16:06:21 compute-0 ovn_metadata_agent[28608]: 2025-10-09 16:06:21.711 139687 DEBUG oslo_concurrency.lockutils [-] Lock "context-manager" acquired by "neutron_lib.db.api._create_context_manager" :: waited 0.000s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:410
Oct 09 16:06:21 compute-0 ovn_metadata_agent[28608]: 2025-10-09 16:06:21.711 139687 DEBUG oslo_concurrency.lockutils [-] Lock "context-manager" "released" by "neutron_lib.db.api._create_context_manager" :: held 0.000s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:424
Oct 09 16:06:22 compute-0 ovn_metadata_agent[28608]: 2025-10-09 16:06:22.170 139687 INFO oslo_service.backend [-] Loading backend: eventlet
Oct 09 16:06:22 compute-0 ovn_metadata_agent[28608]: 2025-10-09 16:06:22.176 139687 INFO oslo_service.backend [-] Backend 'eventlet' successfully loaded and cached.
Oct 09 16:06:22 compute-0 ovn_metadata_agent[28608]: 2025-10-09 16:06:22.214 139687 DEBUG oslo.privsep.daemon [-] privsep: reply[77fd2840-dc74-4237-a56c-87ff36cf3b1a]: (4, False) _call_back /usr/lib/python3.12/site-packages/oslo_privsep/daemon.py:515
Oct 09 16:06:22 compute-0 podman[139692]: 2025-10-09 16:06:22.842110278 +0000 UTC m=+0.076173557 container health_status d615e4f24ff62fbb14be4a63e6da568ec5b22309773c1c66103b6b4de64a8a2f (image=38.102.83.66:5001/podified-master-centos10/openstack-ovn-controller:watcher_latest, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_controller, container_name=ovn_controller, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=watcher_latest, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': '38.102.83.66:5001/podified-master-centos10/openstack-ovn-controller:watcher_latest', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.4, org.label-schema.build-date=20251007, org.label-schema.schema-version=1.0)
Oct 09 16:06:29 compute-0 podman[127775]: time="2025-10-09T16:06:29Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Oct 09 16:06:29 compute-0 podman[127775]: @ - - [09/Oct/2025:16:06:29 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 19526 "" "Go-http-client/1.1"
Oct 09 16:06:29 compute-0 podman[127775]: @ - - [09/Oct/2025:16:06:29 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 3004 "" "Go-http-client/1.1"
Oct 09 16:06:31 compute-0 openstack_network_exporter[129925]: ERROR   16:06:31 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Oct 09 16:06:31 compute-0 openstack_network_exporter[129925]: ERROR   16:06:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Oct 09 16:06:31 compute-0 openstack_network_exporter[129925]: ERROR   16:06:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Oct 09 16:06:31 compute-0 openstack_network_exporter[129925]: ERROR   16:06:31 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Oct 09 16:06:31 compute-0 openstack_network_exporter[129925]: 
Oct 09 16:06:31 compute-0 openstack_network_exporter[129925]: ERROR   16:06:31 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Oct 09 16:06:31 compute-0 openstack_network_exporter[129925]: 
Oct 09 16:06:31 compute-0 podman[139719]: 2025-10-09 16:06:31.830165659 +0000 UTC m=+0.056027604 container health_status 270830b5167cafbab735d8118980f26987e673cd0fa464dd2202394830bcca66 (image=38.102.83.66:5001/podified-master-centos10/openstack-multipathd:watcher_latest, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, tcib_build_tag=watcher_latest, tcib_managed=true, config_id=multipathd, container_name=multipathd, io.buildah.version=1.41.4, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251007, org.label-schema.schema-version=1.0, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': '38.102.83.66:5001/podified-master-centos10/openstack-multipathd:watcher_latest', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.vendor=CentOS)
Oct 09 16:06:35 compute-0 ovn_metadata_agent[28608]: 2025-10-09 16:06:35.274 28613 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:405
Oct 09 16:06:35 compute-0 ovn_metadata_agent[28608]: 2025-10-09 16:06:35.274 28613 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.000s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:410
Oct 09 16:06:35 compute-0 ovn_metadata_agent[28608]: 2025-10-09 16:06:35.274 28613 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:424
Oct 09 16:06:37 compute-0 podman[139740]: 2025-10-09 16:06:37.81933166 +0000 UTC m=+0.049640259 container health_status 86aa93e887899e54d383b0fc17fb3c5637680d1d8e146a130a557c837f0260fe (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible)
Oct 09 16:06:44 compute-0 podman[139765]: 2025-10-09 16:06:44.837678823 +0000 UTC m=+0.068278066 container health_status 8227728d23f374cb9528be6ddf01dff38d8c792912a17b0d137932a18390b820 (image=38.102.83.66:5001/podified-master-centos10/openstack-iscsid:watcher_latest, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, container_name=iscsid, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251007, org.label-schema.vendor=CentOS, io.buildah.version=1.41.4, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=watcher_latest, org.label-schema.license=GPLv2, tcib_managed=true, managed_by=edpm_ansible, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': '38.102.83.66:5001/podified-master-centos10/openstack-iscsid:watcher_latest', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, config_id=iscsid)
Oct 09 16:06:44 compute-0 podman[139764]: 2025-10-09 16:06:44.844525541 +0000 UTC m=+0.080439374 container health_status 0e966c7a283689daf23c5db5017f23f685ee836319240188a9f70ddd246563c5 (image=38.102.83.66:5001/podified-master-centos10/openstack-neutron-metadata-agent-ovn:watcher_latest, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251007, org.label-schema.schema-version=1.0, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': '38.102.83.66:5001/podified-master-centos10/openstack-neutron-metadata-agent-ovn:watcher_latest', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 10 Base Image, 
config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, tcib_build_tag=watcher_latest, tcib_managed=true, io.buildah.version=1.41.4)
Oct 09 16:06:51 compute-0 podman[139801]: 2025-10-09 16:06:51.807731449 +0000 UTC m=+0.046790748 container health_status 64fa251112d01abb34ec1bb8e22019815ddcaaaa391ce5ec91517147ac5a9994 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, url=https://catalog.redhat.com/en/search?searchType=containers, vcs-type=git, container_name=openstack_network_exporter, vendor=Red Hat, Inc., distribution-scope=public, maintainer=Red Hat, Inc., release=1755695350, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.tags=minimal rhel9, version=9.6, build-date=2025-08-20T13:12:41, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., config_id=edpm, io.buildah.version=1.33.7, architecture=x86_64, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. 
This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., com.redhat.component=ubi9-minimal-container, managed_by=edpm_ansible, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.expose-services=, name=ubi9-minimal, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal)
Oct 09 16:06:53 compute-0 podman[139822]: 2025-10-09 16:06:53.857440449 +0000 UTC m=+0.092882122 container health_status d615e4f24ff62fbb14be4a63e6da568ec5b22309773c1c66103b6b4de64a8a2f (image=38.102.83.66:5001/podified-master-centos10/openstack-ovn-controller:watcher_latest, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_id=ovn_controller, org.label-schema.schema-version=1.0, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': '38.102.83.66:5001/podified-master-centos10/openstack-ovn-controller:watcher_latest', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 10 Base Image, tcib_build_tag=watcher_latest, io.buildah.version=1.41.4, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251007, org.label-schema.vendor=CentOS)
Oct 09 16:06:59 compute-0 podman[127775]: time="2025-10-09T16:06:59Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Oct 09 16:06:59 compute-0 podman[127775]: @ - - [09/Oct/2025:16:06:59 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 19526 "" "Go-http-client/1.1"
Oct 09 16:06:59 compute-0 podman[127775]: @ - - [09/Oct/2025:16:06:59 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 3004 "" "Go-http-client/1.1"
Oct 09 16:07:00 compute-0 nova_compute[117331]: 2025-10-09 16:07:00.307 2 DEBUG oslo_service.periodic_task [None req-92a13bed-c081-4c7b-87f3-bafebefb79f2 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.12/site-packages/oslo_service/periodic_task.py:210
Oct 09 16:07:00 compute-0 nova_compute[117331]: 2025-10-09 16:07:00.307 2 DEBUG oslo_service.periodic_task [None req-92a13bed-c081-4c7b-87f3-bafebefb79f2 - - - - - -] Running periodic task ComputeManager._run_pending_deletes run_periodic_tasks /usr/lib/python3.12/site-packages/oslo_service/periodic_task.py:210
Oct 09 16:07:00 compute-0 nova_compute[117331]: 2025-10-09 16:07:00.307 2 DEBUG nova.compute.manager [None req-92a13bed-c081-4c7b-87f3-bafebefb79f2 - - - - - -] Cleaning up deleted instances _run_pending_deletes /usr/lib/python3.12/site-packages/nova/compute/manager.py:11909
Oct 09 16:07:00 compute-0 nova_compute[117331]: 2025-10-09 16:07:00.816 2 DEBUG nova.compute.manager [None req-92a13bed-c081-4c7b-87f3-bafebefb79f2 - - - - - -] There are 0 instances to clean _run_pending_deletes /usr/lib/python3.12/site-packages/nova/compute/manager.py:11918
Oct 09 16:07:00 compute-0 nova_compute[117331]: 2025-10-09 16:07:00.816 2 DEBUG oslo_service.periodic_task [None req-92a13bed-c081-4c7b-87f3-bafebefb79f2 - - - - - -] Running periodic task ComputeManager._cleanup_incomplete_migrations run_periodic_tasks /usr/lib/python3.12/site-packages/oslo_service/periodic_task.py:210
Oct 09 16:07:00 compute-0 nova_compute[117331]: 2025-10-09 16:07:00.816 2 DEBUG nova.compute.manager [None req-92a13bed-c081-4c7b-87f3-bafebefb79f2 - - - - - -] Cleaning up deleted instances with incomplete migration  _cleanup_incomplete_migrations /usr/lib/python3.12/site-packages/nova/compute/manager.py:11947
Oct 09 16:07:01 compute-0 nova_compute[117331]: 2025-10-09 16:07:01.328 2 DEBUG oslo_service.periodic_task [None req-92a13bed-c081-4c7b-87f3-bafebefb79f2 - - - - - -] Running periodic task ComputeManager._cleanup_expired_console_auth_tokens run_periodic_tasks /usr/lib/python3.12/site-packages/oslo_service/periodic_task.py:210
Oct 09 16:07:01 compute-0 openstack_network_exporter[129925]: ERROR   16:07:01 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Oct 09 16:07:01 compute-0 openstack_network_exporter[129925]: ERROR   16:07:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Oct 09 16:07:01 compute-0 openstack_network_exporter[129925]: ERROR   16:07:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Oct 09 16:07:01 compute-0 openstack_network_exporter[129925]: ERROR   16:07:01 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Oct 09 16:07:01 compute-0 openstack_network_exporter[129925]: 
Oct 09 16:07:01 compute-0 openstack_network_exporter[129925]: ERROR   16:07:01 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Oct 09 16:07:01 compute-0 openstack_network_exporter[129925]: 
Oct 09 16:07:02 compute-0 sshd-session[139849]: Invalid user  from 134.199.199.215 port 36392
Oct 09 16:07:02 compute-0 podman[139851]: 2025-10-09 16:07:02.866589524 +0000 UTC m=+0.082557943 container health_status 270830b5167cafbab735d8118980f26987e673cd0fa464dd2202394830bcca66 (image=38.102.83.66:5001/podified-master-centos10/openstack-multipathd:watcher_latest, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251007, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=watcher_latest, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': '38.102.83.66:5001/podified-master-centos10/openstack-multipathd:watcher_latest', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, io.buildah.version=1.41.4, managed_by=edpm_ansible, config_id=multipathd, container_name=multipathd)
Oct 09 16:07:03 compute-0 nova_compute[117331]: 2025-10-09 16:07:03.892 2 DEBUG oslo_service.periodic_task [None req-92a13bed-c081-4c7b-87f3-bafebefb79f2 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.12/site-packages/oslo_service/periodic_task.py:210
Oct 09 16:07:03 compute-0 nova_compute[117331]: 2025-10-09 16:07:03.892 2 DEBUG oslo_service.periodic_task [None req-92a13bed-c081-4c7b-87f3-bafebefb79f2 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.12/site-packages/oslo_service/periodic_task.py:210
Oct 09 16:07:04 compute-0 nova_compute[117331]: 2025-10-09 16:07:04.306 2 DEBUG oslo_service.periodic_task [None req-92a13bed-c081-4c7b-87f3-bafebefb79f2 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.12/site-packages/oslo_service/periodic_task.py:210
Oct 09 16:07:04 compute-0 nova_compute[117331]: 2025-10-09 16:07:04.307 2 DEBUG oslo_service.periodic_task [None req-92a13bed-c081-4c7b-87f3-bafebefb79f2 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.12/site-packages/oslo_service/periodic_task.py:210
Oct 09 16:07:04 compute-0 nova_compute[117331]: 2025-10-09 16:07:04.307 2 DEBUG oslo_service.periodic_task [None req-92a13bed-c081-4c7b-87f3-bafebefb79f2 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.12/site-packages/oslo_service/periodic_task.py:210
Oct 09 16:07:04 compute-0 nova_compute[117331]: 2025-10-09 16:07:04.307 2 DEBUG nova.compute.manager [None req-92a13bed-c081-4c7b-87f3-bafebefb79f2 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.12/site-packages/nova/compute/manager.py:11228
Oct 09 16:07:04 compute-0 nova_compute[117331]: 2025-10-09 16:07:04.307 2 DEBUG oslo_service.periodic_task [None req-92a13bed-c081-4c7b-87f3-bafebefb79f2 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.12/site-packages/oslo_service/periodic_task.py:210
Oct 09 16:07:04 compute-0 nova_compute[117331]: 2025-10-09 16:07:04.823 2 DEBUG oslo_concurrency.lockutils [None req-92a13bed-c081-4c7b-87f3-bafebefb79f2 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:405
Oct 09 16:07:04 compute-0 nova_compute[117331]: 2025-10-09 16:07:04.824 2 DEBUG oslo_concurrency.lockutils [None req-92a13bed-c081-4c7b-87f3-bafebefb79f2 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:410
Oct 09 16:07:04 compute-0 nova_compute[117331]: 2025-10-09 16:07:04.824 2 DEBUG oslo_concurrency.lockutils [None req-92a13bed-c081-4c7b-87f3-bafebefb79f2 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:424
Oct 09 16:07:04 compute-0 nova_compute[117331]: 2025-10-09 16:07:04.824 2 DEBUG nova.compute.resource_tracker [None req-92a13bed-c081-4c7b-87f3-bafebefb79f2 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.12/site-packages/nova/compute/resource_tracker.py:937
Oct 09 16:07:04 compute-0 nova_compute[117331]: 2025-10-09 16:07:04.954 2 WARNING nova.virt.libvirt.driver [None req-92a13bed-c081-4c7b-87f3-bafebefb79f2 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Oct 09 16:07:04 compute-0 nova_compute[117331]: 2025-10-09 16:07:04.955 2 DEBUG oslo_concurrency.processutils [None req-92a13bed-c081-4c7b-87f3-bafebefb79f2 - - - - - -] Running cmd (subprocess): env LANG=C uptime execute /usr/lib/python3.12/site-packages/oslo_concurrency/processutils.py:349
Oct 09 16:07:04 compute-0 nova_compute[117331]: 2025-10-09 16:07:04.969 2 DEBUG oslo_concurrency.processutils [None req-92a13bed-c081-4c7b-87f3-bafebefb79f2 - - - - - -] CMD "env LANG=C uptime" returned: 0 in 0.015s execute /usr/lib/python3.12/site-packages/oslo_concurrency/processutils.py:372
Oct 09 16:07:04 compute-0 nova_compute[117331]: 2025-10-09 16:07:04.970 2 DEBUG nova.compute.resource_tracker [None req-92a13bed-c081-4c7b-87f3-bafebefb79f2 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=6381MB free_disk=73.30526351928711GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": 
"label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.12/site-packages/nova/compute/resource_tracker.py:1136
Oct 09 16:07:04 compute-0 nova_compute[117331]: 2025-10-09 16:07:04.970 2 DEBUG oslo_concurrency.lockutils [None req-92a13bed-c081-4c7b-87f3-bafebefb79f2 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:405
Oct 09 16:07:04 compute-0 nova_compute[117331]: 2025-10-09 16:07:04.970 2 DEBUG oslo_concurrency.lockutils [None req-92a13bed-c081-4c7b-87f3-bafebefb79f2 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:410
Oct 09 16:07:06 compute-0 nova_compute[117331]: 2025-10-09 16:07:06.016 2 DEBUG nova.compute.resource_tracker [None req-92a13bed-c081-4c7b-87f3-bafebefb79f2 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.12/site-packages/nova/compute/resource_tracker.py:1159
Oct 09 16:07:06 compute-0 nova_compute[117331]: 2025-10-09 16:07:06.016 2 DEBUG nova.compute.resource_tracker [None req-92a13bed-c081-4c7b-87f3-bafebefb79f2 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=79GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] stats={'failed_builds': '0', 'uptime': ' 16:07:04 up 16 min,  0 user,  load average: 0.20, 0.49, 0.41\n'} _report_final_resource_view /usr/lib/python3.12/site-packages/nova/compute/resource_tracker.py:1168
Oct 09 16:07:06 compute-0 nova_compute[117331]: 2025-10-09 16:07:06.038 2 DEBUG nova.compute.provider_tree [None req-92a13bed-c081-4c7b-87f3-bafebefb79f2 - - - - - -] Inventory has not changed in ProviderTree for provider: 593051b8-2000-437f-a915-2616fc8b1671 update_inventory /usr/lib/python3.12/site-packages/nova/compute/provider_tree.py:180
Oct 09 16:07:06 compute-0 nova_compute[117331]: 2025-10-09 16:07:06.549 2 DEBUG nova.scheduler.client.report [None req-92a13bed-c081-4c7b-87f3-bafebefb79f2 - - - - - -] Inventory has not changed for provider 593051b8-2000-437f-a915-2616fc8b1671 based on inventory data: {'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 79, 'reserved': 0, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.12/site-packages/nova/scheduler/client/report.py:958
Oct 09 16:07:07 compute-0 nova_compute[117331]: 2025-10-09 16:07:07.061 2 DEBUG nova.compute.resource_tracker [None req-92a13bed-c081-4c7b-87f3-bafebefb79f2 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.12/site-packages/nova/compute/resource_tracker.py:1097
Oct 09 16:07:07 compute-0 nova_compute[117331]: 2025-10-09 16:07:07.062 2 DEBUG oslo_concurrency.lockutils [None req-92a13bed-c081-4c7b-87f3-bafebefb79f2 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 2.091s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:424
Oct 09 16:07:08 compute-0 nova_compute[117331]: 2025-10-09 16:07:08.062 2 DEBUG oslo_service.periodic_task [None req-92a13bed-c081-4c7b-87f3-bafebefb79f2 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.12/site-packages/oslo_service/periodic_task.py:210
Oct 09 16:07:08 compute-0 podman[139872]: 2025-10-09 16:07:08.8687378 +0000 UTC m=+0.082763100 container health_status 86aa93e887899e54d383b0fc17fb3c5637680d1d8e146a130a557c837f0260fe (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible)
Oct 09 16:07:09 compute-0 sshd-session[139849]: Connection closed by invalid user  134.199.199.215 port 36392 [preauth]
Oct 09 16:07:15 compute-0 podman[139898]: 2025-10-09 16:07:15.816070928 +0000 UTC m=+0.051119257 container health_status 8227728d23f374cb9528be6ddf01dff38d8c792912a17b0d137932a18390b820 (image=38.102.83.66:5001/podified-master-centos10/openstack-iscsid:watcher_latest, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, tcib_build_tag=watcher_latest, org.label-schema.build-date=20251007, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 10 Base Image, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': '38.102.83.66:5001/podified-master-centos10/openstack-iscsid:watcher_latest', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, config_id=iscsid, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, container_name=iscsid, io.buildah.version=1.41.4, maintainer=OpenStack Kubernetes Operator team)
Oct 09 16:07:15 compute-0 podman[139897]: 2025-10-09 16:07:15.816074278 +0000 UTC m=+0.052439028 container health_status 0e966c7a283689daf23c5db5017f23f685ee836319240188a9f70ddd246563c5 (image=38.102.83.66:5001/podified-master-centos10/openstack-neutron-metadata-agent-ovn:watcher_latest, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, io.buildah.version=1.41.4, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': '38.102.83.66:5001/podified-master-centos10/openstack-neutron-metadata-agent-ovn:watcher_latest', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, 
org.label-schema.build-date=20251007, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=watcher_latest)
Oct 09 16:07:22 compute-0 podman[139931]: 2025-10-09 16:07:22.809264494 +0000 UTC m=+0.048741099 container health_status 64fa251112d01abb34ec1bb8e22019815ddcaaaa391ce5ec91517147ac5a9994 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., maintainer=Red Hat, Inc., io.openshift.expose-services=, io.openshift.tags=minimal rhel9, container_name=openstack_network_exporter, name=ubi9-minimal, url=https://catalog.redhat.com/en/search?searchType=containers, architecture=x86_64, build-date=2025-08-20T13:12:41, config_id=edpm, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, release=1755695350, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': 
['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, managed_by=edpm_ansible, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., version=9.6, distribution-scope=public, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, vcs-type=git, io.buildah.version=1.33.7, vendor=Red Hat, Inc., com.redhat.component=ubi9-minimal-container)
Oct 09 16:07:24 compute-0 podman[139952]: 2025-10-09 16:07:24.875643568 +0000 UTC m=+0.097177090 container health_status d615e4f24ff62fbb14be4a63e6da568ec5b22309773c1c66103b6b4de64a8a2f (image=38.102.83.66:5001/podified-master-centos10/openstack-ovn-controller:watcher_latest, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=watcher_latest, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': '38.102.83.66:5001/podified-master-centos10/openstack-ovn-controller:watcher_latest', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.build-date=20251007, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=ovn_controller, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, io.buildah.version=1.41.4)
Oct 09 16:07:29 compute-0 podman[127775]: time="2025-10-09T16:07:29Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Oct 09 16:07:29 compute-0 podman[127775]: @ - - [09/Oct/2025:16:07:29 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 19526 "" "Go-http-client/1.1"
Oct 09 16:07:29 compute-0 podman[127775]: @ - - [09/Oct/2025:16:07:29 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 3006 "" "Go-http-client/1.1"
Oct 09 16:07:31 compute-0 openstack_network_exporter[129925]: ERROR   16:07:31 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Oct 09 16:07:31 compute-0 openstack_network_exporter[129925]: ERROR   16:07:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Oct 09 16:07:31 compute-0 openstack_network_exporter[129925]: ERROR   16:07:31 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Oct 09 16:07:31 compute-0 openstack_network_exporter[129925]: 
Oct 09 16:07:31 compute-0 openstack_network_exporter[129925]: ERROR   16:07:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Oct 09 16:07:31 compute-0 openstack_network_exporter[129925]: ERROR   16:07:31 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Oct 09 16:07:31 compute-0 openstack_network_exporter[129925]: 
Oct 09 16:07:33 compute-0 podman[139980]: 2025-10-09 16:07:33.82357501 +0000 UTC m=+0.056063215 container health_status 270830b5167cafbab735d8118980f26987e673cd0fa464dd2202394830bcca66 (image=38.102.83.66:5001/podified-master-centos10/openstack-multipathd:watcher_latest, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=watcher_latest, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': '38.102.83.66:5001/podified-master-centos10/openstack-multipathd:watcher_latest', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, io.buildah.version=1.41.4, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.build-date=20251007, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_id=multipathd, container_name=multipathd)
Oct 09 16:07:35 compute-0 ovn_metadata_agent[28608]: 2025-10-09 16:07:35.275 28613 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:405
Oct 09 16:07:35 compute-0 ovn_metadata_agent[28608]: 2025-10-09 16:07:35.275 28613 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.000s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:410
Oct 09 16:07:35 compute-0 ovn_metadata_agent[28608]: 2025-10-09 16:07:35.275 28613 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:424
Oct 09 16:07:39 compute-0 podman[140002]: 2025-10-09 16:07:39.834484134 +0000 UTC m=+0.071547130 container health_status 86aa93e887899e54d383b0fc17fb3c5637680d1d8e146a130a557c837f0260fe (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible)
Oct 09 16:07:46 compute-0 podman[140025]: 2025-10-09 16:07:46.817190037 +0000 UTC m=+0.046396836 container health_status 0e966c7a283689daf23c5db5017f23f685ee836319240188a9f70ddd246563c5 (image=38.102.83.66:5001/podified-master-centos10/openstack-neutron-metadata-agent-ovn:watcher_latest, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=watcher_latest, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': '38.102.83.66:5001/podified-master-centos10/openstack-neutron-metadata-agent-ovn:watcher_latest', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.4, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, 
org.label-schema.build-date=20251007, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 10 Base Image, container_name=ovn_metadata_agent, org.label-schema.schema-version=1.0, config_id=ovn_metadata_agent, managed_by=edpm_ansible)
Oct 09 16:07:46 compute-0 podman[140026]: 2025-10-09 16:07:46.828359134 +0000 UTC m=+0.047367146 container health_status 8227728d23f374cb9528be6ddf01dff38d8c792912a17b0d137932a18390b820 (image=38.102.83.66:5001/podified-master-centos10/openstack-iscsid:watcher_latest, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 10 Base Image, container_name=iscsid, org.label-schema.schema-version=1.0, tcib_build_tag=watcher_latest, config_id=iscsid, io.buildah.version=1.41.4, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': '38.102.83.66:5001/podified-master-centos10/openstack-iscsid:watcher_latest', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251007, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, managed_by=edpm_ansible)
Oct 09 16:07:51 compute-0 sshd-session[140064]: Invalid user testuser from 134.199.199.215 port 42504
Oct 09 16:07:51 compute-0 sshd-session[140064]: pam_unix(sshd:auth): check pass; user unknown
Oct 09 16:07:51 compute-0 sshd-session[140064]: pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=134.199.199.215
Oct 09 16:07:53 compute-0 sshd-session[140064]: Failed password for invalid user testuser from 134.199.199.215 port 42504 ssh2
Oct 09 16:07:53 compute-0 podman[140066]: 2025-10-09 16:07:53.820462917 +0000 UTC m=+0.053552043 container health_status 64fa251112d01abb34ec1bb8e22019815ddcaaaa391ce5ec91517147ac5a9994 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, version=9.6, distribution-scope=public, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., url=https://catalog.redhat.com/en/search?searchType=containers, maintainer=Red Hat, Inc., description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.buildah.version=1.33.7, build-date=2025-08-20T13:12:41, com.redhat.component=ubi9-minimal-container, io.openshift.tags=minimal rhel9, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, architecture=x86_64, vcs-type=git, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', 
'/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.expose-services=, release=1755695350, config_id=edpm, managed_by=edpm_ansible, vendor=Red Hat, Inc., container_name=openstack_network_exporter, name=ubi9-minimal)
Oct 09 16:07:54 compute-0 sshd-session[140088]: Invalid user odoo16 from 134.199.199.215 port 42530
Oct 09 16:07:54 compute-0 sshd-session[140088]: pam_unix(sshd:auth): check pass; user unknown
Oct 09 16:07:54 compute-0 sshd-session[140088]: pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=134.199.199.215
Oct 09 16:07:55 compute-0 sshd-session[140064]: Connection closed by invalid user testuser 134.199.199.215 port 42504 [preauth]
Oct 09 16:07:55 compute-0 podman[140090]: 2025-10-09 16:07:55.889591557 +0000 UTC m=+0.122637074 container health_status d615e4f24ff62fbb14be4a63e6da568ec5b22309773c1c66103b6b4de64a8a2f (image=38.102.83.66:5001/podified-master-centos10/openstack-ovn-controller:watcher_latest, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 10 Base Image, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': '38.102.83.66:5001/podified-master-centos10/openstack-ovn-controller:watcher_latest', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true, container_name=ovn_controller, io.buildah.version=1.41.4, org.label-schema.build-date=20251007, org.label-schema.schema-version=1.0, tcib_build_tag=watcher_latest, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team)
Oct 09 16:07:56 compute-0 sshd-session[140088]: Failed password for invalid user odoo16 from 134.199.199.215 port 42530 ssh2
Oct 09 16:07:57 compute-0 sshd-session[140088]: Connection closed by invalid user odoo16 134.199.199.215 port 42530 [preauth]
Oct 09 16:07:58 compute-0 sshd-session[140119]: Invalid user admin from 134.199.199.215 port 60496
Oct 09 16:07:58 compute-0 sshd-session[140119]: pam_unix(sshd:auth): check pass; user unknown
Oct 09 16:07:58 compute-0 sshd-session[140119]: pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=134.199.199.215
Oct 09 16:07:59 compute-0 podman[127775]: time="2025-10-09T16:07:59Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Oct 09 16:07:59 compute-0 podman[127775]: @ - - [09/Oct/2025:16:07:59 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 19526 "" "Go-http-client/1.1"
Oct 09 16:07:59 compute-0 podman[127775]: @ - - [09/Oct/2025:16:07:59 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 3006 "" "Go-http-client/1.1"
Oct 09 16:08:00 compute-0 nova_compute[117331]: 2025-10-09 16:08:00.307 2 DEBUG oslo_service.periodic_task [None req-92a13bed-c081-4c7b-87f3-bafebefb79f2 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.12/site-packages/oslo_service/periodic_task.py:210
Oct 09 16:08:00 compute-0 sshd-session[140119]: Failed password for invalid user admin from 134.199.199.215 port 60496 ssh2
Oct 09 16:08:00 compute-0 sshd-session[140119]: Connection closed by invalid user admin 134.199.199.215 port 60496 [preauth]
Oct 09 16:08:01 compute-0 openstack_network_exporter[129925]: ERROR   16:08:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Oct 09 16:08:01 compute-0 openstack_network_exporter[129925]: ERROR   16:08:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Oct 09 16:08:01 compute-0 openstack_network_exporter[129925]: ERROR   16:08:01 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Oct 09 16:08:01 compute-0 openstack_network_exporter[129925]: ERROR   16:08:01 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Oct 09 16:08:01 compute-0 openstack_network_exporter[129925]: 
Oct 09 16:08:01 compute-0 openstack_network_exporter[129925]: ERROR   16:08:01 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Oct 09 16:08:01 compute-0 openstack_network_exporter[129925]: 
Oct 09 16:08:01 compute-0 sshd-session[140121]: Invalid user rocky from 134.199.199.215 port 60512
Oct 09 16:08:01 compute-0 sshd-session[140121]: pam_unix(sshd:auth): check pass; user unknown
Oct 09 16:08:01 compute-0 sshd-session[140121]: pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=134.199.199.215
Oct 09 16:08:03 compute-0 nova_compute[117331]: 2025-10-09 16:08:03.302 2 DEBUG oslo_service.periodic_task [None req-92a13bed-c081-4c7b-87f3-bafebefb79f2 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.12/site-packages/oslo_service/periodic_task.py:210
Oct 09 16:08:03 compute-0 nova_compute[117331]: 2025-10-09 16:08:03.306 2 DEBUG oslo_service.periodic_task [None req-92a13bed-c081-4c7b-87f3-bafebefb79f2 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.12/site-packages/oslo_service/periodic_task.py:210
Oct 09 16:08:03 compute-0 sshd-session[140121]: Failed password for invalid user rocky from 134.199.199.215 port 60512 ssh2
Oct 09 16:08:04 compute-0 sshd-session[140121]: Connection closed by invalid user rocky 134.199.199.215 port 60512 [preauth]
Oct 09 16:08:04 compute-0 nova_compute[117331]: 2025-10-09 16:08:04.302 2 DEBUG oslo_service.periodic_task [None req-92a13bed-c081-4c7b-87f3-bafebefb79f2 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.12/site-packages/oslo_service/periodic_task.py:210
Oct 09 16:08:04 compute-0 podman[140123]: 2025-10-09 16:08:04.817916568 +0000 UTC m=+0.052009675 container health_status 270830b5167cafbab735d8118980f26987e673cd0fa464dd2202394830bcca66 (image=38.102.83.66:5001/podified-master-centos10/openstack-multipathd:watcher_latest, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 10 Base Image, managed_by=edpm_ansible, org.label-schema.build-date=20251007, org.label-schema.vendor=CentOS, config_id=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_build_tag=watcher_latest, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': '38.102.83.66:5001/podified-master-centos10/openstack-multipathd:watcher_latest', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd, io.buildah.version=1.41.4)
Oct 09 16:08:04 compute-0 nova_compute[117331]: 2025-10-09 16:08:04.850 2 DEBUG oslo_service.periodic_task [None req-92a13bed-c081-4c7b-87f3-bafebefb79f2 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.12/site-packages/oslo_service/periodic_task.py:210
Oct 09 16:08:04 compute-0 nova_compute[117331]: 2025-10-09 16:08:04.850 2 DEBUG oslo_service.periodic_task [None req-92a13bed-c081-4c7b-87f3-bafebefb79f2 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.12/site-packages/oslo_service/periodic_task.py:210
Oct 09 16:08:04 compute-0 nova_compute[117331]: 2025-10-09 16:08:04.850 2 DEBUG nova.compute.manager [None req-92a13bed-c081-4c7b-87f3-bafebefb79f2 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.12/site-packages/nova/compute/manager.py:11228
Oct 09 16:08:05 compute-0 unix_chkpwd[140147]: password check failed for user (root)
Oct 09 16:08:05 compute-0 sshd-session[140145]: pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=134.199.199.215  user=root
Oct 09 16:08:05 compute-0 nova_compute[117331]: 2025-10-09 16:08:05.306 2 DEBUG oslo_service.periodic_task [None req-92a13bed-c081-4c7b-87f3-bafebefb79f2 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.12/site-packages/oslo_service/periodic_task.py:210
Oct 09 16:08:05 compute-0 nova_compute[117331]: 2025-10-09 16:08:05.819 2 DEBUG oslo_concurrency.lockutils [None req-92a13bed-c081-4c7b-87f3-bafebefb79f2 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:405
Oct 09 16:08:05 compute-0 nova_compute[117331]: 2025-10-09 16:08:05.820 2 DEBUG oslo_concurrency.lockutils [None req-92a13bed-c081-4c7b-87f3-bafebefb79f2 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:410
Oct 09 16:08:05 compute-0 nova_compute[117331]: 2025-10-09 16:08:05.820 2 DEBUG oslo_concurrency.lockutils [None req-92a13bed-c081-4c7b-87f3-bafebefb79f2 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:424
Oct 09 16:08:05 compute-0 nova_compute[117331]: 2025-10-09 16:08:05.820 2 DEBUG nova.compute.resource_tracker [None req-92a13bed-c081-4c7b-87f3-bafebefb79f2 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.12/site-packages/nova/compute/resource_tracker.py:937
Oct 09 16:08:05 compute-0 nova_compute[117331]: 2025-10-09 16:08:05.949 2 WARNING nova.virt.libvirt.driver [None req-92a13bed-c081-4c7b-87f3-bafebefb79f2 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Oct 09 16:08:05 compute-0 nova_compute[117331]: 2025-10-09 16:08:05.950 2 DEBUG oslo_concurrency.processutils [None req-92a13bed-c081-4c7b-87f3-bafebefb79f2 - - - - - -] Running cmd (subprocess): env LANG=C uptime execute /usr/lib/python3.12/site-packages/oslo_concurrency/processutils.py:349
Oct 09 16:08:05 compute-0 nova_compute[117331]: 2025-10-09 16:08:05.962 2 DEBUG oslo_concurrency.processutils [None req-92a13bed-c081-4c7b-87f3-bafebefb79f2 - - - - - -] CMD "env LANG=C uptime" returned: 0 in 0.013s execute /usr/lib/python3.12/site-packages/oslo_concurrency/processutils.py:372
Oct 09 16:08:05 compute-0 nova_compute[117331]: 2025-10-09 16:08:05.963 2 DEBUG nova.compute.resource_tracker [None req-92a13bed-c081-4c7b-87f3-bafebefb79f2 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=6388MB free_disk=73.30543899536133GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": 
"label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.12/site-packages/nova/compute/resource_tracker.py:1136
Oct 09 16:08:05 compute-0 nova_compute[117331]: 2025-10-09 16:08:05.963 2 DEBUG oslo_concurrency.lockutils [None req-92a13bed-c081-4c7b-87f3-bafebefb79f2 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:405
Oct 09 16:08:05 compute-0 nova_compute[117331]: 2025-10-09 16:08:05.963 2 DEBUG oslo_concurrency.lockutils [None req-92a13bed-c081-4c7b-87f3-bafebefb79f2 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:410
Oct 09 16:08:06 compute-0 sshd-session[140145]: Failed password for root from 134.199.199.215 port 60528 ssh2
Oct 09 16:08:07 compute-0 nova_compute[117331]: 2025-10-09 16:08:07.045 2 DEBUG nova.compute.resource_tracker [None req-92a13bed-c081-4c7b-87f3-bafebefb79f2 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.12/site-packages/nova/compute/resource_tracker.py:1159
Oct 09 16:08:07 compute-0 nova_compute[117331]: 2025-10-09 16:08:07.045 2 DEBUG nova.compute.resource_tracker [None req-92a13bed-c081-4c7b-87f3-bafebefb79f2 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=79GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] stats={'failed_builds': '0', 'uptime': ' 16:08:05 up 17 min,  0 user,  load average: 0.07, 0.40, 0.38\n'} _report_final_resource_view /usr/lib/python3.12/site-packages/nova/compute/resource_tracker.py:1168
Oct 09 16:08:07 compute-0 nova_compute[117331]: 2025-10-09 16:08:07.078 2 DEBUG nova.scheduler.client.report [None req-92a13bed-c081-4c7b-87f3-bafebefb79f2 - - - - - -] Refreshing inventories for resource provider 593051b8-2000-437f-a915-2616fc8b1671 _refresh_associations /usr/lib/python3.12/site-packages/nova/scheduler/client/report.py:822
Oct 09 16:08:07 compute-0 nova_compute[117331]: 2025-10-09 16:08:07.112 2 DEBUG nova.scheduler.client.report [None req-92a13bed-c081-4c7b-87f3-bafebefb79f2 - - - - - -] Updating ProviderTree inventory for provider 593051b8-2000-437f-a915-2616fc8b1671 from _refresh_and_get_inventory using data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 79, 'reserved': 0, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} _refresh_and_get_inventory /usr/lib/python3.12/site-packages/nova/scheduler/client/report.py:786
Oct 09 16:08:07 compute-0 nova_compute[117331]: 2025-10-09 16:08:07.112 2 DEBUG nova.compute.provider_tree [None req-92a13bed-c081-4c7b-87f3-bafebefb79f2 - - - - - -] Updating inventory in ProviderTree for provider 593051b8-2000-437f-a915-2616fc8b1671 with inventory: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 79, 'reserved': 0, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} update_inventory /usr/lib/python3.12/site-packages/nova/compute/provider_tree.py:176
Oct 09 16:08:07 compute-0 nova_compute[117331]: 2025-10-09 16:08:07.127 2 DEBUG nova.scheduler.client.report [None req-92a13bed-c081-4c7b-87f3-bafebefb79f2 - - - - - -] Refreshing aggregate associations for resource provider 593051b8-2000-437f-a915-2616fc8b1671, aggregates: None _refresh_associations /usr/lib/python3.12/site-packages/nova/scheduler/client/report.py:831
Oct 09 16:08:07 compute-0 nova_compute[117331]: 2025-10-09 16:08:07.144 2 DEBUG nova.scheduler.client.report [None req-92a13bed-c081-4c7b-87f3-bafebefb79f2 - - - - - -] Refreshing trait associations for resource provider 593051b8-2000-437f-a915-2616fc8b1671, traits: HW_CPU_X86_AVX2,HW_CPU_X86_BMI,COMPUTE_SOUND_MODEL_ICH6,COMPUTE_IMAGE_TYPE_RAW,COMPUTE_ARCH_X86_64,COMPUTE_IMAGE_TYPE_AMI,HW_CPU_X86_SSSE3,COMPUTE_SECURITY_TPM_CRB,COMPUTE_GRAPHICS_MODEL_CIRRUS,COMPUTE_GRAPHICS_MODEL_NONE,COMPUTE_VIOMMU_MODEL_VIRTIO,HW_CPU_X86_AESNI,COMPUTE_NET_VIF_MODEL_VMXNET3,COMPUTE_SOUND_MODEL_PCSPK,COMPUTE_VOLUME_ATTACH_WITH_TAG,COMPUTE_SOUND_MODEL_ES1370,COMPUTE_NODE,COMPUTE_VIOMMU_MODEL_AUTO,COMPUTE_SOUND_MODEL_ICH9,COMPUTE_IMAGE_TYPE_QCOW2,COMPUTE_VOLUME_MULTI_ATTACH,COMPUTE_SECURITY_UEFI_SECURE_BOOT,COMPUTE_STORAGE_BUS_USB,COMPUTE_GRAPHICS_MODEL_BOCHS,COMPUTE_SECURITY_TPM_TIS,HW_ARCH_X86_64,COMPUTE_SOUND_MODEL_VIRTIO,HW_CPU_X86_BMI2,COMPUTE_STORAGE_BUS_FDC,COMPUTE_SOUND_MODEL_USB,COMPUTE_STORAGE_BUS_SATA,COMPUTE_SOUND_MODEL_AC97,COMPUTE_NET_VIF_MODEL_SPAPR_VLAN,COMPUTE_IMAGE_TYPE_ISO,COMPUTE_VIOMMU_MODEL_INTEL,COMPUTE_NET_VIF_MODEL_PCNET,HW_CPU_X86_F16C,COMPUTE_NET_VIF_MODEL_IGB,COMPUTE_ACCELERATORS,COMPUTE_NET_VIF_MODEL_RTL8139,HW_CPU_X86_AMD_SVM,COMPUTE_NET_VIF_MODEL_E1000E,HW_CPU_X86_CLMUL,COMPUTE_NET_VIF_MODEL_NE2K_PCI,HW_CPU_X86_SHA,COMPUTE_NET_VIF_MODEL_VIRTIO,HW_CPU_X86_AVX,COMPUTE_ADDRESS_SPACE_EMULATED,COMPUTE_NET_VIF_MODEL_E1000,COMPUTE_IMAGE_TYPE_ARI,COMPUTE_NET_ATTACH_INTERFACE,HW_CPU_X86_SSE4A,COMPUTE_TRUSTED_CERTS,COMPUTE_GRAPHICS_MODEL_VGA,HW_CPU_X86_SSE2,COMPUTE_GRAPHICS_MODEL_VIRTIO,COMPUTE_STORAGE_VIRTIO_FS,COMPUTE_SECURITY_STATELESS_FIRMWARE,COMPUTE_NET_ATTACH_INTERFACE_WITH_TAG,HW_CPU_X86_SSE,HW_CPU_X86_SVM,COMPUTE_STORAGE_BUS_VIRTIO,COMPUTE_NET_VIRTIO_PACKED,COMPUTE_SECURITY_TPM_2_0,HW_CPU_X86_MMX,HW_CPU_X86_FMA3,COMPUTE_VOLUME_EXTEND,COMPUTE_DEVICE_TAGGING,HW_CPU_X86_SSE42,COMPUTE_SOCKET_PCI_NUMA_AFFINITY,COMPUTE_IMAGE_TYPE_AKI,COMPUTE_STORAGE_BUS_SCSI,COM
PUTE_ADDRESS_SPACE_PASSTHROUGH,COMPUTE_RESCUE_BFV,COMPUTE_SOUND_MODEL_SB16,HW_CPU_X86_SSE41,COMPUTE_STORAGE_BUS_IDE,HW_CPU_X86_ABM _refresh_associations /usr/lib/python3.12/site-packages/nova/scheduler/client/report.py:843
Oct 09 16:08:07 compute-0 nova_compute[117331]: 2025-10-09 16:08:07.159 2 DEBUG nova.compute.provider_tree [None req-92a13bed-c081-4c7b-87f3-bafebefb79f2 - - - - - -] Inventory has not changed in ProviderTree for provider: 593051b8-2000-437f-a915-2616fc8b1671 update_inventory /usr/lib/python3.12/site-packages/nova/compute/provider_tree.py:180
Oct 09 16:08:07 compute-0 sshd-session[140145]: Connection closed by authenticating user root 134.199.199.215 port 60528 [preauth]
Oct 09 16:08:07 compute-0 nova_compute[117331]: 2025-10-09 16:08:07.672 2 DEBUG nova.scheduler.client.report [None req-92a13bed-c081-4c7b-87f3-bafebefb79f2 - - - - - -] Inventory has not changed for provider 593051b8-2000-437f-a915-2616fc8b1671 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 79, 'reserved': 0, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.12/site-packages/nova/scheduler/client/report.py:958
Oct 09 16:08:08 compute-0 nova_compute[117331]: 2025-10-09 16:08:08.186 2 DEBUG nova.compute.resource_tracker [None req-92a13bed-c081-4c7b-87f3-bafebefb79f2 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.12/site-packages/nova/compute/resource_tracker.py:1097
Oct 09 16:08:08 compute-0 nova_compute[117331]: 2025-10-09 16:08:08.187 2 DEBUG oslo_concurrency.lockutils [None req-92a13bed-c081-4c7b-87f3-bafebefb79f2 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 2.224s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:424
Oct 09 16:08:08 compute-0 sshd-session[140149]: Invalid user guest from 134.199.199.215 port 33560
Oct 09 16:08:08 compute-0 sshd-session[140149]: pam_unix(sshd:auth): check pass; user unknown
Oct 09 16:08:08 compute-0 sshd-session[140149]: pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=134.199.199.215
Oct 09 16:08:09 compute-0 nova_compute[117331]: 2025-10-09 16:08:09.187 2 DEBUG oslo_service.periodic_task [None req-92a13bed-c081-4c7b-87f3-bafebefb79f2 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.12/site-packages/oslo_service/periodic_task.py:210
Oct 09 16:08:09 compute-0 nova_compute[117331]: 2025-10-09 16:08:09.188 2 DEBUG oslo_service.periodic_task [None req-92a13bed-c081-4c7b-87f3-bafebefb79f2 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.12/site-packages/oslo_service/periodic_task.py:210
Oct 09 16:08:10 compute-0 sshd-session[140149]: Failed password for invalid user guest from 134.199.199.215 port 33560 ssh2
Oct 09 16:08:10 compute-0 podman[140151]: 2025-10-09 16:08:10.808052079 +0000 UTC m=+0.045659192 container health_status 86aa93e887899e54d383b0fc17fb3c5637680d1d8e146a130a557c837f0260fe (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter)
Oct 09 16:08:12 compute-0 sshd-session[140176]: Invalid user dolphinscheduler from 134.199.199.215 port 33562
Oct 09 16:08:12 compute-0 sshd-session[140176]: pam_unix(sshd:auth): check pass; user unknown
Oct 09 16:08:12 compute-0 sshd-session[140176]: pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=134.199.199.215
Oct 09 16:08:12 compute-0 sshd-session[140149]: Connection closed by invalid user guest 134.199.199.215 port 33560 [preauth]
Oct 09 16:08:13 compute-0 sshd-session[140176]: Failed password for invalid user dolphinscheduler from 134.199.199.215 port 33562 ssh2
Oct 09 16:08:13 compute-0 sshd-session[140176]: Connection closed by invalid user dolphinscheduler 134.199.199.215 port 33562 [preauth]
Oct 09 16:08:15 compute-0 sshd[52903]: drop connection #0 from [134.199.199.215]:33574 on [38.102.83.110]:22 penalty: failed authentication
Oct 09 16:08:17 compute-0 podman[140178]: 2025-10-09 16:08:17.808478069 +0000 UTC m=+0.045570729 container health_status 0e966c7a283689daf23c5db5017f23f685ee836319240188a9f70ddd246563c5 (image=38.102.83.66:5001/podified-master-centos10/openstack-neutron-metadata-agent-ovn:watcher_latest, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.4, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, container_name=ovn_metadata_agent, config_id=ovn_metadata_agent, tcib_build_tag=watcher_latest, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': '38.102.83.66:5001/podified-master-centos10/openstack-neutron-metadata-agent-ovn:watcher_latest', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251007, org.label-schema.license=GPLv2)
Oct 09 16:08:17 compute-0 podman[140179]: 2025-10-09 16:08:17.812122275 +0000 UTC m=+0.046013373 container health_status 8227728d23f374cb9528be6ddf01dff38d8c792912a17b0d137932a18390b820 (image=38.102.83.66:5001/podified-master-centos10/openstack-iscsid:watcher_latest, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, container_name=iscsid, io.buildah.version=1.41.4, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=watcher_latest, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 10 Base Image, config_id=iscsid, org.label-schema.build-date=20251007, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': '38.102.83.66:5001/podified-master-centos10/openstack-iscsid:watcher_latest', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, org.label-schema.license=GPLv2)
Oct 09 16:08:18 compute-0 sshd[52903]: drop connection #0 from [134.199.199.215]:55276 on [38.102.83.110]:22 penalty: failed authentication
Oct 09 16:08:22 compute-0 sshd[52903]: drop connection #0 from [134.199.199.215]:55280 on [38.102.83.110]:22 penalty: failed authentication
Oct 09 16:08:24 compute-0 podman[140216]: 2025-10-09 16:08:24.872863722 +0000 UTC m=+0.095794525 container health_status 64fa251112d01abb34ec1bb8e22019815ddcaaaa391ce5ec91517147ac5a9994 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, name=ubi9-minimal, architecture=x86_64, com.redhat.component=ubi9-minimal-container, container_name=openstack_network_exporter, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vcs-type=git, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., distribution-scope=public, version=9.6, io.buildah.version=1.33.7, io.openshift.expose-services=, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, io.openshift.tags=minimal rhel9, release=1755695350, url=https://catalog.redhat.com/en/search?searchType=containers, vendor=Red Hat, Inc., build-date=2025-08-20T13:12:41, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, config_id=edpm, managed_by=edpm_ansible, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, maintainer=Red Hat, Inc.)
Oct 09 16:08:25 compute-0 sshd[52903]: drop connection #0 from [134.199.199.215]:55292 on [38.102.83.110]:22 penalty: failed authentication
Oct 09 16:08:26 compute-0 podman[140237]: 2025-10-09 16:08:26.8393605 +0000 UTC m=+0.077446059 container health_status d615e4f24ff62fbb14be4a63e6da568ec5b22309773c1c66103b6b4de64a8a2f (image=38.102.83.66:5001/podified-master-centos10/openstack-ovn-controller:watcher_latest, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': '38.102.83.66:5001/podified-master-centos10/openstack-ovn-controller:watcher_latest', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.4, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 10 Base Image, tcib_managed=true, config_id=ovn_controller, managed_by=edpm_ansible, tcib_build_tag=watcher_latest, container_name=ovn_controller, org.label-schema.build-date=20251007, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Oct 09 16:08:28 compute-0 sshd[52903]: drop connection #0 from [134.199.199.215]:60662 on [38.102.83.110]:22 penalty: failed authentication
Oct 09 16:08:29 compute-0 podman[127775]: time="2025-10-09T16:08:29Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Oct 09 16:08:29 compute-0 podman[127775]: @ - - [09/Oct/2025:16:08:29 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 19526 "" "Go-http-client/1.1"
Oct 09 16:08:29 compute-0 podman[127775]: @ - - [09/Oct/2025:16:08:29 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 3004 "" "Go-http-client/1.1"
Oct 09 16:08:31 compute-0 openstack_network_exporter[129925]: ERROR   16:08:31 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Oct 09 16:08:31 compute-0 openstack_network_exporter[129925]: ERROR   16:08:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Oct 09 16:08:31 compute-0 openstack_network_exporter[129925]: ERROR   16:08:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Oct 09 16:08:31 compute-0 openstack_network_exporter[129925]: ERROR   16:08:31 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Oct 09 16:08:31 compute-0 openstack_network_exporter[129925]: 
Oct 09 16:08:31 compute-0 openstack_network_exporter[129925]: ERROR   16:08:31 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Oct 09 16:08:31 compute-0 openstack_network_exporter[129925]: 
Oct 09 16:08:32 compute-0 sshd-session[140264]: Invalid user dev from 134.199.199.215 port 60686
Oct 09 16:08:32 compute-0 sshd-session[140264]: pam_unix(sshd:auth): check pass; user unknown
Oct 09 16:08:32 compute-0 sshd-session[140264]: pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=134.199.199.215
Oct 09 16:08:34 compute-0 sshd-session[140264]: Failed password for invalid user dev from 134.199.199.215 port 60686 ssh2
Oct 09 16:08:35 compute-0 ovn_metadata_agent[28608]: 2025-10-09 16:08:35.276 28613 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:405
Oct 09 16:08:35 compute-0 ovn_metadata_agent[28608]: 2025-10-09 16:08:35.276 28613 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.000s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:410
Oct 09 16:08:35 compute-0 ovn_metadata_agent[28608]: 2025-10-09 16:08:35.276 28613 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:424
Oct 09 16:08:35 compute-0 sshd-session[140264]: Connection closed by invalid user dev 134.199.199.215 port 60686 [preauth]
Oct 09 16:08:35 compute-0 sshd-session[140267]: Invalid user git from 134.199.199.215 port 60696
Oct 09 16:08:35 compute-0 sshd-session[140267]: pam_unix(sshd:auth): check pass; user unknown
Oct 09 16:08:35 compute-0 sshd-session[140267]: pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=134.199.199.215
Oct 09 16:08:35 compute-0 podman[140269]: 2025-10-09 16:08:35.746144657 +0000 UTC m=+0.046871871 container health_status 270830b5167cafbab735d8118980f26987e673cd0fa464dd2202394830bcca66 (image=38.102.83.66:5001/podified-master-centos10/openstack-multipathd:watcher_latest, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 10 Base Image, container_name=multipathd, io.buildah.version=1.41.4, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, config_id=multipathd, org.label-schema.build-date=20251007, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.schema-version=1.0, tcib_build_tag=watcher_latest, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': '38.102.83.66:5001/podified-master-centos10/openstack-multipathd:watcher_latest', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']})
Oct 09 16:08:37 compute-0 sshd-session[140267]: Failed password for invalid user git from 134.199.199.215 port 60696 ssh2
Oct 09 16:08:37 compute-0 sshd-session[140267]: Connection closed by invalid user git 134.199.199.215 port 60696 [preauth]
Oct 09 16:08:39 compute-0 sshd-session[140289]: Invalid user nvidia from 134.199.199.215 port 49570
Oct 09 16:08:39 compute-0 sshd-session[140289]: pam_unix(sshd:auth): check pass; user unknown
Oct 09 16:08:39 compute-0 sshd-session[140289]: pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=134.199.199.215
Oct 09 16:08:41 compute-0 sshd-session[140289]: Failed password for invalid user nvidia from 134.199.199.215 port 49570 ssh2
Oct 09 16:08:41 compute-0 podman[140291]: 2025-10-09 16:08:41.824221781 +0000 UTC m=+0.057030549 container health_status 86aa93e887899e54d383b0fc17fb3c5637680d1d8e146a130a557c837f0260fe (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter)
Oct 09 16:08:42 compute-0 unix_chkpwd[140318]: password check failed for user (root)
Oct 09 16:08:42 compute-0 sshd-session[140316]: pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=134.199.199.215  user=root
Oct 09 16:08:42 compute-0 sshd-session[140289]: Connection closed by invalid user nvidia 134.199.199.215 port 49570 [preauth]
Oct 09 16:08:44 compute-0 sshd-session[140316]: Failed password for root from 134.199.199.215 port 49572 ssh2
Oct 09 16:08:44 compute-0 sshd-session[140316]: Connection closed by authenticating user root 134.199.199.215 port 49572 [preauth]
Oct 09 16:08:45 compute-0 sshd-session[140319]: Invalid user test from 134.199.199.215 port 49588
Oct 09 16:08:45 compute-0 sshd-session[140319]: pam_unix(sshd:auth): check pass; user unknown
Oct 09 16:08:45 compute-0 sshd-session[140319]: pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=134.199.199.215
Oct 09 16:08:48 compute-0 sshd-session[140319]: Failed password for invalid user test from 134.199.199.215 port 49588 ssh2
Oct 09 16:08:48 compute-0 sshd-session[140319]: Connection closed by invalid user test 134.199.199.215 port 49588 [preauth]
Oct 09 16:08:48 compute-0 podman[140322]: 2025-10-09 16:08:48.836151019 +0000 UTC m=+0.069375638 container health_status 8227728d23f374cb9528be6ddf01dff38d8c792912a17b0d137932a18390b820 (image=38.102.83.66:5001/podified-master-centos10/openstack-iscsid:watcher_latest, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': '38.102.83.66:5001/podified-master-centos10/openstack-iscsid:watcher_latest', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, io.buildah.version=1.41.4, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, config_id=iscsid, container_name=iscsid, org.label-schema.schema-version=1.0, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251007, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.license=GPLv2, tcib_build_tag=watcher_latest)
Oct 09 16:08:48 compute-0 podman[140321]: 2025-10-09 16:08:48.84580708 +0000 UTC m=+0.082674665 container health_status 0e966c7a283689daf23c5db5017f23f685ee836319240188a9f70ddd246563c5 (image=38.102.83.66:5001/podified-master-centos10/openstack-neutron-metadata-agent-ovn:watcher_latest, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251007, org.label-schema.schema-version=1.0, config_id=ovn_metadata_agent, org.label-schema.vendor=CentOS, tcib_build_tag=watcher_latest, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': '38.102.83.66:5001/podified-master-centos10/openstack-neutron-metadata-agent-ovn:watcher_latest', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, container_name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 10 Base Image, tcib_managed=true, io.buildah.version=1.41.4)
Oct 09 16:08:49 compute-0 sshd-session[140359]: Invalid user hadoop from 134.199.199.215 port 59148
Oct 09 16:08:49 compute-0 sshd-session[140359]: pam_unix(sshd:auth): check pass; user unknown
Oct 09 16:08:49 compute-0 sshd-session[140359]: pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=134.199.199.215
Oct 09 16:08:51 compute-0 sshd-session[140359]: Failed password for invalid user hadoop from 134.199.199.215 port 59148 ssh2
Oct 09 16:08:52 compute-0 sshd-session[140361]: Invalid user ts from 134.199.199.215 port 59160
Oct 09 16:08:52 compute-0 sshd-session[140361]: pam_unix(sshd:auth): check pass; user unknown
Oct 09 16:08:52 compute-0 sshd-session[140361]: pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=134.199.199.215
Oct 09 16:08:53 compute-0 sshd-session[140359]: Connection closed by invalid user hadoop 134.199.199.215 port 59148 [preauth]
Oct 09 16:08:54 compute-0 sshd-session[140361]: Failed password for invalid user ts from 134.199.199.215 port 59160 ssh2
Oct 09 16:08:55 compute-0 sshd-session[140361]: Connection closed by invalid user ts 134.199.199.215 port 59160 [preauth]
Oct 09 16:08:55 compute-0 podman[140363]: 2025-10-09 16:08:55.838349492 +0000 UTC m=+0.075109602 container health_status 64fa251112d01abb34ec1bb8e22019815ddcaaaa391ce5ec91517147ac5a9994 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, io.buildah.version=1.33.7, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, vcs-type=git, config_id=edpm, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., distribution-scope=public, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, io.openshift.expose-services=, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., container_name=openstack_network_exporter, name=ubi9-minimal, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., io.openshift.tags=minimal rhel9, com.redhat.component=ubi9-minimal-container, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, managed_by=edpm_ansible, release=1755695350, architecture=x86_64, url=https://catalog.redhat.com/en/search?searchType=containers, vendor=Red Hat, Inc., build-date=2025-08-20T13:12:41, maintainer=Red Hat, Inc., version=9.6)
Oct 09 16:08:55 compute-0 sshd[52903]: drop connection #0 from [134.199.199.215]:59172 on [38.102.83.110]:22 penalty: failed authentication
Oct 09 16:08:57 compute-0 podman[140385]: 2025-10-09 16:08:57.841155095 +0000 UTC m=+0.078287084 container health_status d615e4f24ff62fbb14be4a63e6da568ec5b22309773c1c66103b6b4de64a8a2f (image=38.102.83.66:5001/podified-master-centos10/openstack-ovn-controller:watcher_latest, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_controller, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.build-date=20251007, org.label-schema.name=CentOS Stream 10 Base Image, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': '38.102.83.66:5001/podified-master-centos10/openstack-ovn-controller:watcher_latest', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.4, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, container_name=ovn_controller, org.label-schema.vendor=CentOS, tcib_build_tag=watcher_latest)
Oct 09 16:08:59 compute-0 sshd[52903]: drop connection #0 from [134.199.199.215]:44452 on [38.102.83.110]:22 penalty: failed authentication
Oct 09 16:08:59 compute-0 podman[127775]: time="2025-10-09T16:08:59Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Oct 09 16:08:59 compute-0 podman[127775]: @ - - [09/Oct/2025:16:08:59 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 19526 "" "Go-http-client/1.1"
Oct 09 16:08:59 compute-0 podman[127775]: @ - - [09/Oct/2025:16:08:59 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 3003 "" "Go-http-client/1.1"
Oct 09 16:09:01 compute-0 nova_compute[117331]: 2025-10-09 16:09:01.307 2 DEBUG oslo_service.periodic_task [None req-92a13bed-c081-4c7b-87f3-bafebefb79f2 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.12/site-packages/oslo_service/periodic_task.py:210
Oct 09 16:09:01 compute-0 openstack_network_exporter[129925]: ERROR   16:09:01 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Oct 09 16:09:01 compute-0 openstack_network_exporter[129925]: ERROR   16:09:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Oct 09 16:09:01 compute-0 openstack_network_exporter[129925]: ERROR   16:09:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Oct 09 16:09:01 compute-0 openstack_network_exporter[129925]: ERROR   16:09:01 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Oct 09 16:09:01 compute-0 openstack_network_exporter[129925]: 
Oct 09 16:09:01 compute-0 openstack_network_exporter[129925]: ERROR   16:09:01 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Oct 09 16:09:01 compute-0 openstack_network_exporter[129925]: 
Oct 09 16:09:02 compute-0 sshd[52903]: drop connection #0 from [134.199.199.215]:44468 on [38.102.83.110]:22 penalty: failed authentication
Oct 09 16:09:04 compute-0 nova_compute[117331]: 2025-10-09 16:09:04.302 2 DEBUG oslo_service.periodic_task [None req-92a13bed-c081-4c7b-87f3-bafebefb79f2 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.12/site-packages/oslo_service/periodic_task.py:210
Oct 09 16:09:05 compute-0 nova_compute[117331]: 2025-10-09 16:09:05.306 2 DEBUG oslo_service.periodic_task [None req-92a13bed-c081-4c7b-87f3-bafebefb79f2 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.12/site-packages/oslo_service/periodic_task.py:210
Oct 09 16:09:05 compute-0 nova_compute[117331]: 2025-10-09 16:09:05.306 2 DEBUG oslo_service.periodic_task [None req-92a13bed-c081-4c7b-87f3-bafebefb79f2 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.12/site-packages/oslo_service/periodic_task.py:210
Oct 09 16:09:06 compute-0 sshd[52903]: drop connection #0 from [134.199.199.215]:44480 on [38.102.83.110]:22 penalty: failed authentication
Oct 09 16:09:06 compute-0 nova_compute[117331]: 2025-10-09 16:09:06.306 2 DEBUG oslo_service.periodic_task [None req-92a13bed-c081-4c7b-87f3-bafebefb79f2 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.12/site-packages/oslo_service/periodic_task.py:210
Oct 09 16:09:06 compute-0 nova_compute[117331]: 2025-10-09 16:09:06.306 2 DEBUG oslo_service.periodic_task [None req-92a13bed-c081-4c7b-87f3-bafebefb79f2 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.12/site-packages/oslo_service/periodic_task.py:210
Oct 09 16:09:06 compute-0 nova_compute[117331]: 2025-10-09 16:09:06.306 2 DEBUG nova.compute.manager [None req-92a13bed-c081-4c7b-87f3-bafebefb79f2 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.12/site-packages/nova/compute/manager.py:11228
Oct 09 16:09:06 compute-0 nova_compute[117331]: 2025-10-09 16:09:06.307 2 DEBUG oslo_service.periodic_task [None req-92a13bed-c081-4c7b-87f3-bafebefb79f2 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.12/site-packages/oslo_service/periodic_task.py:210
Oct 09 16:09:06 compute-0 nova_compute[117331]: 2025-10-09 16:09:06.816 2 DEBUG oslo_concurrency.lockutils [None req-92a13bed-c081-4c7b-87f3-bafebefb79f2 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:405
Oct 09 16:09:06 compute-0 nova_compute[117331]: 2025-10-09 16:09:06.816 2 DEBUG oslo_concurrency.lockutils [None req-92a13bed-c081-4c7b-87f3-bafebefb79f2 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:410
Oct 09 16:09:06 compute-0 nova_compute[117331]: 2025-10-09 16:09:06.816 2 DEBUG oslo_concurrency.lockutils [None req-92a13bed-c081-4c7b-87f3-bafebefb79f2 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:424
Oct 09 16:09:06 compute-0 nova_compute[117331]: 2025-10-09 16:09:06.816 2 DEBUG nova.compute.resource_tracker [None req-92a13bed-c081-4c7b-87f3-bafebefb79f2 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.12/site-packages/nova/compute/resource_tracker.py:937
Oct 09 16:09:06 compute-0 podman[140413]: 2025-10-09 16:09:06.868504603 +0000 UTC m=+0.093380772 container health_status 270830b5167cafbab735d8118980f26987e673cd0fa464dd2202394830bcca66 (image=38.102.83.66:5001/podified-master-centos10/openstack-multipathd:watcher_latest, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': '38.102.83.66:5001/podified-master-centos10/openstack-multipathd:watcher_latest', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, container_name=multipathd, org.label-schema.name=CentOS Stream 10 Base Image, managed_by=edpm_ansible, org.label-schema.build-date=20251007, org.label-schema.vendor=CentOS, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=watcher_latest, io.buildah.version=1.41.4)
Oct 09 16:09:06 compute-0 nova_compute[117331]: 2025-10-09 16:09:06.942 2 WARNING nova.virt.libvirt.driver [None req-92a13bed-c081-4c7b-87f3-bafebefb79f2 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Oct 09 16:09:06 compute-0 nova_compute[117331]: 2025-10-09 16:09:06.943 2 DEBUG oslo_concurrency.processutils [None req-92a13bed-c081-4c7b-87f3-bafebefb79f2 - - - - - -] Running cmd (subprocess): env LANG=C uptime execute /usr/lib/python3.12/site-packages/oslo_concurrency/processutils.py:349
Oct 09 16:09:06 compute-0 nova_compute[117331]: 2025-10-09 16:09:06.956 2 DEBUG oslo_concurrency.processutils [None req-92a13bed-c081-4c7b-87f3-bafebefb79f2 - - - - - -] CMD "env LANG=C uptime" returned: 0 in 0.013s execute /usr/lib/python3.12/site-packages/oslo_concurrency/processutils.py:372
Oct 09 16:09:06 compute-0 nova_compute[117331]: 2025-10-09 16:09:06.956 2 DEBUG nova.compute.resource_tracker [None req-92a13bed-c081-4c7b-87f3-bafebefb79f2 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=6395MB free_disk=73.30537796020508GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.12/site-packages/nova/compute/resource_tracker.py:1136
Oct 09 16:09:06 compute-0 nova_compute[117331]: 2025-10-09 16:09:06.956 2 DEBUG oslo_concurrency.lockutils [None req-92a13bed-c081-4c7b-87f3-bafebefb79f2 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:405
Oct 09 16:09:06 compute-0 nova_compute[117331]: 2025-10-09 16:09:06.957 2 DEBUG oslo_concurrency.lockutils [None req-92a13bed-c081-4c7b-87f3-bafebefb79f2 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:410
Oct 09 16:09:08 compute-0 nova_compute[117331]: 2025-10-09 16:09:08.001 2 DEBUG nova.compute.resource_tracker [None req-92a13bed-c081-4c7b-87f3-bafebefb79f2 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.12/site-packages/nova/compute/resource_tracker.py:1159
Oct 09 16:09:08 compute-0 nova_compute[117331]: 2025-10-09 16:09:08.002 2 DEBUG nova.compute.resource_tracker [None req-92a13bed-c081-4c7b-87f3-bafebefb79f2 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=79GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] stats={'failed_builds': '0', 'uptime': ' 16:09:06 up 18 min,  0 user,  load average: 0.02, 0.32, 0.35\n'} _report_final_resource_view /usr/lib/python3.12/site-packages/nova/compute/resource_tracker.py:1168
Oct 09 16:09:08 compute-0 nova_compute[117331]: 2025-10-09 16:09:08.019 2 DEBUG nova.compute.provider_tree [None req-92a13bed-c081-4c7b-87f3-bafebefb79f2 - - - - - -] Inventory has not changed in ProviderTree for provider: 593051b8-2000-437f-a915-2616fc8b1671 update_inventory /usr/lib/python3.12/site-packages/nova/compute/provider_tree.py:180
Oct 09 16:09:08 compute-0 nova_compute[117331]: 2025-10-09 16:09:08.564 2 DEBUG nova.scheduler.client.report [None req-92a13bed-c081-4c7b-87f3-bafebefb79f2 - - - - - -] Inventory has not changed for provider 593051b8-2000-437f-a915-2616fc8b1671 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 79, 'reserved': 0, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.12/site-packages/nova/scheduler/client/report.py:958
Oct 09 16:09:09 compute-0 nova_compute[117331]: 2025-10-09 16:09:09.173 2 DEBUG nova.compute.resource_tracker [None req-92a13bed-c081-4c7b-87f3-bafebefb79f2 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.12/site-packages/nova/compute/resource_tracker.py:1097
Oct 09 16:09:09 compute-0 nova_compute[117331]: 2025-10-09 16:09:09.173 2 DEBUG oslo_concurrency.lockutils [None req-92a13bed-c081-4c7b-87f3-bafebefb79f2 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 2.217s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:424
Oct 09 16:09:09 compute-0 sshd[52903]: drop connection #0 from [134.199.199.215]:46650 on [38.102.83.110]:22 penalty: failed authentication
Oct 09 16:09:10 compute-0 nova_compute[117331]: 2025-10-09 16:09:10.174 2 DEBUG oslo_service.periodic_task [None req-92a13bed-c081-4c7b-87f3-bafebefb79f2 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.12/site-packages/oslo_service/periodic_task.py:210
Oct 09 16:09:12 compute-0 podman[140434]: 2025-10-09 16:09:12.820804923 +0000 UTC m=+0.051791890 container health_status 86aa93e887899e54d383b0fc17fb3c5637680d1d8e146a130a557c837f0260fe (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible)
Oct 09 16:09:13 compute-0 sshd-session[140458]: Invalid user steam from 134.199.199.215 port 46652
Oct 09 16:09:13 compute-0 sshd-session[140458]: pam_unix(sshd:auth): check pass; user unknown
Oct 09 16:09:13 compute-0 sshd-session[140458]: pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=134.199.199.215
Oct 09 16:09:15 compute-0 sshd-session[140458]: Failed password for invalid user steam from 134.199.199.215 port 46652 ssh2
Oct 09 16:09:15 compute-0 sshd-session[140458]: Connection closed by invalid user steam 134.199.199.215 port 46652 [preauth]
Oct 09 16:09:16 compute-0 sshd-session[140460]: Invalid user user from 134.199.199.215 port 43028
Oct 09 16:09:16 compute-0 sshd-session[140460]: pam_unix(sshd:auth): check pass; user unknown
Oct 09 16:09:16 compute-0 sshd-session[140460]: pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=134.199.199.215
Oct 09 16:09:18 compute-0 sshd-session[140460]: Failed password for invalid user user from 134.199.199.215 port 43028 ssh2
Oct 09 16:09:19 compute-0 sshd-session[140462]: Invalid user guest from 134.199.199.215 port 43036
Oct 09 16:09:19 compute-0 sshd-session[140462]: pam_unix(sshd:auth): check pass; user unknown
Oct 09 16:09:19 compute-0 sshd-session[140462]: pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=134.199.199.215
Oct 09 16:09:19 compute-0 podman[140464]: 2025-10-09 16:09:19.724292585 +0000 UTC m=+0.044164305 container health_status 0e966c7a283689daf23c5db5017f23f685ee836319240188a9f70ddd246563c5 (image=38.102.83.66:5001/podified-master-centos10/openstack-neutron-metadata-agent-ovn:watcher_latest, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 10 Base Image, tcib_build_tag=watcher_latest, container_name=ovn_metadata_agent, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': '38.102.83.66:5001/podified-master-centos10/openstack-neutron-metadata-agent-ovn:watcher_latest', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent, io.buildah.version=1.41.4, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251007)
Oct 09 16:09:19 compute-0 podman[140465]: 2025-10-09 16:09:19.733308726 +0000 UTC m=+0.050677555 container health_status 8227728d23f374cb9528be6ddf01dff38d8c792912a17b0d137932a18390b820 (image=38.102.83.66:5001/podified-master-centos10/openstack-iscsid:watcher_latest, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.4, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251007, tcib_build_tag=watcher_latest, config_id=iscsid, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': '38.102.83.66:5001/podified-master-centos10/openstack-iscsid:watcher_latest', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, container_name=iscsid)
Oct 09 16:09:20 compute-0 sshd-session[140460]: Connection closed by invalid user user 134.199.199.215 port 43028 [preauth]
Oct 09 16:09:21 compute-0 sshd-session[140462]: Failed password for invalid user guest from 134.199.199.215 port 43036 ssh2
Oct 09 16:09:23 compute-0 sshd-session[140499]: Invalid user support from 134.199.199.215 port 43048
Oct 09 16:09:23 compute-0 sshd-session[140499]: pam_unix(sshd:auth): check pass; user unknown
Oct 09 16:09:23 compute-0 sshd-session[140499]: pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=134.199.199.215
Oct 09 16:09:23 compute-0 sshd-session[140462]: Connection closed by invalid user guest 134.199.199.215 port 43036 [preauth]
Oct 09 16:09:25 compute-0 sshd-session[140499]: Failed password for invalid user support from 134.199.199.215 port 43048 ssh2
Oct 09 16:09:26 compute-0 sshd-session[140499]: Connection closed by invalid user support 134.199.199.215 port 43048 [preauth]
Oct 09 16:09:26 compute-0 sshd-session[140501]: Invalid user user2 from 134.199.199.215 port 58622
Oct 09 16:09:26 compute-0 sshd-session[140501]: pam_unix(sshd:auth): check pass; user unknown
Oct 09 16:09:26 compute-0 sshd-session[140501]: pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=134.199.199.215
Oct 09 16:09:26 compute-0 podman[140503]: 2025-10-09 16:09:26.497135486 +0000 UTC m=+0.051994927 container health_status 64fa251112d01abb34ec1bb8e22019815ddcaaaa391ce5ec91517147ac5a9994 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, com.redhat.component=ubi9-minimal-container, io.openshift.expose-services=, distribution-scope=public, build-date=2025-08-20T13:12:41, io.openshift.tags=minimal rhel9, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, vcs-type=git, io.buildah.version=1.33.7, vendor=Red Hat, Inc., architecture=x86_64, container_name=openstack_network_exporter, maintainer=Red Hat, Inc., description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, name=ubi9-minimal, url=https://catalog.redhat.com/en/search?searchType=containers, version=9.6, config_id=edpm, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, managed_by=edpm_ansible, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, release=1755695350, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9.)
Oct 09 16:09:28 compute-0 podman[140524]: 2025-10-09 16:09:28.851323516 +0000 UTC m=+0.085790737 container health_status d615e4f24ff62fbb14be4a63e6da568ec5b22309773c1c66103b6b4de64a8a2f (image=38.102.83.66:5001/podified-master-centos10/openstack-ovn-controller:watcher_latest, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': '38.102.83.66:5001/podified-master-centos10/openstack-ovn-controller:watcher_latest', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.build-date=20251007, org.label-schema.schema-version=1.0, tcib_build_tag=watcher_latest, config_id=ovn_controller, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.vendor=CentOS, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, io.buildah.version=1.41.4, tcib_managed=true)
Oct 09 16:09:28 compute-0 sshd-session[140501]: Failed password for invalid user user2 from 134.199.199.215 port 58622 ssh2
Oct 09 16:09:29 compute-0 podman[127775]: time="2025-10-09T16:09:29Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Oct 09 16:09:29 compute-0 podman[127775]: @ - - [09/Oct/2025:16:09:29 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 19526 "" "Go-http-client/1.1"
Oct 09 16:09:29 compute-0 podman[127775]: @ - - [09/Oct/2025:16:09:29 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 3006 "" "Go-http-client/1.1"
Oct 09 16:09:29 compute-0 sshd-session[140552]: Invalid user user1 from 134.199.199.215 port 58634
Oct 09 16:09:29 compute-0 sshd-session[140552]: pam_unix(sshd:auth): check pass; user unknown
Oct 09 16:09:29 compute-0 sshd-session[140552]: pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=134.199.199.215
Oct 09 16:09:30 compute-0 sshd-session[140501]: Connection closed by invalid user user2 134.199.199.215 port 58622 [preauth]
Oct 09 16:09:30 compute-0 sshd-session[138377]: Received disconnect from 38.102.83.98 port 44404:11: disconnected by user
Oct 09 16:09:30 compute-0 sshd-session[138377]: Disconnected from user zuul 38.102.83.98 port 44404
Oct 09 16:09:30 compute-0 sshd-session[138374]: pam_unix(sshd:session): session closed for user zuul
Oct 09 16:09:30 compute-0 systemd[1]: session-12.scope: Deactivated successfully.
Oct 09 16:09:30 compute-0 systemd[1]: session-12.scope: Consumed 6.445s CPU time.
Oct 09 16:09:30 compute-0 systemd-logind[841]: Session 12 logged out. Waiting for processes to exit.
Oct 09 16:09:30 compute-0 systemd-logind[841]: Removed session 12.
Oct 09 16:09:31 compute-0 sshd-session[140552]: Failed password for invalid user user1 from 134.199.199.215 port 58634 ssh2
Oct 09 16:09:31 compute-0 openstack_network_exporter[129925]: ERROR   16:09:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Oct 09 16:09:31 compute-0 openstack_network_exporter[129925]: ERROR   16:09:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Oct 09 16:09:31 compute-0 openstack_network_exporter[129925]: ERROR   16:09:31 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Oct 09 16:09:31 compute-0 openstack_network_exporter[129925]: ERROR   16:09:31 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Oct 09 16:09:31 compute-0 openstack_network_exporter[129925]: 
Oct 09 16:09:31 compute-0 openstack_network_exporter[129925]: ERROR   16:09:31 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Oct 09 16:09:31 compute-0 openstack_network_exporter[129925]: 
Oct 09 16:09:32 compute-0 sshd-session[140552]: Connection closed by invalid user user1 134.199.199.215 port 58634 [preauth]
Oct 09 16:09:33 compute-0 sshd-session[140554]: Invalid user app from 134.199.199.215 port 58646
Oct 09 16:09:33 compute-0 sshd-session[140554]: pam_unix(sshd:auth): check pass; user unknown
Oct 09 16:09:33 compute-0 sshd-session[140554]: pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=134.199.199.215
Oct 09 16:09:34 compute-0 sshd-session[140554]: Failed password for invalid user app from 134.199.199.215 port 58646 ssh2
Oct 09 16:09:35 compute-0 ovn_metadata_agent[28608]: 2025-10-09 16:09:35.277 28613 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:405
Oct 09 16:09:35 compute-0 ovn_metadata_agent[28608]: 2025-10-09 16:09:35.277 28613 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.000s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:410
Oct 09 16:09:35 compute-0 ovn_metadata_agent[28608]: 2025-10-09 16:09:35.277 28613 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:424
Oct 09 16:09:36 compute-0 sshd-session[140554]: Connection closed by invalid user app 134.199.199.215 port 58646 [preauth]
Oct 09 16:09:36 compute-0 sshd-session[140557]: Invalid user factorio from 134.199.199.215 port 52436
Oct 09 16:09:36 compute-0 sshd-session[140557]: pam_unix(sshd:auth): check pass; user unknown
Oct 09 16:09:36 compute-0 sshd-session[140557]: pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=134.199.199.215
Oct 09 16:09:37 compute-0 podman[140559]: 2025-10-09 16:09:37.870734196 +0000 UTC m=+0.085592078 container health_status 270830b5167cafbab735d8118980f26987e673cd0fa464dd2202394830bcca66 (image=38.102.83.66:5001/podified-master-centos10/openstack-multipathd:watcher_latest, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=watcher_latest, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': '38.102.83.66:5001/podified-master-centos10/openstack-multipathd:watcher_latest', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, io.buildah.version=1.41.4, managed_by=edpm_ansible, org.label-schema.build-date=20251007, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_id=multipathd, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.vendor=CentOS)
Oct 09 16:09:38 compute-0 sshd-session[140557]: Failed password for invalid user factorio from 134.199.199.215 port 52436 ssh2
Oct 09 16:09:39 compute-0 sshd-session[140557]: Connection closed by invalid user factorio 134.199.199.215 port 52436 [preauth]
Oct 09 16:09:39 compute-0 sshd[52903]: drop connection #0 from [134.199.199.215]:52452 on [38.102.83.110]:22 penalty: failed authentication
Oct 09 16:09:40 compute-0 systemd[1]: Stopping User Manager for UID 1000...
Oct 09 16:09:40 compute-0 systemd[1309]: Activating special unit Exit the Session...
Oct 09 16:09:40 compute-0 systemd[1309]: Removed slice User Background Tasks Slice.
Oct 09 16:09:40 compute-0 systemd[1309]: Stopped target Main User Target.
Oct 09 16:09:40 compute-0 systemd[1309]: Stopped target Basic System.
Oct 09 16:09:40 compute-0 systemd[1309]: Stopped target Paths.
Oct 09 16:09:40 compute-0 systemd[1309]: Stopped target Sockets.
Oct 09 16:09:40 compute-0 systemd[1309]: Stopped target Timers.
Oct 09 16:09:40 compute-0 systemd[1309]: Stopped Mark boot as successful after the user session has run 2 minutes.
Oct 09 16:09:40 compute-0 systemd[1309]: Stopped Daily Cleanup of User's Temporary Directories.
Oct 09 16:09:40 compute-0 systemd[1309]: Closed D-Bus User Message Bus Socket.
Oct 09 16:09:40 compute-0 systemd[1309]: Stopped Create User's Volatile Files and Directories.
Oct 09 16:09:40 compute-0 systemd[1309]: Removed slice User Application Slice.
Oct 09 16:09:40 compute-0 systemd[1309]: Reached target Shutdown.
Oct 09 16:09:40 compute-0 systemd[1309]: Finished Exit the Session.
Oct 09 16:09:40 compute-0 systemd[1309]: Reached target Exit the Session.
Oct 09 16:09:40 compute-0 systemd[1]: user@1000.service: Deactivated successfully.
Oct 09 16:09:40 compute-0 systemd[1]: Stopped User Manager for UID 1000.
Oct 09 16:09:40 compute-0 systemd[1]: Stopping User Runtime Directory /run/user/1000...
Oct 09 16:09:40 compute-0 systemd[1]: run-user-1000.mount: Deactivated successfully.
Oct 09 16:09:40 compute-0 systemd[1]: user-runtime-dir@1000.service: Deactivated successfully.
Oct 09 16:09:40 compute-0 systemd[1]: Stopped User Runtime Directory /run/user/1000.
Oct 09 16:09:40 compute-0 systemd[1]: Removed slice User Slice of UID 1000.
Oct 09 16:09:40 compute-0 systemd[1]: user-1000.slice: Consumed 8min 32.701s CPU time.
Oct 09 16:09:42 compute-0 sshd[52903]: drop connection #0 from [134.199.199.215]:52458 on [38.102.83.110]:22 penalty: failed authentication
Oct 09 16:09:43 compute-0 podman[140583]: 2025-10-09 16:09:43.807115684 +0000 UTC m=+0.045592704 container health_status 86aa93e887899e54d383b0fc17fb3c5637680d1d8e146a130a557c837f0260fe (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']})
Oct 09 16:09:46 compute-0 sshd[52903]: drop connection #0 from [134.199.199.215]:56000 on [38.102.83.110]:22 penalty: failed authentication
Oct 09 16:09:49 compute-0 sshd[52903]: drop connection #0 from [134.199.199.215]:56014 on [38.102.83.110]:22 penalty: failed authentication
Oct 09 16:09:49 compute-0 podman[140608]: 2025-10-09 16:09:49.812186928 +0000 UTC m=+0.048708033 container health_status 8227728d23f374cb9528be6ddf01dff38d8c792912a17b0d137932a18390b820 (image=38.102.83.66:5001/podified-master-centos10/openstack-iscsid:watcher_latest, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.4, managed_by=edpm_ansible, org.label-schema.build-date=20251007, org.label-schema.vendor=CentOS, container_name=iscsid, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=watcher_latest, config_id=iscsid, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': '38.102.83.66:5001/podified-master-centos10/openstack-iscsid:watcher_latest', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']})
Oct 09 16:09:49 compute-0 podman[140607]: 2025-10-09 16:09:49.84300897 +0000 UTC m=+0.078063028 container health_status 0e966c7a283689daf23c5db5017f23f685ee836319240188a9f70ddd246563c5 (image=38.102.83.66:5001/podified-master-centos10/openstack-neutron-metadata-agent-ovn:watcher_latest, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251007, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true, tcib_build_tag=watcher_latest, config_id=ovn_metadata_agent, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': '38.102.83.66:5001/podified-master-centos10/openstack-neutron-metadata-agent-ovn:watcher_latest', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, io.buildah.version=1.41.4, maintainer=OpenStack Kubernetes Operator team)
Oct 09 16:09:52 compute-0 sshd[52903]: drop connection #0 from [134.199.199.215]:56022 on [38.102.83.110]:22 penalty: failed authentication
Oct 09 16:09:55 compute-0 ovn_metadata_agent[28608]: 2025-10-09 16:09:55.679 28613 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=3, options={'arp_ns_explicit_output': 'true', 'fdb_removal_limit': '0', 'ignore_lsp_down': 'false', 'mac_binding_removal_limit': '0', 'mac_prefix': '4a:65:5f', 'max_tunid': '16711680', 'northd_internal_version': '24.09.4-20.37.0-77.8', 'svc_monitor_mac': '32:24:1e:e3:5d:e8'}, ipsec=False) old=SB_Global(nb_cfg=2) matches /usr/lib/python3.12/site-packages/ovsdbapp/backend/ovs_idl/event.py:55
Oct 09 16:09:55 compute-0 ovn_metadata_agent[28608]: 2025-10-09 16:09:55.680 28613 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 0 seconds run /usr/lib/python3.12/site-packages/neutron/agent/ovn/metadata/agent.py:367
Oct 09 16:09:55 compute-0 ovn_metadata_agent[28608]: 2025-10-09 16:09:55.681 28613 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=9954897f-aa83-45dd-8e84-289816676c2a, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '3'}),), if_exists=True) do_commit /usr/lib/python3.12/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 09 16:09:56 compute-0 sshd[52903]: drop connection #0 from [134.199.199.215]:40536 on [38.102.83.110]:22 penalty: failed authentication
Oct 09 16:09:56 compute-0 podman[140649]: 2025-10-09 16:09:56.819232904 +0000 UTC m=+0.050173480 container health_status 64fa251112d01abb34ec1bb8e22019815ddcaaaa391ce5ec91517147ac5a9994 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, io.openshift.expose-services=, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., version=9.6, com.redhat.component=ubi9-minimal-container, name=ubi9-minimal, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, container_name=openstack_network_exporter, managed_by=edpm_ansible, release=1755695350, build-date=2025-08-20T13:12:41, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.tags=minimal rhel9, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., url=https://catalog.redhat.com/en/search?searchType=containers, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, maintainer=Red Hat, Inc., architecture=x86_64, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., distribution-scope=public, config_id=edpm, io.buildah.version=1.33.7, vcs-type=git, vendor=Red Hat, Inc.)
Oct 09 16:09:57 compute-0 ovn_metadata_agent[28608]: 2025-10-09 16:09:57.199 28613 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:93:72:52 10.100.0.2'], port_security=[], type=localport, nat_addresses=[], virtual_parent=[], up=[False], options={}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.2/28', 'neutron:device_id': 'ovnmeta-6eccf890-0637-4859-95f7-03ee0bf9c504', 'neutron:device_owner': 'network:distributed', 'neutron:mtu': '', 'neutron:network_name': 'neutron-6eccf890-0637-4859-95f7-03ee0bf9c504', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '9f3fc733922848aa9ddc9d0813f8ba80', 'neutron:revision_number': '2', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=9e6fcc7f-9456-459b-adfd-2fc005d8b3e6, chassis=[], tunnel_key=1, gateway_chassis=[], requested_chassis=[], logical_port=651a592f-942d-4927-af62-51b72afdc9b7) old=Port_Binding(mac=['fa:16:3e:93:72:52'], external_ids={'neutron:cidrs': '', 'neutron:device_id': 'ovnmeta-6eccf890-0637-4859-95f7-03ee0bf9c504', 'neutron:device_owner': 'network:distributed', 'neutron:mtu': '', 'neutron:network_name': 'neutron-6eccf890-0637-4859-95f7-03ee0bf9c504', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '9f3fc733922848aa9ddc9d0813f8ba80', 'neutron:revision_number': '1', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}) matches /usr/lib/python3.12/site-packages/ovsdbapp/backend/ovs_idl/event.py:55
Oct 09 16:09:57 compute-0 ovn_metadata_agent[28608]: 2025-10-09 16:09:57.200 28613 INFO neutron.agent.ovn.metadata.agent [-] Metadata Port 651a592f-942d-4927-af62-51b72afdc9b7 in datapath 6eccf890-0637-4859-95f7-03ee0bf9c504 updated
Oct 09 16:09:57 compute-0 ovn_metadata_agent[28608]: 2025-10-09 16:09:57.201 28613 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 6eccf890-0637-4859-95f7-03ee0bf9c504, tearing the namespace down if needed _get_provision_params /usr/lib/python3.12/site-packages/neutron/agent/ovn/metadata/agent.py:756
Oct 09 16:09:57 compute-0 ovn_metadata_agent[28608]: 2025-10-09 16:09:57.201 139687 DEBUG oslo.privsep.daemon [-] privsep: reply[56c6bdbe-a2e2-4cab-a161-562faf521dbf]: (4, False) _call_back /usr/lib/python3.12/site-packages/oslo_privsep/daemon.py:515
Oct 09 16:09:59 compute-0 podman[127775]: time="2025-10-09T16:09:59Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Oct 09 16:09:59 compute-0 podman[127775]: @ - - [09/Oct/2025:16:09:59 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 19526 "" "Go-http-client/1.1"
Oct 09 16:09:59 compute-0 podman[127775]: @ - - [09/Oct/2025:16:09:59 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 3007 "" "Go-http-client/1.1"
Oct 09 16:09:59 compute-0 podman[140673]: 2025-10-09 16:09:59.867240281 +0000 UTC m=+0.092264321 container health_status d615e4f24ff62fbb14be4a63e6da568ec5b22309773c1c66103b6b4de64a8a2f (image=38.102.83.66:5001/podified-master-centos10/openstack-ovn-controller:watcher_latest, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_controller, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': '38.102.83.66:5001/podified-master-centos10/openstack-ovn-controller:watcher_latest', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, io.buildah.version=1.41.4, org.label-schema.build-date=20251007, org.label-schema.name=CentOS Stream 10 Base Image, tcib_build_tag=watcher_latest, config_id=ovn_controller)
Oct 09 16:09:59 compute-0 sshd-session[140671]: Invalid user testuser from 134.199.199.215 port 40548
Oct 09 16:09:59 compute-0 sshd-session[140671]: pam_unix(sshd:auth): check pass; user unknown
Oct 09 16:09:59 compute-0 sshd-session[140671]: pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=134.199.199.215
Oct 09 16:10:01 compute-0 openstack_network_exporter[129925]: ERROR   16:10:01 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Oct 09 16:10:01 compute-0 openstack_network_exporter[129925]: ERROR   16:10:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Oct 09 16:10:01 compute-0 openstack_network_exporter[129925]: ERROR   16:10:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Oct 09 16:10:01 compute-0 openstack_network_exporter[129925]: ERROR   16:10:01 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Oct 09 16:10:01 compute-0 openstack_network_exporter[129925]: 
Oct 09 16:10:01 compute-0 openstack_network_exporter[129925]: ERROR   16:10:01 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Oct 09 16:10:01 compute-0 openstack_network_exporter[129925]: 
Oct 09 16:10:01 compute-0 sshd-session[140671]: Failed password for invalid user testuser from 134.199.199.215 port 40548 ssh2
Oct 09 16:10:02 compute-0 nova_compute[117331]: 2025-10-09 16:10:02.306 2 DEBUG oslo_service.periodic_task [None req-92a13bed-c081-4c7b-87f3-bafebefb79f2 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.12/site-packages/oslo_service/periodic_task.py:210
Oct 09 16:10:03 compute-0 unix_chkpwd[140703]: password check failed for user (root)
Oct 09 16:10:03 compute-0 sshd-session[140701]: pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=134.199.199.215  user=root
Oct 09 16:10:03 compute-0 sshd-session[140671]: Connection closed by invalid user testuser 134.199.199.215 port 40548 [preauth]
Oct 09 16:10:05 compute-0 nova_compute[117331]: 2025-10-09 16:10:05.306 2 DEBUG oslo_service.periodic_task [None req-92a13bed-c081-4c7b-87f3-bafebefb79f2 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.12/site-packages/oslo_service/periodic_task.py:210
Oct 09 16:10:05 compute-0 sshd-session[140701]: Failed password for root from 134.199.199.215 port 40556 ssh2
Oct 09 16:10:06 compute-0 nova_compute[117331]: 2025-10-09 16:10:06.305 2 DEBUG oslo_service.periodic_task [None req-92a13bed-c081-4c7b-87f3-bafebefb79f2 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.12/site-packages/oslo_service/periodic_task.py:210
Oct 09 16:10:06 compute-0 nova_compute[117331]: 2025-10-09 16:10:06.305 2 DEBUG oslo_service.periodic_task [None req-92a13bed-c081-4c7b-87f3-bafebefb79f2 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.12/site-packages/oslo_service/periodic_task.py:210
Oct 09 16:10:06 compute-0 nova_compute[117331]: 2025-10-09 16:10:06.305 2 DEBUG oslo_service.periodic_task [None req-92a13bed-c081-4c7b-87f3-bafebefb79f2 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.12/site-packages/oslo_service/periodic_task.py:210
Oct 09 16:10:06 compute-0 nova_compute[117331]: 2025-10-09 16:10:06.306 2 DEBUG nova.compute.manager [None req-92a13bed-c081-4c7b-87f3-bafebefb79f2 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.12/site-packages/nova/compute/manager.py:11228
Oct 09 16:10:06 compute-0 systemd[1]: Starting system activity accounting tool...
Oct 09 16:10:06 compute-0 sshd-session[140704]: Invalid user tomcat from 134.199.199.215 port 57398
Oct 09 16:10:06 compute-0 systemd[1]: sysstat-collect.service: Deactivated successfully.
Oct 09 16:10:06 compute-0 systemd[1]: Finished system activity accounting tool.
Oct 09 16:10:06 compute-0 sshd-session[140704]: pam_unix(sshd:auth): check pass; user unknown
Oct 09 16:10:06 compute-0 sshd-session[140704]: pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=134.199.199.215
Oct 09 16:10:07 compute-0 nova_compute[117331]: 2025-10-09 16:10:07.307 2 DEBUG oslo_service.periodic_task [None req-92a13bed-c081-4c7b-87f3-bafebefb79f2 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.12/site-packages/oslo_service/periodic_task.py:210
Oct 09 16:10:07 compute-0 nova_compute[117331]: 2025-10-09 16:10:07.308 2 DEBUG oslo_service.periodic_task [None req-92a13bed-c081-4c7b-87f3-bafebefb79f2 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.12/site-packages/oslo_service/periodic_task.py:210
Oct 09 16:10:07 compute-0 sshd-session[140701]: Connection closed by authenticating user root 134.199.199.215 port 40556 [preauth]
Oct 09 16:10:07 compute-0 nova_compute[117331]: 2025-10-09 16:10:07.822 2 DEBUG oslo_concurrency.lockutils [None req-92a13bed-c081-4c7b-87f3-bafebefb79f2 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:405
Oct 09 16:10:07 compute-0 nova_compute[117331]: 2025-10-09 16:10:07.823 2 DEBUG oslo_concurrency.lockutils [None req-92a13bed-c081-4c7b-87f3-bafebefb79f2 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:410
Oct 09 16:10:07 compute-0 nova_compute[117331]: 2025-10-09 16:10:07.823 2 DEBUG oslo_concurrency.lockutils [None req-92a13bed-c081-4c7b-87f3-bafebefb79f2 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:424
Oct 09 16:10:07 compute-0 nova_compute[117331]: 2025-10-09 16:10:07.823 2 DEBUG nova.compute.resource_tracker [None req-92a13bed-c081-4c7b-87f3-bafebefb79f2 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.12/site-packages/nova/compute/resource_tracker.py:937
Oct 09 16:10:07 compute-0 nova_compute[117331]: 2025-10-09 16:10:07.988 2 WARNING nova.virt.libvirt.driver [None req-92a13bed-c081-4c7b-87f3-bafebefb79f2 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Oct 09 16:10:07 compute-0 nova_compute[117331]: 2025-10-09 16:10:07.989 2 DEBUG oslo_concurrency.processutils [None req-92a13bed-c081-4c7b-87f3-bafebefb79f2 - - - - - -] Running cmd (subprocess): env LANG=C uptime execute /usr/lib/python3.12/site-packages/oslo_concurrency/processutils.py:349
Oct 09 16:10:08 compute-0 nova_compute[117331]: 2025-10-09 16:10:08.003 2 DEBUG oslo_concurrency.processutils [None req-92a13bed-c081-4c7b-87f3-bafebefb79f2 - - - - - -] CMD "env LANG=C uptime" returned: 0 in 0.014s execute /usr/lib/python3.12/site-packages/oslo_concurrency/processutils.py:372
Oct 09 16:10:08 compute-0 nova_compute[117331]: 2025-10-09 16:10:08.004 2 DEBUG nova.compute.resource_tracker [None req-92a13bed-c081-4c7b-87f3-bafebefb79f2 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=6413MB free_disk=73.30658340454102GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.12/site-packages/nova/compute/resource_tracker.py:1136
Oct 09 16:10:08 compute-0 nova_compute[117331]: 2025-10-09 16:10:08.004 2 DEBUG oslo_concurrency.lockutils [None req-92a13bed-c081-4c7b-87f3-bafebefb79f2 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:405
Oct 09 16:10:08 compute-0 nova_compute[117331]: 2025-10-09 16:10:08.004 2 DEBUG oslo_concurrency.lockutils [None req-92a13bed-c081-4c7b-87f3-bafebefb79f2 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:410
Oct 09 16:10:08 compute-0 sshd-session[140704]: Failed password for invalid user tomcat from 134.199.199.215 port 57398 ssh2
Oct 09 16:10:08 compute-0 podman[140708]: 2025-10-09 16:10:08.839942401 +0000 UTC m=+0.062609545 container health_status 270830b5167cafbab735d8118980f26987e673cd0fa464dd2202394830bcca66 (image=38.102.83.66:5001/podified-master-centos10/openstack-multipathd:watcher_latest, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, container_name=multipathd, org.label-schema.name=CentOS Stream 10 Base Image, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': '38.102.83.66:5001/podified-master-centos10/openstack-multipathd:watcher_latest', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=watcher_latest, org.label-schema.build-date=20251007, org.label-schema.vendor=CentOS, config_id=multipathd, io.buildah.version=1.41.4)
Oct 09 16:10:09 compute-0 nova_compute[117331]: 2025-10-09 16:10:09.064 2 DEBUG nova.compute.resource_tracker [None req-92a13bed-c081-4c7b-87f3-bafebefb79f2 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.12/site-packages/nova/compute/resource_tracker.py:1159
Oct 09 16:10:09 compute-0 nova_compute[117331]: 2025-10-09 16:10:09.064 2 DEBUG nova.compute.resource_tracker [None req-92a13bed-c081-4c7b-87f3-bafebefb79f2 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=79GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] stats={'failed_builds': '0', 'uptime': ' 16:10:08 up 19 min,  0 user,  load average: 0.01, 0.26, 0.32\n'} _report_final_resource_view /usr/lib/python3.12/site-packages/nova/compute/resource_tracker.py:1168
Oct 09 16:10:09 compute-0 nova_compute[117331]: 2025-10-09 16:10:09.087 2 DEBUG nova.compute.provider_tree [None req-92a13bed-c081-4c7b-87f3-bafebefb79f2 - - - - - -] Inventory has not changed in ProviderTree for provider: 593051b8-2000-437f-a915-2616fc8b1671 update_inventory /usr/lib/python3.12/site-packages/nova/compute/provider_tree.py:180
Oct 09 16:10:09 compute-0 nova_compute[117331]: 2025-10-09 16:10:09.595 2 DEBUG nova.scheduler.client.report [None req-92a13bed-c081-4c7b-87f3-bafebefb79f2 - - - - - -] Inventory has not changed for provider 593051b8-2000-437f-a915-2616fc8b1671 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 79, 'reserved': 0, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.12/site-packages/nova/scheduler/client/report.py:958
Oct 09 16:10:09 compute-0 ovn_metadata_agent[28608]: 2025-10-09 16:10:09.668 28613 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:e0:a4:0e 10.100.0.2'], port_security=[], type=localport, nat_addresses=[], virtual_parent=[], up=[False], options={}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.2/28', 'neutron:device_id': 'ovnmeta-cf377fdf-1120-4a27-9469-ffe67088c747', 'neutron:device_owner': 'network:distributed', 'neutron:mtu': '', 'neutron:network_name': 'neutron-cf377fdf-1120-4a27-9469-ffe67088c747', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'cb9cfe2a59d34561b954fd278bc3bf0a', 'neutron:revision_number': '2', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=f5c070bb-f871-416d-92a8-a7ca8f9487bf, chassis=[], tunnel_key=1, gateway_chassis=[], requested_chassis=[], logical_port=6f6fbe9f-69e1-496f-b387-00e78de80fa8) old=Port_Binding(mac=['fa:16:3e:e0:a4:0e'], external_ids={'neutron:cidrs': '', 'neutron:device_id': 'ovnmeta-cf377fdf-1120-4a27-9469-ffe67088c747', 'neutron:device_owner': 'network:distributed', 'neutron:mtu': '', 'neutron:network_name': 'neutron-cf377fdf-1120-4a27-9469-ffe67088c747', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'cb9cfe2a59d34561b954fd278bc3bf0a', 'neutron:revision_number': '1', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}) matches /usr/lib/python3.12/site-packages/ovsdbapp/backend/ovs_idl/event.py:55
Oct 09 16:10:09 compute-0 ovn_metadata_agent[28608]: 2025-10-09 16:10:09.669 28613 INFO neutron.agent.ovn.metadata.agent [-] Metadata Port 6f6fbe9f-69e1-496f-b387-00e78de80fa8 in datapath cf377fdf-1120-4a27-9469-ffe67088c747 updated
Oct 09 16:10:09 compute-0 ovn_metadata_agent[28608]: 2025-10-09 16:10:09.670 28613 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network cf377fdf-1120-4a27-9469-ffe67088c747, tearing the namespace down if needed _get_provision_params /usr/lib/python3.12/site-packages/neutron/agent/ovn/metadata/agent.py:756
Oct 09 16:10:09 compute-0 ovn_metadata_agent[28608]: 2025-10-09 16:10:09.671 139687 DEBUG oslo.privsep.daemon [-] privsep: reply[71ed1418-0b87-43a2-977f-6172484ec4c2]: (4, False) _call_back /usr/lib/python3.12/site-packages/oslo_privsep/daemon.py:515
Oct 09 16:10:09 compute-0 sshd-session[140729]: Invalid user plex from 134.199.199.215 port 57436
Oct 09 16:10:09 compute-0 sshd-session[140729]: pam_unix(sshd:auth): check pass; user unknown
Oct 09 16:10:09 compute-0 sshd-session[140729]: pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=134.199.199.215
Oct 09 16:10:09 compute-0 sshd-session[140704]: Connection closed by invalid user tomcat 134.199.199.215 port 57398 [preauth]
Oct 09 16:10:10 compute-0 nova_compute[117331]: 2025-10-09 16:10:10.111 2 DEBUG nova.compute.resource_tracker [None req-92a13bed-c081-4c7b-87f3-bafebefb79f2 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.12/site-packages/nova/compute/resource_tracker.py:1097
Oct 09 16:10:10 compute-0 nova_compute[117331]: 2025-10-09 16:10:10.111 2 DEBUG oslo_concurrency.lockutils [None req-92a13bed-c081-4c7b-87f3-bafebefb79f2 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 2.107s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:424
Oct 09 16:10:11 compute-0 nova_compute[117331]: 2025-10-09 16:10:11.107 2 DEBUG oslo_service.periodic_task [None req-92a13bed-c081-4c7b-87f3-bafebefb79f2 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.12/site-packages/oslo_service/periodic_task.py:210
Oct 09 16:10:11 compute-0 nova_compute[117331]: 2025-10-09 16:10:11.616 2 DEBUG oslo_service.periodic_task [None req-92a13bed-c081-4c7b-87f3-bafebefb79f2 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.12/site-packages/oslo_service/periodic_task.py:210
Oct 09 16:10:11 compute-0 sshd-session[140729]: Failed password for invalid user plex from 134.199.199.215 port 57436 ssh2
Oct 09 16:10:12 compute-0 unix_chkpwd[140733]: password check failed for user (root)
Oct 09 16:10:12 compute-0 sshd-session[140731]: pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=193.46.255.7  user=root
Oct 09 16:10:13 compute-0 unix_chkpwd[140736]: password check failed for user (root)
Oct 09 16:10:13 compute-0 sshd-session[140734]: pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=134.199.199.215  user=root
Oct 09 16:10:13 compute-0 sshd-session[140729]: Connection closed by invalid user plex 134.199.199.215 port 57436 [preauth]
Oct 09 16:10:14 compute-0 sshd-session[140731]: Failed password for root from 193.46.255.7 port 44610 ssh2
Oct 09 16:10:14 compute-0 unix_chkpwd[140737]: password check failed for user (root)
Oct 09 16:10:14 compute-0 podman[140738]: 2025-10-09 16:10:14.844324856 +0000 UTC m=+0.076217019 container health_status 86aa93e887899e54d383b0fc17fb3c5637680d1d8e146a130a557c837f0260fe (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter)
Oct 09 16:10:15 compute-0 sshd-session[140734]: Failed password for root from 134.199.199.215 port 57468 ssh2
Oct 09 16:10:16 compute-0 sshd-session[140731]: Failed password for root from 193.46.255.7 port 44610 ssh2
Oct 09 16:10:16 compute-0 sshd-session[140762]: Invalid user oracle from 134.199.199.215 port 40836
Oct 09 16:10:16 compute-0 sshd-session[140762]: pam_unix(sshd:auth): check pass; user unknown
Oct 09 16:10:16 compute-0 sshd-session[140762]: pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=134.199.199.215
Oct 09 16:10:16 compute-0 unix_chkpwd[140764]: password check failed for user (root)
Oct 09 16:10:17 compute-0 sshd-session[140734]: Connection closed by authenticating user root 134.199.199.215 port 57468 [preauth]
Oct 09 16:10:18 compute-0 sshd-session[140762]: Failed password for invalid user oracle from 134.199.199.215 port 40836 ssh2
Oct 09 16:10:18 compute-0 sshd-session[140731]: Failed password for root from 193.46.255.7 port 44610 ssh2
Oct 09 16:10:18 compute-0 sshd-session[140731]: Received disconnect from 193.46.255.7 port 44610:11:  [preauth]
Oct 09 16:10:18 compute-0 sshd-session[140731]: Disconnected from authenticating user root 193.46.255.7 port 44610 [preauth]
Oct 09 16:10:18 compute-0 sshd-session[140731]: PAM 2 more authentication failures; logname= uid=0 euid=0 tty=ssh ruser= rhost=193.46.255.7  user=root
Oct 09 16:10:19 compute-0 unix_chkpwd[140768]: password check failed for user (root)
Oct 09 16:10:19 compute-0 sshd-session[140765]: pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=193.46.255.7  user=root
Oct 09 16:10:19 compute-0 sshd-session[140767]: Invalid user centos from 134.199.199.215 port 40846
Oct 09 16:10:19 compute-0 sshd-session[140767]: pam_unix(sshd:auth): check pass; user unknown
Oct 09 16:10:19 compute-0 sshd-session[140767]: pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=134.199.199.215
Oct 09 16:10:19 compute-0 podman[140770]: 2025-10-09 16:10:19.902055059 +0000 UTC m=+0.051426279 container health_status 8227728d23f374cb9528be6ddf01dff38d8c792912a17b0d137932a18390b820 (image=38.102.83.66:5001/podified-master-centos10/openstack-iscsid:watcher_latest, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': '38.102.83.66:5001/podified-master-centos10/openstack-iscsid:watcher_latest', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, config_id=iscsid, io.buildah.version=1.41.4, org.label-schema.build-date=20251007, org.label-schema.vendor=CentOS, tcib_build_tag=watcher_latest, container_name=iscsid, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible)
Oct 09 16:10:19 compute-0 podman[140782]: 2025-10-09 16:10:19.954113058 +0000 UTC m=+0.056391578 container health_status 0e966c7a283689daf23c5db5017f23f685ee836319240188a9f70ddd246563c5 (image=38.102.83.66:5001/podified-master-centos10/openstack-neutron-metadata-agent-ovn:watcher_latest, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': '38.102.83.66:5001/podified-master-centos10/openstack-neutron-metadata-agent-ovn:watcher_latest', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, io.buildah.version=1.41.4, managed_by=edpm_ansible, org.label-schema.build-date=20251007, org.label-schema.vendor=CentOS, tcib_build_tag=watcher_latest)
Oct 09 16:10:20 compute-0 sshd-session[140762]: Connection closed by invalid user oracle 134.199.199.215 port 40836 [preauth]
Oct 09 16:10:21 compute-0 sshd-session[140765]: Failed password for root from 193.46.255.7 port 26872 ssh2
Oct 09 16:10:21 compute-0 unix_chkpwd[140810]: password check failed for user (root)
Oct 09 16:10:21 compute-0 sshd-session[140767]: Failed password for invalid user centos from 134.199.199.215 port 40846 ssh2
Oct 09 16:10:22 compute-0 sshd-session[140767]: Connection closed by invalid user centos 134.199.199.215 port 40846 [preauth]
Oct 09 16:10:23 compute-0 sshd[52903]: drop connection #1 from [134.199.199.215]:40864 on [38.102.83.110]:22 penalty: failed authentication
Oct 09 16:10:23 compute-0 sshd-session[140765]: Failed password for root from 193.46.255.7 port 26872 ssh2
Oct 09 16:10:24 compute-0 unix_chkpwd[140811]: password check failed for user (root)
Oct 09 16:10:25 compute-0 sshd-session[140765]: Failed password for root from 193.46.255.7 port 26872 ssh2
Oct 09 16:10:26 compute-0 sshd-session[140765]: Received disconnect from 193.46.255.7 port 26872:11:  [preauth]
Oct 09 16:10:26 compute-0 sshd-session[140765]: Disconnected from authenticating user root 193.46.255.7 port 26872 [preauth]
Oct 09 16:10:26 compute-0 sshd-session[140765]: PAM 2 more authentication failures; logname= uid=0 euid=0 tty=ssh ruser= rhost=193.46.255.7  user=root
Oct 09 16:10:26 compute-0 sshd[52903]: drop connection #0 from [134.199.199.215]:53802 on [38.102.83.110]:22 penalty: failed authentication
Oct 09 16:10:27 compute-0 unix_chkpwd[140814]: password check failed for user (root)
Oct 09 16:10:27 compute-0 sshd-session[140812]: pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=193.46.255.7  user=root
Oct 09 16:10:27 compute-0 podman[140815]: 2025-10-09 16:10:27.860306829 +0000 UTC m=+0.087021402 container health_status 64fa251112d01abb34ec1bb8e22019815ddcaaaa391ce5ec91517147ac5a9994 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.expose-services=, architecture=x86_64, release=1755695350, vendor=Red Hat, Inc., summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., url=https://catalog.redhat.com/en/search?searchType=containers, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, maintainer=Red Hat, Inc., vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, vcs-type=git, config_id=edpm, container_name=openstack_network_exporter, managed_by=edpm_ansible, io.openshift.tags=minimal rhel9, com.redhat.component=ubi9-minimal-container, io.buildah.version=1.33.7, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, version=9.6, name=ubi9-minimal, distribution-scope=public, build-date=2025-08-20T13:12:41)
Oct 09 16:10:29 compute-0 sshd-session[140812]: Failed password for root from 193.46.255.7 port 26888 ssh2
Oct 09 16:10:29 compute-0 sshd[52903]: drop connection #1 from [134.199.199.215]:53804 on [38.102.83.110]:22 penalty: failed authentication
Oct 09 16:10:29 compute-0 podman[127775]: time="2025-10-09T16:10:29Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Oct 09 16:10:29 compute-0 podman[127775]: @ - - [09/Oct/2025:16:10:29 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 19526 "" "Go-http-client/1.1"
Oct 09 16:10:29 compute-0 podman[127775]: @ - - [09/Oct/2025:16:10:29 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 3009 "" "Go-http-client/1.1"
Oct 09 16:10:30 compute-0 podman[140836]: 2025-10-09 16:10:30.872547867 +0000 UTC m=+0.106430852 container health_status d615e4f24ff62fbb14be4a63e6da568ec5b22309773c1c66103b6b4de64a8a2f (image=38.102.83.66:5001/podified-master-centos10/openstack-ovn-controller:watcher_latest, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, io.buildah.version=1.41.4, tcib_build_tag=watcher_latest, config_id=ovn_controller, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': '38.102.83.66:5001/podified-master-centos10/openstack-ovn-controller:watcher_latest', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251007, org.label-schema.license=GPLv2)
Oct 09 16:10:31 compute-0 openstack_network_exporter[129925]: ERROR   16:10:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Oct 09 16:10:31 compute-0 openstack_network_exporter[129925]: ERROR   16:10:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Oct 09 16:10:31 compute-0 openstack_network_exporter[129925]: ERROR   16:10:31 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Oct 09 16:10:31 compute-0 openstack_network_exporter[129925]: ERROR   16:10:31 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Oct 09 16:10:31 compute-0 openstack_network_exporter[129925]: 
Oct 09 16:10:31 compute-0 openstack_network_exporter[129925]: ERROR   16:10:31 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Oct 09 16:10:31 compute-0 openstack_network_exporter[129925]: 
Oct 09 16:10:31 compute-0 unix_chkpwd[140862]: password check failed for user (root)
Oct 09 16:10:32 compute-0 sshd[52903]: drop connection #1 from [134.199.199.215]:53808 on [38.102.83.110]:22 penalty: failed authentication
Oct 09 16:10:33 compute-0 sshd-session[140812]: Failed password for root from 193.46.255.7 port 26888 ssh2
Oct 09 16:10:33 compute-0 unix_chkpwd[140863]: password check failed for user (root)
Oct 09 16:10:35 compute-0 ovn_metadata_agent[28608]: 2025-10-09 16:10:35.279 28613 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:405
Oct 09 16:10:35 compute-0 ovn_metadata_agent[28608]: 2025-10-09 16:10:35.280 28613 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.002s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:410
Oct 09 16:10:35 compute-0 ovn_metadata_agent[28608]: 2025-10-09 16:10:35.281 28613 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:424
Oct 09 16:10:35 compute-0 nova_compute[117331]: 2025-10-09 16:10:35.480 2 DEBUG oslo_concurrency.lockutils [None req-032ce1c1-3f29-43e1-9fe6-906ee67ede49 79bd7ccf35e1491a9ecd7213db027ff8 cb9cfe2a59d34561b954fd278bc3bf0a - - default default] Acquiring lock "42539348-4e24-48c0-9522-d27691bb1247" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:405
Oct 09 16:10:35 compute-0 nova_compute[117331]: 2025-10-09 16:10:35.481 2 DEBUG oslo_concurrency.lockutils [None req-032ce1c1-3f29-43e1-9fe6-906ee67ede49 79bd7ccf35e1491a9ecd7213db027ff8 cb9cfe2a59d34561b954fd278bc3bf0a - - default default] Lock "42539348-4e24-48c0-9522-d27691bb1247" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.000s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:410
Oct 09 16:10:35 compute-0 sshd-session[140812]: Failed password for root from 193.46.255.7 port 26888 ssh2
Oct 09 16:10:35 compute-0 nova_compute[117331]: 2025-10-09 16:10:35.986 2 DEBUG nova.compute.manager [None req-032ce1c1-3f29-43e1-9fe6-906ee67ede49 79bd7ccf35e1491a9ecd7213db027ff8 cb9cfe2a59d34561b954fd278bc3bf0a - - default default] [instance: 42539348-4e24-48c0-9522-d27691bb1247] Starting instance... _do_build_and_run_instance /usr/lib/python3.12/site-packages/nova/compute/manager.py:2472
Oct 09 16:10:36 compute-0 sshd[52903]: drop connection #1 from [134.199.199.215]:59094 on [38.102.83.110]:22 penalty: failed authentication
Oct 09 16:10:36 compute-0 nova_compute[117331]: 2025-10-09 16:10:36.574 2 DEBUG oslo_concurrency.lockutils [None req-032ce1c1-3f29-43e1-9fe6-906ee67ede49 79bd7ccf35e1491a9ecd7213db027ff8 cb9cfe2a59d34561b954fd278bc3bf0a - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:405
Oct 09 16:10:36 compute-0 nova_compute[117331]: 2025-10-09 16:10:36.574 2 DEBUG oslo_concurrency.lockutils [None req-032ce1c1-3f29-43e1-9fe6-906ee67ede49 79bd7ccf35e1491a9ecd7213db027ff8 cb9cfe2a59d34561b954fd278bc3bf0a - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:410
Oct 09 16:10:36 compute-0 nova_compute[117331]: 2025-10-09 16:10:36.580 2 DEBUG nova.virt.hardware [None req-032ce1c1-3f29-43e1-9fe6-906ee67ede49 79bd7ccf35e1491a9ecd7213db027ff8 cb9cfe2a59d34561b954fd278bc3bf0a - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.12/site-packages/nova/virt/hardware.py:2528
Oct 09 16:10:36 compute-0 nova_compute[117331]: 2025-10-09 16:10:36.580 2 INFO nova.compute.claims [None req-032ce1c1-3f29-43e1-9fe6-906ee67ede49 79bd7ccf35e1491a9ecd7213db027ff8 cb9cfe2a59d34561b954fd278bc3bf0a - - default default] [instance: 42539348-4e24-48c0-9522-d27691bb1247] Claim successful on node compute-0.ctlplane.example.com
Oct 09 16:10:37 compute-0 nova_compute[117331]: 2025-10-09 16:10:37.657 2 DEBUG nova.compute.provider_tree [None req-032ce1c1-3f29-43e1-9fe6-906ee67ede49 79bd7ccf35e1491a9ecd7213db027ff8 cb9cfe2a59d34561b954fd278bc3bf0a - - default default] Inventory has not changed in ProviderTree for provider: 593051b8-2000-437f-a915-2616fc8b1671 update_inventory /usr/lib/python3.12/site-packages/nova/compute/provider_tree.py:180
Oct 09 16:10:38 compute-0 nova_compute[117331]: 2025-10-09 16:10:38.165 2 DEBUG nova.scheduler.client.report [None req-032ce1c1-3f29-43e1-9fe6-906ee67ede49 79bd7ccf35e1491a9ecd7213db027ff8 cb9cfe2a59d34561b954fd278bc3bf0a - - default default] Inventory has not changed for provider 593051b8-2000-437f-a915-2616fc8b1671 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 79, 'reserved': 0, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.12/site-packages/nova/scheduler/client/report.py:958
Oct 09 16:10:38 compute-0 sshd-session[140812]: Received disconnect from 193.46.255.7 port 26888:11:  [preauth]
Oct 09 16:10:38 compute-0 sshd-session[140812]: Disconnected from authenticating user root 193.46.255.7 port 26888 [preauth]
Oct 09 16:10:38 compute-0 sshd-session[140812]: PAM 2 more authentication failures; logname= uid=0 euid=0 tty=ssh ruser= rhost=193.46.255.7  user=root
Oct 09 16:10:38 compute-0 nova_compute[117331]: 2025-10-09 16:10:38.676 2 DEBUG oslo_concurrency.lockutils [None req-032ce1c1-3f29-43e1-9fe6-906ee67ede49 79bd7ccf35e1491a9ecd7213db027ff8 cb9cfe2a59d34561b954fd278bc3bf0a - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 2.102s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:424
Oct 09 16:10:38 compute-0 nova_compute[117331]: 2025-10-09 16:10:38.677 2 DEBUG nova.compute.manager [None req-032ce1c1-3f29-43e1-9fe6-906ee67ede49 79bd7ccf35e1491a9ecd7213db027ff8 cb9cfe2a59d34561b954fd278bc3bf0a - - default default] [instance: 42539348-4e24-48c0-9522-d27691bb1247] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.12/site-packages/nova/compute/manager.py:2869
Oct 09 16:10:39 compute-0 nova_compute[117331]: 2025-10-09 16:10:39.187 2 DEBUG nova.compute.manager [None req-032ce1c1-3f29-43e1-9fe6-906ee67ede49 79bd7ccf35e1491a9ecd7213db027ff8 cb9cfe2a59d34561b954fd278bc3bf0a - - default default] [instance: 42539348-4e24-48c0-9522-d27691bb1247] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.12/site-packages/nova/compute/manager.py:2016
Oct 09 16:10:39 compute-0 nova_compute[117331]: 2025-10-09 16:10:39.187 2 DEBUG nova.network.neutron [None req-032ce1c1-3f29-43e1-9fe6-906ee67ede49 79bd7ccf35e1491a9ecd7213db027ff8 cb9cfe2a59d34561b954fd278bc3bf0a - - default default] [instance: 42539348-4e24-48c0-9522-d27691bb1247] allocate_for_instance() allocate_for_instance /usr/lib/python3.12/site-packages/nova/network/neutron.py:1208
Oct 09 16:10:39 compute-0 nova_compute[117331]: 2025-10-09 16:10:39.188 2 WARNING neutronclient.v2_0.client [None req-032ce1c1-3f29-43e1-9fe6-906ee67ede49 79bd7ccf35e1491a9ecd7213db027ff8 cb9cfe2a59d34561b954fd278bc3bf0a - - default default] The python binding code in neutronclient is deprecated in favor of OpenstackSDK, please use that as this will be removed in a future release.
Oct 09 16:10:39 compute-0 nova_compute[117331]: 2025-10-09 16:10:39.189 2 WARNING neutronclient.v2_0.client [None req-032ce1c1-3f29-43e1-9fe6-906ee67ede49 79bd7ccf35e1491a9ecd7213db027ff8 cb9cfe2a59d34561b954fd278bc3bf0a - - default default] The python binding code in neutronclient is deprecated in favor of OpenstackSDK, please use that as this will be removed in a future release.
Oct 09 16:10:39 compute-0 nova_compute[117331]: 2025-10-09 16:10:39.700 2 INFO nova.virt.libvirt.driver [None req-032ce1c1-3f29-43e1-9fe6-906ee67ede49 79bd7ccf35e1491a9ecd7213db027ff8 cb9cfe2a59d34561b954fd278bc3bf0a - - default default] [instance: 42539348-4e24-48c0-9522-d27691bb1247] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Oct 09 16:10:39 compute-0 podman[140867]: 2025-10-09 16:10:39.825420186 +0000 UTC m=+0.062453273 container health_status 270830b5167cafbab735d8118980f26987e673cd0fa464dd2202394830bcca66 (image=38.102.83.66:5001/podified-master-centos10/openstack-multipathd:watcher_latest, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, tcib_build_tag=watcher_latest, org.label-schema.vendor=CentOS, config_id=multipathd, container_name=multipathd, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 10 Base Image, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': '38.102.83.66:5001/podified-master-centos10/openstack-multipathd:watcher_latest', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, io.buildah.version=1.41.4, org.label-schema.license=GPLv2, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251007)
Oct 09 16:10:39 compute-0 sshd-session[140865]: Invalid user dmdba from 134.199.199.215 port 59098
Oct 09 16:10:39 compute-0 sshd-session[140865]: pam_unix(sshd:auth): check pass; user unknown
Oct 09 16:10:39 compute-0 sshd-session[140865]: pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=134.199.199.215
Oct 09 16:10:40 compute-0 nova_compute[117331]: 2025-10-09 16:10:40.207 2 DEBUG nova.compute.manager [None req-032ce1c1-3f29-43e1-9fe6-906ee67ede49 79bd7ccf35e1491a9ecd7213db027ff8 cb9cfe2a59d34561b954fd278bc3bf0a - - default default] [instance: 42539348-4e24-48c0-9522-d27691bb1247] Start building block device mappings for instance. _build_resources /usr/lib/python3.12/site-packages/nova/compute/manager.py:2904
Oct 09 16:10:40 compute-0 nova_compute[117331]: 2025-10-09 16:10:40.262 2 DEBUG nova.network.neutron [None req-032ce1c1-3f29-43e1-9fe6-906ee67ede49 79bd7ccf35e1491a9ecd7213db027ff8 cb9cfe2a59d34561b954fd278bc3bf0a - - default default] [instance: 42539348-4e24-48c0-9522-d27691bb1247] Successfully created port: 623a49d8-b0af-4032-871e-8d96af50c4af _create_port_minimal /usr/lib/python3.12/site-packages/nova/network/neutron.py:550
Oct 09 16:10:41 compute-0 nova_compute[117331]: 2025-10-09 16:10:41.224 2 DEBUG nova.compute.manager [None req-032ce1c1-3f29-43e1-9fe6-906ee67ede49 79bd7ccf35e1491a9ecd7213db027ff8 cb9cfe2a59d34561b954fd278bc3bf0a - - default default] [instance: 42539348-4e24-48c0-9522-d27691bb1247] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.12/site-packages/nova/compute/manager.py:2678
Oct 09 16:10:41 compute-0 nova_compute[117331]: 2025-10-09 16:10:41.225 2 DEBUG nova.virt.libvirt.driver [None req-032ce1c1-3f29-43e1-9fe6-906ee67ede49 79bd7ccf35e1491a9ecd7213db027ff8 cb9cfe2a59d34561b954fd278bc3bf0a - - default default] [instance: 42539348-4e24-48c0-9522-d27691bb1247] Creating instance directory _create_image /usr/lib/python3.12/site-packages/nova/virt/libvirt/driver.py:5138
Oct 09 16:10:41 compute-0 nova_compute[117331]: 2025-10-09 16:10:41.225 2 INFO nova.virt.libvirt.driver [None req-032ce1c1-3f29-43e1-9fe6-906ee67ede49 79bd7ccf35e1491a9ecd7213db027ff8 cb9cfe2a59d34561b954fd278bc3bf0a - - default default] [instance: 42539348-4e24-48c0-9522-d27691bb1247] Creating image(s)
Oct 09 16:10:41 compute-0 nova_compute[117331]: 2025-10-09 16:10:41.226 2 DEBUG oslo_concurrency.lockutils [None req-032ce1c1-3f29-43e1-9fe6-906ee67ede49 79bd7ccf35e1491a9ecd7213db027ff8 cb9cfe2a59d34561b954fd278bc3bf0a - - default default] Acquiring lock "/var/lib/nova/instances/42539348-4e24-48c0-9522-d27691bb1247/disk.info" by "nova.virt.libvirt.imagebackend.Image.resolve_driver_format.<locals>.write_to_disk_info_file" inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:405
Oct 09 16:10:41 compute-0 nova_compute[117331]: 2025-10-09 16:10:41.226 2 DEBUG oslo_concurrency.lockutils [None req-032ce1c1-3f29-43e1-9fe6-906ee67ede49 79bd7ccf35e1491a9ecd7213db027ff8 cb9cfe2a59d34561b954fd278bc3bf0a - - default default] Lock "/var/lib/nova/instances/42539348-4e24-48c0-9522-d27691bb1247/disk.info" acquired by "nova.virt.libvirt.imagebackend.Image.resolve_driver_format.<locals>.write_to_disk_info_file" :: waited 0.000s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:410
Oct 09 16:10:41 compute-0 nova_compute[117331]: 2025-10-09 16:10:41.227 2 DEBUG oslo_concurrency.lockutils [None req-032ce1c1-3f29-43e1-9fe6-906ee67ede49 79bd7ccf35e1491a9ecd7213db027ff8 cb9cfe2a59d34561b954fd278bc3bf0a - - default default] Lock "/var/lib/nova/instances/42539348-4e24-48c0-9522-d27691bb1247/disk.info" "released" by "nova.virt.libvirt.imagebackend.Image.resolve_driver_format.<locals>.write_to_disk_info_file" :: held 0.001s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:424
Oct 09 16:10:41 compute-0 nova_compute[117331]: 2025-10-09 16:10:41.227 2 DEBUG oslo_concurrency.lockutils [None req-032ce1c1-3f29-43e1-9fe6-906ee67ede49 79bd7ccf35e1491a9ecd7213db027ff8 cb9cfe2a59d34561b954fd278bc3bf0a - - default default] Acquiring lock "cea3dacfdea0f3734ae526b812744e847bc2d356" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:405
Oct 09 16:10:41 compute-0 nova_compute[117331]: 2025-10-09 16:10:41.228 2 DEBUG oslo_concurrency.lockutils [None req-032ce1c1-3f29-43e1-9fe6-906ee67ede49 79bd7ccf35e1491a9ecd7213db027ff8 cb9cfe2a59d34561b954fd278bc3bf0a - - default default] Lock "cea3dacfdea0f3734ae526b812744e847bc2d356" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.000s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:410
Oct 09 16:10:42 compute-0 nova_compute[117331]: 2025-10-09 16:10:42.286 2 DEBUG oslo_utils.imageutils.format_inspector [None req-032ce1c1-3f29-43e1-9fe6-906ee67ede49 79bd7ccf35e1491a9ecd7213db027ff8 cb9cfe2a59d34561b954fd278bc3bf0a - - default default] Format inspector for vmdk does not match, excluding from consideration (Signature KDMV not found: b'QFI\xfb') _process_chunk /usr/lib/python3.12/site-packages/oslo_utils/imageutils/format_inspector.py:1365
Oct 09 16:10:42 compute-0 nova_compute[117331]: 2025-10-09 16:10:42.292 2 DEBUG oslo_utils.imageutils.format_inspector [None req-032ce1c1-3f29-43e1-9fe6-906ee67ede49 79bd7ccf35e1491a9ecd7213db027ff8 cb9cfe2a59d34561b954fd278bc3bf0a - - default default] Format inspector for vhdx does not match, excluding from consideration (Region signature not found at 30000) _process_chunk /usr/lib/python3.12/site-packages/oslo_utils/imageutils/format_inspector.py:1365
Oct 09 16:10:42 compute-0 nova_compute[117331]: 2025-10-09 16:10:42.292 2 DEBUG oslo_concurrency.processutils [None req-032ce1c1-3f29-43e1-9fe6-906ee67ede49 79bd7ccf35e1491a9ecd7213db027ff8 cb9cfe2a59d34561b954fd278bc3bf0a - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/cea3dacfdea0f3734ae526b812744e847bc2d356.part --force-share --output=json execute /usr/lib/python3.12/site-packages/oslo_concurrency/processutils.py:349
Oct 09 16:10:42 compute-0 sshd-session[140865]: Failed password for invalid user dmdba from 134.199.199.215 port 59098 ssh2
Oct 09 16:10:42 compute-0 nova_compute[117331]: 2025-10-09 16:10:42.353 2 DEBUG oslo_concurrency.processutils [None req-032ce1c1-3f29-43e1-9fe6-906ee67ede49 79bd7ccf35e1491a9ecd7213db027ff8 cb9cfe2a59d34561b954fd278bc3bf0a - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/cea3dacfdea0f3734ae526b812744e847bc2d356.part --force-share --output=json" returned: 0 in 0.060s execute /usr/lib/python3.12/site-packages/oslo_concurrency/processutils.py:372
Oct 09 16:10:42 compute-0 nova_compute[117331]: 2025-10-09 16:10:42.354 2 DEBUG nova.virt.images [None req-032ce1c1-3f29-43e1-9fe6-906ee67ede49 79bd7ccf35e1491a9ecd7213db027ff8 cb9cfe2a59d34561b954fd278bc3bf0a - - default default] b7d6e0af-25e4-4227-9dc6-43143898ceee was qcow2, converting to raw fetch_to_raw /usr/lib/python3.12/site-packages/nova/virt/images.py:278
Oct 09 16:10:42 compute-0 nova_compute[117331]: 2025-10-09 16:10:42.356 2 DEBUG nova.privsep.utils [None req-032ce1c1-3f29-43e1-9fe6-906ee67ede49 79bd7ccf35e1491a9ecd7213db027ff8 cb9cfe2a59d34561b954fd278bc3bf0a - - default default] Path '/var/lib/nova/instances' supports direct I/O supports_direct_io /usr/lib/python3.12/site-packages/nova/privsep/utils.py:63
Oct 09 16:10:42 compute-0 nova_compute[117331]: 2025-10-09 16:10:42.356 2 DEBUG oslo_concurrency.processutils [None req-032ce1c1-3f29-43e1-9fe6-906ee67ede49 79bd7ccf35e1491a9ecd7213db027ff8 cb9cfe2a59d34561b954fd278bc3bf0a - - default default] Running cmd (subprocess): qemu-img convert -t none -O raw -f qcow2 /var/lib/nova/instances/_base/cea3dacfdea0f3734ae526b812744e847bc2d356.part /var/lib/nova/instances/_base/cea3dacfdea0f3734ae526b812744e847bc2d356.converted execute /usr/lib/python3.12/site-packages/oslo_concurrency/processutils.py:349
Oct 09 16:10:42 compute-0 nova_compute[117331]: 2025-10-09 16:10:42.413 2 DEBUG nova.network.neutron [None req-032ce1c1-3f29-43e1-9fe6-906ee67ede49 79bd7ccf35e1491a9ecd7213db027ff8 cb9cfe2a59d34561b954fd278bc3bf0a - - default default] [instance: 42539348-4e24-48c0-9522-d27691bb1247] Successfully updated port: 623a49d8-b0af-4032-871e-8d96af50c4af _update_port /usr/lib/python3.12/site-packages/nova/network/neutron.py:588
Oct 09 16:10:42 compute-0 nova_compute[117331]: 2025-10-09 16:10:42.528 2 DEBUG oslo_concurrency.processutils [None req-032ce1c1-3f29-43e1-9fe6-906ee67ede49 79bd7ccf35e1491a9ecd7213db027ff8 cb9cfe2a59d34561b954fd278bc3bf0a - - default default] CMD "qemu-img convert -t none -O raw -f qcow2 /var/lib/nova/instances/_base/cea3dacfdea0f3734ae526b812744e847bc2d356.part /var/lib/nova/instances/_base/cea3dacfdea0f3734ae526b812744e847bc2d356.converted" returned: 0 in 0.172s execute /usr/lib/python3.12/site-packages/oslo_concurrency/processutils.py:372
Oct 09 16:10:42 compute-0 nova_compute[117331]: 2025-10-09 16:10:42.532 2 DEBUG oslo_concurrency.processutils [None req-032ce1c1-3f29-43e1-9fe6-906ee67ede49 79bd7ccf35e1491a9ecd7213db027ff8 cb9cfe2a59d34561b954fd278bc3bf0a - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/cea3dacfdea0f3734ae526b812744e847bc2d356.converted --force-share --output=json execute /usr/lib/python3.12/site-packages/oslo_concurrency/processutils.py:349
Oct 09 16:10:42 compute-0 nova_compute[117331]: 2025-10-09 16:10:42.608 2 DEBUG oslo_concurrency.processutils [None req-032ce1c1-3f29-43e1-9fe6-906ee67ede49 79bd7ccf35e1491a9ecd7213db027ff8 cb9cfe2a59d34561b954fd278bc3bf0a - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/cea3dacfdea0f3734ae526b812744e847bc2d356.converted --force-share --output=json" returned: 0 in 0.076s execute /usr/lib/python3.12/site-packages/oslo_concurrency/processutils.py:372
Oct 09 16:10:42 compute-0 nova_compute[117331]: 2025-10-09 16:10:42.609 2 DEBUG oslo_concurrency.lockutils [None req-032ce1c1-3f29-43e1-9fe6-906ee67ede49 79bd7ccf35e1491a9ecd7213db027ff8 cb9cfe2a59d34561b954fd278bc3bf0a - - default default] Lock "cea3dacfdea0f3734ae526b812744e847bc2d356" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 1.381s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:424
Oct 09 16:10:42 compute-0 nova_compute[117331]: 2025-10-09 16:10:42.609 2 DEBUG oslo_utils.imageutils.format_inspector [None req-032ce1c1-3f29-43e1-9fe6-906ee67ede49 79bd7ccf35e1491a9ecd7213db027ff8 cb9cfe2a59d34561b954fd278bc3bf0a - - default default] Format inspector for vmdk does not match, excluding from consideration (Signature KDMV not found: b'\xebH\x90\x00') _process_chunk /usr/lib/python3.12/site-packages/oslo_utils/imageutils/format_inspector.py:1365
Oct 09 16:10:42 compute-0 nova_compute[117331]: 2025-10-09 16:10:42.613 2 DEBUG oslo_utils.imageutils.format_inspector [None req-032ce1c1-3f29-43e1-9fe6-906ee67ede49 79bd7ccf35e1491a9ecd7213db027ff8 cb9cfe2a59d34561b954fd278bc3bf0a - - default default] Format inspector for vhdx does not match, excluding from consideration (Region signature not found at 30000) _process_chunk /usr/lib/python3.12/site-packages/oslo_utils/imageutils/format_inspector.py:1365
Oct 09 16:10:42 compute-0 nova_compute[117331]: 2025-10-09 16:10:42.614 2 INFO oslo.privsep.daemon [None req-032ce1c1-3f29-43e1-9fe6-906ee67ede49 79bd7ccf35e1491a9ecd7213db027ff8 cb9cfe2a59d34561b954fd278bc3bf0a - - default default] Running privsep helper: ['sudo', 'nova-rootwrap', '/etc/nova/rootwrap.conf', 'privsep-helper', '--config-file', '/etc/nova/nova.conf', '--config-file', '/etc/nova/nova-compute.conf', '--config-dir', '/etc/nova/nova.conf.d', '--privsep_context', 'nova.privsep.sys_admin_pctxt', '--privsep_sock_path', '/tmp/tmpxbv9w14j/privsep.sock']
Oct 09 16:10:42 compute-0 nova_compute[117331]: 2025-10-09 16:10:42.856 2 DEBUG nova.compute.manager [req-890f0bec-e86e-477b-962b-818244517649 req-c3c7b4f0-5451-4820-9aa9-c627814d8028 ee15961ad2be4bada6c106ac4f69f422 c076c42271004e7aa86e84416e33f826 - - default default] [instance: 42539348-4e24-48c0-9522-d27691bb1247] Received event network-changed-623a49d8-b0af-4032-871e-8d96af50c4af external_instance_event /usr/lib/python3.12/site-packages/nova/compute/manager.py:11812
Oct 09 16:10:42 compute-0 nova_compute[117331]: 2025-10-09 16:10:42.857 2 DEBUG nova.compute.manager [req-890f0bec-e86e-477b-962b-818244517649 req-c3c7b4f0-5451-4820-9aa9-c627814d8028 ee15961ad2be4bada6c106ac4f69f422 c076c42271004e7aa86e84416e33f826 - - default default] [instance: 42539348-4e24-48c0-9522-d27691bb1247] Refreshing instance network info cache due to event network-changed-623a49d8-b0af-4032-871e-8d96af50c4af. external_instance_event /usr/lib/python3.12/site-packages/nova/compute/manager.py:11817
Oct 09 16:10:42 compute-0 nova_compute[117331]: 2025-10-09 16:10:42.857 2 DEBUG oslo_concurrency.lockutils [req-890f0bec-e86e-477b-962b-818244517649 req-c3c7b4f0-5451-4820-9aa9-c627814d8028 ee15961ad2be4bada6c106ac4f69f422 c076c42271004e7aa86e84416e33f826 - - default default] Acquiring lock "refresh_cache-42539348-4e24-48c0-9522-d27691bb1247" lock /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:313
Oct 09 16:10:42 compute-0 nova_compute[117331]: 2025-10-09 16:10:42.857 2 DEBUG oslo_concurrency.lockutils [req-890f0bec-e86e-477b-962b-818244517649 req-c3c7b4f0-5451-4820-9aa9-c627814d8028 ee15961ad2be4bada6c106ac4f69f422 c076c42271004e7aa86e84416e33f826 - - default default] Acquired lock "refresh_cache-42539348-4e24-48c0-9522-d27691bb1247" lock /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:316
Oct 09 16:10:42 compute-0 nova_compute[117331]: 2025-10-09 16:10:42.858 2 DEBUG nova.network.neutron [req-890f0bec-e86e-477b-962b-818244517649 req-c3c7b4f0-5451-4820-9aa9-c627814d8028 ee15961ad2be4bada6c106ac4f69f422 c076c42271004e7aa86e84416e33f826 - - default default] [instance: 42539348-4e24-48c0-9522-d27691bb1247] Refreshing network info cache for port 623a49d8-b0af-4032-871e-8d96af50c4af _get_instance_nw_info /usr/lib/python3.12/site-packages/nova/network/neutron.py:2067
Oct 09 16:10:42 compute-0 nova_compute[117331]: 2025-10-09 16:10:42.923 2 DEBUG oslo_concurrency.lockutils [None req-032ce1c1-3f29-43e1-9fe6-906ee67ede49 79bd7ccf35e1491a9ecd7213db027ff8 cb9cfe2a59d34561b954fd278bc3bf0a - - default default] Acquiring lock "refresh_cache-42539348-4e24-48c0-9522-d27691bb1247" lock /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:313
Oct 09 16:10:43 compute-0 sshd-session[140865]: Connection closed by invalid user dmdba 134.199.199.215 port 59098 [preauth]
Oct 09 16:10:43 compute-0 sshd-session[140906]: Invalid user rancher from 134.199.199.215 port 59108
Oct 09 16:10:43 compute-0 sshd-session[140906]: pam_unix(sshd:auth): check pass; user unknown
Oct 09 16:10:43 compute-0 sshd-session[140906]: pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=134.199.199.215
Oct 09 16:10:43 compute-0 nova_compute[117331]: 2025-10-09 16:10:43.336 2 INFO oslo.privsep.daemon [None req-032ce1c1-3f29-43e1-9fe6-906ee67ede49 79bd7ccf35e1491a9ecd7213db027ff8 cb9cfe2a59d34561b954fd278bc3bf0a - - default default] Spawned new privsep daemon via rootwrap
Oct 09 16:10:43 compute-0 nova_compute[117331]: 2025-10-09 16:10:43.182 64 INFO oslo.privsep.daemon [-] privsep daemon starting
Oct 09 16:10:43 compute-0 nova_compute[117331]: 2025-10-09 16:10:43.185 64 INFO oslo.privsep.daemon [-] privsep process running with uid/gid: 0/0
Oct 09 16:10:43 compute-0 nova_compute[117331]: 2025-10-09 16:10:43.187 64 INFO oslo.privsep.daemon [-] privsep process running with capabilities (eff/prm/inh): CAP_CHOWN|CAP_DAC_OVERRIDE|CAP_DAC_READ_SEARCH|CAP_FOWNER|CAP_NET_ADMIN|CAP_SYS_ADMIN/CAP_CHOWN|CAP_DAC_OVERRIDE|CAP_DAC_READ_SEARCH|CAP_FOWNER|CAP_NET_ADMIN|CAP_SYS_ADMIN/none
Oct 09 16:10:43 compute-0 nova_compute[117331]: 2025-10-09 16:10:43.187 64 INFO oslo.privsep.daemon [-] privsep daemon running as pid 64
Oct 09 16:10:43 compute-0 nova_compute[117331]: 2025-10-09 16:10:43.368 2 WARNING neutronclient.v2_0.client [req-890f0bec-e86e-477b-962b-818244517649 req-c3c7b4f0-5451-4820-9aa9-c627814d8028 ee15961ad2be4bada6c106ac4f69f422 c076c42271004e7aa86e84416e33f826 - - default default] The python binding code in neutronclient is deprecated in favor of OpenstackSDK, please use that as this will be removed in a future release.
Oct 09 16:10:43 compute-0 nova_compute[117331]: 2025-10-09 16:10:43.415 2 DEBUG oslo_concurrency.processutils [None req-032ce1c1-3f29-43e1-9fe6-906ee67ede49 79bd7ccf35e1491a9ecd7213db027ff8 cb9cfe2a59d34561b954fd278bc3bf0a - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/cea3dacfdea0f3734ae526b812744e847bc2d356 --force-share --output=json execute /usr/lib/python3.12/site-packages/oslo_concurrency/processutils.py:349
Oct 09 16:10:43 compute-0 nova_compute[117331]: 2025-10-09 16:10:43.463 2 DEBUG oslo_concurrency.processutils [None req-032ce1c1-3f29-43e1-9fe6-906ee67ede49 79bd7ccf35e1491a9ecd7213db027ff8 cb9cfe2a59d34561b954fd278bc3bf0a - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/cea3dacfdea0f3734ae526b812744e847bc2d356 --force-share --output=json" returned: 0 in 0.048s execute /usr/lib/python3.12/site-packages/oslo_concurrency/processutils.py:372
Oct 09 16:10:43 compute-0 nova_compute[117331]: 2025-10-09 16:10:43.464 2 DEBUG oslo_concurrency.lockutils [None req-032ce1c1-3f29-43e1-9fe6-906ee67ede49 79bd7ccf35e1491a9ecd7213db027ff8 cb9cfe2a59d34561b954fd278bc3bf0a - - default default] Acquiring lock "cea3dacfdea0f3734ae526b812744e847bc2d356" by "nova.virt.libvirt.imagebackend.Qcow2.create_image.<locals>.create_qcow2_image" inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:405
Oct 09 16:10:43 compute-0 nova_compute[117331]: 2025-10-09 16:10:43.464 2 DEBUG oslo_concurrency.lockutils [None req-032ce1c1-3f29-43e1-9fe6-906ee67ede49 79bd7ccf35e1491a9ecd7213db027ff8 cb9cfe2a59d34561b954fd278bc3bf0a - - default default] Lock "cea3dacfdea0f3734ae526b812744e847bc2d356" acquired by "nova.virt.libvirt.imagebackend.Qcow2.create_image.<locals>.create_qcow2_image" :: waited 0.001s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:410
Oct 09 16:10:43 compute-0 nova_compute[117331]: 2025-10-09 16:10:43.465 2 DEBUG oslo_utils.imageutils.format_inspector [None req-032ce1c1-3f29-43e1-9fe6-906ee67ede49 79bd7ccf35e1491a9ecd7213db027ff8 cb9cfe2a59d34561b954fd278bc3bf0a - - default default] Format inspector for vmdk does not match, excluding from consideration (Signature KDMV not found: b'\xebH\x90\x00') _process_chunk /usr/lib/python3.12/site-packages/oslo_utils/imageutils/format_inspector.py:1365
Oct 09 16:10:43 compute-0 nova_compute[117331]: 2025-10-09 16:10:43.468 2 DEBUG oslo_utils.imageutils.format_inspector [None req-032ce1c1-3f29-43e1-9fe6-906ee67ede49 79bd7ccf35e1491a9ecd7213db027ff8 cb9cfe2a59d34561b954fd278bc3bf0a - - default default] Format inspector for vhdx does not match, excluding from consideration (Region signature not found at 30000) _process_chunk /usr/lib/python3.12/site-packages/oslo_utils/imageutils/format_inspector.py:1365
Oct 09 16:10:43 compute-0 nova_compute[117331]: 2025-10-09 16:10:43.468 2 DEBUG oslo_concurrency.processutils [None req-032ce1c1-3f29-43e1-9fe6-906ee67ede49 79bd7ccf35e1491a9ecd7213db027ff8 cb9cfe2a59d34561b954fd278bc3bf0a - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/cea3dacfdea0f3734ae526b812744e847bc2d356 --force-share --output=json execute /usr/lib/python3.12/site-packages/oslo_concurrency/processutils.py:349
Oct 09 16:10:43 compute-0 nova_compute[117331]: 2025-10-09 16:10:43.514 2 DEBUG oslo_concurrency.processutils [None req-032ce1c1-3f29-43e1-9fe6-906ee67ede49 79bd7ccf35e1491a9ecd7213db027ff8 cb9cfe2a59d34561b954fd278bc3bf0a - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/cea3dacfdea0f3734ae526b812744e847bc2d356 --force-share --output=json" returned: 0 in 0.046s execute /usr/lib/python3.12/site-packages/oslo_concurrency/processutils.py:372
Oct 09 16:10:43 compute-0 nova_compute[117331]: 2025-10-09 16:10:43.515 2 DEBUG oslo_concurrency.processutils [None req-032ce1c1-3f29-43e1-9fe6-906ee67ede49 79bd7ccf35e1491a9ecd7213db027ff8 cb9cfe2a59d34561b954fd278bc3bf0a - - default default] Running cmd (subprocess): env LC_ALL=C LANG=C qemu-img create -f qcow2 -o backing_file=/var/lib/nova/instances/_base/cea3dacfdea0f3734ae526b812744e847bc2d356,backing_fmt=raw /var/lib/nova/instances/42539348-4e24-48c0-9522-d27691bb1247/disk 1073741824 execute /usr/lib/python3.12/site-packages/oslo_concurrency/processutils.py:349
Oct 09 16:10:43 compute-0 nova_compute[117331]: 2025-10-09 16:10:43.542 2 DEBUG oslo_concurrency.processutils [None req-032ce1c1-3f29-43e1-9fe6-906ee67ede49 79bd7ccf35e1491a9ecd7213db027ff8 cb9cfe2a59d34561b954fd278bc3bf0a - - default default] CMD "env LC_ALL=C LANG=C qemu-img create -f qcow2 -o backing_file=/var/lib/nova/instances/_base/cea3dacfdea0f3734ae526b812744e847bc2d356,backing_fmt=raw /var/lib/nova/instances/42539348-4e24-48c0-9522-d27691bb1247/disk 1073741824" returned: 0 in 0.027s execute /usr/lib/python3.12/site-packages/oslo_concurrency/processutils.py:372
Oct 09 16:10:43 compute-0 nova_compute[117331]: 2025-10-09 16:10:43.543 2 DEBUG oslo_concurrency.lockutils [None req-032ce1c1-3f29-43e1-9fe6-906ee67ede49 79bd7ccf35e1491a9ecd7213db027ff8 cb9cfe2a59d34561b954fd278bc3bf0a - - default default] Lock "cea3dacfdea0f3734ae526b812744e847bc2d356" "released" by "nova.virt.libvirt.imagebackend.Qcow2.create_image.<locals>.create_qcow2_image" :: held 0.079s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:424
Oct 09 16:10:43 compute-0 nova_compute[117331]: 2025-10-09 16:10:43.543 2 DEBUG oslo_concurrency.processutils [None req-032ce1c1-3f29-43e1-9fe6-906ee67ede49 79bd7ccf35e1491a9ecd7213db027ff8 cb9cfe2a59d34561b954fd278bc3bf0a - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/cea3dacfdea0f3734ae526b812744e847bc2d356 --force-share --output=json execute /usr/lib/python3.12/site-packages/oslo_concurrency/processutils.py:349
Oct 09 16:10:43 compute-0 nova_compute[117331]: 2025-10-09 16:10:43.588 2 DEBUG oslo_concurrency.processutils [None req-032ce1c1-3f29-43e1-9fe6-906ee67ede49 79bd7ccf35e1491a9ecd7213db027ff8 cb9cfe2a59d34561b954fd278bc3bf0a - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/cea3dacfdea0f3734ae526b812744e847bc2d356 --force-share --output=json" returned: 0 in 0.045s execute /usr/lib/python3.12/site-packages/oslo_concurrency/processutils.py:372
Oct 09 16:10:43 compute-0 nova_compute[117331]: 2025-10-09 16:10:43.589 2 DEBUG nova.virt.disk.api [None req-032ce1c1-3f29-43e1-9fe6-906ee67ede49 79bd7ccf35e1491a9ecd7213db027ff8 cb9cfe2a59d34561b954fd278bc3bf0a - - default default] Checking if we can resize image /var/lib/nova/instances/42539348-4e24-48c0-9522-d27691bb1247/disk. size=1073741824 can_resize_image /usr/lib/python3.12/site-packages/nova/virt/disk/api.py:164
Oct 09 16:10:43 compute-0 nova_compute[117331]: 2025-10-09 16:10:43.589 2 DEBUG oslo_concurrency.processutils [None req-032ce1c1-3f29-43e1-9fe6-906ee67ede49 79bd7ccf35e1491a9ecd7213db027ff8 cb9cfe2a59d34561b954fd278bc3bf0a - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/42539348-4e24-48c0-9522-d27691bb1247/disk --force-share --output=json execute /usr/lib/python3.12/site-packages/oslo_concurrency/processutils.py:349
Oct 09 16:10:43 compute-0 nova_compute[117331]: 2025-10-09 16:10:43.611 2 DEBUG nova.network.neutron [req-890f0bec-e86e-477b-962b-818244517649 req-c3c7b4f0-5451-4820-9aa9-c627814d8028 ee15961ad2be4bada6c106ac4f69f422 c076c42271004e7aa86e84416e33f826 - - default default] [instance: 42539348-4e24-48c0-9522-d27691bb1247] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.12/site-packages/nova/network/neutron.py:3383
Oct 09 16:10:43 compute-0 nova_compute[117331]: 2025-10-09 16:10:43.636 2 DEBUG oslo_concurrency.processutils [None req-032ce1c1-3f29-43e1-9fe6-906ee67ede49 79bd7ccf35e1491a9ecd7213db027ff8 cb9cfe2a59d34561b954fd278bc3bf0a - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/42539348-4e24-48c0-9522-d27691bb1247/disk --force-share --output=json" returned: 0 in 0.047s execute /usr/lib/python3.12/site-packages/oslo_concurrency/processutils.py:372
Oct 09 16:10:43 compute-0 nova_compute[117331]: 2025-10-09 16:10:43.637 2 DEBUG nova.virt.disk.api [None req-032ce1c1-3f29-43e1-9fe6-906ee67ede49 79bd7ccf35e1491a9ecd7213db027ff8 cb9cfe2a59d34561b954fd278bc3bf0a - - default default] Cannot resize image /var/lib/nova/instances/42539348-4e24-48c0-9522-d27691bb1247/disk to a smaller size. can_resize_image /usr/lib/python3.12/site-packages/nova/virt/disk/api.py:170
Oct 09 16:10:43 compute-0 nova_compute[117331]: 2025-10-09 16:10:43.637 2 DEBUG nova.virt.libvirt.driver [None req-032ce1c1-3f29-43e1-9fe6-906ee67ede49 79bd7ccf35e1491a9ecd7213db027ff8 cb9cfe2a59d34561b954fd278bc3bf0a - - default default] [instance: 42539348-4e24-48c0-9522-d27691bb1247] Created local disks _create_image /usr/lib/python3.12/site-packages/nova/virt/libvirt/driver.py:5270
Oct 09 16:10:43 compute-0 nova_compute[117331]: 2025-10-09 16:10:43.637 2 DEBUG nova.virt.libvirt.driver [None req-032ce1c1-3f29-43e1-9fe6-906ee67ede49 79bd7ccf35e1491a9ecd7213db027ff8 cb9cfe2a59d34561b954fd278bc3bf0a - - default default] [instance: 42539348-4e24-48c0-9522-d27691bb1247] Ensure instance console log exists: /var/lib/nova/instances/42539348-4e24-48c0-9522-d27691bb1247/console.log _ensure_console_log_for_instance /usr/lib/python3.12/site-packages/nova/virt/libvirt/driver.py:5017
Oct 09 16:10:43 compute-0 nova_compute[117331]: 2025-10-09 16:10:43.638 2 DEBUG oslo_concurrency.lockutils [None req-032ce1c1-3f29-43e1-9fe6-906ee67ede49 79bd7ccf35e1491a9ecd7213db027ff8 cb9cfe2a59d34561b954fd278bc3bf0a - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:405
Oct 09 16:10:43 compute-0 nova_compute[117331]: 2025-10-09 16:10:43.638 2 DEBUG oslo_concurrency.lockutils [None req-032ce1c1-3f29-43e1-9fe6-906ee67ede49 79bd7ccf35e1491a9ecd7213db027ff8 cb9cfe2a59d34561b954fd278bc3bf0a - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:410
Oct 09 16:10:43 compute-0 nova_compute[117331]: 2025-10-09 16:10:43.638 2 DEBUG oslo_concurrency.lockutils [None req-032ce1c1-3f29-43e1-9fe6-906ee67ede49 79bd7ccf35e1491a9ecd7213db027ff8 cb9cfe2a59d34561b954fd278bc3bf0a - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:424
Oct 09 16:10:43 compute-0 nova_compute[117331]: 2025-10-09 16:10:43.769 2 DEBUG nova.network.neutron [req-890f0bec-e86e-477b-962b-818244517649 req-c3c7b4f0-5451-4820-9aa9-c627814d8028 ee15961ad2be4bada6c106ac4f69f422 c076c42271004e7aa86e84416e33f826 - - default default] [instance: 42539348-4e24-48c0-9522-d27691bb1247] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.12/site-packages/nova/network/neutron.py:116
Oct 09 16:10:44 compute-0 nova_compute[117331]: 2025-10-09 16:10:44.275 2 DEBUG oslo_concurrency.lockutils [req-890f0bec-e86e-477b-962b-818244517649 req-c3c7b4f0-5451-4820-9aa9-c627814d8028 ee15961ad2be4bada6c106ac4f69f422 c076c42271004e7aa86e84416e33f826 - - default default] Releasing lock "refresh_cache-42539348-4e24-48c0-9522-d27691bb1247" lock /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:334
Oct 09 16:10:44 compute-0 nova_compute[117331]: 2025-10-09 16:10:44.275 2 DEBUG oslo_concurrency.lockutils [None req-032ce1c1-3f29-43e1-9fe6-906ee67ede49 79bd7ccf35e1491a9ecd7213db027ff8 cb9cfe2a59d34561b954fd278bc3bf0a - - default default] Acquired lock "refresh_cache-42539348-4e24-48c0-9522-d27691bb1247" lock /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:316
Oct 09 16:10:44 compute-0 nova_compute[117331]: 2025-10-09 16:10:44.276 2 DEBUG nova.network.neutron [None req-032ce1c1-3f29-43e1-9fe6-906ee67ede49 79bd7ccf35e1491a9ecd7213db027ff8 cb9cfe2a59d34561b954fd278bc3bf0a - - default default] [instance: 42539348-4e24-48c0-9522-d27691bb1247] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.12/site-packages/nova/network/neutron.py:2070
Oct 09 16:10:45 compute-0 sshd-session[140906]: Failed password for invalid user rancher from 134.199.199.215 port 59108 ssh2
Oct 09 16:10:45 compute-0 nova_compute[117331]: 2025-10-09 16:10:45.505 2 DEBUG nova.network.neutron [None req-032ce1c1-3f29-43e1-9fe6-906ee67ede49 79bd7ccf35e1491a9ecd7213db027ff8 cb9cfe2a59d34561b954fd278bc3bf0a - - default default] [instance: 42539348-4e24-48c0-9522-d27691bb1247] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.12/site-packages/nova/network/neutron.py:3383
Oct 09 16:10:45 compute-0 sshd-session[140906]: Connection closed by invalid user rancher 134.199.199.215 port 59108 [preauth]
Oct 09 16:10:45 compute-0 nova_compute[117331]: 2025-10-09 16:10:45.734 2 WARNING neutronclient.v2_0.client [None req-032ce1c1-3f29-43e1-9fe6-906ee67ede49 79bd7ccf35e1491a9ecd7213db027ff8 cb9cfe2a59d34561b954fd278bc3bf0a - - default default] The python binding code in neutronclient is deprecated in favor of OpenstackSDK, please use that as this will be removed in a future release.
Oct 09 16:10:45 compute-0 podman[140925]: 2025-10-09 16:10:45.849154426 +0000 UTC m=+0.073530966 container health_status 86aa93e887899e54d383b0fc17fb3c5637680d1d8e146a130a557c837f0260fe (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter)
Oct 09 16:10:46 compute-0 nova_compute[117331]: 2025-10-09 16:10:46.302 2 DEBUG nova.network.neutron [None req-032ce1c1-3f29-43e1-9fe6-906ee67ede49 79bd7ccf35e1491a9ecd7213db027ff8 cb9cfe2a59d34561b954fd278bc3bf0a - - default default] [instance: 42539348-4e24-48c0-9522-d27691bb1247] Updating instance_info_cache with network_info: [{"id": "623a49d8-b0af-4032-871e-8d96af50c4af", "address": "fa:16:3e:b0:41:62", "network": {"id": "6eccf890-0637-4859-95f7-03ee0bf9c504", "bridge": "br-int", "label": "tempest-TestContinuousAudit-2145226265-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "9f3fc733922848aa9ddc9d0813f8ba80", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap623a49d8-b0", "ovs_interfaceid": "623a49d8-b0af-4032-871e-8d96af50c4af", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.12/site-packages/nova/network/neutron.py:116
Oct 09 16:10:46 compute-0 sshd-session[140947]: Invalid user kingbase from 134.199.199.215 port 35810
Oct 09 16:10:46 compute-0 sshd-session[140947]: pam_unix(sshd:auth): check pass; user unknown
Oct 09 16:10:46 compute-0 sshd-session[140947]: pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=134.199.199.215
Oct 09 16:10:46 compute-0 nova_compute[117331]: 2025-10-09 16:10:46.810 2 DEBUG oslo_concurrency.lockutils [None req-032ce1c1-3f29-43e1-9fe6-906ee67ede49 79bd7ccf35e1491a9ecd7213db027ff8 cb9cfe2a59d34561b954fd278bc3bf0a - - default default] Releasing lock "refresh_cache-42539348-4e24-48c0-9522-d27691bb1247" lock /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:334
Oct 09 16:10:46 compute-0 nova_compute[117331]: 2025-10-09 16:10:46.810 2 DEBUG nova.compute.manager [None req-032ce1c1-3f29-43e1-9fe6-906ee67ede49 79bd7ccf35e1491a9ecd7213db027ff8 cb9cfe2a59d34561b954fd278bc3bf0a - - default default] [instance: 42539348-4e24-48c0-9522-d27691bb1247] Instance network_info: |[{"id": "623a49d8-b0af-4032-871e-8d96af50c4af", "address": "fa:16:3e:b0:41:62", "network": {"id": "6eccf890-0637-4859-95f7-03ee0bf9c504", "bridge": "br-int", "label": "tempest-TestContinuousAudit-2145226265-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "9f3fc733922848aa9ddc9d0813f8ba80", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap623a49d8-b0", "ovs_interfaceid": "623a49d8-b0af-4032-871e-8d96af50c4af", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.12/site-packages/nova/compute/manager.py:2031
Oct 09 16:10:46 compute-0 nova_compute[117331]: 2025-10-09 16:10:46.813 2 DEBUG nova.virt.libvirt.driver [None req-032ce1c1-3f29-43e1-9fe6-906ee67ede49 79bd7ccf35e1491a9ecd7213db027ff8 cb9cfe2a59d34561b954fd278bc3bf0a - - default default] [instance: 42539348-4e24-48c0-9522-d27691bb1247] Start _get_guest_xml network_info=[{"id": "623a49d8-b0af-4032-871e-8d96af50c4af", "address": "fa:16:3e:b0:41:62", "network": {"id": "6eccf890-0637-4859-95f7-03ee0bf9c504", "bridge": "br-int", "label": "tempest-TestContinuousAudit-2145226265-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "9f3fc733922848aa9ddc9d0813f8ba80", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap623a49d8-b0", "ovs_interfaceid": "623a49d8-b0af-4032-871e-8d96af50c4af", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-09T16:07:02Z,direct_url=<?>,disk_format='qcow2',id=b7d6e0af-25e4-4227-9dc6-43143898ceee,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='b30e8cf5e10742f190212b4cb97ce2c9',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-09T16:07:04Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'device_type': 'disk', 'encrypted': False, 'size': 0, 'encryption_format': None, 'guest_format': None, 'boot_index': 0, 'device_name': '/dev/vda', 'disk_bus': 'virtio', 'encryption_options': None, 'encryption_secret_uuid': None, 'image_id': 'b7d6e0af-25e4-4227-9dc6-43143898ceee'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None}share_info=None _get_guest_xml /usr/lib/python3.12/site-packages/nova/virt/libvirt/driver.py:8046
Oct 09 16:10:46 compute-0 nova_compute[117331]: 2025-10-09 16:10:46.818 2 WARNING nova.virt.libvirt.driver [None req-032ce1c1-3f29-43e1-9fe6-906ee67ede49 79bd7ccf35e1491a9ecd7213db027ff8 cb9cfe2a59d34561b954fd278bc3bf0a - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Oct 09 16:10:46 compute-0 nova_compute[117331]: 2025-10-09 16:10:46.820 2 DEBUG nova.virt.driver [None req-032ce1c1-3f29-43e1-9fe6-906ee67ede49 79bd7ccf35e1491a9ecd7213db027ff8 cb9cfe2a59d34561b954fd278bc3bf0a - - default default] InstanceDriverMetadata: InstanceDriverMetadata(root_type='image', root_id='b7d6e0af-25e4-4227-9dc6-43143898ceee', instance_meta=NovaInstanceMeta(name='tempest-TestContinuousAudit-server-377185348', uuid='42539348-4e24-48c0-9522-d27691bb1247'), owner=OwnerMeta(userid='79bd7ccf35e1491a9ecd7213db027ff8', username='tempest-TestContinuousAudit-128857979-project-admin', projectid='cb9cfe2a59d34561b954fd278bc3bf0a', projectname='tempest-TestContinuousAudit-128857979'), image=ImageMeta(id='b7d6e0af-25e4-4227-9dc6-43143898ceee', name=None, container_format='bare', disk_format='qcow2', min_disk=1, min_ram=0, properties={'hw_rng_model': 'virtio'}), flavor=FlavorMeta(name='m1.nano', flavorid='5aeb5bf9-70f3-4ab6-b418-2c3319a7d2b3', memory_mb=128, vcpus=1, root_gb=1, ephemeral_gb=0, extra_specs={'hw_rng:allowed': 'True'}, swap=0), network_info=[{"id": "623a49d8-b0af-4032-871e-8d96af50c4af", "address": "fa:16:3e:b0:41:62", "network": {"id": "6eccf890-0637-4859-95f7-03ee0bf9c504", "bridge": "br-int", "label": "tempest-TestContinuousAudit-2145226265-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "9f3fc733922848aa9ddc9d0813f8ba80", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap623a49d8-b0", "ovs_interfaceid": "623a49d8-b0af-4032-871e-8d96af50c4af", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}], nova_package='32.1.0-0.20251008143712.076498e.el10', creation_time=1760026246.820791) get_instance_driver_metadata /usr/lib/python3.12/site-packages/nova/virt/driver.py:437
Oct 09 16:10:46 compute-0 nova_compute[117331]: 2025-10-09 16:10:46.825 2 DEBUG nova.virt.libvirt.host [None req-032ce1c1-3f29-43e1-9fe6-906ee67ede49 79bd7ccf35e1491a9ecd7213db027ff8 cb9cfe2a59d34561b954fd278bc3bf0a - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.12/site-packages/nova/virt/libvirt/host.py:1698
Oct 09 16:10:46 compute-0 nova_compute[117331]: 2025-10-09 16:10:46.826 2 DEBUG nova.virt.libvirt.host [None req-032ce1c1-3f29-43e1-9fe6-906ee67ede49 79bd7ccf35e1491a9ecd7213db027ff8 cb9cfe2a59d34561b954fd278bc3bf0a - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.12/site-packages/nova/virt/libvirt/host.py:1708
Oct 09 16:10:46 compute-0 nova_compute[117331]: 2025-10-09 16:10:46.829 2 DEBUG nova.virt.libvirt.host [None req-032ce1c1-3f29-43e1-9fe6-906ee67ede49 79bd7ccf35e1491a9ecd7213db027ff8 cb9cfe2a59d34561b954fd278bc3bf0a - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.12/site-packages/nova/virt/libvirt/host.py:1717
Oct 09 16:10:46 compute-0 nova_compute[117331]: 2025-10-09 16:10:46.830 2 DEBUG nova.virt.libvirt.host [None req-032ce1c1-3f29-43e1-9fe6-906ee67ede49 79bd7ccf35e1491a9ecd7213db027ff8 cb9cfe2a59d34561b954fd278bc3bf0a - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.12/site-packages/nova/virt/libvirt/host.py:1724
Oct 09 16:10:46 compute-0 nova_compute[117331]: 2025-10-09 16:10:46.830 2 DEBUG nova.virt.libvirt.driver [None req-032ce1c1-3f29-43e1-9fe6-906ee67ede49 79bd7ccf35e1491a9ecd7213db027ff8 cb9cfe2a59d34561b954fd278bc3bf0a - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.12/site-packages/nova/virt/libvirt/driver.py:5809
Oct 09 16:10:46 compute-0 nova_compute[117331]: 2025-10-09 16:10:46.831 2 DEBUG nova.virt.hardware [None req-032ce1c1-3f29-43e1-9fe6-906ee67ede49 79bd7ccf35e1491a9ecd7213db027ff8 cb9cfe2a59d34561b954fd278bc3bf0a - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-10-09T16:07:00Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='5aeb5bf9-70f3-4ab6-b418-2c3319a7d2b3',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-09T16:07:02Z,direct_url=<?>,disk_format='qcow2',id=b7d6e0af-25e4-4227-9dc6-43143898ceee,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='b30e8cf5e10742f190212b4cb97ce2c9',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-09T16:07:04Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.12/site-packages/nova/virt/hardware.py:571
Oct 09 16:10:46 compute-0 nova_compute[117331]: 2025-10-09 16:10:46.831 2 DEBUG nova.virt.hardware [None req-032ce1c1-3f29-43e1-9fe6-906ee67ede49 79bd7ccf35e1491a9ecd7213db027ff8 cb9cfe2a59d34561b954fd278bc3bf0a - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.12/site-packages/nova/virt/hardware.py:356
Oct 09 16:10:46 compute-0 nova_compute[117331]: 2025-10-09 16:10:46.831 2 DEBUG nova.virt.hardware [None req-032ce1c1-3f29-43e1-9fe6-906ee67ede49 79bd7ccf35e1491a9ecd7213db027ff8 cb9cfe2a59d34561b954fd278bc3bf0a - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.12/site-packages/nova/virt/hardware.py:360
Oct 09 16:10:46 compute-0 nova_compute[117331]: 2025-10-09 16:10:46.832 2 DEBUG nova.virt.hardware [None req-032ce1c1-3f29-43e1-9fe6-906ee67ede49 79bd7ccf35e1491a9ecd7213db027ff8 cb9cfe2a59d34561b954fd278bc3bf0a - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.12/site-packages/nova/virt/hardware.py:396
Oct 09 16:10:46 compute-0 nova_compute[117331]: 2025-10-09 16:10:46.832 2 DEBUG nova.virt.hardware [None req-032ce1c1-3f29-43e1-9fe6-906ee67ede49 79bd7ccf35e1491a9ecd7213db027ff8 cb9cfe2a59d34561b954fd278bc3bf0a - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.12/site-packages/nova/virt/hardware.py:400
Oct 09 16:10:46 compute-0 nova_compute[117331]: 2025-10-09 16:10:46.832 2 DEBUG nova.virt.hardware [None req-032ce1c1-3f29-43e1-9fe6-906ee67ede49 79bd7ccf35e1491a9ecd7213db027ff8 cb9cfe2a59d34561b954fd278bc3bf0a - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.12/site-packages/nova/virt/hardware.py:438
Oct 09 16:10:46 compute-0 nova_compute[117331]: 2025-10-09 16:10:46.833 2 DEBUG nova.virt.hardware [None req-032ce1c1-3f29-43e1-9fe6-906ee67ede49 79bd7ccf35e1491a9ecd7213db027ff8 cb9cfe2a59d34561b954fd278bc3bf0a - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.12/site-packages/nova/virt/hardware.py:577
Oct 09 16:10:46 compute-0 nova_compute[117331]: 2025-10-09 16:10:46.833 2 DEBUG nova.virt.hardware [None req-032ce1c1-3f29-43e1-9fe6-906ee67ede49 79bd7ccf35e1491a9ecd7213db027ff8 cb9cfe2a59d34561b954fd278bc3bf0a - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.12/site-packages/nova/virt/hardware.py:479
Oct 09 16:10:46 compute-0 nova_compute[117331]: 2025-10-09 16:10:46.833 2 DEBUG nova.virt.hardware [None req-032ce1c1-3f29-43e1-9fe6-906ee67ede49 79bd7ccf35e1491a9ecd7213db027ff8 cb9cfe2a59d34561b954fd278bc3bf0a - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.12/site-packages/nova/virt/hardware.py:509
Oct 09 16:10:46 compute-0 nova_compute[117331]: 2025-10-09 16:10:46.833 2 DEBUG nova.virt.hardware [None req-032ce1c1-3f29-43e1-9fe6-906ee67ede49 79bd7ccf35e1491a9ecd7213db027ff8 cb9cfe2a59d34561b954fd278bc3bf0a - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.12/site-packages/nova/virt/hardware.py:583
Oct 09 16:10:46 compute-0 nova_compute[117331]: 2025-10-09 16:10:46.834 2 DEBUG nova.virt.hardware [None req-032ce1c1-3f29-43e1-9fe6-906ee67ede49 79bd7ccf35e1491a9ecd7213db027ff8 cb9cfe2a59d34561b954fd278bc3bf0a - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.12/site-packages/nova/virt/hardware.py:585
Oct 09 16:10:46 compute-0 nova_compute[117331]: 2025-10-09 16:10:46.839 2 DEBUG nova.privsep.utils [None req-032ce1c1-3f29-43e1-9fe6-906ee67ede49 79bd7ccf35e1491a9ecd7213db027ff8 cb9cfe2a59d34561b954fd278bc3bf0a - - default default] Path '/var/lib/nova/instances' supports direct I/O supports_direct_io /usr/lib/python3.12/site-packages/nova/privsep/utils.py:63
Oct 09 16:10:46 compute-0 nova_compute[117331]: 2025-10-09 16:10:46.840 2 DEBUG nova.virt.libvirt.vif [None req-032ce1c1-3f29-43e1-9fe6-906ee67ede49 79bd7ccf35e1491a9ecd7213db027ff8 cb9cfe2a59d34561b954fd278bc3bf0a - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,compute_id=1,config_drive='',created_at=2025-10-09T16:10:33Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description=None,display_name='tempest-TestContinuousAudit-server-377185348',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testcontinuousaudit-server-377185348',id=1,image_ref='b7d6e0af-25e4-4227-9dc6-43143898ceee',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='cb9cfe2a59d34561b954fd278bc3bf0a',ramdisk_id='',reservation_id='r-9f9hvutc',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,manager,admin,member',image_base_image_ref='b7d6e0af-25e4-4227-9dc6-43143898ceee',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestContinuousAudit-128857979',owner_user_name='tempest-TestContinuousAudit-128857979-project-admin'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-09T16:10:40Z,user_data=None,user_id='79bd7ccf35e1491a9ecd7213db027ff8',uuid=42539348-4e24-48c0-9522-d27691bb1247,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "623a49d8-b0af-4032-871e-8d96af50c4af", "address": "fa:16:3e:b0:41:62", "network": {"id": "6eccf890-0637-4859-95f7-03ee0bf9c504", "bridge": "br-int", "label": "tempest-TestContinuousAudit-2145226265-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "9f3fc733922848aa9ddc9d0813f8ba80", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap623a49d8-b0", "ovs_interfaceid": "623a49d8-b0af-4032-871e-8d96af50c4af", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.12/site-packages/nova/virt/libvirt/vif.py:574
Oct 09 16:10:46 compute-0 nova_compute[117331]: 2025-10-09 16:10:46.840 2 DEBUG nova.network.os_vif_util [None req-032ce1c1-3f29-43e1-9fe6-906ee67ede49 79bd7ccf35e1491a9ecd7213db027ff8 cb9cfe2a59d34561b954fd278bc3bf0a - - default default] Converting VIF {"id": "623a49d8-b0af-4032-871e-8d96af50c4af", "address": "fa:16:3e:b0:41:62", "network": {"id": "6eccf890-0637-4859-95f7-03ee0bf9c504", "bridge": "br-int", "label": "tempest-TestContinuousAudit-2145226265-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "9f3fc733922848aa9ddc9d0813f8ba80", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap623a49d8-b0", "ovs_interfaceid": "623a49d8-b0af-4032-871e-8d96af50c4af", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.12/site-packages/nova/network/os_vif_util.py:511
Oct 09 16:10:46 compute-0 nova_compute[117331]: 2025-10-09 16:10:46.842 2 DEBUG nova.network.os_vif_util [None req-032ce1c1-3f29-43e1-9fe6-906ee67ede49 79bd7ccf35e1491a9ecd7213db027ff8 cb9cfe2a59d34561b954fd278bc3bf0a - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:b0:41:62,bridge_name='br-int',has_traffic_filtering=True,id=623a49d8-b0af-4032-871e-8d96af50c4af,network=Network(6eccf890-0637-4859-95f7-03ee0bf9c504),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap623a49d8-b0') nova_to_osvif_vif /usr/lib/python3.12/site-packages/nova/network/os_vif_util.py:548
Oct 09 16:10:46 compute-0 nova_compute[117331]: 2025-10-09 16:10:46.843 2 DEBUG nova.objects.instance [None req-032ce1c1-3f29-43e1-9fe6-906ee67ede49 79bd7ccf35e1491a9ecd7213db027ff8 cb9cfe2a59d34561b954fd278bc3bf0a - - default default] Lazy-loading 'pci_devices' on Instance uuid 42539348-4e24-48c0-9522-d27691bb1247 obj_load_attr /usr/lib/python3.12/site-packages/nova/objects/instance.py:1141
Oct 09 16:10:47 compute-0 nova_compute[117331]: 2025-10-09 16:10:47.354 2 DEBUG nova.virt.libvirt.driver [None req-032ce1c1-3f29-43e1-9fe6-906ee67ede49 79bd7ccf35e1491a9ecd7213db027ff8 cb9cfe2a59d34561b954fd278bc3bf0a - - default default] [instance: 42539348-4e24-48c0-9522-d27691bb1247] End _get_guest_xml xml=<domain type="kvm">
Oct 09 16:10:47 compute-0 nova_compute[117331]:   <uuid>42539348-4e24-48c0-9522-d27691bb1247</uuid>
Oct 09 16:10:47 compute-0 nova_compute[117331]:   <name>instance-00000001</name>
Oct 09 16:10:47 compute-0 nova_compute[117331]:   <memory>131072</memory>
Oct 09 16:10:47 compute-0 nova_compute[117331]:   <vcpu>1</vcpu>
Oct 09 16:10:47 compute-0 nova_compute[117331]:   <metadata>
Oct 09 16:10:47 compute-0 nova_compute[117331]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Oct 09 16:10:47 compute-0 nova_compute[117331]:       <nova:package version="32.1.0-0.20251008143712.076498e.el10"/>
Oct 09 16:10:47 compute-0 nova_compute[117331]:       <nova:name>tempest-TestContinuousAudit-server-377185348</nova:name>
Oct 09 16:10:47 compute-0 nova_compute[117331]:       <nova:creationTime>2025-10-09 16:10:46</nova:creationTime>
Oct 09 16:10:47 compute-0 nova_compute[117331]:       <nova:flavor name="m1.nano" id="5aeb5bf9-70f3-4ab6-b418-2c3319a7d2b3">
Oct 09 16:10:47 compute-0 nova_compute[117331]:         <nova:memory>128</nova:memory>
Oct 09 16:10:47 compute-0 nova_compute[117331]:         <nova:disk>1</nova:disk>
Oct 09 16:10:47 compute-0 nova_compute[117331]:         <nova:swap>0</nova:swap>
Oct 09 16:10:47 compute-0 nova_compute[117331]:         <nova:ephemeral>0</nova:ephemeral>
Oct 09 16:10:47 compute-0 nova_compute[117331]:         <nova:vcpus>1</nova:vcpus>
Oct 09 16:10:47 compute-0 nova_compute[117331]:         <nova:extraSpecs>
Oct 09 16:10:47 compute-0 nova_compute[117331]:           <nova:extraSpec name="hw_rng:allowed">True</nova:extraSpec>
Oct 09 16:10:47 compute-0 nova_compute[117331]:         </nova:extraSpecs>
Oct 09 16:10:47 compute-0 nova_compute[117331]:       </nova:flavor>
Oct 09 16:10:47 compute-0 nova_compute[117331]:       <nova:image uuid="b7d6e0af-25e4-4227-9dc6-43143898ceee">
Oct 09 16:10:47 compute-0 nova_compute[117331]:         <nova:containerFormat>bare</nova:containerFormat>
Oct 09 16:10:47 compute-0 nova_compute[117331]:         <nova:diskFormat>qcow2</nova:diskFormat>
Oct 09 16:10:47 compute-0 nova_compute[117331]:         <nova:minDisk>1</nova:minDisk>
Oct 09 16:10:47 compute-0 nova_compute[117331]:         <nova:minRam>0</nova:minRam>
Oct 09 16:10:47 compute-0 nova_compute[117331]:         <nova:properties>
Oct 09 16:10:47 compute-0 nova_compute[117331]:           <nova:property name="hw_rng_model">virtio</nova:property>
Oct 09 16:10:47 compute-0 nova_compute[117331]:         </nova:properties>
Oct 09 16:10:47 compute-0 nova_compute[117331]:       </nova:image>
Oct 09 16:10:47 compute-0 nova_compute[117331]:       <nova:owner>
Oct 09 16:10:47 compute-0 nova_compute[117331]:         <nova:user uuid="79bd7ccf35e1491a9ecd7213db027ff8">tempest-TestContinuousAudit-128857979-project-admin</nova:user>
Oct 09 16:10:47 compute-0 nova_compute[117331]:         <nova:project uuid="cb9cfe2a59d34561b954fd278bc3bf0a">tempest-TestContinuousAudit-128857979</nova:project>
Oct 09 16:10:47 compute-0 nova_compute[117331]:       </nova:owner>
Oct 09 16:10:47 compute-0 nova_compute[117331]:       <nova:root type="image" uuid="b7d6e0af-25e4-4227-9dc6-43143898ceee"/>
Oct 09 16:10:47 compute-0 nova_compute[117331]:       <nova:ports>
Oct 09 16:10:47 compute-0 nova_compute[117331]:         <nova:port uuid="623a49d8-b0af-4032-871e-8d96af50c4af">
Oct 09 16:10:47 compute-0 nova_compute[117331]:           <nova:ip type="fixed" address="10.100.0.5" ipVersion="4"/>
Oct 09 16:10:47 compute-0 nova_compute[117331]:         </nova:port>
Oct 09 16:10:47 compute-0 nova_compute[117331]:       </nova:ports>
Oct 09 16:10:47 compute-0 nova_compute[117331]:     </nova:instance>
Oct 09 16:10:47 compute-0 nova_compute[117331]:   </metadata>
Oct 09 16:10:47 compute-0 nova_compute[117331]:   <sysinfo type="smbios">
Oct 09 16:10:47 compute-0 nova_compute[117331]:     <system>
Oct 09 16:10:47 compute-0 nova_compute[117331]:       <entry name="manufacturer">RDO</entry>
Oct 09 16:10:47 compute-0 nova_compute[117331]:       <entry name="product">OpenStack Compute</entry>
Oct 09 16:10:47 compute-0 nova_compute[117331]:       <entry name="version">32.1.0-0.20251008143712.076498e.el10</entry>
Oct 09 16:10:47 compute-0 nova_compute[117331]:       <entry name="serial">42539348-4e24-48c0-9522-d27691bb1247</entry>
Oct 09 16:10:47 compute-0 nova_compute[117331]:       <entry name="uuid">42539348-4e24-48c0-9522-d27691bb1247</entry>
Oct 09 16:10:47 compute-0 nova_compute[117331]:       <entry name="family">Virtual Machine</entry>
Oct 09 16:10:47 compute-0 nova_compute[117331]:     </system>
Oct 09 16:10:47 compute-0 nova_compute[117331]:   </sysinfo>
Oct 09 16:10:47 compute-0 nova_compute[117331]:   <os>
Oct 09 16:10:47 compute-0 nova_compute[117331]:     <type arch="x86_64" machine="q35">hvm</type>
Oct 09 16:10:47 compute-0 nova_compute[117331]:     <boot dev="hd"/>
Oct 09 16:10:47 compute-0 nova_compute[117331]:     <smbios mode="sysinfo"/>
Oct 09 16:10:47 compute-0 nova_compute[117331]:   </os>
Oct 09 16:10:47 compute-0 nova_compute[117331]:   <features>
Oct 09 16:10:47 compute-0 nova_compute[117331]:     <acpi/>
Oct 09 16:10:47 compute-0 nova_compute[117331]:     <apic/>
Oct 09 16:10:47 compute-0 nova_compute[117331]:     <vmcoreinfo/>
Oct 09 16:10:47 compute-0 nova_compute[117331]:   </features>
Oct 09 16:10:47 compute-0 nova_compute[117331]:   <clock offset="utc">
Oct 09 16:10:47 compute-0 nova_compute[117331]:     <timer name="pit" tickpolicy="delay"/>
Oct 09 16:10:47 compute-0 nova_compute[117331]:     <timer name="rtc" tickpolicy="catchup"/>
Oct 09 16:10:47 compute-0 nova_compute[117331]:     <timer name="hpet" present="no"/>
Oct 09 16:10:47 compute-0 nova_compute[117331]:   </clock>
Oct 09 16:10:47 compute-0 nova_compute[117331]:   <cpu mode="host-model" match="exact">
Oct 09 16:10:47 compute-0 nova_compute[117331]:     <topology sockets="1" cores="1" threads="1"/>
Oct 09 16:10:47 compute-0 nova_compute[117331]:   </cpu>
Oct 09 16:10:47 compute-0 nova_compute[117331]:   <devices>
Oct 09 16:10:47 compute-0 nova_compute[117331]:     <disk type="file" device="disk">
Oct 09 16:10:47 compute-0 nova_compute[117331]:       <driver name="qemu" type="qcow2" cache="none"/>
Oct 09 16:10:47 compute-0 nova_compute[117331]:       <source file="/var/lib/nova/instances/42539348-4e24-48c0-9522-d27691bb1247/disk"/>
Oct 09 16:10:47 compute-0 nova_compute[117331]:       <target dev="vda" bus="virtio"/>
Oct 09 16:10:47 compute-0 nova_compute[117331]:     </disk>
Oct 09 16:10:47 compute-0 nova_compute[117331]:     <disk type="file" device="cdrom">
Oct 09 16:10:47 compute-0 nova_compute[117331]:       <driver name="qemu" type="raw" cache="none"/>
Oct 09 16:10:47 compute-0 nova_compute[117331]:       <source file="/var/lib/nova/instances/42539348-4e24-48c0-9522-d27691bb1247/disk.config"/>
Oct 09 16:10:47 compute-0 nova_compute[117331]:       <target dev="sda" bus="sata"/>
Oct 09 16:10:47 compute-0 nova_compute[117331]:     </disk>
Oct 09 16:10:47 compute-0 nova_compute[117331]:     <interface type="ethernet">
Oct 09 16:10:47 compute-0 nova_compute[117331]:       <mac address="fa:16:3e:b0:41:62"/>
Oct 09 16:10:47 compute-0 nova_compute[117331]:       <model type="virtio"/>
Oct 09 16:10:47 compute-0 nova_compute[117331]:       <driver name="vhost" rx_queue_size="512"/>
Oct 09 16:10:47 compute-0 nova_compute[117331]:       <mtu size="1442"/>
Oct 09 16:10:47 compute-0 nova_compute[117331]:       <target dev="tap623a49d8-b0"/>
Oct 09 16:10:47 compute-0 nova_compute[117331]:     </interface>
Oct 09 16:10:47 compute-0 nova_compute[117331]:     <serial type="pty">
Oct 09 16:10:47 compute-0 nova_compute[117331]:       <log file="/var/lib/nova/instances/42539348-4e24-48c0-9522-d27691bb1247/console.log" append="off"/>
Oct 09 16:10:47 compute-0 nova_compute[117331]:     </serial>
Oct 09 16:10:47 compute-0 nova_compute[117331]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Oct 09 16:10:47 compute-0 nova_compute[117331]:     <video>
Oct 09 16:10:47 compute-0 nova_compute[117331]:       <model type="virtio"/>
Oct 09 16:10:47 compute-0 nova_compute[117331]:     </video>
Oct 09 16:10:47 compute-0 nova_compute[117331]:     <input type="tablet" bus="usb"/>
Oct 09 16:10:47 compute-0 nova_compute[117331]:     <rng model="virtio">
Oct 09 16:10:47 compute-0 nova_compute[117331]:       <backend model="random">/dev/urandom</backend>
Oct 09 16:10:47 compute-0 nova_compute[117331]:     </rng>
Oct 09 16:10:47 compute-0 nova_compute[117331]:     <controller type="pci" model="pcie-root"/>
Oct 09 16:10:47 compute-0 nova_compute[117331]:     <controller type="pci" model="pcie-root-port"/>
Oct 09 16:10:47 compute-0 nova_compute[117331]:     <controller type="pci" model="pcie-root-port"/>
Oct 09 16:10:47 compute-0 nova_compute[117331]:     <controller type="pci" model="pcie-root-port"/>
Oct 09 16:10:47 compute-0 nova_compute[117331]:     <controller type="pci" model="pcie-root-port"/>
Oct 09 16:10:47 compute-0 nova_compute[117331]:     <controller type="pci" model="pcie-root-port"/>
Oct 09 16:10:47 compute-0 nova_compute[117331]:     <controller type="pci" model="pcie-root-port"/>
Oct 09 16:10:47 compute-0 nova_compute[117331]:     <controller type="pci" model="pcie-root-port"/>
Oct 09 16:10:47 compute-0 nova_compute[117331]:     <controller type="pci" model="pcie-root-port"/>
Oct 09 16:10:47 compute-0 nova_compute[117331]:     <controller type="pci" model="pcie-root-port"/>
Oct 09 16:10:47 compute-0 nova_compute[117331]:     <controller type="pci" model="pcie-root-port"/>
Oct 09 16:10:47 compute-0 nova_compute[117331]:     <controller type="pci" model="pcie-root-port"/>
Oct 09 16:10:47 compute-0 nova_compute[117331]:     <controller type="pci" model="pcie-root-port"/>
Oct 09 16:10:47 compute-0 nova_compute[117331]:     <controller type="pci" model="pcie-root-port"/>
Oct 09 16:10:47 compute-0 nova_compute[117331]:     <controller type="pci" model="pcie-root-port"/>
Oct 09 16:10:47 compute-0 nova_compute[117331]:     <controller type="pci" model="pcie-root-port"/>
Oct 09 16:10:47 compute-0 nova_compute[117331]:     <controller type="pci" model="pcie-root-port"/>
Oct 09 16:10:47 compute-0 nova_compute[117331]:     <controller type="pci" model="pcie-root-port"/>
Oct 09 16:10:47 compute-0 nova_compute[117331]:     <controller type="pci" model="pcie-root-port"/>
Oct 09 16:10:47 compute-0 nova_compute[117331]:     <controller type="pci" model="pcie-root-port"/>
Oct 09 16:10:47 compute-0 nova_compute[117331]:     <controller type="pci" model="pcie-root-port"/>
Oct 09 16:10:47 compute-0 nova_compute[117331]:     <controller type="pci" model="pcie-root-port"/>
Oct 09 16:10:47 compute-0 nova_compute[117331]:     <controller type="pci" model="pcie-root-port"/>
Oct 09 16:10:47 compute-0 nova_compute[117331]:     <controller type="pci" model="pcie-root-port"/>
Oct 09 16:10:47 compute-0 nova_compute[117331]:     <controller type="pci" model="pcie-root-port"/>
Oct 09 16:10:47 compute-0 nova_compute[117331]:     <controller type="usb" index="0"/>
Oct 09 16:10:47 compute-0 nova_compute[117331]:     <memballoon model="virtio" autodeflate="on" freePageReporting="on">
Oct 09 16:10:47 compute-0 nova_compute[117331]:       <stats period="10"/>
Oct 09 16:10:47 compute-0 nova_compute[117331]:     </memballoon>
Oct 09 16:10:47 compute-0 nova_compute[117331]:   </devices>
Oct 09 16:10:47 compute-0 nova_compute[117331]: </domain>
Oct 09 16:10:47 compute-0 nova_compute[117331]:  _get_guest_xml /usr/lib/python3.12/site-packages/nova/virt/libvirt/driver.py:8052
Oct 09 16:10:47 compute-0 nova_compute[117331]: 2025-10-09 16:10:47.356 2 DEBUG nova.compute.manager [None req-032ce1c1-3f29-43e1-9fe6-906ee67ede49 79bd7ccf35e1491a9ecd7213db027ff8 cb9cfe2a59d34561b954fd278bc3bf0a - - default default] [instance: 42539348-4e24-48c0-9522-d27691bb1247] Preparing to wait for external event network-vif-plugged-623a49d8-b0af-4032-871e-8d96af50c4af prepare_for_instance_event /usr/lib/python3.12/site-packages/nova/compute/manager.py:307
Oct 09 16:10:47 compute-0 nova_compute[117331]: 2025-10-09 16:10:47.356 2 DEBUG oslo_concurrency.lockutils [None req-032ce1c1-3f29-43e1-9fe6-906ee67ede49 79bd7ccf35e1491a9ecd7213db027ff8 cb9cfe2a59d34561b954fd278bc3bf0a - - default default] Acquiring lock "42539348-4e24-48c0-9522-d27691bb1247-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:405
Oct 09 16:10:47 compute-0 nova_compute[117331]: 2025-10-09 16:10:47.356 2 DEBUG oslo_concurrency.lockutils [None req-032ce1c1-3f29-43e1-9fe6-906ee67ede49 79bd7ccf35e1491a9ecd7213db027ff8 cb9cfe2a59d34561b954fd278bc3bf0a - - default default] Lock "42539348-4e24-48c0-9522-d27691bb1247-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:410
Oct 09 16:10:47 compute-0 nova_compute[117331]: 2025-10-09 16:10:47.356 2 DEBUG oslo_concurrency.lockutils [None req-032ce1c1-3f29-43e1-9fe6-906ee67ede49 79bd7ccf35e1491a9ecd7213db027ff8 cb9cfe2a59d34561b954fd278bc3bf0a - - default default] Lock "42539348-4e24-48c0-9522-d27691bb1247-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:424
Oct 09 16:10:47 compute-0 nova_compute[117331]: 2025-10-09 16:10:47.357 2 DEBUG nova.virt.libvirt.vif [None req-032ce1c1-3f29-43e1-9fe6-906ee67ede49 79bd7ccf35e1491a9ecd7213db027ff8 cb9cfe2a59d34561b954fd278bc3bf0a - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,compute_id=1,config_drive='',created_at=2025-10-09T16:10:33Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description=None,display_name='tempest-TestContinuousAudit-server-377185348',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testcontinuousaudit-server-377185348',id=1,image_ref='b7d6e0af-25e4-4227-9dc6-43143898ceee',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='cb9cfe2a59d34561b954fd278bc3bf0a',ramdisk_id='',reservation_id='r-9f9hvutc',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,manager,admin,member',image_base_image_ref='b7d6e0af-25e4-4227-9dc6-43143898ceee',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestContinuousAudit-128857979',owner_user_name='tempest-TestContinuousAudit-128857979-project-admin'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-09T16:10:40Z,user_data=None,user_id='79bd7ccf35e1491a9ecd7213db027ff8',uuid=42539348-4e24-48c0-9522-d27691bb1247,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "623a49d8-b0af-4032-871e-8d96af50c4af", "address": "fa:16:3e:b0:41:62", "network": {"id": "6eccf890-0637-4859-95f7-03ee0bf9c504", "bridge": "br-int", "label": "tempest-TestContinuousAudit-2145226265-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "9f3fc733922848aa9ddc9d0813f8ba80", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap623a49d8-b0", "ovs_interfaceid": "623a49d8-b0af-4032-871e-8d96af50c4af", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.12/site-packages/nova/virt/libvirt/vif.py:721
Oct 09 16:10:47 compute-0 nova_compute[117331]: 2025-10-09 16:10:47.358 2 DEBUG nova.network.os_vif_util [None req-032ce1c1-3f29-43e1-9fe6-906ee67ede49 79bd7ccf35e1491a9ecd7213db027ff8 cb9cfe2a59d34561b954fd278bc3bf0a - - default default] Converting VIF {"id": "623a49d8-b0af-4032-871e-8d96af50c4af", "address": "fa:16:3e:b0:41:62", "network": {"id": "6eccf890-0637-4859-95f7-03ee0bf9c504", "bridge": "br-int", "label": "tempest-TestContinuousAudit-2145226265-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "9f3fc733922848aa9ddc9d0813f8ba80", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap623a49d8-b0", "ovs_interfaceid": "623a49d8-b0af-4032-871e-8d96af50c4af", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.12/site-packages/nova/network/os_vif_util.py:511
Oct 09 16:10:47 compute-0 nova_compute[117331]: 2025-10-09 16:10:47.358 2 DEBUG nova.network.os_vif_util [None req-032ce1c1-3f29-43e1-9fe6-906ee67ede49 79bd7ccf35e1491a9ecd7213db027ff8 cb9cfe2a59d34561b954fd278bc3bf0a - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:b0:41:62,bridge_name='br-int',has_traffic_filtering=True,id=623a49d8-b0af-4032-871e-8d96af50c4af,network=Network(6eccf890-0637-4859-95f7-03ee0bf9c504),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap623a49d8-b0') nova_to_osvif_vif /usr/lib/python3.12/site-packages/nova/network/os_vif_util.py:548
Oct 09 16:10:47 compute-0 nova_compute[117331]: 2025-10-09 16:10:47.358 2 DEBUG os_vif [None req-032ce1c1-3f29-43e1-9fe6-906ee67ede49 79bd7ccf35e1491a9ecd7213db027ff8 cb9cfe2a59d34561b954fd278bc3bf0a - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:b0:41:62,bridge_name='br-int',has_traffic_filtering=True,id=623a49d8-b0af-4032-871e-8d96af50c4af,network=Network(6eccf890-0637-4859-95f7-03ee0bf9c504),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap623a49d8-b0') plug /usr/lib/python3.12/site-packages/os_vif/__init__.py:76
Oct 09 16:10:47 compute-0 nova_compute[117331]: 2025-10-09 16:10:47.537 2 DEBUG ovsdbapp.backend.ovs_idl [None req-032ce1c1-3f29-43e1-9fe6-906ee67ede49 79bd7ccf35e1491a9ecd7213db027ff8 cb9cfe2a59d34561b954fd278bc3bf0a - - default default] Created schema index Interface.name autocreate_indices /usr/lib/python3.12/site-packages/ovsdbapp/backend/ovs_idl/__init__.py:106
Oct 09 16:10:47 compute-0 nova_compute[117331]: 2025-10-09 16:10:47.539 2 DEBUG ovsdbapp.backend.ovs_idl [None req-032ce1c1-3f29-43e1-9fe6-906ee67ede49 79bd7ccf35e1491a9ecd7213db027ff8 cb9cfe2a59d34561b954fd278bc3bf0a - - default default] Created schema index Port.name autocreate_indices /usr/lib/python3.12/site-packages/ovsdbapp/backend/ovs_idl/__init__.py:106
Oct 09 16:10:47 compute-0 nova_compute[117331]: 2025-10-09 16:10:47.539 2 DEBUG ovsdbapp.backend.ovs_idl [None req-032ce1c1-3f29-43e1-9fe6-906ee67ede49 79bd7ccf35e1491a9ecd7213db027ff8 cb9cfe2a59d34561b954fd278bc3bf0a - - default default] Created schema index Bridge.name autocreate_indices /usr/lib/python3.12/site-packages/ovsdbapp/backend/ovs_idl/__init__.py:106
Oct 09 16:10:47 compute-0 nova_compute[117331]: 2025-10-09 16:10:47.540 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [None req-032ce1c1-3f29-43e1-9fe6-906ee67ede49 79bd7ccf35e1491a9ecd7213db027ff8 cb9cfe2a59d34561b954fd278bc3bf0a - - default default] tcp:127.0.0.1:6640: entering CONNECTING _transition /usr/lib64/python3.12/site-packages/ovs/reconnect.py:519
Oct 09 16:10:47 compute-0 nova_compute[117331]: 2025-10-09 16:10:47.540 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [None req-032ce1c1-3f29-43e1-9fe6-906ee67ede49 79bd7ccf35e1491a9ecd7213db027ff8 cb9cfe2a59d34561b954fd278bc3bf0a - - default default] [POLLOUT] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 09 16:10:47 compute-0 nova_compute[117331]: 2025-10-09 16:10:47.541 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [None req-032ce1c1-3f29-43e1-9fe6-906ee67ede49 79bd7ccf35e1491a9ecd7213db027ff8 cb9cfe2a59d34561b954fd278bc3bf0a - - default default] tcp:127.0.0.1:6640: entering ACTIVE _transition /usr/lib64/python3.12/site-packages/ovs/reconnect.py:519
Oct 09 16:10:47 compute-0 nova_compute[117331]: 2025-10-09 16:10:47.541 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [None req-032ce1c1-3f29-43e1-9fe6-906ee67ede49 79bd7ccf35e1491a9ecd7213db027ff8 cb9cfe2a59d34561b954fd278bc3bf0a - - default default] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 09 16:10:47 compute-0 nova_compute[117331]: 2025-10-09 16:10:47.543 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [None req-032ce1c1-3f29-43e1-9fe6-906ee67ede49 79bd7ccf35e1491a9ecd7213db027ff8 cb9cfe2a59d34561b954fd278bc3bf0a - - default default] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 09 16:10:47 compute-0 nova_compute[117331]: 2025-10-09 16:10:47.545 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [None req-032ce1c1-3f29-43e1-9fe6-906ee67ede49 79bd7ccf35e1491a9ecd7213db027ff8 cb9cfe2a59d34561b954fd278bc3bf0a - - default default] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 09 16:10:47 compute-0 nova_compute[117331]: 2025-10-09 16:10:47.553 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 22 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 09 16:10:47 compute-0 nova_compute[117331]: 2025-10-09 16:10:47.554 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.12/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 09 16:10:47 compute-0 nova_compute[117331]: 2025-10-09 16:10:47.554 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.12/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 09 16:10:47 compute-0 nova_compute[117331]: 2025-10-09 16:10:47.555 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 22 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 09 16:10:47 compute-0 nova_compute[117331]: 2025-10-09 16:10:47.555 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbCreateCommand(_result=None, table=QoS, columns={'type': 'linux-noop', 'external_ids': {'id': 'c75f7a03-7c2c-5a1f-a96a-1fe9cc39468a', '_type': 'linux-noop'}}, row=False) do_commit /usr/lib/python3.12/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 09 16:10:47 compute-0 nova_compute[117331]: 2025-10-09 16:10:47.556 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 09 16:10:47 compute-0 nova_compute[117331]: 2025-10-09 16:10:47.558 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:248
Oct 09 16:10:47 compute-0 nova_compute[117331]: 2025-10-09 16:10:47.559 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 09 16:10:47 compute-0 nova_compute[117331]: 2025-10-09 16:10:47.560 2 INFO oslo.privsep.daemon [None req-032ce1c1-3f29-43e1-9fe6-906ee67ede49 79bd7ccf35e1491a9ecd7213db027ff8 cb9cfe2a59d34561b954fd278bc3bf0a - - default default] Running privsep helper: ['sudo', 'nova-rootwrap', '/etc/nova/rootwrap.conf', 'privsep-helper', '--config-file', '/etc/nova/nova.conf', '--config-file', '/etc/nova/nova-compute.conf', '--config-dir', '/etc/nova/nova.conf.d', '--privsep_context', 'vif_plug_ovs.privsep.vif_plug', '--privsep_sock_path', '/tmp/tmp6ho9n4ta/privsep.sock']
Oct 09 16:10:48 compute-0 nova_compute[117331]: 2025-10-09 16:10:48.316 2 INFO oslo.privsep.daemon [None req-032ce1c1-3f29-43e1-9fe6-906ee67ede49 79bd7ccf35e1491a9ecd7213db027ff8 cb9cfe2a59d34561b954fd278bc3bf0a - - default default] Spawned new privsep daemon via rootwrap
Oct 09 16:10:48 compute-0 nova_compute[117331]: 2025-10-09 16:10:48.140 85 INFO oslo.privsep.daemon [-] privsep daemon starting
Oct 09 16:10:48 compute-0 nova_compute[117331]: 2025-10-09 16:10:48.144 85 INFO oslo.privsep.daemon [-] privsep process running with uid/gid: 0/0
Oct 09 16:10:48 compute-0 nova_compute[117331]: 2025-10-09 16:10:48.146 85 INFO oslo.privsep.daemon [-] privsep process running with capabilities (eff/prm/inh): CAP_DAC_OVERRIDE|CAP_NET_ADMIN/CAP_DAC_OVERRIDE|CAP_NET_ADMIN/none
Oct 09 16:10:48 compute-0 nova_compute[117331]: 2025-10-09 16:10:48.146 85 INFO oslo.privsep.daemon [-] privsep daemon running as pid 85
Oct 09 16:10:48 compute-0 nova_compute[117331]: 2025-10-09 16:10:48.594 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 22 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 09 16:10:48 compute-0 nova_compute[117331]: 2025-10-09 16:10:48.595 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap623a49d8-b0, may_exist=True, interface_attrs={}) do_commit /usr/lib/python3.12/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 09 16:10:48 compute-0 nova_compute[117331]: 2025-10-09 16:10:48.596 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Port, record=tap623a49d8-b0, col_values=(('qos', UUID('2c6070f6-f5f0-44d7-8076-d823fa8d4078')),), if_exists=True) do_commit /usr/lib/python3.12/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 09 16:10:48 compute-0 nova_compute[117331]: 2025-10-09 16:10:48.596 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=2): DbSetCommand(_result=None, table=Interface, record=tap623a49d8-b0, col_values=(('external_ids', {'iface-id': '623a49d8-b0af-4032-871e-8d96af50c4af', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:b0:41:62', 'vm-uuid': '42539348-4e24-48c0-9522-d27691bb1247'}),), if_exists=True) do_commit /usr/lib/python3.12/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 09 16:10:48 compute-0 nova_compute[117331]: 2025-10-09 16:10:48.643 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 09 16:10:48 compute-0 NetworkManager[1028]: <info>  [1760026248.6446] manager: (tap623a49d8-b0): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/22)
Oct 09 16:10:48 compute-0 nova_compute[117331]: 2025-10-09 16:10:48.646 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:248
Oct 09 16:10:48 compute-0 nova_compute[117331]: 2025-10-09 16:10:48.652 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 09 16:10:48 compute-0 nova_compute[117331]: 2025-10-09 16:10:48.653 2 INFO os_vif [None req-032ce1c1-3f29-43e1-9fe6-906ee67ede49 79bd7ccf35e1491a9ecd7213db027ff8 cb9cfe2a59d34561b954fd278bc3bf0a - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:b0:41:62,bridge_name='br-int',has_traffic_filtering=True,id=623a49d8-b0af-4032-871e-8d96af50c4af,network=Network(6eccf890-0637-4859-95f7-03ee0bf9c504),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap623a49d8-b0')
Oct 09 16:10:49 compute-0 sshd-session[140947]: Failed password for invalid user kingbase from 134.199.199.215 port 35810 ssh2
Oct 09 16:10:49 compute-0 sshd-session[140959]: Invalid user sonar from 134.199.199.215 port 35822
Oct 09 16:10:50 compute-0 sshd-session[140959]: pam_unix(sshd:auth): check pass; user unknown
Oct 09 16:10:50 compute-0 sshd-session[140959]: pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=134.199.199.215
Oct 09 16:10:50 compute-0 podman[140961]: 2025-10-09 16:10:50.045734839 +0000 UTC m=+0.043224682 container health_status 0e966c7a283689daf23c5db5017f23f685ee836319240188a9f70ddd246563c5 (image=38.102.83.66:5001/podified-master-centos10/openstack-neutron-metadata-agent-ovn:watcher_latest, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_metadata_agent, io.buildah.version=1.41.4, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent, org.label-schema.build-date=20251007, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=watcher_latest, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': '38.102.83.66:5001/podified-master-centos10/openstack-neutron-metadata-agent-ovn:watcher_latest', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']})
Oct 09 16:10:50 compute-0 podman[140962]: 2025-10-09 16:10:50.082103001 +0000 UTC m=+0.078008795 container health_status 8227728d23f374cb9528be6ddf01dff38d8c792912a17b0d137932a18390b820 (image=38.102.83.66:5001/podified-master-centos10/openstack-iscsid:watcher_latest, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': '38.102.83.66:5001/podified-master-centos10/openstack-iscsid:watcher_latest', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, io.buildah.version=1.41.4, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 10 Base Image, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=watcher_latest, org.label-schema.build-date=20251007, config_id=iscsid, container_name=iscsid, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS)
Oct 09 16:10:50 compute-0 nova_compute[117331]: 2025-10-09 16:10:50.193 2 DEBUG nova.virt.libvirt.driver [None req-032ce1c1-3f29-43e1-9fe6-906ee67ede49 79bd7ccf35e1491a9ecd7213db027ff8 cb9cfe2a59d34561b954fd278bc3bf0a - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.12/site-packages/nova/virt/libvirt/driver.py:13022
Oct 09 16:10:50 compute-0 nova_compute[117331]: 2025-10-09 16:10:50.193 2 DEBUG nova.virt.libvirt.driver [None req-032ce1c1-3f29-43e1-9fe6-906ee67ede49 79bd7ccf35e1491a9ecd7213db027ff8 cb9cfe2a59d34561b954fd278bc3bf0a - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.12/site-packages/nova/virt/libvirt/driver.py:13022
Oct 09 16:10:50 compute-0 nova_compute[117331]: 2025-10-09 16:10:50.193 2 DEBUG nova.virt.libvirt.driver [None req-032ce1c1-3f29-43e1-9fe6-906ee67ede49 79bd7ccf35e1491a9ecd7213db027ff8 cb9cfe2a59d34561b954fd278bc3bf0a - - default default] No VIF found with MAC fa:16:3e:b0:41:62, not building metadata _build_interface_metadata /usr/lib/python3.12/site-packages/nova/virt/libvirt/driver.py:12998
Oct 09 16:10:50 compute-0 nova_compute[117331]: 2025-10-09 16:10:50.194 2 INFO nova.virt.libvirt.driver [None req-032ce1c1-3f29-43e1-9fe6-906ee67ede49 79bd7ccf35e1491a9ecd7213db027ff8 cb9cfe2a59d34561b954fd278bc3bf0a - - default default] [instance: 42539348-4e24-48c0-9522-d27691bb1247] Using config drive
Oct 09 16:10:50 compute-0 nova_compute[117331]: 2025-10-09 16:10:50.324 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 09 16:10:50 compute-0 nova_compute[117331]: 2025-10-09 16:10:50.704 2 WARNING neutronclient.v2_0.client [None req-032ce1c1-3f29-43e1-9fe6-906ee67ede49 79bd7ccf35e1491a9ecd7213db027ff8 cb9cfe2a59d34561b954fd278bc3bf0a - - default default] The python binding code in neutronclient is deprecated in favor of OpenstackSDK, please use that as this will be removed in a future release.
Oct 09 16:10:50 compute-0 nova_compute[117331]: 2025-10-09 16:10:50.860 2 INFO nova.virt.libvirt.driver [None req-032ce1c1-3f29-43e1-9fe6-906ee67ede49 79bd7ccf35e1491a9ecd7213db027ff8 cb9cfe2a59d34561b954fd278bc3bf0a - - default default] [instance: 42539348-4e24-48c0-9522-d27691bb1247] Creating config drive at /var/lib/nova/instances/42539348-4e24-48c0-9522-d27691bb1247/disk.config
Oct 09 16:10:50 compute-0 nova_compute[117331]: 2025-10-09 16:10:50.865 2 DEBUG oslo_concurrency.processutils [None req-032ce1c1-3f29-43e1-9fe6-906ee67ede49 79bd7ccf35e1491a9ecd7213db027ff8 cb9cfe2a59d34561b954fd278bc3bf0a - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/42539348-4e24-48c0-9522-d27691bb1247/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 32.1.0-0.20251008143712.076498e.el10 -quiet -J -r -V config-2 /tmp/tmpzfhyf0wa execute /usr/lib/python3.12/site-packages/oslo_concurrency/processutils.py:349
Oct 09 16:10:51 compute-0 nova_compute[117331]: 2025-10-09 16:10:51.004 2 DEBUG oslo_concurrency.processutils [None req-032ce1c1-3f29-43e1-9fe6-906ee67ede49 79bd7ccf35e1491a9ecd7213db027ff8 cb9cfe2a59d34561b954fd278bc3bf0a - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/42539348-4e24-48c0-9522-d27691bb1247/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 32.1.0-0.20251008143712.076498e.el10 -quiet -J -r -V config-2 /tmp/tmpzfhyf0wa" returned: 0 in 0.140s execute /usr/lib/python3.12/site-packages/oslo_concurrency/processutils.py:372
Oct 09 16:10:51 compute-0 kernel: tun: Universal TUN/TAP device driver, 1.6
Oct 09 16:10:51 compute-0 kernel: tap623a49d8-b0: entered promiscuous mode
Oct 09 16:10:51 compute-0 NetworkManager[1028]: <info>  [1760026251.0737] manager: (tap623a49d8-b0): new Tun device (/org/freedesktop/NetworkManager/Devices/23)
Oct 09 16:10:51 compute-0 nova_compute[117331]: 2025-10-09 16:10:51.076 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 09 16:10:51 compute-0 ovn_controller[19752]: 2025-10-09T16:10:51Z|00040|binding|INFO|Claiming lport 623a49d8-b0af-4032-871e-8d96af50c4af for this chassis.
Oct 09 16:10:51 compute-0 ovn_controller[19752]: 2025-10-09T16:10:51Z|00041|binding|INFO|623a49d8-b0af-4032-871e-8d96af50c4af: Claiming fa:16:3e:b0:41:62 10.100.0.5
Oct 09 16:10:51 compute-0 nova_compute[117331]: 2025-10-09 16:10:51.079 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 09 16:10:51 compute-0 ovn_metadata_agent[28608]: 2025-10-09 16:10:51.090 28613 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:b0:41:62 10.100.0.5'], port_security=['fa:16:3e:b0:41:62 10.100.0.5'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.5/28', 'neutron:device_id': '42539348-4e24-48c0-9522-d27691bb1247', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-6eccf890-0637-4859-95f7-03ee0bf9c504', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'cb9cfe2a59d34561b954fd278bc3bf0a', 'neutron:revision_number': '4', 'neutron:security_group_ids': '4a9ac4bd-b9f3-4675-b61d-cd6433a07d9c', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=9e6fcc7f-9456-459b-adfd-2fc005d8b3e6, chassis=[<ovs.db.idl.Row object at 0x7f37b2576840>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f37b2576840>], logical_port=623a49d8-b0af-4032-871e-8d96af50c4af) old=Port_Binding(chassis=[]) matches /usr/lib/python3.12/site-packages/ovsdbapp/backend/ovs_idl/event.py:55
Oct 09 16:10:51 compute-0 ovn_metadata_agent[28608]: 2025-10-09 16:10:51.091 28613 INFO neutron.agent.ovn.metadata.agent [-] Port 623a49d8-b0af-4032-871e-8d96af50c4af in datapath 6eccf890-0637-4859-95f7-03ee0bf9c504 bound to our chassis
Oct 09 16:10:51 compute-0 ovn_metadata_agent[28608]: 2025-10-09 16:10:51.092 28613 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 6eccf890-0637-4859-95f7-03ee0bf9c504
Oct 09 16:10:51 compute-0 systemd-udevd[141017]: Network interface NamePolicy= disabled on kernel command line.
Oct 09 16:10:51 compute-0 NetworkManager[1028]: <info>  [1760026251.1141] device (tap623a49d8-b0): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Oct 09 16:10:51 compute-0 NetworkManager[1028]: <info>  [1760026251.1151] device (tap623a49d8-b0): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Oct 09 16:10:51 compute-0 ovn_metadata_agent[28608]: 2025-10-09 16:10:51.121 139687 DEBUG oslo.privsep.daemon [-] privsep: reply[0de8ee00-9a61-4982-919d-277bcd33a813]: (4, False) _call_back /usr/lib/python3.12/site-packages/oslo_privsep/daemon.py:515
Oct 09 16:10:51 compute-0 ovn_metadata_agent[28608]: 2025-10-09 16:10:51.122 28613 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap6eccf890-01 in ovnmeta-6eccf890-0637-4859-95f7-03ee0bf9c504 namespace provision_datapath /usr/lib/python3.12/site-packages/neutron/agent/ovn/metadata/agent.py:794
Oct 09 16:10:51 compute-0 ovn_metadata_agent[28608]: 2025-10-09 16:10:51.126 139687 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap6eccf890-00 not found in namespace None get_link_id /usr/lib/python3.12/site-packages/neutron/privileged/agent/linux/ip_lib.py:203
Oct 09 16:10:51 compute-0 ovn_metadata_agent[28608]: 2025-10-09 16:10:51.126 139687 DEBUG oslo.privsep.daemon [-] privsep: reply[35e5fc0d-0f19-4392-89bf-44f3be6fab05]: (4, False) _call_back /usr/lib/python3.12/site-packages/oslo_privsep/daemon.py:515
Oct 09 16:10:51 compute-0 ovn_metadata_agent[28608]: 2025-10-09 16:10:51.127 139687 DEBUG oslo.privsep.daemon [-] privsep: reply[95481424-b14e-4563-97b5-77dd09ccdcb8]: (4, False) _call_back /usr/lib/python3.12/site-packages/oslo_privsep/daemon.py:515
Oct 09 16:10:51 compute-0 ovn_metadata_agent[28608]: 2025-10-09 16:10:51.138 28727 DEBUG oslo.privsep.daemon [-] privsep: reply[cbfb8115-b219-47c8-ba62-079577a4de0e]: (4, None) _call_back /usr/lib/python3.12/site-packages/oslo_privsep/daemon.py:515
Oct 09 16:10:51 compute-0 systemd-machined[77487]: New machine qemu-1-instance-00000001.
Oct 09 16:10:51 compute-0 sshd-session[140947]: Connection closed by invalid user kingbase 134.199.199.215 port 35810 [preauth]
Oct 09 16:10:51 compute-0 ovn_controller[19752]: 2025-10-09T16:10:51Z|00042|binding|INFO|Setting lport 623a49d8-b0af-4032-871e-8d96af50c4af ovn-installed in OVS
Oct 09 16:10:51 compute-0 ovn_controller[19752]: 2025-10-09T16:10:51Z|00043|binding|INFO|Setting lport 623a49d8-b0af-4032-871e-8d96af50c4af up in Southbound
Oct 09 16:10:51 compute-0 nova_compute[117331]: 2025-10-09 16:10:51.196 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 09 16:10:51 compute-0 systemd[1]: Started Virtual Machine qemu-1-instance-00000001.
Oct 09 16:10:51 compute-0 ovn_metadata_agent[28608]: 2025-10-09 16:10:51.208 139687 DEBUG oslo.privsep.daemon [-] privsep: reply[ae91593b-69a1-43c3-9dd2-22ca6d4ad115]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.12/site-packages/oslo_privsep/daemon.py:515
Oct 09 16:10:51 compute-0 ovn_metadata_agent[28608]: 2025-10-09 16:10:51.210 28613 INFO oslo.privsep.daemon [-] Running privsep helper: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'privsep-helper', '--config-file', '/etc/neutron/neutron.conf', '--config-dir', '/etc/neutron.conf.d', '--privsep_context', 'neutron.privileged.link_cmd', '--privsep_sock_path', '/tmp/tmpnnz1nf0h/privsep.sock']
Oct 09 16:10:51 compute-0 nova_compute[117331]: 2025-10-09 16:10:51.934 2 DEBUG nova.compute.manager [req-3c8c161b-d799-406f-9aac-e4a3830098eb req-e4bd80a8-84c1-483a-b355-6597bd022a41 ee15961ad2be4bada6c106ac4f69f422 c076c42271004e7aa86e84416e33f826 - - default default] [instance: 42539348-4e24-48c0-9522-d27691bb1247] Received event network-vif-plugged-623a49d8-b0af-4032-871e-8d96af50c4af external_instance_event /usr/lib/python3.12/site-packages/nova/compute/manager.py:11812
Oct 09 16:10:51 compute-0 nova_compute[117331]: 2025-10-09 16:10:51.935 2 DEBUG oslo_concurrency.lockutils [req-3c8c161b-d799-406f-9aac-e4a3830098eb req-e4bd80a8-84c1-483a-b355-6597bd022a41 ee15961ad2be4bada6c106ac4f69f422 c076c42271004e7aa86e84416e33f826 - - default default] Acquiring lock "42539348-4e24-48c0-9522-d27691bb1247-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:405
Oct 09 16:10:51 compute-0 nova_compute[117331]: 2025-10-09 16:10:51.935 2 DEBUG oslo_concurrency.lockutils [req-3c8c161b-d799-406f-9aac-e4a3830098eb req-e4bd80a8-84c1-483a-b355-6597bd022a41 ee15961ad2be4bada6c106ac4f69f422 c076c42271004e7aa86e84416e33f826 - - default default] Lock "42539348-4e24-48c0-9522-d27691bb1247-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:410
Oct 09 16:10:51 compute-0 nova_compute[117331]: 2025-10-09 16:10:51.935 2 DEBUG oslo_concurrency.lockutils [req-3c8c161b-d799-406f-9aac-e4a3830098eb req-e4bd80a8-84c1-483a-b355-6597bd022a41 ee15961ad2be4bada6c106ac4f69f422 c076c42271004e7aa86e84416e33f826 - - default default] Lock "42539348-4e24-48c0-9522-d27691bb1247-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:424
Oct 09 16:10:51 compute-0 nova_compute[117331]: 2025-10-09 16:10:51.935 2 DEBUG nova.compute.manager [req-3c8c161b-d799-406f-9aac-e4a3830098eb req-e4bd80a8-84c1-483a-b355-6597bd022a41 ee15961ad2be4bada6c106ac4f69f422 c076c42271004e7aa86e84416e33f826 - - default default] [instance: 42539348-4e24-48c0-9522-d27691bb1247] Processing event network-vif-plugged-623a49d8-b0af-4032-871e-8d96af50c4af _process_instance_event /usr/lib/python3.12/site-packages/nova/compute/manager.py:11572
Oct 09 16:10:51 compute-0 ovn_metadata_agent[28608]: 2025-10-09 16:10:51.984 28613 INFO oslo.privsep.daemon [-] Spawned new privsep daemon via rootwrap
Oct 09 16:10:51 compute-0 ovn_metadata_agent[28608]: 2025-10-09 16:10:51.985 28613 DEBUG oslo.privsep.daemon [-] Accepted privsep connection to /tmp/tmpnnz1nf0h/privsep.sock __init__ /usr/lib/python3.12/site-packages/oslo_privsep/daemon.py:377
Oct 09 16:10:51 compute-0 ovn_metadata_agent[28608]: 2025-10-09 16:10:51.823 141048 INFO oslo.privsep.daemon [-] privsep daemon starting
Oct 09 16:10:51 compute-0 ovn_metadata_agent[28608]: 2025-10-09 16:10:51.828 141048 INFO oslo.privsep.daemon [-] privsep process running with uid/gid: 0/0
Oct 09 16:10:51 compute-0 ovn_metadata_agent[28608]: 2025-10-09 16:10:51.829 141048 INFO oslo.privsep.daemon [-] privsep process running with capabilities (eff/prm/inh): CAP_NET_ADMIN|CAP_SYS_ADMIN/CAP_NET_ADMIN|CAP_SYS_ADMIN/none
Oct 09 16:10:51 compute-0 ovn_metadata_agent[28608]: 2025-10-09 16:10:51.830 141048 INFO oslo.privsep.daemon [-] privsep daemon running as pid 141048
Oct 09 16:10:51 compute-0 ovn_metadata_agent[28608]: 2025-10-09 16:10:51.986 141048 DEBUG oslo.privsep.daemon [-] privsep: reply[8d136595-8831-4fa6-88db-f453b033c3a6]: (2,) _call_back /usr/lib/python3.12/site-packages/oslo_privsep/daemon.py:515
Oct 09 16:10:52 compute-0 sshd-session[140959]: Failed password for invalid user sonar from 134.199.199.215 port 35822 ssh2
Oct 09 16:10:52 compute-0 nova_compute[117331]: 2025-10-09 16:10:52.053 2 DEBUG nova.compute.manager [None req-032ce1c1-3f29-43e1-9fe6-906ee67ede49 79bd7ccf35e1491a9ecd7213db027ff8 cb9cfe2a59d34561b954fd278bc3bf0a - - default default] [instance: 42539348-4e24-48c0-9522-d27691bb1247] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.12/site-packages/nova/compute/manager.py:602
Oct 09 16:10:52 compute-0 nova_compute[117331]: 2025-10-09 16:10:52.064 2 DEBUG nova.virt.libvirt.driver [None req-032ce1c1-3f29-43e1-9fe6-906ee67ede49 79bd7ccf35e1491a9ecd7213db027ff8 cb9cfe2a59d34561b954fd278bc3bf0a - - default default] [instance: 42539348-4e24-48c0-9522-d27691bb1247] Guest created on hypervisor spawn /usr/lib/python3.12/site-packages/nova/virt/libvirt/driver.py:4816
Oct 09 16:10:52 compute-0 nova_compute[117331]: 2025-10-09 16:10:52.067 2 INFO nova.virt.libvirt.driver [-] [instance: 42539348-4e24-48c0-9522-d27691bb1247] Instance spawned successfully.
Oct 09 16:10:52 compute-0 nova_compute[117331]: 2025-10-09 16:10:52.068 2 DEBUG nova.virt.libvirt.driver [None req-032ce1c1-3f29-43e1-9fe6-906ee67ede49 79bd7ccf35e1491a9ecd7213db027ff8 cb9cfe2a59d34561b954fd278bc3bf0a - - default default] [instance: 42539348-4e24-48c0-9522-d27691bb1247] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.12/site-packages/nova/virt/libvirt/driver.py:985
Oct 09 16:10:52 compute-0 sshd-session[140959]: Connection closed by invalid user sonar 134.199.199.215 port 35822 [preauth]
Oct 09 16:10:52 compute-0 ovn_metadata_agent[28608]: 2025-10-09 16:10:52.463 141048 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "context-manager" by "neutron_lib.db.api._create_context_manager" inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:405
Oct 09 16:10:52 compute-0 ovn_metadata_agent[28608]: 2025-10-09 16:10:52.463 141048 DEBUG oslo_concurrency.lockutils [-] Lock "context-manager" acquired by "neutron_lib.db.api._create_context_manager" :: waited 0.000s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:410
Oct 09 16:10:52 compute-0 ovn_metadata_agent[28608]: 2025-10-09 16:10:52.463 141048 DEBUG oslo_concurrency.lockutils [-] Lock "context-manager" "released" by "neutron_lib.db.api._create_context_manager" :: held 0.000s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:424
Oct 09 16:10:52 compute-0 nova_compute[117331]: 2025-10-09 16:10:52.578 2 DEBUG nova.virt.libvirt.driver [None req-032ce1c1-3f29-43e1-9fe6-906ee67ede49 79bd7ccf35e1491a9ecd7213db027ff8 cb9cfe2a59d34561b954fd278bc3bf0a - - default default] [instance: 42539348-4e24-48c0-9522-d27691bb1247] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.12/site-packages/nova/virt/libvirt/driver.py:1014
Oct 09 16:10:52 compute-0 nova_compute[117331]: 2025-10-09 16:10:52.578 2 DEBUG nova.virt.libvirt.driver [None req-032ce1c1-3f29-43e1-9fe6-906ee67ede49 79bd7ccf35e1491a9ecd7213db027ff8 cb9cfe2a59d34561b954fd278bc3bf0a - - default default] [instance: 42539348-4e24-48c0-9522-d27691bb1247] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.12/site-packages/nova/virt/libvirt/driver.py:1014
Oct 09 16:10:52 compute-0 nova_compute[117331]: 2025-10-09 16:10:52.579 2 DEBUG nova.virt.libvirt.driver [None req-032ce1c1-3f29-43e1-9fe6-906ee67ede49 79bd7ccf35e1491a9ecd7213db027ff8 cb9cfe2a59d34561b954fd278bc3bf0a - - default default] [instance: 42539348-4e24-48c0-9522-d27691bb1247] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.12/site-packages/nova/virt/libvirt/driver.py:1014
Oct 09 16:10:52 compute-0 nova_compute[117331]: 2025-10-09 16:10:52.579 2 DEBUG nova.virt.libvirt.driver [None req-032ce1c1-3f29-43e1-9fe6-906ee67ede49 79bd7ccf35e1491a9ecd7213db027ff8 cb9cfe2a59d34561b954fd278bc3bf0a - - default default] [instance: 42539348-4e24-48c0-9522-d27691bb1247] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.12/site-packages/nova/virt/libvirt/driver.py:1014
Oct 09 16:10:52 compute-0 nova_compute[117331]: 2025-10-09 16:10:52.580 2 DEBUG nova.virt.libvirt.driver [None req-032ce1c1-3f29-43e1-9fe6-906ee67ede49 79bd7ccf35e1491a9ecd7213db027ff8 cb9cfe2a59d34561b954fd278bc3bf0a - - default default] [instance: 42539348-4e24-48c0-9522-d27691bb1247] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.12/site-packages/nova/virt/libvirt/driver.py:1014
Oct 09 16:10:52 compute-0 nova_compute[117331]: 2025-10-09 16:10:52.580 2 DEBUG nova.virt.libvirt.driver [None req-032ce1c1-3f29-43e1-9fe6-906ee67ede49 79bd7ccf35e1491a9ecd7213db027ff8 cb9cfe2a59d34561b954fd278bc3bf0a - - default default] [instance: 42539348-4e24-48c0-9522-d27691bb1247] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.12/site-packages/nova/virt/libvirt/driver.py:1014
Oct 09 16:10:52 compute-0 ovn_metadata_agent[28608]: 2025-10-09 16:10:52.970 141048 INFO oslo_service.backend [-] Loading backend: eventlet
Oct 09 16:10:52 compute-0 ovn_metadata_agent[28608]: 2025-10-09 16:10:52.976 141048 INFO oslo_service.backend [-] Backend 'eventlet' successfully loaded and cached.
Oct 09 16:10:53 compute-0 ovn_metadata_agent[28608]: 2025-10-09 16:10:53.056 141048 DEBUG oslo.privsep.daemon [-] privsep: reply[cba7506a-a0e6-4b87-9a2f-5d6959aff137]: (4, None) _call_back /usr/lib/python3.12/site-packages/oslo_privsep/daemon.py:515
Oct 09 16:10:53 compute-0 ovn_metadata_agent[28608]: 2025-10-09 16:10:53.060 139687 DEBUG oslo.privsep.daemon [-] privsep: reply[a923e8d9-589d-4d99-8c4e-4b033e680fc5]: (4, None) _call_back /usr/lib/python3.12/site-packages/oslo_privsep/daemon.py:515
Oct 09 16:10:53 compute-0 NetworkManager[1028]: <info>  [1760026253.0617] manager: (tap6eccf890-00): new Veth device (/org/freedesktop/NetworkManager/Devices/24)
Oct 09 16:10:53 compute-0 ovn_metadata_agent[28608]: 2025-10-09 16:10:53.088 141048 DEBUG oslo.privsep.daemon [-] privsep: reply[debfe41b-9893-42f5-996e-de389cb88975]: (4, None) _call_back /usr/lib/python3.12/site-packages/oslo_privsep/daemon.py:515
Oct 09 16:10:53 compute-0 ovn_metadata_agent[28608]: 2025-10-09 16:10:53.091 141048 DEBUG oslo.privsep.daemon [-] privsep: reply[d61205aa-8bc9-4efe-9882-e5495ee33de7]: (4, None) _call_back /usr/lib/python3.12/site-packages/oslo_privsep/daemon.py:515
Oct 09 16:10:53 compute-0 nova_compute[117331]: 2025-10-09 16:10:53.097 2 INFO nova.compute.manager [None req-032ce1c1-3f29-43e1-9fe6-906ee67ede49 79bd7ccf35e1491a9ecd7213db027ff8 cb9cfe2a59d34561b954fd278bc3bf0a - - default default] [instance: 42539348-4e24-48c0-9522-d27691bb1247] Took 11.87 seconds to spawn the instance on the hypervisor.
Oct 09 16:10:53 compute-0 nova_compute[117331]: 2025-10-09 16:10:53.098 2 DEBUG nova.compute.manager [None req-032ce1c1-3f29-43e1-9fe6-906ee67ede49 79bd7ccf35e1491a9ecd7213db027ff8 cb9cfe2a59d34561b954fd278bc3bf0a - - default default] [instance: 42539348-4e24-48c0-9522-d27691bb1247] Checking state _get_power_state /usr/lib/python3.12/site-packages/nova/compute/manager.py:1825
Oct 09 16:10:53 compute-0 NetworkManager[1028]: <info>  [1760026253.1137] device (tap6eccf890-00): carrier: link connected
Oct 09 16:10:53 compute-0 ovn_metadata_agent[28608]: 2025-10-09 16:10:53.119 141048 DEBUG oslo.privsep.daemon [-] privsep: reply[b413038b-6451-4f95-b5fd-e6b2c23a6793]: (4, None) _call_back /usr/lib/python3.12/site-packages/oslo_privsep/daemon.py:515
Oct 09 16:10:53 compute-0 ovn_metadata_agent[28608]: 2025-10-09 16:10:53.139 139687 DEBUG oslo.privsep.daemon [-] privsep: reply[42e720b2-ef53-4f26-9130-ab9efc7ac878]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap6eccf890-01'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:93:72:52'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 18], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 119671, 'reachable_time': 40699, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 141071, 'error': None, 'target': 'ovnmeta-6eccf890-0637-4859-95f7-03ee0bf9c504', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.12/site-packages/oslo_privsep/daemon.py:515
Oct 09 16:10:53 compute-0 ovn_metadata_agent[28608]: 2025-10-09 16:10:53.157 139687 DEBUG oslo.privsep.daemon [-] privsep: reply[6a07b068-8040-41f3-abda-e273b419486c]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe93:7252'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 119671, 'tstamp': 119671}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 141072, 'error': None, 'target': 'ovnmeta-6eccf890-0637-4859-95f7-03ee0bf9c504', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.12/site-packages/oslo_privsep/daemon.py:515
Oct 09 16:10:53 compute-0 ovn_metadata_agent[28608]: 2025-10-09 16:10:53.175 139687 DEBUG oslo.privsep.daemon [-] privsep: reply[57c0fba7-7784-46c6-98a8-5d63ae607192]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap6eccf890-01'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:93:72:52'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 18], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 119671, 'reachable_time': 40699, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 141073, 'error': None, 'target': 'ovnmeta-6eccf890-0637-4859-95f7-03ee0bf9c504', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.12/site-packages/oslo_privsep/daemon.py:515
Oct 09 16:10:53 compute-0 ovn_metadata_agent[28608]: 2025-10-09 16:10:53.208 139687 DEBUG oslo.privsep.daemon [-] privsep: reply[d65e8ea6-cad4-4ca7-990a-83542be0d6ed]: (4, None) _call_back /usr/lib/python3.12/site-packages/oslo_privsep/daemon.py:515
Oct 09 16:10:53 compute-0 ovn_metadata_agent[28608]: 2025-10-09 16:10:53.298 139687 DEBUG oslo.privsep.daemon [-] privsep: reply[d0413858-8292-4713-a5b2-620e11964482]: (4, None) _call_back /usr/lib/python3.12/site-packages/oslo_privsep/daemon.py:515
Oct 09 16:10:53 compute-0 ovn_metadata_agent[28608]: 2025-10-09 16:10:53.300 28613 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap6eccf890-00, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.12/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 09 16:10:53 compute-0 ovn_metadata_agent[28608]: 2025-10-09 16:10:53.301 28613 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.12/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 09 16:10:53 compute-0 ovn_metadata_agent[28608]: 2025-10-09 16:10:53.302 28613 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap6eccf890-00, may_exist=True, interface_attrs={}) do_commit /usr/lib/python3.12/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 09 16:10:53 compute-0 NetworkManager[1028]: <info>  [1760026253.3047] manager: (tap6eccf890-00): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/25)
Oct 09 16:10:53 compute-0 kernel: tap6eccf890-00: entered promiscuous mode
Oct 09 16:10:53 compute-0 nova_compute[117331]: 2025-10-09 16:10:53.304 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 09 16:10:53 compute-0 nova_compute[117331]: 2025-10-09 16:10:53.306 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 09 16:10:53 compute-0 ovn_metadata_agent[28608]: 2025-10-09 16:10:53.307 28613 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap6eccf890-00, col_values=(('external_ids', {'iface-id': '651a592f-942d-4927-af62-51b72afdc9b7'}),), if_exists=True) do_commit /usr/lib/python3.12/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 09 16:10:53 compute-0 ovn_controller[19752]: 2025-10-09T16:10:53Z|00044|binding|INFO|Releasing lport 651a592f-942d-4927-af62-51b72afdc9b7 from this chassis (sb_readonly=0)
Oct 09 16:10:53 compute-0 nova_compute[117331]: 2025-10-09 16:10:53.307 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 09 16:10:53 compute-0 nova_compute[117331]: 2025-10-09 16:10:53.319 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 09 16:10:53 compute-0 ovn_metadata_agent[28608]: 2025-10-09 16:10:53.320 139687 DEBUG oslo.privsep.daemon [-] privsep: reply[198942f8-1866-45c9-b7bb-31481a154ded]: (4, '') _call_back /usr/lib/python3.12/site-packages/oslo_privsep/daemon.py:515
Oct 09 16:10:53 compute-0 ovn_metadata_agent[28608]: 2025-10-09 16:10:53.321 28613 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/6eccf890-0637-4859-95f7-03ee0bf9c504.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/6eccf890-0637-4859-95f7-03ee0bf9c504.pid.haproxy' get_value_from_file /usr/lib/python3.12/site-packages/neutron/agent/linux/utils.py:268
Oct 09 16:10:53 compute-0 ovn_metadata_agent[28608]: 2025-10-09 16:10:53.321 28613 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/6eccf890-0637-4859-95f7-03ee0bf9c504.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/6eccf890-0637-4859-95f7-03ee0bf9c504.pid.haproxy' get_value_from_file /usr/lib/python3.12/site-packages/neutron/agent/linux/utils.py:268
Oct 09 16:10:53 compute-0 ovn_metadata_agent[28608]: 2025-10-09 16:10:53.321 28613 DEBUG neutron.agent.linux.external_process [-] No haproxy process started for 6eccf890-0637-4859-95f7-03ee0bf9c504 disable /usr/lib/python3.12/site-packages/neutron/agent/linux/external_process.py:173
Oct 09 16:10:53 compute-0 ovn_metadata_agent[28608]: 2025-10-09 16:10:53.322 28613 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/6eccf890-0637-4859-95f7-03ee0bf9c504.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/6eccf890-0637-4859-95f7-03ee0bf9c504.pid.haproxy' get_value_from_file /usr/lib/python3.12/site-packages/neutron/agent/linux/utils.py:268
Oct 09 16:10:53 compute-0 ovn_metadata_agent[28608]: 2025-10-09 16:10:53.322 139687 DEBUG oslo.privsep.daemon [-] privsep: reply[4181416f-0bfa-45d4-a78e-dd4da3e2874a]: (4, None) _call_back /usr/lib/python3.12/site-packages/oslo_privsep/daemon.py:515
Oct 09 16:10:53 compute-0 ovn_metadata_agent[28608]: 2025-10-09 16:10:53.322 28613 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/6eccf890-0637-4859-95f7-03ee0bf9c504.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/6eccf890-0637-4859-95f7-03ee0bf9c504.pid.haproxy' get_value_from_file /usr/lib/python3.12/site-packages/neutron/agent/linux/utils.py:268
Oct 09 16:10:53 compute-0 ovn_metadata_agent[28608]: 2025-10-09 16:10:53.323 139687 DEBUG oslo.privsep.daemon [-] privsep: reply[43e2e4ad-7c9d-47c3-8df8-f430562b1ccc]: (4, None) _call_back /usr/lib/python3.12/site-packages/oslo_privsep/daemon.py:515
Oct 09 16:10:53 compute-0 ovn_metadata_agent[28608]: 2025-10-09 16:10:53.323 28613 DEBUG neutron.agent.metadata.driver_base [-] haproxy_cfg = 
Oct 09 16:10:53 compute-0 ovn_metadata_agent[28608]: global
Oct 09 16:10:53 compute-0 ovn_metadata_agent[28608]:     log         /dev/log local0 debug
Oct 09 16:10:53 compute-0 ovn_metadata_agent[28608]:     log-tag     haproxy-metadata-proxy-6eccf890-0637-4859-95f7-03ee0bf9c504
Oct 09 16:10:53 compute-0 ovn_metadata_agent[28608]:     user        root
Oct 09 16:10:53 compute-0 ovn_metadata_agent[28608]:     group       root
Oct 09 16:10:53 compute-0 ovn_metadata_agent[28608]:     maxconn     1024
Oct 09 16:10:53 compute-0 ovn_metadata_agent[28608]:     pidfile     /var/lib/neutron/external/pids/6eccf890-0637-4859-95f7-03ee0bf9c504.pid.haproxy
Oct 09 16:10:53 compute-0 ovn_metadata_agent[28608]:     daemon
Oct 09 16:10:53 compute-0 ovn_metadata_agent[28608]: 
Oct 09 16:10:53 compute-0 ovn_metadata_agent[28608]: defaults
Oct 09 16:10:53 compute-0 ovn_metadata_agent[28608]:     log global
Oct 09 16:10:53 compute-0 ovn_metadata_agent[28608]:     mode http
Oct 09 16:10:53 compute-0 ovn_metadata_agent[28608]:     option httplog
Oct 09 16:10:53 compute-0 ovn_metadata_agent[28608]:     option dontlognull
Oct 09 16:10:53 compute-0 ovn_metadata_agent[28608]:     option http-server-close
Oct 09 16:10:53 compute-0 ovn_metadata_agent[28608]:     option forwardfor
Oct 09 16:10:53 compute-0 ovn_metadata_agent[28608]:     retries                 3
Oct 09 16:10:53 compute-0 ovn_metadata_agent[28608]:     timeout http-request    30s
Oct 09 16:10:53 compute-0 ovn_metadata_agent[28608]:     timeout connect         30s
Oct 09 16:10:53 compute-0 ovn_metadata_agent[28608]:     timeout client          32s
Oct 09 16:10:53 compute-0 ovn_metadata_agent[28608]:     timeout server          32s
Oct 09 16:10:53 compute-0 ovn_metadata_agent[28608]:     timeout http-keep-alive 30s
Oct 09 16:10:53 compute-0 ovn_metadata_agent[28608]: 
Oct 09 16:10:53 compute-0 ovn_metadata_agent[28608]: listen listener
Oct 09 16:10:53 compute-0 ovn_metadata_agent[28608]:     bind 169.254.169.254:80
Oct 09 16:10:53 compute-0 ovn_metadata_agent[28608]:     
Oct 09 16:10:53 compute-0 ovn_metadata_agent[28608]:     server metadata /var/lib/neutron/metadata_proxy
Oct 09 16:10:53 compute-0 ovn_metadata_agent[28608]: 
Oct 09 16:10:53 compute-0 ovn_metadata_agent[28608]:     http-request add-header X-OVN-Network-ID 6eccf890-0637-4859-95f7-03ee0bf9c504
Oct 09 16:10:53 compute-0 ovn_metadata_agent[28608]:  create_config_file /usr/lib/python3.12/site-packages/neutron/agent/metadata/driver_base.py:155
Oct 09 16:10:53 compute-0 ovn_metadata_agent[28608]: 2025-10-09 16:10:53.326 28613 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-6eccf890-0637-4859-95f7-03ee0bf9c504', 'env', 'PROCESS_TAG=haproxy-6eccf890-0637-4859-95f7-03ee0bf9c504', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/6eccf890-0637-4859-95f7-03ee0bf9c504.conf'] create_process /usr/lib/python3.12/site-packages/neutron/agent/linux/utils.py:84
Oct 09 16:10:53 compute-0 unix_chkpwd[141086]: password check failed for user (root)
Oct 09 16:10:53 compute-0 sshd-session[141076]: pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=134.199.199.215  user=root
Oct 09 16:10:53 compute-0 nova_compute[117331]: 2025-10-09 16:10:53.633 2 INFO nova.compute.manager [None req-032ce1c1-3f29-43e1-9fe6-906ee67ede49 79bd7ccf35e1491a9ecd7213db027ff8 cb9cfe2a59d34561b954fd278bc3bf0a - - default default] [instance: 42539348-4e24-48c0-9522-d27691bb1247] Took 17.14 seconds to build instance.
Oct 09 16:10:53 compute-0 nova_compute[117331]: 2025-10-09 16:10:53.643 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 09 16:10:53 compute-0 podman[141109]: 2025-10-09 16:10:53.682822979 +0000 UTC m=+0.043827807 container create 25aace54772efbca4207c850c5e1f9d989f40b105acc0ca18f8723c71c4dfef4 (image=38.102.83.66:5001/podified-master-centos10/openstack-neutron-metadata-agent-ovn:watcher_latest, name=neutron-haproxy-ovnmeta-6eccf890-0637-4859-95f7-03ee0bf9c504, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.build-date=20251007, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, tcib_build_tag=watcher_latest, io.buildah.version=1.41.4, org.label-schema.license=GPLv2, tcib_managed=true)
Oct 09 16:10:53 compute-0 systemd[1]: Started libpod-conmon-25aace54772efbca4207c850c5e1f9d989f40b105acc0ca18f8723c71c4dfef4.scope.
Oct 09 16:10:53 compute-0 systemd[1]: Started libcrun container.
Oct 09 16:10:53 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/24db9c447ea08a9db34b730071edaf403cb7a495aab2901528a9d26b7d49308a/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Oct 09 16:10:53 compute-0 podman[141109]: 2025-10-09 16:10:53.65924972 +0000 UTC m=+0.020254568 image pull 4e15c189a95c5dcd1203c76fe304de5c8f8616da9a258ca168cc54a517f49b87 38.102.83.66:5001/podified-master-centos10/openstack-neutron-metadata-agent-ovn:watcher_latest
Oct 09 16:10:53 compute-0 podman[141109]: 2025-10-09 16:10:53.771027312 +0000 UTC m=+0.132032150 container init 25aace54772efbca4207c850c5e1f9d989f40b105acc0ca18f8723c71c4dfef4 (image=38.102.83.66:5001/podified-master-centos10/openstack-neutron-metadata-agent-ovn:watcher_latest, name=neutron-haproxy-ovnmeta-6eccf890-0637-4859-95f7-03ee0bf9c504, io.buildah.version=1.41.4, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.build-date=20251007, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, tcib_managed=true, tcib_build_tag=watcher_latest)
Oct 09 16:10:53 compute-0 podman[141109]: 2025-10-09 16:10:53.776070085 +0000 UTC m=+0.137074913 container start 25aace54772efbca4207c850c5e1f9d989f40b105acc0ca18f8723c71c4dfef4 (image=38.102.83.66:5001/podified-master-centos10/openstack-neutron-metadata-agent-ovn:watcher_latest, name=neutron-haproxy-ovnmeta-6eccf890-0637-4859-95f7-03ee0bf9c504, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, io.buildah.version=1.41.4, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.build-date=20251007, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=watcher_latest, org.label-schema.license=GPLv2)
Oct 09 16:10:53 compute-0 neutron-haproxy-ovnmeta-6eccf890-0637-4859-95f7-03ee0bf9c504[141125]: [NOTICE]   (141129) : New worker (141131) forked
Oct 09 16:10:53 compute-0 neutron-haproxy-ovnmeta-6eccf890-0637-4859-95f7-03ee0bf9c504[141125]: [NOTICE]   (141129) : Loading success.
Oct 09 16:10:53 compute-0 nova_compute[117331]: 2025-10-09 16:10:53.985 2 DEBUG nova.compute.manager [req-538feedd-ca90-4a73-8269-99b9449434f7 req-ad25928a-063a-47c5-b1ab-47c151cfec23 ee15961ad2be4bada6c106ac4f69f422 c076c42271004e7aa86e84416e33f826 - - default default] [instance: 42539348-4e24-48c0-9522-d27691bb1247] Received event network-vif-plugged-623a49d8-b0af-4032-871e-8d96af50c4af external_instance_event /usr/lib/python3.12/site-packages/nova/compute/manager.py:11812
Oct 09 16:10:53 compute-0 nova_compute[117331]: 2025-10-09 16:10:53.985 2 DEBUG oslo_concurrency.lockutils [req-538feedd-ca90-4a73-8269-99b9449434f7 req-ad25928a-063a-47c5-b1ab-47c151cfec23 ee15961ad2be4bada6c106ac4f69f422 c076c42271004e7aa86e84416e33f826 - - default default] Acquiring lock "42539348-4e24-48c0-9522-d27691bb1247-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:405
Oct 09 16:10:53 compute-0 nova_compute[117331]: 2025-10-09 16:10:53.986 2 DEBUG oslo_concurrency.lockutils [req-538feedd-ca90-4a73-8269-99b9449434f7 req-ad25928a-063a-47c5-b1ab-47c151cfec23 ee15961ad2be4bada6c106ac4f69f422 c076c42271004e7aa86e84416e33f826 - - default default] Lock "42539348-4e24-48c0-9522-d27691bb1247-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:410
Oct 09 16:10:53 compute-0 nova_compute[117331]: 2025-10-09 16:10:53.986 2 DEBUG oslo_concurrency.lockutils [req-538feedd-ca90-4a73-8269-99b9449434f7 req-ad25928a-063a-47c5-b1ab-47c151cfec23 ee15961ad2be4bada6c106ac4f69f422 c076c42271004e7aa86e84416e33f826 - - default default] Lock "42539348-4e24-48c0-9522-d27691bb1247-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:424
Oct 09 16:10:53 compute-0 nova_compute[117331]: 2025-10-09 16:10:53.986 2 DEBUG nova.compute.manager [req-538feedd-ca90-4a73-8269-99b9449434f7 req-ad25928a-063a-47c5-b1ab-47c151cfec23 ee15961ad2be4bada6c106ac4f69f422 c076c42271004e7aa86e84416e33f826 - - default default] [instance: 42539348-4e24-48c0-9522-d27691bb1247] No waiting events found dispatching network-vif-plugged-623a49d8-b0af-4032-871e-8d96af50c4af pop_instance_event /usr/lib/python3.12/site-packages/nova/compute/manager.py:344
Oct 09 16:10:53 compute-0 nova_compute[117331]: 2025-10-09 16:10:53.986 2 WARNING nova.compute.manager [req-538feedd-ca90-4a73-8269-99b9449434f7 req-ad25928a-063a-47c5-b1ab-47c151cfec23 ee15961ad2be4bada6c106ac4f69f422 c076c42271004e7aa86e84416e33f826 - - default default] [instance: 42539348-4e24-48c0-9522-d27691bb1247] Received unexpected event network-vif-plugged-623a49d8-b0af-4032-871e-8d96af50c4af for instance with vm_state active and task_state None.
Oct 09 16:10:54 compute-0 nova_compute[117331]: 2025-10-09 16:10:54.139 2 DEBUG oslo_concurrency.lockutils [None req-032ce1c1-3f29-43e1-9fe6-906ee67ede49 79bd7ccf35e1491a9ecd7213db027ff8 cb9cfe2a59d34561b954fd278bc3bf0a - - default default] Lock "42539348-4e24-48c0-9522-d27691bb1247" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 18.658s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:424
Oct 09 16:10:55 compute-0 nova_compute[117331]: 2025-10-09 16:10:55.325 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 09 16:10:55 compute-0 sshd-session[141076]: Failed password for root from 134.199.199.215 port 35836 ssh2
Oct 09 16:10:55 compute-0 sshd-session[141076]: Connection closed by authenticating user root 134.199.199.215 port 35836 [preauth]
Oct 09 16:10:56 compute-0 sshd-session[141140]: Invalid user oscar from 134.199.199.215 port 46212
Oct 09 16:10:56 compute-0 sshd-session[141140]: pam_unix(sshd:auth): check pass; user unknown
Oct 09 16:10:56 compute-0 sshd-session[141140]: pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=134.199.199.215
Oct 09 16:10:58 compute-0 nova_compute[117331]: 2025-10-09 16:10:58.645 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 09 16:10:58 compute-0 podman[141142]: 2025-10-09 16:10:58.828658855 +0000 UTC m=+0.052592111 container health_status 64fa251112d01abb34ec1bb8e22019815ddcaaaa391ce5ec91517147ac5a9994 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.expose-services=, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., managed_by=edpm_ansible, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., vendor=Red Hat, Inc., distribution-scope=public, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, io.openshift.tags=minimal rhel9, architecture=x86_64, version=9.6, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, com.redhat.component=ubi9-minimal-container, maintainer=Red Hat, Inc., release=1755695350, build-date=2025-08-20T13:12:41, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, url=https://catalog.redhat.com/en/search?searchType=containers, vcs-type=git, config_id=edpm, container_name=openstack_network_exporter, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., name=ubi9-minimal, io.buildah.version=1.33.7)
Oct 09 16:10:59 compute-0 sshd-session[141140]: Failed password for invalid user oscar from 134.199.199.215 port 46212 ssh2
Oct 09 16:10:59 compute-0 sshd-session[141164]: Connection closed by 36.224.53.32 port 44818
Oct 09 16:10:59 compute-0 podman[127775]: time="2025-10-09T16:10:59Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Oct 09 16:10:59 compute-0 podman[127775]: @ - - [09/Oct/2025:16:10:59 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 20749 "" "Go-http-client/1.1"
Oct 09 16:10:59 compute-0 podman[127775]: @ - - [09/Oct/2025:16:10:59 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 3479 "" "Go-http-client/1.1"
Oct 09 16:11:00 compute-0 sshd-session[141165]: Invalid user elastic from 134.199.199.215 port 46228
Oct 09 16:11:00 compute-0 sshd-session[141165]: pam_unix(sshd:auth): check pass; user unknown
Oct 09 16:11:00 compute-0 sshd-session[141165]: pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=134.199.199.215
Oct 09 16:11:00 compute-0 sshd-session[141140]: Connection closed by invalid user oscar 134.199.199.215 port 46212 [preauth]
Oct 09 16:11:00 compute-0 nova_compute[117331]: 2025-10-09 16:11:00.327 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 09 16:11:01 compute-0 openstack_network_exporter[129925]: ERROR   16:11:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Oct 09 16:11:01 compute-0 openstack_network_exporter[129925]: ERROR   16:11:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Oct 09 16:11:01 compute-0 openstack_network_exporter[129925]: ERROR   16:11:01 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Oct 09 16:11:01 compute-0 openstack_network_exporter[129925]: ERROR   16:11:01 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Oct 09 16:11:01 compute-0 openstack_network_exporter[129925]: 
Oct 09 16:11:01 compute-0 openstack_network_exporter[129925]: ERROR   16:11:01 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Oct 09 16:11:01 compute-0 openstack_network_exporter[129925]: 
Oct 09 16:11:01 compute-0 podman[141168]: 2025-10-09 16:11:01.854425881 +0000 UTC m=+0.084774101 container health_status d615e4f24ff62fbb14be4a63e6da568ec5b22309773c1c66103b6b4de64a8a2f (image=38.102.83.66:5001/podified-master-centos10/openstack-ovn-controller:watcher_latest, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.4, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': '38.102.83.66:5001/podified-master-centos10/openstack-ovn-controller:watcher_latest', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, tcib_build_tag=watcher_latest, container_name=ovn_controller, org.label-schema.build-date=20251007, org.label-schema.name=CentOS Stream 10 Base Image)
Oct 09 16:11:02 compute-0 sshd-session[141165]: Failed password for invalid user elastic from 134.199.199.215 port 46228 ssh2
Oct 09 16:11:03 compute-0 ovn_controller[19752]: 2025-10-09T16:11:03Z|00003|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:b0:41:62 10.100.0.5
Oct 09 16:11:03 compute-0 ovn_controller[19752]: 2025-10-09T16:11:03Z|00004|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:b0:41:62 10.100.0.5
Oct 09 16:11:03 compute-0 sshd-session[141206]: Invalid user ubuntu from 134.199.199.215 port 46234
Oct 09 16:11:03 compute-0 sshd-session[141206]: pam_unix(sshd:auth): check pass; user unknown
Oct 09 16:11:03 compute-0 sshd-session[141206]: pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=134.199.199.215
Oct 09 16:11:03 compute-0 nova_compute[117331]: 2025-10-09 16:11:03.648 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 09 16:11:04 compute-0 nova_compute[117331]: 2025-10-09 16:11:04.306 2 DEBUG oslo_service.periodic_task [None req-92a13bed-c081-4c7b-87f3-bafebefb79f2 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.12/site-packages/oslo_service/periodic_task.py:210
Oct 09 16:11:04 compute-0 sshd-session[141165]: Connection closed by invalid user elastic 134.199.199.215 port 46228 [preauth]
Oct 09 16:11:05 compute-0 nova_compute[117331]: 2025-10-09 16:11:05.307 2 DEBUG oslo_service.periodic_task [None req-92a13bed-c081-4c7b-87f3-bafebefb79f2 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.12/site-packages/oslo_service/periodic_task.py:210
Oct 09 16:11:05 compute-0 nova_compute[117331]: 2025-10-09 16:11:05.329 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 09 16:11:05 compute-0 sshd-session[141206]: Failed password for invalid user ubuntu from 134.199.199.215 port 46234 ssh2
Oct 09 16:11:06 compute-0 nova_compute[117331]: 2025-10-09 16:11:06.302 2 DEBUG oslo_service.periodic_task [None req-92a13bed-c081-4c7b-87f3-bafebefb79f2 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.12/site-packages/oslo_service/periodic_task.py:210
Oct 09 16:11:06 compute-0 sshd-session[141208]: Invalid user server from 134.199.199.215 port 34276
Oct 09 16:11:06 compute-0 sshd-session[141208]: pam_unix(sshd:auth): check pass; user unknown
Oct 09 16:11:06 compute-0 sshd-session[141208]: pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=134.199.199.215
Oct 09 16:11:07 compute-0 sshd-session[141206]: Connection closed by invalid user ubuntu 134.199.199.215 port 46234 [preauth]
Oct 09 16:11:07 compute-0 nova_compute[117331]: 2025-10-09 16:11:07.306 2 DEBUG oslo_service.periodic_task [None req-92a13bed-c081-4c7b-87f3-bafebefb79f2 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.12/site-packages/oslo_service/periodic_task.py:210
Oct 09 16:11:07 compute-0 nova_compute[117331]: 2025-10-09 16:11:07.306 2 DEBUG oslo_service.periodic_task [None req-92a13bed-c081-4c7b-87f3-bafebefb79f2 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.12/site-packages/oslo_service/periodic_task.py:210
Oct 09 16:11:07 compute-0 nova_compute[117331]: 2025-10-09 16:11:07.307 2 DEBUG oslo_service.periodic_task [None req-92a13bed-c081-4c7b-87f3-bafebefb79f2 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.12/site-packages/oslo_service/periodic_task.py:210
Oct 09 16:11:07 compute-0 nova_compute[117331]: 2025-10-09 16:11:07.307 2 DEBUG nova.compute.manager [None req-92a13bed-c081-4c7b-87f3-bafebefb79f2 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.12/site-packages/nova/compute/manager.py:11228
Oct 09 16:11:07 compute-0 nova_compute[117331]: 2025-10-09 16:11:07.307 2 DEBUG oslo_service.periodic_task [None req-92a13bed-c081-4c7b-87f3-bafebefb79f2 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.12/site-packages/oslo_service/periodic_task.py:210
Oct 09 16:11:07 compute-0 nova_compute[117331]: 2025-10-09 16:11:07.869 2 DEBUG oslo_concurrency.lockutils [None req-92a13bed-c081-4c7b-87f3-bafebefb79f2 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:405
Oct 09 16:11:07 compute-0 nova_compute[117331]: 2025-10-09 16:11:07.869 2 DEBUG oslo_concurrency.lockutils [None req-92a13bed-c081-4c7b-87f3-bafebefb79f2 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:410
Oct 09 16:11:07 compute-0 nova_compute[117331]: 2025-10-09 16:11:07.869 2 DEBUG oslo_concurrency.lockutils [None req-92a13bed-c081-4c7b-87f3-bafebefb79f2 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:424
Oct 09 16:11:07 compute-0 nova_compute[117331]: 2025-10-09 16:11:07.869 2 DEBUG nova.compute.resource_tracker [None req-92a13bed-c081-4c7b-87f3-bafebefb79f2 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.12/site-packages/nova/compute/resource_tracker.py:937
Oct 09 16:11:08 compute-0 nova_compute[117331]: 2025-10-09 16:11:08.651 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 09 16:11:08 compute-0 nova_compute[117331]: 2025-10-09 16:11:08.660 2 DEBUG oslo_concurrency.lockutils [None req-274e4d15-c5a5-44f5-8954-1c4eeedf7ef8 79bd7ccf35e1491a9ecd7213db027ff8 cb9cfe2a59d34561b954fd278bc3bf0a - - default default] Acquiring lock "42539348-4e24-48c0-9522-d27691bb1247" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:405
Oct 09 16:11:08 compute-0 nova_compute[117331]: 2025-10-09 16:11:08.661 2 DEBUG oslo_concurrency.lockutils [None req-274e4d15-c5a5-44f5-8954-1c4eeedf7ef8 79bd7ccf35e1491a9ecd7213db027ff8 cb9cfe2a59d34561b954fd278bc3bf0a - - default default] Lock "42539348-4e24-48c0-9522-d27691bb1247" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:410
Oct 09 16:11:08 compute-0 nova_compute[117331]: 2025-10-09 16:11:08.661 2 DEBUG oslo_concurrency.lockutils [None req-274e4d15-c5a5-44f5-8954-1c4eeedf7ef8 79bd7ccf35e1491a9ecd7213db027ff8 cb9cfe2a59d34561b954fd278bc3bf0a - - default default] Acquiring lock "42539348-4e24-48c0-9522-d27691bb1247-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:405
Oct 09 16:11:08 compute-0 nova_compute[117331]: 2025-10-09 16:11:08.661 2 DEBUG oslo_concurrency.lockutils [None req-274e4d15-c5a5-44f5-8954-1c4eeedf7ef8 79bd7ccf35e1491a9ecd7213db027ff8 cb9cfe2a59d34561b954fd278bc3bf0a - - default default] Lock "42539348-4e24-48c0-9522-d27691bb1247-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:410
Oct 09 16:11:08 compute-0 nova_compute[117331]: 2025-10-09 16:11:08.661 2 DEBUG oslo_concurrency.lockutils [None req-274e4d15-c5a5-44f5-8954-1c4eeedf7ef8 79bd7ccf35e1491a9ecd7213db027ff8 cb9cfe2a59d34561b954fd278bc3bf0a - - default default] Lock "42539348-4e24-48c0-9522-d27691bb1247-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:424
Oct 09 16:11:08 compute-0 nova_compute[117331]: 2025-10-09 16:11:08.671 2 INFO nova.compute.manager [None req-274e4d15-c5a5-44f5-8954-1c4eeedf7ef8 79bd7ccf35e1491a9ecd7213db027ff8 cb9cfe2a59d34561b954fd278bc3bf0a - - default default] [instance: 42539348-4e24-48c0-9522-d27691bb1247] Terminating instance
Oct 09 16:11:08 compute-0 sshd-session[141208]: Failed password for invalid user server from 134.199.199.215 port 34276 ssh2
Oct 09 16:11:08 compute-0 nova_compute[117331]: 2025-10-09 16:11:08.915 2 DEBUG oslo_concurrency.processutils [None req-92a13bed-c081-4c7b-87f3-bafebefb79f2 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/42539348-4e24-48c0-9522-d27691bb1247/disk --force-share --output=json execute /usr/lib/python3.12/site-packages/oslo_concurrency/processutils.py:349
Oct 09 16:11:08 compute-0 nova_compute[117331]: 2025-10-09 16:11:08.976 2 DEBUG oslo_concurrency.processutils [None req-92a13bed-c081-4c7b-87f3-bafebefb79f2 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/42539348-4e24-48c0-9522-d27691bb1247/disk --force-share --output=json" returned: 0 in 0.061s execute /usr/lib/python3.12/site-packages/oslo_concurrency/processutils.py:372
Oct 09 16:11:08 compute-0 nova_compute[117331]: 2025-10-09 16:11:08.977 2 DEBUG oslo_concurrency.processutils [None req-92a13bed-c081-4c7b-87f3-bafebefb79f2 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/42539348-4e24-48c0-9522-d27691bb1247/disk --force-share --output=json execute /usr/lib/python3.12/site-packages/oslo_concurrency/processutils.py:349
Oct 09 16:11:09 compute-0 nova_compute[117331]: 2025-10-09 16:11:09.032 2 DEBUG oslo_concurrency.processutils [None req-92a13bed-c081-4c7b-87f3-bafebefb79f2 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/42539348-4e24-48c0-9522-d27691bb1247/disk --force-share --output=json" returned: 0 in 0.055s execute /usr/lib/python3.12/site-packages/oslo_concurrency/processutils.py:372
Oct 09 16:11:09 compute-0 nova_compute[117331]: 2025-10-09 16:11:09.175 2 WARNING nova.virt.libvirt.driver [None req-92a13bed-c081-4c7b-87f3-bafebefb79f2 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Oct 09 16:11:09 compute-0 nova_compute[117331]: 2025-10-09 16:11:09.176 2 DEBUG oslo_concurrency.processutils [None req-92a13bed-c081-4c7b-87f3-bafebefb79f2 - - - - - -] Running cmd (subprocess): env LANG=C uptime execute /usr/lib/python3.12/site-packages/oslo_concurrency/processutils.py:349
Oct 09 16:11:09 compute-0 nova_compute[117331]: 2025-10-09 16:11:09.184 2 DEBUG nova.compute.manager [None req-274e4d15-c5a5-44f5-8954-1c4eeedf7ef8 79bd7ccf35e1491a9ecd7213db027ff8 cb9cfe2a59d34561b954fd278bc3bf0a - - default default] [instance: 42539348-4e24-48c0-9522-d27691bb1247] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.12/site-packages/nova/compute/manager.py:3197
Oct 09 16:11:09 compute-0 nova_compute[117331]: 2025-10-09 16:11:09.194 2 DEBUG oslo_concurrency.processutils [None req-92a13bed-c081-4c7b-87f3-bafebefb79f2 - - - - - -] CMD "env LANG=C uptime" returned: 0 in 0.017s execute /usr/lib/python3.12/site-packages/oslo_concurrency/processutils.py:372
Oct 09 16:11:09 compute-0 nova_compute[117331]: 2025-10-09 16:11:09.194 2 DEBUG nova.compute.resource_tracker [None req-92a13bed-c081-4c7b-87f3-bafebefb79f2 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=6018MB free_disk=73.24325561523438GB free_vcpus=7 pci_devices=[{"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.12/site-packages/nova/compute/resource_tracker.py:1136
Oct 09 16:11:09 compute-0 nova_compute[117331]: 2025-10-09 16:11:09.194 2 DEBUG oslo_concurrency.lockutils [None req-92a13bed-c081-4c7b-87f3-bafebefb79f2 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:405
Oct 09 16:11:09 compute-0 nova_compute[117331]: 2025-10-09 16:11:09.195 2 DEBUG oslo_concurrency.lockutils [None req-92a13bed-c081-4c7b-87f3-bafebefb79f2 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:410
Oct 09 16:11:09 compute-0 kernel: tap623a49d8-b0 (unregistering): left promiscuous mode
Oct 09 16:11:09 compute-0 NetworkManager[1028]: <info>  [1760026269.2177] device (tap623a49d8-b0): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Oct 09 16:11:09 compute-0 nova_compute[117331]: 2025-10-09 16:11:09.227 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 09 16:11:09 compute-0 ovn_controller[19752]: 2025-10-09T16:11:09Z|00045|binding|INFO|Releasing lport 623a49d8-b0af-4032-871e-8d96af50c4af from this chassis (sb_readonly=0)
Oct 09 16:11:09 compute-0 ovn_controller[19752]: 2025-10-09T16:11:09Z|00046|binding|INFO|Setting lport 623a49d8-b0af-4032-871e-8d96af50c4af down in Southbound
Oct 09 16:11:09 compute-0 ovn_controller[19752]: 2025-10-09T16:11:09Z|00047|binding|INFO|Removing iface tap623a49d8-b0 ovn-installed in OVS
Oct 09 16:11:09 compute-0 nova_compute[117331]: 2025-10-09 16:11:09.230 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 09 16:11:09 compute-0 ovn_metadata_agent[28608]: 2025-10-09 16:11:09.241 28613 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:b0:41:62 10.100.0.5'], port_security=['fa:16:3e:b0:41:62 10.100.0.5'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.5/28', 'neutron:device_id': '42539348-4e24-48c0-9522-d27691bb1247', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-6eccf890-0637-4859-95f7-03ee0bf9c504', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'cb9cfe2a59d34561b954fd278bc3bf0a', 'neutron:revision_number': '5', 'neutron:security_group_ids': '4a9ac4bd-b9f3-4675-b61d-cd6433a07d9c', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=9e6fcc7f-9456-459b-adfd-2fc005d8b3e6, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f37b2576840>], logical_port=623a49d8-b0af-4032-871e-8d96af50c4af) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f37b2576840>]) matches /usr/lib/python3.12/site-packages/ovsdbapp/backend/ovs_idl/event.py:55
Oct 09 16:11:09 compute-0 ovn_metadata_agent[28608]: 2025-10-09 16:11:09.242 28613 INFO neutron.agent.ovn.metadata.agent [-] Port 623a49d8-b0af-4032-871e-8d96af50c4af in datapath 6eccf890-0637-4859-95f7-03ee0bf9c504 unbound from our chassis
Oct 09 16:11:09 compute-0 ovn_metadata_agent[28608]: 2025-10-09 16:11:09.243 28613 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 6eccf890-0637-4859-95f7-03ee0bf9c504, tearing the namespace down if needed _get_provision_params /usr/lib/python3.12/site-packages/neutron/agent/ovn/metadata/agent.py:756
Oct 09 16:11:09 compute-0 ovn_metadata_agent[28608]: 2025-10-09 16:11:09.243 139687 DEBUG oslo.privsep.daemon [-] privsep: reply[df5d0bb3-c559-4b64-8bd7-e232025e525e]: (4, True) _call_back /usr/lib/python3.12/site-packages/oslo_privsep/daemon.py:515
Oct 09 16:11:09 compute-0 ovn_metadata_agent[28608]: 2025-10-09 16:11:09.244 28613 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-6eccf890-0637-4859-95f7-03ee0bf9c504 namespace which is not needed anymore
Oct 09 16:11:09 compute-0 nova_compute[117331]: 2025-10-09 16:11:09.246 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 09 16:11:09 compute-0 systemd[1]: machine-qemu\x2d1\x2dinstance\x2d00000001.scope: Deactivated successfully.
Oct 09 16:11:09 compute-0 systemd[1]: machine-qemu\x2d1\x2dinstance\x2d00000001.scope: Consumed 12.137s CPU time.
Oct 09 16:11:09 compute-0 systemd-machined[77487]: Machine qemu-1-instance-00000001 terminated.
Oct 09 16:11:09 compute-0 neutron-haproxy-ovnmeta-6eccf890-0637-4859-95f7-03ee0bf9c504[141125]: [NOTICE]   (141129) : haproxy version is 3.0.5-8e879a5
Oct 09 16:11:09 compute-0 podman[141242]: 2025-10-09 16:11:09.359704093 +0000 UTC m=+0.029896255 container kill 25aace54772efbca4207c850c5e1f9d989f40b105acc0ca18f8723c71c4dfef4 (image=38.102.83.66:5001/podified-master-centos10/openstack-neutron-metadata-agent-ovn:watcher_latest, name=neutron-haproxy-ovnmeta-6eccf890-0637-4859-95f7-03ee0bf9c504, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 10 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251007, tcib_build_tag=watcher_latest, tcib_managed=true, io.buildah.version=1.41.4)
Oct 09 16:11:09 compute-0 neutron-haproxy-ovnmeta-6eccf890-0637-4859-95f7-03ee0bf9c504[141125]: [NOTICE]   (141129) : path to executable is /usr/sbin/haproxy
Oct 09 16:11:09 compute-0 neutron-haproxy-ovnmeta-6eccf890-0637-4859-95f7-03ee0bf9c504[141125]: [WARNING]  (141129) : Exiting Master process...
Oct 09 16:11:09 compute-0 neutron-haproxy-ovnmeta-6eccf890-0637-4859-95f7-03ee0bf9c504[141125]: [ALERT]    (141129) : Current worker (141131) exited with code 143 (Terminated)
Oct 09 16:11:09 compute-0 neutron-haproxy-ovnmeta-6eccf890-0637-4859-95f7-03ee0bf9c504[141125]: [WARNING]  (141129) : All workers exited. Exiting... (0)
Oct 09 16:11:09 compute-0 systemd[1]: libpod-25aace54772efbca4207c850c5e1f9d989f40b105acc0ca18f8723c71c4dfef4.scope: Deactivated successfully.
Oct 09 16:11:09 compute-0 sshd-session[141208]: Connection closed by invalid user server 134.199.199.215 port 34276 [preauth]
Oct 09 16:11:09 compute-0 podman[141257]: 2025-10-09 16:11:09.400242737 +0000 UTC m=+0.022180609 container died 25aace54772efbca4207c850c5e1f9d989f40b105acc0ca18f8723c71c4dfef4 (image=38.102.83.66:5001/podified-master-centos10/openstack-neutron-metadata-agent-ovn:watcher_latest, name=neutron-haproxy-ovnmeta-6eccf890-0637-4859-95f7-03ee0bf9c504, org.label-schema.name=CentOS Stream 10 Base Image, tcib_managed=true, io.buildah.version=1.41.4, org.label-schema.schema-version=1.0, tcib_build_tag=watcher_latest, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251007)
Oct 09 16:11:09 compute-0 nova_compute[117331]: 2025-10-09 16:11:09.403 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 09 16:11:09 compute-0 nova_compute[117331]: 2025-10-09 16:11:09.407 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 09 16:11:09 compute-0 nova_compute[117331]: 2025-10-09 16:11:09.434 2 INFO nova.virt.libvirt.driver [-] [instance: 42539348-4e24-48c0-9522-d27691bb1247] Instance destroyed successfully.
Oct 09 16:11:09 compute-0 nova_compute[117331]: 2025-10-09 16:11:09.434 2 DEBUG nova.objects.instance [None req-274e4d15-c5a5-44f5-8954-1c4eeedf7ef8 79bd7ccf35e1491a9ecd7213db027ff8 cb9cfe2a59d34561b954fd278bc3bf0a - - default default] Lazy-loading 'resources' on Instance uuid 42539348-4e24-48c0-9522-d27691bb1247 obj_load_attr /usr/lib/python3.12/site-packages/nova/objects/instance.py:1141
Oct 09 16:11:09 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-25aace54772efbca4207c850c5e1f9d989f40b105acc0ca18f8723c71c4dfef4-userdata-shm.mount: Deactivated successfully.
Oct 09 16:11:09 compute-0 systemd[1]: var-lib-containers-storage-overlay-24db9c447ea08a9db34b730071edaf403cb7a495aab2901528a9d26b7d49308a-merged.mount: Deactivated successfully.
Oct 09 16:11:09 compute-0 podman[141257]: 2025-10-09 16:11:09.65875138 +0000 UTC m=+0.280689252 container cleanup 25aace54772efbca4207c850c5e1f9d989f40b105acc0ca18f8723c71c4dfef4 (image=38.102.83.66:5001/podified-master-centos10/openstack-neutron-metadata-agent-ovn:watcher_latest, name=neutron-haproxy-ovnmeta-6eccf890-0637-4859-95f7-03ee0bf9c504, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=watcher_latest, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, io.buildah.version=1.41.4, org.label-schema.build-date=20251007, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.schema-version=1.0)
Oct 09 16:11:09 compute-0 systemd[1]: libpod-conmon-25aace54772efbca4207c850c5e1f9d989f40b105acc0ca18f8723c71c4dfef4.scope: Deactivated successfully.
Oct 09 16:11:09 compute-0 nova_compute[117331]: 2025-10-09 16:11:09.679 2 DEBUG nova.compute.manager [req-95e3c398-2380-4d7b-836d-b1cf6f3464fd req-8158921f-fa11-48c8-8cf0-ef9073857d83 ee15961ad2be4bada6c106ac4f69f422 c076c42271004e7aa86e84416e33f826 - - default default] [instance: 42539348-4e24-48c0-9522-d27691bb1247] Received event network-vif-unplugged-623a49d8-b0af-4032-871e-8d96af50c4af external_instance_event /usr/lib/python3.12/site-packages/nova/compute/manager.py:11812
Oct 09 16:11:09 compute-0 nova_compute[117331]: 2025-10-09 16:11:09.679 2 DEBUG oslo_concurrency.lockutils [req-95e3c398-2380-4d7b-836d-b1cf6f3464fd req-8158921f-fa11-48c8-8cf0-ef9073857d83 ee15961ad2be4bada6c106ac4f69f422 c076c42271004e7aa86e84416e33f826 - - default default] Acquiring lock "42539348-4e24-48c0-9522-d27691bb1247-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:405
Oct 09 16:11:09 compute-0 nova_compute[117331]: 2025-10-09 16:11:09.679 2 DEBUG oslo_concurrency.lockutils [req-95e3c398-2380-4d7b-836d-b1cf6f3464fd req-8158921f-fa11-48c8-8cf0-ef9073857d83 ee15961ad2be4bada6c106ac4f69f422 c076c42271004e7aa86e84416e33f826 - - default default] Lock "42539348-4e24-48c0-9522-d27691bb1247-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:410
Oct 09 16:11:09 compute-0 nova_compute[117331]: 2025-10-09 16:11:09.680 2 DEBUG oslo_concurrency.lockutils [req-95e3c398-2380-4d7b-836d-b1cf6f3464fd req-8158921f-fa11-48c8-8cf0-ef9073857d83 ee15961ad2be4bada6c106ac4f69f422 c076c42271004e7aa86e84416e33f826 - - default default] Lock "42539348-4e24-48c0-9522-d27691bb1247-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:424
Oct 09 16:11:09 compute-0 nova_compute[117331]: 2025-10-09 16:11:09.680 2 DEBUG nova.compute.manager [req-95e3c398-2380-4d7b-836d-b1cf6f3464fd req-8158921f-fa11-48c8-8cf0-ef9073857d83 ee15961ad2be4bada6c106ac4f69f422 c076c42271004e7aa86e84416e33f826 - - default default] [instance: 42539348-4e24-48c0-9522-d27691bb1247] No waiting events found dispatching network-vif-unplugged-623a49d8-b0af-4032-871e-8d96af50c4af pop_instance_event /usr/lib/python3.12/site-packages/nova/compute/manager.py:344
Oct 09 16:11:09 compute-0 nova_compute[117331]: 2025-10-09 16:11:09.680 2 DEBUG nova.compute.manager [req-95e3c398-2380-4d7b-836d-b1cf6f3464fd req-8158921f-fa11-48c8-8cf0-ef9073857d83 ee15961ad2be4bada6c106ac4f69f422 c076c42271004e7aa86e84416e33f826 - - default default] [instance: 42539348-4e24-48c0-9522-d27691bb1247] Received event network-vif-unplugged-623a49d8-b0af-4032-871e-8d96af50c4af for instance with task_state deleting. _process_instance_event /usr/lib/python3.12/site-packages/nova/compute/manager.py:11590
Oct 09 16:11:09 compute-0 podman[141259]: 2025-10-09 16:11:09.697977352 +0000 UTC m=+0.314255463 container remove 25aace54772efbca4207c850c5e1f9d989f40b105acc0ca18f8723c71c4dfef4 (image=38.102.83.66:5001/podified-master-centos10/openstack-neutron-metadata-agent-ovn:watcher_latest, name=neutron-haproxy-ovnmeta-6eccf890-0637-4859-95f7-03ee0bf9c504, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 10 Base Image, io.buildah.version=1.41.4, tcib_build_tag=watcher_latest, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251007, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Oct 09 16:11:09 compute-0 ovn_metadata_agent[28608]: 2025-10-09 16:11:09.703 139687 DEBUG oslo.privsep.daemon [-] privsep: reply[f918e29f-5186-4efa-9f6c-6a8036346dbc]: (4, ("Thu Oct  9 04:11:09 PM UTC 2025 Sending signal '15' to neutron-haproxy-ovnmeta-6eccf890-0637-4859-95f7-03ee0bf9c504 (25aace54772efbca4207c850c5e1f9d989f40b105acc0ca18f8723c71c4dfef4)\n25aace54772efbca4207c850c5e1f9d989f40b105acc0ca18f8723c71c4dfef4\nThu Oct  9 04:11:09 PM UTC 2025 Deleting container neutron-haproxy-ovnmeta-6eccf890-0637-4859-95f7-03ee0bf9c504 (25aace54772efbca4207c850c5e1f9d989f40b105acc0ca18f8723c71c4dfef4)\n25aace54772efbca4207c850c5e1f9d989f40b105acc0ca18f8723c71c4dfef4\n", '', 0)) _call_back /usr/lib/python3.12/site-packages/oslo_privsep/daemon.py:515
Oct 09 16:11:09 compute-0 ovn_metadata_agent[28608]: 2025-10-09 16:11:09.704 139687 DEBUG oslo.privsep.daemon [-] privsep: reply[3b599077-f403-41b0-832a-65afd6b00756]: (4, None) _call_back /usr/lib/python3.12/site-packages/oslo_privsep/daemon.py:515
Oct 09 16:11:09 compute-0 ovn_metadata_agent[28608]: 2025-10-09 16:11:09.705 28613 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/6eccf890-0637-4859-95f7-03ee0bf9c504.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/6eccf890-0637-4859-95f7-03ee0bf9c504.pid.haproxy' get_value_from_file /usr/lib/python3.12/site-packages/neutron/agent/linux/utils.py:268
Oct 09 16:11:09 compute-0 ovn_metadata_agent[28608]: 2025-10-09 16:11:09.705 139687 DEBUG oslo.privsep.daemon [-] privsep: reply[12d699eb-a6ed-4266-8fb3-236b650b9a20]: (4, None) _call_back /usr/lib/python3.12/site-packages/oslo_privsep/daemon.py:515
Oct 09 16:11:09 compute-0 ovn_metadata_agent[28608]: 2025-10-09 16:11:09.706 28613 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap6eccf890-00, bridge=None, if_exists=True) do_commit /usr/lib/python3.12/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 09 16:11:09 compute-0 nova_compute[117331]: 2025-10-09 16:11:09.707 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 09 16:11:09 compute-0 kernel: tap6eccf890-00: left promiscuous mode
Oct 09 16:11:09 compute-0 nova_compute[117331]: 2025-10-09 16:11:09.722 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 09 16:11:09 compute-0 ovn_metadata_agent[28608]: 2025-10-09 16:11:09.724 139687 DEBUG oslo.privsep.daemon [-] privsep: reply[1cb7728d-4211-4897-a622-45e5be5ae096]: (4, True) _call_back /usr/lib/python3.12/site-packages/oslo_privsep/daemon.py:515
Oct 09 16:11:09 compute-0 ovn_metadata_agent[28608]: 2025-10-09 16:11:09.756 139687 DEBUG oslo.privsep.daemon [-] privsep: reply[d7a52f96-858b-45c9-b0e1-048351476a94]: (4, None) _call_back /usr/lib/python3.12/site-packages/oslo_privsep/daemon.py:515
Oct 09 16:11:09 compute-0 ovn_metadata_agent[28608]: 2025-10-09 16:11:09.757 139687 DEBUG oslo.privsep.daemon [-] privsep: reply[00d69520-ebc7-4e7b-89de-ab86f5875fdf]: (4, True) _call_back /usr/lib/python3.12/site-packages/oslo_privsep/daemon.py:515
Oct 09 16:11:09 compute-0 ovn_metadata_agent[28608]: 2025-10-09 16:11:09.768 28613 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=4, options={'arp_ns_explicit_output': 'true', 'fdb_removal_limit': '0', 'ignore_lsp_down': 'false', 'mac_binding_removal_limit': '0', 'mac_prefix': '4a:65:5f', 'max_tunid': '16711680', 'northd_internal_version': '24.09.4-20.37.0-77.8', 'svc_monitor_mac': '32:24:1e:e3:5d:e8'}, ipsec=False) old=SB_Global(nb_cfg=3) matches /usr/lib/python3.12/site-packages/ovsdbapp/backend/ovs_idl/event.py:55
Oct 09 16:11:09 compute-0 nova_compute[117331]: 2025-10-09 16:11:09.769 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 09 16:11:09 compute-0 ovn_metadata_agent[28608]: 2025-10-09 16:11:09.774 139687 DEBUG oslo.privsep.daemon [-] privsep: reply[662d3ce5-3445-4ea7-895a-c23453d1f18c]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 119665, 'reachable_time': 20592, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 141315, 'error': None, 'target': 'ovnmeta-6eccf890-0637-4859-95f7-03ee0bf9c504', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.12/site-packages/oslo_privsep/daemon.py:515
Oct 09 16:11:09 compute-0 systemd[1]: run-netns-ovnmeta\x2d6eccf890\x2d0637\x2d4859\x2d95f7\x2d03ee0bf9c504.mount: Deactivated successfully.
Oct 09 16:11:09 compute-0 ovn_metadata_agent[28608]: 2025-10-09 16:11:09.778 28727 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-6eccf890-0637-4859-95f7-03ee0bf9c504 deleted. remove_netns /usr/lib/python3.12/site-packages/neutron/privileged/agent/linux/ip_lib.py:603
Oct 09 16:11:09 compute-0 ovn_metadata_agent[28608]: 2025-10-09 16:11:09.779 28727 DEBUG oslo.privsep.daemon [-] privsep: reply[edc7512f-dd31-48bc-995b-5a9ded7a3068]: (4, None) _call_back /usr/lib/python3.12/site-packages/oslo_privsep/daemon.py:515
Oct 09 16:11:09 compute-0 ovn_metadata_agent[28608]: 2025-10-09 16:11:09.780 28613 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 7 seconds run /usr/lib/python3.12/site-packages/neutron/agent/ovn/metadata/agent.py:367
Oct 09 16:11:09 compute-0 nova_compute[117331]: 2025-10-09 16:11:09.947 2 DEBUG nova.virt.libvirt.vif [None req-274e4d15-c5a5-44f5-8954-1c4eeedf7ef8 79bd7ccf35e1491a9ecd7213db027ff8 cb9cfe2a59d34561b954fd278bc3bf0a - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,compute_id=1,config_drive='True',created_at=2025-10-09T16:10:33Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description=None,display_name='tempest-TestContinuousAudit-server-377185348',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testcontinuousaudit-server-377185348',id=1,image_ref='b7d6e0af-25e4-4227-9dc6-43143898ceee',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-10-09T16:10:53Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='cb9cfe2a59d34561b954fd278bc3bf0a',ramdisk_id='',reservation_id='r-9f9hvutc',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,manager,admin,member',image_base_image_ref='b7d6e0af-25e4-4227-9dc6-43143898ceee',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',ow
ner_project_name='tempest-TestContinuousAudit-128857979',owner_user_name='tempest-TestContinuousAudit-128857979-project-admin'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-10-09T16:10:53Z,user_data=None,user_id='79bd7ccf35e1491a9ecd7213db027ff8',uuid=42539348-4e24-48c0-9522-d27691bb1247,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "623a49d8-b0af-4032-871e-8d96af50c4af", "address": "fa:16:3e:b0:41:62", "network": {"id": "6eccf890-0637-4859-95f7-03ee0bf9c504", "bridge": "br-int", "label": "tempest-TestContinuousAudit-2145226265-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "9f3fc733922848aa9ddc9d0813f8ba80", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap623a49d8-b0", "ovs_interfaceid": "623a49d8-b0af-4032-871e-8d96af50c4af", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.12/site-packages/nova/virt/libvirt/vif.py:839
Oct 09 16:11:09 compute-0 nova_compute[117331]: 2025-10-09 16:11:09.948 2 DEBUG nova.network.os_vif_util [None req-274e4d15-c5a5-44f5-8954-1c4eeedf7ef8 79bd7ccf35e1491a9ecd7213db027ff8 cb9cfe2a59d34561b954fd278bc3bf0a - - default default] Converting VIF {"id": "623a49d8-b0af-4032-871e-8d96af50c4af", "address": "fa:16:3e:b0:41:62", "network": {"id": "6eccf890-0637-4859-95f7-03ee0bf9c504", "bridge": "br-int", "label": "tempest-TestContinuousAudit-2145226265-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "9f3fc733922848aa9ddc9d0813f8ba80", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap623a49d8-b0", "ovs_interfaceid": "623a49d8-b0af-4032-871e-8d96af50c4af", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.12/site-packages/nova/network/os_vif_util.py:511
Oct 09 16:11:09 compute-0 nova_compute[117331]: 2025-10-09 16:11:09.950 2 DEBUG nova.network.os_vif_util [None req-274e4d15-c5a5-44f5-8954-1c4eeedf7ef8 79bd7ccf35e1491a9ecd7213db027ff8 cb9cfe2a59d34561b954fd278bc3bf0a - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:b0:41:62,bridge_name='br-int',has_traffic_filtering=True,id=623a49d8-b0af-4032-871e-8d96af50c4af,network=Network(6eccf890-0637-4859-95f7-03ee0bf9c504),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap623a49d8-b0') nova_to_osvif_vif /usr/lib/python3.12/site-packages/nova/network/os_vif_util.py:548
Oct 09 16:11:09 compute-0 nova_compute[117331]: 2025-10-09 16:11:09.950 2 DEBUG os_vif [None req-274e4d15-c5a5-44f5-8954-1c4eeedf7ef8 79bd7ccf35e1491a9ecd7213db027ff8 cb9cfe2a59d34561b954fd278bc3bf0a - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:b0:41:62,bridge_name='br-int',has_traffic_filtering=True,id=623a49d8-b0af-4032-871e-8d96af50c4af,network=Network(6eccf890-0637-4859-95f7-03ee0bf9c504),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap623a49d8-b0') unplug /usr/lib/python3.12/site-packages/os_vif/__init__.py:109
Oct 09 16:11:09 compute-0 nova_compute[117331]: 2025-10-09 16:11:09.956 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 22 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 09 16:11:09 compute-0 nova_compute[117331]: 2025-10-09 16:11:09.957 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap623a49d8-b0, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.12/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 09 16:11:09 compute-0 nova_compute[117331]: 2025-10-09 16:11:09.958 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 09 16:11:09 compute-0 nova_compute[117331]: 2025-10-09 16:11:09.961 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:248
Oct 09 16:11:09 compute-0 nova_compute[117331]: 2025-10-09 16:11:09.962 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 22 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 09 16:11:09 compute-0 nova_compute[117331]: 2025-10-09 16:11:09.963 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbDestroyCommand(_result=None, table=QoS, record=2c6070f6-f5f0-44d7-8076-d823fa8d4078) do_commit /usr/lib/python3.12/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 09 16:11:09 compute-0 nova_compute[117331]: 2025-10-09 16:11:09.963 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 09 16:11:09 compute-0 nova_compute[117331]: 2025-10-09 16:11:09.964 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 09 16:11:09 compute-0 nova_compute[117331]: 2025-10-09 16:11:09.968 2 INFO os_vif [None req-274e4d15-c5a5-44f5-8954-1c4eeedf7ef8 79bd7ccf35e1491a9ecd7213db027ff8 cb9cfe2a59d34561b954fd278bc3bf0a - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:b0:41:62,bridge_name='br-int',has_traffic_filtering=True,id=623a49d8-b0af-4032-871e-8d96af50c4af,network=Network(6eccf890-0637-4859-95f7-03ee0bf9c504),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap623a49d8-b0')
Oct 09 16:11:09 compute-0 nova_compute[117331]: 2025-10-09 16:11:09.969 2 INFO nova.virt.libvirt.driver [None req-274e4d15-c5a5-44f5-8954-1c4eeedf7ef8 79bd7ccf35e1491a9ecd7213db027ff8 cb9cfe2a59d34561b954fd278bc3bf0a - - default default] [instance: 42539348-4e24-48c0-9522-d27691bb1247] Deleting instance files /var/lib/nova/instances/42539348-4e24-48c0-9522-d27691bb1247_del
Oct 09 16:11:09 compute-0 nova_compute[117331]: 2025-10-09 16:11:09.969 2 INFO nova.virt.libvirt.driver [None req-274e4d15-c5a5-44f5-8954-1c4eeedf7ef8 79bd7ccf35e1491a9ecd7213db027ff8 cb9cfe2a59d34561b954fd278bc3bf0a - - default default] [instance: 42539348-4e24-48c0-9522-d27691bb1247] Deletion of /var/lib/nova/instances/42539348-4e24-48c0-9522-d27691bb1247_del complete
Oct 09 16:11:10 compute-0 sshd[52903]: drop connection #0 from [134.199.199.215]:34286 on [38.102.83.110]:22 penalty: failed authentication
Oct 09 16:11:10 compute-0 nova_compute[117331]: 2025-10-09 16:11:10.247 2 DEBUG nova.compute.resource_tracker [None req-92a13bed-c081-4c7b-87f3-bafebefb79f2 - - - - - -] Instance 42539348-4e24-48c0-9522-d27691bb1247 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.12/site-packages/nova/compute/resource_tracker.py:1740
Oct 09 16:11:10 compute-0 nova_compute[117331]: 2025-10-09 16:11:10.248 2 DEBUG nova.compute.resource_tracker [None req-92a13bed-c081-4c7b-87f3-bafebefb79f2 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 1 _report_final_resource_view /usr/lib/python3.12/site-packages/nova/compute/resource_tracker.py:1159
Oct 09 16:11:10 compute-0 nova_compute[117331]: 2025-10-09 16:11:10.248 2 DEBUG nova.compute.resource_tracker [None req-92a13bed-c081-4c7b-87f3-bafebefb79f2 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=640MB phys_disk=79GB used_disk=1GB total_vcpus=8 used_vcpus=1 pci_stats=[] stats={'failed_builds': '0', 'uptime': ' 16:11:09 up 20 min,  0 user,  load average: 0.27, 0.27, 0.32\n', 'num_instances': '1', 'num_vm_active': '1', 'num_task_deleting': '1', 'num_os_type_None': '1', 'num_proj_cb9cfe2a59d34561b954fd278bc3bf0a': '1', 'io_workload': '0'} _report_final_resource_view /usr/lib/python3.12/site-packages/nova/compute/resource_tracker.py:1168
Oct 09 16:11:10 compute-0 nova_compute[117331]: 2025-10-09 16:11:10.286 2 DEBUG nova.compute.provider_tree [None req-92a13bed-c081-4c7b-87f3-bafebefb79f2 - - - - - -] Updating inventory in ProviderTree for provider 593051b8-2000-437f-a915-2616fc8b1671 with inventory: {'MEMORY_MB': {'total': 7679, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0, 'reserved': 512}, 'VCPU': {'total': 8, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0, 'reserved': 0}, 'DISK_GB': {'total': 79, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9, 'reserved': 1}} update_inventory /usr/lib/python3.12/site-packages/nova/compute/provider_tree.py:176
Oct 09 16:11:10 compute-0 nova_compute[117331]: 2025-10-09 16:11:10.330 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 09 16:11:10 compute-0 nova_compute[117331]: 2025-10-09 16:11:10.484 2 INFO nova.compute.manager [None req-274e4d15-c5a5-44f5-8954-1c4eeedf7ef8 79bd7ccf35e1491a9ecd7213db027ff8 cb9cfe2a59d34561b954fd278bc3bf0a - - default default] [instance: 42539348-4e24-48c0-9522-d27691bb1247] Took 1.30 seconds to destroy the instance on the hypervisor.
Oct 09 16:11:10 compute-0 nova_compute[117331]: 2025-10-09 16:11:10.484 2 DEBUG oslo.service.backend._eventlet.loopingcall [None req-274e4d15-c5a5-44f5-8954-1c4eeedf7ef8 79bd7ccf35e1491a9ecd7213db027ff8 cb9cfe2a59d34561b954fd278bc3bf0a - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.12/site-packages/oslo_service/backend/_eventlet/loopingcall.py:437
Oct 09 16:11:10 compute-0 nova_compute[117331]: 2025-10-09 16:11:10.485 2 DEBUG nova.compute.manager [-] [instance: 42539348-4e24-48c0-9522-d27691bb1247] Deallocating network for instance _deallocate_network /usr/lib/python3.12/site-packages/nova/compute/manager.py:2324
Oct 09 16:11:10 compute-0 nova_compute[117331]: 2025-10-09 16:11:10.485 2 DEBUG nova.network.neutron [-] [instance: 42539348-4e24-48c0-9522-d27691bb1247] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.12/site-packages/nova/network/neutron.py:1863
Oct 09 16:11:10 compute-0 nova_compute[117331]: 2025-10-09 16:11:10.485 2 WARNING neutronclient.v2_0.client [-] The python binding code in neutronclient is deprecated in favor of OpenstackSDK, please use that as this will be removed in a future release.
Oct 09 16:11:10 compute-0 nova_compute[117331]: 2025-10-09 16:11:10.605 2 WARNING neutronclient.v2_0.client [-] The python binding code in neutronclient is deprecated in favor of OpenstackSDK, please use that as this will be removed in a future release.
Oct 09 16:11:10 compute-0 nova_compute[117331]: 2025-10-09 16:11:10.814 2 ERROR nova.scheduler.client.report [None req-92a13bed-c081-4c7b-87f3-bafebefb79f2 - - - - - -] [req-139bf76d-0fbe-4016-902a-2fe2a0d9054c] Failed to update inventory to [{'MEMORY_MB': {'total': 7679, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0, 'reserved': 512}, 'VCPU': {'total': 8, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0, 'reserved': 0}, 'DISK_GB': {'total': 79, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9, 'reserved': 1}}] for resource provider with UUID 593051b8-2000-437f-a915-2616fc8b1671.  Got 409: {"errors": [{"status": 409, "title": "Conflict", "detail": "There was a conflict when trying to complete your request.\n\n resource provider generation conflict  ", "code": "placement.concurrent_update", "request_id": "req-139bf76d-0fbe-4016-902a-2fe2a0d9054c"}]}
Oct 09 16:11:10 compute-0 podman[141318]: 2025-10-09 16:11:10.824445785 +0000 UTC m=+0.059619335 container health_status 270830b5167cafbab735d8118980f26987e673cd0fa464dd2202394830bcca66 (image=38.102.83.66:5001/podified-master-centos10/openstack-multipathd:watcher_latest, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, config_id=multipathd, org.label-schema.vendor=CentOS, tcib_build_tag=watcher_latest, io.buildah.version=1.41.4, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251007, org.label-schema.name=CentOS Stream 10 Base Image, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': '38.102.83.66:5001/podified-master-centos10/openstack-multipathd:watcher_latest', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true)
Oct 09 16:11:10 compute-0 nova_compute[117331]: 2025-10-09 16:11:10.830 2 DEBUG nova.scheduler.client.report [None req-92a13bed-c081-4c7b-87f3-bafebefb79f2 - - - - - -] Refreshing inventories for resource provider 593051b8-2000-437f-a915-2616fc8b1671 _refresh_associations /usr/lib/python3.12/site-packages/nova/scheduler/client/report.py:822
Oct 09 16:11:10 compute-0 nova_compute[117331]: 2025-10-09 16:11:10.858 2 DEBUG nova.scheduler.client.report [None req-92a13bed-c081-4c7b-87f3-bafebefb79f2 - - - - - -] Updating ProviderTree inventory for provider 593051b8-2000-437f-a915-2616fc8b1671 from _refresh_and_get_inventory using data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 79, 'reserved': 0, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} _refresh_and_get_inventory /usr/lib/python3.12/site-packages/nova/scheduler/client/report.py:786
Oct 09 16:11:10 compute-0 nova_compute[117331]: 2025-10-09 16:11:10.858 2 DEBUG nova.compute.provider_tree [None req-92a13bed-c081-4c7b-87f3-bafebefb79f2 - - - - - -] Updating inventory in ProviderTree for provider 593051b8-2000-437f-a915-2616fc8b1671 with inventory: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 79, 'reserved': 0, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} update_inventory /usr/lib/python3.12/site-packages/nova/compute/provider_tree.py:176
Oct 09 16:11:10 compute-0 nova_compute[117331]: 2025-10-09 16:11:10.875 2 DEBUG nova.scheduler.client.report [None req-92a13bed-c081-4c7b-87f3-bafebefb79f2 - - - - - -] Refreshing aggregate associations for resource provider 593051b8-2000-437f-a915-2616fc8b1671, aggregates: None _refresh_associations /usr/lib/python3.12/site-packages/nova/scheduler/client/report.py:831
Oct 09 16:11:10 compute-0 nova_compute[117331]: 2025-10-09 16:11:10.893 2 DEBUG nova.scheduler.client.report [None req-92a13bed-c081-4c7b-87f3-bafebefb79f2 - - - - - -] Refreshing trait associations for resource provider 593051b8-2000-437f-a915-2616fc8b1671, traits: HW_CPU_X86_AVX2,HW_CPU_X86_BMI,COMPUTE_SOUND_MODEL_ICH6,COMPUTE_IMAGE_TYPE_RAW,COMPUTE_ARCH_X86_64,COMPUTE_IMAGE_TYPE_AMI,HW_CPU_X86_SSSE3,COMPUTE_SECURITY_TPM_CRB,COMPUTE_GRAPHICS_MODEL_CIRRUS,COMPUTE_GRAPHICS_MODEL_NONE,COMPUTE_VIOMMU_MODEL_VIRTIO,HW_CPU_X86_AESNI,COMPUTE_NET_VIF_MODEL_VMXNET3,COMPUTE_SOUND_MODEL_PCSPK,COMPUTE_VOLUME_ATTACH_WITH_TAG,COMPUTE_SOUND_MODEL_ES1370,COMPUTE_NODE,COMPUTE_VIOMMU_MODEL_AUTO,COMPUTE_SOUND_MODEL_ICH9,COMPUTE_IMAGE_TYPE_QCOW2,COMPUTE_VOLUME_MULTI_ATTACH,COMPUTE_SECURITY_UEFI_SECURE_BOOT,COMPUTE_STORAGE_BUS_USB,COMPUTE_GRAPHICS_MODEL_BOCHS,COMPUTE_SECURITY_TPM_TIS,HW_ARCH_X86_64,COMPUTE_SOUND_MODEL_VIRTIO,HW_CPU_X86_BMI2,COMPUTE_STORAGE_BUS_FDC,COMPUTE_SOUND_MODEL_USB,COMPUTE_STORAGE_BUS_SATA,COMPUTE_SOUND_MODEL_AC97,COMPUTE_NET_VIF_MODEL_SPAPR_VLAN,COMPUTE_IMAGE_TYPE_ISO,COMPUTE_VIOMMU_MODEL_INTEL,COMPUTE_NET_VIF_MODEL_PCNET,HW_CPU_X86_F16C,COMPUTE_NET_VIF_MODEL_IGB,COMPUTE_ACCELERATORS,COMPUTE_NET_VIF_MODEL_RTL8139,HW_CPU_X86_AMD_SVM,COMPUTE_NET_VIF_MODEL_E1000E,HW_CPU_X86_CLMUL,COMPUTE_NET_VIF_MODEL_NE2K_PCI,HW_CPU_X86_SHA,COMPUTE_NET_VIF_MODEL_VIRTIO,HW_CPU_X86_AVX,COMPUTE_ADDRESS_SPACE_EMULATED,COMPUTE_NET_VIF_MODEL_E1000,COMPUTE_IMAGE_TYPE_ARI,COMPUTE_NET_ATTACH_INTERFACE,HW_CPU_X86_SSE4A,COMPUTE_TRUSTED_CERTS,COMPUTE_GRAPHICS_MODEL_VGA,HW_CPU_X86_SSE2,COMPUTE_GRAPHICS_MODEL_VIRTIO,COMPUTE_STORAGE_VIRTIO_FS,COMPUTE_SECURITY_STATELESS_FIRMWARE,COMPUTE_NET_ATTACH_INTERFACE_WITH_TAG,HW_CPU_X86_SSE,HW_CPU_X86_SVM,COMPUTE_STORAGE_BUS_VIRTIO,COMPUTE_NET_VIRTIO_PACKED,COMPUTE_SECURITY_TPM_2_0,HW_CPU_X86_MMX,HW_CPU_X86_FMA3,COMPUTE_VOLUME_EXTEND,COMPUTE_DEVICE_TAGGING,HW_CPU_X86_SSE42,COMPUTE_SOCKET_PCI_NUMA_AFFINITY,COMPUTE_IMAGE_TYPE_AKI,COMPUTE_STORAGE_BUS_SCSI,COM
PUTE_ADDRESS_SPACE_PASSTHROUGH,COMPUTE_RESCUE_BFV,COMPUTE_SOUND_MODEL_SB16,HW_CPU_X86_SSE41,COMPUTE_STORAGE_BUS_IDE,HW_CPU_X86_ABM _refresh_associations /usr/lib/python3.12/site-packages/nova/scheduler/client/report.py:843
Oct 09 16:11:10 compute-0 nova_compute[117331]: 2025-10-09 16:11:10.935 2 DEBUG nova.compute.provider_tree [None req-92a13bed-c081-4c7b-87f3-bafebefb79f2 - - - - - -] Updating inventory in ProviderTree for provider 593051b8-2000-437f-a915-2616fc8b1671 with inventory: {'MEMORY_MB': {'total': 7679, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0, 'reserved': 512}, 'VCPU': {'total': 8, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0, 'reserved': 0}, 'DISK_GB': {'total': 79, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9, 'reserved': 1}} update_inventory /usr/lib/python3.12/site-packages/nova/compute/provider_tree.py:176
Oct 09 16:11:10 compute-0 nova_compute[117331]: 2025-10-09 16:11:10.978 2 DEBUG nova.compute.manager [req-508ccb00-b559-4fbd-a803-cd87e67a5b70 req-f1cc0f96-85b6-4345-9bd3-ace7428ee6d3 ee15961ad2be4bada6c106ac4f69f422 c076c42271004e7aa86e84416e33f826 - - default default] [instance: 42539348-4e24-48c0-9522-d27691bb1247] Received event network-vif-deleted-623a49d8-b0af-4032-871e-8d96af50c4af external_instance_event /usr/lib/python3.12/site-packages/nova/compute/manager.py:11812
Oct 09 16:11:10 compute-0 nova_compute[117331]: 2025-10-09 16:11:10.978 2 INFO nova.compute.manager [req-508ccb00-b559-4fbd-a803-cd87e67a5b70 req-f1cc0f96-85b6-4345-9bd3-ace7428ee6d3 ee15961ad2be4bada6c106ac4f69f422 c076c42271004e7aa86e84416e33f826 - - default default] [instance: 42539348-4e24-48c0-9522-d27691bb1247] Neutron deleted interface 623a49d8-b0af-4032-871e-8d96af50c4af; detaching it from the instance and deleting it from the info cache
Oct 09 16:11:10 compute-0 nova_compute[117331]: 2025-10-09 16:11:10.978 2 DEBUG nova.network.neutron [req-508ccb00-b559-4fbd-a803-cd87e67a5b70 req-f1cc0f96-85b6-4345-9bd3-ace7428ee6d3 ee15961ad2be4bada6c106ac4f69f422 c076c42271004e7aa86e84416e33f826 - - default default] [instance: 42539348-4e24-48c0-9522-d27691bb1247] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.12/site-packages/nova/network/neutron.py:116
Oct 09 16:11:11 compute-0 nova_compute[117331]: 2025-10-09 16:11:11.436 2 DEBUG nova.network.neutron [-] [instance: 42539348-4e24-48c0-9522-d27691bb1247] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.12/site-packages/nova/network/neutron.py:116
Oct 09 16:11:11 compute-0 nova_compute[117331]: 2025-10-09 16:11:11.472 2 DEBUG nova.scheduler.client.report [None req-92a13bed-c081-4c7b-87f3-bafebefb79f2 - - - - - -] Updated inventory for provider 593051b8-2000-437f-a915-2616fc8b1671 with generation 3 in Placement from set_inventory_for_provider using data: {'MEMORY_MB': {'total': 7679, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0, 'reserved': 512}, 'VCPU': {'total': 8, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0, 'reserved': 0}, 'DISK_GB': {'total': 79, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9, 'reserved': 1}} set_inventory_for_provider /usr/lib/python3.12/site-packages/nova/scheduler/client/report.py:975
Oct 09 16:11:11 compute-0 nova_compute[117331]: 2025-10-09 16:11:11.472 2 DEBUG nova.compute.provider_tree [None req-92a13bed-c081-4c7b-87f3-bafebefb79f2 - - - - - -] Updating resource provider 593051b8-2000-437f-a915-2616fc8b1671 generation from 3 to 4 during operation: update_inventory _update_generation /usr/lib/python3.12/site-packages/nova/compute/provider_tree.py:164
Oct 09 16:11:11 compute-0 nova_compute[117331]: 2025-10-09 16:11:11.472 2 DEBUG nova.compute.provider_tree [None req-92a13bed-c081-4c7b-87f3-bafebefb79f2 - - - - - -] Updating inventory in ProviderTree for provider 593051b8-2000-437f-a915-2616fc8b1671 with inventory: {'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 79, 'reserved': 1, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} update_inventory /usr/lib/python3.12/site-packages/nova/compute/provider_tree.py:176
Oct 09 16:11:11 compute-0 nova_compute[117331]: 2025-10-09 16:11:11.485 2 DEBUG nova.compute.manager [req-508ccb00-b559-4fbd-a803-cd87e67a5b70 req-f1cc0f96-85b6-4345-9bd3-ace7428ee6d3 ee15961ad2be4bada6c106ac4f69f422 c076c42271004e7aa86e84416e33f826 - - default default] [instance: 42539348-4e24-48c0-9522-d27691bb1247] Detach interface failed, port_id=623a49d8-b0af-4032-871e-8d96af50c4af, reason: Instance 42539348-4e24-48c0-9522-d27691bb1247 could not be found. _process_instance_vif_deleted_event /usr/lib/python3.12/site-packages/nova/compute/manager.py:11646
Oct 09 16:11:11 compute-0 nova_compute[117331]: 2025-10-09 16:11:11.738 2 DEBUG nova.compute.manager [req-b41939e8-b151-4fad-9e7f-36991d1f83db req-8c0b6ae8-e9cf-4578-a43a-8a83c490cb98 ee15961ad2be4bada6c106ac4f69f422 c076c42271004e7aa86e84416e33f826 - - default default] [instance: 42539348-4e24-48c0-9522-d27691bb1247] Received event network-vif-unplugged-623a49d8-b0af-4032-871e-8d96af50c4af external_instance_event /usr/lib/python3.12/site-packages/nova/compute/manager.py:11812
Oct 09 16:11:11 compute-0 nova_compute[117331]: 2025-10-09 16:11:11.738 2 DEBUG oslo_concurrency.lockutils [req-b41939e8-b151-4fad-9e7f-36991d1f83db req-8c0b6ae8-e9cf-4578-a43a-8a83c490cb98 ee15961ad2be4bada6c106ac4f69f422 c076c42271004e7aa86e84416e33f826 - - default default] Acquiring lock "42539348-4e24-48c0-9522-d27691bb1247-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:405
Oct 09 16:11:11 compute-0 nova_compute[117331]: 2025-10-09 16:11:11.738 2 DEBUG oslo_concurrency.lockutils [req-b41939e8-b151-4fad-9e7f-36991d1f83db req-8c0b6ae8-e9cf-4578-a43a-8a83c490cb98 ee15961ad2be4bada6c106ac4f69f422 c076c42271004e7aa86e84416e33f826 - - default default] Lock "42539348-4e24-48c0-9522-d27691bb1247-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:410
Oct 09 16:11:11 compute-0 nova_compute[117331]: 2025-10-09 16:11:11.739 2 DEBUG oslo_concurrency.lockutils [req-b41939e8-b151-4fad-9e7f-36991d1f83db req-8c0b6ae8-e9cf-4578-a43a-8a83c490cb98 ee15961ad2be4bada6c106ac4f69f422 c076c42271004e7aa86e84416e33f826 - - default default] Lock "42539348-4e24-48c0-9522-d27691bb1247-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:424
Oct 09 16:11:11 compute-0 nova_compute[117331]: 2025-10-09 16:11:11.739 2 DEBUG nova.compute.manager [req-b41939e8-b151-4fad-9e7f-36991d1f83db req-8c0b6ae8-e9cf-4578-a43a-8a83c490cb98 ee15961ad2be4bada6c106ac4f69f422 c076c42271004e7aa86e84416e33f826 - - default default] [instance: 42539348-4e24-48c0-9522-d27691bb1247] No waiting events found dispatching network-vif-unplugged-623a49d8-b0af-4032-871e-8d96af50c4af pop_instance_event /usr/lib/python3.12/site-packages/nova/compute/manager.py:344
Oct 09 16:11:11 compute-0 nova_compute[117331]: 2025-10-09 16:11:11.739 2 DEBUG nova.compute.manager [req-b41939e8-b151-4fad-9e7f-36991d1f83db req-8c0b6ae8-e9cf-4578-a43a-8a83c490cb98 ee15961ad2be4bada6c106ac4f69f422 c076c42271004e7aa86e84416e33f826 - - default default] [instance: 42539348-4e24-48c0-9522-d27691bb1247] Received event network-vif-unplugged-623a49d8-b0af-4032-871e-8d96af50c4af for instance with task_state deleting. _process_instance_event /usr/lib/python3.12/site-packages/nova/compute/manager.py:11590
Oct 09 16:11:11 compute-0 nova_compute[117331]: 2025-10-09 16:11:11.943 2 INFO nova.compute.manager [-] [instance: 42539348-4e24-48c0-9522-d27691bb1247] Took 1.46 seconds to deallocate network for instance.
Oct 09 16:11:11 compute-0 nova_compute[117331]: 2025-10-09 16:11:11.981 2 DEBUG nova.compute.resource_tracker [None req-92a13bed-c081-4c7b-87f3-bafebefb79f2 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.12/site-packages/nova/compute/resource_tracker.py:1097
Oct 09 16:11:11 compute-0 nova_compute[117331]: 2025-10-09 16:11:11.981 2 DEBUG oslo_concurrency.lockutils [None req-92a13bed-c081-4c7b-87f3-bafebefb79f2 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 2.787s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:424
Oct 09 16:11:12 compute-0 nova_compute[117331]: 2025-10-09 16:11:12.461 2 DEBUG oslo_concurrency.lockutils [None req-274e4d15-c5a5-44f5-8954-1c4eeedf7ef8 79bd7ccf35e1491a9ecd7213db027ff8 cb9cfe2a59d34561b954fd278bc3bf0a - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:405
Oct 09 16:11:12 compute-0 nova_compute[117331]: 2025-10-09 16:11:12.462 2 DEBUG oslo_concurrency.lockutils [None req-274e4d15-c5a5-44f5-8954-1c4eeedf7ef8 79bd7ccf35e1491a9ecd7213db027ff8 cb9cfe2a59d34561b954fd278bc3bf0a - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:410
Oct 09 16:11:12 compute-0 nova_compute[117331]: 2025-10-09 16:11:12.541 2 DEBUG nova.compute.provider_tree [None req-274e4d15-c5a5-44f5-8954-1c4eeedf7ef8 79bd7ccf35e1491a9ecd7213db027ff8 cb9cfe2a59d34561b954fd278bc3bf0a - - default default] Inventory has not changed in ProviderTree for provider: 593051b8-2000-437f-a915-2616fc8b1671 update_inventory /usr/lib/python3.12/site-packages/nova/compute/provider_tree.py:180
Oct 09 16:11:12 compute-0 nova_compute[117331]: 2025-10-09 16:11:12.982 2 DEBUG oslo_service.periodic_task [None req-92a13bed-c081-4c7b-87f3-bafebefb79f2 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.12/site-packages/oslo_service/periodic_task.py:210
Oct 09 16:11:13 compute-0 nova_compute[117331]: 2025-10-09 16:11:13.053 2 DEBUG nova.scheduler.client.report [None req-274e4d15-c5a5-44f5-8954-1c4eeedf7ef8 79bd7ccf35e1491a9ecd7213db027ff8 cb9cfe2a59d34561b954fd278bc3bf0a - - default default] Inventory has not changed for provider 593051b8-2000-437f-a915-2616fc8b1671 based on inventory data: {'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 79, 'reserved': 1, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.12/site-packages/nova/scheduler/client/report.py:958
Oct 09 16:11:13 compute-0 sshd[52903]: drop connection #0 from [134.199.199.215]:34300 on [38.102.83.110]:22 penalty: failed authentication
Oct 09 16:11:13 compute-0 nova_compute[117331]: 2025-10-09 16:11:13.679 2 DEBUG oslo_concurrency.lockutils [None req-274e4d15-c5a5-44f5-8954-1c4eeedf7ef8 79bd7ccf35e1491a9ecd7213db027ff8 cb9cfe2a59d34561b954fd278bc3bf0a - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 1.217s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:424
Oct 09 16:11:13 compute-0 nova_compute[117331]: 2025-10-09 16:11:13.699 2 INFO nova.scheduler.client.report [None req-274e4d15-c5a5-44f5-8954-1c4eeedf7ef8 79bd7ccf35e1491a9ecd7213db027ff8 cb9cfe2a59d34561b954fd278bc3bf0a - - default default] Deleted allocations for instance 42539348-4e24-48c0-9522-d27691bb1247
Oct 09 16:11:14 compute-0 nova_compute[117331]: 2025-10-09 16:11:14.729 2 DEBUG oslo_concurrency.lockutils [None req-274e4d15-c5a5-44f5-8954-1c4eeedf7ef8 79bd7ccf35e1491a9ecd7213db027ff8 cb9cfe2a59d34561b954fd278bc3bf0a - - default default] Lock "42539348-4e24-48c0-9522-d27691bb1247" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 6.068s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:424
Oct 09 16:11:14 compute-0 nova_compute[117331]: 2025-10-09 16:11:14.964 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 09 16:11:15 compute-0 nova_compute[117331]: 2025-10-09 16:11:15.332 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 09 16:11:16 compute-0 sshd[52903]: drop connection #0 from [134.199.199.215]:45122 on [38.102.83.110]:22 penalty: failed authentication
Oct 09 16:11:16 compute-0 ovn_metadata_agent[28608]: 2025-10-09 16:11:16.781 28613 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=9954897f-aa83-45dd-8e84-289816676c2a, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '4'}),), if_exists=True) do_commit /usr/lib/python3.12/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 09 16:11:16 compute-0 podman[141338]: 2025-10-09 16:11:16.82713076 +0000 UTC m=+0.053315573 container health_status 86aa93e887899e54d383b0fc17fb3c5637680d1d8e146a130a557c837f0260fe (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']})
Oct 09 16:11:19 compute-0 nova_compute[117331]: 2025-10-09 16:11:19.965 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 09 16:11:20 compute-0 sshd[52903]: drop connection #0 from [134.199.199.215]:45138 on [38.102.83.110]:22 penalty: failed authentication
Oct 09 16:11:20 compute-0 nova_compute[117331]: 2025-10-09 16:11:20.334 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 09 16:11:20 compute-0 podman[141362]: 2025-10-09 16:11:20.812037548 +0000 UTC m=+0.048955384 container health_status 0e966c7a283689daf23c5db5017f23f685ee836319240188a9f70ddd246563c5 (image=38.102.83.66:5001/podified-master-centos10/openstack-neutron-metadata-agent-ovn:watcher_latest, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 10 Base Image, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_managed=true, container_name=ovn_metadata_agent, io.buildah.version=1.41.4, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251007, org.label-schema.vendor=CentOS, tcib_build_tag=watcher_latest, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': '38.102.83.66:5001/podified-master-centos10/openstack-neutron-metadata-agent-ovn:watcher_latest', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team)
Oct 09 16:11:20 compute-0 podman[141363]: 2025-10-09 16:11:20.849470503 +0000 UTC m=+0.082265168 container health_status 8227728d23f374cb9528be6ddf01dff38d8c792912a17b0d137932a18390b820 (image=38.102.83.66:5001/podified-master-centos10/openstack-iscsid:watcher_latest, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, config_id=iscsid, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251007, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=watcher_latest, io.buildah.version=1.41.4, managed_by=edpm_ansible, container_name=iscsid, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': '38.102.83.66:5001/podified-master-centos10/openstack-iscsid:watcher_latest', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 10 Base Image)
Oct 09 16:11:21 compute-0 nova_compute[117331]: 2025-10-09 16:11:21.623 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 09 16:11:23 compute-0 sshd[52903]: drop connection #0 from [134.199.199.215]:45142 on [38.102.83.110]:22 penalty: failed authentication
Oct 09 16:11:25 compute-0 nova_compute[117331]: 2025-10-09 16:11:24.999 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 09 16:11:25 compute-0 nova_compute[117331]: 2025-10-09 16:11:25.336 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 09 16:11:26 compute-0 sshd[52903]: drop connection #0 from [134.199.199.215]:38262 on [38.102.83.110]:22 penalty: failed authentication
Oct 09 16:11:29 compute-0 podman[127775]: time="2025-10-09T16:11:29Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Oct 09 16:11:29 compute-0 podman[127775]: @ - - [09/Oct/2025:16:11:29 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 19526 "" "Go-http-client/1.1"
Oct 09 16:11:29 compute-0 podman[127775]: @ - - [09/Oct/2025:16:11:29 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 3007 "" "Go-http-client/1.1"
Oct 09 16:11:29 compute-0 podman[141402]: 2025-10-09 16:11:29.830357965 +0000 UTC m=+0.058336613 container health_status 64fa251112d01abb34ec1bb8e22019815ddcaaaa391ce5ec91517147ac5a9994 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, architecture=x86_64, container_name=openstack_network_exporter, io.openshift.expose-services=, release=1755695350, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., build-date=2025-08-20T13:12:41, vendor=Red Hat, Inc., description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., version=9.6, maintainer=Red Hat, Inc., vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, name=ubi9-minimal, distribution-scope=public, managed_by=edpm_ansible, io.openshift.tags=minimal rhel9, com.redhat.component=ubi9-minimal-container, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., config_id=edpm, io.buildah.version=1.33.7, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, url=https://catalog.redhat.com/en/search?searchType=containers, vcs-type=git)
Oct 09 16:11:30 compute-0 nova_compute[117331]: 2025-10-09 16:11:30.000 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 09 16:11:30 compute-0 nova_compute[117331]: 2025-10-09 16:11:30.395 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 09 16:11:30 compute-0 unix_chkpwd[141426]: password check failed for user (root)
Oct 09 16:11:30 compute-0 sshd-session[141424]: pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=134.199.199.215  user=root
Oct 09 16:11:31 compute-0 openstack_network_exporter[129925]: ERROR   16:11:31 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Oct 09 16:11:31 compute-0 openstack_network_exporter[129925]: ERROR   16:11:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Oct 09 16:11:31 compute-0 openstack_network_exporter[129925]: ERROR   16:11:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Oct 09 16:11:31 compute-0 openstack_network_exporter[129925]: ERROR   16:11:31 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Oct 09 16:11:31 compute-0 openstack_network_exporter[129925]: 
Oct 09 16:11:31 compute-0 openstack_network_exporter[129925]: ERROR   16:11:31 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Oct 09 16:11:31 compute-0 openstack_network_exporter[129925]: 
Oct 09 16:11:32 compute-0 sshd-session[141424]: Failed password for root from 134.199.199.215 port 38278 ssh2
Oct 09 16:11:32 compute-0 podman[141427]: 2025-10-09 16:11:32.863351252 +0000 UTC m=+0.099398884 container health_status d615e4f24ff62fbb14be4a63e6da568ec5b22309773c1c66103b6b4de64a8a2f (image=38.102.83.66:5001/podified-master-centos10/openstack-ovn-controller:watcher_latest, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_controller, container_name=ovn_controller, io.buildah.version=1.41.4, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.build-date=20251007, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=watcher_latest, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': '38.102.83.66:5001/podified-master-centos10/openstack-ovn-controller:watcher_latest', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']})
Oct 09 16:11:33 compute-0 unix_chkpwd[141457]: password check failed for user (root)
Oct 09 16:11:33 compute-0 sshd-session[141455]: pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=134.199.199.215  user=root
Oct 09 16:11:34 compute-0 sshd-session[141424]: Connection closed by authenticating user root 134.199.199.215 port 38278 [preauth]
Oct 09 16:11:35 compute-0 nova_compute[117331]: 2025-10-09 16:11:35.039 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 09 16:11:35 compute-0 ovn_metadata_agent[28608]: 2025-10-09 16:11:35.281 28613 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:405
Oct 09 16:11:35 compute-0 ovn_metadata_agent[28608]: 2025-10-09 16:11:35.282 28613 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.000s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:410
Oct 09 16:11:35 compute-0 ovn_metadata_agent[28608]: 2025-10-09 16:11:35.282 28613 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:424
Oct 09 16:11:35 compute-0 nova_compute[117331]: 2025-10-09 16:11:35.395 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 09 16:11:36 compute-0 sshd-session[141455]: Failed password for root from 134.199.199.215 port 38284 ssh2
Oct 09 16:11:37 compute-0 sshd-session[141459]: Invalid user steam from 134.199.199.215 port 45008
Oct 09 16:11:37 compute-0 sshd-session[141459]: pam_unix(sshd:auth): check pass; user unknown
Oct 09 16:11:37 compute-0 sshd-session[141459]: pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=134.199.199.215
Oct 09 16:11:38 compute-0 sshd-session[141455]: Connection closed by authenticating user root 134.199.199.215 port 38284 [preauth]
Oct 09 16:11:38 compute-0 ovn_metadata_agent[28608]: 2025-10-09 16:11:38.305 28613 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:12:8f:8a 10.100.0.2'], port_security=[], type=localport, nat_addresses=[], virtual_parent=[], up=[False], options={}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.2/28', 'neutron:device_id': 'ovnmeta-19256da9-441e-477f-8195-1eea19c08437', 'neutron:device_owner': 'network:distributed', 'neutron:mtu': '', 'neutron:network_name': 'neutron-19256da9-441e-477f-8195-1eea19c08437', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'be0f7b79aa964c70b49cd8e6a20ecdf5', 'neutron:revision_number': '2', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=9a1624b0-a5ab-47cc-ba86-f82fec9b1800, chassis=[], tunnel_key=1, gateway_chassis=[], requested_chassis=[], logical_port=a5b52431-d630-4de3-8ed8-9c5d94f8bb93) old=Port_Binding(mac=['fa:16:3e:12:8f:8a'], external_ids={'neutron:cidrs': '', 'neutron:device_id': 'ovnmeta-19256da9-441e-477f-8195-1eea19c08437', 'neutron:device_owner': 'network:distributed', 'neutron:mtu': '', 'neutron:network_name': 'neutron-19256da9-441e-477f-8195-1eea19c08437', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'be0f7b79aa964c70b49cd8e6a20ecdf5', 'neutron:revision_number': '1', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}) matches /usr/lib/python3.12/site-packages/ovsdbapp/backend/ovs_idl/event.py:55
Oct 09 16:11:38 compute-0 ovn_metadata_agent[28608]: 2025-10-09 16:11:38.306 28613 INFO neutron.agent.ovn.metadata.agent [-] Metadata Port a5b52431-d630-4de3-8ed8-9c5d94f8bb93 in datapath 19256da9-441e-477f-8195-1eea19c08437 updated
Oct 09 16:11:38 compute-0 ovn_metadata_agent[28608]: 2025-10-09 16:11:38.308 28613 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 19256da9-441e-477f-8195-1eea19c08437, tearing the namespace down if needed _get_provision_params /usr/lib/python3.12/site-packages/neutron/agent/ovn/metadata/agent.py:756
Oct 09 16:11:38 compute-0 ovn_metadata_agent[28608]: 2025-10-09 16:11:38.309 139687 DEBUG oslo.privsep.daemon [-] privsep: reply[b6af55d2-c1d0-430c-9176-34b55dde5657]: (4, False) _call_back /usr/lib/python3.12/site-packages/oslo_privsep/daemon.py:515
Oct 09 16:11:38 compute-0 sshd-session[141459]: Failed password for invalid user steam from 134.199.199.215 port 45008 ssh2
Oct 09 16:11:39 compute-0 sshd-session[141459]: Connection closed by invalid user steam 134.199.199.215 port 45008 [preauth]
Oct 09 16:11:40 compute-0 nova_compute[117331]: 2025-10-09 16:11:40.041 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 09 16:11:40 compute-0 nova_compute[117331]: 2025-10-09 16:11:40.396 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 09 16:11:41 compute-0 unix_chkpwd[141463]: password check failed for user (root)
Oct 09 16:11:41 compute-0 sshd-session[141461]: pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=134.199.199.215  user=root
Oct 09 16:11:41 compute-0 podman[141464]: 2025-10-09 16:11:41.840752844 +0000 UTC m=+0.076483223 container health_status 270830b5167cafbab735d8118980f26987e673cd0fa464dd2202394830bcca66 (image=38.102.83.66:5001/podified-master-centos10/openstack-multipathd:watcher_latest, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.4, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': '38.102.83.66:5001/podified-master-centos10/openstack-multipathd:watcher_latest', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=watcher_latest, container_name=multipathd, org.label-schema.build-date=20251007, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, config_id=multipathd)
Oct 09 16:11:43 compute-0 sshd-session[141461]: Failed password for root from 134.199.199.215 port 45018 ssh2
Oct 09 16:11:43 compute-0 sshd-session[141461]: Connection closed by authenticating user root 134.199.199.215 port 45018 [preauth]
Oct 09 16:11:43 compute-0 sshd-session[141486]: Invalid user tom from 134.199.199.215 port 45026
Oct 09 16:11:44 compute-0 sshd-session[141486]: pam_unix(sshd:auth): check pass; user unknown
Oct 09 16:11:44 compute-0 sshd-session[141486]: pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=134.199.199.215
Oct 09 16:11:45 compute-0 nova_compute[117331]: 2025-10-09 16:11:45.043 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 09 16:11:45 compute-0 nova_compute[117331]: 2025-10-09 16:11:45.398 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 09 16:11:46 compute-0 sshd-session[141486]: Failed password for invalid user tom from 134.199.199.215 port 45026 ssh2
Oct 09 16:11:46 compute-0 sshd-session[141486]: Connection closed by invalid user tom 134.199.199.215 port 45026 [preauth]
Oct 09 16:11:46 compute-0 ovn_metadata_agent[28608]: 2025-10-09 16:11:46.369 28613 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:a8:05:65 10.100.0.2'], port_security=[], type=localport, nat_addresses=[], virtual_parent=[], up=[False], options={}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.2/28', 'neutron:device_id': 'ovnmeta-294b97e0-a30d-4b0a-b0e1-af719c8ebc9d', 'neutron:device_owner': 'network:distributed', 'neutron:mtu': '', 'neutron:network_name': 'neutron-294b97e0-a30d-4b0a-b0e1-af719c8ebc9d', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'e2eb1e7f2a0a4826a63765b30279f43c', 'neutron:revision_number': '2', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=f582200c-6d00-4726-8ae8-9e9b61814a11, chassis=[], tunnel_key=1, gateway_chassis=[], requested_chassis=[], logical_port=753036cd-5a61-4363-85a9-0bb7019888e7) old=Port_Binding(mac=['fa:16:3e:a8:05:65'], external_ids={'neutron:cidrs': '', 'neutron:device_id': 'ovnmeta-294b97e0-a30d-4b0a-b0e1-af719c8ebc9d', 'neutron:device_owner': 'network:distributed', 'neutron:mtu': '', 'neutron:network_name': 'neutron-294b97e0-a30d-4b0a-b0e1-af719c8ebc9d', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'e2eb1e7f2a0a4826a63765b30279f43c', 'neutron:revision_number': '1', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}) matches /usr/lib/python3.12/site-packages/ovsdbapp/backend/ovs_idl/event.py:55
Oct 09 16:11:46 compute-0 ovn_metadata_agent[28608]: 2025-10-09 16:11:46.370 28613 INFO neutron.agent.ovn.metadata.agent [-] Metadata Port 753036cd-5a61-4363-85a9-0bb7019888e7 in datapath 294b97e0-a30d-4b0a-b0e1-af719c8ebc9d updated
Oct 09 16:11:46 compute-0 ovn_metadata_agent[28608]: 2025-10-09 16:11:46.372 28613 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 294b97e0-a30d-4b0a-b0e1-af719c8ebc9d, tearing the namespace down if needed _get_provision_params /usr/lib/python3.12/site-packages/neutron/agent/ovn/metadata/agent.py:756
Oct 09 16:11:46 compute-0 ovn_metadata_agent[28608]: 2025-10-09 16:11:46.372 139687 DEBUG oslo.privsep.daemon [-] privsep: reply[4ed9a20b-cd5d-46e9-9176-87a72bed0db1]: (4, False) _call_back /usr/lib/python3.12/site-packages/oslo_privsep/daemon.py:515
Oct 09 16:11:47 compute-0 sshd-session[141488]: Invalid user www from 134.199.199.215 port 34418
Oct 09 16:11:47 compute-0 sshd-session[141488]: pam_unix(sshd:auth): check pass; user unknown
Oct 09 16:11:47 compute-0 sshd-session[141488]: pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=134.199.199.215
Oct 09 16:11:47 compute-0 podman[141490]: 2025-10-09 16:11:47.480239949 +0000 UTC m=+0.074961502 container health_status 86aa93e887899e54d383b0fc17fb3c5637680d1d8e146a130a557c837f0260fe (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible)
Oct 09 16:11:48 compute-0 sshd-session[141488]: Failed password for invalid user www from 134.199.199.215 port 34418 ssh2
Oct 09 16:11:49 compute-0 sshd-session[141488]: Connection closed by invalid user www 134.199.199.215 port 34418 [preauth]
Oct 09 16:11:50 compute-0 nova_compute[117331]: 2025-10-09 16:11:50.044 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 09 16:11:50 compute-0 nova_compute[117331]: 2025-10-09 16:11:50.398 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 09 16:11:50 compute-0 sshd-session[141514]: Invalid user hadoop from 134.199.199.215 port 34424
Oct 09 16:11:50 compute-0 sshd-session[141514]: pam_unix(sshd:auth): check pass; user unknown
Oct 09 16:11:50 compute-0 sshd-session[141514]: pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=134.199.199.215
Oct 09 16:11:51 compute-0 podman[141516]: 2025-10-09 16:11:51.816317557 +0000 UTC m=+0.052967295 container health_status 0e966c7a283689daf23c5db5017f23f685ee836319240188a9f70ddd246563c5 (image=38.102.83.66:5001/podified-master-centos10/openstack-neutron-metadata-agent-ovn:watcher_latest, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251007, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, tcib_build_tag=watcher_latest, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': '38.102.83.66:5001/podified-master-centos10/openstack-neutron-metadata-agent-ovn:watcher_latest', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, io.buildah.version=1.41.4)
Oct 09 16:11:51 compute-0 podman[141517]: 2025-10-09 16:11:51.829104676 +0000 UTC m=+0.059572361 container health_status 8227728d23f374cb9528be6ddf01dff38d8c792912a17b0d137932a18390b820 (image=38.102.83.66:5001/podified-master-centos10/openstack-iscsid:watcher_latest, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.build-date=20251007, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': '38.102.83.66:5001/podified-master-centos10/openstack-iscsid:watcher_latest', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 10 Base Image, tcib_build_tag=watcher_latest, container_name=iscsid, org.label-schema.schema-version=1.0, tcib_managed=true, config_id=iscsid, io.buildah.version=1.41.4)
Oct 09 16:11:52 compute-0 sshd-session[141514]: Failed password for invalid user hadoop from 134.199.199.215 port 34424 ssh2
Oct 09 16:11:52 compute-0 sshd-session[141514]: Connection closed by invalid user hadoop 134.199.199.215 port 34424 [preauth]
Oct 09 16:11:53 compute-0 sshd[52903]: drop connection #0 from [134.199.199.215]:34438 on [38.102.83.110]:22 penalty: failed authentication
Oct 09 16:11:55 compute-0 nova_compute[117331]: 2025-10-09 16:11:55.046 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 09 16:11:55 compute-0 nova_compute[117331]: 2025-10-09 16:11:55.399 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 09 16:11:56 compute-0 nova_compute[117331]: 2025-10-09 16:11:56.944 2 DEBUG oslo_concurrency.lockutils [None req-e6051185-3522-4f22-a17e-eded6a581ec0 ac3a468b64f844f9b4952888c88de098 e2eb1e7f2a0a4826a63765b30279f43c - - default default] Acquiring lock "48c2872f-fffe-4ce0-8e59-8f5c86e5b07b" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:405
Oct 09 16:11:56 compute-0 nova_compute[117331]: 2025-10-09 16:11:56.944 2 DEBUG oslo_concurrency.lockutils [None req-e6051185-3522-4f22-a17e-eded6a581ec0 ac3a468b64f844f9b4952888c88de098 e2eb1e7f2a0a4826a63765b30279f43c - - default default] Lock "48c2872f-fffe-4ce0-8e59-8f5c86e5b07b" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.000s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:410
Oct 09 16:11:57 compute-0 sshd[52903]: drop connection #0 from [134.199.199.215]:37670 on [38.102.83.110]:22 penalty: failed authentication
Oct 09 16:11:57 compute-0 nova_compute[117331]: 2025-10-09 16:11:57.450 2 DEBUG nova.compute.manager [None req-e6051185-3522-4f22-a17e-eded6a581ec0 ac3a468b64f844f9b4952888c88de098 e2eb1e7f2a0a4826a63765b30279f43c - - default default] [instance: 48c2872f-fffe-4ce0-8e59-8f5c86e5b07b] Starting instance... _do_build_and_run_instance /usr/lib/python3.12/site-packages/nova/compute/manager.py:2472
Oct 09 16:11:57 compute-0 nova_compute[117331]: 2025-10-09 16:11:57.992 2 DEBUG oslo_concurrency.lockutils [None req-e6051185-3522-4f22-a17e-eded6a581ec0 ac3a468b64f844f9b4952888c88de098 e2eb1e7f2a0a4826a63765b30279f43c - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:405
Oct 09 16:11:57 compute-0 nova_compute[117331]: 2025-10-09 16:11:57.993 2 DEBUG oslo_concurrency.lockutils [None req-e6051185-3522-4f22-a17e-eded6a581ec0 ac3a468b64f844f9b4952888c88de098 e2eb1e7f2a0a4826a63765b30279f43c - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:410
Oct 09 16:11:58 compute-0 nova_compute[117331]: 2025-10-09 16:11:58.000 2 DEBUG nova.virt.hardware [None req-e6051185-3522-4f22-a17e-eded6a581ec0 ac3a468b64f844f9b4952888c88de098 e2eb1e7f2a0a4826a63765b30279f43c - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.12/site-packages/nova/virt/hardware.py:2528
Oct 09 16:11:58 compute-0 nova_compute[117331]: 2025-10-09 16:11:58.000 2 INFO nova.compute.claims [None req-e6051185-3522-4f22-a17e-eded6a581ec0 ac3a468b64f844f9b4952888c88de098 e2eb1e7f2a0a4826a63765b30279f43c - - default default] [instance: 48c2872f-fffe-4ce0-8e59-8f5c86e5b07b] Claim successful on node compute-0.ctlplane.example.com
Oct 09 16:11:58 compute-0 ovn_controller[19752]: 2025-10-09T16:11:58Z|00048|memory_trim|INFO|Detected inactivity (last active 30008 ms ago): trimming memory
Oct 09 16:11:59 compute-0 nova_compute[117331]: 2025-10-09 16:11:59.046 2 DEBUG nova.compute.provider_tree [None req-e6051185-3522-4f22-a17e-eded6a581ec0 ac3a468b64f844f9b4952888c88de098 e2eb1e7f2a0a4826a63765b30279f43c - - default default] Inventory has not changed in ProviderTree for provider: 593051b8-2000-437f-a915-2616fc8b1671 update_inventory /usr/lib/python3.12/site-packages/nova/compute/provider_tree.py:180
Oct 09 16:11:59 compute-0 nova_compute[117331]: 2025-10-09 16:11:59.561 2 DEBUG nova.scheduler.client.report [None req-e6051185-3522-4f22-a17e-eded6a581ec0 ac3a468b64f844f9b4952888c88de098 e2eb1e7f2a0a4826a63765b30279f43c - - default default] Inventory has not changed for provider 593051b8-2000-437f-a915-2616fc8b1671 based on inventory data: {'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 79, 'reserved': 1, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.12/site-packages/nova/scheduler/client/report.py:958
Oct 09 16:11:59 compute-0 podman[127775]: time="2025-10-09T16:11:59Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Oct 09 16:11:59 compute-0 podman[127775]: @ - - [09/Oct/2025:16:11:59 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 19526 "" "Go-http-client/1.1"
Oct 09 16:11:59 compute-0 podman[127775]: @ - - [09/Oct/2025:16:11:59 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 3010 "" "Go-http-client/1.1"
Oct 09 16:12:00 compute-0 nova_compute[117331]: 2025-10-09 16:12:00.048 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 09 16:12:00 compute-0 nova_compute[117331]: 2025-10-09 16:12:00.070 2 DEBUG oslo_concurrency.lockutils [None req-e6051185-3522-4f22-a17e-eded6a581ec0 ac3a468b64f844f9b4952888c88de098 e2eb1e7f2a0a4826a63765b30279f43c - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 2.077s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:424
Oct 09 16:12:00 compute-0 nova_compute[117331]: 2025-10-09 16:12:00.070 2 DEBUG nova.compute.manager [None req-e6051185-3522-4f22-a17e-eded6a581ec0 ac3a468b64f844f9b4952888c88de098 e2eb1e7f2a0a4826a63765b30279f43c - - default default] [instance: 48c2872f-fffe-4ce0-8e59-8f5c86e5b07b] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.12/site-packages/nova/compute/manager.py:2869
Oct 09 16:12:00 compute-0 nova_compute[117331]: 2025-10-09 16:12:00.404 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 09 16:12:00 compute-0 nova_compute[117331]: 2025-10-09 16:12:00.583 2 DEBUG nova.compute.manager [None req-e6051185-3522-4f22-a17e-eded6a581ec0 ac3a468b64f844f9b4952888c88de098 e2eb1e7f2a0a4826a63765b30279f43c - - default default] [instance: 48c2872f-fffe-4ce0-8e59-8f5c86e5b07b] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.12/site-packages/nova/compute/manager.py:2016
Oct 09 16:12:00 compute-0 nova_compute[117331]: 2025-10-09 16:12:00.584 2 DEBUG nova.network.neutron [None req-e6051185-3522-4f22-a17e-eded6a581ec0 ac3a468b64f844f9b4952888c88de098 e2eb1e7f2a0a4826a63765b30279f43c - - default default] [instance: 48c2872f-fffe-4ce0-8e59-8f5c86e5b07b] allocate_for_instance() allocate_for_instance /usr/lib/python3.12/site-packages/nova/network/neutron.py:1208
Oct 09 16:12:00 compute-0 nova_compute[117331]: 2025-10-09 16:12:00.585 2 WARNING neutronclient.v2_0.client [None req-e6051185-3522-4f22-a17e-eded6a581ec0 ac3a468b64f844f9b4952888c88de098 e2eb1e7f2a0a4826a63765b30279f43c - - default default] The python binding code in neutronclient is deprecated in favor of OpenstackSDK, please use that as this will be removed in a future release.
Oct 09 16:12:00 compute-0 nova_compute[117331]: 2025-10-09 16:12:00.585 2 WARNING neutronclient.v2_0.client [None req-e6051185-3522-4f22-a17e-eded6a581ec0 ac3a468b64f844f9b4952888c88de098 e2eb1e7f2a0a4826a63765b30279f43c - - default default] The python binding code in neutronclient is deprecated in favor of OpenstackSDK, please use that as this will be removed in a future release.
Oct 09 16:12:00 compute-0 sshd[52903]: drop connection #0 from [134.199.199.215]:37692 on [38.102.83.110]:22 penalty: failed authentication
Oct 09 16:12:00 compute-0 podman[141554]: 2025-10-09 16:12:00.816985586 +0000 UTC m=+0.051677936 container health_status 64fa251112d01abb34ec1bb8e22019815ddcaaaa391ce5ec91517147ac5a9994 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, com.redhat.component=ubi9-minimal-container, distribution-scope=public, vendor=Red Hat, Inc., version=9.6, managed_by=edpm_ansible, vcs-type=git, container_name=openstack_network_exporter, maintainer=Red Hat, Inc., build-date=2025-08-20T13:12:41, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.buildah.version=1.33.7, architecture=x86_64, io.openshift.expose-services=, io.openshift.tags=minimal rhel9, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., release=1755695350, url=https://catalog.redhat.com/en/search?searchType=containers, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., name=ubi9-minimal, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, config_id=edpm, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal)
Oct 09 16:12:01 compute-0 nova_compute[117331]: 2025-10-09 16:12:01.091 2 INFO nova.virt.libvirt.driver [None req-e6051185-3522-4f22-a17e-eded6a581ec0 ac3a468b64f844f9b4952888c88de098 e2eb1e7f2a0a4826a63765b30279f43c - - default default] [instance: 48c2872f-fffe-4ce0-8e59-8f5c86e5b07b] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Oct 09 16:12:01 compute-0 openstack_network_exporter[129925]: ERROR   16:12:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Oct 09 16:12:01 compute-0 openstack_network_exporter[129925]: ERROR   16:12:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Oct 09 16:12:01 compute-0 openstack_network_exporter[129925]: ERROR   16:12:01 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Oct 09 16:12:01 compute-0 openstack_network_exporter[129925]: ERROR   16:12:01 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Oct 09 16:12:01 compute-0 openstack_network_exporter[129925]: 
Oct 09 16:12:01 compute-0 openstack_network_exporter[129925]: ERROR   16:12:01 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Oct 09 16:12:01 compute-0 openstack_network_exporter[129925]: 
Oct 09 16:12:01 compute-0 nova_compute[117331]: 2025-10-09 16:12:01.597 2 DEBUG nova.compute.manager [None req-e6051185-3522-4f22-a17e-eded6a581ec0 ac3a468b64f844f9b4952888c88de098 e2eb1e7f2a0a4826a63765b30279f43c - - default default] [instance: 48c2872f-fffe-4ce0-8e59-8f5c86e5b07b] Start building block device mappings for instance. _build_resources /usr/lib/python3.12/site-packages/nova/compute/manager.py:2904
Oct 09 16:12:01 compute-0 nova_compute[117331]: 2025-10-09 16:12:01.646 2 DEBUG nova.network.neutron [None req-e6051185-3522-4f22-a17e-eded6a581ec0 ac3a468b64f844f9b4952888c88de098 e2eb1e7f2a0a4826a63765b30279f43c - - default default] [instance: 48c2872f-fffe-4ce0-8e59-8f5c86e5b07b] Successfully created port: 9755a423-13f0-4f3a-8edd-bcc5b5190cfe _create_port_minimal /usr/lib/python3.12/site-packages/nova/network/neutron.py:550
Oct 09 16:12:02 compute-0 nova_compute[117331]: 2025-10-09 16:12:02.433 2 DEBUG nova.network.neutron [None req-e6051185-3522-4f22-a17e-eded6a581ec0 ac3a468b64f844f9b4952888c88de098 e2eb1e7f2a0a4826a63765b30279f43c - - default default] [instance: 48c2872f-fffe-4ce0-8e59-8f5c86e5b07b] Successfully updated port: 9755a423-13f0-4f3a-8edd-bcc5b5190cfe _update_port /usr/lib/python3.12/site-packages/nova/network/neutron.py:588
Oct 09 16:12:02 compute-0 nova_compute[117331]: 2025-10-09 16:12:02.498 2 DEBUG nova.compute.manager [req-3d7cac99-0127-4bbc-baa4-76ded8596943 req-a5feb5a2-e14c-46ad-85b6-9937fbb82ad1 ee15961ad2be4bada6c106ac4f69f422 c076c42271004e7aa86e84416e33f826 - - default default] [instance: 48c2872f-fffe-4ce0-8e59-8f5c86e5b07b] Received event network-changed-9755a423-13f0-4f3a-8edd-bcc5b5190cfe external_instance_event /usr/lib/python3.12/site-packages/nova/compute/manager.py:11812
Oct 09 16:12:02 compute-0 nova_compute[117331]: 2025-10-09 16:12:02.499 2 DEBUG nova.compute.manager [req-3d7cac99-0127-4bbc-baa4-76ded8596943 req-a5feb5a2-e14c-46ad-85b6-9937fbb82ad1 ee15961ad2be4bada6c106ac4f69f422 c076c42271004e7aa86e84416e33f826 - - default default] [instance: 48c2872f-fffe-4ce0-8e59-8f5c86e5b07b] Refreshing instance network info cache due to event network-changed-9755a423-13f0-4f3a-8edd-bcc5b5190cfe. external_instance_event /usr/lib/python3.12/site-packages/nova/compute/manager.py:11817
Oct 09 16:12:02 compute-0 nova_compute[117331]: 2025-10-09 16:12:02.499 2 DEBUG oslo_concurrency.lockutils [req-3d7cac99-0127-4bbc-baa4-76ded8596943 req-a5feb5a2-e14c-46ad-85b6-9937fbb82ad1 ee15961ad2be4bada6c106ac4f69f422 c076c42271004e7aa86e84416e33f826 - - default default] Acquiring lock "refresh_cache-48c2872f-fffe-4ce0-8e59-8f5c86e5b07b" lock /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:313
Oct 09 16:12:02 compute-0 nova_compute[117331]: 2025-10-09 16:12:02.499 2 DEBUG oslo_concurrency.lockutils [req-3d7cac99-0127-4bbc-baa4-76ded8596943 req-a5feb5a2-e14c-46ad-85b6-9937fbb82ad1 ee15961ad2be4bada6c106ac4f69f422 c076c42271004e7aa86e84416e33f826 - - default default] Acquired lock "refresh_cache-48c2872f-fffe-4ce0-8e59-8f5c86e5b07b" lock /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:316
Oct 09 16:12:02 compute-0 nova_compute[117331]: 2025-10-09 16:12:02.499 2 DEBUG nova.network.neutron [req-3d7cac99-0127-4bbc-baa4-76ded8596943 req-a5feb5a2-e14c-46ad-85b6-9937fbb82ad1 ee15961ad2be4bada6c106ac4f69f422 c076c42271004e7aa86e84416e33f826 - - default default] [instance: 48c2872f-fffe-4ce0-8e59-8f5c86e5b07b] Refreshing network info cache for port 9755a423-13f0-4f3a-8edd-bcc5b5190cfe _get_instance_nw_info /usr/lib/python3.12/site-packages/nova/network/neutron.py:2067
Oct 09 16:12:02 compute-0 nova_compute[117331]: 2025-10-09 16:12:02.615 2 DEBUG nova.compute.manager [None req-e6051185-3522-4f22-a17e-eded6a581ec0 ac3a468b64f844f9b4952888c88de098 e2eb1e7f2a0a4826a63765b30279f43c - - default default] [instance: 48c2872f-fffe-4ce0-8e59-8f5c86e5b07b] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.12/site-packages/nova/compute/manager.py:2678
Oct 09 16:12:02 compute-0 nova_compute[117331]: 2025-10-09 16:12:02.616 2 DEBUG nova.virt.libvirt.driver [None req-e6051185-3522-4f22-a17e-eded6a581ec0 ac3a468b64f844f9b4952888c88de098 e2eb1e7f2a0a4826a63765b30279f43c - - default default] [instance: 48c2872f-fffe-4ce0-8e59-8f5c86e5b07b] Creating instance directory _create_image /usr/lib/python3.12/site-packages/nova/virt/libvirt/driver.py:5138
Oct 09 16:12:02 compute-0 nova_compute[117331]: 2025-10-09 16:12:02.617 2 INFO nova.virt.libvirt.driver [None req-e6051185-3522-4f22-a17e-eded6a581ec0 ac3a468b64f844f9b4952888c88de098 e2eb1e7f2a0a4826a63765b30279f43c - - default default] [instance: 48c2872f-fffe-4ce0-8e59-8f5c86e5b07b] Creating image(s)
Oct 09 16:12:02 compute-0 nova_compute[117331]: 2025-10-09 16:12:02.617 2 DEBUG oslo_concurrency.lockutils [None req-e6051185-3522-4f22-a17e-eded6a581ec0 ac3a468b64f844f9b4952888c88de098 e2eb1e7f2a0a4826a63765b30279f43c - - default default] Acquiring lock "/var/lib/nova/instances/48c2872f-fffe-4ce0-8e59-8f5c86e5b07b/disk.info" by "nova.virt.libvirt.imagebackend.Image.resolve_driver_format.<locals>.write_to_disk_info_file" inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:405
Oct 09 16:12:02 compute-0 nova_compute[117331]: 2025-10-09 16:12:02.617 2 DEBUG oslo_concurrency.lockutils [None req-e6051185-3522-4f22-a17e-eded6a581ec0 ac3a468b64f844f9b4952888c88de098 e2eb1e7f2a0a4826a63765b30279f43c - - default default] Lock "/var/lib/nova/instances/48c2872f-fffe-4ce0-8e59-8f5c86e5b07b/disk.info" acquired by "nova.virt.libvirt.imagebackend.Image.resolve_driver_format.<locals>.write_to_disk_info_file" :: waited 0.000s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:410
Oct 09 16:12:02 compute-0 nova_compute[117331]: 2025-10-09 16:12:02.618 2 DEBUG oslo_concurrency.lockutils [None req-e6051185-3522-4f22-a17e-eded6a581ec0 ac3a468b64f844f9b4952888c88de098 e2eb1e7f2a0a4826a63765b30279f43c - - default default] Lock "/var/lib/nova/instances/48c2872f-fffe-4ce0-8e59-8f5c86e5b07b/disk.info" "released" by "nova.virt.libvirt.imagebackend.Image.resolve_driver_format.<locals>.write_to_disk_info_file" :: held 0.001s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:424
Oct 09 16:12:02 compute-0 nova_compute[117331]: 2025-10-09 16:12:02.619 2 DEBUG oslo_utils.imageutils.format_inspector [None req-e6051185-3522-4f22-a17e-eded6a581ec0 ac3a468b64f844f9b4952888c88de098 e2eb1e7f2a0a4826a63765b30279f43c - - default default] Format inspector for vmdk does not match, excluding from consideration (Signature KDMV not found: b'\xebH\x90\x00') _process_chunk /usr/lib/python3.12/site-packages/oslo_utils/imageutils/format_inspector.py:1365
Oct 09 16:12:02 compute-0 nova_compute[117331]: 2025-10-09 16:12:02.622 2 DEBUG oslo_utils.imageutils.format_inspector [None req-e6051185-3522-4f22-a17e-eded6a581ec0 ac3a468b64f844f9b4952888c88de098 e2eb1e7f2a0a4826a63765b30279f43c - - default default] Format inspector for vhdx does not match, excluding from consideration (Region signature not found at 30000) _process_chunk /usr/lib/python3.12/site-packages/oslo_utils/imageutils/format_inspector.py:1365
Oct 09 16:12:02 compute-0 nova_compute[117331]: 2025-10-09 16:12:02.623 2 DEBUG oslo_concurrency.processutils [None req-e6051185-3522-4f22-a17e-eded6a581ec0 ac3a468b64f844f9b4952888c88de098 e2eb1e7f2a0a4826a63765b30279f43c - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/cea3dacfdea0f3734ae526b812744e847bc2d356 --force-share --output=json execute /usr/lib/python3.12/site-packages/oslo_concurrency/processutils.py:349
Oct 09 16:12:02 compute-0 nova_compute[117331]: 2025-10-09 16:12:02.676 2 DEBUG oslo_concurrency.processutils [None req-e6051185-3522-4f22-a17e-eded6a581ec0 ac3a468b64f844f9b4952888c88de098 e2eb1e7f2a0a4826a63765b30279f43c - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/cea3dacfdea0f3734ae526b812744e847bc2d356 --force-share --output=json" returned: 0 in 0.052s execute /usr/lib/python3.12/site-packages/oslo_concurrency/processutils.py:372
Oct 09 16:12:02 compute-0 nova_compute[117331]: 2025-10-09 16:12:02.677 2 DEBUG oslo_concurrency.lockutils [None req-e6051185-3522-4f22-a17e-eded6a581ec0 ac3a468b64f844f9b4952888c88de098 e2eb1e7f2a0a4826a63765b30279f43c - - default default] Acquiring lock "cea3dacfdea0f3734ae526b812744e847bc2d356" by "nova.virt.libvirt.imagebackend.Qcow2.create_image.<locals>.create_qcow2_image" inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:405
Oct 09 16:12:02 compute-0 nova_compute[117331]: 2025-10-09 16:12:02.677 2 DEBUG oslo_concurrency.lockutils [None req-e6051185-3522-4f22-a17e-eded6a581ec0 ac3a468b64f844f9b4952888c88de098 e2eb1e7f2a0a4826a63765b30279f43c - - default default] Lock "cea3dacfdea0f3734ae526b812744e847bc2d356" acquired by "nova.virt.libvirt.imagebackend.Qcow2.create_image.<locals>.create_qcow2_image" :: waited 0.001s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:410
Oct 09 16:12:02 compute-0 nova_compute[117331]: 2025-10-09 16:12:02.678 2 DEBUG oslo_utils.imageutils.format_inspector [None req-e6051185-3522-4f22-a17e-eded6a581ec0 ac3a468b64f844f9b4952888c88de098 e2eb1e7f2a0a4826a63765b30279f43c - - default default] Format inspector for vmdk does not match, excluding from consideration (Signature KDMV not found: b'\xebH\x90\x00') _process_chunk /usr/lib/python3.12/site-packages/oslo_utils/imageutils/format_inspector.py:1365
Oct 09 16:12:02 compute-0 nova_compute[117331]: 2025-10-09 16:12:02.680 2 DEBUG oslo_utils.imageutils.format_inspector [None req-e6051185-3522-4f22-a17e-eded6a581ec0 ac3a468b64f844f9b4952888c88de098 e2eb1e7f2a0a4826a63765b30279f43c - - default default] Format inspector for vhdx does not match, excluding from consideration (Region signature not found at 30000) _process_chunk /usr/lib/python3.12/site-packages/oslo_utils/imageutils/format_inspector.py:1365
Oct 09 16:12:02 compute-0 nova_compute[117331]: 2025-10-09 16:12:02.681 2 DEBUG oslo_concurrency.processutils [None req-e6051185-3522-4f22-a17e-eded6a581ec0 ac3a468b64f844f9b4952888c88de098 e2eb1e7f2a0a4826a63765b30279f43c - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/cea3dacfdea0f3734ae526b812744e847bc2d356 --force-share --output=json execute /usr/lib/python3.12/site-packages/oslo_concurrency/processutils.py:349
Oct 09 16:12:02 compute-0 nova_compute[117331]: 2025-10-09 16:12:02.733 2 DEBUG oslo_concurrency.processutils [None req-e6051185-3522-4f22-a17e-eded6a581ec0 ac3a468b64f844f9b4952888c88de098 e2eb1e7f2a0a4826a63765b30279f43c - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/cea3dacfdea0f3734ae526b812744e847bc2d356 --force-share --output=json" returned: 0 in 0.052s execute /usr/lib/python3.12/site-packages/oslo_concurrency/processutils.py:372
Oct 09 16:12:02 compute-0 nova_compute[117331]: 2025-10-09 16:12:02.733 2 DEBUG oslo_concurrency.processutils [None req-e6051185-3522-4f22-a17e-eded6a581ec0 ac3a468b64f844f9b4952888c88de098 e2eb1e7f2a0a4826a63765b30279f43c - - default default] Running cmd (subprocess): env LC_ALL=C LANG=C qemu-img create -f qcow2 -o backing_file=/var/lib/nova/instances/_base/cea3dacfdea0f3734ae526b812744e847bc2d356,backing_fmt=raw /var/lib/nova/instances/48c2872f-fffe-4ce0-8e59-8f5c86e5b07b/disk 1073741824 execute /usr/lib/python3.12/site-packages/oslo_concurrency/processutils.py:349
Oct 09 16:12:02 compute-0 nova_compute[117331]: 2025-10-09 16:12:02.859 2 DEBUG oslo_concurrency.processutils [None req-e6051185-3522-4f22-a17e-eded6a581ec0 ac3a468b64f844f9b4952888c88de098 e2eb1e7f2a0a4826a63765b30279f43c - - default default] CMD "env LC_ALL=C LANG=C qemu-img create -f qcow2 -o backing_file=/var/lib/nova/instances/_base/cea3dacfdea0f3734ae526b812744e847bc2d356,backing_fmt=raw /var/lib/nova/instances/48c2872f-fffe-4ce0-8e59-8f5c86e5b07b/disk 1073741824" returned: 0 in 0.125s execute /usr/lib/python3.12/site-packages/oslo_concurrency/processutils.py:372
Oct 09 16:12:02 compute-0 nova_compute[117331]: 2025-10-09 16:12:02.860 2 DEBUG oslo_concurrency.lockutils [None req-e6051185-3522-4f22-a17e-eded6a581ec0 ac3a468b64f844f9b4952888c88de098 e2eb1e7f2a0a4826a63765b30279f43c - - default default] Lock "cea3dacfdea0f3734ae526b812744e847bc2d356" "released" by "nova.virt.libvirt.imagebackend.Qcow2.create_image.<locals>.create_qcow2_image" :: held 0.183s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:424
Oct 09 16:12:02 compute-0 nova_compute[117331]: 2025-10-09 16:12:02.860 2 DEBUG oslo_concurrency.processutils [None req-e6051185-3522-4f22-a17e-eded6a581ec0 ac3a468b64f844f9b4952888c88de098 e2eb1e7f2a0a4826a63765b30279f43c - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/cea3dacfdea0f3734ae526b812744e847bc2d356 --force-share --output=json execute /usr/lib/python3.12/site-packages/oslo_concurrency/processutils.py:349
Oct 09 16:12:02 compute-0 nova_compute[117331]: 2025-10-09 16:12:02.915 2 DEBUG oslo_concurrency.processutils [None req-e6051185-3522-4f22-a17e-eded6a581ec0 ac3a468b64f844f9b4952888c88de098 e2eb1e7f2a0a4826a63765b30279f43c - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/cea3dacfdea0f3734ae526b812744e847bc2d356 --force-share --output=json" returned: 0 in 0.055s execute /usr/lib/python3.12/site-packages/oslo_concurrency/processutils.py:372
Oct 09 16:12:02 compute-0 nova_compute[117331]: 2025-10-09 16:12:02.916 2 DEBUG nova.virt.disk.api [None req-e6051185-3522-4f22-a17e-eded6a581ec0 ac3a468b64f844f9b4952888c88de098 e2eb1e7f2a0a4826a63765b30279f43c - - default default] Checking if we can resize image /var/lib/nova/instances/48c2872f-fffe-4ce0-8e59-8f5c86e5b07b/disk. size=1073741824 can_resize_image /usr/lib/python3.12/site-packages/nova/virt/disk/api.py:164
Oct 09 16:12:02 compute-0 nova_compute[117331]: 2025-10-09 16:12:02.916 2 DEBUG oslo_concurrency.processutils [None req-e6051185-3522-4f22-a17e-eded6a581ec0 ac3a468b64f844f9b4952888c88de098 e2eb1e7f2a0a4826a63765b30279f43c - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/48c2872f-fffe-4ce0-8e59-8f5c86e5b07b/disk --force-share --output=json execute /usr/lib/python3.12/site-packages/oslo_concurrency/processutils.py:349
Oct 09 16:12:02 compute-0 nova_compute[117331]: 2025-10-09 16:12:02.939 2 DEBUG oslo_concurrency.lockutils [None req-e6051185-3522-4f22-a17e-eded6a581ec0 ac3a468b64f844f9b4952888c88de098 e2eb1e7f2a0a4826a63765b30279f43c - - default default] Acquiring lock "refresh_cache-48c2872f-fffe-4ce0-8e59-8f5c86e5b07b" lock /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:313
Oct 09 16:12:02 compute-0 nova_compute[117331]: 2025-10-09 16:12:02.966 2 DEBUG oslo_concurrency.processutils [None req-e6051185-3522-4f22-a17e-eded6a581ec0 ac3a468b64f844f9b4952888c88de098 e2eb1e7f2a0a4826a63765b30279f43c - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/48c2872f-fffe-4ce0-8e59-8f5c86e5b07b/disk --force-share --output=json" returned: 0 in 0.050s execute /usr/lib/python3.12/site-packages/oslo_concurrency/processutils.py:372
Oct 09 16:12:02 compute-0 nova_compute[117331]: 2025-10-09 16:12:02.967 2 DEBUG nova.virt.disk.api [None req-e6051185-3522-4f22-a17e-eded6a581ec0 ac3a468b64f844f9b4952888c88de098 e2eb1e7f2a0a4826a63765b30279f43c - - default default] Cannot resize image /var/lib/nova/instances/48c2872f-fffe-4ce0-8e59-8f5c86e5b07b/disk to a smaller size. can_resize_image /usr/lib/python3.12/site-packages/nova/virt/disk/api.py:170
Oct 09 16:12:02 compute-0 nova_compute[117331]: 2025-10-09 16:12:02.967 2 DEBUG nova.virt.libvirt.driver [None req-e6051185-3522-4f22-a17e-eded6a581ec0 ac3a468b64f844f9b4952888c88de098 e2eb1e7f2a0a4826a63765b30279f43c - - default default] [instance: 48c2872f-fffe-4ce0-8e59-8f5c86e5b07b] Created local disks _create_image /usr/lib/python3.12/site-packages/nova/virt/libvirt/driver.py:5270
Oct 09 16:12:02 compute-0 nova_compute[117331]: 2025-10-09 16:12:02.967 2 DEBUG nova.virt.libvirt.driver [None req-e6051185-3522-4f22-a17e-eded6a581ec0 ac3a468b64f844f9b4952888c88de098 e2eb1e7f2a0a4826a63765b30279f43c - - default default] [instance: 48c2872f-fffe-4ce0-8e59-8f5c86e5b07b] Ensure instance console log exists: /var/lib/nova/instances/48c2872f-fffe-4ce0-8e59-8f5c86e5b07b/console.log _ensure_console_log_for_instance /usr/lib/python3.12/site-packages/nova/virt/libvirt/driver.py:5017
Oct 09 16:12:02 compute-0 nova_compute[117331]: 2025-10-09 16:12:02.968 2 DEBUG oslo_concurrency.lockutils [None req-e6051185-3522-4f22-a17e-eded6a581ec0 ac3a468b64f844f9b4952888c88de098 e2eb1e7f2a0a4826a63765b30279f43c - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:405
Oct 09 16:12:02 compute-0 nova_compute[117331]: 2025-10-09 16:12:02.968 2 DEBUG oslo_concurrency.lockutils [None req-e6051185-3522-4f22-a17e-eded6a581ec0 ac3a468b64f844f9b4952888c88de098 e2eb1e7f2a0a4826a63765b30279f43c - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:410
Oct 09 16:12:02 compute-0 nova_compute[117331]: 2025-10-09 16:12:02.968 2 DEBUG oslo_concurrency.lockutils [None req-e6051185-3522-4f22-a17e-eded6a581ec0 ac3a468b64f844f9b4952888c88de098 e2eb1e7f2a0a4826a63765b30279f43c - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:424
Oct 09 16:12:03 compute-0 nova_compute[117331]: 2025-10-09 16:12:03.004 2 WARNING neutronclient.v2_0.client [req-3d7cac99-0127-4bbc-baa4-76ded8596943 req-a5feb5a2-e14c-46ad-85b6-9937fbb82ad1 ee15961ad2be4bada6c106ac4f69f422 c076c42271004e7aa86e84416e33f826 - - default default] The python binding code in neutronclient is deprecated in favor of OpenstackSDK, please use that as this will be removed in a future release.
Oct 09 16:12:03 compute-0 nova_compute[117331]: 2025-10-09 16:12:03.307 2 DEBUG oslo_service.periodic_task [None req-92a13bed-c081-4c7b-87f3-bafebefb79f2 - - - - - -] Running periodic task ComputeManager._run_pending_deletes run_periodic_tasks /usr/lib/python3.12/site-packages/oslo_service/periodic_task.py:210
Oct 09 16:12:03 compute-0 nova_compute[117331]: 2025-10-09 16:12:03.307 2 DEBUG nova.compute.manager [None req-92a13bed-c081-4c7b-87f3-bafebefb79f2 - - - - - -] Cleaning up deleted instances _run_pending_deletes /usr/lib/python3.12/site-packages/nova/compute/manager.py:11909
Oct 09 16:12:03 compute-0 nova_compute[117331]: 2025-10-09 16:12:03.648 2 DEBUG nova.network.neutron [req-3d7cac99-0127-4bbc-baa4-76ded8596943 req-a5feb5a2-e14c-46ad-85b6-9937fbb82ad1 ee15961ad2be4bada6c106ac4f69f422 c076c42271004e7aa86e84416e33f826 - - default default] [instance: 48c2872f-fffe-4ce0-8e59-8f5c86e5b07b] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.12/site-packages/nova/network/neutron.py:3383
Oct 09 16:12:03 compute-0 nova_compute[117331]: 2025-10-09 16:12:03.814 2 DEBUG nova.compute.manager [None req-92a13bed-c081-4c7b-87f3-bafebefb79f2 - - - - - -] There are 0 instances to clean _run_pending_deletes /usr/lib/python3.12/site-packages/nova/compute/manager.py:11918
Oct 09 16:12:03 compute-0 nova_compute[117331]: 2025-10-09 16:12:03.837 2 DEBUG nova.network.neutron [req-3d7cac99-0127-4bbc-baa4-76ded8596943 req-a5feb5a2-e14c-46ad-85b6-9937fbb82ad1 ee15961ad2be4bada6c106ac4f69f422 c076c42271004e7aa86e84416e33f826 - - default default] [instance: 48c2872f-fffe-4ce0-8e59-8f5c86e5b07b] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.12/site-packages/nova/network/neutron.py:116
Oct 09 16:12:03 compute-0 podman[141592]: 2025-10-09 16:12:03.87558567 +0000 UTC m=+0.108382796 container health_status d615e4f24ff62fbb14be4a63e6da568ec5b22309773c1c66103b6b4de64a8a2f (image=38.102.83.66:5001/podified-master-centos10/openstack-ovn-controller:watcher_latest, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_controller, org.label-schema.vendor=CentOS, tcib_build_tag=watcher_latest, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': '38.102.83.66:5001/podified-master-centos10/openstack-ovn-controller:watcher_latest', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, io.buildah.version=1.41.4, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.build-date=20251007, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.schema-version=1.0)
Oct 09 16:12:03 compute-0 sshd[52903]: drop connection #0 from [134.199.199.215]:37718 on [38.102.83.110]:22 penalty: failed authentication
Oct 09 16:12:04 compute-0 nova_compute[117331]: 2025-10-09 16:12:04.306 2 DEBUG oslo_service.periodic_task [None req-92a13bed-c081-4c7b-87f3-bafebefb79f2 - - - - - -] Running periodic task ComputeManager._cleanup_incomplete_migrations run_periodic_tasks /usr/lib/python3.12/site-packages/oslo_service/periodic_task.py:210
Oct 09 16:12:04 compute-0 nova_compute[117331]: 2025-10-09 16:12:04.306 2 DEBUG nova.compute.manager [None req-92a13bed-c081-4c7b-87f3-bafebefb79f2 - - - - - -] Cleaning up deleted instances with incomplete migration  _cleanup_incomplete_migrations /usr/lib/python3.12/site-packages/nova/compute/manager.py:11947
Oct 09 16:12:04 compute-0 nova_compute[117331]: 2025-10-09 16:12:04.343 2 DEBUG oslo_concurrency.lockutils [req-3d7cac99-0127-4bbc-baa4-76ded8596943 req-a5feb5a2-e14c-46ad-85b6-9937fbb82ad1 ee15961ad2be4bada6c106ac4f69f422 c076c42271004e7aa86e84416e33f826 - - default default] Releasing lock "refresh_cache-48c2872f-fffe-4ce0-8e59-8f5c86e5b07b" lock /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:334
Oct 09 16:12:04 compute-0 nova_compute[117331]: 2025-10-09 16:12:04.344 2 DEBUG oslo_concurrency.lockutils [None req-e6051185-3522-4f22-a17e-eded6a581ec0 ac3a468b64f844f9b4952888c88de098 e2eb1e7f2a0a4826a63765b30279f43c - - default default] Acquired lock "refresh_cache-48c2872f-fffe-4ce0-8e59-8f5c86e5b07b" lock /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:316
Oct 09 16:12:04 compute-0 nova_compute[117331]: 2025-10-09 16:12:04.344 2 DEBUG nova.network.neutron [None req-e6051185-3522-4f22-a17e-eded6a581ec0 ac3a468b64f844f9b4952888c88de098 e2eb1e7f2a0a4826a63765b30279f43c - - default default] [instance: 48c2872f-fffe-4ce0-8e59-8f5c86e5b07b] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.12/site-packages/nova/network/neutron.py:2070
Oct 09 16:12:05 compute-0 nova_compute[117331]: 2025-10-09 16:12:05.050 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 09 16:12:05 compute-0 nova_compute[117331]: 2025-10-09 16:12:05.405 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 09 16:12:05 compute-0 nova_compute[117331]: 2025-10-09 16:12:05.478 2 DEBUG nova.network.neutron [None req-e6051185-3522-4f22-a17e-eded6a581ec0 ac3a468b64f844f9b4952888c88de098 e2eb1e7f2a0a4826a63765b30279f43c - - default default] [instance: 48c2872f-fffe-4ce0-8e59-8f5c86e5b07b] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.12/site-packages/nova/network/neutron.py:3383
Oct 09 16:12:05 compute-0 nova_compute[117331]: 2025-10-09 16:12:05.737 2 WARNING neutronclient.v2_0.client [None req-e6051185-3522-4f22-a17e-eded6a581ec0 ac3a468b64f844f9b4952888c88de098 e2eb1e7f2a0a4826a63765b30279f43c - - default default] The python binding code in neutronclient is deprecated in favor of OpenstackSDK, please use that as this will be removed in a future release.
Oct 09 16:12:05 compute-0 nova_compute[117331]: 2025-10-09 16:12:05.900 2 DEBUG nova.network.neutron [None req-e6051185-3522-4f22-a17e-eded6a581ec0 ac3a468b64f844f9b4952888c88de098 e2eb1e7f2a0a4826a63765b30279f43c - - default default] [instance: 48c2872f-fffe-4ce0-8e59-8f5c86e5b07b] Updating instance_info_cache with network_info: [{"id": "9755a423-13f0-4f3a-8edd-bcc5b5190cfe", "address": "fa:16:3e:2d:5b:b2", "network": {"id": "19256da9-441e-477f-8195-1eea19c08437", "bridge": "br-int", "label": "tempest-TestDataModel-78218585-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "be0f7b79aa964c70b49cd8e6a20ecdf5", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap9755a423-13", "ovs_interfaceid": "9755a423-13f0-4f3a-8edd-bcc5b5190cfe", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.12/site-packages/nova/network/neutron.py:116
Oct 09 16:12:06 compute-0 nova_compute[117331]: 2025-10-09 16:12:06.306 2 DEBUG oslo_service.periodic_task [None req-92a13bed-c081-4c7b-87f3-bafebefb79f2 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.12/site-packages/oslo_service/periodic_task.py:210
Oct 09 16:12:06 compute-0 nova_compute[117331]: 2025-10-09 16:12:06.307 2 DEBUG oslo_service.periodic_task [None req-92a13bed-c081-4c7b-87f3-bafebefb79f2 - - - - - -] Running periodic task ComputeManager._cleanup_expired_console_auth_tokens run_periodic_tasks /usr/lib/python3.12/site-packages/oslo_service/periodic_task.py:210
Oct 09 16:12:06 compute-0 nova_compute[117331]: 2025-10-09 16:12:06.444 2 DEBUG oslo_concurrency.lockutils [None req-e6051185-3522-4f22-a17e-eded6a581ec0 ac3a468b64f844f9b4952888c88de098 e2eb1e7f2a0a4826a63765b30279f43c - - default default] Releasing lock "refresh_cache-48c2872f-fffe-4ce0-8e59-8f5c86e5b07b" lock /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:334
Oct 09 16:12:06 compute-0 nova_compute[117331]: 2025-10-09 16:12:06.445 2 DEBUG nova.compute.manager [None req-e6051185-3522-4f22-a17e-eded6a581ec0 ac3a468b64f844f9b4952888c88de098 e2eb1e7f2a0a4826a63765b30279f43c - - default default] [instance: 48c2872f-fffe-4ce0-8e59-8f5c86e5b07b] Instance network_info: |[{"id": "9755a423-13f0-4f3a-8edd-bcc5b5190cfe", "address": "fa:16:3e:2d:5b:b2", "network": {"id": "19256da9-441e-477f-8195-1eea19c08437", "bridge": "br-int", "label": "tempest-TestDataModel-78218585-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "be0f7b79aa964c70b49cd8e6a20ecdf5", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap9755a423-13", "ovs_interfaceid": "9755a423-13f0-4f3a-8edd-bcc5b5190cfe", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.12/site-packages/nova/compute/manager.py:2031
Oct 09 16:12:06 compute-0 nova_compute[117331]: 2025-10-09 16:12:06.447 2 DEBUG nova.virt.libvirt.driver [None req-e6051185-3522-4f22-a17e-eded6a581ec0 ac3a468b64f844f9b4952888c88de098 e2eb1e7f2a0a4826a63765b30279f43c - - default default] [instance: 48c2872f-fffe-4ce0-8e59-8f5c86e5b07b] Start _get_guest_xml network_info=[{"id": "9755a423-13f0-4f3a-8edd-bcc5b5190cfe", "address": "fa:16:3e:2d:5b:b2", "network": {"id": "19256da9-441e-477f-8195-1eea19c08437", "bridge": "br-int", "label": "tempest-TestDataModel-78218585-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "be0f7b79aa964c70b49cd8e6a20ecdf5", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap9755a423-13", "ovs_interfaceid": "9755a423-13f0-4f3a-8edd-bcc5b5190cfe", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-09T16:07:02Z,direct_url=<?>,disk_format='qcow2',id=b7d6e0af-25e4-4227-9dc6-43143898ceee,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='b30e8cf5e10742f190212b4cb97ce2c9',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-09T16:07:04Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'device_type': 'disk', 'encrypted': False, 'size': 0, 'encryption_format': None, 'guest_format': None, 'boot_index': 0, 'device_name': '/dev/vda', 'disk_bus': 'virtio', 'encryption_options': None, 'encryption_secret_uuid': None, 'image_id': 'b7d6e0af-25e4-4227-9dc6-43143898ceee'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None}share_info=None _get_guest_xml /usr/lib/python3.12/site-packages/nova/virt/libvirt/driver.py:8046
Oct 09 16:12:06 compute-0 nova_compute[117331]: 2025-10-09 16:12:06.452 2 WARNING nova.virt.libvirt.driver [None req-e6051185-3522-4f22-a17e-eded6a581ec0 ac3a468b64f844f9b4952888c88de098 e2eb1e7f2a0a4826a63765b30279f43c - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Oct 09 16:12:06 compute-0 nova_compute[117331]: 2025-10-09 16:12:06.455 2 DEBUG nova.virt.driver [None req-e6051185-3522-4f22-a17e-eded6a581ec0 ac3a468b64f844f9b4952888c88de098 e2eb1e7f2a0a4826a63765b30279f43c - - default default] InstanceDriverMetadata: InstanceDriverMetadata(root_type='image', root_id='b7d6e0af-25e4-4227-9dc6-43143898ceee', instance_meta=NovaInstanceMeta(name='tempest-TestDataModel-server-134861716', uuid='48c2872f-fffe-4ce0-8e59-8f5c86e5b07b'), owner=OwnerMeta(userid='ac3a468b64f844f9b4952888c88de098', username='tempest-TestDataModel-868619493-project-admin', projectid='e2eb1e7f2a0a4826a63765b30279f43c', projectname='tempest-TestDataModel-868619493'), image=ImageMeta(id='b7d6e0af-25e4-4227-9dc6-43143898ceee', name=None, container_format='bare', disk_format='qcow2', min_disk=1, min_ram=0, properties={'hw_rng_model': 'virtio'}), flavor=FlavorMeta(name='m1.nano', flavorid='5aeb5bf9-70f3-4ab6-b418-2c3319a7d2b3', memory_mb=128, vcpus=1, root_gb=1, ephemeral_gb=0, extra_specs={'hw_rng:allowed': 'True'}, swap=0), network_info=[{"id": "9755a423-13f0-4f3a-8edd-bcc5b5190cfe", "address": "fa:16:3e:2d:5b:b2", "network": {"id": "19256da9-441e-477f-8195-1eea19c08437", "bridge": "br-int", "label": "tempest-TestDataModel-78218585-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "be0f7b79aa964c70b49cd8e6a20ecdf5", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap9755a423-13", "ovs_interfaceid": "9755a423-13f0-4f3a-8edd-bcc5b5190cfe", "qbh_params": null, "qbg_params": null, "active": false, 
"vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}], nova_package='32.1.0-0.20251008143712.076498e.el10', creation_time=1760026326.4551077) get_instance_driver_metadata /usr/lib/python3.12/site-packages/nova/virt/driver.py:437
Oct 09 16:12:06 compute-0 nova_compute[117331]: 2025-10-09 16:12:06.460 2 DEBUG nova.virt.libvirt.host [None req-e6051185-3522-4f22-a17e-eded6a581ec0 ac3a468b64f844f9b4952888c88de098 e2eb1e7f2a0a4826a63765b30279f43c - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.12/site-packages/nova/virt/libvirt/host.py:1698
Oct 09 16:12:06 compute-0 nova_compute[117331]: 2025-10-09 16:12:06.461 2 DEBUG nova.virt.libvirt.host [None req-e6051185-3522-4f22-a17e-eded6a581ec0 ac3a468b64f844f9b4952888c88de098 e2eb1e7f2a0a4826a63765b30279f43c - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.12/site-packages/nova/virt/libvirt/host.py:1708
Oct 09 16:12:06 compute-0 nova_compute[117331]: 2025-10-09 16:12:06.464 2 DEBUG nova.virt.libvirt.host [None req-e6051185-3522-4f22-a17e-eded6a581ec0 ac3a468b64f844f9b4952888c88de098 e2eb1e7f2a0a4826a63765b30279f43c - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.12/site-packages/nova/virt/libvirt/host.py:1717
Oct 09 16:12:06 compute-0 nova_compute[117331]: 2025-10-09 16:12:06.465 2 DEBUG nova.virt.libvirt.host [None req-e6051185-3522-4f22-a17e-eded6a581ec0 ac3a468b64f844f9b4952888c88de098 e2eb1e7f2a0a4826a63765b30279f43c - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.12/site-packages/nova/virt/libvirt/host.py:1724
Oct 09 16:12:06 compute-0 nova_compute[117331]: 2025-10-09 16:12:06.465 2 DEBUG nova.virt.libvirt.driver [None req-e6051185-3522-4f22-a17e-eded6a581ec0 ac3a468b64f844f9b4952888c88de098 e2eb1e7f2a0a4826a63765b30279f43c - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.12/site-packages/nova/virt/libvirt/driver.py:5809
Oct 09 16:12:06 compute-0 nova_compute[117331]: 2025-10-09 16:12:06.466 2 DEBUG nova.virt.hardware [None req-e6051185-3522-4f22-a17e-eded6a581ec0 ac3a468b64f844f9b4952888c88de098 e2eb1e7f2a0a4826a63765b30279f43c - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-10-09T16:07:00Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='5aeb5bf9-70f3-4ab6-b418-2c3319a7d2b3',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-09T16:07:02Z,direct_url=<?>,disk_format='qcow2',id=b7d6e0af-25e4-4227-9dc6-43143898ceee,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='b30e8cf5e10742f190212b4cb97ce2c9',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-09T16:07:04Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.12/site-packages/nova/virt/hardware.py:571
Oct 09 16:12:06 compute-0 nova_compute[117331]: 2025-10-09 16:12:06.466 2 DEBUG nova.virt.hardware [None req-e6051185-3522-4f22-a17e-eded6a581ec0 ac3a468b64f844f9b4952888c88de098 e2eb1e7f2a0a4826a63765b30279f43c - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.12/site-packages/nova/virt/hardware.py:356
Oct 09 16:12:06 compute-0 nova_compute[117331]: 2025-10-09 16:12:06.466 2 DEBUG nova.virt.hardware [None req-e6051185-3522-4f22-a17e-eded6a581ec0 ac3a468b64f844f9b4952888c88de098 e2eb1e7f2a0a4826a63765b30279f43c - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.12/site-packages/nova/virt/hardware.py:360
Oct 09 16:12:06 compute-0 nova_compute[117331]: 2025-10-09 16:12:06.466 2 DEBUG nova.virt.hardware [None req-e6051185-3522-4f22-a17e-eded6a581ec0 ac3a468b64f844f9b4952888c88de098 e2eb1e7f2a0a4826a63765b30279f43c - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.12/site-packages/nova/virt/hardware.py:396
Oct 09 16:12:06 compute-0 nova_compute[117331]: 2025-10-09 16:12:06.467 2 DEBUG nova.virt.hardware [None req-e6051185-3522-4f22-a17e-eded6a581ec0 ac3a468b64f844f9b4952888c88de098 e2eb1e7f2a0a4826a63765b30279f43c - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.12/site-packages/nova/virt/hardware.py:400
Oct 09 16:12:06 compute-0 nova_compute[117331]: 2025-10-09 16:12:06.467 2 DEBUG nova.virt.hardware [None req-e6051185-3522-4f22-a17e-eded6a581ec0 ac3a468b64f844f9b4952888c88de098 e2eb1e7f2a0a4826a63765b30279f43c - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.12/site-packages/nova/virt/hardware.py:438
Oct 09 16:12:06 compute-0 nova_compute[117331]: 2025-10-09 16:12:06.467 2 DEBUG nova.virt.hardware [None req-e6051185-3522-4f22-a17e-eded6a581ec0 ac3a468b64f844f9b4952888c88de098 e2eb1e7f2a0a4826a63765b30279f43c - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.12/site-packages/nova/virt/hardware.py:577
Oct 09 16:12:06 compute-0 nova_compute[117331]: 2025-10-09 16:12:06.467 2 DEBUG nova.virt.hardware [None req-e6051185-3522-4f22-a17e-eded6a581ec0 ac3a468b64f844f9b4952888c88de098 e2eb1e7f2a0a4826a63765b30279f43c - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.12/site-packages/nova/virt/hardware.py:479
Oct 09 16:12:06 compute-0 nova_compute[117331]: 2025-10-09 16:12:06.468 2 DEBUG nova.virt.hardware [None req-e6051185-3522-4f22-a17e-eded6a581ec0 ac3a468b64f844f9b4952888c88de098 e2eb1e7f2a0a4826a63765b30279f43c - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.12/site-packages/nova/virt/hardware.py:509
Oct 09 16:12:06 compute-0 nova_compute[117331]: 2025-10-09 16:12:06.468 2 DEBUG nova.virt.hardware [None req-e6051185-3522-4f22-a17e-eded6a581ec0 ac3a468b64f844f9b4952888c88de098 e2eb1e7f2a0a4826a63765b30279f43c - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.12/site-packages/nova/virt/hardware.py:583
Oct 09 16:12:06 compute-0 nova_compute[117331]: 2025-10-09 16:12:06.468 2 DEBUG nova.virt.hardware [None req-e6051185-3522-4f22-a17e-eded6a581ec0 ac3a468b64f844f9b4952888c88de098 e2eb1e7f2a0a4826a63765b30279f43c - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.12/site-packages/nova/virt/hardware.py:585
Oct 09 16:12:06 compute-0 nova_compute[117331]: 2025-10-09 16:12:06.472 2 DEBUG nova.virt.libvirt.vif [None req-e6051185-3522-4f22-a17e-eded6a581ec0 ac3a468b64f844f9b4952888c88de098 e2eb1e7f2a0a4826a63765b30279f43c - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,compute_id=1,config_drive='',created_at=2025-10-09T16:11:54Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description=None,display_name='tempest-TestDataModel-server-134861716',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testdatamodel-server-134861716',id=2,image_ref='b7d6e0af-25e4-4227-9dc6-43143898ceee',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='e2eb1e7f2a0a4826a63765b30279f43c',ramdisk_id='',reservation_id='r-0fq3s8iq',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member,manager,admin',image_base_image_ref='b7d6e0af-25e4-4227-9dc6-43143898ceee',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestDataModel-868619493',owner_user_name='tempest-TestDataModel-868619493-project-admin'},tags=TagList,task_state='spawning
',terminated_at=None,trusted_certs=None,updated_at=2025-10-09T16:12:01Z,user_data=None,user_id='ac3a468b64f844f9b4952888c88de098',uuid=48c2872f-fffe-4ce0-8e59-8f5c86e5b07b,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "9755a423-13f0-4f3a-8edd-bcc5b5190cfe", "address": "fa:16:3e:2d:5b:b2", "network": {"id": "19256da9-441e-477f-8195-1eea19c08437", "bridge": "br-int", "label": "tempest-TestDataModel-78218585-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "be0f7b79aa964c70b49cd8e6a20ecdf5", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap9755a423-13", "ovs_interfaceid": "9755a423-13f0-4f3a-8edd-bcc5b5190cfe", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.12/site-packages/nova/virt/libvirt/vif.py:574
Oct 09 16:12:06 compute-0 nova_compute[117331]: 2025-10-09 16:12:06.473 2 DEBUG nova.network.os_vif_util [None req-e6051185-3522-4f22-a17e-eded6a581ec0 ac3a468b64f844f9b4952888c88de098 e2eb1e7f2a0a4826a63765b30279f43c - - default default] Converting VIF {"id": "9755a423-13f0-4f3a-8edd-bcc5b5190cfe", "address": "fa:16:3e:2d:5b:b2", "network": {"id": "19256da9-441e-477f-8195-1eea19c08437", "bridge": "br-int", "label": "tempest-TestDataModel-78218585-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "be0f7b79aa964c70b49cd8e6a20ecdf5", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap9755a423-13", "ovs_interfaceid": "9755a423-13f0-4f3a-8edd-bcc5b5190cfe", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.12/site-packages/nova/network/os_vif_util.py:511
Oct 09 16:12:06 compute-0 nova_compute[117331]: 2025-10-09 16:12:06.473 2 DEBUG nova.network.os_vif_util [None req-e6051185-3522-4f22-a17e-eded6a581ec0 ac3a468b64f844f9b4952888c88de098 e2eb1e7f2a0a4826a63765b30279f43c - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:2d:5b:b2,bridge_name='br-int',has_traffic_filtering=True,id=9755a423-13f0-4f3a-8edd-bcc5b5190cfe,network=Network(19256da9-441e-477f-8195-1eea19c08437),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap9755a423-13') nova_to_osvif_vif /usr/lib/python3.12/site-packages/nova/network/os_vif_util.py:548
Oct 09 16:12:06 compute-0 nova_compute[117331]: 2025-10-09 16:12:06.474 2 DEBUG nova.objects.instance [None req-e6051185-3522-4f22-a17e-eded6a581ec0 ac3a468b64f844f9b4952888c88de098 e2eb1e7f2a0a4826a63765b30279f43c - - default default] Lazy-loading 'pci_devices' on Instance uuid 48c2872f-fffe-4ce0-8e59-8f5c86e5b07b obj_load_attr /usr/lib/python3.12/site-packages/nova/objects/instance.py:1141
Oct 09 16:12:06 compute-0 nova_compute[117331]: 2025-10-09 16:12:06.984 2 DEBUG nova.virt.libvirt.driver [None req-e6051185-3522-4f22-a17e-eded6a581ec0 ac3a468b64f844f9b4952888c88de098 e2eb1e7f2a0a4826a63765b30279f43c - - default default] [instance: 48c2872f-fffe-4ce0-8e59-8f5c86e5b07b] End _get_guest_xml xml=<domain type="kvm">
Oct 09 16:12:06 compute-0 nova_compute[117331]:   <uuid>48c2872f-fffe-4ce0-8e59-8f5c86e5b07b</uuid>
Oct 09 16:12:06 compute-0 nova_compute[117331]:   <name>instance-00000002</name>
Oct 09 16:12:06 compute-0 nova_compute[117331]:   <memory>131072</memory>
Oct 09 16:12:06 compute-0 nova_compute[117331]:   <vcpu>1</vcpu>
Oct 09 16:12:06 compute-0 nova_compute[117331]:   <metadata>
Oct 09 16:12:06 compute-0 nova_compute[117331]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Oct 09 16:12:06 compute-0 nova_compute[117331]:       <nova:package version="32.1.0-0.20251008143712.076498e.el10"/>
Oct 09 16:12:06 compute-0 nova_compute[117331]:       <nova:name>tempest-TestDataModel-server-134861716</nova:name>
Oct 09 16:12:06 compute-0 nova_compute[117331]:       <nova:creationTime>2025-10-09 16:12:06</nova:creationTime>
Oct 09 16:12:06 compute-0 nova_compute[117331]:       <nova:flavor name="m1.nano" id="5aeb5bf9-70f3-4ab6-b418-2c3319a7d2b3">
Oct 09 16:12:06 compute-0 nova_compute[117331]:         <nova:memory>128</nova:memory>
Oct 09 16:12:06 compute-0 nova_compute[117331]:         <nova:disk>1</nova:disk>
Oct 09 16:12:06 compute-0 nova_compute[117331]:         <nova:swap>0</nova:swap>
Oct 09 16:12:06 compute-0 nova_compute[117331]:         <nova:ephemeral>0</nova:ephemeral>
Oct 09 16:12:06 compute-0 nova_compute[117331]:         <nova:vcpus>1</nova:vcpus>
Oct 09 16:12:06 compute-0 nova_compute[117331]:         <nova:extraSpecs>
Oct 09 16:12:06 compute-0 nova_compute[117331]:           <nova:extraSpec name="hw_rng:allowed">True</nova:extraSpec>
Oct 09 16:12:06 compute-0 nova_compute[117331]:         </nova:extraSpecs>
Oct 09 16:12:06 compute-0 nova_compute[117331]:       </nova:flavor>
Oct 09 16:12:06 compute-0 nova_compute[117331]:       <nova:image uuid="b7d6e0af-25e4-4227-9dc6-43143898ceee">
Oct 09 16:12:06 compute-0 nova_compute[117331]:         <nova:containerFormat>bare</nova:containerFormat>
Oct 09 16:12:06 compute-0 nova_compute[117331]:         <nova:diskFormat>qcow2</nova:diskFormat>
Oct 09 16:12:06 compute-0 nova_compute[117331]:         <nova:minDisk>1</nova:minDisk>
Oct 09 16:12:06 compute-0 nova_compute[117331]:         <nova:minRam>0</nova:minRam>
Oct 09 16:12:06 compute-0 nova_compute[117331]:         <nova:properties>
Oct 09 16:12:06 compute-0 nova_compute[117331]:           <nova:property name="hw_rng_model">virtio</nova:property>
Oct 09 16:12:06 compute-0 nova_compute[117331]:         </nova:properties>
Oct 09 16:12:06 compute-0 nova_compute[117331]:       </nova:image>
Oct 09 16:12:06 compute-0 nova_compute[117331]:       <nova:owner>
Oct 09 16:12:06 compute-0 nova_compute[117331]:         <nova:user uuid="ac3a468b64f844f9b4952888c88de098">tempest-TestDataModel-868619493-project-admin</nova:user>
Oct 09 16:12:06 compute-0 nova_compute[117331]:         <nova:project uuid="e2eb1e7f2a0a4826a63765b30279f43c">tempest-TestDataModel-868619493</nova:project>
Oct 09 16:12:06 compute-0 nova_compute[117331]:       </nova:owner>
Oct 09 16:12:06 compute-0 nova_compute[117331]:       <nova:root type="image" uuid="b7d6e0af-25e4-4227-9dc6-43143898ceee"/>
Oct 09 16:12:06 compute-0 nova_compute[117331]:       <nova:ports>
Oct 09 16:12:06 compute-0 nova_compute[117331]:         <nova:port uuid="9755a423-13f0-4f3a-8edd-bcc5b5190cfe">
Oct 09 16:12:06 compute-0 nova_compute[117331]:           <nova:ip type="fixed" address="10.100.0.3" ipVersion="4"/>
Oct 09 16:12:06 compute-0 nova_compute[117331]:         </nova:port>
Oct 09 16:12:06 compute-0 nova_compute[117331]:       </nova:ports>
Oct 09 16:12:06 compute-0 nova_compute[117331]:     </nova:instance>
Oct 09 16:12:06 compute-0 nova_compute[117331]:   </metadata>
Oct 09 16:12:06 compute-0 nova_compute[117331]:   <sysinfo type="smbios">
Oct 09 16:12:06 compute-0 nova_compute[117331]:     <system>
Oct 09 16:12:06 compute-0 nova_compute[117331]:       <entry name="manufacturer">RDO</entry>
Oct 09 16:12:06 compute-0 nova_compute[117331]:       <entry name="product">OpenStack Compute</entry>
Oct 09 16:12:06 compute-0 nova_compute[117331]:       <entry name="version">32.1.0-0.20251008143712.076498e.el10</entry>
Oct 09 16:12:06 compute-0 nova_compute[117331]:       <entry name="serial">48c2872f-fffe-4ce0-8e59-8f5c86e5b07b</entry>
Oct 09 16:12:06 compute-0 nova_compute[117331]:       <entry name="uuid">48c2872f-fffe-4ce0-8e59-8f5c86e5b07b</entry>
Oct 09 16:12:06 compute-0 nova_compute[117331]:       <entry name="family">Virtual Machine</entry>
Oct 09 16:12:06 compute-0 nova_compute[117331]:     </system>
Oct 09 16:12:06 compute-0 nova_compute[117331]:   </sysinfo>
Oct 09 16:12:06 compute-0 nova_compute[117331]:   <os>
Oct 09 16:12:06 compute-0 nova_compute[117331]:     <type arch="x86_64" machine="q35">hvm</type>
Oct 09 16:12:06 compute-0 nova_compute[117331]:     <boot dev="hd"/>
Oct 09 16:12:06 compute-0 nova_compute[117331]:     <smbios mode="sysinfo"/>
Oct 09 16:12:06 compute-0 nova_compute[117331]:   </os>
Oct 09 16:12:06 compute-0 nova_compute[117331]:   <features>
Oct 09 16:12:06 compute-0 nova_compute[117331]:     <acpi/>
Oct 09 16:12:06 compute-0 nova_compute[117331]:     <apic/>
Oct 09 16:12:06 compute-0 nova_compute[117331]:     <vmcoreinfo/>
Oct 09 16:12:06 compute-0 nova_compute[117331]:   </features>
Oct 09 16:12:06 compute-0 nova_compute[117331]:   <clock offset="utc">
Oct 09 16:12:06 compute-0 nova_compute[117331]:     <timer name="pit" tickpolicy="delay"/>
Oct 09 16:12:06 compute-0 nova_compute[117331]:     <timer name="rtc" tickpolicy="catchup"/>
Oct 09 16:12:06 compute-0 nova_compute[117331]:     <timer name="hpet" present="no"/>
Oct 09 16:12:06 compute-0 nova_compute[117331]:   </clock>
Oct 09 16:12:06 compute-0 nova_compute[117331]:   <cpu mode="host-model" match="exact">
Oct 09 16:12:06 compute-0 nova_compute[117331]:     <topology sockets="1" cores="1" threads="1"/>
Oct 09 16:12:06 compute-0 nova_compute[117331]:   </cpu>
Oct 09 16:12:06 compute-0 nova_compute[117331]:   <devices>
Oct 09 16:12:06 compute-0 nova_compute[117331]:     <disk type="file" device="disk">
Oct 09 16:12:06 compute-0 nova_compute[117331]:       <driver name="qemu" type="qcow2" cache="none"/>
Oct 09 16:12:06 compute-0 nova_compute[117331]:       <source file="/var/lib/nova/instances/48c2872f-fffe-4ce0-8e59-8f5c86e5b07b/disk"/>
Oct 09 16:12:06 compute-0 nova_compute[117331]:       <target dev="vda" bus="virtio"/>
Oct 09 16:12:06 compute-0 nova_compute[117331]:     </disk>
Oct 09 16:12:06 compute-0 nova_compute[117331]:     <disk type="file" device="cdrom">
Oct 09 16:12:06 compute-0 nova_compute[117331]:       <driver name="qemu" type="raw" cache="none"/>
Oct 09 16:12:06 compute-0 nova_compute[117331]:       <source file="/var/lib/nova/instances/48c2872f-fffe-4ce0-8e59-8f5c86e5b07b/disk.config"/>
Oct 09 16:12:06 compute-0 nova_compute[117331]:       <target dev="sda" bus="sata"/>
Oct 09 16:12:06 compute-0 nova_compute[117331]:     </disk>
Oct 09 16:12:06 compute-0 nova_compute[117331]:     <interface type="ethernet">
Oct 09 16:12:06 compute-0 nova_compute[117331]:       <mac address="fa:16:3e:2d:5b:b2"/>
Oct 09 16:12:06 compute-0 nova_compute[117331]:       <model type="virtio"/>
Oct 09 16:12:06 compute-0 nova_compute[117331]:       <driver name="vhost" rx_queue_size="512"/>
Oct 09 16:12:06 compute-0 nova_compute[117331]:       <mtu size="1442"/>
Oct 09 16:12:06 compute-0 nova_compute[117331]:       <target dev="tap9755a423-13"/>
Oct 09 16:12:06 compute-0 nova_compute[117331]:     </interface>
Oct 09 16:12:06 compute-0 nova_compute[117331]:     <serial type="pty">
Oct 09 16:12:06 compute-0 nova_compute[117331]:       <log file="/var/lib/nova/instances/48c2872f-fffe-4ce0-8e59-8f5c86e5b07b/console.log" append="off"/>
Oct 09 16:12:06 compute-0 nova_compute[117331]:     </serial>
Oct 09 16:12:06 compute-0 nova_compute[117331]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Oct 09 16:12:06 compute-0 nova_compute[117331]:     <video>
Oct 09 16:12:06 compute-0 nova_compute[117331]:       <model type="virtio"/>
Oct 09 16:12:06 compute-0 nova_compute[117331]:     </video>
Oct 09 16:12:06 compute-0 nova_compute[117331]:     <input type="tablet" bus="usb"/>
Oct 09 16:12:06 compute-0 nova_compute[117331]:     <rng model="virtio">
Oct 09 16:12:06 compute-0 nova_compute[117331]:       <backend model="random">/dev/urandom</backend>
Oct 09 16:12:06 compute-0 nova_compute[117331]:     </rng>
Oct 09 16:12:06 compute-0 nova_compute[117331]:     <controller type="pci" model="pcie-root"/>
Oct 09 16:12:06 compute-0 nova_compute[117331]:     <controller type="pci" model="pcie-root-port"/>
Oct 09 16:12:06 compute-0 nova_compute[117331]:     <controller type="pci" model="pcie-root-port"/>
Oct 09 16:12:06 compute-0 nova_compute[117331]:     <controller type="pci" model="pcie-root-port"/>
Oct 09 16:12:06 compute-0 nova_compute[117331]:     <controller type="pci" model="pcie-root-port"/>
Oct 09 16:12:06 compute-0 nova_compute[117331]:     <controller type="pci" model="pcie-root-port"/>
Oct 09 16:12:06 compute-0 nova_compute[117331]:     <controller type="pci" model="pcie-root-port"/>
Oct 09 16:12:06 compute-0 nova_compute[117331]:     <controller type="pci" model="pcie-root-port"/>
Oct 09 16:12:06 compute-0 nova_compute[117331]:     <controller type="pci" model="pcie-root-port"/>
Oct 09 16:12:06 compute-0 nova_compute[117331]:     <controller type="pci" model="pcie-root-port"/>
Oct 09 16:12:06 compute-0 nova_compute[117331]:     <controller type="pci" model="pcie-root-port"/>
Oct 09 16:12:06 compute-0 nova_compute[117331]:     <controller type="pci" model="pcie-root-port"/>
Oct 09 16:12:06 compute-0 nova_compute[117331]:     <controller type="pci" model="pcie-root-port"/>
Oct 09 16:12:06 compute-0 nova_compute[117331]:     <controller type="pci" model="pcie-root-port"/>
Oct 09 16:12:06 compute-0 nova_compute[117331]:     <controller type="pci" model="pcie-root-port"/>
Oct 09 16:12:06 compute-0 nova_compute[117331]:     <controller type="pci" model="pcie-root-port"/>
Oct 09 16:12:06 compute-0 nova_compute[117331]:     <controller type="pci" model="pcie-root-port"/>
Oct 09 16:12:06 compute-0 nova_compute[117331]:     <controller type="pci" model="pcie-root-port"/>
Oct 09 16:12:06 compute-0 nova_compute[117331]:     <controller type="pci" model="pcie-root-port"/>
Oct 09 16:12:06 compute-0 nova_compute[117331]:     <controller type="pci" model="pcie-root-port"/>
Oct 09 16:12:06 compute-0 nova_compute[117331]:     <controller type="pci" model="pcie-root-port"/>
Oct 09 16:12:06 compute-0 nova_compute[117331]:     <controller type="pci" model="pcie-root-port"/>
Oct 09 16:12:06 compute-0 nova_compute[117331]:     <controller type="pci" model="pcie-root-port"/>
Oct 09 16:12:06 compute-0 nova_compute[117331]:     <controller type="pci" model="pcie-root-port"/>
Oct 09 16:12:06 compute-0 nova_compute[117331]:     <controller type="pci" model="pcie-root-port"/>
Oct 09 16:12:06 compute-0 nova_compute[117331]:     <controller type="usb" index="0"/>
Oct 09 16:12:06 compute-0 nova_compute[117331]:     <memballoon model="virtio" autodeflate="on" freePageReporting="on">
Oct 09 16:12:06 compute-0 nova_compute[117331]:       <stats period="10"/>
Oct 09 16:12:06 compute-0 nova_compute[117331]:     </memballoon>
Oct 09 16:12:06 compute-0 nova_compute[117331]:   </devices>
Oct 09 16:12:06 compute-0 nova_compute[117331]: </domain>
Oct 09 16:12:06 compute-0 nova_compute[117331]:  _get_guest_xml /usr/lib/python3.12/site-packages/nova/virt/libvirt/driver.py:8052
Oct 09 16:12:06 compute-0 nova_compute[117331]: 2025-10-09 16:12:06.987 2 DEBUG nova.compute.manager [None req-e6051185-3522-4f22-a17e-eded6a581ec0 ac3a468b64f844f9b4952888c88de098 e2eb1e7f2a0a4826a63765b30279f43c - - default default] [instance: 48c2872f-fffe-4ce0-8e59-8f5c86e5b07b] Preparing to wait for external event network-vif-plugged-9755a423-13f0-4f3a-8edd-bcc5b5190cfe prepare_for_instance_event /usr/lib/python3.12/site-packages/nova/compute/manager.py:307
Oct 09 16:12:06 compute-0 nova_compute[117331]: 2025-10-09 16:12:06.987 2 DEBUG oslo_concurrency.lockutils [None req-e6051185-3522-4f22-a17e-eded6a581ec0 ac3a468b64f844f9b4952888c88de098 e2eb1e7f2a0a4826a63765b30279f43c - - default default] Acquiring lock "48c2872f-fffe-4ce0-8e59-8f5c86e5b07b-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:405
Oct 09 16:12:06 compute-0 nova_compute[117331]: 2025-10-09 16:12:06.987 2 DEBUG oslo_concurrency.lockutils [None req-e6051185-3522-4f22-a17e-eded6a581ec0 ac3a468b64f844f9b4952888c88de098 e2eb1e7f2a0a4826a63765b30279f43c - - default default] Lock "48c2872f-fffe-4ce0-8e59-8f5c86e5b07b-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:410
Oct 09 16:12:06 compute-0 nova_compute[117331]: 2025-10-09 16:12:06.987 2 DEBUG oslo_concurrency.lockutils [None req-e6051185-3522-4f22-a17e-eded6a581ec0 ac3a468b64f844f9b4952888c88de098 e2eb1e7f2a0a4826a63765b30279f43c - - default default] Lock "48c2872f-fffe-4ce0-8e59-8f5c86e5b07b-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:424
Oct 09 16:12:06 compute-0 nova_compute[117331]: 2025-10-09 16:12:06.988 2 DEBUG nova.virt.libvirt.vif [None req-e6051185-3522-4f22-a17e-eded6a581ec0 ac3a468b64f844f9b4952888c88de098 e2eb1e7f2a0a4826a63765b30279f43c - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,compute_id=1,config_drive='',created_at=2025-10-09T16:11:54Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description=None,display_name='tempest-TestDataModel-server-134861716',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testdatamodel-server-134861716',id=2,image_ref='b7d6e0af-25e4-4227-9dc6-43143898ceee',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='e2eb1e7f2a0a4826a63765b30279f43c',ramdisk_id='',reservation_id='r-0fq3s8iq',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member,manager,admin',image_base_image_ref='b7d6e0af-25e4-4227-9dc6-43143898ceee',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestDataModel-868619493',owner_user_name='tempest-TestDataModel-868619493-project-admin'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-09T16:12:01Z,user_data=None,user_id='ac3a468b64f844f9b4952888c88de098',uuid=48c2872f-fffe-4ce0-8e59-8f5c86e5b07b,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "9755a423-13f0-4f3a-8edd-bcc5b5190cfe", "address": "fa:16:3e:2d:5b:b2", "network": {"id": "19256da9-441e-477f-8195-1eea19c08437", "bridge": "br-int", "label": "tempest-TestDataModel-78218585-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "be0f7b79aa964c70b49cd8e6a20ecdf5", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap9755a423-13", "ovs_interfaceid": "9755a423-13f0-4f3a-8edd-bcc5b5190cfe", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.12/site-packages/nova/virt/libvirt/vif.py:721
Oct 09 16:12:06 compute-0 nova_compute[117331]: 2025-10-09 16:12:06.988 2 DEBUG nova.network.os_vif_util [None req-e6051185-3522-4f22-a17e-eded6a581ec0 ac3a468b64f844f9b4952888c88de098 e2eb1e7f2a0a4826a63765b30279f43c - - default default] Converting VIF {"id": "9755a423-13f0-4f3a-8edd-bcc5b5190cfe", "address": "fa:16:3e:2d:5b:b2", "network": {"id": "19256da9-441e-477f-8195-1eea19c08437", "bridge": "br-int", "label": "tempest-TestDataModel-78218585-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "be0f7b79aa964c70b49cd8e6a20ecdf5", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap9755a423-13", "ovs_interfaceid": "9755a423-13f0-4f3a-8edd-bcc5b5190cfe", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.12/site-packages/nova/network/os_vif_util.py:511
Oct 09 16:12:06 compute-0 nova_compute[117331]: 2025-10-09 16:12:06.989 2 DEBUG nova.network.os_vif_util [None req-e6051185-3522-4f22-a17e-eded6a581ec0 ac3a468b64f844f9b4952888c88de098 e2eb1e7f2a0a4826a63765b30279f43c - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:2d:5b:b2,bridge_name='br-int',has_traffic_filtering=True,id=9755a423-13f0-4f3a-8edd-bcc5b5190cfe,network=Network(19256da9-441e-477f-8195-1eea19c08437),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap9755a423-13') nova_to_osvif_vif /usr/lib/python3.12/site-packages/nova/network/os_vif_util.py:548
Oct 09 16:12:06 compute-0 nova_compute[117331]: 2025-10-09 16:12:06.989 2 DEBUG os_vif [None req-e6051185-3522-4f22-a17e-eded6a581ec0 ac3a468b64f844f9b4952888c88de098 e2eb1e7f2a0a4826a63765b30279f43c - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:2d:5b:b2,bridge_name='br-int',has_traffic_filtering=True,id=9755a423-13f0-4f3a-8edd-bcc5b5190cfe,network=Network(19256da9-441e-477f-8195-1eea19c08437),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap9755a423-13') plug /usr/lib/python3.12/site-packages/os_vif/__init__.py:76
Oct 09 16:12:06 compute-0 nova_compute[117331]: 2025-10-09 16:12:06.990 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 22 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 09 16:12:06 compute-0 nova_compute[117331]: 2025-10-09 16:12:06.990 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.12/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 09 16:12:06 compute-0 nova_compute[117331]: 2025-10-09 16:12:06.991 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.12/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 09 16:12:06 compute-0 nova_compute[117331]: 2025-10-09 16:12:06.991 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 22 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 09 16:12:06 compute-0 nova_compute[117331]: 2025-10-09 16:12:06.992 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbCreateCommand(_result=None, table=QoS, columns={'type': 'linux-noop', 'external_ids': {'id': '164a4406-10d2-55b3-b082-2786e170a4a9', '_type': 'linux-noop'}}, row=False) do_commit /usr/lib/python3.12/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 09 16:12:06 compute-0 nova_compute[117331]: 2025-10-09 16:12:06.993 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 09 16:12:06 compute-0 nova_compute[117331]: 2025-10-09 16:12:06.994 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 09 16:12:06 compute-0 nova_compute[117331]: 2025-10-09 16:12:06.997 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 22 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 09 16:12:06 compute-0 nova_compute[117331]: 2025-10-09 16:12:06.998 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap9755a423-13, may_exist=True, interface_attrs={}) do_commit /usr/lib/python3.12/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 09 16:12:06 compute-0 nova_compute[117331]: 2025-10-09 16:12:06.998 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Port, record=tap9755a423-13, col_values=(('qos', UUID('125f0f27-317a-426b-b1ec-88a0474fe47d')),), if_exists=True) do_commit /usr/lib/python3.12/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 09 16:12:06 compute-0 nova_compute[117331]: 2025-10-09 16:12:06.998 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=2): DbSetCommand(_result=None, table=Interface, record=tap9755a423-13, col_values=(('external_ids', {'iface-id': '9755a423-13f0-4f3a-8edd-bcc5b5190cfe', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:2d:5b:b2', 'vm-uuid': '48c2872f-fffe-4ce0-8e59-8f5c86e5b07b'}),), if_exists=True) do_commit /usr/lib/python3.12/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 09 16:12:07 compute-0 nova_compute[117331]: 2025-10-09 16:12:07.000 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 09 16:12:07 compute-0 nova_compute[117331]: 2025-10-09 16:12:07.001 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 09 16:12:07 compute-0 nova_compute[117331]: 2025-10-09 16:12:07.002 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:248
Oct 09 16:12:07 compute-0 NetworkManager[1028]: <info>  [1760026327.0033] manager: (tap9755a423-13): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/26)
Oct 09 16:12:07 compute-0 nova_compute[117331]: 2025-10-09 16:12:07.009 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 09 16:12:07 compute-0 nova_compute[117331]: 2025-10-09 16:12:07.010 2 INFO os_vif [None req-e6051185-3522-4f22-a17e-eded6a581ec0 ac3a468b64f844f9b4952888c88de098 e2eb1e7f2a0a4826a63765b30279f43c - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:2d:5b:b2,bridge_name='br-int',has_traffic_filtering=True,id=9755a423-13f0-4f3a-8edd-bcc5b5190cfe,network=Network(19256da9-441e-477f-8195-1eea19c08437),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap9755a423-13')
Oct 09 16:12:07 compute-0 sshd[52903]: drop connection #0 from [134.199.199.215]:55780 on [38.102.83.110]:22 penalty: failed authentication
Oct 09 16:12:07 compute-0 nova_compute[117331]: 2025-10-09 16:12:07.811 2 DEBUG oslo_service.periodic_task [None req-92a13bed-c081-4c7b-87f3-bafebefb79f2 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.12/site-packages/oslo_service/periodic_task.py:210
Oct 09 16:12:07 compute-0 nova_compute[117331]: 2025-10-09 16:12:07.812 2 DEBUG oslo_service.periodic_task [None req-92a13bed-c081-4c7b-87f3-bafebefb79f2 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.12/site-packages/oslo_service/periodic_task.py:210
Oct 09 16:12:07 compute-0 nova_compute[117331]: 2025-10-09 16:12:07.812 2 DEBUG oslo_service.periodic_task [None req-92a13bed-c081-4c7b-87f3-bafebefb79f2 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.12/site-packages/oslo_service/periodic_task.py:210
Oct 09 16:12:07 compute-0 nova_compute[117331]: 2025-10-09 16:12:07.812 2 DEBUG nova.compute.manager [None req-92a13bed-c081-4c7b-87f3-bafebefb79f2 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.12/site-packages/nova/compute/manager.py:11228
Oct 09 16:12:07 compute-0 nova_compute[117331]: 2025-10-09 16:12:07.812 2 DEBUG oslo_service.periodic_task [None req-92a13bed-c081-4c7b-87f3-bafebefb79f2 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.12/site-packages/oslo_service/periodic_task.py:210
Oct 09 16:12:08 compute-0 nova_compute[117331]: 2025-10-09 16:12:08.323 2 DEBUG oslo_concurrency.lockutils [None req-92a13bed-c081-4c7b-87f3-bafebefb79f2 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:405
Oct 09 16:12:08 compute-0 nova_compute[117331]: 2025-10-09 16:12:08.324 2 DEBUG oslo_concurrency.lockutils [None req-92a13bed-c081-4c7b-87f3-bafebefb79f2 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:410
Oct 09 16:12:08 compute-0 nova_compute[117331]: 2025-10-09 16:12:08.324 2 DEBUG oslo_concurrency.lockutils [None req-92a13bed-c081-4c7b-87f3-bafebefb79f2 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:424
Oct 09 16:12:08 compute-0 nova_compute[117331]: 2025-10-09 16:12:08.324 2 DEBUG nova.compute.resource_tracker [None req-92a13bed-c081-4c7b-87f3-bafebefb79f2 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.12/site-packages/nova/compute/resource_tracker.py:937
Oct 09 16:12:08 compute-0 nova_compute[117331]: 2025-10-09 16:12:08.566 2 DEBUG nova.virt.libvirt.driver [None req-e6051185-3522-4f22-a17e-eded6a581ec0 ac3a468b64f844f9b4952888c88de098 e2eb1e7f2a0a4826a63765b30279f43c - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.12/site-packages/nova/virt/libvirt/driver.py:13022
Oct 09 16:12:08 compute-0 nova_compute[117331]: 2025-10-09 16:12:08.567 2 DEBUG nova.virt.libvirt.driver [None req-e6051185-3522-4f22-a17e-eded6a581ec0 ac3a468b64f844f9b4952888c88de098 e2eb1e7f2a0a4826a63765b30279f43c - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.12/site-packages/nova/virt/libvirt/driver.py:13022
Oct 09 16:12:08 compute-0 nova_compute[117331]: 2025-10-09 16:12:08.567 2 DEBUG nova.virt.libvirt.driver [None req-e6051185-3522-4f22-a17e-eded6a581ec0 ac3a468b64f844f9b4952888c88de098 e2eb1e7f2a0a4826a63765b30279f43c - - default default] No VIF found with MAC fa:16:3e:2d:5b:b2, not building metadata _build_interface_metadata /usr/lib/python3.12/site-packages/nova/virt/libvirt/driver.py:12998
Oct 09 16:12:08 compute-0 nova_compute[117331]: 2025-10-09 16:12:08.567 2 INFO nova.virt.libvirt.driver [None req-e6051185-3522-4f22-a17e-eded6a581ec0 ac3a468b64f844f9b4952888c88de098 e2eb1e7f2a0a4826a63765b30279f43c - - default default] [instance: 48c2872f-fffe-4ce0-8e59-8f5c86e5b07b] Using config drive
Oct 09 16:12:09 compute-0 nova_compute[117331]: 2025-10-09 16:12:09.100 2 WARNING neutronclient.v2_0.client [None req-e6051185-3522-4f22-a17e-eded6a581ec0 ac3a468b64f844f9b4952888c88de098 e2eb1e7f2a0a4826a63765b30279f43c - - default default] The python binding code in neutronclient is deprecated in favor of OpenstackSDK, please use that as this will be removed in a future release.
Oct 09 16:12:09 compute-0 nova_compute[117331]: 2025-10-09 16:12:09.228 2 INFO nova.virt.libvirt.driver [None req-e6051185-3522-4f22-a17e-eded6a581ec0 ac3a468b64f844f9b4952888c88de098 e2eb1e7f2a0a4826a63765b30279f43c - - default default] [instance: 48c2872f-fffe-4ce0-8e59-8f5c86e5b07b] Creating config drive at /var/lib/nova/instances/48c2872f-fffe-4ce0-8e59-8f5c86e5b07b/disk.config
Oct 09 16:12:09 compute-0 nova_compute[117331]: 2025-10-09 16:12:09.233 2 DEBUG oslo_concurrency.processutils [None req-e6051185-3522-4f22-a17e-eded6a581ec0 ac3a468b64f844f9b4952888c88de098 e2eb1e7f2a0a4826a63765b30279f43c - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/48c2872f-fffe-4ce0-8e59-8f5c86e5b07b/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 32.1.0-0.20251008143712.076498e.el10 -quiet -J -r -V config-2 /tmp/tmpb7vqo_f7 execute /usr/lib/python3.12/site-packages/oslo_concurrency/processutils.py:349
Oct 09 16:12:09 compute-0 nova_compute[117331]: 2025-10-09 16:12:09.359 2 DEBUG oslo_concurrency.processutils [None req-e6051185-3522-4f22-a17e-eded6a581ec0 ac3a468b64f844f9b4952888c88de098 e2eb1e7f2a0a4826a63765b30279f43c - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/48c2872f-fffe-4ce0-8e59-8f5c86e5b07b/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 32.1.0-0.20251008143712.076498e.el10 -quiet -J -r -V config-2 /tmp/tmpb7vqo_f7" returned: 0 in 0.126s execute /usr/lib/python3.12/site-packages/oslo_concurrency/processutils.py:372
Oct 09 16:12:09 compute-0 kernel: tap9755a423-13: entered promiscuous mode
Oct 09 16:12:09 compute-0 NetworkManager[1028]: <info>  [1760026329.4155] manager: (tap9755a423-13): new Tun device (/org/freedesktop/NetworkManager/Devices/27)
Oct 09 16:12:09 compute-0 ovn_controller[19752]: 2025-10-09T16:12:09Z|00049|binding|INFO|Claiming lport 9755a423-13f0-4f3a-8edd-bcc5b5190cfe for this chassis.
Oct 09 16:12:09 compute-0 ovn_controller[19752]: 2025-10-09T16:12:09Z|00050|binding|INFO|9755a423-13f0-4f3a-8edd-bcc5b5190cfe: Claiming fa:16:3e:2d:5b:b2 10.100.0.3
Oct 09 16:12:09 compute-0 nova_compute[117331]: 2025-10-09 16:12:09.415 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 09 16:12:09 compute-0 nova_compute[117331]: 2025-10-09 16:12:09.419 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 09 16:12:09 compute-0 nova_compute[117331]: 2025-10-09 16:12:09.423 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 09 16:12:09 compute-0 systemd-machined[77487]: New machine qemu-2-instance-00000002.
Oct 09 16:12:09 compute-0 systemd-udevd[141640]: Network interface NamePolicy= disabled on kernel command line.
Oct 09 16:12:09 compute-0 NetworkManager[1028]: <info>  [1760026329.4625] device (tap9755a423-13): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Oct 09 16:12:09 compute-0 NetworkManager[1028]: <info>  [1760026329.4634] device (tap9755a423-13): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Oct 09 16:12:09 compute-0 nova_compute[117331]: 2025-10-09 16:12:09.478 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 09 16:12:09 compute-0 ovn_controller[19752]: 2025-10-09T16:12:09Z|00051|binding|INFO|Setting lport 9755a423-13f0-4f3a-8edd-bcc5b5190cfe ovn-installed in OVS
Oct 09 16:12:09 compute-0 nova_compute[117331]: 2025-10-09 16:12:09.482 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 09 16:12:09 compute-0 systemd[1]: Started Virtual Machine qemu-2-instance-00000002.
Oct 09 16:12:09 compute-0 nova_compute[117331]: 2025-10-09 16:12:09.520 2 DEBUG oslo_concurrency.processutils [None req-92a13bed-c081-4c7b-87f3-bafebefb79f2 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/48c2872f-fffe-4ce0-8e59-8f5c86e5b07b/disk --force-share --output=json execute /usr/lib/python3.12/site-packages/oslo_concurrency/processutils.py:349
Oct 09 16:12:09 compute-0 ovn_controller[19752]: 2025-10-09T16:12:09Z|00052|binding|INFO|Setting lport 9755a423-13f0-4f3a-8edd-bcc5b5190cfe up in Southbound
Oct 09 16:12:09 compute-0 ovn_metadata_agent[28608]: 2025-10-09 16:12:09.554 28613 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:2d:5b:b2 10.100.0.3'], port_security=['fa:16:3e:2d:5b:b2 10.100.0.3'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.3/28', 'neutron:device_id': '48c2872f-fffe-4ce0-8e59-8f5c86e5b07b', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-19256da9-441e-477f-8195-1eea19c08437', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'e2eb1e7f2a0a4826a63765b30279f43c', 'neutron:revision_number': '4', 'neutron:security_group_ids': '4c699d7d-c8bc-44a5-90e4-91545cc51580', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=9a1624b0-a5ab-47cc-ba86-f82fec9b1800, chassis=[<ovs.db.idl.Row object at 0x7f37b2576840>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f37b2576840>], logical_port=9755a423-13f0-4f3a-8edd-bcc5b5190cfe) old=Port_Binding(chassis=[]) matches /usr/lib/python3.12/site-packages/ovsdbapp/backend/ovs_idl/event.py:55
Oct 09 16:12:09 compute-0 ovn_metadata_agent[28608]: 2025-10-09 16:12:09.555 28613 INFO neutron.agent.ovn.metadata.agent [-] Port 9755a423-13f0-4f3a-8edd-bcc5b5190cfe in datapath 19256da9-441e-477f-8195-1eea19c08437 bound to our chassis
Oct 09 16:12:09 compute-0 ovn_metadata_agent[28608]: 2025-10-09 16:12:09.556 28613 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 19256da9-441e-477f-8195-1eea19c08437
Oct 09 16:12:09 compute-0 ovn_metadata_agent[28608]: 2025-10-09 16:12:09.571 139687 DEBUG oslo.privsep.daemon [-] privsep: reply[439df50d-e02b-4ce8-b4e2-7cab9d4dc34b]: (4, False) _call_back /usr/lib/python3.12/site-packages/oslo_privsep/daemon.py:515
Oct 09 16:12:09 compute-0 ovn_metadata_agent[28608]: 2025-10-09 16:12:09.572 28613 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap19256da9-41 in ovnmeta-19256da9-441e-477f-8195-1eea19c08437 namespace provision_datapath /usr/lib/python3.12/site-packages/neutron/agent/ovn/metadata/agent.py:794
Oct 09 16:12:09 compute-0 nova_compute[117331]: 2025-10-09 16:12:09.575 2 DEBUG oslo_concurrency.processutils [None req-92a13bed-c081-4c7b-87f3-bafebefb79f2 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/48c2872f-fffe-4ce0-8e59-8f5c86e5b07b/disk --force-share --output=json" returned: 0 in 0.056s execute /usr/lib/python3.12/site-packages/oslo_concurrency/processutils.py:372
Oct 09 16:12:09 compute-0 nova_compute[117331]: 2025-10-09 16:12:09.576 2 DEBUG oslo_concurrency.processutils [None req-92a13bed-c081-4c7b-87f3-bafebefb79f2 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/48c2872f-fffe-4ce0-8e59-8f5c86e5b07b/disk --force-share --output=json execute /usr/lib/python3.12/site-packages/oslo_concurrency/processutils.py:349
Oct 09 16:12:09 compute-0 ovn_metadata_agent[28608]: 2025-10-09 16:12:09.582 139687 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap19256da9-40 not found in namespace None get_link_id /usr/lib/python3.12/site-packages/neutron/privileged/agent/linux/ip_lib.py:203
Oct 09 16:12:09 compute-0 ovn_metadata_agent[28608]: 2025-10-09 16:12:09.582 139687 DEBUG oslo.privsep.daemon [-] privsep: reply[7c90ead3-5cd6-4fe0-a716-a9df85e56e47]: (4, False) _call_back /usr/lib/python3.12/site-packages/oslo_privsep/daemon.py:515
Oct 09 16:12:09 compute-0 ovn_metadata_agent[28608]: 2025-10-09 16:12:09.583 139687 DEBUG oslo.privsep.daemon [-] privsep: reply[f09e83d9-b6e2-4ad7-b9ef-025d0b7a935d]: (4, False) _call_back /usr/lib/python3.12/site-packages/oslo_privsep/daemon.py:515
Oct 09 16:12:09 compute-0 ovn_metadata_agent[28608]: 2025-10-09 16:12:09.594 28727 DEBUG oslo.privsep.daemon [-] privsep: reply[c8fc254b-da51-4658-b943-21ceb9023f68]: (4, None) _call_back /usr/lib/python3.12/site-packages/oslo_privsep/daemon.py:515
Oct 09 16:12:09 compute-0 ovn_metadata_agent[28608]: 2025-10-09 16:12:09.610 139687 DEBUG oslo.privsep.daemon [-] privsep: reply[0d00b46a-6f49-4478-8367-220d9371a169]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.12/site-packages/oslo_privsep/daemon.py:515
Oct 09 16:12:09 compute-0 ovn_metadata_agent[28608]: 2025-10-09 16:12:09.638 141048 DEBUG oslo.privsep.daemon [-] privsep: reply[2cae079a-0b07-4508-bac2-6a36b363e9c1]: (4, None) _call_back /usr/lib/python3.12/site-packages/oslo_privsep/daemon.py:515
Oct 09 16:12:09 compute-0 nova_compute[117331]: 2025-10-09 16:12:09.639 2 DEBUG oslo_concurrency.processutils [None req-92a13bed-c081-4c7b-87f3-bafebefb79f2 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/48c2872f-fffe-4ce0-8e59-8f5c86e5b07b/disk --force-share --output=json" returned: 0 in 0.063s execute /usr/lib/python3.12/site-packages/oslo_concurrency/processutils.py:372
Oct 09 16:12:09 compute-0 systemd-udevd[141642]: Network interface NamePolicy= disabled on kernel command line.
Oct 09 16:12:09 compute-0 ovn_metadata_agent[28608]: 2025-10-09 16:12:09.647 139687 DEBUG oslo.privsep.daemon [-] privsep: reply[65132442-a572-4943-9e9f-b34d34e2a940]: (4, None) _call_back /usr/lib/python3.12/site-packages/oslo_privsep/daemon.py:515
Oct 09 16:12:09 compute-0 NetworkManager[1028]: <info>  [1760026329.6486] manager: (tap19256da9-40): new Veth device (/org/freedesktop/NetworkManager/Devices/28)
Oct 09 16:12:09 compute-0 ovn_metadata_agent[28608]: 2025-10-09 16:12:09.676 141048 DEBUG oslo.privsep.daemon [-] privsep: reply[e0853c83-71e9-4288-9a1c-0c44d4397649]: (4, None) _call_back /usr/lib/python3.12/site-packages/oslo_privsep/daemon.py:515
Oct 09 16:12:09 compute-0 ovn_metadata_agent[28608]: 2025-10-09 16:12:09.678 141048 DEBUG oslo.privsep.daemon [-] privsep: reply[030826e6-7cb7-4c35-ba93-4f4ebfbf9854]: (4, None) _call_back /usr/lib/python3.12/site-packages/oslo_privsep/daemon.py:515
Oct 09 16:12:09 compute-0 NetworkManager[1028]: <info>  [1760026329.6954] device (tap19256da9-40): carrier: link connected
Oct 09 16:12:09 compute-0 ovn_metadata_agent[28608]: 2025-10-09 16:12:09.701 141048 DEBUG oslo.privsep.daemon [-] privsep: reply[87dfe8a1-2593-472d-b7d2-0c21209bb3c0]: (4, None) _call_back /usr/lib/python3.12/site-packages/oslo_privsep/daemon.py:515
Oct 09 16:12:09 compute-0 ovn_metadata_agent[28608]: 2025-10-09 16:12:09.717 139687 DEBUG oslo.privsep.daemon [-] privsep: reply[37a0e009-ac4a-479c-8647-d158d6b6f890]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap19256da9-41'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:12:8f:8a'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 2, 'rx_bytes': 110, 'tx_bytes': 176, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 2, 'rx_bytes': 110, 'tx_bytes': 176, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 21], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 127330, 'reachable_time': 22353, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 2, 'outoctets': 148, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 2, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 148, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 2, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 141681, 'error': None, 'target': 'ovnmeta-19256da9-441e-477f-8195-1eea19c08437', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.12/site-packages/oslo_privsep/daemon.py:515
Oct 09 16:12:09 compute-0 ovn_metadata_agent[28608]: 2025-10-09 16:12:09.734 139687 DEBUG oslo.privsep.daemon [-] privsep: reply[6a2cebe2-c6d4-4ebf-9cb8-b70f317d0846]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe12:8f8a'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 127330, 'tstamp': 127330}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 141682, 'error': None, 'target': 'ovnmeta-19256da9-441e-477f-8195-1eea19c08437', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.12/site-packages/oslo_privsep/daemon.py:515
Oct 09 16:12:09 compute-0 ovn_metadata_agent[28608]: 2025-10-09 16:12:09.751 139687 DEBUG oslo.privsep.daemon [-] privsep: reply[525c4429-eb47-4d4e-86d3-d361fef96197]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap19256da9-41'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:12:8f:8a'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 2, 'rx_bytes': 110, 'tx_bytes': 176, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 2, 'rx_bytes': 110, 'tx_bytes': 176, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 21], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 127330, 'reachable_time': 22353, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 2, 'outoctets': 148, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 2, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 148, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 2, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 141683, 'error': None, 'target': 'ovnmeta-19256da9-441e-477f-8195-1eea19c08437', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.12/site-packages/oslo_privsep/daemon.py:515
Oct 09 16:12:09 compute-0 ovn_metadata_agent[28608]: 2025-10-09 16:12:09.777 139687 DEBUG oslo.privsep.daemon [-] privsep: reply[e296f798-b703-4803-9bf1-fa995dc936d3]: (4, None) _call_back /usr/lib/python3.12/site-packages/oslo_privsep/daemon.py:515
Oct 09 16:12:09 compute-0 nova_compute[117331]: 2025-10-09 16:12:09.779 2 WARNING nova.virt.libvirt.driver [None req-92a13bed-c081-4c7b-87f3-bafebefb79f2 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Oct 09 16:12:09 compute-0 nova_compute[117331]: 2025-10-09 16:12:09.781 2 DEBUG oslo_concurrency.processutils [None req-92a13bed-c081-4c7b-87f3-bafebefb79f2 - - - - - -] Running cmd (subprocess): env LANG=C uptime execute /usr/lib/python3.12/site-packages/oslo_concurrency/processutils.py:349
Oct 09 16:12:09 compute-0 nova_compute[117331]: 2025-10-09 16:12:09.798 2 DEBUG oslo_concurrency.processutils [None req-92a13bed-c081-4c7b-87f3-bafebefb79f2 - - - - - -] CMD "env LANG=C uptime" returned: 0 in 0.018s execute /usr/lib/python3.12/site-packages/oslo_concurrency/processutils.py:372
Oct 09 16:12:09 compute-0 nova_compute[117331]: 2025-10-09 16:12:09.799 2 DEBUG nova.compute.resource_tracker [None req-92a13bed-c081-4c7b-87f3-bafebefb79f2 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=6216MB free_disk=73.27185440063477GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": 
"label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.12/site-packages/nova/compute/resource_tracker.py:1136
Oct 09 16:12:09 compute-0 nova_compute[117331]: 2025-10-09 16:12:09.799 2 DEBUG oslo_concurrency.lockutils [None req-92a13bed-c081-4c7b-87f3-bafebefb79f2 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:405
Oct 09 16:12:09 compute-0 nova_compute[117331]: 2025-10-09 16:12:09.799 2 DEBUG oslo_concurrency.lockutils [None req-92a13bed-c081-4c7b-87f3-bafebefb79f2 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:410
Oct 09 16:12:09 compute-0 ovn_metadata_agent[28608]: 2025-10-09 16:12:09.835 139687 DEBUG oslo.privsep.daemon [-] privsep: reply[a68f2c55-0e25-4deb-98ea-c26c7c024d32]: (4, None) _call_back /usr/lib/python3.12/site-packages/oslo_privsep/daemon.py:515
Oct 09 16:12:09 compute-0 ovn_metadata_agent[28608]: 2025-10-09 16:12:09.837 28613 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap19256da9-40, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.12/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 09 16:12:09 compute-0 ovn_metadata_agent[28608]: 2025-10-09 16:12:09.837 28613 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.12/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 09 16:12:09 compute-0 ovn_metadata_agent[28608]: 2025-10-09 16:12:09.837 28613 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap19256da9-40, may_exist=True, interface_attrs={}) do_commit /usr/lib/python3.12/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 09 16:12:09 compute-0 nova_compute[117331]: 2025-10-09 16:12:09.839 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 09 16:12:09 compute-0 NetworkManager[1028]: <info>  [1760026329.8397] manager: (tap19256da9-40): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/29)
Oct 09 16:12:09 compute-0 kernel: tap19256da9-40: entered promiscuous mode
Oct 09 16:12:09 compute-0 nova_compute[117331]: 2025-10-09 16:12:09.840 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 09 16:12:09 compute-0 ovn_metadata_agent[28608]: 2025-10-09 16:12:09.843 28613 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap19256da9-40, col_values=(('external_ids', {'iface-id': 'a5b52431-d630-4de3-8ed8-9c5d94f8bb93'}),), if_exists=True) do_commit /usr/lib/python3.12/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 09 16:12:09 compute-0 nova_compute[117331]: 2025-10-09 16:12:09.843 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 09 16:12:09 compute-0 ovn_controller[19752]: 2025-10-09T16:12:09Z|00053|binding|INFO|Releasing lport a5b52431-d630-4de3-8ed8-9c5d94f8bb93 from this chassis (sb_readonly=0)
Oct 09 16:12:09 compute-0 nova_compute[117331]: 2025-10-09 16:12:09.856 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 09 16:12:09 compute-0 ovn_metadata_agent[28608]: 2025-10-09 16:12:09.857 139687 DEBUG oslo.privsep.daemon [-] privsep: reply[be319ad7-02bc-47a4-810c-8db625d860b4]: (4, '') _call_back /usr/lib/python3.12/site-packages/oslo_privsep/daemon.py:515
Oct 09 16:12:09 compute-0 ovn_metadata_agent[28608]: 2025-10-09 16:12:09.858 28613 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/19256da9-441e-477f-8195-1eea19c08437.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/19256da9-441e-477f-8195-1eea19c08437.pid.haproxy' get_value_from_file /usr/lib/python3.12/site-packages/neutron/agent/linux/utils.py:268
Oct 09 16:12:09 compute-0 ovn_metadata_agent[28608]: 2025-10-09 16:12:09.858 28613 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/19256da9-441e-477f-8195-1eea19c08437.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/19256da9-441e-477f-8195-1eea19c08437.pid.haproxy' get_value_from_file /usr/lib/python3.12/site-packages/neutron/agent/linux/utils.py:268
Oct 09 16:12:09 compute-0 ovn_metadata_agent[28608]: 2025-10-09 16:12:09.858 28613 DEBUG neutron.agent.linux.external_process [-] No haproxy process started for 19256da9-441e-477f-8195-1eea19c08437 disable /usr/lib/python3.12/site-packages/neutron/agent/linux/external_process.py:173
Oct 09 16:12:09 compute-0 ovn_metadata_agent[28608]: 2025-10-09 16:12:09.858 28613 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/19256da9-441e-477f-8195-1eea19c08437.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/19256da9-441e-477f-8195-1eea19c08437.pid.haproxy' get_value_from_file /usr/lib/python3.12/site-packages/neutron/agent/linux/utils.py:268
Oct 09 16:12:09 compute-0 nova_compute[117331]: 2025-10-09 16:12:09.858 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 09 16:12:09 compute-0 ovn_metadata_agent[28608]: 2025-10-09 16:12:09.859 139687 DEBUG oslo.privsep.daemon [-] privsep: reply[b110fde9-e096-4892-bf70-bd4674011a46]: (4, None) _call_back /usr/lib/python3.12/site-packages/oslo_privsep/daemon.py:515
Oct 09 16:12:09 compute-0 ovn_metadata_agent[28608]: 2025-10-09 16:12:09.859 28613 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/19256da9-441e-477f-8195-1eea19c08437.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/19256da9-441e-477f-8195-1eea19c08437.pid.haproxy' get_value_from_file /usr/lib/python3.12/site-packages/neutron/agent/linux/utils.py:268
Oct 09 16:12:09 compute-0 ovn_metadata_agent[28608]: 2025-10-09 16:12:09.860 139687 DEBUG oslo.privsep.daemon [-] privsep: reply[dcf3655e-a3c9-43d8-b30c-7e86e3a1325b]: (4, None) _call_back /usr/lib/python3.12/site-packages/oslo_privsep/daemon.py:515
Oct 09 16:12:09 compute-0 ovn_metadata_agent[28608]: 2025-10-09 16:12:09.860 28613 DEBUG neutron.agent.metadata.driver_base [-] haproxy_cfg = 
Oct 09 16:12:09 compute-0 ovn_metadata_agent[28608]: global
Oct 09 16:12:09 compute-0 ovn_metadata_agent[28608]:     log         /dev/log local0 debug
Oct 09 16:12:09 compute-0 ovn_metadata_agent[28608]:     log-tag     haproxy-metadata-proxy-19256da9-441e-477f-8195-1eea19c08437
Oct 09 16:12:09 compute-0 ovn_metadata_agent[28608]:     user        root
Oct 09 16:12:09 compute-0 ovn_metadata_agent[28608]:     group       root
Oct 09 16:12:09 compute-0 ovn_metadata_agent[28608]:     maxconn     1024
Oct 09 16:12:09 compute-0 ovn_metadata_agent[28608]:     pidfile     /var/lib/neutron/external/pids/19256da9-441e-477f-8195-1eea19c08437.pid.haproxy
Oct 09 16:12:09 compute-0 ovn_metadata_agent[28608]:     daemon
Oct 09 16:12:09 compute-0 ovn_metadata_agent[28608]: 
Oct 09 16:12:09 compute-0 ovn_metadata_agent[28608]: defaults
Oct 09 16:12:09 compute-0 ovn_metadata_agent[28608]:     log global
Oct 09 16:12:09 compute-0 ovn_metadata_agent[28608]:     mode http
Oct 09 16:12:09 compute-0 ovn_metadata_agent[28608]:     option httplog
Oct 09 16:12:09 compute-0 ovn_metadata_agent[28608]:     option dontlognull
Oct 09 16:12:09 compute-0 ovn_metadata_agent[28608]:     option http-server-close
Oct 09 16:12:09 compute-0 ovn_metadata_agent[28608]:     option forwardfor
Oct 09 16:12:09 compute-0 ovn_metadata_agent[28608]:     retries                 3
Oct 09 16:12:09 compute-0 ovn_metadata_agent[28608]:     timeout http-request    30s
Oct 09 16:12:09 compute-0 ovn_metadata_agent[28608]:     timeout connect         30s
Oct 09 16:12:09 compute-0 ovn_metadata_agent[28608]:     timeout client          32s
Oct 09 16:12:09 compute-0 ovn_metadata_agent[28608]:     timeout server          32s
Oct 09 16:12:09 compute-0 ovn_metadata_agent[28608]:     timeout http-keep-alive 30s
Oct 09 16:12:09 compute-0 ovn_metadata_agent[28608]: 
Oct 09 16:12:09 compute-0 ovn_metadata_agent[28608]: listen listener
Oct 09 16:12:09 compute-0 ovn_metadata_agent[28608]:     bind 169.254.169.254:80
Oct 09 16:12:09 compute-0 ovn_metadata_agent[28608]:     
Oct 09 16:12:09 compute-0 ovn_metadata_agent[28608]:     server metadata /var/lib/neutron/metadata_proxy
Oct 09 16:12:09 compute-0 ovn_metadata_agent[28608]: 
Oct 09 16:12:09 compute-0 ovn_metadata_agent[28608]:     http-request add-header X-OVN-Network-ID 19256da9-441e-477f-8195-1eea19c08437
Oct 09 16:12:09 compute-0 ovn_metadata_agent[28608]:  create_config_file /usr/lib/python3.12/site-packages/neutron/agent/metadata/driver_base.py:155
Oct 09 16:12:09 compute-0 ovn_metadata_agent[28608]: 2025-10-09 16:12:09.860 28613 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-19256da9-441e-477f-8195-1eea19c08437', 'env', 'PROCESS_TAG=haproxy-19256da9-441e-477f-8195-1eea19c08437', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/19256da9-441e-477f-8195-1eea19c08437.conf'] create_process /usr/lib/python3.12/site-packages/neutron/agent/linux/utils.py:84
Oct 09 16:12:10 compute-0 nova_compute[117331]: 2025-10-09 16:12:10.020 2 DEBUG nova.compute.manager [req-d99ae566-3fcf-4d5e-9769-aeb534db4886 req-ab6fed98-69bf-41b2-ab5c-d79931a38d30 ee15961ad2be4bada6c106ac4f69f422 c076c42271004e7aa86e84416e33f826 - - default default] [instance: 48c2872f-fffe-4ce0-8e59-8f5c86e5b07b] Received event network-vif-plugged-9755a423-13f0-4f3a-8edd-bcc5b5190cfe external_instance_event /usr/lib/python3.12/site-packages/nova/compute/manager.py:11812
Oct 09 16:12:10 compute-0 nova_compute[117331]: 2025-10-09 16:12:10.020 2 DEBUG oslo_concurrency.lockutils [req-d99ae566-3fcf-4d5e-9769-aeb534db4886 req-ab6fed98-69bf-41b2-ab5c-d79931a38d30 ee15961ad2be4bada6c106ac4f69f422 c076c42271004e7aa86e84416e33f826 - - default default] Acquiring lock "48c2872f-fffe-4ce0-8e59-8f5c86e5b07b-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:405
Oct 09 16:12:10 compute-0 nova_compute[117331]: 2025-10-09 16:12:10.020 2 DEBUG oslo_concurrency.lockutils [req-d99ae566-3fcf-4d5e-9769-aeb534db4886 req-ab6fed98-69bf-41b2-ab5c-d79931a38d30 ee15961ad2be4bada6c106ac4f69f422 c076c42271004e7aa86e84416e33f826 - - default default] Lock "48c2872f-fffe-4ce0-8e59-8f5c86e5b07b-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:410
Oct 09 16:12:10 compute-0 nova_compute[117331]: 2025-10-09 16:12:10.021 2 DEBUG oslo_concurrency.lockutils [req-d99ae566-3fcf-4d5e-9769-aeb534db4886 req-ab6fed98-69bf-41b2-ab5c-d79931a38d30 ee15961ad2be4bada6c106ac4f69f422 c076c42271004e7aa86e84416e33f826 - - default default] Lock "48c2872f-fffe-4ce0-8e59-8f5c86e5b07b-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:424
Oct 09 16:12:10 compute-0 nova_compute[117331]: 2025-10-09 16:12:10.021 2 DEBUG nova.compute.manager [req-d99ae566-3fcf-4d5e-9769-aeb534db4886 req-ab6fed98-69bf-41b2-ab5c-d79931a38d30 ee15961ad2be4bada6c106ac4f69f422 c076c42271004e7aa86e84416e33f826 - - default default] [instance: 48c2872f-fffe-4ce0-8e59-8f5c86e5b07b] Processing event network-vif-plugged-9755a423-13f0-4f3a-8edd-bcc5b5190cfe _process_instance_event /usr/lib/python3.12/site-packages/nova/compute/manager.py:11572
Oct 09 16:12:10 compute-0 nova_compute[117331]: 2025-10-09 16:12:10.261 2 DEBUG nova.compute.manager [None req-e6051185-3522-4f22-a17e-eded6a581ec0 ac3a468b64f844f9b4952888c88de098 e2eb1e7f2a0a4826a63765b30279f43c - - default default] [instance: 48c2872f-fffe-4ce0-8e59-8f5c86e5b07b] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.12/site-packages/nova/compute/manager.py:602
Oct 09 16:12:10 compute-0 nova_compute[117331]: 2025-10-09 16:12:10.264 2 DEBUG nova.virt.libvirt.driver [None req-e6051185-3522-4f22-a17e-eded6a581ec0 ac3a468b64f844f9b4952888c88de098 e2eb1e7f2a0a4826a63765b30279f43c - - default default] [instance: 48c2872f-fffe-4ce0-8e59-8f5c86e5b07b] Guest created on hypervisor spawn /usr/lib/python3.12/site-packages/nova/virt/libvirt/driver.py:4816
Oct 09 16:12:10 compute-0 nova_compute[117331]: 2025-10-09 16:12:10.267 2 INFO nova.virt.libvirt.driver [-] [instance: 48c2872f-fffe-4ce0-8e59-8f5c86e5b07b] Instance spawned successfully.
Oct 09 16:12:10 compute-0 nova_compute[117331]: 2025-10-09 16:12:10.268 2 DEBUG nova.virt.libvirt.driver [None req-e6051185-3522-4f22-a17e-eded6a581ec0 ac3a468b64f844f9b4952888c88de098 e2eb1e7f2a0a4826a63765b30279f43c - - default default] [instance: 48c2872f-fffe-4ce0-8e59-8f5c86e5b07b] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.12/site-packages/nova/virt/libvirt/driver.py:985
Oct 09 16:12:10 compute-0 podman[141722]: 2025-10-09 16:12:10.269975798 +0000 UTC m=+0.058714865 container create cc13eb6b08682899a26dbe7eabbcd2ea11b753a1d58e228cf05d6f6b4963a01e (image=38.102.83.66:5001/podified-master-centos10/openstack-neutron-metadata-agent-ovn:watcher_latest, name=neutron-haproxy-ovnmeta-19256da9-441e-477f-8195-1eea19c08437, io.buildah.version=1.41.4, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251007, tcib_build_tag=watcher_latest)
Oct 09 16:12:10 compute-0 systemd[1]: Started libpod-conmon-cc13eb6b08682899a26dbe7eabbcd2ea11b753a1d58e228cf05d6f6b4963a01e.scope.
Oct 09 16:12:10 compute-0 podman[141722]: 2025-10-09 16:12:10.234900712 +0000 UTC m=+0.023639799 image pull 4e15c189a95c5dcd1203c76fe304de5c8f8616da9a258ca168cc54a517f49b87 38.102.83.66:5001/podified-master-centos10/openstack-neutron-metadata-agent-ovn:watcher_latest
Oct 09 16:12:10 compute-0 systemd[1]: Started libcrun container.
Oct 09 16:12:10 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/31189975f94aac5b2dda5b9f34737bc5d29d7d0b56a11e4bf94331bd916e7f34/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Oct 09 16:12:10 compute-0 podman[141722]: 2025-10-09 16:12:10.351660729 +0000 UTC m=+0.140399826 container init cc13eb6b08682899a26dbe7eabbcd2ea11b753a1d58e228cf05d6f6b4963a01e (image=38.102.83.66:5001/podified-master-centos10/openstack-neutron-metadata-agent-ovn:watcher_latest, name=neutron-haproxy-ovnmeta-19256da9-441e-477f-8195-1eea19c08437, tcib_managed=true, tcib_build_tag=watcher_latest, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251007, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.4, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.schema-version=1.0)
Oct 09 16:12:10 compute-0 podman[141722]: 2025-10-09 16:12:10.357011436 +0000 UTC m=+0.145750513 container start cc13eb6b08682899a26dbe7eabbcd2ea11b753a1d58e228cf05d6f6b4963a01e (image=38.102.83.66:5001/podified-master-centos10/openstack-neutron-metadata-agent-ovn:watcher_latest, name=neutron-haproxy-ovnmeta-19256da9-441e-477f-8195-1eea19c08437, tcib_managed=true, tcib_build_tag=watcher_latest, io.buildah.version=1.41.4, org.label-schema.name=CentOS Stream 10 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251007)
Oct 09 16:12:10 compute-0 neutron-haproxy-ovnmeta-19256da9-441e-477f-8195-1eea19c08437[141737]: [NOTICE]   (141742) : New worker (141744) forked
Oct 09 16:12:10 compute-0 neutron-haproxy-ovnmeta-19256da9-441e-477f-8195-1eea19c08437[141737]: [NOTICE]   (141742) : Loading success.
Oct 09 16:12:10 compute-0 nova_compute[117331]: 2025-10-09 16:12:10.407 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 09 16:12:10 compute-0 nova_compute[117331]: 2025-10-09 16:12:10.782 2 DEBUG nova.virt.libvirt.driver [None req-e6051185-3522-4f22-a17e-eded6a581ec0 ac3a468b64f844f9b4952888c88de098 e2eb1e7f2a0a4826a63765b30279f43c - - default default] [instance: 48c2872f-fffe-4ce0-8e59-8f5c86e5b07b] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.12/site-packages/nova/virt/libvirt/driver.py:1014
Oct 09 16:12:10 compute-0 nova_compute[117331]: 2025-10-09 16:12:10.783 2 DEBUG nova.virt.libvirt.driver [None req-e6051185-3522-4f22-a17e-eded6a581ec0 ac3a468b64f844f9b4952888c88de098 e2eb1e7f2a0a4826a63765b30279f43c - - default default] [instance: 48c2872f-fffe-4ce0-8e59-8f5c86e5b07b] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.12/site-packages/nova/virt/libvirt/driver.py:1014
Oct 09 16:12:10 compute-0 nova_compute[117331]: 2025-10-09 16:12:10.783 2 DEBUG nova.virt.libvirt.driver [None req-e6051185-3522-4f22-a17e-eded6a581ec0 ac3a468b64f844f9b4952888c88de098 e2eb1e7f2a0a4826a63765b30279f43c - - default default] [instance: 48c2872f-fffe-4ce0-8e59-8f5c86e5b07b] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.12/site-packages/nova/virt/libvirt/driver.py:1014
Oct 09 16:12:10 compute-0 nova_compute[117331]: 2025-10-09 16:12:10.784 2 DEBUG nova.virt.libvirt.driver [None req-e6051185-3522-4f22-a17e-eded6a581ec0 ac3a468b64f844f9b4952888c88de098 e2eb1e7f2a0a4826a63765b30279f43c - - default default] [instance: 48c2872f-fffe-4ce0-8e59-8f5c86e5b07b] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.12/site-packages/nova/virt/libvirt/driver.py:1014
Oct 09 16:12:10 compute-0 nova_compute[117331]: 2025-10-09 16:12:10.784 2 DEBUG nova.virt.libvirt.driver [None req-e6051185-3522-4f22-a17e-eded6a581ec0 ac3a468b64f844f9b4952888c88de098 e2eb1e7f2a0a4826a63765b30279f43c - - default default] [instance: 48c2872f-fffe-4ce0-8e59-8f5c86e5b07b] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.12/site-packages/nova/virt/libvirt/driver.py:1014
Oct 09 16:12:10 compute-0 nova_compute[117331]: 2025-10-09 16:12:10.785 2 DEBUG nova.virt.libvirt.driver [None req-e6051185-3522-4f22-a17e-eded6a581ec0 ac3a468b64f844f9b4952888c88de098 e2eb1e7f2a0a4826a63765b30279f43c - - default default] [instance: 48c2872f-fffe-4ce0-8e59-8f5c86e5b07b] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.12/site-packages/nova/virt/libvirt/driver.py:1014
Oct 09 16:12:10 compute-0 nova_compute[117331]: 2025-10-09 16:12:10.898 2 DEBUG nova.compute.resource_tracker [None req-92a13bed-c081-4c7b-87f3-bafebefb79f2 - - - - - -] Instance 48c2872f-fffe-4ce0-8e59-8f5c86e5b07b actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.12/site-packages/nova/compute/resource_tracker.py:1740
Oct 09 16:12:10 compute-0 nova_compute[117331]: 2025-10-09 16:12:10.899 2 DEBUG nova.compute.resource_tracker [None req-92a13bed-c081-4c7b-87f3-bafebefb79f2 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 1 _report_final_resource_view /usr/lib/python3.12/site-packages/nova/compute/resource_tracker.py:1159
Oct 09 16:12:10 compute-0 nova_compute[117331]: 2025-10-09 16:12:10.899 2 DEBUG nova.compute.resource_tracker [None req-92a13bed-c081-4c7b-87f3-bafebefb79f2 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=640MB phys_disk=79GB used_disk=1GB total_vcpus=8 used_vcpus=1 pci_stats=[] stats={'failed_builds': '0', 'uptime': ' 16:12:09 up 21 min,  0 user,  load average: 0.17, 0.24, 0.31\n', 'num_instances': '1', 'num_vm_building': '1', 'num_task_spawning': '1', 'num_os_type_None': '1', 'num_proj_e2eb1e7f2a0a4826a63765b30279f43c': '1', 'io_workload': '1'} _report_final_resource_view /usr/lib/python3.12/site-packages/nova/compute/resource_tracker.py:1168
Oct 09 16:12:10 compute-0 nova_compute[117331]: 2025-10-09 16:12:10.933 2 DEBUG nova.compute.provider_tree [None req-92a13bed-c081-4c7b-87f3-bafebefb79f2 - - - - - -] Inventory has not changed in ProviderTree for provider: 593051b8-2000-437f-a915-2616fc8b1671 update_inventory /usr/lib/python3.12/site-packages/nova/compute/provider_tree.py:180
Oct 09 16:12:11 compute-0 nova_compute[117331]: 2025-10-09 16:12:11.294 2 INFO nova.compute.manager [None req-e6051185-3522-4f22-a17e-eded6a581ec0 ac3a468b64f844f9b4952888c88de098 e2eb1e7f2a0a4826a63765b30279f43c - - default default] [instance: 48c2872f-fffe-4ce0-8e59-8f5c86e5b07b] Took 8.68 seconds to spawn the instance on the hypervisor.
Oct 09 16:12:11 compute-0 nova_compute[117331]: 2025-10-09 16:12:11.294 2 DEBUG nova.compute.manager [None req-e6051185-3522-4f22-a17e-eded6a581ec0 ac3a468b64f844f9b4952888c88de098 e2eb1e7f2a0a4826a63765b30279f43c - - default default] [instance: 48c2872f-fffe-4ce0-8e59-8f5c86e5b07b] Checking state _get_power_state /usr/lib/python3.12/site-packages/nova/compute/manager.py:1825
Oct 09 16:12:11 compute-0 nova_compute[117331]: 2025-10-09 16:12:11.440 2 DEBUG nova.scheduler.client.report [None req-92a13bed-c081-4c7b-87f3-bafebefb79f2 - - - - - -] Inventory has not changed for provider 593051b8-2000-437f-a915-2616fc8b1671 based on inventory data: {'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 79, 'reserved': 1, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.12/site-packages/nova/scheduler/client/report.py:958
Oct 09 16:12:11 compute-0 nova_compute[117331]: 2025-10-09 16:12:11.836 2 INFO nova.compute.manager [None req-e6051185-3522-4f22-a17e-eded6a581ec0 ac3a468b64f844f9b4952888c88de098 e2eb1e7f2a0a4826a63765b30279f43c - - default default] [instance: 48c2872f-fffe-4ce0-8e59-8f5c86e5b07b] Took 13.88 seconds to build instance.
Oct 09 16:12:11 compute-0 nova_compute[117331]: 2025-10-09 16:12:11.950 2 DEBUG nova.compute.resource_tracker [None req-92a13bed-c081-4c7b-87f3-bafebefb79f2 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.12/site-packages/nova/compute/resource_tracker.py:1097
Oct 09 16:12:11 compute-0 nova_compute[117331]: 2025-10-09 16:12:11.950 2 DEBUG oslo_concurrency.lockutils [None req-92a13bed-c081-4c7b-87f3-bafebefb79f2 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 2.151s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:424
Oct 09 16:12:12 compute-0 nova_compute[117331]: 2025-10-09 16:12:12.001 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 09 16:12:12 compute-0 unix_chkpwd[141755]: password check failed for user (root)
Oct 09 16:12:12 compute-0 sshd-session[141753]: pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=134.199.199.215  user=root
Oct 09 16:12:12 compute-0 nova_compute[117331]: 2025-10-09 16:12:12.078 2 DEBUG nova.compute.manager [req-25c174c8-0b5f-4106-91ab-b142eae9066e req-bff70f9c-adf6-4a13-9267-e3380908f120 ee15961ad2be4bada6c106ac4f69f422 c076c42271004e7aa86e84416e33f826 - - default default] [instance: 48c2872f-fffe-4ce0-8e59-8f5c86e5b07b] Received event network-vif-plugged-9755a423-13f0-4f3a-8edd-bcc5b5190cfe external_instance_event /usr/lib/python3.12/site-packages/nova/compute/manager.py:11812
Oct 09 16:12:12 compute-0 nova_compute[117331]: 2025-10-09 16:12:12.079 2 DEBUG oslo_concurrency.lockutils [req-25c174c8-0b5f-4106-91ab-b142eae9066e req-bff70f9c-adf6-4a13-9267-e3380908f120 ee15961ad2be4bada6c106ac4f69f422 c076c42271004e7aa86e84416e33f826 - - default default] Acquiring lock "48c2872f-fffe-4ce0-8e59-8f5c86e5b07b-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:405
Oct 09 16:12:12 compute-0 nova_compute[117331]: 2025-10-09 16:12:12.080 2 DEBUG oslo_concurrency.lockutils [req-25c174c8-0b5f-4106-91ab-b142eae9066e req-bff70f9c-adf6-4a13-9267-e3380908f120 ee15961ad2be4bada6c106ac4f69f422 c076c42271004e7aa86e84416e33f826 - - default default] Lock "48c2872f-fffe-4ce0-8e59-8f5c86e5b07b-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:410
Oct 09 16:12:12 compute-0 nova_compute[117331]: 2025-10-09 16:12:12.080 2 DEBUG oslo_concurrency.lockutils [req-25c174c8-0b5f-4106-91ab-b142eae9066e req-bff70f9c-adf6-4a13-9267-e3380908f120 ee15961ad2be4bada6c106ac4f69f422 c076c42271004e7aa86e84416e33f826 - - default default] Lock "48c2872f-fffe-4ce0-8e59-8f5c86e5b07b-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:424
Oct 09 16:12:12 compute-0 nova_compute[117331]: 2025-10-09 16:12:12.080 2 DEBUG nova.compute.manager [req-25c174c8-0b5f-4106-91ab-b142eae9066e req-bff70f9c-adf6-4a13-9267-e3380908f120 ee15961ad2be4bada6c106ac4f69f422 c076c42271004e7aa86e84416e33f826 - - default default] [instance: 48c2872f-fffe-4ce0-8e59-8f5c86e5b07b] No waiting events found dispatching network-vif-plugged-9755a423-13f0-4f3a-8edd-bcc5b5190cfe pop_instance_event /usr/lib/python3.12/site-packages/nova/compute/manager.py:344
Oct 09 16:12:12 compute-0 nova_compute[117331]: 2025-10-09 16:12:12.080 2 WARNING nova.compute.manager [req-25c174c8-0b5f-4106-91ab-b142eae9066e req-bff70f9c-adf6-4a13-9267-e3380908f120 ee15961ad2be4bada6c106ac4f69f422 c076c42271004e7aa86e84416e33f826 - - default default] [instance: 48c2872f-fffe-4ce0-8e59-8f5c86e5b07b] Received unexpected event network-vif-plugged-9755a423-13f0-4f3a-8edd-bcc5b5190cfe for instance with vm_state active and task_state None.
Oct 09 16:12:12 compute-0 nova_compute[117331]: 2025-10-09 16:12:12.344 2 DEBUG oslo_concurrency.lockutils [None req-e6051185-3522-4f22-a17e-eded6a581ec0 ac3a468b64f844f9b4952888c88de098 e2eb1e7f2a0a4826a63765b30279f43c - - default default] Lock "48c2872f-fffe-4ce0-8e59-8f5c86e5b07b" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 15.399s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:424
Oct 09 16:12:12 compute-0 nova_compute[117331]: 2025-10-09 16:12:12.442 2 DEBUG oslo_service.periodic_task [None req-92a13bed-c081-4c7b-87f3-bafebefb79f2 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.12/site-packages/oslo_service/periodic_task.py:210
Oct 09 16:12:12 compute-0 nova_compute[117331]: 2025-10-09 16:12:12.442 2 DEBUG oslo_service.periodic_task [None req-92a13bed-c081-4c7b-87f3-bafebefb79f2 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.12/site-packages/oslo_service/periodic_task.py:210
Oct 09 16:12:12 compute-0 podman[141756]: 2025-10-09 16:12:12.821131689 +0000 UTC m=+0.054427011 container health_status 270830b5167cafbab735d8118980f26987e673cd0fa464dd2202394830bcca66 (image=38.102.83.66:5001/podified-master-centos10/openstack-multipathd:watcher_latest, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, config_id=multipathd, org.label-schema.license=GPLv2, tcib_build_tag=watcher_latest, tcib_managed=true, container_name=multipathd, io.buildah.version=1.41.4, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.build-date=20251007, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': '38.102.83.66:5001/podified-master-centos10/openstack-multipathd:watcher_latest', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.schema-version=1.0)
Oct 09 16:12:12 compute-0 nova_compute[117331]: 2025-10-09 16:12:12.951 2 DEBUG oslo_service.periodic_task [None req-92a13bed-c081-4c7b-87f3-bafebefb79f2 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.12/site-packages/oslo_service/periodic_task.py:210
Oct 09 16:12:12 compute-0 nova_compute[117331]: 2025-10-09 16:12:12.951 2 DEBUG oslo_service.periodic_task [None req-92a13bed-c081-4c7b-87f3-bafebefb79f2 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.12/site-packages/oslo_service/periodic_task.py:210
Oct 09 16:12:13 compute-0 sshd-session[141753]: Failed password for root from 134.199.199.215 port 55792 ssh2
Oct 09 16:12:14 compute-0 sshd-session[141776]: Invalid user alex from 134.199.199.215 port 55800
Oct 09 16:12:14 compute-0 sshd-session[141776]: pam_unix(sshd:auth): check pass; user unknown
Oct 09 16:12:14 compute-0 sshd-session[141776]: pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=134.199.199.215
Oct 09 16:12:14 compute-0 sshd-session[141753]: Connection closed by authenticating user root 134.199.199.215 port 55792 [preauth]
Oct 09 16:12:15 compute-0 nova_compute[117331]: 2025-10-09 16:12:15.410 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 09 16:12:16 compute-0 sshd-session[141776]: Failed password for invalid user alex from 134.199.199.215 port 55800 ssh2
Oct 09 16:12:17 compute-0 nova_compute[117331]: 2025-10-09 16:12:17.005 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 09 16:12:17 compute-0 sshd-session[141778]: Invalid user admin from 134.199.199.215 port 33696
Oct 09 16:12:17 compute-0 sshd-session[141778]: pam_unix(sshd:auth): check pass; user unknown
Oct 09 16:12:17 compute-0 sshd-session[141778]: pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=134.199.199.215
Oct 09 16:12:17 compute-0 sshd-session[141776]: Connection closed by invalid user alex 134.199.199.215 port 55800 [preauth]
Oct 09 16:12:17 compute-0 podman[141780]: 2025-10-09 16:12:17.819232586 +0000 UTC m=+0.053628755 container health_status 86aa93e887899e54d383b0fc17fb3c5637680d1d8e146a130a557c837f0260fe (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter)
Oct 09 16:12:20 compute-0 sshd-session[141778]: Failed password for invalid user admin from 134.199.199.215 port 33696 ssh2
Oct 09 16:12:20 compute-0 nova_compute[117331]: 2025-10-09 16:12:20.411 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 09 16:12:20 compute-0 sshd-session[141806]: Invalid user postgres from 134.199.199.215 port 33726
Oct 09 16:12:20 compute-0 sshd-session[141806]: pam_unix(sshd:auth): check pass; user unknown
Oct 09 16:12:20 compute-0 sshd-session[141806]: pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=134.199.199.215
Oct 09 16:12:22 compute-0 nova_compute[117331]: 2025-10-09 16:12:22.008 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 09 16:12:22 compute-0 sshd-session[141778]: Connection closed by invalid user admin 134.199.199.215 port 33696 [preauth]
Oct 09 16:12:22 compute-0 podman[141826]: 2025-10-09 16:12:22.835424128 +0000 UTC m=+0.062330408 container health_status 0e966c7a283689daf23c5db5017f23f685ee836319240188a9f70ddd246563c5 (image=38.102.83.66:5001/podified-master-centos10/openstack-neutron-metadata-agent-ovn:watcher_latest, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_metadata_agent, io.buildah.version=1.41.4, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': '38.102.83.66:5001/podified-master-centos10/openstack-neutron-metadata-agent-ovn:watcher_latest', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, org.label-schema.build-date=20251007, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=watcher_latest, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.schema-version=1.0, managed_by=edpm_ansible)
Oct 09 16:12:22 compute-0 podman[141827]: 2025-10-09 16:12:22.852591743 +0000 UTC m=+0.063219585 container health_status 8227728d23f374cb9528be6ddf01dff38d8c792912a17b0d137932a18390b820 (image=38.102.83.66:5001/podified-master-centos10/openstack-iscsid:watcher_latest, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, tcib_build_tag=watcher_latest, io.buildah.version=1.41.4, org.label-schema.vendor=CentOS, config_id=iscsid, maintainer=OpenStack Kubernetes Operator team, container_name=iscsid, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 10 Base Image, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': '38.102.83.66:5001/podified-master-centos10/openstack-iscsid:watcher_latest', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, org.label-schema.build-date=20251007)
Oct 09 16:12:22 compute-0 sshd-session[141806]: Failed password for invalid user postgres from 134.199.199.215 port 33726 ssh2
Oct 09 16:12:23 compute-0 sshd-session[141806]: Connection closed by invalid user postgres 134.199.199.215 port 33726 [preauth]
Oct 09 16:12:23 compute-0 ovn_controller[19752]: 2025-10-09T16:12:23Z|00005|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:2d:5b:b2 10.100.0.3
Oct 09 16:12:23 compute-0 ovn_controller[19752]: 2025-10-09T16:12:23Z|00006|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:2d:5b:b2 10.100.0.3
Oct 09 16:12:24 compute-0 unix_chkpwd[141865]: password check failed for user (root)
Oct 09 16:12:24 compute-0 sshd-session[141863]: pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=134.199.199.215  user=root
Oct 09 16:12:24 compute-0 ovn_metadata_agent[28608]: 2025-10-09 16:12:24.874 28613 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=5, options={'arp_ns_explicit_output': 'true', 'fdb_removal_limit': '0', 'ignore_lsp_down': 'false', 'mac_binding_removal_limit': '0', 'mac_prefix': '4a:65:5f', 'max_tunid': '16711680', 'northd_internal_version': '24.09.4-20.37.0-77.8', 'svc_monitor_mac': '32:24:1e:e3:5d:e8'}, ipsec=False) old=SB_Global(nb_cfg=4) matches /usr/lib/python3.12/site-packages/ovsdbapp/backend/ovs_idl/event.py:55
Oct 09 16:12:24 compute-0 ovn_metadata_agent[28608]: 2025-10-09 16:12:24.876 28613 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 8 seconds run /usr/lib/python3.12/site-packages/neutron/agent/ovn/metadata/agent.py:367
Oct 09 16:12:24 compute-0 nova_compute[117331]: 2025-10-09 16:12:24.875 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 09 16:12:25 compute-0 nova_compute[117331]: 2025-10-09 16:12:25.413 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 09 16:12:25 compute-0 sshd-session[141863]: Failed password for root from 134.199.199.215 port 33752 ssh2
Oct 09 16:12:26 compute-0 sshd-session[141863]: Connection closed by authenticating user root 134.199.199.215 port 33752 [preauth]
Oct 09 16:12:27 compute-0 nova_compute[117331]: 2025-10-09 16:12:27.011 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 09 16:12:27 compute-0 sshd-session[141867]: Invalid user linux from 134.199.199.215 port 50190
Oct 09 16:12:27 compute-0 sshd-session[141867]: pam_unix(sshd:auth): check pass; user unknown
Oct 09 16:12:27 compute-0 sshd-session[141867]: pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=134.199.199.215
Oct 09 16:12:29 compute-0 podman[127775]: time="2025-10-09T16:12:29Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Oct 09 16:12:29 compute-0 podman[127775]: @ - - [09/Oct/2025:16:12:29 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 20749 "" "Go-http-client/1.1"
Oct 09 16:12:29 compute-0 podman[127775]: @ - - [09/Oct/2025:16:12:29 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 3480 "" "Go-http-client/1.1"
Oct 09 16:12:30 compute-0 sshd-session[141867]: Failed password for invalid user linux from 134.199.199.215 port 50190 ssh2
Oct 09 16:12:30 compute-0 nova_compute[117331]: 2025-10-09 16:12:30.415 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 09 16:12:31 compute-0 unix_chkpwd[141871]: password check failed for user (root)
Oct 09 16:12:31 compute-0 sshd-session[141869]: pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=134.199.199.215  user=root
Oct 09 16:12:31 compute-0 sshd-session[141867]: Connection closed by invalid user linux 134.199.199.215 port 50190 [preauth]
Oct 09 16:12:31 compute-0 openstack_network_exporter[129925]: ERROR   16:12:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Oct 09 16:12:31 compute-0 openstack_network_exporter[129925]: ERROR   16:12:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Oct 09 16:12:31 compute-0 openstack_network_exporter[129925]: ERROR   16:12:31 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Oct 09 16:12:31 compute-0 openstack_network_exporter[129925]: ERROR   16:12:31 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Oct 09 16:12:31 compute-0 openstack_network_exporter[129925]: 
Oct 09 16:12:31 compute-0 openstack_network_exporter[129925]: ERROR   16:12:31 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Oct 09 16:12:31 compute-0 openstack_network_exporter[129925]: 
Oct 09 16:12:31 compute-0 podman[141872]: 2025-10-09 16:12:31.814358777 +0000 UTC m=+0.050485787 container health_status 64fa251112d01abb34ec1bb8e22019815ddcaaaa391ce5ec91517147ac5a9994 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, io.openshift.tags=minimal rhel9, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_id=edpm, io.openshift.expose-services=, name=ubi9-minimal, release=1755695350, vcs-type=git, vendor=Red Hat, Inc., version=9.6, url=https://catalog.redhat.com/en/search?searchType=containers, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., build-date=2025-08-20T13:12:41, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, maintainer=Red Hat, Inc., io.buildah.version=1.33.7, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, managed_by=edpm_ansible, com.redhat.component=ubi9-minimal-container, container_name=openstack_network_exporter, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., architecture=x86_64, distribution-scope=public)
Oct 09 16:12:32 compute-0 nova_compute[117331]: 2025-10-09 16:12:32.014 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 09 16:12:32 compute-0 sshd-session[141869]: Failed password for root from 134.199.199.215 port 50204 ssh2
Oct 09 16:12:32 compute-0 ovn_metadata_agent[28608]: 2025-10-09 16:12:32.877 28613 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=9954897f-aa83-45dd-8e84-289816676c2a, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '5'}),), if_exists=True) do_commit /usr/lib/python3.12/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 09 16:12:33 compute-0 sshd-session[141869]: Connection closed by authenticating user root 134.199.199.215 port 50204 [preauth]
Oct 09 16:12:34 compute-0 sshd[52903]: drop connection #0 from [134.199.199.215]:50218 on [38.102.83.110]:22 penalty: failed authentication
Oct 09 16:12:34 compute-0 podman[141893]: 2025-10-09 16:12:34.852218706 +0000 UTC m=+0.084715947 container health_status d615e4f24ff62fbb14be4a63e6da568ec5b22309773c1c66103b6b4de64a8a2f (image=38.102.83.66:5001/podified-master-centos10/openstack-ovn-controller:watcher_latest, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_controller, io.buildah.version=1.41.4, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251007, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 10 Base Image, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': '38.102.83.66:5001/podified-master-centos10/openstack-ovn-controller:watcher_latest', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.vendor=CentOS, config_id=ovn_controller, org.label-schema.schema-version=1.0, tcib_build_tag=watcher_latest, tcib_managed=true)
Oct 09 16:12:35 compute-0 ovn_metadata_agent[28608]: 2025-10-09 16:12:35.283 28613 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:405
Oct 09 16:12:35 compute-0 ovn_metadata_agent[28608]: 2025-10-09 16:12:35.284 28613 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:410
Oct 09 16:12:35 compute-0 ovn_metadata_agent[28608]: 2025-10-09 16:12:35.285 28613 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:424
Oct 09 16:12:35 compute-0 nova_compute[117331]: 2025-10-09 16:12:35.417 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 09 16:12:37 compute-0 nova_compute[117331]: 2025-10-09 16:12:37.017 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 09 16:12:37 compute-0 sshd[52903]: drop connection #0 from [134.199.199.215]:50806 on [38.102.83.110]:22 penalty: failed authentication
Oct 09 16:12:40 compute-0 nova_compute[117331]: 2025-10-09 16:12:40.420 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 09 16:12:40 compute-0 sshd[52903]: drop connection #0 from [134.199.199.215]:50834 on [38.102.83.110]:22 penalty: failed authentication
Oct 09 16:12:42 compute-0 nova_compute[117331]: 2025-10-09 16:12:42.019 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 09 16:12:43 compute-0 podman[141921]: 2025-10-09 16:12:43.867187281 +0000 UTC m=+0.086148050 container health_status 270830b5167cafbab735d8118980f26987e673cd0fa464dd2202394830bcca66 (image=38.102.83.66:5001/podified-master-centos10/openstack-multipathd:watcher_latest, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.4, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_build_tag=watcher_latest, config_id=multipathd, container_name=multipathd, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251007, org.label-schema.schema-version=1.0, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': '38.102.83.66:5001/podified-master-centos10/openstack-multipathd:watcher_latest', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 10 Base Image, tcib_managed=true)
Oct 09 16:12:44 compute-0 sshd[52903]: drop connection #0 from [134.199.199.215]:50850 on [38.102.83.110]:22 penalty: failed authentication
Oct 09 16:12:45 compute-0 nova_compute[117331]: 2025-10-09 16:12:45.422 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 09 16:12:47 compute-0 nova_compute[117331]: 2025-10-09 16:12:47.077 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 09 16:12:47 compute-0 sshd[52903]: drop connection #0 from [134.199.199.215]:43250 on [38.102.83.110]:22 penalty: failed authentication
Oct 09 16:12:48 compute-0 podman[141943]: 2025-10-09 16:12:48.817339087 +0000 UTC m=+0.048344185 container health_status 86aa93e887899e54d383b0fc17fb3c5637680d1d8e146a130a557c837f0260fe (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>)
Oct 09 16:12:50 compute-0 nova_compute[117331]: 2025-10-09 16:12:50.444 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 09 16:12:51 compute-0 sshd-session[141969]: Invalid user es from 134.199.199.215 port 43266
Oct 09 16:12:51 compute-0 sshd-session[141969]: pam_unix(sshd:auth): check pass; user unknown
Oct 09 16:12:51 compute-0 sshd-session[141969]: pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=134.199.199.215
Oct 09 16:12:52 compute-0 nova_compute[117331]: 2025-10-09 16:12:52.079 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 09 16:12:52 compute-0 nova_compute[117331]: 2025-10-09 16:12:52.248 2 DEBUG oslo_concurrency.lockutils [None req-5eb84330-2455-4bd9-b020-4af0e506d676 ac3a468b64f844f9b4952888c88de098 e2eb1e7f2a0a4826a63765b30279f43c - - default default] Acquiring lock "48c2872f-fffe-4ce0-8e59-8f5c86e5b07b" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:405
Oct 09 16:12:52 compute-0 nova_compute[117331]: 2025-10-09 16:12:52.248 2 DEBUG oslo_concurrency.lockutils [None req-5eb84330-2455-4bd9-b020-4af0e506d676 ac3a468b64f844f9b4952888c88de098 e2eb1e7f2a0a4826a63765b30279f43c - - default default] Lock "48c2872f-fffe-4ce0-8e59-8f5c86e5b07b" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:410
Oct 09 16:12:52 compute-0 nova_compute[117331]: 2025-10-09 16:12:52.248 2 DEBUG oslo_concurrency.lockutils [None req-5eb84330-2455-4bd9-b020-4af0e506d676 ac3a468b64f844f9b4952888c88de098 e2eb1e7f2a0a4826a63765b30279f43c - - default default] Acquiring lock "48c2872f-fffe-4ce0-8e59-8f5c86e5b07b-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:405
Oct 09 16:12:52 compute-0 nova_compute[117331]: 2025-10-09 16:12:52.249 2 DEBUG oslo_concurrency.lockutils [None req-5eb84330-2455-4bd9-b020-4af0e506d676 ac3a468b64f844f9b4952888c88de098 e2eb1e7f2a0a4826a63765b30279f43c - - default default] Lock "48c2872f-fffe-4ce0-8e59-8f5c86e5b07b-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:410
Oct 09 16:12:52 compute-0 nova_compute[117331]: 2025-10-09 16:12:52.249 2 DEBUG oslo_concurrency.lockutils [None req-5eb84330-2455-4bd9-b020-4af0e506d676 ac3a468b64f844f9b4952888c88de098 e2eb1e7f2a0a4826a63765b30279f43c - - default default] Lock "48c2872f-fffe-4ce0-8e59-8f5c86e5b07b-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:424
Oct 09 16:12:52 compute-0 nova_compute[117331]: 2025-10-09 16:12:52.306 2 INFO nova.compute.manager [None req-5eb84330-2455-4bd9-b020-4af0e506d676 ac3a468b64f844f9b4952888c88de098 e2eb1e7f2a0a4826a63765b30279f43c - - default default] [instance: 48c2872f-fffe-4ce0-8e59-8f5c86e5b07b] Terminating instance
Oct 09 16:12:52 compute-0 nova_compute[117331]: 2025-10-09 16:12:52.884 2 DEBUG nova.compute.manager [None req-5eb84330-2455-4bd9-b020-4af0e506d676 ac3a468b64f844f9b4952888c88de098 e2eb1e7f2a0a4826a63765b30279f43c - - default default] [instance: 48c2872f-fffe-4ce0-8e59-8f5c86e5b07b] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.12/site-packages/nova/compute/manager.py:3197
Oct 09 16:12:52 compute-0 kernel: tap9755a423-13 (unregistering): left promiscuous mode
Oct 09 16:12:52 compute-0 NetworkManager[1028]: <info>  [1760026372.9097] device (tap9755a423-13): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Oct 09 16:12:52 compute-0 nova_compute[117331]: 2025-10-09 16:12:52.918 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 09 16:12:52 compute-0 ovn_controller[19752]: 2025-10-09T16:12:52Z|00054|binding|INFO|Releasing lport 9755a423-13f0-4f3a-8edd-bcc5b5190cfe from this chassis (sb_readonly=0)
Oct 09 16:12:52 compute-0 ovn_controller[19752]: 2025-10-09T16:12:52Z|00055|binding|INFO|Setting lport 9755a423-13f0-4f3a-8edd-bcc5b5190cfe down in Southbound
Oct 09 16:12:52 compute-0 ovn_controller[19752]: 2025-10-09T16:12:52Z|00056|binding|INFO|Removing iface tap9755a423-13 ovn-installed in OVS
Oct 09 16:12:52 compute-0 nova_compute[117331]: 2025-10-09 16:12:52.921 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 09 16:12:52 compute-0 ovn_metadata_agent[28608]: 2025-10-09 16:12:52.936 28613 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:2d:5b:b2 10.100.0.3'], port_security=['fa:16:3e:2d:5b:b2 10.100.0.3'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.3/28', 'neutron:device_id': '48c2872f-fffe-4ce0-8e59-8f5c86e5b07b', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-19256da9-441e-477f-8195-1eea19c08437', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'e2eb1e7f2a0a4826a63765b30279f43c', 'neutron:revision_number': '5', 'neutron:security_group_ids': '4c699d7d-c8bc-44a5-90e4-91545cc51580', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=9a1624b0-a5ab-47cc-ba86-f82fec9b1800, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f37b2576840>], logical_port=9755a423-13f0-4f3a-8edd-bcc5b5190cfe) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f37b2576840>]) matches /usr/lib/python3.12/site-packages/ovsdbapp/backend/ovs_idl/event.py:55
Oct 09 16:12:52 compute-0 ovn_metadata_agent[28608]: 2025-10-09 16:12:52.937 28613 INFO neutron.agent.ovn.metadata.agent [-] Port 9755a423-13f0-4f3a-8edd-bcc5b5190cfe in datapath 19256da9-441e-477f-8195-1eea19c08437 unbound from our chassis
Oct 09 16:12:52 compute-0 ovn_metadata_agent[28608]: 2025-10-09 16:12:52.938 28613 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 19256da9-441e-477f-8195-1eea19c08437, tearing the namespace down if needed _get_provision_params /usr/lib/python3.12/site-packages/neutron/agent/ovn/metadata/agent.py:756
Oct 09 16:12:52 compute-0 ovn_metadata_agent[28608]: 2025-10-09 16:12:52.939 139687 DEBUG oslo.privsep.daemon [-] privsep: reply[550580fe-1772-4c46-a437-e4fa642deed6]: (4, True) _call_back /usr/lib/python3.12/site-packages/oslo_privsep/daemon.py:515
Oct 09 16:12:52 compute-0 ovn_metadata_agent[28608]: 2025-10-09 16:12:52.940 28613 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-19256da9-441e-477f-8195-1eea19c08437 namespace which is not needed anymore
Oct 09 16:12:52 compute-0 nova_compute[117331]: 2025-10-09 16:12:52.942 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 09 16:12:52 compute-0 systemd[1]: machine-qemu\x2d2\x2dinstance\x2d00000002.scope: Deactivated successfully.
Oct 09 16:12:52 compute-0 systemd[1]: machine-qemu\x2d2\x2dinstance\x2d00000002.scope: Consumed 12.792s CPU time.
Oct 09 16:12:52 compute-0 systemd-machined[77487]: Machine qemu-2-instance-00000002 terminated.
Oct 09 16:12:52 compute-0 podman[141972]: 2025-10-09 16:12:52.993501686 +0000 UTC m=+0.054016216 container health_status 0e966c7a283689daf23c5db5017f23f685ee836319240188a9f70ddd246563c5 (image=38.102.83.66:5001/podified-master-centos10/openstack-neutron-metadata-agent-ovn:watcher_latest, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 10 Base Image, io.buildah.version=1.41.4, org.label-schema.build-date=20251007, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': '38.102.83.66:5001/podified-master-centos10/openstack-neutron-metadata-agent-ovn:watcher_latest', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_managed=true, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, tcib_build_tag=watcher_latest)
Oct 09 16:12:53 compute-0 podman[141975]: 2025-10-09 16:12:53.002131271 +0000 UTC m=+0.057809056 container health_status 8227728d23f374cb9528be6ddf01dff38d8c792912a17b0d137932a18390b820 (image=38.102.83.66:5001/podified-master-centos10/openstack-iscsid:watcher_latest, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, config_id=iscsid, container_name=iscsid, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.41.4, managed_by=edpm_ansible, org.label-schema.build-date=20251007, tcib_build_tag=watcher_latest, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': '38.102.83.66:5001/podified-master-centos10/openstack-iscsid:watcher_latest', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.schema-version=1.0)
Oct 09 16:12:53 compute-0 neutron-haproxy-ovnmeta-19256da9-441e-477f-8195-1eea19c08437[141737]: [NOTICE]   (141742) : haproxy version is 3.0.5-8e879a5
Oct 09 16:12:53 compute-0 neutron-haproxy-ovnmeta-19256da9-441e-477f-8195-1eea19c08437[141737]: [NOTICE]   (141742) : path to executable is /usr/sbin/haproxy
Oct 09 16:12:53 compute-0 neutron-haproxy-ovnmeta-19256da9-441e-477f-8195-1eea19c08437[141737]: [WARNING]  (141742) : Exiting Master process...
Oct 09 16:12:53 compute-0 podman[142029]: 2025-10-09 16:12:53.044309338 +0000 UTC m=+0.026460326 container kill cc13eb6b08682899a26dbe7eabbcd2ea11b753a1d58e228cf05d6f6b4963a01e (image=38.102.83.66:5001/podified-master-centos10/openstack-neutron-metadata-agent-ovn:watcher_latest, name=neutron-haproxy-ovnmeta-19256da9-441e-477f-8195-1eea19c08437, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251007, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 10 Base Image, io.buildah.version=1.41.4, org.label-schema.license=GPLv2, tcib_build_tag=watcher_latest, tcib_managed=true)
Oct 09 16:12:53 compute-0 neutron-haproxy-ovnmeta-19256da9-441e-477f-8195-1eea19c08437[141737]: [ALERT]    (141742) : Current worker (141744) exited with code 143 (Terminated)
Oct 09 16:12:53 compute-0 neutron-haproxy-ovnmeta-19256da9-441e-477f-8195-1eea19c08437[141737]: [WARNING]  (141742) : All workers exited. Exiting... (0)
Oct 09 16:12:53 compute-0 systemd[1]: libpod-cc13eb6b08682899a26dbe7eabbcd2ea11b753a1d58e228cf05d6f6b4963a01e.scope: Deactivated successfully.
Oct 09 16:12:53 compute-0 podman[142045]: 2025-10-09 16:12:53.08854039 +0000 UTC m=+0.026068332 container died cc13eb6b08682899a26dbe7eabbcd2ea11b753a1d58e228cf05d6f6b4963a01e (image=38.102.83.66:5001/podified-master-centos10/openstack-neutron-metadata-agent-ovn:watcher_latest, name=neutron-haproxy-ovnmeta-19256da9-441e-477f-8195-1eea19c08437, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 10 Base Image, tcib_build_tag=watcher_latest, io.buildah.version=1.41.4, org.label-schema.build-date=20251007, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true)
Oct 09 16:12:53 compute-0 kernel: tap9755a423-13: entered promiscuous mode
Oct 09 16:12:53 compute-0 NetworkManager[1028]: <info>  [1760026373.1042] manager: (tap9755a423-13): new Tun device (/org/freedesktop/NetworkManager/Devices/30)
Oct 09 16:12:53 compute-0 systemd-udevd[141999]: Network interface NamePolicy= disabled on kernel command line.
Oct 09 16:12:53 compute-0 nova_compute[117331]: 2025-10-09 16:12:53.105 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 09 16:12:53 compute-0 ovn_controller[19752]: 2025-10-09T16:12:53Z|00057|binding|INFO|Claiming lport 9755a423-13f0-4f3a-8edd-bcc5b5190cfe for this chassis.
Oct 09 16:12:53 compute-0 ovn_controller[19752]: 2025-10-09T16:12:53Z|00058|binding|INFO|9755a423-13f0-4f3a-8edd-bcc5b5190cfe: Claiming fa:16:3e:2d:5b:b2 10.100.0.3
Oct 09 16:12:53 compute-0 kernel: tap9755a423-13 (unregistering): left promiscuous mode
Oct 09 16:12:53 compute-0 ovn_metadata_agent[28608]: 2025-10-09 16:12:53.120 28613 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:2d:5b:b2 10.100.0.3'], port_security=['fa:16:3e:2d:5b:b2 10.100.0.3'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.3/28', 'neutron:device_id': '48c2872f-fffe-4ce0-8e59-8f5c86e5b07b', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-19256da9-441e-477f-8195-1eea19c08437', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'e2eb1e7f2a0a4826a63765b30279f43c', 'neutron:revision_number': '5', 'neutron:security_group_ids': '4c699d7d-c8bc-44a5-90e4-91545cc51580', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=9a1624b0-a5ab-47cc-ba86-f82fec9b1800, chassis=[<ovs.db.idl.Row object at 0x7f37b2576840>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f37b2576840>], logical_port=9755a423-13f0-4f3a-8edd-bcc5b5190cfe) old=Port_Binding(chassis=[]) matches /usr/lib/python3.12/site-packages/ovsdbapp/backend/ovs_idl/event.py:55
Oct 09 16:12:53 compute-0 ovn_controller[19752]: 2025-10-09T16:12:53Z|00059|binding|INFO|Setting lport 9755a423-13f0-4f3a-8edd-bcc5b5190cfe ovn-installed in OVS
Oct 09 16:12:53 compute-0 ovn_controller[19752]: 2025-10-09T16:12:53Z|00060|binding|INFO|Setting lport 9755a423-13f0-4f3a-8edd-bcc5b5190cfe up in Southbound
Oct 09 16:12:53 compute-0 ovn_controller[19752]: 2025-10-09T16:12:53Z|00061|binding|INFO|Releasing lport 9755a423-13f0-4f3a-8edd-bcc5b5190cfe from this chassis (sb_readonly=1)
Oct 09 16:12:53 compute-0 ovn_controller[19752]: 2025-10-09T16:12:53Z|00062|if_status|INFO|Not setting lport 9755a423-13f0-4f3a-8edd-bcc5b5190cfe down as sb is readonly
Oct 09 16:12:53 compute-0 nova_compute[117331]: 2025-10-09 16:12:53.126 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 09 16:12:53 compute-0 ovn_controller[19752]: 2025-10-09T16:12:53Z|00063|binding|INFO|Removing iface tap9755a423-13 ovn-installed in OVS
Oct 09 16:12:53 compute-0 ovn_controller[19752]: 2025-10-09T16:12:53Z|00064|binding|INFO|Releasing lport 9755a423-13f0-4f3a-8edd-bcc5b5190cfe from this chassis (sb_readonly=0)
Oct 09 16:12:53 compute-0 ovn_controller[19752]: 2025-10-09T16:12:53Z|00065|binding|INFO|Setting lport 9755a423-13f0-4f3a-8edd-bcc5b5190cfe down in Southbound
Oct 09 16:12:53 compute-0 nova_compute[117331]: 2025-10-09 16:12:53.138 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 09 16:12:53 compute-0 ovn_metadata_agent[28608]: 2025-10-09 16:12:53.150 28613 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:2d:5b:b2 10.100.0.3'], port_security=['fa:16:3e:2d:5b:b2 10.100.0.3'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.3/28', 'neutron:device_id': '48c2872f-fffe-4ce0-8e59-8f5c86e5b07b', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-19256da9-441e-477f-8195-1eea19c08437', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'e2eb1e7f2a0a4826a63765b30279f43c', 'neutron:revision_number': '5', 'neutron:security_group_ids': '4c699d7d-c8bc-44a5-90e4-91545cc51580', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=9a1624b0-a5ab-47cc-ba86-f82fec9b1800, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f37b2576840>], logical_port=9755a423-13f0-4f3a-8edd-bcc5b5190cfe) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f37b2576840>]) matches /usr/lib/python3.12/site-packages/ovsdbapp/backend/ovs_idl/event.py:55
Oct 09 16:12:53 compute-0 nova_compute[117331]: 2025-10-09 16:12:53.151 2 INFO nova.virt.libvirt.driver [-] [instance: 48c2872f-fffe-4ce0-8e59-8f5c86e5b07b] Instance destroyed successfully.
Oct 09 16:12:53 compute-0 nova_compute[117331]: 2025-10-09 16:12:53.151 2 DEBUG nova.objects.instance [None req-5eb84330-2455-4bd9-b020-4af0e506d676 ac3a468b64f844f9b4952888c88de098 e2eb1e7f2a0a4826a63765b30279f43c - - default default] Lazy-loading 'resources' on Instance uuid 48c2872f-fffe-4ce0-8e59-8f5c86e5b07b obj_load_attr /usr/lib/python3.12/site-packages/nova/objects/instance.py:1141
Oct 09 16:12:53 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-cc13eb6b08682899a26dbe7eabbcd2ea11b753a1d58e228cf05d6f6b4963a01e-userdata-shm.mount: Deactivated successfully.
Oct 09 16:12:53 compute-0 systemd[1]: var-lib-containers-storage-overlay-31189975f94aac5b2dda5b9f34737bc5d29d7d0b56a11e4bf94331bd916e7f34-merged.mount: Deactivated successfully.
Oct 09 16:12:53 compute-0 podman[142045]: 2025-10-09 16:12:53.26486474 +0000 UTC m=+0.202392662 container cleanup cc13eb6b08682899a26dbe7eabbcd2ea11b753a1d58e228cf05d6f6b4963a01e (image=38.102.83.66:5001/podified-master-centos10/openstack-neutron-metadata-agent-ovn:watcher_latest, name=neutron-haproxy-ovnmeta-19256da9-441e-477f-8195-1eea19c08437, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=watcher_latest, tcib_managed=true, io.buildah.version=1.41.4, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.build-date=20251007)
Oct 09 16:12:53 compute-0 systemd[1]: libpod-conmon-cc13eb6b08682899a26dbe7eabbcd2ea11b753a1d58e228cf05d6f6b4963a01e.scope: Deactivated successfully.
Oct 09 16:12:53 compute-0 podman[142057]: 2025-10-09 16:12:53.40326471 +0000 UTC m=+0.314092480 container remove cc13eb6b08682899a26dbe7eabbcd2ea11b753a1d58e228cf05d6f6b4963a01e (image=38.102.83.66:5001/podified-master-centos10/openstack-neutron-metadata-agent-ovn:watcher_latest, name=neutron-haproxy-ovnmeta-19256da9-441e-477f-8195-1eea19c08437, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.41.4, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251007, org.label-schema.name=CentOS Stream 10 Base Image, tcib_build_tag=watcher_latest)
Oct 09 16:12:53 compute-0 ovn_metadata_agent[28608]: 2025-10-09 16:12:53.412 139687 DEBUG oslo.privsep.daemon [-] privsep: reply[7e279320-6090-490f-b32e-d035a7c554e7]: (4, ("Thu Oct  9 04:12:53 PM UTC 2025 Sending signal '15' to neutron-haproxy-ovnmeta-19256da9-441e-477f-8195-1eea19c08437 (cc13eb6b08682899a26dbe7eabbcd2ea11b753a1d58e228cf05d6f6b4963a01e)\ncc13eb6b08682899a26dbe7eabbcd2ea11b753a1d58e228cf05d6f6b4963a01e\nThu Oct  9 04:12:53 PM UTC 2025 Deleting container neutron-haproxy-ovnmeta-19256da9-441e-477f-8195-1eea19c08437 (cc13eb6b08682899a26dbe7eabbcd2ea11b753a1d58e228cf05d6f6b4963a01e)\ncc13eb6b08682899a26dbe7eabbcd2ea11b753a1d58e228cf05d6f6b4963a01e\n", '', 0)) _call_back /usr/lib/python3.12/site-packages/oslo_privsep/daemon.py:515
Oct 09 16:12:53 compute-0 ovn_metadata_agent[28608]: 2025-10-09 16:12:53.413 139687 DEBUG oslo.privsep.daemon [-] privsep: reply[5e127a1a-6a19-4877-8e37-5a92aba20ee9]: (4, None) _call_back /usr/lib/python3.12/site-packages/oslo_privsep/daemon.py:515
Oct 09 16:12:53 compute-0 ovn_metadata_agent[28608]: 2025-10-09 16:12:53.414 28613 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/19256da9-441e-477f-8195-1eea19c08437.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/19256da9-441e-477f-8195-1eea19c08437.pid.haproxy' get_value_from_file /usr/lib/python3.12/site-packages/neutron/agent/linux/utils.py:268
Oct 09 16:12:53 compute-0 ovn_metadata_agent[28608]: 2025-10-09 16:12:53.414 139687 DEBUG oslo.privsep.daemon [-] privsep: reply[0c89d52a-78b1-42c7-9fb5-6e72857b72b5]: (4, None) _call_back /usr/lib/python3.12/site-packages/oslo_privsep/daemon.py:515
Oct 09 16:12:53 compute-0 ovn_metadata_agent[28608]: 2025-10-09 16:12:53.415 28613 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap19256da9-40, bridge=None, if_exists=True) do_commit /usr/lib/python3.12/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 09 16:12:53 compute-0 nova_compute[117331]: 2025-10-09 16:12:53.417 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 09 16:12:53 compute-0 kernel: tap19256da9-40: left promiscuous mode
Oct 09 16:12:53 compute-0 nova_compute[117331]: 2025-10-09 16:12:53.434 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 09 16:12:53 compute-0 ovn_metadata_agent[28608]: 2025-10-09 16:12:53.436 139687 DEBUG oslo.privsep.daemon [-] privsep: reply[b64df19b-c90f-409c-8d9f-53e957b71ef7]: (4, True) _call_back /usr/lib/python3.12/site-packages/oslo_privsep/daemon.py:515
Oct 09 16:12:53 compute-0 ovn_metadata_agent[28608]: 2025-10-09 16:12:53.462 139687 DEBUG oslo.privsep.daemon [-] privsep: reply[690cef07-2337-4061-a4ea-2d1983c8de39]: (4, None) _call_back /usr/lib/python3.12/site-packages/oslo_privsep/daemon.py:515
Oct 09 16:12:53 compute-0 ovn_metadata_agent[28608]: 2025-10-09 16:12:53.464 139687 DEBUG oslo.privsep.daemon [-] privsep: reply[3e057a6b-e691-4847-97c3-0a95dc4310fe]: (4, True) _call_back /usr/lib/python3.12/site-packages/oslo_privsep/daemon.py:515
Oct 09 16:12:53 compute-0 ovn_metadata_agent[28608]: 2025-10-09 16:12:53.481 139687 DEBUG oslo.privsep.daemon [-] privsep: reply[cc37ea6f-bd6d-44ad-879b-e5ee7ee5af60]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 127323, 'reachable_time': 42136, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 142091, 'error': None, 'target': 'ovnmeta-19256da9-441e-477f-8195-1eea19c08437', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.12/site-packages/oslo_privsep/daemon.py:515
Oct 09 16:12:53 compute-0 ovn_metadata_agent[28608]: 2025-10-09 16:12:53.485 28727 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-19256da9-441e-477f-8195-1eea19c08437 deleted. remove_netns /usr/lib/python3.12/site-packages/neutron/privileged/agent/linux/ip_lib.py:603
Oct 09 16:12:53 compute-0 ovn_metadata_agent[28608]: 2025-10-09 16:12:53.485 28727 DEBUG oslo.privsep.daemon [-] privsep: reply[5702e47a-6628-405e-ac37-47f8b4a6572a]: (4, None) _call_back /usr/lib/python3.12/site-packages/oslo_privsep/daemon.py:515
Oct 09 16:12:53 compute-0 ovn_metadata_agent[28608]: 2025-10-09 16:12:53.486 28613 INFO neutron.agent.ovn.metadata.agent [-] Port 9755a423-13f0-4f3a-8edd-bcc5b5190cfe in datapath 19256da9-441e-477f-8195-1eea19c08437 unbound from our chassis
Oct 09 16:12:53 compute-0 systemd[1]: run-netns-ovnmeta\x2d19256da9\x2d441e\x2d477f\x2d8195\x2d1eea19c08437.mount: Deactivated successfully.
Oct 09 16:12:53 compute-0 ovn_metadata_agent[28608]: 2025-10-09 16:12:53.487 28613 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 19256da9-441e-477f-8195-1eea19c08437, tearing the namespace down if needed _get_provision_params /usr/lib/python3.12/site-packages/neutron/agent/ovn/metadata/agent.py:756
Oct 09 16:12:53 compute-0 ovn_metadata_agent[28608]: 2025-10-09 16:12:53.488 139687 DEBUG oslo.privsep.daemon [-] privsep: reply[17421183-64dc-4dd8-b11d-7ee575a9be09]: (4, False) _call_back /usr/lib/python3.12/site-packages/oslo_privsep/daemon.py:515
Oct 09 16:12:53 compute-0 ovn_metadata_agent[28608]: 2025-10-09 16:12:53.489 28613 INFO neutron.agent.ovn.metadata.agent [-] Port 9755a423-13f0-4f3a-8edd-bcc5b5190cfe in datapath 19256da9-441e-477f-8195-1eea19c08437 unbound from our chassis
Oct 09 16:12:53 compute-0 ovn_metadata_agent[28608]: 2025-10-09 16:12:53.489 28613 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 19256da9-441e-477f-8195-1eea19c08437, tearing the namespace down if needed _get_provision_params /usr/lib/python3.12/site-packages/neutron/agent/ovn/metadata/agent.py:756
Oct 09 16:12:53 compute-0 ovn_metadata_agent[28608]: 2025-10-09 16:12:53.490 139687 DEBUG oslo.privsep.daemon [-] privsep: reply[6577ac2e-5081-4607-bd60-27ad8a50dcc9]: (4, False) _call_back /usr/lib/python3.12/site-packages/oslo_privsep/daemon.py:515
Oct 09 16:12:53 compute-0 nova_compute[117331]: 2025-10-09 16:12:53.561 2 DEBUG nova.compute.manager [req-4f069ce8-3bf4-40af-97ea-f066d74cc52f req-2c46428c-084e-4e46-8c92-b8dc921ae4ce ee15961ad2be4bada6c106ac4f69f422 c076c42271004e7aa86e84416e33f826 - - default default] [instance: 48c2872f-fffe-4ce0-8e59-8f5c86e5b07b] Received event network-vif-unplugged-9755a423-13f0-4f3a-8edd-bcc5b5190cfe external_instance_event /usr/lib/python3.12/site-packages/nova/compute/manager.py:11812
Oct 09 16:12:53 compute-0 nova_compute[117331]: 2025-10-09 16:12:53.562 2 DEBUG oslo_concurrency.lockutils [req-4f069ce8-3bf4-40af-97ea-f066d74cc52f req-2c46428c-084e-4e46-8c92-b8dc921ae4ce ee15961ad2be4bada6c106ac4f69f422 c076c42271004e7aa86e84416e33f826 - - default default] Acquiring lock "48c2872f-fffe-4ce0-8e59-8f5c86e5b07b-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:405
Oct 09 16:12:53 compute-0 nova_compute[117331]: 2025-10-09 16:12:53.562 2 DEBUG oslo_concurrency.lockutils [req-4f069ce8-3bf4-40af-97ea-f066d74cc52f req-2c46428c-084e-4e46-8c92-b8dc921ae4ce ee15961ad2be4bada6c106ac4f69f422 c076c42271004e7aa86e84416e33f826 - - default default] Lock "48c2872f-fffe-4ce0-8e59-8f5c86e5b07b-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:410
Oct 09 16:12:53 compute-0 nova_compute[117331]: 2025-10-09 16:12:53.562 2 DEBUG oslo_concurrency.lockutils [req-4f069ce8-3bf4-40af-97ea-f066d74cc52f req-2c46428c-084e-4e46-8c92-b8dc921ae4ce ee15961ad2be4bada6c106ac4f69f422 c076c42271004e7aa86e84416e33f826 - - default default] Lock "48c2872f-fffe-4ce0-8e59-8f5c86e5b07b-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:424
Oct 09 16:12:53 compute-0 nova_compute[117331]: 2025-10-09 16:12:53.562 2 DEBUG nova.compute.manager [req-4f069ce8-3bf4-40af-97ea-f066d74cc52f req-2c46428c-084e-4e46-8c92-b8dc921ae4ce ee15961ad2be4bada6c106ac4f69f422 c076c42271004e7aa86e84416e33f826 - - default default] [instance: 48c2872f-fffe-4ce0-8e59-8f5c86e5b07b] No waiting events found dispatching network-vif-unplugged-9755a423-13f0-4f3a-8edd-bcc5b5190cfe pop_instance_event /usr/lib/python3.12/site-packages/nova/compute/manager.py:344
Oct 09 16:12:53 compute-0 nova_compute[117331]: 2025-10-09 16:12:53.563 2 DEBUG nova.compute.manager [req-4f069ce8-3bf4-40af-97ea-f066d74cc52f req-2c46428c-084e-4e46-8c92-b8dc921ae4ce ee15961ad2be4bada6c106ac4f69f422 c076c42271004e7aa86e84416e33f826 - - default default] [instance: 48c2872f-fffe-4ce0-8e59-8f5c86e5b07b] Received event network-vif-unplugged-9755a423-13f0-4f3a-8edd-bcc5b5190cfe for instance with task_state deleting. _process_instance_event /usr/lib/python3.12/site-packages/nova/compute/manager.py:11590
Oct 09 16:12:53 compute-0 nova_compute[117331]: 2025-10-09 16:12:53.659 2 DEBUG nova.virt.libvirt.vif [None req-5eb84330-2455-4bd9-b020-4af0e506d676 ac3a468b64f844f9b4952888c88de098 e2eb1e7f2a0a4826a63765b30279f43c - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,compute_id=1,config_drive='True',created_at=2025-10-09T16:11:54Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description=None,display_name='tempest-TestDataModel-server-134861716',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testdatamodel-server-134861716',id=2,image_ref='b7d6e0af-25e4-4227-9dc6-43143898ceee',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-10-09T16:12:11Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='e2eb1e7f2a0a4826a63765b30279f43c',ramdisk_id='',reservation_id='r-0fq3s8iq',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member,manager,admin',image_base_image_ref='b7d6e0af-25e4-4227-9dc6-43143898ceee',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestDataModel-868619493',owner_user_name='tempest-TestDataModel-868619493-project-admin'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-10-09T16:12:11Z,user_data=None,user_id='ac3a468b64f844f9b4952888c88de098',uuid=48c2872f-fffe-4ce0-8e59-8f5c86e5b07b,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "9755a423-13f0-4f3a-8edd-bcc5b5190cfe", "address": "fa:16:3e:2d:5b:b2", "network": {"id": "19256da9-441e-477f-8195-1eea19c08437", "bridge": "br-int", "label": "tempest-TestDataModel-78218585-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "be0f7b79aa964c70b49cd8e6a20ecdf5", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap9755a423-13", "ovs_interfaceid": "9755a423-13f0-4f3a-8edd-bcc5b5190cfe", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.12/site-packages/nova/virt/libvirt/vif.py:839
Oct 09 16:12:53 compute-0 nova_compute[117331]: 2025-10-09 16:12:53.659 2 DEBUG nova.network.os_vif_util [None req-5eb84330-2455-4bd9-b020-4af0e506d676 ac3a468b64f844f9b4952888c88de098 e2eb1e7f2a0a4826a63765b30279f43c - - default default] Converting VIF {"id": "9755a423-13f0-4f3a-8edd-bcc5b5190cfe", "address": "fa:16:3e:2d:5b:b2", "network": {"id": "19256da9-441e-477f-8195-1eea19c08437", "bridge": "br-int", "label": "tempest-TestDataModel-78218585-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "be0f7b79aa964c70b49cd8e6a20ecdf5", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap9755a423-13", "ovs_interfaceid": "9755a423-13f0-4f3a-8edd-bcc5b5190cfe", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.12/site-packages/nova/network/os_vif_util.py:511
Oct 09 16:12:53 compute-0 nova_compute[117331]: 2025-10-09 16:12:53.660 2 DEBUG nova.network.os_vif_util [None req-5eb84330-2455-4bd9-b020-4af0e506d676 ac3a468b64f844f9b4952888c88de098 e2eb1e7f2a0a4826a63765b30279f43c - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:2d:5b:b2,bridge_name='br-int',has_traffic_filtering=True,id=9755a423-13f0-4f3a-8edd-bcc5b5190cfe,network=Network(19256da9-441e-477f-8195-1eea19c08437),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap9755a423-13') nova_to_osvif_vif /usr/lib/python3.12/site-packages/nova/network/os_vif_util.py:548
Oct 09 16:12:53 compute-0 nova_compute[117331]: 2025-10-09 16:12:53.661 2 DEBUG os_vif [None req-5eb84330-2455-4bd9-b020-4af0e506d676 ac3a468b64f844f9b4952888c88de098 e2eb1e7f2a0a4826a63765b30279f43c - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:2d:5b:b2,bridge_name='br-int',has_traffic_filtering=True,id=9755a423-13f0-4f3a-8edd-bcc5b5190cfe,network=Network(19256da9-441e-477f-8195-1eea19c08437),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap9755a423-13') unplug /usr/lib/python3.12/site-packages/os_vif/__init__.py:109
Oct 09 16:12:53 compute-0 nova_compute[117331]: 2025-10-09 16:12:53.663 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 22 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 09 16:12:53 compute-0 nova_compute[117331]: 2025-10-09 16:12:53.664 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap9755a423-13, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.12/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 09 16:12:53 compute-0 nova_compute[117331]: 2025-10-09 16:12:53.665 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 09 16:12:53 compute-0 nova_compute[117331]: 2025-10-09 16:12:53.666 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 09 16:12:53 compute-0 nova_compute[117331]: 2025-10-09 16:12:53.666 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 22 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 09 16:12:53 compute-0 nova_compute[117331]: 2025-10-09 16:12:53.667 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbDestroyCommand(_result=None, table=QoS, record=125f0f27-317a-426b-b1ec-88a0474fe47d) do_commit /usr/lib/python3.12/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 09 16:12:53 compute-0 nova_compute[117331]: 2025-10-09 16:12:53.667 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 09 16:12:53 compute-0 nova_compute[117331]: 2025-10-09 16:12:53.668 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 09 16:12:53 compute-0 nova_compute[117331]: 2025-10-09 16:12:53.670 2 INFO os_vif [None req-5eb84330-2455-4bd9-b020-4af0e506d676 ac3a468b64f844f9b4952888c88de098 e2eb1e7f2a0a4826a63765b30279f43c - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:2d:5b:b2,bridge_name='br-int',has_traffic_filtering=True,id=9755a423-13f0-4f3a-8edd-bcc5b5190cfe,network=Network(19256da9-441e-477f-8195-1eea19c08437),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap9755a423-13')
Oct 09 16:12:53 compute-0 nova_compute[117331]: 2025-10-09 16:12:53.671 2 INFO nova.virt.libvirt.driver [None req-5eb84330-2455-4bd9-b020-4af0e506d676 ac3a468b64f844f9b4952888c88de098 e2eb1e7f2a0a4826a63765b30279f43c - - default default] [instance: 48c2872f-fffe-4ce0-8e59-8f5c86e5b07b] Deleting instance files /var/lib/nova/instances/48c2872f-fffe-4ce0-8e59-8f5c86e5b07b_del
Oct 09 16:12:53 compute-0 nova_compute[117331]: 2025-10-09 16:12:53.671 2 INFO nova.virt.libvirt.driver [None req-5eb84330-2455-4bd9-b020-4af0e506d676 ac3a468b64f844f9b4952888c88de098 e2eb1e7f2a0a4826a63765b30279f43c - - default default] [instance: 48c2872f-fffe-4ce0-8e59-8f5c86e5b07b] Deletion of /var/lib/nova/instances/48c2872f-fffe-4ce0-8e59-8f5c86e5b07b_del complete
Oct 09 16:12:53 compute-0 sshd-session[141969]: Failed password for invalid user es from 134.199.199.215 port 43266 ssh2
Oct 09 16:12:54 compute-0 nova_compute[117331]: 2025-10-09 16:12:54.220 2 INFO nova.compute.manager [None req-5eb84330-2455-4bd9-b020-4af0e506d676 ac3a468b64f844f9b4952888c88de098 e2eb1e7f2a0a4826a63765b30279f43c - - default default] [instance: 48c2872f-fffe-4ce0-8e59-8f5c86e5b07b] Took 1.34 seconds to destroy the instance on the hypervisor.
Oct 09 16:12:54 compute-0 nova_compute[117331]: 2025-10-09 16:12:54.221 2 DEBUG oslo.service.backend._eventlet.loopingcall [None req-5eb84330-2455-4bd9-b020-4af0e506d676 ac3a468b64f844f9b4952888c88de098 e2eb1e7f2a0a4826a63765b30279f43c - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.12/site-packages/oslo_service/backend/_eventlet/loopingcall.py:437
Oct 09 16:12:54 compute-0 nova_compute[117331]: 2025-10-09 16:12:54.221 2 DEBUG nova.compute.manager [-] [instance: 48c2872f-fffe-4ce0-8e59-8f5c86e5b07b] Deallocating network for instance _deallocate_network /usr/lib/python3.12/site-packages/nova/compute/manager.py:2324
Oct 09 16:12:54 compute-0 nova_compute[117331]: 2025-10-09 16:12:54.222 2 DEBUG nova.network.neutron [-] [instance: 48c2872f-fffe-4ce0-8e59-8f5c86e5b07b] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.12/site-packages/nova/network/neutron.py:1863
Oct 09 16:12:54 compute-0 nova_compute[117331]: 2025-10-09 16:12:54.222 2 WARNING neutronclient.v2_0.client [-] The python binding code in neutronclient is deprecated in favor of OpenstackSDK, please use that as this will be removed in a future release.
Oct 09 16:12:54 compute-0 nova_compute[117331]: 2025-10-09 16:12:54.306 2 WARNING neutronclient.v2_0.client [-] The python binding code in neutronclient is deprecated in favor of OpenstackSDK, please use that as this will be removed in a future release.
Oct 09 16:12:54 compute-0 sshd-session[142092]: Invalid user username from 134.199.199.215 port 43272
Oct 09 16:12:54 compute-0 sshd-session[142092]: pam_unix(sshd:auth): check pass; user unknown
Oct 09 16:12:54 compute-0 sshd-session[142092]: pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=134.199.199.215
Oct 09 16:12:55 compute-0 nova_compute[117331]: 2025-10-09 16:12:55.269 2 DEBUG nova.network.neutron [-] [instance: 48c2872f-fffe-4ce0-8e59-8f5c86e5b07b] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.12/site-packages/nova/network/neutron.py:116
Oct 09 16:12:55 compute-0 nova_compute[117331]: 2025-10-09 16:12:55.445 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 09 16:12:55 compute-0 sshd-session[141969]: Connection closed by invalid user es 134.199.199.215 port 43266 [preauth]
Oct 09 16:12:55 compute-0 nova_compute[117331]: 2025-10-09 16:12:55.640 2 DEBUG nova.compute.manager [req-e6f3f10f-970d-453b-9d92-2114c561d68e req-fdf977a2-d0a7-4776-bfad-769fb9733649 ee15961ad2be4bada6c106ac4f69f422 c076c42271004e7aa86e84416e33f826 - - default default] [instance: 48c2872f-fffe-4ce0-8e59-8f5c86e5b07b] Received event network-vif-unplugged-9755a423-13f0-4f3a-8edd-bcc5b5190cfe external_instance_event /usr/lib/python3.12/site-packages/nova/compute/manager.py:11812
Oct 09 16:12:55 compute-0 nova_compute[117331]: 2025-10-09 16:12:55.641 2 DEBUG oslo_concurrency.lockutils [req-e6f3f10f-970d-453b-9d92-2114c561d68e req-fdf977a2-d0a7-4776-bfad-769fb9733649 ee15961ad2be4bada6c106ac4f69f422 c076c42271004e7aa86e84416e33f826 - - default default] Acquiring lock "48c2872f-fffe-4ce0-8e59-8f5c86e5b07b-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:405
Oct 09 16:12:55 compute-0 nova_compute[117331]: 2025-10-09 16:12:55.641 2 DEBUG oslo_concurrency.lockutils [req-e6f3f10f-970d-453b-9d92-2114c561d68e req-fdf977a2-d0a7-4776-bfad-769fb9733649 ee15961ad2be4bada6c106ac4f69f422 c076c42271004e7aa86e84416e33f826 - - default default] Lock "48c2872f-fffe-4ce0-8e59-8f5c86e5b07b-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:410
Oct 09 16:12:55 compute-0 nova_compute[117331]: 2025-10-09 16:12:55.641 2 DEBUG oslo_concurrency.lockutils [req-e6f3f10f-970d-453b-9d92-2114c561d68e req-fdf977a2-d0a7-4776-bfad-769fb9733649 ee15961ad2be4bada6c106ac4f69f422 c076c42271004e7aa86e84416e33f826 - - default default] Lock "48c2872f-fffe-4ce0-8e59-8f5c86e5b07b-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:424
Oct 09 16:12:55 compute-0 nova_compute[117331]: 2025-10-09 16:12:55.642 2 DEBUG nova.compute.manager [req-e6f3f10f-970d-453b-9d92-2114c561d68e req-fdf977a2-d0a7-4776-bfad-769fb9733649 ee15961ad2be4bada6c106ac4f69f422 c076c42271004e7aa86e84416e33f826 - - default default] [instance: 48c2872f-fffe-4ce0-8e59-8f5c86e5b07b] No waiting events found dispatching network-vif-unplugged-9755a423-13f0-4f3a-8edd-bcc5b5190cfe pop_instance_event /usr/lib/python3.12/site-packages/nova/compute/manager.py:344
Oct 09 16:12:55 compute-0 nova_compute[117331]: 2025-10-09 16:12:55.642 2 DEBUG nova.compute.manager [req-e6f3f10f-970d-453b-9d92-2114c561d68e req-fdf977a2-d0a7-4776-bfad-769fb9733649 ee15961ad2be4bada6c106ac4f69f422 c076c42271004e7aa86e84416e33f826 - - default default] [instance: 48c2872f-fffe-4ce0-8e59-8f5c86e5b07b] Received event network-vif-unplugged-9755a423-13f0-4f3a-8edd-bcc5b5190cfe for instance with task_state deleting. _process_instance_event /usr/lib/python3.12/site-packages/nova/compute/manager.py:11590
Oct 09 16:12:55 compute-0 nova_compute[117331]: 2025-10-09 16:12:55.642 2 DEBUG nova.compute.manager [req-e6f3f10f-970d-453b-9d92-2114c561d68e req-fdf977a2-d0a7-4776-bfad-769fb9733649 ee15961ad2be4bada6c106ac4f69f422 c076c42271004e7aa86e84416e33f826 - - default default] [instance: 48c2872f-fffe-4ce0-8e59-8f5c86e5b07b] Received event network-vif-plugged-9755a423-13f0-4f3a-8edd-bcc5b5190cfe external_instance_event /usr/lib/python3.12/site-packages/nova/compute/manager.py:11812
Oct 09 16:12:55 compute-0 nova_compute[117331]: 2025-10-09 16:12:55.642 2 DEBUG oslo_concurrency.lockutils [req-e6f3f10f-970d-453b-9d92-2114c561d68e req-fdf977a2-d0a7-4776-bfad-769fb9733649 ee15961ad2be4bada6c106ac4f69f422 c076c42271004e7aa86e84416e33f826 - - default default] Acquiring lock "48c2872f-fffe-4ce0-8e59-8f5c86e5b07b-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:405
Oct 09 16:12:55 compute-0 nova_compute[117331]: 2025-10-09 16:12:55.643 2 DEBUG oslo_concurrency.lockutils [req-e6f3f10f-970d-453b-9d92-2114c561d68e req-fdf977a2-d0a7-4776-bfad-769fb9733649 ee15961ad2be4bada6c106ac4f69f422 c076c42271004e7aa86e84416e33f826 - - default default] Lock "48c2872f-fffe-4ce0-8e59-8f5c86e5b07b-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:410
Oct 09 16:12:55 compute-0 nova_compute[117331]: 2025-10-09 16:12:55.643 2 DEBUG oslo_concurrency.lockutils [req-e6f3f10f-970d-453b-9d92-2114c561d68e req-fdf977a2-d0a7-4776-bfad-769fb9733649 ee15961ad2be4bada6c106ac4f69f422 c076c42271004e7aa86e84416e33f826 - - default default] Lock "48c2872f-fffe-4ce0-8e59-8f5c86e5b07b-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:424
Oct 09 16:12:55 compute-0 nova_compute[117331]: 2025-10-09 16:12:55.643 2 DEBUG nova.compute.manager [req-e6f3f10f-970d-453b-9d92-2114c561d68e req-fdf977a2-d0a7-4776-bfad-769fb9733649 ee15961ad2be4bada6c106ac4f69f422 c076c42271004e7aa86e84416e33f826 - - default default] [instance: 48c2872f-fffe-4ce0-8e59-8f5c86e5b07b] No waiting events found dispatching network-vif-plugged-9755a423-13f0-4f3a-8edd-bcc5b5190cfe pop_instance_event /usr/lib/python3.12/site-packages/nova/compute/manager.py:344
Oct 09 16:12:55 compute-0 nova_compute[117331]: 2025-10-09 16:12:55.643 2 WARNING nova.compute.manager [req-e6f3f10f-970d-453b-9d92-2114c561d68e req-fdf977a2-d0a7-4776-bfad-769fb9733649 ee15961ad2be4bada6c106ac4f69f422 c076c42271004e7aa86e84416e33f826 - - default default] [instance: 48c2872f-fffe-4ce0-8e59-8f5c86e5b07b] Received unexpected event network-vif-plugged-9755a423-13f0-4f3a-8edd-bcc5b5190cfe for instance with vm_state active and task_state deleting.
Oct 09 16:12:55 compute-0 nova_compute[117331]: 2025-10-09 16:12:55.644 2 DEBUG nova.compute.manager [req-e6f3f10f-970d-453b-9d92-2114c561d68e req-fdf977a2-d0a7-4776-bfad-769fb9733649 ee15961ad2be4bada6c106ac4f69f422 c076c42271004e7aa86e84416e33f826 - - default default] [instance: 48c2872f-fffe-4ce0-8e59-8f5c86e5b07b] Received event network-vif-plugged-9755a423-13f0-4f3a-8edd-bcc5b5190cfe external_instance_event /usr/lib/python3.12/site-packages/nova/compute/manager.py:11812
Oct 09 16:12:55 compute-0 nova_compute[117331]: 2025-10-09 16:12:55.644 2 DEBUG oslo_concurrency.lockutils [req-e6f3f10f-970d-453b-9d92-2114c561d68e req-fdf977a2-d0a7-4776-bfad-769fb9733649 ee15961ad2be4bada6c106ac4f69f422 c076c42271004e7aa86e84416e33f826 - - default default] Acquiring lock "48c2872f-fffe-4ce0-8e59-8f5c86e5b07b-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:405
Oct 09 16:12:55 compute-0 nova_compute[117331]: 2025-10-09 16:12:55.644 2 DEBUG oslo_concurrency.lockutils [req-e6f3f10f-970d-453b-9d92-2114c561d68e req-fdf977a2-d0a7-4776-bfad-769fb9733649 ee15961ad2be4bada6c106ac4f69f422 c076c42271004e7aa86e84416e33f826 - - default default] Lock "48c2872f-fffe-4ce0-8e59-8f5c86e5b07b-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:410
Oct 09 16:12:55 compute-0 nova_compute[117331]: 2025-10-09 16:12:55.644 2 DEBUG oslo_concurrency.lockutils [req-e6f3f10f-970d-453b-9d92-2114c561d68e req-fdf977a2-d0a7-4776-bfad-769fb9733649 ee15961ad2be4bada6c106ac4f69f422 c076c42271004e7aa86e84416e33f826 - - default default] Lock "48c2872f-fffe-4ce0-8e59-8f5c86e5b07b-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:424
Oct 09 16:12:55 compute-0 nova_compute[117331]: 2025-10-09 16:12:55.645 2 DEBUG nova.compute.manager [req-e6f3f10f-970d-453b-9d92-2114c561d68e req-fdf977a2-d0a7-4776-bfad-769fb9733649 ee15961ad2be4bada6c106ac4f69f422 c076c42271004e7aa86e84416e33f826 - - default default] [instance: 48c2872f-fffe-4ce0-8e59-8f5c86e5b07b] No waiting events found dispatching network-vif-plugged-9755a423-13f0-4f3a-8edd-bcc5b5190cfe pop_instance_event /usr/lib/python3.12/site-packages/nova/compute/manager.py:344
Oct 09 16:12:55 compute-0 nova_compute[117331]: 2025-10-09 16:12:55.645 2 WARNING nova.compute.manager [req-e6f3f10f-970d-453b-9d92-2114c561d68e req-fdf977a2-d0a7-4776-bfad-769fb9733649 ee15961ad2be4bada6c106ac4f69f422 c076c42271004e7aa86e84416e33f826 - - default default] [instance: 48c2872f-fffe-4ce0-8e59-8f5c86e5b07b] Received unexpected event network-vif-plugged-9755a423-13f0-4f3a-8edd-bcc5b5190cfe for instance with vm_state active and task_state deleting.
Oct 09 16:12:55 compute-0 nova_compute[117331]: 2025-10-09 16:12:55.645 2 DEBUG nova.compute.manager [req-e6f3f10f-970d-453b-9d92-2114c561d68e req-fdf977a2-d0a7-4776-bfad-769fb9733649 ee15961ad2be4bada6c106ac4f69f422 c076c42271004e7aa86e84416e33f826 - - default default] [instance: 48c2872f-fffe-4ce0-8e59-8f5c86e5b07b] Received event network-vif-unplugged-9755a423-13f0-4f3a-8edd-bcc5b5190cfe external_instance_event /usr/lib/python3.12/site-packages/nova/compute/manager.py:11812
Oct 09 16:12:55 compute-0 nova_compute[117331]: 2025-10-09 16:12:55.645 2 DEBUG oslo_concurrency.lockutils [req-e6f3f10f-970d-453b-9d92-2114c561d68e req-fdf977a2-d0a7-4776-bfad-769fb9733649 ee15961ad2be4bada6c106ac4f69f422 c076c42271004e7aa86e84416e33f826 - - default default] Acquiring lock "48c2872f-fffe-4ce0-8e59-8f5c86e5b07b-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:405
Oct 09 16:12:55 compute-0 nova_compute[117331]: 2025-10-09 16:12:55.646 2 DEBUG oslo_concurrency.lockutils [req-e6f3f10f-970d-453b-9d92-2114c561d68e req-fdf977a2-d0a7-4776-bfad-769fb9733649 ee15961ad2be4bada6c106ac4f69f422 c076c42271004e7aa86e84416e33f826 - - default default] Lock "48c2872f-fffe-4ce0-8e59-8f5c86e5b07b-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:410
Oct 09 16:12:55 compute-0 nova_compute[117331]: 2025-10-09 16:12:55.646 2 DEBUG oslo_concurrency.lockutils [req-e6f3f10f-970d-453b-9d92-2114c561d68e req-fdf977a2-d0a7-4776-bfad-769fb9733649 ee15961ad2be4bada6c106ac4f69f422 c076c42271004e7aa86e84416e33f826 - - default default] Lock "48c2872f-fffe-4ce0-8e59-8f5c86e5b07b-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:424
Oct 09 16:12:55 compute-0 nova_compute[117331]: 2025-10-09 16:12:55.646 2 DEBUG nova.compute.manager [req-e6f3f10f-970d-453b-9d92-2114c561d68e req-fdf977a2-d0a7-4776-bfad-769fb9733649 ee15961ad2be4bada6c106ac4f69f422 c076c42271004e7aa86e84416e33f826 - - default default] [instance: 48c2872f-fffe-4ce0-8e59-8f5c86e5b07b] No waiting events found dispatching network-vif-unplugged-9755a423-13f0-4f3a-8edd-bcc5b5190cfe pop_instance_event /usr/lib/python3.12/site-packages/nova/compute/manager.py:344
Oct 09 16:12:55 compute-0 nova_compute[117331]: 2025-10-09 16:12:55.646 2 DEBUG nova.compute.manager [req-e6f3f10f-970d-453b-9d92-2114c561d68e req-fdf977a2-d0a7-4776-bfad-769fb9733649 ee15961ad2be4bada6c106ac4f69f422 c076c42271004e7aa86e84416e33f826 - - default default] [instance: 48c2872f-fffe-4ce0-8e59-8f5c86e5b07b] Received event network-vif-unplugged-9755a423-13f0-4f3a-8edd-bcc5b5190cfe for instance with task_state deleting. _process_instance_event /usr/lib/python3.12/site-packages/nova/compute/manager.py:11590
Oct 09 16:12:55 compute-0 nova_compute[117331]: 2025-10-09 16:12:55.647 2 DEBUG nova.compute.manager [req-e6f3f10f-970d-453b-9d92-2114c561d68e req-fdf977a2-d0a7-4776-bfad-769fb9733649 ee15961ad2be4bada6c106ac4f69f422 c076c42271004e7aa86e84416e33f826 - - default default] [instance: 48c2872f-fffe-4ce0-8e59-8f5c86e5b07b] Received event network-vif-unplugged-9755a423-13f0-4f3a-8edd-bcc5b5190cfe external_instance_event /usr/lib/python3.12/site-packages/nova/compute/manager.py:11812
Oct 09 16:12:55 compute-0 nova_compute[117331]: 2025-10-09 16:12:55.647 2 DEBUG oslo_concurrency.lockutils [req-e6f3f10f-970d-453b-9d92-2114c561d68e req-fdf977a2-d0a7-4776-bfad-769fb9733649 ee15961ad2be4bada6c106ac4f69f422 c076c42271004e7aa86e84416e33f826 - - default default] Acquiring lock "48c2872f-fffe-4ce0-8e59-8f5c86e5b07b-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:405
Oct 09 16:12:55 compute-0 nova_compute[117331]: 2025-10-09 16:12:55.647 2 DEBUG oslo_concurrency.lockutils [req-e6f3f10f-970d-453b-9d92-2114c561d68e req-fdf977a2-d0a7-4776-bfad-769fb9733649 ee15961ad2be4bada6c106ac4f69f422 c076c42271004e7aa86e84416e33f826 - - default default] Lock "48c2872f-fffe-4ce0-8e59-8f5c86e5b07b-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:410
Oct 09 16:12:55 compute-0 nova_compute[117331]: 2025-10-09 16:12:55.647 2 DEBUG oslo_concurrency.lockutils [req-e6f3f10f-970d-453b-9d92-2114c561d68e req-fdf977a2-d0a7-4776-bfad-769fb9733649 ee15961ad2be4bada6c106ac4f69f422 c076c42271004e7aa86e84416e33f826 - - default default] Lock "48c2872f-fffe-4ce0-8e59-8f5c86e5b07b-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:424
Oct 09 16:12:55 compute-0 nova_compute[117331]: 2025-10-09 16:12:55.648 2 DEBUG nova.compute.manager [req-e6f3f10f-970d-453b-9d92-2114c561d68e req-fdf977a2-d0a7-4776-bfad-769fb9733649 ee15961ad2be4bada6c106ac4f69f422 c076c42271004e7aa86e84416e33f826 - - default default] [instance: 48c2872f-fffe-4ce0-8e59-8f5c86e5b07b] No waiting events found dispatching network-vif-unplugged-9755a423-13f0-4f3a-8edd-bcc5b5190cfe pop_instance_event /usr/lib/python3.12/site-packages/nova/compute/manager.py:344
Oct 09 16:12:55 compute-0 nova_compute[117331]: 2025-10-09 16:12:55.648 2 DEBUG nova.compute.manager [req-e6f3f10f-970d-453b-9d92-2114c561d68e req-fdf977a2-d0a7-4776-bfad-769fb9733649 ee15961ad2be4bada6c106ac4f69f422 c076c42271004e7aa86e84416e33f826 - - default default] [instance: 48c2872f-fffe-4ce0-8e59-8f5c86e5b07b] Received event network-vif-unplugged-9755a423-13f0-4f3a-8edd-bcc5b5190cfe for instance with task_state deleting. _process_instance_event /usr/lib/python3.12/site-packages/nova/compute/manager.py:11590
Oct 09 16:12:55 compute-0 nova_compute[117331]: 2025-10-09 16:12:55.648 2 DEBUG nova.compute.manager [req-e6f3f10f-970d-453b-9d92-2114c561d68e req-fdf977a2-d0a7-4776-bfad-769fb9733649 ee15961ad2be4bada6c106ac4f69f422 c076c42271004e7aa86e84416e33f826 - - default default] [instance: 48c2872f-fffe-4ce0-8e59-8f5c86e5b07b] Received event network-vif-deleted-9755a423-13f0-4f3a-8edd-bcc5b5190cfe external_instance_event /usr/lib/python3.12/site-packages/nova/compute/manager.py:11812
Oct 09 16:12:55 compute-0 nova_compute[117331]: 2025-10-09 16:12:55.885 2 INFO nova.compute.manager [-] [instance: 48c2872f-fffe-4ce0-8e59-8f5c86e5b07b] Took 1.66 seconds to deallocate network for instance.
Oct 09 16:12:55 compute-0 sshd-session[142092]: Failed password for invalid user username from 134.199.199.215 port 43272 ssh2
Oct 09 16:12:56 compute-0 nova_compute[117331]: 2025-10-09 16:12:56.620 2 DEBUG oslo_concurrency.lockutils [None req-5eb84330-2455-4bd9-b020-4af0e506d676 ac3a468b64f844f9b4952888c88de098 e2eb1e7f2a0a4826a63765b30279f43c - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:405
Oct 09 16:12:56 compute-0 nova_compute[117331]: 2025-10-09 16:12:56.621 2 DEBUG oslo_concurrency.lockutils [None req-5eb84330-2455-4bd9-b020-4af0e506d676 ac3a468b64f844f9b4952888c88de098 e2eb1e7f2a0a4826a63765b30279f43c - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:410
Oct 09 16:12:56 compute-0 nova_compute[117331]: 2025-10-09 16:12:56.665 2 DEBUG nova.compute.provider_tree [None req-5eb84330-2455-4bd9-b020-4af0e506d676 ac3a468b64f844f9b4952888c88de098 e2eb1e7f2a0a4826a63765b30279f43c - - default default] Inventory has not changed in ProviderTree for provider: 593051b8-2000-437f-a915-2616fc8b1671 update_inventory /usr/lib/python3.12/site-packages/nova/compute/provider_tree.py:180
Oct 09 16:12:57 compute-0 sshd-session[142092]: Connection closed by invalid user username 134.199.199.215 port 43272 [preauth]
Oct 09 16:12:57 compute-0 nova_compute[117331]: 2025-10-09 16:12:57.241 2 DEBUG nova.scheduler.client.report [None req-5eb84330-2455-4bd9-b020-4af0e506d676 ac3a468b64f844f9b4952888c88de098 e2eb1e7f2a0a4826a63765b30279f43c - - default default] Inventory has not changed for provider 593051b8-2000-437f-a915-2616fc8b1671 based on inventory data: {'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 79, 'reserved': 1, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.12/site-packages/nova/scheduler/client/report.py:958
Oct 09 16:12:57 compute-0 nova_compute[117331]: 2025-10-09 16:12:57.850 2 DEBUG oslo_concurrency.lockutils [None req-5eb84330-2455-4bd9-b020-4af0e506d676 ac3a468b64f844f9b4952888c88de098 e2eb1e7f2a0a4826a63765b30279f43c - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 1.229s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:424
Oct 09 16:12:57 compute-0 nova_compute[117331]: 2025-10-09 16:12:57.879 2 INFO nova.scheduler.client.report [None req-5eb84330-2455-4bd9-b020-4af0e506d676 ac3a468b64f844f9b4952888c88de098 e2eb1e7f2a0a4826a63765b30279f43c - - default default] Deleted allocations for instance 48c2872f-fffe-4ce0-8e59-8f5c86e5b07b
Oct 09 16:12:57 compute-0 sshd-session[142094]: Invalid user ubuntu from 134.199.199.215 port 55770
Oct 09 16:12:57 compute-0 sshd-session[142094]: pam_unix(sshd:auth): check pass; user unknown
Oct 09 16:12:57 compute-0 sshd-session[142094]: pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=134.199.199.215
Oct 09 16:12:58 compute-0 nova_compute[117331]: 2025-10-09 16:12:58.669 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 09 16:12:58 compute-0 nova_compute[117331]: 2025-10-09 16:12:58.910 2 DEBUG oslo_concurrency.lockutils [None req-5eb84330-2455-4bd9-b020-4af0e506d676 ac3a468b64f844f9b4952888c88de098 e2eb1e7f2a0a4826a63765b30279f43c - - default default] Lock "48c2872f-fffe-4ce0-8e59-8f5c86e5b07b" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 6.662s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:424
Oct 09 16:12:59 compute-0 sshd-session[142094]: Failed password for invalid user ubuntu from 134.199.199.215 port 55770 ssh2
Oct 09 16:12:59 compute-0 podman[127775]: time="2025-10-09T16:12:59Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Oct 09 16:12:59 compute-0 podman[127775]: @ - - [09/Oct/2025:16:12:59 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 19526 "" "Go-http-client/1.1"
Oct 09 16:12:59 compute-0 podman[127775]: @ - - [09/Oct/2025:16:12:59 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 3015 "" "Go-http-client/1.1"
Oct 09 16:13:00 compute-0 nova_compute[117331]: 2025-10-09 16:13:00.447 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 09 16:13:01 compute-0 sshd-session[142096]: Invalid user debian from 134.199.199.215 port 55784
Oct 09 16:13:01 compute-0 sshd-session[142096]: pam_unix(sshd:auth): check pass; user unknown
Oct 09 16:13:01 compute-0 sshd-session[142096]: pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=134.199.199.215
Oct 09 16:13:01 compute-0 openstack_network_exporter[129925]: ERROR   16:13:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Oct 09 16:13:01 compute-0 openstack_network_exporter[129925]: ERROR   16:13:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Oct 09 16:13:01 compute-0 openstack_network_exporter[129925]: ERROR   16:13:01 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Oct 09 16:13:01 compute-0 openstack_network_exporter[129925]: ERROR   16:13:01 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Oct 09 16:13:01 compute-0 openstack_network_exporter[129925]: 
Oct 09 16:13:01 compute-0 openstack_network_exporter[129925]: ERROR   16:13:01 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Oct 09 16:13:01 compute-0 openstack_network_exporter[129925]: 
Oct 09 16:13:01 compute-0 sshd-session[142094]: Connection closed by invalid user ubuntu 134.199.199.215 port 55770 [preauth]
Oct 09 16:13:02 compute-0 podman[142099]: 2025-10-09 16:13:02.831707331 +0000 UTC m=+0.060364739 container health_status 64fa251112d01abb34ec1bb8e22019815ddcaaaa391ce5ec91517147ac5a9994 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, vendor=Red Hat, Inc., io.openshift.expose-services=, maintainer=Red Hat, Inc., managed_by=edpm_ansible, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, url=https://catalog.redhat.com/en/search?searchType=containers, com.redhat.component=ubi9-minimal-container, vcs-type=git, build-date=2025-08-20T13:12:41, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., config_id=edpm, container_name=openstack_network_exporter, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, distribution-scope=public, io.openshift.tags=minimal rhel9, version=9.6, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, name=ubi9-minimal, release=1755695350, architecture=x86_64, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.buildah.version=1.33.7, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly.)
Oct 09 16:13:03 compute-0 sshd-session[142096]: Failed password for invalid user debian from 134.199.199.215 port 55784 ssh2
Oct 09 16:13:03 compute-0 nova_compute[117331]: 2025-10-09 16:13:03.670 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 09 16:13:04 compute-0 sshd-session[142121]: Invalid user esuser from 134.199.199.215 port 55788
Oct 09 16:13:04 compute-0 sshd-session[142121]: pam_unix(sshd:auth): check pass; user unknown
Oct 09 16:13:04 compute-0 sshd-session[142121]: pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=134.199.199.215
Oct 09 16:13:04 compute-0 sshd-session[142096]: Connection closed by invalid user debian 134.199.199.215 port 55784 [preauth]
Oct 09 16:13:05 compute-0 nova_compute[117331]: 2025-10-09 16:13:05.450 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 09 16:13:05 compute-0 podman[142123]: 2025-10-09 16:13:05.844181763 +0000 UTC m=+0.077785855 container health_status d615e4f24ff62fbb14be4a63e6da568ec5b22309773c1c66103b6b4de64a8a2f (image=38.102.83.66:5001/podified-master-centos10/openstack-ovn-controller:watcher_latest, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=watcher_latest, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': '38.102.83.66:5001/podified-master-centos10/openstack-ovn-controller:watcher_latest', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, container_name=ovn_controller, org.label-schema.build-date=20251007, org.label-schema.name=CentOS Stream 10 Base Image, io.buildah.version=1.41.4)
Oct 09 16:13:06 compute-0 nova_compute[117331]: 2025-10-09 16:13:06.306 2 DEBUG oslo_service.periodic_task [None req-92a13bed-c081-4c7b-87f3-bafebefb79f2 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.12/site-packages/oslo_service/periodic_task.py:210
Oct 09 16:13:06 compute-0 sshd-session[142121]: Failed password for invalid user esuser from 134.199.199.215 port 55788 ssh2
Oct 09 16:13:07 compute-0 nova_compute[117331]: 2025-10-09 16:13:07.305 2 DEBUG oslo_service.periodic_task [None req-92a13bed-c081-4c7b-87f3-bafebefb79f2 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.12/site-packages/oslo_service/periodic_task.py:210
Oct 09 16:13:07 compute-0 nova_compute[117331]: 2025-10-09 16:13:07.306 2 DEBUG oslo_service.periodic_task [None req-92a13bed-c081-4c7b-87f3-bafebefb79f2 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.12/site-packages/oslo_service/periodic_task.py:210
Oct 09 16:13:07 compute-0 nova_compute[117331]: 2025-10-09 16:13:07.306 2 DEBUG nova.compute.manager [None req-92a13bed-c081-4c7b-87f3-bafebefb79f2 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.12/site-packages/nova/compute/manager.py:11228
Oct 09 16:13:07 compute-0 sshd-session[142121]: Connection closed by invalid user esuser 134.199.199.215 port 55788 [preauth]
Oct 09 16:13:07 compute-0 sshd-session[142149]: Invalid user www from 134.199.199.215 port 42396
Oct 09 16:13:07 compute-0 sshd-session[142149]: pam_unix(sshd:auth): check pass; user unknown
Oct 09 16:13:07 compute-0 sshd-session[142149]: pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=134.199.199.215
Oct 09 16:13:08 compute-0 nova_compute[117331]: 2025-10-09 16:13:08.306 2 DEBUG oslo_service.periodic_task [None req-92a13bed-c081-4c7b-87f3-bafebefb79f2 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.12/site-packages/oslo_service/periodic_task.py:210
Oct 09 16:13:08 compute-0 nova_compute[117331]: 2025-10-09 16:13:08.671 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 09 16:13:08 compute-0 nova_compute[117331]: 2025-10-09 16:13:08.820 2 DEBUG oslo_concurrency.lockutils [None req-92a13bed-c081-4c7b-87f3-bafebefb79f2 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:405
Oct 09 16:13:08 compute-0 nova_compute[117331]: 2025-10-09 16:13:08.821 2 DEBUG oslo_concurrency.lockutils [None req-92a13bed-c081-4c7b-87f3-bafebefb79f2 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:410
Oct 09 16:13:08 compute-0 nova_compute[117331]: 2025-10-09 16:13:08.821 2 DEBUG oslo_concurrency.lockutils [None req-92a13bed-c081-4c7b-87f3-bafebefb79f2 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:424
Oct 09 16:13:08 compute-0 nova_compute[117331]: 2025-10-09 16:13:08.821 2 DEBUG nova.compute.resource_tracker [None req-92a13bed-c081-4c7b-87f3-bafebefb79f2 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.12/site-packages/nova/compute/resource_tracker.py:937
Oct 09 16:13:08 compute-0 nova_compute[117331]: 2025-10-09 16:13:08.948 2 WARNING nova.virt.libvirt.driver [None req-92a13bed-c081-4c7b-87f3-bafebefb79f2 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Oct 09 16:13:08 compute-0 nova_compute[117331]: 2025-10-09 16:13:08.950 2 DEBUG oslo_concurrency.processutils [None req-92a13bed-c081-4c7b-87f3-bafebefb79f2 - - - - - -] Running cmd (subprocess): env LANG=C uptime execute /usr/lib/python3.12/site-packages/oslo_concurrency/processutils.py:349
Oct 09 16:13:08 compute-0 nova_compute[117331]: 2025-10-09 16:13:08.967 2 DEBUG oslo_concurrency.processutils [None req-92a13bed-c081-4c7b-87f3-bafebefb79f2 - - - - - -] CMD "env LANG=C uptime" returned: 0 in 0.017s execute /usr/lib/python3.12/site-packages/oslo_concurrency/processutils.py:372
Oct 09 16:13:08 compute-0 nova_compute[117331]: 2025-10-09 16:13:08.967 2 DEBUG nova.compute.resource_tracker [None req-92a13bed-c081-4c7b-87f3-bafebefb79f2 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=6175MB free_disk=73.27001571655273GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.12/site-packages/nova/compute/resource_tracker.py:1136
Oct 09 16:13:08 compute-0 nova_compute[117331]: 2025-10-09 16:13:08.967 2 DEBUG oslo_concurrency.lockutils [None req-92a13bed-c081-4c7b-87f3-bafebefb79f2 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:405
Oct 09 16:13:08 compute-0 nova_compute[117331]: 2025-10-09 16:13:08.968 2 DEBUG oslo_concurrency.lockutils [None req-92a13bed-c081-4c7b-87f3-bafebefb79f2 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:410
Oct 09 16:13:09 compute-0 sshd-session[142149]: Failed password for invalid user www from 134.199.199.215 port 42396 ssh2
Oct 09 16:13:10 compute-0 nova_compute[117331]: 2025-10-09 16:13:10.066 2 DEBUG nova.compute.resource_tracker [None req-92a13bed-c081-4c7b-87f3-bafebefb79f2 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.12/site-packages/nova/compute/resource_tracker.py:1159
Oct 09 16:13:10 compute-0 nova_compute[117331]: 2025-10-09 16:13:10.066 2 DEBUG nova.compute.resource_tracker [None req-92a13bed-c081-4c7b-87f3-bafebefb79f2 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=79GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] stats={'failed_builds': '0', 'uptime': ' 16:13:08 up 22 min,  0 user,  load average: 0.28, 0.28, 0.32\n'} _report_final_resource_view /usr/lib/python3.12/site-packages/nova/compute/resource_tracker.py:1168
Oct 09 16:13:10 compute-0 nova_compute[117331]: 2025-10-09 16:13:10.107 2 DEBUG nova.compute.provider_tree [None req-92a13bed-c081-4c7b-87f3-bafebefb79f2 - - - - - -] Inventory has not changed in ProviderTree for provider: 593051b8-2000-437f-a915-2616fc8b1671 update_inventory /usr/lib/python3.12/site-packages/nova/compute/provider_tree.py:180
Oct 09 16:13:10 compute-0 sshd-session[142149]: Connection closed by invalid user www 134.199.199.215 port 42396 [preauth]
Oct 09 16:13:10 compute-0 nova_compute[117331]: 2025-10-09 16:13:10.452 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 09 16:13:10 compute-0 nova_compute[117331]: 2025-10-09 16:13:10.614 2 DEBUG nova.scheduler.client.report [None req-92a13bed-c081-4c7b-87f3-bafebefb79f2 - - - - - -] Inventory has not changed for provider 593051b8-2000-437f-a915-2616fc8b1671 based on inventory data: {'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 79, 'reserved': 1, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.12/site-packages/nova/scheduler/client/report.py:958
Oct 09 16:13:10 compute-0 sshd[52903]: drop connection #0 from [134.199.199.215]:42410 on [38.102.83.110]:22 penalty: failed authentication
Oct 09 16:13:11 compute-0 nova_compute[117331]: 2025-10-09 16:13:11.121 2 DEBUG nova.compute.resource_tracker [None req-92a13bed-c081-4c7b-87f3-bafebefb79f2 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.12/site-packages/nova/compute/resource_tracker.py:1097
Oct 09 16:13:11 compute-0 nova_compute[117331]: 2025-10-09 16:13:11.122 2 DEBUG oslo_concurrency.lockutils [None req-92a13bed-c081-4c7b-87f3-bafebefb79f2 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 2.154s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:424
Oct 09 16:13:12 compute-0 nova_compute[117331]: 2025-10-09 16:13:12.117 2 DEBUG oslo_service.periodic_task [None req-92a13bed-c081-4c7b-87f3-bafebefb79f2 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.12/site-packages/oslo_service/periodic_task.py:210
Oct 09 16:13:12 compute-0 nova_compute[117331]: 2025-10-09 16:13:12.117 2 DEBUG oslo_service.periodic_task [None req-92a13bed-c081-4c7b-87f3-bafebefb79f2 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.12/site-packages/oslo_service/periodic_task.py:210
Oct 09 16:13:12 compute-0 nova_compute[117331]: 2025-10-09 16:13:12.117 2 DEBUG oslo_service.periodic_task [None req-92a13bed-c081-4c7b-87f3-bafebefb79f2 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.12/site-packages/oslo_service/periodic_task.py:210
Oct 09 16:13:12 compute-0 nova_compute[117331]: 2025-10-09 16:13:12.118 2 DEBUG oslo_service.periodic_task [None req-92a13bed-c081-4c7b-87f3-bafebefb79f2 - - - - - -] Running periodic task ComputeManager._sync_power_states run_periodic_tasks /usr/lib/python3.12/site-packages/oslo_service/periodic_task.py:210
Oct 09 16:13:12 compute-0 nova_compute[117331]: 2025-10-09 16:13:12.987 2 DEBUG oslo_service.periodic_task [None req-92a13bed-c081-4c7b-87f3-bafebefb79f2 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.12/site-packages/oslo_service/periodic_task.py:210
Oct 09 16:13:13 compute-0 nova_compute[117331]: 2025-10-09 16:13:13.673 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 09 16:13:14 compute-0 sshd[52903]: drop connection #0 from [134.199.199.215]:42418 on [38.102.83.110]:22 penalty: failed authentication
Oct 09 16:13:14 compute-0 podman[142153]: 2025-10-09 16:13:14.827593574 +0000 UTC m=+0.065977238 container health_status 270830b5167cafbab735d8118980f26987e673cd0fa464dd2202394830bcca66 (image=38.102.83.66:5001/podified-master-centos10/openstack-multipathd:watcher_latest, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, container_name=multipathd, io.buildah.version=1.41.4, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 10 Base Image, config_id=multipathd, org.label-schema.build-date=20251007, org.label-schema.vendor=CentOS, tcib_build_tag=watcher_latest, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': '38.102.83.66:5001/podified-master-centos10/openstack-multipathd:watcher_latest', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team)
Oct 09 16:13:15 compute-0 nova_compute[117331]: 2025-10-09 16:13:15.455 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 09 16:13:17 compute-0 sshd[52903]: drop connection #0 from [134.199.199.215]:45446 on [38.102.83.110]:22 penalty: failed authentication
Oct 09 16:13:18 compute-0 nova_compute[117331]: 2025-10-09 16:13:18.674 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 09 16:13:18 compute-0 nova_compute[117331]: 2025-10-09 16:13:18.939 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 09 16:13:19 compute-0 podman[142174]: 2025-10-09 16:13:19.812177279 +0000 UTC m=+0.047119157 container health_status 86aa93e887899e54d383b0fc17fb3c5637680d1d8e146a130a557c837f0260fe (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>)
Oct 09 16:13:20 compute-0 nova_compute[117331]: 2025-10-09 16:13:20.457 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 09 16:13:21 compute-0 sshd[52903]: drop connection #0 from [134.199.199.215]:45448 on [38.102.83.110]:22 penalty: failed authentication
Oct 09 16:13:23 compute-0 nova_compute[117331]: 2025-10-09 16:13:23.676 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 09 16:13:23 compute-0 podman[142198]: 2025-10-09 16:13:23.827660598 +0000 UTC m=+0.061418382 container health_status 0e966c7a283689daf23c5db5017f23f685ee836319240188a9f70ddd246563c5 (image=38.102.83.66:5001/podified-master-centos10/openstack-neutron-metadata-agent-ovn:watcher_latest, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': '38.102.83.66:5001/podified-master-centos10/openstack-neutron-metadata-agent-ovn:watcher_latest', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, io.buildah.version=1.41.4, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=watcher_latest, config_id=ovn_metadata_agent, org.label-schema.build-date=20251007, org.label-schema.license=GPLv2)
Oct 09 16:13:23 compute-0 podman[142199]: 2025-10-09 16:13:23.848244085 +0000 UTC m=+0.078230468 container health_status 8227728d23f374cb9528be6ddf01dff38d8c792912a17b0d137932a18390b820 (image=38.102.83.66:5001/podified-master-centos10/openstack-iscsid:watcher_latest, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, container_name=iscsid, io.buildah.version=1.41.4, org.label-schema.license=GPLv2, org.label-schema.build-date=20251007, org.label-schema.name=CentOS Stream 10 Base Image, tcib_build_tag=watcher_latest, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, config_id=iscsid, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': '38.102.83.66:5001/podified-master-centos10/openstack-iscsid:watcher_latest', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']})
Oct 09 16:13:24 compute-0 sshd[52903]: drop connection #0 from [134.199.199.215]:45458 on [38.102.83.110]:22 penalty: failed authentication
Oct 09 16:13:24 compute-0 ovn_metadata_agent[28608]: 2025-10-09 16:13:24.905 28613 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=6, options={'arp_ns_explicit_output': 'true', 'fdb_removal_limit': '0', 'ignore_lsp_down': 'false', 'mac_binding_removal_limit': '0', 'mac_prefix': '4a:65:5f', 'max_tunid': '16711680', 'northd_internal_version': '24.09.4-20.37.0-77.8', 'svc_monitor_mac': '32:24:1e:e3:5d:e8'}, ipsec=False) old=SB_Global(nb_cfg=5) matches /usr/lib/python3.12/site-packages/ovsdbapp/backend/ovs_idl/event.py:55
Oct 09 16:13:24 compute-0 ovn_metadata_agent[28608]: 2025-10-09 16:13:24.905 28613 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 2 seconds run /usr/lib/python3.12/site-packages/neutron/agent/ovn/metadata/agent.py:367
Oct 09 16:13:24 compute-0 nova_compute[117331]: 2025-10-09 16:13:24.906 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 09 16:13:25 compute-0 nova_compute[117331]: 2025-10-09 16:13:25.458 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 09 16:13:26 compute-0 ovn_metadata_agent[28608]: 2025-10-09 16:13:26.907 28613 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=9954897f-aa83-45dd-8e84-289816676c2a, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '6'}),), if_exists=True) do_commit /usr/lib/python3.12/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 09 16:13:27 compute-0 sshd-session[142238]: Invalid user administrator from 134.199.199.215 port 40016
Oct 09 16:13:27 compute-0 sshd-session[142238]: pam_unix(sshd:auth): check pass; user unknown
Oct 09 16:13:27 compute-0 sshd-session[142238]: pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=134.199.199.215
Oct 09 16:13:28 compute-0 nova_compute[117331]: 2025-10-09 16:13:28.677 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 09 16:13:29 compute-0 sshd-session[142238]: Failed password for invalid user administrator from 134.199.199.215 port 40016 ssh2
Oct 09 16:13:29 compute-0 podman[127775]: time="2025-10-09T16:13:29Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Oct 09 16:13:29 compute-0 podman[127775]: @ - - [09/Oct/2025:16:13:29 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 19526 "" "Go-http-client/1.1"
Oct 09 16:13:29 compute-0 podman[127775]: @ - - [09/Oct/2025:16:13:29 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 3019 "" "Go-http-client/1.1"
Oct 09 16:13:30 compute-0 sshd-session[142238]: Connection closed by invalid user administrator 134.199.199.215 port 40016 [preauth]
Oct 09 16:13:30 compute-0 nova_compute[117331]: 2025-10-09 16:13:30.459 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 09 16:13:31 compute-0 unix_chkpwd[142242]: password check failed for user (root)
Oct 09 16:13:31 compute-0 sshd-session[142240]: pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=134.199.199.215  user=root
Oct 09 16:13:31 compute-0 openstack_network_exporter[129925]: ERROR   16:13:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Oct 09 16:13:31 compute-0 openstack_network_exporter[129925]: ERROR   16:13:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Oct 09 16:13:31 compute-0 openstack_network_exporter[129925]: ERROR   16:13:31 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Oct 09 16:13:31 compute-0 openstack_network_exporter[129925]: ERROR   16:13:31 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Oct 09 16:13:31 compute-0 openstack_network_exporter[129925]: 
Oct 09 16:13:31 compute-0 openstack_network_exporter[129925]: ERROR   16:13:31 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Oct 09 16:13:31 compute-0 openstack_network_exporter[129925]: 
Oct 09 16:13:31 compute-0 ovn_metadata_agent[28608]: 2025-10-09 16:13:31.915 28613 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:11:ef:1e 10.100.0.2'], port_security=[], type=localport, nat_addresses=[], virtual_parent=[], up=[False], options={}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.2/28', 'neutron:device_id': 'ovnmeta-7ed4244a-b510-4df6-9ffd-2f86603932fc', 'neutron:device_owner': 'network:distributed', 'neutron:mtu': '', 'neutron:network_name': 'neutron-7ed4244a-b510-4df6-9ffd-2f86603932fc', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '153d68ab33e64b958060574bb1741725', 'neutron:revision_number': '2', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=8754009b-0d8f-4d41-af4c-a82e7b57a4f6, chassis=[], tunnel_key=1, gateway_chassis=[], requested_chassis=[], logical_port=fa7fd37b-f1db-4203-966c-06eaa6fa3892) old=Port_Binding(mac=['fa:16:3e:11:ef:1e'], external_ids={'neutron:cidrs': '', 'neutron:device_id': 'ovnmeta-7ed4244a-b510-4df6-9ffd-2f86603932fc', 'neutron:device_owner': 'network:distributed', 'neutron:mtu': '', 'neutron:network_name': 'neutron-7ed4244a-b510-4df6-9ffd-2f86603932fc', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '153d68ab33e64b958060574bb1741725', 'neutron:revision_number': '1', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}) matches /usr/lib/python3.12/site-packages/ovsdbapp/backend/ovs_idl/event.py:55
Oct 09 16:13:31 compute-0 ovn_metadata_agent[28608]: 2025-10-09 16:13:31.916 28613 INFO neutron.agent.ovn.metadata.agent [-] Metadata Port fa7fd37b-f1db-4203-966c-06eaa6fa3892 in datapath 7ed4244a-b510-4df6-9ffd-2f86603932fc updated
Oct 09 16:13:31 compute-0 ovn_metadata_agent[28608]: 2025-10-09 16:13:31.918 28613 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 7ed4244a-b510-4df6-9ffd-2f86603932fc, tearing the namespace down if needed _get_provision_params /usr/lib/python3.12/site-packages/neutron/agent/ovn/metadata/agent.py:756
Oct 09 16:13:31 compute-0 ovn_metadata_agent[28608]: 2025-10-09 16:13:31.919 139687 DEBUG oslo.privsep.daemon [-] privsep: reply[79adab1f-d656-4ba5-8ae2-7b8c85fab366]: (4, False) _call_back /usr/lib/python3.12/site-packages/oslo_privsep/daemon.py:515
Oct 09 16:13:32 compute-0 sshd-session[142240]: Failed password for root from 134.199.199.215 port 40048 ssh2
Oct 09 16:13:33 compute-0 sshd-session[142240]: Connection closed by authenticating user root 134.199.199.215 port 40048 [preauth]
Oct 09 16:13:33 compute-0 nova_compute[117331]: 2025-10-09 16:13:33.680 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 09 16:13:33 compute-0 podman[142243]: 2025-10-09 16:13:33.844181288 +0000 UTC m=+0.080928155 container health_status 64fa251112d01abb34ec1bb8e22019815ddcaaaa391ce5ec91517147ac5a9994 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, release=1755695350, container_name=openstack_network_exporter, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, distribution-scope=public, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, name=ubi9-minimal, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., version=9.6, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, com.redhat.component=ubi9-minimal-container, io.openshift.tags=minimal rhel9, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., architecture=x86_64, vendor=Red Hat, Inc., config_id=edpm, build-date=2025-08-20T13:12:41, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., managed_by=edpm_ansible, vcs-type=git, io.openshift.expose-services=, url=https://catalog.redhat.com/en/search?searchType=containers, io.buildah.version=1.33.7, maintainer=Red Hat, Inc.)
Oct 09 16:13:34 compute-0 sshd-session[142265]: Invalid user guest from 134.199.199.215 port 40074
Oct 09 16:13:34 compute-0 sshd-session[142265]: pam_unix(sshd:auth): check pass; user unknown
Oct 09 16:13:34 compute-0 sshd-session[142265]: pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=134.199.199.215
Oct 09 16:13:35 compute-0 ovn_metadata_agent[28608]: 2025-10-09 16:13:35.286 28613 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:405
Oct 09 16:13:35 compute-0 ovn_metadata_agent[28608]: 2025-10-09 16:13:35.286 28613 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.000s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:410
Oct 09 16:13:35 compute-0 ovn_metadata_agent[28608]: 2025-10-09 16:13:35.286 28613 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:424
Oct 09 16:13:35 compute-0 nova_compute[117331]: 2025-10-09 16:13:35.465 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 09 16:13:36 compute-0 sshd-session[142265]: Failed password for invalid user guest from 134.199.199.215 port 40074 ssh2
Oct 09 16:13:36 compute-0 podman[142268]: 2025-10-09 16:13:36.866440243 +0000 UTC m=+0.092637519 container health_status d615e4f24ff62fbb14be4a63e6da568ec5b22309773c1c66103b6b4de64a8a2f (image=38.102.83.66:5001/podified-master-centos10/openstack-ovn-controller:watcher_latest, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=watcher_latest, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': '38.102.83.66:5001/podified-master-centos10/openstack-ovn-controller:watcher_latest', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, managed_by=edpm_ansible, tcib_managed=true, config_id=ovn_controller, io.buildah.version=1.41.4, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251007, org.label-schema.license=GPLv2, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team)
Oct 09 16:13:37 compute-0 sshd-session[142294]: Invalid user git from 134.199.199.215 port 34586
Oct 09 16:13:37 compute-0 sshd-session[142294]: pam_unix(sshd:auth): check pass; user unknown
Oct 09 16:13:37 compute-0 sshd-session[142294]: pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=134.199.199.215
Oct 09 16:13:38 compute-0 sshd-session[142265]: Connection closed by invalid user guest 134.199.199.215 port 40074 [preauth]
Oct 09 16:13:38 compute-0 nova_compute[117331]: 2025-10-09 16:13:38.682 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 09 16:13:39 compute-0 sshd-session[142294]: Failed password for invalid user git from 134.199.199.215 port 34586 ssh2
Oct 09 16:13:40 compute-0 nova_compute[117331]: 2025-10-09 16:13:40.464 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 09 16:13:41 compute-0 sshd-session[142296]: Invalid user nginx from 134.199.199.215 port 34602
Oct 09 16:13:41 compute-0 sshd-session[142296]: pam_unix(sshd:auth): check pass; user unknown
Oct 09 16:13:41 compute-0 sshd-session[142296]: pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=134.199.199.215
Oct 09 16:13:41 compute-0 sshd-session[142294]: Connection closed by invalid user git 134.199.199.215 port 34586 [preauth]
Oct 09 16:13:43 compute-0 sshd-session[142296]: Failed password for invalid user nginx from 134.199.199.215 port 34602 ssh2
Oct 09 16:13:43 compute-0 nova_compute[117331]: 2025-10-09 16:13:43.685 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 09 16:13:43 compute-0 sshd-session[142296]: Connection closed by invalid user nginx 134.199.199.215 port 34602 [preauth]
Oct 09 16:13:44 compute-0 sshd-session[142298]: Invalid user weblogic from 134.199.199.215 port 34616
Oct 09 16:13:44 compute-0 sshd-session[142298]: pam_unix(sshd:auth): check pass; user unknown
Oct 09 16:13:44 compute-0 sshd-session[142298]: pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=134.199.199.215
Oct 09 16:13:45 compute-0 nova_compute[117331]: 2025-10-09 16:13:45.466 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 09 16:13:45 compute-0 ovn_metadata_agent[28608]: 2025-10-09 16:13:45.571 28613 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:50:fe:7b 10.100.0.2'], port_security=[], type=localport, nat_addresses=[], virtual_parent=[], up=[False], options={}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.2/28', 'neutron:device_id': 'ovnmeta-ead89f15-cab8-4792-ba85-3747e283d6ac', 'neutron:device_owner': 'network:distributed', 'neutron:mtu': '', 'neutron:network_name': 'neutron-ead89f15-cab8-4792-ba85-3747e283d6ac', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'be4e2f9059cd48f5b44a612256e3fc7b', 'neutron:revision_number': '2', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=99a6ac86-179d-4c86-adfd-2cecaa4e5e40, chassis=[], tunnel_key=1, gateway_chassis=[], requested_chassis=[], logical_port=59decbcf-4c23-4bf4-9d33-96942eb4d368) old=Port_Binding(mac=['fa:16:3e:50:fe:7b'], external_ids={'neutron:cidrs': '', 'neutron:device_id': 'ovnmeta-ead89f15-cab8-4792-ba85-3747e283d6ac', 'neutron:device_owner': 'network:distributed', 'neutron:mtu': '', 'neutron:network_name': 'neutron-ead89f15-cab8-4792-ba85-3747e283d6ac', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'be4e2f9059cd48f5b44a612256e3fc7b', 'neutron:revision_number': '1', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}) matches /usr/lib/python3.12/site-packages/ovsdbapp/backend/ovs_idl/event.py:55
Oct 09 16:13:45 compute-0 ovn_metadata_agent[28608]: 2025-10-09 16:13:45.571 28613 INFO neutron.agent.ovn.metadata.agent [-] Metadata Port 59decbcf-4c23-4bf4-9d33-96942eb4d368 in datapath ead89f15-cab8-4792-ba85-3747e283d6ac updated
Oct 09 16:13:45 compute-0 ovn_metadata_agent[28608]: 2025-10-09 16:13:45.572 28613 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network ead89f15-cab8-4792-ba85-3747e283d6ac, tearing the namespace down if needed _get_provision_params /usr/lib/python3.12/site-packages/neutron/agent/ovn/metadata/agent.py:756
Oct 09 16:13:45 compute-0 ovn_metadata_agent[28608]: 2025-10-09 16:13:45.573 139687 DEBUG oslo.privsep.daemon [-] privsep: reply[091451b2-38ff-4f81-abd2-026aa4e1f82f]: (4, False) _call_back /usr/lib/python3.12/site-packages/oslo_privsep/daemon.py:515
Oct 09 16:13:45 compute-0 podman[142300]: 2025-10-09 16:13:45.819153234 +0000 UTC m=+0.051340591 container health_status 270830b5167cafbab735d8118980f26987e673cd0fa464dd2202394830bcca66 (image=38.102.83.66:5001/podified-master-centos10/openstack-multipathd:watcher_latest, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': '38.102.83.66:5001/podified-master-centos10/openstack-multipathd:watcher_latest', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, tcib_build_tag=watcher_latest, container_name=multipathd, io.buildah.version=1.41.4, org.label-schema.build-date=20251007, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 10 Base Image)
Oct 09 16:13:46 compute-0 sshd-session[142298]: Failed password for invalid user weblogic from 134.199.199.215 port 34616 ssh2
Oct 09 16:13:46 compute-0 sshd-session[142298]: Connection closed by invalid user weblogic 134.199.199.215 port 34616 [preauth]
Oct 09 16:13:47 compute-0 sshd-session[142320]: Invalid user bigdata from 134.199.199.215 port 50688
Oct 09 16:13:47 compute-0 sshd-session[142320]: pam_unix(sshd:auth): check pass; user unknown
Oct 09 16:13:47 compute-0 sshd-session[142320]: pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=134.199.199.215
Oct 09 16:13:48 compute-0 nova_compute[117331]: 2025-10-09 16:13:48.686 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 09 16:13:49 compute-0 sshd-session[142320]: Failed password for invalid user bigdata from 134.199.199.215 port 50688 ssh2
Oct 09 16:13:50 compute-0 nova_compute[117331]: 2025-10-09 16:13:50.469 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 09 16:13:50 compute-0 podman[142322]: 2025-10-09 16:13:50.84226404 +0000 UTC m=+0.058093256 container health_status 86aa93e887899e54d383b0fc17fb3c5637680d1d8e146a130a557c837f0260fe (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm)
Oct 09 16:13:51 compute-0 sshd-session[142320]: Connection closed by invalid user bigdata 134.199.199.215 port 50688 [preauth]
Oct 09 16:13:51 compute-0 sshd-session[142346]: Invalid user mysql from 134.199.199.215 port 50710
Oct 09 16:13:51 compute-0 sshd-session[142346]: pam_unix(sshd:auth): check pass; user unknown
Oct 09 16:13:51 compute-0 sshd-session[142346]: pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=134.199.199.215
Oct 09 16:13:52 compute-0 sshd-session[142346]: Failed password for invalid user mysql from 134.199.199.215 port 50710 ssh2
Oct 09 16:13:53 compute-0 sshd-session[142346]: Connection closed by invalid user mysql 134.199.199.215 port 50710 [preauth]
Oct 09 16:13:53 compute-0 nova_compute[117331]: 2025-10-09 16:13:53.687 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 09 16:13:54 compute-0 ovn_controller[19752]: 2025-10-09T16:13:54Z|00066|memory_trim|INFO|Detected inactivity (last active 30001 ms ago): trimming memory
Oct 09 16:13:54 compute-0 sshd[52903]: drop connection #0 from [134.199.199.215]:50716 on [38.102.83.110]:22 penalty: failed authentication
Oct 09 16:13:54 compute-0 podman[142349]: 2025-10-09 16:13:54.81909149 +0000 UTC m=+0.053456476 container health_status 8227728d23f374cb9528be6ddf01dff38d8c792912a17b0d137932a18390b820 (image=38.102.83.66:5001/podified-master-centos10/openstack-iscsid:watcher_latest, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=watcher_latest, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': '38.102.83.66:5001/podified-master-centos10/openstack-iscsid:watcher_latest', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, io.buildah.version=1.41.4, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_id=iscsid, managed_by=edpm_ansible, org.label-schema.build-date=20251007, container_name=iscsid, org.label-schema.name=CentOS Stream 10 Base Image)
Oct 09 16:13:54 compute-0 podman[142348]: 2025-10-09 16:13:54.840881454 +0000 UTC m=+0.078676889 container health_status 0e966c7a283689daf23c5db5017f23f685ee836319240188a9f70ddd246563c5 (image=38.102.83.66:5001/podified-master-centos10/openstack-neutron-metadata-agent-ovn:watcher_latest, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': '38.102.83.66:5001/podified-master-centos10/openstack-neutron-metadata-agent-ovn:watcher_latest', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent, org.label-schema.build-date=20251007, container_name=ovn_metadata_agent, io.buildah.version=1.41.4, org.label-schema.name=CentOS Stream 10 Base Image, tcib_build_tag=watcher_latest)
Oct 09 16:13:55 compute-0 nova_compute[117331]: 2025-10-09 16:13:55.470 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 09 16:13:57 compute-0 sshd[52903]: drop connection #0 from [134.199.199.215]:55080 on [38.102.83.110]:22 penalty: failed authentication
Oct 09 16:13:58 compute-0 nova_compute[117331]: 2025-10-09 16:13:58.689 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 09 16:13:59 compute-0 podman[127775]: time="2025-10-09T16:13:59Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Oct 09 16:13:59 compute-0 podman[127775]: @ - - [09/Oct/2025:16:13:59 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 19526 "" "Go-http-client/1.1"
Oct 09 16:13:59 compute-0 podman[127775]: @ - - [09/Oct/2025:16:13:59 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 3017 "" "Go-http-client/1.1"
Oct 09 16:14:00 compute-0 nova_compute[117331]: 2025-10-09 16:14:00.473 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 09 16:14:01 compute-0 sshd[52903]: drop connection #0 from [134.199.199.215]:55106 on [38.102.83.110]:22 penalty: failed authentication
Oct 09 16:14:01 compute-0 openstack_network_exporter[129925]: ERROR   16:14:01 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Oct 09 16:14:01 compute-0 openstack_network_exporter[129925]: ERROR   16:14:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Oct 09 16:14:01 compute-0 openstack_network_exporter[129925]: ERROR   16:14:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Oct 09 16:14:01 compute-0 openstack_network_exporter[129925]: ERROR   16:14:01 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Oct 09 16:14:01 compute-0 openstack_network_exporter[129925]: 
Oct 09 16:14:01 compute-0 openstack_network_exporter[129925]: ERROR   16:14:01 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Oct 09 16:14:01 compute-0 openstack_network_exporter[129925]: 
Oct 09 16:14:03 compute-0 nova_compute[117331]: 2025-10-09 16:14:03.691 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 09 16:14:04 compute-0 sshd[52903]: drop connection #0 from [134.199.199.215]:55120 on [38.102.83.110]:22 penalty: failed authentication
Oct 09 16:14:04 compute-0 podman[142388]: 2025-10-09 16:14:04.819124431 +0000 UTC m=+0.049974684 container health_status 64fa251112d01abb34ec1bb8e22019815ddcaaaa391ce5ec91517147ac5a9994 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, build-date=2025-08-20T13:12:41, container_name=openstack_network_exporter, vcs-type=git, io.openshift.expose-services=, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, distribution-scope=public, io.buildah.version=1.33.7, managed_by=edpm_ansible, com.redhat.component=ubi9-minimal-container, config_id=edpm, io.openshift.tags=minimal rhel9, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, version=9.6, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., release=1755695350, architecture=x86_64, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., name=ubi9-minimal, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., url=https://catalog.redhat.com/en/search?searchType=containers, vendor=Red Hat, Inc., io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, maintainer=Red Hat, Inc.)
Oct 09 16:14:05 compute-0 nova_compute[117331]: 2025-10-09 16:14:05.476 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 09 16:14:07 compute-0 nova_compute[117331]: 2025-10-09 16:14:07.305 2 DEBUG oslo_service.periodic_task [None req-92a13bed-c081-4c7b-87f3-bafebefb79f2 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.12/site-packages/oslo_service/periodic_task.py:210
Oct 09 16:14:07 compute-0 podman[142411]: 2025-10-09 16:14:07.896231561 +0000 UTC m=+0.107728245 container health_status d615e4f24ff62fbb14be4a63e6da568ec5b22309773c1c66103b6b4de64a8a2f (image=38.102.83.66:5001/podified-master-centos10/openstack-ovn-controller:watcher_latest, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_id=ovn_controller, managed_by=edpm_ansible, tcib_build_tag=watcher_latest, tcib_managed=true, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': '38.102.83.66:5001/podified-master-centos10/openstack-ovn-controller:watcher_latest', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.4, org.label-schema.build-date=20251007)
Oct 09 16:14:07 compute-0 sshd[52903]: drop connection #0 from [134.199.199.215]:52706 on [38.102.83.110]:22 penalty: failed authentication
Oct 09 16:14:08 compute-0 nova_compute[117331]: 2025-10-09 16:14:08.306 2 DEBUG oslo_service.periodic_task [None req-92a13bed-c081-4c7b-87f3-bafebefb79f2 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.12/site-packages/oslo_service/periodic_task.py:210
Oct 09 16:14:08 compute-0 nova_compute[117331]: 2025-10-09 16:14:08.693 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 09 16:14:09 compute-0 nova_compute[117331]: 2025-10-09 16:14:09.286 2 DEBUG oslo_concurrency.lockutils [None req-22ab0963-25ed-4fe0-9217-535d7efa60df e5044998ddc3419bb14cc08417add581 be4e2f9059cd48f5b44a612256e3fc7b - - default default] Acquiring lock "481b51b1-c134-4c7a-af94-d3d3794a7971" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:405
Oct 09 16:14:09 compute-0 nova_compute[117331]: 2025-10-09 16:14:09.286 2 DEBUG oslo_concurrency.lockutils [None req-22ab0963-25ed-4fe0-9217-535d7efa60df e5044998ddc3419bb14cc08417add581 be4e2f9059cd48f5b44a612256e3fc7b - - default default] Lock "481b51b1-c134-4c7a-af94-d3d3794a7971" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.000s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:410
Oct 09 16:14:09 compute-0 nova_compute[117331]: 2025-10-09 16:14:09.305 2 DEBUG oslo_service.periodic_task [None req-92a13bed-c081-4c7b-87f3-bafebefb79f2 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.12/site-packages/oslo_service/periodic_task.py:210
Oct 09 16:14:09 compute-0 nova_compute[117331]: 2025-10-09 16:14:09.306 2 DEBUG nova.compute.manager [None req-92a13bed-c081-4c7b-87f3-bafebefb79f2 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.12/site-packages/nova/compute/manager.py:11228
Oct 09 16:14:09 compute-0 nova_compute[117331]: 2025-10-09 16:14:09.802 2 DEBUG nova.compute.manager [None req-22ab0963-25ed-4fe0-9217-535d7efa60df e5044998ddc3419bb14cc08417add581 be4e2f9059cd48f5b44a612256e3fc7b - - default default] [instance: 481b51b1-c134-4c7a-af94-d3d3794a7971] Starting instance... _do_build_and_run_instance /usr/lib/python3.12/site-packages/nova/compute/manager.py:2472
Oct 09 16:14:10 compute-0 nova_compute[117331]: 2025-10-09 16:14:10.302 2 DEBUG oslo_service.periodic_task [None req-92a13bed-c081-4c7b-87f3-bafebefb79f2 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.12/site-packages/oslo_service/periodic_task.py:210
Oct 09 16:14:10 compute-0 nova_compute[117331]: 2025-10-09 16:14:10.305 2 DEBUG oslo_service.periodic_task [None req-92a13bed-c081-4c7b-87f3-bafebefb79f2 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.12/site-packages/oslo_service/periodic_task.py:210
Oct 09 16:14:10 compute-0 nova_compute[117331]: 2025-10-09 16:14:10.306 2 DEBUG oslo_service.periodic_task [None req-92a13bed-c081-4c7b-87f3-bafebefb79f2 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.12/site-packages/oslo_service/periodic_task.py:210
Oct 09 16:14:10 compute-0 nova_compute[117331]: 2025-10-09 16:14:10.306 2 DEBUG oslo_service.periodic_task [None req-92a13bed-c081-4c7b-87f3-bafebefb79f2 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.12/site-packages/oslo_service/periodic_task.py:210
Oct 09 16:14:10 compute-0 nova_compute[117331]: 2025-10-09 16:14:10.479 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 09 16:14:10 compute-0 nova_compute[117331]: 2025-10-09 16:14:10.631 2 DEBUG oslo_concurrency.lockutils [None req-22ab0963-25ed-4fe0-9217-535d7efa60df e5044998ddc3419bb14cc08417add581 be4e2f9059cd48f5b44a612256e3fc7b - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:405
Oct 09 16:14:10 compute-0 nova_compute[117331]: 2025-10-09 16:14:10.632 2 DEBUG oslo_concurrency.lockutils [None req-22ab0963-25ed-4fe0-9217-535d7efa60df e5044998ddc3419bb14cc08417add581 be4e2f9059cd48f5b44a612256e3fc7b - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:410
Oct 09 16:14:10 compute-0 nova_compute[117331]: 2025-10-09 16:14:10.637 2 DEBUG nova.virt.hardware [None req-22ab0963-25ed-4fe0-9217-535d7efa60df e5044998ddc3419bb14cc08417add581 be4e2f9059cd48f5b44a612256e3fc7b - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.12/site-packages/nova/virt/hardware.py:2528
Oct 09 16:14:10 compute-0 nova_compute[117331]: 2025-10-09 16:14:10.637 2 INFO nova.compute.claims [None req-22ab0963-25ed-4fe0-9217-535d7efa60df e5044998ddc3419bb14cc08417add581 be4e2f9059cd48f5b44a612256e3fc7b - - default default] [instance: 481b51b1-c134-4c7a-af94-d3d3794a7971] Claim successful on node compute-0.ctlplane.example.com
Oct 09 16:14:10 compute-0 nova_compute[117331]: 2025-10-09 16:14:10.898 2 DEBUG oslo_concurrency.lockutils [None req-92a13bed-c081-4c7b-87f3-bafebefb79f2 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:405
Oct 09 16:14:11 compute-0 sshd-session[142439]: Invalid user esearch from 134.199.199.215 port 52714
Oct 09 16:14:11 compute-0 sshd-session[142439]: pam_unix(sshd:auth): check pass; user unknown
Oct 09 16:14:11 compute-0 sshd-session[142439]: pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=134.199.199.215
Oct 09 16:14:11 compute-0 nova_compute[117331]: 2025-10-09 16:14:11.798 2 DEBUG nova.compute.provider_tree [None req-22ab0963-25ed-4fe0-9217-535d7efa60df e5044998ddc3419bb14cc08417add581 be4e2f9059cd48f5b44a612256e3fc7b - - default default] Inventory has not changed in ProviderTree for provider: 593051b8-2000-437f-a915-2616fc8b1671 update_inventory /usr/lib/python3.12/site-packages/nova/compute/provider_tree.py:180
Oct 09 16:14:12 compute-0 nova_compute[117331]: 2025-10-09 16:14:12.307 2 DEBUG nova.scheduler.client.report [None req-22ab0963-25ed-4fe0-9217-535d7efa60df e5044998ddc3419bb14cc08417add581 be4e2f9059cd48f5b44a612256e3fc7b - - default default] Inventory has not changed for provider 593051b8-2000-437f-a915-2616fc8b1671 based on inventory data: {'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 79, 'reserved': 1, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.12/site-packages/nova/scheduler/client/report.py:958
Oct 09 16:14:12 compute-0 nova_compute[117331]: 2025-10-09 16:14:12.820 2 DEBUG oslo_concurrency.lockutils [None req-22ab0963-25ed-4fe0-9217-535d7efa60df e5044998ddc3419bb14cc08417add581 be4e2f9059cd48f5b44a612256e3fc7b - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 2.189s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:424
Oct 09 16:14:12 compute-0 nova_compute[117331]: 2025-10-09 16:14:12.822 2 DEBUG nova.compute.manager [None req-22ab0963-25ed-4fe0-9217-535d7efa60df e5044998ddc3419bb14cc08417add581 be4e2f9059cd48f5b44a612256e3fc7b - - default default] [instance: 481b51b1-c134-4c7a-af94-d3d3794a7971] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.12/site-packages/nova/compute/manager.py:2869
Oct 09 16:14:12 compute-0 nova_compute[117331]: 2025-10-09 16:14:12.826 2 DEBUG oslo_concurrency.lockutils [None req-92a13bed-c081-4c7b-87f3-bafebefb79f2 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 1.928s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:410
Oct 09 16:14:12 compute-0 nova_compute[117331]: 2025-10-09 16:14:12.827 2 DEBUG oslo_concurrency.lockutils [None req-92a13bed-c081-4c7b-87f3-bafebefb79f2 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.001s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:424
Oct 09 16:14:12 compute-0 nova_compute[117331]: 2025-10-09 16:14:12.827 2 DEBUG nova.compute.resource_tracker [None req-92a13bed-c081-4c7b-87f3-bafebefb79f2 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.12/site-packages/nova/compute/resource_tracker.py:937
Oct 09 16:14:12 compute-0 nova_compute[117331]: 2025-10-09 16:14:12.995 2 WARNING nova.virt.libvirt.driver [None req-92a13bed-c081-4c7b-87f3-bafebefb79f2 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Oct 09 16:14:12 compute-0 nova_compute[117331]: 2025-10-09 16:14:12.996 2 DEBUG oslo_concurrency.processutils [None req-92a13bed-c081-4c7b-87f3-bafebefb79f2 - - - - - -] Running cmd (subprocess): env LANG=C uptime execute /usr/lib/python3.12/site-packages/oslo_concurrency/processutils.py:349
Oct 09 16:14:13 compute-0 nova_compute[117331]: 2025-10-09 16:14:13.016 2 DEBUG oslo_concurrency.processutils [None req-92a13bed-c081-4c7b-87f3-bafebefb79f2 - - - - - -] CMD "env LANG=C uptime" returned: 0 in 0.020s execute /usr/lib/python3.12/site-packages/oslo_concurrency/processutils.py:372
Oct 09 16:14:13 compute-0 nova_compute[117331]: 2025-10-09 16:14:13.017 2 DEBUG nova.compute.resource_tracker [None req-92a13bed-c081-4c7b-87f3-bafebefb79f2 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=6171MB free_disk=73.26998901367188GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.12/site-packages/nova/compute/resource_tracker.py:1136
Oct 09 16:14:13 compute-0 nova_compute[117331]: 2025-10-09 16:14:13.018 2 DEBUG oslo_concurrency.lockutils [None req-92a13bed-c081-4c7b-87f3-bafebefb79f2 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:405
Oct 09 16:14:13 compute-0 nova_compute[117331]: 2025-10-09 16:14:13.018 2 DEBUG oslo_concurrency.lockutils [None req-92a13bed-c081-4c7b-87f3-bafebefb79f2 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.001s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:410
Oct 09 16:14:13 compute-0 nova_compute[117331]: 2025-10-09 16:14:13.336 2 DEBUG nova.compute.manager [None req-22ab0963-25ed-4fe0-9217-535d7efa60df e5044998ddc3419bb14cc08417add581 be4e2f9059cd48f5b44a612256e3fc7b - - default default] [instance: 481b51b1-c134-4c7a-af94-d3d3794a7971] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.12/site-packages/nova/compute/manager.py:2016
Oct 09 16:14:13 compute-0 nova_compute[117331]: 2025-10-09 16:14:13.337 2 DEBUG nova.network.neutron [None req-22ab0963-25ed-4fe0-9217-535d7efa60df e5044998ddc3419bb14cc08417add581 be4e2f9059cd48f5b44a612256e3fc7b - - default default] [instance: 481b51b1-c134-4c7a-af94-d3d3794a7971] allocate_for_instance() allocate_for_instance /usr/lib/python3.12/site-packages/nova/network/neutron.py:1208
Oct 09 16:14:13 compute-0 nova_compute[117331]: 2025-10-09 16:14:13.337 2 WARNING neutronclient.v2_0.client [None req-22ab0963-25ed-4fe0-9217-535d7efa60df e5044998ddc3419bb14cc08417add581 be4e2f9059cd48f5b44a612256e3fc7b - - default default] The python binding code in neutronclient is deprecated in favor of OpenstackSDK, please use that as this will be removed in a future release.
Oct 09 16:14:13 compute-0 nova_compute[117331]: 2025-10-09 16:14:13.337 2 WARNING neutronclient.v2_0.client [None req-22ab0963-25ed-4fe0-9217-535d7efa60df e5044998ddc3419bb14cc08417add581 be4e2f9059cd48f5b44a612256e3fc7b - - default default] The python binding code in neutronclient is deprecated in favor of OpenstackSDK, please use that as this will be removed in a future release.
Oct 09 16:14:13 compute-0 nova_compute[117331]: 2025-10-09 16:14:13.696 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 09 16:14:13 compute-0 nova_compute[117331]: 2025-10-09 16:14:13.873 2 INFO nova.virt.libvirt.driver [None req-22ab0963-25ed-4fe0-9217-535d7efa60df e5044998ddc3419bb14cc08417add581 be4e2f9059cd48f5b44a612256e3fc7b - - default default] [instance: 481b51b1-c134-4c7a-af94-d3d3794a7971] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Oct 09 16:14:14 compute-0 nova_compute[117331]: 2025-10-09 16:14:14.056 2 DEBUG nova.compute.resource_tracker [None req-92a13bed-c081-4c7b-87f3-bafebefb79f2 - - - - - -] Instance 481b51b1-c134-4c7a-af94-d3d3794a7971 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.12/site-packages/nova/compute/resource_tracker.py:1740
Oct 09 16:14:14 compute-0 nova_compute[117331]: 2025-10-09 16:14:14.056 2 DEBUG nova.compute.resource_tracker [None req-92a13bed-c081-4c7b-87f3-bafebefb79f2 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 1 _report_final_resource_view /usr/lib/python3.12/site-packages/nova/compute/resource_tracker.py:1159
Oct 09 16:14:14 compute-0 nova_compute[117331]: 2025-10-09 16:14:14.056 2 DEBUG nova.compute.resource_tracker [None req-92a13bed-c081-4c7b-87f3-bafebefb79f2 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=640MB phys_disk=79GB used_disk=1GB total_vcpus=8 used_vcpus=1 pci_stats=[] stats={'failed_builds': '0', 'uptime': ' 16:14:13 up 23 min,  0 user,  load average: 0.09, 0.22, 0.29\n', 'num_instances': '1', 'num_vm_building': '1', 'num_task_networking': '1', 'num_os_type_None': '1', 'num_proj_be4e2f9059cd48f5b44a612256e3fc7b': '1', 'io_workload': '1'} _report_final_resource_view /usr/lib/python3.12/site-packages/nova/compute/resource_tracker.py:1168
Oct 09 16:14:14 compute-0 sshd-session[142439]: Failed password for invalid user esearch from 134.199.199.215 port 52714 ssh2
Oct 09 16:14:14 compute-0 nova_compute[117331]: 2025-10-09 16:14:14.098 2 DEBUG nova.compute.provider_tree [None req-92a13bed-c081-4c7b-87f3-bafebefb79f2 - - - - - -] Inventory has not changed in ProviderTree for provider: 593051b8-2000-437f-a915-2616fc8b1671 update_inventory /usr/lib/python3.12/site-packages/nova/compute/provider_tree.py:180
Oct 09 16:14:14 compute-0 nova_compute[117331]: 2025-10-09 16:14:14.227 2 DEBUG nova.network.neutron [None req-22ab0963-25ed-4fe0-9217-535d7efa60df e5044998ddc3419bb14cc08417add581 be4e2f9059cd48f5b44a612256e3fc7b - - default default] [instance: 481b51b1-c134-4c7a-af94-d3d3794a7971] Successfully created port: dc640852-4b77-4312-9033-7ea4a70836d4 _create_port_minimal /usr/lib/python3.12/site-packages/nova/network/neutron.py:550
Oct 09 16:14:14 compute-0 nova_compute[117331]: 2025-10-09 16:14:14.383 2 DEBUG nova.compute.manager [None req-22ab0963-25ed-4fe0-9217-535d7efa60df e5044998ddc3419bb14cc08417add581 be4e2f9059cd48f5b44a612256e3fc7b - - default default] [instance: 481b51b1-c134-4c7a-af94-d3d3794a7971] Start building block device mappings for instance. _build_resources /usr/lib/python3.12/site-packages/nova/compute/manager.py:2904
Oct 09 16:14:14 compute-0 nova_compute[117331]: 2025-10-09 16:14:14.616 2 DEBUG nova.scheduler.client.report [None req-92a13bed-c081-4c7b-87f3-bafebefb79f2 - - - - - -] Inventory has not changed for provider 593051b8-2000-437f-a915-2616fc8b1671 based on inventory data: {'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 79, 'reserved': 1, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.12/site-packages/nova/scheduler/client/report.py:958
Oct 09 16:14:14 compute-0 sshd-session[142442]: Invalid user user1 from 134.199.199.215 port 52730
Oct 09 16:14:14 compute-0 sshd-session[142442]: pam_unix(sshd:auth): check pass; user unknown
Oct 09 16:14:14 compute-0 sshd-session[142442]: pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=134.199.199.215
Oct 09 16:14:14 compute-0 nova_compute[117331]: 2025-10-09 16:14:14.990 2 DEBUG nova.network.neutron [None req-22ab0963-25ed-4fe0-9217-535d7efa60df e5044998ddc3419bb14cc08417add581 be4e2f9059cd48f5b44a612256e3fc7b - - default default] [instance: 481b51b1-c134-4c7a-af94-d3d3794a7971] Successfully updated port: dc640852-4b77-4312-9033-7ea4a70836d4 _update_port /usr/lib/python3.12/site-packages/nova/network/neutron.py:588
Oct 09 16:14:15 compute-0 nova_compute[117331]: 2025-10-09 16:14:15.047 2 DEBUG nova.compute.manager [req-fc023fb5-016c-4d4a-b8ec-6155fe80b925 req-ded81450-31fe-4749-b9ab-b099e4a41761 ee15961ad2be4bada6c106ac4f69f422 c076c42271004e7aa86e84416e33f826 - - default default] [instance: 481b51b1-c134-4c7a-af94-d3d3794a7971] Received event network-changed-dc640852-4b77-4312-9033-7ea4a70836d4 external_instance_event /usr/lib/python3.12/site-packages/nova/compute/manager.py:11812
Oct 09 16:14:15 compute-0 nova_compute[117331]: 2025-10-09 16:14:15.047 2 DEBUG nova.compute.manager [req-fc023fb5-016c-4d4a-b8ec-6155fe80b925 req-ded81450-31fe-4749-b9ab-b099e4a41761 ee15961ad2be4bada6c106ac4f69f422 c076c42271004e7aa86e84416e33f826 - - default default] [instance: 481b51b1-c134-4c7a-af94-d3d3794a7971] Refreshing instance network info cache due to event network-changed-dc640852-4b77-4312-9033-7ea4a70836d4. external_instance_event /usr/lib/python3.12/site-packages/nova/compute/manager.py:11817
Oct 09 16:14:15 compute-0 nova_compute[117331]: 2025-10-09 16:14:15.048 2 DEBUG oslo_concurrency.lockutils [req-fc023fb5-016c-4d4a-b8ec-6155fe80b925 req-ded81450-31fe-4749-b9ab-b099e4a41761 ee15961ad2be4bada6c106ac4f69f422 c076c42271004e7aa86e84416e33f826 - - default default] Acquiring lock "refresh_cache-481b51b1-c134-4c7a-af94-d3d3794a7971" lock /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:313
Oct 09 16:14:15 compute-0 nova_compute[117331]: 2025-10-09 16:14:15.048 2 DEBUG oslo_concurrency.lockutils [req-fc023fb5-016c-4d4a-b8ec-6155fe80b925 req-ded81450-31fe-4749-b9ab-b099e4a41761 ee15961ad2be4bada6c106ac4f69f422 c076c42271004e7aa86e84416e33f826 - - default default] Acquired lock "refresh_cache-481b51b1-c134-4c7a-af94-d3d3794a7971" lock /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:316
Oct 09 16:14:15 compute-0 nova_compute[117331]: 2025-10-09 16:14:15.048 2 DEBUG nova.network.neutron [req-fc023fb5-016c-4d4a-b8ec-6155fe80b925 req-ded81450-31fe-4749-b9ab-b099e4a41761 ee15961ad2be4bada6c106ac4f69f422 c076c42271004e7aa86e84416e33f826 - - default default] [instance: 481b51b1-c134-4c7a-af94-d3d3794a7971] Refreshing network info cache for port dc640852-4b77-4312-9033-7ea4a70836d4 _get_instance_nw_info /usr/lib/python3.12/site-packages/nova/network/neutron.py:2067
Oct 09 16:14:15 compute-0 nova_compute[117331]: 2025-10-09 16:14:15.123 2 DEBUG nova.compute.resource_tracker [None req-92a13bed-c081-4c7b-87f3-bafebefb79f2 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.12/site-packages/nova/compute/resource_tracker.py:1097
Oct 09 16:14:15 compute-0 nova_compute[117331]: 2025-10-09 16:14:15.123 2 DEBUG oslo_concurrency.lockutils [None req-92a13bed-c081-4c7b-87f3-bafebefb79f2 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 2.105s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:424
Oct 09 16:14:15 compute-0 nova_compute[117331]: 2025-10-09 16:14:15.399 2 DEBUG nova.compute.manager [None req-22ab0963-25ed-4fe0-9217-535d7efa60df e5044998ddc3419bb14cc08417add581 be4e2f9059cd48f5b44a612256e3fc7b - - default default] [instance: 481b51b1-c134-4c7a-af94-d3d3794a7971] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.12/site-packages/nova/compute/manager.py:2678
Oct 09 16:14:15 compute-0 nova_compute[117331]: 2025-10-09 16:14:15.400 2 DEBUG nova.virt.libvirt.driver [None req-22ab0963-25ed-4fe0-9217-535d7efa60df e5044998ddc3419bb14cc08417add581 be4e2f9059cd48f5b44a612256e3fc7b - - default default] [instance: 481b51b1-c134-4c7a-af94-d3d3794a7971] Creating instance directory _create_image /usr/lib/python3.12/site-packages/nova/virt/libvirt/driver.py:5138
Oct 09 16:14:15 compute-0 nova_compute[117331]: 2025-10-09 16:14:15.400 2 INFO nova.virt.libvirt.driver [None req-22ab0963-25ed-4fe0-9217-535d7efa60df e5044998ddc3419bb14cc08417add581 be4e2f9059cd48f5b44a612256e3fc7b - - default default] [instance: 481b51b1-c134-4c7a-af94-d3d3794a7971] Creating image(s)
Oct 09 16:14:15 compute-0 nova_compute[117331]: 2025-10-09 16:14:15.401 2 DEBUG oslo_concurrency.lockutils [None req-22ab0963-25ed-4fe0-9217-535d7efa60df e5044998ddc3419bb14cc08417add581 be4e2f9059cd48f5b44a612256e3fc7b - - default default] Acquiring lock "/var/lib/nova/instances/481b51b1-c134-4c7a-af94-d3d3794a7971/disk.info" by "nova.virt.libvirt.imagebackend.Image.resolve_driver_format.<locals>.write_to_disk_info_file" inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:405
Oct 09 16:14:15 compute-0 nova_compute[117331]: 2025-10-09 16:14:15.401 2 DEBUG oslo_concurrency.lockutils [None req-22ab0963-25ed-4fe0-9217-535d7efa60df e5044998ddc3419bb14cc08417add581 be4e2f9059cd48f5b44a612256e3fc7b - - default default] Lock "/var/lib/nova/instances/481b51b1-c134-4c7a-af94-d3d3794a7971/disk.info" acquired by "nova.virt.libvirt.imagebackend.Image.resolve_driver_format.<locals>.write_to_disk_info_file" :: waited 0.000s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:410
Oct 09 16:14:15 compute-0 nova_compute[117331]: 2025-10-09 16:14:15.401 2 DEBUG oslo_concurrency.lockutils [None req-22ab0963-25ed-4fe0-9217-535d7efa60df e5044998ddc3419bb14cc08417add581 be4e2f9059cd48f5b44a612256e3fc7b - - default default] Lock "/var/lib/nova/instances/481b51b1-c134-4c7a-af94-d3d3794a7971/disk.info" "released" by "nova.virt.libvirt.imagebackend.Image.resolve_driver_format.<locals>.write_to_disk_info_file" :: held 0.001s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:424
Oct 09 16:14:15 compute-0 nova_compute[117331]: 2025-10-09 16:14:15.402 2 DEBUG oslo_utils.imageutils.format_inspector [None req-22ab0963-25ed-4fe0-9217-535d7efa60df e5044998ddc3419bb14cc08417add581 be4e2f9059cd48f5b44a612256e3fc7b - - default default] Format inspector for vmdk does not match, excluding from consideration (Signature KDMV not found: b'\xebH\x90\x00') _process_chunk /usr/lib/python3.12/site-packages/oslo_utils/imageutils/format_inspector.py:1365
Oct 09 16:14:15 compute-0 nova_compute[117331]: 2025-10-09 16:14:15.405 2 DEBUG oslo_utils.imageutils.format_inspector [None req-22ab0963-25ed-4fe0-9217-535d7efa60df e5044998ddc3419bb14cc08417add581 be4e2f9059cd48f5b44a612256e3fc7b - - default default] Format inspector for vhdx does not match, excluding from consideration (Region signature not found at 30000) _process_chunk /usr/lib/python3.12/site-packages/oslo_utils/imageutils/format_inspector.py:1365
Oct 09 16:14:15 compute-0 nova_compute[117331]: 2025-10-09 16:14:15.408 2 DEBUG oslo_concurrency.processutils [None req-22ab0963-25ed-4fe0-9217-535d7efa60df e5044998ddc3419bb14cc08417add581 be4e2f9059cd48f5b44a612256e3fc7b - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/cea3dacfdea0f3734ae526b812744e847bc2d356 --force-share --output=json execute /usr/lib/python3.12/site-packages/oslo_concurrency/processutils.py:349
Oct 09 16:14:15 compute-0 nova_compute[117331]: 2025-10-09 16:14:15.463 2 DEBUG oslo_concurrency.processutils [None req-22ab0963-25ed-4fe0-9217-535d7efa60df e5044998ddc3419bb14cc08417add581 be4e2f9059cd48f5b44a612256e3fc7b - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/cea3dacfdea0f3734ae526b812744e847bc2d356 --force-share --output=json" returned: 0 in 0.055s execute /usr/lib/python3.12/site-packages/oslo_concurrency/processutils.py:372
Oct 09 16:14:15 compute-0 nova_compute[117331]: 2025-10-09 16:14:15.465 2 DEBUG oslo_concurrency.lockutils [None req-22ab0963-25ed-4fe0-9217-535d7efa60df e5044998ddc3419bb14cc08417add581 be4e2f9059cd48f5b44a612256e3fc7b - - default default] Acquiring lock "cea3dacfdea0f3734ae526b812744e847bc2d356" by "nova.virt.libvirt.imagebackend.Qcow2.create_image.<locals>.create_qcow2_image" inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:405
Oct 09 16:14:15 compute-0 nova_compute[117331]: 2025-10-09 16:14:15.466 2 DEBUG oslo_concurrency.lockutils [None req-22ab0963-25ed-4fe0-9217-535d7efa60df e5044998ddc3419bb14cc08417add581 be4e2f9059cd48f5b44a612256e3fc7b - - default default] Lock "cea3dacfdea0f3734ae526b812744e847bc2d356" acquired by "nova.virt.libvirt.imagebackend.Qcow2.create_image.<locals>.create_qcow2_image" :: waited 0.001s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:410
Oct 09 16:14:15 compute-0 nova_compute[117331]: 2025-10-09 16:14:15.467 2 DEBUG oslo_utils.imageutils.format_inspector [None req-22ab0963-25ed-4fe0-9217-535d7efa60df e5044998ddc3419bb14cc08417add581 be4e2f9059cd48f5b44a612256e3fc7b - - default default] Format inspector for vmdk does not match, excluding from consideration (Signature KDMV not found: b'\xebH\x90\x00') _process_chunk /usr/lib/python3.12/site-packages/oslo_utils/imageutils/format_inspector.py:1365
Oct 09 16:14:15 compute-0 nova_compute[117331]: 2025-10-09 16:14:15.473 2 DEBUG oslo_utils.imageutils.format_inspector [None req-22ab0963-25ed-4fe0-9217-535d7efa60df e5044998ddc3419bb14cc08417add581 be4e2f9059cd48f5b44a612256e3fc7b - - default default] Format inspector for vhdx does not match, excluding from consideration (Region signature not found at 30000) _process_chunk /usr/lib/python3.12/site-packages/oslo_utils/imageutils/format_inspector.py:1365
Oct 09 16:14:15 compute-0 nova_compute[117331]: 2025-10-09 16:14:15.474 2 DEBUG oslo_concurrency.processutils [None req-22ab0963-25ed-4fe0-9217-535d7efa60df e5044998ddc3419bb14cc08417add581 be4e2f9059cd48f5b44a612256e3fc7b - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/cea3dacfdea0f3734ae526b812744e847bc2d356 --force-share --output=json execute /usr/lib/python3.12/site-packages/oslo_concurrency/processutils.py:349
Oct 09 16:14:15 compute-0 nova_compute[117331]: 2025-10-09 16:14:15.485 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 09 16:14:15 compute-0 nova_compute[117331]: 2025-10-09 16:14:15.495 2 DEBUG oslo_concurrency.lockutils [None req-22ab0963-25ed-4fe0-9217-535d7efa60df e5044998ddc3419bb14cc08417add581 be4e2f9059cd48f5b44a612256e3fc7b - - default default] Acquiring lock "refresh_cache-481b51b1-c134-4c7a-af94-d3d3794a7971" lock /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:313
Oct 09 16:14:15 compute-0 nova_compute[117331]: 2025-10-09 16:14:15.531 2 DEBUG oslo_concurrency.processutils [None req-22ab0963-25ed-4fe0-9217-535d7efa60df e5044998ddc3419bb14cc08417add581 be4e2f9059cd48f5b44a612256e3fc7b - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/cea3dacfdea0f3734ae526b812744e847bc2d356 --force-share --output=json" returned: 0 in 0.056s execute /usr/lib/python3.12/site-packages/oslo_concurrency/processutils.py:372
Oct 09 16:14:15 compute-0 nova_compute[117331]: 2025-10-09 16:14:15.531 2 DEBUG oslo_concurrency.processutils [None req-22ab0963-25ed-4fe0-9217-535d7efa60df e5044998ddc3419bb14cc08417add581 be4e2f9059cd48f5b44a612256e3fc7b - - default default] Running cmd (subprocess): env LC_ALL=C LANG=C qemu-img create -f qcow2 -o backing_file=/var/lib/nova/instances/_base/cea3dacfdea0f3734ae526b812744e847bc2d356,backing_fmt=raw /var/lib/nova/instances/481b51b1-c134-4c7a-af94-d3d3794a7971/disk 1073741824 execute /usr/lib/python3.12/site-packages/oslo_concurrency/processutils.py:349
Oct 09 16:14:15 compute-0 nova_compute[117331]: 2025-10-09 16:14:15.554 2 WARNING neutronclient.v2_0.client [req-fc023fb5-016c-4d4a-b8ec-6155fe80b925 req-ded81450-31fe-4749-b9ab-b099e4a41761 ee15961ad2be4bada6c106ac4f69f422 c076c42271004e7aa86e84416e33f826 - - default default] The python binding code in neutronclient is deprecated in favor of OpenstackSDK, please use that as this will be removed in a future release.
Oct 09 16:14:15 compute-0 nova_compute[117331]: 2025-10-09 16:14:15.572 2 DEBUG oslo_concurrency.processutils [None req-22ab0963-25ed-4fe0-9217-535d7efa60df e5044998ddc3419bb14cc08417add581 be4e2f9059cd48f5b44a612256e3fc7b - - default default] CMD "env LC_ALL=C LANG=C qemu-img create -f qcow2 -o backing_file=/var/lib/nova/instances/_base/cea3dacfdea0f3734ae526b812744e847bc2d356,backing_fmt=raw /var/lib/nova/instances/481b51b1-c134-4c7a-af94-d3d3794a7971/disk 1073741824" returned: 0 in 0.040s execute /usr/lib/python3.12/site-packages/oslo_concurrency/processutils.py:372
Oct 09 16:14:15 compute-0 nova_compute[117331]: 2025-10-09 16:14:15.572 2 DEBUG oslo_concurrency.lockutils [None req-22ab0963-25ed-4fe0-9217-535d7efa60df e5044998ddc3419bb14cc08417add581 be4e2f9059cd48f5b44a612256e3fc7b - - default default] Lock "cea3dacfdea0f3734ae526b812744e847bc2d356" "released" by "nova.virt.libvirt.imagebackend.Qcow2.create_image.<locals>.create_qcow2_image" :: held 0.106s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:424
Oct 09 16:14:15 compute-0 nova_compute[117331]: 2025-10-09 16:14:15.573 2 DEBUG oslo_concurrency.processutils [None req-22ab0963-25ed-4fe0-9217-535d7efa60df e5044998ddc3419bb14cc08417add581 be4e2f9059cd48f5b44a612256e3fc7b - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/cea3dacfdea0f3734ae526b812744e847bc2d356 --force-share --output=json execute /usr/lib/python3.12/site-packages/oslo_concurrency/processutils.py:349
Oct 09 16:14:15 compute-0 nova_compute[117331]: 2025-10-09 16:14:15.623 2 DEBUG oslo_concurrency.processutils [None req-22ab0963-25ed-4fe0-9217-535d7efa60df e5044998ddc3419bb14cc08417add581 be4e2f9059cd48f5b44a612256e3fc7b - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/cea3dacfdea0f3734ae526b812744e847bc2d356 --force-share --output=json" returned: 0 in 0.051s execute /usr/lib/python3.12/site-packages/oslo_concurrency/processutils.py:372
Oct 09 16:14:15 compute-0 nova_compute[117331]: 2025-10-09 16:14:15.624 2 DEBUG nova.virt.disk.api [None req-22ab0963-25ed-4fe0-9217-535d7efa60df e5044998ddc3419bb14cc08417add581 be4e2f9059cd48f5b44a612256e3fc7b - - default default] Checking if we can resize image /var/lib/nova/instances/481b51b1-c134-4c7a-af94-d3d3794a7971/disk. size=1073741824 can_resize_image /usr/lib/python3.12/site-packages/nova/virt/disk/api.py:164
Oct 09 16:14:15 compute-0 nova_compute[117331]: 2025-10-09 16:14:15.625 2 DEBUG oslo_concurrency.processutils [None req-22ab0963-25ed-4fe0-9217-535d7efa60df e5044998ddc3419bb14cc08417add581 be4e2f9059cd48f5b44a612256e3fc7b - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/481b51b1-c134-4c7a-af94-d3d3794a7971/disk --force-share --output=json execute /usr/lib/python3.12/site-packages/oslo_concurrency/processutils.py:349
Oct 09 16:14:15 compute-0 nova_compute[117331]: 2025-10-09 16:14:15.636 2 DEBUG nova.network.neutron [req-fc023fb5-016c-4d4a-b8ec-6155fe80b925 req-ded81450-31fe-4749-b9ab-b099e4a41761 ee15961ad2be4bada6c106ac4f69f422 c076c42271004e7aa86e84416e33f826 - - default default] [instance: 481b51b1-c134-4c7a-af94-d3d3794a7971] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.12/site-packages/nova/network/neutron.py:3383
Oct 09 16:14:15 compute-0 nova_compute[117331]: 2025-10-09 16:14:15.680 2 DEBUG oslo_concurrency.processutils [None req-22ab0963-25ed-4fe0-9217-535d7efa60df e5044998ddc3419bb14cc08417add581 be4e2f9059cd48f5b44a612256e3fc7b - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/481b51b1-c134-4c7a-af94-d3d3794a7971/disk --force-share --output=json" returned: 0 in 0.055s execute /usr/lib/python3.12/site-packages/oslo_concurrency/processutils.py:372
Oct 09 16:14:15 compute-0 nova_compute[117331]: 2025-10-09 16:14:15.680 2 DEBUG nova.virt.disk.api [None req-22ab0963-25ed-4fe0-9217-535d7efa60df e5044998ddc3419bb14cc08417add581 be4e2f9059cd48f5b44a612256e3fc7b - - default default] Cannot resize image /var/lib/nova/instances/481b51b1-c134-4c7a-af94-d3d3794a7971/disk to a smaller size. can_resize_image /usr/lib/python3.12/site-packages/nova/virt/disk/api.py:170
Oct 09 16:14:15 compute-0 nova_compute[117331]: 2025-10-09 16:14:15.681 2 DEBUG nova.virt.libvirt.driver [None req-22ab0963-25ed-4fe0-9217-535d7efa60df e5044998ddc3419bb14cc08417add581 be4e2f9059cd48f5b44a612256e3fc7b - - default default] [instance: 481b51b1-c134-4c7a-af94-d3d3794a7971] Created local disks _create_image /usr/lib/python3.12/site-packages/nova/virt/libvirt/driver.py:5270
Oct 09 16:14:15 compute-0 nova_compute[117331]: 2025-10-09 16:14:15.681 2 DEBUG nova.virt.libvirt.driver [None req-22ab0963-25ed-4fe0-9217-535d7efa60df e5044998ddc3419bb14cc08417add581 be4e2f9059cd48f5b44a612256e3fc7b - - default default] [instance: 481b51b1-c134-4c7a-af94-d3d3794a7971] Ensure instance console log exists: /var/lib/nova/instances/481b51b1-c134-4c7a-af94-d3d3794a7971/console.log _ensure_console_log_for_instance /usr/lib/python3.12/site-packages/nova/virt/libvirt/driver.py:5017
Oct 09 16:14:15 compute-0 nova_compute[117331]: 2025-10-09 16:14:15.681 2 DEBUG oslo_concurrency.lockutils [None req-22ab0963-25ed-4fe0-9217-535d7efa60df e5044998ddc3419bb14cc08417add581 be4e2f9059cd48f5b44a612256e3fc7b - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:405
Oct 09 16:14:15 compute-0 nova_compute[117331]: 2025-10-09 16:14:15.681 2 DEBUG oslo_concurrency.lockutils [None req-22ab0963-25ed-4fe0-9217-535d7efa60df e5044998ddc3419bb14cc08417add581 be4e2f9059cd48f5b44a612256e3fc7b - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:410
Oct 09 16:14:15 compute-0 nova_compute[117331]: 2025-10-09 16:14:15.682 2 DEBUG oslo_concurrency.lockutils [None req-22ab0963-25ed-4fe0-9217-535d7efa60df e5044998ddc3419bb14cc08417add581 be4e2f9059cd48f5b44a612256e3fc7b - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:424
Oct 09 16:14:15 compute-0 nova_compute[117331]: 2025-10-09 16:14:15.762 2 DEBUG nova.network.neutron [req-fc023fb5-016c-4d4a-b8ec-6155fe80b925 req-ded81450-31fe-4749-b9ab-b099e4a41761 ee15961ad2be4bada6c106ac4f69f422 c076c42271004e7aa86e84416e33f826 - - default default] [instance: 481b51b1-c134-4c7a-af94-d3d3794a7971] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.12/site-packages/nova/network/neutron.py:116
Oct 09 16:14:16 compute-0 sshd-session[142439]: Connection closed by invalid user esearch 134.199.199.215 port 52714 [preauth]
Oct 09 16:14:16 compute-0 nova_compute[117331]: 2025-10-09 16:14:16.268 2 DEBUG oslo_concurrency.lockutils [req-fc023fb5-016c-4d4a-b8ec-6155fe80b925 req-ded81450-31fe-4749-b9ab-b099e4a41761 ee15961ad2be4bada6c106ac4f69f422 c076c42271004e7aa86e84416e33f826 - - default default] Releasing lock "refresh_cache-481b51b1-c134-4c7a-af94-d3d3794a7971" lock /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:334
Oct 09 16:14:16 compute-0 nova_compute[117331]: 2025-10-09 16:14:16.269 2 DEBUG oslo_concurrency.lockutils [None req-22ab0963-25ed-4fe0-9217-535d7efa60df e5044998ddc3419bb14cc08417add581 be4e2f9059cd48f5b44a612256e3fc7b - - default default] Acquired lock "refresh_cache-481b51b1-c134-4c7a-af94-d3d3794a7971" lock /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:316
Oct 09 16:14:16 compute-0 nova_compute[117331]: 2025-10-09 16:14:16.269 2 DEBUG nova.network.neutron [None req-22ab0963-25ed-4fe0-9217-535d7efa60df e5044998ddc3419bb14cc08417add581 be4e2f9059cd48f5b44a612256e3fc7b - - default default] [instance: 481b51b1-c134-4c7a-af94-d3d3794a7971] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.12/site-packages/nova/network/neutron.py:2070
Oct 09 16:14:16 compute-0 sshd-session[142442]: Failed password for invalid user user1 from 134.199.199.215 port 52730 ssh2
Oct 09 16:14:16 compute-0 podman[142459]: 2025-10-09 16:14:16.816210363 +0000 UTC m=+0.055064696 container health_status 270830b5167cafbab735d8118980f26987e673cd0fa464dd2202394830bcca66 (image=38.102.83.66:5001/podified-master-centos10/openstack-multipathd:watcher_latest, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251007, org.label-schema.vendor=CentOS, tcib_build_tag=watcher_latest, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': '38.102.83.66:5001/podified-master-centos10/openstack-multipathd:watcher_latest', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.schema-version=1.0, config_id=multipathd, container_name=multipathd, io.buildah.version=1.41.4, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible)
Oct 09 16:14:17 compute-0 nova_compute[117331]: 2025-10-09 16:14:17.672 2 DEBUG nova.network.neutron [None req-22ab0963-25ed-4fe0-9217-535d7efa60df e5044998ddc3419bb14cc08417add581 be4e2f9059cd48f5b44a612256e3fc7b - - default default] [instance: 481b51b1-c134-4c7a-af94-d3d3794a7971] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.12/site-packages/nova/network/neutron.py:3383
Oct 09 16:14:17 compute-0 sshd-session[142442]: Connection closed by invalid user user1 134.199.199.215 port 52730 [preauth]
Oct 09 16:14:18 compute-0 nova_compute[117331]: 2025-10-09 16:14:18.124 2 DEBUG oslo_service.periodic_task [None req-92a13bed-c081-4c7b-87f3-bafebefb79f2 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.12/site-packages/oslo_service/periodic_task.py:210
Oct 09 16:14:18 compute-0 unix_chkpwd[142482]: password check failed for user (root)
Oct 09 16:14:18 compute-0 sshd-session[142480]: pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=134.199.199.215  user=root
Oct 09 16:14:18 compute-0 nova_compute[117331]: 2025-10-09 16:14:18.635 2 DEBUG oslo_service.periodic_task [None req-92a13bed-c081-4c7b-87f3-bafebefb79f2 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.12/site-packages/oslo_service/periodic_task.py:210
Oct 09 16:14:18 compute-0 nova_compute[117331]: 2025-10-09 16:14:18.697 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 09 16:14:18 compute-0 nova_compute[117331]: 2025-10-09 16:14:18.725 2 WARNING neutronclient.v2_0.client [None req-22ab0963-25ed-4fe0-9217-535d7efa60df e5044998ddc3419bb14cc08417add581 be4e2f9059cd48f5b44a612256e3fc7b - - default default] The python binding code in neutronclient is deprecated in favor of OpenstackSDK, please use that as this will be removed in a future release.
Oct 09 16:14:18 compute-0 nova_compute[117331]: 2025-10-09 16:14:18.855 2 DEBUG nova.network.neutron [None req-22ab0963-25ed-4fe0-9217-535d7efa60df e5044998ddc3419bb14cc08417add581 be4e2f9059cd48f5b44a612256e3fc7b - - default default] [instance: 481b51b1-c134-4c7a-af94-d3d3794a7971] Updating instance_info_cache with network_info: [{"id": "dc640852-4b77-4312-9033-7ea4a70836d4", "address": "fa:16:3e:79:a2:c5", "network": {"id": "7ed4244a-b510-4df6-9ffd-2f86603932fc", "bridge": "br-int", "label": "tempest-TestExecuteActionsViaActuator-1219246163-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "153d68ab33e64b958060574bb1741725", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapdc640852-4b", "ovs_interfaceid": "dc640852-4b77-4312-9033-7ea4a70836d4", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.12/site-packages/nova/network/neutron.py:116
Oct 09 16:14:19 compute-0 nova_compute[117331]: 2025-10-09 16:14:19.362 2 DEBUG oslo_concurrency.lockutils [None req-22ab0963-25ed-4fe0-9217-535d7efa60df e5044998ddc3419bb14cc08417add581 be4e2f9059cd48f5b44a612256e3fc7b - - default default] Releasing lock "refresh_cache-481b51b1-c134-4c7a-af94-d3d3794a7971" lock /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:334
Oct 09 16:14:19 compute-0 nova_compute[117331]: 2025-10-09 16:14:19.363 2 DEBUG nova.compute.manager [None req-22ab0963-25ed-4fe0-9217-535d7efa60df e5044998ddc3419bb14cc08417add581 be4e2f9059cd48f5b44a612256e3fc7b - - default default] [instance: 481b51b1-c134-4c7a-af94-d3d3794a7971] Instance network_info: |[{"id": "dc640852-4b77-4312-9033-7ea4a70836d4", "address": "fa:16:3e:79:a2:c5", "network": {"id": "7ed4244a-b510-4df6-9ffd-2f86603932fc", "bridge": "br-int", "label": "tempest-TestExecuteActionsViaActuator-1219246163-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "153d68ab33e64b958060574bb1741725", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapdc640852-4b", "ovs_interfaceid": "dc640852-4b77-4312-9033-7ea4a70836d4", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.12/site-packages/nova/compute/manager.py:2031
Oct 09 16:14:19 compute-0 nova_compute[117331]: 2025-10-09 16:14:19.365 2 DEBUG nova.virt.libvirt.driver [None req-22ab0963-25ed-4fe0-9217-535d7efa60df e5044998ddc3419bb14cc08417add581 be4e2f9059cd48f5b44a612256e3fc7b - - default default] [instance: 481b51b1-c134-4c7a-af94-d3d3794a7971] Start _get_guest_xml network_info=[{"id": "dc640852-4b77-4312-9033-7ea4a70836d4", "address": "fa:16:3e:79:a2:c5", "network": {"id": "7ed4244a-b510-4df6-9ffd-2f86603932fc", "bridge": "br-int", "label": "tempest-TestExecuteActionsViaActuator-1219246163-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "153d68ab33e64b958060574bb1741725", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapdc640852-4b", "ovs_interfaceid": "dc640852-4b77-4312-9033-7ea4a70836d4", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-09T16:07:02Z,direct_url=<?>,disk_format='qcow2',id=b7d6e0af-25e4-4227-9dc6-43143898ceee,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='b30e8cf5e10742f190212b4cb97ce2c9',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-09T16:07:04Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'device_type': 'disk', 'encrypted': False, 'size': 0, 'encryption_format': None, 'guest_format': None, 'boot_index': 0, 'device_name': '/dev/vda', 'disk_bus': 'virtio', 'encryption_options': None, 'encryption_secret_uuid': None, 'image_id': 'b7d6e0af-25e4-4227-9dc6-43143898ceee'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None}share_info=None _get_guest_xml /usr/lib/python3.12/site-packages/nova/virt/libvirt/driver.py:8046
Oct 09 16:14:19 compute-0 nova_compute[117331]: 2025-10-09 16:14:19.369 2 WARNING nova.virt.libvirt.driver [None req-22ab0963-25ed-4fe0-9217-535d7efa60df e5044998ddc3419bb14cc08417add581 be4e2f9059cd48f5b44a612256e3fc7b - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Oct 09 16:14:19 compute-0 nova_compute[117331]: 2025-10-09 16:14:19.371 2 DEBUG nova.virt.driver [None req-22ab0963-25ed-4fe0-9217-535d7efa60df e5044998ddc3419bb14cc08417add581 be4e2f9059cd48f5b44a612256e3fc7b - - default default] InstanceDriverMetadata: InstanceDriverMetadata(root_type='image', root_id='b7d6e0af-25e4-4227-9dc6-43143898ceee', instance_meta=NovaInstanceMeta(name='tempest-TestExecuteActionsViaActuator-server-1479250465', uuid='481b51b1-c134-4c7a-af94-d3d3794a7971'), owner=OwnerMeta(userid='e5044998ddc3419bb14cc08417add581', username='tempest-TestExecuteActionsViaActuator-1347788182-project-admin', projectid='be4e2f9059cd48f5b44a612256e3fc7b', projectname='tempest-TestExecuteActionsViaActuator-1347788182'), image=ImageMeta(id='b7d6e0af-25e4-4227-9dc6-43143898ceee', name=None, container_format='bare', disk_format='qcow2', min_disk=1, min_ram=0, properties={'hw_rng_model': 'virtio'}), flavor=FlavorMeta(name='m1.nano', flavorid='5aeb5bf9-70f3-4ab6-b418-2c3319a7d2b3', memory_mb=128, vcpus=1, root_gb=1, ephemeral_gb=0, extra_specs={'hw_rng:allowed': 'True'}, swap=0), network_info=[{"id": "dc640852-4b77-4312-9033-7ea4a70836d4", "address": "fa:16:3e:79:a2:c5", "network": {"id": "7ed4244a-b510-4df6-9ffd-2f86603932fc", "bridge": "br-int", "label": "tempest-TestExecuteActionsViaActuator-1219246163-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "153d68ab33e64b958060574bb1741725", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapdc640852-4b", "ovs_interfaceid": "dc640852-4b77-4312-9033-7ea4a70836d4", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}], nova_package='32.1.0-0.20251008143712.076498e.el10', creation_time=1760026459.3709512) get_instance_driver_metadata /usr/lib/python3.12/site-packages/nova/virt/driver.py:437
Oct 09 16:14:19 compute-0 nova_compute[117331]: 2025-10-09 16:14:19.376 2 DEBUG nova.virt.libvirt.host [None req-22ab0963-25ed-4fe0-9217-535d7efa60df e5044998ddc3419bb14cc08417add581 be4e2f9059cd48f5b44a612256e3fc7b - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.12/site-packages/nova/virt/libvirt/host.py:1698
Oct 09 16:14:19 compute-0 nova_compute[117331]: 2025-10-09 16:14:19.378 2 DEBUG nova.virt.libvirt.host [None req-22ab0963-25ed-4fe0-9217-535d7efa60df e5044998ddc3419bb14cc08417add581 be4e2f9059cd48f5b44a612256e3fc7b - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.12/site-packages/nova/virt/libvirt/host.py:1708
Oct 09 16:14:19 compute-0 nova_compute[117331]: 2025-10-09 16:14:19.382 2 DEBUG nova.virt.libvirt.host [None req-22ab0963-25ed-4fe0-9217-535d7efa60df e5044998ddc3419bb14cc08417add581 be4e2f9059cd48f5b44a612256e3fc7b - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.12/site-packages/nova/virt/libvirt/host.py:1717
Oct 09 16:14:19 compute-0 nova_compute[117331]: 2025-10-09 16:14:19.383 2 DEBUG nova.virt.libvirt.host [None req-22ab0963-25ed-4fe0-9217-535d7efa60df e5044998ddc3419bb14cc08417add581 be4e2f9059cd48f5b44a612256e3fc7b - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.12/site-packages/nova/virt/libvirt/host.py:1724
Oct 09 16:14:19 compute-0 nova_compute[117331]: 2025-10-09 16:14:19.384 2 DEBUG nova.virt.libvirt.driver [None req-22ab0963-25ed-4fe0-9217-535d7efa60df e5044998ddc3419bb14cc08417add581 be4e2f9059cd48f5b44a612256e3fc7b - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.12/site-packages/nova/virt/libvirt/driver.py:5809
Oct 09 16:14:19 compute-0 nova_compute[117331]: 2025-10-09 16:14:19.384 2 DEBUG nova.virt.hardware [None req-22ab0963-25ed-4fe0-9217-535d7efa60df e5044998ddc3419bb14cc08417add581 be4e2f9059cd48f5b44a612256e3fc7b - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-10-09T16:07:00Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='5aeb5bf9-70f3-4ab6-b418-2c3319a7d2b3',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-09T16:07:02Z,direct_url=<?>,disk_format='qcow2',id=b7d6e0af-25e4-4227-9dc6-43143898ceee,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='b30e8cf5e10742f190212b4cb97ce2c9',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-09T16:07:04Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.12/site-packages/nova/virt/hardware.py:571
Oct 09 16:14:19 compute-0 nova_compute[117331]: 2025-10-09 16:14:19.385 2 DEBUG nova.virt.hardware [None req-22ab0963-25ed-4fe0-9217-535d7efa60df e5044998ddc3419bb14cc08417add581 be4e2f9059cd48f5b44a612256e3fc7b - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.12/site-packages/nova/virt/hardware.py:356
Oct 09 16:14:19 compute-0 nova_compute[117331]: 2025-10-09 16:14:19.386 2 DEBUG nova.virt.hardware [None req-22ab0963-25ed-4fe0-9217-535d7efa60df e5044998ddc3419bb14cc08417add581 be4e2f9059cd48f5b44a612256e3fc7b - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.12/site-packages/nova/virt/hardware.py:360
Oct 09 16:14:19 compute-0 nova_compute[117331]: 2025-10-09 16:14:19.386 2 DEBUG nova.virt.hardware [None req-22ab0963-25ed-4fe0-9217-535d7efa60df e5044998ddc3419bb14cc08417add581 be4e2f9059cd48f5b44a612256e3fc7b - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.12/site-packages/nova/virt/hardware.py:396
Oct 09 16:14:19 compute-0 nova_compute[117331]: 2025-10-09 16:14:19.386 2 DEBUG nova.virt.hardware [None req-22ab0963-25ed-4fe0-9217-535d7efa60df e5044998ddc3419bb14cc08417add581 be4e2f9059cd48f5b44a612256e3fc7b - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.12/site-packages/nova/virt/hardware.py:400
Oct 09 16:14:19 compute-0 nova_compute[117331]: 2025-10-09 16:14:19.387 2 DEBUG nova.virt.hardware [None req-22ab0963-25ed-4fe0-9217-535d7efa60df e5044998ddc3419bb14cc08417add581 be4e2f9059cd48f5b44a612256e3fc7b - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.12/site-packages/nova/virt/hardware.py:438
Oct 09 16:14:19 compute-0 nova_compute[117331]: 2025-10-09 16:14:19.387 2 DEBUG nova.virt.hardware [None req-22ab0963-25ed-4fe0-9217-535d7efa60df e5044998ddc3419bb14cc08417add581 be4e2f9059cd48f5b44a612256e3fc7b - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.12/site-packages/nova/virt/hardware.py:577
Oct 09 16:14:19 compute-0 nova_compute[117331]: 2025-10-09 16:14:19.387 2 DEBUG nova.virt.hardware [None req-22ab0963-25ed-4fe0-9217-535d7efa60df e5044998ddc3419bb14cc08417add581 be4e2f9059cd48f5b44a612256e3fc7b - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.12/site-packages/nova/virt/hardware.py:479
Oct 09 16:14:19 compute-0 nova_compute[117331]: 2025-10-09 16:14:19.387 2 DEBUG nova.virt.hardware [None req-22ab0963-25ed-4fe0-9217-535d7efa60df e5044998ddc3419bb14cc08417add581 be4e2f9059cd48f5b44a612256e3fc7b - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.12/site-packages/nova/virt/hardware.py:509
Oct 09 16:14:19 compute-0 nova_compute[117331]: 2025-10-09 16:14:19.388 2 DEBUG nova.virt.hardware [None req-22ab0963-25ed-4fe0-9217-535d7efa60df e5044998ddc3419bb14cc08417add581 be4e2f9059cd48f5b44a612256e3fc7b - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.12/site-packages/nova/virt/hardware.py:583
Oct 09 16:14:19 compute-0 nova_compute[117331]: 2025-10-09 16:14:19.388 2 DEBUG nova.virt.hardware [None req-22ab0963-25ed-4fe0-9217-535d7efa60df e5044998ddc3419bb14cc08417add581 be4e2f9059cd48f5b44a612256e3fc7b - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.12/site-packages/nova/virt/hardware.py:585
Oct 09 16:14:19 compute-0 nova_compute[117331]: 2025-10-09 16:14:19.393 2 DEBUG nova.virt.libvirt.vif [None req-22ab0963-25ed-4fe0-9217-535d7efa60df e5044998ddc3419bb14cc08417add581 be4e2f9059cd48f5b44a612256e3fc7b - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,compute_id=1,config_drive='',created_at=2025-10-09T16:14:07Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description=None,display_name='tempest-TestExecuteActionsViaActuator-server-1479250465',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testexecuteactionsviaactuator-server-1479250465',id=4,image_ref='b7d6e0af-25e4-4227-9dc6-43143898ceee',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='be4e2f9059cd48f5b44a612256e3fc7b',ramdisk_id='',reservation_id='r-7c00rued',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member,manager,admin',image_base_image_ref='b7d6e0af-25e4-4227-9dc6-43143898ceee',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestExecuteActionsViaActuator-1347788182',owner_user_name='tempest-TestExecuteActionsViaActuator-1347788182-project-admin'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-09T16:14:14Z,user_data=None,user_id='e5044998ddc3419bb14cc08417add581',uuid=481b51b1-c134-4c7a-af94-d3d3794a7971,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "dc640852-4b77-4312-9033-7ea4a70836d4", "address": "fa:16:3e:79:a2:c5", "network": {"id": "7ed4244a-b510-4df6-9ffd-2f86603932fc", "bridge": "br-int", "label": "tempest-TestExecuteActionsViaActuator-1219246163-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "153d68ab33e64b958060574bb1741725", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapdc640852-4b", "ovs_interfaceid": "dc640852-4b77-4312-9033-7ea4a70836d4", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.12/site-packages/nova/virt/libvirt/vif.py:574
Oct 09 16:14:19 compute-0 nova_compute[117331]: 2025-10-09 16:14:19.393 2 DEBUG nova.network.os_vif_util [None req-22ab0963-25ed-4fe0-9217-535d7efa60df e5044998ddc3419bb14cc08417add581 be4e2f9059cd48f5b44a612256e3fc7b - - default default] Converting VIF {"id": "dc640852-4b77-4312-9033-7ea4a70836d4", "address": "fa:16:3e:79:a2:c5", "network": {"id": "7ed4244a-b510-4df6-9ffd-2f86603932fc", "bridge": "br-int", "label": "tempest-TestExecuteActionsViaActuator-1219246163-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "153d68ab33e64b958060574bb1741725", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapdc640852-4b", "ovs_interfaceid": "dc640852-4b77-4312-9033-7ea4a70836d4", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.12/site-packages/nova/network/os_vif_util.py:511
Oct 09 16:14:19 compute-0 nova_compute[117331]: 2025-10-09 16:14:19.394 2 DEBUG nova.network.os_vif_util [None req-22ab0963-25ed-4fe0-9217-535d7efa60df e5044998ddc3419bb14cc08417add581 be4e2f9059cd48f5b44a612256e3fc7b - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:79:a2:c5,bridge_name='br-int',has_traffic_filtering=True,id=dc640852-4b77-4312-9033-7ea4a70836d4,network=Network(7ed4244a-b510-4df6-9ffd-2f86603932fc),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapdc640852-4b') nova_to_osvif_vif /usr/lib/python3.12/site-packages/nova/network/os_vif_util.py:548
Oct 09 16:14:19 compute-0 nova_compute[117331]: 2025-10-09 16:14:19.395 2 DEBUG nova.objects.instance [None req-22ab0963-25ed-4fe0-9217-535d7efa60df e5044998ddc3419bb14cc08417add581 be4e2f9059cd48f5b44a612256e3fc7b - - default default] Lazy-loading 'pci_devices' on Instance uuid 481b51b1-c134-4c7a-af94-d3d3794a7971 obj_load_attr /usr/lib/python3.12/site-packages/nova/objects/instance.py:1141
Oct 09 16:14:19 compute-0 nova_compute[117331]: 2025-10-09 16:14:19.902 2 DEBUG nova.virt.libvirt.driver [None req-22ab0963-25ed-4fe0-9217-535d7efa60df e5044998ddc3419bb14cc08417add581 be4e2f9059cd48f5b44a612256e3fc7b - - default default] [instance: 481b51b1-c134-4c7a-af94-d3d3794a7971] End _get_guest_xml xml=<domain type="kvm">
Oct 09 16:14:19 compute-0 nova_compute[117331]:   <uuid>481b51b1-c134-4c7a-af94-d3d3794a7971</uuid>
Oct 09 16:14:19 compute-0 nova_compute[117331]:   <name>instance-00000004</name>
Oct 09 16:14:19 compute-0 nova_compute[117331]:   <memory>131072</memory>
Oct 09 16:14:19 compute-0 nova_compute[117331]:   <vcpu>1</vcpu>
Oct 09 16:14:19 compute-0 nova_compute[117331]:   <metadata>
Oct 09 16:14:19 compute-0 nova_compute[117331]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Oct 09 16:14:19 compute-0 nova_compute[117331]:       <nova:package version="32.1.0-0.20251008143712.076498e.el10"/>
Oct 09 16:14:19 compute-0 nova_compute[117331]:       <nova:name>tempest-TestExecuteActionsViaActuator-server-1479250465</nova:name>
Oct 09 16:14:19 compute-0 nova_compute[117331]:       <nova:creationTime>2025-10-09 16:14:19</nova:creationTime>
Oct 09 16:14:19 compute-0 nova_compute[117331]:       <nova:flavor name="m1.nano" id="5aeb5bf9-70f3-4ab6-b418-2c3319a7d2b3">
Oct 09 16:14:19 compute-0 nova_compute[117331]:         <nova:memory>128</nova:memory>
Oct 09 16:14:19 compute-0 nova_compute[117331]:         <nova:disk>1</nova:disk>
Oct 09 16:14:19 compute-0 nova_compute[117331]:         <nova:swap>0</nova:swap>
Oct 09 16:14:19 compute-0 nova_compute[117331]:         <nova:ephemeral>0</nova:ephemeral>
Oct 09 16:14:19 compute-0 nova_compute[117331]:         <nova:vcpus>1</nova:vcpus>
Oct 09 16:14:19 compute-0 nova_compute[117331]:         <nova:extraSpecs>
Oct 09 16:14:19 compute-0 nova_compute[117331]:           <nova:extraSpec name="hw_rng:allowed">True</nova:extraSpec>
Oct 09 16:14:19 compute-0 nova_compute[117331]:         </nova:extraSpecs>
Oct 09 16:14:19 compute-0 nova_compute[117331]:       </nova:flavor>
Oct 09 16:14:19 compute-0 nova_compute[117331]:       <nova:image uuid="b7d6e0af-25e4-4227-9dc6-43143898ceee">
Oct 09 16:14:19 compute-0 nova_compute[117331]:         <nova:containerFormat>bare</nova:containerFormat>
Oct 09 16:14:19 compute-0 nova_compute[117331]:         <nova:diskFormat>qcow2</nova:diskFormat>
Oct 09 16:14:19 compute-0 nova_compute[117331]:         <nova:minDisk>1</nova:minDisk>
Oct 09 16:14:19 compute-0 nova_compute[117331]:         <nova:minRam>0</nova:minRam>
Oct 09 16:14:19 compute-0 nova_compute[117331]:         <nova:properties>
Oct 09 16:14:19 compute-0 nova_compute[117331]:           <nova:property name="hw_rng_model">virtio</nova:property>
Oct 09 16:14:19 compute-0 nova_compute[117331]:         </nova:properties>
Oct 09 16:14:19 compute-0 nova_compute[117331]:       </nova:image>
Oct 09 16:14:19 compute-0 nova_compute[117331]:       <nova:owner>
Oct 09 16:14:19 compute-0 nova_compute[117331]:         <nova:user uuid="e5044998ddc3419bb14cc08417add581">tempest-TestExecuteActionsViaActuator-1347788182-project-admin</nova:user>
Oct 09 16:14:19 compute-0 nova_compute[117331]:         <nova:project uuid="be4e2f9059cd48f5b44a612256e3fc7b">tempest-TestExecuteActionsViaActuator-1347788182</nova:project>
Oct 09 16:14:19 compute-0 nova_compute[117331]:       </nova:owner>
Oct 09 16:14:19 compute-0 nova_compute[117331]:       <nova:root type="image" uuid="b7d6e0af-25e4-4227-9dc6-43143898ceee"/>
Oct 09 16:14:19 compute-0 nova_compute[117331]:       <nova:ports>
Oct 09 16:14:19 compute-0 nova_compute[117331]:         <nova:port uuid="dc640852-4b77-4312-9033-7ea4a70836d4">
Oct 09 16:14:19 compute-0 nova_compute[117331]:           <nova:ip type="fixed" address="10.100.0.7" ipVersion="4"/>
Oct 09 16:14:19 compute-0 nova_compute[117331]:         </nova:port>
Oct 09 16:14:19 compute-0 nova_compute[117331]:       </nova:ports>
Oct 09 16:14:19 compute-0 nova_compute[117331]:     </nova:instance>
Oct 09 16:14:19 compute-0 nova_compute[117331]:   </metadata>
Oct 09 16:14:19 compute-0 nova_compute[117331]:   <sysinfo type="smbios">
Oct 09 16:14:19 compute-0 nova_compute[117331]:     <system>
Oct 09 16:14:19 compute-0 nova_compute[117331]:       <entry name="manufacturer">RDO</entry>
Oct 09 16:14:19 compute-0 nova_compute[117331]:       <entry name="product">OpenStack Compute</entry>
Oct 09 16:14:19 compute-0 nova_compute[117331]:       <entry name="version">32.1.0-0.20251008143712.076498e.el10</entry>
Oct 09 16:14:19 compute-0 nova_compute[117331]:       <entry name="serial">481b51b1-c134-4c7a-af94-d3d3794a7971</entry>
Oct 09 16:14:19 compute-0 nova_compute[117331]:       <entry name="uuid">481b51b1-c134-4c7a-af94-d3d3794a7971</entry>
Oct 09 16:14:19 compute-0 nova_compute[117331]:       <entry name="family">Virtual Machine</entry>
Oct 09 16:14:19 compute-0 nova_compute[117331]:     </system>
Oct 09 16:14:19 compute-0 nova_compute[117331]:   </sysinfo>
Oct 09 16:14:19 compute-0 nova_compute[117331]:   <os>
Oct 09 16:14:19 compute-0 nova_compute[117331]:     <type arch="x86_64" machine="q35">hvm</type>
Oct 09 16:14:19 compute-0 nova_compute[117331]:     <boot dev="hd"/>
Oct 09 16:14:19 compute-0 nova_compute[117331]:     <smbios mode="sysinfo"/>
Oct 09 16:14:19 compute-0 nova_compute[117331]:   </os>
Oct 09 16:14:19 compute-0 nova_compute[117331]:   <features>
Oct 09 16:14:19 compute-0 nova_compute[117331]:     <acpi/>
Oct 09 16:14:19 compute-0 nova_compute[117331]:     <apic/>
Oct 09 16:14:19 compute-0 nova_compute[117331]:     <vmcoreinfo/>
Oct 09 16:14:19 compute-0 nova_compute[117331]:   </features>
Oct 09 16:14:19 compute-0 nova_compute[117331]:   <clock offset="utc">
Oct 09 16:14:19 compute-0 nova_compute[117331]:     <timer name="pit" tickpolicy="delay"/>
Oct 09 16:14:19 compute-0 nova_compute[117331]:     <timer name="rtc" tickpolicy="catchup"/>
Oct 09 16:14:19 compute-0 nova_compute[117331]:     <timer name="hpet" present="no"/>
Oct 09 16:14:19 compute-0 nova_compute[117331]:   </clock>
Oct 09 16:14:19 compute-0 nova_compute[117331]:   <cpu mode="host-model" match="exact">
Oct 09 16:14:19 compute-0 nova_compute[117331]:     <topology sockets="1" cores="1" threads="1"/>
Oct 09 16:14:19 compute-0 nova_compute[117331]:   </cpu>
Oct 09 16:14:19 compute-0 nova_compute[117331]:   <devices>
Oct 09 16:14:19 compute-0 nova_compute[117331]:     <disk type="file" device="disk">
Oct 09 16:14:19 compute-0 nova_compute[117331]:       <driver name="qemu" type="qcow2" cache="none"/>
Oct 09 16:14:19 compute-0 nova_compute[117331]:       <source file="/var/lib/nova/instances/481b51b1-c134-4c7a-af94-d3d3794a7971/disk"/>
Oct 09 16:14:19 compute-0 nova_compute[117331]:       <target dev="vda" bus="virtio"/>
Oct 09 16:14:19 compute-0 nova_compute[117331]:     </disk>
Oct 09 16:14:19 compute-0 nova_compute[117331]:     <disk type="file" device="cdrom">
Oct 09 16:14:19 compute-0 nova_compute[117331]:       <driver name="qemu" type="raw" cache="none"/>
Oct 09 16:14:19 compute-0 nova_compute[117331]:       <source file="/var/lib/nova/instances/481b51b1-c134-4c7a-af94-d3d3794a7971/disk.config"/>
Oct 09 16:14:19 compute-0 nova_compute[117331]:       <target dev="sda" bus="sata"/>
Oct 09 16:14:19 compute-0 nova_compute[117331]:     </disk>
Oct 09 16:14:19 compute-0 nova_compute[117331]:     <interface type="ethernet">
Oct 09 16:14:19 compute-0 nova_compute[117331]:       <mac address="fa:16:3e:79:a2:c5"/>
Oct 09 16:14:19 compute-0 nova_compute[117331]:       <model type="virtio"/>
Oct 09 16:14:19 compute-0 nova_compute[117331]:       <driver name="vhost" rx_queue_size="512"/>
Oct 09 16:14:19 compute-0 nova_compute[117331]:       <mtu size="1442"/>
Oct 09 16:14:19 compute-0 nova_compute[117331]:       <target dev="tapdc640852-4b"/>
Oct 09 16:14:19 compute-0 nova_compute[117331]:     </interface>
Oct 09 16:14:19 compute-0 nova_compute[117331]:     <serial type="pty">
Oct 09 16:14:19 compute-0 nova_compute[117331]:       <log file="/var/lib/nova/instances/481b51b1-c134-4c7a-af94-d3d3794a7971/console.log" append="off"/>
Oct 09 16:14:19 compute-0 nova_compute[117331]:     </serial>
Oct 09 16:14:19 compute-0 nova_compute[117331]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Oct 09 16:14:19 compute-0 nova_compute[117331]:     <video>
Oct 09 16:14:19 compute-0 nova_compute[117331]:       <model type="virtio"/>
Oct 09 16:14:19 compute-0 nova_compute[117331]:     </video>
Oct 09 16:14:19 compute-0 nova_compute[117331]:     <input type="tablet" bus="usb"/>
Oct 09 16:14:19 compute-0 nova_compute[117331]:     <rng model="virtio">
Oct 09 16:14:19 compute-0 nova_compute[117331]:       <backend model="random">/dev/urandom</backend>
Oct 09 16:14:19 compute-0 nova_compute[117331]:     </rng>
Oct 09 16:14:19 compute-0 nova_compute[117331]:     <controller type="pci" model="pcie-root"/>
Oct 09 16:14:19 compute-0 nova_compute[117331]:     <controller type="pci" model="pcie-root-port"/>
Oct 09 16:14:19 compute-0 nova_compute[117331]:     <controller type="pci" model="pcie-root-port"/>
Oct 09 16:14:19 compute-0 nova_compute[117331]:     <controller type="pci" model="pcie-root-port"/>
Oct 09 16:14:19 compute-0 nova_compute[117331]:     <controller type="pci" model="pcie-root-port"/>
Oct 09 16:14:19 compute-0 nova_compute[117331]:     <controller type="pci" model="pcie-root-port"/>
Oct 09 16:14:19 compute-0 nova_compute[117331]:     <controller type="pci" model="pcie-root-port"/>
Oct 09 16:14:19 compute-0 nova_compute[117331]:     <controller type="pci" model="pcie-root-port"/>
Oct 09 16:14:19 compute-0 nova_compute[117331]:     <controller type="pci" model="pcie-root-port"/>
Oct 09 16:14:19 compute-0 nova_compute[117331]:     <controller type="pci" model="pcie-root-port"/>
Oct 09 16:14:19 compute-0 nova_compute[117331]:     <controller type="pci" model="pcie-root-port"/>
Oct 09 16:14:19 compute-0 nova_compute[117331]:     <controller type="pci" model="pcie-root-port"/>
Oct 09 16:14:19 compute-0 nova_compute[117331]:     <controller type="pci" model="pcie-root-port"/>
Oct 09 16:14:19 compute-0 nova_compute[117331]:     <controller type="pci" model="pcie-root-port"/>
Oct 09 16:14:19 compute-0 nova_compute[117331]:     <controller type="pci" model="pcie-root-port"/>
Oct 09 16:14:19 compute-0 nova_compute[117331]:     <controller type="pci" model="pcie-root-port"/>
Oct 09 16:14:19 compute-0 nova_compute[117331]:     <controller type="pci" model="pcie-root-port"/>
Oct 09 16:14:19 compute-0 nova_compute[117331]:     <controller type="pci" model="pcie-root-port"/>
Oct 09 16:14:19 compute-0 nova_compute[117331]:     <controller type="pci" model="pcie-root-port"/>
Oct 09 16:14:19 compute-0 nova_compute[117331]:     <controller type="pci" model="pcie-root-port"/>
Oct 09 16:14:19 compute-0 nova_compute[117331]:     <controller type="pci" model="pcie-root-port"/>
Oct 09 16:14:19 compute-0 nova_compute[117331]:     <controller type="pci" model="pcie-root-port"/>
Oct 09 16:14:19 compute-0 nova_compute[117331]:     <controller type="pci" model="pcie-root-port"/>
Oct 09 16:14:19 compute-0 nova_compute[117331]:     <controller type="pci" model="pcie-root-port"/>
Oct 09 16:14:19 compute-0 nova_compute[117331]:     <controller type="pci" model="pcie-root-port"/>
Oct 09 16:14:19 compute-0 nova_compute[117331]:     <controller type="usb" index="0"/>
Oct 09 16:14:19 compute-0 nova_compute[117331]:     <memballoon model="virtio" autodeflate="on" freePageReporting="on">
Oct 09 16:14:19 compute-0 nova_compute[117331]:       <stats period="10"/>
Oct 09 16:14:19 compute-0 nova_compute[117331]:     </memballoon>
Oct 09 16:14:19 compute-0 nova_compute[117331]:   </devices>
Oct 09 16:14:19 compute-0 nova_compute[117331]: </domain>
Oct 09 16:14:19 compute-0 nova_compute[117331]:  _get_guest_xml /usr/lib/python3.12/site-packages/nova/virt/libvirt/driver.py:8052
Oct 09 16:14:19 compute-0 nova_compute[117331]: 2025-10-09 16:14:19.903 2 DEBUG nova.compute.manager [None req-22ab0963-25ed-4fe0-9217-535d7efa60df e5044998ddc3419bb14cc08417add581 be4e2f9059cd48f5b44a612256e3fc7b - - default default] [instance: 481b51b1-c134-4c7a-af94-d3d3794a7971] Preparing to wait for external event network-vif-plugged-dc640852-4b77-4312-9033-7ea4a70836d4 prepare_for_instance_event /usr/lib/python3.12/site-packages/nova/compute/manager.py:307
Oct 09 16:14:19 compute-0 nova_compute[117331]: 2025-10-09 16:14:19.904 2 DEBUG oslo_concurrency.lockutils [None req-22ab0963-25ed-4fe0-9217-535d7efa60df e5044998ddc3419bb14cc08417add581 be4e2f9059cd48f5b44a612256e3fc7b - - default default] Acquiring lock "481b51b1-c134-4c7a-af94-d3d3794a7971-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:405
Oct 09 16:14:19 compute-0 nova_compute[117331]: 2025-10-09 16:14:19.904 2 DEBUG oslo_concurrency.lockutils [None req-22ab0963-25ed-4fe0-9217-535d7efa60df e5044998ddc3419bb14cc08417add581 be4e2f9059cd48f5b44a612256e3fc7b - - default default] Lock "481b51b1-c134-4c7a-af94-d3d3794a7971-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:410
Oct 09 16:14:19 compute-0 nova_compute[117331]: 2025-10-09 16:14:19.904 2 DEBUG oslo_concurrency.lockutils [None req-22ab0963-25ed-4fe0-9217-535d7efa60df e5044998ddc3419bb14cc08417add581 be4e2f9059cd48f5b44a612256e3fc7b - - default default] Lock "481b51b1-c134-4c7a-af94-d3d3794a7971-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:424
Oct 09 16:14:19 compute-0 nova_compute[117331]: 2025-10-09 16:14:19.905 2 DEBUG nova.virt.libvirt.vif [None req-22ab0963-25ed-4fe0-9217-535d7efa60df e5044998ddc3419bb14cc08417add581 be4e2f9059cd48f5b44a612256e3fc7b - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,compute_id=1,config_drive='',created_at=2025-10-09T16:14:07Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description=None,display_name='tempest-TestExecuteActionsViaActuator-server-1479250465',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testexecuteactionsviaactuator-server-1479250465',id=4,image_ref='b7d6e0af-25e4-4227-9dc6-43143898ceee',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='be4e2f9059cd48f5b44a612256e3fc7b',ramdisk_id='',reservation_id='r-7c00rued',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member,manager,admin',image_base_image_ref='b7d6e0af-25e4-4227-9dc6-43143898ceee',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestExecuteActionsViaActuator-1347788182',owner_user_name='tempest-TestExecuteActionsViaActuator-1347788182-project-admin'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-09T16:14:14Z,user_data=None,user_id='e5044998ddc3419bb14cc08417add581',uuid=481b51b1-c134-4c7a-af94-d3d3794a7971,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "dc640852-4b77-4312-9033-7ea4a70836d4", "address": "fa:16:3e:79:a2:c5", "network": {"id": "7ed4244a-b510-4df6-9ffd-2f86603932fc", "bridge": "br-int", "label": "tempest-TestExecuteActionsViaActuator-1219246163-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "153d68ab33e64b958060574bb1741725", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapdc640852-4b", "ovs_interfaceid": "dc640852-4b77-4312-9033-7ea4a70836d4", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.12/site-packages/nova/virt/libvirt/vif.py:721
Oct 09 16:14:19 compute-0 nova_compute[117331]: 2025-10-09 16:14:19.905 2 DEBUG nova.network.os_vif_util [None req-22ab0963-25ed-4fe0-9217-535d7efa60df e5044998ddc3419bb14cc08417add581 be4e2f9059cd48f5b44a612256e3fc7b - - default default] Converting VIF {"id": "dc640852-4b77-4312-9033-7ea4a70836d4", "address": "fa:16:3e:79:a2:c5", "network": {"id": "7ed4244a-b510-4df6-9ffd-2f86603932fc", "bridge": "br-int", "label": "tempest-TestExecuteActionsViaActuator-1219246163-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "153d68ab33e64b958060574bb1741725", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapdc640852-4b", "ovs_interfaceid": "dc640852-4b77-4312-9033-7ea4a70836d4", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.12/site-packages/nova/network/os_vif_util.py:511
Oct 09 16:14:19 compute-0 nova_compute[117331]: 2025-10-09 16:14:19.905 2 DEBUG nova.network.os_vif_util [None req-22ab0963-25ed-4fe0-9217-535d7efa60df e5044998ddc3419bb14cc08417add581 be4e2f9059cd48f5b44a612256e3fc7b - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:79:a2:c5,bridge_name='br-int',has_traffic_filtering=True,id=dc640852-4b77-4312-9033-7ea4a70836d4,network=Network(7ed4244a-b510-4df6-9ffd-2f86603932fc),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapdc640852-4b') nova_to_osvif_vif /usr/lib/python3.12/site-packages/nova/network/os_vif_util.py:548
Oct 09 16:14:19 compute-0 nova_compute[117331]: 2025-10-09 16:14:19.906 2 DEBUG os_vif [None req-22ab0963-25ed-4fe0-9217-535d7efa60df e5044998ddc3419bb14cc08417add581 be4e2f9059cd48f5b44a612256e3fc7b - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:79:a2:c5,bridge_name='br-int',has_traffic_filtering=True,id=dc640852-4b77-4312-9033-7ea4a70836d4,network=Network(7ed4244a-b510-4df6-9ffd-2f86603932fc),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapdc640852-4b') plug /usr/lib/python3.12/site-packages/os_vif/__init__.py:76
Oct 09 16:14:19 compute-0 nova_compute[117331]: 2025-10-09 16:14:19.906 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 22 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 09 16:14:19 compute-0 nova_compute[117331]: 2025-10-09 16:14:19.906 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.12/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 09 16:14:19 compute-0 nova_compute[117331]: 2025-10-09 16:14:19.907 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.12/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 09 16:14:19 compute-0 nova_compute[117331]: 2025-10-09 16:14:19.907 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 22 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 09 16:14:19 compute-0 nova_compute[117331]: 2025-10-09 16:14:19.907 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbCreateCommand(_result=None, table=QoS, columns={'type': 'linux-noop', 'external_ids': {'id': '4be20c1b-e515-524e-adf4-bc44a115add7', '_type': 'linux-noop'}}, row=False) do_commit /usr/lib/python3.12/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 09 16:14:19 compute-0 nova_compute[117331]: 2025-10-09 16:14:19.908 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 09 16:14:19 compute-0 nova_compute[117331]: 2025-10-09 16:14:19.909 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 09 16:14:19 compute-0 nova_compute[117331]: 2025-10-09 16:14:19.910 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:248
Oct 09 16:14:19 compute-0 nova_compute[117331]: 2025-10-09 16:14:19.911 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 09 16:14:19 compute-0 nova_compute[117331]: 2025-10-09 16:14:19.913 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 22 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 09 16:14:19 compute-0 nova_compute[117331]: 2025-10-09 16:14:19.913 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapdc640852-4b, may_exist=True, interface_attrs={}) do_commit /usr/lib/python3.12/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 09 16:14:19 compute-0 nova_compute[117331]: 2025-10-09 16:14:19.913 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Port, record=tapdc640852-4b, col_values=(('qos', UUID('b64ef92e-d9ef-4b7a-b42b-a6bcde7e0e26')),), if_exists=True) do_commit /usr/lib/python3.12/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 09 16:14:19 compute-0 nova_compute[117331]: 2025-10-09 16:14:19.914 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=2): DbSetCommand(_result=None, table=Interface, record=tapdc640852-4b, col_values=(('external_ids', {'iface-id': 'dc640852-4b77-4312-9033-7ea4a70836d4', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:79:a2:c5', 'vm-uuid': '481b51b1-c134-4c7a-af94-d3d3794a7971'}),), if_exists=True) do_commit /usr/lib/python3.12/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 09 16:14:19 compute-0 NetworkManager[1028]: <info>  [1760026459.9160] manager: (tapdc640852-4b): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/31)
Oct 09 16:14:19 compute-0 nova_compute[117331]: 2025-10-09 16:14:19.915 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 09 16:14:19 compute-0 nova_compute[117331]: 2025-10-09 16:14:19.917 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:248
Oct 09 16:14:19 compute-0 nova_compute[117331]: 2025-10-09 16:14:19.923 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 09 16:14:19 compute-0 nova_compute[117331]: 2025-10-09 16:14:19.923 2 INFO os_vif [None req-22ab0963-25ed-4fe0-9217-535d7efa60df e5044998ddc3419bb14cc08417add581 be4e2f9059cd48f5b44a612256e3fc7b - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:79:a2:c5,bridge_name='br-int',has_traffic_filtering=True,id=dc640852-4b77-4312-9033-7ea4a70836d4,network=Network(7ed4244a-b510-4df6-9ffd-2f86603932fc),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapdc640852-4b')
Oct 09 16:14:20 compute-0 sshd-session[142480]: Failed password for root from 134.199.199.215 port 42984 ssh2
Oct 09 16:14:20 compute-0 sshd-session[142480]: Connection closed by authenticating user root 134.199.199.215 port 42984 [preauth]
Oct 09 16:14:20 compute-0 nova_compute[117331]: 2025-10-09 16:14:20.483 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 09 16:14:21 compute-0 nova_compute[117331]: 2025-10-09 16:14:21.457 2 DEBUG nova.virt.libvirt.driver [None req-22ab0963-25ed-4fe0-9217-535d7efa60df e5044998ddc3419bb14cc08417add581 be4e2f9059cd48f5b44a612256e3fc7b - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.12/site-packages/nova/virt/libvirt/driver.py:13022
Oct 09 16:14:21 compute-0 nova_compute[117331]: 2025-10-09 16:14:21.457 2 DEBUG nova.virt.libvirt.driver [None req-22ab0963-25ed-4fe0-9217-535d7efa60df e5044998ddc3419bb14cc08417add581 be4e2f9059cd48f5b44a612256e3fc7b - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.12/site-packages/nova/virt/libvirt/driver.py:13022
Oct 09 16:14:21 compute-0 nova_compute[117331]: 2025-10-09 16:14:21.458 2 DEBUG nova.virt.libvirt.driver [None req-22ab0963-25ed-4fe0-9217-535d7efa60df e5044998ddc3419bb14cc08417add581 be4e2f9059cd48f5b44a612256e3fc7b - - default default] No VIF found with MAC fa:16:3e:79:a2:c5, not building metadata _build_interface_metadata /usr/lib/python3.12/site-packages/nova/virt/libvirt/driver.py:12998
Oct 09 16:14:21 compute-0 nova_compute[117331]: 2025-10-09 16:14:21.458 2 INFO nova.virt.libvirt.driver [None req-22ab0963-25ed-4fe0-9217-535d7efa60df e5044998ddc3419bb14cc08417add581 be4e2f9059cd48f5b44a612256e3fc7b - - default default] [instance: 481b51b1-c134-4c7a-af94-d3d3794a7971] Using config drive
Oct 09 16:14:21 compute-0 unix_chkpwd[142487]: password check failed for user (root)
Oct 09 16:14:21 compute-0 sshd-session[142485]: pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=134.199.199.215  user=root
Oct 09 16:14:21 compute-0 podman[142488]: 2025-10-09 16:14:21.807264373 +0000 UTC m=+0.042206787 container health_status 86aa93e887899e54d383b0fc17fb3c5637680d1d8e146a130a557c837f0260fe (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']})
Oct 09 16:14:21 compute-0 nova_compute[117331]: 2025-10-09 16:14:21.968 2 WARNING neutronclient.v2_0.client [None req-22ab0963-25ed-4fe0-9217-535d7efa60df e5044998ddc3419bb14cc08417add581 be4e2f9059cd48f5b44a612256e3fc7b - - default default] The python binding code in neutronclient is deprecated in favor of OpenstackSDK, please use that as this will be removed in a future release.
Oct 09 16:14:22 compute-0 nova_compute[117331]: 2025-10-09 16:14:22.254 2 INFO nova.virt.libvirt.driver [None req-22ab0963-25ed-4fe0-9217-535d7efa60df e5044998ddc3419bb14cc08417add581 be4e2f9059cd48f5b44a612256e3fc7b - - default default] [instance: 481b51b1-c134-4c7a-af94-d3d3794a7971] Creating config drive at /var/lib/nova/instances/481b51b1-c134-4c7a-af94-d3d3794a7971/disk.config
Oct 09 16:14:22 compute-0 nova_compute[117331]: 2025-10-09 16:14:22.258 2 DEBUG oslo_concurrency.processutils [None req-22ab0963-25ed-4fe0-9217-535d7efa60df e5044998ddc3419bb14cc08417add581 be4e2f9059cd48f5b44a612256e3fc7b - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/481b51b1-c134-4c7a-af94-d3d3794a7971/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 32.1.0-0.20251008143712.076498e.el10 -quiet -J -r -V config-2 /tmp/tmpnm5hbi8y execute /usr/lib/python3.12/site-packages/oslo_concurrency/processutils.py:349
Oct 09 16:14:22 compute-0 nova_compute[117331]: 2025-10-09 16:14:22.381 2 DEBUG oslo_concurrency.processutils [None req-22ab0963-25ed-4fe0-9217-535d7efa60df e5044998ddc3419bb14cc08417add581 be4e2f9059cd48f5b44a612256e3fc7b - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/481b51b1-c134-4c7a-af94-d3d3794a7971/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 32.1.0-0.20251008143712.076498e.el10 -quiet -J -r -V config-2 /tmp/tmpnm5hbi8y" returned: 0 in 0.123s execute /usr/lib/python3.12/site-packages/oslo_concurrency/processutils.py:372
Oct 09 16:14:22 compute-0 kernel: tapdc640852-4b: entered promiscuous mode
Oct 09 16:14:22 compute-0 NetworkManager[1028]: <info>  [1760026462.4419] manager: (tapdc640852-4b): new Tun device (/org/freedesktop/NetworkManager/Devices/32)
Oct 09 16:14:22 compute-0 ovn_controller[19752]: 2025-10-09T16:14:22Z|00067|binding|INFO|Claiming lport dc640852-4b77-4312-9033-7ea4a70836d4 for this chassis.
Oct 09 16:14:22 compute-0 ovn_controller[19752]: 2025-10-09T16:14:22Z|00068|binding|INFO|dc640852-4b77-4312-9033-7ea4a70836d4: Claiming fa:16:3e:79:a2:c5 10.100.0.7
Oct 09 16:14:22 compute-0 nova_compute[117331]: 2025-10-09 16:14:22.444 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 09 16:14:22 compute-0 nova_compute[117331]: 2025-10-09 16:14:22.450 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 09 16:14:22 compute-0 ovn_metadata_agent[28608]: 2025-10-09 16:14:22.462 28613 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:79:a2:c5 10.100.0.7'], port_security=['fa:16:3e:79:a2:c5 10.100.0.7'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.7/28', 'neutron:device_id': '481b51b1-c134-4c7a-af94-d3d3794a7971', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-7ed4244a-b510-4df6-9ffd-2f86603932fc', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'be4e2f9059cd48f5b44a612256e3fc7b', 'neutron:revision_number': '4', 'neutron:security_group_ids': '92d94554-4438-4b2d-9e48-eb178205b5a1', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=8754009b-0d8f-4d41-af4c-a82e7b57a4f6, chassis=[<ovs.db.idl.Row object at 0x7f37b2576840>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f37b2576840>], logical_port=dc640852-4b77-4312-9033-7ea4a70836d4) old=Port_Binding(chassis=[]) matches /usr/lib/python3.12/site-packages/ovsdbapp/backend/ovs_idl/event.py:55
Oct 09 16:14:22 compute-0 ovn_metadata_agent[28608]: 2025-10-09 16:14:22.463 28613 INFO neutron.agent.ovn.metadata.agent [-] Port dc640852-4b77-4312-9033-7ea4a70836d4 in datapath 7ed4244a-b510-4df6-9ffd-2f86603932fc bound to our chassis
Oct 09 16:14:22 compute-0 ovn_metadata_agent[28608]: 2025-10-09 16:14:22.464 28613 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 7ed4244a-b510-4df6-9ffd-2f86603932fc
Oct 09 16:14:22 compute-0 systemd-udevd[142530]: Network interface NamePolicy= disabled on kernel command line.
Oct 09 16:14:22 compute-0 ovn_metadata_agent[28608]: 2025-10-09 16:14:22.474 139687 DEBUG oslo.privsep.daemon [-] privsep: reply[9e3c4332-f6f5-4abe-9018-5ec34e7cf30b]: (4, False) _call_back /usr/lib/python3.12/site-packages/oslo_privsep/daemon.py:515
Oct 09 16:14:22 compute-0 ovn_metadata_agent[28608]: 2025-10-09 16:14:22.475 28613 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap7ed4244a-b1 in ovnmeta-7ed4244a-b510-4df6-9ffd-2f86603932fc namespace provision_datapath /usr/lib/python3.12/site-packages/neutron/agent/ovn/metadata/agent.py:794
Oct 09 16:14:22 compute-0 ovn_metadata_agent[28608]: 2025-10-09 16:14:22.477 139687 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap7ed4244a-b0 not found in namespace None get_link_id /usr/lib/python3.12/site-packages/neutron/privileged/agent/linux/ip_lib.py:203
Oct 09 16:14:22 compute-0 ovn_metadata_agent[28608]: 2025-10-09 16:14:22.477 139687 DEBUG oslo.privsep.daemon [-] privsep: reply[39ece710-60a3-422d-bd7a-4a0a2e72973f]: (4, False) _call_back /usr/lib/python3.12/site-packages/oslo_privsep/daemon.py:515
Oct 09 16:14:22 compute-0 ovn_metadata_agent[28608]: 2025-10-09 16:14:22.477 139687 DEBUG oslo.privsep.daemon [-] privsep: reply[31a7b1c2-4829-4575-99e2-53d8cb81c912]: (4, False) _call_back /usr/lib/python3.12/site-packages/oslo_privsep/daemon.py:515
Oct 09 16:14:22 compute-0 NetworkManager[1028]: <info>  [1760026462.4828] device (tapdc640852-4b): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Oct 09 16:14:22 compute-0 NetworkManager[1028]: <info>  [1760026462.4841] device (tapdc640852-4b): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Oct 09 16:14:22 compute-0 systemd-machined[77487]: New machine qemu-3-instance-00000004.
Oct 09 16:14:22 compute-0 ovn_metadata_agent[28608]: 2025-10-09 16:14:22.488 28727 DEBUG oslo.privsep.daemon [-] privsep: reply[244865c5-6aa5-45da-b195-66914b0291b5]: (4, None) _call_back /usr/lib/python3.12/site-packages/oslo_privsep/daemon.py:515
Oct 09 16:14:22 compute-0 nova_compute[117331]: 2025-10-09 16:14:22.504 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 09 16:14:22 compute-0 ovn_controller[19752]: 2025-10-09T16:14:22Z|00069|binding|INFO|Setting lport dc640852-4b77-4312-9033-7ea4a70836d4 ovn-installed in OVS
Oct 09 16:14:22 compute-0 ovn_controller[19752]: 2025-10-09T16:14:22Z|00070|binding|INFO|Setting lport dc640852-4b77-4312-9033-7ea4a70836d4 up in Southbound
Oct 09 16:14:22 compute-0 systemd[1]: Started Virtual Machine qemu-3-instance-00000004.
Oct 09 16:14:22 compute-0 nova_compute[117331]: 2025-10-09 16:14:22.510 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 09 16:14:22 compute-0 ovn_metadata_agent[28608]: 2025-10-09 16:14:22.511 139687 DEBUG oslo.privsep.daemon [-] privsep: reply[d055e291-5d01-4bc6-95e9-af00544874cc]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.12/site-packages/oslo_privsep/daemon.py:515
Oct 09 16:14:22 compute-0 ovn_metadata_agent[28608]: 2025-10-09 16:14:22.544 141048 DEBUG oslo.privsep.daemon [-] privsep: reply[cfdf5d57-d416-4755-8bfc-46899fbd8eff]: (4, None) _call_back /usr/lib/python3.12/site-packages/oslo_privsep/daemon.py:515
Oct 09 16:14:22 compute-0 systemd-udevd[142534]: Network interface NamePolicy= disabled on kernel command line.
Oct 09 16:14:22 compute-0 ovn_metadata_agent[28608]: 2025-10-09 16:14:22.550 139687 DEBUG oslo.privsep.daemon [-] privsep: reply[13349549-a2a5-44bf-a5c1-20e7ebdd2402]: (4, None) _call_back /usr/lib/python3.12/site-packages/oslo_privsep/daemon.py:515
Oct 09 16:14:22 compute-0 NetworkManager[1028]: <info>  [1760026462.5514] manager: (tap7ed4244a-b0): new Veth device (/org/freedesktop/NetworkManager/Devices/33)
Oct 09 16:14:22 compute-0 ovn_metadata_agent[28608]: 2025-10-09 16:14:22.580 141048 DEBUG oslo.privsep.daemon [-] privsep: reply[a39fe84a-437e-4d46-8461-8fa31fb433c6]: (4, None) _call_back /usr/lib/python3.12/site-packages/oslo_privsep/daemon.py:515
Oct 09 16:14:22 compute-0 ovn_metadata_agent[28608]: 2025-10-09 16:14:22.582 141048 DEBUG oslo.privsep.daemon [-] privsep: reply[26ecc0fa-e783-4228-8e33-eb3404d4e5da]: (4, None) _call_back /usr/lib/python3.12/site-packages/oslo_privsep/daemon.py:515
Oct 09 16:14:22 compute-0 NetworkManager[1028]: <info>  [1760026462.6067] device (tap7ed4244a-b0): carrier: link connected
Oct 09 16:14:22 compute-0 ovn_metadata_agent[28608]: 2025-10-09 16:14:22.613 141048 DEBUG oslo.privsep.daemon [-] privsep: reply[e0428382-7b30-451d-9644-638f41f5dbe9]: (4, None) _call_back /usr/lib/python3.12/site-packages/oslo_privsep/daemon.py:515
Oct 09 16:14:22 compute-0 ovn_metadata_agent[28608]: 2025-10-09 16:14:22.629 139687 DEBUG oslo.privsep.daemon [-] privsep: reply[92b45c53-7d9f-41eb-a2ac-7d2b8abfb1da]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap7ed4244a-b1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:11:ef:1e'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 24], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 140621, 'reachable_time': 26374, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 142563, 'error': None, 'target': 'ovnmeta-7ed4244a-b510-4df6-9ffd-2f86603932fc', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.12/site-packages/oslo_privsep/daemon.py:515
Oct 09 16:14:22 compute-0 ovn_metadata_agent[28608]: 2025-10-09 16:14:22.643 139687 DEBUG oslo.privsep.daemon [-] privsep: reply[1802e2e8-60bd-4562-bb86-17d2c427d98a]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe11:ef1e'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 140621, 'tstamp': 140621}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 142564, 'error': None, 'target': 'ovnmeta-7ed4244a-b510-4df6-9ffd-2f86603932fc', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.12/site-packages/oslo_privsep/daemon.py:515
Oct 09 16:14:22 compute-0 ovn_metadata_agent[28608]: 2025-10-09 16:14:22.660 139687 DEBUG oslo.privsep.daemon [-] privsep: reply[50921195-2999-4116-81d6-a8d2a958b265]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap7ed4244a-b1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:11:ef:1e'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 24], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 140621, 'reachable_time': 26374, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 142565, 'error': None, 'target': 'ovnmeta-7ed4244a-b510-4df6-9ffd-2f86603932fc', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.12/site-packages/oslo_privsep/daemon.py:515
Oct 09 16:14:22 compute-0 ovn_metadata_agent[28608]: 2025-10-09 16:14:22.690 139687 DEBUG oslo.privsep.daemon [-] privsep: reply[a43b5b5d-2c7e-4b23-9340-30d6dfe6e563]: (4, None) _call_back /usr/lib/python3.12/site-packages/oslo_privsep/daemon.py:515
Oct 09 16:14:22 compute-0 nova_compute[117331]: 2025-10-09 16:14:22.704 2 DEBUG nova.compute.manager [req-0df5eb0c-6dca-48b5-9507-cdbeca0a547a req-d94dba54-d8f1-4088-9c81-1de5acf66be4 ee15961ad2be4bada6c106ac4f69f422 c076c42271004e7aa86e84416e33f826 - - default default] [instance: 481b51b1-c134-4c7a-af94-d3d3794a7971] Received event network-vif-plugged-dc640852-4b77-4312-9033-7ea4a70836d4 external_instance_event /usr/lib/python3.12/site-packages/nova/compute/manager.py:11812
Oct 09 16:14:22 compute-0 nova_compute[117331]: 2025-10-09 16:14:22.704 2 DEBUG oslo_concurrency.lockutils [req-0df5eb0c-6dca-48b5-9507-cdbeca0a547a req-d94dba54-d8f1-4088-9c81-1de5acf66be4 ee15961ad2be4bada6c106ac4f69f422 c076c42271004e7aa86e84416e33f826 - - default default] Acquiring lock "481b51b1-c134-4c7a-af94-d3d3794a7971-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:405
Oct 09 16:14:22 compute-0 nova_compute[117331]: 2025-10-09 16:14:22.704 2 DEBUG oslo_concurrency.lockutils [req-0df5eb0c-6dca-48b5-9507-cdbeca0a547a req-d94dba54-d8f1-4088-9c81-1de5acf66be4 ee15961ad2be4bada6c106ac4f69f422 c076c42271004e7aa86e84416e33f826 - - default default] Lock "481b51b1-c134-4c7a-af94-d3d3794a7971-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:410
Oct 09 16:14:22 compute-0 nova_compute[117331]: 2025-10-09 16:14:22.705 2 DEBUG oslo_concurrency.lockutils [req-0df5eb0c-6dca-48b5-9507-cdbeca0a547a req-d94dba54-d8f1-4088-9c81-1de5acf66be4 ee15961ad2be4bada6c106ac4f69f422 c076c42271004e7aa86e84416e33f826 - - default default] Lock "481b51b1-c134-4c7a-af94-d3d3794a7971-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:424
Oct 09 16:14:22 compute-0 nova_compute[117331]: 2025-10-09 16:14:22.705 2 DEBUG nova.compute.manager [req-0df5eb0c-6dca-48b5-9507-cdbeca0a547a req-d94dba54-d8f1-4088-9c81-1de5acf66be4 ee15961ad2be4bada6c106ac4f69f422 c076c42271004e7aa86e84416e33f826 - - default default] [instance: 481b51b1-c134-4c7a-af94-d3d3794a7971] Processing event network-vif-plugged-dc640852-4b77-4312-9033-7ea4a70836d4 _process_instance_event /usr/lib/python3.12/site-packages/nova/compute/manager.py:11572
Oct 09 16:14:22 compute-0 ovn_metadata_agent[28608]: 2025-10-09 16:14:22.742 139687 DEBUG oslo.privsep.daemon [-] privsep: reply[c3a0649f-9929-4b12-b51f-c49cdd34b756]: (4, None) _call_back /usr/lib/python3.12/site-packages/oslo_privsep/daemon.py:515
Oct 09 16:14:22 compute-0 ovn_metadata_agent[28608]: 2025-10-09 16:14:22.743 28613 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap7ed4244a-b0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.12/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 09 16:14:22 compute-0 ovn_metadata_agent[28608]: 2025-10-09 16:14:22.744 28613 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.12/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 09 16:14:22 compute-0 ovn_metadata_agent[28608]: 2025-10-09 16:14:22.744 28613 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap7ed4244a-b0, may_exist=True, interface_attrs={}) do_commit /usr/lib/python3.12/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 09 16:14:22 compute-0 nova_compute[117331]: 2025-10-09 16:14:22.747 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 09 16:14:22 compute-0 NetworkManager[1028]: <info>  [1760026462.7482] manager: (tap7ed4244a-b0): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/34)
Oct 09 16:14:22 compute-0 kernel: tap7ed4244a-b0: entered promiscuous mode
Oct 09 16:14:22 compute-0 nova_compute[117331]: 2025-10-09 16:14:22.751 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 09 16:14:22 compute-0 ovn_metadata_agent[28608]: 2025-10-09 16:14:22.752 28613 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap7ed4244a-b0, col_values=(('external_ids', {'iface-id': 'fa7fd37b-f1db-4203-966c-06eaa6fa3892'}),), if_exists=True) do_commit /usr/lib/python3.12/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 09 16:14:22 compute-0 nova_compute[117331]: 2025-10-09 16:14:22.753 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 09 16:14:22 compute-0 ovn_controller[19752]: 2025-10-09T16:14:22Z|00071|binding|INFO|Releasing lport fa7fd37b-f1db-4203-966c-06eaa6fa3892 from this chassis (sb_readonly=0)
Oct 09 16:14:22 compute-0 nova_compute[117331]: 2025-10-09 16:14:22.753 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 09 16:14:22 compute-0 ovn_metadata_agent[28608]: 2025-10-09 16:14:22.755 139687 DEBUG oslo.privsep.daemon [-] privsep: reply[3da342bd-b44b-482b-bb98-e5623e87be12]: (4, '') _call_back /usr/lib/python3.12/site-packages/oslo_privsep/daemon.py:515
Oct 09 16:14:22 compute-0 ovn_metadata_agent[28608]: 2025-10-09 16:14:22.756 28613 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/7ed4244a-b510-4df6-9ffd-2f86603932fc.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/7ed4244a-b510-4df6-9ffd-2f86603932fc.pid.haproxy' get_value_from_file /usr/lib/python3.12/site-packages/neutron/agent/linux/utils.py:268
Oct 09 16:14:22 compute-0 ovn_metadata_agent[28608]: 2025-10-09 16:14:22.756 28613 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/7ed4244a-b510-4df6-9ffd-2f86603932fc.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/7ed4244a-b510-4df6-9ffd-2f86603932fc.pid.haproxy' get_value_from_file /usr/lib/python3.12/site-packages/neutron/agent/linux/utils.py:268
Oct 09 16:14:22 compute-0 ovn_metadata_agent[28608]: 2025-10-09 16:14:22.756 28613 DEBUG neutron.agent.linux.external_process [-] No haproxy process started for 7ed4244a-b510-4df6-9ffd-2f86603932fc disable /usr/lib/python3.12/site-packages/neutron/agent/linux/external_process.py:173
Oct 09 16:14:22 compute-0 ovn_metadata_agent[28608]: 2025-10-09 16:14:22.756 28613 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/7ed4244a-b510-4df6-9ffd-2f86603932fc.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/7ed4244a-b510-4df6-9ffd-2f86603932fc.pid.haproxy' get_value_from_file /usr/lib/python3.12/site-packages/neutron/agent/linux/utils.py:268
Oct 09 16:14:22 compute-0 ovn_metadata_agent[28608]: 2025-10-09 16:14:22.757 139687 DEBUG oslo.privsep.daemon [-] privsep: reply[77cbdd6a-e688-49ff-a2f6-cea9f6d23393]: (4, None) _call_back /usr/lib/python3.12/site-packages/oslo_privsep/daemon.py:515
Oct 09 16:14:22 compute-0 ovn_metadata_agent[28608]: 2025-10-09 16:14:22.757 28613 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/7ed4244a-b510-4df6-9ffd-2f86603932fc.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/7ed4244a-b510-4df6-9ffd-2f86603932fc.pid.haproxy' get_value_from_file /usr/lib/python3.12/site-packages/neutron/agent/linux/utils.py:268
Oct 09 16:14:22 compute-0 ovn_metadata_agent[28608]: 2025-10-09 16:14:22.757 139687 DEBUG oslo.privsep.daemon [-] privsep: reply[763a83ff-140e-4bd8-aeb1-01d2bac3ad58]: (4, None) _call_back /usr/lib/python3.12/site-packages/oslo_privsep/daemon.py:515
Oct 09 16:14:22 compute-0 ovn_metadata_agent[28608]: 2025-10-09 16:14:22.758 28613 DEBUG neutron.agent.metadata.driver_base [-] haproxy_cfg = 
Oct 09 16:14:22 compute-0 ovn_metadata_agent[28608]: global
Oct 09 16:14:22 compute-0 ovn_metadata_agent[28608]:     log         /dev/log local0 debug
Oct 09 16:14:22 compute-0 ovn_metadata_agent[28608]:     log-tag     haproxy-metadata-proxy-7ed4244a-b510-4df6-9ffd-2f86603932fc
Oct 09 16:14:22 compute-0 ovn_metadata_agent[28608]:     user        root
Oct 09 16:14:22 compute-0 ovn_metadata_agent[28608]:     group       root
Oct 09 16:14:22 compute-0 ovn_metadata_agent[28608]:     maxconn     1024
Oct 09 16:14:22 compute-0 ovn_metadata_agent[28608]:     pidfile     /var/lib/neutron/external/pids/7ed4244a-b510-4df6-9ffd-2f86603932fc.pid.haproxy
Oct 09 16:14:22 compute-0 ovn_metadata_agent[28608]:     daemon
Oct 09 16:14:22 compute-0 ovn_metadata_agent[28608]: 
Oct 09 16:14:22 compute-0 ovn_metadata_agent[28608]: defaults
Oct 09 16:14:22 compute-0 ovn_metadata_agent[28608]:     log global
Oct 09 16:14:22 compute-0 ovn_metadata_agent[28608]:     mode http
Oct 09 16:14:22 compute-0 ovn_metadata_agent[28608]:     option httplog
Oct 09 16:14:22 compute-0 ovn_metadata_agent[28608]:     option dontlognull
Oct 09 16:14:22 compute-0 ovn_metadata_agent[28608]:     option http-server-close
Oct 09 16:14:22 compute-0 ovn_metadata_agent[28608]:     option forwardfor
Oct 09 16:14:22 compute-0 ovn_metadata_agent[28608]:     retries                 3
Oct 09 16:14:22 compute-0 ovn_metadata_agent[28608]:     timeout http-request    30s
Oct 09 16:14:22 compute-0 ovn_metadata_agent[28608]:     timeout connect         30s
Oct 09 16:14:22 compute-0 ovn_metadata_agent[28608]:     timeout client          32s
Oct 09 16:14:22 compute-0 ovn_metadata_agent[28608]:     timeout server          32s
Oct 09 16:14:22 compute-0 ovn_metadata_agent[28608]:     timeout http-keep-alive 30s
Oct 09 16:14:22 compute-0 ovn_metadata_agent[28608]: 
Oct 09 16:14:22 compute-0 ovn_metadata_agent[28608]: listen listener
Oct 09 16:14:22 compute-0 ovn_metadata_agent[28608]:     bind 169.254.169.254:80
Oct 09 16:14:22 compute-0 ovn_metadata_agent[28608]:     
Oct 09 16:14:22 compute-0 ovn_metadata_agent[28608]:     server metadata /var/lib/neutron/metadata_proxy
Oct 09 16:14:22 compute-0 ovn_metadata_agent[28608]: 
Oct 09 16:14:22 compute-0 ovn_metadata_agent[28608]:     http-request add-header X-OVN-Network-ID 7ed4244a-b510-4df6-9ffd-2f86603932fc
Oct 09 16:14:22 compute-0 ovn_metadata_agent[28608]:  create_config_file /usr/lib/python3.12/site-packages/neutron/agent/metadata/driver_base.py:155
Oct 09 16:14:22 compute-0 ovn_metadata_agent[28608]: 2025-10-09 16:14:22.758 28613 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-7ed4244a-b510-4df6-9ffd-2f86603932fc', 'env', 'PROCESS_TAG=haproxy-7ed4244a-b510-4df6-9ffd-2f86603932fc', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/7ed4244a-b510-4df6-9ffd-2f86603932fc.conf'] create_process /usr/lib/python3.12/site-packages/neutron/agent/linux/utils.py:84
Oct 09 16:14:22 compute-0 nova_compute[117331]: 2025-10-09 16:14:22.765 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 09 16:14:23 compute-0 podman[142604]: 2025-10-09 16:14:23.200910731 +0000 UTC m=+0.098237823 container create aad6919fe09d630d18be566733892012a924a8689214620123474a78071ae585 (image=38.102.83.66:5001/podified-master-centos10/openstack-neutron-metadata-agent-ovn:watcher_latest, name=neutron-haproxy-ovnmeta-7ed4244a-b510-4df6-9ffd-2f86603932fc, tcib_build_tag=watcher_latest, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.4, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.build-date=20251007, org.label-schema.license=GPLv2, tcib_managed=true)
Oct 09 16:14:23 compute-0 podman[142604]: 2025-10-09 16:14:23.125714694 +0000 UTC m=+0.023041816 image pull 4e15c189a95c5dcd1203c76fe304de5c8f8616da9a258ca168cc54a517f49b87 38.102.83.66:5001/podified-master-centos10/openstack-neutron-metadata-agent-ovn:watcher_latest
Oct 09 16:14:23 compute-0 systemd[1]: Started libpod-conmon-aad6919fe09d630d18be566733892012a924a8689214620123474a78071ae585.scope.
Oct 09 16:14:23 compute-0 systemd[1]: Started libcrun container.
Oct 09 16:14:23 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8e22f30b888566d0973e910de251c57b111be921aab2b334636ca34bd6b92506/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Oct 09 16:14:23 compute-0 podman[142604]: 2025-10-09 16:14:23.31596205 +0000 UTC m=+0.213289162 container init aad6919fe09d630d18be566733892012a924a8689214620123474a78071ae585 (image=38.102.83.66:5001/podified-master-centos10/openstack-neutron-metadata-agent-ovn:watcher_latest, name=neutron-haproxy-ovnmeta-7ed4244a-b510-4df6-9ffd-2f86603932fc, org.label-schema.build-date=20251007, org.label-schema.schema-version=1.0, io.buildah.version=1.41.4, tcib_build_tag=watcher_latest, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_managed=true, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 10 Base Image)
Oct 09 16:14:23 compute-0 podman[142604]: 2025-10-09 16:14:23.321765715 +0000 UTC m=+0.219092807 container start aad6919fe09d630d18be566733892012a924a8689214620123474a78071ae585 (image=38.102.83.66:5001/podified-master-centos10/openstack-neutron-metadata-agent-ovn:watcher_latest, name=neutron-haproxy-ovnmeta-7ed4244a-b510-4df6-9ffd-2f86603932fc, tcib_managed=true, org.label-schema.vendor=CentOS, io.buildah.version=1.41.4, org.label-schema.build-date=20251007, tcib_build_tag=watcher_latest, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 10 Base Image, maintainer=OpenStack Kubernetes Operator team)
Oct 09 16:14:23 compute-0 neutron-haproxy-ovnmeta-7ed4244a-b510-4df6-9ffd-2f86603932fc[142620]: [NOTICE]   (142624) : New worker (142626) forked
Oct 09 16:14:23 compute-0 neutron-haproxy-ovnmeta-7ed4244a-b510-4df6-9ffd-2f86603932fc[142620]: [NOTICE]   (142624) : Loading success.
Oct 09 16:14:23 compute-0 nova_compute[117331]: 2025-10-09 16:14:23.408 2 DEBUG nova.compute.manager [None req-22ab0963-25ed-4fe0-9217-535d7efa60df e5044998ddc3419bb14cc08417add581 be4e2f9059cd48f5b44a612256e3fc7b - - default default] [instance: 481b51b1-c134-4c7a-af94-d3d3794a7971] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.12/site-packages/nova/compute/manager.py:602
Oct 09 16:14:23 compute-0 nova_compute[117331]: 2025-10-09 16:14:23.411 2 DEBUG nova.virt.libvirt.driver [None req-22ab0963-25ed-4fe0-9217-535d7efa60df e5044998ddc3419bb14cc08417add581 be4e2f9059cd48f5b44a612256e3fc7b - - default default] [instance: 481b51b1-c134-4c7a-af94-d3d3794a7971] Guest created on hypervisor spawn /usr/lib/python3.12/site-packages/nova/virt/libvirt/driver.py:4816
Oct 09 16:14:23 compute-0 nova_compute[117331]: 2025-10-09 16:14:23.413 2 INFO nova.virt.libvirt.driver [-] [instance: 481b51b1-c134-4c7a-af94-d3d3794a7971] Instance spawned successfully.
Oct 09 16:14:23 compute-0 nova_compute[117331]: 2025-10-09 16:14:23.414 2 DEBUG nova.virt.libvirt.driver [None req-22ab0963-25ed-4fe0-9217-535d7efa60df e5044998ddc3419bb14cc08417add581 be4e2f9059cd48f5b44a612256e3fc7b - - default default] [instance: 481b51b1-c134-4c7a-af94-d3d3794a7971] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.12/site-packages/nova/virt/libvirt/driver.py:985
Oct 09 16:14:23 compute-0 sshd-session[142485]: Failed password for root from 134.199.199.215 port 42992 ssh2
Oct 09 16:14:23 compute-0 sshd-session[142485]: Connection closed by authenticating user root 134.199.199.215 port 42992 [preauth]
Oct 09 16:14:23 compute-0 nova_compute[117331]: 2025-10-09 16:14:23.925 2 DEBUG nova.virt.libvirt.driver [None req-22ab0963-25ed-4fe0-9217-535d7efa60df e5044998ddc3419bb14cc08417add581 be4e2f9059cd48f5b44a612256e3fc7b - - default default] [instance: 481b51b1-c134-4c7a-af94-d3d3794a7971] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.12/site-packages/nova/virt/libvirt/driver.py:1014
Oct 09 16:14:23 compute-0 nova_compute[117331]: 2025-10-09 16:14:23.925 2 DEBUG nova.virt.libvirt.driver [None req-22ab0963-25ed-4fe0-9217-535d7efa60df e5044998ddc3419bb14cc08417add581 be4e2f9059cd48f5b44a612256e3fc7b - - default default] [instance: 481b51b1-c134-4c7a-af94-d3d3794a7971] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.12/site-packages/nova/virt/libvirt/driver.py:1014
Oct 09 16:14:23 compute-0 nova_compute[117331]: 2025-10-09 16:14:23.926 2 DEBUG nova.virt.libvirt.driver [None req-22ab0963-25ed-4fe0-9217-535d7efa60df e5044998ddc3419bb14cc08417add581 be4e2f9059cd48f5b44a612256e3fc7b - - default default] [instance: 481b51b1-c134-4c7a-af94-d3d3794a7971] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.12/site-packages/nova/virt/libvirt/driver.py:1014
Oct 09 16:14:23 compute-0 nova_compute[117331]: 2025-10-09 16:14:23.926 2 DEBUG nova.virt.libvirt.driver [None req-22ab0963-25ed-4fe0-9217-535d7efa60df e5044998ddc3419bb14cc08417add581 be4e2f9059cd48f5b44a612256e3fc7b - - default default] [instance: 481b51b1-c134-4c7a-af94-d3d3794a7971] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.12/site-packages/nova/virt/libvirt/driver.py:1014
Oct 09 16:14:23 compute-0 nova_compute[117331]: 2025-10-09 16:14:23.926 2 DEBUG nova.virt.libvirt.driver [None req-22ab0963-25ed-4fe0-9217-535d7efa60df e5044998ddc3419bb14cc08417add581 be4e2f9059cd48f5b44a612256e3fc7b - - default default] [instance: 481b51b1-c134-4c7a-af94-d3d3794a7971] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.12/site-packages/nova/virt/libvirt/driver.py:1014
Oct 09 16:14:23 compute-0 nova_compute[117331]: 2025-10-09 16:14:23.927 2 DEBUG nova.virt.libvirt.driver [None req-22ab0963-25ed-4fe0-9217-535d7efa60df e5044998ddc3419bb14cc08417add581 be4e2f9059cd48f5b44a612256e3fc7b - - default default] [instance: 481b51b1-c134-4c7a-af94-d3d3794a7971] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.12/site-packages/nova/virt/libvirt/driver.py:1014
Oct 09 16:14:24 compute-0 nova_compute[117331]: 2025-10-09 16:14:24.456 2 INFO nova.compute.manager [None req-22ab0963-25ed-4fe0-9217-535d7efa60df e5044998ddc3419bb14cc08417add581 be4e2f9059cd48f5b44a612256e3fc7b - - default default] [instance: 481b51b1-c134-4c7a-af94-d3d3794a7971] Took 9.06 seconds to spawn the instance on the hypervisor.
Oct 09 16:14:24 compute-0 nova_compute[117331]: 2025-10-09 16:14:24.457 2 DEBUG nova.compute.manager [None req-22ab0963-25ed-4fe0-9217-535d7efa60df e5044998ddc3419bb14cc08417add581 be4e2f9059cd48f5b44a612256e3fc7b - - default default] [instance: 481b51b1-c134-4c7a-af94-d3d3794a7971] Checking state _get_power_state /usr/lib/python3.12/site-packages/nova/compute/manager.py:1825
Oct 09 16:14:24 compute-0 nova_compute[117331]: 2025-10-09 16:14:24.778 2 DEBUG nova.compute.manager [req-1aba7b84-cf37-456e-bf28-03e7426b080d req-85b0a1b6-af6a-478b-9878-d643be34b3dd ee15961ad2be4bada6c106ac4f69f422 c076c42271004e7aa86e84416e33f826 - - default default] [instance: 481b51b1-c134-4c7a-af94-d3d3794a7971] Received event network-vif-plugged-dc640852-4b77-4312-9033-7ea4a70836d4 external_instance_event /usr/lib/python3.12/site-packages/nova/compute/manager.py:11812
Oct 09 16:14:24 compute-0 nova_compute[117331]: 2025-10-09 16:14:24.778 2 DEBUG oslo_concurrency.lockutils [req-1aba7b84-cf37-456e-bf28-03e7426b080d req-85b0a1b6-af6a-478b-9878-d643be34b3dd ee15961ad2be4bada6c106ac4f69f422 c076c42271004e7aa86e84416e33f826 - - default default] Acquiring lock "481b51b1-c134-4c7a-af94-d3d3794a7971-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:405
Oct 09 16:14:24 compute-0 nova_compute[117331]: 2025-10-09 16:14:24.779 2 DEBUG oslo_concurrency.lockutils [req-1aba7b84-cf37-456e-bf28-03e7426b080d req-85b0a1b6-af6a-478b-9878-d643be34b3dd ee15961ad2be4bada6c106ac4f69f422 c076c42271004e7aa86e84416e33f826 - - default default] Lock "481b51b1-c134-4c7a-af94-d3d3794a7971-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:410
Oct 09 16:14:24 compute-0 nova_compute[117331]: 2025-10-09 16:14:24.779 2 DEBUG oslo_concurrency.lockutils [req-1aba7b84-cf37-456e-bf28-03e7426b080d req-85b0a1b6-af6a-478b-9878-d643be34b3dd ee15961ad2be4bada6c106ac4f69f422 c076c42271004e7aa86e84416e33f826 - - default default] Lock "481b51b1-c134-4c7a-af94-d3d3794a7971-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:424
Oct 09 16:14:24 compute-0 nova_compute[117331]: 2025-10-09 16:14:24.779 2 DEBUG nova.compute.manager [req-1aba7b84-cf37-456e-bf28-03e7426b080d req-85b0a1b6-af6a-478b-9878-d643be34b3dd ee15961ad2be4bada6c106ac4f69f422 c076c42271004e7aa86e84416e33f826 - - default default] [instance: 481b51b1-c134-4c7a-af94-d3d3794a7971] No waiting events found dispatching network-vif-plugged-dc640852-4b77-4312-9033-7ea4a70836d4 pop_instance_event /usr/lib/python3.12/site-packages/nova/compute/manager.py:344
Oct 09 16:14:24 compute-0 nova_compute[117331]: 2025-10-09 16:14:24.779 2 WARNING nova.compute.manager [req-1aba7b84-cf37-456e-bf28-03e7426b080d req-85b0a1b6-af6a-478b-9878-d643be34b3dd ee15961ad2be4bada6c106ac4f69f422 c076c42271004e7aa86e84416e33f826 - - default default] [instance: 481b51b1-c134-4c7a-af94-d3d3794a7971] Received unexpected event network-vif-plugged-dc640852-4b77-4312-9033-7ea4a70836d4 for instance with vm_state active and task_state None.
Oct 09 16:14:24 compute-0 unix_chkpwd[142637]: password check failed for user (root)
Oct 09 16:14:24 compute-0 sshd-session[142635]: pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=134.199.199.215  user=root
Oct 09 16:14:24 compute-0 nova_compute[117331]: 2025-10-09 16:14:24.917 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 09 16:14:25 compute-0 nova_compute[117331]: 2025-10-09 16:14:25.058 2 INFO nova.compute.manager [None req-22ab0963-25ed-4fe0-9217-535d7efa60df e5044998ddc3419bb14cc08417add581 be4e2f9059cd48f5b44a612256e3fc7b - - default default] [instance: 481b51b1-c134-4c7a-af94-d3d3794a7971] Took 14.74 seconds to build instance.
Oct 09 16:14:25 compute-0 nova_compute[117331]: 2025-10-09 16:14:25.485 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 09 16:14:25 compute-0 nova_compute[117331]: 2025-10-09 16:14:25.616 2 DEBUG oslo_concurrency.lockutils [None req-22ab0963-25ed-4fe0-9217-535d7efa60df e5044998ddc3419bb14cc08417add581 be4e2f9059cd48f5b44a612256e3fc7b - - default default] Lock "481b51b1-c134-4c7a-af94-d3d3794a7971" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 16.330s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:424
Oct 09 16:14:25 compute-0 podman[142638]: 2025-10-09 16:14:25.826334469 +0000 UTC m=+0.054414067 container health_status 0e966c7a283689daf23c5db5017f23f685ee836319240188a9f70ddd246563c5 (image=38.102.83.66:5001/podified-master-centos10/openstack-neutron-metadata-agent-ovn:watcher_latest, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251007, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.schema-version=1.0, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': '38.102.83.66:5001/podified-master-centos10/openstack-neutron-metadata-agent-ovn:watcher_latest', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, 
maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.4, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, tcib_build_tag=watcher_latest, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, tcib_managed=true)
Oct 09 16:14:25 compute-0 podman[142639]: 2025-10-09 16:14:25.853229516 +0000 UTC m=+0.077786751 container health_status 8227728d23f374cb9528be6ddf01dff38d8c792912a17b0d137932a18390b820 (image=38.102.83.66:5001/podified-master-centos10/openstack-iscsid:watcher_latest, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 10 Base Image, config_id=iscsid, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, container_name=iscsid, io.buildah.version=1.41.4, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=watcher_latest, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': '38.102.83.66:5001/podified-master-centos10/openstack-iscsid:watcher_latest', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, org.label-schema.build-date=20251007)
Oct 09 16:14:26 compute-0 sshd-session[142635]: Failed password for root from 134.199.199.215 port 43002 ssh2
Oct 09 16:14:26 compute-0 sshd-session[142635]: Connection closed by authenticating user root 134.199.199.215 port 43002 [preauth]
Oct 09 16:14:28 compute-0 sshd-session[142675]: Invalid user user from 134.199.199.215 port 37410
Oct 09 16:14:28 compute-0 sshd-session[142675]: pam_unix(sshd:auth): check pass; user unknown
Oct 09 16:14:28 compute-0 sshd-session[142675]: pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=134.199.199.215
Oct 09 16:14:29 compute-0 podman[127775]: time="2025-10-09T16:14:29Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Oct 09 16:14:29 compute-0 podman[127775]: @ - - [09/Oct/2025:16:14:29 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 20749 "" "Go-http-client/1.1"
Oct 09 16:14:29 compute-0 podman[127775]: @ - - [09/Oct/2025:16:14:29 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 3478 "" "Go-http-client/1.1"
Oct 09 16:14:29 compute-0 nova_compute[117331]: 2025-10-09 16:14:29.920 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 09 16:14:30 compute-0 sshd-session[142675]: Failed password for invalid user user from 134.199.199.215 port 37410 ssh2
Oct 09 16:14:30 compute-0 nova_compute[117331]: 2025-10-09 16:14:30.486 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 09 16:14:31 compute-0 openstack_network_exporter[129925]: ERROR   16:14:31 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Oct 09 16:14:31 compute-0 openstack_network_exporter[129925]: ERROR   16:14:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Oct 09 16:14:31 compute-0 openstack_network_exporter[129925]: ERROR   16:14:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Oct 09 16:14:31 compute-0 openstack_network_exporter[129925]: ERROR   16:14:31 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Oct 09 16:14:31 compute-0 openstack_network_exporter[129925]: 
Oct 09 16:14:31 compute-0 openstack_network_exporter[129925]: ERROR   16:14:31 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Oct 09 16:14:31 compute-0 openstack_network_exporter[129925]: 
Oct 09 16:14:31 compute-0 sshd-session[142677]: Invalid user init from 134.199.199.215 port 37412
Oct 09 16:14:31 compute-0 sshd-session[142677]: pam_unix(sshd:auth): check pass; user unknown
Oct 09 16:14:31 compute-0 sshd-session[142677]: pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=134.199.199.215
Oct 09 16:14:31 compute-0 sshd-session[142675]: Connection closed by invalid user user 134.199.199.215 port 37410 [preauth]
Oct 09 16:14:33 compute-0 sshd-session[142677]: Failed password for invalid user init from 134.199.199.215 port 37412 ssh2
Oct 09 16:14:33 compute-0 sshd-session[142677]: Connection closed by invalid user init 134.199.199.215 port 37412 [preauth]
Oct 09 16:14:34 compute-0 sshd[52903]: drop connection #0 from [134.199.199.215]:37422 on [38.102.83.110]:22 penalty: failed authentication
Oct 09 16:14:34 compute-0 nova_compute[117331]: 2025-10-09 16:14:34.923 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 09 16:14:35 compute-0 ovn_metadata_agent[28608]: 2025-10-09 16:14:35.288 28613 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:405
Oct 09 16:14:35 compute-0 ovn_metadata_agent[28608]: 2025-10-09 16:14:35.288 28613 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.000s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:410
Oct 09 16:14:35 compute-0 ovn_metadata_agent[28608]: 2025-10-09 16:14:35.288 28613 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:424
Oct 09 16:14:35 compute-0 nova_compute[117331]: 2025-10-09 16:14:35.486 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 09 16:14:35 compute-0 podman[142691]: 2025-10-09 16:14:35.828222891 +0000 UTC m=+0.052273968 container health_status 64fa251112d01abb34ec1bb8e22019815ddcaaaa391ce5ec91517147ac5a9994 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, version=9.6, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.expose-services=, managed_by=edpm_ansible, io.buildah.version=1.33.7, url=https://catalog.redhat.com/en/search?searchType=containers, vcs-type=git, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, build-date=2025-08-20T13:12:41, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', 
'/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, config_id=edpm, distribution-scope=public, io.openshift.tags=minimal rhel9, maintainer=Red Hat, Inc., release=1755695350, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., name=ubi9-minimal, vendor=Red Hat, Inc., com.redhat.component=ubi9-minimal-container, container_name=openstack_network_exporter, architecture=x86_64, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal)
Oct 09 16:14:36 compute-0 ovn_controller[19752]: 2025-10-09T16:14:36Z|00007|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:79:a2:c5 10.100.0.7
Oct 09 16:14:36 compute-0 ovn_controller[19752]: 2025-10-09T16:14:36Z|00008|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:79:a2:c5 10.100.0.7
Oct 09 16:14:37 compute-0 sshd[52903]: drop connection #0 from [134.199.199.215]:58896 on [38.102.83.110]:22 penalty: failed authentication
Oct 09 16:14:38 compute-0 podman[142712]: 2025-10-09 16:14:38.865222882 +0000 UTC m=+0.097739478 container health_status d615e4f24ff62fbb14be4a63e6da568ec5b22309773c1c66103b6b4de64a8a2f (image=38.102.83.66:5001/podified-master-centos10/openstack-ovn-controller:watcher_latest, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=watcher_latest, container_name=ovn_controller, io.buildah.version=1.41.4, managed_by=edpm_ansible, org.label-schema.build-date=20251007, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': '38.102.83.66:5001/podified-master-centos10/openstack-ovn-controller:watcher_latest', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team)
Oct 09 16:14:39 compute-0 nova_compute[117331]: 2025-10-09 16:14:39.926 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 09 16:14:40 compute-0 nova_compute[117331]: 2025-10-09 16:14:40.488 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 09 16:14:41 compute-0 sshd[52903]: drop connection #0 from [134.199.199.215]:58906 on [38.102.83.110]:22 penalty: failed authentication
Oct 09 16:14:44 compute-0 ovn_metadata_agent[28608]: 2025-10-09 16:14:44.000 28613 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=7, options={'arp_ns_explicit_output': 'true', 'fdb_removal_limit': '0', 'ignore_lsp_down': 'false', 'mac_binding_removal_limit': '0', 'mac_prefix': '4a:65:5f', 'max_tunid': '16711680', 'northd_internal_version': '24.09.4-20.37.0-77.8', 'svc_monitor_mac': '32:24:1e:e3:5d:e8'}, ipsec=False) old=SB_Global(nb_cfg=6) matches /usr/lib/python3.12/site-packages/ovsdbapp/backend/ovs_idl/event.py:55
Oct 09 16:14:44 compute-0 nova_compute[117331]: 2025-10-09 16:14:44.000 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 09 16:14:44 compute-0 ovn_metadata_agent[28608]: 2025-10-09 16:14:44.002 28613 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 8 seconds run /usr/lib/python3.12/site-packages/neutron/agent/ovn/metadata/agent.py:367
Oct 09 16:14:44 compute-0 sshd[52903]: drop connection #0 from [134.199.199.215]:58910 on [38.102.83.110]:22 penalty: failed authentication
Oct 09 16:14:44 compute-0 nova_compute[117331]: 2025-10-09 16:14:44.928 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 09 16:14:45 compute-0 nova_compute[117331]: 2025-10-09 16:14:45.489 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 09 16:14:47 compute-0 podman[142739]: 2025-10-09 16:14:47.855221308 +0000 UTC m=+0.091962584 container health_status 270830b5167cafbab735d8118980f26987e673cd0fa464dd2202394830bcca66 (image=38.102.83.66:5001/podified-master-centos10/openstack-multipathd:watcher_latest, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 10 Base Image, container_name=multipathd, io.buildah.version=1.41.4, org.label-schema.build-date=20251007, org.label-schema.license=GPLv2, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': '38.102.83.66:5001/podified-master-centos10/openstack-multipathd:watcher_latest', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=watcher_latest, tcib_managed=true, config_id=multipathd, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible)
Oct 09 16:14:47 compute-0 sshd[52903]: drop connection #0 from [134.199.199.215]:51080 on [38.102.83.110]:22 penalty: failed authentication
Oct 09 16:14:49 compute-0 nova_compute[117331]: 2025-10-09 16:14:49.931 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 09 16:14:50 compute-0 nova_compute[117331]: 2025-10-09 16:14:50.491 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 09 16:14:51 compute-0 sshd[52903]: drop connection #0 from [134.199.199.215]:51084 on [38.102.83.110]:22 penalty: failed authentication
Oct 09 16:14:52 compute-0 ovn_metadata_agent[28608]: 2025-10-09 16:14:52.003 28613 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=9954897f-aa83-45dd-8e84-289816676c2a, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '7'}),), if_exists=True) do_commit /usr/lib/python3.12/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 09 16:14:52 compute-0 podman[142759]: 2025-10-09 16:14:52.825092123 +0000 UTC m=+0.058883119 container health_status 86aa93e887899e54d383b0fc17fb3c5637680d1d8e146a130a557c837f0260fe (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter)
Oct 09 16:14:54 compute-0 unix_chkpwd[142785]: password check failed for user (root)
Oct 09 16:14:54 compute-0 sshd-session[142783]: pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=134.199.199.215  user=root
Oct 09 16:14:54 compute-0 nova_compute[117331]: 2025-10-09 16:14:54.935 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 09 16:14:55 compute-0 nova_compute[117331]: 2025-10-09 16:14:55.492 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 09 16:14:56 compute-0 podman[142787]: 2025-10-09 16:14:56.828336825 +0000 UTC m=+0.060100958 container health_status 8227728d23f374cb9528be6ddf01dff38d8c792912a17b0d137932a18390b820 (image=38.102.83.66:5001/podified-master-centos10/openstack-iscsid:watcher_latest, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, config_id=iscsid, io.buildah.version=1.41.4, org.label-schema.build-date=20251007, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 10 Base Image, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': '38.102.83.66:5001/podified-master-centos10/openstack-iscsid:watcher_latest', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, container_name=iscsid, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_build_tag=watcher_latest)
Oct 09 16:14:56 compute-0 podman[142786]: 2025-10-09 16:14:56.850580894 +0000 UTC m=+0.085295431 container health_status 0e966c7a283689daf23c5db5017f23f685ee836319240188a9f70ddd246563c5 (image=38.102.83.66:5001/podified-master-centos10/openstack-neutron-metadata-agent-ovn:watcher_latest, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': '38.102.83.66:5001/podified-master-centos10/openstack-neutron-metadata-agent-ovn:watcher_latest', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=watcher_latest, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, 
org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, container_name=ovn_metadata_agent, io.buildah.version=1.41.4, org.label-schema.build-date=20251007, org.label-schema.name=CentOS Stream 10 Base Image, tcib_managed=true)
Oct 09 16:14:56 compute-0 sshd-session[142783]: Failed password for root from 134.199.199.215 port 51098 ssh2
Oct 09 16:14:58 compute-0 unix_chkpwd[142828]: password check failed for user (ftp)
Oct 09 16:14:58 compute-0 sshd-session[142826]: pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=134.199.199.215  user=ftp
Oct 09 16:14:59 compute-0 sshd-session[142783]: Connection closed by authenticating user root 134.199.199.215 port 51098 [preauth]
Oct 09 16:14:59 compute-0 podman[127775]: time="2025-10-09T16:14:59Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Oct 09 16:14:59 compute-0 podman[127775]: @ - - [09/Oct/2025:16:14:59 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 20749 "" "Go-http-client/1.1"
Oct 09 16:14:59 compute-0 podman[127775]: @ - - [09/Oct/2025:16:14:59 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 3483 "" "Go-http-client/1.1"
Oct 09 16:14:59 compute-0 nova_compute[117331]: 2025-10-09 16:14:59.939 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 09 16:15:00 compute-0 nova_compute[117331]: 2025-10-09 16:15:00.233 2 DEBUG oslo_concurrency.lockutils [None req-026a7915-d8a6-4493-80d1-6eae3deb413a 005258eefa5345409efe13f6a1de22ab c076c42271004e7aa86e84416e33f826 - - default default] Acquiring lock "refresh_cache-481b51b1-c134-4c7a-af94-d3d3794a7971" lock /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:313
Oct 09 16:15:00 compute-0 nova_compute[117331]: 2025-10-09 16:15:00.234 2 DEBUG oslo_concurrency.lockutils [None req-026a7915-d8a6-4493-80d1-6eae3deb413a 005258eefa5345409efe13f6a1de22ab c076c42271004e7aa86e84416e33f826 - - default default] Acquired lock "refresh_cache-481b51b1-c134-4c7a-af94-d3d3794a7971" lock /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:316
Oct 09 16:15:00 compute-0 nova_compute[117331]: 2025-10-09 16:15:00.234 2 DEBUG nova.network.neutron [None req-026a7915-d8a6-4493-80d1-6eae3deb413a 005258eefa5345409efe13f6a1de22ab c076c42271004e7aa86e84416e33f826 - - default default] [instance: 481b51b1-c134-4c7a-af94-d3d3794a7971] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.12/site-packages/nova/network/neutron.py:2070
Oct 09 16:15:00 compute-0 nova_compute[117331]: 2025-10-09 16:15:00.493 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 09 16:15:00 compute-0 sshd-session[142826]: Failed password for ftp from 134.199.199.215 port 58250 ssh2
Oct 09 16:15:00 compute-0 nova_compute[117331]: 2025-10-09 16:15:00.767 2 WARNING neutronclient.v2_0.client [None req-026a7915-d8a6-4493-80d1-6eae3deb413a 005258eefa5345409efe13f6a1de22ab c076c42271004e7aa86e84416e33f826 - - default default] The python binding code in neutronclient is deprecated in favor of OpenstackSDK, please use that as this will be removed in a future release.
Oct 09 16:15:00 compute-0 sshd-session[142826]: Connection closed by authenticating user ftp 134.199.199.215 port 58250 [preauth]
Oct 09 16:15:01 compute-0 openstack_network_exporter[129925]: ERROR   16:15:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Oct 09 16:15:01 compute-0 openstack_network_exporter[129925]: ERROR   16:15:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Oct 09 16:15:01 compute-0 openstack_network_exporter[129925]: ERROR   16:15:01 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Oct 09 16:15:01 compute-0 openstack_network_exporter[129925]: ERROR   16:15:01 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Oct 09 16:15:01 compute-0 openstack_network_exporter[129925]: 
Oct 09 16:15:01 compute-0 openstack_network_exporter[129925]: ERROR   16:15:01 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Oct 09 16:15:01 compute-0 openstack_network_exporter[129925]: 
Oct 09 16:15:01 compute-0 unix_chkpwd[142832]: password check failed for user (ftp)
Oct 09 16:15:01 compute-0 sshd-session[142830]: pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=134.199.199.215  user=ftp
Oct 09 16:15:02 compute-0 nova_compute[117331]: 2025-10-09 16:15:02.175 2 WARNING neutronclient.v2_0.client [None req-026a7915-d8a6-4493-80d1-6eae3deb413a 005258eefa5345409efe13f6a1de22ab c076c42271004e7aa86e84416e33f826 - - default default] The python binding code in neutronclient is deprecated in favor of OpenstackSDK, please use that as this will be removed in a future release.
Oct 09 16:15:02 compute-0 nova_compute[117331]: 2025-10-09 16:15:02.329 2 DEBUG nova.network.neutron [None req-026a7915-d8a6-4493-80d1-6eae3deb413a 005258eefa5345409efe13f6a1de22ab c076c42271004e7aa86e84416e33f826 - - default default] [instance: 481b51b1-c134-4c7a-af94-d3d3794a7971] Updating instance_info_cache with network_info: [{"id": "dc640852-4b77-4312-9033-7ea4a70836d4", "address": "fa:16:3e:79:a2:c5", "network": {"id": "7ed4244a-b510-4df6-9ffd-2f86603932fc", "bridge": "br-int", "label": "tempest-TestExecuteActionsViaActuator-1219246163-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "153d68ab33e64b958060574bb1741725", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapdc640852-4b", "ovs_interfaceid": "dc640852-4b77-4312-9033-7ea4a70836d4", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.12/site-packages/nova/network/neutron.py:116
Oct 09 16:15:02 compute-0 nova_compute[117331]: 2025-10-09 16:15:02.857 2 DEBUG oslo_concurrency.lockutils [None req-026a7915-d8a6-4493-80d1-6eae3deb413a 005258eefa5345409efe13f6a1de22ab c076c42271004e7aa86e84416e33f826 - - default default] Releasing lock "refresh_cache-481b51b1-c134-4c7a-af94-d3d3794a7971" lock /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:334
Oct 09 16:15:03 compute-0 sshd-session[142830]: Failed password for ftp from 134.199.199.215 port 58260 ssh2
Oct 09 16:15:04 compute-0 sshd-session[142830]: Connection closed by authenticating user ftp 134.199.199.215 port 58260 [preauth]
Oct 09 16:15:04 compute-0 nova_compute[117331]: 2025-10-09 16:15:04.411 2 DEBUG nova.virt.libvirt.driver [None req-026a7915-d8a6-4493-80d1-6eae3deb413a 005258eefa5345409efe13f6a1de22ab c076c42271004e7aa86e84416e33f826 - - default default] [instance: 481b51b1-c134-4c7a-af94-d3d3794a7971] Starting migrate_disk_and_power_off migrate_disk_and_power_off /usr/lib/python3.12/site-packages/nova/virt/libvirt/driver.py:12417
Oct 09 16:15:04 compute-0 nova_compute[117331]: 2025-10-09 16:15:04.412 2 DEBUG nova.virt.libvirt.volume.remotefs [None req-026a7915-d8a6-4493-80d1-6eae3deb413a 005258eefa5345409efe13f6a1de22ab c076c42271004e7aa86e84416e33f826 - - default default] Creating file /var/lib/nova/instances/481b51b1-c134-4c7a-af94-d3d3794a7971/e236194096b6414585918e0fde136439.tmp on remote host 192.168.122.101 create_file /usr/lib/python3.12/site-packages/nova/virt/libvirt/volume/remotefs.py:79
Oct 09 16:15:04 compute-0 nova_compute[117331]: 2025-10-09 16:15:04.412 2 DEBUG oslo_concurrency.processutils [None req-026a7915-d8a6-4493-80d1-6eae3deb413a 005258eefa5345409efe13f6a1de22ab c076c42271004e7aa86e84416e33f826 - - default default] Running cmd (subprocess): ssh -o BatchMode=yes 192.168.122.101 touch /var/lib/nova/instances/481b51b1-c134-4c7a-af94-d3d3794a7971/e236194096b6414585918e0fde136439.tmp execute /usr/lib/python3.12/site-packages/oslo_concurrency/processutils.py:349
Oct 09 16:15:04 compute-0 nova_compute[117331]: 2025-10-09 16:15:04.880 2 DEBUG oslo_concurrency.processutils [None req-026a7915-d8a6-4493-80d1-6eae3deb413a 005258eefa5345409efe13f6a1de22ab c076c42271004e7aa86e84416e33f826 - - default default] CMD "ssh -o BatchMode=yes 192.168.122.101 touch /var/lib/nova/instances/481b51b1-c134-4c7a-af94-d3d3794a7971/e236194096b6414585918e0fde136439.tmp" returned: 1 in 0.468s execute /usr/lib/python3.12/site-packages/oslo_concurrency/processutils.py:372
Oct 09 16:15:04 compute-0 nova_compute[117331]: 2025-10-09 16:15:04.881 2 DEBUG oslo_concurrency.processutils [None req-026a7915-d8a6-4493-80d1-6eae3deb413a 005258eefa5345409efe13f6a1de22ab c076c42271004e7aa86e84416e33f826 - - default default] 'ssh -o BatchMode=yes 192.168.122.101 touch /var/lib/nova/instances/481b51b1-c134-4c7a-af94-d3d3794a7971/e236194096b6414585918e0fde136439.tmp' failed. Not Retrying. execute /usr/lib/python3.12/site-packages/oslo_concurrency/processutils.py:423
Oct 09 16:15:04 compute-0 nova_compute[117331]: 2025-10-09 16:15:04.882 2 DEBUG nova.virt.libvirt.volume.remotefs [None req-026a7915-d8a6-4493-80d1-6eae3deb413a 005258eefa5345409efe13f6a1de22ab c076c42271004e7aa86e84416e33f826 - - default default] Creating directory /var/lib/nova/instances/481b51b1-c134-4c7a-af94-d3d3794a7971 on remote host 192.168.122.101 create_dir /usr/lib/python3.12/site-packages/nova/virt/libvirt/volume/remotefs.py:91
Oct 09 16:15:04 compute-0 nova_compute[117331]: 2025-10-09 16:15:04.882 2 DEBUG oslo_concurrency.processutils [None req-026a7915-d8a6-4493-80d1-6eae3deb413a 005258eefa5345409efe13f6a1de22ab c076c42271004e7aa86e84416e33f826 - - default default] Running cmd (subprocess): ssh -o BatchMode=yes 192.168.122.101 mkdir -p /var/lib/nova/instances/481b51b1-c134-4c7a-af94-d3d3794a7971 execute /usr/lib/python3.12/site-packages/oslo_concurrency/processutils.py:349
Oct 09 16:15:04 compute-0 nova_compute[117331]: 2025-10-09 16:15:04.943 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 09 16:15:05 compute-0 sshd-session[142834]: Invalid user gitlab from 134.199.199.215 port 58272
Oct 09 16:15:05 compute-0 sshd-session[142834]: pam_unix(sshd:auth): check pass; user unknown
Oct 09 16:15:05 compute-0 sshd-session[142834]: pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=134.199.199.215
Oct 09 16:15:05 compute-0 nova_compute[117331]: 2025-10-09 16:15:05.084 2 DEBUG oslo_concurrency.processutils [None req-026a7915-d8a6-4493-80d1-6eae3deb413a 005258eefa5345409efe13f6a1de22ab c076c42271004e7aa86e84416e33f826 - - default default] CMD "ssh -o BatchMode=yes 192.168.122.101 mkdir -p /var/lib/nova/instances/481b51b1-c134-4c7a-af94-d3d3794a7971" returned: 0 in 0.202s execute /usr/lib/python3.12/site-packages/oslo_concurrency/processutils.py:372
Oct 09 16:15:05 compute-0 nova_compute[117331]: 2025-10-09 16:15:05.087 2 DEBUG nova.virt.libvirt.driver [None req-026a7915-d8a6-4493-80d1-6eae3deb413a 005258eefa5345409efe13f6a1de22ab c076c42271004e7aa86e84416e33f826 - - default default] [instance: 481b51b1-c134-4c7a-af94-d3d3794a7971] Shutting down instance from state 1 _clean_shutdown /usr/lib/python3.12/site-packages/nova/virt/libvirt/driver.py:4247
Oct 09 16:15:05 compute-0 nova_compute[117331]: 2025-10-09 16:15:05.495 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 09 16:15:06 compute-0 podman[142847]: 2025-10-09 16:15:06.832212559 +0000 UTC m=+0.060770398 container health_status 64fa251112d01abb34ec1bb8e22019815ddcaaaa391ce5ec91517147ac5a9994 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, com.redhat.component=ubi9-minimal-container, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, vendor=Red Hat, Inc., container_name=openstack_network_exporter, url=https://catalog.redhat.com/en/search?searchType=containers, name=ubi9-minimal, io.openshift.tags=minimal rhel9, io.buildah.version=1.33.7, version=9.6, maintainer=Red Hat, Inc., managed_by=edpm_ansible, release=1755695350, architecture=x86_64, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., config_id=edpm, vcs-type=git, build-date=2025-08-20T13:12:41, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., io.openshift.expose-services=, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, distribution-scope=public)
Oct 09 16:15:07 compute-0 kernel: tapdc640852-4b (unregistering): left promiscuous mode
Oct 09 16:15:07 compute-0 NetworkManager[1028]: <info>  [1760026507.2616] device (tapdc640852-4b): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Oct 09 16:15:07 compute-0 nova_compute[117331]: 2025-10-09 16:15:07.270 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 09 16:15:07 compute-0 ovn_controller[19752]: 2025-10-09T16:15:07Z|00072|binding|INFO|Releasing lport dc640852-4b77-4312-9033-7ea4a70836d4 from this chassis (sb_readonly=0)
Oct 09 16:15:07 compute-0 ovn_controller[19752]: 2025-10-09T16:15:07Z|00073|binding|INFO|Setting lport dc640852-4b77-4312-9033-7ea4a70836d4 down in Southbound
Oct 09 16:15:07 compute-0 ovn_controller[19752]: 2025-10-09T16:15:07Z|00074|binding|INFO|Removing iface tapdc640852-4b ovn-installed in OVS
Oct 09 16:15:07 compute-0 nova_compute[117331]: 2025-10-09 16:15:07.274 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 09 16:15:07 compute-0 nova_compute[117331]: 2025-10-09 16:15:07.290 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 09 16:15:07 compute-0 ovn_metadata_agent[28608]: 2025-10-09 16:15:07.301 28613 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:79:a2:c5 10.100.0.7'], port_security=['fa:16:3e:79:a2:c5 10.100.0.7'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.7/28', 'neutron:device_id': '481b51b1-c134-4c7a-af94-d3d3794a7971', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-7ed4244a-b510-4df6-9ffd-2f86603932fc', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'be4e2f9059cd48f5b44a612256e3fc7b', 'neutron:revision_number': '5', 'neutron:security_group_ids': '92d94554-4438-4b2d-9e48-eb178205b5a1', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=8754009b-0d8f-4d41-af4c-a82e7b57a4f6, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f37b2576840>], logical_port=dc640852-4b77-4312-9033-7ea4a70836d4) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f37b2576840>]) matches /usr/lib/python3.12/site-packages/ovsdbapp/backend/ovs_idl/event.py:55
Oct 09 16:15:07 compute-0 ovn_metadata_agent[28608]: 2025-10-09 16:15:07.303 28613 INFO neutron.agent.ovn.metadata.agent [-] Port dc640852-4b77-4312-9033-7ea4a70836d4 in datapath 7ed4244a-b510-4df6-9ffd-2f86603932fc unbound from our chassis
Oct 09 16:15:07 compute-0 ovn_metadata_agent[28608]: 2025-10-09 16:15:07.305 28613 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 7ed4244a-b510-4df6-9ffd-2f86603932fc, tearing the namespace down if needed _get_provision_params /usr/lib/python3.12/site-packages/neutron/agent/ovn/metadata/agent.py:756
Oct 09 16:15:07 compute-0 ovn_metadata_agent[28608]: 2025-10-09 16:15:07.305 139687 DEBUG oslo.privsep.daemon [-] privsep: reply[6ef9df19-a114-479b-b0c7-38c54078ea56]: (4, True) _call_back /usr/lib/python3.12/site-packages/oslo_privsep/daemon.py:515
Oct 09 16:15:07 compute-0 ovn_metadata_agent[28608]: 2025-10-09 16:15:07.306 28613 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-7ed4244a-b510-4df6-9ffd-2f86603932fc namespace which is not needed anymore
Oct 09 16:15:07 compute-0 systemd[1]: machine-qemu\x2d3\x2dinstance\x2d00000004.scope: Deactivated successfully.
Oct 09 16:15:07 compute-0 systemd[1]: machine-qemu\x2d3\x2dinstance\x2d00000004.scope: Consumed 13.702s CPU time.
Oct 09 16:15:07 compute-0 systemd-machined[77487]: Machine qemu-3-instance-00000004 terminated.
Oct 09 16:15:07 compute-0 neutron-haproxy-ovnmeta-7ed4244a-b510-4df6-9ffd-2f86603932fc[142620]: [NOTICE]   (142624) : haproxy version is 3.0.5-8e879a5
Oct 09 16:15:07 compute-0 podman[142892]: 2025-10-09 16:15:07.431677274 +0000 UTC m=+0.032756386 container kill aad6919fe09d630d18be566733892012a924a8689214620123474a78071ae585 (image=38.102.83.66:5001/podified-master-centos10/openstack-neutron-metadata-agent-ovn:watcher_latest, name=neutron-haproxy-ovnmeta-7ed4244a-b510-4df6-9ffd-2f86603932fc, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, tcib_build_tag=watcher_latest, org.label-schema.build-date=20251007, org.label-schema.name=CentOS Stream 10 Base Image, tcib_managed=true, io.buildah.version=1.41.4, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0)
Oct 09 16:15:07 compute-0 neutron-haproxy-ovnmeta-7ed4244a-b510-4df6-9ffd-2f86603932fc[142620]: [NOTICE]   (142624) : path to executable is /usr/sbin/haproxy
Oct 09 16:15:07 compute-0 neutron-haproxy-ovnmeta-7ed4244a-b510-4df6-9ffd-2f86603932fc[142620]: [WARNING]  (142624) : Exiting Master process...
Oct 09 16:15:07 compute-0 neutron-haproxy-ovnmeta-7ed4244a-b510-4df6-9ffd-2f86603932fc[142620]: [ALERT]    (142624) : Current worker (142626) exited with code 143 (Terminated)
Oct 09 16:15:07 compute-0 neutron-haproxy-ovnmeta-7ed4244a-b510-4df6-9ffd-2f86603932fc[142620]: [WARNING]  (142624) : All workers exited. Exiting... (0)
Oct 09 16:15:07 compute-0 systemd[1]: libpod-aad6919fe09d630d18be566733892012a924a8689214620123474a78071ae585.scope: Deactivated successfully.
Oct 09 16:15:07 compute-0 podman[142907]: 2025-10-09 16:15:07.469491131 +0000 UTC m=+0.023786090 container died aad6919fe09d630d18be566733892012a924a8689214620123474a78071ae585 (image=38.102.83.66:5001/podified-master-centos10/openstack-neutron-metadata-agent-ovn:watcher_latest, name=neutron-haproxy-ovnmeta-7ed4244a-b510-4df6-9ffd-2f86603932fc, org.label-schema.build-date=20251007, org.label-schema.schema-version=1.0, tcib_build_tag=watcher_latest, io.buildah.version=1.41.4, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 10 Base Image, tcib_managed=true)
Oct 09 16:15:07 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-aad6919fe09d630d18be566733892012a924a8689214620123474a78071ae585-userdata-shm.mount: Deactivated successfully.
Oct 09 16:15:07 compute-0 systemd[1]: var-lib-containers-storage-overlay-8e22f30b888566d0973e910de251c57b111be921aab2b334636ca34bd6b92506-merged.mount: Deactivated successfully.
Oct 09 16:15:07 compute-0 sshd-session[142834]: Failed password for invalid user gitlab from 134.199.199.215 port 58272 ssh2
Oct 09 16:15:07 compute-0 podman[142907]: 2025-10-09 16:15:07.934741376 +0000 UTC m=+0.489036315 container cleanup aad6919fe09d630d18be566733892012a924a8689214620123474a78071ae585 (image=38.102.83.66:5001/podified-master-centos10/openstack-neutron-metadata-agent-ovn:watcher_latest, name=neutron-haproxy-ovnmeta-7ed4244a-b510-4df6-9ffd-2f86603932fc, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251007, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true, tcib_build_tag=watcher_latest, io.buildah.version=1.41.4)
Oct 09 16:15:07 compute-0 systemd[1]: libpod-conmon-aad6919fe09d630d18be566733892012a924a8689214620123474a78071ae585.scope: Deactivated successfully.
Oct 09 16:15:07 compute-0 nova_compute[117331]: 2025-10-09 16:15:07.974 2 DEBUG nova.compute.manager [req-98a046a9-d07d-411c-ab53-24d868589188 req-3a719241-e95c-416f-9228-6e3dc8f35264 ee15961ad2be4bada6c106ac4f69f422 c076c42271004e7aa86e84416e33f826 - - default default] [instance: 481b51b1-c134-4c7a-af94-d3d3794a7971] Received event network-vif-unplugged-dc640852-4b77-4312-9033-7ea4a70836d4 external_instance_event /usr/lib/python3.12/site-packages/nova/compute/manager.py:11812
Oct 09 16:15:07 compute-0 nova_compute[117331]: 2025-10-09 16:15:07.975 2 DEBUG oslo_concurrency.lockutils [req-98a046a9-d07d-411c-ab53-24d868589188 req-3a719241-e95c-416f-9228-6e3dc8f35264 ee15961ad2be4bada6c106ac4f69f422 c076c42271004e7aa86e84416e33f826 - - default default] Acquiring lock "481b51b1-c134-4c7a-af94-d3d3794a7971-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:405
Oct 09 16:15:07 compute-0 nova_compute[117331]: 2025-10-09 16:15:07.975 2 DEBUG oslo_concurrency.lockutils [req-98a046a9-d07d-411c-ab53-24d868589188 req-3a719241-e95c-416f-9228-6e3dc8f35264 ee15961ad2be4bada6c106ac4f69f422 c076c42271004e7aa86e84416e33f826 - - default default] Lock "481b51b1-c134-4c7a-af94-d3d3794a7971-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:410
Oct 09 16:15:07 compute-0 nova_compute[117331]: 2025-10-09 16:15:07.976 2 DEBUG oslo_concurrency.lockutils [req-98a046a9-d07d-411c-ab53-24d868589188 req-3a719241-e95c-416f-9228-6e3dc8f35264 ee15961ad2be4bada6c106ac4f69f422 c076c42271004e7aa86e84416e33f826 - - default default] Lock "481b51b1-c134-4c7a-af94-d3d3794a7971-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:424
Oct 09 16:15:07 compute-0 nova_compute[117331]: 2025-10-09 16:15:07.976 2 DEBUG nova.compute.manager [req-98a046a9-d07d-411c-ab53-24d868589188 req-3a719241-e95c-416f-9228-6e3dc8f35264 ee15961ad2be4bada6c106ac4f69f422 c076c42271004e7aa86e84416e33f826 - - default default] [instance: 481b51b1-c134-4c7a-af94-d3d3794a7971] No waiting events found dispatching network-vif-unplugged-dc640852-4b77-4312-9033-7ea4a70836d4 pop_instance_event /usr/lib/python3.12/site-packages/nova/compute/manager.py:344
Oct 09 16:15:07 compute-0 nova_compute[117331]: 2025-10-09 16:15:07.976 2 WARNING nova.compute.manager [req-98a046a9-d07d-411c-ab53-24d868589188 req-3a719241-e95c-416f-9228-6e3dc8f35264 ee15961ad2be4bada6c106ac4f69f422 c076c42271004e7aa86e84416e33f826 - - default default] [instance: 481b51b1-c134-4c7a-af94-d3d3794a7971] Received unexpected event network-vif-unplugged-dc640852-4b77-4312-9033-7ea4a70836d4 for instance with vm_state active and task_state resize_migrating.
Oct 09 16:15:08 compute-0 nova_compute[117331]: 2025-10-09 16:15:08.104 2 INFO nova.virt.libvirt.driver [None req-026a7915-d8a6-4493-80d1-6eae3deb413a 005258eefa5345409efe13f6a1de22ab c076c42271004e7aa86e84416e33f826 - - default default] [instance: 481b51b1-c134-4c7a-af94-d3d3794a7971] Instance shutdown successfully after 3 seconds.
Oct 09 16:15:08 compute-0 nova_compute[117331]: 2025-10-09 16:15:08.110 2 INFO nova.virt.libvirt.driver [-] [instance: 481b51b1-c134-4c7a-af94-d3d3794a7971] Instance destroyed successfully.
Oct 09 16:15:08 compute-0 nova_compute[117331]: 2025-10-09 16:15:08.111 2 DEBUG nova.virt.libvirt.vif [None req-026a7915-d8a6-4493-80d1-6eae3deb413a 005258eefa5345409efe13f6a1de22ab c076c42271004e7aa86e84416e33f826 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,compute_id=1,config_drive='True',created_at=2025-10-09T16:14:07Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description=None,display_name='tempest-TestExecuteActionsViaActuator-server-1479250465',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testexecuteactionsviaactuator-server-1479250465',id=4,image_ref='b7d6e0af-25e4-4227-9dc6-43143898ceee',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-10-09T16:14:24Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=MigrationContext,new_flavor=Flavor(2),node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=1,progress=0,project_id='be4e2f9059cd48f5b44a612256e3fc7b',ramdisk_id='',reservation_id='r-7c00rued',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=ServiceList,shutdown_terminate=False,system_metadata={boot_roles='reader,member,manager,admin',image_base_image_ref='b7d6e0af-25e4-4227-9dc6-43143898ceee',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='v
irtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',old_vm_state='active',owner_project_name='tempest-TestExecuteActionsViaActuator-1347788182',owner_user_name='tempest-TestExecuteActionsViaActuator-1347788182-project-admin'},tags=<?>,task_state='resize_migrating',terminated_at=None,trusted_certs=None,updated_at=2025-10-09T16:14:56Z,user_data=None,user_id='e5044998ddc3419bb14cc08417add581',uuid=481b51b1-c134-4c7a-af94-d3d3794a7971,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "dc640852-4b77-4312-9033-7ea4a70836d4", "address": "fa:16:3e:79:a2:c5", "network": {"id": "7ed4244a-b510-4df6-9ffd-2f86603932fc", "bridge": "br-int", "label": "tempest-TestExecuteActionsViaActuator-1219246163-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [], "label": "tempest-TestExecuteActionsViaActuator-1219246163-network", "vif_mac": "fa:16:3e:79:a2:c5"}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "153d68ab33e64b958060574bb1741725", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapdc640852-4b", "ovs_interfaceid": "dc640852-4b77-4312-9033-7ea4a70836d4", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.12/site-packages/nova/virt/libvirt/vif.py:839
Oct 09 16:15:08 compute-0 nova_compute[117331]: 2025-10-09 16:15:08.112 2 DEBUG nova.network.os_vif_util [None req-026a7915-d8a6-4493-80d1-6eae3deb413a 005258eefa5345409efe13f6a1de22ab c076c42271004e7aa86e84416e33f826 - - default default] Converting VIF {"id": "dc640852-4b77-4312-9033-7ea4a70836d4", "address": "fa:16:3e:79:a2:c5", "network": {"id": "7ed4244a-b510-4df6-9ffd-2f86603932fc", "bridge": "br-int", "label": "tempest-TestExecuteActionsViaActuator-1219246163-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [], "label": "tempest-TestExecuteActionsViaActuator-1219246163-network", "vif_mac": "fa:16:3e:79:a2:c5"}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "153d68ab33e64b958060574bb1741725", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapdc640852-4b", "ovs_interfaceid": "dc640852-4b77-4312-9033-7ea4a70836d4", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.12/site-packages/nova/network/os_vif_util.py:511
Oct 09 16:15:08 compute-0 nova_compute[117331]: 2025-10-09 16:15:08.112 2 DEBUG nova.network.os_vif_util [None req-026a7915-d8a6-4493-80d1-6eae3deb413a 005258eefa5345409efe13f6a1de22ab c076c42271004e7aa86e84416e33f826 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:79:a2:c5,bridge_name='br-int',has_traffic_filtering=True,id=dc640852-4b77-4312-9033-7ea4a70836d4,network=Network(7ed4244a-b510-4df6-9ffd-2f86603932fc),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapdc640852-4b') nova_to_osvif_vif /usr/lib/python3.12/site-packages/nova/network/os_vif_util.py:548
Oct 09 16:15:08 compute-0 nova_compute[117331]: 2025-10-09 16:15:08.113 2 DEBUG os_vif [None req-026a7915-d8a6-4493-80d1-6eae3deb413a 005258eefa5345409efe13f6a1de22ab c076c42271004e7aa86e84416e33f826 - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:79:a2:c5,bridge_name='br-int',has_traffic_filtering=True,id=dc640852-4b77-4312-9033-7ea4a70836d4,network=Network(7ed4244a-b510-4df6-9ffd-2f86603932fc),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapdc640852-4b') unplug /usr/lib/python3.12/site-packages/os_vif/__init__.py:109
Oct 09 16:15:08 compute-0 nova_compute[117331]: 2025-10-09 16:15:08.116 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 22 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 09 16:15:08 compute-0 nova_compute[117331]: 2025-10-09 16:15:08.117 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapdc640852-4b, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.12/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 09 16:15:08 compute-0 nova_compute[117331]: 2025-10-09 16:15:08.118 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 09 16:15:08 compute-0 nova_compute[117331]: 2025-10-09 16:15:08.120 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 09 16:15:08 compute-0 nova_compute[117331]: 2025-10-09 16:15:08.121 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 22 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 09 16:15:08 compute-0 nova_compute[117331]: 2025-10-09 16:15:08.121 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbDestroyCommand(_result=None, table=QoS, record=b64ef92e-d9ef-4b7a-b42b-a6bcde7e0e26) do_commit /usr/lib/python3.12/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 09 16:15:08 compute-0 nova_compute[117331]: 2025-10-09 16:15:08.121 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 09 16:15:08 compute-0 nova_compute[117331]: 2025-10-09 16:15:08.122 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 09 16:15:08 compute-0 nova_compute[117331]: 2025-10-09 16:15:08.128 2 INFO os_vif [None req-026a7915-d8a6-4493-80d1-6eae3deb413a 005258eefa5345409efe13f6a1de22ab c076c42271004e7aa86e84416e33f826 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:79:a2:c5,bridge_name='br-int',has_traffic_filtering=True,id=dc640852-4b77-4312-9033-7ea4a70836d4,network=Network(7ed4244a-b510-4df6-9ffd-2f86603932fc),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapdc640852-4b')
Oct 09 16:15:08 compute-0 nova_compute[117331]: 2025-10-09 16:15:08.136 2 DEBUG oslo_concurrency.processutils [None req-026a7915-d8a6-4493-80d1-6eae3deb413a 005258eefa5345409efe13f6a1de22ab c076c42271004e7aa86e84416e33f826 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/481b51b1-c134-4c7a-af94-d3d3794a7971/disk --force-share --output=json execute /usr/lib/python3.12/site-packages/oslo_concurrency/processutils.py:349
Oct 09 16:15:08 compute-0 nova_compute[117331]: 2025-10-09 16:15:08.224 2 DEBUG oslo_concurrency.processutils [None req-026a7915-d8a6-4493-80d1-6eae3deb413a 005258eefa5345409efe13f6a1de22ab c076c42271004e7aa86e84416e33f826 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/481b51b1-c134-4c7a-af94-d3d3794a7971/disk --force-share --output=json" returned: 0 in 0.089s execute /usr/lib/python3.12/site-packages/oslo_concurrency/processutils.py:372
Oct 09 16:15:08 compute-0 nova_compute[117331]: 2025-10-09 16:15:08.227 2 DEBUG oslo_concurrency.processutils [None req-026a7915-d8a6-4493-80d1-6eae3deb413a 005258eefa5345409efe13f6a1de22ab c076c42271004e7aa86e84416e33f826 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/481b51b1-c134-4c7a-af94-d3d3794a7971/disk --force-share --output=json execute /usr/lib/python3.12/site-packages/oslo_concurrency/processutils.py:349
Oct 09 16:15:08 compute-0 nova_compute[117331]: 2025-10-09 16:15:08.291 2 DEBUG oslo_concurrency.processutils [None req-026a7915-d8a6-4493-80d1-6eae3deb413a 005258eefa5345409efe13f6a1de22ab c076c42271004e7aa86e84416e33f826 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/481b51b1-c134-4c7a-af94-d3d3794a7971/disk --force-share --output=json" returned: 0 in 0.063s execute /usr/lib/python3.12/site-packages/oslo_concurrency/processutils.py:372
Oct 09 16:15:08 compute-0 nova_compute[117331]: 2025-10-09 16:15:08.293 2 DEBUG nova.virt.libvirt.volume.remotefs [None req-026a7915-d8a6-4493-80d1-6eae3deb413a 005258eefa5345409efe13f6a1de22ab c076c42271004e7aa86e84416e33f826 - - default default] Copying file /var/lib/nova/instances/481b51b1-c134-4c7a-af94-d3d3794a7971_resize/disk to 192.168.122.101:/var/lib/nova/instances/481b51b1-c134-4c7a-af94-d3d3794a7971/disk copy_file /usr/lib/python3.12/site-packages/nova/virt/libvirt/volume/remotefs.py:103
Oct 09 16:15:08 compute-0 nova_compute[117331]: 2025-10-09 16:15:08.293 2 DEBUG oslo_concurrency.processutils [None req-026a7915-d8a6-4493-80d1-6eae3deb413a 005258eefa5345409efe13f6a1de22ab c076c42271004e7aa86e84416e33f826 - - default default] Running cmd (subprocess): scp -r /var/lib/nova/instances/481b51b1-c134-4c7a-af94-d3d3794a7971_resize/disk 192.168.122.101:/var/lib/nova/instances/481b51b1-c134-4c7a-af94-d3d3794a7971/disk execute /usr/lib/python3.12/site-packages/oslo_concurrency/processutils.py:349
Oct 09 16:15:08 compute-0 nova_compute[117331]: 2025-10-09 16:15:08.306 2 DEBUG oslo_service.periodic_task [None req-92a13bed-c081-4c7b-87f3-bafebefb79f2 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.12/site-packages/oslo_service/periodic_task.py:210
Oct 09 16:15:08 compute-0 podman[142922]: 2025-10-09 16:15:08.342401335 +0000 UTC m=+0.862371039 container remove aad6919fe09d630d18be566733892012a924a8689214620123474a78071ae585 (image=38.102.83.66:5001/podified-master-centos10/openstack-neutron-metadata-agent-ovn:watcher_latest, name=neutron-haproxy-ovnmeta-7ed4244a-b510-4df6-9ffd-2f86603932fc, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251007, tcib_build_tag=watcher_latest, tcib_managed=true, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, io.buildah.version=1.41.4, org.label-schema.vendor=CentOS)
Oct 09 16:15:08 compute-0 ovn_metadata_agent[28608]: 2025-10-09 16:15:08.348 139687 DEBUG oslo.privsep.daemon [-] privsep: reply[6920dbea-1c51-4d79-8f6e-4d93da8c48fa]: (4, ("Thu Oct  9 04:15:07 PM UTC 2025 Sending signal '15' to neutron-haproxy-ovnmeta-7ed4244a-b510-4df6-9ffd-2f86603932fc (aad6919fe09d630d18be566733892012a924a8689214620123474a78071ae585)\naad6919fe09d630d18be566733892012a924a8689214620123474a78071ae585\nThu Oct  9 04:15:07 PM UTC 2025 Deleting container neutron-haproxy-ovnmeta-7ed4244a-b510-4df6-9ffd-2f86603932fc (aad6919fe09d630d18be566733892012a924a8689214620123474a78071ae585)\naad6919fe09d630d18be566733892012a924a8689214620123474a78071ae585\n", '', 0)) _call_back /usr/lib/python3.12/site-packages/oslo_privsep/daemon.py:515
Oct 09 16:15:08 compute-0 ovn_metadata_agent[28608]: 2025-10-09 16:15:08.350 139687 DEBUG oslo.privsep.daemon [-] privsep: reply[2f876c4d-4e60-4ad4-8028-552ddd29ab39]: (4, None) _call_back /usr/lib/python3.12/site-packages/oslo_privsep/daemon.py:515
Oct 09 16:15:08 compute-0 ovn_metadata_agent[28608]: 2025-10-09 16:15:08.350 28613 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/7ed4244a-b510-4df6-9ffd-2f86603932fc.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/7ed4244a-b510-4df6-9ffd-2f86603932fc.pid.haproxy' get_value_from_file /usr/lib/python3.12/site-packages/neutron/agent/linux/utils.py:268
Oct 09 16:15:08 compute-0 ovn_metadata_agent[28608]: 2025-10-09 16:15:08.351 139687 DEBUG oslo.privsep.daemon [-] privsep: reply[cb33dd56-6bc4-4f6f-a1e2-a1503ea67469]: (4, None) _call_back /usr/lib/python3.12/site-packages/oslo_privsep/daemon.py:515
Oct 09 16:15:08 compute-0 ovn_metadata_agent[28608]: 2025-10-09 16:15:08.352 28613 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap7ed4244a-b0, bridge=None, if_exists=True) do_commit /usr/lib/python3.12/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 09 16:15:08 compute-0 kernel: tap7ed4244a-b0: left promiscuous mode
Oct 09 16:15:08 compute-0 nova_compute[117331]: 2025-10-09 16:15:08.355 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 09 16:15:08 compute-0 nova_compute[117331]: 2025-10-09 16:15:08.370 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 09 16:15:08 compute-0 ovn_metadata_agent[28608]: 2025-10-09 16:15:08.372 139687 DEBUG oslo.privsep.daemon [-] privsep: reply[670022f2-b98c-4e8d-aff3-164841e9f520]: (4, True) _call_back /usr/lib/python3.12/site-packages/oslo_privsep/daemon.py:515
Oct 09 16:15:08 compute-0 sshd-session[142965]: Invalid user centos from 134.199.199.215 port 52466
Oct 09 16:15:08 compute-0 ovn_metadata_agent[28608]: 2025-10-09 16:15:08.400 139687 DEBUG oslo.privsep.daemon [-] privsep: reply[9f3368cd-1cd2-497a-99d3-4f2c7129182b]: (4, None) _call_back /usr/lib/python3.12/site-packages/oslo_privsep/daemon.py:515
Oct 09 16:15:08 compute-0 ovn_metadata_agent[28608]: 2025-10-09 16:15:08.402 139687 DEBUG oslo.privsep.daemon [-] privsep: reply[4cfc6183-e11b-4b0d-a3f1-a7ca99f000a2]: (4, True) _call_back /usr/lib/python3.12/site-packages/oslo_privsep/daemon.py:515
Oct 09 16:15:08 compute-0 ovn_metadata_agent[28608]: 2025-10-09 16:15:08.418 139687 DEBUG oslo.privsep.daemon [-] privsep: reply[8659662e-130d-4a00-9f9b-f8e3a12287e9]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 140614, 'reachable_time': 25114, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 142973, 'error': None, 'target': 'ovnmeta-7ed4244a-b510-4df6-9ffd-2f86603932fc', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.12/site-packages/oslo_privsep/daemon.py:515
Oct 09 16:15:08 compute-0 ovn_metadata_agent[28608]: 2025-10-09 16:15:08.420 28727 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-7ed4244a-b510-4df6-9ffd-2f86603932fc deleted. remove_netns /usr/lib/python3.12/site-packages/neutron/privileged/agent/linux/ip_lib.py:603
Oct 09 16:15:08 compute-0 ovn_metadata_agent[28608]: 2025-10-09 16:15:08.420 28727 DEBUG oslo.privsep.daemon [-] privsep: reply[9140f50b-29b2-4c1d-ab10-15bc1c0e53e0]: (4, None) _call_back /usr/lib/python3.12/site-packages/oslo_privsep/daemon.py:515
Oct 09 16:15:08 compute-0 systemd[1]: run-netns-ovnmeta\x2d7ed4244a\x2db510\x2d4df6\x2d9ffd\x2d2f86603932fc.mount: Deactivated successfully.
Oct 09 16:15:08 compute-0 sshd-session[142965]: pam_unix(sshd:auth): check pass; user unknown
Oct 09 16:15:08 compute-0 sshd-session[142965]: pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=134.199.199.215
Oct 09 16:15:08 compute-0 sshd-session[142834]: Connection closed by invalid user gitlab 134.199.199.215 port 58272 [preauth]
Oct 09 16:15:09 compute-0 nova_compute[117331]: 2025-10-09 16:15:09.307 2 DEBUG oslo_service.periodic_task [None req-92a13bed-c081-4c7b-87f3-bafebefb79f2 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.12/site-packages/oslo_service/periodic_task.py:210
Oct 09 16:15:09 compute-0 nova_compute[117331]: 2025-10-09 16:15:09.423 2 DEBUG oslo_concurrency.processutils [None req-026a7915-d8a6-4493-80d1-6eae3deb413a 005258eefa5345409efe13f6a1de22ab c076c42271004e7aa86e84416e33f826 - - default default] CMD "scp -r /var/lib/nova/instances/481b51b1-c134-4c7a-af94-d3d3794a7971_resize/disk 192.168.122.101:/var/lib/nova/instances/481b51b1-c134-4c7a-af94-d3d3794a7971/disk" returned: 0 in 1.130s execute /usr/lib/python3.12/site-packages/oslo_concurrency/processutils.py:372
Oct 09 16:15:09 compute-0 nova_compute[117331]: 2025-10-09 16:15:09.425 2 DEBUG nova.virt.libvirt.volume.remotefs [None req-026a7915-d8a6-4493-80d1-6eae3deb413a 005258eefa5345409efe13f6a1de22ab c076c42271004e7aa86e84416e33f826 - - default default] Copying file /var/lib/nova/instances/481b51b1-c134-4c7a-af94-d3d3794a7971_resize/disk.config to 192.168.122.101:/var/lib/nova/instances/481b51b1-c134-4c7a-af94-d3d3794a7971/disk.config copy_file /usr/lib/python3.12/site-packages/nova/virt/libvirt/volume/remotefs.py:103
Oct 09 16:15:09 compute-0 nova_compute[117331]: 2025-10-09 16:15:09.425 2 DEBUG oslo_concurrency.processutils [None req-026a7915-d8a6-4493-80d1-6eae3deb413a 005258eefa5345409efe13f6a1de22ab c076c42271004e7aa86e84416e33f826 - - default default] Running cmd (subprocess): scp -C -r /var/lib/nova/instances/481b51b1-c134-4c7a-af94-d3d3794a7971_resize/disk.config 192.168.122.101:/var/lib/nova/instances/481b51b1-c134-4c7a-af94-d3d3794a7971/disk.config execute /usr/lib/python3.12/site-packages/oslo_concurrency/processutils.py:349
Oct 09 16:15:09 compute-0 nova_compute[117331]: 2025-10-09 16:15:09.659 2 DEBUG oslo_concurrency.processutils [None req-026a7915-d8a6-4493-80d1-6eae3deb413a 005258eefa5345409efe13f6a1de22ab c076c42271004e7aa86e84416e33f826 - - default default] CMD "scp -C -r /var/lib/nova/instances/481b51b1-c134-4c7a-af94-d3d3794a7971_resize/disk.config 192.168.122.101:/var/lib/nova/instances/481b51b1-c134-4c7a-af94-d3d3794a7971/disk.config" returned: 0 in 0.234s execute /usr/lib/python3.12/site-packages/oslo_concurrency/processutils.py:372
Oct 09 16:15:09 compute-0 nova_compute[117331]: 2025-10-09 16:15:09.660 2 DEBUG nova.virt.libvirt.volume.remotefs [None req-026a7915-d8a6-4493-80d1-6eae3deb413a 005258eefa5345409efe13f6a1de22ab c076c42271004e7aa86e84416e33f826 - - default default] Copying file /var/lib/nova/instances/481b51b1-c134-4c7a-af94-d3d3794a7971_resize/disk.info to 192.168.122.101:/var/lib/nova/instances/481b51b1-c134-4c7a-af94-d3d3794a7971/disk.info copy_file /usr/lib/python3.12/site-packages/nova/virt/libvirt/volume/remotefs.py:103
Oct 09 16:15:09 compute-0 nova_compute[117331]: 2025-10-09 16:15:09.660 2 DEBUG oslo_concurrency.processutils [None req-026a7915-d8a6-4493-80d1-6eae3deb413a 005258eefa5345409efe13f6a1de22ab c076c42271004e7aa86e84416e33f826 - - default default] Running cmd (subprocess): scp -C -r /var/lib/nova/instances/481b51b1-c134-4c7a-af94-d3d3794a7971_resize/disk.info 192.168.122.101:/var/lib/nova/instances/481b51b1-c134-4c7a-af94-d3d3794a7971/disk.info execute /usr/lib/python3.12/site-packages/oslo_concurrency/processutils.py:349
Oct 09 16:15:09 compute-0 nova_compute[117331]: 2025-10-09 16:15:09.882 2 DEBUG oslo_concurrency.processutils [None req-026a7915-d8a6-4493-80d1-6eae3deb413a 005258eefa5345409efe13f6a1de22ab c076c42271004e7aa86e84416e33f826 - - default default] CMD "scp -C -r /var/lib/nova/instances/481b51b1-c134-4c7a-af94-d3d3794a7971_resize/disk.info 192.168.122.101:/var/lib/nova/instances/481b51b1-c134-4c7a-af94-d3d3794a7971/disk.info" returned: 0 in 0.221s execute /usr/lib/python3.12/site-packages/oslo_concurrency/processutils.py:372
Oct 09 16:15:09 compute-0 podman[142978]: 2025-10-09 16:15:09.882607078 +0000 UTC m=+0.112411386 container health_status d615e4f24ff62fbb14be4a63e6da568ec5b22309773c1c66103b6b4de64a8a2f (image=38.102.83.66:5001/podified-master-centos10/openstack-ovn-controller:watcher_latest, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251007, org.label-schema.schema-version=1.0, tcib_build_tag=watcher_latest, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': '38.102.83.66:5001/podified-master-centos10/openstack-ovn-controller:watcher_latest', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, io.buildah.version=1.41.4, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 10 Base Image, container_name=ovn_controller, org.label-schema.license=GPLv2, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS)
Oct 09 16:15:09 compute-0 nova_compute[117331]: 2025-10-09 16:15:09.884 2 WARNING neutronclient.v2_0.client [None req-026a7915-d8a6-4493-80d1-6eae3deb413a 005258eefa5345409efe13f6a1de22ab c076c42271004e7aa86e84416e33f826 - - default default] The python binding code in neutronclient is deprecated in favor of OpenstackSDK, please use that as this will be removed in a future release.
Oct 09 16:15:09 compute-0 nova_compute[117331]: 2025-10-09 16:15:09.885 2 WARNING neutronclient.v2_0.client [None req-026a7915-d8a6-4493-80d1-6eae3deb413a 005258eefa5345409efe13f6a1de22ab c076c42271004e7aa86e84416e33f826 - - default default] The python binding code in neutronclient is deprecated in favor of OpenstackSDK, please use that as this will be removed in a future release.
Oct 09 16:15:10 compute-0 sshd-session[142965]: Failed password for invalid user centos from 134.199.199.215 port 52466 ssh2
Oct 09 16:15:10 compute-0 nova_compute[117331]: 2025-10-09 16:15:10.290 2 DEBUG nova.compute.manager [req-0cd49164-3a1b-4089-8b2d-42fb19832939 req-ffb0fb3a-3c10-44a9-bff3-496a3d47c223 ee15961ad2be4bada6c106ac4f69f422 c076c42271004e7aa86e84416e33f826 - - default default] [instance: 481b51b1-c134-4c7a-af94-d3d3794a7971] Received event network-vif-unplugged-dc640852-4b77-4312-9033-7ea4a70836d4 external_instance_event /usr/lib/python3.12/site-packages/nova/compute/manager.py:11812
Oct 09 16:15:10 compute-0 nova_compute[117331]: 2025-10-09 16:15:10.291 2 DEBUG oslo_concurrency.lockutils [req-0cd49164-3a1b-4089-8b2d-42fb19832939 req-ffb0fb3a-3c10-44a9-bff3-496a3d47c223 ee15961ad2be4bada6c106ac4f69f422 c076c42271004e7aa86e84416e33f826 - - default default] Acquiring lock "481b51b1-c134-4c7a-af94-d3d3794a7971-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:405
Oct 09 16:15:10 compute-0 nova_compute[117331]: 2025-10-09 16:15:10.291 2 DEBUG oslo_concurrency.lockutils [req-0cd49164-3a1b-4089-8b2d-42fb19832939 req-ffb0fb3a-3c10-44a9-bff3-496a3d47c223 ee15961ad2be4bada6c106ac4f69f422 c076c42271004e7aa86e84416e33f826 - - default default] Lock "481b51b1-c134-4c7a-af94-d3d3794a7971-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:410
Oct 09 16:15:10 compute-0 nova_compute[117331]: 2025-10-09 16:15:10.291 2 DEBUG oslo_concurrency.lockutils [req-0cd49164-3a1b-4089-8b2d-42fb19832939 req-ffb0fb3a-3c10-44a9-bff3-496a3d47c223 ee15961ad2be4bada6c106ac4f69f422 c076c42271004e7aa86e84416e33f826 - - default default] Lock "481b51b1-c134-4c7a-af94-d3d3794a7971-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:424
Oct 09 16:15:10 compute-0 nova_compute[117331]: 2025-10-09 16:15:10.292 2 DEBUG nova.compute.manager [req-0cd49164-3a1b-4089-8b2d-42fb19832939 req-ffb0fb3a-3c10-44a9-bff3-496a3d47c223 ee15961ad2be4bada6c106ac4f69f422 c076c42271004e7aa86e84416e33f826 - - default default] [instance: 481b51b1-c134-4c7a-af94-d3d3794a7971] No waiting events found dispatching network-vif-unplugged-dc640852-4b77-4312-9033-7ea4a70836d4 pop_instance_event /usr/lib/python3.12/site-packages/nova/compute/manager.py:344
Oct 09 16:15:10 compute-0 nova_compute[117331]: 2025-10-09 16:15:10.292 2 WARNING nova.compute.manager [req-0cd49164-3a1b-4089-8b2d-42fb19832939 req-ffb0fb3a-3c10-44a9-bff3-496a3d47c223 ee15961ad2be4bada6c106ac4f69f422 c076c42271004e7aa86e84416e33f826 - - default default] [instance: 481b51b1-c134-4c7a-af94-d3d3794a7971] Received unexpected event network-vif-unplugged-dc640852-4b77-4312-9033-7ea4a70836d4 for instance with vm_state active and task_state resize_migrating.
Oct 09 16:15:10 compute-0 nova_compute[117331]: 2025-10-09 16:15:10.305 2 DEBUG oslo_service.periodic_task [None req-92a13bed-c081-4c7b-87f3-bafebefb79f2 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.12/site-packages/oslo_service/periodic_task.py:210
Oct 09 16:15:10 compute-0 nova_compute[117331]: 2025-10-09 16:15:10.497 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 09 16:15:10 compute-0 sshd-session[142965]: Connection closed by invalid user centos 134.199.199.215 port 52466 [preauth]
Oct 09 16:15:10 compute-0 nova_compute[117331]: 2025-10-09 16:15:10.771 2 DEBUG neutronclient.v2_0.client [None req-026a7915-d8a6-4493-80d1-6eae3deb413a 005258eefa5345409efe13f6a1de22ab c076c42271004e7aa86e84416e33f826 - - default default] Error message: {"NeutronError": {"type": "PortBindingNotFound", "message": "Binding for port dc640852-4b77-4312-9033-7ea4a70836d4 for host compute-1.ctlplane.example.com could not be found.", "detail": ""}} _handle_fault_response /usr/lib/python3.12/site-packages/neutronclient/v2_0/client.py:265
Oct 09 16:15:10 compute-0 nova_compute[117331]: 2025-10-09 16:15:10.878 2 DEBUG oslo_concurrency.lockutils [None req-92a13bed-c081-4c7b-87f3-bafebefb79f2 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:405
Oct 09 16:15:10 compute-0 nova_compute[117331]: 2025-10-09 16:15:10.878 2 DEBUG oslo_concurrency.lockutils [None req-92a13bed-c081-4c7b-87f3-bafebefb79f2 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:410
Oct 09 16:15:10 compute-0 nova_compute[117331]: 2025-10-09 16:15:10.878 2 DEBUG oslo_concurrency.lockutils [None req-92a13bed-c081-4c7b-87f3-bafebefb79f2 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:424
Oct 09 16:15:10 compute-0 nova_compute[117331]: 2025-10-09 16:15:10.878 2 DEBUG nova.compute.resource_tracker [None req-92a13bed-c081-4c7b-87f3-bafebefb79f2 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.12/site-packages/nova/compute/resource_tracker.py:937
Oct 09 16:15:11 compute-0 sshd-session[143005]: Invalid user elasticsearch from 134.199.199.215 port 52476
Oct 09 16:15:11 compute-0 sshd-session[143005]: pam_unix(sshd:auth): check pass; user unknown
Oct 09 16:15:11 compute-0 sshd-session[143005]: pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=134.199.199.215
Oct 09 16:15:11 compute-0 nova_compute[117331]: 2025-10-09 16:15:11.830 2 DEBUG oslo_concurrency.lockutils [None req-026a7915-d8a6-4493-80d1-6eae3deb413a 005258eefa5345409efe13f6a1de22ab c076c42271004e7aa86e84416e33f826 - - default default] Acquiring lock "compute-rpcapi-router" lock /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:313
Oct 09 16:15:11 compute-0 nova_compute[117331]: 2025-10-09 16:15:11.831 2 DEBUG oslo_concurrency.lockutils [None req-026a7915-d8a6-4493-80d1-6eae3deb413a 005258eefa5345409efe13f6a1de22ab c076c42271004e7aa86e84416e33f826 - - default default] Acquired lock "compute-rpcapi-router" lock /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:316
Oct 09 16:15:12 compute-0 nova_compute[117331]: 2025-10-09 16:15:12.114 2 WARNING nova.virt.libvirt.driver [None req-92a13bed-c081-4c7b-87f3-bafebefb79f2 - - - - - -] Periodic task is updating the host stats, it is trying to get disk info for instance-00000004, but the backing disk storage was removed by a concurrent operation such as resize. Error: No disk at /var/lib/nova/instances/481b51b1-c134-4c7a-af94-d3d3794a7971/disk: nova.exception.DiskNotFound: No disk at /var/lib/nova/instances/481b51b1-c134-4c7a-af94-d3d3794a7971/disk
Oct 09 16:15:12 compute-0 nova_compute[117331]: 2025-10-09 16:15:12.247 2 WARNING nova.virt.libvirt.driver [None req-92a13bed-c081-4c7b-87f3-bafebefb79f2 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Oct 09 16:15:12 compute-0 nova_compute[117331]: 2025-10-09 16:15:12.248 2 DEBUG oslo_concurrency.processutils [None req-92a13bed-c081-4c7b-87f3-bafebefb79f2 - - - - - -] Running cmd (subprocess): env LANG=C uptime execute /usr/lib/python3.12/site-packages/oslo_concurrency/processutils.py:349
Oct 09 16:15:12 compute-0 nova_compute[117331]: 2025-10-09 16:15:12.273 2 DEBUG oslo_concurrency.processutils [None req-92a13bed-c081-4c7b-87f3-bafebefb79f2 - - - - - -] CMD "env LANG=C uptime" returned: 0 in 0.025s execute /usr/lib/python3.12/site-packages/oslo_concurrency/processutils.py:372
Oct 09 16:15:12 compute-0 nova_compute[117331]: 2025-10-09 16:15:12.274 2 DEBUG nova.compute.resource_tracker [None req-92a13bed-c081-4c7b-87f3-bafebefb79f2 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=6141MB free_disk=73.24142074584961GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": 
"label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.12/site-packages/nova/compute/resource_tracker.py:1136
Oct 09 16:15:12 compute-0 nova_compute[117331]: 2025-10-09 16:15:12.274 2 DEBUG oslo_concurrency.lockutils [None req-92a13bed-c081-4c7b-87f3-bafebefb79f2 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:405
Oct 09 16:15:12 compute-0 nova_compute[117331]: 2025-10-09 16:15:12.274 2 DEBUG oslo_concurrency.lockutils [None req-92a13bed-c081-4c7b-87f3-bafebefb79f2 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:410
Oct 09 16:15:12 compute-0 nova_compute[117331]: 2025-10-09 16:15:12.424 2 INFO nova.compute.rpcapi [None req-026a7915-d8a6-4493-80d1-6eae3deb413a 005258eefa5345409efe13f6a1de22ab c076c42271004e7aa86e84416e33f826 - - default default] Automatically selected compute RPC version 6.4 from minimum service version 70
Oct 09 16:15:12 compute-0 nova_compute[117331]: 2025-10-09 16:15:12.425 2 DEBUG oslo_concurrency.lockutils [None req-026a7915-d8a6-4493-80d1-6eae3deb413a 005258eefa5345409efe13f6a1de22ab c076c42271004e7aa86e84416e33f826 - - default default] Releasing lock "compute-rpcapi-router" lock /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:334
Oct 09 16:15:12 compute-0 nova_compute[117331]: 2025-10-09 16:15:12.586 2 DEBUG oslo_concurrency.lockutils [None req-026a7915-d8a6-4493-80d1-6eae3deb413a 005258eefa5345409efe13f6a1de22ab c076c42271004e7aa86e84416e33f826 - - default default] Acquiring lock "481b51b1-c134-4c7a-af94-d3d3794a7971-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:405
Oct 09 16:15:12 compute-0 nova_compute[117331]: 2025-10-09 16:15:12.587 2 DEBUG oslo_concurrency.lockutils [None req-026a7915-d8a6-4493-80d1-6eae3deb413a 005258eefa5345409efe13f6a1de22ab c076c42271004e7aa86e84416e33f826 - - default default] Lock "481b51b1-c134-4c7a-af94-d3d3794a7971-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:410
Oct 09 16:15:12 compute-0 nova_compute[117331]: 2025-10-09 16:15:12.587 2 DEBUG oslo_concurrency.lockutils [None req-026a7915-d8a6-4493-80d1-6eae3deb413a 005258eefa5345409efe13f6a1de22ab c076c42271004e7aa86e84416e33f826 - - default default] Lock "481b51b1-c134-4c7a-af94-d3d3794a7971-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:424
Oct 09 16:15:13 compute-0 nova_compute[117331]: 2025-10-09 16:15:13.124 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 09 16:15:13 compute-0 nova_compute[117331]: 2025-10-09 16:15:13.393 2 DEBUG nova.compute.resource_tracker [None req-92a13bed-c081-4c7b-87f3-bafebefb79f2 - - - - - -] Migration for instance 481b51b1-c134-4c7a-af94-d3d3794a7971 refers to another host's instance! _pair_instances_to_migrations /usr/lib/python3.12/site-packages/nova/compute/resource_tracker.py:979
Oct 09 16:15:13 compute-0 nova_compute[117331]: 2025-10-09 16:15:13.917 2 INFO nova.compute.resource_tracker [None req-92a13bed-c081-4c7b-87f3-bafebefb79f2 - - - - - -] [instance: 481b51b1-c134-4c7a-af94-d3d3794a7971] Updating resource usage from migration 22e9bfdb-97e9-4f17-9af1-e0d57951d553
Oct 09 16:15:13 compute-0 nova_compute[117331]: 2025-10-09 16:15:13.918 2 DEBUG nova.compute.resource_tracker [None req-92a13bed-c081-4c7b-87f3-bafebefb79f2 - - - - - -] [instance: 481b51b1-c134-4c7a-af94-d3d3794a7971] Starting to track outgoing migration 22e9bfdb-97e9-4f17-9af1-e0d57951d553 with flavor 5aeb5bf9-70f3-4ab6-b418-2c3319a7d2b3 _update_usage_from_migration /usr/lib/python3.12/site-packages/nova/compute/resource_tracker.py:1549
Oct 09 16:15:13 compute-0 nova_compute[117331]: 2025-10-09 16:15:13.946 2 DEBUG nova.compute.resource_tracker [None req-92a13bed-c081-4c7b-87f3-bafebefb79f2 - - - - - -] Migration 22e9bfdb-97e9-4f17-9af1-e0d57951d553 is active on this compute host and has allocations in placement: {'resources': {'VCPU': 1, 'MEMORY_MB': 128, 'DISK_GB': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.12/site-packages/nova/compute/resource_tracker.py:1745
Oct 09 16:15:13 compute-0 nova_compute[117331]: 2025-10-09 16:15:13.946 2 DEBUG nova.compute.resource_tracker [None req-92a13bed-c081-4c7b-87f3-bafebefb79f2 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 1 _report_final_resource_view /usr/lib/python3.12/site-packages/nova/compute/resource_tracker.py:1159
Oct 09 16:15:13 compute-0 nova_compute[117331]: 2025-10-09 16:15:13.946 2 DEBUG nova.compute.resource_tracker [None req-92a13bed-c081-4c7b-87f3-bafebefb79f2 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=640MB phys_disk=79GB used_disk=1GB total_vcpus=8 used_vcpus=1 pci_stats=[] stats={'failed_builds': '0', 'uptime': ' 16:15:12 up 24 min,  0 user,  load average: 0.27, 0.25, 0.30\n'} _report_final_resource_view /usr/lib/python3.12/site-packages/nova/compute/resource_tracker.py:1168
Oct 09 16:15:13 compute-0 sshd-session[143005]: Failed password for invalid user elasticsearch from 134.199.199.215 port 52476 ssh2
Oct 09 16:15:13 compute-0 nova_compute[117331]: 2025-10-09 16:15:13.983 2 DEBUG nova.compute.provider_tree [None req-92a13bed-c081-4c7b-87f3-bafebefb79f2 - - - - - -] Inventory has not changed in ProviderTree for provider: 593051b8-2000-437f-a915-2616fc8b1671 update_inventory /usr/lib/python3.12/site-packages/nova/compute/provider_tree.py:180
Oct 09 16:15:14 compute-0 sshd-session[143005]: Connection closed by invalid user elasticsearch 134.199.199.215 port 52476 [preauth]
Oct 09 16:15:14 compute-0 nova_compute[117331]: 2025-10-09 16:15:14.402 2 DEBUG nova.compute.manager [req-a4d82141-aea7-471f-bd42-98dfbcc64bf0 req-6f40ea12-a516-4837-bbf8-e7d05fcf5800 ee15961ad2be4bada6c106ac4f69f422 c076c42271004e7aa86e84416e33f826 - - default default] [instance: 481b51b1-c134-4c7a-af94-d3d3794a7971] Received event network-changed-dc640852-4b77-4312-9033-7ea4a70836d4 external_instance_event /usr/lib/python3.12/site-packages/nova/compute/manager.py:11812
Oct 09 16:15:14 compute-0 nova_compute[117331]: 2025-10-09 16:15:14.402 2 DEBUG nova.compute.manager [req-a4d82141-aea7-471f-bd42-98dfbcc64bf0 req-6f40ea12-a516-4837-bbf8-e7d05fcf5800 ee15961ad2be4bada6c106ac4f69f422 c076c42271004e7aa86e84416e33f826 - - default default] [instance: 481b51b1-c134-4c7a-af94-d3d3794a7971] Refreshing instance network info cache due to event network-changed-dc640852-4b77-4312-9033-7ea4a70836d4. external_instance_event /usr/lib/python3.12/site-packages/nova/compute/manager.py:11817
Oct 09 16:15:14 compute-0 nova_compute[117331]: 2025-10-09 16:15:14.402 2 DEBUG oslo_concurrency.lockutils [req-a4d82141-aea7-471f-bd42-98dfbcc64bf0 req-6f40ea12-a516-4837-bbf8-e7d05fcf5800 ee15961ad2be4bada6c106ac4f69f422 c076c42271004e7aa86e84416e33f826 - - default default] Acquiring lock "refresh_cache-481b51b1-c134-4c7a-af94-d3d3794a7971" lock /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:313
Oct 09 16:15:14 compute-0 nova_compute[117331]: 2025-10-09 16:15:14.403 2 DEBUG oslo_concurrency.lockutils [req-a4d82141-aea7-471f-bd42-98dfbcc64bf0 req-6f40ea12-a516-4837-bbf8-e7d05fcf5800 ee15961ad2be4bada6c106ac4f69f422 c076c42271004e7aa86e84416e33f826 - - default default] Acquired lock "refresh_cache-481b51b1-c134-4c7a-af94-d3d3794a7971" lock /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:316
Oct 09 16:15:14 compute-0 nova_compute[117331]: 2025-10-09 16:15:14.403 2 DEBUG nova.network.neutron [req-a4d82141-aea7-471f-bd42-98dfbcc64bf0 req-6f40ea12-a516-4837-bbf8-e7d05fcf5800 ee15961ad2be4bada6c106ac4f69f422 c076c42271004e7aa86e84416e33f826 - - default default] [instance: 481b51b1-c134-4c7a-af94-d3d3794a7971] Refreshing network info cache for port dc640852-4b77-4312-9033-7ea4a70836d4 _get_instance_nw_info /usr/lib/python3.12/site-packages/nova/network/neutron.py:2067
Oct 09 16:15:14 compute-0 nova_compute[117331]: 2025-10-09 16:15:14.491 2 DEBUG nova.scheduler.client.report [None req-92a13bed-c081-4c7b-87f3-bafebefb79f2 - - - - - -] Inventory has not changed for provider 593051b8-2000-437f-a915-2616fc8b1671 based on inventory data: {'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 79, 'reserved': 1, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.12/site-packages/nova/scheduler/client/report.py:958
Oct 09 16:15:14 compute-0 nova_compute[117331]: 2025-10-09 16:15:14.908 2 WARNING neutronclient.v2_0.client [req-a4d82141-aea7-471f-bd42-98dfbcc64bf0 req-6f40ea12-a516-4837-bbf8-e7d05fcf5800 ee15961ad2be4bada6c106ac4f69f422 c076c42271004e7aa86e84416e33f826 - - default default] The python binding code in neutronclient is deprecated in favor of OpenstackSDK, please use that as this will be removed in a future release.
Oct 09 16:15:15 compute-0 nova_compute[117331]: 2025-10-09 16:15:15.019 2 DEBUG nova.compute.resource_tracker [None req-92a13bed-c081-4c7b-87f3-bafebefb79f2 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.12/site-packages/nova/compute/resource_tracker.py:1097
Oct 09 16:15:15 compute-0 nova_compute[117331]: 2025-10-09 16:15:15.019 2 DEBUG oslo_concurrency.lockutils [None req-92a13bed-c081-4c7b-87f3-bafebefb79f2 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 2.745s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:424
Oct 09 16:15:15 compute-0 unix_chkpwd[143010]: password check failed for user (root)
Oct 09 16:15:15 compute-0 sshd-session[143008]: pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=134.199.199.215  user=root
Oct 09 16:15:15 compute-0 nova_compute[117331]: 2025-10-09 16:15:15.499 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 09 16:15:15 compute-0 nova_compute[117331]: 2025-10-09 16:15:15.750 2 WARNING neutronclient.v2_0.client [req-a4d82141-aea7-471f-bd42-98dfbcc64bf0 req-6f40ea12-a516-4837-bbf8-e7d05fcf5800 ee15961ad2be4bada6c106ac4f69f422 c076c42271004e7aa86e84416e33f826 - - default default] The python binding code in neutronclient is deprecated in favor of OpenstackSDK, please use that as this will be removed in a future release.
Oct 09 16:15:15 compute-0 nova_compute[117331]: 2025-10-09 16:15:15.936 2 DEBUG nova.network.neutron [req-a4d82141-aea7-471f-bd42-98dfbcc64bf0 req-6f40ea12-a516-4837-bbf8-e7d05fcf5800 ee15961ad2be4bada6c106ac4f69f422 c076c42271004e7aa86e84416e33f826 - - default default] [instance: 481b51b1-c134-4c7a-af94-d3d3794a7971] Updated VIF entry in instance network info cache for port dc640852-4b77-4312-9033-7ea4a70836d4. _build_network_info_model /usr/lib/python3.12/site-packages/nova/network/neutron.py:3542
Oct 09 16:15:15 compute-0 nova_compute[117331]: 2025-10-09 16:15:15.937 2 DEBUG nova.network.neutron [req-a4d82141-aea7-471f-bd42-98dfbcc64bf0 req-6f40ea12-a516-4837-bbf8-e7d05fcf5800 ee15961ad2be4bada6c106ac4f69f422 c076c42271004e7aa86e84416e33f826 - - default default] [instance: 481b51b1-c134-4c7a-af94-d3d3794a7971] Updating instance_info_cache with network_info: [{"id": "dc640852-4b77-4312-9033-7ea4a70836d4", "address": "fa:16:3e:79:a2:c5", "network": {"id": "7ed4244a-b510-4df6-9ffd-2f86603932fc", "bridge": "br-int", "label": "tempest-TestExecuteActionsViaActuator-1219246163-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "153d68ab33e64b958060574bb1741725", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapdc640852-4b", "ovs_interfaceid": "dc640852-4b77-4312-9033-7ea4a70836d4", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.12/site-packages/nova/network/neutron.py:116
Oct 09 16:15:16 compute-0 nova_compute[117331]: 2025-10-09 16:15:16.015 2 DEBUG oslo_service.periodic_task [None req-92a13bed-c081-4c7b-87f3-bafebefb79f2 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.12/site-packages/oslo_service/periodic_task.py:210
Oct 09 16:15:16 compute-0 nova_compute[117331]: 2025-10-09 16:15:16.016 2 DEBUG oslo_service.periodic_task [None req-92a13bed-c081-4c7b-87f3-bafebefb79f2 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.12/site-packages/oslo_service/periodic_task.py:210
Oct 09 16:15:16 compute-0 nova_compute[117331]: 2025-10-09 16:15:16.016 2 DEBUG oslo_service.periodic_task [None req-92a13bed-c081-4c7b-87f3-bafebefb79f2 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.12/site-packages/oslo_service/periodic_task.py:210
Oct 09 16:15:16 compute-0 nova_compute[117331]: 2025-10-09 16:15:16.016 2 DEBUG oslo_service.periodic_task [None req-92a13bed-c081-4c7b-87f3-bafebefb79f2 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.12/site-packages/oslo_service/periodic_task.py:210
Oct 09 16:15:16 compute-0 nova_compute[117331]: 2025-10-09 16:15:16.016 2 DEBUG oslo_service.periodic_task [None req-92a13bed-c081-4c7b-87f3-bafebefb79f2 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.12/site-packages/oslo_service/periodic_task.py:210
Oct 09 16:15:16 compute-0 nova_compute[117331]: 2025-10-09 16:15:16.017 2 DEBUG nova.compute.manager [None req-92a13bed-c081-4c7b-87f3-bafebefb79f2 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.12/site-packages/nova/compute/manager.py:11228
Oct 09 16:15:16 compute-0 nova_compute[117331]: 2025-10-09 16:15:16.460 2 DEBUG oslo_concurrency.lockutils [req-a4d82141-aea7-471f-bd42-98dfbcc64bf0 req-6f40ea12-a516-4837-bbf8-e7d05fcf5800 ee15961ad2be4bada6c106ac4f69f422 c076c42271004e7aa86e84416e33f826 - - default default] Releasing lock "refresh_cache-481b51b1-c134-4c7a-af94-d3d3794a7971" lock /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:334
Oct 09 16:15:16 compute-0 sshd-session[143008]: Failed password for root from 134.199.199.215 port 52482 ssh2
Oct 09 16:15:17 compute-0 sshd-session[143008]: Connection closed by authenticating user root 134.199.199.215 port 52482 [preauth]
Oct 09 16:15:18 compute-0 nova_compute[117331]: 2025-10-09 16:15:18.129 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 09 16:15:18 compute-0 sshd[52903]: drop connection #0 from [134.199.199.215]:53810 on [38.102.83.110]:22 penalty: failed authentication
Oct 09 16:15:18 compute-0 podman[143011]: 2025-10-09 16:15:18.838443144 +0000 UTC m=+0.070963334 container health_status 270830b5167cafbab735d8118980f26987e673cd0fa464dd2202394830bcca66 (image=38.102.83.66:5001/podified-master-centos10/openstack-multipathd:watcher_latest, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.4, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251007, org.label-schema.license=GPLv2, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': '38.102.83.66:5001/podified-master-centos10/openstack-multipathd:watcher_latest', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 10 Base Image, tcib_build_tag=watcher_latest, config_id=multipathd, container_name=multipathd, managed_by=edpm_ansible, tcib_managed=true, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Oct 09 16:15:20 compute-0 nova_compute[117331]: 2025-10-09 16:15:20.502 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 09 16:15:20 compute-0 nova_compute[117331]: 2025-10-09 16:15:20.960 2 DEBUG nova.compute.manager [req-61989db3-1973-41f8-9289-e960fea93522 req-aa4edd08-6bdf-4770-acc8-4d845d82315e ee15961ad2be4bada6c106ac4f69f422 c076c42271004e7aa86e84416e33f826 - - default default] [instance: 481b51b1-c134-4c7a-af94-d3d3794a7971] Received event network-vif-plugged-dc640852-4b77-4312-9033-7ea4a70836d4 external_instance_event /usr/lib/python3.12/site-packages/nova/compute/manager.py:11812
Oct 09 16:15:20 compute-0 nova_compute[117331]: 2025-10-09 16:15:20.960 2 DEBUG oslo_concurrency.lockutils [req-61989db3-1973-41f8-9289-e960fea93522 req-aa4edd08-6bdf-4770-acc8-4d845d82315e ee15961ad2be4bada6c106ac4f69f422 c076c42271004e7aa86e84416e33f826 - - default default] Acquiring lock "481b51b1-c134-4c7a-af94-d3d3794a7971-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:405
Oct 09 16:15:20 compute-0 nova_compute[117331]: 2025-10-09 16:15:20.961 2 DEBUG oslo_concurrency.lockutils [req-61989db3-1973-41f8-9289-e960fea93522 req-aa4edd08-6bdf-4770-acc8-4d845d82315e ee15961ad2be4bada6c106ac4f69f422 c076c42271004e7aa86e84416e33f826 - - default default] Lock "481b51b1-c134-4c7a-af94-d3d3794a7971-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:410
Oct 09 16:15:20 compute-0 nova_compute[117331]: 2025-10-09 16:15:20.962 2 DEBUG oslo_concurrency.lockutils [req-61989db3-1973-41f8-9289-e960fea93522 req-aa4edd08-6bdf-4770-acc8-4d845d82315e ee15961ad2be4bada6c106ac4f69f422 c076c42271004e7aa86e84416e33f826 - - default default] Lock "481b51b1-c134-4c7a-af94-d3d3794a7971-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:424
Oct 09 16:15:20 compute-0 nova_compute[117331]: 2025-10-09 16:15:20.962 2 DEBUG nova.compute.manager [req-61989db3-1973-41f8-9289-e960fea93522 req-aa4edd08-6bdf-4770-acc8-4d845d82315e ee15961ad2be4bada6c106ac4f69f422 c076c42271004e7aa86e84416e33f826 - - default default] [instance: 481b51b1-c134-4c7a-af94-d3d3794a7971] No waiting events found dispatching network-vif-plugged-dc640852-4b77-4312-9033-7ea4a70836d4 pop_instance_event /usr/lib/python3.12/site-packages/nova/compute/manager.py:344
Oct 09 16:15:20 compute-0 nova_compute[117331]: 2025-10-09 16:15:20.963 2 WARNING nova.compute.manager [req-61989db3-1973-41f8-9289-e960fea93522 req-aa4edd08-6bdf-4770-acc8-4d845d82315e ee15961ad2be4bada6c106ac4f69f422 c076c42271004e7aa86e84416e33f826 - - default default] [instance: 481b51b1-c134-4c7a-af94-d3d3794a7971] Received unexpected event network-vif-plugged-dc640852-4b77-4312-9033-7ea4a70836d4 for instance with vm_state active and task_state resize_finish.
Oct 09 16:15:21 compute-0 sshd[52903]: drop connection #0 from [134.199.199.215]:53812 on [38.102.83.110]:22 penalty: failed authentication
Oct 09 16:15:23 compute-0 nova_compute[117331]: 2025-10-09 16:15:23.053 2 DEBUG nova.compute.manager [req-10b2d9ea-aaa0-4ccd-9a56-d0d8e31e76af req-34a732b1-d7af-462f-8e6f-67457e67d991 ee15961ad2be4bada6c106ac4f69f422 c076c42271004e7aa86e84416e33f826 - - default default] [instance: 481b51b1-c134-4c7a-af94-d3d3794a7971] Received event network-vif-plugged-dc640852-4b77-4312-9033-7ea4a70836d4 external_instance_event /usr/lib/python3.12/site-packages/nova/compute/manager.py:11812
Oct 09 16:15:23 compute-0 nova_compute[117331]: 2025-10-09 16:15:23.054 2 DEBUG oslo_concurrency.lockutils [req-10b2d9ea-aaa0-4ccd-9a56-d0d8e31e76af req-34a732b1-d7af-462f-8e6f-67457e67d991 ee15961ad2be4bada6c106ac4f69f422 c076c42271004e7aa86e84416e33f826 - - default default] Acquiring lock "481b51b1-c134-4c7a-af94-d3d3794a7971-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:405
Oct 09 16:15:23 compute-0 nova_compute[117331]: 2025-10-09 16:15:23.054 2 DEBUG oslo_concurrency.lockutils [req-10b2d9ea-aaa0-4ccd-9a56-d0d8e31e76af req-34a732b1-d7af-462f-8e6f-67457e67d991 ee15961ad2be4bada6c106ac4f69f422 c076c42271004e7aa86e84416e33f826 - - default default] Lock "481b51b1-c134-4c7a-af94-d3d3794a7971-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:410
Oct 09 16:15:23 compute-0 nova_compute[117331]: 2025-10-09 16:15:23.054 2 DEBUG oslo_concurrency.lockutils [req-10b2d9ea-aaa0-4ccd-9a56-d0d8e31e76af req-34a732b1-d7af-462f-8e6f-67457e67d991 ee15961ad2be4bada6c106ac4f69f422 c076c42271004e7aa86e84416e33f826 - - default default] Lock "481b51b1-c134-4c7a-af94-d3d3794a7971-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:424
Oct 09 16:15:23 compute-0 nova_compute[117331]: 2025-10-09 16:15:23.054 2 DEBUG nova.compute.manager [req-10b2d9ea-aaa0-4ccd-9a56-d0d8e31e76af req-34a732b1-d7af-462f-8e6f-67457e67d991 ee15961ad2be4bada6c106ac4f69f422 c076c42271004e7aa86e84416e33f826 - - default default] [instance: 481b51b1-c134-4c7a-af94-d3d3794a7971] No waiting events found dispatching network-vif-plugged-dc640852-4b77-4312-9033-7ea4a70836d4 pop_instance_event /usr/lib/python3.12/site-packages/nova/compute/manager.py:344
Oct 09 16:15:23 compute-0 nova_compute[117331]: 2025-10-09 16:15:23.055 2 WARNING nova.compute.manager [req-10b2d9ea-aaa0-4ccd-9a56-d0d8e31e76af req-34a732b1-d7af-462f-8e6f-67457e67d991 ee15961ad2be4bada6c106ac4f69f422 c076c42271004e7aa86e84416e33f826 - - default default] [instance: 481b51b1-c134-4c7a-af94-d3d3794a7971] Received unexpected event network-vif-plugged-dc640852-4b77-4312-9033-7ea4a70836d4 for instance with vm_state resized and task_state None.
Oct 09 16:15:23 compute-0 nova_compute[117331]: 2025-10-09 16:15:23.133 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 09 16:15:23 compute-0 podman[143031]: 2025-10-09 16:15:23.880369796 +0000 UTC m=+0.096729276 container health_status 86aa93e887899e54d383b0fc17fb3c5637680d1d8e146a130a557c837f0260fe (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']})
Oct 09 16:15:24 compute-0 nova_compute[117331]: 2025-10-09 16:15:24.004 2 DEBUG oslo_concurrency.lockutils [None req-059bcde7-38dd-434c-ac33-56b055931054 005258eefa5345409efe13f6a1de22ab c076c42271004e7aa86e84416e33f826 - - default default] Acquiring lock "481b51b1-c134-4c7a-af94-d3d3794a7971" by "nova.compute.manager.ComputeManager.confirm_resize.<locals>.do_confirm_resize" inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:405
Oct 09 16:15:24 compute-0 nova_compute[117331]: 2025-10-09 16:15:24.005 2 DEBUG oslo_concurrency.lockutils [None req-059bcde7-38dd-434c-ac33-56b055931054 005258eefa5345409efe13f6a1de22ab c076c42271004e7aa86e84416e33f826 - - default default] Lock "481b51b1-c134-4c7a-af94-d3d3794a7971" acquired by "nova.compute.manager.ComputeManager.confirm_resize.<locals>.do_confirm_resize" :: waited 0.000s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:410
Oct 09 16:15:24 compute-0 nova_compute[117331]: 2025-10-09 16:15:24.005 2 DEBUG nova.compute.manager [None req-059bcde7-38dd-434c-ac33-56b055931054 005258eefa5345409efe13f6a1de22ab c076c42271004e7aa86e84416e33f826 - - default default] [instance: 481b51b1-c134-4c7a-af94-d3d3794a7971] Going to confirm migration 1 do_confirm_resize /usr/lib/python3.12/site-packages/nova/compute/manager.py:5283
Oct 09 16:15:24 compute-0 nova_compute[117331]: 2025-10-09 16:15:24.520 2 DEBUG nova.objects.instance [None req-059bcde7-38dd-434c-ac33-56b055931054 005258eefa5345409efe13f6a1de22ab c076c42271004e7aa86e84416e33f826 - - default default] Lazy-loading 'info_cache' on Instance uuid 481b51b1-c134-4c7a-af94-d3d3794a7971 obj_load_attr /usr/lib/python3.12/site-packages/nova/objects/instance.py:1141
Oct 09 16:15:25 compute-0 sshd[52903]: drop connection #0 from [134.199.199.215]:53826 on [38.102.83.110]:22 penalty: failed authentication
Oct 09 16:15:25 compute-0 nova_compute[117331]: 2025-10-09 16:15:25.037 2 WARNING neutronclient.v2_0.client [None req-059bcde7-38dd-434c-ac33-56b055931054 005258eefa5345409efe13f6a1de22ab c076c42271004e7aa86e84416e33f826 - - default default] The python binding code in neutronclient is deprecated in favor of OpenstackSDK, please use that as this will be removed in a future release.
Oct 09 16:15:25 compute-0 nova_compute[117331]: 2025-10-09 16:15:25.466 2 WARNING neutronclient.v2_0.client [None req-059bcde7-38dd-434c-ac33-56b055931054 005258eefa5345409efe13f6a1de22ab c076c42271004e7aa86e84416e33f826 - - default default] The python binding code in neutronclient is deprecated in favor of OpenstackSDK, please use that as this will be removed in a future release.
Oct 09 16:15:25 compute-0 nova_compute[117331]: 2025-10-09 16:15:25.466 2 WARNING neutronclient.v2_0.client [None req-059bcde7-38dd-434c-ac33-56b055931054 005258eefa5345409efe13f6a1de22ab c076c42271004e7aa86e84416e33f826 - - default default] The python binding code in neutronclient is deprecated in favor of OpenstackSDK, please use that as this will be removed in a future release.
Oct 09 16:15:25 compute-0 nova_compute[117331]: 2025-10-09 16:15:25.504 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 09 16:15:25 compute-0 nova_compute[117331]: 2025-10-09 16:15:25.559 2 DEBUG neutronclient.v2_0.client [None req-059bcde7-38dd-434c-ac33-56b055931054 005258eefa5345409efe13f6a1de22ab c076c42271004e7aa86e84416e33f826 - - default default] Error message: {"NeutronError": {"type": "PortBindingNotFound", "message": "Binding for port dc640852-4b77-4312-9033-7ea4a70836d4 for host compute-0.ctlplane.example.com could not be found.", "detail": ""}} _handle_fault_response /usr/lib/python3.12/site-packages/neutronclient/v2_0/client.py:265
Oct 09 16:15:25 compute-0 nova_compute[117331]: 2025-10-09 16:15:25.560 2 DEBUG oslo_concurrency.lockutils [None req-059bcde7-38dd-434c-ac33-56b055931054 005258eefa5345409efe13f6a1de22ab c076c42271004e7aa86e84416e33f826 - - default default] Acquiring lock "refresh_cache-481b51b1-c134-4c7a-af94-d3d3794a7971" lock /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:313
Oct 09 16:15:25 compute-0 nova_compute[117331]: 2025-10-09 16:15:25.560 2 DEBUG oslo_concurrency.lockutils [None req-059bcde7-38dd-434c-ac33-56b055931054 005258eefa5345409efe13f6a1de22ab c076c42271004e7aa86e84416e33f826 - - default default] Acquired lock "refresh_cache-481b51b1-c134-4c7a-af94-d3d3794a7971" lock /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:316
Oct 09 16:15:25 compute-0 nova_compute[117331]: 2025-10-09 16:15:25.560 2 DEBUG nova.network.neutron [None req-059bcde7-38dd-434c-ac33-56b055931054 005258eefa5345409efe13f6a1de22ab c076c42271004e7aa86e84416e33f826 - - default default] [instance: 481b51b1-c134-4c7a-af94-d3d3794a7971] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.12/site-packages/nova/network/neutron.py:2070
Oct 09 16:15:26 compute-0 nova_compute[117331]: 2025-10-09 16:15:26.068 2 WARNING neutronclient.v2_0.client [None req-059bcde7-38dd-434c-ac33-56b055931054 005258eefa5345409efe13f6a1de22ab c076c42271004e7aa86e84416e33f826 - - default default] The python binding code in neutronclient is deprecated in favor of OpenstackSDK, please use that as this will be removed in a future release.
Oct 09 16:15:26 compute-0 nova_compute[117331]: 2025-10-09 16:15:26.598 2 WARNING neutronclient.v2_0.client [None req-059bcde7-38dd-434c-ac33-56b055931054 005258eefa5345409efe13f6a1de22ab c076c42271004e7aa86e84416e33f826 - - default default] The python binding code in neutronclient is deprecated in favor of OpenstackSDK, please use that as this will be removed in a future release.
Oct 09 16:15:26 compute-0 nova_compute[117331]: 2025-10-09 16:15:26.630 2 DEBUG oslo_concurrency.lockutils [None req-647b20c2-3f27-4bd8-aa1a-6180cdfd55a2 e5044998ddc3419bb14cc08417add581 be4e2f9059cd48f5b44a612256e3fc7b - - default default] Acquiring lock "70a451f7-5bde-42f1-a448-d48b3b24d9d6" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:405
Oct 09 16:15:26 compute-0 nova_compute[117331]: 2025-10-09 16:15:26.631 2 DEBUG oslo_concurrency.lockutils [None req-647b20c2-3f27-4bd8-aa1a-6180cdfd55a2 e5044998ddc3419bb14cc08417add581 be4e2f9059cd48f5b44a612256e3fc7b - - default default] Lock "70a451f7-5bde-42f1-a448-d48b3b24d9d6" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.000s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:410
Oct 09 16:15:26 compute-0 nova_compute[117331]: 2025-10-09 16:15:26.742 2 DEBUG nova.network.neutron [None req-059bcde7-38dd-434c-ac33-56b055931054 005258eefa5345409efe13f6a1de22ab c076c42271004e7aa86e84416e33f826 - - default default] [instance: 481b51b1-c134-4c7a-af94-d3d3794a7971] Updating instance_info_cache with network_info: [{"id": "dc640852-4b77-4312-9033-7ea4a70836d4", "address": "fa:16:3e:79:a2:c5", "network": {"id": "7ed4244a-b510-4df6-9ffd-2f86603932fc", "bridge": "br-int", "label": "tempest-TestExecuteActionsViaActuator-1219246163-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "153d68ab33e64b958060574bb1741725", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapdc640852-4b", "ovs_interfaceid": "dc640852-4b77-4312-9033-7ea4a70836d4", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.12/site-packages/nova/network/neutron.py:116
Oct 09 16:15:27 compute-0 nova_compute[117331]: 2025-10-09 16:15:27.136 2 DEBUG nova.compute.manager [None req-647b20c2-3f27-4bd8-aa1a-6180cdfd55a2 e5044998ddc3419bb14cc08417add581 be4e2f9059cd48f5b44a612256e3fc7b - - default default] [instance: 70a451f7-5bde-42f1-a448-d48b3b24d9d6] Starting instance... _do_build_and_run_instance /usr/lib/python3.12/site-packages/nova/compute/manager.py:2472
Oct 09 16:15:27 compute-0 nova_compute[117331]: 2025-10-09 16:15:27.249 2 DEBUG oslo_concurrency.lockutils [None req-059bcde7-38dd-434c-ac33-56b055931054 005258eefa5345409efe13f6a1de22ab c076c42271004e7aa86e84416e33f826 - - default default] Releasing lock "refresh_cache-481b51b1-c134-4c7a-af94-d3d3794a7971" lock /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:334
Oct 09 16:15:27 compute-0 nova_compute[117331]: 2025-10-09 16:15:27.250 2 DEBUG nova.objects.instance [None req-059bcde7-38dd-434c-ac33-56b055931054 005258eefa5345409efe13f6a1de22ab c076c42271004e7aa86e84416e33f826 - - default default] Lazy-loading 'migration_context' on Instance uuid 481b51b1-c134-4c7a-af94-d3d3794a7971 obj_load_attr /usr/lib/python3.12/site-packages/nova/objects/instance.py:1141
Oct 09 16:15:27 compute-0 nova_compute[117331]: 2025-10-09 16:15:27.690 2 DEBUG oslo_concurrency.lockutils [None req-647b20c2-3f27-4bd8-aa1a-6180cdfd55a2 e5044998ddc3419bb14cc08417add581 be4e2f9059cd48f5b44a612256e3fc7b - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:405
Oct 09 16:15:27 compute-0 nova_compute[117331]: 2025-10-09 16:15:27.690 2 DEBUG oslo_concurrency.lockutils [None req-647b20c2-3f27-4bd8-aa1a-6180cdfd55a2 e5044998ddc3419bb14cc08417add581 be4e2f9059cd48f5b44a612256e3fc7b - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:410
Oct 09 16:15:27 compute-0 nova_compute[117331]: 2025-10-09 16:15:27.696 2 DEBUG nova.virt.hardware [None req-647b20c2-3f27-4bd8-aa1a-6180cdfd55a2 e5044998ddc3419bb14cc08417add581 be4e2f9059cd48f5b44a612256e3fc7b - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.12/site-packages/nova/virt/hardware.py:2528
Oct 09 16:15:27 compute-0 nova_compute[117331]: 2025-10-09 16:15:27.696 2 INFO nova.compute.claims [None req-647b20c2-3f27-4bd8-aa1a-6180cdfd55a2 e5044998ddc3419bb14cc08417add581 be4e2f9059cd48f5b44a612256e3fc7b - - default default] [instance: 70a451f7-5bde-42f1-a448-d48b3b24d9d6] Claim successful on node compute-0.ctlplane.example.com
Oct 09 16:15:27 compute-0 nova_compute[117331]: 2025-10-09 16:15:27.756 2 DEBUG nova.objects.base [None req-059bcde7-38dd-434c-ac33-56b055931054 005258eefa5345409efe13f6a1de22ab c076c42271004e7aa86e84416e33f826 - - default default] Object Instance<481b51b1-c134-4c7a-af94-d3d3794a7971> lazy-loaded attributes: info_cache,migration_context wrapper /usr/lib/python3.12/site-packages/nova/objects/base.py:136
Oct 09 16:15:27 compute-0 nova_compute[117331]: 2025-10-09 16:15:27.771 2 DEBUG nova.virt.libvirt.vif [None req-059bcde7-38dd-434c-ac33-56b055931054 005258eefa5345409efe13f6a1de22ab c076c42271004e7aa86e84416e33f826 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,compute_id=2,config_drive='True',created_at=2025-10-09T16:14:07Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description=None,display_name='tempest-TestExecuteActionsViaActuator-server-1479250465',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(2),hidden=False,host='compute-1.ctlplane.example.com',hostname='tempest-testexecuteactionsviaactuator-server-1479250465',id=4,image_ref='b7d6e0af-25e4-4227-9dc6-43143898ceee',info_cache=InstanceInfoCache,instance_type_id=2,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-10-09T16:15:21Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=192,metadata={},migration_context=MigrationContext,new_flavor=Flavor(2),node='compute-1.ctlplane.example.com',numa_topology=<?>,old_flavor=Flavor(1),os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='be4e2f9059cd48f5b44a612256e3fc7b',ramdisk_id='',reservation_id='r-7c00rued',resources=<?>,root_device_name='/dev/vda',root_gb=1,security_groups=<?>,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member,manager,admin',image_base_image_ref='b7d6e0af-25e4-4227-9dc6-43143898ceee',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_
disk='1',image_min_ram='0',old_vm_state='active',owner_project_name='tempest-TestExecuteActionsViaActuator-1347788182',owner_user_name='tempest-TestExecuteActionsViaActuator-1347788182-project-admin'},tags=<?>,task_state=None,terminated_at=None,trusted_certs=<?>,updated_at=2025-10-09T16:15:21Z,user_data=None,user_id='e5044998ddc3419bb14cc08417add581',uuid=481b51b1-c134-4c7a-af94-d3d3794a7971,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='resized') vif={"id": "dc640852-4b77-4312-9033-7ea4a70836d4", "address": "fa:16:3e:79:a2:c5", "network": {"id": "7ed4244a-b510-4df6-9ffd-2f86603932fc", "bridge": "br-int", "label": "tempest-TestExecuteActionsViaActuator-1219246163-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "153d68ab33e64b958060574bb1741725", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapdc640852-4b", "ovs_interfaceid": "dc640852-4b77-4312-9033-7ea4a70836d4", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.12/site-packages/nova/virt/libvirt/vif.py:839
Oct 09 16:15:27 compute-0 nova_compute[117331]: 2025-10-09 16:15:27.772 2 DEBUG nova.network.os_vif_util [None req-059bcde7-38dd-434c-ac33-56b055931054 005258eefa5345409efe13f6a1de22ab c076c42271004e7aa86e84416e33f826 - - default default] Converting VIF {"id": "dc640852-4b77-4312-9033-7ea4a70836d4", "address": "fa:16:3e:79:a2:c5", "network": {"id": "7ed4244a-b510-4df6-9ffd-2f86603932fc", "bridge": "br-int", "label": "tempest-TestExecuteActionsViaActuator-1219246163-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "153d68ab33e64b958060574bb1741725", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapdc640852-4b", "ovs_interfaceid": "dc640852-4b77-4312-9033-7ea4a70836d4", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.12/site-packages/nova/network/os_vif_util.py:511
Oct 09 16:15:27 compute-0 nova_compute[117331]: 2025-10-09 16:15:27.772 2 DEBUG nova.network.os_vif_util [None req-059bcde7-38dd-434c-ac33-56b055931054 005258eefa5345409efe13f6a1de22ab c076c42271004e7aa86e84416e33f826 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:79:a2:c5,bridge_name='br-int',has_traffic_filtering=True,id=dc640852-4b77-4312-9033-7ea4a70836d4,network=Network(7ed4244a-b510-4df6-9ffd-2f86603932fc),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapdc640852-4b') nova_to_osvif_vif /usr/lib/python3.12/site-packages/nova/network/os_vif_util.py:548
Oct 09 16:15:27 compute-0 nova_compute[117331]: 2025-10-09 16:15:27.773 2 DEBUG os_vif [None req-059bcde7-38dd-434c-ac33-56b055931054 005258eefa5345409efe13f6a1de22ab c076c42271004e7aa86e84416e33f826 - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:79:a2:c5,bridge_name='br-int',has_traffic_filtering=True,id=dc640852-4b77-4312-9033-7ea4a70836d4,network=Network(7ed4244a-b510-4df6-9ffd-2f86603932fc),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapdc640852-4b') unplug /usr/lib/python3.12/site-packages/os_vif/__init__.py:109
Oct 09 16:15:27 compute-0 nova_compute[117331]: 2025-10-09 16:15:27.775 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 22 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 09 16:15:27 compute-0 nova_compute[117331]: 2025-10-09 16:15:27.775 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapdc640852-4b, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.12/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 09 16:15:27 compute-0 nova_compute[117331]: 2025-10-09 16:15:27.775 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.12/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 09 16:15:27 compute-0 nova_compute[117331]: 2025-10-09 16:15:27.777 2 INFO os_vif [None req-059bcde7-38dd-434c-ac33-56b055931054 005258eefa5345409efe13f6a1de22ab c076c42271004e7aa86e84416e33f826 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:79:a2:c5,bridge_name='br-int',has_traffic_filtering=True,id=dc640852-4b77-4312-9033-7ea4a70836d4,network=Network(7ed4244a-b510-4df6-9ffd-2f86603932fc),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapdc640852-4b')
Oct 09 16:15:27 compute-0 nova_compute[117331]: 2025-10-09 16:15:27.778 2 DEBUG oslo_concurrency.lockutils [None req-059bcde7-38dd-434c-ac33-56b055931054 005258eefa5345409efe13f6a1de22ab c076c42271004e7aa86e84416e33f826 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.drop_move_claim_at_source" inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:405
Oct 09 16:15:27 compute-0 podman[143056]: 2025-10-09 16:15:27.835286875 +0000 UTC m=+0.047303339 container health_status 0e966c7a283689daf23c5db5017f23f685ee836319240188a9f70ddd246563c5 (image=38.102.83.66:5001/podified-master-centos10/openstack-neutron-metadata-agent-ovn:watcher_latest, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.build-date=20251007, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': '38.102.83.66:5001/podified-master-centos10/openstack-neutron-metadata-agent-ovn:watcher_latest', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', 
'/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, io.buildah.version=1.41.4, tcib_build_tag=watcher_latest)
Oct 09 16:15:27 compute-0 podman[143057]: 2025-10-09 16:15:27.862282946 +0000 UTC m=+0.080733005 container health_status 8227728d23f374cb9528be6ddf01dff38d8c792912a17b0d137932a18390b820 (image=38.102.83.66:5001/podified-master-centos10/openstack-iscsid:watcher_latest, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': '38.102.83.66:5001/podified-master-centos10/openstack-iscsid:watcher_latest', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, container_name=iscsid, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251007, org.label-schema.schema-version=1.0, tcib_managed=true, io.buildah.version=1.41.4, tcib_build_tag=watcher_latest, config_id=iscsid, org.label-schema.license=GPLv2, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.vendor=CentOS)
Oct 09 16:15:28 compute-0 nova_compute[117331]: 2025-10-09 16:15:28.137 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 09 16:15:28 compute-0 sshd[52903]: drop connection #0 from [134.199.199.215]:57620 on [38.102.83.110]:22 penalty: failed authentication
Oct 09 16:15:28 compute-0 nova_compute[117331]: 2025-10-09 16:15:28.766 2 DEBUG nova.compute.provider_tree [None req-647b20c2-3f27-4bd8-aa1a-6180cdfd55a2 e5044998ddc3419bb14cc08417add581 be4e2f9059cd48f5b44a612256e3fc7b - - default default] Inventory has not changed in ProviderTree for provider: 593051b8-2000-437f-a915-2616fc8b1671 update_inventory /usr/lib/python3.12/site-packages/nova/compute/provider_tree.py:180
Oct 09 16:15:29 compute-0 nova_compute[117331]: 2025-10-09 16:15:29.273 2 DEBUG nova.scheduler.client.report [None req-647b20c2-3f27-4bd8-aa1a-6180cdfd55a2 e5044998ddc3419bb14cc08417add581 be4e2f9059cd48f5b44a612256e3fc7b - - default default] Inventory has not changed for provider 593051b8-2000-437f-a915-2616fc8b1671 based on inventory data: {'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 79, 'reserved': 1, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.12/site-packages/nova/scheduler/client/report.py:958
Oct 09 16:15:29 compute-0 podman[127775]: time="2025-10-09T16:15:29Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Oct 09 16:15:29 compute-0 podman[127775]: @ - - [09/Oct/2025:16:15:29 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 19526 "" "Go-http-client/1.1"
Oct 09 16:15:29 compute-0 podman[127775]: @ - - [09/Oct/2025:16:15:29 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 3019 "" "Go-http-client/1.1"
Oct 09 16:15:29 compute-0 nova_compute[117331]: 2025-10-09 16:15:29.782 2 DEBUG oslo_concurrency.lockutils [None req-647b20c2-3f27-4bd8-aa1a-6180cdfd55a2 e5044998ddc3419bb14cc08417add581 be4e2f9059cd48f5b44a612256e3fc7b - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 2.092s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:424
Oct 09 16:15:29 compute-0 nova_compute[117331]: 2025-10-09 16:15:29.783 2 DEBUG nova.compute.manager [None req-647b20c2-3f27-4bd8-aa1a-6180cdfd55a2 e5044998ddc3419bb14cc08417add581 be4e2f9059cd48f5b44a612256e3fc7b - - default default] [instance: 70a451f7-5bde-42f1-a448-d48b3b24d9d6] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.12/site-packages/nova/compute/manager.py:2869
Oct 09 16:15:29 compute-0 nova_compute[117331]: 2025-10-09 16:15:29.786 2 DEBUG oslo_concurrency.lockutils [None req-059bcde7-38dd-434c-ac33-56b055931054 005258eefa5345409efe13f6a1de22ab c076c42271004e7aa86e84416e33f826 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.drop_move_claim_at_source" :: waited 2.008s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:410
Oct 09 16:15:30 compute-0 nova_compute[117331]: 2025-10-09 16:15:30.295 2 DEBUG nova.compute.manager [None req-647b20c2-3f27-4bd8-aa1a-6180cdfd55a2 e5044998ddc3419bb14cc08417add581 be4e2f9059cd48f5b44a612256e3fc7b - - default default] [instance: 70a451f7-5bde-42f1-a448-d48b3b24d9d6] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.12/site-packages/nova/compute/manager.py:2016
Oct 09 16:15:30 compute-0 nova_compute[117331]: 2025-10-09 16:15:30.296 2 DEBUG nova.network.neutron [None req-647b20c2-3f27-4bd8-aa1a-6180cdfd55a2 e5044998ddc3419bb14cc08417add581 be4e2f9059cd48f5b44a612256e3fc7b - - default default] [instance: 70a451f7-5bde-42f1-a448-d48b3b24d9d6] allocate_for_instance() allocate_for_instance /usr/lib/python3.12/site-packages/nova/network/neutron.py:1208
Oct 09 16:15:30 compute-0 nova_compute[117331]: 2025-10-09 16:15:30.296 2 WARNING neutronclient.v2_0.client [None req-647b20c2-3f27-4bd8-aa1a-6180cdfd55a2 e5044998ddc3419bb14cc08417add581 be4e2f9059cd48f5b44a612256e3fc7b - - default default] The python binding code in neutronclient is deprecated in favor of OpenstackSDK, please use that as this will be removed in a future release.
Oct 09 16:15:30 compute-0 nova_compute[117331]: 2025-10-09 16:15:30.297 2 WARNING neutronclient.v2_0.client [None req-647b20c2-3f27-4bd8-aa1a-6180cdfd55a2 e5044998ddc3419bb14cc08417add581 be4e2f9059cd48f5b44a612256e3fc7b - - default default] The python binding code in neutronclient is deprecated in favor of OpenstackSDK, please use that as this will be removed in a future release.
Oct 09 16:15:30 compute-0 nova_compute[117331]: 2025-10-09 16:15:30.346 2 DEBUG nova.compute.provider_tree [None req-059bcde7-38dd-434c-ac33-56b055931054 005258eefa5345409efe13f6a1de22ab c076c42271004e7aa86e84416e33f826 - - default default] Inventory has not changed in ProviderTree for provider: 593051b8-2000-437f-a915-2616fc8b1671 update_inventory /usr/lib/python3.12/site-packages/nova/compute/provider_tree.py:180
Oct 09 16:15:30 compute-0 nova_compute[117331]: 2025-10-09 16:15:30.506 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 09 16:15:30 compute-0 nova_compute[117331]: 2025-10-09 16:15:30.805 2 INFO nova.virt.libvirt.driver [None req-647b20c2-3f27-4bd8-aa1a-6180cdfd55a2 e5044998ddc3419bb14cc08417add581 be4e2f9059cd48f5b44a612256e3fc7b - - default default] [instance: 70a451f7-5bde-42f1-a448-d48b3b24d9d6] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Oct 09 16:15:30 compute-0 nova_compute[117331]: 2025-10-09 16:15:30.852 2 DEBUG nova.scheduler.client.report [None req-059bcde7-38dd-434c-ac33-56b055931054 005258eefa5345409efe13f6a1de22ab c076c42271004e7aa86e84416e33f826 - - default default] Inventory has not changed for provider 593051b8-2000-437f-a915-2616fc8b1671 based on inventory data: {'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 79, 'reserved': 1, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.12/site-packages/nova/scheduler/client/report.py:958
Oct 09 16:15:31 compute-0 nova_compute[117331]: 2025-10-09 16:15:31.204 2 DEBUG nova.network.neutron [None req-647b20c2-3f27-4bd8-aa1a-6180cdfd55a2 e5044998ddc3419bb14cc08417add581 be4e2f9059cd48f5b44a612256e3fc7b - - default default] [instance: 70a451f7-5bde-42f1-a448-d48b3b24d9d6] Successfully created port: a5c38ba6-3efb-4090-a42b-1dd250959041 _create_port_minimal /usr/lib/python3.12/site-packages/nova/network/neutron.py:550
Oct 09 16:15:31 compute-0 nova_compute[117331]: 2025-10-09 16:15:31.313 2 DEBUG nova.compute.manager [None req-647b20c2-3f27-4bd8-aa1a-6180cdfd55a2 e5044998ddc3419bb14cc08417add581 be4e2f9059cd48f5b44a612256e3fc7b - - default default] [instance: 70a451f7-5bde-42f1-a448-d48b3b24d9d6] Start building block device mappings for instance. _build_resources /usr/lib/python3.12/site-packages/nova/compute/manager.py:2904
Oct 09 16:15:31 compute-0 openstack_network_exporter[129925]: ERROR   16:15:31 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Oct 09 16:15:31 compute-0 openstack_network_exporter[129925]: ERROR   16:15:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Oct 09 16:15:31 compute-0 openstack_network_exporter[129925]: ERROR   16:15:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Oct 09 16:15:31 compute-0 openstack_network_exporter[129925]: ERROR   16:15:31 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Oct 09 16:15:31 compute-0 openstack_network_exporter[129925]: 
Oct 09 16:15:31 compute-0 openstack_network_exporter[129925]: ERROR   16:15:31 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Oct 09 16:15:31 compute-0 openstack_network_exporter[129925]: 
Oct 09 16:15:31 compute-0 sshd[52903]: drop connection #0 from [134.199.199.215]:57624 on [38.102.83.110]:22 penalty: failed authentication
Oct 09 16:15:31 compute-0 nova_compute[117331]: 2025-10-09 16:15:31.959 2 DEBUG oslo_concurrency.lockutils [None req-059bcde7-38dd-434c-ac33-56b055931054 005258eefa5345409efe13f6a1de22ab c076c42271004e7aa86e84416e33f826 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.drop_move_claim_at_source" :: held 2.173s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:424
Oct 09 16:15:32 compute-0 nova_compute[117331]: 2025-10-09 16:15:32.017 2 DEBUG nova.network.neutron [None req-647b20c2-3f27-4bd8-aa1a-6180cdfd55a2 e5044998ddc3419bb14cc08417add581 be4e2f9059cd48f5b44a612256e3fc7b - - default default] [instance: 70a451f7-5bde-42f1-a448-d48b3b24d9d6] Successfully updated port: a5c38ba6-3efb-4090-a42b-1dd250959041 _update_port /usr/lib/python3.12/site-packages/nova/network/neutron.py:588
Oct 09 16:15:32 compute-0 nova_compute[117331]: 2025-10-09 16:15:32.068 2 DEBUG nova.compute.manager [req-45785bbc-64e6-4641-b842-996dd6efa02e req-6f124065-9e0b-4ff0-b5bc-7bc8173ce705 ee15961ad2be4bada6c106ac4f69f422 c076c42271004e7aa86e84416e33f826 - - default default] [instance: 70a451f7-5bde-42f1-a448-d48b3b24d9d6] Received event network-changed-a5c38ba6-3efb-4090-a42b-1dd250959041 external_instance_event /usr/lib/python3.12/site-packages/nova/compute/manager.py:11812
Oct 09 16:15:32 compute-0 nova_compute[117331]: 2025-10-09 16:15:32.068 2 DEBUG nova.compute.manager [req-45785bbc-64e6-4641-b842-996dd6efa02e req-6f124065-9e0b-4ff0-b5bc-7bc8173ce705 ee15961ad2be4bada6c106ac4f69f422 c076c42271004e7aa86e84416e33f826 - - default default] [instance: 70a451f7-5bde-42f1-a448-d48b3b24d9d6] Refreshing instance network info cache due to event network-changed-a5c38ba6-3efb-4090-a42b-1dd250959041. external_instance_event /usr/lib/python3.12/site-packages/nova/compute/manager.py:11817
Oct 09 16:15:32 compute-0 nova_compute[117331]: 2025-10-09 16:15:32.069 2 DEBUG oslo_concurrency.lockutils [req-45785bbc-64e6-4641-b842-996dd6efa02e req-6f124065-9e0b-4ff0-b5bc-7bc8173ce705 ee15961ad2be4bada6c106ac4f69f422 c076c42271004e7aa86e84416e33f826 - - default default] Acquiring lock "refresh_cache-70a451f7-5bde-42f1-a448-d48b3b24d9d6" lock /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:313
Oct 09 16:15:32 compute-0 nova_compute[117331]: 2025-10-09 16:15:32.069 2 DEBUG oslo_concurrency.lockutils [req-45785bbc-64e6-4641-b842-996dd6efa02e req-6f124065-9e0b-4ff0-b5bc-7bc8173ce705 ee15961ad2be4bada6c106ac4f69f422 c076c42271004e7aa86e84416e33f826 - - default default] Acquired lock "refresh_cache-70a451f7-5bde-42f1-a448-d48b3b24d9d6" lock /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:316
Oct 09 16:15:32 compute-0 nova_compute[117331]: 2025-10-09 16:15:32.069 2 DEBUG nova.network.neutron [req-45785bbc-64e6-4641-b842-996dd6efa02e req-6f124065-9e0b-4ff0-b5bc-7bc8173ce705 ee15961ad2be4bada6c106ac4f69f422 c076c42271004e7aa86e84416e33f826 - - default default] [instance: 70a451f7-5bde-42f1-a448-d48b3b24d9d6] Refreshing network info cache for port a5c38ba6-3efb-4090-a42b-1dd250959041 _get_instance_nw_info /usr/lib/python3.12/site-packages/nova/network/neutron.py:2067
Oct 09 16:15:32 compute-0 nova_compute[117331]: 2025-10-09 16:15:32.330 2 DEBUG nova.compute.manager [None req-647b20c2-3f27-4bd8-aa1a-6180cdfd55a2 e5044998ddc3419bb14cc08417add581 be4e2f9059cd48f5b44a612256e3fc7b - - default default] [instance: 70a451f7-5bde-42f1-a448-d48b3b24d9d6] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.12/site-packages/nova/compute/manager.py:2678
Oct 09 16:15:32 compute-0 nova_compute[117331]: 2025-10-09 16:15:32.331 2 DEBUG nova.virt.libvirt.driver [None req-647b20c2-3f27-4bd8-aa1a-6180cdfd55a2 e5044998ddc3419bb14cc08417add581 be4e2f9059cd48f5b44a612256e3fc7b - - default default] [instance: 70a451f7-5bde-42f1-a448-d48b3b24d9d6] Creating instance directory _create_image /usr/lib/python3.12/site-packages/nova/virt/libvirt/driver.py:5138
Oct 09 16:15:32 compute-0 nova_compute[117331]: 2025-10-09 16:15:32.332 2 INFO nova.virt.libvirt.driver [None req-647b20c2-3f27-4bd8-aa1a-6180cdfd55a2 e5044998ddc3419bb14cc08417add581 be4e2f9059cd48f5b44a612256e3fc7b - - default default] [instance: 70a451f7-5bde-42f1-a448-d48b3b24d9d6] Creating image(s)
Oct 09 16:15:32 compute-0 nova_compute[117331]: 2025-10-09 16:15:32.332 2 DEBUG oslo_concurrency.lockutils [None req-647b20c2-3f27-4bd8-aa1a-6180cdfd55a2 e5044998ddc3419bb14cc08417add581 be4e2f9059cd48f5b44a612256e3fc7b - - default default] Acquiring lock "/var/lib/nova/instances/70a451f7-5bde-42f1-a448-d48b3b24d9d6/disk.info" by "nova.virt.libvirt.imagebackend.Image.resolve_driver_format.<locals>.write_to_disk_info_file" inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:405
Oct 09 16:15:32 compute-0 nova_compute[117331]: 2025-10-09 16:15:32.332 2 DEBUG oslo_concurrency.lockutils [None req-647b20c2-3f27-4bd8-aa1a-6180cdfd55a2 e5044998ddc3419bb14cc08417add581 be4e2f9059cd48f5b44a612256e3fc7b - - default default] Lock "/var/lib/nova/instances/70a451f7-5bde-42f1-a448-d48b3b24d9d6/disk.info" acquired by "nova.virt.libvirt.imagebackend.Image.resolve_driver_format.<locals>.write_to_disk_info_file" :: waited 0.000s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:410
Oct 09 16:15:32 compute-0 nova_compute[117331]: 2025-10-09 16:15:32.333 2 DEBUG oslo_concurrency.lockutils [None req-647b20c2-3f27-4bd8-aa1a-6180cdfd55a2 e5044998ddc3419bb14cc08417add581 be4e2f9059cd48f5b44a612256e3fc7b - - default default] Lock "/var/lib/nova/instances/70a451f7-5bde-42f1-a448-d48b3b24d9d6/disk.info" "released" by "nova.virt.libvirt.imagebackend.Image.resolve_driver_format.<locals>.write_to_disk_info_file" :: held 0.001s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:424
Oct 09 16:15:32 compute-0 nova_compute[117331]: 2025-10-09 16:15:32.333 2 DEBUG oslo_utils.imageutils.format_inspector [None req-647b20c2-3f27-4bd8-aa1a-6180cdfd55a2 e5044998ddc3419bb14cc08417add581 be4e2f9059cd48f5b44a612256e3fc7b - - default default] Format inspector for vmdk does not match, excluding from consideration (Signature KDMV not found: b'\xebH\x90\x00') _process_chunk /usr/lib/python3.12/site-packages/oslo_utils/imageutils/format_inspector.py:1365
Oct 09 16:15:32 compute-0 nova_compute[117331]: 2025-10-09 16:15:32.336 2 DEBUG oslo_utils.imageutils.format_inspector [None req-647b20c2-3f27-4bd8-aa1a-6180cdfd55a2 e5044998ddc3419bb14cc08417add581 be4e2f9059cd48f5b44a612256e3fc7b - - default default] Format inspector for vhdx does not match, excluding from consideration (Region signature not found at 30000) _process_chunk /usr/lib/python3.12/site-packages/oslo_utils/imageutils/format_inspector.py:1365
Oct 09 16:15:32 compute-0 nova_compute[117331]: 2025-10-09 16:15:32.338 2 DEBUG oslo_concurrency.processutils [None req-647b20c2-3f27-4bd8-aa1a-6180cdfd55a2 e5044998ddc3419bb14cc08417add581 be4e2f9059cd48f5b44a612256e3fc7b - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/cea3dacfdea0f3734ae526b812744e847bc2d356 --force-share --output=json execute /usr/lib/python3.12/site-packages/oslo_concurrency/processutils.py:349
Oct 09 16:15:32 compute-0 nova_compute[117331]: 2025-10-09 16:15:32.430 2 DEBUG oslo_concurrency.processutils [None req-647b20c2-3f27-4bd8-aa1a-6180cdfd55a2 e5044998ddc3419bb14cc08417add581 be4e2f9059cd48f5b44a612256e3fc7b - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/cea3dacfdea0f3734ae526b812744e847bc2d356 --force-share --output=json" returned: 0 in 0.092s execute /usr/lib/python3.12/site-packages/oslo_concurrency/processutils.py:372
Oct 09 16:15:32 compute-0 nova_compute[117331]: 2025-10-09 16:15:32.431 2 DEBUG oslo_concurrency.lockutils [None req-647b20c2-3f27-4bd8-aa1a-6180cdfd55a2 e5044998ddc3419bb14cc08417add581 be4e2f9059cd48f5b44a612256e3fc7b - - default default] Acquiring lock "cea3dacfdea0f3734ae526b812744e847bc2d356" by "nova.virt.libvirt.imagebackend.Qcow2.create_image.<locals>.create_qcow2_image" inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:405
Oct 09 16:15:32 compute-0 nova_compute[117331]: 2025-10-09 16:15:32.432 2 DEBUG oslo_concurrency.lockutils [None req-647b20c2-3f27-4bd8-aa1a-6180cdfd55a2 e5044998ddc3419bb14cc08417add581 be4e2f9059cd48f5b44a612256e3fc7b - - default default] Lock "cea3dacfdea0f3734ae526b812744e847bc2d356" acquired by "nova.virt.libvirt.imagebackend.Qcow2.create_image.<locals>.create_qcow2_image" :: waited 0.001s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:410
Oct 09 16:15:32 compute-0 nova_compute[117331]: 2025-10-09 16:15:32.432 2 DEBUG oslo_utils.imageutils.format_inspector [None req-647b20c2-3f27-4bd8-aa1a-6180cdfd55a2 e5044998ddc3419bb14cc08417add581 be4e2f9059cd48f5b44a612256e3fc7b - - default default] Format inspector for vmdk does not match, excluding from consideration (Signature KDMV not found: b'\xebH\x90\x00') _process_chunk /usr/lib/python3.12/site-packages/oslo_utils/imageutils/format_inspector.py:1365
Oct 09 16:15:32 compute-0 nova_compute[117331]: 2025-10-09 16:15:32.435 2 DEBUG oslo_utils.imageutils.format_inspector [None req-647b20c2-3f27-4bd8-aa1a-6180cdfd55a2 e5044998ddc3419bb14cc08417add581 be4e2f9059cd48f5b44a612256e3fc7b - - default default] Format inspector for vhdx does not match, excluding from consideration (Region signature not found at 30000) _process_chunk /usr/lib/python3.12/site-packages/oslo_utils/imageutils/format_inspector.py:1365
Oct 09 16:15:32 compute-0 nova_compute[117331]: 2025-10-09 16:15:32.435 2 DEBUG oslo_concurrency.processutils [None req-647b20c2-3f27-4bd8-aa1a-6180cdfd55a2 e5044998ddc3419bb14cc08417add581 be4e2f9059cd48f5b44a612256e3fc7b - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/cea3dacfdea0f3734ae526b812744e847bc2d356 --force-share --output=json execute /usr/lib/python3.12/site-packages/oslo_concurrency/processutils.py:349
Oct 09 16:15:32 compute-0 nova_compute[117331]: 2025-10-09 16:15:32.507 2 DEBUG oslo_concurrency.processutils [None req-647b20c2-3f27-4bd8-aa1a-6180cdfd55a2 e5044998ddc3419bb14cc08417add581 be4e2f9059cd48f5b44a612256e3fc7b - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/cea3dacfdea0f3734ae526b812744e847bc2d356 --force-share --output=json" returned: 0 in 0.072s execute /usr/lib/python3.12/site-packages/oslo_concurrency/processutils.py:372
Oct 09 16:15:32 compute-0 nova_compute[117331]: 2025-10-09 16:15:32.508 2 DEBUG oslo_concurrency.processutils [None req-647b20c2-3f27-4bd8-aa1a-6180cdfd55a2 e5044998ddc3419bb14cc08417add581 be4e2f9059cd48f5b44a612256e3fc7b - - default default] Running cmd (subprocess): env LC_ALL=C LANG=C qemu-img create -f qcow2 -o backing_file=/var/lib/nova/instances/_base/cea3dacfdea0f3734ae526b812744e847bc2d356,backing_fmt=raw /var/lib/nova/instances/70a451f7-5bde-42f1-a448-d48b3b24d9d6/disk 1073741824 execute /usr/lib/python3.12/site-packages/oslo_concurrency/processutils.py:349
Oct 09 16:15:32 compute-0 nova_compute[117331]: 2025-10-09 16:15:32.525 2 DEBUG oslo_concurrency.lockutils [None req-647b20c2-3f27-4bd8-aa1a-6180cdfd55a2 e5044998ddc3419bb14cc08417add581 be4e2f9059cd48f5b44a612256e3fc7b - - default default] Acquiring lock "refresh_cache-70a451f7-5bde-42f1-a448-d48b3b24d9d6" lock /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:313
Oct 09 16:15:32 compute-0 nova_compute[117331]: 2025-10-09 16:15:32.539 2 INFO nova.scheduler.client.report [None req-059bcde7-38dd-434c-ac33-56b055931054 005258eefa5345409efe13f6a1de22ab c076c42271004e7aa86e84416e33f826 - - default default] Deleted allocation for migration 22e9bfdb-97e9-4f17-9af1-e0d57951d553
Oct 09 16:15:32 compute-0 nova_compute[117331]: 2025-10-09 16:15:32.552 2 DEBUG oslo_concurrency.processutils [None req-647b20c2-3f27-4bd8-aa1a-6180cdfd55a2 e5044998ddc3419bb14cc08417add581 be4e2f9059cd48f5b44a612256e3fc7b - - default default] CMD "env LC_ALL=C LANG=C qemu-img create -f qcow2 -o backing_file=/var/lib/nova/instances/_base/cea3dacfdea0f3734ae526b812744e847bc2d356,backing_fmt=raw /var/lib/nova/instances/70a451f7-5bde-42f1-a448-d48b3b24d9d6/disk 1073741824" returned: 0 in 0.044s execute /usr/lib/python3.12/site-packages/oslo_concurrency/processutils.py:372
Oct 09 16:15:32 compute-0 nova_compute[117331]: 2025-10-09 16:15:32.552 2 DEBUG oslo_concurrency.lockutils [None req-647b20c2-3f27-4bd8-aa1a-6180cdfd55a2 e5044998ddc3419bb14cc08417add581 be4e2f9059cd48f5b44a612256e3fc7b - - default default] Lock "cea3dacfdea0f3734ae526b812744e847bc2d356" "released" by "nova.virt.libvirt.imagebackend.Qcow2.create_image.<locals>.create_qcow2_image" :: held 0.121s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:424
Oct 09 16:15:32 compute-0 nova_compute[117331]: 2025-10-09 16:15:32.553 2 DEBUG oslo_concurrency.processutils [None req-647b20c2-3f27-4bd8-aa1a-6180cdfd55a2 e5044998ddc3419bb14cc08417add581 be4e2f9059cd48f5b44a612256e3fc7b - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/cea3dacfdea0f3734ae526b812744e847bc2d356 --force-share --output=json execute /usr/lib/python3.12/site-packages/oslo_concurrency/processutils.py:349
Oct 09 16:15:32 compute-0 nova_compute[117331]: 2025-10-09 16:15:32.574 2 WARNING neutronclient.v2_0.client [req-45785bbc-64e6-4641-b842-996dd6efa02e req-6f124065-9e0b-4ff0-b5bc-7bc8173ce705 ee15961ad2be4bada6c106ac4f69f422 c076c42271004e7aa86e84416e33f826 - - default default] The python binding code in neutronclient is deprecated in favor of OpenstackSDK, please use that as this will be removed in a future release.
Oct 09 16:15:32 compute-0 nova_compute[117331]: 2025-10-09 16:15:32.613 2 DEBUG oslo_concurrency.processutils [None req-647b20c2-3f27-4bd8-aa1a-6180cdfd55a2 e5044998ddc3419bb14cc08417add581 be4e2f9059cd48f5b44a612256e3fc7b - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/cea3dacfdea0f3734ae526b812744e847bc2d356 --force-share --output=json" returned: 0 in 0.061s execute /usr/lib/python3.12/site-packages/oslo_concurrency/processutils.py:372
Oct 09 16:15:32 compute-0 nova_compute[117331]: 2025-10-09 16:15:32.614 2 DEBUG nova.virt.disk.api [None req-647b20c2-3f27-4bd8-aa1a-6180cdfd55a2 e5044998ddc3419bb14cc08417add581 be4e2f9059cd48f5b44a612256e3fc7b - - default default] Checking if we can resize image /var/lib/nova/instances/70a451f7-5bde-42f1-a448-d48b3b24d9d6/disk. size=1073741824 can_resize_image /usr/lib/python3.12/site-packages/nova/virt/disk/api.py:164
Oct 09 16:15:32 compute-0 nova_compute[117331]: 2025-10-09 16:15:32.614 2 DEBUG oslo_concurrency.processutils [None req-647b20c2-3f27-4bd8-aa1a-6180cdfd55a2 e5044998ddc3419bb14cc08417add581 be4e2f9059cd48f5b44a612256e3fc7b - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/70a451f7-5bde-42f1-a448-d48b3b24d9d6/disk --force-share --output=json execute /usr/lib/python3.12/site-packages/oslo_concurrency/processutils.py:349
Oct 09 16:15:32 compute-0 nova_compute[117331]: 2025-10-09 16:15:32.675 2 DEBUG oslo_concurrency.processutils [None req-647b20c2-3f27-4bd8-aa1a-6180cdfd55a2 e5044998ddc3419bb14cc08417add581 be4e2f9059cd48f5b44a612256e3fc7b - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/70a451f7-5bde-42f1-a448-d48b3b24d9d6/disk --force-share --output=json" returned: 0 in 0.060s execute /usr/lib/python3.12/site-packages/oslo_concurrency/processutils.py:372
Oct 09 16:15:32 compute-0 nova_compute[117331]: 2025-10-09 16:15:32.676 2 DEBUG nova.virt.disk.api [None req-647b20c2-3f27-4bd8-aa1a-6180cdfd55a2 e5044998ddc3419bb14cc08417add581 be4e2f9059cd48f5b44a612256e3fc7b - - default default] Cannot resize image /var/lib/nova/instances/70a451f7-5bde-42f1-a448-d48b3b24d9d6/disk to a smaller size. can_resize_image /usr/lib/python3.12/site-packages/nova/virt/disk/api.py:170
Oct 09 16:15:32 compute-0 nova_compute[117331]: 2025-10-09 16:15:32.677 2 DEBUG nova.virt.libvirt.driver [None req-647b20c2-3f27-4bd8-aa1a-6180cdfd55a2 e5044998ddc3419bb14cc08417add581 be4e2f9059cd48f5b44a612256e3fc7b - - default default] [instance: 70a451f7-5bde-42f1-a448-d48b3b24d9d6] Created local disks _create_image /usr/lib/python3.12/site-packages/nova/virt/libvirt/driver.py:5270
Oct 09 16:15:32 compute-0 nova_compute[117331]: 2025-10-09 16:15:32.677 2 DEBUG nova.virt.libvirt.driver [None req-647b20c2-3f27-4bd8-aa1a-6180cdfd55a2 e5044998ddc3419bb14cc08417add581 be4e2f9059cd48f5b44a612256e3fc7b - - default default] [instance: 70a451f7-5bde-42f1-a448-d48b3b24d9d6] Ensure instance console log exists: /var/lib/nova/instances/70a451f7-5bde-42f1-a448-d48b3b24d9d6/console.log _ensure_console_log_for_instance /usr/lib/python3.12/site-packages/nova/virt/libvirt/driver.py:5017
Oct 09 16:15:32 compute-0 nova_compute[117331]: 2025-10-09 16:15:32.678 2 DEBUG oslo_concurrency.lockutils [None req-647b20c2-3f27-4bd8-aa1a-6180cdfd55a2 e5044998ddc3419bb14cc08417add581 be4e2f9059cd48f5b44a612256e3fc7b - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:405
Oct 09 16:15:32 compute-0 nova_compute[117331]: 2025-10-09 16:15:32.678 2 DEBUG oslo_concurrency.lockutils [None req-647b20c2-3f27-4bd8-aa1a-6180cdfd55a2 e5044998ddc3419bb14cc08417add581 be4e2f9059cd48f5b44a612256e3fc7b - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:410
Oct 09 16:15:32 compute-0 nova_compute[117331]: 2025-10-09 16:15:32.678 2 DEBUG oslo_concurrency.lockutils [None req-647b20c2-3f27-4bd8-aa1a-6180cdfd55a2 e5044998ddc3419bb14cc08417add581 be4e2f9059cd48f5b44a612256e3fc7b - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:424
Oct 09 16:15:32 compute-0 nova_compute[117331]: 2025-10-09 16:15:32.746 2 DEBUG nova.network.neutron [req-45785bbc-64e6-4641-b842-996dd6efa02e req-6f124065-9e0b-4ff0-b5bc-7bc8173ce705 ee15961ad2be4bada6c106ac4f69f422 c076c42271004e7aa86e84416e33f826 - - default default] [instance: 70a451f7-5bde-42f1-a448-d48b3b24d9d6] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.12/site-packages/nova/network/neutron.py:3383
Oct 09 16:15:32 compute-0 nova_compute[117331]: 2025-10-09 16:15:32.886 2 DEBUG nova.network.neutron [req-45785bbc-64e6-4641-b842-996dd6efa02e req-6f124065-9e0b-4ff0-b5bc-7bc8173ce705 ee15961ad2be4bada6c106ac4f69f422 c076c42271004e7aa86e84416e33f826 - - default default] [instance: 70a451f7-5bde-42f1-a448-d48b3b24d9d6] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.12/site-packages/nova/network/neutron.py:116
Oct 09 16:15:33 compute-0 nova_compute[117331]: 2025-10-09 16:15:33.050 2 DEBUG oslo_concurrency.lockutils [None req-059bcde7-38dd-434c-ac33-56b055931054 005258eefa5345409efe13f6a1de22ab c076c42271004e7aa86e84416e33f826 - - default default] Lock "481b51b1-c134-4c7a-af94-d3d3794a7971" "released" by "nova.compute.manager.ComputeManager.confirm_resize.<locals>.do_confirm_resize" :: held 9.046s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:424
Oct 09 16:15:33 compute-0 nova_compute[117331]: 2025-10-09 16:15:33.139 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 09 16:15:33 compute-0 nova_compute[117331]: 2025-10-09 16:15:33.392 2 DEBUG oslo_concurrency.lockutils [req-45785bbc-64e6-4641-b842-996dd6efa02e req-6f124065-9e0b-4ff0-b5bc-7bc8173ce705 ee15961ad2be4bada6c106ac4f69f422 c076c42271004e7aa86e84416e33f826 - - default default] Releasing lock "refresh_cache-70a451f7-5bde-42f1-a448-d48b3b24d9d6" lock /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:334
Oct 09 16:15:33 compute-0 nova_compute[117331]: 2025-10-09 16:15:33.394 2 DEBUG oslo_concurrency.lockutils [None req-647b20c2-3f27-4bd8-aa1a-6180cdfd55a2 e5044998ddc3419bb14cc08417add581 be4e2f9059cd48f5b44a612256e3fc7b - - default default] Acquired lock "refresh_cache-70a451f7-5bde-42f1-a448-d48b3b24d9d6" lock /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:316
Oct 09 16:15:33 compute-0 nova_compute[117331]: 2025-10-09 16:15:33.394 2 DEBUG nova.network.neutron [None req-647b20c2-3f27-4bd8-aa1a-6180cdfd55a2 e5044998ddc3419bb14cc08417add581 be4e2f9059cd48f5b44a612256e3fc7b - - default default] [instance: 70a451f7-5bde-42f1-a448-d48b3b24d9d6] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.12/site-packages/nova/network/neutron.py:2070
Oct 09 16:15:34 compute-0 nova_compute[117331]: 2025-10-09 16:15:34.749 2 DEBUG nova.network.neutron [None req-647b20c2-3f27-4bd8-aa1a-6180cdfd55a2 e5044998ddc3419bb14cc08417add581 be4e2f9059cd48f5b44a612256e3fc7b - - default default] [instance: 70a451f7-5bde-42f1-a448-d48b3b24d9d6] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.12/site-packages/nova/network/neutron.py:3383
Oct 09 16:15:34 compute-0 nova_compute[117331]: 2025-10-09 16:15:34.922 2 WARNING neutronclient.v2_0.client [None req-647b20c2-3f27-4bd8-aa1a-6180cdfd55a2 e5044998ddc3419bb14cc08417add581 be4e2f9059cd48f5b44a612256e3fc7b - - default default] The python binding code in neutronclient is deprecated in favor of OpenstackSDK, please use that as this will be removed in a future release.
Oct 09 16:15:35 compute-0 unix_chkpwd[143110]: password check failed for user (root)
Oct 09 16:15:35 compute-0 sshd-session[143108]: pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=134.199.199.215  user=root
Oct 09 16:15:35 compute-0 ovn_metadata_agent[28608]: 2025-10-09 16:15:35.289 28613 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:405
Oct 09 16:15:35 compute-0 ovn_metadata_agent[28608]: 2025-10-09 16:15:35.290 28613 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:410
Oct 09 16:15:35 compute-0 ovn_metadata_agent[28608]: 2025-10-09 16:15:35.290 28613 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:424
Oct 09 16:15:35 compute-0 nova_compute[117331]: 2025-10-09 16:15:35.502 2 DEBUG nova.network.neutron [None req-647b20c2-3f27-4bd8-aa1a-6180cdfd55a2 e5044998ddc3419bb14cc08417add581 be4e2f9059cd48f5b44a612256e3fc7b - - default default] [instance: 70a451f7-5bde-42f1-a448-d48b3b24d9d6] Updating instance_info_cache with network_info: [{"id": "a5c38ba6-3efb-4090-a42b-1dd250959041", "address": "fa:16:3e:c9:bd:65", "network": {"id": "7ed4244a-b510-4df6-9ffd-2f86603932fc", "bridge": "br-int", "label": "tempest-TestExecuteActionsViaActuator-1219246163-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "153d68ab33e64b958060574bb1741725", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa5c38ba6-3e", "ovs_interfaceid": "a5c38ba6-3efb-4090-a42b-1dd250959041", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.12/site-packages/nova/network/neutron.py:116
Oct 09 16:15:35 compute-0 nova_compute[117331]: 2025-10-09 16:15:35.509 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 09 16:15:36 compute-0 nova_compute[117331]: 2025-10-09 16:15:36.008 2 DEBUG oslo_concurrency.lockutils [None req-647b20c2-3f27-4bd8-aa1a-6180cdfd55a2 e5044998ddc3419bb14cc08417add581 be4e2f9059cd48f5b44a612256e3fc7b - - default default] Releasing lock "refresh_cache-70a451f7-5bde-42f1-a448-d48b3b24d9d6" lock /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:334
Oct 09 16:15:36 compute-0 nova_compute[117331]: 2025-10-09 16:15:36.009 2 DEBUG nova.compute.manager [None req-647b20c2-3f27-4bd8-aa1a-6180cdfd55a2 e5044998ddc3419bb14cc08417add581 be4e2f9059cd48f5b44a612256e3fc7b - - default default] [instance: 70a451f7-5bde-42f1-a448-d48b3b24d9d6] Instance network_info: |[{"id": "a5c38ba6-3efb-4090-a42b-1dd250959041", "address": "fa:16:3e:c9:bd:65", "network": {"id": "7ed4244a-b510-4df6-9ffd-2f86603932fc", "bridge": "br-int", "label": "tempest-TestExecuteActionsViaActuator-1219246163-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "153d68ab33e64b958060574bb1741725", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa5c38ba6-3e", "ovs_interfaceid": "a5c38ba6-3efb-4090-a42b-1dd250959041", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.12/site-packages/nova/compute/manager.py:2031
Oct 09 16:15:36 compute-0 nova_compute[117331]: 2025-10-09 16:15:36.011 2 DEBUG nova.virt.libvirt.driver [None req-647b20c2-3f27-4bd8-aa1a-6180cdfd55a2 e5044998ddc3419bb14cc08417add581 be4e2f9059cd48f5b44a612256e3fc7b - - default default] [instance: 70a451f7-5bde-42f1-a448-d48b3b24d9d6] Start _get_guest_xml network_info=[{"id": "a5c38ba6-3efb-4090-a42b-1dd250959041", "address": "fa:16:3e:c9:bd:65", "network": {"id": "7ed4244a-b510-4df6-9ffd-2f86603932fc", "bridge": "br-int", "label": "tempest-TestExecuteActionsViaActuator-1219246163-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "153d68ab33e64b958060574bb1741725", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa5c38ba6-3e", "ovs_interfaceid": "a5c38ba6-3efb-4090-a42b-1dd250959041", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-09T16:07:02Z,direct_url=<?>,disk_format='qcow2',id=b7d6e0af-25e4-4227-9dc6-43143898ceee,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='b30e8cf5e10742f190212b4cb97ce2c9',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-09T16:07:04Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'device_type': 'disk', 'encrypted': False, 'size': 0, 'encryption_format': None, 'guest_format': None, 'boot_index': 0, 'device_name': '/dev/vda', 'disk_bus': 'virtio', 'encryption_options': None, 'encryption_secret_uuid': None, 'image_id': 'b7d6e0af-25e4-4227-9dc6-43143898ceee'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None}share_info=None _get_guest_xml /usr/lib/python3.12/site-packages/nova/virt/libvirt/driver.py:8046
Oct 09 16:15:36 compute-0 nova_compute[117331]: 2025-10-09 16:15:36.015 2 WARNING nova.virt.libvirt.driver [None req-647b20c2-3f27-4bd8-aa1a-6180cdfd55a2 e5044998ddc3419bb14cc08417add581 be4e2f9059cd48f5b44a612256e3fc7b - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Oct 09 16:15:36 compute-0 nova_compute[117331]: 2025-10-09 16:15:36.016 2 DEBUG nova.virt.driver [None req-647b20c2-3f27-4bd8-aa1a-6180cdfd55a2 e5044998ddc3419bb14cc08417add581 be4e2f9059cd48f5b44a612256e3fc7b - - default default] InstanceDriverMetadata: InstanceDriverMetadata(root_type='image', root_id='b7d6e0af-25e4-4227-9dc6-43143898ceee', instance_meta=NovaInstanceMeta(name='tempest-TestExecuteActionsViaActuator-server-1017152878', uuid='70a451f7-5bde-42f1-a448-d48b3b24d9d6'), owner=OwnerMeta(userid='e5044998ddc3419bb14cc08417add581', username='tempest-TestExecuteActionsViaActuator-1347788182-project-admin', projectid='be4e2f9059cd48f5b44a612256e3fc7b', projectname='tempest-TestExecuteActionsViaActuator-1347788182'), image=ImageMeta(id='b7d6e0af-25e4-4227-9dc6-43143898ceee', name=None, container_format='bare', disk_format='qcow2', min_disk=1, min_ram=0, properties={'hw_rng_model': 'virtio'}), flavor=FlavorMeta(name='m1.nano', flavorid='5aeb5bf9-70f3-4ab6-b418-2c3319a7d2b3', memory_mb=128, vcpus=1, root_gb=1, ephemeral_gb=0, extra_specs={'hw_rng:allowed': 'True'}, swap=0), network_info=[{"id": "a5c38ba6-3efb-4090-a42b-1dd250959041", "address": "fa:16:3e:c9:bd:65", "network": {"id": "7ed4244a-b510-4df6-9ffd-2f86603932fc", "bridge": "br-int", "label": "tempest-TestExecuteActionsViaActuator-1219246163-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "153d68ab33e64b958060574bb1741725", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa5c38ba6-3e", "ovs_interfaceid": 
"a5c38ba6-3efb-4090-a42b-1dd250959041", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}], nova_package='32.1.0-0.20251008143712.076498e.el10', creation_time=1760026536.0167542) get_instance_driver_metadata /usr/lib/python3.12/site-packages/nova/virt/driver.py:437
Oct 09 16:15:36 compute-0 nova_compute[117331]: 2025-10-09 16:15:36.021 2 DEBUG nova.virt.libvirt.host [None req-647b20c2-3f27-4bd8-aa1a-6180cdfd55a2 e5044998ddc3419bb14cc08417add581 be4e2f9059cd48f5b44a612256e3fc7b - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.12/site-packages/nova/virt/libvirt/host.py:1698
Oct 09 16:15:36 compute-0 nova_compute[117331]: 2025-10-09 16:15:36.022 2 DEBUG nova.virt.libvirt.host [None req-647b20c2-3f27-4bd8-aa1a-6180cdfd55a2 e5044998ddc3419bb14cc08417add581 be4e2f9059cd48f5b44a612256e3fc7b - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.12/site-packages/nova/virt/libvirt/host.py:1708
Oct 09 16:15:36 compute-0 nova_compute[117331]: 2025-10-09 16:15:36.025 2 DEBUG nova.virt.libvirt.host [None req-647b20c2-3f27-4bd8-aa1a-6180cdfd55a2 e5044998ddc3419bb14cc08417add581 be4e2f9059cd48f5b44a612256e3fc7b - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.12/site-packages/nova/virt/libvirt/host.py:1717
Oct 09 16:15:36 compute-0 nova_compute[117331]: 2025-10-09 16:15:36.026 2 DEBUG nova.virt.libvirt.host [None req-647b20c2-3f27-4bd8-aa1a-6180cdfd55a2 e5044998ddc3419bb14cc08417add581 be4e2f9059cd48f5b44a612256e3fc7b - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.12/site-packages/nova/virt/libvirt/host.py:1724
Oct 09 16:15:36 compute-0 nova_compute[117331]: 2025-10-09 16:15:36.026 2 DEBUG nova.virt.libvirt.driver [None req-647b20c2-3f27-4bd8-aa1a-6180cdfd55a2 e5044998ddc3419bb14cc08417add581 be4e2f9059cd48f5b44a612256e3fc7b - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.12/site-packages/nova/virt/libvirt/driver.py:5809
Oct 09 16:15:36 compute-0 nova_compute[117331]: 2025-10-09 16:15:36.027 2 DEBUG nova.virt.hardware [None req-647b20c2-3f27-4bd8-aa1a-6180cdfd55a2 e5044998ddc3419bb14cc08417add581 be4e2f9059cd48f5b44a612256e3fc7b - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-10-09T16:07:00Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='5aeb5bf9-70f3-4ab6-b418-2c3319a7d2b3',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-09T16:07:02Z,direct_url=<?>,disk_format='qcow2',id=b7d6e0af-25e4-4227-9dc6-43143898ceee,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='b30e8cf5e10742f190212b4cb97ce2c9',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-09T16:07:04Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.12/site-packages/nova/virt/hardware.py:571
Oct 09 16:15:36 compute-0 nova_compute[117331]: 2025-10-09 16:15:36.027 2 DEBUG nova.virt.hardware [None req-647b20c2-3f27-4bd8-aa1a-6180cdfd55a2 e5044998ddc3419bb14cc08417add581 be4e2f9059cd48f5b44a612256e3fc7b - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.12/site-packages/nova/virt/hardware.py:356
Oct 09 16:15:36 compute-0 nova_compute[117331]: 2025-10-09 16:15:36.027 2 DEBUG nova.virt.hardware [None req-647b20c2-3f27-4bd8-aa1a-6180cdfd55a2 e5044998ddc3419bb14cc08417add581 be4e2f9059cd48f5b44a612256e3fc7b - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.12/site-packages/nova/virt/hardware.py:360
Oct 09 16:15:36 compute-0 nova_compute[117331]: 2025-10-09 16:15:36.028 2 DEBUG nova.virt.hardware [None req-647b20c2-3f27-4bd8-aa1a-6180cdfd55a2 e5044998ddc3419bb14cc08417add581 be4e2f9059cd48f5b44a612256e3fc7b - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.12/site-packages/nova/virt/hardware.py:396
Oct 09 16:15:36 compute-0 nova_compute[117331]: 2025-10-09 16:15:36.028 2 DEBUG nova.virt.hardware [None req-647b20c2-3f27-4bd8-aa1a-6180cdfd55a2 e5044998ddc3419bb14cc08417add581 be4e2f9059cd48f5b44a612256e3fc7b - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.12/site-packages/nova/virt/hardware.py:400
Oct 09 16:15:36 compute-0 nova_compute[117331]: 2025-10-09 16:15:36.028 2 DEBUG nova.virt.hardware [None req-647b20c2-3f27-4bd8-aa1a-6180cdfd55a2 e5044998ddc3419bb14cc08417add581 be4e2f9059cd48f5b44a612256e3fc7b - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.12/site-packages/nova/virt/hardware.py:438
Oct 09 16:15:36 compute-0 nova_compute[117331]: 2025-10-09 16:15:36.029 2 DEBUG nova.virt.hardware [None req-647b20c2-3f27-4bd8-aa1a-6180cdfd55a2 e5044998ddc3419bb14cc08417add581 be4e2f9059cd48f5b44a612256e3fc7b - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.12/site-packages/nova/virt/hardware.py:577
Oct 09 16:15:36 compute-0 nova_compute[117331]: 2025-10-09 16:15:36.029 2 DEBUG nova.virt.hardware [None req-647b20c2-3f27-4bd8-aa1a-6180cdfd55a2 e5044998ddc3419bb14cc08417add581 be4e2f9059cd48f5b44a612256e3fc7b - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.12/site-packages/nova/virt/hardware.py:479
Oct 09 16:15:36 compute-0 nova_compute[117331]: 2025-10-09 16:15:36.029 2 DEBUG nova.virt.hardware [None req-647b20c2-3f27-4bd8-aa1a-6180cdfd55a2 e5044998ddc3419bb14cc08417add581 be4e2f9059cd48f5b44a612256e3fc7b - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.12/site-packages/nova/virt/hardware.py:509
Oct 09 16:15:36 compute-0 nova_compute[117331]: 2025-10-09 16:15:36.029 2 DEBUG nova.virt.hardware [None req-647b20c2-3f27-4bd8-aa1a-6180cdfd55a2 e5044998ddc3419bb14cc08417add581 be4e2f9059cd48f5b44a612256e3fc7b - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.12/site-packages/nova/virt/hardware.py:583
Oct 09 16:15:36 compute-0 nova_compute[117331]: 2025-10-09 16:15:36.030 2 DEBUG nova.virt.hardware [None req-647b20c2-3f27-4bd8-aa1a-6180cdfd55a2 e5044998ddc3419bb14cc08417add581 be4e2f9059cd48f5b44a612256e3fc7b - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.12/site-packages/nova/virt/hardware.py:585
Oct 09 16:15:36 compute-0 nova_compute[117331]: 2025-10-09 16:15:36.034 2 DEBUG nova.virt.libvirt.vif [None req-647b20c2-3f27-4bd8-aa1a-6180cdfd55a2 e5044998ddc3419bb14cc08417add581 be4e2f9059cd48f5b44a612256e3fc7b - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,compute_id=1,config_drive='',created_at=2025-10-09T16:15:25Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description=None,display_name='tempest-TestExecuteActionsViaActuator-server-1017152878',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testexecuteactionsviaactuator-server-1017152878',id=6,image_ref='b7d6e0af-25e4-4227-9dc6-43143898ceee',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='be4e2f9059cd48f5b44a612256e3fc7b',ramdisk_id='',reservation_id='r-lo5hd093',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member,manager,admin',image_base_image_ref='b7d6e0af-25e4-4227-9dc6-43143898ceee',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestExecuteActionsViaActuator-1347788182',owner_user_name='tempest-TestExecuteActionsViaA
ctuator-1347788182-project-admin'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-09T16:15:31Z,user_data=None,user_id='e5044998ddc3419bb14cc08417add581',uuid=70a451f7-5bde-42f1-a448-d48b3b24d9d6,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "a5c38ba6-3efb-4090-a42b-1dd250959041", "address": "fa:16:3e:c9:bd:65", "network": {"id": "7ed4244a-b510-4df6-9ffd-2f86603932fc", "bridge": "br-int", "label": "tempest-TestExecuteActionsViaActuator-1219246163-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "153d68ab33e64b958060574bb1741725", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa5c38ba6-3e", "ovs_interfaceid": "a5c38ba6-3efb-4090-a42b-1dd250959041", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.12/site-packages/nova/virt/libvirt/vif.py:574
Oct 09 16:15:36 compute-0 nova_compute[117331]: 2025-10-09 16:15:36.035 2 DEBUG nova.network.os_vif_util [None req-647b20c2-3f27-4bd8-aa1a-6180cdfd55a2 e5044998ddc3419bb14cc08417add581 be4e2f9059cd48f5b44a612256e3fc7b - - default default] Converting VIF {"id": "a5c38ba6-3efb-4090-a42b-1dd250959041", "address": "fa:16:3e:c9:bd:65", "network": {"id": "7ed4244a-b510-4df6-9ffd-2f86603932fc", "bridge": "br-int", "label": "tempest-TestExecuteActionsViaActuator-1219246163-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "153d68ab33e64b958060574bb1741725", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa5c38ba6-3e", "ovs_interfaceid": "a5c38ba6-3efb-4090-a42b-1dd250959041", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.12/site-packages/nova/network/os_vif_util.py:511
Oct 09 16:15:36 compute-0 nova_compute[117331]: 2025-10-09 16:15:36.036 2 DEBUG nova.network.os_vif_util [None req-647b20c2-3f27-4bd8-aa1a-6180cdfd55a2 e5044998ddc3419bb14cc08417add581 be4e2f9059cd48f5b44a612256e3fc7b - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:c9:bd:65,bridge_name='br-int',has_traffic_filtering=True,id=a5c38ba6-3efb-4090-a42b-1dd250959041,network=Network(7ed4244a-b510-4df6-9ffd-2f86603932fc),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapa5c38ba6-3e') nova_to_osvif_vif /usr/lib/python3.12/site-packages/nova/network/os_vif_util.py:548
Oct 09 16:15:36 compute-0 nova_compute[117331]: 2025-10-09 16:15:36.037 2 DEBUG nova.objects.instance [None req-647b20c2-3f27-4bd8-aa1a-6180cdfd55a2 e5044998ddc3419bb14cc08417add581 be4e2f9059cd48f5b44a612256e3fc7b - - default default] Lazy-loading 'pci_devices' on Instance uuid 70a451f7-5bde-42f1-a448-d48b3b24d9d6 obj_load_attr /usr/lib/python3.12/site-packages/nova/objects/instance.py:1141
Oct 09 16:15:36 compute-0 nova_compute[117331]: 2025-10-09 16:15:36.544 2 DEBUG nova.virt.libvirt.driver [None req-647b20c2-3f27-4bd8-aa1a-6180cdfd55a2 e5044998ddc3419bb14cc08417add581 be4e2f9059cd48f5b44a612256e3fc7b - - default default] [instance: 70a451f7-5bde-42f1-a448-d48b3b24d9d6] End _get_guest_xml xml=<domain type="kvm">
Oct 09 16:15:36 compute-0 nova_compute[117331]:   <uuid>70a451f7-5bde-42f1-a448-d48b3b24d9d6</uuid>
Oct 09 16:15:36 compute-0 nova_compute[117331]:   <name>instance-00000006</name>
Oct 09 16:15:36 compute-0 nova_compute[117331]:   <memory>131072</memory>
Oct 09 16:15:36 compute-0 nova_compute[117331]:   <vcpu>1</vcpu>
Oct 09 16:15:36 compute-0 nova_compute[117331]:   <metadata>
Oct 09 16:15:36 compute-0 nova_compute[117331]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Oct 09 16:15:36 compute-0 nova_compute[117331]:       <nova:package version="32.1.0-0.20251008143712.076498e.el10"/>
Oct 09 16:15:36 compute-0 nova_compute[117331]:       <nova:name>tempest-TestExecuteActionsViaActuator-server-1017152878</nova:name>
Oct 09 16:15:36 compute-0 nova_compute[117331]:       <nova:creationTime>2025-10-09 16:15:36</nova:creationTime>
Oct 09 16:15:36 compute-0 nova_compute[117331]:       <nova:flavor name="m1.nano" id="5aeb5bf9-70f3-4ab6-b418-2c3319a7d2b3">
Oct 09 16:15:36 compute-0 nova_compute[117331]:         <nova:memory>128</nova:memory>
Oct 09 16:15:36 compute-0 nova_compute[117331]:         <nova:disk>1</nova:disk>
Oct 09 16:15:36 compute-0 nova_compute[117331]:         <nova:swap>0</nova:swap>
Oct 09 16:15:36 compute-0 nova_compute[117331]:         <nova:ephemeral>0</nova:ephemeral>
Oct 09 16:15:36 compute-0 nova_compute[117331]:         <nova:vcpus>1</nova:vcpus>
Oct 09 16:15:36 compute-0 nova_compute[117331]:         <nova:extraSpecs>
Oct 09 16:15:36 compute-0 nova_compute[117331]:           <nova:extraSpec name="hw_rng:allowed">True</nova:extraSpec>
Oct 09 16:15:36 compute-0 nova_compute[117331]:         </nova:extraSpecs>
Oct 09 16:15:36 compute-0 nova_compute[117331]:       </nova:flavor>
Oct 09 16:15:36 compute-0 nova_compute[117331]:       <nova:image uuid="b7d6e0af-25e4-4227-9dc6-43143898ceee">
Oct 09 16:15:36 compute-0 nova_compute[117331]:         <nova:containerFormat>bare</nova:containerFormat>
Oct 09 16:15:36 compute-0 nova_compute[117331]:         <nova:diskFormat>qcow2</nova:diskFormat>
Oct 09 16:15:36 compute-0 nova_compute[117331]:         <nova:minDisk>1</nova:minDisk>
Oct 09 16:15:36 compute-0 nova_compute[117331]:         <nova:minRam>0</nova:minRam>
Oct 09 16:15:36 compute-0 nova_compute[117331]:         <nova:properties>
Oct 09 16:15:36 compute-0 nova_compute[117331]:           <nova:property name="hw_rng_model">virtio</nova:property>
Oct 09 16:15:36 compute-0 nova_compute[117331]:         </nova:properties>
Oct 09 16:15:36 compute-0 nova_compute[117331]:       </nova:image>
Oct 09 16:15:36 compute-0 nova_compute[117331]:       <nova:owner>
Oct 09 16:15:36 compute-0 nova_compute[117331]:         <nova:user uuid="e5044998ddc3419bb14cc08417add581">tempest-TestExecuteActionsViaActuator-1347788182-project-admin</nova:user>
Oct 09 16:15:36 compute-0 nova_compute[117331]:         <nova:project uuid="be4e2f9059cd48f5b44a612256e3fc7b">tempest-TestExecuteActionsViaActuator-1347788182</nova:project>
Oct 09 16:15:36 compute-0 nova_compute[117331]:       </nova:owner>
Oct 09 16:15:36 compute-0 nova_compute[117331]:       <nova:root type="image" uuid="b7d6e0af-25e4-4227-9dc6-43143898ceee"/>
Oct 09 16:15:36 compute-0 nova_compute[117331]:       <nova:ports>
Oct 09 16:15:36 compute-0 nova_compute[117331]:         <nova:port uuid="a5c38ba6-3efb-4090-a42b-1dd250959041">
Oct 09 16:15:36 compute-0 nova_compute[117331]:           <nova:ip type="fixed" address="10.100.0.6" ipVersion="4"/>
Oct 09 16:15:36 compute-0 nova_compute[117331]:         </nova:port>
Oct 09 16:15:36 compute-0 nova_compute[117331]:       </nova:ports>
Oct 09 16:15:36 compute-0 nova_compute[117331]:     </nova:instance>
Oct 09 16:15:36 compute-0 nova_compute[117331]:   </metadata>
Oct 09 16:15:36 compute-0 nova_compute[117331]:   <sysinfo type="smbios">
Oct 09 16:15:36 compute-0 nova_compute[117331]:     <system>
Oct 09 16:15:36 compute-0 nova_compute[117331]:       <entry name="manufacturer">RDO</entry>
Oct 09 16:15:36 compute-0 nova_compute[117331]:       <entry name="product">OpenStack Compute</entry>
Oct 09 16:15:36 compute-0 nova_compute[117331]:       <entry name="version">32.1.0-0.20251008143712.076498e.el10</entry>
Oct 09 16:15:36 compute-0 nova_compute[117331]:       <entry name="serial">70a451f7-5bde-42f1-a448-d48b3b24d9d6</entry>
Oct 09 16:15:36 compute-0 nova_compute[117331]:       <entry name="uuid">70a451f7-5bde-42f1-a448-d48b3b24d9d6</entry>
Oct 09 16:15:36 compute-0 nova_compute[117331]:       <entry name="family">Virtual Machine</entry>
Oct 09 16:15:36 compute-0 nova_compute[117331]:     </system>
Oct 09 16:15:36 compute-0 nova_compute[117331]:   </sysinfo>
Oct 09 16:15:36 compute-0 nova_compute[117331]:   <os>
Oct 09 16:15:36 compute-0 nova_compute[117331]:     <type arch="x86_64" machine="q35">hvm</type>
Oct 09 16:15:36 compute-0 nova_compute[117331]:     <boot dev="hd"/>
Oct 09 16:15:36 compute-0 nova_compute[117331]:     <smbios mode="sysinfo"/>
Oct 09 16:15:36 compute-0 nova_compute[117331]:   </os>
Oct 09 16:15:36 compute-0 nova_compute[117331]:   <features>
Oct 09 16:15:36 compute-0 nova_compute[117331]:     <acpi/>
Oct 09 16:15:36 compute-0 nova_compute[117331]:     <apic/>
Oct 09 16:15:36 compute-0 nova_compute[117331]:     <vmcoreinfo/>
Oct 09 16:15:36 compute-0 nova_compute[117331]:   </features>
Oct 09 16:15:36 compute-0 nova_compute[117331]:   <clock offset="utc">
Oct 09 16:15:36 compute-0 nova_compute[117331]:     <timer name="pit" tickpolicy="delay"/>
Oct 09 16:15:36 compute-0 nova_compute[117331]:     <timer name="rtc" tickpolicy="catchup"/>
Oct 09 16:15:36 compute-0 nova_compute[117331]:     <timer name="hpet" present="no"/>
Oct 09 16:15:36 compute-0 nova_compute[117331]:   </clock>
Oct 09 16:15:36 compute-0 nova_compute[117331]:   <cpu mode="host-model" match="exact">
Oct 09 16:15:36 compute-0 nova_compute[117331]:     <topology sockets="1" cores="1" threads="1"/>
Oct 09 16:15:36 compute-0 nova_compute[117331]:   </cpu>
Oct 09 16:15:36 compute-0 nova_compute[117331]:   <devices>
Oct 09 16:15:36 compute-0 nova_compute[117331]:     <disk type="file" device="disk">
Oct 09 16:15:36 compute-0 nova_compute[117331]:       <driver name="qemu" type="qcow2" cache="none"/>
Oct 09 16:15:36 compute-0 nova_compute[117331]:       <source file="/var/lib/nova/instances/70a451f7-5bde-42f1-a448-d48b3b24d9d6/disk"/>
Oct 09 16:15:36 compute-0 nova_compute[117331]:       <target dev="vda" bus="virtio"/>
Oct 09 16:15:36 compute-0 nova_compute[117331]:     </disk>
Oct 09 16:15:36 compute-0 nova_compute[117331]:     <disk type="file" device="cdrom">
Oct 09 16:15:36 compute-0 nova_compute[117331]:       <driver name="qemu" type="raw" cache="none"/>
Oct 09 16:15:36 compute-0 nova_compute[117331]:       <source file="/var/lib/nova/instances/70a451f7-5bde-42f1-a448-d48b3b24d9d6/disk.config"/>
Oct 09 16:15:36 compute-0 nova_compute[117331]:       <target dev="sda" bus="sata"/>
Oct 09 16:15:36 compute-0 nova_compute[117331]:     </disk>
Oct 09 16:15:36 compute-0 nova_compute[117331]:     <interface type="ethernet">
Oct 09 16:15:36 compute-0 nova_compute[117331]:       <mac address="fa:16:3e:c9:bd:65"/>
Oct 09 16:15:36 compute-0 nova_compute[117331]:       <model type="virtio"/>
Oct 09 16:15:36 compute-0 nova_compute[117331]:       <driver name="vhost" rx_queue_size="512"/>
Oct 09 16:15:36 compute-0 nova_compute[117331]:       <mtu size="1442"/>
Oct 09 16:15:36 compute-0 nova_compute[117331]:       <target dev="tapa5c38ba6-3e"/>
Oct 09 16:15:36 compute-0 nova_compute[117331]:     </interface>
Oct 09 16:15:36 compute-0 nova_compute[117331]:     <serial type="pty">
Oct 09 16:15:36 compute-0 nova_compute[117331]:       <log file="/var/lib/nova/instances/70a451f7-5bde-42f1-a448-d48b3b24d9d6/console.log" append="off"/>
Oct 09 16:15:36 compute-0 nova_compute[117331]:     </serial>
Oct 09 16:15:36 compute-0 nova_compute[117331]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Oct 09 16:15:36 compute-0 nova_compute[117331]:     <video>
Oct 09 16:15:36 compute-0 nova_compute[117331]:       <model type="virtio"/>
Oct 09 16:15:36 compute-0 nova_compute[117331]:     </video>
Oct 09 16:15:36 compute-0 nova_compute[117331]:     <input type="tablet" bus="usb"/>
Oct 09 16:15:36 compute-0 nova_compute[117331]:     <rng model="virtio">
Oct 09 16:15:36 compute-0 nova_compute[117331]:       <backend model="random">/dev/urandom</backend>
Oct 09 16:15:36 compute-0 nova_compute[117331]:     </rng>
Oct 09 16:15:36 compute-0 nova_compute[117331]:     <controller type="pci" model="pcie-root"/>
Oct 09 16:15:36 compute-0 nova_compute[117331]:     <controller type="pci" model="pcie-root-port"/>
Oct 09 16:15:36 compute-0 nova_compute[117331]:     <controller type="pci" model="pcie-root-port"/>
Oct 09 16:15:36 compute-0 nova_compute[117331]:     <controller type="pci" model="pcie-root-port"/>
Oct 09 16:15:36 compute-0 nova_compute[117331]:     <controller type="pci" model="pcie-root-port"/>
Oct 09 16:15:36 compute-0 nova_compute[117331]:     <controller type="pci" model="pcie-root-port"/>
Oct 09 16:15:36 compute-0 nova_compute[117331]:     <controller type="pci" model="pcie-root-port"/>
Oct 09 16:15:36 compute-0 nova_compute[117331]:     <controller type="pci" model="pcie-root-port"/>
Oct 09 16:15:36 compute-0 nova_compute[117331]:     <controller type="pci" model="pcie-root-port"/>
Oct 09 16:15:36 compute-0 nova_compute[117331]:     <controller type="pci" model="pcie-root-port"/>
Oct 09 16:15:36 compute-0 nova_compute[117331]:     <controller type="pci" model="pcie-root-port"/>
Oct 09 16:15:36 compute-0 nova_compute[117331]:     <controller type="pci" model="pcie-root-port"/>
Oct 09 16:15:36 compute-0 nova_compute[117331]:     <controller type="pci" model="pcie-root-port"/>
Oct 09 16:15:36 compute-0 nova_compute[117331]:     <controller type="pci" model="pcie-root-port"/>
Oct 09 16:15:36 compute-0 nova_compute[117331]:     <controller type="pci" model="pcie-root-port"/>
Oct 09 16:15:36 compute-0 nova_compute[117331]:     <controller type="pci" model="pcie-root-port"/>
Oct 09 16:15:36 compute-0 nova_compute[117331]:     <controller type="pci" model="pcie-root-port"/>
Oct 09 16:15:36 compute-0 nova_compute[117331]:     <controller type="pci" model="pcie-root-port"/>
Oct 09 16:15:36 compute-0 nova_compute[117331]:     <controller type="pci" model="pcie-root-port"/>
Oct 09 16:15:36 compute-0 nova_compute[117331]:     <controller type="pci" model="pcie-root-port"/>
Oct 09 16:15:36 compute-0 nova_compute[117331]:     <controller type="pci" model="pcie-root-port"/>
Oct 09 16:15:36 compute-0 nova_compute[117331]:     <controller type="pci" model="pcie-root-port"/>
Oct 09 16:15:36 compute-0 nova_compute[117331]:     <controller type="pci" model="pcie-root-port"/>
Oct 09 16:15:36 compute-0 nova_compute[117331]:     <controller type="pci" model="pcie-root-port"/>
Oct 09 16:15:36 compute-0 nova_compute[117331]:     <controller type="pci" model="pcie-root-port"/>
Oct 09 16:15:36 compute-0 nova_compute[117331]:     <controller type="usb" index="0"/>
Oct 09 16:15:36 compute-0 nova_compute[117331]:     <memballoon model="virtio" autodeflate="on" freePageReporting="on">
Oct 09 16:15:36 compute-0 nova_compute[117331]:       <stats period="10"/>
Oct 09 16:15:36 compute-0 nova_compute[117331]:     </memballoon>
Oct 09 16:15:36 compute-0 nova_compute[117331]:   </devices>
Oct 09 16:15:36 compute-0 nova_compute[117331]: </domain>
Oct 09 16:15:36 compute-0 nova_compute[117331]:  _get_guest_xml /usr/lib/python3.12/site-packages/nova/virt/libvirt/driver.py:8052
Oct 09 16:15:36 compute-0 nova_compute[117331]: 2025-10-09 16:15:36.546 2 DEBUG nova.compute.manager [None req-647b20c2-3f27-4bd8-aa1a-6180cdfd55a2 e5044998ddc3419bb14cc08417add581 be4e2f9059cd48f5b44a612256e3fc7b - - default default] [instance: 70a451f7-5bde-42f1-a448-d48b3b24d9d6] Preparing to wait for external event network-vif-plugged-a5c38ba6-3efb-4090-a42b-1dd250959041 prepare_for_instance_event /usr/lib/python3.12/site-packages/nova/compute/manager.py:307
Oct 09 16:15:36 compute-0 nova_compute[117331]: 2025-10-09 16:15:36.546 2 DEBUG oslo_concurrency.lockutils [None req-647b20c2-3f27-4bd8-aa1a-6180cdfd55a2 e5044998ddc3419bb14cc08417add581 be4e2f9059cd48f5b44a612256e3fc7b - - default default] Acquiring lock "70a451f7-5bde-42f1-a448-d48b3b24d9d6-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:405
Oct 09 16:15:36 compute-0 nova_compute[117331]: 2025-10-09 16:15:36.546 2 DEBUG oslo_concurrency.lockutils [None req-647b20c2-3f27-4bd8-aa1a-6180cdfd55a2 e5044998ddc3419bb14cc08417add581 be4e2f9059cd48f5b44a612256e3fc7b - - default default] Lock "70a451f7-5bde-42f1-a448-d48b3b24d9d6-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:410
Oct 09 16:15:36 compute-0 nova_compute[117331]: 2025-10-09 16:15:36.547 2 DEBUG oslo_concurrency.lockutils [None req-647b20c2-3f27-4bd8-aa1a-6180cdfd55a2 e5044998ddc3419bb14cc08417add581 be4e2f9059cd48f5b44a612256e3fc7b - - default default] Lock "70a451f7-5bde-42f1-a448-d48b3b24d9d6-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:424
Oct 09 16:15:36 compute-0 nova_compute[117331]: 2025-10-09 16:15:36.547 2 DEBUG nova.virt.libvirt.vif [None req-647b20c2-3f27-4bd8-aa1a-6180cdfd55a2 e5044998ddc3419bb14cc08417add581 be4e2f9059cd48f5b44a612256e3fc7b - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,compute_id=1,config_drive='',created_at=2025-10-09T16:15:25Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description=None,display_name='tempest-TestExecuteActionsViaActuator-server-1017152878',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testexecuteactionsviaactuator-server-1017152878',id=6,image_ref='b7d6e0af-25e4-4227-9dc6-43143898ceee',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='be4e2f9059cd48f5b44a612256e3fc7b',ramdisk_id='',reservation_id='r-lo5hd093',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member,manager,admin',image_base_image_ref='b7d6e0af-25e4-4227-9dc6-43143898ceee',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestExecuteActionsViaActuator-1347788182',owner_user_name='tempest-TestExecuteActionsViaActuator-1347788182-project-admin'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-09T16:15:31Z,user_data=None,user_id='e5044998ddc3419bb14cc08417add581',uuid=70a451f7-5bde-42f1-a448-d48b3b24d9d6,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "a5c38ba6-3efb-4090-a42b-1dd250959041", "address": "fa:16:3e:c9:bd:65", "network": {"id": "7ed4244a-b510-4df6-9ffd-2f86603932fc", "bridge": "br-int", "label": "tempest-TestExecuteActionsViaActuator-1219246163-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "153d68ab33e64b958060574bb1741725", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa5c38ba6-3e", "ovs_interfaceid": "a5c38ba6-3efb-4090-a42b-1dd250959041", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.12/site-packages/nova/virt/libvirt/vif.py:721
Oct 09 16:15:36 compute-0 nova_compute[117331]: 2025-10-09 16:15:36.548 2 DEBUG nova.network.os_vif_util [None req-647b20c2-3f27-4bd8-aa1a-6180cdfd55a2 e5044998ddc3419bb14cc08417add581 be4e2f9059cd48f5b44a612256e3fc7b - - default default] Converting VIF {"id": "a5c38ba6-3efb-4090-a42b-1dd250959041", "address": "fa:16:3e:c9:bd:65", "network": {"id": "7ed4244a-b510-4df6-9ffd-2f86603932fc", "bridge": "br-int", "label": "tempest-TestExecuteActionsViaActuator-1219246163-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "153d68ab33e64b958060574bb1741725", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa5c38ba6-3e", "ovs_interfaceid": "a5c38ba6-3efb-4090-a42b-1dd250959041", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.12/site-packages/nova/network/os_vif_util.py:511
Oct 09 16:15:36 compute-0 nova_compute[117331]: 2025-10-09 16:15:36.548 2 DEBUG nova.network.os_vif_util [None req-647b20c2-3f27-4bd8-aa1a-6180cdfd55a2 e5044998ddc3419bb14cc08417add581 be4e2f9059cd48f5b44a612256e3fc7b - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:c9:bd:65,bridge_name='br-int',has_traffic_filtering=True,id=a5c38ba6-3efb-4090-a42b-1dd250959041,network=Network(7ed4244a-b510-4df6-9ffd-2f86603932fc),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapa5c38ba6-3e') nova_to_osvif_vif /usr/lib/python3.12/site-packages/nova/network/os_vif_util.py:548
Oct 09 16:15:36 compute-0 nova_compute[117331]: 2025-10-09 16:15:36.549 2 DEBUG os_vif [None req-647b20c2-3f27-4bd8-aa1a-6180cdfd55a2 e5044998ddc3419bb14cc08417add581 be4e2f9059cd48f5b44a612256e3fc7b - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:c9:bd:65,bridge_name='br-int',has_traffic_filtering=True,id=a5c38ba6-3efb-4090-a42b-1dd250959041,network=Network(7ed4244a-b510-4df6-9ffd-2f86603932fc),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapa5c38ba6-3e') plug /usr/lib/python3.12/site-packages/os_vif/__init__.py:76
Oct 09 16:15:36 compute-0 nova_compute[117331]: 2025-10-09 16:15:36.549 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 22 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 09 16:15:36 compute-0 nova_compute[117331]: 2025-10-09 16:15:36.549 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.12/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 09 16:15:36 compute-0 nova_compute[117331]: 2025-10-09 16:15:36.550 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.12/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 09 16:15:36 compute-0 nova_compute[117331]: 2025-10-09 16:15:36.550 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 22 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 09 16:15:36 compute-0 nova_compute[117331]: 2025-10-09 16:15:36.550 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbCreateCommand(_result=None, table=QoS, columns={'type': 'linux-noop', 'external_ids': {'id': 'a1adfd76-d502-53ad-b5e3-b79ad5e19c92', '_type': 'linux-noop'}}, row=False) do_commit /usr/lib/python3.12/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 09 16:15:36 compute-0 nova_compute[117331]: 2025-10-09 16:15:36.552 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 09 16:15:36 compute-0 nova_compute[117331]: 2025-10-09 16:15:36.553 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:248
Oct 09 16:15:36 compute-0 nova_compute[117331]: 2025-10-09 16:15:36.553 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 09 16:15:36 compute-0 nova_compute[117331]: 2025-10-09 16:15:36.556 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 22 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 09 16:15:36 compute-0 nova_compute[117331]: 2025-10-09 16:15:36.556 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapa5c38ba6-3e, may_exist=True, interface_attrs={}) do_commit /usr/lib/python3.12/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 09 16:15:36 compute-0 nova_compute[117331]: 2025-10-09 16:15:36.557 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Port, record=tapa5c38ba6-3e, col_values=(('qos', UUID('c685cc5c-8281-41c1-a5f9-dfe9d560c7a2')),), if_exists=True) do_commit /usr/lib/python3.12/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 09 16:15:36 compute-0 nova_compute[117331]: 2025-10-09 16:15:36.557 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=2): DbSetCommand(_result=None, table=Interface, record=tapa5c38ba6-3e, col_values=(('external_ids', {'iface-id': 'a5c38ba6-3efb-4090-a42b-1dd250959041', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:c9:bd:65', 'vm-uuid': '70a451f7-5bde-42f1-a448-d48b3b24d9d6'}),), if_exists=True) do_commit /usr/lib/python3.12/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 09 16:15:36 compute-0 NetworkManager[1028]: <info>  [1760026536.5607] manager: (tapa5c38ba6-3e): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/35)
Oct 09 16:15:36 compute-0 nova_compute[117331]: 2025-10-09 16:15:36.558 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 09 16:15:36 compute-0 nova_compute[117331]: 2025-10-09 16:15:36.562 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:248
Oct 09 16:15:36 compute-0 nova_compute[117331]: 2025-10-09 16:15:36.571 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 09 16:15:36 compute-0 nova_compute[117331]: 2025-10-09 16:15:36.572 2 INFO os_vif [None req-647b20c2-3f27-4bd8-aa1a-6180cdfd55a2 e5044998ddc3419bb14cc08417add581 be4e2f9059cd48f5b44a612256e3fc7b - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:c9:bd:65,bridge_name='br-int',has_traffic_filtering=True,id=a5c38ba6-3efb-4090-a42b-1dd250959041,network=Network(7ed4244a-b510-4df6-9ffd-2f86603932fc),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapa5c38ba6-3e')
Oct 09 16:15:37 compute-0 sshd-session[143108]: Failed password for root from 134.199.199.215 port 57626 ssh2
Oct 09 16:15:37 compute-0 podman[143114]: 2025-10-09 16:15:37.839357677 +0000 UTC m=+0.069492648 container health_status 64fa251112d01abb34ec1bb8e22019815ddcaaaa391ce5ec91517147ac5a9994 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, name=ubi9-minimal, release=1755695350, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., architecture=x86_64, com.redhat.component=ubi9-minimal-container, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, url=https://catalog.redhat.com/en/search?searchType=containers, maintainer=Red Hat, Inc., description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, io.buildah.version=1.33.7, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, version=9.6, vendor=Red Hat, Inc., vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, build-date=2025-08-20T13:12:41, config_id=edpm, container_name=openstack_network_exporter, vcs-type=git, managed_by=edpm_ansible, io.openshift.tags=minimal rhel9, distribution-scope=public, io.openshift.expose-services=)
Oct 09 16:15:38 compute-0 nova_compute[117331]: 2025-10-09 16:15:38.113 2 DEBUG nova.virt.libvirt.driver [None req-647b20c2-3f27-4bd8-aa1a-6180cdfd55a2 e5044998ddc3419bb14cc08417add581 be4e2f9059cd48f5b44a612256e3fc7b - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.12/site-packages/nova/virt/libvirt/driver.py:13022
Oct 09 16:15:38 compute-0 nova_compute[117331]: 2025-10-09 16:15:38.113 2 DEBUG nova.virt.libvirt.driver [None req-647b20c2-3f27-4bd8-aa1a-6180cdfd55a2 e5044998ddc3419bb14cc08417add581 be4e2f9059cd48f5b44a612256e3fc7b - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.12/site-packages/nova/virt/libvirt/driver.py:13022
Oct 09 16:15:38 compute-0 nova_compute[117331]: 2025-10-09 16:15:38.114 2 DEBUG nova.virt.libvirt.driver [None req-647b20c2-3f27-4bd8-aa1a-6180cdfd55a2 e5044998ddc3419bb14cc08417add581 be4e2f9059cd48f5b44a612256e3fc7b - - default default] No VIF found with MAC fa:16:3e:c9:bd:65, not building metadata _build_interface_metadata /usr/lib/python3.12/site-packages/nova/virt/libvirt/driver.py:12998
Oct 09 16:15:38 compute-0 nova_compute[117331]: 2025-10-09 16:15:38.114 2 INFO nova.virt.libvirt.driver [None req-647b20c2-3f27-4bd8-aa1a-6180cdfd55a2 e5044998ddc3419bb14cc08417add581 be4e2f9059cd48f5b44a612256e3fc7b - - default default] [instance: 70a451f7-5bde-42f1-a448-d48b3b24d9d6] Using config drive
Oct 09 16:15:38 compute-0 sshd-session[143135]: Invalid user myuser from 134.199.199.215 port 45816
Oct 09 16:15:38 compute-0 sshd-session[143135]: pam_unix(sshd:auth): check pass; user unknown
Oct 09 16:15:38 compute-0 sshd-session[143135]: pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=134.199.199.215
Oct 09 16:15:38 compute-0 nova_compute[117331]: 2025-10-09 16:15:38.624 2 WARNING neutronclient.v2_0.client [None req-647b20c2-3f27-4bd8-aa1a-6180cdfd55a2 e5044998ddc3419bb14cc08417add581 be4e2f9059cd48f5b44a612256e3fc7b - - default default] The python binding code in neutronclient is deprecated in favor of OpenstackSDK, please use that as this will be removed in a future release.
Oct 09 16:15:38 compute-0 nova_compute[117331]: 2025-10-09 16:15:38.811 2 INFO nova.virt.libvirt.driver [None req-647b20c2-3f27-4bd8-aa1a-6180cdfd55a2 e5044998ddc3419bb14cc08417add581 be4e2f9059cd48f5b44a612256e3fc7b - - default default] [instance: 70a451f7-5bde-42f1-a448-d48b3b24d9d6] Creating config drive at /var/lib/nova/instances/70a451f7-5bde-42f1-a448-d48b3b24d9d6/disk.config
Oct 09 16:15:38 compute-0 nova_compute[117331]: 2025-10-09 16:15:38.816 2 DEBUG oslo_concurrency.processutils [None req-647b20c2-3f27-4bd8-aa1a-6180cdfd55a2 e5044998ddc3419bb14cc08417add581 be4e2f9059cd48f5b44a612256e3fc7b - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/70a451f7-5bde-42f1-a448-d48b3b24d9d6/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 32.1.0-0.20251008143712.076498e.el10 -quiet -J -r -V config-2 /tmp/tmp_pllj9mt execute /usr/lib/python3.12/site-packages/oslo_concurrency/processutils.py:349
Oct 09 16:15:38 compute-0 nova_compute[117331]: 2025-10-09 16:15:38.948 2 DEBUG oslo_concurrency.processutils [None req-647b20c2-3f27-4bd8-aa1a-6180cdfd55a2 e5044998ddc3419bb14cc08417add581 be4e2f9059cd48f5b44a612256e3fc7b - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/70a451f7-5bde-42f1-a448-d48b3b24d9d6/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 32.1.0-0.20251008143712.076498e.el10 -quiet -J -r -V config-2 /tmp/tmp_pllj9mt" returned: 0 in 0.132s execute /usr/lib/python3.12/site-packages/oslo_concurrency/processutils.py:372
Oct 09 16:15:39 compute-0 kernel: tapa5c38ba6-3e: entered promiscuous mode
Oct 09 16:15:39 compute-0 ovn_controller[19752]: 2025-10-09T16:15:39Z|00075|binding|INFO|Claiming lport a5c38ba6-3efb-4090-a42b-1dd250959041 for this chassis.
Oct 09 16:15:39 compute-0 ovn_controller[19752]: 2025-10-09T16:15:39Z|00076|binding|INFO|a5c38ba6-3efb-4090-a42b-1dd250959041: Claiming fa:16:3e:c9:bd:65 10.100.0.6
Oct 09 16:15:39 compute-0 nova_compute[117331]: 2025-10-09 16:15:39.012 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 09 16:15:39 compute-0 NetworkManager[1028]: <info>  [1760026539.0142] manager: (tapa5c38ba6-3e): new Tun device (/org/freedesktop/NetworkManager/Devices/36)
Oct 09 16:15:39 compute-0 ovn_metadata_agent[28608]: 2025-10-09 16:15:39.018 28613 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:c9:bd:65 10.100.0.6'], port_security=['fa:16:3e:c9:bd:65 10.100.0.6'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.6/28', 'neutron:device_id': '70a451f7-5bde-42f1-a448-d48b3b24d9d6', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-7ed4244a-b510-4df6-9ffd-2f86603932fc', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'be4e2f9059cd48f5b44a612256e3fc7b', 'neutron:revision_number': '4', 'neutron:security_group_ids': '92d94554-4438-4b2d-9e48-eb178205b5a1', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=8754009b-0d8f-4d41-af4c-a82e7b57a4f6, chassis=[<ovs.db.idl.Row object at 0x7f37b2576840>], tunnel_key=5, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f37b2576840>], logical_port=a5c38ba6-3efb-4090-a42b-1dd250959041) old=Port_Binding(chassis=[]) matches /usr/lib/python3.12/site-packages/ovsdbapp/backend/ovs_idl/event.py:55
Oct 09 16:15:39 compute-0 ovn_metadata_agent[28608]: 2025-10-09 16:15:39.019 28613 INFO neutron.agent.ovn.metadata.agent [-] Port a5c38ba6-3efb-4090-a42b-1dd250959041 in datapath 7ed4244a-b510-4df6-9ffd-2f86603932fc bound to our chassis
Oct 09 16:15:39 compute-0 ovn_metadata_agent[28608]: 2025-10-09 16:15:39.020 28613 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 7ed4244a-b510-4df6-9ffd-2f86603932fc
Oct 09 16:15:39 compute-0 ovn_controller[19752]: 2025-10-09T16:15:39Z|00077|binding|INFO|Setting lport a5c38ba6-3efb-4090-a42b-1dd250959041 ovn-installed in OVS
Oct 09 16:15:39 compute-0 ovn_controller[19752]: 2025-10-09T16:15:39Z|00078|binding|INFO|Setting lport a5c38ba6-3efb-4090-a42b-1dd250959041 up in Southbound
Oct 09 16:15:39 compute-0 nova_compute[117331]: 2025-10-09 16:15:39.026 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 09 16:15:39 compute-0 nova_compute[117331]: 2025-10-09 16:15:39.029 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 09 16:15:39 compute-0 ovn_metadata_agent[28608]: 2025-10-09 16:15:39.042 139687 DEBUG oslo.privsep.daemon [-] privsep: reply[be9eb279-2e60-42ca-b4ba-31cc5f0209e5]: (4, False) _call_back /usr/lib/python3.12/site-packages/oslo_privsep/daemon.py:515
Oct 09 16:15:39 compute-0 ovn_metadata_agent[28608]: 2025-10-09 16:15:39.043 28613 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap7ed4244a-b1 in ovnmeta-7ed4244a-b510-4df6-9ffd-2f86603932fc namespace provision_datapath /usr/lib/python3.12/site-packages/neutron/agent/ovn/metadata/agent.py:794
Oct 09 16:15:39 compute-0 ovn_metadata_agent[28608]: 2025-10-09 16:15:39.045 139687 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap7ed4244a-b0 not found in namespace None get_link_id /usr/lib/python3.12/site-packages/neutron/privileged/agent/linux/ip_lib.py:203
Oct 09 16:15:39 compute-0 ovn_metadata_agent[28608]: 2025-10-09 16:15:39.046 139687 DEBUG oslo.privsep.daemon [-] privsep: reply[ff4f91ee-5cf0-4c3d-b69e-f109a0f248b0]: (4, False) _call_back /usr/lib/python3.12/site-packages/oslo_privsep/daemon.py:515
Oct 09 16:15:39 compute-0 ovn_metadata_agent[28608]: 2025-10-09 16:15:39.046 139687 DEBUG oslo.privsep.daemon [-] privsep: reply[0fd65a56-3168-4621-bc36-a3cd76260c99]: (4, False) _call_back /usr/lib/python3.12/site-packages/oslo_privsep/daemon.py:515
Oct 09 16:15:39 compute-0 systemd-machined[77487]: New machine qemu-4-instance-00000006.
Oct 09 16:15:39 compute-0 systemd-udevd[143156]: Network interface NamePolicy= disabled on kernel command line.
Oct 09 16:15:39 compute-0 NetworkManager[1028]: <info>  [1760026539.0618] device (tapa5c38ba6-3e): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Oct 09 16:15:39 compute-0 NetworkManager[1028]: <info>  [1760026539.0627] device (tapa5c38ba6-3e): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Oct 09 16:15:39 compute-0 ovn_metadata_agent[28608]: 2025-10-09 16:15:39.065 28727 DEBUG oslo.privsep.daemon [-] privsep: reply[4b8917a7-456b-47f6-b91a-695eaad481fc]: (4, None) _call_back /usr/lib/python3.12/site-packages/oslo_privsep/daemon.py:515
Oct 09 16:15:39 compute-0 systemd[1]: Started Virtual Machine qemu-4-instance-00000006.
Oct 09 16:15:39 compute-0 ovn_metadata_agent[28608]: 2025-10-09 16:15:39.084 139687 DEBUG oslo.privsep.daemon [-] privsep: reply[61f8f197-609d-4165-98cc-f7cbeb925adb]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.12/site-packages/oslo_privsep/daemon.py:515
Oct 09 16:15:39 compute-0 ovn_metadata_agent[28608]: 2025-10-09 16:15:39.115 141048 DEBUG oslo.privsep.daemon [-] privsep: reply[4fd342c0-f57c-437b-8e9e-08dbb210d561]: (4, None) _call_back /usr/lib/python3.12/site-packages/oslo_privsep/daemon.py:515
Oct 09 16:15:39 compute-0 ovn_metadata_agent[28608]: 2025-10-09 16:15:39.120 139687 DEBUG oslo.privsep.daemon [-] privsep: reply[61e311c6-9904-493e-9b50-4505ee9d6d5a]: (4, None) _call_back /usr/lib/python3.12/site-packages/oslo_privsep/daemon.py:515
Oct 09 16:15:39 compute-0 NetworkManager[1028]: <info>  [1760026539.1216] manager: (tap7ed4244a-b0): new Veth device (/org/freedesktop/NetworkManager/Devices/37)
Oct 09 16:15:39 compute-0 systemd-udevd[143159]: Network interface NamePolicy= disabled on kernel command line.
Oct 09 16:15:39 compute-0 ovn_metadata_agent[28608]: 2025-10-09 16:15:39.157 141048 DEBUG oslo.privsep.daemon [-] privsep: reply[7d960dd2-3998-453e-96fd-5e97f3796c39]: (4, None) _call_back /usr/lib/python3.12/site-packages/oslo_privsep/daemon.py:515
Oct 09 16:15:39 compute-0 nova_compute[117331]: 2025-10-09 16:15:39.159 2 DEBUG nova.compute.manager [req-71c010ec-f3c3-4398-b919-e63f32ecc787 req-bd8b4b42-4e68-464b-b4ab-cc9cea17cd1f ee15961ad2be4bada6c106ac4f69f422 c076c42271004e7aa86e84416e33f826 - - default default] [instance: 70a451f7-5bde-42f1-a448-d48b3b24d9d6] Received event network-vif-plugged-a5c38ba6-3efb-4090-a42b-1dd250959041 external_instance_event /usr/lib/python3.12/site-packages/nova/compute/manager.py:11812
Oct 09 16:15:39 compute-0 nova_compute[117331]: 2025-10-09 16:15:39.159 2 DEBUG oslo_concurrency.lockutils [req-71c010ec-f3c3-4398-b919-e63f32ecc787 req-bd8b4b42-4e68-464b-b4ab-cc9cea17cd1f ee15961ad2be4bada6c106ac4f69f422 c076c42271004e7aa86e84416e33f826 - - default default] Acquiring lock "70a451f7-5bde-42f1-a448-d48b3b24d9d6-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:405
Oct 09 16:15:39 compute-0 nova_compute[117331]: 2025-10-09 16:15:39.159 2 DEBUG oslo_concurrency.lockutils [req-71c010ec-f3c3-4398-b919-e63f32ecc787 req-bd8b4b42-4e68-464b-b4ab-cc9cea17cd1f ee15961ad2be4bada6c106ac4f69f422 c076c42271004e7aa86e84416e33f826 - - default default] Lock "70a451f7-5bde-42f1-a448-d48b3b24d9d6-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:410
Oct 09 16:15:39 compute-0 nova_compute[117331]: 2025-10-09 16:15:39.160 2 DEBUG oslo_concurrency.lockutils [req-71c010ec-f3c3-4398-b919-e63f32ecc787 req-bd8b4b42-4e68-464b-b4ab-cc9cea17cd1f ee15961ad2be4bada6c106ac4f69f422 c076c42271004e7aa86e84416e33f826 - - default default] Lock "70a451f7-5bde-42f1-a448-d48b3b24d9d6-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:424
Oct 09 16:15:39 compute-0 ovn_metadata_agent[28608]: 2025-10-09 16:15:39.159 141048 DEBUG oslo.privsep.daemon [-] privsep: reply[b5132d5a-e8ff-490e-bee6-399a196a2cb1]: (4, None) _call_back /usr/lib/python3.12/site-packages/oslo_privsep/daemon.py:515
Oct 09 16:15:39 compute-0 nova_compute[117331]: 2025-10-09 16:15:39.160 2 DEBUG nova.compute.manager [req-71c010ec-f3c3-4398-b919-e63f32ecc787 req-bd8b4b42-4e68-464b-b4ab-cc9cea17cd1f ee15961ad2be4bada6c106ac4f69f422 c076c42271004e7aa86e84416e33f826 - - default default] [instance: 70a451f7-5bde-42f1-a448-d48b3b24d9d6] Processing event network-vif-plugged-a5c38ba6-3efb-4090-a42b-1dd250959041 _process_instance_event /usr/lib/python3.12/site-packages/nova/compute/manager.py:11572
Oct 09 16:15:39 compute-0 NetworkManager[1028]: <info>  [1760026539.1829] device (tap7ed4244a-b0): carrier: link connected
Oct 09 16:15:39 compute-0 ovn_metadata_agent[28608]: 2025-10-09 16:15:39.189 141048 DEBUG oslo.privsep.daemon [-] privsep: reply[5b30b35d-0056-4c42-ad06-259113a412c5]: (4, None) _call_back /usr/lib/python3.12/site-packages/oslo_privsep/daemon.py:515
Oct 09 16:15:39 compute-0 ovn_metadata_agent[28608]: 2025-10-09 16:15:39.206 139687 DEBUG oslo.privsep.daemon [-] privsep: reply[e96592ed-df70-4a61-af46-9d298dcce21f]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap7ed4244a-b1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:11:ef:1e'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 27], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 148278, 'reachable_time': 15142, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 143188, 'error': None, 'target': 'ovnmeta-7ed4244a-b510-4df6-9ffd-2f86603932fc', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.12/site-packages/oslo_privsep/daemon.py:515
Oct 09 16:15:39 compute-0 ovn_metadata_agent[28608]: 2025-10-09 16:15:39.221 139687 DEBUG oslo.privsep.daemon [-] privsep: reply[c0eeecff-8c37-44f3-be2c-d4bc551d2744]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe11:ef1e'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 148278, 'tstamp': 148278}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 143189, 'error': None, 'target': 'ovnmeta-7ed4244a-b510-4df6-9ffd-2f86603932fc', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.12/site-packages/oslo_privsep/daemon.py:515
Oct 09 16:15:39 compute-0 ovn_metadata_agent[28608]: 2025-10-09 16:15:39.238 139687 DEBUG oslo.privsep.daemon [-] privsep: reply[897c9f3e-c00e-413b-a1f2-d6121e5ec4a3]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap7ed4244a-b1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:11:ef:1e'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 27], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 148278, 'reachable_time': 15142, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 143190, 'error': None, 'target': 'ovnmeta-7ed4244a-b510-4df6-9ffd-2f86603932fc', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.12/site-packages/oslo_privsep/daemon.py:515
Oct 09 16:15:39 compute-0 ovn_metadata_agent[28608]: 2025-10-09 16:15:39.270 139687 DEBUG oslo.privsep.daemon [-] privsep: reply[dc24d441-9565-4510-9a7e-cb52929e5336]: (4, None) _call_back /usr/lib/python3.12/site-packages/oslo_privsep/daemon.py:515
Oct 09 16:15:39 compute-0 ovn_metadata_agent[28608]: 2025-10-09 16:15:39.337 139687 DEBUG oslo.privsep.daemon [-] privsep: reply[e31ffa1b-641f-4c43-b59b-12b2c86c91b7]: (4, None) _call_back /usr/lib/python3.12/site-packages/oslo_privsep/daemon.py:515
Oct 09 16:15:39 compute-0 ovn_metadata_agent[28608]: 2025-10-09 16:15:39.340 28613 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap7ed4244a-b0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.12/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 09 16:15:39 compute-0 ovn_metadata_agent[28608]: 2025-10-09 16:15:39.340 28613 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.12/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 09 16:15:39 compute-0 ovn_metadata_agent[28608]: 2025-10-09 16:15:39.341 28613 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap7ed4244a-b0, may_exist=True, interface_attrs={}) do_commit /usr/lib/python3.12/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 09 16:15:39 compute-0 nova_compute[117331]: 2025-10-09 16:15:39.342 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 09 16:15:39 compute-0 NetworkManager[1028]: <info>  [1760026539.3435] manager: (tap7ed4244a-b0): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/38)
Oct 09 16:15:39 compute-0 kernel: tap7ed4244a-b0: entered promiscuous mode
Oct 09 16:15:39 compute-0 nova_compute[117331]: 2025-10-09 16:15:39.345 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 09 16:15:39 compute-0 ovn_metadata_agent[28608]: 2025-10-09 16:15:39.345 28613 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap7ed4244a-b0, col_values=(('external_ids', {'iface-id': 'fa7fd37b-f1db-4203-966c-06eaa6fa3892'}),), if_exists=True) do_commit /usr/lib/python3.12/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 09 16:15:39 compute-0 nova_compute[117331]: 2025-10-09 16:15:39.346 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 09 16:15:39 compute-0 ovn_controller[19752]: 2025-10-09T16:15:39Z|00079|binding|INFO|Releasing lport fa7fd37b-f1db-4203-966c-06eaa6fa3892 from this chassis (sb_readonly=0)
Oct 09 16:15:39 compute-0 nova_compute[117331]: 2025-10-09 16:15:39.358 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 09 16:15:39 compute-0 ovn_metadata_agent[28608]: 2025-10-09 16:15:39.359 139687 DEBUG oslo.privsep.daemon [-] privsep: reply[2a0bc9b1-8cbe-47df-8c97-2ea8b84a59b3]: (4, '') _call_back /usr/lib/python3.12/site-packages/oslo_privsep/daemon.py:515
Oct 09 16:15:39 compute-0 ovn_metadata_agent[28608]: 2025-10-09 16:15:39.359 28613 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/7ed4244a-b510-4df6-9ffd-2f86603932fc.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/7ed4244a-b510-4df6-9ffd-2f86603932fc.pid.haproxy' get_value_from_file /usr/lib/python3.12/site-packages/neutron/agent/linux/utils.py:268
Oct 09 16:15:39 compute-0 ovn_metadata_agent[28608]: 2025-10-09 16:15:39.359 28613 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/7ed4244a-b510-4df6-9ffd-2f86603932fc.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/7ed4244a-b510-4df6-9ffd-2f86603932fc.pid.haproxy' get_value_from_file /usr/lib/python3.12/site-packages/neutron/agent/linux/utils.py:268
Oct 09 16:15:39 compute-0 ovn_metadata_agent[28608]: 2025-10-09 16:15:39.360 28613 DEBUG neutron.agent.linux.external_process [-] No haproxy process started for 7ed4244a-b510-4df6-9ffd-2f86603932fc disable /usr/lib/python3.12/site-packages/neutron/agent/linux/external_process.py:173
Oct 09 16:15:39 compute-0 ovn_metadata_agent[28608]: 2025-10-09 16:15:39.360 28613 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/7ed4244a-b510-4df6-9ffd-2f86603932fc.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/7ed4244a-b510-4df6-9ffd-2f86603932fc.pid.haproxy' get_value_from_file /usr/lib/python3.12/site-packages/neutron/agent/linux/utils.py:268
Oct 09 16:15:39 compute-0 ovn_metadata_agent[28608]: 2025-10-09 16:15:39.360 139687 DEBUG oslo.privsep.daemon [-] privsep: reply[42ea487b-e55a-4293-89af-08272afaf64f]: (4, None) _call_back /usr/lib/python3.12/site-packages/oslo_privsep/daemon.py:515
Oct 09 16:15:39 compute-0 ovn_metadata_agent[28608]: 2025-10-09 16:15:39.360 28613 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/7ed4244a-b510-4df6-9ffd-2f86603932fc.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/7ed4244a-b510-4df6-9ffd-2f86603932fc.pid.haproxy' get_value_from_file /usr/lib/python3.12/site-packages/neutron/agent/linux/utils.py:268
Oct 09 16:15:39 compute-0 ovn_metadata_agent[28608]: 2025-10-09 16:15:39.360 139687 DEBUG oslo.privsep.daemon [-] privsep: reply[8dec3170-f454-41dd-9009-186c59303ac8]: (4, None) _call_back /usr/lib/python3.12/site-packages/oslo_privsep/daemon.py:515
Oct 09 16:15:39 compute-0 ovn_metadata_agent[28608]: 2025-10-09 16:15:39.361 28613 DEBUG neutron.agent.metadata.driver_base [-] haproxy_cfg = 
Oct 09 16:15:39 compute-0 ovn_metadata_agent[28608]: global
Oct 09 16:15:39 compute-0 ovn_metadata_agent[28608]:     log         /dev/log local0 debug
Oct 09 16:15:39 compute-0 ovn_metadata_agent[28608]:     log-tag     haproxy-metadata-proxy-7ed4244a-b510-4df6-9ffd-2f86603932fc
Oct 09 16:15:39 compute-0 ovn_metadata_agent[28608]:     user        root
Oct 09 16:15:39 compute-0 ovn_metadata_agent[28608]:     group       root
Oct 09 16:15:39 compute-0 ovn_metadata_agent[28608]:     maxconn     1024
Oct 09 16:15:39 compute-0 ovn_metadata_agent[28608]:     pidfile     /var/lib/neutron/external/pids/7ed4244a-b510-4df6-9ffd-2f86603932fc.pid.haproxy
Oct 09 16:15:39 compute-0 ovn_metadata_agent[28608]:     daemon
Oct 09 16:15:39 compute-0 ovn_metadata_agent[28608]: 
Oct 09 16:15:39 compute-0 ovn_metadata_agent[28608]: defaults
Oct 09 16:15:39 compute-0 ovn_metadata_agent[28608]:     log global
Oct 09 16:15:39 compute-0 ovn_metadata_agent[28608]:     mode http
Oct 09 16:15:39 compute-0 ovn_metadata_agent[28608]:     option httplog
Oct 09 16:15:39 compute-0 ovn_metadata_agent[28608]:     option dontlognull
Oct 09 16:15:39 compute-0 ovn_metadata_agent[28608]:     option http-server-close
Oct 09 16:15:39 compute-0 ovn_metadata_agent[28608]:     option forwardfor
Oct 09 16:15:39 compute-0 ovn_metadata_agent[28608]:     retries                 3
Oct 09 16:15:39 compute-0 ovn_metadata_agent[28608]:     timeout http-request    30s
Oct 09 16:15:39 compute-0 ovn_metadata_agent[28608]:     timeout connect         30s
Oct 09 16:15:39 compute-0 ovn_metadata_agent[28608]:     timeout client          32s
Oct 09 16:15:39 compute-0 ovn_metadata_agent[28608]:     timeout server          32s
Oct 09 16:15:39 compute-0 ovn_metadata_agent[28608]:     timeout http-keep-alive 30s
Oct 09 16:15:39 compute-0 ovn_metadata_agent[28608]: 
Oct 09 16:15:39 compute-0 ovn_metadata_agent[28608]: listen listener
Oct 09 16:15:39 compute-0 ovn_metadata_agent[28608]:     bind 169.254.169.254:80
Oct 09 16:15:39 compute-0 ovn_metadata_agent[28608]:     
Oct 09 16:15:39 compute-0 ovn_metadata_agent[28608]:     server metadata /var/lib/neutron/metadata_proxy
Oct 09 16:15:39 compute-0 ovn_metadata_agent[28608]: 
Oct 09 16:15:39 compute-0 ovn_metadata_agent[28608]:     http-request add-header X-OVN-Network-ID 7ed4244a-b510-4df6-9ffd-2f86603932fc
Oct 09 16:15:39 compute-0 ovn_metadata_agent[28608]:  create_config_file /usr/lib/python3.12/site-packages/neutron/agent/metadata/driver_base.py:155
Oct 09 16:15:39 compute-0 ovn_metadata_agent[28608]: 2025-10-09 16:15:39.361 28613 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-7ed4244a-b510-4df6-9ffd-2f86603932fc', 'env', 'PROCESS_TAG=haproxy-7ed4244a-b510-4df6-9ffd-2f86603932fc', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/7ed4244a-b510-4df6-9ffd-2f86603932fc.conf'] create_process /usr/lib/python3.12/site-packages/neutron/agent/linux/utils.py:84
Oct 09 16:15:39 compute-0 sshd-session[143108]: Connection closed by authenticating user root 134.199.199.215 port 57626 [preauth]
Oct 09 16:15:39 compute-0 podman[143229]: 2025-10-09 16:15:39.737905326 +0000 UTC m=+0.049912793 container create a06e658ab15bf037bd9c283806f8b85dfc8e0f2a11eb32e297b405fb3a83fde5 (image=38.102.83.66:5001/podified-master-centos10/openstack-neutron-metadata-agent-ovn:watcher_latest, name=neutron-haproxy-ovnmeta-7ed4244a-b510-4df6-9ffd-2f86603932fc, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.build-date=20251007, org.label-schema.license=GPLv2, tcib_build_tag=watcher_latest, org.label-schema.name=CentOS Stream 10 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, io.buildah.version=1.41.4)
Oct 09 16:15:39 compute-0 systemd[1]: Started libpod-conmon-a06e658ab15bf037bd9c283806f8b85dfc8e0f2a11eb32e297b405fb3a83fde5.scope.
Oct 09 16:15:39 compute-0 podman[143229]: 2025-10-09 16:15:39.709841691 +0000 UTC m=+0.021849208 image pull 4e15c189a95c5dcd1203c76fe304de5c8f8616da9a258ca168cc54a517f49b87 38.102.83.66:5001/podified-master-centos10/openstack-neutron-metadata-agent-ovn:watcher_latest
Oct 09 16:15:39 compute-0 systemd[1]: Started libcrun container.
Oct 09 16:15:39 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/04403df32ea53d49b05f96acc6f4a749bd5b2d8a3109edac594f245aa6cc0646/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Oct 09 16:15:39 compute-0 podman[143229]: 2025-10-09 16:15:39.824625241 +0000 UTC m=+0.136632738 container init a06e658ab15bf037bd9c283806f8b85dfc8e0f2a11eb32e297b405fb3a83fde5 (image=38.102.83.66:5001/podified-master-centos10/openstack-neutron-metadata-agent-ovn:watcher_latest, name=neutron-haproxy-ovnmeta-7ed4244a-b510-4df6-9ffd-2f86603932fc, org.label-schema.license=GPLv2, tcib_build_tag=watcher_latest, org.label-schema.vendor=CentOS, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, io.buildah.version=1.41.4, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.build-date=20251007)
Oct 09 16:15:39 compute-0 podman[143229]: 2025-10-09 16:15:39.829645381 +0000 UTC m=+0.141652838 container start a06e658ab15bf037bd9c283806f8b85dfc8e0f2a11eb32e297b405fb3a83fde5 (image=38.102.83.66:5001/podified-master-centos10/openstack-neutron-metadata-agent-ovn:watcher_latest, name=neutron-haproxy-ovnmeta-7ed4244a-b510-4df6-9ffd-2f86603932fc, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, tcib_build_tag=watcher_latest, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.4, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251007)
Oct 09 16:15:39 compute-0 neutron-haproxy-ovnmeta-7ed4244a-b510-4df6-9ffd-2f86603932fc[143245]: [NOTICE]   (143249) : New worker (143251) forked
Oct 09 16:15:39 compute-0 neutron-haproxy-ovnmeta-7ed4244a-b510-4df6-9ffd-2f86603932fc[143245]: [NOTICE]   (143249) : Loading success.
Oct 09 16:15:40 compute-0 nova_compute[117331]: 2025-10-09 16:15:40.001 2 DEBUG nova.compute.manager [None req-647b20c2-3f27-4bd8-aa1a-6180cdfd55a2 e5044998ddc3419bb14cc08417add581 be4e2f9059cd48f5b44a612256e3fc7b - - default default] [instance: 70a451f7-5bde-42f1-a448-d48b3b24d9d6] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.12/site-packages/nova/compute/manager.py:602
Oct 09 16:15:40 compute-0 nova_compute[117331]: 2025-10-09 16:15:40.004 2 DEBUG nova.virt.libvirt.driver [None req-647b20c2-3f27-4bd8-aa1a-6180cdfd55a2 e5044998ddc3419bb14cc08417add581 be4e2f9059cd48f5b44a612256e3fc7b - - default default] [instance: 70a451f7-5bde-42f1-a448-d48b3b24d9d6] Guest created on hypervisor spawn /usr/lib/python3.12/site-packages/nova/virt/libvirt/driver.py:4816
Oct 09 16:15:40 compute-0 nova_compute[117331]: 2025-10-09 16:15:40.007 2 INFO nova.virt.libvirt.driver [-] [instance: 70a451f7-5bde-42f1-a448-d48b3b24d9d6] Instance spawned successfully.
Oct 09 16:15:40 compute-0 nova_compute[117331]: 2025-10-09 16:15:40.007 2 DEBUG nova.virt.libvirt.driver [None req-647b20c2-3f27-4bd8-aa1a-6180cdfd55a2 e5044998ddc3419bb14cc08417add581 be4e2f9059cd48f5b44a612256e3fc7b - - default default] [instance: 70a451f7-5bde-42f1-a448-d48b3b24d9d6] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.12/site-packages/nova/virt/libvirt/driver.py:985
Oct 09 16:15:40 compute-0 nova_compute[117331]: 2025-10-09 16:15:40.509 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 09 16:15:40 compute-0 nova_compute[117331]: 2025-10-09 16:15:40.531 2 DEBUG nova.virt.libvirt.driver [None req-647b20c2-3f27-4bd8-aa1a-6180cdfd55a2 e5044998ddc3419bb14cc08417add581 be4e2f9059cd48f5b44a612256e3fc7b - - default default] [instance: 70a451f7-5bde-42f1-a448-d48b3b24d9d6] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.12/site-packages/nova/virt/libvirt/driver.py:1014
Oct 09 16:15:40 compute-0 nova_compute[117331]: 2025-10-09 16:15:40.532 2 DEBUG nova.virt.libvirt.driver [None req-647b20c2-3f27-4bd8-aa1a-6180cdfd55a2 e5044998ddc3419bb14cc08417add581 be4e2f9059cd48f5b44a612256e3fc7b - - default default] [instance: 70a451f7-5bde-42f1-a448-d48b3b24d9d6] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.12/site-packages/nova/virt/libvirt/driver.py:1014
Oct 09 16:15:40 compute-0 nova_compute[117331]: 2025-10-09 16:15:40.532 2 DEBUG nova.virt.libvirt.driver [None req-647b20c2-3f27-4bd8-aa1a-6180cdfd55a2 e5044998ddc3419bb14cc08417add581 be4e2f9059cd48f5b44a612256e3fc7b - - default default] [instance: 70a451f7-5bde-42f1-a448-d48b3b24d9d6] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.12/site-packages/nova/virt/libvirt/driver.py:1014
Oct 09 16:15:40 compute-0 nova_compute[117331]: 2025-10-09 16:15:40.532 2 DEBUG nova.virt.libvirt.driver [None req-647b20c2-3f27-4bd8-aa1a-6180cdfd55a2 e5044998ddc3419bb14cc08417add581 be4e2f9059cd48f5b44a612256e3fc7b - - default default] [instance: 70a451f7-5bde-42f1-a448-d48b3b24d9d6] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.12/site-packages/nova/virt/libvirt/driver.py:1014
Oct 09 16:15:40 compute-0 nova_compute[117331]: 2025-10-09 16:15:40.533 2 DEBUG nova.virt.libvirt.driver [None req-647b20c2-3f27-4bd8-aa1a-6180cdfd55a2 e5044998ddc3419bb14cc08417add581 be4e2f9059cd48f5b44a612256e3fc7b - - default default] [instance: 70a451f7-5bde-42f1-a448-d48b3b24d9d6] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.12/site-packages/nova/virt/libvirt/driver.py:1014
Oct 09 16:15:40 compute-0 nova_compute[117331]: 2025-10-09 16:15:40.533 2 DEBUG nova.virt.libvirt.driver [None req-647b20c2-3f27-4bd8-aa1a-6180cdfd55a2 e5044998ddc3419bb14cc08417add581 be4e2f9059cd48f5b44a612256e3fc7b - - default default] [instance: 70a451f7-5bde-42f1-a448-d48b3b24d9d6] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.12/site-packages/nova/virt/libvirt/driver.py:1014
Oct 09 16:15:40 compute-0 sshd-session[143135]: Failed password for invalid user myuser from 134.199.199.215 port 45816 ssh2
Oct 09 16:15:40 compute-0 podman[143260]: 2025-10-09 16:15:40.855823443 +0000 UTC m=+0.085934361 container health_status d615e4f24ff62fbb14be4a63e6da568ec5b22309773c1c66103b6b4de64a8a2f (image=38.102.83.66:5001/podified-master-centos10/openstack-ovn-controller:watcher_latest, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, tcib_build_tag=watcher_latest, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 10 Base Image, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': '38.102.83.66:5001/podified-master-centos10/openstack-ovn-controller:watcher_latest', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, tcib_managed=true, config_id=ovn_controller, io.buildah.version=1.41.4, org.label-schema.build-date=20251007, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Oct 09 16:15:41 compute-0 nova_compute[117331]: 2025-10-09 16:15:41.041 2 INFO nova.compute.manager [None req-647b20c2-3f27-4bd8-aa1a-6180cdfd55a2 e5044998ddc3419bb14cc08417add581 be4e2f9059cd48f5b44a612256e3fc7b - - default default] [instance: 70a451f7-5bde-42f1-a448-d48b3b24d9d6] Took 8.71 seconds to spawn the instance on the hypervisor.
Oct 09 16:15:41 compute-0 nova_compute[117331]: 2025-10-09 16:15:41.041 2 DEBUG nova.compute.manager [None req-647b20c2-3f27-4bd8-aa1a-6180cdfd55a2 e5044998ddc3419bb14cc08417add581 be4e2f9059cd48f5b44a612256e3fc7b - - default default] [instance: 70a451f7-5bde-42f1-a448-d48b3b24d9d6] Checking state _get_power_state /usr/lib/python3.12/site-packages/nova/compute/manager.py:1825
Oct 09 16:15:41 compute-0 nova_compute[117331]: 2025-10-09 16:15:41.214 2 DEBUG nova.compute.manager [req-91844b66-771c-415b-85b7-42153b5116be req-1a647711-c289-41a1-bd54-3c0296a97afa ee15961ad2be4bada6c106ac4f69f422 c076c42271004e7aa86e84416e33f826 - - default default] [instance: 70a451f7-5bde-42f1-a448-d48b3b24d9d6] Received event network-vif-plugged-a5c38ba6-3efb-4090-a42b-1dd250959041 external_instance_event /usr/lib/python3.12/site-packages/nova/compute/manager.py:11812
Oct 09 16:15:41 compute-0 nova_compute[117331]: 2025-10-09 16:15:41.215 2 DEBUG oslo_concurrency.lockutils [req-91844b66-771c-415b-85b7-42153b5116be req-1a647711-c289-41a1-bd54-3c0296a97afa ee15961ad2be4bada6c106ac4f69f422 c076c42271004e7aa86e84416e33f826 - - default default] Acquiring lock "70a451f7-5bde-42f1-a448-d48b3b24d9d6-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:405
Oct 09 16:15:41 compute-0 nova_compute[117331]: 2025-10-09 16:15:41.215 2 DEBUG oslo_concurrency.lockutils [req-91844b66-771c-415b-85b7-42153b5116be req-1a647711-c289-41a1-bd54-3c0296a97afa ee15961ad2be4bada6c106ac4f69f422 c076c42271004e7aa86e84416e33f826 - - default default] Lock "70a451f7-5bde-42f1-a448-d48b3b24d9d6-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:410
Oct 09 16:15:41 compute-0 nova_compute[117331]: 2025-10-09 16:15:41.215 2 DEBUG oslo_concurrency.lockutils [req-91844b66-771c-415b-85b7-42153b5116be req-1a647711-c289-41a1-bd54-3c0296a97afa ee15961ad2be4bada6c106ac4f69f422 c076c42271004e7aa86e84416e33f826 - - default default] Lock "70a451f7-5bde-42f1-a448-d48b3b24d9d6-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:424
Oct 09 16:15:41 compute-0 nova_compute[117331]: 2025-10-09 16:15:41.215 2 DEBUG nova.compute.manager [req-91844b66-771c-415b-85b7-42153b5116be req-1a647711-c289-41a1-bd54-3c0296a97afa ee15961ad2be4bada6c106ac4f69f422 c076c42271004e7aa86e84416e33f826 - - default default] [instance: 70a451f7-5bde-42f1-a448-d48b3b24d9d6] No waiting events found dispatching network-vif-plugged-a5c38ba6-3efb-4090-a42b-1dd250959041 pop_instance_event /usr/lib/python3.12/site-packages/nova/compute/manager.py:344
Oct 09 16:15:41 compute-0 nova_compute[117331]: 2025-10-09 16:15:41.216 2 WARNING nova.compute.manager [req-91844b66-771c-415b-85b7-42153b5116be req-1a647711-c289-41a1-bd54-3c0296a97afa ee15961ad2be4bada6c106ac4f69f422 c076c42271004e7aa86e84416e33f826 - - default default] [instance: 70a451f7-5bde-42f1-a448-d48b3b24d9d6] Received unexpected event network-vif-plugged-a5c38ba6-3efb-4090-a42b-1dd250959041 for instance with vm_state active and task_state None.
Oct 09 16:15:41 compute-0 nova_compute[117331]: 2025-10-09 16:15:41.559 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 09 16:15:41 compute-0 nova_compute[117331]: 2025-10-09 16:15:41.567 2 INFO nova.compute.manager [None req-647b20c2-3f27-4bd8-aa1a-6180cdfd55a2 e5044998ddc3419bb14cc08417add581 be4e2f9059cd48f5b44a612256e3fc7b - - default default] [instance: 70a451f7-5bde-42f1-a448-d48b3b24d9d6] Took 13.92 seconds to build instance.
Oct 09 16:15:41 compute-0 sshd-session[143288]: Invalid user es from 134.199.199.215 port 45824
Oct 09 16:15:41 compute-0 sshd-session[143288]: pam_unix(sshd:auth): check pass; user unknown
Oct 09 16:15:41 compute-0 sshd-session[143288]: pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=134.199.199.215
Oct 09 16:15:42 compute-0 nova_compute[117331]: 2025-10-09 16:15:42.071 2 DEBUG oslo_concurrency.lockutils [None req-647b20c2-3f27-4bd8-aa1a-6180cdfd55a2 e5044998ddc3419bb14cc08417add581 be4e2f9059cd48f5b44a612256e3fc7b - - default default] Lock "70a451f7-5bde-42f1-a448-d48b3b24d9d6" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 15.440s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:424
Oct 09 16:15:42 compute-0 sshd-session[143135]: Connection closed by invalid user myuser 134.199.199.215 port 45816 [preauth]
Oct 09 16:15:44 compute-0 sshd-session[143288]: Failed password for invalid user es from 134.199.199.215 port 45824 ssh2
Oct 09 16:15:45 compute-0 unix_chkpwd[143292]: password check failed for user (root)
Oct 09 16:15:45 compute-0 sshd-session[143290]: pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=134.199.199.215  user=root
Oct 09 16:15:45 compute-0 nova_compute[117331]: 2025-10-09 16:15:45.512 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 09 16:15:46 compute-0 sshd-session[143288]: Connection closed by invalid user es 134.199.199.215 port 45824 [preauth]
Oct 09 16:15:46 compute-0 nova_compute[117331]: 2025-10-09 16:15:46.577 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 09 16:15:47 compute-0 sshd-session[143290]: Failed password for root from 134.199.199.215 port 45826 ssh2
Oct 09 16:15:48 compute-0 unix_chkpwd[143295]: password check failed for user (root)
Oct 09 16:15:48 compute-0 sshd-session[143293]: pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=134.199.199.215  user=root
Oct 09 16:15:49 compute-0 sshd-session[143290]: Connection closed by authenticating user root 134.199.199.215 port 45826 [preauth]
Oct 09 16:15:49 compute-0 podman[143296]: 2025-10-09 16:15:49.821219843 +0000 UTC m=+0.054055805 container health_status 270830b5167cafbab735d8118980f26987e673cd0fa464dd2202394830bcca66 (image=38.102.83.66:5001/podified-master-centos10/openstack-multipathd:watcher_latest, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.4, org.label-schema.name=CentOS Stream 10 Base Image, managed_by=edpm_ansible, org.label-schema.build-date=20251007, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=watcher_latest, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': '38.102.83.66:5001/podified-master-centos10/openstack-multipathd:watcher_latest', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, tcib_managed=true, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0)
Oct 09 16:15:50 compute-0 nova_compute[117331]: 2025-10-09 16:15:50.513 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 09 16:15:50 compute-0 sshd-session[143293]: Failed password for root from 134.199.199.215 port 58844 ssh2
Oct 09 16:15:50 compute-0 sshd-session[143293]: Connection closed by authenticating user root 134.199.199.215 port 58844 [preauth]
Oct 09 16:15:51 compute-0 nova_compute[117331]: 2025-10-09 16:15:51.585 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 09 16:15:51 compute-0 ovn_controller[19752]: 2025-10-09T16:15:51Z|00009|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:c9:bd:65 10.100.0.6
Oct 09 16:15:51 compute-0 ovn_controller[19752]: 2025-10-09T16:15:51Z|00010|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:c9:bd:65 10.100.0.6
Oct 09 16:15:52 compute-0 sshd-session[143334]: Invalid user test from 134.199.199.215 port 58860
Oct 09 16:15:52 compute-0 sshd-session[143334]: pam_unix(sshd:auth): check pass; user unknown
Oct 09 16:15:52 compute-0 sshd-session[143334]: pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=134.199.199.215
Oct 09 16:15:54 compute-0 sshd-session[143334]: Failed password for invalid user test from 134.199.199.215 port 58860 ssh2
Oct 09 16:15:54 compute-0 podman[143336]: 2025-10-09 16:15:54.834417199 +0000 UTC m=+0.059196139 container health_status 86aa93e887899e54d383b0fc17fb3c5637680d1d8e146a130a557c837f0260fe (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible)
Oct 09 16:15:54 compute-0 sshd-session[143334]: Connection closed by invalid user test 134.199.199.215 port 58860 [preauth]
Oct 09 16:15:55 compute-0 sshd-session[143360]: Invalid user tom from 134.199.199.215 port 58866
Oct 09 16:15:55 compute-0 sshd-session[143360]: pam_unix(sshd:auth): check pass; user unknown
Oct 09 16:15:55 compute-0 sshd-session[143360]: pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=134.199.199.215
Oct 09 16:15:55 compute-0 nova_compute[117331]: 2025-10-09 16:15:55.516 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 09 16:15:56 compute-0 nova_compute[117331]: 2025-10-09 16:15:56.587 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 09 16:15:57 compute-0 sshd-session[143360]: Failed password for invalid user tom from 134.199.199.215 port 58866 ssh2
Oct 09 16:15:57 compute-0 sshd-session[143360]: Connection closed by invalid user tom 134.199.199.215 port 58866 [preauth]
Oct 09 16:15:58 compute-0 sshd[52903]: drop connection #0 from [134.199.199.215]:39168 on [38.102.83.110]:22 penalty: failed authentication
Oct 09 16:15:58 compute-0 podman[143363]: 2025-10-09 16:15:58.845167181 +0000 UTC m=+0.068870507 container health_status 8227728d23f374cb9528be6ddf01dff38d8c792912a17b0d137932a18390b820 (image=38.102.83.66:5001/podified-master-centos10/openstack-iscsid:watcher_latest, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': '38.102.83.66:5001/podified-master-centos10/openstack-iscsid:watcher_latest', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, org.label-schema.build-date=20251007, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.41.4, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, tcib_build_tag=watcher_latest, config_id=iscsid, container_name=iscsid, org.label-schema.schema-version=1.0, tcib_managed=true)
Oct 09 16:15:58 compute-0 podman[143362]: 2025-10-09 16:15:58.853680762 +0000 UTC m=+0.081426967 container health_status 0e966c7a283689daf23c5db5017f23f685ee836319240188a9f70ddd246563c5 (image=38.102.83.66:5001/podified-master-centos10/openstack-neutron-metadata-agent-ovn:watcher_latest, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_metadata_agent, tcib_build_tag=watcher_latest, config_id=ovn_metadata_agent, io.buildah.version=1.41.4, org.label-schema.schema-version=1.0, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, org.label-schema.build-date=20251007, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 10 Base Image, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': '38.102.83.66:5001/podified-master-centos10/openstack-neutron-metadata-agent-ovn:watcher_latest', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']})
Oct 09 16:15:59 compute-0 podman[127775]: time="2025-10-09T16:15:59Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Oct 09 16:15:59 compute-0 podman[127775]: @ - - [09/Oct/2025:16:15:59 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 20749 "" "Go-http-client/1.1"
Oct 09 16:15:59 compute-0 podman[127775]: @ - - [09/Oct/2025:16:15:59 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 3482 "" "Go-http-client/1.1"
Oct 09 16:16:00 compute-0 nova_compute[117331]: 2025-10-09 16:16:00.566 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 09 16:16:01 compute-0 openstack_network_exporter[129925]: ERROR   16:16:01 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Oct 09 16:16:01 compute-0 openstack_network_exporter[129925]: ERROR   16:16:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Oct 09 16:16:01 compute-0 openstack_network_exporter[129925]: ERROR   16:16:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Oct 09 16:16:01 compute-0 openstack_network_exporter[129925]: ERROR   16:16:01 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Oct 09 16:16:01 compute-0 openstack_network_exporter[129925]: 
Oct 09 16:16:01 compute-0 openstack_network_exporter[129925]: ERROR   16:16:01 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Oct 09 16:16:01 compute-0 openstack_network_exporter[129925]: 
Oct 09 16:16:01 compute-0 ovn_metadata_agent[28608]: 2025-10-09 16:16:01.573 28613 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=8, options={'arp_ns_explicit_output': 'true', 'fdb_removal_limit': '0', 'ignore_lsp_down': 'false', 'mac_binding_removal_limit': '0', 'mac_prefix': '4a:65:5f', 'max_tunid': '16711680', 'northd_internal_version': '24.09.4-20.37.0-77.8', 'svc_monitor_mac': '32:24:1e:e3:5d:e8'}, ipsec=False) old=SB_Global(nb_cfg=7) matches /usr/lib/python3.12/site-packages/ovsdbapp/backend/ovs_idl/event.py:55
Oct 09 16:16:01 compute-0 nova_compute[117331]: 2025-10-09 16:16:01.575 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 09 16:16:01 compute-0 ovn_metadata_agent[28608]: 2025-10-09 16:16:01.576 28613 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 4 seconds run /usr/lib/python3.12/site-packages/neutron/agent/ovn/metadata/agent.py:367
Oct 09 16:16:01 compute-0 nova_compute[117331]: 2025-10-09 16:16:01.589 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 09 16:16:01 compute-0 sshd[52903]: drop connection #0 from [134.199.199.215]:39172 on [38.102.83.110]:22 penalty: failed authentication
Oct 09 16:16:05 compute-0 sshd[52903]: drop connection #0 from [134.199.199.215]:39176 on [38.102.83.110]:22 penalty: failed authentication
Oct 09 16:16:05 compute-0 nova_compute[117331]: 2025-10-09 16:16:05.569 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 09 16:16:05 compute-0 ovn_metadata_agent[28608]: 2025-10-09 16:16:05.577 28613 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=9954897f-aa83-45dd-8e84-289816676c2a, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '8'}),), if_exists=True) do_commit /usr/lib/python3.12/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 09 16:16:06 compute-0 nova_compute[117331]: 2025-10-09 16:16:06.592 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 09 16:16:08 compute-0 nova_compute[117331]: 2025-10-09 16:16:08.307 2 DEBUG oslo_service.periodic_task [None req-92a13bed-c081-4c7b-87f3-bafebefb79f2 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.12/site-packages/oslo_service/periodic_task.py:210
Oct 09 16:16:08 compute-0 sshd[52903]: drop connection #0 from [134.199.199.215]:36424 on [38.102.83.110]:22 penalty: failed authentication
Oct 09 16:16:08 compute-0 podman[143400]: 2025-10-09 16:16:08.830301942 +0000 UTC m=+0.061249848 container health_status 64fa251112d01abb34ec1bb8e22019815ddcaaaa391ce5ec91517147ac5a9994 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, com.redhat.component=ubi9-minimal-container, managed_by=edpm_ansible, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, distribution-scope=public, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, io.buildah.version=1.33.7, name=ubi9-minimal, vendor=Red Hat, Inc., build-date=2025-08-20T13:12:41, url=https://catalog.redhat.com/en/search?searchType=containers, vcs-type=git, maintainer=Red Hat, Inc., config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, config_id=edpm, container_name=openstack_network_exporter, version=9.6, architecture=x86_64, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.expose-services=, io.openshift.tags=minimal rhel9, release=1755695350)
Oct 09 16:16:09 compute-0 nova_compute[117331]: 2025-10-09 16:16:09.513 2 DEBUG oslo_concurrency.lockutils [None req-8a0ba780-e6a1-42f0-a97c-d0963d6a8726 e5044998ddc3419bb14cc08417add581 be4e2f9059cd48f5b44a612256e3fc7b - - default default] Acquiring lock "12702257-b2eb-4842-bdf8-25e7a3b20038" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:405
Oct 09 16:16:09 compute-0 nova_compute[117331]: 2025-10-09 16:16:09.514 2 DEBUG oslo_concurrency.lockutils [None req-8a0ba780-e6a1-42f0-a97c-d0963d6a8726 e5044998ddc3419bb14cc08417add581 be4e2f9059cd48f5b44a612256e3fc7b - - default default] Lock "12702257-b2eb-4842-bdf8-25e7a3b20038" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.000s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:410
Oct 09 16:16:10 compute-0 nova_compute[117331]: 2025-10-09 16:16:10.019 2 DEBUG nova.compute.manager [None req-8a0ba780-e6a1-42f0-a97c-d0963d6a8726 e5044998ddc3419bb14cc08417add581 be4e2f9059cd48f5b44a612256e3fc7b - - default default] [instance: 12702257-b2eb-4842-bdf8-25e7a3b20038] Starting instance... _do_build_and_run_instance /usr/lib/python3.12/site-packages/nova/compute/manager.py:2472
Oct 09 16:16:10 compute-0 nova_compute[117331]: 2025-10-09 16:16:10.307 2 DEBUG oslo_service.periodic_task [None req-92a13bed-c081-4c7b-87f3-bafebefb79f2 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.12/site-packages/oslo_service/periodic_task.py:210
Oct 09 16:16:10 compute-0 nova_compute[117331]: 2025-10-09 16:16:10.573 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 09 16:16:10 compute-0 nova_compute[117331]: 2025-10-09 16:16:10.581 2 DEBUG oslo_concurrency.lockutils [None req-8a0ba780-e6a1-42f0-a97c-d0963d6a8726 e5044998ddc3419bb14cc08417add581 be4e2f9059cd48f5b44a612256e3fc7b - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:405
Oct 09 16:16:10 compute-0 nova_compute[117331]: 2025-10-09 16:16:10.582 2 DEBUG oslo_concurrency.lockutils [None req-8a0ba780-e6a1-42f0-a97c-d0963d6a8726 e5044998ddc3419bb14cc08417add581 be4e2f9059cd48f5b44a612256e3fc7b - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:410
Oct 09 16:16:10 compute-0 nova_compute[117331]: 2025-10-09 16:16:10.589 2 DEBUG nova.virt.hardware [None req-8a0ba780-e6a1-42f0-a97c-d0963d6a8726 e5044998ddc3419bb14cc08417add581 be4e2f9059cd48f5b44a612256e3fc7b - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.12/site-packages/nova/virt/hardware.py:2528
Oct 09 16:16:10 compute-0 nova_compute[117331]: 2025-10-09 16:16:10.590 2 INFO nova.compute.claims [None req-8a0ba780-e6a1-42f0-a97c-d0963d6a8726 e5044998ddc3419bb14cc08417add581 be4e2f9059cd48f5b44a612256e3fc7b - - default default] [instance: 12702257-b2eb-4842-bdf8-25e7a3b20038] Claim successful on node compute-0.ctlplane.example.com
Oct 09 16:16:10 compute-0 nova_compute[117331]: 2025-10-09 16:16:10.832 2 DEBUG oslo_concurrency.lockutils [None req-92a13bed-c081-4c7b-87f3-bafebefb79f2 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:405
Oct 09 16:16:11 compute-0 nova_compute[117331]: 2025-10-09 16:16:11.595 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 09 16:16:11 compute-0 nova_compute[117331]: 2025-10-09 16:16:11.621 2 DEBUG nova.scheduler.client.report [None req-8a0ba780-e6a1-42f0-a97c-d0963d6a8726 e5044998ddc3419bb14cc08417add581 be4e2f9059cd48f5b44a612256e3fc7b - - default default] Refreshing inventories for resource provider 593051b8-2000-437f-a915-2616fc8b1671 _refresh_associations /usr/lib/python3.12/site-packages/nova/scheduler/client/report.py:822
Oct 09 16:16:11 compute-0 nova_compute[117331]: 2025-10-09 16:16:11.633 2 DEBUG nova.scheduler.client.report [None req-8a0ba780-e6a1-42f0-a97c-d0963d6a8726 e5044998ddc3419bb14cc08417add581 be4e2f9059cd48f5b44a612256e3fc7b - - default default] Updating ProviderTree inventory for provider 593051b8-2000-437f-a915-2616fc8b1671 from _refresh_and_get_inventory using data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 79, 'reserved': 1, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} _refresh_and_get_inventory /usr/lib/python3.12/site-packages/nova/scheduler/client/report.py:786
Oct 09 16:16:11 compute-0 nova_compute[117331]: 2025-10-09 16:16:11.634 2 DEBUG nova.compute.provider_tree [None req-8a0ba780-e6a1-42f0-a97c-d0963d6a8726 e5044998ddc3419bb14cc08417add581 be4e2f9059cd48f5b44a612256e3fc7b - - default default] Updating inventory in ProviderTree for provider 593051b8-2000-437f-a915-2616fc8b1671 with inventory: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 79, 'reserved': 1, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} update_inventory /usr/lib/python3.12/site-packages/nova/compute/provider_tree.py:176
Oct 09 16:16:11 compute-0 nova_compute[117331]: 2025-10-09 16:16:11.646 2 DEBUG nova.scheduler.client.report [None req-8a0ba780-e6a1-42f0-a97c-d0963d6a8726 e5044998ddc3419bb14cc08417add581 be4e2f9059cd48f5b44a612256e3fc7b - - default default] Refreshing aggregate associations for resource provider 593051b8-2000-437f-a915-2616fc8b1671, aggregates: None _refresh_associations /usr/lib/python3.12/site-packages/nova/scheduler/client/report.py:831
Oct 09 16:16:11 compute-0 nova_compute[117331]: 2025-10-09 16:16:11.662 2 DEBUG nova.scheduler.client.report [None req-8a0ba780-e6a1-42f0-a97c-d0963d6a8726 e5044998ddc3419bb14cc08417add581 be4e2f9059cd48f5b44a612256e3fc7b - - default default] Refreshing trait associations for resource provider 593051b8-2000-437f-a915-2616fc8b1671, traits: HW_CPU_X86_AVX2,HW_CPU_X86_BMI,COMPUTE_SOUND_MODEL_ICH6,COMPUTE_IMAGE_TYPE_RAW,COMPUTE_ARCH_X86_64,COMPUTE_IMAGE_TYPE_AMI,HW_CPU_X86_SSSE3,COMPUTE_SECURITY_TPM_CRB,COMPUTE_GRAPHICS_MODEL_CIRRUS,COMPUTE_GRAPHICS_MODEL_NONE,COMPUTE_VIOMMU_MODEL_VIRTIO,HW_CPU_X86_AESNI,COMPUTE_NET_VIF_MODEL_VMXNET3,COMPUTE_SOUND_MODEL_PCSPK,COMPUTE_VOLUME_ATTACH_WITH_TAG,COMPUTE_SOUND_MODEL_ES1370,COMPUTE_NODE,COMPUTE_VIOMMU_MODEL_AUTO,COMPUTE_SOUND_MODEL_ICH9,COMPUTE_IMAGE_TYPE_QCOW2,COMPUTE_VOLUME_MULTI_ATTACH,COMPUTE_SECURITY_UEFI_SECURE_BOOT,COMPUTE_STORAGE_BUS_USB,COMPUTE_GRAPHICS_MODEL_BOCHS,COMPUTE_SECURITY_TPM_TIS,HW_ARCH_X86_64,COMPUTE_SOUND_MODEL_VIRTIO,HW_CPU_X86_BMI2,COMPUTE_STORAGE_BUS_FDC,COMPUTE_SOUND_MODEL_USB,COMPUTE_STORAGE_BUS_SATA,COMPUTE_SOUND_MODEL_AC97,COMPUTE_NET_VIF_MODEL_SPAPR_VLAN,COMPUTE_IMAGE_TYPE_ISO,COMPUTE_VIOMMU_MODEL_INTEL,COMPUTE_NET_VIF_MODEL_PCNET,HW_CPU_X86_F16C,COMPUTE_NET_VIF_MODEL_IGB,COMPUTE_ACCELERATORS,COMPUTE_NET_VIF_MODEL_RTL8139,HW_CPU_X86_AMD_SVM,COMPUTE_NET_VIF_MODEL_E1000E,HW_CPU_X86_CLMUL,COMPUTE_NET_VIF_MODEL_NE2K_PCI,HW_CPU_X86_SHA,COMPUTE_NET_VIF_MODEL_VIRTIO,HW_CPU_X86_AVX,COMPUTE_ADDRESS_SPACE_EMULATED,COMPUTE_NET_VIF_MODEL_E1000,COMPUTE_IMAGE_TYPE_ARI,COMPUTE_NET_ATTACH_INTERFACE,HW_CPU_X86_SSE4A,COMPUTE_TRUSTED_CERTS,COMPUTE_GRAPHICS_MODEL_VGA,HW_CPU_X86_SSE2,COMPUTE_GRAPHICS_MODEL_VIRTIO,COMPUTE_STORAGE_VIRTIO_FS,COMPUTE_SECURITY_STATELESS_FIRMWARE,COMPUTE_NET_ATTACH_INTERFACE_WITH_TAG,HW_CPU_X86_SSE,HW_CPU_X86_SVM,COMPUTE_STORAGE_BUS_VIRTIO,COMPUTE_NET_VIRTIO_PACKED,COMPUTE_SECURITY_TPM_2_0,HW_CPU_X86_MMX,HW_CPU_X86_FMA3,COMPUTE_VOLUME_EXTEND,COMPUTE_DEVICE_TAGGING,HW_CPU_X86_SSE42,COMPUTE_SOCKET_PCI_NUMA_AFFINITY,COMPUTE_IMAGE_TYPE_AKI,COMPUTE_STORAGE_BUS_SCSI,COMPUTE_ADDRESS_SPACE_PASSTHROUGH,COMPUTE_RESCUE_BFV,COMPUTE_SOUND_MODEL_SB16,HW_CPU_X86_SSE41,COMPUTE_STORAGE_BUS_IDE,HW_CPU_X86_ABM _refresh_associations /usr/lib/python3.12/site-packages/nova/scheduler/client/report.py:843
Oct 09 16:16:11 compute-0 nova_compute[117331]: 2025-10-09 16:16:11.708 2 DEBUG nova.compute.provider_tree [None req-8a0ba780-e6a1-42f0-a97c-d0963d6a8726 e5044998ddc3419bb14cc08417add581 be4e2f9059cd48f5b44a612256e3fc7b - - default default] Inventory has not changed in ProviderTree for provider: 593051b8-2000-437f-a915-2616fc8b1671 update_inventory /usr/lib/python3.12/site-packages/nova/compute/provider_tree.py:180
Oct 09 16:16:11 compute-0 sshd[52903]: drop connection #0 from [134.199.199.215]:36438 on [38.102.83.110]:22 penalty: failed authentication
Oct 09 16:16:11 compute-0 podman[143421]: 2025-10-09 16:16:11.8847349 +0000 UTC m=+0.104991719 container health_status d615e4f24ff62fbb14be4a63e6da568ec5b22309773c1c66103b6b4de64a8a2f (image=38.102.83.66:5001/podified-master-centos10/openstack-ovn-controller:watcher_latest, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_controller, org.label-schema.schema-version=1.0, tcib_managed=true, config_id=ovn_controller, managed_by=edpm_ansible, org.label-schema.build-date=20251007, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': '38.102.83.66:5001/podified-master-centos10/openstack-ovn-controller:watcher_latest', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=watcher_latest, io.buildah.version=1.41.4, org.label-schema.license=GPLv2)
Oct 09 16:16:12 compute-0 nova_compute[117331]: 2025-10-09 16:16:12.215 2 DEBUG nova.scheduler.client.report [None req-8a0ba780-e6a1-42f0-a97c-d0963d6a8726 e5044998ddc3419bb14cc08417add581 be4e2f9059cd48f5b44a612256e3fc7b - - default default] Inventory has not changed for provider 593051b8-2000-437f-a915-2616fc8b1671 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 79, 'reserved': 1, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.12/site-packages/nova/scheduler/client/report.py:958
Oct 09 16:16:12 compute-0 nova_compute[117331]: 2025-10-09 16:16:12.727 2 DEBUG oslo_concurrency.lockutils [None req-8a0ba780-e6a1-42f0-a97c-d0963d6a8726 e5044998ddc3419bb14cc08417add581 be4e2f9059cd48f5b44a612256e3fc7b - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 2.145s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:424
Oct 09 16:16:12 compute-0 nova_compute[117331]: 2025-10-09 16:16:12.728 2 DEBUG nova.compute.manager [None req-8a0ba780-e6a1-42f0-a97c-d0963d6a8726 e5044998ddc3419bb14cc08417add581 be4e2f9059cd48f5b44a612256e3fc7b - - default default] [instance: 12702257-b2eb-4842-bdf8-25e7a3b20038] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.12/site-packages/nova/compute/manager.py:2869
Oct 09 16:16:12 compute-0 nova_compute[117331]: 2025-10-09 16:16:12.731 2 DEBUG oslo_concurrency.lockutils [None req-92a13bed-c081-4c7b-87f3-bafebefb79f2 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 1.898s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:410
Oct 09 16:16:12 compute-0 nova_compute[117331]: 2025-10-09 16:16:12.731 2 DEBUG oslo_concurrency.lockutils [None req-92a13bed-c081-4c7b-87f3-bafebefb79f2 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:424
Oct 09 16:16:12 compute-0 nova_compute[117331]: 2025-10-09 16:16:12.731 2 DEBUG nova.compute.resource_tracker [None req-92a13bed-c081-4c7b-87f3-bafebefb79f2 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.12/site-packages/nova/compute/resource_tracker.py:937
Oct 09 16:16:13 compute-0 nova_compute[117331]: 2025-10-09 16:16:13.239 2 DEBUG nova.compute.manager [None req-8a0ba780-e6a1-42f0-a97c-d0963d6a8726 e5044998ddc3419bb14cc08417add581 be4e2f9059cd48f5b44a612256e3fc7b - - default default] [instance: 12702257-b2eb-4842-bdf8-25e7a3b20038] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.12/site-packages/nova/compute/manager.py:2016
Oct 09 16:16:13 compute-0 nova_compute[117331]: 2025-10-09 16:16:13.240 2 DEBUG nova.network.neutron [None req-8a0ba780-e6a1-42f0-a97c-d0963d6a8726 e5044998ddc3419bb14cc08417add581 be4e2f9059cd48f5b44a612256e3fc7b - - default default] [instance: 12702257-b2eb-4842-bdf8-25e7a3b20038] allocate_for_instance() allocate_for_instance /usr/lib/python3.12/site-packages/nova/network/neutron.py:1208
Oct 09 16:16:13 compute-0 nova_compute[117331]: 2025-10-09 16:16:13.240 2 WARNING neutronclient.v2_0.client [None req-8a0ba780-e6a1-42f0-a97c-d0963d6a8726 e5044998ddc3419bb14cc08417add581 be4e2f9059cd48f5b44a612256e3fc7b - - default default] The python binding code in neutronclient is deprecated in favor of OpenstackSDK, please use that as this will be removed in a future release.
Oct 09 16:16:13 compute-0 nova_compute[117331]: 2025-10-09 16:16:13.240 2 WARNING neutronclient.v2_0.client [None req-8a0ba780-e6a1-42f0-a97c-d0963d6a8726 e5044998ddc3419bb14cc08417add581 be4e2f9059cd48f5b44a612256e3fc7b - - default default] The python binding code in neutronclient is deprecated in favor of OpenstackSDK, please use that as this will be removed in a future release.
Oct 09 16:16:13 compute-0 nova_compute[117331]: 2025-10-09 16:16:13.747 2 INFO nova.virt.libvirt.driver [None req-8a0ba780-e6a1-42f0-a97c-d0963d6a8726 e5044998ddc3419bb14cc08417add581 be4e2f9059cd48f5b44a612256e3fc7b - - default default] [instance: 12702257-b2eb-4842-bdf8-25e7a3b20038] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Oct 09 16:16:13 compute-0 nova_compute[117331]: 2025-10-09 16:16:13.780 2 DEBUG oslo_concurrency.processutils [None req-92a13bed-c081-4c7b-87f3-bafebefb79f2 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/70a451f7-5bde-42f1-a448-d48b3b24d9d6/disk --force-share --output=json execute /usr/lib/python3.12/site-packages/oslo_concurrency/processutils.py:349
Oct 09 16:16:13 compute-0 nova_compute[117331]: 2025-10-09 16:16:13.846 2 DEBUG oslo_concurrency.processutils [None req-92a13bed-c081-4c7b-87f3-bafebefb79f2 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/70a451f7-5bde-42f1-a448-d48b3b24d9d6/disk --force-share --output=json" returned: 0 in 0.066s execute /usr/lib/python3.12/site-packages/oslo_concurrency/processutils.py:372
Oct 09 16:16:13 compute-0 nova_compute[117331]: 2025-10-09 16:16:13.847 2 DEBUG oslo_concurrency.processutils [None req-92a13bed-c081-4c7b-87f3-bafebefb79f2 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/70a451f7-5bde-42f1-a448-d48b3b24d9d6/disk --force-share --output=json execute /usr/lib/python3.12/site-packages/oslo_concurrency/processutils.py:349
Oct 09 16:16:13 compute-0 nova_compute[117331]: 2025-10-09 16:16:13.938 2 DEBUG oslo_concurrency.processutils [None req-92a13bed-c081-4c7b-87f3-bafebefb79f2 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/70a451f7-5bde-42f1-a448-d48b3b24d9d6/disk --force-share --output=json" returned: 0 in 0.091s execute /usr/lib/python3.12/site-packages/oslo_concurrency/processutils.py:372
Oct 09 16:16:14 compute-0 nova_compute[117331]: 2025-10-09 16:16:14.106 2 WARNING nova.virt.libvirt.driver [None req-92a13bed-c081-4c7b-87f3-bafebefb79f2 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Oct 09 16:16:14 compute-0 nova_compute[117331]: 2025-10-09 16:16:14.108 2 DEBUG oslo_concurrency.processutils [None req-92a13bed-c081-4c7b-87f3-bafebefb79f2 - - - - - -] Running cmd (subprocess): env LANG=C uptime execute /usr/lib/python3.12/site-packages/oslo_concurrency/processutils.py:349
Oct 09 16:16:14 compute-0 nova_compute[117331]: 2025-10-09 16:16:14.135 2 DEBUG oslo_concurrency.processutils [None req-92a13bed-c081-4c7b-87f3-bafebefb79f2 - - - - - -] CMD "env LANG=C uptime" returned: 0 in 0.026s execute /usr/lib/python3.12/site-packages/oslo_concurrency/processutils.py:372
Oct 09 16:16:14 compute-0 nova_compute[117331]: 2025-10-09 16:16:14.136 2 DEBUG nova.compute.resource_tracker [None req-92a13bed-c081-4c7b-87f3-bafebefb79f2 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=5972MB free_disk=73.24105072021484GB free_vcpus=7 pci_devices=[{"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.12/site-packages/nova/compute/resource_tracker.py:1136
Oct 09 16:16:14 compute-0 nova_compute[117331]: 2025-10-09 16:16:14.136 2 DEBUG oslo_concurrency.lockutils [None req-92a13bed-c081-4c7b-87f3-bafebefb79f2 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:405
Oct 09 16:16:14 compute-0 nova_compute[117331]: 2025-10-09 16:16:14.136 2 DEBUG oslo_concurrency.lockutils [None req-92a13bed-c081-4c7b-87f3-bafebefb79f2 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:410
Oct 09 16:16:14 compute-0 nova_compute[117331]: 2025-10-09 16:16:14.220 2 DEBUG nova.network.neutron [None req-8a0ba780-e6a1-42f0-a97c-d0963d6a8726 e5044998ddc3419bb14cc08417add581 be4e2f9059cd48f5b44a612256e3fc7b - - default default] [instance: 12702257-b2eb-4842-bdf8-25e7a3b20038] Successfully created port: 9ac52a72-3ea9-4d92-8114-f77f5fe13293 _create_port_minimal /usr/lib/python3.12/site-packages/nova/network/neutron.py:550
Oct 09 16:16:14 compute-0 nova_compute[117331]: 2025-10-09 16:16:14.255 2 DEBUG nova.compute.manager [None req-8a0ba780-e6a1-42f0-a97c-d0963d6a8726 e5044998ddc3419bb14cc08417add581 be4e2f9059cd48f5b44a612256e3fc7b - - default default] [instance: 12702257-b2eb-4842-bdf8-25e7a3b20038] Start building block device mappings for instance. _build_resources /usr/lib/python3.12/site-packages/nova/compute/manager.py:2904
Oct 09 16:16:15 compute-0 nova_compute[117331]: 2025-10-09 16:16:15.129 2 DEBUG nova.network.neutron [None req-8a0ba780-e6a1-42f0-a97c-d0963d6a8726 e5044998ddc3419bb14cc08417add581 be4e2f9059cd48f5b44a612256e3fc7b - - default default] [instance: 12702257-b2eb-4842-bdf8-25e7a3b20038] Successfully updated port: 9ac52a72-3ea9-4d92-8114-f77f5fe13293 _update_port /usr/lib/python3.12/site-packages/nova/network/neutron.py:588
Oct 09 16:16:15 compute-0 nova_compute[117331]: 2025-10-09 16:16:15.173 2 DEBUG nova.compute.manager [req-62f8fc26-2a24-4ea8-843b-ae1b0398992b req-80f50659-a390-4c6d-8086-2166de002a78 ee15961ad2be4bada6c106ac4f69f422 c076c42271004e7aa86e84416e33f826 - - default default] [instance: 12702257-b2eb-4842-bdf8-25e7a3b20038] Received event network-changed-9ac52a72-3ea9-4d92-8114-f77f5fe13293 external_instance_event /usr/lib/python3.12/site-packages/nova/compute/manager.py:11812
Oct 09 16:16:15 compute-0 nova_compute[117331]: 2025-10-09 16:16:15.174 2 DEBUG nova.compute.manager [req-62f8fc26-2a24-4ea8-843b-ae1b0398992b req-80f50659-a390-4c6d-8086-2166de002a78 ee15961ad2be4bada6c106ac4f69f422 c076c42271004e7aa86e84416e33f826 - - default default] [instance: 12702257-b2eb-4842-bdf8-25e7a3b20038] Refreshing instance network info cache due to event network-changed-9ac52a72-3ea9-4d92-8114-f77f5fe13293. external_instance_event /usr/lib/python3.12/site-packages/nova/compute/manager.py:11817
Oct 09 16:16:15 compute-0 nova_compute[117331]: 2025-10-09 16:16:15.174 2 DEBUG oslo_concurrency.lockutils [req-62f8fc26-2a24-4ea8-843b-ae1b0398992b req-80f50659-a390-4c6d-8086-2166de002a78 ee15961ad2be4bada6c106ac4f69f422 c076c42271004e7aa86e84416e33f826 - - default default] Acquiring lock "refresh_cache-12702257-b2eb-4842-bdf8-25e7a3b20038" lock /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:313
Oct 09 16:16:15 compute-0 nova_compute[117331]: 2025-10-09 16:16:15.174 2 DEBUG oslo_concurrency.lockutils [req-62f8fc26-2a24-4ea8-843b-ae1b0398992b req-80f50659-a390-4c6d-8086-2166de002a78 ee15961ad2be4bada6c106ac4f69f422 c076c42271004e7aa86e84416e33f826 - - default default] Acquired lock "refresh_cache-12702257-b2eb-4842-bdf8-25e7a3b20038" lock /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:316
Oct 09 16:16:15 compute-0 nova_compute[117331]: 2025-10-09 16:16:15.174 2 DEBUG nova.network.neutron [req-62f8fc26-2a24-4ea8-843b-ae1b0398992b req-80f50659-a390-4c6d-8086-2166de002a78 ee15961ad2be4bada6c106ac4f69f422 c076c42271004e7aa86e84416e33f826 - - default default] [instance: 12702257-b2eb-4842-bdf8-25e7a3b20038] Refreshing network info cache for port 9ac52a72-3ea9-4d92-8114-f77f5fe13293 _get_instance_nw_info /usr/lib/python3.12/site-packages/nova/network/neutron.py:2067
Oct 09 16:16:15 compute-0 nova_compute[117331]: 2025-10-09 16:16:15.176 2 DEBUG nova.compute.resource_tracker [None req-92a13bed-c081-4c7b-87f3-bafebefb79f2 - - - - - -] Instance 70a451f7-5bde-42f1-a448-d48b3b24d9d6 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.12/site-packages/nova/compute/resource_tracker.py:1740
Oct 09 16:16:15 compute-0 nova_compute[117331]: 2025-10-09 16:16:15.176 2 DEBUG nova.compute.resource_tracker [None req-92a13bed-c081-4c7b-87f3-bafebefb79f2 - - - - - -] Instance 12702257-b2eb-4842-bdf8-25e7a3b20038 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.12/site-packages/nova/compute/resource_tracker.py:1740
Oct 09 16:16:15 compute-0 nova_compute[117331]: 2025-10-09 16:16:15.176 2 DEBUG nova.compute.resource_tracker [None req-92a13bed-c081-4c7b-87f3-bafebefb79f2 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 2 _report_final_resource_view /usr/lib/python3.12/site-packages/nova/compute/resource_tracker.py:1159
Oct 09 16:16:15 compute-0 nova_compute[117331]: 2025-10-09 16:16:15.177 2 DEBUG nova.compute.resource_tracker [None req-92a13bed-c081-4c7b-87f3-bafebefb79f2 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=768MB phys_disk=79GB used_disk=2GB total_vcpus=8 used_vcpus=2 pci_stats=[] stats={'failed_builds': '0', 'uptime': ' 16:16:14 up 25 min,  0 user,  load average: 0.29, 0.27, 0.30\n', 'num_instances': '2', 'num_vm_active': '1', 'num_task_None': '1', 'num_os_type_None': '2', 'num_proj_be4e2f9059cd48f5b44a612256e3fc7b': '2', 'io_workload': '1', 'num_vm_building': '1', 'num_task_networking': '1'} _report_final_resource_view /usr/lib/python3.12/site-packages/nova/compute/resource_tracker.py:1168
Oct 09 16:16:15 compute-0 nova_compute[117331]: 2025-10-09 16:16:15.238 2 DEBUG nova.compute.provider_tree [None req-92a13bed-c081-4c7b-87f3-bafebefb79f2 - - - - - -] Inventory has not changed in ProviderTree for provider: 593051b8-2000-437f-a915-2616fc8b1671 update_inventory /usr/lib/python3.12/site-packages/nova/compute/provider_tree.py:180
Oct 09 16:16:15 compute-0 sshd[52903]: drop connection #0 from [134.199.199.215]:36452 on [38.102.83.110]:22 penalty: failed authentication
Oct 09 16:16:15 compute-0 nova_compute[117331]: 2025-10-09 16:16:15.270 2 DEBUG nova.compute.manager [None req-8a0ba780-e6a1-42f0-a97c-d0963d6a8726 e5044998ddc3419bb14cc08417add581 be4e2f9059cd48f5b44a612256e3fc7b - - default default] [instance: 12702257-b2eb-4842-bdf8-25e7a3b20038] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.12/site-packages/nova/compute/manager.py:2678
Oct 09 16:16:15 compute-0 nova_compute[117331]: 2025-10-09 16:16:15.272 2 DEBUG nova.virt.libvirt.driver [None req-8a0ba780-e6a1-42f0-a97c-d0963d6a8726 e5044998ddc3419bb14cc08417add581 be4e2f9059cd48f5b44a612256e3fc7b - - default default] [instance: 12702257-b2eb-4842-bdf8-25e7a3b20038] Creating instance directory _create_image /usr/lib/python3.12/site-packages/nova/virt/libvirt/driver.py:5138
Oct 09 16:16:15 compute-0 nova_compute[117331]: 2025-10-09 16:16:15.273 2 INFO nova.virt.libvirt.driver [None req-8a0ba780-e6a1-42f0-a97c-d0963d6a8726 e5044998ddc3419bb14cc08417add581 be4e2f9059cd48f5b44a612256e3fc7b - - default default] [instance: 12702257-b2eb-4842-bdf8-25e7a3b20038] Creating image(s)
Oct 09 16:16:15 compute-0 nova_compute[117331]: 2025-10-09 16:16:15.274 2 DEBUG oslo_concurrency.lockutils [None req-8a0ba780-e6a1-42f0-a97c-d0963d6a8726 e5044998ddc3419bb14cc08417add581 be4e2f9059cd48f5b44a612256e3fc7b - - default default] Acquiring lock "/var/lib/nova/instances/12702257-b2eb-4842-bdf8-25e7a3b20038/disk.info" by "nova.virt.libvirt.imagebackend.Image.resolve_driver_format.<locals>.write_to_disk_info_file" inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:405
Oct 09 16:16:15 compute-0 nova_compute[117331]: 2025-10-09 16:16:15.274 2 DEBUG oslo_concurrency.lockutils [None req-8a0ba780-e6a1-42f0-a97c-d0963d6a8726 e5044998ddc3419bb14cc08417add581 be4e2f9059cd48f5b44a612256e3fc7b - - default default] Lock "/var/lib/nova/instances/12702257-b2eb-4842-bdf8-25e7a3b20038/disk.info" acquired by "nova.virt.libvirt.imagebackend.Image.resolve_driver_format.<locals>.write_to_disk_info_file" :: waited 0.000s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:410
Oct 09 16:16:15 compute-0 nova_compute[117331]: 2025-10-09 16:16:15.275 2 DEBUG oslo_concurrency.lockutils [None req-8a0ba780-e6a1-42f0-a97c-d0963d6a8726 e5044998ddc3419bb14cc08417add581 be4e2f9059cd48f5b44a612256e3fc7b - - default default] Lock "/var/lib/nova/instances/12702257-b2eb-4842-bdf8-25e7a3b20038/disk.info" "released" by "nova.virt.libvirt.imagebackend.Image.resolve_driver_format.<locals>.write_to_disk_info_file" :: held 0.001s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:424
Oct 09 16:16:15 compute-0 nova_compute[117331]: 2025-10-09 16:16:15.276 2 DEBUG oslo_utils.imageutils.format_inspector [None req-8a0ba780-e6a1-42f0-a97c-d0963d6a8726 e5044998ddc3419bb14cc08417add581 be4e2f9059cd48f5b44a612256e3fc7b - - default default] Format inspector for vmdk does not match, excluding from consideration (Signature KDMV not found: b'\xebH\x90\x00') _process_chunk /usr/lib/python3.12/site-packages/oslo_utils/imageutils/format_inspector.py:1365
Oct 09 16:16:15 compute-0 nova_compute[117331]: 2025-10-09 16:16:15.281 2 DEBUG oslo_utils.imageutils.format_inspector [None req-8a0ba780-e6a1-42f0-a97c-d0963d6a8726 e5044998ddc3419bb14cc08417add581 be4e2f9059cd48f5b44a612256e3fc7b - - default default] Format inspector for vhdx does not match, excluding from consideration (Region signature not found at 30000) _process_chunk /usr/lib/python3.12/site-packages/oslo_utils/imageutils/format_inspector.py:1365
Oct 09 16:16:15 compute-0 nova_compute[117331]: 2025-10-09 16:16:15.283 2 DEBUG oslo_concurrency.processutils [None req-8a0ba780-e6a1-42f0-a97c-d0963d6a8726 e5044998ddc3419bb14cc08417add581 be4e2f9059cd48f5b44a612256e3fc7b - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/cea3dacfdea0f3734ae526b812744e847bc2d356 --force-share --output=json execute /usr/lib/python3.12/site-packages/oslo_concurrency/processutils.py:349
Oct 09 16:16:15 compute-0 nova_compute[117331]: 2025-10-09 16:16:15.340 2 DEBUG oslo_concurrency.processutils [None req-8a0ba780-e6a1-42f0-a97c-d0963d6a8726 e5044998ddc3419bb14cc08417add581 be4e2f9059cd48f5b44a612256e3fc7b - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/cea3dacfdea0f3734ae526b812744e847bc2d356 --force-share --output=json" returned: 0 in 0.057s execute /usr/lib/python3.12/site-packages/oslo_concurrency/processutils.py:372
Oct 09 16:16:15 compute-0 nova_compute[117331]: 2025-10-09 16:16:15.341 2 DEBUG oslo_concurrency.lockutils [None req-8a0ba780-e6a1-42f0-a97c-d0963d6a8726 e5044998ddc3419bb14cc08417add581 be4e2f9059cd48f5b44a612256e3fc7b - - default default] Acquiring lock "cea3dacfdea0f3734ae526b812744e847bc2d356" by "nova.virt.libvirt.imagebackend.Qcow2.create_image.<locals>.create_qcow2_image" inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:405
Oct 09 16:16:15 compute-0 nova_compute[117331]: 2025-10-09 16:16:15.342 2 DEBUG oslo_concurrency.lockutils [None req-8a0ba780-e6a1-42f0-a97c-d0963d6a8726 e5044998ddc3419bb14cc08417add581 be4e2f9059cd48f5b44a612256e3fc7b - - default default] Lock "cea3dacfdea0f3734ae526b812744e847bc2d356" acquired by "nova.virt.libvirt.imagebackend.Qcow2.create_image.<locals>.create_qcow2_image" :: waited 0.001s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:410
Oct 09 16:16:15 compute-0 nova_compute[117331]: 2025-10-09 16:16:15.342 2 DEBUG oslo_utils.imageutils.format_inspector [None req-8a0ba780-e6a1-42f0-a97c-d0963d6a8726 e5044998ddc3419bb14cc08417add581 be4e2f9059cd48f5b44a612256e3fc7b - - default default] Format inspector for vmdk does not match, excluding from consideration (Signature KDMV not found: b'\xebH\x90\x00') _process_chunk /usr/lib/python3.12/site-packages/oslo_utils/imageutils/format_inspector.py:1365
Oct 09 16:16:15 compute-0 nova_compute[117331]: 2025-10-09 16:16:15.345 2 DEBUG oslo_utils.imageutils.format_inspector [None req-8a0ba780-e6a1-42f0-a97c-d0963d6a8726 e5044998ddc3419bb14cc08417add581 be4e2f9059cd48f5b44a612256e3fc7b - - default default] Format inspector for vhdx does not match, excluding from consideration (Region signature not found at 30000) _process_chunk /usr/lib/python3.12/site-packages/oslo_utils/imageutils/format_inspector.py:1365
Oct 09 16:16:15 compute-0 nova_compute[117331]: 2025-10-09 16:16:15.345 2 DEBUG oslo_concurrency.processutils [None req-8a0ba780-e6a1-42f0-a97c-d0963d6a8726 e5044998ddc3419bb14cc08417add581 be4e2f9059cd48f5b44a612256e3fc7b - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/cea3dacfdea0f3734ae526b812744e847bc2d356 --force-share --output=json execute /usr/lib/python3.12/site-packages/oslo_concurrency/processutils.py:349
Oct 09 16:16:15 compute-0 nova_compute[117331]: 2025-10-09 16:16:15.399 2 DEBUG oslo_concurrency.processutils [None req-8a0ba780-e6a1-42f0-a97c-d0963d6a8726 e5044998ddc3419bb14cc08417add581 be4e2f9059cd48f5b44a612256e3fc7b - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/cea3dacfdea0f3734ae526b812744e847bc2d356 --force-share --output=json" returned: 0 in 0.054s execute /usr/lib/python3.12/site-packages/oslo_concurrency/processutils.py:372
Oct 09 16:16:15 compute-0 nova_compute[117331]: 2025-10-09 16:16:15.400 2 DEBUG oslo_concurrency.processutils [None req-8a0ba780-e6a1-42f0-a97c-d0963d6a8726 e5044998ddc3419bb14cc08417add581 be4e2f9059cd48f5b44a612256e3fc7b - - default default] Running cmd (subprocess): env LC_ALL=C LANG=C qemu-img create -f qcow2 -o backing_file=/var/lib/nova/instances/_base/cea3dacfdea0f3734ae526b812744e847bc2d356,backing_fmt=raw /var/lib/nova/instances/12702257-b2eb-4842-bdf8-25e7a3b20038/disk 1073741824 execute /usr/lib/python3.12/site-packages/oslo_concurrency/processutils.py:349
Oct 09 16:16:15 compute-0 nova_compute[117331]: 2025-10-09 16:16:15.437 2 DEBUG oslo_concurrency.processutils [None req-8a0ba780-e6a1-42f0-a97c-d0963d6a8726 e5044998ddc3419bb14cc08417add581 be4e2f9059cd48f5b44a612256e3fc7b - - default default] CMD "env LC_ALL=C LANG=C qemu-img create -f qcow2 -o backing_file=/var/lib/nova/instances/_base/cea3dacfdea0f3734ae526b812744e847bc2d356,backing_fmt=raw /var/lib/nova/instances/12702257-b2eb-4842-bdf8-25e7a3b20038/disk 1073741824" returned: 0 in 0.037s execute /usr/lib/python3.12/site-packages/oslo_concurrency/processutils.py:372
Oct 09 16:16:15 compute-0 nova_compute[117331]: 2025-10-09 16:16:15.438 2 DEBUG oslo_concurrency.lockutils [None req-8a0ba780-e6a1-42f0-a97c-d0963d6a8726 e5044998ddc3419bb14cc08417add581 be4e2f9059cd48f5b44a612256e3fc7b - - default default] Lock "cea3dacfdea0f3734ae526b812744e847bc2d356" "released" by "nova.virt.libvirt.imagebackend.Qcow2.create_image.<locals>.create_qcow2_image" :: held 0.096s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:424
Oct 09 16:16:15 compute-0 nova_compute[117331]: 2025-10-09 16:16:15.438 2 DEBUG oslo_concurrency.processutils [None req-8a0ba780-e6a1-42f0-a97c-d0963d6a8726 e5044998ddc3419bb14cc08417add581 be4e2f9059cd48f5b44a612256e3fc7b - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/cea3dacfdea0f3734ae526b812744e847bc2d356 --force-share --output=json execute /usr/lib/python3.12/site-packages/oslo_concurrency/processutils.py:349
Oct 09 16:16:15 compute-0 nova_compute[117331]: 2025-10-09 16:16:15.514 2 DEBUG oslo_concurrency.processutils [None req-8a0ba780-e6a1-42f0-a97c-d0963d6a8726 e5044998ddc3419bb14cc08417add581 be4e2f9059cd48f5b44a612256e3fc7b - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/cea3dacfdea0f3734ae526b812744e847bc2d356 --force-share --output=json" returned: 0 in 0.075s execute /usr/lib/python3.12/site-packages/oslo_concurrency/processutils.py:372
Oct 09 16:16:15 compute-0 nova_compute[117331]: 2025-10-09 16:16:15.515 2 DEBUG nova.virt.disk.api [None req-8a0ba780-e6a1-42f0-a97c-d0963d6a8726 e5044998ddc3419bb14cc08417add581 be4e2f9059cd48f5b44a612256e3fc7b - - default default] Checking if we can resize image /var/lib/nova/instances/12702257-b2eb-4842-bdf8-25e7a3b20038/disk. size=1073741824 can_resize_image /usr/lib/python3.12/site-packages/nova/virt/disk/api.py:164
Oct 09 16:16:15 compute-0 nova_compute[117331]: 2025-10-09 16:16:15.516 2 DEBUG oslo_concurrency.processutils [None req-8a0ba780-e6a1-42f0-a97c-d0963d6a8726 e5044998ddc3419bb14cc08417add581 be4e2f9059cd48f5b44a612256e3fc7b - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/12702257-b2eb-4842-bdf8-25e7a3b20038/disk --force-share --output=json execute /usr/lib/python3.12/site-packages/oslo_concurrency/processutils.py:349
Oct 09 16:16:15 compute-0 nova_compute[117331]: 2025-10-09 16:16:15.575 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 09 16:16:15 compute-0 nova_compute[117331]: 2025-10-09 16:16:15.582 2 DEBUG oslo_concurrency.processutils [None req-8a0ba780-e6a1-42f0-a97c-d0963d6a8726 e5044998ddc3419bb14cc08417add581 be4e2f9059cd48f5b44a612256e3fc7b - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/12702257-b2eb-4842-bdf8-25e7a3b20038/disk --force-share --output=json" returned: 0 in 0.066s execute /usr/lib/python3.12/site-packages/oslo_concurrency/processutils.py:372
Oct 09 16:16:15 compute-0 nova_compute[117331]: 2025-10-09 16:16:15.583 2 DEBUG nova.virt.disk.api [None req-8a0ba780-e6a1-42f0-a97c-d0963d6a8726 e5044998ddc3419bb14cc08417add581 be4e2f9059cd48f5b44a612256e3fc7b - - default default] Cannot resize image /var/lib/nova/instances/12702257-b2eb-4842-bdf8-25e7a3b20038/disk to a smaller size. can_resize_image /usr/lib/python3.12/site-packages/nova/virt/disk/api.py:170
Oct 09 16:16:15 compute-0 nova_compute[117331]: 2025-10-09 16:16:15.583 2 DEBUG nova.virt.libvirt.driver [None req-8a0ba780-e6a1-42f0-a97c-d0963d6a8726 e5044998ddc3419bb14cc08417add581 be4e2f9059cd48f5b44a612256e3fc7b - - default default] [instance: 12702257-b2eb-4842-bdf8-25e7a3b20038] Created local disks _create_image /usr/lib/python3.12/site-packages/nova/virt/libvirt/driver.py:5270
Oct 09 16:16:15 compute-0 nova_compute[117331]: 2025-10-09 16:16:15.584 2 DEBUG nova.virt.libvirt.driver [None req-8a0ba780-e6a1-42f0-a97c-d0963d6a8726 e5044998ddc3419bb14cc08417add581 be4e2f9059cd48f5b44a612256e3fc7b - - default default] [instance: 12702257-b2eb-4842-bdf8-25e7a3b20038] Ensure instance console log exists: /var/lib/nova/instances/12702257-b2eb-4842-bdf8-25e7a3b20038/console.log _ensure_console_log_for_instance /usr/lib/python3.12/site-packages/nova/virt/libvirt/driver.py:5017
Oct 09 16:16:15 compute-0 nova_compute[117331]: 2025-10-09 16:16:15.584 2 DEBUG oslo_concurrency.lockutils [None req-8a0ba780-e6a1-42f0-a97c-d0963d6a8726 e5044998ddc3419bb14cc08417add581 be4e2f9059cd48f5b44a612256e3fc7b - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:405
Oct 09 16:16:15 compute-0 nova_compute[117331]: 2025-10-09 16:16:15.584 2 DEBUG oslo_concurrency.lockutils [None req-8a0ba780-e6a1-42f0-a97c-d0963d6a8726 e5044998ddc3419bb14cc08417add581 be4e2f9059cd48f5b44a612256e3fc7b - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:410
Oct 09 16:16:15 compute-0 nova_compute[117331]: 2025-10-09 16:16:15.584 2 DEBUG oslo_concurrency.lockutils [None req-8a0ba780-e6a1-42f0-a97c-d0963d6a8726 e5044998ddc3419bb14cc08417add581 be4e2f9059cd48f5b44a612256e3fc7b - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:424
Oct 09 16:16:15 compute-0 nova_compute[117331]: 2025-10-09 16:16:15.635 2 DEBUG oslo_concurrency.lockutils [None req-8a0ba780-e6a1-42f0-a97c-d0963d6a8726 e5044998ddc3419bb14cc08417add581 be4e2f9059cd48f5b44a612256e3fc7b - - default default] Acquiring lock "refresh_cache-12702257-b2eb-4842-bdf8-25e7a3b20038" lock /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:313
Oct 09 16:16:15 compute-0 nova_compute[117331]: 2025-10-09 16:16:15.685 2 WARNING neutronclient.v2_0.client [req-62f8fc26-2a24-4ea8-843b-ae1b0398992b req-80f50659-a390-4c6d-8086-2166de002a78 ee15961ad2be4bada6c106ac4f69f422 c076c42271004e7aa86e84416e33f826 - - default default] The python binding code in neutronclient is deprecated in favor of OpenstackSDK, please use that as this will be removed in a future release.
Oct 09 16:16:15 compute-0 nova_compute[117331]: 2025-10-09 16:16:15.745 2 DEBUG nova.scheduler.client.report [None req-92a13bed-c081-4c7b-87f3-bafebefb79f2 - - - - - -] Inventory has not changed for provider 593051b8-2000-437f-a915-2616fc8b1671 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 79, 'reserved': 1, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.12/site-packages/nova/scheduler/client/report.py:958
Oct 09 16:16:16 compute-0 nova_compute[117331]: 2025-10-09 16:16:16.255 2 DEBUG nova.compute.resource_tracker [None req-92a13bed-c081-4c7b-87f3-bafebefb79f2 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.12/site-packages/nova/compute/resource_tracker.py:1097
Oct 09 16:16:16 compute-0 nova_compute[117331]: 2025-10-09 16:16:16.255 2 DEBUG oslo_concurrency.lockutils [None req-92a13bed-c081-4c7b-87f3-bafebefb79f2 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 2.119s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:424
Oct 09 16:16:16 compute-0 nova_compute[117331]: 2025-10-09 16:16:16.310 2 DEBUG nova.network.neutron [req-62f8fc26-2a24-4ea8-843b-ae1b0398992b req-80f50659-a390-4c6d-8086-2166de002a78 ee15961ad2be4bada6c106ac4f69f422 c076c42271004e7aa86e84416e33f826 - - default default] [instance: 12702257-b2eb-4842-bdf8-25e7a3b20038] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.12/site-packages/nova/network/neutron.py:3383
Oct 09 16:16:16 compute-0 nova_compute[117331]: 2025-10-09 16:16:16.444 2 DEBUG nova.network.neutron [req-62f8fc26-2a24-4ea8-843b-ae1b0398992b req-80f50659-a390-4c6d-8086-2166de002a78 ee15961ad2be4bada6c106ac4f69f422 c076c42271004e7aa86e84416e33f826 - - default default] [instance: 12702257-b2eb-4842-bdf8-25e7a3b20038] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.12/site-packages/nova/network/neutron.py:116
Oct 09 16:16:16 compute-0 nova_compute[117331]: 2025-10-09 16:16:16.596 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 09 16:16:16 compute-0 nova_compute[117331]: 2025-10-09 16:16:16.951 2 DEBUG oslo_concurrency.lockutils [req-62f8fc26-2a24-4ea8-843b-ae1b0398992b req-80f50659-a390-4c6d-8086-2166de002a78 ee15961ad2be4bada6c106ac4f69f422 c076c42271004e7aa86e84416e33f826 - - default default] Releasing lock "refresh_cache-12702257-b2eb-4842-bdf8-25e7a3b20038" lock /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:334
Oct 09 16:16:16 compute-0 nova_compute[117331]: 2025-10-09 16:16:16.952 2 DEBUG oslo_concurrency.lockutils [None req-8a0ba780-e6a1-42f0-a97c-d0963d6a8726 e5044998ddc3419bb14cc08417add581 be4e2f9059cd48f5b44a612256e3fc7b - - default default] Acquired lock "refresh_cache-12702257-b2eb-4842-bdf8-25e7a3b20038" lock /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:316
Oct 09 16:16:16 compute-0 nova_compute[117331]: 2025-10-09 16:16:16.952 2 DEBUG nova.network.neutron [None req-8a0ba780-e6a1-42f0-a97c-d0963d6a8726 e5044998ddc3419bb14cc08417add581 be4e2f9059cd48f5b44a612256e3fc7b - - default default] [instance: 12702257-b2eb-4842-bdf8-25e7a3b20038] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.12/site-packages/nova/network/neutron.py:2070
Oct 09 16:16:17 compute-0 nova_compute[117331]: 2025-10-09 16:16:17.254 2 DEBUG oslo_service.periodic_task [None req-92a13bed-c081-4c7b-87f3-bafebefb79f2 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.12/site-packages/oslo_service/periodic_task.py:210
Oct 09 16:16:17 compute-0 nova_compute[117331]: 2025-10-09 16:16:17.255 2 DEBUG oslo_service.periodic_task [None req-92a13bed-c081-4c7b-87f3-bafebefb79f2 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.12/site-packages/oslo_service/periodic_task.py:210
Oct 09 16:16:17 compute-0 nova_compute[117331]: 2025-10-09 16:16:17.255 2 DEBUG oslo_service.periodic_task [None req-92a13bed-c081-4c7b-87f3-bafebefb79f2 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.12/site-packages/oslo_service/periodic_task.py:210
Oct 09 16:16:17 compute-0 nova_compute[117331]: 2025-10-09 16:16:17.255 2 DEBUG oslo_service.periodic_task [None req-92a13bed-c081-4c7b-87f3-bafebefb79f2 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.12/site-packages/oslo_service/periodic_task.py:210
Oct 09 16:16:17 compute-0 nova_compute[117331]: 2025-10-09 16:16:17.255 2 DEBUG oslo_service.periodic_task [None req-92a13bed-c081-4c7b-87f3-bafebefb79f2 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.12/site-packages/oslo_service/periodic_task.py:210
Oct 09 16:16:17 compute-0 nova_compute[117331]: 2025-10-09 16:16:17.255 2 DEBUG nova.compute.manager [None req-92a13bed-c081-4c7b-87f3-bafebefb79f2 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.12/site-packages/nova/compute/manager.py:11228
Oct 09 16:16:17 compute-0 nova_compute[117331]: 2025-10-09 16:16:17.306 2 DEBUG oslo_service.periodic_task [None req-92a13bed-c081-4c7b-87f3-bafebefb79f2 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.12/site-packages/oslo_service/periodic_task.py:210
Oct 09 16:16:17 compute-0 nova_compute[117331]: 2025-10-09 16:16:17.675 2 DEBUG nova.network.neutron [None req-8a0ba780-e6a1-42f0-a97c-d0963d6a8726 e5044998ddc3419bb14cc08417add581 be4e2f9059cd48f5b44a612256e3fc7b - - default default] [instance: 12702257-b2eb-4842-bdf8-25e7a3b20038] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.12/site-packages/nova/network/neutron.py:3383
Oct 09 16:16:17 compute-0 nova_compute[117331]: 2025-10-09 16:16:17.911 2 WARNING neutronclient.v2_0.client [None req-8a0ba780-e6a1-42f0-a97c-d0963d6a8726 e5044998ddc3419bb14cc08417add581 be4e2f9059cd48f5b44a612256e3fc7b - - default default] The python binding code in neutronclient is deprecated in favor of OpenstackSDK, please use that as this will be removed in a future release.
Oct 09 16:16:18 compute-0 nova_compute[117331]: 2025-10-09 16:16:18.085 2 DEBUG nova.network.neutron [None req-8a0ba780-e6a1-42f0-a97c-d0963d6a8726 e5044998ddc3419bb14cc08417add581 be4e2f9059cd48f5b44a612256e3fc7b - - default default] [instance: 12702257-b2eb-4842-bdf8-25e7a3b20038] Updating instance_info_cache with network_info: [{"id": "9ac52a72-3ea9-4d92-8114-f77f5fe13293", "address": "fa:16:3e:bf:94:75", "network": {"id": "7ed4244a-b510-4df6-9ffd-2f86603932fc", "bridge": "br-int", "label": "tempest-TestExecuteActionsViaActuator-1219246163-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "153d68ab33e64b958060574bb1741725", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap9ac52a72-3e", "ovs_interfaceid": "9ac52a72-3ea9-4d92-8114-f77f5fe13293", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.12/site-packages/nova/network/neutron.py:116
Oct 09 16:16:18 compute-0 nova_compute[117331]: 2025-10-09 16:16:18.302 2 DEBUG oslo_service.periodic_task [None req-92a13bed-c081-4c7b-87f3-bafebefb79f2 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.12/site-packages/oslo_service/periodic_task.py:210
Oct 09 16:16:18 compute-0 nova_compute[117331]: 2025-10-09 16:16:18.590 2 DEBUG oslo_concurrency.lockutils [None req-8a0ba780-e6a1-42f0-a97c-d0963d6a8726 e5044998ddc3419bb14cc08417add581 be4e2f9059cd48f5b44a612256e3fc7b - - default default] Releasing lock "refresh_cache-12702257-b2eb-4842-bdf8-25e7a3b20038" lock /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:334
Oct 09 16:16:18 compute-0 nova_compute[117331]: 2025-10-09 16:16:18.591 2 DEBUG nova.compute.manager [None req-8a0ba780-e6a1-42f0-a97c-d0963d6a8726 e5044998ddc3419bb14cc08417add581 be4e2f9059cd48f5b44a612256e3fc7b - - default default] [instance: 12702257-b2eb-4842-bdf8-25e7a3b20038] Instance network_info: |[{"id": "9ac52a72-3ea9-4d92-8114-f77f5fe13293", "address": "fa:16:3e:bf:94:75", "network": {"id": "7ed4244a-b510-4df6-9ffd-2f86603932fc", "bridge": "br-int", "label": "tempest-TestExecuteActionsViaActuator-1219246163-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "153d68ab33e64b958060574bb1741725", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap9ac52a72-3e", "ovs_interfaceid": "9ac52a72-3ea9-4d92-8114-f77f5fe13293", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.12/site-packages/nova/compute/manager.py:2031
Oct 09 16:16:18 compute-0 nova_compute[117331]: 2025-10-09 16:16:18.593 2 DEBUG nova.virt.libvirt.driver [None req-8a0ba780-e6a1-42f0-a97c-d0963d6a8726 e5044998ddc3419bb14cc08417add581 be4e2f9059cd48f5b44a612256e3fc7b - - default default] [instance: 12702257-b2eb-4842-bdf8-25e7a3b20038] Start _get_guest_xml network_info=[{"id": "9ac52a72-3ea9-4d92-8114-f77f5fe13293", "address": "fa:16:3e:bf:94:75", "network": {"id": "7ed4244a-b510-4df6-9ffd-2f86603932fc", "bridge": "br-int", "label": "tempest-TestExecuteActionsViaActuator-1219246163-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "153d68ab33e64b958060574bb1741725", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap9ac52a72-3e", "ovs_interfaceid": "9ac52a72-3ea9-4d92-8114-f77f5fe13293", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-09T16:07:02Z,direct_url=<?>,disk_format='qcow2',id=b7d6e0af-25e4-4227-9dc6-43143898ceee,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='b30e8cf5e10742f190212b4cb97ce2c9',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-09T16:07:04Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'device_type': 'disk', 'encrypted': False, 'size': 0, 'encryption_format': None, 'guest_format': None, 'boot_index': 0, 'device_name': '/dev/vda', 'disk_bus': 'virtio', 'encryption_options': None, 'encryption_secret_uuid': None, 'image_id': 'b7d6e0af-25e4-4227-9dc6-43143898ceee'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None}share_info=None _get_guest_xml /usr/lib/python3.12/site-packages/nova/virt/libvirt/driver.py:8046
Oct 09 16:16:18 compute-0 nova_compute[117331]: 2025-10-09 16:16:18.597 2 WARNING nova.virt.libvirt.driver [None req-8a0ba780-e6a1-42f0-a97c-d0963d6a8726 e5044998ddc3419bb14cc08417add581 be4e2f9059cd48f5b44a612256e3fc7b - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Oct 09 16:16:18 compute-0 nova_compute[117331]: 2025-10-09 16:16:18.598 2 DEBUG nova.virt.driver [None req-8a0ba780-e6a1-42f0-a97c-d0963d6a8726 e5044998ddc3419bb14cc08417add581 be4e2f9059cd48f5b44a612256e3fc7b - - default default] InstanceDriverMetadata: InstanceDriverMetadata(root_type='image', root_id='b7d6e0af-25e4-4227-9dc6-43143898ceee', instance_meta=NovaInstanceMeta(name='tempest-TestExecuteActionsViaActuator-server-925081779', uuid='12702257-b2eb-4842-bdf8-25e7a3b20038'), owner=OwnerMeta(userid='e5044998ddc3419bb14cc08417add581', username='tempest-TestExecuteActionsViaActuator-1347788182-project-admin', projectid='be4e2f9059cd48f5b44a612256e3fc7b', projectname='tempest-TestExecuteActionsViaActuator-1347788182'), image=ImageMeta(id='b7d6e0af-25e4-4227-9dc6-43143898ceee', name=None, container_format='bare', disk_format='qcow2', min_disk=1, min_ram=0, properties={'hw_rng_model': 'virtio'}), flavor=FlavorMeta(name='m1.nano', flavorid='5aeb5bf9-70f3-4ab6-b418-2c3319a7d2b3', memory_mb=128, vcpus=1, root_gb=1, ephemeral_gb=0, extra_specs={'hw_rng:allowed': 'True'}, swap=0), network_info=[{"id": "9ac52a72-3ea9-4d92-8114-f77f5fe13293", "address": "fa:16:3e:bf:94:75", "network": {"id": "7ed4244a-b510-4df6-9ffd-2f86603932fc", "bridge": "br-int", "label": "tempest-TestExecuteActionsViaActuator-1219246163-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "153d68ab33e64b958060574bb1741725", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap9ac52a72-3e", "ovs_interfaceid": 
"9ac52a72-3ea9-4d92-8114-f77f5fe13293", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}], nova_package='32.1.0-0.20251008143712.076498e.el10', creation_time=1760026578.5985472) get_instance_driver_metadata /usr/lib/python3.12/site-packages/nova/virt/driver.py:437
Oct 09 16:16:18 compute-0 nova_compute[117331]: 2025-10-09 16:16:18.603 2 DEBUG nova.virt.libvirt.host [None req-8a0ba780-e6a1-42f0-a97c-d0963d6a8726 e5044998ddc3419bb14cc08417add581 be4e2f9059cd48f5b44a612256e3fc7b - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.12/site-packages/nova/virt/libvirt/host.py:1698
Oct 09 16:16:18 compute-0 nova_compute[117331]: 2025-10-09 16:16:18.604 2 DEBUG nova.virt.libvirt.host [None req-8a0ba780-e6a1-42f0-a97c-d0963d6a8726 e5044998ddc3419bb14cc08417add581 be4e2f9059cd48f5b44a612256e3fc7b - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.12/site-packages/nova/virt/libvirt/host.py:1708
Oct 09 16:16:18 compute-0 nova_compute[117331]: 2025-10-09 16:16:18.607 2 DEBUG nova.virt.libvirt.host [None req-8a0ba780-e6a1-42f0-a97c-d0963d6a8726 e5044998ddc3419bb14cc08417add581 be4e2f9059cd48f5b44a612256e3fc7b - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.12/site-packages/nova/virt/libvirt/host.py:1717
Oct 09 16:16:18 compute-0 nova_compute[117331]: 2025-10-09 16:16:18.608 2 DEBUG nova.virt.libvirt.host [None req-8a0ba780-e6a1-42f0-a97c-d0963d6a8726 e5044998ddc3419bb14cc08417add581 be4e2f9059cd48f5b44a612256e3fc7b - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.12/site-packages/nova/virt/libvirt/host.py:1724
Oct 09 16:16:18 compute-0 nova_compute[117331]: 2025-10-09 16:16:18.608 2 DEBUG nova.virt.libvirt.driver [None req-8a0ba780-e6a1-42f0-a97c-d0963d6a8726 e5044998ddc3419bb14cc08417add581 be4e2f9059cd48f5b44a612256e3fc7b - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.12/site-packages/nova/virt/libvirt/driver.py:5809
Oct 09 16:16:18 compute-0 nova_compute[117331]: 2025-10-09 16:16:18.609 2 DEBUG nova.virt.hardware [None req-8a0ba780-e6a1-42f0-a97c-d0963d6a8726 e5044998ddc3419bb14cc08417add581 be4e2f9059cd48f5b44a612256e3fc7b - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-10-09T16:07:00Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='5aeb5bf9-70f3-4ab6-b418-2c3319a7d2b3',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-09T16:07:02Z,direct_url=<?>,disk_format='qcow2',id=b7d6e0af-25e4-4227-9dc6-43143898ceee,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='b30e8cf5e10742f190212b4cb97ce2c9',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-09T16:07:04Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.12/site-packages/nova/virt/hardware.py:571
Oct 09 16:16:18 compute-0 nova_compute[117331]: 2025-10-09 16:16:18.609 2 DEBUG nova.virt.hardware [None req-8a0ba780-e6a1-42f0-a97c-d0963d6a8726 e5044998ddc3419bb14cc08417add581 be4e2f9059cd48f5b44a612256e3fc7b - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.12/site-packages/nova/virt/hardware.py:356
Oct 09 16:16:18 compute-0 nova_compute[117331]: 2025-10-09 16:16:18.609 2 DEBUG nova.virt.hardware [None req-8a0ba780-e6a1-42f0-a97c-d0963d6a8726 e5044998ddc3419bb14cc08417add581 be4e2f9059cd48f5b44a612256e3fc7b - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.12/site-packages/nova/virt/hardware.py:360
Oct 09 16:16:18 compute-0 nova_compute[117331]: 2025-10-09 16:16:18.609 2 DEBUG nova.virt.hardware [None req-8a0ba780-e6a1-42f0-a97c-d0963d6a8726 e5044998ddc3419bb14cc08417add581 be4e2f9059cd48f5b44a612256e3fc7b - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.12/site-packages/nova/virt/hardware.py:396
Oct 09 16:16:18 compute-0 nova_compute[117331]: 2025-10-09 16:16:18.610 2 DEBUG nova.virt.hardware [None req-8a0ba780-e6a1-42f0-a97c-d0963d6a8726 e5044998ddc3419bb14cc08417add581 be4e2f9059cd48f5b44a612256e3fc7b - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.12/site-packages/nova/virt/hardware.py:400
Oct 09 16:16:18 compute-0 nova_compute[117331]: 2025-10-09 16:16:18.610 2 DEBUG nova.virt.hardware [None req-8a0ba780-e6a1-42f0-a97c-d0963d6a8726 e5044998ddc3419bb14cc08417add581 be4e2f9059cd48f5b44a612256e3fc7b - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.12/site-packages/nova/virt/hardware.py:438
Oct 09 16:16:18 compute-0 nova_compute[117331]: 2025-10-09 16:16:18.610 2 DEBUG nova.virt.hardware [None req-8a0ba780-e6a1-42f0-a97c-d0963d6a8726 e5044998ddc3419bb14cc08417add581 be4e2f9059cd48f5b44a612256e3fc7b - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.12/site-packages/nova/virt/hardware.py:577
Oct 09 16:16:18 compute-0 nova_compute[117331]: 2025-10-09 16:16:18.610 2 DEBUG nova.virt.hardware [None req-8a0ba780-e6a1-42f0-a97c-d0963d6a8726 e5044998ddc3419bb14cc08417add581 be4e2f9059cd48f5b44a612256e3fc7b - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.12/site-packages/nova/virt/hardware.py:479
Oct 09 16:16:18 compute-0 nova_compute[117331]: 2025-10-09 16:16:18.611 2 DEBUG nova.virt.hardware [None req-8a0ba780-e6a1-42f0-a97c-d0963d6a8726 e5044998ddc3419bb14cc08417add581 be4e2f9059cd48f5b44a612256e3fc7b - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.12/site-packages/nova/virt/hardware.py:509
Oct 09 16:16:18 compute-0 nova_compute[117331]: 2025-10-09 16:16:18.611 2 DEBUG nova.virt.hardware [None req-8a0ba780-e6a1-42f0-a97c-d0963d6a8726 e5044998ddc3419bb14cc08417add581 be4e2f9059cd48f5b44a612256e3fc7b - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.12/site-packages/nova/virt/hardware.py:583
Oct 09 16:16:18 compute-0 nova_compute[117331]: 2025-10-09 16:16:18.611 2 DEBUG nova.virt.hardware [None req-8a0ba780-e6a1-42f0-a97c-d0963d6a8726 e5044998ddc3419bb14cc08417add581 be4e2f9059cd48f5b44a612256e3fc7b - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.12/site-packages/nova/virt/hardware.py:585
Oct 09 16:16:18 compute-0 nova_compute[117331]: 2025-10-09 16:16:18.614 2 DEBUG nova.virt.libvirt.vif [None req-8a0ba780-e6a1-42f0-a97c-d0963d6a8726 e5044998ddc3419bb14cc08417add581 be4e2f9059cd48f5b44a612256e3fc7b - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,compute_id=1,config_drive='',created_at=2025-10-09T16:16:08Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description=None,display_name='tempest-TestExecuteActionsViaActuator-server-925081779',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testexecuteactionsviaactuator-server-925081779',id=8,image_ref='b7d6e0af-25e4-4227-9dc6-43143898ceee',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='be4e2f9059cd48f5b44a612256e3fc7b',ramdisk_id='',reservation_id='r-zkb46622',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member,manager,admin',image_base_image_ref='b7d6e0af-25e4-4227-9dc6-43143898ceee',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestExecuteActionsViaActuator-1347788182',owner_user_name='tempest-TestExecuteActionsViaAct
uator-1347788182-project-admin'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-09T16:16:14Z,user_data=None,user_id='e5044998ddc3419bb14cc08417add581',uuid=12702257-b2eb-4842-bdf8-25e7a3b20038,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "9ac52a72-3ea9-4d92-8114-f77f5fe13293", "address": "fa:16:3e:bf:94:75", "network": {"id": "7ed4244a-b510-4df6-9ffd-2f86603932fc", "bridge": "br-int", "label": "tempest-TestExecuteActionsViaActuator-1219246163-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "153d68ab33e64b958060574bb1741725", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap9ac52a72-3e", "ovs_interfaceid": "9ac52a72-3ea9-4d92-8114-f77f5fe13293", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.12/site-packages/nova/virt/libvirt/vif.py:574
Oct 09 16:16:18 compute-0 nova_compute[117331]: 2025-10-09 16:16:18.614 2 DEBUG nova.network.os_vif_util [None req-8a0ba780-e6a1-42f0-a97c-d0963d6a8726 e5044998ddc3419bb14cc08417add581 be4e2f9059cd48f5b44a612256e3fc7b - - default default] Converting VIF {"id": "9ac52a72-3ea9-4d92-8114-f77f5fe13293", "address": "fa:16:3e:bf:94:75", "network": {"id": "7ed4244a-b510-4df6-9ffd-2f86603932fc", "bridge": "br-int", "label": "tempest-TestExecuteActionsViaActuator-1219246163-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "153d68ab33e64b958060574bb1741725", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap9ac52a72-3e", "ovs_interfaceid": "9ac52a72-3ea9-4d92-8114-f77f5fe13293", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.12/site-packages/nova/network/os_vif_util.py:511
Oct 09 16:16:18 compute-0 nova_compute[117331]: 2025-10-09 16:16:18.615 2 DEBUG nova.network.os_vif_util [None req-8a0ba780-e6a1-42f0-a97c-d0963d6a8726 e5044998ddc3419bb14cc08417add581 be4e2f9059cd48f5b44a612256e3fc7b - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:bf:94:75,bridge_name='br-int',has_traffic_filtering=True,id=9ac52a72-3ea9-4d92-8114-f77f5fe13293,network=Network(7ed4244a-b510-4df6-9ffd-2f86603932fc),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap9ac52a72-3e') nova_to_osvif_vif /usr/lib/python3.12/site-packages/nova/network/os_vif_util.py:548
Oct 09 16:16:18 compute-0 nova_compute[117331]: 2025-10-09 16:16:18.616 2 DEBUG nova.objects.instance [None req-8a0ba780-e6a1-42f0-a97c-d0963d6a8726 e5044998ddc3419bb14cc08417add581 be4e2f9059cd48f5b44a612256e3fc7b - - default default] Lazy-loading 'pci_devices' on Instance uuid 12702257-b2eb-4842-bdf8-25e7a3b20038 obj_load_attr /usr/lib/python3.12/site-packages/nova/objects/instance.py:1141
Oct 09 16:16:18 compute-0 sshd-session[143470]: Invalid user docker from 134.199.199.215 port 60658
Oct 09 16:16:18 compute-0 sshd-session[143470]: pam_unix(sshd:auth): check pass; user unknown
Oct 09 16:16:18 compute-0 sshd-session[143470]: pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=134.199.199.215
Oct 09 16:16:19 compute-0 nova_compute[117331]: 2025-10-09 16:16:19.124 2 DEBUG nova.virt.libvirt.driver [None req-8a0ba780-e6a1-42f0-a97c-d0963d6a8726 e5044998ddc3419bb14cc08417add581 be4e2f9059cd48f5b44a612256e3fc7b - - default default] [instance: 12702257-b2eb-4842-bdf8-25e7a3b20038] End _get_guest_xml xml=<domain type="kvm">
Oct 09 16:16:19 compute-0 nova_compute[117331]:   <uuid>12702257-b2eb-4842-bdf8-25e7a3b20038</uuid>
Oct 09 16:16:19 compute-0 nova_compute[117331]:   <name>instance-00000008</name>
Oct 09 16:16:19 compute-0 nova_compute[117331]:   <memory>131072</memory>
Oct 09 16:16:19 compute-0 nova_compute[117331]:   <vcpu>1</vcpu>
Oct 09 16:16:19 compute-0 nova_compute[117331]:   <metadata>
Oct 09 16:16:19 compute-0 nova_compute[117331]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Oct 09 16:16:19 compute-0 nova_compute[117331]:       <nova:package version="32.1.0-0.20251008143712.076498e.el10"/>
Oct 09 16:16:19 compute-0 nova_compute[117331]:       <nova:name>tempest-TestExecuteActionsViaActuator-server-925081779</nova:name>
Oct 09 16:16:19 compute-0 nova_compute[117331]:       <nova:creationTime>2025-10-09 16:16:18</nova:creationTime>
Oct 09 16:16:19 compute-0 nova_compute[117331]:       <nova:flavor name="m1.nano" id="5aeb5bf9-70f3-4ab6-b418-2c3319a7d2b3">
Oct 09 16:16:19 compute-0 nova_compute[117331]:         <nova:memory>128</nova:memory>
Oct 09 16:16:19 compute-0 nova_compute[117331]:         <nova:disk>1</nova:disk>
Oct 09 16:16:19 compute-0 nova_compute[117331]:         <nova:swap>0</nova:swap>
Oct 09 16:16:19 compute-0 nova_compute[117331]:         <nova:ephemeral>0</nova:ephemeral>
Oct 09 16:16:19 compute-0 nova_compute[117331]:         <nova:vcpus>1</nova:vcpus>
Oct 09 16:16:19 compute-0 nova_compute[117331]:         <nova:extraSpecs>
Oct 09 16:16:19 compute-0 nova_compute[117331]:           <nova:extraSpec name="hw_rng:allowed">True</nova:extraSpec>
Oct 09 16:16:19 compute-0 nova_compute[117331]:         </nova:extraSpecs>
Oct 09 16:16:19 compute-0 nova_compute[117331]:       </nova:flavor>
Oct 09 16:16:19 compute-0 nova_compute[117331]:       <nova:image uuid="b7d6e0af-25e4-4227-9dc6-43143898ceee">
Oct 09 16:16:19 compute-0 nova_compute[117331]:         <nova:containerFormat>bare</nova:containerFormat>
Oct 09 16:16:19 compute-0 nova_compute[117331]:         <nova:diskFormat>qcow2</nova:diskFormat>
Oct 09 16:16:19 compute-0 nova_compute[117331]:         <nova:minDisk>1</nova:minDisk>
Oct 09 16:16:19 compute-0 nova_compute[117331]:         <nova:minRam>0</nova:minRam>
Oct 09 16:16:19 compute-0 nova_compute[117331]:         <nova:properties>
Oct 09 16:16:19 compute-0 nova_compute[117331]:           <nova:property name="hw_rng_model">virtio</nova:property>
Oct 09 16:16:19 compute-0 nova_compute[117331]:         </nova:properties>
Oct 09 16:16:19 compute-0 nova_compute[117331]:       </nova:image>
Oct 09 16:16:19 compute-0 nova_compute[117331]:       <nova:owner>
Oct 09 16:16:19 compute-0 nova_compute[117331]:         <nova:user uuid="e5044998ddc3419bb14cc08417add581">tempest-TestExecuteActionsViaActuator-1347788182-project-admin</nova:user>
Oct 09 16:16:19 compute-0 nova_compute[117331]:         <nova:project uuid="be4e2f9059cd48f5b44a612256e3fc7b">tempest-TestExecuteActionsViaActuator-1347788182</nova:project>
Oct 09 16:16:19 compute-0 nova_compute[117331]:       </nova:owner>
Oct 09 16:16:19 compute-0 nova_compute[117331]:       <nova:root type="image" uuid="b7d6e0af-25e4-4227-9dc6-43143898ceee"/>
Oct 09 16:16:19 compute-0 nova_compute[117331]:       <nova:ports>
Oct 09 16:16:19 compute-0 nova_compute[117331]:         <nova:port uuid="9ac52a72-3ea9-4d92-8114-f77f5fe13293">
Oct 09 16:16:19 compute-0 nova_compute[117331]:           <nova:ip type="fixed" address="10.100.0.4" ipVersion="4"/>
Oct 09 16:16:19 compute-0 nova_compute[117331]:         </nova:port>
Oct 09 16:16:19 compute-0 nova_compute[117331]:       </nova:ports>
Oct 09 16:16:19 compute-0 nova_compute[117331]:     </nova:instance>
Oct 09 16:16:19 compute-0 nova_compute[117331]:   </metadata>
Oct 09 16:16:19 compute-0 nova_compute[117331]:   <sysinfo type="smbios">
Oct 09 16:16:19 compute-0 nova_compute[117331]:     <system>
Oct 09 16:16:19 compute-0 nova_compute[117331]:       <entry name="manufacturer">RDO</entry>
Oct 09 16:16:19 compute-0 nova_compute[117331]:       <entry name="product">OpenStack Compute</entry>
Oct 09 16:16:19 compute-0 nova_compute[117331]:       <entry name="version">32.1.0-0.20251008143712.076498e.el10</entry>
Oct 09 16:16:19 compute-0 nova_compute[117331]:       <entry name="serial">12702257-b2eb-4842-bdf8-25e7a3b20038</entry>
Oct 09 16:16:19 compute-0 nova_compute[117331]:       <entry name="uuid">12702257-b2eb-4842-bdf8-25e7a3b20038</entry>
Oct 09 16:16:19 compute-0 nova_compute[117331]:       <entry name="family">Virtual Machine</entry>
Oct 09 16:16:19 compute-0 nova_compute[117331]:     </system>
Oct 09 16:16:19 compute-0 nova_compute[117331]:   </sysinfo>
Oct 09 16:16:19 compute-0 nova_compute[117331]:   <os>
Oct 09 16:16:19 compute-0 nova_compute[117331]:     <type arch="x86_64" machine="q35">hvm</type>
Oct 09 16:16:19 compute-0 nova_compute[117331]:     <boot dev="hd"/>
Oct 09 16:16:19 compute-0 nova_compute[117331]:     <smbios mode="sysinfo"/>
Oct 09 16:16:19 compute-0 nova_compute[117331]:   </os>
Oct 09 16:16:19 compute-0 nova_compute[117331]:   <features>
Oct 09 16:16:19 compute-0 nova_compute[117331]:     <acpi/>
Oct 09 16:16:19 compute-0 nova_compute[117331]:     <apic/>
Oct 09 16:16:19 compute-0 nova_compute[117331]:     <vmcoreinfo/>
Oct 09 16:16:19 compute-0 nova_compute[117331]:   </features>
Oct 09 16:16:19 compute-0 nova_compute[117331]:   <clock offset="utc">
Oct 09 16:16:19 compute-0 nova_compute[117331]:     <timer name="pit" tickpolicy="delay"/>
Oct 09 16:16:19 compute-0 nova_compute[117331]:     <timer name="rtc" tickpolicy="catchup"/>
Oct 09 16:16:19 compute-0 nova_compute[117331]:     <timer name="hpet" present="no"/>
Oct 09 16:16:19 compute-0 nova_compute[117331]:   </clock>
Oct 09 16:16:19 compute-0 nova_compute[117331]:   <cpu mode="host-model" match="exact">
Oct 09 16:16:19 compute-0 nova_compute[117331]:     <topology sockets="1" cores="1" threads="1"/>
Oct 09 16:16:19 compute-0 nova_compute[117331]:   </cpu>
Oct 09 16:16:19 compute-0 nova_compute[117331]:   <devices>
Oct 09 16:16:19 compute-0 nova_compute[117331]:     <disk type="file" device="disk">
Oct 09 16:16:19 compute-0 nova_compute[117331]:       <driver name="qemu" type="qcow2" cache="none"/>
Oct 09 16:16:19 compute-0 nova_compute[117331]:       <source file="/var/lib/nova/instances/12702257-b2eb-4842-bdf8-25e7a3b20038/disk"/>
Oct 09 16:16:19 compute-0 nova_compute[117331]:       <target dev="vda" bus="virtio"/>
Oct 09 16:16:19 compute-0 nova_compute[117331]:     </disk>
Oct 09 16:16:19 compute-0 nova_compute[117331]:     <disk type="file" device="cdrom">
Oct 09 16:16:19 compute-0 nova_compute[117331]:       <driver name="qemu" type="raw" cache="none"/>
Oct 09 16:16:19 compute-0 nova_compute[117331]:       <source file="/var/lib/nova/instances/12702257-b2eb-4842-bdf8-25e7a3b20038/disk.config"/>
Oct 09 16:16:19 compute-0 nova_compute[117331]:       <target dev="sda" bus="sata"/>
Oct 09 16:16:19 compute-0 nova_compute[117331]:     </disk>
Oct 09 16:16:19 compute-0 nova_compute[117331]:     <interface type="ethernet">
Oct 09 16:16:19 compute-0 nova_compute[117331]:       <mac address="fa:16:3e:bf:94:75"/>
Oct 09 16:16:19 compute-0 nova_compute[117331]:       <model type="virtio"/>
Oct 09 16:16:19 compute-0 nova_compute[117331]:       <driver name="vhost" rx_queue_size="512"/>
Oct 09 16:16:19 compute-0 nova_compute[117331]:       <mtu size="1442"/>
Oct 09 16:16:19 compute-0 nova_compute[117331]:       <target dev="tap9ac52a72-3e"/>
Oct 09 16:16:19 compute-0 nova_compute[117331]:     </interface>
Oct 09 16:16:19 compute-0 nova_compute[117331]:     <serial type="pty">
Oct 09 16:16:19 compute-0 nova_compute[117331]:       <log file="/var/lib/nova/instances/12702257-b2eb-4842-bdf8-25e7a3b20038/console.log" append="off"/>
Oct 09 16:16:19 compute-0 nova_compute[117331]:     </serial>
Oct 09 16:16:19 compute-0 nova_compute[117331]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Oct 09 16:16:19 compute-0 nova_compute[117331]:     <video>
Oct 09 16:16:19 compute-0 nova_compute[117331]:       <model type="virtio"/>
Oct 09 16:16:19 compute-0 nova_compute[117331]:     </video>
Oct 09 16:16:19 compute-0 nova_compute[117331]:     <input type="tablet" bus="usb"/>
Oct 09 16:16:19 compute-0 nova_compute[117331]:     <rng model="virtio">
Oct 09 16:16:19 compute-0 nova_compute[117331]:       <backend model="random">/dev/urandom</backend>
Oct 09 16:16:19 compute-0 nova_compute[117331]:     </rng>
Oct 09 16:16:19 compute-0 nova_compute[117331]:     <controller type="pci" model="pcie-root"/>
Oct 09 16:16:19 compute-0 nova_compute[117331]:     <controller type="pci" model="pcie-root-port"/>
Oct 09 16:16:19 compute-0 nova_compute[117331]:     <controller type="pci" model="pcie-root-port"/>
Oct 09 16:16:19 compute-0 nova_compute[117331]:     <controller type="pci" model="pcie-root-port"/>
Oct 09 16:16:19 compute-0 nova_compute[117331]:     <controller type="pci" model="pcie-root-port"/>
Oct 09 16:16:19 compute-0 nova_compute[117331]:     <controller type="pci" model="pcie-root-port"/>
Oct 09 16:16:19 compute-0 nova_compute[117331]:     <controller type="pci" model="pcie-root-port"/>
Oct 09 16:16:19 compute-0 nova_compute[117331]:     <controller type="pci" model="pcie-root-port"/>
Oct 09 16:16:19 compute-0 nova_compute[117331]:     <controller type="pci" model="pcie-root-port"/>
Oct 09 16:16:19 compute-0 nova_compute[117331]:     <controller type="pci" model="pcie-root-port"/>
Oct 09 16:16:19 compute-0 nova_compute[117331]:     <controller type="pci" model="pcie-root-port"/>
Oct 09 16:16:19 compute-0 nova_compute[117331]:     <controller type="pci" model="pcie-root-port"/>
Oct 09 16:16:19 compute-0 nova_compute[117331]:     <controller type="pci" model="pcie-root-port"/>
Oct 09 16:16:19 compute-0 nova_compute[117331]:     <controller type="pci" model="pcie-root-port"/>
Oct 09 16:16:19 compute-0 nova_compute[117331]:     <controller type="pci" model="pcie-root-port"/>
Oct 09 16:16:19 compute-0 nova_compute[117331]:     <controller type="pci" model="pcie-root-port"/>
Oct 09 16:16:19 compute-0 nova_compute[117331]:     <controller type="pci" model="pcie-root-port"/>
Oct 09 16:16:19 compute-0 nova_compute[117331]:     <controller type="pci" model="pcie-root-port"/>
Oct 09 16:16:19 compute-0 nova_compute[117331]:     <controller type="pci" model="pcie-root-port"/>
Oct 09 16:16:19 compute-0 nova_compute[117331]:     <controller type="pci" model="pcie-root-port"/>
Oct 09 16:16:19 compute-0 nova_compute[117331]:     <controller type="pci" model="pcie-root-port"/>
Oct 09 16:16:19 compute-0 nova_compute[117331]:     <controller type="pci" model="pcie-root-port"/>
Oct 09 16:16:19 compute-0 nova_compute[117331]:     <controller type="pci" model="pcie-root-port"/>
Oct 09 16:16:19 compute-0 nova_compute[117331]:     <controller type="pci" model="pcie-root-port"/>
Oct 09 16:16:19 compute-0 nova_compute[117331]:     <controller type="pci" model="pcie-root-port"/>
Oct 09 16:16:19 compute-0 nova_compute[117331]:     <controller type="usb" index="0"/>
Oct 09 16:16:19 compute-0 nova_compute[117331]:     <memballoon model="virtio" autodeflate="on" freePageReporting="on">
Oct 09 16:16:19 compute-0 nova_compute[117331]:       <stats period="10"/>
Oct 09 16:16:19 compute-0 nova_compute[117331]:     </memballoon>
Oct 09 16:16:19 compute-0 nova_compute[117331]:   </devices>
Oct 09 16:16:19 compute-0 nova_compute[117331]: </domain>
Oct 09 16:16:19 compute-0 nova_compute[117331]:  _get_guest_xml /usr/lib/python3.12/site-packages/nova/virt/libvirt/driver.py:8052
Oct 09 16:16:19 compute-0 nova_compute[117331]: 2025-10-09 16:16:19.126 2 DEBUG nova.compute.manager [None req-8a0ba780-e6a1-42f0-a97c-d0963d6a8726 e5044998ddc3419bb14cc08417add581 be4e2f9059cd48f5b44a612256e3fc7b - - default default] [instance: 12702257-b2eb-4842-bdf8-25e7a3b20038] Preparing to wait for external event network-vif-plugged-9ac52a72-3ea9-4d92-8114-f77f5fe13293 prepare_for_instance_event /usr/lib/python3.12/site-packages/nova/compute/manager.py:307
Oct 09 16:16:19 compute-0 nova_compute[117331]: 2025-10-09 16:16:19.127 2 DEBUG oslo_concurrency.lockutils [None req-8a0ba780-e6a1-42f0-a97c-d0963d6a8726 e5044998ddc3419bb14cc08417add581 be4e2f9059cd48f5b44a612256e3fc7b - - default default] Acquiring lock "12702257-b2eb-4842-bdf8-25e7a3b20038-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:405
Oct 09 16:16:19 compute-0 nova_compute[117331]: 2025-10-09 16:16:19.128 2 DEBUG oslo_concurrency.lockutils [None req-8a0ba780-e6a1-42f0-a97c-d0963d6a8726 e5044998ddc3419bb14cc08417add581 be4e2f9059cd48f5b44a612256e3fc7b - - default default] Lock "12702257-b2eb-4842-bdf8-25e7a3b20038-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.001s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:410
Oct 09 16:16:19 compute-0 nova_compute[117331]: 2025-10-09 16:16:19.128 2 DEBUG oslo_concurrency.lockutils [None req-8a0ba780-e6a1-42f0-a97c-d0963d6a8726 e5044998ddc3419bb14cc08417add581 be4e2f9059cd48f5b44a612256e3fc7b - - default default] Lock "12702257-b2eb-4842-bdf8-25e7a3b20038-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:424
Oct 09 16:16:19 compute-0 nova_compute[117331]: 2025-10-09 16:16:19.129 2 DEBUG nova.virt.libvirt.vif [None req-8a0ba780-e6a1-42f0-a97c-d0963d6a8726 e5044998ddc3419bb14cc08417add581 be4e2f9059cd48f5b44a612256e3fc7b - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,compute_id=1,config_drive='',created_at=2025-10-09T16:16:08Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description=None,display_name='tempest-TestExecuteActionsViaActuator-server-925081779',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testexecuteactionsviaactuator-server-925081779',id=8,image_ref='b7d6e0af-25e4-4227-9dc6-43143898ceee',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='be4e2f9059cd48f5b44a612256e3fc7b',ramdisk_id='',reservation_id='r-zkb46622',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member,manager,admin',image_base_image_ref='b7d6e0af-25e4-4227-9dc6-43143898ceee',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestExecuteActionsViaActuator-1347788182',owner_user_name='tempest-TestExecuteAct
ionsViaActuator-1347788182-project-admin'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-09T16:16:14Z,user_data=None,user_id='e5044998ddc3419bb14cc08417add581',uuid=12702257-b2eb-4842-bdf8-25e7a3b20038,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "9ac52a72-3ea9-4d92-8114-f77f5fe13293", "address": "fa:16:3e:bf:94:75", "network": {"id": "7ed4244a-b510-4df6-9ffd-2f86603932fc", "bridge": "br-int", "label": "tempest-TestExecuteActionsViaActuator-1219246163-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "153d68ab33e64b958060574bb1741725", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap9ac52a72-3e", "ovs_interfaceid": "9ac52a72-3ea9-4d92-8114-f77f5fe13293", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.12/site-packages/nova/virt/libvirt/vif.py:721
Oct 09 16:16:19 compute-0 nova_compute[117331]: 2025-10-09 16:16:19.130 2 DEBUG nova.network.os_vif_util [None req-8a0ba780-e6a1-42f0-a97c-d0963d6a8726 e5044998ddc3419bb14cc08417add581 be4e2f9059cd48f5b44a612256e3fc7b - - default default] Converting VIF {"id": "9ac52a72-3ea9-4d92-8114-f77f5fe13293", "address": "fa:16:3e:bf:94:75", "network": {"id": "7ed4244a-b510-4df6-9ffd-2f86603932fc", "bridge": "br-int", "label": "tempest-TestExecuteActionsViaActuator-1219246163-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "153d68ab33e64b958060574bb1741725", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap9ac52a72-3e", "ovs_interfaceid": "9ac52a72-3ea9-4d92-8114-f77f5fe13293", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.12/site-packages/nova/network/os_vif_util.py:511
Oct 09 16:16:19 compute-0 nova_compute[117331]: 2025-10-09 16:16:19.131 2 DEBUG nova.network.os_vif_util [None req-8a0ba780-e6a1-42f0-a97c-d0963d6a8726 e5044998ddc3419bb14cc08417add581 be4e2f9059cd48f5b44a612256e3fc7b - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:bf:94:75,bridge_name='br-int',has_traffic_filtering=True,id=9ac52a72-3ea9-4d92-8114-f77f5fe13293,network=Network(7ed4244a-b510-4df6-9ffd-2f86603932fc),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap9ac52a72-3e') nova_to_osvif_vif /usr/lib/python3.12/site-packages/nova/network/os_vif_util.py:548
Oct 09 16:16:19 compute-0 nova_compute[117331]: 2025-10-09 16:16:19.132 2 DEBUG os_vif [None req-8a0ba780-e6a1-42f0-a97c-d0963d6a8726 e5044998ddc3419bb14cc08417add581 be4e2f9059cd48f5b44a612256e3fc7b - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:bf:94:75,bridge_name='br-int',has_traffic_filtering=True,id=9ac52a72-3ea9-4d92-8114-f77f5fe13293,network=Network(7ed4244a-b510-4df6-9ffd-2f86603932fc),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap9ac52a72-3e') plug /usr/lib/python3.12/site-packages/os_vif/__init__.py:76
Oct 09 16:16:19 compute-0 nova_compute[117331]: 2025-10-09 16:16:19.133 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 22 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 09 16:16:19 compute-0 nova_compute[117331]: 2025-10-09 16:16:19.133 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.12/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 09 16:16:19 compute-0 nova_compute[117331]: 2025-10-09 16:16:19.134 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.12/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 09 16:16:19 compute-0 nova_compute[117331]: 2025-10-09 16:16:19.135 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 22 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 09 16:16:19 compute-0 nova_compute[117331]: 2025-10-09 16:16:19.136 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbCreateCommand(_result=None, table=QoS, columns={'type': 'linux-noop', 'external_ids': {'id': '7991e1c9-bed4-563e-8f4a-1765a95cad41', '_type': 'linux-noop'}}, row=False) do_commit /usr/lib/python3.12/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 09 16:16:19 compute-0 nova_compute[117331]: 2025-10-09 16:16:19.139 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:248
Oct 09 16:16:19 compute-0 nova_compute[117331]: 2025-10-09 16:16:19.139 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 09 16:16:19 compute-0 nova_compute[117331]: 2025-10-09 16:16:19.142 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 22 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 09 16:16:19 compute-0 nova_compute[117331]: 2025-10-09 16:16:19.143 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap9ac52a72-3e, may_exist=True, interface_attrs={}) do_commit /usr/lib/python3.12/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 09 16:16:19 compute-0 nova_compute[117331]: 2025-10-09 16:16:19.143 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Port, record=tap9ac52a72-3e, col_values=(('qos', UUID('a8377760-b1e8-4764-be80-edf13f48a379')),), if_exists=True) do_commit /usr/lib/python3.12/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 09 16:16:19 compute-0 nova_compute[117331]: 2025-10-09 16:16:19.144 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=2): DbSetCommand(_result=None, table=Interface, record=tap9ac52a72-3e, col_values=(('external_ids', {'iface-id': '9ac52a72-3ea9-4d92-8114-f77f5fe13293', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:bf:94:75', 'vm-uuid': '12702257-b2eb-4842-bdf8-25e7a3b20038'}),), if_exists=True) do_commit /usr/lib/python3.12/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 09 16:16:19 compute-0 nova_compute[117331]: 2025-10-09 16:16:19.145 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 09 16:16:19 compute-0 NetworkManager[1028]: <info>  [1760026579.1474] manager: (tap9ac52a72-3e): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/39)
Oct 09 16:16:19 compute-0 nova_compute[117331]: 2025-10-09 16:16:19.149 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:248
Oct 09 16:16:19 compute-0 nova_compute[117331]: 2025-10-09 16:16:19.152 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 09 16:16:19 compute-0 nova_compute[117331]: 2025-10-09 16:16:19.153 2 INFO os_vif [None req-8a0ba780-e6a1-42f0-a97c-d0963d6a8726 e5044998ddc3419bb14cc08417add581 be4e2f9059cd48f5b44a612256e3fc7b - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:bf:94:75,bridge_name='br-int',has_traffic_filtering=True,id=9ac52a72-3ea9-4d92-8114-f77f5fe13293,network=Network(7ed4244a-b510-4df6-9ffd-2f86603932fc),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap9ac52a72-3e')
Oct 09 16:16:20 compute-0 nova_compute[117331]: 2025-10-09 16:16:20.576 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 09 16:16:20 compute-0 nova_compute[117331]: 2025-10-09 16:16:20.688 2 DEBUG nova.virt.libvirt.driver [None req-8a0ba780-e6a1-42f0-a97c-d0963d6a8726 e5044998ddc3419bb14cc08417add581 be4e2f9059cd48f5b44a612256e3fc7b - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.12/site-packages/nova/virt/libvirt/driver.py:13022
Oct 09 16:16:20 compute-0 nova_compute[117331]: 2025-10-09 16:16:20.689 2 DEBUG nova.virt.libvirt.driver [None req-8a0ba780-e6a1-42f0-a97c-d0963d6a8726 e5044998ddc3419bb14cc08417add581 be4e2f9059cd48f5b44a612256e3fc7b - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.12/site-packages/nova/virt/libvirt/driver.py:13022
Oct 09 16:16:20 compute-0 nova_compute[117331]: 2025-10-09 16:16:20.689 2 DEBUG nova.virt.libvirt.driver [None req-8a0ba780-e6a1-42f0-a97c-d0963d6a8726 e5044998ddc3419bb14cc08417add581 be4e2f9059cd48f5b44a612256e3fc7b - - default default] No VIF found with MAC fa:16:3e:bf:94:75, not building metadata _build_interface_metadata /usr/lib/python3.12/site-packages/nova/virt/libvirt/driver.py:12998
Oct 09 16:16:20 compute-0 nova_compute[117331]: 2025-10-09 16:16:20.689 2 INFO nova.virt.libvirt.driver [None req-8a0ba780-e6a1-42f0-a97c-d0963d6a8726 e5044998ddc3419bb14cc08417add581 be4e2f9059cd48f5b44a612256e3fc7b - - default default] [instance: 12702257-b2eb-4842-bdf8-25e7a3b20038] Using config drive
Oct 09 16:16:20 compute-0 sshd-session[143470]: Failed password for invalid user docker from 134.199.199.215 port 60658 ssh2
Oct 09 16:16:20 compute-0 podman[143474]: 2025-10-09 16:16:20.8286032 +0000 UTC m=+0.063574672 container health_status 270830b5167cafbab735d8118980f26987e673cd0fa464dd2202394830bcca66 (image=38.102.83.66:5001/podified-master-centos10/openstack-multipathd:watcher_latest, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, io.buildah.version=1.41.4, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 10 Base Image, config_id=multipathd, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=watcher_latest, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': '38.102.83.66:5001/podified-master-centos10/openstack-multipathd:watcher_latest', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd, org.label-schema.build-date=20251007)
Oct 09 16:16:21 compute-0 nova_compute[117331]: 2025-10-09 16:16:21.201 2 WARNING neutronclient.v2_0.client [None req-8a0ba780-e6a1-42f0-a97c-d0963d6a8726 e5044998ddc3419bb14cc08417add581 be4e2f9059cd48f5b44a612256e3fc7b - - default default] The python binding code in neutronclient is deprecated in favor of OpenstackSDK, please use that as this will be removed in a future release.
Oct 09 16:16:21 compute-0 nova_compute[117331]: 2025-10-09 16:16:21.596 2 INFO nova.virt.libvirt.driver [None req-8a0ba780-e6a1-42f0-a97c-d0963d6a8726 e5044998ddc3419bb14cc08417add581 be4e2f9059cd48f5b44a612256e3fc7b - - default default] [instance: 12702257-b2eb-4842-bdf8-25e7a3b20038] Creating config drive at /var/lib/nova/instances/12702257-b2eb-4842-bdf8-25e7a3b20038/disk.config
Oct 09 16:16:21 compute-0 nova_compute[117331]: 2025-10-09 16:16:21.601 2 DEBUG oslo_concurrency.processutils [None req-8a0ba780-e6a1-42f0-a97c-d0963d6a8726 e5044998ddc3419bb14cc08417add581 be4e2f9059cd48f5b44a612256e3fc7b - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/12702257-b2eb-4842-bdf8-25e7a3b20038/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 32.1.0-0.20251008143712.076498e.el10 -quiet -J -r -V config-2 /tmp/tmpte74ff7_ execute /usr/lib/python3.12/site-packages/oslo_concurrency/processutils.py:349
Oct 09 16:16:21 compute-0 nova_compute[117331]: 2025-10-09 16:16:21.730 2 DEBUG oslo_concurrency.processutils [None req-8a0ba780-e6a1-42f0-a97c-d0963d6a8726 e5044998ddc3419bb14cc08417add581 be4e2f9059cd48f5b44a612256e3fc7b - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/12702257-b2eb-4842-bdf8-25e7a3b20038/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 32.1.0-0.20251008143712.076498e.el10 -quiet -J -r -V config-2 /tmp/tmpte74ff7_" returned: 0 in 0.128s execute /usr/lib/python3.12/site-packages/oslo_concurrency/processutils.py:372
Oct 09 16:16:21 compute-0 kernel: tap9ac52a72-3e: entered promiscuous mode
Oct 09 16:16:21 compute-0 NetworkManager[1028]: <info>  [1760026581.7869] manager: (tap9ac52a72-3e): new Tun device (/org/freedesktop/NetworkManager/Devices/40)
Oct 09 16:16:21 compute-0 nova_compute[117331]: 2025-10-09 16:16:21.787 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 09 16:16:21 compute-0 ovn_controller[19752]: 2025-10-09T16:16:21Z|00080|binding|INFO|Claiming lport 9ac52a72-3ea9-4d92-8114-f77f5fe13293 for this chassis.
Oct 09 16:16:21 compute-0 ovn_controller[19752]: 2025-10-09T16:16:21Z|00081|binding|INFO|9ac52a72-3ea9-4d92-8114-f77f5fe13293: Claiming fa:16:3e:bf:94:75 10.100.0.4
Oct 09 16:16:21 compute-0 ovn_metadata_agent[28608]: 2025-10-09 16:16:21.796 28613 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:bf:94:75 10.100.0.4'], port_security=['fa:16:3e:bf:94:75 10.100.0.4'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.4/28', 'neutron:device_id': '12702257-b2eb-4842-bdf8-25e7a3b20038', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-7ed4244a-b510-4df6-9ffd-2f86603932fc', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'be4e2f9059cd48f5b44a612256e3fc7b', 'neutron:revision_number': '4', 'neutron:security_group_ids': '92d94554-4438-4b2d-9e48-eb178205b5a1', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=8754009b-0d8f-4d41-af4c-a82e7b57a4f6, chassis=[<ovs.db.idl.Row object at 0x7f37b2576840>], tunnel_key=7, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f37b2576840>], logical_port=9ac52a72-3ea9-4d92-8114-f77f5fe13293) old=Port_Binding(chassis=[]) matches /usr/lib/python3.12/site-packages/ovsdbapp/backend/ovs_idl/event.py:55
Oct 09 16:16:21 compute-0 ovn_metadata_agent[28608]: 2025-10-09 16:16:21.798 28613 INFO neutron.agent.ovn.metadata.agent [-] Port 9ac52a72-3ea9-4d92-8114-f77f5fe13293 in datapath 7ed4244a-b510-4df6-9ffd-2f86603932fc bound to our chassis
Oct 09 16:16:21 compute-0 ovn_metadata_agent[28608]: 2025-10-09 16:16:21.799 28613 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 7ed4244a-b510-4df6-9ffd-2f86603932fc
Oct 09 16:16:21 compute-0 ovn_controller[19752]: 2025-10-09T16:16:21Z|00082|binding|INFO|Setting lport 9ac52a72-3ea9-4d92-8114-f77f5fe13293 ovn-installed in OVS
Oct 09 16:16:21 compute-0 ovn_controller[19752]: 2025-10-09T16:16:21Z|00083|binding|INFO|Setting lport 9ac52a72-3ea9-4d92-8114-f77f5fe13293 up in Southbound
Oct 09 16:16:21 compute-0 nova_compute[117331]: 2025-10-09 16:16:21.805 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 09 16:16:21 compute-0 systemd-udevd[143512]: Network interface NamePolicy= disabled on kernel command line.
Oct 09 16:16:21 compute-0 ovn_metadata_agent[28608]: 2025-10-09 16:16:21.817 139687 DEBUG oslo.privsep.daemon [-] privsep: reply[c8b51f64-b2cc-4af7-bdc6-a45e12b8041e]: (4, True) _call_back /usr/lib/python3.12/site-packages/oslo_privsep/daemon.py:515
Oct 09 16:16:21 compute-0 systemd-machined[77487]: New machine qemu-5-instance-00000008.
Oct 09 16:16:21 compute-0 NetworkManager[1028]: <info>  [1760026581.8289] device (tap9ac52a72-3e): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Oct 09 16:16:21 compute-0 NetworkManager[1028]: <info>  [1760026581.8296] device (tap9ac52a72-3e): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Oct 09 16:16:21 compute-0 systemd[1]: Started Virtual Machine qemu-5-instance-00000008.
Oct 09 16:16:21 compute-0 ovn_metadata_agent[28608]: 2025-10-09 16:16:21.847 141048 DEBUG oslo.privsep.daemon [-] privsep: reply[ca289d29-7a7a-44a7-ae0c-87ab89fa5998]: (4, None) _call_back /usr/lib/python3.12/site-packages/oslo_privsep/daemon.py:515
Oct 09 16:16:21 compute-0 ovn_metadata_agent[28608]: 2025-10-09 16:16:21.850 141048 DEBUG oslo.privsep.daemon [-] privsep: reply[d0432a23-6eb5-4383-bc60-e31905d4574c]: (4, None) _call_back /usr/lib/python3.12/site-packages/oslo_privsep/daemon.py:515
Oct 09 16:16:21 compute-0 ovn_metadata_agent[28608]: 2025-10-09 16:16:21.883 141048 DEBUG oslo.privsep.daemon [-] privsep: reply[e7e92182-4952-4715-98fd-b7e1eb2dc1ff]: (4, None) _call_back /usr/lib/python3.12/site-packages/oslo_privsep/daemon.py:515
Oct 09 16:16:21 compute-0 ovn_metadata_agent[28608]: 2025-10-09 16:16:21.903 139687 DEBUG oslo.privsep.daemon [-] privsep: reply[1a5273f5-bdb8-464d-92fc-84acaf51d777]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap7ed4244a-b1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:11:ef:1e'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 10, 'tx_packets': 5, 'rx_bytes': 916, 'tx_bytes': 354, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 10, 'tx_packets': 5, 'rx_bytes': 916, 'tx_bytes': 354, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 27], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 148278, 'reachable_time': 15142, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 8, 'inoctets': 720, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 8, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 720, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 8, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 143527, 'error': None, 'target': 'ovnmeta-7ed4244a-b510-4df6-9ffd-2f86603932fc', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.12/site-packages/oslo_privsep/daemon.py:515
Oct 09 16:16:21 compute-0 ovn_metadata_agent[28608]: 2025-10-09 16:16:21.924 139687 DEBUG oslo.privsep.daemon [-] privsep: reply[9cebf380-a1b6-4a30-a952-161042b35635]: (4, ({'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tap7ed4244a-b1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 148290, 'tstamp': 148290}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 143529, 'error': None, 'target': 'ovnmeta-7ed4244a-b510-4df6-9ffd-2f86603932fc', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tap7ed4244a-b1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 148293, 'tstamp': 148293}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 143529, 'error': None, 'target': 'ovnmeta-7ed4244a-b510-4df6-9ffd-2f86603932fc', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.12/site-packages/oslo_privsep/daemon.py:515
Oct 09 16:16:21 compute-0 ovn_metadata_agent[28608]: 2025-10-09 16:16:21.925 28613 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap7ed4244a-b0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.12/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 09 16:16:21 compute-0 nova_compute[117331]: 2025-10-09 16:16:21.927 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 09 16:16:21 compute-0 ovn_metadata_agent[28608]: 2025-10-09 16:16:21.928 28613 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap7ed4244a-b0, may_exist=True, interface_attrs={}) do_commit /usr/lib/python3.12/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 09 16:16:21 compute-0 ovn_metadata_agent[28608]: 2025-10-09 16:16:21.929 28613 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.12/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 09 16:16:21 compute-0 ovn_metadata_agent[28608]: 2025-10-09 16:16:21.929 28613 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap7ed4244a-b0, col_values=(('external_ids', {'iface-id': 'fa7fd37b-f1db-4203-966c-06eaa6fa3892'}),), if_exists=True) do_commit /usr/lib/python3.12/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 09 16:16:21 compute-0 ovn_metadata_agent[28608]: 2025-10-09 16:16:21.929 28613 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.12/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 09 16:16:21 compute-0 nova_compute[117331]: 2025-10-09 16:16:21.936 2 DEBUG nova.compute.manager [req-e3d27f6e-95a7-49d6-8fdc-332b00878a2e req-4effc1d9-2c20-41aa-9e97-5585a8866e8a ee15961ad2be4bada6c106ac4f69f422 c076c42271004e7aa86e84416e33f826 - - default default] [instance: 12702257-b2eb-4842-bdf8-25e7a3b20038] Received event network-vif-plugged-9ac52a72-3ea9-4d92-8114-f77f5fe13293 external_instance_event /usr/lib/python3.12/site-packages/nova/compute/manager.py:11812
Oct 09 16:16:21 compute-0 nova_compute[117331]: 2025-10-09 16:16:21.937 2 DEBUG oslo_concurrency.lockutils [req-e3d27f6e-95a7-49d6-8fdc-332b00878a2e req-4effc1d9-2c20-41aa-9e97-5585a8866e8a ee15961ad2be4bada6c106ac4f69f422 c076c42271004e7aa86e84416e33f826 - - default default] Acquiring lock "12702257-b2eb-4842-bdf8-25e7a3b20038-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:405
Oct 09 16:16:21 compute-0 nova_compute[117331]: 2025-10-09 16:16:21.937 2 DEBUG oslo_concurrency.lockutils [req-e3d27f6e-95a7-49d6-8fdc-332b00878a2e req-4effc1d9-2c20-41aa-9e97-5585a8866e8a ee15961ad2be4bada6c106ac4f69f422 c076c42271004e7aa86e84416e33f826 - - default default] Lock "12702257-b2eb-4842-bdf8-25e7a3b20038-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:410
Oct 09 16:16:21 compute-0 nova_compute[117331]: 2025-10-09 16:16:21.938 2 DEBUG oslo_concurrency.lockutils [req-e3d27f6e-95a7-49d6-8fdc-332b00878a2e req-4effc1d9-2c20-41aa-9e97-5585a8866e8a ee15961ad2be4bada6c106ac4f69f422 c076c42271004e7aa86e84416e33f826 - - default default] Lock "12702257-b2eb-4842-bdf8-25e7a3b20038-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:424
Oct 09 16:16:21 compute-0 nova_compute[117331]: 2025-10-09 16:16:21.938 2 DEBUG nova.compute.manager [req-e3d27f6e-95a7-49d6-8fdc-332b00878a2e req-4effc1d9-2c20-41aa-9e97-5585a8866e8a ee15961ad2be4bada6c106ac4f69f422 c076c42271004e7aa86e84416e33f826 - - default default] [instance: 12702257-b2eb-4842-bdf8-25e7a3b20038] Processing event network-vif-plugged-9ac52a72-3ea9-4d92-8114-f77f5fe13293 _process_instance_event /usr/lib/python3.12/site-packages/nova/compute/manager.py:11572
Oct 09 16:16:21 compute-0 ovn_metadata_agent[28608]: 2025-10-09 16:16:21.940 139687 DEBUG oslo.privsep.daemon [-] privsep: reply[67b037d5-46fa-470e-b87d-4d27b8c8c42d]: (4, '\nglobal\n    log         /dev/log local0 debug\n    log-tag     haproxy-metadata-proxy-7ed4244a-b510-4df6-9ffd-2f86603932fc\n    user        root\n    group       root\n    maxconn     1024\n    pidfile     /var/lib/neutron/external/pids/7ed4244a-b510-4df6-9ffd-2f86603932fc.pid.haproxy\n    daemon\n\ndefaults\n    log global\n    mode http\n    option httplog\n    option dontlognull\n    option http-server-close\n    option forwardfor\n    retries                 3\n    timeout http-request    30s\n    timeout connect         30s\n    timeout client          32s\n    timeout server          32s\n    timeout http-keep-alive 30s\n\nlisten listener\n    bind 169.254.169.254:80\n    \n    server metadata /var/lib/neutron/metadata_proxy\n\n    http-request add-header X-OVN-Network-ID 7ed4244a-b510-4df6-9ffd-2f86603932fc\n') _call_back /usr/lib/python3.12/site-packages/oslo_privsep/daemon.py:515
Oct 09 16:16:22 compute-0 unix_chkpwd[143530]: password check failed for user (root)
Oct 09 16:16:22 compute-0 sshd-session[143525]: pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=134.199.199.215  user=root
Oct 09 16:16:22 compute-0 sshd-session[143470]: Connection closed by invalid user docker 134.199.199.215 port 60658 [preauth]
Oct 09 16:16:22 compute-0 nova_compute[117331]: 2025-10-09 16:16:22.529 2 DEBUG nova.compute.manager [None req-8a0ba780-e6a1-42f0-a97c-d0963d6a8726 e5044998ddc3419bb14cc08417add581 be4e2f9059cd48f5b44a612256e3fc7b - - default default] [instance: 12702257-b2eb-4842-bdf8-25e7a3b20038] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.12/site-packages/nova/compute/manager.py:602
Oct 09 16:16:22 compute-0 nova_compute[117331]: 2025-10-09 16:16:22.533 2 DEBUG nova.virt.libvirt.driver [None req-8a0ba780-e6a1-42f0-a97c-d0963d6a8726 e5044998ddc3419bb14cc08417add581 be4e2f9059cd48f5b44a612256e3fc7b - - default default] [instance: 12702257-b2eb-4842-bdf8-25e7a3b20038] Guest created on hypervisor spawn /usr/lib/python3.12/site-packages/nova/virt/libvirt/driver.py:4816
Oct 09 16:16:22 compute-0 nova_compute[117331]: 2025-10-09 16:16:22.537 2 INFO nova.virt.libvirt.driver [-] [instance: 12702257-b2eb-4842-bdf8-25e7a3b20038] Instance spawned successfully.
Oct 09 16:16:22 compute-0 nova_compute[117331]: 2025-10-09 16:16:22.537 2 DEBUG nova.virt.libvirt.driver [None req-8a0ba780-e6a1-42f0-a97c-d0963d6a8726 e5044998ddc3419bb14cc08417add581 be4e2f9059cd48f5b44a612256e3fc7b - - default default] [instance: 12702257-b2eb-4842-bdf8-25e7a3b20038] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.12/site-packages/nova/virt/libvirt/driver.py:985
Oct 09 16:16:23 compute-0 nova_compute[117331]: 2025-10-09 16:16:23.051 2 DEBUG nova.virt.libvirt.driver [None req-8a0ba780-e6a1-42f0-a97c-d0963d6a8726 e5044998ddc3419bb14cc08417add581 be4e2f9059cd48f5b44a612256e3fc7b - - default default] [instance: 12702257-b2eb-4842-bdf8-25e7a3b20038] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.12/site-packages/nova/virt/libvirt/driver.py:1014
Oct 09 16:16:23 compute-0 nova_compute[117331]: 2025-10-09 16:16:23.053 2 DEBUG nova.virt.libvirt.driver [None req-8a0ba780-e6a1-42f0-a97c-d0963d6a8726 e5044998ddc3419bb14cc08417add581 be4e2f9059cd48f5b44a612256e3fc7b - - default default] [instance: 12702257-b2eb-4842-bdf8-25e7a3b20038] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.12/site-packages/nova/virt/libvirt/driver.py:1014
Oct 09 16:16:23 compute-0 nova_compute[117331]: 2025-10-09 16:16:23.054 2 DEBUG nova.virt.libvirt.driver [None req-8a0ba780-e6a1-42f0-a97c-d0963d6a8726 e5044998ddc3419bb14cc08417add581 be4e2f9059cd48f5b44a612256e3fc7b - - default default] [instance: 12702257-b2eb-4842-bdf8-25e7a3b20038] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.12/site-packages/nova/virt/libvirt/driver.py:1014
Oct 09 16:16:23 compute-0 nova_compute[117331]: 2025-10-09 16:16:23.055 2 DEBUG nova.virt.libvirt.driver [None req-8a0ba780-e6a1-42f0-a97c-d0963d6a8726 e5044998ddc3419bb14cc08417add581 be4e2f9059cd48f5b44a612256e3fc7b - - default default] [instance: 12702257-b2eb-4842-bdf8-25e7a3b20038] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.12/site-packages/nova/virt/libvirt/driver.py:1014
Oct 09 16:16:23 compute-0 nova_compute[117331]: 2025-10-09 16:16:23.055 2 DEBUG nova.virt.libvirt.driver [None req-8a0ba780-e6a1-42f0-a97c-d0963d6a8726 e5044998ddc3419bb14cc08417add581 be4e2f9059cd48f5b44a612256e3fc7b - - default default] [instance: 12702257-b2eb-4842-bdf8-25e7a3b20038] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.12/site-packages/nova/virt/libvirt/driver.py:1014
Oct 09 16:16:23 compute-0 nova_compute[117331]: 2025-10-09 16:16:23.056 2 DEBUG nova.virt.libvirt.driver [None req-8a0ba780-e6a1-42f0-a97c-d0963d6a8726 e5044998ddc3419bb14cc08417add581 be4e2f9059cd48f5b44a612256e3fc7b - - default default] [instance: 12702257-b2eb-4842-bdf8-25e7a3b20038] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.12/site-packages/nova/virt/libvirt/driver.py:1014
Oct 09 16:16:23 compute-0 nova_compute[117331]: 2025-10-09 16:16:23.567 2 INFO nova.compute.manager [None req-8a0ba780-e6a1-42f0-a97c-d0963d6a8726 e5044998ddc3419bb14cc08417add581 be4e2f9059cd48f5b44a612256e3fc7b - - default default] [instance: 12702257-b2eb-4842-bdf8-25e7a3b20038] Took 8.30 seconds to spawn the instance on the hypervisor.
Oct 09 16:16:23 compute-0 nova_compute[117331]: 2025-10-09 16:16:23.568 2 DEBUG nova.compute.manager [None req-8a0ba780-e6a1-42f0-a97c-d0963d6a8726 e5044998ddc3419bb14cc08417add581 be4e2f9059cd48f5b44a612256e3fc7b - - default default] [instance: 12702257-b2eb-4842-bdf8-25e7a3b20038] Checking state _get_power_state /usr/lib/python3.12/site-packages/nova/compute/manager.py:1825
Oct 09 16:16:23 compute-0 nova_compute[117331]: 2025-10-09 16:16:23.980 2 DEBUG nova.compute.manager [req-b2a56223-66da-4a71-b831-1134a58d3830 req-783e4798-2f03-40bf-8a86-b97376cb8875 ee15961ad2be4bada6c106ac4f69f422 c076c42271004e7aa86e84416e33f826 - - default default] [instance: 12702257-b2eb-4842-bdf8-25e7a3b20038] Received event network-vif-plugged-9ac52a72-3ea9-4d92-8114-f77f5fe13293 external_instance_event /usr/lib/python3.12/site-packages/nova/compute/manager.py:11812
Oct 09 16:16:23 compute-0 nova_compute[117331]: 2025-10-09 16:16:23.981 2 DEBUG oslo_concurrency.lockutils [req-b2a56223-66da-4a71-b831-1134a58d3830 req-783e4798-2f03-40bf-8a86-b97376cb8875 ee15961ad2be4bada6c106ac4f69f422 c076c42271004e7aa86e84416e33f826 - - default default] Acquiring lock "12702257-b2eb-4842-bdf8-25e7a3b20038-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:405
Oct 09 16:16:23 compute-0 nova_compute[117331]: 2025-10-09 16:16:23.981 2 DEBUG oslo_concurrency.lockutils [req-b2a56223-66da-4a71-b831-1134a58d3830 req-783e4798-2f03-40bf-8a86-b97376cb8875 ee15961ad2be4bada6c106ac4f69f422 c076c42271004e7aa86e84416e33f826 - - default default] Lock "12702257-b2eb-4842-bdf8-25e7a3b20038-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:410
Oct 09 16:16:23 compute-0 nova_compute[117331]: 2025-10-09 16:16:23.982 2 DEBUG oslo_concurrency.lockutils [req-b2a56223-66da-4a71-b831-1134a58d3830 req-783e4798-2f03-40bf-8a86-b97376cb8875 ee15961ad2be4bada6c106ac4f69f422 c076c42271004e7aa86e84416e33f826 - - default default] Lock "12702257-b2eb-4842-bdf8-25e7a3b20038-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:424
Oct 09 16:16:23 compute-0 nova_compute[117331]: 2025-10-09 16:16:23.982 2 DEBUG nova.compute.manager [req-b2a56223-66da-4a71-b831-1134a58d3830 req-783e4798-2f03-40bf-8a86-b97376cb8875 ee15961ad2be4bada6c106ac4f69f422 c076c42271004e7aa86e84416e33f826 - - default default] [instance: 12702257-b2eb-4842-bdf8-25e7a3b20038] No waiting events found dispatching network-vif-plugged-9ac52a72-3ea9-4d92-8114-f77f5fe13293 pop_instance_event /usr/lib/python3.12/site-packages/nova/compute/manager.py:344
Oct 09 16:16:23 compute-0 nova_compute[117331]: 2025-10-09 16:16:23.982 2 WARNING nova.compute.manager [req-b2a56223-66da-4a71-b831-1134a58d3830 req-783e4798-2f03-40bf-8a86-b97376cb8875 ee15961ad2be4bada6c106ac4f69f422 c076c42271004e7aa86e84416e33f826 - - default default] [instance: 12702257-b2eb-4842-bdf8-25e7a3b20038] Received unexpected event network-vif-plugged-9ac52a72-3ea9-4d92-8114-f77f5fe13293 for instance with vm_state active and task_state None.
Oct 09 16:16:24 compute-0 nova_compute[117331]: 2025-10-09 16:16:24.096 2 INFO nova.compute.manager [None req-8a0ba780-e6a1-42f0-a97c-d0963d6a8726 e5044998ddc3419bb14cc08417add581 be4e2f9059cd48f5b44a612256e3fc7b - - default default] [instance: 12702257-b2eb-4842-bdf8-25e7a3b20038] Took 13.57 seconds to build instance.
Oct 09 16:16:24 compute-0 nova_compute[117331]: 2025-10-09 16:16:24.148 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 09 16:16:24 compute-0 nova_compute[117331]: 2025-10-09 16:16:24.604 2 DEBUG oslo_concurrency.lockutils [None req-8a0ba780-e6a1-42f0-a97c-d0963d6a8726 e5044998ddc3419bb14cc08417add581 be4e2f9059cd48f5b44a612256e3fc7b - - default default] Lock "12702257-b2eb-4842-bdf8-25e7a3b20038" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 15.090s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:424
Oct 09 16:16:24 compute-0 sshd-session[143525]: Failed password for root from 134.199.199.215 port 60660 ssh2
Oct 09 16:16:25 compute-0 unix_chkpwd[143540]: password check failed for user (root)
Oct 09 16:16:25 compute-0 sshd-session[143538]: pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=134.199.199.215  user=root
Oct 09 16:16:25 compute-0 nova_compute[117331]: 2025-10-09 16:16:25.579 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 09 16:16:25 compute-0 podman[143541]: 2025-10-09 16:16:25.839814889 +0000 UTC m=+0.067727804 container health_status 86aa93e887899e54d383b0fc17fb3c5637680d1d8e146a130a557c837f0260fe (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']})
Oct 09 16:16:26 compute-0 sshd-session[143525]: Connection closed by authenticating user root 134.199.199.215 port 60660 [preauth]
Oct 09 16:16:27 compute-0 sshd-session[143538]: Failed password for root from 134.199.199.215 port 60676 ssh2
Oct 09 16:16:28 compute-0 sshd-session[143563]: Invalid user demo from 134.199.199.215 port 49772
Oct 09 16:16:28 compute-0 sshd-session[143563]: pam_unix(sshd:auth): check pass; user unknown
Oct 09 16:16:28 compute-0 sshd-session[143563]: pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=134.199.199.215
Oct 09 16:16:29 compute-0 nova_compute[117331]: 2025-10-09 16:16:29.151 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 09 16:16:29 compute-0 sshd-session[143538]: Connection closed by authenticating user root 134.199.199.215 port 60676 [preauth]
Oct 09 16:16:29 compute-0 podman[127775]: time="2025-10-09T16:16:29Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Oct 09 16:16:29 compute-0 podman[127775]: @ - - [09/Oct/2025:16:16:29 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 20749 "" "Go-http-client/1.1"
Oct 09 16:16:29 compute-0 podman[127775]: @ - - [09/Oct/2025:16:16:29 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 3486 "" "Go-http-client/1.1"
Oct 09 16:16:29 compute-0 podman[143565]: 2025-10-09 16:16:29.84300333 +0000 UTC m=+0.069687516 container health_status 0e966c7a283689daf23c5db5017f23f685ee836319240188a9f70ddd246563c5 (image=38.102.83.66:5001/podified-master-centos10/openstack-neutron-metadata-agent-ovn:watcher_latest, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 10 Base Image, tcib_build_tag=watcher_latest, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent, managed_by=edpm_ansible, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': '38.102.83.66:5001/podified-master-centos10/openstack-neutron-metadata-agent-ovn:watcher_latest', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, io.buildah.version=1.41.4, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251007)
Oct 09 16:16:29 compute-0 podman[143566]: 2025-10-09 16:16:29.844476297 +0000 UTC m=+0.072617749 container health_status 8227728d23f374cb9528be6ddf01dff38d8c792912a17b0d137932a18390b820 (image=38.102.83.66:5001/podified-master-centos10/openstack-iscsid:watcher_latest, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': '38.102.83.66:5001/podified-master-centos10/openstack-iscsid:watcher_latest', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, config_id=iscsid, container_name=iscsid, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true, io.buildah.version=1.41.4, org.label-schema.build-date=20251007, org.label-schema.name=CentOS Stream 10 Base Image, managed_by=edpm_ansible, tcib_build_tag=watcher_latest)
Oct 09 16:16:30 compute-0 nova_compute[117331]: 2025-10-09 16:16:30.583 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 09 16:16:30 compute-0 sshd-session[143563]: Failed password for invalid user demo from 134.199.199.215 port 49772 ssh2
Oct 09 16:16:31 compute-0 sshd-session[143563]: Connection closed by invalid user demo 134.199.199.215 port 49772 [preauth]
Oct 09 16:16:31 compute-0 openstack_network_exporter[129925]: ERROR   16:16:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Oct 09 16:16:31 compute-0 openstack_network_exporter[129925]: ERROR   16:16:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Oct 09 16:16:31 compute-0 openstack_network_exporter[129925]: ERROR   16:16:31 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Oct 09 16:16:31 compute-0 openstack_network_exporter[129925]: ERROR   16:16:31 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Oct 09 16:16:31 compute-0 openstack_network_exporter[129925]: 
Oct 09 16:16:31 compute-0 openstack_network_exporter[129925]: ERROR   16:16:31 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Oct 09 16:16:31 compute-0 openstack_network_exporter[129925]: 
Oct 09 16:16:31 compute-0 sshd-session[143602]: Invalid user user from 134.199.199.215 port 49778
Oct 09 16:16:31 compute-0 sshd-session[143602]: pam_unix(sshd:auth): check pass; user unknown
Oct 09 16:16:31 compute-0 sshd-session[143602]: pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=134.199.199.215
Oct 09 16:16:33 compute-0 ovn_controller[19752]: 2025-10-09T16:16:33Z|00011|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:bf:94:75 10.100.0.4
Oct 09 16:16:33 compute-0 ovn_controller[19752]: 2025-10-09T16:16:33Z|00012|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:bf:94:75 10.100.0.4
Oct 09 16:16:33 compute-0 sshd-session[143602]: Failed password for invalid user user from 134.199.199.215 port 49778 ssh2
Oct 09 16:16:33 compute-0 sshd-session[143602]: Connection closed by invalid user user 134.199.199.215 port 49778 [preauth]
Oct 09 16:16:34 compute-0 nova_compute[117331]: 2025-10-09 16:16:34.153 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 09 16:16:35 compute-0 ovn_metadata_agent[28608]: 2025-10-09 16:16:35.292 28613 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:405
Oct 09 16:16:35 compute-0 ovn_metadata_agent[28608]: 2025-10-09 16:16:35.293 28613 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:410
Oct 09 16:16:35 compute-0 ovn_metadata_agent[28608]: 2025-10-09 16:16:35.293 28613 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:424
Oct 09 16:16:35 compute-0 sshd-session[143622]: Invalid user deployer from 134.199.199.215 port 49794
Oct 09 16:16:35 compute-0 sshd-session[143622]: pam_unix(sshd:auth): check pass; user unknown
Oct 09 16:16:35 compute-0 sshd-session[143622]: pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=134.199.199.215
Oct 09 16:16:35 compute-0 nova_compute[117331]: 2025-10-09 16:16:35.611 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 09 16:16:38 compute-0 sshd-session[143622]: Failed password for invalid user deployer from 134.199.199.215 port 49794 ssh2
Oct 09 16:16:38 compute-0 unix_chkpwd[143627]: password check failed for user (root)
Oct 09 16:16:38 compute-0 sshd-session[143625]: pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=134.199.199.215  user=root
Oct 09 16:16:39 compute-0 sshd-session[143622]: Connection closed by invalid user deployer 134.199.199.215 port 49794 [preauth]
Oct 09 16:16:39 compute-0 nova_compute[117331]: 2025-10-09 16:16:39.155 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 09 16:16:39 compute-0 podman[143628]: 2025-10-09 16:16:39.823853888 +0000 UTC m=+0.059157981 container health_status 64fa251112d01abb34ec1bb8e22019815ddcaaaa391ce5ec91517147ac5a9994 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, io.openshift.expose-services=, io.buildah.version=1.33.7, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., maintainer=Red Hat, Inc., distribution-scope=public, name=ubi9-minimal, io.openshift.tags=minimal rhel9, managed_by=edpm_ansible, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, container_name=openstack_network_exporter, architecture=x86_64, config_id=edpm, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, version=9.6, release=1755695350, vcs-type=git, build-date=2025-08-20T13:12:41, url=https://catalog.redhat.com/en/search?searchType=containers, com.redhat.component=ubi9-minimal-container, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly.)
Oct 09 16:16:40 compute-0 nova_compute[117331]: 2025-10-09 16:16:40.613 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 09 16:16:40 compute-0 sshd-session[143625]: Failed password for root from 134.199.199.215 port 46264 ssh2
Oct 09 16:16:42 compute-0 unix_chkpwd[143653]: password check failed for user (root)
Oct 09 16:16:42 compute-0 sshd-session[143651]: pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=134.199.199.215  user=root
Oct 09 16:16:42 compute-0 podman[143654]: 2025-10-09 16:16:42.859663465 +0000 UTC m=+0.086848562 container health_status d615e4f24ff62fbb14be4a63e6da568ec5b22309773c1c66103b6b4de64a8a2f (image=38.102.83.66:5001/podified-master-centos10/openstack-ovn-controller:watcher_latest, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': '38.102.83.66:5001/podified-master-centos10/openstack-ovn-controller:watcher_latest', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251007, org.label-schema.schema-version=1.0, tcib_build_tag=watcher_latest, config_id=ovn_controller, container_name=ovn_controller, io.buildah.version=1.41.4, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true)
Oct 09 16:16:42 compute-0 sshd-session[143625]: Connection closed by authenticating user root 134.199.199.215 port 46264 [preauth]
Oct 09 16:16:43 compute-0 sshd-session[143651]: Failed password for root from 134.199.199.215 port 46276 ssh2
Oct 09 16:16:44 compute-0 sshd-session[143651]: Connection closed by authenticating user root 134.199.199.215 port 46276 [preauth]
Oct 09 16:16:44 compute-0 nova_compute[117331]: 2025-10-09 16:16:44.158 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 09 16:16:45 compute-0 sshd[52903]: drop connection #0 from [134.199.199.215]:46282 on [38.102.83.110]:22 penalty: failed authentication
Oct 09 16:16:45 compute-0 nova_compute[117331]: 2025-10-09 16:16:45.616 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 09 16:16:48 compute-0 sshd[52903]: drop connection #0 from [134.199.199.215]:59784 on [38.102.83.110]:22 penalty: failed authentication
Oct 09 16:16:49 compute-0 nova_compute[117331]: 2025-10-09 16:16:49.162 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 09 16:16:50 compute-0 nova_compute[117331]: 2025-10-09 16:16:50.621 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 09 16:16:51 compute-0 podman[143681]: 2025-10-09 16:16:51.852627764 +0000 UTC m=+0.075680967 container health_status 270830b5167cafbab735d8118980f26987e673cd0fa464dd2202394830bcca66 (image=38.102.83.66:5001/podified-master-centos10/openstack-multipathd:watcher_latest, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251007, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, tcib_build_tag=watcher_latest, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 10 Base Image, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': '38.102.83.66:5001/podified-master-centos10/openstack-multipathd:watcher_latest', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd, tcib_managed=true, config_id=multipathd, io.buildah.version=1.41.4)
Oct 09 16:16:51 compute-0 sshd[52903]: drop connection #0 from [134.199.199.215]:59796 on [38.102.83.110]:22 penalty: failed authentication
Oct 09 16:16:54 compute-0 nova_compute[117331]: 2025-10-09 16:16:54.163 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 09 16:16:55 compute-0 nova_compute[117331]: 2025-10-09 16:16:55.108 2 DEBUG nova.virt.libvirt.driver [None req-f2c8be06-10a6-4d4e-83ce-15f398bd0cd3 005258eefa5345409efe13f6a1de22ab c076c42271004e7aa86e84416e33f826 - - default default] [instance: 70a451f7-5bde-42f1-a448-d48b3b24d9d6] Check if temp file /var/lib/nova/instances/tmpuajgq18h exists to indicate shared storage is being used for migration. Exists? False _check_shared_storage_test_file /usr/lib/python3.12/site-packages/nova/virt/libvirt/driver.py:10968
Oct 09 16:16:55 compute-0 nova_compute[117331]: 2025-10-09 16:16:55.115 2 DEBUG nova.compute.manager [None req-f2c8be06-10a6-4d4e-83ce-15f398bd0cd3 005258eefa5345409efe13f6a1de22ab c076c42271004e7aa86e84416e33f826 - - default default] source check data is LibvirtLiveMigrateData(bdms=<?>,block_migration=True,disk_available_mb=71680,disk_over_commit=<?>,dst_cpu_shared_set_info=set([]),dst_numa_info=<?>,dst_supports_mdev_live_migration=True,dst_supports_numa_live_migration=<?>,dst_wants_file_backed_memory=False,file_backed_memory_discard=<?>,filename='tmpuajgq18h',graphics_listen_addr_spice=127.0.0.1,graphics_listen_addr_vnc=::,image_type='qcow2',instance_relative_path='70a451f7-5bde-42f1-a448-d48b3b24d9d6',is_shared_block_storage=False,is_shared_instance_path=False,is_volume_backed=False,migration=<?>,old_vol_attachment_ids=<?>,pci_dev_map_src_dst=<?>,serial_listen_addr=None,serial_listen_ports=<?>,source_mdev_types=<?>,src_supports_native_luks=<?>,src_supports_numa_live_migration=<?>,supported_perf_events=<?>,target_connect_addr=<?>,target_mdevs=<?>,vifs=[VIFMigrateData],wait_for_vif_plugged=<?>) check_can_live_migrate_source /usr/lib/python3.12/site-packages/nova/compute/manager.py:9294
Oct 09 16:16:55 compute-0 sshd[52903]: drop connection #0 from [134.199.199.215]:59802 on [38.102.83.110]:22 penalty: failed authentication
Oct 09 16:16:55 compute-0 nova_compute[117331]: 2025-10-09 16:16:55.623 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 09 16:16:56 compute-0 podman[143716]: 2025-10-09 16:16:56.863509212 +0000 UTC m=+0.071853784 container health_status 86aa93e887899e54d383b0fc17fb3c5637680d1d8e146a130a557c837f0260fe (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm)
Oct 09 16:16:58 compute-0 nova_compute[117331]: 2025-10-09 16:16:58.561 2 DEBUG oslo_concurrency.lockutils [None req-54e55369-5112-43e0-ada4-044cf39dde64 005258eefa5345409efe13f6a1de22ab c076c42271004e7aa86e84416e33f826 - - default default] Acquiring lock "refresh_cache-12702257-b2eb-4842-bdf8-25e7a3b20038" lock /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:313
Oct 09 16:16:58 compute-0 nova_compute[117331]: 2025-10-09 16:16:58.561 2 DEBUG oslo_concurrency.lockutils [None req-54e55369-5112-43e0-ada4-044cf39dde64 005258eefa5345409efe13f6a1de22ab c076c42271004e7aa86e84416e33f826 - - default default] Acquired lock "refresh_cache-12702257-b2eb-4842-bdf8-25e7a3b20038" lock /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:316
Oct 09 16:16:58 compute-0 nova_compute[117331]: 2025-10-09 16:16:58.561 2 DEBUG nova.network.neutron [None req-54e55369-5112-43e0-ada4-044cf39dde64 005258eefa5345409efe13f6a1de22ab c076c42271004e7aa86e84416e33f826 - - default default] [instance: 12702257-b2eb-4842-bdf8-25e7a3b20038] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.12/site-packages/nova/network/neutron.py:2070
Oct 09 16:16:58 compute-0 sshd[52903]: drop connection #0 from [134.199.199.215]:41470 on [38.102.83.110]:22 penalty: failed authentication
Oct 09 16:16:59 compute-0 nova_compute[117331]: 2025-10-09 16:16:59.068 2 WARNING neutronclient.v2_0.client [None req-54e55369-5112-43e0-ada4-044cf39dde64 005258eefa5345409efe13f6a1de22ab c076c42271004e7aa86e84416e33f826 - - default default] The python binding code in neutronclient is deprecated in favor of OpenstackSDK, please use that as this will be removed in a future release.
Oct 09 16:16:59 compute-0 nova_compute[117331]: 2025-10-09 16:16:59.165 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 09 16:16:59 compute-0 podman[127775]: time="2025-10-09T16:16:59Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Oct 09 16:16:59 compute-0 nova_compute[117331]: 2025-10-09 16:16:59.748 2 WARNING neutronclient.v2_0.client [None req-54e55369-5112-43e0-ada4-044cf39dde64 005258eefa5345409efe13f6a1de22ab c076c42271004e7aa86e84416e33f826 - - default default] The python binding code in neutronclient is deprecated in favor of OpenstackSDK, please use that as this will be removed in a future release.
Oct 09 16:16:59 compute-0 podman[127775]: @ - - [09/Oct/2025:16:16:59 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 20749 "" "Go-http-client/1.1"
Oct 09 16:16:59 compute-0 podman[127775]: @ - - [09/Oct/2025:16:16:59 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 3483 "" "Go-http-client/1.1"
Oct 09 16:16:59 compute-0 nova_compute[117331]: 2025-10-09 16:16:59.885 2 DEBUG oslo_concurrency.processutils [None req-f2c8be06-10a6-4d4e-83ce-15f398bd0cd3 005258eefa5345409efe13f6a1de22ab c076c42271004e7aa86e84416e33f826 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/70a451f7-5bde-42f1-a448-d48b3b24d9d6/disk --force-share --output=json execute /usr/lib/python3.12/site-packages/oslo_concurrency/processutils.py:349
Oct 09 16:16:59 compute-0 nova_compute[117331]: 2025-10-09 16:16:59.897 2 DEBUG nova.network.neutron [None req-54e55369-5112-43e0-ada4-044cf39dde64 005258eefa5345409efe13f6a1de22ab c076c42271004e7aa86e84416e33f826 - - default default] [instance: 12702257-b2eb-4842-bdf8-25e7a3b20038] Updating instance_info_cache with network_info: [{"id": "9ac52a72-3ea9-4d92-8114-f77f5fe13293", "address": "fa:16:3e:bf:94:75", "network": {"id": "7ed4244a-b510-4df6-9ffd-2f86603932fc", "bridge": "br-int", "label": "tempest-TestExecuteActionsViaActuator-1219246163-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "153d68ab33e64b958060574bb1741725", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap9ac52a72-3e", "ovs_interfaceid": "9ac52a72-3ea9-4d92-8114-f77f5fe13293", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.12/site-packages/nova/network/neutron.py:116
Oct 09 16:16:59 compute-0 nova_compute[117331]: 2025-10-09 16:16:59.953 2 DEBUG oslo_concurrency.processutils [None req-f2c8be06-10a6-4d4e-83ce-15f398bd0cd3 005258eefa5345409efe13f6a1de22ab c076c42271004e7aa86e84416e33f826 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/70a451f7-5bde-42f1-a448-d48b3b24d9d6/disk --force-share --output=json" returned: 0 in 0.067s execute /usr/lib/python3.12/site-packages/oslo_concurrency/processutils.py:372
Oct 09 16:16:59 compute-0 nova_compute[117331]: 2025-10-09 16:16:59.954 2 DEBUG oslo_concurrency.processutils [None req-f2c8be06-10a6-4d4e-83ce-15f398bd0cd3 005258eefa5345409efe13f6a1de22ab c076c42271004e7aa86e84416e33f826 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/70a451f7-5bde-42f1-a448-d48b3b24d9d6/disk --force-share --output=json execute /usr/lib/python3.12/site-packages/oslo_concurrency/processutils.py:349
Oct 09 16:17:00 compute-0 nova_compute[117331]: 2025-10-09 16:17:00.037 2 DEBUG oslo_concurrency.processutils [None req-f2c8be06-10a6-4d4e-83ce-15f398bd0cd3 005258eefa5345409efe13f6a1de22ab c076c42271004e7aa86e84416e33f826 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/70a451f7-5bde-42f1-a448-d48b3b24d9d6/disk --force-share --output=json" returned: 0 in 0.083s execute /usr/lib/python3.12/site-packages/oslo_concurrency/processutils.py:372
Oct 09 16:17:00 compute-0 nova_compute[117331]: 2025-10-09 16:17:00.039 2 DEBUG nova.compute.manager [None req-f2c8be06-10a6-4d4e-83ce-15f398bd0cd3 005258eefa5345409efe13f6a1de22ab c076c42271004e7aa86e84416e33f826 - - default default] [instance: 70a451f7-5bde-42f1-a448-d48b3b24d9d6] Preparing to wait for external event network-vif-plugged-a5c38ba6-3efb-4090-a42b-1dd250959041 prepare_for_instance_event /usr/lib/python3.12/site-packages/nova/compute/manager.py:307
Oct 09 16:17:00 compute-0 nova_compute[117331]: 2025-10-09 16:17:00.039 2 DEBUG oslo_concurrency.lockutils [None req-f2c8be06-10a6-4d4e-83ce-15f398bd0cd3 005258eefa5345409efe13f6a1de22ab c076c42271004e7aa86e84416e33f826 - - default default] Acquiring lock "70a451f7-5bde-42f1-a448-d48b3b24d9d6-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:405
Oct 09 16:17:00 compute-0 nova_compute[117331]: 2025-10-09 16:17:00.039 2 DEBUG oslo_concurrency.lockutils [None req-f2c8be06-10a6-4d4e-83ce-15f398bd0cd3 005258eefa5345409efe13f6a1de22ab c076c42271004e7aa86e84416e33f826 - - default default] Lock "70a451f7-5bde-42f1-a448-d48b3b24d9d6-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:410
Oct 09 16:17:00 compute-0 nova_compute[117331]: 2025-10-09 16:17:00.040 2 DEBUG oslo_concurrency.lockutils [None req-f2c8be06-10a6-4d4e-83ce-15f398bd0cd3 005258eefa5345409efe13f6a1de22ab c076c42271004e7aa86e84416e33f826 - - default default] Lock "70a451f7-5bde-42f1-a448-d48b3b24d9d6-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:424
Oct 09 16:17:00 compute-0 nova_compute[117331]: 2025-10-09 16:17:00.407 2 DEBUG oslo_concurrency.lockutils [None req-54e55369-5112-43e0-ada4-044cf39dde64 005258eefa5345409efe13f6a1de22ab c076c42271004e7aa86e84416e33f826 - - default default] Releasing lock "refresh_cache-12702257-b2eb-4842-bdf8-25e7a3b20038" lock /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:334
Oct 09 16:17:00 compute-0 nova_compute[117331]: 2025-10-09 16:17:00.625 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 09 16:17:00 compute-0 podman[143747]: 2025-10-09 16:17:00.841315848 +0000 UTC m=+0.059494922 container health_status 0e966c7a283689daf23c5db5017f23f685ee836319240188a9f70ddd246563c5 (image=38.102.83.66:5001/podified-master-centos10/openstack-neutron-metadata-agent-ovn:watcher_latest, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 10 Base Image, tcib_build_tag=watcher_latest, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251007, io.buildah.version=1.41.4, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': '38.102.83.66:5001/podified-master-centos10/openstack-neutron-metadata-agent-ovn:watcher_latest', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.vendor=CentOS)
Oct 09 16:17:00 compute-0 podman[143748]: 2025-10-09 16:17:00.864626988 +0000 UTC m=+0.071369239 container health_status 8227728d23f374cb9528be6ddf01dff38d8c792912a17b0d137932a18390b820 (image=38.102.83.66:5001/podified-master-centos10/openstack-iscsid:watcher_latest, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251007, tcib_build_tag=watcher_latest, org.label-schema.schema-version=1.0, config_id=iscsid, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': '38.102.83.66:5001/podified-master-centos10/openstack-iscsid:watcher_latest', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, io.buildah.version=1.41.4, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, container_name=iscsid)
Oct 09 16:17:01 compute-0 openstack_network_exporter[129925]: ERROR   16:17:01 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Oct 09 16:17:01 compute-0 openstack_network_exporter[129925]: ERROR   16:17:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Oct 09 16:17:01 compute-0 openstack_network_exporter[129925]: ERROR   16:17:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Oct 09 16:17:01 compute-0 openstack_network_exporter[129925]: ERROR   16:17:01 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Oct 09 16:17:01 compute-0 openstack_network_exporter[129925]: 
Oct 09 16:17:01 compute-0 openstack_network_exporter[129925]: ERROR   16:17:01 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Oct 09 16:17:01 compute-0 openstack_network_exporter[129925]: 
Oct 09 16:17:01 compute-0 nova_compute[117331]: 2025-10-09 16:17:01.945 2 DEBUG nova.virt.libvirt.driver [None req-54e55369-5112-43e0-ada4-044cf39dde64 005258eefa5345409efe13f6a1de22ab c076c42271004e7aa86e84416e33f826 - - default default] [instance: 12702257-b2eb-4842-bdf8-25e7a3b20038] Starting migrate_disk_and_power_off migrate_disk_and_power_off /usr/lib/python3.12/site-packages/nova/virt/libvirt/driver.py:12417
Oct 09 16:17:01 compute-0 nova_compute[117331]: 2025-10-09 16:17:01.946 2 DEBUG nova.virt.libvirt.volume.remotefs [None req-54e55369-5112-43e0-ada4-044cf39dde64 005258eefa5345409efe13f6a1de22ab c076c42271004e7aa86e84416e33f826 - - default default] Creating file /var/lib/nova/instances/12702257-b2eb-4842-bdf8-25e7a3b20038/d39ea067735a43de99763002ea7f94da.tmp on remote host 192.168.122.101 create_file /usr/lib/python3.12/site-packages/nova/virt/libvirt/volume/remotefs.py:79
Oct 09 16:17:01 compute-0 nova_compute[117331]: 2025-10-09 16:17:01.946 2 DEBUG oslo_concurrency.processutils [None req-54e55369-5112-43e0-ada4-044cf39dde64 005258eefa5345409efe13f6a1de22ab c076c42271004e7aa86e84416e33f826 - - default default] Running cmd (subprocess): ssh -o BatchMode=yes 192.168.122.101 touch /var/lib/nova/instances/12702257-b2eb-4842-bdf8-25e7a3b20038/d39ea067735a43de99763002ea7f94da.tmp execute /usr/lib/python3.12/site-packages/oslo_concurrency/processutils.py:349
Oct 09 16:17:02 compute-0 sshd[52903]: drop connection #0 from [134.199.199.215]:41484 on [38.102.83.110]:22 penalty: failed authentication
Oct 09 16:17:02 compute-0 nova_compute[117331]: 2025-10-09 16:17:02.371 2 DEBUG oslo_concurrency.processutils [None req-54e55369-5112-43e0-ada4-044cf39dde64 005258eefa5345409efe13f6a1de22ab c076c42271004e7aa86e84416e33f826 - - default default] CMD "ssh -o BatchMode=yes 192.168.122.101 touch /var/lib/nova/instances/12702257-b2eb-4842-bdf8-25e7a3b20038/d39ea067735a43de99763002ea7f94da.tmp" returned: 1 in 0.425s execute /usr/lib/python3.12/site-packages/oslo_concurrency/processutils.py:372
Oct 09 16:17:02 compute-0 nova_compute[117331]: 2025-10-09 16:17:02.372 2 DEBUG oslo_concurrency.processutils [None req-54e55369-5112-43e0-ada4-044cf39dde64 005258eefa5345409efe13f6a1de22ab c076c42271004e7aa86e84416e33f826 - - default default] 'ssh -o BatchMode=yes 192.168.122.101 touch /var/lib/nova/instances/12702257-b2eb-4842-bdf8-25e7a3b20038/d39ea067735a43de99763002ea7f94da.tmp' failed. Not Retrying. execute /usr/lib/python3.12/site-packages/oslo_concurrency/processutils.py:423
Oct 09 16:17:02 compute-0 nova_compute[117331]: 2025-10-09 16:17:02.372 2 DEBUG nova.virt.libvirt.volume.remotefs [None req-54e55369-5112-43e0-ada4-044cf39dde64 005258eefa5345409efe13f6a1de22ab c076c42271004e7aa86e84416e33f826 - - default default] Creating directory /var/lib/nova/instances/12702257-b2eb-4842-bdf8-25e7a3b20038 on remote host 192.168.122.101 create_dir /usr/lib/python3.12/site-packages/nova/virt/libvirt/volume/remotefs.py:91
Oct 09 16:17:02 compute-0 nova_compute[117331]: 2025-10-09 16:17:02.372 2 DEBUG oslo_concurrency.processutils [None req-54e55369-5112-43e0-ada4-044cf39dde64 005258eefa5345409efe13f6a1de22ab c076c42271004e7aa86e84416e33f826 - - default default] Running cmd (subprocess): ssh -o BatchMode=yes 192.168.122.101 mkdir -p /var/lib/nova/instances/12702257-b2eb-4842-bdf8-25e7a3b20038 execute /usr/lib/python3.12/site-packages/oslo_concurrency/processutils.py:349
Oct 09 16:17:02 compute-0 nova_compute[117331]: 2025-10-09 16:17:02.595 2 DEBUG oslo_concurrency.processutils [None req-54e55369-5112-43e0-ada4-044cf39dde64 005258eefa5345409efe13f6a1de22ab c076c42271004e7aa86e84416e33f826 - - default default] CMD "ssh -o BatchMode=yes 192.168.122.101 mkdir -p /var/lib/nova/instances/12702257-b2eb-4842-bdf8-25e7a3b20038" returned: 0 in 0.223s execute /usr/lib/python3.12/site-packages/oslo_concurrency/processutils.py:372
Oct 09 16:17:02 compute-0 nova_compute[117331]: 2025-10-09 16:17:02.599 2 DEBUG nova.virt.libvirt.driver [None req-54e55369-5112-43e0-ada4-044cf39dde64 005258eefa5345409efe13f6a1de22ab c076c42271004e7aa86e84416e33f826 - - default default] [instance: 12702257-b2eb-4842-bdf8-25e7a3b20038] Shutting down instance from state 1 _clean_shutdown /usr/lib/python3.12/site-packages/nova/virt/libvirt/driver.py:4247
Oct 09 16:17:04 compute-0 nova_compute[117331]: 2025-10-09 16:17:04.168 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 09 16:17:04 compute-0 kernel: tap9ac52a72-3e (unregistering): left promiscuous mode
Oct 09 16:17:04 compute-0 NetworkManager[1028]: <info>  [1760026624.7577] device (tap9ac52a72-3e): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Oct 09 16:17:04 compute-0 nova_compute[117331]: 2025-10-09 16:17:04.769 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 09 16:17:04 compute-0 ovn_controller[19752]: 2025-10-09T16:17:04Z|00084|binding|INFO|Releasing lport 9ac52a72-3ea9-4d92-8114-f77f5fe13293 from this chassis (sb_readonly=0)
Oct 09 16:17:04 compute-0 ovn_controller[19752]: 2025-10-09T16:17:04Z|00085|binding|INFO|Setting lport 9ac52a72-3ea9-4d92-8114-f77f5fe13293 down in Southbound
Oct 09 16:17:04 compute-0 ovn_controller[19752]: 2025-10-09T16:17:04Z|00086|binding|INFO|Removing iface tap9ac52a72-3e ovn-installed in OVS
Oct 09 16:17:04 compute-0 nova_compute[117331]: 2025-10-09 16:17:04.771 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 09 16:17:04 compute-0 ovn_metadata_agent[28608]: 2025-10-09 16:17:04.786 28613 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:bf:94:75 10.100.0.4'], port_security=['fa:16:3e:bf:94:75 10.100.0.4'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.4/28', 'neutron:device_id': '12702257-b2eb-4842-bdf8-25e7a3b20038', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-7ed4244a-b510-4df6-9ffd-2f86603932fc', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'be4e2f9059cd48f5b44a612256e3fc7b', 'neutron:revision_number': '5', 'neutron:security_group_ids': '92d94554-4438-4b2d-9e48-eb178205b5a1', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=8754009b-0d8f-4d41-af4c-a82e7b57a4f6, chassis=[], tunnel_key=7, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f37b2576840>], logical_port=9ac52a72-3ea9-4d92-8114-f77f5fe13293) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f37b2576840>]) matches /usr/lib/python3.12/site-packages/ovsdbapp/backend/ovs_idl/event.py:55
Oct 09 16:17:04 compute-0 ovn_metadata_agent[28608]: 2025-10-09 16:17:04.787 28613 INFO neutron.agent.ovn.metadata.agent [-] Port 9ac52a72-3ea9-4d92-8114-f77f5fe13293 in datapath 7ed4244a-b510-4df6-9ffd-2f86603932fc unbound from our chassis
Oct 09 16:17:04 compute-0 ovn_metadata_agent[28608]: 2025-10-09 16:17:04.788 28613 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 7ed4244a-b510-4df6-9ffd-2f86603932fc
Oct 09 16:17:04 compute-0 nova_compute[117331]: 2025-10-09 16:17:04.820 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 09 16:17:04 compute-0 ovn_metadata_agent[28608]: 2025-10-09 16:17:04.840 139687 DEBUG oslo.privsep.daemon [-] privsep: reply[468e38e7-f0ba-4a14-8774-24c6e8ec9132]: (4, True) _call_back /usr/lib/python3.12/site-packages/oslo_privsep/daemon.py:515
Oct 09 16:17:04 compute-0 systemd[1]: machine-qemu\x2d5\x2dinstance\x2d00000008.scope: Deactivated successfully.
Oct 09 16:17:04 compute-0 systemd[1]: machine-qemu\x2d5\x2dinstance\x2d00000008.scope: Consumed 13.857s CPU time.
Oct 09 16:17:04 compute-0 systemd-machined[77487]: Machine qemu-5-instance-00000008 terminated.
Oct 09 16:17:04 compute-0 ovn_metadata_agent[28608]: 2025-10-09 16:17:04.869 141048 DEBUG oslo.privsep.daemon [-] privsep: reply[c4b97a48-ee7d-49d9-a265-5d08d6dee1e2]: (4, None) _call_back /usr/lib/python3.12/site-packages/oslo_privsep/daemon.py:515
Oct 09 16:17:04 compute-0 ovn_metadata_agent[28608]: 2025-10-09 16:17:04.872 141048 DEBUG oslo.privsep.daemon [-] privsep: reply[18500843-d242-4b6c-8014-46c4923dfa8a]: (4, None) _call_back /usr/lib/python3.12/site-packages/oslo_privsep/daemon.py:515
Oct 09 16:17:04 compute-0 ovn_metadata_agent[28608]: 2025-10-09 16:17:04.902 141048 DEBUG oslo.privsep.daemon [-] privsep: reply[9d4df910-ab67-4ee6-98ad-647887b9e66e]: (4, None) _call_back /usr/lib/python3.12/site-packages/oslo_privsep/daemon.py:515
Oct 09 16:17:04 compute-0 ovn_metadata_agent[28608]: 2025-10-09 16:17:04.927 139687 DEBUG oslo.privsep.daemon [-] privsep: reply[639f8cf3-40cb-434d-87df-2f5ef3476767]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap7ed4244a-b1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:11:ef:1e'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 12, 'tx_packets': 7, 'rx_bytes': 1000, 'tx_bytes': 438, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 12, 'tx_packets': 7, 'rx_bytes': 1000, 'tx_bytes': 438, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 27], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 148278, 'reachable_time': 15142, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 8, 'inoctets': 720, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 8, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 720, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 8, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 143800, 'error': None, 'target': 'ovnmeta-7ed4244a-b510-4df6-9ffd-2f86603932fc', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.12/site-packages/oslo_privsep/daemon.py:515
Oct 09 16:17:04 compute-0 nova_compute[117331]: 2025-10-09 16:17:04.948 2 DEBUG nova.compute.manager [req-ffd0ea7d-c2a6-41bb-ade1-ee1fca462a2e req-6b284cdb-e570-4fa7-932b-d183a205b8b8 ee15961ad2be4bada6c106ac4f69f422 c076c42271004e7aa86e84416e33f826 - - default default] [instance: 12702257-b2eb-4842-bdf8-25e7a3b20038] Received event network-vif-unplugged-9ac52a72-3ea9-4d92-8114-f77f5fe13293 external_instance_event /usr/lib/python3.12/site-packages/nova/compute/manager.py:11812
Oct 09 16:17:04 compute-0 nova_compute[117331]: 2025-10-09 16:17:04.948 2 DEBUG oslo_concurrency.lockutils [req-ffd0ea7d-c2a6-41bb-ade1-ee1fca462a2e req-6b284cdb-e570-4fa7-932b-d183a205b8b8 ee15961ad2be4bada6c106ac4f69f422 c076c42271004e7aa86e84416e33f826 - - default default] Acquiring lock "12702257-b2eb-4842-bdf8-25e7a3b20038-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:405
Oct 09 16:17:04 compute-0 nova_compute[117331]: 2025-10-09 16:17:04.948 2 DEBUG oslo_concurrency.lockutils [req-ffd0ea7d-c2a6-41bb-ade1-ee1fca462a2e req-6b284cdb-e570-4fa7-932b-d183a205b8b8 ee15961ad2be4bada6c106ac4f69f422 c076c42271004e7aa86e84416e33f826 - - default default] Lock "12702257-b2eb-4842-bdf8-25e7a3b20038-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:410
Oct 09 16:17:04 compute-0 nova_compute[117331]: 2025-10-09 16:17:04.949 2 DEBUG oslo_concurrency.lockutils [req-ffd0ea7d-c2a6-41bb-ade1-ee1fca462a2e req-6b284cdb-e570-4fa7-932b-d183a205b8b8 ee15961ad2be4bada6c106ac4f69f422 c076c42271004e7aa86e84416e33f826 - - default default] Lock "12702257-b2eb-4842-bdf8-25e7a3b20038-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:424
Oct 09 16:17:04 compute-0 nova_compute[117331]: 2025-10-09 16:17:04.949 2 DEBUG nova.compute.manager [req-ffd0ea7d-c2a6-41bb-ade1-ee1fca462a2e req-6b284cdb-e570-4fa7-932b-d183a205b8b8 ee15961ad2be4bada6c106ac4f69f422 c076c42271004e7aa86e84416e33f826 - - default default] [instance: 12702257-b2eb-4842-bdf8-25e7a3b20038] No waiting events found dispatching network-vif-unplugged-9ac52a72-3ea9-4d92-8114-f77f5fe13293 pop_instance_event /usr/lib/python3.12/site-packages/nova/compute/manager.py:344
Oct 09 16:17:04 compute-0 nova_compute[117331]: 2025-10-09 16:17:04.949 2 WARNING nova.compute.manager [req-ffd0ea7d-c2a6-41bb-ade1-ee1fca462a2e req-6b284cdb-e570-4fa7-932b-d183a205b8b8 ee15961ad2be4bada6c106ac4f69f422 c076c42271004e7aa86e84416e33f826 - - default default] [instance: 12702257-b2eb-4842-bdf8-25e7a3b20038] Received unexpected event network-vif-unplugged-9ac52a72-3ea9-4d92-8114-f77f5fe13293 for instance with vm_state active and task_state resize_migrating.
Oct 09 16:17:04 compute-0 ovn_metadata_agent[28608]: 2025-10-09 16:17:04.953 139687 DEBUG oslo.privsep.daemon [-] privsep: reply[6a9a7713-bf2e-4403-a940-babbe3acce1f]: (4, ({'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tap7ed4244a-b1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 148290, 'tstamp': 148290}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 143801, 'error': None, 'target': 'ovnmeta-7ed4244a-b510-4df6-9ffd-2f86603932fc', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tap7ed4244a-b1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 148293, 'tstamp': 148293}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 143801, 'error': None, 'target': 'ovnmeta-7ed4244a-b510-4df6-9ffd-2f86603932fc', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.12/site-packages/oslo_privsep/daemon.py:515
Oct 09 16:17:04 compute-0 ovn_metadata_agent[28608]: 2025-10-09 16:17:04.955 28613 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap7ed4244a-b0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.12/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 09 16:17:04 compute-0 nova_compute[117331]: 2025-10-09 16:17:04.956 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 09 16:17:04 compute-0 nova_compute[117331]: 2025-10-09 16:17:04.961 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 09 16:17:04 compute-0 ovn_metadata_agent[28608]: 2025-10-09 16:17:04.961 28613 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap7ed4244a-b0, may_exist=True, interface_attrs={}) do_commit /usr/lib/python3.12/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 09 16:17:04 compute-0 ovn_metadata_agent[28608]: 2025-10-09 16:17:04.961 28613 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.12/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 09 16:17:04 compute-0 ovn_metadata_agent[28608]: 2025-10-09 16:17:04.962 28613 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap7ed4244a-b0, col_values=(('external_ids', {'iface-id': 'fa7fd37b-f1db-4203-966c-06eaa6fa3892'}),), if_exists=True) do_commit /usr/lib/python3.12/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 09 16:17:04 compute-0 ovn_metadata_agent[28608]: 2025-10-09 16:17:04.962 28613 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.12/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 09 16:17:04 compute-0 ovn_metadata_agent[28608]: 2025-10-09 16:17:04.964 139687 DEBUG oslo.privsep.daemon [-] privsep: reply[1fe372ba-b2a8-4a16-84da-e385fc8ab350]: (4, '\nglobal\n    log         /dev/log local0 debug\n    log-tag     haproxy-metadata-proxy-7ed4244a-b510-4df6-9ffd-2f86603932fc\n    user        root\n    group       root\n    maxconn     1024\n    pidfile     /var/lib/neutron/external/pids/7ed4244a-b510-4df6-9ffd-2f86603932fc.pid.haproxy\n    daemon\n\ndefaults\n    log global\n    mode http\n    option httplog\n    option dontlognull\n    option http-server-close\n    option forwardfor\n    retries                 3\n    timeout http-request    30s\n    timeout connect         30s\n    timeout client          32s\n    timeout server          32s\n    timeout http-keep-alive 30s\n\nlisten listener\n    bind 169.254.169.254:80\n    \n    server metadata /var/lib/neutron/metadata_proxy\n\n    http-request add-header X-OVN-Network-ID 7ed4244a-b510-4df6-9ffd-2f86603932fc\n') _call_back /usr/lib/python3.12/site-packages/oslo_privsep/daemon.py:515
Oct 09 16:17:04 compute-0 nova_compute[117331]: 2025-10-09 16:17:04.969 2 DEBUG nova.compute.manager [req-ee92ddf7-a993-4231-aac3-f9e76ca9b2e5 req-61ea3b18-9b63-48f5-8bbd-109d71fcdba0 ee15961ad2be4bada6c106ac4f69f422 c076c42271004e7aa86e84416e33f826 - - default default] [instance: 70a451f7-5bde-42f1-a448-d48b3b24d9d6] Received event network-vif-unplugged-a5c38ba6-3efb-4090-a42b-1dd250959041 external_instance_event /usr/lib/python3.12/site-packages/nova/compute/manager.py:11812
Oct 09 16:17:04 compute-0 nova_compute[117331]: 2025-10-09 16:17:04.970 2 DEBUG oslo_concurrency.lockutils [req-ee92ddf7-a993-4231-aac3-f9e76ca9b2e5 req-61ea3b18-9b63-48f5-8bbd-109d71fcdba0 ee15961ad2be4bada6c106ac4f69f422 c076c42271004e7aa86e84416e33f826 - - default default] Acquiring lock "70a451f7-5bde-42f1-a448-d48b3b24d9d6-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:405
Oct 09 16:17:04 compute-0 nova_compute[117331]: 2025-10-09 16:17:04.970 2 DEBUG oslo_concurrency.lockutils [req-ee92ddf7-a993-4231-aac3-f9e76ca9b2e5 req-61ea3b18-9b63-48f5-8bbd-109d71fcdba0 ee15961ad2be4bada6c106ac4f69f422 c076c42271004e7aa86e84416e33f826 - - default default] Lock "70a451f7-5bde-42f1-a448-d48b3b24d9d6-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:410
Oct 09 16:17:04 compute-0 nova_compute[117331]: 2025-10-09 16:17:04.970 2 DEBUG oslo_concurrency.lockutils [req-ee92ddf7-a993-4231-aac3-f9e76ca9b2e5 req-61ea3b18-9b63-48f5-8bbd-109d71fcdba0 ee15961ad2be4bada6c106ac4f69f422 c076c42271004e7aa86e84416e33f826 - - default default] Lock "70a451f7-5bde-42f1-a448-d48b3b24d9d6-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:424
Oct 09 16:17:04 compute-0 nova_compute[117331]: 2025-10-09 16:17:04.970 2 DEBUG nova.compute.manager [req-ee92ddf7-a993-4231-aac3-f9e76ca9b2e5 req-61ea3b18-9b63-48f5-8bbd-109d71fcdba0 ee15961ad2be4bada6c106ac4f69f422 c076c42271004e7aa86e84416e33f826 - - default default] [instance: 70a451f7-5bde-42f1-a448-d48b3b24d9d6] No event matching network-vif-unplugged-a5c38ba6-3efb-4090-a42b-1dd250959041 in dict_keys([('network-vif-plugged', 'a5c38ba6-3efb-4090-a42b-1dd250959041')]) pop_instance_event /usr/lib/python3.12/site-packages/nova/compute/manager.py:349
Oct 09 16:17:04 compute-0 nova_compute[117331]: 2025-10-09 16:17:04.970 2 DEBUG nova.compute.manager [req-ee92ddf7-a993-4231-aac3-f9e76ca9b2e5 req-61ea3b18-9b63-48f5-8bbd-109d71fcdba0 ee15961ad2be4bada6c106ac4f69f422 c076c42271004e7aa86e84416e33f826 - - default default] [instance: 70a451f7-5bde-42f1-a448-d48b3b24d9d6] Received event network-vif-unplugged-a5c38ba6-3efb-4090-a42b-1dd250959041 for instance with task_state migrating. _process_instance_event /usr/lib/python3.12/site-packages/nova/compute/manager.py:11590
Oct 09 16:17:05 compute-0 kernel: tap9ac52a72-3e: entered promiscuous mode
Oct 09 16:17:05 compute-0 NetworkManager[1028]: <info>  [1760026625.0406] manager: (tap9ac52a72-3e): new Tun device (/org/freedesktop/NetworkManager/Devices/41)
Oct 09 16:17:05 compute-0 systemd-udevd[143791]: Network interface NamePolicy= disabled on kernel command line.
Oct 09 16:17:05 compute-0 kernel: tap9ac52a72-3e (unregistering): left promiscuous mode
Oct 09 16:17:05 compute-0 ovn_metadata_agent[28608]: 2025-10-09 16:17:05.045 28613 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=9, options={'arp_ns_explicit_output': 'true', 'fdb_removal_limit': '0', 'ignore_lsp_down': 'false', 'mac_binding_removal_limit': '0', 'mac_prefix': '4a:65:5f', 'max_tunid': '16711680', 'northd_internal_version': '24.09.4-20.37.0-77.8', 'svc_monitor_mac': '32:24:1e:e3:5d:e8'}, ipsec=False) old=SB_Global(nb_cfg=8) matches /usr/lib/python3.12/site-packages/ovsdbapp/backend/ovs_idl/event.py:55
Oct 09 16:17:05 compute-0 ovn_metadata_agent[28608]: 2025-10-09 16:17:05.046 28613 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 2 seconds run /usr/lib/python3.12/site-packages/neutron/agent/ovn/metadata/agent.py:367
Oct 09 16:17:05 compute-0 nova_compute[117331]: 2025-10-09 16:17:05.046 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 09 16:17:05 compute-0 ovn_controller[19752]: 2025-10-09T16:17:05Z|00087|if_status|INFO|Not updating pb chassis for 9ac52a72-3ea9-4d92-8114-f77f5fe13293 now as sb is readonly
Oct 09 16:17:05 compute-0 ovn_controller[19752]: 2025-10-09T16:17:05Z|00088|binding|INFO|Claiming lport 9ac52a72-3ea9-4d92-8114-f77f5fe13293 for this chassis.
Oct 09 16:17:05 compute-0 ovn_controller[19752]: 2025-10-09T16:17:05Z|00089|binding|INFO|9ac52a72-3ea9-4d92-8114-f77f5fe13293: Claiming fa:16:3e:bf:94:75 10.100.0.4
Oct 09 16:17:05 compute-0 ovn_metadata_agent[28608]: 2025-10-09 16:17:05.055 28613 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:bf:94:75 10.100.0.4'], port_security=['fa:16:3e:bf:94:75 10.100.0.4'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.4/28', 'neutron:device_id': '12702257-b2eb-4842-bdf8-25e7a3b20038', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-7ed4244a-b510-4df6-9ffd-2f86603932fc', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'be4e2f9059cd48f5b44a612256e3fc7b', 'neutron:revision_number': '6', 'neutron:security_group_ids': '92d94554-4438-4b2d-9e48-eb178205b5a1', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=8754009b-0d8f-4d41-af4c-a82e7b57a4f6, chassis=[<ovs.db.idl.Row object at 0x7f37b2576840>], tunnel_key=7, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f37b2576840>], logical_port=9ac52a72-3ea9-4d92-8114-f77f5fe13293) old=Port_Binding(chassis=[]) matches /usr/lib/python3.12/site-packages/ovsdbapp/backend/ovs_idl/event.py:55
Oct 09 16:17:05 compute-0 ovn_metadata_agent[28608]: 2025-10-09 16:17:05.056 28613 INFO neutron.agent.ovn.metadata.agent [-] Port 9ac52a72-3ea9-4d92-8114-f77f5fe13293 in datapath 7ed4244a-b510-4df6-9ffd-2f86603932fc bound to our chassis
Oct 09 16:17:05 compute-0 ovn_metadata_agent[28608]: 2025-10-09 16:17:05.057 28613 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 7ed4244a-b510-4df6-9ffd-2f86603932fc
Oct 09 16:17:05 compute-0 ovn_controller[19752]: 2025-10-09T16:17:05Z|00090|binding|INFO|Setting lport 9ac52a72-3ea9-4d92-8114-f77f5fe13293 ovn-installed in OVS
Oct 09 16:17:05 compute-0 ovn_controller[19752]: 2025-10-09T16:17:05Z|00091|binding|INFO|Setting lport 9ac52a72-3ea9-4d92-8114-f77f5fe13293 up in Southbound
Oct 09 16:17:05 compute-0 ovn_metadata_agent[28608]: 2025-10-09 16:17:05.080 139687 DEBUG oslo.privsep.daemon [-] privsep: reply[e8aac48d-6740-41bb-8383-d1904ff611cf]: (4, True) _call_back /usr/lib/python3.12/site-packages/oslo_privsep/daemon.py:515
Oct 09 16:17:05 compute-0 nova_compute[117331]: 2025-10-09 16:17:05.080 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 09 16:17:05 compute-0 ovn_controller[19752]: 2025-10-09T16:17:05Z|00092|binding|INFO|Releasing lport 9ac52a72-3ea9-4d92-8114-f77f5fe13293 from this chassis (sb_readonly=1)
Oct 09 16:17:05 compute-0 ovn_controller[19752]: 2025-10-09T16:17:05Z|00093|binding|INFO|Removing iface tap9ac52a72-3e ovn-installed in OVS
Oct 09 16:17:05 compute-0 ovn_controller[19752]: 2025-10-09T16:17:05Z|00094|if_status|INFO|Dropped 2 log messages in last 252 seconds (most recently, 252 seconds ago) due to excessive rate
Oct 09 16:17:05 compute-0 ovn_controller[19752]: 2025-10-09T16:17:05Z|00095|if_status|INFO|Not setting lport 9ac52a72-3ea9-4d92-8114-f77f5fe13293 down as sb is readonly
Oct 09 16:17:05 compute-0 ovn_controller[19752]: 2025-10-09T16:17:05Z|00096|binding|INFO|Releasing lport 9ac52a72-3ea9-4d92-8114-f77f5fe13293 from this chassis (sb_readonly=0)
Oct 09 16:17:05 compute-0 nova_compute[117331]: 2025-10-09 16:17:05.083 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 09 16:17:05 compute-0 ovn_controller[19752]: 2025-10-09T16:17:05Z|00097|binding|INFO|Setting lport 9ac52a72-3ea9-4d92-8114-f77f5fe13293 down in Southbound
Oct 09 16:17:05 compute-0 ovn_metadata_agent[28608]: 2025-10-09 16:17:05.089 28613 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:bf:94:75 10.100.0.4'], port_security=['fa:16:3e:bf:94:75 10.100.0.4'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.4/28', 'neutron:device_id': '12702257-b2eb-4842-bdf8-25e7a3b20038', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-7ed4244a-b510-4df6-9ffd-2f86603932fc', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'be4e2f9059cd48f5b44a612256e3fc7b', 'neutron:revision_number': '6', 'neutron:security_group_ids': '92d94554-4438-4b2d-9e48-eb178205b5a1', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=8754009b-0d8f-4d41-af4c-a82e7b57a4f6, chassis=[], tunnel_key=7, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f37b2576840>], logical_port=9ac52a72-3ea9-4d92-8114-f77f5fe13293) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f37b2576840>]) matches /usr/lib/python3.12/site-packages/ovsdbapp/backend/ovs_idl/event.py:55
Oct 09 16:17:05 compute-0 nova_compute[117331]: 2025-10-09 16:17:05.095 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 09 16:17:05 compute-0 ovn_metadata_agent[28608]: 2025-10-09 16:17:05.116 141048 DEBUG oslo.privsep.daemon [-] privsep: reply[650a04b6-ed7a-43d8-a2af-1e4551d06510]: (4, None) _call_back /usr/lib/python3.12/site-packages/oslo_privsep/daemon.py:515
Oct 09 16:17:05 compute-0 ovn_metadata_agent[28608]: 2025-10-09 16:17:05.119 141048 DEBUG oslo.privsep.daemon [-] privsep: reply[6e515596-6522-4c7e-9b69-4b77b8cf2246]: (4, None) _call_back /usr/lib/python3.12/site-packages/oslo_privsep/daemon.py:515
Oct 09 16:17:05 compute-0 ovn_metadata_agent[28608]: 2025-10-09 16:17:05.152 141048 DEBUG oslo.privsep.daemon [-] privsep: reply[8b4c6816-b5e7-410a-abe7-3832d74c5227]: (4, None) _call_back /usr/lib/python3.12/site-packages/oslo_privsep/daemon.py:515
Oct 09 16:17:05 compute-0 ovn_metadata_agent[28608]: 2025-10-09 16:17:05.173 139687 DEBUG oslo.privsep.daemon [-] privsep: reply[88dc9c37-1903-487a-b172-73284b7d2d8c]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap7ed4244a-b1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:11:ef:1e'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 12, 'tx_packets': 9, 'rx_bytes': 1000, 'tx_bytes': 522, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 12, 'tx_packets': 9, 'rx_bytes': 1000, 'tx_bytes': 522, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 27], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 148278, 'reachable_time': 15142, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 8, 'inoctets': 720, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 8, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 720, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 8, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 143819, 'error': None, 'target': 'ovnmeta-7ed4244a-b510-4df6-9ffd-2f86603932fc', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.12/site-packages/oslo_privsep/daemon.py:515
Oct 09 16:17:05 compute-0 ovn_metadata_agent[28608]: 2025-10-09 16:17:05.192 139687 DEBUG oslo.privsep.daemon [-] privsep: reply[0688cc24-7892-4019-807a-f77de7b65953]: (4, ({'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tap7ed4244a-b1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 148290, 'tstamp': 148290}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 143820, 'error': None, 'target': 'ovnmeta-7ed4244a-b510-4df6-9ffd-2f86603932fc', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tap7ed4244a-b1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 148293, 'tstamp': 148293}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 143820, 'error': None, 'target': 'ovnmeta-7ed4244a-b510-4df6-9ffd-2f86603932fc', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.12/site-packages/oslo_privsep/daemon.py:515
Oct 09 16:17:05 compute-0 ovn_metadata_agent[28608]: 2025-10-09 16:17:05.194 28613 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap7ed4244a-b0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.12/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 09 16:17:05 compute-0 nova_compute[117331]: 2025-10-09 16:17:05.196 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 09 16:17:05 compute-0 nova_compute[117331]: 2025-10-09 16:17:05.200 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 09 16:17:05 compute-0 ovn_metadata_agent[28608]: 2025-10-09 16:17:05.201 28613 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap7ed4244a-b0, may_exist=True, interface_attrs={}) do_commit /usr/lib/python3.12/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 09 16:17:05 compute-0 ovn_metadata_agent[28608]: 2025-10-09 16:17:05.201 28613 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.12/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 09 16:17:05 compute-0 ovn_metadata_agent[28608]: 2025-10-09 16:17:05.201 28613 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap7ed4244a-b0, col_values=(('external_ids', {'iface-id': 'fa7fd37b-f1db-4203-966c-06eaa6fa3892'}),), if_exists=True) do_commit /usr/lib/python3.12/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 09 16:17:05 compute-0 ovn_metadata_agent[28608]: 2025-10-09 16:17:05.201 28613 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.12/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 09 16:17:05 compute-0 ovn_metadata_agent[28608]: 2025-10-09 16:17:05.203 139687 DEBUG oslo.privsep.daemon [-] privsep: reply[7d35039e-33a4-47d2-a768-004a371112a3]: (4, '\nglobal\n    log         /dev/log local0 debug\n    log-tag     haproxy-metadata-proxy-7ed4244a-b510-4df6-9ffd-2f86603932fc\n    user        root\n    group       root\n    maxconn     1024\n    pidfile     /var/lib/neutron/external/pids/7ed4244a-b510-4df6-9ffd-2f86603932fc.pid.haproxy\n    daemon\n\ndefaults\n    log global\n    mode http\n    option httplog\n    option dontlognull\n    option http-server-close\n    option forwardfor\n    retries                 3\n    timeout http-request    30s\n    timeout connect         30s\n    timeout client          32s\n    timeout server          32s\n    timeout http-keep-alive 30s\n\nlisten listener\n    bind 169.254.169.254:80\n    \n    server metadata /var/lib/neutron/metadata_proxy\n\n    http-request add-header X-OVN-Network-ID 7ed4244a-b510-4df6-9ffd-2f86603932fc\n') _call_back /usr/lib/python3.12/site-packages/oslo_privsep/daemon.py:515
Oct 09 16:17:05 compute-0 ovn_metadata_agent[28608]: 2025-10-09 16:17:05.204 28613 INFO neutron.agent.ovn.metadata.agent [-] Port 9ac52a72-3ea9-4d92-8114-f77f5fe13293 in datapath 7ed4244a-b510-4df6-9ffd-2f86603932fc unbound from our chassis
Oct 09 16:17:05 compute-0 ovn_metadata_agent[28608]: 2025-10-09 16:17:05.206 28613 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 7ed4244a-b510-4df6-9ffd-2f86603932fc
Oct 09 16:17:05 compute-0 ovn_metadata_agent[28608]: 2025-10-09 16:17:05.223 139687 DEBUG oslo.privsep.daemon [-] privsep: reply[cecade33-3097-4e7d-889e-f3490a0c92c3]: (4, True) _call_back /usr/lib/python3.12/site-packages/oslo_privsep/daemon.py:515
Oct 09 16:17:05 compute-0 ovn_metadata_agent[28608]: 2025-10-09 16:17:05.252 141048 DEBUG oslo.privsep.daemon [-] privsep: reply[e08043de-372f-4bf7-b976-a298b3541f36]: (4, None) _call_back /usr/lib/python3.12/site-packages/oslo_privsep/daemon.py:515
Oct 09 16:17:05 compute-0 ovn_metadata_agent[28608]: 2025-10-09 16:17:05.257 141048 DEBUG oslo.privsep.daemon [-] privsep: reply[326686db-5c94-4d7a-b651-39f0498a099d]: (4, None) _call_back /usr/lib/python3.12/site-packages/oslo_privsep/daemon.py:515
Oct 09 16:17:05 compute-0 ovn_metadata_agent[28608]: 2025-10-09 16:17:05.290 141048 DEBUG oslo.privsep.daemon [-] privsep: reply[e660747b-e257-4866-a5c2-2eafc1a08530]: (4, None) _call_back /usr/lib/python3.12/site-packages/oslo_privsep/daemon.py:515
Oct 09 16:17:05 compute-0 ovn_metadata_agent[28608]: 2025-10-09 16:17:05.308 139687 DEBUG oslo.privsep.daemon [-] privsep: reply[0971ca09-7ca4-495b-a41f-247ac366b220]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap7ed4244a-b1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:11:ef:1e'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 12, 'tx_packets': 11, 'rx_bytes': 1000, 'tx_bytes': 606, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 12, 'tx_packets': 11, 'rx_bytes': 1000, 'tx_bytes': 606, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 27], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 148278, 'reachable_time': 15142, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 8, 'inoctets': 720, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 8, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 720, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 8, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 143829, 'error': None, 'target': 'ovnmeta-7ed4244a-b510-4df6-9ffd-2f86603932fc', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.12/site-packages/oslo_privsep/daemon.py:515
Oct 09 16:17:05 compute-0 ovn_metadata_agent[28608]: 2025-10-09 16:17:05.330 139687 DEBUG oslo.privsep.daemon [-] privsep: reply[3c853c14-6621-472d-bae2-a5bb435a82b7]: (4, ({'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tap7ed4244a-b1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 148290, 'tstamp': 148290}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 143830, 'error': None, 'target': 'ovnmeta-7ed4244a-b510-4df6-9ffd-2f86603932fc', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tap7ed4244a-b1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 148293, 'tstamp': 148293}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 143830, 'error': None, 'target': 'ovnmeta-7ed4244a-b510-4df6-9ffd-2f86603932fc', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.12/site-packages/oslo_privsep/daemon.py:515
Oct 09 16:17:05 compute-0 ovn_metadata_agent[28608]: 2025-10-09 16:17:05.332 28613 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap7ed4244a-b0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.12/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 09 16:17:05 compute-0 nova_compute[117331]: 2025-10-09 16:17:05.334 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 09 16:17:05 compute-0 nova_compute[117331]: 2025-10-09 16:17:05.338 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 09 16:17:05 compute-0 ovn_metadata_agent[28608]: 2025-10-09 16:17:05.339 28613 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap7ed4244a-b0, may_exist=True, interface_attrs={}) do_commit /usr/lib/python3.12/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 09 16:17:05 compute-0 ovn_metadata_agent[28608]: 2025-10-09 16:17:05.339 28613 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.12/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 09 16:17:05 compute-0 ovn_metadata_agent[28608]: 2025-10-09 16:17:05.340 28613 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap7ed4244a-b0, col_values=(('external_ids', {'iface-id': 'fa7fd37b-f1db-4203-966c-06eaa6fa3892'}),), if_exists=True) do_commit /usr/lib/python3.12/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 09 16:17:05 compute-0 ovn_metadata_agent[28608]: 2025-10-09 16:17:05.340 28613 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.12/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 09 16:17:05 compute-0 ovn_metadata_agent[28608]: 2025-10-09 16:17:05.342 139687 DEBUG oslo.privsep.daemon [-] privsep: reply[c05dddb7-2192-4628-9d51-90ae5261aa01]: (4, '\nglobal\n    log         /dev/log local0 debug\n    log-tag     haproxy-metadata-proxy-7ed4244a-b510-4df6-9ffd-2f86603932fc\n    user        root\n    group       root\n    maxconn     1024\n    pidfile     /var/lib/neutron/external/pids/7ed4244a-b510-4df6-9ffd-2f86603932fc.pid.haproxy\n    daemon\n\ndefaults\n    log global\n    mode http\n    option httplog\n    option dontlognull\n    option http-server-close\n    option forwardfor\n    retries                 3\n    timeout http-request    30s\n    timeout connect         30s\n    timeout client          32s\n    timeout server          32s\n    timeout http-keep-alive 30s\n\nlisten listener\n    bind 169.254.169.254:80\n    \n    server metadata /var/lib/neutron/metadata_proxy\n\n    http-request add-header X-OVN-Network-ID 7ed4244a-b510-4df6-9ffd-2f86603932fc\n') _call_back /usr/lib/python3.12/site-packages/oslo_privsep/daemon.py:515
Oct 09 16:17:05 compute-0 sshd-session[143825]: Invalid user zabbix from 134.199.199.215 port 41492
Oct 09 16:17:05 compute-0 sshd-session[143825]: pam_unix(sshd:auth): check pass; user unknown
Oct 09 16:17:05 compute-0 sshd-session[143825]: pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=134.199.199.215
Oct 09 16:17:05 compute-0 nova_compute[117331]: 2025-10-09 16:17:05.619 2 INFO nova.virt.libvirt.driver [None req-54e55369-5112-43e0-ada4-044cf39dde64 005258eefa5345409efe13f6a1de22ab c076c42271004e7aa86e84416e33f826 - - default default] [instance: 12702257-b2eb-4842-bdf8-25e7a3b20038] Instance shutdown successfully after 3 seconds.
Oct 09 16:17:05 compute-0 nova_compute[117331]: 2025-10-09 16:17:05.626 2 INFO nova.virt.libvirt.driver [-] [instance: 12702257-b2eb-4842-bdf8-25e7a3b20038] Instance destroyed successfully.
Oct 09 16:17:05 compute-0 nova_compute[117331]: 2025-10-09 16:17:05.629 2 DEBUG nova.virt.libvirt.vif [None req-54e55369-5112-43e0-ada4-044cf39dde64 005258eefa5345409efe13f6a1de22ab c076c42271004e7aa86e84416e33f826 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,compute_id=1,config_drive='True',created_at=2025-10-09T16:16:08Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description=None,display_name='tempest-TestExecuteActionsViaActuator-server-925081779',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testexecuteactionsviaactuator-server-925081779',id=8,image_ref='b7d6e0af-25e4-4227-9dc6-43143898ceee',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-10-09T16:16:23Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=MigrationContext,new_flavor=Flavor(1),node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=1,progress=0,project_id='be4e2f9059cd48f5b44a612256e3fc7b',ramdisk_id='',reservation_id='r-zkb46622',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=ServiceList,shutdown_terminate=False,system_metadata={boot_roles='reader,member,manager,admin',image_base_image_ref='b7d6e0af-25e4-4227-9dc6-43143898ceee',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',old_vm_state='active',owner_project_name='tempest-TestExecuteActionsViaActuator-1347788182',owner_user_name='tempest-TestExecuteActionsViaActuator-1347788182-project-admin'},tags=<?>,task_state='resize_migrating',terminated_at=None,trusted_certs=<?>,updated_at=2025-10-09T16:16:52Z,user_data=None,user_id='e5044998ddc3419bb14cc08417add581',uuid=12702257-b2eb-4842-bdf8-25e7a3b20038,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "9ac52a72-3ea9-4d92-8114-f77f5fe13293", "address": "fa:16:3e:bf:94:75", "network": {"id": "7ed4244a-b510-4df6-9ffd-2f86603932fc", "bridge": "br-int", "label": "tempest-TestExecuteActionsViaActuator-1219246163-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [], "label": "tempest-TestExecuteActionsViaActuator-1219246163-network", "vif_mac": "fa:16:3e:bf:94:75"}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "153d68ab33e64b958060574bb1741725", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap9ac52a72-3e", "ovs_interfaceid": "9ac52a72-3ea9-4d92-8114-f77f5fe13293", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.12/site-packages/nova/virt/libvirt/vif.py:839
Oct 09 16:17:05 compute-0 nova_compute[117331]: 2025-10-09 16:17:05.629 2 DEBUG nova.network.os_vif_util [None req-54e55369-5112-43e0-ada4-044cf39dde64 005258eefa5345409efe13f6a1de22ab c076c42271004e7aa86e84416e33f826 - - default default] Converting VIF {"id": "9ac52a72-3ea9-4d92-8114-f77f5fe13293", "address": "fa:16:3e:bf:94:75", "network": {"id": "7ed4244a-b510-4df6-9ffd-2f86603932fc", "bridge": "br-int", "label": "tempest-TestExecuteActionsViaActuator-1219246163-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [], "label": "tempest-TestExecuteActionsViaActuator-1219246163-network", "vif_mac": "fa:16:3e:bf:94:75"}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "153d68ab33e64b958060574bb1741725", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap9ac52a72-3e", "ovs_interfaceid": "9ac52a72-3ea9-4d92-8114-f77f5fe13293", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.12/site-packages/nova/network/os_vif_util.py:511
Oct 09 16:17:05 compute-0 nova_compute[117331]: 2025-10-09 16:17:05.631 2 DEBUG nova.network.os_vif_util [None req-54e55369-5112-43e0-ada4-044cf39dde64 005258eefa5345409efe13f6a1de22ab c076c42271004e7aa86e84416e33f826 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:bf:94:75,bridge_name='br-int',has_traffic_filtering=True,id=9ac52a72-3ea9-4d92-8114-f77f5fe13293,network=Network(7ed4244a-b510-4df6-9ffd-2f86603932fc),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap9ac52a72-3e') nova_to_osvif_vif /usr/lib/python3.12/site-packages/nova/network/os_vif_util.py:548
Oct 09 16:17:05 compute-0 nova_compute[117331]: 2025-10-09 16:17:05.632 2 DEBUG os_vif [None req-54e55369-5112-43e0-ada4-044cf39dde64 005258eefa5345409efe13f6a1de22ab c076c42271004e7aa86e84416e33f826 - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:bf:94:75,bridge_name='br-int',has_traffic_filtering=True,id=9ac52a72-3ea9-4d92-8114-f77f5fe13293,network=Network(7ed4244a-b510-4df6-9ffd-2f86603932fc),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap9ac52a72-3e') unplug /usr/lib/python3.12/site-packages/os_vif/__init__.py:109
Oct 09 16:17:05 compute-0 nova_compute[117331]: 2025-10-09 16:17:05.634 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 09 16:17:05 compute-0 nova_compute[117331]: 2025-10-09 16:17:05.637 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap9ac52a72-3e, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.12/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 09 16:17:05 compute-0 nova_compute[117331]: 2025-10-09 16:17:05.639 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 09 16:17:05 compute-0 nova_compute[117331]: 2025-10-09 16:17:05.641 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:248
Oct 09 16:17:05 compute-0 nova_compute[117331]: 2025-10-09 16:17:05.641 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 09 16:17:05 compute-0 nova_compute[117331]: 2025-10-09 16:17:05.642 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 22 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 09 16:17:05 compute-0 nova_compute[117331]: 2025-10-09 16:17:05.643 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbDestroyCommand(_result=None, table=QoS, record=a8377760-b1e8-4764-be80-edf13f48a379) do_commit /usr/lib/python3.12/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 09 16:17:05 compute-0 nova_compute[117331]: 2025-10-09 16:17:05.644 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 09 16:17:05 compute-0 nova_compute[117331]: 2025-10-09 16:17:05.645 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 09 16:17:05 compute-0 nova_compute[117331]: 2025-10-09 16:17:05.648 2 INFO os_vif [None req-54e55369-5112-43e0-ada4-044cf39dde64 005258eefa5345409efe13f6a1de22ab c076c42271004e7aa86e84416e33f826 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:bf:94:75,bridge_name='br-int',has_traffic_filtering=True,id=9ac52a72-3ea9-4d92-8114-f77f5fe13293,network=Network(7ed4244a-b510-4df6-9ffd-2f86603932fc),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap9ac52a72-3e')
Oct 09 16:17:05 compute-0 nova_compute[117331]: 2025-10-09 16:17:05.652 2 DEBUG oslo_concurrency.processutils [None req-54e55369-5112-43e0-ada4-044cf39dde64 005258eefa5345409efe13f6a1de22ab c076c42271004e7aa86e84416e33f826 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/12702257-b2eb-4842-bdf8-25e7a3b20038/disk --force-share --output=json execute /usr/lib/python3.12/site-packages/oslo_concurrency/processutils.py:349
Oct 09 16:17:05 compute-0 nova_compute[117331]: 2025-10-09 16:17:05.718 2 DEBUG oslo_concurrency.processutils [None req-54e55369-5112-43e0-ada4-044cf39dde64 005258eefa5345409efe13f6a1de22ab c076c42271004e7aa86e84416e33f826 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/12702257-b2eb-4842-bdf8-25e7a3b20038/disk --force-share --output=json" returned: 0 in 0.066s execute /usr/lib/python3.12/site-packages/oslo_concurrency/processutils.py:372
Oct 09 16:17:05 compute-0 nova_compute[117331]: 2025-10-09 16:17:05.719 2 DEBUG oslo_concurrency.processutils [None req-54e55369-5112-43e0-ada4-044cf39dde64 005258eefa5345409efe13f6a1de22ab c076c42271004e7aa86e84416e33f826 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/12702257-b2eb-4842-bdf8-25e7a3b20038/disk --force-share --output=json execute /usr/lib/python3.12/site-packages/oslo_concurrency/processutils.py:349
Oct 09 16:17:05 compute-0 nova_compute[117331]: 2025-10-09 16:17:05.778 2 DEBUG oslo_concurrency.processutils [None req-54e55369-5112-43e0-ada4-044cf39dde64 005258eefa5345409efe13f6a1de22ab c076c42271004e7aa86e84416e33f826 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/12702257-b2eb-4842-bdf8-25e7a3b20038/disk --force-share --output=json" returned: 0 in 0.059s execute /usr/lib/python3.12/site-packages/oslo_concurrency/processutils.py:372
Oct 09 16:17:05 compute-0 nova_compute[117331]: 2025-10-09 16:17:05.780 2 DEBUG nova.virt.libvirt.volume.remotefs [None req-54e55369-5112-43e0-ada4-044cf39dde64 005258eefa5345409efe13f6a1de22ab c076c42271004e7aa86e84416e33f826 - - default default] Copying file /var/lib/nova/instances/12702257-b2eb-4842-bdf8-25e7a3b20038_resize/disk to 192.168.122.101:/var/lib/nova/instances/12702257-b2eb-4842-bdf8-25e7a3b20038/disk copy_file /usr/lib/python3.12/site-packages/nova/virt/libvirt/volume/remotefs.py:103
Oct 09 16:17:05 compute-0 nova_compute[117331]: 2025-10-09 16:17:05.780 2 DEBUG oslo_concurrency.processutils [None req-54e55369-5112-43e0-ada4-044cf39dde64 005258eefa5345409efe13f6a1de22ab c076c42271004e7aa86e84416e33f826 - - default default] Running cmd (subprocess): scp -r /var/lib/nova/instances/12702257-b2eb-4842-bdf8-25e7a3b20038_resize/disk 192.168.122.101:/var/lib/nova/instances/12702257-b2eb-4842-bdf8-25e7a3b20038/disk execute /usr/lib/python3.12/site-packages/oslo_concurrency/processutils.py:349
Oct 09 16:17:06 compute-0 nova_compute[117331]: 2025-10-09 16:17:06.060 2 INFO nova.compute.manager [None req-f2c8be06-10a6-4d4e-83ce-15f398bd0cd3 005258eefa5345409efe13f6a1de22ab c076c42271004e7aa86e84416e33f826 - - default default] [instance: 70a451f7-5bde-42f1-a448-d48b3b24d9d6] Took 6.02 seconds for pre_live_migration on destination host compute-1.ctlplane.example.com.
Oct 09 16:17:06 compute-0 nova_compute[117331]: 2025-10-09 16:17:06.377 2 DEBUG oslo_concurrency.processutils [None req-54e55369-5112-43e0-ada4-044cf39dde64 005258eefa5345409efe13f6a1de22ab c076c42271004e7aa86e84416e33f826 - - default default] CMD "scp -r /var/lib/nova/instances/12702257-b2eb-4842-bdf8-25e7a3b20038_resize/disk 192.168.122.101:/var/lib/nova/instances/12702257-b2eb-4842-bdf8-25e7a3b20038/disk" returned: 0 in 0.596s execute /usr/lib/python3.12/site-packages/oslo_concurrency/processutils.py:372
Oct 09 16:17:06 compute-0 nova_compute[117331]: 2025-10-09 16:17:06.378 2 DEBUG nova.virt.libvirt.volume.remotefs [None req-54e55369-5112-43e0-ada4-044cf39dde64 005258eefa5345409efe13f6a1de22ab c076c42271004e7aa86e84416e33f826 - - default default] Copying file /var/lib/nova/instances/12702257-b2eb-4842-bdf8-25e7a3b20038_resize/disk.config to 192.168.122.101:/var/lib/nova/instances/12702257-b2eb-4842-bdf8-25e7a3b20038/disk.config copy_file /usr/lib/python3.12/site-packages/nova/virt/libvirt/volume/remotefs.py:103
Oct 09 16:17:06 compute-0 nova_compute[117331]: 2025-10-09 16:17:06.378 2 DEBUG oslo_concurrency.processutils [None req-54e55369-5112-43e0-ada4-044cf39dde64 005258eefa5345409efe13f6a1de22ab c076c42271004e7aa86e84416e33f826 - - default default] Running cmd (subprocess): scp -C -r /var/lib/nova/instances/12702257-b2eb-4842-bdf8-25e7a3b20038_resize/disk.config 192.168.122.101:/var/lib/nova/instances/12702257-b2eb-4842-bdf8-25e7a3b20038/disk.config execute /usr/lib/python3.12/site-packages/oslo_concurrency/processutils.py:349
Oct 09 16:17:06 compute-0 nova_compute[117331]: 2025-10-09 16:17:06.615 2 DEBUG oslo_concurrency.processutils [None req-54e55369-5112-43e0-ada4-044cf39dde64 005258eefa5345409efe13f6a1de22ab c076c42271004e7aa86e84416e33f826 - - default default] CMD "scp -C -r /var/lib/nova/instances/12702257-b2eb-4842-bdf8-25e7a3b20038_resize/disk.config 192.168.122.101:/var/lib/nova/instances/12702257-b2eb-4842-bdf8-25e7a3b20038/disk.config" returned: 0 in 0.237s execute /usr/lib/python3.12/site-packages/oslo_concurrency/processutils.py:372
Oct 09 16:17:06 compute-0 nova_compute[117331]: 2025-10-09 16:17:06.616 2 DEBUG nova.virt.libvirt.volume.remotefs [None req-54e55369-5112-43e0-ada4-044cf39dde64 005258eefa5345409efe13f6a1de22ab c076c42271004e7aa86e84416e33f826 - - default default] Copying file /var/lib/nova/instances/12702257-b2eb-4842-bdf8-25e7a3b20038_resize/disk.info to 192.168.122.101:/var/lib/nova/instances/12702257-b2eb-4842-bdf8-25e7a3b20038/disk.info copy_file /usr/lib/python3.12/site-packages/nova/virt/libvirt/volume/remotefs.py:103
Oct 09 16:17:06 compute-0 nova_compute[117331]: 2025-10-09 16:17:06.616 2 DEBUG oslo_concurrency.processutils [None req-54e55369-5112-43e0-ada4-044cf39dde64 005258eefa5345409efe13f6a1de22ab c076c42271004e7aa86e84416e33f826 - - default default] Running cmd (subprocess): scp -C -r /var/lib/nova/instances/12702257-b2eb-4842-bdf8-25e7a3b20038_resize/disk.info 192.168.122.101:/var/lib/nova/instances/12702257-b2eb-4842-bdf8-25e7a3b20038/disk.info execute /usr/lib/python3.12/site-packages/oslo_concurrency/processutils.py:349
Oct 09 16:17:06 compute-0 nova_compute[117331]: 2025-10-09 16:17:06.827 2 DEBUG oslo_concurrency.processutils [None req-54e55369-5112-43e0-ada4-044cf39dde64 005258eefa5345409efe13f6a1de22ab c076c42271004e7aa86e84416e33f826 - - default default] CMD "scp -C -r /var/lib/nova/instances/12702257-b2eb-4842-bdf8-25e7a3b20038_resize/disk.info 192.168.122.101:/var/lib/nova/instances/12702257-b2eb-4842-bdf8-25e7a3b20038/disk.info" returned: 0 in 0.211s execute /usr/lib/python3.12/site-packages/oslo_concurrency/processutils.py:372
Oct 09 16:17:06 compute-0 nova_compute[117331]: 2025-10-09 16:17:06.829 2 WARNING neutronclient.v2_0.client [None req-54e55369-5112-43e0-ada4-044cf39dde64 005258eefa5345409efe13f6a1de22ab c076c42271004e7aa86e84416e33f826 - - default default] The python binding code in neutronclient is deprecated in favor of OpenstackSDK, please use that as this will be removed in a future release.
Oct 09 16:17:06 compute-0 nova_compute[117331]: 2025-10-09 16:17:06.829 2 WARNING neutronclient.v2_0.client [None req-54e55369-5112-43e0-ada4-044cf39dde64 005258eefa5345409efe13f6a1de22ab c076c42271004e7aa86e84416e33f826 - - default default] The python binding code in neutronclient is deprecated in favor of OpenstackSDK, please use that as this will be removed in a future release.
Oct 09 16:17:06 compute-0 nova_compute[117331]: 2025-10-09 16:17:06.931 2 DEBUG neutronclient.v2_0.client [None req-54e55369-5112-43e0-ada4-044cf39dde64 005258eefa5345409efe13f6a1de22ab c076c42271004e7aa86e84416e33f826 - - default default] Error message: {"NeutronError": {"type": "PortBindingNotFound", "message": "Binding for port 9ac52a72-3ea9-4d92-8114-f77f5fe13293 for host compute-1.ctlplane.example.com could not be found.", "detail": ""}} _handle_fault_response /usr/lib/python3.12/site-packages/neutronclient/v2_0/client.py:265
Oct 09 16:17:07 compute-0 nova_compute[117331]: 2025-10-09 16:17:07.015 2 DEBUG nova.compute.manager [req-39375e07-65e3-47ca-80d3-a0602184f61a req-b76b4023-133b-415f-b576-ca9f8b50b265 ee15961ad2be4bada6c106ac4f69f422 c076c42271004e7aa86e84416e33f826 - - default default] [instance: 12702257-b2eb-4842-bdf8-25e7a3b20038] Received event network-vif-unplugged-9ac52a72-3ea9-4d92-8114-f77f5fe13293 external_instance_event /usr/lib/python3.12/site-packages/nova/compute/manager.py:11812
Oct 09 16:17:07 compute-0 nova_compute[117331]: 2025-10-09 16:17:07.015 2 DEBUG oslo_concurrency.lockutils [req-39375e07-65e3-47ca-80d3-a0602184f61a req-b76b4023-133b-415f-b576-ca9f8b50b265 ee15961ad2be4bada6c106ac4f69f422 c076c42271004e7aa86e84416e33f826 - - default default] Acquiring lock "12702257-b2eb-4842-bdf8-25e7a3b20038-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:405
Oct 09 16:17:07 compute-0 nova_compute[117331]: 2025-10-09 16:17:07.015 2 DEBUG oslo_concurrency.lockutils [req-39375e07-65e3-47ca-80d3-a0602184f61a req-b76b4023-133b-415f-b576-ca9f8b50b265 ee15961ad2be4bada6c106ac4f69f422 c076c42271004e7aa86e84416e33f826 - - default default] Lock "12702257-b2eb-4842-bdf8-25e7a3b20038-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:410
Oct 09 16:17:07 compute-0 nova_compute[117331]: 2025-10-09 16:17:07.016 2 DEBUG oslo_concurrency.lockutils [req-39375e07-65e3-47ca-80d3-a0602184f61a req-b76b4023-133b-415f-b576-ca9f8b50b265 ee15961ad2be4bada6c106ac4f69f422 c076c42271004e7aa86e84416e33f826 - - default default] Lock "12702257-b2eb-4842-bdf8-25e7a3b20038-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:424
Oct 09 16:17:07 compute-0 nova_compute[117331]: 2025-10-09 16:17:07.016 2 DEBUG nova.compute.manager [req-39375e07-65e3-47ca-80d3-a0602184f61a req-b76b4023-133b-415f-b576-ca9f8b50b265 ee15961ad2be4bada6c106ac4f69f422 c076c42271004e7aa86e84416e33f826 - - default default] [instance: 12702257-b2eb-4842-bdf8-25e7a3b20038] No waiting events found dispatching network-vif-unplugged-9ac52a72-3ea9-4d92-8114-f77f5fe13293 pop_instance_event /usr/lib/python3.12/site-packages/nova/compute/manager.py:344
Oct 09 16:17:07 compute-0 nova_compute[117331]: 2025-10-09 16:17:07.016 2 WARNING nova.compute.manager [req-39375e07-65e3-47ca-80d3-a0602184f61a req-b76b4023-133b-415f-b576-ca9f8b50b265 ee15961ad2be4bada6c106ac4f69f422 c076c42271004e7aa86e84416e33f826 - - default default] [instance: 12702257-b2eb-4842-bdf8-25e7a3b20038] Received unexpected event network-vif-unplugged-9ac52a72-3ea9-4d92-8114-f77f5fe13293 for instance with vm_state active and task_state resize_migrating.
Oct 09 16:17:07 compute-0 nova_compute[117331]: 2025-10-09 16:17:07.016 2 DEBUG nova.compute.manager [req-39375e07-65e3-47ca-80d3-a0602184f61a req-b76b4023-133b-415f-b576-ca9f8b50b265 ee15961ad2be4bada6c106ac4f69f422 c076c42271004e7aa86e84416e33f826 - - default default] [instance: 12702257-b2eb-4842-bdf8-25e7a3b20038] Received event network-vif-plugged-9ac52a72-3ea9-4d92-8114-f77f5fe13293 external_instance_event /usr/lib/python3.12/site-packages/nova/compute/manager.py:11812
Oct 09 16:17:07 compute-0 nova_compute[117331]: 2025-10-09 16:17:07.016 2 DEBUG oslo_concurrency.lockutils [req-39375e07-65e3-47ca-80d3-a0602184f61a req-b76b4023-133b-415f-b576-ca9f8b50b265 ee15961ad2be4bada6c106ac4f69f422 c076c42271004e7aa86e84416e33f826 - - default default] Acquiring lock "12702257-b2eb-4842-bdf8-25e7a3b20038-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:405
Oct 09 16:17:07 compute-0 nova_compute[117331]: 2025-10-09 16:17:07.016 2 DEBUG oslo_concurrency.lockutils [req-39375e07-65e3-47ca-80d3-a0602184f61a req-b76b4023-133b-415f-b576-ca9f8b50b265 ee15961ad2be4bada6c106ac4f69f422 c076c42271004e7aa86e84416e33f826 - - default default] Lock "12702257-b2eb-4842-bdf8-25e7a3b20038-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:410
Oct 09 16:17:07 compute-0 nova_compute[117331]: 2025-10-09 16:17:07.017 2 DEBUG oslo_concurrency.lockutils [req-39375e07-65e3-47ca-80d3-a0602184f61a req-b76b4023-133b-415f-b576-ca9f8b50b265 ee15961ad2be4bada6c106ac4f69f422 c076c42271004e7aa86e84416e33f826 - - default default] Lock "12702257-b2eb-4842-bdf8-25e7a3b20038-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:424
Oct 09 16:17:07 compute-0 nova_compute[117331]: 2025-10-09 16:17:07.017 2 DEBUG nova.compute.manager [req-39375e07-65e3-47ca-80d3-a0602184f61a req-b76b4023-133b-415f-b576-ca9f8b50b265 ee15961ad2be4bada6c106ac4f69f422 c076c42271004e7aa86e84416e33f826 - - default default] [instance: 12702257-b2eb-4842-bdf8-25e7a3b20038] No waiting events found dispatching network-vif-plugged-9ac52a72-3ea9-4d92-8114-f77f5fe13293 pop_instance_event /usr/lib/python3.12/site-packages/nova/compute/manager.py:344
Oct 09 16:17:07 compute-0 nova_compute[117331]: 2025-10-09 16:17:07.017 2 WARNING nova.compute.manager [req-39375e07-65e3-47ca-80d3-a0602184f61a req-b76b4023-133b-415f-b576-ca9f8b50b265 ee15961ad2be4bada6c106ac4f69f422 c076c42271004e7aa86e84416e33f826 - - default default] [instance: 12702257-b2eb-4842-bdf8-25e7a3b20038] Received unexpected event network-vif-plugged-9ac52a72-3ea9-4d92-8114-f77f5fe13293 for instance with vm_state active and task_state resize_migrating.
Oct 09 16:17:07 compute-0 nova_compute[117331]: 2025-10-09 16:17:07.017 2 DEBUG nova.compute.manager [req-39375e07-65e3-47ca-80d3-a0602184f61a req-b76b4023-133b-415f-b576-ca9f8b50b265 ee15961ad2be4bada6c106ac4f69f422 c076c42271004e7aa86e84416e33f826 - - default default] [instance: 12702257-b2eb-4842-bdf8-25e7a3b20038] Received event network-vif-plugged-9ac52a72-3ea9-4d92-8114-f77f5fe13293 external_instance_event /usr/lib/python3.12/site-packages/nova/compute/manager.py:11812
Oct 09 16:17:07 compute-0 nova_compute[117331]: 2025-10-09 16:17:07.017 2 DEBUG oslo_concurrency.lockutils [req-39375e07-65e3-47ca-80d3-a0602184f61a req-b76b4023-133b-415f-b576-ca9f8b50b265 ee15961ad2be4bada6c106ac4f69f422 c076c42271004e7aa86e84416e33f826 - - default default] Acquiring lock "12702257-b2eb-4842-bdf8-25e7a3b20038-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:405
Oct 09 16:17:07 compute-0 nova_compute[117331]: 2025-10-09 16:17:07.017 2 DEBUG oslo_concurrency.lockutils [req-39375e07-65e3-47ca-80d3-a0602184f61a req-b76b4023-133b-415f-b576-ca9f8b50b265 ee15961ad2be4bada6c106ac4f69f422 c076c42271004e7aa86e84416e33f826 - - default default] Lock "12702257-b2eb-4842-bdf8-25e7a3b20038-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:410
Oct 09 16:17:07 compute-0 nova_compute[117331]: 2025-10-09 16:17:07.018 2 DEBUG oslo_concurrency.lockutils [req-39375e07-65e3-47ca-80d3-a0602184f61a req-b76b4023-133b-415f-b576-ca9f8b50b265 ee15961ad2be4bada6c106ac4f69f422 c076c42271004e7aa86e84416e33f826 - - default default] Lock "12702257-b2eb-4842-bdf8-25e7a3b20038-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:424
Oct 09 16:17:07 compute-0 nova_compute[117331]: 2025-10-09 16:17:07.018 2 DEBUG nova.compute.manager [req-39375e07-65e3-47ca-80d3-a0602184f61a req-b76b4023-133b-415f-b576-ca9f8b50b265 ee15961ad2be4bada6c106ac4f69f422 c076c42271004e7aa86e84416e33f826 - - default default] [instance: 12702257-b2eb-4842-bdf8-25e7a3b20038] No waiting events found dispatching network-vif-plugged-9ac52a72-3ea9-4d92-8114-f77f5fe13293 pop_instance_event /usr/lib/python3.12/site-packages/nova/compute/manager.py:344
Oct 09 16:17:07 compute-0 nova_compute[117331]: 2025-10-09 16:17:07.018 2 WARNING nova.compute.manager [req-39375e07-65e3-47ca-80d3-a0602184f61a req-b76b4023-133b-415f-b576-ca9f8b50b265 ee15961ad2be4bada6c106ac4f69f422 c076c42271004e7aa86e84416e33f826 - - default default] [instance: 12702257-b2eb-4842-bdf8-25e7a3b20038] Received unexpected event network-vif-plugged-9ac52a72-3ea9-4d92-8114-f77f5fe13293 for instance with vm_state active and task_state resize_migrating.
Oct 09 16:17:07 compute-0 nova_compute[117331]: 2025-10-09 16:17:07.018 2 DEBUG nova.compute.manager [req-39375e07-65e3-47ca-80d3-a0602184f61a req-b76b4023-133b-415f-b576-ca9f8b50b265 ee15961ad2be4bada6c106ac4f69f422 c076c42271004e7aa86e84416e33f826 - - default default] [instance: 12702257-b2eb-4842-bdf8-25e7a3b20038] Received event network-vif-unplugged-9ac52a72-3ea9-4d92-8114-f77f5fe13293 external_instance_event /usr/lib/python3.12/site-packages/nova/compute/manager.py:11812
Oct 09 16:17:07 compute-0 nova_compute[117331]: 2025-10-09 16:17:07.018 2 DEBUG oslo_concurrency.lockutils [req-39375e07-65e3-47ca-80d3-a0602184f61a req-b76b4023-133b-415f-b576-ca9f8b50b265 ee15961ad2be4bada6c106ac4f69f422 c076c42271004e7aa86e84416e33f826 - - default default] Acquiring lock "12702257-b2eb-4842-bdf8-25e7a3b20038-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:405
Oct 09 16:17:07 compute-0 nova_compute[117331]: 2025-10-09 16:17:07.018 2 DEBUG oslo_concurrency.lockutils [req-39375e07-65e3-47ca-80d3-a0602184f61a req-b76b4023-133b-415f-b576-ca9f8b50b265 ee15961ad2be4bada6c106ac4f69f422 c076c42271004e7aa86e84416e33f826 - - default default] Lock "12702257-b2eb-4842-bdf8-25e7a3b20038-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:410
Oct 09 16:17:07 compute-0 nova_compute[117331]: 2025-10-09 16:17:07.019 2 DEBUG oslo_concurrency.lockutils [req-39375e07-65e3-47ca-80d3-a0602184f61a req-b76b4023-133b-415f-b576-ca9f8b50b265 ee15961ad2be4bada6c106ac4f69f422 c076c42271004e7aa86e84416e33f826 - - default default] Lock "12702257-b2eb-4842-bdf8-25e7a3b20038-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:424
Oct 09 16:17:07 compute-0 nova_compute[117331]: 2025-10-09 16:17:07.019 2 DEBUG nova.compute.manager [req-39375e07-65e3-47ca-80d3-a0602184f61a req-b76b4023-133b-415f-b576-ca9f8b50b265 ee15961ad2be4bada6c106ac4f69f422 c076c42271004e7aa86e84416e33f826 - - default default] [instance: 12702257-b2eb-4842-bdf8-25e7a3b20038] No waiting events found dispatching network-vif-unplugged-9ac52a72-3ea9-4d92-8114-f77f5fe13293 pop_instance_event /usr/lib/python3.12/site-packages/nova/compute/manager.py:344
Oct 09 16:17:07 compute-0 nova_compute[117331]: 2025-10-09 16:17:07.019 2 WARNING nova.compute.manager [req-39375e07-65e3-47ca-80d3-a0602184f61a req-b76b4023-133b-415f-b576-ca9f8b50b265 ee15961ad2be4bada6c106ac4f69f422 c076c42271004e7aa86e84416e33f826 - - default default] [instance: 12702257-b2eb-4842-bdf8-25e7a3b20038] Received unexpected event network-vif-unplugged-9ac52a72-3ea9-4d92-8114-f77f5fe13293 for instance with vm_state active and task_state resize_migrating.
Oct 09 16:17:07 compute-0 nova_compute[117331]: 2025-10-09 16:17:07.019 2 DEBUG nova.compute.manager [req-39375e07-65e3-47ca-80d3-a0602184f61a req-b76b4023-133b-415f-b576-ca9f8b50b265 ee15961ad2be4bada6c106ac4f69f422 c076c42271004e7aa86e84416e33f826 - - default default] [instance: 12702257-b2eb-4842-bdf8-25e7a3b20038] Received event network-vif-unplugged-9ac52a72-3ea9-4d92-8114-f77f5fe13293 external_instance_event /usr/lib/python3.12/site-packages/nova/compute/manager.py:11812
Oct 09 16:17:07 compute-0 nova_compute[117331]: 2025-10-09 16:17:07.019 2 DEBUG oslo_concurrency.lockutils [req-39375e07-65e3-47ca-80d3-a0602184f61a req-b76b4023-133b-415f-b576-ca9f8b50b265 ee15961ad2be4bada6c106ac4f69f422 c076c42271004e7aa86e84416e33f826 - - default default] Acquiring lock "12702257-b2eb-4842-bdf8-25e7a3b20038-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:405
Oct 09 16:17:07 compute-0 nova_compute[117331]: 2025-10-09 16:17:07.019 2 DEBUG oslo_concurrency.lockutils [req-39375e07-65e3-47ca-80d3-a0602184f61a req-b76b4023-133b-415f-b576-ca9f8b50b265 ee15961ad2be4bada6c106ac4f69f422 c076c42271004e7aa86e84416e33f826 - - default default] Lock "12702257-b2eb-4842-bdf8-25e7a3b20038-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:410
Oct 09 16:17:07 compute-0 nova_compute[117331]: 2025-10-09 16:17:07.020 2 DEBUG oslo_concurrency.lockutils [req-39375e07-65e3-47ca-80d3-a0602184f61a req-b76b4023-133b-415f-b576-ca9f8b50b265 ee15961ad2be4bada6c106ac4f69f422 c076c42271004e7aa86e84416e33f826 - - default default] Lock "12702257-b2eb-4842-bdf8-25e7a3b20038-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:424
Oct 09 16:17:07 compute-0 nova_compute[117331]: 2025-10-09 16:17:07.020 2 DEBUG nova.compute.manager [req-39375e07-65e3-47ca-80d3-a0602184f61a req-b76b4023-133b-415f-b576-ca9f8b50b265 ee15961ad2be4bada6c106ac4f69f422 c076c42271004e7aa86e84416e33f826 - - default default] [instance: 12702257-b2eb-4842-bdf8-25e7a3b20038] No waiting events found dispatching network-vif-unplugged-9ac52a72-3ea9-4d92-8114-f77f5fe13293 pop_instance_event /usr/lib/python3.12/site-packages/nova/compute/manager.py:344
Oct 09 16:17:07 compute-0 nova_compute[117331]: 2025-10-09 16:17:07.020 2 WARNING nova.compute.manager [req-39375e07-65e3-47ca-80d3-a0602184f61a req-b76b4023-133b-415f-b576-ca9f8b50b265 ee15961ad2be4bada6c106ac4f69f422 c076c42271004e7aa86e84416e33f826 - - default default] [instance: 12702257-b2eb-4842-bdf8-25e7a3b20038] Received unexpected event network-vif-unplugged-9ac52a72-3ea9-4d92-8114-f77f5fe13293 for instance with vm_state active and task_state resize_migrating.
Oct 09 16:17:07 compute-0 nova_compute[117331]: 2025-10-09 16:17:07.034 2 DEBUG nova.compute.manager [req-3987886e-72f0-4380-983a-93ad38bb64f8 req-06434f53-898f-4f26-aa08-f06c4eb58127 ee15961ad2be4bada6c106ac4f69f422 c076c42271004e7aa86e84416e33f826 - - default default] [instance: 70a451f7-5bde-42f1-a448-d48b3b24d9d6] Received event network-vif-plugged-a5c38ba6-3efb-4090-a42b-1dd250959041 external_instance_event /usr/lib/python3.12/site-packages/nova/compute/manager.py:11812
Oct 09 16:17:07 compute-0 nova_compute[117331]: 2025-10-09 16:17:07.034 2 DEBUG oslo_concurrency.lockutils [req-3987886e-72f0-4380-983a-93ad38bb64f8 req-06434f53-898f-4f26-aa08-f06c4eb58127 ee15961ad2be4bada6c106ac4f69f422 c076c42271004e7aa86e84416e33f826 - - default default] Acquiring lock "70a451f7-5bde-42f1-a448-d48b3b24d9d6-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:405
Oct 09 16:17:07 compute-0 nova_compute[117331]: 2025-10-09 16:17:07.034 2 DEBUG oslo_concurrency.lockutils [req-3987886e-72f0-4380-983a-93ad38bb64f8 req-06434f53-898f-4f26-aa08-f06c4eb58127 ee15961ad2be4bada6c106ac4f69f422 c076c42271004e7aa86e84416e33f826 - - default default] Lock "70a451f7-5bde-42f1-a448-d48b3b24d9d6-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:410
Oct 09 16:17:07 compute-0 nova_compute[117331]: 2025-10-09 16:17:07.035 2 DEBUG oslo_concurrency.lockutils [req-3987886e-72f0-4380-983a-93ad38bb64f8 req-06434f53-898f-4f26-aa08-f06c4eb58127 ee15961ad2be4bada6c106ac4f69f422 c076c42271004e7aa86e84416e33f826 - - default default] Lock "70a451f7-5bde-42f1-a448-d48b3b24d9d6-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:424
Oct 09 16:17:07 compute-0 nova_compute[117331]: 2025-10-09 16:17:07.035 2 DEBUG nova.compute.manager [req-3987886e-72f0-4380-983a-93ad38bb64f8 req-06434f53-898f-4f26-aa08-f06c4eb58127 ee15961ad2be4bada6c106ac4f69f422 c076c42271004e7aa86e84416e33f826 - - default default] [instance: 70a451f7-5bde-42f1-a448-d48b3b24d9d6] Processing event network-vif-plugged-a5c38ba6-3efb-4090-a42b-1dd250959041 _process_instance_event /usr/lib/python3.12/site-packages/nova/compute/manager.py:11572
Oct 09 16:17:07 compute-0 nova_compute[117331]: 2025-10-09 16:17:07.035 2 DEBUG nova.compute.manager [req-3987886e-72f0-4380-983a-93ad38bb64f8 req-06434f53-898f-4f26-aa08-f06c4eb58127 ee15961ad2be4bada6c106ac4f69f422 c076c42271004e7aa86e84416e33f826 - - default default] [instance: 70a451f7-5bde-42f1-a448-d48b3b24d9d6] Received event network-changed-a5c38ba6-3efb-4090-a42b-1dd250959041 external_instance_event /usr/lib/python3.12/site-packages/nova/compute/manager.py:11812
Oct 09 16:17:07 compute-0 nova_compute[117331]: 2025-10-09 16:17:07.035 2 DEBUG nova.compute.manager [req-3987886e-72f0-4380-983a-93ad38bb64f8 req-06434f53-898f-4f26-aa08-f06c4eb58127 ee15961ad2be4bada6c106ac4f69f422 c076c42271004e7aa86e84416e33f826 - - default default] [instance: 70a451f7-5bde-42f1-a448-d48b3b24d9d6] Refreshing instance network info cache due to event network-changed-a5c38ba6-3efb-4090-a42b-1dd250959041. external_instance_event /usr/lib/python3.12/site-packages/nova/compute/manager.py:11817
Oct 09 16:17:07 compute-0 nova_compute[117331]: 2025-10-09 16:17:07.035 2 DEBUG oslo_concurrency.lockutils [req-3987886e-72f0-4380-983a-93ad38bb64f8 req-06434f53-898f-4f26-aa08-f06c4eb58127 ee15961ad2be4bada6c106ac4f69f422 c076c42271004e7aa86e84416e33f826 - - default default] Acquiring lock "refresh_cache-70a451f7-5bde-42f1-a448-d48b3b24d9d6" lock /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:313
Oct 09 16:17:07 compute-0 nova_compute[117331]: 2025-10-09 16:17:07.035 2 DEBUG oslo_concurrency.lockutils [req-3987886e-72f0-4380-983a-93ad38bb64f8 req-06434f53-898f-4f26-aa08-f06c4eb58127 ee15961ad2be4bada6c106ac4f69f422 c076c42271004e7aa86e84416e33f826 - - default default] Acquired lock "refresh_cache-70a451f7-5bde-42f1-a448-d48b3b24d9d6" lock /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:316
Oct 09 16:17:07 compute-0 nova_compute[117331]: 2025-10-09 16:17:07.035 2 DEBUG nova.network.neutron [req-3987886e-72f0-4380-983a-93ad38bb64f8 req-06434f53-898f-4f26-aa08-f06c4eb58127 ee15961ad2be4bada6c106ac4f69f422 c076c42271004e7aa86e84416e33f826 - - default default] [instance: 70a451f7-5bde-42f1-a448-d48b3b24d9d6] Refreshing network info cache for port a5c38ba6-3efb-4090-a42b-1dd250959041 _get_instance_nw_info /usr/lib/python3.12/site-packages/nova/network/neutron.py:2067
Oct 09 16:17:07 compute-0 nova_compute[117331]: 2025-10-09 16:17:07.037 2 DEBUG nova.compute.manager [None req-f2c8be06-10a6-4d4e-83ce-15f398bd0cd3 005258eefa5345409efe13f6a1de22ab c076c42271004e7aa86e84416e33f826 - - default default] [instance: 70a451f7-5bde-42f1-a448-d48b3b24d9d6] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.12/site-packages/nova/compute/manager.py:602
Oct 09 16:17:07 compute-0 ovn_metadata_agent[28608]: 2025-10-09 16:17:07.047 28613 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=9954897f-aa83-45dd-8e84-289816676c2a, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '9'}),), if_exists=True) do_commit /usr/lib/python3.12/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 09 16:17:07 compute-0 nova_compute[117331]: 2025-10-09 16:17:07.306 2 DEBUG oslo_service.periodic_task [None req-92a13bed-c081-4c7b-87f3-bafebefb79f2 - - - - - -] Running periodic task ComputeManager._cleanup_expired_console_auth_tokens run_periodic_tasks /usr/lib/python3.12/site-packages/oslo_service/periodic_task.py:210
Oct 09 16:17:07 compute-0 nova_compute[117331]: 2025-10-09 16:17:07.542 2 DEBUG nova.compute.manager [None req-f2c8be06-10a6-4d4e-83ce-15f398bd0cd3 005258eefa5345409efe13f6a1de22ab c076c42271004e7aa86e84416e33f826 - - default default] live_migration data is LibvirtLiveMigrateData(bdms=[],block_migration=True,disk_available_mb=71680,disk_over_commit=<?>,dst_cpu_shared_set_info=set([]),dst_numa_info=<?>,dst_supports_mdev_live_migration=True,dst_supports_numa_live_migration=<?>,dst_wants_file_backed_memory=False,file_backed_memory_discard=<?>,filename='tmpuajgq18h',graphics_listen_addr_spice=127.0.0.1,graphics_listen_addr_vnc=::,image_type='qcow2',instance_relative_path='70a451f7-5bde-42f1-a448-d48b3b24d9d6',is_shared_block_storage=False,is_shared_instance_path=False,is_volume_backed=False,migration=Migration(2df1389d-e874-4268-9b0c-0fd9d5f9e283),old_vol_attachment_ids={},pci_dev_map_src_dst={},serial_listen_addr=None,serial_listen_ports=[],source_mdev_types=<?>,src_supports_native_luks=<?>,src_supports_numa_live_migration=<?>,supported_perf_events=[],target_connect_addr=None,target_mdevs=<?>,vifs=[VIFMigrateData],wait_for_vif_plugged=True) _do_live_migration /usr/lib/python3.12/site-packages/nova/compute/manager.py:9659
Oct 09 16:17:07 compute-0 nova_compute[117331]: 2025-10-09 16:17:07.545 2 WARNING neutronclient.v2_0.client [req-3987886e-72f0-4380-983a-93ad38bb64f8 req-06434f53-898f-4f26-aa08-f06c4eb58127 ee15961ad2be4bada6c106ac4f69f422 c076c42271004e7aa86e84416e33f826 - - default default] The python binding code in neutronclient is deprecated in favor of OpenstackSDK, please use that as this will be removed in a future release.
Oct 09 16:17:07 compute-0 sshd-session[143825]: Failed password for invalid user zabbix from 134.199.199.215 port 41492 ssh2
Oct 09 16:17:07 compute-0 nova_compute[117331]: 2025-10-09 16:17:07.888 2 WARNING neutronclient.v2_0.client [req-3987886e-72f0-4380-983a-93ad38bb64f8 req-06434f53-898f-4f26-aa08-f06c4eb58127 ee15961ad2be4bada6c106ac4f69f422 c076c42271004e7aa86e84416e33f826 - - default default] The python binding code in neutronclient is deprecated in favor of OpenstackSDK, please use that as this will be removed in a future release.
Oct 09 16:17:07 compute-0 nova_compute[117331]: 2025-10-09 16:17:07.967 2 DEBUG oslo_concurrency.lockutils [None req-54e55369-5112-43e0-ada4-044cf39dde64 005258eefa5345409efe13f6a1de22ab c076c42271004e7aa86e84416e33f826 - - default default] Acquiring lock "12702257-b2eb-4842-bdf8-25e7a3b20038-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:405
Oct 09 16:17:07 compute-0 nova_compute[117331]: 2025-10-09 16:17:07.968 2 DEBUG oslo_concurrency.lockutils [None req-54e55369-5112-43e0-ada4-044cf39dde64 005258eefa5345409efe13f6a1de22ab c076c42271004e7aa86e84416e33f826 - - default default] Lock "12702257-b2eb-4842-bdf8-25e7a3b20038-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:410
Oct 09 16:17:07 compute-0 nova_compute[117331]: 2025-10-09 16:17:07.968 2 DEBUG oslo_concurrency.lockutils [None req-54e55369-5112-43e0-ada4-044cf39dde64 005258eefa5345409efe13f6a1de22ab c076c42271004e7aa86e84416e33f826 - - default default] Lock "12702257-b2eb-4842-bdf8-25e7a3b20038-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:424
Oct 09 16:17:08 compute-0 nova_compute[117331]: 2025-10-09 16:17:08.057 2 DEBUG nova.network.neutron [req-3987886e-72f0-4380-983a-93ad38bb64f8 req-06434f53-898f-4f26-aa08-f06c4eb58127 ee15961ad2be4bada6c106ac4f69f422 c076c42271004e7aa86e84416e33f826 - - default default] [instance: 70a451f7-5bde-42f1-a448-d48b3b24d9d6] Updated VIF entry in instance network info cache for port a5c38ba6-3efb-4090-a42b-1dd250959041. _build_network_info_model /usr/lib/python3.12/site-packages/nova/network/neutron.py:3542
Oct 09 16:17:08 compute-0 nova_compute[117331]: 2025-10-09 16:17:08.057 2 DEBUG nova.network.neutron [req-3987886e-72f0-4380-983a-93ad38bb64f8 req-06434f53-898f-4f26-aa08-f06c4eb58127 ee15961ad2be4bada6c106ac4f69f422 c076c42271004e7aa86e84416e33f826 - - default default] [instance: 70a451f7-5bde-42f1-a448-d48b3b24d9d6] Updating instance_info_cache with network_info: [{"id": "a5c38ba6-3efb-4090-a42b-1dd250959041", "address": "fa:16:3e:c9:bd:65", "network": {"id": "7ed4244a-b510-4df6-9ffd-2f86603932fc", "bridge": "br-int", "label": "tempest-TestExecuteActionsViaActuator-1219246163-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "153d68ab33e64b958060574bb1741725", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa5c38ba6-3e", "ovs_interfaceid": "a5c38ba6-3efb-4090-a42b-1dd250959041", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {"migrating_to": "compute-1.ctlplane.example.com"}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.12/site-packages/nova/network/neutron.py:116
Oct 09 16:17:08 compute-0 nova_compute[117331]: 2025-10-09 16:17:08.058 2 DEBUG nova.objects.instance [None req-f2c8be06-10a6-4d4e-83ce-15f398bd0cd3 005258eefa5345409efe13f6a1de22ab c076c42271004e7aa86e84416e33f826 - - default default] Lazy-loading 'migration_context' on Instance uuid 70a451f7-5bde-42f1-a448-d48b3b24d9d6 obj_load_attr /usr/lib/python3.12/site-packages/nova/objects/instance.py:1141
Oct 09 16:17:08 compute-0 nova_compute[117331]: 2025-10-09 16:17:08.059 2 DEBUG nova.virt.libvirt.driver [None req-f2c8be06-10a6-4d4e-83ce-15f398bd0cd3 005258eefa5345409efe13f6a1de22ab c076c42271004e7aa86e84416e33f826 - - default default] [instance: 70a451f7-5bde-42f1-a448-d48b3b24d9d6] Starting monitoring of live migration _live_migration /usr/lib/python3.12/site-packages/nova/virt/libvirt/driver.py:11543
Oct 09 16:17:08 compute-0 nova_compute[117331]: 2025-10-09 16:17:08.061 2 DEBUG nova.virt.libvirt.driver [None req-f2c8be06-10a6-4d4e-83ce-15f398bd0cd3 005258eefa5345409efe13f6a1de22ab c076c42271004e7aa86e84416e33f826 - - default default] [instance: 70a451f7-5bde-42f1-a448-d48b3b24d9d6] Operation thread is still running _live_migration_monitor /usr/lib/python3.12/site-packages/nova/virt/libvirt/driver.py:11343
Oct 09 16:17:08 compute-0 nova_compute[117331]: 2025-10-09 16:17:08.061 2 DEBUG nova.virt.libvirt.driver [None req-f2c8be06-10a6-4d4e-83ce-15f398bd0cd3 005258eefa5345409efe13f6a1de22ab c076c42271004e7aa86e84416e33f826 - - default default] [instance: 70a451f7-5bde-42f1-a448-d48b3b24d9d6] Migration not running yet _live_migration_monitor /usr/lib/python3.12/site-packages/nova/virt/libvirt/driver.py:11352
Oct 09 16:17:08 compute-0 nova_compute[117331]: 2025-10-09 16:17:08.564 2 DEBUG nova.virt.libvirt.driver [None req-f2c8be06-10a6-4d4e-83ce-15f398bd0cd3 005258eefa5345409efe13f6a1de22ab c076c42271004e7aa86e84416e33f826 - - default default] [instance: 70a451f7-5bde-42f1-a448-d48b3b24d9d6] Operation thread is still running _live_migration_monitor /usr/lib/python3.12/site-packages/nova/virt/libvirt/driver.py:11343
Oct 09 16:17:08 compute-0 nova_compute[117331]: 2025-10-09 16:17:08.564 2 DEBUG nova.virt.libvirt.driver [None req-f2c8be06-10a6-4d4e-83ce-15f398bd0cd3 005258eefa5345409efe13f6a1de22ab c076c42271004e7aa86e84416e33f826 - - default default] [instance: 70a451f7-5bde-42f1-a448-d48b3b24d9d6] Migration not running yet _live_migration_monitor /usr/lib/python3.12/site-packages/nova/virt/libvirt/driver.py:11352
Oct 09 16:17:08 compute-0 nova_compute[117331]: 2025-10-09 16:17:08.565 2 DEBUG oslo_concurrency.lockutils [req-3987886e-72f0-4380-983a-93ad38bb64f8 req-06434f53-898f-4f26-aa08-f06c4eb58127 ee15961ad2be4bada6c106ac4f69f422 c076c42271004e7aa86e84416e33f826 - - default default] Releasing lock "refresh_cache-70a451f7-5bde-42f1-a448-d48b3b24d9d6" lock /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:334
Oct 09 16:17:08 compute-0 nova_compute[117331]: 2025-10-09 16:17:08.571 2 DEBUG nova.virt.libvirt.vif [None req-f2c8be06-10a6-4d4e-83ce-15f398bd0cd3 005258eefa5345409efe13f6a1de22ab c076c42271004e7aa86e84416e33f826 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,compute_id=1,config_drive='True',created_at=2025-10-09T16:15:25Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description=None,display_name='tempest-TestExecuteActionsViaActuator-server-1017152878',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testexecuteactionsviaactuator-server-1017152878',id=6,image_ref='b7d6e0af-25e4-4227-9dc6-43143898ceee',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-10-09T16:15:41Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=1,progress=0,project_id='be4e2f9059cd48f5b44a612256e3fc7b',ramdisk_id='',reservation_id='r-lo5hd093',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member,manager,admin',image_base_image_ref='b7d6e0af-25e4-4227-9dc6-43143898ceee',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',
image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestExecuteActionsViaActuator-1347788182',owner_user_name='tempest-TestExecuteActionsViaActuator-1347788182-project-admin'},tags=<?>,task_state='migrating',terminated_at=None,trusted_certs=<?>,updated_at=2025-10-09T16:15:41Z,user_data=None,user_id='e5044998ddc3419bb14cc08417add581',uuid=70a451f7-5bde-42f1-a448-d48b3b24d9d6,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "a5c38ba6-3efb-4090-a42b-1dd250959041", "address": "fa:16:3e:c9:bd:65", "network": {"id": "7ed4244a-b510-4df6-9ffd-2f86603932fc", "bridge": "br-int", "label": "tempest-TestExecuteActionsViaActuator-1219246163-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "153d68ab33e64b958060574bb1741725", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system"}, "devname": "tapa5c38ba6-3e", "ovs_interfaceid": "a5c38ba6-3efb-4090-a42b-1dd250959041", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {"os_vif_delegation": true}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.12/site-packages/nova/virt/libvirt/vif.py:574
Oct 09 16:17:08 compute-0 nova_compute[117331]: 2025-10-09 16:17:08.571 2 DEBUG nova.network.os_vif_util [None req-f2c8be06-10a6-4d4e-83ce-15f398bd0cd3 005258eefa5345409efe13f6a1de22ab c076c42271004e7aa86e84416e33f826 - - default default] Converting VIF {"id": "a5c38ba6-3efb-4090-a42b-1dd250959041", "address": "fa:16:3e:c9:bd:65", "network": {"id": "7ed4244a-b510-4df6-9ffd-2f86603932fc", "bridge": "br-int", "label": "tempest-TestExecuteActionsViaActuator-1219246163-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "153d68ab33e64b958060574bb1741725", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system"}, "devname": "tapa5c38ba6-3e", "ovs_interfaceid": "a5c38ba6-3efb-4090-a42b-1dd250959041", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {"os_vif_delegation": true}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.12/site-packages/nova/network/os_vif_util.py:511
Oct 09 16:17:08 compute-0 nova_compute[117331]: 2025-10-09 16:17:08.572 2 DEBUG nova.network.os_vif_util [None req-f2c8be06-10a6-4d4e-83ce-15f398bd0cd3 005258eefa5345409efe13f6a1de22ab c076c42271004e7aa86e84416e33f826 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:c9:bd:65,bridge_name='br-int',has_traffic_filtering=True,id=a5c38ba6-3efb-4090-a42b-1dd250959041,network=Network(7ed4244a-b510-4df6-9ffd-2f86603932fc),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapa5c38ba6-3e') nova_to_osvif_vif /usr/lib/python3.12/site-packages/nova/network/os_vif_util.py:548
Oct 09 16:17:08 compute-0 nova_compute[117331]: 2025-10-09 16:17:08.573 2 DEBUG nova.virt.libvirt.migration [None req-f2c8be06-10a6-4d4e-83ce-15f398bd0cd3 005258eefa5345409efe13f6a1de22ab c076c42271004e7aa86e84416e33f826 - - default default] [instance: 70a451f7-5bde-42f1-a448-d48b3b24d9d6] Updating guest XML with vif config: <interface type="ethernet">
Oct 09 16:17:08 compute-0 nova_compute[117331]:   <mac address="fa:16:3e:c9:bd:65"/>
Oct 09 16:17:08 compute-0 nova_compute[117331]:   <model type="virtio"/>
Oct 09 16:17:08 compute-0 nova_compute[117331]:   <driver name="vhost" rx_queue_size="512"/>
Oct 09 16:17:08 compute-0 nova_compute[117331]:   <mtu size="1442"/>
Oct 09 16:17:08 compute-0 nova_compute[117331]:   <target dev="tapa5c38ba6-3e"/>
Oct 09 16:17:08 compute-0 nova_compute[117331]: </interface>
Oct 09 16:17:08 compute-0 nova_compute[117331]:  _update_vif_xml /usr/lib/python3.12/site-packages/nova/virt/libvirt/migration.py:534
Oct 09 16:17:08 compute-0 nova_compute[117331]: 2025-10-09 16:17:08.573 2 DEBUG nova.virt.libvirt.migration [None req-f2c8be06-10a6-4d4e-83ce-15f398bd0cd3 005258eefa5345409efe13f6a1de22ab c076c42271004e7aa86e84416e33f826 - - default default] _remove_cpu_shared_set_xml input xml=<domain type="kvm">
Oct 09 16:17:08 compute-0 nova_compute[117331]:   <name>instance-00000006</name>
Oct 09 16:17:08 compute-0 nova_compute[117331]:   <uuid>70a451f7-5bde-42f1-a448-d48b3b24d9d6</uuid>
Oct 09 16:17:08 compute-0 nova_compute[117331]:   <metadata>
Oct 09 16:17:08 compute-0 nova_compute[117331]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Oct 09 16:17:08 compute-0 nova_compute[117331]:       <nova:package version="32.1.0-0.20251008143712.076498e.el10"/>
Oct 09 16:17:08 compute-0 nova_compute[117331]:       <nova:name>tempest-TestExecuteActionsViaActuator-server-1017152878</nova:name>
Oct 09 16:17:08 compute-0 nova_compute[117331]:       <nova:creationTime>2025-10-09 16:15:36</nova:creationTime>
Oct 09 16:17:08 compute-0 nova_compute[117331]:       <nova:flavor name="m1.nano" id="5aeb5bf9-70f3-4ab6-b418-2c3319a7d2b3">
Oct 09 16:17:08 compute-0 nova_compute[117331]:         <nova:memory>128</nova:memory>
Oct 09 16:17:08 compute-0 nova_compute[117331]:         <nova:disk>1</nova:disk>
Oct 09 16:17:08 compute-0 nova_compute[117331]:         <nova:swap>0</nova:swap>
Oct 09 16:17:08 compute-0 nova_compute[117331]:         <nova:ephemeral>0</nova:ephemeral>
Oct 09 16:17:08 compute-0 nova_compute[117331]:         <nova:vcpus>1</nova:vcpus>
Oct 09 16:17:08 compute-0 nova_compute[117331]:         <nova:extraSpecs>
Oct 09 16:17:08 compute-0 nova_compute[117331]:           <nova:extraSpec name="hw_rng:allowed">True</nova:extraSpec>
Oct 09 16:17:08 compute-0 nova_compute[117331]:         </nova:extraSpecs>
Oct 09 16:17:08 compute-0 nova_compute[117331]:       </nova:flavor>
Oct 09 16:17:08 compute-0 nova_compute[117331]:       <nova:image uuid="b7d6e0af-25e4-4227-9dc6-43143898ceee">
Oct 09 16:17:08 compute-0 nova_compute[117331]:         <nova:containerFormat>bare</nova:containerFormat>
Oct 09 16:17:08 compute-0 nova_compute[117331]:         <nova:diskFormat>qcow2</nova:diskFormat>
Oct 09 16:17:08 compute-0 nova_compute[117331]:         <nova:minDisk>1</nova:minDisk>
Oct 09 16:17:08 compute-0 nova_compute[117331]:         <nova:minRam>0</nova:minRam>
Oct 09 16:17:08 compute-0 nova_compute[117331]:         <nova:properties>
Oct 09 16:17:08 compute-0 nova_compute[117331]:           <nova:property name="hw_rng_model">virtio</nova:property>
Oct 09 16:17:08 compute-0 nova_compute[117331]:         </nova:properties>
Oct 09 16:17:08 compute-0 nova_compute[117331]:       </nova:image>
Oct 09 16:17:08 compute-0 nova_compute[117331]:       <nova:owner>
Oct 09 16:17:08 compute-0 nova_compute[117331]:         <nova:user uuid="e5044998ddc3419bb14cc08417add581">tempest-TestExecuteActionsViaActuator-1347788182-project-admin</nova:user>
Oct 09 16:17:08 compute-0 nova_compute[117331]:         <nova:project uuid="be4e2f9059cd48f5b44a612256e3fc7b">tempest-TestExecuteActionsViaActuator-1347788182</nova:project>
Oct 09 16:17:08 compute-0 nova_compute[117331]:       </nova:owner>
Oct 09 16:17:08 compute-0 nova_compute[117331]:       <nova:root type="image" uuid="b7d6e0af-25e4-4227-9dc6-43143898ceee"/>
Oct 09 16:17:08 compute-0 nova_compute[117331]:       <nova:ports>
Oct 09 16:17:08 compute-0 nova_compute[117331]:         <nova:port uuid="a5c38ba6-3efb-4090-a42b-1dd250959041">
Oct 09 16:17:08 compute-0 nova_compute[117331]:           <nova:ip type="fixed" address="10.100.0.6" ipVersion="4"/>
Oct 09 16:17:08 compute-0 nova_compute[117331]:         </nova:port>
Oct 09 16:17:08 compute-0 nova_compute[117331]:       </nova:ports>
Oct 09 16:17:08 compute-0 nova_compute[117331]:     </nova:instance>
Oct 09 16:17:08 compute-0 nova_compute[117331]:   </metadata>
Oct 09 16:17:08 compute-0 nova_compute[117331]:   <memory unit="KiB">131072</memory>
Oct 09 16:17:08 compute-0 nova_compute[117331]:   <currentMemory unit="KiB">131072</currentMemory>
Oct 09 16:17:08 compute-0 nova_compute[117331]:   <vcpu placement="static">1</vcpu>
Oct 09 16:17:08 compute-0 nova_compute[117331]:   <resource>
Oct 09 16:17:08 compute-0 nova_compute[117331]:     <partition>/machine</partition>
Oct 09 16:17:08 compute-0 nova_compute[117331]:   </resource>
Oct 09 16:17:08 compute-0 nova_compute[117331]:   <sysinfo type="smbios">
Oct 09 16:17:08 compute-0 nova_compute[117331]:     <system>
Oct 09 16:17:08 compute-0 nova_compute[117331]:       <entry name="manufacturer">RDO</entry>
Oct 09 16:17:08 compute-0 nova_compute[117331]:       <entry name="product">OpenStack Compute</entry>
Oct 09 16:17:08 compute-0 nova_compute[117331]:       <entry name="version">32.1.0-0.20251008143712.076498e.el10</entry>
Oct 09 16:17:08 compute-0 nova_compute[117331]:       <entry name="serial">70a451f7-5bde-42f1-a448-d48b3b24d9d6</entry>
Oct 09 16:17:08 compute-0 nova_compute[117331]:       <entry name="uuid">70a451f7-5bde-42f1-a448-d48b3b24d9d6</entry>
Oct 09 16:17:08 compute-0 nova_compute[117331]:       <entry name="family">Virtual Machine</entry>
Oct 09 16:17:08 compute-0 nova_compute[117331]:     </system>
Oct 09 16:17:08 compute-0 nova_compute[117331]:   </sysinfo>
Oct 09 16:17:08 compute-0 nova_compute[117331]:   <os>
Oct 09 16:17:08 compute-0 nova_compute[117331]:     <type arch="x86_64" machine="pc-q35-rhel9.6.0">hvm</type>
Oct 09 16:17:08 compute-0 nova_compute[117331]:     <boot dev="hd"/>
Oct 09 16:17:08 compute-0 nova_compute[117331]:     <smbios mode="sysinfo"/>
Oct 09 16:17:08 compute-0 nova_compute[117331]:   </os>
Oct 09 16:17:08 compute-0 nova_compute[117331]:   <features>
Oct 09 16:17:08 compute-0 nova_compute[117331]:     <acpi/>
Oct 09 16:17:08 compute-0 nova_compute[117331]:     <apic/>
Oct 09 16:17:08 compute-0 nova_compute[117331]:     <vmcoreinfo state="on"/>
Oct 09 16:17:08 compute-0 nova_compute[117331]:   </features>
Oct 09 16:17:08 compute-0 nova_compute[117331]:   <cpu mode="host-model" check="partial">
Oct 09 16:17:08 compute-0 nova_compute[117331]:     <topology sockets="1" dies="1" clusters="1" cores="1" threads="1"/>
Oct 09 16:17:08 compute-0 nova_compute[117331]:   </cpu>
Oct 09 16:17:08 compute-0 nova_compute[117331]:   <clock offset="utc">
Oct 09 16:17:08 compute-0 nova_compute[117331]:     <timer name="pit" tickpolicy="delay"/>
Oct 09 16:17:08 compute-0 nova_compute[117331]:     <timer name="rtc" tickpolicy="catchup"/>
Oct 09 16:17:08 compute-0 nova_compute[117331]:     <timer name="hpet" present="no"/>
Oct 09 16:17:08 compute-0 nova_compute[117331]:   </clock>
Oct 09 16:17:08 compute-0 nova_compute[117331]:   <on_poweroff>destroy</on_poweroff>
Oct 09 16:17:08 compute-0 nova_compute[117331]:   <on_reboot>restart</on_reboot>
Oct 09 16:17:08 compute-0 nova_compute[117331]:   <on_crash>destroy</on_crash>
Oct 09 16:17:08 compute-0 nova_compute[117331]:   <devices>
Oct 09 16:17:08 compute-0 nova_compute[117331]:     <emulator>/usr/libexec/qemu-kvm</emulator>
Oct 09 16:17:08 compute-0 nova_compute[117331]:     <disk type="file" device="disk">
Oct 09 16:17:08 compute-0 nova_compute[117331]:       <driver name="qemu" type="qcow2" cache="none"/>
Oct 09 16:17:08 compute-0 nova_compute[117331]:       <source file="/var/lib/nova/instances/70a451f7-5bde-42f1-a448-d48b3b24d9d6/disk"/>
Oct 09 16:17:08 compute-0 nova_compute[117331]:       <target dev="vda" bus="virtio"/>
Oct 09 16:17:08 compute-0 nova_compute[117331]:       <address type="pci" domain="0x0000" bus="0x03" slot="0x00" function="0x0"/>
Oct 09 16:17:08 compute-0 nova_compute[117331]:     </disk>
Oct 09 16:17:08 compute-0 nova_compute[117331]:     <disk type="file" device="cdrom">
Oct 09 16:17:08 compute-0 nova_compute[117331]:       <driver name="qemu" type="raw" cache="none"/>
Oct 09 16:17:08 compute-0 nova_compute[117331]:       <source file="/var/lib/nova/instances/70a451f7-5bde-42f1-a448-d48b3b24d9d6/disk.config"/>
Oct 09 16:17:08 compute-0 nova_compute[117331]:       <target dev="sda" bus="sata"/>
Oct 09 16:17:08 compute-0 nova_compute[117331]:       <readonly/>
Oct 09 16:17:08 compute-0 nova_compute[117331]:       <address type="drive" controller="0" bus="0" target="0" unit="0"/>
Oct 09 16:17:08 compute-0 nova_compute[117331]:     </disk>
Oct 09 16:17:08 compute-0 nova_compute[117331]:     <controller type="pci" index="0" model="pcie-root"/>
Oct 09 16:17:08 compute-0 nova_compute[117331]:     <controller type="pci" index="1" model="pcie-root-port">
Oct 09 16:17:08 compute-0 nova_compute[117331]:       <model name="pcie-root-port"/>
Oct 09 16:17:08 compute-0 nova_compute[117331]:       <target chassis="1" port="0x10"/>
Oct 09 16:17:08 compute-0 nova_compute[117331]:       <address type="pci" domain="0x0000" bus="0x00" slot="0x02" function="0x0" multifunction="on"/>
Oct 09 16:17:08 compute-0 nova_compute[117331]:     </controller>
Oct 09 16:17:08 compute-0 nova_compute[117331]:     <controller type="pci" index="2" model="pcie-root-port">
Oct 09 16:17:08 compute-0 nova_compute[117331]:       <model name="pcie-root-port"/>
Oct 09 16:17:08 compute-0 nova_compute[117331]:       <target chassis="2" port="0x11"/>
Oct 09 16:17:08 compute-0 nova_compute[117331]:       <address type="pci" domain="0x0000" bus="0x00" slot="0x02" function="0x1"/>
Oct 09 16:17:08 compute-0 nova_compute[117331]:     </controller>
Oct 09 16:17:08 compute-0 nova_compute[117331]:     <controller type="pci" index="3" model="pcie-root-port">
Oct 09 16:17:08 compute-0 nova_compute[117331]:       <model name="pcie-root-port"/>
Oct 09 16:17:08 compute-0 nova_compute[117331]:       <target chassis="3" port="0x12"/>
Oct 09 16:17:08 compute-0 nova_compute[117331]:       <address type="pci" domain="0x0000" bus="0x00" slot="0x02" function="0x2"/>
Oct 09 16:17:08 compute-0 nova_compute[117331]:     </controller>
Oct 09 16:17:08 compute-0 nova_compute[117331]:     <controller type="pci" index="4" model="pcie-root-port">
Oct 09 16:17:08 compute-0 nova_compute[117331]:       <model name="pcie-root-port"/>
Oct 09 16:17:08 compute-0 nova_compute[117331]:       <target chassis="4" port="0x13"/>
Oct 09 16:17:08 compute-0 nova_compute[117331]:       <address type="pci" domain="0x0000" bus="0x00" slot="0x02" function="0x3"/>
Oct 09 16:17:08 compute-0 nova_compute[117331]:     </controller>
Oct 09 16:17:08 compute-0 nova_compute[117331]:     <controller type="pci" index="5" model="pcie-root-port">
Oct 09 16:17:08 compute-0 nova_compute[117331]:       <model name="pcie-root-port"/>
Oct 09 16:17:08 compute-0 nova_compute[117331]:       <target chassis="5" port="0x14"/>
Oct 09 16:17:08 compute-0 nova_compute[117331]:       <address type="pci" domain="0x0000" bus="0x00" slot="0x02" function="0x4"/>
Oct 09 16:17:08 compute-0 nova_compute[117331]:     </controller>
Oct 09 16:17:08 compute-0 nova_compute[117331]:     <controller type="pci" index="6" model="pcie-root-port">
Oct 09 16:17:08 compute-0 nova_compute[117331]:       <model name="pcie-root-port"/>
Oct 09 16:17:08 compute-0 nova_compute[117331]:       <target chassis="6" port="0x15"/>
Oct 09 16:17:08 compute-0 nova_compute[117331]:       <address type="pci" domain="0x0000" bus="0x00" slot="0x02" function="0x5"/>
Oct 09 16:17:08 compute-0 nova_compute[117331]:     </controller>
Oct 09 16:17:08 compute-0 nova_compute[117331]:     <controller type="pci" index="7" model="pcie-root-port">
Oct 09 16:17:08 compute-0 nova_compute[117331]:       <model name="pcie-root-port"/>
Oct 09 16:17:08 compute-0 nova_compute[117331]:       <target chassis="7" port="0x16"/>
Oct 09 16:17:08 compute-0 nova_compute[117331]:       <address type="pci" domain="0x0000" bus="0x00" slot="0x02" function="0x6"/>
Oct 09 16:17:08 compute-0 nova_compute[117331]:     </controller>
Oct 09 16:17:08 compute-0 nova_compute[117331]:     <controller type="pci" index="8" model="pcie-root-port">
Oct 09 16:17:08 compute-0 nova_compute[117331]:       <model name="pcie-root-port"/>
Oct 09 16:17:08 compute-0 nova_compute[117331]:       <target chassis="8" port="0x17"/>
Oct 09 16:17:08 compute-0 nova_compute[117331]:       <address type="pci" domain="0x0000" bus="0x00" slot="0x02" function="0x7"/>
Oct 09 16:17:08 compute-0 nova_compute[117331]:     </controller>
Oct 09 16:17:08 compute-0 nova_compute[117331]:     <controller type="pci" index="9" model="pcie-root-port">
Oct 09 16:17:08 compute-0 nova_compute[117331]:       <model name="pcie-root-port"/>
Oct 09 16:17:08 compute-0 nova_compute[117331]:       <target chassis="9" port="0x18"/>
Oct 09 16:17:08 compute-0 nova_compute[117331]:       <address type="pci" domain="0x0000" bus="0x00" slot="0x03" function="0x0" multifunction="on"/>
Oct 09 16:17:08 compute-0 nova_compute[117331]:     </controller>
Oct 09 16:17:08 compute-0 nova_compute[117331]:     <controller type="pci" index="10" model="pcie-root-port">
Oct 09 16:17:08 compute-0 nova_compute[117331]:       <model name="pcie-root-port"/>
Oct 09 16:17:08 compute-0 nova_compute[117331]:       <target chassis="10" port="0x19"/>
Oct 09 16:17:08 compute-0 nova_compute[117331]:       <address type="pci" domain="0x0000" bus="0x00" slot="0x03" function="0x1"/>
Oct 09 16:17:08 compute-0 nova_compute[117331]:     </controller>
Oct 09 16:17:08 compute-0 nova_compute[117331]:     <controller type="pci" index="11" model="pcie-root-port">
Oct 09 16:17:08 compute-0 nova_compute[117331]:       <model name="pcie-root-port"/>
Oct 09 16:17:08 compute-0 nova_compute[117331]:       <target chassis="11" port="0x1a"/>
Oct 09 16:17:08 compute-0 nova_compute[117331]:       <address type="pci" domain="0x0000" bus="0x00" slot="0x03" function="0x2"/>
Oct 09 16:17:08 compute-0 nova_compute[117331]:     </controller>
Oct 09 16:17:08 compute-0 nova_compute[117331]:     <controller type="pci" index="12" model="pcie-root-port">
Oct 09 16:17:08 compute-0 nova_compute[117331]:       <model name="pcie-root-port"/>
Oct 09 16:17:08 compute-0 nova_compute[117331]:       <target chassis="12" port="0x1b"/>
Oct 09 16:17:08 compute-0 nova_compute[117331]:       <address type="pci" domain="0x0000" bus="0x00" slot="0x03" function="0x3"/>
Oct 09 16:17:08 compute-0 nova_compute[117331]:     </controller>
Oct 09 16:17:08 compute-0 nova_compute[117331]:     <controller type="pci" index="13" model="pcie-root-port">
Oct 09 16:17:08 compute-0 nova_compute[117331]:       <model name="pcie-root-port"/>
Oct 09 16:17:08 compute-0 nova_compute[117331]:       <target chassis="13" port="0x1c"/>
Oct 09 16:17:08 compute-0 nova_compute[117331]:       <address type="pci" domain="0x0000" bus="0x00" slot="0x03" function="0x4"/>
Oct 09 16:17:08 compute-0 nova_compute[117331]:     </controller>
Oct 09 16:17:08 compute-0 nova_compute[117331]:     <controller type="pci" index="14" model="pcie-root-port">
Oct 09 16:17:08 compute-0 nova_compute[117331]:       <model name="pcie-root-port"/>
Oct 09 16:17:08 compute-0 nova_compute[117331]:       <target chassis="14" port="0x1d"/>
Oct 09 16:17:08 compute-0 nova_compute[117331]:       <address type="pci" domain="0x0000" bus="0x00" slot="0x03" function="0x5"/>
Oct 09 16:17:08 compute-0 nova_compute[117331]:     </controller>
Oct 09 16:17:08 compute-0 nova_compute[117331]:     <controller type="pci" index="15" model="pcie-root-port">
Oct 09 16:17:08 compute-0 nova_compute[117331]:       <model name="pcie-root-port"/>
Oct 09 16:17:08 compute-0 nova_compute[117331]:       <target chassis="15" port="0x1e"/>
Oct 09 16:17:08 compute-0 nova_compute[117331]:       <address type="pci" domain="0x0000" bus="0x00" slot="0x03" function="0x6"/>
Oct 09 16:17:08 compute-0 nova_compute[117331]:     </controller>
Oct 09 16:17:08 compute-0 nova_compute[117331]:     <controller type="pci" index="16" model="pcie-root-port">
Oct 09 16:17:08 compute-0 nova_compute[117331]:       <model name="pcie-root-port"/>
Oct 09 16:17:08 compute-0 nova_compute[117331]:       <target chassis="16" port="0x1f"/>
Oct 09 16:17:08 compute-0 nova_compute[117331]:       <address type="pci" domain="0x0000" bus="0x00" slot="0x03" function="0x7"/>
Oct 09 16:17:08 compute-0 nova_compute[117331]:     </controller>
Oct 09 16:17:08 compute-0 nova_compute[117331]:     <controller type="pci" index="17" model="pcie-root-port">
Oct 09 16:17:08 compute-0 nova_compute[117331]:       <model name="pcie-root-port"/>
Oct 09 16:17:08 compute-0 nova_compute[117331]:       <target chassis="17" port="0x20"/>
Oct 09 16:17:08 compute-0 nova_compute[117331]:       <address type="pci" domain="0x0000" bus="0x00" slot="0x04" function="0x0" multifunction="on"/>
Oct 09 16:17:08 compute-0 nova_compute[117331]:     </controller>
Oct 09 16:17:08 compute-0 nova_compute[117331]:     <controller type="pci" index="18" model="pcie-root-port">
Oct 09 16:17:08 compute-0 nova_compute[117331]:       <model name="pcie-root-port"/>
Oct 09 16:17:08 compute-0 nova_compute[117331]:       <target chassis="18" port="0x21"/>
Oct 09 16:17:08 compute-0 nova_compute[117331]:       <address type="pci" domain="0x0000" bus="0x00" slot="0x04" function="0x1"/>
Oct 09 16:17:08 compute-0 nova_compute[117331]:     </controller>
Oct 09 16:17:08 compute-0 nova_compute[117331]:     <controller type="pci" index="19" model="pcie-root-port">
Oct 09 16:17:08 compute-0 nova_compute[117331]:       <model name="pcie-root-port"/>
Oct 09 16:17:08 compute-0 nova_compute[117331]:       <target chassis="19" port="0x22"/>
Oct 09 16:17:08 compute-0 nova_compute[117331]:       <address type="pci" domain="0x0000" bus="0x00" slot="0x04" function="0x2"/>
Oct 09 16:17:08 compute-0 nova_compute[117331]:     </controller>
Oct 09 16:17:08 compute-0 nova_compute[117331]:     <controller type="pci" index="20" model="pcie-root-port">
Oct 09 16:17:08 compute-0 nova_compute[117331]:       <model name="pcie-root-port"/>
Oct 09 16:17:08 compute-0 nova_compute[117331]:       <target chassis="20" port="0x23"/>
Oct 09 16:17:08 compute-0 nova_compute[117331]:       <address type="pci" domain="0x0000" bus="0x00" slot="0x04" function="0x3"/>
Oct 09 16:17:08 compute-0 nova_compute[117331]:     </controller>
Oct 09 16:17:08 compute-0 nova_compute[117331]:     <controller type="pci" index="21" model="pcie-root-port">
Oct 09 16:17:08 compute-0 nova_compute[117331]:       <model name="pcie-root-port"/>
Oct 09 16:17:08 compute-0 nova_compute[117331]:       <target chassis="21" port="0x24"/>
Oct 09 16:17:08 compute-0 nova_compute[117331]:       <address type="pci" domain="0x0000" bus="0x00" slot="0x04" function="0x4"/>
Oct 09 16:17:08 compute-0 nova_compute[117331]:     </controller>
Oct 09 16:17:08 compute-0 nova_compute[117331]:     <controller type="pci" index="22" model="pcie-root-port">
Oct 09 16:17:08 compute-0 nova_compute[117331]:       <model name="pcie-root-port"/>
Oct 09 16:17:08 compute-0 nova_compute[117331]:       <target chassis="22" port="0x25"/>
Oct 09 16:17:08 compute-0 nova_compute[117331]:       <address type="pci" domain="0x0000" bus="0x00" slot="0x04" function="0x5"/>
Oct 09 16:17:08 compute-0 nova_compute[117331]:     </controller>
Oct 09 16:17:08 compute-0 nova_compute[117331]:     <controller type="pci" index="23" model="pcie-root-port">
Oct 09 16:17:08 compute-0 nova_compute[117331]:       <model name="pcie-root-port"/>
Oct 09 16:17:08 compute-0 nova_compute[117331]:       <target chassis="23" port="0x26"/>
Oct 09 16:17:08 compute-0 nova_compute[117331]:       <address type="pci" domain="0x0000" bus="0x00" slot="0x04" function="0x6"/>
Oct 09 16:17:08 compute-0 nova_compute[117331]:     </controller>
Oct 09 16:17:08 compute-0 nova_compute[117331]:     <controller type="pci" index="24" model="pcie-root-port">
Oct 09 16:17:08 compute-0 nova_compute[117331]:       <model name="pcie-root-port"/>
Oct 09 16:17:08 compute-0 nova_compute[117331]:       <target chassis="24" port="0x27"/>
Oct 09 16:17:08 compute-0 nova_compute[117331]:       <address type="pci" domain="0x0000" bus="0x00" slot="0x04" function="0x7"/>
Oct 09 16:17:08 compute-0 nova_compute[117331]:     </controller>
Oct 09 16:17:08 compute-0 nova_compute[117331]:     <controller type="pci" index="25" model="pcie-root-port">
Oct 09 16:17:08 compute-0 nova_compute[117331]:       <model name="pcie-root-port"/>
Oct 09 16:17:08 compute-0 nova_compute[117331]:       <target chassis="25" port="0x28"/>
Oct 09 16:17:08 compute-0 nova_compute[117331]:       <address type="pci" domain="0x0000" bus="0x00" slot="0x05" function="0x0"/>
Oct 09 16:17:08 compute-0 nova_compute[117331]:     </controller>
Oct 09 16:17:08 compute-0 nova_compute[117331]:     <controller type="pci" index="26" model="pcie-to-pci-bridge">
Oct 09 16:17:08 compute-0 nova_compute[117331]:       <model name="pcie-pci-bridge"/>
Oct 09 16:17:08 compute-0 nova_compute[117331]:       <address type="pci" domain="0x0000" bus="0x01" slot="0x00" function="0x0"/>
Oct 09 16:17:08 compute-0 nova_compute[117331]:     </controller>
Oct 09 16:17:08 compute-0 nova_compute[117331]:     <controller type="usb" index="0" model="piix3-uhci">
Oct 09 16:17:08 compute-0 nova_compute[117331]:       <address type="pci" domain="0x0000" bus="0x1a" slot="0x01" function="0x0"/>
Oct 09 16:17:08 compute-0 nova_compute[117331]:     </controller>
Oct 09 16:17:08 compute-0 nova_compute[117331]:     <controller type="sata" index="0">
Oct 09 16:17:08 compute-0 nova_compute[117331]:       <address type="pci" domain="0x0000" bus="0x00" slot="0x1f" function="0x2"/>
Oct 09 16:17:08 compute-0 nova_compute[117331]:     </controller>
Oct 09 16:17:08 compute-0 nova_compute[117331]:     <interface type="ethernet">
Oct 09 16:17:08 compute-0 nova_compute[117331]:       <mac address="fa:16:3e:c9:bd:65"/>
Oct 09 16:17:08 compute-0 nova_compute[117331]:       <model type="virtio"/>
Oct 09 16:17:08 compute-0 nova_compute[117331]:       <driver name="vhost" rx_queue_size="512"/>
Oct 09 16:17:08 compute-0 nova_compute[117331]:       <mtu size="1442"/>
Oct 09 16:17:08 compute-0 nova_compute[117331]:       <target dev="tapa5c38ba6-3e"/>
Oct 09 16:17:08 compute-0 nova_compute[117331]:       <address type="pci" domain="0x0000" bus="0x02" slot="0x00" function="0x0"/>
Oct 09 16:17:08 compute-0 nova_compute[117331]:     </interface>
Oct 09 16:17:08 compute-0 nova_compute[117331]:     <serial type="pty">
Oct 09 16:17:08 compute-0 nova_compute[117331]:       <log file="/var/lib/nova/instances/70a451f7-5bde-42f1-a448-d48b3b24d9d6/console.log" append="off"/>
Oct 09 16:17:08 compute-0 nova_compute[117331]:       <target type="isa-serial" port="0">
Oct 09 16:17:08 compute-0 nova_compute[117331]:         <model name="isa-serial"/>
Oct 09 16:17:08 compute-0 nova_compute[117331]:       </target>
Oct 09 16:17:08 compute-0 nova_compute[117331]:     </serial>
Oct 09 16:17:08 compute-0 nova_compute[117331]:     <console type="pty">
Oct 09 16:17:08 compute-0 nova_compute[117331]:       <log file="/var/lib/nova/instances/70a451f7-5bde-42f1-a448-d48b3b24d9d6/console.log" append="off"/>
Oct 09 16:17:08 compute-0 nova_compute[117331]:       <target type="serial" port="0"/>
Oct 09 16:17:08 compute-0 nova_compute[117331]:     </console>
Oct 09 16:17:08 compute-0 nova_compute[117331]:     <input type="tablet" bus="usb">
Oct 09 16:17:08 compute-0 nova_compute[117331]:       <address type="usb" bus="0" port="1"/>
Oct 09 16:17:08 compute-0 nova_compute[117331]:     </input>
Oct 09 16:17:08 compute-0 nova_compute[117331]:     <input type="mouse" bus="ps2"/>
Oct 09 16:17:08 compute-0 nova_compute[117331]:     <graphics type="vnc" port="-1" autoport="yes" listen="::">
Oct 09 16:17:08 compute-0 nova_compute[117331]:       <listen type="address" address="::"/>
Oct 09 16:17:08 compute-0 nova_compute[117331]:     </graphics>
Oct 09 16:17:08 compute-0 nova_compute[117331]:     <video>
Oct 09 16:17:08 compute-0 nova_compute[117331]:       <model type="virtio" heads="1" primary="yes"/>
Oct 09 16:17:08 compute-0 nova_compute[117331]:       <address type="pci" domain="0x0000" bus="0x00" slot="0x01" function="0x0"/>
Oct 09 16:17:08 compute-0 nova_compute[117331]:     </video>
Oct 09 16:17:08 compute-0 nova_compute[117331]:     <memballoon model="virtio" autodeflate="on" freePageReporting="on">
Oct 09 16:17:08 compute-0 nova_compute[117331]:       <stats period="10"/>
Oct 09 16:17:08 compute-0 nova_compute[117331]:       <address type="pci" domain="0x0000" bus="0x04" slot="0x00" function="0x0"/>
Oct 09 16:17:08 compute-0 nova_compute[117331]:     </memballoon>
Oct 09 16:17:08 compute-0 nova_compute[117331]:     <rng model="virtio">
Oct 09 16:17:08 compute-0 nova_compute[117331]:       <backend model="random">/dev/urandom</backend>
Oct 09 16:17:08 compute-0 nova_compute[117331]:       <address type="pci" domain="0x0000" bus="0x05" slot="0x00" function="0x0"/>
Oct 09 16:17:08 compute-0 nova_compute[117331]:     </rng>
Oct 09 16:17:08 compute-0 nova_compute[117331]:   </devices>
Oct 09 16:17:08 compute-0 nova_compute[117331]:   <seclabel type="dynamic" model="selinux" relabel="yes"/>
Oct 09 16:17:08 compute-0 nova_compute[117331]: </domain>
Oct 09 16:17:08 compute-0 nova_compute[117331]:  _remove_cpu_shared_set_xml /usr/lib/python3.12/site-packages/nova/virt/libvirt/migration.py:241
Oct 09 16:17:08 compute-0 nova_compute[117331]: 2025-10-09 16:17:08.575 2 DEBUG nova.virt.libvirt.migration [None req-f2c8be06-10a6-4d4e-83ce-15f398bd0cd3 005258eefa5345409efe13f6a1de22ab c076c42271004e7aa86e84416e33f826 - - default default] _remove_cpu_shared_set_xml output xml=<domain type="kvm">
Oct 09 16:17:08 compute-0 nova_compute[117331]:   <name>instance-00000006</name>
Oct 09 16:17:08 compute-0 nova_compute[117331]:   <uuid>70a451f7-5bde-42f1-a448-d48b3b24d9d6</uuid>
Oct 09 16:17:08 compute-0 nova_compute[117331]:   <metadata>
Oct 09 16:17:08 compute-0 nova_compute[117331]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Oct 09 16:17:08 compute-0 nova_compute[117331]:       <nova:package version="32.1.0-0.20251008143712.076498e.el10"/>
Oct 09 16:17:08 compute-0 nova_compute[117331]:       <nova:name>tempest-TestExecuteActionsViaActuator-server-1017152878</nova:name>
Oct 09 16:17:08 compute-0 nova_compute[117331]:       <nova:creationTime>2025-10-09 16:15:36</nova:creationTime>
Oct 09 16:17:08 compute-0 nova_compute[117331]:       <nova:flavor name="m1.nano" id="5aeb5bf9-70f3-4ab6-b418-2c3319a7d2b3">
Oct 09 16:17:08 compute-0 nova_compute[117331]:         <nova:memory>128</nova:memory>
Oct 09 16:17:08 compute-0 nova_compute[117331]:         <nova:disk>1</nova:disk>
Oct 09 16:17:08 compute-0 nova_compute[117331]:         <nova:swap>0</nova:swap>
Oct 09 16:17:08 compute-0 nova_compute[117331]:         <nova:ephemeral>0</nova:ephemeral>
Oct 09 16:17:08 compute-0 nova_compute[117331]:         <nova:vcpus>1</nova:vcpus>
Oct 09 16:17:08 compute-0 nova_compute[117331]:         <nova:extraSpecs>
Oct 09 16:17:08 compute-0 nova_compute[117331]:           <nova:extraSpec name="hw_rng:allowed">True</nova:extraSpec>
Oct 09 16:17:08 compute-0 nova_compute[117331]:         </nova:extraSpecs>
Oct 09 16:17:08 compute-0 nova_compute[117331]:       </nova:flavor>
Oct 09 16:17:08 compute-0 nova_compute[117331]:       <nova:image uuid="b7d6e0af-25e4-4227-9dc6-43143898ceee">
Oct 09 16:17:08 compute-0 nova_compute[117331]:         <nova:containerFormat>bare</nova:containerFormat>
Oct 09 16:17:08 compute-0 nova_compute[117331]:         <nova:diskFormat>qcow2</nova:diskFormat>
Oct 09 16:17:08 compute-0 nova_compute[117331]:         <nova:minDisk>1</nova:minDisk>
Oct 09 16:17:08 compute-0 nova_compute[117331]:         <nova:minRam>0</nova:minRam>
Oct 09 16:17:08 compute-0 nova_compute[117331]:         <nova:properties>
Oct 09 16:17:08 compute-0 nova_compute[117331]:           <nova:property name="hw_rng_model">virtio</nova:property>
Oct 09 16:17:08 compute-0 nova_compute[117331]:         </nova:properties>
Oct 09 16:17:08 compute-0 nova_compute[117331]:       </nova:image>
Oct 09 16:17:08 compute-0 nova_compute[117331]:       <nova:owner>
Oct 09 16:17:08 compute-0 nova_compute[117331]:         <nova:user uuid="e5044998ddc3419bb14cc08417add581">tempest-TestExecuteActionsViaActuator-1347788182-project-admin</nova:user>
Oct 09 16:17:08 compute-0 nova_compute[117331]:         <nova:project uuid="be4e2f9059cd48f5b44a612256e3fc7b">tempest-TestExecuteActionsViaActuator-1347788182</nova:project>
Oct 09 16:17:08 compute-0 nova_compute[117331]:       </nova:owner>
Oct 09 16:17:08 compute-0 nova_compute[117331]:       <nova:root type="image" uuid="b7d6e0af-25e4-4227-9dc6-43143898ceee"/>
Oct 09 16:17:08 compute-0 nova_compute[117331]:       <nova:ports>
Oct 09 16:17:08 compute-0 nova_compute[117331]:         <nova:port uuid="a5c38ba6-3efb-4090-a42b-1dd250959041">
Oct 09 16:17:08 compute-0 nova_compute[117331]:           <nova:ip type="fixed" address="10.100.0.6" ipVersion="4"/>
Oct 09 16:17:08 compute-0 nova_compute[117331]:         </nova:port>
Oct 09 16:17:08 compute-0 nova_compute[117331]:       </nova:ports>
Oct 09 16:17:08 compute-0 nova_compute[117331]:     </nova:instance>
Oct 09 16:17:08 compute-0 nova_compute[117331]:   </metadata>
Oct 09 16:17:08 compute-0 nova_compute[117331]:   <memory unit="KiB">131072</memory>
Oct 09 16:17:08 compute-0 nova_compute[117331]:   <currentMemory unit="KiB">131072</currentMemory>
Oct 09 16:17:08 compute-0 nova_compute[117331]:   <vcpu placement="static">1</vcpu>
Oct 09 16:17:08 compute-0 nova_compute[117331]:   <resource>
Oct 09 16:17:08 compute-0 nova_compute[117331]:     <partition>/machine</partition>
Oct 09 16:17:08 compute-0 nova_compute[117331]:   </resource>
Oct 09 16:17:08 compute-0 nova_compute[117331]:   <sysinfo type="smbios">
Oct 09 16:17:08 compute-0 nova_compute[117331]:     <system>
Oct 09 16:17:08 compute-0 nova_compute[117331]:       <entry name="manufacturer">RDO</entry>
Oct 09 16:17:08 compute-0 nova_compute[117331]:       <entry name="product">OpenStack Compute</entry>
Oct 09 16:17:08 compute-0 nova_compute[117331]:       <entry name="version">32.1.0-0.20251008143712.076498e.el10</entry>
Oct 09 16:17:08 compute-0 nova_compute[117331]:       <entry name="serial">70a451f7-5bde-42f1-a448-d48b3b24d9d6</entry>
Oct 09 16:17:08 compute-0 nova_compute[117331]:       <entry name="uuid">70a451f7-5bde-42f1-a448-d48b3b24d9d6</entry>
Oct 09 16:17:08 compute-0 nova_compute[117331]:       <entry name="family">Virtual Machine</entry>
Oct 09 16:17:08 compute-0 nova_compute[117331]:     </system>
Oct 09 16:17:08 compute-0 nova_compute[117331]:   </sysinfo>
Oct 09 16:17:08 compute-0 nova_compute[117331]:   <os>
Oct 09 16:17:08 compute-0 nova_compute[117331]:     <type arch="x86_64" machine="pc-q35-rhel9.6.0">hvm</type>
Oct 09 16:17:08 compute-0 nova_compute[117331]:     <boot dev="hd"/>
Oct 09 16:17:08 compute-0 nova_compute[117331]:     <smbios mode="sysinfo"/>
Oct 09 16:17:08 compute-0 nova_compute[117331]:   </os>
Oct 09 16:17:08 compute-0 nova_compute[117331]:   <features>
Oct 09 16:17:08 compute-0 nova_compute[117331]:     <acpi/>
Oct 09 16:17:08 compute-0 nova_compute[117331]:     <apic/>
Oct 09 16:17:08 compute-0 nova_compute[117331]:     <vmcoreinfo state="on"/>
Oct 09 16:17:08 compute-0 nova_compute[117331]:   </features>
Oct 09 16:17:08 compute-0 nova_compute[117331]:   <cpu mode="host-model" check="partial">
Oct 09 16:17:08 compute-0 nova_compute[117331]:     <topology sockets="1" dies="1" clusters="1" cores="1" threads="1"/>
Oct 09 16:17:08 compute-0 nova_compute[117331]:   </cpu>
Oct 09 16:17:08 compute-0 nova_compute[117331]:   <clock offset="utc">
Oct 09 16:17:08 compute-0 nova_compute[117331]:     <timer name="pit" tickpolicy="delay"/>
Oct 09 16:17:08 compute-0 nova_compute[117331]:     <timer name="rtc" tickpolicy="catchup"/>
Oct 09 16:17:08 compute-0 nova_compute[117331]:     <timer name="hpet" present="no"/>
Oct 09 16:17:08 compute-0 nova_compute[117331]:   </clock>
Oct 09 16:17:08 compute-0 nova_compute[117331]:   <on_poweroff>destroy</on_poweroff>
Oct 09 16:17:08 compute-0 nova_compute[117331]:   <on_reboot>restart</on_reboot>
Oct 09 16:17:08 compute-0 nova_compute[117331]:   <on_crash>destroy</on_crash>
Oct 09 16:17:08 compute-0 nova_compute[117331]:   <devices>
Oct 09 16:17:08 compute-0 nova_compute[117331]:     <emulator>/usr/libexec/qemu-kvm</emulator>
Oct 09 16:17:08 compute-0 nova_compute[117331]:     <disk type="file" device="disk">
Oct 09 16:17:08 compute-0 nova_compute[117331]:       <driver name="qemu" type="qcow2" cache="none"/>
Oct 09 16:17:08 compute-0 nova_compute[117331]:       <source file="/var/lib/nova/instances/70a451f7-5bde-42f1-a448-d48b3b24d9d6/disk"/>
Oct 09 16:17:08 compute-0 nova_compute[117331]:       <target dev="vda" bus="virtio"/>
Oct 09 16:17:08 compute-0 nova_compute[117331]:       <address type="pci" domain="0x0000" bus="0x03" slot="0x00" function="0x0"/>
Oct 09 16:17:08 compute-0 nova_compute[117331]:     </disk>
Oct 09 16:17:08 compute-0 nova_compute[117331]:     <disk type="file" device="cdrom">
Oct 09 16:17:08 compute-0 nova_compute[117331]:       <driver name="qemu" type="raw" cache="none"/>
Oct 09 16:17:08 compute-0 nova_compute[117331]:       <source file="/var/lib/nova/instances/70a451f7-5bde-42f1-a448-d48b3b24d9d6/disk.config"/>
Oct 09 16:17:08 compute-0 nova_compute[117331]:       <target dev="sda" bus="sata"/>
Oct 09 16:17:08 compute-0 nova_compute[117331]:       <readonly/>
Oct 09 16:17:08 compute-0 nova_compute[117331]:       <address type="drive" controller="0" bus="0" target="0" unit="0"/>
Oct 09 16:17:08 compute-0 nova_compute[117331]:     </disk>
Oct 09 16:17:08 compute-0 nova_compute[117331]:     <controller type="pci" index="0" model="pcie-root"/>
Oct 09 16:17:08 compute-0 nova_compute[117331]:     <controller type="pci" index="1" model="pcie-root-port">
Oct 09 16:17:08 compute-0 nova_compute[117331]:       <model name="pcie-root-port"/>
Oct 09 16:17:08 compute-0 nova_compute[117331]:       <target chassis="1" port="0x10"/>
Oct 09 16:17:08 compute-0 nova_compute[117331]:       <address type="pci" domain="0x0000" bus="0x00" slot="0x02" function="0x0" multifunction="on"/>
Oct 09 16:17:08 compute-0 nova_compute[117331]:     </controller>
Oct 09 16:17:08 compute-0 nova_compute[117331]:     <controller type="pci" index="2" model="pcie-root-port">
Oct 09 16:17:08 compute-0 nova_compute[117331]:       <model name="pcie-root-port"/>
Oct 09 16:17:08 compute-0 nova_compute[117331]:       <target chassis="2" port="0x11"/>
Oct 09 16:17:08 compute-0 nova_compute[117331]:       <address type="pci" domain="0x0000" bus="0x00" slot="0x02" function="0x1"/>
Oct 09 16:17:08 compute-0 nova_compute[117331]:     </controller>
Oct 09 16:17:08 compute-0 nova_compute[117331]:     <controller type="pci" index="3" model="pcie-root-port">
Oct 09 16:17:08 compute-0 nova_compute[117331]:       <model name="pcie-root-port"/>
Oct 09 16:17:08 compute-0 nova_compute[117331]:       <target chassis="3" port="0x12"/>
Oct 09 16:17:08 compute-0 nova_compute[117331]:       <address type="pci" domain="0x0000" bus="0x00" slot="0x02" function="0x2"/>
Oct 09 16:17:08 compute-0 nova_compute[117331]:     </controller>
Oct 09 16:17:08 compute-0 nova_compute[117331]:     <controller type="pci" index="4" model="pcie-root-port">
Oct 09 16:17:08 compute-0 nova_compute[117331]:       <model name="pcie-root-port"/>
Oct 09 16:17:08 compute-0 nova_compute[117331]:       <target chassis="4" port="0x13"/>
Oct 09 16:17:08 compute-0 nova_compute[117331]:       <address type="pci" domain="0x0000" bus="0x00" slot="0x02" function="0x3"/>
Oct 09 16:17:08 compute-0 nova_compute[117331]:     </controller>
Oct 09 16:17:08 compute-0 nova_compute[117331]:     <controller type="pci" index="5" model="pcie-root-port">
Oct 09 16:17:08 compute-0 nova_compute[117331]:       <model name="pcie-root-port"/>
Oct 09 16:17:08 compute-0 nova_compute[117331]:       <target chassis="5" port="0x14"/>
Oct 09 16:17:08 compute-0 nova_compute[117331]:       <address type="pci" domain="0x0000" bus="0x00" slot="0x02" function="0x4"/>
Oct 09 16:17:08 compute-0 nova_compute[117331]:     </controller>
Oct 09 16:17:08 compute-0 nova_compute[117331]:     <controller type="pci" index="6" model="pcie-root-port">
Oct 09 16:17:08 compute-0 nova_compute[117331]:       <model name="pcie-root-port"/>
Oct 09 16:17:08 compute-0 nova_compute[117331]:       <target chassis="6" port="0x15"/>
Oct 09 16:17:08 compute-0 nova_compute[117331]:       <address type="pci" domain="0x0000" bus="0x00" slot="0x02" function="0x5"/>
Oct 09 16:17:08 compute-0 nova_compute[117331]:     </controller>
Oct 09 16:17:08 compute-0 nova_compute[117331]:     <controller type="pci" index="7" model="pcie-root-port">
Oct 09 16:17:08 compute-0 nova_compute[117331]:       <model name="pcie-root-port"/>
Oct 09 16:17:08 compute-0 nova_compute[117331]:       <target chassis="7" port="0x16"/>
Oct 09 16:17:08 compute-0 nova_compute[117331]:       <address type="pci" domain="0x0000" bus="0x00" slot="0x02" function="0x6"/>
Oct 09 16:17:08 compute-0 nova_compute[117331]:     </controller>
Oct 09 16:17:08 compute-0 nova_compute[117331]:     <controller type="pci" index="8" model="pcie-root-port">
Oct 09 16:17:08 compute-0 nova_compute[117331]:       <model name="pcie-root-port"/>
Oct 09 16:17:08 compute-0 nova_compute[117331]:       <target chassis="8" port="0x17"/>
Oct 09 16:17:08 compute-0 nova_compute[117331]:       <address type="pci" domain="0x0000" bus="0x00" slot="0x02" function="0x7"/>
Oct 09 16:17:08 compute-0 nova_compute[117331]:     </controller>
Oct 09 16:17:08 compute-0 nova_compute[117331]:     <controller type="pci" index="9" model="pcie-root-port">
Oct 09 16:17:08 compute-0 nova_compute[117331]:       <model name="pcie-root-port"/>
Oct 09 16:17:08 compute-0 nova_compute[117331]:       <target chassis="9" port="0x18"/>
Oct 09 16:17:08 compute-0 nova_compute[117331]:       <address type="pci" domain="0x0000" bus="0x00" slot="0x03" function="0x0" multifunction="on"/>
Oct 09 16:17:08 compute-0 nova_compute[117331]:     </controller>
Oct 09 16:17:08 compute-0 nova_compute[117331]:     <controller type="pci" index="10" model="pcie-root-port">
Oct 09 16:17:08 compute-0 nova_compute[117331]:       <model name="pcie-root-port"/>
Oct 09 16:17:08 compute-0 nova_compute[117331]:       <target chassis="10" port="0x19"/>
Oct 09 16:17:08 compute-0 nova_compute[117331]:       <address type="pci" domain="0x0000" bus="0x00" slot="0x03" function="0x1"/>
Oct 09 16:17:08 compute-0 nova_compute[117331]:     </controller>
Oct 09 16:17:08 compute-0 nova_compute[117331]:     <controller type="pci" index="11" model="pcie-root-port">
Oct 09 16:17:08 compute-0 nova_compute[117331]:       <model name="pcie-root-port"/>
Oct 09 16:17:08 compute-0 nova_compute[117331]:       <target chassis="11" port="0x1a"/>
Oct 09 16:17:08 compute-0 nova_compute[117331]:       <address type="pci" domain="0x0000" bus="0x00" slot="0x03" function="0x2"/>
Oct 09 16:17:08 compute-0 nova_compute[117331]:     </controller>
Oct 09 16:17:08 compute-0 nova_compute[117331]:     <controller type="pci" index="12" model="pcie-root-port">
Oct 09 16:17:08 compute-0 nova_compute[117331]:       <model name="pcie-root-port"/>
Oct 09 16:17:08 compute-0 nova_compute[117331]:       <target chassis="12" port="0x1b"/>
Oct 09 16:17:08 compute-0 nova_compute[117331]:       <address type="pci" domain="0x0000" bus="0x00" slot="0x03" function="0x3"/>
Oct 09 16:17:08 compute-0 nova_compute[117331]:     </controller>
Oct 09 16:17:08 compute-0 nova_compute[117331]:     <controller type="pci" index="13" model="pcie-root-port">
Oct 09 16:17:08 compute-0 nova_compute[117331]:       <model name="pcie-root-port"/>
Oct 09 16:17:08 compute-0 nova_compute[117331]:       <target chassis="13" port="0x1c"/>
Oct 09 16:17:08 compute-0 nova_compute[117331]:       <address type="pci" domain="0x0000" bus="0x00" slot="0x03" function="0x4"/>
Oct 09 16:17:08 compute-0 nova_compute[117331]:     </controller>
Oct 09 16:17:08 compute-0 nova_compute[117331]:     <controller type="pci" index="14" model="pcie-root-port">
Oct 09 16:17:08 compute-0 nova_compute[117331]:       <model name="pcie-root-port"/>
Oct 09 16:17:08 compute-0 nova_compute[117331]:       <target chassis="14" port="0x1d"/>
Oct 09 16:17:08 compute-0 nova_compute[117331]:       <address type="pci" domain="0x0000" bus="0x00" slot="0x03" function="0x5"/>
Oct 09 16:17:08 compute-0 nova_compute[117331]:     </controller>
Oct 09 16:17:08 compute-0 nova_compute[117331]:     <controller type="pci" index="15" model="pcie-root-port">
Oct 09 16:17:08 compute-0 nova_compute[117331]:       <model name="pcie-root-port"/>
Oct 09 16:17:08 compute-0 nova_compute[117331]:       <target chassis="15" port="0x1e"/>
Oct 09 16:17:08 compute-0 nova_compute[117331]:       <address type="pci" domain="0x0000" bus="0x00" slot="0x03" function="0x6"/>
Oct 09 16:17:08 compute-0 nova_compute[117331]:     </controller>
Oct 09 16:17:08 compute-0 nova_compute[117331]:     <controller type="pci" index="16" model="pcie-root-port">
Oct 09 16:17:08 compute-0 nova_compute[117331]:       <model name="pcie-root-port"/>
Oct 09 16:17:08 compute-0 nova_compute[117331]:       <target chassis="16" port="0x1f"/>
Oct 09 16:17:08 compute-0 nova_compute[117331]:       <address type="pci" domain="0x0000" bus="0x00" slot="0x03" function="0x7"/>
Oct 09 16:17:08 compute-0 nova_compute[117331]:     </controller>
Oct 09 16:17:08 compute-0 nova_compute[117331]:     <controller type="pci" index="17" model="pcie-root-port">
Oct 09 16:17:08 compute-0 nova_compute[117331]:       <model name="pcie-root-port"/>
Oct 09 16:17:08 compute-0 nova_compute[117331]:       <target chassis="17" port="0x20"/>
Oct 09 16:17:08 compute-0 nova_compute[117331]:       <address type="pci" domain="0x0000" bus="0x00" slot="0x04" function="0x0" multifunction="on"/>
Oct 09 16:17:08 compute-0 nova_compute[117331]:     </controller>
Oct 09 16:17:08 compute-0 nova_compute[117331]:     <controller type="pci" index="18" model="pcie-root-port">
Oct 09 16:17:08 compute-0 nova_compute[117331]:       <model name="pcie-root-port"/>
Oct 09 16:17:08 compute-0 nova_compute[117331]:       <target chassis="18" port="0x21"/>
Oct 09 16:17:08 compute-0 nova_compute[117331]:       <address type="pci" domain="0x0000" bus="0x00" slot="0x04" function="0x1"/>
Oct 09 16:17:08 compute-0 nova_compute[117331]:     </controller>
Oct 09 16:17:08 compute-0 nova_compute[117331]:     <controller type="pci" index="19" model="pcie-root-port">
Oct 09 16:17:08 compute-0 nova_compute[117331]:       <model name="pcie-root-port"/>
Oct 09 16:17:08 compute-0 nova_compute[117331]:       <target chassis="19" port="0x22"/>
Oct 09 16:17:08 compute-0 nova_compute[117331]:       <address type="pci" domain="0x0000" bus="0x00" slot="0x04" function="0x2"/>
Oct 09 16:17:08 compute-0 nova_compute[117331]:     </controller>
Oct 09 16:17:08 compute-0 nova_compute[117331]:     <controller type="pci" index="20" model="pcie-root-port">
Oct 09 16:17:08 compute-0 nova_compute[117331]:       <model name="pcie-root-port"/>
Oct 09 16:17:08 compute-0 nova_compute[117331]:       <target chassis="20" port="0x23"/>
Oct 09 16:17:08 compute-0 nova_compute[117331]:       <address type="pci" domain="0x0000" bus="0x00" slot="0x04" function="0x3"/>
Oct 09 16:17:08 compute-0 nova_compute[117331]:     </controller>
Oct 09 16:17:08 compute-0 nova_compute[117331]:     <controller type="pci" index="21" model="pcie-root-port">
Oct 09 16:17:08 compute-0 nova_compute[117331]:       <model name="pcie-root-port"/>
Oct 09 16:17:08 compute-0 nova_compute[117331]:       <target chassis="21" port="0x24"/>
Oct 09 16:17:08 compute-0 nova_compute[117331]:       <address type="pci" domain="0x0000" bus="0x00" slot="0x04" function="0x4"/>
Oct 09 16:17:08 compute-0 nova_compute[117331]:     </controller>
Oct 09 16:17:08 compute-0 nova_compute[117331]:     <controller type="pci" index="22" model="pcie-root-port">
Oct 09 16:17:08 compute-0 nova_compute[117331]:       <model name="pcie-root-port"/>
Oct 09 16:17:08 compute-0 nova_compute[117331]:       <target chassis="22" port="0x25"/>
Oct 09 16:17:08 compute-0 nova_compute[117331]:       <address type="pci" domain="0x0000" bus="0x00" slot="0x04" function="0x5"/>
Oct 09 16:17:08 compute-0 nova_compute[117331]:     </controller>
Oct 09 16:17:08 compute-0 nova_compute[117331]:     <controller type="pci" index="23" model="pcie-root-port">
Oct 09 16:17:08 compute-0 nova_compute[117331]:       <model name="pcie-root-port"/>
Oct 09 16:17:08 compute-0 nova_compute[117331]:       <target chassis="23" port="0x26"/>
Oct 09 16:17:08 compute-0 nova_compute[117331]:       <address type="pci" domain="0x0000" bus="0x00" slot="0x04" function="0x6"/>
Oct 09 16:17:08 compute-0 nova_compute[117331]:     </controller>
Oct 09 16:17:08 compute-0 nova_compute[117331]:     <controller type="pci" index="24" model="pcie-root-port">
Oct 09 16:17:08 compute-0 nova_compute[117331]:       <model name="pcie-root-port"/>
Oct 09 16:17:08 compute-0 nova_compute[117331]:       <target chassis="24" port="0x27"/>
Oct 09 16:17:08 compute-0 nova_compute[117331]:       <address type="pci" domain="0x0000" bus="0x00" slot="0x04" function="0x7"/>
Oct 09 16:17:08 compute-0 nova_compute[117331]:     </controller>
Oct 09 16:17:08 compute-0 nova_compute[117331]:     <controller type="pci" index="25" model="pcie-root-port">
Oct 09 16:17:08 compute-0 nova_compute[117331]:       <model name="pcie-root-port"/>
Oct 09 16:17:08 compute-0 nova_compute[117331]:       <target chassis="25" port="0x28"/>
Oct 09 16:17:08 compute-0 nova_compute[117331]:       <address type="pci" domain="0x0000" bus="0x00" slot="0x05" function="0x0"/>
Oct 09 16:17:08 compute-0 nova_compute[117331]:     </controller>
Oct 09 16:17:08 compute-0 nova_compute[117331]:     <controller type="pci" index="26" model="pcie-to-pci-bridge">
Oct 09 16:17:08 compute-0 nova_compute[117331]:       <model name="pcie-pci-bridge"/>
Oct 09 16:17:08 compute-0 nova_compute[117331]:       <address type="pci" domain="0x0000" bus="0x01" slot="0x00" function="0x0"/>
Oct 09 16:17:08 compute-0 nova_compute[117331]:     </controller>
Oct 09 16:17:08 compute-0 nova_compute[117331]:     <controller type="usb" index="0" model="piix3-uhci">
Oct 09 16:17:08 compute-0 nova_compute[117331]:       <address type="pci" domain="0x0000" bus="0x1a" slot="0x01" function="0x0"/>
Oct 09 16:17:08 compute-0 nova_compute[117331]:     </controller>
Oct 09 16:17:08 compute-0 nova_compute[117331]:     <controller type="sata" index="0">
Oct 09 16:17:08 compute-0 nova_compute[117331]:       <address type="pci" domain="0x0000" bus="0x00" slot="0x1f" function="0x2"/>
Oct 09 16:17:08 compute-0 nova_compute[117331]:     </controller>
Oct 09 16:17:08 compute-0 nova_compute[117331]:     <interface type="ethernet">
Oct 09 16:17:08 compute-0 nova_compute[117331]:       <mac address="fa:16:3e:c9:bd:65"/>
Oct 09 16:17:08 compute-0 nova_compute[117331]:       <model type="virtio"/>
Oct 09 16:17:08 compute-0 nova_compute[117331]:       <driver name="vhost" rx_queue_size="512"/>
Oct 09 16:17:08 compute-0 nova_compute[117331]:       <mtu size="1442"/>
Oct 09 16:17:08 compute-0 nova_compute[117331]:       <target dev="tapa5c38ba6-3e"/>
Oct 09 16:17:08 compute-0 nova_compute[117331]:       <address type="pci" domain="0x0000" bus="0x02" slot="0x00" function="0x0"/>
Oct 09 16:17:08 compute-0 nova_compute[117331]:     </interface>
Oct 09 16:17:08 compute-0 nova_compute[117331]:     <serial type="pty">
Oct 09 16:17:08 compute-0 nova_compute[117331]:       <log file="/var/lib/nova/instances/70a451f7-5bde-42f1-a448-d48b3b24d9d6/console.log" append="off"/>
Oct 09 16:17:08 compute-0 nova_compute[117331]:       <target type="isa-serial" port="0">
Oct 09 16:17:08 compute-0 nova_compute[117331]:         <model name="isa-serial"/>
Oct 09 16:17:08 compute-0 nova_compute[117331]:       </target>
Oct 09 16:17:08 compute-0 nova_compute[117331]:     </serial>
Oct 09 16:17:08 compute-0 nova_compute[117331]:     <console type="pty">
Oct 09 16:17:08 compute-0 nova_compute[117331]:       <log file="/var/lib/nova/instances/70a451f7-5bde-42f1-a448-d48b3b24d9d6/console.log" append="off"/>
Oct 09 16:17:08 compute-0 nova_compute[117331]:       <target type="serial" port="0"/>
Oct 09 16:17:08 compute-0 nova_compute[117331]:     </console>
Oct 09 16:17:08 compute-0 nova_compute[117331]:     <input type="tablet" bus="usb">
Oct 09 16:17:08 compute-0 nova_compute[117331]:       <address type="usb" bus="0" port="1"/>
Oct 09 16:17:08 compute-0 nova_compute[117331]:     </input>
Oct 09 16:17:08 compute-0 nova_compute[117331]:     <input type="mouse" bus="ps2"/>
Oct 09 16:17:08 compute-0 nova_compute[117331]:     <graphics type="vnc" port="-1" autoport="yes" listen="::">
Oct 09 16:17:08 compute-0 nova_compute[117331]:       <listen type="address" address="::"/>
Oct 09 16:17:08 compute-0 nova_compute[117331]:     </graphics>
Oct 09 16:17:08 compute-0 nova_compute[117331]:     <video>
Oct 09 16:17:08 compute-0 nova_compute[117331]:       <model type="virtio" heads="1" primary="yes"/>
Oct 09 16:17:08 compute-0 nova_compute[117331]:       <address type="pci" domain="0x0000" bus="0x00" slot="0x01" function="0x0"/>
Oct 09 16:17:08 compute-0 nova_compute[117331]:     </video>
Oct 09 16:17:08 compute-0 nova_compute[117331]:     <memballoon model="virtio" autodeflate="on" freePageReporting="on">
Oct 09 16:17:08 compute-0 nova_compute[117331]:       <stats period="10"/>
Oct 09 16:17:08 compute-0 nova_compute[117331]:       <address type="pci" domain="0x0000" bus="0x04" slot="0x00" function="0x0"/>
Oct 09 16:17:08 compute-0 nova_compute[117331]:     </memballoon>
Oct 09 16:17:08 compute-0 nova_compute[117331]:     <rng model="virtio">
Oct 09 16:17:08 compute-0 nova_compute[117331]:       <backend model="random">/dev/urandom</backend>
Oct 09 16:17:08 compute-0 nova_compute[117331]:       <address type="pci" domain="0x0000" bus="0x05" slot="0x00" function="0x0"/>
Oct 09 16:17:08 compute-0 nova_compute[117331]:     </rng>
Oct 09 16:17:08 compute-0 nova_compute[117331]:   </devices>
Oct 09 16:17:08 compute-0 nova_compute[117331]:   <seclabel type="dynamic" model="selinux" relabel="yes"/>
Oct 09 16:17:08 compute-0 nova_compute[117331]: </domain>
Oct 09 16:17:08 compute-0 nova_compute[117331]:  _remove_cpu_shared_set_xml /usr/lib/python3.12/site-packages/nova/virt/libvirt/migration.py:250
Oct 09 16:17:08 compute-0 nova_compute[117331]: 2025-10-09 16:17:08.576 2 DEBUG nova.virt.libvirt.migration [None req-f2c8be06-10a6-4d4e-83ce-15f398bd0cd3 005258eefa5345409efe13f6a1de22ab c076c42271004e7aa86e84416e33f826 - - default default] _update_pci_xml output xml=<domain type="kvm">
Oct 09 16:17:08 compute-0 nova_compute[117331]:   <name>instance-00000006</name>
Oct 09 16:17:08 compute-0 nova_compute[117331]:   <uuid>70a451f7-5bde-42f1-a448-d48b3b24d9d6</uuid>
Oct 09 16:17:08 compute-0 nova_compute[117331]:   <metadata>
Oct 09 16:17:08 compute-0 nova_compute[117331]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Oct 09 16:17:08 compute-0 nova_compute[117331]:       <nova:package version="32.1.0-0.20251008143712.076498e.el10"/>
Oct 09 16:17:08 compute-0 nova_compute[117331]:       <nova:name>tempest-TestExecuteActionsViaActuator-server-1017152878</nova:name>
Oct 09 16:17:08 compute-0 nova_compute[117331]:       <nova:creationTime>2025-10-09 16:15:36</nova:creationTime>
Oct 09 16:17:08 compute-0 nova_compute[117331]:       <nova:flavor name="m1.nano" id="5aeb5bf9-70f3-4ab6-b418-2c3319a7d2b3">
Oct 09 16:17:08 compute-0 nova_compute[117331]:         <nova:memory>128</nova:memory>
Oct 09 16:17:08 compute-0 nova_compute[117331]:         <nova:disk>1</nova:disk>
Oct 09 16:17:08 compute-0 nova_compute[117331]:         <nova:swap>0</nova:swap>
Oct 09 16:17:08 compute-0 nova_compute[117331]:         <nova:ephemeral>0</nova:ephemeral>
Oct 09 16:17:08 compute-0 nova_compute[117331]:         <nova:vcpus>1</nova:vcpus>
Oct 09 16:17:08 compute-0 nova_compute[117331]:         <nova:extraSpecs>
Oct 09 16:17:08 compute-0 nova_compute[117331]:           <nova:extraSpec name="hw_rng:allowed">True</nova:extraSpec>
Oct 09 16:17:08 compute-0 nova_compute[117331]:         </nova:extraSpecs>
Oct 09 16:17:08 compute-0 nova_compute[117331]:       </nova:flavor>
Oct 09 16:17:08 compute-0 nova_compute[117331]:       <nova:image uuid="b7d6e0af-25e4-4227-9dc6-43143898ceee">
Oct 09 16:17:08 compute-0 nova_compute[117331]:         <nova:containerFormat>bare</nova:containerFormat>
Oct 09 16:17:08 compute-0 nova_compute[117331]:         <nova:diskFormat>qcow2</nova:diskFormat>
Oct 09 16:17:08 compute-0 nova_compute[117331]:         <nova:minDisk>1</nova:minDisk>
Oct 09 16:17:08 compute-0 nova_compute[117331]:         <nova:minRam>0</nova:minRam>
Oct 09 16:17:08 compute-0 nova_compute[117331]:         <nova:properties>
Oct 09 16:17:08 compute-0 nova_compute[117331]:           <nova:property name="hw_rng_model">virtio</nova:property>
Oct 09 16:17:08 compute-0 nova_compute[117331]:         </nova:properties>
Oct 09 16:17:08 compute-0 nova_compute[117331]:       </nova:image>
Oct 09 16:17:08 compute-0 nova_compute[117331]:       <nova:owner>
Oct 09 16:17:08 compute-0 nova_compute[117331]:         <nova:user uuid="e5044998ddc3419bb14cc08417add581">tempest-TestExecuteActionsViaActuator-1347788182-project-admin</nova:user>
Oct 09 16:17:08 compute-0 nova_compute[117331]:         <nova:project uuid="be4e2f9059cd48f5b44a612256e3fc7b">tempest-TestExecuteActionsViaActuator-1347788182</nova:project>
Oct 09 16:17:08 compute-0 nova_compute[117331]:       </nova:owner>
Oct 09 16:17:08 compute-0 nova_compute[117331]:       <nova:root type="image" uuid="b7d6e0af-25e4-4227-9dc6-43143898ceee"/>
Oct 09 16:17:08 compute-0 nova_compute[117331]:       <nova:ports>
Oct 09 16:17:08 compute-0 nova_compute[117331]:         <nova:port uuid="a5c38ba6-3efb-4090-a42b-1dd250959041">
Oct 09 16:17:08 compute-0 nova_compute[117331]:           <nova:ip type="fixed" address="10.100.0.6" ipVersion="4"/>
Oct 09 16:17:08 compute-0 nova_compute[117331]:         </nova:port>
Oct 09 16:17:08 compute-0 nova_compute[117331]:       </nova:ports>
Oct 09 16:17:08 compute-0 nova_compute[117331]:     </nova:instance>
Oct 09 16:17:08 compute-0 nova_compute[117331]:   </metadata>
Oct 09 16:17:08 compute-0 nova_compute[117331]:   <memory unit="KiB">131072</memory>
Oct 09 16:17:08 compute-0 nova_compute[117331]:   <currentMemory unit="KiB">131072</currentMemory>
Oct 09 16:17:08 compute-0 nova_compute[117331]:   <vcpu placement="static">1</vcpu>
Oct 09 16:17:08 compute-0 nova_compute[117331]:   <resource>
Oct 09 16:17:08 compute-0 nova_compute[117331]:     <partition>/machine</partition>
Oct 09 16:17:08 compute-0 nova_compute[117331]:   </resource>
Oct 09 16:17:08 compute-0 nova_compute[117331]:   <sysinfo type="smbios">
Oct 09 16:17:08 compute-0 nova_compute[117331]:     <system>
Oct 09 16:17:08 compute-0 nova_compute[117331]:       <entry name="manufacturer">RDO</entry>
Oct 09 16:17:08 compute-0 nova_compute[117331]:       <entry name="product">OpenStack Compute</entry>
Oct 09 16:17:08 compute-0 nova_compute[117331]:       <entry name="version">32.1.0-0.20251008143712.076498e.el10</entry>
Oct 09 16:17:08 compute-0 nova_compute[117331]:       <entry name="serial">70a451f7-5bde-42f1-a448-d48b3b24d9d6</entry>
Oct 09 16:17:08 compute-0 nova_compute[117331]:       <entry name="uuid">70a451f7-5bde-42f1-a448-d48b3b24d9d6</entry>
Oct 09 16:17:08 compute-0 nova_compute[117331]:       <entry name="family">Virtual Machine</entry>
Oct 09 16:17:08 compute-0 nova_compute[117331]:     </system>
Oct 09 16:17:08 compute-0 nova_compute[117331]:   </sysinfo>
Oct 09 16:17:08 compute-0 nova_compute[117331]:   <os>
Oct 09 16:17:08 compute-0 nova_compute[117331]:     <type arch="x86_64" machine="pc-q35-rhel9.6.0">hvm</type>
Oct 09 16:17:08 compute-0 nova_compute[117331]:     <boot dev="hd"/>
Oct 09 16:17:08 compute-0 nova_compute[117331]:     <smbios mode="sysinfo"/>
Oct 09 16:17:08 compute-0 nova_compute[117331]:   </os>
Oct 09 16:17:08 compute-0 nova_compute[117331]:   <features>
Oct 09 16:17:08 compute-0 nova_compute[117331]:     <acpi/>
Oct 09 16:17:08 compute-0 nova_compute[117331]:     <apic/>
Oct 09 16:17:08 compute-0 nova_compute[117331]:     <vmcoreinfo state="on"/>
Oct 09 16:17:08 compute-0 nova_compute[117331]:   </features>
Oct 09 16:17:08 compute-0 nova_compute[117331]:   <cpu mode="host-model" check="partial">
Oct 09 16:17:08 compute-0 nova_compute[117331]:     <topology sockets="1" dies="1" clusters="1" cores="1" threads="1"/>
Oct 09 16:17:08 compute-0 nova_compute[117331]:   </cpu>
Oct 09 16:17:08 compute-0 nova_compute[117331]:   <clock offset="utc">
Oct 09 16:17:08 compute-0 nova_compute[117331]:     <timer name="pit" tickpolicy="delay"/>
Oct 09 16:17:08 compute-0 nova_compute[117331]:     <timer name="rtc" tickpolicy="catchup"/>
Oct 09 16:17:08 compute-0 nova_compute[117331]:     <timer name="hpet" present="no"/>
Oct 09 16:17:08 compute-0 nova_compute[117331]:   </clock>
Oct 09 16:17:08 compute-0 nova_compute[117331]:   <on_poweroff>destroy</on_poweroff>
Oct 09 16:17:08 compute-0 nova_compute[117331]:   <on_reboot>restart</on_reboot>
Oct 09 16:17:08 compute-0 nova_compute[117331]:   <on_crash>destroy</on_crash>
Oct 09 16:17:08 compute-0 nova_compute[117331]:   <devices>
Oct 09 16:17:08 compute-0 nova_compute[117331]:     <emulator>/usr/libexec/qemu-kvm</emulator>
Oct 09 16:17:08 compute-0 nova_compute[117331]:     <disk type="file" device="disk">
Oct 09 16:17:08 compute-0 nova_compute[117331]:       <driver name="qemu" type="qcow2" cache="none"/>
Oct 09 16:17:08 compute-0 nova_compute[117331]:       <source file="/var/lib/nova/instances/70a451f7-5bde-42f1-a448-d48b3b24d9d6/disk"/>
Oct 09 16:17:08 compute-0 nova_compute[117331]:       <target dev="vda" bus="virtio"/>
Oct 09 16:17:08 compute-0 nova_compute[117331]:       <address type="pci" domain="0x0000" bus="0x03" slot="0x00" function="0x0"/>
Oct 09 16:17:08 compute-0 nova_compute[117331]:     </disk>
Oct 09 16:17:08 compute-0 nova_compute[117331]:     <disk type="file" device="cdrom">
Oct 09 16:17:08 compute-0 nova_compute[117331]:       <driver name="qemu" type="raw" cache="none"/>
Oct 09 16:17:08 compute-0 nova_compute[117331]:       <source file="/var/lib/nova/instances/70a451f7-5bde-42f1-a448-d48b3b24d9d6/disk.config"/>
Oct 09 16:17:08 compute-0 nova_compute[117331]:       <target dev="sda" bus="sata"/>
Oct 09 16:17:08 compute-0 nova_compute[117331]:       <readonly/>
Oct 09 16:17:08 compute-0 nova_compute[117331]:       <address type="drive" controller="0" bus="0" target="0" unit="0"/>
Oct 09 16:17:08 compute-0 nova_compute[117331]:     </disk>
Oct 09 16:17:08 compute-0 nova_compute[117331]:     <controller type="pci" index="0" model="pcie-root"/>
Oct 09 16:17:08 compute-0 nova_compute[117331]:     <controller type="pci" index="1" model="pcie-root-port">
Oct 09 16:17:08 compute-0 nova_compute[117331]:       <model name="pcie-root-port"/>
Oct 09 16:17:08 compute-0 nova_compute[117331]:       <target chassis="1" port="0x10"/>
Oct 09 16:17:08 compute-0 nova_compute[117331]:       <address type="pci" domain="0x0000" bus="0x00" slot="0x02" function="0x0" multifunction="on"/>
Oct 09 16:17:08 compute-0 nova_compute[117331]:     </controller>
Oct 09 16:17:08 compute-0 nova_compute[117331]:     <controller type="pci" index="2" model="pcie-root-port">
Oct 09 16:17:08 compute-0 nova_compute[117331]:       <model name="pcie-root-port"/>
Oct 09 16:17:08 compute-0 nova_compute[117331]:       <target chassis="2" port="0x11"/>
Oct 09 16:17:08 compute-0 nova_compute[117331]:       <address type="pci" domain="0x0000" bus="0x00" slot="0x02" function="0x1"/>
Oct 09 16:17:08 compute-0 nova_compute[117331]:     </controller>
Oct 09 16:17:08 compute-0 nova_compute[117331]:     <controller type="pci" index="3" model="pcie-root-port">
Oct 09 16:17:08 compute-0 nova_compute[117331]:       <model name="pcie-root-port"/>
Oct 09 16:17:08 compute-0 nova_compute[117331]:       <target chassis="3" port="0x12"/>
Oct 09 16:17:08 compute-0 nova_compute[117331]:       <address type="pci" domain="0x0000" bus="0x00" slot="0x02" function="0x2"/>
Oct 09 16:17:08 compute-0 nova_compute[117331]:     </controller>
Oct 09 16:17:08 compute-0 nova_compute[117331]:     <controller type="pci" index="4" model="pcie-root-port">
Oct 09 16:17:08 compute-0 nova_compute[117331]:       <model name="pcie-root-port"/>
Oct 09 16:17:08 compute-0 nova_compute[117331]:       <target chassis="4" port="0x13"/>
Oct 09 16:17:08 compute-0 nova_compute[117331]:       <address type="pci" domain="0x0000" bus="0x00" slot="0x02" function="0x3"/>
Oct 09 16:17:08 compute-0 nova_compute[117331]:     </controller>
Oct 09 16:17:08 compute-0 nova_compute[117331]:     <controller type="pci" index="5" model="pcie-root-port">
Oct 09 16:17:08 compute-0 nova_compute[117331]:       <model name="pcie-root-port"/>
Oct 09 16:17:08 compute-0 nova_compute[117331]:       <target chassis="5" port="0x14"/>
Oct 09 16:17:08 compute-0 nova_compute[117331]:       <address type="pci" domain="0x0000" bus="0x00" slot="0x02" function="0x4"/>
Oct 09 16:17:08 compute-0 nova_compute[117331]:     </controller>
Oct 09 16:17:08 compute-0 nova_compute[117331]:     <controller type="pci" index="6" model="pcie-root-port">
Oct 09 16:17:08 compute-0 nova_compute[117331]:       <model name="pcie-root-port"/>
Oct 09 16:17:08 compute-0 nova_compute[117331]:       <target chassis="6" port="0x15"/>
Oct 09 16:17:08 compute-0 nova_compute[117331]:       <address type="pci" domain="0x0000" bus="0x00" slot="0x02" function="0x5"/>
Oct 09 16:17:08 compute-0 nova_compute[117331]:     </controller>
Oct 09 16:17:08 compute-0 nova_compute[117331]:     <controller type="pci" index="7" model="pcie-root-port">
Oct 09 16:17:08 compute-0 nova_compute[117331]:       <model name="pcie-root-port"/>
Oct 09 16:17:08 compute-0 nova_compute[117331]:       <target chassis="7" port="0x16"/>
Oct 09 16:17:08 compute-0 nova_compute[117331]:       <address type="pci" domain="0x0000" bus="0x00" slot="0x02" function="0x6"/>
Oct 09 16:17:08 compute-0 nova_compute[117331]:     </controller>
Oct 09 16:17:08 compute-0 nova_compute[117331]:     <controller type="pci" index="8" model="pcie-root-port">
Oct 09 16:17:08 compute-0 nova_compute[117331]:       <model name="pcie-root-port"/>
Oct 09 16:17:08 compute-0 nova_compute[117331]:       <target chassis="8" port="0x17"/>
Oct 09 16:17:08 compute-0 nova_compute[117331]:       <address type="pci" domain="0x0000" bus="0x00" slot="0x02" function="0x7"/>
Oct 09 16:17:08 compute-0 nova_compute[117331]:     </controller>
Oct 09 16:17:08 compute-0 nova_compute[117331]:     <controller type="pci" index="9" model="pcie-root-port">
Oct 09 16:17:08 compute-0 nova_compute[117331]:       <model name="pcie-root-port"/>
Oct 09 16:17:08 compute-0 nova_compute[117331]:       <target chassis="9" port="0x18"/>
Oct 09 16:17:08 compute-0 nova_compute[117331]:       <address type="pci" domain="0x0000" bus="0x00" slot="0x03" function="0x0" multifunction="on"/>
Oct 09 16:17:08 compute-0 nova_compute[117331]:     </controller>
Oct 09 16:17:08 compute-0 nova_compute[117331]:     <controller type="pci" index="10" model="pcie-root-port">
Oct 09 16:17:08 compute-0 nova_compute[117331]:       <model name="pcie-root-port"/>
Oct 09 16:17:08 compute-0 nova_compute[117331]:       <target chassis="10" port="0x19"/>
Oct 09 16:17:08 compute-0 nova_compute[117331]:       <address type="pci" domain="0x0000" bus="0x00" slot="0x03" function="0x1"/>
Oct 09 16:17:08 compute-0 nova_compute[117331]:     </controller>
Oct 09 16:17:08 compute-0 nova_compute[117331]:     <controller type="pci" index="11" model="pcie-root-port">
Oct 09 16:17:08 compute-0 nova_compute[117331]:       <model name="pcie-root-port"/>
Oct 09 16:17:08 compute-0 nova_compute[117331]:       <target chassis="11" port="0x1a"/>
Oct 09 16:17:08 compute-0 nova_compute[117331]:       <address type="pci" domain="0x0000" bus="0x00" slot="0x03" function="0x2"/>
Oct 09 16:17:08 compute-0 nova_compute[117331]:     </controller>
Oct 09 16:17:08 compute-0 nova_compute[117331]:     <controller type="pci" index="12" model="pcie-root-port">
Oct 09 16:17:08 compute-0 nova_compute[117331]:       <model name="pcie-root-port"/>
Oct 09 16:17:08 compute-0 nova_compute[117331]:       <target chassis="12" port="0x1b"/>
Oct 09 16:17:08 compute-0 nova_compute[117331]:       <address type="pci" domain="0x0000" bus="0x00" slot="0x03" function="0x3"/>
Oct 09 16:17:08 compute-0 nova_compute[117331]:     </controller>
Oct 09 16:17:08 compute-0 nova_compute[117331]:     <controller type="pci" index="13" model="pcie-root-port">
Oct 09 16:17:08 compute-0 nova_compute[117331]:       <model name="pcie-root-port"/>
Oct 09 16:17:08 compute-0 nova_compute[117331]:       <target chassis="13" port="0x1c"/>
Oct 09 16:17:08 compute-0 nova_compute[117331]:       <address type="pci" domain="0x0000" bus="0x00" slot="0x03" function="0x4"/>
Oct 09 16:17:08 compute-0 nova_compute[117331]:     </controller>
Oct 09 16:17:08 compute-0 nova_compute[117331]:     <controller type="pci" index="14" model="pcie-root-port">
Oct 09 16:17:08 compute-0 nova_compute[117331]:       <model name="pcie-root-port"/>
Oct 09 16:17:08 compute-0 nova_compute[117331]:       <target chassis="14" port="0x1d"/>
Oct 09 16:17:08 compute-0 nova_compute[117331]:       <address type="pci" domain="0x0000" bus="0x00" slot="0x03" function="0x5"/>
Oct 09 16:17:08 compute-0 nova_compute[117331]:     </controller>
Oct 09 16:17:08 compute-0 nova_compute[117331]:     <controller type="pci" index="15" model="pcie-root-port">
Oct 09 16:17:08 compute-0 nova_compute[117331]:       <model name="pcie-root-port"/>
Oct 09 16:17:08 compute-0 nova_compute[117331]:       <target chassis="15" port="0x1e"/>
Oct 09 16:17:08 compute-0 nova_compute[117331]:       <address type="pci" domain="0x0000" bus="0x00" slot="0x03" function="0x6"/>
Oct 09 16:17:08 compute-0 nova_compute[117331]:     </controller>
Oct 09 16:17:08 compute-0 nova_compute[117331]:     <controller type="pci" index="16" model="pcie-root-port">
Oct 09 16:17:08 compute-0 nova_compute[117331]:       <model name="pcie-root-port"/>
Oct 09 16:17:08 compute-0 nova_compute[117331]:       <target chassis="16" port="0x1f"/>
Oct 09 16:17:08 compute-0 nova_compute[117331]:       <address type="pci" domain="0x0000" bus="0x00" slot="0x03" function="0x7"/>
Oct 09 16:17:08 compute-0 nova_compute[117331]:     </controller>
Oct 09 16:17:08 compute-0 nova_compute[117331]:     <controller type="pci" index="17" model="pcie-root-port">
Oct 09 16:17:08 compute-0 nova_compute[117331]:       <model name="pcie-root-port"/>
Oct 09 16:17:08 compute-0 nova_compute[117331]:       <target chassis="17" port="0x20"/>
Oct 09 16:17:08 compute-0 nova_compute[117331]:       <address type="pci" domain="0x0000" bus="0x00" slot="0x04" function="0x0" multifunction="on"/>
Oct 09 16:17:08 compute-0 nova_compute[117331]:     </controller>
Oct 09 16:17:08 compute-0 nova_compute[117331]:     <controller type="pci" index="18" model="pcie-root-port">
Oct 09 16:17:08 compute-0 nova_compute[117331]:       <model name="pcie-root-port"/>
Oct 09 16:17:08 compute-0 nova_compute[117331]:       <target chassis="18" port="0x21"/>
Oct 09 16:17:08 compute-0 nova_compute[117331]:       <address type="pci" domain="0x0000" bus="0x00" slot="0x04" function="0x1"/>
Oct 09 16:17:08 compute-0 nova_compute[117331]:     </controller>
Oct 09 16:17:08 compute-0 nova_compute[117331]:     <controller type="pci" index="19" model="pcie-root-port">
Oct 09 16:17:08 compute-0 nova_compute[117331]:       <model name="pcie-root-port"/>
Oct 09 16:17:08 compute-0 nova_compute[117331]:       <target chassis="19" port="0x22"/>
Oct 09 16:17:08 compute-0 nova_compute[117331]:       <address type="pci" domain="0x0000" bus="0x00" slot="0x04" function="0x2"/>
Oct 09 16:17:08 compute-0 nova_compute[117331]:     </controller>
Oct 09 16:17:08 compute-0 nova_compute[117331]:     <controller type="pci" index="20" model="pcie-root-port">
Oct 09 16:17:08 compute-0 nova_compute[117331]:       <model name="pcie-root-port"/>
Oct 09 16:17:08 compute-0 nova_compute[117331]:       <target chassis="20" port="0x23"/>
Oct 09 16:17:08 compute-0 nova_compute[117331]:       <address type="pci" domain="0x0000" bus="0x00" slot="0x04" function="0x3"/>
Oct 09 16:17:08 compute-0 nova_compute[117331]:     </controller>
Oct 09 16:17:08 compute-0 nova_compute[117331]:     <controller type="pci" index="21" model="pcie-root-port">
Oct 09 16:17:08 compute-0 nova_compute[117331]:       <model name="pcie-root-port"/>
Oct 09 16:17:08 compute-0 nova_compute[117331]:       <target chassis="21" port="0x24"/>
Oct 09 16:17:08 compute-0 nova_compute[117331]:       <address type="pci" domain="0x0000" bus="0x00" slot="0x04" function="0x4"/>
Oct 09 16:17:08 compute-0 nova_compute[117331]:     </controller>
Oct 09 16:17:08 compute-0 nova_compute[117331]:     <controller type="pci" index="22" model="pcie-root-port">
Oct 09 16:17:08 compute-0 nova_compute[117331]:       <model name="pcie-root-port"/>
Oct 09 16:17:08 compute-0 nova_compute[117331]:       <target chassis="22" port="0x25"/>
Oct 09 16:17:08 compute-0 nova_compute[117331]:       <address type="pci" domain="0x0000" bus="0x00" slot="0x04" function="0x5"/>
Oct 09 16:17:08 compute-0 nova_compute[117331]:     </controller>
Oct 09 16:17:08 compute-0 nova_compute[117331]:     <controller type="pci" index="23" model="pcie-root-port">
Oct 09 16:17:08 compute-0 nova_compute[117331]:       <model name="pcie-root-port"/>
Oct 09 16:17:08 compute-0 nova_compute[117331]:       <target chassis="23" port="0x26"/>
Oct 09 16:17:08 compute-0 nova_compute[117331]:       <address type="pci" domain="0x0000" bus="0x00" slot="0x04" function="0x6"/>
Oct 09 16:17:08 compute-0 nova_compute[117331]:     </controller>
Oct 09 16:17:08 compute-0 nova_compute[117331]:     <controller type="pci" index="24" model="pcie-root-port">
Oct 09 16:17:08 compute-0 nova_compute[117331]:       <model name="pcie-root-port"/>
Oct 09 16:17:08 compute-0 nova_compute[117331]:       <target chassis="24" port="0x27"/>
Oct 09 16:17:08 compute-0 nova_compute[117331]:       <address type="pci" domain="0x0000" bus="0x00" slot="0x04" function="0x7"/>
Oct 09 16:17:08 compute-0 nova_compute[117331]:     </controller>
Oct 09 16:17:08 compute-0 nova_compute[117331]:     <controller type="pci" index="25" model="pcie-root-port">
Oct 09 16:17:08 compute-0 nova_compute[117331]:       <model name="pcie-root-port"/>
Oct 09 16:17:08 compute-0 nova_compute[117331]:       <target chassis="25" port="0x28"/>
Oct 09 16:17:08 compute-0 nova_compute[117331]:       <address type="pci" domain="0x0000" bus="0x00" slot="0x05" function="0x0"/>
Oct 09 16:17:08 compute-0 nova_compute[117331]:     </controller>
Oct 09 16:17:08 compute-0 nova_compute[117331]:     <controller type="pci" index="26" model="pcie-to-pci-bridge">
Oct 09 16:17:08 compute-0 nova_compute[117331]:       <model name="pcie-pci-bridge"/>
Oct 09 16:17:08 compute-0 nova_compute[117331]:       <address type="pci" domain="0x0000" bus="0x01" slot="0x00" function="0x0"/>
Oct 09 16:17:08 compute-0 nova_compute[117331]:     </controller>
Oct 09 16:17:08 compute-0 nova_compute[117331]:     <controller type="usb" index="0" model="piix3-uhci">
Oct 09 16:17:08 compute-0 nova_compute[117331]:       <address type="pci" domain="0x0000" bus="0x1a" slot="0x01" function="0x0"/>
Oct 09 16:17:08 compute-0 nova_compute[117331]:     </controller>
Oct 09 16:17:08 compute-0 nova_compute[117331]:     <controller type="sata" index="0">
Oct 09 16:17:08 compute-0 nova_compute[117331]:       <address type="pci" domain="0x0000" bus="0x00" slot="0x1f" function="0x2"/>
Oct 09 16:17:08 compute-0 nova_compute[117331]:     </controller>
Oct 09 16:17:08 compute-0 nova_compute[117331]:     <interface type="ethernet">
Oct 09 16:17:08 compute-0 nova_compute[117331]:       <mac address="fa:16:3e:c9:bd:65"/>
Oct 09 16:17:08 compute-0 nova_compute[117331]:       <model type="virtio"/>
Oct 09 16:17:08 compute-0 nova_compute[117331]:       <driver name="vhost" rx_queue_size="512"/>
Oct 09 16:17:08 compute-0 nova_compute[117331]:       <mtu size="1442"/>
Oct 09 16:17:08 compute-0 nova_compute[117331]:       <target dev="tapa5c38ba6-3e"/>
Oct 09 16:17:08 compute-0 nova_compute[117331]:       <address type="pci" domain="0x0000" bus="0x02" slot="0x00" function="0x0"/>
Oct 09 16:17:08 compute-0 nova_compute[117331]:     </interface>
Oct 09 16:17:08 compute-0 nova_compute[117331]:     <serial type="pty">
Oct 09 16:17:08 compute-0 nova_compute[117331]:       <log file="/var/lib/nova/instances/70a451f7-5bde-42f1-a448-d48b3b24d9d6/console.log" append="off"/>
Oct 09 16:17:08 compute-0 nova_compute[117331]:       <target type="isa-serial" port="0">
Oct 09 16:17:08 compute-0 nova_compute[117331]:         <model name="isa-serial"/>
Oct 09 16:17:08 compute-0 nova_compute[117331]:       </target>
Oct 09 16:17:08 compute-0 nova_compute[117331]:     </serial>
Oct 09 16:17:08 compute-0 nova_compute[117331]:     <console type="pty">
Oct 09 16:17:08 compute-0 nova_compute[117331]:       <log file="/var/lib/nova/instances/70a451f7-5bde-42f1-a448-d48b3b24d9d6/console.log" append="off"/>
Oct 09 16:17:08 compute-0 nova_compute[117331]:       <target type="serial" port="0"/>
Oct 09 16:17:08 compute-0 nova_compute[117331]:     </console>
Oct 09 16:17:08 compute-0 nova_compute[117331]:     <input type="tablet" bus="usb">
Oct 09 16:17:08 compute-0 nova_compute[117331]:       <address type="usb" bus="0" port="1"/>
Oct 09 16:17:08 compute-0 nova_compute[117331]:     </input>
Oct 09 16:17:08 compute-0 nova_compute[117331]:     <input type="mouse" bus="ps2"/>
Oct 09 16:17:08 compute-0 nova_compute[117331]:     <graphics type="vnc" port="-1" autoport="yes" listen="::">
Oct 09 16:17:08 compute-0 nova_compute[117331]:       <listen type="address" address="::"/>
Oct 09 16:17:08 compute-0 nova_compute[117331]:     </graphics>
Oct 09 16:17:08 compute-0 nova_compute[117331]:     <video>
Oct 09 16:17:08 compute-0 nova_compute[117331]:       <model type="virtio" heads="1" primary="yes"/>
Oct 09 16:17:08 compute-0 nova_compute[117331]:       <address type="pci" domain="0x0000" bus="0x00" slot="0x01" function="0x0"/>
Oct 09 16:17:08 compute-0 nova_compute[117331]:     </video>
Oct 09 16:17:08 compute-0 nova_compute[117331]:     <memballoon model="virtio" autodeflate="on" freePageReporting="on">
Oct 09 16:17:08 compute-0 nova_compute[117331]:       <stats period="10"/>
Oct 09 16:17:08 compute-0 nova_compute[117331]:       <address type="pci" domain="0x0000" bus="0x04" slot="0x00" function="0x0"/>
Oct 09 16:17:08 compute-0 nova_compute[117331]:     </memballoon>
Oct 09 16:17:08 compute-0 nova_compute[117331]:     <rng model="virtio">
Oct 09 16:17:08 compute-0 nova_compute[117331]:       <backend model="random">/dev/urandom</backend>
Oct 09 16:17:08 compute-0 nova_compute[117331]:       <address type="pci" domain="0x0000" bus="0x05" slot="0x00" function="0x0"/>
Oct 09 16:17:08 compute-0 nova_compute[117331]:     </rng>
Oct 09 16:17:08 compute-0 nova_compute[117331]:   </devices>
Oct 09 16:17:08 compute-0 nova_compute[117331]:   <seclabel type="dynamic" model="selinux" relabel="yes"/>
Oct 09 16:17:08 compute-0 nova_compute[117331]: </domain>
Oct 09 16:17:08 compute-0 nova_compute[117331]:  _update_pci_dev_xml /usr/lib/python3.12/site-packages/nova/virt/libvirt/migration.py:166
Oct 09 16:17:08 compute-0 nova_compute[117331]: 2025-10-09 16:17:08.577 2 DEBUG nova.virt.libvirt.driver [None req-f2c8be06-10a6-4d4e-83ce-15f398bd0cd3 005258eefa5345409efe13f6a1de22ab c076c42271004e7aa86e84416e33f826 - - default default] [instance: 70a451f7-5bde-42f1-a448-d48b3b24d9d6] About to invoke the migrate API _live_migration_operation /usr/lib/python3.12/site-packages/nova/virt/libvirt/driver.py:11175
Oct 09 16:17:08 compute-0 unix_chkpwd[143846]: password check failed for user (root)
Oct 09 16:17:08 compute-0 sshd-session[143844]: pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=134.199.199.215  user=root
Oct 09 16:17:09 compute-0 nova_compute[117331]: 2025-10-09 16:17:09.067 2 DEBUG nova.virt.libvirt.migration [None req-f2c8be06-10a6-4d4e-83ce-15f398bd0cd3 005258eefa5345409efe13f6a1de22ab c076c42271004e7aa86e84416e33f826 - - default default] [instance: 70a451f7-5bde-42f1-a448-d48b3b24d9d6] Current None elapsed 1 steps [(0, 50), (300, 95), (600, 140), (900, 185), (1200, 230), (1500, 275), (1800, 320), (2100, 365), (2400, 410), (2700, 455), (3000, 500)] update_downtime /usr/lib/python3.12/site-packages/nova/virt/libvirt/migration.py:658
Oct 09 16:17:09 compute-0 nova_compute[117331]: 2025-10-09 16:17:09.068 2 INFO nova.virt.libvirt.migration [None req-f2c8be06-10a6-4d4e-83ce-15f398bd0cd3 005258eefa5345409efe13f6a1de22ab c076c42271004e7aa86e84416e33f826 - - default default] [instance: 70a451f7-5bde-42f1-a448-d48b3b24d9d6] Increasing downtime to 50 ms after 0 sec elapsed time
Oct 09 16:17:09 compute-0 nova_compute[117331]: 2025-10-09 16:17:09.514 2 DEBUG nova.compute.manager [req-be30f6b7-67e0-453e-811a-95ef11e3cce4 req-9ad8f471-afdf-441b-86a0-8d802df048db ee15961ad2be4bada6c106ac4f69f422 c076c42271004e7aa86e84416e33f826 - - default default] [instance: 12702257-b2eb-4842-bdf8-25e7a3b20038] Received event network-changed-9ac52a72-3ea9-4d92-8114-f77f5fe13293 external_instance_event /usr/lib/python3.12/site-packages/nova/compute/manager.py:11812
Oct 09 16:17:09 compute-0 nova_compute[117331]: 2025-10-09 16:17:09.514 2 DEBUG nova.compute.manager [req-be30f6b7-67e0-453e-811a-95ef11e3cce4 req-9ad8f471-afdf-441b-86a0-8d802df048db ee15961ad2be4bada6c106ac4f69f422 c076c42271004e7aa86e84416e33f826 - - default default] [instance: 12702257-b2eb-4842-bdf8-25e7a3b20038] Refreshing instance network info cache due to event network-changed-9ac52a72-3ea9-4d92-8114-f77f5fe13293. external_instance_event /usr/lib/python3.12/site-packages/nova/compute/manager.py:11817
Oct 09 16:17:09 compute-0 nova_compute[117331]: 2025-10-09 16:17:09.514 2 DEBUG oslo_concurrency.lockutils [req-be30f6b7-67e0-453e-811a-95ef11e3cce4 req-9ad8f471-afdf-441b-86a0-8d802df048db ee15961ad2be4bada6c106ac4f69f422 c076c42271004e7aa86e84416e33f826 - - default default] Acquiring lock "refresh_cache-12702257-b2eb-4842-bdf8-25e7a3b20038" lock /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:313
Oct 09 16:17:09 compute-0 nova_compute[117331]: 2025-10-09 16:17:09.514 2 DEBUG oslo_concurrency.lockutils [req-be30f6b7-67e0-453e-811a-95ef11e3cce4 req-9ad8f471-afdf-441b-86a0-8d802df048db ee15961ad2be4bada6c106ac4f69f422 c076c42271004e7aa86e84416e33f826 - - default default] Acquired lock "refresh_cache-12702257-b2eb-4842-bdf8-25e7a3b20038" lock /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:316
Oct 09 16:17:09 compute-0 nova_compute[117331]: 2025-10-09 16:17:09.515 2 DEBUG nova.network.neutron [req-be30f6b7-67e0-453e-811a-95ef11e3cce4 req-9ad8f471-afdf-441b-86a0-8d802df048db ee15961ad2be4bada6c106ac4f69f422 c076c42271004e7aa86e84416e33f826 - - default default] [instance: 12702257-b2eb-4842-bdf8-25e7a3b20038] Refreshing network info cache for port 9ac52a72-3ea9-4d92-8114-f77f5fe13293 _get_instance_nw_info /usr/lib/python3.12/site-packages/nova/network/neutron.py:2067
Oct 09 16:17:09 compute-0 nova_compute[117331]: 2025-10-09 16:17:09.810 2 DEBUG oslo_service.periodic_task [None req-92a13bed-c081-4c7b-87f3-bafebefb79f2 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.12/site-packages/oslo_service/periodic_task.py:210
Oct 09 16:17:09 compute-0 sshd-session[143825]: Connection closed by invalid user zabbix 134.199.199.215 port 41492 [preauth]
Oct 09 16:17:10 compute-0 nova_compute[117331]: 2025-10-09 16:17:10.023 2 WARNING neutronclient.v2_0.client [req-be30f6b7-67e0-453e-811a-95ef11e3cce4 req-9ad8f471-afdf-441b-86a0-8d802df048db ee15961ad2be4bada6c106ac4f69f422 c076c42271004e7aa86e84416e33f826 - - default default] The python binding code in neutronclient is deprecated in favor of OpenstackSDK, please use that as this will be removed in a future release.
Oct 09 16:17:10 compute-0 nova_compute[117331]: 2025-10-09 16:17:10.087 2 INFO nova.virt.libvirt.driver [None req-f2c8be06-10a6-4d4e-83ce-15f398bd0cd3 005258eefa5345409efe13f6a1de22ab c076c42271004e7aa86e84416e33f826 - - default default] [instance: 70a451f7-5bde-42f1-a448-d48b3b24d9d6] Migration running for 1 secs, memory 100% remaining (bytes processed=0, remaining=0, total=0); disk 100% remaining (bytes processed=0, remaining=0, total=0).
Oct 09 16:17:10 compute-0 nova_compute[117331]: 2025-10-09 16:17:10.306 2 DEBUG oslo_service.periodic_task [None req-92a13bed-c081-4c7b-87f3-bafebefb79f2 - - - - - -] Running periodic task ComputeManager._cleanup_incomplete_migrations run_periodic_tasks /usr/lib/python3.12/site-packages/oslo_service/periodic_task.py:210
Oct 09 16:17:10 compute-0 nova_compute[117331]: 2025-10-09 16:17:10.306 2 DEBUG nova.compute.manager [None req-92a13bed-c081-4c7b-87f3-bafebefb79f2 - - - - - -] Cleaning up deleted instances with incomplete migration  _cleanup_incomplete_migrations /usr/lib/python3.12/site-packages/nova/compute/manager.py:11947
Oct 09 16:17:10 compute-0 nova_compute[117331]: 2025-10-09 16:17:10.591 2 DEBUG nova.virt.libvirt.migration [None req-f2c8be06-10a6-4d4e-83ce-15f398bd0cd3 005258eefa5345409efe13f6a1de22ab c076c42271004e7aa86e84416e33f826 - - default default] [instance: 70a451f7-5bde-42f1-a448-d48b3b24d9d6] Current 50 elapsed 2 steps [(0, 50), (300, 95), (600, 140), (900, 185), (1200, 230), (1500, 275), (1800, 320), (2100, 365), (2400, 410), (2700, 455), (3000, 500)] update_downtime /usr/lib/python3.12/site-packages/nova/virt/libvirt/migration.py:658
Oct 09 16:17:10 compute-0 nova_compute[117331]: 2025-10-09 16:17:10.592 2 DEBUG nova.virt.libvirt.migration [None req-f2c8be06-10a6-4d4e-83ce-15f398bd0cd3 005258eefa5345409efe13f6a1de22ab c076c42271004e7aa86e84416e33f826 - - default default] [instance: 70a451f7-5bde-42f1-a448-d48b3b24d9d6] Downtime does not need to change update_downtime /usr/lib/python3.12/site-packages/nova/virt/libvirt/migration.py:671
Oct 09 16:17:10 compute-0 nova_compute[117331]: 2025-10-09 16:17:10.630 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 09 16:17:10 compute-0 nova_compute[117331]: 2025-10-09 16:17:10.644 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 09 16:17:10 compute-0 podman[143854]: 2025-10-09 16:17:10.843452164 +0000 UTC m=+0.071976589 container health_status 64fa251112d01abb34ec1bb8e22019815ddcaaaa391ce5ec91517147ac5a9994 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, distribution-scope=public, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, vendor=Red Hat, Inc., summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., release=1755695350, vcs-type=git, io.openshift.tags=minimal rhel9, config_id=edpm, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., url=https://catalog.redhat.com/en/search?searchType=containers, build-date=2025-08-20T13:12:41, io.openshift.expose-services=, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, maintainer=Red Hat, Inc., io.buildah.version=1.33.7, managed_by=edpm_ansible, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, architecture=x86_64, container_name=openstack_network_exporter, name=ubi9-minimal, version=9.6, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., com.redhat.component=ubi9-minimal-container)
Oct 09 16:17:10 compute-0 kernel: tapa5c38ba6-3e (unregistering): left promiscuous mode
Oct 09 16:17:10 compute-0 NetworkManager[1028]: <info>  [1760026630.9448] device (tapa5c38ba6-3e): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Oct 09 16:17:10 compute-0 nova_compute[117331]: 2025-10-09 16:17:10.956 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 09 16:17:10 compute-0 ovn_controller[19752]: 2025-10-09T16:17:10Z|00098|binding|INFO|Releasing lport a5c38ba6-3efb-4090-a42b-1dd250959041 from this chassis (sb_readonly=0)
Oct 09 16:17:10 compute-0 ovn_controller[19752]: 2025-10-09T16:17:10Z|00099|binding|INFO|Setting lport a5c38ba6-3efb-4090-a42b-1dd250959041 down in Southbound
Oct 09 16:17:10 compute-0 ovn_controller[19752]: 2025-10-09T16:17:10Z|00100|binding|INFO|Removing iface tapa5c38ba6-3e ovn-installed in OVS
Oct 09 16:17:10 compute-0 nova_compute[117331]: 2025-10-09 16:17:10.960 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 09 16:17:10 compute-0 ovn_metadata_agent[28608]: 2025-10-09 16:17:10.965 28613 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:c9:bd:65 10.100.0.6'], port_security=['fa:16:3e:c9:bd:65 10.100.0.6'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com,compute-1.ctlplane.example.com', 'activation-strategy': 'rarp', 'additional-chassis-activated': '2bd8bf21-1f6b-42c9-9656-9a72fa8dcbf3'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.6/28', 'neutron:device_id': '70a451f7-5bde-42f1-a448-d48b3b24d9d6', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-7ed4244a-b510-4df6-9ffd-2f86603932fc', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'be4e2f9059cd48f5b44a612256e3fc7b', 'neutron:revision_number': '10', 'neutron:security_group_ids': '92d94554-4438-4b2d-9e48-eb178205b5a1', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=8754009b-0d8f-4d41-af4c-a82e7b57a4f6, chassis=[], tunnel_key=5, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f37b2576840>], logical_port=a5c38ba6-3efb-4090-a42b-1dd250959041) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f37b2576840>]) matches /usr/lib/python3.12/site-packages/ovsdbapp/backend/ovs_idl/event.py:55
Oct 09 16:17:10 compute-0 ovn_metadata_agent[28608]: 2025-10-09 16:17:10.966 28613 INFO neutron.agent.ovn.metadata.agent [-] Port a5c38ba6-3efb-4090-a42b-1dd250959041 in datapath 7ed4244a-b510-4df6-9ffd-2f86603932fc unbound from our chassis
Oct 09 16:17:10 compute-0 ovn_metadata_agent[28608]: 2025-10-09 16:17:10.967 28613 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 7ed4244a-b510-4df6-9ffd-2f86603932fc, tearing the namespace down if needed _get_provision_params /usr/lib/python3.12/site-packages/neutron/agent/ovn/metadata/agent.py:756
Oct 09 16:17:10 compute-0 ovn_metadata_agent[28608]: 2025-10-09 16:17:10.968 139687 DEBUG oslo.privsep.daemon [-] privsep: reply[48e7cf35-0831-4763-a8c0-3651dc9e0ab6]: (4, True) _call_back /usr/lib/python3.12/site-packages/oslo_privsep/daemon.py:515
Oct 09 16:17:10 compute-0 ovn_metadata_agent[28608]: 2025-10-09 16:17:10.969 28613 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-7ed4244a-b510-4df6-9ffd-2f86603932fc namespace which is not needed anymore
Oct 09 16:17:10 compute-0 nova_compute[117331]: 2025-10-09 16:17:10.975 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 09 16:17:11 compute-0 systemd[1]: machine-qemu\x2d4\x2dinstance\x2d00000006.scope: Deactivated successfully.
Oct 09 16:17:11 compute-0 systemd[1]: machine-qemu\x2d4\x2dinstance\x2d00000006.scope: Consumed 16.100s CPU time.
Oct 09 16:17:11 compute-0 systemd-machined[77487]: Machine qemu-4-instance-00000006 terminated.
Oct 09 16:17:11 compute-0 neutron-haproxy-ovnmeta-7ed4244a-b510-4df6-9ffd-2f86603932fc[143245]: [NOTICE]   (143249) : haproxy version is 3.0.5-8e879a5
Oct 09 16:17:11 compute-0 neutron-haproxy-ovnmeta-7ed4244a-b510-4df6-9ffd-2f86603932fc[143245]: [NOTICE]   (143249) : path to executable is /usr/sbin/haproxy
Oct 09 16:17:11 compute-0 neutron-haproxy-ovnmeta-7ed4244a-b510-4df6-9ffd-2f86603932fc[143245]: [WARNING]  (143249) : Exiting Master process...
Oct 09 16:17:11 compute-0 podman[143901]: 2025-10-09 16:17:11.081301973 +0000 UTC m=+0.028317381 container kill a06e658ab15bf037bd9c283806f8b85dfc8e0f2a11eb32e297b405fb3a83fde5 (image=38.102.83.66:5001/podified-master-centos10/openstack-neutron-metadata-agent-ovn:watcher_latest, name=neutron-haproxy-ovnmeta-7ed4244a-b510-4df6-9ffd-2f86603932fc, io.buildah.version=1.41.4, org.label-schema.build-date=20251007, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=watcher_latest, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.license=GPLv2)
Oct 09 16:17:11 compute-0 neutron-haproxy-ovnmeta-7ed4244a-b510-4df6-9ffd-2f86603932fc[143245]: [ALERT]    (143249) : Current worker (143251) exited with code 143 (Terminated)
Oct 09 16:17:11 compute-0 neutron-haproxy-ovnmeta-7ed4244a-b510-4df6-9ffd-2f86603932fc[143245]: [WARNING]  (143249) : All workers exited. Exiting... (0)
Oct 09 16:17:11 compute-0 systemd[1]: libpod-a06e658ab15bf037bd9c283806f8b85dfc8e0f2a11eb32e297b405fb3a83fde5.scope: Deactivated successfully.
Oct 09 16:17:11 compute-0 conmon[143245]: conmon a06e658ab15bf037bd9c <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-a06e658ab15bf037bd9c283806f8b85dfc8e0f2a11eb32e297b405fb3a83fde5.scope/container/memory.events
Oct 09 16:17:11 compute-0 nova_compute[117331]: 2025-10-09 16:17:11.129 2 DEBUG nova.compute.manager [req-17237f8c-ef1f-4acd-bbc0-aecd0fb43645 req-bd80652d-e20d-4e3c-84cc-dd13b5e5b344 ee15961ad2be4bada6c106ac4f69f422 c076c42271004e7aa86e84416e33f826 - - default default] [instance: 70a451f7-5bde-42f1-a448-d48b3b24d9d6] Received event network-vif-unplugged-a5c38ba6-3efb-4090-a42b-1dd250959041 external_instance_event /usr/lib/python3.12/site-packages/nova/compute/manager.py:11812
Oct 09 16:17:11 compute-0 nova_compute[117331]: 2025-10-09 16:17:11.129 2 DEBUG oslo_concurrency.lockutils [req-17237f8c-ef1f-4acd-bbc0-aecd0fb43645 req-bd80652d-e20d-4e3c-84cc-dd13b5e5b344 ee15961ad2be4bada6c106ac4f69f422 c076c42271004e7aa86e84416e33f826 - - default default] Acquiring lock "70a451f7-5bde-42f1-a448-d48b3b24d9d6-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:405
Oct 09 16:17:11 compute-0 nova_compute[117331]: 2025-10-09 16:17:11.129 2 DEBUG oslo_concurrency.lockutils [req-17237f8c-ef1f-4acd-bbc0-aecd0fb43645 req-bd80652d-e20d-4e3c-84cc-dd13b5e5b344 ee15961ad2be4bada6c106ac4f69f422 c076c42271004e7aa86e84416e33f826 - - default default] Lock "70a451f7-5bde-42f1-a448-d48b3b24d9d6-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:410
Oct 09 16:17:11 compute-0 nova_compute[117331]: 2025-10-09 16:17:11.130 2 DEBUG oslo_concurrency.lockutils [req-17237f8c-ef1f-4acd-bbc0-aecd0fb43645 req-bd80652d-e20d-4e3c-84cc-dd13b5e5b344 ee15961ad2be4bada6c106ac4f69f422 c076c42271004e7aa86e84416e33f826 - - default default] Lock "70a451f7-5bde-42f1-a448-d48b3b24d9d6-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:424
Oct 09 16:17:11 compute-0 nova_compute[117331]: 2025-10-09 16:17:11.130 2 DEBUG nova.compute.manager [req-17237f8c-ef1f-4acd-bbc0-aecd0fb43645 req-bd80652d-e20d-4e3c-84cc-dd13b5e5b344 ee15961ad2be4bada6c106ac4f69f422 c076c42271004e7aa86e84416e33f826 - - default default] [instance: 70a451f7-5bde-42f1-a448-d48b3b24d9d6] No waiting events found dispatching network-vif-unplugged-a5c38ba6-3efb-4090-a42b-1dd250959041 pop_instance_event /usr/lib/python3.12/site-packages/nova/compute/manager.py:344
Oct 09 16:17:11 compute-0 nova_compute[117331]: 2025-10-09 16:17:11.130 2 DEBUG nova.compute.manager [req-17237f8c-ef1f-4acd-bbc0-aecd0fb43645 req-bd80652d-e20d-4e3c-84cc-dd13b5e5b344 ee15961ad2be4bada6c106ac4f69f422 c076c42271004e7aa86e84416e33f826 - - default default] [instance: 70a451f7-5bde-42f1-a448-d48b3b24d9d6] Received event network-vif-unplugged-a5c38ba6-3efb-4090-a42b-1dd250959041 for instance with task_state migrating. _process_instance_event /usr/lib/python3.12/site-packages/nova/compute/manager.py:11590
Oct 09 16:17:11 compute-0 podman[143917]: 2025-10-09 16:17:11.128600296 +0000 UTC m=+0.023534439 container died a06e658ab15bf037bd9c283806f8b85dfc8e0f2a11eb32e297b405fb3a83fde5 (image=38.102.83.66:5001/podified-master-centos10/openstack-neutron-metadata-agent-ovn:watcher_latest, name=neutron-haproxy-ovnmeta-7ed4244a-b510-4df6-9ffd-2f86603932fc, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251007, org.label-schema.schema-version=1.0, tcib_build_tag=watcher_latest, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.41.4)
Oct 09 16:17:11 compute-0 nova_compute[117331]: 2025-10-09 16:17:11.145 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 09 16:17:11 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-a06e658ab15bf037bd9c283806f8b85dfc8e0f2a11eb32e297b405fb3a83fde5-userdata-shm.mount: Deactivated successfully.
Oct 09 16:17:11 compute-0 systemd[1]: var-lib-containers-storage-overlay-04403df32ea53d49b05f96acc6f4a749bd5b2d8a3109edac594f245aa6cc0646-merged.mount: Deactivated successfully.
Oct 09 16:17:11 compute-0 nova_compute[117331]: 2025-10-09 16:17:11.180 2 DEBUG nova.virt.libvirt.guest [None req-f2c8be06-10a6-4d4e-83ce-15f398bd0cd3 005258eefa5345409efe13f6a1de22ab c076c42271004e7aa86e84416e33f826 - - default default] Domain has shutdown/gone away: Requested operation is not valid: domain is not running get_job_info /usr/lib/python3.12/site-packages/nova/virt/libvirt/guest.py:687
Oct 09 16:17:11 compute-0 nova_compute[117331]: 2025-10-09 16:17:11.181 2 INFO nova.virt.libvirt.driver [None req-f2c8be06-10a6-4d4e-83ce-15f398bd0cd3 005258eefa5345409efe13f6a1de22ab c076c42271004e7aa86e84416e33f826 - - default default] [instance: 70a451f7-5bde-42f1-a448-d48b3b24d9d6] Migration operation has completed
Oct 09 16:17:11 compute-0 nova_compute[117331]: 2025-10-09 16:17:11.182 2 INFO nova.compute.manager [None req-f2c8be06-10a6-4d4e-83ce-15f398bd0cd3 005258eefa5345409efe13f6a1de22ab c076c42271004e7aa86e84416e33f826 - - default default] [instance: 70a451f7-5bde-42f1-a448-d48b3b24d9d6] _post_live_migration() is started..
Oct 09 16:17:11 compute-0 nova_compute[117331]: 2025-10-09 16:17:11.184 2 DEBUG nova.virt.libvirt.driver [None req-f2c8be06-10a6-4d4e-83ce-15f398bd0cd3 005258eefa5345409efe13f6a1de22ab c076c42271004e7aa86e84416e33f826 - - default default] [instance: 70a451f7-5bde-42f1-a448-d48b3b24d9d6] Migrate API has completed _live_migration_operation /usr/lib/python3.12/site-packages/nova/virt/libvirt/driver.py:11182
Oct 09 16:17:11 compute-0 nova_compute[117331]: 2025-10-09 16:17:11.184 2 DEBUG nova.virt.libvirt.driver [None req-f2c8be06-10a6-4d4e-83ce-15f398bd0cd3 005258eefa5345409efe13f6a1de22ab c076c42271004e7aa86e84416e33f826 - - default default] [instance: 70a451f7-5bde-42f1-a448-d48b3b24d9d6] Migration operation thread has finished _live_migration_operation /usr/lib/python3.12/site-packages/nova/virt/libvirt/driver.py:11230
Oct 09 16:17:11 compute-0 nova_compute[117331]: 2025-10-09 16:17:11.184 2 DEBUG nova.virt.libvirt.driver [None req-f2c8be06-10a6-4d4e-83ce-15f398bd0cd3 005258eefa5345409efe13f6a1de22ab c076c42271004e7aa86e84416e33f826 - - default default] [instance: 70a451f7-5bde-42f1-a448-d48b3b24d9d6] Migration operation thread notification thread_finished /usr/lib/python3.12/site-packages/nova/virt/libvirt/driver.py:11533
Oct 09 16:17:11 compute-0 nova_compute[117331]: 2025-10-09 16:17:11.195 2 WARNING neutronclient.v2_0.client [None req-f2c8be06-10a6-4d4e-83ce-15f398bd0cd3 005258eefa5345409efe13f6a1de22ab c076c42271004e7aa86e84416e33f826 - - default default] The python binding code in neutronclient is deprecated in favor of OpenstackSDK, please use that as this will be removed in a future release.
Oct 09 16:17:11 compute-0 nova_compute[117331]: 2025-10-09 16:17:11.195 2 WARNING neutronclient.v2_0.client [None req-f2c8be06-10a6-4d4e-83ce-15f398bd0cd3 005258eefa5345409efe13f6a1de22ab c076c42271004e7aa86e84416e33f826 - - default default] The python binding code in neutronclient is deprecated in favor of OpenstackSDK, please use that as this will be removed in a future release.
Oct 09 16:17:11 compute-0 podman[143917]: 2025-10-09 16:17:11.198723255 +0000 UTC m=+0.093657378 container cleanup a06e658ab15bf037bd9c283806f8b85dfc8e0f2a11eb32e297b405fb3a83fde5 (image=38.102.83.66:5001/podified-master-centos10/openstack-neutron-metadata-agent-ovn:watcher_latest, name=neutron-haproxy-ovnmeta-7ed4244a-b510-4df6-9ffd-2f86603932fc, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.4, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 10 Base Image, tcib_build_tag=watcher_latest, tcib_managed=true, org.label-schema.build-date=20251007)
Oct 09 16:17:11 compute-0 systemd[1]: libpod-conmon-a06e658ab15bf037bd9c283806f8b85dfc8e0f2a11eb32e297b405fb3a83fde5.scope: Deactivated successfully.
Oct 09 16:17:11 compute-0 podman[143918]: 2025-10-09 16:17:11.217840352 +0000 UTC m=+0.108911652 container remove a06e658ab15bf037bd9c283806f8b85dfc8e0f2a11eb32e297b405fb3a83fde5 (image=38.102.83.66:5001/podified-master-centos10/openstack-neutron-metadata-agent-ovn:watcher_latest, name=neutron-haproxy-ovnmeta-7ed4244a-b510-4df6-9ffd-2f86603932fc, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251007, tcib_build_tag=watcher_latest, io.buildah.version=1.41.4, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.license=GPLv2)
Oct 09 16:17:11 compute-0 ovn_metadata_agent[28608]: 2025-10-09 16:17:11.240 139687 DEBUG oslo.privsep.daemon [-] privsep: reply[a9ef8351-e342-453f-be04-cf3671a86b7e]: (4, ("Thu Oct  9 04:17:11 PM UTC 2025 Sending signal '15' to neutron-haproxy-ovnmeta-7ed4244a-b510-4df6-9ffd-2f86603932fc (a06e658ab15bf037bd9c283806f8b85dfc8e0f2a11eb32e297b405fb3a83fde5)\na06e658ab15bf037bd9c283806f8b85dfc8e0f2a11eb32e297b405fb3a83fde5\nThu Oct  9 04:17:11 PM UTC 2025 Deleting container neutron-haproxy-ovnmeta-7ed4244a-b510-4df6-9ffd-2f86603932fc (a06e658ab15bf037bd9c283806f8b85dfc8e0f2a11eb32e297b405fb3a83fde5)\na06e658ab15bf037bd9c283806f8b85dfc8e0f2a11eb32e297b405fb3a83fde5\n", '', 0)) _call_back /usr/lib/python3.12/site-packages/oslo_privsep/daemon.py:515
Oct 09 16:17:11 compute-0 ovn_metadata_agent[28608]: 2025-10-09 16:17:11.242 139687 DEBUG oslo.privsep.daemon [-] privsep: reply[122e7ff3-5c92-4f82-8615-0b5d5ca7faf3]: (4, None) _call_back /usr/lib/python3.12/site-packages/oslo_privsep/daemon.py:515
Oct 09 16:17:11 compute-0 ovn_metadata_agent[28608]: 2025-10-09 16:17:11.243 28613 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/7ed4244a-b510-4df6-9ffd-2f86603932fc.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/7ed4244a-b510-4df6-9ffd-2f86603932fc.pid.haproxy' get_value_from_file /usr/lib/python3.12/site-packages/neutron/agent/linux/utils.py:268
Oct 09 16:17:11 compute-0 ovn_metadata_agent[28608]: 2025-10-09 16:17:11.243 139687 DEBUG oslo.privsep.daemon [-] privsep: reply[e51e0743-0131-43ca-af23-132f593ea478]: (4, None) _call_back /usr/lib/python3.12/site-packages/oslo_privsep/daemon.py:515
Oct 09 16:17:11 compute-0 ovn_metadata_agent[28608]: 2025-10-09 16:17:11.244 28613 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap7ed4244a-b0, bridge=None, if_exists=True) do_commit /usr/lib/python3.12/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 09 16:17:11 compute-0 nova_compute[117331]: 2025-10-09 16:17:11.247 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 09 16:17:11 compute-0 kernel: tap7ed4244a-b0: left promiscuous mode
Oct 09 16:17:11 compute-0 nova_compute[117331]: 2025-10-09 16:17:11.268 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 09 16:17:11 compute-0 ovn_metadata_agent[28608]: 2025-10-09 16:17:11.271 139687 DEBUG oslo.privsep.daemon [-] privsep: reply[2426ff06-3a3a-48a5-933a-d55bf165c9cb]: (4, True) _call_back /usr/lib/python3.12/site-packages/oslo_privsep/daemon.py:515
Oct 09 16:17:11 compute-0 ovn_metadata_agent[28608]: 2025-10-09 16:17:11.307 139687 DEBUG oslo.privsep.daemon [-] privsep: reply[3a6d4854-a0d2-4667-9769-067a99a5ae79]: (4, None) _call_back /usr/lib/python3.12/site-packages/oslo_privsep/daemon.py:515
Oct 09 16:17:11 compute-0 ovn_metadata_agent[28608]: 2025-10-09 16:17:11.309 139687 DEBUG oslo.privsep.daemon [-] privsep: reply[686c4365-2480-4a08-a02a-06ad5288da46]: (4, True) _call_back /usr/lib/python3.12/site-packages/oslo_privsep/daemon.py:515
Oct 09 16:17:11 compute-0 ovn_metadata_agent[28608]: 2025-10-09 16:17:11.328 139687 DEBUG oslo.privsep.daemon [-] privsep: reply[e455ddf8-5374-4d79-8d79-85c3add24d3a]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 148271, 'reachable_time': 15883, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 143966, 'error': None, 'target': 'ovnmeta-7ed4244a-b510-4df6-9ffd-2f86603932fc', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.12/site-packages/oslo_privsep/daemon.py:515
Oct 09 16:17:11 compute-0 ovn_metadata_agent[28608]: 2025-10-09 16:17:11.331 28727 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-7ed4244a-b510-4df6-9ffd-2f86603932fc deleted. remove_netns /usr/lib/python3.12/site-packages/neutron/privileged/agent/linux/ip_lib.py:603
Oct 09 16:17:11 compute-0 ovn_metadata_agent[28608]: 2025-10-09 16:17:11.331 28727 DEBUG oslo.privsep.daemon [-] privsep: reply[a6a0ef1c-473c-4d9a-9b59-fcf3dddeb0e6]: (4, None) _call_back /usr/lib/python3.12/site-packages/oslo_privsep/daemon.py:515
Oct 09 16:17:11 compute-0 systemd[1]: run-netns-ovnmeta\x2d7ed4244a\x2db510\x2d4df6\x2d9ffd\x2d2f86603932fc.mount: Deactivated successfully.
Oct 09 16:17:11 compute-0 sshd-session[143844]: Failed password for root from 134.199.199.215 port 38852 ssh2
Oct 09 16:17:11 compute-0 nova_compute[117331]: 2025-10-09 16:17:11.626 2 WARNING neutronclient.v2_0.client [req-be30f6b7-67e0-453e-811a-95ef11e3cce4 req-9ad8f471-afdf-441b-86a0-8d802df048db ee15961ad2be4bada6c106ac4f69f422 c076c42271004e7aa86e84416e33f826 - - default default] The python binding code in neutronclient is deprecated in favor of OpenstackSDK, please use that as this will be removed in a future release.
Oct 09 16:17:11 compute-0 nova_compute[117331]: 2025-10-09 16:17:11.832 2 DEBUG nova.network.neutron [req-be30f6b7-67e0-453e-811a-95ef11e3cce4 req-9ad8f471-afdf-441b-86a0-8d802df048db ee15961ad2be4bada6c106ac4f69f422 c076c42271004e7aa86e84416e33f826 - - default default] [instance: 12702257-b2eb-4842-bdf8-25e7a3b20038] Updated VIF entry in instance network info cache for port 9ac52a72-3ea9-4d92-8114-f77f5fe13293. _build_network_info_model /usr/lib/python3.12/site-packages/nova/network/neutron.py:3542
Oct 09 16:17:11 compute-0 nova_compute[117331]: 2025-10-09 16:17:11.833 2 DEBUG nova.network.neutron [req-be30f6b7-67e0-453e-811a-95ef11e3cce4 req-9ad8f471-afdf-441b-86a0-8d802df048db ee15961ad2be4bada6c106ac4f69f422 c076c42271004e7aa86e84416e33f826 - - default default] [instance: 12702257-b2eb-4842-bdf8-25e7a3b20038] Updating instance_info_cache with network_info: [{"id": "9ac52a72-3ea9-4d92-8114-f77f5fe13293", "address": "fa:16:3e:bf:94:75", "network": {"id": "7ed4244a-b510-4df6-9ffd-2f86603932fc", "bridge": "br-int", "label": "tempest-TestExecuteActionsViaActuator-1219246163-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "153d68ab33e64b958060574bb1741725", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap9ac52a72-3e", "ovs_interfaceid": "9ac52a72-3ea9-4d92-8114-f77f5fe13293", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.12/site-packages/nova/network/neutron.py:116
Oct 09 16:17:12 compute-0 sshd-session[143967]: Invalid user oscar from 134.199.199.215 port 38862
Oct 09 16:17:12 compute-0 sshd-session[143967]: pam_unix(sshd:auth): check pass; user unknown
Oct 09 16:17:12 compute-0 sshd-session[143967]: pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=134.199.199.215
Oct 09 16:17:12 compute-0 nova_compute[117331]: 2025-10-09 16:17:12.339 2 DEBUG oslo_concurrency.lockutils [req-be30f6b7-67e0-453e-811a-95ef11e3cce4 req-9ad8f471-afdf-441b-86a0-8d802df048db ee15961ad2be4bada6c106ac4f69f422 c076c42271004e7aa86e84416e33f826 - - default default] Releasing lock "refresh_cache-12702257-b2eb-4842-bdf8-25e7a3b20038" lock /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:334
Oct 09 16:17:12 compute-0 nova_compute[117331]: 2025-10-09 16:17:12.764 2 DEBUG nova.network.neutron [None req-f2c8be06-10a6-4d4e-83ce-15f398bd0cd3 005258eefa5345409efe13f6a1de22ab c076c42271004e7aa86e84416e33f826 - - default default] Activated binding for port a5c38ba6-3efb-4090-a42b-1dd250959041 and host compute-1.ctlplane.example.com migrate_instance_start /usr/lib/python3.12/site-packages/nova/network/neutron.py:3241
Oct 09 16:17:12 compute-0 nova_compute[117331]: 2025-10-09 16:17:12.765 2 DEBUG nova.compute.manager [None req-f2c8be06-10a6-4d4e-83ce-15f398bd0cd3 005258eefa5345409efe13f6a1de22ab c076c42271004e7aa86e84416e33f826 - - default default] [instance: 70a451f7-5bde-42f1-a448-d48b3b24d9d6] Calling driver.post_live_migration_at_source with original source VIFs from migrate_data: [{"id": "a5c38ba6-3efb-4090-a42b-1dd250959041", "address": "fa:16:3e:c9:bd:65", "network": {"id": "7ed4244a-b510-4df6-9ffd-2f86603932fc", "bridge": "br-int", "label": "tempest-TestExecuteActionsViaActuator-1219246163-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "153d68ab33e64b958060574bb1741725", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa5c38ba6-3e", "ovs_interfaceid": "a5c38ba6-3efb-4090-a42b-1dd250959041", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] _post_live_migration /usr/lib/python3.12/site-packages/nova/compute/manager.py:10059
Oct 09 16:17:12 compute-0 nova_compute[117331]: 2025-10-09 16:17:12.767 2 DEBUG nova.virt.libvirt.vif [None req-f2c8be06-10a6-4d4e-83ce-15f398bd0cd3 005258eefa5345409efe13f6a1de22ab c076c42271004e7aa86e84416e33f826 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,compute_id=1,config_drive='True',created_at=2025-10-09T16:15:25Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description=None,display_name='tempest-TestExecuteActionsViaActuator-server-1017152878',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testexecuteactionsviaactuator-server-1017152878',id=6,image_ref='b7d6e0af-25e4-4227-9dc6-43143898ceee',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-10-09T16:15:41Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=1,progress=0,project_id='be4e2f9059cd48f5b44a612256e3fc7b',ramdisk_id='',reservation_id='r-lo5hd093',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member,manager,admin',image_base_image_ref='b7d6e0af-25e4-4227-9dc6-43143898ceee',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestExecuteActionsViaActuator-1347788182',owner_user_name='tempest-TestExecuteActionsViaActuator-1347788182-project-admin'},tags=<?>,task_state='migrating',terminated_at=None,trusted_certs=<?>,updated_at=2025-10-09T16:16:50Z,user_data=None,user_id='e5044998ddc3419bb14cc08417add581',uuid=70a451f7-5bde-42f1-a448-d48b3b24d9d6,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "a5c38ba6-3efb-4090-a42b-1dd250959041", "address": "fa:16:3e:c9:bd:65", "network": {"id": "7ed4244a-b510-4df6-9ffd-2f86603932fc", "bridge": "br-int", "label": "tempest-TestExecuteActionsViaActuator-1219246163-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "153d68ab33e64b958060574bb1741725", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa5c38ba6-3e", "ovs_interfaceid": "a5c38ba6-3efb-4090-a42b-1dd250959041", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.12/site-packages/nova/virt/libvirt/vif.py:839
Oct 09 16:17:12 compute-0 nova_compute[117331]: 2025-10-09 16:17:12.767 2 DEBUG nova.network.os_vif_util [None req-f2c8be06-10a6-4d4e-83ce-15f398bd0cd3 005258eefa5345409efe13f6a1de22ab c076c42271004e7aa86e84416e33f826 - - default default] Converting VIF {"id": "a5c38ba6-3efb-4090-a42b-1dd250959041", "address": "fa:16:3e:c9:bd:65", "network": {"id": "7ed4244a-b510-4df6-9ffd-2f86603932fc", "bridge": "br-int", "label": "tempest-TestExecuteActionsViaActuator-1219246163-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "153d68ab33e64b958060574bb1741725", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa5c38ba6-3e", "ovs_interfaceid": "a5c38ba6-3efb-4090-a42b-1dd250959041", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.12/site-packages/nova/network/os_vif_util.py:511
Oct 09 16:17:12 compute-0 nova_compute[117331]: 2025-10-09 16:17:12.768 2 DEBUG nova.network.os_vif_util [None req-f2c8be06-10a6-4d4e-83ce-15f398bd0cd3 005258eefa5345409efe13f6a1de22ab c076c42271004e7aa86e84416e33f826 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:c9:bd:65,bridge_name='br-int',has_traffic_filtering=True,id=a5c38ba6-3efb-4090-a42b-1dd250959041,network=Network(7ed4244a-b510-4df6-9ffd-2f86603932fc),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapa5c38ba6-3e') nova_to_osvif_vif /usr/lib/python3.12/site-packages/nova/network/os_vif_util.py:548
Oct 09 16:17:12 compute-0 nova_compute[117331]: 2025-10-09 16:17:12.769 2 DEBUG os_vif [None req-f2c8be06-10a6-4d4e-83ce-15f398bd0cd3 005258eefa5345409efe13f6a1de22ab c076c42271004e7aa86e84416e33f826 - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:c9:bd:65,bridge_name='br-int',has_traffic_filtering=True,id=a5c38ba6-3efb-4090-a42b-1dd250959041,network=Network(7ed4244a-b510-4df6-9ffd-2f86603932fc),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapa5c38ba6-3e') unplug /usr/lib/python3.12/site-packages/os_vif/__init__.py:109
Oct 09 16:17:12 compute-0 nova_compute[117331]: 2025-10-09 16:17:12.772 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 22 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 09 16:17:12 compute-0 nova_compute[117331]: 2025-10-09 16:17:12.773 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapa5c38ba6-3e, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.12/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 09 16:17:12 compute-0 nova_compute[117331]: 2025-10-09 16:17:12.775 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 09 16:17:12 compute-0 nova_compute[117331]: 2025-10-09 16:17:12.776 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 09 16:17:12 compute-0 nova_compute[117331]: 2025-10-09 16:17:12.778 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 22 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 09 16:17:12 compute-0 nova_compute[117331]: 2025-10-09 16:17:12.778 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbDestroyCommand(_result=None, table=QoS, record=c685cc5c-8281-41c1-a5f9-dfe9d560c7a2) do_commit /usr/lib/python3.12/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 09 16:17:12 compute-0 nova_compute[117331]: 2025-10-09 16:17:12.779 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 09 16:17:12 compute-0 nova_compute[117331]: 2025-10-09 16:17:12.781 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:248
Oct 09 16:17:12 compute-0 nova_compute[117331]: 2025-10-09 16:17:12.785 2 INFO os_vif [None req-f2c8be06-10a6-4d4e-83ce-15f398bd0cd3 005258eefa5345409efe13f6a1de22ab c076c42271004e7aa86e84416e33f826 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:c9:bd:65,bridge_name='br-int',has_traffic_filtering=True,id=a5c38ba6-3efb-4090-a42b-1dd250959041,network=Network(7ed4244a-b510-4df6-9ffd-2f86603932fc),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapa5c38ba6-3e')
Oct 09 16:17:12 compute-0 nova_compute[117331]: 2025-10-09 16:17:12.785 2 DEBUG oslo_concurrency.lockutils [None req-f2c8be06-10a6-4d4e-83ce-15f398bd0cd3 005258eefa5345409efe13f6a1de22ab c076c42271004e7aa86e84416e33f826 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.free_pci_device_allocations_for_instance" inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:405
Oct 09 16:17:12 compute-0 nova_compute[117331]: 2025-10-09 16:17:12.786 2 DEBUG oslo_concurrency.lockutils [None req-f2c8be06-10a6-4d4e-83ce-15f398bd0cd3 005258eefa5345409efe13f6a1de22ab c076c42271004e7aa86e84416e33f826 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.free_pci_device_allocations_for_instance" :: waited 0.000s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:410
Oct 09 16:17:12 compute-0 nova_compute[117331]: 2025-10-09 16:17:12.786 2 DEBUG oslo_concurrency.lockutils [None req-f2c8be06-10a6-4d4e-83ce-15f398bd0cd3 005258eefa5345409efe13f6a1de22ab c076c42271004e7aa86e84416e33f826 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.free_pci_device_allocations_for_instance" :: held 0.000s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:424
Oct 09 16:17:12 compute-0 nova_compute[117331]: 2025-10-09 16:17:12.786 2 DEBUG nova.compute.manager [None req-f2c8be06-10a6-4d4e-83ce-15f398bd0cd3 005258eefa5345409efe13f6a1de22ab c076c42271004e7aa86e84416e33f826 - - default default] [instance: 70a451f7-5bde-42f1-a448-d48b3b24d9d6] Calling driver.cleanup from _post_live_migration _post_live_migration /usr/lib/python3.12/site-packages/nova/compute/manager.py:10082
Oct 09 16:17:12 compute-0 nova_compute[117331]: 2025-10-09 16:17:12.787 2 INFO nova.virt.libvirt.driver [None req-f2c8be06-10a6-4d4e-83ce-15f398bd0cd3 005258eefa5345409efe13f6a1de22ab c076c42271004e7aa86e84416e33f826 - - default default] [instance: 70a451f7-5bde-42f1-a448-d48b3b24d9d6] Deleting instance files /var/lib/nova/instances/70a451f7-5bde-42f1-a448-d48b3b24d9d6_del
Oct 09 16:17:12 compute-0 nova_compute[117331]: 2025-10-09 16:17:12.788 2 INFO nova.virt.libvirt.driver [None req-f2c8be06-10a6-4d4e-83ce-15f398bd0cd3 005258eefa5345409efe13f6a1de22ab c076c42271004e7aa86e84416e33f826 - - default default] [instance: 70a451f7-5bde-42f1-a448-d48b3b24d9d6] Deletion of /var/lib/nova/instances/70a451f7-5bde-42f1-a448-d48b3b24d9d6_del complete
Oct 09 16:17:12 compute-0 nova_compute[117331]: 2025-10-09 16:17:12.816 2 DEBUG oslo_service.periodic_task [None req-92a13bed-c081-4c7b-87f3-bafebefb79f2 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.12/site-packages/oslo_service/periodic_task.py:210
Oct 09 16:17:12 compute-0 nova_compute[117331]: 2025-10-09 16:17:12.816 2 DEBUG oslo_service.periodic_task [None req-92a13bed-c081-4c7b-87f3-bafebefb79f2 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.12/site-packages/oslo_service/periodic_task.py:210
Oct 09 16:17:12 compute-0 nova_compute[117331]: 2025-10-09 16:17:12.817 2 DEBUG nova.compute.manager [None req-92a13bed-c081-4c7b-87f3-bafebefb79f2 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.12/site-packages/nova/compute/manager.py:11228
Oct 09 16:17:12 compute-0 nova_compute[117331]: 2025-10-09 16:17:12.817 2 DEBUG oslo_service.periodic_task [None req-92a13bed-c081-4c7b-87f3-bafebefb79f2 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.12/site-packages/oslo_service/periodic_task.py:210
Oct 09 16:17:13 compute-0 sshd-session[143844]: Connection closed by authenticating user root 134.199.199.215 port 38852 [preauth]
Oct 09 16:17:13 compute-0 nova_compute[117331]: 2025-10-09 16:17:13.331 2 DEBUG oslo_concurrency.lockutils [None req-92a13bed-c081-4c7b-87f3-bafebefb79f2 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:405
Oct 09 16:17:13 compute-0 nova_compute[117331]: 2025-10-09 16:17:13.332 2 DEBUG oslo_concurrency.lockutils [None req-92a13bed-c081-4c7b-87f3-bafebefb79f2 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:410
Oct 09 16:17:13 compute-0 nova_compute[117331]: 2025-10-09 16:17:13.332 2 DEBUG oslo_concurrency.lockutils [None req-92a13bed-c081-4c7b-87f3-bafebefb79f2 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:424
Oct 09 16:17:13 compute-0 nova_compute[117331]: 2025-10-09 16:17:13.333 2 DEBUG nova.compute.resource_tracker [None req-92a13bed-c081-4c7b-87f3-bafebefb79f2 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.12/site-packages/nova/compute/resource_tracker.py:937
Oct 09 16:17:13 compute-0 podman[143970]: 2025-10-09 16:17:13.454346414 +0000 UTC m=+0.085151137 container health_status d615e4f24ff62fbb14be4a63e6da568ec5b22309773c1c66103b6b4de64a8a2f (image=38.102.83.66:5001/podified-master-centos10/openstack-ovn-controller:watcher_latest, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, io.buildah.version=1.41.4, org.label-schema.schema-version=1.0, tcib_build_tag=watcher_latest, tcib_managed=true, config_id=ovn_controller, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251007, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': '38.102.83.66:5001/podified-master-centos10/openstack-ovn-controller:watcher_latest', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']})
Oct 09 16:17:13 compute-0 nova_compute[117331]: 2025-10-09 16:17:13.511 2 DEBUG nova.compute.manager [req-196c8788-11ed-4ab2-a2c5-5002aee7d8b9 req-68124dd7-c8e6-43c8-958f-eb9d03ac304c ee15961ad2be4bada6c106ac4f69f422 c076c42271004e7aa86e84416e33f826 - - default default] [instance: 70a451f7-5bde-42f1-a448-d48b3b24d9d6] Received event network-vif-plugged-a5c38ba6-3efb-4090-a42b-1dd250959041 external_instance_event /usr/lib/python3.12/site-packages/nova/compute/manager.py:11812
Oct 09 16:17:13 compute-0 nova_compute[117331]: 2025-10-09 16:17:13.511 2 DEBUG oslo_concurrency.lockutils [req-196c8788-11ed-4ab2-a2c5-5002aee7d8b9 req-68124dd7-c8e6-43c8-958f-eb9d03ac304c ee15961ad2be4bada6c106ac4f69f422 c076c42271004e7aa86e84416e33f826 - - default default] Acquiring lock "70a451f7-5bde-42f1-a448-d48b3b24d9d6-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:405
Oct 09 16:17:13 compute-0 nova_compute[117331]: 2025-10-09 16:17:13.511 2 DEBUG oslo_concurrency.lockutils [req-196c8788-11ed-4ab2-a2c5-5002aee7d8b9 req-68124dd7-c8e6-43c8-958f-eb9d03ac304c ee15961ad2be4bada6c106ac4f69f422 c076c42271004e7aa86e84416e33f826 - - default default] Lock "70a451f7-5bde-42f1-a448-d48b3b24d9d6-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:410
Oct 09 16:17:13 compute-0 nova_compute[117331]: 2025-10-09 16:17:13.511 2 DEBUG oslo_concurrency.lockutils [req-196c8788-11ed-4ab2-a2c5-5002aee7d8b9 req-68124dd7-c8e6-43c8-958f-eb9d03ac304c ee15961ad2be4bada6c106ac4f69f422 c076c42271004e7aa86e84416e33f826 - - default default] Lock "70a451f7-5bde-42f1-a448-d48b3b24d9d6-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:424
Oct 09 16:17:13 compute-0 nova_compute[117331]: 2025-10-09 16:17:13.511 2 DEBUG nova.compute.manager [req-196c8788-11ed-4ab2-a2c5-5002aee7d8b9 req-68124dd7-c8e6-43c8-958f-eb9d03ac304c ee15961ad2be4bada6c106ac4f69f422 c076c42271004e7aa86e84416e33f826 - - default default] [instance: 70a451f7-5bde-42f1-a448-d48b3b24d9d6] No waiting events found dispatching network-vif-plugged-a5c38ba6-3efb-4090-a42b-1dd250959041 pop_instance_event /usr/lib/python3.12/site-packages/nova/compute/manager.py:344
Oct 09 16:17:13 compute-0 nova_compute[117331]: 2025-10-09 16:17:13.512 2 WARNING nova.compute.manager [req-196c8788-11ed-4ab2-a2c5-5002aee7d8b9 req-68124dd7-c8e6-43c8-958f-eb9d03ac304c ee15961ad2be4bada6c106ac4f69f422 c076c42271004e7aa86e84416e33f826 - - default default] [instance: 70a451f7-5bde-42f1-a448-d48b3b24d9d6] Received unexpected event network-vif-plugged-a5c38ba6-3efb-4090-a42b-1dd250959041 for instance with vm_state active and task_state migrating.
Oct 09 16:17:13 compute-0 nova_compute[117331]: 2025-10-09 16:17:13.512 2 DEBUG nova.compute.manager [req-196c8788-11ed-4ab2-a2c5-5002aee7d8b9 req-68124dd7-c8e6-43c8-958f-eb9d03ac304c ee15961ad2be4bada6c106ac4f69f422 c076c42271004e7aa86e84416e33f826 - - default default] [instance: 70a451f7-5bde-42f1-a448-d48b3b24d9d6] Received event network-vif-unplugged-a5c38ba6-3efb-4090-a42b-1dd250959041 external_instance_event /usr/lib/python3.12/site-packages/nova/compute/manager.py:11812
Oct 09 16:17:13 compute-0 nova_compute[117331]: 2025-10-09 16:17:13.512 2 DEBUG oslo_concurrency.lockutils [req-196c8788-11ed-4ab2-a2c5-5002aee7d8b9 req-68124dd7-c8e6-43c8-958f-eb9d03ac304c ee15961ad2be4bada6c106ac4f69f422 c076c42271004e7aa86e84416e33f826 - - default default] Acquiring lock "70a451f7-5bde-42f1-a448-d48b3b24d9d6-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:405
Oct 09 16:17:13 compute-0 nova_compute[117331]: 2025-10-09 16:17:13.512 2 DEBUG oslo_concurrency.lockutils [req-196c8788-11ed-4ab2-a2c5-5002aee7d8b9 req-68124dd7-c8e6-43c8-958f-eb9d03ac304c ee15961ad2be4bada6c106ac4f69f422 c076c42271004e7aa86e84416e33f826 - - default default] Lock "70a451f7-5bde-42f1-a448-d48b3b24d9d6-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:410
Oct 09 16:17:13 compute-0 nova_compute[117331]: 2025-10-09 16:17:13.512 2 DEBUG oslo_concurrency.lockutils [req-196c8788-11ed-4ab2-a2c5-5002aee7d8b9 req-68124dd7-c8e6-43c8-958f-eb9d03ac304c ee15961ad2be4bada6c106ac4f69f422 c076c42271004e7aa86e84416e33f826 - - default default] Lock "70a451f7-5bde-42f1-a448-d48b3b24d9d6-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:424
Oct 09 16:17:13 compute-0 nova_compute[117331]: 2025-10-09 16:17:13.512 2 DEBUG nova.compute.manager [req-196c8788-11ed-4ab2-a2c5-5002aee7d8b9 req-68124dd7-c8e6-43c8-958f-eb9d03ac304c ee15961ad2be4bada6c106ac4f69f422 c076c42271004e7aa86e84416e33f826 - - default default] [instance: 70a451f7-5bde-42f1-a448-d48b3b24d9d6] No waiting events found dispatching network-vif-unplugged-a5c38ba6-3efb-4090-a42b-1dd250959041 pop_instance_event /usr/lib/python3.12/site-packages/nova/compute/manager.py:344
Oct 09 16:17:13 compute-0 nova_compute[117331]: 2025-10-09 16:17:13.512 2 DEBUG nova.compute.manager [req-196c8788-11ed-4ab2-a2c5-5002aee7d8b9 req-68124dd7-c8e6-43c8-958f-eb9d03ac304c ee15961ad2be4bada6c106ac4f69f422 c076c42271004e7aa86e84416e33f826 - - default default] [instance: 70a451f7-5bde-42f1-a448-d48b3b24d9d6] Received event network-vif-unplugged-a5c38ba6-3efb-4090-a42b-1dd250959041 for instance with task_state migrating. _process_instance_event /usr/lib/python3.12/site-packages/nova/compute/manager.py:11590
Oct 09 16:17:13 compute-0 nova_compute[117331]: 2025-10-09 16:17:13.512 2 DEBUG nova.compute.manager [req-196c8788-11ed-4ab2-a2c5-5002aee7d8b9 req-68124dd7-c8e6-43c8-958f-eb9d03ac304c ee15961ad2be4bada6c106ac4f69f422 c076c42271004e7aa86e84416e33f826 - - default default] [instance: 70a451f7-5bde-42f1-a448-d48b3b24d9d6] Received event network-vif-unplugged-a5c38ba6-3efb-4090-a42b-1dd250959041 external_instance_event /usr/lib/python3.12/site-packages/nova/compute/manager.py:11812
Oct 09 16:17:13 compute-0 nova_compute[117331]: 2025-10-09 16:17:13.513 2 DEBUG oslo_concurrency.lockutils [req-196c8788-11ed-4ab2-a2c5-5002aee7d8b9 req-68124dd7-c8e6-43c8-958f-eb9d03ac304c ee15961ad2be4bada6c106ac4f69f422 c076c42271004e7aa86e84416e33f826 - - default default] Acquiring lock "70a451f7-5bde-42f1-a448-d48b3b24d9d6-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:405
Oct 09 16:17:13 compute-0 nova_compute[117331]: 2025-10-09 16:17:13.513 2 DEBUG oslo_concurrency.lockutils [req-196c8788-11ed-4ab2-a2c5-5002aee7d8b9 req-68124dd7-c8e6-43c8-958f-eb9d03ac304c ee15961ad2be4bada6c106ac4f69f422 c076c42271004e7aa86e84416e33f826 - - default default] Lock "70a451f7-5bde-42f1-a448-d48b3b24d9d6-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:410
Oct 09 16:17:13 compute-0 nova_compute[117331]: 2025-10-09 16:17:13.513 2 DEBUG oslo_concurrency.lockutils [req-196c8788-11ed-4ab2-a2c5-5002aee7d8b9 req-68124dd7-c8e6-43c8-958f-eb9d03ac304c ee15961ad2be4bada6c106ac4f69f422 c076c42271004e7aa86e84416e33f826 - - default default] Lock "70a451f7-5bde-42f1-a448-d48b3b24d9d6-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:424
Oct 09 16:17:13 compute-0 nova_compute[117331]: 2025-10-09 16:17:13.513 2 DEBUG nova.compute.manager [req-196c8788-11ed-4ab2-a2c5-5002aee7d8b9 req-68124dd7-c8e6-43c8-958f-eb9d03ac304c ee15961ad2be4bada6c106ac4f69f422 c076c42271004e7aa86e84416e33f826 - - default default] [instance: 70a451f7-5bde-42f1-a448-d48b3b24d9d6] No waiting events found dispatching network-vif-unplugged-a5c38ba6-3efb-4090-a42b-1dd250959041 pop_instance_event /usr/lib/python3.12/site-packages/nova/compute/manager.py:344
Oct 09 16:17:13 compute-0 nova_compute[117331]: 2025-10-09 16:17:13.513 2 DEBUG nova.compute.manager [req-196c8788-11ed-4ab2-a2c5-5002aee7d8b9 req-68124dd7-c8e6-43c8-958f-eb9d03ac304c ee15961ad2be4bada6c106ac4f69f422 c076c42271004e7aa86e84416e33f826 - - default default] [instance: 70a451f7-5bde-42f1-a448-d48b3b24d9d6] Received event network-vif-unplugged-a5c38ba6-3efb-4090-a42b-1dd250959041 for instance with task_state migrating. _process_instance_event /usr/lib/python3.12/site-packages/nova/compute/manager.py:11590
Oct 09 16:17:13 compute-0 nova_compute[117331]: 2025-10-09 16:17:13.513 2 DEBUG nova.compute.manager [req-196c8788-11ed-4ab2-a2c5-5002aee7d8b9 req-68124dd7-c8e6-43c8-958f-eb9d03ac304c ee15961ad2be4bada6c106ac4f69f422 c076c42271004e7aa86e84416e33f826 - - default default] [instance: 70a451f7-5bde-42f1-a448-d48b3b24d9d6] Received event network-vif-plugged-a5c38ba6-3efb-4090-a42b-1dd250959041 external_instance_event /usr/lib/python3.12/site-packages/nova/compute/manager.py:11812
Oct 09 16:17:13 compute-0 nova_compute[117331]: 2025-10-09 16:17:13.513 2 DEBUG oslo_concurrency.lockutils [req-196c8788-11ed-4ab2-a2c5-5002aee7d8b9 req-68124dd7-c8e6-43c8-958f-eb9d03ac304c ee15961ad2be4bada6c106ac4f69f422 c076c42271004e7aa86e84416e33f826 - - default default] Acquiring lock "70a451f7-5bde-42f1-a448-d48b3b24d9d6-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:405
Oct 09 16:17:13 compute-0 nova_compute[117331]: 2025-10-09 16:17:13.514 2 DEBUG oslo_concurrency.lockutils [req-196c8788-11ed-4ab2-a2c5-5002aee7d8b9 req-68124dd7-c8e6-43c8-958f-eb9d03ac304c ee15961ad2be4bada6c106ac4f69f422 c076c42271004e7aa86e84416e33f826 - - default default] Lock "70a451f7-5bde-42f1-a448-d48b3b24d9d6-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:410
Oct 09 16:17:13 compute-0 nova_compute[117331]: 2025-10-09 16:17:13.514 2 DEBUG oslo_concurrency.lockutils [req-196c8788-11ed-4ab2-a2c5-5002aee7d8b9 req-68124dd7-c8e6-43c8-958f-eb9d03ac304c ee15961ad2be4bada6c106ac4f69f422 c076c42271004e7aa86e84416e33f826 - - default default] Lock "70a451f7-5bde-42f1-a448-d48b3b24d9d6-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:424
Oct 09 16:17:13 compute-0 nova_compute[117331]: 2025-10-09 16:17:13.514 2 DEBUG nova.compute.manager [req-196c8788-11ed-4ab2-a2c5-5002aee7d8b9 req-68124dd7-c8e6-43c8-958f-eb9d03ac304c ee15961ad2be4bada6c106ac4f69f422 c076c42271004e7aa86e84416e33f826 - - default default] [instance: 70a451f7-5bde-42f1-a448-d48b3b24d9d6] No waiting events found dispatching network-vif-plugged-a5c38ba6-3efb-4090-a42b-1dd250959041 pop_instance_event /usr/lib/python3.12/site-packages/nova/compute/manager.py:344
Oct 09 16:17:13 compute-0 nova_compute[117331]: 2025-10-09 16:17:13.514 2 WARNING nova.compute.manager [req-196c8788-11ed-4ab2-a2c5-5002aee7d8b9 req-68124dd7-c8e6-43c8-958f-eb9d03ac304c ee15961ad2be4bada6c106ac4f69f422 c076c42271004e7aa86e84416e33f826 - - default default] [instance: 70a451f7-5bde-42f1-a448-d48b3b24d9d6] Received unexpected event network-vif-plugged-a5c38ba6-3efb-4090-a42b-1dd250959041 for instance with vm_state active and task_state migrating.
Oct 09 16:17:13 compute-0 nova_compute[117331]: 2025-10-09 16:17:13.514 2 DEBUG nova.compute.manager [req-196c8788-11ed-4ab2-a2c5-5002aee7d8b9 req-68124dd7-c8e6-43c8-958f-eb9d03ac304c ee15961ad2be4bada6c106ac4f69f422 c076c42271004e7aa86e84416e33f826 - - default default] [instance: 70a451f7-5bde-42f1-a448-d48b3b24d9d6] Received event network-vif-plugged-a5c38ba6-3efb-4090-a42b-1dd250959041 external_instance_event /usr/lib/python3.12/site-packages/nova/compute/manager.py:11812
Oct 09 16:17:13 compute-0 nova_compute[117331]: 2025-10-09 16:17:13.514 2 DEBUG oslo_concurrency.lockutils [req-196c8788-11ed-4ab2-a2c5-5002aee7d8b9 req-68124dd7-c8e6-43c8-958f-eb9d03ac304c ee15961ad2be4bada6c106ac4f69f422 c076c42271004e7aa86e84416e33f826 - - default default] Acquiring lock "70a451f7-5bde-42f1-a448-d48b3b24d9d6-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:405
Oct 09 16:17:13 compute-0 nova_compute[117331]: 2025-10-09 16:17:13.514 2 DEBUG oslo_concurrency.lockutils [req-196c8788-11ed-4ab2-a2c5-5002aee7d8b9 req-68124dd7-c8e6-43c8-958f-eb9d03ac304c ee15961ad2be4bada6c106ac4f69f422 c076c42271004e7aa86e84416e33f826 - - default default] Lock "70a451f7-5bde-42f1-a448-d48b3b24d9d6-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:410
Oct 09 16:17:13 compute-0 nova_compute[117331]: 2025-10-09 16:17:13.514 2 DEBUG oslo_concurrency.lockutils [req-196c8788-11ed-4ab2-a2c5-5002aee7d8b9 req-68124dd7-c8e6-43c8-958f-eb9d03ac304c ee15961ad2be4bada6c106ac4f69f422 c076c42271004e7aa86e84416e33f826 - - default default] Lock "70a451f7-5bde-42f1-a448-d48b3b24d9d6-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:424
Oct 09 16:17:13 compute-0 nova_compute[117331]: 2025-10-09 16:17:13.515 2 DEBUG nova.compute.manager [req-196c8788-11ed-4ab2-a2c5-5002aee7d8b9 req-68124dd7-c8e6-43c8-958f-eb9d03ac304c ee15961ad2be4bada6c106ac4f69f422 c076c42271004e7aa86e84416e33f826 - - default default] [instance: 70a451f7-5bde-42f1-a448-d48b3b24d9d6] No waiting events found dispatching network-vif-plugged-a5c38ba6-3efb-4090-a42b-1dd250959041 pop_instance_event /usr/lib/python3.12/site-packages/nova/compute/manager.py:344
Oct 09 16:17:13 compute-0 nova_compute[117331]: 2025-10-09 16:17:13.515 2 WARNING nova.compute.manager [req-196c8788-11ed-4ab2-a2c5-5002aee7d8b9 req-68124dd7-c8e6-43c8-958f-eb9d03ac304c ee15961ad2be4bada6c106ac4f69f422 c076c42271004e7aa86e84416e33f826 - - default default] [instance: 70a451f7-5bde-42f1-a448-d48b3b24d9d6] Received unexpected event network-vif-plugged-a5c38ba6-3efb-4090-a42b-1dd250959041 for instance with vm_state active and task_state migrating.
Oct 09 16:17:14 compute-0 sshd-session[143967]: Failed password for invalid user oscar from 134.199.199.215 port 38862 ssh2
Oct 09 16:17:14 compute-0 nova_compute[117331]: 2025-10-09 16:17:14.961 2 WARNING nova.virt.libvirt.driver [None req-92a13bed-c081-4c7b-87f3-bafebefb79f2 - - - - - -] Periodic task is updating the host stats, it is trying to get disk info for instance-00000008, but the backing disk storage was removed by a concurrent operation such as resize. Error: No disk at /var/lib/nova/instances/12702257-b2eb-4842-bdf8-25e7a3b20038/disk: nova.exception.DiskNotFound: No disk at /var/lib/nova/instances/12702257-b2eb-4842-bdf8-25e7a3b20038/disk
Oct 09 16:17:15 compute-0 nova_compute[117331]: 2025-10-09 16:17:15.080 2 WARNING nova.virt.libvirt.driver [None req-92a13bed-c081-4c7b-87f3-bafebefb79f2 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Oct 09 16:17:15 compute-0 nova_compute[117331]: 2025-10-09 16:17:15.081 2 DEBUG oslo_concurrency.processutils [None req-92a13bed-c081-4c7b-87f3-bafebefb79f2 - - - - - -] Running cmd (subprocess): env LANG=C uptime execute /usr/lib/python3.12/site-packages/oslo_concurrency/processutils.py:349
Oct 09 16:17:15 compute-0 nova_compute[117331]: 2025-10-09 16:17:15.104 2 DEBUG oslo_concurrency.processutils [None req-92a13bed-c081-4c7b-87f3-bafebefb79f2 - - - - - -] CMD "env LANG=C uptime" returned: 0 in 0.024s execute /usr/lib/python3.12/site-packages/oslo_concurrency/processutils.py:372
Oct 09 16:17:15 compute-0 nova_compute[117331]: 2025-10-09 16:17:15.105 2 DEBUG nova.compute.resource_tracker [None req-92a13bed-c081-4c7b-87f3-bafebefb79f2 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=6126MB free_disk=73.24100875854492GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.12/site-packages/nova/compute/resource_tracker.py:1136
Oct 09 16:17:15 compute-0 nova_compute[117331]: 2025-10-09 16:17:15.105 2 DEBUG oslo_concurrency.lockutils [None req-92a13bed-c081-4c7b-87f3-bafebefb79f2 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:405
Oct 09 16:17:15 compute-0 nova_compute[117331]: 2025-10-09 16:17:15.106 2 DEBUG oslo_concurrency.lockutils [None req-92a13bed-c081-4c7b-87f3-bafebefb79f2 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:410
Oct 09 16:17:15 compute-0 nova_compute[117331]: 2025-10-09 16:17:15.632 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 09 16:17:15 compute-0 sshd-session[143998]: Invalid user student from 134.199.199.215 port 38864
Oct 09 16:17:15 compute-0 sshd-session[143998]: pam_unix(sshd:auth): check pass; user unknown
Oct 09 16:17:15 compute-0 sshd-session[143998]: pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=134.199.199.215
Oct 09 16:17:15 compute-0 sshd-session[143967]: Connection closed by invalid user oscar 134.199.199.215 port 38862 [preauth]
Oct 09 16:17:16 compute-0 nova_compute[117331]: 2025-10-09 16:17:16.122 2 DEBUG nova.compute.resource_tracker [None req-92a13bed-c081-4c7b-87f3-bafebefb79f2 - - - - - -] Migration for instance 12702257-b2eb-4842-bdf8-25e7a3b20038 refers to another host's instance! _pair_instances_to_migrations /usr/lib/python3.12/site-packages/nova/compute/resource_tracker.py:979
Oct 09 16:17:16 compute-0 nova_compute[117331]: 2025-10-09 16:17:16.630 2 INFO nova.compute.resource_tracker [None req-92a13bed-c081-4c7b-87f3-bafebefb79f2 - - - - - -] [instance: 70a451f7-5bde-42f1-a448-d48b3b24d9d6] Updating resource usage from migration 2df1389d-e874-4268-9b0c-0fd9d5f9e283
Oct 09 16:17:16 compute-0 nova_compute[117331]: 2025-10-09 16:17:16.631 2 INFO nova.compute.resource_tracker [None req-92a13bed-c081-4c7b-87f3-bafebefb79f2 - - - - - -] [instance: 12702257-b2eb-4842-bdf8-25e7a3b20038] Updating resource usage from migration b42b9cbd-ad2f-415a-bdaa-1a339470737f
Oct 09 16:17:16 compute-0 nova_compute[117331]: 2025-10-09 16:17:16.631 2 DEBUG nova.compute.resource_tracker [None req-92a13bed-c081-4c7b-87f3-bafebefb79f2 - - - - - -] [instance: 12702257-b2eb-4842-bdf8-25e7a3b20038] Starting to track outgoing migration b42b9cbd-ad2f-415a-bdaa-1a339470737f with flavor 5aeb5bf9-70f3-4ab6-b418-2c3319a7d2b3 _update_usage_from_migration /usr/lib/python3.12/site-packages/nova/compute/resource_tracker.py:1549
Oct 09 16:17:16 compute-0 nova_compute[117331]: 2025-10-09 16:17:16.656 2 DEBUG nova.compute.resource_tracker [None req-92a13bed-c081-4c7b-87f3-bafebefb79f2 - - - - - -] Migration 2df1389d-e874-4268-9b0c-0fd9d5f9e283 is active on this compute host and has allocations in placement: {'resources': {'VCPU': 1, 'MEMORY_MB': 128, 'DISK_GB': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.12/site-packages/nova/compute/resource_tracker.py:1745
Oct 09 16:17:16 compute-0 nova_compute[117331]: 2025-10-09 16:17:16.656 2 DEBUG nova.compute.resource_tracker [None req-92a13bed-c081-4c7b-87f3-bafebefb79f2 - - - - - -] Migration b42b9cbd-ad2f-415a-bdaa-1a339470737f is active on this compute host and has allocations in placement: {'resources': {'VCPU': 1, 'MEMORY_MB': 128, 'DISK_GB': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.12/site-packages/nova/compute/resource_tracker.py:1745
Oct 09 16:17:16 compute-0 nova_compute[117331]: 2025-10-09 16:17:16.657 2 DEBUG nova.compute.resource_tracker [None req-92a13bed-c081-4c7b-87f3-bafebefb79f2 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 2 _report_final_resource_view /usr/lib/python3.12/site-packages/nova/compute/resource_tracker.py:1159
Oct 09 16:17:16 compute-0 nova_compute[117331]: 2025-10-09 16:17:16.657 2 DEBUG nova.compute.resource_tracker [None req-92a13bed-c081-4c7b-87f3-bafebefb79f2 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=768MB phys_disk=79GB used_disk=2GB total_vcpus=8 used_vcpus=2 pci_stats=[] stats={'failed_builds': '0', 'uptime': ' 16:17:15 up 26 min,  0 user,  load average: 0.34, 0.29, 0.30\n', 'num_instances': '1', 'num_vm_active': '1', 'num_task_migrating': '1', 'num_os_type_None': '1', 'num_proj_be4e2f9059cd48f5b44a612256e3fc7b': '1', 'io_workload': '0'} _report_final_resource_view /usr/lib/python3.12/site-packages/nova/compute/resource_tracker.py:1168
Oct 09 16:17:16 compute-0 nova_compute[117331]: 2025-10-09 16:17:16.710 2 DEBUG nova.compute.provider_tree [None req-92a13bed-c081-4c7b-87f3-bafebefb79f2 - - - - - -] Inventory has not changed in ProviderTree for provider: 593051b8-2000-437f-a915-2616fc8b1671 update_inventory /usr/lib/python3.12/site-packages/nova/compute/provider_tree.py:180
Oct 09 16:17:17 compute-0 nova_compute[117331]: 2025-10-09 16:17:17.219 2 DEBUG nova.scheduler.client.report [None req-92a13bed-c081-4c7b-87f3-bafebefb79f2 - - - - - -] Inventory has not changed for provider 593051b8-2000-437f-a915-2616fc8b1671 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 79, 'reserved': 1, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.12/site-packages/nova/scheduler/client/report.py:958
Oct 09 16:17:17 compute-0 sshd-session[143998]: Failed password for invalid user student from 134.199.199.215 port 38864 ssh2
Oct 09 16:17:17 compute-0 nova_compute[117331]: 2025-10-09 16:17:17.729 2 DEBUG nova.compute.resource_tracker [None req-92a13bed-c081-4c7b-87f3-bafebefb79f2 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.12/site-packages/nova/compute/resource_tracker.py:1097
Oct 09 16:17:17 compute-0 nova_compute[117331]: 2025-10-09 16:17:17.730 2 DEBUG oslo_concurrency.lockutils [None req-92a13bed-c081-4c7b-87f3-bafebefb79f2 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 2.624s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:424
Oct 09 16:17:17 compute-0 nova_compute[117331]: 2025-10-09 16:17:17.730 2 DEBUG oslo_service.periodic_task [None req-92a13bed-c081-4c7b-87f3-bafebefb79f2 - - - - - -] Running periodic task ComputeManager._run_pending_deletes run_periodic_tasks /usr/lib/python3.12/site-packages/oslo_service/periodic_task.py:210
Oct 09 16:17:17 compute-0 nova_compute[117331]: 2025-10-09 16:17:17.731 2 DEBUG nova.compute.manager [None req-92a13bed-c081-4c7b-87f3-bafebefb79f2 - - - - - -] Cleaning up deleted instances _run_pending_deletes /usr/lib/python3.12/site-packages/nova/compute/manager.py:11909
Oct 09 16:17:17 compute-0 nova_compute[117331]: 2025-10-09 16:17:17.785 2 DEBUG nova.compute.manager [req-ffe967b1-a12a-4e8b-bc83-9ec6eef092b4 req-6ffa7749-341d-4158-9eb2-a0c8124808b4 ee15961ad2be4bada6c106ac4f69f422 c076c42271004e7aa86e84416e33f826 - - default default] [instance: 12702257-b2eb-4842-bdf8-25e7a3b20038] Received event network-vif-plugged-9ac52a72-3ea9-4d92-8114-f77f5fe13293 external_instance_event /usr/lib/python3.12/site-packages/nova/compute/manager.py:11812
Oct 09 16:17:17 compute-0 nova_compute[117331]: 2025-10-09 16:17:17.786 2 DEBUG oslo_concurrency.lockutils [req-ffe967b1-a12a-4e8b-bc83-9ec6eef092b4 req-6ffa7749-341d-4158-9eb2-a0c8124808b4 ee15961ad2be4bada6c106ac4f69f422 c076c42271004e7aa86e84416e33f826 - - default default] Acquiring lock "12702257-b2eb-4842-bdf8-25e7a3b20038-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:405
Oct 09 16:17:17 compute-0 nova_compute[117331]: 2025-10-09 16:17:17.786 2 DEBUG oslo_concurrency.lockutils [req-ffe967b1-a12a-4e8b-bc83-9ec6eef092b4 req-6ffa7749-341d-4158-9eb2-a0c8124808b4 ee15961ad2be4bada6c106ac4f69f422 c076c42271004e7aa86e84416e33f826 - - default default] Lock "12702257-b2eb-4842-bdf8-25e7a3b20038-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:410
Oct 09 16:17:17 compute-0 nova_compute[117331]: 2025-10-09 16:17:17.787 2 DEBUG oslo_concurrency.lockutils [req-ffe967b1-a12a-4e8b-bc83-9ec6eef092b4 req-6ffa7749-341d-4158-9eb2-a0c8124808b4 ee15961ad2be4bada6c106ac4f69f422 c076c42271004e7aa86e84416e33f826 - - default default] Lock "12702257-b2eb-4842-bdf8-25e7a3b20038-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:424
Oct 09 16:17:17 compute-0 nova_compute[117331]: 2025-10-09 16:17:17.787 2 DEBUG nova.compute.manager [req-ffe967b1-a12a-4e8b-bc83-9ec6eef092b4 req-6ffa7749-341d-4158-9eb2-a0c8124808b4 ee15961ad2be4bada6c106ac4f69f422 c076c42271004e7aa86e84416e33f826 - - default default] [instance: 12702257-b2eb-4842-bdf8-25e7a3b20038] No waiting events found dispatching network-vif-plugged-9ac52a72-3ea9-4d92-8114-f77f5fe13293 pop_instance_event /usr/lib/python3.12/site-packages/nova/compute/manager.py:344
Oct 09 16:17:17 compute-0 nova_compute[117331]: 2025-10-09 16:17:17.788 2 WARNING nova.compute.manager [req-ffe967b1-a12a-4e8b-bc83-9ec6eef092b4 req-6ffa7749-341d-4158-9eb2-a0c8124808b4 ee15961ad2be4bada6c106ac4f69f422 c076c42271004e7aa86e84416e33f826 - - default default] [instance: 12702257-b2eb-4842-bdf8-25e7a3b20038] Received unexpected event network-vif-plugged-9ac52a72-3ea9-4d92-8114-f77f5fe13293 for instance with vm_state active and task_state resize_finish.
Oct 09 16:17:17 compute-0 nova_compute[117331]: 2025-10-09 16:17:17.789 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 09 16:17:18 compute-0 nova_compute[117331]: 2025-10-09 16:17:18.238 2 DEBUG nova.compute.manager [None req-92a13bed-c081-4c7b-87f3-bafebefb79f2 - - - - - -] There are 0 instances to clean _run_pending_deletes /usr/lib/python3.12/site-packages/nova/compute/manager.py:11918
Oct 09 16:17:18 compute-0 sshd-session[143998]: Connection closed by invalid user student 134.199.199.215 port 38864 [preauth]
Oct 09 16:17:18 compute-0 sshd-session[144000]: Invalid user devops from 134.199.199.215 port 55566
Oct 09 16:17:18 compute-0 sshd-session[144000]: pam_unix(sshd:auth): check pass; user unknown
Oct 09 16:17:18 compute-0 sshd-session[144000]: pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=134.199.199.215
Oct 09 16:17:20 compute-0 nova_compute[117331]: 2025-10-09 16:17:20.635 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 09 16:17:20 compute-0 nova_compute[117331]: 2025-10-09 16:17:20.724 2 DEBUG oslo_service.periodic_task [None req-92a13bed-c081-4c7b-87f3-bafebefb79f2 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.12/site-packages/oslo_service/periodic_task.py:210
Oct 09 16:17:20 compute-0 nova_compute[117331]: 2025-10-09 16:17:20.725 2 DEBUG oslo_service.periodic_task [None req-92a13bed-c081-4c7b-87f3-bafebefb79f2 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.12/site-packages/oslo_service/periodic_task.py:210
Oct 09 16:17:20 compute-0 nova_compute[117331]: 2025-10-09 16:17:20.725 2 DEBUG oslo_service.periodic_task [None req-92a13bed-c081-4c7b-87f3-bafebefb79f2 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.12/site-packages/oslo_service/periodic_task.py:210
Oct 09 16:17:20 compute-0 nova_compute[117331]: 2025-10-09 16:17:20.726 2 DEBUG oslo_service.periodic_task [None req-92a13bed-c081-4c7b-87f3-bafebefb79f2 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.12/site-packages/oslo_service/periodic_task.py:210
Oct 09 16:17:20 compute-0 sshd-session[144000]: Failed password for invalid user devops from 134.199.199.215 port 55566 ssh2
Oct 09 16:17:21 compute-0 sshd-session[144000]: Connection closed by invalid user devops 134.199.199.215 port 55566 [preauth]
Oct 09 16:17:21 compute-0 nova_compute[117331]: 2025-10-09 16:17:21.555 2 DEBUG nova.compute.manager [req-8d44e787-521d-40a9-be12-97d1ea3429d3 req-415ecd94-76ab-4158-bbec-cbe4c93efd60 ee15961ad2be4bada6c106ac4f69f422 c076c42271004e7aa86e84416e33f826 - - default default] [instance: 12702257-b2eb-4842-bdf8-25e7a3b20038] Received event network-vif-plugged-9ac52a72-3ea9-4d92-8114-f77f5fe13293 external_instance_event /usr/lib/python3.12/site-packages/nova/compute/manager.py:11812
Oct 09 16:17:21 compute-0 nova_compute[117331]: 2025-10-09 16:17:21.555 2 DEBUG oslo_concurrency.lockutils [req-8d44e787-521d-40a9-be12-97d1ea3429d3 req-415ecd94-76ab-4158-bbec-cbe4c93efd60 ee15961ad2be4bada6c106ac4f69f422 c076c42271004e7aa86e84416e33f826 - - default default] Acquiring lock "12702257-b2eb-4842-bdf8-25e7a3b20038-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:405
Oct 09 16:17:21 compute-0 nova_compute[117331]: 2025-10-09 16:17:21.555 2 DEBUG oslo_concurrency.lockutils [req-8d44e787-521d-40a9-be12-97d1ea3429d3 req-415ecd94-76ab-4158-bbec-cbe4c93efd60 ee15961ad2be4bada6c106ac4f69f422 c076c42271004e7aa86e84416e33f826 - - default default] Lock "12702257-b2eb-4842-bdf8-25e7a3b20038-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:410
Oct 09 16:17:21 compute-0 nova_compute[117331]: 2025-10-09 16:17:21.555 2 DEBUG oslo_concurrency.lockutils [req-8d44e787-521d-40a9-be12-97d1ea3429d3 req-415ecd94-76ab-4158-bbec-cbe4c93efd60 ee15961ad2be4bada6c106ac4f69f422 c076c42271004e7aa86e84416e33f826 - - default default] Lock "12702257-b2eb-4842-bdf8-25e7a3b20038-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:424
Oct 09 16:17:21 compute-0 nova_compute[117331]: 2025-10-09 16:17:21.555 2 DEBUG nova.compute.manager [req-8d44e787-521d-40a9-be12-97d1ea3429d3 req-415ecd94-76ab-4158-bbec-cbe4c93efd60 ee15961ad2be4bada6c106ac4f69f422 c076c42271004e7aa86e84416e33f826 - - default default] [instance: 12702257-b2eb-4842-bdf8-25e7a3b20038] No waiting events found dispatching network-vif-plugged-9ac52a72-3ea9-4d92-8114-f77f5fe13293 pop_instance_event /usr/lib/python3.12/site-packages/nova/compute/manager.py:344
Oct 09 16:17:21 compute-0 nova_compute[117331]: 2025-10-09 16:17:21.556 2 WARNING nova.compute.manager [req-8d44e787-521d-40a9-be12-97d1ea3429d3 req-415ecd94-76ab-4158-bbec-cbe4c93efd60 ee15961ad2be4bada6c106ac4f69f422 c076c42271004e7aa86e84416e33f826 - - default default] [instance: 12702257-b2eb-4842-bdf8-25e7a3b20038] Received unexpected event network-vif-plugged-9ac52a72-3ea9-4d92-8114-f77f5fe13293 for instance with vm_state resized and task_state None.
Oct 09 16:17:22 compute-0 unix_chkpwd[144004]: password check failed for user (root)
Oct 09 16:17:22 compute-0 sshd-session[144002]: pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=134.199.199.215  user=root
Oct 09 16:17:22 compute-0 nova_compute[117331]: 2025-10-09 16:17:22.791 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 09 16:17:22 compute-0 podman[144005]: 2025-10-09 16:17:22.856092826 +0000 UTC m=+0.077516094 container health_status 270830b5167cafbab735d8118980f26987e673cd0fa464dd2202394830bcca66 (image=38.102.83.66:5001/podified-master-centos10/openstack-multipathd:watcher_latest, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, config_id=multipathd, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.41.4, tcib_build_tag=watcher_latest, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': '38.102.83.66:5001/podified-master-centos10/openstack-multipathd:watcher_latest', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd, org.label-schema.build-date=20251007, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 10 Base Image)
Oct 09 16:17:23 compute-0 nova_compute[117331]: 2025-10-09 16:17:23.450 2 DEBUG oslo_concurrency.lockutils [None req-f2c8be06-10a6-4d4e-83ce-15f398bd0cd3 005258eefa5345409efe13f6a1de22ab c076c42271004e7aa86e84416e33f826 - - default default] Acquiring lock "70a451f7-5bde-42f1-a448-d48b3b24d9d6-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:405
Oct 09 16:17:23 compute-0 nova_compute[117331]: 2025-10-09 16:17:23.451 2 DEBUG oslo_concurrency.lockutils [None req-f2c8be06-10a6-4d4e-83ce-15f398bd0cd3 005258eefa5345409efe13f6a1de22ab c076c42271004e7aa86e84416e33f826 - - default default] Lock "70a451f7-5bde-42f1-a448-d48b3b24d9d6-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.001s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:410
Oct 09 16:17:23 compute-0 nova_compute[117331]: 2025-10-09 16:17:23.451 2 DEBUG oslo_concurrency.lockutils [None req-f2c8be06-10a6-4d4e-83ce-15f398bd0cd3 005258eefa5345409efe13f6a1de22ab c076c42271004e7aa86e84416e33f826 - - default default] Lock "70a451f7-5bde-42f1-a448-d48b3b24d9d6-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:424
Oct 09 16:17:23 compute-0 nova_compute[117331]: 2025-10-09 16:17:23.970 2 DEBUG oslo_concurrency.lockutils [None req-f2c8be06-10a6-4d4e-83ce-15f398bd0cd3 005258eefa5345409efe13f6a1de22ab c076c42271004e7aa86e84416e33f826 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:405
Oct 09 16:17:23 compute-0 nova_compute[117331]: 2025-10-09 16:17:23.971 2 DEBUG oslo_concurrency.lockutils [None req-f2c8be06-10a6-4d4e-83ce-15f398bd0cd3 005258eefa5345409efe13f6a1de22ab c076c42271004e7aa86e84416e33f826 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:410
Oct 09 16:17:23 compute-0 nova_compute[117331]: 2025-10-09 16:17:23.971 2 DEBUG oslo_concurrency.lockutils [None req-f2c8be06-10a6-4d4e-83ce-15f398bd0cd3 005258eefa5345409efe13f6a1de22ab c076c42271004e7aa86e84416e33f826 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:424
Oct 09 16:17:23 compute-0 nova_compute[117331]: 2025-10-09 16:17:23.972 2 DEBUG nova.compute.resource_tracker [None req-f2c8be06-10a6-4d4e-83ce-15f398bd0cd3 005258eefa5345409efe13f6a1de22ab c076c42271004e7aa86e84416e33f826 - - default default] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.12/site-packages/nova/compute/resource_tracker.py:937
Oct 09 16:17:24 compute-0 sshd-session[144002]: Failed password for root from 134.199.199.215 port 55568 ssh2
Oct 09 16:17:24 compute-0 nova_compute[117331]: 2025-10-09 16:17:24.867 2 DEBUG oslo_concurrency.lockutils [None req-9dcbfd8b-f78a-4d51-8d35-32f806aeb801 005258eefa5345409efe13f6a1de22ab c076c42271004e7aa86e84416e33f826 - - default default] Acquiring lock "12702257-b2eb-4842-bdf8-25e7a3b20038" by "nova.compute.manager.ComputeManager.confirm_resize.<locals>.do_confirm_resize" inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:405
Oct 09 16:17:24 compute-0 nova_compute[117331]: 2025-10-09 16:17:24.868 2 DEBUG oslo_concurrency.lockutils [None req-9dcbfd8b-f78a-4d51-8d35-32f806aeb801 005258eefa5345409efe13f6a1de22ab c076c42271004e7aa86e84416e33f826 - - default default] Lock "12702257-b2eb-4842-bdf8-25e7a3b20038" acquired by "nova.compute.manager.ComputeManager.confirm_resize.<locals>.do_confirm_resize" :: waited 0.001s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:410
Oct 09 16:17:24 compute-0 nova_compute[117331]: 2025-10-09 16:17:24.868 2 DEBUG nova.compute.manager [None req-9dcbfd8b-f78a-4d51-8d35-32f806aeb801 005258eefa5345409efe13f6a1de22ab c076c42271004e7aa86e84416e33f826 - - default default] [instance: 12702257-b2eb-4842-bdf8-25e7a3b20038] Going to confirm migration 3 do_confirm_resize /usr/lib/python3.12/site-packages/nova/compute/manager.py:5283
Oct 09 16:17:25 compute-0 nova_compute[117331]: 2025-10-09 16:17:25.012 2 WARNING nova.virt.libvirt.driver [None req-f2c8be06-10a6-4d4e-83ce-15f398bd0cd3 005258eefa5345409efe13f6a1de22ab c076c42271004e7aa86e84416e33f826 - - default default] Periodic task is updating the host stats, it is trying to get disk info for instance-00000008, but the backing disk storage was removed by a concurrent operation such as resize. Error: No disk at /var/lib/nova/instances/12702257-b2eb-4842-bdf8-25e7a3b20038/disk: nova.exception.DiskNotFound: No disk at /var/lib/nova/instances/12702257-b2eb-4842-bdf8-25e7a3b20038/disk
Oct 09 16:17:25 compute-0 nova_compute[117331]: 2025-10-09 16:17:25.195 2 WARNING nova.virt.libvirt.driver [None req-f2c8be06-10a6-4d4e-83ce-15f398bd0cd3 005258eefa5345409efe13f6a1de22ab c076c42271004e7aa86e84416e33f826 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Oct 09 16:17:25 compute-0 nova_compute[117331]: 2025-10-09 16:17:25.197 2 DEBUG oslo_concurrency.processutils [None req-f2c8be06-10a6-4d4e-83ce-15f398bd0cd3 005258eefa5345409efe13f6a1de22ab c076c42271004e7aa86e84416e33f826 - - default default] Running cmd (subprocess): env LANG=C uptime execute /usr/lib/python3.12/site-packages/oslo_concurrency/processutils.py:349
Oct 09 16:17:25 compute-0 nova_compute[117331]: 2025-10-09 16:17:25.226 2 DEBUG oslo_concurrency.processutils [None req-f2c8be06-10a6-4d4e-83ce-15f398bd0cd3 005258eefa5345409efe13f6a1de22ab c076c42271004e7aa86e84416e33f826 - - default default] CMD "env LANG=C uptime" returned: 0 in 0.030s execute /usr/lib/python3.12/site-packages/oslo_concurrency/processutils.py:372
Oct 09 16:17:25 compute-0 nova_compute[117331]: 2025-10-09 16:17:25.227 2 DEBUG nova.compute.resource_tracker [None req-f2c8be06-10a6-4d4e-83ce-15f398bd0cd3 005258eefa5345409efe13f6a1de22ab c076c42271004e7aa86e84416e33f826 - - default default] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=6162MB free_disk=73.23319625854492GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.12/site-packages/nova/compute/resource_tracker.py:1136
Oct 09 16:17:25 compute-0 nova_compute[117331]: 2025-10-09 16:17:25.228 2 DEBUG oslo_concurrency.lockutils [None req-f2c8be06-10a6-4d4e-83ce-15f398bd0cd3 005258eefa5345409efe13f6a1de22ab c076c42271004e7aa86e84416e33f826 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:405
Oct 09 16:17:25 compute-0 nova_compute[117331]: 2025-10-09 16:17:25.228 2 DEBUG oslo_concurrency.lockutils [None req-f2c8be06-10a6-4d4e-83ce-15f398bd0cd3 005258eefa5345409efe13f6a1de22ab c076c42271004e7aa86e84416e33f826 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:410
Oct 09 16:17:25 compute-0 nova_compute[117331]: 2025-10-09 16:17:25.383 2 DEBUG nova.objects.instance [None req-9dcbfd8b-f78a-4d51-8d35-32f806aeb801 005258eefa5345409efe13f6a1de22ab c076c42271004e7aa86e84416e33f826 - - default default] Lazy-loading 'info_cache' on Instance uuid 12702257-b2eb-4842-bdf8-25e7a3b20038 obj_load_attr /usr/lib/python3.12/site-packages/nova/objects/instance.py:1141
Oct 09 16:17:25 compute-0 nova_compute[117331]: 2025-10-09 16:17:25.637 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 09 16:17:25 compute-0 sshd-session[144027]: Invalid user odoo18 from 134.199.199.215 port 55582
Oct 09 16:17:25 compute-0 sshd-session[144027]: pam_unix(sshd:auth): check pass; user unknown
Oct 09 16:17:25 compute-0 sshd-session[144027]: pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=134.199.199.215
Oct 09 16:17:25 compute-0 nova_compute[117331]: 2025-10-09 16:17:25.901 2 WARNING neutronclient.v2_0.client [None req-9dcbfd8b-f78a-4d51-8d35-32f806aeb801 005258eefa5345409efe13f6a1de22ab c076c42271004e7aa86e84416e33f826 - - default default] The python binding code in neutronclient is deprecated in favor of OpenstackSDK, please use that as this will be removed in a future release.
Oct 09 16:17:26 compute-0 nova_compute[117331]: 2025-10-09 16:17:26.248 2 DEBUG nova.compute.resource_tracker [None req-f2c8be06-10a6-4d4e-83ce-15f398bd0cd3 005258eefa5345409efe13f6a1de22ab c076c42271004e7aa86e84416e33f826 - - default default] Migration for instance 70a451f7-5bde-42f1-a448-d48b3b24d9d6 refers to another host's instance! _pair_instances_to_migrations /usr/lib/python3.12/site-packages/nova/compute/resource_tracker.py:979
Oct 09 16:17:26 compute-0 nova_compute[117331]: 2025-10-09 16:17:26.249 2 DEBUG nova.compute.resource_tracker [None req-f2c8be06-10a6-4d4e-83ce-15f398bd0cd3 005258eefa5345409efe13f6a1de22ab c076c42271004e7aa86e84416e33f826 - - default default] Migration for instance 12702257-b2eb-4842-bdf8-25e7a3b20038 refers to another host's instance! _pair_instances_to_migrations /usr/lib/python3.12/site-packages/nova/compute/resource_tracker.py:979
Oct 09 16:17:26 compute-0 nova_compute[117331]: 2025-10-09 16:17:26.297 2 WARNING neutronclient.v2_0.client [None req-9dcbfd8b-f78a-4d51-8d35-32f806aeb801 005258eefa5345409efe13f6a1de22ab c076c42271004e7aa86e84416e33f826 - - default default] The python binding code in neutronclient is deprecated in favor of OpenstackSDK, please use that as this will be removed in a future release.
Oct 09 16:17:26 compute-0 nova_compute[117331]: 2025-10-09 16:17:26.297 2 WARNING neutronclient.v2_0.client [None req-9dcbfd8b-f78a-4d51-8d35-32f806aeb801 005258eefa5345409efe13f6a1de22ab c076c42271004e7aa86e84416e33f826 - - default default] The python binding code in neutronclient is deprecated in favor of OpenstackSDK, please use that as this will be removed in a future release.
Oct 09 16:17:26 compute-0 nova_compute[117331]: 2025-10-09 16:17:26.383 2 DEBUG neutronclient.v2_0.client [None req-9dcbfd8b-f78a-4d51-8d35-32f806aeb801 005258eefa5345409efe13f6a1de22ab c076c42271004e7aa86e84416e33f826 - - default default] Error message: {"NeutronError": {"type": "PortBindingNotFound", "message": "Binding for port 9ac52a72-3ea9-4d92-8114-f77f5fe13293 for host compute-0.ctlplane.example.com could not be found.", "detail": ""}} _handle_fault_response /usr/lib/python3.12/site-packages/neutronclient/v2_0/client.py:265
Oct 09 16:17:26 compute-0 nova_compute[117331]: 2025-10-09 16:17:26.384 2 DEBUG oslo_concurrency.lockutils [None req-9dcbfd8b-f78a-4d51-8d35-32f806aeb801 005258eefa5345409efe13f6a1de22ab c076c42271004e7aa86e84416e33f826 - - default default] Acquiring lock "refresh_cache-12702257-b2eb-4842-bdf8-25e7a3b20038" lock /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:313
Oct 09 16:17:26 compute-0 nova_compute[117331]: 2025-10-09 16:17:26.384 2 DEBUG oslo_concurrency.lockutils [None req-9dcbfd8b-f78a-4d51-8d35-32f806aeb801 005258eefa5345409efe13f6a1de22ab c076c42271004e7aa86e84416e33f826 - - default default] Acquired lock "refresh_cache-12702257-b2eb-4842-bdf8-25e7a3b20038" lock /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:316
Oct 09 16:17:26 compute-0 nova_compute[117331]: 2025-10-09 16:17:26.384 2 DEBUG nova.network.neutron [None req-9dcbfd8b-f78a-4d51-8d35-32f806aeb801 005258eefa5345409efe13f6a1de22ab c076c42271004e7aa86e84416e33f826 - - default default] [instance: 12702257-b2eb-4842-bdf8-25e7a3b20038] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.12/site-packages/nova/network/neutron.py:2070
Oct 09 16:17:26 compute-0 sshd-session[144002]: Connection closed by authenticating user root 134.199.199.215 port 55568 [preauth]
Oct 09 16:17:26 compute-0 nova_compute[117331]: 2025-10-09 16:17:26.755 2 DEBUG nova.compute.resource_tracker [None req-f2c8be06-10a6-4d4e-83ce-15f398bd0cd3 005258eefa5345409efe13f6a1de22ab c076c42271004e7aa86e84416e33f826 - - default default] [instance: 70a451f7-5bde-42f1-a448-d48b3b24d9d6] Skipping migration as instance is neither resizing nor live-migrating. _update_usage_from_migrations /usr/lib/python3.12/site-packages/nova/compute/resource_tracker.py:1596
Oct 09 16:17:26 compute-0 nova_compute[117331]: 2025-10-09 16:17:26.890 2 WARNING neutronclient.v2_0.client [None req-9dcbfd8b-f78a-4d51-8d35-32f806aeb801 005258eefa5345409efe13f6a1de22ab c076c42271004e7aa86e84416e33f826 - - default default] The python binding code in neutronclient is deprecated in favor of OpenstackSDK, please use that as this will be removed in a future release.
Oct 09 16:17:27 compute-0 nova_compute[117331]: 2025-10-09 16:17:27.262 2 INFO nova.compute.resource_tracker [None req-f2c8be06-10a6-4d4e-83ce-15f398bd0cd3 005258eefa5345409efe13f6a1de22ab c076c42271004e7aa86e84416e33f826 - - default default] [instance: 12702257-b2eb-4842-bdf8-25e7a3b20038] Updating resource usage from migration b42b9cbd-ad2f-415a-bdaa-1a339470737f
Oct 09 16:17:27 compute-0 nova_compute[117331]: 2025-10-09 16:17:27.263 2 DEBUG nova.compute.resource_tracker [None req-f2c8be06-10a6-4d4e-83ce-15f398bd0cd3 005258eefa5345409efe13f6a1de22ab c076c42271004e7aa86e84416e33f826 - - default default] [instance: 12702257-b2eb-4842-bdf8-25e7a3b20038] Starting to track outgoing migration b42b9cbd-ad2f-415a-bdaa-1a339470737f with flavor 5aeb5bf9-70f3-4ab6-b418-2c3319a7d2b3 _update_usage_from_migration /usr/lib/python3.12/site-packages/nova/compute/resource_tracker.py:1549
Oct 09 16:17:27 compute-0 nova_compute[117331]: 2025-10-09 16:17:27.284 2 DEBUG nova.compute.resource_tracker [None req-f2c8be06-10a6-4d4e-83ce-15f398bd0cd3 005258eefa5345409efe13f6a1de22ab c076c42271004e7aa86e84416e33f826 - - default default] Migration 2df1389d-e874-4268-9b0c-0fd9d5f9e283 is active on this compute host and has allocations in placement: {'resources': {'VCPU': 1, 'MEMORY_MB': 128, 'DISK_GB': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.12/site-packages/nova/compute/resource_tracker.py:1745
Oct 09 16:17:27 compute-0 nova_compute[117331]: 2025-10-09 16:17:27.284 2 DEBUG nova.compute.resource_tracker [None req-f2c8be06-10a6-4d4e-83ce-15f398bd0cd3 005258eefa5345409efe13f6a1de22ab c076c42271004e7aa86e84416e33f826 - - default default] Migration b42b9cbd-ad2f-415a-bdaa-1a339470737f is active on this compute host and has allocations in placement: {'resources': {'VCPU': 1, 'MEMORY_MB': 128, 'DISK_GB': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.12/site-packages/nova/compute/resource_tracker.py:1745
Oct 09 16:17:27 compute-0 nova_compute[117331]: 2025-10-09 16:17:27.285 2 DEBUG nova.compute.resource_tracker [None req-f2c8be06-10a6-4d4e-83ce-15f398bd0cd3 005258eefa5345409efe13f6a1de22ab c076c42271004e7aa86e84416e33f826 - - default default] Total usable vcpus: 8, total allocated vcpus: 1 _report_final_resource_view /usr/lib/python3.12/site-packages/nova/compute/resource_tracker.py:1159
Oct 09 16:17:27 compute-0 nova_compute[117331]: 2025-10-09 16:17:27.285 2 DEBUG nova.compute.resource_tracker [None req-f2c8be06-10a6-4d4e-83ce-15f398bd0cd3 005258eefa5345409efe13f6a1de22ab c076c42271004e7aa86e84416e33f826 - - default default] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=640MB phys_disk=79GB used_disk=1GB total_vcpus=8 used_vcpus=1 pci_stats=[] stats={'failed_builds': '0', 'uptime': ' 16:17:25 up 26 min,  0 user,  load average: 0.29, 0.28, 0.30\n'} _report_final_resource_view /usr/lib/python3.12/site-packages/nova/compute/resource_tracker.py:1168
Oct 09 16:17:27 compute-0 nova_compute[117331]: 2025-10-09 16:17:27.326 2 DEBUG nova.compute.provider_tree [None req-f2c8be06-10a6-4d4e-83ce-15f398bd0cd3 005258eefa5345409efe13f6a1de22ab c076c42271004e7aa86e84416e33f826 - - default default] Inventory has not changed in ProviderTree for provider: 593051b8-2000-437f-a915-2616fc8b1671 update_inventory /usr/lib/python3.12/site-packages/nova/compute/provider_tree.py:180
Oct 09 16:17:27 compute-0 sshd-session[144027]: Failed password for invalid user odoo18 from 134.199.199.215 port 55582 ssh2
Oct 09 16:17:27 compute-0 nova_compute[117331]: 2025-10-09 16:17:27.794 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 09 16:17:27 compute-0 podman[144029]: 2025-10-09 16:17:27.822067028 +0000 UTC m=+0.050934610 container health_status 86aa93e887899e54d383b0fc17fb3c5637680d1d8e146a130a557c837f0260fe (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>)
Oct 09 16:17:27 compute-0 nova_compute[117331]: 2025-10-09 16:17:27.832 2 DEBUG nova.scheduler.client.report [None req-f2c8be06-10a6-4d4e-83ce-15f398bd0cd3 005258eefa5345409efe13f6a1de22ab c076c42271004e7aa86e84416e33f826 - - default default] Inventory has not changed for provider 593051b8-2000-437f-a915-2616fc8b1671 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 79, 'reserved': 1, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.12/site-packages/nova/scheduler/client/report.py:958
Oct 09 16:17:27 compute-0 sshd-session[144027]: Connection closed by invalid user odoo18 134.199.199.215 port 55582 [preauth]
Oct 09 16:17:27 compute-0 nova_compute[117331]: 2025-10-09 16:17:27.968 2 WARNING neutronclient.v2_0.client [None req-9dcbfd8b-f78a-4d51-8d35-32f806aeb801 005258eefa5345409efe13f6a1de22ab c076c42271004e7aa86e84416e33f826 - - default default] The python binding code in neutronclient is deprecated in favor of OpenstackSDK, please use that as this will be removed in a future release.
Oct 09 16:17:28 compute-0 nova_compute[117331]: 2025-10-09 16:17:28.341 2 DEBUG nova.compute.resource_tracker [None req-f2c8be06-10a6-4d4e-83ce-15f398bd0cd3 005258eefa5345409efe13f6a1de22ab c076c42271004e7aa86e84416e33f826 - - default default] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.12/site-packages/nova/compute/resource_tracker.py:1097
Oct 09 16:17:28 compute-0 nova_compute[117331]: 2025-10-09 16:17:28.341 2 DEBUG oslo_concurrency.lockutils [None req-f2c8be06-10a6-4d4e-83ce-15f398bd0cd3 005258eefa5345409efe13f6a1de22ab c076c42271004e7aa86e84416e33f826 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 3.113s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:424
Oct 09 16:17:28 compute-0 nova_compute[117331]: 2025-10-09 16:17:28.357 2 INFO nova.compute.manager [None req-f2c8be06-10a6-4d4e-83ce-15f398bd0cd3 005258eefa5345409efe13f6a1de22ab c076c42271004e7aa86e84416e33f826 - - default default] [instance: 70a451f7-5bde-42f1-a448-d48b3b24d9d6] Migrating instance to compute-1.ctlplane.example.com finished successfully.
Oct 09 16:17:28 compute-0 nova_compute[117331]: 2025-10-09 16:17:28.769 2 DEBUG nova.network.neutron [None req-9dcbfd8b-f78a-4d51-8d35-32f806aeb801 005258eefa5345409efe13f6a1de22ab c076c42271004e7aa86e84416e33f826 - - default default] [instance: 12702257-b2eb-4842-bdf8-25e7a3b20038] Updating instance_info_cache with network_info: [{"id": "9ac52a72-3ea9-4d92-8114-f77f5fe13293", "address": "fa:16:3e:bf:94:75", "network": {"id": "7ed4244a-b510-4df6-9ffd-2f86603932fc", "bridge": "br-int", "label": "tempest-TestExecuteActionsViaActuator-1219246163-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "153d68ab33e64b958060574bb1741725", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap9ac52a72-3e", "ovs_interfaceid": "9ac52a72-3ea9-4d92-8114-f77f5fe13293", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.12/site-packages/nova/network/neutron.py:116
Oct 09 16:17:29 compute-0 sshd[52903]: drop connection #0 from [134.199.199.215]:33942 on [38.102.83.110]:22 penalty: failed authentication
Oct 09 16:17:29 compute-0 nova_compute[117331]: 2025-10-09 16:17:29.278 2 DEBUG oslo_concurrency.lockutils [None req-9dcbfd8b-f78a-4d51-8d35-32f806aeb801 005258eefa5345409efe13f6a1de22ab c076c42271004e7aa86e84416e33f826 - - default default] Releasing lock "refresh_cache-12702257-b2eb-4842-bdf8-25e7a3b20038" lock /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:334
Oct 09 16:17:29 compute-0 nova_compute[117331]: 2025-10-09 16:17:29.279 2 DEBUG nova.objects.instance [None req-9dcbfd8b-f78a-4d51-8d35-32f806aeb801 005258eefa5345409efe13f6a1de22ab c076c42271004e7aa86e84416e33f826 - - default default] Lazy-loading 'migration_context' on Instance uuid 12702257-b2eb-4842-bdf8-25e7a3b20038 obj_load_attr /usr/lib/python3.12/site-packages/nova/objects/instance.py:1141
Oct 09 16:17:29 compute-0 nova_compute[117331]: 2025-10-09 16:17:29.414 2 INFO nova.scheduler.client.report [None req-f2c8be06-10a6-4d4e-83ce-15f398bd0cd3 005258eefa5345409efe13f6a1de22ab c076c42271004e7aa86e84416e33f826 - - default default] Deleted allocation for migration 2df1389d-e874-4268-9b0c-0fd9d5f9e283
Oct 09 16:17:29 compute-0 nova_compute[117331]: 2025-10-09 16:17:29.415 2 DEBUG nova.virt.libvirt.driver [None req-f2c8be06-10a6-4d4e-83ce-15f398bd0cd3 005258eefa5345409efe13f6a1de22ab c076c42271004e7aa86e84416e33f826 - - default default] [instance: 70a451f7-5bde-42f1-a448-d48b3b24d9d6] Live migration monitoring is all done _live_migration /usr/lib/python3.12/site-packages/nova/virt/libvirt/driver.py:11566
Oct 09 16:17:29 compute-0 podman[127775]: time="2025-10-09T16:17:29Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Oct 09 16:17:29 compute-0 podman[127775]: @ - - [09/Oct/2025:16:17:29 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 19526 "" "Go-http-client/1.1"
Oct 09 16:17:29 compute-0 podman[127775]: @ - - [09/Oct/2025:16:17:29 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 3016 "" "Go-http-client/1.1"
Oct 09 16:17:29 compute-0 nova_compute[117331]: 2025-10-09 16:17:29.786 2 DEBUG nova.objects.base [None req-9dcbfd8b-f78a-4d51-8d35-32f806aeb801 005258eefa5345409efe13f6a1de22ab c076c42271004e7aa86e84416e33f826 - - default default] Object Instance<12702257-b2eb-4842-bdf8-25e7a3b20038> lazy-loaded attributes: info_cache,migration_context wrapper /usr/lib/python3.12/site-packages/nova/objects/base.py:136
Oct 09 16:17:29 compute-0 nova_compute[117331]: 2025-10-09 16:17:29.798 2 DEBUG nova.virt.libvirt.vif [None req-9dcbfd8b-f78a-4d51-8d35-32f806aeb801 005258eefa5345409efe13f6a1de22ab c076c42271004e7aa86e84416e33f826 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,compute_id=2,config_drive='True',created_at=2025-10-09T16:16:08Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description=None,display_name='tempest-TestExecuteActionsViaActuator-server-925081779',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-1.ctlplane.example.com',hostname='tempest-testexecuteactionsviaactuator-server-925081779',id=8,image_ref='b7d6e0af-25e4-4227-9dc6-43143898ceee',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-10-09T16:17:18Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=MigrationContext,new_flavor=Flavor(1),node='compute-1.ctlplane.example.com',numa_topology=<?>,old_flavor=Flavor(1),os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='be4e2f9059cd48f5b44a612256e3fc7b',ramdisk_id='',reservation_id='r-zkb46622',resources=<?>,root_device_name='/dev/vda',root_gb=1,security_groups=<?>,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member,manager,admin',image_base_image_ref='b7d6e0af-25e4-4227-9dc6-43143898ceee',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',old_vm_state='active',owner_project_name='tempest-TestExecuteActionsViaActuator-1347788182',owner_user_name='tempest-TestExecuteActionsViaActuator-1347788182-project-admin'},tags=<?>,task_state=None,terminated_at=None,trusted_certs=<?>,updated_at=2025-10-09T16:17:18Z,user_data=None,user_id='e5044998ddc3419bb14cc08417add581',uuid=12702257-b2eb-4842-bdf8-25e7a3b20038,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='resized') vif={"id": "9ac52a72-3ea9-4d92-8114-f77f5fe13293", "address": "fa:16:3e:bf:94:75", "network": {"id": "7ed4244a-b510-4df6-9ffd-2f86603932fc", "bridge": "br-int", "label": "tempest-TestExecuteActionsViaActuator-1219246163-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "153d68ab33e64b958060574bb1741725", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap9ac52a72-3e", "ovs_interfaceid": "9ac52a72-3ea9-4d92-8114-f77f5fe13293", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.12/site-packages/nova/virt/libvirt/vif.py:839
Oct 09 16:17:29 compute-0 nova_compute[117331]: 2025-10-09 16:17:29.799 2 DEBUG nova.network.os_vif_util [None req-9dcbfd8b-f78a-4d51-8d35-32f806aeb801 005258eefa5345409efe13f6a1de22ab c076c42271004e7aa86e84416e33f826 - - default default] Converting VIF {"id": "9ac52a72-3ea9-4d92-8114-f77f5fe13293", "address": "fa:16:3e:bf:94:75", "network": {"id": "7ed4244a-b510-4df6-9ffd-2f86603932fc", "bridge": "br-int", "label": "tempest-TestExecuteActionsViaActuator-1219246163-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "153d68ab33e64b958060574bb1741725", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap9ac52a72-3e", "ovs_interfaceid": "9ac52a72-3ea9-4d92-8114-f77f5fe13293", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.12/site-packages/nova/network/os_vif_util.py:511
Oct 09 16:17:29 compute-0 nova_compute[117331]: 2025-10-09 16:17:29.799 2 DEBUG nova.network.os_vif_util [None req-9dcbfd8b-f78a-4d51-8d35-32f806aeb801 005258eefa5345409efe13f6a1de22ab c076c42271004e7aa86e84416e33f826 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:bf:94:75,bridge_name='br-int',has_traffic_filtering=True,id=9ac52a72-3ea9-4d92-8114-f77f5fe13293,network=Network(7ed4244a-b510-4df6-9ffd-2f86603932fc),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap9ac52a72-3e') nova_to_osvif_vif /usr/lib/python3.12/site-packages/nova/network/os_vif_util.py:548
Oct 09 16:17:29 compute-0 nova_compute[117331]: 2025-10-09 16:17:29.800 2 DEBUG os_vif [None req-9dcbfd8b-f78a-4d51-8d35-32f806aeb801 005258eefa5345409efe13f6a1de22ab c076c42271004e7aa86e84416e33f826 - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:bf:94:75,bridge_name='br-int',has_traffic_filtering=True,id=9ac52a72-3ea9-4d92-8114-f77f5fe13293,network=Network(7ed4244a-b510-4df6-9ffd-2f86603932fc),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap9ac52a72-3e') unplug /usr/lib/python3.12/site-packages/os_vif/__init__.py:109
Oct 09 16:17:29 compute-0 nova_compute[117331]: 2025-10-09 16:17:29.802 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 22 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 09 16:17:29 compute-0 nova_compute[117331]: 2025-10-09 16:17:29.802 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap9ac52a72-3e, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.12/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 09 16:17:29 compute-0 nova_compute[117331]: 2025-10-09 16:17:29.802 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.12/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 09 16:17:29 compute-0 nova_compute[117331]: 2025-10-09 16:17:29.804 2 INFO os_vif [None req-9dcbfd8b-f78a-4d51-8d35-32f806aeb801 005258eefa5345409efe13f6a1de22ab c076c42271004e7aa86e84416e33f826 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:bf:94:75,bridge_name='br-int',has_traffic_filtering=True,id=9ac52a72-3ea9-4d92-8114-f77f5fe13293,network=Network(7ed4244a-b510-4df6-9ffd-2f86603932fc),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap9ac52a72-3e')
Oct 09 16:17:29 compute-0 nova_compute[117331]: 2025-10-09 16:17:29.804 2 DEBUG oslo_concurrency.lockutils [None req-9dcbfd8b-f78a-4d51-8d35-32f806aeb801 005258eefa5345409efe13f6a1de22ab c076c42271004e7aa86e84416e33f826 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.drop_move_claim_at_source" inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:405
Oct 09 16:17:29 compute-0 nova_compute[117331]: 2025-10-09 16:17:29.805 2 DEBUG oslo_concurrency.lockutils [None req-9dcbfd8b-f78a-4d51-8d35-32f806aeb801 005258eefa5345409efe13f6a1de22ab c076c42271004e7aa86e84416e33f826 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.drop_move_claim_at_source" :: waited 0.000s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:410
Oct 09 16:17:30 compute-0 nova_compute[117331]: 2025-10-09 16:17:30.347 2 DEBUG nova.compute.provider_tree [None req-9dcbfd8b-f78a-4d51-8d35-32f806aeb801 005258eefa5345409efe13f6a1de22ab c076c42271004e7aa86e84416e33f826 - - default default] Inventory has not changed in ProviderTree for provider: 593051b8-2000-437f-a915-2616fc8b1671 update_inventory /usr/lib/python3.12/site-packages/nova/compute/provider_tree.py:180
Oct 09 16:17:30 compute-0 nova_compute[117331]: 2025-10-09 16:17:30.686 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 09 16:17:30 compute-0 nova_compute[117331]: 2025-10-09 16:17:30.855 2 DEBUG nova.scheduler.client.report [None req-9dcbfd8b-f78a-4d51-8d35-32f806aeb801 005258eefa5345409efe13f6a1de22ab c076c42271004e7aa86e84416e33f826 - - default default] Inventory has not changed for provider 593051b8-2000-437f-a915-2616fc8b1671 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 79, 'reserved': 1, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.12/site-packages/nova/scheduler/client/report.py:958
Oct 09 16:17:31 compute-0 openstack_network_exporter[129925]: ERROR   16:17:31 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Oct 09 16:17:31 compute-0 openstack_network_exporter[129925]: ERROR   16:17:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Oct 09 16:17:31 compute-0 openstack_network_exporter[129925]: ERROR   16:17:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Oct 09 16:17:31 compute-0 openstack_network_exporter[129925]: ERROR   16:17:31 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Oct 09 16:17:31 compute-0 openstack_network_exporter[129925]: 
Oct 09 16:17:31 compute-0 openstack_network_exporter[129925]: ERROR   16:17:31 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Oct 09 16:17:31 compute-0 openstack_network_exporter[129925]: 
Oct 09 16:17:31 compute-0 podman[144054]: 2025-10-09 16:17:31.870394454 +0000 UTC m=+0.093831083 container health_status 8227728d23f374cb9528be6ddf01dff38d8c792912a17b0d137932a18390b820 (image=38.102.83.66:5001/podified-master-centos10/openstack-iscsid:watcher_latest, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.4, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251007, org.label-schema.schema-version=1.0, config_id=iscsid, container_name=iscsid, managed_by=edpm_ansible, tcib_build_tag=watcher_latest, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': '38.102.83.66:5001/podified-master-centos10/openstack-iscsid:watcher_latest', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']})
Oct 09 16:17:31 compute-0 podman[144053]: 2025-10-09 16:17:31.8705619 +0000 UTC m=+0.098016607 container health_status 0e966c7a283689daf23c5db5017f23f685ee836319240188a9f70ddd246563c5 (image=38.102.83.66:5001/podified-master-centos10/openstack-neutron-metadata-agent-ovn:watcher_latest, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_metadata_agent, org.label-schema.schema-version=1.0, tcib_build_tag=watcher_latest, io.buildah.version=1.41.4, org.label-schema.license=GPLv2, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251007, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': '38.102.83.66:5001/podified-master-centos10/openstack-neutron-metadata-agent-ovn:watcher_latest', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, managed_by=edpm_ansible)
Oct 09 16:17:31 compute-0 nova_compute[117331]: 2025-10-09 16:17:31.876 2 DEBUG oslo_concurrency.lockutils [None req-9dcbfd8b-f78a-4d51-8d35-32f806aeb801 005258eefa5345409efe13f6a1de22ab c076c42271004e7aa86e84416e33f826 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.drop_move_claim_at_source" :: held 2.071s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:424
Oct 09 16:17:32 compute-0 sshd[52903]: drop connection #0 from [134.199.199.215]:33954 on [38.102.83.110]:22 penalty: failed authentication
Oct 09 16:17:32 compute-0 nova_compute[117331]: 2025-10-09 16:17:32.448 2 INFO nova.scheduler.client.report [None req-9dcbfd8b-f78a-4d51-8d35-32f806aeb801 005258eefa5345409efe13f6a1de22ab c076c42271004e7aa86e84416e33f826 - - default default] Deleted allocation for migration b42b9cbd-ad2f-415a-bdaa-1a339470737f
Oct 09 16:17:32 compute-0 nova_compute[117331]: 2025-10-09 16:17:32.798 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 09 16:17:32 compute-0 nova_compute[117331]: 2025-10-09 16:17:32.957 2 DEBUG oslo_concurrency.lockutils [None req-9dcbfd8b-f78a-4d51-8d35-32f806aeb801 005258eefa5345409efe13f6a1de22ab c076c42271004e7aa86e84416e33f826 - - default default] Lock "12702257-b2eb-4842-bdf8-25e7a3b20038" "released" by "nova.compute.manager.ComputeManager.confirm_resize.<locals>.do_confirm_resize" :: held 8.089s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:424
Oct 09 16:17:35 compute-0 ovn_metadata_agent[28608]: 2025-10-09 16:17:35.295 28613 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:405
Oct 09 16:17:35 compute-0 ovn_metadata_agent[28608]: 2025-10-09 16:17:35.297 28613 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.002s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:410
Oct 09 16:17:35 compute-0 ovn_metadata_agent[28608]: 2025-10-09 16:17:35.297 28613 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:424
Oct 09 16:17:35 compute-0 nova_compute[117331]: 2025-10-09 16:17:35.692 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 09 16:17:35 compute-0 sshd[52903]: drop connection #0 from [134.199.199.215]:33960 on [38.102.83.110]:22 penalty: failed authentication
Oct 09 16:17:37 compute-0 nova_compute[117331]: 2025-10-09 16:17:37.801 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 09 16:17:39 compute-0 sshd[52903]: drop connection #0 from [134.199.199.215]:59452 on [38.102.83.110]:22 penalty: failed authentication
Oct 09 16:17:40 compute-0 nova_compute[117331]: 2025-10-09 16:17:40.692 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 09 16:17:41 compute-0 podman[144093]: 2025-10-09 16:17:41.864858473 +0000 UTC m=+0.089948750 container health_status 64fa251112d01abb34ec1bb8e22019815ddcaaaa391ce5ec91517147ac5a9994 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., vcs-type=git, architecture=x86_64, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, maintainer=Red Hat, Inc., description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.expose-services=, io.buildah.version=1.33.7, release=1755695350, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, name=ubi9-minimal, vendor=Red Hat, Inc., build-date=2025-08-20T13:12:41, container_name=openstack_network_exporter, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., config_id=edpm, com.redhat.component=ubi9-minimal-container, distribution-scope=public, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.tags=minimal rhel9, version=9.6, url=https://catalog.redhat.com/en/search?searchType=containers, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b)
Oct 09 16:17:42 compute-0 sshd[52903]: drop connection #0 from [134.199.199.215]:59460 on [38.102.83.110]:22 penalty: failed authentication
Oct 09 16:17:42 compute-0 nova_compute[117331]: 2025-10-09 16:17:42.804 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 09 16:17:43 compute-0 podman[144115]: 2025-10-09 16:17:43.903999062 +0000 UTC m=+0.126521842 container health_status d615e4f24ff62fbb14be4a63e6da568ec5b22309773c1c66103b6b4de64a8a2f (image=38.102.83.66:5001/podified-master-centos10/openstack-ovn-controller:watcher_latest, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.build-date=20251007, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=ovn_controller, container_name=ovn_controller, io.buildah.version=1.41.4, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': '38.102.83.66:5001/podified-master-centos10/openstack-ovn-controller:watcher_latest', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 10 Base Image, tcib_build_tag=watcher_latest)
Oct 09 16:17:45 compute-0 nova_compute[117331]: 2025-10-09 16:17:45.722 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 09 16:17:46 compute-0 sshd-session[144142]: Invalid user gitlab-runner from 134.199.199.215 port 59474
Oct 09 16:17:46 compute-0 sshd-session[144142]: pam_unix(sshd:auth): check pass; user unknown
Oct 09 16:17:46 compute-0 sshd-session[144142]: pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=134.199.199.215
Oct 09 16:17:47 compute-0 nova_compute[117331]: 2025-10-09 16:17:47.808 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 09 16:17:48 compute-0 sshd-session[144142]: Failed password for invalid user gitlab-runner from 134.199.199.215 port 59474 ssh2
Oct 09 16:17:48 compute-0 sshd-session[144142]: Connection closed by invalid user gitlab-runner 134.199.199.215 port 59474 [preauth]
Oct 09 16:17:49 compute-0 sshd-session[144144]: Invalid user dev from 134.199.199.215 port 55914
Oct 09 16:17:49 compute-0 sshd-session[144144]: pam_unix(sshd:auth): check pass; user unknown
Oct 09 16:17:49 compute-0 sshd-session[144144]: pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=134.199.199.215
Oct 09 16:17:50 compute-0 nova_compute[117331]: 2025-10-09 16:17:50.724 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 09 16:17:51 compute-0 sshd-session[144144]: Failed password for invalid user dev from 134.199.199.215 port 55914 ssh2
Oct 09 16:17:52 compute-0 sshd-session[144144]: Connection closed by invalid user dev 134.199.199.215 port 55914 [preauth]
Oct 09 16:17:52 compute-0 sshd-session[144146]: Invalid user samba from 134.199.199.215 port 55918
Oct 09 16:17:52 compute-0 sshd-session[144146]: pam_unix(sshd:auth): check pass; user unknown
Oct 09 16:17:52 compute-0 sshd-session[144146]: pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=134.199.199.215
Oct 09 16:17:52 compute-0 nova_compute[117331]: 2025-10-09 16:17:52.811 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 09 16:17:53 compute-0 podman[144148]: 2025-10-09 16:17:53.863036565 +0000 UTC m=+0.084548918 container health_status 270830b5167cafbab735d8118980f26987e673cd0fa464dd2202394830bcca66 (image=38.102.83.66:5001/podified-master-centos10/openstack-multipathd:watcher_latest, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, tcib_build_tag=watcher_latest, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': '38.102.83.66:5001/podified-master-centos10/openstack-multipathd:watcher_latest', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, io.buildah.version=1.41.4, managed_by=edpm_ansible, org.label-schema.build-date=20251007, tcib_managed=true, config_id=multipathd, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.schema-version=1.0)
Oct 09 16:17:54 compute-0 sshd-session[144146]: Failed password for invalid user samba from 134.199.199.215 port 55918 ssh2
Oct 09 16:17:54 compute-0 sshd-session[144146]: Connection closed by invalid user samba 134.199.199.215 port 55918 [preauth]
Oct 09 16:17:55 compute-0 nova_compute[117331]: 2025-10-09 16:17:55.726 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 09 16:17:55 compute-0 unix_chkpwd[144170]: password check failed for user (root)
Oct 09 16:17:55 compute-0 sshd-session[144168]: pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=134.199.199.215  user=root
Oct 09 16:17:57 compute-0 nova_compute[117331]: 2025-10-09 16:17:57.815 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 09 16:17:58 compute-0 sshd-session[144168]: Failed password for root from 134.199.199.215 port 55932 ssh2
Oct 09 16:17:58 compute-0 podman[144171]: 2025-10-09 16:17:58.856691877 +0000 UTC m=+0.080235702 container health_status 86aa93e887899e54d383b0fc17fb3c5637680d1d8e146a130a557c837f0260fe (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible)
Oct 09 16:17:59 compute-0 sshd-session[144194]: Invalid user ftpuser from 134.199.199.215 port 43166
Oct 09 16:17:59 compute-0 sshd-session[144194]: pam_unix(sshd:auth): check pass; user unknown
Oct 09 16:17:59 compute-0 sshd-session[144194]: pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=134.199.199.215
Oct 09 16:17:59 compute-0 podman[127775]: time="2025-10-09T16:17:59Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Oct 09 16:17:59 compute-0 podman[127775]: @ - - [09/Oct/2025:16:17:59 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 19526 "" "Go-http-client/1.1"
Oct 09 16:17:59 compute-0 podman[127775]: @ - - [09/Oct/2025:16:17:59 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 3022 "" "Go-http-client/1.1"
Oct 09 16:18:00 compute-0 sshd-session[144168]: Connection closed by authenticating user root 134.199.199.215 port 55932 [preauth]
Oct 09 16:18:00 compute-0 nova_compute[117331]: 2025-10-09 16:18:00.729 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 09 16:18:01 compute-0 openstack_network_exporter[129925]: ERROR   16:18:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Oct 09 16:18:01 compute-0 openstack_network_exporter[129925]: ERROR   16:18:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Oct 09 16:18:01 compute-0 openstack_network_exporter[129925]: ERROR   16:18:01 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Oct 09 16:18:01 compute-0 openstack_network_exporter[129925]: ERROR   16:18:01 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Oct 09 16:18:01 compute-0 openstack_network_exporter[129925]: 
Oct 09 16:18:01 compute-0 openstack_network_exporter[129925]: ERROR   16:18:01 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Oct 09 16:18:01 compute-0 openstack_network_exporter[129925]: 
Oct 09 16:18:01 compute-0 sshd-session[144194]: Failed password for invalid user ftpuser from 134.199.199.215 port 43166 ssh2
Oct 09 16:18:02 compute-0 unix_chkpwd[144200]: password check failed for user (root)
Oct 09 16:18:02 compute-0 sshd-session[144198]: pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=134.199.199.215  user=root
Oct 09 16:18:02 compute-0 nova_compute[117331]: 2025-10-09 16:18:02.818 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 09 16:18:02 compute-0 podman[144201]: 2025-10-09 16:18:02.865661213 +0000 UTC m=+0.081716099 container health_status 0e966c7a283689daf23c5db5017f23f685ee836319240188a9f70ddd246563c5 (image=38.102.83.66:5001/podified-master-centos10/openstack-neutron-metadata-agent-ovn:watcher_latest, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_metadata_agent, org.label-schema.vendor=CentOS, tcib_build_tag=watcher_latest, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.build-date=20251007, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.schema-version=1.0, config_id=ovn_metadata_agent, io.buildah.version=1.41.4, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': '38.102.83.66:5001/podified-master-centos10/openstack-neutron-metadata-agent-ovn:watcher_latest', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2)
Oct 09 16:18:02 compute-0 podman[144202]: 2025-10-09 16:18:02.879558555 +0000 UTC m=+0.086213242 container health_status 8227728d23f374cb9528be6ddf01dff38d8c792912a17b0d137932a18390b820 (image=38.102.83.66:5001/podified-master-centos10/openstack-iscsid:watcher_latest, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': '38.102.83.66:5001/podified-master-centos10/openstack-iscsid:watcher_latest', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, container_name=iscsid, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 10 Base Image, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, tcib_build_tag=watcher_latest, io.buildah.version=1.41.4, org.label-schema.build-date=20251007, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_id=iscsid, tcib_managed=true)
Oct 09 16:18:03 compute-0 sshd-session[144194]: Connection closed by invalid user ftpuser 134.199.199.215 port 43166 [preauth]
Oct 09 16:18:04 compute-0 sshd-session[144198]: Failed password for root from 134.199.199.215 port 43176 ssh2
Oct 09 16:18:04 compute-0 sshd-session[144198]: Connection closed by authenticating user root 134.199.199.215 port 43176 [preauth]
Oct 09 16:18:05 compute-0 nova_compute[117331]: 2025-10-09 16:18:05.731 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 09 16:18:06 compute-0 unix_chkpwd[144242]: password check failed for user (root)
Oct 09 16:18:06 compute-0 sshd-session[144240]: pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=134.199.199.215  user=root
Oct 09 16:18:07 compute-0 nova_compute[117331]: 2025-10-09 16:18:07.820 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 09 16:18:08 compute-0 sshd-session[144240]: Failed password for root from 134.199.199.215 port 43202 ssh2
Oct 09 16:18:08 compute-0 ovn_metadata_agent[28608]: 2025-10-09 16:18:08.963 28613 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=10, options={'arp_ns_explicit_output': 'true', 'fdb_removal_limit': '0', 'ignore_lsp_down': 'false', 'mac_binding_removal_limit': '0', 'mac_prefix': '4a:65:5f', 'max_tunid': '16711680', 'northd_internal_version': '24.09.4-20.37.0-77.8', 'svc_monitor_mac': '32:24:1e:e3:5d:e8'}, ipsec=False) old=SB_Global(nb_cfg=9) matches /usr/lib/python3.12/site-packages/ovsdbapp/backend/ovs_idl/event.py:55
Oct 09 16:18:08 compute-0 ovn_metadata_agent[28608]: 2025-10-09 16:18:08.963 28613 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 7 seconds run /usr/lib/python3.12/site-packages/neutron/agent/ovn/metadata/agent.py:367
Oct 09 16:18:09 compute-0 nova_compute[117331]: 2025-10-09 16:18:09.018 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 09 16:18:09 compute-0 unix_chkpwd[144246]: password check failed for user (root)
Oct 09 16:18:09 compute-0 sshd-session[144244]: pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=134.199.199.215  user=root
Oct 09 16:18:10 compute-0 sshd-session[144240]: Connection closed by authenticating user root 134.199.199.215 port 43202 [preauth]
Oct 09 16:18:10 compute-0 nova_compute[117331]: 2025-10-09 16:18:10.732 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 09 16:18:11 compute-0 nova_compute[117331]: 2025-10-09 16:18:11.306 2 DEBUG oslo_service.periodic_task [None req-92a13bed-c081-4c7b-87f3-bafebefb79f2 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.12/site-packages/oslo_service/periodic_task.py:210
Oct 09 16:18:12 compute-0 sshd-session[144244]: Failed password for root from 134.199.199.215 port 49584 ssh2
Oct 09 16:18:12 compute-0 nova_compute[117331]: 2025-10-09 16:18:12.822 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 09 16:18:12 compute-0 podman[144247]: 2025-10-09 16:18:12.873819079 +0000 UTC m=+0.086386929 container health_status 64fa251112d01abb34ec1bb8e22019815ddcaaaa391ce5ec91517147ac5a9994 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, vendor=Red Hat, Inc., container_name=openstack_network_exporter, distribution-scope=public, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, build-date=2025-08-20T13:12:41, com.redhat.component=ubi9-minimal-container, io.openshift.tags=minimal rhel9, release=1755695350, config_id=edpm, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vcs-type=git, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., maintainer=Red Hat, Inc., url=https://catalog.redhat.com/en/search?searchType=containers, version=9.6, io.buildah.version=1.33.7, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, io.openshift.expose-services=, architecture=x86_64, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, name=ubi9-minimal, managed_by=edpm_ansible)
Oct 09 16:18:12 compute-0 sshd-session[144260]: Invalid user dspace from 134.199.199.215 port 49610
Oct 09 16:18:13 compute-0 sshd-session[144260]: pam_unix(sshd:auth): check pass; user unknown
Oct 09 16:18:13 compute-0 sshd-session[144260]: pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=134.199.199.215
Oct 09 16:18:13 compute-0 nova_compute[117331]: 2025-10-09 16:18:13.305 2 DEBUG oslo_service.periodic_task [None req-92a13bed-c081-4c7b-87f3-bafebefb79f2 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.12/site-packages/oslo_service/periodic_task.py:210
Oct 09 16:18:13 compute-0 nova_compute[117331]: 2025-10-09 16:18:13.820 2 DEBUG oslo_concurrency.lockutils [None req-92a13bed-c081-4c7b-87f3-bafebefb79f2 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:405
Oct 09 16:18:13 compute-0 nova_compute[117331]: 2025-10-09 16:18:13.821 2 DEBUG oslo_concurrency.lockutils [None req-92a13bed-c081-4c7b-87f3-bafebefb79f2 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:410
Oct 09 16:18:13 compute-0 nova_compute[117331]: 2025-10-09 16:18:13.821 2 DEBUG oslo_concurrency.lockutils [None req-92a13bed-c081-4c7b-87f3-bafebefb79f2 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:424
Oct 09 16:18:13 compute-0 nova_compute[117331]: 2025-10-09 16:18:13.821 2 DEBUG nova.compute.resource_tracker [None req-92a13bed-c081-4c7b-87f3-bafebefb79f2 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.12/site-packages/nova/compute/resource_tracker.py:937
Oct 09 16:18:13 compute-0 sshd-session[144244]: Connection closed by authenticating user root 134.199.199.215 port 49584 [preauth]
Oct 09 16:18:14 compute-0 nova_compute[117331]: 2025-10-09 16:18:14.023 2 WARNING nova.virt.libvirt.driver [None req-92a13bed-c081-4c7b-87f3-bafebefb79f2 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Oct 09 16:18:14 compute-0 nova_compute[117331]: 2025-10-09 16:18:14.024 2 DEBUG oslo_concurrency.processutils [None req-92a13bed-c081-4c7b-87f3-bafebefb79f2 - - - - - -] Running cmd (subprocess): env LANG=C uptime execute /usr/lib/python3.12/site-packages/oslo_concurrency/processutils.py:349
Oct 09 16:18:14 compute-0 nova_compute[117331]: 2025-10-09 16:18:14.057 2 DEBUG oslo_concurrency.processutils [None req-92a13bed-c081-4c7b-87f3-bafebefb79f2 - - - - - -] CMD "env LANG=C uptime" returned: 0 in 0.033s execute /usr/lib/python3.12/site-packages/oslo_concurrency/processutils.py:372
Oct 09 16:18:14 compute-0 nova_compute[117331]: 2025-10-09 16:18:14.058 2 DEBUG nova.compute.resource_tracker [None req-92a13bed-c081-4c7b-87f3-bafebefb79f2 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=6206MB free_disk=73.2621841430664GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.12/site-packages/nova/compute/resource_tracker.py:1136
Oct 09 16:18:14 compute-0 nova_compute[117331]: 2025-10-09 16:18:14.059 2 DEBUG oslo_concurrency.lockutils [None req-92a13bed-c081-4c7b-87f3-bafebefb79f2 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:405
Oct 09 16:18:14 compute-0 nova_compute[117331]: 2025-10-09 16:18:14.059 2 DEBUG oslo_concurrency.lockutils [None req-92a13bed-c081-4c7b-87f3-bafebefb79f2 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.001s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:410
Oct 09 16:18:14 compute-0 sshd-session[144260]: Failed password for invalid user dspace from 134.199.199.215 port 49610 ssh2
Oct 09 16:18:14 compute-0 podman[144271]: 2025-10-09 16:18:14.881392171 +0000 UTC m=+0.112819598 container health_status d615e4f24ff62fbb14be4a63e6da568ec5b22309773c1c66103b6b4de64a8a2f (image=38.102.83.66:5001/podified-master-centos10/openstack-ovn-controller:watcher_latest, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_controller, org.label-schema.schema-version=1.0, tcib_build_tag=watcher_latest, io.buildah.version=1.41.4, managed_by=edpm_ansible, org.label-schema.build-date=20251007, org.label-schema.license=GPLv2, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': '38.102.83.66:5001/podified-master-centos10/openstack-ovn-controller:watcher_latest', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.vendor=CentOS, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true)
Oct 09 16:18:15 compute-0 nova_compute[117331]: 2025-10-09 16:18:15.111 2 DEBUG nova.compute.resource_tracker [None req-92a13bed-c081-4c7b-87f3-bafebefb79f2 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.12/site-packages/nova/compute/resource_tracker.py:1159
Oct 09 16:18:15 compute-0 nova_compute[117331]: 2025-10-09 16:18:15.113 2 DEBUG nova.compute.resource_tracker [None req-92a13bed-c081-4c7b-87f3-bafebefb79f2 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=79GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] stats={'failed_builds': '0', 'uptime': ' 16:18:14 up 27 min,  0 user,  load average: 0.17, 0.25, 0.28\n'} _report_final_resource_view /usr/lib/python3.12/site-packages/nova/compute/resource_tracker.py:1168
Oct 09 16:18:15 compute-0 nova_compute[117331]: 2025-10-09 16:18:15.172 2 DEBUG nova.compute.provider_tree [None req-92a13bed-c081-4c7b-87f3-bafebefb79f2 - - - - - -] Inventory has not changed in ProviderTree for provider: 593051b8-2000-437f-a915-2616fc8b1671 update_inventory /usr/lib/python3.12/site-packages/nova/compute/provider_tree.py:180
Oct 09 16:18:15 compute-0 nova_compute[117331]: 2025-10-09 16:18:15.683 2 DEBUG nova.scheduler.client.report [None req-92a13bed-c081-4c7b-87f3-bafebefb79f2 - - - - - -] Inventory has not changed for provider 593051b8-2000-437f-a915-2616fc8b1671 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 79, 'reserved': 1, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.12/site-packages/nova/scheduler/client/report.py:958
Oct 09 16:18:15 compute-0 nova_compute[117331]: 2025-10-09 16:18:15.733 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 09 16:18:15 compute-0 sshd-session[144260]: Connection closed by invalid user dspace 134.199.199.215 port 49610 [preauth]
Oct 09 16:18:15 compute-0 ovn_metadata_agent[28608]: 2025-10-09 16:18:15.965 28613 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=9954897f-aa83-45dd-8e84-289816676c2a, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '10'}),), if_exists=True) do_commit /usr/lib/python3.12/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 09 16:18:16 compute-0 nova_compute[117331]: 2025-10-09 16:18:16.190 2 DEBUG nova.compute.resource_tracker [None req-92a13bed-c081-4c7b-87f3-bafebefb79f2 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.12/site-packages/nova/compute/resource_tracker.py:1097
Oct 09 16:18:16 compute-0 nova_compute[117331]: 2025-10-09 16:18:16.191 2 DEBUG oslo_concurrency.lockutils [None req-92a13bed-c081-4c7b-87f3-bafebefb79f2 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 2.132s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:424
Oct 09 16:18:16 compute-0 sshd[52903]: drop connection #0 from [134.199.199.215]:55306 on [38.102.83.110]:22 penalty: failed authentication
Oct 09 16:18:17 compute-0 nova_compute[117331]: 2025-10-09 16:18:17.191 2 DEBUG oslo_service.periodic_task [None req-92a13bed-c081-4c7b-87f3-bafebefb79f2 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.12/site-packages/oslo_service/periodic_task.py:210
Oct 09 16:18:17 compute-0 nova_compute[117331]: 2025-10-09 16:18:17.192 2 DEBUG oslo_service.periodic_task [None req-92a13bed-c081-4c7b-87f3-bafebefb79f2 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.12/site-packages/oslo_service/periodic_task.py:210
Oct 09 16:18:17 compute-0 nova_compute[117331]: 2025-10-09 16:18:17.192 2 DEBUG oslo_service.periodic_task [None req-92a13bed-c081-4c7b-87f3-bafebefb79f2 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.12/site-packages/oslo_service/periodic_task.py:210
Oct 09 16:18:17 compute-0 nova_compute[117331]: 2025-10-09 16:18:17.192 2 DEBUG nova.compute.manager [None req-92a13bed-c081-4c7b-87f3-bafebefb79f2 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.12/site-packages/nova/compute/manager.py:11228
Oct 09 16:18:17 compute-0 nova_compute[117331]: 2025-10-09 16:18:17.303 2 DEBUG oslo_service.periodic_task [None req-92a13bed-c081-4c7b-87f3-bafebefb79f2 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.12/site-packages/oslo_service/periodic_task.py:210
Oct 09 16:18:17 compute-0 nova_compute[117331]: 2025-10-09 16:18:17.826 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 09 16:18:18 compute-0 nova_compute[117331]: 2025-10-09 16:18:18.302 2 DEBUG oslo_service.periodic_task [None req-92a13bed-c081-4c7b-87f3-bafebefb79f2 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.12/site-packages/oslo_service/periodic_task.py:210
Oct 09 16:18:18 compute-0 nova_compute[117331]: 2025-10-09 16:18:18.811 2 DEBUG oslo_service.periodic_task [None req-92a13bed-c081-4c7b-87f3-bafebefb79f2 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.12/site-packages/oslo_service/periodic_task.py:210
Oct 09 16:18:19 compute-0 nova_compute[117331]: 2025-10-09 16:18:19.307 2 DEBUG oslo_service.periodic_task [None req-92a13bed-c081-4c7b-87f3-bafebefb79f2 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.12/site-packages/oslo_service/periodic_task.py:210
Oct 09 16:18:19 compute-0 sshd[52903]: drop connection #0 from [134.199.199.215]:55308 on [38.102.83.110]:22 penalty: failed authentication
Oct 09 16:18:20 compute-0 nova_compute[117331]: 2025-10-09 16:18:20.738 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 09 16:18:22 compute-0 nova_compute[117331]: 2025-10-09 16:18:22.829 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 09 16:18:22 compute-0 sshd[52903]: drop connection #0 from [134.199.199.215]:55322 on [38.102.83.110]:22 penalty: failed authentication
Oct 09 16:18:23 compute-0 nova_compute[117331]: 2025-10-09 16:18:23.285 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 09 16:18:24 compute-0 podman[144300]: 2025-10-09 16:18:24.832142641 +0000 UTC m=+0.063211062 container health_status 270830b5167cafbab735d8118980f26987e673cd0fa464dd2202394830bcca66 (image=38.102.83.66:5001/podified-master-centos10/openstack-multipathd:watcher_latest, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': '38.102.83.66:5001/podified-master-centos10/openstack-multipathd:watcher_latest', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, io.buildah.version=1.41.4, org.label-schema.schema-version=1.0, tcib_managed=true, config_id=multipathd, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 10 Base Image, container_name=multipathd, tcib_build_tag=watcher_latest, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251007)
Oct 09 16:18:25 compute-0 nova_compute[117331]: 2025-10-09 16:18:25.740 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 09 16:18:26 compute-0 sshd[52903]: drop connection #1 from [134.199.199.215]:40050 on [38.102.83.110]:22 penalty: failed authentication
Oct 09 16:18:26 compute-0 unix_chkpwd[144323]: password check failed for user (root)
Oct 09 16:18:26 compute-0 sshd-session[144321]: pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=80.94.93.176  user=root
Oct 09 16:18:27 compute-0 nova_compute[117331]: 2025-10-09 16:18:27.832 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 09 16:18:28 compute-0 sshd-session[144321]: Failed password for root from 80.94.93.176 port 12474 ssh2
Oct 09 16:18:28 compute-0 unix_chkpwd[144324]: password check failed for user (root)
Oct 09 16:18:29 compute-0 sshd[52903]: drop connection #1 from [134.199.199.215]:40062 on [38.102.83.110]:22 penalty: failed authentication
Oct 09 16:18:29 compute-0 podman[127775]: time="2025-10-09T16:18:29Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Oct 09 16:18:29 compute-0 podman[127775]: @ - - [09/Oct/2025:16:18:29 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 19526 "" "Go-http-client/1.1"
Oct 09 16:18:29 compute-0 podman[127775]: @ - - [09/Oct/2025:16:18:29 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 3023 "" "Go-http-client/1.1"
Oct 09 16:18:29 compute-0 podman[144325]: 2025-10-09 16:18:29.844685171 +0000 UTC m=+0.061155127 container health_status 86aa93e887899e54d383b0fc17fb3c5637680d1d8e146a130a557c837f0260fe (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible)
Oct 09 16:18:30 compute-0 sshd-session[144321]: Failed password for root from 80.94.93.176 port 12474 ssh2
Oct 09 16:18:30 compute-0 nova_compute[117331]: 2025-10-09 16:18:30.742 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 09 16:18:31 compute-0 ovn_metadata_agent[28608]: 2025-10-09 16:18:31.167 28613 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:8e:a1:d1 10.100.0.2'], port_security=[], type=localport, nat_addresses=[], virtual_parent=[], up=[False], options={}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.2/28', 'neutron:device_id': 'ovnmeta-7a8574b4-b4f7-483c-8050-7281b5ac5624', 'neutron:device_owner': 'network:distributed', 'neutron:mtu': '', 'neutron:network_name': 'neutron-7a8574b4-b4f7-483c-8050-7281b5ac5624', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'f4e80e48be374e249fbd628564fc7b82', 'neutron:revision_number': '2', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=ba3ccdc6-51b8-4206-add5-95d4a6a3eef3, chassis=[], tunnel_key=1, gateway_chassis=[], requested_chassis=[], logical_port=d5e2e4c8-4a14-4222-b045-a4cb3081bc1d) old=Port_Binding(mac=['fa:16:3e:8e:a1:d1'], external_ids={'neutron:cidrs': '', 'neutron:device_id': 'ovnmeta-7a8574b4-b4f7-483c-8050-7281b5ac5624', 'neutron:device_owner': 'network:distributed', 'neutron:mtu': '', 'neutron:network_name': 'neutron-7a8574b4-b4f7-483c-8050-7281b5ac5624', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'f4e80e48be374e249fbd628564fc7b82', 'neutron:revision_number': '1', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}) matches /usr/lib/python3.12/site-packages/ovsdbapp/backend/ovs_idl/event.py:55
Oct 09 16:18:31 compute-0 ovn_metadata_agent[28608]: 2025-10-09 16:18:31.168 28613 INFO neutron.agent.ovn.metadata.agent [-] Metadata Port d5e2e4c8-4a14-4222-b045-a4cb3081bc1d in datapath 7a8574b4-b4f7-483c-8050-7281b5ac5624 updated
Oct 09 16:18:31 compute-0 ovn_metadata_agent[28608]: 2025-10-09 16:18:31.169 28613 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 7a8574b4-b4f7-483c-8050-7281b5ac5624, tearing the namespace down if needed _get_provision_params /usr/lib/python3.12/site-packages/neutron/agent/ovn/metadata/agent.py:756
Oct 09 16:18:31 compute-0 ovn_metadata_agent[28608]: 2025-10-09 16:18:31.172 139687 DEBUG oslo.privsep.daemon [-] privsep: reply[2533b387-b6d2-46b4-8ee7-5d8e91d762f2]: (4, False) _call_back /usr/lib/python3.12/site-packages/oslo_privsep/daemon.py:515
Oct 09 16:18:31 compute-0 openstack_network_exporter[129925]: ERROR   16:18:31 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Oct 09 16:18:31 compute-0 openstack_network_exporter[129925]: ERROR   16:18:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Oct 09 16:18:31 compute-0 openstack_network_exporter[129925]: ERROR   16:18:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Oct 09 16:18:31 compute-0 openstack_network_exporter[129925]: ERROR   16:18:31 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Oct 09 16:18:31 compute-0 openstack_network_exporter[129925]: 
Oct 09 16:18:31 compute-0 openstack_network_exporter[129925]: ERROR   16:18:31 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Oct 09 16:18:31 compute-0 openstack_network_exporter[129925]: 
Oct 09 16:18:32 compute-0 unix_chkpwd[144349]: password check failed for user (root)
Oct 09 16:18:32 compute-0 nova_compute[117331]: 2025-10-09 16:18:32.835 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 09 16:18:33 compute-0 sshd[52903]: drop connection #1 from [134.199.199.215]:40068 on [38.102.83.110]:22 penalty: failed authentication
Oct 09 16:18:33 compute-0 podman[144350]: 2025-10-09 16:18:33.824481044 +0000 UTC m=+0.054223960 container health_status 0e966c7a283689daf23c5db5017f23f685ee836319240188a9f70ddd246563c5 (image=38.102.83.66:5001/podified-master-centos10/openstack-neutron-metadata-agent-ovn:watcher_latest, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': '38.102.83.66:5001/podified-master-centos10/openstack-neutron-metadata-agent-ovn:watcher_latest', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, org.label-schema.build-date=20251007, org.label-schema.vendor=CentOS, io.buildah.version=1.41.4, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 10 Base Image, tcib_build_tag=watcher_latest, tcib_managed=true)
Oct 09 16:18:33 compute-0 podman[144351]: 2025-10-09 16:18:33.833770516 +0000 UTC m=+0.060882780 container health_status 8227728d23f374cb9528be6ddf01dff38d8c792912a17b0d137932a18390b820 (image=38.102.83.66:5001/podified-master-centos10/openstack-iscsid:watcher_latest, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.4, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': '38.102.83.66:5001/podified-master-centos10/openstack-iscsid:watcher_latest', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, config_id=iscsid, container_name=iscsid, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_build_tag=watcher_latest, org.label-schema.build-date=20251007, org.label-schema.name=CentOS Stream 10 Base Image)
Oct 09 16:18:35 compute-0 sshd-session[144321]: Failed password for root from 80.94.93.176 port 12474 ssh2
Oct 09 16:18:35 compute-0 ovn_metadata_agent[28608]: 2025-10-09 16:18:35.298 28613 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:405
Oct 09 16:18:35 compute-0 ovn_metadata_agent[28608]: 2025-10-09 16:18:35.299 28613 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.000s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:410
Oct 09 16:18:35 compute-0 ovn_metadata_agent[28608]: 2025-10-09 16:18:35.299 28613 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:424
Oct 09 16:18:35 compute-0 nova_compute[117331]: 2025-10-09 16:18:35.743 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 09 16:18:36 compute-0 unix_chkpwd[144391]: password check failed for user (root)
Oct 09 16:18:36 compute-0 sshd-session[144389]: pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=134.199.199.215  user=root
Oct 09 16:18:37 compute-0 sshd-session[144321]: Received disconnect from 80.94.93.176 port 12474:11:  [preauth]
Oct 09 16:18:37 compute-0 sshd-session[144321]: Disconnected from authenticating user root 80.94.93.176 port 12474 [preauth]
Oct 09 16:18:37 compute-0 sshd-session[144321]: PAM 2 more authentication failures; logname= uid=0 euid=0 tty=ssh ruser= rhost=80.94.93.176  user=root
Oct 09 16:18:37 compute-0 ovn_metadata_agent[28608]: 2025-10-09 16:18:37.744 28613 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:e8:f8:e3 10.100.0.2'], port_security=[], type=localport, nat_addresses=[], virtual_parent=[], up=[False], options={}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.2/28', 'neutron:device_id': 'ovnmeta-f1df42a8-59cd-4ad2-8fe0-a575f223980f', 'neutron:device_owner': 'network:distributed', 'neutron:mtu': '', 'neutron:network_name': 'neutron-f1df42a8-59cd-4ad2-8fe0-a575f223980f', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '8a3f79d5a5f2475d93599ef409043893', 'neutron:revision_number': '2', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=ef4a4a63-fe64-4a99-958f-76200b51ae32, chassis=[], tunnel_key=1, gateway_chassis=[], requested_chassis=[], logical_port=a2a7d944-0cd1-49e0-98b4-306429471f6b) old=Port_Binding(mac=['fa:16:3e:e8:f8:e3'], external_ids={'neutron:cidrs': '', 'neutron:device_id': 'ovnmeta-f1df42a8-59cd-4ad2-8fe0-a575f223980f', 'neutron:device_owner': 'network:distributed', 'neutron:mtu': '', 'neutron:network_name': 'neutron-f1df42a8-59cd-4ad2-8fe0-a575f223980f', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '8a3f79d5a5f2475d93599ef409043893', 'neutron:revision_number': '1', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}) matches /usr/lib/python3.12/site-packages/ovsdbapp/backend/ovs_idl/event.py:55
Oct 09 16:18:37 compute-0 ovn_metadata_agent[28608]: 2025-10-09 16:18:37.745 28613 INFO neutron.agent.ovn.metadata.agent [-] Metadata Port a2a7d944-0cd1-49e0-98b4-306429471f6b in datapath f1df42a8-59cd-4ad2-8fe0-a575f223980f updated
Oct 09 16:18:37 compute-0 ovn_metadata_agent[28608]: 2025-10-09 16:18:37.747 28613 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network f1df42a8-59cd-4ad2-8fe0-a575f223980f, tearing the namespace down if needed _get_provision_params /usr/lib/python3.12/site-packages/neutron/agent/ovn/metadata/agent.py:756
Oct 09 16:18:37 compute-0 ovn_metadata_agent[28608]: 2025-10-09 16:18:37.747 139687 DEBUG oslo.privsep.daemon [-] privsep: reply[3339b715-7370-448d-a3e5-7bf1f7248a1a]: (4, False) _call_back /usr/lib/python3.12/site-packages/oslo_privsep/daemon.py:515
Oct 09 16:18:37 compute-0 nova_compute[117331]: 2025-10-09 16:18:37.838 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 09 16:18:37 compute-0 unix_chkpwd[144394]: password check failed for user (root)
Oct 09 16:18:37 compute-0 sshd-session[144392]: pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=80.94.93.176  user=root
Oct 09 16:18:38 compute-0 sshd-session[144389]: Failed password for root from 134.199.199.215 port 58592 ssh2
Oct 09 16:18:38 compute-0 sshd-session[144389]: Connection closed by authenticating user root 134.199.199.215 port 58592 [preauth]
Oct 09 16:18:39 compute-0 sshd-session[144392]: Failed password for root from 80.94.93.176 port 19486 ssh2
Oct 09 16:18:40 compute-0 unix_chkpwd[144397]: password check failed for user (root)
Oct 09 16:18:40 compute-0 sshd-session[144395]: Invalid user minecraft from 134.199.199.215 port 58600
Oct 09 16:18:40 compute-0 sshd-session[144395]: pam_unix(sshd:auth): check pass; user unknown
Oct 09 16:18:40 compute-0 sshd-session[144395]: pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=134.199.199.215
Oct 09 16:18:40 compute-0 nova_compute[117331]: 2025-10-09 16:18:40.748 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 09 16:18:42 compute-0 sshd-session[144392]: Failed password for root from 80.94.93.176 port 19486 ssh2
Oct 09 16:18:42 compute-0 sshd-session[144395]: Failed password for invalid user minecraft from 134.199.199.215 port 58600 ssh2
Oct 09 16:18:42 compute-0 nova_compute[117331]: 2025-10-09 16:18:42.842 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 09 16:18:43 compute-0 sshd-session[144398]: Invalid user oracle from 134.199.199.215 port 58612
Oct 09 16:18:43 compute-0 sshd-session[144398]: pam_unix(sshd:auth): check pass; user unknown
Oct 09 16:18:43 compute-0 sshd-session[144398]: pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=134.199.199.215
Oct 09 16:18:43 compute-0 podman[144400]: 2025-10-09 16:18:43.68512036 +0000 UTC m=+0.088408952 container health_status 64fa251112d01abb34ec1bb8e22019815ddcaaaa391ce5ec91517147ac5a9994 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., name=ubi9-minimal, vendor=Red Hat, Inc., io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, url=https://catalog.redhat.com/en/search?searchType=containers, vcs-type=git, version=9.6, managed_by=edpm_ansible, io.openshift.expose-services=, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, io.openshift.tags=minimal rhel9, release=1755695350, com.redhat.component=ubi9-minimal-container, config_id=edpm, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, distribution-scope=public, architecture=x86_64, build-date=2025-08-20T13:12:41, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, io.buildah.version=1.33.7, container_name=openstack_network_exporter, maintainer=Red Hat, Inc.)
Oct 09 16:18:44 compute-0 sshd-session[144395]: Connection closed by invalid user minecraft 134.199.199.215 port 58600 [preauth]
Oct 09 16:18:44 compute-0 unix_chkpwd[144421]: password check failed for user (root)
Oct 09 16:18:45 compute-0 sshd-session[144398]: Failed password for invalid user oracle from 134.199.199.215 port 58612 ssh2
Oct 09 16:18:45 compute-0 nova_compute[117331]: 2025-10-09 16:18:45.751 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 09 16:18:45 compute-0 podman[144422]: 2025-10-09 16:18:45.868675536 +0000 UTC m=+0.097481116 container health_status d615e4f24ff62fbb14be4a63e6da568ec5b22309773c1c66103b6b4de64a8a2f (image=38.102.83.66:5001/podified-master-centos10/openstack-ovn-controller:watcher_latest, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=watcher_latest, container_name=ovn_controller, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': '38.102.83.66:5001/podified-master-centos10/openstack-ovn-controller:watcher_latest', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251007, config_id=ovn_controller, io.buildah.version=1.41.4)
Oct 09 16:18:45 compute-0 nova_compute[117331]: 2025-10-09 16:18:45.919 2 DEBUG oslo_concurrency.lockutils [None req-ad81bff0-424a-489e-80a2-446f6ae5fd54 2c30c20b7c364f81b40bcec56afc8ae3 8a3f79d5a5f2475d93599ef409043893 - - default default] Acquiring lock "cd624073-6d91-4e27-9050-656e729e250e" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:405
Oct 09 16:18:45 compute-0 nova_compute[117331]: 2025-10-09 16:18:45.920 2 DEBUG oslo_concurrency.lockutils [None req-ad81bff0-424a-489e-80a2-446f6ae5fd54 2c30c20b7c364f81b40bcec56afc8ae3 8a3f79d5a5f2475d93599ef409043893 - - default default] Lock "cd624073-6d91-4e27-9050-656e729e250e" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:410
Oct 09 16:18:46 compute-0 nova_compute[117331]: 2025-10-09 16:18:46.431 2 DEBUG nova.compute.manager [None req-ad81bff0-424a-489e-80a2-446f6ae5fd54 2c30c20b7c364f81b40bcec56afc8ae3 8a3f79d5a5f2475d93599ef409043893 - - default default] [instance: cd624073-6d91-4e27-9050-656e729e250e] Starting instance... _do_build_and_run_instance /usr/lib/python3.12/site-packages/nova/compute/manager.py:2472
Oct 09 16:18:46 compute-0 sshd-session[144392]: Failed password for root from 80.94.93.176 port 19486 ssh2
Oct 09 16:18:46 compute-0 nova_compute[117331]: 2025-10-09 16:18:46.974 2 DEBUG oslo_concurrency.lockutils [None req-ad81bff0-424a-489e-80a2-446f6ae5fd54 2c30c20b7c364f81b40bcec56afc8ae3 8a3f79d5a5f2475d93599ef409043893 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:405
Oct 09 16:18:46 compute-0 nova_compute[117331]: 2025-10-09 16:18:46.975 2 DEBUG oslo_concurrency.lockutils [None req-ad81bff0-424a-489e-80a2-446f6ae5fd54 2c30c20b7c364f81b40bcec56afc8ae3 8a3f79d5a5f2475d93599ef409043893 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:410
Oct 09 16:18:46 compute-0 nova_compute[117331]: 2025-10-09 16:18:46.984 2 DEBUG nova.virt.hardware [None req-ad81bff0-424a-489e-80a2-446f6ae5fd54 2c30c20b7c364f81b40bcec56afc8ae3 8a3f79d5a5f2475d93599ef409043893 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.12/site-packages/nova/virt/hardware.py:2528
Oct 09 16:18:46 compute-0 nova_compute[117331]: 2025-10-09 16:18:46.985 2 INFO nova.compute.claims [None req-ad81bff0-424a-489e-80a2-446f6ae5fd54 2c30c20b7c364f81b40bcec56afc8ae3 8a3f79d5a5f2475d93599ef409043893 - - default default] [instance: cd624073-6d91-4e27-9050-656e729e250e] Claim successful on node compute-0.ctlplane.example.com
Oct 09 16:18:47 compute-0 sshd-session[144448]: Invalid user niaoyun from 134.199.199.215 port 53898
Oct 09 16:18:47 compute-0 sshd-session[144448]: pam_unix(sshd:auth): check pass; user unknown
Oct 09 16:18:47 compute-0 sshd-session[144448]: pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=134.199.199.215
Oct 09 16:18:47 compute-0 sshd-session[144398]: Connection closed by invalid user oracle 134.199.199.215 port 58612 [preauth]
Oct 09 16:18:47 compute-0 nova_compute[117331]: 2025-10-09 16:18:47.844 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 09 16:18:48 compute-0 nova_compute[117331]: 2025-10-09 16:18:48.117 2 DEBUG nova.compute.provider_tree [None req-ad81bff0-424a-489e-80a2-446f6ae5fd54 2c30c20b7c364f81b40bcec56afc8ae3 8a3f79d5a5f2475d93599ef409043893 - - default default] Inventory has not changed in ProviderTree for provider: 593051b8-2000-437f-a915-2616fc8b1671 update_inventory /usr/lib/python3.12/site-packages/nova/compute/provider_tree.py:180
Oct 09 16:18:48 compute-0 nova_compute[117331]: 2025-10-09 16:18:48.626 2 DEBUG nova.scheduler.client.report [None req-ad81bff0-424a-489e-80a2-446f6ae5fd54 2c30c20b7c364f81b40bcec56afc8ae3 8a3f79d5a5f2475d93599ef409043893 - - default default] Inventory has not changed for provider 593051b8-2000-437f-a915-2616fc8b1671 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 79, 'reserved': 1, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.12/site-packages/nova/scheduler/client/report.py:958
Oct 09 16:18:48 compute-0 sshd-session[144392]: Received disconnect from 80.94.93.176 port 19486:11:  [preauth]
Oct 09 16:18:48 compute-0 sshd-session[144392]: Disconnected from authenticating user root 80.94.93.176 port 19486 [preauth]
Oct 09 16:18:48 compute-0 sshd-session[144392]: PAM 2 more authentication failures; logname= uid=0 euid=0 tty=ssh ruser= rhost=80.94.93.176  user=root
Oct 09 16:18:48 compute-0 sshd-session[144448]: Failed password for invalid user niaoyun from 134.199.199.215 port 53898 ssh2
Oct 09 16:18:49 compute-0 nova_compute[117331]: 2025-10-09 16:18:49.140 2 DEBUG oslo_concurrency.lockutils [None req-ad81bff0-424a-489e-80a2-446f6ae5fd54 2c30c20b7c364f81b40bcec56afc8ae3 8a3f79d5a5f2475d93599ef409043893 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 2.165s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:424
Oct 09 16:18:49 compute-0 nova_compute[117331]: 2025-10-09 16:18:49.141 2 DEBUG nova.compute.manager [None req-ad81bff0-424a-489e-80a2-446f6ae5fd54 2c30c20b7c364f81b40bcec56afc8ae3 8a3f79d5a5f2475d93599ef409043893 - - default default] [instance: cd624073-6d91-4e27-9050-656e729e250e] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.12/site-packages/nova/compute/manager.py:2869
Oct 09 16:18:49 compute-0 unix_chkpwd[144452]: password check failed for user (root)
Oct 09 16:18:49 compute-0 sshd-session[144450]: pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=80.94.93.176  user=root
Oct 09 16:18:49 compute-0 nova_compute[117331]: 2025-10-09 16:18:49.655 2 DEBUG nova.compute.manager [None req-ad81bff0-424a-489e-80a2-446f6ae5fd54 2c30c20b7c364f81b40bcec56afc8ae3 8a3f79d5a5f2475d93599ef409043893 - - default default] [instance: cd624073-6d91-4e27-9050-656e729e250e] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.12/site-packages/nova/compute/manager.py:2016
Oct 09 16:18:49 compute-0 nova_compute[117331]: 2025-10-09 16:18:49.656 2 DEBUG nova.network.neutron [None req-ad81bff0-424a-489e-80a2-446f6ae5fd54 2c30c20b7c364f81b40bcec56afc8ae3 8a3f79d5a5f2475d93599ef409043893 - - default default] [instance: cd624073-6d91-4e27-9050-656e729e250e] allocate_for_instance() allocate_for_instance /usr/lib/python3.12/site-packages/nova/network/neutron.py:1208
Oct 09 16:18:49 compute-0 nova_compute[117331]: 2025-10-09 16:18:49.657 2 WARNING neutronclient.v2_0.client [None req-ad81bff0-424a-489e-80a2-446f6ae5fd54 2c30c20b7c364f81b40bcec56afc8ae3 8a3f79d5a5f2475d93599ef409043893 - - default default] The python binding code in neutronclient is deprecated in favor of OpenstackSDK, please use that as this will be removed in a future release.
Oct 09 16:18:49 compute-0 nova_compute[117331]: 2025-10-09 16:18:49.657 2 WARNING neutronclient.v2_0.client [None req-ad81bff0-424a-489e-80a2-446f6ae5fd54 2c30c20b7c364f81b40bcec56afc8ae3 8a3f79d5a5f2475d93599ef409043893 - - default default] The python binding code in neutronclient is deprecated in favor of OpenstackSDK, please use that as this will be removed in a future release.
Oct 09 16:18:50 compute-0 nova_compute[117331]: 2025-10-09 16:18:50.167 2 INFO nova.virt.libvirt.driver [None req-ad81bff0-424a-489e-80a2-446f6ae5fd54 2c30c20b7c364f81b40bcec56afc8ae3 8a3f79d5a5f2475d93599ef409043893 - - default default] [instance: cd624073-6d91-4e27-9050-656e729e250e] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Oct 09 16:18:50 compute-0 unix_chkpwd[144455]: password check failed for user (root)
Oct 09 16:18:50 compute-0 sshd-session[144453]: pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=134.199.199.215  user=root
Oct 09 16:18:50 compute-0 nova_compute[117331]: 2025-10-09 16:18:50.676 2 DEBUG nova.compute.manager [None req-ad81bff0-424a-489e-80a2-446f6ae5fd54 2c30c20b7c364f81b40bcec56afc8ae3 8a3f79d5a5f2475d93599ef409043893 - - default default] [instance: cd624073-6d91-4e27-9050-656e729e250e] Start building block device mappings for instance. _build_resources /usr/lib/python3.12/site-packages/nova/compute/manager.py:2904
Oct 09 16:18:50 compute-0 sshd-session[144448]: Connection closed by invalid user niaoyun 134.199.199.215 port 53898 [preauth]
Oct 09 16:18:50 compute-0 nova_compute[117331]: 2025-10-09 16:18:50.757 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 09 16:18:50 compute-0 nova_compute[117331]: 2025-10-09 16:18:50.947 2 DEBUG nova.network.neutron [None req-ad81bff0-424a-489e-80a2-446f6ae5fd54 2c30c20b7c364f81b40bcec56afc8ae3 8a3f79d5a5f2475d93599ef409043893 - - default default] [instance: cd624073-6d91-4e27-9050-656e729e250e] Successfully created port: b0f5ccc9-26bf-47c1-be9c-23a53bcb2d99 _create_port_minimal /usr/lib/python3.12/site-packages/nova/network/neutron.py:550
Oct 09 16:18:51 compute-0 sshd-session[144450]: Failed password for root from 80.94.93.176 port 62456 ssh2
Oct 09 16:18:51 compute-0 nova_compute[117331]: 2025-10-09 16:18:51.694 2 DEBUG nova.compute.manager [None req-ad81bff0-424a-489e-80a2-446f6ae5fd54 2c30c20b7c364f81b40bcec56afc8ae3 8a3f79d5a5f2475d93599ef409043893 - - default default] [instance: cd624073-6d91-4e27-9050-656e729e250e] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.12/site-packages/nova/compute/manager.py:2678
Oct 09 16:18:51 compute-0 nova_compute[117331]: 2025-10-09 16:18:51.695 2 DEBUG nova.virt.libvirt.driver [None req-ad81bff0-424a-489e-80a2-446f6ae5fd54 2c30c20b7c364f81b40bcec56afc8ae3 8a3f79d5a5f2475d93599ef409043893 - - default default] [instance: cd624073-6d91-4e27-9050-656e729e250e] Creating instance directory _create_image /usr/lib/python3.12/site-packages/nova/virt/libvirt/driver.py:5138
Oct 09 16:18:51 compute-0 nova_compute[117331]: 2025-10-09 16:18:51.696 2 INFO nova.virt.libvirt.driver [None req-ad81bff0-424a-489e-80a2-446f6ae5fd54 2c30c20b7c364f81b40bcec56afc8ae3 8a3f79d5a5f2475d93599ef409043893 - - default default] [instance: cd624073-6d91-4e27-9050-656e729e250e] Creating image(s)
Oct 09 16:18:51 compute-0 nova_compute[117331]: 2025-10-09 16:18:51.696 2 DEBUG oslo_concurrency.lockutils [None req-ad81bff0-424a-489e-80a2-446f6ae5fd54 2c30c20b7c364f81b40bcec56afc8ae3 8a3f79d5a5f2475d93599ef409043893 - - default default] Acquiring lock "/var/lib/nova/instances/cd624073-6d91-4e27-9050-656e729e250e/disk.info" by "nova.virt.libvirt.imagebackend.Image.resolve_driver_format.<locals>.write_to_disk_info_file" inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:405
Oct 09 16:18:51 compute-0 nova_compute[117331]: 2025-10-09 16:18:51.696 2 DEBUG oslo_concurrency.lockutils [None req-ad81bff0-424a-489e-80a2-446f6ae5fd54 2c30c20b7c364f81b40bcec56afc8ae3 8a3f79d5a5f2475d93599ef409043893 - - default default] Lock "/var/lib/nova/instances/cd624073-6d91-4e27-9050-656e729e250e/disk.info" acquired by "nova.virt.libvirt.imagebackend.Image.resolve_driver_format.<locals>.write_to_disk_info_file" :: waited 0.000s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:410
Oct 09 16:18:51 compute-0 nova_compute[117331]: 2025-10-09 16:18:51.698 2 DEBUG oslo_concurrency.lockutils [None req-ad81bff0-424a-489e-80a2-446f6ae5fd54 2c30c20b7c364f81b40bcec56afc8ae3 8a3f79d5a5f2475d93599ef409043893 - - default default] Lock "/var/lib/nova/instances/cd624073-6d91-4e27-9050-656e729e250e/disk.info" "released" by "nova.virt.libvirt.imagebackend.Image.resolve_driver_format.<locals>.write_to_disk_info_file" :: held 0.002s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:424
Oct 09 16:18:51 compute-0 nova_compute[117331]: 2025-10-09 16:18:51.699 2 DEBUG oslo_utils.imageutils.format_inspector [None req-ad81bff0-424a-489e-80a2-446f6ae5fd54 2c30c20b7c364f81b40bcec56afc8ae3 8a3f79d5a5f2475d93599ef409043893 - - default default] Format inspector for vmdk does not match, excluding from consideration (Signature KDMV not found: b'\xebH\x90\x00') _process_chunk /usr/lib/python3.12/site-packages/oslo_utils/imageutils/format_inspector.py:1365
Oct 09 16:18:51 compute-0 nova_compute[117331]: 2025-10-09 16:18:51.703 2 DEBUG oslo_utils.imageutils.format_inspector [None req-ad81bff0-424a-489e-80a2-446f6ae5fd54 2c30c20b7c364f81b40bcec56afc8ae3 8a3f79d5a5f2475d93599ef409043893 - - default default] Format inspector for vhdx does not match, excluding from consideration (Region signature not found at 30000) _process_chunk /usr/lib/python3.12/site-packages/oslo_utils/imageutils/format_inspector.py:1365
Oct 09 16:18:51 compute-0 nova_compute[117331]: 2025-10-09 16:18:51.705 2 DEBUG oslo_concurrency.processutils [None req-ad81bff0-424a-489e-80a2-446f6ae5fd54 2c30c20b7c364f81b40bcec56afc8ae3 8a3f79d5a5f2475d93599ef409043893 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/cea3dacfdea0f3734ae526b812744e847bc2d356 --force-share --output=json execute /usr/lib/python3.12/site-packages/oslo_concurrency/processutils.py:349
Oct 09 16:18:51 compute-0 unix_chkpwd[144457]: password check failed for user (root)
Oct 09 16:18:51 compute-0 nova_compute[117331]: 2025-10-09 16:18:51.798 2 DEBUG oslo_concurrency.processutils [None req-ad81bff0-424a-489e-80a2-446f6ae5fd54 2c30c20b7c364f81b40bcec56afc8ae3 8a3f79d5a5f2475d93599ef409043893 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/cea3dacfdea0f3734ae526b812744e847bc2d356 --force-share --output=json" returned: 0 in 0.093s execute /usr/lib/python3.12/site-packages/oslo_concurrency/processutils.py:372
Oct 09 16:18:51 compute-0 nova_compute[117331]: 2025-10-09 16:18:51.801 2 DEBUG oslo_concurrency.lockutils [None req-ad81bff0-424a-489e-80a2-446f6ae5fd54 2c30c20b7c364f81b40bcec56afc8ae3 8a3f79d5a5f2475d93599ef409043893 - - default default] Acquiring lock "cea3dacfdea0f3734ae526b812744e847bc2d356" by "nova.virt.libvirt.imagebackend.Qcow2.create_image.<locals>.create_qcow2_image" inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:405
Oct 09 16:18:51 compute-0 nova_compute[117331]: 2025-10-09 16:18:51.803 2 DEBUG oslo_concurrency.lockutils [None req-ad81bff0-424a-489e-80a2-446f6ae5fd54 2c30c20b7c364f81b40bcec56afc8ae3 8a3f79d5a5f2475d93599ef409043893 - - default default] Lock "cea3dacfdea0f3734ae526b812744e847bc2d356" acquired by "nova.virt.libvirt.imagebackend.Qcow2.create_image.<locals>.create_qcow2_image" :: waited 0.002s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:410
Oct 09 16:18:51 compute-0 nova_compute[117331]: 2025-10-09 16:18:51.803 2 DEBUG oslo_utils.imageutils.format_inspector [None req-ad81bff0-424a-489e-80a2-446f6ae5fd54 2c30c20b7c364f81b40bcec56afc8ae3 8a3f79d5a5f2475d93599ef409043893 - - default default] Format inspector for vmdk does not match, excluding from consideration (Signature KDMV not found: b'\xebH\x90\x00') _process_chunk /usr/lib/python3.12/site-packages/oslo_utils/imageutils/format_inspector.py:1365
Oct 09 16:18:51 compute-0 sshd-session[144453]: Failed password for root from 134.199.199.215 port 53912 ssh2
Oct 09 16:18:51 compute-0 nova_compute[117331]: 2025-10-09 16:18:51.810 2 DEBUG oslo_utils.imageutils.format_inspector [None req-ad81bff0-424a-489e-80a2-446f6ae5fd54 2c30c20b7c364f81b40bcec56afc8ae3 8a3f79d5a5f2475d93599ef409043893 - - default default] Format inspector for vhdx does not match, excluding from consideration (Region signature not found at 30000) _process_chunk /usr/lib/python3.12/site-packages/oslo_utils/imageutils/format_inspector.py:1365
Oct 09 16:18:51 compute-0 nova_compute[117331]: 2025-10-09 16:18:51.811 2 DEBUG oslo_concurrency.processutils [None req-ad81bff0-424a-489e-80a2-446f6ae5fd54 2c30c20b7c364f81b40bcec56afc8ae3 8a3f79d5a5f2475d93599ef409043893 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/cea3dacfdea0f3734ae526b812744e847bc2d356 --force-share --output=json execute /usr/lib/python3.12/site-packages/oslo_concurrency/processutils.py:349
Oct 09 16:18:51 compute-0 nova_compute[117331]: 2025-10-09 16:18:51.888 2 DEBUG nova.network.neutron [None req-ad81bff0-424a-489e-80a2-446f6ae5fd54 2c30c20b7c364f81b40bcec56afc8ae3 8a3f79d5a5f2475d93599ef409043893 - - default default] [instance: cd624073-6d91-4e27-9050-656e729e250e] Successfully updated port: b0f5ccc9-26bf-47c1-be9c-23a53bcb2d99 _update_port /usr/lib/python3.12/site-packages/nova/network/neutron.py:588
Oct 09 16:18:51 compute-0 nova_compute[117331]: 2025-10-09 16:18:51.898 2 DEBUG oslo_concurrency.processutils [None req-ad81bff0-424a-489e-80a2-446f6ae5fd54 2c30c20b7c364f81b40bcec56afc8ae3 8a3f79d5a5f2475d93599ef409043893 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/cea3dacfdea0f3734ae526b812744e847bc2d356 --force-share --output=json" returned: 0 in 0.087s execute /usr/lib/python3.12/site-packages/oslo_concurrency/processutils.py:372
Oct 09 16:18:51 compute-0 nova_compute[117331]: 2025-10-09 16:18:51.898 2 DEBUG oslo_concurrency.processutils [None req-ad81bff0-424a-489e-80a2-446f6ae5fd54 2c30c20b7c364f81b40bcec56afc8ae3 8a3f79d5a5f2475d93599ef409043893 - - default default] Running cmd (subprocess): env LC_ALL=C LANG=C qemu-img create -f qcow2 -o backing_file=/var/lib/nova/instances/_base/cea3dacfdea0f3734ae526b812744e847bc2d356,backing_fmt=raw /var/lib/nova/instances/cd624073-6d91-4e27-9050-656e729e250e/disk 1073741824 execute /usr/lib/python3.12/site-packages/oslo_concurrency/processutils.py:349
Oct 09 16:18:51 compute-0 nova_compute[117331]: 2025-10-09 16:18:51.947 2 DEBUG nova.compute.manager [req-adf1f771-c28c-4504-9196-8f418f1f46a6 req-9fa329f4-489f-4e1d-bb90-5ac56f68c6da ee15961ad2be4bada6c106ac4f69f422 c076c42271004e7aa86e84416e33f826 - - default default] [instance: cd624073-6d91-4e27-9050-656e729e250e] Received event network-changed-b0f5ccc9-26bf-47c1-be9c-23a53bcb2d99 external_instance_event /usr/lib/python3.12/site-packages/nova/compute/manager.py:11812
Oct 09 16:18:51 compute-0 nova_compute[117331]: 2025-10-09 16:18:51.947 2 DEBUG nova.compute.manager [req-adf1f771-c28c-4504-9196-8f418f1f46a6 req-9fa329f4-489f-4e1d-bb90-5ac56f68c6da ee15961ad2be4bada6c106ac4f69f422 c076c42271004e7aa86e84416e33f826 - - default default] [instance: cd624073-6d91-4e27-9050-656e729e250e] Refreshing instance network info cache due to event network-changed-b0f5ccc9-26bf-47c1-be9c-23a53bcb2d99. external_instance_event /usr/lib/python3.12/site-packages/nova/compute/manager.py:11817
Oct 09 16:18:51 compute-0 nova_compute[117331]: 2025-10-09 16:18:51.948 2 DEBUG oslo_concurrency.lockutils [req-adf1f771-c28c-4504-9196-8f418f1f46a6 req-9fa329f4-489f-4e1d-bb90-5ac56f68c6da ee15961ad2be4bada6c106ac4f69f422 c076c42271004e7aa86e84416e33f826 - - default default] Acquiring lock "refresh_cache-cd624073-6d91-4e27-9050-656e729e250e" lock /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:313
Oct 09 16:18:51 compute-0 nova_compute[117331]: 2025-10-09 16:18:51.948 2 DEBUG oslo_concurrency.lockutils [req-adf1f771-c28c-4504-9196-8f418f1f46a6 req-9fa329f4-489f-4e1d-bb90-5ac56f68c6da ee15961ad2be4bada6c106ac4f69f422 c076c42271004e7aa86e84416e33f826 - - default default] Acquired lock "refresh_cache-cd624073-6d91-4e27-9050-656e729e250e" lock /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:316
Oct 09 16:18:51 compute-0 nova_compute[117331]: 2025-10-09 16:18:51.948 2 DEBUG nova.network.neutron [req-adf1f771-c28c-4504-9196-8f418f1f46a6 req-9fa329f4-489f-4e1d-bb90-5ac56f68c6da ee15961ad2be4bada6c106ac4f69f422 c076c42271004e7aa86e84416e33f826 - - default default] [instance: cd624073-6d91-4e27-9050-656e729e250e] Refreshing network info cache for port b0f5ccc9-26bf-47c1-be9c-23a53bcb2d99 _get_instance_nw_info /usr/lib/python3.12/site-packages/nova/network/neutron.py:2067
Oct 09 16:18:51 compute-0 nova_compute[117331]: 2025-10-09 16:18:51.950 2 DEBUG oslo_concurrency.processutils [None req-ad81bff0-424a-489e-80a2-446f6ae5fd54 2c30c20b7c364f81b40bcec56afc8ae3 8a3f79d5a5f2475d93599ef409043893 - - default default] CMD "env LC_ALL=C LANG=C qemu-img create -f qcow2 -o backing_file=/var/lib/nova/instances/_base/cea3dacfdea0f3734ae526b812744e847bc2d356,backing_fmt=raw /var/lib/nova/instances/cd624073-6d91-4e27-9050-656e729e250e/disk 1073741824" returned: 0 in 0.051s execute /usr/lib/python3.12/site-packages/oslo_concurrency/processutils.py:372
Oct 09 16:18:51 compute-0 nova_compute[117331]: 2025-10-09 16:18:51.950 2 DEBUG oslo_concurrency.lockutils [None req-ad81bff0-424a-489e-80a2-446f6ae5fd54 2c30c20b7c364f81b40bcec56afc8ae3 8a3f79d5a5f2475d93599ef409043893 - - default default] Lock "cea3dacfdea0f3734ae526b812744e847bc2d356" "released" by "nova.virt.libvirt.imagebackend.Qcow2.create_image.<locals>.create_qcow2_image" :: held 0.148s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:424
Oct 09 16:18:51 compute-0 nova_compute[117331]: 2025-10-09 16:18:51.950 2 DEBUG oslo_concurrency.processutils [None req-ad81bff0-424a-489e-80a2-446f6ae5fd54 2c30c20b7c364f81b40bcec56afc8ae3 8a3f79d5a5f2475d93599ef409043893 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/cea3dacfdea0f3734ae526b812744e847bc2d356 --force-share --output=json execute /usr/lib/python3.12/site-packages/oslo_concurrency/processutils.py:349
Oct 09 16:18:52 compute-0 nova_compute[117331]: 2025-10-09 16:18:52.042 2 DEBUG oslo_concurrency.processutils [None req-ad81bff0-424a-489e-80a2-446f6ae5fd54 2c30c20b7c364f81b40bcec56afc8ae3 8a3f79d5a5f2475d93599ef409043893 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/cea3dacfdea0f3734ae526b812744e847bc2d356 --force-share --output=json" returned: 0 in 0.092s execute /usr/lib/python3.12/site-packages/oslo_concurrency/processutils.py:372
Oct 09 16:18:52 compute-0 nova_compute[117331]: 2025-10-09 16:18:52.043 2 DEBUG nova.virt.disk.api [None req-ad81bff0-424a-489e-80a2-446f6ae5fd54 2c30c20b7c364f81b40bcec56afc8ae3 8a3f79d5a5f2475d93599ef409043893 - - default default] Checking if we can resize image /var/lib/nova/instances/cd624073-6d91-4e27-9050-656e729e250e/disk. size=1073741824 can_resize_image /usr/lib/python3.12/site-packages/nova/virt/disk/api.py:164
Oct 09 16:18:52 compute-0 nova_compute[117331]: 2025-10-09 16:18:52.043 2 DEBUG oslo_concurrency.processutils [None req-ad81bff0-424a-489e-80a2-446f6ae5fd54 2c30c20b7c364f81b40bcec56afc8ae3 8a3f79d5a5f2475d93599ef409043893 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/cd624073-6d91-4e27-9050-656e729e250e/disk --force-share --output=json execute /usr/lib/python3.12/site-packages/oslo_concurrency/processutils.py:349
Oct 09 16:18:52 compute-0 nova_compute[117331]: 2025-10-09 16:18:52.104 2 DEBUG oslo_concurrency.processutils [None req-ad81bff0-424a-489e-80a2-446f6ae5fd54 2c30c20b7c364f81b40bcec56afc8ae3 8a3f79d5a5f2475d93599ef409043893 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/cd624073-6d91-4e27-9050-656e729e250e/disk --force-share --output=json" returned: 0 in 0.061s execute /usr/lib/python3.12/site-packages/oslo_concurrency/processutils.py:372
Oct 09 16:18:52 compute-0 nova_compute[117331]: 2025-10-09 16:18:52.106 2 DEBUG nova.virt.disk.api [None req-ad81bff0-424a-489e-80a2-446f6ae5fd54 2c30c20b7c364f81b40bcec56afc8ae3 8a3f79d5a5f2475d93599ef409043893 - - default default] Cannot resize image /var/lib/nova/instances/cd624073-6d91-4e27-9050-656e729e250e/disk to a smaller size. can_resize_image /usr/lib/python3.12/site-packages/nova/virt/disk/api.py:170
Oct 09 16:18:52 compute-0 nova_compute[117331]: 2025-10-09 16:18:52.106 2 DEBUG nova.virt.libvirt.driver [None req-ad81bff0-424a-489e-80a2-446f6ae5fd54 2c30c20b7c364f81b40bcec56afc8ae3 8a3f79d5a5f2475d93599ef409043893 - - default default] [instance: cd624073-6d91-4e27-9050-656e729e250e] Created local disks _create_image /usr/lib/python3.12/site-packages/nova/virt/libvirt/driver.py:5270
Oct 09 16:18:52 compute-0 nova_compute[117331]: 2025-10-09 16:18:52.106 2 DEBUG nova.virt.libvirt.driver [None req-ad81bff0-424a-489e-80a2-446f6ae5fd54 2c30c20b7c364f81b40bcec56afc8ae3 8a3f79d5a5f2475d93599ef409043893 - - default default] [instance: cd624073-6d91-4e27-9050-656e729e250e] Ensure instance console log exists: /var/lib/nova/instances/cd624073-6d91-4e27-9050-656e729e250e/console.log _ensure_console_log_for_instance /usr/lib/python3.12/site-packages/nova/virt/libvirt/driver.py:5017
Oct 09 16:18:52 compute-0 nova_compute[117331]: 2025-10-09 16:18:52.107 2 DEBUG oslo_concurrency.lockutils [None req-ad81bff0-424a-489e-80a2-446f6ae5fd54 2c30c20b7c364f81b40bcec56afc8ae3 8a3f79d5a5f2475d93599ef409043893 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:405
Oct 09 16:18:52 compute-0 nova_compute[117331]: 2025-10-09 16:18:52.108 2 DEBUG oslo_concurrency.lockutils [None req-ad81bff0-424a-489e-80a2-446f6ae5fd54 2c30c20b7c364f81b40bcec56afc8ae3 8a3f79d5a5f2475d93599ef409043893 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:410
Oct 09 16:18:52 compute-0 nova_compute[117331]: 2025-10-09 16:18:52.108 2 DEBUG oslo_concurrency.lockutils [None req-ad81bff0-424a-489e-80a2-446f6ae5fd54 2c30c20b7c364f81b40bcec56afc8ae3 8a3f79d5a5f2475d93599ef409043893 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:424
Oct 09 16:18:52 compute-0 sshd-session[144453]: Connection closed by authenticating user root 134.199.199.215 port 53912 [preauth]
Oct 09 16:18:52 compute-0 nova_compute[117331]: 2025-10-09 16:18:52.394 2 DEBUG oslo_concurrency.lockutils [None req-ad81bff0-424a-489e-80a2-446f6ae5fd54 2c30c20b7c364f81b40bcec56afc8ae3 8a3f79d5a5f2475d93599ef409043893 - - default default] Acquiring lock "refresh_cache-cd624073-6d91-4e27-9050-656e729e250e" lock /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:313
Oct 09 16:18:52 compute-0 nova_compute[117331]: 2025-10-09 16:18:52.459 2 WARNING neutronclient.v2_0.client [req-adf1f771-c28c-4504-9196-8f418f1f46a6 req-9fa329f4-489f-4e1d-bb90-5ac56f68c6da ee15961ad2be4bada6c106ac4f69f422 c076c42271004e7aa86e84416e33f826 - - default default] The python binding code in neutronclient is deprecated in favor of OpenstackSDK, please use that as this will be removed in a future release.
Oct 09 16:18:52 compute-0 nova_compute[117331]: 2025-10-09 16:18:52.788 2 DEBUG nova.network.neutron [req-adf1f771-c28c-4504-9196-8f418f1f46a6 req-9fa329f4-489f-4e1d-bb90-5ac56f68c6da ee15961ad2be4bada6c106ac4f69f422 c076c42271004e7aa86e84416e33f826 - - default default] [instance: cd624073-6d91-4e27-9050-656e729e250e] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.12/site-packages/nova/network/neutron.py:3383
Oct 09 16:18:52 compute-0 nova_compute[117331]: 2025-10-09 16:18:52.848 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 09 16:18:52 compute-0 nova_compute[117331]: 2025-10-09 16:18:52.977 2 DEBUG nova.network.neutron [req-adf1f771-c28c-4504-9196-8f418f1f46a6 req-9fa329f4-489f-4e1d-bb90-5ac56f68c6da ee15961ad2be4bada6c106ac4f69f422 c076c42271004e7aa86e84416e33f826 - - default default] [instance: cd624073-6d91-4e27-9050-656e729e250e] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.12/site-packages/nova/network/neutron.py:116
Oct 09 16:18:53 compute-0 nova_compute[117331]: 2025-10-09 16:18:53.484 2 DEBUG oslo_concurrency.lockutils [req-adf1f771-c28c-4504-9196-8f418f1f46a6 req-9fa329f4-489f-4e1d-bb90-5ac56f68c6da ee15961ad2be4bada6c106ac4f69f422 c076c42271004e7aa86e84416e33f826 - - default default] Releasing lock "refresh_cache-cd624073-6d91-4e27-9050-656e729e250e" lock /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:334
Oct 09 16:18:53 compute-0 nova_compute[117331]: 2025-10-09 16:18:53.486 2 DEBUG oslo_concurrency.lockutils [None req-ad81bff0-424a-489e-80a2-446f6ae5fd54 2c30c20b7c364f81b40bcec56afc8ae3 8a3f79d5a5f2475d93599ef409043893 - - default default] Acquired lock "refresh_cache-cd624073-6d91-4e27-9050-656e729e250e" lock /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:316
Oct 09 16:18:53 compute-0 nova_compute[117331]: 2025-10-09 16:18:53.487 2 DEBUG nova.network.neutron [None req-ad81bff0-424a-489e-80a2-446f6ae5fd54 2c30c20b7c364f81b40bcec56afc8ae3 8a3f79d5a5f2475d93599ef409043893 - - default default] [instance: cd624073-6d91-4e27-9050-656e729e250e] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.12/site-packages/nova/network/neutron.py:2070
Oct 09 16:18:53 compute-0 sshd-session[144450]: Failed password for root from 80.94.93.176 port 62456 ssh2
Oct 09 16:18:54 compute-0 nova_compute[117331]: 2025-10-09 16:18:54.070 2 DEBUG nova.network.neutron [None req-ad81bff0-424a-489e-80a2-446f6ae5fd54 2c30c20b7c364f81b40bcec56afc8ae3 8a3f79d5a5f2475d93599ef409043893 - - default default] [instance: cd624073-6d91-4e27-9050-656e729e250e] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.12/site-packages/nova/network/neutron.py:3383
Oct 09 16:18:54 compute-0 nova_compute[117331]: 2025-10-09 16:18:54.252 2 WARNING neutronclient.v2_0.client [None req-ad81bff0-424a-489e-80a2-446f6ae5fd54 2c30c20b7c364f81b40bcec56afc8ae3 8a3f79d5a5f2475d93599ef409043893 - - default default] The python binding code in neutronclient is deprecated in favor of OpenstackSDK, please use that as this will be removed in a future release.
Oct 09 16:18:54 compute-0 unix_chkpwd[144472]: password check failed for user (root)
Oct 09 16:18:54 compute-0 nova_compute[117331]: 2025-10-09 16:18:54.389 2 DEBUG nova.network.neutron [None req-ad81bff0-424a-489e-80a2-446f6ae5fd54 2c30c20b7c364f81b40bcec56afc8ae3 8a3f79d5a5f2475d93599ef409043893 - - default default] [instance: cd624073-6d91-4e27-9050-656e729e250e] Updating instance_info_cache with network_info: [{"id": "b0f5ccc9-26bf-47c1-be9c-23a53bcb2d99", "address": "fa:16:3e:75:be:26", "network": {"id": "7a8574b4-b4f7-483c-8050-7281b5ac5624", "bridge": "br-int", "label": "tempest-TestExecuteBasicStrategy-1906595087-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f4e80e48be374e249fbd628564fc7b82", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb0f5ccc9-26", "ovs_interfaceid": "b0f5ccc9-26bf-47c1-be9c-23a53bcb2d99", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.12/site-packages/nova/network/neutron.py:116
Oct 09 16:18:54 compute-0 nova_compute[117331]: 2025-10-09 16:18:54.894 2 DEBUG oslo_concurrency.lockutils [None req-ad81bff0-424a-489e-80a2-446f6ae5fd54 2c30c20b7c364f81b40bcec56afc8ae3 8a3f79d5a5f2475d93599ef409043893 - - default default] Releasing lock "refresh_cache-cd624073-6d91-4e27-9050-656e729e250e" lock /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:334
Oct 09 16:18:54 compute-0 nova_compute[117331]: 2025-10-09 16:18:54.895 2 DEBUG nova.compute.manager [None req-ad81bff0-424a-489e-80a2-446f6ae5fd54 2c30c20b7c364f81b40bcec56afc8ae3 8a3f79d5a5f2475d93599ef409043893 - - default default] [instance: cd624073-6d91-4e27-9050-656e729e250e] Instance network_info: |[{"id": "b0f5ccc9-26bf-47c1-be9c-23a53bcb2d99", "address": "fa:16:3e:75:be:26", "network": {"id": "7a8574b4-b4f7-483c-8050-7281b5ac5624", "bridge": "br-int", "label": "tempest-TestExecuteBasicStrategy-1906595087-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f4e80e48be374e249fbd628564fc7b82", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb0f5ccc9-26", "ovs_interfaceid": "b0f5ccc9-26bf-47c1-be9c-23a53bcb2d99", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.12/site-packages/nova/compute/manager.py:2031
Oct 09 16:18:54 compute-0 nova_compute[117331]: 2025-10-09 16:18:54.901 2 DEBUG nova.virt.libvirt.driver [None req-ad81bff0-424a-489e-80a2-446f6ae5fd54 2c30c20b7c364f81b40bcec56afc8ae3 8a3f79d5a5f2475d93599ef409043893 - - default default] [instance: cd624073-6d91-4e27-9050-656e729e250e] Start _get_guest_xml network_info=[{"id": "b0f5ccc9-26bf-47c1-be9c-23a53bcb2d99", "address": "fa:16:3e:75:be:26", "network": {"id": "7a8574b4-b4f7-483c-8050-7281b5ac5624", "bridge": "br-int", "label": "tempest-TestExecuteBasicStrategy-1906595087-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f4e80e48be374e249fbd628564fc7b82", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb0f5ccc9-26", "ovs_interfaceid": "b0f5ccc9-26bf-47c1-be9c-23a53bcb2d99", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-09T16:07:02Z,direct_url=<?>,disk_format='qcow2',id=b7d6e0af-25e4-4227-9dc6-43143898ceee,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='b30e8cf5e10742f190212b4cb97ce2c9',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-09T16:07:04Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'device_type': 'disk', 'encrypted': False, 'size': 0, 'encryption_format': None, 'guest_format': None, 'boot_index': 0, 'device_name': '/dev/vda', 'disk_bus': 'virtio', 'encryption_options': None, 'encryption_secret_uuid': None, 'image_id': 'b7d6e0af-25e4-4227-9dc6-43143898ceee'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None}share_info=None _get_guest_xml /usr/lib/python3.12/site-packages/nova/virt/libvirt/driver.py:8046
Oct 09 16:18:54 compute-0 nova_compute[117331]: 2025-10-09 16:18:54.907 2 WARNING nova.virt.libvirt.driver [None req-ad81bff0-424a-489e-80a2-446f6ae5fd54 2c30c20b7c364f81b40bcec56afc8ae3 8a3f79d5a5f2475d93599ef409043893 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Oct 09 16:18:54 compute-0 nova_compute[117331]: 2025-10-09 16:18:54.909 2 DEBUG nova.virt.driver [None req-ad81bff0-424a-489e-80a2-446f6ae5fd54 2c30c20b7c364f81b40bcec56afc8ae3 8a3f79d5a5f2475d93599ef409043893 - - default default] InstanceDriverMetadata: InstanceDriverMetadata(root_type='image', root_id='b7d6e0af-25e4-4227-9dc6-43143898ceee', instance_meta=NovaInstanceMeta(name='tempest-TestExecuteBasicStrategy-server-1593369519', uuid='cd624073-6d91-4e27-9050-656e729e250e'), owner=OwnerMeta(userid='2c30c20b7c364f81b40bcec56afc8ae3', username='tempest-TestExecuteBasicStrategy-20506088-project-admin', projectid='8a3f79d5a5f2475d93599ef409043893', projectname='tempest-TestExecuteBasicStrategy-20506088'), image=ImageMeta(id='b7d6e0af-25e4-4227-9dc6-43143898ceee', name=None, container_format='bare', disk_format='qcow2', min_disk=1, min_ram=0, properties={'hw_rng_model': 'virtio'}), flavor=FlavorMeta(name='m1.nano', flavorid='5aeb5bf9-70f3-4ab6-b418-2c3319a7d2b3', memory_mb=128, vcpus=1, root_gb=1, ephemeral_gb=0, extra_specs={'hw_rng:allowed': 'True'}, swap=0), network_info=[{"id": "b0f5ccc9-26bf-47c1-be9c-23a53bcb2d99", "address": "fa:16:3e:75:be:26", "network": {"id": "7a8574b4-b4f7-483c-8050-7281b5ac5624", "bridge": "br-int", "label": "tempest-TestExecuteBasicStrategy-1906595087-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f4e80e48be374e249fbd628564fc7b82", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb0f5ccc9-26", "ovs_interfaceid": "b0f5ccc9-26bf-47c1-be9c-23a53bcb2d99", "qbh_params": null, 
"qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}], nova_package='32.1.0-0.20251008143712.076498e.el10', creation_time=1760026734.9096081) get_instance_driver_metadata /usr/lib/python3.12/site-packages/nova/virt/driver.py:437
Oct 09 16:18:54 compute-0 nova_compute[117331]: 2025-10-09 16:18:54.914 2 DEBUG nova.virt.libvirt.host [None req-ad81bff0-424a-489e-80a2-446f6ae5fd54 2c30c20b7c364f81b40bcec56afc8ae3 8a3f79d5a5f2475d93599ef409043893 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.12/site-packages/nova/virt/libvirt/host.py:1698
Oct 09 16:18:54 compute-0 nova_compute[117331]: 2025-10-09 16:18:54.915 2 DEBUG nova.virt.libvirt.host [None req-ad81bff0-424a-489e-80a2-446f6ae5fd54 2c30c20b7c364f81b40bcec56afc8ae3 8a3f79d5a5f2475d93599ef409043893 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.12/site-packages/nova/virt/libvirt/host.py:1708
Oct 09 16:18:54 compute-0 nova_compute[117331]: 2025-10-09 16:18:54.918 2 DEBUG nova.virt.libvirt.host [None req-ad81bff0-424a-489e-80a2-446f6ae5fd54 2c30c20b7c364f81b40bcec56afc8ae3 8a3f79d5a5f2475d93599ef409043893 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.12/site-packages/nova/virt/libvirt/host.py:1717
Oct 09 16:18:54 compute-0 nova_compute[117331]: 2025-10-09 16:18:54.918 2 DEBUG nova.virt.libvirt.host [None req-ad81bff0-424a-489e-80a2-446f6ae5fd54 2c30c20b7c364f81b40bcec56afc8ae3 8a3f79d5a5f2475d93599ef409043893 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.12/site-packages/nova/virt/libvirt/host.py:1724
Oct 09 16:18:54 compute-0 nova_compute[117331]: 2025-10-09 16:18:54.919 2 DEBUG nova.virt.libvirt.driver [None req-ad81bff0-424a-489e-80a2-446f6ae5fd54 2c30c20b7c364f81b40bcec56afc8ae3 8a3f79d5a5f2475d93599ef409043893 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.12/site-packages/nova/virt/libvirt/driver.py:5809
Oct 09 16:18:54 compute-0 nova_compute[117331]: 2025-10-09 16:18:54.919 2 DEBUG nova.virt.hardware [None req-ad81bff0-424a-489e-80a2-446f6ae5fd54 2c30c20b7c364f81b40bcec56afc8ae3 8a3f79d5a5f2475d93599ef409043893 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-10-09T16:07:00Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='5aeb5bf9-70f3-4ab6-b418-2c3319a7d2b3',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-09T16:07:02Z,direct_url=<?>,disk_format='qcow2',id=b7d6e0af-25e4-4227-9dc6-43143898ceee,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='b30e8cf5e10742f190212b4cb97ce2c9',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-09T16:07:04Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.12/site-packages/nova/virt/hardware.py:571
Oct 09 16:18:54 compute-0 nova_compute[117331]: 2025-10-09 16:18:54.919 2 DEBUG nova.virt.hardware [None req-ad81bff0-424a-489e-80a2-446f6ae5fd54 2c30c20b7c364f81b40bcec56afc8ae3 8a3f79d5a5f2475d93599ef409043893 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.12/site-packages/nova/virt/hardware.py:356
Oct 09 16:18:54 compute-0 nova_compute[117331]: 2025-10-09 16:18:54.919 2 DEBUG nova.virt.hardware [None req-ad81bff0-424a-489e-80a2-446f6ae5fd54 2c30c20b7c364f81b40bcec56afc8ae3 8a3f79d5a5f2475d93599ef409043893 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.12/site-packages/nova/virt/hardware.py:360
Oct 09 16:18:54 compute-0 nova_compute[117331]: 2025-10-09 16:18:54.920 2 DEBUG nova.virt.hardware [None req-ad81bff0-424a-489e-80a2-446f6ae5fd54 2c30c20b7c364f81b40bcec56afc8ae3 8a3f79d5a5f2475d93599ef409043893 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.12/site-packages/nova/virt/hardware.py:396
Oct 09 16:18:54 compute-0 nova_compute[117331]: 2025-10-09 16:18:54.920 2 DEBUG nova.virt.hardware [None req-ad81bff0-424a-489e-80a2-446f6ae5fd54 2c30c20b7c364f81b40bcec56afc8ae3 8a3f79d5a5f2475d93599ef409043893 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.12/site-packages/nova/virt/hardware.py:400
Oct 09 16:18:54 compute-0 nova_compute[117331]: 2025-10-09 16:18:54.920 2 DEBUG nova.virt.hardware [None req-ad81bff0-424a-489e-80a2-446f6ae5fd54 2c30c20b7c364f81b40bcec56afc8ae3 8a3f79d5a5f2475d93599ef409043893 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.12/site-packages/nova/virt/hardware.py:438
Oct 09 16:18:54 compute-0 nova_compute[117331]: 2025-10-09 16:18:54.920 2 DEBUG nova.virt.hardware [None req-ad81bff0-424a-489e-80a2-446f6ae5fd54 2c30c20b7c364f81b40bcec56afc8ae3 8a3f79d5a5f2475d93599ef409043893 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.12/site-packages/nova/virt/hardware.py:577
Oct 09 16:18:54 compute-0 nova_compute[117331]: 2025-10-09 16:18:54.920 2 DEBUG nova.virt.hardware [None req-ad81bff0-424a-489e-80a2-446f6ae5fd54 2c30c20b7c364f81b40bcec56afc8ae3 8a3f79d5a5f2475d93599ef409043893 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.12/site-packages/nova/virt/hardware.py:479
Oct 09 16:18:54 compute-0 nova_compute[117331]: 2025-10-09 16:18:54.921 2 DEBUG nova.virt.hardware [None req-ad81bff0-424a-489e-80a2-446f6ae5fd54 2c30c20b7c364f81b40bcec56afc8ae3 8a3f79d5a5f2475d93599ef409043893 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.12/site-packages/nova/virt/hardware.py:509
Oct 09 16:18:54 compute-0 nova_compute[117331]: 2025-10-09 16:18:54.921 2 DEBUG nova.virt.hardware [None req-ad81bff0-424a-489e-80a2-446f6ae5fd54 2c30c20b7c364f81b40bcec56afc8ae3 8a3f79d5a5f2475d93599ef409043893 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.12/site-packages/nova/virt/hardware.py:583
Oct 09 16:18:54 compute-0 nova_compute[117331]: 2025-10-09 16:18:54.921 2 DEBUG nova.virt.hardware [None req-ad81bff0-424a-489e-80a2-446f6ae5fd54 2c30c20b7c364f81b40bcec56afc8ae3 8a3f79d5a5f2475d93599ef409043893 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.12/site-packages/nova/virt/hardware.py:585
Oct 09 16:18:54 compute-0 nova_compute[117331]: 2025-10-09 16:18:54.925 2 DEBUG nova.virt.libvirt.vif [None req-ad81bff0-424a-489e-80a2-446f6ae5fd54 2c30c20b7c364f81b40bcec56afc8ae3 8a3f79d5a5f2475d93599ef409043893 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,compute_id=1,config_drive='',created_at=2025-10-09T16:18:45Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description=None,display_name='tempest-TestExecuteBasicStrategy-server-1593369519',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testexecutebasicstrategy-server-1593369519',id=10,image_ref='b7d6e0af-25e4-4227-9dc6-43143898ceee',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='8a3f79d5a5f2475d93599ef409043893',ramdisk_id='',reservation_id='r-hwjq090r',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,manager,admin,member',image_base_image_ref='b7d6e0af-25e4-4227-9dc6-43143898ceee',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestExecuteBasicStrategy-20506088',owner_user_name='tempest-TestExecuteBasicStrategy-20506088-project-admin'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-09T16:18:50Z,user_data=None,user_id='2c30c20b7c364f81b40bcec56afc8ae3',uuid=cd624073-6d91-4e27-9050-656e729e250e,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "b0f5ccc9-26bf-47c1-be9c-23a53bcb2d99", "address": "fa:16:3e:75:be:26", "network": {"id": "7a8574b4-b4f7-483c-8050-7281b5ac5624", "bridge": "br-int", "label": "tempest-TestExecuteBasicStrategy-1906595087-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f4e80e48be374e249fbd628564fc7b82", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb0f5ccc9-26", "ovs_interfaceid": "b0f5ccc9-26bf-47c1-be9c-23a53bcb2d99", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.12/site-packages/nova/virt/libvirt/vif.py:574
Oct 09 16:18:54 compute-0 nova_compute[117331]: 2025-10-09 16:18:54.925 2 DEBUG nova.network.os_vif_util [None req-ad81bff0-424a-489e-80a2-446f6ae5fd54 2c30c20b7c364f81b40bcec56afc8ae3 8a3f79d5a5f2475d93599ef409043893 - - default default] Converting VIF {"id": "b0f5ccc9-26bf-47c1-be9c-23a53bcb2d99", "address": "fa:16:3e:75:be:26", "network": {"id": "7a8574b4-b4f7-483c-8050-7281b5ac5624", "bridge": "br-int", "label": "tempest-TestExecuteBasicStrategy-1906595087-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f4e80e48be374e249fbd628564fc7b82", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb0f5ccc9-26", "ovs_interfaceid": "b0f5ccc9-26bf-47c1-be9c-23a53bcb2d99", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.12/site-packages/nova/network/os_vif_util.py:511
Oct 09 16:18:54 compute-0 nova_compute[117331]: 2025-10-09 16:18:54.926 2 DEBUG nova.network.os_vif_util [None req-ad81bff0-424a-489e-80a2-446f6ae5fd54 2c30c20b7c364f81b40bcec56afc8ae3 8a3f79d5a5f2475d93599ef409043893 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:75:be:26,bridge_name='br-int',has_traffic_filtering=True,id=b0f5ccc9-26bf-47c1-be9c-23a53bcb2d99,network=Network(7a8574b4-b4f7-483c-8050-7281b5ac5624),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapb0f5ccc9-26') nova_to_osvif_vif /usr/lib/python3.12/site-packages/nova/network/os_vif_util.py:548
Oct 09 16:18:54 compute-0 nova_compute[117331]: 2025-10-09 16:18:54.927 2 DEBUG nova.objects.instance [None req-ad81bff0-424a-489e-80a2-446f6ae5fd54 2c30c20b7c364f81b40bcec56afc8ae3 8a3f79d5a5f2475d93599ef409043893 - - default default] Lazy-loading 'pci_devices' on Instance uuid cd624073-6d91-4e27-9050-656e729e250e obj_load_attr /usr/lib/python3.12/site-packages/nova/objects/instance.py:1141
Oct 09 16:18:55 compute-0 nova_compute[117331]: 2025-10-09 16:18:55.433 2 DEBUG nova.virt.libvirt.driver [None req-ad81bff0-424a-489e-80a2-446f6ae5fd54 2c30c20b7c364f81b40bcec56afc8ae3 8a3f79d5a5f2475d93599ef409043893 - - default default] [instance: cd624073-6d91-4e27-9050-656e729e250e] End _get_guest_xml xml=<domain type="kvm">
Oct 09 16:18:55 compute-0 nova_compute[117331]:   <uuid>cd624073-6d91-4e27-9050-656e729e250e</uuid>
Oct 09 16:18:55 compute-0 nova_compute[117331]:   <name>instance-0000000a</name>
Oct 09 16:18:55 compute-0 nova_compute[117331]:   <memory>131072</memory>
Oct 09 16:18:55 compute-0 nova_compute[117331]:   <vcpu>1</vcpu>
Oct 09 16:18:55 compute-0 nova_compute[117331]:   <metadata>
Oct 09 16:18:55 compute-0 nova_compute[117331]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Oct 09 16:18:55 compute-0 nova_compute[117331]:       <nova:package version="32.1.0-0.20251008143712.076498e.el10"/>
Oct 09 16:18:55 compute-0 nova_compute[117331]:       <nova:name>tempest-TestExecuteBasicStrategy-server-1593369519</nova:name>
Oct 09 16:18:55 compute-0 nova_compute[117331]:       <nova:creationTime>2025-10-09 16:18:54</nova:creationTime>
Oct 09 16:18:55 compute-0 nova_compute[117331]:       <nova:flavor name="m1.nano" id="5aeb5bf9-70f3-4ab6-b418-2c3319a7d2b3">
Oct 09 16:18:55 compute-0 nova_compute[117331]:         <nova:memory>128</nova:memory>
Oct 09 16:18:55 compute-0 nova_compute[117331]:         <nova:disk>1</nova:disk>
Oct 09 16:18:55 compute-0 nova_compute[117331]:         <nova:swap>0</nova:swap>
Oct 09 16:18:55 compute-0 nova_compute[117331]:         <nova:ephemeral>0</nova:ephemeral>
Oct 09 16:18:55 compute-0 nova_compute[117331]:         <nova:vcpus>1</nova:vcpus>
Oct 09 16:18:55 compute-0 nova_compute[117331]:         <nova:extraSpecs>
Oct 09 16:18:55 compute-0 nova_compute[117331]:           <nova:extraSpec name="hw_rng:allowed">True</nova:extraSpec>
Oct 09 16:18:55 compute-0 nova_compute[117331]:         </nova:extraSpecs>
Oct 09 16:18:55 compute-0 nova_compute[117331]:       </nova:flavor>
Oct 09 16:18:55 compute-0 nova_compute[117331]:       <nova:image uuid="b7d6e0af-25e4-4227-9dc6-43143898ceee">
Oct 09 16:18:55 compute-0 nova_compute[117331]:         <nova:containerFormat>bare</nova:containerFormat>
Oct 09 16:18:55 compute-0 nova_compute[117331]:         <nova:diskFormat>qcow2</nova:diskFormat>
Oct 09 16:18:55 compute-0 nova_compute[117331]:         <nova:minDisk>1</nova:minDisk>
Oct 09 16:18:55 compute-0 nova_compute[117331]:         <nova:minRam>0</nova:minRam>
Oct 09 16:18:55 compute-0 nova_compute[117331]:         <nova:properties>
Oct 09 16:18:55 compute-0 nova_compute[117331]:           <nova:property name="hw_rng_model">virtio</nova:property>
Oct 09 16:18:55 compute-0 nova_compute[117331]:         </nova:properties>
Oct 09 16:18:55 compute-0 nova_compute[117331]:       </nova:image>
Oct 09 16:18:55 compute-0 nova_compute[117331]:       <nova:owner>
Oct 09 16:18:55 compute-0 nova_compute[117331]:         <nova:user uuid="2c30c20b7c364f81b40bcec56afc8ae3">tempest-TestExecuteBasicStrategy-20506088-project-admin</nova:user>
Oct 09 16:18:55 compute-0 nova_compute[117331]:         <nova:project uuid="8a3f79d5a5f2475d93599ef409043893">tempest-TestExecuteBasicStrategy-20506088</nova:project>
Oct 09 16:18:55 compute-0 nova_compute[117331]:       </nova:owner>
Oct 09 16:18:55 compute-0 nova_compute[117331]:       <nova:root type="image" uuid="b7d6e0af-25e4-4227-9dc6-43143898ceee"/>
Oct 09 16:18:55 compute-0 nova_compute[117331]:       <nova:ports>
Oct 09 16:18:55 compute-0 nova_compute[117331]:         <nova:port uuid="b0f5ccc9-26bf-47c1-be9c-23a53bcb2d99">
Oct 09 16:18:55 compute-0 nova_compute[117331]:           <nova:ip type="fixed" address="10.100.0.9" ipVersion="4"/>
Oct 09 16:18:55 compute-0 nova_compute[117331]:         </nova:port>
Oct 09 16:18:55 compute-0 nova_compute[117331]:       </nova:ports>
Oct 09 16:18:55 compute-0 nova_compute[117331]:     </nova:instance>
Oct 09 16:18:55 compute-0 nova_compute[117331]:   </metadata>
Oct 09 16:18:55 compute-0 nova_compute[117331]:   <sysinfo type="smbios">
Oct 09 16:18:55 compute-0 nova_compute[117331]:     <system>
Oct 09 16:18:55 compute-0 nova_compute[117331]:       <entry name="manufacturer">RDO</entry>
Oct 09 16:18:55 compute-0 nova_compute[117331]:       <entry name="product">OpenStack Compute</entry>
Oct 09 16:18:55 compute-0 nova_compute[117331]:       <entry name="version">32.1.0-0.20251008143712.076498e.el10</entry>
Oct 09 16:18:55 compute-0 nova_compute[117331]:       <entry name="serial">cd624073-6d91-4e27-9050-656e729e250e</entry>
Oct 09 16:18:55 compute-0 nova_compute[117331]:       <entry name="uuid">cd624073-6d91-4e27-9050-656e729e250e</entry>
Oct 09 16:18:55 compute-0 nova_compute[117331]:       <entry name="family">Virtual Machine</entry>
Oct 09 16:18:55 compute-0 nova_compute[117331]:     </system>
Oct 09 16:18:55 compute-0 nova_compute[117331]:   </sysinfo>
Oct 09 16:18:55 compute-0 nova_compute[117331]:   <os>
Oct 09 16:18:55 compute-0 nova_compute[117331]:     <type arch="x86_64" machine="q35">hvm</type>
Oct 09 16:18:55 compute-0 nova_compute[117331]:     <boot dev="hd"/>
Oct 09 16:18:55 compute-0 nova_compute[117331]:     <smbios mode="sysinfo"/>
Oct 09 16:18:55 compute-0 nova_compute[117331]:   </os>
Oct 09 16:18:55 compute-0 nova_compute[117331]:   <features>
Oct 09 16:18:55 compute-0 nova_compute[117331]:     <acpi/>
Oct 09 16:18:55 compute-0 nova_compute[117331]:     <apic/>
Oct 09 16:18:55 compute-0 nova_compute[117331]:     <vmcoreinfo/>
Oct 09 16:18:55 compute-0 nova_compute[117331]:   </features>
Oct 09 16:18:55 compute-0 nova_compute[117331]:   <clock offset="utc">
Oct 09 16:18:55 compute-0 nova_compute[117331]:     <timer name="pit" tickpolicy="delay"/>
Oct 09 16:18:55 compute-0 nova_compute[117331]:     <timer name="rtc" tickpolicy="catchup"/>
Oct 09 16:18:55 compute-0 nova_compute[117331]:     <timer name="hpet" present="no"/>
Oct 09 16:18:55 compute-0 nova_compute[117331]:   </clock>
Oct 09 16:18:55 compute-0 nova_compute[117331]:   <cpu mode="host-model" match="exact">
Oct 09 16:18:55 compute-0 nova_compute[117331]:     <topology sockets="1" cores="1" threads="1"/>
Oct 09 16:18:55 compute-0 nova_compute[117331]:   </cpu>
Oct 09 16:18:55 compute-0 nova_compute[117331]:   <devices>
Oct 09 16:18:55 compute-0 nova_compute[117331]:     <disk type="file" device="disk">
Oct 09 16:18:55 compute-0 nova_compute[117331]:       <driver name="qemu" type="qcow2" cache="none"/>
Oct 09 16:18:55 compute-0 nova_compute[117331]:       <source file="/var/lib/nova/instances/cd624073-6d91-4e27-9050-656e729e250e/disk"/>
Oct 09 16:18:55 compute-0 nova_compute[117331]:       <target dev="vda" bus="virtio"/>
Oct 09 16:18:55 compute-0 nova_compute[117331]:     </disk>
Oct 09 16:18:55 compute-0 nova_compute[117331]:     <disk type="file" device="cdrom">
Oct 09 16:18:55 compute-0 nova_compute[117331]:       <driver name="qemu" type="raw" cache="none"/>
Oct 09 16:18:55 compute-0 nova_compute[117331]:       <source file="/var/lib/nova/instances/cd624073-6d91-4e27-9050-656e729e250e/disk.config"/>
Oct 09 16:18:55 compute-0 nova_compute[117331]:       <target dev="sda" bus="sata"/>
Oct 09 16:18:55 compute-0 nova_compute[117331]:     </disk>
Oct 09 16:18:55 compute-0 nova_compute[117331]:     <interface type="ethernet">
Oct 09 16:18:55 compute-0 nova_compute[117331]:       <mac address="fa:16:3e:75:be:26"/>
Oct 09 16:18:55 compute-0 nova_compute[117331]:       <model type="virtio"/>
Oct 09 16:18:55 compute-0 nova_compute[117331]:       <driver name="vhost" rx_queue_size="512"/>
Oct 09 16:18:55 compute-0 nova_compute[117331]:       <mtu size="1442"/>
Oct 09 16:18:55 compute-0 nova_compute[117331]:       <target dev="tapb0f5ccc9-26"/>
Oct 09 16:18:55 compute-0 nova_compute[117331]:     </interface>
Oct 09 16:18:55 compute-0 nova_compute[117331]:     <serial type="pty">
Oct 09 16:18:55 compute-0 nova_compute[117331]:       <log file="/var/lib/nova/instances/cd624073-6d91-4e27-9050-656e729e250e/console.log" append="off"/>
Oct 09 16:18:55 compute-0 nova_compute[117331]:     </serial>
Oct 09 16:18:55 compute-0 nova_compute[117331]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Oct 09 16:18:55 compute-0 nova_compute[117331]:     <video>
Oct 09 16:18:55 compute-0 nova_compute[117331]:       <model type="virtio"/>
Oct 09 16:18:55 compute-0 nova_compute[117331]:     </video>
Oct 09 16:18:55 compute-0 nova_compute[117331]:     <input type="tablet" bus="usb"/>
Oct 09 16:18:55 compute-0 nova_compute[117331]:     <rng model="virtio">
Oct 09 16:18:55 compute-0 nova_compute[117331]:       <backend model="random">/dev/urandom</backend>
Oct 09 16:18:55 compute-0 nova_compute[117331]:     </rng>
Oct 09 16:18:55 compute-0 nova_compute[117331]:     <controller type="pci" model="pcie-root"/>
Oct 09 16:18:55 compute-0 nova_compute[117331]:     <controller type="pci" model="pcie-root-port"/>
Oct 09 16:18:55 compute-0 nova_compute[117331]:     <controller type="pci" model="pcie-root-port"/>
Oct 09 16:18:55 compute-0 nova_compute[117331]:     <controller type="pci" model="pcie-root-port"/>
Oct 09 16:18:55 compute-0 nova_compute[117331]:     <controller type="pci" model="pcie-root-port"/>
Oct 09 16:18:55 compute-0 nova_compute[117331]:     <controller type="pci" model="pcie-root-port"/>
Oct 09 16:18:55 compute-0 nova_compute[117331]:     <controller type="pci" model="pcie-root-port"/>
Oct 09 16:18:55 compute-0 nova_compute[117331]:     <controller type="pci" model="pcie-root-port"/>
Oct 09 16:18:55 compute-0 nova_compute[117331]:     <controller type="pci" model="pcie-root-port"/>
Oct 09 16:18:55 compute-0 nova_compute[117331]:     <controller type="pci" model="pcie-root-port"/>
Oct 09 16:18:55 compute-0 nova_compute[117331]:     <controller type="pci" model="pcie-root-port"/>
Oct 09 16:18:55 compute-0 nova_compute[117331]:     <controller type="pci" model="pcie-root-port"/>
Oct 09 16:18:55 compute-0 nova_compute[117331]:     <controller type="pci" model="pcie-root-port"/>
Oct 09 16:18:55 compute-0 nova_compute[117331]:     <controller type="pci" model="pcie-root-port"/>
Oct 09 16:18:55 compute-0 nova_compute[117331]:     <controller type="pci" model="pcie-root-port"/>
Oct 09 16:18:55 compute-0 nova_compute[117331]:     <controller type="pci" model="pcie-root-port"/>
Oct 09 16:18:55 compute-0 nova_compute[117331]:     <controller type="pci" model="pcie-root-port"/>
Oct 09 16:18:55 compute-0 nova_compute[117331]:     <controller type="pci" model="pcie-root-port"/>
Oct 09 16:18:55 compute-0 nova_compute[117331]:     <controller type="pci" model="pcie-root-port"/>
Oct 09 16:18:55 compute-0 nova_compute[117331]:     <controller type="pci" model="pcie-root-port"/>
Oct 09 16:18:55 compute-0 nova_compute[117331]:     <controller type="pci" model="pcie-root-port"/>
Oct 09 16:18:55 compute-0 nova_compute[117331]:     <controller type="pci" model="pcie-root-port"/>
Oct 09 16:18:55 compute-0 nova_compute[117331]:     <controller type="pci" model="pcie-root-port"/>
Oct 09 16:18:55 compute-0 nova_compute[117331]:     <controller type="pci" model="pcie-root-port"/>
Oct 09 16:18:55 compute-0 nova_compute[117331]:     <controller type="pci" model="pcie-root-port"/>
Oct 09 16:18:55 compute-0 nova_compute[117331]:     <controller type="usb" index="0"/>
Oct 09 16:18:55 compute-0 nova_compute[117331]:     <memballoon model="virtio" autodeflate="on" freePageReporting="on">
Oct 09 16:18:55 compute-0 nova_compute[117331]:       <stats period="10"/>
Oct 09 16:18:55 compute-0 nova_compute[117331]:     </memballoon>
Oct 09 16:18:55 compute-0 nova_compute[117331]:   </devices>
Oct 09 16:18:55 compute-0 nova_compute[117331]: </domain>
Oct 09 16:18:55 compute-0 nova_compute[117331]:  _get_guest_xml /usr/lib/python3.12/site-packages/nova/virt/libvirt/driver.py:8052
Oct 09 16:18:55 compute-0 nova_compute[117331]: 2025-10-09 16:18:55.435 2 DEBUG nova.compute.manager [None req-ad81bff0-424a-489e-80a2-446f6ae5fd54 2c30c20b7c364f81b40bcec56afc8ae3 8a3f79d5a5f2475d93599ef409043893 - - default default] [instance: cd624073-6d91-4e27-9050-656e729e250e] Preparing to wait for external event network-vif-plugged-b0f5ccc9-26bf-47c1-be9c-23a53bcb2d99 prepare_for_instance_event /usr/lib/python3.12/site-packages/nova/compute/manager.py:307
Oct 09 16:18:55 compute-0 nova_compute[117331]: 2025-10-09 16:18:55.435 2 DEBUG oslo_concurrency.lockutils [None req-ad81bff0-424a-489e-80a2-446f6ae5fd54 2c30c20b7c364f81b40bcec56afc8ae3 8a3f79d5a5f2475d93599ef409043893 - - default default] Acquiring lock "cd624073-6d91-4e27-9050-656e729e250e-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:405
Oct 09 16:18:55 compute-0 nova_compute[117331]: 2025-10-09 16:18:55.435 2 DEBUG oslo_concurrency.lockutils [None req-ad81bff0-424a-489e-80a2-446f6ae5fd54 2c30c20b7c364f81b40bcec56afc8ae3 8a3f79d5a5f2475d93599ef409043893 - - default default] Lock "cd624073-6d91-4e27-9050-656e729e250e-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:410
Oct 09 16:18:55 compute-0 nova_compute[117331]: 2025-10-09 16:18:55.436 2 DEBUG oslo_concurrency.lockutils [None req-ad81bff0-424a-489e-80a2-446f6ae5fd54 2c30c20b7c364f81b40bcec56afc8ae3 8a3f79d5a5f2475d93599ef409043893 - - default default] Lock "cd624073-6d91-4e27-9050-656e729e250e-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:424
Oct 09 16:18:55 compute-0 nova_compute[117331]: 2025-10-09 16:18:55.436 2 DEBUG nova.virt.libvirt.vif [None req-ad81bff0-424a-489e-80a2-446f6ae5fd54 2c30c20b7c364f81b40bcec56afc8ae3 8a3f79d5a5f2475d93599ef409043893 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,compute_id=1,config_drive='',created_at=2025-10-09T16:18:45Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description=None,display_name='tempest-TestExecuteBasicStrategy-server-1593369519',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testexecutebasicstrategy-server-1593369519',id=10,image_ref='b7d6e0af-25e4-4227-9dc6-43143898ceee',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='8a3f79d5a5f2475d93599ef409043893',ramdisk_id='',reservation_id='r-hwjq090r',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,manager,admin,member',image_base_image_ref='b7d6e0af-25e4-4227-9dc6-43143898ceee',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestExecuteBasicStrategy-20506088',owner_user_name='tempest-TestExecuteBasicStrategy-20506088-project-admin'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-09T16:18:50Z,user_data=None,user_id='2c30c20b7c364f81b40bcec56afc8ae3',uuid=cd624073-6d91-4e27-9050-656e729e250e,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "b0f5ccc9-26bf-47c1-be9c-23a53bcb2d99", "address": "fa:16:3e:75:be:26", "network": {"id": "7a8574b4-b4f7-483c-8050-7281b5ac5624", "bridge": "br-int", "label": "tempest-TestExecuteBasicStrategy-1906595087-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f4e80e48be374e249fbd628564fc7b82", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb0f5ccc9-26", "ovs_interfaceid": "b0f5ccc9-26bf-47c1-be9c-23a53bcb2d99", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.12/site-packages/nova/virt/libvirt/vif.py:721
Oct 09 16:18:55 compute-0 nova_compute[117331]: 2025-10-09 16:18:55.437 2 DEBUG nova.network.os_vif_util [None req-ad81bff0-424a-489e-80a2-446f6ae5fd54 2c30c20b7c364f81b40bcec56afc8ae3 8a3f79d5a5f2475d93599ef409043893 - - default default] Converting VIF {"id": "b0f5ccc9-26bf-47c1-be9c-23a53bcb2d99", "address": "fa:16:3e:75:be:26", "network": {"id": "7a8574b4-b4f7-483c-8050-7281b5ac5624", "bridge": "br-int", "label": "tempest-TestExecuteBasicStrategy-1906595087-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f4e80e48be374e249fbd628564fc7b82", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb0f5ccc9-26", "ovs_interfaceid": "b0f5ccc9-26bf-47c1-be9c-23a53bcb2d99", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.12/site-packages/nova/network/os_vif_util.py:511
Oct 09 16:18:55 compute-0 nova_compute[117331]: 2025-10-09 16:18:55.438 2 DEBUG nova.network.os_vif_util [None req-ad81bff0-424a-489e-80a2-446f6ae5fd54 2c30c20b7c364f81b40bcec56afc8ae3 8a3f79d5a5f2475d93599ef409043893 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:75:be:26,bridge_name='br-int',has_traffic_filtering=True,id=b0f5ccc9-26bf-47c1-be9c-23a53bcb2d99,network=Network(7a8574b4-b4f7-483c-8050-7281b5ac5624),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapb0f5ccc9-26') nova_to_osvif_vif /usr/lib/python3.12/site-packages/nova/network/os_vif_util.py:548
Oct 09 16:18:55 compute-0 nova_compute[117331]: 2025-10-09 16:18:55.438 2 DEBUG os_vif [None req-ad81bff0-424a-489e-80a2-446f6ae5fd54 2c30c20b7c364f81b40bcec56afc8ae3 8a3f79d5a5f2475d93599ef409043893 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:75:be:26,bridge_name='br-int',has_traffic_filtering=True,id=b0f5ccc9-26bf-47c1-be9c-23a53bcb2d99,network=Network(7a8574b4-b4f7-483c-8050-7281b5ac5624),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapb0f5ccc9-26') plug /usr/lib/python3.12/site-packages/os_vif/__init__.py:76
Oct 09 16:18:55 compute-0 nova_compute[117331]: 2025-10-09 16:18:55.439 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 22 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 09 16:18:55 compute-0 nova_compute[117331]: 2025-10-09 16:18:55.439 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.12/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 09 16:18:55 compute-0 nova_compute[117331]: 2025-10-09 16:18:55.440 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.12/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 09 16:18:55 compute-0 nova_compute[117331]: 2025-10-09 16:18:55.440 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 22 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 09 16:18:55 compute-0 nova_compute[117331]: 2025-10-09 16:18:55.441 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbCreateCommand(_result=None, table=QoS, columns={'type': 'linux-noop', 'external_ids': {'id': '8b976cab-1e3e-5635-896e-378062561e9c', '_type': 'linux-noop'}}, row=False) do_commit /usr/lib/python3.12/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 09 16:18:55 compute-0 nova_compute[117331]: 2025-10-09 16:18:55.442 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 09 16:18:55 compute-0 nova_compute[117331]: 2025-10-09 16:18:55.445 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:248
Oct 09 16:18:55 compute-0 nova_compute[117331]: 2025-10-09 16:18:55.447 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 22 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 09 16:18:55 compute-0 nova_compute[117331]: 2025-10-09 16:18:55.447 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapb0f5ccc9-26, may_exist=True, interface_attrs={}) do_commit /usr/lib/python3.12/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 09 16:18:55 compute-0 nova_compute[117331]: 2025-10-09 16:18:55.448 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Port, record=tapb0f5ccc9-26, col_values=(('qos', UUID('b2e17c28-c473-4116-ba7e-8fdbe2f572c8')),), if_exists=True) do_commit /usr/lib/python3.12/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 09 16:18:55 compute-0 nova_compute[117331]: 2025-10-09 16:18:55.448 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=2): DbSetCommand(_result=None, table=Interface, record=tapb0f5ccc9-26, col_values=(('external_ids', {'iface-id': 'b0f5ccc9-26bf-47c1-be9c-23a53bcb2d99', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:75:be:26', 'vm-uuid': 'cd624073-6d91-4e27-9050-656e729e250e'}),), if_exists=True) do_commit /usr/lib/python3.12/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 09 16:18:55 compute-0 nova_compute[117331]: 2025-10-09 16:18:55.449 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 09 16:18:55 compute-0 nova_compute[117331]: 2025-10-09 16:18:55.450 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 09 16:18:55 compute-0 nova_compute[117331]: 2025-10-09 16:18:55.451 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:248
Oct 09 16:18:55 compute-0 NetworkManager[1028]: <info>  [1760026735.4513] manager: (tapb0f5ccc9-26): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/42)
Oct 09 16:18:55 compute-0 nova_compute[117331]: 2025-10-09 16:18:55.457 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 09 16:18:55 compute-0 nova_compute[117331]: 2025-10-09 16:18:55.458 2 INFO os_vif [None req-ad81bff0-424a-489e-80a2-446f6ae5fd54 2c30c20b7c364f81b40bcec56afc8ae3 8a3f79d5a5f2475d93599ef409043893 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:75:be:26,bridge_name='br-int',has_traffic_filtering=True,id=b0f5ccc9-26bf-47c1-be9c-23a53bcb2d99,network=Network(7a8574b4-b4f7-483c-8050-7281b5ac5624),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapb0f5ccc9-26')
Oct 09 16:18:55 compute-0 ovn_controller[19752]: 2025-10-09T16:18:55Z|00101|memory_trim|INFO|Detected inactivity (last active 30001 ms ago): trimming memory
Oct 09 16:18:55 compute-0 nova_compute[117331]: 2025-10-09 16:18:55.757 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 09 16:18:55 compute-0 podman[144475]: 2025-10-09 16:18:55.838313749 +0000 UTC m=+0.073344890 container health_status 270830b5167cafbab735d8118980f26987e673cd0fa464dd2202394830bcca66 (image=38.102.83.66:5001/podified-master-centos10/openstack-multipathd:watcher_latest, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, tcib_build_tag=watcher_latest, tcib_managed=true, config_id=multipathd, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251007, io.buildah.version=1.41.4, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.schema-version=1.0, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': '38.102.83.66:5001/podified-master-centos10/openstack-multipathd:watcher_latest', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']})
Oct 09 16:18:56 compute-0 sshd-session[144450]: Failed password for root from 80.94.93.176 port 62456 ssh2
Oct 09 16:18:56 compute-0 sshd-session[144450]: Received disconnect from 80.94.93.176 port 62456:11:  [preauth]
Oct 09 16:18:56 compute-0 sshd-session[144450]: Disconnected from authenticating user root 80.94.93.176 port 62456 [preauth]
Oct 09 16:18:56 compute-0 sshd-session[144450]: PAM 2 more authentication failures; logname= uid=0 euid=0 tty=ssh ruser= rhost=80.94.93.176  user=root
Oct 09 16:18:57 compute-0 nova_compute[117331]: 2025-10-09 16:18:57.002 2 DEBUG nova.virt.libvirt.driver [None req-ad81bff0-424a-489e-80a2-446f6ae5fd54 2c30c20b7c364f81b40bcec56afc8ae3 8a3f79d5a5f2475d93599ef409043893 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.12/site-packages/nova/virt/libvirt/driver.py:13022
Oct 09 16:18:57 compute-0 nova_compute[117331]: 2025-10-09 16:18:57.003 2 DEBUG nova.virt.libvirt.driver [None req-ad81bff0-424a-489e-80a2-446f6ae5fd54 2c30c20b7c364f81b40bcec56afc8ae3 8a3f79d5a5f2475d93599ef409043893 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.12/site-packages/nova/virt/libvirt/driver.py:13022
Oct 09 16:18:57 compute-0 nova_compute[117331]: 2025-10-09 16:18:57.003 2 DEBUG nova.virt.libvirt.driver [None req-ad81bff0-424a-489e-80a2-446f6ae5fd54 2c30c20b7c364f81b40bcec56afc8ae3 8a3f79d5a5f2475d93599ef409043893 - - default default] No VIF found with MAC fa:16:3e:75:be:26, not building metadata _build_interface_metadata /usr/lib/python3.12/site-packages/nova/virt/libvirt/driver.py:12998
Oct 09 16:18:57 compute-0 nova_compute[117331]: 2025-10-09 16:18:57.004 2 INFO nova.virt.libvirt.driver [None req-ad81bff0-424a-489e-80a2-446f6ae5fd54 2c30c20b7c364f81b40bcec56afc8ae3 8a3f79d5a5f2475d93599ef409043893 - - default default] [instance: cd624073-6d91-4e27-9050-656e729e250e] Using config drive
Oct 09 16:18:57 compute-0 nova_compute[117331]: 2025-10-09 16:18:57.517 2 WARNING neutronclient.v2_0.client [None req-ad81bff0-424a-489e-80a2-446f6ae5fd54 2c30c20b7c364f81b40bcec56afc8ae3 8a3f79d5a5f2475d93599ef409043893 - - default default] The python binding code in neutronclient is deprecated in favor of OpenstackSDK, please use that as this will be removed in a future release.
Oct 09 16:18:57 compute-0 nova_compute[117331]: 2025-10-09 16:18:57.753 2 INFO nova.virt.libvirt.driver [None req-ad81bff0-424a-489e-80a2-446f6ae5fd54 2c30c20b7c364f81b40bcec56afc8ae3 8a3f79d5a5f2475d93599ef409043893 - - default default] [instance: cd624073-6d91-4e27-9050-656e729e250e] Creating config drive at /var/lib/nova/instances/cd624073-6d91-4e27-9050-656e729e250e/disk.config
Oct 09 16:18:57 compute-0 nova_compute[117331]: 2025-10-09 16:18:57.763 2 DEBUG oslo_concurrency.processutils [None req-ad81bff0-424a-489e-80a2-446f6ae5fd54 2c30c20b7c364f81b40bcec56afc8ae3 8a3f79d5a5f2475d93599ef409043893 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/cd624073-6d91-4e27-9050-656e729e250e/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 32.1.0-0.20251008143712.076498e.el10 -quiet -J -r -V config-2 /tmp/tmpw6pjslep execute /usr/lib/python3.12/site-packages/oslo_concurrency/processutils.py:349
Oct 09 16:18:57 compute-0 nova_compute[117331]: 2025-10-09 16:18:57.897 2 DEBUG oslo_concurrency.processutils [None req-ad81bff0-424a-489e-80a2-446f6ae5fd54 2c30c20b7c364f81b40bcec56afc8ae3 8a3f79d5a5f2475d93599ef409043893 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/cd624073-6d91-4e27-9050-656e729e250e/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 32.1.0-0.20251008143712.076498e.el10 -quiet -J -r -V config-2 /tmp/tmpw6pjslep" returned: 0 in 0.133s execute /usr/lib/python3.12/site-packages/oslo_concurrency/processutils.py:372
Oct 09 16:18:57 compute-0 kernel: tapb0f5ccc9-26: entered promiscuous mode
Oct 09 16:18:57 compute-0 NetworkManager[1028]: <info>  [1760026737.9806] manager: (tapb0f5ccc9-26): new Tun device (/org/freedesktop/NetworkManager/Devices/43)
Oct 09 16:18:57 compute-0 ovn_controller[19752]: 2025-10-09T16:18:57Z|00102|binding|INFO|Claiming lport b0f5ccc9-26bf-47c1-be9c-23a53bcb2d99 for this chassis.
Oct 09 16:18:57 compute-0 ovn_controller[19752]: 2025-10-09T16:18:57Z|00103|binding|INFO|b0f5ccc9-26bf-47c1-be9c-23a53bcb2d99: Claiming fa:16:3e:75:be:26 10.100.0.9
Oct 09 16:18:57 compute-0 nova_compute[117331]: 2025-10-09 16:18:57.981 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 09 16:18:57 compute-0 nova_compute[117331]: 2025-10-09 16:18:57.986 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 09 16:18:57 compute-0 nova_compute[117331]: 2025-10-09 16:18:57.988 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 09 16:18:58 compute-0 systemd-udevd[144511]: Network interface NamePolicy= disabled on kernel command line.
Oct 09 16:18:58 compute-0 NetworkManager[1028]: <info>  [1760026738.0378] device (tapb0f5ccc9-26): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Oct 09 16:18:58 compute-0 NetworkManager[1028]: <info>  [1760026738.0397] device (tapb0f5ccc9-26): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Oct 09 16:18:58 compute-0 nova_compute[117331]: 2025-10-09 16:18:58.053 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 09 16:18:58 compute-0 ovn_controller[19752]: 2025-10-09T16:18:58Z|00104|binding|INFO|Setting lport b0f5ccc9-26bf-47c1-be9c-23a53bcb2d99 ovn-installed in OVS
Oct 09 16:18:58 compute-0 nova_compute[117331]: 2025-10-09 16:18:58.057 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 09 16:18:58 compute-0 ovn_controller[19752]: 2025-10-09T16:18:58Z|00105|binding|INFO|Setting lport b0f5ccc9-26bf-47c1-be9c-23a53bcb2d99 up in Southbound
Oct 09 16:18:58 compute-0 ovn_metadata_agent[28608]: 2025-10-09 16:18:58.076 28613 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:75:be:26 10.100.0.9'], port_security=['fa:16:3e:75:be:26 10.100.0.9'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.9/28', 'neutron:device_id': 'cd624073-6d91-4e27-9050-656e729e250e', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-7a8574b4-b4f7-483c-8050-7281b5ac5624', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '8a3f79d5a5f2475d93599ef409043893', 'neutron:revision_number': '4', 'neutron:security_group_ids': '3635f8cb-8906-4fc5-a724-24c5054b393c', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=ba3ccdc6-51b8-4206-add5-95d4a6a3eef3, chassis=[<ovs.db.idl.Row object at 0x7f37b2576840>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f37b2576840>], logical_port=b0f5ccc9-26bf-47c1-be9c-23a53bcb2d99) old=Port_Binding(chassis=[]) matches /usr/lib/python3.12/site-packages/ovsdbapp/backend/ovs_idl/event.py:55
Oct 09 16:18:58 compute-0 ovn_metadata_agent[28608]: 2025-10-09 16:18:58.078 28613 INFO neutron.agent.ovn.metadata.agent [-] Port b0f5ccc9-26bf-47c1-be9c-23a53bcb2d99 in datapath 7a8574b4-b4f7-483c-8050-7281b5ac5624 bound to our chassis
Oct 09 16:18:58 compute-0 ovn_metadata_agent[28608]: 2025-10-09 16:18:58.079 28613 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 7a8574b4-b4f7-483c-8050-7281b5ac5624
Oct 09 16:18:58 compute-0 ovn_metadata_agent[28608]: 2025-10-09 16:18:58.093 139687 DEBUG oslo.privsep.daemon [-] privsep: reply[ecbc6e61-9ee0-4ab0-991e-a45486158d76]: (4, False) _call_back /usr/lib/python3.12/site-packages/oslo_privsep/daemon.py:515
Oct 09 16:18:58 compute-0 ovn_metadata_agent[28608]: 2025-10-09 16:18:58.094 28613 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap7a8574b4-b1 in ovnmeta-7a8574b4-b4f7-483c-8050-7281b5ac5624 namespace provision_datapath /usr/lib/python3.12/site-packages/neutron/agent/ovn/metadata/agent.py:794
Oct 09 16:18:58 compute-0 ovn_metadata_agent[28608]: 2025-10-09 16:18:58.097 139687 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap7a8574b4-b0 not found in namespace None get_link_id /usr/lib/python3.12/site-packages/neutron/privileged/agent/linux/ip_lib.py:203
Oct 09 16:18:58 compute-0 ovn_metadata_agent[28608]: 2025-10-09 16:18:58.097 139687 DEBUG oslo.privsep.daemon [-] privsep: reply[ff3c1f98-f9b5-476e-8fc5-9ce6a6fd081d]: (4, False) _call_back /usr/lib/python3.12/site-packages/oslo_privsep/daemon.py:515
Oct 09 16:18:58 compute-0 ovn_metadata_agent[28608]: 2025-10-09 16:18:58.098 139687 DEBUG oslo.privsep.daemon [-] privsep: reply[c70015a1-99d2-4751-9daa-c3d1760da5f0]: (4, False) _call_back /usr/lib/python3.12/site-packages/oslo_privsep/daemon.py:515
Oct 09 16:18:58 compute-0 ovn_metadata_agent[28608]: 2025-10-09 16:18:58.118 28727 DEBUG oslo.privsep.daemon [-] privsep: reply[af4bcfa0-5077-454c-a9d9-22435d9787b0]: (4, None) _call_back /usr/lib/python3.12/site-packages/oslo_privsep/daemon.py:515
Oct 09 16:18:58 compute-0 ovn_metadata_agent[28608]: 2025-10-09 16:18:58.135 139687 DEBUG oslo.privsep.daemon [-] privsep: reply[451b7003-de23-4ad3-a517-17a960a6910f]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.12/site-packages/oslo_privsep/daemon.py:515
Oct 09 16:18:58 compute-0 systemd-machined[77487]: New machine qemu-6-instance-0000000a.
Oct 09 16:18:58 compute-0 systemd[1]: Started Virtual Machine qemu-6-instance-0000000a.
Oct 09 16:18:58 compute-0 ovn_metadata_agent[28608]: 2025-10-09 16:18:58.175 141048 DEBUG oslo.privsep.daemon [-] privsep: reply[c7612b9a-f6bc-47fe-91d6-b85ec91ee6fa]: (4, None) _call_back /usr/lib/python3.12/site-packages/oslo_privsep/daemon.py:515
Oct 09 16:18:58 compute-0 NetworkManager[1028]: <info>  [1760026738.1815] manager: (tap7a8574b4-b0): new Veth device (/org/freedesktop/NetworkManager/Devices/44)
Oct 09 16:18:58 compute-0 ovn_metadata_agent[28608]: 2025-10-09 16:18:58.182 139687 DEBUG oslo.privsep.daemon [-] privsep: reply[107171a4-d695-4781-a323-0c9309b9b49f]: (4, None) _call_back /usr/lib/python3.12/site-packages/oslo_privsep/daemon.py:515
Oct 09 16:18:58 compute-0 ovn_metadata_agent[28608]: 2025-10-09 16:18:58.218 141048 DEBUG oslo.privsep.daemon [-] privsep: reply[7222b8d1-f683-497b-ada1-9cb046dee89f]: (4, None) _call_back /usr/lib/python3.12/site-packages/oslo_privsep/daemon.py:515
Oct 09 16:18:58 compute-0 ovn_metadata_agent[28608]: 2025-10-09 16:18:58.221 141048 DEBUG oslo.privsep.daemon [-] privsep: reply[5142fd07-cea8-41f6-9498-353bf97ba128]: (4, None) _call_back /usr/lib/python3.12/site-packages/oslo_privsep/daemon.py:515
Oct 09 16:18:58 compute-0 NetworkManager[1028]: <info>  [1760026738.2495] device (tap7a8574b4-b0): carrier: link connected
Oct 09 16:18:58 compute-0 ovn_metadata_agent[28608]: 2025-10-09 16:18:58.256 141048 DEBUG oslo.privsep.daemon [-] privsep: reply[21ef15d3-3c4a-45dc-89fe-642fd4daebfc]: (4, None) _call_back /usr/lib/python3.12/site-packages/oslo_privsep/daemon.py:515
Oct 09 16:18:58 compute-0 ovn_metadata_agent[28608]: 2025-10-09 16:18:58.277 139687 DEBUG oslo.privsep.daemon [-] privsep: reply[7e184c1a-ad8a-485f-9e7a-5548430d641b]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap7a8574b4-b1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:8e:a1:d1'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 32], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 168185, 'reachable_time': 22880, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 144547, 'error': None, 'target': 'ovnmeta-7a8574b4-b4f7-483c-8050-7281b5ac5624', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.12/site-packages/oslo_privsep/daemon.py:515
Oct 09 16:18:58 compute-0 ovn_metadata_agent[28608]: 2025-10-09 16:18:58.297 139687 DEBUG oslo.privsep.daemon [-] privsep: reply[8a7fc364-dac4-41e8-b8d8-5253a79b6735]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe8e:a1d1'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 168185, 'tstamp': 168185}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 144548, 'error': None, 'target': 'ovnmeta-7a8574b4-b4f7-483c-8050-7281b5ac5624', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.12/site-packages/oslo_privsep/daemon.py:515
Oct 09 16:18:58 compute-0 ovn_metadata_agent[28608]: 2025-10-09 16:18:58.317 139687 DEBUG oslo.privsep.daemon [-] privsep: reply[42517b86-f5ec-451c-9e03-41de3cff813e]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap7a8574b4-b1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:8e:a1:d1'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 32], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 168185, 'reachable_time': 22880, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 144549, 'error': None, 'target': 'ovnmeta-7a8574b4-b4f7-483c-8050-7281b5ac5624', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.12/site-packages/oslo_privsep/daemon.py:515
Oct 09 16:18:58 compute-0 ovn_metadata_agent[28608]: 2025-10-09 16:18:58.358 139687 DEBUG oslo.privsep.daemon [-] privsep: reply[fd7a8584-cba2-46cd-9d2f-74b493f3fa60]: (4, None) _call_back /usr/lib/python3.12/site-packages/oslo_privsep/daemon.py:515
Oct 09 16:18:58 compute-0 ovn_metadata_agent[28608]: 2025-10-09 16:18:58.437 139687 DEBUG oslo.privsep.daemon [-] privsep: reply[32f18bcd-c4d3-462d-a389-544c85d690e4]: (4, None) _call_back /usr/lib/python3.12/site-packages/oslo_privsep/daemon.py:515
Oct 09 16:18:58 compute-0 ovn_metadata_agent[28608]: 2025-10-09 16:18:58.438 28613 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap7a8574b4-b0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.12/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 09 16:18:58 compute-0 ovn_metadata_agent[28608]: 2025-10-09 16:18:58.439 28613 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.12/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 09 16:18:58 compute-0 ovn_metadata_agent[28608]: 2025-10-09 16:18:58.439 28613 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap7a8574b4-b0, may_exist=True, interface_attrs={}) do_commit /usr/lib/python3.12/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 09 16:18:58 compute-0 nova_compute[117331]: 2025-10-09 16:18:58.441 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 09 16:18:58 compute-0 NetworkManager[1028]: <info>  [1760026738.4417] manager: (tap7a8574b4-b0): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/45)
Oct 09 16:18:58 compute-0 kernel: tap7a8574b4-b0: entered promiscuous mode
Oct 09 16:18:58 compute-0 nova_compute[117331]: 2025-10-09 16:18:58.444 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 09 16:18:58 compute-0 ovn_metadata_agent[28608]: 2025-10-09 16:18:58.445 28613 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap7a8574b4-b0, col_values=(('external_ids', {'iface-id': 'd5e2e4c8-4a14-4222-b045-a4cb3081bc1d'}),), if_exists=True) do_commit /usr/lib/python3.12/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 09 16:18:58 compute-0 nova_compute[117331]: 2025-10-09 16:18:58.446 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 09 16:18:58 compute-0 ovn_controller[19752]: 2025-10-09T16:18:58Z|00106|binding|INFO|Releasing lport d5e2e4c8-4a14-4222-b045-a4cb3081bc1d from this chassis (sb_readonly=0)
Oct 09 16:18:58 compute-0 nova_compute[117331]: 2025-10-09 16:18:58.458 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 09 16:18:58 compute-0 ovn_metadata_agent[28608]: 2025-10-09 16:18:58.459 139687 DEBUG oslo.privsep.daemon [-] privsep: reply[3a51d232-1802-4608-8b32-9961440ddd98]: (4, '') _call_back /usr/lib/python3.12/site-packages/oslo_privsep/daemon.py:515
Oct 09 16:18:58 compute-0 ovn_metadata_agent[28608]: 2025-10-09 16:18:58.460 28613 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/7a8574b4-b4f7-483c-8050-7281b5ac5624.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/7a8574b4-b4f7-483c-8050-7281b5ac5624.pid.haproxy' get_value_from_file /usr/lib/python3.12/site-packages/neutron/agent/linux/utils.py:268
Oct 09 16:18:58 compute-0 ovn_metadata_agent[28608]: 2025-10-09 16:18:58.461 28613 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/7a8574b4-b4f7-483c-8050-7281b5ac5624.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/7a8574b4-b4f7-483c-8050-7281b5ac5624.pid.haproxy' get_value_from_file /usr/lib/python3.12/site-packages/neutron/agent/linux/utils.py:268
Oct 09 16:18:58 compute-0 ovn_metadata_agent[28608]: 2025-10-09 16:18:58.461 28613 DEBUG neutron.agent.linux.external_process [-] No haproxy process started for 7a8574b4-b4f7-483c-8050-7281b5ac5624 disable /usr/lib/python3.12/site-packages/neutron/agent/linux/external_process.py:173
Oct 09 16:18:58 compute-0 ovn_metadata_agent[28608]: 2025-10-09 16:18:58.461 28613 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/7a8574b4-b4f7-483c-8050-7281b5ac5624.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/7a8574b4-b4f7-483c-8050-7281b5ac5624.pid.haproxy' get_value_from_file /usr/lib/python3.12/site-packages/neutron/agent/linux/utils.py:268
Oct 09 16:18:58 compute-0 ovn_metadata_agent[28608]: 2025-10-09 16:18:58.461 139687 DEBUG oslo.privsep.daemon [-] privsep: reply[0d4bd888-5ecd-4548-9ad5-a2b940c6d936]: (4, None) _call_back /usr/lib/python3.12/site-packages/oslo_privsep/daemon.py:515
Oct 09 16:18:58 compute-0 ovn_metadata_agent[28608]: 2025-10-09 16:18:58.462 28613 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/7a8574b4-b4f7-483c-8050-7281b5ac5624.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/7a8574b4-b4f7-483c-8050-7281b5ac5624.pid.haproxy' get_value_from_file /usr/lib/python3.12/site-packages/neutron/agent/linux/utils.py:268
Oct 09 16:18:58 compute-0 ovn_metadata_agent[28608]: 2025-10-09 16:18:58.462 139687 DEBUG oslo.privsep.daemon [-] privsep: reply[ac4cad24-aa43-4d23-8729-02ed4a426f4c]: (4, None) _call_back /usr/lib/python3.12/site-packages/oslo_privsep/daemon.py:515
Oct 09 16:18:58 compute-0 ovn_metadata_agent[28608]: 2025-10-09 16:18:58.463 28613 DEBUG neutron.agent.metadata.driver_base [-] haproxy_cfg = 
Oct 09 16:18:58 compute-0 ovn_metadata_agent[28608]: global
Oct 09 16:18:58 compute-0 ovn_metadata_agent[28608]:     log         /dev/log local0 debug
Oct 09 16:18:58 compute-0 ovn_metadata_agent[28608]:     log-tag     haproxy-metadata-proxy-7a8574b4-b4f7-483c-8050-7281b5ac5624
Oct 09 16:18:58 compute-0 ovn_metadata_agent[28608]:     user        root
Oct 09 16:18:58 compute-0 ovn_metadata_agent[28608]:     group       root
Oct 09 16:18:58 compute-0 ovn_metadata_agent[28608]:     maxconn     1024
Oct 09 16:18:58 compute-0 ovn_metadata_agent[28608]:     pidfile     /var/lib/neutron/external/pids/7a8574b4-b4f7-483c-8050-7281b5ac5624.pid.haproxy
Oct 09 16:18:58 compute-0 ovn_metadata_agent[28608]:     daemon
Oct 09 16:18:58 compute-0 ovn_metadata_agent[28608]: 
Oct 09 16:18:58 compute-0 ovn_metadata_agent[28608]: defaults
Oct 09 16:18:58 compute-0 ovn_metadata_agent[28608]:     log global
Oct 09 16:18:58 compute-0 ovn_metadata_agent[28608]:     mode http
Oct 09 16:18:58 compute-0 ovn_metadata_agent[28608]:     option httplog
Oct 09 16:18:58 compute-0 ovn_metadata_agent[28608]:     option dontlognull
Oct 09 16:18:58 compute-0 ovn_metadata_agent[28608]:     option http-server-close
Oct 09 16:18:58 compute-0 ovn_metadata_agent[28608]:     option forwardfor
Oct 09 16:18:58 compute-0 ovn_metadata_agent[28608]:     retries                 3
Oct 09 16:18:58 compute-0 ovn_metadata_agent[28608]:     timeout http-request    30s
Oct 09 16:18:58 compute-0 ovn_metadata_agent[28608]:     timeout connect         30s
Oct 09 16:18:58 compute-0 ovn_metadata_agent[28608]:     timeout client          32s
Oct 09 16:18:58 compute-0 ovn_metadata_agent[28608]:     timeout server          32s
Oct 09 16:18:58 compute-0 ovn_metadata_agent[28608]:     timeout http-keep-alive 30s
Oct 09 16:18:58 compute-0 ovn_metadata_agent[28608]: 
Oct 09 16:18:58 compute-0 ovn_metadata_agent[28608]: listen listener
Oct 09 16:18:58 compute-0 ovn_metadata_agent[28608]:     bind 169.254.169.254:80
Oct 09 16:18:58 compute-0 ovn_metadata_agent[28608]:     
Oct 09 16:18:58 compute-0 ovn_metadata_agent[28608]:     server metadata /var/lib/neutron/metadata_proxy
Oct 09 16:18:58 compute-0 ovn_metadata_agent[28608]: 
Oct 09 16:18:58 compute-0 ovn_metadata_agent[28608]:     http-request add-header X-OVN-Network-ID 7a8574b4-b4f7-483c-8050-7281b5ac5624
Oct 09 16:18:58 compute-0 ovn_metadata_agent[28608]:  create_config_file /usr/lib/python3.12/site-packages/neutron/agent/metadata/driver_base.py:155
Oct 09 16:18:58 compute-0 ovn_metadata_agent[28608]: 2025-10-09 16:18:58.463 28613 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-7a8574b4-b4f7-483c-8050-7281b5ac5624', 'env', 'PROCESS_TAG=haproxy-7a8574b4-b4f7-483c-8050-7281b5ac5624', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/7a8574b4-b4f7-483c-8050-7281b5ac5624.conf'] create_process /usr/lib/python3.12/site-packages/neutron/agent/linux/utils.py:84
Oct 09 16:18:58 compute-0 nova_compute[117331]: 2025-10-09 16:18:58.887 2 DEBUG nova.compute.manager [req-b68b96b1-f1e5-4eec-9898-4b99568c0ecc req-79ae3856-6337-414a-b1ed-239df9944c65 ee15961ad2be4bada6c106ac4f69f422 c076c42271004e7aa86e84416e33f826 - - default default] [instance: cd624073-6d91-4e27-9050-656e729e250e] Received event network-vif-plugged-b0f5ccc9-26bf-47c1-be9c-23a53bcb2d99 external_instance_event /usr/lib/python3.12/site-packages/nova/compute/manager.py:11812
Oct 09 16:18:58 compute-0 nova_compute[117331]: 2025-10-09 16:18:58.888 2 DEBUG oslo_concurrency.lockutils [req-b68b96b1-f1e5-4eec-9898-4b99568c0ecc req-79ae3856-6337-414a-b1ed-239df9944c65 ee15961ad2be4bada6c106ac4f69f422 c076c42271004e7aa86e84416e33f826 - - default default] Acquiring lock "cd624073-6d91-4e27-9050-656e729e250e-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:405
Oct 09 16:18:58 compute-0 nova_compute[117331]: 2025-10-09 16:18:58.888 2 DEBUG oslo_concurrency.lockutils [req-b68b96b1-f1e5-4eec-9898-4b99568c0ecc req-79ae3856-6337-414a-b1ed-239df9944c65 ee15961ad2be4bada6c106ac4f69f422 c076c42271004e7aa86e84416e33f826 - - default default] Lock "cd624073-6d91-4e27-9050-656e729e250e-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:410
Oct 09 16:18:58 compute-0 nova_compute[117331]: 2025-10-09 16:18:58.888 2 DEBUG oslo_concurrency.lockutils [req-b68b96b1-f1e5-4eec-9898-4b99568c0ecc req-79ae3856-6337-414a-b1ed-239df9944c65 ee15961ad2be4bada6c106ac4f69f422 c076c42271004e7aa86e84416e33f826 - - default default] Lock "cd624073-6d91-4e27-9050-656e729e250e-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:424
Oct 09 16:18:58 compute-0 nova_compute[117331]: 2025-10-09 16:18:58.888 2 DEBUG nova.compute.manager [req-b68b96b1-f1e5-4eec-9898-4b99568c0ecc req-79ae3856-6337-414a-b1ed-239df9944c65 ee15961ad2be4bada6c106ac4f69f422 c076c42271004e7aa86e84416e33f826 - - default default] [instance: cd624073-6d91-4e27-9050-656e729e250e] Processing event network-vif-plugged-b0f5ccc9-26bf-47c1-be9c-23a53bcb2d99 _process_instance_event /usr/lib/python3.12/site-packages/nova/compute/manager.py:11572
Oct 09 16:18:58 compute-0 podman[144581]: 2025-10-09 16:18:58.901986259 +0000 UTC m=+0.060370863 container create 0dbaada2507e05e6a9918da92344c04967a6156a3e8d31b25e3d94f057396eb7 (image=38.102.83.66:5001/podified-master-centos10/openstack-neutron-metadata-agent-ovn:watcher_latest, name=neutron-haproxy-ovnmeta-7a8574b4-b4f7-483c-8050-7281b5ac5624, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.build-date=20251007, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=watcher_latest, io.buildah.version=1.41.4, org.label-schema.name=CentOS Stream 10 Base Image)
Oct 09 16:18:58 compute-0 systemd[1]: Started libpod-conmon-0dbaada2507e05e6a9918da92344c04967a6156a3e8d31b25e3d94f057396eb7.scope.
Oct 09 16:18:58 compute-0 systemd[1]: Started libcrun container.
Oct 09 16:18:58 compute-0 podman[144581]: 2025-10-09 16:18:58.868605493 +0000 UTC m=+0.026990097 image pull 4e15c189a95c5dcd1203c76fe304de5c8f8616da9a258ca168cc54a517f49b87 38.102.83.66:5001/podified-master-centos10/openstack-neutron-metadata-agent-ovn:watcher_latest
Oct 09 16:18:58 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2c994c2faaac678d0d9fdd1d853633ee68ad20653084e5b4f04bb5f655d8447d/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Oct 09 16:18:58 compute-0 podman[144581]: 2025-10-09 16:18:58.981290264 +0000 UTC m=+0.139674868 container init 0dbaada2507e05e6a9918da92344c04967a6156a3e8d31b25e3d94f057396eb7 (image=38.102.83.66:5001/podified-master-centos10/openstack-neutron-metadata-agent-ovn:watcher_latest, name=neutron-haproxy-ovnmeta-7a8574b4-b4f7-483c-8050-7281b5ac5624, io.buildah.version=1.41.4, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.build-date=20251007, org.label-schema.schema-version=1.0, tcib_build_tag=watcher_latest)
Oct 09 16:18:58 compute-0 podman[144581]: 2025-10-09 16:18:58.991885267 +0000 UTC m=+0.150269871 container start 0dbaada2507e05e6a9918da92344c04967a6156a3e8d31b25e3d94f057396eb7 (image=38.102.83.66:5001/podified-master-centos10/openstack-neutron-metadata-agent-ovn:watcher_latest, name=neutron-haproxy-ovnmeta-7a8574b4-b4f7-483c-8050-7281b5ac5624, io.buildah.version=1.41.4, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=watcher_latest, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251007, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true)
Oct 09 16:18:59 compute-0 neutron-haproxy-ovnmeta-7a8574b4-b4f7-483c-8050-7281b5ac5624[144601]: [NOTICE]   (144607) : New worker (144609) forked
Oct 09 16:18:59 compute-0 neutron-haproxy-ovnmeta-7a8574b4-b4f7-483c-8050-7281b5ac5624[144601]: [NOTICE]   (144607) : Loading success.
Oct 09 16:18:59 compute-0 nova_compute[117331]: 2025-10-09 16:18:59.425 2 DEBUG nova.compute.manager [None req-ad81bff0-424a-489e-80a2-446f6ae5fd54 2c30c20b7c364f81b40bcec56afc8ae3 8a3f79d5a5f2475d93599ef409043893 - - default default] [instance: cd624073-6d91-4e27-9050-656e729e250e] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.12/site-packages/nova/compute/manager.py:602
Oct 09 16:18:59 compute-0 nova_compute[117331]: 2025-10-09 16:18:59.431 2 DEBUG nova.virt.libvirt.driver [None req-ad81bff0-424a-489e-80a2-446f6ae5fd54 2c30c20b7c364f81b40bcec56afc8ae3 8a3f79d5a5f2475d93599ef409043893 - - default default] [instance: cd624073-6d91-4e27-9050-656e729e250e] Guest created on hypervisor spawn /usr/lib/python3.12/site-packages/nova/virt/libvirt/driver.py:4816
Oct 09 16:18:59 compute-0 nova_compute[117331]: 2025-10-09 16:18:59.435 2 INFO nova.virt.libvirt.driver [-] [instance: cd624073-6d91-4e27-9050-656e729e250e] Instance spawned successfully.
Oct 09 16:18:59 compute-0 nova_compute[117331]: 2025-10-09 16:18:59.435 2 DEBUG nova.virt.libvirt.driver [None req-ad81bff0-424a-489e-80a2-446f6ae5fd54 2c30c20b7c364f81b40bcec56afc8ae3 8a3f79d5a5f2475d93599ef409043893 - - default default] [instance: cd624073-6d91-4e27-9050-656e729e250e] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.12/site-packages/nova/virt/libvirt/driver.py:985
Oct 09 16:18:59 compute-0 podman[127775]: time="2025-10-09T16:18:59Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Oct 09 16:18:59 compute-0 podman[127775]: @ - - [09/Oct/2025:16:18:59 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 20749 "" "Go-http-client/1.1"
Oct 09 16:18:59 compute-0 podman[127775]: @ - - [09/Oct/2025:16:18:59 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 3477 "" "Go-http-client/1.1"
Oct 09 16:18:59 compute-0 nova_compute[117331]: 2025-10-09 16:18:59.950 2 DEBUG nova.virt.libvirt.driver [None req-ad81bff0-424a-489e-80a2-446f6ae5fd54 2c30c20b7c364f81b40bcec56afc8ae3 8a3f79d5a5f2475d93599ef409043893 - - default default] [instance: cd624073-6d91-4e27-9050-656e729e250e] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.12/site-packages/nova/virt/libvirt/driver.py:1014
Oct 09 16:18:59 compute-0 nova_compute[117331]: 2025-10-09 16:18:59.951 2 DEBUG nova.virt.libvirt.driver [None req-ad81bff0-424a-489e-80a2-446f6ae5fd54 2c30c20b7c364f81b40bcec56afc8ae3 8a3f79d5a5f2475d93599ef409043893 - - default default] [instance: cd624073-6d91-4e27-9050-656e729e250e] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.12/site-packages/nova/virt/libvirt/driver.py:1014
Oct 09 16:18:59 compute-0 nova_compute[117331]: 2025-10-09 16:18:59.951 2 DEBUG nova.virt.libvirt.driver [None req-ad81bff0-424a-489e-80a2-446f6ae5fd54 2c30c20b7c364f81b40bcec56afc8ae3 8a3f79d5a5f2475d93599ef409043893 - - default default] [instance: cd624073-6d91-4e27-9050-656e729e250e] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.12/site-packages/nova/virt/libvirt/driver.py:1014
Oct 09 16:18:59 compute-0 nova_compute[117331]: 2025-10-09 16:18:59.952 2 DEBUG nova.virt.libvirt.driver [None req-ad81bff0-424a-489e-80a2-446f6ae5fd54 2c30c20b7c364f81b40bcec56afc8ae3 8a3f79d5a5f2475d93599ef409043893 - - default default] [instance: cd624073-6d91-4e27-9050-656e729e250e] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.12/site-packages/nova/virt/libvirt/driver.py:1014
Oct 09 16:18:59 compute-0 nova_compute[117331]: 2025-10-09 16:18:59.953 2 DEBUG nova.virt.libvirt.driver [None req-ad81bff0-424a-489e-80a2-446f6ae5fd54 2c30c20b7c364f81b40bcec56afc8ae3 8a3f79d5a5f2475d93599ef409043893 - - default default] [instance: cd624073-6d91-4e27-9050-656e729e250e] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.12/site-packages/nova/virt/libvirt/driver.py:1014
Oct 09 16:18:59 compute-0 nova_compute[117331]: 2025-10-09 16:18:59.954 2 DEBUG nova.virt.libvirt.driver [None req-ad81bff0-424a-489e-80a2-446f6ae5fd54 2c30c20b7c364f81b40bcec56afc8ae3 8a3f79d5a5f2475d93599ef409043893 - - default default] [instance: cd624073-6d91-4e27-9050-656e729e250e] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.12/site-packages/nova/virt/libvirt/driver.py:1014
Oct 09 16:19:00 compute-0 nova_compute[117331]: 2025-10-09 16:19:00.466 2 INFO nova.compute.manager [None req-ad81bff0-424a-489e-80a2-446f6ae5fd54 2c30c20b7c364f81b40bcec56afc8ae3 8a3f79d5a5f2475d93599ef409043893 - - default default] [instance: cd624073-6d91-4e27-9050-656e729e250e] Took 8.77 seconds to spawn the instance on the hypervisor.
Oct 09 16:19:00 compute-0 nova_compute[117331]: 2025-10-09 16:19:00.467 2 DEBUG nova.compute.manager [None req-ad81bff0-424a-489e-80a2-446f6ae5fd54 2c30c20b7c364f81b40bcec56afc8ae3 8a3f79d5a5f2475d93599ef409043893 - - default default] [instance: cd624073-6d91-4e27-9050-656e729e250e] Checking state _get_power_state /usr/lib/python3.12/site-packages/nova/compute/manager.py:1825
Oct 09 16:19:00 compute-0 nova_compute[117331]: 2025-10-09 16:19:00.519 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 09 16:19:00 compute-0 nova_compute[117331]: 2025-10-09 16:19:00.759 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 09 16:19:00 compute-0 podman[144618]: 2025-10-09 16:19:00.8237798 +0000 UTC m=+0.055742098 container health_status 86aa93e887899e54d383b0fc17fb3c5637680d1d8e146a130a557c837f0260fe (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter)
Oct 09 16:19:00 compute-0 nova_compute[117331]: 2025-10-09 16:19:00.942 2 DEBUG nova.compute.manager [req-dccd9a31-30ca-4a3e-be61-fa77792174a0 req-e2ecaee4-5f82-41f5-b71e-52eed026dbfe ee15961ad2be4bada6c106ac4f69f422 c076c42271004e7aa86e84416e33f826 - - default default] [instance: cd624073-6d91-4e27-9050-656e729e250e] Received event network-vif-plugged-b0f5ccc9-26bf-47c1-be9c-23a53bcb2d99 external_instance_event /usr/lib/python3.12/site-packages/nova/compute/manager.py:11812
Oct 09 16:19:00 compute-0 nova_compute[117331]: 2025-10-09 16:19:00.942 2 DEBUG oslo_concurrency.lockutils [req-dccd9a31-30ca-4a3e-be61-fa77792174a0 req-e2ecaee4-5f82-41f5-b71e-52eed026dbfe ee15961ad2be4bada6c106ac4f69f422 c076c42271004e7aa86e84416e33f826 - - default default] Acquiring lock "cd624073-6d91-4e27-9050-656e729e250e-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:405
Oct 09 16:19:00 compute-0 nova_compute[117331]: 2025-10-09 16:19:00.942 2 DEBUG oslo_concurrency.lockutils [req-dccd9a31-30ca-4a3e-be61-fa77792174a0 req-e2ecaee4-5f82-41f5-b71e-52eed026dbfe ee15961ad2be4bada6c106ac4f69f422 c076c42271004e7aa86e84416e33f826 - - default default] Lock "cd624073-6d91-4e27-9050-656e729e250e-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:410
Oct 09 16:19:00 compute-0 nova_compute[117331]: 2025-10-09 16:19:00.942 2 DEBUG oslo_concurrency.lockutils [req-dccd9a31-30ca-4a3e-be61-fa77792174a0 req-e2ecaee4-5f82-41f5-b71e-52eed026dbfe ee15961ad2be4bada6c106ac4f69f422 c076c42271004e7aa86e84416e33f826 - - default default] Lock "cd624073-6d91-4e27-9050-656e729e250e-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:424
Oct 09 16:19:00 compute-0 nova_compute[117331]: 2025-10-09 16:19:00.943 2 DEBUG nova.compute.manager [req-dccd9a31-30ca-4a3e-be61-fa77792174a0 req-e2ecaee4-5f82-41f5-b71e-52eed026dbfe ee15961ad2be4bada6c106ac4f69f422 c076c42271004e7aa86e84416e33f826 - - default default] [instance: cd624073-6d91-4e27-9050-656e729e250e] No waiting events found dispatching network-vif-plugged-b0f5ccc9-26bf-47c1-be9c-23a53bcb2d99 pop_instance_event /usr/lib/python3.12/site-packages/nova/compute/manager.py:344
Oct 09 16:19:00 compute-0 nova_compute[117331]: 2025-10-09 16:19:00.943 2 WARNING nova.compute.manager [req-dccd9a31-30ca-4a3e-be61-fa77792174a0 req-e2ecaee4-5f82-41f5-b71e-52eed026dbfe ee15961ad2be4bada6c106ac4f69f422 c076c42271004e7aa86e84416e33f826 - - default default] [instance: cd624073-6d91-4e27-9050-656e729e250e] Received unexpected event network-vif-plugged-b0f5ccc9-26bf-47c1-be9c-23a53bcb2d99 for instance with vm_state active and task_state None.
Oct 09 16:19:01 compute-0 nova_compute[117331]: 2025-10-09 16:19:01.043 2 INFO nova.compute.manager [None req-ad81bff0-424a-489e-80a2-446f6ae5fd54 2c30c20b7c364f81b40bcec56afc8ae3 8a3f79d5a5f2475d93599ef409043893 - - default default] [instance: cd624073-6d91-4e27-9050-656e729e250e] Took 14.10 seconds to build instance.
Oct 09 16:19:01 compute-0 openstack_network_exporter[129925]: ERROR   16:19:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Oct 09 16:19:01 compute-0 openstack_network_exporter[129925]: ERROR   16:19:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Oct 09 16:19:01 compute-0 openstack_network_exporter[129925]: ERROR   16:19:01 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Oct 09 16:19:01 compute-0 openstack_network_exporter[129925]: ERROR   16:19:01 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Oct 09 16:19:01 compute-0 openstack_network_exporter[129925]: 
Oct 09 16:19:01 compute-0 openstack_network_exporter[129925]: ERROR   16:19:01 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Oct 09 16:19:01 compute-0 openstack_network_exporter[129925]: 
Oct 09 16:19:01 compute-0 nova_compute[117331]: 2025-10-09 16:19:01.548 2 DEBUG oslo_concurrency.lockutils [None req-ad81bff0-424a-489e-80a2-446f6ae5fd54 2c30c20b7c364f81b40bcec56afc8ae3 8a3f79d5a5f2475d93599ef409043893 - - default default] Lock "cd624073-6d91-4e27-9050-656e729e250e" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 15.627s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:424
Oct 09 16:19:01 compute-0 anacron[106873]: Job `cron.daily' started
Oct 09 16:19:01 compute-0 anacron[106873]: Job `cron.daily' terminated
Oct 09 16:19:04 compute-0 podman[144647]: 2025-10-09 16:19:04.825493879 +0000 UTC m=+0.057133682 container health_status 0e966c7a283689daf23c5db5017f23f685ee836319240188a9f70ddd246563c5 (image=38.102.83.66:5001/podified-master-centos10/openstack-neutron-metadata-agent-ovn:watcher_latest, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=watcher_latest, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': '38.102.83.66:5001/podified-master-centos10/openstack-neutron-metadata-agent-ovn:watcher_latest', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, container_name=ovn_metadata_agent, io.buildah.version=1.41.4, org.label-schema.build-date=20251007, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team)
Oct 09 16:19:04 compute-0 podman[144648]: 2025-10-09 16:19:04.833573353 +0000 UTC m=+0.062869492 container health_status 8227728d23f374cb9528be6ddf01dff38d8c792912a17b0d137932a18390b820 (image=38.102.83.66:5001/podified-master-centos10/openstack-iscsid:watcher_latest, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': '38.102.83.66:5001/podified-master-centos10/openstack-iscsid:watcher_latest', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251007, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_build_tag=watcher_latest, config_id=iscsid, io.buildah.version=1.41.4, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, container_name=iscsid)
Oct 09 16:19:05 compute-0 nova_compute[117331]: 2025-10-09 16:19:05.523 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 09 16:19:05 compute-0 nova_compute[117331]: 2025-10-09 16:19:05.763 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 09 16:19:10 compute-0 nova_compute[117331]: 2025-10-09 16:19:10.527 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 09 16:19:10 compute-0 nova_compute[117331]: 2025-10-09 16:19:10.765 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 09 16:19:10 compute-0 ovn_metadata_agent[28608]: 2025-10-09 16:19:10.819 28613 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=11, options={'arp_ns_explicit_output': 'true', 'fdb_removal_limit': '0', 'ignore_lsp_down': 'false', 'mac_binding_removal_limit': '0', 'mac_prefix': '4a:65:5f', 'max_tunid': '16711680', 'northd_internal_version': '24.09.4-20.37.0-77.8', 'svc_monitor_mac': '32:24:1e:e3:5d:e8'}, ipsec=False) old=SB_Global(nb_cfg=10) matches /usr/lib/python3.12/site-packages/ovsdbapp/backend/ovs_idl/event.py:55
Oct 09 16:19:10 compute-0 nova_compute[117331]: 2025-10-09 16:19:10.819 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 09 16:19:10 compute-0 ovn_metadata_agent[28608]: 2025-10-09 16:19:10.820 28613 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 8 seconds run /usr/lib/python3.12/site-packages/neutron/agent/ovn/metadata/agent.py:367
Oct 09 16:19:11 compute-0 nova_compute[117331]: 2025-10-09 16:19:11.307 2 DEBUG oslo_service.periodic_task [None req-92a13bed-c081-4c7b-87f3-bafebefb79f2 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.12/site-packages/oslo_service/periodic_task.py:210
Oct 09 16:19:11 compute-0 ovn_controller[19752]: 2025-10-09T16:19:11Z|00013|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:75:be:26 10.100.0.9
Oct 09 16:19:11 compute-0 ovn_controller[19752]: 2025-10-09T16:19:11Z|00014|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:75:be:26 10.100.0.9
Oct 09 16:19:13 compute-0 nova_compute[117331]: 2025-10-09 16:19:13.306 2 DEBUG oslo_service.periodic_task [None req-92a13bed-c081-4c7b-87f3-bafebefb79f2 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.12/site-packages/oslo_service/periodic_task.py:210
Oct 09 16:19:13 compute-0 nova_compute[117331]: 2025-10-09 16:19:13.823 2 DEBUG oslo_concurrency.lockutils [None req-92a13bed-c081-4c7b-87f3-bafebefb79f2 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:405
Oct 09 16:19:13 compute-0 nova_compute[117331]: 2025-10-09 16:19:13.824 2 DEBUG oslo_concurrency.lockutils [None req-92a13bed-c081-4c7b-87f3-bafebefb79f2 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:410
Oct 09 16:19:13 compute-0 nova_compute[117331]: 2025-10-09 16:19:13.825 2 DEBUG oslo_concurrency.lockutils [None req-92a13bed-c081-4c7b-87f3-bafebefb79f2 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:424
Oct 09 16:19:13 compute-0 nova_compute[117331]: 2025-10-09 16:19:13.825 2 DEBUG nova.compute.resource_tracker [None req-92a13bed-c081-4c7b-87f3-bafebefb79f2 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.12/site-packages/nova/compute/resource_tracker.py:937
Oct 09 16:19:13 compute-0 podman[144697]: 2025-10-09 16:19:13.876667777 +0000 UTC m=+0.093941776 container health_status 64fa251112d01abb34ec1bb8e22019815ddcaaaa391ce5ec91517147ac5a9994 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, build-date=2025-08-20T13:12:41, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, distribution-scope=public, name=ubi9-minimal, url=https://catalog.redhat.com/en/search?searchType=containers, vendor=Red Hat, Inc., io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., managed_by=edpm_ansible, architecture=x86_64, version=9.6, config_id=edpm, container_name=openstack_network_exporter, io.openshift.tags=minimal rhel9, maintainer=Red Hat, Inc., summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., io.buildah.version=1.33.7, release=1755695350, vcs-type=git, com.redhat.component=ubi9-minimal-container, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.expose-services=, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal)
Oct 09 16:19:14 compute-0 nova_compute[117331]: 2025-10-09 16:19:14.876 2 DEBUG oslo_concurrency.processutils [None req-92a13bed-c081-4c7b-87f3-bafebefb79f2 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/cd624073-6d91-4e27-9050-656e729e250e/disk --force-share --output=json execute /usr/lib/python3.12/site-packages/oslo_concurrency/processutils.py:349
Oct 09 16:19:14 compute-0 nova_compute[117331]: 2025-10-09 16:19:14.970 2 DEBUG oslo_concurrency.processutils [None req-92a13bed-c081-4c7b-87f3-bafebefb79f2 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/cd624073-6d91-4e27-9050-656e729e250e/disk --force-share --output=json" returned: 0 in 0.094s execute /usr/lib/python3.12/site-packages/oslo_concurrency/processutils.py:372
Oct 09 16:19:14 compute-0 nova_compute[117331]: 2025-10-09 16:19:14.972 2 DEBUG oslo_concurrency.processutils [None req-92a13bed-c081-4c7b-87f3-bafebefb79f2 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/cd624073-6d91-4e27-9050-656e729e250e/disk --force-share --output=json execute /usr/lib/python3.12/site-packages/oslo_concurrency/processutils.py:349
Oct 09 16:19:15 compute-0 nova_compute[117331]: 2025-10-09 16:19:15.060 2 DEBUG oslo_concurrency.processutils [None req-92a13bed-c081-4c7b-87f3-bafebefb79f2 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/cd624073-6d91-4e27-9050-656e729e250e/disk --force-share --output=json" returned: 0 in 0.088s execute /usr/lib/python3.12/site-packages/oslo_concurrency/processutils.py:372
Oct 09 16:19:15 compute-0 nova_compute[117331]: 2025-10-09 16:19:15.257 2 WARNING nova.virt.libvirt.driver [None req-92a13bed-c081-4c7b-87f3-bafebefb79f2 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Oct 09 16:19:15 compute-0 nova_compute[117331]: 2025-10-09 16:19:15.258 2 DEBUG oslo_concurrency.processutils [None req-92a13bed-c081-4c7b-87f3-bafebefb79f2 - - - - - -] Running cmd (subprocess): env LANG=C uptime execute /usr/lib/python3.12/site-packages/oslo_concurrency/processutils.py:349
Oct 09 16:19:15 compute-0 nova_compute[117331]: 2025-10-09 16:19:15.303 2 DEBUG oslo_concurrency.processutils [None req-92a13bed-c081-4c7b-87f3-bafebefb79f2 - - - - - -] CMD "env LANG=C uptime" returned: 0 in 0.044s execute /usr/lib/python3.12/site-packages/oslo_concurrency/processutils.py:372
Oct 09 16:19:15 compute-0 nova_compute[117331]: 2025-10-09 16:19:15.303 2 DEBUG nova.compute.resource_tracker [None req-92a13bed-c081-4c7b-87f3-bafebefb79f2 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=6005MB free_disk=73.23323822021484GB free_vcpus=7 pci_devices=[{"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": 
"label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.12/site-packages/nova/compute/resource_tracker.py:1136
Oct 09 16:19:15 compute-0 nova_compute[117331]: 2025-10-09 16:19:15.304 2 DEBUG oslo_concurrency.lockutils [None req-92a13bed-c081-4c7b-87f3-bafebefb79f2 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:405
Oct 09 16:19:15 compute-0 nova_compute[117331]: 2025-10-09 16:19:15.304 2 DEBUG oslo_concurrency.lockutils [None req-92a13bed-c081-4c7b-87f3-bafebefb79f2 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:410
Oct 09 16:19:15 compute-0 nova_compute[117331]: 2025-10-09 16:19:15.531 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 09 16:19:15 compute-0 nova_compute[117331]: 2025-10-09 16:19:15.767 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 09 16:19:16 compute-0 nova_compute[117331]: 2025-10-09 16:19:16.348 2 DEBUG nova.compute.resource_tracker [None req-92a13bed-c081-4c7b-87f3-bafebefb79f2 - - - - - -] Instance cd624073-6d91-4e27-9050-656e729e250e actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.12/site-packages/nova/compute/resource_tracker.py:1740
Oct 09 16:19:16 compute-0 nova_compute[117331]: 2025-10-09 16:19:16.348 2 DEBUG nova.compute.resource_tracker [None req-92a13bed-c081-4c7b-87f3-bafebefb79f2 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 1 _report_final_resource_view /usr/lib/python3.12/site-packages/nova/compute/resource_tracker.py:1159
Oct 09 16:19:16 compute-0 nova_compute[117331]: 2025-10-09 16:19:16.348 2 DEBUG nova.compute.resource_tracker [None req-92a13bed-c081-4c7b-87f3-bafebefb79f2 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=640MB phys_disk=79GB used_disk=1GB total_vcpus=8 used_vcpus=1 pci_stats=[] stats={'failed_builds': '0', 'uptime': ' 16:19:15 up 28 min,  0 user,  load average: 0.32, 0.27, 0.28\n', 'num_instances': '1', 'num_vm_active': '1', 'num_task_None': '1', 'num_os_type_None': '1', 'num_proj_8a3f79d5a5f2475d93599ef409043893': '1', 'io_workload': '0'} _report_final_resource_view /usr/lib/python3.12/site-packages/nova/compute/resource_tracker.py:1168
Oct 09 16:19:16 compute-0 nova_compute[117331]: 2025-10-09 16:19:16.431 2 DEBUG nova.compute.provider_tree [None req-92a13bed-c081-4c7b-87f3-bafebefb79f2 - - - - - -] Inventory has not changed in ProviderTree for provider: 593051b8-2000-437f-a915-2616fc8b1671 update_inventory /usr/lib/python3.12/site-packages/nova/compute/provider_tree.py:180
Oct 09 16:19:16 compute-0 podman[144726]: 2025-10-09 16:19:16.884719593 +0000 UTC m=+0.102882275 container health_status d615e4f24ff62fbb14be4a63e6da568ec5b22309773c1c66103b6b4de64a8a2f (image=38.102.83.66:5001/podified-master-centos10/openstack-ovn-controller:watcher_latest, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': '38.102.83.66:5001/podified-master-centos10/openstack-ovn-controller:watcher_latest', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.4, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, tcib_managed=true, org.label-schema.build-date=20251007, org.label-schema.schema-version=1.0, tcib_build_tag=watcher_latest, config_id=ovn_controller)
Oct 09 16:19:16 compute-0 nova_compute[117331]: 2025-10-09 16:19:16.937 2 DEBUG nova.scheduler.client.report [None req-92a13bed-c081-4c7b-87f3-bafebefb79f2 - - - - - -] Inventory has not changed for provider 593051b8-2000-437f-a915-2616fc8b1671 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 79, 'reserved': 1, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.12/site-packages/nova/scheduler/client/report.py:958
Oct 09 16:19:17 compute-0 nova_compute[117331]: 2025-10-09 16:19:17.447 2 DEBUG nova.compute.resource_tracker [None req-92a13bed-c081-4c7b-87f3-bafebefb79f2 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.12/site-packages/nova/compute/resource_tracker.py:1097
Oct 09 16:19:17 compute-0 nova_compute[117331]: 2025-10-09 16:19:17.447 2 DEBUG oslo_concurrency.lockutils [None req-92a13bed-c081-4c7b-87f3-bafebefb79f2 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 2.143s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:424
Oct 09 16:19:18 compute-0 nova_compute[117331]: 2025-10-09 16:19:18.447 2 DEBUG oslo_service.periodic_task [None req-92a13bed-c081-4c7b-87f3-bafebefb79f2 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.12/site-packages/oslo_service/periodic_task.py:210
Oct 09 16:19:18 compute-0 nova_compute[117331]: 2025-10-09 16:19:18.448 2 DEBUG oslo_service.periodic_task [None req-92a13bed-c081-4c7b-87f3-bafebefb79f2 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.12/site-packages/oslo_service/periodic_task.py:210
Oct 09 16:19:18 compute-0 nova_compute[117331]: 2025-10-09 16:19:18.448 2 DEBUG oslo_service.periodic_task [None req-92a13bed-c081-4c7b-87f3-bafebefb79f2 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.12/site-packages/oslo_service/periodic_task.py:210
Oct 09 16:19:18 compute-0 nova_compute[117331]: 2025-10-09 16:19:18.448 2 DEBUG oslo_service.periodic_task [None req-92a13bed-c081-4c7b-87f3-bafebefb79f2 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.12/site-packages/oslo_service/periodic_task.py:210
Oct 09 16:19:18 compute-0 nova_compute[117331]: 2025-10-09 16:19:18.449 2 DEBUG nova.compute.manager [None req-92a13bed-c081-4c7b-87f3-bafebefb79f2 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.12/site-packages/nova/compute/manager.py:11228
Oct 09 16:19:18 compute-0 ovn_metadata_agent[28608]: 2025-10-09 16:19:18.823 28613 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=9954897f-aa83-45dd-8e84-289816676c2a, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '11'}),), if_exists=True) do_commit /usr/lib/python3.12/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 09 16:19:19 compute-0 nova_compute[117331]: 2025-10-09 16:19:19.307 2 DEBUG oslo_service.periodic_task [None req-92a13bed-c081-4c7b-87f3-bafebefb79f2 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.12/site-packages/oslo_service/periodic_task.py:210
Oct 09 16:19:20 compute-0 nova_compute[117331]: 2025-10-09 16:19:20.536 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 09 16:19:20 compute-0 nova_compute[117331]: 2025-10-09 16:19:20.771 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 09 16:19:21 compute-0 nova_compute[117331]: 2025-10-09 16:19:21.307 2 DEBUG oslo_service.periodic_task [None req-92a13bed-c081-4c7b-87f3-bafebefb79f2 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.12/site-packages/oslo_service/periodic_task.py:210
Oct 09 16:19:25 compute-0 nova_compute[117331]: 2025-10-09 16:19:25.539 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 09 16:19:25 compute-0 nova_compute[117331]: 2025-10-09 16:19:25.773 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 09 16:19:26 compute-0 podman[144752]: 2025-10-09 16:19:26.820110642 +0000 UTC m=+0.051486754 container health_status 270830b5167cafbab735d8118980f26987e673cd0fa464dd2202394830bcca66 (image=38.102.83.66:5001/podified-master-centos10/openstack-multipathd:watcher_latest, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, config_id=multipathd, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.schema-version=1.0, container_name=multipathd, org.label-schema.build-date=20251007, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.license=GPLv2, tcib_build_tag=watcher_latest, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': '38.102.83.66:5001/podified-master-centos10/openstack-multipathd:watcher_latest', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, io.buildah.version=1.41.4)
Oct 09 16:19:29 compute-0 podman[127775]: time="2025-10-09T16:19:29Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Oct 09 16:19:29 compute-0 podman[127775]: @ - - [09/Oct/2025:16:19:29 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 20749 "" "Go-http-client/1.1"
Oct 09 16:19:29 compute-0 podman[127775]: @ - - [09/Oct/2025:16:19:29 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 3484 "" "Go-http-client/1.1"
Oct 09 16:19:30 compute-0 nova_compute[117331]: 2025-10-09 16:19:30.587 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 09 16:19:30 compute-0 nova_compute[117331]: 2025-10-09 16:19:30.774 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 09 16:19:31 compute-0 openstack_network_exporter[129925]: ERROR   16:19:31 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Oct 09 16:19:31 compute-0 openstack_network_exporter[129925]: ERROR   16:19:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Oct 09 16:19:31 compute-0 openstack_network_exporter[129925]: ERROR   16:19:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Oct 09 16:19:31 compute-0 openstack_network_exporter[129925]: ERROR   16:19:31 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Oct 09 16:19:31 compute-0 openstack_network_exporter[129925]: 
Oct 09 16:19:31 compute-0 openstack_network_exporter[129925]: ERROR   16:19:31 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Oct 09 16:19:31 compute-0 openstack_network_exporter[129925]: 
Oct 09 16:19:31 compute-0 podman[144772]: 2025-10-09 16:19:31.506781148 +0000 UTC m=+0.056786710 container health_status 86aa93e887899e54d383b0fc17fb3c5637680d1d8e146a130a557c837f0260fe (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm)
Oct 09 16:19:35 compute-0 ovn_metadata_agent[28608]: 2025-10-09 16:19:35.300 28613 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:405
Oct 09 16:19:35 compute-0 ovn_metadata_agent[28608]: 2025-10-09 16:19:35.302 28613 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:410
Oct 09 16:19:35 compute-0 ovn_metadata_agent[28608]: 2025-10-09 16:19:35.303 28613 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:424
Oct 09 16:19:35 compute-0 nova_compute[117331]: 2025-10-09 16:19:35.591 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 09 16:19:35 compute-0 nova_compute[117331]: 2025-10-09 16:19:35.777 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 09 16:19:35 compute-0 podman[144797]: 2025-10-09 16:19:35.865811797 +0000 UTC m=+0.092309724 container health_status 0e966c7a283689daf23c5db5017f23f685ee836319240188a9f70ddd246563c5 (image=38.102.83.66:5001/podified-master-centos10/openstack-neutron-metadata-agent-ovn:watcher_latest, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': '38.102.83.66:5001/podified-master-centos10/openstack-neutron-metadata-agent-ovn:watcher_latest', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.vendor=CentOS, 
tcib_build_tag=watcher_latest, io.buildah.version=1.41.4, org.label-schema.build-date=20251007, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, managed_by=edpm_ansible, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2)
Oct 09 16:19:35 compute-0 podman[144798]: 2025-10-09 16:19:35.870144493 +0000 UTC m=+0.084472438 container health_status 8227728d23f374cb9528be6ddf01dff38d8c792912a17b0d137932a18390b820 (image=38.102.83.66:5001/podified-master-centos10/openstack-iscsid:watcher_latest, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, tcib_build_tag=watcher_latest, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': '38.102.83.66:5001/podified-master-centos10/openstack-iscsid:watcher_latest', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.build-date=20251007, config_id=iscsid, container_name=iscsid, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.4, org.label-schema.license=GPLv2)
Oct 09 16:19:40 compute-0 nova_compute[117331]: 2025-10-09 16:19:40.595 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 09 16:19:40 compute-0 nova_compute[117331]: 2025-10-09 16:19:40.781 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 09 16:19:44 compute-0 podman[144837]: 2025-10-09 16:19:44.874933565 +0000 UTC m=+0.107577452 container health_status 64fa251112d01abb34ec1bb8e22019815ddcaaaa391ce5ec91517147ac5a9994 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, container_name=openstack_network_exporter, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., managed_by=edpm_ansible, config_id=edpm, name=ubi9-minimal, com.redhat.component=ubi9-minimal-container, maintainer=Red Hat, Inc., release=1755695350, url=https://catalog.redhat.com/en/search?searchType=containers, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.buildah.version=1.33.7, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, distribution-scope=public, io.openshift.tags=minimal rhel9, architecture=x86_64, build-date=2025-08-20T13:12:41, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, io.openshift.expose-services=, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, vendor=Red Hat, Inc., version=9.6)
Oct 09 16:19:45 compute-0 nova_compute[117331]: 2025-10-09 16:19:45.598 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 09 16:19:45 compute-0 nova_compute[117331]: 2025-10-09 16:19:45.783 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 09 16:19:47 compute-0 nova_compute[117331]: 2025-10-09 16:19:47.418 2 DEBUG nova.virt.libvirt.driver [None req-bb8a87ba-b9a2-475b-80ac-9cb3a5c32f28 005258eefa5345409efe13f6a1de22ab c076c42271004e7aa86e84416e33f826 - - default default] [instance: 2bff6d2b-fd02-44fa-89ec-e5699854e098] Creating tmpfile /var/lib/nova/instances/tmplrvnry_w to notify to other compute nodes that they should mount the same storage. _create_shared_storage_test_file /usr/lib/python3.12/site-packages/nova/virt/libvirt/driver.py:10944
Oct 09 16:19:47 compute-0 nova_compute[117331]: 2025-10-09 16:19:47.419 2 WARNING neutronclient.v2_0.client [None req-bb8a87ba-b9a2-475b-80ac-9cb3a5c32f28 005258eefa5345409efe13f6a1de22ab c076c42271004e7aa86e84416e33f826 - - default default] The python binding code in neutronclient is deprecated in favor of OpenstackSDK, please use that as this will be removed in a future release.
Oct 09 16:19:47 compute-0 nova_compute[117331]: 2025-10-09 16:19:47.477 2 DEBUG nova.compute.manager [None req-bb8a87ba-b9a2-475b-80ac-9cb3a5c32f28 005258eefa5345409efe13f6a1de22ab c076c42271004e7aa86e84416e33f826 - - default default] destination check data is LibvirtLiveMigrateData(bdms=<?>,block_migration=<?>,disk_available_mb=73728,disk_over_commit=<?>,dst_cpu_shared_set_info=set([]),dst_numa_info=<?>,dst_supports_mdev_live_migration=True,dst_supports_numa_live_migration=<?>,dst_wants_file_backed_memory=False,file_backed_memory_discard=<?>,filename='tmplrvnry_w',graphics_listen_addr_spice=127.0.0.1,graphics_listen_addr_vnc=::,image_type='qcow2',instance_relative_path=<?>,is_shared_block_storage=<?>,is_shared_instance_path=<?>,is_volume_backed=<?>,migration=<?>,old_vol_attachment_ids=<?>,pci_dev_map_src_dst=<?>,serial_listen_addr=None,serial_listen_ports=<?>,source_mdev_types=<?>,src_supports_native_luks=<?>,src_supports_numa_live_migration=<?>,supported_perf_events=<?>,target_connect_addr=<?>,target_mdevs=<?>,vifs=[VIFMigrateData],wait_for_vif_plugged=<?>) check_can_live_migrate_destination /usr/lib/python3.12/site-packages/nova/compute/manager.py:9086
Oct 09 16:19:47 compute-0 ovn_controller[19752]: 2025-10-09T16:19:47Z|00107|memory_trim|INFO|Detected inactivity (last active 30000 ms ago): trimming memory
Oct 09 16:19:47 compute-0 podman[144859]: 2025-10-09 16:19:47.957048402 +0000 UTC m=+0.187038192 container health_status d615e4f24ff62fbb14be4a63e6da568ec5b22309773c1c66103b6b4de64a8a2f (image=38.102.83.66:5001/podified-master-centos10/openstack-ovn-controller:watcher_latest, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.4, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.schema-version=1.0, config_id=ovn_controller, org.label-schema.build-date=20251007, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': '38.102.83.66:5001/podified-master-centos10/openstack-ovn-controller:watcher_latest', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, container_name=ovn_controller, tcib_build_tag=watcher_latest)
Oct 09 16:19:49 compute-0 nova_compute[117331]: 2025-10-09 16:19:49.508 2 WARNING neutronclient.v2_0.client [None req-bb8a87ba-b9a2-475b-80ac-9cb3a5c32f28 005258eefa5345409efe13f6a1de22ab c076c42271004e7aa86e84416e33f826 - - default default] The python binding code in neutronclient is deprecated in favor of OpenstackSDK, please use that as this will be removed in a future release.
Oct 09 16:19:50 compute-0 nova_compute[117331]: 2025-10-09 16:19:50.603 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 09 16:19:50 compute-0 nova_compute[117331]: 2025-10-09 16:19:50.785 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 09 16:19:54 compute-0 nova_compute[117331]: 2025-10-09 16:19:54.275 2 DEBUG nova.compute.manager [None req-bb8a87ba-b9a2-475b-80ac-9cb3a5c32f28 005258eefa5345409efe13f6a1de22ab c076c42271004e7aa86e84416e33f826 - - default default] pre_live_migration data is LibvirtLiveMigrateData(bdms=<?>,block_migration=True,disk_available_mb=73728,disk_over_commit=<?>,dst_cpu_shared_set_info=set([]),dst_numa_info=<?>,dst_supports_mdev_live_migration=True,dst_supports_numa_live_migration=<?>,dst_wants_file_backed_memory=False,file_backed_memory_discard=<?>,filename='tmplrvnry_w',graphics_listen_addr_spice=127.0.0.1,graphics_listen_addr_vnc=::,image_type='qcow2',instance_relative_path='2bff6d2b-fd02-44fa-89ec-e5699854e098',is_shared_block_storage=False,is_shared_instance_path=False,is_volume_backed=False,migration=<?>,old_vol_attachment_ids=<?>,pci_dev_map_src_dst={},serial_listen_addr=None,serial_listen_ports=<?>,source_mdev_types=<?>,src_supports_native_luks=<?>,src_supports_numa_live_migration=<?>,supported_perf_events=<?>,target_connect_addr=<?>,target_mdevs=<?>,vifs=[VIFMigrateData],wait_for_vif_plugged=<?>) pre_live_migration /usr/lib/python3.12/site-packages/nova/compute/manager.py:9311
Oct 09 16:19:55 compute-0 nova_compute[117331]: 2025-10-09 16:19:55.292 2 DEBUG oslo_concurrency.lockutils [None req-bb8a87ba-b9a2-475b-80ac-9cb3a5c32f28 005258eefa5345409efe13f6a1de22ab c076c42271004e7aa86e84416e33f826 - - default default] Acquiring lock "refresh_cache-2bff6d2b-fd02-44fa-89ec-e5699854e098" lock /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:313
Oct 09 16:19:55 compute-0 nova_compute[117331]: 2025-10-09 16:19:55.293 2 DEBUG oslo_concurrency.lockutils [None req-bb8a87ba-b9a2-475b-80ac-9cb3a5c32f28 005258eefa5345409efe13f6a1de22ab c076c42271004e7aa86e84416e33f826 - - default default] Acquired lock "refresh_cache-2bff6d2b-fd02-44fa-89ec-e5699854e098" lock /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:316
Oct 09 16:19:55 compute-0 nova_compute[117331]: 2025-10-09 16:19:55.293 2 DEBUG nova.network.neutron [None req-bb8a87ba-b9a2-475b-80ac-9cb3a5c32f28 005258eefa5345409efe13f6a1de22ab c076c42271004e7aa86e84416e33f826 - - default default] [instance: 2bff6d2b-fd02-44fa-89ec-e5699854e098] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.12/site-packages/nova/network/neutron.py:2070
Oct 09 16:19:55 compute-0 nova_compute[117331]: 2025-10-09 16:19:55.606 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 09 16:19:55 compute-0 nova_compute[117331]: 2025-10-09 16:19:55.788 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 09 16:19:55 compute-0 nova_compute[117331]: 2025-10-09 16:19:55.799 2 WARNING neutronclient.v2_0.client [None req-bb8a87ba-b9a2-475b-80ac-9cb3a5c32f28 005258eefa5345409efe13f6a1de22ab c076c42271004e7aa86e84416e33f826 - - default default] The python binding code in neutronclient is deprecated in favor of OpenstackSDK, please use that as this will be removed in a future release.
Oct 09 16:19:56 compute-0 nova_compute[117331]: 2025-10-09 16:19:56.506 2 WARNING neutronclient.v2_0.client [None req-bb8a87ba-b9a2-475b-80ac-9cb3a5c32f28 005258eefa5345409efe13f6a1de22ab c076c42271004e7aa86e84416e33f826 - - default default] The python binding code in neutronclient is deprecated in favor of OpenstackSDK, please use that as this will be removed in a future release.
Oct 09 16:19:56 compute-0 nova_compute[117331]: 2025-10-09 16:19:56.722 2 DEBUG nova.network.neutron [None req-bb8a87ba-b9a2-475b-80ac-9cb3a5c32f28 005258eefa5345409efe13f6a1de22ab c076c42271004e7aa86e84416e33f826 - - default default] [instance: 2bff6d2b-fd02-44fa-89ec-e5699854e098] Updating instance_info_cache with network_info: [{"id": "88f83656-b34d-493c-8e34-263dae66e382", "address": "fa:16:3e:ba:30:d2", "network": {"id": "7a8574b4-b4f7-483c-8050-7281b5ac5624", "bridge": "br-int", "label": "tempest-TestExecuteBasicStrategy-1906595087-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f4e80e48be374e249fbd628564fc7b82", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap88f83656-b3", "ovs_interfaceid": "88f83656-b34d-493c-8e34-263dae66e382", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.12/site-packages/nova/network/neutron.py:116
Oct 09 16:19:57 compute-0 nova_compute[117331]: 2025-10-09 16:19:57.230 2 DEBUG oslo_concurrency.lockutils [None req-bb8a87ba-b9a2-475b-80ac-9cb3a5c32f28 005258eefa5345409efe13f6a1de22ab c076c42271004e7aa86e84416e33f826 - - default default] Releasing lock "refresh_cache-2bff6d2b-fd02-44fa-89ec-e5699854e098" lock /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:334
Oct 09 16:19:57 compute-0 nova_compute[117331]: 2025-10-09 16:19:57.270 2 DEBUG nova.virt.libvirt.driver [None req-bb8a87ba-b9a2-475b-80ac-9cb3a5c32f28 005258eefa5345409efe13f6a1de22ab c076c42271004e7aa86e84416e33f826 - - default default] [instance: 2bff6d2b-fd02-44fa-89ec-e5699854e098] migrate_data in pre_live_migration: LibvirtLiveMigrateData(bdms=<?>,block_migration=True,disk_available_mb=73728,disk_over_commit=<?>,dst_cpu_shared_set_info=set([]),dst_numa_info=<?>,dst_supports_mdev_live_migration=True,dst_supports_numa_live_migration=<?>,dst_wants_file_backed_memory=False,file_backed_memory_discard=<?>,filename='tmplrvnry_w',graphics_listen_addr_spice=127.0.0.1,graphics_listen_addr_vnc=::,image_type='qcow2',instance_relative_path='2bff6d2b-fd02-44fa-89ec-e5699854e098',is_shared_block_storage=False,is_shared_instance_path=False,is_volume_backed=False,migration=<?>,old_vol_attachment_ids={},pci_dev_map_src_dst={},serial_listen_addr=None,serial_listen_ports=<?>,source_mdev_types=<?>,src_supports_native_luks=<?>,src_supports_numa_live_migration=<?>,supported_perf_events=<?>,target_connect_addr=<?>,target_mdevs=<?>,vifs=[VIFMigrateData],wait_for_vif_plugged=<?>) pre_live_migration /usr/lib/python3.12/site-packages/nova/virt/libvirt/driver.py:11737
Oct 09 16:19:57 compute-0 nova_compute[117331]: 2025-10-09 16:19:57.272 2 DEBUG nova.virt.libvirt.driver [None req-bb8a87ba-b9a2-475b-80ac-9cb3a5c32f28 005258eefa5345409efe13f6a1de22ab c076c42271004e7aa86e84416e33f826 - - default default] [instance: 2bff6d2b-fd02-44fa-89ec-e5699854e098] Creating instance directory: /var/lib/nova/instances/2bff6d2b-fd02-44fa-89ec-e5699854e098 pre_live_migration /usr/lib/python3.12/site-packages/nova/virt/libvirt/driver.py:11750
Oct 09 16:19:57 compute-0 nova_compute[117331]: 2025-10-09 16:19:57.272 2 DEBUG nova.virt.libvirt.driver [None req-bb8a87ba-b9a2-475b-80ac-9cb3a5c32f28 005258eefa5345409efe13f6a1de22ab c076c42271004e7aa86e84416e33f826 - - default default] [instance: 2bff6d2b-fd02-44fa-89ec-e5699854e098] Creating disk.info with the contents: {'/var/lib/nova/instances/2bff6d2b-fd02-44fa-89ec-e5699854e098/disk': 'qcow2', '/var/lib/nova/instances/2bff6d2b-fd02-44fa-89ec-e5699854e098/disk.config': 'raw'} pre_live_migration /usr/lib/python3.12/site-packages/nova/virt/libvirt/driver.py:11764
Oct 09 16:19:57 compute-0 nova_compute[117331]: 2025-10-09 16:19:57.273 2 DEBUG nova.virt.libvirt.driver [None req-bb8a87ba-b9a2-475b-80ac-9cb3a5c32f28 005258eefa5345409efe13f6a1de22ab c076c42271004e7aa86e84416e33f826 - - default default] [instance: 2bff6d2b-fd02-44fa-89ec-e5699854e098] Checking to make sure images and backing files are present before live migration. pre_live_migration /usr/lib/python3.12/site-packages/nova/virt/libvirt/driver.py:11774
Oct 09 16:19:57 compute-0 nova_compute[117331]: 2025-10-09 16:19:57.274 2 DEBUG nova.objects.instance [None req-bb8a87ba-b9a2-475b-80ac-9cb3a5c32f28 005258eefa5345409efe13f6a1de22ab c076c42271004e7aa86e84416e33f826 - - default default] Lazy-loading 'trusted_certs' on Instance uuid 2bff6d2b-fd02-44fa-89ec-e5699854e098 obj_load_attr /usr/lib/python3.12/site-packages/nova/objects/instance.py:1141
Oct 09 16:19:57 compute-0 nova_compute[117331]: 2025-10-09 16:19:57.780 2 DEBUG oslo_utils.imageutils.format_inspector [None req-bb8a87ba-b9a2-475b-80ac-9cb3a5c32f28 005258eefa5345409efe13f6a1de22ab c076c42271004e7aa86e84416e33f826 - - default default] Format inspector for vmdk does not match, excluding from consideration (Signature KDMV not found: b'\xebH\x90\x00') _process_chunk /usr/lib/python3.12/site-packages/oslo_utils/imageutils/format_inspector.py:1365
Oct 09 16:19:57 compute-0 nova_compute[117331]: 2025-10-09 16:19:57.784 2 DEBUG oslo_utils.imageutils.format_inspector [None req-bb8a87ba-b9a2-475b-80ac-9cb3a5c32f28 005258eefa5345409efe13f6a1de22ab c076c42271004e7aa86e84416e33f826 - - default default] Format inspector for vhdx does not match, excluding from consideration (Region signature not found at 30000) _process_chunk /usr/lib/python3.12/site-packages/oslo_utils/imageutils/format_inspector.py:1365
Oct 09 16:19:57 compute-0 nova_compute[117331]: 2025-10-09 16:19:57.791 2 DEBUG oslo_concurrency.processutils [None req-bb8a87ba-b9a2-475b-80ac-9cb3a5c32f28 005258eefa5345409efe13f6a1de22ab c076c42271004e7aa86e84416e33f826 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/cea3dacfdea0f3734ae526b812744e847bc2d356 --force-share --output=json execute /usr/lib/python3.12/site-packages/oslo_concurrency/processutils.py:349
Oct 09 16:19:57 compute-0 podman[144886]: 2025-10-09 16:19:57.848687411 +0000 UTC m=+0.078056838 container health_status 270830b5167cafbab735d8118980f26987e673cd0fa464dd2202394830bcca66 (image=38.102.83.66:5001/podified-master-centos10/openstack-multipathd:watcher_latest, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': '38.102.83.66:5001/podified-master-centos10/openstack-multipathd:watcher_latest', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, io.buildah.version=1.41.4, org.label-schema.build-date=20251007, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 10 Base Image, tcib_build_tag=watcher_latest, tcib_managed=true)
Oct 09 16:19:57 compute-0 nova_compute[117331]: 2025-10-09 16:19:57.862 2 DEBUG oslo_concurrency.processutils [None req-bb8a87ba-b9a2-475b-80ac-9cb3a5c32f28 005258eefa5345409efe13f6a1de22ab c076c42271004e7aa86e84416e33f826 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/cea3dacfdea0f3734ae526b812744e847bc2d356 --force-share --output=json" returned: 0 in 0.071s execute /usr/lib/python3.12/site-packages/oslo_concurrency/processutils.py:372
Oct 09 16:19:57 compute-0 nova_compute[117331]: 2025-10-09 16:19:57.863 2 DEBUG oslo_concurrency.lockutils [None req-bb8a87ba-b9a2-475b-80ac-9cb3a5c32f28 005258eefa5345409efe13f6a1de22ab c076c42271004e7aa86e84416e33f826 - - default default] Acquiring lock "cea3dacfdea0f3734ae526b812744e847bc2d356" by "nova.virt.libvirt.imagebackend.Qcow2.create_image.<locals>.create_qcow2_image" inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:405
Oct 09 16:19:57 compute-0 nova_compute[117331]: 2025-10-09 16:19:57.863 2 DEBUG oslo_concurrency.lockutils [None req-bb8a87ba-b9a2-475b-80ac-9cb3a5c32f28 005258eefa5345409efe13f6a1de22ab c076c42271004e7aa86e84416e33f826 - - default default] Lock "cea3dacfdea0f3734ae526b812744e847bc2d356" acquired by "nova.virt.libvirt.imagebackend.Qcow2.create_image.<locals>.create_qcow2_image" :: waited 0.001s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:410
Oct 09 16:19:57 compute-0 nova_compute[117331]: 2025-10-09 16:19:57.864 2 DEBUG oslo_utils.imageutils.format_inspector [None req-bb8a87ba-b9a2-475b-80ac-9cb3a5c32f28 005258eefa5345409efe13f6a1de22ab c076c42271004e7aa86e84416e33f826 - - default default] Format inspector for vmdk does not match, excluding from consideration (Signature KDMV not found: b'\xebH\x90\x00') _process_chunk /usr/lib/python3.12/site-packages/oslo_utils/imageutils/format_inspector.py:1365
Oct 09 16:19:57 compute-0 nova_compute[117331]: 2025-10-09 16:19:57.868 2 DEBUG oslo_utils.imageutils.format_inspector [None req-bb8a87ba-b9a2-475b-80ac-9cb3a5c32f28 005258eefa5345409efe13f6a1de22ab c076c42271004e7aa86e84416e33f826 - - default default] Format inspector for vhdx does not match, excluding from consideration (Region signature not found at 30000) _process_chunk /usr/lib/python3.12/site-packages/oslo_utils/imageutils/format_inspector.py:1365
Oct 09 16:19:57 compute-0 nova_compute[117331]: 2025-10-09 16:19:57.869 2 DEBUG oslo_concurrency.processutils [None req-bb8a87ba-b9a2-475b-80ac-9cb3a5c32f28 005258eefa5345409efe13f6a1de22ab c076c42271004e7aa86e84416e33f826 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/cea3dacfdea0f3734ae526b812744e847bc2d356 --force-share --output=json execute /usr/lib/python3.12/site-packages/oslo_concurrency/processutils.py:349
Oct 09 16:19:57 compute-0 nova_compute[117331]: 2025-10-09 16:19:57.924 2 DEBUG oslo_concurrency.processutils [None req-bb8a87ba-b9a2-475b-80ac-9cb3a5c32f28 005258eefa5345409efe13f6a1de22ab c076c42271004e7aa86e84416e33f826 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/cea3dacfdea0f3734ae526b812744e847bc2d356 --force-share --output=json" returned: 0 in 0.055s execute /usr/lib/python3.12/site-packages/oslo_concurrency/processutils.py:372
Oct 09 16:19:57 compute-0 nova_compute[117331]: 2025-10-09 16:19:57.925 2 DEBUG oslo_concurrency.processutils [None req-bb8a87ba-b9a2-475b-80ac-9cb3a5c32f28 005258eefa5345409efe13f6a1de22ab c076c42271004e7aa86e84416e33f826 - - default default] Running cmd (subprocess): env LC_ALL=C LANG=C qemu-img create -f qcow2 -o backing_file=/var/lib/nova/instances/_base/cea3dacfdea0f3734ae526b812744e847bc2d356,backing_fmt=raw /var/lib/nova/instances/2bff6d2b-fd02-44fa-89ec-e5699854e098/disk 1073741824 execute /usr/lib/python3.12/site-packages/oslo_concurrency/processutils.py:349
Oct 09 16:19:57 compute-0 nova_compute[117331]: 2025-10-09 16:19:57.956 2 DEBUG oslo_concurrency.processutils [None req-bb8a87ba-b9a2-475b-80ac-9cb3a5c32f28 005258eefa5345409efe13f6a1de22ab c076c42271004e7aa86e84416e33f826 - - default default] CMD "env LC_ALL=C LANG=C qemu-img create -f qcow2 -o backing_file=/var/lib/nova/instances/_base/cea3dacfdea0f3734ae526b812744e847bc2d356,backing_fmt=raw /var/lib/nova/instances/2bff6d2b-fd02-44fa-89ec-e5699854e098/disk 1073741824" returned: 0 in 0.030s execute /usr/lib/python3.12/site-packages/oslo_concurrency/processutils.py:372
Oct 09 16:19:57 compute-0 nova_compute[117331]: 2025-10-09 16:19:57.957 2 DEBUG oslo_concurrency.lockutils [None req-bb8a87ba-b9a2-475b-80ac-9cb3a5c32f28 005258eefa5345409efe13f6a1de22ab c076c42271004e7aa86e84416e33f826 - - default default] Lock "cea3dacfdea0f3734ae526b812744e847bc2d356" "released" by "nova.virt.libvirt.imagebackend.Qcow2.create_image.<locals>.create_qcow2_image" :: held 0.094s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:424
Oct 09 16:19:57 compute-0 nova_compute[117331]: 2025-10-09 16:19:57.958 2 DEBUG oslo_concurrency.processutils [None req-bb8a87ba-b9a2-475b-80ac-9cb3a5c32f28 005258eefa5345409efe13f6a1de22ab c076c42271004e7aa86e84416e33f826 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/cea3dacfdea0f3734ae526b812744e847bc2d356 --force-share --output=json execute /usr/lib/python3.12/site-packages/oslo_concurrency/processutils.py:349
Oct 09 16:19:58 compute-0 nova_compute[117331]: 2025-10-09 16:19:58.015 2 DEBUG oslo_concurrency.processutils [None req-bb8a87ba-b9a2-475b-80ac-9cb3a5c32f28 005258eefa5345409efe13f6a1de22ab c076c42271004e7aa86e84416e33f826 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/cea3dacfdea0f3734ae526b812744e847bc2d356 --force-share --output=json" returned: 0 in 0.057s execute /usr/lib/python3.12/site-packages/oslo_concurrency/processutils.py:372
Oct 09 16:19:58 compute-0 nova_compute[117331]: 2025-10-09 16:19:58.017 2 DEBUG nova.virt.disk.api [None req-bb8a87ba-b9a2-475b-80ac-9cb3a5c32f28 005258eefa5345409efe13f6a1de22ab c076c42271004e7aa86e84416e33f826 - - default default] Checking if we can resize image /var/lib/nova/instances/2bff6d2b-fd02-44fa-89ec-e5699854e098/disk. size=1073741824 can_resize_image /usr/lib/python3.12/site-packages/nova/virt/disk/api.py:164
Oct 09 16:19:58 compute-0 nova_compute[117331]: 2025-10-09 16:19:58.017 2 DEBUG oslo_concurrency.processutils [None req-bb8a87ba-b9a2-475b-80ac-9cb3a5c32f28 005258eefa5345409efe13f6a1de22ab c076c42271004e7aa86e84416e33f826 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/2bff6d2b-fd02-44fa-89ec-e5699854e098/disk --force-share --output=json execute /usr/lib/python3.12/site-packages/oslo_concurrency/processutils.py:349
Oct 09 16:19:58 compute-0 nova_compute[117331]: 2025-10-09 16:19:58.078 2 DEBUG oslo_concurrency.processutils [None req-bb8a87ba-b9a2-475b-80ac-9cb3a5c32f28 005258eefa5345409efe13f6a1de22ab c076c42271004e7aa86e84416e33f826 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/2bff6d2b-fd02-44fa-89ec-e5699854e098/disk --force-share --output=json" returned: 0 in 0.061s execute /usr/lib/python3.12/site-packages/oslo_concurrency/processutils.py:372
Oct 09 16:19:58 compute-0 nova_compute[117331]: 2025-10-09 16:19:58.079 2 DEBUG nova.virt.disk.api [None req-bb8a87ba-b9a2-475b-80ac-9cb3a5c32f28 005258eefa5345409efe13f6a1de22ab c076c42271004e7aa86e84416e33f826 - - default default] Cannot resize image /var/lib/nova/instances/2bff6d2b-fd02-44fa-89ec-e5699854e098/disk to a smaller size. can_resize_image /usr/lib/python3.12/site-packages/nova/virt/disk/api.py:170
Oct 09 16:19:58 compute-0 nova_compute[117331]: 2025-10-09 16:19:58.079 2 DEBUG nova.objects.instance [None req-bb8a87ba-b9a2-475b-80ac-9cb3a5c32f28 005258eefa5345409efe13f6a1de22ab c076c42271004e7aa86e84416e33f826 - - default default] Lazy-loading 'migration_context' on Instance uuid 2bff6d2b-fd02-44fa-89ec-e5699854e098 obj_load_attr /usr/lib/python3.12/site-packages/nova/objects/instance.py:1141
Oct 09 16:19:58 compute-0 nova_compute[117331]: 2025-10-09 16:19:58.588 2 DEBUG nova.objects.base [None req-bb8a87ba-b9a2-475b-80ac-9cb3a5c32f28 005258eefa5345409efe13f6a1de22ab c076c42271004e7aa86e84416e33f826 - - default default] Object Instance<2bff6d2b-fd02-44fa-89ec-e5699854e098> lazy-loaded attributes: trusted_certs,migration_context wrapper /usr/lib/python3.12/site-packages/nova/objects/base.py:136
Oct 09 16:19:58 compute-0 nova_compute[117331]: 2025-10-09 16:19:58.589 2 DEBUG oslo_concurrency.processutils [None req-bb8a87ba-b9a2-475b-80ac-9cb3a5c32f28 005258eefa5345409efe13f6a1de22ab c076c42271004e7aa86e84416e33f826 - - default default] Running cmd (subprocess): env LC_ALL=C LANG=C qemu-img create -f raw /var/lib/nova/instances/2bff6d2b-fd02-44fa-89ec-e5699854e098/disk.config 497664 execute /usr/lib/python3.12/site-packages/oslo_concurrency/processutils.py:349
Oct 09 16:19:58 compute-0 nova_compute[117331]: 2025-10-09 16:19:58.625 2 DEBUG oslo_concurrency.processutils [None req-bb8a87ba-b9a2-475b-80ac-9cb3a5c32f28 005258eefa5345409efe13f6a1de22ab c076c42271004e7aa86e84416e33f826 - - default default] CMD "env LC_ALL=C LANG=C qemu-img create -f raw /var/lib/nova/instances/2bff6d2b-fd02-44fa-89ec-e5699854e098/disk.config 497664" returned: 0 in 0.037s execute /usr/lib/python3.12/site-packages/oslo_concurrency/processutils.py:372
Oct 09 16:19:58 compute-0 nova_compute[117331]: 2025-10-09 16:19:58.627 2 DEBUG nova.virt.libvirt.driver [None req-bb8a87ba-b9a2-475b-80ac-9cb3a5c32f28 005258eefa5345409efe13f6a1de22ab c076c42271004e7aa86e84416e33f826 - - default default] [instance: 2bff6d2b-fd02-44fa-89ec-e5699854e098] Plugging VIFs using destination host port bindings before live migration. _pre_live_migration_plug_vifs /usr/lib/python3.12/site-packages/nova/virt/libvirt/driver.py:11704
Oct 09 16:19:58 compute-0 nova_compute[117331]: 2025-10-09 16:19:58.628 2 DEBUG nova.virt.libvirt.vif [None req-bb8a87ba-b9a2-475b-80ac-9cb3a5c32f28 005258eefa5345409efe13f6a1de22ab c076c42271004e7aa86e84416e33f826 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,compute_id=2,config_drive='True',created_at=2025-10-09T16:19:04Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description=None,display_name='tempest-TestExecuteBasicStrategy-server-546735381',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-1.ctlplane.example.com',hostname='tempest-testexecutebasicstrategy-server-546735381',id=11,image_ref='b7d6e0af-25e4-4227-9dc6-43143898ceee',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-10-09T16:19:20Z,launched_on='compute-1.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-1.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=1,progress=0,project_id='8a3f79d5a5f2475d93599ef409043893',ramdisk_id='',reservation_id='r-0kaf3jgd',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,manager,admin,member',image_base_image_ref='b7d6e0af-25e4-4227-9dc6-43143898ceee',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestExecuteBasicStrategy-20506088',owner_user_name='tempest-TestExecuteBasicStrategy-20506088-project-admin'},tags=<?>,task_state='migrating',terminated_at=None,trusted_certs=None,updated_at=2025-10-09T16:19:20Z,user_data=None,user_id='2c30c20b7c364f81b40bcec56afc8ae3',uuid=2bff6d2b-fd02-44fa-89ec-e5699854e098,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "88f83656-b34d-493c-8e34-263dae66e382", "address": "fa:16:3e:ba:30:d2", "network": {"id": "7a8574b4-b4f7-483c-8050-7281b5ac5624", "bridge": "br-int", "label": "tempest-TestExecuteBasicStrategy-1906595087-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f4e80e48be374e249fbd628564fc7b82", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system"}, "devname": "tap88f83656-b3", "ovs_interfaceid": "88f83656-b34d-493c-8e34-263dae66e382", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {"os_vif_delegation": true}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.12/site-packages/nova/virt/libvirt/vif.py:721
Oct 09 16:19:58 compute-0 nova_compute[117331]: 2025-10-09 16:19:58.629 2 DEBUG nova.network.os_vif_util [None req-bb8a87ba-b9a2-475b-80ac-9cb3a5c32f28 005258eefa5345409efe13f6a1de22ab c076c42271004e7aa86e84416e33f826 - - default default] Converting VIF {"id": "88f83656-b34d-493c-8e34-263dae66e382", "address": "fa:16:3e:ba:30:d2", "network": {"id": "7a8574b4-b4f7-483c-8050-7281b5ac5624", "bridge": "br-int", "label": "tempest-TestExecuteBasicStrategy-1906595087-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f4e80e48be374e249fbd628564fc7b82", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system"}, "devname": "tap88f83656-b3", "ovs_interfaceid": "88f83656-b34d-493c-8e34-263dae66e382", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {"os_vif_delegation": true}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.12/site-packages/nova/network/os_vif_util.py:511
Oct 09 16:19:58 compute-0 nova_compute[117331]: 2025-10-09 16:19:58.630 2 DEBUG nova.network.os_vif_util [None req-bb8a87ba-b9a2-475b-80ac-9cb3a5c32f28 005258eefa5345409efe13f6a1de22ab c076c42271004e7aa86e84416e33f826 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:ba:30:d2,bridge_name='br-int',has_traffic_filtering=True,id=88f83656-b34d-493c-8e34-263dae66e382,network=Network(7a8574b4-b4f7-483c-8050-7281b5ac5624),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap88f83656-b3') nova_to_osvif_vif /usr/lib/python3.12/site-packages/nova/network/os_vif_util.py:548
Oct 09 16:19:58 compute-0 nova_compute[117331]: 2025-10-09 16:19:58.630 2 DEBUG os_vif [None req-bb8a87ba-b9a2-475b-80ac-9cb3a5c32f28 005258eefa5345409efe13f6a1de22ab c076c42271004e7aa86e84416e33f826 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:ba:30:d2,bridge_name='br-int',has_traffic_filtering=True,id=88f83656-b34d-493c-8e34-263dae66e382,network=Network(7a8574b4-b4f7-483c-8050-7281b5ac5624),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap88f83656-b3') plug /usr/lib/python3.12/site-packages/os_vif/__init__.py:76
Oct 09 16:19:58 compute-0 nova_compute[117331]: 2025-10-09 16:19:58.632 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 22 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 09 16:19:58 compute-0 nova_compute[117331]: 2025-10-09 16:19:58.632 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.12/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 09 16:19:58 compute-0 nova_compute[117331]: 2025-10-09 16:19:58.633 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.12/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 09 16:19:58 compute-0 nova_compute[117331]: 2025-10-09 16:19:58.634 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 22 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 09 16:19:58 compute-0 nova_compute[117331]: 2025-10-09 16:19:58.634 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbCreateCommand(_result=None, table=QoS, columns={'type': 'linux-noop', 'external_ids': {'id': '321b52fa-ea14-527e-8d48-2afa3e764798', '_type': 'linux-noop'}}, row=False) do_commit /usr/lib/python3.12/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 09 16:19:58 compute-0 nova_compute[117331]: 2025-10-09 16:19:58.636 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 09 16:19:58 compute-0 nova_compute[117331]: 2025-10-09 16:19:58.638 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 09 16:19:58 compute-0 nova_compute[117331]: 2025-10-09 16:19:58.645 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 22 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 09 16:19:58 compute-0 nova_compute[117331]: 2025-10-09 16:19:58.646 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap88f83656-b3, may_exist=True, interface_attrs={}) do_commit /usr/lib/python3.12/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 09 16:19:58 compute-0 nova_compute[117331]: 2025-10-09 16:19:58.647 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Port, record=tap88f83656-b3, col_values=(('qos', UUID('0a513a68-f203-4379-906d-566bcc8a660e')),), if_exists=True) do_commit /usr/lib/python3.12/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 09 16:19:58 compute-0 nova_compute[117331]: 2025-10-09 16:19:58.647 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=2): DbSetCommand(_result=None, table=Interface, record=tap88f83656-b3, col_values=(('external_ids', {'iface-id': '88f83656-b34d-493c-8e34-263dae66e382', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:ba:30:d2', 'vm-uuid': '2bff6d2b-fd02-44fa-89ec-e5699854e098'}),), if_exists=True) do_commit /usr/lib/python3.12/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 09 16:19:58 compute-0 nova_compute[117331]: 2025-10-09 16:19:58.649 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 09 16:19:58 compute-0 NetworkManager[1028]: <info>  [1760026798.6508] manager: (tap88f83656-b3): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/46)
Oct 09 16:19:58 compute-0 nova_compute[117331]: 2025-10-09 16:19:58.652 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:248
Oct 09 16:19:58 compute-0 nova_compute[117331]: 2025-10-09 16:19:58.658 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 09 16:19:58 compute-0 nova_compute[117331]: 2025-10-09 16:19:58.659 2 INFO os_vif [None req-bb8a87ba-b9a2-475b-80ac-9cb3a5c32f28 005258eefa5345409efe13f6a1de22ab c076c42271004e7aa86e84416e33f826 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:ba:30:d2,bridge_name='br-int',has_traffic_filtering=True,id=88f83656-b34d-493c-8e34-263dae66e382,network=Network(7a8574b4-b4f7-483c-8050-7281b5ac5624),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap88f83656-b3')
Oct 09 16:19:58 compute-0 nova_compute[117331]: 2025-10-09 16:19:58.660 2 DEBUG nova.virt.libvirt.driver [None req-bb8a87ba-b9a2-475b-80ac-9cb3a5c32f28 005258eefa5345409efe13f6a1de22ab c076c42271004e7aa86e84416e33f826 - - default default] No dst_numa_info in migrate_data, no cores to power up in pre_live_migration. pre_live_migration /usr/lib/python3.12/site-packages/nova/virt/libvirt/driver.py:11851
Oct 09 16:19:58 compute-0 nova_compute[117331]: 2025-10-09 16:19:58.660 2 DEBUG nova.compute.manager [None req-bb8a87ba-b9a2-475b-80ac-9cb3a5c32f28 005258eefa5345409efe13f6a1de22ab c076c42271004e7aa86e84416e33f826 - - default default] driver pre_live_migration data is LibvirtLiveMigrateData(bdms=[],block_migration=True,disk_available_mb=73728,disk_over_commit=<?>,dst_cpu_shared_set_info=set([]),dst_numa_info=<?>,dst_supports_mdev_live_migration=True,dst_supports_numa_live_migration=<?>,dst_wants_file_backed_memory=False,file_backed_memory_discard=<?>,filename='tmplrvnry_w',graphics_listen_addr_spice=127.0.0.1,graphics_listen_addr_vnc=::,image_type='qcow2',instance_relative_path='2bff6d2b-fd02-44fa-89ec-e5699854e098',is_shared_block_storage=False,is_shared_instance_path=False,is_volume_backed=False,migration=<?>,old_vol_attachment_ids={},pci_dev_map_src_dst={},serial_listen_addr=None,serial_listen_ports=[],source_mdev_types=<?>,src_supports_native_luks=<?>,src_supports_numa_live_migration=<?>,supported_perf_events=[],target_connect_addr=None,target_mdevs=<?>,vifs=[VIFMigrateData],wait_for_vif_plugged=<?>) pre_live_migration /usr/lib/python3.12/site-packages/nova/compute/manager.py:9377
Oct 09 16:19:58 compute-0 nova_compute[117331]: 2025-10-09 16:19:58.662 2 WARNING neutronclient.v2_0.client [None req-bb8a87ba-b9a2-475b-80ac-9cb3a5c32f28 005258eefa5345409efe13f6a1de22ab c076c42271004e7aa86e84416e33f826 - - default default] The python binding code in neutronclient is deprecated in favor of OpenstackSDK, please use that as this will be removed in a future release.
Oct 09 16:19:58 compute-0 nova_compute[117331]: 2025-10-09 16:19:58.818 2 WARNING neutronclient.v2_0.client [None req-bb8a87ba-b9a2-475b-80ac-9cb3a5c32f28 005258eefa5345409efe13f6a1de22ab c076c42271004e7aa86e84416e33f826 - - default default] The python binding code in neutronclient is deprecated in favor of OpenstackSDK, please use that as this will be removed in a future release.
Oct 09 16:19:59 compute-0 ovn_metadata_agent[28608]: 2025-10-09 16:19:59.060 28613 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=12, options={'arp_ns_explicit_output': 'true', 'fdb_removal_limit': '0', 'ignore_lsp_down': 'false', 'mac_binding_removal_limit': '0', 'mac_prefix': '4a:65:5f', 'max_tunid': '16711680', 'northd_internal_version': '24.09.4-20.37.0-77.8', 'svc_monitor_mac': '32:24:1e:e3:5d:e8'}, ipsec=False) old=SB_Global(nb_cfg=11) matches /usr/lib/python3.12/site-packages/ovsdbapp/backend/ovs_idl/event.py:55
Oct 09 16:19:59 compute-0 ovn_metadata_agent[28608]: 2025-10-09 16:19:59.061 28613 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 0 seconds run /usr/lib/python3.12/site-packages/neutron/agent/ovn/metadata/agent.py:367
Oct 09 16:19:59 compute-0 nova_compute[117331]: 2025-10-09 16:19:59.061 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 09 16:19:59 compute-0 ovn_metadata_agent[28608]: 2025-10-09 16:19:59.063 28613 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=9954897f-aa83-45dd-8e84-289816676c2a, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '12'}),), if_exists=True) do_commit /usr/lib/python3.12/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 09 16:19:59 compute-0 nova_compute[117331]: 2025-10-09 16:19:59.383 2 DEBUG nova.network.neutron [None req-bb8a87ba-b9a2-475b-80ac-9cb3a5c32f28 005258eefa5345409efe13f6a1de22ab c076c42271004e7aa86e84416e33f826 - - default default] [instance: 2bff6d2b-fd02-44fa-89ec-e5699854e098] Port 88f83656-b34d-493c-8e34-263dae66e382 updated with migration profile {'migrating_to': 'compute-0.ctlplane.example.com'} successfully _setup_migration_port_profile /usr/lib/python3.12/site-packages/nova/network/neutron.py:356
Oct 09 16:19:59 compute-0 nova_compute[117331]: 2025-10-09 16:19:59.399 2 DEBUG nova.compute.manager [None req-bb8a87ba-b9a2-475b-80ac-9cb3a5c32f28 005258eefa5345409efe13f6a1de22ab c076c42271004e7aa86e84416e33f826 - - default default] pre_live_migration result data is LibvirtLiveMigrateData(bdms=[],block_migration=True,disk_available_mb=73728,disk_over_commit=<?>,dst_cpu_shared_set_info=set([]),dst_numa_info=<?>,dst_supports_mdev_live_migration=True,dst_supports_numa_live_migration=<?>,dst_wants_file_backed_memory=False,file_backed_memory_discard=<?>,filename='tmplrvnry_w',graphics_listen_addr_spice=127.0.0.1,graphics_listen_addr_vnc=::,image_type='qcow2',instance_relative_path='2bff6d2b-fd02-44fa-89ec-e5699854e098',is_shared_block_storage=False,is_shared_instance_path=False,is_volume_backed=False,migration=<?>,old_vol_attachment_ids={},pci_dev_map_src_dst={},serial_listen_addr=None,serial_listen_ports=[],source_mdev_types=<?>,src_supports_native_luks=<?>,src_supports_numa_live_migration=<?>,supported_perf_events=[],target_connect_addr=None,target_mdevs=<?>,vifs=[VIFMigrateData],wait_for_vif_plugged=True) pre_live_migration /usr/lib/python3.12/site-packages/nova/compute/manager.py:9443
Oct 09 16:19:59 compute-0 podman[127775]: time="2025-10-09T16:19:59Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Oct 09 16:19:59 compute-0 podman[127775]: @ - - [09/Oct/2025:16:19:59 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 20749 "" "Go-http-client/1.1"
Oct 09 16:19:59 compute-0 podman[127775]: @ - - [09/Oct/2025:16:19:59 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 3485 "" "Go-http-client/1.1"
Oct 09 16:20:00 compute-0 nova_compute[117331]: 2025-10-09 16:20:00.789 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 09 16:20:01 compute-0 openstack_network_exporter[129925]: ERROR   16:20:01 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Oct 09 16:20:01 compute-0 openstack_network_exporter[129925]: ERROR   16:20:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Oct 09 16:20:01 compute-0 openstack_network_exporter[129925]: ERROR   16:20:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Oct 09 16:20:01 compute-0 openstack_network_exporter[129925]: ERROR   16:20:01 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Oct 09 16:20:01 compute-0 openstack_network_exporter[129925]: ERROR   16:20:01 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Oct 09 16:20:01 compute-0 systemd[1]: Starting system activity accounting tool...
Oct 09 16:20:01 compute-0 systemd[1]: sysstat-collect.service: Deactivated successfully.
Oct 09 16:20:01 compute-0 systemd[1]: Finished system activity accounting tool.
Oct 09 16:20:01 compute-0 podman[144928]: 2025-10-09 16:20:01.842803621 +0000 UTC m=+0.069283012 container health_status 86aa93e887899e54d383b0fc17fb3c5637680d1d8e146a130a557c837f0260fe (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>)
Oct 09 16:20:02 compute-0 systemd[1]: Starting libvirt proxy daemon...
Oct 09 16:20:02 compute-0 systemd[1]: Started libvirt proxy daemon.
Oct 09 16:20:02 compute-0 kernel: tap88f83656-b3: entered promiscuous mode
Oct 09 16:20:02 compute-0 NetworkManager[1028]: <info>  [1760026802.8955] manager: (tap88f83656-b3): new Tun device (/org/freedesktop/NetworkManager/Devices/47)
Oct 09 16:20:02 compute-0 nova_compute[117331]: 2025-10-09 16:20:02.895 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 09 16:20:02 compute-0 ovn_controller[19752]: 2025-10-09T16:20:02Z|00108|binding|INFO|Claiming lport 88f83656-b34d-493c-8e34-263dae66e382 for this additional chassis.
Oct 09 16:20:02 compute-0 ovn_controller[19752]: 2025-10-09T16:20:02Z|00109|binding|INFO|88f83656-b34d-493c-8e34-263dae66e382: Claiming fa:16:3e:ba:30:d2 10.100.0.6
Oct 09 16:20:02 compute-0 ovn_metadata_agent[28608]: 2025-10-09 16:20:02.902 28613 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:ba:30:d2 10.100.0.6'], port_security=['fa:16:3e:ba:30:d2 10.100.0.6'], type=, nat_addresses=[], virtual_parent=[], up=[True], options={'requested-chassis': 'compute-1.ctlplane.example.com,compute-0.ctlplane.example.com', 'activation-strategy': 'rarp'}, parent_port=[], requested_additional_chassis=[<ovs.db.idl.Row object at 0x7f37b2576840>], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.6/28', 'neutron:device_id': '2bff6d2b-fd02-44fa-89ec-e5699854e098', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-7a8574b4-b4f7-483c-8050-7281b5ac5624', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '8a3f79d5a5f2475d93599ef409043893', 'neutron:revision_number': '10', 'neutron:security_group_ids': '3635f8cb-8906-4fc5-a724-24c5054b393c', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-1.ctlplane.example.com'}, additional_chassis=[<ovs.db.idl.Row object at 0x7f37b2576840>], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=ba3ccdc6-51b8-4206-add5-95d4a6a3eef3, chassis=[], tunnel_key=4, gateway_chassis=[], requested_chassis=[], logical_port=88f83656-b34d-493c-8e34-263dae66e382) old=Port_Binding(additional_chassis=[]) matches /usr/lib/python3.12/site-packages/ovsdbapp/backend/ovs_idl/event.py:55
Oct 09 16:20:02 compute-0 ovn_metadata_agent[28608]: 2025-10-09 16:20:02.903 28613 INFO neutron.agent.ovn.metadata.agent [-] Port 88f83656-b34d-493c-8e34-263dae66e382 in datapath 7a8574b4-b4f7-483c-8050-7281b5ac5624 unbound from our chassis
Oct 09 16:20:02 compute-0 ovn_metadata_agent[28608]: 2025-10-09 16:20:02.905 28613 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 7a8574b4-b4f7-483c-8050-7281b5ac5624
Oct 09 16:20:02 compute-0 ovn_controller[19752]: 2025-10-09T16:20:02Z|00110|binding|INFO|Setting lport 88f83656-b34d-493c-8e34-263dae66e382 ovn-installed in OVS
Oct 09 16:20:02 compute-0 nova_compute[117331]: 2025-10-09 16:20:02.926 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 09 16:20:02 compute-0 nova_compute[117331]: 2025-10-09 16:20:02.927 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 09 16:20:02 compute-0 nova_compute[117331]: 2025-10-09 16:20:02.930 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 09 16:20:02 compute-0 ovn_metadata_agent[28608]: 2025-10-09 16:20:02.932 139687 DEBUG oslo.privsep.daemon [-] privsep: reply[a785c730-de73-4f53-affd-04f58dd2fc52]: (4, True) _call_back /usr/lib/python3.12/site-packages/oslo_privsep/daemon.py:515
Oct 09 16:20:02 compute-0 systemd-udevd[144983]: Network interface NamePolicy= disabled on kernel command line.
Oct 09 16:20:02 compute-0 systemd-machined[77487]: New machine qemu-7-instance-0000000b.
Oct 09 16:20:02 compute-0 NetworkManager[1028]: <info>  [1760026802.9565] device (tap88f83656-b3): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Oct 09 16:20:02 compute-0 NetworkManager[1028]: <info>  [1760026802.9580] device (tap88f83656-b3): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Oct 09 16:20:02 compute-0 systemd[1]: Started Virtual Machine qemu-7-instance-0000000b.
Oct 09 16:20:02 compute-0 ovn_metadata_agent[28608]: 2025-10-09 16:20:02.975 141048 DEBUG oslo.privsep.daemon [-] privsep: reply[f1ff8b2e-b891-49a7-9bc2-0f8eabc31ae9]: (4, None) _call_back /usr/lib/python3.12/site-packages/oslo_privsep/daemon.py:515
Oct 09 16:20:02 compute-0 ovn_metadata_agent[28608]: 2025-10-09 16:20:02.978 141048 DEBUG oslo.privsep.daemon [-] privsep: reply[1382c24e-473e-4adb-9dab-bab9a7fdfc44]: (4, None) _call_back /usr/lib/python3.12/site-packages/oslo_privsep/daemon.py:515
Oct 09 16:20:03 compute-0 ovn_metadata_agent[28608]: 2025-10-09 16:20:03.010 141048 DEBUG oslo.privsep.daemon [-] privsep: reply[937a8e05-be50-41a4-95ea-ba90b504cc69]: (4, None) _call_back /usr/lib/python3.12/site-packages/oslo_privsep/daemon.py:515
Oct 09 16:20:03 compute-0 ovn_metadata_agent[28608]: 2025-10-09 16:20:03.029 139687 DEBUG oslo.privsep.daemon [-] privsep: reply[65f5d4fc-c9e1-42ea-b474-5b3fdbcf3428]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap7a8574b4-b1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:8e:a1:d1'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 10, 'tx_packets': 5, 'rx_bytes': 916, 'tx_bytes': 354, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 10, 'tx_packets': 5, 'rx_bytes': 916, 'tx_bytes': 354, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 32], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 168185, 'reachable_time': 38344, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 8, 'inoctets': 720, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 8, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 720, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 8, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 144999, 'error': None, 'target': 'ovnmeta-7a8574b4-b4f7-483c-8050-7281b5ac5624', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.12/site-packages/oslo_privsep/daemon.py:515
Oct 09 16:20:03 compute-0 ovn_metadata_agent[28608]: 2025-10-09 16:20:03.049 139687 DEBUG oslo.privsep.daemon [-] privsep: reply[1a63a430-493c-4da4-b035-eb95eacfa65a]: (4, ({'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tap7a8574b4-b1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 168199, 'tstamp': 168199}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 145001, 'error': None, 'target': 'ovnmeta-7a8574b4-b4f7-483c-8050-7281b5ac5624', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tap7a8574b4-b1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 168203, 'tstamp': 168203}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 145001, 'error': None, 'target': 'ovnmeta-7a8574b4-b4f7-483c-8050-7281b5ac5624', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.12/site-packages/oslo_privsep/daemon.py:515
Oct 09 16:20:03 compute-0 ovn_metadata_agent[28608]: 2025-10-09 16:20:03.051 28613 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap7a8574b4-b0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.12/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 09 16:20:03 compute-0 nova_compute[117331]: 2025-10-09 16:20:03.052 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 09 16:20:03 compute-0 nova_compute[117331]: 2025-10-09 16:20:03.053 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 09 16:20:03 compute-0 ovn_metadata_agent[28608]: 2025-10-09 16:20:03.054 28613 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap7a8574b4-b0, may_exist=True, interface_attrs={}) do_commit /usr/lib/python3.12/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 09 16:20:03 compute-0 ovn_metadata_agent[28608]: 2025-10-09 16:20:03.054 28613 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.12/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 09 16:20:03 compute-0 ovn_metadata_agent[28608]: 2025-10-09 16:20:03.055 28613 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap7a8574b4-b0, col_values=(('external_ids', {'iface-id': 'd5e2e4c8-4a14-4222-b045-a4cb3081bc1d'}),), if_exists=True) do_commit /usr/lib/python3.12/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 09 16:20:03 compute-0 ovn_metadata_agent[28608]: 2025-10-09 16:20:03.055 28613 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.12/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 09 16:20:03 compute-0 ovn_metadata_agent[28608]: 2025-10-09 16:20:03.056 139687 DEBUG oslo.privsep.daemon [-] privsep: reply[23fcc525-fa29-49b3-84d5-0d265361cb76]: (4, '\nglobal\n    log         /dev/log local0 debug\n    log-tag     haproxy-metadata-proxy-7a8574b4-b4f7-483c-8050-7281b5ac5624\n    user        root\n    group       root\n    maxconn     1024\n    pidfile     /var/lib/neutron/external/pids/7a8574b4-b4f7-483c-8050-7281b5ac5624.pid.haproxy\n    daemon\n\ndefaults\n    log global\n    mode http\n    option httplog\n    option dontlognull\n    option http-server-close\n    option forwardfor\n    retries                 3\n    timeout http-request    30s\n    timeout connect         30s\n    timeout client          32s\n    timeout server          32s\n    timeout http-keep-alive 30s\n\nlisten listener\n    bind 169.254.169.254:80\n    \n    server metadata /var/lib/neutron/metadata_proxy\n\n    http-request add-header X-OVN-Network-ID 7a8574b4-b4f7-483c-8050-7281b5ac5624\n') _call_back /usr/lib/python3.12/site-packages/oslo_privsep/daemon.py:515
Oct 09 16:20:03 compute-0 nova_compute[117331]: 2025-10-09 16:20:03.650 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 09 16:20:05 compute-0 nova_compute[117331]: 2025-10-09 16:20:05.792 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 09 16:20:05 compute-0 ovn_controller[19752]: 2025-10-09T16:20:05Z|00111|binding|INFO|Claiming lport 88f83656-b34d-493c-8e34-263dae66e382 for this chassis.
Oct 09 16:20:05 compute-0 ovn_controller[19752]: 2025-10-09T16:20:05Z|00112|binding|INFO|88f83656-b34d-493c-8e34-263dae66e382: Claiming fa:16:3e:ba:30:d2 10.100.0.6
Oct 09 16:20:05 compute-0 ovn_controller[19752]: 2025-10-09T16:20:05Z|00113|binding|INFO|Setting lport 88f83656-b34d-493c-8e34-263dae66e382 up in Southbound
Oct 09 16:20:06 compute-0 podman[145023]: 2025-10-09 16:20:06.861201456 +0000 UTC m=+0.084654915 container health_status 0e966c7a283689daf23c5db5017f23f685ee836319240188a9f70ddd246563c5 (image=38.102.83.66:5001/podified-master-centos10/openstack-neutron-metadata-agent-ovn:watcher_latest, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, tcib_build_tag=watcher_latest, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 10 Base Image, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': '38.102.83.66:5001/podified-master-centos10/openstack-neutron-metadata-agent-ovn:watcher_latest', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', 
'/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.41.4, org.label-schema.build-date=20251007, org.label-schema.license=GPLv2)
Oct 09 16:20:06 compute-0 podman[145024]: 2025-10-09 16:20:06.877397283 +0000 UTC m=+0.085140690 container health_status 8227728d23f374cb9528be6ddf01dff38d8c792912a17b0d137932a18390b820 (image=38.102.83.66:5001/podified-master-centos10/openstack-iscsid:watcher_latest, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 10 Base Image, tcib_managed=true, container_name=iscsid, managed_by=edpm_ansible, org.label-schema.build-date=20251007, io.buildah.version=1.41.4, org.label-schema.schema-version=1.0, tcib_build_tag=watcher_latest, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': '38.102.83.66:5001/podified-master-centos10/openstack-iscsid:watcher_latest', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, config_id=iscsid, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS)
Oct 09 16:20:07 compute-0 nova_compute[117331]: 2025-10-09 16:20:07.315 2 INFO nova.compute.manager [None req-bb8a87ba-b9a2-475b-80ac-9cb3a5c32f28 005258eefa5345409efe13f6a1de22ab c076c42271004e7aa86e84416e33f826 - - default default] [instance: 2bff6d2b-fd02-44fa-89ec-e5699854e098] Post operation of migration started
Oct 09 16:20:07 compute-0 nova_compute[117331]: 2025-10-09 16:20:07.316 2 WARNING neutronclient.v2_0.client [None req-bb8a87ba-b9a2-475b-80ac-9cb3a5c32f28 005258eefa5345409efe13f6a1de22ab c076c42271004e7aa86e84416e33f826 - - default default] The python binding code in neutronclient is deprecated in favor of OpenstackSDK, please use that as this will be removed in a future release.
Oct 09 16:20:07 compute-0 nova_compute[117331]: 2025-10-09 16:20:07.639 2 WARNING neutronclient.v2_0.client [None req-bb8a87ba-b9a2-475b-80ac-9cb3a5c32f28 005258eefa5345409efe13f6a1de22ab c076c42271004e7aa86e84416e33f826 - - default default] The python binding code in neutronclient is deprecated in favor of OpenstackSDK, please use that as this will be removed in a future release.
Oct 09 16:20:07 compute-0 nova_compute[117331]: 2025-10-09 16:20:07.640 2 WARNING neutronclient.v2_0.client [None req-bb8a87ba-b9a2-475b-80ac-9cb3a5c32f28 005258eefa5345409efe13f6a1de22ab c076c42271004e7aa86e84416e33f826 - - default default] The python binding code in neutronclient is deprecated in favor of OpenstackSDK, please use that as this will be removed in a future release.
Oct 09 16:20:07 compute-0 nova_compute[117331]: 2025-10-09 16:20:07.717 2 DEBUG oslo_concurrency.lockutils [None req-bb8a87ba-b9a2-475b-80ac-9cb3a5c32f28 005258eefa5345409efe13f6a1de22ab c076c42271004e7aa86e84416e33f826 - - default default] Acquiring lock "refresh_cache-2bff6d2b-fd02-44fa-89ec-e5699854e098" lock /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:313
Oct 09 16:20:07 compute-0 nova_compute[117331]: 2025-10-09 16:20:07.718 2 DEBUG oslo_concurrency.lockutils [None req-bb8a87ba-b9a2-475b-80ac-9cb3a5c32f28 005258eefa5345409efe13f6a1de22ab c076c42271004e7aa86e84416e33f826 - - default default] Acquired lock "refresh_cache-2bff6d2b-fd02-44fa-89ec-e5699854e098" lock /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:316
Oct 09 16:20:07 compute-0 nova_compute[117331]: 2025-10-09 16:20:07.718 2 DEBUG nova.network.neutron [None req-bb8a87ba-b9a2-475b-80ac-9cb3a5c32f28 005258eefa5345409efe13f6a1de22ab c076c42271004e7aa86e84416e33f826 - - default default] [instance: 2bff6d2b-fd02-44fa-89ec-e5699854e098] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.12/site-packages/nova/network/neutron.py:2070
Oct 09 16:20:08 compute-0 nova_compute[117331]: 2025-10-09 16:20:08.226 2 WARNING neutronclient.v2_0.client [None req-bb8a87ba-b9a2-475b-80ac-9cb3a5c32f28 005258eefa5345409efe13f6a1de22ab c076c42271004e7aa86e84416e33f826 - - default default] The python binding code in neutronclient is deprecated in favor of OpenstackSDK, please use that as this will be removed in a future release.
Oct 09 16:20:08 compute-0 nova_compute[117331]: 2025-10-09 16:20:08.652 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 09 16:20:08 compute-0 nova_compute[117331]: 2025-10-09 16:20:08.690 2 WARNING neutronclient.v2_0.client [None req-bb8a87ba-b9a2-475b-80ac-9cb3a5c32f28 005258eefa5345409efe13f6a1de22ab c076c42271004e7aa86e84416e33f826 - - default default] The python binding code in neutronclient is deprecated in favor of OpenstackSDK, please use that as this will be removed in a future release.
Oct 09 16:20:08 compute-0 nova_compute[117331]: 2025-10-09 16:20:08.826 2 DEBUG nova.network.neutron [None req-bb8a87ba-b9a2-475b-80ac-9cb3a5c32f28 005258eefa5345409efe13f6a1de22ab c076c42271004e7aa86e84416e33f826 - - default default] [instance: 2bff6d2b-fd02-44fa-89ec-e5699854e098] Updating instance_info_cache with network_info: [{"id": "88f83656-b34d-493c-8e34-263dae66e382", "address": "fa:16:3e:ba:30:d2", "network": {"id": "7a8574b4-b4f7-483c-8050-7281b5ac5624", "bridge": "br-int", "label": "tempest-TestExecuteBasicStrategy-1906595087-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f4e80e48be374e249fbd628564fc7b82", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap88f83656-b3", "ovs_interfaceid": "88f83656-b34d-493c-8e34-263dae66e382", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {"os_vif_delegation": true}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.12/site-packages/nova/network/neutron.py:116
Oct 09 16:20:09 compute-0 nova_compute[117331]: 2025-10-09 16:20:09.357 2 DEBUG oslo_concurrency.lockutils [None req-bb8a87ba-b9a2-475b-80ac-9cb3a5c32f28 005258eefa5345409efe13f6a1de22ab c076c42271004e7aa86e84416e33f826 - - default default] Releasing lock "refresh_cache-2bff6d2b-fd02-44fa-89ec-e5699854e098" lock /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:334
Oct 09 16:20:09 compute-0 nova_compute[117331]: 2025-10-09 16:20:09.898 2 DEBUG oslo_concurrency.lockutils [None req-bb8a87ba-b9a2-475b-80ac-9cb3a5c32f28 005258eefa5345409efe13f6a1de22ab c076c42271004e7aa86e84416e33f826 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.allocate_pci_devices_for_instance" inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:405
Oct 09 16:20:09 compute-0 nova_compute[117331]: 2025-10-09 16:20:09.899 2 DEBUG oslo_concurrency.lockutils [None req-bb8a87ba-b9a2-475b-80ac-9cb3a5c32f28 005258eefa5345409efe13f6a1de22ab c076c42271004e7aa86e84416e33f826 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.allocate_pci_devices_for_instance" :: waited 0.001s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:410
Oct 09 16:20:09 compute-0 nova_compute[117331]: 2025-10-09 16:20:09.899 2 DEBUG oslo_concurrency.lockutils [None req-bb8a87ba-b9a2-475b-80ac-9cb3a5c32f28 005258eefa5345409efe13f6a1de22ab c076c42271004e7aa86e84416e33f826 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.allocate_pci_devices_for_instance" :: held 0.000s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:424
Oct 09 16:20:09 compute-0 nova_compute[117331]: 2025-10-09 16:20:09.906 2 INFO nova.virt.libvirt.driver [None req-bb8a87ba-b9a2-475b-80ac-9cb3a5c32f28 005258eefa5345409efe13f6a1de22ab c076c42271004e7aa86e84416e33f826 - - default default] [instance: 2bff6d2b-fd02-44fa-89ec-e5699854e098] Sending announce-self command to QEMU monitor. Attempt 1 of 3
Oct 09 16:20:09 compute-0 virtqemud[117629]: libvirt version: 10.10.0, package: 15.el9 (builder@centos.org, 2025-08-18-13:22:20, )
Oct 09 16:20:09 compute-0 virtqemud[117629]: hostname: compute-0
Oct 09 16:20:09 compute-0 virtqemud[117629]: Domain id=7 name='instance-0000000b' uuid=2bff6d2b-fd02-44fa-89ec-e5699854e098 is tainted: custom-monitor
Oct 09 16:20:10 compute-0 nova_compute[117331]: 2025-10-09 16:20:10.795 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 09 16:20:10 compute-0 nova_compute[117331]: 2025-10-09 16:20:10.915 2 INFO nova.virt.libvirt.driver [None req-bb8a87ba-b9a2-475b-80ac-9cb3a5c32f28 005258eefa5345409efe13f6a1de22ab c076c42271004e7aa86e84416e33f826 - - default default] [instance: 2bff6d2b-fd02-44fa-89ec-e5699854e098] Sending announce-self command to QEMU monitor. Attempt 2 of 3
Oct 09 16:20:11 compute-0 nova_compute[117331]: 2025-10-09 16:20:11.923 2 INFO nova.virt.libvirt.driver [None req-bb8a87ba-b9a2-475b-80ac-9cb3a5c32f28 005258eefa5345409efe13f6a1de22ab c076c42271004e7aa86e84416e33f826 - - default default] [instance: 2bff6d2b-fd02-44fa-89ec-e5699854e098] Sending announce-self command to QEMU monitor. Attempt 3 of 3
Oct 09 16:20:11 compute-0 nova_compute[117331]: 2025-10-09 16:20:11.929 2 DEBUG nova.compute.manager [None req-bb8a87ba-b9a2-475b-80ac-9cb3a5c32f28 005258eefa5345409efe13f6a1de22ab c076c42271004e7aa86e84416e33f826 - - default default] [instance: 2bff6d2b-fd02-44fa-89ec-e5699854e098] Checking state _get_power_state /usr/lib/python3.12/site-packages/nova/compute/manager.py:1825
Oct 09 16:20:12 compute-0 nova_compute[117331]: 2025-10-09 16:20:12.307 2 DEBUG oslo_service.periodic_task [None req-92a13bed-c081-4c7b-87f3-bafebefb79f2 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.12/site-packages/oslo_service/periodic_task.py:210
Oct 09 16:20:12 compute-0 nova_compute[117331]: 2025-10-09 16:20:12.445 2 DEBUG nova.objects.instance [None req-bb8a87ba-b9a2-475b-80ac-9cb3a5c32f28 005258eefa5345409efe13f6a1de22ab c076c42271004e7aa86e84416e33f826 - - default default] [instance: 2bff6d2b-fd02-44fa-89ec-e5699854e098] Trying to apply a migration context that does not seem to be set for this instance apply_migration_context /usr/lib/python3.12/site-packages/nova/objects/instance.py:1067
Oct 09 16:20:13 compute-0 nova_compute[117331]: 2025-10-09 16:20:13.467 2 WARNING neutronclient.v2_0.client [None req-bb8a87ba-b9a2-475b-80ac-9cb3a5c32f28 005258eefa5345409efe13f6a1de22ab c076c42271004e7aa86e84416e33f826 - - default default] The python binding code in neutronclient is deprecated in favor of OpenstackSDK, please use that as this will be removed in a future release.
Oct 09 16:20:13 compute-0 nova_compute[117331]: 2025-10-09 16:20:13.655 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 09 16:20:13 compute-0 nova_compute[117331]: 2025-10-09 16:20:13.816 2 WARNING neutronclient.v2_0.client [None req-bb8a87ba-b9a2-475b-80ac-9cb3a5c32f28 005258eefa5345409efe13f6a1de22ab c076c42271004e7aa86e84416e33f826 - - default default] The python binding code in neutronclient is deprecated in favor of OpenstackSDK, please use that as this will be removed in a future release.
Oct 09 16:20:13 compute-0 nova_compute[117331]: 2025-10-09 16:20:13.817 2 WARNING neutronclient.v2_0.client [None req-bb8a87ba-b9a2-475b-80ac-9cb3a5c32f28 005258eefa5345409efe13f6a1de22ab c076c42271004e7aa86e84416e33f826 - - default default] The python binding code in neutronclient is deprecated in favor of OpenstackSDK, please use that as this will be removed in a future release.
Oct 09 16:20:14 compute-0 nova_compute[117331]: 2025-10-09 16:20:14.306 2 DEBUG oslo_service.periodic_task [None req-92a13bed-c081-4c7b-87f3-bafebefb79f2 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.12/site-packages/oslo_service/periodic_task.py:210
Oct 09 16:20:14 compute-0 nova_compute[117331]: 2025-10-09 16:20:14.307 2 DEBUG oslo_service.periodic_task [None req-92a13bed-c081-4c7b-87f3-bafebefb79f2 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.12/site-packages/oslo_service/periodic_task.py:210
Oct 09 16:20:14 compute-0 nova_compute[117331]: 2025-10-09 16:20:14.307 2 DEBUG nova.compute.manager [None req-92a13bed-c081-4c7b-87f3-bafebefb79f2 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.12/site-packages/nova/compute/manager.py:11228
Oct 09 16:20:15 compute-0 nova_compute[117331]: 2025-10-09 16:20:15.311 2 DEBUG oslo_service.periodic_task [None req-92a13bed-c081-4c7b-87f3-bafebefb79f2 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.12/site-packages/oslo_service/periodic_task.py:210
Oct 09 16:20:15 compute-0 nova_compute[117331]: 2025-10-09 16:20:15.799 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 09 16:20:15 compute-0 nova_compute[117331]: 2025-10-09 16:20:15.822 2 DEBUG oslo_concurrency.lockutils [None req-92a13bed-c081-4c7b-87f3-bafebefb79f2 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:405
Oct 09 16:20:15 compute-0 nova_compute[117331]: 2025-10-09 16:20:15.822 2 DEBUG oslo_concurrency.lockutils [None req-92a13bed-c081-4c7b-87f3-bafebefb79f2 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:410
Oct 09 16:20:15 compute-0 nova_compute[117331]: 2025-10-09 16:20:15.822 2 DEBUG oslo_concurrency.lockutils [None req-92a13bed-c081-4c7b-87f3-bafebefb79f2 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:424
Oct 09 16:20:15 compute-0 nova_compute[117331]: 2025-10-09 16:20:15.823 2 DEBUG nova.compute.resource_tracker [None req-92a13bed-c081-4c7b-87f3-bafebefb79f2 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.12/site-packages/nova/compute/resource_tracker.py:937
Oct 09 16:20:15 compute-0 podman[145063]: 2025-10-09 16:20:15.877722954 +0000 UTC m=+0.100100988 container health_status 64fa251112d01abb34ec1bb8e22019815ddcaaaa391ce5ec91517147ac5a9994 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, release=1755695350, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, maintainer=Red Hat, Inc., managed_by=edpm_ansible, com.redhat.component=ubi9-minimal-container, container_name=openstack_network_exporter, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, vendor=Red Hat, Inc., description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.tags=minimal rhel9, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, name=ubi9-minimal, 
url=https://catalog.redhat.com/en/search?searchType=containers, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, build-date=2025-08-20T13:12:41, version=9.6, io.buildah.version=1.33.7, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., distribution-scope=public, config_id=edpm, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., architecture=x86_64, io.openshift.expose-services=)
Oct 09 16:20:16 compute-0 nova_compute[117331]: 2025-10-09 16:20:16.874 2 DEBUG oslo_concurrency.processutils [None req-92a13bed-c081-4c7b-87f3-bafebefb79f2 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/cd624073-6d91-4e27-9050-656e729e250e/disk --force-share --output=json execute /usr/lib/python3.12/site-packages/oslo_concurrency/processutils.py:349
Oct 09 16:20:16 compute-0 nova_compute[117331]: 2025-10-09 16:20:16.967 2 DEBUG oslo_concurrency.processutils [None req-92a13bed-c081-4c7b-87f3-bafebefb79f2 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/cd624073-6d91-4e27-9050-656e729e250e/disk --force-share --output=json" returned: 0 in 0.092s execute /usr/lib/python3.12/site-packages/oslo_concurrency/processutils.py:372
Oct 09 16:20:16 compute-0 nova_compute[117331]: 2025-10-09 16:20:16.968 2 DEBUG oslo_concurrency.processutils [None req-92a13bed-c081-4c7b-87f3-bafebefb79f2 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/cd624073-6d91-4e27-9050-656e729e250e/disk --force-share --output=json execute /usr/lib/python3.12/site-packages/oslo_concurrency/processutils.py:349
Oct 09 16:20:17 compute-0 nova_compute[117331]: 2025-10-09 16:20:17.031 2 DEBUG oslo_concurrency.processutils [None req-92a13bed-c081-4c7b-87f3-bafebefb79f2 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/cd624073-6d91-4e27-9050-656e729e250e/disk --force-share --output=json" returned: 0 in 0.063s execute /usr/lib/python3.12/site-packages/oslo_concurrency/processutils.py:372
Oct 09 16:20:17 compute-0 nova_compute[117331]: 2025-10-09 16:20:17.039 2 DEBUG oslo_concurrency.processutils [None req-92a13bed-c081-4c7b-87f3-bafebefb79f2 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/2bff6d2b-fd02-44fa-89ec-e5699854e098/disk --force-share --output=json execute /usr/lib/python3.12/site-packages/oslo_concurrency/processutils.py:349
Oct 09 16:20:17 compute-0 nova_compute[117331]: 2025-10-09 16:20:17.099 2 DEBUG oslo_concurrency.processutils [None req-92a13bed-c081-4c7b-87f3-bafebefb79f2 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/2bff6d2b-fd02-44fa-89ec-e5699854e098/disk --force-share --output=json" returned: 0 in 0.060s execute /usr/lib/python3.12/site-packages/oslo_concurrency/processutils.py:372
Oct 09 16:20:17 compute-0 nova_compute[117331]: 2025-10-09 16:20:17.100 2 DEBUG oslo_concurrency.processutils [None req-92a13bed-c081-4c7b-87f3-bafebefb79f2 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/2bff6d2b-fd02-44fa-89ec-e5699854e098/disk --force-share --output=json execute /usr/lib/python3.12/site-packages/oslo_concurrency/processutils.py:349
Oct 09 16:20:17 compute-0 nova_compute[117331]: 2025-10-09 16:20:17.150 2 DEBUG oslo_concurrency.processutils [None req-92a13bed-c081-4c7b-87f3-bafebefb79f2 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/2bff6d2b-fd02-44fa-89ec-e5699854e098/disk --force-share --output=json" returned: 0 in 0.049s execute /usr/lib/python3.12/site-packages/oslo_concurrency/processutils.py:372
Oct 09 16:20:17 compute-0 nova_compute[117331]: 2025-10-09 16:20:17.174 2 DEBUG oslo_concurrency.lockutils [None req-35de19e3-8c3c-41d3-92ef-422fce0e1b1c 2c30c20b7c364f81b40bcec56afc8ae3 8a3f79d5a5f2475d93599ef409043893 - - default default] Acquiring lock "2bff6d2b-fd02-44fa-89ec-e5699854e098" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:405
Oct 09 16:20:17 compute-0 nova_compute[117331]: 2025-10-09 16:20:17.175 2 DEBUG oslo_concurrency.lockutils [None req-35de19e3-8c3c-41d3-92ef-422fce0e1b1c 2c30c20b7c364f81b40bcec56afc8ae3 8a3f79d5a5f2475d93599ef409043893 - - default default] Lock "2bff6d2b-fd02-44fa-89ec-e5699854e098" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:410
Oct 09 16:20:17 compute-0 nova_compute[117331]: 2025-10-09 16:20:17.176 2 DEBUG oslo_concurrency.lockutils [None req-35de19e3-8c3c-41d3-92ef-422fce0e1b1c 2c30c20b7c364f81b40bcec56afc8ae3 8a3f79d5a5f2475d93599ef409043893 - - default default] Acquiring lock "2bff6d2b-fd02-44fa-89ec-e5699854e098-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:405
Oct 09 16:20:17 compute-0 nova_compute[117331]: 2025-10-09 16:20:17.176 2 DEBUG oslo_concurrency.lockutils [None req-35de19e3-8c3c-41d3-92ef-422fce0e1b1c 2c30c20b7c364f81b40bcec56afc8ae3 8a3f79d5a5f2475d93599ef409043893 - - default default] Lock "2bff6d2b-fd02-44fa-89ec-e5699854e098-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:410
Oct 09 16:20:17 compute-0 nova_compute[117331]: 2025-10-09 16:20:17.176 2 DEBUG oslo_concurrency.lockutils [None req-35de19e3-8c3c-41d3-92ef-422fce0e1b1c 2c30c20b7c364f81b40bcec56afc8ae3 8a3f79d5a5f2475d93599ef409043893 - - default default] Lock "2bff6d2b-fd02-44fa-89ec-e5699854e098-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:424
Oct 09 16:20:17 compute-0 nova_compute[117331]: 2025-10-09 16:20:17.195 2 INFO nova.compute.manager [None req-35de19e3-8c3c-41d3-92ef-422fce0e1b1c 2c30c20b7c364f81b40bcec56afc8ae3 8a3f79d5a5f2475d93599ef409043893 - - default default] [instance: 2bff6d2b-fd02-44fa-89ec-e5699854e098] Terminating instance
Oct 09 16:20:17 compute-0 nova_compute[117331]: 2025-10-09 16:20:17.333 2 WARNING nova.virt.libvirt.driver [None req-92a13bed-c081-4c7b-87f3-bafebefb79f2 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Oct 09 16:20:17 compute-0 nova_compute[117331]: 2025-10-09 16:20:17.335 2 DEBUG oslo_concurrency.processutils [None req-92a13bed-c081-4c7b-87f3-bafebefb79f2 - - - - - -] Running cmd (subprocess): env LANG=C uptime execute /usr/lib/python3.12/site-packages/oslo_concurrency/processutils.py:349
Oct 09 16:20:17 compute-0 nova_compute[117331]: 2025-10-09 16:20:17.367 2 DEBUG oslo_concurrency.processutils [None req-92a13bed-c081-4c7b-87f3-bafebefb79f2 - - - - - -] CMD "env LANG=C uptime" returned: 0 in 0.032s execute /usr/lib/python3.12/site-packages/oslo_concurrency/processutils.py:372
Oct 09 16:20:17 compute-0 nova_compute[117331]: 2025-10-09 16:20:17.368 2 DEBUG nova.compute.resource_tracker [None req-92a13bed-c081-4c7b-87f3-bafebefb79f2 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=5854MB free_disk=73.20440673828125GB free_vcpus=6 pci_devices=[{"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": 
"label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.12/site-packages/nova/compute/resource_tracker.py:1136
Oct 09 16:20:17 compute-0 nova_compute[117331]: 2025-10-09 16:20:17.368 2 DEBUG oslo_concurrency.lockutils [None req-92a13bed-c081-4c7b-87f3-bafebefb79f2 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:405
Oct 09 16:20:17 compute-0 nova_compute[117331]: 2025-10-09 16:20:17.369 2 DEBUG oslo_concurrency.lockutils [None req-92a13bed-c081-4c7b-87f3-bafebefb79f2 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:410
Oct 09 16:20:17 compute-0 nova_compute[117331]: 2025-10-09 16:20:17.712 2 DEBUG nova.compute.manager [None req-35de19e3-8c3c-41d3-92ef-422fce0e1b1c 2c30c20b7c364f81b40bcec56afc8ae3 8a3f79d5a5f2475d93599ef409043893 - - default default] [instance: 2bff6d2b-fd02-44fa-89ec-e5699854e098] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.12/site-packages/nova/compute/manager.py:3197
Oct 09 16:20:17 compute-0 kernel: tap88f83656-b3 (unregistering): left promiscuous mode
Oct 09 16:20:17 compute-0 NetworkManager[1028]: <info>  [1760026817.7400] device (tap88f83656-b3): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Oct 09 16:20:17 compute-0 ovn_controller[19752]: 2025-10-09T16:20:17Z|00114|binding|INFO|Releasing lport 88f83656-b34d-493c-8e34-263dae66e382 from this chassis (sb_readonly=0)
Oct 09 16:20:17 compute-0 ovn_controller[19752]: 2025-10-09T16:20:17Z|00115|binding|INFO|Setting lport 88f83656-b34d-493c-8e34-263dae66e382 down in Southbound
Oct 09 16:20:17 compute-0 nova_compute[117331]: 2025-10-09 16:20:17.755 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 09 16:20:17 compute-0 ovn_controller[19752]: 2025-10-09T16:20:17Z|00116|binding|INFO|Removing iface tap88f83656-b3 ovn-installed in OVS
Oct 09 16:20:17 compute-0 ovn_metadata_agent[28608]: 2025-10-09 16:20:17.766 28613 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:ba:30:d2 10.100.0.6'], port_security=['fa:16:3e:ba:30:d2 10.100.0.6'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.6/28', 'neutron:device_id': '2bff6d2b-fd02-44fa-89ec-e5699854e098', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-7a8574b4-b4f7-483c-8050-7281b5ac5624', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '8a3f79d5a5f2475d93599ef409043893', 'neutron:revision_number': '14', 'neutron:security_group_ids': '3635f8cb-8906-4fc5-a724-24c5054b393c', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=ba3ccdc6-51b8-4206-add5-95d4a6a3eef3, chassis=[], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f37b2576840>], logical_port=88f83656-b34d-493c-8e34-263dae66e382) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f37b2576840>]) matches /usr/lib/python3.12/site-packages/ovsdbapp/backend/ovs_idl/event.py:55
Oct 09 16:20:17 compute-0 ovn_metadata_agent[28608]: 2025-10-09 16:20:17.767 28613 INFO neutron.agent.ovn.metadata.agent [-] Port 88f83656-b34d-493c-8e34-263dae66e382 in datapath 7a8574b4-b4f7-483c-8050-7281b5ac5624 unbound from our chassis
Oct 09 16:20:17 compute-0 ovn_metadata_agent[28608]: 2025-10-09 16:20:17.770 28613 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 7a8574b4-b4f7-483c-8050-7281b5ac5624
Oct 09 16:20:17 compute-0 nova_compute[117331]: 2025-10-09 16:20:17.784 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 09 16:20:17 compute-0 ovn_metadata_agent[28608]: 2025-10-09 16:20:17.789 139687 DEBUG oslo.privsep.daemon [-] privsep: reply[e91e27b5-61aa-4282-afbd-a9d9281dd7fc]: (4, True) _call_back /usr/lib/python3.12/site-packages/oslo_privsep/daemon.py:515
Oct 09 16:20:17 compute-0 ovn_metadata_agent[28608]: 2025-10-09 16:20:17.820 141048 DEBUG oslo.privsep.daemon [-] privsep: reply[7b4f6b79-5f8f-40c7-a494-b6d257408926]: (4, None) _call_back /usr/lib/python3.12/site-packages/oslo_privsep/daemon.py:515
Oct 09 16:20:17 compute-0 ovn_metadata_agent[28608]: 2025-10-09 16:20:17.823 141048 DEBUG oslo.privsep.daemon [-] privsep: reply[007a1be3-2d33-47da-85bb-9ac8a52e8528]: (4, None) _call_back /usr/lib/python3.12/site-packages/oslo_privsep/daemon.py:515
Oct 09 16:20:17 compute-0 systemd[1]: machine-qemu\x2d7\x2dinstance\x2d0000000b.scope: Deactivated successfully.
Oct 09 16:20:17 compute-0 systemd[1]: machine-qemu\x2d7\x2dinstance\x2d0000000b.scope: Consumed 2.206s CPU time.
Oct 09 16:20:17 compute-0 systemd-machined[77487]: Machine qemu-7-instance-0000000b terminated.
Oct 09 16:20:17 compute-0 ovn_metadata_agent[28608]: 2025-10-09 16:20:17.850 141048 DEBUG oslo.privsep.daemon [-] privsep: reply[8c85a551-0f7b-4112-b599-d9a6c7660091]: (4, None) _call_back /usr/lib/python3.12/site-packages/oslo_privsep/daemon.py:515
Oct 09 16:20:17 compute-0 ovn_metadata_agent[28608]: 2025-10-09 16:20:17.868 139687 DEBUG oslo.privsep.daemon [-] privsep: reply[6a2d8a95-b9e7-4e7b-97f4-71e7e6a81fda]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap7a8574b4-b1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:8e:a1:d1'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 30, 'tx_packets': 7, 'rx_bytes': 1756, 'tx_bytes': 438, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 30, 'tx_packets': 7, 'rx_bytes': 1756, 'tx_bytes': 438, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 
0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 32], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 168185, 'reachable_time': 38344, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 8, 'inoctets': 720, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 8, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 720, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 8, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 145110, 'error': None, 'target': 'ovnmeta-7a8574b4-b4f7-483c-8050-7281b5ac5624', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.12/site-packages/oslo_privsep/daemon.py:515
Oct 09 16:20:17 compute-0 ovn_metadata_agent[28608]: 2025-10-09 16:20:17.883 139687 DEBUG oslo.privsep.daemon [-] privsep: reply[622befe7-224c-4163-80c8-761a566b1bad]: (4, ({'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tap7a8574b4-b1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 168199, 'tstamp': 168199}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 145111, 'error': None, 'target': 'ovnmeta-7a8574b4-b4f7-483c-8050-7281b5ac5624', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tap7a8574b4-b1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 168203, 'tstamp': 168203}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 145111, 'error': None, 'target': 'ovnmeta-7a8574b4-b4f7-483c-8050-7281b5ac5624', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.12/site-packages/oslo_privsep/daemon.py:515
Oct 09 16:20:17 compute-0 ovn_metadata_agent[28608]: 2025-10-09 16:20:17.885 28613 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap7a8574b4-b0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.12/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 09 16:20:17 compute-0 nova_compute[117331]: 2025-10-09 16:20:17.887 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 09 16:20:17 compute-0 nova_compute[117331]: 2025-10-09 16:20:17.892 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 09 16:20:17 compute-0 ovn_metadata_agent[28608]: 2025-10-09 16:20:17.892 28613 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap7a8574b4-b0, may_exist=True, interface_attrs={}) do_commit /usr/lib/python3.12/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 09 16:20:17 compute-0 ovn_metadata_agent[28608]: 2025-10-09 16:20:17.893 28613 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.12/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 09 16:20:17 compute-0 ovn_metadata_agent[28608]: 2025-10-09 16:20:17.893 28613 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap7a8574b4-b0, col_values=(('external_ids', {'iface-id': 'd5e2e4c8-4a14-4222-b045-a4cb3081bc1d'}),), if_exists=True) do_commit /usr/lib/python3.12/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 09 16:20:17 compute-0 ovn_metadata_agent[28608]: 2025-10-09 16:20:17.893 28613 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.12/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 09 16:20:17 compute-0 ovn_metadata_agent[28608]: 2025-10-09 16:20:17.894 139687 DEBUG oslo.privsep.daemon [-] privsep: reply[1259b930-5155-4b66-99bc-1ca673dfbdb0]: (4, '\nglobal\n    log         /dev/log local0 debug\n    log-tag     haproxy-metadata-proxy-7a8574b4-b4f7-483c-8050-7281b5ac5624\n    user        root\n    group       root\n    maxconn     1024\n    pidfile     /var/lib/neutron/external/pids/7a8574b4-b4f7-483c-8050-7281b5ac5624.pid.haproxy\n    daemon\n\ndefaults\n    log global\n    mode http\n    option httplog\n    option dontlognull\n    option http-server-close\n    option forwardfor\n    retries                 3\n    timeout http-request    30s\n    timeout connect         30s\n    timeout client          32s\n    timeout server          32s\n    timeout http-keep-alive 30s\n\nlisten listener\n    bind 169.254.169.254:80\n    \n    server metadata /var/lib/neutron/metadata_proxy\n\n    http-request add-header X-OVN-Network-ID 7a8574b4-b4f7-483c-8050-7281b5ac5624\n') _call_back /usr/lib/python3.12/site-packages/oslo_privsep/daemon.py:515
Oct 09 16:20:17 compute-0 nova_compute[117331]: 2025-10-09 16:20:17.973 2 INFO nova.virt.libvirt.driver [-] [instance: 2bff6d2b-fd02-44fa-89ec-e5699854e098] Instance destroyed successfully.
Oct 09 16:20:17 compute-0 nova_compute[117331]: 2025-10-09 16:20:17.973 2 DEBUG nova.objects.instance [None req-35de19e3-8c3c-41d3-92ef-422fce0e1b1c 2c30c20b7c364f81b40bcec56afc8ae3 8a3f79d5a5f2475d93599ef409043893 - - default default] Lazy-loading 'resources' on Instance uuid 2bff6d2b-fd02-44fa-89ec-e5699854e098 obj_load_attr /usr/lib/python3.12/site-packages/nova/objects/instance.py:1141
Oct 09 16:20:18 compute-0 podman[145130]: 2025-10-09 16:20:18.095229324 +0000 UTC m=+0.088119704 container health_status d615e4f24ff62fbb14be4a63e6da568ec5b22309773c1c66103b6b4de64a8a2f (image=38.102.83.66:5001/podified-master-centos10/openstack-ovn-controller:watcher_latest, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=watcher_latest, io.buildah.version=1.41.4, org.label-schema.build-date=20251007, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': '38.102.83.66:5001/podified-master-centos10/openstack-ovn-controller:watcher_latest', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, config_id=ovn_controller, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Oct 09 16:20:18 compute-0 nova_compute[117331]: 2025-10-09 16:20:18.389 2 DEBUG nova.compute.resource_tracker [None req-92a13bed-c081-4c7b-87f3-bafebefb79f2 - - - - - -] Applying migration context for instance 2bff6d2b-fd02-44fa-89ec-e5699854e098 as it has an incoming, in-progress migration 3ea569d5-9ebf-45df-87ad-08b48f4c30a8. Migration status is running _update_available_resource /usr/lib/python3.12/site-packages/nova/compute/resource_tracker.py:1046
Oct 09 16:20:18 compute-0 nova_compute[117331]: 2025-10-09 16:20:18.389 2 DEBUG nova.objects.instance [None req-92a13bed-c081-4c7b-87f3-bafebefb79f2 - - - - - -] [instance: 2bff6d2b-fd02-44fa-89ec-e5699854e098] Trying to apply a migration context that does not seem to be set for this instance apply_migration_context /usr/lib/python3.12/site-packages/nova/objects/instance.py:1067
Oct 09 16:20:18 compute-0 nova_compute[117331]: 2025-10-09 16:20:18.481 2 DEBUG nova.virt.libvirt.vif [None req-35de19e3-8c3c-41d3-92ef-422fce0e1b1c 2c30c20b7c364f81b40bcec56afc8ae3 8a3f79d5a5f2475d93599ef409043893 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=True,compute_id=1,config_drive='True',created_at=2025-10-09T16:19:04Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description=None,display_name='tempest-TestExecuteBasicStrategy-server-546735381',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testexecutebasicstrategy-server-546735381',id=11,image_ref='b7d6e0af-25e4-4227-9dc6-43143898ceee',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-10-09T16:19:20Z,launched_on='compute-1.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='8a3f79d5a5f2475d93599ef409043893',ramdisk_id='',reservation_id='r-0kaf3jgd',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,manager,admin,member',clean_attempts='1',image_base_image_ref='b7d6e0af-25e4-4227-9dc6-43143898ceee',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_
disk='1',image_min_ram='0',owner_project_name='tempest-TestExecuteBasicStrategy-20506088',owner_user_name='tempest-TestExecuteBasicStrategy-20506088-project-admin'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-10-09T16:20:12Z,user_data=None,user_id='2c30c20b7c364f81b40bcec56afc8ae3',uuid=2bff6d2b-fd02-44fa-89ec-e5699854e098,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "88f83656-b34d-493c-8e34-263dae66e382", "address": "fa:16:3e:ba:30:d2", "network": {"id": "7a8574b4-b4f7-483c-8050-7281b5ac5624", "bridge": "br-int", "label": "tempest-TestExecuteBasicStrategy-1906595087-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f4e80e48be374e249fbd628564fc7b82", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap88f83656-b3", "ovs_interfaceid": "88f83656-b34d-493c-8e34-263dae66e382", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {"os_vif_delegation": true}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.12/site-packages/nova/virt/libvirt/vif.py:839
Oct 09 16:20:18 compute-0 nova_compute[117331]: 2025-10-09 16:20:18.481 2 DEBUG nova.network.os_vif_util [None req-35de19e3-8c3c-41d3-92ef-422fce0e1b1c 2c30c20b7c364f81b40bcec56afc8ae3 8a3f79d5a5f2475d93599ef409043893 - - default default] Converting VIF {"id": "88f83656-b34d-493c-8e34-263dae66e382", "address": "fa:16:3e:ba:30:d2", "network": {"id": "7a8574b4-b4f7-483c-8050-7281b5ac5624", "bridge": "br-int", "label": "tempest-TestExecuteBasicStrategy-1906595087-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f4e80e48be374e249fbd628564fc7b82", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap88f83656-b3", "ovs_interfaceid": "88f83656-b34d-493c-8e34-263dae66e382", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {"os_vif_delegation": true}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.12/site-packages/nova/network/os_vif_util.py:511
Oct 09 16:20:18 compute-0 nova_compute[117331]: 2025-10-09 16:20:18.482 2 DEBUG nova.network.os_vif_util [None req-35de19e3-8c3c-41d3-92ef-422fce0e1b1c 2c30c20b7c364f81b40bcec56afc8ae3 8a3f79d5a5f2475d93599ef409043893 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:ba:30:d2,bridge_name='br-int',has_traffic_filtering=True,id=88f83656-b34d-493c-8e34-263dae66e382,network=Network(7a8574b4-b4f7-483c-8050-7281b5ac5624),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap88f83656-b3') nova_to_osvif_vif /usr/lib/python3.12/site-packages/nova/network/os_vif_util.py:548
Oct 09 16:20:18 compute-0 nova_compute[117331]: 2025-10-09 16:20:18.482 2 DEBUG os_vif [None req-35de19e3-8c3c-41d3-92ef-422fce0e1b1c 2c30c20b7c364f81b40bcec56afc8ae3 8a3f79d5a5f2475d93599ef409043893 - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:ba:30:d2,bridge_name='br-int',has_traffic_filtering=True,id=88f83656-b34d-493c-8e34-263dae66e382,network=Network(7a8574b4-b4f7-483c-8050-7281b5ac5624),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap88f83656-b3') unplug /usr/lib/python3.12/site-packages/os_vif/__init__.py:109
Oct 09 16:20:18 compute-0 nova_compute[117331]: 2025-10-09 16:20:18.483 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 22 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 09 16:20:18 compute-0 nova_compute[117331]: 2025-10-09 16:20:18.484 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap88f83656-b3, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.12/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 09 16:20:18 compute-0 nova_compute[117331]: 2025-10-09 16:20:18.485 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 09 16:20:18 compute-0 nova_compute[117331]: 2025-10-09 16:20:18.487 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:248
Oct 09 16:20:18 compute-0 nova_compute[117331]: 2025-10-09 16:20:18.487 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 22 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 09 16:20:18 compute-0 nova_compute[117331]: 2025-10-09 16:20:18.487 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbDestroyCommand(_result=None, table=QoS, record=0a513a68-f203-4379-906d-566bcc8a660e) do_commit /usr/lib/python3.12/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 09 16:20:18 compute-0 nova_compute[117331]: 2025-10-09 16:20:18.488 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 09 16:20:18 compute-0 nova_compute[117331]: 2025-10-09 16:20:18.489 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 09 16:20:18 compute-0 nova_compute[117331]: 2025-10-09 16:20:18.491 2 INFO os_vif [None req-35de19e3-8c3c-41d3-92ef-422fce0e1b1c 2c30c20b7c364f81b40bcec56afc8ae3 8a3f79d5a5f2475d93599ef409043893 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:ba:30:d2,bridge_name='br-int',has_traffic_filtering=True,id=88f83656-b34d-493c-8e34-263dae66e382,network=Network(7a8574b4-b4f7-483c-8050-7281b5ac5624),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap88f83656-b3')
Oct 09 16:20:18 compute-0 nova_compute[117331]: 2025-10-09 16:20:18.492 2 INFO nova.virt.libvirt.driver [None req-35de19e3-8c3c-41d3-92ef-422fce0e1b1c 2c30c20b7c364f81b40bcec56afc8ae3 8a3f79d5a5f2475d93599ef409043893 - - default default] [instance: 2bff6d2b-fd02-44fa-89ec-e5699854e098] Deleting instance files /var/lib/nova/instances/2bff6d2b-fd02-44fa-89ec-e5699854e098_del
Oct 09 16:20:18 compute-0 nova_compute[117331]: 2025-10-09 16:20:18.492 2 INFO nova.virt.libvirt.driver [None req-35de19e3-8c3c-41d3-92ef-422fce0e1b1c 2c30c20b7c364f81b40bcec56afc8ae3 8a3f79d5a5f2475d93599ef409043893 - - default default] [instance: 2bff6d2b-fd02-44fa-89ec-e5699854e098] Deletion of /var/lib/nova/instances/2bff6d2b-fd02-44fa-89ec-e5699854e098_del complete
Oct 09 16:20:18 compute-0 nova_compute[117331]: 2025-10-09 16:20:18.887 2 DEBUG nova.compute.manager [req-331e0f06-b9ca-46c1-b5a5-e64dfbe08046 req-6363586d-6334-454c-a2dc-ca3b7a2af261 ee15961ad2be4bada6c106ac4f69f422 c076c42271004e7aa86e84416e33f826 - - default default] [instance: 2bff6d2b-fd02-44fa-89ec-e5699854e098] Received event network-vif-unplugged-88f83656-b34d-493c-8e34-263dae66e382 external_instance_event /usr/lib/python3.12/site-packages/nova/compute/manager.py:11812
Oct 09 16:20:18 compute-0 nova_compute[117331]: 2025-10-09 16:20:18.888 2 DEBUG oslo_concurrency.lockutils [req-331e0f06-b9ca-46c1-b5a5-e64dfbe08046 req-6363586d-6334-454c-a2dc-ca3b7a2af261 ee15961ad2be4bada6c106ac4f69f422 c076c42271004e7aa86e84416e33f826 - - default default] Acquiring lock "2bff6d2b-fd02-44fa-89ec-e5699854e098-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:405
Oct 09 16:20:18 compute-0 nova_compute[117331]: 2025-10-09 16:20:18.888 2 DEBUG oslo_concurrency.lockutils [req-331e0f06-b9ca-46c1-b5a5-e64dfbe08046 req-6363586d-6334-454c-a2dc-ca3b7a2af261 ee15961ad2be4bada6c106ac4f69f422 c076c42271004e7aa86e84416e33f826 - - default default] Lock "2bff6d2b-fd02-44fa-89ec-e5699854e098-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:410
Oct 09 16:20:18 compute-0 nova_compute[117331]: 2025-10-09 16:20:18.888 2 DEBUG oslo_concurrency.lockutils [req-331e0f06-b9ca-46c1-b5a5-e64dfbe08046 req-6363586d-6334-454c-a2dc-ca3b7a2af261 ee15961ad2be4bada6c106ac4f69f422 c076c42271004e7aa86e84416e33f826 - - default default] Lock "2bff6d2b-fd02-44fa-89ec-e5699854e098-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:424
Oct 09 16:20:18 compute-0 nova_compute[117331]: 2025-10-09 16:20:18.888 2 DEBUG nova.compute.manager [req-331e0f06-b9ca-46c1-b5a5-e64dfbe08046 req-6363586d-6334-454c-a2dc-ca3b7a2af261 ee15961ad2be4bada6c106ac4f69f422 c076c42271004e7aa86e84416e33f826 - - default default] [instance: 2bff6d2b-fd02-44fa-89ec-e5699854e098] No waiting events found dispatching network-vif-unplugged-88f83656-b34d-493c-8e34-263dae66e382 pop_instance_event /usr/lib/python3.12/site-packages/nova/compute/manager.py:344
Oct 09 16:20:18 compute-0 nova_compute[117331]: 2025-10-09 16:20:18.889 2 DEBUG nova.compute.manager [req-331e0f06-b9ca-46c1-b5a5-e64dfbe08046 req-6363586d-6334-454c-a2dc-ca3b7a2af261 ee15961ad2be4bada6c106ac4f69f422 c076c42271004e7aa86e84416e33f826 - - default default] [instance: 2bff6d2b-fd02-44fa-89ec-e5699854e098] Received event network-vif-unplugged-88f83656-b34d-493c-8e34-263dae66e382 for instance with task_state deleting. _process_instance_event /usr/lib/python3.12/site-packages/nova/compute/manager.py:11590
Oct 09 16:20:18 compute-0 nova_compute[117331]: 2025-10-09 16:20:18.897 2 DEBUG nova.compute.resource_tracker [None req-92a13bed-c081-4c7b-87f3-bafebefb79f2 - - - - - -] [instance: 2bff6d2b-fd02-44fa-89ec-e5699854e098] Skipping migration as instance is neither resizing nor live-migrating. _update_usage_from_migrations /usr/lib/python3.12/site-packages/nova/compute/resource_tracker.py:1596
Oct 09 16:20:18 compute-0 nova_compute[117331]: 2025-10-09 16:20:18.920 2 DEBUG nova.compute.resource_tracker [None req-92a13bed-c081-4c7b-87f3-bafebefb79f2 - - - - - -] Instance cd624073-6d91-4e27-9050-656e729e250e actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.12/site-packages/nova/compute/resource_tracker.py:1740
Oct 09 16:20:18 compute-0 nova_compute[117331]: 2025-10-09 16:20:18.921 2 DEBUG nova.compute.resource_tracker [None req-92a13bed-c081-4c7b-87f3-bafebefb79f2 - - - - - -] Instance 2bff6d2b-fd02-44fa-89ec-e5699854e098 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.12/site-packages/nova/compute/resource_tracker.py:1740
Oct 09 16:20:18 compute-0 nova_compute[117331]: 2025-10-09 16:20:18.921 2 DEBUG nova.compute.resource_tracker [None req-92a13bed-c081-4c7b-87f3-bafebefb79f2 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 2 _report_final_resource_view /usr/lib/python3.12/site-packages/nova/compute/resource_tracker.py:1159
Oct 09 16:20:18 compute-0 nova_compute[117331]: 2025-10-09 16:20:18.922 2 DEBUG nova.compute.resource_tracker [None req-92a13bed-c081-4c7b-87f3-bafebefb79f2 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=768MB phys_disk=79GB used_disk=2GB total_vcpus=8 used_vcpus=2 pci_stats=[] stats={'failed_builds': '0', 'uptime': ' 16:20:17 up 29 min,  0 user,  load average: 0.51, 0.31, 0.29\n', 'num_instances': '2', 'num_vm_active': '2', 'num_task_None': '1', 'num_os_type_None': '2', 'num_proj_8a3f79d5a5f2475d93599ef409043893': '2', 'io_workload': '0', 'num_task_deleting': '1'} _report_final_resource_view /usr/lib/python3.12/site-packages/nova/compute/resource_tracker.py:1168
Oct 09 16:20:18 compute-0 nova_compute[117331]: 2025-10-09 16:20:18.972 2 DEBUG nova.compute.provider_tree [None req-92a13bed-c081-4c7b-87f3-bafebefb79f2 - - - - - -] Inventory has not changed in ProviderTree for provider: 593051b8-2000-437f-a915-2616fc8b1671 update_inventory /usr/lib/python3.12/site-packages/nova/compute/provider_tree.py:180
Oct 09 16:20:19 compute-0 nova_compute[117331]: 2025-10-09 16:20:19.004 2 INFO nova.compute.manager [None req-35de19e3-8c3c-41d3-92ef-422fce0e1b1c 2c30c20b7c364f81b40bcec56afc8ae3 8a3f79d5a5f2475d93599ef409043893 - - default default] [instance: 2bff6d2b-fd02-44fa-89ec-e5699854e098] Took 1.29 seconds to destroy the instance on the hypervisor.
Oct 09 16:20:19 compute-0 nova_compute[117331]: 2025-10-09 16:20:19.005 2 DEBUG oslo.service.backend._eventlet.loopingcall [None req-35de19e3-8c3c-41d3-92ef-422fce0e1b1c 2c30c20b7c364f81b40bcec56afc8ae3 8a3f79d5a5f2475d93599ef409043893 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.12/site-packages/oslo_service/backend/_eventlet/loopingcall.py:437
Oct 09 16:20:19 compute-0 nova_compute[117331]: 2025-10-09 16:20:19.006 2 DEBUG nova.compute.manager [-] [instance: 2bff6d2b-fd02-44fa-89ec-e5699854e098] Deallocating network for instance _deallocate_network /usr/lib/python3.12/site-packages/nova/compute/manager.py:2324
Oct 09 16:20:19 compute-0 nova_compute[117331]: 2025-10-09 16:20:19.006 2 DEBUG nova.network.neutron [-] [instance: 2bff6d2b-fd02-44fa-89ec-e5699854e098] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.12/site-packages/nova/network/neutron.py:1863
Oct 09 16:20:19 compute-0 nova_compute[117331]: 2025-10-09 16:20:19.007 2 WARNING neutronclient.v2_0.client [-] The python binding code in neutronclient is deprecated in favor of OpenstackSDK, please use that as this will be removed in a future release.
Oct 09 16:20:19 compute-0 nova_compute[117331]: 2025-10-09 16:20:19.255 2 WARNING neutronclient.v2_0.client [-] The python binding code in neutronclient is deprecated in favor of OpenstackSDK, please use that as this will be removed in a future release.
Oct 09 16:20:19 compute-0 nova_compute[117331]: 2025-10-09 16:20:19.479 2 DEBUG nova.scheduler.client.report [None req-92a13bed-c081-4c7b-87f3-bafebefb79f2 - - - - - -] Inventory has not changed for provider 593051b8-2000-437f-a915-2616fc8b1671 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 79, 'reserved': 1, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.12/site-packages/nova/scheduler/client/report.py:958
Oct 09 16:20:19 compute-0 nova_compute[117331]: 2025-10-09 16:20:19.738 2 DEBUG nova.compute.manager [req-1cad1993-5c4d-49ef-b6cf-57e191d520f0 req-afaad421-abcb-4951-b589-ec69f55f4288 ee15961ad2be4bada6c106ac4f69f422 c076c42271004e7aa86e84416e33f826 - - default default] [instance: 2bff6d2b-fd02-44fa-89ec-e5699854e098] Received event network-vif-deleted-88f83656-b34d-493c-8e34-263dae66e382 external_instance_event /usr/lib/python3.12/site-packages/nova/compute/manager.py:11812
Oct 09 16:20:19 compute-0 nova_compute[117331]: 2025-10-09 16:20:19.738 2 INFO nova.compute.manager [req-1cad1993-5c4d-49ef-b6cf-57e191d520f0 req-afaad421-abcb-4951-b589-ec69f55f4288 ee15961ad2be4bada6c106ac4f69f422 c076c42271004e7aa86e84416e33f826 - - default default] [instance: 2bff6d2b-fd02-44fa-89ec-e5699854e098] Neutron deleted interface 88f83656-b34d-493c-8e34-263dae66e382; detaching it from the instance and deleting it from the info cache
Oct 09 16:20:19 compute-0 nova_compute[117331]: 2025-10-09 16:20:19.738 2 DEBUG nova.network.neutron [req-1cad1993-5c4d-49ef-b6cf-57e191d520f0 req-afaad421-abcb-4951-b589-ec69f55f4288 ee15961ad2be4bada6c106ac4f69f422 c076c42271004e7aa86e84416e33f826 - - default default] [instance: 2bff6d2b-fd02-44fa-89ec-e5699854e098] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.12/site-packages/nova/network/neutron.py:116
Oct 09 16:20:19 compute-0 nova_compute[117331]: 2025-10-09 16:20:19.989 2 DEBUG nova.compute.resource_tracker [None req-92a13bed-c081-4c7b-87f3-bafebefb79f2 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.12/site-packages/nova/compute/resource_tracker.py:1097
Oct 09 16:20:19 compute-0 nova_compute[117331]: 2025-10-09 16:20:19.990 2 DEBUG oslo_concurrency.lockutils [None req-92a13bed-c081-4c7b-87f3-bafebefb79f2 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 2.621s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:424
Oct 09 16:20:20 compute-0 nova_compute[117331]: 2025-10-09 16:20:20.181 2 DEBUG nova.network.neutron [-] [instance: 2bff6d2b-fd02-44fa-89ec-e5699854e098] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.12/site-packages/nova/network/neutron.py:116
Oct 09 16:20:20 compute-0 nova_compute[117331]: 2025-10-09 16:20:20.246 2 DEBUG nova.compute.manager [req-1cad1993-5c4d-49ef-b6cf-57e191d520f0 req-afaad421-abcb-4951-b589-ec69f55f4288 ee15961ad2be4bada6c106ac4f69f422 c076c42271004e7aa86e84416e33f826 - - default default] [instance: 2bff6d2b-fd02-44fa-89ec-e5699854e098] Detach interface failed, port_id=88f83656-b34d-493c-8e34-263dae66e382, reason: Instance 2bff6d2b-fd02-44fa-89ec-e5699854e098 could not be found. _process_instance_vif_deleted_event /usr/lib/python3.12/site-packages/nova/compute/manager.py:11646
Oct 09 16:20:20 compute-0 nova_compute[117331]: 2025-10-09 16:20:20.688 2 INFO nova.compute.manager [-] [instance: 2bff6d2b-fd02-44fa-89ec-e5699854e098] Took 1.68 seconds to deallocate network for instance.
Oct 09 16:20:20 compute-0 nova_compute[117331]: 2025-10-09 16:20:20.803 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 09 16:20:20 compute-0 nova_compute[117331]: 2025-10-09 16:20:20.945 2 DEBUG nova.compute.manager [req-add5621b-c6e2-4d34-b831-3a2c4678b135 req-fafd043e-d446-4190-9264-2cddd8352ad3 ee15961ad2be4bada6c106ac4f69f422 c076c42271004e7aa86e84416e33f826 - - default default] [instance: 2bff6d2b-fd02-44fa-89ec-e5699854e098] Received event network-vif-unplugged-88f83656-b34d-493c-8e34-263dae66e382 external_instance_event /usr/lib/python3.12/site-packages/nova/compute/manager.py:11812
Oct 09 16:20:20 compute-0 nova_compute[117331]: 2025-10-09 16:20:20.945 2 DEBUG oslo_concurrency.lockutils [req-add5621b-c6e2-4d34-b831-3a2c4678b135 req-fafd043e-d446-4190-9264-2cddd8352ad3 ee15961ad2be4bada6c106ac4f69f422 c076c42271004e7aa86e84416e33f826 - - default default] Acquiring lock "2bff6d2b-fd02-44fa-89ec-e5699854e098-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:405
Oct 09 16:20:20 compute-0 nova_compute[117331]: 2025-10-09 16:20:20.946 2 DEBUG oslo_concurrency.lockutils [req-add5621b-c6e2-4d34-b831-3a2c4678b135 req-fafd043e-d446-4190-9264-2cddd8352ad3 ee15961ad2be4bada6c106ac4f69f422 c076c42271004e7aa86e84416e33f826 - - default default] Lock "2bff6d2b-fd02-44fa-89ec-e5699854e098-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:410
Oct 09 16:20:20 compute-0 nova_compute[117331]: 2025-10-09 16:20:20.946 2 DEBUG oslo_concurrency.lockutils [req-add5621b-c6e2-4d34-b831-3a2c4678b135 req-fafd043e-d446-4190-9264-2cddd8352ad3 ee15961ad2be4bada6c106ac4f69f422 c076c42271004e7aa86e84416e33f826 - - default default] Lock "2bff6d2b-fd02-44fa-89ec-e5699854e098-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:424
Oct 09 16:20:20 compute-0 nova_compute[117331]: 2025-10-09 16:20:20.946 2 DEBUG nova.compute.manager [req-add5621b-c6e2-4d34-b831-3a2c4678b135 req-fafd043e-d446-4190-9264-2cddd8352ad3 ee15961ad2be4bada6c106ac4f69f422 c076c42271004e7aa86e84416e33f826 - - default default] [instance: 2bff6d2b-fd02-44fa-89ec-e5699854e098] No waiting events found dispatching network-vif-unplugged-88f83656-b34d-493c-8e34-263dae66e382 pop_instance_event /usr/lib/python3.12/site-packages/nova/compute/manager.py:344
Oct 09 16:20:20 compute-0 nova_compute[117331]: 2025-10-09 16:20:20.947 2 WARNING nova.compute.manager [req-add5621b-c6e2-4d34-b831-3a2c4678b135 req-fafd043e-d446-4190-9264-2cddd8352ad3 ee15961ad2be4bada6c106ac4f69f422 c076c42271004e7aa86e84416e33f826 - - default default] [instance: 2bff6d2b-fd02-44fa-89ec-e5699854e098] Received unexpected event network-vif-unplugged-88f83656-b34d-493c-8e34-263dae66e382 for instance with vm_state deleted and task_state None.
Oct 09 16:20:21 compute-0 nova_compute[117331]: 2025-10-09 16:20:21.208 2 DEBUG oslo_concurrency.lockutils [None req-35de19e3-8c3c-41d3-92ef-422fce0e1b1c 2c30c20b7c364f81b40bcec56afc8ae3 8a3f79d5a5f2475d93599ef409043893 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:405
Oct 09 16:20:21 compute-0 nova_compute[117331]: 2025-10-09 16:20:21.209 2 DEBUG oslo_concurrency.lockutils [None req-35de19e3-8c3c-41d3-92ef-422fce0e1b1c 2c30c20b7c364f81b40bcec56afc8ae3 8a3f79d5a5f2475d93599ef409043893 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:410
Oct 09 16:20:21 compute-0 nova_compute[117331]: 2025-10-09 16:20:21.254 2 DEBUG nova.compute.provider_tree [None req-35de19e3-8c3c-41d3-92ef-422fce0e1b1c 2c30c20b7c364f81b40bcec56afc8ae3 8a3f79d5a5f2475d93599ef409043893 - - default default] Inventory has not changed in ProviderTree for provider: 593051b8-2000-437f-a915-2616fc8b1671 update_inventory /usr/lib/python3.12/site-packages/nova/compute/provider_tree.py:180
Oct 09 16:20:21 compute-0 nova_compute[117331]: 2025-10-09 16:20:21.762 2 DEBUG nova.scheduler.client.report [None req-35de19e3-8c3c-41d3-92ef-422fce0e1b1c 2c30c20b7c364f81b40bcec56afc8ae3 8a3f79d5a5f2475d93599ef409043893 - - default default] Inventory has not changed for provider 593051b8-2000-437f-a915-2616fc8b1671 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 79, 'reserved': 1, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.12/site-packages/nova/scheduler/client/report.py:958
Oct 09 16:20:22 compute-0 nova_compute[117331]: 2025-10-09 16:20:22.276 2 DEBUG oslo_concurrency.lockutils [None req-35de19e3-8c3c-41d3-92ef-422fce0e1b1c 2c30c20b7c364f81b40bcec56afc8ae3 8a3f79d5a5f2475d93599ef409043893 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 1.067s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:424
Oct 09 16:20:22 compute-0 nova_compute[117331]: 2025-10-09 16:20:22.297 2 INFO nova.scheduler.client.report [None req-35de19e3-8c3c-41d3-92ef-422fce0e1b1c 2c30c20b7c364f81b40bcec56afc8ae3 8a3f79d5a5f2475d93599ef409043893 - - default default] Deleted allocations for instance 2bff6d2b-fd02-44fa-89ec-e5699854e098
Oct 09 16:20:23 compute-0 nova_compute[117331]: 2025-10-09 16:20:23.328 2 DEBUG oslo_concurrency.lockutils [None req-35de19e3-8c3c-41d3-92ef-422fce0e1b1c 2c30c20b7c364f81b40bcec56afc8ae3 8a3f79d5a5f2475d93599ef409043893 - - default default] Lock "2bff6d2b-fd02-44fa-89ec-e5699854e098" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 6.153s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:424
Oct 09 16:20:23 compute-0 nova_compute[117331]: 2025-10-09 16:20:23.488 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 09 16:20:23 compute-0 nova_compute[117331]: 2025-10-09 16:20:23.931 2 DEBUG oslo_concurrency.lockutils [None req-882bd272-3c7e-429d-9bb3-b998026a865b 2c30c20b7c364f81b40bcec56afc8ae3 8a3f79d5a5f2475d93599ef409043893 - - default default] Acquiring lock "cd624073-6d91-4e27-9050-656e729e250e" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:405
Oct 09 16:20:23 compute-0 nova_compute[117331]: 2025-10-09 16:20:23.931 2 DEBUG oslo_concurrency.lockutils [None req-882bd272-3c7e-429d-9bb3-b998026a865b 2c30c20b7c364f81b40bcec56afc8ae3 8a3f79d5a5f2475d93599ef409043893 - - default default] Lock "cd624073-6d91-4e27-9050-656e729e250e" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:410
Oct 09 16:20:23 compute-0 nova_compute[117331]: 2025-10-09 16:20:23.932 2 DEBUG oslo_concurrency.lockutils [None req-882bd272-3c7e-429d-9bb3-b998026a865b 2c30c20b7c364f81b40bcec56afc8ae3 8a3f79d5a5f2475d93599ef409043893 - - default default] Acquiring lock "cd624073-6d91-4e27-9050-656e729e250e-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:405
Oct 09 16:20:23 compute-0 nova_compute[117331]: 2025-10-09 16:20:23.932 2 DEBUG oslo_concurrency.lockutils [None req-882bd272-3c7e-429d-9bb3-b998026a865b 2c30c20b7c364f81b40bcec56afc8ae3 8a3f79d5a5f2475d93599ef409043893 - - default default] Lock "cd624073-6d91-4e27-9050-656e729e250e-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:410
Oct 09 16:20:23 compute-0 nova_compute[117331]: 2025-10-09 16:20:23.932 2 DEBUG oslo_concurrency.lockutils [None req-882bd272-3c7e-429d-9bb3-b998026a865b 2c30c20b7c364f81b40bcec56afc8ae3 8a3f79d5a5f2475d93599ef409043893 - - default default] Lock "cd624073-6d91-4e27-9050-656e729e250e-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:424
Oct 09 16:20:23 compute-0 nova_compute[117331]: 2025-10-09 16:20:23.943 2 INFO nova.compute.manager [None req-882bd272-3c7e-429d-9bb3-b998026a865b 2c30c20b7c364f81b40bcec56afc8ae3 8a3f79d5a5f2475d93599ef409043893 - - default default] [instance: cd624073-6d91-4e27-9050-656e729e250e] Terminating instance
Oct 09 16:20:23 compute-0 nova_compute[117331]: 2025-10-09 16:20:23.980 2 DEBUG oslo_service.periodic_task [None req-92a13bed-c081-4c7b-87f3-bafebefb79f2 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.12/site-packages/oslo_service/periodic_task.py:210
Oct 09 16:20:23 compute-0 nova_compute[117331]: 2025-10-09 16:20:23.980 2 DEBUG oslo_service.periodic_task [None req-92a13bed-c081-4c7b-87f3-bafebefb79f2 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.12/site-packages/oslo_service/periodic_task.py:210
Oct 09 16:20:24 compute-0 nova_compute[117331]: 2025-10-09 16:20:24.460 2 DEBUG nova.compute.manager [None req-882bd272-3c7e-429d-9bb3-b998026a865b 2c30c20b7c364f81b40bcec56afc8ae3 8a3f79d5a5f2475d93599ef409043893 - - default default] [instance: cd624073-6d91-4e27-9050-656e729e250e] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.12/site-packages/nova/compute/manager.py:3197
Oct 09 16:20:24 compute-0 kernel: tapb0f5ccc9-26 (unregistering): left promiscuous mode
Oct 09 16:20:24 compute-0 NetworkManager[1028]: <info>  [1760026824.4891] device (tapb0f5ccc9-26): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Oct 09 16:20:24 compute-0 nova_compute[117331]: 2025-10-09 16:20:24.489 2 DEBUG oslo_service.periodic_task [None req-92a13bed-c081-4c7b-87f3-bafebefb79f2 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.12/site-packages/oslo_service/periodic_task.py:210
Oct 09 16:20:24 compute-0 nova_compute[117331]: 2025-10-09 16:20:24.490 2 DEBUG oslo_service.periodic_task [None req-92a13bed-c081-4c7b-87f3-bafebefb79f2 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.12/site-packages/oslo_service/periodic_task.py:210
Oct 09 16:20:24 compute-0 nova_compute[117331]: 2025-10-09 16:20:24.490 2 DEBUG oslo_service.periodic_task [None req-92a13bed-c081-4c7b-87f3-bafebefb79f2 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.12/site-packages/oslo_service/periodic_task.py:210
Oct 09 16:20:24 compute-0 ovn_controller[19752]: 2025-10-09T16:20:24Z|00117|binding|INFO|Releasing lport b0f5ccc9-26bf-47c1-be9c-23a53bcb2d99 from this chassis (sb_readonly=0)
Oct 09 16:20:24 compute-0 nova_compute[117331]: 2025-10-09 16:20:24.547 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 09 16:20:24 compute-0 ovn_controller[19752]: 2025-10-09T16:20:24Z|00118|binding|INFO|Setting lport b0f5ccc9-26bf-47c1-be9c-23a53bcb2d99 down in Southbound
Oct 09 16:20:24 compute-0 ovn_controller[19752]: 2025-10-09T16:20:24Z|00119|binding|INFO|Removing iface tapb0f5ccc9-26 ovn-installed in OVS
Oct 09 16:20:24 compute-0 nova_compute[117331]: 2025-10-09 16:20:24.550 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 09 16:20:24 compute-0 ovn_metadata_agent[28608]: 2025-10-09 16:20:24.554 28613 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:75:be:26 10.100.0.9'], port_security=['fa:16:3e:75:be:26 10.100.0.9'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.9/28', 'neutron:device_id': 'cd624073-6d91-4e27-9050-656e729e250e', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-7a8574b4-b4f7-483c-8050-7281b5ac5624', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '8a3f79d5a5f2475d93599ef409043893', 'neutron:revision_number': '5', 'neutron:security_group_ids': '3635f8cb-8906-4fc5-a724-24c5054b393c', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=ba3ccdc6-51b8-4206-add5-95d4a6a3eef3, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f37b2576840>], logical_port=b0f5ccc9-26bf-47c1-be9c-23a53bcb2d99) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f37b2576840>]) matches /usr/lib/python3.12/site-packages/ovsdbapp/backend/ovs_idl/event.py:55
Oct 09 16:20:24 compute-0 ovn_metadata_agent[28608]: 2025-10-09 16:20:24.556 28613 INFO neutron.agent.ovn.metadata.agent [-] Port b0f5ccc9-26bf-47c1-be9c-23a53bcb2d99 in datapath 7a8574b4-b4f7-483c-8050-7281b5ac5624 unbound from our chassis
Oct 09 16:20:24 compute-0 ovn_metadata_agent[28608]: 2025-10-09 16:20:24.558 28613 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 7a8574b4-b4f7-483c-8050-7281b5ac5624, tearing the namespace down if needed _get_provision_params /usr/lib/python3.12/site-packages/neutron/agent/ovn/metadata/agent.py:756
Oct 09 16:20:24 compute-0 ovn_metadata_agent[28608]: 2025-10-09 16:20:24.558 139687 DEBUG oslo.privsep.daemon [-] privsep: reply[c7888266-411d-4b72-b86a-a711bd64e08b]: (4, True) _call_back /usr/lib/python3.12/site-packages/oslo_privsep/daemon.py:515
Oct 09 16:20:24 compute-0 ovn_metadata_agent[28608]: 2025-10-09 16:20:24.559 28613 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-7a8574b4-b4f7-483c-8050-7281b5ac5624 namespace which is not needed anymore
Oct 09 16:20:24 compute-0 nova_compute[117331]: 2025-10-09 16:20:24.575 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 09 16:20:24 compute-0 systemd[1]: machine-qemu\x2d6\x2dinstance\x2d0000000a.scope: Deactivated successfully.
Oct 09 16:20:24 compute-0 systemd[1]: machine-qemu\x2d6\x2dinstance\x2d0000000a.scope: Consumed 16.101s CPU time.
Oct 09 16:20:24 compute-0 systemd-machined[77487]: Machine qemu-6-instance-0000000a terminated.
Oct 09 16:20:24 compute-0 neutron-haproxy-ovnmeta-7a8574b4-b4f7-483c-8050-7281b5ac5624[144601]: [NOTICE]   (144607) : haproxy version is 3.0.5-8e879a5
Oct 09 16:20:24 compute-0 podman[145181]: 2025-10-09 16:20:24.709549939 +0000 UTC m=+0.033932161 container kill 0dbaada2507e05e6a9918da92344c04967a6156a3e8d31b25e3d94f057396eb7 (image=38.102.83.66:5001/podified-master-centos10/openstack-neutron-metadata-agent-ovn:watcher_latest, name=neutron-haproxy-ovnmeta-7a8574b4-b4f7-483c-8050-7281b5ac5624, org.label-schema.build-date=20251007, org.label-schema.vendor=CentOS, tcib_build_tag=watcher_latest, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.41.4, org.label-schema.license=GPLv2)
Oct 09 16:20:24 compute-0 neutron-haproxy-ovnmeta-7a8574b4-b4f7-483c-8050-7281b5ac5624[144601]: [NOTICE]   (144607) : path to executable is /usr/sbin/haproxy
Oct 09 16:20:24 compute-0 neutron-haproxy-ovnmeta-7a8574b4-b4f7-483c-8050-7281b5ac5624[144601]: [WARNING]  (144607) : Exiting Master process...
Oct 09 16:20:24 compute-0 neutron-haproxy-ovnmeta-7a8574b4-b4f7-483c-8050-7281b5ac5624[144601]: [ALERT]    (144607) : Current worker (144609) exited with code 143 (Terminated)
Oct 09 16:20:24 compute-0 neutron-haproxy-ovnmeta-7a8574b4-b4f7-483c-8050-7281b5ac5624[144601]: [WARNING]  (144607) : All workers exited. Exiting... (0)
Oct 09 16:20:24 compute-0 systemd[1]: libpod-0dbaada2507e05e6a9918da92344c04967a6156a3e8d31b25e3d94f057396eb7.scope: Deactivated successfully.
Oct 09 16:20:24 compute-0 conmon[144601]: conmon 0dbaada2507e05e6a991 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-0dbaada2507e05e6a9918da92344c04967a6156a3e8d31b25e3d94f057396eb7.scope/container/memory.events
Oct 09 16:20:24 compute-0 nova_compute[117331]: 2025-10-09 16:20:24.719 2 INFO nova.virt.libvirt.driver [-] [instance: cd624073-6d91-4e27-9050-656e729e250e] Instance destroyed successfully.
Oct 09 16:20:24 compute-0 nova_compute[117331]: 2025-10-09 16:20:24.720 2 DEBUG nova.objects.instance [None req-882bd272-3c7e-429d-9bb3-b998026a865b 2c30c20b7c364f81b40bcec56afc8ae3 8a3f79d5a5f2475d93599ef409043893 - - default default] Lazy-loading 'resources' on Instance uuid cd624073-6d91-4e27-9050-656e729e250e obj_load_attr /usr/lib/python3.12/site-packages/nova/objects/instance.py:1141
Oct 09 16:20:24 compute-0 podman[145211]: 2025-10-09 16:20:24.774880549 +0000 UTC m=+0.033974553 container died 0dbaada2507e05e6a9918da92344c04967a6156a3e8d31b25e3d94f057396eb7 (image=38.102.83.66:5001/podified-master-centos10/openstack-neutron-metadata-agent-ovn:watcher_latest, name=neutron-haproxy-ovnmeta-7a8574b4-b4f7-483c-8050-7281b5ac5624, tcib_build_tag=watcher_latest, tcib_managed=true, org.label-schema.build-date=20251007, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.4, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 10 Base Image)
Oct 09 16:20:24 compute-0 systemd[1]: var-lib-containers-storage-overlay-2c994c2faaac678d0d9fdd1d853633ee68ad20653084e5b4f04bb5f655d8447d-merged.mount: Deactivated successfully.
Oct 09 16:20:24 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-0dbaada2507e05e6a9918da92344c04967a6156a3e8d31b25e3d94f057396eb7-userdata-shm.mount: Deactivated successfully.
Oct 09 16:20:24 compute-0 podman[145211]: 2025-10-09 16:20:24.811508715 +0000 UTC m=+0.070602709 container cleanup 0dbaada2507e05e6a9918da92344c04967a6156a3e8d31b25e3d94f057396eb7 (image=38.102.83.66:5001/podified-master-centos10/openstack-neutron-metadata-agent-ovn:watcher_latest, name=neutron-haproxy-ovnmeta-7a8574b4-b4f7-483c-8050-7281b5ac5624, io.buildah.version=1.41.4, org.label-schema.build-date=20251007, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=watcher_latest)
Oct 09 16:20:24 compute-0 systemd[1]: libpod-conmon-0dbaada2507e05e6a9918da92344c04967a6156a3e8d31b25e3d94f057396eb7.scope: Deactivated successfully.
Oct 09 16:20:24 compute-0 podman[145213]: 2025-10-09 16:20:24.839313079 +0000 UTC m=+0.091980578 container remove 0dbaada2507e05e6a9918da92344c04967a6156a3e8d31b25e3d94f057396eb7 (image=38.102.83.66:5001/podified-master-centos10/openstack-neutron-metadata-agent-ovn:watcher_latest, name=neutron-haproxy-ovnmeta-7a8574b4-b4f7-483c-8050-7281b5ac5624, io.buildah.version=1.41.4, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 10 Base Image, tcib_build_tag=watcher_latest, tcib_managed=true, org.label-schema.build-date=20251007, org.label-schema.vendor=CentOS)
Oct 09 16:20:24 compute-0 ovn_metadata_agent[28608]: 2025-10-09 16:20:24.846 139687 DEBUG oslo.privsep.daemon [-] privsep: reply[994b8a14-8fa5-470b-8f78-b647c4d55433]: (4, ("Thu Oct  9 04:20:24 PM UTC 2025 Sending signal '15' to neutron-haproxy-ovnmeta-7a8574b4-b4f7-483c-8050-7281b5ac5624 (0dbaada2507e05e6a9918da92344c04967a6156a3e8d31b25e3d94f057396eb7)\n0dbaada2507e05e6a9918da92344c04967a6156a3e8d31b25e3d94f057396eb7\nThu Oct  9 04:20:24 PM UTC 2025 Deleting container neutron-haproxy-ovnmeta-7a8574b4-b4f7-483c-8050-7281b5ac5624 (0dbaada2507e05e6a9918da92344c04967a6156a3e8d31b25e3d94f057396eb7)\n0dbaada2507e05e6a9918da92344c04967a6156a3e8d31b25e3d94f057396eb7\n", '', 0)) _call_back /usr/lib/python3.12/site-packages/oslo_privsep/daemon.py:515
Oct 09 16:20:24 compute-0 ovn_metadata_agent[28608]: 2025-10-09 16:20:24.848 139687 DEBUG oslo.privsep.daemon [-] privsep: reply[cb5cdc55-40c0-43bb-bbf0-6e39cac4ccf5]: (4, None) _call_back /usr/lib/python3.12/site-packages/oslo_privsep/daemon.py:515
Oct 09 16:20:24 compute-0 ovn_metadata_agent[28608]: 2025-10-09 16:20:24.848 28613 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/7a8574b4-b4f7-483c-8050-7281b5ac5624.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/7a8574b4-b4f7-483c-8050-7281b5ac5624.pid.haproxy' get_value_from_file /usr/lib/python3.12/site-packages/neutron/agent/linux/utils.py:268
Oct 09 16:20:24 compute-0 ovn_metadata_agent[28608]: 2025-10-09 16:20:24.849 139687 DEBUG oslo.privsep.daemon [-] privsep: reply[d0d12c2f-4555-4770-8847-2af88571a4b0]: (4, None) _call_back /usr/lib/python3.12/site-packages/oslo_privsep/daemon.py:515
Oct 09 16:20:24 compute-0 ovn_metadata_agent[28608]: 2025-10-09 16:20:24.850 28613 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap7a8574b4-b0, bridge=None, if_exists=True) do_commit /usr/lib/python3.12/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 09 16:20:24 compute-0 nova_compute[117331]: 2025-10-09 16:20:24.853 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 09 16:20:24 compute-0 kernel: tap7a8574b4-b0: left promiscuous mode
Oct 09 16:20:24 compute-0 nova_compute[117331]: 2025-10-09 16:20:24.867 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 09 16:20:24 compute-0 nova_compute[117331]: 2025-10-09 16:20:24.871 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 09 16:20:24 compute-0 ovn_metadata_agent[28608]: 2025-10-09 16:20:24.874 139687 DEBUG oslo.privsep.daemon [-] privsep: reply[c5bcbb38-e165-49a6-9b8a-3c71f4b5e4b4]: (4, True) _call_back /usr/lib/python3.12/site-packages/oslo_privsep/daemon.py:515
Oct 09 16:20:24 compute-0 ovn_metadata_agent[28608]: 2025-10-09 16:20:24.901 139687 DEBUG oslo.privsep.daemon [-] privsep: reply[b87db51d-2c59-46aa-8652-8a28b6642014]: (4, None) _call_back /usr/lib/python3.12/site-packages/oslo_privsep/daemon.py:515
Oct 09 16:20:24 compute-0 ovn_metadata_agent[28608]: 2025-10-09 16:20:24.903 139687 DEBUG oslo.privsep.daemon [-] privsep: reply[b00db0e9-96bb-4982-90f3-9ba0f1904952]: (4, True) _call_back /usr/lib/python3.12/site-packages/oslo_privsep/daemon.py:515
Oct 09 16:20:24 compute-0 nova_compute[117331]: 2025-10-09 16:20:24.912 2 DEBUG nova.compute.manager [req-33fc466f-2270-4f76-b814-4209d676c40e req-0cfde397-3479-4145-81e9-4dd3e2a321d1 ee15961ad2be4bada6c106ac4f69f422 c076c42271004e7aa86e84416e33f826 - - default default] [instance: cd624073-6d91-4e27-9050-656e729e250e] Received event network-vif-unplugged-b0f5ccc9-26bf-47c1-be9c-23a53bcb2d99 external_instance_event /usr/lib/python3.12/site-packages/nova/compute/manager.py:11812
Oct 09 16:20:24 compute-0 nova_compute[117331]: 2025-10-09 16:20:24.912 2 DEBUG oslo_concurrency.lockutils [req-33fc466f-2270-4f76-b814-4209d676c40e req-0cfde397-3479-4145-81e9-4dd3e2a321d1 ee15961ad2be4bada6c106ac4f69f422 c076c42271004e7aa86e84416e33f826 - - default default] Acquiring lock "cd624073-6d91-4e27-9050-656e729e250e-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:405
Oct 09 16:20:24 compute-0 nova_compute[117331]: 2025-10-09 16:20:24.913 2 DEBUG oslo_concurrency.lockutils [req-33fc466f-2270-4f76-b814-4209d676c40e req-0cfde397-3479-4145-81e9-4dd3e2a321d1 ee15961ad2be4bada6c106ac4f69f422 c076c42271004e7aa86e84416e33f826 - - default default] Lock "cd624073-6d91-4e27-9050-656e729e250e-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:410
Oct 09 16:20:24 compute-0 nova_compute[117331]: 2025-10-09 16:20:24.914 2 DEBUG oslo_concurrency.lockutils [req-33fc466f-2270-4f76-b814-4209d676c40e req-0cfde397-3479-4145-81e9-4dd3e2a321d1 ee15961ad2be4bada6c106ac4f69f422 c076c42271004e7aa86e84416e33f826 - - default default] Lock "cd624073-6d91-4e27-9050-656e729e250e-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:424
Oct 09 16:20:24 compute-0 nova_compute[117331]: 2025-10-09 16:20:24.914 2 DEBUG nova.compute.manager [req-33fc466f-2270-4f76-b814-4209d676c40e req-0cfde397-3479-4145-81e9-4dd3e2a321d1 ee15961ad2be4bada6c106ac4f69f422 c076c42271004e7aa86e84416e33f826 - - default default] [instance: cd624073-6d91-4e27-9050-656e729e250e] No waiting events found dispatching network-vif-unplugged-b0f5ccc9-26bf-47c1-be9c-23a53bcb2d99 pop_instance_event /usr/lib/python3.12/site-packages/nova/compute/manager.py:344
Oct 09 16:20:24 compute-0 nova_compute[117331]: 2025-10-09 16:20:24.915 2 DEBUG nova.compute.manager [req-33fc466f-2270-4f76-b814-4209d676c40e req-0cfde397-3479-4145-81e9-4dd3e2a321d1 ee15961ad2be4bada6c106ac4f69f422 c076c42271004e7aa86e84416e33f826 - - default default] [instance: cd624073-6d91-4e27-9050-656e729e250e] Received event network-vif-unplugged-b0f5ccc9-26bf-47c1-be9c-23a53bcb2d99 for instance with task_state deleting. _process_instance_event /usr/lib/python3.12/site-packages/nova/compute/manager.py:11590
Oct 09 16:20:24 compute-0 ovn_metadata_agent[28608]: 2025-10-09 16:20:24.923 139687 DEBUG oslo.privsep.daemon [-] privsep: reply[36d0048c-ea39-4f65-8301-0e53e2332ea8]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 168177, 'reachable_time': 16533, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 145252, 'error': None, 'target': 'ovnmeta-7a8574b4-b4f7-483c-8050-7281b5ac5624', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.12/site-packages/oslo_privsep/daemon.py:515
Oct 09 16:20:24 compute-0 systemd[1]: run-netns-ovnmeta\x2d7a8574b4\x2db4f7\x2d483c\x2d8050\x2d7281b5ac5624.mount: Deactivated successfully.
Oct 09 16:20:24 compute-0 ovn_metadata_agent[28608]: 2025-10-09 16:20:24.931 28727 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-7a8574b4-b4f7-483c-8050-7281b5ac5624 deleted. remove_netns /usr/lib/python3.12/site-packages/neutron/privileged/agent/linux/ip_lib.py:603
Oct 09 16:20:24 compute-0 ovn_metadata_agent[28608]: 2025-10-09 16:20:24.933 28727 DEBUG oslo.privsep.daemon [-] privsep: reply[99545ade-a97c-4cf0-b7ea-46b61895c532]: (4, None) _call_back /usr/lib/python3.12/site-packages/oslo_privsep/daemon.py:515
Oct 09 16:20:25 compute-0 nova_compute[117331]: 2025-10-09 16:20:25.227 2 DEBUG nova.virt.libvirt.vif [None req-882bd272-3c7e-429d-9bb3-b998026a865b 2c30c20b7c364f81b40bcec56afc8ae3 8a3f79d5a5f2475d93599ef409043893 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,compute_id=1,config_drive='True',created_at=2025-10-09T16:18:45Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description=None,display_name='tempest-TestExecuteBasicStrategy-server-1593369519',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testexecutebasicstrategy-server-1593369519',id=10,image_ref='b7d6e0af-25e4-4227-9dc6-43143898ceee',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-10-09T16:19:00Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='8a3f79d5a5f2475d93599ef409043893',ramdisk_id='',reservation_id='r-hwjq090r',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,manager,admin,member',image_base_image_ref='b7d6e0af-25e4-4227-9dc6-43143898ceee',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_m
in_ram='0',owner_project_name='tempest-TestExecuteBasicStrategy-20506088',owner_user_name='tempest-TestExecuteBasicStrategy-20506088-project-admin'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-10-09T16:19:00Z,user_data=None,user_id='2c30c20b7c364f81b40bcec56afc8ae3',uuid=cd624073-6d91-4e27-9050-656e729e250e,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "b0f5ccc9-26bf-47c1-be9c-23a53bcb2d99", "address": "fa:16:3e:75:be:26", "network": {"id": "7a8574b4-b4f7-483c-8050-7281b5ac5624", "bridge": "br-int", "label": "tempest-TestExecuteBasicStrategy-1906595087-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f4e80e48be374e249fbd628564fc7b82", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb0f5ccc9-26", "ovs_interfaceid": "b0f5ccc9-26bf-47c1-be9c-23a53bcb2d99", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.12/site-packages/nova/virt/libvirt/vif.py:839
Oct 09 16:20:25 compute-0 nova_compute[117331]: 2025-10-09 16:20:25.227 2 DEBUG nova.network.os_vif_util [None req-882bd272-3c7e-429d-9bb3-b998026a865b 2c30c20b7c364f81b40bcec56afc8ae3 8a3f79d5a5f2475d93599ef409043893 - - default default] Converting VIF {"id": "b0f5ccc9-26bf-47c1-be9c-23a53bcb2d99", "address": "fa:16:3e:75:be:26", "network": {"id": "7a8574b4-b4f7-483c-8050-7281b5ac5624", "bridge": "br-int", "label": "tempest-TestExecuteBasicStrategy-1906595087-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f4e80e48be374e249fbd628564fc7b82", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb0f5ccc9-26", "ovs_interfaceid": "b0f5ccc9-26bf-47c1-be9c-23a53bcb2d99", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.12/site-packages/nova/network/os_vif_util.py:511
Oct 09 16:20:25 compute-0 nova_compute[117331]: 2025-10-09 16:20:25.228 2 DEBUG nova.network.os_vif_util [None req-882bd272-3c7e-429d-9bb3-b998026a865b 2c30c20b7c364f81b40bcec56afc8ae3 8a3f79d5a5f2475d93599ef409043893 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:75:be:26,bridge_name='br-int',has_traffic_filtering=True,id=b0f5ccc9-26bf-47c1-be9c-23a53bcb2d99,network=Network(7a8574b4-b4f7-483c-8050-7281b5ac5624),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapb0f5ccc9-26') nova_to_osvif_vif /usr/lib/python3.12/site-packages/nova/network/os_vif_util.py:548
Oct 09 16:20:25 compute-0 nova_compute[117331]: 2025-10-09 16:20:25.229 2 DEBUG os_vif [None req-882bd272-3c7e-429d-9bb3-b998026a865b 2c30c20b7c364f81b40bcec56afc8ae3 8a3f79d5a5f2475d93599ef409043893 - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:75:be:26,bridge_name='br-int',has_traffic_filtering=True,id=b0f5ccc9-26bf-47c1-be9c-23a53bcb2d99,network=Network(7a8574b4-b4f7-483c-8050-7281b5ac5624),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapb0f5ccc9-26') unplug /usr/lib/python3.12/site-packages/os_vif/__init__.py:109
Oct 09 16:20:25 compute-0 nova_compute[117331]: 2025-10-09 16:20:25.231 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 22 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 09 16:20:25 compute-0 nova_compute[117331]: 2025-10-09 16:20:25.231 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapb0f5ccc9-26, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.12/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 09 16:20:25 compute-0 nova_compute[117331]: 2025-10-09 16:20:25.233 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 09 16:20:25 compute-0 nova_compute[117331]: 2025-10-09 16:20:25.235 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 09 16:20:25 compute-0 nova_compute[117331]: 2025-10-09 16:20:25.235 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 22 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 09 16:20:25 compute-0 nova_compute[117331]: 2025-10-09 16:20:25.236 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbDestroyCommand(_result=None, table=QoS, record=b2e17c28-c473-4116-ba7e-8fdbe2f572c8) do_commit /usr/lib/python3.12/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 09 16:20:25 compute-0 nova_compute[117331]: 2025-10-09 16:20:25.236 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 09 16:20:25 compute-0 nova_compute[117331]: 2025-10-09 16:20:25.237 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 09 16:20:25 compute-0 nova_compute[117331]: 2025-10-09 16:20:25.240 2 INFO os_vif [None req-882bd272-3c7e-429d-9bb3-b998026a865b 2c30c20b7c364f81b40bcec56afc8ae3 8a3f79d5a5f2475d93599ef409043893 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:75:be:26,bridge_name='br-int',has_traffic_filtering=True,id=b0f5ccc9-26bf-47c1-be9c-23a53bcb2d99,network=Network(7a8574b4-b4f7-483c-8050-7281b5ac5624),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapb0f5ccc9-26')
Oct 09 16:20:25 compute-0 nova_compute[117331]: 2025-10-09 16:20:25.240 2 INFO nova.virt.libvirt.driver [None req-882bd272-3c7e-429d-9bb3-b998026a865b 2c30c20b7c364f81b40bcec56afc8ae3 8a3f79d5a5f2475d93599ef409043893 - - default default] [instance: cd624073-6d91-4e27-9050-656e729e250e] Deleting instance files /var/lib/nova/instances/cd624073-6d91-4e27-9050-656e729e250e_del
Oct 09 16:20:25 compute-0 nova_compute[117331]: 2025-10-09 16:20:25.241 2 INFO nova.virt.libvirt.driver [None req-882bd272-3c7e-429d-9bb3-b998026a865b 2c30c20b7c364f81b40bcec56afc8ae3 8a3f79d5a5f2475d93599ef409043893 - - default default] [instance: cd624073-6d91-4e27-9050-656e729e250e] Deletion of /var/lib/nova/instances/cd624073-6d91-4e27-9050-656e729e250e_del complete
Oct 09 16:20:25 compute-0 nova_compute[117331]: 2025-10-09 16:20:25.755 2 INFO nova.compute.manager [None req-882bd272-3c7e-429d-9bb3-b998026a865b 2c30c20b7c364f81b40bcec56afc8ae3 8a3f79d5a5f2475d93599ef409043893 - - default default] [instance: cd624073-6d91-4e27-9050-656e729e250e] Took 1.29 seconds to destroy the instance on the hypervisor.
Oct 09 16:20:25 compute-0 nova_compute[117331]: 2025-10-09 16:20:25.756 2 DEBUG oslo.service.backend._eventlet.loopingcall [None req-882bd272-3c7e-429d-9bb3-b998026a865b 2c30c20b7c364f81b40bcec56afc8ae3 8a3f79d5a5f2475d93599ef409043893 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.12/site-packages/oslo_service/backend/_eventlet/loopingcall.py:437
Oct 09 16:20:25 compute-0 nova_compute[117331]: 2025-10-09 16:20:25.756 2 DEBUG nova.compute.manager [-] [instance: cd624073-6d91-4e27-9050-656e729e250e] Deallocating network for instance _deallocate_network /usr/lib/python3.12/site-packages/nova/compute/manager.py:2324
Oct 09 16:20:25 compute-0 nova_compute[117331]: 2025-10-09 16:20:25.757 2 DEBUG nova.network.neutron [-] [instance: cd624073-6d91-4e27-9050-656e729e250e] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.12/site-packages/nova/network/neutron.py:1863
Oct 09 16:20:25 compute-0 nova_compute[117331]: 2025-10-09 16:20:25.757 2 WARNING neutronclient.v2_0.client [-] The python binding code in neutronclient is deprecated in favor of OpenstackSDK, please use that as this will be removed in a future release.
Oct 09 16:20:25 compute-0 nova_compute[117331]: 2025-10-09 16:20:25.807 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 09 16:20:26 compute-0 nova_compute[117331]: 2025-10-09 16:20:26.281 2 WARNING neutronclient.v2_0.client [-] The python binding code in neutronclient is deprecated in favor of OpenstackSDK, please use that as this will be removed in a future release.
Oct 09 16:20:26 compute-0 nova_compute[117331]: 2025-10-09 16:20:26.618 2 DEBUG nova.compute.manager [req-41c2e52b-ce64-48fc-95f4-9c211e6d2083 req-d7b809da-eef8-4990-9eba-4f5e740a98ac ee15961ad2be4bada6c106ac4f69f422 c076c42271004e7aa86e84416e33f826 - - default default] [instance: cd624073-6d91-4e27-9050-656e729e250e] Received event network-vif-deleted-b0f5ccc9-26bf-47c1-be9c-23a53bcb2d99 external_instance_event /usr/lib/python3.12/site-packages/nova/compute/manager.py:11812
Oct 09 16:20:26 compute-0 nova_compute[117331]: 2025-10-09 16:20:26.618 2 INFO nova.compute.manager [req-41c2e52b-ce64-48fc-95f4-9c211e6d2083 req-d7b809da-eef8-4990-9eba-4f5e740a98ac ee15961ad2be4bada6c106ac4f69f422 c076c42271004e7aa86e84416e33f826 - - default default] [instance: cd624073-6d91-4e27-9050-656e729e250e] Neutron deleted interface b0f5ccc9-26bf-47c1-be9c-23a53bcb2d99; detaching it from the instance and deleting it from the info cache
Oct 09 16:20:26 compute-0 nova_compute[117331]: 2025-10-09 16:20:26.619 2 DEBUG nova.network.neutron [req-41c2e52b-ce64-48fc-95f4-9c211e6d2083 req-d7b809da-eef8-4990-9eba-4f5e740a98ac ee15961ad2be4bada6c106ac4f69f422 c076c42271004e7aa86e84416e33f826 - - default default] [instance: cd624073-6d91-4e27-9050-656e729e250e] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.12/site-packages/nova/network/neutron.py:116
Oct 09 16:20:26 compute-0 nova_compute[117331]: 2025-10-09 16:20:26.956 2 DEBUG nova.compute.manager [req-2ef831d9-4cfb-4328-843d-1b5b0fa9b779 req-7d63fc1f-8b1b-47dc-84d3-4cc6ab2db0e4 ee15961ad2be4bada6c106ac4f69f422 c076c42271004e7aa86e84416e33f826 - - default default] [instance: cd624073-6d91-4e27-9050-656e729e250e] Received event network-vif-unplugged-b0f5ccc9-26bf-47c1-be9c-23a53bcb2d99 external_instance_event /usr/lib/python3.12/site-packages/nova/compute/manager.py:11812
Oct 09 16:20:26 compute-0 nova_compute[117331]: 2025-10-09 16:20:26.957 2 DEBUG oslo_concurrency.lockutils [req-2ef831d9-4cfb-4328-843d-1b5b0fa9b779 req-7d63fc1f-8b1b-47dc-84d3-4cc6ab2db0e4 ee15961ad2be4bada6c106ac4f69f422 c076c42271004e7aa86e84416e33f826 - - default default] Acquiring lock "cd624073-6d91-4e27-9050-656e729e250e-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:405
Oct 09 16:20:26 compute-0 nova_compute[117331]: 2025-10-09 16:20:26.957 2 DEBUG oslo_concurrency.lockutils [req-2ef831d9-4cfb-4328-843d-1b5b0fa9b779 req-7d63fc1f-8b1b-47dc-84d3-4cc6ab2db0e4 ee15961ad2be4bada6c106ac4f69f422 c076c42271004e7aa86e84416e33f826 - - default default] Lock "cd624073-6d91-4e27-9050-656e729e250e-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:410
Oct 09 16:20:26 compute-0 nova_compute[117331]: 2025-10-09 16:20:26.958 2 DEBUG oslo_concurrency.lockutils [req-2ef831d9-4cfb-4328-843d-1b5b0fa9b779 req-7d63fc1f-8b1b-47dc-84d3-4cc6ab2db0e4 ee15961ad2be4bada6c106ac4f69f422 c076c42271004e7aa86e84416e33f826 - - default default] Lock "cd624073-6d91-4e27-9050-656e729e250e-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:424
Oct 09 16:20:26 compute-0 nova_compute[117331]: 2025-10-09 16:20:26.958 2 DEBUG nova.compute.manager [req-2ef831d9-4cfb-4328-843d-1b5b0fa9b779 req-7d63fc1f-8b1b-47dc-84d3-4cc6ab2db0e4 ee15961ad2be4bada6c106ac4f69f422 c076c42271004e7aa86e84416e33f826 - - default default] [instance: cd624073-6d91-4e27-9050-656e729e250e] No waiting events found dispatching network-vif-unplugged-b0f5ccc9-26bf-47c1-be9c-23a53bcb2d99 pop_instance_event /usr/lib/python3.12/site-packages/nova/compute/manager.py:344
Oct 09 16:20:26 compute-0 nova_compute[117331]: 2025-10-09 16:20:26.958 2 DEBUG nova.compute.manager [req-2ef831d9-4cfb-4328-843d-1b5b0fa9b779 req-7d63fc1f-8b1b-47dc-84d3-4cc6ab2db0e4 ee15961ad2be4bada6c106ac4f69f422 c076c42271004e7aa86e84416e33f826 - - default default] [instance: cd624073-6d91-4e27-9050-656e729e250e] Received event network-vif-unplugged-b0f5ccc9-26bf-47c1-be9c-23a53bcb2d99 for instance with task_state deleting. _process_instance_event /usr/lib/python3.12/site-packages/nova/compute/manager.py:11590
Oct 09 16:20:27 compute-0 nova_compute[117331]: 2025-10-09 16:20:27.068 2 DEBUG nova.network.neutron [-] [instance: cd624073-6d91-4e27-9050-656e729e250e] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.12/site-packages/nova/network/neutron.py:116
Oct 09 16:20:27 compute-0 nova_compute[117331]: 2025-10-09 16:20:27.127 2 DEBUG nova.compute.manager [req-41c2e52b-ce64-48fc-95f4-9c211e6d2083 req-d7b809da-eef8-4990-9eba-4f5e740a98ac ee15961ad2be4bada6c106ac4f69f422 c076c42271004e7aa86e84416e33f826 - - default default] [instance: cd624073-6d91-4e27-9050-656e729e250e] Detach interface failed, port_id=b0f5ccc9-26bf-47c1-be9c-23a53bcb2d99, reason: Instance cd624073-6d91-4e27-9050-656e729e250e could not be found. _process_instance_vif_deleted_event /usr/lib/python3.12/site-packages/nova/compute/manager.py:11646
Oct 09 16:20:27 compute-0 nova_compute[117331]: 2025-10-09 16:20:27.577 2 INFO nova.compute.manager [-] [instance: cd624073-6d91-4e27-9050-656e729e250e] Took 1.82 seconds to deallocate network for instance.
Oct 09 16:20:28 compute-0 nova_compute[117331]: 2025-10-09 16:20:28.097 2 DEBUG oslo_concurrency.lockutils [None req-882bd272-3c7e-429d-9bb3-b998026a865b 2c30c20b7c364f81b40bcec56afc8ae3 8a3f79d5a5f2475d93599ef409043893 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:405
Oct 09 16:20:28 compute-0 nova_compute[117331]: 2025-10-09 16:20:28.098 2 DEBUG oslo_concurrency.lockutils [None req-882bd272-3c7e-429d-9bb3-b998026a865b 2c30c20b7c364f81b40bcec56afc8ae3 8a3f79d5a5f2475d93599ef409043893 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:410
Oct 09 16:20:28 compute-0 nova_compute[117331]: 2025-10-09 16:20:28.143 2 DEBUG nova.compute.provider_tree [None req-882bd272-3c7e-429d-9bb3-b998026a865b 2c30c20b7c364f81b40bcec56afc8ae3 8a3f79d5a5f2475d93599ef409043893 - - default default] Inventory has not changed in ProviderTree for provider: 593051b8-2000-437f-a915-2616fc8b1671 update_inventory /usr/lib/python3.12/site-packages/nova/compute/provider_tree.py:180
Oct 09 16:20:28 compute-0 nova_compute[117331]: 2025-10-09 16:20:28.653 2 DEBUG nova.scheduler.client.report [None req-882bd272-3c7e-429d-9bb3-b998026a865b 2c30c20b7c364f81b40bcec56afc8ae3 8a3f79d5a5f2475d93599ef409043893 - - default default] Inventory has not changed for provider 593051b8-2000-437f-a915-2616fc8b1671 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 79, 'reserved': 1, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.12/site-packages/nova/scheduler/client/report.py:958
Oct 09 16:20:28 compute-0 podman[145253]: 2025-10-09 16:20:28.853056061 +0000 UTC m=+0.072007932 container health_status 270830b5167cafbab735d8118980f26987e673cd0fa464dd2202394830bcca66 (image=38.102.83.66:5001/podified-master-centos10/openstack-multipathd:watcher_latest, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, io.buildah.version=1.41.4, org.label-schema.build-date=20251007, tcib_build_tag=watcher_latest, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': '38.102.83.66:5001/podified-master-centos10/openstack-multipathd:watcher_latest', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd, managed_by=edpm_ansible, config_id=multipathd, org.label-schema.license=GPLv2)
Oct 09 16:20:29 compute-0 nova_compute[117331]: 2025-10-09 16:20:29.164 2 DEBUG oslo_concurrency.lockutils [None req-882bd272-3c7e-429d-9bb3-b998026a865b 2c30c20b7c364f81b40bcec56afc8ae3 8a3f79d5a5f2475d93599ef409043893 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 1.067s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:424
Oct 09 16:20:29 compute-0 nova_compute[117331]: 2025-10-09 16:20:29.199 2 INFO nova.scheduler.client.report [None req-882bd272-3c7e-429d-9bb3-b998026a865b 2c30c20b7c364f81b40bcec56afc8ae3 8a3f79d5a5f2475d93599ef409043893 - - default default] Deleted allocations for instance cd624073-6d91-4e27-9050-656e729e250e
Oct 09 16:20:29 compute-0 podman[127775]: time="2025-10-09T16:20:29Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Oct 09 16:20:29 compute-0 podman[127775]: @ - - [09/Oct/2025:16:20:29 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 19526 "" "Go-http-client/1.1"
Oct 09 16:20:29 compute-0 podman[127775]: @ - - [09/Oct/2025:16:20:29 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 3021 "" "Go-http-client/1.1"
Oct 09 16:20:30 compute-0 nova_compute[117331]: 2025-10-09 16:20:30.233 2 DEBUG oslo_concurrency.lockutils [None req-882bd272-3c7e-429d-9bb3-b998026a865b 2c30c20b7c364f81b40bcec56afc8ae3 8a3f79d5a5f2475d93599ef409043893 - - default default] Lock "cd624073-6d91-4e27-9050-656e729e250e" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 6.302s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:424
Oct 09 16:20:30 compute-0 nova_compute[117331]: 2025-10-09 16:20:30.237 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 09 16:20:30 compute-0 nova_compute[117331]: 2025-10-09 16:20:30.810 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 09 16:20:31 compute-0 openstack_network_exporter[129925]: ERROR   16:20:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Oct 09 16:20:31 compute-0 openstack_network_exporter[129925]: ERROR   16:20:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Oct 09 16:20:31 compute-0 openstack_network_exporter[129925]: ERROR   16:20:31 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Oct 09 16:20:31 compute-0 openstack_network_exporter[129925]: ERROR   16:20:31 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Oct 09 16:20:31 compute-0 openstack_network_exporter[129925]: ERROR   16:20:31 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Oct 09 16:20:32 compute-0 podman[145273]: 2025-10-09 16:20:32.828839005 +0000 UTC m=+0.060553528 container health_status 86aa93e887899e54d383b0fc17fb3c5637680d1d8e146a130a557c837f0260fe (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']})
Oct 09 16:20:35 compute-0 nova_compute[117331]: 2025-10-09 16:20:35.239 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 09 16:20:35 compute-0 ovn_metadata_agent[28608]: 2025-10-09 16:20:35.304 28613 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:405
Oct 09 16:20:35 compute-0 ovn_metadata_agent[28608]: 2025-10-09 16:20:35.305 28613 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.000s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:410
Oct 09 16:20:35 compute-0 ovn_metadata_agent[28608]: 2025-10-09 16:20:35.305 28613 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:424
Oct 09 16:20:35 compute-0 nova_compute[117331]: 2025-10-09 16:20:35.812 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 09 16:20:36 compute-0 nova_compute[117331]: 2025-10-09 16:20:36.857 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 09 16:20:37 compute-0 podman[145300]: 2025-10-09 16:20:37.842855465 +0000 UTC m=+0.069560644 container health_status 0e966c7a283689daf23c5db5017f23f685ee836319240188a9f70ddd246563c5 (image=38.102.83.66:5001/podified-master-centos10/openstack-neutron-metadata-agent-ovn:watcher_latest, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_metadata_agent, managed_by=edpm_ansible, tcib_build_tag=watcher_latest, io.buildah.version=1.41.4, org.label-schema.build-date=20251007, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': '38.102.83.66:5001/podified-master-centos10/openstack-neutron-metadata-agent-ovn:watcher_latest', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 10 Base Image)
Oct 09 16:20:37 compute-0 podman[145301]: 2025-10-09 16:20:37.879313546 +0000 UTC m=+0.101081759 container health_status 8227728d23f374cb9528be6ddf01dff38d8c792912a17b0d137932a18390b820 (image=38.102.83.66:5001/podified-master-centos10/openstack-iscsid:watcher_latest, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, config_id=iscsid, org.label-schema.build-date=20251007, org.label-schema.license=GPLv2, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': '38.102.83.66:5001/podified-master-centos10/openstack-iscsid:watcher_latest', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, io.buildah.version=1.41.4, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 10 Base Image, tcib_build_tag=watcher_latest, container_name=iscsid, managed_by=edpm_ansible)
Oct 09 16:20:40 compute-0 nova_compute[117331]: 2025-10-09 16:20:40.240 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 09 16:20:40 compute-0 nova_compute[117331]: 2025-10-09 16:20:40.816 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 09 16:20:45 compute-0 nova_compute[117331]: 2025-10-09 16:20:45.250 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 09 16:20:45 compute-0 nova_compute[117331]: 2025-10-09 16:20:45.818 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 09 16:20:46 compute-0 podman[145340]: 2025-10-09 16:20:46.817552725 +0000 UTC m=+0.053399889 container health_status 64fa251112d01abb34ec1bb8e22019815ddcaaaa391ce5ec91517147ac5a9994 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., io.openshift.tags=minimal rhel9, url=https://catalog.redhat.com/en/search?searchType=containers, com.redhat.component=ubi9-minimal-container, maintainer=Red Hat, Inc., io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, io.openshift.expose-services=, container_name=openstack_network_exporter, io.buildah.version=1.33.7, release=1755695350, build-date=2025-08-20T13:12:41, config_id=edpm, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., managed_by=edpm_ansible, name=ubi9-minimal, distribution-scope=public, version=9.6, vendor=Red Hat, Inc., config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, vcs-type=git)
Oct 09 16:20:48 compute-0 podman[145361]: 2025-10-09 16:20:48.873354899 +0000 UTC m=+0.103266097 container health_status d615e4f24ff62fbb14be4a63e6da568ec5b22309773c1c66103b6b4de64a8a2f (image=38.102.83.66:5001/podified-master-centos10/openstack-ovn-controller:watcher_latest, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_controller, container_name=ovn_controller, io.buildah.version=1.41.4, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251007, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 10 Base Image, tcib_build_tag=watcher_latest, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': '38.102.83.66:5001/podified-master-centos10/openstack-ovn-controller:watcher_latest', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']})
Oct 09 16:20:49 compute-0 ovn_metadata_agent[28608]: 2025-10-09 16:20:49.869 28613 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:ae:77:73 10.100.0.2'], port_security=[], type=localport, nat_addresses=[], virtual_parent=[], up=[False], options={}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.2/28', 'neutron:device_id': 'ovnmeta-50abc6e6-d87d-421f-9af6-73367fddd7b7', 'neutron:device_owner': 'network:distributed', 'neutron:mtu': '', 'neutron:network_name': 'neutron-50abc6e6-d87d-421f-9af6-73367fddd7b7', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'c0d4a6b1298940a88cc64a7b4527e70a', 'neutron:revision_number': '2', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=258b9110-9f29-40f9-b725-b43e91c1f000, chassis=[], tunnel_key=1, gateway_chassis=[], requested_chassis=[], logical_port=dccdf668-2b8a-499c-a2a2-204cde48f8f8) old=Port_Binding(mac=['fa:16:3e:ae:77:73'], external_ids={'neutron:cidrs': '', 'neutron:device_id': 'ovnmeta-50abc6e6-d87d-421f-9af6-73367fddd7b7', 'neutron:device_owner': 'network:distributed', 'neutron:mtu': '', 'neutron:network_name': 'neutron-50abc6e6-d87d-421f-9af6-73367fddd7b7', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'c0d4a6b1298940a88cc64a7b4527e70a', 'neutron:revision_number': '1', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}) matches /usr/lib/python3.12/site-packages/ovsdbapp/backend/ovs_idl/event.py:55
Oct 09 16:20:49 compute-0 ovn_metadata_agent[28608]: 2025-10-09 16:20:49.870 28613 INFO neutron.agent.ovn.metadata.agent [-] Metadata Port dccdf668-2b8a-499c-a2a2-204cde48f8f8 in datapath 50abc6e6-d87d-421f-9af6-73367fddd7b7 updated
Oct 09 16:20:49 compute-0 ovn_metadata_agent[28608]: 2025-10-09 16:20:49.872 28613 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 50abc6e6-d87d-421f-9af6-73367fddd7b7, tearing the namespace down if needed _get_provision_params /usr/lib/python3.12/site-packages/neutron/agent/ovn/metadata/agent.py:756
Oct 09 16:20:49 compute-0 ovn_metadata_agent[28608]: 2025-10-09 16:20:49.874 139687 DEBUG oslo.privsep.daemon [-] privsep: reply[a52ca643-c05b-4e46-9166-be27f8d04d06]: (4, False) _call_back /usr/lib/python3.12/site-packages/oslo_privsep/daemon.py:515
Oct 09 16:20:50 compute-0 nova_compute[117331]: 2025-10-09 16:20:50.254 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 09 16:20:50 compute-0 nova_compute[117331]: 2025-10-09 16:20:50.821 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 09 16:20:55 compute-0 nova_compute[117331]: 2025-10-09 16:20:55.257 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 09 16:20:55 compute-0 nova_compute[117331]: 2025-10-09 16:20:55.824 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 09 16:20:56 compute-0 ovn_metadata_agent[28608]: 2025-10-09 16:20:56.070 28613 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:04:4b:c1 10.100.0.2'], port_security=[], type=localport, nat_addresses=[], virtual_parent=[], up=[False], options={}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.2/28', 'neutron:device_id': 'ovnmeta-93fd6d6c-9399-48be-b6bd-69500ba38e91', 'neutron:device_owner': 'network:distributed', 'neutron:mtu': '', 'neutron:network_name': 'neutron-93fd6d6c-9399-48be-b6bd-69500ba38e91', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'baa0510b36ae41b9843589f7cce65fc3', 'neutron:revision_number': '2', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=c652b3e3-fa0d-497b-be47-3243153df30f, chassis=[], tunnel_key=1, gateway_chassis=[], requested_chassis=[], logical_port=7696760a-47be-40d0-9b8c-dcfba2ca1d0f) old=Port_Binding(mac=['fa:16:3e:04:4b:c1'], external_ids={'neutron:cidrs': '', 'neutron:device_id': 'ovnmeta-93fd6d6c-9399-48be-b6bd-69500ba38e91', 'neutron:device_owner': 'network:distributed', 'neutron:mtu': '', 'neutron:network_name': 'neutron-93fd6d6c-9399-48be-b6bd-69500ba38e91', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'baa0510b36ae41b9843589f7cce65fc3', 'neutron:revision_number': '1', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}) matches /usr/lib/python3.12/site-packages/ovsdbapp/backend/ovs_idl/event.py:55
Oct 09 16:20:56 compute-0 ovn_metadata_agent[28608]: 2025-10-09 16:20:56.071 28613 INFO neutron.agent.ovn.metadata.agent [-] Metadata Port 7696760a-47be-40d0-9b8c-dcfba2ca1d0f in datapath 93fd6d6c-9399-48be-b6bd-69500ba38e91 updated
Oct 09 16:20:56 compute-0 ovn_metadata_agent[28608]: 2025-10-09 16:20:56.074 28613 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 93fd6d6c-9399-48be-b6bd-69500ba38e91, tearing the namespace down if needed _get_provision_params /usr/lib/python3.12/site-packages/neutron/agent/ovn/metadata/agent.py:756
Oct 09 16:20:56 compute-0 ovn_metadata_agent[28608]: 2025-10-09 16:20:56.075 139687 DEBUG oslo.privsep.daemon [-] privsep: reply[595931f3-1ed5-4823-870b-78451d4bd69b]: (4, False) _call_back /usr/lib/python3.12/site-packages/oslo_privsep/daemon.py:515
Oct 09 16:20:59 compute-0 nova_compute[117331]: 2025-10-09 16:20:59.185 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 09 16:20:59 compute-0 ovn_metadata_agent[28608]: 2025-10-09 16:20:59.185 28613 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=13, options={'arp_ns_explicit_output': 'true', 'fdb_removal_limit': '0', 'ignore_lsp_down': 'false', 'mac_binding_removal_limit': '0', 'mac_prefix': '4a:65:5f', 'max_tunid': '16711680', 'northd_internal_version': '24.09.4-20.37.0-77.8', 'svc_monitor_mac': '32:24:1e:e3:5d:e8'}, ipsec=False) old=SB_Global(nb_cfg=12) matches /usr/lib/python3.12/site-packages/ovsdbapp/backend/ovs_idl/event.py:55
Oct 09 16:20:59 compute-0 ovn_metadata_agent[28608]: 2025-10-09 16:20:59.186 28613 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 5 seconds run /usr/lib/python3.12/site-packages/neutron/agent/ovn/metadata/agent.py:367
Oct 09 16:20:59 compute-0 podman[127775]: time="2025-10-09T16:20:59Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Oct 09 16:20:59 compute-0 podman[127775]: @ - - [09/Oct/2025:16:20:59 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 19526 "" "Go-http-client/1.1"
Oct 09 16:20:59 compute-0 podman[127775]: @ - - [09/Oct/2025:16:20:59 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 3019 "" "Go-http-client/1.1"
Oct 09 16:20:59 compute-0 podman[145388]: 2025-10-09 16:20:59.858851792 +0000 UTC m=+0.077555410 container health_status 270830b5167cafbab735d8118980f26987e673cd0fa464dd2202394830bcca66 (image=38.102.83.66:5001/podified-master-centos10/openstack-multipathd:watcher_latest, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=watcher_latest, container_name=multipathd, managed_by=edpm_ansible, org.label-schema.license=GPLv2, config_id=multipathd, org.label-schema.build-date=20251007, org.label-schema.schema-version=1.0, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': '38.102.83.66:5001/podified-master-centos10/openstack-multipathd:watcher_latest', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, io.buildah.version=1.41.4, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team)
Oct 09 16:21:00 compute-0 nova_compute[117331]: 2025-10-09 16:21:00.258 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 09 16:21:00 compute-0 nova_compute[117331]: 2025-10-09 16:21:00.826 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 09 16:21:01 compute-0 openstack_network_exporter[129925]: ERROR   16:21:01 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Oct 09 16:21:01 compute-0 openstack_network_exporter[129925]: ERROR   16:21:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Oct 09 16:21:01 compute-0 openstack_network_exporter[129925]: ERROR   16:21:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Oct 09 16:21:01 compute-0 openstack_network_exporter[129925]: ERROR   16:21:01 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Oct 09 16:21:01 compute-0 openstack_network_exporter[129925]: 
Oct 09 16:21:01 compute-0 openstack_network_exporter[129925]: ERROR   16:21:01 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Oct 09 16:21:01 compute-0 openstack_network_exporter[129925]: 
Oct 09 16:21:03 compute-0 podman[145409]: 2025-10-09 16:21:03.847076672 +0000 UTC m=+0.067992595 container health_status 86aa93e887899e54d383b0fc17fb3c5637680d1d8e146a130a557c837f0260fe (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']})
Oct 09 16:21:04 compute-0 ovn_metadata_agent[28608]: 2025-10-09 16:21:04.189 28613 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=9954897f-aa83-45dd-8e84-289816676c2a, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '13'}),), if_exists=True) do_commit /usr/lib/python3.12/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 09 16:21:05 compute-0 nova_compute[117331]: 2025-10-09 16:21:05.260 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 09 16:21:05 compute-0 nova_compute[117331]: 2025-10-09 16:21:05.828 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 09 16:21:08 compute-0 podman[145433]: 2025-10-09 16:21:08.835347552 +0000 UTC m=+0.066141366 container health_status 0e966c7a283689daf23c5db5017f23f685ee836319240188a9f70ddd246563c5 (image=38.102.83.66:5001/podified-master-centos10/openstack-neutron-metadata-agent-ovn:watcher_latest, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 10 Base Image, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': '38.102.83.66:5001/podified-master-centos10/openstack-neutron-metadata-agent-ovn:watcher_latest', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.4, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, tcib_build_tag=watcher_latest, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251007)
Oct 09 16:21:08 compute-0 podman[145434]: 2025-10-09 16:21:08.861790404 +0000 UTC m=+0.080507664 container health_status 8227728d23f374cb9528be6ddf01dff38d8c792912a17b0d137932a18390b820 (image=38.102.83.66:5001/podified-master-centos10/openstack-iscsid:watcher_latest, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_id=iscsid, io.buildah.version=1.41.4, org.label-schema.schema-version=1.0, tcib_build_tag=watcher_latest, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': '38.102.83.66:5001/podified-master-centos10/openstack-iscsid:watcher_latest', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, org.label-schema.build-date=20251007, org.label-schema.name=CentOS Stream 10 Base Image, container_name=iscsid, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2)
Oct 09 16:21:10 compute-0 nova_compute[117331]: 2025-10-09 16:21:10.263 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 09 16:21:10 compute-0 nova_compute[117331]: 2025-10-09 16:21:10.830 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 09 16:21:11 compute-0 ovn_controller[19752]: 2025-10-09T16:21:11Z|00120|memory_trim|INFO|Detected inactivity (last active 30006 ms ago): trimming memory
Oct 09 16:21:14 compute-0 nova_compute[117331]: 2025-10-09 16:21:14.306 2 DEBUG oslo_service.periodic_task [None req-92a13bed-c081-4c7b-87f3-bafebefb79f2 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.12/site-packages/oslo_service/periodic_task.py:210
Oct 09 16:21:15 compute-0 nova_compute[117331]: 2025-10-09 16:21:15.306 2 DEBUG oslo_service.periodic_task [None req-92a13bed-c081-4c7b-87f3-bafebefb79f2 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.12/site-packages/oslo_service/periodic_task.py:210
Oct 09 16:21:15 compute-0 nova_compute[117331]: 2025-10-09 16:21:15.306 2 DEBUG oslo_service.periodic_task [None req-92a13bed-c081-4c7b-87f3-bafebefb79f2 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.12/site-packages/oslo_service/periodic_task.py:210
Oct 09 16:21:15 compute-0 nova_compute[117331]: 2025-10-09 16:21:15.307 2 DEBUG nova.compute.manager [None req-92a13bed-c081-4c7b-87f3-bafebefb79f2 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.12/site-packages/nova/compute/manager.py:11228
Oct 09 16:21:15 compute-0 nova_compute[117331]: 2025-10-09 16:21:15.308 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 09 16:21:15 compute-0 nova_compute[117331]: 2025-10-09 16:21:15.834 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 09 16:21:17 compute-0 nova_compute[117331]: 2025-10-09 16:21:17.306 2 DEBUG oslo_service.periodic_task [None req-92a13bed-c081-4c7b-87f3-bafebefb79f2 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.12/site-packages/oslo_service/periodic_task.py:210
Oct 09 16:21:17 compute-0 nova_compute[117331]: 2025-10-09 16:21:17.819 2 DEBUG oslo_concurrency.lockutils [None req-92a13bed-c081-4c7b-87f3-bafebefb79f2 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:405
Oct 09 16:21:17 compute-0 nova_compute[117331]: 2025-10-09 16:21:17.820 2 DEBUG oslo_concurrency.lockutils [None req-92a13bed-c081-4c7b-87f3-bafebefb79f2 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:410
Oct 09 16:21:17 compute-0 nova_compute[117331]: 2025-10-09 16:21:17.820 2 DEBUG oslo_concurrency.lockutils [None req-92a13bed-c081-4c7b-87f3-bafebefb79f2 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:424
Oct 09 16:21:17 compute-0 nova_compute[117331]: 2025-10-09 16:21:17.820 2 DEBUG nova.compute.resource_tracker [None req-92a13bed-c081-4c7b-87f3-bafebefb79f2 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.12/site-packages/nova/compute/resource_tracker.py:937
Oct 09 16:21:17 compute-0 podman[145470]: 2025-10-09 16:21:17.828085468 +0000 UTC m=+0.061862980 container health_status 64fa251112d01abb34ec1bb8e22019815ddcaaaa391ce5ec91517147ac5a9994 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., url=https://catalog.redhat.com/en/search?searchType=containers, vendor=Red Hat, Inc., io.openshift.expose-services=, maintainer=Red Hat, Inc., architecture=x86_64, io.buildah.version=1.33.7, vcs-type=git, version=9.6, io.openshift.tags=minimal rhel9, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., build-date=2025-08-20T13:12:41, config_id=edpm, container_name=openstack_network_exporter, com.redhat.component=ubi9-minimal-container, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, distribution-scope=public, managed_by=edpm_ansible, name=ubi9-minimal, release=1755695350)
Oct 09 16:21:17 compute-0 nova_compute[117331]: 2025-10-09 16:21:17.957 2 WARNING nova.virt.libvirt.driver [None req-92a13bed-c081-4c7b-87f3-bafebefb79f2 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Oct 09 16:21:17 compute-0 nova_compute[117331]: 2025-10-09 16:21:17.958 2 DEBUG oslo_concurrency.processutils [None req-92a13bed-c081-4c7b-87f3-bafebefb79f2 - - - - - -] Running cmd (subprocess): env LANG=C uptime execute /usr/lib/python3.12/site-packages/oslo_concurrency/processutils.py:349
Oct 09 16:21:17 compute-0 nova_compute[117331]: 2025-10-09 16:21:17.980 2 DEBUG oslo_concurrency.processutils [None req-92a13bed-c081-4c7b-87f3-bafebefb79f2 - - - - - -] CMD "env LANG=C uptime" returned: 0 in 0.021s execute /usr/lib/python3.12/site-packages/oslo_concurrency/processutils.py:372
Oct 09 16:21:17 compute-0 nova_compute[117331]: 2025-10-09 16:21:17.980 2 DEBUG nova.compute.resource_tracker [None req-92a13bed-c081-4c7b-87f3-bafebefb79f2 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=6185MB free_disk=73.26224136352539GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.12/site-packages/nova/compute/resource_tracker.py:1136
Oct 09 16:21:17 compute-0 nova_compute[117331]: 2025-10-09 16:21:17.980 2 DEBUG oslo_concurrency.lockutils [None req-92a13bed-c081-4c7b-87f3-bafebefb79f2 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:405
Oct 09 16:21:17 compute-0 nova_compute[117331]: 2025-10-09 16:21:17.981 2 DEBUG oslo_concurrency.lockutils [None req-92a13bed-c081-4c7b-87f3-bafebefb79f2 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:410
Oct 09 16:21:19 compute-0 nova_compute[117331]: 2025-10-09 16:21:19.030 2 DEBUG nova.compute.resource_tracker [None req-92a13bed-c081-4c7b-87f3-bafebefb79f2 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.12/site-packages/nova/compute/resource_tracker.py:1159
Oct 09 16:21:19 compute-0 nova_compute[117331]: 2025-10-09 16:21:19.031 2 DEBUG nova.compute.resource_tracker [None req-92a13bed-c081-4c7b-87f3-bafebefb79f2 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=79GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] stats={'failed_builds': '0', 'uptime': ' 16:21:17 up 30 min,  0 user,  load average: 0.28, 0.28, 0.28\n'} _report_final_resource_view /usr/lib/python3.12/site-packages/nova/compute/resource_tracker.py:1168
Oct 09 16:21:19 compute-0 nova_compute[117331]: 2025-10-09 16:21:19.049 2 DEBUG nova.scheduler.client.report [None req-92a13bed-c081-4c7b-87f3-bafebefb79f2 - - - - - -] Refreshing inventories for resource provider 593051b8-2000-437f-a915-2616fc8b1671 _refresh_associations /usr/lib/python3.12/site-packages/nova/scheduler/client/report.py:822
Oct 09 16:21:19 compute-0 nova_compute[117331]: 2025-10-09 16:21:19.067 2 DEBUG nova.scheduler.client.report [None req-92a13bed-c081-4c7b-87f3-bafebefb79f2 - - - - - -] Updating ProviderTree inventory for provider 593051b8-2000-437f-a915-2616fc8b1671 from _refresh_and_get_inventory using data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 79, 'reserved': 1, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} _refresh_and_get_inventory /usr/lib/python3.12/site-packages/nova/scheduler/client/report.py:786
Oct 09 16:21:19 compute-0 nova_compute[117331]: 2025-10-09 16:21:19.068 2 DEBUG nova.compute.provider_tree [None req-92a13bed-c081-4c7b-87f3-bafebefb79f2 - - - - - -] Updating inventory in ProviderTree for provider 593051b8-2000-437f-a915-2616fc8b1671 with inventory: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 79, 'reserved': 1, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} update_inventory /usr/lib/python3.12/site-packages/nova/compute/provider_tree.py:176
Oct 09 16:21:19 compute-0 nova_compute[117331]: 2025-10-09 16:21:19.085 2 DEBUG nova.scheduler.client.report [None req-92a13bed-c081-4c7b-87f3-bafebefb79f2 - - - - - -] Refreshing aggregate associations for resource provider 593051b8-2000-437f-a915-2616fc8b1671, aggregates: None _refresh_associations /usr/lib/python3.12/site-packages/nova/scheduler/client/report.py:831
Oct 09 16:21:19 compute-0 nova_compute[117331]: 2025-10-09 16:21:19.117 2 DEBUG nova.scheduler.client.report [None req-92a13bed-c081-4c7b-87f3-bafebefb79f2 - - - - - -] Refreshing trait associations for resource provider 593051b8-2000-437f-a915-2616fc8b1671, traits: HW_CPU_X86_AVX2,HW_CPU_X86_BMI,COMPUTE_SOUND_MODEL_ICH6,COMPUTE_IMAGE_TYPE_RAW,COMPUTE_ARCH_X86_64,COMPUTE_IMAGE_TYPE_AMI,HW_CPU_X86_SSSE3,COMPUTE_SECURITY_TPM_CRB,COMPUTE_GRAPHICS_MODEL_CIRRUS,COMPUTE_GRAPHICS_MODEL_NONE,COMPUTE_VIOMMU_MODEL_VIRTIO,HW_CPU_X86_AESNI,COMPUTE_NET_VIF_MODEL_VMXNET3,COMPUTE_SOUND_MODEL_PCSPK,COMPUTE_VOLUME_ATTACH_WITH_TAG,COMPUTE_SOUND_MODEL_ES1370,COMPUTE_NODE,COMPUTE_VIOMMU_MODEL_AUTO,COMPUTE_SOUND_MODEL_ICH9,COMPUTE_IMAGE_TYPE_QCOW2,COMPUTE_VOLUME_MULTI_ATTACH,COMPUTE_SECURITY_UEFI_SECURE_BOOT,COMPUTE_STORAGE_BUS_USB,COMPUTE_GRAPHICS_MODEL_BOCHS,COMPUTE_SECURITY_TPM_TIS,HW_ARCH_X86_64,COMPUTE_SOUND_MODEL_VIRTIO,HW_CPU_X86_BMI2,COMPUTE_STORAGE_BUS_FDC,COMPUTE_SOUND_MODEL_USB,COMPUTE_STORAGE_BUS_SATA,COMPUTE_SOUND_MODEL_AC97,COMPUTE_NET_VIF_MODEL_SPAPR_VLAN,COMPUTE_IMAGE_TYPE_ISO,COMPUTE_VIOMMU_MODEL_INTEL,COMPUTE_NET_VIF_MODEL_PCNET,HW_CPU_X86_F16C,COMPUTE_NET_VIF_MODEL_IGB,COMPUTE_ACCELERATORS,COMPUTE_NET_VIF_MODEL_RTL8139,HW_CPU_X86_AMD_SVM,COMPUTE_NET_VIF_MODEL_E1000E,HW_CPU_X86_CLMUL,COMPUTE_NET_VIF_MODEL_NE2K_PCI,HW_CPU_X86_SHA,COMPUTE_NET_VIF_MODEL_VIRTIO,HW_CPU_X86_AVX,COMPUTE_ADDRESS_SPACE_EMULATED,COMPUTE_NET_VIF_MODEL_E1000,COMPUTE_IMAGE_TYPE_ARI,COMPUTE_NET_ATTACH_INTERFACE,HW_CPU_X86_SSE4A,COMPUTE_TRUSTED_CERTS,COMPUTE_GRAPHICS_MODEL_VGA,HW_CPU_X86_SSE2,COMPUTE_GRAPHICS_MODEL_VIRTIO,COMPUTE_STORAGE_VIRTIO_FS,COMPUTE_SECURITY_STATELESS_FIRMWARE,COMPUTE_NET_ATTACH_INTERFACE_WITH_TAG,HW_CPU_X86_SSE,HW_CPU_X86_SVM,COMPUTE_STORAGE_BUS_VIRTIO,COMPUTE_NET_VIRTIO_PACKED,COMPUTE_SECURITY_TPM_2_0,HW_CPU_X86_MMX,HW_CPU_X86_FMA3,COMPUTE_VOLUME_EXTEND,COMPUTE_DEVICE_TAGGING,HW_CPU_X86_SSE42,COMPUTE_SOCKET_PCI_NUMA_AFFINITY,COMPUTE_IMAGE_TYPE_AKI,COMPUTE_STORAGE_BUS_SCSI,COMPUTE_ADDRESS_SPACE_PASSTHROUGH,COMPUTE_RESCUE_BFV,COMPUTE_SOUND_MODEL_SB16,HW_CPU_X86_SSE41,COMPUTE_STORAGE_BUS_IDE,HW_CPU_X86_ABM _refresh_associations /usr/lib/python3.12/site-packages/nova/scheduler/client/report.py:843
Oct 09 16:21:19 compute-0 nova_compute[117331]: 2025-10-09 16:21:19.140 2 DEBUG nova.compute.provider_tree [None req-92a13bed-c081-4c7b-87f3-bafebefb79f2 - - - - - -] Inventory has not changed in ProviderTree for provider: 593051b8-2000-437f-a915-2616fc8b1671 update_inventory /usr/lib/python3.12/site-packages/nova/compute/provider_tree.py:180
Oct 09 16:21:19 compute-0 nova_compute[117331]: 2025-10-09 16:21:19.649 2 DEBUG nova.scheduler.client.report [None req-92a13bed-c081-4c7b-87f3-bafebefb79f2 - - - - - -] Inventory has not changed for provider 593051b8-2000-437f-a915-2616fc8b1671 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 79, 'reserved': 1, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.12/site-packages/nova/scheduler/client/report.py:958
Oct 09 16:21:19 compute-0 podman[145493]: 2025-10-09 16:21:19.906581172 +0000 UTC m=+0.132433086 container health_status d615e4f24ff62fbb14be4a63e6da568ec5b22309773c1c66103b6b4de64a8a2f (image=38.102.83.66:5001/podified-master-centos10/openstack-ovn-controller:watcher_latest, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251007, org.label-schema.schema-version=1.0, tcib_managed=true, container_name=ovn_controller, io.buildah.version=1.41.4, maintainer=OpenStack Kubernetes Operator team, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': '38.102.83.66:5001/podified-master-centos10/openstack-ovn-controller:watcher_latest', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 10 Base Image, tcib_build_tag=watcher_latest, managed_by=edpm_ansible)
Oct 09 16:21:20 compute-0 nova_compute[117331]: 2025-10-09 16:21:20.158 2 DEBUG nova.compute.resource_tracker [None req-92a13bed-c081-4c7b-87f3-bafebefb79f2 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.12/site-packages/nova/compute/resource_tracker.py:1097
Oct 09 16:21:20 compute-0 nova_compute[117331]: 2025-10-09 16:21:20.158 2 DEBUG oslo_concurrency.lockutils [None req-92a13bed-c081-4c7b-87f3-bafebefb79f2 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 2.177s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:424
Oct 09 16:21:20 compute-0 nova_compute[117331]: 2025-10-09 16:21:20.309 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 09 16:21:20 compute-0 nova_compute[117331]: 2025-10-09 16:21:20.834 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 09 16:21:23 compute-0 nova_compute[117331]: 2025-10-09 16:21:23.159 2 DEBUG oslo_service.periodic_task [None req-92a13bed-c081-4c7b-87f3-bafebefb79f2 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.12/site-packages/oslo_service/periodic_task.py:210
Oct 09 16:21:23 compute-0 nova_compute[117331]: 2025-10-09 16:21:23.159 2 DEBUG oslo_service.periodic_task [None req-92a13bed-c081-4c7b-87f3-bafebefb79f2 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.12/site-packages/oslo_service/periodic_task.py:210
Oct 09 16:21:23 compute-0 nova_compute[117331]: 2025-10-09 16:21:23.160 2 DEBUG oslo_service.periodic_task [None req-92a13bed-c081-4c7b-87f3-bafebefb79f2 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.12/site-packages/oslo_service/periodic_task.py:210
Oct 09 16:21:23 compute-0 nova_compute[117331]: 2025-10-09 16:21:23.160 2 DEBUG oslo_service.periodic_task [None req-92a13bed-c081-4c7b-87f3-bafebefb79f2 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.12/site-packages/oslo_service/periodic_task.py:210
Oct 09 16:21:25 compute-0 nova_compute[117331]: 2025-10-09 16:21:25.345 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 09 16:21:25 compute-0 nova_compute[117331]: 2025-10-09 16:21:25.839 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 09 16:21:25 compute-0 ovn_metadata_agent[28608]: 2025-10-09 16:21:25.852 28613 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:11:fa:7f 10.100.0.2'], port_security=[], type=localport, nat_addresses=[], virtual_parent=[], up=[False], options={}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.2/28', 'neutron:device_id': 'ovnmeta-c78b28f7-c251-4d74-863e-8d5520acbae0', 'neutron:device_owner': 'network:distributed', 'neutron:mtu': '', 'neutron:network_name': 'neutron-c78b28f7-c251-4d74-863e-8d5520acbae0', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '95a019f93e534ec0ad7dca9e44d00556', 'neutron:revision_number': '2', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=8cdf3869-9618-417b-be95-470341634549, chassis=[], tunnel_key=1, gateway_chassis=[], requested_chassis=[], logical_port=ba8bb1df-8aa6-40e9-83d3-fc3264c1941e) old=Port_Binding(mac=['fa:16:3e:11:fa:7f'], external_ids={'neutron:cidrs': '', 'neutron:device_id': 'ovnmeta-c78b28f7-c251-4d74-863e-8d5520acbae0', 'neutron:device_owner': 'network:distributed', 'neutron:mtu': '', 'neutron:network_name': 'neutron-c78b28f7-c251-4d74-863e-8d5520acbae0', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '95a019f93e534ec0ad7dca9e44d00556', 'neutron:revision_number': '1', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}) matches /usr/lib/python3.12/site-packages/ovsdbapp/backend/ovs_idl/event.py:55
Oct 09 16:21:25 compute-0 ovn_metadata_agent[28608]: 2025-10-09 16:21:25.854 28613 INFO neutron.agent.ovn.metadata.agent [-] Metadata Port ba8bb1df-8aa6-40e9-83d3-fc3264c1941e in datapath c78b28f7-c251-4d74-863e-8d5520acbae0 updated
Oct 09 16:21:25 compute-0 ovn_metadata_agent[28608]: 2025-10-09 16:21:25.854 28613 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network c78b28f7-c251-4d74-863e-8d5520acbae0, tearing the namespace down if needed _get_provision_params /usr/lib/python3.12/site-packages/neutron/agent/ovn/metadata/agent.py:756
Oct 09 16:21:25 compute-0 ovn_metadata_agent[28608]: 2025-10-09 16:21:25.855 139687 DEBUG oslo.privsep.daemon [-] privsep: reply[de3b1196-f63b-4fc6-846f-2f4d56a9966f]: (4, False) _call_back /usr/lib/python3.12/site-packages/oslo_privsep/daemon.py:515
Oct 09 16:21:29 compute-0 podman[127775]: time="2025-10-09T16:21:29Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Oct 09 16:21:29 compute-0 podman[127775]: @ - - [09/Oct/2025:16:21:29 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 19526 "" "Go-http-client/1.1"
Oct 09 16:21:29 compute-0 podman[127775]: @ - - [09/Oct/2025:16:21:29 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 3022 "" "Go-http-client/1.1"
Oct 09 16:21:30 compute-0 nova_compute[117331]: 2025-10-09 16:21:30.346 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 09 16:21:30 compute-0 nova_compute[117331]: 2025-10-09 16:21:30.839 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 09 16:21:30 compute-0 podman[145519]: 2025-10-09 16:21:30.844388438 +0000 UTC m=+0.070492525 container health_status 270830b5167cafbab735d8118980f26987e673cd0fa464dd2202394830bcca66 (image=38.102.83.66:5001/podified-master-centos10/openstack-multipathd:watcher_latest, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251007, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=watcher_latest, container_name=multipathd, io.buildah.version=1.41.4, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': '38.102.83.66:5001/podified-master-centos10/openstack-multipathd:watcher_latest', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=multipathd)
Oct 09 16:21:31 compute-0 ovn_metadata_agent[28608]: 2025-10-09 16:21:31.382 28613 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:fa:98:8c 10.100.0.2'], port_security=[], type=localport, nat_addresses=[], virtual_parent=[], up=[False], options={}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.2/28', 'neutron:device_id': 'ovnmeta-8542aca8-9d18-4de6-95b5-c5f05cbba9b9', 'neutron:device_owner': 'network:distributed', 'neutron:mtu': '', 'neutron:network_name': 'neutron-8542aca8-9d18-4de6-95b5-c5f05cbba9b9', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'c47758289b4c448c95f927a91d89e09f', 'neutron:revision_number': '2', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=9b963051-dc32-4cc1-ab68-132bbbfb0453, chassis=[], tunnel_key=1, gateway_chassis=[], requested_chassis=[], logical_port=19bc55b5-594a-4314-b9f5-08c006b74ed1) old=Port_Binding(mac=['fa:16:3e:fa:98:8c'], external_ids={'neutron:cidrs': '', 'neutron:device_id': 'ovnmeta-8542aca8-9d18-4de6-95b5-c5f05cbba9b9', 'neutron:device_owner': 'network:distributed', 'neutron:mtu': '', 'neutron:network_name': 'neutron-8542aca8-9d18-4de6-95b5-c5f05cbba9b9', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'c47758289b4c448c95f927a91d89e09f', 'neutron:revision_number': '1', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}) matches /usr/lib/python3.12/site-packages/ovsdbapp/backend/ovs_idl/event.py:55
Oct 09 16:21:31 compute-0 ovn_metadata_agent[28608]: 2025-10-09 16:21:31.383 28613 INFO neutron.agent.ovn.metadata.agent [-] Metadata Port 19bc55b5-594a-4314-b9f5-08c006b74ed1 in datapath 8542aca8-9d18-4de6-95b5-c5f05cbba9b9 updated
Oct 09 16:21:31 compute-0 ovn_metadata_agent[28608]: 2025-10-09 16:21:31.385 28613 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 8542aca8-9d18-4de6-95b5-c5f05cbba9b9, tearing the namespace down if needed _get_provision_params /usr/lib/python3.12/site-packages/neutron/agent/ovn/metadata/agent.py:756
Oct 09 16:21:31 compute-0 ovn_metadata_agent[28608]: 2025-10-09 16:21:31.386 139687 DEBUG oslo.privsep.daemon [-] privsep: reply[95092f4d-f479-40b9-87c7-cf5d7045e1ea]: (4, False) _call_back /usr/lib/python3.12/site-packages/oslo_privsep/daemon.py:515
Oct 09 16:21:31 compute-0 openstack_network_exporter[129925]: ERROR   16:21:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Oct 09 16:21:31 compute-0 openstack_network_exporter[129925]: ERROR   16:21:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Oct 09 16:21:31 compute-0 openstack_network_exporter[129925]: ERROR   16:21:31 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Oct 09 16:21:31 compute-0 openstack_network_exporter[129925]: ERROR   16:21:31 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Oct 09 16:21:31 compute-0 openstack_network_exporter[129925]: 
Oct 09 16:21:31 compute-0 openstack_network_exporter[129925]: ERROR   16:21:31 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Oct 09 16:21:31 compute-0 openstack_network_exporter[129925]: 
Oct 09 16:21:34 compute-0 podman[145539]: 2025-10-09 16:21:34.839249898 +0000 UTC m=+0.066581480 container health_status 86aa93e887899e54d383b0fc17fb3c5637680d1d8e146a130a557c837f0260fe (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm)
Oct 09 16:21:35 compute-0 ovn_metadata_agent[28608]: 2025-10-09 16:21:35.306 28613 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:405
Oct 09 16:21:35 compute-0 ovn_metadata_agent[28608]: 2025-10-09 16:21:35.306 28613 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.000s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:410
Oct 09 16:21:35 compute-0 ovn_metadata_agent[28608]: 2025-10-09 16:21:35.306 28613 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:424
Oct 09 16:21:35 compute-0 nova_compute[117331]: 2025-10-09 16:21:35.347 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 09 16:21:35 compute-0 nova_compute[117331]: 2025-10-09 16:21:35.842 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 09 16:21:39 compute-0 podman[145565]: 2025-10-09 16:21:39.842003029 +0000 UTC m=+0.069792483 container health_status 0e966c7a283689daf23c5db5017f23f685ee836319240188a9f70ddd246563c5 (image=38.102.83.66:5001/podified-master-centos10/openstack-neutron-metadata-agent-ovn:watcher_latest, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': '38.102.83.66:5001/podified-master-centos10/openstack-neutron-metadata-agent-ovn:watcher_latest', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 10 Base Image, tcib_build_tag=watcher_latest, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, io.buildah.version=1.41.4, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251007, tcib_managed=true, org.label-schema.schema-version=1.0)
Oct 09 16:21:39 compute-0 podman[145566]: 2025-10-09 16:21:39.857661778 +0000 UTC m=+0.076951341 container health_status 8227728d23f374cb9528be6ddf01dff38d8c792912a17b0d137932a18390b820 (image=38.102.83.66:5001/podified-master-centos10/openstack-iscsid:watcher_latest, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': '38.102.83.66:5001/podified-master-centos10/openstack-iscsid:watcher_latest', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.build-date=20251007, config_id=iscsid, container_name=iscsid, io.buildah.version=1.41.4, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=watcher_latest)
Oct 09 16:21:39 compute-0 nova_compute[117331]: 2025-10-09 16:21:39.891 2 DEBUG oslo_concurrency.lockutils [None req-e40473d0-19c5-49b5-95f8-f15dcccc1600 2ed3ac10329446d8a5aae566951cae1e c47758289b4c448c95f927a91d89e09f - - default default] Acquiring lock "4537359c-cc98-4f5e-87c4-2410f96f0e44" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:405
Oct 09 16:21:39 compute-0 nova_compute[117331]: 2025-10-09 16:21:39.891 2 DEBUG oslo_concurrency.lockutils [None req-e40473d0-19c5-49b5-95f8-f15dcccc1600 2ed3ac10329446d8a5aae566951cae1e c47758289b4c448c95f927a91d89e09f - - default default] Lock "4537359c-cc98-4f5e-87c4-2410f96f0e44" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.000s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:410
Oct 09 16:21:40 compute-0 nova_compute[117331]: 2025-10-09 16:21:40.350 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 09 16:21:40 compute-0 nova_compute[117331]: 2025-10-09 16:21:40.396 2 DEBUG nova.compute.manager [None req-e40473d0-19c5-49b5-95f8-f15dcccc1600 2ed3ac10329446d8a5aae566951cae1e c47758289b4c448c95f927a91d89e09f - - default default] [instance: 4537359c-cc98-4f5e-87c4-2410f96f0e44] Starting instance... _do_build_and_run_instance /usr/lib/python3.12/site-packages/nova/compute/manager.py:2472
Oct 09 16:21:40 compute-0 nova_compute[117331]: 2025-10-09 16:21:40.844 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 09 16:21:40 compute-0 nova_compute[117331]: 2025-10-09 16:21:40.952 2 DEBUG oslo_concurrency.lockutils [None req-e40473d0-19c5-49b5-95f8-f15dcccc1600 2ed3ac10329446d8a5aae566951cae1e c47758289b4c448c95f927a91d89e09f - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:405
Oct 09 16:21:40 compute-0 nova_compute[117331]: 2025-10-09 16:21:40.953 2 DEBUG oslo_concurrency.lockutils [None req-e40473d0-19c5-49b5-95f8-f15dcccc1600 2ed3ac10329446d8a5aae566951cae1e c47758289b4c448c95f927a91d89e09f - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:410
Oct 09 16:21:40 compute-0 nova_compute[117331]: 2025-10-09 16:21:40.961 2 DEBUG nova.virt.hardware [None req-e40473d0-19c5-49b5-95f8-f15dcccc1600 2ed3ac10329446d8a5aae566951cae1e c47758289b4c448c95f927a91d89e09f - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.12/site-packages/nova/virt/hardware.py:2528
Oct 09 16:21:40 compute-0 nova_compute[117331]: 2025-10-09 16:21:40.961 2 INFO nova.compute.claims [None req-e40473d0-19c5-49b5-95f8-f15dcccc1600 2ed3ac10329446d8a5aae566951cae1e c47758289b4c448c95f927a91d89e09f - - default default] [instance: 4537359c-cc98-4f5e-87c4-2410f96f0e44] Claim successful on node compute-0.ctlplane.example.com
Oct 09 16:21:42 compute-0 nova_compute[117331]: 2025-10-09 16:21:42.016 2 DEBUG nova.compute.provider_tree [None req-e40473d0-19c5-49b5-95f8-f15dcccc1600 2ed3ac10329446d8a5aae566951cae1e c47758289b4c448c95f927a91d89e09f - - default default] Inventory has not changed in ProviderTree for provider: 593051b8-2000-437f-a915-2616fc8b1671 update_inventory /usr/lib/python3.12/site-packages/nova/compute/provider_tree.py:180
Oct 09 16:21:42 compute-0 nova_compute[117331]: 2025-10-09 16:21:42.524 2 DEBUG nova.scheduler.client.report [None req-e40473d0-19c5-49b5-95f8-f15dcccc1600 2ed3ac10329446d8a5aae566951cae1e c47758289b4c448c95f927a91d89e09f - - default default] Inventory has not changed for provider 593051b8-2000-437f-a915-2616fc8b1671 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 79, 'reserved': 1, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.12/site-packages/nova/scheduler/client/report.py:958
Oct 09 16:21:43 compute-0 nova_compute[117331]: 2025-10-09 16:21:43.035 2 DEBUG oslo_concurrency.lockutils [None req-e40473d0-19c5-49b5-95f8-f15dcccc1600 2ed3ac10329446d8a5aae566951cae1e c47758289b4c448c95f927a91d89e09f - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 2.082s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:424
Oct 09 16:21:43 compute-0 nova_compute[117331]: 2025-10-09 16:21:43.036 2 DEBUG nova.compute.manager [None req-e40473d0-19c5-49b5-95f8-f15dcccc1600 2ed3ac10329446d8a5aae566951cae1e c47758289b4c448c95f927a91d89e09f - - default default] [instance: 4537359c-cc98-4f5e-87c4-2410f96f0e44] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.12/site-packages/nova/compute/manager.py:2869
Oct 09 16:21:43 compute-0 nova_compute[117331]: 2025-10-09 16:21:43.548 2 DEBUG nova.compute.manager [None req-e40473d0-19c5-49b5-95f8-f15dcccc1600 2ed3ac10329446d8a5aae566951cae1e c47758289b4c448c95f927a91d89e09f - - default default] [instance: 4537359c-cc98-4f5e-87c4-2410f96f0e44] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.12/site-packages/nova/compute/manager.py:2016
Oct 09 16:21:43 compute-0 nova_compute[117331]: 2025-10-09 16:21:43.549 2 DEBUG nova.network.neutron [None req-e40473d0-19c5-49b5-95f8-f15dcccc1600 2ed3ac10329446d8a5aae566951cae1e c47758289b4c448c95f927a91d89e09f - - default default] [instance: 4537359c-cc98-4f5e-87c4-2410f96f0e44] allocate_for_instance() allocate_for_instance /usr/lib/python3.12/site-packages/nova/network/neutron.py:1208
Oct 09 16:21:43 compute-0 nova_compute[117331]: 2025-10-09 16:21:43.549 2 WARNING neutronclient.v2_0.client [None req-e40473d0-19c5-49b5-95f8-f15dcccc1600 2ed3ac10329446d8a5aae566951cae1e c47758289b4c448c95f927a91d89e09f - - default default] The python binding code in neutronclient is deprecated in favor of OpenstackSDK, please use that as this will be removed in a future release.
Oct 09 16:21:43 compute-0 nova_compute[117331]: 2025-10-09 16:21:43.550 2 WARNING neutronclient.v2_0.client [None req-e40473d0-19c5-49b5-95f8-f15dcccc1600 2ed3ac10329446d8a5aae566951cae1e c47758289b4c448c95f927a91d89e09f - - default default] The python binding code in neutronclient is deprecated in favor of OpenstackSDK, please use that as this will be removed in a future release.
Oct 09 16:21:44 compute-0 nova_compute[117331]: 2025-10-09 16:21:44.057 2 INFO nova.virt.libvirt.driver [None req-e40473d0-19c5-49b5-95f8-f15dcccc1600 2ed3ac10329446d8a5aae566951cae1e c47758289b4c448c95f927a91d89e09f - - default default] [instance: 4537359c-cc98-4f5e-87c4-2410f96f0e44] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Oct 09 16:21:44 compute-0 nova_compute[117331]: 2025-10-09 16:21:44.313 2 DEBUG nova.network.neutron [None req-e40473d0-19c5-49b5-95f8-f15dcccc1600 2ed3ac10329446d8a5aae566951cae1e c47758289b4c448c95f927a91d89e09f - - default default] [instance: 4537359c-cc98-4f5e-87c4-2410f96f0e44] Successfully created port: ab5bf45b-d6f4-4ecf-95d5-b00418cb03fa _create_port_minimal /usr/lib/python3.12/site-packages/nova/network/neutron.py:550
Oct 09 16:21:44 compute-0 nova_compute[117331]: 2025-10-09 16:21:44.568 2 DEBUG nova.compute.manager [None req-e40473d0-19c5-49b5-95f8-f15dcccc1600 2ed3ac10329446d8a5aae566951cae1e c47758289b4c448c95f927a91d89e09f - - default default] [instance: 4537359c-cc98-4f5e-87c4-2410f96f0e44] Start building block device mappings for instance. _build_resources /usr/lib/python3.12/site-packages/nova/compute/manager.py:2904
Oct 09 16:21:45 compute-0 nova_compute[117331]: 2025-10-09 16:21:45.384 2 DEBUG nova.network.neutron [None req-e40473d0-19c5-49b5-95f8-f15dcccc1600 2ed3ac10329446d8a5aae566951cae1e c47758289b4c448c95f927a91d89e09f - - default default] [instance: 4537359c-cc98-4f5e-87c4-2410f96f0e44] Successfully updated port: ab5bf45b-d6f4-4ecf-95d5-b00418cb03fa _update_port /usr/lib/python3.12/site-packages/nova/network/neutron.py:588
Oct 09 16:21:45 compute-0 nova_compute[117331]: 2025-10-09 16:21:45.386 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 09 16:21:45 compute-0 nova_compute[117331]: 2025-10-09 16:21:45.458 2 DEBUG nova.compute.manager [req-c3ecb134-82d0-4c54-828d-b4c19a1fbded req-722c53eb-ddd3-4d19-a003-9ca7975511ec ee15961ad2be4bada6c106ac4f69f422 c076c42271004e7aa86e84416e33f826 - - default default] [instance: 4537359c-cc98-4f5e-87c4-2410f96f0e44] Received event network-changed-ab5bf45b-d6f4-4ecf-95d5-b00418cb03fa external_instance_event /usr/lib/python3.12/site-packages/nova/compute/manager.py:11812
Oct 09 16:21:45 compute-0 nova_compute[117331]: 2025-10-09 16:21:45.459 2 DEBUG nova.compute.manager [req-c3ecb134-82d0-4c54-828d-b4c19a1fbded req-722c53eb-ddd3-4d19-a003-9ca7975511ec ee15961ad2be4bada6c106ac4f69f422 c076c42271004e7aa86e84416e33f826 - - default default] [instance: 4537359c-cc98-4f5e-87c4-2410f96f0e44] Refreshing instance network info cache due to event network-changed-ab5bf45b-d6f4-4ecf-95d5-b00418cb03fa. external_instance_event /usr/lib/python3.12/site-packages/nova/compute/manager.py:11817
Oct 09 16:21:45 compute-0 nova_compute[117331]: 2025-10-09 16:21:45.460 2 DEBUG oslo_concurrency.lockutils [req-c3ecb134-82d0-4c54-828d-b4c19a1fbded req-722c53eb-ddd3-4d19-a003-9ca7975511ec ee15961ad2be4bada6c106ac4f69f422 c076c42271004e7aa86e84416e33f826 - - default default] Acquiring lock "refresh_cache-4537359c-cc98-4f5e-87c4-2410f96f0e44" lock /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:313
Oct 09 16:21:45 compute-0 nova_compute[117331]: 2025-10-09 16:21:45.460 2 DEBUG oslo_concurrency.lockutils [req-c3ecb134-82d0-4c54-828d-b4c19a1fbded req-722c53eb-ddd3-4d19-a003-9ca7975511ec ee15961ad2be4bada6c106ac4f69f422 c076c42271004e7aa86e84416e33f826 - - default default] Acquired lock "refresh_cache-4537359c-cc98-4f5e-87c4-2410f96f0e44" lock /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:316
Oct 09 16:21:45 compute-0 nova_compute[117331]: 2025-10-09 16:21:45.460 2 DEBUG nova.network.neutron [req-c3ecb134-82d0-4c54-828d-b4c19a1fbded req-722c53eb-ddd3-4d19-a003-9ca7975511ec ee15961ad2be4bada6c106ac4f69f422 c076c42271004e7aa86e84416e33f826 - - default default] [instance: 4537359c-cc98-4f5e-87c4-2410f96f0e44] Refreshing network info cache for port ab5bf45b-d6f4-4ecf-95d5-b00418cb03fa _get_instance_nw_info /usr/lib/python3.12/site-packages/nova/network/neutron.py:2067
Oct 09 16:21:45 compute-0 nova_compute[117331]: 2025-10-09 16:21:45.586 2 DEBUG nova.compute.manager [None req-e40473d0-19c5-49b5-95f8-f15dcccc1600 2ed3ac10329446d8a5aae566951cae1e c47758289b4c448c95f927a91d89e09f - - default default] [instance: 4537359c-cc98-4f5e-87c4-2410f96f0e44] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.12/site-packages/nova/compute/manager.py:2678
Oct 09 16:21:45 compute-0 nova_compute[117331]: 2025-10-09 16:21:45.587 2 DEBUG nova.virt.libvirt.driver [None req-e40473d0-19c5-49b5-95f8-f15dcccc1600 2ed3ac10329446d8a5aae566951cae1e c47758289b4c448c95f927a91d89e09f - - default default] [instance: 4537359c-cc98-4f5e-87c4-2410f96f0e44] Creating instance directory _create_image /usr/lib/python3.12/site-packages/nova/virt/libvirt/driver.py:5138
Oct 09 16:21:45 compute-0 nova_compute[117331]: 2025-10-09 16:21:45.588 2 INFO nova.virt.libvirt.driver [None req-e40473d0-19c5-49b5-95f8-f15dcccc1600 2ed3ac10329446d8a5aae566951cae1e c47758289b4c448c95f927a91d89e09f - - default default] [instance: 4537359c-cc98-4f5e-87c4-2410f96f0e44] Creating image(s)
Oct 09 16:21:45 compute-0 nova_compute[117331]: 2025-10-09 16:21:45.588 2 DEBUG oslo_concurrency.lockutils [None req-e40473d0-19c5-49b5-95f8-f15dcccc1600 2ed3ac10329446d8a5aae566951cae1e c47758289b4c448c95f927a91d89e09f - - default default] Acquiring lock "/var/lib/nova/instances/4537359c-cc98-4f5e-87c4-2410f96f0e44/disk.info" by "nova.virt.libvirt.imagebackend.Image.resolve_driver_format.<locals>.write_to_disk_info_file" inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:405
Oct 09 16:21:45 compute-0 nova_compute[117331]: 2025-10-09 16:21:45.588 2 DEBUG oslo_concurrency.lockutils [None req-e40473d0-19c5-49b5-95f8-f15dcccc1600 2ed3ac10329446d8a5aae566951cae1e c47758289b4c448c95f927a91d89e09f - - default default] Lock "/var/lib/nova/instances/4537359c-cc98-4f5e-87c4-2410f96f0e44/disk.info" acquired by "nova.virt.libvirt.imagebackend.Image.resolve_driver_format.<locals>.write_to_disk_info_file" :: waited 0.000s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:410
Oct 09 16:21:45 compute-0 nova_compute[117331]: 2025-10-09 16:21:45.589 2 DEBUG oslo_concurrency.lockutils [None req-e40473d0-19c5-49b5-95f8-f15dcccc1600 2ed3ac10329446d8a5aae566951cae1e c47758289b4c448c95f927a91d89e09f - - default default] Lock "/var/lib/nova/instances/4537359c-cc98-4f5e-87c4-2410f96f0e44/disk.info" "released" by "nova.virt.libvirt.imagebackend.Image.resolve_driver_format.<locals>.write_to_disk_info_file" :: held 0.001s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:424
Oct 09 16:21:45 compute-0 nova_compute[117331]: 2025-10-09 16:21:45.590 2 DEBUG oslo_utils.imageutils.format_inspector [None req-e40473d0-19c5-49b5-95f8-f15dcccc1600 2ed3ac10329446d8a5aae566951cae1e c47758289b4c448c95f927a91d89e09f - - default default] Format inspector for vmdk does not match, excluding from consideration (Signature KDMV not found: b'\xebH\x90\x00') _process_chunk /usr/lib/python3.12/site-packages/oslo_utils/imageutils/format_inspector.py:1365
Oct 09 16:21:45 compute-0 nova_compute[117331]: 2025-10-09 16:21:45.593 2 DEBUG oslo_utils.imageutils.format_inspector [None req-e40473d0-19c5-49b5-95f8-f15dcccc1600 2ed3ac10329446d8a5aae566951cae1e c47758289b4c448c95f927a91d89e09f - - default default] Format inspector for vhdx does not match, excluding from consideration (Region signature not found at 30000) _process_chunk /usr/lib/python3.12/site-packages/oslo_utils/imageutils/format_inspector.py:1365
Oct 09 16:21:45 compute-0 nova_compute[117331]: 2025-10-09 16:21:45.595 2 DEBUG oslo_concurrency.processutils [None req-e40473d0-19c5-49b5-95f8-f15dcccc1600 2ed3ac10329446d8a5aae566951cae1e c47758289b4c448c95f927a91d89e09f - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/cea3dacfdea0f3734ae526b812744e847bc2d356 --force-share --output=json execute /usr/lib/python3.12/site-packages/oslo_concurrency/processutils.py:349
Oct 09 16:21:45 compute-0 nova_compute[117331]: 2025-10-09 16:21:45.649 2 DEBUG oslo_concurrency.processutils [None req-e40473d0-19c5-49b5-95f8-f15dcccc1600 2ed3ac10329446d8a5aae566951cae1e c47758289b4c448c95f927a91d89e09f - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/cea3dacfdea0f3734ae526b812744e847bc2d356 --force-share --output=json" returned: 0 in 0.054s execute /usr/lib/python3.12/site-packages/oslo_concurrency/processutils.py:372
Oct 09 16:21:45 compute-0 nova_compute[117331]: 2025-10-09 16:21:45.649 2 DEBUG oslo_concurrency.lockutils [None req-e40473d0-19c5-49b5-95f8-f15dcccc1600 2ed3ac10329446d8a5aae566951cae1e c47758289b4c448c95f927a91d89e09f - - default default] Acquiring lock "cea3dacfdea0f3734ae526b812744e847bc2d356" by "nova.virt.libvirt.imagebackend.Qcow2.create_image.<locals>.create_qcow2_image" inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:405
Oct 09 16:21:45 compute-0 nova_compute[117331]: 2025-10-09 16:21:45.650 2 DEBUG oslo_concurrency.lockutils [None req-e40473d0-19c5-49b5-95f8-f15dcccc1600 2ed3ac10329446d8a5aae566951cae1e c47758289b4c448c95f927a91d89e09f - - default default] Lock "cea3dacfdea0f3734ae526b812744e847bc2d356" acquired by "nova.virt.libvirt.imagebackend.Qcow2.create_image.<locals>.create_qcow2_image" :: waited 0.001s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:410
Oct 09 16:21:45 compute-0 nova_compute[117331]: 2025-10-09 16:21:45.650 2 DEBUG oslo_utils.imageutils.format_inspector [None req-e40473d0-19c5-49b5-95f8-f15dcccc1600 2ed3ac10329446d8a5aae566951cae1e c47758289b4c448c95f927a91d89e09f - - default default] Format inspector for vmdk does not match, excluding from consideration (Signature KDMV not found: b'\xebH\x90\x00') _process_chunk /usr/lib/python3.12/site-packages/oslo_utils/imageutils/format_inspector.py:1365
Oct 09 16:21:45 compute-0 nova_compute[117331]: 2025-10-09 16:21:45.653 2 DEBUG oslo_utils.imageutils.format_inspector [None req-e40473d0-19c5-49b5-95f8-f15dcccc1600 2ed3ac10329446d8a5aae566951cae1e c47758289b4c448c95f927a91d89e09f - - default default] Format inspector for vhdx does not match, excluding from consideration (Region signature not found at 30000) _process_chunk /usr/lib/python3.12/site-packages/oslo_utils/imageutils/format_inspector.py:1365
Oct 09 16:21:45 compute-0 nova_compute[117331]: 2025-10-09 16:21:45.654 2 DEBUG oslo_concurrency.processutils [None req-e40473d0-19c5-49b5-95f8-f15dcccc1600 2ed3ac10329446d8a5aae566951cae1e c47758289b4c448c95f927a91d89e09f - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/cea3dacfdea0f3734ae526b812744e847bc2d356 --force-share --output=json execute /usr/lib/python3.12/site-packages/oslo_concurrency/processutils.py:349
Oct 09 16:21:45 compute-0 nova_compute[117331]: 2025-10-09 16:21:45.701 2 DEBUG oslo_concurrency.processutils [None req-e40473d0-19c5-49b5-95f8-f15dcccc1600 2ed3ac10329446d8a5aae566951cae1e c47758289b4c448c95f927a91d89e09f - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/cea3dacfdea0f3734ae526b812744e847bc2d356 --force-share --output=json" returned: 0 in 0.047s execute /usr/lib/python3.12/site-packages/oslo_concurrency/processutils.py:372
Oct 09 16:21:45 compute-0 nova_compute[117331]: 2025-10-09 16:21:45.702 2 DEBUG oslo_concurrency.processutils [None req-e40473d0-19c5-49b5-95f8-f15dcccc1600 2ed3ac10329446d8a5aae566951cae1e c47758289b4c448c95f927a91d89e09f - - default default] Running cmd (subprocess): env LC_ALL=C LANG=C qemu-img create -f qcow2 -o backing_file=/var/lib/nova/instances/_base/cea3dacfdea0f3734ae526b812744e847bc2d356,backing_fmt=raw /var/lib/nova/instances/4537359c-cc98-4f5e-87c4-2410f96f0e44/disk 1073741824 execute /usr/lib/python3.12/site-packages/oslo_concurrency/processutils.py:349
Oct 09 16:21:45 compute-0 nova_compute[117331]: 2025-10-09 16:21:45.733 2 DEBUG oslo_concurrency.processutils [None req-e40473d0-19c5-49b5-95f8-f15dcccc1600 2ed3ac10329446d8a5aae566951cae1e c47758289b4c448c95f927a91d89e09f - - default default] CMD "env LC_ALL=C LANG=C qemu-img create -f qcow2 -o backing_file=/var/lib/nova/instances/_base/cea3dacfdea0f3734ae526b812744e847bc2d356,backing_fmt=raw /var/lib/nova/instances/4537359c-cc98-4f5e-87c4-2410f96f0e44/disk 1073741824" returned: 0 in 0.031s execute /usr/lib/python3.12/site-packages/oslo_concurrency/processutils.py:372
Oct 09 16:21:45 compute-0 nova_compute[117331]: 2025-10-09 16:21:45.734 2 DEBUG oslo_concurrency.lockutils [None req-e40473d0-19c5-49b5-95f8-f15dcccc1600 2ed3ac10329446d8a5aae566951cae1e c47758289b4c448c95f927a91d89e09f - - default default] Lock "cea3dacfdea0f3734ae526b812744e847bc2d356" "released" by "nova.virt.libvirt.imagebackend.Qcow2.create_image.<locals>.create_qcow2_image" :: held 0.084s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:424
Oct 09 16:21:45 compute-0 nova_compute[117331]: 2025-10-09 16:21:45.734 2 DEBUG oslo_concurrency.processutils [None req-e40473d0-19c5-49b5-95f8-f15dcccc1600 2ed3ac10329446d8a5aae566951cae1e c47758289b4c448c95f927a91d89e09f - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/cea3dacfdea0f3734ae526b812744e847bc2d356 --force-share --output=json execute /usr/lib/python3.12/site-packages/oslo_concurrency/processutils.py:349
Oct 09 16:21:45 compute-0 nova_compute[117331]: 2025-10-09 16:21:45.783 2 DEBUG oslo_concurrency.processutils [None req-e40473d0-19c5-49b5-95f8-f15dcccc1600 2ed3ac10329446d8a5aae566951cae1e c47758289b4c448c95f927a91d89e09f - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/cea3dacfdea0f3734ae526b812744e847bc2d356 --force-share --output=json" returned: 0 in 0.049s execute /usr/lib/python3.12/site-packages/oslo_concurrency/processutils.py:372
Oct 09 16:21:45 compute-0 nova_compute[117331]: 2025-10-09 16:21:45.784 2 DEBUG nova.virt.disk.api [None req-e40473d0-19c5-49b5-95f8-f15dcccc1600 2ed3ac10329446d8a5aae566951cae1e c47758289b4c448c95f927a91d89e09f - - default default] Checking if we can resize image /var/lib/nova/instances/4537359c-cc98-4f5e-87c4-2410f96f0e44/disk. size=1073741824 can_resize_image /usr/lib/python3.12/site-packages/nova/virt/disk/api.py:164
Oct 09 16:21:45 compute-0 nova_compute[117331]: 2025-10-09 16:21:45.784 2 DEBUG oslo_concurrency.processutils [None req-e40473d0-19c5-49b5-95f8-f15dcccc1600 2ed3ac10329446d8a5aae566951cae1e c47758289b4c448c95f927a91d89e09f - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/4537359c-cc98-4f5e-87c4-2410f96f0e44/disk --force-share --output=json execute /usr/lib/python3.12/site-packages/oslo_concurrency/processutils.py:349
Oct 09 16:21:45 compute-0 nova_compute[117331]: 2025-10-09 16:21:45.835 2 DEBUG oslo_concurrency.processutils [None req-e40473d0-19c5-49b5-95f8-f15dcccc1600 2ed3ac10329446d8a5aae566951cae1e c47758289b4c448c95f927a91d89e09f - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/4537359c-cc98-4f5e-87c4-2410f96f0e44/disk --force-share --output=json" returned: 0 in 0.051s execute /usr/lib/python3.12/site-packages/oslo_concurrency/processutils.py:372
Oct 09 16:21:45 compute-0 nova_compute[117331]: 2025-10-09 16:21:45.835 2 DEBUG nova.virt.disk.api [None req-e40473d0-19c5-49b5-95f8-f15dcccc1600 2ed3ac10329446d8a5aae566951cae1e c47758289b4c448c95f927a91d89e09f - - default default] Cannot resize image /var/lib/nova/instances/4537359c-cc98-4f5e-87c4-2410f96f0e44/disk to a smaller size. can_resize_image /usr/lib/python3.12/site-packages/nova/virt/disk/api.py:170
Oct 09 16:21:45 compute-0 nova_compute[117331]: 2025-10-09 16:21:45.836 2 DEBUG nova.virt.libvirt.driver [None req-e40473d0-19c5-49b5-95f8-f15dcccc1600 2ed3ac10329446d8a5aae566951cae1e c47758289b4c448c95f927a91d89e09f - - default default] [instance: 4537359c-cc98-4f5e-87c4-2410f96f0e44] Created local disks _create_image /usr/lib/python3.12/site-packages/nova/virt/libvirt/driver.py:5270
Oct 09 16:21:45 compute-0 nova_compute[117331]: 2025-10-09 16:21:45.836 2 DEBUG nova.virt.libvirt.driver [None req-e40473d0-19c5-49b5-95f8-f15dcccc1600 2ed3ac10329446d8a5aae566951cae1e c47758289b4c448c95f927a91d89e09f - - default default] [instance: 4537359c-cc98-4f5e-87c4-2410f96f0e44] Ensure instance console log exists: /var/lib/nova/instances/4537359c-cc98-4f5e-87c4-2410f96f0e44/console.log _ensure_console_log_for_instance /usr/lib/python3.12/site-packages/nova/virt/libvirt/driver.py:5017
Oct 09 16:21:45 compute-0 nova_compute[117331]: 2025-10-09 16:21:45.837 2 DEBUG oslo_concurrency.lockutils [None req-e40473d0-19c5-49b5-95f8-f15dcccc1600 2ed3ac10329446d8a5aae566951cae1e c47758289b4c448c95f927a91d89e09f - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:405
Oct 09 16:21:45 compute-0 nova_compute[117331]: 2025-10-09 16:21:45.837 2 DEBUG oslo_concurrency.lockutils [None req-e40473d0-19c5-49b5-95f8-f15dcccc1600 2ed3ac10329446d8a5aae566951cae1e c47758289b4c448c95f927a91d89e09f - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:410
Oct 09 16:21:45 compute-0 nova_compute[117331]: 2025-10-09 16:21:45.837 2 DEBUG oslo_concurrency.lockutils [None req-e40473d0-19c5-49b5-95f8-f15dcccc1600 2ed3ac10329446d8a5aae566951cae1e c47758289b4c448c95f927a91d89e09f - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:424
Oct 09 16:21:45 compute-0 nova_compute[117331]: 2025-10-09 16:21:45.847 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 09 16:21:45 compute-0 nova_compute[117331]: 2025-10-09 16:21:45.892 2 DEBUG oslo_concurrency.lockutils [None req-e40473d0-19c5-49b5-95f8-f15dcccc1600 2ed3ac10329446d8a5aae566951cae1e c47758289b4c448c95f927a91d89e09f - - default default] Acquiring lock "refresh_cache-4537359c-cc98-4f5e-87c4-2410f96f0e44" lock /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:313
Oct 09 16:21:45 compute-0 nova_compute[117331]: 2025-10-09 16:21:45.967 2 WARNING neutronclient.v2_0.client [req-c3ecb134-82d0-4c54-828d-b4c19a1fbded req-722c53eb-ddd3-4d19-a003-9ca7975511ec ee15961ad2be4bada6c106ac4f69f422 c076c42271004e7aa86e84416e33f826 - - default default] The python binding code in neutronclient is deprecated in favor of OpenstackSDK, please use that as this will be removed in a future release.
Oct 09 16:21:46 compute-0 nova_compute[117331]: 2025-10-09 16:21:46.278 2 DEBUG nova.network.neutron [req-c3ecb134-82d0-4c54-828d-b4c19a1fbded req-722c53eb-ddd3-4d19-a003-9ca7975511ec ee15961ad2be4bada6c106ac4f69f422 c076c42271004e7aa86e84416e33f826 - - default default] [instance: 4537359c-cc98-4f5e-87c4-2410f96f0e44] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.12/site-packages/nova/network/neutron.py:3383
Oct 09 16:21:46 compute-0 nova_compute[117331]: 2025-10-09 16:21:46.445 2 DEBUG nova.network.neutron [req-c3ecb134-82d0-4c54-828d-b4c19a1fbded req-722c53eb-ddd3-4d19-a003-9ca7975511ec ee15961ad2be4bada6c106ac4f69f422 c076c42271004e7aa86e84416e33f826 - - default default] [instance: 4537359c-cc98-4f5e-87c4-2410f96f0e44] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.12/site-packages/nova/network/neutron.py:116
Oct 09 16:21:46 compute-0 nova_compute[117331]: 2025-10-09 16:21:46.951 2 DEBUG oslo_concurrency.lockutils [req-c3ecb134-82d0-4c54-828d-b4c19a1fbded req-722c53eb-ddd3-4d19-a003-9ca7975511ec ee15961ad2be4bada6c106ac4f69f422 c076c42271004e7aa86e84416e33f826 - - default default] Releasing lock "refresh_cache-4537359c-cc98-4f5e-87c4-2410f96f0e44" lock /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:334
Oct 09 16:21:46 compute-0 nova_compute[117331]: 2025-10-09 16:21:46.952 2 DEBUG oslo_concurrency.lockutils [None req-e40473d0-19c5-49b5-95f8-f15dcccc1600 2ed3ac10329446d8a5aae566951cae1e c47758289b4c448c95f927a91d89e09f - - default default] Acquired lock "refresh_cache-4537359c-cc98-4f5e-87c4-2410f96f0e44" lock /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:316
Oct 09 16:21:46 compute-0 nova_compute[117331]: 2025-10-09 16:21:46.953 2 DEBUG nova.network.neutron [None req-e40473d0-19c5-49b5-95f8-f15dcccc1600 2ed3ac10329446d8a5aae566951cae1e c47758289b4c448c95f927a91d89e09f - - default default] [instance: 4537359c-cc98-4f5e-87c4-2410f96f0e44] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.12/site-packages/nova/network/neutron.py:2070
Oct 09 16:21:47 compute-0 nova_compute[117331]: 2025-10-09 16:21:47.638 2 DEBUG nova.network.neutron [None req-e40473d0-19c5-49b5-95f8-f15dcccc1600 2ed3ac10329446d8a5aae566951cae1e c47758289b4c448c95f927a91d89e09f - - default default] [instance: 4537359c-cc98-4f5e-87c4-2410f96f0e44] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.12/site-packages/nova/network/neutron.py:3383
Oct 09 16:21:47 compute-0 nova_compute[117331]: 2025-10-09 16:21:47.824 2 WARNING neutronclient.v2_0.client [None req-e40473d0-19c5-49b5-95f8-f15dcccc1600 2ed3ac10329446d8a5aae566951cae1e c47758289b4c448c95f927a91d89e09f - - default default] The python binding code in neutronclient is deprecated in favor of OpenstackSDK, please use that as this will be removed in a future release.
Oct 09 16:21:47 compute-0 nova_compute[117331]: 2025-10-09 16:21:47.982 2 DEBUG nova.network.neutron [None req-e40473d0-19c5-49b5-95f8-f15dcccc1600 2ed3ac10329446d8a5aae566951cae1e c47758289b4c448c95f927a91d89e09f - - default default] [instance: 4537359c-cc98-4f5e-87c4-2410f96f0e44] Updating instance_info_cache with network_info: [{"id": "ab5bf45b-d6f4-4ecf-95d5-b00418cb03fa", "address": "fa:16:3e:cc:3c:4d", "network": {"id": "c78b28f7-c251-4d74-863e-8d5520acbae0", "bridge": "br-int", "label": "tempest-TestExecuteHostMaintenanceStrategy-300799103-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "95a019f93e534ec0ad7dca9e44d00556", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapab5bf45b-d6", "ovs_interfaceid": "ab5bf45b-d6f4-4ecf-95d5-b00418cb03fa", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.12/site-packages/nova/network/neutron.py:116
Oct 09 16:21:48 compute-0 nova_compute[117331]: 2025-10-09 16:21:48.489 2 DEBUG oslo_concurrency.lockutils [None req-e40473d0-19c5-49b5-95f8-f15dcccc1600 2ed3ac10329446d8a5aae566951cae1e c47758289b4c448c95f927a91d89e09f - - default default] Releasing lock "refresh_cache-4537359c-cc98-4f5e-87c4-2410f96f0e44" lock /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:334
Oct 09 16:21:48 compute-0 nova_compute[117331]: 2025-10-09 16:21:48.490 2 DEBUG nova.compute.manager [None req-e40473d0-19c5-49b5-95f8-f15dcccc1600 2ed3ac10329446d8a5aae566951cae1e c47758289b4c448c95f927a91d89e09f - - default default] [instance: 4537359c-cc98-4f5e-87c4-2410f96f0e44] Instance network_info: |[{"id": "ab5bf45b-d6f4-4ecf-95d5-b00418cb03fa", "address": "fa:16:3e:cc:3c:4d", "network": {"id": "c78b28f7-c251-4d74-863e-8d5520acbae0", "bridge": "br-int", "label": "tempest-TestExecuteHostMaintenanceStrategy-300799103-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "95a019f93e534ec0ad7dca9e44d00556", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapab5bf45b-d6", "ovs_interfaceid": "ab5bf45b-d6f4-4ecf-95d5-b00418cb03fa", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.12/site-packages/nova/compute/manager.py:2031
Oct 09 16:21:48 compute-0 nova_compute[117331]: 2025-10-09 16:21:48.495 2 DEBUG nova.virt.libvirt.driver [None req-e40473d0-19c5-49b5-95f8-f15dcccc1600 2ed3ac10329446d8a5aae566951cae1e c47758289b4c448c95f927a91d89e09f - - default default] [instance: 4537359c-cc98-4f5e-87c4-2410f96f0e44] Start _get_guest_xml network_info=[{"id": "ab5bf45b-d6f4-4ecf-95d5-b00418cb03fa", "address": "fa:16:3e:cc:3c:4d", "network": {"id": "c78b28f7-c251-4d74-863e-8d5520acbae0", "bridge": "br-int", "label": "tempest-TestExecuteHostMaintenanceStrategy-300799103-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "95a019f93e534ec0ad7dca9e44d00556", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapab5bf45b-d6", "ovs_interfaceid": "ab5bf45b-d6f4-4ecf-95d5-b00418cb03fa", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-09T16:07:02Z,direct_url=<?>,disk_format='qcow2',id=b7d6e0af-25e4-4227-9dc6-43143898ceee,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='b30e8cf5e10742f190212b4cb97ce2c9',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-09T16:07:04Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'device_type': 'disk', 'encrypted': False, 'size': 0, 'encryption_format': None, 'guest_format': None, 'boot_index': 0, 'device_name': '/dev/vda', 'disk_bus': 'virtio', 'encryption_options': None, 'encryption_secret_uuid': None, 'image_id': 'b7d6e0af-25e4-4227-9dc6-43143898ceee'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None}share_info=None _get_guest_xml /usr/lib/python3.12/site-packages/nova/virt/libvirt/driver.py:8046
Oct 09 16:21:48 compute-0 nova_compute[117331]: 2025-10-09 16:21:48.499 2 WARNING nova.virt.libvirt.driver [None req-e40473d0-19c5-49b5-95f8-f15dcccc1600 2ed3ac10329446d8a5aae566951cae1e c47758289b4c448c95f927a91d89e09f - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Oct 09 16:21:48 compute-0 nova_compute[117331]: 2025-10-09 16:21:48.501 2 DEBUG nova.virt.driver [None req-e40473d0-19c5-49b5-95f8-f15dcccc1600 2ed3ac10329446d8a5aae566951cae1e c47758289b4c448c95f927a91d89e09f - - default default] InstanceDriverMetadata: InstanceDriverMetadata(root_type='image', root_id='b7d6e0af-25e4-4227-9dc6-43143898ceee', instance_meta=NovaInstanceMeta(name='tempest-TestExecuteHostMaintenanceStrategy-server-1250599539', uuid='4537359c-cc98-4f5e-87c4-2410f96f0e44'), owner=OwnerMeta(userid='2ed3ac10329446d8a5aae566951cae1e', username='tempest-TestExecuteHostMaintenanceStrategy-51576386-project-admin', projectid='c47758289b4c448c95f927a91d89e09f', projectname='tempest-TestExecuteHostMaintenanceStrategy-51576386'), image=ImageMeta(id='b7d6e0af-25e4-4227-9dc6-43143898ceee', name=None, container_format='bare', disk_format='qcow2', min_disk=1, min_ram=0, properties={'hw_rng_model': 'virtio'}), flavor=FlavorMeta(name='m1.nano', flavorid='5aeb5bf9-70f3-4ab6-b418-2c3319a7d2b3', memory_mb=128, vcpus=1, root_gb=1, ephemeral_gb=0, extra_specs={'hw_rng:allowed': 'True'}, swap=0), network_info=[{"id": "ab5bf45b-d6f4-4ecf-95d5-b00418cb03fa", "address": "fa:16:3e:cc:3c:4d", "network": {"id": "c78b28f7-c251-4d74-863e-8d5520acbae0", "bridge": "br-int", "label": "tempest-TestExecuteHostMaintenanceStrategy-300799103-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "95a019f93e534ec0ad7dca9e44d00556", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapab5bf45b-d6", "ovs_interfaceid": 
"ab5bf45b-d6f4-4ecf-95d5-b00418cb03fa", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}], nova_package='32.1.0-0.20251008143712.076498e.el10', creation_time=1760026908.5010333) get_instance_driver_metadata /usr/lib/python3.12/site-packages/nova/virt/driver.py:437
Oct 09 16:21:48 compute-0 nova_compute[117331]: 2025-10-09 16:21:48.505 2 DEBUG nova.virt.libvirt.host [None req-e40473d0-19c5-49b5-95f8-f15dcccc1600 2ed3ac10329446d8a5aae566951cae1e c47758289b4c448c95f927a91d89e09f - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.12/site-packages/nova/virt/libvirt/host.py:1698
Oct 09 16:21:48 compute-0 nova_compute[117331]: 2025-10-09 16:21:48.505 2 DEBUG nova.virt.libvirt.host [None req-e40473d0-19c5-49b5-95f8-f15dcccc1600 2ed3ac10329446d8a5aae566951cae1e c47758289b4c448c95f927a91d89e09f - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.12/site-packages/nova/virt/libvirt/host.py:1708
Oct 09 16:21:48 compute-0 nova_compute[117331]: 2025-10-09 16:21:48.507 2 DEBUG nova.virt.libvirt.host [None req-e40473d0-19c5-49b5-95f8-f15dcccc1600 2ed3ac10329446d8a5aae566951cae1e c47758289b4c448c95f927a91d89e09f - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.12/site-packages/nova/virt/libvirt/host.py:1717
Oct 09 16:21:48 compute-0 nova_compute[117331]: 2025-10-09 16:21:48.508 2 DEBUG nova.virt.libvirt.host [None req-e40473d0-19c5-49b5-95f8-f15dcccc1600 2ed3ac10329446d8a5aae566951cae1e c47758289b4c448c95f927a91d89e09f - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.12/site-packages/nova/virt/libvirt/host.py:1724
Oct 09 16:21:48 compute-0 nova_compute[117331]: 2025-10-09 16:21:48.508 2 DEBUG nova.virt.libvirt.driver [None req-e40473d0-19c5-49b5-95f8-f15dcccc1600 2ed3ac10329446d8a5aae566951cae1e c47758289b4c448c95f927a91d89e09f - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.12/site-packages/nova/virt/libvirt/driver.py:5809
Oct 09 16:21:48 compute-0 nova_compute[117331]: 2025-10-09 16:21:48.509 2 DEBUG nova.virt.hardware [None req-e40473d0-19c5-49b5-95f8-f15dcccc1600 2ed3ac10329446d8a5aae566951cae1e c47758289b4c448c95f927a91d89e09f - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-10-09T16:07:00Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='5aeb5bf9-70f3-4ab6-b418-2c3319a7d2b3',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-09T16:07:02Z,direct_url=<?>,disk_format='qcow2',id=b7d6e0af-25e4-4227-9dc6-43143898ceee,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='b30e8cf5e10742f190212b4cb97ce2c9',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-09T16:07:04Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.12/site-packages/nova/virt/hardware.py:571
Oct 09 16:21:48 compute-0 nova_compute[117331]: 2025-10-09 16:21:48.509 2 DEBUG nova.virt.hardware [None req-e40473d0-19c5-49b5-95f8-f15dcccc1600 2ed3ac10329446d8a5aae566951cae1e c47758289b4c448c95f927a91d89e09f - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.12/site-packages/nova/virt/hardware.py:356
Oct 09 16:21:48 compute-0 nova_compute[117331]: 2025-10-09 16:21:48.510 2 DEBUG nova.virt.hardware [None req-e40473d0-19c5-49b5-95f8-f15dcccc1600 2ed3ac10329446d8a5aae566951cae1e c47758289b4c448c95f927a91d89e09f - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.12/site-packages/nova/virt/hardware.py:360
Oct 09 16:21:48 compute-0 nova_compute[117331]: 2025-10-09 16:21:48.510 2 DEBUG nova.virt.hardware [None req-e40473d0-19c5-49b5-95f8-f15dcccc1600 2ed3ac10329446d8a5aae566951cae1e c47758289b4c448c95f927a91d89e09f - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.12/site-packages/nova/virt/hardware.py:396
Oct 09 16:21:48 compute-0 nova_compute[117331]: 2025-10-09 16:21:48.510 2 DEBUG nova.virt.hardware [None req-e40473d0-19c5-49b5-95f8-f15dcccc1600 2ed3ac10329446d8a5aae566951cae1e c47758289b4c448c95f927a91d89e09f - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.12/site-packages/nova/virt/hardware.py:400
Oct 09 16:21:48 compute-0 nova_compute[117331]: 2025-10-09 16:21:48.510 2 DEBUG nova.virt.hardware [None req-e40473d0-19c5-49b5-95f8-f15dcccc1600 2ed3ac10329446d8a5aae566951cae1e c47758289b4c448c95f927a91d89e09f - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.12/site-packages/nova/virt/hardware.py:438
Oct 09 16:21:48 compute-0 nova_compute[117331]: 2025-10-09 16:21:48.511 2 DEBUG nova.virt.hardware [None req-e40473d0-19c5-49b5-95f8-f15dcccc1600 2ed3ac10329446d8a5aae566951cae1e c47758289b4c448c95f927a91d89e09f - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.12/site-packages/nova/virt/hardware.py:577
Oct 09 16:21:48 compute-0 nova_compute[117331]: 2025-10-09 16:21:48.511 2 DEBUG nova.virt.hardware [None req-e40473d0-19c5-49b5-95f8-f15dcccc1600 2ed3ac10329446d8a5aae566951cae1e c47758289b4c448c95f927a91d89e09f - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.12/site-packages/nova/virt/hardware.py:479
Oct 09 16:21:48 compute-0 nova_compute[117331]: 2025-10-09 16:21:48.511 2 DEBUG nova.virt.hardware [None req-e40473d0-19c5-49b5-95f8-f15dcccc1600 2ed3ac10329446d8a5aae566951cae1e c47758289b4c448c95f927a91d89e09f - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.12/site-packages/nova/virt/hardware.py:509
Oct 09 16:21:48 compute-0 nova_compute[117331]: 2025-10-09 16:21:48.511 2 DEBUG nova.virt.hardware [None req-e40473d0-19c5-49b5-95f8-f15dcccc1600 2ed3ac10329446d8a5aae566951cae1e c47758289b4c448c95f927a91d89e09f - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.12/site-packages/nova/virt/hardware.py:583
Oct 09 16:21:48 compute-0 nova_compute[117331]: 2025-10-09 16:21:48.512 2 DEBUG nova.virt.hardware [None req-e40473d0-19c5-49b5-95f8-f15dcccc1600 2ed3ac10329446d8a5aae566951cae1e c47758289b4c448c95f927a91d89e09f - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.12/site-packages/nova/virt/hardware.py:585
Oct 09 16:21:48 compute-0 nova_compute[117331]: 2025-10-09 16:21:48.516 2 DEBUG nova.virt.libvirt.vif [None req-e40473d0-19c5-49b5-95f8-f15dcccc1600 2ed3ac10329446d8a5aae566951cae1e c47758289b4c448c95f927a91d89e09f - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,compute_id=1,config_drive='',created_at=2025-10-09T16:21:38Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description=None,display_name='tempest-TestExecuteHostMaintenanceStrategy-server-1250599539',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testexecutehostmaintenancestrategy-server-1250599539',id=12,image_ref='b7d6e0af-25e4-4227-9dc6-43143898ceee',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='c47758289b4c448c95f927a91d89e09f',ramdisk_id='',reservation_id='r-0ee0701h',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member,manager,admin',image_base_image_ref='b7d6e0af-25e4-4227-9dc6-43143898ceee',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestExecuteHostMaintenanceStrategy-51576386',owner_user_name='tempest-TestExecuteHostMaintenanceStrategy-51576386-project-admin'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-09T16:21:44Z,user_data=None,user_id='2ed3ac10329446d8a5aae566951cae1e',uuid=4537359c-cc98-4f5e-87c4-2410f96f0e44,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "ab5bf45b-d6f4-4ecf-95d5-b00418cb03fa", "address": "fa:16:3e:cc:3c:4d", "network": {"id": "c78b28f7-c251-4d74-863e-8d5520acbae0", "bridge": "br-int", "label": "tempest-TestExecuteHostMaintenanceStrategy-300799103-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "95a019f93e534ec0ad7dca9e44d00556", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapab5bf45b-d6", "ovs_interfaceid": "ab5bf45b-d6f4-4ecf-95d5-b00418cb03fa", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.12/site-packages/nova/virt/libvirt/vif.py:574
Oct 09 16:21:48 compute-0 nova_compute[117331]: 2025-10-09 16:21:48.516 2 DEBUG nova.network.os_vif_util [None req-e40473d0-19c5-49b5-95f8-f15dcccc1600 2ed3ac10329446d8a5aae566951cae1e c47758289b4c448c95f927a91d89e09f - - default default] Converting VIF {"id": "ab5bf45b-d6f4-4ecf-95d5-b00418cb03fa", "address": "fa:16:3e:cc:3c:4d", "network": {"id": "c78b28f7-c251-4d74-863e-8d5520acbae0", "bridge": "br-int", "label": "tempest-TestExecuteHostMaintenanceStrategy-300799103-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "95a019f93e534ec0ad7dca9e44d00556", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapab5bf45b-d6", "ovs_interfaceid": "ab5bf45b-d6f4-4ecf-95d5-b00418cb03fa", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.12/site-packages/nova/network/os_vif_util.py:511
Oct 09 16:21:48 compute-0 nova_compute[117331]: 2025-10-09 16:21:48.517 2 DEBUG nova.network.os_vif_util [None req-e40473d0-19c5-49b5-95f8-f15dcccc1600 2ed3ac10329446d8a5aae566951cae1e c47758289b4c448c95f927a91d89e09f - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:cc:3c:4d,bridge_name='br-int',has_traffic_filtering=True,id=ab5bf45b-d6f4-4ecf-95d5-b00418cb03fa,network=Network(c78b28f7-c251-4d74-863e-8d5520acbae0),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapab5bf45b-d6') nova_to_osvif_vif /usr/lib/python3.12/site-packages/nova/network/os_vif_util.py:548
Oct 09 16:21:48 compute-0 nova_compute[117331]: 2025-10-09 16:21:48.518 2 DEBUG nova.objects.instance [None req-e40473d0-19c5-49b5-95f8-f15dcccc1600 2ed3ac10329446d8a5aae566951cae1e c47758289b4c448c95f927a91d89e09f - - default default] Lazy-loading 'pci_devices' on Instance uuid 4537359c-cc98-4f5e-87c4-2410f96f0e44 obj_load_attr /usr/lib/python3.12/site-packages/nova/objects/instance.py:1141
Oct 09 16:21:48 compute-0 podman[145618]: 2025-10-09 16:21:48.874366167 +0000 UTC m=+0.090224933 container health_status 64fa251112d01abb34ec1bb8e22019815ddcaaaa391ce5ec91517147ac5a9994 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, name=ubi9-minimal, url=https://catalog.redhat.com/en/search?searchType=containers, vcs-type=git, maintainer=Red Hat, Inc., vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, io.openshift.tags=minimal rhel9, version=9.6, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vendor=Red Hat, Inc., com.redhat.component=ubi9-minimal-container, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., distribution-scope=public, release=1755695350, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., build-date=2025-08-20T13:12:41, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., managed_by=edpm_ansible, io.buildah.version=1.33.7, io.openshift.expose-services=, container_name=openstack_network_exporter, config_id=edpm)
Oct 09 16:21:49 compute-0 nova_compute[117331]: 2025-10-09 16:21:49.025 2 DEBUG nova.virt.libvirt.driver [None req-e40473d0-19c5-49b5-95f8-f15dcccc1600 2ed3ac10329446d8a5aae566951cae1e c47758289b4c448c95f927a91d89e09f - - default default] [instance: 4537359c-cc98-4f5e-87c4-2410f96f0e44] End _get_guest_xml xml=<domain type="kvm">
Oct 09 16:21:49 compute-0 nova_compute[117331]:   <uuid>4537359c-cc98-4f5e-87c4-2410f96f0e44</uuid>
Oct 09 16:21:49 compute-0 nova_compute[117331]:   <name>instance-0000000c</name>
Oct 09 16:21:49 compute-0 nova_compute[117331]:   <memory>131072</memory>
Oct 09 16:21:49 compute-0 nova_compute[117331]:   <vcpu>1</vcpu>
Oct 09 16:21:49 compute-0 nova_compute[117331]:   <metadata>
Oct 09 16:21:49 compute-0 nova_compute[117331]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Oct 09 16:21:49 compute-0 nova_compute[117331]:       <nova:package version="32.1.0-0.20251008143712.076498e.el10"/>
Oct 09 16:21:49 compute-0 nova_compute[117331]:       <nova:name>tempest-TestExecuteHostMaintenanceStrategy-server-1250599539</nova:name>
Oct 09 16:21:49 compute-0 nova_compute[117331]:       <nova:creationTime>2025-10-09 16:21:48</nova:creationTime>
Oct 09 16:21:49 compute-0 nova_compute[117331]:       <nova:flavor name="m1.nano" id="5aeb5bf9-70f3-4ab6-b418-2c3319a7d2b3">
Oct 09 16:21:49 compute-0 nova_compute[117331]:         <nova:memory>128</nova:memory>
Oct 09 16:21:49 compute-0 nova_compute[117331]:         <nova:disk>1</nova:disk>
Oct 09 16:21:49 compute-0 nova_compute[117331]:         <nova:swap>0</nova:swap>
Oct 09 16:21:49 compute-0 nova_compute[117331]:         <nova:ephemeral>0</nova:ephemeral>
Oct 09 16:21:49 compute-0 nova_compute[117331]:         <nova:vcpus>1</nova:vcpus>
Oct 09 16:21:49 compute-0 nova_compute[117331]:         <nova:extraSpecs>
Oct 09 16:21:49 compute-0 nova_compute[117331]:           <nova:extraSpec name="hw_rng:allowed">True</nova:extraSpec>
Oct 09 16:21:49 compute-0 nova_compute[117331]:         </nova:extraSpecs>
Oct 09 16:21:49 compute-0 nova_compute[117331]:       </nova:flavor>
Oct 09 16:21:49 compute-0 nova_compute[117331]:       <nova:image uuid="b7d6e0af-25e4-4227-9dc6-43143898ceee">
Oct 09 16:21:49 compute-0 nova_compute[117331]:         <nova:containerFormat>bare</nova:containerFormat>
Oct 09 16:21:49 compute-0 nova_compute[117331]:         <nova:diskFormat>qcow2</nova:diskFormat>
Oct 09 16:21:49 compute-0 nova_compute[117331]:         <nova:minDisk>1</nova:minDisk>
Oct 09 16:21:49 compute-0 nova_compute[117331]:         <nova:minRam>0</nova:minRam>
Oct 09 16:21:49 compute-0 nova_compute[117331]:         <nova:properties>
Oct 09 16:21:49 compute-0 nova_compute[117331]:           <nova:property name="hw_rng_model">virtio</nova:property>
Oct 09 16:21:49 compute-0 nova_compute[117331]:         </nova:properties>
Oct 09 16:21:49 compute-0 nova_compute[117331]:       </nova:image>
Oct 09 16:21:49 compute-0 nova_compute[117331]:       <nova:owner>
Oct 09 16:21:49 compute-0 nova_compute[117331]:         <nova:user uuid="2ed3ac10329446d8a5aae566951cae1e">tempest-TestExecuteHostMaintenanceStrategy-51576386-project-admin</nova:user>
Oct 09 16:21:49 compute-0 nova_compute[117331]:         <nova:project uuid="c47758289b4c448c95f927a91d89e09f">tempest-TestExecuteHostMaintenanceStrategy-51576386</nova:project>
Oct 09 16:21:49 compute-0 nova_compute[117331]:       </nova:owner>
Oct 09 16:21:49 compute-0 nova_compute[117331]:       <nova:root type="image" uuid="b7d6e0af-25e4-4227-9dc6-43143898ceee"/>
Oct 09 16:21:49 compute-0 nova_compute[117331]:       <nova:ports>
Oct 09 16:21:49 compute-0 nova_compute[117331]:         <nova:port uuid="ab5bf45b-d6f4-4ecf-95d5-b00418cb03fa">
Oct 09 16:21:49 compute-0 nova_compute[117331]:           <nova:ip type="fixed" address="10.100.0.14" ipVersion="4"/>
Oct 09 16:21:49 compute-0 nova_compute[117331]:         </nova:port>
Oct 09 16:21:49 compute-0 nova_compute[117331]:       </nova:ports>
Oct 09 16:21:49 compute-0 nova_compute[117331]:     </nova:instance>
Oct 09 16:21:49 compute-0 nova_compute[117331]:   </metadata>
Oct 09 16:21:49 compute-0 nova_compute[117331]:   <sysinfo type="smbios">
Oct 09 16:21:49 compute-0 nova_compute[117331]:     <system>
Oct 09 16:21:49 compute-0 nova_compute[117331]:       <entry name="manufacturer">RDO</entry>
Oct 09 16:21:49 compute-0 nova_compute[117331]:       <entry name="product">OpenStack Compute</entry>
Oct 09 16:21:49 compute-0 nova_compute[117331]:       <entry name="version">32.1.0-0.20251008143712.076498e.el10</entry>
Oct 09 16:21:49 compute-0 nova_compute[117331]:       <entry name="serial">4537359c-cc98-4f5e-87c4-2410f96f0e44</entry>
Oct 09 16:21:49 compute-0 nova_compute[117331]:       <entry name="uuid">4537359c-cc98-4f5e-87c4-2410f96f0e44</entry>
Oct 09 16:21:49 compute-0 nova_compute[117331]:       <entry name="family">Virtual Machine</entry>
Oct 09 16:21:49 compute-0 nova_compute[117331]:     </system>
Oct 09 16:21:49 compute-0 nova_compute[117331]:   </sysinfo>
Oct 09 16:21:49 compute-0 nova_compute[117331]:   <os>
Oct 09 16:21:49 compute-0 nova_compute[117331]:     <type arch="x86_64" machine="q35">hvm</type>
Oct 09 16:21:49 compute-0 nova_compute[117331]:     <boot dev="hd"/>
Oct 09 16:21:49 compute-0 nova_compute[117331]:     <smbios mode="sysinfo"/>
Oct 09 16:21:49 compute-0 nova_compute[117331]:   </os>
Oct 09 16:21:49 compute-0 nova_compute[117331]:   <features>
Oct 09 16:21:49 compute-0 nova_compute[117331]:     <acpi/>
Oct 09 16:21:49 compute-0 nova_compute[117331]:     <apic/>
Oct 09 16:21:49 compute-0 nova_compute[117331]:     <vmcoreinfo/>
Oct 09 16:21:49 compute-0 nova_compute[117331]:   </features>
Oct 09 16:21:49 compute-0 nova_compute[117331]:   <clock offset="utc">
Oct 09 16:21:49 compute-0 nova_compute[117331]:     <timer name="pit" tickpolicy="delay"/>
Oct 09 16:21:49 compute-0 nova_compute[117331]:     <timer name="rtc" tickpolicy="catchup"/>
Oct 09 16:21:49 compute-0 nova_compute[117331]:     <timer name="hpet" present="no"/>
Oct 09 16:21:49 compute-0 nova_compute[117331]:   </clock>
Oct 09 16:21:49 compute-0 nova_compute[117331]:   <cpu mode="host-model" match="exact">
Oct 09 16:21:49 compute-0 nova_compute[117331]:     <topology sockets="1" cores="1" threads="1"/>
Oct 09 16:21:49 compute-0 nova_compute[117331]:   </cpu>
Oct 09 16:21:49 compute-0 nova_compute[117331]:   <devices>
Oct 09 16:21:49 compute-0 nova_compute[117331]:     <disk type="file" device="disk">
Oct 09 16:21:49 compute-0 nova_compute[117331]:       <driver name="qemu" type="qcow2" cache="none"/>
Oct 09 16:21:49 compute-0 nova_compute[117331]:       <source file="/var/lib/nova/instances/4537359c-cc98-4f5e-87c4-2410f96f0e44/disk"/>
Oct 09 16:21:49 compute-0 nova_compute[117331]:       <target dev="vda" bus="virtio"/>
Oct 09 16:21:49 compute-0 nova_compute[117331]:     </disk>
Oct 09 16:21:49 compute-0 nova_compute[117331]:     <disk type="file" device="cdrom">
Oct 09 16:21:49 compute-0 nova_compute[117331]:       <driver name="qemu" type="raw" cache="none"/>
Oct 09 16:21:49 compute-0 nova_compute[117331]:       <source file="/var/lib/nova/instances/4537359c-cc98-4f5e-87c4-2410f96f0e44/disk.config"/>
Oct 09 16:21:49 compute-0 nova_compute[117331]:       <target dev="sda" bus="sata"/>
Oct 09 16:21:49 compute-0 nova_compute[117331]:     </disk>
Oct 09 16:21:49 compute-0 nova_compute[117331]:     <interface type="ethernet">
Oct 09 16:21:49 compute-0 nova_compute[117331]:       <mac address="fa:16:3e:cc:3c:4d"/>
Oct 09 16:21:49 compute-0 nova_compute[117331]:       <model type="virtio"/>
Oct 09 16:21:49 compute-0 nova_compute[117331]:       <driver name="vhost" rx_queue_size="512"/>
Oct 09 16:21:49 compute-0 nova_compute[117331]:       <mtu size="1442"/>
Oct 09 16:21:49 compute-0 nova_compute[117331]:       <target dev="tapab5bf45b-d6"/>
Oct 09 16:21:49 compute-0 nova_compute[117331]:     </interface>
Oct 09 16:21:49 compute-0 nova_compute[117331]:     <serial type="pty">
Oct 09 16:21:49 compute-0 nova_compute[117331]:       <log file="/var/lib/nova/instances/4537359c-cc98-4f5e-87c4-2410f96f0e44/console.log" append="off"/>
Oct 09 16:21:49 compute-0 nova_compute[117331]:     </serial>
Oct 09 16:21:49 compute-0 nova_compute[117331]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Oct 09 16:21:49 compute-0 nova_compute[117331]:     <video>
Oct 09 16:21:49 compute-0 nova_compute[117331]:       <model type="virtio"/>
Oct 09 16:21:49 compute-0 nova_compute[117331]:     </video>
Oct 09 16:21:49 compute-0 nova_compute[117331]:     <input type="tablet" bus="usb"/>
Oct 09 16:21:49 compute-0 nova_compute[117331]:     <rng model="virtio">
Oct 09 16:21:49 compute-0 nova_compute[117331]:       <backend model="random">/dev/urandom</backend>
Oct 09 16:21:49 compute-0 nova_compute[117331]:     </rng>
Oct 09 16:21:49 compute-0 nova_compute[117331]:     <controller type="pci" model="pcie-root"/>
Oct 09 16:21:49 compute-0 nova_compute[117331]:     <controller type="pci" model="pcie-root-port"/>
Oct 09 16:21:49 compute-0 nova_compute[117331]:     <controller type="pci" model="pcie-root-port"/>
Oct 09 16:21:49 compute-0 nova_compute[117331]:     <controller type="pci" model="pcie-root-port"/>
Oct 09 16:21:49 compute-0 nova_compute[117331]:     <controller type="pci" model="pcie-root-port"/>
Oct 09 16:21:49 compute-0 nova_compute[117331]:     <controller type="pci" model="pcie-root-port"/>
Oct 09 16:21:49 compute-0 nova_compute[117331]:     <controller type="pci" model="pcie-root-port"/>
Oct 09 16:21:49 compute-0 nova_compute[117331]:     <controller type="pci" model="pcie-root-port"/>
Oct 09 16:21:49 compute-0 nova_compute[117331]:     <controller type="pci" model="pcie-root-port"/>
Oct 09 16:21:49 compute-0 nova_compute[117331]:     <controller type="pci" model="pcie-root-port"/>
Oct 09 16:21:49 compute-0 nova_compute[117331]:     <controller type="pci" model="pcie-root-port"/>
Oct 09 16:21:49 compute-0 nova_compute[117331]:     <controller type="pci" model="pcie-root-port"/>
Oct 09 16:21:49 compute-0 nova_compute[117331]:     <controller type="pci" model="pcie-root-port"/>
Oct 09 16:21:49 compute-0 nova_compute[117331]:     <controller type="pci" model="pcie-root-port"/>
Oct 09 16:21:49 compute-0 nova_compute[117331]:     <controller type="pci" model="pcie-root-port"/>
Oct 09 16:21:49 compute-0 nova_compute[117331]:     <controller type="pci" model="pcie-root-port"/>
Oct 09 16:21:49 compute-0 nova_compute[117331]:     <controller type="pci" model="pcie-root-port"/>
Oct 09 16:21:49 compute-0 nova_compute[117331]:     <controller type="pci" model="pcie-root-port"/>
Oct 09 16:21:49 compute-0 nova_compute[117331]:     <controller type="pci" model="pcie-root-port"/>
Oct 09 16:21:49 compute-0 nova_compute[117331]:     <controller type="pci" model="pcie-root-port"/>
Oct 09 16:21:49 compute-0 nova_compute[117331]:     <controller type="pci" model="pcie-root-port"/>
Oct 09 16:21:49 compute-0 nova_compute[117331]:     <controller type="pci" model="pcie-root-port"/>
Oct 09 16:21:49 compute-0 nova_compute[117331]:     <controller type="pci" model="pcie-root-port"/>
Oct 09 16:21:49 compute-0 nova_compute[117331]:     <controller type="pci" model="pcie-root-port"/>
Oct 09 16:21:49 compute-0 nova_compute[117331]:     <controller type="pci" model="pcie-root-port"/>
Oct 09 16:21:49 compute-0 nova_compute[117331]:     <controller type="usb" index="0"/>
Oct 09 16:21:49 compute-0 nova_compute[117331]:     <memballoon model="virtio" autodeflate="on" freePageReporting="on">
Oct 09 16:21:49 compute-0 nova_compute[117331]:       <stats period="10"/>
Oct 09 16:21:49 compute-0 nova_compute[117331]:     </memballoon>
Oct 09 16:21:49 compute-0 nova_compute[117331]:   </devices>
Oct 09 16:21:49 compute-0 nova_compute[117331]: </domain>
Oct 09 16:21:49 compute-0 nova_compute[117331]:  _get_guest_xml /usr/lib/python3.12/site-packages/nova/virt/libvirt/driver.py:8052
Oct 09 16:21:49 compute-0 nova_compute[117331]: 2025-10-09 16:21:49.028 2 DEBUG nova.compute.manager [None req-e40473d0-19c5-49b5-95f8-f15dcccc1600 2ed3ac10329446d8a5aae566951cae1e c47758289b4c448c95f927a91d89e09f - - default default] [instance: 4537359c-cc98-4f5e-87c4-2410f96f0e44] Preparing to wait for external event network-vif-plugged-ab5bf45b-d6f4-4ecf-95d5-b00418cb03fa prepare_for_instance_event /usr/lib/python3.12/site-packages/nova/compute/manager.py:307
Oct 09 16:21:49 compute-0 nova_compute[117331]: 2025-10-09 16:21:49.029 2 DEBUG oslo_concurrency.lockutils [None req-e40473d0-19c5-49b5-95f8-f15dcccc1600 2ed3ac10329446d8a5aae566951cae1e c47758289b4c448c95f927a91d89e09f - - default default] Acquiring lock "4537359c-cc98-4f5e-87c4-2410f96f0e44-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:405
Oct 09 16:21:49 compute-0 nova_compute[117331]: 2025-10-09 16:21:49.029 2 DEBUG oslo_concurrency.lockutils [None req-e40473d0-19c5-49b5-95f8-f15dcccc1600 2ed3ac10329446d8a5aae566951cae1e c47758289b4c448c95f927a91d89e09f - - default default] Lock "4537359c-cc98-4f5e-87c4-2410f96f0e44-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.001s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:410
Oct 09 16:21:49 compute-0 nova_compute[117331]: 2025-10-09 16:21:49.030 2 DEBUG oslo_concurrency.lockutils [None req-e40473d0-19c5-49b5-95f8-f15dcccc1600 2ed3ac10329446d8a5aae566951cae1e c47758289b4c448c95f927a91d89e09f - - default default] Lock "4537359c-cc98-4f5e-87c4-2410f96f0e44-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.001s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:424
Oct 09 16:21:49 compute-0 nova_compute[117331]: 2025-10-09 16:21:49.032 2 DEBUG nova.virt.libvirt.vif [None req-e40473d0-19c5-49b5-95f8-f15dcccc1600 2ed3ac10329446d8a5aae566951cae1e c47758289b4c448c95f927a91d89e09f - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,compute_id=1,config_drive='',created_at=2025-10-09T16:21:38Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description=None,display_name='tempest-TestExecuteHostMaintenanceStrategy-server-1250599539',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testexecutehostmaintenancestrategy-server-1250599539',id=12,image_ref='b7d6e0af-25e4-4227-9dc6-43143898ceee',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='c47758289b4c448c95f927a91d89e09f',ramdisk_id='',reservation_id='r-0ee0701h',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member,manager,admin',image_base_image_ref='b7d6e0af-25e4-4227-9dc6-43143898ceee',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestExecuteHostMaintenanceStrategy-51576386',owner_user_name='tempest-TestExecuteHostMaintenanceStrategy-51576386-project-admin'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-09T16:21:44Z,user_data=None,user_id='2ed3ac10329446d8a5aae566951cae1e',uuid=4537359c-cc98-4f5e-87c4-2410f96f0e44,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "ab5bf45b-d6f4-4ecf-95d5-b00418cb03fa", "address": "fa:16:3e:cc:3c:4d", "network": {"id": "c78b28f7-c251-4d74-863e-8d5520acbae0", "bridge": "br-int", "label": "tempest-TestExecuteHostMaintenanceStrategy-300799103-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "95a019f93e534ec0ad7dca9e44d00556", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapab5bf45b-d6", "ovs_interfaceid": "ab5bf45b-d6f4-4ecf-95d5-b00418cb03fa", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.12/site-packages/nova/virt/libvirt/vif.py:721
Oct 09 16:21:49 compute-0 nova_compute[117331]: 2025-10-09 16:21:49.032 2 DEBUG nova.network.os_vif_util [None req-e40473d0-19c5-49b5-95f8-f15dcccc1600 2ed3ac10329446d8a5aae566951cae1e c47758289b4c448c95f927a91d89e09f - - default default] Converting VIF {"id": "ab5bf45b-d6f4-4ecf-95d5-b00418cb03fa", "address": "fa:16:3e:cc:3c:4d", "network": {"id": "c78b28f7-c251-4d74-863e-8d5520acbae0", "bridge": "br-int", "label": "tempest-TestExecuteHostMaintenanceStrategy-300799103-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "95a019f93e534ec0ad7dca9e44d00556", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapab5bf45b-d6", "ovs_interfaceid": "ab5bf45b-d6f4-4ecf-95d5-b00418cb03fa", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.12/site-packages/nova/network/os_vif_util.py:511
Oct 09 16:21:49 compute-0 nova_compute[117331]: 2025-10-09 16:21:49.034 2 DEBUG nova.network.os_vif_util [None req-e40473d0-19c5-49b5-95f8-f15dcccc1600 2ed3ac10329446d8a5aae566951cae1e c47758289b4c448c95f927a91d89e09f - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:cc:3c:4d,bridge_name='br-int',has_traffic_filtering=True,id=ab5bf45b-d6f4-4ecf-95d5-b00418cb03fa,network=Network(c78b28f7-c251-4d74-863e-8d5520acbae0),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapab5bf45b-d6') nova_to_osvif_vif /usr/lib/python3.12/site-packages/nova/network/os_vif_util.py:548
Oct 09 16:21:49 compute-0 nova_compute[117331]: 2025-10-09 16:21:49.035 2 DEBUG os_vif [None req-e40473d0-19c5-49b5-95f8-f15dcccc1600 2ed3ac10329446d8a5aae566951cae1e c47758289b4c448c95f927a91d89e09f - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:cc:3c:4d,bridge_name='br-int',has_traffic_filtering=True,id=ab5bf45b-d6f4-4ecf-95d5-b00418cb03fa,network=Network(c78b28f7-c251-4d74-863e-8d5520acbae0),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapab5bf45b-d6') plug /usr/lib/python3.12/site-packages/os_vif/__init__.py:76
Oct 09 16:21:49 compute-0 nova_compute[117331]: 2025-10-09 16:21:49.036 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 22 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 09 16:21:49 compute-0 nova_compute[117331]: 2025-10-09 16:21:49.037 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.12/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 09 16:21:49 compute-0 nova_compute[117331]: 2025-10-09 16:21:49.038 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.12/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 09 16:21:49 compute-0 nova_compute[117331]: 2025-10-09 16:21:49.039 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 22 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 09 16:21:49 compute-0 nova_compute[117331]: 2025-10-09 16:21:49.040 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbCreateCommand(_result=None, table=QoS, columns={'type': 'linux-noop', 'external_ids': {'id': '0e11b8e2-05dd-5bc3-a78c-99f417ad0376', '_type': 'linux-noop'}}, row=False) do_commit /usr/lib/python3.12/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 09 16:21:49 compute-0 nova_compute[117331]: 2025-10-09 16:21:49.044 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 09 16:21:49 compute-0 nova_compute[117331]: 2025-10-09 16:21:49.047 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 22 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 09 16:21:49 compute-0 nova_compute[117331]: 2025-10-09 16:21:49.048 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapab5bf45b-d6, may_exist=True, interface_attrs={}) do_commit /usr/lib/python3.12/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 09 16:21:49 compute-0 nova_compute[117331]: 2025-10-09 16:21:49.049 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Port, record=tapab5bf45b-d6, col_values=(('qos', UUID('99d9a6b6-6206-4a52-a58f-0c9ee227159d')),), if_exists=True) do_commit /usr/lib/python3.12/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 09 16:21:49 compute-0 nova_compute[117331]: 2025-10-09 16:21:49.049 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=2): DbSetCommand(_result=None, table=Interface, record=tapab5bf45b-d6, col_values=(('external_ids', {'iface-id': 'ab5bf45b-d6f4-4ecf-95d5-b00418cb03fa', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:cc:3c:4d', 'vm-uuid': '4537359c-cc98-4f5e-87c4-2410f96f0e44'}),), if_exists=True) do_commit /usr/lib/python3.12/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 09 16:21:49 compute-0 nova_compute[117331]: 2025-10-09 16:21:49.051 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 09 16:21:49 compute-0 NetworkManager[1028]: <info>  [1760026909.0524] manager: (tapab5bf45b-d6): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/48)
Oct 09 16:21:49 compute-0 nova_compute[117331]: 2025-10-09 16:21:49.053 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:248
Oct 09 16:21:49 compute-0 nova_compute[117331]: 2025-10-09 16:21:49.061 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 09 16:21:49 compute-0 nova_compute[117331]: 2025-10-09 16:21:49.063 2 INFO os_vif [None req-e40473d0-19c5-49b5-95f8-f15dcccc1600 2ed3ac10329446d8a5aae566951cae1e c47758289b4c448c95f927a91d89e09f - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:cc:3c:4d,bridge_name='br-int',has_traffic_filtering=True,id=ab5bf45b-d6f4-4ecf-95d5-b00418cb03fa,network=Network(c78b28f7-c251-4d74-863e-8d5520acbae0),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapab5bf45b-d6')
Oct 09 16:21:50 compute-0 nova_compute[117331]: 2025-10-09 16:21:50.610 2 DEBUG nova.virt.libvirt.driver [None req-e40473d0-19c5-49b5-95f8-f15dcccc1600 2ed3ac10329446d8a5aae566951cae1e c47758289b4c448c95f927a91d89e09f - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.12/site-packages/nova/virt/libvirt/driver.py:13022
Oct 09 16:21:50 compute-0 nova_compute[117331]: 2025-10-09 16:21:50.612 2 DEBUG nova.virt.libvirt.driver [None req-e40473d0-19c5-49b5-95f8-f15dcccc1600 2ed3ac10329446d8a5aae566951cae1e c47758289b4c448c95f927a91d89e09f - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.12/site-packages/nova/virt/libvirt/driver.py:13022
Oct 09 16:21:50 compute-0 nova_compute[117331]: 2025-10-09 16:21:50.612 2 DEBUG nova.virt.libvirt.driver [None req-e40473d0-19c5-49b5-95f8-f15dcccc1600 2ed3ac10329446d8a5aae566951cae1e c47758289b4c448c95f927a91d89e09f - - default default] No VIF found with MAC fa:16:3e:cc:3c:4d, not building metadata _build_interface_metadata /usr/lib/python3.12/site-packages/nova/virt/libvirt/driver.py:12998
Oct 09 16:21:50 compute-0 nova_compute[117331]: 2025-10-09 16:21:50.614 2 INFO nova.virt.libvirt.driver [None req-e40473d0-19c5-49b5-95f8-f15dcccc1600 2ed3ac10329446d8a5aae566951cae1e c47758289b4c448c95f927a91d89e09f - - default default] [instance: 4537359c-cc98-4f5e-87c4-2410f96f0e44] Using config drive
Oct 09 16:21:50 compute-0 nova_compute[117331]: 2025-10-09 16:21:50.880 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 09 16:21:50 compute-0 podman[145641]: 2025-10-09 16:21:50.90663326 +0000 UTC m=+0.139933235 container health_status d615e4f24ff62fbb14be4a63e6da568ec5b22309773c1c66103b6b4de64a8a2f (image=38.102.83.66:5001/podified-master-centos10/openstack-ovn-controller:watcher_latest, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, container_name=ovn_controller, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=watcher_latest, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': '38.102.83.66:5001/podified-master-centos10/openstack-ovn-controller:watcher_latest', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.build-date=20251007, org.label-schema.name=CentOS Stream 10 Base Image, config_id=ovn_controller, io.buildah.version=1.41.4)
Oct 09 16:21:51 compute-0 nova_compute[117331]: 2025-10-09 16:21:51.127 2 WARNING neutronclient.v2_0.client [None req-e40473d0-19c5-49b5-95f8-f15dcccc1600 2ed3ac10329446d8a5aae566951cae1e c47758289b4c448c95f927a91d89e09f - - default default] The python binding code in neutronclient is deprecated in favor of OpenstackSDK, please use that as this will be removed in a future release.
Oct 09 16:21:51 compute-0 nova_compute[117331]: 2025-10-09 16:21:51.585 2 INFO nova.virt.libvirt.driver [None req-e40473d0-19c5-49b5-95f8-f15dcccc1600 2ed3ac10329446d8a5aae566951cae1e c47758289b4c448c95f927a91d89e09f - - default default] [instance: 4537359c-cc98-4f5e-87c4-2410f96f0e44] Creating config drive at /var/lib/nova/instances/4537359c-cc98-4f5e-87c4-2410f96f0e44/disk.config
Oct 09 16:21:51 compute-0 nova_compute[117331]: 2025-10-09 16:21:51.595 2 DEBUG oslo_concurrency.processutils [None req-e40473d0-19c5-49b5-95f8-f15dcccc1600 2ed3ac10329446d8a5aae566951cae1e c47758289b4c448c95f927a91d89e09f - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/4537359c-cc98-4f5e-87c4-2410f96f0e44/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 32.1.0-0.20251008143712.076498e.el10 -quiet -J -r -V config-2 /tmp/tmpkshds_da execute /usr/lib/python3.12/site-packages/oslo_concurrency/processutils.py:349
Oct 09 16:21:51 compute-0 nova_compute[117331]: 2025-10-09 16:21:51.725 2 DEBUG oslo_concurrency.processutils [None req-e40473d0-19c5-49b5-95f8-f15dcccc1600 2ed3ac10329446d8a5aae566951cae1e c47758289b4c448c95f927a91d89e09f - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/4537359c-cc98-4f5e-87c4-2410f96f0e44/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 32.1.0-0.20251008143712.076498e.el10 -quiet -J -r -V config-2 /tmp/tmpkshds_da" returned: 0 in 0.130s execute /usr/lib/python3.12/site-packages/oslo_concurrency/processutils.py:372
Oct 09 16:21:51 compute-0 kernel: tapab5bf45b-d6: entered promiscuous mode
Oct 09 16:21:51 compute-0 NetworkManager[1028]: <info>  [1760026911.8066] manager: (tapab5bf45b-d6): new Tun device (/org/freedesktop/NetworkManager/Devices/49)
Oct 09 16:21:51 compute-0 ovn_controller[19752]: 2025-10-09T16:21:51Z|00121|binding|INFO|Claiming lport ab5bf45b-d6f4-4ecf-95d5-b00418cb03fa for this chassis.
Oct 09 16:21:51 compute-0 ovn_controller[19752]: 2025-10-09T16:21:51Z|00122|binding|INFO|ab5bf45b-d6f4-4ecf-95d5-b00418cb03fa: Claiming fa:16:3e:cc:3c:4d 10.100.0.14
Oct 09 16:21:51 compute-0 nova_compute[117331]: 2025-10-09 16:21:51.806 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 09 16:21:51 compute-0 nova_compute[117331]: 2025-10-09 16:21:51.810 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 09 16:21:51 compute-0 nova_compute[117331]: 2025-10-09 16:21:51.818 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 09 16:21:51 compute-0 nova_compute[117331]: 2025-10-09 16:21:51.824 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 09 16:21:51 compute-0 ovn_metadata_agent[28608]: 2025-10-09 16:21:51.829 28613 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:cc:3c:4d 10.100.0.14'], port_security=['fa:16:3e:cc:3c:4d 10.100.0.14'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.14/28', 'neutron:device_id': '4537359c-cc98-4f5e-87c4-2410f96f0e44', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-c78b28f7-c251-4d74-863e-8d5520acbae0', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'c47758289b4c448c95f927a91d89e09f', 'neutron:revision_number': '4', 'neutron:security_group_ids': '37ce4296-f479-41b4-95bb-85317226edc9', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=8cdf3869-9618-417b-be95-470341634549, chassis=[<ovs.db.idl.Row object at 0x7f37b2576840>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f37b2576840>], logical_port=ab5bf45b-d6f4-4ecf-95d5-b00418cb03fa) old=Port_Binding(chassis=[]) matches /usr/lib/python3.12/site-packages/ovsdbapp/backend/ovs_idl/event.py:55
Oct 09 16:21:51 compute-0 ovn_metadata_agent[28608]: 2025-10-09 16:21:51.830 28613 INFO neutron.agent.ovn.metadata.agent [-] Port ab5bf45b-d6f4-4ecf-95d5-b00418cb03fa in datapath c78b28f7-c251-4d74-863e-8d5520acbae0 bound to our chassis
Oct 09 16:21:51 compute-0 ovn_metadata_agent[28608]: 2025-10-09 16:21:51.831 28613 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network c78b28f7-c251-4d74-863e-8d5520acbae0
Oct 09 16:21:51 compute-0 systemd-udevd[145686]: Network interface NamePolicy= disabled on kernel command line.
Oct 09 16:21:51 compute-0 ovn_metadata_agent[28608]: 2025-10-09 16:21:51.853 139687 DEBUG oslo.privsep.daemon [-] privsep: reply[72b8d4a4-7423-42a8-8402-edec9afa9314]: (4, False) _call_back /usr/lib/python3.12/site-packages/oslo_privsep/daemon.py:515
Oct 09 16:21:51 compute-0 ovn_metadata_agent[28608]: 2025-10-09 16:21:51.854 28613 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tapc78b28f7-c1 in ovnmeta-c78b28f7-c251-4d74-863e-8d5520acbae0 namespace provision_datapath /usr/lib/python3.12/site-packages/neutron/agent/ovn/metadata/agent.py:794
Oct 09 16:21:51 compute-0 ovn_metadata_agent[28608]: 2025-10-09 16:21:51.857 139687 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tapc78b28f7-c0 not found in namespace None get_link_id /usr/lib/python3.12/site-packages/neutron/privileged/agent/linux/ip_lib.py:203
Oct 09 16:21:51 compute-0 ovn_metadata_agent[28608]: 2025-10-09 16:21:51.857 139687 DEBUG oslo.privsep.daemon [-] privsep: reply[7cbf0cf7-0d78-4baa-a30c-e20b8e1ad758]: (4, False) _call_back /usr/lib/python3.12/site-packages/oslo_privsep/daemon.py:515
Oct 09 16:21:51 compute-0 systemd-machined[77487]: New machine qemu-8-instance-0000000c.
Oct 09 16:21:51 compute-0 ovn_metadata_agent[28608]: 2025-10-09 16:21:51.858 139687 DEBUG oslo.privsep.daemon [-] privsep: reply[0329012b-b387-4261-a671-9997357404b1]: (4, False) _call_back /usr/lib/python3.12/site-packages/oslo_privsep/daemon.py:515
Oct 09 16:21:51 compute-0 NetworkManager[1028]: <info>  [1760026911.8603] device (tapab5bf45b-d6): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Oct 09 16:21:51 compute-0 NetworkManager[1028]: <info>  [1760026911.8617] device (tapab5bf45b-d6): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Oct 09 16:21:51 compute-0 ovn_metadata_agent[28608]: 2025-10-09 16:21:51.880 28727 DEBUG oslo.privsep.daemon [-] privsep: reply[0fa8395f-397f-4e45-a049-0f1b9c70dbcd]: (4, None) _call_back /usr/lib/python3.12/site-packages/oslo_privsep/daemon.py:515
Oct 09 16:21:51 compute-0 ovn_controller[19752]: 2025-10-09T16:21:51Z|00123|binding|INFO|Setting lport ab5bf45b-d6f4-4ecf-95d5-b00418cb03fa ovn-installed in OVS
Oct 09 16:21:51 compute-0 ovn_controller[19752]: 2025-10-09T16:21:51Z|00124|binding|INFO|Setting lport ab5bf45b-d6f4-4ecf-95d5-b00418cb03fa up in Southbound
Oct 09 16:21:51 compute-0 ovn_metadata_agent[28608]: 2025-10-09 16:21:51.942 139687 DEBUG oslo.privsep.daemon [-] privsep: reply[1d4deb84-9015-46ad-aaa6-ed39b125ec0d]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.12/site-packages/oslo_privsep/daemon.py:515
Oct 09 16:21:51 compute-0 systemd[1]: Started Virtual Machine qemu-8-instance-0000000c.
Oct 09 16:21:51 compute-0 nova_compute[117331]: 2025-10-09 16:21:51.944 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 09 16:21:51 compute-0 ovn_metadata_agent[28608]: 2025-10-09 16:21:51.989 141048 DEBUG oslo.privsep.daemon [-] privsep: reply[1c7a8dc2-28c1-4129-ad94-efd66aa4fb3d]: (4, None) _call_back /usr/lib/python3.12/site-packages/oslo_privsep/daemon.py:515
Oct 09 16:21:51 compute-0 ovn_metadata_agent[28608]: 2025-10-09 16:21:51.995 139687 DEBUG oslo.privsep.daemon [-] privsep: reply[9c20b372-5679-4666-b703-8f66a78dfe63]: (4, None) _call_back /usr/lib/python3.12/site-packages/oslo_privsep/daemon.py:515
Oct 09 16:21:51 compute-0 NetworkManager[1028]: <info>  [1760026911.9985] manager: (tapc78b28f7-c0): new Veth device (/org/freedesktop/NetworkManager/Devices/50)
Oct 09 16:21:52 compute-0 systemd-udevd[145690]: Network interface NamePolicy= disabled on kernel command line.
Oct 09 16:21:52 compute-0 ovn_metadata_agent[28608]: 2025-10-09 16:21:52.038 141048 DEBUG oslo.privsep.daemon [-] privsep: reply[8217303d-bca3-4c45-8ea9-f8a97c8861cf]: (4, None) _call_back /usr/lib/python3.12/site-packages/oslo_privsep/daemon.py:515
Oct 09 16:21:52 compute-0 ovn_metadata_agent[28608]: 2025-10-09 16:21:52.040 141048 DEBUG oslo.privsep.daemon [-] privsep: reply[2d74a85d-f5b4-4ed9-b393-edc0040b4fef]: (4, None) _call_back /usr/lib/python3.12/site-packages/oslo_privsep/daemon.py:515
Oct 09 16:21:52 compute-0 nova_compute[117331]: 2025-10-09 16:21:52.042 2 DEBUG nova.compute.manager [req-86654ac1-5d92-4df4-969b-4915f9937edd req-97e19a69-cf67-43a0-81f9-7f8ad5bf4c26 ee15961ad2be4bada6c106ac4f69f422 c076c42271004e7aa86e84416e33f826 - - default default] [instance: 4537359c-cc98-4f5e-87c4-2410f96f0e44] Received event network-vif-plugged-ab5bf45b-d6f4-4ecf-95d5-b00418cb03fa external_instance_event /usr/lib/python3.12/site-packages/nova/compute/manager.py:11812
Oct 09 16:21:52 compute-0 nova_compute[117331]: 2025-10-09 16:21:52.043 2 DEBUG oslo_concurrency.lockutils [req-86654ac1-5d92-4df4-969b-4915f9937edd req-97e19a69-cf67-43a0-81f9-7f8ad5bf4c26 ee15961ad2be4bada6c106ac4f69f422 c076c42271004e7aa86e84416e33f826 - - default default] Acquiring lock "4537359c-cc98-4f5e-87c4-2410f96f0e44-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:405
Oct 09 16:21:52 compute-0 nova_compute[117331]: 2025-10-09 16:21:52.043 2 DEBUG oslo_concurrency.lockutils [req-86654ac1-5d92-4df4-969b-4915f9937edd req-97e19a69-cf67-43a0-81f9-7f8ad5bf4c26 ee15961ad2be4bada6c106ac4f69f422 c076c42271004e7aa86e84416e33f826 - - default default] Lock "4537359c-cc98-4f5e-87c4-2410f96f0e44-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:410
Oct 09 16:21:52 compute-0 nova_compute[117331]: 2025-10-09 16:21:52.043 2 DEBUG oslo_concurrency.lockutils [req-86654ac1-5d92-4df4-969b-4915f9937edd req-97e19a69-cf67-43a0-81f9-7f8ad5bf4c26 ee15961ad2be4bada6c106ac4f69f422 c076c42271004e7aa86e84416e33f826 - - default default] Lock "4537359c-cc98-4f5e-87c4-2410f96f0e44-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:424
Oct 09 16:21:52 compute-0 nova_compute[117331]: 2025-10-09 16:21:52.044 2 DEBUG nova.compute.manager [req-86654ac1-5d92-4df4-969b-4915f9937edd req-97e19a69-cf67-43a0-81f9-7f8ad5bf4c26 ee15961ad2be4bada6c106ac4f69f422 c076c42271004e7aa86e84416e33f826 - - default default] [instance: 4537359c-cc98-4f5e-87c4-2410f96f0e44] Processing event network-vif-plugged-ab5bf45b-d6f4-4ecf-95d5-b00418cb03fa _process_instance_event /usr/lib/python3.12/site-packages/nova/compute/manager.py:11572
Oct 09 16:21:52 compute-0 NetworkManager[1028]: <info>  [1760026912.0723] device (tapc78b28f7-c0): carrier: link connected
Oct 09 16:21:52 compute-0 ovn_metadata_agent[28608]: 2025-10-09 16:21:52.077 141048 DEBUG oslo.privsep.daemon [-] privsep: reply[e185282a-27e6-4e35-b8fe-491f66cebb57]: (4, None) _call_back /usr/lib/python3.12/site-packages/oslo_privsep/daemon.py:515
Oct 09 16:21:52 compute-0 ovn_metadata_agent[28608]: 2025-10-09 16:21:52.098 139687 DEBUG oslo.privsep.daemon [-] privsep: reply[f5f2757c-9df1-4bf1-b2ea-be1be03dfd33]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapc78b28f7-c1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:11:fa:7f'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 37], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 185567, 'reachable_time': 37372, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 145720, 'error': None, 'target': 'ovnmeta-c78b28f7-c251-4d74-863e-8d5520acbae0', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.12/site-packages/oslo_privsep/daemon.py:515
Oct 09 16:21:52 compute-0 ovn_metadata_agent[28608]: 2025-10-09 16:21:52.117 139687 DEBUG oslo.privsep.daemon [-] privsep: reply[7b221d80-7c3a-4bc8-a7ec-eca1b5f2e276]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe11:fa7f'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 185567, 'tstamp': 185567}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 145721, 'error': None, 'target': 'ovnmeta-c78b28f7-c251-4d74-863e-8d5520acbae0', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.12/site-packages/oslo_privsep/daemon.py:515
Oct 09 16:21:52 compute-0 ovn_metadata_agent[28608]: 2025-10-09 16:21:52.136 139687 DEBUG oslo.privsep.daemon [-] privsep: reply[83423f30-bf44-49fd-a83d-82fd17a5fbb2]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapc78b28f7-c1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:11:fa:7f'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 37], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 185567, 'reachable_time': 37372, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 145722, 'error': None, 'target': 'ovnmeta-c78b28f7-c251-4d74-863e-8d5520acbae0', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.12/site-packages/oslo_privsep/daemon.py:515
Oct 09 16:21:52 compute-0 ovn_metadata_agent[28608]: 2025-10-09 16:21:52.177 139687 DEBUG oslo.privsep.daemon [-] privsep: reply[1601bb29-0cf9-4481-8922-ec465c746ff0]: (4, None) _call_back /usr/lib/python3.12/site-packages/oslo_privsep/daemon.py:515
Oct 09 16:21:52 compute-0 ovn_metadata_agent[28608]: 2025-10-09 16:21:52.238 139687 DEBUG oslo.privsep.daemon [-] privsep: reply[de1c1816-5706-4fd9-ba6e-0993fc53722d]: (4, None) _call_back /usr/lib/python3.12/site-packages/oslo_privsep/daemon.py:515
Oct 09 16:21:52 compute-0 ovn_metadata_agent[28608]: 2025-10-09 16:21:52.239 28613 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapc78b28f7-c0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.12/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 09 16:21:52 compute-0 ovn_metadata_agent[28608]: 2025-10-09 16:21:52.239 28613 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.12/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 09 16:21:52 compute-0 ovn_metadata_agent[28608]: 2025-10-09 16:21:52.239 28613 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapc78b28f7-c0, may_exist=True, interface_attrs={}) do_commit /usr/lib/python3.12/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 09 16:21:52 compute-0 NetworkManager[1028]: <info>  [1760026912.2418] manager: (tapc78b28f7-c0): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/51)
Oct 09 16:21:52 compute-0 kernel: tapc78b28f7-c0: entered promiscuous mode
Oct 09 16:21:52 compute-0 ovn_metadata_agent[28608]: 2025-10-09 16:21:52.243 28613 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapc78b28f7-c0, col_values=(('external_ids', {'iface-id': 'ba8bb1df-8aa6-40e9-83d3-fc3264c1941e'}),), if_exists=True) do_commit /usr/lib/python3.12/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 09 16:21:52 compute-0 ovn_controller[19752]: 2025-10-09T16:21:52Z|00125|binding|INFO|Releasing lport ba8bb1df-8aa6-40e9-83d3-fc3264c1941e from this chassis (sb_readonly=0)
Oct 09 16:21:52 compute-0 nova_compute[117331]: 2025-10-09 16:21:52.255 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 09 16:21:52 compute-0 ovn_metadata_agent[28608]: 2025-10-09 16:21:52.257 139687 DEBUG oslo.privsep.daemon [-] privsep: reply[f85341bf-8a6d-46ca-a6c1-24fccd82fef3]: (4, '') _call_back /usr/lib/python3.12/site-packages/oslo_privsep/daemon.py:515
Oct 09 16:21:52 compute-0 ovn_metadata_agent[28608]: 2025-10-09 16:21:52.259 28613 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/c78b28f7-c251-4d74-863e-8d5520acbae0.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/c78b28f7-c251-4d74-863e-8d5520acbae0.pid.haproxy' get_value_from_file /usr/lib/python3.12/site-packages/neutron/agent/linux/utils.py:268
Oct 09 16:21:52 compute-0 ovn_metadata_agent[28608]: 2025-10-09 16:21:52.259 28613 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/c78b28f7-c251-4d74-863e-8d5520acbae0.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/c78b28f7-c251-4d74-863e-8d5520acbae0.pid.haproxy' get_value_from_file /usr/lib/python3.12/site-packages/neutron/agent/linux/utils.py:268
Oct 09 16:21:52 compute-0 ovn_metadata_agent[28608]: 2025-10-09 16:21:52.259 28613 DEBUG neutron.agent.linux.external_process [-] No haproxy process started for c78b28f7-c251-4d74-863e-8d5520acbae0 disable /usr/lib/python3.12/site-packages/neutron/agent/linux/external_process.py:173
Oct 09 16:21:52 compute-0 ovn_metadata_agent[28608]: 2025-10-09 16:21:52.259 28613 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/c78b28f7-c251-4d74-863e-8d5520acbae0.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/c78b28f7-c251-4d74-863e-8d5520acbae0.pid.haproxy' get_value_from_file /usr/lib/python3.12/site-packages/neutron/agent/linux/utils.py:268
Oct 09 16:21:52 compute-0 ovn_metadata_agent[28608]: 2025-10-09 16:21:52.260 139687 DEBUG oslo.privsep.daemon [-] privsep: reply[c7565830-3354-490d-ba2c-30798615d44d]: (4, None) _call_back /usr/lib/python3.12/site-packages/oslo_privsep/daemon.py:515
Oct 09 16:21:52 compute-0 ovn_metadata_agent[28608]: 2025-10-09 16:21:52.260 28613 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/c78b28f7-c251-4d74-863e-8d5520acbae0.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/c78b28f7-c251-4d74-863e-8d5520acbae0.pid.haproxy' get_value_from_file /usr/lib/python3.12/site-packages/neutron/agent/linux/utils.py:268
Oct 09 16:21:52 compute-0 ovn_metadata_agent[28608]: 2025-10-09 16:21:52.260 139687 DEBUG oslo.privsep.daemon [-] privsep: reply[f44f422f-a718-4642-86bb-1a11d089687b]: (4, None) _call_back /usr/lib/python3.12/site-packages/oslo_privsep/daemon.py:515
Oct 09 16:21:52 compute-0 ovn_metadata_agent[28608]: 2025-10-09 16:21:52.261 28613 DEBUG neutron.agent.metadata.driver_base [-] haproxy_cfg = 
Oct 09 16:21:52 compute-0 ovn_metadata_agent[28608]: global
Oct 09 16:21:52 compute-0 ovn_metadata_agent[28608]:     log         /dev/log local0 debug
Oct 09 16:21:52 compute-0 ovn_metadata_agent[28608]:     log-tag     haproxy-metadata-proxy-c78b28f7-c251-4d74-863e-8d5520acbae0
Oct 09 16:21:52 compute-0 ovn_metadata_agent[28608]:     user        root
Oct 09 16:21:52 compute-0 ovn_metadata_agent[28608]:     group       root
Oct 09 16:21:52 compute-0 ovn_metadata_agent[28608]:     maxconn     1024
Oct 09 16:21:52 compute-0 ovn_metadata_agent[28608]:     pidfile     /var/lib/neutron/external/pids/c78b28f7-c251-4d74-863e-8d5520acbae0.pid.haproxy
Oct 09 16:21:52 compute-0 ovn_metadata_agent[28608]:     daemon
Oct 09 16:21:52 compute-0 ovn_metadata_agent[28608]: 
Oct 09 16:21:52 compute-0 ovn_metadata_agent[28608]: defaults
Oct 09 16:21:52 compute-0 ovn_metadata_agent[28608]:     log global
Oct 09 16:21:52 compute-0 ovn_metadata_agent[28608]:     mode http
Oct 09 16:21:52 compute-0 ovn_metadata_agent[28608]:     option httplog
Oct 09 16:21:52 compute-0 ovn_metadata_agent[28608]:     option dontlognull
Oct 09 16:21:52 compute-0 ovn_metadata_agent[28608]:     option http-server-close
Oct 09 16:21:52 compute-0 ovn_metadata_agent[28608]:     option forwardfor
Oct 09 16:21:52 compute-0 ovn_metadata_agent[28608]:     retries                 3
Oct 09 16:21:52 compute-0 ovn_metadata_agent[28608]:     timeout http-request    30s
Oct 09 16:21:52 compute-0 ovn_metadata_agent[28608]:     timeout connect         30s
Oct 09 16:21:52 compute-0 ovn_metadata_agent[28608]:     timeout client          32s
Oct 09 16:21:52 compute-0 ovn_metadata_agent[28608]:     timeout server          32s
Oct 09 16:21:52 compute-0 ovn_metadata_agent[28608]:     timeout http-keep-alive 30s
Oct 09 16:21:52 compute-0 ovn_metadata_agent[28608]: 
Oct 09 16:21:52 compute-0 ovn_metadata_agent[28608]: listen listener
Oct 09 16:21:52 compute-0 ovn_metadata_agent[28608]:     bind 169.254.169.254:80
Oct 09 16:21:52 compute-0 ovn_metadata_agent[28608]:     
Oct 09 16:21:52 compute-0 ovn_metadata_agent[28608]:     server metadata /var/lib/neutron/metadata_proxy
Oct 09 16:21:52 compute-0 ovn_metadata_agent[28608]: 
Oct 09 16:21:52 compute-0 ovn_metadata_agent[28608]:     http-request add-header X-OVN-Network-ID c78b28f7-c251-4d74-863e-8d5520acbae0
Oct 09 16:21:52 compute-0 ovn_metadata_agent[28608]:  create_config_file /usr/lib/python3.12/site-packages/neutron/agent/metadata/driver_base.py:155
Oct 09 16:21:52 compute-0 ovn_metadata_agent[28608]: 2025-10-09 16:21:52.262 28613 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-c78b28f7-c251-4d74-863e-8d5520acbae0', 'env', 'PROCESS_TAG=haproxy-c78b28f7-c251-4d74-863e-8d5520acbae0', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/c78b28f7-c251-4d74-863e-8d5520acbae0.conf'] create_process /usr/lib/python3.12/site-packages/neutron/agent/linux/utils.py:84
Oct 09 16:21:52 compute-0 podman[145760]: 2025-10-09 16:21:52.691648594 +0000 UTC m=+0.064006748 container create 43deaebc7c8cb50ee92089f3e78b68b4b50312e14279ca186ba0e8a2a4e99d92 (image=38.102.83.66:5001/podified-master-centos10/openstack-neutron-metadata-agent-ovn:watcher_latest, name=neutron-haproxy-ovnmeta-c78b28f7-c251-4d74-863e-8d5520acbae0, io.buildah.version=1.41.4, tcib_managed=true, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=watcher_latest, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251007)
Oct 09 16:21:52 compute-0 systemd[1]: Started libpod-conmon-43deaebc7c8cb50ee92089f3e78b68b4b50312e14279ca186ba0e8a2a4e99d92.scope.
Oct 09 16:21:52 compute-0 podman[145760]: 2025-10-09 16:21:52.653565162 +0000 UTC m=+0.025923346 image pull 4e15c189a95c5dcd1203c76fe304de5c8f8616da9a258ca168cc54a517f49b87 38.102.83.66:5001/podified-master-centos10/openstack-neutron-metadata-agent-ovn:watcher_latest
Oct 09 16:21:52 compute-0 systemd[1]: Started libcrun container.
Oct 09 16:21:52 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1eb06f9a897e180be4ca62f22ca6f08606777a6e2185fe7a945461a2c2ab85fb/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Oct 09 16:21:52 compute-0 podman[145760]: 2025-10-09 16:21:52.797041139 +0000 UTC m=+0.169399303 container init 43deaebc7c8cb50ee92089f3e78b68b4b50312e14279ca186ba0e8a2a4e99d92 (image=38.102.83.66:5001/podified-master-centos10/openstack-neutron-metadata-agent-ovn:watcher_latest, name=neutron-haproxy-ovnmeta-c78b28f7-c251-4d74-863e-8d5520acbae0, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251007, tcib_build_tag=watcher_latest, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.4)
Oct 09 16:21:52 compute-0 podman[145760]: 2025-10-09 16:21:52.807689258 +0000 UTC m=+0.180047402 container start 43deaebc7c8cb50ee92089f3e78b68b4b50312e14279ca186ba0e8a2a4e99d92 (image=38.102.83.66:5001/podified-master-centos10/openstack-neutron-metadata-agent-ovn:watcher_latest, name=neutron-haproxy-ovnmeta-c78b28f7-c251-4d74-863e-8d5520acbae0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251007, tcib_build_tag=watcher_latest, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.41.4)
Oct 09 16:21:52 compute-0 neutron-haproxy-ovnmeta-c78b28f7-c251-4d74-863e-8d5520acbae0[145775]: [NOTICE]   (145779) : New worker (145781) forked
Oct 09 16:21:52 compute-0 neutron-haproxy-ovnmeta-c78b28f7-c251-4d74-863e-8d5520acbae0[145775]: [NOTICE]   (145779) : Loading success.
Oct 09 16:21:52 compute-0 nova_compute[117331]: 2025-10-09 16:21:52.914 2 DEBUG nova.compute.manager [None req-e40473d0-19c5-49b5-95f8-f15dcccc1600 2ed3ac10329446d8a5aae566951cae1e c47758289b4c448c95f927a91d89e09f - - default default] [instance: 4537359c-cc98-4f5e-87c4-2410f96f0e44] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.12/site-packages/nova/compute/manager.py:602
Oct 09 16:21:52 compute-0 nova_compute[117331]: 2025-10-09 16:21:52.919 2 DEBUG nova.virt.libvirt.driver [None req-e40473d0-19c5-49b5-95f8-f15dcccc1600 2ed3ac10329446d8a5aae566951cae1e c47758289b4c448c95f927a91d89e09f - - default default] [instance: 4537359c-cc98-4f5e-87c4-2410f96f0e44] Guest created on hypervisor spawn /usr/lib/python3.12/site-packages/nova/virt/libvirt/driver.py:4816
Oct 09 16:21:52 compute-0 nova_compute[117331]: 2025-10-09 16:21:52.923 2 INFO nova.virt.libvirt.driver [-] [instance: 4537359c-cc98-4f5e-87c4-2410f96f0e44] Instance spawned successfully.
Oct 09 16:21:52 compute-0 nova_compute[117331]: 2025-10-09 16:21:52.924 2 DEBUG nova.virt.libvirt.driver [None req-e40473d0-19c5-49b5-95f8-f15dcccc1600 2ed3ac10329446d8a5aae566951cae1e c47758289b4c448c95f927a91d89e09f - - default default] [instance: 4537359c-cc98-4f5e-87c4-2410f96f0e44] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.12/site-packages/nova/virt/libvirt/driver.py:985
Oct 09 16:21:53 compute-0 nova_compute[117331]: 2025-10-09 16:21:53.445 2 DEBUG nova.virt.libvirt.driver [None req-e40473d0-19c5-49b5-95f8-f15dcccc1600 2ed3ac10329446d8a5aae566951cae1e c47758289b4c448c95f927a91d89e09f - - default default] [instance: 4537359c-cc98-4f5e-87c4-2410f96f0e44] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.12/site-packages/nova/virt/libvirt/driver.py:1014
Oct 09 16:21:53 compute-0 nova_compute[117331]: 2025-10-09 16:21:53.445 2 DEBUG nova.virt.libvirt.driver [None req-e40473d0-19c5-49b5-95f8-f15dcccc1600 2ed3ac10329446d8a5aae566951cae1e c47758289b4c448c95f927a91d89e09f - - default default] [instance: 4537359c-cc98-4f5e-87c4-2410f96f0e44] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.12/site-packages/nova/virt/libvirt/driver.py:1014
Oct 09 16:21:53 compute-0 nova_compute[117331]: 2025-10-09 16:21:53.446 2 DEBUG nova.virt.libvirt.driver [None req-e40473d0-19c5-49b5-95f8-f15dcccc1600 2ed3ac10329446d8a5aae566951cae1e c47758289b4c448c95f927a91d89e09f - - default default] [instance: 4537359c-cc98-4f5e-87c4-2410f96f0e44] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.12/site-packages/nova/virt/libvirt/driver.py:1014
Oct 09 16:21:53 compute-0 nova_compute[117331]: 2025-10-09 16:21:53.447 2 DEBUG nova.virt.libvirt.driver [None req-e40473d0-19c5-49b5-95f8-f15dcccc1600 2ed3ac10329446d8a5aae566951cae1e c47758289b4c448c95f927a91d89e09f - - default default] [instance: 4537359c-cc98-4f5e-87c4-2410f96f0e44] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.12/site-packages/nova/virt/libvirt/driver.py:1014
Oct 09 16:21:53 compute-0 nova_compute[117331]: 2025-10-09 16:21:53.447 2 DEBUG nova.virt.libvirt.driver [None req-e40473d0-19c5-49b5-95f8-f15dcccc1600 2ed3ac10329446d8a5aae566951cae1e c47758289b4c448c95f927a91d89e09f - - default default] [instance: 4537359c-cc98-4f5e-87c4-2410f96f0e44] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.12/site-packages/nova/virt/libvirt/driver.py:1014
Oct 09 16:21:53 compute-0 nova_compute[117331]: 2025-10-09 16:21:53.448 2 DEBUG nova.virt.libvirt.driver [None req-e40473d0-19c5-49b5-95f8-f15dcccc1600 2ed3ac10329446d8a5aae566951cae1e c47758289b4c448c95f927a91d89e09f - - default default] [instance: 4537359c-cc98-4f5e-87c4-2410f96f0e44] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.12/site-packages/nova/virt/libvirt/driver.py:1014
Oct 09 16:21:53 compute-0 nova_compute[117331]: 2025-10-09 16:21:53.963 2 INFO nova.compute.manager [None req-e40473d0-19c5-49b5-95f8-f15dcccc1600 2ed3ac10329446d8a5aae566951cae1e c47758289b4c448c95f927a91d89e09f - - default default] [instance: 4537359c-cc98-4f5e-87c4-2410f96f0e44] Took 8.38 seconds to spawn the instance on the hypervisor.
Oct 09 16:21:53 compute-0 nova_compute[117331]: 2025-10-09 16:21:53.964 2 DEBUG nova.compute.manager [None req-e40473d0-19c5-49b5-95f8-f15dcccc1600 2ed3ac10329446d8a5aae566951cae1e c47758289b4c448c95f927a91d89e09f - - default default] [instance: 4537359c-cc98-4f5e-87c4-2410f96f0e44] Checking state _get_power_state /usr/lib/python3.12/site-packages/nova/compute/manager.py:1825
Oct 09 16:21:54 compute-0 nova_compute[117331]: 2025-10-09 16:21:54.051 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 09 16:21:54 compute-0 nova_compute[117331]: 2025-10-09 16:21:54.092 2 DEBUG nova.compute.manager [req-01d300ee-47fb-437d-8512-f136044c01b0 req-39826c66-1e5c-407f-b0b7-ba268b8d32d5 ee15961ad2be4bada6c106ac4f69f422 c076c42271004e7aa86e84416e33f826 - - default default] [instance: 4537359c-cc98-4f5e-87c4-2410f96f0e44] Received event network-vif-plugged-ab5bf45b-d6f4-4ecf-95d5-b00418cb03fa external_instance_event /usr/lib/python3.12/site-packages/nova/compute/manager.py:11812
Oct 09 16:21:54 compute-0 nova_compute[117331]: 2025-10-09 16:21:54.092 2 DEBUG oslo_concurrency.lockutils [req-01d300ee-47fb-437d-8512-f136044c01b0 req-39826c66-1e5c-407f-b0b7-ba268b8d32d5 ee15961ad2be4bada6c106ac4f69f422 c076c42271004e7aa86e84416e33f826 - - default default] Acquiring lock "4537359c-cc98-4f5e-87c4-2410f96f0e44-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:405
Oct 09 16:21:54 compute-0 nova_compute[117331]: 2025-10-09 16:21:54.093 2 DEBUG oslo_concurrency.lockutils [req-01d300ee-47fb-437d-8512-f136044c01b0 req-39826c66-1e5c-407f-b0b7-ba268b8d32d5 ee15961ad2be4bada6c106ac4f69f422 c076c42271004e7aa86e84416e33f826 - - default default] Lock "4537359c-cc98-4f5e-87c4-2410f96f0e44-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:410
Oct 09 16:21:54 compute-0 nova_compute[117331]: 2025-10-09 16:21:54.093 2 DEBUG oslo_concurrency.lockutils [req-01d300ee-47fb-437d-8512-f136044c01b0 req-39826c66-1e5c-407f-b0b7-ba268b8d32d5 ee15961ad2be4bada6c106ac4f69f422 c076c42271004e7aa86e84416e33f826 - - default default] Lock "4537359c-cc98-4f5e-87c4-2410f96f0e44-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:424
Oct 09 16:21:54 compute-0 nova_compute[117331]: 2025-10-09 16:21:54.094 2 DEBUG nova.compute.manager [req-01d300ee-47fb-437d-8512-f136044c01b0 req-39826c66-1e5c-407f-b0b7-ba268b8d32d5 ee15961ad2be4bada6c106ac4f69f422 c076c42271004e7aa86e84416e33f826 - - default default] [instance: 4537359c-cc98-4f5e-87c4-2410f96f0e44] No waiting events found dispatching network-vif-plugged-ab5bf45b-d6f4-4ecf-95d5-b00418cb03fa pop_instance_event /usr/lib/python3.12/site-packages/nova/compute/manager.py:344
Oct 09 16:21:54 compute-0 nova_compute[117331]: 2025-10-09 16:21:54.094 2 WARNING nova.compute.manager [req-01d300ee-47fb-437d-8512-f136044c01b0 req-39826c66-1e5c-407f-b0b7-ba268b8d32d5 ee15961ad2be4bada6c106ac4f69f422 c076c42271004e7aa86e84416e33f826 - - default default] [instance: 4537359c-cc98-4f5e-87c4-2410f96f0e44] Received unexpected event network-vif-plugged-ab5bf45b-d6f4-4ecf-95d5-b00418cb03fa for instance with vm_state active and task_state None.
Oct 09 16:21:54 compute-0 nova_compute[117331]: 2025-10-09 16:21:54.503 2 INFO nova.compute.manager [None req-e40473d0-19c5-49b5-95f8-f15dcccc1600 2ed3ac10329446d8a5aae566951cae1e c47758289b4c448c95f927a91d89e09f - - default default] [instance: 4537359c-cc98-4f5e-87c4-2410f96f0e44] Took 13.60 seconds to build instance.
Oct 09 16:21:55 compute-0 nova_compute[117331]: 2025-10-09 16:21:55.009 2 DEBUG oslo_concurrency.lockutils [None req-e40473d0-19c5-49b5-95f8-f15dcccc1600 2ed3ac10329446d8a5aae566951cae1e c47758289b4c448c95f927a91d89e09f - - default default] Lock "4537359c-cc98-4f5e-87c4-2410f96f0e44" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 15.117s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:424
Oct 09 16:21:55 compute-0 nova_compute[117331]: 2025-10-09 16:21:55.942 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 09 16:21:59 compute-0 nova_compute[117331]: 2025-10-09 16:21:59.054 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 09 16:21:59 compute-0 podman[127775]: time="2025-10-09T16:21:59Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Oct 09 16:21:59 compute-0 podman[127775]: @ - - [09/Oct/2025:16:21:59 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 20749 "" "Go-http-client/1.1"
Oct 09 16:21:59 compute-0 podman[127775]: @ - - [09/Oct/2025:16:21:59 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 3480 "" "Go-http-client/1.1"
Oct 09 16:22:00 compute-0 nova_compute[117331]: 2025-10-09 16:22:00.943 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 09 16:22:01 compute-0 openstack_network_exporter[129925]: ERROR   16:22:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Oct 09 16:22:01 compute-0 openstack_network_exporter[129925]: ERROR   16:22:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Oct 09 16:22:01 compute-0 openstack_network_exporter[129925]: ERROR   16:22:01 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Oct 09 16:22:01 compute-0 openstack_network_exporter[129925]: ERROR   16:22:01 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Oct 09 16:22:01 compute-0 openstack_network_exporter[129925]: 
Oct 09 16:22:01 compute-0 openstack_network_exporter[129925]: ERROR   16:22:01 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Oct 09 16:22:01 compute-0 openstack_network_exporter[129925]: 
Oct 09 16:22:01 compute-0 podman[145790]: 2025-10-09 16:22:01.854874126 +0000 UTC m=+0.076927999 container health_status 270830b5167cafbab735d8118980f26987e673cd0fa464dd2202394830bcca66 (image=38.102.83.66:5001/podified-master-centos10/openstack-multipathd:watcher_latest, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=watcher_latest, tcib_managed=true, io.buildah.version=1.41.4, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': '38.102.83.66:5001/podified-master-centos10/openstack-multipathd:watcher_latest', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, managed_by=edpm_ansible, config_id=multipathd, container_name=multipathd, org.label-schema.build-date=20251007, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team)
Oct 09 16:22:02 compute-0 ovn_metadata_agent[28608]: 2025-10-09 16:22:02.214 28613 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=14, options={'arp_ns_explicit_output': 'true', 'fdb_removal_limit': '0', 'ignore_lsp_down': 'false', 'mac_binding_removal_limit': '0', 'mac_prefix': '4a:65:5f', 'max_tunid': '16711680', 'northd_internal_version': '24.09.4-20.37.0-77.8', 'svc_monitor_mac': '32:24:1e:e3:5d:e8'}, ipsec=False) old=SB_Global(nb_cfg=13) matches /usr/lib/python3.12/site-packages/ovsdbapp/backend/ovs_idl/event.py:55
Oct 09 16:22:02 compute-0 ovn_metadata_agent[28608]: 2025-10-09 16:22:02.215 28613 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 3 seconds run /usr/lib/python3.12/site-packages/neutron/agent/ovn/metadata/agent.py:367
Oct 09 16:22:02 compute-0 nova_compute[117331]: 2025-10-09 16:22:02.214 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 09 16:22:04 compute-0 nova_compute[117331]: 2025-10-09 16:22:04.056 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 09 16:22:04 compute-0 ovn_controller[19752]: 2025-10-09T16:22:04Z|00015|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:cc:3c:4d 10.100.0.14
Oct 09 16:22:04 compute-0 ovn_controller[19752]: 2025-10-09T16:22:04Z|00016|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:cc:3c:4d 10.100.0.14
Oct 09 16:22:05 compute-0 systemd[1]: virtproxyd.service: Deactivated successfully.
Oct 09 16:22:05 compute-0 ovn_metadata_agent[28608]: 2025-10-09 16:22:05.217 28613 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=9954897f-aa83-45dd-8e84-289816676c2a, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '14'}),), if_exists=True) do_commit /usr/lib/python3.12/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 09 16:22:05 compute-0 podman[145834]: 2025-10-09 16:22:05.219603681 +0000 UTC m=+0.075930327 container health_status 86aa93e887899e54d383b0fc17fb3c5637680d1d8e146a130a557c837f0260fe (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible)
Oct 09 16:22:05 compute-0 nova_compute[117331]: 2025-10-09 16:22:05.945 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 09 16:22:09 compute-0 nova_compute[117331]: 2025-10-09 16:22:09.058 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 09 16:22:10 compute-0 podman[145859]: 2025-10-09 16:22:10.833731851 +0000 UTC m=+0.061453577 container health_status 0e966c7a283689daf23c5db5017f23f685ee836319240188a9f70ddd246563c5 (image=38.102.83.66:5001/podified-master-centos10/openstack-neutron-metadata-agent-ovn:watcher_latest, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.4, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, managed_by=edpm_ansible, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': '38.102.83.66:5001/podified-master-centos10/openstack-neutron-metadata-agent-ovn:watcher_latest', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', 
'/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.build-date=20251007, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 10 Base Image, tcib_build_tag=watcher_latest)
Oct 09 16:22:10 compute-0 podman[145860]: 2025-10-09 16:22:10.844160893 +0000 UTC m=+0.060165096 container health_status 8227728d23f374cb9528be6ddf01dff38d8c792912a17b0d137932a18390b820 (image=38.102.83.66:5001/podified-master-centos10/openstack-iscsid:watcher_latest, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.4, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': '38.102.83.66:5001/podified-master-centos10/openstack-iscsid:watcher_latest', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, tcib_build_tag=watcher_latest, tcib_managed=true, container_name=iscsid, org.label-schema.build-date=20251007, config_id=iscsid, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.schema-version=1.0)
Oct 09 16:22:10 compute-0 nova_compute[117331]: 2025-10-09 16:22:10.947 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 09 16:22:14 compute-0 nova_compute[117331]: 2025-10-09 16:22:14.060 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 09 16:22:14 compute-0 nova_compute[117331]: 2025-10-09 16:22:14.307 2 DEBUG oslo_service.periodic_task [None req-92a13bed-c081-4c7b-87f3-bafebefb79f2 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.12/site-packages/oslo_service/periodic_task.py:210
Oct 09 16:22:15 compute-0 nova_compute[117331]: 2025-10-09 16:22:15.308 2 DEBUG oslo_service.periodic_task [None req-92a13bed-c081-4c7b-87f3-bafebefb79f2 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.12/site-packages/oslo_service/periodic_task.py:210
Oct 09 16:22:15 compute-0 nova_compute[117331]: 2025-10-09 16:22:15.949 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 09 16:22:16 compute-0 nova_compute[117331]: 2025-10-09 16:22:16.306 2 DEBUG oslo_service.periodic_task [None req-92a13bed-c081-4c7b-87f3-bafebefb79f2 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.12/site-packages/oslo_service/periodic_task.py:210
Oct 09 16:22:16 compute-0 nova_compute[117331]: 2025-10-09 16:22:16.307 2 DEBUG nova.compute.manager [None req-92a13bed-c081-4c7b-87f3-bafebefb79f2 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.12/site-packages/nova/compute/manager.py:11228
Oct 09 16:22:18 compute-0 nova_compute[117331]: 2025-10-09 16:22:18.306 2 DEBUG oslo_service.periodic_task [None req-92a13bed-c081-4c7b-87f3-bafebefb79f2 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.12/site-packages/oslo_service/periodic_task.py:210
Oct 09 16:22:18 compute-0 nova_compute[117331]: 2025-10-09 16:22:18.822 2 DEBUG oslo_concurrency.lockutils [None req-92a13bed-c081-4c7b-87f3-bafebefb79f2 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:405
Oct 09 16:22:18 compute-0 nova_compute[117331]: 2025-10-09 16:22:18.823 2 DEBUG oslo_concurrency.lockutils [None req-92a13bed-c081-4c7b-87f3-bafebefb79f2 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:410
Oct 09 16:22:18 compute-0 nova_compute[117331]: 2025-10-09 16:22:18.823 2 DEBUG oslo_concurrency.lockutils [None req-92a13bed-c081-4c7b-87f3-bafebefb79f2 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:424
Oct 09 16:22:18 compute-0 nova_compute[117331]: 2025-10-09 16:22:18.824 2 DEBUG nova.compute.resource_tracker [None req-92a13bed-c081-4c7b-87f3-bafebefb79f2 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.12/site-packages/nova/compute/resource_tracker.py:937
Oct 09 16:22:19 compute-0 nova_compute[117331]: 2025-10-09 16:22:19.057 2 DEBUG nova.compute.manager [None req-faa80e9f-a5b7-4c1a-bedf-c9ee49f89977 005258eefa5345409efe13f6a1de22ab c076c42271004e7aa86e84416e33f826 - - default default] Adding trait COMPUTE_STATUS_DISABLED to compute node resource provider 593051b8-2000-437f-a915-2616fc8b1671 in placement. update_compute_provider_status /usr/lib/python3.12/site-packages/nova/compute/manager.py:635
Oct 09 16:22:19 compute-0 nova_compute[117331]: 2025-10-09 16:22:19.062 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 09 16:22:19 compute-0 nova_compute[117331]: 2025-10-09 16:22:19.112 2 DEBUG nova.compute.provider_tree [None req-faa80e9f-a5b7-4c1a-bedf-c9ee49f89977 005258eefa5345409efe13f6a1de22ab c076c42271004e7aa86e84416e33f826 - - default default] Updating resource provider 593051b8-2000-437f-a915-2616fc8b1671 generation from 16 to 18 during operation: update_traits _update_generation /usr/lib/python3.12/site-packages/nova/compute/provider_tree.py:164
Oct 09 16:22:19 compute-0 podman[145899]: 2025-10-09 16:22:19.862885516 +0000 UTC m=+0.081662410 container health_status 64fa251112d01abb34ec1bb8e22019815ddcaaaa391ce5ec91517147ac5a9994 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, io.openshift.tags=minimal rhel9, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, distribution-scope=public, io.buildah.version=1.33.7, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., name=ubi9-minimal, release=1755695350, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., vendor=Red Hat, Inc., architecture=x86_64, container_name=openstack_network_exporter, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, version=9.6, com.redhat.component=ubi9-minimal-container, build-date=2025-08-20T13:12:41, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, config_id=edpm, maintainer=Red Hat, Inc., io.openshift.expose-services=, managed_by=edpm_ansible, url=https://catalog.redhat.com/en/search?searchType=containers)
Oct 09 16:22:19 compute-0 nova_compute[117331]: 2025-10-09 16:22:19.881 2 DEBUG oslo_concurrency.processutils [None req-92a13bed-c081-4c7b-87f3-bafebefb79f2 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/4537359c-cc98-4f5e-87c4-2410f96f0e44/disk --force-share --output=json execute /usr/lib/python3.12/site-packages/oslo_concurrency/processutils.py:349
Oct 09 16:22:19 compute-0 nova_compute[117331]: 2025-10-09 16:22:19.968 2 DEBUG oslo_concurrency.processutils [None req-92a13bed-c081-4c7b-87f3-bafebefb79f2 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/4537359c-cc98-4f5e-87c4-2410f96f0e44/disk --force-share --output=json" returned: 0 in 0.087s execute /usr/lib/python3.12/site-packages/oslo_concurrency/processutils.py:372
Oct 09 16:22:19 compute-0 nova_compute[117331]: 2025-10-09 16:22:19.969 2 DEBUG oslo_concurrency.processutils [None req-92a13bed-c081-4c7b-87f3-bafebefb79f2 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/4537359c-cc98-4f5e-87c4-2410f96f0e44/disk --force-share --output=json execute /usr/lib/python3.12/site-packages/oslo_concurrency/processutils.py:349
Oct 09 16:22:20 compute-0 nova_compute[117331]: 2025-10-09 16:22:20.047 2 DEBUG oslo_concurrency.processutils [None req-92a13bed-c081-4c7b-87f3-bafebefb79f2 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/4537359c-cc98-4f5e-87c4-2410f96f0e44/disk --force-share --output=json" returned: 0 in 0.077s execute /usr/lib/python3.12/site-packages/oslo_concurrency/processutils.py:372
Oct 09 16:22:20 compute-0 nova_compute[117331]: 2025-10-09 16:22:20.247 2 WARNING nova.virt.libvirt.driver [None req-92a13bed-c081-4c7b-87f3-bafebefb79f2 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Oct 09 16:22:20 compute-0 nova_compute[117331]: 2025-10-09 16:22:20.249 2 DEBUG oslo_concurrency.processutils [None req-92a13bed-c081-4c7b-87f3-bafebefb79f2 - - - - - -] Running cmd (subprocess): env LANG=C uptime execute /usr/lib/python3.12/site-packages/oslo_concurrency/processutils.py:349
Oct 09 16:22:20 compute-0 nova_compute[117331]: 2025-10-09 16:22:20.271 2 DEBUG oslo_concurrency.processutils [None req-92a13bed-c081-4c7b-87f3-bafebefb79f2 - - - - - -] CMD "env LANG=C uptime" returned: 0 in 0.022s execute /usr/lib/python3.12/site-packages/oslo_concurrency/processutils.py:372
Oct 09 16:22:20 compute-0 nova_compute[117331]: 2025-10-09 16:22:20.272 2 DEBUG nova.compute.resource_tracker [None req-92a13bed-c081-4c7b-87f3-bafebefb79f2 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=5998MB free_disk=73.23322677612305GB free_vcpus=7 pci_devices=[{"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.12/site-packages/nova/compute/resource_tracker.py:1136
Oct 09 16:22:20 compute-0 nova_compute[117331]: 2025-10-09 16:22:20.272 2 DEBUG oslo_concurrency.lockutils [None req-92a13bed-c081-4c7b-87f3-bafebefb79f2 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:405
Oct 09 16:22:20 compute-0 nova_compute[117331]: 2025-10-09 16:22:20.273 2 DEBUG oslo_concurrency.lockutils [None req-92a13bed-c081-4c7b-87f3-bafebefb79f2 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:410
Oct 09 16:22:20 compute-0 nova_compute[117331]: 2025-10-09 16:22:20.951 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 09 16:22:21 compute-0 nova_compute[117331]: 2025-10-09 16:22:21.332 2 DEBUG nova.compute.resource_tracker [None req-92a13bed-c081-4c7b-87f3-bafebefb79f2 - - - - - -] Instance 4537359c-cc98-4f5e-87c4-2410f96f0e44 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.12/site-packages/nova/compute/resource_tracker.py:1740
Oct 09 16:22:21 compute-0 nova_compute[117331]: 2025-10-09 16:22:21.332 2 DEBUG nova.compute.resource_tracker [None req-92a13bed-c081-4c7b-87f3-bafebefb79f2 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 1 _report_final_resource_view /usr/lib/python3.12/site-packages/nova/compute/resource_tracker.py:1159
Oct 09 16:22:21 compute-0 nova_compute[117331]: 2025-10-09 16:22:21.333 2 DEBUG nova.compute.resource_tracker [None req-92a13bed-c081-4c7b-87f3-bafebefb79f2 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=640MB phys_disk=79GB used_disk=1GB total_vcpus=8 used_vcpus=1 pci_stats=[] stats={'failed_builds': '0', 'uptime': ' 16:22:20 up 31 min,  0 user,  load average: 0.56, 0.35, 0.30\n', 'num_instances': '1', 'num_vm_active': '1', 'num_task_None': '1', 'num_os_type_None': '1', 'num_proj_c47758289b4c448c95f927a91d89e09f': '1', 'io_workload': '0'} _report_final_resource_view /usr/lib/python3.12/site-packages/nova/compute/resource_tracker.py:1168
Oct 09 16:22:21 compute-0 nova_compute[117331]: 2025-10-09 16:22:21.376 2 DEBUG nova.compute.provider_tree [None req-92a13bed-c081-4c7b-87f3-bafebefb79f2 - - - - - -] Inventory has not changed in ProviderTree for provider: 593051b8-2000-437f-a915-2616fc8b1671 update_inventory /usr/lib/python3.12/site-packages/nova/compute/provider_tree.py:180
Oct 09 16:22:21 compute-0 nova_compute[117331]: 2025-10-09 16:22:21.887 2 DEBUG nova.scheduler.client.report [None req-92a13bed-c081-4c7b-87f3-bafebefb79f2 - - - - - -] Inventory has not changed for provider 593051b8-2000-437f-a915-2616fc8b1671 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 79, 'reserved': 1, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.12/site-packages/nova/scheduler/client/report.py:958
Oct 09 16:22:21 compute-0 podman[145928]: 2025-10-09 16:22:21.903691922 +0000 UTC m=+0.134852293 container health_status d615e4f24ff62fbb14be4a63e6da568ec5b22309773c1c66103b6b4de64a8a2f (image=38.102.83.66:5001/podified-master-centos10/openstack-ovn-controller:watcher_latest, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_controller, io.buildah.version=1.41.4, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251007, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 10 Base Image, tcib_managed=true, managed_by=edpm_ansible, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': '38.102.83.66:5001/podified-master-centos10/openstack-ovn-controller:watcher_latest', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=watcher_latest)
Oct 09 16:22:22 compute-0 nova_compute[117331]: 2025-10-09 16:22:22.396 2 DEBUG nova.compute.resource_tracker [None req-92a13bed-c081-4c7b-87f3-bafebefb79f2 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.12/site-packages/nova/compute/resource_tracker.py:1097
Oct 09 16:22:22 compute-0 nova_compute[117331]: 2025-10-09 16:22:22.396 2 DEBUG oslo_concurrency.lockutils [None req-92a13bed-c081-4c7b-87f3-bafebefb79f2 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 2.124s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:424
Oct 09 16:22:22 compute-0 nova_compute[117331]: 2025-10-09 16:22:22.397 2 DEBUG oslo_service.periodic_task [None req-92a13bed-c081-4c7b-87f3-bafebefb79f2 - - - - - -] Running periodic task ComputeManager._run_pending_deletes run_periodic_tasks /usr/lib/python3.12/site-packages/oslo_service/periodic_task.py:210
Oct 09 16:22:22 compute-0 nova_compute[117331]: 2025-10-09 16:22:22.397 2 DEBUG nova.compute.manager [None req-92a13bed-c081-4c7b-87f3-bafebefb79f2 - - - - - -] Cleaning up deleted instances _run_pending_deletes /usr/lib/python3.12/site-packages/nova/compute/manager.py:11909
Oct 09 16:22:22 compute-0 nova_compute[117331]: 2025-10-09 16:22:22.903 2 DEBUG nova.compute.manager [None req-92a13bed-c081-4c7b-87f3-bafebefb79f2 - - - - - -] There are 0 instances to clean _run_pending_deletes /usr/lib/python3.12/site-packages/nova/compute/manager.py:11918
Oct 09 16:22:22 compute-0 nova_compute[117331]: 2025-10-09 16:22:22.904 2 DEBUG oslo_service.periodic_task [None req-92a13bed-c081-4c7b-87f3-bafebefb79f2 - - - - - -] Running periodic task ComputeManager._cleanup_incomplete_migrations run_periodic_tasks /usr/lib/python3.12/site-packages/oslo_service/periodic_task.py:210
Oct 09 16:22:22 compute-0 nova_compute[117331]: 2025-10-09 16:22:22.904 2 DEBUG nova.compute.manager [None req-92a13bed-c081-4c7b-87f3-bafebefb79f2 - - - - - -] Cleaning up deleted instances with incomplete migration  _cleanup_incomplete_migrations /usr/lib/python3.12/site-packages/nova/compute/manager.py:11947
Oct 09 16:22:23 compute-0 nova_compute[117331]: 2025-10-09 16:22:23.410 2 DEBUG oslo_service.periodic_task [None req-92a13bed-c081-4c7b-87f3-bafebefb79f2 - - - - - -] Running periodic task ComputeManager._cleanup_expired_console_auth_tokens run_periodic_tasks /usr/lib/python3.12/site-packages/oslo_service/periodic_task.py:210
Oct 09 16:22:24 compute-0 nova_compute[117331]: 2025-10-09 16:22:24.093 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 09 16:22:25 compute-0 nova_compute[117331]: 2025-10-09 16:22:25.954 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 09 16:22:26 compute-0 nova_compute[117331]: 2025-10-09 16:22:26.078 2 DEBUG nova.virt.libvirt.driver [None req-5c240840-8346-4b6c-b1c2-7db22559751c 005258eefa5345409efe13f6a1de22ab c076c42271004e7aa86e84416e33f826 - - default default] [instance: 4537359c-cc98-4f5e-87c4-2410f96f0e44] Check if temp file /var/lib/nova/instances/tmpgshfqani exists to indicate shared storage is being used for migration. Exists? False _check_shared_storage_test_file /usr/lib/python3.12/site-packages/nova/virt/libvirt/driver.py:10968
Oct 09 16:22:26 compute-0 nova_compute[117331]: 2025-10-09 16:22:26.085 2 DEBUG nova.compute.manager [None req-5c240840-8346-4b6c-b1c2-7db22559751c 005258eefa5345409efe13f6a1de22ab c076c42271004e7aa86e84416e33f826 - - default default] source check data is LibvirtLiveMigrateData(bdms=<?>,block_migration=True,disk_available_mb=73728,disk_over_commit=<?>,dst_cpu_shared_set_info=set([]),dst_numa_info=<?>,dst_supports_mdev_live_migration=True,dst_supports_numa_live_migration=<?>,dst_wants_file_backed_memory=False,file_backed_memory_discard=<?>,filename='tmpgshfqani',graphics_listen_addr_spice=127.0.0.1,graphics_listen_addr_vnc=::,image_type='qcow2',instance_relative_path='4537359c-cc98-4f5e-87c4-2410f96f0e44',is_shared_block_storage=False,is_shared_instance_path=False,is_volume_backed=False,migration=<?>,old_vol_attachment_ids=<?>,pci_dev_map_src_dst=<?>,serial_listen_addr=None,serial_listen_ports=<?>,source_mdev_types=<?>,src_supports_native_luks=<?>,src_supports_numa_live_migration=<?>,supported_perf_events=<?>,target_connect_addr=<?>,target_mdevs=<?>,vifs=[VIFMigrateData],wait_for_vif_plugged=<?>) check_can_live_migrate_source /usr/lib/python3.12/site-packages/nova/compute/manager.py:9294
Oct 09 16:22:26 compute-0 nova_compute[117331]: 2025-10-09 16:22:26.916 2 DEBUG oslo_service.periodic_task [None req-92a13bed-c081-4c7b-87f3-bafebefb79f2 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.12/site-packages/oslo_service/periodic_task.py:210
Oct 09 16:22:26 compute-0 nova_compute[117331]: 2025-10-09 16:22:26.917 2 DEBUG oslo_service.periodic_task [None req-92a13bed-c081-4c7b-87f3-bafebefb79f2 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.12/site-packages/oslo_service/periodic_task.py:210
Oct 09 16:22:27 compute-0 nova_compute[117331]: 2025-10-09 16:22:27.425 2 DEBUG oslo_service.periodic_task [None req-92a13bed-c081-4c7b-87f3-bafebefb79f2 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.12/site-packages/oslo_service/periodic_task.py:210
Oct 09 16:22:27 compute-0 nova_compute[117331]: 2025-10-09 16:22:27.426 2 DEBUG oslo_service.periodic_task [None req-92a13bed-c081-4c7b-87f3-bafebefb79f2 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.12/site-packages/oslo_service/periodic_task.py:210
Oct 09 16:22:27 compute-0 nova_compute[117331]: 2025-10-09 16:22:27.426 2 DEBUG oslo_service.periodic_task [None req-92a13bed-c081-4c7b-87f3-bafebefb79f2 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.12/site-packages/oslo_service/periodic_task.py:210
Oct 09 16:22:29 compute-0 nova_compute[117331]: 2025-10-09 16:22:29.095 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 09 16:22:29 compute-0 podman[127775]: time="2025-10-09T16:22:29Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Oct 09 16:22:29 compute-0 podman[127775]: @ - - [09/Oct/2025:16:22:29 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 20749 "" "Go-http-client/1.1"
Oct 09 16:22:29 compute-0 podman[127775]: @ - - [09/Oct/2025:16:22:29 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 3486 "" "Go-http-client/1.1"
Oct 09 16:22:30 compute-0 nova_compute[117331]: 2025-10-09 16:22:30.335 2 DEBUG oslo_concurrency.processutils [None req-5c240840-8346-4b6c-b1c2-7db22559751c 005258eefa5345409efe13f6a1de22ab c076c42271004e7aa86e84416e33f826 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/4537359c-cc98-4f5e-87c4-2410f96f0e44/disk --force-share --output=json execute /usr/lib/python3.12/site-packages/oslo_concurrency/processutils.py:349
Oct 09 16:22:30 compute-0 nova_compute[117331]: 2025-10-09 16:22:30.397 2 DEBUG oslo_concurrency.processutils [None req-5c240840-8346-4b6c-b1c2-7db22559751c 005258eefa5345409efe13f6a1de22ab c076c42271004e7aa86e84416e33f826 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/4537359c-cc98-4f5e-87c4-2410f96f0e44/disk --force-share --output=json" returned: 0 in 0.062s execute /usr/lib/python3.12/site-packages/oslo_concurrency/processutils.py:372
Oct 09 16:22:30 compute-0 nova_compute[117331]: 2025-10-09 16:22:30.399 2 DEBUG oslo_concurrency.processutils [None req-5c240840-8346-4b6c-b1c2-7db22559751c 005258eefa5345409efe13f6a1de22ab c076c42271004e7aa86e84416e33f826 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/4537359c-cc98-4f5e-87c4-2410f96f0e44/disk --force-share --output=json execute /usr/lib/python3.12/site-packages/oslo_concurrency/processutils.py:349
Oct 09 16:22:30 compute-0 nova_compute[117331]: 2025-10-09 16:22:30.463 2 DEBUG oslo_concurrency.processutils [None req-5c240840-8346-4b6c-b1c2-7db22559751c 005258eefa5345409efe13f6a1de22ab c076c42271004e7aa86e84416e33f826 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/4537359c-cc98-4f5e-87c4-2410f96f0e44/disk --force-share --output=json" returned: 0 in 0.064s execute /usr/lib/python3.12/site-packages/oslo_concurrency/processutils.py:372
Oct 09 16:22:30 compute-0 nova_compute[117331]: 2025-10-09 16:22:30.465 2 DEBUG nova.compute.manager [None req-5c240840-8346-4b6c-b1c2-7db22559751c 005258eefa5345409efe13f6a1de22ab c076c42271004e7aa86e84416e33f826 - - default default] [instance: 4537359c-cc98-4f5e-87c4-2410f96f0e44] Preparing to wait for external event network-vif-plugged-ab5bf45b-d6f4-4ecf-95d5-b00418cb03fa prepare_for_instance_event /usr/lib/python3.12/site-packages/nova/compute/manager.py:307
Oct 09 16:22:30 compute-0 nova_compute[117331]: 2025-10-09 16:22:30.465 2 DEBUG oslo_concurrency.lockutils [None req-5c240840-8346-4b6c-b1c2-7db22559751c 005258eefa5345409efe13f6a1de22ab c076c42271004e7aa86e84416e33f826 - - default default] Acquiring lock "4537359c-cc98-4f5e-87c4-2410f96f0e44-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:405
Oct 09 16:22:30 compute-0 nova_compute[117331]: 2025-10-09 16:22:30.465 2 DEBUG oslo_concurrency.lockutils [None req-5c240840-8346-4b6c-b1c2-7db22559751c 005258eefa5345409efe13f6a1de22ab c076c42271004e7aa86e84416e33f826 - - default default] Lock "4537359c-cc98-4f5e-87c4-2410f96f0e44-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:410
Oct 09 16:22:30 compute-0 nova_compute[117331]: 2025-10-09 16:22:30.465 2 DEBUG oslo_concurrency.lockutils [None req-5c240840-8346-4b6c-b1c2-7db22559751c 005258eefa5345409efe13f6a1de22ab c076c42271004e7aa86e84416e33f826 - - default default] Lock "4537359c-cc98-4f5e-87c4-2410f96f0e44-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:424
Oct 09 16:22:30 compute-0 nova_compute[117331]: 2025-10-09 16:22:30.955 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 09 16:22:31 compute-0 openstack_network_exporter[129925]: ERROR   16:22:31 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Oct 09 16:22:31 compute-0 openstack_network_exporter[129925]: ERROR   16:22:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Oct 09 16:22:31 compute-0 openstack_network_exporter[129925]: ERROR   16:22:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Oct 09 16:22:31 compute-0 openstack_network_exporter[129925]: ERROR   16:22:31 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Oct 09 16:22:31 compute-0 openstack_network_exporter[129925]: 
Oct 09 16:22:31 compute-0 openstack_network_exporter[129925]: ERROR   16:22:31 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Oct 09 16:22:31 compute-0 openstack_network_exporter[129925]: 
Oct 09 16:22:32 compute-0 podman[145960]: 2025-10-09 16:22:32.837476428 +0000 UTC m=+0.067163788 container health_status 270830b5167cafbab735d8118980f26987e673cd0fa464dd2202394830bcca66 (image=38.102.83.66:5001/podified-master-centos10/openstack-multipathd:watcher_latest, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': '38.102.83.66:5001/podified-master-centos10/openstack-multipathd:watcher_latest', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, container_name=multipathd, io.buildah.version=1.41.4, managed_by=edpm_ansible, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, tcib_build_tag=watcher_latest, org.label-schema.build-date=20251007, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 10 Base Image, tcib_managed=true)
Oct 09 16:22:34 compute-0 nova_compute[117331]: 2025-10-09 16:22:34.097 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 09 16:22:35 compute-0 ovn_metadata_agent[28608]: 2025-10-09 16:22:35.307 28613 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:405
Oct 09 16:22:35 compute-0 ovn_metadata_agent[28608]: 2025-10-09 16:22:35.307 28613 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.000s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:410
Oct 09 16:22:35 compute-0 ovn_metadata_agent[28608]: 2025-10-09 16:22:35.308 28613 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:424
Oct 09 16:22:35 compute-0 nova_compute[117331]: 2025-10-09 16:22:35.642 2 DEBUG nova.compute.manager [req-ea0f2517-85fc-4a5a-ac52-17cc83d1d5b1 req-21682f36-3775-4434-b9aa-21f578af9376 ee15961ad2be4bada6c106ac4f69f422 c076c42271004e7aa86e84416e33f826 - - default default] [instance: 4537359c-cc98-4f5e-87c4-2410f96f0e44] Received event network-vif-unplugged-ab5bf45b-d6f4-4ecf-95d5-b00418cb03fa external_instance_event /usr/lib/python3.12/site-packages/nova/compute/manager.py:11812
Oct 09 16:22:35 compute-0 nova_compute[117331]: 2025-10-09 16:22:35.642 2 DEBUG oslo_concurrency.lockutils [req-ea0f2517-85fc-4a5a-ac52-17cc83d1d5b1 req-21682f36-3775-4434-b9aa-21f578af9376 ee15961ad2be4bada6c106ac4f69f422 c076c42271004e7aa86e84416e33f826 - - default default] Acquiring lock "4537359c-cc98-4f5e-87c4-2410f96f0e44-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:405
Oct 09 16:22:35 compute-0 nova_compute[117331]: 2025-10-09 16:22:35.642 2 DEBUG oslo_concurrency.lockutils [req-ea0f2517-85fc-4a5a-ac52-17cc83d1d5b1 req-21682f36-3775-4434-b9aa-21f578af9376 ee15961ad2be4bada6c106ac4f69f422 c076c42271004e7aa86e84416e33f826 - - default default] Lock "4537359c-cc98-4f5e-87c4-2410f96f0e44-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:410
Oct 09 16:22:35 compute-0 nova_compute[117331]: 2025-10-09 16:22:35.642 2 DEBUG oslo_concurrency.lockutils [req-ea0f2517-85fc-4a5a-ac52-17cc83d1d5b1 req-21682f36-3775-4434-b9aa-21f578af9376 ee15961ad2be4bada6c106ac4f69f422 c076c42271004e7aa86e84416e33f826 - - default default] Lock "4537359c-cc98-4f5e-87c4-2410f96f0e44-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:424
Oct 09 16:22:35 compute-0 nova_compute[117331]: 2025-10-09 16:22:35.642 2 DEBUG nova.compute.manager [req-ea0f2517-85fc-4a5a-ac52-17cc83d1d5b1 req-21682f36-3775-4434-b9aa-21f578af9376 ee15961ad2be4bada6c106ac4f69f422 c076c42271004e7aa86e84416e33f826 - - default default] [instance: 4537359c-cc98-4f5e-87c4-2410f96f0e44] No event matching network-vif-unplugged-ab5bf45b-d6f4-4ecf-95d5-b00418cb03fa in dict_keys([('network-vif-plugged', 'ab5bf45b-d6f4-4ecf-95d5-b00418cb03fa')]) pop_instance_event /usr/lib/python3.12/site-packages/nova/compute/manager.py:349
Oct 09 16:22:35 compute-0 nova_compute[117331]: 2025-10-09 16:22:35.642 2 DEBUG nova.compute.manager [req-ea0f2517-85fc-4a5a-ac52-17cc83d1d5b1 req-21682f36-3775-4434-b9aa-21f578af9376 ee15961ad2be4bada6c106ac4f69f422 c076c42271004e7aa86e84416e33f826 - - default default] [instance: 4537359c-cc98-4f5e-87c4-2410f96f0e44] Received event network-vif-unplugged-ab5bf45b-d6f4-4ecf-95d5-b00418cb03fa for instance with task_state migrating. _process_instance_event /usr/lib/python3.12/site-packages/nova/compute/manager.py:11590
Oct 09 16:22:35 compute-0 podman[145982]: 2025-10-09 16:22:35.833288841 +0000 UTC m=+0.064147033 container health_status 86aa93e887899e54d383b0fc17fb3c5637680d1d8e146a130a557c837f0260fe (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter)
Oct 09 16:22:35 compute-0 nova_compute[117331]: 2025-10-09 16:22:35.957 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 09 16:22:36 compute-0 nova_compute[117331]: 2025-10-09 16:22:36.484 2 INFO nova.compute.manager [None req-5c240840-8346-4b6c-b1c2-7db22559751c 005258eefa5345409efe13f6a1de22ab c076c42271004e7aa86e84416e33f826 - - default default] [instance: 4537359c-cc98-4f5e-87c4-2410f96f0e44] Took 6.02 seconds for pre_live_migration on destination host compute-1.ctlplane.example.com.
Oct 09 16:22:37 compute-0 nova_compute[117331]: 2025-10-09 16:22:37.695 2 DEBUG nova.compute.manager [req-2b49f102-eecf-4f56-b2ff-d1a0dbedbe10 req-7254c7ca-c504-4111-b723-2e94b1e6c5e7 ee15961ad2be4bada6c106ac4f69f422 c076c42271004e7aa86e84416e33f826 - - default default] [instance: 4537359c-cc98-4f5e-87c4-2410f96f0e44] Received event network-vif-plugged-ab5bf45b-d6f4-4ecf-95d5-b00418cb03fa external_instance_event /usr/lib/python3.12/site-packages/nova/compute/manager.py:11812
Oct 09 16:22:37 compute-0 nova_compute[117331]: 2025-10-09 16:22:37.695 2 DEBUG oslo_concurrency.lockutils [req-2b49f102-eecf-4f56-b2ff-d1a0dbedbe10 req-7254c7ca-c504-4111-b723-2e94b1e6c5e7 ee15961ad2be4bada6c106ac4f69f422 c076c42271004e7aa86e84416e33f826 - - default default] Acquiring lock "4537359c-cc98-4f5e-87c4-2410f96f0e44-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:405
Oct 09 16:22:37 compute-0 nova_compute[117331]: 2025-10-09 16:22:37.695 2 DEBUG oslo_concurrency.lockutils [req-2b49f102-eecf-4f56-b2ff-d1a0dbedbe10 req-7254c7ca-c504-4111-b723-2e94b1e6c5e7 ee15961ad2be4bada6c106ac4f69f422 c076c42271004e7aa86e84416e33f826 - - default default] Lock "4537359c-cc98-4f5e-87c4-2410f96f0e44-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:410
Oct 09 16:22:37 compute-0 nova_compute[117331]: 2025-10-09 16:22:37.695 2 DEBUG oslo_concurrency.lockutils [req-2b49f102-eecf-4f56-b2ff-d1a0dbedbe10 req-7254c7ca-c504-4111-b723-2e94b1e6c5e7 ee15961ad2be4bada6c106ac4f69f422 c076c42271004e7aa86e84416e33f826 - - default default] Lock "4537359c-cc98-4f5e-87c4-2410f96f0e44-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:424
Oct 09 16:22:37 compute-0 nova_compute[117331]: 2025-10-09 16:22:37.695 2 DEBUG nova.compute.manager [req-2b49f102-eecf-4f56-b2ff-d1a0dbedbe10 req-7254c7ca-c504-4111-b723-2e94b1e6c5e7 ee15961ad2be4bada6c106ac4f69f422 c076c42271004e7aa86e84416e33f826 - - default default] [instance: 4537359c-cc98-4f5e-87c4-2410f96f0e44] Processing event network-vif-plugged-ab5bf45b-d6f4-4ecf-95d5-b00418cb03fa _process_instance_event /usr/lib/python3.12/site-packages/nova/compute/manager.py:11572
Oct 09 16:22:37 compute-0 nova_compute[117331]: 2025-10-09 16:22:37.696 2 DEBUG nova.compute.manager [req-2b49f102-eecf-4f56-b2ff-d1a0dbedbe10 req-7254c7ca-c504-4111-b723-2e94b1e6c5e7 ee15961ad2be4bada6c106ac4f69f422 c076c42271004e7aa86e84416e33f826 - - default default] [instance: 4537359c-cc98-4f5e-87c4-2410f96f0e44] Received event network-changed-ab5bf45b-d6f4-4ecf-95d5-b00418cb03fa external_instance_event /usr/lib/python3.12/site-packages/nova/compute/manager.py:11812
Oct 09 16:22:37 compute-0 nova_compute[117331]: 2025-10-09 16:22:37.696 2 DEBUG nova.compute.manager [req-2b49f102-eecf-4f56-b2ff-d1a0dbedbe10 req-7254c7ca-c504-4111-b723-2e94b1e6c5e7 ee15961ad2be4bada6c106ac4f69f422 c076c42271004e7aa86e84416e33f826 - - default default] [instance: 4537359c-cc98-4f5e-87c4-2410f96f0e44] Refreshing instance network info cache due to event network-changed-ab5bf45b-d6f4-4ecf-95d5-b00418cb03fa. external_instance_event /usr/lib/python3.12/site-packages/nova/compute/manager.py:11817
Oct 09 16:22:37 compute-0 nova_compute[117331]: 2025-10-09 16:22:37.696 2 DEBUG oslo_concurrency.lockutils [req-2b49f102-eecf-4f56-b2ff-d1a0dbedbe10 req-7254c7ca-c504-4111-b723-2e94b1e6c5e7 ee15961ad2be4bada6c106ac4f69f422 c076c42271004e7aa86e84416e33f826 - - default default] Acquiring lock "refresh_cache-4537359c-cc98-4f5e-87c4-2410f96f0e44" lock /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:313
Oct 09 16:22:37 compute-0 nova_compute[117331]: 2025-10-09 16:22:37.696 2 DEBUG oslo_concurrency.lockutils [req-2b49f102-eecf-4f56-b2ff-d1a0dbedbe10 req-7254c7ca-c504-4111-b723-2e94b1e6c5e7 ee15961ad2be4bada6c106ac4f69f422 c076c42271004e7aa86e84416e33f826 - - default default] Acquired lock "refresh_cache-4537359c-cc98-4f5e-87c4-2410f96f0e44" lock /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:316
Oct 09 16:22:37 compute-0 nova_compute[117331]: 2025-10-09 16:22:37.696 2 DEBUG nova.network.neutron [req-2b49f102-eecf-4f56-b2ff-d1a0dbedbe10 req-7254c7ca-c504-4111-b723-2e94b1e6c5e7 ee15961ad2be4bada6c106ac4f69f422 c076c42271004e7aa86e84416e33f826 - - default default] [instance: 4537359c-cc98-4f5e-87c4-2410f96f0e44] Refreshing network info cache for port ab5bf45b-d6f4-4ecf-95d5-b00418cb03fa _get_instance_nw_info /usr/lib/python3.12/site-packages/nova/network/neutron.py:2067
Oct 09 16:22:37 compute-0 nova_compute[117331]: 2025-10-09 16:22:37.697 2 DEBUG nova.compute.manager [None req-5c240840-8346-4b6c-b1c2-7db22559751c 005258eefa5345409efe13f6a1de22ab c076c42271004e7aa86e84416e33f826 - - default default] [instance: 4537359c-cc98-4f5e-87c4-2410f96f0e44] Instance event wait completed in 1 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.12/site-packages/nova/compute/manager.py:602
Oct 09 16:22:38 compute-0 nova_compute[117331]: 2025-10-09 16:22:38.203 2 WARNING neutronclient.v2_0.client [req-2b49f102-eecf-4f56-b2ff-d1a0dbedbe10 req-7254c7ca-c504-4111-b723-2e94b1e6c5e7 ee15961ad2be4bada6c106ac4f69f422 c076c42271004e7aa86e84416e33f826 - - default default] The python binding code in neutronclient is deprecated in favor of OpenstackSDK, please use that as this will be removed in a future release.
Oct 09 16:22:38 compute-0 nova_compute[117331]: 2025-10-09 16:22:38.212 2 DEBUG nova.compute.manager [None req-5c240840-8346-4b6c-b1c2-7db22559751c 005258eefa5345409efe13f6a1de22ab c076c42271004e7aa86e84416e33f826 - - default default] live_migration data is LibvirtLiveMigrateData(bdms=[],block_migration=True,disk_available_mb=73728,disk_over_commit=<?>,dst_cpu_shared_set_info=set([]),dst_numa_info=<?>,dst_supports_mdev_live_migration=True,dst_supports_numa_live_migration=<?>,dst_wants_file_backed_memory=False,file_backed_memory_discard=<?>,filename='tmpgshfqani',graphics_listen_addr_spice=127.0.0.1,graphics_listen_addr_vnc=::,image_type='qcow2',instance_relative_path='4537359c-cc98-4f5e-87c4-2410f96f0e44',is_shared_block_storage=False,is_shared_instance_path=False,is_volume_backed=False,migration=Migration(32db2a54-1b64-43b7-85cd-77132658a2ef),old_vol_attachment_ids={},pci_dev_map_src_dst={},serial_listen_addr=None,serial_listen_ports=[],source_mdev_types=<?>,src_supports_native_luks=<?>,src_supports_numa_live_migration=<?>,supported_perf_events=[],target_connect_addr=None,target_mdevs=<?>,vifs=[VIFMigrateData],wait_for_vif_plugged=True) _do_live_migration /usr/lib/python3.12/site-packages/nova/compute/manager.py:9659
Oct 09 16:22:38 compute-0 nova_compute[117331]: 2025-10-09 16:22:38.733 2 DEBUG nova.objects.instance [None req-5c240840-8346-4b6c-b1c2-7db22559751c 005258eefa5345409efe13f6a1de22ab c076c42271004e7aa86e84416e33f826 - - default default] Lazy-loading 'migration_context' on Instance uuid 4537359c-cc98-4f5e-87c4-2410f96f0e44 obj_load_attr /usr/lib/python3.12/site-packages/nova/objects/instance.py:1141
Oct 09 16:22:38 compute-0 nova_compute[117331]: 2025-10-09 16:22:38.735 2 DEBUG nova.virt.libvirt.driver [None req-5c240840-8346-4b6c-b1c2-7db22559751c 005258eefa5345409efe13f6a1de22ab c076c42271004e7aa86e84416e33f826 - - default default] [instance: 4537359c-cc98-4f5e-87c4-2410f96f0e44] Starting monitoring of live migration _live_migration /usr/lib/python3.12/site-packages/nova/virt/libvirt/driver.py:11543
Oct 09 16:22:38 compute-0 nova_compute[117331]: 2025-10-09 16:22:38.737 2 DEBUG nova.virt.libvirt.driver [None req-5c240840-8346-4b6c-b1c2-7db22559751c 005258eefa5345409efe13f6a1de22ab c076c42271004e7aa86e84416e33f826 - - default default] [instance: 4537359c-cc98-4f5e-87c4-2410f96f0e44] Operation thread is still running _live_migration_monitor /usr/lib/python3.12/site-packages/nova/virt/libvirt/driver.py:11343
Oct 09 16:22:38 compute-0 nova_compute[117331]: 2025-10-09 16:22:38.737 2 DEBUG nova.virt.libvirt.driver [None req-5c240840-8346-4b6c-b1c2-7db22559751c 005258eefa5345409efe13f6a1de22ab c076c42271004e7aa86e84416e33f826 - - default default] [instance: 4537359c-cc98-4f5e-87c4-2410f96f0e44] Migration not running yet _live_migration_monitor /usr/lib/python3.12/site-packages/nova/virt/libvirt/driver.py:11352
Oct 09 16:22:38 compute-0 nova_compute[117331]: 2025-10-09 16:22:38.874 2 WARNING neutronclient.v2_0.client [req-2b49f102-eecf-4f56-b2ff-d1a0dbedbe10 req-7254c7ca-c504-4111-b723-2e94b1e6c5e7 ee15961ad2be4bada6c106ac4f69f422 c076c42271004e7aa86e84416e33f826 - - default default] The python binding code in neutronclient is deprecated in favor of OpenstackSDK, please use that as this will be removed in a future release.
Oct 09 16:22:39 compute-0 nova_compute[117331]: 2025-10-09 16:22:39.098 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 09 16:22:39 compute-0 nova_compute[117331]: 2025-10-09 16:22:39.166 2 DEBUG nova.network.neutron [req-2b49f102-eecf-4f56-b2ff-d1a0dbedbe10 req-7254c7ca-c504-4111-b723-2e94b1e6c5e7 ee15961ad2be4bada6c106ac4f69f422 c076c42271004e7aa86e84416e33f826 - - default default] [instance: 4537359c-cc98-4f5e-87c4-2410f96f0e44] Updated VIF entry in instance network info cache for port ab5bf45b-d6f4-4ecf-95d5-b00418cb03fa. _build_network_info_model /usr/lib/python3.12/site-packages/nova/network/neutron.py:3542
Oct 09 16:22:39 compute-0 nova_compute[117331]: 2025-10-09 16:22:39.167 2 DEBUG nova.network.neutron [req-2b49f102-eecf-4f56-b2ff-d1a0dbedbe10 req-7254c7ca-c504-4111-b723-2e94b1e6c5e7 ee15961ad2be4bada6c106ac4f69f422 c076c42271004e7aa86e84416e33f826 - - default default] [instance: 4537359c-cc98-4f5e-87c4-2410f96f0e44] Updating instance_info_cache with network_info: [{"id": "ab5bf45b-d6f4-4ecf-95d5-b00418cb03fa", "address": "fa:16:3e:cc:3c:4d", "network": {"id": "c78b28f7-c251-4d74-863e-8d5520acbae0", "bridge": "br-int", "label": "tempest-TestExecuteHostMaintenanceStrategy-300799103-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "95a019f93e534ec0ad7dca9e44d00556", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapab5bf45b-d6", "ovs_interfaceid": "ab5bf45b-d6f4-4ecf-95d5-b00418cb03fa", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {"migrating_to": "compute-1.ctlplane.example.com"}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.12/site-packages/nova/network/neutron.py:116
Oct 09 16:22:39 compute-0 nova_compute[117331]: 2025-10-09 16:22:39.240 2 DEBUG nova.virt.libvirt.driver [None req-5c240840-8346-4b6c-b1c2-7db22559751c 005258eefa5345409efe13f6a1de22ab c076c42271004e7aa86e84416e33f826 - - default default] [instance: 4537359c-cc98-4f5e-87c4-2410f96f0e44] Operation thread is still running _live_migration_monitor /usr/lib/python3.12/site-packages/nova/virt/libvirt/driver.py:11343
Oct 09 16:22:39 compute-0 nova_compute[117331]: 2025-10-09 16:22:39.241 2 DEBUG nova.virt.libvirt.driver [None req-5c240840-8346-4b6c-b1c2-7db22559751c 005258eefa5345409efe13f6a1de22ab c076c42271004e7aa86e84416e33f826 - - default default] [instance: 4537359c-cc98-4f5e-87c4-2410f96f0e44] Migration not running yet _live_migration_monitor /usr/lib/python3.12/site-packages/nova/virt/libvirt/driver.py:11352
Oct 09 16:22:39 compute-0 nova_compute[117331]: 2025-10-09 16:22:39.249 2 DEBUG nova.virt.libvirt.vif [None req-5c240840-8346-4b6c-b1c2-7db22559751c 005258eefa5345409efe13f6a1de22ab c076c42271004e7aa86e84416e33f826 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,compute_id=1,config_drive='True',created_at=2025-10-09T16:21:38Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description=None,display_name='tempest-TestExecuteHostMaintenanceStrategy-server-1250599539',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testexecutehostmaintenancestrategy-server-1250599539',id=12,image_ref='b7d6e0af-25e4-4227-9dc6-43143898ceee',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-10-09T16:21:53Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=1,progress=0,project_id='c47758289b4c448c95f927a91d89e09f',ramdisk_id='',reservation_id='r-0ee0701h',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member,manager,admin',image_base_image_ref='b7d6e0af-25e4-4227-9dc6-43143898ceee',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestExecuteHostMaintenanceStrategy-51576386',owner_user_name='tempest-TestExecuteHostMaintenanceStrategy-51576386-project-admin'},tags=<?>,task_state='migrating',terminated_at=None,trusted_certs=<?>,updated_at=2025-10-09T16:21:54Z,user_data=None,user_id='2ed3ac10329446d8a5aae566951cae1e',uuid=4537359c-cc98-4f5e-87c4-2410f96f0e44,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "ab5bf45b-d6f4-4ecf-95d5-b00418cb03fa", "address": "fa:16:3e:cc:3c:4d", "network": {"id": "c78b28f7-c251-4d74-863e-8d5520acbae0", "bridge": "br-int", "label": "tempest-TestExecuteHostMaintenanceStrategy-300799103-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "95a019f93e534ec0ad7dca9e44d00556", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system"}, "devname": "tapab5bf45b-d6", "ovs_interfaceid": "ab5bf45b-d6f4-4ecf-95d5-b00418cb03fa", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {"os_vif_delegation": true}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.12/site-packages/nova/virt/libvirt/vif.py:574
Oct 09 16:22:39 compute-0 nova_compute[117331]: 2025-10-09 16:22:39.249 2 DEBUG nova.network.os_vif_util [None req-5c240840-8346-4b6c-b1c2-7db22559751c 005258eefa5345409efe13f6a1de22ab c076c42271004e7aa86e84416e33f826 - - default default] Converting VIF {"id": "ab5bf45b-d6f4-4ecf-95d5-b00418cb03fa", "address": "fa:16:3e:cc:3c:4d", "network": {"id": "c78b28f7-c251-4d74-863e-8d5520acbae0", "bridge": "br-int", "label": "tempest-TestExecuteHostMaintenanceStrategy-300799103-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "95a019f93e534ec0ad7dca9e44d00556", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system"}, "devname": "tapab5bf45b-d6", "ovs_interfaceid": "ab5bf45b-d6f4-4ecf-95d5-b00418cb03fa", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {"os_vif_delegation": true}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.12/site-packages/nova/network/os_vif_util.py:511
Oct 09 16:22:39 compute-0 nova_compute[117331]: 2025-10-09 16:22:39.250 2 DEBUG nova.network.os_vif_util [None req-5c240840-8346-4b6c-b1c2-7db22559751c 005258eefa5345409efe13f6a1de22ab c076c42271004e7aa86e84416e33f826 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:cc:3c:4d,bridge_name='br-int',has_traffic_filtering=True,id=ab5bf45b-d6f4-4ecf-95d5-b00418cb03fa,network=Network(c78b28f7-c251-4d74-863e-8d5520acbae0),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapab5bf45b-d6') nova_to_osvif_vif /usr/lib/python3.12/site-packages/nova/network/os_vif_util.py:548
Oct 09 16:22:39 compute-0 nova_compute[117331]: 2025-10-09 16:22:39.251 2 DEBUG nova.virt.libvirt.migration [None req-5c240840-8346-4b6c-b1c2-7db22559751c 005258eefa5345409efe13f6a1de22ab c076c42271004e7aa86e84416e33f826 - - default default] [instance: 4537359c-cc98-4f5e-87c4-2410f96f0e44] Updating guest XML with vif config: <interface type="ethernet">
Oct 09 16:22:39 compute-0 nova_compute[117331]:   <mac address="fa:16:3e:cc:3c:4d"/>
Oct 09 16:22:39 compute-0 nova_compute[117331]:   <model type="virtio"/>
Oct 09 16:22:39 compute-0 nova_compute[117331]:   <driver name="vhost" rx_queue_size="512"/>
Oct 09 16:22:39 compute-0 nova_compute[117331]:   <mtu size="1442"/>
Oct 09 16:22:39 compute-0 nova_compute[117331]:   <target dev="tapab5bf45b-d6"/>
Oct 09 16:22:39 compute-0 nova_compute[117331]: </interface>
Oct 09 16:22:39 compute-0 nova_compute[117331]:  _update_vif_xml /usr/lib/python3.12/site-packages/nova/virt/libvirt/migration.py:534
Oct 09 16:22:39 compute-0 nova_compute[117331]: 2025-10-09 16:22:39.251 2 DEBUG nova.virt.libvirt.migration [None req-5c240840-8346-4b6c-b1c2-7db22559751c 005258eefa5345409efe13f6a1de22ab c076c42271004e7aa86e84416e33f826 - - default default] _remove_cpu_shared_set_xml input xml=<domain type="kvm">
Oct 09 16:22:39 compute-0 nova_compute[117331]:   <name>instance-0000000c</name>
Oct 09 16:22:39 compute-0 nova_compute[117331]:   <uuid>4537359c-cc98-4f5e-87c4-2410f96f0e44</uuid>
Oct 09 16:22:39 compute-0 nova_compute[117331]:   <metadata>
Oct 09 16:22:39 compute-0 nova_compute[117331]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Oct 09 16:22:39 compute-0 nova_compute[117331]:       <nova:package version="32.1.0-0.20251008143712.076498e.el10"/>
Oct 09 16:22:39 compute-0 nova_compute[117331]:       <nova:name>tempest-TestExecuteHostMaintenanceStrategy-server-1250599539</nova:name>
Oct 09 16:22:39 compute-0 nova_compute[117331]:       <nova:creationTime>2025-10-09 16:21:48</nova:creationTime>
Oct 09 16:22:39 compute-0 nova_compute[117331]:       <nova:flavor name="m1.nano" id="5aeb5bf9-70f3-4ab6-b418-2c3319a7d2b3">
Oct 09 16:22:39 compute-0 nova_compute[117331]:         <nova:memory>128</nova:memory>
Oct 09 16:22:39 compute-0 nova_compute[117331]:         <nova:disk>1</nova:disk>
Oct 09 16:22:39 compute-0 nova_compute[117331]:         <nova:swap>0</nova:swap>
Oct 09 16:22:39 compute-0 nova_compute[117331]:         <nova:ephemeral>0</nova:ephemeral>
Oct 09 16:22:39 compute-0 nova_compute[117331]:         <nova:vcpus>1</nova:vcpus>
Oct 09 16:22:39 compute-0 nova_compute[117331]:         <nova:extraSpecs>
Oct 09 16:22:39 compute-0 nova_compute[117331]:           <nova:extraSpec name="hw_rng:allowed">True</nova:extraSpec>
Oct 09 16:22:39 compute-0 nova_compute[117331]:         </nova:extraSpecs>
Oct 09 16:22:39 compute-0 nova_compute[117331]:       </nova:flavor>
Oct 09 16:22:39 compute-0 nova_compute[117331]:       <nova:image uuid="b7d6e0af-25e4-4227-9dc6-43143898ceee">
Oct 09 16:22:39 compute-0 nova_compute[117331]:         <nova:containerFormat>bare</nova:containerFormat>
Oct 09 16:22:39 compute-0 nova_compute[117331]:         <nova:diskFormat>qcow2</nova:diskFormat>
Oct 09 16:22:39 compute-0 nova_compute[117331]:         <nova:minDisk>1</nova:minDisk>
Oct 09 16:22:39 compute-0 nova_compute[117331]:         <nova:minRam>0</nova:minRam>
Oct 09 16:22:39 compute-0 nova_compute[117331]:         <nova:properties>
Oct 09 16:22:39 compute-0 nova_compute[117331]:           <nova:property name="hw_rng_model">virtio</nova:property>
Oct 09 16:22:39 compute-0 nova_compute[117331]:         </nova:properties>
Oct 09 16:22:39 compute-0 nova_compute[117331]:       </nova:image>
Oct 09 16:22:39 compute-0 nova_compute[117331]:       <nova:owner>
Oct 09 16:22:39 compute-0 nova_compute[117331]:         <nova:user uuid="2ed3ac10329446d8a5aae566951cae1e">tempest-TestExecuteHostMaintenanceStrategy-51576386-project-admin</nova:user>
Oct 09 16:22:39 compute-0 nova_compute[117331]:         <nova:project uuid="c47758289b4c448c95f927a91d89e09f">tempest-TestExecuteHostMaintenanceStrategy-51576386</nova:project>
Oct 09 16:22:39 compute-0 nova_compute[117331]:       </nova:owner>
Oct 09 16:22:39 compute-0 nova_compute[117331]:       <nova:root type="image" uuid="b7d6e0af-25e4-4227-9dc6-43143898ceee"/>
Oct 09 16:22:39 compute-0 nova_compute[117331]:       <nova:ports>
Oct 09 16:22:39 compute-0 nova_compute[117331]:         <nova:port uuid="ab5bf45b-d6f4-4ecf-95d5-b00418cb03fa">
Oct 09 16:22:39 compute-0 nova_compute[117331]:           <nova:ip type="fixed" address="10.100.0.14" ipVersion="4"/>
Oct 09 16:22:39 compute-0 nova_compute[117331]:         </nova:port>
Oct 09 16:22:39 compute-0 nova_compute[117331]:       </nova:ports>
Oct 09 16:22:39 compute-0 nova_compute[117331]:     </nova:instance>
Oct 09 16:22:39 compute-0 nova_compute[117331]:   </metadata>
Oct 09 16:22:39 compute-0 nova_compute[117331]:   <memory unit="KiB">131072</memory>
Oct 09 16:22:39 compute-0 nova_compute[117331]:   <currentMemory unit="KiB">131072</currentMemory>
Oct 09 16:22:39 compute-0 nova_compute[117331]:   <vcpu placement="static">1</vcpu>
Oct 09 16:22:39 compute-0 nova_compute[117331]:   <resource>
Oct 09 16:22:39 compute-0 nova_compute[117331]:     <partition>/machine</partition>
Oct 09 16:22:39 compute-0 nova_compute[117331]:   </resource>
Oct 09 16:22:39 compute-0 nova_compute[117331]:   <sysinfo type="smbios">
Oct 09 16:22:39 compute-0 nova_compute[117331]:     <system>
Oct 09 16:22:39 compute-0 nova_compute[117331]:       <entry name="manufacturer">RDO</entry>
Oct 09 16:22:39 compute-0 nova_compute[117331]:       <entry name="product">OpenStack Compute</entry>
Oct 09 16:22:39 compute-0 nova_compute[117331]:       <entry name="version">32.1.0-0.20251008143712.076498e.el10</entry>
Oct 09 16:22:39 compute-0 nova_compute[117331]:       <entry name="serial">4537359c-cc98-4f5e-87c4-2410f96f0e44</entry>
Oct 09 16:22:39 compute-0 nova_compute[117331]:       <entry name="uuid">4537359c-cc98-4f5e-87c4-2410f96f0e44</entry>
Oct 09 16:22:39 compute-0 nova_compute[117331]:       <entry name="family">Virtual Machine</entry>
Oct 09 16:22:39 compute-0 nova_compute[117331]:     </system>
Oct 09 16:22:39 compute-0 nova_compute[117331]:   </sysinfo>
Oct 09 16:22:39 compute-0 nova_compute[117331]:   <os>
Oct 09 16:22:39 compute-0 nova_compute[117331]:     <type arch="x86_64" machine="pc-q35-rhel9.6.0">hvm</type>
Oct 09 16:22:39 compute-0 nova_compute[117331]:     <boot dev="hd"/>
Oct 09 16:22:39 compute-0 nova_compute[117331]:     <smbios mode="sysinfo"/>
Oct 09 16:22:39 compute-0 nova_compute[117331]:   </os>
Oct 09 16:22:39 compute-0 nova_compute[117331]:   <features>
Oct 09 16:22:39 compute-0 nova_compute[117331]:     <acpi/>
Oct 09 16:22:39 compute-0 nova_compute[117331]:     <apic/>
Oct 09 16:22:39 compute-0 nova_compute[117331]:     <vmcoreinfo state="on"/>
Oct 09 16:22:39 compute-0 nova_compute[117331]:   </features>
Oct 09 16:22:39 compute-0 nova_compute[117331]:   <cpu mode="host-model" check="partial">
Oct 09 16:22:39 compute-0 nova_compute[117331]:     <topology sockets="1" dies="1" clusters="1" cores="1" threads="1"/>
Oct 09 16:22:39 compute-0 nova_compute[117331]:   </cpu>
Oct 09 16:22:39 compute-0 nova_compute[117331]:   <clock offset="utc">
Oct 09 16:22:39 compute-0 nova_compute[117331]:     <timer name="pit" tickpolicy="delay"/>
Oct 09 16:22:39 compute-0 nova_compute[117331]:     <timer name="rtc" tickpolicy="catchup"/>
Oct 09 16:22:39 compute-0 nova_compute[117331]:     <timer name="hpet" present="no"/>
Oct 09 16:22:39 compute-0 nova_compute[117331]:   </clock>
Oct 09 16:22:39 compute-0 nova_compute[117331]:   <on_poweroff>destroy</on_poweroff>
Oct 09 16:22:39 compute-0 nova_compute[117331]:   <on_reboot>restart</on_reboot>
Oct 09 16:22:39 compute-0 nova_compute[117331]:   <on_crash>destroy</on_crash>
Oct 09 16:22:39 compute-0 nova_compute[117331]:   <devices>
Oct 09 16:22:39 compute-0 nova_compute[117331]:     <emulator>/usr/libexec/qemu-kvm</emulator>
Oct 09 16:22:39 compute-0 nova_compute[117331]:     <disk type="file" device="disk">
Oct 09 16:22:39 compute-0 nova_compute[117331]:       <driver name="qemu" type="qcow2" cache="none"/>
Oct 09 16:22:39 compute-0 nova_compute[117331]:       <source file="/var/lib/nova/instances/4537359c-cc98-4f5e-87c4-2410f96f0e44/disk"/>
Oct 09 16:22:39 compute-0 nova_compute[117331]:       <target dev="vda" bus="virtio"/>
Oct 09 16:22:39 compute-0 nova_compute[117331]:       <address type="pci" domain="0x0000" bus="0x03" slot="0x00" function="0x0"/>
Oct 09 16:22:39 compute-0 nova_compute[117331]:     </disk>
Oct 09 16:22:39 compute-0 nova_compute[117331]:     <disk type="file" device="cdrom">
Oct 09 16:22:39 compute-0 nova_compute[117331]:       <driver name="qemu" type="raw" cache="none"/>
Oct 09 16:22:39 compute-0 nova_compute[117331]:       <source file="/var/lib/nova/instances/4537359c-cc98-4f5e-87c4-2410f96f0e44/disk.config"/>
Oct 09 16:22:39 compute-0 nova_compute[117331]:       <target dev="sda" bus="sata"/>
Oct 09 16:22:39 compute-0 nova_compute[117331]:       <readonly/>
Oct 09 16:22:39 compute-0 nova_compute[117331]:       <address type="drive" controller="0" bus="0" target="0" unit="0"/>
Oct 09 16:22:39 compute-0 nova_compute[117331]:     </disk>
Oct 09 16:22:39 compute-0 nova_compute[117331]:     <controller type="pci" index="0" model="pcie-root"/>
Oct 09 16:22:39 compute-0 nova_compute[117331]:     <controller type="pci" index="1" model="pcie-root-port">
Oct 09 16:22:39 compute-0 nova_compute[117331]:       <model name="pcie-root-port"/>
Oct 09 16:22:39 compute-0 nova_compute[117331]:       <target chassis="1" port="0x10"/>
Oct 09 16:22:39 compute-0 nova_compute[117331]:       <address type="pci" domain="0x0000" bus="0x00" slot="0x02" function="0x0" multifunction="on"/>
Oct 09 16:22:39 compute-0 nova_compute[117331]:     </controller>
Oct 09 16:22:39 compute-0 nova_compute[117331]:     <controller type="pci" index="2" model="pcie-root-port">
Oct 09 16:22:39 compute-0 nova_compute[117331]:       <model name="pcie-root-port"/>
Oct 09 16:22:39 compute-0 nova_compute[117331]:       <target chassis="2" port="0x11"/>
Oct 09 16:22:39 compute-0 nova_compute[117331]:       <address type="pci" domain="0x0000" bus="0x00" slot="0x02" function="0x1"/>
Oct 09 16:22:39 compute-0 nova_compute[117331]:     </controller>
Oct 09 16:22:39 compute-0 nova_compute[117331]:     <controller type="pci" index="3" model="pcie-root-port">
Oct 09 16:22:39 compute-0 nova_compute[117331]:       <model name="pcie-root-port"/>
Oct 09 16:22:39 compute-0 nova_compute[117331]:       <target chassis="3" port="0x12"/>
Oct 09 16:22:39 compute-0 nova_compute[117331]:       <address type="pci" domain="0x0000" bus="0x00" slot="0x02" function="0x2"/>
Oct 09 16:22:39 compute-0 nova_compute[117331]:     </controller>
Oct 09 16:22:39 compute-0 nova_compute[117331]:     <controller type="pci" index="4" model="pcie-root-port">
Oct 09 16:22:39 compute-0 nova_compute[117331]:       <model name="pcie-root-port"/>
Oct 09 16:22:39 compute-0 nova_compute[117331]:       <target chassis="4" port="0x13"/>
Oct 09 16:22:39 compute-0 nova_compute[117331]:       <address type="pci" domain="0x0000" bus="0x00" slot="0x02" function="0x3"/>
Oct 09 16:22:39 compute-0 nova_compute[117331]:     </controller>
Oct 09 16:22:39 compute-0 nova_compute[117331]:     <controller type="pci" index="5" model="pcie-root-port">
Oct 09 16:22:39 compute-0 nova_compute[117331]:       <model name="pcie-root-port"/>
Oct 09 16:22:39 compute-0 nova_compute[117331]:       <target chassis="5" port="0x14"/>
Oct 09 16:22:39 compute-0 nova_compute[117331]:       <address type="pci" domain="0x0000" bus="0x00" slot="0x02" function="0x4"/>
Oct 09 16:22:39 compute-0 nova_compute[117331]:     </controller>
Oct 09 16:22:39 compute-0 nova_compute[117331]:     <controller type="pci" index="6" model="pcie-root-port">
Oct 09 16:22:39 compute-0 nova_compute[117331]:       <model name="pcie-root-port"/>
Oct 09 16:22:39 compute-0 nova_compute[117331]:       <target chassis="6" port="0x15"/>
Oct 09 16:22:39 compute-0 nova_compute[117331]:       <address type="pci" domain="0x0000" bus="0x00" slot="0x02" function="0x5"/>
Oct 09 16:22:39 compute-0 nova_compute[117331]:     </controller>
Oct 09 16:22:39 compute-0 nova_compute[117331]:     <controller type="pci" index="7" model="pcie-root-port">
Oct 09 16:22:39 compute-0 nova_compute[117331]:       <model name="pcie-root-port"/>
Oct 09 16:22:39 compute-0 nova_compute[117331]:       <target chassis="7" port="0x16"/>
Oct 09 16:22:39 compute-0 nova_compute[117331]:       <address type="pci" domain="0x0000" bus="0x00" slot="0x02" function="0x6"/>
Oct 09 16:22:39 compute-0 nova_compute[117331]:     </controller>
Oct 09 16:22:39 compute-0 nova_compute[117331]:     <controller type="pci" index="8" model="pcie-root-port">
Oct 09 16:22:39 compute-0 nova_compute[117331]:       <model name="pcie-root-port"/>
Oct 09 16:22:39 compute-0 nova_compute[117331]:       <target chassis="8" port="0x17"/>
Oct 09 16:22:39 compute-0 nova_compute[117331]:       <address type="pci" domain="0x0000" bus="0x00" slot="0x02" function="0x7"/>
Oct 09 16:22:39 compute-0 nova_compute[117331]:     </controller>
Oct 09 16:22:39 compute-0 nova_compute[117331]:     <controller type="pci" index="9" model="pcie-root-port">
Oct 09 16:22:39 compute-0 nova_compute[117331]:       <model name="pcie-root-port"/>
Oct 09 16:22:39 compute-0 nova_compute[117331]:       <target chassis="9" port="0x18"/>
Oct 09 16:22:39 compute-0 nova_compute[117331]:       <address type="pci" domain="0x0000" bus="0x00" slot="0x03" function="0x0" multifunction="on"/>
Oct 09 16:22:39 compute-0 nova_compute[117331]:     </controller>
Oct 09 16:22:39 compute-0 nova_compute[117331]:     <controller type="pci" index="10" model="pcie-root-port">
Oct 09 16:22:39 compute-0 nova_compute[117331]:       <model name="pcie-root-port"/>
Oct 09 16:22:39 compute-0 nova_compute[117331]:       <target chassis="10" port="0x19"/>
Oct 09 16:22:39 compute-0 nova_compute[117331]:       <address type="pci" domain="0x0000" bus="0x00" slot="0x03" function="0x1"/>
Oct 09 16:22:39 compute-0 nova_compute[117331]:     </controller>
Oct 09 16:22:39 compute-0 nova_compute[117331]:     <controller type="pci" index="11" model="pcie-root-port">
Oct 09 16:22:39 compute-0 nova_compute[117331]:       <model name="pcie-root-port"/>
Oct 09 16:22:39 compute-0 nova_compute[117331]:       <target chassis="11" port="0x1a"/>
Oct 09 16:22:39 compute-0 nova_compute[117331]:       <address type="pci" domain="0x0000" bus="0x00" slot="0x03" function="0x2"/>
Oct 09 16:22:39 compute-0 nova_compute[117331]:     </controller>
Oct 09 16:22:39 compute-0 nova_compute[117331]:     <controller type="pci" index="12" model="pcie-root-port">
Oct 09 16:22:39 compute-0 nova_compute[117331]:       <model name="pcie-root-port"/>
Oct 09 16:22:39 compute-0 nova_compute[117331]:       <target chassis="12" port="0x1b"/>
Oct 09 16:22:39 compute-0 nova_compute[117331]:       <address type="pci" domain="0x0000" bus="0x00" slot="0x03" function="0x3"/>
Oct 09 16:22:39 compute-0 nova_compute[117331]:     </controller>
Oct 09 16:22:39 compute-0 nova_compute[117331]:     <controller type="pci" index="13" model="pcie-root-port">
Oct 09 16:22:39 compute-0 nova_compute[117331]:       <model name="pcie-root-port"/>
Oct 09 16:22:39 compute-0 nova_compute[117331]:       <target chassis="13" port="0x1c"/>
Oct 09 16:22:39 compute-0 nova_compute[117331]:       <address type="pci" domain="0x0000" bus="0x00" slot="0x03" function="0x4"/>
Oct 09 16:22:39 compute-0 nova_compute[117331]:     </controller>
Oct 09 16:22:39 compute-0 nova_compute[117331]:     <controller type="pci" index="14" model="pcie-root-port">
Oct 09 16:22:39 compute-0 nova_compute[117331]:       <model name="pcie-root-port"/>
Oct 09 16:22:39 compute-0 nova_compute[117331]:       <target chassis="14" port="0x1d"/>
Oct 09 16:22:39 compute-0 nova_compute[117331]:       <address type="pci" domain="0x0000" bus="0x00" slot="0x03" function="0x5"/>
Oct 09 16:22:39 compute-0 nova_compute[117331]:     </controller>
Oct 09 16:22:39 compute-0 nova_compute[117331]:     <controller type="pci" index="15" model="pcie-root-port">
Oct 09 16:22:39 compute-0 nova_compute[117331]:       <model name="pcie-root-port"/>
Oct 09 16:22:39 compute-0 nova_compute[117331]:       <target chassis="15" port="0x1e"/>
Oct 09 16:22:39 compute-0 nova_compute[117331]:       <address type="pci" domain="0x0000" bus="0x00" slot="0x03" function="0x6"/>
Oct 09 16:22:39 compute-0 nova_compute[117331]:     </controller>
Oct 09 16:22:39 compute-0 nova_compute[117331]:     <controller type="pci" index="16" model="pcie-root-port">
Oct 09 16:22:39 compute-0 nova_compute[117331]:       <model name="pcie-root-port"/>
Oct 09 16:22:39 compute-0 nova_compute[117331]:       <target chassis="16" port="0x1f"/>
Oct 09 16:22:39 compute-0 nova_compute[117331]:       <address type="pci" domain="0x0000" bus="0x00" slot="0x03" function="0x7"/>
Oct 09 16:22:39 compute-0 nova_compute[117331]:     </controller>
Oct 09 16:22:39 compute-0 nova_compute[117331]:     <controller type="pci" index="17" model="pcie-root-port">
Oct 09 16:22:39 compute-0 nova_compute[117331]:       <model name="pcie-root-port"/>
Oct 09 16:22:39 compute-0 nova_compute[117331]:       <target chassis="17" port="0x20"/>
Oct 09 16:22:39 compute-0 nova_compute[117331]:       <address type="pci" domain="0x0000" bus="0x00" slot="0x04" function="0x0" multifunction="on"/>
Oct 09 16:22:39 compute-0 nova_compute[117331]:     </controller>
Oct 09 16:22:39 compute-0 nova_compute[117331]:     <controller type="pci" index="18" model="pcie-root-port">
Oct 09 16:22:39 compute-0 nova_compute[117331]:       <model name="pcie-root-port"/>
Oct 09 16:22:39 compute-0 nova_compute[117331]:       <target chassis="18" port="0x21"/>
Oct 09 16:22:39 compute-0 nova_compute[117331]:       <address type="pci" domain="0x0000" bus="0x00" slot="0x04" function="0x1"/>
Oct 09 16:22:39 compute-0 nova_compute[117331]:     </controller>
Oct 09 16:22:39 compute-0 nova_compute[117331]:     <controller type="pci" index="19" model="pcie-root-port">
Oct 09 16:22:39 compute-0 nova_compute[117331]:       <model name="pcie-root-port"/>
Oct 09 16:22:39 compute-0 nova_compute[117331]:       <target chassis="19" port="0x22"/>
Oct 09 16:22:39 compute-0 nova_compute[117331]:       <address type="pci" domain="0x0000" bus="0x00" slot="0x04" function="0x2"/>
Oct 09 16:22:39 compute-0 nova_compute[117331]:     </controller>
Oct 09 16:22:39 compute-0 nova_compute[117331]:     <controller type="pci" index="20" model="pcie-root-port">
Oct 09 16:22:39 compute-0 nova_compute[117331]:       <model name="pcie-root-port"/>
Oct 09 16:22:39 compute-0 nova_compute[117331]:       <target chassis="20" port="0x23"/>
Oct 09 16:22:39 compute-0 nova_compute[117331]:       <address type="pci" domain="0x0000" bus="0x00" slot="0x04" function="0x3"/>
Oct 09 16:22:39 compute-0 nova_compute[117331]:     </controller>
Oct 09 16:22:39 compute-0 nova_compute[117331]:     <controller type="pci" index="21" model="pcie-root-port">
Oct 09 16:22:39 compute-0 nova_compute[117331]:       <model name="pcie-root-port"/>
Oct 09 16:22:39 compute-0 nova_compute[117331]:       <target chassis="21" port="0x24"/>
Oct 09 16:22:39 compute-0 nova_compute[117331]:       <address type="pci" domain="0x0000" bus="0x00" slot="0x04" function="0x4"/>
Oct 09 16:22:39 compute-0 nova_compute[117331]:     </controller>
Oct 09 16:22:39 compute-0 nova_compute[117331]:     <controller type="pci" index="22" model="pcie-root-port">
Oct 09 16:22:39 compute-0 nova_compute[117331]:       <model name="pcie-root-port"/>
Oct 09 16:22:39 compute-0 nova_compute[117331]:       <target chassis="22" port="0x25"/>
Oct 09 16:22:39 compute-0 nova_compute[117331]:       <address type="pci" domain="0x0000" bus="0x00" slot="0x04" function="0x5"/>
Oct 09 16:22:39 compute-0 nova_compute[117331]:     </controller>
Oct 09 16:22:39 compute-0 nova_compute[117331]:     <controller type="pci" index="23" model="pcie-root-port">
Oct 09 16:22:39 compute-0 nova_compute[117331]:       <model name="pcie-root-port"/>
Oct 09 16:22:39 compute-0 nova_compute[117331]:       <target chassis="23" port="0x26"/>
Oct 09 16:22:39 compute-0 nova_compute[117331]:       <address type="pci" domain="0x0000" bus="0x00" slot="0x04" function="0x6"/>
Oct 09 16:22:39 compute-0 nova_compute[117331]:     </controller>
Oct 09 16:22:39 compute-0 nova_compute[117331]:     <controller type="pci" index="24" model="pcie-root-port">
Oct 09 16:22:39 compute-0 nova_compute[117331]:       <model name="pcie-root-port"/>
Oct 09 16:22:39 compute-0 nova_compute[117331]:       <target chassis="24" port="0x27"/>
Oct 09 16:22:39 compute-0 nova_compute[117331]:       <address type="pci" domain="0x0000" bus="0x00" slot="0x04" function="0x7"/>
Oct 09 16:22:39 compute-0 nova_compute[117331]:     </controller>
Oct 09 16:22:39 compute-0 nova_compute[117331]:     <controller type="pci" index="25" model="pcie-root-port">
Oct 09 16:22:39 compute-0 nova_compute[117331]:       <model name="pcie-root-port"/>
Oct 09 16:22:39 compute-0 nova_compute[117331]:       <target chassis="25" port="0x28"/>
Oct 09 16:22:39 compute-0 nova_compute[117331]:       <address type="pci" domain="0x0000" bus="0x00" slot="0x05" function="0x0"/>
Oct 09 16:22:39 compute-0 nova_compute[117331]:     </controller>
Oct 09 16:22:39 compute-0 nova_compute[117331]:     <controller type="pci" index="26" model="pcie-to-pci-bridge">
Oct 09 16:22:39 compute-0 nova_compute[117331]:       <model name="pcie-pci-bridge"/>
Oct 09 16:22:39 compute-0 nova_compute[117331]:       <address type="pci" domain="0x0000" bus="0x01" slot="0x00" function="0x0"/>
Oct 09 16:22:39 compute-0 nova_compute[117331]:     </controller>
Oct 09 16:22:39 compute-0 nova_compute[117331]:     <controller type="usb" index="0" model="piix3-uhci">
Oct 09 16:22:39 compute-0 nova_compute[117331]:       <address type="pci" domain="0x0000" bus="0x1a" slot="0x01" function="0x0"/>
Oct 09 16:22:39 compute-0 nova_compute[117331]:     </controller>
Oct 09 16:22:39 compute-0 nova_compute[117331]:     <controller type="sata" index="0">
Oct 09 16:22:39 compute-0 nova_compute[117331]:       <address type="pci" domain="0x0000" bus="0x00" slot="0x1f" function="0x2"/>
Oct 09 16:22:39 compute-0 nova_compute[117331]:     </controller>
Oct 09 16:22:39 compute-0 nova_compute[117331]:     <interface type="ethernet"><mac address="fa:16:3e:cc:3c:4d"/><model type="virtio"/><driver name="vhost" rx_queue_size="512"/><mtu size="1442"/><target dev="tapab5bf45b-d6"/><address type="pci" domain="0x0000" bus="0x02" slot="0x00" function="0x0"/>
Oct 09 16:22:39 compute-0 nova_compute[117331]:     </interface><serial type="pty">
Oct 09 16:22:39 compute-0 nova_compute[117331]:       <log file="/var/lib/nova/instances/4537359c-cc98-4f5e-87c4-2410f96f0e44/console.log" append="off"/>
Oct 09 16:22:39 compute-0 nova_compute[117331]:       <target type="isa-serial" port="0">
Oct 09 16:22:39 compute-0 nova_compute[117331]:         <model name="isa-serial"/>
Oct 09 16:22:39 compute-0 nova_compute[117331]:       </target>
Oct 09 16:22:39 compute-0 nova_compute[117331]:     </serial>
Oct 09 16:22:39 compute-0 nova_compute[117331]:     <console type="pty">
Oct 09 16:22:39 compute-0 nova_compute[117331]:       <log file="/var/lib/nova/instances/4537359c-cc98-4f5e-87c4-2410f96f0e44/console.log" append="off"/>
Oct 09 16:22:39 compute-0 nova_compute[117331]:       <target type="serial" port="0"/>
Oct 09 16:22:39 compute-0 nova_compute[117331]:     </console>
Oct 09 16:22:39 compute-0 nova_compute[117331]:     <input type="tablet" bus="usb">
Oct 09 16:22:39 compute-0 nova_compute[117331]:       <address type="usb" bus="0" port="1"/>
Oct 09 16:22:39 compute-0 nova_compute[117331]:     </input>
Oct 09 16:22:39 compute-0 nova_compute[117331]:     <input type="mouse" bus="ps2"/>
Oct 09 16:22:39 compute-0 nova_compute[117331]:     <graphics type="vnc" port="-1" autoport="yes" listen="::">
Oct 09 16:22:39 compute-0 nova_compute[117331]:       <listen type="address" address="::"/>
Oct 09 16:22:39 compute-0 nova_compute[117331]:     </graphics>
Oct 09 16:22:39 compute-0 nova_compute[117331]:     <video>
Oct 09 16:22:39 compute-0 nova_compute[117331]:       <model type="virtio" heads="1" primary="yes"/>
Oct 09 16:22:39 compute-0 nova_compute[117331]:       <address type="pci" domain="0x0000" bus="0x00" slot="0x01" function="0x0"/>
Oct 09 16:22:39 compute-0 nova_compute[117331]:     </video>
Oct 09 16:22:39 compute-0 nova_compute[117331]:     <memballoon model="virtio" autodeflate="on" freePageReporting="on">
Oct 09 16:22:39 compute-0 nova_compute[117331]:       <stats period="10"/>
Oct 09 16:22:39 compute-0 nova_compute[117331]:       <address type="pci" domain="0x0000" bus="0x04" slot="0x00" function="0x0"/>
Oct 09 16:22:39 compute-0 nova_compute[117331]:     </memballoon>
Oct 09 16:22:39 compute-0 nova_compute[117331]:     <rng model="virtio">
Oct 09 16:22:39 compute-0 nova_compute[117331]:       <backend model="random">/dev/urandom</backend>
Oct 09 16:22:39 compute-0 nova_compute[117331]:       <address type="pci" domain="0x0000" bus="0x05" slot="0x00" function="0x0"/>
Oct 09 16:22:39 compute-0 nova_compute[117331]:     </rng>
Oct 09 16:22:39 compute-0 nova_compute[117331]:   </devices>
Oct 09 16:22:39 compute-0 nova_compute[117331]:   <seclabel type="dynamic" model="selinux" relabel="yes"/>
Oct 09 16:22:39 compute-0 nova_compute[117331]: </domain>
Oct 09 16:22:39 compute-0 nova_compute[117331]:  _remove_cpu_shared_set_xml /usr/lib/python3.12/site-packages/nova/virt/libvirt/migration.py:241
Oct 09 16:22:39 compute-0 nova_compute[117331]: 2025-10-09 16:22:39.253 2 DEBUG nova.virt.libvirt.migration [None req-5c240840-8346-4b6c-b1c2-7db22559751c 005258eefa5345409efe13f6a1de22ab c076c42271004e7aa86e84416e33f826 - - default default] _remove_cpu_shared_set_xml output xml=<domain type="kvm">
Oct 09 16:22:39 compute-0 nova_compute[117331]:   <name>instance-0000000c</name>
Oct 09 16:22:39 compute-0 nova_compute[117331]:   <uuid>4537359c-cc98-4f5e-87c4-2410f96f0e44</uuid>
Oct 09 16:22:39 compute-0 nova_compute[117331]:   <metadata>
Oct 09 16:22:39 compute-0 nova_compute[117331]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Oct 09 16:22:39 compute-0 nova_compute[117331]:       <nova:package version="32.1.0-0.20251008143712.076498e.el10"/>
Oct 09 16:22:39 compute-0 nova_compute[117331]:       <nova:name>tempest-TestExecuteHostMaintenanceStrategy-server-1250599539</nova:name>
Oct 09 16:22:39 compute-0 nova_compute[117331]:       <nova:creationTime>2025-10-09 16:21:48</nova:creationTime>
Oct 09 16:22:39 compute-0 nova_compute[117331]:       <nova:flavor name="m1.nano" id="5aeb5bf9-70f3-4ab6-b418-2c3319a7d2b3">
Oct 09 16:22:39 compute-0 nova_compute[117331]:         <nova:memory>128</nova:memory>
Oct 09 16:22:39 compute-0 nova_compute[117331]:         <nova:disk>1</nova:disk>
Oct 09 16:22:39 compute-0 nova_compute[117331]:         <nova:swap>0</nova:swap>
Oct 09 16:22:39 compute-0 nova_compute[117331]:         <nova:ephemeral>0</nova:ephemeral>
Oct 09 16:22:39 compute-0 nova_compute[117331]:         <nova:vcpus>1</nova:vcpus>
Oct 09 16:22:39 compute-0 nova_compute[117331]:         <nova:extraSpecs>
Oct 09 16:22:39 compute-0 nova_compute[117331]:           <nova:extraSpec name="hw_rng:allowed">True</nova:extraSpec>
Oct 09 16:22:39 compute-0 nova_compute[117331]:         </nova:extraSpecs>
Oct 09 16:22:39 compute-0 nova_compute[117331]:       </nova:flavor>
Oct 09 16:22:39 compute-0 nova_compute[117331]:       <nova:image uuid="b7d6e0af-25e4-4227-9dc6-43143898ceee">
Oct 09 16:22:39 compute-0 nova_compute[117331]:         <nova:containerFormat>bare</nova:containerFormat>
Oct 09 16:22:39 compute-0 nova_compute[117331]:         <nova:diskFormat>qcow2</nova:diskFormat>
Oct 09 16:22:39 compute-0 nova_compute[117331]:         <nova:minDisk>1</nova:minDisk>
Oct 09 16:22:39 compute-0 nova_compute[117331]:         <nova:minRam>0</nova:minRam>
Oct 09 16:22:39 compute-0 nova_compute[117331]:         <nova:properties>
Oct 09 16:22:39 compute-0 nova_compute[117331]:           <nova:property name="hw_rng_model">virtio</nova:property>
Oct 09 16:22:39 compute-0 nova_compute[117331]:         </nova:properties>
Oct 09 16:22:39 compute-0 nova_compute[117331]:       </nova:image>
Oct 09 16:22:39 compute-0 nova_compute[117331]:       <nova:owner>
Oct 09 16:22:39 compute-0 nova_compute[117331]:         <nova:user uuid="2ed3ac10329446d8a5aae566951cae1e">tempest-TestExecuteHostMaintenanceStrategy-51576386-project-admin</nova:user>
Oct 09 16:22:39 compute-0 nova_compute[117331]:         <nova:project uuid="c47758289b4c448c95f927a91d89e09f">tempest-TestExecuteHostMaintenanceStrategy-51576386</nova:project>
Oct 09 16:22:39 compute-0 nova_compute[117331]:       </nova:owner>
Oct 09 16:22:39 compute-0 nova_compute[117331]:       <nova:root type="image" uuid="b7d6e0af-25e4-4227-9dc6-43143898ceee"/>
Oct 09 16:22:39 compute-0 nova_compute[117331]:       <nova:ports>
Oct 09 16:22:39 compute-0 nova_compute[117331]:         <nova:port uuid="ab5bf45b-d6f4-4ecf-95d5-b00418cb03fa">
Oct 09 16:22:39 compute-0 nova_compute[117331]:           <nova:ip type="fixed" address="10.100.0.14" ipVersion="4"/>
Oct 09 16:22:39 compute-0 nova_compute[117331]:         </nova:port>
Oct 09 16:22:39 compute-0 nova_compute[117331]:       </nova:ports>
Oct 09 16:22:39 compute-0 nova_compute[117331]:     </nova:instance>
Oct 09 16:22:39 compute-0 nova_compute[117331]:   </metadata>
Oct 09 16:22:39 compute-0 nova_compute[117331]:   <memory unit="KiB">131072</memory>
Oct 09 16:22:39 compute-0 nova_compute[117331]:   <currentMemory unit="KiB">131072</currentMemory>
Oct 09 16:22:39 compute-0 nova_compute[117331]:   <vcpu placement="static">1</vcpu>
Oct 09 16:22:39 compute-0 nova_compute[117331]:   <resource>
Oct 09 16:22:39 compute-0 nova_compute[117331]:     <partition>/machine</partition>
Oct 09 16:22:39 compute-0 nova_compute[117331]:   </resource>
Oct 09 16:22:39 compute-0 nova_compute[117331]:   <sysinfo type="smbios">
Oct 09 16:22:39 compute-0 nova_compute[117331]:     <system>
Oct 09 16:22:39 compute-0 nova_compute[117331]:       <entry name="manufacturer">RDO</entry>
Oct 09 16:22:39 compute-0 nova_compute[117331]:       <entry name="product">OpenStack Compute</entry>
Oct 09 16:22:39 compute-0 nova_compute[117331]:       <entry name="version">32.1.0-0.20251008143712.076498e.el10</entry>
Oct 09 16:22:39 compute-0 nova_compute[117331]:       <entry name="serial">4537359c-cc98-4f5e-87c4-2410f96f0e44</entry>
Oct 09 16:22:39 compute-0 nova_compute[117331]:       <entry name="uuid">4537359c-cc98-4f5e-87c4-2410f96f0e44</entry>
Oct 09 16:22:39 compute-0 nova_compute[117331]:       <entry name="family">Virtual Machine</entry>
Oct 09 16:22:39 compute-0 nova_compute[117331]:     </system>
Oct 09 16:22:39 compute-0 nova_compute[117331]:   </sysinfo>
Oct 09 16:22:39 compute-0 nova_compute[117331]:   <os>
Oct 09 16:22:39 compute-0 nova_compute[117331]:     <type arch="x86_64" machine="pc-q35-rhel9.6.0">hvm</type>
Oct 09 16:22:39 compute-0 nova_compute[117331]:     <boot dev="hd"/>
Oct 09 16:22:39 compute-0 nova_compute[117331]:     <smbios mode="sysinfo"/>
Oct 09 16:22:39 compute-0 nova_compute[117331]:   </os>
Oct 09 16:22:39 compute-0 nova_compute[117331]:   <features>
Oct 09 16:22:39 compute-0 nova_compute[117331]:     <acpi/>
Oct 09 16:22:39 compute-0 nova_compute[117331]:     <apic/>
Oct 09 16:22:39 compute-0 nova_compute[117331]:     <vmcoreinfo state="on"/>
Oct 09 16:22:39 compute-0 nova_compute[117331]:   </features>
Oct 09 16:22:39 compute-0 nova_compute[117331]:   <cpu mode="host-model" check="partial">
Oct 09 16:22:39 compute-0 nova_compute[117331]:     <topology sockets="1" dies="1" clusters="1" cores="1" threads="1"/>
Oct 09 16:22:39 compute-0 nova_compute[117331]:   </cpu>
Oct 09 16:22:39 compute-0 nova_compute[117331]:   <clock offset="utc">
Oct 09 16:22:39 compute-0 nova_compute[117331]:     <timer name="pit" tickpolicy="delay"/>
Oct 09 16:22:39 compute-0 nova_compute[117331]:     <timer name="rtc" tickpolicy="catchup"/>
Oct 09 16:22:39 compute-0 nova_compute[117331]:     <timer name="hpet" present="no"/>
Oct 09 16:22:39 compute-0 nova_compute[117331]:   </clock>
Oct 09 16:22:39 compute-0 nova_compute[117331]:   <on_poweroff>destroy</on_poweroff>
Oct 09 16:22:39 compute-0 nova_compute[117331]:   <on_reboot>restart</on_reboot>
Oct 09 16:22:39 compute-0 nova_compute[117331]:   <on_crash>destroy</on_crash>
Oct 09 16:22:39 compute-0 nova_compute[117331]:   <devices>
Oct 09 16:22:39 compute-0 nova_compute[117331]:     <emulator>/usr/libexec/qemu-kvm</emulator>
Oct 09 16:22:39 compute-0 nova_compute[117331]:     <disk type="file" device="disk">
Oct 09 16:22:39 compute-0 nova_compute[117331]:       <driver name="qemu" type="qcow2" cache="none"/>
Oct 09 16:22:39 compute-0 nova_compute[117331]:       <source file="/var/lib/nova/instances/4537359c-cc98-4f5e-87c4-2410f96f0e44/disk"/>
Oct 09 16:22:39 compute-0 nova_compute[117331]:       <target dev="vda" bus="virtio"/>
Oct 09 16:22:39 compute-0 nova_compute[117331]:       <address type="pci" domain="0x0000" bus="0x03" slot="0x00" function="0x0"/>
Oct 09 16:22:39 compute-0 nova_compute[117331]:     </disk>
Oct 09 16:22:39 compute-0 nova_compute[117331]:     <disk type="file" device="cdrom">
Oct 09 16:22:39 compute-0 nova_compute[117331]:       <driver name="qemu" type="raw" cache="none"/>
Oct 09 16:22:39 compute-0 nova_compute[117331]:       <source file="/var/lib/nova/instances/4537359c-cc98-4f5e-87c4-2410f96f0e44/disk.config"/>
Oct 09 16:22:39 compute-0 nova_compute[117331]:       <target dev="sda" bus="sata"/>
Oct 09 16:22:39 compute-0 nova_compute[117331]:       <readonly/>
Oct 09 16:22:39 compute-0 nova_compute[117331]:       <address type="drive" controller="0" bus="0" target="0" unit="0"/>
Oct 09 16:22:39 compute-0 nova_compute[117331]:     </disk>
Oct 09 16:22:39 compute-0 nova_compute[117331]:     <controller type="pci" index="0" model="pcie-root"/>
Oct 09 16:22:39 compute-0 nova_compute[117331]:     <controller type="pci" index="1" model="pcie-root-port">
Oct 09 16:22:39 compute-0 nova_compute[117331]:       <model name="pcie-root-port"/>
Oct 09 16:22:39 compute-0 nova_compute[117331]:       <target chassis="1" port="0x10"/>
Oct 09 16:22:39 compute-0 nova_compute[117331]:       <address type="pci" domain="0x0000" bus="0x00" slot="0x02" function="0x0" multifunction="on"/>
Oct 09 16:22:39 compute-0 nova_compute[117331]:     </controller>
Oct 09 16:22:39 compute-0 nova_compute[117331]:     <controller type="pci" index="2" model="pcie-root-port">
Oct 09 16:22:39 compute-0 nova_compute[117331]:       <model name="pcie-root-port"/>
Oct 09 16:22:39 compute-0 nova_compute[117331]:       <target chassis="2" port="0x11"/>
Oct 09 16:22:39 compute-0 nova_compute[117331]:       <address type="pci" domain="0x0000" bus="0x00" slot="0x02" function="0x1"/>
Oct 09 16:22:39 compute-0 nova_compute[117331]:     </controller>
Oct 09 16:22:39 compute-0 nova_compute[117331]:     <controller type="pci" index="3" model="pcie-root-port">
Oct 09 16:22:39 compute-0 nova_compute[117331]:       <model name="pcie-root-port"/>
Oct 09 16:22:39 compute-0 nova_compute[117331]:       <target chassis="3" port="0x12"/>
Oct 09 16:22:39 compute-0 nova_compute[117331]:       <address type="pci" domain="0x0000" bus="0x00" slot="0x02" function="0x2"/>
Oct 09 16:22:39 compute-0 nova_compute[117331]:     </controller>
Oct 09 16:22:39 compute-0 nova_compute[117331]:     <controller type="pci" index="4" model="pcie-root-port">
Oct 09 16:22:39 compute-0 nova_compute[117331]:       <model name="pcie-root-port"/>
Oct 09 16:22:39 compute-0 nova_compute[117331]:       <target chassis="4" port="0x13"/>
Oct 09 16:22:39 compute-0 nova_compute[117331]:       <address type="pci" domain="0x0000" bus="0x00" slot="0x02" function="0x3"/>
Oct 09 16:22:39 compute-0 nova_compute[117331]:     </controller>
Oct 09 16:22:39 compute-0 nova_compute[117331]:     <controller type="pci" index="5" model="pcie-root-port">
Oct 09 16:22:39 compute-0 nova_compute[117331]:       <model name="pcie-root-port"/>
Oct 09 16:22:39 compute-0 nova_compute[117331]:       <target chassis="5" port="0x14"/>
Oct 09 16:22:39 compute-0 nova_compute[117331]:       <address type="pci" domain="0x0000" bus="0x00" slot="0x02" function="0x4"/>
Oct 09 16:22:39 compute-0 nova_compute[117331]:     </controller>
Oct 09 16:22:39 compute-0 nova_compute[117331]:     <controller type="pci" index="6" model="pcie-root-port">
Oct 09 16:22:39 compute-0 nova_compute[117331]:       <model name="pcie-root-port"/>
Oct 09 16:22:39 compute-0 nova_compute[117331]:       <target chassis="6" port="0x15"/>
Oct 09 16:22:39 compute-0 nova_compute[117331]:       <address type="pci" domain="0x0000" bus="0x00" slot="0x02" function="0x5"/>
Oct 09 16:22:39 compute-0 nova_compute[117331]:     </controller>
Oct 09 16:22:39 compute-0 nova_compute[117331]:     <controller type="pci" index="7" model="pcie-root-port">
Oct 09 16:22:39 compute-0 nova_compute[117331]:       <model name="pcie-root-port"/>
Oct 09 16:22:39 compute-0 nova_compute[117331]:       <target chassis="7" port="0x16"/>
Oct 09 16:22:39 compute-0 nova_compute[117331]:       <address type="pci" domain="0x0000" bus="0x00" slot="0x02" function="0x6"/>
Oct 09 16:22:39 compute-0 nova_compute[117331]:     </controller>
Oct 09 16:22:39 compute-0 nova_compute[117331]:     <controller type="pci" index="8" model="pcie-root-port">
Oct 09 16:22:39 compute-0 nova_compute[117331]:       <model name="pcie-root-port"/>
Oct 09 16:22:39 compute-0 nova_compute[117331]:       <target chassis="8" port="0x17"/>
Oct 09 16:22:39 compute-0 nova_compute[117331]:       <address type="pci" domain="0x0000" bus="0x00" slot="0x02" function="0x7"/>
Oct 09 16:22:39 compute-0 nova_compute[117331]:     </controller>
Oct 09 16:22:39 compute-0 nova_compute[117331]:     <controller type="pci" index="9" model="pcie-root-port">
Oct 09 16:22:39 compute-0 nova_compute[117331]:       <model name="pcie-root-port"/>
Oct 09 16:22:39 compute-0 nova_compute[117331]:       <target chassis="9" port="0x18"/>
Oct 09 16:22:39 compute-0 nova_compute[117331]:       <address type="pci" domain="0x0000" bus="0x00" slot="0x03" function="0x0" multifunction="on"/>
Oct 09 16:22:39 compute-0 nova_compute[117331]:     </controller>
Oct 09 16:22:39 compute-0 nova_compute[117331]:     <controller type="pci" index="10" model="pcie-root-port">
Oct 09 16:22:39 compute-0 nova_compute[117331]:       <model name="pcie-root-port"/>
Oct 09 16:22:39 compute-0 nova_compute[117331]:       <target chassis="10" port="0x19"/>
Oct 09 16:22:39 compute-0 nova_compute[117331]:       <address type="pci" domain="0x0000" bus="0x00" slot="0x03" function="0x1"/>
Oct 09 16:22:39 compute-0 nova_compute[117331]:     </controller>
Oct 09 16:22:39 compute-0 nova_compute[117331]:     <controller type="pci" index="11" model="pcie-root-port">
Oct 09 16:22:39 compute-0 nova_compute[117331]:       <model name="pcie-root-port"/>
Oct 09 16:22:39 compute-0 nova_compute[117331]:       <target chassis="11" port="0x1a"/>
Oct 09 16:22:39 compute-0 nova_compute[117331]:       <address type="pci" domain="0x0000" bus="0x00" slot="0x03" function="0x2"/>
Oct 09 16:22:39 compute-0 nova_compute[117331]:     </controller>
Oct 09 16:22:39 compute-0 nova_compute[117331]:     <controller type="pci" index="12" model="pcie-root-port">
Oct 09 16:22:39 compute-0 nova_compute[117331]:       <model name="pcie-root-port"/>
Oct 09 16:22:39 compute-0 nova_compute[117331]:       <target chassis="12" port="0x1b"/>
Oct 09 16:22:39 compute-0 nova_compute[117331]:       <address type="pci" domain="0x0000" bus="0x00" slot="0x03" function="0x3"/>
Oct 09 16:22:39 compute-0 nova_compute[117331]:     </controller>
Oct 09 16:22:39 compute-0 nova_compute[117331]:     <controller type="pci" index="13" model="pcie-root-port">
Oct 09 16:22:39 compute-0 nova_compute[117331]:       <model name="pcie-root-port"/>
Oct 09 16:22:39 compute-0 nova_compute[117331]:       <target chassis="13" port="0x1c"/>
Oct 09 16:22:39 compute-0 nova_compute[117331]:       <address type="pci" domain="0x0000" bus="0x00" slot="0x03" function="0x4"/>
Oct 09 16:22:39 compute-0 nova_compute[117331]:     </controller>
Oct 09 16:22:39 compute-0 nova_compute[117331]:     <controller type="pci" index="14" model="pcie-root-port">
Oct 09 16:22:39 compute-0 nova_compute[117331]:       <model name="pcie-root-port"/>
Oct 09 16:22:39 compute-0 nova_compute[117331]:       <target chassis="14" port="0x1d"/>
Oct 09 16:22:39 compute-0 nova_compute[117331]:       <address type="pci" domain="0x0000" bus="0x00" slot="0x03" function="0x5"/>
Oct 09 16:22:39 compute-0 nova_compute[117331]:     </controller>
Oct 09 16:22:39 compute-0 nova_compute[117331]:     <controller type="pci" index="15" model="pcie-root-port">
Oct 09 16:22:39 compute-0 nova_compute[117331]:       <model name="pcie-root-port"/>
Oct 09 16:22:39 compute-0 nova_compute[117331]:       <target chassis="15" port="0x1e"/>
Oct 09 16:22:39 compute-0 nova_compute[117331]:       <address type="pci" domain="0x0000" bus="0x00" slot="0x03" function="0x6"/>
Oct 09 16:22:39 compute-0 nova_compute[117331]:     </controller>
Oct 09 16:22:39 compute-0 nova_compute[117331]:     <controller type="pci" index="16" model="pcie-root-port">
Oct 09 16:22:39 compute-0 nova_compute[117331]:       <model name="pcie-root-port"/>
Oct 09 16:22:39 compute-0 nova_compute[117331]:       <target chassis="16" port="0x1f"/>
Oct 09 16:22:39 compute-0 nova_compute[117331]:       <address type="pci" domain="0x0000" bus="0x00" slot="0x03" function="0x7"/>
Oct 09 16:22:39 compute-0 nova_compute[117331]:     </controller>
Oct 09 16:22:39 compute-0 nova_compute[117331]:     <controller type="pci" index="17" model="pcie-root-port">
Oct 09 16:22:39 compute-0 nova_compute[117331]:       <model name="pcie-root-port"/>
Oct 09 16:22:39 compute-0 nova_compute[117331]:       <target chassis="17" port="0x20"/>
Oct 09 16:22:39 compute-0 nova_compute[117331]:       <address type="pci" domain="0x0000" bus="0x00" slot="0x04" function="0x0" multifunction="on"/>
Oct 09 16:22:39 compute-0 nova_compute[117331]:     </controller>
Oct 09 16:22:39 compute-0 nova_compute[117331]:     <controller type="pci" index="18" model="pcie-root-port">
Oct 09 16:22:39 compute-0 nova_compute[117331]:       <model name="pcie-root-port"/>
Oct 09 16:22:39 compute-0 nova_compute[117331]:       <target chassis="18" port="0x21"/>
Oct 09 16:22:39 compute-0 nova_compute[117331]:       <address type="pci" domain="0x0000" bus="0x00" slot="0x04" function="0x1"/>
Oct 09 16:22:39 compute-0 nova_compute[117331]:     </controller>
Oct 09 16:22:39 compute-0 nova_compute[117331]:     <controller type="pci" index="19" model="pcie-root-port">
Oct 09 16:22:39 compute-0 nova_compute[117331]:       <model name="pcie-root-port"/>
Oct 09 16:22:39 compute-0 nova_compute[117331]:       <target chassis="19" port="0x22"/>
Oct 09 16:22:39 compute-0 nova_compute[117331]:       <address type="pci" domain="0x0000" bus="0x00" slot="0x04" function="0x2"/>
Oct 09 16:22:39 compute-0 nova_compute[117331]:     </controller>
Oct 09 16:22:39 compute-0 nova_compute[117331]:     <controller type="pci" index="20" model="pcie-root-port">
Oct 09 16:22:39 compute-0 nova_compute[117331]:       <model name="pcie-root-port"/>
Oct 09 16:22:39 compute-0 nova_compute[117331]:       <target chassis="20" port="0x23"/>
Oct 09 16:22:39 compute-0 nova_compute[117331]:       <address type="pci" domain="0x0000" bus="0x00" slot="0x04" function="0x3"/>
Oct 09 16:22:39 compute-0 nova_compute[117331]:     </controller>
Oct 09 16:22:39 compute-0 nova_compute[117331]:     <controller type="pci" index="21" model="pcie-root-port">
Oct 09 16:22:39 compute-0 nova_compute[117331]:       <model name="pcie-root-port"/>
Oct 09 16:22:39 compute-0 nova_compute[117331]:       <target chassis="21" port="0x24"/>
Oct 09 16:22:39 compute-0 nova_compute[117331]:       <address type="pci" domain="0x0000" bus="0x00" slot="0x04" function="0x4"/>
Oct 09 16:22:39 compute-0 nova_compute[117331]:     </controller>
Oct 09 16:22:39 compute-0 nova_compute[117331]:     <controller type="pci" index="22" model="pcie-root-port">
Oct 09 16:22:39 compute-0 nova_compute[117331]:       <model name="pcie-root-port"/>
Oct 09 16:22:39 compute-0 nova_compute[117331]:       <target chassis="22" port="0x25"/>
Oct 09 16:22:39 compute-0 nova_compute[117331]:       <address type="pci" domain="0x0000" bus="0x00" slot="0x04" function="0x5"/>
Oct 09 16:22:39 compute-0 nova_compute[117331]:     </controller>
Oct 09 16:22:39 compute-0 nova_compute[117331]:     <controller type="pci" index="23" model="pcie-root-port">
Oct 09 16:22:39 compute-0 nova_compute[117331]:       <model name="pcie-root-port"/>
Oct 09 16:22:39 compute-0 nova_compute[117331]:       <target chassis="23" port="0x26"/>
Oct 09 16:22:39 compute-0 nova_compute[117331]:       <address type="pci" domain="0x0000" bus="0x00" slot="0x04" function="0x6"/>
Oct 09 16:22:39 compute-0 nova_compute[117331]:     </controller>
Oct 09 16:22:39 compute-0 nova_compute[117331]:     <controller type="pci" index="24" model="pcie-root-port">
Oct 09 16:22:39 compute-0 nova_compute[117331]:       <model name="pcie-root-port"/>
Oct 09 16:22:39 compute-0 nova_compute[117331]:       <target chassis="24" port="0x27"/>
Oct 09 16:22:39 compute-0 nova_compute[117331]:       <address type="pci" domain="0x0000" bus="0x00" slot="0x04" function="0x7"/>
Oct 09 16:22:39 compute-0 nova_compute[117331]:     </controller>
Oct 09 16:22:39 compute-0 nova_compute[117331]:     <controller type="pci" index="25" model="pcie-root-port">
Oct 09 16:22:39 compute-0 nova_compute[117331]:       <model name="pcie-root-port"/>
Oct 09 16:22:39 compute-0 nova_compute[117331]:       <target chassis="25" port="0x28"/>
Oct 09 16:22:39 compute-0 nova_compute[117331]:       <address type="pci" domain="0x0000" bus="0x00" slot="0x05" function="0x0"/>
Oct 09 16:22:39 compute-0 nova_compute[117331]:     </controller>
Oct 09 16:22:39 compute-0 nova_compute[117331]:     <controller type="pci" index="26" model="pcie-to-pci-bridge">
Oct 09 16:22:39 compute-0 nova_compute[117331]:       <model name="pcie-pci-bridge"/>
Oct 09 16:22:39 compute-0 nova_compute[117331]:       <address type="pci" domain="0x0000" bus="0x01" slot="0x00" function="0x0"/>
Oct 09 16:22:39 compute-0 nova_compute[117331]:     </controller>
Oct 09 16:22:39 compute-0 nova_compute[117331]:     <controller type="usb" index="0" model="piix3-uhci">
Oct 09 16:22:39 compute-0 nova_compute[117331]:       <address type="pci" domain="0x0000" bus="0x1a" slot="0x01" function="0x0"/>
Oct 09 16:22:39 compute-0 nova_compute[117331]:     </controller>
Oct 09 16:22:39 compute-0 nova_compute[117331]:     <controller type="sata" index="0">
Oct 09 16:22:39 compute-0 nova_compute[117331]:       <address type="pci" domain="0x0000" bus="0x00" slot="0x1f" function="0x2"/>
Oct 09 16:22:39 compute-0 nova_compute[117331]:     </controller>
Oct 09 16:22:39 compute-0 nova_compute[117331]:     <interface type="ethernet"><mac address="fa:16:3e:cc:3c:4d"/><model type="virtio"/><driver name="vhost" rx_queue_size="512"/><mtu size="1442"/><target dev="tapab5bf45b-d6"/><address type="pci" domain="0x0000" bus="0x02" slot="0x00" function="0x0"/>
Oct 09 16:22:39 compute-0 nova_compute[117331]:     </interface><serial type="pty">
Oct 09 16:22:39 compute-0 nova_compute[117331]:       <log file="/var/lib/nova/instances/4537359c-cc98-4f5e-87c4-2410f96f0e44/console.log" append="off"/>
Oct 09 16:22:39 compute-0 nova_compute[117331]:       <target type="isa-serial" port="0">
Oct 09 16:22:39 compute-0 nova_compute[117331]:         <model name="isa-serial"/>
Oct 09 16:22:39 compute-0 nova_compute[117331]:       </target>
Oct 09 16:22:39 compute-0 nova_compute[117331]:     </serial>
Oct 09 16:22:39 compute-0 nova_compute[117331]:     <console type="pty">
Oct 09 16:22:39 compute-0 nova_compute[117331]:       <log file="/var/lib/nova/instances/4537359c-cc98-4f5e-87c4-2410f96f0e44/console.log" append="off"/>
Oct 09 16:22:39 compute-0 nova_compute[117331]:       <target type="serial" port="0"/>
Oct 09 16:22:39 compute-0 nova_compute[117331]:     </console>
Oct 09 16:22:39 compute-0 nova_compute[117331]:     <input type="tablet" bus="usb">
Oct 09 16:22:39 compute-0 nova_compute[117331]:       <address type="usb" bus="0" port="1"/>
Oct 09 16:22:39 compute-0 nova_compute[117331]:     </input>
Oct 09 16:22:39 compute-0 nova_compute[117331]:     <input type="mouse" bus="ps2"/>
Oct 09 16:22:39 compute-0 nova_compute[117331]:     <graphics type="vnc" port="-1" autoport="yes" listen="::">
Oct 09 16:22:39 compute-0 nova_compute[117331]:       <listen type="address" address="::"/>
Oct 09 16:22:39 compute-0 nova_compute[117331]:     </graphics>
Oct 09 16:22:39 compute-0 nova_compute[117331]:     <video>
Oct 09 16:22:39 compute-0 nova_compute[117331]:       <model type="virtio" heads="1" primary="yes"/>
Oct 09 16:22:39 compute-0 nova_compute[117331]:       <address type="pci" domain="0x0000" bus="0x00" slot="0x01" function="0x0"/>
Oct 09 16:22:39 compute-0 nova_compute[117331]:     </video>
Oct 09 16:22:39 compute-0 nova_compute[117331]:     <memballoon model="virtio" autodeflate="on" freePageReporting="on">
Oct 09 16:22:39 compute-0 nova_compute[117331]:       <stats period="10"/>
Oct 09 16:22:39 compute-0 nova_compute[117331]:       <address type="pci" domain="0x0000" bus="0x04" slot="0x00" function="0x0"/>
Oct 09 16:22:39 compute-0 nova_compute[117331]:     </memballoon>
Oct 09 16:22:39 compute-0 nova_compute[117331]:     <rng model="virtio">
Oct 09 16:22:39 compute-0 nova_compute[117331]:       <backend model="random">/dev/urandom</backend>
Oct 09 16:22:39 compute-0 nova_compute[117331]:       <address type="pci" domain="0x0000" bus="0x05" slot="0x00" function="0x0"/>
Oct 09 16:22:39 compute-0 nova_compute[117331]:     </rng>
Oct 09 16:22:39 compute-0 nova_compute[117331]:   </devices>
Oct 09 16:22:39 compute-0 nova_compute[117331]:   <seclabel type="dynamic" model="selinux" relabel="yes"/>
Oct 09 16:22:39 compute-0 nova_compute[117331]: </domain>
Oct 09 16:22:39 compute-0 nova_compute[117331]:  _remove_cpu_shared_set_xml /usr/lib/python3.12/site-packages/nova/virt/libvirt/migration.py:250
Oct 09 16:22:39 compute-0 nova_compute[117331]: 2025-10-09 16:22:39.253 2 DEBUG nova.virt.libvirt.migration [None req-5c240840-8346-4b6c-b1c2-7db22559751c 005258eefa5345409efe13f6a1de22ab c076c42271004e7aa86e84416e33f826 - - default default] _update_pci_xml output xml=<domain type="kvm">
Oct 09 16:22:39 compute-0 nova_compute[117331]:   <name>instance-0000000c</name>
Oct 09 16:22:39 compute-0 nova_compute[117331]:   <uuid>4537359c-cc98-4f5e-87c4-2410f96f0e44</uuid>
Oct 09 16:22:39 compute-0 nova_compute[117331]:   <metadata>
Oct 09 16:22:39 compute-0 nova_compute[117331]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Oct 09 16:22:39 compute-0 nova_compute[117331]:       <nova:package version="32.1.0-0.20251008143712.076498e.el10"/>
Oct 09 16:22:39 compute-0 nova_compute[117331]:       <nova:name>tempest-TestExecuteHostMaintenanceStrategy-server-1250599539</nova:name>
Oct 09 16:22:39 compute-0 nova_compute[117331]:       <nova:creationTime>2025-10-09 16:21:48</nova:creationTime>
Oct 09 16:22:39 compute-0 nova_compute[117331]:       <nova:flavor name="m1.nano" id="5aeb5bf9-70f3-4ab6-b418-2c3319a7d2b3">
Oct 09 16:22:39 compute-0 nova_compute[117331]:         <nova:memory>128</nova:memory>
Oct 09 16:22:39 compute-0 nova_compute[117331]:         <nova:disk>1</nova:disk>
Oct 09 16:22:39 compute-0 nova_compute[117331]:         <nova:swap>0</nova:swap>
Oct 09 16:22:39 compute-0 nova_compute[117331]:         <nova:ephemeral>0</nova:ephemeral>
Oct 09 16:22:39 compute-0 nova_compute[117331]:         <nova:vcpus>1</nova:vcpus>
Oct 09 16:22:39 compute-0 nova_compute[117331]:         <nova:extraSpecs>
Oct 09 16:22:39 compute-0 nova_compute[117331]:           <nova:extraSpec name="hw_rng:allowed">True</nova:extraSpec>
Oct 09 16:22:39 compute-0 nova_compute[117331]:         </nova:extraSpecs>
Oct 09 16:22:39 compute-0 nova_compute[117331]:       </nova:flavor>
Oct 09 16:22:39 compute-0 nova_compute[117331]:       <nova:image uuid="b7d6e0af-25e4-4227-9dc6-43143898ceee">
Oct 09 16:22:39 compute-0 nova_compute[117331]:         <nova:containerFormat>bare</nova:containerFormat>
Oct 09 16:22:39 compute-0 nova_compute[117331]:         <nova:diskFormat>qcow2</nova:diskFormat>
Oct 09 16:22:39 compute-0 nova_compute[117331]:         <nova:minDisk>1</nova:minDisk>
Oct 09 16:22:39 compute-0 nova_compute[117331]:         <nova:minRam>0</nova:minRam>
Oct 09 16:22:39 compute-0 nova_compute[117331]:         <nova:properties>
Oct 09 16:22:39 compute-0 nova_compute[117331]:           <nova:property name="hw_rng_model">virtio</nova:property>
Oct 09 16:22:39 compute-0 nova_compute[117331]:         </nova:properties>
Oct 09 16:22:39 compute-0 nova_compute[117331]:       </nova:image>
Oct 09 16:22:39 compute-0 nova_compute[117331]:       <nova:owner>
Oct 09 16:22:39 compute-0 nova_compute[117331]:         <nova:user uuid="2ed3ac10329446d8a5aae566951cae1e">tempest-TestExecuteHostMaintenanceStrategy-51576386-project-admin</nova:user>
Oct 09 16:22:39 compute-0 nova_compute[117331]:         <nova:project uuid="c47758289b4c448c95f927a91d89e09f">tempest-TestExecuteHostMaintenanceStrategy-51576386</nova:project>
Oct 09 16:22:39 compute-0 nova_compute[117331]:       </nova:owner>
Oct 09 16:22:39 compute-0 nova_compute[117331]:       <nova:root type="image" uuid="b7d6e0af-25e4-4227-9dc6-43143898ceee"/>
Oct 09 16:22:39 compute-0 nova_compute[117331]:       <nova:ports>
Oct 09 16:22:39 compute-0 nova_compute[117331]:         <nova:port uuid="ab5bf45b-d6f4-4ecf-95d5-b00418cb03fa">
Oct 09 16:22:39 compute-0 nova_compute[117331]:           <nova:ip type="fixed" address="10.100.0.14" ipVersion="4"/>
Oct 09 16:22:39 compute-0 nova_compute[117331]:         </nova:port>
Oct 09 16:22:39 compute-0 nova_compute[117331]:       </nova:ports>
Oct 09 16:22:39 compute-0 nova_compute[117331]:     </nova:instance>
Oct 09 16:22:39 compute-0 nova_compute[117331]:   </metadata>
Oct 09 16:22:39 compute-0 nova_compute[117331]:   <memory unit="KiB">131072</memory>
Oct 09 16:22:39 compute-0 nova_compute[117331]:   <currentMemory unit="KiB">131072</currentMemory>
Oct 09 16:22:39 compute-0 nova_compute[117331]:   <vcpu placement="static">1</vcpu>
Oct 09 16:22:39 compute-0 nova_compute[117331]:   <resource>
Oct 09 16:22:39 compute-0 nova_compute[117331]:     <partition>/machine</partition>
Oct 09 16:22:39 compute-0 nova_compute[117331]:   </resource>
Oct 09 16:22:39 compute-0 nova_compute[117331]:   <sysinfo type="smbios">
Oct 09 16:22:39 compute-0 nova_compute[117331]:     <system>
Oct 09 16:22:39 compute-0 nova_compute[117331]:       <entry name="manufacturer">RDO</entry>
Oct 09 16:22:39 compute-0 nova_compute[117331]:       <entry name="product">OpenStack Compute</entry>
Oct 09 16:22:39 compute-0 nova_compute[117331]:       <entry name="version">32.1.0-0.20251008143712.076498e.el10</entry>
Oct 09 16:22:39 compute-0 nova_compute[117331]:       <entry name="serial">4537359c-cc98-4f5e-87c4-2410f96f0e44</entry>
Oct 09 16:22:39 compute-0 nova_compute[117331]:       <entry name="uuid">4537359c-cc98-4f5e-87c4-2410f96f0e44</entry>
Oct 09 16:22:39 compute-0 nova_compute[117331]:       <entry name="family">Virtual Machine</entry>
Oct 09 16:22:39 compute-0 nova_compute[117331]:     </system>
Oct 09 16:22:39 compute-0 nova_compute[117331]:   </sysinfo>
Oct 09 16:22:39 compute-0 nova_compute[117331]:   <os>
Oct 09 16:22:39 compute-0 nova_compute[117331]:     <type arch="x86_64" machine="pc-q35-rhel9.6.0">hvm</type>
Oct 09 16:22:39 compute-0 nova_compute[117331]:     <boot dev="hd"/>
Oct 09 16:22:39 compute-0 nova_compute[117331]:     <smbios mode="sysinfo"/>
Oct 09 16:22:39 compute-0 nova_compute[117331]:   </os>
Oct 09 16:22:39 compute-0 nova_compute[117331]:   <features>
Oct 09 16:22:39 compute-0 nova_compute[117331]:     <acpi/>
Oct 09 16:22:39 compute-0 nova_compute[117331]:     <apic/>
Oct 09 16:22:39 compute-0 nova_compute[117331]:     <vmcoreinfo state="on"/>
Oct 09 16:22:39 compute-0 nova_compute[117331]:   </features>
Oct 09 16:22:39 compute-0 nova_compute[117331]:   <cpu mode="host-model" check="partial">
Oct 09 16:22:39 compute-0 nova_compute[117331]:     <topology sockets="1" dies="1" clusters="1" cores="1" threads="1"/>
Oct 09 16:22:39 compute-0 nova_compute[117331]:   </cpu>
Oct 09 16:22:39 compute-0 nova_compute[117331]:   <clock offset="utc">
Oct 09 16:22:39 compute-0 nova_compute[117331]:     <timer name="pit" tickpolicy="delay"/>
Oct 09 16:22:39 compute-0 nova_compute[117331]:     <timer name="rtc" tickpolicy="catchup"/>
Oct 09 16:22:39 compute-0 nova_compute[117331]:     <timer name="hpet" present="no"/>
Oct 09 16:22:39 compute-0 nova_compute[117331]:   </clock>
Oct 09 16:22:39 compute-0 nova_compute[117331]:   <on_poweroff>destroy</on_poweroff>
Oct 09 16:22:39 compute-0 nova_compute[117331]:   <on_reboot>restart</on_reboot>
Oct 09 16:22:39 compute-0 nova_compute[117331]:   <on_crash>destroy</on_crash>
Oct 09 16:22:39 compute-0 nova_compute[117331]:   <devices>
Oct 09 16:22:39 compute-0 nova_compute[117331]:     <emulator>/usr/libexec/qemu-kvm</emulator>
Oct 09 16:22:39 compute-0 nova_compute[117331]:     <disk type="file" device="disk">
Oct 09 16:22:39 compute-0 nova_compute[117331]:       <driver name="qemu" type="qcow2" cache="none"/>
Oct 09 16:22:39 compute-0 nova_compute[117331]:       <source file="/var/lib/nova/instances/4537359c-cc98-4f5e-87c4-2410f96f0e44/disk"/>
Oct 09 16:22:39 compute-0 nova_compute[117331]:       <target dev="vda" bus="virtio"/>
Oct 09 16:22:39 compute-0 nova_compute[117331]:       <address type="pci" domain="0x0000" bus="0x03" slot="0x00" function="0x0"/>
Oct 09 16:22:39 compute-0 nova_compute[117331]:     </disk>
Oct 09 16:22:39 compute-0 nova_compute[117331]:     <disk type="file" device="cdrom">
Oct 09 16:22:39 compute-0 nova_compute[117331]:       <driver name="qemu" type="raw" cache="none"/>
Oct 09 16:22:39 compute-0 nova_compute[117331]:       <source file="/var/lib/nova/instances/4537359c-cc98-4f5e-87c4-2410f96f0e44/disk.config"/>
Oct 09 16:22:39 compute-0 nova_compute[117331]:       <target dev="sda" bus="sata"/>
Oct 09 16:22:39 compute-0 nova_compute[117331]:       <readonly/>
Oct 09 16:22:39 compute-0 nova_compute[117331]:       <address type="drive" controller="0" bus="0" target="0" unit="0"/>
Oct 09 16:22:39 compute-0 nova_compute[117331]:     </disk>
Oct 09 16:22:39 compute-0 nova_compute[117331]:     <controller type="pci" index="0" model="pcie-root"/>
Oct 09 16:22:39 compute-0 nova_compute[117331]:     <controller type="pci" index="1" model="pcie-root-port">
Oct 09 16:22:39 compute-0 nova_compute[117331]:       <model name="pcie-root-port"/>
Oct 09 16:22:39 compute-0 nova_compute[117331]:       <target chassis="1" port="0x10"/>
Oct 09 16:22:39 compute-0 nova_compute[117331]:       <address type="pci" domain="0x0000" bus="0x00" slot="0x02" function="0x0" multifunction="on"/>
Oct 09 16:22:39 compute-0 nova_compute[117331]:     </controller>
Oct 09 16:22:39 compute-0 nova_compute[117331]:     <controller type="pci" index="2" model="pcie-root-port">
Oct 09 16:22:39 compute-0 nova_compute[117331]:       <model name="pcie-root-port"/>
Oct 09 16:22:39 compute-0 nova_compute[117331]:       <target chassis="2" port="0x11"/>
Oct 09 16:22:39 compute-0 nova_compute[117331]:       <address type="pci" domain="0x0000" bus="0x00" slot="0x02" function="0x1"/>
Oct 09 16:22:39 compute-0 nova_compute[117331]:     </controller>
Oct 09 16:22:39 compute-0 nova_compute[117331]:     <controller type="pci" index="3" model="pcie-root-port">
Oct 09 16:22:39 compute-0 nova_compute[117331]:       <model name="pcie-root-port"/>
Oct 09 16:22:39 compute-0 nova_compute[117331]:       <target chassis="3" port="0x12"/>
Oct 09 16:22:39 compute-0 nova_compute[117331]:       <address type="pci" domain="0x0000" bus="0x00" slot="0x02" function="0x2"/>
Oct 09 16:22:39 compute-0 nova_compute[117331]:     </controller>
Oct 09 16:22:39 compute-0 nova_compute[117331]:     <controller type="pci" index="4" model="pcie-root-port">
Oct 09 16:22:39 compute-0 nova_compute[117331]:       <model name="pcie-root-port"/>
Oct 09 16:22:39 compute-0 nova_compute[117331]:       <target chassis="4" port="0x13"/>
Oct 09 16:22:39 compute-0 nova_compute[117331]:       <address type="pci" domain="0x0000" bus="0x00" slot="0x02" function="0x3"/>
Oct 09 16:22:39 compute-0 nova_compute[117331]:     </controller>
Oct 09 16:22:39 compute-0 nova_compute[117331]:     <controller type="pci" index="5" model="pcie-root-port">
Oct 09 16:22:39 compute-0 nova_compute[117331]:       <model name="pcie-root-port"/>
Oct 09 16:22:39 compute-0 nova_compute[117331]:       <target chassis="5" port="0x14"/>
Oct 09 16:22:39 compute-0 nova_compute[117331]:       <address type="pci" domain="0x0000" bus="0x00" slot="0x02" function="0x4"/>
Oct 09 16:22:39 compute-0 nova_compute[117331]:     </controller>
Oct 09 16:22:39 compute-0 nova_compute[117331]:     <controller type="pci" index="6" model="pcie-root-port">
Oct 09 16:22:39 compute-0 nova_compute[117331]:       <model name="pcie-root-port"/>
Oct 09 16:22:39 compute-0 nova_compute[117331]:       <target chassis="6" port="0x15"/>
Oct 09 16:22:39 compute-0 nova_compute[117331]:       <address type="pci" domain="0x0000" bus="0x00" slot="0x02" function="0x5"/>
Oct 09 16:22:39 compute-0 nova_compute[117331]:     </controller>
Oct 09 16:22:39 compute-0 nova_compute[117331]:     <controller type="pci" index="7" model="pcie-root-port">
Oct 09 16:22:39 compute-0 nova_compute[117331]:       <model name="pcie-root-port"/>
Oct 09 16:22:39 compute-0 nova_compute[117331]:       <target chassis="7" port="0x16"/>
Oct 09 16:22:39 compute-0 nova_compute[117331]:       <address type="pci" domain="0x0000" bus="0x00" slot="0x02" function="0x6"/>
Oct 09 16:22:39 compute-0 nova_compute[117331]:     </controller>
Oct 09 16:22:39 compute-0 nova_compute[117331]:     <controller type="pci" index="8" model="pcie-root-port">
Oct 09 16:22:39 compute-0 nova_compute[117331]:       <model name="pcie-root-port"/>
Oct 09 16:22:39 compute-0 nova_compute[117331]:       <target chassis="8" port="0x17"/>
Oct 09 16:22:39 compute-0 nova_compute[117331]:       <address type="pci" domain="0x0000" bus="0x00" slot="0x02" function="0x7"/>
Oct 09 16:22:39 compute-0 nova_compute[117331]:     </controller>
Oct 09 16:22:39 compute-0 nova_compute[117331]:     <controller type="pci" index="9" model="pcie-root-port">
Oct 09 16:22:39 compute-0 nova_compute[117331]:       <model name="pcie-root-port"/>
Oct 09 16:22:39 compute-0 nova_compute[117331]:       <target chassis="9" port="0x18"/>
Oct 09 16:22:39 compute-0 nova_compute[117331]:       <address type="pci" domain="0x0000" bus="0x00" slot="0x03" function="0x0" multifunction="on"/>
Oct 09 16:22:39 compute-0 nova_compute[117331]:     </controller>
Oct 09 16:22:39 compute-0 nova_compute[117331]:     <controller type="pci" index="10" model="pcie-root-port">
Oct 09 16:22:39 compute-0 nova_compute[117331]:       <model name="pcie-root-port"/>
Oct 09 16:22:39 compute-0 nova_compute[117331]:       <target chassis="10" port="0x19"/>
Oct 09 16:22:39 compute-0 nova_compute[117331]:       <address type="pci" domain="0x0000" bus="0x00" slot="0x03" function="0x1"/>
Oct 09 16:22:39 compute-0 nova_compute[117331]:     </controller>
Oct 09 16:22:39 compute-0 nova_compute[117331]:     <controller type="pci" index="11" model="pcie-root-port">
Oct 09 16:22:39 compute-0 nova_compute[117331]:       <model name="pcie-root-port"/>
Oct 09 16:22:39 compute-0 nova_compute[117331]:       <target chassis="11" port="0x1a"/>
Oct 09 16:22:39 compute-0 nova_compute[117331]:       <address type="pci" domain="0x0000" bus="0x00" slot="0x03" function="0x2"/>
Oct 09 16:22:39 compute-0 nova_compute[117331]:     </controller>
Oct 09 16:22:39 compute-0 nova_compute[117331]:     <controller type="pci" index="12" model="pcie-root-port">
Oct 09 16:22:39 compute-0 nova_compute[117331]:       <model name="pcie-root-port"/>
Oct 09 16:22:39 compute-0 nova_compute[117331]:       <target chassis="12" port="0x1b"/>
Oct 09 16:22:39 compute-0 nova_compute[117331]:       <address type="pci" domain="0x0000" bus="0x00" slot="0x03" function="0x3"/>
Oct 09 16:22:39 compute-0 nova_compute[117331]:     </controller>
Oct 09 16:22:39 compute-0 nova_compute[117331]:     <controller type="pci" index="13" model="pcie-root-port">
Oct 09 16:22:39 compute-0 nova_compute[117331]:       <model name="pcie-root-port"/>
Oct 09 16:22:39 compute-0 nova_compute[117331]:       <target chassis="13" port="0x1c"/>
Oct 09 16:22:39 compute-0 nova_compute[117331]:       <address type="pci" domain="0x0000" bus="0x00" slot="0x03" function="0x4"/>
Oct 09 16:22:39 compute-0 nova_compute[117331]:     </controller>
Oct 09 16:22:39 compute-0 nova_compute[117331]:     <controller type="pci" index="14" model="pcie-root-port">
Oct 09 16:22:39 compute-0 nova_compute[117331]:       <model name="pcie-root-port"/>
Oct 09 16:22:39 compute-0 nova_compute[117331]:       <target chassis="14" port="0x1d"/>
Oct 09 16:22:39 compute-0 nova_compute[117331]:       <address type="pci" domain="0x0000" bus="0x00" slot="0x03" function="0x5"/>
Oct 09 16:22:39 compute-0 nova_compute[117331]:     </controller>
Oct 09 16:22:39 compute-0 nova_compute[117331]:     <controller type="pci" index="15" model="pcie-root-port">
Oct 09 16:22:39 compute-0 nova_compute[117331]:       <model name="pcie-root-port"/>
Oct 09 16:22:39 compute-0 nova_compute[117331]:       <target chassis="15" port="0x1e"/>
Oct 09 16:22:39 compute-0 nova_compute[117331]:       <address type="pci" domain="0x0000" bus="0x00" slot="0x03" function="0x6"/>
Oct 09 16:22:39 compute-0 nova_compute[117331]:     </controller>
Oct 09 16:22:39 compute-0 nova_compute[117331]:     <controller type="pci" index="16" model="pcie-root-port">
Oct 09 16:22:39 compute-0 nova_compute[117331]:       <model name="pcie-root-port"/>
Oct 09 16:22:39 compute-0 nova_compute[117331]:       <target chassis="16" port="0x1f"/>
Oct 09 16:22:39 compute-0 nova_compute[117331]:       <address type="pci" domain="0x0000" bus="0x00" slot="0x03" function="0x7"/>
Oct 09 16:22:39 compute-0 nova_compute[117331]:     </controller>
Oct 09 16:22:39 compute-0 nova_compute[117331]:     <controller type="pci" index="17" model="pcie-root-port">
Oct 09 16:22:39 compute-0 nova_compute[117331]:       <model name="pcie-root-port"/>
Oct 09 16:22:39 compute-0 nova_compute[117331]:       <target chassis="17" port="0x20"/>
Oct 09 16:22:39 compute-0 nova_compute[117331]:       <address type="pci" domain="0x0000" bus="0x00" slot="0x04" function="0x0" multifunction="on"/>
Oct 09 16:22:39 compute-0 nova_compute[117331]:     </controller>
Oct 09 16:22:39 compute-0 nova_compute[117331]:     <controller type="pci" index="18" model="pcie-root-port">
Oct 09 16:22:39 compute-0 nova_compute[117331]:       <model name="pcie-root-port"/>
Oct 09 16:22:39 compute-0 nova_compute[117331]:       <target chassis="18" port="0x21"/>
Oct 09 16:22:39 compute-0 nova_compute[117331]:       <address type="pci" domain="0x0000" bus="0x00" slot="0x04" function="0x1"/>
Oct 09 16:22:39 compute-0 nova_compute[117331]:     </controller>
Oct 09 16:22:39 compute-0 nova_compute[117331]:     <controller type="pci" index="19" model="pcie-root-port">
Oct 09 16:22:39 compute-0 nova_compute[117331]:       <model name="pcie-root-port"/>
Oct 09 16:22:39 compute-0 nova_compute[117331]:       <target chassis="19" port="0x22"/>
Oct 09 16:22:39 compute-0 nova_compute[117331]:       <address type="pci" domain="0x0000" bus="0x00" slot="0x04" function="0x2"/>
Oct 09 16:22:39 compute-0 nova_compute[117331]:     </controller>
Oct 09 16:22:39 compute-0 nova_compute[117331]:     <controller type="pci" index="20" model="pcie-root-port">
Oct 09 16:22:39 compute-0 nova_compute[117331]:       <model name="pcie-root-port"/>
Oct 09 16:22:39 compute-0 nova_compute[117331]:       <target chassis="20" port="0x23"/>
Oct 09 16:22:39 compute-0 nova_compute[117331]:       <address type="pci" domain="0x0000" bus="0x00" slot="0x04" function="0x3"/>
Oct 09 16:22:39 compute-0 nova_compute[117331]:     </controller>
Oct 09 16:22:39 compute-0 nova_compute[117331]:     <controller type="pci" index="21" model="pcie-root-port">
Oct 09 16:22:39 compute-0 nova_compute[117331]:       <model name="pcie-root-port"/>
Oct 09 16:22:39 compute-0 nova_compute[117331]:       <target chassis="21" port="0x24"/>
Oct 09 16:22:39 compute-0 nova_compute[117331]:       <address type="pci" domain="0x0000" bus="0x00" slot="0x04" function="0x4"/>
Oct 09 16:22:39 compute-0 nova_compute[117331]:     </controller>
Oct 09 16:22:39 compute-0 nova_compute[117331]:     <controller type="pci" index="22" model="pcie-root-port">
Oct 09 16:22:39 compute-0 nova_compute[117331]:       <model name="pcie-root-port"/>
Oct 09 16:22:39 compute-0 nova_compute[117331]:       <target chassis="22" port="0x25"/>
Oct 09 16:22:39 compute-0 nova_compute[117331]:       <address type="pci" domain="0x0000" bus="0x00" slot="0x04" function="0x5"/>
Oct 09 16:22:39 compute-0 nova_compute[117331]:     </controller>
Oct 09 16:22:39 compute-0 nova_compute[117331]:     <controller type="pci" index="23" model="pcie-root-port">
Oct 09 16:22:39 compute-0 nova_compute[117331]:       <model name="pcie-root-port"/>
Oct 09 16:22:39 compute-0 nova_compute[117331]:       <target chassis="23" port="0x26"/>
Oct 09 16:22:39 compute-0 nova_compute[117331]:       <address type="pci" domain="0x0000" bus="0x00" slot="0x04" function="0x6"/>
Oct 09 16:22:39 compute-0 nova_compute[117331]:     </controller>
Oct 09 16:22:39 compute-0 nova_compute[117331]:     <controller type="pci" index="24" model="pcie-root-port">
Oct 09 16:22:39 compute-0 nova_compute[117331]:       <model name="pcie-root-port"/>
Oct 09 16:22:39 compute-0 nova_compute[117331]:       <target chassis="24" port="0x27"/>
Oct 09 16:22:39 compute-0 nova_compute[117331]:       <address type="pci" domain="0x0000" bus="0x00" slot="0x04" function="0x7"/>
Oct 09 16:22:39 compute-0 nova_compute[117331]:     </controller>
Oct 09 16:22:39 compute-0 nova_compute[117331]:     <controller type="pci" index="25" model="pcie-root-port">
Oct 09 16:22:39 compute-0 nova_compute[117331]:       <model name="pcie-root-port"/>
Oct 09 16:22:39 compute-0 nova_compute[117331]:       <target chassis="25" port="0x28"/>
Oct 09 16:22:39 compute-0 nova_compute[117331]:       <address type="pci" domain="0x0000" bus="0x00" slot="0x05" function="0x0"/>
Oct 09 16:22:39 compute-0 nova_compute[117331]:     </controller>
Oct 09 16:22:39 compute-0 nova_compute[117331]:     <controller type="pci" index="26" model="pcie-to-pci-bridge">
Oct 09 16:22:39 compute-0 nova_compute[117331]:       <model name="pcie-pci-bridge"/>
Oct 09 16:22:39 compute-0 nova_compute[117331]:       <address type="pci" domain="0x0000" bus="0x01" slot="0x00" function="0x0"/>
Oct 09 16:22:39 compute-0 nova_compute[117331]:     </controller>
Oct 09 16:22:39 compute-0 nova_compute[117331]:     <controller type="usb" index="0" model="piix3-uhci">
Oct 09 16:22:39 compute-0 nova_compute[117331]:       <address type="pci" domain="0x0000" bus="0x1a" slot="0x01" function="0x0"/>
Oct 09 16:22:39 compute-0 nova_compute[117331]:     </controller>
Oct 09 16:22:39 compute-0 nova_compute[117331]:     <controller type="sata" index="0">
Oct 09 16:22:39 compute-0 nova_compute[117331]:       <address type="pci" domain="0x0000" bus="0x00" slot="0x1f" function="0x2"/>
Oct 09 16:22:39 compute-0 nova_compute[117331]:     </controller>
Oct 09 16:22:39 compute-0 nova_compute[117331]:     <interface type="ethernet"><mac address="fa:16:3e:cc:3c:4d"/><model type="virtio"/><driver name="vhost" rx_queue_size="512"/><mtu size="1442"/><target dev="tapab5bf45b-d6"/><address type="pci" domain="0x0000" bus="0x02" slot="0x00" function="0x0"/>
Oct 09 16:22:39 compute-0 nova_compute[117331]:     </interface><serial type="pty">
Oct 09 16:22:39 compute-0 nova_compute[117331]:       <log file="/var/lib/nova/instances/4537359c-cc98-4f5e-87c4-2410f96f0e44/console.log" append="off"/>
Oct 09 16:22:39 compute-0 nova_compute[117331]:       <target type="isa-serial" port="0">
Oct 09 16:22:39 compute-0 nova_compute[117331]:         <model name="isa-serial"/>
Oct 09 16:22:39 compute-0 nova_compute[117331]:       </target>
Oct 09 16:22:39 compute-0 nova_compute[117331]:     </serial>
Oct 09 16:22:39 compute-0 nova_compute[117331]:     <console type="pty">
Oct 09 16:22:39 compute-0 nova_compute[117331]:       <log file="/var/lib/nova/instances/4537359c-cc98-4f5e-87c4-2410f96f0e44/console.log" append="off"/>
Oct 09 16:22:39 compute-0 nova_compute[117331]:       <target type="serial" port="0"/>
Oct 09 16:22:39 compute-0 nova_compute[117331]:     </console>
Oct 09 16:22:39 compute-0 nova_compute[117331]:     <input type="tablet" bus="usb">
Oct 09 16:22:39 compute-0 nova_compute[117331]:       <address type="usb" bus="0" port="1"/>
Oct 09 16:22:39 compute-0 nova_compute[117331]:     </input>
Oct 09 16:22:39 compute-0 nova_compute[117331]:     <input type="mouse" bus="ps2"/>
Oct 09 16:22:39 compute-0 nova_compute[117331]:     <graphics type="vnc" port="-1" autoport="yes" listen="::">
Oct 09 16:22:39 compute-0 nova_compute[117331]:       <listen type="address" address="::"/>
Oct 09 16:22:39 compute-0 nova_compute[117331]:     </graphics>
Oct 09 16:22:39 compute-0 nova_compute[117331]:     <video>
Oct 09 16:22:39 compute-0 nova_compute[117331]:       <model type="virtio" heads="1" primary="yes"/>
Oct 09 16:22:39 compute-0 nova_compute[117331]:       <address type="pci" domain="0x0000" bus="0x00" slot="0x01" function="0x0"/>
Oct 09 16:22:39 compute-0 nova_compute[117331]:     </video>
Oct 09 16:22:39 compute-0 nova_compute[117331]:     <memballoon model="virtio" autodeflate="on" freePageReporting="on">
Oct 09 16:22:39 compute-0 nova_compute[117331]:       <stats period="10"/>
Oct 09 16:22:39 compute-0 nova_compute[117331]:       <address type="pci" domain="0x0000" bus="0x04" slot="0x00" function="0x0"/>
Oct 09 16:22:39 compute-0 nova_compute[117331]:     </memballoon>
Oct 09 16:22:39 compute-0 nova_compute[117331]:     <rng model="virtio">
Oct 09 16:22:39 compute-0 nova_compute[117331]:       <backend model="random">/dev/urandom</backend>
Oct 09 16:22:39 compute-0 nova_compute[117331]:       <address type="pci" domain="0x0000" bus="0x05" slot="0x00" function="0x0"/>
Oct 09 16:22:39 compute-0 nova_compute[117331]:     </rng>
Oct 09 16:22:39 compute-0 nova_compute[117331]:   </devices>
Oct 09 16:22:39 compute-0 nova_compute[117331]:   <seclabel type="dynamic" model="selinux" relabel="yes"/>
Oct 09 16:22:39 compute-0 nova_compute[117331]: </domain>
Oct 09 16:22:39 compute-0 nova_compute[117331]:  _update_pci_dev_xml /usr/lib/python3.12/site-packages/nova/virt/libvirt/migration.py:166
Oct 09 16:22:39 compute-0 nova_compute[117331]: 2025-10-09 16:22:39.254 2 DEBUG nova.virt.libvirt.driver [None req-5c240840-8346-4b6c-b1c2-7db22559751c 005258eefa5345409efe13f6a1de22ab c076c42271004e7aa86e84416e33f826 - - default default] [instance: 4537359c-cc98-4f5e-87c4-2410f96f0e44] About to invoke the migrate API _live_migration_operation /usr/lib/python3.12/site-packages/nova/virt/libvirt/driver.py:11175
Oct 09 16:22:39 compute-0 ovn_controller[19752]: 2025-10-09T16:22:39Z|00126|memory_trim|INFO|Detected inactivity (last active 30005 ms ago): trimming memory
Oct 09 16:22:39 compute-0 nova_compute[117331]: 2025-10-09 16:22:39.674 2 DEBUG oslo_concurrency.lockutils [req-2b49f102-eecf-4f56-b2ff-d1a0dbedbe10 req-7254c7ca-c504-4111-b723-2e94b1e6c5e7 ee15961ad2be4bada6c106ac4f69f422 c076c42271004e7aa86e84416e33f826 - - default default] Releasing lock "refresh_cache-4537359c-cc98-4f5e-87c4-2410f96f0e44" lock /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:334
Oct 09 16:22:39 compute-0 nova_compute[117331]: 2025-10-09 16:22:39.744 2 DEBUG nova.virt.libvirt.migration [None req-5c240840-8346-4b6c-b1c2-7db22559751c 005258eefa5345409efe13f6a1de22ab c076c42271004e7aa86e84416e33f826 - - default default] [instance: 4537359c-cc98-4f5e-87c4-2410f96f0e44] Current None elapsed 1 steps [(0, 50), (300, 95), (600, 140), (900, 185), (1200, 230), (1500, 275), (1800, 320), (2100, 365), (2400, 410), (2700, 455), (3000, 500)] update_downtime /usr/lib/python3.12/site-packages/nova/virt/libvirt/migration.py:658
Oct 09 16:22:39 compute-0 nova_compute[117331]: 2025-10-09 16:22:39.745 2 INFO nova.virt.libvirt.migration [None req-5c240840-8346-4b6c-b1c2-7db22559751c 005258eefa5345409efe13f6a1de22ab c076c42271004e7aa86e84416e33f826 - - default default] [instance: 4537359c-cc98-4f5e-87c4-2410f96f0e44] Increasing downtime to 50 ms after 0 sec elapsed time
Oct 09 16:22:40 compute-0 nova_compute[117331]: 2025-10-09 16:22:40.761 2 INFO nova.virt.libvirt.driver [None req-5c240840-8346-4b6c-b1c2-7db22559751c 005258eefa5345409efe13f6a1de22ab c076c42271004e7aa86e84416e33f826 - - default default] [instance: 4537359c-cc98-4f5e-87c4-2410f96f0e44] Migration running for 1 secs, memory 100% remaining (bytes processed=0, remaining=0, total=0); disk 100% remaining (bytes processed=0, remaining=0, total=0).
Oct 09 16:22:40 compute-0 nova_compute[117331]: 2025-10-09 16:22:40.959 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 09 16:22:41 compute-0 nova_compute[117331]: 2025-10-09 16:22:41.264 2 DEBUG nova.virt.libvirt.migration [None req-5c240840-8346-4b6c-b1c2-7db22559751c 005258eefa5345409efe13f6a1de22ab c076c42271004e7aa86e84416e33f826 - - default default] [instance: 4537359c-cc98-4f5e-87c4-2410f96f0e44] Current 50 elapsed 2 steps [(0, 50), (300, 95), (600, 140), (900, 185), (1200, 230), (1500, 275), (1800, 320), (2100, 365), (2400, 410), (2700, 455), (3000, 500)] update_downtime /usr/lib/python3.12/site-packages/nova/virt/libvirt/migration.py:658
Oct 09 16:22:41 compute-0 nova_compute[117331]: 2025-10-09 16:22:41.264 2 DEBUG nova.virt.libvirt.migration [None req-5c240840-8346-4b6c-b1c2-7db22559751c 005258eefa5345409efe13f6a1de22ab c076c42271004e7aa86e84416e33f826 - - default default] [instance: 4537359c-cc98-4f5e-87c4-2410f96f0e44] Downtime does not need to change update_downtime /usr/lib/python3.12/site-packages/nova/virt/libvirt/migration.py:671
Oct 09 16:22:41 compute-0 nova_compute[117331]: 2025-10-09 16:22:41.771 2 DEBUG nova.virt.libvirt.migration [None req-5c240840-8346-4b6c-b1c2-7db22559751c 005258eefa5345409efe13f6a1de22ab c076c42271004e7aa86e84416e33f826 - - default default] [instance: 4537359c-cc98-4f5e-87c4-2410f96f0e44] Current 50 elapsed 3 steps [(0, 50), (300, 95), (600, 140), (900, 185), (1200, 230), (1500, 275), (1800, 320), (2100, 365), (2400, 410), (2700, 455), (3000, 500)] update_downtime /usr/lib/python3.12/site-packages/nova/virt/libvirt/migration.py:658
Oct 09 16:22:41 compute-0 nova_compute[117331]: 2025-10-09 16:22:41.771 2 DEBUG nova.virt.libvirt.migration [None req-5c240840-8346-4b6c-b1c2-7db22559751c 005258eefa5345409efe13f6a1de22ab c076c42271004e7aa86e84416e33f826 - - default default] [instance: 4537359c-cc98-4f5e-87c4-2410f96f0e44] Downtime does not need to change update_downtime /usr/lib/python3.12/site-packages/nova/virt/libvirt/migration.py:671
Oct 09 16:22:41 compute-0 podman[146013]: 2025-10-09 16:22:41.842357961 +0000 UTC m=+0.065405603 container health_status 0e966c7a283689daf23c5db5017f23f685ee836319240188a9f70ddd246563c5 (image=38.102.83.66:5001/podified-master-centos10/openstack-neutron-metadata-agent-ovn:watcher_latest, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent, tcib_build_tag=watcher_latest, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251007, org.label-schema.license=GPLv2, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': '38.102.83.66:5001/podified-master-centos10/openstack-neutron-metadata-agent-ovn:watcher_latest', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 10 Base Image, container_name=ovn_metadata_agent, io.buildah.version=1.41.4, managed_by=edpm_ansible, org.label-schema.schema-version=1.0)
Oct 09 16:22:41 compute-0 podman[146014]: 2025-10-09 16:22:41.84262237 +0000 UTC m=+0.060830657 container health_status 8227728d23f374cb9528be6ddf01dff38d8c792912a17b0d137932a18390b820 (image=38.102.83.66:5001/podified-master-centos10/openstack-iscsid:watcher_latest, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_id=iscsid, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 10 Base Image, io.buildah.version=1.41.4, org.label-schema.build-date=20251007, org.label-schema.schema-version=1.0, tcib_build_tag=watcher_latest, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': '38.102.83.66:5001/podified-master-centos10/openstack-iscsid:watcher_latest', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, container_name=iscsid, managed_by=edpm_ansible)
Oct 09 16:22:41 compute-0 kernel: tapab5bf45b-d6 (unregistering): left promiscuous mode
Oct 09 16:22:41 compute-0 NetworkManager[1028]: <info>  [1760026961.9252] device (tapab5bf45b-d6): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Oct 09 16:22:41 compute-0 nova_compute[117331]: 2025-10-09 16:22:41.933 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 09 16:22:41 compute-0 nova_compute[117331]: 2025-10-09 16:22:41.935 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 09 16:22:41 compute-0 ovn_controller[19752]: 2025-10-09T16:22:41Z|00127|binding|INFO|Releasing lport ab5bf45b-d6f4-4ecf-95d5-b00418cb03fa from this chassis (sb_readonly=0)
Oct 09 16:22:41 compute-0 ovn_controller[19752]: 2025-10-09T16:22:41Z|00128|binding|INFO|Setting lport ab5bf45b-d6f4-4ecf-95d5-b00418cb03fa down in Southbound
Oct 09 16:22:41 compute-0 ovn_controller[19752]: 2025-10-09T16:22:41Z|00129|binding|INFO|Removing iface tapab5bf45b-d6 ovn-installed in OVS
Oct 09 16:22:41 compute-0 ovn_metadata_agent[28608]: 2025-10-09 16:22:41.939 28613 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:cc:3c:4d 10.100.0.14'], port_security=['fa:16:3e:cc:3c:4d 10.100.0.14'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com,compute-1.ctlplane.example.com', 'activation-strategy': 'rarp', 'additional-chassis-activated': '2bd8bf21-1f6b-42c9-9656-9a72fa8dcbf3'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.14/28', 'neutron:device_id': '4537359c-cc98-4f5e-87c4-2410f96f0e44', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-c78b28f7-c251-4d74-863e-8d5520acbae0', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'c47758289b4c448c95f927a91d89e09f', 'neutron:revision_number': '10', 'neutron:security_group_ids': '37ce4296-f479-41b4-95bb-85317226edc9', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=8cdf3869-9618-417b-be95-470341634549, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f37b2576840>], logical_port=ab5bf45b-d6f4-4ecf-95d5-b00418cb03fa) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f37b2576840>]) matches /usr/lib/python3.12/site-packages/ovsdbapp/backend/ovs_idl/event.py:55
Oct 09 16:22:41 compute-0 ovn_metadata_agent[28608]: 2025-10-09 16:22:41.940 28613 INFO neutron.agent.ovn.metadata.agent [-] Port ab5bf45b-d6f4-4ecf-95d5-b00418cb03fa in datapath c78b28f7-c251-4d74-863e-8d5520acbae0 unbound from our chassis
Oct 09 16:22:41 compute-0 ovn_metadata_agent[28608]: 2025-10-09 16:22:41.941 28613 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network c78b28f7-c251-4d74-863e-8d5520acbae0, tearing the namespace down if needed _get_provision_params /usr/lib/python3.12/site-packages/neutron/agent/ovn/metadata/agent.py:756
Oct 09 16:22:41 compute-0 ovn_metadata_agent[28608]: 2025-10-09 16:22:41.942 139687 DEBUG oslo.privsep.daemon [-] privsep: reply[f17b32dd-6a29-4f5f-970f-28abb3a2cb6a]: (4, True) _call_back /usr/lib/python3.12/site-packages/oslo_privsep/daemon.py:515
Oct 09 16:22:41 compute-0 ovn_metadata_agent[28608]: 2025-10-09 16:22:41.943 28613 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-c78b28f7-c251-4d74-863e-8d5520acbae0 namespace which is not needed anymore
Oct 09 16:22:41 compute-0 nova_compute[117331]: 2025-10-09 16:22:41.958 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 09 16:22:42 compute-0 systemd[1]: machine-qemu\x2d8\x2dinstance\x2d0000000c.scope: Deactivated successfully.
Oct 09 16:22:42 compute-0 systemd[1]: machine-qemu\x2d8\x2dinstance\x2d0000000c.scope: Consumed 14.674s CPU time.
Oct 09 16:22:42 compute-0 systemd-machined[77487]: Machine qemu-8-instance-0000000c terminated.
Oct 09 16:22:42 compute-0 neutron-haproxy-ovnmeta-c78b28f7-c251-4d74-863e-8d5520acbae0[145775]: [NOTICE]   (145779) : haproxy version is 3.0.5-8e879a5
Oct 09 16:22:42 compute-0 neutron-haproxy-ovnmeta-c78b28f7-c251-4d74-863e-8d5520acbae0[145775]: [NOTICE]   (145779) : path to executable is /usr/sbin/haproxy
Oct 09 16:22:42 compute-0 neutron-haproxy-ovnmeta-c78b28f7-c251-4d74-863e-8d5520acbae0[145775]: [WARNING]  (145779) : Exiting Master process...
Oct 09 16:22:42 compute-0 podman[146077]: 2025-10-09 16:22:42.0716643 +0000 UTC m=+0.035453629 container kill 43deaebc7c8cb50ee92089f3e78b68b4b50312e14279ca186ba0e8a2a4e99d92 (image=38.102.83.66:5001/podified-master-centos10/openstack-neutron-metadata-agent-ovn:watcher_latest, name=neutron-haproxy-ovnmeta-c78b28f7-c251-4d74-863e-8d5520acbae0, io.buildah.version=1.41.4, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251007, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=watcher_latest, tcib_managed=true, org.label-schema.license=GPLv2)
Oct 09 16:22:42 compute-0 neutron-haproxy-ovnmeta-c78b28f7-c251-4d74-863e-8d5520acbae0[145775]: [ALERT]    (145779) : Current worker (145781) exited with code 143 (Terminated)
Oct 09 16:22:42 compute-0 neutron-haproxy-ovnmeta-c78b28f7-c251-4d74-863e-8d5520acbae0[145775]: [WARNING]  (145779) : All workers exited. Exiting... (0)
Oct 09 16:22:42 compute-0 systemd[1]: libpod-43deaebc7c8cb50ee92089f3e78b68b4b50312e14279ca186ba0e8a2a4e99d92.scope: Deactivated successfully.
Oct 09 16:22:42 compute-0 podman[146092]: 2025-10-09 16:22:42.113304455 +0000 UTC m=+0.024873252 container died 43deaebc7c8cb50ee92089f3e78b68b4b50312e14279ca186ba0e8a2a4e99d92 (image=38.102.83.66:5001/podified-master-centos10/openstack-neutron-metadata-agent-ovn:watcher_latest, name=neutron-haproxy-ovnmeta-c78b28f7-c251-4d74-863e-8d5520acbae0, io.buildah.version=1.41.4, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=watcher_latest, tcib_managed=true, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251007, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.vendor=CentOS)
Oct 09 16:22:42 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-43deaebc7c8cb50ee92089f3e78b68b4b50312e14279ca186ba0e8a2a4e99d92-userdata-shm.mount: Deactivated successfully.
Oct 09 16:22:42 compute-0 systemd[1]: var-lib-containers-storage-overlay-1eb06f9a897e180be4ca62f22ca6f08606777a6e2185fe7a945461a2c2ab85fb-merged.mount: Deactivated successfully.
Oct 09 16:22:42 compute-0 nova_compute[117331]: 2025-10-09 16:22:42.158 2 DEBUG nova.virt.libvirt.driver [None req-5c240840-8346-4b6c-b1c2-7db22559751c 005258eefa5345409efe13f6a1de22ab c076c42271004e7aa86e84416e33f826 - - default default] [instance: 4537359c-cc98-4f5e-87c4-2410f96f0e44] Migrate API has completed _live_migration_operation /usr/lib/python3.12/site-packages/nova/virt/libvirt/driver.py:11182
Oct 09 16:22:42 compute-0 nova_compute[117331]: 2025-10-09 16:22:42.159 2 DEBUG nova.virt.libvirt.driver [None req-5c240840-8346-4b6c-b1c2-7db22559751c 005258eefa5345409efe13f6a1de22ab c076c42271004e7aa86e84416e33f826 - - default default] [instance: 4537359c-cc98-4f5e-87c4-2410f96f0e44] Migration operation thread has finished _live_migration_operation /usr/lib/python3.12/site-packages/nova/virt/libvirt/driver.py:11230
Oct 09 16:22:42 compute-0 nova_compute[117331]: 2025-10-09 16:22:42.159 2 DEBUG nova.virt.libvirt.driver [None req-5c240840-8346-4b6c-b1c2-7db22559751c 005258eefa5345409efe13f6a1de22ab c076c42271004e7aa86e84416e33f826 - - default default] [instance: 4537359c-cc98-4f5e-87c4-2410f96f0e44] Migration operation thread notification thread_finished /usr/lib/python3.12/site-packages/nova/virt/libvirt/driver.py:11533
Oct 09 16:22:42 compute-0 podman[146092]: 2025-10-09 16:22:42.171078174 +0000 UTC m=+0.082646961 container cleanup 43deaebc7c8cb50ee92089f3e78b68b4b50312e14279ca186ba0e8a2a4e99d92 (image=38.102.83.66:5001/podified-master-centos10/openstack-neutron-metadata-agent-ovn:watcher_latest, name=neutron-haproxy-ovnmeta-c78b28f7-c251-4d74-863e-8d5520acbae0, org.label-schema.license=GPLv2, tcib_build_tag=watcher_latest, tcib_managed=true, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.41.4, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251007)
Oct 09 16:22:42 compute-0 systemd[1]: libpod-conmon-43deaebc7c8cb50ee92089f3e78b68b4b50312e14279ca186ba0e8a2a4e99d92.scope: Deactivated successfully.
Oct 09 16:22:42 compute-0 podman[146094]: 2025-10-09 16:22:42.184853043 +0000 UTC m=+0.084715858 container remove 43deaebc7c8cb50ee92089f3e78b68b4b50312e14279ca186ba0e8a2a4e99d92 (image=38.102.83.66:5001/podified-master-centos10/openstack-neutron-metadata-agent-ovn:watcher_latest, name=neutron-haproxy-ovnmeta-c78b28f7-c251-4d74-863e-8d5520acbae0, org.label-schema.vendor=CentOS, tcib_build_tag=watcher_latest, org.label-schema.build-date=20251007, org.label-schema.name=CentOS Stream 10 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, io.buildah.version=1.41.4, org.label-schema.schema-version=1.0, tcib_managed=true)
Oct 09 16:22:42 compute-0 ovn_metadata_agent[28608]: 2025-10-09 16:22:42.203 139687 DEBUG oslo.privsep.daemon [-] privsep: reply[06aee45c-ca4a-423b-81b8-722890b3a73e]: (4, ("Thu Oct  9 04:22:42 PM UTC 2025 Sending signal '15' to neutron-haproxy-ovnmeta-c78b28f7-c251-4d74-863e-8d5520acbae0 (43deaebc7c8cb50ee92089f3e78b68b4b50312e14279ca186ba0e8a2a4e99d92)\n43deaebc7c8cb50ee92089f3e78b68b4b50312e14279ca186ba0e8a2a4e99d92\nThu Oct  9 04:22:42 PM UTC 2025 Deleting container neutron-haproxy-ovnmeta-c78b28f7-c251-4d74-863e-8d5520acbae0 (43deaebc7c8cb50ee92089f3e78b68b4b50312e14279ca186ba0e8a2a4e99d92)\n43deaebc7c8cb50ee92089f3e78b68b4b50312e14279ca186ba0e8a2a4e99d92\n", '', 0)) _call_back /usr/lib/python3.12/site-packages/oslo_privsep/daemon.py:515
Oct 09 16:22:42 compute-0 ovn_metadata_agent[28608]: 2025-10-09 16:22:42.205 139687 DEBUG oslo.privsep.daemon [-] privsep: reply[7be7e1b8-b4bb-47c1-b28b-12650397c31b]: (4, None) _call_back /usr/lib/python3.12/site-packages/oslo_privsep/daemon.py:515
Oct 09 16:22:42 compute-0 ovn_metadata_agent[28608]: 2025-10-09 16:22:42.205 28613 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/c78b28f7-c251-4d74-863e-8d5520acbae0.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/c78b28f7-c251-4d74-863e-8d5520acbae0.pid.haproxy' get_value_from_file /usr/lib/python3.12/site-packages/neutron/agent/linux/utils.py:268
Oct 09 16:22:42 compute-0 ovn_metadata_agent[28608]: 2025-10-09 16:22:42.205 139687 DEBUG oslo.privsep.daemon [-] privsep: reply[8a965ee5-e7b6-4357-bf6c-501a3b787d32]: (4, None) _call_back /usr/lib/python3.12/site-packages/oslo_privsep/daemon.py:515
Oct 09 16:22:42 compute-0 ovn_metadata_agent[28608]: 2025-10-09 16:22:42.206 28613 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapc78b28f7-c0, bridge=None, if_exists=True) do_commit /usr/lib/python3.12/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 09 16:22:42 compute-0 nova_compute[117331]: 2025-10-09 16:22:42.208 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 09 16:22:42 compute-0 kernel: tapc78b28f7-c0: left promiscuous mode
Oct 09 16:22:42 compute-0 nova_compute[117331]: 2025-10-09 16:22:42.222 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 09 16:22:42 compute-0 nova_compute[117331]: 2025-10-09 16:22:42.223 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 09 16:22:42 compute-0 ovn_metadata_agent[28608]: 2025-10-09 16:22:42.224 139687 DEBUG oslo.privsep.daemon [-] privsep: reply[88886cc4-e8e9-43da-a7dd-15907500793b]: (4, True) _call_back /usr/lib/python3.12/site-packages/oslo_privsep/daemon.py:515
Oct 09 16:22:42 compute-0 ovn_metadata_agent[28608]: 2025-10-09 16:22:42.248 139687 DEBUG oslo.privsep.daemon [-] privsep: reply[5d206d18-123e-4908-9c4b-a42f4e30f097]: (4, None) _call_back /usr/lib/python3.12/site-packages/oslo_privsep/daemon.py:515
Oct 09 16:22:42 compute-0 ovn_metadata_agent[28608]: 2025-10-09 16:22:42.249 139687 DEBUG oslo.privsep.daemon [-] privsep: reply[d0ea6706-1ec8-4397-b7a3-20c774d7b0d5]: (4, True) _call_back /usr/lib/python3.12/site-packages/oslo_privsep/daemon.py:515
Oct 09 16:22:42 compute-0 ovn_metadata_agent[28608]: 2025-10-09 16:22:42.263 139687 DEBUG oslo.privsep.daemon [-] privsep: reply[eb037a49-f4c6-450b-9001-a6d130240912]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 185558, 'reachable_time': 22580, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 146149, 'error': None, 'target': 'ovnmeta-c78b28f7-c251-4d74-863e-8d5520acbae0', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.12/site-packages/oslo_privsep/daemon.py:515
Oct 09 16:22:42 compute-0 ovn_metadata_agent[28608]: 2025-10-09 16:22:42.264 28727 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-c78b28f7-c251-4d74-863e-8d5520acbae0 deleted. remove_netns /usr/lib/python3.12/site-packages/neutron/privileged/agent/linux/ip_lib.py:603
Oct 09 16:22:42 compute-0 ovn_metadata_agent[28608]: 2025-10-09 16:22:42.265 28727 DEBUG oslo.privsep.daemon [-] privsep: reply[4481136c-7c91-406c-ba51-451680d15d68]: (4, None) _call_back /usr/lib/python3.12/site-packages/oslo_privsep/daemon.py:515
Oct 09 16:22:42 compute-0 systemd[1]: run-netns-ovnmeta\x2dc78b28f7\x2dc251\x2d4d74\x2d863e\x2d8d5520acbae0.mount: Deactivated successfully.
Oct 09 16:22:42 compute-0 nova_compute[117331]: 2025-10-09 16:22:42.273 2 DEBUG nova.virt.libvirt.guest [None req-5c240840-8346-4b6c-b1c2-7db22559751c 005258eefa5345409efe13f6a1de22ab c076c42271004e7aa86e84416e33f826 - - default default] Domain has shutdown/gone away: Domain not found: no domain with matching uuid '4537359c-cc98-4f5e-87c4-2410f96f0e44' (instance-0000000c) get_job_info /usr/lib/python3.12/site-packages/nova/virt/libvirt/guest.py:687
Oct 09 16:22:42 compute-0 nova_compute[117331]: 2025-10-09 16:22:42.273 2 INFO nova.virt.libvirt.driver [None req-5c240840-8346-4b6c-b1c2-7db22559751c 005258eefa5345409efe13f6a1de22ab c076c42271004e7aa86e84416e33f826 - - default default] [instance: 4537359c-cc98-4f5e-87c4-2410f96f0e44] Migration operation has completed
Oct 09 16:22:42 compute-0 nova_compute[117331]: 2025-10-09 16:22:42.274 2 INFO nova.compute.manager [None req-5c240840-8346-4b6c-b1c2-7db22559751c 005258eefa5345409efe13f6a1de22ab c076c42271004e7aa86e84416e33f826 - - default default] [instance: 4537359c-cc98-4f5e-87c4-2410f96f0e44] _post_live_migration() is started..
Oct 09 16:22:42 compute-0 nova_compute[117331]: 2025-10-09 16:22:42.285 2 WARNING neutronclient.v2_0.client [None req-5c240840-8346-4b6c-b1c2-7db22559751c 005258eefa5345409efe13f6a1de22ab c076c42271004e7aa86e84416e33f826 - - default default] The python binding code in neutronclient is deprecated in favor of OpenstackSDK, please use that as this will be removed in a future release.
Oct 09 16:22:42 compute-0 nova_compute[117331]: 2025-10-09 16:22:42.285 2 WARNING neutronclient.v2_0.client [None req-5c240840-8346-4b6c-b1c2-7db22559751c 005258eefa5345409efe13f6a1de22ab c076c42271004e7aa86e84416e33f826 - - default default] The python binding code in neutronclient is deprecated in favor of OpenstackSDK, please use that as this will be removed in a future release.
Oct 09 16:22:42 compute-0 nova_compute[117331]: 2025-10-09 16:22:42.341 2 DEBUG nova.compute.manager [req-a2ef0c71-cd55-4c8a-aac8-5e72240bed52 req-38b788f7-8db0-473b-bb1f-41b378ab0b79 ee15961ad2be4bada6c106ac4f69f422 c076c42271004e7aa86e84416e33f826 - - default default] [instance: 4537359c-cc98-4f5e-87c4-2410f96f0e44] Received event network-vif-unplugged-ab5bf45b-d6f4-4ecf-95d5-b00418cb03fa external_instance_event /usr/lib/python3.12/site-packages/nova/compute/manager.py:11812
Oct 09 16:22:42 compute-0 nova_compute[117331]: 2025-10-09 16:22:42.342 2 DEBUG oslo_concurrency.lockutils [req-a2ef0c71-cd55-4c8a-aac8-5e72240bed52 req-38b788f7-8db0-473b-bb1f-41b378ab0b79 ee15961ad2be4bada6c106ac4f69f422 c076c42271004e7aa86e84416e33f826 - - default default] Acquiring lock "4537359c-cc98-4f5e-87c4-2410f96f0e44-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:405
Oct 09 16:22:42 compute-0 nova_compute[117331]: 2025-10-09 16:22:42.343 2 DEBUG oslo_concurrency.lockutils [req-a2ef0c71-cd55-4c8a-aac8-5e72240bed52 req-38b788f7-8db0-473b-bb1f-41b378ab0b79 ee15961ad2be4bada6c106ac4f69f422 c076c42271004e7aa86e84416e33f826 - - default default] Lock "4537359c-cc98-4f5e-87c4-2410f96f0e44-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:410
Oct 09 16:22:42 compute-0 nova_compute[117331]: 2025-10-09 16:22:42.343 2 DEBUG oslo_concurrency.lockutils [req-a2ef0c71-cd55-4c8a-aac8-5e72240bed52 req-38b788f7-8db0-473b-bb1f-41b378ab0b79 ee15961ad2be4bada6c106ac4f69f422 c076c42271004e7aa86e84416e33f826 - - default default] Lock "4537359c-cc98-4f5e-87c4-2410f96f0e44-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:424
Oct 09 16:22:42 compute-0 nova_compute[117331]: 2025-10-09 16:22:42.344 2 DEBUG nova.compute.manager [req-a2ef0c71-cd55-4c8a-aac8-5e72240bed52 req-38b788f7-8db0-473b-bb1f-41b378ab0b79 ee15961ad2be4bada6c106ac4f69f422 c076c42271004e7aa86e84416e33f826 - - default default] [instance: 4537359c-cc98-4f5e-87c4-2410f96f0e44] No waiting events found dispatching network-vif-unplugged-ab5bf45b-d6f4-4ecf-95d5-b00418cb03fa pop_instance_event /usr/lib/python3.12/site-packages/nova/compute/manager.py:344
Oct 09 16:22:42 compute-0 nova_compute[117331]: 2025-10-09 16:22:42.345 2 DEBUG nova.compute.manager [req-a2ef0c71-cd55-4c8a-aac8-5e72240bed52 req-38b788f7-8db0-473b-bb1f-41b378ab0b79 ee15961ad2be4bada6c106ac4f69f422 c076c42271004e7aa86e84416e33f826 - - default default] [instance: 4537359c-cc98-4f5e-87c4-2410f96f0e44] Received event network-vif-unplugged-ab5bf45b-d6f4-4ecf-95d5-b00418cb03fa for instance with task_state migrating. _process_instance_event /usr/lib/python3.12/site-packages/nova/compute/manager.py:11590
Oct 09 16:22:42 compute-0 nova_compute[117331]: 2025-10-09 16:22:42.799 2 DEBUG nova.network.neutron [None req-5c240840-8346-4b6c-b1c2-7db22559751c 005258eefa5345409efe13f6a1de22ab c076c42271004e7aa86e84416e33f826 - - default default] Activated binding for port ab5bf45b-d6f4-4ecf-95d5-b00418cb03fa and host compute-1.ctlplane.example.com migrate_instance_start /usr/lib/python3.12/site-packages/nova/network/neutron.py:3241
Oct 09 16:22:42 compute-0 nova_compute[117331]: 2025-10-09 16:22:42.800 2 DEBUG nova.compute.manager [None req-5c240840-8346-4b6c-b1c2-7db22559751c 005258eefa5345409efe13f6a1de22ab c076c42271004e7aa86e84416e33f826 - - default default] [instance: 4537359c-cc98-4f5e-87c4-2410f96f0e44] Calling driver.post_live_migration_at_source with original source VIFs from migrate_data: [{"id": "ab5bf45b-d6f4-4ecf-95d5-b00418cb03fa", "address": "fa:16:3e:cc:3c:4d", "network": {"id": "c78b28f7-c251-4d74-863e-8d5520acbae0", "bridge": "br-int", "label": "tempest-TestExecuteHostMaintenanceStrategy-300799103-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "95a019f93e534ec0ad7dca9e44d00556", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapab5bf45b-d6", "ovs_interfaceid": "ab5bf45b-d6f4-4ecf-95d5-b00418cb03fa", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] _post_live_migration /usr/lib/python3.12/site-packages/nova/compute/manager.py:10059
Oct 09 16:22:42 compute-0 nova_compute[117331]: 2025-10-09 16:22:42.801 2 DEBUG nova.virt.libvirt.vif [None req-5c240840-8346-4b6c-b1c2-7db22559751c 005258eefa5345409efe13f6a1de22ab c076c42271004e7aa86e84416e33f826 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,compute_id=1,config_drive='True',created_at=2025-10-09T16:21:38Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description=None,display_name='tempest-TestExecuteHostMaintenanceStrategy-server-1250599539',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testexecutehostmaintenancestrategy-server-1250599539',id=12,image_ref='b7d6e0af-25e4-4227-9dc6-43143898ceee',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-10-09T16:21:53Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=1,progress=0,project_id='c47758289b4c448c95f927a91d89e09f',ramdisk_id='',reservation_id='r-0ee0701h',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member,manager,admin',image_base_image_ref='b7d6e0af-25e4-4227-9dc6-43143898ceee',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestExecuteHostMaintenanceStrategy-51576386',owner_user_name='tempest-TestExecuteHostMaintenanceStrategy-51576386-project-admin'},tags=<?>,task_state='migrating',terminated_at=None,trusted_certs=<?>,updated_at=2025-10-09T16:22:21Z,user_data=None,user_id='2ed3ac10329446d8a5aae566951cae1e',uuid=4537359c-cc98-4f5e-87c4-2410f96f0e44,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "ab5bf45b-d6f4-4ecf-95d5-b00418cb03fa", "address": "fa:16:3e:cc:3c:4d", "network": {"id": "c78b28f7-c251-4d74-863e-8d5520acbae0", "bridge": "br-int", "label": "tempest-TestExecuteHostMaintenanceStrategy-300799103-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "95a019f93e534ec0ad7dca9e44d00556", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapab5bf45b-d6", "ovs_interfaceid": "ab5bf45b-d6f4-4ecf-95d5-b00418cb03fa", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.12/site-packages/nova/virt/libvirt/vif.py:839
Oct 09 16:22:42 compute-0 nova_compute[117331]: 2025-10-09 16:22:42.802 2 DEBUG nova.network.os_vif_util [None req-5c240840-8346-4b6c-b1c2-7db22559751c 005258eefa5345409efe13f6a1de22ab c076c42271004e7aa86e84416e33f826 - - default default] Converting VIF {"id": "ab5bf45b-d6f4-4ecf-95d5-b00418cb03fa", "address": "fa:16:3e:cc:3c:4d", "network": {"id": "c78b28f7-c251-4d74-863e-8d5520acbae0", "bridge": "br-int", "label": "tempest-TestExecuteHostMaintenanceStrategy-300799103-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "95a019f93e534ec0ad7dca9e44d00556", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapab5bf45b-d6", "ovs_interfaceid": "ab5bf45b-d6f4-4ecf-95d5-b00418cb03fa", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.12/site-packages/nova/network/os_vif_util.py:511
Oct 09 16:22:42 compute-0 nova_compute[117331]: 2025-10-09 16:22:42.803 2 DEBUG nova.network.os_vif_util [None req-5c240840-8346-4b6c-b1c2-7db22559751c 005258eefa5345409efe13f6a1de22ab c076c42271004e7aa86e84416e33f826 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:cc:3c:4d,bridge_name='br-int',has_traffic_filtering=True,id=ab5bf45b-d6f4-4ecf-95d5-b00418cb03fa,network=Network(c78b28f7-c251-4d74-863e-8d5520acbae0),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapab5bf45b-d6') nova_to_osvif_vif /usr/lib/python3.12/site-packages/nova/network/os_vif_util.py:548
Oct 09 16:22:42 compute-0 nova_compute[117331]: 2025-10-09 16:22:42.803 2 DEBUG os_vif [None req-5c240840-8346-4b6c-b1c2-7db22559751c 005258eefa5345409efe13f6a1de22ab c076c42271004e7aa86e84416e33f826 - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:cc:3c:4d,bridge_name='br-int',has_traffic_filtering=True,id=ab5bf45b-d6f4-4ecf-95d5-b00418cb03fa,network=Network(c78b28f7-c251-4d74-863e-8d5520acbae0),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapab5bf45b-d6') unplug /usr/lib/python3.12/site-packages/os_vif/__init__.py:109
Oct 09 16:22:42 compute-0 nova_compute[117331]: 2025-10-09 16:22:42.805 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 22 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 09 16:22:42 compute-0 nova_compute[117331]: 2025-10-09 16:22:42.806 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapab5bf45b-d6, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.12/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 09 16:22:42 compute-0 nova_compute[117331]: 2025-10-09 16:22:42.807 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 09 16:22:42 compute-0 nova_compute[117331]: 2025-10-09 16:22:42.808 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 09 16:22:42 compute-0 nova_compute[117331]: 2025-10-09 16:22:42.810 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 22 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 09 16:22:42 compute-0 nova_compute[117331]: 2025-10-09 16:22:42.810 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbDestroyCommand(_result=None, table=QoS, record=99d9a6b6-6206-4a52-a58f-0c9ee227159d) do_commit /usr/lib/python3.12/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 09 16:22:42 compute-0 nova_compute[117331]: 2025-10-09 16:22:42.811 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 09 16:22:42 compute-0 nova_compute[117331]: 2025-10-09 16:22:42.812 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 09 16:22:42 compute-0 nova_compute[117331]: 2025-10-09 16:22:42.815 2 INFO os_vif [None req-5c240840-8346-4b6c-b1c2-7db22559751c 005258eefa5345409efe13f6a1de22ab c076c42271004e7aa86e84416e33f826 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:cc:3c:4d,bridge_name='br-int',has_traffic_filtering=True,id=ab5bf45b-d6f4-4ecf-95d5-b00418cb03fa,network=Network(c78b28f7-c251-4d74-863e-8d5520acbae0),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapab5bf45b-d6')
Oct 09 16:22:42 compute-0 nova_compute[117331]: 2025-10-09 16:22:42.816 2 DEBUG oslo_concurrency.lockutils [None req-5c240840-8346-4b6c-b1c2-7db22559751c 005258eefa5345409efe13f6a1de22ab c076c42271004e7aa86e84416e33f826 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.free_pci_device_allocations_for_instance" inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:405
Oct 09 16:22:42 compute-0 nova_compute[117331]: 2025-10-09 16:22:42.816 2 DEBUG oslo_concurrency.lockutils [None req-5c240840-8346-4b6c-b1c2-7db22559751c 005258eefa5345409efe13f6a1de22ab c076c42271004e7aa86e84416e33f826 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.free_pci_device_allocations_for_instance" :: waited 0.000s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:410
Oct 09 16:22:42 compute-0 nova_compute[117331]: 2025-10-09 16:22:42.817 2 DEBUG oslo_concurrency.lockutils [None req-5c240840-8346-4b6c-b1c2-7db22559751c 005258eefa5345409efe13f6a1de22ab c076c42271004e7aa86e84416e33f826 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.free_pci_device_allocations_for_instance" :: held 0.000s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:424
Oct 09 16:22:42 compute-0 nova_compute[117331]: 2025-10-09 16:22:42.817 2 DEBUG nova.compute.manager [None req-5c240840-8346-4b6c-b1c2-7db22559751c 005258eefa5345409efe13f6a1de22ab c076c42271004e7aa86e84416e33f826 - - default default] [instance: 4537359c-cc98-4f5e-87c4-2410f96f0e44] Calling driver.cleanup from _post_live_migration _post_live_migration /usr/lib/python3.12/site-packages/nova/compute/manager.py:10082
Oct 09 16:22:42 compute-0 nova_compute[117331]: 2025-10-09 16:22:42.817 2 INFO nova.virt.libvirt.driver [None req-5c240840-8346-4b6c-b1c2-7db22559751c 005258eefa5345409efe13f6a1de22ab c076c42271004e7aa86e84416e33f826 - - default default] [instance: 4537359c-cc98-4f5e-87c4-2410f96f0e44] Deleting instance files /var/lib/nova/instances/4537359c-cc98-4f5e-87c4-2410f96f0e44_del
Oct 09 16:22:42 compute-0 nova_compute[117331]: 2025-10-09 16:22:42.818 2 INFO nova.virt.libvirt.driver [None req-5c240840-8346-4b6c-b1c2-7db22559751c 005258eefa5345409efe13f6a1de22ab c076c42271004e7aa86e84416e33f826 - - default default] [instance: 4537359c-cc98-4f5e-87c4-2410f96f0e44] Deletion of /var/lib/nova/instances/4537359c-cc98-4f5e-87c4-2410f96f0e44_del complete
Oct 09 16:22:44 compute-0 nova_compute[117331]: 2025-10-09 16:22:44.398 2 DEBUG nova.compute.manager [req-6ecb117b-aebb-450e-854d-10834dfe020a req-2bec3ec8-eb85-43bc-987a-884777e1bc1a ee15961ad2be4bada6c106ac4f69f422 c076c42271004e7aa86e84416e33f826 - - default default] [instance: 4537359c-cc98-4f5e-87c4-2410f96f0e44] Received event network-vif-plugged-ab5bf45b-d6f4-4ecf-95d5-b00418cb03fa external_instance_event /usr/lib/python3.12/site-packages/nova/compute/manager.py:11812
Oct 09 16:22:44 compute-0 nova_compute[117331]: 2025-10-09 16:22:44.398 2 DEBUG oslo_concurrency.lockutils [req-6ecb117b-aebb-450e-854d-10834dfe020a req-2bec3ec8-eb85-43bc-987a-884777e1bc1a ee15961ad2be4bada6c106ac4f69f422 c076c42271004e7aa86e84416e33f826 - - default default] Acquiring lock "4537359c-cc98-4f5e-87c4-2410f96f0e44-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:405
Oct 09 16:22:44 compute-0 nova_compute[117331]: 2025-10-09 16:22:44.398 2 DEBUG oslo_concurrency.lockutils [req-6ecb117b-aebb-450e-854d-10834dfe020a req-2bec3ec8-eb85-43bc-987a-884777e1bc1a ee15961ad2be4bada6c106ac4f69f422 c076c42271004e7aa86e84416e33f826 - - default default] Lock "4537359c-cc98-4f5e-87c4-2410f96f0e44-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:410
Oct 09 16:22:44 compute-0 nova_compute[117331]: 2025-10-09 16:22:44.398 2 DEBUG oslo_concurrency.lockutils [req-6ecb117b-aebb-450e-854d-10834dfe020a req-2bec3ec8-eb85-43bc-987a-884777e1bc1a ee15961ad2be4bada6c106ac4f69f422 c076c42271004e7aa86e84416e33f826 - - default default] Lock "4537359c-cc98-4f5e-87c4-2410f96f0e44-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:424
Oct 09 16:22:44 compute-0 nova_compute[117331]: 2025-10-09 16:22:44.399 2 DEBUG nova.compute.manager [req-6ecb117b-aebb-450e-854d-10834dfe020a req-2bec3ec8-eb85-43bc-987a-884777e1bc1a ee15961ad2be4bada6c106ac4f69f422 c076c42271004e7aa86e84416e33f826 - - default default] [instance: 4537359c-cc98-4f5e-87c4-2410f96f0e44] No waiting events found dispatching network-vif-plugged-ab5bf45b-d6f4-4ecf-95d5-b00418cb03fa pop_instance_event /usr/lib/python3.12/site-packages/nova/compute/manager.py:344
Oct 09 16:22:44 compute-0 nova_compute[117331]: 2025-10-09 16:22:44.399 2 WARNING nova.compute.manager [req-6ecb117b-aebb-450e-854d-10834dfe020a req-2bec3ec8-eb85-43bc-987a-884777e1bc1a ee15961ad2be4bada6c106ac4f69f422 c076c42271004e7aa86e84416e33f826 - - default default] [instance: 4537359c-cc98-4f5e-87c4-2410f96f0e44] Received unexpected event network-vif-plugged-ab5bf45b-d6f4-4ecf-95d5-b00418cb03fa for instance with vm_state active and task_state migrating.
Oct 09 16:22:44 compute-0 nova_compute[117331]: 2025-10-09 16:22:44.399 2 DEBUG nova.compute.manager [req-6ecb117b-aebb-450e-854d-10834dfe020a req-2bec3ec8-eb85-43bc-987a-884777e1bc1a ee15961ad2be4bada6c106ac4f69f422 c076c42271004e7aa86e84416e33f826 - - default default] [instance: 4537359c-cc98-4f5e-87c4-2410f96f0e44] Received event network-vif-unplugged-ab5bf45b-d6f4-4ecf-95d5-b00418cb03fa external_instance_event /usr/lib/python3.12/site-packages/nova/compute/manager.py:11812
Oct 09 16:22:44 compute-0 nova_compute[117331]: 2025-10-09 16:22:44.399 2 DEBUG oslo_concurrency.lockutils [req-6ecb117b-aebb-450e-854d-10834dfe020a req-2bec3ec8-eb85-43bc-987a-884777e1bc1a ee15961ad2be4bada6c106ac4f69f422 c076c42271004e7aa86e84416e33f826 - - default default] Acquiring lock "4537359c-cc98-4f5e-87c4-2410f96f0e44-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:405
Oct 09 16:22:44 compute-0 nova_compute[117331]: 2025-10-09 16:22:44.399 2 DEBUG oslo_concurrency.lockutils [req-6ecb117b-aebb-450e-854d-10834dfe020a req-2bec3ec8-eb85-43bc-987a-884777e1bc1a ee15961ad2be4bada6c106ac4f69f422 c076c42271004e7aa86e84416e33f826 - - default default] Lock "4537359c-cc98-4f5e-87c4-2410f96f0e44-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:410
Oct 09 16:22:44 compute-0 nova_compute[117331]: 2025-10-09 16:22:44.399 2 DEBUG oslo_concurrency.lockutils [req-6ecb117b-aebb-450e-854d-10834dfe020a req-2bec3ec8-eb85-43bc-987a-884777e1bc1a ee15961ad2be4bada6c106ac4f69f422 c076c42271004e7aa86e84416e33f826 - - default default] Lock "4537359c-cc98-4f5e-87c4-2410f96f0e44-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:424
Oct 09 16:22:44 compute-0 nova_compute[117331]: 2025-10-09 16:22:44.400 2 DEBUG nova.compute.manager [req-6ecb117b-aebb-450e-854d-10834dfe020a req-2bec3ec8-eb85-43bc-987a-884777e1bc1a ee15961ad2be4bada6c106ac4f69f422 c076c42271004e7aa86e84416e33f826 - - default default] [instance: 4537359c-cc98-4f5e-87c4-2410f96f0e44] No waiting events found dispatching network-vif-unplugged-ab5bf45b-d6f4-4ecf-95d5-b00418cb03fa pop_instance_event /usr/lib/python3.12/site-packages/nova/compute/manager.py:344
Oct 09 16:22:44 compute-0 nova_compute[117331]: 2025-10-09 16:22:44.400 2 DEBUG nova.compute.manager [req-6ecb117b-aebb-450e-854d-10834dfe020a req-2bec3ec8-eb85-43bc-987a-884777e1bc1a ee15961ad2be4bada6c106ac4f69f422 c076c42271004e7aa86e84416e33f826 - - default default] [instance: 4537359c-cc98-4f5e-87c4-2410f96f0e44] Received event network-vif-unplugged-ab5bf45b-d6f4-4ecf-95d5-b00418cb03fa for instance with task_state migrating. _process_instance_event /usr/lib/python3.12/site-packages/nova/compute/manager.py:11590
Oct 09 16:22:44 compute-0 nova_compute[117331]: 2025-10-09 16:22:44.400 2 DEBUG nova.compute.manager [req-6ecb117b-aebb-450e-854d-10834dfe020a req-2bec3ec8-eb85-43bc-987a-884777e1bc1a ee15961ad2be4bada6c106ac4f69f422 c076c42271004e7aa86e84416e33f826 - - default default] [instance: 4537359c-cc98-4f5e-87c4-2410f96f0e44] Received event network-vif-unplugged-ab5bf45b-d6f4-4ecf-95d5-b00418cb03fa external_instance_event /usr/lib/python3.12/site-packages/nova/compute/manager.py:11812
Oct 09 16:22:44 compute-0 nova_compute[117331]: 2025-10-09 16:22:44.400 2 DEBUG oslo_concurrency.lockutils [req-6ecb117b-aebb-450e-854d-10834dfe020a req-2bec3ec8-eb85-43bc-987a-884777e1bc1a ee15961ad2be4bada6c106ac4f69f422 c076c42271004e7aa86e84416e33f826 - - default default] Acquiring lock "4537359c-cc98-4f5e-87c4-2410f96f0e44-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:405
Oct 09 16:22:44 compute-0 nova_compute[117331]: 2025-10-09 16:22:44.400 2 DEBUG oslo_concurrency.lockutils [req-6ecb117b-aebb-450e-854d-10834dfe020a req-2bec3ec8-eb85-43bc-987a-884777e1bc1a ee15961ad2be4bada6c106ac4f69f422 c076c42271004e7aa86e84416e33f826 - - default default] Lock "4537359c-cc98-4f5e-87c4-2410f96f0e44-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:410
Oct 09 16:22:44 compute-0 nova_compute[117331]: 2025-10-09 16:22:44.400 2 DEBUG oslo_concurrency.lockutils [req-6ecb117b-aebb-450e-854d-10834dfe020a req-2bec3ec8-eb85-43bc-987a-884777e1bc1a ee15961ad2be4bada6c106ac4f69f422 c076c42271004e7aa86e84416e33f826 - - default default] Lock "4537359c-cc98-4f5e-87c4-2410f96f0e44-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:424
Oct 09 16:22:44 compute-0 nova_compute[117331]: 2025-10-09 16:22:44.401 2 DEBUG nova.compute.manager [req-6ecb117b-aebb-450e-854d-10834dfe020a req-2bec3ec8-eb85-43bc-987a-884777e1bc1a ee15961ad2be4bada6c106ac4f69f422 c076c42271004e7aa86e84416e33f826 - - default default] [instance: 4537359c-cc98-4f5e-87c4-2410f96f0e44] No waiting events found dispatching network-vif-unplugged-ab5bf45b-d6f4-4ecf-95d5-b00418cb03fa pop_instance_event /usr/lib/python3.12/site-packages/nova/compute/manager.py:344
Oct 09 16:22:44 compute-0 nova_compute[117331]: 2025-10-09 16:22:44.401 2 DEBUG nova.compute.manager [req-6ecb117b-aebb-450e-854d-10834dfe020a req-2bec3ec8-eb85-43bc-987a-884777e1bc1a ee15961ad2be4bada6c106ac4f69f422 c076c42271004e7aa86e84416e33f826 - - default default] [instance: 4537359c-cc98-4f5e-87c4-2410f96f0e44] Received event network-vif-unplugged-ab5bf45b-d6f4-4ecf-95d5-b00418cb03fa for instance with task_state migrating. _process_instance_event /usr/lib/python3.12/site-packages/nova/compute/manager.py:11590
Oct 09 16:22:44 compute-0 nova_compute[117331]: 2025-10-09 16:22:44.401 2 DEBUG nova.compute.manager [req-6ecb117b-aebb-450e-854d-10834dfe020a req-2bec3ec8-eb85-43bc-987a-884777e1bc1a ee15961ad2be4bada6c106ac4f69f422 c076c42271004e7aa86e84416e33f826 - - default default] [instance: 4537359c-cc98-4f5e-87c4-2410f96f0e44] Received event network-vif-plugged-ab5bf45b-d6f4-4ecf-95d5-b00418cb03fa external_instance_event /usr/lib/python3.12/site-packages/nova/compute/manager.py:11812
Oct 09 16:22:44 compute-0 nova_compute[117331]: 2025-10-09 16:22:44.401 2 DEBUG oslo_concurrency.lockutils [req-6ecb117b-aebb-450e-854d-10834dfe020a req-2bec3ec8-eb85-43bc-987a-884777e1bc1a ee15961ad2be4bada6c106ac4f69f422 c076c42271004e7aa86e84416e33f826 - - default default] Acquiring lock "4537359c-cc98-4f5e-87c4-2410f96f0e44-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:405
Oct 09 16:22:44 compute-0 nova_compute[117331]: 2025-10-09 16:22:44.401 2 DEBUG oslo_concurrency.lockutils [req-6ecb117b-aebb-450e-854d-10834dfe020a req-2bec3ec8-eb85-43bc-987a-884777e1bc1a ee15961ad2be4bada6c106ac4f69f422 c076c42271004e7aa86e84416e33f826 - - default default] Lock "4537359c-cc98-4f5e-87c4-2410f96f0e44-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:410
Oct 09 16:22:44 compute-0 nova_compute[117331]: 2025-10-09 16:22:44.401 2 DEBUG oslo_concurrency.lockutils [req-6ecb117b-aebb-450e-854d-10834dfe020a req-2bec3ec8-eb85-43bc-987a-884777e1bc1a ee15961ad2be4bada6c106ac4f69f422 c076c42271004e7aa86e84416e33f826 - - default default] Lock "4537359c-cc98-4f5e-87c4-2410f96f0e44-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:424
Oct 09 16:22:44 compute-0 nova_compute[117331]: 2025-10-09 16:22:44.402 2 DEBUG nova.compute.manager [req-6ecb117b-aebb-450e-854d-10834dfe020a req-2bec3ec8-eb85-43bc-987a-884777e1bc1a ee15961ad2be4bada6c106ac4f69f422 c076c42271004e7aa86e84416e33f826 - - default default] [instance: 4537359c-cc98-4f5e-87c4-2410f96f0e44] No waiting events found dispatching network-vif-plugged-ab5bf45b-d6f4-4ecf-95d5-b00418cb03fa pop_instance_event /usr/lib/python3.12/site-packages/nova/compute/manager.py:344
Oct 09 16:22:44 compute-0 nova_compute[117331]: 2025-10-09 16:22:44.402 2 WARNING nova.compute.manager [req-6ecb117b-aebb-450e-854d-10834dfe020a req-2bec3ec8-eb85-43bc-987a-884777e1bc1a ee15961ad2be4bada6c106ac4f69f422 c076c42271004e7aa86e84416e33f826 - - default default] [instance: 4537359c-cc98-4f5e-87c4-2410f96f0e44] Received unexpected event network-vif-plugged-ab5bf45b-d6f4-4ecf-95d5-b00418cb03fa for instance with vm_state active and task_state migrating.
Oct 09 16:22:44 compute-0 nova_compute[117331]: 2025-10-09 16:22:44.402 2 DEBUG nova.compute.manager [req-6ecb117b-aebb-450e-854d-10834dfe020a req-2bec3ec8-eb85-43bc-987a-884777e1bc1a ee15961ad2be4bada6c106ac4f69f422 c076c42271004e7aa86e84416e33f826 - - default default] [instance: 4537359c-cc98-4f5e-87c4-2410f96f0e44] Received event network-vif-plugged-ab5bf45b-d6f4-4ecf-95d5-b00418cb03fa external_instance_event /usr/lib/python3.12/site-packages/nova/compute/manager.py:11812
Oct 09 16:22:44 compute-0 nova_compute[117331]: 2025-10-09 16:22:44.402 2 DEBUG oslo_concurrency.lockutils [req-6ecb117b-aebb-450e-854d-10834dfe020a req-2bec3ec8-eb85-43bc-987a-884777e1bc1a ee15961ad2be4bada6c106ac4f69f422 c076c42271004e7aa86e84416e33f826 - - default default] Acquiring lock "4537359c-cc98-4f5e-87c4-2410f96f0e44-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:405
Oct 09 16:22:44 compute-0 nova_compute[117331]: 2025-10-09 16:22:44.402 2 DEBUG oslo_concurrency.lockutils [req-6ecb117b-aebb-450e-854d-10834dfe020a req-2bec3ec8-eb85-43bc-987a-884777e1bc1a ee15961ad2be4bada6c106ac4f69f422 c076c42271004e7aa86e84416e33f826 - - default default] Lock "4537359c-cc98-4f5e-87c4-2410f96f0e44-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:410
Oct 09 16:22:44 compute-0 nova_compute[117331]: 2025-10-09 16:22:44.402 2 DEBUG oslo_concurrency.lockutils [req-6ecb117b-aebb-450e-854d-10834dfe020a req-2bec3ec8-eb85-43bc-987a-884777e1bc1a ee15961ad2be4bada6c106ac4f69f422 c076c42271004e7aa86e84416e33f826 - - default default] Lock "4537359c-cc98-4f5e-87c4-2410f96f0e44-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:424
Oct 09 16:22:44 compute-0 nova_compute[117331]: 2025-10-09 16:22:44.403 2 DEBUG nova.compute.manager [req-6ecb117b-aebb-450e-854d-10834dfe020a req-2bec3ec8-eb85-43bc-987a-884777e1bc1a ee15961ad2be4bada6c106ac4f69f422 c076c42271004e7aa86e84416e33f826 - - default default] [instance: 4537359c-cc98-4f5e-87c4-2410f96f0e44] No waiting events found dispatching network-vif-plugged-ab5bf45b-d6f4-4ecf-95d5-b00418cb03fa pop_instance_event /usr/lib/python3.12/site-packages/nova/compute/manager.py:344
Oct 09 16:22:44 compute-0 nova_compute[117331]: 2025-10-09 16:22:44.403 2 WARNING nova.compute.manager [req-6ecb117b-aebb-450e-854d-10834dfe020a req-2bec3ec8-eb85-43bc-987a-884777e1bc1a ee15961ad2be4bada6c106ac4f69f422 c076c42271004e7aa86e84416e33f826 - - default default] [instance: 4537359c-cc98-4f5e-87c4-2410f96f0e44] Received unexpected event network-vif-plugged-ab5bf45b-d6f4-4ecf-95d5-b00418cb03fa for instance with vm_state active and task_state migrating.
Oct 09 16:22:45 compute-0 nova_compute[117331]: 2025-10-09 16:22:45.961 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 09 16:22:47 compute-0 nova_compute[117331]: 2025-10-09 16:22:47.812 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 09 16:22:50 compute-0 podman[146151]: 2025-10-09 16:22:50.871542938 +0000 UTC m=+0.094672465 container health_status 64fa251112d01abb34ec1bb8e22019815ddcaaaa391ce5ec91517147ac5a9994 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, io.openshift.tags=minimal rhel9, maintainer=Red Hat, Inc., managed_by=edpm_ansible, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, release=1755695350, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, config_id=edpm, io.buildah.version=1.33.7, version=9.6, vcs-type=git, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., distribution-scope=public, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, container_name=openstack_network_exporter, url=https://catalog.redhat.com/en/search?searchType=containers, vendor=Red Hat, Inc., architecture=x86_64, io.openshift.expose-services=, name=ubi9-minimal, com.redhat.component=ubi9-minimal-container, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., build-date=2025-08-20T13:12:41)
Oct 09 16:22:50 compute-0 nova_compute[117331]: 2025-10-09 16:22:50.968 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 09 16:22:51 compute-0 nova_compute[117331]: 2025-10-09 16:22:51.855 2 DEBUG oslo_concurrency.lockutils [None req-5c240840-8346-4b6c-b1c2-7db22559751c 005258eefa5345409efe13f6a1de22ab c076c42271004e7aa86e84416e33f826 - - default default] Acquiring lock "4537359c-cc98-4f5e-87c4-2410f96f0e44-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:405
Oct 09 16:22:51 compute-0 nova_compute[117331]: 2025-10-09 16:22:51.856 2 DEBUG oslo_concurrency.lockutils [None req-5c240840-8346-4b6c-b1c2-7db22559751c 005258eefa5345409efe13f6a1de22ab c076c42271004e7aa86e84416e33f826 - - default default] Lock "4537359c-cc98-4f5e-87c4-2410f96f0e44-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.001s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:410
Oct 09 16:22:51 compute-0 nova_compute[117331]: 2025-10-09 16:22:51.856 2 DEBUG oslo_concurrency.lockutils [None req-5c240840-8346-4b6c-b1c2-7db22559751c 005258eefa5345409efe13f6a1de22ab c076c42271004e7aa86e84416e33f826 - - default default] Lock "4537359c-cc98-4f5e-87c4-2410f96f0e44-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:424
Oct 09 16:22:52 compute-0 nova_compute[117331]: 2025-10-09 16:22:52.374 2 DEBUG oslo_concurrency.lockutils [None req-5c240840-8346-4b6c-b1c2-7db22559751c 005258eefa5345409efe13f6a1de22ab c076c42271004e7aa86e84416e33f826 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:405
Oct 09 16:22:52 compute-0 nova_compute[117331]: 2025-10-09 16:22:52.375 2 DEBUG oslo_concurrency.lockutils [None req-5c240840-8346-4b6c-b1c2-7db22559751c 005258eefa5345409efe13f6a1de22ab c076c42271004e7aa86e84416e33f826 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:410
Oct 09 16:22:52 compute-0 nova_compute[117331]: 2025-10-09 16:22:52.375 2 DEBUG oslo_concurrency.lockutils [None req-5c240840-8346-4b6c-b1c2-7db22559751c 005258eefa5345409efe13f6a1de22ab c076c42271004e7aa86e84416e33f826 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:424
Oct 09 16:22:52 compute-0 nova_compute[117331]: 2025-10-09 16:22:52.375 2 DEBUG nova.compute.resource_tracker [None req-5c240840-8346-4b6c-b1c2-7db22559751c 005258eefa5345409efe13f6a1de22ab c076c42271004e7aa86e84416e33f826 - - default default] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.12/site-packages/nova/compute/resource_tracker.py:937
Oct 09 16:22:52 compute-0 podman[146172]: 2025-10-09 16:22:52.53741292 +0000 UTC m=+0.118538894 container health_status d615e4f24ff62fbb14be4a63e6da568ec5b22309773c1c66103b6b4de64a8a2f (image=38.102.83.66:5001/podified-master-centos10/openstack-ovn-controller:watcher_latest, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_controller, io.buildah.version=1.41.4, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, tcib_build_tag=watcher_latest, org.label-schema.license=GPLv2, org.label-schema.build-date=20251007, org.label-schema.vendor=CentOS, container_name=ovn_controller, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': '38.102.83.66:5001/podified-master-centos10/openstack-ovn-controller:watcher_latest', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']})
Oct 09 16:22:52 compute-0 nova_compute[117331]: 2025-10-09 16:22:52.579 2 WARNING nova.virt.libvirt.driver [None req-5c240840-8346-4b6c-b1c2-7db22559751c 005258eefa5345409efe13f6a1de22ab c076c42271004e7aa86e84416e33f826 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Oct 09 16:22:52 compute-0 nova_compute[117331]: 2025-10-09 16:22:52.580 2 DEBUG oslo_concurrency.processutils [None req-5c240840-8346-4b6c-b1c2-7db22559751c 005258eefa5345409efe13f6a1de22ab c076c42271004e7aa86e84416e33f826 - - default default] Running cmd (subprocess): env LANG=C uptime execute /usr/lib/python3.12/site-packages/oslo_concurrency/processutils.py:349
Oct 09 16:22:52 compute-0 nova_compute[117331]: 2025-10-09 16:22:52.600 2 DEBUG oslo_concurrency.processutils [None req-5c240840-8346-4b6c-b1c2-7db22559751c 005258eefa5345409efe13f6a1de22ab c076c42271004e7aa86e84416e33f826 - - default default] CMD "env LANG=C uptime" returned: 0 in 0.020s execute /usr/lib/python3.12/site-packages/oslo_concurrency/processutils.py:372
Oct 09 16:22:52 compute-0 nova_compute[117331]: 2025-10-09 16:22:52.601 2 DEBUG nova.compute.resource_tracker [None req-5c240840-8346-4b6c-b1c2-7db22559751c 005258eefa5345409efe13f6a1de22ab c076c42271004e7aa86e84416e33f826 - - default default] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=6162MB free_disk=73.26232147216797GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", 
"product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.12/site-packages/nova/compute/resource_tracker.py:1136
Oct 09 16:22:52 compute-0 nova_compute[117331]: 2025-10-09 16:22:52.601 2 DEBUG oslo_concurrency.lockutils [None req-5c240840-8346-4b6c-b1c2-7db22559751c 005258eefa5345409efe13f6a1de22ab c076c42271004e7aa86e84416e33f826 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:405
Oct 09 16:22:52 compute-0 nova_compute[117331]: 2025-10-09 16:22:52.601 2 DEBUG oslo_concurrency.lockutils [None req-5c240840-8346-4b6c-b1c2-7db22559751c 005258eefa5345409efe13f6a1de22ab c076c42271004e7aa86e84416e33f826 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:410
Oct 09 16:22:52 compute-0 nova_compute[117331]: 2025-10-09 16:22:52.814 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 09 16:22:53 compute-0 nova_compute[117331]: 2025-10-09 16:22:53.642 2 DEBUG nova.compute.resource_tracker [None req-5c240840-8346-4b6c-b1c2-7db22559751c 005258eefa5345409efe13f6a1de22ab c076c42271004e7aa86e84416e33f826 - - default default] Migration for instance 4537359c-cc98-4f5e-87c4-2410f96f0e44 refers to another host's instance! _pair_instances_to_migrations /usr/lib/python3.12/site-packages/nova/compute/resource_tracker.py:979
Oct 09 16:22:54 compute-0 nova_compute[117331]: 2025-10-09 16:22:54.150 2 DEBUG nova.compute.resource_tracker [None req-5c240840-8346-4b6c-b1c2-7db22559751c 005258eefa5345409efe13f6a1de22ab c076c42271004e7aa86e84416e33f826 - - default default] [instance: 4537359c-cc98-4f5e-87c4-2410f96f0e44] Skipping migration as instance is neither resizing nor live-migrating. _update_usage_from_migrations /usr/lib/python3.12/site-packages/nova/compute/resource_tracker.py:1596
Oct 09 16:22:54 compute-0 nova_compute[117331]: 2025-10-09 16:22:54.177 2 DEBUG nova.compute.resource_tracker [None req-5c240840-8346-4b6c-b1c2-7db22559751c 005258eefa5345409efe13f6a1de22ab c076c42271004e7aa86e84416e33f826 - - default default] Migration 32db2a54-1b64-43b7-85cd-77132658a2ef is active on this compute host and has allocations in placement: {'resources': {'VCPU': 1, 'MEMORY_MB': 128, 'DISK_GB': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.12/site-packages/nova/compute/resource_tracker.py:1745
Oct 09 16:22:54 compute-0 nova_compute[117331]: 2025-10-09 16:22:54.178 2 DEBUG nova.compute.resource_tracker [None req-5c240840-8346-4b6c-b1c2-7db22559751c 005258eefa5345409efe13f6a1de22ab c076c42271004e7aa86e84416e33f826 - - default default] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.12/site-packages/nova/compute/resource_tracker.py:1159
Oct 09 16:22:54 compute-0 nova_compute[117331]: 2025-10-09 16:22:54.178 2 DEBUG nova.compute.resource_tracker [None req-5c240840-8346-4b6c-b1c2-7db22559751c 005258eefa5345409efe13f6a1de22ab c076c42271004e7aa86e84416e33f826 - - default default] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=79GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] stats={'failed_builds': '0', 'uptime': ' 16:22:52 up 31 min,  0 user,  load average: 0.63, 0.39, 0.32\n'} _report_final_resource_view /usr/lib/python3.12/site-packages/nova/compute/resource_tracker.py:1168
Oct 09 16:22:54 compute-0 nova_compute[117331]: 2025-10-09 16:22:54.218 2 DEBUG nova.compute.provider_tree [None req-5c240840-8346-4b6c-b1c2-7db22559751c 005258eefa5345409efe13f6a1de22ab c076c42271004e7aa86e84416e33f826 - - default default] Inventory has not changed in ProviderTree for provider: 593051b8-2000-437f-a915-2616fc8b1671 update_inventory /usr/lib/python3.12/site-packages/nova/compute/provider_tree.py:180
Oct 09 16:22:54 compute-0 nova_compute[117331]: 2025-10-09 16:22:54.726 2 DEBUG nova.scheduler.client.report [None req-5c240840-8346-4b6c-b1c2-7db22559751c 005258eefa5345409efe13f6a1de22ab c076c42271004e7aa86e84416e33f826 - - default default] Inventory has not changed for provider 593051b8-2000-437f-a915-2616fc8b1671 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 79, 'reserved': 1, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.12/site-packages/nova/scheduler/client/report.py:958
Oct 09 16:22:55 compute-0 nova_compute[117331]: 2025-10-09 16:22:55.241 2 DEBUG nova.compute.resource_tracker [None req-5c240840-8346-4b6c-b1c2-7db22559751c 005258eefa5345409efe13f6a1de22ab c076c42271004e7aa86e84416e33f826 - - default default] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.12/site-packages/nova/compute/resource_tracker.py:1097
Oct 09 16:22:55 compute-0 nova_compute[117331]: 2025-10-09 16:22:55.241 2 DEBUG oslo_concurrency.lockutils [None req-5c240840-8346-4b6c-b1c2-7db22559751c 005258eefa5345409efe13f6a1de22ab c076c42271004e7aa86e84416e33f826 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 2.640s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:424
Oct 09 16:22:55 compute-0 nova_compute[117331]: 2025-10-09 16:22:55.258 2 INFO nova.compute.manager [None req-5c240840-8346-4b6c-b1c2-7db22559751c 005258eefa5345409efe13f6a1de22ab c076c42271004e7aa86e84416e33f826 - - default default] [instance: 4537359c-cc98-4f5e-87c4-2410f96f0e44] Migrating instance to compute-1.ctlplane.example.com finished successfully.
Oct 09 16:22:55 compute-0 nova_compute[117331]: 2025-10-09 16:22:55.972 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 09 16:22:56 compute-0 nova_compute[117331]: 2025-10-09 16:22:56.326 2 INFO nova.scheduler.client.report [None req-5c240840-8346-4b6c-b1c2-7db22559751c 005258eefa5345409efe13f6a1de22ab c076c42271004e7aa86e84416e33f826 - - default default] Deleted allocation for migration 32db2a54-1b64-43b7-85cd-77132658a2ef
Oct 09 16:22:56 compute-0 nova_compute[117331]: 2025-10-09 16:22:56.327 2 DEBUG nova.virt.libvirt.driver [None req-5c240840-8346-4b6c-b1c2-7db22559751c 005258eefa5345409efe13f6a1de22ab c076c42271004e7aa86e84416e33f826 - - default default] [instance: 4537359c-cc98-4f5e-87c4-2410f96f0e44] Live migration monitoring is all done _live_migration /usr/lib/python3.12/site-packages/nova/virt/libvirt/driver.py:11566
Oct 09 16:22:57 compute-0 nova_compute[117331]: 2025-10-09 16:22:57.815 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 09 16:22:59 compute-0 podman[127775]: time="2025-10-09T16:22:59Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Oct 09 16:22:59 compute-0 podman[127775]: @ - - [09/Oct/2025:16:22:59 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 19526 "" "Go-http-client/1.1"
Oct 09 16:22:59 compute-0 podman[127775]: @ - - [09/Oct/2025:16:22:59 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 3020 "" "Go-http-client/1.1"
Oct 09 16:23:00 compute-0 nova_compute[117331]: 2025-10-09 16:23:00.973 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 09 16:23:01 compute-0 openstack_network_exporter[129925]: ERROR   16:23:01 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Oct 09 16:23:01 compute-0 openstack_network_exporter[129925]: ERROR   16:23:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Oct 09 16:23:01 compute-0 openstack_network_exporter[129925]: ERROR   16:23:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Oct 09 16:23:01 compute-0 openstack_network_exporter[129925]: ERROR   16:23:01 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Oct 09 16:23:01 compute-0 openstack_network_exporter[129925]: ERROR   16:23:01 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Oct 09 16:23:02 compute-0 nova_compute[117331]: 2025-10-09 16:23:02.818 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 09 16:23:03 compute-0 podman[146201]: 2025-10-09 16:23:03.865322601 +0000 UTC m=+0.093844178 container health_status 270830b5167cafbab735d8118980f26987e673cd0fa464dd2202394830bcca66 (image=38.102.83.66:5001/podified-master-centos10/openstack-multipathd:watcher_latest, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=watcher_latest, tcib_managed=true, config_id=multipathd, io.buildah.version=1.41.4, org.label-schema.build-date=20251007, org.label-schema.schema-version=1.0, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': '38.102.83.66:5001/podified-master-centos10/openstack-multipathd:watcher_latest', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, container_name=multipathd, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 10 Base Image)
Oct 09 16:23:06 compute-0 nova_compute[117331]: 2025-10-09 16:23:06.062 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 09 16:23:06 compute-0 ovn_metadata_agent[28608]: 2025-10-09 16:23:06.473 28613 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=15, options={'arp_ns_explicit_output': 'true', 'fdb_removal_limit': '0', 'ignore_lsp_down': 'false', 'mac_binding_removal_limit': '0', 'mac_prefix': '4a:65:5f', 'max_tunid': '16711680', 'northd_internal_version': '24.09.4-20.37.0-77.8', 'svc_monitor_mac': '32:24:1e:e3:5d:e8'}, ipsec=False) old=SB_Global(nb_cfg=14) matches /usr/lib/python3.12/site-packages/ovsdbapp/backend/ovs_idl/event.py:55
Oct 09 16:23:06 compute-0 ovn_metadata_agent[28608]: 2025-10-09 16:23:06.473 28613 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 2 seconds run /usr/lib/python3.12/site-packages/neutron/agent/ovn/metadata/agent.py:367
Oct 09 16:23:06 compute-0 nova_compute[117331]: 2025-10-09 16:23:06.475 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 09 16:23:06 compute-0 podman[146223]: 2025-10-09 16:23:06.871338819 +0000 UTC m=+0.088796978 container health_status 86aa93e887899e54d383b0fc17fb3c5637680d1d8e146a130a557c837f0260fe (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']})
Oct 09 16:23:07 compute-0 nova_compute[117331]: 2025-10-09 16:23:07.820 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 09 16:23:08 compute-0 ovn_metadata_agent[28608]: 2025-10-09 16:23:08.475 28613 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=9954897f-aa83-45dd-8e84-289816676c2a, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '15'}),), if_exists=True) do_commit /usr/lib/python3.12/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 09 16:23:11 compute-0 nova_compute[117331]: 2025-10-09 16:23:11.063 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 09 16:23:12 compute-0 nova_compute[117331]: 2025-10-09 16:23:12.822 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 09 16:23:12 compute-0 podman[146250]: 2025-10-09 16:23:12.853451441 +0000 UTC m=+0.073285804 container health_status 8227728d23f374cb9528be6ddf01dff38d8c792912a17b0d137932a18390b820 (image=38.102.83.66:5001/podified-master-centos10/openstack-iscsid:watcher_latest, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, container_name=iscsid, org.label-schema.name=CentOS Stream 10 Base Image, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': '38.102.83.66:5001/podified-master-centos10/openstack-iscsid:watcher_latest', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, config_id=iscsid, org.label-schema.build-date=20251007, tcib_build_tag=watcher_latest, io.buildah.version=1.41.4, managed_by=edpm_ansible)
Oct 09 16:23:12 compute-0 podman[146249]: 2025-10-09 16:23:12.864855444 +0000 UTC m=+0.087432264 container health_status 0e966c7a283689daf23c5db5017f23f685ee836319240188a9f70ddd246563c5 (image=38.102.83.66:5001/podified-master-centos10/openstack-neutron-metadata-agent-ovn:watcher_latest, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': '38.102.83.66:5001/podified-master-centos10/openstack-neutron-metadata-agent-ovn:watcher_latest', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 10 Base Image, tcib_build_tag=watcher_latest, 
container_name=ovn_metadata_agent, io.buildah.version=1.41.4, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251007, config_id=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.schema-version=1.0)
Oct 09 16:23:14 compute-0 nova_compute[117331]: 2025-10-09 16:23:14.307 2 DEBUG oslo_service.periodic_task [None req-92a13bed-c081-4c7b-87f3-bafebefb79f2 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.12/site-packages/oslo_service/periodic_task.py:210
Oct 09 16:23:16 compute-0 nova_compute[117331]: 2025-10-09 16:23:16.065 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 09 16:23:16 compute-0 nova_compute[117331]: 2025-10-09 16:23:16.307 2 DEBUG oslo_service.periodic_task [None req-92a13bed-c081-4c7b-87f3-bafebefb79f2 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.12/site-packages/oslo_service/periodic_task.py:210
Oct 09 16:23:17 compute-0 nova_compute[117331]: 2025-10-09 16:23:17.837 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 09 16:23:18 compute-0 nova_compute[117331]: 2025-10-09 16:23:18.307 2 DEBUG oslo_service.periodic_task [None req-92a13bed-c081-4c7b-87f3-bafebefb79f2 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.12/site-packages/oslo_service/periodic_task.py:210
Oct 09 16:23:18 compute-0 nova_compute[117331]: 2025-10-09 16:23:18.307 2 DEBUG nova.compute.manager [None req-92a13bed-c081-4c7b-87f3-bafebefb79f2 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.12/site-packages/nova/compute/manager.py:11228
Oct 09 16:23:19 compute-0 nova_compute[117331]: 2025-10-09 16:23:19.306 2 DEBUG oslo_service.periodic_task [None req-92a13bed-c081-4c7b-87f3-bafebefb79f2 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.12/site-packages/oslo_service/periodic_task.py:210
Oct 09 16:23:19 compute-0 nova_compute[117331]: 2025-10-09 16:23:19.819 2 DEBUG oslo_concurrency.lockutils [None req-92a13bed-c081-4c7b-87f3-bafebefb79f2 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:405
Oct 09 16:23:19 compute-0 nova_compute[117331]: 2025-10-09 16:23:19.819 2 DEBUG oslo_concurrency.lockutils [None req-92a13bed-c081-4c7b-87f3-bafebefb79f2 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:410
Oct 09 16:23:19 compute-0 nova_compute[117331]: 2025-10-09 16:23:19.820 2 DEBUG oslo_concurrency.lockutils [None req-92a13bed-c081-4c7b-87f3-bafebefb79f2 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:424
Oct 09 16:23:19 compute-0 nova_compute[117331]: 2025-10-09 16:23:19.820 2 DEBUG nova.compute.resource_tracker [None req-92a13bed-c081-4c7b-87f3-bafebefb79f2 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.12/site-packages/nova/compute/resource_tracker.py:937
Oct 09 16:23:19 compute-0 nova_compute[117331]: 2025-10-09 16:23:19.955 2 WARNING nova.virt.libvirt.driver [None req-92a13bed-c081-4c7b-87f3-bafebefb79f2 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Oct 09 16:23:19 compute-0 nova_compute[117331]: 2025-10-09 16:23:19.956 2 DEBUG oslo_concurrency.processutils [None req-92a13bed-c081-4c7b-87f3-bafebefb79f2 - - - - - -] Running cmd (subprocess): env LANG=C uptime execute /usr/lib/python3.12/site-packages/oslo_concurrency/processutils.py:349
Oct 09 16:23:19 compute-0 nova_compute[117331]: 2025-10-09 16:23:19.980 2 DEBUG oslo_concurrency.processutils [None req-92a13bed-c081-4c7b-87f3-bafebefb79f2 - - - - - -] CMD "env LANG=C uptime" returned: 0 in 0.023s execute /usr/lib/python3.12/site-packages/oslo_concurrency/processutils.py:372
Oct 09 16:23:19 compute-0 nova_compute[117331]: 2025-10-09 16:23:19.981 2 DEBUG nova.compute.resource_tracker [None req-92a13bed-c081-4c7b-87f3-bafebefb79f2 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=6181MB free_disk=73.26232147216797GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.12/site-packages/nova/compute/resource_tracker.py:1136
Oct 09 16:23:19 compute-0 nova_compute[117331]: 2025-10-09 16:23:19.981 2 DEBUG oslo_concurrency.lockutils [None req-92a13bed-c081-4c7b-87f3-bafebefb79f2 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:405
Oct 09 16:23:19 compute-0 nova_compute[117331]: 2025-10-09 16:23:19.981 2 DEBUG oslo_concurrency.lockutils [None req-92a13bed-c081-4c7b-87f3-bafebefb79f2 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:410
Oct 09 16:23:21 compute-0 nova_compute[117331]: 2025-10-09 16:23:21.025 2 DEBUG nova.compute.resource_tracker [None req-92a13bed-c081-4c7b-87f3-bafebefb79f2 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.12/site-packages/nova/compute/resource_tracker.py:1159
Oct 09 16:23:21 compute-0 nova_compute[117331]: 2025-10-09 16:23:21.026 2 DEBUG nova.compute.resource_tracker [None req-92a13bed-c081-4c7b-87f3-bafebefb79f2 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=79GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] stats={'failed_builds': '0', 'uptime': ' 16:23:19 up 32 min,  0 user,  load average: 0.48, 0.38, 0.31\n'} _report_final_resource_view /usr/lib/python3.12/site-packages/nova/compute/resource_tracker.py:1168
Oct 09 16:23:21 compute-0 nova_compute[117331]: 2025-10-09 16:23:21.068 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 09 16:23:21 compute-0 nova_compute[117331]: 2025-10-09 16:23:21.095 2 DEBUG nova.compute.provider_tree [None req-92a13bed-c081-4c7b-87f3-bafebefb79f2 - - - - - -] Inventory has not changed in ProviderTree for provider: 593051b8-2000-437f-a915-2616fc8b1671 update_inventory /usr/lib/python3.12/site-packages/nova/compute/provider_tree.py:180
Oct 09 16:23:21 compute-0 nova_compute[117331]: 2025-10-09 16:23:21.602 2 DEBUG nova.scheduler.client.report [None req-92a13bed-c081-4c7b-87f3-bafebefb79f2 - - - - - -] Inventory has not changed for provider 593051b8-2000-437f-a915-2616fc8b1671 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 79, 'reserved': 1, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.12/site-packages/nova/scheduler/client/report.py:958
Oct 09 16:23:21 compute-0 podman[146287]: 2025-10-09 16:23:21.851080382 +0000 UTC m=+0.083664704 container health_status 64fa251112d01abb34ec1bb8e22019815ddcaaaa391ce5ec91517147ac5a9994 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, vendor=Red Hat, Inc., url=https://catalog.redhat.com/en/search?searchType=containers, architecture=x86_64, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, release=1755695350, vcs-type=git, managed_by=edpm_ansible, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.expose-services=, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., distribution-scope=public, build-date=2025-08-20T13:12:41, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, version=9.6, io.buildah.version=1.33.7, container_name=openstack_network_exporter, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, maintainer=Red Hat, Inc., name=ubi9-minimal, com.redhat.component=ubi9-minimal-container, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, config_id=edpm, io.openshift.tags=minimal rhel9)
Oct 09 16:23:22 compute-0 nova_compute[117331]: 2025-10-09 16:23:22.113 2 DEBUG nova.compute.resource_tracker [None req-92a13bed-c081-4c7b-87f3-bafebefb79f2 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.12/site-packages/nova/compute/resource_tracker.py:1097
Oct 09 16:23:22 compute-0 nova_compute[117331]: 2025-10-09 16:23:22.113 2 DEBUG oslo_concurrency.lockutils [None req-92a13bed-c081-4c7b-87f3-bafebefb79f2 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 2.132s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:424
Oct 09 16:23:22 compute-0 nova_compute[117331]: 2025-10-09 16:23:22.840 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 09 16:23:22 compute-0 podman[146308]: 2025-10-09 16:23:22.864499648 +0000 UTC m=+0.094482068 container health_status d615e4f24ff62fbb14be4a63e6da568ec5b22309773c1c66103b6b4de64a8a2f (image=38.102.83.66:5001/podified-master-centos10/openstack-ovn-controller:watcher_latest, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.41.4, org.label-schema.build-date=20251007, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': '38.102.83.66:5001/podified-master-centos10/openstack-ovn-controller:watcher_latest', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=watcher_latest, tcib_managed=true, container_name=ovn_controller)
Oct 09 16:23:26 compute-0 nova_compute[117331]: 2025-10-09 16:23:26.070 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 09 16:23:26 compute-0 nova_compute[117331]: 2025-10-09 16:23:26.109 2 DEBUG oslo_service.periodic_task [None req-92a13bed-c081-4c7b-87f3-bafebefb79f2 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.12/site-packages/oslo_service/periodic_task.py:210
Oct 09 16:23:26 compute-0 nova_compute[117331]: 2025-10-09 16:23:26.110 2 DEBUG oslo_service.periodic_task [None req-92a13bed-c081-4c7b-87f3-bafebefb79f2 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.12/site-packages/oslo_service/periodic_task.py:210
Oct 09 16:23:26 compute-0 nova_compute[117331]: 2025-10-09 16:23:26.110 2 DEBUG oslo_service.periodic_task [None req-92a13bed-c081-4c7b-87f3-bafebefb79f2 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.12/site-packages/oslo_service/periodic_task.py:210
Oct 09 16:23:26 compute-0 nova_compute[117331]: 2025-10-09 16:23:26.110 2 DEBUG oslo_service.periodic_task [None req-92a13bed-c081-4c7b-87f3-bafebefb79f2 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.12/site-packages/oslo_service/periodic_task.py:210
Oct 09 16:23:26 compute-0 nova_compute[117331]: 2025-10-09 16:23:26.110 2 DEBUG oslo_service.periodic_task [None req-92a13bed-c081-4c7b-87f3-bafebefb79f2 - - - - - -] Running periodic task ComputeManager._sync_power_states run_periodic_tasks /usr/lib/python3.12/site-packages/oslo_service/periodic_task.py:210
Oct 09 16:23:26 compute-0 nova_compute[117331]: 2025-10-09 16:23:26.948 2 DEBUG nova.compute.manager [None req-f599fd60-c0b8-467e-91a4-2067ee798776 769bfcc46cd44cde8622c2a2d5e02dbc b30e8cf5e10742f190212b4cb97ce2c9 - - default default] Removing trait COMPUTE_STATUS_DISABLED from compute node resource provider 593051b8-2000-437f-a915-2616fc8b1671 in placement. update_compute_provider_status /usr/lib/python3.12/site-packages/nova/compute/manager.py:631
Oct 09 16:23:26 compute-0 nova_compute[117331]: 2025-10-09 16:23:26.990 2 DEBUG nova.compute.provider_tree [None req-f599fd60-c0b8-467e-91a4-2067ee798776 769bfcc46cd44cde8622c2a2d5e02dbc b30e8cf5e10742f190212b4cb97ce2c9 - - default default] Updating resource provider 593051b8-2000-437f-a915-2616fc8b1671 generation from 18 to 21 during operation: update_traits _update_generation /usr/lib/python3.12/site-packages/nova/compute/provider_tree.py:164
Oct 09 16:23:27 compute-0 nova_compute[117331]: 2025-10-09 16:23:27.843 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 09 16:23:29 compute-0 podman[127775]: time="2025-10-09T16:23:29Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Oct 09 16:23:29 compute-0 podman[127775]: @ - - [09/Oct/2025:16:23:29 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 19526 "" "Go-http-client/1.1"
Oct 09 16:23:29 compute-0 podman[127775]: @ - - [09/Oct/2025:16:23:29 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 3025 "" "Go-http-client/1.1"
Oct 09 16:23:29 compute-0 nova_compute[117331]: 2025-10-09 16:23:29.973 2 DEBUG oslo_concurrency.lockutils [None req-52699a02-a5fe-4aa1-a194-537b5ec0b479 2ed3ac10329446d8a5aae566951cae1e c47758289b4c448c95f927a91d89e09f - - default default] Acquiring lock "7531b461-59ec-4261-8f6f-125c07fbf626" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:405
Oct 09 16:23:29 compute-0 nova_compute[117331]: 2025-10-09 16:23:29.973 2 DEBUG oslo_concurrency.lockutils [None req-52699a02-a5fe-4aa1-a194-537b5ec0b479 2ed3ac10329446d8a5aae566951cae1e c47758289b4c448c95f927a91d89e09f - - default default] Lock "7531b461-59ec-4261-8f6f-125c07fbf626" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.000s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:410
Oct 09 16:23:30 compute-0 nova_compute[117331]: 2025-10-09 16:23:30.478 2 DEBUG nova.compute.manager [None req-52699a02-a5fe-4aa1-a194-537b5ec0b479 2ed3ac10329446d8a5aae566951cae1e c47758289b4c448c95f927a91d89e09f - - default default] [instance: 7531b461-59ec-4261-8f6f-125c07fbf626] Starting instance... _do_build_and_run_instance /usr/lib/python3.12/site-packages/nova/compute/manager.py:2472
Oct 09 16:23:31 compute-0 nova_compute[117331]: 2025-10-09 16:23:31.018 2 DEBUG oslo_concurrency.lockutils [None req-52699a02-a5fe-4aa1-a194-537b5ec0b479 2ed3ac10329446d8a5aae566951cae1e c47758289b4c448c95f927a91d89e09f - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:405
Oct 09 16:23:31 compute-0 nova_compute[117331]: 2025-10-09 16:23:31.019 2 DEBUG oslo_concurrency.lockutils [None req-52699a02-a5fe-4aa1-a194-537b5ec0b479 2ed3ac10329446d8a5aae566951cae1e c47758289b4c448c95f927a91d89e09f - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.000s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:410
Oct 09 16:23:31 compute-0 nova_compute[117331]: 2025-10-09 16:23:31.025 2 DEBUG nova.virt.hardware [None req-52699a02-a5fe-4aa1-a194-537b5ec0b479 2ed3ac10329446d8a5aae566951cae1e c47758289b4c448c95f927a91d89e09f - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.12/site-packages/nova/virt/hardware.py:2528
Oct 09 16:23:31 compute-0 nova_compute[117331]: 2025-10-09 16:23:31.025 2 INFO nova.compute.claims [None req-52699a02-a5fe-4aa1-a194-537b5ec0b479 2ed3ac10329446d8a5aae566951cae1e c47758289b4c448c95f927a91d89e09f - - default default] [instance: 7531b461-59ec-4261-8f6f-125c07fbf626] Claim successful on node compute-0.ctlplane.example.com
Oct 09 16:23:31 compute-0 nova_compute[117331]: 2025-10-09 16:23:31.071 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 09 16:23:31 compute-0 openstack_network_exporter[129925]: ERROR   16:23:31 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Oct 09 16:23:31 compute-0 openstack_network_exporter[129925]: ERROR   16:23:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Oct 09 16:23:31 compute-0 openstack_network_exporter[129925]: ERROR   16:23:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Oct 09 16:23:31 compute-0 openstack_network_exporter[129925]: ERROR   16:23:31 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Oct 09 16:23:31 compute-0 openstack_network_exporter[129925]: ERROR   16:23:31 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Oct 09 16:23:32 compute-0 nova_compute[117331]: 2025-10-09 16:23:32.082 2 DEBUG nova.compute.provider_tree [None req-52699a02-a5fe-4aa1-a194-537b5ec0b479 2ed3ac10329446d8a5aae566951cae1e c47758289b4c448c95f927a91d89e09f - - default default] Inventory has not changed in ProviderTree for provider: 593051b8-2000-437f-a915-2616fc8b1671 update_inventory /usr/lib/python3.12/site-packages/nova/compute/provider_tree.py:180
Oct 09 16:23:32 compute-0 nova_compute[117331]: 2025-10-09 16:23:32.593 2 DEBUG nova.scheduler.client.report [None req-52699a02-a5fe-4aa1-a194-537b5ec0b479 2ed3ac10329446d8a5aae566951cae1e c47758289b4c448c95f927a91d89e09f - - default default] Inventory has not changed for provider 593051b8-2000-437f-a915-2616fc8b1671 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 79, 'reserved': 1, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.12/site-packages/nova/scheduler/client/report.py:958
Oct 09 16:23:32 compute-0 nova_compute[117331]: 2025-10-09 16:23:32.844 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 09 16:23:33 compute-0 nova_compute[117331]: 2025-10-09 16:23:33.103 2 DEBUG oslo_concurrency.lockutils [None req-52699a02-a5fe-4aa1-a194-537b5ec0b479 2ed3ac10329446d8a5aae566951cae1e c47758289b4c448c95f927a91d89e09f - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 2.084s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:424
Oct 09 16:23:33 compute-0 nova_compute[117331]: 2025-10-09 16:23:33.104 2 DEBUG nova.compute.manager [None req-52699a02-a5fe-4aa1-a194-537b5ec0b479 2ed3ac10329446d8a5aae566951cae1e c47758289b4c448c95f927a91d89e09f - - default default] [instance: 7531b461-59ec-4261-8f6f-125c07fbf626] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.12/site-packages/nova/compute/manager.py:2869
Oct 09 16:23:33 compute-0 nova_compute[117331]: 2025-10-09 16:23:33.615 2 DEBUG nova.compute.manager [None req-52699a02-a5fe-4aa1-a194-537b5ec0b479 2ed3ac10329446d8a5aae566951cae1e c47758289b4c448c95f927a91d89e09f - - default default] [instance: 7531b461-59ec-4261-8f6f-125c07fbf626] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.12/site-packages/nova/compute/manager.py:2016
Oct 09 16:23:33 compute-0 nova_compute[117331]: 2025-10-09 16:23:33.616 2 DEBUG nova.network.neutron [None req-52699a02-a5fe-4aa1-a194-537b5ec0b479 2ed3ac10329446d8a5aae566951cae1e c47758289b4c448c95f927a91d89e09f - - default default] [instance: 7531b461-59ec-4261-8f6f-125c07fbf626] allocate_for_instance() allocate_for_instance /usr/lib/python3.12/site-packages/nova/network/neutron.py:1208
Oct 09 16:23:33 compute-0 nova_compute[117331]: 2025-10-09 16:23:33.616 2 WARNING neutronclient.v2_0.client [None req-52699a02-a5fe-4aa1-a194-537b5ec0b479 2ed3ac10329446d8a5aae566951cae1e c47758289b4c448c95f927a91d89e09f - - default default] The python binding code in neutronclient is deprecated in favor of OpenstackSDK, please use that as this will be removed in a future release.
Oct 09 16:23:33 compute-0 nova_compute[117331]: 2025-10-09 16:23:33.617 2 WARNING neutronclient.v2_0.client [None req-52699a02-a5fe-4aa1-a194-537b5ec0b479 2ed3ac10329446d8a5aae566951cae1e c47758289b4c448c95f927a91d89e09f - - default default] The python binding code in neutronclient is deprecated in favor of OpenstackSDK, please use that as this will be removed in a future release.
Oct 09 16:23:34 compute-0 nova_compute[117331]: 2025-10-09 16:23:34.126 2 INFO nova.virt.libvirt.driver [None req-52699a02-a5fe-4aa1-a194-537b5ec0b479 2ed3ac10329446d8a5aae566951cae1e c47758289b4c448c95f927a91d89e09f - - default default] [instance: 7531b461-59ec-4261-8f6f-125c07fbf626] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Oct 09 16:23:34 compute-0 nova_compute[117331]: 2025-10-09 16:23:34.375 2 DEBUG nova.network.neutron [None req-52699a02-a5fe-4aa1-a194-537b5ec0b479 2ed3ac10329446d8a5aae566951cae1e c47758289b4c448c95f927a91d89e09f - - default default] [instance: 7531b461-59ec-4261-8f6f-125c07fbf626] Successfully created port: 15a9c820-cacb-46bc-acd6-e8ed3b08432a _create_port_minimal /usr/lib/python3.12/site-packages/nova/network/neutron.py:550
Oct 09 16:23:34 compute-0 nova_compute[117331]: 2025-10-09 16:23:34.636 2 DEBUG nova.compute.manager [None req-52699a02-a5fe-4aa1-a194-537b5ec0b479 2ed3ac10329446d8a5aae566951cae1e c47758289b4c448c95f927a91d89e09f - - default default] [instance: 7531b461-59ec-4261-8f6f-125c07fbf626] Start building block device mappings for instance. _build_resources /usr/lib/python3.12/site-packages/nova/compute/manager.py:2904
Oct 09 16:23:34 compute-0 podman[146334]: 2025-10-09 16:23:34.827553655 +0000 UTC m=+0.063315037 container health_status 270830b5167cafbab735d8118980f26987e673cd0fa464dd2202394830bcca66 (image=38.102.83.66:5001/podified-master-centos10/openstack-multipathd:watcher_latest, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 10 Base Image, tcib_build_tag=watcher_latest, tcib_managed=true, io.buildah.version=1.41.4, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251007, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': '38.102.83.66:5001/podified-master-centos10/openstack-multipathd:watcher_latest', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, container_name=multipathd)
Oct 09 16:23:35 compute-0 ovn_metadata_agent[28608]: 2025-10-09 16:23:35.308 28613 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:405
Oct 09 16:23:35 compute-0 ovn_metadata_agent[28608]: 2025-10-09 16:23:35.309 28613 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.000s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:410
Oct 09 16:23:35 compute-0 ovn_metadata_agent[28608]: 2025-10-09 16:23:35.309 28613 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:424
Oct 09 16:23:35 compute-0 nova_compute[117331]: 2025-10-09 16:23:35.344 2 DEBUG nova.network.neutron [None req-52699a02-a5fe-4aa1-a194-537b5ec0b479 2ed3ac10329446d8a5aae566951cae1e c47758289b4c448c95f927a91d89e09f - - default default] [instance: 7531b461-59ec-4261-8f6f-125c07fbf626] Successfully updated port: 15a9c820-cacb-46bc-acd6-e8ed3b08432a _update_port /usr/lib/python3.12/site-packages/nova/network/neutron.py:588
Oct 09 16:23:35 compute-0 nova_compute[117331]: 2025-10-09 16:23:35.392 2 DEBUG nova.compute.manager [req-39c5acb3-0f8b-4901-9a1a-e4814b5fc60b req-9213b20c-7900-4c5b-8c1f-7a1c46f46483 ee15961ad2be4bada6c106ac4f69f422 c076c42271004e7aa86e84416e33f826 - - default default] [instance: 7531b461-59ec-4261-8f6f-125c07fbf626] Received event network-changed-15a9c820-cacb-46bc-acd6-e8ed3b08432a external_instance_event /usr/lib/python3.12/site-packages/nova/compute/manager.py:11812
Oct 09 16:23:35 compute-0 nova_compute[117331]: 2025-10-09 16:23:35.392 2 DEBUG nova.compute.manager [req-39c5acb3-0f8b-4901-9a1a-e4814b5fc60b req-9213b20c-7900-4c5b-8c1f-7a1c46f46483 ee15961ad2be4bada6c106ac4f69f422 c076c42271004e7aa86e84416e33f826 - - default default] [instance: 7531b461-59ec-4261-8f6f-125c07fbf626] Refreshing instance network info cache due to event network-changed-15a9c820-cacb-46bc-acd6-e8ed3b08432a. external_instance_event /usr/lib/python3.12/site-packages/nova/compute/manager.py:11817
Oct 09 16:23:35 compute-0 nova_compute[117331]: 2025-10-09 16:23:35.393 2 DEBUG oslo_concurrency.lockutils [req-39c5acb3-0f8b-4901-9a1a-e4814b5fc60b req-9213b20c-7900-4c5b-8c1f-7a1c46f46483 ee15961ad2be4bada6c106ac4f69f422 c076c42271004e7aa86e84416e33f826 - - default default] Acquiring lock "refresh_cache-7531b461-59ec-4261-8f6f-125c07fbf626" lock /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:313
Oct 09 16:23:35 compute-0 nova_compute[117331]: 2025-10-09 16:23:35.393 2 DEBUG oslo_concurrency.lockutils [req-39c5acb3-0f8b-4901-9a1a-e4814b5fc60b req-9213b20c-7900-4c5b-8c1f-7a1c46f46483 ee15961ad2be4bada6c106ac4f69f422 c076c42271004e7aa86e84416e33f826 - - default default] Acquired lock "refresh_cache-7531b461-59ec-4261-8f6f-125c07fbf626" lock /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:316
Oct 09 16:23:35 compute-0 nova_compute[117331]: 2025-10-09 16:23:35.393 2 DEBUG nova.network.neutron [req-39c5acb3-0f8b-4901-9a1a-e4814b5fc60b req-9213b20c-7900-4c5b-8c1f-7a1c46f46483 ee15961ad2be4bada6c106ac4f69f422 c076c42271004e7aa86e84416e33f826 - - default default] [instance: 7531b461-59ec-4261-8f6f-125c07fbf626] Refreshing network info cache for port 15a9c820-cacb-46bc-acd6-e8ed3b08432a _get_instance_nw_info /usr/lib/python3.12/site-packages/nova/network/neutron.py:2067
Oct 09 16:23:35 compute-0 nova_compute[117331]: 2025-10-09 16:23:35.658 2 DEBUG nova.compute.manager [None req-52699a02-a5fe-4aa1-a194-537b5ec0b479 2ed3ac10329446d8a5aae566951cae1e c47758289b4c448c95f927a91d89e09f - - default default] [instance: 7531b461-59ec-4261-8f6f-125c07fbf626] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.12/site-packages/nova/compute/manager.py:2678
Oct 09 16:23:35 compute-0 nova_compute[117331]: 2025-10-09 16:23:35.660 2 DEBUG nova.virt.libvirt.driver [None req-52699a02-a5fe-4aa1-a194-537b5ec0b479 2ed3ac10329446d8a5aae566951cae1e c47758289b4c448c95f927a91d89e09f - - default default] [instance: 7531b461-59ec-4261-8f6f-125c07fbf626] Creating instance directory _create_image /usr/lib/python3.12/site-packages/nova/virt/libvirt/driver.py:5138
Oct 09 16:23:35 compute-0 nova_compute[117331]: 2025-10-09 16:23:35.660 2 INFO nova.virt.libvirt.driver [None req-52699a02-a5fe-4aa1-a194-537b5ec0b479 2ed3ac10329446d8a5aae566951cae1e c47758289b4c448c95f927a91d89e09f - - default default] [instance: 7531b461-59ec-4261-8f6f-125c07fbf626] Creating image(s)
Oct 09 16:23:35 compute-0 nova_compute[117331]: 2025-10-09 16:23:35.661 2 DEBUG oslo_concurrency.lockutils [None req-52699a02-a5fe-4aa1-a194-537b5ec0b479 2ed3ac10329446d8a5aae566951cae1e c47758289b4c448c95f927a91d89e09f - - default default] Acquiring lock "/var/lib/nova/instances/7531b461-59ec-4261-8f6f-125c07fbf626/disk.info" by "nova.virt.libvirt.imagebackend.Image.resolve_driver_format.<locals>.write_to_disk_info_file" inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:405
Oct 09 16:23:35 compute-0 nova_compute[117331]: 2025-10-09 16:23:35.661 2 DEBUG oslo_concurrency.lockutils [None req-52699a02-a5fe-4aa1-a194-537b5ec0b479 2ed3ac10329446d8a5aae566951cae1e c47758289b4c448c95f927a91d89e09f - - default default] Lock "/var/lib/nova/instances/7531b461-59ec-4261-8f6f-125c07fbf626/disk.info" acquired by "nova.virt.libvirt.imagebackend.Image.resolve_driver_format.<locals>.write_to_disk_info_file" :: waited 0.000s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:410
Oct 09 16:23:35 compute-0 nova_compute[117331]: 2025-10-09 16:23:35.662 2 DEBUG oslo_concurrency.lockutils [None req-52699a02-a5fe-4aa1-a194-537b5ec0b479 2ed3ac10329446d8a5aae566951cae1e c47758289b4c448c95f927a91d89e09f - - default default] Lock "/var/lib/nova/instances/7531b461-59ec-4261-8f6f-125c07fbf626/disk.info" "released" by "nova.virt.libvirt.imagebackend.Image.resolve_driver_format.<locals>.write_to_disk_info_file" :: held 0.001s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:424
Oct 09 16:23:35 compute-0 nova_compute[117331]: 2025-10-09 16:23:35.663 2 DEBUG oslo_utils.imageutils.format_inspector [None req-52699a02-a5fe-4aa1-a194-537b5ec0b479 2ed3ac10329446d8a5aae566951cae1e c47758289b4c448c95f927a91d89e09f - - default default] Format inspector for vmdk does not match, excluding from consideration (Signature KDMV not found: b'\xebH\x90\x00') _process_chunk /usr/lib/python3.12/site-packages/oslo_utils/imageutils/format_inspector.py:1365
Oct 09 16:23:35 compute-0 nova_compute[117331]: 2025-10-09 16:23:35.666 2 DEBUG oslo_utils.imageutils.format_inspector [None req-52699a02-a5fe-4aa1-a194-537b5ec0b479 2ed3ac10329446d8a5aae566951cae1e c47758289b4c448c95f927a91d89e09f - - default default] Format inspector for vhdx does not match, excluding from consideration (Region signature not found at 30000) _process_chunk /usr/lib/python3.12/site-packages/oslo_utils/imageutils/format_inspector.py:1365
Oct 09 16:23:35 compute-0 nova_compute[117331]: 2025-10-09 16:23:35.668 2 DEBUG oslo_concurrency.processutils [None req-52699a02-a5fe-4aa1-a194-537b5ec0b479 2ed3ac10329446d8a5aae566951cae1e c47758289b4c448c95f927a91d89e09f - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/cea3dacfdea0f3734ae526b812744e847bc2d356 --force-share --output=json execute /usr/lib/python3.12/site-packages/oslo_concurrency/processutils.py:349
Oct 09 16:23:35 compute-0 nova_compute[117331]: 2025-10-09 16:23:35.761 2 DEBUG oslo_concurrency.processutils [None req-52699a02-a5fe-4aa1-a194-537b5ec0b479 2ed3ac10329446d8a5aae566951cae1e c47758289b4c448c95f927a91d89e09f - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/cea3dacfdea0f3734ae526b812744e847bc2d356 --force-share --output=json" returned: 0 in 0.093s execute /usr/lib/python3.12/site-packages/oslo_concurrency/processutils.py:372
Oct 09 16:23:35 compute-0 nova_compute[117331]: 2025-10-09 16:23:35.762 2 DEBUG oslo_concurrency.lockutils [None req-52699a02-a5fe-4aa1-a194-537b5ec0b479 2ed3ac10329446d8a5aae566951cae1e c47758289b4c448c95f927a91d89e09f - - default default] Acquiring lock "cea3dacfdea0f3734ae526b812744e847bc2d356" by "nova.virt.libvirt.imagebackend.Qcow2.create_image.<locals>.create_qcow2_image" inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:405
Oct 09 16:23:35 compute-0 nova_compute[117331]: 2025-10-09 16:23:35.763 2 DEBUG oslo_concurrency.lockutils [None req-52699a02-a5fe-4aa1-a194-537b5ec0b479 2ed3ac10329446d8a5aae566951cae1e c47758289b4c448c95f927a91d89e09f - - default default] Lock "cea3dacfdea0f3734ae526b812744e847bc2d356" acquired by "nova.virt.libvirt.imagebackend.Qcow2.create_image.<locals>.create_qcow2_image" :: waited 0.001s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:410
Oct 09 16:23:35 compute-0 nova_compute[117331]: 2025-10-09 16:23:35.764 2 DEBUG oslo_utils.imageutils.format_inspector [None req-52699a02-a5fe-4aa1-a194-537b5ec0b479 2ed3ac10329446d8a5aae566951cae1e c47758289b4c448c95f927a91d89e09f - - default default] Format inspector for vmdk does not match, excluding from consideration (Signature KDMV not found: b'\xebH\x90\x00') _process_chunk /usr/lib/python3.12/site-packages/oslo_utils/imageutils/format_inspector.py:1365
Oct 09 16:23:35 compute-0 nova_compute[117331]: 2025-10-09 16:23:35.768 2 DEBUG oslo_utils.imageutils.format_inspector [None req-52699a02-a5fe-4aa1-a194-537b5ec0b479 2ed3ac10329446d8a5aae566951cae1e c47758289b4c448c95f927a91d89e09f - - default default] Format inspector for vhdx does not match, excluding from consideration (Region signature not found at 30000) _process_chunk /usr/lib/python3.12/site-packages/oslo_utils/imageutils/format_inspector.py:1365
Oct 09 16:23:35 compute-0 nova_compute[117331]: 2025-10-09 16:23:35.769 2 DEBUG oslo_concurrency.processutils [None req-52699a02-a5fe-4aa1-a194-537b5ec0b479 2ed3ac10329446d8a5aae566951cae1e c47758289b4c448c95f927a91d89e09f - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/cea3dacfdea0f3734ae526b812744e847bc2d356 --force-share --output=json execute /usr/lib/python3.12/site-packages/oslo_concurrency/processutils.py:349
Oct 09 16:23:35 compute-0 nova_compute[117331]: 2025-10-09 16:23:35.844 2 DEBUG oslo_concurrency.processutils [None req-52699a02-a5fe-4aa1-a194-537b5ec0b479 2ed3ac10329446d8a5aae566951cae1e c47758289b4c448c95f927a91d89e09f - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/cea3dacfdea0f3734ae526b812744e847bc2d356 --force-share --output=json" returned: 0 in 0.075s execute /usr/lib/python3.12/site-packages/oslo_concurrency/processutils.py:372
Oct 09 16:23:35 compute-0 nova_compute[117331]: 2025-10-09 16:23:35.845 2 DEBUG oslo_concurrency.processutils [None req-52699a02-a5fe-4aa1-a194-537b5ec0b479 2ed3ac10329446d8a5aae566951cae1e c47758289b4c448c95f927a91d89e09f - - default default] Running cmd (subprocess): env LC_ALL=C LANG=C qemu-img create -f qcow2 -o backing_file=/var/lib/nova/instances/_base/cea3dacfdea0f3734ae526b812744e847bc2d356,backing_fmt=raw /var/lib/nova/instances/7531b461-59ec-4261-8f6f-125c07fbf626/disk 1073741824 execute /usr/lib/python3.12/site-packages/oslo_concurrency/processutils.py:349
Oct 09 16:23:35 compute-0 nova_compute[117331]: 2025-10-09 16:23:35.855 2 DEBUG oslo_concurrency.lockutils [None req-52699a02-a5fe-4aa1-a194-537b5ec0b479 2ed3ac10329446d8a5aae566951cae1e c47758289b4c448c95f927a91d89e09f - - default default] Acquiring lock "refresh_cache-7531b461-59ec-4261-8f6f-125c07fbf626" lock /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:313
Oct 09 16:23:35 compute-0 nova_compute[117331]: 2025-10-09 16:23:35.884 2 DEBUG oslo_concurrency.processutils [None req-52699a02-a5fe-4aa1-a194-537b5ec0b479 2ed3ac10329446d8a5aae566951cae1e c47758289b4c448c95f927a91d89e09f - - default default] CMD "env LC_ALL=C LANG=C qemu-img create -f qcow2 -o backing_file=/var/lib/nova/instances/_base/cea3dacfdea0f3734ae526b812744e847bc2d356,backing_fmt=raw /var/lib/nova/instances/7531b461-59ec-4261-8f6f-125c07fbf626/disk 1073741824" returned: 0 in 0.038s execute /usr/lib/python3.12/site-packages/oslo_concurrency/processutils.py:372
Oct 09 16:23:35 compute-0 nova_compute[117331]: 2025-10-09 16:23:35.885 2 DEBUG oslo_concurrency.lockutils [None req-52699a02-a5fe-4aa1-a194-537b5ec0b479 2ed3ac10329446d8a5aae566951cae1e c47758289b4c448c95f927a91d89e09f - - default default] Lock "cea3dacfdea0f3734ae526b812744e847bc2d356" "released" by "nova.virt.libvirt.imagebackend.Qcow2.create_image.<locals>.create_qcow2_image" :: held 0.121s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:424
Oct 09 16:23:35 compute-0 nova_compute[117331]: 2025-10-09 16:23:35.885 2 DEBUG oslo_concurrency.processutils [None req-52699a02-a5fe-4aa1-a194-537b5ec0b479 2ed3ac10329446d8a5aae566951cae1e c47758289b4c448c95f927a91d89e09f - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/cea3dacfdea0f3734ae526b812744e847bc2d356 --force-share --output=json execute /usr/lib/python3.12/site-packages/oslo_concurrency/processutils.py:349
Oct 09 16:23:35 compute-0 nova_compute[117331]: 2025-10-09 16:23:35.900 2 WARNING neutronclient.v2_0.client [req-39c5acb3-0f8b-4901-9a1a-e4814b5fc60b req-9213b20c-7900-4c5b-8c1f-7a1c46f46483 ee15961ad2be4bada6c106ac4f69f422 c076c42271004e7aa86e84416e33f826 - - default default] The python binding code in neutronclient is deprecated in favor of OpenstackSDK, please use that as this will be removed in a future release.
Oct 09 16:23:35 compute-0 nova_compute[117331]: 2025-10-09 16:23:35.940 2 DEBUG oslo_concurrency.processutils [None req-52699a02-a5fe-4aa1-a194-537b5ec0b479 2ed3ac10329446d8a5aae566951cae1e c47758289b4c448c95f927a91d89e09f - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/cea3dacfdea0f3734ae526b812744e847bc2d356 --force-share --output=json" returned: 0 in 0.055s execute /usr/lib/python3.12/site-packages/oslo_concurrency/processutils.py:372
Oct 09 16:23:35 compute-0 nova_compute[117331]: 2025-10-09 16:23:35.941 2 DEBUG nova.virt.disk.api [None req-52699a02-a5fe-4aa1-a194-537b5ec0b479 2ed3ac10329446d8a5aae566951cae1e c47758289b4c448c95f927a91d89e09f - - default default] Checking if we can resize image /var/lib/nova/instances/7531b461-59ec-4261-8f6f-125c07fbf626/disk. size=1073741824 can_resize_image /usr/lib/python3.12/site-packages/nova/virt/disk/api.py:164
Oct 09 16:23:35 compute-0 nova_compute[117331]: 2025-10-09 16:23:35.942 2 DEBUG oslo_concurrency.processutils [None req-52699a02-a5fe-4aa1-a194-537b5ec0b479 2ed3ac10329446d8a5aae566951cae1e c47758289b4c448c95f927a91d89e09f - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/7531b461-59ec-4261-8f6f-125c07fbf626/disk --force-share --output=json execute /usr/lib/python3.12/site-packages/oslo_concurrency/processutils.py:349
Oct 09 16:23:36 compute-0 nova_compute[117331]: 2025-10-09 16:23:36.013 2 DEBUG oslo_concurrency.processutils [None req-52699a02-a5fe-4aa1-a194-537b5ec0b479 2ed3ac10329446d8a5aae566951cae1e c47758289b4c448c95f927a91d89e09f - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/7531b461-59ec-4261-8f6f-125c07fbf626/disk --force-share --output=json" returned: 0 in 0.071s execute /usr/lib/python3.12/site-packages/oslo_concurrency/processutils.py:372
Oct 09 16:23:36 compute-0 nova_compute[117331]: 2025-10-09 16:23:36.014 2 DEBUG nova.virt.disk.api [None req-52699a02-a5fe-4aa1-a194-537b5ec0b479 2ed3ac10329446d8a5aae566951cae1e c47758289b4c448c95f927a91d89e09f - - default default] Cannot resize image /var/lib/nova/instances/7531b461-59ec-4261-8f6f-125c07fbf626/disk to a smaller size. can_resize_image /usr/lib/python3.12/site-packages/nova/virt/disk/api.py:170
Oct 09 16:23:36 compute-0 nova_compute[117331]: 2025-10-09 16:23:36.015 2 DEBUG nova.virt.libvirt.driver [None req-52699a02-a5fe-4aa1-a194-537b5ec0b479 2ed3ac10329446d8a5aae566951cae1e c47758289b4c448c95f927a91d89e09f - - default default] [instance: 7531b461-59ec-4261-8f6f-125c07fbf626] Created local disks _create_image /usr/lib/python3.12/site-packages/nova/virt/libvirt/driver.py:5270
Oct 09 16:23:36 compute-0 nova_compute[117331]: 2025-10-09 16:23:36.016 2 DEBUG nova.virt.libvirt.driver [None req-52699a02-a5fe-4aa1-a194-537b5ec0b479 2ed3ac10329446d8a5aae566951cae1e c47758289b4c448c95f927a91d89e09f - - default default] [instance: 7531b461-59ec-4261-8f6f-125c07fbf626] Ensure instance console log exists: /var/lib/nova/instances/7531b461-59ec-4261-8f6f-125c07fbf626/console.log _ensure_console_log_for_instance /usr/lib/python3.12/site-packages/nova/virt/libvirt/driver.py:5017
Oct 09 16:23:36 compute-0 nova_compute[117331]: 2025-10-09 16:23:36.017 2 DEBUG oslo_concurrency.lockutils [None req-52699a02-a5fe-4aa1-a194-537b5ec0b479 2ed3ac10329446d8a5aae566951cae1e c47758289b4c448c95f927a91d89e09f - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:405
Oct 09 16:23:36 compute-0 nova_compute[117331]: 2025-10-09 16:23:36.017 2 DEBUG oslo_concurrency.lockutils [None req-52699a02-a5fe-4aa1-a194-537b5ec0b479 2ed3ac10329446d8a5aae566951cae1e c47758289b4c448c95f927a91d89e09f - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.001s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:410
Oct 09 16:23:36 compute-0 nova_compute[117331]: 2025-10-09 16:23:36.018 2 DEBUG oslo_concurrency.lockutils [None req-52699a02-a5fe-4aa1-a194-537b5ec0b479 2ed3ac10329446d8a5aae566951cae1e c47758289b4c448c95f927a91d89e09f - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.001s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:424
Oct 09 16:23:36 compute-0 nova_compute[117331]: 2025-10-09 16:23:36.103 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 09 16:23:36 compute-0 nova_compute[117331]: 2025-10-09 16:23:36.303 2 DEBUG nova.network.neutron [req-39c5acb3-0f8b-4901-9a1a-e4814b5fc60b req-9213b20c-7900-4c5b-8c1f-7a1c46f46483 ee15961ad2be4bada6c106ac4f69f422 c076c42271004e7aa86e84416e33f826 - - default default] [instance: 7531b461-59ec-4261-8f6f-125c07fbf626] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.12/site-packages/nova/network/neutron.py:3383
Oct 09 16:23:36 compute-0 nova_compute[117331]: 2025-10-09 16:23:36.435 2 DEBUG nova.network.neutron [req-39c5acb3-0f8b-4901-9a1a-e4814b5fc60b req-9213b20c-7900-4c5b-8c1f-7a1c46f46483 ee15961ad2be4bada6c106ac4f69f422 c076c42271004e7aa86e84416e33f826 - - default default] [instance: 7531b461-59ec-4261-8f6f-125c07fbf626] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.12/site-packages/nova/network/neutron.py:116
Oct 09 16:23:36 compute-0 nova_compute[117331]: 2025-10-09 16:23:36.944 2 DEBUG oslo_concurrency.lockutils [req-39c5acb3-0f8b-4901-9a1a-e4814b5fc60b req-9213b20c-7900-4c5b-8c1f-7a1c46f46483 ee15961ad2be4bada6c106ac4f69f422 c076c42271004e7aa86e84416e33f826 - - default default] Releasing lock "refresh_cache-7531b461-59ec-4261-8f6f-125c07fbf626" lock /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:334
Oct 09 16:23:36 compute-0 nova_compute[117331]: 2025-10-09 16:23:36.945 2 DEBUG oslo_concurrency.lockutils [None req-52699a02-a5fe-4aa1-a194-537b5ec0b479 2ed3ac10329446d8a5aae566951cae1e c47758289b4c448c95f927a91d89e09f - - default default] Acquired lock "refresh_cache-7531b461-59ec-4261-8f6f-125c07fbf626" lock /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:316
Oct 09 16:23:36 compute-0 nova_compute[117331]: 2025-10-09 16:23:36.945 2 DEBUG nova.network.neutron [None req-52699a02-a5fe-4aa1-a194-537b5ec0b479 2ed3ac10329446d8a5aae566951cae1e c47758289b4c448c95f927a91d89e09f - - default default] [instance: 7531b461-59ec-4261-8f6f-125c07fbf626] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.12/site-packages/nova/network/neutron.py:2070
Oct 09 16:23:37 compute-0 nova_compute[117331]: 2025-10-09 16:23:37.655 2 DEBUG nova.network.neutron [None req-52699a02-a5fe-4aa1-a194-537b5ec0b479 2ed3ac10329446d8a5aae566951cae1e c47758289b4c448c95f927a91d89e09f - - default default] [instance: 7531b461-59ec-4261-8f6f-125c07fbf626] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.12/site-packages/nova/network/neutron.py:3383
Oct 09 16:23:37 compute-0 nova_compute[117331]: 2025-10-09 16:23:37.847 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 09 16:23:37 compute-0 nova_compute[117331]: 2025-10-09 16:23:37.865 2 WARNING neutronclient.v2_0.client [None req-52699a02-a5fe-4aa1-a194-537b5ec0b479 2ed3ac10329446d8a5aae566951cae1e c47758289b4c448c95f927a91d89e09f - - default default] The python binding code in neutronclient is deprecated in favor of OpenstackSDK, please use that as this will be removed in a future release.
Oct 09 16:23:37 compute-0 podman[146370]: 2025-10-09 16:23:37.881036614 +0000 UTC m=+0.104280521 container health_status 86aa93e887899e54d383b0fc17fb3c5637680d1d8e146a130a557c837f0260fe (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']})
Oct 09 16:23:38 compute-0 nova_compute[117331]: 2025-10-09 16:23:38.012 2 DEBUG nova.network.neutron [None req-52699a02-a5fe-4aa1-a194-537b5ec0b479 2ed3ac10329446d8a5aae566951cae1e c47758289b4c448c95f927a91d89e09f - - default default] [instance: 7531b461-59ec-4261-8f6f-125c07fbf626] Updating instance_info_cache with network_info: [{"id": "15a9c820-cacb-46bc-acd6-e8ed3b08432a", "address": "fa:16:3e:d9:5e:f1", "network": {"id": "c78b28f7-c251-4d74-863e-8d5520acbae0", "bridge": "br-int", "label": "tempest-TestExecuteHostMaintenanceStrategy-300799103-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "95a019f93e534ec0ad7dca9e44d00556", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap15a9c820-ca", "ovs_interfaceid": "15a9c820-cacb-46bc-acd6-e8ed3b08432a", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.12/site-packages/nova/network/neutron.py:116
Oct 09 16:23:38 compute-0 nova_compute[117331]: 2025-10-09 16:23:38.518 2 DEBUG oslo_concurrency.lockutils [None req-52699a02-a5fe-4aa1-a194-537b5ec0b479 2ed3ac10329446d8a5aae566951cae1e c47758289b4c448c95f927a91d89e09f - - default default] Releasing lock "refresh_cache-7531b461-59ec-4261-8f6f-125c07fbf626" lock /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:334
Oct 09 16:23:38 compute-0 nova_compute[117331]: 2025-10-09 16:23:38.519 2 DEBUG nova.compute.manager [None req-52699a02-a5fe-4aa1-a194-537b5ec0b479 2ed3ac10329446d8a5aae566951cae1e c47758289b4c448c95f927a91d89e09f - - default default] [instance: 7531b461-59ec-4261-8f6f-125c07fbf626] Instance network_info: |[{"id": "15a9c820-cacb-46bc-acd6-e8ed3b08432a", "address": "fa:16:3e:d9:5e:f1", "network": {"id": "c78b28f7-c251-4d74-863e-8d5520acbae0", "bridge": "br-int", "label": "tempest-TestExecuteHostMaintenanceStrategy-300799103-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "95a019f93e534ec0ad7dca9e44d00556", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap15a9c820-ca", "ovs_interfaceid": "15a9c820-cacb-46bc-acd6-e8ed3b08432a", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.12/site-packages/nova/compute/manager.py:2031
Oct 09 16:23:38 compute-0 nova_compute[117331]: 2025-10-09 16:23:38.523 2 DEBUG nova.virt.libvirt.driver [None req-52699a02-a5fe-4aa1-a194-537b5ec0b479 2ed3ac10329446d8a5aae566951cae1e c47758289b4c448c95f927a91d89e09f - - default default] [instance: 7531b461-59ec-4261-8f6f-125c07fbf626] Start _get_guest_xml network_info=[{"id": "15a9c820-cacb-46bc-acd6-e8ed3b08432a", "address": "fa:16:3e:d9:5e:f1", "network": {"id": "c78b28f7-c251-4d74-863e-8d5520acbae0", "bridge": "br-int", "label": "tempest-TestExecuteHostMaintenanceStrategy-300799103-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "95a019f93e534ec0ad7dca9e44d00556", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap15a9c820-ca", "ovs_interfaceid": "15a9c820-cacb-46bc-acd6-e8ed3b08432a", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-09T16:07:02Z,direct_url=<?>,disk_format='qcow2',id=b7d6e0af-25e4-4227-9dc6-43143898ceee,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='b30e8cf5e10742f190212b4cb97ce2c9',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-09T16:07:04Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'device_type': 'disk', 'encrypted': False, 'size': 0, 'encryption_format': None, 'guest_format': None, 'boot_index': 0, 'device_name': '/dev/vda', 'disk_bus': 'virtio', 'encryption_options': None, 'encryption_secret_uuid': None, 'image_id': 'b7d6e0af-25e4-4227-9dc6-43143898ceee'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None}share_info=None _get_guest_xml /usr/lib/python3.12/site-packages/nova/virt/libvirt/driver.py:8046
Oct 09 16:23:38 compute-0 nova_compute[117331]: 2025-10-09 16:23:38.527 2 WARNING nova.virt.libvirt.driver [None req-52699a02-a5fe-4aa1-a194-537b5ec0b479 2ed3ac10329446d8a5aae566951cae1e c47758289b4c448c95f927a91d89e09f - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Oct 09 16:23:38 compute-0 nova_compute[117331]: 2025-10-09 16:23:38.529 2 DEBUG nova.virt.driver [None req-52699a02-a5fe-4aa1-a194-537b5ec0b479 2ed3ac10329446d8a5aae566951cae1e c47758289b4c448c95f927a91d89e09f - - default default] InstanceDriverMetadata: InstanceDriverMetadata(root_type='image', root_id='b7d6e0af-25e4-4227-9dc6-43143898ceee', instance_meta=NovaInstanceMeta(name='tempest-TestExecuteHostMaintenanceStrategy-server-721204525', uuid='7531b461-59ec-4261-8f6f-125c07fbf626'), owner=OwnerMeta(userid='2ed3ac10329446d8a5aae566951cae1e', username='tempest-TestExecuteHostMaintenanceStrategy-51576386-project-admin', projectid='c47758289b4c448c95f927a91d89e09f', projectname='tempest-TestExecuteHostMaintenanceStrategy-51576386'), image=ImageMeta(id='b7d6e0af-25e4-4227-9dc6-43143898ceee', name=None, container_format='bare', disk_format='qcow2', min_disk=1, min_ram=0, properties={'hw_rng_model': 'virtio'}), flavor=FlavorMeta(name='m1.nano', flavorid='5aeb5bf9-70f3-4ab6-b418-2c3319a7d2b3', memory_mb=128, vcpus=1, root_gb=1, ephemeral_gb=0, extra_specs={'hw_rng:allowed': 'True'}, swap=0), network_info=[{"id": "15a9c820-cacb-46bc-acd6-e8ed3b08432a", "address": "fa:16:3e:d9:5e:f1", "network": {"id": "c78b28f7-c251-4d74-863e-8d5520acbae0", "bridge": "br-int", "label": "tempest-TestExecuteHostMaintenanceStrategy-300799103-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "95a019f93e534ec0ad7dca9e44d00556", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap15a9c820-ca", "ovs_interfaceid": "15a9c820-cacb-46bc-acd6-e8ed3b08432a", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}], nova_package='32.1.0-0.20251008143712.076498e.el10', creation_time=1760027018.5290856) get_instance_driver_metadata /usr/lib/python3.12/site-packages/nova/virt/driver.py:437
Oct 09 16:23:38 compute-0 nova_compute[117331]: 2025-10-09 16:23:38.534 2 DEBUG nova.virt.libvirt.host [None req-52699a02-a5fe-4aa1-a194-537b5ec0b479 2ed3ac10329446d8a5aae566951cae1e c47758289b4c448c95f927a91d89e09f - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.12/site-packages/nova/virt/libvirt/host.py:1698
Oct 09 16:23:38 compute-0 nova_compute[117331]: 2025-10-09 16:23:38.535 2 DEBUG nova.virt.libvirt.host [None req-52699a02-a5fe-4aa1-a194-537b5ec0b479 2ed3ac10329446d8a5aae566951cae1e c47758289b4c448c95f927a91d89e09f - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.12/site-packages/nova/virt/libvirt/host.py:1708
Oct 09 16:23:38 compute-0 nova_compute[117331]: 2025-10-09 16:23:38.537 2 DEBUG nova.virt.libvirt.host [None req-52699a02-a5fe-4aa1-a194-537b5ec0b479 2ed3ac10329446d8a5aae566951cae1e c47758289b4c448c95f927a91d89e09f - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.12/site-packages/nova/virt/libvirt/host.py:1717
Oct 09 16:23:38 compute-0 nova_compute[117331]: 2025-10-09 16:23:38.538 2 DEBUG nova.virt.libvirt.host [None req-52699a02-a5fe-4aa1-a194-537b5ec0b479 2ed3ac10329446d8a5aae566951cae1e c47758289b4c448c95f927a91d89e09f - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.12/site-packages/nova/virt/libvirt/host.py:1724
Oct 09 16:23:38 compute-0 nova_compute[117331]: 2025-10-09 16:23:38.538 2 DEBUG nova.virt.libvirt.driver [None req-52699a02-a5fe-4aa1-a194-537b5ec0b479 2ed3ac10329446d8a5aae566951cae1e c47758289b4c448c95f927a91d89e09f - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.12/site-packages/nova/virt/libvirt/driver.py:5809
Oct 09 16:23:38 compute-0 nova_compute[117331]: 2025-10-09 16:23:38.539 2 DEBUG nova.virt.hardware [None req-52699a02-a5fe-4aa1-a194-537b5ec0b479 2ed3ac10329446d8a5aae566951cae1e c47758289b4c448c95f927a91d89e09f - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-10-09T16:07:00Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='5aeb5bf9-70f3-4ab6-b418-2c3319a7d2b3',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-09T16:07:02Z,direct_url=<?>,disk_format='qcow2',id=b7d6e0af-25e4-4227-9dc6-43143898ceee,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='b30e8cf5e10742f190212b4cb97ce2c9',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-09T16:07:04Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.12/site-packages/nova/virt/hardware.py:571
Oct 09 16:23:38 compute-0 nova_compute[117331]: 2025-10-09 16:23:38.539 2 DEBUG nova.virt.hardware [None req-52699a02-a5fe-4aa1-a194-537b5ec0b479 2ed3ac10329446d8a5aae566951cae1e c47758289b4c448c95f927a91d89e09f - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.12/site-packages/nova/virt/hardware.py:356
Oct 09 16:23:38 compute-0 nova_compute[117331]: 2025-10-09 16:23:38.539 2 DEBUG nova.virt.hardware [None req-52699a02-a5fe-4aa1-a194-537b5ec0b479 2ed3ac10329446d8a5aae566951cae1e c47758289b4c448c95f927a91d89e09f - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.12/site-packages/nova/virt/hardware.py:360
Oct 09 16:23:38 compute-0 nova_compute[117331]: 2025-10-09 16:23:38.540 2 DEBUG nova.virt.hardware [None req-52699a02-a5fe-4aa1-a194-537b5ec0b479 2ed3ac10329446d8a5aae566951cae1e c47758289b4c448c95f927a91d89e09f - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.12/site-packages/nova/virt/hardware.py:396
Oct 09 16:23:38 compute-0 nova_compute[117331]: 2025-10-09 16:23:38.540 2 DEBUG nova.virt.hardware [None req-52699a02-a5fe-4aa1-a194-537b5ec0b479 2ed3ac10329446d8a5aae566951cae1e c47758289b4c448c95f927a91d89e09f - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.12/site-packages/nova/virt/hardware.py:400
Oct 09 16:23:38 compute-0 nova_compute[117331]: 2025-10-09 16:23:38.540 2 DEBUG nova.virt.hardware [None req-52699a02-a5fe-4aa1-a194-537b5ec0b479 2ed3ac10329446d8a5aae566951cae1e c47758289b4c448c95f927a91d89e09f - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.12/site-packages/nova/virt/hardware.py:438
Oct 09 16:23:38 compute-0 nova_compute[117331]: 2025-10-09 16:23:38.540 2 DEBUG nova.virt.hardware [None req-52699a02-a5fe-4aa1-a194-537b5ec0b479 2ed3ac10329446d8a5aae566951cae1e c47758289b4c448c95f927a91d89e09f - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.12/site-packages/nova/virt/hardware.py:577
Oct 09 16:23:38 compute-0 nova_compute[117331]: 2025-10-09 16:23:38.540 2 DEBUG nova.virt.hardware [None req-52699a02-a5fe-4aa1-a194-537b5ec0b479 2ed3ac10329446d8a5aae566951cae1e c47758289b4c448c95f927a91d89e09f - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.12/site-packages/nova/virt/hardware.py:479
Oct 09 16:23:38 compute-0 nova_compute[117331]: 2025-10-09 16:23:38.541 2 DEBUG nova.virt.hardware [None req-52699a02-a5fe-4aa1-a194-537b5ec0b479 2ed3ac10329446d8a5aae566951cae1e c47758289b4c448c95f927a91d89e09f - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.12/site-packages/nova/virt/hardware.py:509
Oct 09 16:23:38 compute-0 nova_compute[117331]: 2025-10-09 16:23:38.541 2 DEBUG nova.virt.hardware [None req-52699a02-a5fe-4aa1-a194-537b5ec0b479 2ed3ac10329446d8a5aae566951cae1e c47758289b4c448c95f927a91d89e09f - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.12/site-packages/nova/virt/hardware.py:583
Oct 09 16:23:38 compute-0 nova_compute[117331]: 2025-10-09 16:23:38.541 2 DEBUG nova.virt.hardware [None req-52699a02-a5fe-4aa1-a194-537b5ec0b479 2ed3ac10329446d8a5aae566951cae1e c47758289b4c448c95f927a91d89e09f - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.12/site-packages/nova/virt/hardware.py:585
Oct 09 16:23:38 compute-0 nova_compute[117331]: 2025-10-09 16:23:38.545 2 DEBUG nova.virt.libvirt.vif [None req-52699a02-a5fe-4aa1-a194-537b5ec0b479 2ed3ac10329446d8a5aae566951cae1e c47758289b4c448c95f927a91d89e09f - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,compute_id=1,config_drive='',created_at=2025-10-09T16:23:29Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description=None,display_name='tempest-TestExecuteHostMaintenanceStrategy-server-721204525',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testexecutehostmaintenancestrategy-server-721204525',id=14,image_ref='b7d6e0af-25e4-4227-9dc6-43143898ceee',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='c47758289b4c448c95f927a91d89e09f',ramdisk_id='',reservation_id='r-bg9c54xp',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member,manager,admin',image_base_image_ref='b7d6e0af-25e4-4227-9dc6-43143898ceee',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestExecuteHostMaintenanceStrategy-51576386',owner_user_name='tempest-TestExecuteHostMaintenanceStrategy-51576386-project-admin'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-09T16:23:34Z,user_data=None,user_id='2ed3ac10329446d8a5aae566951cae1e',uuid=7531b461-59ec-4261-8f6f-125c07fbf626,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "15a9c820-cacb-46bc-acd6-e8ed3b08432a", "address": "fa:16:3e:d9:5e:f1", "network": {"id": "c78b28f7-c251-4d74-863e-8d5520acbae0", "bridge": "br-int", "label": "tempest-TestExecuteHostMaintenanceStrategy-300799103-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "95a019f93e534ec0ad7dca9e44d00556", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap15a9c820-ca", "ovs_interfaceid": "15a9c820-cacb-46bc-acd6-e8ed3b08432a", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.12/site-packages/nova/virt/libvirt/vif.py:574
Oct 09 16:23:38 compute-0 nova_compute[117331]: 2025-10-09 16:23:38.546 2 DEBUG nova.network.os_vif_util [None req-52699a02-a5fe-4aa1-a194-537b5ec0b479 2ed3ac10329446d8a5aae566951cae1e c47758289b4c448c95f927a91d89e09f - - default default] Converting VIF {"id": "15a9c820-cacb-46bc-acd6-e8ed3b08432a", "address": "fa:16:3e:d9:5e:f1", "network": {"id": "c78b28f7-c251-4d74-863e-8d5520acbae0", "bridge": "br-int", "label": "tempest-TestExecuteHostMaintenanceStrategy-300799103-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "95a019f93e534ec0ad7dca9e44d00556", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap15a9c820-ca", "ovs_interfaceid": "15a9c820-cacb-46bc-acd6-e8ed3b08432a", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.12/site-packages/nova/network/os_vif_util.py:511
Oct 09 16:23:38 compute-0 nova_compute[117331]: 2025-10-09 16:23:38.546 2 DEBUG nova.network.os_vif_util [None req-52699a02-a5fe-4aa1-a194-537b5ec0b479 2ed3ac10329446d8a5aae566951cae1e c47758289b4c448c95f927a91d89e09f - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:d9:5e:f1,bridge_name='br-int',has_traffic_filtering=True,id=15a9c820-cacb-46bc-acd6-e8ed3b08432a,network=Network(c78b28f7-c251-4d74-863e-8d5520acbae0),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap15a9c820-ca') nova_to_osvif_vif /usr/lib/python3.12/site-packages/nova/network/os_vif_util.py:548
Oct 09 16:23:38 compute-0 nova_compute[117331]: 2025-10-09 16:23:38.547 2 DEBUG nova.objects.instance [None req-52699a02-a5fe-4aa1-a194-537b5ec0b479 2ed3ac10329446d8a5aae566951cae1e c47758289b4c448c95f927a91d89e09f - - default default] Lazy-loading 'pci_devices' on Instance uuid 7531b461-59ec-4261-8f6f-125c07fbf626 obj_load_attr /usr/lib/python3.12/site-packages/nova/objects/instance.py:1141
Oct 09 16:23:39 compute-0 nova_compute[117331]: 2025-10-09 16:23:39.055 2 DEBUG nova.virt.libvirt.driver [None req-52699a02-a5fe-4aa1-a194-537b5ec0b479 2ed3ac10329446d8a5aae566951cae1e c47758289b4c448c95f927a91d89e09f - - default default] [instance: 7531b461-59ec-4261-8f6f-125c07fbf626] End _get_guest_xml xml=<domain type="kvm">
Oct 09 16:23:39 compute-0 nova_compute[117331]:   <uuid>7531b461-59ec-4261-8f6f-125c07fbf626</uuid>
Oct 09 16:23:39 compute-0 nova_compute[117331]:   <name>instance-0000000e</name>
Oct 09 16:23:39 compute-0 nova_compute[117331]:   <memory>131072</memory>
Oct 09 16:23:39 compute-0 nova_compute[117331]:   <vcpu>1</vcpu>
Oct 09 16:23:39 compute-0 nova_compute[117331]:   <metadata>
Oct 09 16:23:39 compute-0 nova_compute[117331]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Oct 09 16:23:39 compute-0 nova_compute[117331]:       <nova:package version="32.1.0-0.20251008143712.076498e.el10"/>
Oct 09 16:23:39 compute-0 nova_compute[117331]:       <nova:name>tempest-TestExecuteHostMaintenanceStrategy-server-721204525</nova:name>
Oct 09 16:23:39 compute-0 nova_compute[117331]:       <nova:creationTime>2025-10-09 16:23:38</nova:creationTime>
Oct 09 16:23:39 compute-0 nova_compute[117331]:       <nova:flavor name="m1.nano" id="5aeb5bf9-70f3-4ab6-b418-2c3319a7d2b3">
Oct 09 16:23:39 compute-0 nova_compute[117331]:         <nova:memory>128</nova:memory>
Oct 09 16:23:39 compute-0 nova_compute[117331]:         <nova:disk>1</nova:disk>
Oct 09 16:23:39 compute-0 nova_compute[117331]:         <nova:swap>0</nova:swap>
Oct 09 16:23:39 compute-0 nova_compute[117331]:         <nova:ephemeral>0</nova:ephemeral>
Oct 09 16:23:39 compute-0 nova_compute[117331]:         <nova:vcpus>1</nova:vcpus>
Oct 09 16:23:39 compute-0 nova_compute[117331]:         <nova:extraSpecs>
Oct 09 16:23:39 compute-0 nova_compute[117331]:           <nova:extraSpec name="hw_rng:allowed">True</nova:extraSpec>
Oct 09 16:23:39 compute-0 nova_compute[117331]:         </nova:extraSpecs>
Oct 09 16:23:39 compute-0 nova_compute[117331]:       </nova:flavor>
Oct 09 16:23:39 compute-0 nova_compute[117331]:       <nova:image uuid="b7d6e0af-25e4-4227-9dc6-43143898ceee">
Oct 09 16:23:39 compute-0 nova_compute[117331]:         <nova:containerFormat>bare</nova:containerFormat>
Oct 09 16:23:39 compute-0 nova_compute[117331]:         <nova:diskFormat>qcow2</nova:diskFormat>
Oct 09 16:23:39 compute-0 nova_compute[117331]:         <nova:minDisk>1</nova:minDisk>
Oct 09 16:23:39 compute-0 nova_compute[117331]:         <nova:minRam>0</nova:minRam>
Oct 09 16:23:39 compute-0 nova_compute[117331]:         <nova:properties>
Oct 09 16:23:39 compute-0 nova_compute[117331]:           <nova:property name="hw_rng_model">virtio</nova:property>
Oct 09 16:23:39 compute-0 nova_compute[117331]:         </nova:properties>
Oct 09 16:23:39 compute-0 nova_compute[117331]:       </nova:image>
Oct 09 16:23:39 compute-0 nova_compute[117331]:       <nova:owner>
Oct 09 16:23:39 compute-0 nova_compute[117331]:         <nova:user uuid="2ed3ac10329446d8a5aae566951cae1e">tempest-TestExecuteHostMaintenanceStrategy-51576386-project-admin</nova:user>
Oct 09 16:23:39 compute-0 nova_compute[117331]:         <nova:project uuid="c47758289b4c448c95f927a91d89e09f">tempest-TestExecuteHostMaintenanceStrategy-51576386</nova:project>
Oct 09 16:23:39 compute-0 nova_compute[117331]:       </nova:owner>
Oct 09 16:23:39 compute-0 nova_compute[117331]:       <nova:root type="image" uuid="b7d6e0af-25e4-4227-9dc6-43143898ceee"/>
Oct 09 16:23:39 compute-0 nova_compute[117331]:       <nova:ports>
Oct 09 16:23:39 compute-0 nova_compute[117331]:         <nova:port uuid="15a9c820-cacb-46bc-acd6-e8ed3b08432a">
Oct 09 16:23:39 compute-0 nova_compute[117331]:           <nova:ip type="fixed" address="10.100.0.13" ipVersion="4"/>
Oct 09 16:23:39 compute-0 nova_compute[117331]:         </nova:port>
Oct 09 16:23:39 compute-0 nova_compute[117331]:       </nova:ports>
Oct 09 16:23:39 compute-0 nova_compute[117331]:     </nova:instance>
Oct 09 16:23:39 compute-0 nova_compute[117331]:   </metadata>
Oct 09 16:23:39 compute-0 nova_compute[117331]:   <sysinfo type="smbios">
Oct 09 16:23:39 compute-0 nova_compute[117331]:     <system>
Oct 09 16:23:39 compute-0 nova_compute[117331]:       <entry name="manufacturer">RDO</entry>
Oct 09 16:23:39 compute-0 nova_compute[117331]:       <entry name="product">OpenStack Compute</entry>
Oct 09 16:23:39 compute-0 nova_compute[117331]:       <entry name="version">32.1.0-0.20251008143712.076498e.el10</entry>
Oct 09 16:23:39 compute-0 nova_compute[117331]:       <entry name="serial">7531b461-59ec-4261-8f6f-125c07fbf626</entry>
Oct 09 16:23:39 compute-0 nova_compute[117331]:       <entry name="uuid">7531b461-59ec-4261-8f6f-125c07fbf626</entry>
Oct 09 16:23:39 compute-0 nova_compute[117331]:       <entry name="family">Virtual Machine</entry>
Oct 09 16:23:39 compute-0 nova_compute[117331]:     </system>
Oct 09 16:23:39 compute-0 nova_compute[117331]:   </sysinfo>
Oct 09 16:23:39 compute-0 nova_compute[117331]:   <os>
Oct 09 16:23:39 compute-0 nova_compute[117331]:     <type arch="x86_64" machine="q35">hvm</type>
Oct 09 16:23:39 compute-0 nova_compute[117331]:     <boot dev="hd"/>
Oct 09 16:23:39 compute-0 nova_compute[117331]:     <smbios mode="sysinfo"/>
Oct 09 16:23:39 compute-0 nova_compute[117331]:   </os>
Oct 09 16:23:39 compute-0 nova_compute[117331]:   <features>
Oct 09 16:23:39 compute-0 nova_compute[117331]:     <acpi/>
Oct 09 16:23:39 compute-0 nova_compute[117331]:     <apic/>
Oct 09 16:23:39 compute-0 nova_compute[117331]:     <vmcoreinfo/>
Oct 09 16:23:39 compute-0 nova_compute[117331]:   </features>
Oct 09 16:23:39 compute-0 nova_compute[117331]:   <clock offset="utc">
Oct 09 16:23:39 compute-0 nova_compute[117331]:     <timer name="pit" tickpolicy="delay"/>
Oct 09 16:23:39 compute-0 nova_compute[117331]:     <timer name="rtc" tickpolicy="catchup"/>
Oct 09 16:23:39 compute-0 nova_compute[117331]:     <timer name="hpet" present="no"/>
Oct 09 16:23:39 compute-0 nova_compute[117331]:   </clock>
Oct 09 16:23:39 compute-0 nova_compute[117331]:   <cpu mode="host-model" match="exact">
Oct 09 16:23:39 compute-0 nova_compute[117331]:     <topology sockets="1" cores="1" threads="1"/>
Oct 09 16:23:39 compute-0 nova_compute[117331]:   </cpu>
Oct 09 16:23:39 compute-0 nova_compute[117331]:   <devices>
Oct 09 16:23:39 compute-0 nova_compute[117331]:     <disk type="file" device="disk">
Oct 09 16:23:39 compute-0 nova_compute[117331]:       <driver name="qemu" type="qcow2" cache="none"/>
Oct 09 16:23:39 compute-0 nova_compute[117331]:       <source file="/var/lib/nova/instances/7531b461-59ec-4261-8f6f-125c07fbf626/disk"/>
Oct 09 16:23:39 compute-0 nova_compute[117331]:       <target dev="vda" bus="virtio"/>
Oct 09 16:23:39 compute-0 nova_compute[117331]:     </disk>
Oct 09 16:23:39 compute-0 nova_compute[117331]:     <disk type="file" device="cdrom">
Oct 09 16:23:39 compute-0 nova_compute[117331]:       <driver name="qemu" type="raw" cache="none"/>
Oct 09 16:23:39 compute-0 nova_compute[117331]:       <source file="/var/lib/nova/instances/7531b461-59ec-4261-8f6f-125c07fbf626/disk.config"/>
Oct 09 16:23:39 compute-0 nova_compute[117331]:       <target dev="sda" bus="sata"/>
Oct 09 16:23:39 compute-0 nova_compute[117331]:     </disk>
Oct 09 16:23:39 compute-0 nova_compute[117331]:     <interface type="ethernet">
Oct 09 16:23:39 compute-0 nova_compute[117331]:       <mac address="fa:16:3e:d9:5e:f1"/>
Oct 09 16:23:39 compute-0 nova_compute[117331]:       <model type="virtio"/>
Oct 09 16:23:39 compute-0 nova_compute[117331]:       <driver name="vhost" rx_queue_size="512"/>
Oct 09 16:23:39 compute-0 nova_compute[117331]:       <mtu size="1442"/>
Oct 09 16:23:39 compute-0 nova_compute[117331]:       <target dev="tap15a9c820-ca"/>
Oct 09 16:23:39 compute-0 nova_compute[117331]:     </interface>
Oct 09 16:23:39 compute-0 nova_compute[117331]:     <serial type="pty">
Oct 09 16:23:39 compute-0 nova_compute[117331]:       <log file="/var/lib/nova/instances/7531b461-59ec-4261-8f6f-125c07fbf626/console.log" append="off"/>
Oct 09 16:23:39 compute-0 nova_compute[117331]:     </serial>
Oct 09 16:23:39 compute-0 nova_compute[117331]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Oct 09 16:23:39 compute-0 nova_compute[117331]:     <video>
Oct 09 16:23:39 compute-0 nova_compute[117331]:       <model type="virtio"/>
Oct 09 16:23:39 compute-0 nova_compute[117331]:     </video>
Oct 09 16:23:39 compute-0 nova_compute[117331]:     <input type="tablet" bus="usb"/>
Oct 09 16:23:39 compute-0 nova_compute[117331]:     <rng model="virtio">
Oct 09 16:23:39 compute-0 nova_compute[117331]:       <backend model="random">/dev/urandom</backend>
Oct 09 16:23:39 compute-0 nova_compute[117331]:     </rng>
Oct 09 16:23:39 compute-0 nova_compute[117331]:     <controller type="pci" model="pcie-root"/>
Oct 09 16:23:39 compute-0 nova_compute[117331]:     <controller type="pci" model="pcie-root-port"/>
Oct 09 16:23:39 compute-0 nova_compute[117331]:     <controller type="pci" model="pcie-root-port"/>
Oct 09 16:23:39 compute-0 nova_compute[117331]:     <controller type="pci" model="pcie-root-port"/>
Oct 09 16:23:39 compute-0 nova_compute[117331]:     <controller type="pci" model="pcie-root-port"/>
Oct 09 16:23:39 compute-0 nova_compute[117331]:     <controller type="pci" model="pcie-root-port"/>
Oct 09 16:23:39 compute-0 nova_compute[117331]:     <controller type="pci" model="pcie-root-port"/>
Oct 09 16:23:39 compute-0 nova_compute[117331]:     <controller type="pci" model="pcie-root-port"/>
Oct 09 16:23:39 compute-0 nova_compute[117331]:     <controller type="pci" model="pcie-root-port"/>
Oct 09 16:23:39 compute-0 nova_compute[117331]:     <controller type="pci" model="pcie-root-port"/>
Oct 09 16:23:39 compute-0 nova_compute[117331]:     <controller type="pci" model="pcie-root-port"/>
Oct 09 16:23:39 compute-0 nova_compute[117331]:     <controller type="pci" model="pcie-root-port"/>
Oct 09 16:23:39 compute-0 nova_compute[117331]:     <controller type="pci" model="pcie-root-port"/>
Oct 09 16:23:39 compute-0 nova_compute[117331]:     <controller type="pci" model="pcie-root-port"/>
Oct 09 16:23:39 compute-0 nova_compute[117331]:     <controller type="pci" model="pcie-root-port"/>
Oct 09 16:23:39 compute-0 nova_compute[117331]:     <controller type="pci" model="pcie-root-port"/>
Oct 09 16:23:39 compute-0 nova_compute[117331]:     <controller type="pci" model="pcie-root-port"/>
Oct 09 16:23:39 compute-0 nova_compute[117331]:     <controller type="pci" model="pcie-root-port"/>
Oct 09 16:23:39 compute-0 nova_compute[117331]:     <controller type="pci" model="pcie-root-port"/>
Oct 09 16:23:39 compute-0 nova_compute[117331]:     <controller type="pci" model="pcie-root-port"/>
Oct 09 16:23:39 compute-0 nova_compute[117331]:     <controller type="pci" model="pcie-root-port"/>
Oct 09 16:23:39 compute-0 nova_compute[117331]:     <controller type="pci" model="pcie-root-port"/>
Oct 09 16:23:39 compute-0 nova_compute[117331]:     <controller type="pci" model="pcie-root-port"/>
Oct 09 16:23:39 compute-0 nova_compute[117331]:     <controller type="pci" model="pcie-root-port"/>
Oct 09 16:23:39 compute-0 nova_compute[117331]:     <controller type="pci" model="pcie-root-port"/>
Oct 09 16:23:39 compute-0 nova_compute[117331]:     <controller type="usb" index="0"/>
Oct 09 16:23:39 compute-0 nova_compute[117331]:     <memballoon model="virtio" autodeflate="on" freePageReporting="on">
Oct 09 16:23:39 compute-0 nova_compute[117331]:       <stats period="10"/>
Oct 09 16:23:39 compute-0 nova_compute[117331]:     </memballoon>
Oct 09 16:23:39 compute-0 nova_compute[117331]:   </devices>
Oct 09 16:23:39 compute-0 nova_compute[117331]: </domain>
Oct 09 16:23:39 compute-0 nova_compute[117331]:  _get_guest_xml /usr/lib/python3.12/site-packages/nova/virt/libvirt/driver.py:8052
Oct 09 16:23:39 compute-0 nova_compute[117331]: 2025-10-09 16:23:39.057 2 DEBUG nova.compute.manager [None req-52699a02-a5fe-4aa1-a194-537b5ec0b479 2ed3ac10329446d8a5aae566951cae1e c47758289b4c448c95f927a91d89e09f - - default default] [instance: 7531b461-59ec-4261-8f6f-125c07fbf626] Preparing to wait for external event network-vif-plugged-15a9c820-cacb-46bc-acd6-e8ed3b08432a prepare_for_instance_event /usr/lib/python3.12/site-packages/nova/compute/manager.py:307
Oct 09 16:23:39 compute-0 nova_compute[117331]: 2025-10-09 16:23:39.057 2 DEBUG oslo_concurrency.lockutils [None req-52699a02-a5fe-4aa1-a194-537b5ec0b479 2ed3ac10329446d8a5aae566951cae1e c47758289b4c448c95f927a91d89e09f - - default default] Acquiring lock "7531b461-59ec-4261-8f6f-125c07fbf626-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:405
Oct 09 16:23:39 compute-0 nova_compute[117331]: 2025-10-09 16:23:39.058 2 DEBUG oslo_concurrency.lockutils [None req-52699a02-a5fe-4aa1-a194-537b5ec0b479 2ed3ac10329446d8a5aae566951cae1e c47758289b4c448c95f927a91d89e09f - - default default] Lock "7531b461-59ec-4261-8f6f-125c07fbf626-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.001s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:410
Oct 09 16:23:39 compute-0 nova_compute[117331]: 2025-10-09 16:23:39.058 2 DEBUG oslo_concurrency.lockutils [None req-52699a02-a5fe-4aa1-a194-537b5ec0b479 2ed3ac10329446d8a5aae566951cae1e c47758289b4c448c95f927a91d89e09f - - default default] Lock "7531b461-59ec-4261-8f6f-125c07fbf626-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:424
Oct 09 16:23:39 compute-0 nova_compute[117331]: 2025-10-09 16:23:39.059 2 DEBUG nova.virt.libvirt.vif [None req-52699a02-a5fe-4aa1-a194-537b5ec0b479 2ed3ac10329446d8a5aae566951cae1e c47758289b4c448c95f927a91d89e09f - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,compute_id=1,config_drive='',created_at=2025-10-09T16:23:29Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description=None,display_name='tempest-TestExecuteHostMaintenanceStrategy-server-721204525',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testexecutehostmaintenancestrategy-server-721204525',id=14,image_ref='b7d6e0af-25e4-4227-9dc6-43143898ceee',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='c47758289b4c448c95f927a91d89e09f',ramdisk_id='',reservation_id='r-bg9c54xp',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member,manager,admin',image_base_image_ref='b7d6e0af-25e4-4227-9dc6-43143898ceee',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestExecuteHostMaintenanceStrategy-51576386',owner_user_name='tempest-TestExecuteHostMaintenanceStrategy-51576386-project-admin'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-09T16:23:34Z,user_data=None,user_id='2ed3ac10329446d8a5aae566951cae1e',uuid=7531b461-59ec-4261-8f6f-125c07fbf626,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "15a9c820-cacb-46bc-acd6-e8ed3b08432a", "address": "fa:16:3e:d9:5e:f1", "network": {"id": "c78b28f7-c251-4d74-863e-8d5520acbae0", "bridge": "br-int", "label": "tempest-TestExecuteHostMaintenanceStrategy-300799103-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "95a019f93e534ec0ad7dca9e44d00556", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap15a9c820-ca", "ovs_interfaceid": "15a9c820-cacb-46bc-acd6-e8ed3b08432a", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.12/site-packages/nova/virt/libvirt/vif.py:721
Oct 09 16:23:39 compute-0 nova_compute[117331]: 2025-10-09 16:23:39.059 2 DEBUG nova.network.os_vif_util [None req-52699a02-a5fe-4aa1-a194-537b5ec0b479 2ed3ac10329446d8a5aae566951cae1e c47758289b4c448c95f927a91d89e09f - - default default] Converting VIF {"id": "15a9c820-cacb-46bc-acd6-e8ed3b08432a", "address": "fa:16:3e:d9:5e:f1", "network": {"id": "c78b28f7-c251-4d74-863e-8d5520acbae0", "bridge": "br-int", "label": "tempest-TestExecuteHostMaintenanceStrategy-300799103-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "95a019f93e534ec0ad7dca9e44d00556", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap15a9c820-ca", "ovs_interfaceid": "15a9c820-cacb-46bc-acd6-e8ed3b08432a", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.12/site-packages/nova/network/os_vif_util.py:511
Oct 09 16:23:39 compute-0 nova_compute[117331]: 2025-10-09 16:23:39.060 2 DEBUG nova.network.os_vif_util [None req-52699a02-a5fe-4aa1-a194-537b5ec0b479 2ed3ac10329446d8a5aae566951cae1e c47758289b4c448c95f927a91d89e09f - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:d9:5e:f1,bridge_name='br-int',has_traffic_filtering=True,id=15a9c820-cacb-46bc-acd6-e8ed3b08432a,network=Network(c78b28f7-c251-4d74-863e-8d5520acbae0),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap15a9c820-ca') nova_to_osvif_vif /usr/lib/python3.12/site-packages/nova/network/os_vif_util.py:548
Oct 09 16:23:39 compute-0 nova_compute[117331]: 2025-10-09 16:23:39.061 2 DEBUG os_vif [None req-52699a02-a5fe-4aa1-a194-537b5ec0b479 2ed3ac10329446d8a5aae566951cae1e c47758289b4c448c95f927a91d89e09f - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:d9:5e:f1,bridge_name='br-int',has_traffic_filtering=True,id=15a9c820-cacb-46bc-acd6-e8ed3b08432a,network=Network(c78b28f7-c251-4d74-863e-8d5520acbae0),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap15a9c820-ca') plug /usr/lib/python3.12/site-packages/os_vif/__init__.py:76
Oct 09 16:23:39 compute-0 nova_compute[117331]: 2025-10-09 16:23:39.061 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 22 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 09 16:23:39 compute-0 nova_compute[117331]: 2025-10-09 16:23:39.062 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.12/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 09 16:23:39 compute-0 nova_compute[117331]: 2025-10-09 16:23:39.062 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.12/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 09 16:23:39 compute-0 nova_compute[117331]: 2025-10-09 16:23:39.063 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 22 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 09 16:23:39 compute-0 nova_compute[117331]: 2025-10-09 16:23:39.063 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbCreateCommand(_result=None, table=QoS, columns={'type': 'linux-noop', 'external_ids': {'id': '5019e5ec-0341-54bd-8538-a0274aa4684a', '_type': 'linux-noop'}}, row=False) do_commit /usr/lib/python3.12/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 09 16:23:39 compute-0 nova_compute[117331]: 2025-10-09 16:23:39.065 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 09 16:23:39 compute-0 nova_compute[117331]: 2025-10-09 16:23:39.067 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 09 16:23:39 compute-0 nova_compute[117331]: 2025-10-09 16:23:39.071 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 22 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 09 16:23:39 compute-0 nova_compute[117331]: 2025-10-09 16:23:39.071 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap15a9c820-ca, may_exist=True, interface_attrs={}) do_commit /usr/lib/python3.12/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 09 16:23:39 compute-0 nova_compute[117331]: 2025-10-09 16:23:39.072 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Port, record=tap15a9c820-ca, col_values=(('qos', UUID('c7455d1f-1953-4afc-be1b-c0171b4d74cf')),), if_exists=True) do_commit /usr/lib/python3.12/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 09 16:23:39 compute-0 nova_compute[117331]: 2025-10-09 16:23:39.073 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=2): DbSetCommand(_result=None, table=Interface, record=tap15a9c820-ca, col_values=(('external_ids', {'iface-id': '15a9c820-cacb-46bc-acd6-e8ed3b08432a', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:d9:5e:f1', 'vm-uuid': '7531b461-59ec-4261-8f6f-125c07fbf626'}),), if_exists=True) do_commit /usr/lib/python3.12/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 09 16:23:39 compute-0 nova_compute[117331]: 2025-10-09 16:23:39.074 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 09 16:23:39 compute-0 NetworkManager[1028]: <info>  [1760027019.0759] manager: (tap15a9c820-ca): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/52)
Oct 09 16:23:39 compute-0 nova_compute[117331]: 2025-10-09 16:23:39.076 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:248
Oct 09 16:23:39 compute-0 nova_compute[117331]: 2025-10-09 16:23:39.081 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 09 16:23:39 compute-0 nova_compute[117331]: 2025-10-09 16:23:39.082 2 INFO os_vif [None req-52699a02-a5fe-4aa1-a194-537b5ec0b479 2ed3ac10329446d8a5aae566951cae1e c47758289b4c448c95f927a91d89e09f - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:d9:5e:f1,bridge_name='br-int',has_traffic_filtering=True,id=15a9c820-cacb-46bc-acd6-e8ed3b08432a,network=Network(c78b28f7-c251-4d74-863e-8d5520acbae0),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap15a9c820-ca')
Oct 09 16:23:40 compute-0 nova_compute[117331]: 2025-10-09 16:23:40.623 2 DEBUG nova.virt.libvirt.driver [None req-52699a02-a5fe-4aa1-a194-537b5ec0b479 2ed3ac10329446d8a5aae566951cae1e c47758289b4c448c95f927a91d89e09f - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.12/site-packages/nova/virt/libvirt/driver.py:13022
Oct 09 16:23:40 compute-0 nova_compute[117331]: 2025-10-09 16:23:40.624 2 DEBUG nova.virt.libvirt.driver [None req-52699a02-a5fe-4aa1-a194-537b5ec0b479 2ed3ac10329446d8a5aae566951cae1e c47758289b4c448c95f927a91d89e09f - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.12/site-packages/nova/virt/libvirt/driver.py:13022
Oct 09 16:23:40 compute-0 nova_compute[117331]: 2025-10-09 16:23:40.624 2 DEBUG nova.virt.libvirt.driver [None req-52699a02-a5fe-4aa1-a194-537b5ec0b479 2ed3ac10329446d8a5aae566951cae1e c47758289b4c448c95f927a91d89e09f - - default default] No VIF found with MAC fa:16:3e:d9:5e:f1, not building metadata _build_interface_metadata /usr/lib/python3.12/site-packages/nova/virt/libvirt/driver.py:12998
Oct 09 16:23:40 compute-0 nova_compute[117331]: 2025-10-09 16:23:40.625 2 INFO nova.virt.libvirt.driver [None req-52699a02-a5fe-4aa1-a194-537b5ec0b479 2ed3ac10329446d8a5aae566951cae1e c47758289b4c448c95f927a91d89e09f - - default default] [instance: 7531b461-59ec-4261-8f6f-125c07fbf626] Using config drive
Oct 09 16:23:41 compute-0 nova_compute[117331]: 2025-10-09 16:23:41.106 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 09 16:23:41 compute-0 nova_compute[117331]: 2025-10-09 16:23:41.136 2 WARNING neutronclient.v2_0.client [None req-52699a02-a5fe-4aa1-a194-537b5ec0b479 2ed3ac10329446d8a5aae566951cae1e c47758289b4c448c95f927a91d89e09f - - default default] The python binding code in neutronclient is deprecated in favor of OpenstackSDK, please use that as this will be removed in a future release.
Oct 09 16:23:41 compute-0 nova_compute[117331]: 2025-10-09 16:23:41.534 2 INFO nova.virt.libvirt.driver [None req-52699a02-a5fe-4aa1-a194-537b5ec0b479 2ed3ac10329446d8a5aae566951cae1e c47758289b4c448c95f927a91d89e09f - - default default] [instance: 7531b461-59ec-4261-8f6f-125c07fbf626] Creating config drive at /var/lib/nova/instances/7531b461-59ec-4261-8f6f-125c07fbf626/disk.config
Oct 09 16:23:41 compute-0 nova_compute[117331]: 2025-10-09 16:23:41.542 2 DEBUG oslo_concurrency.processutils [None req-52699a02-a5fe-4aa1-a194-537b5ec0b479 2ed3ac10329446d8a5aae566951cae1e c47758289b4c448c95f927a91d89e09f - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/7531b461-59ec-4261-8f6f-125c07fbf626/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 32.1.0-0.20251008143712.076498e.el10 -quiet -J -r -V config-2 /tmp/tmpxus5oq6_ execute /usr/lib/python3.12/site-packages/oslo_concurrency/processutils.py:349
Oct 09 16:23:41 compute-0 nova_compute[117331]: 2025-10-09 16:23:41.685 2 DEBUG oslo_concurrency.processutils [None req-52699a02-a5fe-4aa1-a194-537b5ec0b479 2ed3ac10329446d8a5aae566951cae1e c47758289b4c448c95f927a91d89e09f - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/7531b461-59ec-4261-8f6f-125c07fbf626/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 32.1.0-0.20251008143712.076498e.el10 -quiet -J -r -V config-2 /tmp/tmpxus5oq6_" returned: 0 in 0.143s execute /usr/lib/python3.12/site-packages/oslo_concurrency/processutils.py:372
Oct 09 16:23:41 compute-0 kernel: tap15a9c820-ca: entered promiscuous mode
Oct 09 16:23:41 compute-0 NetworkManager[1028]: <info>  [1760027021.7744] manager: (tap15a9c820-ca): new Tun device (/org/freedesktop/NetworkManager/Devices/53)
Oct 09 16:23:41 compute-0 ovn_controller[19752]: 2025-10-09T16:23:41Z|00130|binding|INFO|Claiming lport 15a9c820-cacb-46bc-acd6-e8ed3b08432a for this chassis.
Oct 09 16:23:41 compute-0 ovn_controller[19752]: 2025-10-09T16:23:41Z|00131|binding|INFO|15a9c820-cacb-46bc-acd6-e8ed3b08432a: Claiming fa:16:3e:d9:5e:f1 10.100.0.13
Oct 09 16:23:41 compute-0 nova_compute[117331]: 2025-10-09 16:23:41.776 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 09 16:23:41 compute-0 ovn_metadata_agent[28608]: 2025-10-09 16:23:41.785 28613 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:d9:5e:f1 10.100.0.13'], port_security=['fa:16:3e:d9:5e:f1 10.100.0.13'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.13/28', 'neutron:device_id': '7531b461-59ec-4261-8f6f-125c07fbf626', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-c78b28f7-c251-4d74-863e-8d5520acbae0', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'c47758289b4c448c95f927a91d89e09f', 'neutron:revision_number': '4', 'neutron:security_group_ids': '37ce4296-f479-41b4-95bb-85317226edc9', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=8cdf3869-9618-417b-be95-470341634549, chassis=[<ovs.db.idl.Row object at 0x7f37b2576840>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f37b2576840>], logical_port=15a9c820-cacb-46bc-acd6-e8ed3b08432a) old=Port_Binding(chassis=[]) matches /usr/lib/python3.12/site-packages/ovsdbapp/backend/ovs_idl/event.py:55
Oct 09 16:23:41 compute-0 ovn_metadata_agent[28608]: 2025-10-09 16:23:41.786 28613 INFO neutron.agent.ovn.metadata.agent [-] Port 15a9c820-cacb-46bc-acd6-e8ed3b08432a in datapath c78b28f7-c251-4d74-863e-8d5520acbae0 bound to our chassis
Oct 09 16:23:41 compute-0 ovn_metadata_agent[28608]: 2025-10-09 16:23:41.787 28613 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network c78b28f7-c251-4d74-863e-8d5520acbae0
Oct 09 16:23:41 compute-0 nova_compute[117331]: 2025-10-09 16:23:41.791 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 09 16:23:41 compute-0 ovn_controller[19752]: 2025-10-09T16:23:41Z|00132|binding|INFO|Setting lport 15a9c820-cacb-46bc-acd6-e8ed3b08432a ovn-installed in OVS
Oct 09 16:23:41 compute-0 ovn_controller[19752]: 2025-10-09T16:23:41Z|00133|binding|INFO|Setting lport 15a9c820-cacb-46bc-acd6-e8ed3b08432a up in Southbound
Oct 09 16:23:41 compute-0 nova_compute[117331]: 2025-10-09 16:23:41.793 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 09 16:23:41 compute-0 nova_compute[117331]: 2025-10-09 16:23:41.796 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 09 16:23:41 compute-0 ovn_metadata_agent[28608]: 2025-10-09 16:23:41.803 139687 DEBUG oslo.privsep.daemon [-] privsep: reply[e6050a14-ce65-467c-b13d-d2145b788d34]: (4, False) _call_back /usr/lib/python3.12/site-packages/oslo_privsep/daemon.py:515
Oct 09 16:23:41 compute-0 ovn_metadata_agent[28608]: 2025-10-09 16:23:41.804 28613 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tapc78b28f7-c1 in ovnmeta-c78b28f7-c251-4d74-863e-8d5520acbae0 namespace provision_datapath /usr/lib/python3.12/site-packages/neutron/agent/ovn/metadata/agent.py:794
Oct 09 16:23:41 compute-0 ovn_metadata_agent[28608]: 2025-10-09 16:23:41.805 139687 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tapc78b28f7-c0 not found in namespace None get_link_id /usr/lib/python3.12/site-packages/neutron/privileged/agent/linux/ip_lib.py:203
Oct 09 16:23:41 compute-0 ovn_metadata_agent[28608]: 2025-10-09 16:23:41.805 139687 DEBUG oslo.privsep.daemon [-] privsep: reply[29ca1a25-848d-42d4-af04-f209c8b238d4]: (4, False) _call_back /usr/lib/python3.12/site-packages/oslo_privsep/daemon.py:515
Oct 09 16:23:41 compute-0 ovn_metadata_agent[28608]: 2025-10-09 16:23:41.806 139687 DEBUG oslo.privsep.daemon [-] privsep: reply[e8df2be9-897a-4bee-b7a1-142ff3ee8634]: (4, False) _call_back /usr/lib/python3.12/site-packages/oslo_privsep/daemon.py:515
Oct 09 16:23:41 compute-0 systemd-udevd[146415]: Network interface NamePolicy= disabled on kernel command line.
Oct 09 16:23:41 compute-0 ovn_metadata_agent[28608]: 2025-10-09 16:23:41.820 28727 DEBUG oslo.privsep.daemon [-] privsep: reply[1670f46e-67ed-45e5-9ac6-466e3197e819]: (4, None) _call_back /usr/lib/python3.12/site-packages/oslo_privsep/daemon.py:515
Oct 09 16:23:41 compute-0 systemd-machined[77487]: New machine qemu-9-instance-0000000e.
Oct 09 16:23:41 compute-0 NetworkManager[1028]: <info>  [1760027021.8328] device (tap15a9c820-ca): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Oct 09 16:23:41 compute-0 NetworkManager[1028]: <info>  [1760027021.8335] device (tap15a9c820-ca): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Oct 09 16:23:41 compute-0 ovn_metadata_agent[28608]: 2025-10-09 16:23:41.839 139687 DEBUG oslo.privsep.daemon [-] privsep: reply[99cf5696-4796-4c78-8bc4-611c2e8bb8c6]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.12/site-packages/oslo_privsep/daemon.py:515
Oct 09 16:23:41 compute-0 systemd[1]: Started Virtual Machine qemu-9-instance-0000000e.
Oct 09 16:23:41 compute-0 ovn_metadata_agent[28608]: 2025-10-09 16:23:41.869 141048 DEBUG oslo.privsep.daemon [-] privsep: reply[274f282c-941e-4169-86bb-1170eb6f3228]: (4, None) _call_back /usr/lib/python3.12/site-packages/oslo_privsep/daemon.py:515
Oct 09 16:23:41 compute-0 ovn_metadata_agent[28608]: 2025-10-09 16:23:41.872 139687 DEBUG oslo.privsep.daemon [-] privsep: reply[4f4e03d1-2bfa-46cb-a167-c2f02d45036d]: (4, None) _call_back /usr/lib/python3.12/site-packages/oslo_privsep/daemon.py:515
Oct 09 16:23:41 compute-0 NetworkManager[1028]: <info>  [1760027021.8737] manager: (tapc78b28f7-c0): new Veth device (/org/freedesktop/NetworkManager/Devices/54)
Oct 09 16:23:41 compute-0 ovn_metadata_agent[28608]: 2025-10-09 16:23:41.912 141048 DEBUG oslo.privsep.daemon [-] privsep: reply[ece0fca8-a129-4e93-a470-54afaca598b2]: (4, None) _call_back /usr/lib/python3.12/site-packages/oslo_privsep/daemon.py:515
Oct 09 16:23:41 compute-0 ovn_metadata_agent[28608]: 2025-10-09 16:23:41.916 141048 DEBUG oslo.privsep.daemon [-] privsep: reply[2aaebf13-ee6b-4fe4-b8b8-5b0fbe0624a8]: (4, None) _call_back /usr/lib/python3.12/site-packages/oslo_privsep/daemon.py:515
Oct 09 16:23:41 compute-0 nova_compute[117331]: 2025-10-09 16:23:41.939 2 DEBUG nova.compute.manager [req-eb6ad4f2-0e96-4033-814f-3677d48c161f req-35279e4c-718e-4c78-882a-56c5a4bf9e9e ee15961ad2be4bada6c106ac4f69f422 c076c42271004e7aa86e84416e33f826 - - default default] [instance: 7531b461-59ec-4261-8f6f-125c07fbf626] Received event network-vif-plugged-15a9c820-cacb-46bc-acd6-e8ed3b08432a external_instance_event /usr/lib/python3.12/site-packages/nova/compute/manager.py:11812
Oct 09 16:23:41 compute-0 nova_compute[117331]: 2025-10-09 16:23:41.939 2 DEBUG oslo_concurrency.lockutils [req-eb6ad4f2-0e96-4033-814f-3677d48c161f req-35279e4c-718e-4c78-882a-56c5a4bf9e9e ee15961ad2be4bada6c106ac4f69f422 c076c42271004e7aa86e84416e33f826 - - default default] Acquiring lock "7531b461-59ec-4261-8f6f-125c07fbf626-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:405
Oct 09 16:23:41 compute-0 nova_compute[117331]: 2025-10-09 16:23:41.940 2 DEBUG oslo_concurrency.lockutils [req-eb6ad4f2-0e96-4033-814f-3677d48c161f req-35279e4c-718e-4c78-882a-56c5a4bf9e9e ee15961ad2be4bada6c106ac4f69f422 c076c42271004e7aa86e84416e33f826 - - default default] Lock "7531b461-59ec-4261-8f6f-125c07fbf626-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:410
Oct 09 16:23:41 compute-0 nova_compute[117331]: 2025-10-09 16:23:41.940 2 DEBUG oslo_concurrency.lockutils [req-eb6ad4f2-0e96-4033-814f-3677d48c161f req-35279e4c-718e-4c78-882a-56c5a4bf9e9e ee15961ad2be4bada6c106ac4f69f422 c076c42271004e7aa86e84416e33f826 - - default default] Lock "7531b461-59ec-4261-8f6f-125c07fbf626-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:424
Oct 09 16:23:41 compute-0 nova_compute[117331]: 2025-10-09 16:23:41.940 2 DEBUG nova.compute.manager [req-eb6ad4f2-0e96-4033-814f-3677d48c161f req-35279e4c-718e-4c78-882a-56c5a4bf9e9e ee15961ad2be4bada6c106ac4f69f422 c076c42271004e7aa86e84416e33f826 - - default default] [instance: 7531b461-59ec-4261-8f6f-125c07fbf626] Processing event network-vif-plugged-15a9c820-cacb-46bc-acd6-e8ed3b08432a _process_instance_event /usr/lib/python3.12/site-packages/nova/compute/manager.py:11572
Oct 09 16:23:41 compute-0 NetworkManager[1028]: <info>  [1760027021.9418] device (tapc78b28f7-c0): carrier: link connected
Oct 09 16:23:41 compute-0 ovn_metadata_agent[28608]: 2025-10-09 16:23:41.946 141048 DEBUG oslo.privsep.daemon [-] privsep: reply[1d87f074-40b7-401e-98ce-5fe5e1a23dcf]: (4, None) _call_back /usr/lib/python3.12/site-packages/oslo_privsep/daemon.py:515
Oct 09 16:23:41 compute-0 ovn_metadata_agent[28608]: 2025-10-09 16:23:41.964 139687 DEBUG oslo.privsep.daemon [-] privsep: reply[e3610331-c995-436a-a15e-48574f83ca33]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapc78b28f7-c1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:11:fa:7f'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 40], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 196554, 'reachable_time': 18174, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 146447, 'error': None, 'target': 'ovnmeta-c78b28f7-c251-4d74-863e-8d5520acbae0', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.12/site-packages/oslo_privsep/daemon.py:515
Oct 09 16:23:41 compute-0 ovn_metadata_agent[28608]: 2025-10-09 16:23:41.980 139687 DEBUG oslo.privsep.daemon [-] privsep: reply[006d6f8d-a736-4552-a194-79c5412fc187]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe11:fa7f'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 196554, 'tstamp': 196554}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 146448, 'error': None, 'target': 'ovnmeta-c78b28f7-c251-4d74-863e-8d5520acbae0', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.12/site-packages/oslo_privsep/daemon.py:515
Oct 09 16:23:41 compute-0 ovn_metadata_agent[28608]: 2025-10-09 16:23:41.998 139687 DEBUG oslo.privsep.daemon [-] privsep: reply[f81edd01-025a-471a-8950-646e7491e21a]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapc78b28f7-c1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:11:fa:7f'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 40], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 196554, 'reachable_time': 18174, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 146449, 'error': None, 'target': 'ovnmeta-c78b28f7-c251-4d74-863e-8d5520acbae0', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.12/site-packages/oslo_privsep/daemon.py:515
Oct 09 16:23:42 compute-0 ovn_metadata_agent[28608]: 2025-10-09 16:23:42.027 139687 DEBUG oslo.privsep.daemon [-] privsep: reply[c6f8246c-b011-4582-bd2f-eac487e84048]: (4, None) _call_back /usr/lib/python3.12/site-packages/oslo_privsep/daemon.py:515
Oct 09 16:23:42 compute-0 ovn_metadata_agent[28608]: 2025-10-09 16:23:42.086 139687 DEBUG oslo.privsep.daemon [-] privsep: reply[15da2ab5-49e3-4419-9753-53c3392da50d]: (4, None) _call_back /usr/lib/python3.12/site-packages/oslo_privsep/daemon.py:515
Oct 09 16:23:42 compute-0 ovn_metadata_agent[28608]: 2025-10-09 16:23:42.087 28613 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapc78b28f7-c0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.12/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 09 16:23:42 compute-0 ovn_metadata_agent[28608]: 2025-10-09 16:23:42.087 28613 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.12/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 09 16:23:42 compute-0 ovn_metadata_agent[28608]: 2025-10-09 16:23:42.087 28613 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapc78b28f7-c0, may_exist=True, interface_attrs={}) do_commit /usr/lib/python3.12/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 09 16:23:42 compute-0 nova_compute[117331]: 2025-10-09 16:23:42.089 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 09 16:23:42 compute-0 NetworkManager[1028]: <info>  [1760027022.0896] manager: (tapc78b28f7-c0): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/55)
Oct 09 16:23:42 compute-0 kernel: tapc78b28f7-c0: entered promiscuous mode
Oct 09 16:23:42 compute-0 ovn_metadata_agent[28608]: 2025-10-09 16:23:42.093 28613 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapc78b28f7-c0, col_values=(('external_ids', {'iface-id': 'ba8bb1df-8aa6-40e9-83d3-fc3264c1941e'}),), if_exists=True) do_commit /usr/lib/python3.12/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 09 16:23:42 compute-0 ovn_controller[19752]: 2025-10-09T16:23:42Z|00134|binding|INFO|Releasing lport ba8bb1df-8aa6-40e9-83d3-fc3264c1941e from this chassis (sb_readonly=0)
Oct 09 16:23:42 compute-0 nova_compute[117331]: 2025-10-09 16:23:42.094 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 09 16:23:42 compute-0 nova_compute[117331]: 2025-10-09 16:23:42.095 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 09 16:23:42 compute-0 ovn_metadata_agent[28608]: 2025-10-09 16:23:42.107 139687 DEBUG oslo.privsep.daemon [-] privsep: reply[a7cc6391-1c15-4172-b03b-e75207fe145f]: (4, '') _call_back /usr/lib/python3.12/site-packages/oslo_privsep/daemon.py:515
Oct 09 16:23:42 compute-0 ovn_metadata_agent[28608]: 2025-10-09 16:23:42.108 28613 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/c78b28f7-c251-4d74-863e-8d5520acbae0.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/c78b28f7-c251-4d74-863e-8d5520acbae0.pid.haproxy' get_value_from_file /usr/lib/python3.12/site-packages/neutron/agent/linux/utils.py:268
Oct 09 16:23:42 compute-0 ovn_metadata_agent[28608]: 2025-10-09 16:23:42.108 28613 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/c78b28f7-c251-4d74-863e-8d5520acbae0.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/c78b28f7-c251-4d74-863e-8d5520acbae0.pid.haproxy' get_value_from_file /usr/lib/python3.12/site-packages/neutron/agent/linux/utils.py:268
Oct 09 16:23:42 compute-0 ovn_metadata_agent[28608]: 2025-10-09 16:23:42.108 28613 DEBUG neutron.agent.linux.external_process [-] No haproxy process started for c78b28f7-c251-4d74-863e-8d5520acbae0 disable /usr/lib/python3.12/site-packages/neutron/agent/linux/external_process.py:173
Oct 09 16:23:42 compute-0 ovn_metadata_agent[28608]: 2025-10-09 16:23:42.108 28613 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/c78b28f7-c251-4d74-863e-8d5520acbae0.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/c78b28f7-c251-4d74-863e-8d5520acbae0.pid.haproxy' get_value_from_file /usr/lib/python3.12/site-packages/neutron/agent/linux/utils.py:268
Oct 09 16:23:42 compute-0 nova_compute[117331]: 2025-10-09 16:23:42.108 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 09 16:23:42 compute-0 ovn_metadata_agent[28608]: 2025-10-09 16:23:42.109 139687 DEBUG oslo.privsep.daemon [-] privsep: reply[63fe5478-1578-4629-9e82-f3240080bb8a]: (4, None) _call_back /usr/lib/python3.12/site-packages/oslo_privsep/daemon.py:515
Oct 09 16:23:42 compute-0 ovn_metadata_agent[28608]: 2025-10-09 16:23:42.110 28613 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/c78b28f7-c251-4d74-863e-8d5520acbae0.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/c78b28f7-c251-4d74-863e-8d5520acbae0.pid.haproxy' get_value_from_file /usr/lib/python3.12/site-packages/neutron/agent/linux/utils.py:268
Oct 09 16:23:42 compute-0 ovn_metadata_agent[28608]: 2025-10-09 16:23:42.110 139687 DEBUG oslo.privsep.daemon [-] privsep: reply[ad47447d-c98f-4f6b-857d-2f545e005d62]: (4, None) _call_back /usr/lib/python3.12/site-packages/oslo_privsep/daemon.py:515
Oct 09 16:23:42 compute-0 ovn_metadata_agent[28608]: 2025-10-09 16:23:42.111 28613 DEBUG neutron.agent.metadata.driver_base [-] haproxy_cfg = 
Oct 09 16:23:42 compute-0 ovn_metadata_agent[28608]: global
Oct 09 16:23:42 compute-0 ovn_metadata_agent[28608]:     log         /dev/log local0 debug
Oct 09 16:23:42 compute-0 ovn_metadata_agent[28608]:     log-tag     haproxy-metadata-proxy-c78b28f7-c251-4d74-863e-8d5520acbae0
Oct 09 16:23:42 compute-0 ovn_metadata_agent[28608]:     user        root
Oct 09 16:23:42 compute-0 ovn_metadata_agent[28608]:     group       root
Oct 09 16:23:42 compute-0 ovn_metadata_agent[28608]:     maxconn     1024
Oct 09 16:23:42 compute-0 ovn_metadata_agent[28608]:     pidfile     /var/lib/neutron/external/pids/c78b28f7-c251-4d74-863e-8d5520acbae0.pid.haproxy
Oct 09 16:23:42 compute-0 ovn_metadata_agent[28608]:     daemon
Oct 09 16:23:42 compute-0 ovn_metadata_agent[28608]: 
Oct 09 16:23:42 compute-0 ovn_metadata_agent[28608]: defaults
Oct 09 16:23:42 compute-0 ovn_metadata_agent[28608]:     log global
Oct 09 16:23:42 compute-0 ovn_metadata_agent[28608]:     mode http
Oct 09 16:23:42 compute-0 ovn_metadata_agent[28608]:     option httplog
Oct 09 16:23:42 compute-0 ovn_metadata_agent[28608]:     option dontlognull
Oct 09 16:23:42 compute-0 ovn_metadata_agent[28608]:     option http-server-close
Oct 09 16:23:42 compute-0 ovn_metadata_agent[28608]:     option forwardfor
Oct 09 16:23:42 compute-0 ovn_metadata_agent[28608]:     retries                 3
Oct 09 16:23:42 compute-0 ovn_metadata_agent[28608]:     timeout http-request    30s
Oct 09 16:23:42 compute-0 ovn_metadata_agent[28608]:     timeout connect         30s
Oct 09 16:23:42 compute-0 ovn_metadata_agent[28608]:     timeout client          32s
Oct 09 16:23:42 compute-0 ovn_metadata_agent[28608]:     timeout server          32s
Oct 09 16:23:42 compute-0 ovn_metadata_agent[28608]:     timeout http-keep-alive 30s
Oct 09 16:23:42 compute-0 ovn_metadata_agent[28608]: 
Oct 09 16:23:42 compute-0 ovn_metadata_agent[28608]: listen listener
Oct 09 16:23:42 compute-0 ovn_metadata_agent[28608]:     bind 169.254.169.254:80
Oct 09 16:23:42 compute-0 ovn_metadata_agent[28608]:     
Oct 09 16:23:42 compute-0 ovn_metadata_agent[28608]:     server metadata /var/lib/neutron/metadata_proxy
Oct 09 16:23:42 compute-0 ovn_metadata_agent[28608]: 
Oct 09 16:23:42 compute-0 ovn_metadata_agent[28608]:     http-request add-header X-OVN-Network-ID c78b28f7-c251-4d74-863e-8d5520acbae0
Oct 09 16:23:42 compute-0 ovn_metadata_agent[28608]:  create_config_file /usr/lib/python3.12/site-packages/neutron/agent/metadata/driver_base.py:155
Oct 09 16:23:42 compute-0 ovn_metadata_agent[28608]: 2025-10-09 16:23:42.111 28613 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-c78b28f7-c251-4d74-863e-8d5520acbae0', 'env', 'PROCESS_TAG=haproxy-c78b28f7-c251-4d74-863e-8d5520acbae0', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/c78b28f7-c251-4d74-863e-8d5520acbae0.conf'] create_process /usr/lib/python3.12/site-packages/neutron/agent/linux/utils.py:84
Oct 09 16:23:42 compute-0 podman[146488]: 2025-10-09 16:23:42.528552618 +0000 UTC m=+0.066467796 container create 45cffa2ae58e30a7e8519d9d4471288a24e5df1427812c1750aff4cb76c6fa57 (image=38.102.83.66:5001/podified-master-centos10/openstack-neutron-metadata-agent-ovn:watcher_latest, name=neutron-haproxy-ovnmeta-c78b28f7-c251-4d74-863e-8d5520acbae0, org.label-schema.vendor=CentOS, tcib_build_tag=watcher_latest, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.build-date=20251007, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true, io.buildah.version=1.41.4)
Oct 09 16:23:42 compute-0 systemd[1]: Started libpod-conmon-45cffa2ae58e30a7e8519d9d4471288a24e5df1427812c1750aff4cb76c6fa57.scope.
Oct 09 16:23:42 compute-0 podman[146488]: 2025-10-09 16:23:42.497800429 +0000 UTC m=+0.035715647 image pull 4e15c189a95c5dcd1203c76fe304de5c8f8616da9a258ca168cc54a517f49b87 38.102.83.66:5001/podified-master-centos10/openstack-neutron-metadata-agent-ovn:watcher_latest
Oct 09 16:23:42 compute-0 systemd[1]: Started libcrun container.
Oct 09 16:23:42 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/829ae67fdae167e34f2ba75540a5656fb5b648325b24a954f82696d9bf6ee1bc/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Oct 09 16:23:42 compute-0 podman[146488]: 2025-10-09 16:23:42.625115851 +0000 UTC m=+0.163031119 container init 45cffa2ae58e30a7e8519d9d4471288a24e5df1427812c1750aff4cb76c6fa57 (image=38.102.83.66:5001/podified-master-centos10/openstack-neutron-metadata-agent-ovn:watcher_latest, name=neutron-haproxy-ovnmeta-c78b28f7-c251-4d74-863e-8d5520acbae0, tcib_build_tag=watcher_latest, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251007, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 10 Base Image, io.buildah.version=1.41.4, org.label-schema.schema-version=1.0)
Oct 09 16:23:42 compute-0 nova_compute[117331]: 2025-10-09 16:23:42.635 2 DEBUG nova.compute.manager [None req-52699a02-a5fe-4aa1-a194-537b5ec0b479 2ed3ac10329446d8a5aae566951cae1e c47758289b4c448c95f927a91d89e09f - - default default] [instance: 7531b461-59ec-4261-8f6f-125c07fbf626] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.12/site-packages/nova/compute/manager.py:602
Oct 09 16:23:42 compute-0 podman[146488]: 2025-10-09 16:23:42.638163537 +0000 UTC m=+0.176078765 container start 45cffa2ae58e30a7e8519d9d4471288a24e5df1427812c1750aff4cb76c6fa57 (image=38.102.83.66:5001/podified-master-centos10/openstack-neutron-metadata-agent-ovn:watcher_latest, name=neutron-haproxy-ovnmeta-c78b28f7-c251-4d74-863e-8d5520acbae0, tcib_build_tag=watcher_latest, org.label-schema.name=CentOS Stream 10 Base Image, tcib_managed=true, io.buildah.version=1.41.4, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251007, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Oct 09 16:23:42 compute-0 nova_compute[117331]: 2025-10-09 16:23:42.642 2 DEBUG nova.virt.libvirt.driver [None req-52699a02-a5fe-4aa1-a194-537b5ec0b479 2ed3ac10329446d8a5aae566951cae1e c47758289b4c448c95f927a91d89e09f - - default default] [instance: 7531b461-59ec-4261-8f6f-125c07fbf626] Guest created on hypervisor spawn /usr/lib/python3.12/site-packages/nova/virt/libvirt/driver.py:4816
Oct 09 16:23:42 compute-0 nova_compute[117331]: 2025-10-09 16:23:42.647 2 INFO nova.virt.libvirt.driver [-] [instance: 7531b461-59ec-4261-8f6f-125c07fbf626] Instance spawned successfully.
Oct 09 16:23:42 compute-0 nova_compute[117331]: 2025-10-09 16:23:42.648 2 DEBUG nova.virt.libvirt.driver [None req-52699a02-a5fe-4aa1-a194-537b5ec0b479 2ed3ac10329446d8a5aae566951cae1e c47758289b4c448c95f927a91d89e09f - - default default] [instance: 7531b461-59ec-4261-8f6f-125c07fbf626] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.12/site-packages/nova/virt/libvirt/driver.py:985
Oct 09 16:23:42 compute-0 neutron-haproxy-ovnmeta-c78b28f7-c251-4d74-863e-8d5520acbae0[146504]: [NOTICE]   (146508) : New worker (146510) forked
Oct 09 16:23:42 compute-0 neutron-haproxy-ovnmeta-c78b28f7-c251-4d74-863e-8d5520acbae0[146504]: [NOTICE]   (146508) : Loading success.
Oct 09 16:23:43 compute-0 nova_compute[117331]: 2025-10-09 16:23:43.163 2 DEBUG nova.virt.libvirt.driver [None req-52699a02-a5fe-4aa1-a194-537b5ec0b479 2ed3ac10329446d8a5aae566951cae1e c47758289b4c448c95f927a91d89e09f - - default default] [instance: 7531b461-59ec-4261-8f6f-125c07fbf626] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.12/site-packages/nova/virt/libvirt/driver.py:1014
Oct 09 16:23:43 compute-0 nova_compute[117331]: 2025-10-09 16:23:43.164 2 DEBUG nova.virt.libvirt.driver [None req-52699a02-a5fe-4aa1-a194-537b5ec0b479 2ed3ac10329446d8a5aae566951cae1e c47758289b4c448c95f927a91d89e09f - - default default] [instance: 7531b461-59ec-4261-8f6f-125c07fbf626] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.12/site-packages/nova/virt/libvirt/driver.py:1014
Oct 09 16:23:43 compute-0 nova_compute[117331]: 2025-10-09 16:23:43.164 2 DEBUG nova.virt.libvirt.driver [None req-52699a02-a5fe-4aa1-a194-537b5ec0b479 2ed3ac10329446d8a5aae566951cae1e c47758289b4c448c95f927a91d89e09f - - default default] [instance: 7531b461-59ec-4261-8f6f-125c07fbf626] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.12/site-packages/nova/virt/libvirt/driver.py:1014
Oct 09 16:23:43 compute-0 nova_compute[117331]: 2025-10-09 16:23:43.165 2 DEBUG nova.virt.libvirt.driver [None req-52699a02-a5fe-4aa1-a194-537b5ec0b479 2ed3ac10329446d8a5aae566951cae1e c47758289b4c448c95f927a91d89e09f - - default default] [instance: 7531b461-59ec-4261-8f6f-125c07fbf626] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.12/site-packages/nova/virt/libvirt/driver.py:1014
Oct 09 16:23:43 compute-0 nova_compute[117331]: 2025-10-09 16:23:43.165 2 DEBUG nova.virt.libvirt.driver [None req-52699a02-a5fe-4aa1-a194-537b5ec0b479 2ed3ac10329446d8a5aae566951cae1e c47758289b4c448c95f927a91d89e09f - - default default] [instance: 7531b461-59ec-4261-8f6f-125c07fbf626] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.12/site-packages/nova/virt/libvirt/driver.py:1014
Oct 09 16:23:43 compute-0 nova_compute[117331]: 2025-10-09 16:23:43.166 2 DEBUG nova.virt.libvirt.driver [None req-52699a02-a5fe-4aa1-a194-537b5ec0b479 2ed3ac10329446d8a5aae566951cae1e c47758289b4c448c95f927a91d89e09f - - default default] [instance: 7531b461-59ec-4261-8f6f-125c07fbf626] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.12/site-packages/nova/virt/libvirt/driver.py:1014
Oct 09 16:23:43 compute-0 nova_compute[117331]: 2025-10-09 16:23:43.676 2 INFO nova.compute.manager [None req-52699a02-a5fe-4aa1-a194-537b5ec0b479 2ed3ac10329446d8a5aae566951cae1e c47758289b4c448c95f927a91d89e09f - - default default] [instance: 7531b461-59ec-4261-8f6f-125c07fbf626] Took 8.02 seconds to spawn the instance on the hypervisor.
Oct 09 16:23:43 compute-0 nova_compute[117331]: 2025-10-09 16:23:43.677 2 DEBUG nova.compute.manager [None req-52699a02-a5fe-4aa1-a194-537b5ec0b479 2ed3ac10329446d8a5aae566951cae1e c47758289b4c448c95f927a91d89e09f - - default default] [instance: 7531b461-59ec-4261-8f6f-125c07fbf626] Checking state _get_power_state /usr/lib/python3.12/site-packages/nova/compute/manager.py:1825
Oct 09 16:23:43 compute-0 podman[146520]: 2025-10-09 16:23:43.855856094 +0000 UTC m=+0.078891212 container health_status 8227728d23f374cb9528be6ddf01dff38d8c792912a17b0d137932a18390b820 (image=38.102.83.66:5001/podified-master-centos10/openstack-iscsid:watcher_latest, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, tcib_build_tag=watcher_latest, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': '38.102.83.66:5001/podified-master-centos10/openstack-iscsid:watcher_latest', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, config_id=iscsid, container_name=iscsid, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251007, org.label-schema.name=CentOS Stream 10 Base Image, io.buildah.version=1.41.4)
Oct 09 16:23:43 compute-0 podman[146519]: 2025-10-09 16:23:43.860956006 +0000 UTC m=+0.084490600 container health_status 0e966c7a283689daf23c5db5017f23f685ee836319240188a9f70ddd246563c5 (image=38.102.83.66:5001/podified-master-centos10/openstack-neutron-metadata-agent-ovn:watcher_latest, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251007, config_id=ovn_metadata_agent, io.buildah.version=1.41.4, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 10 Base Image, tcib_build_tag=watcher_latest, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': '38.102.83.66:5001/podified-master-centos10/openstack-neutron-metadata-agent-ovn:watcher_latest', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent)
Oct 09 16:23:44 compute-0 nova_compute[117331]: 2025-10-09 16:23:43.999 2 DEBUG nova.compute.manager [req-ae0282f9-4f3d-4d91-b176-2063e2e65dd9 req-d3298ab6-4d89-47a2-8458-a2e3ba5c1882 ee15961ad2be4bada6c106ac4f69f422 c076c42271004e7aa86e84416e33f826 - - default default] [instance: 7531b461-59ec-4261-8f6f-125c07fbf626] Received event network-vif-plugged-15a9c820-cacb-46bc-acd6-e8ed3b08432a external_instance_event /usr/lib/python3.12/site-packages/nova/compute/manager.py:11812
Oct 09 16:23:44 compute-0 nova_compute[117331]: 2025-10-09 16:23:43.999 2 DEBUG oslo_concurrency.lockutils [req-ae0282f9-4f3d-4d91-b176-2063e2e65dd9 req-d3298ab6-4d89-47a2-8458-a2e3ba5c1882 ee15961ad2be4bada6c106ac4f69f422 c076c42271004e7aa86e84416e33f826 - - default default] Acquiring lock "7531b461-59ec-4261-8f6f-125c07fbf626-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:405
Oct 09 16:23:44 compute-0 nova_compute[117331]: 2025-10-09 16:23:44.000 2 DEBUG oslo_concurrency.lockutils [req-ae0282f9-4f3d-4d91-b176-2063e2e65dd9 req-d3298ab6-4d89-47a2-8458-a2e3ba5c1882 ee15961ad2be4bada6c106ac4f69f422 c076c42271004e7aa86e84416e33f826 - - default default] Lock "7531b461-59ec-4261-8f6f-125c07fbf626-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:410
Oct 09 16:23:44 compute-0 nova_compute[117331]: 2025-10-09 16:23:44.000 2 DEBUG oslo_concurrency.lockutils [req-ae0282f9-4f3d-4d91-b176-2063e2e65dd9 req-d3298ab6-4d89-47a2-8458-a2e3ba5c1882 ee15961ad2be4bada6c106ac4f69f422 c076c42271004e7aa86e84416e33f826 - - default default] Lock "7531b461-59ec-4261-8f6f-125c07fbf626-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:424
Oct 09 16:23:44 compute-0 nova_compute[117331]: 2025-10-09 16:23:44.001 2 DEBUG nova.compute.manager [req-ae0282f9-4f3d-4d91-b176-2063e2e65dd9 req-d3298ab6-4d89-47a2-8458-a2e3ba5c1882 ee15961ad2be4bada6c106ac4f69f422 c076c42271004e7aa86e84416e33f826 - - default default] [instance: 7531b461-59ec-4261-8f6f-125c07fbf626] No waiting events found dispatching network-vif-plugged-15a9c820-cacb-46bc-acd6-e8ed3b08432a pop_instance_event /usr/lib/python3.12/site-packages/nova/compute/manager.py:344
Oct 09 16:23:44 compute-0 nova_compute[117331]: 2025-10-09 16:23:44.001 2 WARNING nova.compute.manager [req-ae0282f9-4f3d-4d91-b176-2063e2e65dd9 req-d3298ab6-4d89-47a2-8458-a2e3ba5c1882 ee15961ad2be4bada6c106ac4f69f422 c076c42271004e7aa86e84416e33f826 - - default default] [instance: 7531b461-59ec-4261-8f6f-125c07fbf626] Received unexpected event network-vif-plugged-15a9c820-cacb-46bc-acd6-e8ed3b08432a for instance with vm_state active and task_state None.
Oct 09 16:23:44 compute-0 nova_compute[117331]: 2025-10-09 16:23:44.076 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 09 16:23:44 compute-0 nova_compute[117331]: 2025-10-09 16:23:44.220 2 INFO nova.compute.manager [None req-52699a02-a5fe-4aa1-a194-537b5ec0b479 2ed3ac10329446d8a5aae566951cae1e c47758289b4c448c95f927a91d89e09f - - default default] [instance: 7531b461-59ec-4261-8f6f-125c07fbf626] Took 13.23 seconds to build instance.
Oct 09 16:23:44 compute-0 nova_compute[117331]: 2025-10-09 16:23:44.725 2 DEBUG oslo_concurrency.lockutils [None req-52699a02-a5fe-4aa1-a194-537b5ec0b479 2ed3ac10329446d8a5aae566951cae1e c47758289b4c448c95f927a91d89e09f - - default default] Lock "7531b461-59ec-4261-8f6f-125c07fbf626" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 14.752s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:424
Oct 09 16:23:46 compute-0 nova_compute[117331]: 2025-10-09 16:23:46.111 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 09 16:23:49 compute-0 nova_compute[117331]: 2025-10-09 16:23:49.080 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 09 16:23:51 compute-0 nova_compute[117331]: 2025-10-09 16:23:51.111 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 09 16:23:52 compute-0 podman[146562]: 2025-10-09 16:23:52.846883254 +0000 UTC m=+0.077071793 container health_status 64fa251112d01abb34ec1bb8e22019815ddcaaaa391ce5ec91517147ac5a9994 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, build-date=2025-08-20T13:12:41, com.redhat.component=ubi9-minimal-container, version=9.6, managed_by=edpm_ansible, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, config_id=edpm, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, maintainer=Red Hat, Inc., name=ubi9-minimal, architecture=x86_64, io.buildah.version=1.33.7, distribution-scope=public, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., container_name=openstack_network_exporter, vendor=Red Hat, Inc., release=1755695350, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., io.openshift.tags=minimal rhel9, url=https://catalog.redhat.com/en/search?searchType=containers, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.expose-services=)
Oct 09 16:23:53 compute-0 ovn_controller[19752]: 2025-10-09T16:23:53Z|00017|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:d9:5e:f1 10.100.0.13
Oct 09 16:23:53 compute-0 ovn_controller[19752]: 2025-10-09T16:23:53Z|00018|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:d9:5e:f1 10.100.0.13
Oct 09 16:23:53 compute-0 podman[146592]: 2025-10-09 16:23:53.874324766 +0000 UTC m=+0.101306295 container health_status d615e4f24ff62fbb14be4a63e6da568ec5b22309773c1c66103b6b4de64a8a2f (image=38.102.83.66:5001/podified-master-centos10/openstack-ovn-controller:watcher_latest, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251007, org.label-schema.name=CentOS Stream 10 Base Image, tcib_build_tag=watcher_latest, container_name=ovn_controller, io.buildah.version=1.41.4, org.label-schema.license=GPLv2, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': '38.102.83.66:5001/podified-master-centos10/openstack-ovn-controller:watcher_latest', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=ovn_controller, org.label-schema.schema-version=1.0)
Oct 09 16:23:54 compute-0 nova_compute[117331]: 2025-10-09 16:23:54.082 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 09 16:23:56 compute-0 nova_compute[117331]: 2025-10-09 16:23:56.145 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 09 16:23:59 compute-0 nova_compute[117331]: 2025-10-09 16:23:59.086 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 09 16:23:59 compute-0 podman[127775]: time="2025-10-09T16:23:59Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Oct 09 16:23:59 compute-0 podman[127775]: @ - - [09/Oct/2025:16:23:59 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 20749 "" "Go-http-client/1.1"
Oct 09 16:23:59 compute-0 podman[127775]: @ - - [09/Oct/2025:16:23:59 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 3486 "" "Go-http-client/1.1"
Oct 09 16:24:01 compute-0 nova_compute[117331]: 2025-10-09 16:24:01.148 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 09 16:24:01 compute-0 openstack_network_exporter[129925]: ERROR   16:24:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Oct 09 16:24:01 compute-0 openstack_network_exporter[129925]: ERROR   16:24:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Oct 09 16:24:01 compute-0 openstack_network_exporter[129925]: ERROR   16:24:01 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Oct 09 16:24:01 compute-0 openstack_network_exporter[129925]: ERROR   16:24:01 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Oct 09 16:24:01 compute-0 openstack_network_exporter[129925]: 
Oct 09 16:24:01 compute-0 openstack_network_exporter[129925]: ERROR   16:24:01 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Oct 09 16:24:01 compute-0 openstack_network_exporter[129925]: 
Oct 09 16:24:04 compute-0 nova_compute[117331]: 2025-10-09 16:24:04.091 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 09 16:24:05 compute-0 podman[146620]: 2025-10-09 16:24:05.838576282 +0000 UTC m=+0.061888751 container health_status 270830b5167cafbab735d8118980f26987e673cd0fa464dd2202394830bcca66 (image=38.102.83.66:5001/podified-master-centos10/openstack-multipathd:watcher_latest, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': '38.102.83.66:5001/podified-master-centos10/openstack-multipathd:watcher_latest', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd, io.buildah.version=1.41.4, org.label-schema.build-date=20251007, org.label-schema.name=CentOS Stream 10 Base Image, tcib_managed=true, config_id=multipathd, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=watcher_latest, maintainer=OpenStack Kubernetes Operator team)
Oct 09 16:24:06 compute-0 nova_compute[117331]: 2025-10-09 16:24:06.149 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 09 16:24:08 compute-0 podman[146640]: 2025-10-09 16:24:08.856637383 +0000 UTC m=+0.068968947 container health_status 86aa93e887899e54d383b0fc17fb3c5637680d1d8e146a130a557c837f0260fe (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']})
Oct 09 16:24:09 compute-0 nova_compute[117331]: 2025-10-09 16:24:09.129 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 09 16:24:11 compute-0 nova_compute[117331]: 2025-10-09 16:24:11.152 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 09 16:24:12 compute-0 nova_compute[117331]: 2025-10-09 16:24:12.379 2 DEBUG nova.compute.manager [None req-6ad35c04-8823-434a-b614-2227f76af166 005258eefa5345409efe13f6a1de22ab c076c42271004e7aa86e84416e33f826 - - default default] Adding trait COMPUTE_STATUS_DISABLED to compute node resource provider 593051b8-2000-437f-a915-2616fc8b1671 in placement. update_compute_provider_status /usr/lib/python3.12/site-packages/nova/compute/manager.py:635
Oct 09 16:24:12 compute-0 nova_compute[117331]: 2025-10-09 16:24:12.466 2 DEBUG nova.compute.provider_tree [None req-6ad35c04-8823-434a-b614-2227f76af166 005258eefa5345409efe13f6a1de22ab c076c42271004e7aa86e84416e33f826 - - default default] Updating resource provider 593051b8-2000-437f-a915-2616fc8b1671 generation from 21 to 23 during operation: update_traits _update_generation /usr/lib/python3.12/site-packages/nova/compute/provider_tree.py:164
Oct 09 16:24:14 compute-0 nova_compute[117331]: 2025-10-09 16:24:14.132 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 09 16:24:14 compute-0 podman[146664]: 2025-10-09 16:24:14.842467203 +0000 UTC m=+0.069769562 container health_status 0e966c7a283689daf23c5db5017f23f685ee836319240188a9f70ddd246563c5 (image=38.102.83.66:5001/podified-master-centos10/openstack-neutron-metadata-agent-ovn:watcher_latest, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=watcher_latest, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': '38.102.83.66:5001/podified-master-centos10/openstack-neutron-metadata-agent-ovn:watcher_latest', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.4, org.label-schema.build-date=20251007, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent)
Oct 09 16:24:14 compute-0 podman[146665]: 2025-10-09 16:24:14.861325344 +0000 UTC m=+0.076546418 container health_status 8227728d23f374cb9528be6ddf01dff38d8c792912a17b0d137932a18390b820 (image=38.102.83.66:5001/podified-master-centos10/openstack-iscsid:watcher_latest, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251007, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': '38.102.83.66:5001/podified-master-centos10/openstack-iscsid:watcher_latest', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=watcher_latest, tcib_managed=true, config_id=iscsid, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.schema-version=1.0, container_name=iscsid, io.buildah.version=1.41.4, managed_by=edpm_ansible)
Oct 09 16:24:15 compute-0 nova_compute[117331]: 2025-10-09 16:24:15.814 2 DEBUG oslo_service.periodic_task [None req-92a13bed-c081-4c7b-87f3-bafebefb79f2 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.12/site-packages/oslo_service/periodic_task.py:210
Oct 09 16:24:16 compute-0 nova_compute[117331]: 2025-10-09 16:24:16.183 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 09 16:24:18 compute-0 nova_compute[117331]: 2025-10-09 16:24:18.310 2 DEBUG oslo_service.periodic_task [None req-92a13bed-c081-4c7b-87f3-bafebefb79f2 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.12/site-packages/oslo_service/periodic_task.py:210
Oct 09 16:24:19 compute-0 nova_compute[117331]: 2025-10-09 16:24:19.135 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 09 16:24:20 compute-0 nova_compute[117331]: 2025-10-09 16:24:20.298 2 DEBUG nova.virt.libvirt.driver [None req-0f6f4f94-486e-4917-8619-3858302d5b76 005258eefa5345409efe13f6a1de22ab c076c42271004e7aa86e84416e33f826 - - default default] [instance: 7531b461-59ec-4261-8f6f-125c07fbf626] Check if temp file /var/lib/nova/instances/tmpy9e9kj6p exists to indicate shared storage is being used for migration. Exists? False _check_shared_storage_test_file /usr/lib/python3.12/site-packages/nova/virt/libvirt/driver.py:10968
Oct 09 16:24:20 compute-0 nova_compute[117331]: 2025-10-09 16:24:20.303 2 DEBUG nova.compute.manager [None req-0f6f4f94-486e-4917-8619-3858302d5b76 005258eefa5345409efe13f6a1de22ab c076c42271004e7aa86e84416e33f826 - - default default] source check data is LibvirtLiveMigrateData(bdms=<?>,block_migration=True,disk_available_mb=74752,disk_over_commit=<?>,dst_cpu_shared_set_info=set([]),dst_numa_info=<?>,dst_supports_mdev_live_migration=True,dst_supports_numa_live_migration=<?>,dst_wants_file_backed_memory=False,file_backed_memory_discard=<?>,filename='tmpy9e9kj6p',graphics_listen_addr_spice=127.0.0.1,graphics_listen_addr_vnc=::,image_type='qcow2',instance_relative_path='7531b461-59ec-4261-8f6f-125c07fbf626',is_shared_block_storage=False,is_shared_instance_path=False,is_volume_backed=False,migration=<?>,old_vol_attachment_ids=<?>,pci_dev_map_src_dst=<?>,serial_listen_addr=None,serial_listen_ports=<?>,source_mdev_types=<?>,src_supports_native_luks=<?>,src_supports_numa_live_migration=<?>,supported_perf_events=<?>,target_connect_addr=<?>,target_mdevs=<?>,vifs=[VIFMigrateData],wait_for_vif_plugged=<?>) check_can_live_migrate_source /usr/lib/python3.12/site-packages/nova/compute/manager.py:9294
Oct 09 16:24:20 compute-0 nova_compute[117331]: 2025-10-09 16:24:20.305 2 DEBUG oslo_service.periodic_task [None req-92a13bed-c081-4c7b-87f3-bafebefb79f2 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.12/site-packages/oslo_service/periodic_task.py:210
Oct 09 16:24:20 compute-0 nova_compute[117331]: 2025-10-09 16:24:20.306 2 DEBUG nova.compute.manager [None req-92a13bed-c081-4c7b-87f3-bafebefb79f2 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.12/site-packages/nova/compute/manager.py:11228
Oct 09 16:24:20 compute-0 nova_compute[117331]: 2025-10-09 16:24:20.307 2 DEBUG oslo_service.periodic_task [None req-92a13bed-c081-4c7b-87f3-bafebefb79f2 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.12/site-packages/oslo_service/periodic_task.py:210
Oct 09 16:24:20 compute-0 nova_compute[117331]: 2025-10-09 16:24:20.824 2 DEBUG oslo_concurrency.lockutils [None req-92a13bed-c081-4c7b-87f3-bafebefb79f2 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:405
Oct 09 16:24:20 compute-0 nova_compute[117331]: 2025-10-09 16:24:20.825 2 DEBUG oslo_concurrency.lockutils [None req-92a13bed-c081-4c7b-87f3-bafebefb79f2 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:410
Oct 09 16:24:20 compute-0 nova_compute[117331]: 2025-10-09 16:24:20.825 2 DEBUG oslo_concurrency.lockutils [None req-92a13bed-c081-4c7b-87f3-bafebefb79f2 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:424
Oct 09 16:24:20 compute-0 nova_compute[117331]: 2025-10-09 16:24:20.825 2 DEBUG nova.compute.resource_tracker [None req-92a13bed-c081-4c7b-87f3-bafebefb79f2 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.12/site-packages/nova/compute/resource_tracker.py:937
Oct 09 16:24:21 compute-0 nova_compute[117331]: 2025-10-09 16:24:21.218 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 09 16:24:21 compute-0 nova_compute[117331]: 2025-10-09 16:24:21.868 2 DEBUG oslo_concurrency.processutils [None req-92a13bed-c081-4c7b-87f3-bafebefb79f2 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/7531b461-59ec-4261-8f6f-125c07fbf626/disk --force-share --output=json execute /usr/lib/python3.12/site-packages/oslo_concurrency/processutils.py:349
Oct 09 16:24:21 compute-0 nova_compute[117331]: 2025-10-09 16:24:21.930 2 DEBUG oslo_concurrency.processutils [None req-92a13bed-c081-4c7b-87f3-bafebefb79f2 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/7531b461-59ec-4261-8f6f-125c07fbf626/disk --force-share --output=json" returned: 0 in 0.062s execute /usr/lib/python3.12/site-packages/oslo_concurrency/processutils.py:372
Oct 09 16:24:21 compute-0 nova_compute[117331]: 2025-10-09 16:24:21.931 2 DEBUG oslo_concurrency.processutils [None req-92a13bed-c081-4c7b-87f3-bafebefb79f2 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/7531b461-59ec-4261-8f6f-125c07fbf626/disk --force-share --output=json execute /usr/lib/python3.12/site-packages/oslo_concurrency/processutils.py:349
Oct 09 16:24:22 compute-0 nova_compute[117331]: 2025-10-09 16:24:22.009 2 DEBUG oslo_concurrency.processutils [None req-92a13bed-c081-4c7b-87f3-bafebefb79f2 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/7531b461-59ec-4261-8f6f-125c07fbf626/disk --force-share --output=json" returned: 0 in 0.078s execute /usr/lib/python3.12/site-packages/oslo_concurrency/processutils.py:372
Oct 09 16:24:22 compute-0 nova_compute[117331]: 2025-10-09 16:24:22.147 2 WARNING nova.virt.libvirt.driver [None req-92a13bed-c081-4c7b-87f3-bafebefb79f2 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Oct 09 16:24:22 compute-0 nova_compute[117331]: 2025-10-09 16:24:22.148 2 DEBUG oslo_concurrency.processutils [None req-92a13bed-c081-4c7b-87f3-bafebefb79f2 - - - - - -] Running cmd (subprocess): env LANG=C uptime execute /usr/lib/python3.12/site-packages/oslo_concurrency/processutils.py:349
Oct 09 16:24:22 compute-0 nova_compute[117331]: 2025-10-09 16:24:22.165 2 DEBUG oslo_concurrency.processutils [None req-92a13bed-c081-4c7b-87f3-bafebefb79f2 - - - - - -] CMD "env LANG=C uptime" returned: 0 in 0.017s execute /usr/lib/python3.12/site-packages/oslo_concurrency/processutils.py:372
Oct 09 16:24:22 compute-0 nova_compute[117331]: 2025-10-09 16:24:22.166 2 DEBUG nova.compute.resource_tracker [None req-92a13bed-c081-4c7b-87f3-bafebefb79f2 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=6003MB free_disk=73.22858428955078GB free_vcpus=7 pci_devices=[{"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.12/site-packages/nova/compute/resource_tracker.py:1136
Oct 09 16:24:22 compute-0 nova_compute[117331]: 2025-10-09 16:24:22.166 2 DEBUG oslo_concurrency.lockutils [None req-92a13bed-c081-4c7b-87f3-bafebefb79f2 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:405
Oct 09 16:24:22 compute-0 nova_compute[117331]: 2025-10-09 16:24:22.166 2 DEBUG oslo_concurrency.lockutils [None req-92a13bed-c081-4c7b-87f3-bafebefb79f2 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:410
Oct 09 16:24:23 compute-0 nova_compute[117331]: 2025-10-09 16:24:23.188 2 INFO nova.compute.resource_tracker [None req-92a13bed-c081-4c7b-87f3-bafebefb79f2 - - - - - -] [instance: 7531b461-59ec-4261-8f6f-125c07fbf626] Updating resource usage from migration d7553f96-341c-40c8-9725-099a4b3f8767
Oct 09 16:24:23 compute-0 nova_compute[117331]: 2025-10-09 16:24:23.215 2 DEBUG nova.compute.resource_tracker [None req-92a13bed-c081-4c7b-87f3-bafebefb79f2 - - - - - -] Migration d7553f96-341c-40c8-9725-099a4b3f8767 is active on this compute host and has allocations in placement: {'resources': {'VCPU': 1, 'MEMORY_MB': 128, 'DISK_GB': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.12/site-packages/nova/compute/resource_tracker.py:1745
Oct 09 16:24:23 compute-0 nova_compute[117331]: 2025-10-09 16:24:23.216 2 DEBUG nova.compute.resource_tracker [None req-92a13bed-c081-4c7b-87f3-bafebefb79f2 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 1 _report_final_resource_view /usr/lib/python3.12/site-packages/nova/compute/resource_tracker.py:1159
Oct 09 16:24:23 compute-0 nova_compute[117331]: 2025-10-09 16:24:23.216 2 DEBUG nova.compute.resource_tracker [None req-92a13bed-c081-4c7b-87f3-bafebefb79f2 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=640MB phys_disk=79GB used_disk=1GB total_vcpus=8 used_vcpus=1 pci_stats=[] stats={'failed_builds': '0', 'uptime': ' 16:24:22 up 33 min,  0 user,  load average: 0.82, 0.52, 0.37\n', 'num_instances': '1', 'num_vm_active': '1', 'num_task_migrating': '1', 'num_os_type_None': '1', 'num_proj_c47758289b4c448c95f927a91d89e09f': '1', 'io_workload': '0'} _report_final_resource_view /usr/lib/python3.12/site-packages/nova/compute/resource_tracker.py:1168
Oct 09 16:24:23 compute-0 nova_compute[117331]: 2025-10-09 16:24:23.278 2 DEBUG nova.compute.provider_tree [None req-92a13bed-c081-4c7b-87f3-bafebefb79f2 - - - - - -] Inventory has not changed in ProviderTree for provider: 593051b8-2000-437f-a915-2616fc8b1671 update_inventory /usr/lib/python3.12/site-packages/nova/compute/provider_tree.py:180
Oct 09 16:24:23 compute-0 nova_compute[117331]: 2025-10-09 16:24:23.784 2 DEBUG nova.scheduler.client.report [None req-92a13bed-c081-4c7b-87f3-bafebefb79f2 - - - - - -] Inventory has not changed for provider 593051b8-2000-437f-a915-2616fc8b1671 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 79, 'reserved': 1, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.12/site-packages/nova/scheduler/client/report.py:958
Oct 09 16:24:23 compute-0 podman[146711]: 2025-10-09 16:24:23.906681824 +0000 UTC m=+0.119535686 container health_status 64fa251112d01abb34ec1bb8e22019815ddcaaaa391ce5ec91517147ac5a9994 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, vendor=Red Hat, Inc., io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, distribution-scope=public, io.openshift.expose-services=, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., managed_by=edpm_ansible, build-date=2025-08-20T13:12:41, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, release=1755695350, version=9.6, container_name=openstack_network_exporter, vcs-type=git, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, config_id=edpm, url=https://catalog.redhat.com/en/search?searchType=containers, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., architecture=x86_64, com.redhat.component=ubi9-minimal-container, io.openshift.tags=minimal rhel9, maintainer=Red Hat, Inc., name=ubi9-minimal, io.buildah.version=1.33.7, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b)
Oct 09 16:24:24 compute-0 podman[146734]: 2025-10-09 16:24:24.059762806 +0000 UTC m=+0.115162476 container health_status d615e4f24ff62fbb14be4a63e6da568ec5b22309773c1c66103b6b4de64a8a2f (image=38.102.83.66:5001/podified-master-centos10/openstack-ovn-controller:watcher_latest, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.4, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 10 Base Image, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': '38.102.83.66:5001/podified-master-centos10/openstack-ovn-controller:watcher_latest', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, container_name=ovn_controller, org.label-schema.build-date=20251007, tcib_build_tag=watcher_latest, tcib_managed=true)
Oct 09 16:24:24 compute-0 nova_compute[117331]: 2025-10-09 16:24:24.137 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 09 16:24:24 compute-0 nova_compute[117331]: 2025-10-09 16:24:24.293 2 DEBUG nova.compute.resource_tracker [None req-92a13bed-c081-4c7b-87f3-bafebefb79f2 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.12/site-packages/nova/compute/resource_tracker.py:1097
Oct 09 16:24:24 compute-0 nova_compute[117331]: 2025-10-09 16:24:24.293 2 DEBUG oslo_concurrency.lockutils [None req-92a13bed-c081-4c7b-87f3-bafebefb79f2 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 2.127s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:424
Oct 09 16:24:24 compute-0 nova_compute[117331]: 2025-10-09 16:24:24.500 2 DEBUG oslo_concurrency.processutils [None req-0f6f4f94-486e-4917-8619-3858302d5b76 005258eefa5345409efe13f6a1de22ab c076c42271004e7aa86e84416e33f826 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/7531b461-59ec-4261-8f6f-125c07fbf626/disk --force-share --output=json execute /usr/lib/python3.12/site-packages/oslo_concurrency/processutils.py:349
Oct 09 16:24:24 compute-0 nova_compute[117331]: 2025-10-09 16:24:24.557 2 DEBUG oslo_concurrency.processutils [None req-0f6f4f94-486e-4917-8619-3858302d5b76 005258eefa5345409efe13f6a1de22ab c076c42271004e7aa86e84416e33f826 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/7531b461-59ec-4261-8f6f-125c07fbf626/disk --force-share --output=json" returned: 0 in 0.057s execute /usr/lib/python3.12/site-packages/oslo_concurrency/processutils.py:372
Oct 09 16:24:24 compute-0 nova_compute[117331]: 2025-10-09 16:24:24.558 2 DEBUG oslo_concurrency.processutils [None req-0f6f4f94-486e-4917-8619-3858302d5b76 005258eefa5345409efe13f6a1de22ab c076c42271004e7aa86e84416e33f826 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/7531b461-59ec-4261-8f6f-125c07fbf626/disk --force-share --output=json execute /usr/lib/python3.12/site-packages/oslo_concurrency/processutils.py:349
Oct 09 16:24:24 compute-0 nova_compute[117331]: 2025-10-09 16:24:24.619 2 DEBUG oslo_concurrency.processutils [None req-0f6f4f94-486e-4917-8619-3858302d5b76 005258eefa5345409efe13f6a1de22ab c076c42271004e7aa86e84416e33f826 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/7531b461-59ec-4261-8f6f-125c07fbf626/disk --force-share --output=json" returned: 0 in 0.061s execute /usr/lib/python3.12/site-packages/oslo_concurrency/processutils.py:372
Oct 09 16:24:24 compute-0 nova_compute[117331]: 2025-10-09 16:24:24.622 2 DEBUG nova.compute.manager [None req-0f6f4f94-486e-4917-8619-3858302d5b76 005258eefa5345409efe13f6a1de22ab c076c42271004e7aa86e84416e33f826 - - default default] [instance: 7531b461-59ec-4261-8f6f-125c07fbf626] Preparing to wait for external event network-vif-plugged-15a9c820-cacb-46bc-acd6-e8ed3b08432a prepare_for_instance_event /usr/lib/python3.12/site-packages/nova/compute/manager.py:307
Oct 09 16:24:24 compute-0 nova_compute[117331]: 2025-10-09 16:24:24.622 2 DEBUG oslo_concurrency.lockutils [None req-0f6f4f94-486e-4917-8619-3858302d5b76 005258eefa5345409efe13f6a1de22ab c076c42271004e7aa86e84416e33f826 - - default default] Acquiring lock "7531b461-59ec-4261-8f6f-125c07fbf626-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:405
Oct 09 16:24:24 compute-0 nova_compute[117331]: 2025-10-09 16:24:24.623 2 DEBUG oslo_concurrency.lockutils [None req-0f6f4f94-486e-4917-8619-3858302d5b76 005258eefa5345409efe13f6a1de22ab c076c42271004e7aa86e84416e33f826 - - default default] Lock "7531b461-59ec-4261-8f6f-125c07fbf626-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.001s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:410
Oct 09 16:24:24 compute-0 nova_compute[117331]: 2025-10-09 16:24:24.624 2 DEBUG oslo_concurrency.lockutils [None req-0f6f4f94-486e-4917-8619-3858302d5b76 005258eefa5345409efe13f6a1de22ab c076c42271004e7aa86e84416e33f826 - - default default] Lock "7531b461-59ec-4261-8f6f-125c07fbf626-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.001s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:424
Oct 09 16:24:26 compute-0 nova_compute[117331]: 2025-10-09 16:24:26.258 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 09 16:24:27 compute-0 nova_compute[117331]: 2025-10-09 16:24:27.289 2 DEBUG oslo_service.periodic_task [None req-92a13bed-c081-4c7b-87f3-bafebefb79f2 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.12/site-packages/oslo_service/periodic_task.py:210
Oct 09 16:24:27 compute-0 nova_compute[117331]: 2025-10-09 16:24:27.290 2 DEBUG oslo_service.periodic_task [None req-92a13bed-c081-4c7b-87f3-bafebefb79f2 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.12/site-packages/oslo_service/periodic_task.py:210
Oct 09 16:24:27 compute-0 nova_compute[117331]: 2025-10-09 16:24:27.804 2 DEBUG oslo_service.periodic_task [None req-92a13bed-c081-4c7b-87f3-bafebefb79f2 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.12/site-packages/oslo_service/periodic_task.py:210
Oct 09 16:24:27 compute-0 nova_compute[117331]: 2025-10-09 16:24:27.804 2 DEBUG oslo_service.periodic_task [None req-92a13bed-c081-4c7b-87f3-bafebefb79f2 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.12/site-packages/oslo_service/periodic_task.py:210
Oct 09 16:24:27 compute-0 nova_compute[117331]: 2025-10-09 16:24:27.804 2 DEBUG oslo_service.periodic_task [None req-92a13bed-c081-4c7b-87f3-bafebefb79f2 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.12/site-packages/oslo_service/periodic_task.py:210
Oct 09 16:24:29 compute-0 nova_compute[117331]: 2025-10-09 16:24:29.141 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 09 16:24:29 compute-0 podman[127775]: time="2025-10-09T16:24:29Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Oct 09 16:24:29 compute-0 podman[127775]: @ - - [09/Oct/2025:16:24:29 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 20749 "" "Go-http-client/1.1"
Oct 09 16:24:29 compute-0 podman[127775]: @ - - [09/Oct/2025:16:24:29 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 3490 "" "Go-http-client/1.1"
Oct 09 16:24:31 compute-0 nova_compute[117331]: 2025-10-09 16:24:31.150 2 DEBUG nova.compute.manager [req-98e44158-3abc-40fd-a1c1-99a11f56f7f7 req-95545f69-bfbb-4a0d-b891-b63959c69dea ee15961ad2be4bada6c106ac4f69f422 c076c42271004e7aa86e84416e33f826 - - default default] [instance: 7531b461-59ec-4261-8f6f-125c07fbf626] Received event network-vif-unplugged-15a9c820-cacb-46bc-acd6-e8ed3b08432a external_instance_event /usr/lib/python3.12/site-packages/nova/compute/manager.py:11812
Oct 09 16:24:31 compute-0 nova_compute[117331]: 2025-10-09 16:24:31.151 2 DEBUG oslo_concurrency.lockutils [req-98e44158-3abc-40fd-a1c1-99a11f56f7f7 req-95545f69-bfbb-4a0d-b891-b63959c69dea ee15961ad2be4bada6c106ac4f69f422 c076c42271004e7aa86e84416e33f826 - - default default] Acquiring lock "7531b461-59ec-4261-8f6f-125c07fbf626-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:405
Oct 09 16:24:31 compute-0 nova_compute[117331]: 2025-10-09 16:24:31.152 2 DEBUG oslo_concurrency.lockutils [req-98e44158-3abc-40fd-a1c1-99a11f56f7f7 req-95545f69-bfbb-4a0d-b891-b63959c69dea ee15961ad2be4bada6c106ac4f69f422 c076c42271004e7aa86e84416e33f826 - - default default] Lock "7531b461-59ec-4261-8f6f-125c07fbf626-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:410
Oct 09 16:24:31 compute-0 nova_compute[117331]: 2025-10-09 16:24:31.153 2 DEBUG oslo_concurrency.lockutils [req-98e44158-3abc-40fd-a1c1-99a11f56f7f7 req-95545f69-bfbb-4a0d-b891-b63959c69dea ee15961ad2be4bada6c106ac4f69f422 c076c42271004e7aa86e84416e33f826 - - default default] Lock "7531b461-59ec-4261-8f6f-125c07fbf626-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:424
Oct 09 16:24:31 compute-0 nova_compute[117331]: 2025-10-09 16:24:31.153 2 DEBUG nova.compute.manager [req-98e44158-3abc-40fd-a1c1-99a11f56f7f7 req-95545f69-bfbb-4a0d-b891-b63959c69dea ee15961ad2be4bada6c106ac4f69f422 c076c42271004e7aa86e84416e33f826 - - default default] [instance: 7531b461-59ec-4261-8f6f-125c07fbf626] No event matching network-vif-unplugged-15a9c820-cacb-46bc-acd6-e8ed3b08432a in dict_keys([('network-vif-plugged', '15a9c820-cacb-46bc-acd6-e8ed3b08432a')]) pop_instance_event /usr/lib/python3.12/site-packages/nova/compute/manager.py:349
Oct 09 16:24:31 compute-0 nova_compute[117331]: 2025-10-09 16:24:31.154 2 DEBUG nova.compute.manager [req-98e44158-3abc-40fd-a1c1-99a11f56f7f7 req-95545f69-bfbb-4a0d-b891-b63959c69dea ee15961ad2be4bada6c106ac4f69f422 c076c42271004e7aa86e84416e33f826 - - default default] [instance: 7531b461-59ec-4261-8f6f-125c07fbf626] Received event network-vif-unplugged-15a9c820-cacb-46bc-acd6-e8ed3b08432a for instance with task_state migrating. _process_instance_event /usr/lib/python3.12/site-packages/nova/compute/manager.py:11590
Oct 09 16:24:31 compute-0 nova_compute[117331]: 2025-10-09 16:24:31.261 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 09 16:24:31 compute-0 openstack_network_exporter[129925]: ERROR   16:24:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Oct 09 16:24:31 compute-0 openstack_network_exporter[129925]: ERROR   16:24:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Oct 09 16:24:31 compute-0 openstack_network_exporter[129925]: ERROR   16:24:31 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Oct 09 16:24:31 compute-0 openstack_network_exporter[129925]: ERROR   16:24:31 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Oct 09 16:24:31 compute-0 openstack_network_exporter[129925]: 
Oct 09 16:24:31 compute-0 openstack_network_exporter[129925]: ERROR   16:24:31 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Oct 09 16:24:31 compute-0 openstack_network_exporter[129925]: 
Oct 09 16:24:31 compute-0 ovn_metadata_agent[28608]: 2025-10-09 16:24:31.473 28613 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=16, options={'arp_ns_explicit_output': 'true', 'fdb_removal_limit': '0', 'ignore_lsp_down': 'false', 'mac_binding_removal_limit': '0', 'mac_prefix': '4a:65:5f', 'max_tunid': '16711680', 'northd_internal_version': '24.09.4-20.37.0-77.8', 'svc_monitor_mac': '32:24:1e:e3:5d:e8'}, ipsec=False) old=SB_Global(nb_cfg=15) matches /usr/lib/python3.12/site-packages/ovsdbapp/backend/ovs_idl/event.py:55
Oct 09 16:24:31 compute-0 nova_compute[117331]: 2025-10-09 16:24:31.474 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 09 16:24:31 compute-0 ovn_metadata_agent[28608]: 2025-10-09 16:24:31.475 28613 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 6 seconds run /usr/lib/python3.12/site-packages/neutron/agent/ovn/metadata/agent.py:367
Oct 09 16:24:31 compute-0 ovn_controller[19752]: 2025-10-09T16:24:31Z|00135|memory_trim|INFO|Detected inactivity (last active 30001 ms ago): trimming memory
Oct 09 16:24:32 compute-0 nova_compute[117331]: 2025-10-09 16:24:32.651 2 INFO nova.compute.manager [None req-0f6f4f94-486e-4917-8619-3858302d5b76 005258eefa5345409efe13f6a1de22ab c076c42271004e7aa86e84416e33f826 - - default default] [instance: 7531b461-59ec-4261-8f6f-125c07fbf626] Took 8.03 seconds for pre_live_migration on destination host compute-1.ctlplane.example.com.
Oct 09 16:24:33 compute-0 nova_compute[117331]: 2025-10-09 16:24:33.203 2 DEBUG nova.compute.manager [req-5ed6130c-1981-4cde-a0d6-e03fb95bfb2a req-3c1116e2-e592-41ee-8ba1-2e0527b21230 ee15961ad2be4bada6c106ac4f69f422 c076c42271004e7aa86e84416e33f826 - - default default] [instance: 7531b461-59ec-4261-8f6f-125c07fbf626] Received event network-vif-plugged-15a9c820-cacb-46bc-acd6-e8ed3b08432a external_instance_event /usr/lib/python3.12/site-packages/nova/compute/manager.py:11812
Oct 09 16:24:33 compute-0 nova_compute[117331]: 2025-10-09 16:24:33.204 2 DEBUG oslo_concurrency.lockutils [req-5ed6130c-1981-4cde-a0d6-e03fb95bfb2a req-3c1116e2-e592-41ee-8ba1-2e0527b21230 ee15961ad2be4bada6c106ac4f69f422 c076c42271004e7aa86e84416e33f826 - - default default] Acquiring lock "7531b461-59ec-4261-8f6f-125c07fbf626-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:405
Oct 09 16:24:33 compute-0 nova_compute[117331]: 2025-10-09 16:24:33.204 2 DEBUG oslo_concurrency.lockutils [req-5ed6130c-1981-4cde-a0d6-e03fb95bfb2a req-3c1116e2-e592-41ee-8ba1-2e0527b21230 ee15961ad2be4bada6c106ac4f69f422 c076c42271004e7aa86e84416e33f826 - - default default] Lock "7531b461-59ec-4261-8f6f-125c07fbf626-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:410
Oct 09 16:24:33 compute-0 nova_compute[117331]: 2025-10-09 16:24:33.205 2 DEBUG oslo_concurrency.lockutils [req-5ed6130c-1981-4cde-a0d6-e03fb95bfb2a req-3c1116e2-e592-41ee-8ba1-2e0527b21230 ee15961ad2be4bada6c106ac4f69f422 c076c42271004e7aa86e84416e33f826 - - default default] Lock "7531b461-59ec-4261-8f6f-125c07fbf626-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:424
Oct 09 16:24:33 compute-0 nova_compute[117331]: 2025-10-09 16:24:33.205 2 DEBUG nova.compute.manager [req-5ed6130c-1981-4cde-a0d6-e03fb95bfb2a req-3c1116e2-e592-41ee-8ba1-2e0527b21230 ee15961ad2be4bada6c106ac4f69f422 c076c42271004e7aa86e84416e33f826 - - default default] [instance: 7531b461-59ec-4261-8f6f-125c07fbf626] Processing event network-vif-plugged-15a9c820-cacb-46bc-acd6-e8ed3b08432a _process_instance_event /usr/lib/python3.12/site-packages/nova/compute/manager.py:11572
Oct 09 16:24:33 compute-0 nova_compute[117331]: 2025-10-09 16:24:33.206 2 DEBUG nova.compute.manager [req-5ed6130c-1981-4cde-a0d6-e03fb95bfb2a req-3c1116e2-e592-41ee-8ba1-2e0527b21230 ee15961ad2be4bada6c106ac4f69f422 c076c42271004e7aa86e84416e33f826 - - default default] [instance: 7531b461-59ec-4261-8f6f-125c07fbf626] Received event network-changed-15a9c820-cacb-46bc-acd6-e8ed3b08432a external_instance_event /usr/lib/python3.12/site-packages/nova/compute/manager.py:11812
Oct 09 16:24:33 compute-0 nova_compute[117331]: 2025-10-09 16:24:33.206 2 DEBUG nova.compute.manager [req-5ed6130c-1981-4cde-a0d6-e03fb95bfb2a req-3c1116e2-e592-41ee-8ba1-2e0527b21230 ee15961ad2be4bada6c106ac4f69f422 c076c42271004e7aa86e84416e33f826 - - default default] [instance: 7531b461-59ec-4261-8f6f-125c07fbf626] Refreshing instance network info cache due to event network-changed-15a9c820-cacb-46bc-acd6-e8ed3b08432a. external_instance_event /usr/lib/python3.12/site-packages/nova/compute/manager.py:11817
Oct 09 16:24:33 compute-0 nova_compute[117331]: 2025-10-09 16:24:33.207 2 DEBUG oslo_concurrency.lockutils [req-5ed6130c-1981-4cde-a0d6-e03fb95bfb2a req-3c1116e2-e592-41ee-8ba1-2e0527b21230 ee15961ad2be4bada6c106ac4f69f422 c076c42271004e7aa86e84416e33f826 - - default default] Acquiring lock "refresh_cache-7531b461-59ec-4261-8f6f-125c07fbf626" lock /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:313
Oct 09 16:24:33 compute-0 nova_compute[117331]: 2025-10-09 16:24:33.207 2 DEBUG oslo_concurrency.lockutils [req-5ed6130c-1981-4cde-a0d6-e03fb95bfb2a req-3c1116e2-e592-41ee-8ba1-2e0527b21230 ee15961ad2be4bada6c106ac4f69f422 c076c42271004e7aa86e84416e33f826 - - default default] Acquired lock "refresh_cache-7531b461-59ec-4261-8f6f-125c07fbf626" lock /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:316
Oct 09 16:24:33 compute-0 nova_compute[117331]: 2025-10-09 16:24:33.207 2 DEBUG nova.network.neutron [req-5ed6130c-1981-4cde-a0d6-e03fb95bfb2a req-3c1116e2-e592-41ee-8ba1-2e0527b21230 ee15961ad2be4bada6c106ac4f69f422 c076c42271004e7aa86e84416e33f826 - - default default] [instance: 7531b461-59ec-4261-8f6f-125c07fbf626] Refreshing network info cache for port 15a9c820-cacb-46bc-acd6-e8ed3b08432a _get_instance_nw_info /usr/lib/python3.12/site-packages/nova/network/neutron.py:2067
Oct 09 16:24:33 compute-0 nova_compute[117331]: 2025-10-09 16:24:33.209 2 DEBUG nova.compute.manager [None req-0f6f4f94-486e-4917-8619-3858302d5b76 005258eefa5345409efe13f6a1de22ab c076c42271004e7aa86e84416e33f826 - - default default] [instance: 7531b461-59ec-4261-8f6f-125c07fbf626] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.12/site-packages/nova/compute/manager.py:602
Oct 09 16:24:33 compute-0 nova_compute[117331]: 2025-10-09 16:24:33.716 2 WARNING neutronclient.v2_0.client [req-5ed6130c-1981-4cde-a0d6-e03fb95bfb2a req-3c1116e2-e592-41ee-8ba1-2e0527b21230 ee15961ad2be4bada6c106ac4f69f422 c076c42271004e7aa86e84416e33f826 - - default default] The python binding code in neutronclient is deprecated in favor of OpenstackSDK, please use that as this will be removed in a future release.
Oct 09 16:24:33 compute-0 nova_compute[117331]: 2025-10-09 16:24:33.720 2 DEBUG nova.compute.manager [None req-0f6f4f94-486e-4917-8619-3858302d5b76 005258eefa5345409efe13f6a1de22ab c076c42271004e7aa86e84416e33f826 - - default default] live_migration data is LibvirtLiveMigrateData(bdms=[],block_migration=True,disk_available_mb=74752,disk_over_commit=<?>,dst_cpu_shared_set_info=set([]),dst_numa_info=<?>,dst_supports_mdev_live_migration=True,dst_supports_numa_live_migration=<?>,dst_wants_file_backed_memory=False,file_backed_memory_discard=<?>,filename='tmpy9e9kj6p',graphics_listen_addr_spice=127.0.0.1,graphics_listen_addr_vnc=::,image_type='qcow2',instance_relative_path='7531b461-59ec-4261-8f6f-125c07fbf626',is_shared_block_storage=False,is_shared_instance_path=False,is_volume_backed=False,migration=Migration(d7553f96-341c-40c8-9725-099a4b3f8767),old_vol_attachment_ids={},pci_dev_map_src_dst={},serial_listen_addr=None,serial_listen_ports=[],source_mdev_types=<?>,src_supports_native_luks=<?>,src_supports_numa_live_migration=<?>,supported_perf_events=[],target_connect_addr=None,target_mdevs=<?>,vifs=[VIFMigrateData],wait_for_vif_plugged=True) _do_live_migration /usr/lib/python3.12/site-packages/nova/compute/manager.py:9659
Oct 09 16:24:34 compute-0 nova_compute[117331]: 2025-10-09 16:24:34.144 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 09 16:24:34 compute-0 nova_compute[117331]: 2025-10-09 16:24:34.210 2 WARNING neutronclient.v2_0.client [req-5ed6130c-1981-4cde-a0d6-e03fb95bfb2a req-3c1116e2-e592-41ee-8ba1-2e0527b21230 ee15961ad2be4bada6c106ac4f69f422 c076c42271004e7aa86e84416e33f826 - - default default] The python binding code in neutronclient is deprecated in favor of OpenstackSDK, please use that as this will be removed in a future release.
Oct 09 16:24:34 compute-0 nova_compute[117331]: 2025-10-09 16:24:34.260 2 DEBUG nova.objects.instance [None req-0f6f4f94-486e-4917-8619-3858302d5b76 005258eefa5345409efe13f6a1de22ab c076c42271004e7aa86e84416e33f826 - - default default] Lazy-loading 'migration_context' on Instance uuid 7531b461-59ec-4261-8f6f-125c07fbf626 obj_load_attr /usr/lib/python3.12/site-packages/nova/objects/instance.py:1141
Oct 09 16:24:34 compute-0 nova_compute[117331]: 2025-10-09 16:24:34.261 2 DEBUG nova.virt.libvirt.driver [None req-0f6f4f94-486e-4917-8619-3858302d5b76 005258eefa5345409efe13f6a1de22ab c076c42271004e7aa86e84416e33f826 - - default default] [instance: 7531b461-59ec-4261-8f6f-125c07fbf626] Starting monitoring of live migration _live_migration /usr/lib/python3.12/site-packages/nova/virt/libvirt/driver.py:11543
Oct 09 16:24:34 compute-0 nova_compute[117331]: 2025-10-09 16:24:34.262 2 DEBUG nova.virt.libvirt.driver [None req-0f6f4f94-486e-4917-8619-3858302d5b76 005258eefa5345409efe13f6a1de22ab c076c42271004e7aa86e84416e33f826 - - default default] [instance: 7531b461-59ec-4261-8f6f-125c07fbf626] Operation thread is still running _live_migration_monitor /usr/lib/python3.12/site-packages/nova/virt/libvirt/driver.py:11343
Oct 09 16:24:34 compute-0 nova_compute[117331]: 2025-10-09 16:24:34.263 2 DEBUG nova.virt.libvirt.driver [None req-0f6f4f94-486e-4917-8619-3858302d5b76 005258eefa5345409efe13f6a1de22ab c076c42271004e7aa86e84416e33f826 - - default default] [instance: 7531b461-59ec-4261-8f6f-125c07fbf626] Migration not running yet _live_migration_monitor /usr/lib/python3.12/site-packages/nova/virt/libvirt/driver.py:11352
Oct 09 16:24:34 compute-0 nova_compute[117331]: 2025-10-09 16:24:34.342 2 DEBUG nova.network.neutron [req-5ed6130c-1981-4cde-a0d6-e03fb95bfb2a req-3c1116e2-e592-41ee-8ba1-2e0527b21230 ee15961ad2be4bada6c106ac4f69f422 c076c42271004e7aa86e84416e33f826 - - default default] [instance: 7531b461-59ec-4261-8f6f-125c07fbf626] Updated VIF entry in instance network info cache for port 15a9c820-cacb-46bc-acd6-e8ed3b08432a. _build_network_info_model /usr/lib/python3.12/site-packages/nova/network/neutron.py:3542
Oct 09 16:24:34 compute-0 nova_compute[117331]: 2025-10-09 16:24:34.343 2 DEBUG nova.network.neutron [req-5ed6130c-1981-4cde-a0d6-e03fb95bfb2a req-3c1116e2-e592-41ee-8ba1-2e0527b21230 ee15961ad2be4bada6c106ac4f69f422 c076c42271004e7aa86e84416e33f826 - - default default] [instance: 7531b461-59ec-4261-8f6f-125c07fbf626] Updating instance_info_cache with network_info: [{"id": "15a9c820-cacb-46bc-acd6-e8ed3b08432a", "address": "fa:16:3e:d9:5e:f1", "network": {"id": "c78b28f7-c251-4d74-863e-8d5520acbae0", "bridge": "br-int", "label": "tempest-TestExecuteHostMaintenanceStrategy-300799103-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "95a019f93e534ec0ad7dca9e44d00556", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap15a9c820-ca", "ovs_interfaceid": "15a9c820-cacb-46bc-acd6-e8ed3b08432a", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {"migrating_to": "compute-1.ctlplane.example.com"}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.12/site-packages/nova/network/neutron.py:116
Oct 09 16:24:34 compute-0 nova_compute[117331]: 2025-10-09 16:24:34.766 2 DEBUG nova.virt.libvirt.driver [None req-0f6f4f94-486e-4917-8619-3858302d5b76 005258eefa5345409efe13f6a1de22ab c076c42271004e7aa86e84416e33f826 - - default default] [instance: 7531b461-59ec-4261-8f6f-125c07fbf626] Operation thread is still running _live_migration_monitor /usr/lib/python3.12/site-packages/nova/virt/libvirt/driver.py:11343
Oct 09 16:24:34 compute-0 nova_compute[117331]: 2025-10-09 16:24:34.766 2 DEBUG nova.virt.libvirt.driver [None req-0f6f4f94-486e-4917-8619-3858302d5b76 005258eefa5345409efe13f6a1de22ab c076c42271004e7aa86e84416e33f826 - - default default] [instance: 7531b461-59ec-4261-8f6f-125c07fbf626] Migration not running yet _live_migration_monitor /usr/lib/python3.12/site-packages/nova/virt/libvirt/driver.py:11352
Oct 09 16:24:34 compute-0 nova_compute[117331]: 2025-10-09 16:24:34.773 2 DEBUG nova.virt.libvirt.vif [None req-0f6f4f94-486e-4917-8619-3858302d5b76 005258eefa5345409efe13f6a1de22ab c076c42271004e7aa86e84416e33f826 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,compute_id=1,config_drive='True',created_at=2025-10-09T16:23:29Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description=None,display_name='tempest-TestExecuteHostMaintenanceStrategy-server-721204525',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testexecutehostmaintenancestrategy-server-721204525',id=14,image_ref='b7d6e0af-25e4-4227-9dc6-43143898ceee',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-10-09T16:23:43Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=1,progress=0,project_id='c47758289b4c448c95f927a91d89e09f',ramdisk_id='',reservation_id='r-bg9c54xp',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member,manager,admin',image_base_image_ref='b7d6e0af-25e4-4227-9dc6-43143898ceee',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model=
'virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestExecuteHostMaintenanceStrategy-51576386',owner_user_name='tempest-TestExecuteHostMaintenanceStrategy-51576386-project-admin'},tags=<?>,task_state='migrating',terminated_at=None,trusted_certs=<?>,updated_at=2025-10-09T16:23:43Z,user_data=None,user_id='2ed3ac10329446d8a5aae566951cae1e',uuid=7531b461-59ec-4261-8f6f-125c07fbf626,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "15a9c820-cacb-46bc-acd6-e8ed3b08432a", "address": "fa:16:3e:d9:5e:f1", "network": {"id": "c78b28f7-c251-4d74-863e-8d5520acbae0", "bridge": "br-int", "label": "tempest-TestExecuteHostMaintenanceStrategy-300799103-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "95a019f93e534ec0ad7dca9e44d00556", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system"}, "devname": "tap15a9c820-ca", "ovs_interfaceid": "15a9c820-cacb-46bc-acd6-e8ed3b08432a", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {"os_vif_delegation": true}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.12/site-packages/nova/virt/libvirt/vif.py:574
Oct 09 16:24:34 compute-0 nova_compute[117331]: 2025-10-09 16:24:34.774 2 DEBUG nova.network.os_vif_util [None req-0f6f4f94-486e-4917-8619-3858302d5b76 005258eefa5345409efe13f6a1de22ab c076c42271004e7aa86e84416e33f826 - - default default] Converting VIF {"id": "15a9c820-cacb-46bc-acd6-e8ed3b08432a", "address": "fa:16:3e:d9:5e:f1", "network": {"id": "c78b28f7-c251-4d74-863e-8d5520acbae0", "bridge": "br-int", "label": "tempest-TestExecuteHostMaintenanceStrategy-300799103-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "95a019f93e534ec0ad7dca9e44d00556", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system"}, "devname": "tap15a9c820-ca", "ovs_interfaceid": "15a9c820-cacb-46bc-acd6-e8ed3b08432a", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {"os_vif_delegation": true}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.12/site-packages/nova/network/os_vif_util.py:511
Oct 09 16:24:34 compute-0 nova_compute[117331]: 2025-10-09 16:24:34.775 2 DEBUG nova.network.os_vif_util [None req-0f6f4f94-486e-4917-8619-3858302d5b76 005258eefa5345409efe13f6a1de22ab c076c42271004e7aa86e84416e33f826 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:d9:5e:f1,bridge_name='br-int',has_traffic_filtering=True,id=15a9c820-cacb-46bc-acd6-e8ed3b08432a,network=Network(c78b28f7-c251-4d74-863e-8d5520acbae0),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap15a9c820-ca') nova_to_osvif_vif /usr/lib/python3.12/site-packages/nova/network/os_vif_util.py:548
Oct 09 16:24:34 compute-0 nova_compute[117331]: 2025-10-09 16:24:34.776 2 DEBUG nova.virt.libvirt.migration [None req-0f6f4f94-486e-4917-8619-3858302d5b76 005258eefa5345409efe13f6a1de22ab c076c42271004e7aa86e84416e33f826 - - default default] [instance: 7531b461-59ec-4261-8f6f-125c07fbf626] Updating guest XML with vif config: <interface type="ethernet">
Oct 09 16:24:34 compute-0 nova_compute[117331]:   <mac address="fa:16:3e:d9:5e:f1"/>
Oct 09 16:24:34 compute-0 nova_compute[117331]:   <model type="virtio"/>
Oct 09 16:24:34 compute-0 nova_compute[117331]:   <driver name="vhost" rx_queue_size="512"/>
Oct 09 16:24:34 compute-0 nova_compute[117331]:   <mtu size="1442"/>
Oct 09 16:24:34 compute-0 nova_compute[117331]:   <target dev="tap15a9c820-ca"/>
Oct 09 16:24:34 compute-0 nova_compute[117331]: </interface>
Oct 09 16:24:34 compute-0 nova_compute[117331]:  _update_vif_xml /usr/lib/python3.12/site-packages/nova/virt/libvirt/migration.py:534
Oct 09 16:24:34 compute-0 nova_compute[117331]: 2025-10-09 16:24:34.777 2 DEBUG nova.virt.libvirt.migration [None req-0f6f4f94-486e-4917-8619-3858302d5b76 005258eefa5345409efe13f6a1de22ab c076c42271004e7aa86e84416e33f826 - - default default] _remove_cpu_shared_set_xml input xml=<domain type="kvm">
Oct 09 16:24:34 compute-0 nova_compute[117331]:   <name>instance-0000000e</name>
Oct 09 16:24:34 compute-0 nova_compute[117331]:   <uuid>7531b461-59ec-4261-8f6f-125c07fbf626</uuid>
Oct 09 16:24:34 compute-0 nova_compute[117331]:   <metadata>
Oct 09 16:24:34 compute-0 nova_compute[117331]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Oct 09 16:24:34 compute-0 nova_compute[117331]:       <nova:package version="32.1.0-0.20251008143712.076498e.el10"/>
Oct 09 16:24:34 compute-0 nova_compute[117331]:       <nova:name>tempest-TestExecuteHostMaintenanceStrategy-server-721204525</nova:name>
Oct 09 16:24:34 compute-0 nova_compute[117331]:       <nova:creationTime>2025-10-09 16:23:38</nova:creationTime>
Oct 09 16:24:34 compute-0 nova_compute[117331]:       <nova:flavor name="m1.nano" id="5aeb5bf9-70f3-4ab6-b418-2c3319a7d2b3">
Oct 09 16:24:34 compute-0 nova_compute[117331]:         <nova:memory>128</nova:memory>
Oct 09 16:24:34 compute-0 nova_compute[117331]:         <nova:disk>1</nova:disk>
Oct 09 16:24:34 compute-0 nova_compute[117331]:         <nova:swap>0</nova:swap>
Oct 09 16:24:34 compute-0 nova_compute[117331]:         <nova:ephemeral>0</nova:ephemeral>
Oct 09 16:24:34 compute-0 nova_compute[117331]:         <nova:vcpus>1</nova:vcpus>
Oct 09 16:24:34 compute-0 nova_compute[117331]:         <nova:extraSpecs>
Oct 09 16:24:34 compute-0 nova_compute[117331]:           <nova:extraSpec name="hw_rng:allowed">True</nova:extraSpec>
Oct 09 16:24:34 compute-0 nova_compute[117331]:         </nova:extraSpecs>
Oct 09 16:24:34 compute-0 nova_compute[117331]:       </nova:flavor>
Oct 09 16:24:34 compute-0 nova_compute[117331]:       <nova:image uuid="b7d6e0af-25e4-4227-9dc6-43143898ceee">
Oct 09 16:24:34 compute-0 nova_compute[117331]:         <nova:containerFormat>bare</nova:containerFormat>
Oct 09 16:24:34 compute-0 nova_compute[117331]:         <nova:diskFormat>qcow2</nova:diskFormat>
Oct 09 16:24:34 compute-0 nova_compute[117331]:         <nova:minDisk>1</nova:minDisk>
Oct 09 16:24:34 compute-0 nova_compute[117331]:         <nova:minRam>0</nova:minRam>
Oct 09 16:24:34 compute-0 nova_compute[117331]:         <nova:properties>
Oct 09 16:24:34 compute-0 nova_compute[117331]:           <nova:property name="hw_rng_model">virtio</nova:property>
Oct 09 16:24:34 compute-0 nova_compute[117331]:         </nova:properties>
Oct 09 16:24:34 compute-0 nova_compute[117331]:       </nova:image>
Oct 09 16:24:34 compute-0 nova_compute[117331]:       <nova:owner>
Oct 09 16:24:34 compute-0 nova_compute[117331]:         <nova:user uuid="2ed3ac10329446d8a5aae566951cae1e">tempest-TestExecuteHostMaintenanceStrategy-51576386-project-admin</nova:user>
Oct 09 16:24:34 compute-0 nova_compute[117331]:         <nova:project uuid="c47758289b4c448c95f927a91d89e09f">tempest-TestExecuteHostMaintenanceStrategy-51576386</nova:project>
Oct 09 16:24:34 compute-0 nova_compute[117331]:       </nova:owner>
Oct 09 16:24:34 compute-0 nova_compute[117331]:       <nova:root type="image" uuid="b7d6e0af-25e4-4227-9dc6-43143898ceee"/>
Oct 09 16:24:34 compute-0 nova_compute[117331]:       <nova:ports>
Oct 09 16:24:34 compute-0 nova_compute[117331]:         <nova:port uuid="15a9c820-cacb-46bc-acd6-e8ed3b08432a">
Oct 09 16:24:34 compute-0 nova_compute[117331]:           <nova:ip type="fixed" address="10.100.0.13" ipVersion="4"/>
Oct 09 16:24:34 compute-0 nova_compute[117331]:         </nova:port>
Oct 09 16:24:34 compute-0 nova_compute[117331]:       </nova:ports>
Oct 09 16:24:34 compute-0 nova_compute[117331]:     </nova:instance>
Oct 09 16:24:34 compute-0 nova_compute[117331]:   </metadata>
Oct 09 16:24:34 compute-0 nova_compute[117331]:   <memory unit="KiB">131072</memory>
Oct 09 16:24:34 compute-0 nova_compute[117331]:   <currentMemory unit="KiB">131072</currentMemory>
Oct 09 16:24:34 compute-0 nova_compute[117331]:   <vcpu placement="static">1</vcpu>
Oct 09 16:24:34 compute-0 nova_compute[117331]:   <resource>
Oct 09 16:24:34 compute-0 nova_compute[117331]:     <partition>/machine</partition>
Oct 09 16:24:34 compute-0 nova_compute[117331]:   </resource>
Oct 09 16:24:34 compute-0 nova_compute[117331]:   <sysinfo type="smbios">
Oct 09 16:24:34 compute-0 nova_compute[117331]:     <system>
Oct 09 16:24:34 compute-0 nova_compute[117331]:       <entry name="manufacturer">RDO</entry>
Oct 09 16:24:34 compute-0 nova_compute[117331]:       <entry name="product">OpenStack Compute</entry>
Oct 09 16:24:34 compute-0 nova_compute[117331]:       <entry name="version">32.1.0-0.20251008143712.076498e.el10</entry>
Oct 09 16:24:34 compute-0 nova_compute[117331]:       <entry name="serial">7531b461-59ec-4261-8f6f-125c07fbf626</entry>
Oct 09 16:24:34 compute-0 nova_compute[117331]:       <entry name="uuid">7531b461-59ec-4261-8f6f-125c07fbf626</entry>
Oct 09 16:24:34 compute-0 nova_compute[117331]:       <entry name="family">Virtual Machine</entry>
Oct 09 16:24:34 compute-0 nova_compute[117331]:     </system>
Oct 09 16:24:34 compute-0 nova_compute[117331]:   </sysinfo>
Oct 09 16:24:34 compute-0 nova_compute[117331]:   <os>
Oct 09 16:24:34 compute-0 nova_compute[117331]:     <type arch="x86_64" machine="pc-q35-rhel9.6.0">hvm</type>
Oct 09 16:24:34 compute-0 nova_compute[117331]:     <boot dev="hd"/>
Oct 09 16:24:34 compute-0 nova_compute[117331]:     <smbios mode="sysinfo"/>
Oct 09 16:24:34 compute-0 nova_compute[117331]:   </os>
Oct 09 16:24:34 compute-0 nova_compute[117331]:   <features>
Oct 09 16:24:34 compute-0 nova_compute[117331]:     <acpi/>
Oct 09 16:24:34 compute-0 nova_compute[117331]:     <apic/>
Oct 09 16:24:34 compute-0 nova_compute[117331]:     <vmcoreinfo state="on"/>
Oct 09 16:24:34 compute-0 nova_compute[117331]:   </features>
Oct 09 16:24:34 compute-0 nova_compute[117331]:   <cpu mode="host-model" check="partial">
Oct 09 16:24:34 compute-0 nova_compute[117331]:     <topology sockets="1" dies="1" clusters="1" cores="1" threads="1"/>
Oct 09 16:24:34 compute-0 nova_compute[117331]:   </cpu>
Oct 09 16:24:34 compute-0 nova_compute[117331]:   <clock offset="utc">
Oct 09 16:24:34 compute-0 nova_compute[117331]:     <timer name="pit" tickpolicy="delay"/>
Oct 09 16:24:34 compute-0 nova_compute[117331]:     <timer name="rtc" tickpolicy="catchup"/>
Oct 09 16:24:34 compute-0 nova_compute[117331]:     <timer name="hpet" present="no"/>
Oct 09 16:24:34 compute-0 nova_compute[117331]:   </clock>
Oct 09 16:24:34 compute-0 nova_compute[117331]:   <on_poweroff>destroy</on_poweroff>
Oct 09 16:24:34 compute-0 nova_compute[117331]:   <on_reboot>restart</on_reboot>
Oct 09 16:24:34 compute-0 nova_compute[117331]:   <on_crash>destroy</on_crash>
Oct 09 16:24:34 compute-0 nova_compute[117331]:   <devices>
Oct 09 16:24:34 compute-0 nova_compute[117331]:     <emulator>/usr/libexec/qemu-kvm</emulator>
Oct 09 16:24:34 compute-0 nova_compute[117331]:     <disk type="file" device="disk">
Oct 09 16:24:34 compute-0 nova_compute[117331]:       <driver name="qemu" type="qcow2" cache="none"/>
Oct 09 16:24:34 compute-0 nova_compute[117331]:       <source file="/var/lib/nova/instances/7531b461-59ec-4261-8f6f-125c07fbf626/disk"/>
Oct 09 16:24:34 compute-0 nova_compute[117331]:       <target dev="vda" bus="virtio"/>
Oct 09 16:24:34 compute-0 nova_compute[117331]:       <address type="pci" domain="0x0000" bus="0x03" slot="0x00" function="0x0"/>
Oct 09 16:24:34 compute-0 nova_compute[117331]:     </disk>
Oct 09 16:24:34 compute-0 nova_compute[117331]:     <disk type="file" device="cdrom">
Oct 09 16:24:34 compute-0 nova_compute[117331]:       <driver name="qemu" type="raw" cache="none"/>
Oct 09 16:24:34 compute-0 nova_compute[117331]:       <source file="/var/lib/nova/instances/7531b461-59ec-4261-8f6f-125c07fbf626/disk.config"/>
Oct 09 16:24:34 compute-0 nova_compute[117331]:       <target dev="sda" bus="sata"/>
Oct 09 16:24:34 compute-0 nova_compute[117331]:       <readonly/>
Oct 09 16:24:34 compute-0 nova_compute[117331]:       <address type="drive" controller="0" bus="0" target="0" unit="0"/>
Oct 09 16:24:34 compute-0 nova_compute[117331]:     </disk>
Oct 09 16:24:34 compute-0 nova_compute[117331]:     <controller type="pci" index="0" model="pcie-root"/>
Oct 09 16:24:34 compute-0 nova_compute[117331]:     <controller type="pci" index="1" model="pcie-root-port">
Oct 09 16:24:34 compute-0 nova_compute[117331]:       <model name="pcie-root-port"/>
Oct 09 16:24:34 compute-0 nova_compute[117331]:       <target chassis="1" port="0x10"/>
Oct 09 16:24:34 compute-0 nova_compute[117331]:       <address type="pci" domain="0x0000" bus="0x00" slot="0x02" function="0x0" multifunction="on"/>
Oct 09 16:24:34 compute-0 nova_compute[117331]:     </controller>
Oct 09 16:24:34 compute-0 nova_compute[117331]:     <controller type="pci" index="2" model="pcie-root-port">
Oct 09 16:24:34 compute-0 nova_compute[117331]:       <model name="pcie-root-port"/>
Oct 09 16:24:34 compute-0 nova_compute[117331]:       <target chassis="2" port="0x11"/>
Oct 09 16:24:34 compute-0 nova_compute[117331]:       <address type="pci" domain="0x0000" bus="0x00" slot="0x02" function="0x1"/>
Oct 09 16:24:34 compute-0 nova_compute[117331]:     </controller>
Oct 09 16:24:34 compute-0 nova_compute[117331]:     <controller type="pci" index="3" model="pcie-root-port">
Oct 09 16:24:34 compute-0 nova_compute[117331]:       <model name="pcie-root-port"/>
Oct 09 16:24:34 compute-0 nova_compute[117331]:       <target chassis="3" port="0x12"/>
Oct 09 16:24:34 compute-0 nova_compute[117331]:       <address type="pci" domain="0x0000" bus="0x00" slot="0x02" function="0x2"/>
Oct 09 16:24:34 compute-0 nova_compute[117331]:     </controller>
Oct 09 16:24:34 compute-0 nova_compute[117331]:     <controller type="pci" index="4" model="pcie-root-port">
Oct 09 16:24:34 compute-0 nova_compute[117331]:       <model name="pcie-root-port"/>
Oct 09 16:24:34 compute-0 nova_compute[117331]:       <target chassis="4" port="0x13"/>
Oct 09 16:24:34 compute-0 nova_compute[117331]:       <address type="pci" domain="0x0000" bus="0x00" slot="0x02" function="0x3"/>
Oct 09 16:24:34 compute-0 nova_compute[117331]:     </controller>
Oct 09 16:24:34 compute-0 nova_compute[117331]:     <controller type="pci" index="5" model="pcie-root-port">
Oct 09 16:24:34 compute-0 nova_compute[117331]:       <model name="pcie-root-port"/>
Oct 09 16:24:34 compute-0 nova_compute[117331]:       <target chassis="5" port="0x14"/>
Oct 09 16:24:34 compute-0 nova_compute[117331]:       <address type="pci" domain="0x0000" bus="0x00" slot="0x02" function="0x4"/>
Oct 09 16:24:34 compute-0 nova_compute[117331]:     </controller>
Oct 09 16:24:34 compute-0 nova_compute[117331]:     <controller type="pci" index="6" model="pcie-root-port">
Oct 09 16:24:34 compute-0 nova_compute[117331]:       <model name="pcie-root-port"/>
Oct 09 16:24:34 compute-0 nova_compute[117331]:       <target chassis="6" port="0x15"/>
Oct 09 16:24:34 compute-0 nova_compute[117331]:       <address type="pci" domain="0x0000" bus="0x00" slot="0x02" function="0x5"/>
Oct 09 16:24:34 compute-0 nova_compute[117331]:     </controller>
Oct 09 16:24:34 compute-0 nova_compute[117331]:     <controller type="pci" index="7" model="pcie-root-port">
Oct 09 16:24:34 compute-0 nova_compute[117331]:       <model name="pcie-root-port"/>
Oct 09 16:24:34 compute-0 nova_compute[117331]:       <target chassis="7" port="0x16"/>
Oct 09 16:24:34 compute-0 nova_compute[117331]:       <address type="pci" domain="0x0000" bus="0x00" slot="0x02" function="0x6"/>
Oct 09 16:24:34 compute-0 nova_compute[117331]:     </controller>
Oct 09 16:24:34 compute-0 nova_compute[117331]:     <controller type="pci" index="8" model="pcie-root-port">
Oct 09 16:24:34 compute-0 nova_compute[117331]:       <model name="pcie-root-port"/>
Oct 09 16:24:34 compute-0 nova_compute[117331]:       <target chassis="8" port="0x17"/>
Oct 09 16:24:34 compute-0 nova_compute[117331]:       <address type="pci" domain="0x0000" bus="0x00" slot="0x02" function="0x7"/>
Oct 09 16:24:34 compute-0 nova_compute[117331]:     </controller>
Oct 09 16:24:34 compute-0 nova_compute[117331]:     <controller type="pci" index="9" model="pcie-root-port">
Oct 09 16:24:34 compute-0 nova_compute[117331]:       <model name="pcie-root-port"/>
Oct 09 16:24:34 compute-0 nova_compute[117331]:       <target chassis="9" port="0x18"/>
Oct 09 16:24:34 compute-0 nova_compute[117331]:       <address type="pci" domain="0x0000" bus="0x00" slot="0x03" function="0x0" multifunction="on"/>
Oct 09 16:24:34 compute-0 nova_compute[117331]:     </controller>
Oct 09 16:24:34 compute-0 nova_compute[117331]:     <controller type="pci" index="10" model="pcie-root-port">
Oct 09 16:24:34 compute-0 nova_compute[117331]:       <model name="pcie-root-port"/>
Oct 09 16:24:34 compute-0 nova_compute[117331]:       <target chassis="10" port="0x19"/>
Oct 09 16:24:34 compute-0 nova_compute[117331]:       <address type="pci" domain="0x0000" bus="0x00" slot="0x03" function="0x1"/>
Oct 09 16:24:34 compute-0 nova_compute[117331]:     </controller>
Oct 09 16:24:34 compute-0 nova_compute[117331]:     <controller type="pci" index="11" model="pcie-root-port">
Oct 09 16:24:34 compute-0 nova_compute[117331]:       <model name="pcie-root-port"/>
Oct 09 16:24:34 compute-0 nova_compute[117331]:       <target chassis="11" port="0x1a"/>
Oct 09 16:24:34 compute-0 nova_compute[117331]:       <address type="pci" domain="0x0000" bus="0x00" slot="0x03" function="0x2"/>
Oct 09 16:24:34 compute-0 nova_compute[117331]:     </controller>
Oct 09 16:24:34 compute-0 nova_compute[117331]:     <controller type="pci" index="12" model="pcie-root-port">
Oct 09 16:24:34 compute-0 nova_compute[117331]:       <model name="pcie-root-port"/>
Oct 09 16:24:34 compute-0 nova_compute[117331]:       <target chassis="12" port="0x1b"/>
Oct 09 16:24:34 compute-0 nova_compute[117331]:       <address type="pci" domain="0x0000" bus="0x00" slot="0x03" function="0x3"/>
Oct 09 16:24:34 compute-0 nova_compute[117331]:     </controller>
Oct 09 16:24:34 compute-0 nova_compute[117331]:     <controller type="pci" index="13" model="pcie-root-port">
Oct 09 16:24:34 compute-0 nova_compute[117331]:       <model name="pcie-root-port"/>
Oct 09 16:24:34 compute-0 nova_compute[117331]:       <target chassis="13" port="0x1c"/>
Oct 09 16:24:34 compute-0 nova_compute[117331]:       <address type="pci" domain="0x0000" bus="0x00" slot="0x03" function="0x4"/>
Oct 09 16:24:34 compute-0 nova_compute[117331]:     </controller>
Oct 09 16:24:34 compute-0 nova_compute[117331]:     <controller type="pci" index="14" model="pcie-root-port">
Oct 09 16:24:34 compute-0 nova_compute[117331]:       <model name="pcie-root-port"/>
Oct 09 16:24:34 compute-0 nova_compute[117331]:       <target chassis="14" port="0x1d"/>
Oct 09 16:24:34 compute-0 nova_compute[117331]:       <address type="pci" domain="0x0000" bus="0x00" slot="0x03" function="0x5"/>
Oct 09 16:24:34 compute-0 nova_compute[117331]:     </controller>
Oct 09 16:24:34 compute-0 nova_compute[117331]:     <controller type="pci" index="15" model="pcie-root-port">
Oct 09 16:24:34 compute-0 nova_compute[117331]:       <model name="pcie-root-port"/>
Oct 09 16:24:34 compute-0 nova_compute[117331]:       <target chassis="15" port="0x1e"/>
Oct 09 16:24:34 compute-0 nova_compute[117331]:       <address type="pci" domain="0x0000" bus="0x00" slot="0x03" function="0x6"/>
Oct 09 16:24:34 compute-0 nova_compute[117331]:     </controller>
Oct 09 16:24:34 compute-0 nova_compute[117331]:     <controller type="pci" index="16" model="pcie-root-port">
Oct 09 16:24:34 compute-0 nova_compute[117331]:       <model name="pcie-root-port"/>
Oct 09 16:24:34 compute-0 nova_compute[117331]:       <target chassis="16" port="0x1f"/>
Oct 09 16:24:34 compute-0 nova_compute[117331]:       <address type="pci" domain="0x0000" bus="0x00" slot="0x03" function="0x7"/>
Oct 09 16:24:34 compute-0 nova_compute[117331]:     </controller>
Oct 09 16:24:34 compute-0 nova_compute[117331]:     <controller type="pci" index="17" model="pcie-root-port">
Oct 09 16:24:34 compute-0 nova_compute[117331]:       <model name="pcie-root-port"/>
Oct 09 16:24:34 compute-0 nova_compute[117331]:       <target chassis="17" port="0x20"/>
Oct 09 16:24:34 compute-0 nova_compute[117331]:       <address type="pci" domain="0x0000" bus="0x00" slot="0x04" function="0x0" multifunction="on"/>
Oct 09 16:24:34 compute-0 nova_compute[117331]:     </controller>
Oct 09 16:24:34 compute-0 nova_compute[117331]:     <controller type="pci" index="18" model="pcie-root-port">
Oct 09 16:24:34 compute-0 nova_compute[117331]:       <model name="pcie-root-port"/>
Oct 09 16:24:34 compute-0 nova_compute[117331]:       <target chassis="18" port="0x21"/>
Oct 09 16:24:34 compute-0 nova_compute[117331]:       <address type="pci" domain="0x0000" bus="0x00" slot="0x04" function="0x1"/>
Oct 09 16:24:34 compute-0 nova_compute[117331]:     </controller>
Oct 09 16:24:34 compute-0 nova_compute[117331]:     <controller type="pci" index="19" model="pcie-root-port">
Oct 09 16:24:34 compute-0 nova_compute[117331]:       <model name="pcie-root-port"/>
Oct 09 16:24:34 compute-0 nova_compute[117331]:       <target chassis="19" port="0x22"/>
Oct 09 16:24:34 compute-0 nova_compute[117331]:       <address type="pci" domain="0x0000" bus="0x00" slot="0x04" function="0x2"/>
Oct 09 16:24:34 compute-0 nova_compute[117331]:     </controller>
Oct 09 16:24:34 compute-0 nova_compute[117331]:     <controller type="pci" index="20" model="pcie-root-port">
Oct 09 16:24:34 compute-0 nova_compute[117331]:       <model name="pcie-root-port"/>
Oct 09 16:24:34 compute-0 nova_compute[117331]:       <target chassis="20" port="0x23"/>
Oct 09 16:24:34 compute-0 nova_compute[117331]:       <address type="pci" domain="0x0000" bus="0x00" slot="0x04" function="0x3"/>
Oct 09 16:24:34 compute-0 nova_compute[117331]:     </controller>
Oct 09 16:24:34 compute-0 nova_compute[117331]:     <controller type="pci" index="21" model="pcie-root-port">
Oct 09 16:24:34 compute-0 nova_compute[117331]:       <model name="pcie-root-port"/>
Oct 09 16:24:34 compute-0 nova_compute[117331]:       <target chassis="21" port="0x24"/>
Oct 09 16:24:34 compute-0 nova_compute[117331]:       <address type="pci" domain="0x0000" bus="0x00" slot="0x04" function="0x4"/>
Oct 09 16:24:34 compute-0 nova_compute[117331]:     </controller>
Oct 09 16:24:34 compute-0 nova_compute[117331]:     <controller type="pci" index="22" model="pcie-root-port">
Oct 09 16:24:34 compute-0 nova_compute[117331]:       <model name="pcie-root-port"/>
Oct 09 16:24:34 compute-0 nova_compute[117331]:       <target chassis="22" port="0x25"/>
Oct 09 16:24:34 compute-0 nova_compute[117331]:       <address type="pci" domain="0x0000" bus="0x00" slot="0x04" function="0x5"/>
Oct 09 16:24:34 compute-0 nova_compute[117331]:     </controller>
Oct 09 16:24:34 compute-0 nova_compute[117331]:     <controller type="pci" index="23" model="pcie-root-port">
Oct 09 16:24:34 compute-0 nova_compute[117331]:       <model name="pcie-root-port"/>
Oct 09 16:24:34 compute-0 nova_compute[117331]:       <target chassis="23" port="0x26"/>
Oct 09 16:24:34 compute-0 nova_compute[117331]:       <address type="pci" domain="0x0000" bus="0x00" slot="0x04" function="0x6"/>
Oct 09 16:24:34 compute-0 nova_compute[117331]:     </controller>
Oct 09 16:24:34 compute-0 nova_compute[117331]:     <controller type="pci" index="24" model="pcie-root-port">
Oct 09 16:24:34 compute-0 nova_compute[117331]:       <model name="pcie-root-port"/>
Oct 09 16:24:34 compute-0 nova_compute[117331]:       <target chassis="24" port="0x27"/>
Oct 09 16:24:34 compute-0 nova_compute[117331]:       <address type="pci" domain="0x0000" bus="0x00" slot="0x04" function="0x7"/>
Oct 09 16:24:34 compute-0 nova_compute[117331]:     </controller>
Oct 09 16:24:34 compute-0 nova_compute[117331]:     <controller type="pci" index="25" model="pcie-root-port">
Oct 09 16:24:34 compute-0 nova_compute[117331]:       <model name="pcie-root-port"/>
Oct 09 16:24:34 compute-0 nova_compute[117331]:       <target chassis="25" port="0x28"/>
Oct 09 16:24:34 compute-0 nova_compute[117331]:       <address type="pci" domain="0x0000" bus="0x00" slot="0x05" function="0x0"/>
Oct 09 16:24:34 compute-0 nova_compute[117331]:     </controller>
Oct 09 16:24:34 compute-0 nova_compute[117331]:     <controller type="pci" index="26" model="pcie-to-pci-bridge">
Oct 09 16:24:34 compute-0 nova_compute[117331]:       <model name="pcie-pci-bridge"/>
Oct 09 16:24:34 compute-0 nova_compute[117331]:       <address type="pci" domain="0x0000" bus="0x01" slot="0x00" function="0x0"/>
Oct 09 16:24:34 compute-0 nova_compute[117331]:     </controller>
Oct 09 16:24:34 compute-0 nova_compute[117331]:     <controller type="usb" index="0" model="piix3-uhci">
Oct 09 16:24:34 compute-0 nova_compute[117331]:       <address type="pci" domain="0x0000" bus="0x1a" slot="0x01" function="0x0"/>
Oct 09 16:24:34 compute-0 nova_compute[117331]:     </controller>
Oct 09 16:24:34 compute-0 nova_compute[117331]:     <controller type="sata" index="0">
Oct 09 16:24:34 compute-0 nova_compute[117331]:       <address type="pci" domain="0x0000" bus="0x00" slot="0x1f" function="0x2"/>
Oct 09 16:24:34 compute-0 nova_compute[117331]:     </controller>
Oct 09 16:24:34 compute-0 nova_compute[117331]:     <interface type="ethernet"><mac address="fa:16:3e:d9:5e:f1"/><model type="virtio"/><driver name="vhost" rx_queue_size="512"/><mtu size="1442"/><target dev="tap15a9c820-ca"/><address type="pci" domain="0x0000" bus="0x02" slot="0x00" function="0x0"/>
Oct 09 16:24:34 compute-0 nova_compute[117331]:     </interface><serial type="pty">
Oct 09 16:24:34 compute-0 nova_compute[117331]:       <log file="/var/lib/nova/instances/7531b461-59ec-4261-8f6f-125c07fbf626/console.log" append="off"/>
Oct 09 16:24:34 compute-0 nova_compute[117331]:       <target type="isa-serial" port="0">
Oct 09 16:24:34 compute-0 nova_compute[117331]:         <model name="isa-serial"/>
Oct 09 16:24:34 compute-0 nova_compute[117331]:       </target>
Oct 09 16:24:34 compute-0 nova_compute[117331]:     </serial>
Oct 09 16:24:34 compute-0 nova_compute[117331]:     <console type="pty">
Oct 09 16:24:34 compute-0 nova_compute[117331]:       <log file="/var/lib/nova/instances/7531b461-59ec-4261-8f6f-125c07fbf626/console.log" append="off"/>
Oct 09 16:24:34 compute-0 nova_compute[117331]:       <target type="serial" port="0"/>
Oct 09 16:24:34 compute-0 nova_compute[117331]:     </console>
Oct 09 16:24:34 compute-0 nova_compute[117331]:     <input type="tablet" bus="usb">
Oct 09 16:24:34 compute-0 nova_compute[117331]:       <address type="usb" bus="0" port="1"/>
Oct 09 16:24:34 compute-0 nova_compute[117331]:     </input>
Oct 09 16:24:34 compute-0 nova_compute[117331]:     <input type="mouse" bus="ps2"/>
Oct 09 16:24:34 compute-0 nova_compute[117331]:     <graphics type="vnc" port="-1" autoport="yes" listen="::">
Oct 09 16:24:34 compute-0 nova_compute[117331]:       <listen type="address" address="::"/>
Oct 09 16:24:34 compute-0 nova_compute[117331]:     </graphics>
Oct 09 16:24:34 compute-0 nova_compute[117331]:     <video>
Oct 09 16:24:34 compute-0 nova_compute[117331]:       <model type="virtio" heads="1" primary="yes"/>
Oct 09 16:24:34 compute-0 nova_compute[117331]:       <address type="pci" domain="0x0000" bus="0x00" slot="0x01" function="0x0"/>
Oct 09 16:24:34 compute-0 nova_compute[117331]:     </video>
Oct 09 16:24:34 compute-0 nova_compute[117331]:     <memballoon model="virtio" autodeflate="on" freePageReporting="on">
Oct 09 16:24:34 compute-0 nova_compute[117331]:       <stats period="10"/>
Oct 09 16:24:34 compute-0 nova_compute[117331]:       <address type="pci" domain="0x0000" bus="0x04" slot="0x00" function="0x0"/>
Oct 09 16:24:34 compute-0 nova_compute[117331]:     </memballoon>
Oct 09 16:24:34 compute-0 nova_compute[117331]:     <rng model="virtio">
Oct 09 16:24:34 compute-0 nova_compute[117331]:       <backend model="random">/dev/urandom</backend>
Oct 09 16:24:34 compute-0 nova_compute[117331]:       <address type="pci" domain="0x0000" bus="0x05" slot="0x00" function="0x0"/>
Oct 09 16:24:34 compute-0 nova_compute[117331]:     </rng>
Oct 09 16:24:34 compute-0 nova_compute[117331]:   </devices>
Oct 09 16:24:34 compute-0 nova_compute[117331]:   <seclabel type="dynamic" model="selinux" relabel="yes"/>
Oct 09 16:24:34 compute-0 nova_compute[117331]: </domain>
Oct 09 16:24:34 compute-0 nova_compute[117331]:  _remove_cpu_shared_set_xml /usr/lib/python3.12/site-packages/nova/virt/libvirt/migration.py:241
Oct 09 16:24:34 compute-0 nova_compute[117331]: 2025-10-09 16:24:34.778 2 DEBUG nova.virt.libvirt.migration [None req-0f6f4f94-486e-4917-8619-3858302d5b76 005258eefa5345409efe13f6a1de22ab c076c42271004e7aa86e84416e33f826 - - default default] _remove_cpu_shared_set_xml output xml=<domain type="kvm">
Oct 09 16:24:34 compute-0 nova_compute[117331]:   <name>instance-0000000e</name>
Oct 09 16:24:34 compute-0 nova_compute[117331]:   <uuid>7531b461-59ec-4261-8f6f-125c07fbf626</uuid>
Oct 09 16:24:34 compute-0 nova_compute[117331]:   <metadata>
Oct 09 16:24:34 compute-0 nova_compute[117331]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Oct 09 16:24:34 compute-0 nova_compute[117331]:       <nova:package version="32.1.0-0.20251008143712.076498e.el10"/>
Oct 09 16:24:34 compute-0 nova_compute[117331]:       <nova:name>tempest-TestExecuteHostMaintenanceStrategy-server-721204525</nova:name>
Oct 09 16:24:34 compute-0 nova_compute[117331]:       <nova:creationTime>2025-10-09 16:23:38</nova:creationTime>
Oct 09 16:24:34 compute-0 nova_compute[117331]:       <nova:flavor name="m1.nano" id="5aeb5bf9-70f3-4ab6-b418-2c3319a7d2b3">
Oct 09 16:24:34 compute-0 nova_compute[117331]:         <nova:memory>128</nova:memory>
Oct 09 16:24:34 compute-0 nova_compute[117331]:         <nova:disk>1</nova:disk>
Oct 09 16:24:34 compute-0 nova_compute[117331]:         <nova:swap>0</nova:swap>
Oct 09 16:24:34 compute-0 nova_compute[117331]:         <nova:ephemeral>0</nova:ephemeral>
Oct 09 16:24:34 compute-0 nova_compute[117331]:         <nova:vcpus>1</nova:vcpus>
Oct 09 16:24:34 compute-0 nova_compute[117331]:         <nova:extraSpecs>
Oct 09 16:24:34 compute-0 nova_compute[117331]:           <nova:extraSpec name="hw_rng:allowed">True</nova:extraSpec>
Oct 09 16:24:34 compute-0 nova_compute[117331]:         </nova:extraSpecs>
Oct 09 16:24:34 compute-0 nova_compute[117331]:       </nova:flavor>
Oct 09 16:24:34 compute-0 nova_compute[117331]:       <nova:image uuid="b7d6e0af-25e4-4227-9dc6-43143898ceee">
Oct 09 16:24:34 compute-0 nova_compute[117331]:         <nova:containerFormat>bare</nova:containerFormat>
Oct 09 16:24:34 compute-0 nova_compute[117331]:         <nova:diskFormat>qcow2</nova:diskFormat>
Oct 09 16:24:34 compute-0 nova_compute[117331]:         <nova:minDisk>1</nova:minDisk>
Oct 09 16:24:34 compute-0 nova_compute[117331]:         <nova:minRam>0</nova:minRam>
Oct 09 16:24:34 compute-0 nova_compute[117331]:         <nova:properties>
Oct 09 16:24:34 compute-0 nova_compute[117331]:           <nova:property name="hw_rng_model">virtio</nova:property>
Oct 09 16:24:34 compute-0 nova_compute[117331]:         </nova:properties>
Oct 09 16:24:34 compute-0 nova_compute[117331]:       </nova:image>
Oct 09 16:24:34 compute-0 nova_compute[117331]:       <nova:owner>
Oct 09 16:24:34 compute-0 nova_compute[117331]:         <nova:user uuid="2ed3ac10329446d8a5aae566951cae1e">tempest-TestExecuteHostMaintenanceStrategy-51576386-project-admin</nova:user>
Oct 09 16:24:34 compute-0 nova_compute[117331]:         <nova:project uuid="c47758289b4c448c95f927a91d89e09f">tempest-TestExecuteHostMaintenanceStrategy-51576386</nova:project>
Oct 09 16:24:34 compute-0 nova_compute[117331]:       </nova:owner>
Oct 09 16:24:34 compute-0 nova_compute[117331]:       <nova:root type="image" uuid="b7d6e0af-25e4-4227-9dc6-43143898ceee"/>
Oct 09 16:24:34 compute-0 nova_compute[117331]:       <nova:ports>
Oct 09 16:24:34 compute-0 nova_compute[117331]:         <nova:port uuid="15a9c820-cacb-46bc-acd6-e8ed3b08432a">
Oct 09 16:24:34 compute-0 nova_compute[117331]:           <nova:ip type="fixed" address="10.100.0.13" ipVersion="4"/>
Oct 09 16:24:34 compute-0 nova_compute[117331]:         </nova:port>
Oct 09 16:24:34 compute-0 nova_compute[117331]:       </nova:ports>
Oct 09 16:24:34 compute-0 nova_compute[117331]:     </nova:instance>
Oct 09 16:24:34 compute-0 nova_compute[117331]:   </metadata>
Oct 09 16:24:34 compute-0 nova_compute[117331]:   <memory unit="KiB">131072</memory>
Oct 09 16:24:34 compute-0 nova_compute[117331]:   <currentMemory unit="KiB">131072</currentMemory>
Oct 09 16:24:34 compute-0 nova_compute[117331]:   <vcpu placement="static">1</vcpu>
Oct 09 16:24:34 compute-0 nova_compute[117331]:   <resource>
Oct 09 16:24:34 compute-0 nova_compute[117331]:     <partition>/machine</partition>
Oct 09 16:24:34 compute-0 nova_compute[117331]:   </resource>
Oct 09 16:24:34 compute-0 nova_compute[117331]:   <sysinfo type="smbios">
Oct 09 16:24:34 compute-0 nova_compute[117331]:     <system>
Oct 09 16:24:34 compute-0 nova_compute[117331]:       <entry name="manufacturer">RDO</entry>
Oct 09 16:24:34 compute-0 nova_compute[117331]:       <entry name="product">OpenStack Compute</entry>
Oct 09 16:24:34 compute-0 nova_compute[117331]:       <entry name="version">32.1.0-0.20251008143712.076498e.el10</entry>
Oct 09 16:24:34 compute-0 nova_compute[117331]:       <entry name="serial">7531b461-59ec-4261-8f6f-125c07fbf626</entry>
Oct 09 16:24:34 compute-0 nova_compute[117331]:       <entry name="uuid">7531b461-59ec-4261-8f6f-125c07fbf626</entry>
Oct 09 16:24:34 compute-0 nova_compute[117331]:       <entry name="family">Virtual Machine</entry>
Oct 09 16:24:34 compute-0 nova_compute[117331]:     </system>
Oct 09 16:24:34 compute-0 nova_compute[117331]:   </sysinfo>
Oct 09 16:24:34 compute-0 nova_compute[117331]:   <os>
Oct 09 16:24:34 compute-0 nova_compute[117331]:     <type arch="x86_64" machine="pc-q35-rhel9.6.0">hvm</type>
Oct 09 16:24:34 compute-0 nova_compute[117331]:     <boot dev="hd"/>
Oct 09 16:24:34 compute-0 nova_compute[117331]:     <smbios mode="sysinfo"/>
Oct 09 16:24:34 compute-0 nova_compute[117331]:   </os>
Oct 09 16:24:34 compute-0 nova_compute[117331]:   <features>
Oct 09 16:24:34 compute-0 nova_compute[117331]:     <acpi/>
Oct 09 16:24:34 compute-0 nova_compute[117331]:     <apic/>
Oct 09 16:24:34 compute-0 nova_compute[117331]:     <vmcoreinfo state="on"/>
Oct 09 16:24:34 compute-0 nova_compute[117331]:   </features>
Oct 09 16:24:34 compute-0 nova_compute[117331]:   <cpu mode="host-model" check="partial">
Oct 09 16:24:34 compute-0 nova_compute[117331]:     <topology sockets="1" dies="1" clusters="1" cores="1" threads="1"/>
Oct 09 16:24:34 compute-0 nova_compute[117331]:   </cpu>
Oct 09 16:24:34 compute-0 nova_compute[117331]:   <clock offset="utc">
Oct 09 16:24:34 compute-0 nova_compute[117331]:     <timer name="pit" tickpolicy="delay"/>
Oct 09 16:24:34 compute-0 nova_compute[117331]:     <timer name="rtc" tickpolicy="catchup"/>
Oct 09 16:24:34 compute-0 nova_compute[117331]:     <timer name="hpet" present="no"/>
Oct 09 16:24:34 compute-0 nova_compute[117331]:   </clock>
Oct 09 16:24:34 compute-0 nova_compute[117331]:   <on_poweroff>destroy</on_poweroff>
Oct 09 16:24:34 compute-0 nova_compute[117331]:   <on_reboot>restart</on_reboot>
Oct 09 16:24:34 compute-0 nova_compute[117331]:   <on_crash>destroy</on_crash>
Oct 09 16:24:34 compute-0 nova_compute[117331]:   <devices>
Oct 09 16:24:34 compute-0 nova_compute[117331]:     <emulator>/usr/libexec/qemu-kvm</emulator>
Oct 09 16:24:34 compute-0 nova_compute[117331]:     <disk type="file" device="disk">
Oct 09 16:24:34 compute-0 nova_compute[117331]:       <driver name="qemu" type="qcow2" cache="none"/>
Oct 09 16:24:34 compute-0 nova_compute[117331]:       <source file="/var/lib/nova/instances/7531b461-59ec-4261-8f6f-125c07fbf626/disk"/>
Oct 09 16:24:34 compute-0 nova_compute[117331]:       <target dev="vda" bus="virtio"/>
Oct 09 16:24:34 compute-0 nova_compute[117331]:       <address type="pci" domain="0x0000" bus="0x03" slot="0x00" function="0x0"/>
Oct 09 16:24:34 compute-0 nova_compute[117331]:     </disk>
Oct 09 16:24:34 compute-0 nova_compute[117331]:     <disk type="file" device="cdrom">
Oct 09 16:24:34 compute-0 nova_compute[117331]:       <driver name="qemu" type="raw" cache="none"/>
Oct 09 16:24:34 compute-0 nova_compute[117331]:       <source file="/var/lib/nova/instances/7531b461-59ec-4261-8f6f-125c07fbf626/disk.config"/>
Oct 09 16:24:34 compute-0 nova_compute[117331]:       <target dev="sda" bus="sata"/>
Oct 09 16:24:34 compute-0 nova_compute[117331]:       <readonly/>
Oct 09 16:24:34 compute-0 nova_compute[117331]:       <address type="drive" controller="0" bus="0" target="0" unit="0"/>
Oct 09 16:24:34 compute-0 nova_compute[117331]:     </disk>
Oct 09 16:24:34 compute-0 nova_compute[117331]:     <controller type="pci" index="0" model="pcie-root"/>
Oct 09 16:24:34 compute-0 nova_compute[117331]:     <controller type="pci" index="1" model="pcie-root-port">
Oct 09 16:24:34 compute-0 nova_compute[117331]:       <model name="pcie-root-port"/>
Oct 09 16:24:34 compute-0 nova_compute[117331]:       <target chassis="1" port="0x10"/>
Oct 09 16:24:34 compute-0 nova_compute[117331]:       <address type="pci" domain="0x0000" bus="0x00" slot="0x02" function="0x0" multifunction="on"/>
Oct 09 16:24:34 compute-0 nova_compute[117331]:     </controller>
Oct 09 16:24:34 compute-0 nova_compute[117331]:     <controller type="pci" index="2" model="pcie-root-port">
Oct 09 16:24:34 compute-0 nova_compute[117331]:       <model name="pcie-root-port"/>
Oct 09 16:24:34 compute-0 nova_compute[117331]:       <target chassis="2" port="0x11"/>
Oct 09 16:24:34 compute-0 nova_compute[117331]:       <address type="pci" domain="0x0000" bus="0x00" slot="0x02" function="0x1"/>
Oct 09 16:24:34 compute-0 nova_compute[117331]:     </controller>
Oct 09 16:24:34 compute-0 nova_compute[117331]:     <controller type="pci" index="3" model="pcie-root-port">
Oct 09 16:24:34 compute-0 nova_compute[117331]:       <model name="pcie-root-port"/>
Oct 09 16:24:34 compute-0 nova_compute[117331]:       <target chassis="3" port="0x12"/>
Oct 09 16:24:34 compute-0 nova_compute[117331]:       <address type="pci" domain="0x0000" bus="0x00" slot="0x02" function="0x2"/>
Oct 09 16:24:34 compute-0 nova_compute[117331]:     </controller>
Oct 09 16:24:34 compute-0 nova_compute[117331]:     <controller type="pci" index="4" model="pcie-root-port">
Oct 09 16:24:34 compute-0 nova_compute[117331]:       <model name="pcie-root-port"/>
Oct 09 16:24:34 compute-0 nova_compute[117331]:       <target chassis="4" port="0x13"/>
Oct 09 16:24:34 compute-0 nova_compute[117331]:       <address type="pci" domain="0x0000" bus="0x00" slot="0x02" function="0x3"/>
Oct 09 16:24:34 compute-0 nova_compute[117331]:     </controller>
Oct 09 16:24:34 compute-0 nova_compute[117331]:     <controller type="pci" index="5" model="pcie-root-port">
Oct 09 16:24:34 compute-0 nova_compute[117331]:       <model name="pcie-root-port"/>
Oct 09 16:24:34 compute-0 nova_compute[117331]:       <target chassis="5" port="0x14"/>
Oct 09 16:24:34 compute-0 nova_compute[117331]:       <address type="pci" domain="0x0000" bus="0x00" slot="0x02" function="0x4"/>
Oct 09 16:24:34 compute-0 nova_compute[117331]:     </controller>
Oct 09 16:24:34 compute-0 nova_compute[117331]:     <controller type="pci" index="6" model="pcie-root-port">
Oct 09 16:24:34 compute-0 nova_compute[117331]:       <model name="pcie-root-port"/>
Oct 09 16:24:34 compute-0 nova_compute[117331]:       <target chassis="6" port="0x15"/>
Oct 09 16:24:34 compute-0 nova_compute[117331]:       <address type="pci" domain="0x0000" bus="0x00" slot="0x02" function="0x5"/>
Oct 09 16:24:34 compute-0 nova_compute[117331]:     </controller>
Oct 09 16:24:34 compute-0 nova_compute[117331]:     <controller type="pci" index="7" model="pcie-root-port">
Oct 09 16:24:34 compute-0 nova_compute[117331]:       <model name="pcie-root-port"/>
Oct 09 16:24:34 compute-0 nova_compute[117331]:       <target chassis="7" port="0x16"/>
Oct 09 16:24:34 compute-0 nova_compute[117331]:       <address type="pci" domain="0x0000" bus="0x00" slot="0x02" function="0x6"/>
Oct 09 16:24:34 compute-0 nova_compute[117331]:     </controller>
Oct 09 16:24:34 compute-0 nova_compute[117331]:     <controller type="pci" index="8" model="pcie-root-port">
Oct 09 16:24:34 compute-0 nova_compute[117331]:       <model name="pcie-root-port"/>
Oct 09 16:24:34 compute-0 nova_compute[117331]:       <target chassis="8" port="0x17"/>
Oct 09 16:24:34 compute-0 nova_compute[117331]:       <address type="pci" domain="0x0000" bus="0x00" slot="0x02" function="0x7"/>
Oct 09 16:24:34 compute-0 nova_compute[117331]:     </controller>
Oct 09 16:24:34 compute-0 nova_compute[117331]:     <controller type="pci" index="9" model="pcie-root-port">
Oct 09 16:24:34 compute-0 nova_compute[117331]:       <model name="pcie-root-port"/>
Oct 09 16:24:34 compute-0 nova_compute[117331]:       <target chassis="9" port="0x18"/>
Oct 09 16:24:34 compute-0 nova_compute[117331]:       <address type="pci" domain="0x0000" bus="0x00" slot="0x03" function="0x0" multifunction="on"/>
Oct 09 16:24:34 compute-0 nova_compute[117331]:     </controller>
Oct 09 16:24:34 compute-0 nova_compute[117331]:     <controller type="pci" index="10" model="pcie-root-port">
Oct 09 16:24:34 compute-0 nova_compute[117331]:       <model name="pcie-root-port"/>
Oct 09 16:24:34 compute-0 nova_compute[117331]:       <target chassis="10" port="0x19"/>
Oct 09 16:24:34 compute-0 nova_compute[117331]:       <address type="pci" domain="0x0000" bus="0x00" slot="0x03" function="0x1"/>
Oct 09 16:24:34 compute-0 nova_compute[117331]:     </controller>
Oct 09 16:24:34 compute-0 nova_compute[117331]:     <controller type="pci" index="11" model="pcie-root-port">
Oct 09 16:24:34 compute-0 nova_compute[117331]:       <model name="pcie-root-port"/>
Oct 09 16:24:34 compute-0 nova_compute[117331]:       <target chassis="11" port="0x1a"/>
Oct 09 16:24:34 compute-0 nova_compute[117331]:       <address type="pci" domain="0x0000" bus="0x00" slot="0x03" function="0x2"/>
Oct 09 16:24:34 compute-0 nova_compute[117331]:     </controller>
Oct 09 16:24:34 compute-0 nova_compute[117331]:     <controller type="pci" index="12" model="pcie-root-port">
Oct 09 16:24:34 compute-0 nova_compute[117331]:       <model name="pcie-root-port"/>
Oct 09 16:24:34 compute-0 nova_compute[117331]:       <target chassis="12" port="0x1b"/>
Oct 09 16:24:34 compute-0 nova_compute[117331]:       <address type="pci" domain="0x0000" bus="0x00" slot="0x03" function="0x3"/>
Oct 09 16:24:34 compute-0 nova_compute[117331]:     </controller>
Oct 09 16:24:34 compute-0 nova_compute[117331]:     <controller type="pci" index="13" model="pcie-root-port">
Oct 09 16:24:34 compute-0 nova_compute[117331]:       <model name="pcie-root-port"/>
Oct 09 16:24:34 compute-0 nova_compute[117331]:       <target chassis="13" port="0x1c"/>
Oct 09 16:24:34 compute-0 nova_compute[117331]:       <address type="pci" domain="0x0000" bus="0x00" slot="0x03" function="0x4"/>
Oct 09 16:24:34 compute-0 nova_compute[117331]:     </controller>
Oct 09 16:24:34 compute-0 nova_compute[117331]:     <controller type="pci" index="14" model="pcie-root-port">
Oct 09 16:24:34 compute-0 nova_compute[117331]:       <model name="pcie-root-port"/>
Oct 09 16:24:34 compute-0 nova_compute[117331]:       <target chassis="14" port="0x1d"/>
Oct 09 16:24:34 compute-0 nova_compute[117331]:       <address type="pci" domain="0x0000" bus="0x00" slot="0x03" function="0x5"/>
Oct 09 16:24:34 compute-0 nova_compute[117331]:     </controller>
Oct 09 16:24:34 compute-0 nova_compute[117331]:     <controller type="pci" index="15" model="pcie-root-port">
Oct 09 16:24:34 compute-0 nova_compute[117331]:       <model name="pcie-root-port"/>
Oct 09 16:24:34 compute-0 nova_compute[117331]:       <target chassis="15" port="0x1e"/>
Oct 09 16:24:34 compute-0 nova_compute[117331]:       <address type="pci" domain="0x0000" bus="0x00" slot="0x03" function="0x6"/>
Oct 09 16:24:34 compute-0 nova_compute[117331]:     </controller>
Oct 09 16:24:34 compute-0 nova_compute[117331]:     <controller type="pci" index="16" model="pcie-root-port">
Oct 09 16:24:34 compute-0 nova_compute[117331]:       <model name="pcie-root-port"/>
Oct 09 16:24:34 compute-0 nova_compute[117331]:       <target chassis="16" port="0x1f"/>
Oct 09 16:24:34 compute-0 nova_compute[117331]:       <address type="pci" domain="0x0000" bus="0x00" slot="0x03" function="0x7"/>
Oct 09 16:24:34 compute-0 nova_compute[117331]:     </controller>
Oct 09 16:24:34 compute-0 nova_compute[117331]:     <controller type="pci" index="17" model="pcie-root-port">
Oct 09 16:24:34 compute-0 nova_compute[117331]:       <model name="pcie-root-port"/>
Oct 09 16:24:34 compute-0 nova_compute[117331]:       <target chassis="17" port="0x20"/>
Oct 09 16:24:34 compute-0 nova_compute[117331]:       <address type="pci" domain="0x0000" bus="0x00" slot="0x04" function="0x0" multifunction="on"/>
Oct 09 16:24:34 compute-0 nova_compute[117331]:     </controller>
Oct 09 16:24:34 compute-0 nova_compute[117331]:     <controller type="pci" index="18" model="pcie-root-port">
Oct 09 16:24:34 compute-0 nova_compute[117331]:       <model name="pcie-root-port"/>
Oct 09 16:24:34 compute-0 nova_compute[117331]:       <target chassis="18" port="0x21"/>
Oct 09 16:24:34 compute-0 nova_compute[117331]:       <address type="pci" domain="0x0000" bus="0x00" slot="0x04" function="0x1"/>
Oct 09 16:24:34 compute-0 nova_compute[117331]:     </controller>
Oct 09 16:24:34 compute-0 nova_compute[117331]:     <controller type="pci" index="19" model="pcie-root-port">
Oct 09 16:24:34 compute-0 nova_compute[117331]:       <model name="pcie-root-port"/>
Oct 09 16:24:34 compute-0 nova_compute[117331]:       <target chassis="19" port="0x22"/>
Oct 09 16:24:34 compute-0 nova_compute[117331]:       <address type="pci" domain="0x0000" bus="0x00" slot="0x04" function="0x2"/>
Oct 09 16:24:34 compute-0 nova_compute[117331]:     </controller>
Oct 09 16:24:34 compute-0 nova_compute[117331]:     <controller type="pci" index="20" model="pcie-root-port">
Oct 09 16:24:34 compute-0 nova_compute[117331]:       <model name="pcie-root-port"/>
Oct 09 16:24:34 compute-0 nova_compute[117331]:       <target chassis="20" port="0x23"/>
Oct 09 16:24:34 compute-0 nova_compute[117331]:       <address type="pci" domain="0x0000" bus="0x00" slot="0x04" function="0x3"/>
Oct 09 16:24:34 compute-0 nova_compute[117331]:     </controller>
Oct 09 16:24:34 compute-0 nova_compute[117331]:     <controller type="pci" index="21" model="pcie-root-port">
Oct 09 16:24:34 compute-0 nova_compute[117331]:       <model name="pcie-root-port"/>
Oct 09 16:24:34 compute-0 nova_compute[117331]:       <target chassis="21" port="0x24"/>
Oct 09 16:24:34 compute-0 nova_compute[117331]:       <address type="pci" domain="0x0000" bus="0x00" slot="0x04" function="0x4"/>
Oct 09 16:24:34 compute-0 nova_compute[117331]:     </controller>
Oct 09 16:24:34 compute-0 nova_compute[117331]:     <controller type="pci" index="22" model="pcie-root-port">
Oct 09 16:24:34 compute-0 nova_compute[117331]:       <model name="pcie-root-port"/>
Oct 09 16:24:34 compute-0 nova_compute[117331]:       <target chassis="22" port="0x25"/>
Oct 09 16:24:34 compute-0 nova_compute[117331]:       <address type="pci" domain="0x0000" bus="0x00" slot="0x04" function="0x5"/>
Oct 09 16:24:34 compute-0 nova_compute[117331]:     </controller>
Oct 09 16:24:34 compute-0 nova_compute[117331]:     <controller type="pci" index="23" model="pcie-root-port">
Oct 09 16:24:34 compute-0 nova_compute[117331]:       <model name="pcie-root-port"/>
Oct 09 16:24:34 compute-0 nova_compute[117331]:       <target chassis="23" port="0x26"/>
Oct 09 16:24:34 compute-0 nova_compute[117331]:       <address type="pci" domain="0x0000" bus="0x00" slot="0x04" function="0x6"/>
Oct 09 16:24:34 compute-0 nova_compute[117331]:     </controller>
Oct 09 16:24:34 compute-0 nova_compute[117331]:     <controller type="pci" index="24" model="pcie-root-port">
Oct 09 16:24:34 compute-0 nova_compute[117331]:       <model name="pcie-root-port"/>
Oct 09 16:24:34 compute-0 nova_compute[117331]:       <target chassis="24" port="0x27"/>
Oct 09 16:24:34 compute-0 nova_compute[117331]:       <address type="pci" domain="0x0000" bus="0x00" slot="0x04" function="0x7"/>
Oct 09 16:24:34 compute-0 nova_compute[117331]:     </controller>
Oct 09 16:24:34 compute-0 nova_compute[117331]:     <controller type="pci" index="25" model="pcie-root-port">
Oct 09 16:24:34 compute-0 nova_compute[117331]:       <model name="pcie-root-port"/>
Oct 09 16:24:34 compute-0 nova_compute[117331]:       <target chassis="25" port="0x28"/>
Oct 09 16:24:34 compute-0 nova_compute[117331]:       <address type="pci" domain="0x0000" bus="0x00" slot="0x05" function="0x0"/>
Oct 09 16:24:34 compute-0 nova_compute[117331]:     </controller>
Oct 09 16:24:34 compute-0 nova_compute[117331]:     <controller type="pci" index="26" model="pcie-to-pci-bridge">
Oct 09 16:24:34 compute-0 nova_compute[117331]:       <model name="pcie-pci-bridge"/>
Oct 09 16:24:34 compute-0 nova_compute[117331]:       <address type="pci" domain="0x0000" bus="0x01" slot="0x00" function="0x0"/>
Oct 09 16:24:34 compute-0 nova_compute[117331]:     </controller>
Oct 09 16:24:34 compute-0 nova_compute[117331]:     <controller type="usb" index="0" model="piix3-uhci">
Oct 09 16:24:34 compute-0 nova_compute[117331]:       <address type="pci" domain="0x0000" bus="0x1a" slot="0x01" function="0x0"/>
Oct 09 16:24:34 compute-0 nova_compute[117331]:     </controller>
Oct 09 16:24:34 compute-0 nova_compute[117331]:     <controller type="sata" index="0">
Oct 09 16:24:34 compute-0 nova_compute[117331]:       <address type="pci" domain="0x0000" bus="0x00" slot="0x1f" function="0x2"/>
Oct 09 16:24:34 compute-0 nova_compute[117331]:     </controller>
Oct 09 16:24:34 compute-0 nova_compute[117331]:     <interface type="ethernet"><mac address="fa:16:3e:d9:5e:f1"/><model type="virtio"/><driver name="vhost" rx_queue_size="512"/><mtu size="1442"/><target dev="tap15a9c820-ca"/><address type="pci" domain="0x0000" bus="0x02" slot="0x00" function="0x0"/>
Oct 09 16:24:34 compute-0 nova_compute[117331]:     </interface><serial type="pty">
Oct 09 16:24:34 compute-0 nova_compute[117331]:       <log file="/var/lib/nova/instances/7531b461-59ec-4261-8f6f-125c07fbf626/console.log" append="off"/>
Oct 09 16:24:34 compute-0 nova_compute[117331]:       <target type="isa-serial" port="0">
Oct 09 16:24:34 compute-0 nova_compute[117331]:         <model name="isa-serial"/>
Oct 09 16:24:34 compute-0 nova_compute[117331]:       </target>
Oct 09 16:24:34 compute-0 nova_compute[117331]:     </serial>
Oct 09 16:24:34 compute-0 nova_compute[117331]:     <console type="pty">
Oct 09 16:24:34 compute-0 nova_compute[117331]:       <log file="/var/lib/nova/instances/7531b461-59ec-4261-8f6f-125c07fbf626/console.log" append="off"/>
Oct 09 16:24:34 compute-0 nova_compute[117331]:       <target type="serial" port="0"/>
Oct 09 16:24:34 compute-0 nova_compute[117331]:     </console>
Oct 09 16:24:34 compute-0 nova_compute[117331]:     <input type="tablet" bus="usb">
Oct 09 16:24:34 compute-0 nova_compute[117331]:       <address type="usb" bus="0" port="1"/>
Oct 09 16:24:34 compute-0 nova_compute[117331]:     </input>
Oct 09 16:24:34 compute-0 nova_compute[117331]:     <input type="mouse" bus="ps2"/>
Oct 09 16:24:34 compute-0 nova_compute[117331]:     <graphics type="vnc" port="-1" autoport="yes" listen="::">
Oct 09 16:24:34 compute-0 nova_compute[117331]:       <listen type="address" address="::"/>
Oct 09 16:24:34 compute-0 nova_compute[117331]:     </graphics>
Oct 09 16:24:34 compute-0 nova_compute[117331]:     <video>
Oct 09 16:24:34 compute-0 nova_compute[117331]:       <model type="virtio" heads="1" primary="yes"/>
Oct 09 16:24:34 compute-0 nova_compute[117331]:       <address type="pci" domain="0x0000" bus="0x00" slot="0x01" function="0x0"/>
Oct 09 16:24:34 compute-0 nova_compute[117331]:     </video>
Oct 09 16:24:34 compute-0 nova_compute[117331]:     <memballoon model="virtio" autodeflate="on" freePageReporting="on">
Oct 09 16:24:34 compute-0 nova_compute[117331]:       <stats period="10"/>
Oct 09 16:24:34 compute-0 nova_compute[117331]:       <address type="pci" domain="0x0000" bus="0x04" slot="0x00" function="0x0"/>
Oct 09 16:24:34 compute-0 nova_compute[117331]:     </memballoon>
Oct 09 16:24:34 compute-0 nova_compute[117331]:     <rng model="virtio">
Oct 09 16:24:34 compute-0 nova_compute[117331]:       <backend model="random">/dev/urandom</backend>
Oct 09 16:24:34 compute-0 nova_compute[117331]:       <address type="pci" domain="0x0000" bus="0x05" slot="0x00" function="0x0"/>
Oct 09 16:24:34 compute-0 nova_compute[117331]:     </rng>
Oct 09 16:24:34 compute-0 nova_compute[117331]:   </devices>
Oct 09 16:24:34 compute-0 nova_compute[117331]:   <seclabel type="dynamic" model="selinux" relabel="yes"/>
Oct 09 16:24:34 compute-0 nova_compute[117331]: </domain>
Oct 09 16:24:34 compute-0 nova_compute[117331]:  _remove_cpu_shared_set_xml /usr/lib/python3.12/site-packages/nova/virt/libvirt/migration.py:250
Oct 09 16:24:34 compute-0 nova_compute[117331]: 2025-10-09 16:24:34.779 2 DEBUG nova.virt.libvirt.migration [None req-0f6f4f94-486e-4917-8619-3858302d5b76 005258eefa5345409efe13f6a1de22ab c076c42271004e7aa86e84416e33f826 - - default default] _update_pci_xml output xml=<domain type="kvm">
Oct 09 16:24:34 compute-0 nova_compute[117331]:   <name>instance-0000000e</name>
Oct 09 16:24:34 compute-0 nova_compute[117331]:   <uuid>7531b461-59ec-4261-8f6f-125c07fbf626</uuid>
Oct 09 16:24:34 compute-0 nova_compute[117331]:   <metadata>
Oct 09 16:24:34 compute-0 nova_compute[117331]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Oct 09 16:24:34 compute-0 nova_compute[117331]:       <nova:package version="32.1.0-0.20251008143712.076498e.el10"/>
Oct 09 16:24:34 compute-0 nova_compute[117331]:       <nova:name>tempest-TestExecuteHostMaintenanceStrategy-server-721204525</nova:name>
Oct 09 16:24:34 compute-0 nova_compute[117331]:       <nova:creationTime>2025-10-09 16:23:38</nova:creationTime>
Oct 09 16:24:34 compute-0 nova_compute[117331]:       <nova:flavor name="m1.nano" id="5aeb5bf9-70f3-4ab6-b418-2c3319a7d2b3">
Oct 09 16:24:34 compute-0 nova_compute[117331]:         <nova:memory>128</nova:memory>
Oct 09 16:24:34 compute-0 nova_compute[117331]:         <nova:disk>1</nova:disk>
Oct 09 16:24:34 compute-0 nova_compute[117331]:         <nova:swap>0</nova:swap>
Oct 09 16:24:34 compute-0 nova_compute[117331]:         <nova:ephemeral>0</nova:ephemeral>
Oct 09 16:24:34 compute-0 nova_compute[117331]:         <nova:vcpus>1</nova:vcpus>
Oct 09 16:24:34 compute-0 nova_compute[117331]:         <nova:extraSpecs>
Oct 09 16:24:34 compute-0 nova_compute[117331]:           <nova:extraSpec name="hw_rng:allowed">True</nova:extraSpec>
Oct 09 16:24:34 compute-0 nova_compute[117331]:         </nova:extraSpecs>
Oct 09 16:24:34 compute-0 nova_compute[117331]:       </nova:flavor>
Oct 09 16:24:34 compute-0 nova_compute[117331]:       <nova:image uuid="b7d6e0af-25e4-4227-9dc6-43143898ceee">
Oct 09 16:24:34 compute-0 nova_compute[117331]:         <nova:containerFormat>bare</nova:containerFormat>
Oct 09 16:24:34 compute-0 nova_compute[117331]:         <nova:diskFormat>qcow2</nova:diskFormat>
Oct 09 16:24:34 compute-0 nova_compute[117331]:         <nova:minDisk>1</nova:minDisk>
Oct 09 16:24:34 compute-0 nova_compute[117331]:         <nova:minRam>0</nova:minRam>
Oct 09 16:24:34 compute-0 nova_compute[117331]:         <nova:properties>
Oct 09 16:24:34 compute-0 nova_compute[117331]:           <nova:property name="hw_rng_model">virtio</nova:property>
Oct 09 16:24:34 compute-0 nova_compute[117331]:         </nova:properties>
Oct 09 16:24:34 compute-0 nova_compute[117331]:       </nova:image>
Oct 09 16:24:34 compute-0 nova_compute[117331]:       <nova:owner>
Oct 09 16:24:34 compute-0 nova_compute[117331]:         <nova:user uuid="2ed3ac10329446d8a5aae566951cae1e">tempest-TestExecuteHostMaintenanceStrategy-51576386-project-admin</nova:user>
Oct 09 16:24:34 compute-0 nova_compute[117331]:         <nova:project uuid="c47758289b4c448c95f927a91d89e09f">tempest-TestExecuteHostMaintenanceStrategy-51576386</nova:project>
Oct 09 16:24:34 compute-0 nova_compute[117331]:       </nova:owner>
Oct 09 16:24:34 compute-0 nova_compute[117331]:       <nova:root type="image" uuid="b7d6e0af-25e4-4227-9dc6-43143898ceee"/>
Oct 09 16:24:34 compute-0 nova_compute[117331]:       <nova:ports>
Oct 09 16:24:34 compute-0 nova_compute[117331]:         <nova:port uuid="15a9c820-cacb-46bc-acd6-e8ed3b08432a">
Oct 09 16:24:34 compute-0 nova_compute[117331]:           <nova:ip type="fixed" address="10.100.0.13" ipVersion="4"/>
Oct 09 16:24:34 compute-0 nova_compute[117331]:         </nova:port>
Oct 09 16:24:34 compute-0 nova_compute[117331]:       </nova:ports>
Oct 09 16:24:34 compute-0 nova_compute[117331]:     </nova:instance>
Oct 09 16:24:34 compute-0 nova_compute[117331]:   </metadata>
Oct 09 16:24:34 compute-0 nova_compute[117331]:   <memory unit="KiB">131072</memory>
Oct 09 16:24:34 compute-0 nova_compute[117331]:   <currentMemory unit="KiB">131072</currentMemory>
Oct 09 16:24:34 compute-0 nova_compute[117331]:   <vcpu placement="static">1</vcpu>
Oct 09 16:24:34 compute-0 nova_compute[117331]:   <resource>
Oct 09 16:24:34 compute-0 nova_compute[117331]:     <partition>/machine</partition>
Oct 09 16:24:34 compute-0 nova_compute[117331]:   </resource>
Oct 09 16:24:34 compute-0 nova_compute[117331]:   <sysinfo type="smbios">
Oct 09 16:24:34 compute-0 nova_compute[117331]:     <system>
Oct 09 16:24:34 compute-0 nova_compute[117331]:       <entry name="manufacturer">RDO</entry>
Oct 09 16:24:34 compute-0 nova_compute[117331]:       <entry name="product">OpenStack Compute</entry>
Oct 09 16:24:34 compute-0 nova_compute[117331]:       <entry name="version">32.1.0-0.20251008143712.076498e.el10</entry>
Oct 09 16:24:34 compute-0 nova_compute[117331]:       <entry name="serial">7531b461-59ec-4261-8f6f-125c07fbf626</entry>
Oct 09 16:24:34 compute-0 nova_compute[117331]:       <entry name="uuid">7531b461-59ec-4261-8f6f-125c07fbf626</entry>
Oct 09 16:24:34 compute-0 nova_compute[117331]:       <entry name="family">Virtual Machine</entry>
Oct 09 16:24:34 compute-0 nova_compute[117331]:     </system>
Oct 09 16:24:34 compute-0 nova_compute[117331]:   </sysinfo>
Oct 09 16:24:34 compute-0 nova_compute[117331]:   <os>
Oct 09 16:24:34 compute-0 nova_compute[117331]:     <type arch="x86_64" machine="pc-q35-rhel9.6.0">hvm</type>
Oct 09 16:24:34 compute-0 nova_compute[117331]:     <boot dev="hd"/>
Oct 09 16:24:34 compute-0 nova_compute[117331]:     <smbios mode="sysinfo"/>
Oct 09 16:24:34 compute-0 nova_compute[117331]:   </os>
Oct 09 16:24:34 compute-0 nova_compute[117331]:   <features>
Oct 09 16:24:34 compute-0 nova_compute[117331]:     <acpi/>
Oct 09 16:24:34 compute-0 nova_compute[117331]:     <apic/>
Oct 09 16:24:34 compute-0 nova_compute[117331]:     <vmcoreinfo state="on"/>
Oct 09 16:24:34 compute-0 nova_compute[117331]:   </features>
Oct 09 16:24:34 compute-0 nova_compute[117331]:   <cpu mode="host-model" check="partial">
Oct 09 16:24:34 compute-0 nova_compute[117331]:     <topology sockets="1" dies="1" clusters="1" cores="1" threads="1"/>
Oct 09 16:24:34 compute-0 nova_compute[117331]:   </cpu>
Oct 09 16:24:34 compute-0 nova_compute[117331]:   <clock offset="utc">
Oct 09 16:24:34 compute-0 nova_compute[117331]:     <timer name="pit" tickpolicy="delay"/>
Oct 09 16:24:34 compute-0 nova_compute[117331]:     <timer name="rtc" tickpolicy="catchup"/>
Oct 09 16:24:34 compute-0 nova_compute[117331]:     <timer name="hpet" present="no"/>
Oct 09 16:24:34 compute-0 nova_compute[117331]:   </clock>
Oct 09 16:24:34 compute-0 nova_compute[117331]:   <on_poweroff>destroy</on_poweroff>
Oct 09 16:24:34 compute-0 nova_compute[117331]:   <on_reboot>restart</on_reboot>
Oct 09 16:24:34 compute-0 nova_compute[117331]:   <on_crash>destroy</on_crash>
Oct 09 16:24:34 compute-0 nova_compute[117331]:   <devices>
Oct 09 16:24:34 compute-0 nova_compute[117331]:     <emulator>/usr/libexec/qemu-kvm</emulator>
Oct 09 16:24:34 compute-0 nova_compute[117331]:     <disk type="file" device="disk">
Oct 09 16:24:34 compute-0 nova_compute[117331]:       <driver name="qemu" type="qcow2" cache="none"/>
Oct 09 16:24:34 compute-0 nova_compute[117331]:       <source file="/var/lib/nova/instances/7531b461-59ec-4261-8f6f-125c07fbf626/disk"/>
Oct 09 16:24:34 compute-0 nova_compute[117331]:       <target dev="vda" bus="virtio"/>
Oct 09 16:24:34 compute-0 nova_compute[117331]:       <address type="pci" domain="0x0000" bus="0x03" slot="0x00" function="0x0"/>
Oct 09 16:24:34 compute-0 nova_compute[117331]:     </disk>
Oct 09 16:24:34 compute-0 nova_compute[117331]:     <disk type="file" device="cdrom">
Oct 09 16:24:34 compute-0 nova_compute[117331]:       <driver name="qemu" type="raw" cache="none"/>
Oct 09 16:24:34 compute-0 nova_compute[117331]:       <source file="/var/lib/nova/instances/7531b461-59ec-4261-8f6f-125c07fbf626/disk.config"/>
Oct 09 16:24:34 compute-0 nova_compute[117331]:       <target dev="sda" bus="sata"/>
Oct 09 16:24:34 compute-0 nova_compute[117331]:       <readonly/>
Oct 09 16:24:34 compute-0 nova_compute[117331]:       <address type="drive" controller="0" bus="0" target="0" unit="0"/>
Oct 09 16:24:34 compute-0 nova_compute[117331]:     </disk>
Oct 09 16:24:34 compute-0 nova_compute[117331]:     <controller type="pci" index="0" model="pcie-root"/>
Oct 09 16:24:34 compute-0 nova_compute[117331]:     <controller type="pci" index="1" model="pcie-root-port">
Oct 09 16:24:34 compute-0 nova_compute[117331]:       <model name="pcie-root-port"/>
Oct 09 16:24:34 compute-0 nova_compute[117331]:       <target chassis="1" port="0x10"/>
Oct 09 16:24:34 compute-0 nova_compute[117331]:       <address type="pci" domain="0x0000" bus="0x00" slot="0x02" function="0x0" multifunction="on"/>
Oct 09 16:24:34 compute-0 nova_compute[117331]:     </controller>
Oct 09 16:24:34 compute-0 nova_compute[117331]:     <controller type="pci" index="2" model="pcie-root-port">
Oct 09 16:24:34 compute-0 nova_compute[117331]:       <model name="pcie-root-port"/>
Oct 09 16:24:34 compute-0 nova_compute[117331]:       <target chassis="2" port="0x11"/>
Oct 09 16:24:34 compute-0 nova_compute[117331]:       <address type="pci" domain="0x0000" bus="0x00" slot="0x02" function="0x1"/>
Oct 09 16:24:34 compute-0 nova_compute[117331]:     </controller>
Oct 09 16:24:34 compute-0 nova_compute[117331]:     <controller type="pci" index="3" model="pcie-root-port">
Oct 09 16:24:34 compute-0 nova_compute[117331]:       <model name="pcie-root-port"/>
Oct 09 16:24:34 compute-0 nova_compute[117331]:       <target chassis="3" port="0x12"/>
Oct 09 16:24:34 compute-0 nova_compute[117331]:       <address type="pci" domain="0x0000" bus="0x00" slot="0x02" function="0x2"/>
Oct 09 16:24:34 compute-0 nova_compute[117331]:     </controller>
Oct 09 16:24:34 compute-0 nova_compute[117331]:     <controller type="pci" index="4" model="pcie-root-port">
Oct 09 16:24:34 compute-0 nova_compute[117331]:       <model name="pcie-root-port"/>
Oct 09 16:24:34 compute-0 nova_compute[117331]:       <target chassis="4" port="0x13"/>
Oct 09 16:24:34 compute-0 nova_compute[117331]:       <address type="pci" domain="0x0000" bus="0x00" slot="0x02" function="0x3"/>
Oct 09 16:24:34 compute-0 nova_compute[117331]:     </controller>
Oct 09 16:24:34 compute-0 nova_compute[117331]:     <controller type="pci" index="5" model="pcie-root-port">
Oct 09 16:24:34 compute-0 nova_compute[117331]:       <model name="pcie-root-port"/>
Oct 09 16:24:34 compute-0 nova_compute[117331]:       <target chassis="5" port="0x14"/>
Oct 09 16:24:34 compute-0 nova_compute[117331]:       <address type="pci" domain="0x0000" bus="0x00" slot="0x02" function="0x4"/>
Oct 09 16:24:34 compute-0 nova_compute[117331]:     </controller>
Oct 09 16:24:34 compute-0 nova_compute[117331]:     <controller type="pci" index="6" model="pcie-root-port">
Oct 09 16:24:34 compute-0 nova_compute[117331]:       <model name="pcie-root-port"/>
Oct 09 16:24:34 compute-0 nova_compute[117331]:       <target chassis="6" port="0x15"/>
Oct 09 16:24:34 compute-0 nova_compute[117331]:       <address type="pci" domain="0x0000" bus="0x00" slot="0x02" function="0x5"/>
Oct 09 16:24:34 compute-0 nova_compute[117331]:     </controller>
Oct 09 16:24:34 compute-0 nova_compute[117331]:     <controller type="pci" index="7" model="pcie-root-port">
Oct 09 16:24:34 compute-0 nova_compute[117331]:       <model name="pcie-root-port"/>
Oct 09 16:24:34 compute-0 nova_compute[117331]:       <target chassis="7" port="0x16"/>
Oct 09 16:24:34 compute-0 nova_compute[117331]:       <address type="pci" domain="0x0000" bus="0x00" slot="0x02" function="0x6"/>
Oct 09 16:24:34 compute-0 nova_compute[117331]:     </controller>
Oct 09 16:24:34 compute-0 nova_compute[117331]:     <controller type="pci" index="8" model="pcie-root-port">
Oct 09 16:24:34 compute-0 nova_compute[117331]:       <model name="pcie-root-port"/>
Oct 09 16:24:34 compute-0 nova_compute[117331]:       <target chassis="8" port="0x17"/>
Oct 09 16:24:34 compute-0 nova_compute[117331]:       <address type="pci" domain="0x0000" bus="0x00" slot="0x02" function="0x7"/>
Oct 09 16:24:34 compute-0 nova_compute[117331]:     </controller>
Oct 09 16:24:34 compute-0 nova_compute[117331]:     <controller type="pci" index="9" model="pcie-root-port">
Oct 09 16:24:34 compute-0 nova_compute[117331]:       <model name="pcie-root-port"/>
Oct 09 16:24:34 compute-0 nova_compute[117331]:       <target chassis="9" port="0x18"/>
Oct 09 16:24:34 compute-0 nova_compute[117331]:       <address type="pci" domain="0x0000" bus="0x00" slot="0x03" function="0x0" multifunction="on"/>
Oct 09 16:24:34 compute-0 nova_compute[117331]:     </controller>
Oct 09 16:24:34 compute-0 nova_compute[117331]:     <controller type="pci" index="10" model="pcie-root-port">
Oct 09 16:24:34 compute-0 nova_compute[117331]:       <model name="pcie-root-port"/>
Oct 09 16:24:34 compute-0 nova_compute[117331]:       <target chassis="10" port="0x19"/>
Oct 09 16:24:34 compute-0 nova_compute[117331]:       <address type="pci" domain="0x0000" bus="0x00" slot="0x03" function="0x1"/>
Oct 09 16:24:34 compute-0 nova_compute[117331]:     </controller>
Oct 09 16:24:34 compute-0 nova_compute[117331]:     <controller type="pci" index="11" model="pcie-root-port">
Oct 09 16:24:34 compute-0 nova_compute[117331]:       <model name="pcie-root-port"/>
Oct 09 16:24:34 compute-0 nova_compute[117331]:       <target chassis="11" port="0x1a"/>
Oct 09 16:24:34 compute-0 nova_compute[117331]:       <address type="pci" domain="0x0000" bus="0x00" slot="0x03" function="0x2"/>
Oct 09 16:24:34 compute-0 nova_compute[117331]:     </controller>
Oct 09 16:24:34 compute-0 nova_compute[117331]:     <controller type="pci" index="12" model="pcie-root-port">
Oct 09 16:24:34 compute-0 nova_compute[117331]:       <model name="pcie-root-port"/>
Oct 09 16:24:34 compute-0 nova_compute[117331]:       <target chassis="12" port="0x1b"/>
Oct 09 16:24:34 compute-0 nova_compute[117331]:       <address type="pci" domain="0x0000" bus="0x00" slot="0x03" function="0x3"/>
Oct 09 16:24:34 compute-0 nova_compute[117331]:     </controller>
Oct 09 16:24:34 compute-0 nova_compute[117331]:     <controller type="pci" index="13" model="pcie-root-port">
Oct 09 16:24:34 compute-0 nova_compute[117331]:       <model name="pcie-root-port"/>
Oct 09 16:24:34 compute-0 nova_compute[117331]:       <target chassis="13" port="0x1c"/>
Oct 09 16:24:34 compute-0 nova_compute[117331]:       <address type="pci" domain="0x0000" bus="0x00" slot="0x03" function="0x4"/>
Oct 09 16:24:34 compute-0 nova_compute[117331]:     </controller>
Oct 09 16:24:34 compute-0 nova_compute[117331]:     <controller type="pci" index="14" model="pcie-root-port">
Oct 09 16:24:34 compute-0 nova_compute[117331]:       <model name="pcie-root-port"/>
Oct 09 16:24:34 compute-0 nova_compute[117331]:       <target chassis="14" port="0x1d"/>
Oct 09 16:24:34 compute-0 nova_compute[117331]:       <address type="pci" domain="0x0000" bus="0x00" slot="0x03" function="0x5"/>
Oct 09 16:24:34 compute-0 nova_compute[117331]:     </controller>
Oct 09 16:24:34 compute-0 nova_compute[117331]:     <controller type="pci" index="15" model="pcie-root-port">
Oct 09 16:24:34 compute-0 nova_compute[117331]:       <model name="pcie-root-port"/>
Oct 09 16:24:34 compute-0 nova_compute[117331]:       <target chassis="15" port="0x1e"/>
Oct 09 16:24:34 compute-0 nova_compute[117331]:       <address type="pci" domain="0x0000" bus="0x00" slot="0x03" function="0x6"/>
Oct 09 16:24:34 compute-0 nova_compute[117331]:     </controller>
Oct 09 16:24:34 compute-0 nova_compute[117331]:     <controller type="pci" index="16" model="pcie-root-port">
Oct 09 16:24:34 compute-0 nova_compute[117331]:       <model name="pcie-root-port"/>
Oct 09 16:24:34 compute-0 nova_compute[117331]:       <target chassis="16" port="0x1f"/>
Oct 09 16:24:34 compute-0 nova_compute[117331]:       <address type="pci" domain="0x0000" bus="0x00" slot="0x03" function="0x7"/>
Oct 09 16:24:34 compute-0 nova_compute[117331]:     </controller>
Oct 09 16:24:34 compute-0 nova_compute[117331]:     <controller type="pci" index="17" model="pcie-root-port">
Oct 09 16:24:34 compute-0 nova_compute[117331]:       <model name="pcie-root-port"/>
Oct 09 16:24:34 compute-0 nova_compute[117331]:       <target chassis="17" port="0x20"/>
Oct 09 16:24:34 compute-0 nova_compute[117331]:       <address type="pci" domain="0x0000" bus="0x00" slot="0x04" function="0x0" multifunction="on"/>
Oct 09 16:24:34 compute-0 nova_compute[117331]:     </controller>
Oct 09 16:24:34 compute-0 nova_compute[117331]:     <controller type="pci" index="18" model="pcie-root-port">
Oct 09 16:24:34 compute-0 nova_compute[117331]:       <model name="pcie-root-port"/>
Oct 09 16:24:34 compute-0 nova_compute[117331]:       <target chassis="18" port="0x21"/>
Oct 09 16:24:34 compute-0 nova_compute[117331]:       <address type="pci" domain="0x0000" bus="0x00" slot="0x04" function="0x1"/>
Oct 09 16:24:34 compute-0 nova_compute[117331]:     </controller>
Oct 09 16:24:34 compute-0 nova_compute[117331]:     <controller type="pci" index="19" model="pcie-root-port">
Oct 09 16:24:34 compute-0 nova_compute[117331]:       <model name="pcie-root-port"/>
Oct 09 16:24:34 compute-0 nova_compute[117331]:       <target chassis="19" port="0x22"/>
Oct 09 16:24:34 compute-0 nova_compute[117331]:       <address type="pci" domain="0x0000" bus="0x00" slot="0x04" function="0x2"/>
Oct 09 16:24:34 compute-0 nova_compute[117331]:     </controller>
Oct 09 16:24:34 compute-0 nova_compute[117331]:     <controller type="pci" index="20" model="pcie-root-port">
Oct 09 16:24:34 compute-0 nova_compute[117331]:       <model name="pcie-root-port"/>
Oct 09 16:24:34 compute-0 nova_compute[117331]:       <target chassis="20" port="0x23"/>
Oct 09 16:24:34 compute-0 nova_compute[117331]:       <address type="pci" domain="0x0000" bus="0x00" slot="0x04" function="0x3"/>
Oct 09 16:24:34 compute-0 nova_compute[117331]:     </controller>
Oct 09 16:24:34 compute-0 nova_compute[117331]:     <controller type="pci" index="21" model="pcie-root-port">
Oct 09 16:24:34 compute-0 nova_compute[117331]:       <model name="pcie-root-port"/>
Oct 09 16:24:34 compute-0 nova_compute[117331]:       <target chassis="21" port="0x24"/>
Oct 09 16:24:34 compute-0 nova_compute[117331]:       <address type="pci" domain="0x0000" bus="0x00" slot="0x04" function="0x4"/>
Oct 09 16:24:34 compute-0 nova_compute[117331]:     </controller>
Oct 09 16:24:34 compute-0 nova_compute[117331]:     <controller type="pci" index="22" model="pcie-root-port">
Oct 09 16:24:34 compute-0 nova_compute[117331]:       <model name="pcie-root-port"/>
Oct 09 16:24:34 compute-0 nova_compute[117331]:       <target chassis="22" port="0x25"/>
Oct 09 16:24:34 compute-0 nova_compute[117331]:       <address type="pci" domain="0x0000" bus="0x00" slot="0x04" function="0x5"/>
Oct 09 16:24:34 compute-0 nova_compute[117331]:     </controller>
Oct 09 16:24:34 compute-0 nova_compute[117331]:     <controller type="pci" index="23" model="pcie-root-port">
Oct 09 16:24:34 compute-0 nova_compute[117331]:       <model name="pcie-root-port"/>
Oct 09 16:24:34 compute-0 nova_compute[117331]:       <target chassis="23" port="0x26"/>
Oct 09 16:24:34 compute-0 nova_compute[117331]:       <address type="pci" domain="0x0000" bus="0x00" slot="0x04" function="0x6"/>
Oct 09 16:24:34 compute-0 nova_compute[117331]:     </controller>
Oct 09 16:24:34 compute-0 nova_compute[117331]:     <controller type="pci" index="24" model="pcie-root-port">
Oct 09 16:24:34 compute-0 nova_compute[117331]:       <model name="pcie-root-port"/>
Oct 09 16:24:34 compute-0 nova_compute[117331]:       <target chassis="24" port="0x27"/>
Oct 09 16:24:34 compute-0 nova_compute[117331]:       <address type="pci" domain="0x0000" bus="0x00" slot="0x04" function="0x7"/>
Oct 09 16:24:34 compute-0 nova_compute[117331]:     </controller>
Oct 09 16:24:34 compute-0 nova_compute[117331]:     <controller type="pci" index="25" model="pcie-root-port">
Oct 09 16:24:34 compute-0 nova_compute[117331]:       <model name="pcie-root-port"/>
Oct 09 16:24:34 compute-0 nova_compute[117331]:       <target chassis="25" port="0x28"/>
Oct 09 16:24:34 compute-0 nova_compute[117331]:       <address type="pci" domain="0x0000" bus="0x00" slot="0x05" function="0x0"/>
Oct 09 16:24:34 compute-0 nova_compute[117331]:     </controller>
Oct 09 16:24:34 compute-0 nova_compute[117331]:     <controller type="pci" index="26" model="pcie-to-pci-bridge">
Oct 09 16:24:34 compute-0 nova_compute[117331]:       <model name="pcie-pci-bridge"/>
Oct 09 16:24:34 compute-0 nova_compute[117331]:       <address type="pci" domain="0x0000" bus="0x01" slot="0x00" function="0x0"/>
Oct 09 16:24:34 compute-0 nova_compute[117331]:     </controller>
Oct 09 16:24:34 compute-0 nova_compute[117331]:     <controller type="usb" index="0" model="piix3-uhci">
Oct 09 16:24:34 compute-0 nova_compute[117331]:       <address type="pci" domain="0x0000" bus="0x1a" slot="0x01" function="0x0"/>
Oct 09 16:24:34 compute-0 nova_compute[117331]:     </controller>
Oct 09 16:24:34 compute-0 nova_compute[117331]:     <controller type="sata" index="0">
Oct 09 16:24:34 compute-0 nova_compute[117331]:       <address type="pci" domain="0x0000" bus="0x00" slot="0x1f" function="0x2"/>
Oct 09 16:24:34 compute-0 nova_compute[117331]:     </controller>
Oct 09 16:24:34 compute-0 nova_compute[117331]:     <interface type="ethernet">
Oct 09 16:24:34 compute-0 nova_compute[117331]:       <mac address="fa:16:3e:d9:5e:f1"/>
Oct 09 16:24:34 compute-0 nova_compute[117331]:       <model type="virtio"/>
Oct 09 16:24:34 compute-0 nova_compute[117331]:       <driver name="vhost" rx_queue_size="512"/>
Oct 09 16:24:34 compute-0 nova_compute[117331]:       <mtu size="1442"/>
Oct 09 16:24:34 compute-0 nova_compute[117331]:       <target dev="tap15a9c820-ca"/>
Oct 09 16:24:34 compute-0 nova_compute[117331]:       <address type="pci" domain="0x0000" bus="0x02" slot="0x00" function="0x0"/>
Oct 09 16:24:34 compute-0 nova_compute[117331]:     </interface>
Oct 09 16:24:34 compute-0 nova_compute[117331]:     <serial type="pty">
Oct 09 16:24:34 compute-0 nova_compute[117331]:       <log file="/var/lib/nova/instances/7531b461-59ec-4261-8f6f-125c07fbf626/console.log" append="off"/>
Oct 09 16:24:34 compute-0 nova_compute[117331]:       <target type="isa-serial" port="0">
Oct 09 16:24:34 compute-0 nova_compute[117331]:         <model name="isa-serial"/>
Oct 09 16:24:34 compute-0 nova_compute[117331]:       </target>
Oct 09 16:24:34 compute-0 nova_compute[117331]:     </serial>
Oct 09 16:24:34 compute-0 nova_compute[117331]:     <console type="pty">
Oct 09 16:24:34 compute-0 nova_compute[117331]:       <log file="/var/lib/nova/instances/7531b461-59ec-4261-8f6f-125c07fbf626/console.log" append="off"/>
Oct 09 16:24:34 compute-0 nova_compute[117331]:       <target type="serial" port="0"/>
Oct 09 16:24:34 compute-0 nova_compute[117331]:     </console>
Oct 09 16:24:34 compute-0 nova_compute[117331]:     <input type="tablet" bus="usb">
Oct 09 16:24:34 compute-0 nova_compute[117331]:       <address type="usb" bus="0" port="1"/>
Oct 09 16:24:34 compute-0 nova_compute[117331]:     </input>
Oct 09 16:24:34 compute-0 nova_compute[117331]:     <input type="mouse" bus="ps2"/>
Oct 09 16:24:34 compute-0 nova_compute[117331]:     <graphics type="vnc" port="-1" autoport="yes" listen="::">
Oct 09 16:24:34 compute-0 nova_compute[117331]:       <listen type="address" address="::"/>
Oct 09 16:24:34 compute-0 nova_compute[117331]:     </graphics>
Oct 09 16:24:34 compute-0 nova_compute[117331]:     <video>
Oct 09 16:24:34 compute-0 nova_compute[117331]:       <model type="virtio" heads="1" primary="yes"/>
Oct 09 16:24:34 compute-0 nova_compute[117331]:       <address type="pci" domain="0x0000" bus="0x00" slot="0x01" function="0x0"/>
Oct 09 16:24:34 compute-0 nova_compute[117331]:     </video>
Oct 09 16:24:34 compute-0 nova_compute[117331]:     <memballoon model="virtio" autodeflate="on" freePageReporting="on">
Oct 09 16:24:34 compute-0 nova_compute[117331]:       <stats period="10"/>
Oct 09 16:24:34 compute-0 nova_compute[117331]:       <address type="pci" domain="0x0000" bus="0x04" slot="0x00" function="0x0"/>
Oct 09 16:24:34 compute-0 nova_compute[117331]:     </memballoon>
Oct 09 16:24:34 compute-0 nova_compute[117331]:     <rng model="virtio">
Oct 09 16:24:34 compute-0 nova_compute[117331]:       <backend model="random">/dev/urandom</backend>
Oct 09 16:24:34 compute-0 nova_compute[117331]:       <address type="pci" domain="0x0000" bus="0x05" slot="0x00" function="0x0"/>
Oct 09 16:24:34 compute-0 nova_compute[117331]:     </rng>
Oct 09 16:24:34 compute-0 nova_compute[117331]:   </devices>
Oct 09 16:24:34 compute-0 nova_compute[117331]:   <seclabel type="dynamic" model="selinux" relabel="yes"/>
Oct 09 16:24:34 compute-0 nova_compute[117331]: </domain>
Oct 09 16:24:34 compute-0 nova_compute[117331]:  _update_pci_dev_xml /usr/lib/python3.12/site-packages/nova/virt/libvirt/migration.py:166
Oct 09 16:24:34 compute-0 nova_compute[117331]: 2025-10-09 16:24:34.780 2 DEBUG nova.virt.libvirt.driver [None req-0f6f4f94-486e-4917-8619-3858302d5b76 005258eefa5345409efe13f6a1de22ab c076c42271004e7aa86e84416e33f826 - - default default] [instance: 7531b461-59ec-4261-8f6f-125c07fbf626] About to invoke the migrate API _live_migration_operation /usr/lib/python3.12/site-packages/nova/virt/libvirt/driver.py:11175
Oct 09 16:24:34 compute-0 nova_compute[117331]: 2025-10-09 16:24:34.850 2 DEBUG oslo_concurrency.lockutils [req-5ed6130c-1981-4cde-a0d6-e03fb95bfb2a req-3c1116e2-e592-41ee-8ba1-2e0527b21230 ee15961ad2be4bada6c106ac4f69f422 c076c42271004e7aa86e84416e33f826 - - default default] Releasing lock "refresh_cache-7531b461-59ec-4261-8f6f-125c07fbf626" lock /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:334
Oct 09 16:24:35 compute-0 nova_compute[117331]: 2025-10-09 16:24:35.271 2 DEBUG nova.virt.libvirt.migration [None req-0f6f4f94-486e-4917-8619-3858302d5b76 005258eefa5345409efe13f6a1de22ab c076c42271004e7aa86e84416e33f826 - - default default] [instance: 7531b461-59ec-4261-8f6f-125c07fbf626] Current None elapsed 1 steps [(0, 50), (300, 95), (600, 140), (900, 185), (1200, 230), (1500, 275), (1800, 320), (2100, 365), (2400, 410), (2700, 455), (3000, 500)] update_downtime /usr/lib/python3.12/site-packages/nova/virt/libvirt/migration.py:658
Oct 09 16:24:35 compute-0 nova_compute[117331]: 2025-10-09 16:24:35.272 2 INFO nova.virt.libvirt.migration [None req-0f6f4f94-486e-4917-8619-3858302d5b76 005258eefa5345409efe13f6a1de22ab c076c42271004e7aa86e84416e33f826 - - default default] [instance: 7531b461-59ec-4261-8f6f-125c07fbf626] Increasing downtime to 50 ms after 0 sec elapsed time
Oct 09 16:24:35 compute-0 ovn_metadata_agent[28608]: 2025-10-09 16:24:35.310 28613 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:405
Oct 09 16:24:35 compute-0 ovn_metadata_agent[28608]: 2025-10-09 16:24:35.310 28613 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.000s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:410
Oct 09 16:24:35 compute-0 ovn_metadata_agent[28608]: 2025-10-09 16:24:35.311 28613 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:424
Oct 09 16:24:36 compute-0 nova_compute[117331]: 2025-10-09 16:24:36.261 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 09 16:24:36 compute-0 nova_compute[117331]: 2025-10-09 16:24:36.288 2 INFO nova.virt.libvirt.driver [None req-0f6f4f94-486e-4917-8619-3858302d5b76 005258eefa5345409efe13f6a1de22ab c076c42271004e7aa86e84416e33f826 - - default default] [instance: 7531b461-59ec-4261-8f6f-125c07fbf626] Migration running for 1 secs, memory 100% remaining (bytes processed=0, remaining=0, total=0); disk 100% remaining (bytes processed=0, remaining=0, total=0).
Oct 09 16:24:36 compute-0 kernel: tap15a9c820-ca (unregistering): left promiscuous mode
Oct 09 16:24:36 compute-0 NetworkManager[1028]: <info>  [1760027076.7951] device (tap15a9c820-ca): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Oct 09 16:24:36 compute-0 nova_compute[117331]: 2025-10-09 16:24:36.805 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 09 16:24:36 compute-0 nova_compute[117331]: 2025-10-09 16:24:36.807 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 09 16:24:36 compute-0 ovn_controller[19752]: 2025-10-09T16:24:36Z|00136|binding|INFO|Releasing lport 15a9c820-cacb-46bc-acd6-e8ed3b08432a from this chassis (sb_readonly=0)
Oct 09 16:24:36 compute-0 ovn_controller[19752]: 2025-10-09T16:24:36Z|00137|binding|INFO|Setting lport 15a9c820-cacb-46bc-acd6-e8ed3b08432a down in Southbound
Oct 09 16:24:36 compute-0 ovn_controller[19752]: 2025-10-09T16:24:36Z|00138|binding|INFO|Removing iface tap15a9c820-ca ovn-installed in OVS
Oct 09 16:24:36 compute-0 ovn_metadata_agent[28608]: 2025-10-09 16:24:36.812 28613 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:d9:5e:f1 10.100.0.13'], port_security=['fa:16:3e:d9:5e:f1 10.100.0.13'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com,compute-1.ctlplane.example.com', 'activation-strategy': 'rarp', 'additional-chassis-activated': '2bd8bf21-1f6b-42c9-9656-9a72fa8dcbf3'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.13/28', 'neutron:device_id': '7531b461-59ec-4261-8f6f-125c07fbf626', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-c78b28f7-c251-4d74-863e-8d5520acbae0', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'c47758289b4c448c95f927a91d89e09f', 'neutron:revision_number': '10', 'neutron:security_group_ids': '37ce4296-f479-41b4-95bb-85317226edc9', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=8cdf3869-9618-417b-be95-470341634549, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f37b2576840>], logical_port=15a9c820-cacb-46bc-acd6-e8ed3b08432a) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f37b2576840>]) matches /usr/lib/python3.12/site-packages/ovsdbapp/backend/ovs_idl/event.py:55
Oct 09 16:24:36 compute-0 ovn_metadata_agent[28608]: 2025-10-09 16:24:36.816 28613 INFO neutron.agent.ovn.metadata.agent [-] Port 15a9c820-cacb-46bc-acd6-e8ed3b08432a in datapath c78b28f7-c251-4d74-863e-8d5520acbae0 unbound from our chassis
Oct 09 16:24:36 compute-0 ovn_metadata_agent[28608]: 2025-10-09 16:24:36.816 28613 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network c78b28f7-c251-4d74-863e-8d5520acbae0, tearing the namespace down if needed _get_provision_params /usr/lib/python3.12/site-packages/neutron/agent/ovn/metadata/agent.py:756
Oct 09 16:24:36 compute-0 ovn_metadata_agent[28608]: 2025-10-09 16:24:36.819 139687 DEBUG oslo.privsep.daemon [-] privsep: reply[76d0a076-2154-4216-9f78-fbe244e7d1ec]: (4, True) _call_back /usr/lib/python3.12/site-packages/oslo_privsep/daemon.py:515
Oct 09 16:24:36 compute-0 ovn_metadata_agent[28608]: 2025-10-09 16:24:36.819 28613 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-c78b28f7-c251-4d74-863e-8d5520acbae0 namespace which is not needed anymore
Oct 09 16:24:36 compute-0 nova_compute[117331]: 2025-10-09 16:24:36.829 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 09 16:24:36 compute-0 systemd[1]: machine-qemu\x2d9\x2dinstance\x2d0000000e.scope: Deactivated successfully.
Oct 09 16:24:36 compute-0 podman[146784]: 2025-10-09 16:24:36.852827471 +0000 UTC m=+0.077518928 container health_status 270830b5167cafbab735d8118980f26987e673cd0fa464dd2202394830bcca66 (image=38.102.83.66:5001/podified-master-centos10/openstack-multipathd:watcher_latest, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, container_name=multipathd, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.build-date=20251007, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': '38.102.83.66:5001/podified-master-centos10/openstack-multipathd:watcher_latest', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, io.buildah.version=1.41.4, tcib_build_tag=watcher_latest, config_id=multipathd)
Oct 09 16:24:36 compute-0 systemd[1]: machine-qemu\x2d9\x2dinstance\x2d0000000e.scope: Consumed 13.983s CPU time.
Oct 09 16:24:36 compute-0 systemd-machined[77487]: Machine qemu-9-instance-0000000e terminated.
Oct 09 16:24:36 compute-0 neutron-haproxy-ovnmeta-c78b28f7-c251-4d74-863e-8d5520acbae0[146504]: [NOTICE]   (146508) : haproxy version is 3.0.5-8e879a5
Oct 09 16:24:36 compute-0 neutron-haproxy-ovnmeta-c78b28f7-c251-4d74-863e-8d5520acbae0[146504]: [NOTICE]   (146508) : path to executable is /usr/sbin/haproxy
Oct 09 16:24:36 compute-0 neutron-haproxy-ovnmeta-c78b28f7-c251-4d74-863e-8d5520acbae0[146504]: [WARNING]  (146508) : Exiting Master process...
Oct 09 16:24:36 compute-0 podman[146826]: 2025-10-09 16:24:36.939066807 +0000 UTC m=+0.033959562 container kill 45cffa2ae58e30a7e8519d9d4471288a24e5df1427812c1750aff4cb76c6fa57 (image=38.102.83.66:5001/podified-master-centos10/openstack-neutron-metadata-agent-ovn:watcher_latest, name=neutron-haproxy-ovnmeta-c78b28f7-c251-4d74-863e-8d5520acbae0, org.label-schema.name=CentOS Stream 10 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, io.buildah.version=1.41.4, org.label-schema.build-date=20251007, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=watcher_latest, tcib_managed=true)
Oct 09 16:24:36 compute-0 neutron-haproxy-ovnmeta-c78b28f7-c251-4d74-863e-8d5520acbae0[146504]: [ALERT]    (146508) : Current worker (146510) exited with code 143 (Terminated)
Oct 09 16:24:36 compute-0 neutron-haproxy-ovnmeta-c78b28f7-c251-4d74-863e-8d5520acbae0[146504]: [WARNING]  (146508) : All workers exited. Exiting... (0)
Oct 09 16:24:36 compute-0 systemd[1]: libpod-45cffa2ae58e30a7e8519d9d4471288a24e5df1427812c1750aff4cb76c6fa57.scope: Deactivated successfully.
Oct 09 16:24:36 compute-0 conmon[146504]: conmon 45cffa2ae58e30a7e851 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-45cffa2ae58e30a7e8519d9d4471288a24e5df1427812c1750aff4cb76c6fa57.scope/container/memory.events
Oct 09 16:24:36 compute-0 podman[146842]: 2025-10-09 16:24:36.990468742 +0000 UTC m=+0.028296772 container died 45cffa2ae58e30a7e8519d9d4471288a24e5df1427812c1750aff4cb76c6fa57 (image=38.102.83.66:5001/podified-master-centos10/openstack-neutron-metadata-agent-ovn:watcher_latest, name=neutron-haproxy-ovnmeta-c78b28f7-c251-4d74-863e-8d5520acbae0, tcib_build_tag=watcher_latest, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251007, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.4, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 10 Base Image)
Oct 09 16:24:36 compute-0 nova_compute[117331]: 2025-10-09 16:24:36.996 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 09 16:24:37 compute-0 nova_compute[117331]: 2025-10-09 16:24:37.003 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 09 16:24:37 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-45cffa2ae58e30a7e8519d9d4471288a24e5df1427812c1750aff4cb76c6fa57-userdata-shm.mount: Deactivated successfully.
Oct 09 16:24:37 compute-0 systemd[1]: var-lib-containers-storage-overlay-829ae67fdae167e34f2ba75540a5656fb5b648325b24a954f82696d9bf6ee1bc-merged.mount: Deactivated successfully.
Oct 09 16:24:37 compute-0 nova_compute[117331]: 2025-10-09 16:24:37.047 2 DEBUG nova.virt.libvirt.guest [None req-0f6f4f94-486e-4917-8619-3858302d5b76 005258eefa5345409efe13f6a1de22ab c076c42271004e7aa86e84416e33f826 - - default default] Domain has shutdown/gone away: Requested operation is not valid: domain is not running get_job_info /usr/lib/python3.12/site-packages/nova/virt/libvirt/guest.py:687
Oct 09 16:24:37 compute-0 nova_compute[117331]: 2025-10-09 16:24:37.048 2 INFO nova.virt.libvirt.driver [None req-0f6f4f94-486e-4917-8619-3858302d5b76 005258eefa5345409efe13f6a1de22ab c076c42271004e7aa86e84416e33f826 - - default default] [instance: 7531b461-59ec-4261-8f6f-125c07fbf626] Migration operation has completed
Oct 09 16:24:37 compute-0 nova_compute[117331]: 2025-10-09 16:24:37.049 2 INFO nova.compute.manager [None req-0f6f4f94-486e-4917-8619-3858302d5b76 005258eefa5345409efe13f6a1de22ab c076c42271004e7aa86e84416e33f826 - - default default] [instance: 7531b461-59ec-4261-8f6f-125c07fbf626] _post_live_migration() is started..
Oct 09 16:24:37 compute-0 nova_compute[117331]: 2025-10-09 16:24:37.052 2 DEBUG nova.virt.libvirt.driver [None req-0f6f4f94-486e-4917-8619-3858302d5b76 005258eefa5345409efe13f6a1de22ab c076c42271004e7aa86e84416e33f826 - - default default] [instance: 7531b461-59ec-4261-8f6f-125c07fbf626] Migrate API has completed _live_migration_operation /usr/lib/python3.12/site-packages/nova/virt/libvirt/driver.py:11182
Oct 09 16:24:37 compute-0 nova_compute[117331]: 2025-10-09 16:24:37.052 2 DEBUG nova.virt.libvirt.driver [None req-0f6f4f94-486e-4917-8619-3858302d5b76 005258eefa5345409efe13f6a1de22ab c076c42271004e7aa86e84416e33f826 - - default default] [instance: 7531b461-59ec-4261-8f6f-125c07fbf626] Migration operation thread has finished _live_migration_operation /usr/lib/python3.12/site-packages/nova/virt/libvirt/driver.py:11230
Oct 09 16:24:37 compute-0 nova_compute[117331]: 2025-10-09 16:24:37.053 2 DEBUG nova.virt.libvirt.driver [None req-0f6f4f94-486e-4917-8619-3858302d5b76 005258eefa5345409efe13f6a1de22ab c076c42271004e7aa86e84416e33f826 - - default default] [instance: 7531b461-59ec-4261-8f6f-125c07fbf626] Migration operation thread notification thread_finished /usr/lib/python3.12/site-packages/nova/virt/libvirt/driver.py:11533
Oct 09 16:24:37 compute-0 podman[146842]: 2025-10-09 16:24:37.056934108 +0000 UTC m=+0.094762108 container cleanup 45cffa2ae58e30a7e8519d9d4471288a24e5df1427812c1750aff4cb76c6fa57 (image=38.102.83.66:5001/podified-master-centos10/openstack-neutron-metadata-agent-ovn:watcher_latest, name=neutron-haproxy-ovnmeta-c78b28f7-c251-4d74-863e-8d5520acbae0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_build_tag=watcher_latest, io.buildah.version=1.41.4, tcib_managed=true, org.label-schema.build-date=20251007, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.schema-version=1.0)
Oct 09 16:24:37 compute-0 systemd[1]: libpod-conmon-45cffa2ae58e30a7e8519d9d4471288a24e5df1427812c1750aff4cb76c6fa57.scope: Deactivated successfully.
Oct 09 16:24:37 compute-0 nova_compute[117331]: 2025-10-09 16:24:37.065 2 WARNING neutronclient.v2_0.client [None req-0f6f4f94-486e-4917-8619-3858302d5b76 005258eefa5345409efe13f6a1de22ab c076c42271004e7aa86e84416e33f826 - - default default] The python binding code in neutronclient is deprecated in favor of OpenstackSDK, please use that as this will be removed in a future release.
Oct 09 16:24:37 compute-0 nova_compute[117331]: 2025-10-09 16:24:37.065 2 WARNING neutronclient.v2_0.client [None req-0f6f4f94-486e-4917-8619-3858302d5b76 005258eefa5345409efe13f6a1de22ab c076c42271004e7aa86e84416e33f826 - - default default] The python binding code in neutronclient is deprecated in favor of OpenstackSDK, please use that as this will be removed in a future release.
Oct 09 16:24:37 compute-0 nova_compute[117331]: 2025-10-09 16:24:37.072 2 DEBUG nova.compute.manager [req-45365bb6-6c32-40e0-b1e1-6fea98465da3 req-72899c15-63c0-48fe-9695-22d1edf72df3 ee15961ad2be4bada6c106ac4f69f422 c076c42271004e7aa86e84416e33f826 - - default default] [instance: 7531b461-59ec-4261-8f6f-125c07fbf626] Received event network-vif-unplugged-15a9c820-cacb-46bc-acd6-e8ed3b08432a external_instance_event /usr/lib/python3.12/site-packages/nova/compute/manager.py:11812
Oct 09 16:24:37 compute-0 nova_compute[117331]: 2025-10-09 16:24:37.072 2 DEBUG oslo_concurrency.lockutils [req-45365bb6-6c32-40e0-b1e1-6fea98465da3 req-72899c15-63c0-48fe-9695-22d1edf72df3 ee15961ad2be4bada6c106ac4f69f422 c076c42271004e7aa86e84416e33f826 - - default default] Acquiring lock "7531b461-59ec-4261-8f6f-125c07fbf626-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:405
Oct 09 16:24:37 compute-0 nova_compute[117331]: 2025-10-09 16:24:37.073 2 DEBUG oslo_concurrency.lockutils [req-45365bb6-6c32-40e0-b1e1-6fea98465da3 req-72899c15-63c0-48fe-9695-22d1edf72df3 ee15961ad2be4bada6c106ac4f69f422 c076c42271004e7aa86e84416e33f826 - - default default] Lock "7531b461-59ec-4261-8f6f-125c07fbf626-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:410
Oct 09 16:24:37 compute-0 nova_compute[117331]: 2025-10-09 16:24:37.073 2 DEBUG oslo_concurrency.lockutils [req-45365bb6-6c32-40e0-b1e1-6fea98465da3 req-72899c15-63c0-48fe-9695-22d1edf72df3 ee15961ad2be4bada6c106ac4f69f422 c076c42271004e7aa86e84416e33f826 - - default default] Lock "7531b461-59ec-4261-8f6f-125c07fbf626-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:424
Oct 09 16:24:37 compute-0 nova_compute[117331]: 2025-10-09 16:24:37.073 2 DEBUG nova.compute.manager [req-45365bb6-6c32-40e0-b1e1-6fea98465da3 req-72899c15-63c0-48fe-9695-22d1edf72df3 ee15961ad2be4bada6c106ac4f69f422 c076c42271004e7aa86e84416e33f826 - - default default] [instance: 7531b461-59ec-4261-8f6f-125c07fbf626] No waiting events found dispatching network-vif-unplugged-15a9c820-cacb-46bc-acd6-e8ed3b08432a pop_instance_event /usr/lib/python3.12/site-packages/nova/compute/manager.py:344
Oct 09 16:24:37 compute-0 nova_compute[117331]: 2025-10-09 16:24:37.074 2 DEBUG nova.compute.manager [req-45365bb6-6c32-40e0-b1e1-6fea98465da3 req-72899c15-63c0-48fe-9695-22d1edf72df3 ee15961ad2be4bada6c106ac4f69f422 c076c42271004e7aa86e84416e33f826 - - default default] [instance: 7531b461-59ec-4261-8f6f-125c07fbf626] Received event network-vif-unplugged-15a9c820-cacb-46bc-acd6-e8ed3b08432a for instance with task_state migrating. _process_instance_event /usr/lib/python3.12/site-packages/nova/compute/manager.py:11590
Oct 09 16:24:37 compute-0 podman[146843]: 2025-10-09 16:24:37.078906678 +0000 UTC m=+0.110271512 container remove 45cffa2ae58e30a7e8519d9d4471288a24e5df1427812c1750aff4cb76c6fa57 (image=38.102.83.66:5001/podified-master-centos10/openstack-neutron-metadata-agent-ovn:watcher_latest, name=neutron-haproxy-ovnmeta-c78b28f7-c251-4d74-863e-8d5520acbae0, org.label-schema.build-date=20251007, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 10 Base Image, tcib_build_tag=watcher_latest, io.buildah.version=1.41.4, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true)
Oct 09 16:24:37 compute-0 ovn_metadata_agent[28608]: 2025-10-09 16:24:37.084 139687 DEBUG oslo.privsep.daemon [-] privsep: reply[abaa1056-e35a-4ffd-9b4e-eaedc31b0501]: (4, ("Thu Oct  9 04:24:36 PM UTC 2025 Sending signal '15' to neutron-haproxy-ovnmeta-c78b28f7-c251-4d74-863e-8d5520acbae0 (45cffa2ae58e30a7e8519d9d4471288a24e5df1427812c1750aff4cb76c6fa57)\n45cffa2ae58e30a7e8519d9d4471288a24e5df1427812c1750aff4cb76c6fa57\nThu Oct  9 04:24:36 PM UTC 2025 Deleting container neutron-haproxy-ovnmeta-c78b28f7-c251-4d74-863e-8d5520acbae0 (45cffa2ae58e30a7e8519d9d4471288a24e5df1427812c1750aff4cb76c6fa57)\n45cffa2ae58e30a7e8519d9d4471288a24e5df1427812c1750aff4cb76c6fa57\n", '', 0)) _call_back /usr/lib/python3.12/site-packages/oslo_privsep/daemon.py:515
Oct 09 16:24:37 compute-0 ovn_metadata_agent[28608]: 2025-10-09 16:24:37.085 139687 DEBUG oslo.privsep.daemon [-] privsep: reply[4556acba-fea3-4be0-ba47-3b9e509a1480]: (4, None) _call_back /usr/lib/python3.12/site-packages/oslo_privsep/daemon.py:515
Oct 09 16:24:37 compute-0 ovn_metadata_agent[28608]: 2025-10-09 16:24:37.085 28613 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/c78b28f7-c251-4d74-863e-8d5520acbae0.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/c78b28f7-c251-4d74-863e-8d5520acbae0.pid.haproxy' get_value_from_file /usr/lib/python3.12/site-packages/neutron/agent/linux/utils.py:268
Oct 09 16:24:37 compute-0 ovn_metadata_agent[28608]: 2025-10-09 16:24:37.085 139687 DEBUG oslo.privsep.daemon [-] privsep: reply[01e128a9-b380-4752-8eed-0fb43aa31706]: (4, None) _call_back /usr/lib/python3.12/site-packages/oslo_privsep/daemon.py:515
Oct 09 16:24:37 compute-0 ovn_metadata_agent[28608]: 2025-10-09 16:24:37.086 28613 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapc78b28f7-c0, bridge=None, if_exists=True) do_commit /usr/lib/python3.12/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 09 16:24:37 compute-0 kernel: tapc78b28f7-c0: left promiscuous mode
Oct 09 16:24:37 compute-0 nova_compute[117331]: 2025-10-09 16:24:37.090 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 09 16:24:37 compute-0 nova_compute[117331]: 2025-10-09 16:24:37.103 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 09 16:24:37 compute-0 ovn_metadata_agent[28608]: 2025-10-09 16:24:37.105 139687 DEBUG oslo.privsep.daemon [-] privsep: reply[46bb8872-3f9d-494a-8adc-5c2ea2b2a4d6]: (4, True) _call_back /usr/lib/python3.12/site-packages/oslo_privsep/daemon.py:515
Oct 09 16:24:37 compute-0 ovn_metadata_agent[28608]: 2025-10-09 16:24:37.133 139687 DEBUG oslo.privsep.daemon [-] privsep: reply[6e84a514-2999-4e82-ab98-dd1833aacce8]: (4, None) _call_back /usr/lib/python3.12/site-packages/oslo_privsep/daemon.py:515
Oct 09 16:24:37 compute-0 ovn_metadata_agent[28608]: 2025-10-09 16:24:37.134 139687 DEBUG oslo.privsep.daemon [-] privsep: reply[2fa0242d-933d-46a5-8d1d-1a7eca420e67]: (4, True) _call_back /usr/lib/python3.12/site-packages/oslo_privsep/daemon.py:515
Oct 09 16:24:37 compute-0 ovn_metadata_agent[28608]: 2025-10-09 16:24:37.149 139687 DEBUG oslo.privsep.daemon [-] privsep: reply[ab3e8570-c1df-4eba-b782-061a64b21141]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 196547, 'reachable_time': 41779, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 146895, 'error': None, 'target': 'ovnmeta-c78b28f7-c251-4d74-863e-8d5520acbae0', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.12/site-packages/oslo_privsep/daemon.py:515
Oct 09 16:24:37 compute-0 ovn_metadata_agent[28608]: 2025-10-09 16:24:37.151 28727 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-c78b28f7-c251-4d74-863e-8d5520acbae0 deleted. remove_netns /usr/lib/python3.12/site-packages/neutron/privileged/agent/linux/ip_lib.py:603
Oct 09 16:24:37 compute-0 ovn_metadata_agent[28608]: 2025-10-09 16:24:37.151 28727 DEBUG oslo.privsep.daemon [-] privsep: reply[90926f74-ff52-4c28-9471-65ef85c6c63c]: (4, None) _call_back /usr/lib/python3.12/site-packages/oslo_privsep/daemon.py:515
Oct 09 16:24:37 compute-0 systemd[1]: run-netns-ovnmeta\x2dc78b28f7\x2dc251\x2d4d74\x2d863e\x2d8d5520acbae0.mount: Deactivated successfully.
Oct 09 16:24:37 compute-0 nova_compute[117331]: 2025-10-09 16:24:37.365 2 DEBUG nova.compute.manager [req-6e5832a6-ff27-47f2-a6d1-2b3d8f38efc0 req-f8eba7fb-1fe9-41b7-8115-8d2f018c132b ee15961ad2be4bada6c106ac4f69f422 c076c42271004e7aa86e84416e33f826 - - default default] [instance: 7531b461-59ec-4261-8f6f-125c07fbf626] Received event network-vif-unplugged-15a9c820-cacb-46bc-acd6-e8ed3b08432a external_instance_event /usr/lib/python3.12/site-packages/nova/compute/manager.py:11812
Oct 09 16:24:37 compute-0 nova_compute[117331]: 2025-10-09 16:24:37.367 2 DEBUG oslo_concurrency.lockutils [req-6e5832a6-ff27-47f2-a6d1-2b3d8f38efc0 req-f8eba7fb-1fe9-41b7-8115-8d2f018c132b ee15961ad2be4bada6c106ac4f69f422 c076c42271004e7aa86e84416e33f826 - - default default] Acquiring lock "7531b461-59ec-4261-8f6f-125c07fbf626-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:405
Oct 09 16:24:37 compute-0 nova_compute[117331]: 2025-10-09 16:24:37.368 2 DEBUG oslo_concurrency.lockutils [req-6e5832a6-ff27-47f2-a6d1-2b3d8f38efc0 req-f8eba7fb-1fe9-41b7-8115-8d2f018c132b ee15961ad2be4bada6c106ac4f69f422 c076c42271004e7aa86e84416e33f826 - - default default] Lock "7531b461-59ec-4261-8f6f-125c07fbf626-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.002s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:410
Oct 09 16:24:37 compute-0 nova_compute[117331]: 2025-10-09 16:24:37.368 2 DEBUG oslo_concurrency.lockutils [req-6e5832a6-ff27-47f2-a6d1-2b3d8f38efc0 req-f8eba7fb-1fe9-41b7-8115-8d2f018c132b ee15961ad2be4bada6c106ac4f69f422 c076c42271004e7aa86e84416e33f826 - - default default] Lock "7531b461-59ec-4261-8f6f-125c07fbf626-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:424
Oct 09 16:24:37 compute-0 nova_compute[117331]: 2025-10-09 16:24:37.369 2 DEBUG nova.compute.manager [req-6e5832a6-ff27-47f2-a6d1-2b3d8f38efc0 req-f8eba7fb-1fe9-41b7-8115-8d2f018c132b ee15961ad2be4bada6c106ac4f69f422 c076c42271004e7aa86e84416e33f826 - - default default] [instance: 7531b461-59ec-4261-8f6f-125c07fbf626] No waiting events found dispatching network-vif-unplugged-15a9c820-cacb-46bc-acd6-e8ed3b08432a pop_instance_event /usr/lib/python3.12/site-packages/nova/compute/manager.py:344
Oct 09 16:24:37 compute-0 nova_compute[117331]: 2025-10-09 16:24:37.369 2 DEBUG nova.compute.manager [req-6e5832a6-ff27-47f2-a6d1-2b3d8f38efc0 req-f8eba7fb-1fe9-41b7-8115-8d2f018c132b ee15961ad2be4bada6c106ac4f69f422 c076c42271004e7aa86e84416e33f826 - - default default] [instance: 7531b461-59ec-4261-8f6f-125c07fbf626] Received event network-vif-unplugged-15a9c820-cacb-46bc-acd6-e8ed3b08432a for instance with task_state migrating. _process_instance_event /usr/lib/python3.12/site-packages/nova/compute/manager.py:11590
Oct 09 16:24:37 compute-0 nova_compute[117331]: 2025-10-09 16:24:37.446 2 DEBUG nova.network.neutron [None req-0f6f4f94-486e-4917-8619-3858302d5b76 005258eefa5345409efe13f6a1de22ab c076c42271004e7aa86e84416e33f826 - - default default] Activated binding for port 15a9c820-cacb-46bc-acd6-e8ed3b08432a and host compute-1.ctlplane.example.com migrate_instance_start /usr/lib/python3.12/site-packages/nova/network/neutron.py:3241
Oct 09 16:24:37 compute-0 nova_compute[117331]: 2025-10-09 16:24:37.447 2 DEBUG nova.compute.manager [None req-0f6f4f94-486e-4917-8619-3858302d5b76 005258eefa5345409efe13f6a1de22ab c076c42271004e7aa86e84416e33f826 - - default default] [instance: 7531b461-59ec-4261-8f6f-125c07fbf626] Calling driver.post_live_migration_at_source with original source VIFs from migrate_data: [{"id": "15a9c820-cacb-46bc-acd6-e8ed3b08432a", "address": "fa:16:3e:d9:5e:f1", "network": {"id": "c78b28f7-c251-4d74-863e-8d5520acbae0", "bridge": "br-int", "label": "tempest-TestExecuteHostMaintenanceStrategy-300799103-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "95a019f93e534ec0ad7dca9e44d00556", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap15a9c820-ca", "ovs_interfaceid": "15a9c820-cacb-46bc-acd6-e8ed3b08432a", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] _post_live_migration /usr/lib/python3.12/site-packages/nova/compute/manager.py:10059
Oct 09 16:24:37 compute-0 nova_compute[117331]: 2025-10-09 16:24:37.449 2 DEBUG nova.virt.libvirt.vif [None req-0f6f4f94-486e-4917-8619-3858302d5b76 005258eefa5345409efe13f6a1de22ab c076c42271004e7aa86e84416e33f826 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,compute_id=1,config_drive='True',created_at=2025-10-09T16:23:29Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description=None,display_name='tempest-TestExecuteHostMaintenanceStrategy-server-721204525',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testexecutehostmaintenancestrategy-server-721204525',id=14,image_ref='b7d6e0af-25e4-4227-9dc6-43143898ceee',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-10-09T16:23:43Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=1,progress=0,project_id='c47758289b4c448c95f927a91d89e09f',ramdisk_id='',reservation_id='r-bg9c54xp',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member,manager,admin',image_base_image_ref='b7d6e0af-25e4-4227-9dc6-43143898ceee',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model=
'virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestExecuteHostMaintenanceStrategy-51576386',owner_user_name='tempest-TestExecuteHostMaintenanceStrategy-51576386-project-admin'},tags=<?>,task_state='migrating',terminated_at=None,trusted_certs=<?>,updated_at=2025-10-09T16:24:15Z,user_data=None,user_id='2ed3ac10329446d8a5aae566951cae1e',uuid=7531b461-59ec-4261-8f6f-125c07fbf626,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "15a9c820-cacb-46bc-acd6-e8ed3b08432a", "address": "fa:16:3e:d9:5e:f1", "network": {"id": "c78b28f7-c251-4d74-863e-8d5520acbae0", "bridge": "br-int", "label": "tempest-TestExecuteHostMaintenanceStrategy-300799103-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "95a019f93e534ec0ad7dca9e44d00556", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap15a9c820-ca", "ovs_interfaceid": "15a9c820-cacb-46bc-acd6-e8ed3b08432a", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.12/site-packages/nova/virt/libvirt/vif.py:839
Oct 09 16:24:37 compute-0 nova_compute[117331]: 2025-10-09 16:24:37.449 2 DEBUG nova.network.os_vif_util [None req-0f6f4f94-486e-4917-8619-3858302d5b76 005258eefa5345409efe13f6a1de22ab c076c42271004e7aa86e84416e33f826 - - default default] Converting VIF {"id": "15a9c820-cacb-46bc-acd6-e8ed3b08432a", "address": "fa:16:3e:d9:5e:f1", "network": {"id": "c78b28f7-c251-4d74-863e-8d5520acbae0", "bridge": "br-int", "label": "tempest-TestExecuteHostMaintenanceStrategy-300799103-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "95a019f93e534ec0ad7dca9e44d00556", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap15a9c820-ca", "ovs_interfaceid": "15a9c820-cacb-46bc-acd6-e8ed3b08432a", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.12/site-packages/nova/network/os_vif_util.py:511
Oct 09 16:24:37 compute-0 nova_compute[117331]: 2025-10-09 16:24:37.450 2 DEBUG nova.network.os_vif_util [None req-0f6f4f94-486e-4917-8619-3858302d5b76 005258eefa5345409efe13f6a1de22ab c076c42271004e7aa86e84416e33f826 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:d9:5e:f1,bridge_name='br-int',has_traffic_filtering=True,id=15a9c820-cacb-46bc-acd6-e8ed3b08432a,network=Network(c78b28f7-c251-4d74-863e-8d5520acbae0),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap15a9c820-ca') nova_to_osvif_vif /usr/lib/python3.12/site-packages/nova/network/os_vif_util.py:548
Oct 09 16:24:37 compute-0 nova_compute[117331]: 2025-10-09 16:24:37.451 2 DEBUG os_vif [None req-0f6f4f94-486e-4917-8619-3858302d5b76 005258eefa5345409efe13f6a1de22ab c076c42271004e7aa86e84416e33f826 - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:d9:5e:f1,bridge_name='br-int',has_traffic_filtering=True,id=15a9c820-cacb-46bc-acd6-e8ed3b08432a,network=Network(c78b28f7-c251-4d74-863e-8d5520acbae0),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap15a9c820-ca') unplug /usr/lib/python3.12/site-packages/os_vif/__init__.py:109
Oct 09 16:24:37 compute-0 nova_compute[117331]: 2025-10-09 16:24:37.453 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 22 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 09 16:24:37 compute-0 nova_compute[117331]: 2025-10-09 16:24:37.454 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap15a9c820-ca, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.12/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 09 16:24:37 compute-0 nova_compute[117331]: 2025-10-09 16:24:37.459 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:248
Oct 09 16:24:37 compute-0 nova_compute[117331]: 2025-10-09 16:24:37.459 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 22 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 09 16:24:37 compute-0 nova_compute[117331]: 2025-10-09 16:24:37.460 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbDestroyCommand(_result=None, table=QoS, record=c7455d1f-1953-4afc-be1b-c0171b4d74cf) do_commit /usr/lib/python3.12/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 09 16:24:37 compute-0 nova_compute[117331]: 2025-10-09 16:24:37.460 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 09 16:24:37 compute-0 nova_compute[117331]: 2025-10-09 16:24:37.462 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 09 16:24:37 compute-0 nova_compute[117331]: 2025-10-09 16:24:37.464 2 INFO os_vif [None req-0f6f4f94-486e-4917-8619-3858302d5b76 005258eefa5345409efe13f6a1de22ab c076c42271004e7aa86e84416e33f826 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:d9:5e:f1,bridge_name='br-int',has_traffic_filtering=True,id=15a9c820-cacb-46bc-acd6-e8ed3b08432a,network=Network(c78b28f7-c251-4d74-863e-8d5520acbae0),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap15a9c820-ca')
Oct 09 16:24:37 compute-0 nova_compute[117331]: 2025-10-09 16:24:37.465 2 DEBUG oslo_concurrency.lockutils [None req-0f6f4f94-486e-4917-8619-3858302d5b76 005258eefa5345409efe13f6a1de22ab c076c42271004e7aa86e84416e33f826 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.free_pci_device_allocations_for_instance" inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:405
Oct 09 16:24:37 compute-0 nova_compute[117331]: 2025-10-09 16:24:37.465 2 DEBUG oslo_concurrency.lockutils [None req-0f6f4f94-486e-4917-8619-3858302d5b76 005258eefa5345409efe13f6a1de22ab c076c42271004e7aa86e84416e33f826 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.free_pci_device_allocations_for_instance" :: waited 0.000s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:410
Oct 09 16:24:37 compute-0 nova_compute[117331]: 2025-10-09 16:24:37.466 2 DEBUG oslo_concurrency.lockutils [None req-0f6f4f94-486e-4917-8619-3858302d5b76 005258eefa5345409efe13f6a1de22ab c076c42271004e7aa86e84416e33f826 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.free_pci_device_allocations_for_instance" :: held 0.000s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:424
Oct 09 16:24:37 compute-0 nova_compute[117331]: 2025-10-09 16:24:37.466 2 DEBUG nova.compute.manager [None req-0f6f4f94-486e-4917-8619-3858302d5b76 005258eefa5345409efe13f6a1de22ab c076c42271004e7aa86e84416e33f826 - - default default] [instance: 7531b461-59ec-4261-8f6f-125c07fbf626] Calling driver.cleanup from _post_live_migration _post_live_migration /usr/lib/python3.12/site-packages/nova/compute/manager.py:10082
Oct 09 16:24:37 compute-0 nova_compute[117331]: 2025-10-09 16:24:37.467 2 INFO nova.virt.libvirt.driver [None req-0f6f4f94-486e-4917-8619-3858302d5b76 005258eefa5345409efe13f6a1de22ab c076c42271004e7aa86e84416e33f826 - - default default] [instance: 7531b461-59ec-4261-8f6f-125c07fbf626] Deleting instance files /var/lib/nova/instances/7531b461-59ec-4261-8f6f-125c07fbf626_del
Oct 09 16:24:37 compute-0 nova_compute[117331]: 2025-10-09 16:24:37.468 2 INFO nova.virt.libvirt.driver [None req-0f6f4f94-486e-4917-8619-3858302d5b76 005258eefa5345409efe13f6a1de22ab c076c42271004e7aa86e84416e33f826 - - default default] [instance: 7531b461-59ec-4261-8f6f-125c07fbf626] Deletion of /var/lib/nova/instances/7531b461-59ec-4261-8f6f-125c07fbf626_del complete
Oct 09 16:24:37 compute-0 ovn_metadata_agent[28608]: 2025-10-09 16:24:37.478 28613 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=9954897f-aa83-45dd-8e84-289816676c2a, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '16'}),), if_exists=True) do_commit /usr/lib/python3.12/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 09 16:24:39 compute-0 nova_compute[117331]: 2025-10-09 16:24:39.118 2 DEBUG nova.compute.manager [req-0fd16497-5e36-4d95-8a9a-347a1145697b req-c6a56d5a-7084-4f74-a0fa-bdef61bca073 ee15961ad2be4bada6c106ac4f69f422 c076c42271004e7aa86e84416e33f826 - - default default] [instance: 7531b461-59ec-4261-8f6f-125c07fbf626] Received event network-vif-plugged-15a9c820-cacb-46bc-acd6-e8ed3b08432a external_instance_event /usr/lib/python3.12/site-packages/nova/compute/manager.py:11812
Oct 09 16:24:39 compute-0 nova_compute[117331]: 2025-10-09 16:24:39.118 2 DEBUG oslo_concurrency.lockutils [req-0fd16497-5e36-4d95-8a9a-347a1145697b req-c6a56d5a-7084-4f74-a0fa-bdef61bca073 ee15961ad2be4bada6c106ac4f69f422 c076c42271004e7aa86e84416e33f826 - - default default] Acquiring lock "7531b461-59ec-4261-8f6f-125c07fbf626-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:405
Oct 09 16:24:39 compute-0 nova_compute[117331]: 2025-10-09 16:24:39.118 2 DEBUG oslo_concurrency.lockutils [req-0fd16497-5e36-4d95-8a9a-347a1145697b req-c6a56d5a-7084-4f74-a0fa-bdef61bca073 ee15961ad2be4bada6c106ac4f69f422 c076c42271004e7aa86e84416e33f826 - - default default] Lock "7531b461-59ec-4261-8f6f-125c07fbf626-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:410
Oct 09 16:24:39 compute-0 nova_compute[117331]: 2025-10-09 16:24:39.118 2 DEBUG oslo_concurrency.lockutils [req-0fd16497-5e36-4d95-8a9a-347a1145697b req-c6a56d5a-7084-4f74-a0fa-bdef61bca073 ee15961ad2be4bada6c106ac4f69f422 c076c42271004e7aa86e84416e33f826 - - default default] Lock "7531b461-59ec-4261-8f6f-125c07fbf626-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:424
Oct 09 16:24:39 compute-0 nova_compute[117331]: 2025-10-09 16:24:39.119 2 DEBUG nova.compute.manager [req-0fd16497-5e36-4d95-8a9a-347a1145697b req-c6a56d5a-7084-4f74-a0fa-bdef61bca073 ee15961ad2be4bada6c106ac4f69f422 c076c42271004e7aa86e84416e33f826 - - default default] [instance: 7531b461-59ec-4261-8f6f-125c07fbf626] No waiting events found dispatching network-vif-plugged-15a9c820-cacb-46bc-acd6-e8ed3b08432a pop_instance_event /usr/lib/python3.12/site-packages/nova/compute/manager.py:344
Oct 09 16:24:39 compute-0 nova_compute[117331]: 2025-10-09 16:24:39.119 2 WARNING nova.compute.manager [req-0fd16497-5e36-4d95-8a9a-347a1145697b req-c6a56d5a-7084-4f74-a0fa-bdef61bca073 ee15961ad2be4bada6c106ac4f69f422 c076c42271004e7aa86e84416e33f826 - - default default] [instance: 7531b461-59ec-4261-8f6f-125c07fbf626] Received unexpected event network-vif-plugged-15a9c820-cacb-46bc-acd6-e8ed3b08432a for instance with vm_state active and task_state migrating.
Oct 09 16:24:39 compute-0 nova_compute[117331]: 2025-10-09 16:24:39.119 2 DEBUG nova.compute.manager [req-0fd16497-5e36-4d95-8a9a-347a1145697b req-c6a56d5a-7084-4f74-a0fa-bdef61bca073 ee15961ad2be4bada6c106ac4f69f422 c076c42271004e7aa86e84416e33f826 - - default default] [instance: 7531b461-59ec-4261-8f6f-125c07fbf626] Received event network-vif-unplugged-15a9c820-cacb-46bc-acd6-e8ed3b08432a external_instance_event /usr/lib/python3.12/site-packages/nova/compute/manager.py:11812
Oct 09 16:24:39 compute-0 nova_compute[117331]: 2025-10-09 16:24:39.119 2 DEBUG oslo_concurrency.lockutils [req-0fd16497-5e36-4d95-8a9a-347a1145697b req-c6a56d5a-7084-4f74-a0fa-bdef61bca073 ee15961ad2be4bada6c106ac4f69f422 c076c42271004e7aa86e84416e33f826 - - default default] Acquiring lock "7531b461-59ec-4261-8f6f-125c07fbf626-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:405
Oct 09 16:24:39 compute-0 nova_compute[117331]: 2025-10-09 16:24:39.119 2 DEBUG oslo_concurrency.lockutils [req-0fd16497-5e36-4d95-8a9a-347a1145697b req-c6a56d5a-7084-4f74-a0fa-bdef61bca073 ee15961ad2be4bada6c106ac4f69f422 c076c42271004e7aa86e84416e33f826 - - default default] Lock "7531b461-59ec-4261-8f6f-125c07fbf626-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:410
Oct 09 16:24:39 compute-0 nova_compute[117331]: 2025-10-09 16:24:39.119 2 DEBUG oslo_concurrency.lockutils [req-0fd16497-5e36-4d95-8a9a-347a1145697b req-c6a56d5a-7084-4f74-a0fa-bdef61bca073 ee15961ad2be4bada6c106ac4f69f422 c076c42271004e7aa86e84416e33f826 - - default default] Lock "7531b461-59ec-4261-8f6f-125c07fbf626-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:424
Oct 09 16:24:39 compute-0 nova_compute[117331]: 2025-10-09 16:24:39.120 2 DEBUG nova.compute.manager [req-0fd16497-5e36-4d95-8a9a-347a1145697b req-c6a56d5a-7084-4f74-a0fa-bdef61bca073 ee15961ad2be4bada6c106ac4f69f422 c076c42271004e7aa86e84416e33f826 - - default default] [instance: 7531b461-59ec-4261-8f6f-125c07fbf626] No waiting events found dispatching network-vif-unplugged-15a9c820-cacb-46bc-acd6-e8ed3b08432a pop_instance_event /usr/lib/python3.12/site-packages/nova/compute/manager.py:344
Oct 09 16:24:39 compute-0 nova_compute[117331]: 2025-10-09 16:24:39.120 2 DEBUG nova.compute.manager [req-0fd16497-5e36-4d95-8a9a-347a1145697b req-c6a56d5a-7084-4f74-a0fa-bdef61bca073 ee15961ad2be4bada6c106ac4f69f422 c076c42271004e7aa86e84416e33f826 - - default default] [instance: 7531b461-59ec-4261-8f6f-125c07fbf626] Received event network-vif-unplugged-15a9c820-cacb-46bc-acd6-e8ed3b08432a for instance with task_state migrating. _process_instance_event /usr/lib/python3.12/site-packages/nova/compute/manager.py:11590
Oct 09 16:24:39 compute-0 nova_compute[117331]: 2025-10-09 16:24:39.120 2 DEBUG nova.compute.manager [req-0fd16497-5e36-4d95-8a9a-347a1145697b req-c6a56d5a-7084-4f74-a0fa-bdef61bca073 ee15961ad2be4bada6c106ac4f69f422 c076c42271004e7aa86e84416e33f826 - - default default] [instance: 7531b461-59ec-4261-8f6f-125c07fbf626] Received event network-vif-plugged-15a9c820-cacb-46bc-acd6-e8ed3b08432a external_instance_event /usr/lib/python3.12/site-packages/nova/compute/manager.py:11812
Oct 09 16:24:39 compute-0 nova_compute[117331]: 2025-10-09 16:24:39.120 2 DEBUG oslo_concurrency.lockutils [req-0fd16497-5e36-4d95-8a9a-347a1145697b req-c6a56d5a-7084-4f74-a0fa-bdef61bca073 ee15961ad2be4bada6c106ac4f69f422 c076c42271004e7aa86e84416e33f826 - - default default] Acquiring lock "7531b461-59ec-4261-8f6f-125c07fbf626-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:405
Oct 09 16:24:39 compute-0 nova_compute[117331]: 2025-10-09 16:24:39.120 2 DEBUG oslo_concurrency.lockutils [req-0fd16497-5e36-4d95-8a9a-347a1145697b req-c6a56d5a-7084-4f74-a0fa-bdef61bca073 ee15961ad2be4bada6c106ac4f69f422 c076c42271004e7aa86e84416e33f826 - - default default] Lock "7531b461-59ec-4261-8f6f-125c07fbf626-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:410
Oct 09 16:24:39 compute-0 nova_compute[117331]: 2025-10-09 16:24:39.120 2 DEBUG oslo_concurrency.lockutils [req-0fd16497-5e36-4d95-8a9a-347a1145697b req-c6a56d5a-7084-4f74-a0fa-bdef61bca073 ee15961ad2be4bada6c106ac4f69f422 c076c42271004e7aa86e84416e33f826 - - default default] Lock "7531b461-59ec-4261-8f6f-125c07fbf626-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:424
Oct 09 16:24:39 compute-0 nova_compute[117331]: 2025-10-09 16:24:39.120 2 DEBUG nova.compute.manager [req-0fd16497-5e36-4d95-8a9a-347a1145697b req-c6a56d5a-7084-4f74-a0fa-bdef61bca073 ee15961ad2be4bada6c106ac4f69f422 c076c42271004e7aa86e84416e33f826 - - default default] [instance: 7531b461-59ec-4261-8f6f-125c07fbf626] No waiting events found dispatching network-vif-plugged-15a9c820-cacb-46bc-acd6-e8ed3b08432a pop_instance_event /usr/lib/python3.12/site-packages/nova/compute/manager.py:344
Oct 09 16:24:39 compute-0 nova_compute[117331]: 2025-10-09 16:24:39.121 2 WARNING nova.compute.manager [req-0fd16497-5e36-4d95-8a9a-347a1145697b req-c6a56d5a-7084-4f74-a0fa-bdef61bca073 ee15961ad2be4bada6c106ac4f69f422 c076c42271004e7aa86e84416e33f826 - - default default] [instance: 7531b461-59ec-4261-8f6f-125c07fbf626] Received unexpected event network-vif-plugged-15a9c820-cacb-46bc-acd6-e8ed3b08432a for instance with vm_state active and task_state migrating.
Oct 09 16:24:39 compute-0 nova_compute[117331]: 2025-10-09 16:24:39.121 2 DEBUG nova.compute.manager [req-0fd16497-5e36-4d95-8a9a-347a1145697b req-c6a56d5a-7084-4f74-a0fa-bdef61bca073 ee15961ad2be4bada6c106ac4f69f422 c076c42271004e7aa86e84416e33f826 - - default default] [instance: 7531b461-59ec-4261-8f6f-125c07fbf626] Received event network-vif-plugged-15a9c820-cacb-46bc-acd6-e8ed3b08432a external_instance_event /usr/lib/python3.12/site-packages/nova/compute/manager.py:11812
Oct 09 16:24:39 compute-0 nova_compute[117331]: 2025-10-09 16:24:39.121 2 DEBUG oslo_concurrency.lockutils [req-0fd16497-5e36-4d95-8a9a-347a1145697b req-c6a56d5a-7084-4f74-a0fa-bdef61bca073 ee15961ad2be4bada6c106ac4f69f422 c076c42271004e7aa86e84416e33f826 - - default default] Acquiring lock "7531b461-59ec-4261-8f6f-125c07fbf626-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:405
Oct 09 16:24:39 compute-0 nova_compute[117331]: 2025-10-09 16:24:39.121 2 DEBUG oslo_concurrency.lockutils [req-0fd16497-5e36-4d95-8a9a-347a1145697b req-c6a56d5a-7084-4f74-a0fa-bdef61bca073 ee15961ad2be4bada6c106ac4f69f422 c076c42271004e7aa86e84416e33f826 - - default default] Lock "7531b461-59ec-4261-8f6f-125c07fbf626-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:410
Oct 09 16:24:39 compute-0 nova_compute[117331]: 2025-10-09 16:24:39.121 2 DEBUG oslo_concurrency.lockutils [req-0fd16497-5e36-4d95-8a9a-347a1145697b req-c6a56d5a-7084-4f74-a0fa-bdef61bca073 ee15961ad2be4bada6c106ac4f69f422 c076c42271004e7aa86e84416e33f826 - - default default] Lock "7531b461-59ec-4261-8f6f-125c07fbf626-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:424
Oct 09 16:24:39 compute-0 nova_compute[117331]: 2025-10-09 16:24:39.121 2 DEBUG nova.compute.manager [req-0fd16497-5e36-4d95-8a9a-347a1145697b req-c6a56d5a-7084-4f74-a0fa-bdef61bca073 ee15961ad2be4bada6c106ac4f69f422 c076c42271004e7aa86e84416e33f826 - - default default] [instance: 7531b461-59ec-4261-8f6f-125c07fbf626] No waiting events found dispatching network-vif-plugged-15a9c820-cacb-46bc-acd6-e8ed3b08432a pop_instance_event /usr/lib/python3.12/site-packages/nova/compute/manager.py:344
Oct 09 16:24:39 compute-0 nova_compute[117331]: 2025-10-09 16:24:39.122 2 WARNING nova.compute.manager [req-0fd16497-5e36-4d95-8a9a-347a1145697b req-c6a56d5a-7084-4f74-a0fa-bdef61bca073 ee15961ad2be4bada6c106ac4f69f422 c076c42271004e7aa86e84416e33f826 - - default default] [instance: 7531b461-59ec-4261-8f6f-125c07fbf626] Received unexpected event network-vif-plugged-15a9c820-cacb-46bc-acd6-e8ed3b08432a for instance with vm_state active and task_state migrating.
Oct 09 16:24:39 compute-0 podman[146896]: 2025-10-09 16:24:39.848977497 +0000 UTC m=+0.068570583 container health_status 86aa93e887899e54d383b0fc17fb3c5637680d1d8e146a130a557c837f0260fe (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm)
Oct 09 16:24:41 compute-0 nova_compute[117331]: 2025-10-09 16:24:41.264 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 09 16:24:42 compute-0 nova_compute[117331]: 2025-10-09 16:24:42.462 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 09 16:24:45 compute-0 podman[146923]: 2025-10-09 16:24:45.845256187 +0000 UTC m=+0.069452066 container health_status 8227728d23f374cb9528be6ddf01dff38d8c792912a17b0d137932a18390b820 (image=38.102.83.66:5001/podified-master-centos10/openstack-iscsid:watcher_latest, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, org.label-schema.build-date=20251007, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.schema-version=1.0, config_id=iscsid, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=watcher_latest, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': '38.102.83.66:5001/podified-master-centos10/openstack-iscsid:watcher_latest', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, container_name=iscsid, io.buildah.version=1.41.4, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Oct 09 16:24:45 compute-0 podman[146922]: 2025-10-09 16:24:45.861526165 +0000 UTC m=+0.088805692 container health_status 0e966c7a283689daf23c5db5017f23f685ee836319240188a9f70ddd246563c5 (image=38.102.83.66:5001/podified-master-centos10/openstack-neutron-metadata-agent-ovn:watcher_latest, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, tcib_build_tag=watcher_latest, io.buildah.version=1.41.4, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 10 Base Image, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, org.label-schema.build-date=20251007, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': '38.102.83.66:5001/podified-master-centos10/openstack-neutron-metadata-agent-ovn:watcher_latest', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team)
Oct 09 16:24:46 compute-0 nova_compute[117331]: 2025-10-09 16:24:46.001 2 DEBUG oslo_concurrency.lockutils [None req-0f6f4f94-486e-4917-8619-3858302d5b76 005258eefa5345409efe13f6a1de22ab c076c42271004e7aa86e84416e33f826 - - default default] Acquiring lock "7531b461-59ec-4261-8f6f-125c07fbf626-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:405
Oct 09 16:24:46 compute-0 nova_compute[117331]: 2025-10-09 16:24:46.002 2 DEBUG oslo_concurrency.lockutils [None req-0f6f4f94-486e-4917-8619-3858302d5b76 005258eefa5345409efe13f6a1de22ab c076c42271004e7aa86e84416e33f826 - - default default] Lock "7531b461-59ec-4261-8f6f-125c07fbf626-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.001s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:410
Oct 09 16:24:46 compute-0 nova_compute[117331]: 2025-10-09 16:24:46.002 2 DEBUG oslo_concurrency.lockutils [None req-0f6f4f94-486e-4917-8619-3858302d5b76 005258eefa5345409efe13f6a1de22ab c076c42271004e7aa86e84416e33f826 - - default default] Lock "7531b461-59ec-4261-8f6f-125c07fbf626-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:424
Oct 09 16:24:46 compute-0 nova_compute[117331]: 2025-10-09 16:24:46.266 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 09 16:24:46 compute-0 nova_compute[117331]: 2025-10-09 16:24:46.518 2 DEBUG oslo_concurrency.lockutils [None req-0f6f4f94-486e-4917-8619-3858302d5b76 005258eefa5345409efe13f6a1de22ab c076c42271004e7aa86e84416e33f826 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:405
Oct 09 16:24:46 compute-0 nova_compute[117331]: 2025-10-09 16:24:46.518 2 DEBUG oslo_concurrency.lockutils [None req-0f6f4f94-486e-4917-8619-3858302d5b76 005258eefa5345409efe13f6a1de22ab c076c42271004e7aa86e84416e33f826 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:410
Oct 09 16:24:46 compute-0 nova_compute[117331]: 2025-10-09 16:24:46.518 2 DEBUG oslo_concurrency.lockutils [None req-0f6f4f94-486e-4917-8619-3858302d5b76 005258eefa5345409efe13f6a1de22ab c076c42271004e7aa86e84416e33f826 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:424
Oct 09 16:24:46 compute-0 nova_compute[117331]: 2025-10-09 16:24:46.518 2 DEBUG nova.compute.resource_tracker [None req-0f6f4f94-486e-4917-8619-3858302d5b76 005258eefa5345409efe13f6a1de22ab c076c42271004e7aa86e84416e33f826 - - default default] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.12/site-packages/nova/compute/resource_tracker.py:937
Oct 09 16:24:46 compute-0 nova_compute[117331]: 2025-10-09 16:24:46.692 2 WARNING nova.virt.libvirt.driver [None req-0f6f4f94-486e-4917-8619-3858302d5b76 005258eefa5345409efe13f6a1de22ab c076c42271004e7aa86e84416e33f826 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Oct 09 16:24:46 compute-0 nova_compute[117331]: 2025-10-09 16:24:46.693 2 DEBUG oslo_concurrency.processutils [None req-0f6f4f94-486e-4917-8619-3858302d5b76 005258eefa5345409efe13f6a1de22ab c076c42271004e7aa86e84416e33f826 - - default default] Running cmd (subprocess): env LANG=C uptime execute /usr/lib/python3.12/site-packages/oslo_concurrency/processutils.py:349
Oct 09 16:24:46 compute-0 nova_compute[117331]: 2025-10-09 16:24:46.723 2 DEBUG oslo_concurrency.processutils [None req-0f6f4f94-486e-4917-8619-3858302d5b76 005258eefa5345409efe13f6a1de22ab c076c42271004e7aa86e84416e33f826 - - default default] CMD "env LANG=C uptime" returned: 0 in 0.030s execute /usr/lib/python3.12/site-packages/oslo_concurrency/processutils.py:372
Oct 09 16:24:46 compute-0 nova_compute[117331]: 2025-10-09 16:24:46.725 2 DEBUG nova.compute.resource_tracker [None req-0f6f4f94-486e-4917-8619-3858302d5b76 005258eefa5345409efe13f6a1de22ab c076c42271004e7aa86e84416e33f826 - - default default] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=6156MB free_disk=73.25769424438477GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.12/site-packages/nova/compute/resource_tracker.py:1136
Oct 09 16:24:46 compute-0 nova_compute[117331]: 2025-10-09 16:24:46.725 2 DEBUG oslo_concurrency.lockutils [None req-0f6f4f94-486e-4917-8619-3858302d5b76 005258eefa5345409efe13f6a1de22ab c076c42271004e7aa86e84416e33f826 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:405
Oct 09 16:24:46 compute-0 nova_compute[117331]: 2025-10-09 16:24:46.726 2 DEBUG oslo_concurrency.lockutils [None req-0f6f4f94-486e-4917-8619-3858302d5b76 005258eefa5345409efe13f6a1de22ab c076c42271004e7aa86e84416e33f826 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.001s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:410
Oct 09 16:24:47 compute-0 nova_compute[117331]: 2025-10-09 16:24:47.511 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 09 16:24:47 compute-0 nova_compute[117331]: 2025-10-09 16:24:47.746 2 DEBUG nova.compute.resource_tracker [None req-0f6f4f94-486e-4917-8619-3858302d5b76 005258eefa5345409efe13f6a1de22ab c076c42271004e7aa86e84416e33f826 - - default default] Migration for instance 7531b461-59ec-4261-8f6f-125c07fbf626 refers to another host's instance! _pair_instances_to_migrations /usr/lib/python3.12/site-packages/nova/compute/resource_tracker.py:979
Oct 09 16:24:48 compute-0 nova_compute[117331]: 2025-10-09 16:24:48.256 2 DEBUG nova.compute.resource_tracker [None req-0f6f4f94-486e-4917-8619-3858302d5b76 005258eefa5345409efe13f6a1de22ab c076c42271004e7aa86e84416e33f826 - - default default] [instance: 7531b461-59ec-4261-8f6f-125c07fbf626] Skipping migration as instance is neither resizing nor live-migrating. _update_usage_from_migrations /usr/lib/python3.12/site-packages/nova/compute/resource_tracker.py:1596
Oct 09 16:24:48 compute-0 nova_compute[117331]: 2025-10-09 16:24:48.291 2 DEBUG nova.compute.resource_tracker [None req-0f6f4f94-486e-4917-8619-3858302d5b76 005258eefa5345409efe13f6a1de22ab c076c42271004e7aa86e84416e33f826 - - default default] Migration d7553f96-341c-40c8-9725-099a4b3f8767 is active on this compute host and has allocations in placement: {'resources': {'VCPU': 1, 'MEMORY_MB': 128, 'DISK_GB': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.12/site-packages/nova/compute/resource_tracker.py:1745
Oct 09 16:24:48 compute-0 nova_compute[117331]: 2025-10-09 16:24:48.292 2 DEBUG nova.compute.resource_tracker [None req-0f6f4f94-486e-4917-8619-3858302d5b76 005258eefa5345409efe13f6a1de22ab c076c42271004e7aa86e84416e33f826 - - default default] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.12/site-packages/nova/compute/resource_tracker.py:1159
Oct 09 16:24:48 compute-0 nova_compute[117331]: 2025-10-09 16:24:48.292 2 DEBUG nova.compute.resource_tracker [None req-0f6f4f94-486e-4917-8619-3858302d5b76 005258eefa5345409efe13f6a1de22ab c076c42271004e7aa86e84416e33f826 - - default default] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=79GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] stats={'failed_builds': '0', 'uptime': ' 16:24:46 up 33 min,  0 user,  load average: 0.95, 0.57, 0.39\n'} _report_final_resource_view /usr/lib/python3.12/site-packages/nova/compute/resource_tracker.py:1168
Oct 09 16:24:48 compute-0 nova_compute[117331]: 2025-10-09 16:24:48.328 2 DEBUG nova.compute.provider_tree [None req-0f6f4f94-486e-4917-8619-3858302d5b76 005258eefa5345409efe13f6a1de22ab c076c42271004e7aa86e84416e33f826 - - default default] Inventory has not changed in ProviderTree for provider: 593051b8-2000-437f-a915-2616fc8b1671 update_inventory /usr/lib/python3.12/site-packages/nova/compute/provider_tree.py:180
Oct 09 16:24:48 compute-0 nova_compute[117331]: 2025-10-09 16:24:48.837 2 DEBUG nova.scheduler.client.report [None req-0f6f4f94-486e-4917-8619-3858302d5b76 005258eefa5345409efe13f6a1de22ab c076c42271004e7aa86e84416e33f826 - - default default] Inventory has not changed for provider 593051b8-2000-437f-a915-2616fc8b1671 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 79, 'reserved': 1, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.12/site-packages/nova/scheduler/client/report.py:958
Oct 09 16:24:49 compute-0 nova_compute[117331]: 2025-10-09 16:24:49.350 2 DEBUG nova.compute.resource_tracker [None req-0f6f4f94-486e-4917-8619-3858302d5b76 005258eefa5345409efe13f6a1de22ab c076c42271004e7aa86e84416e33f826 - - default default] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.12/site-packages/nova/compute/resource_tracker.py:1097
Oct 09 16:24:49 compute-0 nova_compute[117331]: 2025-10-09 16:24:49.350 2 DEBUG oslo_concurrency.lockutils [None req-0f6f4f94-486e-4917-8619-3858302d5b76 005258eefa5345409efe13f6a1de22ab c076c42271004e7aa86e84416e33f826 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 2.625s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:424
Oct 09 16:24:49 compute-0 nova_compute[117331]: 2025-10-09 16:24:49.366 2 INFO nova.compute.manager [None req-0f6f4f94-486e-4917-8619-3858302d5b76 005258eefa5345409efe13f6a1de22ab c076c42271004e7aa86e84416e33f826 - - default default] [instance: 7531b461-59ec-4261-8f6f-125c07fbf626] Migrating instance to compute-1.ctlplane.example.com finished successfully.
Oct 09 16:24:50 compute-0 nova_compute[117331]: 2025-10-09 16:24:50.426 2 INFO nova.scheduler.client.report [None req-0f6f4f94-486e-4917-8619-3858302d5b76 005258eefa5345409efe13f6a1de22ab c076c42271004e7aa86e84416e33f826 - - default default] Deleted allocation for migration d7553f96-341c-40c8-9725-099a4b3f8767
Oct 09 16:24:50 compute-0 nova_compute[117331]: 2025-10-09 16:24:50.427 2 DEBUG nova.virt.libvirt.driver [None req-0f6f4f94-486e-4917-8619-3858302d5b76 005258eefa5345409efe13f6a1de22ab c076c42271004e7aa86e84416e33f826 - - default default] [instance: 7531b461-59ec-4261-8f6f-125c07fbf626] Live migration monitoring is all done _live_migration /usr/lib/python3.12/site-packages/nova/virt/libvirt/driver.py:11566
Oct 09 16:24:51 compute-0 nova_compute[117331]: 2025-10-09 16:24:51.297 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 09 16:24:52 compute-0 nova_compute[117331]: 2025-10-09 16:24:52.517 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 09 16:24:54 compute-0 podman[146963]: 2025-10-09 16:24:54.856971415 +0000 UTC m=+0.084295147 container health_status 64fa251112d01abb34ec1bb8e22019815ddcaaaa391ce5ec91517147ac5a9994 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, vendor=Red Hat, Inc., io.openshift.tags=minimal rhel9, release=1755695350, container_name=openstack_network_exporter, distribution-scope=public, managed_by=edpm_ansible, name=ubi9-minimal, vcs-type=git, com.redhat.component=ubi9-minimal-container, maintainer=Red Hat, Inc., architecture=x86_64, io.buildah.version=1.33.7, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, io.openshift.expose-services=, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, config_id=edpm, version=9.6, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. 
This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., url=https://catalog.redhat.com/en/search?searchType=containers, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., build-date=2025-08-20T13:12:41)
Oct 09 16:24:54 compute-0 podman[146964]: 2025-10-09 16:24:54.86686935 +0000 UTC m=+0.086284791 container health_status d615e4f24ff62fbb14be4a63e6da568ec5b22309773c1c66103b6b4de64a8a2f (image=38.102.83.66:5001/podified-master-centos10/openstack-ovn-controller:watcher_latest, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251007, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': '38.102.83.66:5001/podified-master-centos10/openstack-ovn-controller:watcher_latest', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, io.buildah.version=1.41.4, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, tcib_build_tag=watcher_latest, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team)
Oct 09 16:24:56 compute-0 nova_compute[117331]: 2025-10-09 16:24:56.300 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 09 16:24:57 compute-0 nova_compute[117331]: 2025-10-09 16:24:57.520 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 09 16:24:59 compute-0 podman[127775]: time="2025-10-09T16:24:59Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Oct 09 16:24:59 compute-0 podman[127775]: @ - - [09/Oct/2025:16:24:59 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 19526 "" "Go-http-client/1.1"
Oct 09 16:24:59 compute-0 podman[127775]: @ - - [09/Oct/2025:16:24:59 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 3022 "" "Go-http-client/1.1"
Oct 09 16:25:01 compute-0 nova_compute[117331]: 2025-10-09 16:25:01.351 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 09 16:25:01 compute-0 openstack_network_exporter[129925]: ERROR   16:25:01 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Oct 09 16:25:01 compute-0 openstack_network_exporter[129925]: ERROR   16:25:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Oct 09 16:25:01 compute-0 openstack_network_exporter[129925]: ERROR   16:25:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Oct 09 16:25:01 compute-0 openstack_network_exporter[129925]: ERROR   16:25:01 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Oct 09 16:25:01 compute-0 openstack_network_exporter[129925]: 
Oct 09 16:25:01 compute-0 openstack_network_exporter[129925]: ERROR   16:25:01 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Oct 09 16:25:01 compute-0 openstack_network_exporter[129925]: 
Oct 09 16:25:01 compute-0 nova_compute[117331]: 2025-10-09 16:25:01.933 2 DEBUG nova.compute.manager [None req-9c24ea85-ed52-4277-9837-961f93a7794f 769bfcc46cd44cde8622c2a2d5e02dbc b30e8cf5e10742f190212b4cb97ce2c9 - - default default] Removing trait COMPUTE_STATUS_DISABLED from compute node resource provider 593051b8-2000-437f-a915-2616fc8b1671 in placement. update_compute_provider_status /usr/lib/python3.12/site-packages/nova/compute/manager.py:631
Oct 09 16:25:01 compute-0 nova_compute[117331]: 2025-10-09 16:25:01.990 2 DEBUG nova.compute.provider_tree [None req-9c24ea85-ed52-4277-9837-961f93a7794f 769bfcc46cd44cde8622c2a2d5e02dbc b30e8cf5e10742f190212b4cb97ce2c9 - - default default] Updating resource provider 593051b8-2000-437f-a915-2616fc8b1671 generation from 23 to 26 during operation: update_traits _update_generation /usr/lib/python3.12/site-packages/nova/compute/provider_tree.py:164
Oct 09 16:25:02 compute-0 nova_compute[117331]: 2025-10-09 16:25:02.522 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 09 16:25:06 compute-0 nova_compute[117331]: 2025-10-09 16:25:06.322 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 09 16:25:06 compute-0 nova_compute[117331]: 2025-10-09 16:25:06.353 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 09 16:25:07 compute-0 nova_compute[117331]: 2025-10-09 16:25:07.549 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 09 16:25:07 compute-0 podman[147012]: 2025-10-09 16:25:07.82821167 +0000 UTC m=+0.058416356 container health_status 270830b5167cafbab735d8118980f26987e673cd0fa464dd2202394830bcca66 (image=38.102.83.66:5001/podified-master-centos10/openstack-multipathd:watcher_latest, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251007, io.buildah.version=1.41.4, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.license=GPLv2, tcib_build_tag=watcher_latest, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': '38.102.83.66:5001/podified-master-centos10/openstack-multipathd:watcher_latest', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, container_name=multipathd)
Oct 09 16:25:10 compute-0 podman[147034]: 2025-10-09 16:25:10.823445534 +0000 UTC m=+0.057432114 container health_status 86aa93e887899e54d383b0fc17fb3c5637680d1d8e146a130a557c837f0260fe (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter)
Oct 09 16:25:11 compute-0 nova_compute[117331]: 2025-10-09 16:25:11.363 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 09 16:25:12 compute-0 nova_compute[117331]: 2025-10-09 16:25:12.551 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 09 16:25:15 compute-0 ovn_metadata_agent[28608]: 2025-10-09 16:25:15.021 28613 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:71:5e:11 10.100.0.2'], port_security=[], type=localport, nat_addresses=[], virtual_parent=[], up=[False], options={}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.2/28', 'neutron:device_id': 'ovnmeta-18ef4241-0151-441c-abdc-42d4b3a21b30', 'neutron:device_owner': 'network:distributed', 'neutron:mtu': '', 'neutron:network_name': 'neutron-18ef4241-0151-441c-abdc-42d4b3a21b30', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'c2c321aa8a494a8e8b49c81b79e3ceca', 'neutron:revision_number': '2', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=0ed87cdf-61a6-4df0-ac74-37289a0bcd5f, chassis=[], tunnel_key=1, gateway_chassis=[], requested_chassis=[], logical_port=2ea79927-f6b6-48ed-a992-4066429c8e5d) old=Port_Binding(mac=['fa:16:3e:71:5e:11'], external_ids={'neutron:cidrs': '', 'neutron:device_id': 'ovnmeta-18ef4241-0151-441c-abdc-42d4b3a21b30', 'neutron:device_owner': 'network:distributed', 'neutron:mtu': '', 'neutron:network_name': 'neutron-18ef4241-0151-441c-abdc-42d4b3a21b30', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'c2c321aa8a494a8e8b49c81b79e3ceca', 'neutron:revision_number': '1', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}) matches /usr/lib/python3.12/site-packages/ovsdbapp/backend/ovs_idl/event.py:55
Oct 09 16:25:15 compute-0 ovn_metadata_agent[28608]: 2025-10-09 16:25:15.022 28613 INFO neutron.agent.ovn.metadata.agent [-] Metadata Port 2ea79927-f6b6-48ed-a992-4066429c8e5d in datapath 18ef4241-0151-441c-abdc-42d4b3a21b30 updated
Oct 09 16:25:15 compute-0 ovn_metadata_agent[28608]: 2025-10-09 16:25:15.023 28613 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 18ef4241-0151-441c-abdc-42d4b3a21b30, tearing the namespace down if needed _get_provision_params /usr/lib/python3.12/site-packages/neutron/agent/ovn/metadata/agent.py:756
Oct 09 16:25:15 compute-0 ovn_metadata_agent[28608]: 2025-10-09 16:25:15.024 139687 DEBUG oslo.privsep.daemon [-] privsep: reply[5954fd76-7412-436d-983c-1a13b69d31ad]: (4, False) _call_back /usr/lib/python3.12/site-packages/oslo_privsep/daemon.py:515
Oct 09 16:25:16 compute-0 nova_compute[117331]: 2025-10-09 16:25:16.307 2 DEBUG oslo_service.periodic_task [None req-92a13bed-c081-4c7b-87f3-bafebefb79f2 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.12/site-packages/oslo_service/periodic_task.py:210
Oct 09 16:25:16 compute-0 nova_compute[117331]: 2025-10-09 16:25:16.366 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 09 16:25:16 compute-0 podman[147061]: 2025-10-09 16:25:16.828166944 +0000 UTC m=+0.053440798 container health_status 8227728d23f374cb9528be6ddf01dff38d8c792912a17b0d137932a18390b820 (image=38.102.83.66:5001/podified-master-centos10/openstack-iscsid:watcher_latest, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.4, org.label-schema.schema-version=1.0, config_id=iscsid, org.label-schema.name=CentOS Stream 10 Base Image, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251007, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, tcib_build_tag=watcher_latest, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': '38.102.83.66:5001/podified-master-centos10/openstack-iscsid:watcher_latest', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, container_name=iscsid)
Oct 09 16:25:16 compute-0 podman[147060]: 2025-10-09 16:25:16.840410163 +0000 UTC m=+0.063753095 container health_status 0e966c7a283689daf23c5db5017f23f685ee836319240188a9f70ddd246563c5 (image=38.102.83.66:5001/podified-master-centos10/openstack-neutron-metadata-agent-ovn:watcher_latest, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, io.buildah.version=1.41.4, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251007, org.label-schema.schema-version=1.0, tcib_build_tag=watcher_latest, tcib_managed=true, container_name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 10 Base Image, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': '38.102.83.66:5001/podified-master-centos10/openstack-neutron-metadata-agent-ovn:watcher_latest', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent)
Oct 09 16:25:17 compute-0 nova_compute[117331]: 2025-10-09 16:25:17.553 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 09 16:25:19 compute-0 nova_compute[117331]: 2025-10-09 16:25:19.307 2 DEBUG oslo_service.periodic_task [None req-92a13bed-c081-4c7b-87f3-bafebefb79f2 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.12/site-packages/oslo_service/periodic_task.py:210
Oct 09 16:25:21 compute-0 nova_compute[117331]: 2025-10-09 16:25:21.306 2 DEBUG oslo_service.periodic_task [None req-92a13bed-c081-4c7b-87f3-bafebefb79f2 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.12/site-packages/oslo_service/periodic_task.py:210
Oct 09 16:25:21 compute-0 nova_compute[117331]: 2025-10-09 16:25:21.365 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 09 16:25:21 compute-0 ovn_metadata_agent[28608]: 2025-10-09 16:25:21.540 28613 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:32:30:15 10.100.0.2'], port_security=[], type=localport, nat_addresses=[], virtual_parent=[], up=[False], options={}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.2/28', 'neutron:device_id': 'ovnmeta-10ded28b-9eac-4707-9a26-ca0e9992f8d7', 'neutron:device_owner': 'network:distributed', 'neutron:mtu': '', 'neutron:network_name': 'neutron-10ded28b-9eac-4707-9a26-ca0e9992f8d7', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '1a345bddd4804404a55948133ea8150f', 'neutron:revision_number': '2', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=547619a6-947b-4670-938e-d1bd0a059cc6, chassis=[], tunnel_key=1, gateway_chassis=[], requested_chassis=[], logical_port=81cd0c43-7790-430c-8b4d-96bd40b326f6) old=Port_Binding(mac=['fa:16:3e:32:30:15'], external_ids={'neutron:cidrs': '', 'neutron:device_id': 'ovnmeta-10ded28b-9eac-4707-9a26-ca0e9992f8d7', 'neutron:device_owner': 'network:distributed', 'neutron:mtu': '', 'neutron:network_name': 'neutron-10ded28b-9eac-4707-9a26-ca0e9992f8d7', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '1a345bddd4804404a55948133ea8150f', 'neutron:revision_number': '1', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}) matches /usr/lib/python3.12/site-packages/ovsdbapp/backend/ovs_idl/event.py:55
Oct 09 16:25:21 compute-0 ovn_metadata_agent[28608]: 2025-10-09 16:25:21.541 28613 INFO neutron.agent.ovn.metadata.agent [-] Metadata Port 81cd0c43-7790-430c-8b4d-96bd40b326f6 in datapath 10ded28b-9eac-4707-9a26-ca0e9992f8d7 updated
Oct 09 16:25:21 compute-0 ovn_metadata_agent[28608]: 2025-10-09 16:25:21.543 28613 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 10ded28b-9eac-4707-9a26-ca0e9992f8d7, tearing the namespace down if needed _get_provision_params /usr/lib/python3.12/site-packages/neutron/agent/ovn/metadata/agent.py:756
Oct 09 16:25:21 compute-0 ovn_metadata_agent[28608]: 2025-10-09 16:25:21.544 139687 DEBUG oslo.privsep.daemon [-] privsep: reply[30260527-7e5e-4922-bb2d-03ab9581288c]: (4, False) _call_back /usr/lib/python3.12/site-packages/oslo_privsep/daemon.py:515
Oct 09 16:25:21 compute-0 nova_compute[117331]: 2025-10-09 16:25:21.822 2 DEBUG oslo_concurrency.lockutils [None req-92a13bed-c081-4c7b-87f3-bafebefb79f2 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:405
Oct 09 16:25:21 compute-0 nova_compute[117331]: 2025-10-09 16:25:21.823 2 DEBUG oslo_concurrency.lockutils [None req-92a13bed-c081-4c7b-87f3-bafebefb79f2 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:410
Oct 09 16:25:21 compute-0 nova_compute[117331]: 2025-10-09 16:25:21.824 2 DEBUG oslo_concurrency.lockutils [None req-92a13bed-c081-4c7b-87f3-bafebefb79f2 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:424
Oct 09 16:25:21 compute-0 nova_compute[117331]: 2025-10-09 16:25:21.824 2 DEBUG nova.compute.resource_tracker [None req-92a13bed-c081-4c7b-87f3-bafebefb79f2 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.12/site-packages/nova/compute/resource_tracker.py:937
Oct 09 16:25:21 compute-0 nova_compute[117331]: 2025-10-09 16:25:21.969 2 WARNING nova.virt.libvirt.driver [None req-92a13bed-c081-4c7b-87f3-bafebefb79f2 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Oct 09 16:25:21 compute-0 nova_compute[117331]: 2025-10-09 16:25:21.970 2 DEBUG oslo_concurrency.processutils [None req-92a13bed-c081-4c7b-87f3-bafebefb79f2 - - - - - -] Running cmd (subprocess): env LANG=C uptime execute /usr/lib/python3.12/site-packages/oslo_concurrency/processutils.py:349
Oct 09 16:25:22 compute-0 nova_compute[117331]: 2025-10-09 16:25:22.001 2 DEBUG oslo_concurrency.processutils [None req-92a13bed-c081-4c7b-87f3-bafebefb79f2 - - - - - -] CMD "env LANG=C uptime" returned: 0 in 0.031s execute /usr/lib/python3.12/site-packages/oslo_concurrency/processutils.py:372
Oct 09 16:25:22 compute-0 nova_compute[117331]: 2025-10-09 16:25:22.002 2 DEBUG nova.compute.resource_tracker [None req-92a13bed-c081-4c7b-87f3-bafebefb79f2 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=6195MB free_disk=73.2577133178711GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.12/site-packages/nova/compute/resource_tracker.py:1136
Oct 09 16:25:22 compute-0 nova_compute[117331]: 2025-10-09 16:25:22.002 2 DEBUG oslo_concurrency.lockutils [None req-92a13bed-c081-4c7b-87f3-bafebefb79f2 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:405
Oct 09 16:25:22 compute-0 nova_compute[117331]: 2025-10-09 16:25:22.003 2 DEBUG oslo_concurrency.lockutils [None req-92a13bed-c081-4c7b-87f3-bafebefb79f2 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:410
Oct 09 16:25:22 compute-0 nova_compute[117331]: 2025-10-09 16:25:22.555 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 09 16:25:23 compute-0 nova_compute[117331]: 2025-10-09 16:25:23.050 2 DEBUG nova.compute.resource_tracker [None req-92a13bed-c081-4c7b-87f3-bafebefb79f2 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.12/site-packages/nova/compute/resource_tracker.py:1159
Oct 09 16:25:23 compute-0 nova_compute[117331]: 2025-10-09 16:25:23.051 2 DEBUG nova.compute.resource_tracker [None req-92a13bed-c081-4c7b-87f3-bafebefb79f2 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=79GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] stats={'failed_builds': '0', 'uptime': ' 16:25:21 up 34 min,  0 user,  load average: 0.92, 0.59, 0.40\n'} _report_final_resource_view /usr/lib/python3.12/site-packages/nova/compute/resource_tracker.py:1168
Oct 09 16:25:23 compute-0 nova_compute[117331]: 2025-10-09 16:25:23.073 2 DEBUG nova.compute.provider_tree [None req-92a13bed-c081-4c7b-87f3-bafebefb79f2 - - - - - -] Inventory has not changed in ProviderTree for provider: 593051b8-2000-437f-a915-2616fc8b1671 update_inventory /usr/lib/python3.12/site-packages/nova/compute/provider_tree.py:180
Oct 09 16:25:23 compute-0 nova_compute[117331]: 2025-10-09 16:25:23.580 2 DEBUG nova.scheduler.client.report [None req-92a13bed-c081-4c7b-87f3-bafebefb79f2 - - - - - -] Inventory has not changed for provider 593051b8-2000-437f-a915-2616fc8b1671 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 79, 'reserved': 1, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.12/site-packages/nova/scheduler/client/report.py:958
Oct 09 16:25:24 compute-0 nova_compute[117331]: 2025-10-09 16:25:24.092 2 DEBUG nova.compute.resource_tracker [None req-92a13bed-c081-4c7b-87f3-bafebefb79f2 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.12/site-packages/nova/compute/resource_tracker.py:1097
Oct 09 16:25:24 compute-0 nova_compute[117331]: 2025-10-09 16:25:24.093 2 DEBUG oslo_concurrency.lockutils [None req-92a13bed-c081-4c7b-87f3-bafebefb79f2 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 2.090s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:424
Oct 09 16:25:25 compute-0 nova_compute[117331]: 2025-10-09 16:25:25.093 2 DEBUG oslo_service.periodic_task [None req-92a13bed-c081-4c7b-87f3-bafebefb79f2 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.12/site-packages/oslo_service/periodic_task.py:210
Oct 09 16:25:25 compute-0 nova_compute[117331]: 2025-10-09 16:25:25.094 2 DEBUG oslo_service.periodic_task [None req-92a13bed-c081-4c7b-87f3-bafebefb79f2 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.12/site-packages/oslo_service/periodic_task.py:210
Oct 09 16:25:25 compute-0 nova_compute[117331]: 2025-10-09 16:25:25.094 2 DEBUG oslo_service.periodic_task [None req-92a13bed-c081-4c7b-87f3-bafebefb79f2 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.12/site-packages/oslo_service/periodic_task.py:210
Oct 09 16:25:25 compute-0 nova_compute[117331]: 2025-10-09 16:25:25.095 2 DEBUG nova.compute.manager [None req-92a13bed-c081-4c7b-87f3-bafebefb79f2 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.12/site-packages/nova/compute/manager.py:11228
Oct 09 16:25:25 compute-0 nova_compute[117331]: 2025-10-09 16:25:25.306 2 DEBUG oslo_service.periodic_task [None req-92a13bed-c081-4c7b-87f3-bafebefb79f2 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.12/site-packages/oslo_service/periodic_task.py:210
Oct 09 16:25:25 compute-0 nova_compute[117331]: 2025-10-09 16:25:25.307 2 DEBUG oslo_service.periodic_task [None req-92a13bed-c081-4c7b-87f3-bafebefb79f2 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.12/site-packages/oslo_service/periodic_task.py:210
Oct 09 16:25:25 compute-0 podman[147099]: 2025-10-09 16:25:25.846528994 +0000 UTC m=+0.067100512 container health_status 64fa251112d01abb34ec1bb8e22019815ddcaaaa391ce5ec91517147ac5a9994 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vendor=Red Hat, Inc., url=https://catalog.redhat.com/en/search?searchType=containers, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, io.openshift.tags=minimal rhel9, architecture=x86_64, distribution-scope=public, maintainer=Red Hat, Inc., config_id=edpm, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., version=9.6, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., managed_by=edpm_ansible, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, vcs-type=git, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., container_name=openstack_network_exporter, name=ubi9-minimal, com.redhat.component=ubi9-minimal-container, io.buildah.version=1.33.7, io.openshift.expose-services=, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, release=1755695350, build-date=2025-08-20T13:12:41)
Oct 09 16:25:25 compute-0 podman[147100]: 2025-10-09 16:25:25.930519881 +0000 UTC m=+0.144774228 container health_status d615e4f24ff62fbb14be4a63e6da568ec5b22309773c1c66103b6b4de64a8a2f (image=38.102.83.66:5001/podified-master-centos10/openstack-ovn-controller:watcher_latest, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_controller, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=watcher_latest, org.label-schema.build-date=20251007, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.name=CentOS Stream 10 Base Image, io.buildah.version=1.41.4, maintainer=OpenStack Kubernetes Operator team, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': '38.102.83.66:5001/podified-master-centos10/openstack-ovn-controller:watcher_latest', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller)
Oct 09 16:25:26 compute-0 nova_compute[117331]: 2025-10-09 16:25:26.398 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 09 16:25:27 compute-0 nova_compute[117331]: 2025-10-09 16:25:27.557 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 09 16:25:29 compute-0 podman[127775]: time="2025-10-09T16:25:29Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Oct 09 16:25:29 compute-0 podman[127775]: @ - - [09/Oct/2025:16:25:29 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 19526 "" "Go-http-client/1.1"
Oct 09 16:25:29 compute-0 podman[127775]: @ - - [09/Oct/2025:16:25:29 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 3023 "" "Go-http-client/1.1"
Oct 09 16:25:31 compute-0 nova_compute[117331]: 2025-10-09 16:25:31.038 2 DEBUG oslo_concurrency.lockutils [None req-2c1c996e-ac28-41e0-9576-f4a2f89c1104 c2c0f7c2f15e4da6881dc393064b0e16 1a345bddd4804404a55948133ea8150f - - default default] Acquiring lock "da070580-b1f8-4571-b524-ce73d61665df" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:405
Oct 09 16:25:31 compute-0 nova_compute[117331]: 2025-10-09 16:25:31.038 2 DEBUG oslo_concurrency.lockutils [None req-2c1c996e-ac28-41e0-9576-f4a2f89c1104 c2c0f7c2f15e4da6881dc393064b0e16 1a345bddd4804404a55948133ea8150f - - default default] Lock "da070580-b1f8-4571-b524-ce73d61665df" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:410
Oct 09 16:25:31 compute-0 nova_compute[117331]: 2025-10-09 16:25:31.399 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 09 16:25:31 compute-0 openstack_network_exporter[129925]: ERROR   16:25:31 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Oct 09 16:25:31 compute-0 openstack_network_exporter[129925]: ERROR   16:25:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Oct 09 16:25:31 compute-0 openstack_network_exporter[129925]: ERROR   16:25:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Oct 09 16:25:31 compute-0 openstack_network_exporter[129925]: ERROR   16:25:31 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Oct 09 16:25:31 compute-0 openstack_network_exporter[129925]: 
Oct 09 16:25:31 compute-0 openstack_network_exporter[129925]: ERROR   16:25:31 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Oct 09 16:25:31 compute-0 openstack_network_exporter[129925]: 
Oct 09 16:25:31 compute-0 nova_compute[117331]: 2025-10-09 16:25:31.551 2 DEBUG nova.compute.manager [None req-2c1c996e-ac28-41e0-9576-f4a2f89c1104 c2c0f7c2f15e4da6881dc393064b0e16 1a345bddd4804404a55948133ea8150f - - default default] [instance: da070580-b1f8-4571-b524-ce73d61665df] Starting instance... _do_build_and_run_instance /usr/lib/python3.12/site-packages/nova/compute/manager.py:2472
Oct 09 16:25:32 compute-0 nova_compute[117331]: 2025-10-09 16:25:32.100 2 DEBUG oslo_concurrency.lockutils [None req-2c1c996e-ac28-41e0-9576-f4a2f89c1104 c2c0f7c2f15e4da6881dc393064b0e16 1a345bddd4804404a55948133ea8150f - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:405
Oct 09 16:25:32 compute-0 nova_compute[117331]: 2025-10-09 16:25:32.101 2 DEBUG oslo_concurrency.lockutils [None req-2c1c996e-ac28-41e0-9576-f4a2f89c1104 c2c0f7c2f15e4da6881dc393064b0e16 1a345bddd4804404a55948133ea8150f - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:410
Oct 09 16:25:32 compute-0 nova_compute[117331]: 2025-10-09 16:25:32.109 2 DEBUG nova.virt.hardware [None req-2c1c996e-ac28-41e0-9576-f4a2f89c1104 c2c0f7c2f15e4da6881dc393064b0e16 1a345bddd4804404a55948133ea8150f - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.12/site-packages/nova/virt/hardware.py:2528
Oct 09 16:25:32 compute-0 nova_compute[117331]: 2025-10-09 16:25:32.110 2 INFO nova.compute.claims [None req-2c1c996e-ac28-41e0-9576-f4a2f89c1104 c2c0f7c2f15e4da6881dc393064b0e16 1a345bddd4804404a55948133ea8150f - - default default] [instance: da070580-b1f8-4571-b524-ce73d61665df] Claim successful on node compute-0.ctlplane.example.com
Oct 09 16:25:32 compute-0 nova_compute[117331]: 2025-10-09 16:25:32.560 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 09 16:25:33 compute-0 nova_compute[117331]: 2025-10-09 16:25:33.160 2 DEBUG nova.compute.provider_tree [None req-2c1c996e-ac28-41e0-9576-f4a2f89c1104 c2c0f7c2f15e4da6881dc393064b0e16 1a345bddd4804404a55948133ea8150f - - default default] Inventory has not changed in ProviderTree for provider: 593051b8-2000-437f-a915-2616fc8b1671 update_inventory /usr/lib/python3.12/site-packages/nova/compute/provider_tree.py:180
Oct 09 16:25:33 compute-0 nova_compute[117331]: 2025-10-09 16:25:33.668 2 DEBUG nova.scheduler.client.report [None req-2c1c996e-ac28-41e0-9576-f4a2f89c1104 c2c0f7c2f15e4da6881dc393064b0e16 1a345bddd4804404a55948133ea8150f - - default default] Inventory has not changed for provider 593051b8-2000-437f-a915-2616fc8b1671 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 79, 'reserved': 1, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.12/site-packages/nova/scheduler/client/report.py:958
Oct 09 16:25:34 compute-0 nova_compute[117331]: 2025-10-09 16:25:34.181 2 DEBUG oslo_concurrency.lockutils [None req-2c1c996e-ac28-41e0-9576-f4a2f89c1104 c2c0f7c2f15e4da6881dc393064b0e16 1a345bddd4804404a55948133ea8150f - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 2.080s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:424
Oct 09 16:25:34 compute-0 nova_compute[117331]: 2025-10-09 16:25:34.182 2 DEBUG nova.compute.manager [None req-2c1c996e-ac28-41e0-9576-f4a2f89c1104 c2c0f7c2f15e4da6881dc393064b0e16 1a345bddd4804404a55948133ea8150f - - default default] [instance: da070580-b1f8-4571-b524-ce73d61665df] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.12/site-packages/nova/compute/manager.py:2869
Oct 09 16:25:34 compute-0 nova_compute[117331]: 2025-10-09 16:25:34.694 2 DEBUG nova.compute.manager [None req-2c1c996e-ac28-41e0-9576-f4a2f89c1104 c2c0f7c2f15e4da6881dc393064b0e16 1a345bddd4804404a55948133ea8150f - - default default] [instance: da070580-b1f8-4571-b524-ce73d61665df] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.12/site-packages/nova/compute/manager.py:2016
Oct 09 16:25:34 compute-0 nova_compute[117331]: 2025-10-09 16:25:34.694 2 DEBUG nova.network.neutron [None req-2c1c996e-ac28-41e0-9576-f4a2f89c1104 c2c0f7c2f15e4da6881dc393064b0e16 1a345bddd4804404a55948133ea8150f - - default default] [instance: da070580-b1f8-4571-b524-ce73d61665df] allocate_for_instance() allocate_for_instance /usr/lib/python3.12/site-packages/nova/network/neutron.py:1208
Oct 09 16:25:34 compute-0 nova_compute[117331]: 2025-10-09 16:25:34.695 2 WARNING neutronclient.v2_0.client [None req-2c1c996e-ac28-41e0-9576-f4a2f89c1104 c2c0f7c2f15e4da6881dc393064b0e16 1a345bddd4804404a55948133ea8150f - - default default] The python binding code in neutronclient is deprecated in favor of OpenstackSDK, please use that as this will be removed in a future release.
Oct 09 16:25:34 compute-0 nova_compute[117331]: 2025-10-09 16:25:34.695 2 WARNING neutronclient.v2_0.client [None req-2c1c996e-ac28-41e0-9576-f4a2f89c1104 c2c0f7c2f15e4da6881dc393064b0e16 1a345bddd4804404a55948133ea8150f - - default default] The python binding code in neutronclient is deprecated in favor of OpenstackSDK, please use that as this will be removed in a future release.
Oct 09 16:25:35 compute-0 nova_compute[117331]: 2025-10-09 16:25:35.202 2 INFO nova.virt.libvirt.driver [None req-2c1c996e-ac28-41e0-9576-f4a2f89c1104 c2c0f7c2f15e4da6881dc393064b0e16 1a345bddd4804404a55948133ea8150f - - default default] [instance: da070580-b1f8-4571-b524-ce73d61665df] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Oct 09 16:25:35 compute-0 ovn_metadata_agent[28608]: 2025-10-09 16:25:35.312 28613 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:405
Oct 09 16:25:35 compute-0 ovn_metadata_agent[28608]: 2025-10-09 16:25:35.313 28613 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.000s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:410
Oct 09 16:25:35 compute-0 ovn_metadata_agent[28608]: 2025-10-09 16:25:35.313 28613 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:424
Oct 09 16:25:35 compute-0 nova_compute[117331]: 2025-10-09 16:25:35.632 2 DEBUG nova.network.neutron [None req-2c1c996e-ac28-41e0-9576-f4a2f89c1104 c2c0f7c2f15e4da6881dc393064b0e16 1a345bddd4804404a55948133ea8150f - - default default] [instance: da070580-b1f8-4571-b524-ce73d61665df] Successfully created port: c06c63eb-935d-42e3-9672-1d18b6bca7ad _create_port_minimal /usr/lib/python3.12/site-packages/nova/network/neutron.py:550
Oct 09 16:25:35 compute-0 nova_compute[117331]: 2025-10-09 16:25:35.711 2 DEBUG nova.compute.manager [None req-2c1c996e-ac28-41e0-9576-f4a2f89c1104 c2c0f7c2f15e4da6881dc393064b0e16 1a345bddd4804404a55948133ea8150f - - default default] [instance: da070580-b1f8-4571-b524-ce73d61665df] Start building block device mappings for instance. _build_resources /usr/lib/python3.12/site-packages/nova/compute/manager.py:2904
Oct 09 16:25:36 compute-0 nova_compute[117331]: 2025-10-09 16:25:36.168 2 DEBUG nova.network.neutron [None req-2c1c996e-ac28-41e0-9576-f4a2f89c1104 c2c0f7c2f15e4da6881dc393064b0e16 1a345bddd4804404a55948133ea8150f - - default default] [instance: da070580-b1f8-4571-b524-ce73d61665df] Successfully updated port: c06c63eb-935d-42e3-9672-1d18b6bca7ad _update_port /usr/lib/python3.12/site-packages/nova/network/neutron.py:588
Oct 09 16:25:36 compute-0 nova_compute[117331]: 2025-10-09 16:25:36.225 2 DEBUG nova.compute.manager [req-cac4ce3d-561b-4469-bb7b-d498f287adb3 req-a63193a2-6a8b-4a9d-a699-e2b61bd65e33 ee15961ad2be4bada6c106ac4f69f422 c076c42271004e7aa86e84416e33f826 - - default default] [instance: da070580-b1f8-4571-b524-ce73d61665df] Received event network-changed-c06c63eb-935d-42e3-9672-1d18b6bca7ad external_instance_event /usr/lib/python3.12/site-packages/nova/compute/manager.py:11812
Oct 09 16:25:36 compute-0 nova_compute[117331]: 2025-10-09 16:25:36.226 2 DEBUG nova.compute.manager [req-cac4ce3d-561b-4469-bb7b-d498f287adb3 req-a63193a2-6a8b-4a9d-a699-e2b61bd65e33 ee15961ad2be4bada6c106ac4f69f422 c076c42271004e7aa86e84416e33f826 - - default default] [instance: da070580-b1f8-4571-b524-ce73d61665df] Refreshing instance network info cache due to event network-changed-c06c63eb-935d-42e3-9672-1d18b6bca7ad. external_instance_event /usr/lib/python3.12/site-packages/nova/compute/manager.py:11817
Oct 09 16:25:36 compute-0 nova_compute[117331]: 2025-10-09 16:25:36.227 2 DEBUG oslo_concurrency.lockutils [req-cac4ce3d-561b-4469-bb7b-d498f287adb3 req-a63193a2-6a8b-4a9d-a699-e2b61bd65e33 ee15961ad2be4bada6c106ac4f69f422 c076c42271004e7aa86e84416e33f826 - - default default] Acquiring lock "refresh_cache-da070580-b1f8-4571-b524-ce73d61665df" lock /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:313
Oct 09 16:25:36 compute-0 nova_compute[117331]: 2025-10-09 16:25:36.227 2 DEBUG oslo_concurrency.lockutils [req-cac4ce3d-561b-4469-bb7b-d498f287adb3 req-a63193a2-6a8b-4a9d-a699-e2b61bd65e33 ee15961ad2be4bada6c106ac4f69f422 c076c42271004e7aa86e84416e33f826 - - default default] Acquired lock "refresh_cache-da070580-b1f8-4571-b524-ce73d61665df" lock /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:316
Oct 09 16:25:36 compute-0 nova_compute[117331]: 2025-10-09 16:25:36.228 2 DEBUG nova.network.neutron [req-cac4ce3d-561b-4469-bb7b-d498f287adb3 req-a63193a2-6a8b-4a9d-a699-e2b61bd65e33 ee15961ad2be4bada6c106ac4f69f422 c076c42271004e7aa86e84416e33f826 - - default default] [instance: da070580-b1f8-4571-b524-ce73d61665df] Refreshing network info cache for port c06c63eb-935d-42e3-9672-1d18b6bca7ad _get_instance_nw_info /usr/lib/python3.12/site-packages/nova/network/neutron.py:2067
Oct 09 16:25:36 compute-0 nova_compute[117331]: 2025-10-09 16:25:36.400 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 09 16:25:36 compute-0 nova_compute[117331]: 2025-10-09 16:25:36.673 2 DEBUG oslo_concurrency.lockutils [None req-2c1c996e-ac28-41e0-9576-f4a2f89c1104 c2c0f7c2f15e4da6881dc393064b0e16 1a345bddd4804404a55948133ea8150f - - default default] Acquiring lock "refresh_cache-da070580-b1f8-4571-b524-ce73d61665df" lock /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:313
Oct 09 16:25:36 compute-0 nova_compute[117331]: 2025-10-09 16:25:36.731 2 DEBUG nova.compute.manager [None req-2c1c996e-ac28-41e0-9576-f4a2f89c1104 c2c0f7c2f15e4da6881dc393064b0e16 1a345bddd4804404a55948133ea8150f - - default default] [instance: da070580-b1f8-4571-b524-ce73d61665df] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.12/site-packages/nova/compute/manager.py:2678
Oct 09 16:25:36 compute-0 nova_compute[117331]: 2025-10-09 16:25:36.733 2 DEBUG nova.virt.libvirt.driver [None req-2c1c996e-ac28-41e0-9576-f4a2f89c1104 c2c0f7c2f15e4da6881dc393064b0e16 1a345bddd4804404a55948133ea8150f - - default default] [instance: da070580-b1f8-4571-b524-ce73d61665df] Creating instance directory _create_image /usr/lib/python3.12/site-packages/nova/virt/libvirt/driver.py:5138
Oct 09 16:25:36 compute-0 nova_compute[117331]: 2025-10-09 16:25:36.734 2 INFO nova.virt.libvirt.driver [None req-2c1c996e-ac28-41e0-9576-f4a2f89c1104 c2c0f7c2f15e4da6881dc393064b0e16 1a345bddd4804404a55948133ea8150f - - default default] [instance: da070580-b1f8-4571-b524-ce73d61665df] Creating image(s)
Oct 09 16:25:36 compute-0 nova_compute[117331]: 2025-10-09 16:25:36.735 2 DEBUG oslo_concurrency.lockutils [None req-2c1c996e-ac28-41e0-9576-f4a2f89c1104 c2c0f7c2f15e4da6881dc393064b0e16 1a345bddd4804404a55948133ea8150f - - default default] Acquiring lock "/var/lib/nova/instances/da070580-b1f8-4571-b524-ce73d61665df/disk.info" by "nova.virt.libvirt.imagebackend.Image.resolve_driver_format.<locals>.write_to_disk_info_file" inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:405
Oct 09 16:25:36 compute-0 nova_compute[117331]: 2025-10-09 16:25:36.735 2 DEBUG oslo_concurrency.lockutils [None req-2c1c996e-ac28-41e0-9576-f4a2f89c1104 c2c0f7c2f15e4da6881dc393064b0e16 1a345bddd4804404a55948133ea8150f - - default default] Lock "/var/lib/nova/instances/da070580-b1f8-4571-b524-ce73d61665df/disk.info" acquired by "nova.virt.libvirt.imagebackend.Image.resolve_driver_format.<locals>.write_to_disk_info_file" :: waited 0.001s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:410
Oct 09 16:25:36 compute-0 nova_compute[117331]: 2025-10-09 16:25:36.737 2 DEBUG oslo_concurrency.lockutils [None req-2c1c996e-ac28-41e0-9576-f4a2f89c1104 c2c0f7c2f15e4da6881dc393064b0e16 1a345bddd4804404a55948133ea8150f - - default default] Lock "/var/lib/nova/instances/da070580-b1f8-4571-b524-ce73d61665df/disk.info" "released" by "nova.virt.libvirt.imagebackend.Image.resolve_driver_format.<locals>.write_to_disk_info_file" :: held 0.001s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:424
Oct 09 16:25:36 compute-0 nova_compute[117331]: 2025-10-09 16:25:36.738 2 DEBUG oslo_utils.imageutils.format_inspector [None req-2c1c996e-ac28-41e0-9576-f4a2f89c1104 c2c0f7c2f15e4da6881dc393064b0e16 1a345bddd4804404a55948133ea8150f - - default default] Format inspector for vmdk does not match, excluding from consideration (Signature KDMV not found: b'\xebH\x90\x00') _process_chunk /usr/lib/python3.12/site-packages/oslo_utils/imageutils/format_inspector.py:1365
Oct 09 16:25:36 compute-0 nova_compute[117331]: 2025-10-09 16:25:36.744 2 DEBUG oslo_utils.imageutils.format_inspector [None req-2c1c996e-ac28-41e0-9576-f4a2f89c1104 c2c0f7c2f15e4da6881dc393064b0e16 1a345bddd4804404a55948133ea8150f - - default default] Format inspector for vhdx does not match, excluding from consideration (Region signature not found at 30000) _process_chunk /usr/lib/python3.12/site-packages/oslo_utils/imageutils/format_inspector.py:1365
Oct 09 16:25:36 compute-0 nova_compute[117331]: 2025-10-09 16:25:36.746 2 WARNING neutronclient.v2_0.client [req-cac4ce3d-561b-4469-bb7b-d498f287adb3 req-a63193a2-6a8b-4a9d-a699-e2b61bd65e33 ee15961ad2be4bada6c106ac4f69f422 c076c42271004e7aa86e84416e33f826 - - default default] The python binding code in neutronclient is deprecated in favor of OpenstackSDK, please use that as this will be removed in a future release.
Oct 09 16:25:36 compute-0 nova_compute[117331]: 2025-10-09 16:25:36.749 2 DEBUG oslo_concurrency.processutils [None req-2c1c996e-ac28-41e0-9576-f4a2f89c1104 c2c0f7c2f15e4da6881dc393064b0e16 1a345bddd4804404a55948133ea8150f - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/cea3dacfdea0f3734ae526b812744e847bc2d356 --force-share --output=json execute /usr/lib/python3.12/site-packages/oslo_concurrency/processutils.py:349
Oct 09 16:25:36 compute-0 nova_compute[117331]: 2025-10-09 16:25:36.829 2 DEBUG nova.network.neutron [req-cac4ce3d-561b-4469-bb7b-d498f287adb3 req-a63193a2-6a8b-4a9d-a699-e2b61bd65e33 ee15961ad2be4bada6c106ac4f69f422 c076c42271004e7aa86e84416e33f826 - - default default] [instance: da070580-b1f8-4571-b524-ce73d61665df] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.12/site-packages/nova/network/neutron.py:3383
Oct 09 16:25:36 compute-0 nova_compute[117331]: 2025-10-09 16:25:36.838 2 DEBUG oslo_concurrency.processutils [None req-2c1c996e-ac28-41e0-9576-f4a2f89c1104 c2c0f7c2f15e4da6881dc393064b0e16 1a345bddd4804404a55948133ea8150f - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/cea3dacfdea0f3734ae526b812744e847bc2d356 --force-share --output=json" returned: 0 in 0.089s execute /usr/lib/python3.12/site-packages/oslo_concurrency/processutils.py:372
Oct 09 16:25:36 compute-0 nova_compute[117331]: 2025-10-09 16:25:36.839 2 DEBUG oslo_concurrency.lockutils [None req-2c1c996e-ac28-41e0-9576-f4a2f89c1104 c2c0f7c2f15e4da6881dc393064b0e16 1a345bddd4804404a55948133ea8150f - - default default] Acquiring lock "cea3dacfdea0f3734ae526b812744e847bc2d356" by "nova.virt.libvirt.imagebackend.Qcow2.create_image.<locals>.create_qcow2_image" inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:405
Oct 09 16:25:36 compute-0 nova_compute[117331]: 2025-10-09 16:25:36.839 2 DEBUG oslo_concurrency.lockutils [None req-2c1c996e-ac28-41e0-9576-f4a2f89c1104 c2c0f7c2f15e4da6881dc393064b0e16 1a345bddd4804404a55948133ea8150f - - default default] Lock "cea3dacfdea0f3734ae526b812744e847bc2d356" acquired by "nova.virt.libvirt.imagebackend.Qcow2.create_image.<locals>.create_qcow2_image" :: waited 0.001s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:410
Oct 09 16:25:36 compute-0 nova_compute[117331]: 2025-10-09 16:25:36.840 2 DEBUG oslo_utils.imageutils.format_inspector [None req-2c1c996e-ac28-41e0-9576-f4a2f89c1104 c2c0f7c2f15e4da6881dc393064b0e16 1a345bddd4804404a55948133ea8150f - - default default] Format inspector for vmdk does not match, excluding from consideration (Signature KDMV not found: b'\xebH\x90\x00') _process_chunk /usr/lib/python3.12/site-packages/oslo_utils/imageutils/format_inspector.py:1365
Oct 09 16:25:36 compute-0 nova_compute[117331]: 2025-10-09 16:25:36.844 2 DEBUG oslo_utils.imageutils.format_inspector [None req-2c1c996e-ac28-41e0-9576-f4a2f89c1104 c2c0f7c2f15e4da6881dc393064b0e16 1a345bddd4804404a55948133ea8150f - - default default] Format inspector for vhdx does not match, excluding from consideration (Region signature not found at 30000) _process_chunk /usr/lib/python3.12/site-packages/oslo_utils/imageutils/format_inspector.py:1365
Oct 09 16:25:36 compute-0 nova_compute[117331]: 2025-10-09 16:25:36.844 2 DEBUG oslo_concurrency.processutils [None req-2c1c996e-ac28-41e0-9576-f4a2f89c1104 c2c0f7c2f15e4da6881dc393064b0e16 1a345bddd4804404a55948133ea8150f - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/cea3dacfdea0f3734ae526b812744e847bc2d356 --force-share --output=json execute /usr/lib/python3.12/site-packages/oslo_concurrency/processutils.py:349
Oct 09 16:25:36 compute-0 nova_compute[117331]: 2025-10-09 16:25:36.913 2 DEBUG oslo_concurrency.processutils [None req-2c1c996e-ac28-41e0-9576-f4a2f89c1104 c2c0f7c2f15e4da6881dc393064b0e16 1a345bddd4804404a55948133ea8150f - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/cea3dacfdea0f3734ae526b812744e847bc2d356 --force-share --output=json" returned: 0 in 0.068s execute /usr/lib/python3.12/site-packages/oslo_concurrency/processutils.py:372
Oct 09 16:25:36 compute-0 nova_compute[117331]: 2025-10-09 16:25:36.913 2 DEBUG oslo_concurrency.processutils [None req-2c1c996e-ac28-41e0-9576-f4a2f89c1104 c2c0f7c2f15e4da6881dc393064b0e16 1a345bddd4804404a55948133ea8150f - - default default] Running cmd (subprocess): env LC_ALL=C LANG=C qemu-img create -f qcow2 -o backing_file=/var/lib/nova/instances/_base/cea3dacfdea0f3734ae526b812744e847bc2d356,backing_fmt=raw /var/lib/nova/instances/da070580-b1f8-4571-b524-ce73d61665df/disk 1073741824 execute /usr/lib/python3.12/site-packages/oslo_concurrency/processutils.py:349
Oct 09 16:25:36 compute-0 nova_compute[117331]: 2025-10-09 16:25:36.949 2 DEBUG oslo_concurrency.processutils [None req-2c1c996e-ac28-41e0-9576-f4a2f89c1104 c2c0f7c2f15e4da6881dc393064b0e16 1a345bddd4804404a55948133ea8150f - - default default] CMD "env LC_ALL=C LANG=C qemu-img create -f qcow2 -o backing_file=/var/lib/nova/instances/_base/cea3dacfdea0f3734ae526b812744e847bc2d356,backing_fmt=raw /var/lib/nova/instances/da070580-b1f8-4571-b524-ce73d61665df/disk 1073741824" returned: 0 in 0.035s execute /usr/lib/python3.12/site-packages/oslo_concurrency/processutils.py:372
Oct 09 16:25:36 compute-0 nova_compute[117331]: 2025-10-09 16:25:36.950 2 DEBUG oslo_concurrency.lockutils [None req-2c1c996e-ac28-41e0-9576-f4a2f89c1104 c2c0f7c2f15e4da6881dc393064b0e16 1a345bddd4804404a55948133ea8150f - - default default] Lock "cea3dacfdea0f3734ae526b812744e847bc2d356" "released" by "nova.virt.libvirt.imagebackend.Qcow2.create_image.<locals>.create_qcow2_image" :: held 0.110s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:424
Oct 09 16:25:36 compute-0 nova_compute[117331]: 2025-10-09 16:25:36.950 2 DEBUG oslo_concurrency.processutils [None req-2c1c996e-ac28-41e0-9576-f4a2f89c1104 c2c0f7c2f15e4da6881dc393064b0e16 1a345bddd4804404a55948133ea8150f - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/cea3dacfdea0f3734ae526b812744e847bc2d356 --force-share --output=json execute /usr/lib/python3.12/site-packages/oslo_concurrency/processutils.py:349
Oct 09 16:25:36 compute-0 nova_compute[117331]: 2025-10-09 16:25:36.970 2 DEBUG nova.network.neutron [req-cac4ce3d-561b-4469-bb7b-d498f287adb3 req-a63193a2-6a8b-4a9d-a699-e2b61bd65e33 ee15961ad2be4bada6c106ac4f69f422 c076c42271004e7aa86e84416e33f826 - - default default] [instance: da070580-b1f8-4571-b524-ce73d61665df] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.12/site-packages/nova/network/neutron.py:116
Oct 09 16:25:37 compute-0 nova_compute[117331]: 2025-10-09 16:25:37.003 2 DEBUG oslo_concurrency.processutils [None req-2c1c996e-ac28-41e0-9576-f4a2f89c1104 c2c0f7c2f15e4da6881dc393064b0e16 1a345bddd4804404a55948133ea8150f - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/cea3dacfdea0f3734ae526b812744e847bc2d356 --force-share --output=json" returned: 0 in 0.053s execute /usr/lib/python3.12/site-packages/oslo_concurrency/processutils.py:372
Oct 09 16:25:37 compute-0 nova_compute[117331]: 2025-10-09 16:25:37.004 2 DEBUG nova.virt.disk.api [None req-2c1c996e-ac28-41e0-9576-f4a2f89c1104 c2c0f7c2f15e4da6881dc393064b0e16 1a345bddd4804404a55948133ea8150f - - default default] Checking if we can resize image /var/lib/nova/instances/da070580-b1f8-4571-b524-ce73d61665df/disk. size=1073741824 can_resize_image /usr/lib/python3.12/site-packages/nova/virt/disk/api.py:164
Oct 09 16:25:37 compute-0 nova_compute[117331]: 2025-10-09 16:25:37.005 2 DEBUG oslo_concurrency.processutils [None req-2c1c996e-ac28-41e0-9576-f4a2f89c1104 c2c0f7c2f15e4da6881dc393064b0e16 1a345bddd4804404a55948133ea8150f - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/da070580-b1f8-4571-b524-ce73d61665df/disk --force-share --output=json execute /usr/lib/python3.12/site-packages/oslo_concurrency/processutils.py:349
Oct 09 16:25:37 compute-0 nova_compute[117331]: 2025-10-09 16:25:37.059 2 DEBUG oslo_concurrency.processutils [None req-2c1c996e-ac28-41e0-9576-f4a2f89c1104 c2c0f7c2f15e4da6881dc393064b0e16 1a345bddd4804404a55948133ea8150f - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/da070580-b1f8-4571-b524-ce73d61665df/disk --force-share --output=json" returned: 0 in 0.055s execute /usr/lib/python3.12/site-packages/oslo_concurrency/processutils.py:372
Oct 09 16:25:37 compute-0 nova_compute[117331]: 2025-10-09 16:25:37.061 2 DEBUG nova.virt.disk.api [None req-2c1c996e-ac28-41e0-9576-f4a2f89c1104 c2c0f7c2f15e4da6881dc393064b0e16 1a345bddd4804404a55948133ea8150f - - default default] Cannot resize image /var/lib/nova/instances/da070580-b1f8-4571-b524-ce73d61665df/disk to a smaller size. can_resize_image /usr/lib/python3.12/site-packages/nova/virt/disk/api.py:170
Oct 09 16:25:37 compute-0 nova_compute[117331]: 2025-10-09 16:25:37.061 2 DEBUG nova.virt.libvirt.driver [None req-2c1c996e-ac28-41e0-9576-f4a2f89c1104 c2c0f7c2f15e4da6881dc393064b0e16 1a345bddd4804404a55948133ea8150f - - default default] [instance: da070580-b1f8-4571-b524-ce73d61665df] Created local disks _create_image /usr/lib/python3.12/site-packages/nova/virt/libvirt/driver.py:5270
Oct 09 16:25:37 compute-0 nova_compute[117331]: 2025-10-09 16:25:37.061 2 DEBUG nova.virt.libvirt.driver [None req-2c1c996e-ac28-41e0-9576-f4a2f89c1104 c2c0f7c2f15e4da6881dc393064b0e16 1a345bddd4804404a55948133ea8150f - - default default] [instance: da070580-b1f8-4571-b524-ce73d61665df] Ensure instance console log exists: /var/lib/nova/instances/da070580-b1f8-4571-b524-ce73d61665df/console.log _ensure_console_log_for_instance /usr/lib/python3.12/site-packages/nova/virt/libvirt/driver.py:5017
Oct 09 16:25:37 compute-0 nova_compute[117331]: 2025-10-09 16:25:37.062 2 DEBUG oslo_concurrency.lockutils [None req-2c1c996e-ac28-41e0-9576-f4a2f89c1104 c2c0f7c2f15e4da6881dc393064b0e16 1a345bddd4804404a55948133ea8150f - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:405
Oct 09 16:25:37 compute-0 nova_compute[117331]: 2025-10-09 16:25:37.062 2 DEBUG oslo_concurrency.lockutils [None req-2c1c996e-ac28-41e0-9576-f4a2f89c1104 c2c0f7c2f15e4da6881dc393064b0e16 1a345bddd4804404a55948133ea8150f - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:410
Oct 09 16:25:37 compute-0 nova_compute[117331]: 2025-10-09 16:25:37.062 2 DEBUG oslo_concurrency.lockutils [None req-2c1c996e-ac28-41e0-9576-f4a2f89c1104 c2c0f7c2f15e4da6881dc393064b0e16 1a345bddd4804404a55948133ea8150f - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:424
Oct 09 16:25:37 compute-0 nova_compute[117331]: 2025-10-09 16:25:37.477 2 DEBUG oslo_concurrency.lockutils [req-cac4ce3d-561b-4469-bb7b-d498f287adb3 req-a63193a2-6a8b-4a9d-a699-e2b61bd65e33 ee15961ad2be4bada6c106ac4f69f422 c076c42271004e7aa86e84416e33f826 - - default default] Releasing lock "refresh_cache-da070580-b1f8-4571-b524-ce73d61665df" lock /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:334
Oct 09 16:25:37 compute-0 nova_compute[117331]: 2025-10-09 16:25:37.478 2 DEBUG oslo_concurrency.lockutils [None req-2c1c996e-ac28-41e0-9576-f4a2f89c1104 c2c0f7c2f15e4da6881dc393064b0e16 1a345bddd4804404a55948133ea8150f - - default default] Acquired lock "refresh_cache-da070580-b1f8-4571-b524-ce73d61665df" lock /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:316
Oct 09 16:25:37 compute-0 nova_compute[117331]: 2025-10-09 16:25:37.479 2 DEBUG nova.network.neutron [None req-2c1c996e-ac28-41e0-9576-f4a2f89c1104 c2c0f7c2f15e4da6881dc393064b0e16 1a345bddd4804404a55948133ea8150f - - default default] [instance: da070580-b1f8-4571-b524-ce73d61665df] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.12/site-packages/nova/network/neutron.py:2070
Oct 09 16:25:37 compute-0 nova_compute[117331]: 2025-10-09 16:25:37.561 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 09 16:25:38 compute-0 nova_compute[117331]: 2025-10-09 16:25:38.044 2 DEBUG nova.network.neutron [None req-2c1c996e-ac28-41e0-9576-f4a2f89c1104 c2c0f7c2f15e4da6881dc393064b0e16 1a345bddd4804404a55948133ea8150f - - default default] [instance: da070580-b1f8-4571-b524-ce73d61665df] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.12/site-packages/nova/network/neutron.py:3383
Oct 09 16:25:38 compute-0 nova_compute[117331]: 2025-10-09 16:25:38.295 2 WARNING neutronclient.v2_0.client [None req-2c1c996e-ac28-41e0-9576-f4a2f89c1104 c2c0f7c2f15e4da6881dc393064b0e16 1a345bddd4804404a55948133ea8150f - - default default] The python binding code in neutronclient is deprecated in favor of OpenstackSDK, please use that as this will be removed in a future release.
Oct 09 16:25:38 compute-0 nova_compute[117331]: 2025-10-09 16:25:38.606 2 DEBUG nova.network.neutron [None req-2c1c996e-ac28-41e0-9576-f4a2f89c1104 c2c0f7c2f15e4da6881dc393064b0e16 1a345bddd4804404a55948133ea8150f - - default default] [instance: da070580-b1f8-4571-b524-ce73d61665df] Updating instance_info_cache with network_info: [{"id": "c06c63eb-935d-42e3-9672-1d18b6bca7ad", "address": "fa:16:3e:49:af:77", "network": {"id": "18ef4241-0151-441c-abdc-42d4b3a21b30", "bridge": "br-int", "label": "tempest-TestExecuteNodeResourceConsolidationStrategy-129271777-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c2c321aa8a494a8e8b49c81b79e3ceca", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc06c63eb-93", "ovs_interfaceid": "c06c63eb-935d-42e3-9672-1d18b6bca7ad", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.12/site-packages/nova/network/neutron.py:116
Oct 09 16:25:38 compute-0 podman[147163]: 2025-10-09 16:25:38.849329299 +0000 UTC m=+0.074460345 container health_status 270830b5167cafbab735d8118980f26987e673cd0fa464dd2202394830bcca66 (image=38.102.83.66:5001/podified-master-centos10/openstack-multipathd:watcher_latest, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': '38.102.83.66:5001/podified-master-centos10/openstack-multipathd:watcher_latest', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd, io.buildah.version=1.41.4, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, tcib_build_tag=watcher_latest, tcib_managed=true, config_id=multipathd, org.label-schema.build-date=20251007, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.vendor=CentOS)
Oct 09 16:25:39 compute-0 nova_compute[117331]: 2025-10-09 16:25:39.113 2 DEBUG oslo_concurrency.lockutils [None req-2c1c996e-ac28-41e0-9576-f4a2f89c1104 c2c0f7c2f15e4da6881dc393064b0e16 1a345bddd4804404a55948133ea8150f - - default default] Releasing lock "refresh_cache-da070580-b1f8-4571-b524-ce73d61665df" lock /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:334
Oct 09 16:25:39 compute-0 nova_compute[117331]: 2025-10-09 16:25:39.114 2 DEBUG nova.compute.manager [None req-2c1c996e-ac28-41e0-9576-f4a2f89c1104 c2c0f7c2f15e4da6881dc393064b0e16 1a345bddd4804404a55948133ea8150f - - default default] [instance: da070580-b1f8-4571-b524-ce73d61665df] Instance network_info: |[{"id": "c06c63eb-935d-42e3-9672-1d18b6bca7ad", "address": "fa:16:3e:49:af:77", "network": {"id": "18ef4241-0151-441c-abdc-42d4b3a21b30", "bridge": "br-int", "label": "tempest-TestExecuteNodeResourceConsolidationStrategy-129271777-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c2c321aa8a494a8e8b49c81b79e3ceca", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc06c63eb-93", "ovs_interfaceid": "c06c63eb-935d-42e3-9672-1d18b6bca7ad", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.12/site-packages/nova/compute/manager.py:2031
Oct 09 16:25:39 compute-0 nova_compute[117331]: 2025-10-09 16:25:39.116 2 DEBUG nova.virt.libvirt.driver [None req-2c1c996e-ac28-41e0-9576-f4a2f89c1104 c2c0f7c2f15e4da6881dc393064b0e16 1a345bddd4804404a55948133ea8150f - - default default] [instance: da070580-b1f8-4571-b524-ce73d61665df] Start _get_guest_xml network_info=[{"id": "c06c63eb-935d-42e3-9672-1d18b6bca7ad", "address": "fa:16:3e:49:af:77", "network": {"id": "18ef4241-0151-441c-abdc-42d4b3a21b30", "bridge": "br-int", "label": "tempest-TestExecuteNodeResourceConsolidationStrategy-129271777-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c2c321aa8a494a8e8b49c81b79e3ceca", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc06c63eb-93", "ovs_interfaceid": "c06c63eb-935d-42e3-9672-1d18b6bca7ad", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-09T16:07:02Z,direct_url=<?>,disk_format='qcow2',id=b7d6e0af-25e4-4227-9dc6-43143898ceee,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='b30e8cf5e10742f190212b4cb97ce2c9',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-09T16:07:04Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'device_type': 'disk', 'encrypted': False, 'size': 0, 'encryption_format': None, 'guest_format': None, 'boot_index': 0, 'device_name': '/dev/vda', 'disk_bus': 'virtio', 'encryption_options': None, 'encryption_secret_uuid': None, 'image_id': 'b7d6e0af-25e4-4227-9dc6-43143898ceee'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None}share_info=None _get_guest_xml /usr/lib/python3.12/site-packages/nova/virt/libvirt/driver.py:8046
Oct 09 16:25:39 compute-0 nova_compute[117331]: 2025-10-09 16:25:39.120 2 WARNING nova.virt.libvirt.driver [None req-2c1c996e-ac28-41e0-9576-f4a2f89c1104 c2c0f7c2f15e4da6881dc393064b0e16 1a345bddd4804404a55948133ea8150f - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Oct 09 16:25:39 compute-0 nova_compute[117331]: 2025-10-09 16:25:39.122 2 DEBUG nova.virt.driver [None req-2c1c996e-ac28-41e0-9576-f4a2f89c1104 c2c0f7c2f15e4da6881dc393064b0e16 1a345bddd4804404a55948133ea8150f - - default default] InstanceDriverMetadata: InstanceDriverMetadata(root_type='image', root_id='b7d6e0af-25e4-4227-9dc6-43143898ceee', instance_meta=NovaInstanceMeta(name='tempest-TestExecuteNodeResourceConsolidationStrategy-server-978482825', uuid='da070580-b1f8-4571-b524-ce73d61665df'), owner=OwnerMeta(userid='c2c0f7c2f15e4da6881dc393064b0e16', username='tempest-TestExecuteNodeResourceConsolidationStrategy-1915104332-project-admin', projectid='1a345bddd4804404a55948133ea8150f', projectname='tempest-TestExecuteNodeResourceConsolidationStrategy-1915104332'), image=ImageMeta(id='b7d6e0af-25e4-4227-9dc6-43143898ceee', name=None, container_format='bare', disk_format='qcow2', min_disk=1, min_ram=0, properties={'hw_rng_model': 'virtio'}), flavor=FlavorMeta(name='m1.nano', flavorid='5aeb5bf9-70f3-4ab6-b418-2c3319a7d2b3', memory_mb=128, vcpus=1, root_gb=1, ephemeral_gb=0, extra_specs={'hw_rng:allowed': 'True'}, swap=0), network_info=[{"id": "c06c63eb-935d-42e3-9672-1d18b6bca7ad", "address": "fa:16:3e:49:af:77", "network": {"id": "18ef4241-0151-441c-abdc-42d4b3a21b30", "bridge": "br-int", "label": "tempest-TestExecuteNodeResourceConsolidationStrategy-129271777-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c2c321aa8a494a8e8b49c81b79e3ceca", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": 
"tapc06c63eb-93", "ovs_interfaceid": "c06c63eb-935d-42e3-9672-1d18b6bca7ad", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}], nova_package='32.1.0-0.20251008143712.076498e.el10', creation_time=1760027139.1222029) get_instance_driver_metadata /usr/lib/python3.12/site-packages/nova/virt/driver.py:437
Oct 09 16:25:39 compute-0 nova_compute[117331]: 2025-10-09 16:25:39.126 2 DEBUG nova.virt.libvirt.host [None req-2c1c996e-ac28-41e0-9576-f4a2f89c1104 c2c0f7c2f15e4da6881dc393064b0e16 1a345bddd4804404a55948133ea8150f - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.12/site-packages/nova/virt/libvirt/host.py:1698
Oct 09 16:25:39 compute-0 nova_compute[117331]: 2025-10-09 16:25:39.127 2 DEBUG nova.virt.libvirt.host [None req-2c1c996e-ac28-41e0-9576-f4a2f89c1104 c2c0f7c2f15e4da6881dc393064b0e16 1a345bddd4804404a55948133ea8150f - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.12/site-packages/nova/virt/libvirt/host.py:1708
Oct 09 16:25:39 compute-0 nova_compute[117331]: 2025-10-09 16:25:39.129 2 DEBUG nova.virt.libvirt.host [None req-2c1c996e-ac28-41e0-9576-f4a2f89c1104 c2c0f7c2f15e4da6881dc393064b0e16 1a345bddd4804404a55948133ea8150f - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.12/site-packages/nova/virt/libvirt/host.py:1717
Oct 09 16:25:39 compute-0 nova_compute[117331]: 2025-10-09 16:25:39.130 2 DEBUG nova.virt.libvirt.host [None req-2c1c996e-ac28-41e0-9576-f4a2f89c1104 c2c0f7c2f15e4da6881dc393064b0e16 1a345bddd4804404a55948133ea8150f - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.12/site-packages/nova/virt/libvirt/host.py:1724
Oct 09 16:25:39 compute-0 nova_compute[117331]: 2025-10-09 16:25:39.130 2 DEBUG nova.virt.libvirt.driver [None req-2c1c996e-ac28-41e0-9576-f4a2f89c1104 c2c0f7c2f15e4da6881dc393064b0e16 1a345bddd4804404a55948133ea8150f - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.12/site-packages/nova/virt/libvirt/driver.py:5809
Oct 09 16:25:39 compute-0 nova_compute[117331]: 2025-10-09 16:25:39.130 2 DEBUG nova.virt.hardware [None req-2c1c996e-ac28-41e0-9576-f4a2f89c1104 c2c0f7c2f15e4da6881dc393064b0e16 1a345bddd4804404a55948133ea8150f - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-10-09T16:07:00Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='5aeb5bf9-70f3-4ab6-b418-2c3319a7d2b3',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-09T16:07:02Z,direct_url=<?>,disk_format='qcow2',id=b7d6e0af-25e4-4227-9dc6-43143898ceee,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='b30e8cf5e10742f190212b4cb97ce2c9',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-09T16:07:04Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.12/site-packages/nova/virt/hardware.py:571
Oct 09 16:25:39 compute-0 nova_compute[117331]: 2025-10-09 16:25:39.131 2 DEBUG nova.virt.hardware [None req-2c1c996e-ac28-41e0-9576-f4a2f89c1104 c2c0f7c2f15e4da6881dc393064b0e16 1a345bddd4804404a55948133ea8150f - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.12/site-packages/nova/virt/hardware.py:356
Oct 09 16:25:39 compute-0 nova_compute[117331]: 2025-10-09 16:25:39.131 2 DEBUG nova.virt.hardware [None req-2c1c996e-ac28-41e0-9576-f4a2f89c1104 c2c0f7c2f15e4da6881dc393064b0e16 1a345bddd4804404a55948133ea8150f - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.12/site-packages/nova/virt/hardware.py:360
Oct 09 16:25:39 compute-0 nova_compute[117331]: 2025-10-09 16:25:39.131 2 DEBUG nova.virt.hardware [None req-2c1c996e-ac28-41e0-9576-f4a2f89c1104 c2c0f7c2f15e4da6881dc393064b0e16 1a345bddd4804404a55948133ea8150f - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.12/site-packages/nova/virt/hardware.py:396
Oct 09 16:25:39 compute-0 nova_compute[117331]: 2025-10-09 16:25:39.131 2 DEBUG nova.virt.hardware [None req-2c1c996e-ac28-41e0-9576-f4a2f89c1104 c2c0f7c2f15e4da6881dc393064b0e16 1a345bddd4804404a55948133ea8150f - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.12/site-packages/nova/virt/hardware.py:400
Oct 09 16:25:39 compute-0 nova_compute[117331]: 2025-10-09 16:25:39.131 2 DEBUG nova.virt.hardware [None req-2c1c996e-ac28-41e0-9576-f4a2f89c1104 c2c0f7c2f15e4da6881dc393064b0e16 1a345bddd4804404a55948133ea8150f - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.12/site-packages/nova/virt/hardware.py:438
Oct 09 16:25:39 compute-0 nova_compute[117331]: 2025-10-09 16:25:39.131 2 DEBUG nova.virt.hardware [None req-2c1c996e-ac28-41e0-9576-f4a2f89c1104 c2c0f7c2f15e4da6881dc393064b0e16 1a345bddd4804404a55948133ea8150f - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.12/site-packages/nova/virt/hardware.py:577
Oct 09 16:25:39 compute-0 nova_compute[117331]: 2025-10-09 16:25:39.131 2 DEBUG nova.virt.hardware [None req-2c1c996e-ac28-41e0-9576-f4a2f89c1104 c2c0f7c2f15e4da6881dc393064b0e16 1a345bddd4804404a55948133ea8150f - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.12/site-packages/nova/virt/hardware.py:479
Oct 09 16:25:39 compute-0 nova_compute[117331]: 2025-10-09 16:25:39.132 2 DEBUG nova.virt.hardware [None req-2c1c996e-ac28-41e0-9576-f4a2f89c1104 c2c0f7c2f15e4da6881dc393064b0e16 1a345bddd4804404a55948133ea8150f - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.12/site-packages/nova/virt/hardware.py:509
Oct 09 16:25:39 compute-0 nova_compute[117331]: 2025-10-09 16:25:39.132 2 DEBUG nova.virt.hardware [None req-2c1c996e-ac28-41e0-9576-f4a2f89c1104 c2c0f7c2f15e4da6881dc393064b0e16 1a345bddd4804404a55948133ea8150f - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.12/site-packages/nova/virt/hardware.py:583
Oct 09 16:25:39 compute-0 nova_compute[117331]: 2025-10-09 16:25:39.132 2 DEBUG nova.virt.hardware [None req-2c1c996e-ac28-41e0-9576-f4a2f89c1104 c2c0f7c2f15e4da6881dc393064b0e16 1a345bddd4804404a55948133ea8150f - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.12/site-packages/nova/virt/hardware.py:585
Oct 09 16:25:39 compute-0 nova_compute[117331]: 2025-10-09 16:25:39.135 2 DEBUG nova.virt.libvirt.vif [None req-2c1c996e-ac28-41e0-9576-f4a2f89c1104 c2c0f7c2f15e4da6881dc393064b0e16 1a345bddd4804404a55948133ea8150f - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,compute_id=1,config_drive='',created_at=2025-10-09T16:25:30Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description=None,display_name='tempest-TestExecuteNodeResourceConsolidationStrategy-server-978482825',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testexecutenoderesourceconsolidationstrategy-server-978',id=16,image_ref='b7d6e0af-25e4-4227-9dc6-43143898ceee',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='1a345bddd4804404a55948133ea8150f',ramdisk_id='',reservation_id='r-3cl3cajx',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,manager,admin,member',image_base_image_ref='b7d6e0af-25e4-4227-9dc6-43143898ceee',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestExecuteNodeResourceConsolidationStrategy-1915104332',owner_use
r_name='tempest-TestExecuteNodeResourceConsolidationStrategy-1915104332-project-admin'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-09T16:25:35Z,user_data=None,user_id='c2c0f7c2f15e4da6881dc393064b0e16',uuid=da070580-b1f8-4571-b524-ce73d61665df,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "c06c63eb-935d-42e3-9672-1d18b6bca7ad", "address": "fa:16:3e:49:af:77", "network": {"id": "18ef4241-0151-441c-abdc-42d4b3a21b30", "bridge": "br-int", "label": "tempest-TestExecuteNodeResourceConsolidationStrategy-129271777-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c2c321aa8a494a8e8b49c81b79e3ceca", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc06c63eb-93", "ovs_interfaceid": "c06c63eb-935d-42e3-9672-1d18b6bca7ad", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.12/site-packages/nova/virt/libvirt/vif.py:574
Oct 09 16:25:39 compute-0 nova_compute[117331]: 2025-10-09 16:25:39.136 2 DEBUG nova.network.os_vif_util [None req-2c1c996e-ac28-41e0-9576-f4a2f89c1104 c2c0f7c2f15e4da6881dc393064b0e16 1a345bddd4804404a55948133ea8150f - - default default] Converting VIF {"id": "c06c63eb-935d-42e3-9672-1d18b6bca7ad", "address": "fa:16:3e:49:af:77", "network": {"id": "18ef4241-0151-441c-abdc-42d4b3a21b30", "bridge": "br-int", "label": "tempest-TestExecuteNodeResourceConsolidationStrategy-129271777-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c2c321aa8a494a8e8b49c81b79e3ceca", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc06c63eb-93", "ovs_interfaceid": "c06c63eb-935d-42e3-9672-1d18b6bca7ad", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.12/site-packages/nova/network/os_vif_util.py:511
Oct 09 16:25:39 compute-0 nova_compute[117331]: 2025-10-09 16:25:39.136 2 DEBUG nova.network.os_vif_util [None req-2c1c996e-ac28-41e0-9576-f4a2f89c1104 c2c0f7c2f15e4da6881dc393064b0e16 1a345bddd4804404a55948133ea8150f - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:49:af:77,bridge_name='br-int',has_traffic_filtering=True,id=c06c63eb-935d-42e3-9672-1d18b6bca7ad,network=Network(18ef4241-0151-441c-abdc-42d4b3a21b30),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapc06c63eb-93') nova_to_osvif_vif /usr/lib/python3.12/site-packages/nova/network/os_vif_util.py:548
Oct 09 16:25:39 compute-0 nova_compute[117331]: 2025-10-09 16:25:39.137 2 DEBUG nova.objects.instance [None req-2c1c996e-ac28-41e0-9576-f4a2f89c1104 c2c0f7c2f15e4da6881dc393064b0e16 1a345bddd4804404a55948133ea8150f - - default default] Lazy-loading 'pci_devices' on Instance uuid da070580-b1f8-4571-b524-ce73d61665df obj_load_attr /usr/lib/python3.12/site-packages/nova/objects/instance.py:1141
Oct 09 16:25:39 compute-0 ovn_controller[19752]: 2025-10-09T16:25:39Z|00139|memory_trim|INFO|Detected inactivity (last active 30003 ms ago): trimming memory
Oct 09 16:25:39 compute-0 nova_compute[117331]: 2025-10-09 16:25:39.645 2 DEBUG nova.virt.libvirt.driver [None req-2c1c996e-ac28-41e0-9576-f4a2f89c1104 c2c0f7c2f15e4da6881dc393064b0e16 1a345bddd4804404a55948133ea8150f - - default default] [instance: da070580-b1f8-4571-b524-ce73d61665df] End _get_guest_xml xml=<domain type="kvm">
Oct 09 16:25:39 compute-0 nova_compute[117331]:   <uuid>da070580-b1f8-4571-b524-ce73d61665df</uuid>
Oct 09 16:25:39 compute-0 nova_compute[117331]:   <name>instance-00000010</name>
Oct 09 16:25:39 compute-0 nova_compute[117331]:   <memory>131072</memory>
Oct 09 16:25:39 compute-0 nova_compute[117331]:   <vcpu>1</vcpu>
Oct 09 16:25:39 compute-0 nova_compute[117331]:   <metadata>
Oct 09 16:25:39 compute-0 nova_compute[117331]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Oct 09 16:25:39 compute-0 nova_compute[117331]:       <nova:package version="32.1.0-0.20251008143712.076498e.el10"/>
Oct 09 16:25:39 compute-0 nova_compute[117331]:       <nova:name>tempest-TestExecuteNodeResourceConsolidationStrategy-server-978482825</nova:name>
Oct 09 16:25:39 compute-0 nova_compute[117331]:       <nova:creationTime>2025-10-09 16:25:39</nova:creationTime>
Oct 09 16:25:39 compute-0 nova_compute[117331]:       <nova:flavor name="m1.nano" id="5aeb5bf9-70f3-4ab6-b418-2c3319a7d2b3">
Oct 09 16:25:39 compute-0 nova_compute[117331]:         <nova:memory>128</nova:memory>
Oct 09 16:25:39 compute-0 nova_compute[117331]:         <nova:disk>1</nova:disk>
Oct 09 16:25:39 compute-0 nova_compute[117331]:         <nova:swap>0</nova:swap>
Oct 09 16:25:39 compute-0 nova_compute[117331]:         <nova:ephemeral>0</nova:ephemeral>
Oct 09 16:25:39 compute-0 nova_compute[117331]:         <nova:vcpus>1</nova:vcpus>
Oct 09 16:25:39 compute-0 nova_compute[117331]:         <nova:extraSpecs>
Oct 09 16:25:39 compute-0 nova_compute[117331]:           <nova:extraSpec name="hw_rng:allowed">True</nova:extraSpec>
Oct 09 16:25:39 compute-0 nova_compute[117331]:         </nova:extraSpecs>
Oct 09 16:25:39 compute-0 nova_compute[117331]:       </nova:flavor>
Oct 09 16:25:39 compute-0 nova_compute[117331]:       <nova:image uuid="b7d6e0af-25e4-4227-9dc6-43143898ceee">
Oct 09 16:25:39 compute-0 nova_compute[117331]:         <nova:containerFormat>bare</nova:containerFormat>
Oct 09 16:25:39 compute-0 nova_compute[117331]:         <nova:diskFormat>qcow2</nova:diskFormat>
Oct 09 16:25:39 compute-0 nova_compute[117331]:         <nova:minDisk>1</nova:minDisk>
Oct 09 16:25:39 compute-0 nova_compute[117331]:         <nova:minRam>0</nova:minRam>
Oct 09 16:25:39 compute-0 nova_compute[117331]:         <nova:properties>
Oct 09 16:25:39 compute-0 nova_compute[117331]:           <nova:property name="hw_rng_model">virtio</nova:property>
Oct 09 16:25:39 compute-0 nova_compute[117331]:         </nova:properties>
Oct 09 16:25:39 compute-0 nova_compute[117331]:       </nova:image>
Oct 09 16:25:39 compute-0 nova_compute[117331]:       <nova:owner>
Oct 09 16:25:39 compute-0 nova_compute[117331]:         <nova:user uuid="c2c0f7c2f15e4da6881dc393064b0e16">tempest-TestExecuteNodeResourceConsolidationStrategy-1915104332-project-admin</nova:user>
Oct 09 16:25:39 compute-0 nova_compute[117331]:         <nova:project uuid="1a345bddd4804404a55948133ea8150f">tempest-TestExecuteNodeResourceConsolidationStrategy-1915104332</nova:project>
Oct 09 16:25:39 compute-0 nova_compute[117331]:       </nova:owner>
Oct 09 16:25:39 compute-0 nova_compute[117331]:       <nova:root type="image" uuid="b7d6e0af-25e4-4227-9dc6-43143898ceee"/>
Oct 09 16:25:39 compute-0 nova_compute[117331]:       <nova:ports>
Oct 09 16:25:39 compute-0 nova_compute[117331]:         <nova:port uuid="c06c63eb-935d-42e3-9672-1d18b6bca7ad">
Oct 09 16:25:39 compute-0 nova_compute[117331]:           <nova:ip type="fixed" address="10.100.0.13" ipVersion="4"/>
Oct 09 16:25:39 compute-0 nova_compute[117331]:         </nova:port>
Oct 09 16:25:39 compute-0 nova_compute[117331]:       </nova:ports>
Oct 09 16:25:39 compute-0 nova_compute[117331]:     </nova:instance>
Oct 09 16:25:39 compute-0 nova_compute[117331]:   </metadata>
Oct 09 16:25:39 compute-0 nova_compute[117331]:   <sysinfo type="smbios">
Oct 09 16:25:39 compute-0 nova_compute[117331]:     <system>
Oct 09 16:25:39 compute-0 nova_compute[117331]:       <entry name="manufacturer">RDO</entry>
Oct 09 16:25:39 compute-0 nova_compute[117331]:       <entry name="product">OpenStack Compute</entry>
Oct 09 16:25:39 compute-0 nova_compute[117331]:       <entry name="version">32.1.0-0.20251008143712.076498e.el10</entry>
Oct 09 16:25:39 compute-0 nova_compute[117331]:       <entry name="serial">da070580-b1f8-4571-b524-ce73d61665df</entry>
Oct 09 16:25:39 compute-0 nova_compute[117331]:       <entry name="uuid">da070580-b1f8-4571-b524-ce73d61665df</entry>
Oct 09 16:25:39 compute-0 nova_compute[117331]:       <entry name="family">Virtual Machine</entry>
Oct 09 16:25:39 compute-0 nova_compute[117331]:     </system>
Oct 09 16:25:39 compute-0 nova_compute[117331]:   </sysinfo>
Oct 09 16:25:39 compute-0 nova_compute[117331]:   <os>
Oct 09 16:25:39 compute-0 nova_compute[117331]:     <type arch="x86_64" machine="q35">hvm</type>
Oct 09 16:25:39 compute-0 nova_compute[117331]:     <boot dev="hd"/>
Oct 09 16:25:39 compute-0 nova_compute[117331]:     <smbios mode="sysinfo"/>
Oct 09 16:25:39 compute-0 nova_compute[117331]:   </os>
Oct 09 16:25:39 compute-0 nova_compute[117331]:   <features>
Oct 09 16:25:39 compute-0 nova_compute[117331]:     <acpi/>
Oct 09 16:25:39 compute-0 nova_compute[117331]:     <apic/>
Oct 09 16:25:39 compute-0 nova_compute[117331]:     <vmcoreinfo/>
Oct 09 16:25:39 compute-0 nova_compute[117331]:   </features>
Oct 09 16:25:39 compute-0 nova_compute[117331]:   <clock offset="utc">
Oct 09 16:25:39 compute-0 nova_compute[117331]:     <timer name="pit" tickpolicy="delay"/>
Oct 09 16:25:39 compute-0 nova_compute[117331]:     <timer name="rtc" tickpolicy="catchup"/>
Oct 09 16:25:39 compute-0 nova_compute[117331]:     <timer name="hpet" present="no"/>
Oct 09 16:25:39 compute-0 nova_compute[117331]:   </clock>
Oct 09 16:25:39 compute-0 nova_compute[117331]:   <cpu mode="host-model" match="exact">
Oct 09 16:25:39 compute-0 nova_compute[117331]:     <topology sockets="1" cores="1" threads="1"/>
Oct 09 16:25:39 compute-0 nova_compute[117331]:   </cpu>
Oct 09 16:25:39 compute-0 nova_compute[117331]:   <devices>
Oct 09 16:25:39 compute-0 nova_compute[117331]:     <disk type="file" device="disk">
Oct 09 16:25:39 compute-0 nova_compute[117331]:       <driver name="qemu" type="qcow2" cache="none"/>
Oct 09 16:25:39 compute-0 nova_compute[117331]:       <source file="/var/lib/nova/instances/da070580-b1f8-4571-b524-ce73d61665df/disk"/>
Oct 09 16:25:39 compute-0 nova_compute[117331]:       <target dev="vda" bus="virtio"/>
Oct 09 16:25:39 compute-0 nova_compute[117331]:     </disk>
Oct 09 16:25:39 compute-0 nova_compute[117331]:     <disk type="file" device="cdrom">
Oct 09 16:25:39 compute-0 nova_compute[117331]:       <driver name="qemu" type="raw" cache="none"/>
Oct 09 16:25:39 compute-0 nova_compute[117331]:       <source file="/var/lib/nova/instances/da070580-b1f8-4571-b524-ce73d61665df/disk.config"/>
Oct 09 16:25:39 compute-0 nova_compute[117331]:       <target dev="sda" bus="sata"/>
Oct 09 16:25:39 compute-0 nova_compute[117331]:     </disk>
Oct 09 16:25:39 compute-0 nova_compute[117331]:     <interface type="ethernet">
Oct 09 16:25:39 compute-0 nova_compute[117331]:       <mac address="fa:16:3e:49:af:77"/>
Oct 09 16:25:39 compute-0 nova_compute[117331]:       <model type="virtio"/>
Oct 09 16:25:39 compute-0 nova_compute[117331]:       <driver name="vhost" rx_queue_size="512"/>
Oct 09 16:25:39 compute-0 nova_compute[117331]:       <mtu size="1442"/>
Oct 09 16:25:39 compute-0 nova_compute[117331]:       <target dev="tapc06c63eb-93"/>
Oct 09 16:25:39 compute-0 nova_compute[117331]:     </interface>
Oct 09 16:25:39 compute-0 nova_compute[117331]:     <serial type="pty">
Oct 09 16:25:39 compute-0 nova_compute[117331]:       <log file="/var/lib/nova/instances/da070580-b1f8-4571-b524-ce73d61665df/console.log" append="off"/>
Oct 09 16:25:39 compute-0 nova_compute[117331]:     </serial>
Oct 09 16:25:39 compute-0 nova_compute[117331]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Oct 09 16:25:39 compute-0 nova_compute[117331]:     <video>
Oct 09 16:25:39 compute-0 nova_compute[117331]:       <model type="virtio"/>
Oct 09 16:25:39 compute-0 nova_compute[117331]:     </video>
Oct 09 16:25:39 compute-0 nova_compute[117331]:     <input type="tablet" bus="usb"/>
Oct 09 16:25:39 compute-0 nova_compute[117331]:     <rng model="virtio">
Oct 09 16:25:39 compute-0 nova_compute[117331]:       <backend model="random">/dev/urandom</backend>
Oct 09 16:25:39 compute-0 nova_compute[117331]:     </rng>
Oct 09 16:25:39 compute-0 nova_compute[117331]:     <controller type="pci" model="pcie-root"/>
Oct 09 16:25:39 compute-0 nova_compute[117331]:     <controller type="pci" model="pcie-root-port"/>
Oct 09 16:25:39 compute-0 nova_compute[117331]:     <controller type="pci" model="pcie-root-port"/>
Oct 09 16:25:39 compute-0 nova_compute[117331]:     <controller type="pci" model="pcie-root-port"/>
Oct 09 16:25:39 compute-0 nova_compute[117331]:     <controller type="pci" model="pcie-root-port"/>
Oct 09 16:25:39 compute-0 nova_compute[117331]:     <controller type="pci" model="pcie-root-port"/>
Oct 09 16:25:39 compute-0 nova_compute[117331]:     <controller type="pci" model="pcie-root-port"/>
Oct 09 16:25:39 compute-0 nova_compute[117331]:     <controller type="pci" model="pcie-root-port"/>
Oct 09 16:25:39 compute-0 nova_compute[117331]:     <controller type="pci" model="pcie-root-port"/>
Oct 09 16:25:39 compute-0 nova_compute[117331]:     <controller type="pci" model="pcie-root-port"/>
Oct 09 16:25:39 compute-0 nova_compute[117331]:     <controller type="pci" model="pcie-root-port"/>
Oct 09 16:25:39 compute-0 nova_compute[117331]:     <controller type="pci" model="pcie-root-port"/>
Oct 09 16:25:39 compute-0 nova_compute[117331]:     <controller type="pci" model="pcie-root-port"/>
Oct 09 16:25:39 compute-0 nova_compute[117331]:     <controller type="pci" model="pcie-root-port"/>
Oct 09 16:25:39 compute-0 nova_compute[117331]:     <controller type="pci" model="pcie-root-port"/>
Oct 09 16:25:39 compute-0 nova_compute[117331]:     <controller type="pci" model="pcie-root-port"/>
Oct 09 16:25:39 compute-0 nova_compute[117331]:     <controller type="pci" model="pcie-root-port"/>
Oct 09 16:25:39 compute-0 nova_compute[117331]:     <controller type="pci" model="pcie-root-port"/>
Oct 09 16:25:39 compute-0 nova_compute[117331]:     <controller type="pci" model="pcie-root-port"/>
Oct 09 16:25:39 compute-0 nova_compute[117331]:     <controller type="pci" model="pcie-root-port"/>
Oct 09 16:25:39 compute-0 nova_compute[117331]:     <controller type="pci" model="pcie-root-port"/>
Oct 09 16:25:39 compute-0 nova_compute[117331]:     <controller type="pci" model="pcie-root-port"/>
Oct 09 16:25:39 compute-0 nova_compute[117331]:     <controller type="pci" model="pcie-root-port"/>
Oct 09 16:25:39 compute-0 nova_compute[117331]:     <controller type="pci" model="pcie-root-port"/>
Oct 09 16:25:39 compute-0 nova_compute[117331]:     <controller type="pci" model="pcie-root-port"/>
Oct 09 16:25:39 compute-0 nova_compute[117331]:     <controller type="usb" index="0"/>
Oct 09 16:25:39 compute-0 nova_compute[117331]:     <memballoon model="virtio" autodeflate="on" freePageReporting="on">
Oct 09 16:25:39 compute-0 nova_compute[117331]:       <stats period="10"/>
Oct 09 16:25:39 compute-0 nova_compute[117331]:     </memballoon>
Oct 09 16:25:39 compute-0 nova_compute[117331]:   </devices>
Oct 09 16:25:39 compute-0 nova_compute[117331]: </domain>
Oct 09 16:25:39 compute-0 nova_compute[117331]:  _get_guest_xml /usr/lib/python3.12/site-packages/nova/virt/libvirt/driver.py:8052
Oct 09 16:25:39 compute-0 nova_compute[117331]: 2025-10-09 16:25:39.647 2 DEBUG nova.compute.manager [None req-2c1c996e-ac28-41e0-9576-f4a2f89c1104 c2c0f7c2f15e4da6881dc393064b0e16 1a345bddd4804404a55948133ea8150f - - default default] [instance: da070580-b1f8-4571-b524-ce73d61665df] Preparing to wait for external event network-vif-plugged-c06c63eb-935d-42e3-9672-1d18b6bca7ad prepare_for_instance_event /usr/lib/python3.12/site-packages/nova/compute/manager.py:307
Oct 09 16:25:39 compute-0 nova_compute[117331]: 2025-10-09 16:25:39.647 2 DEBUG oslo_concurrency.lockutils [None req-2c1c996e-ac28-41e0-9576-f4a2f89c1104 c2c0f7c2f15e4da6881dc393064b0e16 1a345bddd4804404a55948133ea8150f - - default default] Acquiring lock "da070580-b1f8-4571-b524-ce73d61665df-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:405
Oct 09 16:25:39 compute-0 nova_compute[117331]: 2025-10-09 16:25:39.647 2 DEBUG oslo_concurrency.lockutils [None req-2c1c996e-ac28-41e0-9576-f4a2f89c1104 c2c0f7c2f15e4da6881dc393064b0e16 1a345bddd4804404a55948133ea8150f - - default default] Lock "da070580-b1f8-4571-b524-ce73d61665df-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:410
Oct 09 16:25:39 compute-0 nova_compute[117331]: 2025-10-09 16:25:39.647 2 DEBUG oslo_concurrency.lockutils [None req-2c1c996e-ac28-41e0-9576-f4a2f89c1104 c2c0f7c2f15e4da6881dc393064b0e16 1a345bddd4804404a55948133ea8150f - - default default] Lock "da070580-b1f8-4571-b524-ce73d61665df-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:424
Oct 09 16:25:39 compute-0 nova_compute[117331]: 2025-10-09 16:25:39.648 2 DEBUG nova.virt.libvirt.vif [None req-2c1c996e-ac28-41e0-9576-f4a2f89c1104 c2c0f7c2f15e4da6881dc393064b0e16 1a345bddd4804404a55948133ea8150f - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,compute_id=1,config_drive='',created_at=2025-10-09T16:25:30Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description=None,display_name='tempest-TestExecuteNodeResourceConsolidationStrategy-server-978482825',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testexecutenoderesourceconsolidationstrategy-server-978',id=16,image_ref='b7d6e0af-25e4-4227-9dc6-43143898ceee',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='1a345bddd4804404a55948133ea8150f',ramdisk_id='',reservation_id='r-3cl3cajx',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,manager,admin,member',image_base_image_ref='b7d6e0af-25e4-4227-9dc6-43143898ceee',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestExecuteNodeResourceConsolidationStrategy-1915104332'
,owner_user_name='tempest-TestExecuteNodeResourceConsolidationStrategy-1915104332-project-admin'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-09T16:25:35Z,user_data=None,user_id='c2c0f7c2f15e4da6881dc393064b0e16',uuid=da070580-b1f8-4571-b524-ce73d61665df,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "c06c63eb-935d-42e3-9672-1d18b6bca7ad", "address": "fa:16:3e:49:af:77", "network": {"id": "18ef4241-0151-441c-abdc-42d4b3a21b30", "bridge": "br-int", "label": "tempest-TestExecuteNodeResourceConsolidationStrategy-129271777-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c2c321aa8a494a8e8b49c81b79e3ceca", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc06c63eb-93", "ovs_interfaceid": "c06c63eb-935d-42e3-9672-1d18b6bca7ad", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.12/site-packages/nova/virt/libvirt/vif.py:721
Oct 09 16:25:39 compute-0 nova_compute[117331]: 2025-10-09 16:25:39.649 2 DEBUG nova.network.os_vif_util [None req-2c1c996e-ac28-41e0-9576-f4a2f89c1104 c2c0f7c2f15e4da6881dc393064b0e16 1a345bddd4804404a55948133ea8150f - - default default] Converting VIF {"id": "c06c63eb-935d-42e3-9672-1d18b6bca7ad", "address": "fa:16:3e:49:af:77", "network": {"id": "18ef4241-0151-441c-abdc-42d4b3a21b30", "bridge": "br-int", "label": "tempest-TestExecuteNodeResourceConsolidationStrategy-129271777-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c2c321aa8a494a8e8b49c81b79e3ceca", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc06c63eb-93", "ovs_interfaceid": "c06c63eb-935d-42e3-9672-1d18b6bca7ad", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.12/site-packages/nova/network/os_vif_util.py:511
Oct 09 16:25:39 compute-0 nova_compute[117331]: 2025-10-09 16:25:39.650 2 DEBUG nova.network.os_vif_util [None req-2c1c996e-ac28-41e0-9576-f4a2f89c1104 c2c0f7c2f15e4da6881dc393064b0e16 1a345bddd4804404a55948133ea8150f - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:49:af:77,bridge_name='br-int',has_traffic_filtering=True,id=c06c63eb-935d-42e3-9672-1d18b6bca7ad,network=Network(18ef4241-0151-441c-abdc-42d4b3a21b30),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapc06c63eb-93') nova_to_osvif_vif /usr/lib/python3.12/site-packages/nova/network/os_vif_util.py:548
Oct 09 16:25:39 compute-0 nova_compute[117331]: 2025-10-09 16:25:39.650 2 DEBUG os_vif [None req-2c1c996e-ac28-41e0-9576-f4a2f89c1104 c2c0f7c2f15e4da6881dc393064b0e16 1a345bddd4804404a55948133ea8150f - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:49:af:77,bridge_name='br-int',has_traffic_filtering=True,id=c06c63eb-935d-42e3-9672-1d18b6bca7ad,network=Network(18ef4241-0151-441c-abdc-42d4b3a21b30),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapc06c63eb-93') plug /usr/lib/python3.12/site-packages/os_vif/__init__.py:76
Oct 09 16:25:39 compute-0 nova_compute[117331]: 2025-10-09 16:25:39.651 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 22 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 09 16:25:39 compute-0 nova_compute[117331]: 2025-10-09 16:25:39.652 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.12/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 09 16:25:39 compute-0 nova_compute[117331]: 2025-10-09 16:25:39.653 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.12/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 09 16:25:39 compute-0 nova_compute[117331]: 2025-10-09 16:25:39.654 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 22 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 09 16:25:39 compute-0 nova_compute[117331]: 2025-10-09 16:25:39.654 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbCreateCommand(_result=None, table=QoS, columns={'type': 'linux-noop', 'external_ids': {'id': '18ce2ab5-3801-5274-a0cc-07270523e89e', '_type': 'linux-noop'}}, row=False) do_commit /usr/lib/python3.12/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 09 16:25:39 compute-0 nova_compute[117331]: 2025-10-09 16:25:39.655 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 09 16:25:39 compute-0 nova_compute[117331]: 2025-10-09 16:25:39.657 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 09 16:25:39 compute-0 nova_compute[117331]: 2025-10-09 16:25:39.660 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 22 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 09 16:25:39 compute-0 nova_compute[117331]: 2025-10-09 16:25:39.661 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapc06c63eb-93, may_exist=True, interface_attrs={}) do_commit /usr/lib/python3.12/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 09 16:25:39 compute-0 nova_compute[117331]: 2025-10-09 16:25:39.661 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Port, record=tapc06c63eb-93, col_values=(('qos', UUID('2de89947-7f29-4c6b-a4d4-cc9efaf1e4e1')),), if_exists=True) do_commit /usr/lib/python3.12/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 09 16:25:39 compute-0 nova_compute[117331]: 2025-10-09 16:25:39.662 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=2): DbSetCommand(_result=None, table=Interface, record=tapc06c63eb-93, col_values=(('external_ids', {'iface-id': 'c06c63eb-935d-42e3-9672-1d18b6bca7ad', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:49:af:77', 'vm-uuid': 'da070580-b1f8-4571-b524-ce73d61665df'}),), if_exists=True) do_commit /usr/lib/python3.12/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 09 16:25:39 compute-0 nova_compute[117331]: 2025-10-09 16:25:39.663 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 09 16:25:39 compute-0 NetworkManager[1028]: <info>  [1760027139.6638] manager: (tapc06c63eb-93): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/56)
Oct 09 16:25:39 compute-0 nova_compute[117331]: 2025-10-09 16:25:39.665 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:248
Oct 09 16:25:39 compute-0 nova_compute[117331]: 2025-10-09 16:25:39.669 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 09 16:25:39 compute-0 nova_compute[117331]: 2025-10-09 16:25:39.669 2 INFO os_vif [None req-2c1c996e-ac28-41e0-9576-f4a2f89c1104 c2c0f7c2f15e4da6881dc393064b0e16 1a345bddd4804404a55948133ea8150f - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:49:af:77,bridge_name='br-int',has_traffic_filtering=True,id=c06c63eb-935d-42e3-9672-1d18b6bca7ad,network=Network(18ef4241-0151-441c-abdc-42d4b3a21b30),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapc06c63eb-93')
Oct 09 16:25:41 compute-0 nova_compute[117331]: 2025-10-09 16:25:41.204 2 DEBUG nova.virt.libvirt.driver [None req-2c1c996e-ac28-41e0-9576-f4a2f89c1104 c2c0f7c2f15e4da6881dc393064b0e16 1a345bddd4804404a55948133ea8150f - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.12/site-packages/nova/virt/libvirt/driver.py:13022
Oct 09 16:25:41 compute-0 nova_compute[117331]: 2025-10-09 16:25:41.205 2 DEBUG nova.virt.libvirt.driver [None req-2c1c996e-ac28-41e0-9576-f4a2f89c1104 c2c0f7c2f15e4da6881dc393064b0e16 1a345bddd4804404a55948133ea8150f - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.12/site-packages/nova/virt/libvirt/driver.py:13022
Oct 09 16:25:41 compute-0 nova_compute[117331]: 2025-10-09 16:25:41.205 2 DEBUG nova.virt.libvirt.driver [None req-2c1c996e-ac28-41e0-9576-f4a2f89c1104 c2c0f7c2f15e4da6881dc393064b0e16 1a345bddd4804404a55948133ea8150f - - default default] No VIF found with MAC fa:16:3e:49:af:77, not building metadata _build_interface_metadata /usr/lib/python3.12/site-packages/nova/virt/libvirt/driver.py:12998
Oct 09 16:25:41 compute-0 nova_compute[117331]: 2025-10-09 16:25:41.206 2 INFO nova.virt.libvirt.driver [None req-2c1c996e-ac28-41e0-9576-f4a2f89c1104 c2c0f7c2f15e4da6881dc393064b0e16 1a345bddd4804404a55948133ea8150f - - default default] [instance: da070580-b1f8-4571-b524-ce73d61665df] Using config drive
Oct 09 16:25:41 compute-0 nova_compute[117331]: 2025-10-09 16:25:41.402 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 09 16:25:41 compute-0 nova_compute[117331]: 2025-10-09 16:25:41.720 2 WARNING neutronclient.v2_0.client [None req-2c1c996e-ac28-41e0-9576-f4a2f89c1104 c2c0f7c2f15e4da6881dc393064b0e16 1a345bddd4804404a55948133ea8150f - - default default] The python binding code in neutronclient is deprecated in favor of OpenstackSDK, please use that as this will be removed in a future release.
Oct 09 16:25:41 compute-0 podman[147185]: 2025-10-09 16:25:41.865911882 +0000 UTC m=+0.090618299 container health_status 86aa93e887899e54d383b0fc17fb3c5637680d1d8e146a130a557c837f0260fe (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible)
Oct 09 16:25:41 compute-0 nova_compute[117331]: 2025-10-09 16:25:41.939 2 INFO nova.virt.libvirt.driver [None req-2c1c996e-ac28-41e0-9576-f4a2f89c1104 c2c0f7c2f15e4da6881dc393064b0e16 1a345bddd4804404a55948133ea8150f - - default default] [instance: da070580-b1f8-4571-b524-ce73d61665df] Creating config drive at /var/lib/nova/instances/da070580-b1f8-4571-b524-ce73d61665df/disk.config
Oct 09 16:25:41 compute-0 nova_compute[117331]: 2025-10-09 16:25:41.944 2 DEBUG oslo_concurrency.processutils [None req-2c1c996e-ac28-41e0-9576-f4a2f89c1104 c2c0f7c2f15e4da6881dc393064b0e16 1a345bddd4804404a55948133ea8150f - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/da070580-b1f8-4571-b524-ce73d61665df/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 32.1.0-0.20251008143712.076498e.el10 -quiet -J -r -V config-2 /tmp/tmptb174wdb execute /usr/lib/python3.12/site-packages/oslo_concurrency/processutils.py:349
Oct 09 16:25:42 compute-0 nova_compute[117331]: 2025-10-09 16:25:42.072 2 DEBUG oslo_concurrency.processutils [None req-2c1c996e-ac28-41e0-9576-f4a2f89c1104 c2c0f7c2f15e4da6881dc393064b0e16 1a345bddd4804404a55948133ea8150f - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/da070580-b1f8-4571-b524-ce73d61665df/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 32.1.0-0.20251008143712.076498e.el10 -quiet -J -r -V config-2 /tmp/tmptb174wdb" returned: 0 in 0.128s execute /usr/lib/python3.12/site-packages/oslo_concurrency/processutils.py:372
Oct 09 16:25:42 compute-0 kernel: tapc06c63eb-93: entered promiscuous mode
Oct 09 16:25:42 compute-0 NetworkManager[1028]: <info>  [1760027142.1326] manager: (tapc06c63eb-93): new Tun device (/org/freedesktop/NetworkManager/Devices/57)
Oct 09 16:25:42 compute-0 ovn_controller[19752]: 2025-10-09T16:25:42Z|00140|binding|INFO|Claiming lport c06c63eb-935d-42e3-9672-1d18b6bca7ad for this chassis.
Oct 09 16:25:42 compute-0 ovn_controller[19752]: 2025-10-09T16:25:42Z|00141|binding|INFO|c06c63eb-935d-42e3-9672-1d18b6bca7ad: Claiming fa:16:3e:49:af:77 10.100.0.13
Oct 09 16:25:42 compute-0 nova_compute[117331]: 2025-10-09 16:25:42.133 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 09 16:25:42 compute-0 ovn_metadata_agent[28608]: 2025-10-09 16:25:42.156 28613 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:49:af:77 10.100.0.13'], port_security=['fa:16:3e:49:af:77 10.100.0.13'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.13/28', 'neutron:device_id': 'da070580-b1f8-4571-b524-ce73d61665df', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-18ef4241-0151-441c-abdc-42d4b3a21b30', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '1a345bddd4804404a55948133ea8150f', 'neutron:revision_number': '4', 'neutron:security_group_ids': '9689e159-b15c-4e2f-9c7b-a0ccf3c3f578', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=0ed87cdf-61a6-4df0-ac74-37289a0bcd5f, chassis=[<ovs.db.idl.Row object at 0x7f37b2576840>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f37b2576840>], logical_port=c06c63eb-935d-42e3-9672-1d18b6bca7ad) old=Port_Binding(chassis=[]) matches /usr/lib/python3.12/site-packages/ovsdbapp/backend/ovs_idl/event.py:55
Oct 09 16:25:42 compute-0 ovn_metadata_agent[28608]: 2025-10-09 16:25:42.157 28613 INFO neutron.agent.ovn.metadata.agent [-] Port c06c63eb-935d-42e3-9672-1d18b6bca7ad in datapath 18ef4241-0151-441c-abdc-42d4b3a21b30 bound to our chassis
Oct 09 16:25:42 compute-0 ovn_metadata_agent[28608]: 2025-10-09 16:25:42.158 28613 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 18ef4241-0151-441c-abdc-42d4b3a21b30
Oct 09 16:25:42 compute-0 ovn_metadata_agent[28608]: 2025-10-09 16:25:42.172 139687 DEBUG oslo.privsep.daemon [-] privsep: reply[07f3afbb-7921-46bf-a186-1163eee69dd6]: (4, False) _call_back /usr/lib/python3.12/site-packages/oslo_privsep/daemon.py:515
Oct 09 16:25:42 compute-0 ovn_metadata_agent[28608]: 2025-10-09 16:25:42.173 28613 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap18ef4241-01 in ovnmeta-18ef4241-0151-441c-abdc-42d4b3a21b30 namespace provision_datapath /usr/lib/python3.12/site-packages/neutron/agent/ovn/metadata/agent.py:794
Oct 09 16:25:42 compute-0 ovn_metadata_agent[28608]: 2025-10-09 16:25:42.174 139687 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap18ef4241-00 not found in namespace None get_link_id /usr/lib/python3.12/site-packages/neutron/privileged/agent/linux/ip_lib.py:203
Oct 09 16:25:42 compute-0 ovn_metadata_agent[28608]: 2025-10-09 16:25:42.174 139687 DEBUG oslo.privsep.daemon [-] privsep: reply[583bee77-e83c-4acf-9e05-28fbdf9a401b]: (4, False) _call_back /usr/lib/python3.12/site-packages/oslo_privsep/daemon.py:515
Oct 09 16:25:42 compute-0 systemd-udevd[147228]: Network interface NamePolicy= disabled on kernel command line.
Oct 09 16:25:42 compute-0 ovn_metadata_agent[28608]: 2025-10-09 16:25:42.175 139687 DEBUG oslo.privsep.daemon [-] privsep: reply[736d2418-5fd6-4f1c-b5c5-65988b76b0aa]: (4, False) _call_back /usr/lib/python3.12/site-packages/oslo_privsep/daemon.py:515
Oct 09 16:25:42 compute-0 systemd-machined[77487]: New machine qemu-10-instance-00000010.
Oct 09 16:25:42 compute-0 ovn_metadata_agent[28608]: 2025-10-09 16:25:42.186 28727 DEBUG oslo.privsep.daemon [-] privsep: reply[d049b30b-ae6e-4b9b-8f1a-10a2e2f06a66]: (4, None) _call_back /usr/lib/python3.12/site-packages/oslo_privsep/daemon.py:515
Oct 09 16:25:42 compute-0 NetworkManager[1028]: <info>  [1760027142.1979] device (tapc06c63eb-93): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Oct 09 16:25:42 compute-0 NetworkManager[1028]: <info>  [1760027142.1987] device (tapc06c63eb-93): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Oct 09 16:25:42 compute-0 nova_compute[117331]: 2025-10-09 16:25:42.208 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 09 16:25:42 compute-0 ovn_metadata_agent[28608]: 2025-10-09 16:25:42.208 139687 DEBUG oslo.privsep.daemon [-] privsep: reply[c28beea4-b11e-43f6-889f-f36102fe7f9e]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.12/site-packages/oslo_privsep/daemon.py:515
Oct 09 16:25:42 compute-0 ovn_controller[19752]: 2025-10-09T16:25:42Z|00142|binding|INFO|Setting lport c06c63eb-935d-42e3-9672-1d18b6bca7ad ovn-installed in OVS
Oct 09 16:25:42 compute-0 ovn_controller[19752]: 2025-10-09T16:25:42Z|00143|binding|INFO|Setting lport c06c63eb-935d-42e3-9672-1d18b6bca7ad up in Southbound
Oct 09 16:25:42 compute-0 nova_compute[117331]: 2025-10-09 16:25:42.213 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 09 16:25:42 compute-0 systemd[1]: Started Virtual Machine qemu-10-instance-00000010.
Oct 09 16:25:42 compute-0 ovn_metadata_agent[28608]: 2025-10-09 16:25:42.239 141048 DEBUG oslo.privsep.daemon [-] privsep: reply[d525ac57-3ae5-4b22-8a28-94243eae6230]: (4, None) _call_back /usr/lib/python3.12/site-packages/oslo_privsep/daemon.py:515
Oct 09 16:25:42 compute-0 ovn_metadata_agent[28608]: 2025-10-09 16:25:42.243 139687 DEBUG oslo.privsep.daemon [-] privsep: reply[48103577-17c7-4a59-97d2-97a805c554a2]: (4, None) _call_back /usr/lib/python3.12/site-packages/oslo_privsep/daemon.py:515
Oct 09 16:25:42 compute-0 NetworkManager[1028]: <info>  [1760027142.2443] manager: (tap18ef4241-00): new Veth device (/org/freedesktop/NetworkManager/Devices/58)
Oct 09 16:25:42 compute-0 ovn_metadata_agent[28608]: 2025-10-09 16:25:42.280 141048 DEBUG oslo.privsep.daemon [-] privsep: reply[6b2e3654-e618-472a-8659-0da03f84582b]: (4, None) _call_back /usr/lib/python3.12/site-packages/oslo_privsep/daemon.py:515
Oct 09 16:25:42 compute-0 ovn_metadata_agent[28608]: 2025-10-09 16:25:42.283 141048 DEBUG oslo.privsep.daemon [-] privsep: reply[ac37b6a6-9995-4223-aee6-6650a5921e9f]: (4, None) _call_back /usr/lib/python3.12/site-packages/oslo_privsep/daemon.py:515
Oct 09 16:25:42 compute-0 NetworkManager[1028]: <info>  [1760027142.3083] device (tap18ef4241-00): carrier: link connected
Oct 09 16:25:42 compute-0 ovn_metadata_agent[28608]: 2025-10-09 16:25:42.317 141048 DEBUG oslo.privsep.daemon [-] privsep: reply[161aeb50-4e65-4500-9275-d917e58c64a5]: (4, None) _call_back /usr/lib/python3.12/site-packages/oslo_privsep/daemon.py:515
Oct 09 16:25:42 compute-0 ovn_metadata_agent[28608]: 2025-10-09 16:25:42.334 139687 DEBUG oslo.privsep.daemon [-] privsep: reply[e0be7c2c-6e17-4398-8e0e-acc10e4e8a2b]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap18ef4241-01'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:71:5e:11'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 43], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 208591, 'reachable_time': 40615, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 147261, 'error': None, 'target': 'ovnmeta-18ef4241-0151-441c-abdc-42d4b3a21b30', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.12/site-packages/oslo_privsep/daemon.py:515
Oct 09 16:25:42 compute-0 ovn_metadata_agent[28608]: 2025-10-09 16:25:42.349 139687 DEBUG oslo.privsep.daemon [-] privsep: reply[2bd62c49-d8ce-44f9-a244-19fa42d9b50c]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe71:5e11'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 208591, 'tstamp': 208591}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 147262, 'error': None, 'target': 'ovnmeta-18ef4241-0151-441c-abdc-42d4b3a21b30', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.12/site-packages/oslo_privsep/daemon.py:515
Oct 09 16:25:42 compute-0 nova_compute[117331]: 2025-10-09 16:25:42.363 2 DEBUG nova.compute.manager [req-f409ffe7-9edf-448b-a5c2-fb04c4d42759 req-6b70d601-0487-47ad-ae7d-019b143d1739 ee15961ad2be4bada6c106ac4f69f422 c076c42271004e7aa86e84416e33f826 - - default default] [instance: da070580-b1f8-4571-b524-ce73d61665df] Received event network-vif-plugged-c06c63eb-935d-42e3-9672-1d18b6bca7ad external_instance_event /usr/lib/python3.12/site-packages/nova/compute/manager.py:11812
Oct 09 16:25:42 compute-0 nova_compute[117331]: 2025-10-09 16:25:42.364 2 DEBUG oslo_concurrency.lockutils [req-f409ffe7-9edf-448b-a5c2-fb04c4d42759 req-6b70d601-0487-47ad-ae7d-019b143d1739 ee15961ad2be4bada6c106ac4f69f422 c076c42271004e7aa86e84416e33f826 - - default default] Acquiring lock "da070580-b1f8-4571-b524-ce73d61665df-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:405
Oct 09 16:25:42 compute-0 nova_compute[117331]: 2025-10-09 16:25:42.364 2 DEBUG oslo_concurrency.lockutils [req-f409ffe7-9edf-448b-a5c2-fb04c4d42759 req-6b70d601-0487-47ad-ae7d-019b143d1739 ee15961ad2be4bada6c106ac4f69f422 c076c42271004e7aa86e84416e33f826 - - default default] Lock "da070580-b1f8-4571-b524-ce73d61665df-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:410
Oct 09 16:25:42 compute-0 nova_compute[117331]: 2025-10-09 16:25:42.364 2 DEBUG oslo_concurrency.lockutils [req-f409ffe7-9edf-448b-a5c2-fb04c4d42759 req-6b70d601-0487-47ad-ae7d-019b143d1739 ee15961ad2be4bada6c106ac4f69f422 c076c42271004e7aa86e84416e33f826 - - default default] Lock "da070580-b1f8-4571-b524-ce73d61665df-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:424
Oct 09 16:25:42 compute-0 nova_compute[117331]: 2025-10-09 16:25:42.364 2 DEBUG nova.compute.manager [req-f409ffe7-9edf-448b-a5c2-fb04c4d42759 req-6b70d601-0487-47ad-ae7d-019b143d1739 ee15961ad2be4bada6c106ac4f69f422 c076c42271004e7aa86e84416e33f826 - - default default] [instance: da070580-b1f8-4571-b524-ce73d61665df] Processing event network-vif-plugged-c06c63eb-935d-42e3-9672-1d18b6bca7ad _process_instance_event /usr/lib/python3.12/site-packages/nova/compute/manager.py:11572
Oct 09 16:25:42 compute-0 ovn_metadata_agent[28608]: 2025-10-09 16:25:42.373 139687 DEBUG oslo.privsep.daemon [-] privsep: reply[465c8204-6287-4c28-80b1-e5ea40db34c7]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap18ef4241-01'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:71:5e:11'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 43], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 208591, 'reachable_time': 40615, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 147263, 'error': None, 'target': 'ovnmeta-18ef4241-0151-441c-abdc-42d4b3a21b30', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.12/site-packages/oslo_privsep/daemon.py:515
Oct 09 16:25:42 compute-0 ovn_metadata_agent[28608]: 2025-10-09 16:25:42.404 139687 DEBUG oslo.privsep.daemon [-] privsep: reply[8f34a4f6-f9c5-4a46-919f-a3a7ea8539ef]: (4, None) _call_back /usr/lib/python3.12/site-packages/oslo_privsep/daemon.py:515
Oct 09 16:25:42 compute-0 ovn_metadata_agent[28608]: 2025-10-09 16:25:42.449 139687 DEBUG oslo.privsep.daemon [-] privsep: reply[74759be1-7d1a-4c3c-98fc-412890a9e4c6]: (4, None) _call_back /usr/lib/python3.12/site-packages/oslo_privsep/daemon.py:515
Oct 09 16:25:42 compute-0 ovn_metadata_agent[28608]: 2025-10-09 16:25:42.450 28613 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap18ef4241-00, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.12/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 09 16:25:42 compute-0 ovn_metadata_agent[28608]: 2025-10-09 16:25:42.451 28613 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.12/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 09 16:25:42 compute-0 ovn_metadata_agent[28608]: 2025-10-09 16:25:42.451 28613 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap18ef4241-00, may_exist=True, interface_attrs={}) do_commit /usr/lib/python3.12/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 09 16:25:42 compute-0 nova_compute[117331]: 2025-10-09 16:25:42.452 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 09 16:25:42 compute-0 NetworkManager[1028]: <info>  [1760027142.4532] manager: (tap18ef4241-00): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/59)
Oct 09 16:25:42 compute-0 kernel: tap18ef4241-00: entered promiscuous mode
Oct 09 16:25:42 compute-0 nova_compute[117331]: 2025-10-09 16:25:42.454 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 09 16:25:42 compute-0 ovn_metadata_agent[28608]: 2025-10-09 16:25:42.455 28613 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap18ef4241-00, col_values=(('external_ids', {'iface-id': '2ea79927-f6b6-48ed-a992-4066429c8e5d'}),), if_exists=True) do_commit /usr/lib/python3.12/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 09 16:25:42 compute-0 ovn_controller[19752]: 2025-10-09T16:25:42Z|00144|binding|INFO|Releasing lport 2ea79927-f6b6-48ed-a992-4066429c8e5d from this chassis (sb_readonly=0)
Oct 09 16:25:42 compute-0 nova_compute[117331]: 2025-10-09 16:25:42.456 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 09 16:25:42 compute-0 nova_compute[117331]: 2025-10-09 16:25:42.467 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 09 16:25:42 compute-0 ovn_metadata_agent[28608]: 2025-10-09 16:25:42.469 139687 DEBUG oslo.privsep.daemon [-] privsep: reply[2ae21ba6-f479-4b8e-abeb-87eccbf4ac0c]: (4, '') _call_back /usr/lib/python3.12/site-packages/oslo_privsep/daemon.py:515
Oct 09 16:25:42 compute-0 ovn_metadata_agent[28608]: 2025-10-09 16:25:42.469 28613 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/18ef4241-0151-441c-abdc-42d4b3a21b30.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/18ef4241-0151-441c-abdc-42d4b3a21b30.pid.haproxy' get_value_from_file /usr/lib/python3.12/site-packages/neutron/agent/linux/utils.py:268
Oct 09 16:25:42 compute-0 ovn_metadata_agent[28608]: 2025-10-09 16:25:42.469 28613 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/18ef4241-0151-441c-abdc-42d4b3a21b30.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/18ef4241-0151-441c-abdc-42d4b3a21b30.pid.haproxy' get_value_from_file /usr/lib/python3.12/site-packages/neutron/agent/linux/utils.py:268
Oct 09 16:25:42 compute-0 ovn_metadata_agent[28608]: 2025-10-09 16:25:42.469 28613 DEBUG neutron.agent.linux.external_process [-] No haproxy process started for 18ef4241-0151-441c-abdc-42d4b3a21b30 disable /usr/lib/python3.12/site-packages/neutron/agent/linux/external_process.py:173
Oct 09 16:25:42 compute-0 ovn_metadata_agent[28608]: 2025-10-09 16:25:42.469 28613 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/18ef4241-0151-441c-abdc-42d4b3a21b30.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/18ef4241-0151-441c-abdc-42d4b3a21b30.pid.haproxy' get_value_from_file /usr/lib/python3.12/site-packages/neutron/agent/linux/utils.py:268
Oct 09 16:25:42 compute-0 ovn_metadata_agent[28608]: 2025-10-09 16:25:42.470 139687 DEBUG oslo.privsep.daemon [-] privsep: reply[18679149-eac3-424f-9b96-e5370bbb32c1]: (4, None) _call_back /usr/lib/python3.12/site-packages/oslo_privsep/daemon.py:515
Oct 09 16:25:42 compute-0 ovn_metadata_agent[28608]: 2025-10-09 16:25:42.470 28613 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/18ef4241-0151-441c-abdc-42d4b3a21b30.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/18ef4241-0151-441c-abdc-42d4b3a21b30.pid.haproxy' get_value_from_file /usr/lib/python3.12/site-packages/neutron/agent/linux/utils.py:268
Oct 09 16:25:42 compute-0 ovn_metadata_agent[28608]: 2025-10-09 16:25:42.470 139687 DEBUG oslo.privsep.daemon [-] privsep: reply[77e18b2e-91b7-4dfe-b677-798b722e59d8]: (4, None) _call_back /usr/lib/python3.12/site-packages/oslo_privsep/daemon.py:515
Oct 09 16:25:42 compute-0 ovn_metadata_agent[28608]: 2025-10-09 16:25:42.470 28613 DEBUG neutron.agent.metadata.driver_base [-] haproxy_cfg = 
Oct 09 16:25:42 compute-0 ovn_metadata_agent[28608]: global
Oct 09 16:25:42 compute-0 ovn_metadata_agent[28608]:     log         /dev/log local0 debug
Oct 09 16:25:42 compute-0 ovn_metadata_agent[28608]:     log-tag     haproxy-metadata-proxy-18ef4241-0151-441c-abdc-42d4b3a21b30
Oct 09 16:25:42 compute-0 ovn_metadata_agent[28608]:     user        root
Oct 09 16:25:42 compute-0 ovn_metadata_agent[28608]:     group       root
Oct 09 16:25:42 compute-0 ovn_metadata_agent[28608]:     maxconn     1024
Oct 09 16:25:42 compute-0 ovn_metadata_agent[28608]:     pidfile     /var/lib/neutron/external/pids/18ef4241-0151-441c-abdc-42d4b3a21b30.pid.haproxy
Oct 09 16:25:42 compute-0 ovn_metadata_agent[28608]:     daemon
Oct 09 16:25:42 compute-0 ovn_metadata_agent[28608]: 
Oct 09 16:25:42 compute-0 ovn_metadata_agent[28608]: defaults
Oct 09 16:25:42 compute-0 ovn_metadata_agent[28608]:     log global
Oct 09 16:25:42 compute-0 ovn_metadata_agent[28608]:     mode http
Oct 09 16:25:42 compute-0 ovn_metadata_agent[28608]:     option httplog
Oct 09 16:25:42 compute-0 ovn_metadata_agent[28608]:     option dontlognull
Oct 09 16:25:42 compute-0 ovn_metadata_agent[28608]:     option http-server-close
Oct 09 16:25:42 compute-0 ovn_metadata_agent[28608]:     option forwardfor
Oct 09 16:25:42 compute-0 ovn_metadata_agent[28608]:     retries                 3
Oct 09 16:25:42 compute-0 ovn_metadata_agent[28608]:     timeout http-request    30s
Oct 09 16:25:42 compute-0 ovn_metadata_agent[28608]:     timeout connect         30s
Oct 09 16:25:42 compute-0 ovn_metadata_agent[28608]:     timeout client          32s
Oct 09 16:25:42 compute-0 ovn_metadata_agent[28608]:     timeout server          32s
Oct 09 16:25:42 compute-0 ovn_metadata_agent[28608]:     timeout http-keep-alive 30s
Oct 09 16:25:42 compute-0 ovn_metadata_agent[28608]: 
Oct 09 16:25:42 compute-0 ovn_metadata_agent[28608]: listen listener
Oct 09 16:25:42 compute-0 ovn_metadata_agent[28608]:     bind 169.254.169.254:80
Oct 09 16:25:42 compute-0 ovn_metadata_agent[28608]:     
Oct 09 16:25:42 compute-0 ovn_metadata_agent[28608]:     server metadata /var/lib/neutron/metadata_proxy
Oct 09 16:25:42 compute-0 ovn_metadata_agent[28608]: 
Oct 09 16:25:42 compute-0 ovn_metadata_agent[28608]:     http-request add-header X-OVN-Network-ID 18ef4241-0151-441c-abdc-42d4b3a21b30
Oct 09 16:25:42 compute-0 ovn_metadata_agent[28608]:  create_config_file /usr/lib/python3.12/site-packages/neutron/agent/metadata/driver_base.py:155
Oct 09 16:25:42 compute-0 ovn_metadata_agent[28608]: 2025-10-09 16:25:42.471 28613 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-18ef4241-0151-441c-abdc-42d4b3a21b30', 'env', 'PROCESS_TAG=haproxy-18ef4241-0151-441c-abdc-42d4b3a21b30', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/18ef4241-0151-441c-abdc-42d4b3a21b30.conf'] create_process /usr/lib/python3.12/site-packages/neutron/agent/linux/utils.py:84
Oct 09 16:25:42 compute-0 podman[147302]: 2025-10-09 16:25:42.930514349 +0000 UTC m=+0.056899579 container create e8269e920b5522cbba4cf2ad78c7368a87540711f08e7ef23dc6d78d32018dc0 (image=38.102.83.66:5001/podified-master-centos10/openstack-neutron-metadata-agent-ovn:watcher_latest, name=neutron-haproxy-ovnmeta-18ef4241-0151-441c-abdc-42d4b3a21b30, org.label-schema.vendor=CentOS, io.buildah.version=1.41.4, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.build-date=20251007, tcib_build_tag=watcher_latest, tcib_managed=true, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 10 Base Image)
Oct 09 16:25:42 compute-0 systemd[1]: Started libpod-conmon-e8269e920b5522cbba4cf2ad78c7368a87540711f08e7ef23dc6d78d32018dc0.scope.
Oct 09 16:25:42 compute-0 podman[147302]: 2025-10-09 16:25:42.901351272 +0000 UTC m=+0.027736542 image pull 4e15c189a95c5dcd1203c76fe304de5c8f8616da9a258ca168cc54a517f49b87 38.102.83.66:5001/podified-master-centos10/openstack-neutron-metadata-agent-ovn:watcher_latest
Oct 09 16:25:43 compute-0 systemd[1]: Started libcrun container.
Oct 09 16:25:43 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d4569d2da3806b3a50dcace2435eb4a9eb13bd949c43ce36edbd17422be35176/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Oct 09 16:25:43 compute-0 podman[147302]: 2025-10-09 16:25:43.034744768 +0000 UTC m=+0.161130058 container init e8269e920b5522cbba4cf2ad78c7368a87540711f08e7ef23dc6d78d32018dc0 (image=38.102.83.66:5001/podified-master-centos10/openstack-neutron-metadata-agent-ovn:watcher_latest, name=neutron-haproxy-ovnmeta-18ef4241-0151-441c-abdc-42d4b3a21b30, org.label-schema.license=GPLv2, org.label-schema.build-date=20251007, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=watcher_latest, io.buildah.version=1.41.4, org.label-schema.schema-version=1.0)
Oct 09 16:25:43 compute-0 podman[147302]: 2025-10-09 16:25:43.045095227 +0000 UTC m=+0.171480487 container start e8269e920b5522cbba4cf2ad78c7368a87540711f08e7ef23dc6d78d32018dc0 (image=38.102.83.66:5001/podified-master-centos10/openstack-neutron-metadata-agent-ovn:watcher_latest, name=neutron-haproxy-ovnmeta-18ef4241-0151-441c-abdc-42d4b3a21b30, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251007, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.41.4, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 10 Base Image, tcib_build_tag=watcher_latest)
Oct 09 16:25:43 compute-0 neutron-haproxy-ovnmeta-18ef4241-0151-441c-abdc-42d4b3a21b30[147319]: [NOTICE]   (147323) : New worker (147325) forked
Oct 09 16:25:43 compute-0 neutron-haproxy-ovnmeta-18ef4241-0151-441c-abdc-42d4b3a21b30[147319]: [NOTICE]   (147323) : Loading success.
Oct 09 16:25:43 compute-0 nova_compute[117331]: 2025-10-09 16:25:43.158 2 DEBUG nova.compute.manager [None req-2c1c996e-ac28-41e0-9576-f4a2f89c1104 c2c0f7c2f15e4da6881dc393064b0e16 1a345bddd4804404a55948133ea8150f - - default default] [instance: da070580-b1f8-4571-b524-ce73d61665df] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.12/site-packages/nova/compute/manager.py:602
Oct 09 16:25:43 compute-0 nova_compute[117331]: 2025-10-09 16:25:43.163 2 DEBUG nova.virt.libvirt.driver [None req-2c1c996e-ac28-41e0-9576-f4a2f89c1104 c2c0f7c2f15e4da6881dc393064b0e16 1a345bddd4804404a55948133ea8150f - - default default] [instance: da070580-b1f8-4571-b524-ce73d61665df] Guest created on hypervisor spawn /usr/lib/python3.12/site-packages/nova/virt/libvirt/driver.py:4816
Oct 09 16:25:43 compute-0 nova_compute[117331]: 2025-10-09 16:25:43.167 2 INFO nova.virt.libvirt.driver [-] [instance: da070580-b1f8-4571-b524-ce73d61665df] Instance spawned successfully.
Oct 09 16:25:43 compute-0 nova_compute[117331]: 2025-10-09 16:25:43.167 2 DEBUG nova.virt.libvirt.driver [None req-2c1c996e-ac28-41e0-9576-f4a2f89c1104 c2c0f7c2f15e4da6881dc393064b0e16 1a345bddd4804404a55948133ea8150f - - default default] [instance: da070580-b1f8-4571-b524-ce73d61665df] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.12/site-packages/nova/virt/libvirt/driver.py:985
Oct 09 16:25:43 compute-0 nova_compute[117331]: 2025-10-09 16:25:43.683 2 DEBUG nova.virt.libvirt.driver [None req-2c1c996e-ac28-41e0-9576-f4a2f89c1104 c2c0f7c2f15e4da6881dc393064b0e16 1a345bddd4804404a55948133ea8150f - - default default] [instance: da070580-b1f8-4571-b524-ce73d61665df] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.12/site-packages/nova/virt/libvirt/driver.py:1014
Oct 09 16:25:43 compute-0 nova_compute[117331]: 2025-10-09 16:25:43.684 2 DEBUG nova.virt.libvirt.driver [None req-2c1c996e-ac28-41e0-9576-f4a2f89c1104 c2c0f7c2f15e4da6881dc393064b0e16 1a345bddd4804404a55948133ea8150f - - default default] [instance: da070580-b1f8-4571-b524-ce73d61665df] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.12/site-packages/nova/virt/libvirt/driver.py:1014
Oct 09 16:25:43 compute-0 nova_compute[117331]: 2025-10-09 16:25:43.685 2 DEBUG nova.virt.libvirt.driver [None req-2c1c996e-ac28-41e0-9576-f4a2f89c1104 c2c0f7c2f15e4da6881dc393064b0e16 1a345bddd4804404a55948133ea8150f - - default default] [instance: da070580-b1f8-4571-b524-ce73d61665df] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.12/site-packages/nova/virt/libvirt/driver.py:1014
Oct 09 16:25:43 compute-0 nova_compute[117331]: 2025-10-09 16:25:43.686 2 DEBUG nova.virt.libvirt.driver [None req-2c1c996e-ac28-41e0-9576-f4a2f89c1104 c2c0f7c2f15e4da6881dc393064b0e16 1a345bddd4804404a55948133ea8150f - - default default] [instance: da070580-b1f8-4571-b524-ce73d61665df] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.12/site-packages/nova/virt/libvirt/driver.py:1014
Oct 09 16:25:43 compute-0 nova_compute[117331]: 2025-10-09 16:25:43.687 2 DEBUG nova.virt.libvirt.driver [None req-2c1c996e-ac28-41e0-9576-f4a2f89c1104 c2c0f7c2f15e4da6881dc393064b0e16 1a345bddd4804404a55948133ea8150f - - default default] [instance: da070580-b1f8-4571-b524-ce73d61665df] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.12/site-packages/nova/virt/libvirt/driver.py:1014
Oct 09 16:25:43 compute-0 nova_compute[117331]: 2025-10-09 16:25:43.688 2 DEBUG nova.virt.libvirt.driver [None req-2c1c996e-ac28-41e0-9576-f4a2f89c1104 c2c0f7c2f15e4da6881dc393064b0e16 1a345bddd4804404a55948133ea8150f - - default default] [instance: da070580-b1f8-4571-b524-ce73d61665df] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.12/site-packages/nova/virt/libvirt/driver.py:1014
Oct 09 16:25:44 compute-0 nova_compute[117331]: 2025-10-09 16:25:44.201 2 INFO nova.compute.manager [None req-2c1c996e-ac28-41e0-9576-f4a2f89c1104 c2c0f7c2f15e4da6881dc393064b0e16 1a345bddd4804404a55948133ea8150f - - default default] [instance: da070580-b1f8-4571-b524-ce73d61665df] Took 7.47 seconds to spawn the instance on the hypervisor.
Oct 09 16:25:44 compute-0 nova_compute[117331]: 2025-10-09 16:25:44.201 2 DEBUG nova.compute.manager [None req-2c1c996e-ac28-41e0-9576-f4a2f89c1104 c2c0f7c2f15e4da6881dc393064b0e16 1a345bddd4804404a55948133ea8150f - - default default] [instance: da070580-b1f8-4571-b524-ce73d61665df] Checking state _get_power_state /usr/lib/python3.12/site-packages/nova/compute/manager.py:1825
Oct 09 16:25:44 compute-0 nova_compute[117331]: 2025-10-09 16:25:44.422 2 DEBUG nova.compute.manager [req-bfe9df5a-9983-4c90-8e40-3eb52cc0f039 req-1c12beea-06e3-441f-a983-303970942670 ee15961ad2be4bada6c106ac4f69f422 c076c42271004e7aa86e84416e33f826 - - default default] [instance: da070580-b1f8-4571-b524-ce73d61665df] Received event network-vif-plugged-c06c63eb-935d-42e3-9672-1d18b6bca7ad external_instance_event /usr/lib/python3.12/site-packages/nova/compute/manager.py:11812
Oct 09 16:25:44 compute-0 nova_compute[117331]: 2025-10-09 16:25:44.422 2 DEBUG oslo_concurrency.lockutils [req-bfe9df5a-9983-4c90-8e40-3eb52cc0f039 req-1c12beea-06e3-441f-a983-303970942670 ee15961ad2be4bada6c106ac4f69f422 c076c42271004e7aa86e84416e33f826 - - default default] Acquiring lock "da070580-b1f8-4571-b524-ce73d61665df-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:405
Oct 09 16:25:44 compute-0 nova_compute[117331]: 2025-10-09 16:25:44.423 2 DEBUG oslo_concurrency.lockutils [req-bfe9df5a-9983-4c90-8e40-3eb52cc0f039 req-1c12beea-06e3-441f-a983-303970942670 ee15961ad2be4bada6c106ac4f69f422 c076c42271004e7aa86e84416e33f826 - - default default] Lock "da070580-b1f8-4571-b524-ce73d61665df-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:410
Oct 09 16:25:44 compute-0 nova_compute[117331]: 2025-10-09 16:25:44.423 2 DEBUG oslo_concurrency.lockutils [req-bfe9df5a-9983-4c90-8e40-3eb52cc0f039 req-1c12beea-06e3-441f-a983-303970942670 ee15961ad2be4bada6c106ac4f69f422 c076c42271004e7aa86e84416e33f826 - - default default] Lock "da070580-b1f8-4571-b524-ce73d61665df-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:424
Oct 09 16:25:44 compute-0 nova_compute[117331]: 2025-10-09 16:25:44.424 2 DEBUG nova.compute.manager [req-bfe9df5a-9983-4c90-8e40-3eb52cc0f039 req-1c12beea-06e3-441f-a983-303970942670 ee15961ad2be4bada6c106ac4f69f422 c076c42271004e7aa86e84416e33f826 - - default default] [instance: da070580-b1f8-4571-b524-ce73d61665df] No waiting events found dispatching network-vif-plugged-c06c63eb-935d-42e3-9672-1d18b6bca7ad pop_instance_event /usr/lib/python3.12/site-packages/nova/compute/manager.py:344
Oct 09 16:25:44 compute-0 nova_compute[117331]: 2025-10-09 16:25:44.424 2 WARNING nova.compute.manager [req-bfe9df5a-9983-4c90-8e40-3eb52cc0f039 req-1c12beea-06e3-441f-a983-303970942670 ee15961ad2be4bada6c106ac4f69f422 c076c42271004e7aa86e84416e33f826 - - default default] [instance: da070580-b1f8-4571-b524-ce73d61665df] Received unexpected event network-vif-plugged-c06c63eb-935d-42e3-9672-1d18b6bca7ad for instance with vm_state active and task_state None.
Oct 09 16:25:44 compute-0 nova_compute[117331]: 2025-10-09 16:25:44.663 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 09 16:25:44 compute-0 nova_compute[117331]: 2025-10-09 16:25:44.741 2 INFO nova.compute.manager [None req-2c1c996e-ac28-41e0-9576-f4a2f89c1104 c2c0f7c2f15e4da6881dc393064b0e16 1a345bddd4804404a55948133ea8150f - - default default] [instance: da070580-b1f8-4571-b524-ce73d61665df] Took 12.68 seconds to build instance.
Oct 09 16:25:45 compute-0 nova_compute[117331]: 2025-10-09 16:25:45.247 2 DEBUG oslo_concurrency.lockutils [None req-2c1c996e-ac28-41e0-9576-f4a2f89c1104 c2c0f7c2f15e4da6881dc393064b0e16 1a345bddd4804404a55948133ea8150f - - default default] Lock "da070580-b1f8-4571-b524-ce73d61665df" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 14.209s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:424
Oct 09 16:25:46 compute-0 nova_compute[117331]: 2025-10-09 16:25:46.453 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 09 16:25:47 compute-0 podman[147334]: 2025-10-09 16:25:47.877242853 +0000 UTC m=+0.090120193 container health_status 0e966c7a283689daf23c5db5017f23f685ee836319240188a9f70ddd246563c5 (image=38.102.83.66:5001/podified-master-centos10/openstack-neutron-metadata-agent-ovn:watcher_latest, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, tcib_managed=true, container_name=ovn_metadata_agent, org.label-schema.build-date=20251007, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=watcher_latest, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': '38.102.83.66:5001/podified-master-centos10/openstack-neutron-metadata-agent-ovn:watcher_latest', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, managed_by=edpm_ansible, config_id=ovn_metadata_agent, io.buildah.version=1.41.4, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 10 Base Image)
Oct 09 16:25:47 compute-0 podman[147335]: 2025-10-09 16:25:47.884660288 +0000 UTC m=+0.096752583 container health_status 8227728d23f374cb9528be6ddf01dff38d8c792912a17b0d137932a18390b820 (image=38.102.83.66:5001/podified-master-centos10/openstack-iscsid:watcher_latest, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.name=CentOS Stream 10 Base Image, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': '38.102.83.66:5001/podified-master-centos10/openstack-iscsid:watcher_latest', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, io.buildah.version=1.41.4, org.label-schema.license=GPLv2, container_name=iscsid, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, tcib_build_tag=watcher_latest, config_id=iscsid, org.label-schema.build-date=20251007, org.label-schema.schema-version=1.0)
Oct 09 16:25:49 compute-0 nova_compute[117331]: 2025-10-09 16:25:49.665 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 09 16:25:51 compute-0 nova_compute[117331]: 2025-10-09 16:25:51.495 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 09 16:25:52 compute-0 ovn_metadata_agent[28608]: 2025-10-09 16:25:52.258 28613 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=17, options={'arp_ns_explicit_output': 'true', 'fdb_removal_limit': '0', 'ignore_lsp_down': 'false', 'mac_binding_removal_limit': '0', 'mac_prefix': '4a:65:5f', 'max_tunid': '16711680', 'northd_internal_version': '24.09.4-20.37.0-77.8', 'svc_monitor_mac': '32:24:1e:e3:5d:e8'}, ipsec=False) old=SB_Global(nb_cfg=16) matches /usr/lib/python3.12/site-packages/ovsdbapp/backend/ovs_idl/event.py:55
Oct 09 16:25:52 compute-0 ovn_metadata_agent[28608]: 2025-10-09 16:25:52.258 28613 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 10 seconds run /usr/lib/python3.12/site-packages/neutron/agent/ovn/metadata/agent.py:367
Oct 09 16:25:52 compute-0 nova_compute[117331]: 2025-10-09 16:25:52.258 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 09 16:25:54 compute-0 nova_compute[117331]: 2025-10-09 16:25:54.667 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 09 16:25:55 compute-0 ovn_controller[19752]: 2025-10-09T16:25:55Z|00019|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:49:af:77 10.100.0.13
Oct 09 16:25:55 compute-0 ovn_controller[19752]: 2025-10-09T16:25:55Z|00020|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:49:af:77 10.100.0.13
Oct 09 16:25:56 compute-0 nova_compute[117331]: 2025-10-09 16:25:56.498 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 09 16:25:56 compute-0 podman[147381]: 2025-10-09 16:25:56.861468209 +0000 UTC m=+0.076204311 container health_status 64fa251112d01abb34ec1bb8e22019815ddcaaaa391ce5ec91517147ac5a9994 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_id=edpm, io.buildah.version=1.33.7, distribution-scope=public, io.openshift.expose-services=, vcs-type=git, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, com.redhat.component=ubi9-minimal-container, maintainer=Red Hat, Inc., description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., managed_by=edpm_ansible, io.openshift.tags=minimal rhel9, release=1755695350, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, vendor=Red Hat, Inc., name=ubi9-minimal, container_name=openstack_network_exporter, build-date=2025-08-20T13:12:41, architecture=x86_64, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, url=https://catalog.redhat.com/en/search?searchType=containers, version=9.6)
Oct 09 16:25:56 compute-0 podman[147382]: 2025-10-09 16:25:56.905645192 +0000 UTC m=+0.123540174 container health_status d615e4f24ff62fbb14be4a63e6da568ec5b22309773c1c66103b6b4de64a8a2f (image=38.102.83.66:5001/podified-master-centos10/openstack-ovn-controller:watcher_latest, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_controller, org.label-schema.build-date=20251007, org.label-schema.name=CentOS Stream 10 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_build_tag=watcher_latest, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': '38.102.83.66:5001/podified-master-centos10/openstack-ovn-controller:watcher_latest', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, config_id=ovn_controller, io.buildah.version=1.41.4, org.label-schema.license=GPLv2, tcib_managed=true)
Oct 09 16:25:59 compute-0 nova_compute[117331]: 2025-10-09 16:25:59.676 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 09 16:25:59 compute-0 podman[127775]: time="2025-10-09T16:25:59Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Oct 09 16:25:59 compute-0 podman[127775]: @ - - [09/Oct/2025:16:25:59 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 20749 "" "Go-http-client/1.1"
Oct 09 16:25:59 compute-0 podman[127775]: @ - - [09/Oct/2025:16:25:59 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 3486 "" "Go-http-client/1.1"
Oct 09 16:26:01 compute-0 openstack_network_exporter[129925]: ERROR   16:26:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Oct 09 16:26:01 compute-0 openstack_network_exporter[129925]: ERROR   16:26:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Oct 09 16:26:01 compute-0 openstack_network_exporter[129925]: ERROR   16:26:01 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Oct 09 16:26:01 compute-0 openstack_network_exporter[129925]: ERROR   16:26:01 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Oct 09 16:26:01 compute-0 openstack_network_exporter[129925]: 
Oct 09 16:26:01 compute-0 openstack_network_exporter[129925]: ERROR   16:26:01 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Oct 09 16:26:01 compute-0 openstack_network_exporter[129925]: 
Oct 09 16:26:01 compute-0 nova_compute[117331]: 2025-10-09 16:26:01.502 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 09 16:26:02 compute-0 ovn_metadata_agent[28608]: 2025-10-09 16:26:02.259 28613 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=9954897f-aa83-45dd-8e84-289816676c2a, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '17'}),), if_exists=True) do_commit /usr/lib/python3.12/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 09 16:26:04 compute-0 nova_compute[117331]: 2025-10-09 16:26:04.677 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 09 16:26:06 compute-0 nova_compute[117331]: 2025-10-09 16:26:06.560 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 09 16:26:09 compute-0 nova_compute[117331]: 2025-10-09 16:26:09.680 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 09 16:26:09 compute-0 podman[147429]: 2025-10-09 16:26:09.835237781 +0000 UTC m=+0.069413235 container health_status 270830b5167cafbab735d8118980f26987e673cd0fa464dd2202394830bcca66 (image=38.102.83.66:5001/podified-master-centos10/openstack-multipathd:watcher_latest, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, tcib_build_tag=watcher_latest, org.label-schema.build-date=20251007, container_name=multipathd, org.label-schema.name=CentOS Stream 10 Base Image, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': '38.102.83.66:5001/podified-master-centos10/openstack-multipathd:watcher_latest', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, io.buildah.version=1.41.4, org.label-schema.license=GPLv2)
Oct 09 16:26:11 compute-0 nova_compute[117331]: 2025-10-09 16:26:11.568 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 09 16:26:12 compute-0 podman[147449]: 2025-10-09 16:26:12.811795372 +0000 UTC m=+0.048221042 container health_status 86aa93e887899e54d383b0fc17fb3c5637680d1d8e146a130a557c837f0260fe (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible)
Oct 09 16:26:13 compute-0 nova_compute[117331]: 2025-10-09 16:26:13.634 2 DEBUG nova.virt.libvirt.driver [None req-d3cbdfed-9534-4436-a3db-56fbfb0d73fc 005258eefa5345409efe13f6a1de22ab c076c42271004e7aa86e84416e33f826 - - default default] [instance: b5296471-a1e3-40b2-8cee-8028a741040c] Creating tmpfile /var/lib/nova/instances/tmpkzjy4nft to notify to other compute nodes that they should mount the same storage. _create_shared_storage_test_file /usr/lib/python3.12/site-packages/nova/virt/libvirt/driver.py:10944
Oct 09 16:26:13 compute-0 nova_compute[117331]: 2025-10-09 16:26:13.635 2 WARNING neutronclient.v2_0.client [None req-d3cbdfed-9534-4436-a3db-56fbfb0d73fc 005258eefa5345409efe13f6a1de22ab c076c42271004e7aa86e84416e33f826 - - default default] The python binding code in neutronclient is deprecated in favor of OpenstackSDK, please use that as this will be removed in a future release.
Oct 09 16:26:13 compute-0 nova_compute[117331]: 2025-10-09 16:26:13.655 2 DEBUG nova.compute.manager [None req-d3cbdfed-9534-4436-a3db-56fbfb0d73fc 005258eefa5345409efe13f6a1de22ab c076c42271004e7aa86e84416e33f826 - - default default] destination check data is LibvirtLiveMigrateData(bdms=<?>,block_migration=<?>,disk_available_mb=74752,disk_over_commit=<?>,dst_cpu_shared_set_info=set([]),dst_numa_info=<?>,dst_supports_mdev_live_migration=True,dst_supports_numa_live_migration=<?>,dst_wants_file_backed_memory=False,file_backed_memory_discard=<?>,filename='tmpkzjy4nft',graphics_listen_addr_spice=127.0.0.1,graphics_listen_addr_vnc=::,image_type='qcow2',instance_relative_path=<?>,is_shared_block_storage=<?>,is_shared_instance_path=<?>,is_volume_backed=<?>,migration=<?>,old_vol_attachment_ids=<?>,pci_dev_map_src_dst=<?>,serial_listen_addr=None,serial_listen_ports=<?>,source_mdev_types=<?>,src_supports_native_luks=<?>,src_supports_numa_live_migration=<?>,supported_perf_events=<?>,target_connect_addr=<?>,target_mdevs=<?>,vifs=[VIFMigrateData],wait_for_vif_plugged=<?>) check_can_live_migrate_destination /usr/lib/python3.12/site-packages/nova/compute/manager.py:9086
Oct 09 16:26:14 compute-0 nova_compute[117331]: 2025-10-09 16:26:14.684 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 09 16:26:15 compute-0 nova_compute[117331]: 2025-10-09 16:26:15.694 2 WARNING neutronclient.v2_0.client [None req-d3cbdfed-9534-4436-a3db-56fbfb0d73fc 005258eefa5345409efe13f6a1de22ab c076c42271004e7aa86e84416e33f826 - - default default] The python binding code in neutronclient is deprecated in favor of OpenstackSDK, please use that as this will be removed in a future release.
Oct 09 16:26:16 compute-0 nova_compute[117331]: 2025-10-09 16:26:16.601 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 09 16:26:17 compute-0 nova_compute[117331]: 2025-10-09 16:26:17.306 2 DEBUG oslo_service.periodic_task [None req-92a13bed-c081-4c7b-87f3-bafebefb79f2 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.12/site-packages/oslo_service/periodic_task.py:210
Oct 09 16:26:18 compute-0 podman[147477]: 2025-10-09 16:26:18.856634495 +0000 UTC m=+0.080746985 container health_status 8227728d23f374cb9528be6ddf01dff38d8c792912a17b0d137932a18390b820 (image=38.102.83.66:5001/podified-master-centos10/openstack-iscsid:watcher_latest, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, config_id=iscsid, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251007, org.label-schema.vendor=CentOS, io.buildah.version=1.41.4, org.label-schema.license=GPLv2, tcib_build_tag=watcher_latest, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': '38.102.83.66:5001/podified-master-centos10/openstack-iscsid:watcher_latest', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.schema-version=1.0, container_name=iscsid)
Oct 09 16:26:18 compute-0 podman[147476]: 2025-10-09 16:26:18.864518855 +0000 UTC m=+0.085321759 container health_status 0e966c7a283689daf23c5db5017f23f685ee836319240188a9f70ddd246563c5 (image=38.102.83.66:5001/podified-master-centos10/openstack-neutron-metadata-agent-ovn:watcher_latest, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': '38.102.83.66:5001/podified-master-centos10/openstack-neutron-metadata-agent-ovn:watcher_latest', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.build-date=20251007, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=watcher_latest, tcib_managed=true, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, io.buildah.version=1.41.4, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.vendor=CentOS)
Oct 09 16:26:19 compute-0 nova_compute[117331]: 2025-10-09 16:26:19.307 2 DEBUG oslo_service.periodic_task [None req-92a13bed-c081-4c7b-87f3-bafebefb79f2 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.12/site-packages/oslo_service/periodic_task.py:210
Oct 09 16:26:19 compute-0 nova_compute[117331]: 2025-10-09 16:26:19.685 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 09 16:26:20 compute-0 nova_compute[117331]: 2025-10-09 16:26:20.155 2 DEBUG nova.compute.manager [None req-d3cbdfed-9534-4436-a3db-56fbfb0d73fc 005258eefa5345409efe13f6a1de22ab c076c42271004e7aa86e84416e33f826 - - default default] pre_live_migration data is LibvirtLiveMigrateData(bdms=<?>,block_migration=True,disk_available_mb=74752,disk_over_commit=<?>,dst_cpu_shared_set_info=set([]),dst_numa_info=<?>,dst_supports_mdev_live_migration=True,dst_supports_numa_live_migration=<?>,dst_wants_file_backed_memory=False,file_backed_memory_discard=<?>,filename='tmpkzjy4nft',graphics_listen_addr_spice=127.0.0.1,graphics_listen_addr_vnc=::,image_type='qcow2',instance_relative_path='b5296471-a1e3-40b2-8cee-8028a741040c',is_shared_block_storage=False,is_shared_instance_path=False,is_volume_backed=False,migration=<?>,old_vol_attachment_ids=<?>,pci_dev_map_src_dst={},serial_listen_addr=None,serial_listen_ports=<?>,source_mdev_types=<?>,src_supports_native_luks=<?>,src_supports_numa_live_migration=<?>,supported_perf_events=<?>,target_connect_addr=<?>,target_mdevs=<?>,vifs=[VIFMigrateData],wait_for_vif_plugged=<?>) pre_live_migration /usr/lib/python3.12/site-packages/nova/compute/manager.py:9311
Oct 09 16:26:21 compute-0 nova_compute[117331]: 2025-10-09 16:26:21.175 2 DEBUG oslo_concurrency.lockutils [None req-d3cbdfed-9534-4436-a3db-56fbfb0d73fc 005258eefa5345409efe13f6a1de22ab c076c42271004e7aa86e84416e33f826 - - default default] Acquiring lock "refresh_cache-b5296471-a1e3-40b2-8cee-8028a741040c" lock /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:313
Oct 09 16:26:21 compute-0 nova_compute[117331]: 2025-10-09 16:26:21.175 2 DEBUG oslo_concurrency.lockutils [None req-d3cbdfed-9534-4436-a3db-56fbfb0d73fc 005258eefa5345409efe13f6a1de22ab c076c42271004e7aa86e84416e33f826 - - default default] Acquired lock "refresh_cache-b5296471-a1e3-40b2-8cee-8028a741040c" lock /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:316
Oct 09 16:26:21 compute-0 nova_compute[117331]: 2025-10-09 16:26:21.175 2 DEBUG nova.network.neutron [None req-d3cbdfed-9534-4436-a3db-56fbfb0d73fc 005258eefa5345409efe13f6a1de22ab c076c42271004e7aa86e84416e33f826 - - default default] [instance: b5296471-a1e3-40b2-8cee-8028a741040c] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.12/site-packages/nova/network/neutron.py:2070
Oct 09 16:26:21 compute-0 nova_compute[117331]: 2025-10-09 16:26:21.306 2 DEBUG oslo_service.periodic_task [None req-92a13bed-c081-4c7b-87f3-bafebefb79f2 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.12/site-packages/oslo_service/periodic_task.py:210
Oct 09 16:26:21 compute-0 nova_compute[117331]: 2025-10-09 16:26:21.605 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 09 16:26:21 compute-0 nova_compute[117331]: 2025-10-09 16:26:21.685 2 WARNING neutronclient.v2_0.client [None req-d3cbdfed-9534-4436-a3db-56fbfb0d73fc 005258eefa5345409efe13f6a1de22ab c076c42271004e7aa86e84416e33f826 - - default default] The python binding code in neutronclient is deprecated in favor of OpenstackSDK, please use that as this will be removed in a future release.
Oct 09 16:26:21 compute-0 nova_compute[117331]: 2025-10-09 16:26:21.820 2 DEBUG oslo_concurrency.lockutils [None req-92a13bed-c081-4c7b-87f3-bafebefb79f2 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:405
Oct 09 16:26:21 compute-0 nova_compute[117331]: 2025-10-09 16:26:21.821 2 DEBUG oslo_concurrency.lockutils [None req-92a13bed-c081-4c7b-87f3-bafebefb79f2 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:410
Oct 09 16:26:21 compute-0 nova_compute[117331]: 2025-10-09 16:26:21.822 2 DEBUG oslo_concurrency.lockutils [None req-92a13bed-c081-4c7b-87f3-bafebefb79f2 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:424
Oct 09 16:26:21 compute-0 nova_compute[117331]: 2025-10-09 16:26:21.822 2 DEBUG nova.compute.resource_tracker [None req-92a13bed-c081-4c7b-87f3-bafebefb79f2 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.12/site-packages/nova/compute/resource_tracker.py:937
Oct 09 16:26:22 compute-0 nova_compute[117331]: 2025-10-09 16:26:22.165 2 WARNING neutronclient.v2_0.client [None req-d3cbdfed-9534-4436-a3db-56fbfb0d73fc 005258eefa5345409efe13f6a1de22ab c076c42271004e7aa86e84416e33f826 - - default default] The python binding code in neutronclient is deprecated in favor of OpenstackSDK, please use that as this will be removed in a future release.
Oct 09 16:26:22 compute-0 nova_compute[117331]: 2025-10-09 16:26:22.350 2 DEBUG nova.network.neutron [None req-d3cbdfed-9534-4436-a3db-56fbfb0d73fc 005258eefa5345409efe13f6a1de22ab c076c42271004e7aa86e84416e33f826 - - default default] [instance: b5296471-a1e3-40b2-8cee-8028a741040c] Updating instance_info_cache with network_info: [{"id": "909ef769-15e5-4c3c-b440-6eb8f378e71f", "address": "fa:16:3e:65:76:ba", "network": {"id": "18ef4241-0151-441c-abdc-42d4b3a21b30", "bridge": "br-int", "label": "tempest-TestExecuteNodeResourceConsolidationStrategy-129271777-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c2c321aa8a494a8e8b49c81b79e3ceca", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap909ef769-15", "ovs_interfaceid": "909ef769-15e5-4c3c-b440-6eb8f378e71f", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.12/site-packages/nova/network/neutron.py:116
Oct 09 16:26:22 compute-0 nova_compute[117331]: 2025-10-09 16:26:22.856 2 DEBUG oslo_concurrency.lockutils [None req-d3cbdfed-9534-4436-a3db-56fbfb0d73fc 005258eefa5345409efe13f6a1de22ab c076c42271004e7aa86e84416e33f826 - - default default] Releasing lock "refresh_cache-b5296471-a1e3-40b2-8cee-8028a741040c" lock /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:334
Oct 09 16:26:22 compute-0 nova_compute[117331]: 2025-10-09 16:26:22.864 2 DEBUG oslo_concurrency.processutils [None req-92a13bed-c081-4c7b-87f3-bafebefb79f2 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/da070580-b1f8-4571-b524-ce73d61665df/disk --force-share --output=json execute /usr/lib/python3.12/site-packages/oslo_concurrency/processutils.py:349
Oct 09 16:26:22 compute-0 nova_compute[117331]: 2025-10-09 16:26:22.892 2 DEBUG nova.virt.libvirt.driver [None req-d3cbdfed-9534-4436-a3db-56fbfb0d73fc 005258eefa5345409efe13f6a1de22ab c076c42271004e7aa86e84416e33f826 - - default default] [instance: b5296471-a1e3-40b2-8cee-8028a741040c] migrate_data in pre_live_migration: LibvirtLiveMigrateData(bdms=<?>,block_migration=True,disk_available_mb=74752,disk_over_commit=<?>,dst_cpu_shared_set_info=set([]),dst_numa_info=<?>,dst_supports_mdev_live_migration=True,dst_supports_numa_live_migration=<?>,dst_wants_file_backed_memory=False,file_backed_memory_discard=<?>,filename='tmpkzjy4nft',graphics_listen_addr_spice=127.0.0.1,graphics_listen_addr_vnc=::,image_type='qcow2',instance_relative_path='b5296471-a1e3-40b2-8cee-8028a741040c',is_shared_block_storage=False,is_shared_instance_path=False,is_volume_backed=False,migration=<?>,old_vol_attachment_ids={},pci_dev_map_src_dst={},serial_listen_addr=None,serial_listen_ports=<?>,source_mdev_types=<?>,src_supports_native_luks=<?>,src_supports_numa_live_migration=<?>,supported_perf_events=<?>,target_connect_addr=<?>,target_mdevs=<?>,vifs=[VIFMigrateData],wait_for_vif_plugged=<?>) pre_live_migration /usr/lib/python3.12/site-packages/nova/virt/libvirt/driver.py:11737
Oct 09 16:26:22 compute-0 nova_compute[117331]: 2025-10-09 16:26:22.894 2 DEBUG nova.virt.libvirt.driver [None req-d3cbdfed-9534-4436-a3db-56fbfb0d73fc 005258eefa5345409efe13f6a1de22ab c076c42271004e7aa86e84416e33f826 - - default default] [instance: b5296471-a1e3-40b2-8cee-8028a741040c] Creating instance directory: /var/lib/nova/instances/b5296471-a1e3-40b2-8cee-8028a741040c pre_live_migration /usr/lib/python3.12/site-packages/nova/virt/libvirt/driver.py:11750
Oct 09 16:26:22 compute-0 nova_compute[117331]: 2025-10-09 16:26:22.894 2 DEBUG nova.virt.libvirt.driver [None req-d3cbdfed-9534-4436-a3db-56fbfb0d73fc 005258eefa5345409efe13f6a1de22ab c076c42271004e7aa86e84416e33f826 - - default default] [instance: b5296471-a1e3-40b2-8cee-8028a741040c] Creating disk.info with the contents: {'/var/lib/nova/instances/b5296471-a1e3-40b2-8cee-8028a741040c/disk': 'qcow2', '/var/lib/nova/instances/b5296471-a1e3-40b2-8cee-8028a741040c/disk.config': 'raw'} pre_live_migration /usr/lib/python3.12/site-packages/nova/virt/libvirt/driver.py:11764
Oct 09 16:26:22 compute-0 nova_compute[117331]: 2025-10-09 16:26:22.895 2 DEBUG nova.virt.libvirt.driver [None req-d3cbdfed-9534-4436-a3db-56fbfb0d73fc 005258eefa5345409efe13f6a1de22ab c076c42271004e7aa86e84416e33f826 - - default default] [instance: b5296471-a1e3-40b2-8cee-8028a741040c] Checking to make sure images and backing files are present before live migration. pre_live_migration /usr/lib/python3.12/site-packages/nova/virt/libvirt/driver.py:11774
Oct 09 16:26:22 compute-0 nova_compute[117331]: 2025-10-09 16:26:22.896 2 DEBUG nova.objects.instance [None req-d3cbdfed-9534-4436-a3db-56fbfb0d73fc 005258eefa5345409efe13f6a1de22ab c076c42271004e7aa86e84416e33f826 - - default default] Lazy-loading 'trusted_certs' on Instance uuid b5296471-a1e3-40b2-8cee-8028a741040c obj_load_attr /usr/lib/python3.12/site-packages/nova/objects/instance.py:1141
Oct 09 16:26:22 compute-0 nova_compute[117331]: 2025-10-09 16:26:22.936 2 DEBUG oslo_concurrency.processutils [None req-92a13bed-c081-4c7b-87f3-bafebefb79f2 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/da070580-b1f8-4571-b524-ce73d61665df/disk --force-share --output=json" returned: 0 in 0.072s execute /usr/lib/python3.12/site-packages/oslo_concurrency/processutils.py:372
Oct 09 16:26:22 compute-0 nova_compute[117331]: 2025-10-09 16:26:22.937 2 DEBUG oslo_concurrency.processutils [None req-92a13bed-c081-4c7b-87f3-bafebefb79f2 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/da070580-b1f8-4571-b524-ce73d61665df/disk --force-share --output=json execute /usr/lib/python3.12/site-packages/oslo_concurrency/processutils.py:349
Oct 09 16:26:23 compute-0 nova_compute[117331]: 2025-10-09 16:26:22.999 2 DEBUG oslo_concurrency.processutils [None req-92a13bed-c081-4c7b-87f3-bafebefb79f2 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/da070580-b1f8-4571-b524-ce73d61665df/disk --force-share --output=json" returned: 0 in 0.062s execute /usr/lib/python3.12/site-packages/oslo_concurrency/processutils.py:372
Oct 09 16:26:23 compute-0 nova_compute[117331]: 2025-10-09 16:26:23.130 2 WARNING nova.virt.libvirt.driver [None req-92a13bed-c081-4c7b-87f3-bafebefb79f2 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Oct 09 16:26:23 compute-0 nova_compute[117331]: 2025-10-09 16:26:23.131 2 DEBUG oslo_concurrency.processutils [None req-92a13bed-c081-4c7b-87f3-bafebefb79f2 - - - - - -] Running cmd (subprocess): env LANG=C uptime execute /usr/lib/python3.12/site-packages/oslo_concurrency/processutils.py:349
Oct 09 16:26:23 compute-0 nova_compute[117331]: 2025-10-09 16:26:23.147 2 DEBUG oslo_concurrency.processutils [None req-92a13bed-c081-4c7b-87f3-bafebefb79f2 - - - - - -] CMD "env LANG=C uptime" returned: 0 in 0.016s execute /usr/lib/python3.12/site-packages/oslo_concurrency/processutils.py:372
Oct 09 16:26:23 compute-0 nova_compute[117331]: 2025-10-09 16:26:23.148 2 DEBUG nova.compute.resource_tracker [None req-92a13bed-c081-4c7b-87f3-bafebefb79f2 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=5995MB free_disk=73.22867965698242GB free_vcpus=7 pci_devices=[{"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": 
"label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.12/site-packages/nova/compute/resource_tracker.py:1136
Oct 09 16:26:23 compute-0 nova_compute[117331]: 2025-10-09 16:26:23.148 2 DEBUG oslo_concurrency.lockutils [None req-92a13bed-c081-4c7b-87f3-bafebefb79f2 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:405
Oct 09 16:26:23 compute-0 nova_compute[117331]: 2025-10-09 16:26:23.148 2 DEBUG oslo_concurrency.lockutils [None req-92a13bed-c081-4c7b-87f3-bafebefb79f2 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:410
Oct 09 16:26:23 compute-0 nova_compute[117331]: 2025-10-09 16:26:23.403 2 DEBUG oslo_utils.imageutils.format_inspector [None req-d3cbdfed-9534-4436-a3db-56fbfb0d73fc 005258eefa5345409efe13f6a1de22ab c076c42271004e7aa86e84416e33f826 - - default default] Format inspector for vmdk does not match, excluding from consideration (Signature KDMV not found: b'\xebH\x90\x00') _process_chunk /usr/lib/python3.12/site-packages/oslo_utils/imageutils/format_inspector.py:1365
Oct 09 16:26:23 compute-0 nova_compute[117331]: 2025-10-09 16:26:23.411 2 DEBUG oslo_utils.imageutils.format_inspector [None req-d3cbdfed-9534-4436-a3db-56fbfb0d73fc 005258eefa5345409efe13f6a1de22ab c076c42271004e7aa86e84416e33f826 - - default default] Format inspector for vhdx does not match, excluding from consideration (Region signature not found at 30000) _process_chunk /usr/lib/python3.12/site-packages/oslo_utils/imageutils/format_inspector.py:1365
Oct 09 16:26:23 compute-0 nova_compute[117331]: 2025-10-09 16:26:23.414 2 DEBUG oslo_concurrency.processutils [None req-d3cbdfed-9534-4436-a3db-56fbfb0d73fc 005258eefa5345409efe13f6a1de22ab c076c42271004e7aa86e84416e33f826 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/cea3dacfdea0f3734ae526b812744e847bc2d356 --force-share --output=json execute /usr/lib/python3.12/site-packages/oslo_concurrency/processutils.py:349
Oct 09 16:26:23 compute-0 nova_compute[117331]: 2025-10-09 16:26:23.490 2 DEBUG oslo_concurrency.processutils [None req-d3cbdfed-9534-4436-a3db-56fbfb0d73fc 005258eefa5345409efe13f6a1de22ab c076c42271004e7aa86e84416e33f826 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/cea3dacfdea0f3734ae526b812744e847bc2d356 --force-share --output=json" returned: 0 in 0.076s execute /usr/lib/python3.12/site-packages/oslo_concurrency/processutils.py:372
Oct 09 16:26:23 compute-0 nova_compute[117331]: 2025-10-09 16:26:23.491 2 DEBUG oslo_concurrency.lockutils [None req-d3cbdfed-9534-4436-a3db-56fbfb0d73fc 005258eefa5345409efe13f6a1de22ab c076c42271004e7aa86e84416e33f826 - - default default] Acquiring lock "cea3dacfdea0f3734ae526b812744e847bc2d356" by "nova.virt.libvirt.imagebackend.Qcow2.create_image.<locals>.create_qcow2_image" inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:405
Oct 09 16:26:23 compute-0 nova_compute[117331]: 2025-10-09 16:26:23.491 2 DEBUG oslo_concurrency.lockutils [None req-d3cbdfed-9534-4436-a3db-56fbfb0d73fc 005258eefa5345409efe13f6a1de22ab c076c42271004e7aa86e84416e33f826 - - default default] Lock "cea3dacfdea0f3734ae526b812744e847bc2d356" acquired by "nova.virt.libvirt.imagebackend.Qcow2.create_image.<locals>.create_qcow2_image" :: waited 0.001s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:410
Oct 09 16:26:23 compute-0 nova_compute[117331]: 2025-10-09 16:26:23.492 2 DEBUG oslo_utils.imageutils.format_inspector [None req-d3cbdfed-9534-4436-a3db-56fbfb0d73fc 005258eefa5345409efe13f6a1de22ab c076c42271004e7aa86e84416e33f826 - - default default] Format inspector for vmdk does not match, excluding from consideration (Signature KDMV not found: b'\xebH\x90\x00') _process_chunk /usr/lib/python3.12/site-packages/oslo_utils/imageutils/format_inspector.py:1365
Oct 09 16:26:23 compute-0 nova_compute[117331]: 2025-10-09 16:26:23.496 2 DEBUG oslo_utils.imageutils.format_inspector [None req-d3cbdfed-9534-4436-a3db-56fbfb0d73fc 005258eefa5345409efe13f6a1de22ab c076c42271004e7aa86e84416e33f826 - - default default] Format inspector for vhdx does not match, excluding from consideration (Region signature not found at 30000) _process_chunk /usr/lib/python3.12/site-packages/oslo_utils/imageutils/format_inspector.py:1365
Oct 09 16:26:23 compute-0 nova_compute[117331]: 2025-10-09 16:26:23.496 2 DEBUG oslo_concurrency.processutils [None req-d3cbdfed-9534-4436-a3db-56fbfb0d73fc 005258eefa5345409efe13f6a1de22ab c076c42271004e7aa86e84416e33f826 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/cea3dacfdea0f3734ae526b812744e847bc2d356 --force-share --output=json execute /usr/lib/python3.12/site-packages/oslo_concurrency/processutils.py:349
Oct 09 16:26:23 compute-0 nova_compute[117331]: 2025-10-09 16:26:23.558 2 DEBUG oslo_concurrency.processutils [None req-d3cbdfed-9534-4436-a3db-56fbfb0d73fc 005258eefa5345409efe13f6a1de22ab c076c42271004e7aa86e84416e33f826 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/cea3dacfdea0f3734ae526b812744e847bc2d356 --force-share --output=json" returned: 0 in 0.061s execute /usr/lib/python3.12/site-packages/oslo_concurrency/processutils.py:372
Oct 09 16:26:23 compute-0 nova_compute[117331]: 2025-10-09 16:26:23.559 2 DEBUG oslo_concurrency.processutils [None req-d3cbdfed-9534-4436-a3db-56fbfb0d73fc 005258eefa5345409efe13f6a1de22ab c076c42271004e7aa86e84416e33f826 - - default default] Running cmd (subprocess): env LC_ALL=C LANG=C qemu-img create -f qcow2 -o backing_file=/var/lib/nova/instances/_base/cea3dacfdea0f3734ae526b812744e847bc2d356,backing_fmt=raw /var/lib/nova/instances/b5296471-a1e3-40b2-8cee-8028a741040c/disk 1073741824 execute /usr/lib/python3.12/site-packages/oslo_concurrency/processutils.py:349
Oct 09 16:26:23 compute-0 nova_compute[117331]: 2025-10-09 16:26:23.603 2 DEBUG oslo_concurrency.processutils [None req-d3cbdfed-9534-4436-a3db-56fbfb0d73fc 005258eefa5345409efe13f6a1de22ab c076c42271004e7aa86e84416e33f826 - - default default] CMD "env LC_ALL=C LANG=C qemu-img create -f qcow2 -o backing_file=/var/lib/nova/instances/_base/cea3dacfdea0f3734ae526b812744e847bc2d356,backing_fmt=raw /var/lib/nova/instances/b5296471-a1e3-40b2-8cee-8028a741040c/disk 1073741824" returned: 0 in 0.045s execute /usr/lib/python3.12/site-packages/oslo_concurrency/processutils.py:372
Oct 09 16:26:23 compute-0 nova_compute[117331]: 2025-10-09 16:26:23.605 2 DEBUG oslo_concurrency.lockutils [None req-d3cbdfed-9534-4436-a3db-56fbfb0d73fc 005258eefa5345409efe13f6a1de22ab c076c42271004e7aa86e84416e33f826 - - default default] Lock "cea3dacfdea0f3734ae526b812744e847bc2d356" "released" by "nova.virt.libvirt.imagebackend.Qcow2.create_image.<locals>.create_qcow2_image" :: held 0.114s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:424
Oct 09 16:26:23 compute-0 nova_compute[117331]: 2025-10-09 16:26:23.606 2 DEBUG oslo_concurrency.processutils [None req-d3cbdfed-9534-4436-a3db-56fbfb0d73fc 005258eefa5345409efe13f6a1de22ab c076c42271004e7aa86e84416e33f826 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/cea3dacfdea0f3734ae526b812744e847bc2d356 --force-share --output=json execute /usr/lib/python3.12/site-packages/oslo_concurrency/processutils.py:349
Oct 09 16:26:23 compute-0 nova_compute[117331]: 2025-10-09 16:26:23.656 2 DEBUG oslo_concurrency.processutils [None req-d3cbdfed-9534-4436-a3db-56fbfb0d73fc 005258eefa5345409efe13f6a1de22ab c076c42271004e7aa86e84416e33f826 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/cea3dacfdea0f3734ae526b812744e847bc2d356 --force-share --output=json" returned: 0 in 0.050s execute /usr/lib/python3.12/site-packages/oslo_concurrency/processutils.py:372
Oct 09 16:26:23 compute-0 nova_compute[117331]: 2025-10-09 16:26:23.657 2 DEBUG nova.virt.disk.api [None req-d3cbdfed-9534-4436-a3db-56fbfb0d73fc 005258eefa5345409efe13f6a1de22ab c076c42271004e7aa86e84416e33f826 - - default default] Checking if we can resize image /var/lib/nova/instances/b5296471-a1e3-40b2-8cee-8028a741040c/disk. size=1073741824 can_resize_image /usr/lib/python3.12/site-packages/nova/virt/disk/api.py:164
Oct 09 16:26:23 compute-0 nova_compute[117331]: 2025-10-09 16:26:23.658 2 DEBUG oslo_concurrency.processutils [None req-d3cbdfed-9534-4436-a3db-56fbfb0d73fc 005258eefa5345409efe13f6a1de22ab c076c42271004e7aa86e84416e33f826 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/b5296471-a1e3-40b2-8cee-8028a741040c/disk --force-share --output=json execute /usr/lib/python3.12/site-packages/oslo_concurrency/processutils.py:349
Oct 09 16:26:23 compute-0 nova_compute[117331]: 2025-10-09 16:26:23.740 2 DEBUG oslo_concurrency.processutils [None req-d3cbdfed-9534-4436-a3db-56fbfb0d73fc 005258eefa5345409efe13f6a1de22ab c076c42271004e7aa86e84416e33f826 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/b5296471-a1e3-40b2-8cee-8028a741040c/disk --force-share --output=json" returned: 0 in 0.082s execute /usr/lib/python3.12/site-packages/oslo_concurrency/processutils.py:372
Oct 09 16:26:23 compute-0 nova_compute[117331]: 2025-10-09 16:26:23.741 2 DEBUG nova.virt.disk.api [None req-d3cbdfed-9534-4436-a3db-56fbfb0d73fc 005258eefa5345409efe13f6a1de22ab c076c42271004e7aa86e84416e33f826 - - default default] Cannot resize image /var/lib/nova/instances/b5296471-a1e3-40b2-8cee-8028a741040c/disk to a smaller size. can_resize_image /usr/lib/python3.12/site-packages/nova/virt/disk/api.py:170
Oct 09 16:26:23 compute-0 nova_compute[117331]: 2025-10-09 16:26:23.742 2 DEBUG nova.objects.instance [None req-d3cbdfed-9534-4436-a3db-56fbfb0d73fc 005258eefa5345409efe13f6a1de22ab c076c42271004e7aa86e84416e33f826 - - default default] Lazy-loading 'migration_context' on Instance uuid b5296471-a1e3-40b2-8cee-8028a741040c obj_load_attr /usr/lib/python3.12/site-packages/nova/objects/instance.py:1141
Oct 09 16:26:24 compute-0 nova_compute[117331]: 2025-10-09 16:26:24.181 2 DEBUG nova.compute.resource_tracker [None req-92a13bed-c081-4c7b-87f3-bafebefb79f2 - - - - - -] Migration for instance b5296471-a1e3-40b2-8cee-8028a741040c refers to another host's instance! _pair_instances_to_migrations /usr/lib/python3.12/site-packages/nova/compute/resource_tracker.py:979
Oct 09 16:26:24 compute-0 nova_compute[117331]: 2025-10-09 16:26:24.251 2 DEBUG nova.objects.base [None req-d3cbdfed-9534-4436-a3db-56fbfb0d73fc 005258eefa5345409efe13f6a1de22ab c076c42271004e7aa86e84416e33f826 - - default default] Object Instance<b5296471-a1e3-40b2-8cee-8028a741040c> lazy-loaded attributes: trusted_certs,migration_context wrapper /usr/lib/python3.12/site-packages/nova/objects/base.py:136
Oct 09 16:26:24 compute-0 nova_compute[117331]: 2025-10-09 16:26:24.252 2 DEBUG oslo_concurrency.processutils [None req-d3cbdfed-9534-4436-a3db-56fbfb0d73fc 005258eefa5345409efe13f6a1de22ab c076c42271004e7aa86e84416e33f826 - - default default] Running cmd (subprocess): env LC_ALL=C LANG=C qemu-img create -f raw /var/lib/nova/instances/b5296471-a1e3-40b2-8cee-8028a741040c/disk.config 497664 execute /usr/lib/python3.12/site-packages/oslo_concurrency/processutils.py:349
Oct 09 16:26:24 compute-0 nova_compute[117331]: 2025-10-09 16:26:24.284 2 DEBUG oslo_concurrency.processutils [None req-d3cbdfed-9534-4436-a3db-56fbfb0d73fc 005258eefa5345409efe13f6a1de22ab c076c42271004e7aa86e84416e33f826 - - default default] CMD "env LC_ALL=C LANG=C qemu-img create -f raw /var/lib/nova/instances/b5296471-a1e3-40b2-8cee-8028a741040c/disk.config 497664" returned: 0 in 0.032s execute /usr/lib/python3.12/site-packages/oslo_concurrency/processutils.py:372
Oct 09 16:26:24 compute-0 nova_compute[117331]: 2025-10-09 16:26:24.286 2 DEBUG nova.virt.libvirt.driver [None req-d3cbdfed-9534-4436-a3db-56fbfb0d73fc 005258eefa5345409efe13f6a1de22ab c076c42271004e7aa86e84416e33f826 - - default default] [instance: b5296471-a1e3-40b2-8cee-8028a741040c] Plugging VIFs using destination host port bindings before live migration. _pre_live_migration_plug_vifs /usr/lib/python3.12/site-packages/nova/virt/libvirt/driver.py:11704
Oct 09 16:26:24 compute-0 nova_compute[117331]: 2025-10-09 16:26:24.288 2 DEBUG nova.virt.libvirt.vif [None req-d3cbdfed-9534-4436-a3db-56fbfb0d73fc 005258eefa5345409efe13f6a1de22ab c076c42271004e7aa86e84416e33f826 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,compute_id=2,config_drive='True',created_at=2025-10-09T16:25:47Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description=None,display_name='tempest-TestExecuteNodeResourceConsolidationStrategy-server-1733509377',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-1.ctlplane.example.com',hostname='tempest-testexecutenoderesourceconsolidationstrategy-server-173',id=17,image_ref='b7d6e0af-25e4-4227-9dc6-43143898ceee',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-10-09T16:26:01Z,launched_on='compute-1.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-1.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=1,progress=0,project_id='1a345bddd4804404a55948133ea8150f',ramdisk_id='',reservation_id='r-ew4rueq2',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,manager,admin,member',image_base_image_ref='b7d6e0af-25e4-4227-9dc6-43143898ceee',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',imag
e_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestExecuteNodeResourceConsolidationStrategy-1915104332',owner_user_name='tempest-TestExecuteNodeResourceConsolidationStrategy-1915104332-project-admin'},tags=<?>,task_state='migrating',terminated_at=None,trusted_certs=None,updated_at=2025-10-09T16:26:01Z,user_data=None,user_id='c2c0f7c2f15e4da6881dc393064b0e16',uuid=b5296471-a1e3-40b2-8cee-8028a741040c,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "909ef769-15e5-4c3c-b440-6eb8f378e71f", "address": "fa:16:3e:65:76:ba", "network": {"id": "18ef4241-0151-441c-abdc-42d4b3a21b30", "bridge": "br-int", "label": "tempest-TestExecuteNodeResourceConsolidationStrategy-129271777-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c2c321aa8a494a8e8b49c81b79e3ceca", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system"}, "devname": "tap909ef769-15", "ovs_interfaceid": "909ef769-15e5-4c3c-b440-6eb8f378e71f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {"os_vif_delegation": true}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.12/site-packages/nova/virt/libvirt/vif.py:721
Oct 09 16:26:24 compute-0 nova_compute[117331]: 2025-10-09 16:26:24.289 2 DEBUG nova.network.os_vif_util [None req-d3cbdfed-9534-4436-a3db-56fbfb0d73fc 005258eefa5345409efe13f6a1de22ab c076c42271004e7aa86e84416e33f826 - - default default] Converting VIF {"id": "909ef769-15e5-4c3c-b440-6eb8f378e71f", "address": "fa:16:3e:65:76:ba", "network": {"id": "18ef4241-0151-441c-abdc-42d4b3a21b30", "bridge": "br-int", "label": "tempest-TestExecuteNodeResourceConsolidationStrategy-129271777-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c2c321aa8a494a8e8b49c81b79e3ceca", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system"}, "devname": "tap909ef769-15", "ovs_interfaceid": "909ef769-15e5-4c3c-b440-6eb8f378e71f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {"os_vif_delegation": true}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.12/site-packages/nova/network/os_vif_util.py:511
Oct 09 16:26:24 compute-0 nova_compute[117331]: 2025-10-09 16:26:24.291 2 DEBUG nova.network.os_vif_util [None req-d3cbdfed-9534-4436-a3db-56fbfb0d73fc 005258eefa5345409efe13f6a1de22ab c076c42271004e7aa86e84416e33f826 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:65:76:ba,bridge_name='br-int',has_traffic_filtering=True,id=909ef769-15e5-4c3c-b440-6eb8f378e71f,network=Network(18ef4241-0151-441c-abdc-42d4b3a21b30),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap909ef769-15') nova_to_osvif_vif /usr/lib/python3.12/site-packages/nova/network/os_vif_util.py:548
Oct 09 16:26:24 compute-0 nova_compute[117331]: 2025-10-09 16:26:24.291 2 DEBUG os_vif [None req-d3cbdfed-9534-4436-a3db-56fbfb0d73fc 005258eefa5345409efe13f6a1de22ab c076c42271004e7aa86e84416e33f826 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:65:76:ba,bridge_name='br-int',has_traffic_filtering=True,id=909ef769-15e5-4c3c-b440-6eb8f378e71f,network=Network(18ef4241-0151-441c-abdc-42d4b3a21b30),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap909ef769-15') plug /usr/lib/python3.12/site-packages/os_vif/__init__.py:76
Oct 09 16:26:24 compute-0 nova_compute[117331]: 2025-10-09 16:26:24.293 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 22 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 09 16:26:24 compute-0 nova_compute[117331]: 2025-10-09 16:26:24.293 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.12/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 09 16:26:24 compute-0 nova_compute[117331]: 2025-10-09 16:26:24.294 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.12/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 09 16:26:24 compute-0 nova_compute[117331]: 2025-10-09 16:26:24.295 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 22 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 09 16:26:24 compute-0 nova_compute[117331]: 2025-10-09 16:26:24.296 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbCreateCommand(_result=None, table=QoS, columns={'type': 'linux-noop', 'external_ids': {'id': '91097698-e92e-52fe-8f06-0f81e5ee3a24', '_type': 'linux-noop'}}, row=False) do_commit /usr/lib/python3.12/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 09 16:26:24 compute-0 nova_compute[117331]: 2025-10-09 16:26:24.298 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 09 16:26:24 compute-0 nova_compute[117331]: 2025-10-09 16:26:24.301 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:248
Oct 09 16:26:24 compute-0 nova_compute[117331]: 2025-10-09 16:26:24.305 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 22 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 09 16:26:24 compute-0 nova_compute[117331]: 2025-10-09 16:26:24.306 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap909ef769-15, may_exist=True, interface_attrs={}) do_commit /usr/lib/python3.12/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 09 16:26:24 compute-0 nova_compute[117331]: 2025-10-09 16:26:24.307 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Port, record=tap909ef769-15, col_values=(('qos', UUID('fd281c9b-3fb2-4411-b6fb-4da49bca9c8a')),), if_exists=True) do_commit /usr/lib/python3.12/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 09 16:26:24 compute-0 nova_compute[117331]: 2025-10-09 16:26:24.308 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=2): DbSetCommand(_result=None, table=Interface, record=tap909ef769-15, col_values=(('external_ids', {'iface-id': '909ef769-15e5-4c3c-b440-6eb8f378e71f', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:65:76:ba', 'vm-uuid': 'b5296471-a1e3-40b2-8cee-8028a741040c'}),), if_exists=True) do_commit /usr/lib/python3.12/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 09 16:26:24 compute-0 nova_compute[117331]: 2025-10-09 16:26:24.309 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 09 16:26:24 compute-0 NetworkManager[1028]: <info>  [1760027184.3110] manager: (tap909ef769-15): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/60)
Oct 09 16:26:24 compute-0 nova_compute[117331]: 2025-10-09 16:26:24.312 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:248
Oct 09 16:26:24 compute-0 nova_compute[117331]: 2025-10-09 16:26:24.319 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 09 16:26:24 compute-0 nova_compute[117331]: 2025-10-09 16:26:24.321 2 INFO os_vif [None req-d3cbdfed-9534-4436-a3db-56fbfb0d73fc 005258eefa5345409efe13f6a1de22ab c076c42271004e7aa86e84416e33f826 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:65:76:ba,bridge_name='br-int',has_traffic_filtering=True,id=909ef769-15e5-4c3c-b440-6eb8f378e71f,network=Network(18ef4241-0151-441c-abdc-42d4b3a21b30),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap909ef769-15')
Oct 09 16:26:24 compute-0 nova_compute[117331]: 2025-10-09 16:26:24.322 2 DEBUG nova.virt.libvirt.driver [None req-d3cbdfed-9534-4436-a3db-56fbfb0d73fc 005258eefa5345409efe13f6a1de22ab c076c42271004e7aa86e84416e33f826 - - default default] No dst_numa_info in migrate_data, no cores to power up in pre_live_migration. pre_live_migration /usr/lib/python3.12/site-packages/nova/virt/libvirt/driver.py:11851
Oct 09 16:26:24 compute-0 nova_compute[117331]: 2025-10-09 16:26:24.323 2 DEBUG nova.compute.manager [None req-d3cbdfed-9534-4436-a3db-56fbfb0d73fc 005258eefa5345409efe13f6a1de22ab c076c42271004e7aa86e84416e33f826 - - default default] driver pre_live_migration data is LibvirtLiveMigrateData(bdms=[],block_migration=True,disk_available_mb=74752,disk_over_commit=<?>,dst_cpu_shared_set_info=set([]),dst_numa_info=<?>,dst_supports_mdev_live_migration=True,dst_supports_numa_live_migration=<?>,dst_wants_file_backed_memory=False,file_backed_memory_discard=<?>,filename='tmpkzjy4nft',graphics_listen_addr_spice=127.0.0.1,graphics_listen_addr_vnc=::,image_type='qcow2',instance_relative_path='b5296471-a1e3-40b2-8cee-8028a741040c',is_shared_block_storage=False,is_shared_instance_path=False,is_volume_backed=False,migration=<?>,old_vol_attachment_ids={},pci_dev_map_src_dst={},serial_listen_addr=None,serial_listen_ports=[],source_mdev_types=<?>,src_supports_native_luks=<?>,src_supports_numa_live_migration=<?>,supported_perf_events=[],target_connect_addr=None,target_mdevs=<?>,vifs=[VIFMigrateData],wait_for_vif_plugged=<?>) pre_live_migration /usr/lib/python3.12/site-packages/nova/compute/manager.py:9377
Oct 09 16:26:24 compute-0 nova_compute[117331]: 2025-10-09 16:26:24.324 2 WARNING neutronclient.v2_0.client [None req-d3cbdfed-9534-4436-a3db-56fbfb0d73fc 005258eefa5345409efe13f6a1de22ab c076c42271004e7aa86e84416e33f826 - - default default] The python binding code in neutronclient is deprecated in favor of OpenstackSDK, please use that as this will be removed in a future release.
Oct 09 16:26:24 compute-0 nova_compute[117331]: 2025-10-09 16:26:24.391 2 WARNING neutronclient.v2_0.client [None req-d3cbdfed-9534-4436-a3db-56fbfb0d73fc 005258eefa5345409efe13f6a1de22ab c076c42271004e7aa86e84416e33f826 - - default default] The python binding code in neutronclient is deprecated in favor of OpenstackSDK, please use that as this will be removed in a future release.
Oct 09 16:26:24 compute-0 nova_compute[117331]: 2025-10-09 16:26:24.691 2 INFO nova.compute.resource_tracker [None req-92a13bed-c081-4c7b-87f3-bafebefb79f2 - - - - - -] [instance: b5296471-a1e3-40b2-8cee-8028a741040c] Updating resource usage from migration 190f046e-207f-48ac-aa41-d06e58048482
Oct 09 16:26:24 compute-0 nova_compute[117331]: 2025-10-09 16:26:24.692 2 DEBUG nova.compute.resource_tracker [None req-92a13bed-c081-4c7b-87f3-bafebefb79f2 - - - - - -] [instance: b5296471-a1e3-40b2-8cee-8028a741040c] Starting to track incoming migration 190f046e-207f-48ac-aa41-d06e58048482 with flavor 5aeb5bf9-70f3-4ab6-b418-2c3319a7d2b3 _update_usage_from_migration /usr/lib/python3.12/site-packages/nova/compute/resource_tracker.py:1536
Oct 09 16:26:25 compute-0 nova_compute[117331]: 2025-10-09 16:26:25.239 2 DEBUG nova.compute.resource_tracker [None req-92a13bed-c081-4c7b-87f3-bafebefb79f2 - - - - - -] Instance da070580-b1f8-4571-b524-ce73d61665df actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.12/site-packages/nova/compute/resource_tracker.py:1740
Oct 09 16:26:25 compute-0 nova_compute[117331]: 2025-10-09 16:26:25.452 2 DEBUG nova.network.neutron [None req-d3cbdfed-9534-4436-a3db-56fbfb0d73fc 005258eefa5345409efe13f6a1de22ab c076c42271004e7aa86e84416e33f826 - - default default] [instance: b5296471-a1e3-40b2-8cee-8028a741040c] Port 909ef769-15e5-4c3c-b440-6eb8f378e71f updated with migration profile {'migrating_to': 'compute-0.ctlplane.example.com'} successfully _setup_migration_port_profile /usr/lib/python3.12/site-packages/nova/network/neutron.py:356
Oct 09 16:26:25 compute-0 nova_compute[117331]: 2025-10-09 16:26:25.466 2 DEBUG nova.compute.manager [None req-d3cbdfed-9534-4436-a3db-56fbfb0d73fc 005258eefa5345409efe13f6a1de22ab c076c42271004e7aa86e84416e33f826 - - default default] pre_live_migration result data is LibvirtLiveMigrateData(bdms=[],block_migration=True,disk_available_mb=74752,disk_over_commit=<?>,dst_cpu_shared_set_info=set([]),dst_numa_info=<?>,dst_supports_mdev_live_migration=True,dst_supports_numa_live_migration=<?>,dst_wants_file_backed_memory=False,file_backed_memory_discard=<?>,filename='tmpkzjy4nft',graphics_listen_addr_spice=127.0.0.1,graphics_listen_addr_vnc=::,image_type='qcow2',instance_relative_path='b5296471-a1e3-40b2-8cee-8028a741040c',is_shared_block_storage=False,is_shared_instance_path=False,is_volume_backed=False,migration=<?>,old_vol_attachment_ids={},pci_dev_map_src_dst={},serial_listen_addr=None,serial_listen_ports=[],source_mdev_types=<?>,src_supports_native_luks=<?>,src_supports_numa_live_migration=<?>,supported_perf_events=[],target_connect_addr=None,target_mdevs=<?>,vifs=[VIFMigrateData],wait_for_vif_plugged=True) pre_live_migration /usr/lib/python3.12/site-packages/nova/compute/manager.py:9443
Oct 09 16:26:25 compute-0 nova_compute[117331]: 2025-10-09 16:26:25.746 2 WARNING nova.compute.resource_tracker [None req-92a13bed-c081-4c7b-87f3-bafebefb79f2 - - - - - -] Instance b5296471-a1e3-40b2-8cee-8028a741040c has been moved to another host compute-1.ctlplane.example.com(compute-1.ctlplane.example.com). There are allocations remaining against the source host that might need to be removed: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}.
Oct 09 16:26:25 compute-0 nova_compute[117331]: 2025-10-09 16:26:25.747 2 DEBUG nova.compute.resource_tracker [None req-92a13bed-c081-4c7b-87f3-bafebefb79f2 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 2 _report_final_resource_view /usr/lib/python3.12/site-packages/nova/compute/resource_tracker.py:1159
Oct 09 16:26:25 compute-0 nova_compute[117331]: 2025-10-09 16:26:25.747 2 DEBUG nova.compute.resource_tracker [None req-92a13bed-c081-4c7b-87f3-bafebefb79f2 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=768MB phys_disk=79GB used_disk=2GB total_vcpus=8 used_vcpus=2 pci_stats=[] stats={'failed_builds': '0', 'uptime': ' 16:26:23 up 35 min,  0 user,  load average: 0.85, 0.65, 0.43\n', 'num_instances': '1', 'num_vm_active': '1', 'num_task_None': '1', 'num_os_type_None': '1', 'num_proj_1a345bddd4804404a55948133ea8150f': '1', 'io_workload': '0'} _report_final_resource_view /usr/lib/python3.12/site-packages/nova/compute/resource_tracker.py:1168
Oct 09 16:26:25 compute-0 nova_compute[117331]: 2025-10-09 16:26:25.760 2 DEBUG nova.scheduler.client.report [None req-92a13bed-c081-4c7b-87f3-bafebefb79f2 - - - - - -] Refreshing inventories for resource provider 593051b8-2000-437f-a915-2616fc8b1671 _refresh_associations /usr/lib/python3.12/site-packages/nova/scheduler/client/report.py:822
Oct 09 16:26:25 compute-0 nova_compute[117331]: 2025-10-09 16:26:25.772 2 DEBUG nova.scheduler.client.report [None req-92a13bed-c081-4c7b-87f3-bafebefb79f2 - - - - - -] Updating ProviderTree inventory for provider 593051b8-2000-437f-a915-2616fc8b1671 from _refresh_and_get_inventory using data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 79, 'reserved': 1, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} _refresh_and_get_inventory /usr/lib/python3.12/site-packages/nova/scheduler/client/report.py:786
Oct 09 16:26:25 compute-0 nova_compute[117331]: 2025-10-09 16:26:25.773 2 DEBUG nova.compute.provider_tree [None req-92a13bed-c081-4c7b-87f3-bafebefb79f2 - - - - - -] Updating inventory in ProviderTree for provider 593051b8-2000-437f-a915-2616fc8b1671 with inventory: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 79, 'reserved': 1, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} update_inventory /usr/lib/python3.12/site-packages/nova/compute/provider_tree.py:176
Oct 09 16:26:25 compute-0 nova_compute[117331]: 2025-10-09 16:26:25.784 2 DEBUG nova.scheduler.client.report [None req-92a13bed-c081-4c7b-87f3-bafebefb79f2 - - - - - -] Refreshing aggregate associations for resource provider 593051b8-2000-437f-a915-2616fc8b1671, aggregates: None _refresh_associations /usr/lib/python3.12/site-packages/nova/scheduler/client/report.py:831
Oct 09 16:26:25 compute-0 nova_compute[117331]: 2025-10-09 16:26:25.805 2 DEBUG nova.scheduler.client.report [None req-92a13bed-c081-4c7b-87f3-bafebefb79f2 - - - - - -] Refreshing trait associations for resource provider 593051b8-2000-437f-a915-2616fc8b1671, traits: HW_CPU_X86_AVX2,HW_CPU_X86_BMI,COMPUTE_SOUND_MODEL_ICH6,COMPUTE_IMAGE_TYPE_RAW,COMPUTE_ARCH_X86_64,COMPUTE_IMAGE_TYPE_AMI,HW_CPU_X86_SSSE3,COMPUTE_SECURITY_TPM_CRB,COMPUTE_GRAPHICS_MODEL_CIRRUS,COMPUTE_GRAPHICS_MODEL_NONE,COMPUTE_VIOMMU_MODEL_VIRTIO,HW_CPU_X86_AESNI,COMPUTE_NET_VIF_MODEL_VMXNET3,COMPUTE_SOUND_MODEL_PCSPK,COMPUTE_VOLUME_ATTACH_WITH_TAG,COMPUTE_SOUND_MODEL_ES1370,COMPUTE_NODE,COMPUTE_VIOMMU_MODEL_AUTO,COMPUTE_SOUND_MODEL_ICH9,COMPUTE_IMAGE_TYPE_QCOW2,COMPUTE_VOLUME_MULTI_ATTACH,COMPUTE_SECURITY_UEFI_SECURE_BOOT,COMPUTE_STORAGE_BUS_USB,COMPUTE_GRAPHICS_MODEL_BOCHS,COMPUTE_SECURITY_TPM_TIS,HW_ARCH_X86_64,COMPUTE_SOUND_MODEL_VIRTIO,HW_CPU_X86_BMI2,COMPUTE_STORAGE_BUS_FDC,COMPUTE_SOUND_MODEL_USB,COMPUTE_STORAGE_BUS_SATA,COMPUTE_SOUND_MODEL_AC97,COMPUTE_NET_VIF_MODEL_SPAPR_VLAN,COMPUTE_IMAGE_TYPE_ISO,COMPUTE_VIOMMU_MODEL_INTEL,COMPUTE_NET_VIF_MODEL_PCNET,HW_CPU_X86_F16C,COMPUTE_NET_VIF_MODEL_IGB,COMPUTE_ACCELERATORS,COMPUTE_NET_VIF_MODEL_RTL8139,HW_CPU_X86_AMD_SVM,COMPUTE_NET_VIF_MODEL_E1000E,HW_CPU_X86_CLMUL,COMPUTE_NET_VIF_MODEL_NE2K_PCI,HW_CPU_X86_SHA,COMPUTE_NET_VIF_MODEL_VIRTIO,HW_CPU_X86_AVX,COMPUTE_ADDRESS_SPACE_EMULATED,COMPUTE_NET_VIF_MODEL_E1000,COMPUTE_IMAGE_TYPE_ARI,COMPUTE_NET_ATTACH_INTERFACE,HW_CPU_X86_SSE4A,COMPUTE_TRUSTED_CERTS,COMPUTE_GRAPHICS_MODEL_VGA,HW_CPU_X86_SSE2,COMPUTE_GRAPHICS_MODEL_VIRTIO,COMPUTE_STORAGE_VIRTIO_FS,COMPUTE_SECURITY_STATELESS_FIRMWARE,COMPUTE_NET_ATTACH_INTERFACE_WITH_TAG,HW_CPU_X86_SSE,HW_CPU_X86_SVM,COMPUTE_STORAGE_BUS_VIRTIO,COMPUTE_NET_VIRTIO_PACKED,COMPUTE_SECURITY_TPM_2_0,HW_CPU_X86_MMX,HW_CPU_X86_FMA3,COMPUTE_VOLUME_EXTEND,COMPUTE_DEVICE_TAGGING,HW_CPU_X86_SSE42,COMPUTE_SOCKET_PCI_NUMA_AFFINITY,COMPUTE_IMAGE_TYPE_AKI,COMPUTE_STORAGE_BUS_SCSI,COMPUTE_ADDRESS_SPACE_PASSTHROUGH,COMPUTE_RESCUE_BFV,COMPUTE_SOUND_MODEL_SB16,HW_CPU_X86_SSE41,COMPUTE_STORAGE_BUS_IDE,HW_CPU_X86_ABM _refresh_associations /usr/lib/python3.12/site-packages/nova/scheduler/client/report.py:843
Oct 09 16:26:25 compute-0 nova_compute[117331]: 2025-10-09 16:26:25.860 2 DEBUG nova.compute.provider_tree [None req-92a13bed-c081-4c7b-87f3-bafebefb79f2 - - - - - -] Inventory has not changed in ProviderTree for provider: 593051b8-2000-437f-a915-2616fc8b1671 update_inventory /usr/lib/python3.12/site-packages/nova/compute/provider_tree.py:180
Oct 09 16:26:26 compute-0 nova_compute[117331]: 2025-10-09 16:26:26.368 2 DEBUG nova.scheduler.client.report [None req-92a13bed-c081-4c7b-87f3-bafebefb79f2 - - - - - -] Inventory has not changed for provider 593051b8-2000-437f-a915-2616fc8b1671 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 79, 'reserved': 1, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.12/site-packages/nova/scheduler/client/report.py:958
Oct 09 16:26:26 compute-0 nova_compute[117331]: 2025-10-09 16:26:26.607 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 09 16:26:26 compute-0 nova_compute[117331]: 2025-10-09 16:26:26.878 2 DEBUG nova.compute.resource_tracker [None req-92a13bed-c081-4c7b-87f3-bafebefb79f2 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.12/site-packages/nova/compute/resource_tracker.py:1097
Oct 09 16:26:26 compute-0 nova_compute[117331]: 2025-10-09 16:26:26.878 2 DEBUG oslo_concurrency.lockutils [None req-92a13bed-c081-4c7b-87f3-bafebefb79f2 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 3.730s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:424
Oct 09 16:26:27 compute-0 podman[147542]: 2025-10-09 16:26:27.835226732 +0000 UTC m=+0.073085032 container health_status 64fa251112d01abb34ec1bb8e22019815ddcaaaa391ce5ec91517147ac5a9994 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., config_id=edpm, distribution-scope=public, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, release=1755695350, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, vendor=Red Hat, Inc., version=9.6, architecture=x86_64, build-date=2025-08-20T13:12:41, url=https://catalog.redhat.com/en/search?searchType=containers, io.openshift.tags=minimal rhel9, vcs-type=git, com.redhat.component=ubi9-minimal-container, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, managed_by=edpm_ansible, name=ubi9-minimal, io.openshift.expose-services=, maintainer=Red Hat, Inc., io.buildah.version=1.33.7, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, container_name=openstack_network_exporter, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly.)
Oct 09 16:26:27 compute-0 podman[147543]: 2025-10-09 16:26:27.878867577 +0000 UTC m=+0.110233501 container health_status d615e4f24ff62fbb14be4a63e6da568ec5b22309773c1c66103b6b4de64a8a2f (image=38.102.83.66:5001/podified-master-centos10/openstack-ovn-controller:watcher_latest, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, tcib_build_tag=watcher_latest, tcib_managed=true, container_name=ovn_controller, managed_by=edpm_ansible, config_id=ovn_controller, io.buildah.version=1.41.4, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': '38.102.83.66:5001/podified-master-centos10/openstack-ovn-controller:watcher_latest', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.build-date=20251007)
Oct 09 16:26:28 compute-0 systemd[1]: Starting libvirt proxy daemon...
Oct 09 16:26:28 compute-0 systemd[1]: Started libvirt proxy daemon.
Oct 09 16:26:29 compute-0 kernel: tap909ef769-15: entered promiscuous mode
Oct 09 16:26:29 compute-0 NetworkManager[1028]: <info>  [1760027189.0956] manager: (tap909ef769-15): new Tun device (/org/freedesktop/NetworkManager/Devices/61)
Oct 09 16:26:29 compute-0 ovn_controller[19752]: 2025-10-09T16:26:29Z|00145|binding|INFO|Claiming lport 909ef769-15e5-4c3c-b440-6eb8f378e71f for this additional chassis.
Oct 09 16:26:29 compute-0 ovn_controller[19752]: 2025-10-09T16:26:29Z|00146|binding|INFO|909ef769-15e5-4c3c-b440-6eb8f378e71f: Claiming fa:16:3e:65:76:ba 10.100.0.7
Oct 09 16:26:29 compute-0 nova_compute[117331]: 2025-10-09 16:26:29.097 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 09 16:26:29 compute-0 ovn_metadata_agent[28608]: 2025-10-09 16:26:29.103 28613 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:65:76:ba 10.100.0.7'], port_security=['fa:16:3e:65:76:ba 10.100.0.7'], type=, nat_addresses=[], virtual_parent=[], up=[True], options={'requested-chassis': 'compute-1.ctlplane.example.com,compute-0.ctlplane.example.com', 'activation-strategy': 'rarp'}, parent_port=[], requested_additional_chassis=[<ovs.db.idl.Row object at 0x7f37b2576840>], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.7/28', 'neutron:device_id': 'b5296471-a1e3-40b2-8cee-8028a741040c', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-18ef4241-0151-441c-abdc-42d4b3a21b30', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '1a345bddd4804404a55948133ea8150f', 'neutron:revision_number': '10', 'neutron:security_group_ids': '9689e159-b15c-4e2f-9c7b-a0ccf3c3f578', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-1.ctlplane.example.com'}, additional_chassis=[<ovs.db.idl.Row object at 0x7f37b2576840>], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=0ed87cdf-61a6-4df0-ac74-37289a0bcd5f, chassis=[], tunnel_key=4, gateway_chassis=[], requested_chassis=[], logical_port=909ef769-15e5-4c3c-b440-6eb8f378e71f) old=Port_Binding(additional_chassis=[]) matches /usr/lib/python3.12/site-packages/ovsdbapp/backend/ovs_idl/event.py:55
Oct 09 16:26:29 compute-0 ovn_metadata_agent[28608]: 2025-10-09 16:26:29.105 28613 INFO neutron.agent.ovn.metadata.agent [-] Port 909ef769-15e5-4c3c-b440-6eb8f378e71f in datapath 18ef4241-0151-441c-abdc-42d4b3a21b30 unbound from our chassis
Oct 09 16:26:29 compute-0 ovn_metadata_agent[28608]: 2025-10-09 16:26:29.106 28613 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 18ef4241-0151-441c-abdc-42d4b3a21b30
Oct 09 16:26:29 compute-0 systemd-udevd[147619]: Network interface NamePolicy= disabled on kernel command line.
Oct 09 16:26:29 compute-0 ovn_controller[19752]: 2025-10-09T16:26:29Z|00147|binding|INFO|Setting lport 909ef769-15e5-4c3c-b440-6eb8f378e71f ovn-installed in OVS
Oct 09 16:26:29 compute-0 nova_compute[117331]: 2025-10-09 16:26:29.128 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 09 16:26:29 compute-0 nova_compute[117331]: 2025-10-09 16:26:29.130 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 09 16:26:29 compute-0 ovn_metadata_agent[28608]: 2025-10-09 16:26:29.134 139687 DEBUG oslo.privsep.daemon [-] privsep: reply[7a1ba56d-f03e-46e3-a6bf-2926f689da88]: (4, True) _call_back /usr/lib/python3.12/site-packages/oslo_privsep/daemon.py:515
Oct 09 16:26:29 compute-0 nova_compute[117331]: 2025-10-09 16:26:29.134 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 09 16:26:29 compute-0 NetworkManager[1028]: <info>  [1760027189.1453] device (tap909ef769-15): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Oct 09 16:26:29 compute-0 NetworkManager[1028]: <info>  [1760027189.1466] device (tap909ef769-15): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Oct 09 16:26:29 compute-0 systemd-machined[77487]: New machine qemu-11-instance-00000011.
Oct 09 16:26:29 compute-0 ovn_metadata_agent[28608]: 2025-10-09 16:26:29.165 141048 DEBUG oslo.privsep.daemon [-] privsep: reply[e254eea6-c158-4d2d-8392-3fb9f10fadd9]: (4, None) _call_back /usr/lib/python3.12/site-packages/oslo_privsep/daemon.py:515
Oct 09 16:26:29 compute-0 ovn_metadata_agent[28608]: 2025-10-09 16:26:29.168 141048 DEBUG oslo.privsep.daemon [-] privsep: reply[57218a9c-14ec-4cce-b42c-d17463225387]: (4, None) _call_back /usr/lib/python3.12/site-packages/oslo_privsep/daemon.py:515
Oct 09 16:26:29 compute-0 systemd[1]: Started Virtual Machine qemu-11-instance-00000011.
Oct 09 16:26:29 compute-0 ovn_metadata_agent[28608]: 2025-10-09 16:26:29.195 141048 DEBUG oslo.privsep.daemon [-] privsep: reply[da213cbd-515c-47ec-9bcf-b0ab1270083f]: (4, None) _call_back /usr/lib/python3.12/site-packages/oslo_privsep/daemon.py:515
Oct 09 16:26:29 compute-0 ovn_metadata_agent[28608]: 2025-10-09 16:26:29.218 139687 DEBUG oslo.privsep.daemon [-] privsep: reply[01a7444d-be89-4c6e-bd34-9822d1a9e975]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap18ef4241-01'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:71:5e:11'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 10, 'tx_packets': 5, 'rx_bytes': 916, 'tx_bytes': 354, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 10, 'tx_packets': 5, 'rx_bytes': 916, 'tx_bytes': 354, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 43], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 208591, 'reachable_time': 40615, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 8, 'inoctets': 720, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 8, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 720, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 8, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 147629, 'error': None, 'target': 'ovnmeta-18ef4241-0151-441c-abdc-42d4b3a21b30', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.12/site-packages/oslo_privsep/daemon.py:515
Oct 09 16:26:29 compute-0 ovn_metadata_agent[28608]: 2025-10-09 16:26:29.237 139687 DEBUG oslo.privsep.daemon [-] privsep: reply[2bc92c07-5448-407f-9318-d0d8810bc96b]: (4, ({'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tap18ef4241-01'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 208602, 'tstamp': 208602}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 147634, 'error': None, 'target': 'ovnmeta-18ef4241-0151-441c-abdc-42d4b3a21b30', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tap18ef4241-01'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 208605, 'tstamp': 208605}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 147634, 'error': None, 'target': 'ovnmeta-18ef4241-0151-441c-abdc-42d4b3a21b30', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.12/site-packages/oslo_privsep/daemon.py:515
Oct 09 16:26:29 compute-0 ovn_metadata_agent[28608]: 2025-10-09 16:26:29.238 28613 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap18ef4241-00, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.12/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 09 16:26:29 compute-0 nova_compute[117331]: 2025-10-09 16:26:29.241 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 09 16:26:29 compute-0 ovn_metadata_agent[28608]: 2025-10-09 16:26:29.242 28613 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap18ef4241-00, may_exist=True, interface_attrs={}) do_commit /usr/lib/python3.12/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 09 16:26:29 compute-0 ovn_metadata_agent[28608]: 2025-10-09 16:26:29.242 28613 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.12/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 09 16:26:29 compute-0 ovn_metadata_agent[28608]: 2025-10-09 16:26:29.242 28613 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap18ef4241-00, col_values=(('external_ids', {'iface-id': '2ea79927-f6b6-48ed-a992-4066429c8e5d'}),), if_exists=True) do_commit /usr/lib/python3.12/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 09 16:26:29 compute-0 nova_compute[117331]: 2025-10-09 16:26:29.243 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 09 16:26:29 compute-0 ovn_metadata_agent[28608]: 2025-10-09 16:26:29.243 28613 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.12/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 09 16:26:29 compute-0 ovn_metadata_agent[28608]: 2025-10-09 16:26:29.246 139687 DEBUG oslo.privsep.daemon [-] privsep: reply[7c07762e-b252-49aa-b561-45915024c957]: (4, '\nglobal\n    log         /dev/log local0 debug\n    log-tag     haproxy-metadata-proxy-18ef4241-0151-441c-abdc-42d4b3a21b30\n    user        root\n    group       root\n    maxconn     1024\n    pidfile     /var/lib/neutron/external/pids/18ef4241-0151-441c-abdc-42d4b3a21b30.pid.haproxy\n    daemon\n\ndefaults\n    log global\n    mode http\n    option httplog\n    option dontlognull\n    option http-server-close\n    option forwardfor\n    retries                 3\n    timeout http-request    30s\n    timeout connect         30s\n    timeout client          32s\n    timeout server          32s\n    timeout http-keep-alive 30s\n\nlisten listener\n    bind 169.254.169.254:80\n    \n    server metadata /var/lib/neutron/metadata_proxy\n\n    http-request add-header X-OVN-Network-ID 18ef4241-0151-441c-abdc-42d4b3a21b30\n') _call_back /usr/lib/python3.12/site-packages/oslo_privsep/daemon.py:515
Oct 09 16:26:29 compute-0 nova_compute[117331]: 2025-10-09 16:26:29.309 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 09 16:26:29 compute-0 podman[127775]: time="2025-10-09T16:26:29Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Oct 09 16:26:29 compute-0 podman[127775]: @ - - [09/Oct/2025:16:26:29 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 20749 "" "Go-http-client/1.1"
Oct 09 16:26:29 compute-0 podman[127775]: @ - - [09/Oct/2025:16:26:29 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 3487 "" "Go-http-client/1.1"
Oct 09 16:26:29 compute-0 nova_compute[117331]: 2025-10-09 16:26:29.879 2 DEBUG oslo_service.periodic_task [None req-92a13bed-c081-4c7b-87f3-bafebefb79f2 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.12/site-packages/oslo_service/periodic_task.py:210
Oct 09 16:26:29 compute-0 nova_compute[117331]: 2025-10-09 16:26:29.879 2 DEBUG oslo_service.periodic_task [None req-92a13bed-c081-4c7b-87f3-bafebefb79f2 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.12/site-packages/oslo_service/periodic_task.py:210
Oct 09 16:26:30 compute-0 nova_compute[117331]: 2025-10-09 16:26:30.390 2 DEBUG oslo_service.periodic_task [None req-92a13bed-c081-4c7b-87f3-bafebefb79f2 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.12/site-packages/oslo_service/periodic_task.py:210
Oct 09 16:26:30 compute-0 nova_compute[117331]: 2025-10-09 16:26:30.390 2 DEBUG oslo_service.periodic_task [None req-92a13bed-c081-4c7b-87f3-bafebefb79f2 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.12/site-packages/oslo_service/periodic_task.py:210
Oct 09 16:26:30 compute-0 nova_compute[117331]: 2025-10-09 16:26:30.391 2 DEBUG oslo_service.periodic_task [None req-92a13bed-c081-4c7b-87f3-bafebefb79f2 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.12/site-packages/oslo_service/periodic_task.py:210
Oct 09 16:26:30 compute-0 nova_compute[117331]: 2025-10-09 16:26:30.391 2 DEBUG oslo_service.periodic_task [None req-92a13bed-c081-4c7b-87f3-bafebefb79f2 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.12/site-packages/oslo_service/periodic_task.py:210
Oct 09 16:26:30 compute-0 nova_compute[117331]: 2025-10-09 16:26:30.391 2 DEBUG nova.compute.manager [None req-92a13bed-c081-4c7b-87f3-bafebefb79f2 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.12/site-packages/nova/compute/manager.py:11228
Oct 09 16:26:31 compute-0 openstack_network_exporter[129925]: ERROR   16:26:31 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Oct 09 16:26:31 compute-0 openstack_network_exporter[129925]: ERROR   16:26:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Oct 09 16:26:31 compute-0 openstack_network_exporter[129925]: ERROR   16:26:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Oct 09 16:26:31 compute-0 openstack_network_exporter[129925]: ERROR   16:26:31 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Oct 09 16:26:31 compute-0 openstack_network_exporter[129925]: 
Oct 09 16:26:31 compute-0 openstack_network_exporter[129925]: ERROR   16:26:31 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Oct 09 16:26:31 compute-0 openstack_network_exporter[129925]: 
Oct 09 16:26:31 compute-0 nova_compute[117331]: 2025-10-09 16:26:31.610 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 09 16:26:32 compute-0 ovn_controller[19752]: 2025-10-09T16:26:32Z|00148|binding|INFO|Claiming lport 909ef769-15e5-4c3c-b440-6eb8f378e71f for this chassis.
Oct 09 16:26:32 compute-0 ovn_controller[19752]: 2025-10-09T16:26:32Z|00149|binding|INFO|909ef769-15e5-4c3c-b440-6eb8f378e71f: Claiming fa:16:3e:65:76:ba 10.100.0.7
Oct 09 16:26:32 compute-0 ovn_controller[19752]: 2025-10-09T16:26:32Z|00150|binding|INFO|Setting lport 909ef769-15e5-4c3c-b440-6eb8f378e71f up in Southbound
Oct 09 16:26:32 compute-0 unix_chkpwd[147661]: password check failed for user (root)
Oct 09 16:26:32 compute-0 sshd-session[147659]: pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=91.224.92.108  user=root
Oct 09 16:26:33 compute-0 nova_compute[117331]: 2025-10-09 16:26:33.333 2 INFO nova.compute.manager [None req-d3cbdfed-9534-4436-a3db-56fbfb0d73fc 005258eefa5345409efe13f6a1de22ab c076c42271004e7aa86e84416e33f826 - - default default] [instance: b5296471-a1e3-40b2-8cee-8028a741040c] Post operation of migration started
Oct 09 16:26:33 compute-0 nova_compute[117331]: 2025-10-09 16:26:33.334 2 WARNING neutronclient.v2_0.client [None req-d3cbdfed-9534-4436-a3db-56fbfb0d73fc 005258eefa5345409efe13f6a1de22ab c076c42271004e7aa86e84416e33f826 - - default default] The python binding code in neutronclient is deprecated in favor of OpenstackSDK, please use that as this will be removed in a future release.
Oct 09 16:26:33 compute-0 nova_compute[117331]: 2025-10-09 16:26:33.989 2 WARNING neutronclient.v2_0.client [None req-d3cbdfed-9534-4436-a3db-56fbfb0d73fc 005258eefa5345409efe13f6a1de22ab c076c42271004e7aa86e84416e33f826 - - default default] The python binding code in neutronclient is deprecated in favor of OpenstackSDK, please use that as this will be removed in a future release.
Oct 09 16:26:33 compute-0 nova_compute[117331]: 2025-10-09 16:26:33.990 2 WARNING neutronclient.v2_0.client [None req-d3cbdfed-9534-4436-a3db-56fbfb0d73fc 005258eefa5345409efe13f6a1de22ab c076c42271004e7aa86e84416e33f826 - - default default] The python binding code in neutronclient is deprecated in favor of OpenstackSDK, please use that as this will be removed in a future release.
Oct 09 16:26:34 compute-0 nova_compute[117331]: 2025-10-09 16:26:34.089 2 DEBUG oslo_concurrency.lockutils [None req-d3cbdfed-9534-4436-a3db-56fbfb0d73fc 005258eefa5345409efe13f6a1de22ab c076c42271004e7aa86e84416e33f826 - - default default] Acquiring lock "refresh_cache-b5296471-a1e3-40b2-8cee-8028a741040c" lock /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:313
Oct 09 16:26:34 compute-0 nova_compute[117331]: 2025-10-09 16:26:34.090 2 DEBUG oslo_concurrency.lockutils [None req-d3cbdfed-9534-4436-a3db-56fbfb0d73fc 005258eefa5345409efe13f6a1de22ab c076c42271004e7aa86e84416e33f826 - - default default] Acquired lock "refresh_cache-b5296471-a1e3-40b2-8cee-8028a741040c" lock /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:316
Oct 09 16:26:34 compute-0 nova_compute[117331]: 2025-10-09 16:26:34.090 2 DEBUG nova.network.neutron [None req-d3cbdfed-9534-4436-a3db-56fbfb0d73fc 005258eefa5345409efe13f6a1de22ab c076c42271004e7aa86e84416e33f826 - - default default] [instance: b5296471-a1e3-40b2-8cee-8028a741040c] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.12/site-packages/nova/network/neutron.py:2070
Oct 09 16:26:34 compute-0 nova_compute[117331]: 2025-10-09 16:26:34.311 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 09 16:26:34 compute-0 sshd-session[147659]: Failed password for root from 91.224.92.108 port 41426 ssh2
Oct 09 16:26:34 compute-0 nova_compute[117331]: 2025-10-09 16:26:34.596 2 WARNING neutronclient.v2_0.client [None req-d3cbdfed-9534-4436-a3db-56fbfb0d73fc 005258eefa5345409efe13f6a1de22ab c076c42271004e7aa86e84416e33f826 - - default default] The python binding code in neutronclient is deprecated in favor of OpenstackSDK, please use that as this will be removed in a future release.
Oct 09 16:26:34 compute-0 unix_chkpwd[147662]: password check failed for user (root)
Oct 09 16:26:35 compute-0 ovn_metadata_agent[28608]: 2025-10-09 16:26:35.314 28613 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:405
Oct 09 16:26:35 compute-0 ovn_metadata_agent[28608]: 2025-10-09 16:26:35.315 28613 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:410
Oct 09 16:26:35 compute-0 ovn_metadata_agent[28608]: 2025-10-09 16:26:35.316 28613 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:424
Oct 09 16:26:35 compute-0 nova_compute[117331]: 2025-10-09 16:26:35.320 2 WARNING neutronclient.v2_0.client [None req-d3cbdfed-9534-4436-a3db-56fbfb0d73fc 005258eefa5345409efe13f6a1de22ab c076c42271004e7aa86e84416e33f826 - - default default] The python binding code in neutronclient is deprecated in favor of OpenstackSDK, please use that as this will be removed in a future release.
Oct 09 16:26:35 compute-0 nova_compute[117331]: 2025-10-09 16:26:35.490 2 DEBUG nova.network.neutron [None req-d3cbdfed-9534-4436-a3db-56fbfb0d73fc 005258eefa5345409efe13f6a1de22ab c076c42271004e7aa86e84416e33f826 - - default default] [instance: b5296471-a1e3-40b2-8cee-8028a741040c] Updating instance_info_cache with network_info: [{"id": "909ef769-15e5-4c3c-b440-6eb8f378e71f", "address": "fa:16:3e:65:76:ba", "network": {"id": "18ef4241-0151-441c-abdc-42d4b3a21b30", "bridge": "br-int", "label": "tempest-TestExecuteNodeResourceConsolidationStrategy-129271777-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c2c321aa8a494a8e8b49c81b79e3ceca", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap909ef769-15", "ovs_interfaceid": "909ef769-15e5-4c3c-b440-6eb8f378e71f", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {"os_vif_delegation": true}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.12/site-packages/nova/network/neutron.py:116
Oct 09 16:26:35 compute-0 nova_compute[117331]: 2025-10-09 16:26:35.996 2 DEBUG oslo_concurrency.lockutils [None req-d3cbdfed-9534-4436-a3db-56fbfb0d73fc 005258eefa5345409efe13f6a1de22ab c076c42271004e7aa86e84416e33f826 - - default default] Releasing lock "refresh_cache-b5296471-a1e3-40b2-8cee-8028a741040c" lock /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:334
Oct 09 16:26:36 compute-0 sshd-session[147659]: Failed password for root from 91.224.92.108 port 41426 ssh2
Oct 09 16:26:36 compute-0 nova_compute[117331]: 2025-10-09 16:26:36.512 2 DEBUG oslo_concurrency.lockutils [None req-d3cbdfed-9534-4436-a3db-56fbfb0d73fc 005258eefa5345409efe13f6a1de22ab c076c42271004e7aa86e84416e33f826 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.allocate_pci_devices_for_instance" inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:405
Oct 09 16:26:36 compute-0 nova_compute[117331]: 2025-10-09 16:26:36.513 2 DEBUG oslo_concurrency.lockutils [None req-d3cbdfed-9534-4436-a3db-56fbfb0d73fc 005258eefa5345409efe13f6a1de22ab c076c42271004e7aa86e84416e33f826 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.allocate_pci_devices_for_instance" :: waited 0.001s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:410
Oct 09 16:26:36 compute-0 nova_compute[117331]: 2025-10-09 16:26:36.514 2 DEBUG oslo_concurrency.lockutils [None req-d3cbdfed-9534-4436-a3db-56fbfb0d73fc 005258eefa5345409efe13f6a1de22ab c076c42271004e7aa86e84416e33f826 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.allocate_pci_devices_for_instance" :: held 0.001s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:424
Oct 09 16:26:36 compute-0 nova_compute[117331]: 2025-10-09 16:26:36.520 2 INFO nova.virt.libvirt.driver [None req-d3cbdfed-9534-4436-a3db-56fbfb0d73fc 005258eefa5345409efe13f6a1de22ab c076c42271004e7aa86e84416e33f826 - - default default] [instance: b5296471-a1e3-40b2-8cee-8028a741040c] Sending announce-self command to QEMU monitor. Attempt 1 of 3
Oct 09 16:26:36 compute-0 virtqemud[117629]: Domain id=11 name='instance-00000011' uuid=b5296471-a1e3-40b2-8cee-8028a741040c is tainted: custom-monitor
Oct 09 16:26:36 compute-0 nova_compute[117331]: 2025-10-09 16:26:36.612 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 09 16:26:36 compute-0 unix_chkpwd[147664]: password check failed for user (root)
Oct 09 16:26:37 compute-0 nova_compute[117331]: 2025-10-09 16:26:37.529 2 INFO nova.virt.libvirt.driver [None req-d3cbdfed-9534-4436-a3db-56fbfb0d73fc 005258eefa5345409efe13f6a1de22ab c076c42271004e7aa86e84416e33f826 - - default default] [instance: b5296471-a1e3-40b2-8cee-8028a741040c] Sending announce-self command to QEMU monitor. Attempt 2 of 3
Oct 09 16:26:38 compute-0 sshd-session[147659]: Failed password for root from 91.224.92.108 port 41426 ssh2
Oct 09 16:26:38 compute-0 nova_compute[117331]: 2025-10-09 16:26:38.535 2 INFO nova.virt.libvirt.driver [None req-d3cbdfed-9534-4436-a3db-56fbfb0d73fc 005258eefa5345409efe13f6a1de22ab c076c42271004e7aa86e84416e33f826 - - default default] [instance: b5296471-a1e3-40b2-8cee-8028a741040c] Sending announce-self command to QEMU monitor. Attempt 3 of 3
Oct 09 16:26:38 compute-0 nova_compute[117331]: 2025-10-09 16:26:38.540 2 DEBUG nova.compute.manager [None req-d3cbdfed-9534-4436-a3db-56fbfb0d73fc 005258eefa5345409efe13f6a1de22ab c076c42271004e7aa86e84416e33f826 - - default default] [instance: b5296471-a1e3-40b2-8cee-8028a741040c] Checking state _get_power_state /usr/lib/python3.12/site-packages/nova/compute/manager.py:1825
Oct 09 16:26:39 compute-0 nova_compute[117331]: 2025-10-09 16:26:39.050 2 DEBUG nova.objects.instance [None req-d3cbdfed-9534-4436-a3db-56fbfb0d73fc 005258eefa5345409efe13f6a1de22ab c076c42271004e7aa86e84416e33f826 - - default default] [instance: b5296471-a1e3-40b2-8cee-8028a741040c] Trying to apply a migration context that does not seem to be set for this instance apply_migration_context /usr/lib/python3.12/site-packages/nova/objects/instance.py:1067
Oct 09 16:26:39 compute-0 sshd-session[147659]: Received disconnect from 91.224.92.108 port 41426:11:  [preauth]
Oct 09 16:26:39 compute-0 sshd-session[147659]: Disconnected from authenticating user root 91.224.92.108 port 41426 [preauth]
Oct 09 16:26:39 compute-0 sshd-session[147659]: PAM 2 more authentication failures; logname= uid=0 euid=0 tty=ssh ruser= rhost=91.224.92.108  user=root
Oct 09 16:26:39 compute-0 nova_compute[117331]: 2025-10-09 16:26:39.313 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 09 16:26:40 compute-0 unix_chkpwd[147667]: password check failed for user (root)
Oct 09 16:26:40 compute-0 sshd-session[147665]: pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=91.224.92.108  user=root
Oct 09 16:26:40 compute-0 nova_compute[117331]: 2025-10-09 16:26:40.069 2 WARNING neutronclient.v2_0.client [None req-d3cbdfed-9534-4436-a3db-56fbfb0d73fc 005258eefa5345409efe13f6a1de22ab c076c42271004e7aa86e84416e33f826 - - default default] The python binding code in neutronclient is deprecated in favor of OpenstackSDK, please use that as this will be removed in a future release.
Oct 09 16:26:40 compute-0 nova_compute[117331]: 2025-10-09 16:26:40.134 2 WARNING neutronclient.v2_0.client [None req-d3cbdfed-9534-4436-a3db-56fbfb0d73fc 005258eefa5345409efe13f6a1de22ab c076c42271004e7aa86e84416e33f826 - - default default] The python binding code in neutronclient is deprecated in favor of OpenstackSDK, please use that as this will be removed in a future release.
Oct 09 16:26:40 compute-0 nova_compute[117331]: 2025-10-09 16:26:40.135 2 WARNING neutronclient.v2_0.client [None req-d3cbdfed-9534-4436-a3db-56fbfb0d73fc 005258eefa5345409efe13f6a1de22ab c076c42271004e7aa86e84416e33f826 - - default default] The python binding code in neutronclient is deprecated in favor of OpenstackSDK, please use that as this will be removed in a future release.
Oct 09 16:26:40 compute-0 podman[147676]: 2025-10-09 16:26:40.856827725 +0000 UTC m=+0.076338216 container health_status 270830b5167cafbab735d8118980f26987e673cd0fa464dd2202394830bcca66 (image=38.102.83.66:5001/podified-master-centos10/openstack-multipathd:watcher_latest, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, config_id=multipathd, org.label-schema.build-date=20251007, org.label-schema.name=CentOS Stream 10 Base Image, tcib_build_tag=watcher_latest, container_name=multipathd, io.buildah.version=1.41.4, managed_by=edpm_ansible, org.label-schema.license=GPLv2, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': '38.102.83.66:5001/podified-master-centos10/openstack-multipathd:watcher_latest', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true)
Oct 09 16:26:41 compute-0 nova_compute[117331]: 2025-10-09 16:26:41.614 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 09 16:26:42 compute-0 sshd-session[147665]: Failed password for root from 91.224.92.108 port 51892 ssh2
Oct 09 16:26:42 compute-0 unix_chkpwd[147697]: password check failed for user (root)
Oct 09 16:26:43 compute-0 nova_compute[117331]: 2025-10-09 16:26:43.838 2 DEBUG oslo_concurrency.lockutils [None req-2a7ed071-bb02-4500-af48-8149dc067548 c2c0f7c2f15e4da6881dc393064b0e16 1a345bddd4804404a55948133ea8150f - - default default] Acquiring lock "b5296471-a1e3-40b2-8cee-8028a741040c" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:405
Oct 09 16:26:43 compute-0 nova_compute[117331]: 2025-10-09 16:26:43.839 2 DEBUG oslo_concurrency.lockutils [None req-2a7ed071-bb02-4500-af48-8149dc067548 c2c0f7c2f15e4da6881dc393064b0e16 1a345bddd4804404a55948133ea8150f - - default default] Lock "b5296471-a1e3-40b2-8cee-8028a741040c" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:410
Oct 09 16:26:43 compute-0 nova_compute[117331]: 2025-10-09 16:26:43.840 2 DEBUG oslo_concurrency.lockutils [None req-2a7ed071-bb02-4500-af48-8149dc067548 c2c0f7c2f15e4da6881dc393064b0e16 1a345bddd4804404a55948133ea8150f - - default default] Acquiring lock "b5296471-a1e3-40b2-8cee-8028a741040c-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:405
Oct 09 16:26:43 compute-0 nova_compute[117331]: 2025-10-09 16:26:43.840 2 DEBUG oslo_concurrency.lockutils [None req-2a7ed071-bb02-4500-af48-8149dc067548 c2c0f7c2f15e4da6881dc393064b0e16 1a345bddd4804404a55948133ea8150f - - default default] Lock "b5296471-a1e3-40b2-8cee-8028a741040c-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:410
Oct 09 16:26:43 compute-0 nova_compute[117331]: 2025-10-09 16:26:43.840 2 DEBUG oslo_concurrency.lockutils [None req-2a7ed071-bb02-4500-af48-8149dc067548 c2c0f7c2f15e4da6881dc393064b0e16 1a345bddd4804404a55948133ea8150f - - default default] Lock "b5296471-a1e3-40b2-8cee-8028a741040c-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:424
Oct 09 16:26:43 compute-0 nova_compute[117331]: 2025-10-09 16:26:43.859 2 INFO nova.compute.manager [None req-2a7ed071-bb02-4500-af48-8149dc067548 c2c0f7c2f15e4da6881dc393064b0e16 1a345bddd4804404a55948133ea8150f - - default default] [instance: b5296471-a1e3-40b2-8cee-8028a741040c] Terminating instance
Oct 09 16:26:43 compute-0 podman[147698]: 2025-10-09 16:26:43.866512657 +0000 UTC m=+0.100322877 container health_status 86aa93e887899e54d383b0fc17fb3c5637680d1d8e146a130a557c837f0260fe (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible)
Oct 09 16:26:44 compute-0 sshd-session[147665]: Failed password for root from 91.224.92.108 port 51892 ssh2
Oct 09 16:26:44 compute-0 nova_compute[117331]: 2025-10-09 16:26:44.314 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 09 16:26:44 compute-0 nova_compute[117331]: 2025-10-09 16:26:44.386 2 DEBUG nova.compute.manager [None req-2a7ed071-bb02-4500-af48-8149dc067548 c2c0f7c2f15e4da6881dc393064b0e16 1a345bddd4804404a55948133ea8150f - - default default] [instance: b5296471-a1e3-40b2-8cee-8028a741040c] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.12/site-packages/nova/compute/manager.py:3197
Oct 09 16:26:44 compute-0 kernel: tap909ef769-15 (unregistering): left promiscuous mode
Oct 09 16:26:44 compute-0 NetworkManager[1028]: <info>  [1760027204.4110] device (tap909ef769-15): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Oct 09 16:26:44 compute-0 nova_compute[117331]: 2025-10-09 16:26:44.422 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 09 16:26:44 compute-0 ovn_controller[19752]: 2025-10-09T16:26:44Z|00151|binding|INFO|Releasing lport 909ef769-15e5-4c3c-b440-6eb8f378e71f from this chassis (sb_readonly=0)
Oct 09 16:26:44 compute-0 ovn_controller[19752]: 2025-10-09T16:26:44Z|00152|binding|INFO|Setting lport 909ef769-15e5-4c3c-b440-6eb8f378e71f down in Southbound
Oct 09 16:26:44 compute-0 ovn_controller[19752]: 2025-10-09T16:26:44Z|00153|binding|INFO|Removing iface tap909ef769-15 ovn-installed in OVS
Oct 09 16:26:44 compute-0 nova_compute[117331]: 2025-10-09 16:26:44.424 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 09 16:26:44 compute-0 ovn_metadata_agent[28608]: 2025-10-09 16:26:44.430 28613 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:65:76:ba 10.100.0.7'], port_security=['fa:16:3e:65:76:ba 10.100.0.7'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.7/28', 'neutron:device_id': 'b5296471-a1e3-40b2-8cee-8028a741040c', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-18ef4241-0151-441c-abdc-42d4b3a21b30', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '1a345bddd4804404a55948133ea8150f', 'neutron:revision_number': '15', 'neutron:security_group_ids': '9689e159-b15c-4e2f-9c7b-a0ccf3c3f578', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=0ed87cdf-61a6-4df0-ac74-37289a0bcd5f, chassis=[], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f37b2576840>], logical_port=909ef769-15e5-4c3c-b440-6eb8f378e71f) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f37b2576840>]) matches /usr/lib/python3.12/site-packages/ovsdbapp/backend/ovs_idl/event.py:55
Oct 09 16:26:44 compute-0 ovn_metadata_agent[28608]: 2025-10-09 16:26:44.432 28613 INFO neutron.agent.ovn.metadata.agent [-] Port 909ef769-15e5-4c3c-b440-6eb8f378e71f in datapath 18ef4241-0151-441c-abdc-42d4b3a21b30 unbound from our chassis
Oct 09 16:26:44 compute-0 ovn_metadata_agent[28608]: 2025-10-09 16:26:44.435 28613 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 18ef4241-0151-441c-abdc-42d4b3a21b30
Oct 09 16:26:44 compute-0 unix_chkpwd[147726]: password check failed for user (root)
Oct 09 16:26:44 compute-0 ovn_metadata_agent[28608]: 2025-10-09 16:26:44.462 139687 DEBUG oslo.privsep.daemon [-] privsep: reply[9bada119-58eb-43de-a0fd-0e8229bf38f5]: (4, True) _call_back /usr/lib/python3.12/site-packages/oslo_privsep/daemon.py:515
Oct 09 16:26:44 compute-0 nova_compute[117331]: 2025-10-09 16:26:44.494 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 09 16:26:44 compute-0 ovn_metadata_agent[28608]: 2025-10-09 16:26:44.508 141048 DEBUG oslo.privsep.daemon [-] privsep: reply[88cb514e-16b1-4afd-a180-df1f421bc6b9]: (4, None) _call_back /usr/lib/python3.12/site-packages/oslo_privsep/daemon.py:515
Oct 09 16:26:44 compute-0 ovn_metadata_agent[28608]: 2025-10-09 16:26:44.513 141048 DEBUG oslo.privsep.daemon [-] privsep: reply[3e954e76-5a8c-4a24-99d9-0ea240d31030]: (4, None) _call_back /usr/lib/python3.12/site-packages/oslo_privsep/daemon.py:515
Oct 09 16:26:44 compute-0 systemd[1]: machine-qemu\x2d11\x2dinstance\x2d00000011.scope: Deactivated successfully.
Oct 09 16:26:44 compute-0 systemd[1]: machine-qemu\x2d11\x2dinstance\x2d00000011.scope: Consumed 2.600s CPU time.
Oct 09 16:26:44 compute-0 systemd-machined[77487]: Machine qemu-11-instance-00000011 terminated.
Oct 09 16:26:44 compute-0 nova_compute[117331]: 2025-10-09 16:26:44.560 2 DEBUG nova.compute.manager [req-840b6b83-228c-424f-8c95-ad1f29e4517b req-b851e3dd-4de3-4273-a989-f33d11df53b1 ee15961ad2be4bada6c106ac4f69f422 c076c42271004e7aa86e84416e33f826 - - default default] [instance: b5296471-a1e3-40b2-8cee-8028a741040c] Received event network-vif-unplugged-909ef769-15e5-4c3c-b440-6eb8f378e71f external_instance_event /usr/lib/python3.12/site-packages/nova/compute/manager.py:11812
Oct 09 16:26:44 compute-0 nova_compute[117331]: 2025-10-09 16:26:44.561 2 DEBUG oslo_concurrency.lockutils [req-840b6b83-228c-424f-8c95-ad1f29e4517b req-b851e3dd-4de3-4273-a989-f33d11df53b1 ee15961ad2be4bada6c106ac4f69f422 c076c42271004e7aa86e84416e33f826 - - default default] Acquiring lock "b5296471-a1e3-40b2-8cee-8028a741040c-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:405
Oct 09 16:26:44 compute-0 nova_compute[117331]: 2025-10-09 16:26:44.561 2 DEBUG oslo_concurrency.lockutils [req-840b6b83-228c-424f-8c95-ad1f29e4517b req-b851e3dd-4de3-4273-a989-f33d11df53b1 ee15961ad2be4bada6c106ac4f69f422 c076c42271004e7aa86e84416e33f826 - - default default] Lock "b5296471-a1e3-40b2-8cee-8028a741040c-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:410
Oct 09 16:26:44 compute-0 nova_compute[117331]: 2025-10-09 16:26:44.561 2 DEBUG oslo_concurrency.lockutils [req-840b6b83-228c-424f-8c95-ad1f29e4517b req-b851e3dd-4de3-4273-a989-f33d11df53b1 ee15961ad2be4bada6c106ac4f69f422 c076c42271004e7aa86e84416e33f826 - - default default] Lock "b5296471-a1e3-40b2-8cee-8028a741040c-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:424
Oct 09 16:26:44 compute-0 nova_compute[117331]: 2025-10-09 16:26:44.562 2 DEBUG nova.compute.manager [req-840b6b83-228c-424f-8c95-ad1f29e4517b req-b851e3dd-4de3-4273-a989-f33d11df53b1 ee15961ad2be4bada6c106ac4f69f422 c076c42271004e7aa86e84416e33f826 - - default default] [instance: b5296471-a1e3-40b2-8cee-8028a741040c] No waiting events found dispatching network-vif-unplugged-909ef769-15e5-4c3c-b440-6eb8f378e71f pop_instance_event /usr/lib/python3.12/site-packages/nova/compute/manager.py:344
Oct 09 16:26:44 compute-0 nova_compute[117331]: 2025-10-09 16:26:44.562 2 DEBUG nova.compute.manager [req-840b6b83-228c-424f-8c95-ad1f29e4517b req-b851e3dd-4de3-4273-a989-f33d11df53b1 ee15961ad2be4bada6c106ac4f69f422 c076c42271004e7aa86e84416e33f826 - - default default] [instance: b5296471-a1e3-40b2-8cee-8028a741040c] Received event network-vif-unplugged-909ef769-15e5-4c3c-b440-6eb8f378e71f for instance with task_state deleting. _process_instance_event /usr/lib/python3.12/site-packages/nova/compute/manager.py:11590
Oct 09 16:26:44 compute-0 ovn_metadata_agent[28608]: 2025-10-09 16:26:44.562 141048 DEBUG oslo.privsep.daemon [-] privsep: reply[2fd28412-04f5-4c2b-84de-8d9d2e97a82a]: (4, None) _call_back /usr/lib/python3.12/site-packages/oslo_privsep/daemon.py:515
Oct 09 16:26:44 compute-0 ovn_metadata_agent[28608]: 2025-10-09 16:26:44.589 139687 DEBUG oslo.privsep.daemon [-] privsep: reply[2cc37c96-45a4-409e-b345-28d74fe499fc]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap18ef4241-01'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:71:5e:11'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 30, 'tx_packets': 7, 'rx_bytes': 1756, 'tx_bytes': 438, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 30, 'tx_packets': 7, 'rx_bytes': 1756, 'tx_bytes': 438, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 
0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 43], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 208591, 'reachable_time': 40615, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 8, 'inoctets': 720, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 8, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 720, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 8, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 147736, 'error': None, 'target': 'ovnmeta-18ef4241-0151-441c-abdc-42d4b3a21b30', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.12/site-packages/oslo_privsep/daemon.py:515
Oct 09 16:26:44 compute-0 ovn_metadata_agent[28608]: 2025-10-09 16:26:44.612 139687 DEBUG oslo.privsep.daemon [-] privsep: reply[37899663-3da0-4ea2-a8a4-267da4281d32]: (4, ({'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tap18ef4241-01'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 208602, 'tstamp': 208602}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 147737, 'error': None, 'target': 'ovnmeta-18ef4241-0151-441c-abdc-42d4b3a21b30', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tap18ef4241-01'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 208605, 'tstamp': 208605}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 147737, 'error': None, 'target': 'ovnmeta-18ef4241-0151-441c-abdc-42d4b3a21b30', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.12/site-packages/oslo_privsep/daemon.py:515
Oct 09 16:26:44 compute-0 ovn_metadata_agent[28608]: 2025-10-09 16:26:44.614 28613 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap18ef4241-00, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.12/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 09 16:26:44 compute-0 nova_compute[117331]: 2025-10-09 16:26:44.617 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 09 16:26:44 compute-0 nova_compute[117331]: 2025-10-09 16:26:44.625 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 09 16:26:44 compute-0 ovn_metadata_agent[28608]: 2025-10-09 16:26:44.626 28613 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap18ef4241-00, may_exist=True, interface_attrs={}) do_commit /usr/lib/python3.12/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 09 16:26:44 compute-0 ovn_metadata_agent[28608]: 2025-10-09 16:26:44.627 28613 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.12/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 09 16:26:44 compute-0 ovn_metadata_agent[28608]: 2025-10-09 16:26:44.627 28613 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap18ef4241-00, col_values=(('external_ids', {'iface-id': '2ea79927-f6b6-48ed-a992-4066429c8e5d'}),), if_exists=True) do_commit /usr/lib/python3.12/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 09 16:26:44 compute-0 ovn_metadata_agent[28608]: 2025-10-09 16:26:44.628 28613 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.12/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 09 16:26:44 compute-0 ovn_metadata_agent[28608]: 2025-10-09 16:26:44.630 139687 DEBUG oslo.privsep.daemon [-] privsep: reply[7fa74f91-bb1e-4d9e-aa38-96152470cb99]: (4, '\nglobal\n    log         /dev/log local0 debug\n    log-tag     haproxy-metadata-proxy-18ef4241-0151-441c-abdc-42d4b3a21b30\n    user        root\n    group       root\n    maxconn     1024\n    pidfile     /var/lib/neutron/external/pids/18ef4241-0151-441c-abdc-42d4b3a21b30.pid.haproxy\n    daemon\n\ndefaults\n    log global\n    mode http\n    option httplog\n    option dontlognull\n    option http-server-close\n    option forwardfor\n    retries                 3\n    timeout http-request    30s\n    timeout connect         30s\n    timeout client          32s\n    timeout server          32s\n    timeout http-keep-alive 30s\n\nlisten listener\n    bind 169.254.169.254:80\n    \n    server metadata /var/lib/neutron/metadata_proxy\n\n    http-request add-header X-OVN-Network-ID 18ef4241-0151-441c-abdc-42d4b3a21b30\n') _call_back /usr/lib/python3.12/site-packages/oslo_privsep/daemon.py:515
Oct 09 16:26:44 compute-0 nova_compute[117331]: 2025-10-09 16:26:44.661 2 INFO nova.virt.libvirt.driver [-] [instance: b5296471-a1e3-40b2-8cee-8028a741040c] Instance destroyed successfully.
Oct 09 16:26:44 compute-0 nova_compute[117331]: 2025-10-09 16:26:44.661 2 DEBUG nova.objects.instance [None req-2a7ed071-bb02-4500-af48-8149dc067548 c2c0f7c2f15e4da6881dc393064b0e16 1a345bddd4804404a55948133ea8150f - - default default] Lazy-loading 'resources' on Instance uuid b5296471-a1e3-40b2-8cee-8028a741040c obj_load_attr /usr/lib/python3.12/site-packages/nova/objects/instance.py:1141
Oct 09 16:26:45 compute-0 nova_compute[117331]: 2025-10-09 16:26:45.167 2 DEBUG nova.virt.libvirt.vif [None req-2a7ed071-bb02-4500-af48-8149dc067548 c2c0f7c2f15e4da6881dc393064b0e16 1a345bddd4804404a55948133ea8150f - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=True,compute_id=1,config_drive='True',created_at=2025-10-09T16:25:47Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description=None,display_name='tempest-TestExecuteNodeResourceConsolidationStrategy-server-1733509377',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testexecutenoderesourceconsolidationstrategy-server-173',id=17,image_ref='b7d6e0af-25e4-4227-9dc6-43143898ceee',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-10-09T16:26:01Z,launched_on='compute-1.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='1a345bddd4804404a55948133ea8150f',ramdisk_id='',reservation_id='r-ew4rueq2',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,manager,admin,member',clean_attempts='1',image_base_image_ref='b7d6e0af-25e4-4227-9dc6-43143898ceee',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',ima
ge_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestExecuteNodeResourceConsolidationStrategy-1915104332',owner_user_name='tempest-TestExecuteNodeResourceConsolidationStrategy-1915104332-project-admin'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-10-09T16:26:39Z,user_data=None,user_id='c2c0f7c2f15e4da6881dc393064b0e16',uuid=b5296471-a1e3-40b2-8cee-8028a741040c,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "909ef769-15e5-4c3c-b440-6eb8f378e71f", "address": "fa:16:3e:65:76:ba", "network": {"id": "18ef4241-0151-441c-abdc-42d4b3a21b30", "bridge": "br-int", "label": "tempest-TestExecuteNodeResourceConsolidationStrategy-129271777-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c2c321aa8a494a8e8b49c81b79e3ceca", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap909ef769-15", "ovs_interfaceid": "909ef769-15e5-4c3c-b440-6eb8f378e71f", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {"os_vif_delegation": true}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.12/site-packages/nova/virt/libvirt/vif.py:839
Oct 09 16:26:45 compute-0 nova_compute[117331]: 2025-10-09 16:26:45.167 2 DEBUG nova.network.os_vif_util [None req-2a7ed071-bb02-4500-af48-8149dc067548 c2c0f7c2f15e4da6881dc393064b0e16 1a345bddd4804404a55948133ea8150f - - default default] Converting VIF {"id": "909ef769-15e5-4c3c-b440-6eb8f378e71f", "address": "fa:16:3e:65:76:ba", "network": {"id": "18ef4241-0151-441c-abdc-42d4b3a21b30", "bridge": "br-int", "label": "tempest-TestExecuteNodeResourceConsolidationStrategy-129271777-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c2c321aa8a494a8e8b49c81b79e3ceca", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap909ef769-15", "ovs_interfaceid": "909ef769-15e5-4c3c-b440-6eb8f378e71f", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {"os_vif_delegation": true}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.12/site-packages/nova/network/os_vif_util.py:511
Oct 09 16:26:45 compute-0 nova_compute[117331]: 2025-10-09 16:26:45.169 2 DEBUG nova.network.os_vif_util [None req-2a7ed071-bb02-4500-af48-8149dc067548 c2c0f7c2f15e4da6881dc393064b0e16 1a345bddd4804404a55948133ea8150f - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:65:76:ba,bridge_name='br-int',has_traffic_filtering=True,id=909ef769-15e5-4c3c-b440-6eb8f378e71f,network=Network(18ef4241-0151-441c-abdc-42d4b3a21b30),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap909ef769-15') nova_to_osvif_vif /usr/lib/python3.12/site-packages/nova/network/os_vif_util.py:548
Oct 09 16:26:45 compute-0 nova_compute[117331]: 2025-10-09 16:26:45.169 2 DEBUG os_vif [None req-2a7ed071-bb02-4500-af48-8149dc067548 c2c0f7c2f15e4da6881dc393064b0e16 1a345bddd4804404a55948133ea8150f - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:65:76:ba,bridge_name='br-int',has_traffic_filtering=True,id=909ef769-15e5-4c3c-b440-6eb8f378e71f,network=Network(18ef4241-0151-441c-abdc-42d4b3a21b30),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap909ef769-15') unplug /usr/lib/python3.12/site-packages/os_vif/__init__.py:109
Oct 09 16:26:45 compute-0 nova_compute[117331]: 2025-10-09 16:26:45.172 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 22 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 09 16:26:45 compute-0 nova_compute[117331]: 2025-10-09 16:26:45.173 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap909ef769-15, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.12/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 09 16:26:45 compute-0 nova_compute[117331]: 2025-10-09 16:26:45.175 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 09 16:26:45 compute-0 nova_compute[117331]: 2025-10-09 16:26:45.178 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:248
Oct 09 16:26:45 compute-0 nova_compute[117331]: 2025-10-09 16:26:45.178 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 09 16:26:45 compute-0 nova_compute[117331]: 2025-10-09 16:26:45.180 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 22 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 09 16:26:45 compute-0 nova_compute[117331]: 2025-10-09 16:26:45.180 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbDestroyCommand(_result=None, table=QoS, record=fd281c9b-3fb2-4411-b6fb-4da49bca9c8a) do_commit /usr/lib/python3.12/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 09 16:26:45 compute-0 nova_compute[117331]: 2025-10-09 16:26:45.182 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 09 16:26:45 compute-0 nova_compute[117331]: 2025-10-09 16:26:45.183 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 09 16:26:45 compute-0 nova_compute[117331]: 2025-10-09 16:26:45.186 2 INFO os_vif [None req-2a7ed071-bb02-4500-af48-8149dc067548 c2c0f7c2f15e4da6881dc393064b0e16 1a345bddd4804404a55948133ea8150f - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:65:76:ba,bridge_name='br-int',has_traffic_filtering=True,id=909ef769-15e5-4c3c-b440-6eb8f378e71f,network=Network(18ef4241-0151-441c-abdc-42d4b3a21b30),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap909ef769-15')
Oct 09 16:26:45 compute-0 nova_compute[117331]: 2025-10-09 16:26:45.187 2 INFO nova.virt.libvirt.driver [None req-2a7ed071-bb02-4500-af48-8149dc067548 c2c0f7c2f15e4da6881dc393064b0e16 1a345bddd4804404a55948133ea8150f - - default default] [instance: b5296471-a1e3-40b2-8cee-8028a741040c] Deleting instance files /var/lib/nova/instances/b5296471-a1e3-40b2-8cee-8028a741040c_del
Oct 09 16:26:45 compute-0 nova_compute[117331]: 2025-10-09 16:26:45.188 2 INFO nova.virt.libvirt.driver [None req-2a7ed071-bb02-4500-af48-8149dc067548 c2c0f7c2f15e4da6881dc393064b0e16 1a345bddd4804404a55948133ea8150f - - default default] [instance: b5296471-a1e3-40b2-8cee-8028a741040c] Deletion of /var/lib/nova/instances/b5296471-a1e3-40b2-8cee-8028a741040c_del complete
Oct 09 16:26:45 compute-0 nova_compute[117331]: 2025-10-09 16:26:45.701 2 INFO nova.compute.manager [None req-2a7ed071-bb02-4500-af48-8149dc067548 c2c0f7c2f15e4da6881dc393064b0e16 1a345bddd4804404a55948133ea8150f - - default default] [instance: b5296471-a1e3-40b2-8cee-8028a741040c] Took 1.31 seconds to destroy the instance on the hypervisor.
Oct 09 16:26:45 compute-0 nova_compute[117331]: 2025-10-09 16:26:45.702 2 DEBUG oslo.service.backend._eventlet.loopingcall [None req-2a7ed071-bb02-4500-af48-8149dc067548 c2c0f7c2f15e4da6881dc393064b0e16 1a345bddd4804404a55948133ea8150f - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.12/site-packages/oslo_service/backend/_eventlet/loopingcall.py:437
Oct 09 16:26:45 compute-0 nova_compute[117331]: 2025-10-09 16:26:45.702 2 DEBUG nova.compute.manager [-] [instance: b5296471-a1e3-40b2-8cee-8028a741040c] Deallocating network for instance _deallocate_network /usr/lib/python3.12/site-packages/nova/compute/manager.py:2324
Oct 09 16:26:45 compute-0 nova_compute[117331]: 2025-10-09 16:26:45.702 2 DEBUG nova.network.neutron [-] [instance: b5296471-a1e3-40b2-8cee-8028a741040c] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.12/site-packages/nova/network/neutron.py:1863
Oct 09 16:26:45 compute-0 nova_compute[117331]: 2025-10-09 16:26:45.702 2 WARNING neutronclient.v2_0.client [-] The python binding code in neutronclient is deprecated in favor of OpenstackSDK, please use that as this will be removed in a future release.
Oct 09 16:26:45 compute-0 nova_compute[117331]: 2025-10-09 16:26:45.969 2 WARNING neutronclient.v2_0.client [-] The python binding code in neutronclient is deprecated in favor of OpenstackSDK, please use that as this will be removed in a future release.
Oct 09 16:26:46 compute-0 sshd-session[147665]: Failed password for root from 91.224.92.108 port 51892 ssh2
Oct 09 16:26:46 compute-0 nova_compute[117331]: 2025-10-09 16:26:46.617 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 09 16:26:46 compute-0 sshd-session[147665]: Received disconnect from 91.224.92.108 port 51892:11:  [preauth]
Oct 09 16:26:46 compute-0 sshd-session[147665]: Disconnected from authenticating user root 91.224.92.108 port 51892 [preauth]
Oct 09 16:26:46 compute-0 sshd-session[147665]: PAM 2 more authentication failures; logname= uid=0 euid=0 tty=ssh ruser= rhost=91.224.92.108  user=root
Oct 09 16:26:46 compute-0 nova_compute[117331]: 2025-10-09 16:26:46.649 2 DEBUG nova.compute.manager [req-a070e054-b686-4d97-b2f3-dcb498a42430 req-3fbb8b10-f257-4be7-b882-e68eddeb2d34 ee15961ad2be4bada6c106ac4f69f422 c076c42271004e7aa86e84416e33f826 - - default default] [instance: b5296471-a1e3-40b2-8cee-8028a741040c] Received event network-vif-unplugged-909ef769-15e5-4c3c-b440-6eb8f378e71f external_instance_event /usr/lib/python3.12/site-packages/nova/compute/manager.py:11812
Oct 09 16:26:46 compute-0 nova_compute[117331]: 2025-10-09 16:26:46.650 2 DEBUG oslo_concurrency.lockutils [req-a070e054-b686-4d97-b2f3-dcb498a42430 req-3fbb8b10-f257-4be7-b882-e68eddeb2d34 ee15961ad2be4bada6c106ac4f69f422 c076c42271004e7aa86e84416e33f826 - - default default] Acquiring lock "b5296471-a1e3-40b2-8cee-8028a741040c-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:405
Oct 09 16:26:46 compute-0 nova_compute[117331]: 2025-10-09 16:26:46.650 2 DEBUG oslo_concurrency.lockutils [req-a070e054-b686-4d97-b2f3-dcb498a42430 req-3fbb8b10-f257-4be7-b882-e68eddeb2d34 ee15961ad2be4bada6c106ac4f69f422 c076c42271004e7aa86e84416e33f826 - - default default] Lock "b5296471-a1e3-40b2-8cee-8028a741040c-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:410
Oct 09 16:26:46 compute-0 nova_compute[117331]: 2025-10-09 16:26:46.650 2 DEBUG oslo_concurrency.lockutils [req-a070e054-b686-4d97-b2f3-dcb498a42430 req-3fbb8b10-f257-4be7-b882-e68eddeb2d34 ee15961ad2be4bada6c106ac4f69f422 c076c42271004e7aa86e84416e33f826 - - default default] Lock "b5296471-a1e3-40b2-8cee-8028a741040c-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:424
Oct 09 16:26:46 compute-0 nova_compute[117331]: 2025-10-09 16:26:46.651 2 DEBUG nova.compute.manager [req-a070e054-b686-4d97-b2f3-dcb498a42430 req-3fbb8b10-f257-4be7-b882-e68eddeb2d34 ee15961ad2be4bada6c106ac4f69f422 c076c42271004e7aa86e84416e33f826 - - default default] [instance: b5296471-a1e3-40b2-8cee-8028a741040c] No waiting events found dispatching network-vif-unplugged-909ef769-15e5-4c3c-b440-6eb8f378e71f pop_instance_event /usr/lib/python3.12/site-packages/nova/compute/manager.py:344
Oct 09 16:26:46 compute-0 nova_compute[117331]: 2025-10-09 16:26:46.651 2 DEBUG nova.compute.manager [req-a070e054-b686-4d97-b2f3-dcb498a42430 req-3fbb8b10-f257-4be7-b882-e68eddeb2d34 ee15961ad2be4bada6c106ac4f69f422 c076c42271004e7aa86e84416e33f826 - - default default] [instance: b5296471-a1e3-40b2-8cee-8028a741040c] Received event network-vif-unplugged-909ef769-15e5-4c3c-b440-6eb8f378e71f for instance with task_state deleting. _process_instance_event /usr/lib/python3.12/site-packages/nova/compute/manager.py:11590
Oct 09 16:26:46 compute-0 nova_compute[117331]: 2025-10-09 16:26:46.651 2 DEBUG nova.compute.manager [req-a070e054-b686-4d97-b2f3-dcb498a42430 req-3fbb8b10-f257-4be7-b882-e68eddeb2d34 ee15961ad2be4bada6c106ac4f69f422 c076c42271004e7aa86e84416e33f826 - - default default] [instance: b5296471-a1e3-40b2-8cee-8028a741040c] Received event network-vif-deleted-909ef769-15e5-4c3c-b440-6eb8f378e71f external_instance_event /usr/lib/python3.12/site-packages/nova/compute/manager.py:11812
Oct 09 16:26:46 compute-0 nova_compute[117331]: 2025-10-09 16:26:46.652 2 INFO nova.compute.manager [req-a070e054-b686-4d97-b2f3-dcb498a42430 req-3fbb8b10-f257-4be7-b882-e68eddeb2d34 ee15961ad2be4bada6c106ac4f69f422 c076c42271004e7aa86e84416e33f826 - - default default] [instance: b5296471-a1e3-40b2-8cee-8028a741040c] Neutron deleted interface 909ef769-15e5-4c3c-b440-6eb8f378e71f; detaching it from the instance and deleting it from the info cache
Oct 09 16:26:46 compute-0 nova_compute[117331]: 2025-10-09 16:26:46.652 2 DEBUG nova.network.neutron [req-a070e054-b686-4d97-b2f3-dcb498a42430 req-3fbb8b10-f257-4be7-b882-e68eddeb2d34 ee15961ad2be4bada6c106ac4f69f422 c076c42271004e7aa86e84416e33f826 - - default default] [instance: b5296471-a1e3-40b2-8cee-8028a741040c] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.12/site-packages/nova/network/neutron.py:116
Oct 09 16:26:46 compute-0 nova_compute[117331]: 2025-10-09 16:26:46.773 2 DEBUG nova.network.neutron [-] [instance: b5296471-a1e3-40b2-8cee-8028a741040c] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.12/site-packages/nova/network/neutron.py:116
Oct 09 16:26:47 compute-0 nova_compute[117331]: 2025-10-09 16:26:47.160 2 DEBUG nova.compute.manager [req-a070e054-b686-4d97-b2f3-dcb498a42430 req-3fbb8b10-f257-4be7-b882-e68eddeb2d34 ee15961ad2be4bada6c106ac4f69f422 c076c42271004e7aa86e84416e33f826 - - default default] [instance: b5296471-a1e3-40b2-8cee-8028a741040c] Detach interface failed, port_id=909ef769-15e5-4c3c-b440-6eb8f378e71f, reason: Instance b5296471-a1e3-40b2-8cee-8028a741040c could not be found. _process_instance_vif_deleted_event /usr/lib/python3.12/site-packages/nova/compute/manager.py:11646
Oct 09 16:26:47 compute-0 nova_compute[117331]: 2025-10-09 16:26:47.280 2 INFO nova.compute.manager [-] [instance: b5296471-a1e3-40b2-8cee-8028a741040c] Took 1.58 seconds to deallocate network for instance.
Oct 09 16:26:47 compute-0 unix_chkpwd[147758]: password check failed for user (root)
Oct 09 16:26:47 compute-0 sshd-session[147756]: pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=91.224.92.108  user=root
Oct 09 16:26:47 compute-0 nova_compute[117331]: 2025-10-09 16:26:47.800 2 DEBUG oslo_concurrency.lockutils [None req-2a7ed071-bb02-4500-af48-8149dc067548 c2c0f7c2f15e4da6881dc393064b0e16 1a345bddd4804404a55948133ea8150f - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:405
Oct 09 16:26:47 compute-0 nova_compute[117331]: 2025-10-09 16:26:47.801 2 DEBUG oslo_concurrency.lockutils [None req-2a7ed071-bb02-4500-af48-8149dc067548 c2c0f7c2f15e4da6881dc393064b0e16 1a345bddd4804404a55948133ea8150f - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:410
Oct 09 16:26:47 compute-0 nova_compute[117331]: 2025-10-09 16:26:47.806 2 DEBUG oslo_concurrency.lockutils [None req-2a7ed071-bb02-4500-af48-8149dc067548 c2c0f7c2f15e4da6881dc393064b0e16 1a345bddd4804404a55948133ea8150f - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.006s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:424
Oct 09 16:26:47 compute-0 nova_compute[117331]: 2025-10-09 16:26:47.857 2 INFO nova.scheduler.client.report [None req-2a7ed071-bb02-4500-af48-8149dc067548 c2c0f7c2f15e4da6881dc393064b0e16 1a345bddd4804404a55948133ea8150f - - default default] Deleted allocations for instance b5296471-a1e3-40b2-8cee-8028a741040c
Oct 09 16:26:48 compute-0 nova_compute[117331]: 2025-10-09 16:26:48.899 2 DEBUG oslo_concurrency.lockutils [None req-2a7ed071-bb02-4500-af48-8149dc067548 c2c0f7c2f15e4da6881dc393064b0e16 1a345bddd4804404a55948133ea8150f - - default default] Lock "b5296471-a1e3-40b2-8cee-8028a741040c" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 5.060s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:424
Oct 09 16:26:49 compute-0 sshd-session[147756]: Failed password for root from 91.224.92.108 port 55742 ssh2
Oct 09 16:26:49 compute-0 unix_chkpwd[147759]: password check failed for user (root)
Oct 09 16:26:49 compute-0 nova_compute[117331]: 2025-10-09 16:26:49.774 2 DEBUG oslo_concurrency.lockutils [None req-1d8bb806-218c-4a8d-915f-ab2e53d15f03 c2c0f7c2f15e4da6881dc393064b0e16 1a345bddd4804404a55948133ea8150f - - default default] Acquiring lock "da070580-b1f8-4571-b524-ce73d61665df" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:405
Oct 09 16:26:49 compute-0 nova_compute[117331]: 2025-10-09 16:26:49.775 2 DEBUG oslo_concurrency.lockutils [None req-1d8bb806-218c-4a8d-915f-ab2e53d15f03 c2c0f7c2f15e4da6881dc393064b0e16 1a345bddd4804404a55948133ea8150f - - default default] Lock "da070580-b1f8-4571-b524-ce73d61665df" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:410
Oct 09 16:26:49 compute-0 nova_compute[117331]: 2025-10-09 16:26:49.776 2 DEBUG oslo_concurrency.lockutils [None req-1d8bb806-218c-4a8d-915f-ab2e53d15f03 c2c0f7c2f15e4da6881dc393064b0e16 1a345bddd4804404a55948133ea8150f - - default default] Acquiring lock "da070580-b1f8-4571-b524-ce73d61665df-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:405
Oct 09 16:26:49 compute-0 nova_compute[117331]: 2025-10-09 16:26:49.776 2 DEBUG oslo_concurrency.lockutils [None req-1d8bb806-218c-4a8d-915f-ab2e53d15f03 c2c0f7c2f15e4da6881dc393064b0e16 1a345bddd4804404a55948133ea8150f - - default default] Lock "da070580-b1f8-4571-b524-ce73d61665df-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:410
Oct 09 16:26:49 compute-0 nova_compute[117331]: 2025-10-09 16:26:49.776 2 DEBUG oslo_concurrency.lockutils [None req-1d8bb806-218c-4a8d-915f-ab2e53d15f03 c2c0f7c2f15e4da6881dc393064b0e16 1a345bddd4804404a55948133ea8150f - - default default] Lock "da070580-b1f8-4571-b524-ce73d61665df-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:424
Oct 09 16:26:49 compute-0 nova_compute[117331]: 2025-10-09 16:26:49.794 2 INFO nova.compute.manager [None req-1d8bb806-218c-4a8d-915f-ab2e53d15f03 c2c0f7c2f15e4da6881dc393064b0e16 1a345bddd4804404a55948133ea8150f - - default default] [instance: da070580-b1f8-4571-b524-ce73d61665df] Terminating instance
Oct 09 16:26:49 compute-0 podman[147761]: 2025-10-09 16:26:49.851407872 +0000 UTC m=+0.069427384 container health_status 8227728d23f374cb9528be6ddf01dff38d8c792912a17b0d137932a18390b820 (image=38.102.83.66:5001/podified-master-centos10/openstack-iscsid:watcher_latest, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, config_id=iscsid, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, container_name=iscsid, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251007, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': '38.102.83.66:5001/podified-master-centos10/openstack-iscsid:watcher_latest', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, io.buildah.version=1.41.4, tcib_build_tag=watcher_latest, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Oct 09 16:26:49 compute-0 podman[147760]: 2025-10-09 16:26:49.875565854 +0000 UTC m=+0.097536390 container health_status 0e966c7a283689daf23c5db5017f23f685ee836319240188a9f70ddd246563c5 (image=38.102.83.66:5001/podified-master-centos10/openstack-neutron-metadata-agent-ovn:watcher_latest, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': '38.102.83.66:5001/podified-master-centos10/openstack-neutron-metadata-agent-ovn:watcher_latest', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, org.label-schema.build-date=20251007, tcib_build_tag=watcher_latest, config_id=ovn_metadata_agent, managed_by=edpm_ansible, io.buildah.version=1.41.4, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 10 Base Image, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Oct 09 16:26:50 compute-0 nova_compute[117331]: 2025-10-09 16:26:50.182 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 09 16:26:50 compute-0 nova_compute[117331]: 2025-10-09 16:26:50.315 2 DEBUG nova.compute.manager [None req-1d8bb806-218c-4a8d-915f-ab2e53d15f03 c2c0f7c2f15e4da6881dc393064b0e16 1a345bddd4804404a55948133ea8150f - - default default] [instance: da070580-b1f8-4571-b524-ce73d61665df] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.12/site-packages/nova/compute/manager.py:3197
Oct 09 16:26:50 compute-0 kernel: tapc06c63eb-93 (unregistering): left promiscuous mode
Oct 09 16:26:50 compute-0 NetworkManager[1028]: <info>  [1760027210.3443] device (tapc06c63eb-93): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Oct 09 16:26:50 compute-0 ovn_controller[19752]: 2025-10-09T16:26:50Z|00154|binding|INFO|Releasing lport c06c63eb-935d-42e3-9672-1d18b6bca7ad from this chassis (sb_readonly=0)
Oct 09 16:26:50 compute-0 ovn_controller[19752]: 2025-10-09T16:26:50Z|00155|binding|INFO|Setting lport c06c63eb-935d-42e3-9672-1d18b6bca7ad down in Southbound
Oct 09 16:26:50 compute-0 ovn_controller[19752]: 2025-10-09T16:26:50Z|00156|binding|INFO|Removing iface tapc06c63eb-93 ovn-installed in OVS
Oct 09 16:26:50 compute-0 nova_compute[117331]: 2025-10-09 16:26:50.349 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 09 16:26:50 compute-0 ovn_metadata_agent[28608]: 2025-10-09 16:26:50.358 28613 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:49:af:77 10.100.0.13'], port_security=['fa:16:3e:49:af:77 10.100.0.13'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.13/28', 'neutron:device_id': 'da070580-b1f8-4571-b524-ce73d61665df', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-18ef4241-0151-441c-abdc-42d4b3a21b30', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '1a345bddd4804404a55948133ea8150f', 'neutron:revision_number': '5', 'neutron:security_group_ids': '9689e159-b15c-4e2f-9c7b-a0ccf3c3f578', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=0ed87cdf-61a6-4df0-ac74-37289a0bcd5f, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f37b2576840>], logical_port=c06c63eb-935d-42e3-9672-1d18b6bca7ad) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f37b2576840>]) matches /usr/lib/python3.12/site-packages/ovsdbapp/backend/ovs_idl/event.py:55
Oct 09 16:26:50 compute-0 ovn_metadata_agent[28608]: 2025-10-09 16:26:50.359 28613 INFO neutron.agent.ovn.metadata.agent [-] Port c06c63eb-935d-42e3-9672-1d18b6bca7ad in datapath 18ef4241-0151-441c-abdc-42d4b3a21b30 unbound from our chassis
Oct 09 16:26:50 compute-0 ovn_metadata_agent[28608]: 2025-10-09 16:26:50.362 28613 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 18ef4241-0151-441c-abdc-42d4b3a21b30, tearing the namespace down if needed _get_provision_params /usr/lib/python3.12/site-packages/neutron/agent/ovn/metadata/agent.py:756
Oct 09 16:26:50 compute-0 ovn_metadata_agent[28608]: 2025-10-09 16:26:50.363 139687 DEBUG oslo.privsep.daemon [-] privsep: reply[ec6827a6-266f-48b7-81fe-15045878fef0]: (4, True) _call_back /usr/lib/python3.12/site-packages/oslo_privsep/daemon.py:515
Oct 09 16:26:50 compute-0 ovn_metadata_agent[28608]: 2025-10-09 16:26:50.364 28613 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-18ef4241-0151-441c-abdc-42d4b3a21b30 namespace which is not needed anymore
Oct 09 16:26:50 compute-0 nova_compute[117331]: 2025-10-09 16:26:50.380 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 09 16:26:50 compute-0 systemd[1]: machine-qemu\x2d10\x2dinstance\x2d00000010.scope: Deactivated successfully.
Oct 09 16:26:50 compute-0 systemd[1]: machine-qemu\x2d10\x2dinstance\x2d00000010.scope: Consumed 14.576s CPU time.
Oct 09 16:26:50 compute-0 systemd-machined[77487]: Machine qemu-10-instance-00000010 terminated.
Oct 09 16:26:50 compute-0 nova_compute[117331]: 2025-10-09 16:26:50.480 2 DEBUG nova.compute.manager [req-ec90ec53-3760-4744-8330-4fafd8879a3e req-22f15069-9084-4655-8e2c-0c3cb3753c7c ee15961ad2be4bada6c106ac4f69f422 c076c42271004e7aa86e84416e33f826 - - default default] [instance: da070580-b1f8-4571-b524-ce73d61665df] Received event network-vif-unplugged-c06c63eb-935d-42e3-9672-1d18b6bca7ad external_instance_event /usr/lib/python3.12/site-packages/nova/compute/manager.py:11812
Oct 09 16:26:50 compute-0 nova_compute[117331]: 2025-10-09 16:26:50.481 2 DEBUG oslo_concurrency.lockutils [req-ec90ec53-3760-4744-8330-4fafd8879a3e req-22f15069-9084-4655-8e2c-0c3cb3753c7c ee15961ad2be4bada6c106ac4f69f422 c076c42271004e7aa86e84416e33f826 - - default default] Acquiring lock "da070580-b1f8-4571-b524-ce73d61665df-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:405
Oct 09 16:26:50 compute-0 nova_compute[117331]: 2025-10-09 16:26:50.481 2 DEBUG oslo_concurrency.lockutils [req-ec90ec53-3760-4744-8330-4fafd8879a3e req-22f15069-9084-4655-8e2c-0c3cb3753c7c ee15961ad2be4bada6c106ac4f69f422 c076c42271004e7aa86e84416e33f826 - - default default] Lock "da070580-b1f8-4571-b524-ce73d61665df-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:410
Oct 09 16:26:50 compute-0 nova_compute[117331]: 2025-10-09 16:26:50.482 2 DEBUG oslo_concurrency.lockutils [req-ec90ec53-3760-4744-8330-4fafd8879a3e req-22f15069-9084-4655-8e2c-0c3cb3753c7c ee15961ad2be4bada6c106ac4f69f422 c076c42271004e7aa86e84416e33f826 - - default default] Lock "da070580-b1f8-4571-b524-ce73d61665df-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:424
Oct 09 16:26:50 compute-0 nova_compute[117331]: 2025-10-09 16:26:50.482 2 DEBUG nova.compute.manager [req-ec90ec53-3760-4744-8330-4fafd8879a3e req-22f15069-9084-4655-8e2c-0c3cb3753c7c ee15961ad2be4bada6c106ac4f69f422 c076c42271004e7aa86e84416e33f826 - - default default] [instance: da070580-b1f8-4571-b524-ce73d61665df] No waiting events found dispatching network-vif-unplugged-c06c63eb-935d-42e3-9672-1d18b6bca7ad pop_instance_event /usr/lib/python3.12/site-packages/nova/compute/manager.py:344
Oct 09 16:26:50 compute-0 nova_compute[117331]: 2025-10-09 16:26:50.482 2 DEBUG nova.compute.manager [req-ec90ec53-3760-4744-8330-4fafd8879a3e req-22f15069-9084-4655-8e2c-0c3cb3753c7c ee15961ad2be4bada6c106ac4f69f422 c076c42271004e7aa86e84416e33f826 - - default default] [instance: da070580-b1f8-4571-b524-ce73d61665df] Received event network-vif-unplugged-c06c63eb-935d-42e3-9672-1d18b6bca7ad for instance with task_state deleting. _process_instance_event /usr/lib/python3.12/site-packages/nova/compute/manager.py:11590
Oct 09 16:26:50 compute-0 neutron-haproxy-ovnmeta-18ef4241-0151-441c-abdc-42d4b3a21b30[147319]: [NOTICE]   (147323) : haproxy version is 3.0.5-8e879a5
Oct 09 16:26:50 compute-0 neutron-haproxy-ovnmeta-18ef4241-0151-441c-abdc-42d4b3a21b30[147319]: [NOTICE]   (147323) : path to executable is /usr/sbin/haproxy
Oct 09 16:26:50 compute-0 neutron-haproxy-ovnmeta-18ef4241-0151-441c-abdc-42d4b3a21b30[147319]: [WARNING]  (147323) : Exiting Master process...
Oct 09 16:26:50 compute-0 podman[147825]: 2025-10-09 16:26:50.491206843 +0000 UTC m=+0.028762280 container kill e8269e920b5522cbba4cf2ad78c7368a87540711f08e7ef23dc6d78d32018dc0 (image=38.102.83.66:5001/podified-master-centos10/openstack-neutron-metadata-agent-ovn:watcher_latest, name=neutron-haproxy-ovnmeta-18ef4241-0151-441c-abdc-42d4b3a21b30, org.label-schema.name=CentOS Stream 10 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=watcher_latest, tcib_managed=true, io.buildah.version=1.41.4, org.label-schema.build-date=20251007)
Oct 09 16:26:50 compute-0 neutron-haproxy-ovnmeta-18ef4241-0151-441c-abdc-42d4b3a21b30[147319]: [ALERT]    (147323) : Current worker (147325) exited with code 143 (Terminated)
Oct 09 16:26:50 compute-0 neutron-haproxy-ovnmeta-18ef4241-0151-441c-abdc-42d4b3a21b30[147319]: [WARNING]  (147323) : All workers exited. Exiting... (0)
Oct 09 16:26:50 compute-0 systemd[1]: libpod-e8269e920b5522cbba4cf2ad78c7368a87540711f08e7ef23dc6d78d32018dc0.scope: Deactivated successfully.
Oct 09 16:26:50 compute-0 podman[147841]: 2025-10-09 16:26:50.541527801 +0000 UTC m=+0.031427973 container died e8269e920b5522cbba4cf2ad78c7368a87540711f08e7ef23dc6d78d32018dc0 (image=38.102.83.66:5001/podified-master-centos10/openstack-neutron-metadata-agent-ovn:watcher_latest, name=neutron-haproxy-ovnmeta-18ef4241-0151-441c-abdc-42d4b3a21b30, tcib_build_tag=watcher_latest, tcib_managed=true, io.buildah.version=1.41.4, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251007, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2)
Oct 09 16:26:50 compute-0 nova_compute[117331]: 2025-10-09 16:26:50.563 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 09 16:26:50 compute-0 systemd[1]: var-lib-containers-storage-overlay-d4569d2da3806b3a50dcace2435eb4a9eb13bd949c43ce36edbd17422be35176-merged.mount: Deactivated successfully.
Oct 09 16:26:50 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-e8269e920b5522cbba4cf2ad78c7368a87540711f08e7ef23dc6d78d32018dc0-userdata-shm.mount: Deactivated successfully.
Oct 09 16:26:50 compute-0 nova_compute[117331]: 2025-10-09 16:26:50.599 2 INFO nova.virt.libvirt.driver [-] [instance: da070580-b1f8-4571-b524-ce73d61665df] Instance destroyed successfully.
Oct 09 16:26:50 compute-0 nova_compute[117331]: 2025-10-09 16:26:50.599 2 DEBUG nova.objects.instance [None req-1d8bb806-218c-4a8d-915f-ab2e53d15f03 c2c0f7c2f15e4da6881dc393064b0e16 1a345bddd4804404a55948133ea8150f - - default default] Lazy-loading 'resources' on Instance uuid da070580-b1f8-4571-b524-ce73d61665df obj_load_attr /usr/lib/python3.12/site-packages/nova/objects/instance.py:1141
Oct 09 16:26:50 compute-0 podman[147841]: 2025-10-09 16:26:50.605348426 +0000 UTC m=+0.095248578 container cleanup e8269e920b5522cbba4cf2ad78c7368a87540711f08e7ef23dc6d78d32018dc0 (image=38.102.83.66:5001/podified-master-centos10/openstack-neutron-metadata-agent-ovn:watcher_latest, name=neutron-haproxy-ovnmeta-18ef4241-0151-441c-abdc-42d4b3a21b30, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.build-date=20251007, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.41.4, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=watcher_latest, org.label-schema.schema-version=1.0)
Oct 09 16:26:50 compute-0 systemd[1]: libpod-conmon-e8269e920b5522cbba4cf2ad78c7368a87540711f08e7ef23dc6d78d32018dc0.scope: Deactivated successfully.
Oct 09 16:26:50 compute-0 podman[147843]: 2025-10-09 16:26:50.625273046 +0000 UTC m=+0.109603432 container remove e8269e920b5522cbba4cf2ad78c7368a87540711f08e7ef23dc6d78d32018dc0 (image=38.102.83.66:5001/podified-master-centos10/openstack-neutron-metadata-agent-ovn:watcher_latest, name=neutron-haproxy-ovnmeta-18ef4241-0151-441c-abdc-42d4b3a21b30, org.label-schema.vendor=CentOS, tcib_build_tag=watcher_latest, org.label-schema.build-date=20251007, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 10 Base Image, tcib_managed=true, io.buildah.version=1.41.4, maintainer=OpenStack Kubernetes Operator team)
Oct 09 16:26:50 compute-0 ovn_metadata_agent[28608]: 2025-10-09 16:26:50.648 139687 DEBUG oslo.privsep.daemon [-] privsep: reply[4fa4bfdf-7b1f-40f5-8f46-473faf78c089]: (4, ("Thu Oct  9 04:26:50 PM UTC 2025 Sending signal '15' to neutron-haproxy-ovnmeta-18ef4241-0151-441c-abdc-42d4b3a21b30 (e8269e920b5522cbba4cf2ad78c7368a87540711f08e7ef23dc6d78d32018dc0)\ne8269e920b5522cbba4cf2ad78c7368a87540711f08e7ef23dc6d78d32018dc0\nThu Oct  9 04:26:50 PM UTC 2025 Deleting container neutron-haproxy-ovnmeta-18ef4241-0151-441c-abdc-42d4b3a21b30 (e8269e920b5522cbba4cf2ad78c7368a87540711f08e7ef23dc6d78d32018dc0)\ne8269e920b5522cbba4cf2ad78c7368a87540711f08e7ef23dc6d78d32018dc0\n", '', 0)) _call_back /usr/lib/python3.12/site-packages/oslo_privsep/daemon.py:515
Oct 09 16:26:50 compute-0 ovn_metadata_agent[28608]: 2025-10-09 16:26:50.650 139687 DEBUG oslo.privsep.daemon [-] privsep: reply[8861ac35-d75e-4982-a99d-b5cec3bdea0f]: (4, None) _call_back /usr/lib/python3.12/site-packages/oslo_privsep/daemon.py:515
Oct 09 16:26:50 compute-0 ovn_metadata_agent[28608]: 2025-10-09 16:26:50.651 28613 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/18ef4241-0151-441c-abdc-42d4b3a21b30.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/18ef4241-0151-441c-abdc-42d4b3a21b30.pid.haproxy' get_value_from_file /usr/lib/python3.12/site-packages/neutron/agent/linux/utils.py:268
Oct 09 16:26:50 compute-0 ovn_metadata_agent[28608]: 2025-10-09 16:26:50.651 139687 DEBUG oslo.privsep.daemon [-] privsep: reply[74627727-cd85-4a8c-a550-33ad07af3315]: (4, None) _call_back /usr/lib/python3.12/site-packages/oslo_privsep/daemon.py:515
Oct 09 16:26:50 compute-0 ovn_metadata_agent[28608]: 2025-10-09 16:26:50.651 28613 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap18ef4241-00, bridge=None, if_exists=True) do_commit /usr/lib/python3.12/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 09 16:26:50 compute-0 nova_compute[117331]: 2025-10-09 16:26:50.653 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 09 16:26:50 compute-0 kernel: tap18ef4241-00: left promiscuous mode
Oct 09 16:26:50 compute-0 nova_compute[117331]: 2025-10-09 16:26:50.670 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 09 16:26:50 compute-0 ovn_metadata_agent[28608]: 2025-10-09 16:26:50.672 139687 DEBUG oslo.privsep.daemon [-] privsep: reply[16653f56-63d5-4130-80fe-e5364f1e2f62]: (4, True) _call_back /usr/lib/python3.12/site-packages/oslo_privsep/daemon.py:515
Oct 09 16:26:50 compute-0 ovn_metadata_agent[28608]: 2025-10-09 16:26:50.697 139687 DEBUG oslo.privsep.daemon [-] privsep: reply[a2fc089e-4f71-4ea6-a0ea-54c11fea5c19]: (4, None) _call_back /usr/lib/python3.12/site-packages/oslo_privsep/daemon.py:515
Oct 09 16:26:50 compute-0 ovn_metadata_agent[28608]: 2025-10-09 16:26:50.699 139687 DEBUG oslo.privsep.daemon [-] privsep: reply[c18bf0c3-1029-4210-8c71-f9ce0c1ca9a6]: (4, True) _call_back /usr/lib/python3.12/site-packages/oslo_privsep/daemon.py:515
Oct 09 16:26:50 compute-0 ovn_metadata_agent[28608]: 2025-10-09 16:26:50.722 139687 DEBUG oslo.privsep.daemon [-] privsep: reply[4f2652d9-0678-4c84-8dbb-08147ae21f52]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 208583, 'reachable_time': 43838, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 147896, 'error': None, 'target': 'ovnmeta-18ef4241-0151-441c-abdc-42d4b3a21b30', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.12/site-packages/oslo_privsep/daemon.py:515
Oct 09 16:26:50 compute-0 ovn_metadata_agent[28608]: 2025-10-09 16:26:50.725 28727 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-18ef4241-0151-441c-abdc-42d4b3a21b30 deleted. remove_netns /usr/lib/python3.12/site-packages/neutron/privileged/agent/linux/ip_lib.py:603
Oct 09 16:26:50 compute-0 ovn_metadata_agent[28608]: 2025-10-09 16:26:50.726 28727 DEBUG oslo.privsep.daemon [-] privsep: reply[c0c6655a-e334-48a4-99ea-4a15822d7f24]: (4, None) _call_back /usr/lib/python3.12/site-packages/oslo_privsep/daemon.py:515
Oct 09 16:26:50 compute-0 systemd[1]: run-netns-ovnmeta\x2d18ef4241\x2d0151\x2d441c\x2dabdc\x2d42d4b3a21b30.mount: Deactivated successfully.
Oct 09 16:26:51 compute-0 nova_compute[117331]: 2025-10-09 16:26:51.107 2 DEBUG nova.virt.libvirt.vif [None req-1d8bb806-218c-4a8d-915f-ab2e53d15f03 c2c0f7c2f15e4da6881dc393064b0e16 1a345bddd4804404a55948133ea8150f - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,compute_id=1,config_drive='True',created_at=2025-10-09T16:25:30Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description=None,display_name='tempest-TestExecuteNodeResourceConsolidationStrategy-server-978482825',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testexecutenoderesourceconsolidationstrategy-server-978',id=16,image_ref='b7d6e0af-25e4-4227-9dc6-43143898ceee',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-10-09T16:25:44Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='1a345bddd4804404a55948133ea8150f',ramdisk_id='',reservation_id='r-3cl3cajx',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,manager,admin,member',image_base_image_ref='b7d6e0af-25e4-4227-9dc6-43143898ceee',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestExecuteNodeResourceConsolidationStrategy-1915104332',owner_user_name='tempest-TestExecuteNodeResourceConsolidationStrategy-1915104332-project-admin'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-10-09T16:25:44Z,user_data=None,user_id='c2c0f7c2f15e4da6881dc393064b0e16',uuid=da070580-b1f8-4571-b524-ce73d61665df,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "c06c63eb-935d-42e3-9672-1d18b6bca7ad", "address": "fa:16:3e:49:af:77", "network": {"id": "18ef4241-0151-441c-abdc-42d4b3a21b30", "bridge": "br-int", "label": "tempest-TestExecuteNodeResourceConsolidationStrategy-129271777-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c2c321aa8a494a8e8b49c81b79e3ceca", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc06c63eb-93", "ovs_interfaceid": "c06c63eb-935d-42e3-9672-1d18b6bca7ad", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.12/site-packages/nova/virt/libvirt/vif.py:839
Oct 09 16:26:51 compute-0 nova_compute[117331]: 2025-10-09 16:26:51.107 2 DEBUG nova.network.os_vif_util [None req-1d8bb806-218c-4a8d-915f-ab2e53d15f03 c2c0f7c2f15e4da6881dc393064b0e16 1a345bddd4804404a55948133ea8150f - - default default] Converting VIF {"id": "c06c63eb-935d-42e3-9672-1d18b6bca7ad", "address": "fa:16:3e:49:af:77", "network": {"id": "18ef4241-0151-441c-abdc-42d4b3a21b30", "bridge": "br-int", "label": "tempest-TestExecuteNodeResourceConsolidationStrategy-129271777-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c2c321aa8a494a8e8b49c81b79e3ceca", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc06c63eb-93", "ovs_interfaceid": "c06c63eb-935d-42e3-9672-1d18b6bca7ad", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.12/site-packages/nova/network/os_vif_util.py:511
Oct 09 16:26:51 compute-0 nova_compute[117331]: 2025-10-09 16:26:51.107 2 DEBUG nova.network.os_vif_util [None req-1d8bb806-218c-4a8d-915f-ab2e53d15f03 c2c0f7c2f15e4da6881dc393064b0e16 1a345bddd4804404a55948133ea8150f - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:49:af:77,bridge_name='br-int',has_traffic_filtering=True,id=c06c63eb-935d-42e3-9672-1d18b6bca7ad,network=Network(18ef4241-0151-441c-abdc-42d4b3a21b30),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapc06c63eb-93') nova_to_osvif_vif /usr/lib/python3.12/site-packages/nova/network/os_vif_util.py:548
Oct 09 16:26:51 compute-0 nova_compute[117331]: 2025-10-09 16:26:51.108 2 DEBUG os_vif [None req-1d8bb806-218c-4a8d-915f-ab2e53d15f03 c2c0f7c2f15e4da6881dc393064b0e16 1a345bddd4804404a55948133ea8150f - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:49:af:77,bridge_name='br-int',has_traffic_filtering=True,id=c06c63eb-935d-42e3-9672-1d18b6bca7ad,network=Network(18ef4241-0151-441c-abdc-42d4b3a21b30),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapc06c63eb-93') unplug /usr/lib/python3.12/site-packages/os_vif/__init__.py:109
Oct 09 16:26:51 compute-0 nova_compute[117331]: 2025-10-09 16:26:51.109 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 22 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 09 16:26:51 compute-0 nova_compute[117331]: 2025-10-09 16:26:51.109 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapc06c63eb-93, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.12/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 09 16:26:51 compute-0 nova_compute[117331]: 2025-10-09 16:26:51.110 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 09 16:26:51 compute-0 nova_compute[117331]: 2025-10-09 16:26:51.112 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:248
Oct 09 16:26:51 compute-0 nova_compute[117331]: 2025-10-09 16:26:51.113 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 09 16:26:51 compute-0 nova_compute[117331]: 2025-10-09 16:26:51.113 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 22 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 09 16:26:51 compute-0 nova_compute[117331]: 2025-10-09 16:26:51.114 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbDestroyCommand(_result=None, table=QoS, record=2de89947-7f29-4c6b-a4d4-cc9efaf1e4e1) do_commit /usr/lib/python3.12/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 09 16:26:51 compute-0 nova_compute[117331]: 2025-10-09 16:26:51.115 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 09 16:26:51 compute-0 nova_compute[117331]: 2025-10-09 16:26:51.116 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 09 16:26:51 compute-0 nova_compute[117331]: 2025-10-09 16:26:51.118 2 INFO os_vif [None req-1d8bb806-218c-4a8d-915f-ab2e53d15f03 c2c0f7c2f15e4da6881dc393064b0e16 1a345bddd4804404a55948133ea8150f - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:49:af:77,bridge_name='br-int',has_traffic_filtering=True,id=c06c63eb-935d-42e3-9672-1d18b6bca7ad,network=Network(18ef4241-0151-441c-abdc-42d4b3a21b30),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapc06c63eb-93')
Oct 09 16:26:51 compute-0 nova_compute[117331]: 2025-10-09 16:26:51.119 2 INFO nova.virt.libvirt.driver [None req-1d8bb806-218c-4a8d-915f-ab2e53d15f03 c2c0f7c2f15e4da6881dc393064b0e16 1a345bddd4804404a55948133ea8150f - - default default] [instance: da070580-b1f8-4571-b524-ce73d61665df] Deleting instance files /var/lib/nova/instances/da070580-b1f8-4571-b524-ce73d61665df_del
Oct 09 16:26:51 compute-0 nova_compute[117331]: 2025-10-09 16:26:51.120 2 INFO nova.virt.libvirt.driver [None req-1d8bb806-218c-4a8d-915f-ab2e53d15f03 c2c0f7c2f15e4da6881dc393064b0e16 1a345bddd4804404a55948133ea8150f - - default default] [instance: da070580-b1f8-4571-b524-ce73d61665df] Deletion of /var/lib/nova/instances/da070580-b1f8-4571-b524-ce73d61665df_del complete
Oct 09 16:26:51 compute-0 nova_compute[117331]: 2025-10-09 16:26:51.661 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 09 16:26:51 compute-0 nova_compute[117331]: 2025-10-09 16:26:51.666 2 INFO nova.compute.manager [None req-1d8bb806-218c-4a8d-915f-ab2e53d15f03 c2c0f7c2f15e4da6881dc393064b0e16 1a345bddd4804404a55948133ea8150f - - default default] [instance: da070580-b1f8-4571-b524-ce73d61665df] Took 1.35 seconds to destroy the instance on the hypervisor.
Oct 09 16:26:51 compute-0 nova_compute[117331]: 2025-10-09 16:26:51.666 2 DEBUG oslo.service.backend._eventlet.loopingcall [None req-1d8bb806-218c-4a8d-915f-ab2e53d15f03 c2c0f7c2f15e4da6881dc393064b0e16 1a345bddd4804404a55948133ea8150f - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.12/site-packages/oslo_service/backend/_eventlet/loopingcall.py:437
Oct 09 16:26:51 compute-0 nova_compute[117331]: 2025-10-09 16:26:51.666 2 DEBUG nova.compute.manager [-] [instance: da070580-b1f8-4571-b524-ce73d61665df] Deallocating network for instance _deallocate_network /usr/lib/python3.12/site-packages/nova/compute/manager.py:2324
Oct 09 16:26:51 compute-0 nova_compute[117331]: 2025-10-09 16:26:51.666 2 DEBUG nova.network.neutron [-] [instance: da070580-b1f8-4571-b524-ce73d61665df] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.12/site-packages/nova/network/neutron.py:1863
Oct 09 16:26:51 compute-0 nova_compute[117331]: 2025-10-09 16:26:51.667 2 WARNING neutronclient.v2_0.client [-] The python binding code in neutronclient is deprecated in favor of OpenstackSDK, please use that as this will be removed in a future release.
Oct 09 16:26:51 compute-0 sshd-session[147756]: Failed password for root from 91.224.92.108 port 55742 ssh2
Oct 09 16:26:51 compute-0 nova_compute[117331]: 2025-10-09 16:26:51.871 2 WARNING neutronclient.v2_0.client [-] The python binding code in neutronclient is deprecated in favor of OpenstackSDK, please use that as this will be removed in a future release.
Oct 09 16:26:51 compute-0 unix_chkpwd[147898]: password check failed for user (root)
Oct 09 16:26:52 compute-0 nova_compute[117331]: 2025-10-09 16:26:52.257 2 DEBUG nova.compute.manager [req-da65de4a-b428-4f1b-be64-2c2847f8f85f req-23419a31-1f5e-46db-aeab-b65c757ff146 ee15961ad2be4bada6c106ac4f69f422 c076c42271004e7aa86e84416e33f826 - - default default] [instance: da070580-b1f8-4571-b524-ce73d61665df] Received event network-vif-deleted-c06c63eb-935d-42e3-9672-1d18b6bca7ad external_instance_event /usr/lib/python3.12/site-packages/nova/compute/manager.py:11812
Oct 09 16:26:52 compute-0 nova_compute[117331]: 2025-10-09 16:26:52.257 2 INFO nova.compute.manager [req-da65de4a-b428-4f1b-be64-2c2847f8f85f req-23419a31-1f5e-46db-aeab-b65c757ff146 ee15961ad2be4bada6c106ac4f69f422 c076c42271004e7aa86e84416e33f826 - - default default] [instance: da070580-b1f8-4571-b524-ce73d61665df] Neutron deleted interface c06c63eb-935d-42e3-9672-1d18b6bca7ad; detaching it from the instance and deleting it from the info cache
Oct 09 16:26:52 compute-0 nova_compute[117331]: 2025-10-09 16:26:52.258 2 DEBUG nova.network.neutron [req-da65de4a-b428-4f1b-be64-2c2847f8f85f req-23419a31-1f5e-46db-aeab-b65c757ff146 ee15961ad2be4bada6c106ac4f69f422 c076c42271004e7aa86e84416e33f826 - - default default] [instance: da070580-b1f8-4571-b524-ce73d61665df] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.12/site-packages/nova/network/neutron.py:116
Oct 09 16:26:52 compute-0 nova_compute[117331]: 2025-10-09 16:26:52.540 2 DEBUG nova.compute.manager [req-e18c5e3b-4c40-42cc-ae9c-c9adcb157421 req-f46afa33-387f-44b4-99f1-2aae02a0e3b9 ee15961ad2be4bada6c106ac4f69f422 c076c42271004e7aa86e84416e33f826 - - default default] [instance: da070580-b1f8-4571-b524-ce73d61665df] Received event network-vif-unplugged-c06c63eb-935d-42e3-9672-1d18b6bca7ad external_instance_event /usr/lib/python3.12/site-packages/nova/compute/manager.py:11812
Oct 09 16:26:52 compute-0 nova_compute[117331]: 2025-10-09 16:26:52.540 2 DEBUG oslo_concurrency.lockutils [req-e18c5e3b-4c40-42cc-ae9c-c9adcb157421 req-f46afa33-387f-44b4-99f1-2aae02a0e3b9 ee15961ad2be4bada6c106ac4f69f422 c076c42271004e7aa86e84416e33f826 - - default default] Acquiring lock "da070580-b1f8-4571-b524-ce73d61665df-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:405
Oct 09 16:26:52 compute-0 nova_compute[117331]: 2025-10-09 16:26:52.541 2 DEBUG oslo_concurrency.lockutils [req-e18c5e3b-4c40-42cc-ae9c-c9adcb157421 req-f46afa33-387f-44b4-99f1-2aae02a0e3b9 ee15961ad2be4bada6c106ac4f69f422 c076c42271004e7aa86e84416e33f826 - - default default] Lock "da070580-b1f8-4571-b524-ce73d61665df-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:410
Oct 09 16:26:52 compute-0 nova_compute[117331]: 2025-10-09 16:26:52.541 2 DEBUG oslo_concurrency.lockutils [req-e18c5e3b-4c40-42cc-ae9c-c9adcb157421 req-f46afa33-387f-44b4-99f1-2aae02a0e3b9 ee15961ad2be4bada6c106ac4f69f422 c076c42271004e7aa86e84416e33f826 - - default default] Lock "da070580-b1f8-4571-b524-ce73d61665df-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:424
Oct 09 16:26:52 compute-0 nova_compute[117331]: 2025-10-09 16:26:52.541 2 DEBUG nova.compute.manager [req-e18c5e3b-4c40-42cc-ae9c-c9adcb157421 req-f46afa33-387f-44b4-99f1-2aae02a0e3b9 ee15961ad2be4bada6c106ac4f69f422 c076c42271004e7aa86e84416e33f826 - - default default] [instance: da070580-b1f8-4571-b524-ce73d61665df] No waiting events found dispatching network-vif-unplugged-c06c63eb-935d-42e3-9672-1d18b6bca7ad pop_instance_event /usr/lib/python3.12/site-packages/nova/compute/manager.py:344
Oct 09 16:26:52 compute-0 nova_compute[117331]: 2025-10-09 16:26:52.541 2 DEBUG nova.compute.manager [req-e18c5e3b-4c40-42cc-ae9c-c9adcb157421 req-f46afa33-387f-44b4-99f1-2aae02a0e3b9 ee15961ad2be4bada6c106ac4f69f422 c076c42271004e7aa86e84416e33f826 - - default default] [instance: da070580-b1f8-4571-b524-ce73d61665df] Received event network-vif-unplugged-c06c63eb-935d-42e3-9672-1d18b6bca7ad for instance with task_state deleting. _process_instance_event /usr/lib/python3.12/site-packages/nova/compute/manager.py:11590
Oct 09 16:26:52 compute-0 nova_compute[117331]: 2025-10-09 16:26:52.690 2 DEBUG nova.network.neutron [-] [instance: da070580-b1f8-4571-b524-ce73d61665df] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.12/site-packages/nova/network/neutron.py:116
Oct 09 16:26:52 compute-0 nova_compute[117331]: 2025-10-09 16:26:52.764 2 DEBUG nova.compute.manager [req-da65de4a-b428-4f1b-be64-2c2847f8f85f req-23419a31-1f5e-46db-aeab-b65c757ff146 ee15961ad2be4bada6c106ac4f69f422 c076c42271004e7aa86e84416e33f826 - - default default] [instance: da070580-b1f8-4571-b524-ce73d61665df] Detach interface failed, port_id=c06c63eb-935d-42e3-9672-1d18b6bca7ad, reason: Instance da070580-b1f8-4571-b524-ce73d61665df could not be found. _process_instance_vif_deleted_event /usr/lib/python3.12/site-packages/nova/compute/manager.py:11646
Oct 09 16:26:53 compute-0 nova_compute[117331]: 2025-10-09 16:26:53.199 2 INFO nova.compute.manager [-] [instance: da070580-b1f8-4571-b524-ce73d61665df] Took 1.53 seconds to deallocate network for instance.
Oct 09 16:26:53 compute-0 nova_compute[117331]: 2025-10-09 16:26:53.720 2 DEBUG oslo_concurrency.lockutils [None req-1d8bb806-218c-4a8d-915f-ab2e53d15f03 c2c0f7c2f15e4da6881dc393064b0e16 1a345bddd4804404a55948133ea8150f - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:405
Oct 09 16:26:53 compute-0 nova_compute[117331]: 2025-10-09 16:26:53.721 2 DEBUG oslo_concurrency.lockutils [None req-1d8bb806-218c-4a8d-915f-ab2e53d15f03 c2c0f7c2f15e4da6881dc393064b0e16 1a345bddd4804404a55948133ea8150f - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:410
Oct 09 16:26:53 compute-0 nova_compute[117331]: 2025-10-09 16:26:53.794 2 DEBUG nova.compute.provider_tree [None req-1d8bb806-218c-4a8d-915f-ab2e53d15f03 c2c0f7c2f15e4da6881dc393064b0e16 1a345bddd4804404a55948133ea8150f - - default default] Inventory has not changed in ProviderTree for provider: 593051b8-2000-437f-a915-2616fc8b1671 update_inventory /usr/lib/python3.12/site-packages/nova/compute/provider_tree.py:180
Oct 09 16:26:54 compute-0 sshd-session[147756]: Failed password for root from 91.224.92.108 port 55742 ssh2
Oct 09 16:26:54 compute-0 nova_compute[117331]: 2025-10-09 16:26:54.302 2 DEBUG nova.scheduler.client.report [None req-1d8bb806-218c-4a8d-915f-ab2e53d15f03 c2c0f7c2f15e4da6881dc393064b0e16 1a345bddd4804404a55948133ea8150f - - default default] Inventory has not changed for provider 593051b8-2000-437f-a915-2616fc8b1671 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 79, 'reserved': 1, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.12/site-packages/nova/scheduler/client/report.py:958
Oct 09 16:26:54 compute-0 nova_compute[117331]: 2025-10-09 16:26:54.815 2 DEBUG oslo_concurrency.lockutils [None req-1d8bb806-218c-4a8d-915f-ab2e53d15f03 c2c0f7c2f15e4da6881dc393064b0e16 1a345bddd4804404a55948133ea8150f - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 1.094s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:424
Oct 09 16:26:54 compute-0 nova_compute[117331]: 2025-10-09 16:26:54.832 2 INFO nova.scheduler.client.report [None req-1d8bb806-218c-4a8d-915f-ab2e53d15f03 c2c0f7c2f15e4da6881dc393064b0e16 1a345bddd4804404a55948133ea8150f - - default default] Deleted allocations for instance da070580-b1f8-4571-b524-ce73d61665df
Oct 09 16:26:55 compute-0 nova_compute[117331]: 2025-10-09 16:26:55.869 2 DEBUG oslo_concurrency.lockutils [None req-1d8bb806-218c-4a8d-915f-ab2e53d15f03 c2c0f7c2f15e4da6881dc393064b0e16 1a345bddd4804404a55948133ea8150f - - default default] Lock "da070580-b1f8-4571-b524-ce73d61665df" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 6.094s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:424
Oct 09 16:26:56 compute-0 nova_compute[117331]: 2025-10-09 16:26:56.115 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 09 16:26:56 compute-0 sshd-session[147756]: Received disconnect from 91.224.92.108 port 55742:11:  [preauth]
Oct 09 16:26:56 compute-0 sshd-session[147756]: Disconnected from authenticating user root 91.224.92.108 port 55742 [preauth]
Oct 09 16:26:56 compute-0 sshd-session[147756]: PAM 2 more authentication failures; logname= uid=0 euid=0 tty=ssh ruser= rhost=91.224.92.108  user=root
Oct 09 16:26:56 compute-0 nova_compute[117331]: 2025-10-09 16:26:56.664 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 09 16:26:58 compute-0 podman[147899]: 2025-10-09 16:26:58.912626669 +0000 UTC m=+0.125933877 container health_status 64fa251112d01abb34ec1bb8e22019815ddcaaaa391ce5ec91517147ac5a9994 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, vendor=Red Hat, Inc., distribution-scope=public, architecture=x86_64, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., build-date=2025-08-20T13:12:41, io.buildah.version=1.33.7, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, io.openshift.tags=minimal rhel9, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, managed_by=edpm_ansible, release=1755695350, vcs-type=git, config_id=edpm, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., com.redhat.component=ubi9-minimal-container, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., maintainer=Red Hat, Inc., io.openshift.expose-services=, url=https://catalog.redhat.com/en/search?searchType=containers, container_name=openstack_network_exporter, name=ubi9-minimal, version=9.6)
Oct 09 16:26:58 compute-0 podman[147900]: 2025-10-09 16:26:58.945674282 +0000 UTC m=+0.153561849 container health_status d615e4f24ff62fbb14be4a63e6da568ec5b22309773c1c66103b6b4de64a8a2f (image=38.102.83.66:5001/podified-master-centos10/openstack-ovn-controller:watcher_latest, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 10 Base Image, tcib_build_tag=watcher_latest, container_name=ovn_controller, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': '38.102.83.66:5001/podified-master-centos10/openstack-ovn-controller:watcher_latest', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.4, org.label-schema.schema-version=1.0, config_id=ovn_controller, org.label-schema.build-date=20251007, org.label-schema.license=GPLv2)
Oct 09 16:26:59 compute-0 podman[127775]: time="2025-10-09T16:26:59Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Oct 09 16:26:59 compute-0 podman[127775]: @ - - [09/Oct/2025:16:26:59 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 19526 "" "Go-http-client/1.1"
Oct 09 16:26:59 compute-0 podman[127775]: @ - - [09/Oct/2025:16:26:59 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 3022 "" "Go-http-client/1.1"
Oct 09 16:27:00 compute-0 nova_compute[117331]: 2025-10-09 16:27:00.179 2 DEBUG oslo_concurrency.lockutils [None req-af428dbd-e996-411a-81c2-7c86ec3e213c c2c0f7c2f15e4da6881dc393064b0e16 1a345bddd4804404a55948133ea8150f - - default default] Acquiring lock "f156c348-19a2-41d9-bf57-79cea2a84e3c" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:405
Oct 09 16:27:00 compute-0 nova_compute[117331]: 2025-10-09 16:27:00.180 2 DEBUG oslo_concurrency.lockutils [None req-af428dbd-e996-411a-81c2-7c86ec3e213c c2c0f7c2f15e4da6881dc393064b0e16 1a345bddd4804404a55948133ea8150f - - default default] Lock "f156c348-19a2-41d9-bf57-79cea2a84e3c" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:410
Oct 09 16:27:00 compute-0 nova_compute[117331]: 2025-10-09 16:27:00.687 2 DEBUG nova.compute.manager [None req-af428dbd-e996-411a-81c2-7c86ec3e213c c2c0f7c2f15e4da6881dc393064b0e16 1a345bddd4804404a55948133ea8150f - - default default] [instance: f156c348-19a2-41d9-bf57-79cea2a84e3c] Starting instance... _do_build_and_run_instance /usr/lib/python3.12/site-packages/nova/compute/manager.py:2472
Oct 09 16:27:01 compute-0 nova_compute[117331]: 2025-10-09 16:27:01.118 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 09 16:27:01 compute-0 nova_compute[117331]: 2025-10-09 16:27:01.243 2 DEBUG oslo_concurrency.lockutils [None req-af428dbd-e996-411a-81c2-7c86ec3e213c c2c0f7c2f15e4da6881dc393064b0e16 1a345bddd4804404a55948133ea8150f - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:405
Oct 09 16:27:01 compute-0 nova_compute[117331]: 2025-10-09 16:27:01.244 2 DEBUG oslo_concurrency.lockutils [None req-af428dbd-e996-411a-81c2-7c86ec3e213c c2c0f7c2f15e4da6881dc393064b0e16 1a345bddd4804404a55948133ea8150f - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:410
Oct 09 16:27:01 compute-0 nova_compute[117331]: 2025-10-09 16:27:01.252 2 DEBUG nova.virt.hardware [None req-af428dbd-e996-411a-81c2-7c86ec3e213c c2c0f7c2f15e4da6881dc393064b0e16 1a345bddd4804404a55948133ea8150f - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.12/site-packages/nova/virt/hardware.py:2528
Oct 09 16:27:01 compute-0 nova_compute[117331]: 2025-10-09 16:27:01.253 2 INFO nova.compute.claims [None req-af428dbd-e996-411a-81c2-7c86ec3e213c c2c0f7c2f15e4da6881dc393064b0e16 1a345bddd4804404a55948133ea8150f - - default default] [instance: f156c348-19a2-41d9-bf57-79cea2a84e3c] Claim successful on node compute-0.ctlplane.example.com
Oct 09 16:27:01 compute-0 openstack_network_exporter[129925]: ERROR   16:27:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Oct 09 16:27:01 compute-0 openstack_network_exporter[129925]: ERROR   16:27:01 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Oct 09 16:27:01 compute-0 openstack_network_exporter[129925]: ERROR   16:27:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Oct 09 16:27:01 compute-0 openstack_network_exporter[129925]: ERROR   16:27:01 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Oct 09 16:27:01 compute-0 openstack_network_exporter[129925]: 
Oct 09 16:27:01 compute-0 openstack_network_exporter[129925]: ERROR   16:27:01 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Oct 09 16:27:01 compute-0 openstack_network_exporter[129925]: 
Oct 09 16:27:01 compute-0 nova_compute[117331]: 2025-10-09 16:27:01.709 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 09 16:27:02 compute-0 unix_chkpwd[147946]: password check failed for user (root)
Oct 09 16:27:02 compute-0 sshd-session[147943]: pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=78.128.112.74  user=root
Oct 09 16:27:02 compute-0 nova_compute[117331]: 2025-10-09 16:27:02.311 2 DEBUG nova.compute.provider_tree [None req-af428dbd-e996-411a-81c2-7c86ec3e213c c2c0f7c2f15e4da6881dc393064b0e16 1a345bddd4804404a55948133ea8150f - - default default] Inventory has not changed in ProviderTree for provider: 593051b8-2000-437f-a915-2616fc8b1671 update_inventory /usr/lib/python3.12/site-packages/nova/compute/provider_tree.py:180
Oct 09 16:27:02 compute-0 nova_compute[117331]: 2025-10-09 16:27:02.819 2 DEBUG nova.scheduler.client.report [None req-af428dbd-e996-411a-81c2-7c86ec3e213c c2c0f7c2f15e4da6881dc393064b0e16 1a345bddd4804404a55948133ea8150f - - default default] Inventory has not changed for provider 593051b8-2000-437f-a915-2616fc8b1671 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 79, 'reserved': 1, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.12/site-packages/nova/scheduler/client/report.py:958
Oct 09 16:27:03 compute-0 nova_compute[117331]: 2025-10-09 16:27:03.331 2 DEBUG oslo_concurrency.lockutils [None req-af428dbd-e996-411a-81c2-7c86ec3e213c c2c0f7c2f15e4da6881dc393064b0e16 1a345bddd4804404a55948133ea8150f - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 2.087s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:424
Oct 09 16:27:03 compute-0 nova_compute[117331]: 2025-10-09 16:27:03.332 2 DEBUG nova.compute.manager [None req-af428dbd-e996-411a-81c2-7c86ec3e213c c2c0f7c2f15e4da6881dc393064b0e16 1a345bddd4804404a55948133ea8150f - - default default] [instance: f156c348-19a2-41d9-bf57-79cea2a84e3c] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.12/site-packages/nova/compute/manager.py:2869
Oct 09 16:27:03 compute-0 nova_compute[117331]: 2025-10-09 16:27:03.842 2 DEBUG nova.compute.manager [None req-af428dbd-e996-411a-81c2-7c86ec3e213c c2c0f7c2f15e4da6881dc393064b0e16 1a345bddd4804404a55948133ea8150f - - default default] [instance: f156c348-19a2-41d9-bf57-79cea2a84e3c] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.12/site-packages/nova/compute/manager.py:2016
Oct 09 16:27:03 compute-0 nova_compute[117331]: 2025-10-09 16:27:03.843 2 DEBUG nova.network.neutron [None req-af428dbd-e996-411a-81c2-7c86ec3e213c c2c0f7c2f15e4da6881dc393064b0e16 1a345bddd4804404a55948133ea8150f - - default default] [instance: f156c348-19a2-41d9-bf57-79cea2a84e3c] allocate_for_instance() allocate_for_instance /usr/lib/python3.12/site-packages/nova/network/neutron.py:1208
Oct 09 16:27:03 compute-0 nova_compute[117331]: 2025-10-09 16:27:03.843 2 WARNING neutronclient.v2_0.client [None req-af428dbd-e996-411a-81c2-7c86ec3e213c c2c0f7c2f15e4da6881dc393064b0e16 1a345bddd4804404a55948133ea8150f - - default default] The python binding code in neutronclient is deprecated in favor of OpenstackSDK, please use that as this will be removed in a future release.
Oct 09 16:27:03 compute-0 nova_compute[117331]: 2025-10-09 16:27:03.844 2 WARNING neutronclient.v2_0.client [None req-af428dbd-e996-411a-81c2-7c86ec3e213c c2c0f7c2f15e4da6881dc393064b0e16 1a345bddd4804404a55948133ea8150f - - default default] The python binding code in neutronclient is deprecated in favor of OpenstackSDK, please use that as this will be removed in a future release.
Oct 09 16:27:04 compute-0 ovn_metadata_agent[28608]: 2025-10-09 16:27:04.277 28613 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=18, options={'arp_ns_explicit_output': 'true', 'fdb_removal_limit': '0', 'ignore_lsp_down': 'false', 'mac_binding_removal_limit': '0', 'mac_prefix': '4a:65:5f', 'max_tunid': '16711680', 'northd_internal_version': '24.09.4-20.37.0-77.8', 'svc_monitor_mac': '32:24:1e:e3:5d:e8'}, ipsec=False) old=SB_Global(nb_cfg=17) matches /usr/lib/python3.12/site-packages/ovsdbapp/backend/ovs_idl/event.py:55
Oct 09 16:27:04 compute-0 nova_compute[117331]: 2025-10-09 16:27:04.278 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 09 16:27:04 compute-0 ovn_metadata_agent[28608]: 2025-10-09 16:27:04.278 28613 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 10 seconds run /usr/lib/python3.12/site-packages/neutron/agent/ovn/metadata/agent.py:367
Oct 09 16:27:04 compute-0 nova_compute[117331]: 2025-10-09 16:27:04.351 2 INFO nova.virt.libvirt.driver [None req-af428dbd-e996-411a-81c2-7c86ec3e213c c2c0f7c2f15e4da6881dc393064b0e16 1a345bddd4804404a55948133ea8150f - - default default] [instance: f156c348-19a2-41d9-bf57-79cea2a84e3c] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Oct 09 16:27:04 compute-0 sshd-session[147943]: Failed password for root from 78.128.112.74 port 43174 ssh2
Oct 09 16:27:04 compute-0 nova_compute[117331]: 2025-10-09 16:27:04.857 2 DEBUG nova.compute.manager [None req-af428dbd-e996-411a-81c2-7c86ec3e213c c2c0f7c2f15e4da6881dc393064b0e16 1a345bddd4804404a55948133ea8150f - - default default] [instance: f156c348-19a2-41d9-bf57-79cea2a84e3c] Start building block device mappings for instance. _build_resources /usr/lib/python3.12/site-packages/nova/compute/manager.py:2904
Oct 09 16:27:04 compute-0 nova_compute[117331]: 2025-10-09 16:27:04.972 2 DEBUG nova.network.neutron [None req-af428dbd-e996-411a-81c2-7c86ec3e213c c2c0f7c2f15e4da6881dc393064b0e16 1a345bddd4804404a55948133ea8150f - - default default] [instance: f156c348-19a2-41d9-bf57-79cea2a84e3c] Successfully created port: 9ce16b2c-f453-4ee6-ab7f-159c6386c5d3 _create_port_minimal /usr/lib/python3.12/site-packages/nova/network/neutron.py:550
Oct 09 16:27:05 compute-0 nova_compute[117331]: 2025-10-09 16:27:05.878 2 DEBUG nova.compute.manager [None req-af428dbd-e996-411a-81c2-7c86ec3e213c c2c0f7c2f15e4da6881dc393064b0e16 1a345bddd4804404a55948133ea8150f - - default default] [instance: f156c348-19a2-41d9-bf57-79cea2a84e3c] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.12/site-packages/nova/compute/manager.py:2678
Oct 09 16:27:05 compute-0 nova_compute[117331]: 2025-10-09 16:27:05.880 2 DEBUG nova.virt.libvirt.driver [None req-af428dbd-e996-411a-81c2-7c86ec3e213c c2c0f7c2f15e4da6881dc393064b0e16 1a345bddd4804404a55948133ea8150f - - default default] [instance: f156c348-19a2-41d9-bf57-79cea2a84e3c] Creating instance directory _create_image /usr/lib/python3.12/site-packages/nova/virt/libvirt/driver.py:5138
Oct 09 16:27:05 compute-0 nova_compute[117331]: 2025-10-09 16:27:05.881 2 INFO nova.virt.libvirt.driver [None req-af428dbd-e996-411a-81c2-7c86ec3e213c c2c0f7c2f15e4da6881dc393064b0e16 1a345bddd4804404a55948133ea8150f - - default default] [instance: f156c348-19a2-41d9-bf57-79cea2a84e3c] Creating image(s)
Oct 09 16:27:05 compute-0 nova_compute[117331]: 2025-10-09 16:27:05.881 2 DEBUG oslo_concurrency.lockutils [None req-af428dbd-e996-411a-81c2-7c86ec3e213c c2c0f7c2f15e4da6881dc393064b0e16 1a345bddd4804404a55948133ea8150f - - default default] Acquiring lock "/var/lib/nova/instances/f156c348-19a2-41d9-bf57-79cea2a84e3c/disk.info" by "nova.virt.libvirt.imagebackend.Image.resolve_driver_format.<locals>.write_to_disk_info_file" inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:405
Oct 09 16:27:05 compute-0 nova_compute[117331]: 2025-10-09 16:27:05.882 2 DEBUG oslo_concurrency.lockutils [None req-af428dbd-e996-411a-81c2-7c86ec3e213c c2c0f7c2f15e4da6881dc393064b0e16 1a345bddd4804404a55948133ea8150f - - default default] Lock "/var/lib/nova/instances/f156c348-19a2-41d9-bf57-79cea2a84e3c/disk.info" acquired by "nova.virt.libvirt.imagebackend.Image.resolve_driver_format.<locals>.write_to_disk_info_file" :: waited 0.000s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:410
Oct 09 16:27:05 compute-0 nova_compute[117331]: 2025-10-09 16:27:05.883 2 DEBUG oslo_concurrency.lockutils [None req-af428dbd-e996-411a-81c2-7c86ec3e213c c2c0f7c2f15e4da6881dc393064b0e16 1a345bddd4804404a55948133ea8150f - - default default] Lock "/var/lib/nova/instances/f156c348-19a2-41d9-bf57-79cea2a84e3c/disk.info" "released" by "nova.virt.libvirt.imagebackend.Image.resolve_driver_format.<locals>.write_to_disk_info_file" :: held 0.001s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:424
Oct 09 16:27:05 compute-0 nova_compute[117331]: 2025-10-09 16:27:05.883 2 DEBUG oslo_utils.imageutils.format_inspector [None req-af428dbd-e996-411a-81c2-7c86ec3e213c c2c0f7c2f15e4da6881dc393064b0e16 1a345bddd4804404a55948133ea8150f - - default default] Format inspector for vmdk does not match, excluding from consideration (Signature KDMV not found: b'\xebH\x90\x00') _process_chunk /usr/lib/python3.12/site-packages/oslo_utils/imageutils/format_inspector.py:1365
Oct 09 16:27:05 compute-0 nova_compute[117331]: 2025-10-09 16:27:05.888 2 DEBUG oslo_utils.imageutils.format_inspector [None req-af428dbd-e996-411a-81c2-7c86ec3e213c c2c0f7c2f15e4da6881dc393064b0e16 1a345bddd4804404a55948133ea8150f - - default default] Format inspector for vhdx does not match, excluding from consideration (Region signature not found at 30000) _process_chunk /usr/lib/python3.12/site-packages/oslo_utils/imageutils/format_inspector.py:1365
Oct 09 16:27:05 compute-0 nova_compute[117331]: 2025-10-09 16:27:05.891 2 DEBUG oslo_concurrency.processutils [None req-af428dbd-e996-411a-81c2-7c86ec3e213c c2c0f7c2f15e4da6881dc393064b0e16 1a345bddd4804404a55948133ea8150f - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/cea3dacfdea0f3734ae526b812744e847bc2d356 --force-share --output=json execute /usr/lib/python3.12/site-packages/oslo_concurrency/processutils.py:349
Oct 09 16:27:05 compute-0 nova_compute[117331]: 2025-10-09 16:27:05.959 2 DEBUG oslo_concurrency.processutils [None req-af428dbd-e996-411a-81c2-7c86ec3e213c c2c0f7c2f15e4da6881dc393064b0e16 1a345bddd4804404a55948133ea8150f - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/cea3dacfdea0f3734ae526b812744e847bc2d356 --force-share --output=json" returned: 0 in 0.068s execute /usr/lib/python3.12/site-packages/oslo_concurrency/processutils.py:372
Oct 09 16:27:05 compute-0 nova_compute[117331]: 2025-10-09 16:27:05.960 2 DEBUG oslo_concurrency.lockutils [None req-af428dbd-e996-411a-81c2-7c86ec3e213c c2c0f7c2f15e4da6881dc393064b0e16 1a345bddd4804404a55948133ea8150f - - default default] Acquiring lock "cea3dacfdea0f3734ae526b812744e847bc2d356" by "nova.virt.libvirt.imagebackend.Qcow2.create_image.<locals>.create_qcow2_image" inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:405
Oct 09 16:27:05 compute-0 nova_compute[117331]: 2025-10-09 16:27:05.961 2 DEBUG oslo_concurrency.lockutils [None req-af428dbd-e996-411a-81c2-7c86ec3e213c c2c0f7c2f15e4da6881dc393064b0e16 1a345bddd4804404a55948133ea8150f - - default default] Lock "cea3dacfdea0f3734ae526b812744e847bc2d356" acquired by "nova.virt.libvirt.imagebackend.Qcow2.create_image.<locals>.create_qcow2_image" :: waited 0.001s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:410
Oct 09 16:27:05 compute-0 nova_compute[117331]: 2025-10-09 16:27:05.961 2 DEBUG oslo_utils.imageutils.format_inspector [None req-af428dbd-e996-411a-81c2-7c86ec3e213c c2c0f7c2f15e4da6881dc393064b0e16 1a345bddd4804404a55948133ea8150f - - default default] Format inspector for vmdk does not match, excluding from consideration (Signature KDMV not found: b'\xebH\x90\x00') _process_chunk /usr/lib/python3.12/site-packages/oslo_utils/imageutils/format_inspector.py:1365
Oct 09 16:27:05 compute-0 nova_compute[117331]: 2025-10-09 16:27:05.965 2 DEBUG oslo_utils.imageutils.format_inspector [None req-af428dbd-e996-411a-81c2-7c86ec3e213c c2c0f7c2f15e4da6881dc393064b0e16 1a345bddd4804404a55948133ea8150f - - default default] Format inspector for vhdx does not match, excluding from consideration (Region signature not found at 30000) _process_chunk /usr/lib/python3.12/site-packages/oslo_utils/imageutils/format_inspector.py:1365
Oct 09 16:27:05 compute-0 nova_compute[117331]: 2025-10-09 16:27:05.966 2 DEBUG oslo_concurrency.processutils [None req-af428dbd-e996-411a-81c2-7c86ec3e213c c2c0f7c2f15e4da6881dc393064b0e16 1a345bddd4804404a55948133ea8150f - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/cea3dacfdea0f3734ae526b812744e847bc2d356 --force-share --output=json execute /usr/lib/python3.12/site-packages/oslo_concurrency/processutils.py:349
Oct 09 16:27:06 compute-0 nova_compute[117331]: 2025-10-09 16:27:06.017 2 DEBUG oslo_concurrency.processutils [None req-af428dbd-e996-411a-81c2-7c86ec3e213c c2c0f7c2f15e4da6881dc393064b0e16 1a345bddd4804404a55948133ea8150f - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/cea3dacfdea0f3734ae526b812744e847bc2d356 --force-share --output=json" returned: 0 in 0.051s execute /usr/lib/python3.12/site-packages/oslo_concurrency/processutils.py:372
Oct 09 16:27:06 compute-0 nova_compute[117331]: 2025-10-09 16:27:06.018 2 DEBUG oslo_concurrency.processutils [None req-af428dbd-e996-411a-81c2-7c86ec3e213c c2c0f7c2f15e4da6881dc393064b0e16 1a345bddd4804404a55948133ea8150f - - default default] Running cmd (subprocess): env LC_ALL=C LANG=C qemu-img create -f qcow2 -o backing_file=/var/lib/nova/instances/_base/cea3dacfdea0f3734ae526b812744e847bc2d356,backing_fmt=raw /var/lib/nova/instances/f156c348-19a2-41d9-bf57-79cea2a84e3c/disk 1073741824 execute /usr/lib/python3.12/site-packages/oslo_concurrency/processutils.py:349
Oct 09 16:27:06 compute-0 nova_compute[117331]: 2025-10-09 16:27:06.062 2 DEBUG oslo_concurrency.processutils [None req-af428dbd-e996-411a-81c2-7c86ec3e213c c2c0f7c2f15e4da6881dc393064b0e16 1a345bddd4804404a55948133ea8150f - - default default] CMD "env LC_ALL=C LANG=C qemu-img create -f qcow2 -o backing_file=/var/lib/nova/instances/_base/cea3dacfdea0f3734ae526b812744e847bc2d356,backing_fmt=raw /var/lib/nova/instances/f156c348-19a2-41d9-bf57-79cea2a84e3c/disk 1073741824" returned: 0 in 0.044s execute /usr/lib/python3.12/site-packages/oslo_concurrency/processutils.py:372
Oct 09 16:27:06 compute-0 nova_compute[117331]: 2025-10-09 16:27:06.063 2 DEBUG oslo_concurrency.lockutils [None req-af428dbd-e996-411a-81c2-7c86ec3e213c c2c0f7c2f15e4da6881dc393064b0e16 1a345bddd4804404a55948133ea8150f - - default default] Lock "cea3dacfdea0f3734ae526b812744e847bc2d356" "released" by "nova.virt.libvirt.imagebackend.Qcow2.create_image.<locals>.create_qcow2_image" :: held 0.103s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:424
Oct 09 16:27:06 compute-0 nova_compute[117331]: 2025-10-09 16:27:06.064 2 DEBUG oslo_concurrency.processutils [None req-af428dbd-e996-411a-81c2-7c86ec3e213c c2c0f7c2f15e4da6881dc393064b0e16 1a345bddd4804404a55948133ea8150f - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/cea3dacfdea0f3734ae526b812744e847bc2d356 --force-share --output=json execute /usr/lib/python3.12/site-packages/oslo_concurrency/processutils.py:349
Oct 09 16:27:06 compute-0 nova_compute[117331]: 2025-10-09 16:27:06.120 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 09 16:27:06 compute-0 nova_compute[117331]: 2025-10-09 16:27:06.127 2 DEBUG oslo_concurrency.processutils [None req-af428dbd-e996-411a-81c2-7c86ec3e213c c2c0f7c2f15e4da6881dc393064b0e16 1a345bddd4804404a55948133ea8150f - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/cea3dacfdea0f3734ae526b812744e847bc2d356 --force-share --output=json" returned: 0 in 0.063s execute /usr/lib/python3.12/site-packages/oslo_concurrency/processutils.py:372
Oct 09 16:27:06 compute-0 nova_compute[117331]: 2025-10-09 16:27:06.128 2 DEBUG nova.virt.disk.api [None req-af428dbd-e996-411a-81c2-7c86ec3e213c c2c0f7c2f15e4da6881dc393064b0e16 1a345bddd4804404a55948133ea8150f - - default default] Checking if we can resize image /var/lib/nova/instances/f156c348-19a2-41d9-bf57-79cea2a84e3c/disk. size=1073741824 can_resize_image /usr/lib/python3.12/site-packages/nova/virt/disk/api.py:164
Oct 09 16:27:06 compute-0 nova_compute[117331]: 2025-10-09 16:27:06.128 2 DEBUG oslo_concurrency.processutils [None req-af428dbd-e996-411a-81c2-7c86ec3e213c c2c0f7c2f15e4da6881dc393064b0e16 1a345bddd4804404a55948133ea8150f - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/f156c348-19a2-41d9-bf57-79cea2a84e3c/disk --force-share --output=json execute /usr/lib/python3.12/site-packages/oslo_concurrency/processutils.py:349
Oct 09 16:27:06 compute-0 nova_compute[117331]: 2025-10-09 16:27:06.180 2 DEBUG oslo_concurrency.processutils [None req-af428dbd-e996-411a-81c2-7c86ec3e213c c2c0f7c2f15e4da6881dc393064b0e16 1a345bddd4804404a55948133ea8150f - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/f156c348-19a2-41d9-bf57-79cea2a84e3c/disk --force-share --output=json" returned: 0 in 0.052s execute /usr/lib/python3.12/site-packages/oslo_concurrency/processutils.py:372
Oct 09 16:27:06 compute-0 nova_compute[117331]: 2025-10-09 16:27:06.181 2 DEBUG nova.virt.disk.api [None req-af428dbd-e996-411a-81c2-7c86ec3e213c c2c0f7c2f15e4da6881dc393064b0e16 1a345bddd4804404a55948133ea8150f - - default default] Cannot resize image /var/lib/nova/instances/f156c348-19a2-41d9-bf57-79cea2a84e3c/disk to a smaller size. can_resize_image /usr/lib/python3.12/site-packages/nova/virt/disk/api.py:170
Oct 09 16:27:06 compute-0 nova_compute[117331]: 2025-10-09 16:27:06.182 2 DEBUG nova.virt.libvirt.driver [None req-af428dbd-e996-411a-81c2-7c86ec3e213c c2c0f7c2f15e4da6881dc393064b0e16 1a345bddd4804404a55948133ea8150f - - default default] [instance: f156c348-19a2-41d9-bf57-79cea2a84e3c] Created local disks _create_image /usr/lib/python3.12/site-packages/nova/virt/libvirt/driver.py:5270
Oct 09 16:27:06 compute-0 nova_compute[117331]: 2025-10-09 16:27:06.182 2 DEBUG nova.virt.libvirt.driver [None req-af428dbd-e996-411a-81c2-7c86ec3e213c c2c0f7c2f15e4da6881dc393064b0e16 1a345bddd4804404a55948133ea8150f - - default default] [instance: f156c348-19a2-41d9-bf57-79cea2a84e3c] Ensure instance console log exists: /var/lib/nova/instances/f156c348-19a2-41d9-bf57-79cea2a84e3c/console.log _ensure_console_log_for_instance /usr/lib/python3.12/site-packages/nova/virt/libvirt/driver.py:5017
Oct 09 16:27:06 compute-0 nova_compute[117331]: 2025-10-09 16:27:06.183 2 DEBUG oslo_concurrency.lockutils [None req-af428dbd-e996-411a-81c2-7c86ec3e213c c2c0f7c2f15e4da6881dc393064b0e16 1a345bddd4804404a55948133ea8150f - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:405
Oct 09 16:27:06 compute-0 nova_compute[117331]: 2025-10-09 16:27:06.183 2 DEBUG oslo_concurrency.lockutils [None req-af428dbd-e996-411a-81c2-7c86ec3e213c c2c0f7c2f15e4da6881dc393064b0e16 1a345bddd4804404a55948133ea8150f - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:410
Oct 09 16:27:06 compute-0 nova_compute[117331]: 2025-10-09 16:27:06.183 2 DEBUG oslo_concurrency.lockutils [None req-af428dbd-e996-411a-81c2-7c86ec3e213c c2c0f7c2f15e4da6881dc393064b0e16 1a345bddd4804404a55948133ea8150f - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:424
Oct 09 16:27:06 compute-0 nova_compute[117331]: 2025-10-09 16:27:06.190 2 DEBUG nova.network.neutron [None req-af428dbd-e996-411a-81c2-7c86ec3e213c c2c0f7c2f15e4da6881dc393064b0e16 1a345bddd4804404a55948133ea8150f - - default default] [instance: f156c348-19a2-41d9-bf57-79cea2a84e3c] Successfully updated port: 9ce16b2c-f453-4ee6-ab7f-159c6386c5d3 _update_port /usr/lib/python3.12/site-packages/nova/network/neutron.py:588
Oct 09 16:27:06 compute-0 nova_compute[117331]: 2025-10-09 16:27:06.233 2 DEBUG nova.compute.manager [req-cfc60a22-807d-45c7-88b5-8d4d82450eae req-a0fb1b1b-efca-452b-a8d2-22347b2f50ce ee15961ad2be4bada6c106ac4f69f422 c076c42271004e7aa86e84416e33f826 - - default default] [instance: f156c348-19a2-41d9-bf57-79cea2a84e3c] Received event network-changed-9ce16b2c-f453-4ee6-ab7f-159c6386c5d3 external_instance_event /usr/lib/python3.12/site-packages/nova/compute/manager.py:11812
Oct 09 16:27:06 compute-0 nova_compute[117331]: 2025-10-09 16:27:06.233 2 DEBUG nova.compute.manager [req-cfc60a22-807d-45c7-88b5-8d4d82450eae req-a0fb1b1b-efca-452b-a8d2-22347b2f50ce ee15961ad2be4bada6c106ac4f69f422 c076c42271004e7aa86e84416e33f826 - - default default] [instance: f156c348-19a2-41d9-bf57-79cea2a84e3c] Refreshing instance network info cache due to event network-changed-9ce16b2c-f453-4ee6-ab7f-159c6386c5d3. external_instance_event /usr/lib/python3.12/site-packages/nova/compute/manager.py:11817
Oct 09 16:27:06 compute-0 nova_compute[117331]: 2025-10-09 16:27:06.233 2 DEBUG oslo_concurrency.lockutils [req-cfc60a22-807d-45c7-88b5-8d4d82450eae req-a0fb1b1b-efca-452b-a8d2-22347b2f50ce ee15961ad2be4bada6c106ac4f69f422 c076c42271004e7aa86e84416e33f826 - - default default] Acquiring lock "refresh_cache-f156c348-19a2-41d9-bf57-79cea2a84e3c" lock /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:313
Oct 09 16:27:06 compute-0 nova_compute[117331]: 2025-10-09 16:27:06.234 2 DEBUG oslo_concurrency.lockutils [req-cfc60a22-807d-45c7-88b5-8d4d82450eae req-a0fb1b1b-efca-452b-a8d2-22347b2f50ce ee15961ad2be4bada6c106ac4f69f422 c076c42271004e7aa86e84416e33f826 - - default default] Acquired lock "refresh_cache-f156c348-19a2-41d9-bf57-79cea2a84e3c" lock /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:316
Oct 09 16:27:06 compute-0 nova_compute[117331]: 2025-10-09 16:27:06.234 2 DEBUG nova.network.neutron [req-cfc60a22-807d-45c7-88b5-8d4d82450eae req-a0fb1b1b-efca-452b-a8d2-22347b2f50ce ee15961ad2be4bada6c106ac4f69f422 c076c42271004e7aa86e84416e33f826 - - default default] [instance: f156c348-19a2-41d9-bf57-79cea2a84e3c] Refreshing network info cache for port 9ce16b2c-f453-4ee6-ab7f-159c6386c5d3 _get_instance_nw_info /usr/lib/python3.12/site-packages/nova/network/neutron.py:2067
Oct 09 16:27:06 compute-0 sshd-session[147943]: Connection closed by authenticating user root 78.128.112.74 port 43174 [preauth]
Oct 09 16:27:06 compute-0 nova_compute[117331]: 2025-10-09 16:27:06.696 2 DEBUG oslo_concurrency.lockutils [None req-af428dbd-e996-411a-81c2-7c86ec3e213c c2c0f7c2f15e4da6881dc393064b0e16 1a345bddd4804404a55948133ea8150f - - default default] Acquiring lock "refresh_cache-f156c348-19a2-41d9-bf57-79cea2a84e3c" lock /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:313
Oct 09 16:27:06 compute-0 nova_compute[117331]: 2025-10-09 16:27:06.711 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 09 16:27:06 compute-0 nova_compute[117331]: 2025-10-09 16:27:06.739 2 WARNING neutronclient.v2_0.client [req-cfc60a22-807d-45c7-88b5-8d4d82450eae req-a0fb1b1b-efca-452b-a8d2-22347b2f50ce ee15961ad2be4bada6c106ac4f69f422 c076c42271004e7aa86e84416e33f826 - - default default] The python binding code in neutronclient is deprecated in favor of OpenstackSDK, please use that as this will be removed in a future release.
Oct 09 16:27:06 compute-0 nova_compute[117331]: 2025-10-09 16:27:06.981 2 DEBUG nova.network.neutron [req-cfc60a22-807d-45c7-88b5-8d4d82450eae req-a0fb1b1b-efca-452b-a8d2-22347b2f50ce ee15961ad2be4bada6c106ac4f69f422 c076c42271004e7aa86e84416e33f826 - - default default] [instance: f156c348-19a2-41d9-bf57-79cea2a84e3c] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.12/site-packages/nova/network/neutron.py:3383
Oct 09 16:27:07 compute-0 nova_compute[117331]: 2025-10-09 16:27:07.115 2 DEBUG nova.network.neutron [req-cfc60a22-807d-45c7-88b5-8d4d82450eae req-a0fb1b1b-efca-452b-a8d2-22347b2f50ce ee15961ad2be4bada6c106ac4f69f422 c076c42271004e7aa86e84416e33f826 - - default default] [instance: f156c348-19a2-41d9-bf57-79cea2a84e3c] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.12/site-packages/nova/network/neutron.py:116
Oct 09 16:27:07 compute-0 nova_compute[117331]: 2025-10-09 16:27:07.621 2 DEBUG oslo_concurrency.lockutils [req-cfc60a22-807d-45c7-88b5-8d4d82450eae req-a0fb1b1b-efca-452b-a8d2-22347b2f50ce ee15961ad2be4bada6c106ac4f69f422 c076c42271004e7aa86e84416e33f826 - - default default] Releasing lock "refresh_cache-f156c348-19a2-41d9-bf57-79cea2a84e3c" lock /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:334
Oct 09 16:27:07 compute-0 nova_compute[117331]: 2025-10-09 16:27:07.622 2 DEBUG oslo_concurrency.lockutils [None req-af428dbd-e996-411a-81c2-7c86ec3e213c c2c0f7c2f15e4da6881dc393064b0e16 1a345bddd4804404a55948133ea8150f - - default default] Acquired lock "refresh_cache-f156c348-19a2-41d9-bf57-79cea2a84e3c" lock /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:316
Oct 09 16:27:07 compute-0 nova_compute[117331]: 2025-10-09 16:27:07.622 2 DEBUG nova.network.neutron [None req-af428dbd-e996-411a-81c2-7c86ec3e213c c2c0f7c2f15e4da6881dc393064b0e16 1a345bddd4804404a55948133ea8150f - - default default] [instance: f156c348-19a2-41d9-bf57-79cea2a84e3c] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.12/site-packages/nova/network/neutron.py:2070
Oct 09 16:27:08 compute-0 nova_compute[117331]: 2025-10-09 16:27:08.635 2 DEBUG nova.network.neutron [None req-af428dbd-e996-411a-81c2-7c86ec3e213c c2c0f7c2f15e4da6881dc393064b0e16 1a345bddd4804404a55948133ea8150f - - default default] [instance: f156c348-19a2-41d9-bf57-79cea2a84e3c] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.12/site-packages/nova/network/neutron.py:3383
Oct 09 16:27:08 compute-0 nova_compute[117331]: 2025-10-09 16:27:08.880 2 WARNING neutronclient.v2_0.client [None req-af428dbd-e996-411a-81c2-7c86ec3e213c c2c0f7c2f15e4da6881dc393064b0e16 1a345bddd4804404a55948133ea8150f - - default default] The python binding code in neutronclient is deprecated in favor of OpenstackSDK, please use that as this will be removed in a future release.
Oct 09 16:27:09 compute-0 nova_compute[117331]: 2025-10-09 16:27:09.082 2 DEBUG nova.network.neutron [None req-af428dbd-e996-411a-81c2-7c86ec3e213c c2c0f7c2f15e4da6881dc393064b0e16 1a345bddd4804404a55948133ea8150f - - default default] [instance: f156c348-19a2-41d9-bf57-79cea2a84e3c] Updating instance_info_cache with network_info: [{"id": "9ce16b2c-f453-4ee6-ab7f-159c6386c5d3", "address": "fa:16:3e:ba:95:b3", "network": {"id": "18ef4241-0151-441c-abdc-42d4b3a21b30", "bridge": "br-int", "label": "tempest-TestExecuteNodeResourceConsolidationStrategy-129271777-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c2c321aa8a494a8e8b49c81b79e3ceca", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap9ce16b2c-f4", "ovs_interfaceid": "9ce16b2c-f453-4ee6-ab7f-159c6386c5d3", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.12/site-packages/nova/network/neutron.py:116
Oct 09 16:27:09 compute-0 nova_compute[117331]: 2025-10-09 16:27:09.589 2 DEBUG oslo_concurrency.lockutils [None req-af428dbd-e996-411a-81c2-7c86ec3e213c c2c0f7c2f15e4da6881dc393064b0e16 1a345bddd4804404a55948133ea8150f - - default default] Releasing lock "refresh_cache-f156c348-19a2-41d9-bf57-79cea2a84e3c" lock /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:334
Oct 09 16:27:09 compute-0 nova_compute[117331]: 2025-10-09 16:27:09.590 2 DEBUG nova.compute.manager [None req-af428dbd-e996-411a-81c2-7c86ec3e213c c2c0f7c2f15e4da6881dc393064b0e16 1a345bddd4804404a55948133ea8150f - - default default] [instance: f156c348-19a2-41d9-bf57-79cea2a84e3c] Instance network_info: |[{"id": "9ce16b2c-f453-4ee6-ab7f-159c6386c5d3", "address": "fa:16:3e:ba:95:b3", "network": {"id": "18ef4241-0151-441c-abdc-42d4b3a21b30", "bridge": "br-int", "label": "tempest-TestExecuteNodeResourceConsolidationStrategy-129271777-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c2c321aa8a494a8e8b49c81b79e3ceca", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap9ce16b2c-f4", "ovs_interfaceid": "9ce16b2c-f453-4ee6-ab7f-159c6386c5d3", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.12/site-packages/nova/compute/manager.py:2031
Oct 09 16:27:09 compute-0 nova_compute[117331]: 2025-10-09 16:27:09.593 2 DEBUG nova.virt.libvirt.driver [None req-af428dbd-e996-411a-81c2-7c86ec3e213c c2c0f7c2f15e4da6881dc393064b0e16 1a345bddd4804404a55948133ea8150f - - default default] [instance: f156c348-19a2-41d9-bf57-79cea2a84e3c] Start _get_guest_xml network_info=[{"id": "9ce16b2c-f453-4ee6-ab7f-159c6386c5d3", "address": "fa:16:3e:ba:95:b3", "network": {"id": "18ef4241-0151-441c-abdc-42d4b3a21b30", "bridge": "br-int", "label": "tempest-TestExecuteNodeResourceConsolidationStrategy-129271777-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c2c321aa8a494a8e8b49c81b79e3ceca", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap9ce16b2c-f4", "ovs_interfaceid": "9ce16b2c-f453-4ee6-ab7f-159c6386c5d3", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-09T16:07:02Z,direct_url=<?>,disk_format='qcow2',id=b7d6e0af-25e4-4227-9dc6-43143898ceee,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='b30e8cf5e10742f190212b4cb97ce2c9',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-09T16:07:04Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'device_type': 'disk', 'encrypted': False, 'size': 0, 'encryption_format': None, 'guest_format': None, 'boot_index': 0, 'device_name': '/dev/vda', 'disk_bus': 'virtio', 'encryption_options': None, 'encryption_secret_uuid': None, 'image_id': 'b7d6e0af-25e4-4227-9dc6-43143898ceee'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None}share_info=None _get_guest_xml /usr/lib/python3.12/site-packages/nova/virt/libvirt/driver.py:8046
Oct 09 16:27:09 compute-0 nova_compute[117331]: 2025-10-09 16:27:09.597 2 WARNING nova.virt.libvirt.driver [None req-af428dbd-e996-411a-81c2-7c86ec3e213c c2c0f7c2f15e4da6881dc393064b0e16 1a345bddd4804404a55948133ea8150f - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Oct 09 16:27:09 compute-0 nova_compute[117331]: 2025-10-09 16:27:09.598 2 DEBUG nova.virt.driver [None req-af428dbd-e996-411a-81c2-7c86ec3e213c c2c0f7c2f15e4da6881dc393064b0e16 1a345bddd4804404a55948133ea8150f - - default default] InstanceDriverMetadata: InstanceDriverMetadata(root_type='image', root_id='b7d6e0af-25e4-4227-9dc6-43143898ceee', instance_meta=NovaInstanceMeta(name='tempest-TestExecuteNodeResourceConsolidationStrategy-server-1886277122', uuid='f156c348-19a2-41d9-bf57-79cea2a84e3c'), owner=OwnerMeta(userid='c2c0f7c2f15e4da6881dc393064b0e16', username='tempest-TestExecuteNodeResourceConsolidationStrategy-1915104332-project-admin', projectid='1a345bddd4804404a55948133ea8150f', projectname='tempest-TestExecuteNodeResourceConsolidationStrategy-1915104332'), image=ImageMeta(id='b7d6e0af-25e4-4227-9dc6-43143898ceee', name=None, container_format='bare', disk_format='qcow2', min_disk=1, min_ram=0, properties={'hw_rng_model': 'virtio'}), flavor=FlavorMeta(name='m1.nano', flavorid='5aeb5bf9-70f3-4ab6-b418-2c3319a7d2b3', memory_mb=128, vcpus=1, root_gb=1, ephemeral_gb=0, extra_specs={'hw_rng:allowed': 'True'}, swap=0), network_info=[{"id": "9ce16b2c-f453-4ee6-ab7f-159c6386c5d3", "address": "fa:16:3e:ba:95:b3", "network": {"id": "18ef4241-0151-441c-abdc-42d4b3a21b30", "bridge": "br-int", "label": "tempest-TestExecuteNodeResourceConsolidationStrategy-129271777-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c2c321aa8a494a8e8b49c81b79e3ceca", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": 
"tap9ce16b2c-f4", "ovs_interfaceid": "9ce16b2c-f453-4ee6-ab7f-159c6386c5d3", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}], nova_package='32.1.0-0.20251008143712.076498e.el10', creation_time=1760027229.5987637) get_instance_driver_metadata /usr/lib/python3.12/site-packages/nova/virt/driver.py:437
Oct 09 16:27:09 compute-0 nova_compute[117331]: 2025-10-09 16:27:09.603 2 DEBUG nova.virt.libvirt.host [None req-af428dbd-e996-411a-81c2-7c86ec3e213c c2c0f7c2f15e4da6881dc393064b0e16 1a345bddd4804404a55948133ea8150f - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.12/site-packages/nova/virt/libvirt/host.py:1698
Oct 09 16:27:09 compute-0 nova_compute[117331]: 2025-10-09 16:27:09.604 2 DEBUG nova.virt.libvirt.host [None req-af428dbd-e996-411a-81c2-7c86ec3e213c c2c0f7c2f15e4da6881dc393064b0e16 1a345bddd4804404a55948133ea8150f - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.12/site-packages/nova/virt/libvirt/host.py:1708
Oct 09 16:27:09 compute-0 nova_compute[117331]: 2025-10-09 16:27:09.607 2 DEBUG nova.virt.libvirt.host [None req-af428dbd-e996-411a-81c2-7c86ec3e213c c2c0f7c2f15e4da6881dc393064b0e16 1a345bddd4804404a55948133ea8150f - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.12/site-packages/nova/virt/libvirt/host.py:1717
Oct 09 16:27:09 compute-0 nova_compute[117331]: 2025-10-09 16:27:09.607 2 DEBUG nova.virt.libvirt.host [None req-af428dbd-e996-411a-81c2-7c86ec3e213c c2c0f7c2f15e4da6881dc393064b0e16 1a345bddd4804404a55948133ea8150f - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.12/site-packages/nova/virt/libvirt/host.py:1724
Oct 09 16:27:09 compute-0 nova_compute[117331]: 2025-10-09 16:27:09.608 2 DEBUG nova.virt.libvirt.driver [None req-af428dbd-e996-411a-81c2-7c86ec3e213c c2c0f7c2f15e4da6881dc393064b0e16 1a345bddd4804404a55948133ea8150f - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.12/site-packages/nova/virt/libvirt/driver.py:5809
Oct 09 16:27:09 compute-0 nova_compute[117331]: 2025-10-09 16:27:09.608 2 DEBUG nova.virt.hardware [None req-af428dbd-e996-411a-81c2-7c86ec3e213c c2c0f7c2f15e4da6881dc393064b0e16 1a345bddd4804404a55948133ea8150f - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-10-09T16:07:00Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='5aeb5bf9-70f3-4ab6-b418-2c3319a7d2b3',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-09T16:07:02Z,direct_url=<?>,disk_format='qcow2',id=b7d6e0af-25e4-4227-9dc6-43143898ceee,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='b30e8cf5e10742f190212b4cb97ce2c9',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-09T16:07:04Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.12/site-packages/nova/virt/hardware.py:571
Oct 09 16:27:09 compute-0 nova_compute[117331]: 2025-10-09 16:27:09.609 2 DEBUG nova.virt.hardware [None req-af428dbd-e996-411a-81c2-7c86ec3e213c c2c0f7c2f15e4da6881dc393064b0e16 1a345bddd4804404a55948133ea8150f - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.12/site-packages/nova/virt/hardware.py:356
Oct 09 16:27:09 compute-0 nova_compute[117331]: 2025-10-09 16:27:09.609 2 DEBUG nova.virt.hardware [None req-af428dbd-e996-411a-81c2-7c86ec3e213c c2c0f7c2f15e4da6881dc393064b0e16 1a345bddd4804404a55948133ea8150f - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.12/site-packages/nova/virt/hardware.py:360
Oct 09 16:27:09 compute-0 nova_compute[117331]: 2025-10-09 16:27:09.609 2 DEBUG nova.virt.hardware [None req-af428dbd-e996-411a-81c2-7c86ec3e213c c2c0f7c2f15e4da6881dc393064b0e16 1a345bddd4804404a55948133ea8150f - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.12/site-packages/nova/virt/hardware.py:396
Oct 09 16:27:09 compute-0 nova_compute[117331]: 2025-10-09 16:27:09.610 2 DEBUG nova.virt.hardware [None req-af428dbd-e996-411a-81c2-7c86ec3e213c c2c0f7c2f15e4da6881dc393064b0e16 1a345bddd4804404a55948133ea8150f - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.12/site-packages/nova/virt/hardware.py:400
Oct 09 16:27:09 compute-0 nova_compute[117331]: 2025-10-09 16:27:09.610 2 DEBUG nova.virt.hardware [None req-af428dbd-e996-411a-81c2-7c86ec3e213c c2c0f7c2f15e4da6881dc393064b0e16 1a345bddd4804404a55948133ea8150f - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.12/site-packages/nova/virt/hardware.py:438
Oct 09 16:27:09 compute-0 nova_compute[117331]: 2025-10-09 16:27:09.610 2 DEBUG nova.virt.hardware [None req-af428dbd-e996-411a-81c2-7c86ec3e213c c2c0f7c2f15e4da6881dc393064b0e16 1a345bddd4804404a55948133ea8150f - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.12/site-packages/nova/virt/hardware.py:577
Oct 09 16:27:09 compute-0 nova_compute[117331]: 2025-10-09 16:27:09.610 2 DEBUG nova.virt.hardware [None req-af428dbd-e996-411a-81c2-7c86ec3e213c c2c0f7c2f15e4da6881dc393064b0e16 1a345bddd4804404a55948133ea8150f - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.12/site-packages/nova/virt/hardware.py:479
Oct 09 16:27:09 compute-0 nova_compute[117331]: 2025-10-09 16:27:09.611 2 DEBUG nova.virt.hardware [None req-af428dbd-e996-411a-81c2-7c86ec3e213c c2c0f7c2f15e4da6881dc393064b0e16 1a345bddd4804404a55948133ea8150f - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.12/site-packages/nova/virt/hardware.py:509
Oct 09 16:27:09 compute-0 nova_compute[117331]: 2025-10-09 16:27:09.611 2 DEBUG nova.virt.hardware [None req-af428dbd-e996-411a-81c2-7c86ec3e213c c2c0f7c2f15e4da6881dc393064b0e16 1a345bddd4804404a55948133ea8150f - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.12/site-packages/nova/virt/hardware.py:583
Oct 09 16:27:09 compute-0 nova_compute[117331]: 2025-10-09 16:27:09.611 2 DEBUG nova.virt.hardware [None req-af428dbd-e996-411a-81c2-7c86ec3e213c c2c0f7c2f15e4da6881dc393064b0e16 1a345bddd4804404a55948133ea8150f - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.12/site-packages/nova/virt/hardware.py:585
Oct 09 16:27:09 compute-0 nova_compute[117331]: 2025-10-09 16:27:09.615 2 DEBUG nova.virt.libvirt.vif [None req-af428dbd-e996-411a-81c2-7c86ec3e213c c2c0f7c2f15e4da6881dc393064b0e16 1a345bddd4804404a55948133ea8150f - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,compute_id=1,config_drive='',created_at=2025-10-09T16:26:59Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description=None,display_name='tempest-TestExecuteNodeResourceConsolidationStrategy-server-1886277122',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testexecutenoderesourceconsolidationstrategy-server-188',id=18,image_ref='b7d6e0af-25e4-4227-9dc6-43143898ceee',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='1a345bddd4804404a55948133ea8150f',ramdisk_id='',reservation_id='r-ours3yn6',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,manager,admin,member',image_base_image_ref='b7d6e0af-25e4-4227-9dc6-43143898ceee',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestExecuteNodeResourceConsolidationStrategy-1915104332',owner_us
er_name='tempest-TestExecuteNodeResourceConsolidationStrategy-1915104332-project-admin'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-09T16:27:04Z,user_data=None,user_id='c2c0f7c2f15e4da6881dc393064b0e16',uuid=f156c348-19a2-41d9-bf57-79cea2a84e3c,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "9ce16b2c-f453-4ee6-ab7f-159c6386c5d3", "address": "fa:16:3e:ba:95:b3", "network": {"id": "18ef4241-0151-441c-abdc-42d4b3a21b30", "bridge": "br-int", "label": "tempest-TestExecuteNodeResourceConsolidationStrategy-129271777-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c2c321aa8a494a8e8b49c81b79e3ceca", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap9ce16b2c-f4", "ovs_interfaceid": "9ce16b2c-f453-4ee6-ab7f-159c6386c5d3", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.12/site-packages/nova/virt/libvirt/vif.py:574
Oct 09 16:27:09 compute-0 nova_compute[117331]: 2025-10-09 16:27:09.616 2 DEBUG nova.network.os_vif_util [None req-af428dbd-e996-411a-81c2-7c86ec3e213c c2c0f7c2f15e4da6881dc393064b0e16 1a345bddd4804404a55948133ea8150f - - default default] Converting VIF {"id": "9ce16b2c-f453-4ee6-ab7f-159c6386c5d3", "address": "fa:16:3e:ba:95:b3", "network": {"id": "18ef4241-0151-441c-abdc-42d4b3a21b30", "bridge": "br-int", "label": "tempest-TestExecuteNodeResourceConsolidationStrategy-129271777-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c2c321aa8a494a8e8b49c81b79e3ceca", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap9ce16b2c-f4", "ovs_interfaceid": "9ce16b2c-f453-4ee6-ab7f-159c6386c5d3", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.12/site-packages/nova/network/os_vif_util.py:511
Oct 09 16:27:09 compute-0 nova_compute[117331]: 2025-10-09 16:27:09.617 2 DEBUG nova.network.os_vif_util [None req-af428dbd-e996-411a-81c2-7c86ec3e213c c2c0f7c2f15e4da6881dc393064b0e16 1a345bddd4804404a55948133ea8150f - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:ba:95:b3,bridge_name='br-int',has_traffic_filtering=True,id=9ce16b2c-f453-4ee6-ab7f-159c6386c5d3,network=Network(18ef4241-0151-441c-abdc-42d4b3a21b30),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap9ce16b2c-f4') nova_to_osvif_vif /usr/lib/python3.12/site-packages/nova/network/os_vif_util.py:548
Oct 09 16:27:09 compute-0 nova_compute[117331]: 2025-10-09 16:27:09.617 2 DEBUG nova.objects.instance [None req-af428dbd-e996-411a-81c2-7c86ec3e213c c2c0f7c2f15e4da6881dc393064b0e16 1a345bddd4804404a55948133ea8150f - - default default] Lazy-loading 'pci_devices' on Instance uuid f156c348-19a2-41d9-bf57-79cea2a84e3c obj_load_attr /usr/lib/python3.12/site-packages/nova/objects/instance.py:1141
Oct 09 16:27:10 compute-0 nova_compute[117331]: 2025-10-09 16:27:10.129 2 DEBUG nova.virt.libvirt.driver [None req-af428dbd-e996-411a-81c2-7c86ec3e213c c2c0f7c2f15e4da6881dc393064b0e16 1a345bddd4804404a55948133ea8150f - - default default] [instance: f156c348-19a2-41d9-bf57-79cea2a84e3c] End _get_guest_xml xml=<domain type="kvm">
Oct 09 16:27:10 compute-0 nova_compute[117331]:   <uuid>f156c348-19a2-41d9-bf57-79cea2a84e3c</uuid>
Oct 09 16:27:10 compute-0 nova_compute[117331]:   <name>instance-00000012</name>
Oct 09 16:27:10 compute-0 nova_compute[117331]:   <memory>131072</memory>
Oct 09 16:27:10 compute-0 nova_compute[117331]:   <vcpu>1</vcpu>
Oct 09 16:27:10 compute-0 nova_compute[117331]:   <metadata>
Oct 09 16:27:10 compute-0 nova_compute[117331]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Oct 09 16:27:10 compute-0 nova_compute[117331]:       <nova:package version="32.1.0-0.20251008143712.076498e.el10"/>
Oct 09 16:27:10 compute-0 nova_compute[117331]:       <nova:name>tempest-TestExecuteNodeResourceConsolidationStrategy-server-1886277122</nova:name>
Oct 09 16:27:10 compute-0 nova_compute[117331]:       <nova:creationTime>2025-10-09 16:27:09</nova:creationTime>
Oct 09 16:27:10 compute-0 nova_compute[117331]:       <nova:flavor name="m1.nano" id="5aeb5bf9-70f3-4ab6-b418-2c3319a7d2b3">
Oct 09 16:27:10 compute-0 nova_compute[117331]:         <nova:memory>128</nova:memory>
Oct 09 16:27:10 compute-0 nova_compute[117331]:         <nova:disk>1</nova:disk>
Oct 09 16:27:10 compute-0 nova_compute[117331]:         <nova:swap>0</nova:swap>
Oct 09 16:27:10 compute-0 nova_compute[117331]:         <nova:ephemeral>0</nova:ephemeral>
Oct 09 16:27:10 compute-0 nova_compute[117331]:         <nova:vcpus>1</nova:vcpus>
Oct 09 16:27:10 compute-0 nova_compute[117331]:         <nova:extraSpecs>
Oct 09 16:27:10 compute-0 nova_compute[117331]:           <nova:extraSpec name="hw_rng:allowed">True</nova:extraSpec>
Oct 09 16:27:10 compute-0 nova_compute[117331]:         </nova:extraSpecs>
Oct 09 16:27:10 compute-0 nova_compute[117331]:       </nova:flavor>
Oct 09 16:27:10 compute-0 nova_compute[117331]:       <nova:image uuid="b7d6e0af-25e4-4227-9dc6-43143898ceee">
Oct 09 16:27:10 compute-0 nova_compute[117331]:         <nova:containerFormat>bare</nova:containerFormat>
Oct 09 16:27:10 compute-0 nova_compute[117331]:         <nova:diskFormat>qcow2</nova:diskFormat>
Oct 09 16:27:10 compute-0 nova_compute[117331]:         <nova:minDisk>1</nova:minDisk>
Oct 09 16:27:10 compute-0 nova_compute[117331]:         <nova:minRam>0</nova:minRam>
Oct 09 16:27:10 compute-0 nova_compute[117331]:         <nova:properties>
Oct 09 16:27:10 compute-0 nova_compute[117331]:           <nova:property name="hw_rng_model">virtio</nova:property>
Oct 09 16:27:10 compute-0 nova_compute[117331]:         </nova:properties>
Oct 09 16:27:10 compute-0 nova_compute[117331]:       </nova:image>
Oct 09 16:27:10 compute-0 nova_compute[117331]:       <nova:owner>
Oct 09 16:27:10 compute-0 nova_compute[117331]:         <nova:user uuid="c2c0f7c2f15e4da6881dc393064b0e16">tempest-TestExecuteNodeResourceConsolidationStrategy-1915104332-project-admin</nova:user>
Oct 09 16:27:10 compute-0 nova_compute[117331]:         <nova:project uuid="1a345bddd4804404a55948133ea8150f">tempest-TestExecuteNodeResourceConsolidationStrategy-1915104332</nova:project>
Oct 09 16:27:10 compute-0 nova_compute[117331]:       </nova:owner>
Oct 09 16:27:10 compute-0 nova_compute[117331]:       <nova:root type="image" uuid="b7d6e0af-25e4-4227-9dc6-43143898ceee"/>
Oct 09 16:27:10 compute-0 nova_compute[117331]:       <nova:ports>
Oct 09 16:27:10 compute-0 nova_compute[117331]:         <nova:port uuid="9ce16b2c-f453-4ee6-ab7f-159c6386c5d3">
Oct 09 16:27:10 compute-0 nova_compute[117331]:           <nova:ip type="fixed" address="10.100.0.8" ipVersion="4"/>
Oct 09 16:27:10 compute-0 nova_compute[117331]:         </nova:port>
Oct 09 16:27:10 compute-0 nova_compute[117331]:       </nova:ports>
Oct 09 16:27:10 compute-0 nova_compute[117331]:     </nova:instance>
Oct 09 16:27:10 compute-0 nova_compute[117331]:   </metadata>
Oct 09 16:27:10 compute-0 nova_compute[117331]:   <sysinfo type="smbios">
Oct 09 16:27:10 compute-0 nova_compute[117331]:     <system>
Oct 09 16:27:10 compute-0 nova_compute[117331]:       <entry name="manufacturer">RDO</entry>
Oct 09 16:27:10 compute-0 nova_compute[117331]:       <entry name="product">OpenStack Compute</entry>
Oct 09 16:27:10 compute-0 nova_compute[117331]:       <entry name="version">32.1.0-0.20251008143712.076498e.el10</entry>
Oct 09 16:27:10 compute-0 nova_compute[117331]:       <entry name="serial">f156c348-19a2-41d9-bf57-79cea2a84e3c</entry>
Oct 09 16:27:10 compute-0 nova_compute[117331]:       <entry name="uuid">f156c348-19a2-41d9-bf57-79cea2a84e3c</entry>
Oct 09 16:27:10 compute-0 nova_compute[117331]:       <entry name="family">Virtual Machine</entry>
Oct 09 16:27:10 compute-0 nova_compute[117331]:     </system>
Oct 09 16:27:10 compute-0 nova_compute[117331]:   </sysinfo>
Oct 09 16:27:10 compute-0 nova_compute[117331]:   <os>
Oct 09 16:27:10 compute-0 nova_compute[117331]:     <type arch="x86_64" machine="q35">hvm</type>
Oct 09 16:27:10 compute-0 nova_compute[117331]:     <boot dev="hd"/>
Oct 09 16:27:10 compute-0 nova_compute[117331]:     <smbios mode="sysinfo"/>
Oct 09 16:27:10 compute-0 nova_compute[117331]:   </os>
Oct 09 16:27:10 compute-0 nova_compute[117331]:   <features>
Oct 09 16:27:10 compute-0 nova_compute[117331]:     <acpi/>
Oct 09 16:27:10 compute-0 nova_compute[117331]:     <apic/>
Oct 09 16:27:10 compute-0 nova_compute[117331]:     <vmcoreinfo/>
Oct 09 16:27:10 compute-0 nova_compute[117331]:   </features>
Oct 09 16:27:10 compute-0 nova_compute[117331]:   <clock offset="utc">
Oct 09 16:27:10 compute-0 nova_compute[117331]:     <timer name="pit" tickpolicy="delay"/>
Oct 09 16:27:10 compute-0 nova_compute[117331]:     <timer name="rtc" tickpolicy="catchup"/>
Oct 09 16:27:10 compute-0 nova_compute[117331]:     <timer name="hpet" present="no"/>
Oct 09 16:27:10 compute-0 nova_compute[117331]:   </clock>
Oct 09 16:27:10 compute-0 nova_compute[117331]:   <cpu mode="host-model" match="exact">
Oct 09 16:27:10 compute-0 nova_compute[117331]:     <topology sockets="1" cores="1" threads="1"/>
Oct 09 16:27:10 compute-0 nova_compute[117331]:   </cpu>
Oct 09 16:27:10 compute-0 nova_compute[117331]:   <devices>
Oct 09 16:27:10 compute-0 nova_compute[117331]:     <disk type="file" device="disk">
Oct 09 16:27:10 compute-0 nova_compute[117331]:       <driver name="qemu" type="qcow2" cache="none"/>
Oct 09 16:27:10 compute-0 nova_compute[117331]:       <source file="/var/lib/nova/instances/f156c348-19a2-41d9-bf57-79cea2a84e3c/disk"/>
Oct 09 16:27:10 compute-0 nova_compute[117331]:       <target dev="vda" bus="virtio"/>
Oct 09 16:27:10 compute-0 nova_compute[117331]:     </disk>
Oct 09 16:27:10 compute-0 nova_compute[117331]:     <disk type="file" device="cdrom">
Oct 09 16:27:10 compute-0 nova_compute[117331]:       <driver name="qemu" type="raw" cache="none"/>
Oct 09 16:27:10 compute-0 nova_compute[117331]:       <source file="/var/lib/nova/instances/f156c348-19a2-41d9-bf57-79cea2a84e3c/disk.config"/>
Oct 09 16:27:10 compute-0 nova_compute[117331]:       <target dev="sda" bus="sata"/>
Oct 09 16:27:10 compute-0 nova_compute[117331]:     </disk>
Oct 09 16:27:10 compute-0 nova_compute[117331]:     <interface type="ethernet">
Oct 09 16:27:10 compute-0 nova_compute[117331]:       <mac address="fa:16:3e:ba:95:b3"/>
Oct 09 16:27:10 compute-0 nova_compute[117331]:       <model type="virtio"/>
Oct 09 16:27:10 compute-0 nova_compute[117331]:       <driver name="vhost" rx_queue_size="512"/>
Oct 09 16:27:10 compute-0 nova_compute[117331]:       <mtu size="1442"/>
Oct 09 16:27:10 compute-0 nova_compute[117331]:       <target dev="tap9ce16b2c-f4"/>
Oct 09 16:27:10 compute-0 nova_compute[117331]:     </interface>
Oct 09 16:27:10 compute-0 nova_compute[117331]:     <serial type="pty">
Oct 09 16:27:10 compute-0 nova_compute[117331]:       <log file="/var/lib/nova/instances/f156c348-19a2-41d9-bf57-79cea2a84e3c/console.log" append="off"/>
Oct 09 16:27:10 compute-0 nova_compute[117331]:     </serial>
Oct 09 16:27:10 compute-0 nova_compute[117331]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Oct 09 16:27:10 compute-0 nova_compute[117331]:     <video>
Oct 09 16:27:10 compute-0 nova_compute[117331]:       <model type="virtio"/>
Oct 09 16:27:10 compute-0 nova_compute[117331]:     </video>
Oct 09 16:27:10 compute-0 nova_compute[117331]:     <input type="tablet" bus="usb"/>
Oct 09 16:27:10 compute-0 nova_compute[117331]:     <rng model="virtio">
Oct 09 16:27:10 compute-0 nova_compute[117331]:       <backend model="random">/dev/urandom</backend>
Oct 09 16:27:10 compute-0 nova_compute[117331]:     </rng>
Oct 09 16:27:10 compute-0 nova_compute[117331]:     <controller type="pci" model="pcie-root"/>
Oct 09 16:27:10 compute-0 nova_compute[117331]:     <controller type="pci" model="pcie-root-port"/>
Oct 09 16:27:10 compute-0 nova_compute[117331]:     <controller type="pci" model="pcie-root-port"/>
Oct 09 16:27:10 compute-0 nova_compute[117331]:     <controller type="pci" model="pcie-root-port"/>
Oct 09 16:27:10 compute-0 nova_compute[117331]:     <controller type="pci" model="pcie-root-port"/>
Oct 09 16:27:10 compute-0 nova_compute[117331]:     <controller type="pci" model="pcie-root-port"/>
Oct 09 16:27:10 compute-0 nova_compute[117331]:     <controller type="pci" model="pcie-root-port"/>
Oct 09 16:27:10 compute-0 nova_compute[117331]:     <controller type="pci" model="pcie-root-port"/>
Oct 09 16:27:10 compute-0 nova_compute[117331]:     <controller type="pci" model="pcie-root-port"/>
Oct 09 16:27:10 compute-0 nova_compute[117331]:     <controller type="pci" model="pcie-root-port"/>
Oct 09 16:27:10 compute-0 nova_compute[117331]:     <controller type="pci" model="pcie-root-port"/>
Oct 09 16:27:10 compute-0 nova_compute[117331]:     <controller type="pci" model="pcie-root-port"/>
Oct 09 16:27:10 compute-0 nova_compute[117331]:     <controller type="pci" model="pcie-root-port"/>
Oct 09 16:27:10 compute-0 nova_compute[117331]:     <controller type="pci" model="pcie-root-port"/>
Oct 09 16:27:10 compute-0 nova_compute[117331]:     <controller type="pci" model="pcie-root-port"/>
Oct 09 16:27:10 compute-0 nova_compute[117331]:     <controller type="pci" model="pcie-root-port"/>
Oct 09 16:27:10 compute-0 nova_compute[117331]:     <controller type="pci" model="pcie-root-port"/>
Oct 09 16:27:10 compute-0 nova_compute[117331]:     <controller type="pci" model="pcie-root-port"/>
Oct 09 16:27:10 compute-0 nova_compute[117331]:     <controller type="pci" model="pcie-root-port"/>
Oct 09 16:27:10 compute-0 nova_compute[117331]:     <controller type="pci" model="pcie-root-port"/>
Oct 09 16:27:10 compute-0 nova_compute[117331]:     <controller type="pci" model="pcie-root-port"/>
Oct 09 16:27:10 compute-0 nova_compute[117331]:     <controller type="pci" model="pcie-root-port"/>
Oct 09 16:27:10 compute-0 nova_compute[117331]:     <controller type="pci" model="pcie-root-port"/>
Oct 09 16:27:10 compute-0 nova_compute[117331]:     <controller type="pci" model="pcie-root-port"/>
Oct 09 16:27:10 compute-0 nova_compute[117331]:     <controller type="pci" model="pcie-root-port"/>
Oct 09 16:27:10 compute-0 nova_compute[117331]:     <controller type="usb" index="0"/>
Oct 09 16:27:10 compute-0 nova_compute[117331]:     <memballoon model="virtio" autodeflate="on" freePageReporting="on">
Oct 09 16:27:10 compute-0 nova_compute[117331]:       <stats period="10"/>
Oct 09 16:27:10 compute-0 nova_compute[117331]:     </memballoon>
Oct 09 16:27:10 compute-0 nova_compute[117331]:   </devices>
Oct 09 16:27:10 compute-0 nova_compute[117331]: </domain>
Oct 09 16:27:10 compute-0 nova_compute[117331]:  _get_guest_xml /usr/lib/python3.12/site-packages/nova/virt/libvirt/driver.py:8052
Oct 09 16:27:10 compute-0 nova_compute[117331]: 2025-10-09 16:27:10.130 2 DEBUG nova.compute.manager [None req-af428dbd-e996-411a-81c2-7c86ec3e213c c2c0f7c2f15e4da6881dc393064b0e16 1a345bddd4804404a55948133ea8150f - - default default] [instance: f156c348-19a2-41d9-bf57-79cea2a84e3c] Preparing to wait for external event network-vif-plugged-9ce16b2c-f453-4ee6-ab7f-159c6386c5d3 prepare_for_instance_event /usr/lib/python3.12/site-packages/nova/compute/manager.py:307
Oct 09 16:27:10 compute-0 nova_compute[117331]: 2025-10-09 16:27:10.131 2 DEBUG oslo_concurrency.lockutils [None req-af428dbd-e996-411a-81c2-7c86ec3e213c c2c0f7c2f15e4da6881dc393064b0e16 1a345bddd4804404a55948133ea8150f - - default default] Acquiring lock "f156c348-19a2-41d9-bf57-79cea2a84e3c-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:405
Oct 09 16:27:10 compute-0 nova_compute[117331]: 2025-10-09 16:27:10.132 2 DEBUG oslo_concurrency.lockutils [None req-af428dbd-e996-411a-81c2-7c86ec3e213c c2c0f7c2f15e4da6881dc393064b0e16 1a345bddd4804404a55948133ea8150f - - default default] Lock "f156c348-19a2-41d9-bf57-79cea2a84e3c-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:410
Oct 09 16:27:10 compute-0 nova_compute[117331]: 2025-10-09 16:27:10.132 2 DEBUG oslo_concurrency.lockutils [None req-af428dbd-e996-411a-81c2-7c86ec3e213c c2c0f7c2f15e4da6881dc393064b0e16 1a345bddd4804404a55948133ea8150f - - default default] Lock "f156c348-19a2-41d9-bf57-79cea2a84e3c-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.001s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:424
Oct 09 16:27:10 compute-0 nova_compute[117331]: 2025-10-09 16:27:10.133 2 DEBUG nova.virt.libvirt.vif [None req-af428dbd-e996-411a-81c2-7c86ec3e213c c2c0f7c2f15e4da6881dc393064b0e16 1a345bddd4804404a55948133ea8150f - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,compute_id=1,config_drive='',created_at=2025-10-09T16:26:59Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description=None,display_name='tempest-TestExecuteNodeResourceConsolidationStrategy-server-1886277122',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testexecutenoderesourceconsolidationstrategy-server-188',id=18,image_ref='b7d6e0af-25e4-4227-9dc6-43143898ceee',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='1a345bddd4804404a55948133ea8150f',ramdisk_id='',reservation_id='r-ours3yn6',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,manager,admin,member',image_base_image_ref='b7d6e0af-25e4-4227-9dc6-43143898ceee',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestExecuteNodeResourceConsolidationStrategy-1915104332
',owner_user_name='tempest-TestExecuteNodeResourceConsolidationStrategy-1915104332-project-admin'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-09T16:27:04Z,user_data=None,user_id='c2c0f7c2f15e4da6881dc393064b0e16',uuid=f156c348-19a2-41d9-bf57-79cea2a84e3c,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "9ce16b2c-f453-4ee6-ab7f-159c6386c5d3", "address": "fa:16:3e:ba:95:b3", "network": {"id": "18ef4241-0151-441c-abdc-42d4b3a21b30", "bridge": "br-int", "label": "tempest-TestExecuteNodeResourceConsolidationStrategy-129271777-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c2c321aa8a494a8e8b49c81b79e3ceca", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap9ce16b2c-f4", "ovs_interfaceid": "9ce16b2c-f453-4ee6-ab7f-159c6386c5d3", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.12/site-packages/nova/virt/libvirt/vif.py:721
Oct 09 16:27:10 compute-0 nova_compute[117331]: 2025-10-09 16:27:10.134 2 DEBUG nova.network.os_vif_util [None req-af428dbd-e996-411a-81c2-7c86ec3e213c c2c0f7c2f15e4da6881dc393064b0e16 1a345bddd4804404a55948133ea8150f - - default default] Converting VIF {"id": "9ce16b2c-f453-4ee6-ab7f-159c6386c5d3", "address": "fa:16:3e:ba:95:b3", "network": {"id": "18ef4241-0151-441c-abdc-42d4b3a21b30", "bridge": "br-int", "label": "tempest-TestExecuteNodeResourceConsolidationStrategy-129271777-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c2c321aa8a494a8e8b49c81b79e3ceca", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap9ce16b2c-f4", "ovs_interfaceid": "9ce16b2c-f453-4ee6-ab7f-159c6386c5d3", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.12/site-packages/nova/network/os_vif_util.py:511
Oct 09 16:27:10 compute-0 nova_compute[117331]: 2025-10-09 16:27:10.135 2 DEBUG nova.network.os_vif_util [None req-af428dbd-e996-411a-81c2-7c86ec3e213c c2c0f7c2f15e4da6881dc393064b0e16 1a345bddd4804404a55948133ea8150f - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:ba:95:b3,bridge_name='br-int',has_traffic_filtering=True,id=9ce16b2c-f453-4ee6-ab7f-159c6386c5d3,network=Network(18ef4241-0151-441c-abdc-42d4b3a21b30),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap9ce16b2c-f4') nova_to_osvif_vif /usr/lib/python3.12/site-packages/nova/network/os_vif_util.py:548
Oct 09 16:27:10 compute-0 nova_compute[117331]: 2025-10-09 16:27:10.136 2 DEBUG os_vif [None req-af428dbd-e996-411a-81c2-7c86ec3e213c c2c0f7c2f15e4da6881dc393064b0e16 1a345bddd4804404a55948133ea8150f - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:ba:95:b3,bridge_name='br-int',has_traffic_filtering=True,id=9ce16b2c-f453-4ee6-ab7f-159c6386c5d3,network=Network(18ef4241-0151-441c-abdc-42d4b3a21b30),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap9ce16b2c-f4') plug /usr/lib/python3.12/site-packages/os_vif/__init__.py:76
Oct 09 16:27:10 compute-0 nova_compute[117331]: 2025-10-09 16:27:10.137 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 22 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 09 16:27:10 compute-0 nova_compute[117331]: 2025-10-09 16:27:10.137 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.12/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 09 16:27:10 compute-0 nova_compute[117331]: 2025-10-09 16:27:10.138 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.12/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 09 16:27:10 compute-0 nova_compute[117331]: 2025-10-09 16:27:10.139 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 22 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 09 16:27:10 compute-0 nova_compute[117331]: 2025-10-09 16:27:10.139 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbCreateCommand(_result=None, table=QoS, columns={'type': 'linux-noop', 'external_ids': {'id': 'e3b06758-0f66-59fc-8e07-96ce6caa7f86', '_type': 'linux-noop'}}, row=False) do_commit /usr/lib/python3.12/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 09 16:27:10 compute-0 nova_compute[117331]: 2025-10-09 16:27:10.141 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 09 16:27:10 compute-0 nova_compute[117331]: 2025-10-09 16:27:10.143 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 09 16:27:10 compute-0 nova_compute[117331]: 2025-10-09 16:27:10.146 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 22 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 09 16:27:10 compute-0 nova_compute[117331]: 2025-10-09 16:27:10.147 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap9ce16b2c-f4, may_exist=True, interface_attrs={}) do_commit /usr/lib/python3.12/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 09 16:27:10 compute-0 nova_compute[117331]: 2025-10-09 16:27:10.148 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Port, record=tap9ce16b2c-f4, col_values=(('qos', UUID('819a41c3-ce7d-494b-bea6-f70c2e945dd1')),), if_exists=True) do_commit /usr/lib/python3.12/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 09 16:27:10 compute-0 nova_compute[117331]: 2025-10-09 16:27:10.148 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=2): DbSetCommand(_result=None, table=Interface, record=tap9ce16b2c-f4, col_values=(('external_ids', {'iface-id': '9ce16b2c-f453-4ee6-ab7f-159c6386c5d3', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:ba:95:b3', 'vm-uuid': 'f156c348-19a2-41d9-bf57-79cea2a84e3c'}),), if_exists=True) do_commit /usr/lib/python3.12/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 09 16:27:10 compute-0 nova_compute[117331]: 2025-10-09 16:27:10.150 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 09 16:27:10 compute-0 NetworkManager[1028]: <info>  [1760027230.1517] manager: (tap9ce16b2c-f4): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/62)
Oct 09 16:27:10 compute-0 nova_compute[117331]: 2025-10-09 16:27:10.153 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:248
Oct 09 16:27:10 compute-0 nova_compute[117331]: 2025-10-09 16:27:10.158 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 09 16:27:10 compute-0 nova_compute[117331]: 2025-10-09 16:27:10.160 2 INFO os_vif [None req-af428dbd-e996-411a-81c2-7c86ec3e213c c2c0f7c2f15e4da6881dc393064b0e16 1a345bddd4804404a55948133ea8150f - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:ba:95:b3,bridge_name='br-int',has_traffic_filtering=True,id=9ce16b2c-f453-4ee6-ab7f-159c6386c5d3,network=Network(18ef4241-0151-441c-abdc-42d4b3a21b30),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap9ce16b2c-f4')
Oct 09 16:27:11 compute-0 nova_compute[117331]: 2025-10-09 16:27:11.713 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 09 16:27:11 compute-0 nova_compute[117331]: 2025-10-09 16:27:11.718 2 DEBUG nova.virt.libvirt.driver [None req-af428dbd-e996-411a-81c2-7c86ec3e213c c2c0f7c2f15e4da6881dc393064b0e16 1a345bddd4804404a55948133ea8150f - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.12/site-packages/nova/virt/libvirt/driver.py:13022
Oct 09 16:27:11 compute-0 nova_compute[117331]: 2025-10-09 16:27:11.719 2 DEBUG nova.virt.libvirt.driver [None req-af428dbd-e996-411a-81c2-7c86ec3e213c c2c0f7c2f15e4da6881dc393064b0e16 1a345bddd4804404a55948133ea8150f - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.12/site-packages/nova/virt/libvirt/driver.py:13022
Oct 09 16:27:11 compute-0 nova_compute[117331]: 2025-10-09 16:27:11.719 2 DEBUG nova.virt.libvirt.driver [None req-af428dbd-e996-411a-81c2-7c86ec3e213c c2c0f7c2f15e4da6881dc393064b0e16 1a345bddd4804404a55948133ea8150f - - default default] No VIF found with MAC fa:16:3e:ba:95:b3, not building metadata _build_interface_metadata /usr/lib/python3.12/site-packages/nova/virt/libvirt/driver.py:12998
Oct 09 16:27:11 compute-0 nova_compute[117331]: 2025-10-09 16:27:11.719 2 INFO nova.virt.libvirt.driver [None req-af428dbd-e996-411a-81c2-7c86ec3e213c c2c0f7c2f15e4da6881dc393064b0e16 1a345bddd4804404a55948133ea8150f - - default default] [instance: f156c348-19a2-41d9-bf57-79cea2a84e3c] Using config drive
Oct 09 16:27:11 compute-0 podman[147966]: 2025-10-09 16:27:11.858567632 +0000 UTC m=+0.081565316 container health_status 270830b5167cafbab735d8118980f26987e673cd0fa464dd2202394830bcca66 (image=38.102.83.66:5001/podified-master-centos10/openstack-multipathd:watcher_latest, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 10 Base Image, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': '38.102.83.66:5001/podified-master-centos10/openstack-multipathd:watcher_latest', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd, org.label-schema.build-date=20251007, config_id=multipathd, tcib_build_tag=watcher_latest, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.4)
Oct 09 16:27:12 compute-0 nova_compute[117331]: 2025-10-09 16:27:12.228 2 WARNING neutronclient.v2_0.client [None req-af428dbd-e996-411a-81c2-7c86ec3e213c c2c0f7c2f15e4da6881dc393064b0e16 1a345bddd4804404a55948133ea8150f - - default default] The python binding code in neutronclient is deprecated in favor of OpenstackSDK, please use that as this will be removed in a future release.
Oct 09 16:27:12 compute-0 nova_compute[117331]: 2025-10-09 16:27:12.400 2 INFO nova.virt.libvirt.driver [None req-af428dbd-e996-411a-81c2-7c86ec3e213c c2c0f7c2f15e4da6881dc393064b0e16 1a345bddd4804404a55948133ea8150f - - default default] [instance: f156c348-19a2-41d9-bf57-79cea2a84e3c] Creating config drive at /var/lib/nova/instances/f156c348-19a2-41d9-bf57-79cea2a84e3c/disk.config
Oct 09 16:27:12 compute-0 nova_compute[117331]: 2025-10-09 16:27:12.405 2 DEBUG oslo_concurrency.processutils [None req-af428dbd-e996-411a-81c2-7c86ec3e213c c2c0f7c2f15e4da6881dc393064b0e16 1a345bddd4804404a55948133ea8150f - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/f156c348-19a2-41d9-bf57-79cea2a84e3c/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 32.1.0-0.20251008143712.076498e.el10 -quiet -J -r -V config-2 /tmp/tmp62el2ti2 execute /usr/lib/python3.12/site-packages/oslo_concurrency/processutils.py:349
Oct 09 16:27:12 compute-0 nova_compute[117331]: 2025-10-09 16:27:12.538 2 DEBUG oslo_concurrency.processutils [None req-af428dbd-e996-411a-81c2-7c86ec3e213c c2c0f7c2f15e4da6881dc393064b0e16 1a345bddd4804404a55948133ea8150f - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/f156c348-19a2-41d9-bf57-79cea2a84e3c/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 32.1.0-0.20251008143712.076498e.el10 -quiet -J -r -V config-2 /tmp/tmp62el2ti2" returned: 0 in 0.133s execute /usr/lib/python3.12/site-packages/oslo_concurrency/processutils.py:372
Oct 09 16:27:12 compute-0 kernel: tap9ce16b2c-f4: entered promiscuous mode
Oct 09 16:27:12 compute-0 NetworkManager[1028]: <info>  [1760027232.6043] manager: (tap9ce16b2c-f4): new Tun device (/org/freedesktop/NetworkManager/Devices/63)
Oct 09 16:27:12 compute-0 ovn_controller[19752]: 2025-10-09T16:27:12Z|00157|binding|INFO|Claiming lport 9ce16b2c-f453-4ee6-ab7f-159c6386c5d3 for this chassis.
Oct 09 16:27:12 compute-0 ovn_controller[19752]: 2025-10-09T16:27:12Z|00158|binding|INFO|9ce16b2c-f453-4ee6-ab7f-159c6386c5d3: Claiming fa:16:3e:ba:95:b3 10.100.0.8
Oct 09 16:27:12 compute-0 nova_compute[117331]: 2025-10-09 16:27:12.605 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 09 16:27:12 compute-0 ovn_metadata_agent[28608]: 2025-10-09 16:27:12.611 28613 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:ba:95:b3 10.100.0.8'], port_security=['fa:16:3e:ba:95:b3 10.100.0.8'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.8/28', 'neutron:device_id': 'f156c348-19a2-41d9-bf57-79cea2a84e3c', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-18ef4241-0151-441c-abdc-42d4b3a21b30', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '1a345bddd4804404a55948133ea8150f', 'neutron:revision_number': '4', 'neutron:security_group_ids': '9689e159-b15c-4e2f-9c7b-a0ccf3c3f578', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=0ed87cdf-61a6-4df0-ac74-37289a0bcd5f, chassis=[<ovs.db.idl.Row object at 0x7f37b2576840>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f37b2576840>], logical_port=9ce16b2c-f453-4ee6-ab7f-159c6386c5d3) old=Port_Binding(chassis=[]) matches /usr/lib/python3.12/site-packages/ovsdbapp/backend/ovs_idl/event.py:55
Oct 09 16:27:12 compute-0 ovn_metadata_agent[28608]: 2025-10-09 16:27:12.612 28613 INFO neutron.agent.ovn.metadata.agent [-] Port 9ce16b2c-f453-4ee6-ab7f-159c6386c5d3 in datapath 18ef4241-0151-441c-abdc-42d4b3a21b30 bound to our chassis
Oct 09 16:27:12 compute-0 ovn_metadata_agent[28608]: 2025-10-09 16:27:12.614 28613 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 18ef4241-0151-441c-abdc-42d4b3a21b30
Oct 09 16:27:12 compute-0 systemd-udevd[148002]: Network interface NamePolicy= disabled on kernel command line.
Oct 09 16:27:12 compute-0 ovn_metadata_agent[28608]: 2025-10-09 16:27:12.631 139687 DEBUG oslo.privsep.daemon [-] privsep: reply[a0c1eb5a-e6c4-4bb9-bcb2-8264f900c3f6]: (4, False) _call_back /usr/lib/python3.12/site-packages/oslo_privsep/daemon.py:515
Oct 09 16:27:12 compute-0 ovn_metadata_agent[28608]: 2025-10-09 16:27:12.631 28613 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap18ef4241-01 in ovnmeta-18ef4241-0151-441c-abdc-42d4b3a21b30 namespace provision_datapath /usr/lib/python3.12/site-packages/neutron/agent/ovn/metadata/agent.py:794
Oct 09 16:27:12 compute-0 ovn_controller[19752]: 2025-10-09T16:27:12Z|00159|binding|INFO|Setting lport 9ce16b2c-f453-4ee6-ab7f-159c6386c5d3 ovn-installed in OVS
Oct 09 16:27:12 compute-0 ovn_controller[19752]: 2025-10-09T16:27:12Z|00160|binding|INFO|Setting lport 9ce16b2c-f453-4ee6-ab7f-159c6386c5d3 up in Southbound
Oct 09 16:27:12 compute-0 nova_compute[117331]: 2025-10-09 16:27:12.633 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 09 16:27:12 compute-0 ovn_metadata_agent[28608]: 2025-10-09 16:27:12.633 139687 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap18ef4241-00 not found in namespace None get_link_id /usr/lib/python3.12/site-packages/neutron/privileged/agent/linux/ip_lib.py:203
Oct 09 16:27:12 compute-0 ovn_metadata_agent[28608]: 2025-10-09 16:27:12.633 139687 DEBUG oslo.privsep.daemon [-] privsep: reply[76659e54-6257-4c1d-b15c-b29df870ef69]: (4, False) _call_back /usr/lib/python3.12/site-packages/oslo_privsep/daemon.py:515
Oct 09 16:27:12 compute-0 ovn_metadata_agent[28608]: 2025-10-09 16:27:12.635 139687 DEBUG oslo.privsep.daemon [-] privsep: reply[b8bf3ba5-7d97-4281-ae0e-88910939eac9]: (4, False) _call_back /usr/lib/python3.12/site-packages/oslo_privsep/daemon.py:515
Oct 09 16:27:12 compute-0 nova_compute[117331]: 2025-10-09 16:27:12.643 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 09 16:27:12 compute-0 ovn_metadata_agent[28608]: 2025-10-09 16:27:12.649 28727 DEBUG oslo.privsep.daemon [-] privsep: reply[31d4d415-1f6b-405d-9eca-74e39a32f888]: (4, None) _call_back /usr/lib/python3.12/site-packages/oslo_privsep/daemon.py:515
Oct 09 16:27:12 compute-0 NetworkManager[1028]: <info>  [1760027232.6507] device (tap9ce16b2c-f4): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Oct 09 16:27:12 compute-0 NetworkManager[1028]: <info>  [1760027232.6518] device (tap9ce16b2c-f4): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Oct 09 16:27:12 compute-0 systemd-machined[77487]: New machine qemu-12-instance-00000012.
Oct 09 16:27:12 compute-0 ovn_metadata_agent[28608]: 2025-10-09 16:27:12.668 139687 DEBUG oslo.privsep.daemon [-] privsep: reply[c666c836-e6b2-4234-84bd-6dfb0c693179]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.12/site-packages/oslo_privsep/daemon.py:515
Oct 09 16:27:12 compute-0 systemd[1]: Started Virtual Machine qemu-12-instance-00000012.
Oct 09 16:27:12 compute-0 ovn_metadata_agent[28608]: 2025-10-09 16:27:12.698 141048 DEBUG oslo.privsep.daemon [-] privsep: reply[8decf131-e214-4c65-83ef-1b51f71b9f9d]: (4, None) _call_back /usr/lib/python3.12/site-packages/oslo_privsep/daemon.py:515
Oct 09 16:27:12 compute-0 ovn_metadata_agent[28608]: 2025-10-09 16:27:12.703 139687 DEBUG oslo.privsep.daemon [-] privsep: reply[88e3f57d-206e-4855-a151-43c522947d37]: (4, None) _call_back /usr/lib/python3.12/site-packages/oslo_privsep/daemon.py:515
Oct 09 16:27:12 compute-0 NetworkManager[1028]: <info>  [1760027232.7055] manager: (tap18ef4241-00): new Veth device (/org/freedesktop/NetworkManager/Devices/64)
Oct 09 16:27:12 compute-0 ovn_metadata_agent[28608]: 2025-10-09 16:27:12.744 141048 DEBUG oslo.privsep.daemon [-] privsep: reply[b8aa72d0-d0ea-4dd8-82d1-f7bea978682f]: (4, None) _call_back /usr/lib/python3.12/site-packages/oslo_privsep/daemon.py:515
Oct 09 16:27:12 compute-0 ovn_metadata_agent[28608]: 2025-10-09 16:27:12.747 141048 DEBUG oslo.privsep.daemon [-] privsep: reply[3e8cc6bf-791b-4831-a513-304204f34e8b]: (4, None) _call_back /usr/lib/python3.12/site-packages/oslo_privsep/daemon.py:515
Oct 09 16:27:12 compute-0 NetworkManager[1028]: <info>  [1760027232.7710] device (tap18ef4241-00): carrier: link connected
Oct 09 16:27:12 compute-0 ovn_metadata_agent[28608]: 2025-10-09 16:27:12.777 141048 DEBUG oslo.privsep.daemon [-] privsep: reply[c50314b5-ad98-407e-b838-526b286dbb9c]: (4, None) _call_back /usr/lib/python3.12/site-packages/oslo_privsep/daemon.py:515
Oct 09 16:27:12 compute-0 nova_compute[117331]: 2025-10-09 16:27:12.782 2 DEBUG nova.compute.manager [req-b0f74e83-b0b3-4767-a00e-3489e350e208 req-6d194e64-a717-474d-91ff-8d3076ac8be6 ee15961ad2be4bada6c106ac4f69f422 c076c42271004e7aa86e84416e33f826 - - default default] [instance: f156c348-19a2-41d9-bf57-79cea2a84e3c] Received event network-vif-plugged-9ce16b2c-f453-4ee6-ab7f-159c6386c5d3 external_instance_event /usr/lib/python3.12/site-packages/nova/compute/manager.py:11812
Oct 09 16:27:12 compute-0 nova_compute[117331]: 2025-10-09 16:27:12.783 2 DEBUG oslo_concurrency.lockutils [req-b0f74e83-b0b3-4767-a00e-3489e350e208 req-6d194e64-a717-474d-91ff-8d3076ac8be6 ee15961ad2be4bada6c106ac4f69f422 c076c42271004e7aa86e84416e33f826 - - default default] Acquiring lock "f156c348-19a2-41d9-bf57-79cea2a84e3c-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:405
Oct 09 16:27:12 compute-0 nova_compute[117331]: 2025-10-09 16:27:12.783 2 DEBUG oslo_concurrency.lockutils [req-b0f74e83-b0b3-4767-a00e-3489e350e208 req-6d194e64-a717-474d-91ff-8d3076ac8be6 ee15961ad2be4bada6c106ac4f69f422 c076c42271004e7aa86e84416e33f826 - - default default] Lock "f156c348-19a2-41d9-bf57-79cea2a84e3c-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:410
Oct 09 16:27:12 compute-0 nova_compute[117331]: 2025-10-09 16:27:12.783 2 DEBUG oslo_concurrency.lockutils [req-b0f74e83-b0b3-4767-a00e-3489e350e208 req-6d194e64-a717-474d-91ff-8d3076ac8be6 ee15961ad2be4bada6c106ac4f69f422 c076c42271004e7aa86e84416e33f826 - - default default] Lock "f156c348-19a2-41d9-bf57-79cea2a84e3c-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:424
Oct 09 16:27:12 compute-0 nova_compute[117331]: 2025-10-09 16:27:12.783 2 DEBUG nova.compute.manager [req-b0f74e83-b0b3-4767-a00e-3489e350e208 req-6d194e64-a717-474d-91ff-8d3076ac8be6 ee15961ad2be4bada6c106ac4f69f422 c076c42271004e7aa86e84416e33f826 - - default default] [instance: f156c348-19a2-41d9-bf57-79cea2a84e3c] Processing event network-vif-plugged-9ce16b2c-f453-4ee6-ab7f-159c6386c5d3 _process_instance_event /usr/lib/python3.12/site-packages/nova/compute/manager.py:11572
Oct 09 16:27:12 compute-0 ovn_metadata_agent[28608]: 2025-10-09 16:27:12.795 139687 DEBUG oslo.privsep.daemon [-] privsep: reply[092b5b37-f685-41fd-adf1-314eed3808b7]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap18ef4241-01'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:71:5e:11'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 48], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 217637, 'reachable_time': 15240, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 148037, 'error': None, 'target': 'ovnmeta-18ef4241-0151-441c-abdc-42d4b3a21b30', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.12/site-packages/oslo_privsep/daemon.py:515
Oct 09 16:27:12 compute-0 ovn_metadata_agent[28608]: 2025-10-09 16:27:12.809 139687 DEBUG oslo.privsep.daemon [-] privsep: reply[a43ed797-a10e-4ea1-96e6-0172e30768c0]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe71:5e11'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 217637, 'tstamp': 217637}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 148038, 'error': None, 'target': 'ovnmeta-18ef4241-0151-441c-abdc-42d4b3a21b30', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.12/site-packages/oslo_privsep/daemon.py:515
Oct 09 16:27:12 compute-0 ovn_metadata_agent[28608]: 2025-10-09 16:27:12.825 139687 DEBUG oslo.privsep.daemon [-] privsep: reply[abbfb3a6-e9b1-49a7-8010-761d5538bc22]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap18ef4241-01'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:71:5e:11'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 48], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 217637, 'reachable_time': 15240, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 148039, 'error': None, 'target': 'ovnmeta-18ef4241-0151-441c-abdc-42d4b3a21b30', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.12/site-packages/oslo_privsep/daemon.py:515
Oct 09 16:27:12 compute-0 ovn_metadata_agent[28608]: 2025-10-09 16:27:12.855 139687 DEBUG oslo.privsep.daemon [-] privsep: reply[7236e25d-f976-4424-b6a9-8ce7ad1ea7f4]: (4, None) _call_back /usr/lib/python3.12/site-packages/oslo_privsep/daemon.py:515
Oct 09 16:27:12 compute-0 ovn_metadata_agent[28608]: 2025-10-09 16:27:12.929 139687 DEBUG oslo.privsep.daemon [-] privsep: reply[ee5e64f3-425c-4d15-a630-fcf455536f23]: (4, None) _call_back /usr/lib/python3.12/site-packages/oslo_privsep/daemon.py:515
Oct 09 16:27:12 compute-0 ovn_metadata_agent[28608]: 2025-10-09 16:27:12.930 28613 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap18ef4241-00, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.12/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 09 16:27:12 compute-0 ovn_metadata_agent[28608]: 2025-10-09 16:27:12.931 28613 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.12/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 09 16:27:12 compute-0 ovn_metadata_agent[28608]: 2025-10-09 16:27:12.931 28613 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap18ef4241-00, may_exist=True, interface_attrs={}) do_commit /usr/lib/python3.12/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 09 16:27:12 compute-0 nova_compute[117331]: 2025-10-09 16:27:12.972 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 09 16:27:12 compute-0 kernel: tap18ef4241-00: entered promiscuous mode
Oct 09 16:27:12 compute-0 NetworkManager[1028]: <info>  [1760027232.9834] manager: (tap18ef4241-00): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/65)
Oct 09 16:27:12 compute-0 ovn_metadata_agent[28608]: 2025-10-09 16:27:12.984 28613 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap18ef4241-00, col_values=(('external_ids', {'iface-id': '2ea79927-f6b6-48ed-a992-4066429c8e5d'}),), if_exists=True) do_commit /usr/lib/python3.12/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 09 16:27:12 compute-0 nova_compute[117331]: 2025-10-09 16:27:12.986 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 09 16:27:12 compute-0 ovn_controller[19752]: 2025-10-09T16:27:12Z|00161|binding|INFO|Releasing lport 2ea79927-f6b6-48ed-a992-4066429c8e5d from this chassis (sb_readonly=0)
Oct 09 16:27:13 compute-0 ovn_metadata_agent[28608]: 2025-10-09 16:27:13.006 139687 DEBUG oslo.privsep.daemon [-] privsep: reply[9f16e7cc-a065-46ab-b8c3-3a76a6412422]: (4, '') _call_back /usr/lib/python3.12/site-packages/oslo_privsep/daemon.py:515
Oct 09 16:27:13 compute-0 ovn_metadata_agent[28608]: 2025-10-09 16:27:13.007 28613 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/18ef4241-0151-441c-abdc-42d4b3a21b30.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/18ef4241-0151-441c-abdc-42d4b3a21b30.pid.haproxy' get_value_from_file /usr/lib/python3.12/site-packages/neutron/agent/linux/utils.py:268
Oct 09 16:27:13 compute-0 ovn_metadata_agent[28608]: 2025-10-09 16:27:13.007 28613 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/18ef4241-0151-441c-abdc-42d4b3a21b30.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/18ef4241-0151-441c-abdc-42d4b3a21b30.pid.haproxy' get_value_from_file /usr/lib/python3.12/site-packages/neutron/agent/linux/utils.py:268
Oct 09 16:27:13 compute-0 ovn_metadata_agent[28608]: 2025-10-09 16:27:13.007 28613 DEBUG neutron.agent.linux.external_process [-] No haproxy process started for 18ef4241-0151-441c-abdc-42d4b3a21b30 disable /usr/lib/python3.12/site-packages/neutron/agent/linux/external_process.py:173
Oct 09 16:27:13 compute-0 ovn_metadata_agent[28608]: 2025-10-09 16:27:13.007 28613 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/18ef4241-0151-441c-abdc-42d4b3a21b30.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/18ef4241-0151-441c-abdc-42d4b3a21b30.pid.haproxy' get_value_from_file /usr/lib/python3.12/site-packages/neutron/agent/linux/utils.py:268
Oct 09 16:27:13 compute-0 ovn_metadata_agent[28608]: 2025-10-09 16:27:13.008 139687 DEBUG oslo.privsep.daemon [-] privsep: reply[122062cc-e1aa-42b1-ba84-d2cc0fe3f3a6]: (4, None) _call_back /usr/lib/python3.12/site-packages/oslo_privsep/daemon.py:515
Oct 09 16:27:13 compute-0 ovn_metadata_agent[28608]: 2025-10-09 16:27:13.008 28613 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/18ef4241-0151-441c-abdc-42d4b3a21b30.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/18ef4241-0151-441c-abdc-42d4b3a21b30.pid.haproxy' get_value_from_file /usr/lib/python3.12/site-packages/neutron/agent/linux/utils.py:268
Oct 09 16:27:13 compute-0 ovn_metadata_agent[28608]: 2025-10-09 16:27:13.009 139687 DEBUG oslo.privsep.daemon [-] privsep: reply[6d63f376-7dc8-48f6-970b-57981af7fcf1]: (4, None) _call_back /usr/lib/python3.12/site-packages/oslo_privsep/daemon.py:515
Oct 09 16:27:13 compute-0 ovn_metadata_agent[28608]: 2025-10-09 16:27:13.009 28613 DEBUG neutron.agent.metadata.driver_base [-] haproxy_cfg = 
Oct 09 16:27:13 compute-0 ovn_metadata_agent[28608]: global
Oct 09 16:27:13 compute-0 ovn_metadata_agent[28608]:     log         /dev/log local0 debug
Oct 09 16:27:13 compute-0 ovn_metadata_agent[28608]:     log-tag     haproxy-metadata-proxy-18ef4241-0151-441c-abdc-42d4b3a21b30
Oct 09 16:27:13 compute-0 ovn_metadata_agent[28608]:     user        root
Oct 09 16:27:13 compute-0 ovn_metadata_agent[28608]:     group       root
Oct 09 16:27:13 compute-0 ovn_metadata_agent[28608]:     maxconn     1024
Oct 09 16:27:13 compute-0 ovn_metadata_agent[28608]:     pidfile     /var/lib/neutron/external/pids/18ef4241-0151-441c-abdc-42d4b3a21b30.pid.haproxy
Oct 09 16:27:13 compute-0 ovn_metadata_agent[28608]:     daemon
Oct 09 16:27:13 compute-0 ovn_metadata_agent[28608]: 
Oct 09 16:27:13 compute-0 ovn_metadata_agent[28608]: defaults
Oct 09 16:27:13 compute-0 ovn_metadata_agent[28608]:     log global
Oct 09 16:27:13 compute-0 ovn_metadata_agent[28608]:     mode http
Oct 09 16:27:13 compute-0 ovn_metadata_agent[28608]:     option httplog
Oct 09 16:27:13 compute-0 ovn_metadata_agent[28608]:     option dontlognull
Oct 09 16:27:13 compute-0 ovn_metadata_agent[28608]:     option http-server-close
Oct 09 16:27:13 compute-0 ovn_metadata_agent[28608]:     option forwardfor
Oct 09 16:27:13 compute-0 ovn_metadata_agent[28608]:     retries                 3
Oct 09 16:27:13 compute-0 ovn_metadata_agent[28608]:     timeout http-request    30s
Oct 09 16:27:13 compute-0 ovn_metadata_agent[28608]:     timeout connect         30s
Oct 09 16:27:13 compute-0 ovn_metadata_agent[28608]:     timeout client          32s
Oct 09 16:27:13 compute-0 ovn_metadata_agent[28608]:     timeout server          32s
Oct 09 16:27:13 compute-0 ovn_metadata_agent[28608]:     timeout http-keep-alive 30s
Oct 09 16:27:13 compute-0 ovn_metadata_agent[28608]: 
Oct 09 16:27:13 compute-0 ovn_metadata_agent[28608]: listen listener
Oct 09 16:27:13 compute-0 ovn_metadata_agent[28608]:     bind 169.254.169.254:80
Oct 09 16:27:13 compute-0 ovn_metadata_agent[28608]:     
Oct 09 16:27:13 compute-0 ovn_metadata_agent[28608]:     server metadata /var/lib/neutron/metadata_proxy
Oct 09 16:27:13 compute-0 ovn_metadata_agent[28608]: 
Oct 09 16:27:13 compute-0 nova_compute[117331]: 2025-10-09 16:27:12.990 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 09 16:27:13 compute-0 ovn_metadata_agent[28608]:     http-request add-header X-OVN-Network-ID 18ef4241-0151-441c-abdc-42d4b3a21b30
Oct 09 16:27:13 compute-0 ovn_metadata_agent[28608]:  create_config_file /usr/lib/python3.12/site-packages/neutron/agent/metadata/driver_base.py:155
Oct 09 16:27:13 compute-0 ovn_metadata_agent[28608]: 2025-10-09 16:27:13.010 28613 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-18ef4241-0151-441c-abdc-42d4b3a21b30', 'env', 'PROCESS_TAG=haproxy-18ef4241-0151-441c-abdc-42d4b3a21b30', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/18ef4241-0151-441c-abdc-42d4b3a21b30.conf'] create_process /usr/lib/python3.12/site-packages/neutron/agent/linux/utils.py:84
Oct 09 16:27:13 compute-0 nova_compute[117331]: 2025-10-09 16:27:13.011 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 09 16:27:13 compute-0 podman[148078]: 2025-10-09 16:27:13.436909046 +0000 UTC m=+0.054392458 container create 9b48415012573e8086a1e52cf3c9b14f3393836867721c199702295debe01f5c (image=38.102.83.66:5001/podified-master-centos10/openstack-neutron-metadata-agent-ovn:watcher_latest, name=neutron-haproxy-ovnmeta-18ef4241-0151-441c-abdc-42d4b3a21b30, tcib_build_tag=watcher_latest, io.buildah.version=1.41.4, org.label-schema.name=CentOS Stream 10 Base Image, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251007)
Oct 09 16:27:13 compute-0 systemd[1]: Started libpod-conmon-9b48415012573e8086a1e52cf3c9b14f3393836867721c199702295debe01f5c.scope.
Oct 09 16:27:13 compute-0 podman[148078]: 2025-10-09 16:27:13.40630916 +0000 UTC m=+0.023792602 image pull 4e15c189a95c5dcd1203c76fe304de5c8f8616da9a258ca168cc54a517f49b87 38.102.83.66:5001/podified-master-centos10/openstack-neutron-metadata-agent-ovn:watcher_latest
Oct 09 16:27:13 compute-0 systemd[1]: Started libcrun container.
Oct 09 16:27:13 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0e29f697265089460e0b4ba52a12c3dbc036e5fa63ffd678932ae0f59d3bd447/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Oct 09 16:27:13 compute-0 podman[148078]: 2025-10-09 16:27:13.530587735 +0000 UTC m=+0.148071187 container init 9b48415012573e8086a1e52cf3c9b14f3393836867721c199702295debe01f5c (image=38.102.83.66:5001/podified-master-centos10/openstack-neutron-metadata-agent-ovn:watcher_latest, name=neutron-haproxy-ovnmeta-18ef4241-0151-441c-abdc-42d4b3a21b30, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251007, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 10 Base Image, tcib_build_tag=watcher_latest, org.label-schema.license=GPLv2, io.buildah.version=1.41.4)
Oct 09 16:27:13 compute-0 podman[148078]: 2025-10-09 16:27:13.536580093 +0000 UTC m=+0.154063525 container start 9b48415012573e8086a1e52cf3c9b14f3393836867721c199702295debe01f5c (image=38.102.83.66:5001/podified-master-centos10/openstack-neutron-metadata-agent-ovn:watcher_latest, name=neutron-haproxy-ovnmeta-18ef4241-0151-441c-abdc-42d4b3a21b30, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251007, tcib_build_tag=watcher_latest, tcib_managed=true, org.label-schema.license=GPLv2, io.buildah.version=1.41.4, org.label-schema.schema-version=1.0)
Oct 09 16:27:13 compute-0 neutron-haproxy-ovnmeta-18ef4241-0151-441c-abdc-42d4b3a21b30[148093]: [NOTICE]   (148097) : New worker (148099) forked
Oct 09 16:27:13 compute-0 neutron-haproxy-ovnmeta-18ef4241-0151-441c-abdc-42d4b3a21b30[148093]: [NOTICE]   (148097) : Loading success.
Oct 09 16:27:13 compute-0 nova_compute[117331]: 2025-10-09 16:27:13.695 2 DEBUG nova.compute.manager [None req-af428dbd-e996-411a-81c2-7c86ec3e213c c2c0f7c2f15e4da6881dc393064b0e16 1a345bddd4804404a55948133ea8150f - - default default] [instance: f156c348-19a2-41d9-bf57-79cea2a84e3c] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.12/site-packages/nova/compute/manager.py:602
Oct 09 16:27:13 compute-0 nova_compute[117331]: 2025-10-09 16:27:13.699 2 DEBUG nova.virt.libvirt.driver [None req-af428dbd-e996-411a-81c2-7c86ec3e213c c2c0f7c2f15e4da6881dc393064b0e16 1a345bddd4804404a55948133ea8150f - - default default] [instance: f156c348-19a2-41d9-bf57-79cea2a84e3c] Guest created on hypervisor spawn /usr/lib/python3.12/site-packages/nova/virt/libvirt/driver.py:4816
Oct 09 16:27:13 compute-0 nova_compute[117331]: 2025-10-09 16:27:13.703 2 INFO nova.virt.libvirt.driver [-] [instance: f156c348-19a2-41d9-bf57-79cea2a84e3c] Instance spawned successfully.
Oct 09 16:27:13 compute-0 nova_compute[117331]: 2025-10-09 16:27:13.704 2 DEBUG nova.virt.libvirt.driver [None req-af428dbd-e996-411a-81c2-7c86ec3e213c c2c0f7c2f15e4da6881dc393064b0e16 1a345bddd4804404a55948133ea8150f - - default default] [instance: f156c348-19a2-41d9-bf57-79cea2a84e3c] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.12/site-packages/nova/virt/libvirt/driver.py:985
Oct 09 16:27:14 compute-0 nova_compute[117331]: 2025-10-09 16:27:14.220 2 DEBUG nova.virt.libvirt.driver [None req-af428dbd-e996-411a-81c2-7c86ec3e213c c2c0f7c2f15e4da6881dc393064b0e16 1a345bddd4804404a55948133ea8150f - - default default] [instance: f156c348-19a2-41d9-bf57-79cea2a84e3c] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.12/site-packages/nova/virt/libvirt/driver.py:1014
Oct 09 16:27:14 compute-0 nova_compute[117331]: 2025-10-09 16:27:14.220 2 DEBUG nova.virt.libvirt.driver [None req-af428dbd-e996-411a-81c2-7c86ec3e213c c2c0f7c2f15e4da6881dc393064b0e16 1a345bddd4804404a55948133ea8150f - - default default] [instance: f156c348-19a2-41d9-bf57-79cea2a84e3c] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.12/site-packages/nova/virt/libvirt/driver.py:1014
Oct 09 16:27:14 compute-0 nova_compute[117331]: 2025-10-09 16:27:14.221 2 DEBUG nova.virt.libvirt.driver [None req-af428dbd-e996-411a-81c2-7c86ec3e213c c2c0f7c2f15e4da6881dc393064b0e16 1a345bddd4804404a55948133ea8150f - - default default] [instance: f156c348-19a2-41d9-bf57-79cea2a84e3c] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.12/site-packages/nova/virt/libvirt/driver.py:1014
Oct 09 16:27:14 compute-0 nova_compute[117331]: 2025-10-09 16:27:14.222 2 DEBUG nova.virt.libvirt.driver [None req-af428dbd-e996-411a-81c2-7c86ec3e213c c2c0f7c2f15e4da6881dc393064b0e16 1a345bddd4804404a55948133ea8150f - - default default] [instance: f156c348-19a2-41d9-bf57-79cea2a84e3c] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.12/site-packages/nova/virt/libvirt/driver.py:1014
Oct 09 16:27:14 compute-0 nova_compute[117331]: 2025-10-09 16:27:14.222 2 DEBUG nova.virt.libvirt.driver [None req-af428dbd-e996-411a-81c2-7c86ec3e213c c2c0f7c2f15e4da6881dc393064b0e16 1a345bddd4804404a55948133ea8150f - - default default] [instance: f156c348-19a2-41d9-bf57-79cea2a84e3c] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.12/site-packages/nova/virt/libvirt/driver.py:1014
Oct 09 16:27:14 compute-0 nova_compute[117331]: 2025-10-09 16:27:14.223 2 DEBUG nova.virt.libvirt.driver [None req-af428dbd-e996-411a-81c2-7c86ec3e213c c2c0f7c2f15e4da6881dc393064b0e16 1a345bddd4804404a55948133ea8150f - - default default] [instance: f156c348-19a2-41d9-bf57-79cea2a84e3c] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.12/site-packages/nova/virt/libvirt/driver.py:1014
Oct 09 16:27:14 compute-0 ovn_metadata_agent[28608]: 2025-10-09 16:27:14.280 28613 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=9954897f-aa83-45dd-8e84-289816676c2a, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '18'}),), if_exists=True) do_commit /usr/lib/python3.12/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 09 16:27:14 compute-0 nova_compute[117331]: 2025-10-09 16:27:14.736 2 INFO nova.compute.manager [None req-af428dbd-e996-411a-81c2-7c86ec3e213c c2c0f7c2f15e4da6881dc393064b0e16 1a345bddd4804404a55948133ea8150f - - default default] [instance: f156c348-19a2-41d9-bf57-79cea2a84e3c] Took 8.86 seconds to spawn the instance on the hypervisor.
Oct 09 16:27:14 compute-0 nova_compute[117331]: 2025-10-09 16:27:14.737 2 DEBUG nova.compute.manager [None req-af428dbd-e996-411a-81c2-7c86ec3e213c c2c0f7c2f15e4da6881dc393064b0e16 1a345bddd4804404a55948133ea8150f - - default default] [instance: f156c348-19a2-41d9-bf57-79cea2a84e3c] Checking state _get_power_state /usr/lib/python3.12/site-packages/nova/compute/manager.py:1825
Oct 09 16:27:14 compute-0 podman[148108]: 2025-10-09 16:27:14.835579547 +0000 UTC m=+0.056034630 container health_status 86aa93e887899e54d383b0fc17fb3c5637680d1d8e146a130a557c837f0260fe (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']})
Oct 09 16:27:14 compute-0 nova_compute[117331]: 2025-10-09 16:27:14.842 2 DEBUG nova.compute.manager [req-279cde75-70cb-4a02-a477-f9c1732b99ab req-e9a10dfb-2056-487a-9ad4-c8c841282369 ee15961ad2be4bada6c106ac4f69f422 c076c42271004e7aa86e84416e33f826 - - default default] [instance: f156c348-19a2-41d9-bf57-79cea2a84e3c] Received event network-vif-plugged-9ce16b2c-f453-4ee6-ab7f-159c6386c5d3 external_instance_event /usr/lib/python3.12/site-packages/nova/compute/manager.py:11812
Oct 09 16:27:14 compute-0 nova_compute[117331]: 2025-10-09 16:27:14.842 2 DEBUG oslo_concurrency.lockutils [req-279cde75-70cb-4a02-a477-f9c1732b99ab req-e9a10dfb-2056-487a-9ad4-c8c841282369 ee15961ad2be4bada6c106ac4f69f422 c076c42271004e7aa86e84416e33f826 - - default default] Acquiring lock "f156c348-19a2-41d9-bf57-79cea2a84e3c-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:405
Oct 09 16:27:14 compute-0 nova_compute[117331]: 2025-10-09 16:27:14.843 2 DEBUG oslo_concurrency.lockutils [req-279cde75-70cb-4a02-a477-f9c1732b99ab req-e9a10dfb-2056-487a-9ad4-c8c841282369 ee15961ad2be4bada6c106ac4f69f422 c076c42271004e7aa86e84416e33f826 - - default default] Lock "f156c348-19a2-41d9-bf57-79cea2a84e3c-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:410
Oct 09 16:27:14 compute-0 nova_compute[117331]: 2025-10-09 16:27:14.843 2 DEBUG oslo_concurrency.lockutils [req-279cde75-70cb-4a02-a477-f9c1732b99ab req-e9a10dfb-2056-487a-9ad4-c8c841282369 ee15961ad2be4bada6c106ac4f69f422 c076c42271004e7aa86e84416e33f826 - - default default] Lock "f156c348-19a2-41d9-bf57-79cea2a84e3c-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:424
Oct 09 16:27:14 compute-0 nova_compute[117331]: 2025-10-09 16:27:14.843 2 DEBUG nova.compute.manager [req-279cde75-70cb-4a02-a477-f9c1732b99ab req-e9a10dfb-2056-487a-9ad4-c8c841282369 ee15961ad2be4bada6c106ac4f69f422 c076c42271004e7aa86e84416e33f826 - - default default] [instance: f156c348-19a2-41d9-bf57-79cea2a84e3c] No waiting events found dispatching network-vif-plugged-9ce16b2c-f453-4ee6-ab7f-159c6386c5d3 pop_instance_event /usr/lib/python3.12/site-packages/nova/compute/manager.py:344
Oct 09 16:27:14 compute-0 nova_compute[117331]: 2025-10-09 16:27:14.843 2 WARNING nova.compute.manager [req-279cde75-70cb-4a02-a477-f9c1732b99ab req-e9a10dfb-2056-487a-9ad4-c8c841282369 ee15961ad2be4bada6c106ac4f69f422 c076c42271004e7aa86e84416e33f826 - - default default] [instance: f156c348-19a2-41d9-bf57-79cea2a84e3c] Received unexpected event network-vif-plugged-9ce16b2c-f453-4ee6-ab7f-159c6386c5d3 for instance with vm_state active and task_state None.
Oct 09 16:27:15 compute-0 nova_compute[117331]: 2025-10-09 16:27:15.158 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 09 16:27:15 compute-0 nova_compute[117331]: 2025-10-09 16:27:15.286 2 INFO nova.compute.manager [None req-af428dbd-e996-411a-81c2-7c86ec3e213c c2c0f7c2f15e4da6881dc393064b0e16 1a345bddd4804404a55948133ea8150f - - default default] [instance: f156c348-19a2-41d9-bf57-79cea2a84e3c] Took 14.09 seconds to build instance.
Oct 09 16:27:15 compute-0 nova_compute[117331]: 2025-10-09 16:27:15.794 2 DEBUG oslo_concurrency.lockutils [None req-af428dbd-e996-411a-81c2-7c86ec3e213c c2c0f7c2f15e4da6881dc393064b0e16 1a345bddd4804404a55948133ea8150f - - default default] Lock "f156c348-19a2-41d9-bf57-79cea2a84e3c" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 15.615s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:424
Oct 09 16:27:16 compute-0 nova_compute[117331]: 2025-10-09 16:27:16.716 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 09 16:27:19 compute-0 nova_compute[117331]: 2025-10-09 16:27:19.307 2 DEBUG oslo_service.periodic_task [None req-92a13bed-c081-4c7b-87f3-bafebefb79f2 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.12/site-packages/oslo_service/periodic_task.py:210
Oct 09 16:27:20 compute-0 nova_compute[117331]: 2025-10-09 16:27:20.161 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 09 16:27:20 compute-0 nova_compute[117331]: 2025-10-09 16:27:20.307 2 DEBUG oslo_service.periodic_task [None req-92a13bed-c081-4c7b-87f3-bafebefb79f2 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.12/site-packages/oslo_service/periodic_task.py:210
Oct 09 16:27:20 compute-0 podman[148131]: 2025-10-09 16:27:20.824437259 +0000 UTC m=+0.056317509 container health_status 0e966c7a283689daf23c5db5017f23f685ee836319240188a9f70ddd246563c5 (image=38.102.83.66:5001/podified-master-centos10/openstack-neutron-metadata-agent-ovn:watcher_latest, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent, io.buildah.version=1.41.4, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, tcib_build_tag=watcher_latest, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': '38.102.83.66:5001/podified-master-centos10/openstack-neutron-metadata-agent-ovn:watcher_latest', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, org.label-schema.build-date=20251007)
Oct 09 16:27:20 compute-0 podman[148132]: 2025-10-09 16:27:20.832719991 +0000 UTC m=+0.061618657 container health_status 8227728d23f374cb9528be6ddf01dff38d8c792912a17b0d137932a18390b820 (image=38.102.83.66:5001/podified-master-centos10/openstack-iscsid:watcher_latest, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': '38.102.83.66:5001/podified-master-centos10/openstack-iscsid:watcher_latest', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, org.label-schema.build-date=20251007, config_id=iscsid, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_managed=true, container_name=iscsid, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.41.4, managed_by=edpm_ansible, tcib_build_tag=watcher_latest)
Oct 09 16:27:21 compute-0 nova_compute[117331]: 2025-10-09 16:27:21.307 2 DEBUG oslo_service.periodic_task [None req-92a13bed-c081-4c7b-87f3-bafebefb79f2 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.12/site-packages/oslo_service/periodic_task.py:210
Oct 09 16:27:21 compute-0 nova_compute[117331]: 2025-10-09 16:27:21.720 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 09 16:27:21 compute-0 nova_compute[117331]: 2025-10-09 16:27:21.819 2 DEBUG oslo_concurrency.lockutils [None req-92a13bed-c081-4c7b-87f3-bafebefb79f2 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:405
Oct 09 16:27:21 compute-0 nova_compute[117331]: 2025-10-09 16:27:21.820 2 DEBUG oslo_concurrency.lockutils [None req-92a13bed-c081-4c7b-87f3-bafebefb79f2 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:410
Oct 09 16:27:21 compute-0 nova_compute[117331]: 2025-10-09 16:27:21.821 2 DEBUG oslo_concurrency.lockutils [None req-92a13bed-c081-4c7b-87f3-bafebefb79f2 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.001s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:424
Oct 09 16:27:21 compute-0 nova_compute[117331]: 2025-10-09 16:27:21.821 2 DEBUG nova.compute.resource_tracker [None req-92a13bed-c081-4c7b-87f3-bafebefb79f2 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.12/site-packages/nova/compute/resource_tracker.py:937
Oct 09 16:27:22 compute-0 nova_compute[117331]: 2025-10-09 16:27:22.885 2 DEBUG oslo_concurrency.processutils [None req-92a13bed-c081-4c7b-87f3-bafebefb79f2 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/f156c348-19a2-41d9-bf57-79cea2a84e3c/disk --force-share --output=json execute /usr/lib/python3.12/site-packages/oslo_concurrency/processutils.py:349
Oct 09 16:27:22 compute-0 nova_compute[117331]: 2025-10-09 16:27:22.970 2 DEBUG oslo_concurrency.processutils [None req-92a13bed-c081-4c7b-87f3-bafebefb79f2 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/f156c348-19a2-41d9-bf57-79cea2a84e3c/disk --force-share --output=json" returned: 0 in 0.085s execute /usr/lib/python3.12/site-packages/oslo_concurrency/processutils.py:372
Oct 09 16:27:22 compute-0 nova_compute[117331]: 2025-10-09 16:27:22.971 2 DEBUG oslo_concurrency.processutils [None req-92a13bed-c081-4c7b-87f3-bafebefb79f2 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/f156c348-19a2-41d9-bf57-79cea2a84e3c/disk --force-share --output=json execute /usr/lib/python3.12/site-packages/oslo_concurrency/processutils.py:349
Oct 09 16:27:23 compute-0 nova_compute[117331]: 2025-10-09 16:27:23.031 2 DEBUG oslo_concurrency.processutils [None req-92a13bed-c081-4c7b-87f3-bafebefb79f2 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/f156c348-19a2-41d9-bf57-79cea2a84e3c/disk --force-share --output=json" returned: 0 in 0.060s execute /usr/lib/python3.12/site-packages/oslo_concurrency/processutils.py:372
Oct 09 16:27:23 compute-0 nova_compute[117331]: 2025-10-09 16:27:23.175 2 WARNING nova.virt.libvirt.driver [None req-92a13bed-c081-4c7b-87f3-bafebefb79f2 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Oct 09 16:27:23 compute-0 nova_compute[117331]: 2025-10-09 16:27:23.177 2 DEBUG oslo_concurrency.processutils [None req-92a13bed-c081-4c7b-87f3-bafebefb79f2 - - - - - -] Running cmd (subprocess): env LANG=C uptime execute /usr/lib/python3.12/site-packages/oslo_concurrency/processutils.py:349
Oct 09 16:27:23 compute-0 nova_compute[117331]: 2025-10-09 16:27:23.197 2 DEBUG oslo_concurrency.processutils [None req-92a13bed-c081-4c7b-87f3-bafebefb79f2 - - - - - -] CMD "env LANG=C uptime" returned: 0 in 0.021s execute /usr/lib/python3.12/site-packages/oslo_concurrency/processutils.py:372
Oct 09 16:27:23 compute-0 nova_compute[117331]: 2025-10-09 16:27:23.198 2 DEBUG nova.compute.resource_tracker [None req-92a13bed-c081-4c7b-87f3-bafebefb79f2 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=6005MB free_disk=73.2566146850586GB free_vcpus=7 pci_devices=[{"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": 
"label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.12/site-packages/nova/compute/resource_tracker.py:1136
Oct 09 16:27:23 compute-0 nova_compute[117331]: 2025-10-09 16:27:23.198 2 DEBUG oslo_concurrency.lockutils [None req-92a13bed-c081-4c7b-87f3-bafebefb79f2 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:405
Oct 09 16:27:23 compute-0 nova_compute[117331]: 2025-10-09 16:27:23.199 2 DEBUG oslo_concurrency.lockutils [None req-92a13bed-c081-4c7b-87f3-bafebefb79f2 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:410
Oct 09 16:27:24 compute-0 nova_compute[117331]: 2025-10-09 16:27:24.248 2 DEBUG nova.compute.resource_tracker [None req-92a13bed-c081-4c7b-87f3-bafebefb79f2 - - - - - -] Instance f156c348-19a2-41d9-bf57-79cea2a84e3c actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.12/site-packages/nova/compute/resource_tracker.py:1740
Oct 09 16:27:24 compute-0 nova_compute[117331]: 2025-10-09 16:27:24.248 2 DEBUG nova.compute.resource_tracker [None req-92a13bed-c081-4c7b-87f3-bafebefb79f2 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 1 _report_final_resource_view /usr/lib/python3.12/site-packages/nova/compute/resource_tracker.py:1159
Oct 09 16:27:24 compute-0 nova_compute[117331]: 2025-10-09 16:27:24.248 2 DEBUG nova.compute.resource_tracker [None req-92a13bed-c081-4c7b-87f3-bafebefb79f2 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=640MB phys_disk=79GB used_disk=1GB total_vcpus=8 used_vcpus=1 pci_stats=[] stats={'failed_builds': '0', 'uptime': ' 16:27:23 up 36 min,  0 user,  load average: 0.74, 0.63, 0.44\n', 'num_instances': '1', 'num_vm_active': '1', 'num_task_None': '1', 'num_os_type_None': '1', 'num_proj_1a345bddd4804404a55948133ea8150f': '1', 'io_workload': '0'} _report_final_resource_view /usr/lib/python3.12/site-packages/nova/compute/resource_tracker.py:1168
Oct 09 16:27:24 compute-0 nova_compute[117331]: 2025-10-09 16:27:24.283 2 DEBUG nova.compute.provider_tree [None req-92a13bed-c081-4c7b-87f3-bafebefb79f2 - - - - - -] Inventory has not changed in ProviderTree for provider: 593051b8-2000-437f-a915-2616fc8b1671 update_inventory /usr/lib/python3.12/site-packages/nova/compute/provider_tree.py:180
Oct 09 16:27:24 compute-0 nova_compute[117331]: 2025-10-09 16:27:24.793 2 DEBUG nova.scheduler.client.report [None req-92a13bed-c081-4c7b-87f3-bafebefb79f2 - - - - - -] Inventory has not changed for provider 593051b8-2000-437f-a915-2616fc8b1671 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 79, 'reserved': 1, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.12/site-packages/nova/scheduler/client/report.py:958
Oct 09 16:27:25 compute-0 nova_compute[117331]: 2025-10-09 16:27:25.201 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 09 16:27:25 compute-0 nova_compute[117331]: 2025-10-09 16:27:25.304 2 DEBUG nova.compute.resource_tracker [None req-92a13bed-c081-4c7b-87f3-bafebefb79f2 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.12/site-packages/nova/compute/resource_tracker.py:1097
Oct 09 16:27:25 compute-0 nova_compute[117331]: 2025-10-09 16:27:25.305 2 DEBUG oslo_concurrency.lockutils [None req-92a13bed-c081-4c7b-87f3-bafebefb79f2 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 2.106s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:424
Oct 09 16:27:26 compute-0 ovn_controller[19752]: 2025-10-09T16:27:26Z|00021|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:ba:95:b3 10.100.0.8
Oct 09 16:27:26 compute-0 ovn_controller[19752]: 2025-10-09T16:27:26Z|00022|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:ba:95:b3 10.100.0.8
Oct 09 16:27:26 compute-0 nova_compute[117331]: 2025-10-09 16:27:26.721 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 09 16:27:27 compute-0 nova_compute[117331]: 2025-10-09 16:27:27.306 2 DEBUG oslo_service.periodic_task [None req-92a13bed-c081-4c7b-87f3-bafebefb79f2 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.12/site-packages/oslo_service/periodic_task.py:210
Oct 09 16:27:27 compute-0 nova_compute[117331]: 2025-10-09 16:27:27.307 2 DEBUG oslo_service.periodic_task [None req-92a13bed-c081-4c7b-87f3-bafebefb79f2 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.12/site-packages/oslo_service/periodic_task.py:210
Oct 09 16:27:27 compute-0 nova_compute[117331]: 2025-10-09 16:27:27.307 2 DEBUG oslo_service.periodic_task [None req-92a13bed-c081-4c7b-87f3-bafebefb79f2 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.12/site-packages/oslo_service/periodic_task.py:210
Oct 09 16:27:27 compute-0 nova_compute[117331]: 2025-10-09 16:27:27.307 2 DEBUG oslo_service.periodic_task [None req-92a13bed-c081-4c7b-87f3-bafebefb79f2 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.12/site-packages/oslo_service/periodic_task.py:210
Oct 09 16:27:27 compute-0 nova_compute[117331]: 2025-10-09 16:27:27.308 2 DEBUG nova.compute.manager [None req-92a13bed-c081-4c7b-87f3-bafebefb79f2 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.12/site-packages/nova/compute/manager.py:11228
Oct 09 16:27:27 compute-0 nova_compute[117331]: 2025-10-09 16:27:27.308 2 DEBUG oslo_service.periodic_task [None req-92a13bed-c081-4c7b-87f3-bafebefb79f2 - - - - - -] Running periodic task ComputeManager._cleanup_expired_console_auth_tokens run_periodic_tasks /usr/lib/python3.12/site-packages/oslo_service/periodic_task.py:210
Oct 09 16:27:28 compute-0 nova_compute[117331]: 2025-10-09 16:27:28.816 2 DEBUG oslo_service.periodic_task [None req-92a13bed-c081-4c7b-87f3-bafebefb79f2 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.12/site-packages/oslo_service/periodic_task.py:210
Oct 09 16:27:29 compute-0 podman[127775]: time="2025-10-09T16:27:29Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Oct 09 16:27:29 compute-0 podman[127775]: @ - - [09/Oct/2025:16:27:29 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 20749 "" "Go-http-client/1.1"
Oct 09 16:27:29 compute-0 podman[127775]: @ - - [09/Oct/2025:16:27:29 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 3493 "" "Go-http-client/1.1"
Oct 09 16:27:29 compute-0 podman[148191]: 2025-10-09 16:27:29.84425557 +0000 UTC m=+0.062537655 container health_status 64fa251112d01abb34ec1bb8e22019815ddcaaaa391ce5ec91517147ac5a9994 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, io.openshift.tags=minimal rhel9, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, vendor=Red Hat, Inc., release=1755695350, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., build-date=2025-08-20T13:12:41, config_id=edpm, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., container_name=openstack_network_exporter, io.buildah.version=1.33.7, architecture=x86_64, url=https://catalog.redhat.com/en/search?searchType=containers, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.expose-services=, name=ubi9-minimal, version=9.6, com.redhat.component=ubi9-minimal-container, vcs-type=git, maintainer=Red Hat, Inc., io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, distribution-scope=public, managed_by=edpm_ansible)
Oct 09 16:27:29 compute-0 podman[148192]: 2025-10-09 16:27:29.879452852 +0000 UTC m=+0.095890329 container health_status d615e4f24ff62fbb14be4a63e6da568ec5b22309773c1c66103b6b4de64a8a2f (image=38.102.83.66:5001/podified-master-centos10/openstack-ovn-controller:watcher_latest, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, container_name=ovn_controller, org.label-schema.name=CentOS Stream 10 Base Image, config_id=ovn_controller, org.label-schema.build-date=20251007, org.label-schema.vendor=CentOS, tcib_build_tag=watcher_latest, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': '38.102.83.66:5001/podified-master-centos10/openstack-ovn-controller:watcher_latest', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.4, org.label-schema.license=GPLv2)
Oct 09 16:27:30 compute-0 nova_compute[117331]: 2025-10-09 16:27:30.251 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 09 16:27:31 compute-0 nova_compute[117331]: 2025-10-09 16:27:31.307 2 DEBUG oslo_service.periodic_task [None req-92a13bed-c081-4c7b-87f3-bafebefb79f2 - - - - - -] Running periodic task ComputeManager._run_pending_deletes run_periodic_tasks /usr/lib/python3.12/site-packages/oslo_service/periodic_task.py:210
Oct 09 16:27:31 compute-0 nova_compute[117331]: 2025-10-09 16:27:31.307 2 DEBUG nova.compute.manager [None req-92a13bed-c081-4c7b-87f3-bafebefb79f2 - - - - - -] Cleaning up deleted instances _run_pending_deletes /usr/lib/python3.12/site-packages/nova/compute/manager.py:11909
Oct 09 16:27:31 compute-0 openstack_network_exporter[129925]: ERROR   16:27:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Oct 09 16:27:31 compute-0 openstack_network_exporter[129925]: ERROR   16:27:31 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Oct 09 16:27:31 compute-0 openstack_network_exporter[129925]: ERROR   16:27:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Oct 09 16:27:31 compute-0 openstack_network_exporter[129925]: ERROR   16:27:31 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Oct 09 16:27:31 compute-0 openstack_network_exporter[129925]: 
Oct 09 16:27:31 compute-0 openstack_network_exporter[129925]: ERROR   16:27:31 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Oct 09 16:27:31 compute-0 openstack_network_exporter[129925]: 
Oct 09 16:27:31 compute-0 nova_compute[117331]: 2025-10-09 16:27:31.724 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 09 16:27:31 compute-0 nova_compute[117331]: 2025-10-09 16:27:31.814 2 DEBUG nova.compute.manager [None req-92a13bed-c081-4c7b-87f3-bafebefb79f2 - - - - - -] There are 0 instances to clean _run_pending_deletes /usr/lib/python3.12/site-packages/nova/compute/manager.py:11918
Oct 09 16:27:35 compute-0 nova_compute[117331]: 2025-10-09 16:27:35.254 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 09 16:27:35 compute-0 ovn_metadata_agent[28608]: 2025-10-09 16:27:35.317 28613 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:405
Oct 09 16:27:35 compute-0 ovn_metadata_agent[28608]: 2025-10-09 16:27:35.318 28613 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.000s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:410
Oct 09 16:27:35 compute-0 ovn_metadata_agent[28608]: 2025-10-09 16:27:35.318 28613 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:424
Oct 09 16:27:36 compute-0 nova_compute[117331]: 2025-10-09 16:27:36.307 2 DEBUG oslo_service.periodic_task [None req-92a13bed-c081-4c7b-87f3-bafebefb79f2 - - - - - -] Running periodic task ComputeManager._cleanup_incomplete_migrations run_periodic_tasks /usr/lib/python3.12/site-packages/oslo_service/periodic_task.py:210
Oct 09 16:27:36 compute-0 nova_compute[117331]: 2025-10-09 16:27:36.307 2 DEBUG nova.compute.manager [None req-92a13bed-c081-4c7b-87f3-bafebefb79f2 - - - - - -] Cleaning up deleted instances with incomplete migration  _cleanup_incomplete_migrations /usr/lib/python3.12/site-packages/nova/compute/manager.py:11947
Oct 09 16:27:36 compute-0 nova_compute[117331]: 2025-10-09 16:27:36.726 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 09 16:27:40 compute-0 nova_compute[117331]: 2025-10-09 16:27:40.257 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 09 16:27:41 compute-0 nova_compute[117331]: 2025-10-09 16:27:41.756 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 09 16:27:42 compute-0 podman[148239]: 2025-10-09 16:27:42.827857905 +0000 UTC m=+0.060825652 container health_status 270830b5167cafbab735d8118980f26987e673cd0fa464dd2202394830bcca66 (image=38.102.83.66:5001/podified-master-centos10/openstack-multipathd:watcher_latest, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, container_name=multipathd, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=watcher_latest, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': '38.102.83.66:5001/podified-master-centos10/openstack-multipathd:watcher_latest', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, io.buildah.version=1.41.4, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251007, tcib_managed=true)
Oct 09 16:27:44 compute-0 nova_compute[117331]: 2025-10-09 16:27:44.990 2 DEBUG nova.virt.libvirt.driver [None req-9826dbc4-f4ee-4ab0-b5aa-c4d678262d8a 005258eefa5345409efe13f6a1de22ab c076c42271004e7aa86e84416e33f826 - - default default] [instance: b2337e1c-84ef-4aa5-9a78-05dee2b3b853] Creating tmpfile /var/lib/nova/instances/tmpuf1ow7nd to notify to other compute nodes that they should mount the same storage. _create_shared_storage_test_file /usr/lib/python3.12/site-packages/nova/virt/libvirt/driver.py:10944
Oct 09 16:27:44 compute-0 nova_compute[117331]: 2025-10-09 16:27:44.991 2 WARNING neutronclient.v2_0.client [None req-9826dbc4-f4ee-4ab0-b5aa-c4d678262d8a 005258eefa5345409efe13f6a1de22ab c076c42271004e7aa86e84416e33f826 - - default default] The python binding code in neutronclient is deprecated in favor of OpenstackSDK, please use that as this will be removed in a future release.
Oct 09 16:27:44 compute-0 nova_compute[117331]: 2025-10-09 16:27:44.997 2 DEBUG nova.compute.manager [None req-9826dbc4-f4ee-4ab0-b5aa-c4d678262d8a 005258eefa5345409efe13f6a1de22ab c076c42271004e7aa86e84416e33f826 - - default default] destination check data is LibvirtLiveMigrateData(bdms=<?>,block_migration=<?>,disk_available_mb=73728,disk_over_commit=<?>,dst_cpu_shared_set_info=set([]),dst_numa_info=<?>,dst_supports_mdev_live_migration=True,dst_supports_numa_live_migration=<?>,dst_wants_file_backed_memory=False,file_backed_memory_discard=<?>,filename='tmpuf1ow7nd',graphics_listen_addr_spice=127.0.0.1,graphics_listen_addr_vnc=::,image_type='qcow2',instance_relative_path=<?>,is_shared_block_storage=<?>,is_shared_instance_path=<?>,is_volume_backed=<?>,migration=<?>,old_vol_attachment_ids=<?>,pci_dev_map_src_dst=<?>,serial_listen_addr=None,serial_listen_ports=<?>,source_mdev_types=<?>,src_supports_native_luks=<?>,src_supports_numa_live_migration=<?>,supported_perf_events=<?>,target_connect_addr=<?>,target_mdevs=<?>,vifs=[VIFMigrateData],wait_for_vif_plugged=<?>) check_can_live_migrate_destination /usr/lib/python3.12/site-packages/nova/compute/manager.py:9086
Oct 09 16:27:45 compute-0 nova_compute[117331]: 2025-10-09 16:27:45.259 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 09 16:27:45 compute-0 podman[148259]: 2025-10-09 16:27:45.867961252 +0000 UTC m=+0.093856235 container health_status 86aa93e887899e54d383b0fc17fb3c5637680d1d8e146a130a557c837f0260fe (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm)
Oct 09 16:27:46 compute-0 nova_compute[117331]: 2025-10-09 16:27:46.759 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 09 16:27:47 compute-0 nova_compute[117331]: 2025-10-09 16:27:47.043 2 WARNING neutronclient.v2_0.client [None req-9826dbc4-f4ee-4ab0-b5aa-c4d678262d8a 005258eefa5345409efe13f6a1de22ab c076c42271004e7aa86e84416e33f826 - - default default] The python binding code in neutronclient is deprecated in favor of OpenstackSDK, please use that as this will be removed in a future release.
Oct 09 16:27:50 compute-0 nova_compute[117331]: 2025-10-09 16:27:50.261 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 09 16:27:51 compute-0 nova_compute[117331]: 2025-10-09 16:27:51.760 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 09 16:27:51 compute-0 podman[148283]: 2025-10-09 16:27:51.857656568 +0000 UTC m=+0.076313640 container health_status 0e966c7a283689daf23c5db5017f23f685ee836319240188a9f70ddd246563c5 (image=38.102.83.66:5001/podified-master-centos10/openstack-neutron-metadata-agent-ovn:watcher_latest, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_metadata_agent, io.buildah.version=1.41.4, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.schema-version=1.0, managed_by=edpm_ansible, org.label-schema.build-date=20251007, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=watcher_latest, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': '38.102.83.66:5001/podified-master-centos10/openstack-neutron-metadata-agent-ovn:watcher_latest', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent)
Oct 09 16:27:51 compute-0 podman[148284]: 2025-10-09 16:27:51.862078039 +0000 UTC m=+0.076555509 container health_status 8227728d23f374cb9528be6ddf01dff38d8c792912a17b0d137932a18390b820 (image=38.102.83.66:5001/podified-master-centos10/openstack-iscsid:watcher_latest, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 10 Base Image, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=watcher_latest, config_id=iscsid, container_name=iscsid, org.label-schema.build-date=20251007, io.buildah.version=1.41.4, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': '38.102.83.66:5001/podified-master-centos10/openstack-iscsid:watcher_latest', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']})
Oct 09 16:27:52 compute-0 nova_compute[117331]: 2025-10-09 16:27:52.696 2 DEBUG nova.compute.manager [None req-9826dbc4-f4ee-4ab0-b5aa-c4d678262d8a 005258eefa5345409efe13f6a1de22ab c076c42271004e7aa86e84416e33f826 - - default default] pre_live_migration data is LibvirtLiveMigrateData(bdms=<?>,block_migration=True,disk_available_mb=73728,disk_over_commit=<?>,dst_cpu_shared_set_info=set([]),dst_numa_info=<?>,dst_supports_mdev_live_migration=True,dst_supports_numa_live_migration=<?>,dst_wants_file_backed_memory=False,file_backed_memory_discard=<?>,filename='tmpuf1ow7nd',graphics_listen_addr_spice=127.0.0.1,graphics_listen_addr_vnc=::,image_type='qcow2',instance_relative_path='b2337e1c-84ef-4aa5-9a78-05dee2b3b853',is_shared_block_storage=False,is_shared_instance_path=False,is_volume_backed=False,migration=<?>,old_vol_attachment_ids=<?>,pci_dev_map_src_dst={},serial_listen_addr=None,serial_listen_ports=<?>,source_mdev_types=<?>,src_supports_native_luks=<?>,src_supports_numa_live_migration=<?>,supported_perf_events=<?>,target_connect_addr=<?>,target_mdevs=<?>,vifs=[VIFMigrateData],wait_for_vif_plugged=<?>) pre_live_migration /usr/lib/python3.12/site-packages/nova/compute/manager.py:9311
Oct 09 16:27:53 compute-0 nova_compute[117331]: 2025-10-09 16:27:53.710 2 DEBUG oslo_concurrency.lockutils [None req-9826dbc4-f4ee-4ab0-b5aa-c4d678262d8a 005258eefa5345409efe13f6a1de22ab c076c42271004e7aa86e84416e33f826 - - default default] Acquiring lock "refresh_cache-b2337e1c-84ef-4aa5-9a78-05dee2b3b853" lock /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:313
Oct 09 16:27:53 compute-0 nova_compute[117331]: 2025-10-09 16:27:53.711 2 DEBUG oslo_concurrency.lockutils [None req-9826dbc4-f4ee-4ab0-b5aa-c4d678262d8a 005258eefa5345409efe13f6a1de22ab c076c42271004e7aa86e84416e33f826 - - default default] Acquired lock "refresh_cache-b2337e1c-84ef-4aa5-9a78-05dee2b3b853" lock /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:316
Oct 09 16:27:53 compute-0 nova_compute[117331]: 2025-10-09 16:27:53.711 2 DEBUG nova.network.neutron [None req-9826dbc4-f4ee-4ab0-b5aa-c4d678262d8a 005258eefa5345409efe13f6a1de22ab c076c42271004e7aa86e84416e33f826 - - default default] [instance: b2337e1c-84ef-4aa5-9a78-05dee2b3b853] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.12/site-packages/nova/network/neutron.py:2070
Oct 09 16:27:54 compute-0 nova_compute[117331]: 2025-10-09 16:27:54.220 2 WARNING neutronclient.v2_0.client [None req-9826dbc4-f4ee-4ab0-b5aa-c4d678262d8a 005258eefa5345409efe13f6a1de22ab c076c42271004e7aa86e84416e33f826 - - default default] The python binding code in neutronclient is deprecated in favor of OpenstackSDK, please use that as this will be removed in a future release.
Oct 09 16:27:54 compute-0 nova_compute[117331]: 2025-10-09 16:27:54.983 2 WARNING neutronclient.v2_0.client [None req-9826dbc4-f4ee-4ab0-b5aa-c4d678262d8a 005258eefa5345409efe13f6a1de22ab c076c42271004e7aa86e84416e33f826 - - default default] The python binding code in neutronclient is deprecated in favor of OpenstackSDK, please use that as this will be removed in a future release.
Oct 09 16:27:55 compute-0 nova_compute[117331]: 2025-10-09 16:27:55.166 2 DEBUG nova.network.neutron [None req-9826dbc4-f4ee-4ab0-b5aa-c4d678262d8a 005258eefa5345409efe13f6a1de22ab c076c42271004e7aa86e84416e33f826 - - default default] [instance: b2337e1c-84ef-4aa5-9a78-05dee2b3b853] Updating instance_info_cache with network_info: [{"id": "86ba251b-c492-47f6-9c92-65be385d4bab", "address": "fa:16:3e:92:c5:bc", "network": {"id": "18ef4241-0151-441c-abdc-42d4b3a21b30", "bridge": "br-int", "label": "tempest-TestExecuteNodeResourceConsolidationStrategy-129271777-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c2c321aa8a494a8e8b49c81b79e3ceca", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap86ba251b-c4", "ovs_interfaceid": "86ba251b-c492-47f6-9c92-65be385d4bab", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.12/site-packages/nova/network/neutron.py:116
Oct 09 16:27:55 compute-0 nova_compute[117331]: 2025-10-09 16:27:55.263 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 09 16:27:55 compute-0 nova_compute[117331]: 2025-10-09 16:27:55.677 2 DEBUG oslo_concurrency.lockutils [None req-9826dbc4-f4ee-4ab0-b5aa-c4d678262d8a 005258eefa5345409efe13f6a1de22ab c076c42271004e7aa86e84416e33f826 - - default default] Releasing lock "refresh_cache-b2337e1c-84ef-4aa5-9a78-05dee2b3b853" lock /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:334
Oct 09 16:27:55 compute-0 nova_compute[117331]: 2025-10-09 16:27:55.695 2 DEBUG nova.virt.libvirt.driver [None req-9826dbc4-f4ee-4ab0-b5aa-c4d678262d8a 005258eefa5345409efe13f6a1de22ab c076c42271004e7aa86e84416e33f826 - - default default] [instance: b2337e1c-84ef-4aa5-9a78-05dee2b3b853] migrate_data in pre_live_migration: LibvirtLiveMigrateData(bdms=<?>,block_migration=True,disk_available_mb=73728,disk_over_commit=<?>,dst_cpu_shared_set_info=set([]),dst_numa_info=<?>,dst_supports_mdev_live_migration=True,dst_supports_numa_live_migration=<?>,dst_wants_file_backed_memory=False,file_backed_memory_discard=<?>,filename='tmpuf1ow7nd',graphics_listen_addr_spice=127.0.0.1,graphics_listen_addr_vnc=::,image_type='qcow2',instance_relative_path='b2337e1c-84ef-4aa5-9a78-05dee2b3b853',is_shared_block_storage=False,is_shared_instance_path=False,is_volume_backed=False,migration=<?>,old_vol_attachment_ids={},pci_dev_map_src_dst={},serial_listen_addr=None,serial_listen_ports=<?>,source_mdev_types=<?>,src_supports_native_luks=<?>,src_supports_numa_live_migration=<?>,supported_perf_events=<?>,target_connect_addr=<?>,target_mdevs=<?>,vifs=[VIFMigrateData],wait_for_vif_plugged=<?>) pre_live_migration /usr/lib/python3.12/site-packages/nova/virt/libvirt/driver.py:11737
Oct 09 16:27:55 compute-0 nova_compute[117331]: 2025-10-09 16:27:55.696 2 DEBUG nova.virt.libvirt.driver [None req-9826dbc4-f4ee-4ab0-b5aa-c4d678262d8a 005258eefa5345409efe13f6a1de22ab c076c42271004e7aa86e84416e33f826 - - default default] [instance: b2337e1c-84ef-4aa5-9a78-05dee2b3b853] Creating instance directory: /var/lib/nova/instances/b2337e1c-84ef-4aa5-9a78-05dee2b3b853 pre_live_migration /usr/lib/python3.12/site-packages/nova/virt/libvirt/driver.py:11750
Oct 09 16:27:55 compute-0 nova_compute[117331]: 2025-10-09 16:27:55.697 2 DEBUG nova.virt.libvirt.driver [None req-9826dbc4-f4ee-4ab0-b5aa-c4d678262d8a 005258eefa5345409efe13f6a1de22ab c076c42271004e7aa86e84416e33f826 - - default default] [instance: b2337e1c-84ef-4aa5-9a78-05dee2b3b853] Creating disk.info with the contents: {'/var/lib/nova/instances/b2337e1c-84ef-4aa5-9a78-05dee2b3b853/disk': 'qcow2', '/var/lib/nova/instances/b2337e1c-84ef-4aa5-9a78-05dee2b3b853/disk.config': 'raw'} pre_live_migration /usr/lib/python3.12/site-packages/nova/virt/libvirt/driver.py:11764
Oct 09 16:27:55 compute-0 nova_compute[117331]: 2025-10-09 16:27:55.698 2 DEBUG nova.virt.libvirt.driver [None req-9826dbc4-f4ee-4ab0-b5aa-c4d678262d8a 005258eefa5345409efe13f6a1de22ab c076c42271004e7aa86e84416e33f826 - - default default] [instance: b2337e1c-84ef-4aa5-9a78-05dee2b3b853] Checking to make sure images and backing files are present before live migration. pre_live_migration /usr/lib/python3.12/site-packages/nova/virt/libvirt/driver.py:11774
Oct 09 16:27:55 compute-0 nova_compute[117331]: 2025-10-09 16:27:55.698 2 DEBUG nova.objects.instance [None req-9826dbc4-f4ee-4ab0-b5aa-c4d678262d8a 005258eefa5345409efe13f6a1de22ab c076c42271004e7aa86e84416e33f826 - - default default] Lazy-loading 'trusted_certs' on Instance uuid b2337e1c-84ef-4aa5-9a78-05dee2b3b853 obj_load_attr /usr/lib/python3.12/site-packages/nova/objects/instance.py:1141
Oct 09 16:27:56 compute-0 nova_compute[117331]: 2025-10-09 16:27:56.204 2 DEBUG oslo_utils.imageutils.format_inspector [None req-9826dbc4-f4ee-4ab0-b5aa-c4d678262d8a 005258eefa5345409efe13f6a1de22ab c076c42271004e7aa86e84416e33f826 - - default default] Format inspector for vmdk does not match, excluding from consideration (Signature KDMV not found: b'\xebH\x90\x00') _process_chunk /usr/lib/python3.12/site-packages/oslo_utils/imageutils/format_inspector.py:1365
Oct 09 16:27:56 compute-0 nova_compute[117331]: 2025-10-09 16:27:56.209 2 DEBUG oslo_utils.imageutils.format_inspector [None req-9826dbc4-f4ee-4ab0-b5aa-c4d678262d8a 005258eefa5345409efe13f6a1de22ab c076c42271004e7aa86e84416e33f826 - - default default] Format inspector for vhdx does not match, excluding from consideration (Region signature not found at 30000) _process_chunk /usr/lib/python3.12/site-packages/oslo_utils/imageutils/format_inspector.py:1365
Oct 09 16:27:56 compute-0 nova_compute[117331]: 2025-10-09 16:27:56.210 2 DEBUG oslo_concurrency.processutils [None req-9826dbc4-f4ee-4ab0-b5aa-c4d678262d8a 005258eefa5345409efe13f6a1de22ab c076c42271004e7aa86e84416e33f826 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/cea3dacfdea0f3734ae526b812744e847bc2d356 --force-share --output=json execute /usr/lib/python3.12/site-packages/oslo_concurrency/processutils.py:349
Oct 09 16:27:56 compute-0 nova_compute[117331]: 2025-10-09 16:27:56.300 2 DEBUG oslo_concurrency.processutils [None req-9826dbc4-f4ee-4ab0-b5aa-c4d678262d8a 005258eefa5345409efe13f6a1de22ab c076c42271004e7aa86e84416e33f826 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/cea3dacfdea0f3734ae526b812744e847bc2d356 --force-share --output=json" returned: 0 in 0.089s execute /usr/lib/python3.12/site-packages/oslo_concurrency/processutils.py:372
Oct 09 16:27:56 compute-0 nova_compute[117331]: 2025-10-09 16:27:56.301 2 DEBUG oslo_concurrency.lockutils [None req-9826dbc4-f4ee-4ab0-b5aa-c4d678262d8a 005258eefa5345409efe13f6a1de22ab c076c42271004e7aa86e84416e33f826 - - default default] Acquiring lock "cea3dacfdea0f3734ae526b812744e847bc2d356" by "nova.virt.libvirt.imagebackend.Qcow2.create_image.<locals>.create_qcow2_image" inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:405
Oct 09 16:27:56 compute-0 nova_compute[117331]: 2025-10-09 16:27:56.301 2 DEBUG oslo_concurrency.lockutils [None req-9826dbc4-f4ee-4ab0-b5aa-c4d678262d8a 005258eefa5345409efe13f6a1de22ab c076c42271004e7aa86e84416e33f826 - - default default] Lock "cea3dacfdea0f3734ae526b812744e847bc2d356" acquired by "nova.virt.libvirt.imagebackend.Qcow2.create_image.<locals>.create_qcow2_image" :: waited 0.001s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:410
Oct 09 16:27:56 compute-0 nova_compute[117331]: 2025-10-09 16:27:56.302 2 DEBUG oslo_utils.imageutils.format_inspector [None req-9826dbc4-f4ee-4ab0-b5aa-c4d678262d8a 005258eefa5345409efe13f6a1de22ab c076c42271004e7aa86e84416e33f826 - - default default] Format inspector for vmdk does not match, excluding from consideration (Signature KDMV not found: b'\xebH\x90\x00') _process_chunk /usr/lib/python3.12/site-packages/oslo_utils/imageutils/format_inspector.py:1365
Oct 09 16:27:56 compute-0 nova_compute[117331]: 2025-10-09 16:27:56.306 2 DEBUG oslo_utils.imageutils.format_inspector [None req-9826dbc4-f4ee-4ab0-b5aa-c4d678262d8a 005258eefa5345409efe13f6a1de22ab c076c42271004e7aa86e84416e33f826 - - default default] Format inspector for vhdx does not match, excluding from consideration (Region signature not found at 30000) _process_chunk /usr/lib/python3.12/site-packages/oslo_utils/imageutils/format_inspector.py:1365
Oct 09 16:27:56 compute-0 nova_compute[117331]: 2025-10-09 16:27:56.306 2 DEBUG oslo_concurrency.processutils [None req-9826dbc4-f4ee-4ab0-b5aa-c4d678262d8a 005258eefa5345409efe13f6a1de22ab c076c42271004e7aa86e84416e33f826 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/cea3dacfdea0f3734ae526b812744e847bc2d356 --force-share --output=json execute /usr/lib/python3.12/site-packages/oslo_concurrency/processutils.py:349
Oct 09 16:27:56 compute-0 nova_compute[117331]: 2025-10-09 16:27:56.371 2 DEBUG oslo_concurrency.processutils [None req-9826dbc4-f4ee-4ab0-b5aa-c4d678262d8a 005258eefa5345409efe13f6a1de22ab c076c42271004e7aa86e84416e33f826 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/cea3dacfdea0f3734ae526b812744e847bc2d356 --force-share --output=json" returned: 0 in 0.065s execute /usr/lib/python3.12/site-packages/oslo_concurrency/processutils.py:372
Oct 09 16:27:56 compute-0 nova_compute[117331]: 2025-10-09 16:27:56.373 2 DEBUG oslo_concurrency.processutils [None req-9826dbc4-f4ee-4ab0-b5aa-c4d678262d8a 005258eefa5345409efe13f6a1de22ab c076c42271004e7aa86e84416e33f826 - - default default] Running cmd (subprocess): env LC_ALL=C LANG=C qemu-img create -f qcow2 -o backing_file=/var/lib/nova/instances/_base/cea3dacfdea0f3734ae526b812744e847bc2d356,backing_fmt=raw /var/lib/nova/instances/b2337e1c-84ef-4aa5-9a78-05dee2b3b853/disk 1073741824 execute /usr/lib/python3.12/site-packages/oslo_concurrency/processutils.py:349
Oct 09 16:27:56 compute-0 nova_compute[117331]: 2025-10-09 16:27:56.418 2 DEBUG oslo_concurrency.processutils [None req-9826dbc4-f4ee-4ab0-b5aa-c4d678262d8a 005258eefa5345409efe13f6a1de22ab c076c42271004e7aa86e84416e33f826 - - default default] CMD "env LC_ALL=C LANG=C qemu-img create -f qcow2 -o backing_file=/var/lib/nova/instances/_base/cea3dacfdea0f3734ae526b812744e847bc2d356,backing_fmt=raw /var/lib/nova/instances/b2337e1c-84ef-4aa5-9a78-05dee2b3b853/disk 1073741824" returned: 0 in 0.045s execute /usr/lib/python3.12/site-packages/oslo_concurrency/processutils.py:372
Oct 09 16:27:56 compute-0 nova_compute[117331]: 2025-10-09 16:27:56.419 2 DEBUG oslo_concurrency.lockutils [None req-9826dbc4-f4ee-4ab0-b5aa-c4d678262d8a 005258eefa5345409efe13f6a1de22ab c076c42271004e7aa86e84416e33f826 - - default default] Lock "cea3dacfdea0f3734ae526b812744e847bc2d356" "released" by "nova.virt.libvirt.imagebackend.Qcow2.create_image.<locals>.create_qcow2_image" :: held 0.118s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:424
Oct 09 16:27:56 compute-0 nova_compute[117331]: 2025-10-09 16:27:56.420 2 DEBUG oslo_concurrency.processutils [None req-9826dbc4-f4ee-4ab0-b5aa-c4d678262d8a 005258eefa5345409efe13f6a1de22ab c076c42271004e7aa86e84416e33f826 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/cea3dacfdea0f3734ae526b812744e847bc2d356 --force-share --output=json execute /usr/lib/python3.12/site-packages/oslo_concurrency/processutils.py:349
Oct 09 16:27:56 compute-0 nova_compute[117331]: 2025-10-09 16:27:56.506 2 DEBUG oslo_concurrency.processutils [None req-9826dbc4-f4ee-4ab0-b5aa-c4d678262d8a 005258eefa5345409efe13f6a1de22ab c076c42271004e7aa86e84416e33f826 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/cea3dacfdea0f3734ae526b812744e847bc2d356 --force-share --output=json" returned: 0 in 0.087s execute /usr/lib/python3.12/site-packages/oslo_concurrency/processutils.py:372
Oct 09 16:27:56 compute-0 nova_compute[117331]: 2025-10-09 16:27:56.508 2 DEBUG nova.virt.disk.api [None req-9826dbc4-f4ee-4ab0-b5aa-c4d678262d8a 005258eefa5345409efe13f6a1de22ab c076c42271004e7aa86e84416e33f826 - - default default] Checking if we can resize image /var/lib/nova/instances/b2337e1c-84ef-4aa5-9a78-05dee2b3b853/disk. size=1073741824 can_resize_image /usr/lib/python3.12/site-packages/nova/virt/disk/api.py:164
Oct 09 16:27:56 compute-0 nova_compute[117331]: 2025-10-09 16:27:56.509 2 DEBUG oslo_concurrency.processutils [None req-9826dbc4-f4ee-4ab0-b5aa-c4d678262d8a 005258eefa5345409efe13f6a1de22ab c076c42271004e7aa86e84416e33f826 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/b2337e1c-84ef-4aa5-9a78-05dee2b3b853/disk --force-share --output=json execute /usr/lib/python3.12/site-packages/oslo_concurrency/processutils.py:349
Oct 09 16:27:56 compute-0 nova_compute[117331]: 2025-10-09 16:27:56.577 2 DEBUG oslo_concurrency.processutils [None req-9826dbc4-f4ee-4ab0-b5aa-c4d678262d8a 005258eefa5345409efe13f6a1de22ab c076c42271004e7aa86e84416e33f826 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/b2337e1c-84ef-4aa5-9a78-05dee2b3b853/disk --force-share --output=json" returned: 0 in 0.068s execute /usr/lib/python3.12/site-packages/oslo_concurrency/processutils.py:372
Oct 09 16:27:56 compute-0 nova_compute[117331]: 2025-10-09 16:27:56.578 2 DEBUG nova.virt.disk.api [None req-9826dbc4-f4ee-4ab0-b5aa-c4d678262d8a 005258eefa5345409efe13f6a1de22ab c076c42271004e7aa86e84416e33f826 - - default default] Cannot resize image /var/lib/nova/instances/b2337e1c-84ef-4aa5-9a78-05dee2b3b853/disk to a smaller size. can_resize_image /usr/lib/python3.12/site-packages/nova/virt/disk/api.py:170
Oct 09 16:27:56 compute-0 nova_compute[117331]: 2025-10-09 16:27:56.578 2 DEBUG nova.objects.instance [None req-9826dbc4-f4ee-4ab0-b5aa-c4d678262d8a 005258eefa5345409efe13f6a1de22ab c076c42271004e7aa86e84416e33f826 - - default default] Lazy-loading 'migration_context' on Instance uuid b2337e1c-84ef-4aa5-9a78-05dee2b3b853 obj_load_attr /usr/lib/python3.12/site-packages/nova/objects/instance.py:1141
Oct 09 16:27:56 compute-0 nova_compute[117331]: 2025-10-09 16:27:56.806 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 09 16:27:57 compute-0 nova_compute[117331]: 2025-10-09 16:27:57.084 2 DEBUG nova.objects.base [None req-9826dbc4-f4ee-4ab0-b5aa-c4d678262d8a 005258eefa5345409efe13f6a1de22ab c076c42271004e7aa86e84416e33f826 - - default default] Object Instance<b2337e1c-84ef-4aa5-9a78-05dee2b3b853> lazy-loaded attributes: trusted_certs,migration_context wrapper /usr/lib/python3.12/site-packages/nova/objects/base.py:136
Oct 09 16:27:57 compute-0 nova_compute[117331]: 2025-10-09 16:27:57.085 2 DEBUG oslo_concurrency.processutils [None req-9826dbc4-f4ee-4ab0-b5aa-c4d678262d8a 005258eefa5345409efe13f6a1de22ab c076c42271004e7aa86e84416e33f826 - - default default] Running cmd (subprocess): env LC_ALL=C LANG=C qemu-img create -f raw /var/lib/nova/instances/b2337e1c-84ef-4aa5-9a78-05dee2b3b853/disk.config 497664 execute /usr/lib/python3.12/site-packages/oslo_concurrency/processutils.py:349
Oct 09 16:27:57 compute-0 nova_compute[117331]: 2025-10-09 16:27:57.108 2 DEBUG oslo_concurrency.processutils [None req-9826dbc4-f4ee-4ab0-b5aa-c4d678262d8a 005258eefa5345409efe13f6a1de22ab c076c42271004e7aa86e84416e33f826 - - default default] CMD "env LC_ALL=C LANG=C qemu-img create -f raw /var/lib/nova/instances/b2337e1c-84ef-4aa5-9a78-05dee2b3b853/disk.config 497664" returned: 0 in 0.023s execute /usr/lib/python3.12/site-packages/oslo_concurrency/processutils.py:372
Oct 09 16:27:57 compute-0 nova_compute[117331]: 2025-10-09 16:27:57.109 2 DEBUG nova.virt.libvirt.driver [None req-9826dbc4-f4ee-4ab0-b5aa-c4d678262d8a 005258eefa5345409efe13f6a1de22ab c076c42271004e7aa86e84416e33f826 - - default default] [instance: b2337e1c-84ef-4aa5-9a78-05dee2b3b853] Plugging VIFs using destination host port bindings before live migration. _pre_live_migration_plug_vifs /usr/lib/python3.12/site-packages/nova/virt/libvirt/driver.py:11704
Oct 09 16:27:57 compute-0 nova_compute[117331]: 2025-10-09 16:27:57.110 2 DEBUG nova.virt.libvirt.vif [None req-9826dbc4-f4ee-4ab0-b5aa-c4d678262d8a 005258eefa5345409efe13f6a1de22ab c076c42271004e7aa86e84416e33f826 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,compute_id=2,config_drive='True',created_at=2025-10-09T16:27:17Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description=None,display_name='tempest-TestExecuteNodeResourceConsolidationStrategy-server-2068562316',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-1.ctlplane.example.com',hostname='tempest-testexecutenoderesourceconsolidationstrategy-server-206',id=19,image_ref='b7d6e0af-25e4-4227-9dc6-43143898ceee',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-10-09T16:27:33Z,launched_on='compute-1.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-1.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=1,progress=0,project_id='1a345bddd4804404a55948133ea8150f',ramdisk_id='',reservation_id='r-9xxx25dz',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,manager,admin,member',image_base_image_ref='b7d6e0af-25e4-4227-9dc6-43143898ceee',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestExecuteNodeResourceConsolidationStrategy-1915104332',owner_user_name='tempest-TestExecuteNodeResourceConsolidationStrategy-1915104332-project-admin'},tags=<?>,task_state='migrating',terminated_at=None,trusted_certs=None,updated_at=2025-10-09T16:27:33Z,user_data=None,user_id='c2c0f7c2f15e4da6881dc393064b0e16',uuid=b2337e1c-84ef-4aa5-9a78-05dee2b3b853,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "86ba251b-c492-47f6-9c92-65be385d4bab", "address": "fa:16:3e:92:c5:bc", "network": {"id": "18ef4241-0151-441c-abdc-42d4b3a21b30", "bridge": "br-int", "label": "tempest-TestExecuteNodeResourceConsolidationStrategy-129271777-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c2c321aa8a494a8e8b49c81b79e3ceca", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system"}, "devname": "tap86ba251b-c4", "ovs_interfaceid": "86ba251b-c492-47f6-9c92-65be385d4bab", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {"os_vif_delegation": true}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.12/site-packages/nova/virt/libvirt/vif.py:721
Oct 09 16:27:57 compute-0 nova_compute[117331]: 2025-10-09 16:27:57.111 2 DEBUG nova.network.os_vif_util [None req-9826dbc4-f4ee-4ab0-b5aa-c4d678262d8a 005258eefa5345409efe13f6a1de22ab c076c42271004e7aa86e84416e33f826 - - default default] Converting VIF {"id": "86ba251b-c492-47f6-9c92-65be385d4bab", "address": "fa:16:3e:92:c5:bc", "network": {"id": "18ef4241-0151-441c-abdc-42d4b3a21b30", "bridge": "br-int", "label": "tempest-TestExecuteNodeResourceConsolidationStrategy-129271777-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c2c321aa8a494a8e8b49c81b79e3ceca", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system"}, "devname": "tap86ba251b-c4", "ovs_interfaceid": "86ba251b-c492-47f6-9c92-65be385d4bab", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {"os_vif_delegation": true}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.12/site-packages/nova/network/os_vif_util.py:511
Oct 09 16:27:57 compute-0 nova_compute[117331]: 2025-10-09 16:27:57.111 2 DEBUG nova.network.os_vif_util [None req-9826dbc4-f4ee-4ab0-b5aa-c4d678262d8a 005258eefa5345409efe13f6a1de22ab c076c42271004e7aa86e84416e33f826 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:92:c5:bc,bridge_name='br-int',has_traffic_filtering=True,id=86ba251b-c492-47f6-9c92-65be385d4bab,network=Network(18ef4241-0151-441c-abdc-42d4b3a21b30),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap86ba251b-c4') nova_to_osvif_vif /usr/lib/python3.12/site-packages/nova/network/os_vif_util.py:548
Oct 09 16:27:57 compute-0 nova_compute[117331]: 2025-10-09 16:27:57.112 2 DEBUG os_vif [None req-9826dbc4-f4ee-4ab0-b5aa-c4d678262d8a 005258eefa5345409efe13f6a1de22ab c076c42271004e7aa86e84416e33f826 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:92:c5:bc,bridge_name='br-int',has_traffic_filtering=True,id=86ba251b-c492-47f6-9c92-65be385d4bab,network=Network(18ef4241-0151-441c-abdc-42d4b3a21b30),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap86ba251b-c4') plug /usr/lib/python3.12/site-packages/os_vif/__init__.py:76
Oct 09 16:27:57 compute-0 nova_compute[117331]: 2025-10-09 16:27:57.113 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 22 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 09 16:27:57 compute-0 nova_compute[117331]: 2025-10-09 16:27:57.113 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.12/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 09 16:27:57 compute-0 nova_compute[117331]: 2025-10-09 16:27:57.113 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.12/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 09 16:27:57 compute-0 nova_compute[117331]: 2025-10-09 16:27:57.114 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 22 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 09 16:27:57 compute-0 nova_compute[117331]: 2025-10-09 16:27:57.114 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbCreateCommand(_result=None, table=QoS, columns={'type': 'linux-noop', 'external_ids': {'id': '2cebd7ea-aaa1-5df7-a9a3-0a8843443a27', '_type': 'linux-noop'}}, row=False) do_commit /usr/lib/python3.12/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 09 16:27:57 compute-0 nova_compute[117331]: 2025-10-09 16:27:57.115 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 09 16:27:57 compute-0 nova_compute[117331]: 2025-10-09 16:27:57.116 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 09 16:27:57 compute-0 nova_compute[117331]: 2025-10-09 16:27:57.118 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 22 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 09 16:27:57 compute-0 nova_compute[117331]: 2025-10-09 16:27:57.118 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap86ba251b-c4, may_exist=True, interface_attrs={}) do_commit /usr/lib/python3.12/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 09 16:27:57 compute-0 nova_compute[117331]: 2025-10-09 16:27:57.119 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Port, record=tap86ba251b-c4, col_values=(('qos', UUID('6866779d-ea26-4fe4-a0fa-2a2441d109f9')),), if_exists=True) do_commit /usr/lib/python3.12/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 09 16:27:57 compute-0 nova_compute[117331]: 2025-10-09 16:27:57.119 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=2): DbSetCommand(_result=None, table=Interface, record=tap86ba251b-c4, col_values=(('external_ids', {'iface-id': '86ba251b-c492-47f6-9c92-65be385d4bab', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:92:c5:bc', 'vm-uuid': 'b2337e1c-84ef-4aa5-9a78-05dee2b3b853'}),), if_exists=True) do_commit /usr/lib/python3.12/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 09 16:27:57 compute-0 nova_compute[117331]: 2025-10-09 16:27:57.120 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 09 16:27:57 compute-0 NetworkManager[1028]: <info>  [1760027277.1210] manager: (tap86ba251b-c4): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/66)
Oct 09 16:27:57 compute-0 nova_compute[117331]: 2025-10-09 16:27:57.123 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:248
Oct 09 16:27:57 compute-0 nova_compute[117331]: 2025-10-09 16:27:57.128 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 09 16:27:57 compute-0 nova_compute[117331]: 2025-10-09 16:27:57.129 2 INFO os_vif [None req-9826dbc4-f4ee-4ab0-b5aa-c4d678262d8a 005258eefa5345409efe13f6a1de22ab c076c42271004e7aa86e84416e33f826 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:92:c5:bc,bridge_name='br-int',has_traffic_filtering=True,id=86ba251b-c492-47f6-9c92-65be385d4bab,network=Network(18ef4241-0151-441c-abdc-42d4b3a21b30),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap86ba251b-c4')
Oct 09 16:27:57 compute-0 nova_compute[117331]: 2025-10-09 16:27:57.129 2 DEBUG nova.virt.libvirt.driver [None req-9826dbc4-f4ee-4ab0-b5aa-c4d678262d8a 005258eefa5345409efe13f6a1de22ab c076c42271004e7aa86e84416e33f826 - - default default] No dst_numa_info in migrate_data, no cores to power up in pre_live_migration. pre_live_migration /usr/lib/python3.12/site-packages/nova/virt/libvirt/driver.py:11851
Oct 09 16:27:57 compute-0 nova_compute[117331]: 2025-10-09 16:27:57.130 2 DEBUG nova.compute.manager [None req-9826dbc4-f4ee-4ab0-b5aa-c4d678262d8a 005258eefa5345409efe13f6a1de22ab c076c42271004e7aa86e84416e33f826 - - default default] driver pre_live_migration data is LibvirtLiveMigrateData(bdms=[],block_migration=True,disk_available_mb=73728,disk_over_commit=<?>,dst_cpu_shared_set_info=set([]),dst_numa_info=<?>,dst_supports_mdev_live_migration=True,dst_supports_numa_live_migration=<?>,dst_wants_file_backed_memory=False,file_backed_memory_discard=<?>,filename='tmpuf1ow7nd',graphics_listen_addr_spice=127.0.0.1,graphics_listen_addr_vnc=::,image_type='qcow2',instance_relative_path='b2337e1c-84ef-4aa5-9a78-05dee2b3b853',is_shared_block_storage=False,is_shared_instance_path=False,is_volume_backed=False,migration=<?>,old_vol_attachment_ids={},pci_dev_map_src_dst={},serial_listen_addr=None,serial_listen_ports=[],source_mdev_types=<?>,src_supports_native_luks=<?>,src_supports_numa_live_migration=<?>,supported_perf_events=[],target_connect_addr=None,target_mdevs=<?>,vifs=[VIFMigrateData],wait_for_vif_plugged=<?>) pre_live_migration /usr/lib/python3.12/site-packages/nova/compute/manager.py:9377
Oct 09 16:27:57 compute-0 nova_compute[117331]: 2025-10-09 16:27:57.130 2 WARNING neutronclient.v2_0.client [None req-9826dbc4-f4ee-4ab0-b5aa-c4d678262d8a 005258eefa5345409efe13f6a1de22ab c076c42271004e7aa86e84416e33f826 - - default default] The python binding code in neutronclient is deprecated in favor of OpenstackSDK, please use that as this will be removed in a future release.
Oct 09 16:27:57 compute-0 nova_compute[117331]: 2025-10-09 16:27:57.216 2 WARNING neutronclient.v2_0.client [None req-9826dbc4-f4ee-4ab0-b5aa-c4d678262d8a 005258eefa5345409efe13f6a1de22ab c076c42271004e7aa86e84416e33f826 - - default default] The python binding code in neutronclient is deprecated in favor of OpenstackSDK, please use that as this will be removed in a future release.
Oct 09 16:27:57 compute-0 ovn_metadata_agent[28608]: 2025-10-09 16:27:57.514 28613 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=19, options={'arp_ns_explicit_output': 'true', 'fdb_removal_limit': '0', 'ignore_lsp_down': 'false', 'mac_binding_removal_limit': '0', 'mac_prefix': '4a:65:5f', 'max_tunid': '16711680', 'northd_internal_version': '24.09.4-20.37.0-77.8', 'svc_monitor_mac': '32:24:1e:e3:5d:e8'}, ipsec=False) old=SB_Global(nb_cfg=18) matches /usr/lib/python3.12/site-packages/ovsdbapp/backend/ovs_idl/event.py:55
Oct 09 16:27:57 compute-0 ovn_metadata_agent[28608]: 2025-10-09 16:27:57.515 28613 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 9 seconds run /usr/lib/python3.12/site-packages/neutron/agent/ovn/metadata/agent.py:367
Oct 09 16:27:57 compute-0 nova_compute[117331]: 2025-10-09 16:27:57.516 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 09 16:27:58 compute-0 nova_compute[117331]: 2025-10-09 16:27:58.039 2 DEBUG nova.network.neutron [None req-9826dbc4-f4ee-4ab0-b5aa-c4d678262d8a 005258eefa5345409efe13f6a1de22ab c076c42271004e7aa86e84416e33f826 - - default default] [instance: b2337e1c-84ef-4aa5-9a78-05dee2b3b853] Port 86ba251b-c492-47f6-9c92-65be385d4bab updated with migration profile {'migrating_to': 'compute-0.ctlplane.example.com'} successfully _setup_migration_port_profile /usr/lib/python3.12/site-packages/nova/network/neutron.py:356
Oct 09 16:27:58 compute-0 nova_compute[117331]: 2025-10-09 16:27:58.059 2 DEBUG nova.compute.manager [None req-9826dbc4-f4ee-4ab0-b5aa-c4d678262d8a 005258eefa5345409efe13f6a1de22ab c076c42271004e7aa86e84416e33f826 - - default default] pre_live_migration result data is LibvirtLiveMigrateData(bdms=[],block_migration=True,disk_available_mb=73728,disk_over_commit=<?>,dst_cpu_shared_set_info=set([]),dst_numa_info=<?>,dst_supports_mdev_live_migration=True,dst_supports_numa_live_migration=<?>,dst_wants_file_backed_memory=False,file_backed_memory_discard=<?>,filename='tmpuf1ow7nd',graphics_listen_addr_spice=127.0.0.1,graphics_listen_addr_vnc=::,image_type='qcow2',instance_relative_path='b2337e1c-84ef-4aa5-9a78-05dee2b3b853',is_shared_block_storage=False,is_shared_instance_path=False,is_volume_backed=False,migration=<?>,old_vol_attachment_ids={},pci_dev_map_src_dst={},serial_listen_addr=None,serial_listen_ports=[],source_mdev_types=<?>,src_supports_native_luks=<?>,src_supports_numa_live_migration=<?>,supported_perf_events=[],target_connect_addr=None,target_mdevs=<?>,vifs=[VIFMigrateData],wait_for_vif_plugged=True) pre_live_migration /usr/lib/python3.12/site-packages/nova/compute/manager.py:9443
Oct 09 16:27:59 compute-0 podman[127775]: time="2025-10-09T16:27:59Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Oct 09 16:27:59 compute-0 podman[127775]: @ - - [09/Oct/2025:16:27:59 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 20749 "" "Go-http-client/1.1"
Oct 09 16:27:59 compute-0 podman[127775]: @ - - [09/Oct/2025:16:27:59 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 3482 "" "Go-http-client/1.1"
Oct 09 16:28:00 compute-0 podman[148342]: 2025-10-09 16:28:00.82335249 +0000 UTC m=+0.056525686 container health_status 64fa251112d01abb34ec1bb8e22019815ddcaaaa391ce5ec91517147ac5a9994 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_id=edpm, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, version=9.6, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., url=https://catalog.redhat.com/en/search?searchType=containers, io.openshift.tags=minimal rhel9, io.buildah.version=1.33.7, managed_by=edpm_ansible, distribution-scope=public, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, container_name=openstack_network_exporter, release=1755695350, com.redhat.component=ubi9-minimal-container, architecture=x86_64, name=ubi9-minimal, build-date=2025-08-20T13:12:41, io.openshift.expose-services=, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., vcs-type=git, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, vendor=Red Hat, Inc., maintainer=Red Hat, Inc.)
Oct 09 16:28:00 compute-0 podman[148343]: 2025-10-09 16:28:00.888455814 +0000 UTC m=+0.108805346 container health_status d615e4f24ff62fbb14be4a63e6da568ec5b22309773c1c66103b6b4de64a8a2f (image=38.102.83.66:5001/podified-master-centos10/openstack-ovn-controller:watcher_latest, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251007, config_id=ovn_controller, container_name=ovn_controller, io.buildah.version=1.41.4, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': '38.102.83.66:5001/podified-master-centos10/openstack-ovn-controller:watcher_latest', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=watcher_latest, tcib_managed=true)
Oct 09 16:28:01 compute-0 kernel: tap86ba251b-c4: entered promiscuous mode
Oct 09 16:28:01 compute-0 NetworkManager[1028]: <info>  [1760027281.3562] manager: (tap86ba251b-c4): new Tun device (/org/freedesktop/NetworkManager/Devices/67)
Oct 09 16:28:01 compute-0 nova_compute[117331]: 2025-10-09 16:28:01.358 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 09 16:28:01 compute-0 ovn_controller[19752]: 2025-10-09T16:28:01Z|00162|binding|INFO|Claiming lport 86ba251b-c492-47f6-9c92-65be385d4bab for this additional chassis.
Oct 09 16:28:01 compute-0 ovn_controller[19752]: 2025-10-09T16:28:01Z|00163|binding|INFO|86ba251b-c492-47f6-9c92-65be385d4bab: Claiming fa:16:3e:92:c5:bc 10.100.0.4
Oct 09 16:28:01 compute-0 ovn_metadata_agent[28608]: 2025-10-09 16:28:01.367 28613 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:92:c5:bc 10.100.0.4'], port_security=['fa:16:3e:92:c5:bc 10.100.0.4'], type=, nat_addresses=[], virtual_parent=[], up=[True], options={'requested-chassis': 'compute-1.ctlplane.example.com,compute-0.ctlplane.example.com', 'activation-strategy': 'rarp'}, parent_port=[], requested_additional_chassis=[<ovs.db.idl.Row object at 0x7f37b2576840>], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.4/28', 'neutron:device_id': 'b2337e1c-84ef-4aa5-9a78-05dee2b3b853', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-18ef4241-0151-441c-abdc-42d4b3a21b30', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '1a345bddd4804404a55948133ea8150f', 'neutron:revision_number': '10', 'neutron:security_group_ids': '9689e159-b15c-4e2f-9c7b-a0ccf3c3f578', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-1.ctlplane.example.com'}, additional_chassis=[<ovs.db.idl.Row object at 0x7f37b2576840>], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=0ed87cdf-61a6-4df0-ac74-37289a0bcd5f, chassis=[], tunnel_key=4, gateway_chassis=[], requested_chassis=[], logical_port=86ba251b-c492-47f6-9c92-65be385d4bab) old=Port_Binding(additional_chassis=[]) matches /usr/lib/python3.12/site-packages/ovsdbapp/backend/ovs_idl/event.py:55
Oct 09 16:28:01 compute-0 ovn_metadata_agent[28608]: 2025-10-09 16:28:01.368 28613 INFO neutron.agent.ovn.metadata.agent [-] Port 86ba251b-c492-47f6-9c92-65be385d4bab in datapath 18ef4241-0151-441c-abdc-42d4b3a21b30 unbound from our chassis
Oct 09 16:28:01 compute-0 ovn_metadata_agent[28608]: 2025-10-09 16:28:01.369 28613 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 18ef4241-0151-441c-abdc-42d4b3a21b30
Oct 09 16:28:01 compute-0 ovn_metadata_agent[28608]: 2025-10-09 16:28:01.390 139687 DEBUG oslo.privsep.daemon [-] privsep: reply[a4377b6f-cf71-4f7e-8c87-0c650f28ecec]: (4, True) _call_back /usr/lib/python3.12/site-packages/oslo_privsep/daemon.py:515
Oct 09 16:28:01 compute-0 ovn_controller[19752]: 2025-10-09T16:28:01Z|00164|binding|INFO|Setting lport 86ba251b-c492-47f6-9c92-65be385d4bab ovn-installed in OVS
Oct 09 16:28:01 compute-0 nova_compute[117331]: 2025-10-09 16:28:01.395 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 09 16:28:01 compute-0 nova_compute[117331]: 2025-10-09 16:28:01.400 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 09 16:28:01 compute-0 systemd-udevd[148403]: Network interface NamePolicy= disabled on kernel command line.
Oct 09 16:28:01 compute-0 systemd-machined[77487]: New machine qemu-13-instance-00000013.
Oct 09 16:28:01 compute-0 openstack_network_exporter[129925]: ERROR   16:28:01 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Oct 09 16:28:01 compute-0 openstack_network_exporter[129925]: ERROR   16:28:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Oct 09 16:28:01 compute-0 openstack_network_exporter[129925]: ERROR   16:28:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Oct 09 16:28:01 compute-0 openstack_network_exporter[129925]: ERROR   16:28:01 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Oct 09 16:28:01 compute-0 openstack_network_exporter[129925]: 
Oct 09 16:28:01 compute-0 openstack_network_exporter[129925]: ERROR   16:28:01 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Oct 09 16:28:01 compute-0 openstack_network_exporter[129925]: 
Oct 09 16:28:01 compute-0 NetworkManager[1028]: <info>  [1760027281.4230] device (tap86ba251b-c4): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Oct 09 16:28:01 compute-0 systemd[1]: Started Virtual Machine qemu-13-instance-00000013.
Oct 09 16:28:01 compute-0 NetworkManager[1028]: <info>  [1760027281.4258] device (tap86ba251b-c4): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Oct 09 16:28:01 compute-0 ovn_metadata_agent[28608]: 2025-10-09 16:28:01.435 141048 DEBUG oslo.privsep.daemon [-] privsep: reply[ea3d3a38-b8e9-4cb1-9d2d-ae64c96ee99f]: (4, None) _call_back /usr/lib/python3.12/site-packages/oslo_privsep/daemon.py:515
Oct 09 16:28:01 compute-0 ovn_metadata_agent[28608]: 2025-10-09 16:28:01.439 141048 DEBUG oslo.privsep.daemon [-] privsep: reply[c9ac5745-96d6-4af6-af62-5e3a78d1a302]: (4, None) _call_back /usr/lib/python3.12/site-packages/oslo_privsep/daemon.py:515
Oct 09 16:28:01 compute-0 ovn_metadata_agent[28608]: 2025-10-09 16:28:01.474 141048 DEBUG oslo.privsep.daemon [-] privsep: reply[7d61c6e7-327c-4656-8800-a07eed9a5955]: (4, None) _call_back /usr/lib/python3.12/site-packages/oslo_privsep/daemon.py:515
Oct 09 16:28:01 compute-0 ovn_metadata_agent[28608]: 2025-10-09 16:28:01.495 139687 DEBUG oslo.privsep.daemon [-] privsep: reply[7db8819a-16aa-409e-8922-5df662960b26]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap18ef4241-01'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:71:5e:11'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 10, 'tx_packets': 5, 'rx_bytes': 916, 'tx_bytes': 354, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 10, 'tx_packets': 5, 'rx_bytes': 916, 'tx_bytes': 354, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 48], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 217637, 'reachable_time': 15240, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 8, 'inoctets': 720, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 8, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 720, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 8, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 148413, 'error': None, 'target': 'ovnmeta-18ef4241-0151-441c-abdc-42d4b3a21b30', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.12/site-packages/oslo_privsep/daemon.py:515
Oct 09 16:28:01 compute-0 ovn_metadata_agent[28608]: 2025-10-09 16:28:01.513 139687 DEBUG oslo.privsep.daemon [-] privsep: reply[f9675054-2aa3-4eca-a94a-8aa4359a7205]: (4, ({'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tap18ef4241-01'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 217648, 'tstamp': 217648}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 148416, 'error': None, 'target': 'ovnmeta-18ef4241-0151-441c-abdc-42d4b3a21b30', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tap18ef4241-01'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 217652, 'tstamp': 217652}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 148416, 'error': None, 'target': 'ovnmeta-18ef4241-0151-441c-abdc-42d4b3a21b30', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.12/site-packages/oslo_privsep/daemon.py:515
Oct 09 16:28:01 compute-0 ovn_metadata_agent[28608]: 2025-10-09 16:28:01.515 28613 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap18ef4241-00, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.12/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 09 16:28:01 compute-0 nova_compute[117331]: 2025-10-09 16:28:01.517 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 09 16:28:01 compute-0 nova_compute[117331]: 2025-10-09 16:28:01.518 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 09 16:28:01 compute-0 ovn_metadata_agent[28608]: 2025-10-09 16:28:01.518 28613 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap18ef4241-00, may_exist=True, interface_attrs={}) do_commit /usr/lib/python3.12/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 09 16:28:01 compute-0 ovn_metadata_agent[28608]: 2025-10-09 16:28:01.518 28613 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.12/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 09 16:28:01 compute-0 ovn_metadata_agent[28608]: 2025-10-09 16:28:01.519 28613 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap18ef4241-00, col_values=(('external_ids', {'iface-id': '2ea79927-f6b6-48ed-a992-4066429c8e5d'}),), if_exists=True) do_commit /usr/lib/python3.12/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 09 16:28:01 compute-0 ovn_metadata_agent[28608]: 2025-10-09 16:28:01.519 28613 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.12/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 09 16:28:01 compute-0 ovn_metadata_agent[28608]: 2025-10-09 16:28:01.520 139687 DEBUG oslo.privsep.daemon [-] privsep: reply[8b8b4087-fec2-4226-a66c-397ca363e3ed]: (4, '\nglobal\n    log         /dev/log local0 debug\n    log-tag     haproxy-metadata-proxy-18ef4241-0151-441c-abdc-42d4b3a21b30\n    user        root\n    group       root\n    maxconn     1024\n    pidfile     /var/lib/neutron/external/pids/18ef4241-0151-441c-abdc-42d4b3a21b30.pid.haproxy\n    daemon\n\ndefaults\n    log global\n    mode http\n    option httplog\n    option dontlognull\n    option http-server-close\n    option forwardfor\n    retries                 3\n    timeout http-request    30s\n    timeout connect         30s\n    timeout client          32s\n    timeout server          32s\n    timeout http-keep-alive 30s\n\nlisten listener\n    bind 169.254.169.254:80\n    \n    server metadata /var/lib/neutron/metadata_proxy\n\n    http-request add-header X-OVN-Network-ID 18ef4241-0151-441c-abdc-42d4b3a21b30\n') _call_back /usr/lib/python3.12/site-packages/oslo_privsep/daemon.py:515
Oct 09 16:28:01 compute-0 nova_compute[117331]: 2025-10-09 16:28:01.809 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 09 16:28:02 compute-0 nova_compute[117331]: 2025-10-09 16:28:02.122 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 09 16:28:04 compute-0 ovn_controller[19752]: 2025-10-09T16:28:04Z|00165|binding|INFO|Claiming lport 86ba251b-c492-47f6-9c92-65be385d4bab for this chassis.
Oct 09 16:28:04 compute-0 ovn_controller[19752]: 2025-10-09T16:28:04Z|00166|binding|INFO|86ba251b-c492-47f6-9c92-65be385d4bab: Claiming fa:16:3e:92:c5:bc 10.100.0.4
Oct 09 16:28:04 compute-0 ovn_controller[19752]: 2025-10-09T16:28:04Z|00167|binding|INFO|Setting lport 86ba251b-c492-47f6-9c92-65be385d4bab up in Southbound
Oct 09 16:28:05 compute-0 nova_compute[117331]: 2025-10-09 16:28:05.829 2 INFO nova.compute.manager [None req-9826dbc4-f4ee-4ab0-b5aa-c4d678262d8a 005258eefa5345409efe13f6a1de22ab c076c42271004e7aa86e84416e33f826 - - default default] [instance: b2337e1c-84ef-4aa5-9a78-05dee2b3b853] Post operation of migration started
Oct 09 16:28:05 compute-0 nova_compute[117331]: 2025-10-09 16:28:05.831 2 WARNING neutronclient.v2_0.client [None req-9826dbc4-f4ee-4ab0-b5aa-c4d678262d8a 005258eefa5345409efe13f6a1de22ab c076c42271004e7aa86e84416e33f826 - - default default] The python binding code in neutronclient is deprecated in favor of OpenstackSDK, please use that as this will be removed in a future release.
Oct 09 16:28:05 compute-0 nova_compute[117331]: 2025-10-09 16:28:05.919 2 WARNING neutronclient.v2_0.client [None req-9826dbc4-f4ee-4ab0-b5aa-c4d678262d8a 005258eefa5345409efe13f6a1de22ab c076c42271004e7aa86e84416e33f826 - - default default] The python binding code in neutronclient is deprecated in favor of OpenstackSDK, please use that as this will be removed in a future release.
Oct 09 16:28:05 compute-0 nova_compute[117331]: 2025-10-09 16:28:05.921 2 WARNING neutronclient.v2_0.client [None req-9826dbc4-f4ee-4ab0-b5aa-c4d678262d8a 005258eefa5345409efe13f6a1de22ab c076c42271004e7aa86e84416e33f826 - - default default] The python binding code in neutronclient is deprecated in favor of OpenstackSDK, please use that as this will be removed in a future release.
Oct 09 16:28:06 compute-0 nova_compute[117331]: 2025-10-09 16:28:06.287 2 DEBUG oslo_concurrency.lockutils [None req-9826dbc4-f4ee-4ab0-b5aa-c4d678262d8a 005258eefa5345409efe13f6a1de22ab c076c42271004e7aa86e84416e33f826 - - default default] Acquiring lock "refresh_cache-b2337e1c-84ef-4aa5-9a78-05dee2b3b853" lock /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:313
Oct 09 16:28:06 compute-0 nova_compute[117331]: 2025-10-09 16:28:06.288 2 DEBUG oslo_concurrency.lockutils [None req-9826dbc4-f4ee-4ab0-b5aa-c4d678262d8a 005258eefa5345409efe13f6a1de22ab c076c42271004e7aa86e84416e33f826 - - default default] Acquired lock "refresh_cache-b2337e1c-84ef-4aa5-9a78-05dee2b3b853" lock /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:316
Oct 09 16:28:06 compute-0 nova_compute[117331]: 2025-10-09 16:28:06.288 2 DEBUG nova.network.neutron [None req-9826dbc4-f4ee-4ab0-b5aa-c4d678262d8a 005258eefa5345409efe13f6a1de22ab c076c42271004e7aa86e84416e33f826 - - default default] [instance: b2337e1c-84ef-4aa5-9a78-05dee2b3b853] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.12/site-packages/nova/network/neutron.py:2070
Oct 09 16:28:06 compute-0 ovn_metadata_agent[28608]: 2025-10-09 16:28:06.517 28613 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=9954897f-aa83-45dd-8e84-289816676c2a, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '19'}),), if_exists=True) do_commit /usr/lib/python3.12/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 09 16:28:06 compute-0 nova_compute[117331]: 2025-10-09 16:28:06.794 2 WARNING neutronclient.v2_0.client [None req-9826dbc4-f4ee-4ab0-b5aa-c4d678262d8a 005258eefa5345409efe13f6a1de22ab c076c42271004e7aa86e84416e33f826 - - default default] The python binding code in neutronclient is deprecated in favor of OpenstackSDK, please use that as this will be removed in a future release.
Oct 09 16:28:06 compute-0 nova_compute[117331]: 2025-10-09 16:28:06.810 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 09 16:28:07 compute-0 nova_compute[117331]: 2025-10-09 16:28:07.127 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 09 16:28:07 compute-0 nova_compute[117331]: 2025-10-09 16:28:07.811 2 WARNING neutronclient.v2_0.client [None req-9826dbc4-f4ee-4ab0-b5aa-c4d678262d8a 005258eefa5345409efe13f6a1de22ab c076c42271004e7aa86e84416e33f826 - - default default] The python binding code in neutronclient is deprecated in favor of OpenstackSDK, please use that as this will be removed in a future release.
Oct 09 16:28:08 compute-0 nova_compute[117331]: 2025-10-09 16:28:08.024 2 DEBUG nova.network.neutron [None req-9826dbc4-f4ee-4ab0-b5aa-c4d678262d8a 005258eefa5345409efe13f6a1de22ab c076c42271004e7aa86e84416e33f826 - - default default] [instance: b2337e1c-84ef-4aa5-9a78-05dee2b3b853] Updating instance_info_cache with network_info: [{"id": "86ba251b-c492-47f6-9c92-65be385d4bab", "address": "fa:16:3e:92:c5:bc", "network": {"id": "18ef4241-0151-441c-abdc-42d4b3a21b30", "bridge": "br-int", "label": "tempest-TestExecuteNodeResourceConsolidationStrategy-129271777-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c2c321aa8a494a8e8b49c81b79e3ceca", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap86ba251b-c4", "ovs_interfaceid": "86ba251b-c492-47f6-9c92-65be385d4bab", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {"os_vif_delegation": true}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.12/site-packages/nova/network/neutron.py:116
Oct 09 16:28:08 compute-0 nova_compute[117331]: 2025-10-09 16:28:08.530 2 DEBUG oslo_concurrency.lockutils [None req-9826dbc4-f4ee-4ab0-b5aa-c4d678262d8a 005258eefa5345409efe13f6a1de22ab c076c42271004e7aa86e84416e33f826 - - default default] Releasing lock "refresh_cache-b2337e1c-84ef-4aa5-9a78-05dee2b3b853" lock /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:334
Oct 09 16:28:09 compute-0 nova_compute[117331]: 2025-10-09 16:28:09.050 2 DEBUG oslo_concurrency.lockutils [None req-9826dbc4-f4ee-4ab0-b5aa-c4d678262d8a 005258eefa5345409efe13f6a1de22ab c076c42271004e7aa86e84416e33f826 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.allocate_pci_devices_for_instance" inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:405
Oct 09 16:28:09 compute-0 nova_compute[117331]: 2025-10-09 16:28:09.050 2 DEBUG oslo_concurrency.lockutils [None req-9826dbc4-f4ee-4ab0-b5aa-c4d678262d8a 005258eefa5345409efe13f6a1de22ab c076c42271004e7aa86e84416e33f826 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.allocate_pci_devices_for_instance" :: waited 0.001s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:410
Oct 09 16:28:09 compute-0 nova_compute[117331]: 2025-10-09 16:28:09.051 2 DEBUG oslo_concurrency.lockutils [None req-9826dbc4-f4ee-4ab0-b5aa-c4d678262d8a 005258eefa5345409efe13f6a1de22ab c076c42271004e7aa86e84416e33f826 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.allocate_pci_devices_for_instance" :: held 0.000s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:424
Oct 09 16:28:09 compute-0 nova_compute[117331]: 2025-10-09 16:28:09.055 2 INFO nova.virt.libvirt.driver [None req-9826dbc4-f4ee-4ab0-b5aa-c4d678262d8a 005258eefa5345409efe13f6a1de22ab c076c42271004e7aa86e84416e33f826 - - default default] [instance: b2337e1c-84ef-4aa5-9a78-05dee2b3b853] Sending announce-self command to QEMU monitor. Attempt 1 of 3
Oct 09 16:28:09 compute-0 virtqemud[117629]: Domain id=13 name='instance-00000013' uuid=b2337e1c-84ef-4aa5-9a78-05dee2b3b853 is tainted: custom-monitor
Oct 09 16:28:10 compute-0 nova_compute[117331]: 2025-10-09 16:28:10.062 2 INFO nova.virt.libvirt.driver [None req-9826dbc4-f4ee-4ab0-b5aa-c4d678262d8a 005258eefa5345409efe13f6a1de22ab c076c42271004e7aa86e84416e33f826 - - default default] [instance: b2337e1c-84ef-4aa5-9a78-05dee2b3b853] Sending announce-self command to QEMU monitor. Attempt 2 of 3
Oct 09 16:28:11 compute-0 nova_compute[117331]: 2025-10-09 16:28:11.068 2 INFO nova.virt.libvirt.driver [None req-9826dbc4-f4ee-4ab0-b5aa-c4d678262d8a 005258eefa5345409efe13f6a1de22ab c076c42271004e7aa86e84416e33f826 - - default default] [instance: b2337e1c-84ef-4aa5-9a78-05dee2b3b853] Sending announce-self command to QEMU monitor. Attempt 3 of 3
Oct 09 16:28:11 compute-0 nova_compute[117331]: 2025-10-09 16:28:11.073 2 DEBUG nova.compute.manager [None req-9826dbc4-f4ee-4ab0-b5aa-c4d678262d8a 005258eefa5345409efe13f6a1de22ab c076c42271004e7aa86e84416e33f826 - - default default] [instance: b2337e1c-84ef-4aa5-9a78-05dee2b3b853] Checking state _get_power_state /usr/lib/python3.12/site-packages/nova/compute/manager.py:1825
Oct 09 16:28:11 compute-0 nova_compute[117331]: 2025-10-09 16:28:11.588 2 DEBUG nova.objects.instance [None req-9826dbc4-f4ee-4ab0-b5aa-c4d678262d8a 005258eefa5345409efe13f6a1de22ab c076c42271004e7aa86e84416e33f826 - - default default] [instance: b2337e1c-84ef-4aa5-9a78-05dee2b3b853] Trying to apply a migration context that does not seem to be set for this instance apply_migration_context /usr/lib/python3.12/site-packages/nova/objects/instance.py:1067
Oct 09 16:28:11 compute-0 nova_compute[117331]: 2025-10-09 16:28:11.813 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 09 16:28:12 compute-0 nova_compute[117331]: 2025-10-09 16:28:12.129 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 09 16:28:12 compute-0 nova_compute[117331]: 2025-10-09 16:28:12.614 2 WARNING neutronclient.v2_0.client [None req-9826dbc4-f4ee-4ab0-b5aa-c4d678262d8a 005258eefa5345409efe13f6a1de22ab c076c42271004e7aa86e84416e33f826 - - default default] The python binding code in neutronclient is deprecated in favor of OpenstackSDK, please use that as this will be removed in a future release.
Oct 09 16:28:13 compute-0 nova_compute[117331]: 2025-10-09 16:28:13.023 2 WARNING neutronclient.v2_0.client [None req-9826dbc4-f4ee-4ab0-b5aa-c4d678262d8a 005258eefa5345409efe13f6a1de22ab c076c42271004e7aa86e84416e33f826 - - default default] The python binding code in neutronclient is deprecated in favor of OpenstackSDK, please use that as this will be removed in a future release.
Oct 09 16:28:13 compute-0 nova_compute[117331]: 2025-10-09 16:28:13.024 2 WARNING neutronclient.v2_0.client [None req-9826dbc4-f4ee-4ab0-b5aa-c4d678262d8a 005258eefa5345409efe13f6a1de22ab c076c42271004e7aa86e84416e33f826 - - default default] The python binding code in neutronclient is deprecated in favor of OpenstackSDK, please use that as this will be removed in a future release.
Oct 09 16:28:13 compute-0 podman[148440]: 2025-10-09 16:28:13.835347298 +0000 UTC m=+0.064538519 container health_status 270830b5167cafbab735d8118980f26987e673cd0fa464dd2202394830bcca66 (image=38.102.83.66:5001/podified-master-centos10/openstack-multipathd:watcher_latest, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': '38.102.83.66:5001/podified-master-centos10/openstack-multipathd:watcher_latest', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, io.buildah.version=1.41.4, container_name=multipathd, tcib_build_tag=watcher_latest, config_id=multipathd, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 10 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251007, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true)
Oct 09 16:28:14 compute-0 nova_compute[117331]: 2025-10-09 16:28:14.929 2 DEBUG oslo_concurrency.lockutils [None req-8397835c-a34e-414e-a930-fc39e9e5010a c2c0f7c2f15e4da6881dc393064b0e16 1a345bddd4804404a55948133ea8150f - - default default] Acquiring lock "b2337e1c-84ef-4aa5-9a78-05dee2b3b853" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:405
Oct 09 16:28:14 compute-0 nova_compute[117331]: 2025-10-09 16:28:14.930 2 DEBUG oslo_concurrency.lockutils [None req-8397835c-a34e-414e-a930-fc39e9e5010a c2c0f7c2f15e4da6881dc393064b0e16 1a345bddd4804404a55948133ea8150f - - default default] Lock "b2337e1c-84ef-4aa5-9a78-05dee2b3b853" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:410
Oct 09 16:28:14 compute-0 nova_compute[117331]: 2025-10-09 16:28:14.930 2 DEBUG oslo_concurrency.lockutils [None req-8397835c-a34e-414e-a930-fc39e9e5010a c2c0f7c2f15e4da6881dc393064b0e16 1a345bddd4804404a55948133ea8150f - - default default] Acquiring lock "b2337e1c-84ef-4aa5-9a78-05dee2b3b853-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:405
Oct 09 16:28:14 compute-0 nova_compute[117331]: 2025-10-09 16:28:14.931 2 DEBUG oslo_concurrency.lockutils [None req-8397835c-a34e-414e-a930-fc39e9e5010a c2c0f7c2f15e4da6881dc393064b0e16 1a345bddd4804404a55948133ea8150f - - default default] Lock "b2337e1c-84ef-4aa5-9a78-05dee2b3b853-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.001s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:410
Oct 09 16:28:14 compute-0 nova_compute[117331]: 2025-10-09 16:28:14.932 2 DEBUG oslo_concurrency.lockutils [None req-8397835c-a34e-414e-a930-fc39e9e5010a c2c0f7c2f15e4da6881dc393064b0e16 1a345bddd4804404a55948133ea8150f - - default default] Lock "b2337e1c-84ef-4aa5-9a78-05dee2b3b853-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.001s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:424
Oct 09 16:28:14 compute-0 nova_compute[117331]: 2025-10-09 16:28:14.945 2 INFO nova.compute.manager [None req-8397835c-a34e-414e-a930-fc39e9e5010a c2c0f7c2f15e4da6881dc393064b0e16 1a345bddd4804404a55948133ea8150f - - default default] [instance: b2337e1c-84ef-4aa5-9a78-05dee2b3b853] Terminating instance
Oct 09 16:28:15 compute-0 nova_compute[117331]: 2025-10-09 16:28:15.467 2 DEBUG nova.compute.manager [None req-8397835c-a34e-414e-a930-fc39e9e5010a c2c0f7c2f15e4da6881dc393064b0e16 1a345bddd4804404a55948133ea8150f - - default default] [instance: b2337e1c-84ef-4aa5-9a78-05dee2b3b853] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.12/site-packages/nova/compute/manager.py:3197
Oct 09 16:28:15 compute-0 kernel: tap86ba251b-c4 (unregistering): left promiscuous mode
Oct 09 16:28:15 compute-0 NetworkManager[1028]: <info>  [1760027295.5072] device (tap86ba251b-c4): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Oct 09 16:28:15 compute-0 ovn_controller[19752]: 2025-10-09T16:28:15Z|00168|binding|INFO|Releasing lport 86ba251b-c492-47f6-9c92-65be385d4bab from this chassis (sb_readonly=0)
Oct 09 16:28:15 compute-0 ovn_controller[19752]: 2025-10-09T16:28:15Z|00169|binding|INFO|Setting lport 86ba251b-c492-47f6-9c92-65be385d4bab down in Southbound
Oct 09 16:28:15 compute-0 nova_compute[117331]: 2025-10-09 16:28:15.516 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 09 16:28:15 compute-0 ovn_controller[19752]: 2025-10-09T16:28:15Z|00170|binding|INFO|Removing iface tap86ba251b-c4 ovn-installed in OVS
Oct 09 16:28:15 compute-0 nova_compute[117331]: 2025-10-09 16:28:15.518 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 09 16:28:15 compute-0 ovn_metadata_agent[28608]: 2025-10-09 16:28:15.544 28613 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:92:c5:bc 10.100.0.4'], port_security=['fa:16:3e:92:c5:bc 10.100.0.4'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.4/28', 'neutron:device_id': 'b2337e1c-84ef-4aa5-9a78-05dee2b3b853', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-18ef4241-0151-441c-abdc-42d4b3a21b30', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '1a345bddd4804404a55948133ea8150f', 'neutron:revision_number': '15', 'neutron:security_group_ids': '9689e159-b15c-4e2f-9c7b-a0ccf3c3f578', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=0ed87cdf-61a6-4df0-ac74-37289a0bcd5f, chassis=[], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f37b2576840>], logical_port=86ba251b-c492-47f6-9c92-65be385d4bab) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f37b2576840>]) matches /usr/lib/python3.12/site-packages/ovsdbapp/backend/ovs_idl/event.py:55
Oct 09 16:28:15 compute-0 ovn_metadata_agent[28608]: 2025-10-09 16:28:15.544 28613 INFO neutron.agent.ovn.metadata.agent [-] Port 86ba251b-c492-47f6-9c92-65be385d4bab in datapath 18ef4241-0151-441c-abdc-42d4b3a21b30 unbound from our chassis
Oct 09 16:28:15 compute-0 ovn_metadata_agent[28608]: 2025-10-09 16:28:15.546 28613 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 18ef4241-0151-441c-abdc-42d4b3a21b30
Oct 09 16:28:15 compute-0 nova_compute[117331]: 2025-10-09 16:28:15.546 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 09 16:28:15 compute-0 ovn_metadata_agent[28608]: 2025-10-09 16:28:15.574 139687 DEBUG oslo.privsep.daemon [-] privsep: reply[633a7279-2e50-4f29-a1ea-14a7c625c749]: (4, True) _call_back /usr/lib/python3.12/site-packages/oslo_privsep/daemon.py:515
Oct 09 16:28:15 compute-0 systemd[1]: machine-qemu\x2d13\x2dinstance\x2d00000013.scope: Deactivated successfully.
Oct 09 16:28:15 compute-0 systemd[1]: machine-qemu\x2d13\x2dinstance\x2d00000013.scope: Consumed 2.503s CPU time.
Oct 09 16:28:15 compute-0 systemd-machined[77487]: Machine qemu-13-instance-00000013 terminated.
Oct 09 16:28:15 compute-0 ovn_metadata_agent[28608]: 2025-10-09 16:28:15.615 141048 DEBUG oslo.privsep.daemon [-] privsep: reply[2214d2c8-a88a-4ddf-ac54-a5673454d999]: (4, None) _call_back /usr/lib/python3.12/site-packages/oslo_privsep/daemon.py:515
Oct 09 16:28:15 compute-0 ovn_metadata_agent[28608]: 2025-10-09 16:28:15.618 141048 DEBUG oslo.privsep.daemon [-] privsep: reply[018bd426-ef16-4ad3-b378-c046e1bba4ed]: (4, None) _call_back /usr/lib/python3.12/site-packages/oslo_privsep/daemon.py:515
Oct 09 16:28:15 compute-0 ovn_metadata_agent[28608]: 2025-10-09 16:28:15.647 141048 DEBUG oslo.privsep.daemon [-] privsep: reply[abe45137-0839-4a01-a390-682a0be22377]: (4, None) _call_back /usr/lib/python3.12/site-packages/oslo_privsep/daemon.py:515
Oct 09 16:28:15 compute-0 ovn_metadata_agent[28608]: 2025-10-09 16:28:15.664 139687 DEBUG oslo.privsep.daemon [-] privsep: reply[d44e05ea-674b-4706-9b31-744880ae6e78]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap18ef4241-01'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:71:5e:11'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 30, 'tx_packets': 7, 'rx_bytes': 1756, 'tx_bytes': 438, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 30, 'tx_packets': 7, 'rx_bytes': 1756, 'tx_bytes': 438, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 
0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 48], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 217637, 'reachable_time': 15240, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 8, 'inoctets': 720, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 8, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 720, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 8, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 148471, 'error': None, 'target': 'ovnmeta-18ef4241-0151-441c-abdc-42d4b3a21b30', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.12/site-packages/oslo_privsep/daemon.py:515
Oct 09 16:28:15 compute-0 ovn_metadata_agent[28608]: 2025-10-09 16:28:15.679 139687 DEBUG oslo.privsep.daemon [-] privsep: reply[b5ee5a23-9dbf-4b31-955a-8d09270141a7]: (4, ({'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tap18ef4241-01'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 217648, 'tstamp': 217648}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 148472, 'error': None, 'target': 'ovnmeta-18ef4241-0151-441c-abdc-42d4b3a21b30', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tap18ef4241-01'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 217652, 'tstamp': 217652}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 148472, 'error': None, 'target': 'ovnmeta-18ef4241-0151-441c-abdc-42d4b3a21b30', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.12/site-packages/oslo_privsep/daemon.py:515
Oct 09 16:28:15 compute-0 ovn_metadata_agent[28608]: 2025-10-09 16:28:15.680 28613 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap18ef4241-00, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.12/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 09 16:28:15 compute-0 kernel: tap86ba251b-c4: entered promiscuous mode
Oct 09 16:28:15 compute-0 nova_compute[117331]: 2025-10-09 16:28:15.715 2 DEBUG nova.compute.manager [req-862ee3f6-bbb1-43f7-b68b-52816701a52a req-7d86ceba-f8bd-4ff9-b654-78c6f6f0b35b ee15961ad2be4bada6c106ac4f69f422 c076c42271004e7aa86e84416e33f826 - - default default] [instance: b2337e1c-84ef-4aa5-9a78-05dee2b3b853] Received event network-vif-unplugged-86ba251b-c492-47f6-9c92-65be385d4bab external_instance_event /usr/lib/python3.12/site-packages/nova/compute/manager.py:11812
Oct 09 16:28:15 compute-0 nova_compute[117331]: 2025-10-09 16:28:15.715 2 DEBUG oslo_concurrency.lockutils [req-862ee3f6-bbb1-43f7-b68b-52816701a52a req-7d86ceba-f8bd-4ff9-b654-78c6f6f0b35b ee15961ad2be4bada6c106ac4f69f422 c076c42271004e7aa86e84416e33f826 - - default default] Acquiring lock "b2337e1c-84ef-4aa5-9a78-05dee2b3b853-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:405
Oct 09 16:28:15 compute-0 nova_compute[117331]: 2025-10-09 16:28:15.716 2 DEBUG oslo_concurrency.lockutils [req-862ee3f6-bbb1-43f7-b68b-52816701a52a req-7d86ceba-f8bd-4ff9-b654-78c6f6f0b35b ee15961ad2be4bada6c106ac4f69f422 c076c42271004e7aa86e84416e33f826 - - default default] Lock "b2337e1c-84ef-4aa5-9a78-05dee2b3b853-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:410
Oct 09 16:28:15 compute-0 nova_compute[117331]: 2025-10-09 16:28:15.716 2 DEBUG oslo_concurrency.lockutils [req-862ee3f6-bbb1-43f7-b68b-52816701a52a req-7d86ceba-f8bd-4ff9-b654-78c6f6f0b35b ee15961ad2be4bada6c106ac4f69f422 c076c42271004e7aa86e84416e33f826 - - default default] Lock "b2337e1c-84ef-4aa5-9a78-05dee2b3b853-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:424
Oct 09 16:28:15 compute-0 nova_compute[117331]: 2025-10-09 16:28:15.716 2 DEBUG nova.compute.manager [req-862ee3f6-bbb1-43f7-b68b-52816701a52a req-7d86ceba-f8bd-4ff9-b654-78c6f6f0b35b ee15961ad2be4bada6c106ac4f69f422 c076c42271004e7aa86e84416e33f826 - - default default] [instance: b2337e1c-84ef-4aa5-9a78-05dee2b3b853] No waiting events found dispatching network-vif-unplugged-86ba251b-c492-47f6-9c92-65be385d4bab pop_instance_event /usr/lib/python3.12/site-packages/nova/compute/manager.py:344
Oct 09 16:28:15 compute-0 nova_compute[117331]: 2025-10-09 16:28:15.716 2 DEBUG nova.compute.manager [req-862ee3f6-bbb1-43f7-b68b-52816701a52a req-7d86ceba-f8bd-4ff9-b654-78c6f6f0b35b ee15961ad2be4bada6c106ac4f69f422 c076c42271004e7aa86e84416e33f826 - - default default] [instance: b2337e1c-84ef-4aa5-9a78-05dee2b3b853] Received event network-vif-unplugged-86ba251b-c492-47f6-9c92-65be385d4bab for instance with task_state deleting. _process_instance_event /usr/lib/python3.12/site-packages/nova/compute/manager.py:11590
Oct 09 16:28:15 compute-0 nova_compute[117331]: 2025-10-09 16:28:15.731 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 09 16:28:15 compute-0 kernel: tap86ba251b-c4 (unregistering): left promiscuous mode
Oct 09 16:28:15 compute-0 ovn_controller[19752]: 2025-10-09T16:28:15Z|00171|binding|INFO|Claiming lport 86ba251b-c492-47f6-9c92-65be385d4bab for this chassis.
Oct 09 16:28:15 compute-0 ovn_controller[19752]: 2025-10-09T16:28:15Z|00172|binding|INFO|86ba251b-c492-47f6-9c92-65be385d4bab: Claiming fa:16:3e:92:c5:bc 10.100.0.4
Oct 09 16:28:15 compute-0 nova_compute[117331]: 2025-10-09 16:28:15.735 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 09 16:28:15 compute-0 ovn_metadata_agent[28608]: 2025-10-09 16:28:15.746 28613 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:92:c5:bc 10.100.0.4'], port_security=['fa:16:3e:92:c5:bc 10.100.0.4'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.4/28', 'neutron:device_id': 'b2337e1c-84ef-4aa5-9a78-05dee2b3b853', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-18ef4241-0151-441c-abdc-42d4b3a21b30', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '1a345bddd4804404a55948133ea8150f', 'neutron:revision_number': '17', 'neutron:security_group_ids': '9689e159-b15c-4e2f-9c7b-a0ccf3c3f578', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=0ed87cdf-61a6-4df0-ac74-37289a0bcd5f, chassis=[<ovs.db.idl.Row object at 0x7f37b2576840>], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f37b2576840>], logical_port=86ba251b-c492-47f6-9c92-65be385d4bab) old=Port_Binding(chassis=[]) matches /usr/lib/python3.12/site-packages/ovsdbapp/backend/ovs_idl/event.py:55
Oct 09 16:28:15 compute-0 ovn_controller[19752]: 2025-10-09T16:28:15Z|00173|binding|INFO|Setting lport 86ba251b-c492-47f6-9c92-65be385d4bab ovn-installed in OVS
Oct 09 16:28:15 compute-0 nova_compute[117331]: 2025-10-09 16:28:15.751 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 09 16:28:15 compute-0 ovn_controller[19752]: 2025-10-09T16:28:15Z|00174|binding|INFO|Setting lport 86ba251b-c492-47f6-9c92-65be385d4bab up in Southbound
Oct 09 16:28:15 compute-0 ovn_controller[19752]: 2025-10-09T16:28:15Z|00175|binding|INFO|Releasing lport 86ba251b-c492-47f6-9c92-65be385d4bab from this chassis (sb_readonly=1)
Oct 09 16:28:15 compute-0 ovn_controller[19752]: 2025-10-09T16:28:15Z|00176|if_status|INFO|Not setting lport 86ba251b-c492-47f6-9c92-65be385d4bab down as sb is readonly
Oct 09 16:28:15 compute-0 ovn_controller[19752]: 2025-10-09T16:28:15Z|00177|binding|INFO|Removing iface tap86ba251b-c4 ovn-installed in OVS
Oct 09 16:28:15 compute-0 ovn_metadata_agent[28608]: 2025-10-09 16:28:15.752 28613 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap18ef4241-00, may_exist=True, interface_attrs={}) do_commit /usr/lib/python3.12/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 09 16:28:15 compute-0 ovn_metadata_agent[28608]: 2025-10-09 16:28:15.752 28613 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.12/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 09 16:28:15 compute-0 nova_compute[117331]: 2025-10-09 16:28:15.752 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 09 16:28:15 compute-0 ovn_metadata_agent[28608]: 2025-10-09 16:28:15.752 28613 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap18ef4241-00, col_values=(('external_ids', {'iface-id': '2ea79927-f6b6-48ed-a992-4066429c8e5d'}),), if_exists=True) do_commit /usr/lib/python3.12/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 09 16:28:15 compute-0 ovn_metadata_agent[28608]: 2025-10-09 16:28:15.753 28613 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.12/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 09 16:28:15 compute-0 ovn_metadata_agent[28608]: 2025-10-09 16:28:15.754 139687 DEBUG oslo.privsep.daemon [-] privsep: reply[15efacad-a7bb-477f-bf43-211502ec8ac4]: (4, '\nglobal\n    log         /dev/log local0 debug\n    log-tag     haproxy-metadata-proxy-18ef4241-0151-441c-abdc-42d4b3a21b30\n    user        root\n    group       root\n    maxconn     1024\n    pidfile     /var/lib/neutron/external/pids/18ef4241-0151-441c-abdc-42d4b3a21b30.pid.haproxy\n    daemon\n\ndefaults\n    log global\n    mode http\n    option httplog\n    option dontlognull\n    option http-server-close\n    option forwardfor\n    retries                 3\n    timeout http-request    30s\n    timeout connect         30s\n    timeout client          32s\n    timeout server          32s\n    timeout http-keep-alive 30s\n\nlisten listener\n    bind 169.254.169.254:80\n    \n    server metadata /var/lib/neutron/metadata_proxy\n\n    http-request add-header X-OVN-Network-ID 18ef4241-0151-441c-abdc-42d4b3a21b30\n') _call_back /usr/lib/python3.12/site-packages/oslo_privsep/daemon.py:515
Oct 09 16:28:15 compute-0 ovn_metadata_agent[28608]: 2025-10-09 16:28:15.756 28613 INFO neutron.agent.ovn.metadata.agent [-] Port 86ba251b-c492-47f6-9c92-65be385d4bab in datapath 18ef4241-0151-441c-abdc-42d4b3a21b30 bound to our chassis
Oct 09 16:28:15 compute-0 ovn_metadata_agent[28608]: 2025-10-09 16:28:15.757 28613 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 18ef4241-0151-441c-abdc-42d4b3a21b30
Oct 09 16:28:15 compute-0 ovn_controller[19752]: 2025-10-09T16:28:15Z|00178|binding|INFO|Releasing lport 86ba251b-c492-47f6-9c92-65be385d4bab from this chassis (sb_readonly=0)
Oct 09 16:28:15 compute-0 ovn_controller[19752]: 2025-10-09T16:28:15Z|00179|binding|INFO|Setting lport 86ba251b-c492-47f6-9c92-65be385d4bab down in Southbound
Oct 09 16:28:15 compute-0 ovn_metadata_agent[28608]: 2025-10-09 16:28:15.766 28613 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:92:c5:bc 10.100.0.4'], port_security=['fa:16:3e:92:c5:bc 10.100.0.4'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.4/28', 'neutron:device_id': 'b2337e1c-84ef-4aa5-9a78-05dee2b3b853', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-18ef4241-0151-441c-abdc-42d4b3a21b30', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '1a345bddd4804404a55948133ea8150f', 'neutron:revision_number': '17', 'neutron:security_group_ids': '9689e159-b15c-4e2f-9c7b-a0ccf3c3f578', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=0ed87cdf-61a6-4df0-ac74-37289a0bcd5f, chassis=[], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f37b2576840>], logical_port=86ba251b-c492-47f6-9c92-65be385d4bab) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f37b2576840>]) matches /usr/lib/python3.12/site-packages/ovsdbapp/backend/ovs_idl/event.py:55
Oct 09 16:28:15 compute-0 nova_compute[117331]: 2025-10-09 16:28:15.768 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 09 16:28:15 compute-0 ovn_metadata_agent[28608]: 2025-10-09 16:28:15.774 139687 DEBUG oslo.privsep.daemon [-] privsep: reply[95b969e1-e384-4329-8049-55add6a9e83c]: (4, True) _call_back /usr/lib/python3.12/site-packages/oslo_privsep/daemon.py:515
Oct 09 16:28:15 compute-0 nova_compute[117331]: 2025-10-09 16:28:15.794 2 INFO nova.virt.libvirt.driver [-] [instance: b2337e1c-84ef-4aa5-9a78-05dee2b3b853] Instance destroyed successfully.
Oct 09 16:28:15 compute-0 nova_compute[117331]: 2025-10-09 16:28:15.795 2 DEBUG nova.objects.instance [None req-8397835c-a34e-414e-a930-fc39e9e5010a c2c0f7c2f15e4da6881dc393064b0e16 1a345bddd4804404a55948133ea8150f - - default default] Lazy-loading 'resources' on Instance uuid b2337e1c-84ef-4aa5-9a78-05dee2b3b853 obj_load_attr /usr/lib/python3.12/site-packages/nova/objects/instance.py:1141
Oct 09 16:28:15 compute-0 ovn_metadata_agent[28608]: 2025-10-09 16:28:15.804 141048 DEBUG oslo.privsep.daemon [-] privsep: reply[218b175e-e70d-4d53-80bd-446d6fbc8ce9]: (4, None) _call_back /usr/lib/python3.12/site-packages/oslo_privsep/daemon.py:515
Oct 09 16:28:15 compute-0 ovn_metadata_agent[28608]: 2025-10-09 16:28:15.807 141048 DEBUG oslo.privsep.daemon [-] privsep: reply[3dd3c8c5-a4a1-44ed-a6e6-2662535d66dc]: (4, None) _call_back /usr/lib/python3.12/site-packages/oslo_privsep/daemon.py:515
Oct 09 16:28:15 compute-0 ovn_metadata_agent[28608]: 2025-10-09 16:28:15.836 141048 DEBUG oslo.privsep.daemon [-] privsep: reply[6a8c04d8-3580-4359-90c4-f093e005253c]: (4, None) _call_back /usr/lib/python3.12/site-packages/oslo_privsep/daemon.py:515
Oct 09 16:28:15 compute-0 ovn_metadata_agent[28608]: 2025-10-09 16:28:15.856 139687 DEBUG oslo.privsep.daemon [-] privsep: reply[5f741afc-a9a2-4020-8a4c-b9253dfa3d0a]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap18ef4241-01'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:71:5e:11'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 30, 'tx_packets': 9, 'rx_bytes': 1756, 'tx_bytes': 522, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 30, 'tx_packets': 9, 'rx_bytes': 1756, 'tx_bytes': 522, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 48], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 217637, 'reachable_time': 15240, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 8, 'inoctets': 720, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 8, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 720, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 8, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 148493, 'error': None, 'target': 'ovnmeta-18ef4241-0151-441c-abdc-42d4b3a21b30', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.12/site-packages/oslo_privsep/daemon.py:515
Oct 09 16:28:15 compute-0 ovn_metadata_agent[28608]: 2025-10-09 16:28:15.876 139687 DEBUG oslo.privsep.daemon [-] privsep: reply[e0a9c248-ce0e-4022-a2bc-9dadc2386f03]: (4, ({'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tap18ef4241-01'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 217648, 'tstamp': 217648}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 148494, 'error': None, 'target': 'ovnmeta-18ef4241-0151-441c-abdc-42d4b3a21b30', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tap18ef4241-01'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 217652, 'tstamp': 217652}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 148494, 'error': None, 'target': 'ovnmeta-18ef4241-0151-441c-abdc-42d4b3a21b30', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.12/site-packages/oslo_privsep/daemon.py:515
Oct 09 16:28:15 compute-0 ovn_metadata_agent[28608]: 2025-10-09 16:28:15.878 28613 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap18ef4241-00, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.12/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 09 16:28:15 compute-0 nova_compute[117331]: 2025-10-09 16:28:15.879 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 09 16:28:15 compute-0 nova_compute[117331]: 2025-10-09 16:28:15.884 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 09 16:28:15 compute-0 ovn_metadata_agent[28608]: 2025-10-09 16:28:15.884 28613 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap18ef4241-00, may_exist=True, interface_attrs={}) do_commit /usr/lib/python3.12/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 09 16:28:15 compute-0 ovn_metadata_agent[28608]: 2025-10-09 16:28:15.885 28613 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.12/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 09 16:28:15 compute-0 ovn_metadata_agent[28608]: 2025-10-09 16:28:15.885 28613 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap18ef4241-00, col_values=(('external_ids', {'iface-id': '2ea79927-f6b6-48ed-a992-4066429c8e5d'}),), if_exists=True) do_commit /usr/lib/python3.12/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 09 16:28:15 compute-0 ovn_metadata_agent[28608]: 2025-10-09 16:28:15.885 28613 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.12/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 09 16:28:15 compute-0 ovn_metadata_agent[28608]: 2025-10-09 16:28:15.886 139687 DEBUG oslo.privsep.daemon [-] privsep: reply[00e7161b-bf52-4315-a869-fc79ff17b468]: (4, '\nglobal\n    log         /dev/log local0 debug\n    log-tag     haproxy-metadata-proxy-18ef4241-0151-441c-abdc-42d4b3a21b30\n    user        root\n    group       root\n    maxconn     1024\n    pidfile     /var/lib/neutron/external/pids/18ef4241-0151-441c-abdc-42d4b3a21b30.pid.haproxy\n    daemon\n\ndefaults\n    log global\n    mode http\n    option httplog\n    option dontlognull\n    option http-server-close\n    option forwardfor\n    retries                 3\n    timeout http-request    30s\n    timeout connect         30s\n    timeout client          32s\n    timeout server          32s\n    timeout http-keep-alive 30s\n\nlisten listener\n    bind 169.254.169.254:80\n    \n    server metadata /var/lib/neutron/metadata_proxy\n\n    http-request add-header X-OVN-Network-ID 18ef4241-0151-441c-abdc-42d4b3a21b30\n') _call_back /usr/lib/python3.12/site-packages/oslo_privsep/daemon.py:515
Oct 09 16:28:15 compute-0 ovn_metadata_agent[28608]: 2025-10-09 16:28:15.887 28613 INFO neutron.agent.ovn.metadata.agent [-] Port 86ba251b-c492-47f6-9c92-65be385d4bab in datapath 18ef4241-0151-441c-abdc-42d4b3a21b30 unbound from our chassis
Oct 09 16:28:15 compute-0 ovn_metadata_agent[28608]: 2025-10-09 16:28:15.888 28613 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 18ef4241-0151-441c-abdc-42d4b3a21b30
Oct 09 16:28:15 compute-0 ovn_metadata_agent[28608]: 2025-10-09 16:28:15.904 139687 DEBUG oslo.privsep.daemon [-] privsep: reply[aa8ee393-8fcd-4b79-97cc-9a9c8d23f960]: (4, True) _call_back /usr/lib/python3.12/site-packages/oslo_privsep/daemon.py:515
Oct 09 16:28:15 compute-0 ovn_metadata_agent[28608]: 2025-10-09 16:28:15.933 141048 DEBUG oslo.privsep.daemon [-] privsep: reply[98836a62-6793-4036-bc49-d3e7516bef6e]: (4, None) _call_back /usr/lib/python3.12/site-packages/oslo_privsep/daemon.py:515
Oct 09 16:28:15 compute-0 ovn_metadata_agent[28608]: 2025-10-09 16:28:15.937 141048 DEBUG oslo.privsep.daemon [-] privsep: reply[d95b90cf-950b-4267-9b8e-a3cebafb423c]: (4, None) _call_back /usr/lib/python3.12/site-packages/oslo_privsep/daemon.py:515
Oct 09 16:28:15 compute-0 ovn_metadata_agent[28608]: 2025-10-09 16:28:15.970 141048 DEBUG oslo.privsep.daemon [-] privsep: reply[8c9a3bb4-baff-4fdf-9caf-275eba49e02c]: (4, None) _call_back /usr/lib/python3.12/site-packages/oslo_privsep/daemon.py:515
Oct 09 16:28:15 compute-0 ovn_metadata_agent[28608]: 2025-10-09 16:28:15.988 139687 DEBUG oslo.privsep.daemon [-] privsep: reply[daf90fe4-f381-4a7c-b0f1-1ae5dfde7607]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap18ef4241-01'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:71:5e:11'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 30, 'tx_packets': 11, 'rx_bytes': 1756, 'tx_bytes': 606, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 30, 'tx_packets': 11, 'rx_bytes': 1756, 'tx_bytes': 606, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 48], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 217637, 'reachable_time': 15240, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 8, 'inoctets': 720, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 8, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 720, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 8, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 148501, 'error': None, 'target': 'ovnmeta-18ef4241-0151-441c-abdc-42d4b3a21b30', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.12/site-packages/oslo_privsep/daemon.py:515
Oct 09 16:28:16 compute-0 ovn_metadata_agent[28608]: 2025-10-09 16:28:16.009 139687 DEBUG oslo.privsep.daemon [-] privsep: reply[b625ff68-fb4f-48f2-978d-a3fbeb18f8a2]: (4, ({'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tap18ef4241-01'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 217648, 'tstamp': 217648}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 148502, 'error': None, 'target': 'ovnmeta-18ef4241-0151-441c-abdc-42d4b3a21b30', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tap18ef4241-01'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 217652, 'tstamp': 217652}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 148502, 'error': None, 'target': 'ovnmeta-18ef4241-0151-441c-abdc-42d4b3a21b30', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.12/site-packages/oslo_privsep/daemon.py:515
Oct 09 16:28:16 compute-0 ovn_metadata_agent[28608]: 2025-10-09 16:28:16.011 28613 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap18ef4241-00, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.12/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 09 16:28:16 compute-0 nova_compute[117331]: 2025-10-09 16:28:16.013 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 09 16:28:16 compute-0 nova_compute[117331]: 2025-10-09 16:28:16.016 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 09 16:28:16 compute-0 ovn_metadata_agent[28608]: 2025-10-09 16:28:16.016 28613 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap18ef4241-00, may_exist=True, interface_attrs={}) do_commit /usr/lib/python3.12/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 09 16:28:16 compute-0 ovn_metadata_agent[28608]: 2025-10-09 16:28:16.017 28613 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.12/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 09 16:28:16 compute-0 ovn_metadata_agent[28608]: 2025-10-09 16:28:16.017 28613 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap18ef4241-00, col_values=(('external_ids', {'iface-id': '2ea79927-f6b6-48ed-a992-4066429c8e5d'}),), if_exists=True) do_commit /usr/lib/python3.12/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 09 16:28:16 compute-0 ovn_metadata_agent[28608]: 2025-10-09 16:28:16.017 28613 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.12/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 09 16:28:16 compute-0 ovn_metadata_agent[28608]: 2025-10-09 16:28:16.019 139687 DEBUG oslo.privsep.daemon [-] privsep: reply[cbb8f94f-f850-41ba-b35d-18cd19633245]: (4, '\nglobal\n    log         /dev/log local0 debug\n    log-tag     haproxy-metadata-proxy-18ef4241-0151-441c-abdc-42d4b3a21b30\n    user        root\n    group       root\n    maxconn     1024\n    pidfile     /var/lib/neutron/external/pids/18ef4241-0151-441c-abdc-42d4b3a21b30.pid.haproxy\n    daemon\n\ndefaults\n    log global\n    mode http\n    option httplog\n    option dontlognull\n    option http-server-close\n    option forwardfor\n    retries                 3\n    timeout http-request    30s\n    timeout connect         30s\n    timeout client          32s\n    timeout server          32s\n    timeout http-keep-alive 30s\n\nlisten listener\n    bind 169.254.169.254:80\n    \n    server metadata /var/lib/neutron/metadata_proxy\n\n    http-request add-header X-OVN-Network-ID 18ef4241-0151-441c-abdc-42d4b3a21b30\n') _call_back /usr/lib/python3.12/site-packages/oslo_privsep/daemon.py:515
Oct 09 16:28:16 compute-0 nova_compute[117331]: 2025-10-09 16:28:16.304 2 DEBUG nova.virt.libvirt.vif [None req-8397835c-a34e-414e-a930-fc39e9e5010a c2c0f7c2f15e4da6881dc393064b0e16 1a345bddd4804404a55948133ea8150f - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=True,compute_id=1,config_drive='True',created_at=2025-10-09T16:27:17Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description=None,display_name='tempest-TestExecuteNodeResourceConsolidationStrategy-server-2068562316',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testexecutenoderesourceconsolidationstrategy-server-206',id=19,image_ref='b7d6e0af-25e4-4227-9dc6-43143898ceee',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-10-09T16:27:33Z,launched_on='compute-1.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='1a345bddd4804404a55948133ea8150f',ramdisk_id='',reservation_id='r-9xxx25dz',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,manager,admin,member',clean_attempts='1',image_base_image_ref='b7d6e0af-25e4-4227-9dc6-43143898ceee',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestExecuteNodeResourceConsolidationStrategy-1915104332',owner_user_name='tempest-TestExecuteNodeResourceConsolidationStrategy-1915104332-project-admin'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-10-09T16:28:12Z,user_data=None,user_id='c2c0f7c2f15e4da6881dc393064b0e16',uuid=b2337e1c-84ef-4aa5-9a78-05dee2b3b853,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "86ba251b-c492-47f6-9c92-65be385d4bab", "address": "fa:16:3e:92:c5:bc", "network": {"id": "18ef4241-0151-441c-abdc-42d4b3a21b30", "bridge": "br-int", "label": "tempest-TestExecuteNodeResourceConsolidationStrategy-129271777-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c2c321aa8a494a8e8b49c81b79e3ceca", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap86ba251b-c4", "ovs_interfaceid": "86ba251b-c492-47f6-9c92-65be385d4bab", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {"os_vif_delegation": true}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.12/site-packages/nova/virt/libvirt/vif.py:839
Oct 09 16:28:16 compute-0 nova_compute[117331]: 2025-10-09 16:28:16.304 2 DEBUG nova.network.os_vif_util [None req-8397835c-a34e-414e-a930-fc39e9e5010a c2c0f7c2f15e4da6881dc393064b0e16 1a345bddd4804404a55948133ea8150f - - default default] Converting VIF {"id": "86ba251b-c492-47f6-9c92-65be385d4bab", "address": "fa:16:3e:92:c5:bc", "network": {"id": "18ef4241-0151-441c-abdc-42d4b3a21b30", "bridge": "br-int", "label": "tempest-TestExecuteNodeResourceConsolidationStrategy-129271777-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c2c321aa8a494a8e8b49c81b79e3ceca", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap86ba251b-c4", "ovs_interfaceid": "86ba251b-c492-47f6-9c92-65be385d4bab", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {"os_vif_delegation": true}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.12/site-packages/nova/network/os_vif_util.py:511
Oct 09 16:28:16 compute-0 nova_compute[117331]: 2025-10-09 16:28:16.305 2 DEBUG nova.network.os_vif_util [None req-8397835c-a34e-414e-a930-fc39e9e5010a c2c0f7c2f15e4da6881dc393064b0e16 1a345bddd4804404a55948133ea8150f - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:92:c5:bc,bridge_name='br-int',has_traffic_filtering=True,id=86ba251b-c492-47f6-9c92-65be385d4bab,network=Network(18ef4241-0151-441c-abdc-42d4b3a21b30),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap86ba251b-c4') nova_to_osvif_vif /usr/lib/python3.12/site-packages/nova/network/os_vif_util.py:548
Oct 09 16:28:16 compute-0 nova_compute[117331]: 2025-10-09 16:28:16.306 2 DEBUG os_vif [None req-8397835c-a34e-414e-a930-fc39e9e5010a c2c0f7c2f15e4da6881dc393064b0e16 1a345bddd4804404a55948133ea8150f - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:92:c5:bc,bridge_name='br-int',has_traffic_filtering=True,id=86ba251b-c492-47f6-9c92-65be385d4bab,network=Network(18ef4241-0151-441c-abdc-42d4b3a21b30),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap86ba251b-c4') unplug /usr/lib/python3.12/site-packages/os_vif/__init__.py:109
Oct 09 16:28:16 compute-0 nova_compute[117331]: 2025-10-09 16:28:16.309 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 22 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 09 16:28:16 compute-0 nova_compute[117331]: 2025-10-09 16:28:16.309 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap86ba251b-c4, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.12/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 09 16:28:16 compute-0 nova_compute[117331]: 2025-10-09 16:28:16.310 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 09 16:28:16 compute-0 nova_compute[117331]: 2025-10-09 16:28:16.312 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 09 16:28:16 compute-0 nova_compute[117331]: 2025-10-09 16:28:16.313 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 22 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 09 16:28:16 compute-0 nova_compute[117331]: 2025-10-09 16:28:16.313 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbDestroyCommand(_result=None, table=QoS, record=6866779d-ea26-4fe4-a0fa-2a2441d109f9) do_commit /usr/lib/python3.12/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 09 16:28:16 compute-0 nova_compute[117331]: 2025-10-09 16:28:16.314 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 09 16:28:16 compute-0 nova_compute[117331]: 2025-10-09 16:28:16.315 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 09 16:28:16 compute-0 nova_compute[117331]: 2025-10-09 16:28:16.320 2 INFO os_vif [None req-8397835c-a34e-414e-a930-fc39e9e5010a c2c0f7c2f15e4da6881dc393064b0e16 1a345bddd4804404a55948133ea8150f - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:92:c5:bc,bridge_name='br-int',has_traffic_filtering=True,id=86ba251b-c492-47f6-9c92-65be385d4bab,network=Network(18ef4241-0151-441c-abdc-42d4b3a21b30),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap86ba251b-c4')
Oct 09 16:28:16 compute-0 nova_compute[117331]: 2025-10-09 16:28:16.321 2 INFO nova.virt.libvirt.driver [None req-8397835c-a34e-414e-a930-fc39e9e5010a c2c0f7c2f15e4da6881dc393064b0e16 1a345bddd4804404a55948133ea8150f - - default default] [instance: b2337e1c-84ef-4aa5-9a78-05dee2b3b853] Deleting instance files /var/lib/nova/instances/b2337e1c-84ef-4aa5-9a78-05dee2b3b853_del
Oct 09 16:28:16 compute-0 nova_compute[117331]: 2025-10-09 16:28:16.322 2 INFO nova.virt.libvirt.driver [None req-8397835c-a34e-414e-a930-fc39e9e5010a c2c0f7c2f15e4da6881dc393064b0e16 1a345bddd4804404a55948133ea8150f - - default default] [instance: b2337e1c-84ef-4aa5-9a78-05dee2b3b853] Deletion of /var/lib/nova/instances/b2337e1c-84ef-4aa5-9a78-05dee2b3b853_del complete
Oct 09 16:28:16 compute-0 nova_compute[117331]: 2025-10-09 16:28:16.854 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 09 16:28:16 compute-0 nova_compute[117331]: 2025-10-09 16:28:16.857 2 INFO nova.compute.manager [None req-8397835c-a34e-414e-a930-fc39e9e5010a c2c0f7c2f15e4da6881dc393064b0e16 1a345bddd4804404a55948133ea8150f - - default default] [instance: b2337e1c-84ef-4aa5-9a78-05dee2b3b853] Took 1.39 seconds to destroy the instance on the hypervisor.
Oct 09 16:28:16 compute-0 nova_compute[117331]: 2025-10-09 16:28:16.858 2 DEBUG oslo.service.backend._eventlet.loopingcall [None req-8397835c-a34e-414e-a930-fc39e9e5010a c2c0f7c2f15e4da6881dc393064b0e16 1a345bddd4804404a55948133ea8150f - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.12/site-packages/oslo_service/backend/_eventlet/loopingcall.py:437
Oct 09 16:28:16 compute-0 nova_compute[117331]: 2025-10-09 16:28:16.858 2 DEBUG nova.compute.manager [-] [instance: b2337e1c-84ef-4aa5-9a78-05dee2b3b853] Deallocating network for instance _deallocate_network /usr/lib/python3.12/site-packages/nova/compute/manager.py:2324
Oct 09 16:28:16 compute-0 nova_compute[117331]: 2025-10-09 16:28:16.859 2 DEBUG nova.network.neutron [-] [instance: b2337e1c-84ef-4aa5-9a78-05dee2b3b853] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.12/site-packages/nova/network/neutron.py:1863
Oct 09 16:28:16 compute-0 nova_compute[117331]: 2025-10-09 16:28:16.859 2 WARNING neutronclient.v2_0.client [-] The python binding code in neutronclient is deprecated in favor of OpenstackSDK, please use that as this will be removed in a future release.
Oct 09 16:28:16 compute-0 podman[148504]: 2025-10-09 16:28:16.886259426 +0000 UTC m=+0.118745281 container health_status 86aa93e887899e54d383b0fc17fb3c5637680d1d8e146a130a557c837f0260fe (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible)
Oct 09 16:28:17 compute-0 nova_compute[117331]: 2025-10-09 16:28:17.004 2 WARNING neutronclient.v2_0.client [-] The python binding code in neutronclient is deprecated in favor of OpenstackSDK, please use that as this will be removed in a future release.
Oct 09 16:28:17 compute-0 nova_compute[117331]: 2025-10-09 16:28:17.322 2 DEBUG nova.compute.manager [req-d353887e-ade9-4029-b63e-d2d338f942c8 req-7efa466f-4cf2-4403-b1ad-d4596402155b ee15961ad2be4bada6c106ac4f69f422 c076c42271004e7aa86e84416e33f826 - - default default] [instance: b2337e1c-84ef-4aa5-9a78-05dee2b3b853] Received event network-vif-deleted-86ba251b-c492-47f6-9c92-65be385d4bab external_instance_event /usr/lib/python3.12/site-packages/nova/compute/manager.py:11812
Oct 09 16:28:17 compute-0 nova_compute[117331]: 2025-10-09 16:28:17.322 2 INFO nova.compute.manager [req-d353887e-ade9-4029-b63e-d2d338f942c8 req-7efa466f-4cf2-4403-b1ad-d4596402155b ee15961ad2be4bada6c106ac4f69f422 c076c42271004e7aa86e84416e33f826 - - default default] [instance: b2337e1c-84ef-4aa5-9a78-05dee2b3b853] Neutron deleted interface 86ba251b-c492-47f6-9c92-65be385d4bab; detaching it from the instance and deleting it from the info cache
Oct 09 16:28:17 compute-0 nova_compute[117331]: 2025-10-09 16:28:17.322 2 DEBUG nova.network.neutron [req-d353887e-ade9-4029-b63e-d2d338f942c8 req-7efa466f-4cf2-4403-b1ad-d4596402155b ee15961ad2be4bada6c106ac4f69f422 c076c42271004e7aa86e84416e33f826 - - default default] [instance: b2337e1c-84ef-4aa5-9a78-05dee2b3b853] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.12/site-packages/nova/network/neutron.py:116
Oct 09 16:28:17 compute-0 nova_compute[117331]: 2025-10-09 16:28:17.769 2 DEBUG nova.network.neutron [-] [instance: b2337e1c-84ef-4aa5-9a78-05dee2b3b853] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.12/site-packages/nova/network/neutron.py:116
Oct 09 16:28:17 compute-0 nova_compute[117331]: 2025-10-09 16:28:17.782 2 DEBUG nova.compute.manager [req-833e724e-89ed-47c1-b90b-6fb07f7757c1 req-cb50b04e-db94-44b2-a9f5-51732ebbabe8 ee15961ad2be4bada6c106ac4f69f422 c076c42271004e7aa86e84416e33f826 - - default default] [instance: b2337e1c-84ef-4aa5-9a78-05dee2b3b853] Received event network-vif-unplugged-86ba251b-c492-47f6-9c92-65be385d4bab external_instance_event /usr/lib/python3.12/site-packages/nova/compute/manager.py:11812
Oct 09 16:28:17 compute-0 nova_compute[117331]: 2025-10-09 16:28:17.783 2 DEBUG oslo_concurrency.lockutils [req-833e724e-89ed-47c1-b90b-6fb07f7757c1 req-cb50b04e-db94-44b2-a9f5-51732ebbabe8 ee15961ad2be4bada6c106ac4f69f422 c076c42271004e7aa86e84416e33f826 - - default default] Acquiring lock "b2337e1c-84ef-4aa5-9a78-05dee2b3b853-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:405
Oct 09 16:28:17 compute-0 nova_compute[117331]: 2025-10-09 16:28:17.783 2 DEBUG oslo_concurrency.lockutils [req-833e724e-89ed-47c1-b90b-6fb07f7757c1 req-cb50b04e-db94-44b2-a9f5-51732ebbabe8 ee15961ad2be4bada6c106ac4f69f422 c076c42271004e7aa86e84416e33f826 - - default default] Lock "b2337e1c-84ef-4aa5-9a78-05dee2b3b853-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:410
Oct 09 16:28:17 compute-0 nova_compute[117331]: 2025-10-09 16:28:17.784 2 DEBUG oslo_concurrency.lockutils [req-833e724e-89ed-47c1-b90b-6fb07f7757c1 req-cb50b04e-db94-44b2-a9f5-51732ebbabe8 ee15961ad2be4bada6c106ac4f69f422 c076c42271004e7aa86e84416e33f826 - - default default] Lock "b2337e1c-84ef-4aa5-9a78-05dee2b3b853-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:424
Oct 09 16:28:17 compute-0 nova_compute[117331]: 2025-10-09 16:28:17.784 2 DEBUG nova.compute.manager [req-833e724e-89ed-47c1-b90b-6fb07f7757c1 req-cb50b04e-db94-44b2-a9f5-51732ebbabe8 ee15961ad2be4bada6c106ac4f69f422 c076c42271004e7aa86e84416e33f826 - - default default] [instance: b2337e1c-84ef-4aa5-9a78-05dee2b3b853] No waiting events found dispatching network-vif-unplugged-86ba251b-c492-47f6-9c92-65be385d4bab pop_instance_event /usr/lib/python3.12/site-packages/nova/compute/manager.py:344
Oct 09 16:28:17 compute-0 nova_compute[117331]: 2025-10-09 16:28:17.784 2 DEBUG nova.compute.manager [req-833e724e-89ed-47c1-b90b-6fb07f7757c1 req-cb50b04e-db94-44b2-a9f5-51732ebbabe8 ee15961ad2be4bada6c106ac4f69f422 c076c42271004e7aa86e84416e33f826 - - default default] [instance: b2337e1c-84ef-4aa5-9a78-05dee2b3b853] Received event network-vif-unplugged-86ba251b-c492-47f6-9c92-65be385d4bab for instance with task_state deleting. _process_instance_event /usr/lib/python3.12/site-packages/nova/compute/manager.py:11590
Oct 09 16:28:17 compute-0 nova_compute[117331]: 2025-10-09 16:28:17.785 2 DEBUG nova.compute.manager [req-833e724e-89ed-47c1-b90b-6fb07f7757c1 req-cb50b04e-db94-44b2-a9f5-51732ebbabe8 ee15961ad2be4bada6c106ac4f69f422 c076c42271004e7aa86e84416e33f826 - - default default] [instance: b2337e1c-84ef-4aa5-9a78-05dee2b3b853] Received event network-vif-plugged-86ba251b-c492-47f6-9c92-65be385d4bab external_instance_event /usr/lib/python3.12/site-packages/nova/compute/manager.py:11812
Oct 09 16:28:17 compute-0 nova_compute[117331]: 2025-10-09 16:28:17.785 2 DEBUG oslo_concurrency.lockutils [req-833e724e-89ed-47c1-b90b-6fb07f7757c1 req-cb50b04e-db94-44b2-a9f5-51732ebbabe8 ee15961ad2be4bada6c106ac4f69f422 c076c42271004e7aa86e84416e33f826 - - default default] Acquiring lock "b2337e1c-84ef-4aa5-9a78-05dee2b3b853-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:405
Oct 09 16:28:17 compute-0 nova_compute[117331]: 2025-10-09 16:28:17.786 2 DEBUG oslo_concurrency.lockutils [req-833e724e-89ed-47c1-b90b-6fb07f7757c1 req-cb50b04e-db94-44b2-a9f5-51732ebbabe8 ee15961ad2be4bada6c106ac4f69f422 c076c42271004e7aa86e84416e33f826 - - default default] Lock "b2337e1c-84ef-4aa5-9a78-05dee2b3b853-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:410
Oct 09 16:28:17 compute-0 nova_compute[117331]: 2025-10-09 16:28:17.786 2 DEBUG oslo_concurrency.lockutils [req-833e724e-89ed-47c1-b90b-6fb07f7757c1 req-cb50b04e-db94-44b2-a9f5-51732ebbabe8 ee15961ad2be4bada6c106ac4f69f422 c076c42271004e7aa86e84416e33f826 - - default default] Lock "b2337e1c-84ef-4aa5-9a78-05dee2b3b853-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:424
Oct 09 16:28:17 compute-0 nova_compute[117331]: 2025-10-09 16:28:17.786 2 DEBUG nova.compute.manager [req-833e724e-89ed-47c1-b90b-6fb07f7757c1 req-cb50b04e-db94-44b2-a9f5-51732ebbabe8 ee15961ad2be4bada6c106ac4f69f422 c076c42271004e7aa86e84416e33f826 - - default default] [instance: b2337e1c-84ef-4aa5-9a78-05dee2b3b853] No waiting events found dispatching network-vif-plugged-86ba251b-c492-47f6-9c92-65be385d4bab pop_instance_event /usr/lib/python3.12/site-packages/nova/compute/manager.py:344
Oct 09 16:28:17 compute-0 nova_compute[117331]: 2025-10-09 16:28:17.787 2 WARNING nova.compute.manager [req-833e724e-89ed-47c1-b90b-6fb07f7757c1 req-cb50b04e-db94-44b2-a9f5-51732ebbabe8 ee15961ad2be4bada6c106ac4f69f422 c076c42271004e7aa86e84416e33f826 - - default default] [instance: b2337e1c-84ef-4aa5-9a78-05dee2b3b853] Received unexpected event network-vif-plugged-86ba251b-c492-47f6-9c92-65be385d4bab for instance with vm_state active and task_state deleting.
Oct 09 16:28:17 compute-0 nova_compute[117331]: 2025-10-09 16:28:17.787 2 DEBUG nova.compute.manager [req-833e724e-89ed-47c1-b90b-6fb07f7757c1 req-cb50b04e-db94-44b2-a9f5-51732ebbabe8 ee15961ad2be4bada6c106ac4f69f422 c076c42271004e7aa86e84416e33f826 - - default default] [instance: b2337e1c-84ef-4aa5-9a78-05dee2b3b853] Received event network-vif-plugged-86ba251b-c492-47f6-9c92-65be385d4bab external_instance_event /usr/lib/python3.12/site-packages/nova/compute/manager.py:11812
Oct 09 16:28:17 compute-0 nova_compute[117331]: 2025-10-09 16:28:17.787 2 DEBUG oslo_concurrency.lockutils [req-833e724e-89ed-47c1-b90b-6fb07f7757c1 req-cb50b04e-db94-44b2-a9f5-51732ebbabe8 ee15961ad2be4bada6c106ac4f69f422 c076c42271004e7aa86e84416e33f826 - - default default] Acquiring lock "b2337e1c-84ef-4aa5-9a78-05dee2b3b853-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:405
Oct 09 16:28:17 compute-0 nova_compute[117331]: 2025-10-09 16:28:17.788 2 DEBUG oslo_concurrency.lockutils [req-833e724e-89ed-47c1-b90b-6fb07f7757c1 req-cb50b04e-db94-44b2-a9f5-51732ebbabe8 ee15961ad2be4bada6c106ac4f69f422 c076c42271004e7aa86e84416e33f826 - - default default] Lock "b2337e1c-84ef-4aa5-9a78-05dee2b3b853-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:410
Oct 09 16:28:17 compute-0 nova_compute[117331]: 2025-10-09 16:28:17.788 2 DEBUG oslo_concurrency.lockutils [req-833e724e-89ed-47c1-b90b-6fb07f7757c1 req-cb50b04e-db94-44b2-a9f5-51732ebbabe8 ee15961ad2be4bada6c106ac4f69f422 c076c42271004e7aa86e84416e33f826 - - default default] Lock "b2337e1c-84ef-4aa5-9a78-05dee2b3b853-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:424
Oct 09 16:28:17 compute-0 nova_compute[117331]: 2025-10-09 16:28:17.789 2 DEBUG nova.compute.manager [req-833e724e-89ed-47c1-b90b-6fb07f7757c1 req-cb50b04e-db94-44b2-a9f5-51732ebbabe8 ee15961ad2be4bada6c106ac4f69f422 c076c42271004e7aa86e84416e33f826 - - default default] [instance: b2337e1c-84ef-4aa5-9a78-05dee2b3b853] No waiting events found dispatching network-vif-plugged-86ba251b-c492-47f6-9c92-65be385d4bab pop_instance_event /usr/lib/python3.12/site-packages/nova/compute/manager.py:344
Oct 09 16:28:17 compute-0 nova_compute[117331]: 2025-10-09 16:28:17.789 2 WARNING nova.compute.manager [req-833e724e-89ed-47c1-b90b-6fb07f7757c1 req-cb50b04e-db94-44b2-a9f5-51732ebbabe8 ee15961ad2be4bada6c106ac4f69f422 c076c42271004e7aa86e84416e33f826 - - default default] [instance: b2337e1c-84ef-4aa5-9a78-05dee2b3b853] Received unexpected event network-vif-plugged-86ba251b-c492-47f6-9c92-65be385d4bab for instance with vm_state active and task_state deleting.
Oct 09 16:28:17 compute-0 nova_compute[117331]: 2025-10-09 16:28:17.789 2 DEBUG nova.compute.manager [req-833e724e-89ed-47c1-b90b-6fb07f7757c1 req-cb50b04e-db94-44b2-a9f5-51732ebbabe8 ee15961ad2be4bada6c106ac4f69f422 c076c42271004e7aa86e84416e33f826 - - default default] [instance: b2337e1c-84ef-4aa5-9a78-05dee2b3b853] Received event network-vif-unplugged-86ba251b-c492-47f6-9c92-65be385d4bab external_instance_event /usr/lib/python3.12/site-packages/nova/compute/manager.py:11812
Oct 09 16:28:17 compute-0 nova_compute[117331]: 2025-10-09 16:28:17.790 2 DEBUG oslo_concurrency.lockutils [req-833e724e-89ed-47c1-b90b-6fb07f7757c1 req-cb50b04e-db94-44b2-a9f5-51732ebbabe8 ee15961ad2be4bada6c106ac4f69f422 c076c42271004e7aa86e84416e33f826 - - default default] Acquiring lock "b2337e1c-84ef-4aa5-9a78-05dee2b3b853-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:405
Oct 09 16:28:17 compute-0 nova_compute[117331]: 2025-10-09 16:28:17.790 2 DEBUG oslo_concurrency.lockutils [req-833e724e-89ed-47c1-b90b-6fb07f7757c1 req-cb50b04e-db94-44b2-a9f5-51732ebbabe8 ee15961ad2be4bada6c106ac4f69f422 c076c42271004e7aa86e84416e33f826 - - default default] Lock "b2337e1c-84ef-4aa5-9a78-05dee2b3b853-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:410
Oct 09 16:28:17 compute-0 nova_compute[117331]: 2025-10-09 16:28:17.791 2 DEBUG oslo_concurrency.lockutils [req-833e724e-89ed-47c1-b90b-6fb07f7757c1 req-cb50b04e-db94-44b2-a9f5-51732ebbabe8 ee15961ad2be4bada6c106ac4f69f422 c076c42271004e7aa86e84416e33f826 - - default default] Lock "b2337e1c-84ef-4aa5-9a78-05dee2b3b853-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:424
Oct 09 16:28:17 compute-0 nova_compute[117331]: 2025-10-09 16:28:17.791 2 DEBUG nova.compute.manager [req-833e724e-89ed-47c1-b90b-6fb07f7757c1 req-cb50b04e-db94-44b2-a9f5-51732ebbabe8 ee15961ad2be4bada6c106ac4f69f422 c076c42271004e7aa86e84416e33f826 - - default default] [instance: b2337e1c-84ef-4aa5-9a78-05dee2b3b853] No waiting events found dispatching network-vif-unplugged-86ba251b-c492-47f6-9c92-65be385d4bab pop_instance_event /usr/lib/python3.12/site-packages/nova/compute/manager.py:344
Oct 09 16:28:17 compute-0 nova_compute[117331]: 2025-10-09 16:28:17.791 2 DEBUG nova.compute.manager [req-833e724e-89ed-47c1-b90b-6fb07f7757c1 req-cb50b04e-db94-44b2-a9f5-51732ebbabe8 ee15961ad2be4bada6c106ac4f69f422 c076c42271004e7aa86e84416e33f826 - - default default] [instance: b2337e1c-84ef-4aa5-9a78-05dee2b3b853] Received event network-vif-unplugged-86ba251b-c492-47f6-9c92-65be385d4bab for instance with task_state deleting. _process_instance_event /usr/lib/python3.12/site-packages/nova/compute/manager.py:11590
Oct 09 16:28:17 compute-0 nova_compute[117331]: 2025-10-09 16:28:17.792 2 DEBUG nova.compute.manager [req-833e724e-89ed-47c1-b90b-6fb07f7757c1 req-cb50b04e-db94-44b2-a9f5-51732ebbabe8 ee15961ad2be4bada6c106ac4f69f422 c076c42271004e7aa86e84416e33f826 - - default default] [instance: b2337e1c-84ef-4aa5-9a78-05dee2b3b853] Received event network-vif-unplugged-86ba251b-c492-47f6-9c92-65be385d4bab external_instance_event /usr/lib/python3.12/site-packages/nova/compute/manager.py:11812
Oct 09 16:28:17 compute-0 nova_compute[117331]: 2025-10-09 16:28:17.792 2 DEBUG oslo_concurrency.lockutils [req-833e724e-89ed-47c1-b90b-6fb07f7757c1 req-cb50b04e-db94-44b2-a9f5-51732ebbabe8 ee15961ad2be4bada6c106ac4f69f422 c076c42271004e7aa86e84416e33f826 - - default default] Acquiring lock "b2337e1c-84ef-4aa5-9a78-05dee2b3b853-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:405
Oct 09 16:28:17 compute-0 nova_compute[117331]: 2025-10-09 16:28:17.793 2 DEBUG oslo_concurrency.lockutils [req-833e724e-89ed-47c1-b90b-6fb07f7757c1 req-cb50b04e-db94-44b2-a9f5-51732ebbabe8 ee15961ad2be4bada6c106ac4f69f422 c076c42271004e7aa86e84416e33f826 - - default default] Lock "b2337e1c-84ef-4aa5-9a78-05dee2b3b853-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:410
Oct 09 16:28:17 compute-0 nova_compute[117331]: 2025-10-09 16:28:17.793 2 DEBUG oslo_concurrency.lockutils [req-833e724e-89ed-47c1-b90b-6fb07f7757c1 req-cb50b04e-db94-44b2-a9f5-51732ebbabe8 ee15961ad2be4bada6c106ac4f69f422 c076c42271004e7aa86e84416e33f826 - - default default] Lock "b2337e1c-84ef-4aa5-9a78-05dee2b3b853-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:424
Oct 09 16:28:17 compute-0 nova_compute[117331]: 2025-10-09 16:28:17.793 2 DEBUG nova.compute.manager [req-833e724e-89ed-47c1-b90b-6fb07f7757c1 req-cb50b04e-db94-44b2-a9f5-51732ebbabe8 ee15961ad2be4bada6c106ac4f69f422 c076c42271004e7aa86e84416e33f826 - - default default] [instance: b2337e1c-84ef-4aa5-9a78-05dee2b3b853] No waiting events found dispatching network-vif-unplugged-86ba251b-c492-47f6-9c92-65be385d4bab pop_instance_event /usr/lib/python3.12/site-packages/nova/compute/manager.py:344
Oct 09 16:28:17 compute-0 nova_compute[117331]: 2025-10-09 16:28:17.794 2 DEBUG nova.compute.manager [req-833e724e-89ed-47c1-b90b-6fb07f7757c1 req-cb50b04e-db94-44b2-a9f5-51732ebbabe8 ee15961ad2be4bada6c106ac4f69f422 c076c42271004e7aa86e84416e33f826 - - default default] [instance: b2337e1c-84ef-4aa5-9a78-05dee2b3b853] Received event network-vif-unplugged-86ba251b-c492-47f6-9c92-65be385d4bab for instance with task_state deleting. _process_instance_event /usr/lib/python3.12/site-packages/nova/compute/manager.py:11590
Oct 09 16:28:17 compute-0 nova_compute[117331]: 2025-10-09 16:28:17.831 2 DEBUG nova.compute.manager [req-d353887e-ade9-4029-b63e-d2d338f942c8 req-7efa466f-4cf2-4403-b1ad-d4596402155b ee15961ad2be4bada6c106ac4f69f422 c076c42271004e7aa86e84416e33f826 - - default default] [instance: b2337e1c-84ef-4aa5-9a78-05dee2b3b853] Detach interface failed, port_id=86ba251b-c492-47f6-9c92-65be385d4bab, reason: Instance b2337e1c-84ef-4aa5-9a78-05dee2b3b853 could not be found. _process_instance_vif_deleted_event /usr/lib/python3.12/site-packages/nova/compute/manager.py:11646
Oct 09 16:28:18 compute-0 nova_compute[117331]: 2025-10-09 16:28:18.276 2 INFO nova.compute.manager [-] [instance: b2337e1c-84ef-4aa5-9a78-05dee2b3b853] Took 1.42 seconds to deallocate network for instance.
Oct 09 16:28:18 compute-0 nova_compute[117331]: 2025-10-09 16:28:18.793 2 DEBUG oslo_concurrency.lockutils [None req-8397835c-a34e-414e-a930-fc39e9e5010a c2c0f7c2f15e4da6881dc393064b0e16 1a345bddd4804404a55948133ea8150f - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:405
Oct 09 16:28:18 compute-0 nova_compute[117331]: 2025-10-09 16:28:18.794 2 DEBUG oslo_concurrency.lockutils [None req-8397835c-a34e-414e-a930-fc39e9e5010a c2c0f7c2f15e4da6881dc393064b0e16 1a345bddd4804404a55948133ea8150f - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:410
Oct 09 16:28:18 compute-0 nova_compute[117331]: 2025-10-09 16:28:18.801 2 DEBUG oslo_concurrency.lockutils [None req-8397835c-a34e-414e-a930-fc39e9e5010a c2c0f7c2f15e4da6881dc393064b0e16 1a345bddd4804404a55948133ea8150f - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.007s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:424
Oct 09 16:28:18 compute-0 nova_compute[117331]: 2025-10-09 16:28:18.856 2 INFO nova.scheduler.client.report [None req-8397835c-a34e-414e-a930-fc39e9e5010a c2c0f7c2f15e4da6881dc393064b0e16 1a345bddd4804404a55948133ea8150f - - default default] Deleted allocations for instance b2337e1c-84ef-4aa5-9a78-05dee2b3b853
Oct 09 16:28:19 compute-0 nova_compute[117331]: 2025-10-09 16:28:19.887 2 DEBUG oslo_concurrency.lockutils [None req-8397835c-a34e-414e-a930-fc39e9e5010a c2c0f7c2f15e4da6881dc393064b0e16 1a345bddd4804404a55948133ea8150f - - default default] Lock "b2337e1c-84ef-4aa5-9a78-05dee2b3b853" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 4.957s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:424
Oct 09 16:28:20 compute-0 nova_compute[117331]: 2025-10-09 16:28:20.610 2 DEBUG oslo_concurrency.lockutils [None req-643d7905-78c4-4114-85e6-63c0e0228344 c2c0f7c2f15e4da6881dc393064b0e16 1a345bddd4804404a55948133ea8150f - - default default] Acquiring lock "f156c348-19a2-41d9-bf57-79cea2a84e3c" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:405
Oct 09 16:28:20 compute-0 nova_compute[117331]: 2025-10-09 16:28:20.611 2 DEBUG oslo_concurrency.lockutils [None req-643d7905-78c4-4114-85e6-63c0e0228344 c2c0f7c2f15e4da6881dc393064b0e16 1a345bddd4804404a55948133ea8150f - - default default] Lock "f156c348-19a2-41d9-bf57-79cea2a84e3c" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:410
Oct 09 16:28:20 compute-0 nova_compute[117331]: 2025-10-09 16:28:20.611 2 DEBUG oslo_concurrency.lockutils [None req-643d7905-78c4-4114-85e6-63c0e0228344 c2c0f7c2f15e4da6881dc393064b0e16 1a345bddd4804404a55948133ea8150f - - default default] Acquiring lock "f156c348-19a2-41d9-bf57-79cea2a84e3c-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:405
Oct 09 16:28:20 compute-0 nova_compute[117331]: 2025-10-09 16:28:20.611 2 DEBUG oslo_concurrency.lockutils [None req-643d7905-78c4-4114-85e6-63c0e0228344 c2c0f7c2f15e4da6881dc393064b0e16 1a345bddd4804404a55948133ea8150f - - default default] Lock "f156c348-19a2-41d9-bf57-79cea2a84e3c-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:410
Oct 09 16:28:20 compute-0 nova_compute[117331]: 2025-10-09 16:28:20.612 2 DEBUG oslo_concurrency.lockutils [None req-643d7905-78c4-4114-85e6-63c0e0228344 c2c0f7c2f15e4da6881dc393064b0e16 1a345bddd4804404a55948133ea8150f - - default default] Lock "f156c348-19a2-41d9-bf57-79cea2a84e3c-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:424
Oct 09 16:28:20 compute-0 nova_compute[117331]: 2025-10-09 16:28:20.624 2 INFO nova.compute.manager [None req-643d7905-78c4-4114-85e6-63c0e0228344 c2c0f7c2f15e4da6881dc393064b0e16 1a345bddd4804404a55948133ea8150f - - default default] [instance: f156c348-19a2-41d9-bf57-79cea2a84e3c] Terminating instance
Oct 09 16:28:21 compute-0 nova_compute[117331]: 2025-10-09 16:28:21.141 2 DEBUG nova.compute.manager [None req-643d7905-78c4-4114-85e6-63c0e0228344 c2c0f7c2f15e4da6881dc393064b0e16 1a345bddd4804404a55948133ea8150f - - default default] [instance: f156c348-19a2-41d9-bf57-79cea2a84e3c] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.12/site-packages/nova/compute/manager.py:3197
Oct 09 16:28:21 compute-0 kernel: tap9ce16b2c-f4 (unregistering): left promiscuous mode
Oct 09 16:28:21 compute-0 NetworkManager[1028]: <info>  [1760027301.1641] device (tap9ce16b2c-f4): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Oct 09 16:28:21 compute-0 nova_compute[117331]: 2025-10-09 16:28:21.176 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 09 16:28:21 compute-0 ovn_controller[19752]: 2025-10-09T16:28:21Z|00180|binding|INFO|Releasing lport 9ce16b2c-f453-4ee6-ab7f-159c6386c5d3 from this chassis (sb_readonly=0)
Oct 09 16:28:21 compute-0 ovn_controller[19752]: 2025-10-09T16:28:21Z|00181|binding|INFO|Setting lport 9ce16b2c-f453-4ee6-ab7f-159c6386c5d3 down in Southbound
Oct 09 16:28:21 compute-0 ovn_controller[19752]: 2025-10-09T16:28:21Z|00182|binding|INFO|Removing iface tap9ce16b2c-f4 ovn-installed in OVS
Oct 09 16:28:21 compute-0 nova_compute[117331]: 2025-10-09 16:28:21.180 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 09 16:28:21 compute-0 ovn_metadata_agent[28608]: 2025-10-09 16:28:21.185 28613 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:ba:95:b3 10.100.0.8'], port_security=['fa:16:3e:ba:95:b3 10.100.0.8'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.8/28', 'neutron:device_id': 'f156c348-19a2-41d9-bf57-79cea2a84e3c', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-18ef4241-0151-441c-abdc-42d4b3a21b30', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '1a345bddd4804404a55948133ea8150f', 'neutron:revision_number': '5', 'neutron:security_group_ids': '9689e159-b15c-4e2f-9c7b-a0ccf3c3f578', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=0ed87cdf-61a6-4df0-ac74-37289a0bcd5f, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f37b2576840>], logical_port=9ce16b2c-f453-4ee6-ab7f-159c6386c5d3) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f37b2576840>]) matches /usr/lib/python3.12/site-packages/ovsdbapp/backend/ovs_idl/event.py:55
Oct 09 16:28:21 compute-0 ovn_metadata_agent[28608]: 2025-10-09 16:28:21.187 28613 INFO neutron.agent.ovn.metadata.agent [-] Port 9ce16b2c-f453-4ee6-ab7f-159c6386c5d3 in datapath 18ef4241-0151-441c-abdc-42d4b3a21b30 unbound from our chassis
Oct 09 16:28:21 compute-0 ovn_metadata_agent[28608]: 2025-10-09 16:28:21.191 28613 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 18ef4241-0151-441c-abdc-42d4b3a21b30, tearing the namespace down if needed _get_provision_params /usr/lib/python3.12/site-packages/neutron/agent/ovn/metadata/agent.py:756
Oct 09 16:28:21 compute-0 ovn_metadata_agent[28608]: 2025-10-09 16:28:21.197 139687 DEBUG oslo.privsep.daemon [-] privsep: reply[48b38388-5bd0-4e77-9521-446d270aea61]: (4, True) _call_back /usr/lib/python3.12/site-packages/oslo_privsep/daemon.py:515
Oct 09 16:28:21 compute-0 ovn_metadata_agent[28608]: 2025-10-09 16:28:21.197 28613 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-18ef4241-0151-441c-abdc-42d4b3a21b30 namespace which is not needed anymore
Oct 09 16:28:21 compute-0 nova_compute[117331]: 2025-10-09 16:28:21.215 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 09 16:28:21 compute-0 systemd[1]: machine-qemu\x2d12\x2dinstance\x2d00000012.scope: Deactivated successfully.
Oct 09 16:28:21 compute-0 systemd[1]: machine-qemu\x2d12\x2dinstance\x2d00000012.scope: Consumed 14.539s CPU time.
Oct 09 16:28:21 compute-0 systemd-machined[77487]: Machine qemu-12-instance-00000012 terminated.
Oct 09 16:28:21 compute-0 nova_compute[117331]: 2025-10-09 16:28:21.315 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 09 16:28:21 compute-0 neutron-haproxy-ovnmeta-18ef4241-0151-441c-abdc-42d4b3a21b30[148093]: [NOTICE]   (148097) : haproxy version is 3.0.5-8e879a5
Oct 09 16:28:21 compute-0 neutron-haproxy-ovnmeta-18ef4241-0151-441c-abdc-42d4b3a21b30[148093]: [NOTICE]   (148097) : path to executable is /usr/sbin/haproxy
Oct 09 16:28:21 compute-0 neutron-haproxy-ovnmeta-18ef4241-0151-441c-abdc-42d4b3a21b30[148093]: [WARNING]  (148097) : Exiting Master process...
Oct 09 16:28:21 compute-0 podman[148551]: 2025-10-09 16:28:21.389175391 +0000 UTC m=+0.054452661 container kill 9b48415012573e8086a1e52cf3c9b14f3393836867721c199702295debe01f5c (image=38.102.83.66:5001/podified-master-centos10/openstack-neutron-metadata-agent-ovn:watcher_latest, name=neutron-haproxy-ovnmeta-18ef4241-0151-441c-abdc-42d4b3a21b30, io.buildah.version=1.41.4, org.label-schema.build-date=20251007, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 10 Base Image, tcib_build_tag=watcher_latest)
Oct 09 16:28:21 compute-0 neutron-haproxy-ovnmeta-18ef4241-0151-441c-abdc-42d4b3a21b30[148093]: [ALERT]    (148097) : Current worker (148099) exited with code 143 (Terminated)
Oct 09 16:28:21 compute-0 neutron-haproxy-ovnmeta-18ef4241-0151-441c-abdc-42d4b3a21b30[148093]: [WARNING]  (148097) : All workers exited. Exiting... (0)
Oct 09 16:28:21 compute-0 systemd[1]: libpod-9b48415012573e8086a1e52cf3c9b14f3393836867721c199702295debe01f5c.scope: Deactivated successfully.
Oct 09 16:28:21 compute-0 nova_compute[117331]: 2025-10-09 16:28:21.417 2 INFO nova.virt.libvirt.driver [-] [instance: f156c348-19a2-41d9-bf57-79cea2a84e3c] Instance destroyed successfully.
Oct 09 16:28:21 compute-0 nova_compute[117331]: 2025-10-09 16:28:21.417 2 DEBUG nova.objects.instance [None req-643d7905-78c4-4114-85e6-63c0e0228344 c2c0f7c2f15e4da6881dc393064b0e16 1a345bddd4804404a55948133ea8150f - - default default] Lazy-loading 'resources' on Instance uuid f156c348-19a2-41d9-bf57-79cea2a84e3c obj_load_attr /usr/lib/python3.12/site-packages/nova/objects/instance.py:1141
Oct 09 16:28:21 compute-0 podman[148579]: 2025-10-09 16:28:21.437663501 +0000 UTC m=+0.025173736 container died 9b48415012573e8086a1e52cf3c9b14f3393836867721c199702295debe01f5c (image=38.102.83.66:5001/podified-master-centos10/openstack-neutron-metadata-agent-ovn:watcher_latest, name=neutron-haproxy-ovnmeta-18ef4241-0151-441c-abdc-42d4b3a21b30, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=watcher_latest, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.4, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.build-date=20251007)
Oct 09 16:28:21 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-9b48415012573e8086a1e52cf3c9b14f3393836867721c199702295debe01f5c-userdata-shm.mount: Deactivated successfully.
Oct 09 16:28:21 compute-0 systemd[1]: var-lib-containers-storage-overlay-0e29f697265089460e0b4ba52a12c3dbc036e5fa63ffd678932ae0f59d3bd447-merged.mount: Deactivated successfully.
Oct 09 16:28:21 compute-0 podman[148579]: 2025-10-09 16:28:21.478654246 +0000 UTC m=+0.066164471 container cleanup 9b48415012573e8086a1e52cf3c9b14f3393836867721c199702295debe01f5c (image=38.102.83.66:5001/podified-master-centos10/openstack-neutron-metadata-agent-ovn:watcher_latest, name=neutron-haproxy-ovnmeta-18ef4241-0151-441c-abdc-42d4b3a21b30, io.buildah.version=1.41.4, org.label-schema.name=CentOS Stream 10 Base Image, tcib_managed=true, org.label-schema.build-date=20251007, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=watcher_latest, org.label-schema.schema-version=1.0)
Oct 09 16:28:21 compute-0 systemd[1]: libpod-conmon-9b48415012573e8086a1e52cf3c9b14f3393836867721c199702295debe01f5c.scope: Deactivated successfully.
Oct 09 16:28:21 compute-0 podman[148589]: 2025-10-09 16:28:21.505421811 +0000 UTC m=+0.069710222 container remove 9b48415012573e8086a1e52cf3c9b14f3393836867721c199702295debe01f5c (image=38.102.83.66:5001/podified-master-centos10/openstack-neutron-metadata-agent-ovn:watcher_latest, name=neutron-haproxy-ovnmeta-18ef4241-0151-441c-abdc-42d4b3a21b30, org.label-schema.vendor=CentOS, io.buildah.version=1.41.4, tcib_build_tag=watcher_latest, tcib_managed=true, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251007)
Oct 09 16:28:21 compute-0 ovn_metadata_agent[28608]: 2025-10-09 16:28:21.525 139687 DEBUG oslo.privsep.daemon [-] privsep: reply[646d3105-50d9-493c-be47-99506eab225c]: (4, ("Thu Oct  9 04:28:21 PM UTC 2025 Sending signal '15' to neutron-haproxy-ovnmeta-18ef4241-0151-441c-abdc-42d4b3a21b30 (9b48415012573e8086a1e52cf3c9b14f3393836867721c199702295debe01f5c)\n9b48415012573e8086a1e52cf3c9b14f3393836867721c199702295debe01f5c\nThu Oct  9 04:28:21 PM UTC 2025 Deleting container neutron-haproxy-ovnmeta-18ef4241-0151-441c-abdc-42d4b3a21b30 (9b48415012573e8086a1e52cf3c9b14f3393836867721c199702295debe01f5c)\n9b48415012573e8086a1e52cf3c9b14f3393836867721c199702295debe01f5c\n", '', 0)) _call_back /usr/lib/python3.12/site-packages/oslo_privsep/daemon.py:515
Oct 09 16:28:21 compute-0 ovn_metadata_agent[28608]: 2025-10-09 16:28:21.527 139687 DEBUG oslo.privsep.daemon [-] privsep: reply[a74f37b0-56a3-49ed-949c-f2324bc28f97]: (4, None) _call_back /usr/lib/python3.12/site-packages/oslo_privsep/daemon.py:515
Oct 09 16:28:21 compute-0 ovn_metadata_agent[28608]: 2025-10-09 16:28:21.527 28613 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/18ef4241-0151-441c-abdc-42d4b3a21b30.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/18ef4241-0151-441c-abdc-42d4b3a21b30.pid.haproxy' get_value_from_file /usr/lib/python3.12/site-packages/neutron/agent/linux/utils.py:268
Oct 09 16:28:21 compute-0 ovn_metadata_agent[28608]: 2025-10-09 16:28:21.528 139687 DEBUG oslo.privsep.daemon [-] privsep: reply[39782714-8565-4a93-a99b-985d15746da2]: (4, None) _call_back /usr/lib/python3.12/site-packages/oslo_privsep/daemon.py:515
Oct 09 16:28:21 compute-0 ovn_metadata_agent[28608]: 2025-10-09 16:28:21.528 28613 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap18ef4241-00, bridge=None, if_exists=True) do_commit /usr/lib/python3.12/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 09 16:28:21 compute-0 nova_compute[117331]: 2025-10-09 16:28:21.530 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 09 16:28:21 compute-0 kernel: tap18ef4241-00: left promiscuous mode
Oct 09 16:28:21 compute-0 nova_compute[117331]: 2025-10-09 16:28:21.538 2 DEBUG nova.compute.manager [req-c3d1c351-e308-451e-9815-ede2f7697d71 req-567bdddb-8ebd-4a62-8433-182a11c16692 ee15961ad2be4bada6c106ac4f69f422 c076c42271004e7aa86e84416e33f826 - - default default] [instance: f156c348-19a2-41d9-bf57-79cea2a84e3c] Received event network-vif-unplugged-9ce16b2c-f453-4ee6-ab7f-159c6386c5d3 external_instance_event /usr/lib/python3.12/site-packages/nova/compute/manager.py:11812
Oct 09 16:28:21 compute-0 nova_compute[117331]: 2025-10-09 16:28:21.538 2 DEBUG oslo_concurrency.lockutils [req-c3d1c351-e308-451e-9815-ede2f7697d71 req-567bdddb-8ebd-4a62-8433-182a11c16692 ee15961ad2be4bada6c106ac4f69f422 c076c42271004e7aa86e84416e33f826 - - default default] Acquiring lock "f156c348-19a2-41d9-bf57-79cea2a84e3c-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:405
Oct 09 16:28:21 compute-0 nova_compute[117331]: 2025-10-09 16:28:21.538 2 DEBUG oslo_concurrency.lockutils [req-c3d1c351-e308-451e-9815-ede2f7697d71 req-567bdddb-8ebd-4a62-8433-182a11c16692 ee15961ad2be4bada6c106ac4f69f422 c076c42271004e7aa86e84416e33f826 - - default default] Lock "f156c348-19a2-41d9-bf57-79cea2a84e3c-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:410
Oct 09 16:28:21 compute-0 nova_compute[117331]: 2025-10-09 16:28:21.539 2 DEBUG oslo_concurrency.lockutils [req-c3d1c351-e308-451e-9815-ede2f7697d71 req-567bdddb-8ebd-4a62-8433-182a11c16692 ee15961ad2be4bada6c106ac4f69f422 c076c42271004e7aa86e84416e33f826 - - default default] Lock "f156c348-19a2-41d9-bf57-79cea2a84e3c-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:424
Oct 09 16:28:21 compute-0 nova_compute[117331]: 2025-10-09 16:28:21.539 2 DEBUG nova.compute.manager [req-c3d1c351-e308-451e-9815-ede2f7697d71 req-567bdddb-8ebd-4a62-8433-182a11c16692 ee15961ad2be4bada6c106ac4f69f422 c076c42271004e7aa86e84416e33f826 - - default default] [instance: f156c348-19a2-41d9-bf57-79cea2a84e3c] No waiting events found dispatching network-vif-unplugged-9ce16b2c-f453-4ee6-ab7f-159c6386c5d3 pop_instance_event /usr/lib/python3.12/site-packages/nova/compute/manager.py:344
Oct 09 16:28:21 compute-0 nova_compute[117331]: 2025-10-09 16:28:21.539 2 DEBUG nova.compute.manager [req-c3d1c351-e308-451e-9815-ede2f7697d71 req-567bdddb-8ebd-4a62-8433-182a11c16692 ee15961ad2be4bada6c106ac4f69f422 c076c42271004e7aa86e84416e33f826 - - default default] [instance: f156c348-19a2-41d9-bf57-79cea2a84e3c] Received event network-vif-unplugged-9ce16b2c-f453-4ee6-ab7f-159c6386c5d3 for instance with task_state deleting. _process_instance_event /usr/lib/python3.12/site-packages/nova/compute/manager.py:11590
Oct 09 16:28:21 compute-0 nova_compute[117331]: 2025-10-09 16:28:21.550 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 09 16:28:21 compute-0 ovn_metadata_agent[28608]: 2025-10-09 16:28:21.553 139687 DEBUG oslo.privsep.daemon [-] privsep: reply[ea7e302f-e31b-4933-a06b-e5069ea479f5]: (4, True) _call_back /usr/lib/python3.12/site-packages/oslo_privsep/daemon.py:515
Oct 09 16:28:21 compute-0 ovn_metadata_agent[28608]: 2025-10-09 16:28:21.589 139687 DEBUG oslo.privsep.daemon [-] privsep: reply[d905c78d-d16b-4b84-976c-db8ff961cffd]: (4, None) _call_back /usr/lib/python3.12/site-packages/oslo_privsep/daemon.py:515
Oct 09 16:28:21 compute-0 ovn_metadata_agent[28608]: 2025-10-09 16:28:21.591 139687 DEBUG oslo.privsep.daemon [-] privsep: reply[ae255122-ea47-4806-8f83-625ead755f85]: (4, True) _call_back /usr/lib/python3.12/site-packages/oslo_privsep/daemon.py:515
Oct 09 16:28:21 compute-0 ovn_metadata_agent[28608]: 2025-10-09 16:28:21.614 139687 DEBUG oslo.privsep.daemon [-] privsep: reply[a921c986-e168-49b9-9453-a82996c33a29]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 217629, 'reachable_time': 43970, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 148621, 'error': None, 'target': 'ovnmeta-18ef4241-0151-441c-abdc-42d4b3a21b30', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.12/site-packages/oslo_privsep/daemon.py:515
Oct 09 16:28:21 compute-0 ovn_metadata_agent[28608]: 2025-10-09 16:28:21.619 28727 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-18ef4241-0151-441c-abdc-42d4b3a21b30 deleted. remove_netns /usr/lib/python3.12/site-packages/neutron/privileged/agent/linux/ip_lib.py:603
Oct 09 16:28:21 compute-0 ovn_metadata_agent[28608]: 2025-10-09 16:28:21.619 28727 DEBUG oslo.privsep.daemon [-] privsep: reply[87b9e714-97bf-411c-bafa-bc7c078980d9]: (4, None) _call_back /usr/lib/python3.12/site-packages/oslo_privsep/daemon.py:515
Oct 09 16:28:21 compute-0 systemd[1]: run-netns-ovnmeta\x2d18ef4241\x2d0151\x2d441c\x2dabdc\x2d42d4b3a21b30.mount: Deactivated successfully.
Oct 09 16:28:21 compute-0 nova_compute[117331]: 2025-10-09 16:28:21.812 2 DEBUG oslo_service.periodic_task [None req-92a13bed-c081-4c7b-87f3-bafebefb79f2 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.12/site-packages/oslo_service/periodic_task.py:210
Oct 09 16:28:21 compute-0 nova_compute[117331]: 2025-10-09 16:28:21.859 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 09 16:28:21 compute-0 nova_compute[117331]: 2025-10-09 16:28:21.926 2 DEBUG nova.virt.libvirt.vif [None req-643d7905-78c4-4114-85e6-63c0e0228344 c2c0f7c2f15e4da6881dc393064b0e16 1a345bddd4804404a55948133ea8150f - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,compute_id=1,config_drive='True',created_at=2025-10-09T16:26:59Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description=None,display_name='tempest-TestExecuteNodeResourceConsolidationStrategy-server-1886277122',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testexecutenoderesourceconsolidationstrategy-server-188',id=18,image_ref='b7d6e0af-25e4-4227-9dc6-43143898ceee',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-10-09T16:27:14Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='1a345bddd4804404a55948133ea8150f',ramdisk_id='',reservation_id='r-ours3yn6',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,manager,admin,member',image_base_image_ref='b7d6e0af-25e4-4227-9dc6-43143898ceee',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='v
irtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestExecuteNodeResourceConsolidationStrategy-1915104332',owner_user_name='tempest-TestExecuteNodeResourceConsolidationStrategy-1915104332-project-admin'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-10-09T16:27:14Z,user_data=None,user_id='c2c0f7c2f15e4da6881dc393064b0e16',uuid=f156c348-19a2-41d9-bf57-79cea2a84e3c,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "9ce16b2c-f453-4ee6-ab7f-159c6386c5d3", "address": "fa:16:3e:ba:95:b3", "network": {"id": "18ef4241-0151-441c-abdc-42d4b3a21b30", "bridge": "br-int", "label": "tempest-TestExecuteNodeResourceConsolidationStrategy-129271777-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c2c321aa8a494a8e8b49c81b79e3ceca", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap9ce16b2c-f4", "ovs_interfaceid": "9ce16b2c-f453-4ee6-ab7f-159c6386c5d3", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.12/site-packages/nova/virt/libvirt/vif.py:839
Oct 09 16:28:21 compute-0 nova_compute[117331]: 2025-10-09 16:28:21.926 2 DEBUG nova.network.os_vif_util [None req-643d7905-78c4-4114-85e6-63c0e0228344 c2c0f7c2f15e4da6881dc393064b0e16 1a345bddd4804404a55948133ea8150f - - default default] Converting VIF {"id": "9ce16b2c-f453-4ee6-ab7f-159c6386c5d3", "address": "fa:16:3e:ba:95:b3", "network": {"id": "18ef4241-0151-441c-abdc-42d4b3a21b30", "bridge": "br-int", "label": "tempest-TestExecuteNodeResourceConsolidationStrategy-129271777-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c2c321aa8a494a8e8b49c81b79e3ceca", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap9ce16b2c-f4", "ovs_interfaceid": "9ce16b2c-f453-4ee6-ab7f-159c6386c5d3", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.12/site-packages/nova/network/os_vif_util.py:511
Oct 09 16:28:21 compute-0 nova_compute[117331]: 2025-10-09 16:28:21.927 2 DEBUG nova.network.os_vif_util [None req-643d7905-78c4-4114-85e6-63c0e0228344 c2c0f7c2f15e4da6881dc393064b0e16 1a345bddd4804404a55948133ea8150f - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:ba:95:b3,bridge_name='br-int',has_traffic_filtering=True,id=9ce16b2c-f453-4ee6-ab7f-159c6386c5d3,network=Network(18ef4241-0151-441c-abdc-42d4b3a21b30),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap9ce16b2c-f4') nova_to_osvif_vif /usr/lib/python3.12/site-packages/nova/network/os_vif_util.py:548
Oct 09 16:28:21 compute-0 nova_compute[117331]: 2025-10-09 16:28:21.928 2 DEBUG os_vif [None req-643d7905-78c4-4114-85e6-63c0e0228344 c2c0f7c2f15e4da6881dc393064b0e16 1a345bddd4804404a55948133ea8150f - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:ba:95:b3,bridge_name='br-int',has_traffic_filtering=True,id=9ce16b2c-f453-4ee6-ab7f-159c6386c5d3,network=Network(18ef4241-0151-441c-abdc-42d4b3a21b30),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap9ce16b2c-f4') unplug /usr/lib/python3.12/site-packages/os_vif/__init__.py:109
Oct 09 16:28:21 compute-0 nova_compute[117331]: 2025-10-09 16:28:21.929 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 22 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 09 16:28:21 compute-0 nova_compute[117331]: 2025-10-09 16:28:21.930 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap9ce16b2c-f4, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.12/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 09 16:28:21 compute-0 nova_compute[117331]: 2025-10-09 16:28:21.931 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 09 16:28:21 compute-0 nova_compute[117331]: 2025-10-09 16:28:21.933 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 09 16:28:21 compute-0 nova_compute[117331]: 2025-10-09 16:28:21.933 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 22 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 09 16:28:21 compute-0 nova_compute[117331]: 2025-10-09 16:28:21.934 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbDestroyCommand(_result=None, table=QoS, record=819a41c3-ce7d-494b-bea6-f70c2e945dd1) do_commit /usr/lib/python3.12/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 09 16:28:21 compute-0 nova_compute[117331]: 2025-10-09 16:28:21.934 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 09 16:28:21 compute-0 nova_compute[117331]: 2025-10-09 16:28:21.936 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:248
Oct 09 16:28:21 compute-0 nova_compute[117331]: 2025-10-09 16:28:21.938 2 INFO os_vif [None req-643d7905-78c4-4114-85e6-63c0e0228344 c2c0f7c2f15e4da6881dc393064b0e16 1a345bddd4804404a55948133ea8150f - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:ba:95:b3,bridge_name='br-int',has_traffic_filtering=True,id=9ce16b2c-f453-4ee6-ab7f-159c6386c5d3,network=Network(18ef4241-0151-441c-abdc-42d4b3a21b30),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap9ce16b2c-f4')
Oct 09 16:28:21 compute-0 nova_compute[117331]: 2025-10-09 16:28:21.939 2 INFO nova.virt.libvirt.driver [None req-643d7905-78c4-4114-85e6-63c0e0228344 c2c0f7c2f15e4da6881dc393064b0e16 1a345bddd4804404a55948133ea8150f - - default default] [instance: f156c348-19a2-41d9-bf57-79cea2a84e3c] Deleting instance files /var/lib/nova/instances/f156c348-19a2-41d9-bf57-79cea2a84e3c_del
Oct 09 16:28:21 compute-0 nova_compute[117331]: 2025-10-09 16:28:21.940 2 INFO nova.virt.libvirt.driver [None req-643d7905-78c4-4114-85e6-63c0e0228344 c2c0f7c2f15e4da6881dc393064b0e16 1a345bddd4804404a55948133ea8150f - - default default] [instance: f156c348-19a2-41d9-bf57-79cea2a84e3c] Deletion of /var/lib/nova/instances/f156c348-19a2-41d9-bf57-79cea2a84e3c_del complete
Oct 09 16:28:22 compute-0 nova_compute[117331]: 2025-10-09 16:28:22.306 2 DEBUG oslo_service.periodic_task [None req-92a13bed-c081-4c7b-87f3-bafebefb79f2 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.12/site-packages/oslo_service/periodic_task.py:210
Oct 09 16:28:22 compute-0 nova_compute[117331]: 2025-10-09 16:28:22.453 2 INFO nova.compute.manager [None req-643d7905-78c4-4114-85e6-63c0e0228344 c2c0f7c2f15e4da6881dc393064b0e16 1a345bddd4804404a55948133ea8150f - - default default] [instance: f156c348-19a2-41d9-bf57-79cea2a84e3c] Took 1.31 seconds to destroy the instance on the hypervisor.
Oct 09 16:28:22 compute-0 nova_compute[117331]: 2025-10-09 16:28:22.453 2 DEBUG oslo.service.backend._eventlet.loopingcall [None req-643d7905-78c4-4114-85e6-63c0e0228344 c2c0f7c2f15e4da6881dc393064b0e16 1a345bddd4804404a55948133ea8150f - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.12/site-packages/oslo_service/backend/_eventlet/loopingcall.py:437
Oct 09 16:28:22 compute-0 nova_compute[117331]: 2025-10-09 16:28:22.453 2 DEBUG nova.compute.manager [-] [instance: f156c348-19a2-41d9-bf57-79cea2a84e3c] Deallocating network for instance _deallocate_network /usr/lib/python3.12/site-packages/nova/compute/manager.py:2324
Oct 09 16:28:22 compute-0 nova_compute[117331]: 2025-10-09 16:28:22.453 2 DEBUG nova.network.neutron [-] [instance: f156c348-19a2-41d9-bf57-79cea2a84e3c] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.12/site-packages/nova/network/neutron.py:1863
Oct 09 16:28:22 compute-0 nova_compute[117331]: 2025-10-09 16:28:22.454 2 WARNING neutronclient.v2_0.client [-] The python binding code in neutronclient is deprecated in favor of OpenstackSDK, please use that as this will be removed in a future release.
Oct 09 16:28:22 compute-0 podman[148623]: 2025-10-09 16:28:22.864566225 +0000 UTC m=+0.078632514 container health_status 8227728d23f374cb9528be6ddf01dff38d8c792912a17b0d137932a18390b820 (image=38.102.83.66:5001/podified-master-centos10/openstack-iscsid:watcher_latest, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 10 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, container_name=iscsid, org.label-schema.schema-version=1.0, tcib_build_tag=watcher_latest, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': '38.102.83.66:5001/podified-master-centos10/openstack-iscsid:watcher_latest', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, org.label-schema.build-date=20251007, org.label-schema.vendor=CentOS, config_id=iscsid, io.buildah.version=1.41.4, managed_by=edpm_ansible)
Oct 09 16:28:22 compute-0 podman[148622]: 2025-10-09 16:28:22.87297748 +0000 UTC m=+0.085621225 container health_status 0e966c7a283689daf23c5db5017f23f685ee836319240188a9f70ddd246563c5 (image=38.102.83.66:5001/podified-master-centos10/openstack-neutron-metadata-agent-ovn:watcher_latest, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': '38.102.83.66:5001/podified-master-centos10/openstack-neutron-metadata-agent-ovn:watcher_latest', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 10 Base Image, config_id=ovn_metadata_agent, 
io.buildah.version=1.41.4, org.label-schema.vendor=CentOS, tcib_managed=true, container_name=ovn_metadata_agent, org.label-schema.build-date=20251007, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=watcher_latest)
Oct 09 16:28:23 compute-0 nova_compute[117331]: 2025-10-09 16:28:23.014 2 WARNING neutronclient.v2_0.client [-] The python binding code in neutronclient is deprecated in favor of OpenstackSDK, please use that as this will be removed in a future release.
Oct 09 16:28:23 compute-0 nova_compute[117331]: 2025-10-09 16:28:23.306 2 DEBUG oslo_service.periodic_task [None req-92a13bed-c081-4c7b-87f3-bafebefb79f2 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.12/site-packages/oslo_service/periodic_task.py:210
Oct 09 16:28:23 compute-0 nova_compute[117331]: 2025-10-09 16:28:23.750 2 DEBUG nova.network.neutron [-] [instance: f156c348-19a2-41d9-bf57-79cea2a84e3c] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.12/site-packages/nova/network/neutron.py:116
Oct 09 16:28:23 compute-0 nova_compute[117331]: 2025-10-09 16:28:23.774 2 DEBUG nova.compute.manager [req-b5e68dce-8c93-4fcc-ac81-2d798749d516 req-242da687-ce31-45d3-bdbe-41322f97a6cf ee15961ad2be4bada6c106ac4f69f422 c076c42271004e7aa86e84416e33f826 - - default default] [instance: f156c348-19a2-41d9-bf57-79cea2a84e3c] Received event network-vif-unplugged-9ce16b2c-f453-4ee6-ab7f-159c6386c5d3 external_instance_event /usr/lib/python3.12/site-packages/nova/compute/manager.py:11812
Oct 09 16:28:23 compute-0 nova_compute[117331]: 2025-10-09 16:28:23.775 2 DEBUG oslo_concurrency.lockutils [req-b5e68dce-8c93-4fcc-ac81-2d798749d516 req-242da687-ce31-45d3-bdbe-41322f97a6cf ee15961ad2be4bada6c106ac4f69f422 c076c42271004e7aa86e84416e33f826 - - default default] Acquiring lock "f156c348-19a2-41d9-bf57-79cea2a84e3c-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:405
Oct 09 16:28:23 compute-0 nova_compute[117331]: 2025-10-09 16:28:23.775 2 DEBUG oslo_concurrency.lockutils [req-b5e68dce-8c93-4fcc-ac81-2d798749d516 req-242da687-ce31-45d3-bdbe-41322f97a6cf ee15961ad2be4bada6c106ac4f69f422 c076c42271004e7aa86e84416e33f826 - - default default] Lock "f156c348-19a2-41d9-bf57-79cea2a84e3c-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:410
Oct 09 16:28:23 compute-0 nova_compute[117331]: 2025-10-09 16:28:23.776 2 DEBUG oslo_concurrency.lockutils [req-b5e68dce-8c93-4fcc-ac81-2d798749d516 req-242da687-ce31-45d3-bdbe-41322f97a6cf ee15961ad2be4bada6c106ac4f69f422 c076c42271004e7aa86e84416e33f826 - - default default] Lock "f156c348-19a2-41d9-bf57-79cea2a84e3c-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:424
Oct 09 16:28:23 compute-0 nova_compute[117331]: 2025-10-09 16:28:23.776 2 DEBUG nova.compute.manager [req-b5e68dce-8c93-4fcc-ac81-2d798749d516 req-242da687-ce31-45d3-bdbe-41322f97a6cf ee15961ad2be4bada6c106ac4f69f422 c076c42271004e7aa86e84416e33f826 - - default default] [instance: f156c348-19a2-41d9-bf57-79cea2a84e3c] No waiting events found dispatching network-vif-unplugged-9ce16b2c-f453-4ee6-ab7f-159c6386c5d3 pop_instance_event /usr/lib/python3.12/site-packages/nova/compute/manager.py:344
Oct 09 16:28:23 compute-0 nova_compute[117331]: 2025-10-09 16:28:23.777 2 DEBUG nova.compute.manager [req-b5e68dce-8c93-4fcc-ac81-2d798749d516 req-242da687-ce31-45d3-bdbe-41322f97a6cf ee15961ad2be4bada6c106ac4f69f422 c076c42271004e7aa86e84416e33f826 - - default default] [instance: f156c348-19a2-41d9-bf57-79cea2a84e3c] Received event network-vif-unplugged-9ce16b2c-f453-4ee6-ab7f-159c6386c5d3 for instance with task_state deleting. _process_instance_event /usr/lib/python3.12/site-packages/nova/compute/manager.py:11590
Oct 09 16:28:23 compute-0 nova_compute[117331]: 2025-10-09 16:28:23.777 2 DEBUG nova.compute.manager [req-b5e68dce-8c93-4fcc-ac81-2d798749d516 req-242da687-ce31-45d3-bdbe-41322f97a6cf ee15961ad2be4bada6c106ac4f69f422 c076c42271004e7aa86e84416e33f826 - - default default] [instance: f156c348-19a2-41d9-bf57-79cea2a84e3c] Received event network-vif-deleted-9ce16b2c-f453-4ee6-ab7f-159c6386c5d3 external_instance_event /usr/lib/python3.12/site-packages/nova/compute/manager.py:11812
Oct 09 16:28:23 compute-0 nova_compute[117331]: 2025-10-09 16:28:23.777 2 INFO nova.compute.manager [req-b5e68dce-8c93-4fcc-ac81-2d798749d516 req-242da687-ce31-45d3-bdbe-41322f97a6cf ee15961ad2be4bada6c106ac4f69f422 c076c42271004e7aa86e84416e33f826 - - default default] [instance: f156c348-19a2-41d9-bf57-79cea2a84e3c] Neutron deleted interface 9ce16b2c-f453-4ee6-ab7f-159c6386c5d3; detaching it from the instance and deleting it from the info cache
Oct 09 16:28:23 compute-0 nova_compute[117331]: 2025-10-09 16:28:23.778 2 DEBUG nova.network.neutron [req-b5e68dce-8c93-4fcc-ac81-2d798749d516 req-242da687-ce31-45d3-bdbe-41322f97a6cf ee15961ad2be4bada6c106ac4f69f422 c076c42271004e7aa86e84416e33f826 - - default default] [instance: f156c348-19a2-41d9-bf57-79cea2a84e3c] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.12/site-packages/nova/network/neutron.py:116
Oct 09 16:28:23 compute-0 nova_compute[117331]: 2025-10-09 16:28:23.820 2 DEBUG oslo_concurrency.lockutils [None req-92a13bed-c081-4c7b-87f3-bafebefb79f2 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:405
Oct 09 16:28:23 compute-0 nova_compute[117331]: 2025-10-09 16:28:23.820 2 DEBUG oslo_concurrency.lockutils [None req-92a13bed-c081-4c7b-87f3-bafebefb79f2 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:410
Oct 09 16:28:23 compute-0 nova_compute[117331]: 2025-10-09 16:28:23.821 2 DEBUG oslo_concurrency.lockutils [None req-92a13bed-c081-4c7b-87f3-bafebefb79f2 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:424
Oct 09 16:28:23 compute-0 nova_compute[117331]: 2025-10-09 16:28:23.821 2 DEBUG nova.compute.resource_tracker [None req-92a13bed-c081-4c7b-87f3-bafebefb79f2 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.12/site-packages/nova/compute/resource_tracker.py:937
Oct 09 16:28:23 compute-0 nova_compute[117331]: 2025-10-09 16:28:23.978 2 WARNING nova.virt.libvirt.driver [None req-92a13bed-c081-4c7b-87f3-bafebefb79f2 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Oct 09 16:28:23 compute-0 nova_compute[117331]: 2025-10-09 16:28:23.979 2 DEBUG oslo_concurrency.processutils [None req-92a13bed-c081-4c7b-87f3-bafebefb79f2 - - - - - -] Running cmd (subprocess): env LANG=C uptime execute /usr/lib/python3.12/site-packages/oslo_concurrency/processutils.py:349
Oct 09 16:28:23 compute-0 nova_compute[117331]: 2025-10-09 16:28:23.996 2 DEBUG oslo_concurrency.processutils [None req-92a13bed-c081-4c7b-87f3-bafebefb79f2 - - - - - -] CMD "env LANG=C uptime" returned: 0 in 0.017s execute /usr/lib/python3.12/site-packages/oslo_concurrency/processutils.py:372
Oct 09 16:28:23 compute-0 nova_compute[117331]: 2025-10-09 16:28:23.996 2 DEBUG nova.compute.resource_tracker [None req-92a13bed-c081-4c7b-87f3-bafebefb79f2 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=6106MB free_disk=73.25742721557617GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.12/site-packages/nova/compute/resource_tracker.py:1136
Oct 09 16:28:23 compute-0 nova_compute[117331]: 2025-10-09 16:28:23.996 2 DEBUG oslo_concurrency.lockutils [None req-92a13bed-c081-4c7b-87f3-bafebefb79f2 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:405
Oct 09 16:28:23 compute-0 nova_compute[117331]: 2025-10-09 16:28:23.997 2 DEBUG oslo_concurrency.lockutils [None req-92a13bed-c081-4c7b-87f3-bafebefb79f2 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:410
Oct 09 16:28:24 compute-0 nova_compute[117331]: 2025-10-09 16:28:24.257 2 INFO nova.compute.manager [-] [instance: f156c348-19a2-41d9-bf57-79cea2a84e3c] Took 1.80 seconds to deallocate network for instance.
Oct 09 16:28:24 compute-0 nova_compute[117331]: 2025-10-09 16:28:24.287 2 DEBUG nova.compute.manager [req-b5e68dce-8c93-4fcc-ac81-2d798749d516 req-242da687-ce31-45d3-bdbe-41322f97a6cf ee15961ad2be4bada6c106ac4f69f422 c076c42271004e7aa86e84416e33f826 - - default default] [instance: f156c348-19a2-41d9-bf57-79cea2a84e3c] Detach interface failed, port_id=9ce16b2c-f453-4ee6-ab7f-159c6386c5d3, reason: Instance f156c348-19a2-41d9-bf57-79cea2a84e3c could not be found. _process_instance_vif_deleted_event /usr/lib/python3.12/site-packages/nova/compute/manager.py:11646
Oct 09 16:28:24 compute-0 nova_compute[117331]: 2025-10-09 16:28:24.779 2 DEBUG oslo_concurrency.lockutils [None req-643d7905-78c4-4114-85e6-63c0e0228344 c2c0f7c2f15e4da6881dc393064b0e16 1a345bddd4804404a55948133ea8150f - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:405
Oct 09 16:28:25 compute-0 nova_compute[117331]: 2025-10-09 16:28:25.037 2 DEBUG nova.compute.resource_tracker [None req-92a13bed-c081-4c7b-87f3-bafebefb79f2 - - - - - -] Instance f156c348-19a2-41d9-bf57-79cea2a84e3c actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.12/site-packages/nova/compute/resource_tracker.py:1740
Oct 09 16:28:25 compute-0 nova_compute[117331]: 2025-10-09 16:28:25.037 2 DEBUG nova.compute.resource_tracker [None req-92a13bed-c081-4c7b-87f3-bafebefb79f2 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 1 _report_final_resource_view /usr/lib/python3.12/site-packages/nova/compute/resource_tracker.py:1159
Oct 09 16:28:25 compute-0 nova_compute[117331]: 2025-10-09 16:28:25.037 2 DEBUG nova.compute.resource_tracker [None req-92a13bed-c081-4c7b-87f3-bafebefb79f2 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=640MB phys_disk=79GB used_disk=1GB total_vcpus=8 used_vcpus=1 pci_stats=[] stats={'failed_builds': '0', 'uptime': ' 16:28:23 up 37 min,  0 user,  load average: 0.47, 0.57, 0.43\n', 'num_instances': '1', 'num_vm_active': '1', 'num_task_deleting': '1', 'num_os_type_None': '1', 'num_proj_1a345bddd4804404a55948133ea8150f': '1', 'io_workload': '0'} _report_final_resource_view /usr/lib/python3.12/site-packages/nova/compute/resource_tracker.py:1168
Oct 09 16:28:25 compute-0 nova_compute[117331]: 2025-10-09 16:28:25.132 2 DEBUG nova.compute.provider_tree [None req-92a13bed-c081-4c7b-87f3-bafebefb79f2 - - - - - -] Inventory has not changed in ProviderTree for provider: 593051b8-2000-437f-a915-2616fc8b1671 update_inventory /usr/lib/python3.12/site-packages/nova/compute/provider_tree.py:180
Oct 09 16:28:25 compute-0 nova_compute[117331]: 2025-10-09 16:28:25.644 2 DEBUG nova.scheduler.client.report [None req-92a13bed-c081-4c7b-87f3-bafebefb79f2 - - - - - -] Inventory has not changed for provider 593051b8-2000-437f-a915-2616fc8b1671 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 79, 'reserved': 1, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.12/site-packages/nova/scheduler/client/report.py:958
Oct 09 16:28:26 compute-0 nova_compute[117331]: 2025-10-09 16:28:26.154 2 DEBUG nova.compute.resource_tracker [None req-92a13bed-c081-4c7b-87f3-bafebefb79f2 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.12/site-packages/nova/compute/resource_tracker.py:1097
Oct 09 16:28:26 compute-0 nova_compute[117331]: 2025-10-09 16:28:26.155 2 DEBUG oslo_concurrency.lockutils [None req-92a13bed-c081-4c7b-87f3-bafebefb79f2 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 2.158s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:424
Oct 09 16:28:26 compute-0 nova_compute[117331]: 2025-10-09 16:28:26.155 2 DEBUG oslo_concurrency.lockutils [None req-643d7905-78c4-4114-85e6-63c0e0228344 c2c0f7c2f15e4da6881dc393064b0e16 1a345bddd4804404a55948133ea8150f - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 1.376s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:410
Oct 09 16:28:26 compute-0 nova_compute[117331]: 2025-10-09 16:28:26.207 2 DEBUG nova.compute.provider_tree [None req-643d7905-78c4-4114-85e6-63c0e0228344 c2c0f7c2f15e4da6881dc393064b0e16 1a345bddd4804404a55948133ea8150f - - default default] Inventory has not changed in ProviderTree for provider: 593051b8-2000-437f-a915-2616fc8b1671 update_inventory /usr/lib/python3.12/site-packages/nova/compute/provider_tree.py:180
Oct 09 16:28:26 compute-0 nova_compute[117331]: 2025-10-09 16:28:26.716 2 DEBUG nova.scheduler.client.report [None req-643d7905-78c4-4114-85e6-63c0e0228344 c2c0f7c2f15e4da6881dc393064b0e16 1a345bddd4804404a55948133ea8150f - - default default] Inventory has not changed for provider 593051b8-2000-437f-a915-2616fc8b1671 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 79, 'reserved': 1, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.12/site-packages/nova/scheduler/client/report.py:958
Oct 09 16:28:26 compute-0 nova_compute[117331]: 2025-10-09 16:28:26.862 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 09 16:28:26 compute-0 nova_compute[117331]: 2025-10-09 16:28:26.935 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 09 16:28:27 compute-0 unix_chkpwd[148666]: password check failed for user (root)
Oct 09 16:28:27 compute-0 sshd-session[148662]: pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=36.224.53.32  user=root
Oct 09 16:28:27 compute-0 nova_compute[117331]: 2025-10-09 16:28:27.226 2 DEBUG oslo_concurrency.lockutils [None req-643d7905-78c4-4114-85e6-63c0e0228344 c2c0f7c2f15e4da6881dc393064b0e16 1a345bddd4804404a55948133ea8150f - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 1.071s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:424
Oct 09 16:28:27 compute-0 nova_compute[117331]: 2025-10-09 16:28:27.256 2 INFO nova.scheduler.client.report [None req-643d7905-78c4-4114-85e6-63c0e0228344 c2c0f7c2f15e4da6881dc393064b0e16 1a345bddd4804404a55948133ea8150f - - default default] Deleted allocations for instance f156c348-19a2-41d9-bf57-79cea2a84e3c
Oct 09 16:28:28 compute-0 nova_compute[117331]: 2025-10-09 16:28:28.156 2 DEBUG oslo_service.periodic_task [None req-92a13bed-c081-4c7b-87f3-bafebefb79f2 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.12/site-packages/oslo_service/periodic_task.py:210
Oct 09 16:28:28 compute-0 nova_compute[117331]: 2025-10-09 16:28:28.156 2 DEBUG oslo_service.periodic_task [None req-92a13bed-c081-4c7b-87f3-bafebefb79f2 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.12/site-packages/oslo_service/periodic_task.py:210
Oct 09 16:28:28 compute-0 nova_compute[117331]: 2025-10-09 16:28:28.157 2 DEBUG oslo_service.periodic_task [None req-92a13bed-c081-4c7b-87f3-bafebefb79f2 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.12/site-packages/oslo_service/periodic_task.py:210
Oct 09 16:28:28 compute-0 nova_compute[117331]: 2025-10-09 16:28:28.157 2 DEBUG nova.compute.manager [None req-92a13bed-c081-4c7b-87f3-bafebefb79f2 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.12/site-packages/nova/compute/manager.py:11228
Oct 09 16:28:28 compute-0 nova_compute[117331]: 2025-10-09 16:28:28.288 2 DEBUG oslo_concurrency.lockutils [None req-643d7905-78c4-4114-85e6-63c0e0228344 c2c0f7c2f15e4da6881dc393064b0e16 1a345bddd4804404a55948133ea8150f - - default default] Lock "f156c348-19a2-41d9-bf57-79cea2a84e3c" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 7.677s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:424
Oct 09 16:28:28 compute-0 nova_compute[117331]: 2025-10-09 16:28:28.303 2 DEBUG oslo_service.periodic_task [None req-92a13bed-c081-4c7b-87f3-bafebefb79f2 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.12/site-packages/oslo_service/periodic_task.py:210
Oct 09 16:28:29 compute-0 sshd-session[148662]: Failed password for root from 36.224.53.32 port 49306 ssh2
Oct 09 16:28:29 compute-0 nova_compute[117331]: 2025-10-09 16:28:29.306 2 DEBUG oslo_service.periodic_task [None req-92a13bed-c081-4c7b-87f3-bafebefb79f2 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.12/site-packages/oslo_service/periodic_task.py:210
Oct 09 16:28:29 compute-0 podman[127775]: time="2025-10-09T16:28:29Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Oct 09 16:28:29 compute-0 podman[127775]: @ - - [09/Oct/2025:16:28:29 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 19526 "" "Go-http-client/1.1"
Oct 09 16:28:29 compute-0 podman[127775]: @ - - [09/Oct/2025:16:28:29 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 3028 "" "Go-http-client/1.1"
Oct 09 16:28:31 compute-0 openstack_network_exporter[129925]: ERROR   16:28:31 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Oct 09 16:28:31 compute-0 openstack_network_exporter[129925]: ERROR   16:28:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Oct 09 16:28:31 compute-0 openstack_network_exporter[129925]: ERROR   16:28:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Oct 09 16:28:31 compute-0 openstack_network_exporter[129925]: ERROR   16:28:31 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Oct 09 16:28:31 compute-0 openstack_network_exporter[129925]: 
Oct 09 16:28:31 compute-0 openstack_network_exporter[129925]: ERROR   16:28:31 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Oct 09 16:28:31 compute-0 openstack_network_exporter[129925]: 
Oct 09 16:28:31 compute-0 podman[148667]: 2025-10-09 16:28:31.528354294 +0000 UTC m=+0.072165540 container health_status 64fa251112d01abb34ec1bb8e22019815ddcaaaa391ce5ec91517147ac5a9994 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_id=edpm, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., managed_by=edpm_ansible, architecture=x86_64, io.buildah.version=1.33.7, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, vcs-type=git, vendor=Red Hat, Inc., io.openshift.tags=minimal rhel9, build-date=2025-08-20T13:12:41, url=https://catalog.redhat.com/en/search?searchType=containers, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, maintainer=Red Hat, Inc., com.redhat.component=ubi9-minimal-container, container_name=openstack_network_exporter, distribution-scope=public, io.openshift.expose-services=, release=1755695350, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., version=9.6, name=ubi9-minimal)
Oct 09 16:28:31 compute-0 podman[148668]: 2025-10-09 16:28:31.582028619 +0000 UTC m=+0.114810507 container health_status d615e4f24ff62fbb14be4a63e6da568ec5b22309773c1c66103b6b4de64a8a2f (image=38.102.83.66:5001/podified-master-centos10/openstack-ovn-controller:watcher_latest, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, tcib_build_tag=watcher_latest, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': '38.102.83.66:5001/podified-master-centos10/openstack-ovn-controller:watcher_latest', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.41.4, org.label-schema.schema-version=1.0, tcib_managed=true, managed_by=edpm_ansible, config_id=ovn_controller, container_name=ovn_controller, org.label-schema.build-date=20251007)
Oct 09 16:28:31 compute-0 nova_compute[117331]: 2025-10-09 16:28:31.863 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 09 16:28:31 compute-0 nova_compute[117331]: 2025-10-09 16:28:31.937 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 09 16:28:32 compute-0 sshd-session[148662]: Connection closed by authenticating user root 36.224.53.32 port 49306 [preauth]
Oct 09 16:28:33 compute-0 nova_compute[117331]: 2025-10-09 16:28:33.304 2 DEBUG oslo_service.periodic_task [None req-92a13bed-c081-4c7b-87f3-bafebefb79f2 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.12/site-packages/oslo_service/periodic_task.py:210
Oct 09 16:28:35 compute-0 ovn_metadata_agent[28608]: 2025-10-09 16:28:35.320 28613 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:405
Oct 09 16:28:35 compute-0 ovn_metadata_agent[28608]: 2025-10-09 16:28:35.320 28613 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:410
Oct 09 16:28:35 compute-0 ovn_metadata_agent[28608]: 2025-10-09 16:28:35.320 28613 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:424
Oct 09 16:28:36 compute-0 sshd-session[148713]: Invalid user admin from 36.224.53.32 port 60172
Oct 09 16:28:36 compute-0 nova_compute[117331]: 2025-10-09 16:28:36.867 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 09 16:28:36 compute-0 nova_compute[117331]: 2025-10-09 16:28:36.938 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 09 16:28:37 compute-0 sshd-session[148713]: pam_unix(sshd:auth): check pass; user unknown
Oct 09 16:28:37 compute-0 sshd-session[148713]: pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=36.224.53.32
Oct 09 16:28:39 compute-0 sshd-session[148713]: Failed password for invalid user admin from 36.224.53.32 port 60172 ssh2
Oct 09 16:28:41 compute-0 sshd-session[148713]: Connection closed by invalid user admin 36.224.53.32 port 60172 [preauth]
Oct 09 16:28:41 compute-0 nova_compute[117331]: 2025-10-09 16:28:41.868 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 09 16:28:41 compute-0 nova_compute[117331]: 2025-10-09 16:28:41.940 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 09 16:28:44 compute-0 podman[148718]: 2025-10-09 16:28:44.86017781 +0000 UTC m=+0.095281511 container health_status 270830b5167cafbab735d8118980f26987e673cd0fa464dd2202394830bcca66 (image=38.102.83.66:5001/podified-master-centos10/openstack-multipathd:watcher_latest, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, tcib_managed=true, io.buildah.version=1.41.4, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251007, org.label-schema.name=CentOS Stream 10 Base Image, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': '38.102.83.66:5001/podified-master-centos10/openstack-multipathd:watcher_latest', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_build_tag=watcher_latest, config_id=multipathd)
Oct 09 16:28:46 compute-0 nova_compute[117331]: 2025-10-09 16:28:46.870 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 09 16:28:46 compute-0 nova_compute[117331]: 2025-10-09 16:28:46.942 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 09 16:28:47 compute-0 sshd-session[148716]: Invalid user zjw from 36.224.53.32 port 40900
Oct 09 16:28:47 compute-0 podman[148739]: 2025-10-09 16:28:47.072614006 +0000 UTC m=+0.046096027 container health_status 86aa93e887899e54d383b0fc17fb3c5637680d1d8e146a130a557c837f0260fe (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter)
Oct 09 16:28:47 compute-0 nova_compute[117331]: 2025-10-09 16:28:47.543 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 09 16:28:47 compute-0 sshd-session[148716]: pam_unix(sshd:auth): check pass; user unknown
Oct 09 16:28:47 compute-0 sshd-session[148716]: pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=36.224.53.32
Oct 09 16:28:50 compute-0 sshd-session[148716]: Failed password for invalid user zjw from 36.224.53.32 port 40900 ssh2
Oct 09 16:28:51 compute-0 sshd-session[148716]: Connection closed by invalid user zjw 36.224.53.32 port 40900 [preauth]
Oct 09 16:28:51 compute-0 nova_compute[117331]: 2025-10-09 16:28:51.873 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 09 16:28:51 compute-0 nova_compute[117331]: 2025-10-09 16:28:51.943 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 09 16:28:51 compute-0 sshd-session[148763]: Invalid user docker from 36.224.53.32 port 48482
Oct 09 16:28:52 compute-0 sshd-session[148763]: pam_unix(sshd:auth): check pass; user unknown
Oct 09 16:28:52 compute-0 sshd-session[148763]: pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=36.224.53.32
Oct 09 16:28:53 compute-0 podman[148765]: 2025-10-09 16:28:53.850874451 +0000 UTC m=+0.078538131 container health_status 0e966c7a283689daf23c5db5017f23f685ee836319240188a9f70ddd246563c5 (image=38.102.83.66:5001/podified-master-centos10/openstack-neutron-metadata-agent-ovn:watcher_latest, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': '38.102.83.66:5001/podified-master-centos10/openstack-neutron-metadata-agent-ovn:watcher_latest', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.4, tcib_managed=true, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, org.label-schema.build-date=20251007, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 10 Base Image, tcib_build_tag=watcher_latest)
Oct 09 16:28:53 compute-0 podman[148766]: 2025-10-09 16:28:53.876944385 +0000 UTC m=+0.091266932 container health_status 8227728d23f374cb9528be6ddf01dff38d8c792912a17b0d137932a18390b820 (image=38.102.83.66:5001/podified-master-centos10/openstack-iscsid:watcher_latest, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, tcib_build_tag=watcher_latest, container_name=iscsid, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.vendor=CentOS, config_id=iscsid, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': '38.102.83.66:5001/podified-master-centos10/openstack-iscsid:watcher_latest', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, io.buildah.version=1.41.4, org.label-schema.build-date=20251007)
Oct 09 16:28:54 compute-0 sshd-session[148763]: Failed password for invalid user docker from 36.224.53.32 port 48482 ssh2
Oct 09 16:28:55 compute-0 ovn_metadata_agent[28608]: 2025-10-09 16:28:55.728 28613 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:10:14:aa 10.100.0.2'], port_security=[], type=localport, nat_addresses=[], virtual_parent=[], up=[False], options={}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.2/28', 'neutron:device_id': 'ovnmeta-84247d99-b9fb-4f75-af06-dd3e92557a34', 'neutron:device_owner': 'network:distributed', 'neutron:mtu': '', 'neutron:network_name': 'neutron-84247d99-b9fb-4f75-af06-dd3e92557a34', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '3f838d4a2de74d8fbb8e91e7ef351b24', 'neutron:revision_number': '2', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=83685448-9277-4bf7-9118-ff77ca4cb703, chassis=[], tunnel_key=1, gateway_chassis=[], requested_chassis=[], logical_port=091675d2-c61b-4a23-b064-d4ca0295fca5) old=Port_Binding(mac=['fa:16:3e:10:14:aa'], external_ids={'neutron:cidrs': '', 'neutron:device_id': 'ovnmeta-84247d99-b9fb-4f75-af06-dd3e92557a34', 'neutron:device_owner': 'network:distributed', 'neutron:mtu': '', 'neutron:network_name': 'neutron-84247d99-b9fb-4f75-af06-dd3e92557a34', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '3f838d4a2de74d8fbb8e91e7ef351b24', 'neutron:revision_number': '1', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}) matches /usr/lib/python3.12/site-packages/ovsdbapp/backend/ovs_idl/event.py:55
Oct 09 16:28:55 compute-0 ovn_metadata_agent[28608]: 2025-10-09 16:28:55.729 28613 INFO neutron.agent.ovn.metadata.agent [-] Metadata Port 091675d2-c61b-4a23-b064-d4ca0295fca5 in datapath 84247d99-b9fb-4f75-af06-dd3e92557a34 updated
Oct 09 16:28:55 compute-0 ovn_metadata_agent[28608]: 2025-10-09 16:28:55.731 28613 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 84247d99-b9fb-4f75-af06-dd3e92557a34, tearing the namespace down if needed _get_provision_params /usr/lib/python3.12/site-packages/neutron/agent/ovn/metadata/agent.py:756
Oct 09 16:28:55 compute-0 ovn_metadata_agent[28608]: 2025-10-09 16:28:55.731 139687 DEBUG oslo.privsep.daemon [-] privsep: reply[1aec7868-5d03-4290-8728-2a7c8a0a458a]: (4, False) _call_back /usr/lib/python3.12/site-packages/oslo_privsep/daemon.py:515
Oct 09 16:28:56 compute-0 sshd-session[148763]: Connection closed by invalid user docker 36.224.53.32 port 48482 [preauth]
Oct 09 16:28:56 compute-0 nova_compute[117331]: 2025-10-09 16:28:56.875 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 09 16:28:56 compute-0 nova_compute[117331]: 2025-10-09 16:28:56.945 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 09 16:28:58 compute-0 sshd-session[148803]: Invalid user cloud from 36.224.53.32 port 55762
Oct 09 16:28:59 compute-0 podman[127775]: time="2025-10-09T16:28:59Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Oct 09 16:28:59 compute-0 podman[127775]: @ - - [09/Oct/2025:16:28:59 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 19526 "" "Go-http-client/1.1"
Oct 09 16:28:59 compute-0 podman[127775]: @ - - [09/Oct/2025:16:28:59 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 3031 "" "Go-http-client/1.1"
Oct 09 16:28:59 compute-0 sshd-session[148803]: pam_unix(sshd:auth): check pass; user unknown
Oct 09 16:28:59 compute-0 sshd-session[148803]: pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=36.224.53.32
Oct 09 16:29:01 compute-0 openstack_network_exporter[129925]: ERROR   16:29:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Oct 09 16:29:01 compute-0 openstack_network_exporter[129925]: ERROR   16:29:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Oct 09 16:29:01 compute-0 openstack_network_exporter[129925]: ERROR   16:29:01 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Oct 09 16:29:01 compute-0 openstack_network_exporter[129925]: ERROR   16:29:01 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Oct 09 16:29:01 compute-0 openstack_network_exporter[129925]: ERROR   16:29:01 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Oct 09 16:29:01 compute-0 podman[148805]: 2025-10-09 16:29:01.8463694 +0000 UTC m=+0.076127164 container health_status 64fa251112d01abb34ec1bb8e22019815ddcaaaa391ce5ec91517147ac5a9994 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, io.openshift.expose-services=, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, name=ubi9-minimal, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., url=https://catalog.redhat.com/en/search?searchType=containers, architecture=x86_64, io.openshift.tags=minimal rhel9, release=1755695350, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, com.redhat.component=ubi9-minimal-container, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, io.buildah.version=1.33.7, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., vcs-type=git, container_name=openstack_network_exporter, maintainer=Red Hat, Inc., build-date=2025-08-20T13:12:41, config_id=edpm, managed_by=edpm_ansible, version=9.6, vendor=Red Hat, Inc., distribution-scope=public)
Oct 09 16:29:01 compute-0 nova_compute[117331]: 2025-10-09 16:29:01.878 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 09 16:29:01 compute-0 podman[148806]: 2025-10-09 16:29:01.922923587 +0000 UTC m=+0.138707350 container health_status d615e4f24ff62fbb14be4a63e6da568ec5b22309773c1c66103b6b4de64a8a2f (image=38.102.83.66:5001/podified-master-centos10/openstack-ovn-controller:watcher_latest, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=watcher_latest, tcib_managed=true, container_name=ovn_controller, io.buildah.version=1.41.4, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': '38.102.83.66:5001/podified-master-centos10/openstack-ovn-controller:watcher_latest', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.build-date=20251007, org.label-schema.license=GPLv2)
Oct 09 16:29:01 compute-0 nova_compute[117331]: 2025-10-09 16:29:01.947 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 09 16:29:01 compute-0 sshd-session[148803]: Failed password for invalid user cloud from 36.224.53.32 port 55762 ssh2
Oct 09 16:29:02 compute-0 ovn_metadata_agent[28608]: 2025-10-09 16:29:02.206 28613 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=20, options={'arp_ns_explicit_output': 'true', 'fdb_removal_limit': '0', 'ignore_lsp_down': 'false', 'mac_binding_removal_limit': '0', 'mac_prefix': '4a:65:5f', 'max_tunid': '16711680', 'northd_internal_version': '24.09.4-20.37.0-77.8', 'svc_monitor_mac': '32:24:1e:e3:5d:e8'}, ipsec=False) old=SB_Global(nb_cfg=19) matches /usr/lib/python3.12/site-packages/ovsdbapp/backend/ovs_idl/event.py:55
Oct 09 16:29:02 compute-0 nova_compute[117331]: 2025-10-09 16:29:02.207 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 09 16:29:02 compute-0 ovn_metadata_agent[28608]: 2025-10-09 16:29:02.208 28613 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 1 seconds run /usr/lib/python3.12/site-packages/neutron/agent/ovn/metadata/agent.py:367
Oct 09 16:29:03 compute-0 ovn_metadata_agent[28608]: 2025-10-09 16:29:03.210 28613 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=9954897f-aa83-45dd-8e84-289816676c2a, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '20'}),), if_exists=True) do_commit /usr/lib/python3.12/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 09 16:29:03 compute-0 ovn_metadata_agent[28608]: 2025-10-09 16:29:03.407 28613 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:8a:29:da 10.100.0.2'], port_security=[], type=localport, nat_addresses=[], virtual_parent=[], up=[False], options={}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.2/28', 'neutron:device_id': 'ovnmeta-e56af2a3-453c-4f26-b4c2-fb9b8df078c2', 'neutron:device_owner': 'network:distributed', 'neutron:mtu': '', 'neutron:network_name': 'neutron-e56af2a3-453c-4f26-b4c2-fb9b8df078c2', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'c9d397897ee84f2e91c7e33f6c4052a3', 'neutron:revision_number': '2', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=d66d448c-4ab2-45fb-a2b0-2a2fd1094a6c, chassis=[], tunnel_key=1, gateway_chassis=[], requested_chassis=[], logical_port=b4bec131-80b2-4c9a-ad9b-67bb5cd2df30) old=Port_Binding(mac=['fa:16:3e:8a:29:da'], external_ids={'neutron:cidrs': '', 'neutron:device_id': 'ovnmeta-e56af2a3-453c-4f26-b4c2-fb9b8df078c2', 'neutron:device_owner': 'network:distributed', 'neutron:mtu': '', 'neutron:network_name': 'neutron-e56af2a3-453c-4f26-b4c2-fb9b8df078c2', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'c9d397897ee84f2e91c7e33f6c4052a3', 'neutron:revision_number': '1', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}) matches /usr/lib/python3.12/site-packages/ovsdbapp/backend/ovs_idl/event.py:55
Oct 09 16:29:03 compute-0 ovn_metadata_agent[28608]: 2025-10-09 16:29:03.409 28613 INFO neutron.agent.ovn.metadata.agent [-] Metadata Port b4bec131-80b2-4c9a-ad9b-67bb5cd2df30 in datapath e56af2a3-453c-4f26-b4c2-fb9b8df078c2 updated
Oct 09 16:29:03 compute-0 ovn_metadata_agent[28608]: 2025-10-09 16:29:03.411 28613 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network e56af2a3-453c-4f26-b4c2-fb9b8df078c2, tearing the namespace down if needed _get_provision_params /usr/lib/python3.12/site-packages/neutron/agent/ovn/metadata/agent.py:756
Oct 09 16:29:03 compute-0 ovn_metadata_agent[28608]: 2025-10-09 16:29:03.411 139687 DEBUG oslo.privsep.daemon [-] privsep: reply[3cdb4dc0-8590-461c-be32-586a84f9120b]: (4, False) _call_back /usr/lib/python3.12/site-packages/oslo_privsep/daemon.py:515
Oct 09 16:29:04 compute-0 sshd-session[148803]: Connection closed by invalid user cloud 36.224.53.32 port 55762 [preauth]
Oct 09 16:29:06 compute-0 nova_compute[117331]: 2025-10-09 16:29:06.880 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 09 16:29:06 compute-0 nova_compute[117331]: 2025-10-09 16:29:06.948 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 09 16:29:09 compute-0 sshd-session[148855]: Invalid user clouduser from 36.224.53.32 port 37486
Oct 09 16:29:10 compute-0 sshd-session[148855]: pam_unix(sshd:auth): check pass; user unknown
Oct 09 16:29:10 compute-0 sshd-session[148855]: pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=36.224.53.32
Oct 09 16:29:11 compute-0 nova_compute[117331]: 2025-10-09 16:29:11.148 2 DEBUG oslo_concurrency.lockutils [None req-65339b89-d5c9-40a2-9f50-b040cf00cb5f ea71b67fc37c45859db08bb5231d7a56 c9d397897ee84f2e91c7e33f6c4052a3 - - default default] Acquiring lock "144cc4ab-6004-4896-8899-8860557341b0" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:405
Oct 09 16:29:11 compute-0 nova_compute[117331]: 2025-10-09 16:29:11.149 2 DEBUG oslo_concurrency.lockutils [None req-65339b89-d5c9-40a2-9f50-b040cf00cb5f ea71b67fc37c45859db08bb5231d7a56 c9d397897ee84f2e91c7e33f6c4052a3 - - default default] Lock "144cc4ab-6004-4896-8899-8860557341b0" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:410
Oct 09 16:29:11 compute-0 nova_compute[117331]: 2025-10-09 16:29:11.655 2 DEBUG nova.compute.manager [None req-65339b89-d5c9-40a2-9f50-b040cf00cb5f ea71b67fc37c45859db08bb5231d7a56 c9d397897ee84f2e91c7e33f6c4052a3 - - default default] [instance: 144cc4ab-6004-4896-8899-8860557341b0] Starting instance... _do_build_and_run_instance /usr/lib/python3.12/site-packages/nova/compute/manager.py:2472
Oct 09 16:29:11 compute-0 nova_compute[117331]: 2025-10-09 16:29:11.882 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 09 16:29:11 compute-0 nova_compute[117331]: 2025-10-09 16:29:11.950 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 09 16:29:12 compute-0 sshd-session[148855]: Failed password for invalid user clouduser from 36.224.53.32 port 37486 ssh2
Oct 09 16:29:12 compute-0 nova_compute[117331]: 2025-10-09 16:29:12.206 2 DEBUG oslo_concurrency.lockutils [None req-65339b89-d5c9-40a2-9f50-b040cf00cb5f ea71b67fc37c45859db08bb5231d7a56 c9d397897ee84f2e91c7e33f6c4052a3 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:405
Oct 09 16:29:12 compute-0 nova_compute[117331]: 2025-10-09 16:29:12.207 2 DEBUG oslo_concurrency.lockutils [None req-65339b89-d5c9-40a2-9f50-b040cf00cb5f ea71b67fc37c45859db08bb5231d7a56 c9d397897ee84f2e91c7e33f6c4052a3 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:410
Oct 09 16:29:12 compute-0 nova_compute[117331]: 2025-10-09 16:29:12.216 2 DEBUG nova.virt.hardware [None req-65339b89-d5c9-40a2-9f50-b040cf00cb5f ea71b67fc37c45859db08bb5231d7a56 c9d397897ee84f2e91c7e33f6c4052a3 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.12/site-packages/nova/virt/hardware.py:2528
Oct 09 16:29:12 compute-0 nova_compute[117331]: 2025-10-09 16:29:12.217 2 INFO nova.compute.claims [None req-65339b89-d5c9-40a2-9f50-b040cf00cb5f ea71b67fc37c45859db08bb5231d7a56 c9d397897ee84f2e91c7e33f6c4052a3 - - default default] [instance: 144cc4ab-6004-4896-8899-8860557341b0] Claim successful on node compute-0.ctlplane.example.com
Oct 09 16:29:13 compute-0 nova_compute[117331]: 2025-10-09 16:29:13.322 2 DEBUG nova.compute.provider_tree [None req-65339b89-d5c9-40a2-9f50-b040cf00cb5f ea71b67fc37c45859db08bb5231d7a56 c9d397897ee84f2e91c7e33f6c4052a3 - - default default] Inventory has not changed in ProviderTree for provider: 593051b8-2000-437f-a915-2616fc8b1671 update_inventory /usr/lib/python3.12/site-packages/nova/compute/provider_tree.py:180
Oct 09 16:29:13 compute-0 sshd-session[148855]: Connection closed by invalid user clouduser 36.224.53.32 port 37486 [preauth]
Oct 09 16:29:13 compute-0 nova_compute[117331]: 2025-10-09 16:29:13.829 2 DEBUG nova.scheduler.client.report [None req-65339b89-d5c9-40a2-9f50-b040cf00cb5f ea71b67fc37c45859db08bb5231d7a56 c9d397897ee84f2e91c7e33f6c4052a3 - - default default] Inventory has not changed for provider 593051b8-2000-437f-a915-2616fc8b1671 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 79, 'reserved': 1, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.12/site-packages/nova/scheduler/client/report.py:958
Oct 09 16:29:14 compute-0 nova_compute[117331]: 2025-10-09 16:29:14.339 2 DEBUG oslo_concurrency.lockutils [None req-65339b89-d5c9-40a2-9f50-b040cf00cb5f ea71b67fc37c45859db08bb5231d7a56 c9d397897ee84f2e91c7e33f6c4052a3 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 2.132s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:424
Oct 09 16:29:14 compute-0 nova_compute[117331]: 2025-10-09 16:29:14.340 2 DEBUG nova.compute.manager [None req-65339b89-d5c9-40a2-9f50-b040cf00cb5f ea71b67fc37c45859db08bb5231d7a56 c9d397897ee84f2e91c7e33f6c4052a3 - - default default] [instance: 144cc4ab-6004-4896-8899-8860557341b0] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.12/site-packages/nova/compute/manager.py:2869
Oct 09 16:29:14 compute-0 nova_compute[117331]: 2025-10-09 16:29:14.852 2 DEBUG nova.compute.manager [None req-65339b89-d5c9-40a2-9f50-b040cf00cb5f ea71b67fc37c45859db08bb5231d7a56 c9d397897ee84f2e91c7e33f6c4052a3 - - default default] [instance: 144cc4ab-6004-4896-8899-8860557341b0] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.12/site-packages/nova/compute/manager.py:2016
Oct 09 16:29:14 compute-0 nova_compute[117331]: 2025-10-09 16:29:14.852 2 DEBUG nova.network.neutron [None req-65339b89-d5c9-40a2-9f50-b040cf00cb5f ea71b67fc37c45859db08bb5231d7a56 c9d397897ee84f2e91c7e33f6c4052a3 - - default default] [instance: 144cc4ab-6004-4896-8899-8860557341b0] allocate_for_instance() allocate_for_instance /usr/lib/python3.12/site-packages/nova/network/neutron.py:1208
Oct 09 16:29:14 compute-0 nova_compute[117331]: 2025-10-09 16:29:14.853 2 WARNING neutronclient.v2_0.client [None req-65339b89-d5c9-40a2-9f50-b040cf00cb5f ea71b67fc37c45859db08bb5231d7a56 c9d397897ee84f2e91c7e33f6c4052a3 - - default default] The python binding code in neutronclient is deprecated in favor of OpenstackSDK, please use that as this will be removed in a future release.
Oct 09 16:29:14 compute-0 nova_compute[117331]: 2025-10-09 16:29:14.853 2 WARNING neutronclient.v2_0.client [None req-65339b89-d5c9-40a2-9f50-b040cf00cb5f ea71b67fc37c45859db08bb5231d7a56 c9d397897ee84f2e91c7e33f6c4052a3 - - default default] The python binding code in neutronclient is deprecated in favor of OpenstackSDK, please use that as this will be removed in a future release.
Oct 09 16:29:15 compute-0 nova_compute[117331]: 2025-10-09 16:29:15.360 2 INFO nova.virt.libvirt.driver [None req-65339b89-d5c9-40a2-9f50-b040cf00cb5f ea71b67fc37c45859db08bb5231d7a56 c9d397897ee84f2e91c7e33f6c4052a3 - - default default] [instance: 144cc4ab-6004-4896-8899-8860557341b0] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Oct 09 16:29:15 compute-0 nova_compute[117331]: 2025-10-09 16:29:15.662 2 DEBUG nova.network.neutron [None req-65339b89-d5c9-40a2-9f50-b040cf00cb5f ea71b67fc37c45859db08bb5231d7a56 c9d397897ee84f2e91c7e33f6c4052a3 - - default default] [instance: 144cc4ab-6004-4896-8899-8860557341b0] Successfully created port: 1e79672b-1776-4292-b861-3666e6b4dc69 _create_port_minimal /usr/lib/python3.12/site-packages/nova/network/neutron.py:550
Oct 09 16:29:15 compute-0 podman[148860]: 2025-10-09 16:29:15.851573607 +0000 UTC m=+0.080805902 container health_status 270830b5167cafbab735d8118980f26987e673cd0fa464dd2202394830bcca66 (image=38.102.83.66:5001/podified-master-centos10/openstack-multipathd:watcher_latest, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, container_name=multipathd, managed_by=edpm_ansible, org.label-schema.build-date=20251007, org.label-schema.schema-version=1.0, config_id=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': '38.102.83.66:5001/podified-master-centos10/openstack-multipathd:watcher_latest', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, io.buildah.version=1.41.4, tcib_build_tag=watcher_latest)
Oct 09 16:29:15 compute-0 nova_compute[117331]: 2025-10-09 16:29:15.867 2 DEBUG nova.compute.manager [None req-65339b89-d5c9-40a2-9f50-b040cf00cb5f ea71b67fc37c45859db08bb5231d7a56 c9d397897ee84f2e91c7e33f6c4052a3 - - default default] [instance: 144cc4ab-6004-4896-8899-8860557341b0] Start building block device mappings for instance. _build_resources /usr/lib/python3.12/site-packages/nova/compute/manager.py:2904
Oct 09 16:29:16 compute-0 nova_compute[117331]: 2025-10-09 16:29:16.255 2 DEBUG nova.network.neutron [None req-65339b89-d5c9-40a2-9f50-b040cf00cb5f ea71b67fc37c45859db08bb5231d7a56 c9d397897ee84f2e91c7e33f6c4052a3 - - default default] [instance: 144cc4ab-6004-4896-8899-8860557341b0] Successfully updated port: 1e79672b-1776-4292-b861-3666e6b4dc69 _update_port /usr/lib/python3.12/site-packages/nova/network/neutron.py:588
Oct 09 16:29:16 compute-0 nova_compute[117331]: 2025-10-09 16:29:16.337 2 DEBUG nova.compute.manager [req-efdcf1bd-9429-4f34-875b-108227cf4b61 req-54174cea-6b48-40b5-9350-c9e9d233919e ee15961ad2be4bada6c106ac4f69f422 c076c42271004e7aa86e84416e33f826 - - default default] [instance: 144cc4ab-6004-4896-8899-8860557341b0] Received event network-changed-1e79672b-1776-4292-b861-3666e6b4dc69 external_instance_event /usr/lib/python3.12/site-packages/nova/compute/manager.py:11812
Oct 09 16:29:16 compute-0 nova_compute[117331]: 2025-10-09 16:29:16.338 2 DEBUG nova.compute.manager [req-efdcf1bd-9429-4f34-875b-108227cf4b61 req-54174cea-6b48-40b5-9350-c9e9d233919e ee15961ad2be4bada6c106ac4f69f422 c076c42271004e7aa86e84416e33f826 - - default default] [instance: 144cc4ab-6004-4896-8899-8860557341b0] Refreshing instance network info cache due to event network-changed-1e79672b-1776-4292-b861-3666e6b4dc69. external_instance_event /usr/lib/python3.12/site-packages/nova/compute/manager.py:11817
Oct 09 16:29:16 compute-0 nova_compute[117331]: 2025-10-09 16:29:16.338 2 DEBUG oslo_concurrency.lockutils [req-efdcf1bd-9429-4f34-875b-108227cf4b61 req-54174cea-6b48-40b5-9350-c9e9d233919e ee15961ad2be4bada6c106ac4f69f422 c076c42271004e7aa86e84416e33f826 - - default default] Acquiring lock "refresh_cache-144cc4ab-6004-4896-8899-8860557341b0" lock /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:313
Oct 09 16:29:16 compute-0 nova_compute[117331]: 2025-10-09 16:29:16.338 2 DEBUG oslo_concurrency.lockutils [req-efdcf1bd-9429-4f34-875b-108227cf4b61 req-54174cea-6b48-40b5-9350-c9e9d233919e ee15961ad2be4bada6c106ac4f69f422 c076c42271004e7aa86e84416e33f826 - - default default] Acquired lock "refresh_cache-144cc4ab-6004-4896-8899-8860557341b0" lock /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:316
Oct 09 16:29:16 compute-0 nova_compute[117331]: 2025-10-09 16:29:16.338 2 DEBUG nova.network.neutron [req-efdcf1bd-9429-4f34-875b-108227cf4b61 req-54174cea-6b48-40b5-9350-c9e9d233919e ee15961ad2be4bada6c106ac4f69f422 c076c42271004e7aa86e84416e33f826 - - default default] [instance: 144cc4ab-6004-4896-8899-8860557341b0] Refreshing network info cache for port 1e79672b-1776-4292-b861-3666e6b4dc69 _get_instance_nw_info /usr/lib/python3.12/site-packages/nova/network/neutron.py:2067
Oct 09 16:29:16 compute-0 nova_compute[117331]: 2025-10-09 16:29:16.762 2 DEBUG oslo_concurrency.lockutils [None req-65339b89-d5c9-40a2-9f50-b040cf00cb5f ea71b67fc37c45859db08bb5231d7a56 c9d397897ee84f2e91c7e33f6c4052a3 - - default default] Acquiring lock "refresh_cache-144cc4ab-6004-4896-8899-8860557341b0" lock /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:313
Oct 09 16:29:16 compute-0 nova_compute[117331]: 2025-10-09 16:29:16.843 2 WARNING neutronclient.v2_0.client [req-efdcf1bd-9429-4f34-875b-108227cf4b61 req-54174cea-6b48-40b5-9350-c9e9d233919e ee15961ad2be4bada6c106ac4f69f422 c076c42271004e7aa86e84416e33f826 - - default default] The python binding code in neutronclient is deprecated in favor of OpenstackSDK, please use that as this will be removed in a future release.
Oct 09 16:29:16 compute-0 nova_compute[117331]: 2025-10-09 16:29:16.882 2 DEBUG nova.compute.manager [None req-65339b89-d5c9-40a2-9f50-b040cf00cb5f ea71b67fc37c45859db08bb5231d7a56 c9d397897ee84f2e91c7e33f6c4052a3 - - default default] [instance: 144cc4ab-6004-4896-8899-8860557341b0] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.12/site-packages/nova/compute/manager.py:2678
Oct 09 16:29:16 compute-0 nova_compute[117331]: 2025-10-09 16:29:16.883 2 DEBUG nova.virt.libvirt.driver [None req-65339b89-d5c9-40a2-9f50-b040cf00cb5f ea71b67fc37c45859db08bb5231d7a56 c9d397897ee84f2e91c7e33f6c4052a3 - - default default] [instance: 144cc4ab-6004-4896-8899-8860557341b0] Creating instance directory _create_image /usr/lib/python3.12/site-packages/nova/virt/libvirt/driver.py:5138
Oct 09 16:29:16 compute-0 nova_compute[117331]: 2025-10-09 16:29:16.884 2 INFO nova.virt.libvirt.driver [None req-65339b89-d5c9-40a2-9f50-b040cf00cb5f ea71b67fc37c45859db08bb5231d7a56 c9d397897ee84f2e91c7e33f6c4052a3 - - default default] [instance: 144cc4ab-6004-4896-8899-8860557341b0] Creating image(s)
Oct 09 16:29:16 compute-0 nova_compute[117331]: 2025-10-09 16:29:16.885 2 DEBUG oslo_concurrency.lockutils [None req-65339b89-d5c9-40a2-9f50-b040cf00cb5f ea71b67fc37c45859db08bb5231d7a56 c9d397897ee84f2e91c7e33f6c4052a3 - - default default] Acquiring lock "/var/lib/nova/instances/144cc4ab-6004-4896-8899-8860557341b0/disk.info" by "nova.virt.libvirt.imagebackend.Image.resolve_driver_format.<locals>.write_to_disk_info_file" inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:405
Oct 09 16:29:16 compute-0 nova_compute[117331]: 2025-10-09 16:29:16.885 2 DEBUG oslo_concurrency.lockutils [None req-65339b89-d5c9-40a2-9f50-b040cf00cb5f ea71b67fc37c45859db08bb5231d7a56 c9d397897ee84f2e91c7e33f6c4052a3 - - default default] Lock "/var/lib/nova/instances/144cc4ab-6004-4896-8899-8860557341b0/disk.info" acquired by "nova.virt.libvirt.imagebackend.Image.resolve_driver_format.<locals>.write_to_disk_info_file" :: waited 0.000s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:410
Oct 09 16:29:16 compute-0 nova_compute[117331]: 2025-10-09 16:29:16.886 2 DEBUG oslo_concurrency.lockutils [None req-65339b89-d5c9-40a2-9f50-b040cf00cb5f ea71b67fc37c45859db08bb5231d7a56 c9d397897ee84f2e91c7e33f6c4052a3 - - default default] Lock "/var/lib/nova/instances/144cc4ab-6004-4896-8899-8860557341b0/disk.info" "released" by "nova.virt.libvirt.imagebackend.Image.resolve_driver_format.<locals>.write_to_disk_info_file" :: held 0.001s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:424
Oct 09 16:29:16 compute-0 nova_compute[117331]: 2025-10-09 16:29:16.887 2 DEBUG oslo_utils.imageutils.format_inspector [None req-65339b89-d5c9-40a2-9f50-b040cf00cb5f ea71b67fc37c45859db08bb5231d7a56 c9d397897ee84f2e91c7e33f6c4052a3 - - default default] Format inspector for vmdk does not match, excluding from consideration (Signature KDMV not found: b'\xebH\x90\x00') _process_chunk /usr/lib/python3.12/site-packages/oslo_utils/imageutils/format_inspector.py:1365
Oct 09 16:29:16 compute-0 nova_compute[117331]: 2025-10-09 16:29:16.891 2 DEBUG oslo_utils.imageutils.format_inspector [None req-65339b89-d5c9-40a2-9f50-b040cf00cb5f ea71b67fc37c45859db08bb5231d7a56 c9d397897ee84f2e91c7e33f6c4052a3 - - default default] Format inspector for vhdx does not match, excluding from consideration (Region signature not found at 30000) _process_chunk /usr/lib/python3.12/site-packages/oslo_utils/imageutils/format_inspector.py:1365
Oct 09 16:29:16 compute-0 nova_compute[117331]: 2025-10-09 16:29:16.892 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 09 16:29:16 compute-0 nova_compute[117331]: 2025-10-09 16:29:16.893 2 DEBUG oslo_concurrency.processutils [None req-65339b89-d5c9-40a2-9f50-b040cf00cb5f ea71b67fc37c45859db08bb5231d7a56 c9d397897ee84f2e91c7e33f6c4052a3 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/cea3dacfdea0f3734ae526b812744e847bc2d356 --force-share --output=json execute /usr/lib/python3.12/site-packages/oslo_concurrency/processutils.py:349
Oct 09 16:29:16 compute-0 nova_compute[117331]: 2025-10-09 16:29:16.951 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 09 16:29:16 compute-0 nova_compute[117331]: 2025-10-09 16:29:16.976 2 DEBUG oslo_concurrency.processutils [None req-65339b89-d5c9-40a2-9f50-b040cf00cb5f ea71b67fc37c45859db08bb5231d7a56 c9d397897ee84f2e91c7e33f6c4052a3 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/cea3dacfdea0f3734ae526b812744e847bc2d356 --force-share --output=json" returned: 0 in 0.083s execute /usr/lib/python3.12/site-packages/oslo_concurrency/processutils.py:372
Oct 09 16:29:16 compute-0 nova_compute[117331]: 2025-10-09 16:29:16.977 2 DEBUG oslo_concurrency.lockutils [None req-65339b89-d5c9-40a2-9f50-b040cf00cb5f ea71b67fc37c45859db08bb5231d7a56 c9d397897ee84f2e91c7e33f6c4052a3 - - default default] Acquiring lock "cea3dacfdea0f3734ae526b812744e847bc2d356" by "nova.virt.libvirt.imagebackend.Qcow2.create_image.<locals>.create_qcow2_image" inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:405
Oct 09 16:29:16 compute-0 nova_compute[117331]: 2025-10-09 16:29:16.978 2 DEBUG oslo_concurrency.lockutils [None req-65339b89-d5c9-40a2-9f50-b040cf00cb5f ea71b67fc37c45859db08bb5231d7a56 c9d397897ee84f2e91c7e33f6c4052a3 - - default default] Lock "cea3dacfdea0f3734ae526b812744e847bc2d356" acquired by "nova.virt.libvirt.imagebackend.Qcow2.create_image.<locals>.create_qcow2_image" :: waited 0.001s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:410
Oct 09 16:29:16 compute-0 nova_compute[117331]: 2025-10-09 16:29:16.979 2 DEBUG oslo_utils.imageutils.format_inspector [None req-65339b89-d5c9-40a2-9f50-b040cf00cb5f ea71b67fc37c45859db08bb5231d7a56 c9d397897ee84f2e91c7e33f6c4052a3 - - default default] Format inspector for vmdk does not match, excluding from consideration (Signature KDMV not found: b'\xebH\x90\x00') _process_chunk /usr/lib/python3.12/site-packages/oslo_utils/imageutils/format_inspector.py:1365
Oct 09 16:29:16 compute-0 nova_compute[117331]: 2025-10-09 16:29:16.983 2 DEBUG oslo_utils.imageutils.format_inspector [None req-65339b89-d5c9-40a2-9f50-b040cf00cb5f ea71b67fc37c45859db08bb5231d7a56 c9d397897ee84f2e91c7e33f6c4052a3 - - default default] Format inspector for vhdx does not match, excluding from consideration (Region signature not found at 30000) _process_chunk /usr/lib/python3.12/site-packages/oslo_utils/imageutils/format_inspector.py:1365
Oct 09 16:29:16 compute-0 nova_compute[117331]: 2025-10-09 16:29:16.983 2 DEBUG oslo_concurrency.processutils [None req-65339b89-d5c9-40a2-9f50-b040cf00cb5f ea71b67fc37c45859db08bb5231d7a56 c9d397897ee84f2e91c7e33f6c4052a3 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/cea3dacfdea0f3734ae526b812744e847bc2d356 --force-share --output=json execute /usr/lib/python3.12/site-packages/oslo_concurrency/processutils.py:349
Oct 09 16:29:17 compute-0 nova_compute[117331]: 2025-10-09 16:29:17.049 2 DEBUG oslo_concurrency.processutils [None req-65339b89-d5c9-40a2-9f50-b040cf00cb5f ea71b67fc37c45859db08bb5231d7a56 c9d397897ee84f2e91c7e33f6c4052a3 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/cea3dacfdea0f3734ae526b812744e847bc2d356 --force-share --output=json" returned: 0 in 0.066s execute /usr/lib/python3.12/site-packages/oslo_concurrency/processutils.py:372
Oct 09 16:29:17 compute-0 nova_compute[117331]: 2025-10-09 16:29:17.050 2 DEBUG oslo_concurrency.processutils [None req-65339b89-d5c9-40a2-9f50-b040cf00cb5f ea71b67fc37c45859db08bb5231d7a56 c9d397897ee84f2e91c7e33f6c4052a3 - - default default] Running cmd (subprocess): env LC_ALL=C LANG=C qemu-img create -f qcow2 -o backing_file=/var/lib/nova/instances/_base/cea3dacfdea0f3734ae526b812744e847bc2d356,backing_fmt=raw /var/lib/nova/instances/144cc4ab-6004-4896-8899-8860557341b0/disk 1073741824 execute /usr/lib/python3.12/site-packages/oslo_concurrency/processutils.py:349
Oct 09 16:29:17 compute-0 nova_compute[117331]: 2025-10-09 16:29:17.077 2 DEBUG nova.network.neutron [req-efdcf1bd-9429-4f34-875b-108227cf4b61 req-54174cea-6b48-40b5-9350-c9e9d233919e ee15961ad2be4bada6c106ac4f69f422 c076c42271004e7aa86e84416e33f826 - - default default] [instance: 144cc4ab-6004-4896-8899-8860557341b0] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.12/site-packages/nova/network/neutron.py:3383
Oct 09 16:29:17 compute-0 nova_compute[117331]: 2025-10-09 16:29:17.092 2 DEBUG oslo_concurrency.processutils [None req-65339b89-d5c9-40a2-9f50-b040cf00cb5f ea71b67fc37c45859db08bb5231d7a56 c9d397897ee84f2e91c7e33f6c4052a3 - - default default] CMD "env LC_ALL=C LANG=C qemu-img create -f qcow2 -o backing_file=/var/lib/nova/instances/_base/cea3dacfdea0f3734ae526b812744e847bc2d356,backing_fmt=raw /var/lib/nova/instances/144cc4ab-6004-4896-8899-8860557341b0/disk 1073741824" returned: 0 in 0.042s execute /usr/lib/python3.12/site-packages/oslo_concurrency/processutils.py:372
Oct 09 16:29:17 compute-0 nova_compute[117331]: 2025-10-09 16:29:17.093 2 DEBUG oslo_concurrency.lockutils [None req-65339b89-d5c9-40a2-9f50-b040cf00cb5f ea71b67fc37c45859db08bb5231d7a56 c9d397897ee84f2e91c7e33f6c4052a3 - - default default] Lock "cea3dacfdea0f3734ae526b812744e847bc2d356" "released" by "nova.virt.libvirt.imagebackend.Qcow2.create_image.<locals>.create_qcow2_image" :: held 0.115s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:424
Oct 09 16:29:17 compute-0 nova_compute[117331]: 2025-10-09 16:29:17.093 2 DEBUG oslo_concurrency.processutils [None req-65339b89-d5c9-40a2-9f50-b040cf00cb5f ea71b67fc37c45859db08bb5231d7a56 c9d397897ee84f2e91c7e33f6c4052a3 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/cea3dacfdea0f3734ae526b812744e847bc2d356 --force-share --output=json execute /usr/lib/python3.12/site-packages/oslo_concurrency/processutils.py:349
Oct 09 16:29:17 compute-0 nova_compute[117331]: 2025-10-09 16:29:17.153 2 DEBUG oslo_concurrency.processutils [None req-65339b89-d5c9-40a2-9f50-b040cf00cb5f ea71b67fc37c45859db08bb5231d7a56 c9d397897ee84f2e91c7e33f6c4052a3 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/cea3dacfdea0f3734ae526b812744e847bc2d356 --force-share --output=json" returned: 0 in 0.060s execute /usr/lib/python3.12/site-packages/oslo_concurrency/processutils.py:372
Oct 09 16:29:17 compute-0 nova_compute[117331]: 2025-10-09 16:29:17.154 2 DEBUG nova.virt.disk.api [None req-65339b89-d5c9-40a2-9f50-b040cf00cb5f ea71b67fc37c45859db08bb5231d7a56 c9d397897ee84f2e91c7e33f6c4052a3 - - default default] Checking if we can resize image /var/lib/nova/instances/144cc4ab-6004-4896-8899-8860557341b0/disk. size=1073741824 can_resize_image /usr/lib/python3.12/site-packages/nova/virt/disk/api.py:164
Oct 09 16:29:17 compute-0 nova_compute[117331]: 2025-10-09 16:29:17.155 2 DEBUG oslo_concurrency.processutils [None req-65339b89-d5c9-40a2-9f50-b040cf00cb5f ea71b67fc37c45859db08bb5231d7a56 c9d397897ee84f2e91c7e33f6c4052a3 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/144cc4ab-6004-4896-8899-8860557341b0/disk --force-share --output=json execute /usr/lib/python3.12/site-packages/oslo_concurrency/processutils.py:349
Oct 09 16:29:17 compute-0 nova_compute[117331]: 2025-10-09 16:29:17.227 2 DEBUG nova.network.neutron [req-efdcf1bd-9429-4f34-875b-108227cf4b61 req-54174cea-6b48-40b5-9350-c9e9d233919e ee15961ad2be4bada6c106ac4f69f422 c076c42271004e7aa86e84416e33f826 - - default default] [instance: 144cc4ab-6004-4896-8899-8860557341b0] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.12/site-packages/nova/network/neutron.py:116
Oct 09 16:29:17 compute-0 nova_compute[117331]: 2025-10-09 16:29:17.242 2 DEBUG oslo_concurrency.processutils [None req-65339b89-d5c9-40a2-9f50-b040cf00cb5f ea71b67fc37c45859db08bb5231d7a56 c9d397897ee84f2e91c7e33f6c4052a3 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/144cc4ab-6004-4896-8899-8860557341b0/disk --force-share --output=json" returned: 0 in 0.087s execute /usr/lib/python3.12/site-packages/oslo_concurrency/processutils.py:372
Oct 09 16:29:17 compute-0 nova_compute[117331]: 2025-10-09 16:29:17.242 2 DEBUG nova.virt.disk.api [None req-65339b89-d5c9-40a2-9f50-b040cf00cb5f ea71b67fc37c45859db08bb5231d7a56 c9d397897ee84f2e91c7e33f6c4052a3 - - default default] Cannot resize image /var/lib/nova/instances/144cc4ab-6004-4896-8899-8860557341b0/disk to a smaller size. can_resize_image /usr/lib/python3.12/site-packages/nova/virt/disk/api.py:170
Oct 09 16:29:17 compute-0 nova_compute[117331]: 2025-10-09 16:29:17.243 2 DEBUG nova.virt.libvirt.driver [None req-65339b89-d5c9-40a2-9f50-b040cf00cb5f ea71b67fc37c45859db08bb5231d7a56 c9d397897ee84f2e91c7e33f6c4052a3 - - default default] [instance: 144cc4ab-6004-4896-8899-8860557341b0] Created local disks _create_image /usr/lib/python3.12/site-packages/nova/virt/libvirt/driver.py:5270
Oct 09 16:29:17 compute-0 nova_compute[117331]: 2025-10-09 16:29:17.243 2 DEBUG nova.virt.libvirt.driver [None req-65339b89-d5c9-40a2-9f50-b040cf00cb5f ea71b67fc37c45859db08bb5231d7a56 c9d397897ee84f2e91c7e33f6c4052a3 - - default default] [instance: 144cc4ab-6004-4896-8899-8860557341b0] Ensure instance console log exists: /var/lib/nova/instances/144cc4ab-6004-4896-8899-8860557341b0/console.log _ensure_console_log_for_instance /usr/lib/python3.12/site-packages/nova/virt/libvirt/driver.py:5017
Oct 09 16:29:17 compute-0 nova_compute[117331]: 2025-10-09 16:29:17.244 2 DEBUG oslo_concurrency.lockutils [None req-65339b89-d5c9-40a2-9f50-b040cf00cb5f ea71b67fc37c45859db08bb5231d7a56 c9d397897ee84f2e91c7e33f6c4052a3 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:405
Oct 09 16:29:17 compute-0 nova_compute[117331]: 2025-10-09 16:29:17.244 2 DEBUG oslo_concurrency.lockutils [None req-65339b89-d5c9-40a2-9f50-b040cf00cb5f ea71b67fc37c45859db08bb5231d7a56 c9d397897ee84f2e91c7e33f6c4052a3 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:410
Oct 09 16:29:17 compute-0 nova_compute[117331]: 2025-10-09 16:29:17.245 2 DEBUG oslo_concurrency.lockutils [None req-65339b89-d5c9-40a2-9f50-b040cf00cb5f ea71b67fc37c45859db08bb5231d7a56 c9d397897ee84f2e91c7e33f6c4052a3 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:424
Oct 09 16:29:17 compute-0 nova_compute[117331]: 2025-10-09 16:29:17.736 2 DEBUG oslo_concurrency.lockutils [req-efdcf1bd-9429-4f34-875b-108227cf4b61 req-54174cea-6b48-40b5-9350-c9e9d233919e ee15961ad2be4bada6c106ac4f69f422 c076c42271004e7aa86e84416e33f826 - - default default] Releasing lock "refresh_cache-144cc4ab-6004-4896-8899-8860557341b0" lock /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:334
Oct 09 16:29:17 compute-0 nova_compute[117331]: 2025-10-09 16:29:17.736 2 DEBUG oslo_concurrency.lockutils [None req-65339b89-d5c9-40a2-9f50-b040cf00cb5f ea71b67fc37c45859db08bb5231d7a56 c9d397897ee84f2e91c7e33f6c4052a3 - - default default] Acquired lock "refresh_cache-144cc4ab-6004-4896-8899-8860557341b0" lock /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:316
Oct 09 16:29:17 compute-0 nova_compute[117331]: 2025-10-09 16:29:17.737 2 DEBUG nova.network.neutron [None req-65339b89-d5c9-40a2-9f50-b040cf00cb5f ea71b67fc37c45859db08bb5231d7a56 c9d397897ee84f2e91c7e33f6c4052a3 - - default default] [instance: 144cc4ab-6004-4896-8899-8860557341b0] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.12/site-packages/nova/network/neutron.py:2070
Oct 09 16:29:17 compute-0 podman[148895]: 2025-10-09 16:29:17.860853798 +0000 UTC m=+0.085065618 container health_status 86aa93e887899e54d383b0fc17fb3c5637680d1d8e146a130a557c837f0260fe (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible)
Oct 09 16:29:19 compute-0 nova_compute[117331]: 2025-10-09 16:29:19.102 2 DEBUG nova.network.neutron [None req-65339b89-d5c9-40a2-9f50-b040cf00cb5f ea71b67fc37c45859db08bb5231d7a56 c9d397897ee84f2e91c7e33f6c4052a3 - - default default] [instance: 144cc4ab-6004-4896-8899-8860557341b0] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.12/site-packages/nova/network/neutron.py:3383
Oct 09 16:29:19 compute-0 sshd-session[148858]: Connection closed by 36.224.53.32 port 45868 [preauth]
Oct 09 16:29:20 compute-0 nova_compute[117331]: 2025-10-09 16:29:20.142 2 WARNING neutronclient.v2_0.client [None req-65339b89-d5c9-40a2-9f50-b040cf00cb5f ea71b67fc37c45859db08bb5231d7a56 c9d397897ee84f2e91c7e33f6c4052a3 - - default default] The python binding code in neutronclient is deprecated in favor of OpenstackSDK, please use that as this will be removed in a future release.
Oct 09 16:29:20 compute-0 ovn_controller[19752]: 2025-10-09T16:29:20Z|00183|memory_trim|INFO|Detected inactivity (last active 30005 ms ago): trimming memory
Oct 09 16:29:21 compute-0 nova_compute[117331]: 2025-10-09 16:29:21.079 2 DEBUG nova.network.neutron [None req-65339b89-d5c9-40a2-9f50-b040cf00cb5f ea71b67fc37c45859db08bb5231d7a56 c9d397897ee84f2e91c7e33f6c4052a3 - - default default] [instance: 144cc4ab-6004-4896-8899-8860557341b0] Updating instance_info_cache with network_info: [{"id": "1e79672b-1776-4292-b861-3666e6b4dc69", "address": "fa:16:3e:de:0a:d8", "network": {"id": "84247d99-b9fb-4f75-af06-dd3e92557a34", "bridge": "br-int", "label": "tempest-TestExecuteVmWorkloadBalanceStrategy-187667545-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "3f838d4a2de74d8fbb8e91e7ef351b24", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap1e79672b-17", "ovs_interfaceid": "1e79672b-1776-4292-b861-3666e6b4dc69", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.12/site-packages/nova/network/neutron.py:116
Oct 09 16:29:21 compute-0 nova_compute[117331]: 2025-10-09 16:29:21.589 2 DEBUG oslo_concurrency.lockutils [None req-65339b89-d5c9-40a2-9f50-b040cf00cb5f ea71b67fc37c45859db08bb5231d7a56 c9d397897ee84f2e91c7e33f6c4052a3 - - default default] Releasing lock "refresh_cache-144cc4ab-6004-4896-8899-8860557341b0" lock /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:334
Oct 09 16:29:21 compute-0 nova_compute[117331]: 2025-10-09 16:29:21.590 2 DEBUG nova.compute.manager [None req-65339b89-d5c9-40a2-9f50-b040cf00cb5f ea71b67fc37c45859db08bb5231d7a56 c9d397897ee84f2e91c7e33f6c4052a3 - - default default] [instance: 144cc4ab-6004-4896-8899-8860557341b0] Instance network_info: |[{"id": "1e79672b-1776-4292-b861-3666e6b4dc69", "address": "fa:16:3e:de:0a:d8", "network": {"id": "84247d99-b9fb-4f75-af06-dd3e92557a34", "bridge": "br-int", "label": "tempest-TestExecuteVmWorkloadBalanceStrategy-187667545-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "3f838d4a2de74d8fbb8e91e7ef351b24", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap1e79672b-17", "ovs_interfaceid": "1e79672b-1776-4292-b861-3666e6b4dc69", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.12/site-packages/nova/compute/manager.py:2031
Oct 09 16:29:21 compute-0 nova_compute[117331]: 2025-10-09 16:29:21.592 2 DEBUG nova.virt.libvirt.driver [None req-65339b89-d5c9-40a2-9f50-b040cf00cb5f ea71b67fc37c45859db08bb5231d7a56 c9d397897ee84f2e91c7e33f6c4052a3 - - default default] [instance: 144cc4ab-6004-4896-8899-8860557341b0] Start _get_guest_xml network_info=[{"id": "1e79672b-1776-4292-b861-3666e6b4dc69", "address": "fa:16:3e:de:0a:d8", "network": {"id": "84247d99-b9fb-4f75-af06-dd3e92557a34", "bridge": "br-int", "label": "tempest-TestExecuteVmWorkloadBalanceStrategy-187667545-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "3f838d4a2de74d8fbb8e91e7ef351b24", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap1e79672b-17", "ovs_interfaceid": "1e79672b-1776-4292-b861-3666e6b4dc69", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-09T16:07:02Z,direct_url=<?>,disk_format='qcow2',id=b7d6e0af-25e4-4227-9dc6-43143898ceee,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='b30e8cf5e10742f190212b4cb97ce2c9',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-09T16:07:04Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'device_type': 'disk', 'encrypted': False, 'size': 0, 'encryption_format': None, 'guest_format': None, 'boot_index': 0, 'device_name': '/dev/vda', 'disk_bus': 'virtio', 'encryption_options': None, 'encryption_secret_uuid': None, 'image_id': 'b7d6e0af-25e4-4227-9dc6-43143898ceee'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None}share_info=None _get_guest_xml /usr/lib/python3.12/site-packages/nova/virt/libvirt/driver.py:8046
Oct 09 16:29:21 compute-0 nova_compute[117331]: 2025-10-09 16:29:21.597 2 WARNING nova.virt.libvirt.driver [None req-65339b89-d5c9-40a2-9f50-b040cf00cb5f ea71b67fc37c45859db08bb5231d7a56 c9d397897ee84f2e91c7e33f6c4052a3 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Oct 09 16:29:21 compute-0 nova_compute[117331]: 2025-10-09 16:29:21.598 2 DEBUG nova.virt.driver [None req-65339b89-d5c9-40a2-9f50-b040cf00cb5f ea71b67fc37c45859db08bb5231d7a56 c9d397897ee84f2e91c7e33f6c4052a3 - - default default] InstanceDriverMetadata: InstanceDriverMetadata(root_type='image', root_id='b7d6e0af-25e4-4227-9dc6-43143898ceee', instance_meta=NovaInstanceMeta(name='tempest-TestExecuteVmWorkloadBalanceStrategy-server-1280932969', uuid='144cc4ab-6004-4896-8899-8860557341b0'), owner=OwnerMeta(userid='ea71b67fc37c45859db08bb5231d7a56', username='tempest-TestExecuteVmWorkloadBalanceStrategy-1125458264-project-admin', projectid='c9d397897ee84f2e91c7e33f6c4052a3', projectname='tempest-TestExecuteVmWorkloadBalanceStrategy-1125458264'), image=ImageMeta(id='b7d6e0af-25e4-4227-9dc6-43143898ceee', name=None, container_format='bare', disk_format='qcow2', min_disk=1, min_ram=0, properties={'hw_rng_model': 'virtio'}), flavor=FlavorMeta(name='m1.nano', flavorid='5aeb5bf9-70f3-4ab6-b418-2c3319a7d2b3', memory_mb=128, vcpus=1, root_gb=1, ephemeral_gb=0, extra_specs={'hw_rng:allowed': 'True'}, swap=0), network_info=[{"id": "1e79672b-1776-4292-b861-3666e6b4dc69", "address": "fa:16:3e:de:0a:d8", "network": {"id": "84247d99-b9fb-4f75-af06-dd3e92557a34", "bridge": "br-int", "label": "tempest-TestExecuteVmWorkloadBalanceStrategy-187667545-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "3f838d4a2de74d8fbb8e91e7ef351b24", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap1e79672b-17", "ovs_interfaceid": 
"1e79672b-1776-4292-b861-3666e6b4dc69", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}], nova_package='32.1.0-0.20251008143712.076498e.el10', creation_time=1760027361.5988011) get_instance_driver_metadata /usr/lib/python3.12/site-packages/nova/virt/driver.py:437
Oct 09 16:29:21 compute-0 nova_compute[117331]: 2025-10-09 16:29:21.604 2 DEBUG nova.virt.libvirt.host [None req-65339b89-d5c9-40a2-9f50-b040cf00cb5f ea71b67fc37c45859db08bb5231d7a56 c9d397897ee84f2e91c7e33f6c4052a3 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.12/site-packages/nova/virt/libvirt/host.py:1698
Oct 09 16:29:21 compute-0 nova_compute[117331]: 2025-10-09 16:29:21.604 2 DEBUG nova.virt.libvirt.host [None req-65339b89-d5c9-40a2-9f50-b040cf00cb5f ea71b67fc37c45859db08bb5231d7a56 c9d397897ee84f2e91c7e33f6c4052a3 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.12/site-packages/nova/virt/libvirt/host.py:1708
Oct 09 16:29:21 compute-0 nova_compute[117331]: 2025-10-09 16:29:21.607 2 DEBUG nova.virt.libvirt.host [None req-65339b89-d5c9-40a2-9f50-b040cf00cb5f ea71b67fc37c45859db08bb5231d7a56 c9d397897ee84f2e91c7e33f6c4052a3 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.12/site-packages/nova/virt/libvirt/host.py:1717
Oct 09 16:29:21 compute-0 nova_compute[117331]: 2025-10-09 16:29:21.608 2 DEBUG nova.virt.libvirt.host [None req-65339b89-d5c9-40a2-9f50-b040cf00cb5f ea71b67fc37c45859db08bb5231d7a56 c9d397897ee84f2e91c7e33f6c4052a3 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.12/site-packages/nova/virt/libvirt/host.py:1724
Oct 09 16:29:21 compute-0 nova_compute[117331]: 2025-10-09 16:29:21.608 2 DEBUG nova.virt.libvirt.driver [None req-65339b89-d5c9-40a2-9f50-b040cf00cb5f ea71b67fc37c45859db08bb5231d7a56 c9d397897ee84f2e91c7e33f6c4052a3 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.12/site-packages/nova/virt/libvirt/driver.py:5809
Oct 09 16:29:21 compute-0 nova_compute[117331]: 2025-10-09 16:29:21.609 2 DEBUG nova.virt.hardware [None req-65339b89-d5c9-40a2-9f50-b040cf00cb5f ea71b67fc37c45859db08bb5231d7a56 c9d397897ee84f2e91c7e33f6c4052a3 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-10-09T16:07:00Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='5aeb5bf9-70f3-4ab6-b418-2c3319a7d2b3',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-09T16:07:02Z,direct_url=<?>,disk_format='qcow2',id=b7d6e0af-25e4-4227-9dc6-43143898ceee,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='b30e8cf5e10742f190212b4cb97ce2c9',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-09T16:07:04Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.12/site-packages/nova/virt/hardware.py:571
Oct 09 16:29:21 compute-0 nova_compute[117331]: 2025-10-09 16:29:21.609 2 DEBUG nova.virt.hardware [None req-65339b89-d5c9-40a2-9f50-b040cf00cb5f ea71b67fc37c45859db08bb5231d7a56 c9d397897ee84f2e91c7e33f6c4052a3 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.12/site-packages/nova/virt/hardware.py:356
Oct 09 16:29:21 compute-0 nova_compute[117331]: 2025-10-09 16:29:21.609 2 DEBUG nova.virt.hardware [None req-65339b89-d5c9-40a2-9f50-b040cf00cb5f ea71b67fc37c45859db08bb5231d7a56 c9d397897ee84f2e91c7e33f6c4052a3 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.12/site-packages/nova/virt/hardware.py:360
Oct 09 16:29:21 compute-0 nova_compute[117331]: 2025-10-09 16:29:21.610 2 DEBUG nova.virt.hardware [None req-65339b89-d5c9-40a2-9f50-b040cf00cb5f ea71b67fc37c45859db08bb5231d7a56 c9d397897ee84f2e91c7e33f6c4052a3 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.12/site-packages/nova/virt/hardware.py:396
Oct 09 16:29:21 compute-0 nova_compute[117331]: 2025-10-09 16:29:21.610 2 DEBUG nova.virt.hardware [None req-65339b89-d5c9-40a2-9f50-b040cf00cb5f ea71b67fc37c45859db08bb5231d7a56 c9d397897ee84f2e91c7e33f6c4052a3 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.12/site-packages/nova/virt/hardware.py:400
Oct 09 16:29:21 compute-0 nova_compute[117331]: 2025-10-09 16:29:21.610 2 DEBUG nova.virt.hardware [None req-65339b89-d5c9-40a2-9f50-b040cf00cb5f ea71b67fc37c45859db08bb5231d7a56 c9d397897ee84f2e91c7e33f6c4052a3 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.12/site-packages/nova/virt/hardware.py:438
Oct 09 16:29:21 compute-0 nova_compute[117331]: 2025-10-09 16:29:21.610 2 DEBUG nova.virt.hardware [None req-65339b89-d5c9-40a2-9f50-b040cf00cb5f ea71b67fc37c45859db08bb5231d7a56 c9d397897ee84f2e91c7e33f6c4052a3 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.12/site-packages/nova/virt/hardware.py:577
Oct 09 16:29:21 compute-0 nova_compute[117331]: 2025-10-09 16:29:21.611 2 DEBUG nova.virt.hardware [None req-65339b89-d5c9-40a2-9f50-b040cf00cb5f ea71b67fc37c45859db08bb5231d7a56 c9d397897ee84f2e91c7e33f6c4052a3 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.12/site-packages/nova/virt/hardware.py:479
Oct 09 16:29:21 compute-0 nova_compute[117331]: 2025-10-09 16:29:21.611 2 DEBUG nova.virt.hardware [None req-65339b89-d5c9-40a2-9f50-b040cf00cb5f ea71b67fc37c45859db08bb5231d7a56 c9d397897ee84f2e91c7e33f6c4052a3 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.12/site-packages/nova/virt/hardware.py:509
Oct 09 16:29:21 compute-0 nova_compute[117331]: 2025-10-09 16:29:21.611 2 DEBUG nova.virt.hardware [None req-65339b89-d5c9-40a2-9f50-b040cf00cb5f ea71b67fc37c45859db08bb5231d7a56 c9d397897ee84f2e91c7e33f6c4052a3 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.12/site-packages/nova/virt/hardware.py:583
Oct 09 16:29:21 compute-0 nova_compute[117331]: 2025-10-09 16:29:21.611 2 DEBUG nova.virt.hardware [None req-65339b89-d5c9-40a2-9f50-b040cf00cb5f ea71b67fc37c45859db08bb5231d7a56 c9d397897ee84f2e91c7e33f6c4052a3 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.12/site-packages/nova/virt/hardware.py:585
Oct 09 16:29:21 compute-0 nova_compute[117331]: 2025-10-09 16:29:21.615 2 DEBUG nova.virt.libvirt.vif [None req-65339b89-d5c9-40a2-9f50-b040cf00cb5f ea71b67fc37c45859db08bb5231d7a56 c9d397897ee84f2e91c7e33f6c4052a3 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,compute_id=1,config_drive='',created_at=2025-10-09T16:29:10Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description=None,display_name='tempest-TestExecuteVmWorkloadBalanceStrategy-server-1280932969',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testexecutevmworkloadbalancestrategy-server-1280932969',id=20,image_ref='b7d6e0af-25e4-4227-9dc6-43143898ceee',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='c9d397897ee84f2e91c7e33f6c4052a3',ramdisk_id='',reservation_id='r-5y8wuj64',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member,manager,admin',image_base_image_ref='b7d6e0af-25e4-4227-9dc6-43143898ceee',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestExecuteVmWorkloadBalanceStrategy-1125458264',owner_user_name='tempest-
TestExecuteVmWorkloadBalanceStrategy-1125458264-project-admin'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-09T16:29:15Z,user_data=None,user_id='ea71b67fc37c45859db08bb5231d7a56',uuid=144cc4ab-6004-4896-8899-8860557341b0,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "1e79672b-1776-4292-b861-3666e6b4dc69", "address": "fa:16:3e:de:0a:d8", "network": {"id": "84247d99-b9fb-4f75-af06-dd3e92557a34", "bridge": "br-int", "label": "tempest-TestExecuteVmWorkloadBalanceStrategy-187667545-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "3f838d4a2de74d8fbb8e91e7ef351b24", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap1e79672b-17", "ovs_interfaceid": "1e79672b-1776-4292-b861-3666e6b4dc69", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.12/site-packages/nova/virt/libvirt/vif.py:574
Oct 09 16:29:21 compute-0 nova_compute[117331]: 2025-10-09 16:29:21.616 2 DEBUG nova.network.os_vif_util [None req-65339b89-d5c9-40a2-9f50-b040cf00cb5f ea71b67fc37c45859db08bb5231d7a56 c9d397897ee84f2e91c7e33f6c4052a3 - - default default] Converting VIF {"id": "1e79672b-1776-4292-b861-3666e6b4dc69", "address": "fa:16:3e:de:0a:d8", "network": {"id": "84247d99-b9fb-4f75-af06-dd3e92557a34", "bridge": "br-int", "label": "tempest-TestExecuteVmWorkloadBalanceStrategy-187667545-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "3f838d4a2de74d8fbb8e91e7ef351b24", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap1e79672b-17", "ovs_interfaceid": "1e79672b-1776-4292-b861-3666e6b4dc69", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.12/site-packages/nova/network/os_vif_util.py:511
Oct 09 16:29:21 compute-0 nova_compute[117331]: 2025-10-09 16:29:21.617 2 DEBUG nova.network.os_vif_util [None req-65339b89-d5c9-40a2-9f50-b040cf00cb5f ea71b67fc37c45859db08bb5231d7a56 c9d397897ee84f2e91c7e33f6c4052a3 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:de:0a:d8,bridge_name='br-int',has_traffic_filtering=True,id=1e79672b-1776-4292-b861-3666e6b4dc69,network=Network(84247d99-b9fb-4f75-af06-dd3e92557a34),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap1e79672b-17') nova_to_osvif_vif /usr/lib/python3.12/site-packages/nova/network/os_vif_util.py:548
Oct 09 16:29:21 compute-0 nova_compute[117331]: 2025-10-09 16:29:21.617 2 DEBUG nova.objects.instance [None req-65339b89-d5c9-40a2-9f50-b040cf00cb5f ea71b67fc37c45859db08bb5231d7a56 c9d397897ee84f2e91c7e33f6c4052a3 - - default default] Lazy-loading 'pci_devices' on Instance uuid 144cc4ab-6004-4896-8899-8860557341b0 obj_load_attr /usr/lib/python3.12/site-packages/nova/objects/instance.py:1141
Oct 09 16:29:21 compute-0 nova_compute[117331]: 2025-10-09 16:29:21.920 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 09 16:29:21 compute-0 nova_compute[117331]: 2025-10-09 16:29:21.953 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 09 16:29:22 compute-0 nova_compute[117331]: 2025-10-09 16:29:22.124 2 DEBUG nova.virt.libvirt.driver [None req-65339b89-d5c9-40a2-9f50-b040cf00cb5f ea71b67fc37c45859db08bb5231d7a56 c9d397897ee84f2e91c7e33f6c4052a3 - - default default] [instance: 144cc4ab-6004-4896-8899-8860557341b0] End _get_guest_xml xml=<domain type="kvm">
Oct 09 16:29:22 compute-0 nova_compute[117331]:   <uuid>144cc4ab-6004-4896-8899-8860557341b0</uuid>
Oct 09 16:29:22 compute-0 nova_compute[117331]:   <name>instance-00000014</name>
Oct 09 16:29:22 compute-0 nova_compute[117331]:   <memory>131072</memory>
Oct 09 16:29:22 compute-0 nova_compute[117331]:   <vcpu>1</vcpu>
Oct 09 16:29:22 compute-0 nova_compute[117331]:   <metadata>
Oct 09 16:29:22 compute-0 nova_compute[117331]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Oct 09 16:29:22 compute-0 nova_compute[117331]:       <nova:package version="32.1.0-0.20251008143712.076498e.el10"/>
Oct 09 16:29:22 compute-0 nova_compute[117331]:       <nova:name>tempest-TestExecuteVmWorkloadBalanceStrategy-server-1280932969</nova:name>
Oct 09 16:29:22 compute-0 nova_compute[117331]:       <nova:creationTime>2025-10-09 16:29:21</nova:creationTime>
Oct 09 16:29:22 compute-0 nova_compute[117331]:       <nova:flavor name="m1.nano" id="5aeb5bf9-70f3-4ab6-b418-2c3319a7d2b3">
Oct 09 16:29:22 compute-0 nova_compute[117331]:         <nova:memory>128</nova:memory>
Oct 09 16:29:22 compute-0 nova_compute[117331]:         <nova:disk>1</nova:disk>
Oct 09 16:29:22 compute-0 nova_compute[117331]:         <nova:swap>0</nova:swap>
Oct 09 16:29:22 compute-0 nova_compute[117331]:         <nova:ephemeral>0</nova:ephemeral>
Oct 09 16:29:22 compute-0 nova_compute[117331]:         <nova:vcpus>1</nova:vcpus>
Oct 09 16:29:22 compute-0 nova_compute[117331]:         <nova:extraSpecs>
Oct 09 16:29:22 compute-0 nova_compute[117331]:           <nova:extraSpec name="hw_rng:allowed">True</nova:extraSpec>
Oct 09 16:29:22 compute-0 nova_compute[117331]:         </nova:extraSpecs>
Oct 09 16:29:22 compute-0 nova_compute[117331]:       </nova:flavor>
Oct 09 16:29:22 compute-0 nova_compute[117331]:       <nova:image uuid="b7d6e0af-25e4-4227-9dc6-43143898ceee">
Oct 09 16:29:22 compute-0 nova_compute[117331]:         <nova:containerFormat>bare</nova:containerFormat>
Oct 09 16:29:22 compute-0 nova_compute[117331]:         <nova:diskFormat>qcow2</nova:diskFormat>
Oct 09 16:29:22 compute-0 nova_compute[117331]:         <nova:minDisk>1</nova:minDisk>
Oct 09 16:29:22 compute-0 nova_compute[117331]:         <nova:minRam>0</nova:minRam>
Oct 09 16:29:22 compute-0 nova_compute[117331]:         <nova:properties>
Oct 09 16:29:22 compute-0 nova_compute[117331]:           <nova:property name="hw_rng_model">virtio</nova:property>
Oct 09 16:29:22 compute-0 nova_compute[117331]:         </nova:properties>
Oct 09 16:29:22 compute-0 nova_compute[117331]:       </nova:image>
Oct 09 16:29:22 compute-0 nova_compute[117331]:       <nova:owner>
Oct 09 16:29:22 compute-0 nova_compute[117331]:         <nova:user uuid="ea71b67fc37c45859db08bb5231d7a56">tempest-TestExecuteVmWorkloadBalanceStrategy-1125458264-project-admin</nova:user>
Oct 09 16:29:22 compute-0 nova_compute[117331]:         <nova:project uuid="c9d397897ee84f2e91c7e33f6c4052a3">tempest-TestExecuteVmWorkloadBalanceStrategy-1125458264</nova:project>
Oct 09 16:29:22 compute-0 nova_compute[117331]:       </nova:owner>
Oct 09 16:29:22 compute-0 nova_compute[117331]:       <nova:root type="image" uuid="b7d6e0af-25e4-4227-9dc6-43143898ceee"/>
Oct 09 16:29:22 compute-0 nova_compute[117331]:       <nova:ports>
Oct 09 16:29:22 compute-0 nova_compute[117331]:         <nova:port uuid="1e79672b-1776-4292-b861-3666e6b4dc69">
Oct 09 16:29:22 compute-0 nova_compute[117331]:           <nova:ip type="fixed" address="10.100.0.8" ipVersion="4"/>
Oct 09 16:29:22 compute-0 nova_compute[117331]:         </nova:port>
Oct 09 16:29:22 compute-0 nova_compute[117331]:       </nova:ports>
Oct 09 16:29:22 compute-0 nova_compute[117331]:     </nova:instance>
Oct 09 16:29:22 compute-0 nova_compute[117331]:   </metadata>
Oct 09 16:29:22 compute-0 nova_compute[117331]:   <sysinfo type="smbios">
Oct 09 16:29:22 compute-0 nova_compute[117331]:     <system>
Oct 09 16:29:22 compute-0 nova_compute[117331]:       <entry name="manufacturer">RDO</entry>
Oct 09 16:29:22 compute-0 nova_compute[117331]:       <entry name="product">OpenStack Compute</entry>
Oct 09 16:29:22 compute-0 nova_compute[117331]:       <entry name="version">32.1.0-0.20251008143712.076498e.el10</entry>
Oct 09 16:29:22 compute-0 nova_compute[117331]:       <entry name="serial">144cc4ab-6004-4896-8899-8860557341b0</entry>
Oct 09 16:29:22 compute-0 nova_compute[117331]:       <entry name="uuid">144cc4ab-6004-4896-8899-8860557341b0</entry>
Oct 09 16:29:22 compute-0 nova_compute[117331]:       <entry name="family">Virtual Machine</entry>
Oct 09 16:29:22 compute-0 nova_compute[117331]:     </system>
Oct 09 16:29:22 compute-0 nova_compute[117331]:   </sysinfo>
Oct 09 16:29:22 compute-0 nova_compute[117331]:   <os>
Oct 09 16:29:22 compute-0 nova_compute[117331]:     <type arch="x86_64" machine="q35">hvm</type>
Oct 09 16:29:22 compute-0 nova_compute[117331]:     <boot dev="hd"/>
Oct 09 16:29:22 compute-0 nova_compute[117331]:     <smbios mode="sysinfo"/>
Oct 09 16:29:22 compute-0 nova_compute[117331]:   </os>
Oct 09 16:29:22 compute-0 nova_compute[117331]:   <features>
Oct 09 16:29:22 compute-0 nova_compute[117331]:     <acpi/>
Oct 09 16:29:22 compute-0 nova_compute[117331]:     <apic/>
Oct 09 16:29:22 compute-0 nova_compute[117331]:     <vmcoreinfo/>
Oct 09 16:29:22 compute-0 nova_compute[117331]:   </features>
Oct 09 16:29:22 compute-0 nova_compute[117331]:   <clock offset="utc">
Oct 09 16:29:22 compute-0 nova_compute[117331]:     <timer name="pit" tickpolicy="delay"/>
Oct 09 16:29:22 compute-0 nova_compute[117331]:     <timer name="rtc" tickpolicy="catchup"/>
Oct 09 16:29:22 compute-0 nova_compute[117331]:     <timer name="hpet" present="no"/>
Oct 09 16:29:22 compute-0 nova_compute[117331]:   </clock>
Oct 09 16:29:22 compute-0 nova_compute[117331]:   <cpu mode="host-model" match="exact">
Oct 09 16:29:22 compute-0 nova_compute[117331]:     <topology sockets="1" cores="1" threads="1"/>
Oct 09 16:29:22 compute-0 nova_compute[117331]:   </cpu>
Oct 09 16:29:22 compute-0 nova_compute[117331]:   <devices>
Oct 09 16:29:22 compute-0 nova_compute[117331]:     <disk type="file" device="disk">
Oct 09 16:29:22 compute-0 nova_compute[117331]:       <driver name="qemu" type="qcow2" cache="none"/>
Oct 09 16:29:22 compute-0 nova_compute[117331]:       <source file="/var/lib/nova/instances/144cc4ab-6004-4896-8899-8860557341b0/disk"/>
Oct 09 16:29:22 compute-0 nova_compute[117331]:       <target dev="vda" bus="virtio"/>
Oct 09 16:29:22 compute-0 nova_compute[117331]:     </disk>
Oct 09 16:29:22 compute-0 nova_compute[117331]:     <disk type="file" device="cdrom">
Oct 09 16:29:22 compute-0 nova_compute[117331]:       <driver name="qemu" type="raw" cache="none"/>
Oct 09 16:29:22 compute-0 nova_compute[117331]:       <source file="/var/lib/nova/instances/144cc4ab-6004-4896-8899-8860557341b0/disk.config"/>
Oct 09 16:29:22 compute-0 nova_compute[117331]:       <target dev="sda" bus="sata"/>
Oct 09 16:29:22 compute-0 nova_compute[117331]:     </disk>
Oct 09 16:29:22 compute-0 nova_compute[117331]:     <interface type="ethernet">
Oct 09 16:29:22 compute-0 nova_compute[117331]:       <mac address="fa:16:3e:de:0a:d8"/>
Oct 09 16:29:22 compute-0 nova_compute[117331]:       <model type="virtio"/>
Oct 09 16:29:22 compute-0 nova_compute[117331]:       <driver name="vhost" rx_queue_size="512"/>
Oct 09 16:29:22 compute-0 nova_compute[117331]:       <mtu size="1442"/>
Oct 09 16:29:22 compute-0 nova_compute[117331]:       <target dev="tap1e79672b-17"/>
Oct 09 16:29:22 compute-0 nova_compute[117331]:     </interface>
Oct 09 16:29:22 compute-0 nova_compute[117331]:     <serial type="pty">
Oct 09 16:29:22 compute-0 nova_compute[117331]:       <log file="/var/lib/nova/instances/144cc4ab-6004-4896-8899-8860557341b0/console.log" append="off"/>
Oct 09 16:29:22 compute-0 nova_compute[117331]:     </serial>
Oct 09 16:29:22 compute-0 nova_compute[117331]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Oct 09 16:29:22 compute-0 nova_compute[117331]:     <video>
Oct 09 16:29:22 compute-0 nova_compute[117331]:       <model type="virtio"/>
Oct 09 16:29:22 compute-0 nova_compute[117331]:     </video>
Oct 09 16:29:22 compute-0 nova_compute[117331]:     <input type="tablet" bus="usb"/>
Oct 09 16:29:22 compute-0 nova_compute[117331]:     <rng model="virtio">
Oct 09 16:29:22 compute-0 nova_compute[117331]:       <backend model="random">/dev/urandom</backend>
Oct 09 16:29:22 compute-0 nova_compute[117331]:     </rng>
Oct 09 16:29:22 compute-0 nova_compute[117331]:     <controller type="pci" model="pcie-root"/>
Oct 09 16:29:22 compute-0 nova_compute[117331]:     <controller type="pci" model="pcie-root-port"/>
Oct 09 16:29:22 compute-0 nova_compute[117331]:     <controller type="pci" model="pcie-root-port"/>
Oct 09 16:29:22 compute-0 nova_compute[117331]:     <controller type="pci" model="pcie-root-port"/>
Oct 09 16:29:22 compute-0 nova_compute[117331]:     <controller type="pci" model="pcie-root-port"/>
Oct 09 16:29:22 compute-0 nova_compute[117331]:     <controller type="pci" model="pcie-root-port"/>
Oct 09 16:29:22 compute-0 nova_compute[117331]:     <controller type="pci" model="pcie-root-port"/>
Oct 09 16:29:22 compute-0 nova_compute[117331]:     <controller type="pci" model="pcie-root-port"/>
Oct 09 16:29:22 compute-0 nova_compute[117331]:     <controller type="pci" model="pcie-root-port"/>
Oct 09 16:29:22 compute-0 nova_compute[117331]:     <controller type="pci" model="pcie-root-port"/>
Oct 09 16:29:22 compute-0 nova_compute[117331]:     <controller type="pci" model="pcie-root-port"/>
Oct 09 16:29:22 compute-0 nova_compute[117331]:     <controller type="pci" model="pcie-root-port"/>
Oct 09 16:29:22 compute-0 nova_compute[117331]:     <controller type="pci" model="pcie-root-port"/>
Oct 09 16:29:22 compute-0 nova_compute[117331]:     <controller type="pci" model="pcie-root-port"/>
Oct 09 16:29:22 compute-0 nova_compute[117331]:     <controller type="pci" model="pcie-root-port"/>
Oct 09 16:29:22 compute-0 nova_compute[117331]:     <controller type="pci" model="pcie-root-port"/>
Oct 09 16:29:22 compute-0 nova_compute[117331]:     <controller type="pci" model="pcie-root-port"/>
Oct 09 16:29:22 compute-0 nova_compute[117331]:     <controller type="pci" model="pcie-root-port"/>
Oct 09 16:29:22 compute-0 nova_compute[117331]:     <controller type="pci" model="pcie-root-port"/>
Oct 09 16:29:22 compute-0 nova_compute[117331]:     <controller type="pci" model="pcie-root-port"/>
Oct 09 16:29:22 compute-0 nova_compute[117331]:     <controller type="pci" model="pcie-root-port"/>
Oct 09 16:29:22 compute-0 nova_compute[117331]:     <controller type="pci" model="pcie-root-port"/>
Oct 09 16:29:22 compute-0 nova_compute[117331]:     <controller type="pci" model="pcie-root-port"/>
Oct 09 16:29:22 compute-0 nova_compute[117331]:     <controller type="pci" model="pcie-root-port"/>
Oct 09 16:29:22 compute-0 nova_compute[117331]:     <controller type="pci" model="pcie-root-port"/>
Oct 09 16:29:22 compute-0 nova_compute[117331]:     <controller type="usb" index="0"/>
Oct 09 16:29:22 compute-0 nova_compute[117331]:     <memballoon model="virtio" autodeflate="on" freePageReporting="on">
Oct 09 16:29:22 compute-0 nova_compute[117331]:       <stats period="10"/>
Oct 09 16:29:22 compute-0 nova_compute[117331]:     </memballoon>
Oct 09 16:29:22 compute-0 nova_compute[117331]:   </devices>
Oct 09 16:29:22 compute-0 nova_compute[117331]: </domain>
Oct 09 16:29:22 compute-0 nova_compute[117331]:  _get_guest_xml /usr/lib/python3.12/site-packages/nova/virt/libvirt/driver.py:8052
Oct 09 16:29:22 compute-0 nova_compute[117331]: 2025-10-09 16:29:22.126 2 DEBUG nova.compute.manager [None req-65339b89-d5c9-40a2-9f50-b040cf00cb5f ea71b67fc37c45859db08bb5231d7a56 c9d397897ee84f2e91c7e33f6c4052a3 - - default default] [instance: 144cc4ab-6004-4896-8899-8860557341b0] Preparing to wait for external event network-vif-plugged-1e79672b-1776-4292-b861-3666e6b4dc69 prepare_for_instance_event /usr/lib/python3.12/site-packages/nova/compute/manager.py:307
Oct 09 16:29:22 compute-0 nova_compute[117331]: 2025-10-09 16:29:22.127 2 DEBUG oslo_concurrency.lockutils [None req-65339b89-d5c9-40a2-9f50-b040cf00cb5f ea71b67fc37c45859db08bb5231d7a56 c9d397897ee84f2e91c7e33f6c4052a3 - - default default] Acquiring lock "144cc4ab-6004-4896-8899-8860557341b0-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:405
Oct 09 16:29:22 compute-0 nova_compute[117331]: 2025-10-09 16:29:22.127 2 DEBUG oslo_concurrency.lockutils [None req-65339b89-d5c9-40a2-9f50-b040cf00cb5f ea71b67fc37c45859db08bb5231d7a56 c9d397897ee84f2e91c7e33f6c4052a3 - - default default] Lock "144cc4ab-6004-4896-8899-8860557341b0-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:410
Oct 09 16:29:22 compute-0 nova_compute[117331]: 2025-10-09 16:29:22.127 2 DEBUG oslo_concurrency.lockutils [None req-65339b89-d5c9-40a2-9f50-b040cf00cb5f ea71b67fc37c45859db08bb5231d7a56 c9d397897ee84f2e91c7e33f6c4052a3 - - default default] Lock "144cc4ab-6004-4896-8899-8860557341b0-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:424
Oct 09 16:29:22 compute-0 nova_compute[117331]: 2025-10-09 16:29:22.128 2 DEBUG nova.virt.libvirt.vif [None req-65339b89-d5c9-40a2-9f50-b040cf00cb5f ea71b67fc37c45859db08bb5231d7a56 c9d397897ee84f2e91c7e33f6c4052a3 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,compute_id=1,config_drive='',created_at=2025-10-09T16:29:10Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description=None,display_name='tempest-TestExecuteVmWorkloadBalanceStrategy-server-1280932969',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testexecutevmworkloadbalancestrategy-server-1280932969',id=20,image_ref='b7d6e0af-25e4-4227-9dc6-43143898ceee',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='c9d397897ee84f2e91c7e33f6c4052a3',ramdisk_id='',reservation_id='r-5y8wuj64',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member,manager,admin',image_base_image_ref='b7d6e0af-25e4-4227-9dc6-43143898ceee',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestExecuteVmWorkloadBalanceStrategy-1125458264',owner_user_name='tempest-TestExecuteVmWorkloadBalanceStrategy-1125458264-project-admin'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-09T16:29:15Z,user_data=None,user_id='ea71b67fc37c45859db08bb5231d7a56',uuid=144cc4ab-6004-4896-8899-8860557341b0,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "1e79672b-1776-4292-b861-3666e6b4dc69", "address": "fa:16:3e:de:0a:d8", "network": {"id": "84247d99-b9fb-4f75-af06-dd3e92557a34", "bridge": "br-int", "label": "tempest-TestExecuteVmWorkloadBalanceStrategy-187667545-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "3f838d4a2de74d8fbb8e91e7ef351b24", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap1e79672b-17", "ovs_interfaceid": "1e79672b-1776-4292-b861-3666e6b4dc69", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.12/site-packages/nova/virt/libvirt/vif.py:721
Oct 09 16:29:22 compute-0 nova_compute[117331]: 2025-10-09 16:29:22.128 2 DEBUG nova.network.os_vif_util [None req-65339b89-d5c9-40a2-9f50-b040cf00cb5f ea71b67fc37c45859db08bb5231d7a56 c9d397897ee84f2e91c7e33f6c4052a3 - - default default] Converting VIF {"id": "1e79672b-1776-4292-b861-3666e6b4dc69", "address": "fa:16:3e:de:0a:d8", "network": {"id": "84247d99-b9fb-4f75-af06-dd3e92557a34", "bridge": "br-int", "label": "tempest-TestExecuteVmWorkloadBalanceStrategy-187667545-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "3f838d4a2de74d8fbb8e91e7ef351b24", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap1e79672b-17", "ovs_interfaceid": "1e79672b-1776-4292-b861-3666e6b4dc69", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.12/site-packages/nova/network/os_vif_util.py:511
Oct 09 16:29:22 compute-0 nova_compute[117331]: 2025-10-09 16:29:22.129 2 DEBUG nova.network.os_vif_util [None req-65339b89-d5c9-40a2-9f50-b040cf00cb5f ea71b67fc37c45859db08bb5231d7a56 c9d397897ee84f2e91c7e33f6c4052a3 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:de:0a:d8,bridge_name='br-int',has_traffic_filtering=True,id=1e79672b-1776-4292-b861-3666e6b4dc69,network=Network(84247d99-b9fb-4f75-af06-dd3e92557a34),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap1e79672b-17') nova_to_osvif_vif /usr/lib/python3.12/site-packages/nova/network/os_vif_util.py:548
Oct 09 16:29:22 compute-0 nova_compute[117331]: 2025-10-09 16:29:22.130 2 DEBUG os_vif [None req-65339b89-d5c9-40a2-9f50-b040cf00cb5f ea71b67fc37c45859db08bb5231d7a56 c9d397897ee84f2e91c7e33f6c4052a3 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:de:0a:d8,bridge_name='br-int',has_traffic_filtering=True,id=1e79672b-1776-4292-b861-3666e6b4dc69,network=Network(84247d99-b9fb-4f75-af06-dd3e92557a34),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap1e79672b-17') plug /usr/lib/python3.12/site-packages/os_vif/__init__.py:76
Oct 09 16:29:22 compute-0 nova_compute[117331]: 2025-10-09 16:29:22.130 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 22 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 09 16:29:22 compute-0 nova_compute[117331]: 2025-10-09 16:29:22.131 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.12/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 09 16:29:22 compute-0 nova_compute[117331]: 2025-10-09 16:29:22.131 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.12/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 09 16:29:22 compute-0 nova_compute[117331]: 2025-10-09 16:29:22.132 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 22 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 09 16:29:22 compute-0 nova_compute[117331]: 2025-10-09 16:29:22.132 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbCreateCommand(_result=None, table=QoS, columns={'type': 'linux-noop', 'external_ids': {'id': '7553ccf3-52cb-5cb9-871b-e0cf9275863b', '_type': 'linux-noop'}}, row=False) do_commit /usr/lib/python3.12/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 09 16:29:22 compute-0 nova_compute[117331]: 2025-10-09 16:29:22.133 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 09 16:29:22 compute-0 nova_compute[117331]: 2025-10-09 16:29:22.135 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 09 16:29:22 compute-0 nova_compute[117331]: 2025-10-09 16:29:22.137 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 22 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 09 16:29:22 compute-0 nova_compute[117331]: 2025-10-09 16:29:22.137 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap1e79672b-17, may_exist=True, interface_attrs={}) do_commit /usr/lib/python3.12/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 09 16:29:22 compute-0 nova_compute[117331]: 2025-10-09 16:29:22.138 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Port, record=tap1e79672b-17, col_values=(('qos', UUID('c3de9522-a440-4d52-910d-396d4d624273')),), if_exists=True) do_commit /usr/lib/python3.12/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 09 16:29:22 compute-0 nova_compute[117331]: 2025-10-09 16:29:22.138 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=2): DbSetCommand(_result=None, table=Interface, record=tap1e79672b-17, col_values=(('external_ids', {'iface-id': '1e79672b-1776-4292-b861-3666e6b4dc69', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:de:0a:d8', 'vm-uuid': '144cc4ab-6004-4896-8899-8860557341b0'}),), if_exists=True) do_commit /usr/lib/python3.12/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 09 16:29:22 compute-0 nova_compute[117331]: 2025-10-09 16:29:22.139 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 09 16:29:22 compute-0 NetworkManager[1028]: <info>  [1760027362.1406] manager: (tap1e79672b-17): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/68)
Oct 09 16:29:22 compute-0 nova_compute[117331]: 2025-10-09 16:29:22.141 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:248
Oct 09 16:29:22 compute-0 nova_compute[117331]: 2025-10-09 16:29:22.148 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 09 16:29:22 compute-0 nova_compute[117331]: 2025-10-09 16:29:22.148 2 INFO os_vif [None req-65339b89-d5c9-40a2-9f50-b040cf00cb5f ea71b67fc37c45859db08bb5231d7a56 c9d397897ee84f2e91c7e33f6c4052a3 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:de:0a:d8,bridge_name='br-int',has_traffic_filtering=True,id=1e79672b-1776-4292-b861-3666e6b4dc69,network=Network(84247d99-b9fb-4f75-af06-dd3e92557a34),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap1e79672b-17')
Oct 09 16:29:22 compute-0 nova_compute[117331]: 2025-10-09 16:29:22.306 2 DEBUG oslo_service.periodic_task [None req-92a13bed-c081-4c7b-87f3-bafebefb79f2 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.12/site-packages/oslo_service/periodic_task.py:210
Oct 09 16:29:23 compute-0 nova_compute[117331]: 2025-10-09 16:29:23.306 2 DEBUG oslo_service.periodic_task [None req-92a13bed-c081-4c7b-87f3-bafebefb79f2 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.12/site-packages/oslo_service/periodic_task.py:210
Oct 09 16:29:23 compute-0 nova_compute[117331]: 2025-10-09 16:29:23.708 2 DEBUG nova.virt.libvirt.driver [None req-65339b89-d5c9-40a2-9f50-b040cf00cb5f ea71b67fc37c45859db08bb5231d7a56 c9d397897ee84f2e91c7e33f6c4052a3 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.12/site-packages/nova/virt/libvirt/driver.py:13022
Oct 09 16:29:23 compute-0 nova_compute[117331]: 2025-10-09 16:29:23.708 2 DEBUG nova.virt.libvirt.driver [None req-65339b89-d5c9-40a2-9f50-b040cf00cb5f ea71b67fc37c45859db08bb5231d7a56 c9d397897ee84f2e91c7e33f6c4052a3 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.12/site-packages/nova/virt/libvirt/driver.py:13022
Oct 09 16:29:23 compute-0 nova_compute[117331]: 2025-10-09 16:29:23.708 2 DEBUG nova.virt.libvirt.driver [None req-65339b89-d5c9-40a2-9f50-b040cf00cb5f ea71b67fc37c45859db08bb5231d7a56 c9d397897ee84f2e91c7e33f6c4052a3 - - default default] No VIF found with MAC fa:16:3e:de:0a:d8, not building metadata _build_interface_metadata /usr/lib/python3.12/site-packages/nova/virt/libvirt/driver.py:12998
Oct 09 16:29:23 compute-0 nova_compute[117331]: 2025-10-09 16:29:23.709 2 INFO nova.virt.libvirt.driver [None req-65339b89-d5c9-40a2-9f50-b040cf00cb5f ea71b67fc37c45859db08bb5231d7a56 c9d397897ee84f2e91c7e33f6c4052a3 - - default default] [instance: 144cc4ab-6004-4896-8899-8860557341b0] Using config drive
Oct 09 16:29:24 compute-0 nova_compute[117331]: 2025-10-09 16:29:24.225 2 WARNING neutronclient.v2_0.client [None req-65339b89-d5c9-40a2-9f50-b040cf00cb5f ea71b67fc37c45859db08bb5231d7a56 c9d397897ee84f2e91c7e33f6c4052a3 - - default default] The python binding code in neutronclient is deprecated in favor of OpenstackSDK, please use that as this will be removed in a future release.
Oct 09 16:29:24 compute-0 nova_compute[117331]: 2025-10-09 16:29:24.367 2 INFO nova.virt.libvirt.driver [None req-65339b89-d5c9-40a2-9f50-b040cf00cb5f ea71b67fc37c45859db08bb5231d7a56 c9d397897ee84f2e91c7e33f6c4052a3 - - default default] [instance: 144cc4ab-6004-4896-8899-8860557341b0] Creating config drive at /var/lib/nova/instances/144cc4ab-6004-4896-8899-8860557341b0/disk.config
Oct 09 16:29:24 compute-0 nova_compute[117331]: 2025-10-09 16:29:24.373 2 DEBUG oslo_concurrency.processutils [None req-65339b89-d5c9-40a2-9f50-b040cf00cb5f ea71b67fc37c45859db08bb5231d7a56 c9d397897ee84f2e91c7e33f6c4052a3 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/144cc4ab-6004-4896-8899-8860557341b0/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 32.1.0-0.20251008143712.076498e.el10 -quiet -J -r -V config-2 /tmp/tmpi53fdo_e execute /usr/lib/python3.12/site-packages/oslo_concurrency/processutils.py:349
Oct 09 16:29:24 compute-0 sshd-session[148919]: Invalid user www from 36.224.53.32 port 54964
Oct 09 16:29:24 compute-0 nova_compute[117331]: 2025-10-09 16:29:24.511 2 DEBUG oslo_concurrency.processutils [None req-65339b89-d5c9-40a2-9f50-b040cf00cb5f ea71b67fc37c45859db08bb5231d7a56 c9d397897ee84f2e91c7e33f6c4052a3 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/144cc4ab-6004-4896-8899-8860557341b0/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 32.1.0-0.20251008143712.076498e.el10 -quiet -J -r -V config-2 /tmp/tmpi53fdo_e" returned: 0 in 0.138s execute /usr/lib/python3.12/site-packages/oslo_concurrency/processutils.py:372
Oct 09 16:29:24 compute-0 podman[148926]: 2025-10-09 16:29:24.575454483 +0000 UTC m=+0.049595489 container health_status 0e966c7a283689daf23c5db5017f23f685ee836319240188a9f70ddd246563c5 (image=38.102.83.66:5001/podified-master-centos10/openstack-neutron-metadata-agent-ovn:watcher_latest, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, tcib_build_tag=watcher_latest, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=ovn_metadata_agent, io.buildah.version=1.41.4, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': '38.102.83.66:5001/podified-master-centos10/openstack-neutron-metadata-agent-ovn:watcher_latest', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.build-date=20251007, org.label-schema.name=CentOS Stream 10 Base Image)
Oct 09 16:29:24 compute-0 kernel: tap1e79672b-17: entered promiscuous mode
Oct 09 16:29:24 compute-0 NetworkManager[1028]: <info>  [1760027364.5832] manager: (tap1e79672b-17): new Tun device (/org/freedesktop/NetworkManager/Devices/69)
Oct 09 16:29:24 compute-0 nova_compute[117331]: 2025-10-09 16:29:24.584 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 09 16:29:24 compute-0 ovn_controller[19752]: 2025-10-09T16:29:24Z|00184|binding|INFO|Claiming lport 1e79672b-1776-4292-b861-3666e6b4dc69 for this chassis.
Oct 09 16:29:24 compute-0 ovn_controller[19752]: 2025-10-09T16:29:24Z|00185|binding|INFO|1e79672b-1776-4292-b861-3666e6b4dc69: Claiming fa:16:3e:de:0a:d8 10.100.0.8
Oct 09 16:29:24 compute-0 nova_compute[117331]: 2025-10-09 16:29:24.588 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 09 16:29:24 compute-0 ovn_metadata_agent[28608]: 2025-10-09 16:29:24.601 28613 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:de:0a:d8 10.100.0.8'], port_security=['fa:16:3e:de:0a:d8 10.100.0.8'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.8/28', 'neutron:device_id': '144cc4ab-6004-4896-8899-8860557341b0', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-84247d99-b9fb-4f75-af06-dd3e92557a34', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'c9d397897ee84f2e91c7e33f6c4052a3', 'neutron:revision_number': '4', 'neutron:security_group_ids': '2f00962f-5e5e-41c9-9e03-af9a8b3a3c40', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=83685448-9277-4bf7-9118-ff77ca4cb703, chassis=[<ovs.db.idl.Row object at 0x7f37b2576840>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f37b2576840>], logical_port=1e79672b-1776-4292-b861-3666e6b4dc69) old=Port_Binding(chassis=[]) matches /usr/lib/python3.12/site-packages/ovsdbapp/backend/ovs_idl/event.py:55
Oct 09 16:29:24 compute-0 ovn_metadata_agent[28608]: 2025-10-09 16:29:24.602 28613 INFO neutron.agent.ovn.metadata.agent [-] Port 1e79672b-1776-4292-b861-3666e6b4dc69 in datapath 84247d99-b9fb-4f75-af06-dd3e92557a34 bound to our chassis
Oct 09 16:29:24 compute-0 ovn_metadata_agent[28608]: 2025-10-09 16:29:24.604 28613 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 84247d99-b9fb-4f75-af06-dd3e92557a34
Oct 09 16:29:24 compute-0 systemd-udevd[148980]: Network interface NamePolicy= disabled on kernel command line.
Oct 09 16:29:24 compute-0 ovn_metadata_agent[28608]: 2025-10-09 16:29:24.620 139687 DEBUG oslo.privsep.daemon [-] privsep: reply[67cb685f-a863-4be7-be76-1d6c18c76462]: (4, False) _call_back /usr/lib/python3.12/site-packages/oslo_privsep/daemon.py:515
Oct 09 16:29:24 compute-0 ovn_metadata_agent[28608]: 2025-10-09 16:29:24.621 28613 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap84247d99-b1 in ovnmeta-84247d99-b9fb-4f75-af06-dd3e92557a34 namespace provision_datapath /usr/lib/python3.12/site-packages/neutron/agent/ovn/metadata/agent.py:794
Oct 09 16:29:24 compute-0 ovn_metadata_agent[28608]: 2025-10-09 16:29:24.622 139687 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap84247d99-b0 not found in namespace None get_link_id /usr/lib/python3.12/site-packages/neutron/privileged/agent/linux/ip_lib.py:203
Oct 09 16:29:24 compute-0 ovn_metadata_agent[28608]: 2025-10-09 16:29:24.622 139687 DEBUG oslo.privsep.daemon [-] privsep: reply[d4ea10c7-46a8-4ec3-a55a-e9a9a4425486]: (4, False) _call_back /usr/lib/python3.12/site-packages/oslo_privsep/daemon.py:515
Oct 09 16:29:24 compute-0 ovn_metadata_agent[28608]: 2025-10-09 16:29:24.623 139687 DEBUG oslo.privsep.daemon [-] privsep: reply[397c2df2-6f8a-4b17-89ef-3c263f10f676]: (4, False) _call_back /usr/lib/python3.12/site-packages/oslo_privsep/daemon.py:515
Oct 09 16:29:24 compute-0 podman[148927]: 2025-10-09 16:29:24.624203652 +0000 UTC m=+0.087878897 container health_status 8227728d23f374cb9528be6ddf01dff38d8c792912a17b0d137932a18390b820 (image=38.102.83.66:5001/podified-master-centos10/openstack-iscsid:watcher_latest, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': '38.102.83.66:5001/podified-master-centos10/openstack-iscsid:watcher_latest', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_build_tag=watcher_latest, io.buildah.version=1.41.4, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.build-date=20251007, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.schema-version=1.0, config_id=iscsid, container_name=iscsid)
Oct 09 16:29:24 compute-0 systemd-machined[77487]: New machine qemu-14-instance-00000014.
Oct 09 16:29:24 compute-0 NetworkManager[1028]: <info>  [1760027364.6334] device (tap1e79672b-17): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Oct 09 16:29:24 compute-0 ovn_metadata_agent[28608]: 2025-10-09 16:29:24.632 28727 DEBUG oslo.privsep.daemon [-] privsep: reply[2347b83b-44e6-49e3-b6fa-918d27164462]: (4, None) _call_back /usr/lib/python3.12/site-packages/oslo_privsep/daemon.py:515
Oct 09 16:29:24 compute-0 NetworkManager[1028]: <info>  [1760027364.6345] device (tap1e79672b-17): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Oct 09 16:29:24 compute-0 nova_compute[117331]: 2025-10-09 16:29:24.645 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 09 16:29:24 compute-0 ovn_controller[19752]: 2025-10-09T16:29:24Z|00186|binding|INFO|Setting lport 1e79672b-1776-4292-b861-3666e6b4dc69 ovn-installed in OVS
Oct 09 16:29:24 compute-0 ovn_controller[19752]: 2025-10-09T16:29:24Z|00187|binding|INFO|Setting lport 1e79672b-1776-4292-b861-3666e6b4dc69 up in Southbound
Oct 09 16:29:24 compute-0 nova_compute[117331]: 2025-10-09 16:29:24.653 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 09 16:29:24 compute-0 ovn_metadata_agent[28608]: 2025-10-09 16:29:24.653 139687 DEBUG oslo.privsep.daemon [-] privsep: reply[01fd1e85-f7bc-40ba-85d6-531388991b55]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.12/site-packages/oslo_privsep/daemon.py:515
Oct 09 16:29:24 compute-0 systemd[1]: Started Virtual Machine qemu-14-instance-00000014.
Oct 09 16:29:24 compute-0 ovn_metadata_agent[28608]: 2025-10-09 16:29:24.678 141048 DEBUG oslo.privsep.daemon [-] privsep: reply[4f24e925-eca2-489b-bd2c-e3de149796f8]: (4, None) _call_back /usr/lib/python3.12/site-packages/oslo_privsep/daemon.py:515
Oct 09 16:29:24 compute-0 ovn_metadata_agent[28608]: 2025-10-09 16:29:24.682 139687 DEBUG oslo.privsep.daemon [-] privsep: reply[ff860e6d-c6ef-4381-b98d-561de32a1b9c]: (4, None) _call_back /usr/lib/python3.12/site-packages/oslo_privsep/daemon.py:515
Oct 09 16:29:24 compute-0 NetworkManager[1028]: <info>  [1760027364.6831] manager: (tap84247d99-b0): new Veth device (/org/freedesktop/NetworkManager/Devices/70)
Oct 09 16:29:24 compute-0 systemd-udevd[148986]: Network interface NamePolicy= disabled on kernel command line.
Oct 09 16:29:24 compute-0 ovn_metadata_agent[28608]: 2025-10-09 16:29:24.711 141048 DEBUG oslo.privsep.daemon [-] privsep: reply[0543599e-8966-4fa4-bb0d-780e525703af]: (4, None) _call_back /usr/lib/python3.12/site-packages/oslo_privsep/daemon.py:515
Oct 09 16:29:24 compute-0 ovn_metadata_agent[28608]: 2025-10-09 16:29:24.713 141048 DEBUG oslo.privsep.daemon [-] privsep: reply[e9a44176-5201-468a-a2a5-d5e696ab99aa]: (4, None) _call_back /usr/lib/python3.12/site-packages/oslo_privsep/daemon.py:515
Oct 09 16:29:24 compute-0 NetworkManager[1028]: <info>  [1760027364.7310] device (tap84247d99-b0): carrier: link connected
Oct 09 16:29:24 compute-0 ovn_metadata_agent[28608]: 2025-10-09 16:29:24.734 141048 DEBUG oslo.privsep.daemon [-] privsep: reply[fce7ae1e-8608-4d6a-bb53-d346d505b79d]: (4, None) _call_back /usr/lib/python3.12/site-packages/oslo_privsep/daemon.py:515
Oct 09 16:29:24 compute-0 ovn_metadata_agent[28608]: 2025-10-09 16:29:24.747 139687 DEBUG oslo.privsep.daemon [-] privsep: reply[d9f792b2-6c3f-4e4a-b92a-b4754c982eec]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap84247d99-b1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:10:14:aa'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 53], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 230833, 'reachable_time': 43347, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 149016, 'error': None, 'target': 'ovnmeta-84247d99-b9fb-4f75-af06-dd3e92557a34', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.12/site-packages/oslo_privsep/daemon.py:515
Oct 09 16:29:24 compute-0 ovn_metadata_agent[28608]: 2025-10-09 16:29:24.757 139687 DEBUG oslo.privsep.daemon [-] privsep: reply[f816549f-0e82-4e7e-b21a-d2fee641b9c9]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe10:14aa'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 230833, 'tstamp': 230833}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 149017, 'error': None, 'target': 'ovnmeta-84247d99-b9fb-4f75-af06-dd3e92557a34', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.12/site-packages/oslo_privsep/daemon.py:515
Oct 09 16:29:24 compute-0 ovn_metadata_agent[28608]: 2025-10-09 16:29:24.768 139687 DEBUG oslo.privsep.daemon [-] privsep: reply[2219a31e-fb75-43c6-8cdc-f0c05d26bab3]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap84247d99-b1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:10:14:aa'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 53], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 230833, 'reachable_time': 43347, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 149018, 'error': None, 'target': 'ovnmeta-84247d99-b9fb-4f75-af06-dd3e92557a34', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.12/site-packages/oslo_privsep/daemon.py:515
Oct 09 16:29:24 compute-0 ovn_metadata_agent[28608]: 2025-10-09 16:29:24.791 139687 DEBUG oslo.privsep.daemon [-] privsep: reply[6c226426-559e-409b-ac10-3a979107f8d7]: (4, None) _call_back /usr/lib/python3.12/site-packages/oslo_privsep/daemon.py:515
Oct 09 16:29:24 compute-0 nova_compute[117331]: 2025-10-09 16:29:24.811 2 DEBUG nova.compute.manager [req-2575dc7d-621e-4a31-b0b6-9896622d7ff8 req-6d6fa26a-8a94-431a-b259-a706bf538324 ee15961ad2be4bada6c106ac4f69f422 c076c42271004e7aa86e84416e33f826 - - default default] [instance: 144cc4ab-6004-4896-8899-8860557341b0] Received event network-vif-plugged-1e79672b-1776-4292-b861-3666e6b4dc69 external_instance_event /usr/lib/python3.12/site-packages/nova/compute/manager.py:11812
Oct 09 16:29:24 compute-0 nova_compute[117331]: 2025-10-09 16:29:24.811 2 DEBUG oslo_concurrency.lockutils [req-2575dc7d-621e-4a31-b0b6-9896622d7ff8 req-6d6fa26a-8a94-431a-b259-a706bf538324 ee15961ad2be4bada6c106ac4f69f422 c076c42271004e7aa86e84416e33f826 - - default default] Acquiring lock "144cc4ab-6004-4896-8899-8860557341b0-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:405
Oct 09 16:29:24 compute-0 nova_compute[117331]: 2025-10-09 16:29:24.812 2 DEBUG oslo_concurrency.lockutils [req-2575dc7d-621e-4a31-b0b6-9896622d7ff8 req-6d6fa26a-8a94-431a-b259-a706bf538324 ee15961ad2be4bada6c106ac4f69f422 c076c42271004e7aa86e84416e33f826 - - default default] Lock "144cc4ab-6004-4896-8899-8860557341b0-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:410
Oct 09 16:29:24 compute-0 nova_compute[117331]: 2025-10-09 16:29:24.812 2 DEBUG oslo_concurrency.lockutils [req-2575dc7d-621e-4a31-b0b6-9896622d7ff8 req-6d6fa26a-8a94-431a-b259-a706bf538324 ee15961ad2be4bada6c106ac4f69f422 c076c42271004e7aa86e84416e33f826 - - default default] Lock "144cc4ab-6004-4896-8899-8860557341b0-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:424
Oct 09 16:29:24 compute-0 nova_compute[117331]: 2025-10-09 16:29:24.812 2 DEBUG nova.compute.manager [req-2575dc7d-621e-4a31-b0b6-9896622d7ff8 req-6d6fa26a-8a94-431a-b259-a706bf538324 ee15961ad2be4bada6c106ac4f69f422 c076c42271004e7aa86e84416e33f826 - - default default] [instance: 144cc4ab-6004-4896-8899-8860557341b0] Processing event network-vif-plugged-1e79672b-1776-4292-b861-3666e6b4dc69 _process_instance_event /usr/lib/python3.12/site-packages/nova/compute/manager.py:11572
Oct 09 16:29:24 compute-0 ovn_metadata_agent[28608]: 2025-10-09 16:29:24.848 139687 DEBUG oslo.privsep.daemon [-] privsep: reply[b5d43cfc-ff6c-4ae5-ba6c-2e79106297bf]: (4, None) _call_back /usr/lib/python3.12/site-packages/oslo_privsep/daemon.py:515
Oct 09 16:29:24 compute-0 ovn_metadata_agent[28608]: 2025-10-09 16:29:24.850 28613 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap84247d99-b0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.12/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 09 16:29:24 compute-0 ovn_metadata_agent[28608]: 2025-10-09 16:29:24.850 28613 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.12/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 09 16:29:24 compute-0 ovn_metadata_agent[28608]: 2025-10-09 16:29:24.850 28613 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap84247d99-b0, may_exist=True, interface_attrs={}) do_commit /usr/lib/python3.12/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 09 16:29:24 compute-0 kernel: tap84247d99-b0: entered promiscuous mode
Oct 09 16:29:24 compute-0 NetworkManager[1028]: <info>  [1760027364.8525] manager: (tap84247d99-b0): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/71)
Oct 09 16:29:24 compute-0 ovn_metadata_agent[28608]: 2025-10-09 16:29:24.854 28613 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap84247d99-b0, col_values=(('external_ids', {'iface-id': '091675d2-c61b-4a23-b064-d4ca0295fca5'}),), if_exists=True) do_commit /usr/lib/python3.12/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 09 16:29:24 compute-0 ovn_metadata_agent[28608]: 2025-10-09 16:29:24.857 139687 DEBUG oslo.privsep.daemon [-] privsep: reply[edbea22f-8f4e-47cb-aa42-768d2fb569fe]: (4, '') _call_back /usr/lib/python3.12/site-packages/oslo_privsep/daemon.py:515
Oct 09 16:29:24 compute-0 ovn_controller[19752]: 2025-10-09T16:29:24Z|00188|binding|INFO|Releasing lport 091675d2-c61b-4a23-b064-d4ca0295fca5 from this chassis (sb_readonly=0)
Oct 09 16:29:24 compute-0 ovn_metadata_agent[28608]: 2025-10-09 16:29:24.858 28613 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/84247d99-b9fb-4f75-af06-dd3e92557a34.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/84247d99-b9fb-4f75-af06-dd3e92557a34.pid.haproxy' get_value_from_file /usr/lib/python3.12/site-packages/neutron/agent/linux/utils.py:268
Oct 09 16:29:24 compute-0 ovn_metadata_agent[28608]: 2025-10-09 16:29:24.858 28613 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/84247d99-b9fb-4f75-af06-dd3e92557a34.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/84247d99-b9fb-4f75-af06-dd3e92557a34.pid.haproxy' get_value_from_file /usr/lib/python3.12/site-packages/neutron/agent/linux/utils.py:268
Oct 09 16:29:24 compute-0 ovn_metadata_agent[28608]: 2025-10-09 16:29:24.858 28613 DEBUG neutron.agent.linux.external_process [-] No haproxy process started for 84247d99-b9fb-4f75-af06-dd3e92557a34 disable /usr/lib/python3.12/site-packages/neutron/agent/linux/external_process.py:173
Oct 09 16:29:24 compute-0 ovn_metadata_agent[28608]: 2025-10-09 16:29:24.859 28613 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/84247d99-b9fb-4f75-af06-dd3e92557a34.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/84247d99-b9fb-4f75-af06-dd3e92557a34.pid.haproxy' get_value_from_file /usr/lib/python3.12/site-packages/neutron/agent/linux/utils.py:268
Oct 09 16:29:24 compute-0 ovn_metadata_agent[28608]: 2025-10-09 16:29:24.859 139687 DEBUG oslo.privsep.daemon [-] privsep: reply[a96b85de-e5ea-4f0e-ad66-c8dad3de20b4]: (4, None) _call_back /usr/lib/python3.12/site-packages/oslo_privsep/daemon.py:515
Oct 09 16:29:24 compute-0 nova_compute[117331]: 2025-10-09 16:29:24.859 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 09 16:29:24 compute-0 ovn_metadata_agent[28608]: 2025-10-09 16:29:24.859 28613 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/84247d99-b9fb-4f75-af06-dd3e92557a34.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/84247d99-b9fb-4f75-af06-dd3e92557a34.pid.haproxy' get_value_from_file /usr/lib/python3.12/site-packages/neutron/agent/linux/utils.py:268
Oct 09 16:29:24 compute-0 ovn_metadata_agent[28608]: 2025-10-09 16:29:24.859 139687 DEBUG oslo.privsep.daemon [-] privsep: reply[44299b2c-cfd3-40b7-b9aa-7b6fc0f25d63]: (4, None) _call_back /usr/lib/python3.12/site-packages/oslo_privsep/daemon.py:515
Oct 09 16:29:24 compute-0 ovn_metadata_agent[28608]: 2025-10-09 16:29:24.860 28613 DEBUG neutron.agent.metadata.driver_base [-] haproxy_cfg = 
Oct 09 16:29:24 compute-0 ovn_metadata_agent[28608]: global
Oct 09 16:29:24 compute-0 ovn_metadata_agent[28608]:     log         /dev/log local0 debug
Oct 09 16:29:24 compute-0 ovn_metadata_agent[28608]:     log-tag     haproxy-metadata-proxy-84247d99-b9fb-4f75-af06-dd3e92557a34
Oct 09 16:29:24 compute-0 ovn_metadata_agent[28608]:     user        root
Oct 09 16:29:24 compute-0 ovn_metadata_agent[28608]:     group       root
Oct 09 16:29:24 compute-0 ovn_metadata_agent[28608]:     maxconn     1024
Oct 09 16:29:24 compute-0 ovn_metadata_agent[28608]:     pidfile     /var/lib/neutron/external/pids/84247d99-b9fb-4f75-af06-dd3e92557a34.pid.haproxy
Oct 09 16:29:24 compute-0 ovn_metadata_agent[28608]:     daemon
Oct 09 16:29:24 compute-0 ovn_metadata_agent[28608]: 
Oct 09 16:29:24 compute-0 ovn_metadata_agent[28608]: defaults
Oct 09 16:29:24 compute-0 ovn_metadata_agent[28608]:     log global
Oct 09 16:29:24 compute-0 ovn_metadata_agent[28608]:     mode http
Oct 09 16:29:24 compute-0 ovn_metadata_agent[28608]:     option httplog
Oct 09 16:29:24 compute-0 ovn_metadata_agent[28608]:     option dontlognull
Oct 09 16:29:24 compute-0 ovn_metadata_agent[28608]:     option http-server-close
Oct 09 16:29:24 compute-0 ovn_metadata_agent[28608]:     option forwardfor
Oct 09 16:29:24 compute-0 ovn_metadata_agent[28608]:     retries                 3
Oct 09 16:29:24 compute-0 ovn_metadata_agent[28608]:     timeout http-request    30s
Oct 09 16:29:24 compute-0 ovn_metadata_agent[28608]:     timeout connect         30s
Oct 09 16:29:24 compute-0 ovn_metadata_agent[28608]:     timeout client          32s
Oct 09 16:29:24 compute-0 ovn_metadata_agent[28608]:     timeout server          32s
Oct 09 16:29:24 compute-0 ovn_metadata_agent[28608]:     timeout http-keep-alive 30s
Oct 09 16:29:24 compute-0 ovn_metadata_agent[28608]: 
Oct 09 16:29:24 compute-0 ovn_metadata_agent[28608]: listen listener
Oct 09 16:29:24 compute-0 ovn_metadata_agent[28608]:     bind 169.254.169.254:80
Oct 09 16:29:24 compute-0 ovn_metadata_agent[28608]:     
Oct 09 16:29:24 compute-0 ovn_metadata_agent[28608]:     server metadata /var/lib/neutron/metadata_proxy
Oct 09 16:29:24 compute-0 ovn_metadata_agent[28608]: 
Oct 09 16:29:24 compute-0 ovn_metadata_agent[28608]:     http-request add-header X-OVN-Network-ID 84247d99-b9fb-4f75-af06-dd3e92557a34
Oct 09 16:29:24 compute-0 ovn_metadata_agent[28608]:  create_config_file /usr/lib/python3.12/site-packages/neutron/agent/metadata/driver_base.py:155
Oct 09 16:29:24 compute-0 ovn_metadata_agent[28608]: 2025-10-09 16:29:24.860 28613 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-84247d99-b9fb-4f75-af06-dd3e92557a34', 'env', 'PROCESS_TAG=haproxy-84247d99-b9fb-4f75-af06-dd3e92557a34', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/84247d99-b9fb-4f75-af06-dd3e92557a34.conf'] create_process /usr/lib/python3.12/site-packages/neutron/agent/linux/utils.py:84
Oct 09 16:29:24 compute-0 nova_compute[117331]: 2025-10-09 16:29:24.871 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 09 16:29:25 compute-0 podman[149056]: 2025-10-09 16:29:25.251757196 +0000 UTC m=+0.051552139 container create f05642e7a9cb4e79d64d2d1dd00dd50188be0c698fc2836314fcedb587c79c63 (image=38.102.83.66:5001/podified-master-centos10/openstack-neutron-metadata-agent-ovn:watcher_latest, name=neutron-haproxy-ovnmeta-84247d99-b9fb-4f75-af06-dd3e92557a34, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.build-date=20251007, io.buildah.version=1.41.4, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=watcher_latest)
Oct 09 16:29:25 compute-0 systemd[1]: Started libpod-conmon-f05642e7a9cb4e79d64d2d1dd00dd50188be0c698fc2836314fcedb587c79c63.scope.
Oct 09 16:29:25 compute-0 nova_compute[117331]: 2025-10-09 16:29:25.306 2 DEBUG oslo_service.periodic_task [None req-92a13bed-c081-4c7b-87f3-bafebefb79f2 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.12/site-packages/oslo_service/periodic_task.py:210
Oct 09 16:29:25 compute-0 nova_compute[117331]: 2025-10-09 16:29:25.307 2 DEBUG nova.compute.manager [None req-92a13bed-c081-4c7b-87f3-bafebefb79f2 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.12/site-packages/nova/compute/manager.py:11228
Oct 09 16:29:25 compute-0 nova_compute[117331]: 2025-10-09 16:29:25.308 2 DEBUG oslo_service.periodic_task [None req-92a13bed-c081-4c7b-87f3-bafebefb79f2 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.12/site-packages/oslo_service/periodic_task.py:210
Oct 09 16:29:25 compute-0 podman[149056]: 2025-10-09 16:29:25.224510356 +0000 UTC m=+0.024305319 image pull 4e15c189a95c5dcd1203c76fe304de5c8f8616da9a258ca168cc54a517f49b87 38.102.83.66:5001/podified-master-centos10/openstack-neutron-metadata-agent-ovn:watcher_latest
Oct 09 16:29:25 compute-0 systemd[1]: Started libcrun container.
Oct 09 16:29:25 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5662258e0c0a0d8b044a81e74030053f016eba96b96ec3641000824b1b2c0068/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Oct 09 16:29:25 compute-0 podman[149056]: 2025-10-09 16:29:25.339475576 +0000 UTC m=+0.139270539 container init f05642e7a9cb4e79d64d2d1dd00dd50188be0c698fc2836314fcedb587c79c63 (image=38.102.83.66:5001/podified-master-centos10/openstack-neutron-metadata-agent-ovn:watcher_latest, name=neutron-haproxy-ovnmeta-84247d99-b9fb-4f75-af06-dd3e92557a34, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_managed=true, tcib_build_tag=watcher_latest, io.buildah.version=1.41.4, org.label-schema.license=GPLv2, org.label-schema.build-date=20251007, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 10 Base Image)
Oct 09 16:29:25 compute-0 podman[149056]: 2025-10-09 16:29:25.344524525 +0000 UTC m=+0.144319468 container start f05642e7a9cb4e79d64d2d1dd00dd50188be0c698fc2836314fcedb587c79c63 (image=38.102.83.66:5001/podified-master-centos10/openstack-neutron-metadata-agent-ovn:watcher_latest, name=neutron-haproxy-ovnmeta-84247d99-b9fb-4f75-af06-dd3e92557a34, tcib_managed=true, org.label-schema.build-date=20251007, org.label-schema.vendor=CentOS, tcib_build_tag=watcher_latest, org.label-schema.name=CentOS Stream 10 Base Image, io.buildah.version=1.41.4, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2)
Oct 09 16:29:25 compute-0 neutron-haproxy-ovnmeta-84247d99-b9fb-4f75-af06-dd3e92557a34[149073]: [NOTICE]   (149077) : New worker (149079) forked
Oct 09 16:29:25 compute-0 neutron-haproxy-ovnmeta-84247d99-b9fb-4f75-af06-dd3e92557a34[149073]: [NOTICE]   (149077) : Loading success.
Oct 09 16:29:25 compute-0 nova_compute[117331]: 2025-10-09 16:29:25.629 2 DEBUG nova.compute.manager [None req-65339b89-d5c9-40a2-9f50-b040cf00cb5f ea71b67fc37c45859db08bb5231d7a56 c9d397897ee84f2e91c7e33f6c4052a3 - - default default] [instance: 144cc4ab-6004-4896-8899-8860557341b0] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.12/site-packages/nova/compute/manager.py:602
Oct 09 16:29:25 compute-0 nova_compute[117331]: 2025-10-09 16:29:25.636 2 DEBUG nova.virt.libvirt.driver [None req-65339b89-d5c9-40a2-9f50-b040cf00cb5f ea71b67fc37c45859db08bb5231d7a56 c9d397897ee84f2e91c7e33f6c4052a3 - - default default] [instance: 144cc4ab-6004-4896-8899-8860557341b0] Guest created on hypervisor spawn /usr/lib/python3.12/site-packages/nova/virt/libvirt/driver.py:4816
Oct 09 16:29:25 compute-0 nova_compute[117331]: 2025-10-09 16:29:25.640 2 INFO nova.virt.libvirt.driver [-] [instance: 144cc4ab-6004-4896-8899-8860557341b0] Instance spawned successfully.
Oct 09 16:29:25 compute-0 nova_compute[117331]: 2025-10-09 16:29:25.640 2 DEBUG nova.virt.libvirt.driver [None req-65339b89-d5c9-40a2-9f50-b040cf00cb5f ea71b67fc37c45859db08bb5231d7a56 c9d397897ee84f2e91c7e33f6c4052a3 - - default default] [instance: 144cc4ab-6004-4896-8899-8860557341b0] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.12/site-packages/nova/virt/libvirt/driver.py:985
Oct 09 16:29:25 compute-0 nova_compute[117331]: 2025-10-09 16:29:25.820 2 DEBUG oslo_concurrency.lockutils [None req-92a13bed-c081-4c7b-87f3-bafebefb79f2 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:405
Oct 09 16:29:25 compute-0 nova_compute[117331]: 2025-10-09 16:29:25.821 2 DEBUG oslo_concurrency.lockutils [None req-92a13bed-c081-4c7b-87f3-bafebefb79f2 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:410
Oct 09 16:29:25 compute-0 nova_compute[117331]: 2025-10-09 16:29:25.821 2 DEBUG oslo_concurrency.lockutils [None req-92a13bed-c081-4c7b-87f3-bafebefb79f2 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:424
Oct 09 16:29:25 compute-0 nova_compute[117331]: 2025-10-09 16:29:25.821 2 DEBUG nova.compute.resource_tracker [None req-92a13bed-c081-4c7b-87f3-bafebefb79f2 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.12/site-packages/nova/compute/resource_tracker.py:937
Oct 09 16:29:25 compute-0 sshd-session[148919]: pam_unix(sshd:auth): check pass; user unknown
Oct 09 16:29:25 compute-0 sshd-session[148919]: pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=36.224.53.32
Oct 09 16:29:26 compute-0 nova_compute[117331]: 2025-10-09 16:29:26.153 2 DEBUG nova.virt.libvirt.driver [None req-65339b89-d5c9-40a2-9f50-b040cf00cb5f ea71b67fc37c45859db08bb5231d7a56 c9d397897ee84f2e91c7e33f6c4052a3 - - default default] [instance: 144cc4ab-6004-4896-8899-8860557341b0] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.12/site-packages/nova/virt/libvirt/driver.py:1014
Oct 09 16:29:26 compute-0 nova_compute[117331]: 2025-10-09 16:29:26.153 2 DEBUG nova.virt.libvirt.driver [None req-65339b89-d5c9-40a2-9f50-b040cf00cb5f ea71b67fc37c45859db08bb5231d7a56 c9d397897ee84f2e91c7e33f6c4052a3 - - default default] [instance: 144cc4ab-6004-4896-8899-8860557341b0] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.12/site-packages/nova/virt/libvirt/driver.py:1014
Oct 09 16:29:26 compute-0 nova_compute[117331]: 2025-10-09 16:29:26.154 2 DEBUG nova.virt.libvirt.driver [None req-65339b89-d5c9-40a2-9f50-b040cf00cb5f ea71b67fc37c45859db08bb5231d7a56 c9d397897ee84f2e91c7e33f6c4052a3 - - default default] [instance: 144cc4ab-6004-4896-8899-8860557341b0] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.12/site-packages/nova/virt/libvirt/driver.py:1014
Oct 09 16:29:26 compute-0 nova_compute[117331]: 2025-10-09 16:29:26.154 2 DEBUG nova.virt.libvirt.driver [None req-65339b89-d5c9-40a2-9f50-b040cf00cb5f ea71b67fc37c45859db08bb5231d7a56 c9d397897ee84f2e91c7e33f6c4052a3 - - default default] [instance: 144cc4ab-6004-4896-8899-8860557341b0] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.12/site-packages/nova/virt/libvirt/driver.py:1014
Oct 09 16:29:26 compute-0 nova_compute[117331]: 2025-10-09 16:29:26.155 2 DEBUG nova.virt.libvirt.driver [None req-65339b89-d5c9-40a2-9f50-b040cf00cb5f ea71b67fc37c45859db08bb5231d7a56 c9d397897ee84f2e91c7e33f6c4052a3 - - default default] [instance: 144cc4ab-6004-4896-8899-8860557341b0] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.12/site-packages/nova/virt/libvirt/driver.py:1014
Oct 09 16:29:26 compute-0 nova_compute[117331]: 2025-10-09 16:29:26.155 2 DEBUG nova.virt.libvirt.driver [None req-65339b89-d5c9-40a2-9f50-b040cf00cb5f ea71b67fc37c45859db08bb5231d7a56 c9d397897ee84f2e91c7e33f6c4052a3 - - default default] [instance: 144cc4ab-6004-4896-8899-8860557341b0] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.12/site-packages/nova/virt/libvirt/driver.py:1014
Oct 09 16:29:26 compute-0 nova_compute[117331]: 2025-10-09 16:29:26.664 2 INFO nova.compute.manager [None req-65339b89-d5c9-40a2-9f50-b040cf00cb5f ea71b67fc37c45859db08bb5231d7a56 c9d397897ee84f2e91c7e33f6c4052a3 - - default default] [instance: 144cc4ab-6004-4896-8899-8860557341b0] Took 9.78 seconds to spawn the instance on the hypervisor.
Oct 09 16:29:26 compute-0 nova_compute[117331]: 2025-10-09 16:29:26.664 2 DEBUG nova.compute.manager [None req-65339b89-d5c9-40a2-9f50-b040cf00cb5f ea71b67fc37c45859db08bb5231d7a56 c9d397897ee84f2e91c7e33f6c4052a3 - - default default] [instance: 144cc4ab-6004-4896-8899-8860557341b0] Checking state _get_power_state /usr/lib/python3.12/site-packages/nova/compute/manager.py:1825
Oct 09 16:29:26 compute-0 nova_compute[117331]: 2025-10-09 16:29:26.855 2 DEBUG nova.compute.manager [req-2c877027-e7f8-4adb-9a17-59d26b333689 req-ed96917a-287c-4c29-9ac4-ecfe183fed6d ee15961ad2be4bada6c106ac4f69f422 c076c42271004e7aa86e84416e33f826 - - default default] [instance: 144cc4ab-6004-4896-8899-8860557341b0] Received event network-vif-plugged-1e79672b-1776-4292-b861-3666e6b4dc69 external_instance_event /usr/lib/python3.12/site-packages/nova/compute/manager.py:11812
Oct 09 16:29:26 compute-0 nova_compute[117331]: 2025-10-09 16:29:26.855 2 DEBUG oslo_concurrency.lockutils [req-2c877027-e7f8-4adb-9a17-59d26b333689 req-ed96917a-287c-4c29-9ac4-ecfe183fed6d ee15961ad2be4bada6c106ac4f69f422 c076c42271004e7aa86e84416e33f826 - - default default] Acquiring lock "144cc4ab-6004-4896-8899-8860557341b0-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:405
Oct 09 16:29:26 compute-0 nova_compute[117331]: 2025-10-09 16:29:26.855 2 DEBUG oslo_concurrency.lockutils [req-2c877027-e7f8-4adb-9a17-59d26b333689 req-ed96917a-287c-4c29-9ac4-ecfe183fed6d ee15961ad2be4bada6c106ac4f69f422 c076c42271004e7aa86e84416e33f826 - - default default] Lock "144cc4ab-6004-4896-8899-8860557341b0-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:410
Oct 09 16:29:26 compute-0 nova_compute[117331]: 2025-10-09 16:29:26.856 2 DEBUG oslo_concurrency.lockutils [req-2c877027-e7f8-4adb-9a17-59d26b333689 req-ed96917a-287c-4c29-9ac4-ecfe183fed6d ee15961ad2be4bada6c106ac4f69f422 c076c42271004e7aa86e84416e33f826 - - default default] Lock "144cc4ab-6004-4896-8899-8860557341b0-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:424
Oct 09 16:29:26 compute-0 nova_compute[117331]: 2025-10-09 16:29:26.856 2 DEBUG nova.compute.manager [req-2c877027-e7f8-4adb-9a17-59d26b333689 req-ed96917a-287c-4c29-9ac4-ecfe183fed6d ee15961ad2be4bada6c106ac4f69f422 c076c42271004e7aa86e84416e33f826 - - default default] [instance: 144cc4ab-6004-4896-8899-8860557341b0] No waiting events found dispatching network-vif-plugged-1e79672b-1776-4292-b861-3666e6b4dc69 pop_instance_event /usr/lib/python3.12/site-packages/nova/compute/manager.py:344
Oct 09 16:29:26 compute-0 nova_compute[117331]: 2025-10-09 16:29:26.856 2 WARNING nova.compute.manager [req-2c877027-e7f8-4adb-9a17-59d26b333689 req-ed96917a-287c-4c29-9ac4-ecfe183fed6d ee15961ad2be4bada6c106ac4f69f422 c076c42271004e7aa86e84416e33f826 - - default default] [instance: 144cc4ab-6004-4896-8899-8860557341b0] Received unexpected event network-vif-plugged-1e79672b-1776-4292-b861-3666e6b4dc69 for instance with vm_state active and task_state None.
Oct 09 16:29:26 compute-0 nova_compute[117331]: 2025-10-09 16:29:26.872 2 DEBUG oslo_concurrency.processutils [None req-92a13bed-c081-4c7b-87f3-bafebefb79f2 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/144cc4ab-6004-4896-8899-8860557341b0/disk --force-share --output=json execute /usr/lib/python3.12/site-packages/oslo_concurrency/processutils.py:349
Oct 09 16:29:26 compute-0 nova_compute[117331]: 2025-10-09 16:29:26.921 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 09 16:29:26 compute-0 nova_compute[117331]: 2025-10-09 16:29:26.947 2 DEBUG oslo_concurrency.processutils [None req-92a13bed-c081-4c7b-87f3-bafebefb79f2 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/144cc4ab-6004-4896-8899-8860557341b0/disk --force-share --output=json" returned: 0 in 0.076s execute /usr/lib/python3.12/site-packages/oslo_concurrency/processutils.py:372
Oct 09 16:29:26 compute-0 nova_compute[117331]: 2025-10-09 16:29:26.948 2 DEBUG oslo_concurrency.processutils [None req-92a13bed-c081-4c7b-87f3-bafebefb79f2 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/144cc4ab-6004-4896-8899-8860557341b0/disk --force-share --output=json execute /usr/lib/python3.12/site-packages/oslo_concurrency/processutils.py:349
Oct 09 16:29:27 compute-0 nova_compute[117331]: 2025-10-09 16:29:27.014 2 DEBUG oslo_concurrency.processutils [None req-92a13bed-c081-4c7b-87f3-bafebefb79f2 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/144cc4ab-6004-4896-8899-8860557341b0/disk --force-share --output=json" returned: 0 in 0.066s execute /usr/lib/python3.12/site-packages/oslo_concurrency/processutils.py:372
Oct 09 16:29:27 compute-0 nova_compute[117331]: 2025-10-09 16:29:27.139 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 09 16:29:27 compute-0 nova_compute[117331]: 2025-10-09 16:29:27.173 2 WARNING nova.virt.libvirt.driver [None req-92a13bed-c081-4c7b-87f3-bafebefb79f2 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Oct 09 16:29:27 compute-0 nova_compute[117331]: 2025-10-09 16:29:27.174 2 DEBUG oslo_concurrency.processutils [None req-92a13bed-c081-4c7b-87f3-bafebefb79f2 - - - - - -] Running cmd (subprocess): env LANG=C uptime execute /usr/lib/python3.12/site-packages/oslo_concurrency/processutils.py:349
Oct 09 16:29:27 compute-0 nova_compute[117331]: 2025-10-09 16:29:27.195 2 DEBUG oslo_concurrency.processutils [None req-92a13bed-c081-4c7b-87f3-bafebefb79f2 - - - - - -] CMD "env LANG=C uptime" returned: 0 in 0.021s execute /usr/lib/python3.12/site-packages/oslo_concurrency/processutils.py:372
Oct 09 16:29:27 compute-0 nova_compute[117331]: 2025-10-09 16:29:27.195 2 DEBUG nova.compute.resource_tracker [None req-92a13bed-c081-4c7b-87f3-bafebefb79f2 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=6068MB free_disk=73.2566146850586GB free_vcpus=7 pci_devices=[{"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.12/site-packages/nova/compute/resource_tracker.py:1136
Oct 09 16:29:27 compute-0 nova_compute[117331]: 2025-10-09 16:29:27.196 2 DEBUG oslo_concurrency.lockutils [None req-92a13bed-c081-4c7b-87f3-bafebefb79f2 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:405
Oct 09 16:29:27 compute-0 nova_compute[117331]: 2025-10-09 16:29:27.196 2 DEBUG oslo_concurrency.lockutils [None req-92a13bed-c081-4c7b-87f3-bafebefb79f2 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:410
Oct 09 16:29:27 compute-0 nova_compute[117331]: 2025-10-09 16:29:27.197 2 INFO nova.compute.manager [None req-65339b89-d5c9-40a2-9f50-b040cf00cb5f ea71b67fc37c45859db08bb5231d7a56 c9d397897ee84f2e91c7e33f6c4052a3 - - default default] [instance: 144cc4ab-6004-4896-8899-8860557341b0] Took 15.03 seconds to build instance.
Oct 09 16:29:27 compute-0 nova_compute[117331]: 2025-10-09 16:29:27.703 2 DEBUG oslo_concurrency.lockutils [None req-65339b89-d5c9-40a2-9f50-b040cf00cb5f ea71b67fc37c45859db08bb5231d7a56 c9d397897ee84f2e91c7e33f6c4052a3 - - default default] Lock "144cc4ab-6004-4896-8899-8860557341b0" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 16.554s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:424
Oct 09 16:29:27 compute-0 sshd-session[148919]: Failed password for invalid user www from 36.224.53.32 port 54964 ssh2
Oct 09 16:29:28 compute-0 sshd-session[148919]: Connection closed by invalid user www 36.224.53.32 port 54964 [preauth]
Oct 09 16:29:28 compute-0 nova_compute[117331]: 2025-10-09 16:29:28.255 2 DEBUG nova.compute.resource_tracker [None req-92a13bed-c081-4c7b-87f3-bafebefb79f2 - - - - - -] Instance 144cc4ab-6004-4896-8899-8860557341b0 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.12/site-packages/nova/compute/resource_tracker.py:1740
Oct 09 16:29:28 compute-0 nova_compute[117331]: 2025-10-09 16:29:28.256 2 DEBUG nova.compute.resource_tracker [None req-92a13bed-c081-4c7b-87f3-bafebefb79f2 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 1 _report_final_resource_view /usr/lib/python3.12/site-packages/nova/compute/resource_tracker.py:1159
Oct 09 16:29:28 compute-0 nova_compute[117331]: 2025-10-09 16:29:28.256 2 DEBUG nova.compute.resource_tracker [None req-92a13bed-c081-4c7b-87f3-bafebefb79f2 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=640MB phys_disk=79GB used_disk=1GB total_vcpus=8 used_vcpus=1 pci_stats=[] stats={'failed_builds': '0', 'uptime': ' 16:29:27 up 38 min,  0 user,  load average: 0.42, 0.52, 0.42\n', 'num_instances': '1', 'num_vm_active': '1', 'num_task_None': '1', 'num_os_type_None': '1', 'num_proj_c9d397897ee84f2e91c7e33f6c4052a3': '1', 'io_workload': '0'} _report_final_resource_view /usr/lib/python3.12/site-packages/nova/compute/resource_tracker.py:1168
Oct 09 16:29:28 compute-0 nova_compute[117331]: 2025-10-09 16:29:28.347 2 DEBUG nova.compute.provider_tree [None req-92a13bed-c081-4c7b-87f3-bafebefb79f2 - - - - - -] Inventory has not changed in ProviderTree for provider: 593051b8-2000-437f-a915-2616fc8b1671 update_inventory /usr/lib/python3.12/site-packages/nova/compute/provider_tree.py:180
Oct 09 16:29:28 compute-0 nova_compute[117331]: 2025-10-09 16:29:28.861 2 DEBUG nova.scheduler.client.report [None req-92a13bed-c081-4c7b-87f3-bafebefb79f2 - - - - - -] Inventory has not changed for provider 593051b8-2000-437f-a915-2616fc8b1671 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 79, 'reserved': 1, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.12/site-packages/nova/scheduler/client/report.py:958
Oct 09 16:29:29 compute-0 nova_compute[117331]: 2025-10-09 16:29:29.372 2 DEBUG nova.compute.resource_tracker [None req-92a13bed-c081-4c7b-87f3-bafebefb79f2 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.12/site-packages/nova/compute/resource_tracker.py:1097
Oct 09 16:29:29 compute-0 nova_compute[117331]: 2025-10-09 16:29:29.373 2 DEBUG oslo_concurrency.lockutils [None req-92a13bed-c081-4c7b-87f3-bafebefb79f2 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 2.177s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:424
Oct 09 16:29:29 compute-0 podman[127775]: time="2025-10-09T16:29:29Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Oct 09 16:29:29 compute-0 podman[127775]: @ - - [09/Oct/2025:16:29:29 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 20749 "" "Go-http-client/1.1"
Oct 09 16:29:29 compute-0 podman[127775]: @ - - [09/Oct/2025:16:29:29 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 3483 "" "Go-http-client/1.1"
Oct 09 16:29:31 compute-0 nova_compute[117331]: 2025-10-09 16:29:31.378 2 DEBUG oslo_service.periodic_task [None req-92a13bed-c081-4c7b-87f3-bafebefb79f2 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.12/site-packages/oslo_service/periodic_task.py:210
Oct 09 16:29:31 compute-0 nova_compute[117331]: 2025-10-09 16:29:31.379 2 DEBUG oslo_service.periodic_task [None req-92a13bed-c081-4c7b-87f3-bafebefb79f2 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.12/site-packages/oslo_service/periodic_task.py:210
Oct 09 16:29:31 compute-0 nova_compute[117331]: 2025-10-09 16:29:31.379 2 DEBUG oslo_service.periodic_task [None req-92a13bed-c081-4c7b-87f3-bafebefb79f2 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.12/site-packages/oslo_service/periodic_task.py:210
Oct 09 16:29:31 compute-0 nova_compute[117331]: 2025-10-09 16:29:31.379 2 DEBUG oslo_service.periodic_task [None req-92a13bed-c081-4c7b-87f3-bafebefb79f2 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.12/site-packages/oslo_service/periodic_task.py:210
Oct 09 16:29:31 compute-0 openstack_network_exporter[129925]: ERROR   16:29:31 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Oct 09 16:29:31 compute-0 openstack_network_exporter[129925]: ERROR   16:29:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Oct 09 16:29:31 compute-0 openstack_network_exporter[129925]: ERROR   16:29:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Oct 09 16:29:31 compute-0 openstack_network_exporter[129925]: ERROR   16:29:31 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Oct 09 16:29:31 compute-0 openstack_network_exporter[129925]: 
Oct 09 16:29:31 compute-0 openstack_network_exporter[129925]: ERROR   16:29:31 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Oct 09 16:29:31 compute-0 openstack_network_exporter[129925]: 
Oct 09 16:29:31 compute-0 nova_compute[117331]: 2025-10-09 16:29:31.924 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 09 16:29:31 compute-0 sshd-session[149090]: Invalid user hadoop from 36.224.53.32 port 34482
Oct 09 16:29:32 compute-0 podman[149098]: 2025-10-09 16:29:32.025580531 +0000 UTC m=+0.066384737 container health_status 64fa251112d01abb34ec1bb8e22019815ddcaaaa391ce5ec91517147ac5a9994 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., distribution-scope=public, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., architecture=x86_64, build-date=2025-08-20T13:12:41, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, name=ubi9-minimal, com.redhat.component=ubi9-minimal-container, config_id=edpm, container_name=openstack_network_exporter, maintainer=Red Hat, Inc., summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., vendor=Red Hat, Inc., io.buildah.version=1.33.7, io.openshift.expose-services=, managed_by=edpm_ansible, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, release=1755695350, version=9.6, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.tags=minimal rhel9, url=https://catalog.redhat.com/en/search?searchType=containers, vcs-type=git)
Oct 09 16:29:32 compute-0 podman[149099]: 2025-10-09 16:29:32.09457311 +0000 UTC m=+0.122789088 container health_status d615e4f24ff62fbb14be4a63e6da568ec5b22309773c1c66103b6b4de64a8a2f (image=38.102.83.66:5001/podified-master-centos10/openstack-ovn-controller:watcher_latest, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, tcib_build_tag=watcher_latest, tcib_managed=true, container_name=ovn_controller, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': '38.102.83.66:5001/podified-master-centos10/openstack-ovn-controller:watcher_latest', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, io.buildah.version=1.41.4, managed_by=edpm_ansible, org.label-schema.build-date=20251007, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 10 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0)
Oct 09 16:29:32 compute-0 nova_compute[117331]: 2025-10-09 16:29:32.141 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 09 16:29:32 compute-0 sshd-session[149090]: pam_unix(sshd:auth): check pass; user unknown
Oct 09 16:29:32 compute-0 sshd-session[149090]: pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=36.224.53.32
Oct 09 16:29:35 compute-0 sshd-session[149090]: Failed password for invalid user hadoop from 36.224.53.32 port 34482 ssh2
Oct 09 16:29:35 compute-0 ovn_metadata_agent[28608]: 2025-10-09 16:29:35.321 28613 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:405
Oct 09 16:29:35 compute-0 ovn_metadata_agent[28608]: 2025-10-09 16:29:35.322 28613 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.000s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:410
Oct 09 16:29:35 compute-0 ovn_metadata_agent[28608]: 2025-10-09 16:29:35.322 28613 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:424
Oct 09 16:29:36 compute-0 sshd-session[149090]: Connection closed by invalid user hadoop 36.224.53.32 port 34482 [preauth]
Oct 09 16:29:36 compute-0 sshd-session[149145]: Invalid user apache2 from 36.224.53.32 port 42014
Oct 09 16:29:36 compute-0 nova_compute[117331]: 2025-10-09 16:29:36.926 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 09 16:29:37 compute-0 nova_compute[117331]: 2025-10-09 16:29:37.170 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 09 16:29:37 compute-0 sshd-session[149145]: pam_unix(sshd:auth): check pass; user unknown
Oct 09 16:29:37 compute-0 sshd-session[149145]: pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=36.224.53.32
Oct 09 16:29:37 compute-0 ovn_controller[19752]: 2025-10-09T16:29:37Z|00023|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:de:0a:d8 10.100.0.8
Oct 09 16:29:37 compute-0 ovn_controller[19752]: 2025-10-09T16:29:37Z|00024|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:de:0a:d8 10.100.0.8
Oct 09 16:29:39 compute-0 sshd-session[149145]: Failed password for invalid user apache2 from 36.224.53.32 port 42014 ssh2
Oct 09 16:29:40 compute-0 sshd-session[149145]: Connection closed by invalid user apache2 36.224.53.32 port 42014 [preauth]
Oct 09 16:29:41 compute-0 nova_compute[117331]: 2025-10-09 16:29:41.930 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 09 16:29:42 compute-0 nova_compute[117331]: 2025-10-09 16:29:42.171 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 09 16:29:43 compute-0 sshd-session[149165]: Invalid user advantech from 36.224.53.32 port 49238
Oct 09 16:29:44 compute-0 sshd-session[149165]: pam_unix(sshd:auth): check pass; user unknown
Oct 09 16:29:44 compute-0 sshd-session[149165]: pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=36.224.53.32
Oct 09 16:29:46 compute-0 podman[149167]: 2025-10-09 16:29:46.855988583 +0000 UTC m=+0.082855217 container health_status 270830b5167cafbab735d8118980f26987e673cd0fa464dd2202394830bcca66 (image=38.102.83.66:5001/podified-master-centos10/openstack-multipathd:watcher_latest, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251007, org.label-schema.schema-version=1.0, config_id=multipathd, org.label-schema.name=CentOS Stream 10 Base Image, tcib_build_tag=watcher_latest, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': '38.102.83.66:5001/podified-master-centos10/openstack-multipathd:watcher_latest', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, io.buildah.version=1.41.4, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true)
Oct 09 16:29:46 compute-0 sshd-session[149165]: Failed password for invalid user advantech from 36.224.53.32 port 49238 ssh2
Oct 09 16:29:46 compute-0 nova_compute[117331]: 2025-10-09 16:29:46.931 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 09 16:29:47 compute-0 nova_compute[117331]: 2025-10-09 16:29:47.173 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 09 16:29:48 compute-0 sshd-session[149165]: Connection closed by invalid user advantech 36.224.53.32 port 49238 [preauth]
Oct 09 16:29:48 compute-0 podman[149189]: 2025-10-09 16:29:48.850432906 +0000 UTC m=+0.078387826 container health_status 86aa93e887899e54d383b0fc17fb3c5637680d1d8e146a130a557c837f0260fe (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm)
Oct 09 16:29:51 compute-0 nova_compute[117331]: 2025-10-09 16:29:51.936 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 09 16:29:52 compute-0 nova_compute[117331]: 2025-10-09 16:29:52.174 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 09 16:29:54 compute-0 podman[149213]: 2025-10-09 16:29:54.843952613 +0000 UTC m=+0.065778689 container health_status 0e966c7a283689daf23c5db5017f23f685ee836319240188a9f70ddd246563c5 (image=38.102.83.66:5001/podified-master-centos10/openstack-neutron-metadata-agent-ovn:watcher_latest, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=watcher_latest, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': '38.102.83.66:5001/podified-master-centos10/openstack-neutron-metadata-agent-ovn:watcher_latest', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251007, org.label-schema.license=GPLv2, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, io.buildah.version=1.41.4)
Oct 09 16:29:54 compute-0 podman[149214]: 2025-10-09 16:29:54.854660271 +0000 UTC m=+0.077479077 container health_status 8227728d23f374cb9528be6ddf01dff38d8c792912a17b0d137932a18390b820 (image=38.102.83.66:5001/podified-master-centos10/openstack-iscsid:watcher_latest, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=watcher_latest, tcib_managed=true, container_name=iscsid, io.buildah.version=1.41.4, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 10 Base Image, config_id=iscsid, org.label-schema.build-date=20251007, org.label-schema.schema-version=1.0, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': '38.102.83.66:5001/podified-master-centos10/openstack-iscsid:watcher_latest', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Oct 09 16:29:55 compute-0 sshd-session[149187]: Connection closed by 36.224.53.32 port 56486 [preauth]
Oct 09 16:29:56 compute-0 nova_compute[117331]: 2025-10-09 16:29:56.938 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 09 16:29:57 compute-0 nova_compute[117331]: 2025-10-09 16:29:57.176 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 09 16:29:58 compute-0 sshd-session[149250]: Invalid user logstash from 36.224.53.32 port 36730
Oct 09 16:29:59 compute-0 sshd-session[149250]: pam_unix(sshd:auth): check pass; user unknown
Oct 09 16:29:59 compute-0 sshd-session[149250]: pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=36.224.53.32
Oct 09 16:29:59 compute-0 podman[127775]: time="2025-10-09T16:29:59Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Oct 09 16:29:59 compute-0 podman[127775]: @ - - [09/Oct/2025:16:29:59 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 20749 "" "Go-http-client/1.1"
Oct 09 16:29:59 compute-0 podman[127775]: @ - - [09/Oct/2025:16:29:59 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 3491 "" "Go-http-client/1.1"
Oct 09 16:30:01 compute-0 openstack_network_exporter[129925]: ERROR   16:30:01 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Oct 09 16:30:01 compute-0 openstack_network_exporter[129925]: ERROR   16:30:01 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Oct 09 16:30:01 compute-0 openstack_network_exporter[129925]: 
Oct 09 16:30:01 compute-0 openstack_network_exporter[129925]: ERROR   16:30:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Oct 09 16:30:01 compute-0 openstack_network_exporter[129925]: ERROR   16:30:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Oct 09 16:30:01 compute-0 openstack_network_exporter[129925]: ERROR   16:30:01 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Oct 09 16:30:01 compute-0 openstack_network_exporter[129925]: 
Oct 09 16:30:01 compute-0 systemd[1]: Starting system activity accounting tool...
Oct 09 16:30:01 compute-0 systemd[1]: sysstat-collect.service: Deactivated successfully.
Oct 09 16:30:01 compute-0 systemd[1]: Finished system activity accounting tool.
Oct 09 16:30:01 compute-0 sshd-session[149250]: Failed password for invalid user logstash from 36.224.53.32 port 36730 ssh2
Oct 09 16:30:01 compute-0 nova_compute[117331]: 2025-10-09 16:30:01.940 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 09 16:30:02 compute-0 nova_compute[117331]: 2025-10-09 16:30:02.178 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 09 16:30:02 compute-0 sshd-session[149250]: Connection closed by invalid user logstash 36.224.53.32 port 36730 [preauth]
Oct 09 16:30:02 compute-0 podman[149255]: 2025-10-09 16:30:02.854777234 +0000 UTC m=+0.084754726 container health_status 64fa251112d01abb34ec1bb8e22019815ddcaaaa391ce5ec91517147ac5a9994 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.33.7, io.openshift.tags=minimal rhel9, name=ubi9-minimal, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, maintainer=Red Hat, Inc., url=https://catalog.redhat.com/en/search?searchType=containers, architecture=x86_64, build-date=2025-08-20T13:12:41, config_id=edpm, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, distribution-scope=public, com.redhat.component=ubi9-minimal-container, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.expose-services=, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, vcs-type=git, container_name=openstack_network_exporter, managed_by=edpm_ansible, vendor=Red Hat, Inc., version=9.6, release=1755695350)
Oct 09 16:30:02 compute-0 podman[149256]: 2025-10-09 16:30:02.893051053 +0000 UTC m=+0.112826304 container health_status d615e4f24ff62fbb14be4a63e6da568ec5b22309773c1c66103b6b4de64a8a2f (image=38.102.83.66:5001/podified-master-centos10/openstack-ovn-controller:watcher_latest, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_controller, container_name=ovn_controller, org.label-schema.name=CentOS Stream 10 Base Image, tcib_build_tag=watcher_latest, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': '38.102.83.66:5001/podified-master-centos10/openstack-ovn-controller:watcher_latest', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.4, org.label-schema.build-date=20251007, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Oct 09 16:30:04 compute-0 systemd[1]: virtproxyd.service: Deactivated successfully.
Oct 09 16:30:06 compute-0 unix_chkpwd[149305]: password check failed for user (games)
Oct 09 16:30:06 compute-0 sshd-session[149254]: pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=36.224.53.32  user=games
Oct 09 16:30:06 compute-0 nova_compute[117331]: 2025-10-09 16:30:06.985 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 09 16:30:07 compute-0 nova_compute[117331]: 2025-10-09 16:30:07.180 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 09 16:30:08 compute-0 sshd-session[149254]: Failed password for games from 36.224.53.32 port 43952 ssh2
Oct 09 16:30:10 compute-0 sshd-session[149254]: Connection closed by authenticating user games 36.224.53.32 port 43952 [preauth]
Oct 09 16:30:11 compute-0 nova_compute[117331]: 2025-10-09 16:30:11.987 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 09 16:30:12 compute-0 nova_compute[117331]: 2025-10-09 16:30:12.181 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 09 16:30:15 compute-0 ovn_controller[19752]: 2025-10-09T16:30:15Z|00189|memory_trim|INFO|Detected inactivity (last active 30013 ms ago): trimming memory
Oct 09 16:30:16 compute-0 nova_compute[117331]: 2025-10-09 16:30:16.989 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 09 16:30:17 compute-0 nova_compute[117331]: 2025-10-09 16:30:17.183 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 09 16:30:17 compute-0 podman[149308]: 2025-10-09 16:30:17.82748781 +0000 UTC m=+0.061710990 container health_status 270830b5167cafbab735d8118980f26987e673cd0fa464dd2202394830bcca66 (image=38.102.83.66:5001/podified-master-centos10/openstack-multipathd:watcher_latest, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, container_name=multipathd, io.buildah.version=1.41.4, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.schema-version=1.0, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': '38.102.83.66:5001/podified-master-centos10/openstack-multipathd:watcher_latest', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251007, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=watcher_latest, tcib_managed=true)
Oct 09 16:30:18 compute-0 sshd-session[149306]: Connection closed by authenticating user cloud-user 36.224.53.32 port 51200 [preauth]
Oct 09 16:30:19 compute-0 nova_compute[117331]: 2025-10-09 16:30:19.656 2 DEBUG nova.virt.libvirt.driver [None req-ef73c0c9-7283-4984-a115-f46b6c00b6e3 005258eefa5345409efe13f6a1de22ab c076c42271004e7aa86e84416e33f826 - - default default] [instance: 61ff0a09-3e7d-4597-9e71-af032c6a774f] Creating tmpfile /var/lib/nova/instances/tmpzzvfvq4l to notify to other compute nodes that they should mount the same storage. _create_shared_storage_test_file /usr/lib/python3.12/site-packages/nova/virt/libvirt/driver.py:10944
Oct 09 16:30:19 compute-0 nova_compute[117331]: 2025-10-09 16:30:19.657 2 WARNING neutronclient.v2_0.client [None req-ef73c0c9-7283-4984-a115-f46b6c00b6e3 005258eefa5345409efe13f6a1de22ab c076c42271004e7aa86e84416e33f826 - - default default] The python binding code in neutronclient is deprecated in favor of OpenstackSDK, please use that as this will be removed in a future release.
Oct 09 16:30:19 compute-0 nova_compute[117331]: 2025-10-09 16:30:19.741 2 DEBUG nova.compute.manager [None req-ef73c0c9-7283-4984-a115-f46b6c00b6e3 005258eefa5345409efe13f6a1de22ab c076c42271004e7aa86e84416e33f826 - - default default] destination check data is LibvirtLiveMigrateData(bdms=<?>,block_migration=<?>,disk_available_mb=73728,disk_over_commit=<?>,dst_cpu_shared_set_info=set([]),dst_numa_info=<?>,dst_supports_mdev_live_migration=True,dst_supports_numa_live_migration=<?>,dst_wants_file_backed_memory=False,file_backed_memory_discard=<?>,filename='tmpzzvfvq4l',graphics_listen_addr_spice=127.0.0.1,graphics_listen_addr_vnc=::,image_type='qcow2',instance_relative_path=<?>,is_shared_block_storage=<?>,is_shared_instance_path=<?>,is_volume_backed=<?>,migration=<?>,old_vol_attachment_ids=<?>,pci_dev_map_src_dst=<?>,serial_listen_addr=None,serial_listen_ports=<?>,source_mdev_types=<?>,src_supports_native_luks=<?>,src_supports_numa_live_migration=<?>,supported_perf_events=<?>,target_connect_addr=<?>,target_mdevs=<?>,vifs=[VIFMigrateData],wait_for_vif_plugged=<?>) check_can_live_migrate_destination /usr/lib/python3.12/site-packages/nova/compute/manager.py:9086
Oct 09 16:30:19 compute-0 podman[149331]: 2025-10-09 16:30:19.853487328 +0000 UTC m=+0.083284911 container health_status 86aa93e887899e54d383b0fc17fb3c5637680d1d8e146a130a557c837f0260fe (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']})
Oct 09 16:30:21 compute-0 sshd-session[149329]: Invalid user mcserver from 36.224.53.32 port 33482
Oct 09 16:30:21 compute-0 nova_compute[117331]: 2025-10-09 16:30:21.773 2 WARNING neutronclient.v2_0.client [None req-ef73c0c9-7283-4984-a115-f46b6c00b6e3 005258eefa5345409efe13f6a1de22ab c076c42271004e7aa86e84416e33f826 - - default default] The python binding code in neutronclient is deprecated in favor of OpenstackSDK, please use that as this will be removed in a future release.
Oct 09 16:30:21 compute-0 nova_compute[117331]: 2025-10-09 16:30:21.993 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 09 16:30:22 compute-0 nova_compute[117331]: 2025-10-09 16:30:22.233 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 09 16:30:22 compute-0 nova_compute[117331]: 2025-10-09 16:30:22.305 2 DEBUG oslo_service.periodic_task [None req-92a13bed-c081-4c7b-87f3-bafebefb79f2 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.12/site-packages/oslo_service/periodic_task.py:210
Oct 09 16:30:22 compute-0 sshd-session[149329]: pam_unix(sshd:auth): check pass; user unknown
Oct 09 16:30:22 compute-0 sshd-session[149329]: pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=36.224.53.32
Oct 09 16:30:24 compute-0 sshd-session[149329]: Failed password for invalid user mcserver from 36.224.53.32 port 33482 ssh2
Oct 09 16:30:25 compute-0 nova_compute[117331]: 2025-10-09 16:30:25.306 2 DEBUG oslo_service.periodic_task [None req-92a13bed-c081-4c7b-87f3-bafebefb79f2 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.12/site-packages/oslo_service/periodic_task.py:210
Oct 09 16:30:25 compute-0 nova_compute[117331]: 2025-10-09 16:30:25.307 2 DEBUG oslo_service.periodic_task [None req-92a13bed-c081-4c7b-87f3-bafebefb79f2 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.12/site-packages/oslo_service/periodic_task.py:210
Oct 09 16:30:25 compute-0 nova_compute[117331]: 2025-10-09 16:30:25.307 2 DEBUG nova.compute.manager [None req-92a13bed-c081-4c7b-87f3-bafebefb79f2 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.12/site-packages/nova/compute/manager.py:11228
Oct 09 16:30:25 compute-0 sshd-session[149329]: Connection closed by invalid user mcserver 36.224.53.32 port 33482 [preauth]
Oct 09 16:30:25 compute-0 podman[149357]: 2025-10-09 16:30:25.857141816 +0000 UTC m=+0.068537625 container health_status 0e966c7a283689daf23c5db5017f23f685ee836319240188a9f70ddd246563c5 (image=38.102.83.66:5001/podified-master-centos10/openstack-neutron-metadata-agent-ovn:watcher_latest, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=watcher_latest, tcib_managed=true, io.buildah.version=1.41.4, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 10 Base Image, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': '38.102.83.66:5001/podified-master-centos10/openstack-neutron-metadata-agent-ovn:watcher_latest', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, org.label-schema.build-date=20251007, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Oct 09 16:30:25 compute-0 podman[149358]: 2025-10-09 16:30:25.867250395 +0000 UTC m=+0.076039631 container health_status 8227728d23f374cb9528be6ddf01dff38d8c792912a17b0d137932a18390b820 (image=38.102.83.66:5001/podified-master-centos10/openstack-iscsid:watcher_latest, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, tcib_build_tag=watcher_latest, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': '38.102.83.66:5001/podified-master-centos10/openstack-iscsid:watcher_latest', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, container_name=iscsid, org.label-schema.build-date=20251007, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, tcib_managed=true, config_id=iscsid, io.buildah.version=1.41.4, org.label-schema.name=CentOS Stream 10 Base Image, maintainer=OpenStack Kubernetes Operator team)
Oct 09 16:30:26 compute-0 nova_compute[117331]: 2025-10-09 16:30:26.048 2 DEBUG nova.compute.manager [None req-ef73c0c9-7283-4984-a115-f46b6c00b6e3 005258eefa5345409efe13f6a1de22ab c076c42271004e7aa86e84416e33f826 - - default default] pre_live_migration data is LibvirtLiveMigrateData(bdms=<?>,block_migration=True,disk_available_mb=73728,disk_over_commit=<?>,dst_cpu_shared_set_info=set([]),dst_numa_info=<?>,dst_supports_mdev_live_migration=True,dst_supports_numa_live_migration=<?>,dst_wants_file_backed_memory=False,file_backed_memory_discard=<?>,filename='tmpzzvfvq4l',graphics_listen_addr_spice=127.0.0.1,graphics_listen_addr_vnc=::,image_type='qcow2',instance_relative_path='61ff0a09-3e7d-4597-9e71-af032c6a774f',is_shared_block_storage=False,is_shared_instance_path=False,is_volume_backed=False,migration=<?>,old_vol_attachment_ids=<?>,pci_dev_map_src_dst={},serial_listen_addr=None,serial_listen_ports=<?>,source_mdev_types=<?>,src_supports_native_luks=<?>,src_supports_numa_live_migration=<?>,supported_perf_events=<?>,target_connect_addr=<?>,target_mdevs=<?>,vifs=[VIFMigrateData],wait_for_vif_plugged=<?>) pre_live_migration /usr/lib/python3.12/site-packages/nova/compute/manager.py:9311
Oct 09 16:30:27 compute-0 nova_compute[117331]: 2025-10-09 16:30:27.032 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 09 16:30:27 compute-0 nova_compute[117331]: 2025-10-09 16:30:27.060 2 DEBUG oslo_concurrency.lockutils [None req-ef73c0c9-7283-4984-a115-f46b6c00b6e3 005258eefa5345409efe13f6a1de22ab c076c42271004e7aa86e84416e33f826 - - default default] Acquiring lock "refresh_cache-61ff0a09-3e7d-4597-9e71-af032c6a774f" lock /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:313
Oct 09 16:30:27 compute-0 nova_compute[117331]: 2025-10-09 16:30:27.061 2 DEBUG oslo_concurrency.lockutils [None req-ef73c0c9-7283-4984-a115-f46b6c00b6e3 005258eefa5345409efe13f6a1de22ab c076c42271004e7aa86e84416e33f826 - - default default] Acquired lock "refresh_cache-61ff0a09-3e7d-4597-9e71-af032c6a774f" lock /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:316
Oct 09 16:30:27 compute-0 nova_compute[117331]: 2025-10-09 16:30:27.061 2 DEBUG nova.network.neutron [None req-ef73c0c9-7283-4984-a115-f46b6c00b6e3 005258eefa5345409efe13f6a1de22ab c076c42271004e7aa86e84416e33f826 - - default default] [instance: 61ff0a09-3e7d-4597-9e71-af032c6a774f] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.12/site-packages/nova/network/neutron.py:2070
Oct 09 16:30:27 compute-0 nova_compute[117331]: 2025-10-09 16:30:27.234 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 09 16:30:27 compute-0 nova_compute[117331]: 2025-10-09 16:30:27.305 2 DEBUG oslo_service.periodic_task [None req-92a13bed-c081-4c7b-87f3-bafebefb79f2 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.12/site-packages/oslo_service/periodic_task.py:210
Oct 09 16:30:27 compute-0 nova_compute[117331]: 2025-10-09 16:30:27.584 2 WARNING neutronclient.v2_0.client [None req-ef73c0c9-7283-4984-a115-f46b6c00b6e3 005258eefa5345409efe13f6a1de22ab c076c42271004e7aa86e84416e33f826 - - default default] The python binding code in neutronclient is deprecated in favor of OpenstackSDK, please use that as this will be removed in a future release.
Oct 09 16:30:27 compute-0 nova_compute[117331]: 2025-10-09 16:30:27.819 2 DEBUG oslo_concurrency.lockutils [None req-92a13bed-c081-4c7b-87f3-bafebefb79f2 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:405
Oct 09 16:30:27 compute-0 nova_compute[117331]: 2025-10-09 16:30:27.820 2 DEBUG oslo_concurrency.lockutils [None req-92a13bed-c081-4c7b-87f3-bafebefb79f2 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:410
Oct 09 16:30:27 compute-0 nova_compute[117331]: 2025-10-09 16:30:27.820 2 DEBUG oslo_concurrency.lockutils [None req-92a13bed-c081-4c7b-87f3-bafebefb79f2 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:424
Oct 09 16:30:27 compute-0 nova_compute[117331]: 2025-10-09 16:30:27.821 2 DEBUG nova.compute.resource_tracker [None req-92a13bed-c081-4c7b-87f3-bafebefb79f2 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.12/site-packages/nova/compute/resource_tracker.py:937
Oct 09 16:30:28 compute-0 nova_compute[117331]: 2025-10-09 16:30:28.016 2 WARNING neutronclient.v2_0.client [None req-ef73c0c9-7283-4984-a115-f46b6c00b6e3 005258eefa5345409efe13f6a1de22ab c076c42271004e7aa86e84416e33f826 - - default default] The python binding code in neutronclient is deprecated in favor of OpenstackSDK, please use that as this will be removed in a future release.
Oct 09 16:30:28 compute-0 nova_compute[117331]: 2025-10-09 16:30:28.170 2 DEBUG nova.network.neutron [None req-ef73c0c9-7283-4984-a115-f46b6c00b6e3 005258eefa5345409efe13f6a1de22ab c076c42271004e7aa86e84416e33f826 - - default default] [instance: 61ff0a09-3e7d-4597-9e71-af032c6a774f] Updating instance_info_cache with network_info: [{"id": "2a45be1f-c21d-4446-9ec2-ecbb726e290e", "address": "fa:16:3e:f7:5a:4f", "network": {"id": "84247d99-b9fb-4f75-af06-dd3e92557a34", "bridge": "br-int", "label": "tempest-TestExecuteVmWorkloadBalanceStrategy-187667545-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "3f838d4a2de74d8fbb8e91e7ef351b24", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2a45be1f-c2", "ovs_interfaceid": "2a45be1f-c21d-4446-9ec2-ecbb726e290e", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.12/site-packages/nova/network/neutron.py:116
Oct 09 16:30:28 compute-0 nova_compute[117331]: 2025-10-09 16:30:28.677 2 DEBUG oslo_concurrency.lockutils [None req-ef73c0c9-7283-4984-a115-f46b6c00b6e3 005258eefa5345409efe13f6a1de22ab c076c42271004e7aa86e84416e33f826 - - default default] Releasing lock "refresh_cache-61ff0a09-3e7d-4597-9e71-af032c6a774f" lock /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:334
Oct 09 16:30:28 compute-0 nova_compute[117331]: 2025-10-09 16:30:28.694 2 DEBUG nova.virt.libvirt.driver [None req-ef73c0c9-7283-4984-a115-f46b6c00b6e3 005258eefa5345409efe13f6a1de22ab c076c42271004e7aa86e84416e33f826 - - default default] [instance: 61ff0a09-3e7d-4597-9e71-af032c6a774f] migrate_data in pre_live_migration: LibvirtLiveMigrateData(bdms=<?>,block_migration=True,disk_available_mb=73728,disk_over_commit=<?>,dst_cpu_shared_set_info=set([]),dst_numa_info=<?>,dst_supports_mdev_live_migration=True,dst_supports_numa_live_migration=<?>,dst_wants_file_backed_memory=False,file_backed_memory_discard=<?>,filename='tmpzzvfvq4l',graphics_listen_addr_spice=127.0.0.1,graphics_listen_addr_vnc=::,image_type='qcow2',instance_relative_path='61ff0a09-3e7d-4597-9e71-af032c6a774f',is_shared_block_storage=False,is_shared_instance_path=False,is_volume_backed=False,migration=<?>,old_vol_attachment_ids={},pci_dev_map_src_dst={},serial_listen_addr=None,serial_listen_ports=<?>,source_mdev_types=<?>,src_supports_native_luks=<?>,src_supports_numa_live_migration=<?>,supported_perf_events=<?>,target_connect_addr=<?>,target_mdevs=<?>,vifs=[VIFMigrateData],wait_for_vif_plugged=<?>) pre_live_migration /usr/lib/python3.12/site-packages/nova/virt/libvirt/driver.py:11737
Oct 09 16:30:28 compute-0 nova_compute[117331]: 2025-10-09 16:30:28.695 2 DEBUG nova.virt.libvirt.driver [None req-ef73c0c9-7283-4984-a115-f46b6c00b6e3 005258eefa5345409efe13f6a1de22ab c076c42271004e7aa86e84416e33f826 - - default default] [instance: 61ff0a09-3e7d-4597-9e71-af032c6a774f] Creating instance directory: /var/lib/nova/instances/61ff0a09-3e7d-4597-9e71-af032c6a774f pre_live_migration /usr/lib/python3.12/site-packages/nova/virt/libvirt/driver.py:11750
Oct 09 16:30:28 compute-0 nova_compute[117331]: 2025-10-09 16:30:28.696 2 DEBUG nova.virt.libvirt.driver [None req-ef73c0c9-7283-4984-a115-f46b6c00b6e3 005258eefa5345409efe13f6a1de22ab c076c42271004e7aa86e84416e33f826 - - default default] [instance: 61ff0a09-3e7d-4597-9e71-af032c6a774f] Creating disk.info with the contents: {'/var/lib/nova/instances/61ff0a09-3e7d-4597-9e71-af032c6a774f/disk': 'qcow2', '/var/lib/nova/instances/61ff0a09-3e7d-4597-9e71-af032c6a774f/disk.config': 'raw'} pre_live_migration /usr/lib/python3.12/site-packages/nova/virt/libvirt/driver.py:11764
Oct 09 16:30:28 compute-0 nova_compute[117331]: 2025-10-09 16:30:28.697 2 DEBUG nova.virt.libvirt.driver [None req-ef73c0c9-7283-4984-a115-f46b6c00b6e3 005258eefa5345409efe13f6a1de22ab c076c42271004e7aa86e84416e33f826 - - default default] [instance: 61ff0a09-3e7d-4597-9e71-af032c6a774f] Checking to make sure images and backing files are present before live migration. pre_live_migration /usr/lib/python3.12/site-packages/nova/virt/libvirt/driver.py:11774
Oct 09 16:30:28 compute-0 nova_compute[117331]: 2025-10-09 16:30:28.698 2 DEBUG nova.objects.instance [None req-ef73c0c9-7283-4984-a115-f46b6c00b6e3 005258eefa5345409efe13f6a1de22ab c076c42271004e7aa86e84416e33f826 - - default default] Lazy-loading 'trusted_certs' on Instance uuid 61ff0a09-3e7d-4597-9e71-af032c6a774f obj_load_attr /usr/lib/python3.12/site-packages/nova/objects/instance.py:1141
Oct 09 16:30:28 compute-0 nova_compute[117331]: 2025-10-09 16:30:28.873 2 DEBUG oslo_concurrency.processutils [None req-92a13bed-c081-4c7b-87f3-bafebefb79f2 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/144cc4ab-6004-4896-8899-8860557341b0/disk --force-share --output=json execute /usr/lib/python3.12/site-packages/oslo_concurrency/processutils.py:349
Oct 09 16:30:28 compute-0 nova_compute[117331]: 2025-10-09 16:30:28.962 2 DEBUG oslo_concurrency.processutils [None req-92a13bed-c081-4c7b-87f3-bafebefb79f2 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/144cc4ab-6004-4896-8899-8860557341b0/disk --force-share --output=json" returned: 0 in 0.089s execute /usr/lib/python3.12/site-packages/oslo_concurrency/processutils.py:372
Oct 09 16:30:28 compute-0 nova_compute[117331]: 2025-10-09 16:30:28.963 2 DEBUG oslo_concurrency.processutils [None req-92a13bed-c081-4c7b-87f3-bafebefb79f2 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/144cc4ab-6004-4896-8899-8860557341b0/disk --force-share --output=json execute /usr/lib/python3.12/site-packages/oslo_concurrency/processutils.py:349
Oct 09 16:30:29 compute-0 nova_compute[117331]: 2025-10-09 16:30:29.051 2 DEBUG oslo_concurrency.processutils [None req-92a13bed-c081-4c7b-87f3-bafebefb79f2 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/144cc4ab-6004-4896-8899-8860557341b0/disk --force-share --output=json" returned: 0 in 0.088s execute /usr/lib/python3.12/site-packages/oslo_concurrency/processutils.py:372
Oct 09 16:30:29 compute-0 nova_compute[117331]: 2025-10-09 16:30:29.206 2 DEBUG oslo_utils.imageutils.format_inspector [None req-ef73c0c9-7283-4984-a115-f46b6c00b6e3 005258eefa5345409efe13f6a1de22ab c076c42271004e7aa86e84416e33f826 - - default default] Format inspector for vmdk does not match, excluding from consideration (Signature KDMV not found: b'\xebH\x90\x00') _process_chunk /usr/lib/python3.12/site-packages/oslo_utils/imageutils/format_inspector.py:1365
Oct 09 16:30:29 compute-0 nova_compute[117331]: 2025-10-09 16:30:29.209 2 DEBUG oslo_utils.imageutils.format_inspector [None req-ef73c0c9-7283-4984-a115-f46b6c00b6e3 005258eefa5345409efe13f6a1de22ab c076c42271004e7aa86e84416e33f826 - - default default] Format inspector for vhdx does not match, excluding from consideration (Region signature not found at 30000) _process_chunk /usr/lib/python3.12/site-packages/oslo_utils/imageutils/format_inspector.py:1365
Oct 09 16:30:29 compute-0 nova_compute[117331]: 2025-10-09 16:30:29.211 2 DEBUG oslo_concurrency.processutils [None req-ef73c0c9-7283-4984-a115-f46b6c00b6e3 005258eefa5345409efe13f6a1de22ab c076c42271004e7aa86e84416e33f826 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/cea3dacfdea0f3734ae526b812744e847bc2d356 --force-share --output=json execute /usr/lib/python3.12/site-packages/oslo_concurrency/processutils.py:349
Oct 09 16:30:29 compute-0 nova_compute[117331]: 2025-10-09 16:30:29.238 2 WARNING nova.virt.libvirt.driver [None req-92a13bed-c081-4c7b-87f3-bafebefb79f2 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Oct 09 16:30:29 compute-0 nova_compute[117331]: 2025-10-09 16:30:29.239 2 DEBUG oslo_concurrency.processutils [None req-92a13bed-c081-4c7b-87f3-bafebefb79f2 - - - - - -] Running cmd (subprocess): env LANG=C uptime execute /usr/lib/python3.12/site-packages/oslo_concurrency/processutils.py:349
Oct 09 16:30:29 compute-0 nova_compute[117331]: 2025-10-09 16:30:29.262 2 DEBUG oslo_concurrency.processutils [None req-92a13bed-c081-4c7b-87f3-bafebefb79f2 - - - - - -] CMD "env LANG=C uptime" returned: 0 in 0.023s execute /usr/lib/python3.12/site-packages/oslo_concurrency/processutils.py:372
Oct 09 16:30:29 compute-0 nova_compute[117331]: 2025-10-09 16:30:29.263 2 DEBUG nova.compute.resource_tracker [None req-92a13bed-c081-4c7b-87f3-bafebefb79f2 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=5966MB free_disk=73.22819137573242GB free_vcpus=7 pci_devices=[{"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.12/site-packages/nova/compute/resource_tracker.py:1136
Oct 09 16:30:29 compute-0 nova_compute[117331]: 2025-10-09 16:30:29.263 2 DEBUG oslo_concurrency.lockutils [None req-92a13bed-c081-4c7b-87f3-bafebefb79f2 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:405
Oct 09 16:30:29 compute-0 nova_compute[117331]: 2025-10-09 16:30:29.264 2 DEBUG oslo_concurrency.lockutils [None req-92a13bed-c081-4c7b-87f3-bafebefb79f2 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:410
Oct 09 16:30:29 compute-0 nova_compute[117331]: 2025-10-09 16:30:29.279 2 DEBUG oslo_concurrency.processutils [None req-ef73c0c9-7283-4984-a115-f46b6c00b6e3 005258eefa5345409efe13f6a1de22ab c076c42271004e7aa86e84416e33f826 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/cea3dacfdea0f3734ae526b812744e847bc2d356 --force-share --output=json" returned: 0 in 0.068s execute /usr/lib/python3.12/site-packages/oslo_concurrency/processutils.py:372
Oct 09 16:30:29 compute-0 nova_compute[117331]: 2025-10-09 16:30:29.279 2 DEBUG oslo_concurrency.lockutils [None req-ef73c0c9-7283-4984-a115-f46b6c00b6e3 005258eefa5345409efe13f6a1de22ab c076c42271004e7aa86e84416e33f826 - - default default] Acquiring lock "cea3dacfdea0f3734ae526b812744e847bc2d356" by "nova.virt.libvirt.imagebackend.Qcow2.create_image.<locals>.create_qcow2_image" inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:405
Oct 09 16:30:29 compute-0 nova_compute[117331]: 2025-10-09 16:30:29.280 2 DEBUG oslo_concurrency.lockutils [None req-ef73c0c9-7283-4984-a115-f46b6c00b6e3 005258eefa5345409efe13f6a1de22ab c076c42271004e7aa86e84416e33f826 - - default default] Lock "cea3dacfdea0f3734ae526b812744e847bc2d356" acquired by "nova.virt.libvirt.imagebackend.Qcow2.create_image.<locals>.create_qcow2_image" :: waited 0.001s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:410
Oct 09 16:30:29 compute-0 nova_compute[117331]: 2025-10-09 16:30:29.280 2 DEBUG oslo_utils.imageutils.format_inspector [None req-ef73c0c9-7283-4984-a115-f46b6c00b6e3 005258eefa5345409efe13f6a1de22ab c076c42271004e7aa86e84416e33f826 - - default default] Format inspector for vmdk does not match, excluding from consideration (Signature KDMV not found: b'\xebH\x90\x00') _process_chunk /usr/lib/python3.12/site-packages/oslo_utils/imageutils/format_inspector.py:1365
Oct 09 16:30:29 compute-0 nova_compute[117331]: 2025-10-09 16:30:29.283 2 DEBUG oslo_utils.imageutils.format_inspector [None req-ef73c0c9-7283-4984-a115-f46b6c00b6e3 005258eefa5345409efe13f6a1de22ab c076c42271004e7aa86e84416e33f826 - - default default] Format inspector for vhdx does not match, excluding from consideration (Region signature not found at 30000) _process_chunk /usr/lib/python3.12/site-packages/oslo_utils/imageutils/format_inspector.py:1365
Oct 09 16:30:29 compute-0 nova_compute[117331]: 2025-10-09 16:30:29.283 2 DEBUG oslo_concurrency.processutils [None req-ef73c0c9-7283-4984-a115-f46b6c00b6e3 005258eefa5345409efe13f6a1de22ab c076c42271004e7aa86e84416e33f826 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/cea3dacfdea0f3734ae526b812744e847bc2d356 --force-share --output=json execute /usr/lib/python3.12/site-packages/oslo_concurrency/processutils.py:349
Oct 09 16:30:29 compute-0 nova_compute[117331]: 2025-10-09 16:30:29.335 2 DEBUG oslo_concurrency.processutils [None req-ef73c0c9-7283-4984-a115-f46b6c00b6e3 005258eefa5345409efe13f6a1de22ab c076c42271004e7aa86e84416e33f826 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/cea3dacfdea0f3734ae526b812744e847bc2d356 --force-share --output=json" returned: 0 in 0.052s execute /usr/lib/python3.12/site-packages/oslo_concurrency/processutils.py:372
Oct 09 16:30:29 compute-0 nova_compute[117331]: 2025-10-09 16:30:29.336 2 DEBUG oslo_concurrency.processutils [None req-ef73c0c9-7283-4984-a115-f46b6c00b6e3 005258eefa5345409efe13f6a1de22ab c076c42271004e7aa86e84416e33f826 - - default default] Running cmd (subprocess): env LC_ALL=C LANG=C qemu-img create -f qcow2 -o backing_file=/var/lib/nova/instances/_base/cea3dacfdea0f3734ae526b812744e847bc2d356,backing_fmt=raw /var/lib/nova/instances/61ff0a09-3e7d-4597-9e71-af032c6a774f/disk 1073741824 execute /usr/lib/python3.12/site-packages/oslo_concurrency/processutils.py:349
Oct 09 16:30:29 compute-0 sshd-session[149356]: Invalid user odoo18 from 36.224.53.32 port 41038
Oct 09 16:30:29 compute-0 nova_compute[117331]: 2025-10-09 16:30:29.374 2 DEBUG oslo_concurrency.processutils [None req-ef73c0c9-7283-4984-a115-f46b6c00b6e3 005258eefa5345409efe13f6a1de22ab c076c42271004e7aa86e84416e33f826 - - default default] CMD "env LC_ALL=C LANG=C qemu-img create -f qcow2 -o backing_file=/var/lib/nova/instances/_base/cea3dacfdea0f3734ae526b812744e847bc2d356,backing_fmt=raw /var/lib/nova/instances/61ff0a09-3e7d-4597-9e71-af032c6a774f/disk 1073741824" returned: 0 in 0.038s execute /usr/lib/python3.12/site-packages/oslo_concurrency/processutils.py:372
Oct 09 16:30:29 compute-0 nova_compute[117331]: 2025-10-09 16:30:29.376 2 DEBUG oslo_concurrency.lockutils [None req-ef73c0c9-7283-4984-a115-f46b6c00b6e3 005258eefa5345409efe13f6a1de22ab c076c42271004e7aa86e84416e33f826 - - default default] Lock "cea3dacfdea0f3734ae526b812744e847bc2d356" "released" by "nova.virt.libvirt.imagebackend.Qcow2.create_image.<locals>.create_qcow2_image" :: held 0.096s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:424
Oct 09 16:30:29 compute-0 nova_compute[117331]: 2025-10-09 16:30:29.376 2 DEBUG oslo_concurrency.processutils [None req-ef73c0c9-7283-4984-a115-f46b6c00b6e3 005258eefa5345409efe13f6a1de22ab c076c42271004e7aa86e84416e33f826 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/cea3dacfdea0f3734ae526b812744e847bc2d356 --force-share --output=json execute /usr/lib/python3.12/site-packages/oslo_concurrency/processutils.py:349
Oct 09 16:30:29 compute-0 nova_compute[117331]: 2025-10-09 16:30:29.431 2 DEBUG oslo_concurrency.processutils [None req-ef73c0c9-7283-4984-a115-f46b6c00b6e3 005258eefa5345409efe13f6a1de22ab c076c42271004e7aa86e84416e33f826 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/cea3dacfdea0f3734ae526b812744e847bc2d356 --force-share --output=json" returned: 0 in 0.055s execute /usr/lib/python3.12/site-packages/oslo_concurrency/processutils.py:372
Oct 09 16:30:29 compute-0 nova_compute[117331]: 2025-10-09 16:30:29.432 2 DEBUG nova.virt.disk.api [None req-ef73c0c9-7283-4984-a115-f46b6c00b6e3 005258eefa5345409efe13f6a1de22ab c076c42271004e7aa86e84416e33f826 - - default default] Checking if we can resize image /var/lib/nova/instances/61ff0a09-3e7d-4597-9e71-af032c6a774f/disk. size=1073741824 can_resize_image /usr/lib/python3.12/site-packages/nova/virt/disk/api.py:164
Oct 09 16:30:29 compute-0 nova_compute[117331]: 2025-10-09 16:30:29.432 2 DEBUG oslo_concurrency.processutils [None req-ef73c0c9-7283-4984-a115-f46b6c00b6e3 005258eefa5345409efe13f6a1de22ab c076c42271004e7aa86e84416e33f826 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/61ff0a09-3e7d-4597-9e71-af032c6a774f/disk --force-share --output=json execute /usr/lib/python3.12/site-packages/oslo_concurrency/processutils.py:349
Oct 09 16:30:29 compute-0 nova_compute[117331]: 2025-10-09 16:30:29.482 2 DEBUG oslo_concurrency.processutils [None req-ef73c0c9-7283-4984-a115-f46b6c00b6e3 005258eefa5345409efe13f6a1de22ab c076c42271004e7aa86e84416e33f826 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/61ff0a09-3e7d-4597-9e71-af032c6a774f/disk --force-share --output=json" returned: 0 in 0.050s execute /usr/lib/python3.12/site-packages/oslo_concurrency/processutils.py:372
Oct 09 16:30:29 compute-0 nova_compute[117331]: 2025-10-09 16:30:29.483 2 DEBUG nova.virt.disk.api [None req-ef73c0c9-7283-4984-a115-f46b6c00b6e3 005258eefa5345409efe13f6a1de22ab c076c42271004e7aa86e84416e33f826 - - default default] Cannot resize image /var/lib/nova/instances/61ff0a09-3e7d-4597-9e71-af032c6a774f/disk to a smaller size. can_resize_image /usr/lib/python3.12/site-packages/nova/virt/disk/api.py:170
Oct 09 16:30:29 compute-0 nova_compute[117331]: 2025-10-09 16:30:29.484 2 DEBUG nova.objects.instance [None req-ef73c0c9-7283-4984-a115-f46b6c00b6e3 005258eefa5345409efe13f6a1de22ab c076c42271004e7aa86e84416e33f826 - - default default] Lazy-loading 'migration_context' on Instance uuid 61ff0a09-3e7d-4597-9e71-af032c6a774f obj_load_attr /usr/lib/python3.12/site-packages/nova/objects/instance.py:1141
Oct 09 16:30:29 compute-0 podman[127775]: time="2025-10-09T16:30:29Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Oct 09 16:30:29 compute-0 podman[127775]: @ - - [09/Oct/2025:16:30:29 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 20749 "" "Go-http-client/1.1"
Oct 09 16:30:29 compute-0 podman[127775]: @ - - [09/Oct/2025:16:30:29 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 3486 "" "Go-http-client/1.1"
Oct 09 16:30:29 compute-0 nova_compute[117331]: 2025-10-09 16:30:29.994 2 DEBUG nova.objects.base [None req-ef73c0c9-7283-4984-a115-f46b6c00b6e3 005258eefa5345409efe13f6a1de22ab c076c42271004e7aa86e84416e33f826 - - default default] Object Instance<61ff0a09-3e7d-4597-9e71-af032c6a774f> lazy-loaded attributes: trusted_certs,migration_context wrapper /usr/lib/python3.12/site-packages/nova/objects/base.py:136
Oct 09 16:30:29 compute-0 nova_compute[117331]: 2025-10-09 16:30:29.995 2 DEBUG oslo_concurrency.processutils [None req-ef73c0c9-7283-4984-a115-f46b6c00b6e3 005258eefa5345409efe13f6a1de22ab c076c42271004e7aa86e84416e33f826 - - default default] Running cmd (subprocess): env LC_ALL=C LANG=C qemu-img create -f raw /var/lib/nova/instances/61ff0a09-3e7d-4597-9e71-af032c6a774f/disk.config 497664 execute /usr/lib/python3.12/site-packages/oslo_concurrency/processutils.py:349
Oct 09 16:30:30 compute-0 nova_compute[117331]: 2025-10-09 16:30:30.038 2 DEBUG oslo_concurrency.processutils [None req-ef73c0c9-7283-4984-a115-f46b6c00b6e3 005258eefa5345409efe13f6a1de22ab c076c42271004e7aa86e84416e33f826 - - default default] CMD "env LC_ALL=C LANG=C qemu-img create -f raw /var/lib/nova/instances/61ff0a09-3e7d-4597-9e71-af032c6a774f/disk.config 497664" returned: 0 in 0.044s execute /usr/lib/python3.12/site-packages/oslo_concurrency/processutils.py:372
Oct 09 16:30:30 compute-0 nova_compute[117331]: 2025-10-09 16:30:30.040 2 DEBUG nova.virt.libvirt.driver [None req-ef73c0c9-7283-4984-a115-f46b6c00b6e3 005258eefa5345409efe13f6a1de22ab c076c42271004e7aa86e84416e33f826 - - default default] [instance: 61ff0a09-3e7d-4597-9e71-af032c6a774f] Plugging VIFs using destination host port bindings before live migration. _pre_live_migration_plug_vifs /usr/lib/python3.12/site-packages/nova/virt/libvirt/driver.py:11704
Oct 09 16:30:30 compute-0 nova_compute[117331]: 2025-10-09 16:30:30.042 2 DEBUG nova.virt.libvirt.vif [None req-ef73c0c9-7283-4984-a115-f46b6c00b6e3 005258eefa5345409efe13f6a1de22ab c076c42271004e7aa86e84416e33f826 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,compute_id=2,config_drive='True',created_at=2025-10-09T16:29:31Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description=None,display_name='tempest-TestExecuteVmWorkloadBalanceStrategy-server-513673060',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-1.ctlplane.example.com',hostname='tempest-testexecutevmworkloadbalancestrategy-server-513673060',id=21,image_ref='b7d6e0af-25e4-4227-9dc6-43143898ceee',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-10-09T16:29:47Z,launched_on='compute-1.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-1.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=1,progress=0,project_id='c9d397897ee84f2e91c7e33f6c4052a3',ramdisk_id='',reservation_id='r-ncyxwz27',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member,manager,admin',image_base_image_ref='b7d6e0af-25e4-4227-9dc6-43143898ceee',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestExecuteVmWorkloadBalanceStrategy-1125458264',owner_user_name='tempest-TestExecuteVmWorkloadBalanceStrategy-1125458264-project-admin'},tags=<?>,task_state='migrating',terminated_at=None,trusted_certs=None,updated_at=2025-10-09T16:29:47Z,user_data=None,user_id='ea71b67fc37c45859db08bb5231d7a56',uuid=61ff0a09-3e7d-4597-9e71-af032c6a774f,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "2a45be1f-c21d-4446-9ec2-ecbb726e290e", "address": "fa:16:3e:f7:5a:4f", "network": {"id": "84247d99-b9fb-4f75-af06-dd3e92557a34", "bridge": "br-int", "label": "tempest-TestExecuteVmWorkloadBalanceStrategy-187667545-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "3f838d4a2de74d8fbb8e91e7ef351b24", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system"}, "devname": "tap2a45be1f-c2", "ovs_interfaceid": "2a45be1f-c21d-4446-9ec2-ecbb726e290e", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {"os_vif_delegation": true}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.12/site-packages/nova/virt/libvirt/vif.py:721
Oct 09 16:30:30 compute-0 nova_compute[117331]: 2025-10-09 16:30:30.043 2 DEBUG nova.network.os_vif_util [None req-ef73c0c9-7283-4984-a115-f46b6c00b6e3 005258eefa5345409efe13f6a1de22ab c076c42271004e7aa86e84416e33f826 - - default default] Converting VIF {"id": "2a45be1f-c21d-4446-9ec2-ecbb726e290e", "address": "fa:16:3e:f7:5a:4f", "network": {"id": "84247d99-b9fb-4f75-af06-dd3e92557a34", "bridge": "br-int", "label": "tempest-TestExecuteVmWorkloadBalanceStrategy-187667545-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "3f838d4a2de74d8fbb8e91e7ef351b24", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system"}, "devname": "tap2a45be1f-c2", "ovs_interfaceid": "2a45be1f-c21d-4446-9ec2-ecbb726e290e", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {"os_vif_delegation": true}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.12/site-packages/nova/network/os_vif_util.py:511
Oct 09 16:30:30 compute-0 nova_compute[117331]: 2025-10-09 16:30:30.044 2 DEBUG nova.network.os_vif_util [None req-ef73c0c9-7283-4984-a115-f46b6c00b6e3 005258eefa5345409efe13f6a1de22ab c076c42271004e7aa86e84416e33f826 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:f7:5a:4f,bridge_name='br-int',has_traffic_filtering=True,id=2a45be1f-c21d-4446-9ec2-ecbb726e290e,network=Network(84247d99-b9fb-4f75-af06-dd3e92557a34),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap2a45be1f-c2') nova_to_osvif_vif /usr/lib/python3.12/site-packages/nova/network/os_vif_util.py:548
Oct 09 16:30:30 compute-0 nova_compute[117331]: 2025-10-09 16:30:30.045 2 DEBUG os_vif [None req-ef73c0c9-7283-4984-a115-f46b6c00b6e3 005258eefa5345409efe13f6a1de22ab c076c42271004e7aa86e84416e33f826 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:f7:5a:4f,bridge_name='br-int',has_traffic_filtering=True,id=2a45be1f-c21d-4446-9ec2-ecbb726e290e,network=Network(84247d99-b9fb-4f75-af06-dd3e92557a34),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap2a45be1f-c2') plug /usr/lib/python3.12/site-packages/os_vif/__init__.py:76
Oct 09 16:30:30 compute-0 nova_compute[117331]: 2025-10-09 16:30:30.047 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 22 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 09 16:30:30 compute-0 nova_compute[117331]: 2025-10-09 16:30:30.047 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.12/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 09 16:30:30 compute-0 nova_compute[117331]: 2025-10-09 16:30:30.048 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.12/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 09 16:30:30 compute-0 nova_compute[117331]: 2025-10-09 16:30:30.049 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 22 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 09 16:30:30 compute-0 nova_compute[117331]: 2025-10-09 16:30:30.050 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbCreateCommand(_result=None, table=QoS, columns={'type': 'linux-noop', 'external_ids': {'id': '35a5a4d5-1580-5a39-9740-49708ccb9b07', '_type': 'linux-noop'}}, row=False) do_commit /usr/lib/python3.12/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 09 16:30:30 compute-0 nova_compute[117331]: 2025-10-09 16:30:30.051 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 09 16:30:30 compute-0 nova_compute[117331]: 2025-10-09 16:30:30.054 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 09 16:30:30 compute-0 nova_compute[117331]: 2025-10-09 16:30:30.058 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 22 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 09 16:30:30 compute-0 nova_compute[117331]: 2025-10-09 16:30:30.058 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap2a45be1f-c2, may_exist=True, interface_attrs={}) do_commit /usr/lib/python3.12/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 09 16:30:30 compute-0 nova_compute[117331]: 2025-10-09 16:30:30.059 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Port, record=tap2a45be1f-c2, col_values=(('qos', UUID('0ca7b11a-26fb-4513-994a-79c348018022')),), if_exists=True) do_commit /usr/lib/python3.12/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 09 16:30:30 compute-0 nova_compute[117331]: 2025-10-09 16:30:30.059 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=2): DbSetCommand(_result=None, table=Interface, record=tap2a45be1f-c2, col_values=(('external_ids', {'iface-id': '2a45be1f-c21d-4446-9ec2-ecbb726e290e', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:f7:5a:4f', 'vm-uuid': '61ff0a09-3e7d-4597-9e71-af032c6a774f'}),), if_exists=True) do_commit /usr/lib/python3.12/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 09 16:30:30 compute-0 nova_compute[117331]: 2025-10-09 16:30:30.062 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 09 16:30:30 compute-0 NetworkManager[1028]: <info>  [1760027430.0628] manager: (tap2a45be1f-c2): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/72)
Oct 09 16:30:30 compute-0 nova_compute[117331]: 2025-10-09 16:30:30.064 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:248
Oct 09 16:30:30 compute-0 nova_compute[117331]: 2025-10-09 16:30:30.069 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 09 16:30:30 compute-0 nova_compute[117331]: 2025-10-09 16:30:30.070 2 INFO os_vif [None req-ef73c0c9-7283-4984-a115-f46b6c00b6e3 005258eefa5345409efe13f6a1de22ab c076c42271004e7aa86e84416e33f826 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:f7:5a:4f,bridge_name='br-int',has_traffic_filtering=True,id=2a45be1f-c21d-4446-9ec2-ecbb726e290e,network=Network(84247d99-b9fb-4f75-af06-dd3e92557a34),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap2a45be1f-c2')
Oct 09 16:30:30 compute-0 nova_compute[117331]: 2025-10-09 16:30:30.070 2 DEBUG nova.virt.libvirt.driver [None req-ef73c0c9-7283-4984-a115-f46b6c00b6e3 005258eefa5345409efe13f6a1de22ab c076c42271004e7aa86e84416e33f826 - - default default] No dst_numa_info in migrate_data, no cores to power up in pre_live_migration. pre_live_migration /usr/lib/python3.12/site-packages/nova/virt/libvirt/driver.py:11851
Oct 09 16:30:30 compute-0 nova_compute[117331]: 2025-10-09 16:30:30.070 2 DEBUG nova.compute.manager [None req-ef73c0c9-7283-4984-a115-f46b6c00b6e3 005258eefa5345409efe13f6a1de22ab c076c42271004e7aa86e84416e33f826 - - default default] driver pre_live_migration data is LibvirtLiveMigrateData(bdms=[],block_migration=True,disk_available_mb=73728,disk_over_commit=<?>,dst_cpu_shared_set_info=set([]),dst_numa_info=<?>,dst_supports_mdev_live_migration=True,dst_supports_numa_live_migration=<?>,dst_wants_file_backed_memory=False,file_backed_memory_discard=<?>,filename='tmpzzvfvq4l',graphics_listen_addr_spice=127.0.0.1,graphics_listen_addr_vnc=::,image_type='qcow2',instance_relative_path='61ff0a09-3e7d-4597-9e71-af032c6a774f',is_shared_block_storage=False,is_shared_instance_path=False,is_volume_backed=False,migration=<?>,old_vol_attachment_ids={},pci_dev_map_src_dst={},serial_listen_addr=None,serial_listen_ports=[],source_mdev_types=<?>,src_supports_native_luks=<?>,src_supports_numa_live_migration=<?>,supported_perf_events=[],target_connect_addr=None,target_mdevs=<?>,vifs=[VIFMigrateData],wait_for_vif_plugged=<?>) pre_live_migration /usr/lib/python3.12/site-packages/nova/compute/manager.py:9377
Oct 09 16:30:30 compute-0 nova_compute[117331]: 2025-10-09 16:30:30.071 2 WARNING neutronclient.v2_0.client [None req-ef73c0c9-7283-4984-a115-f46b6c00b6e3 005258eefa5345409efe13f6a1de22ab c076c42271004e7aa86e84416e33f826 - - default default] The python binding code in neutronclient is deprecated in favor of OpenstackSDK, please use that as this will be removed in a future release.
Oct 09 16:30:30 compute-0 nova_compute[117331]: 2025-10-09 16:30:30.283 2 DEBUG nova.compute.resource_tracker [None req-92a13bed-c081-4c7b-87f3-bafebefb79f2 - - - - - -] Migration for instance 61ff0a09-3e7d-4597-9e71-af032c6a774f refers to another host's instance! _pair_instances_to_migrations /usr/lib/python3.12/site-packages/nova/compute/resource_tracker.py:979
Oct 09 16:30:30 compute-0 sshd-session[149356]: pam_unix(sshd:auth): check pass; user unknown
Oct 09 16:30:30 compute-0 sshd-session[149356]: pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=36.224.53.32
Oct 09 16:30:30 compute-0 nova_compute[117331]: 2025-10-09 16:30:30.797 2 INFO nova.compute.resource_tracker [None req-92a13bed-c081-4c7b-87f3-bafebefb79f2 - - - - - -] [instance: 61ff0a09-3e7d-4597-9e71-af032c6a774f] Updating resource usage from migration 47392f43-dd76-4031-807d-93c0b307b738
Oct 09 16:30:30 compute-0 nova_compute[117331]: 2025-10-09 16:30:30.797 2 DEBUG nova.compute.resource_tracker [None req-92a13bed-c081-4c7b-87f3-bafebefb79f2 - - - - - -] [instance: 61ff0a09-3e7d-4597-9e71-af032c6a774f] Starting to track incoming migration 47392f43-dd76-4031-807d-93c0b307b738 with flavor 5aeb5bf9-70f3-4ab6-b418-2c3319a7d2b3 _update_usage_from_migration /usr/lib/python3.12/site-packages/nova/compute/resource_tracker.py:1536
Oct 09 16:30:31 compute-0 nova_compute[117331]: 2025-10-09 16:30:31.085 2 WARNING neutronclient.v2_0.client [None req-ef73c0c9-7283-4984-a115-f46b6c00b6e3 005258eefa5345409efe13f6a1de22ab c076c42271004e7aa86e84416e33f826 - - default default] The python binding code in neutronclient is deprecated in favor of OpenstackSDK, please use that as this will be removed in a future release.
Oct 09 16:30:31 compute-0 nova_compute[117331]: 2025-10-09 16:30:31.333 2 DEBUG nova.compute.resource_tracker [None req-92a13bed-c081-4c7b-87f3-bafebefb79f2 - - - - - -] Instance 144cc4ab-6004-4896-8899-8860557341b0 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.12/site-packages/nova/compute/resource_tracker.py:1740
Oct 09 16:30:31 compute-0 openstack_network_exporter[129925]: ERROR   16:30:31 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Oct 09 16:30:31 compute-0 openstack_network_exporter[129925]: ERROR   16:30:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Oct 09 16:30:31 compute-0 openstack_network_exporter[129925]: ERROR   16:30:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Oct 09 16:30:31 compute-0 openstack_network_exporter[129925]: ERROR   16:30:31 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Oct 09 16:30:31 compute-0 openstack_network_exporter[129925]: ERROR   16:30:31 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Oct 09 16:30:31 compute-0 ovn_metadata_agent[28608]: 2025-10-09 16:30:31.585 28613 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=21, options={'arp_ns_explicit_output': 'true', 'fdb_removal_limit': '0', 'ignore_lsp_down': 'false', 'mac_binding_removal_limit': '0', 'mac_prefix': '4a:65:5f', 'max_tunid': '16711680', 'northd_internal_version': '24.09.4-20.37.0-77.8', 'svc_monitor_mac': '32:24:1e:e3:5d:e8'}, ipsec=False) old=SB_Global(nb_cfg=20) matches /usr/lib/python3.12/site-packages/ovsdbapp/backend/ovs_idl/event.py:55
Oct 09 16:30:31 compute-0 ovn_metadata_agent[28608]: 2025-10-09 16:30:31.586 28613 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 6 seconds run /usr/lib/python3.12/site-packages/neutron/agent/ovn/metadata/agent.py:367
Oct 09 16:30:31 compute-0 nova_compute[117331]: 2025-10-09 16:30:31.587 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 09 16:30:31 compute-0 nova_compute[117331]: 2025-10-09 16:30:31.842 2 WARNING nova.compute.resource_tracker [None req-92a13bed-c081-4c7b-87f3-bafebefb79f2 - - - - - -] Instance 61ff0a09-3e7d-4597-9e71-af032c6a774f has been moved to another host compute-1.ctlplane.example.com(compute-1.ctlplane.example.com). There are allocations remaining against the source host that might need to be removed: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}.
Oct 09 16:30:31 compute-0 nova_compute[117331]: 2025-10-09 16:30:31.843 2 DEBUG nova.compute.resource_tracker [None req-92a13bed-c081-4c7b-87f3-bafebefb79f2 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 2 _report_final_resource_view /usr/lib/python3.12/site-packages/nova/compute/resource_tracker.py:1159
Oct 09 16:30:31 compute-0 nova_compute[117331]: 2025-10-09 16:30:31.843 2 DEBUG nova.compute.resource_tracker [None req-92a13bed-c081-4c7b-87f3-bafebefb79f2 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=768MB phys_disk=79GB used_disk=2GB total_vcpus=8 used_vcpus=2 pci_stats=[] stats={'failed_builds': '0', 'uptime': ' 16:30:29 up 39 min,  0 user,  load average: 0.82, 0.66, 0.48\n', 'num_instances': '1', 'num_vm_active': '1', 'num_task_None': '1', 'num_os_type_None': '1', 'num_proj_c9d397897ee84f2e91c7e33f6c4052a3': '1', 'io_workload': '0'} _report_final_resource_view /usr/lib/python3.12/site-packages/nova/compute/resource_tracker.py:1168
Oct 09 16:30:31 compute-0 nova_compute[117331]: 2025-10-09 16:30:31.904 2 DEBUG nova.compute.provider_tree [None req-92a13bed-c081-4c7b-87f3-bafebefb79f2 - - - - - -] Inventory has not changed in ProviderTree for provider: 593051b8-2000-437f-a915-2616fc8b1671 update_inventory /usr/lib/python3.12/site-packages/nova/compute/provider_tree.py:180
Oct 09 16:30:31 compute-0 nova_compute[117331]: 2025-10-09 16:30:31.965 2 DEBUG nova.network.neutron [None req-ef73c0c9-7283-4984-a115-f46b6c00b6e3 005258eefa5345409efe13f6a1de22ab c076c42271004e7aa86e84416e33f826 - - default default] [instance: 61ff0a09-3e7d-4597-9e71-af032c6a774f] Port 2a45be1f-c21d-4446-9ec2-ecbb726e290e updated with migration profile {'migrating_to': 'compute-0.ctlplane.example.com'} successfully _setup_migration_port_profile /usr/lib/python3.12/site-packages/nova/network/neutron.py:356
Oct 09 16:30:31 compute-0 nova_compute[117331]: 2025-10-09 16:30:31.983 2 DEBUG nova.compute.manager [None req-ef73c0c9-7283-4984-a115-f46b6c00b6e3 005258eefa5345409efe13f6a1de22ab c076c42271004e7aa86e84416e33f826 - - default default] pre_live_migration result data is LibvirtLiveMigrateData(bdms=[],block_migration=True,disk_available_mb=73728,disk_over_commit=<?>,dst_cpu_shared_set_info=set([]),dst_numa_info=<?>,dst_supports_mdev_live_migration=True,dst_supports_numa_live_migration=<?>,dst_wants_file_backed_memory=False,file_backed_memory_discard=<?>,filename='tmpzzvfvq4l',graphics_listen_addr_spice=127.0.0.1,graphics_listen_addr_vnc=::,image_type='qcow2',instance_relative_path='61ff0a09-3e7d-4597-9e71-af032c6a774f',is_shared_block_storage=False,is_shared_instance_path=False,is_volume_backed=False,migration=<?>,old_vol_attachment_ids={},pci_dev_map_src_dst={},serial_listen_addr=None,serial_listen_ports=[],source_mdev_types=<?>,src_supports_native_luks=<?>,src_supports_numa_live_migration=<?>,supported_perf_events=[],target_connect_addr=None,target_mdevs=<?>,vifs=[VIFMigrateData],wait_for_vif_plugged=True) pre_live_migration /usr/lib/python3.12/site-packages/nova/compute/manager.py:9443
Oct 09 16:30:32 compute-0 nova_compute[117331]: 2025-10-09 16:30:32.033 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 09 16:30:32 compute-0 nova_compute[117331]: 2025-10-09 16:30:32.411 2 DEBUG nova.scheduler.client.report [None req-92a13bed-c081-4c7b-87f3-bafebefb79f2 - - - - - -] Inventory has not changed for provider 593051b8-2000-437f-a915-2616fc8b1671 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 79, 'reserved': 1, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.12/site-packages/nova/scheduler/client/report.py:958
Oct 09 16:30:32 compute-0 sshd-session[149356]: Failed password for invalid user odoo18 from 36.224.53.32 port 41038 ssh2
Oct 09 16:30:32 compute-0 sshd-session[149356]: Connection closed by invalid user odoo18 36.224.53.32 port 41038 [preauth]
Oct 09 16:30:32 compute-0 nova_compute[117331]: 2025-10-09 16:30:32.921 2 DEBUG nova.compute.resource_tracker [None req-92a13bed-c081-4c7b-87f3-bafebefb79f2 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.12/site-packages/nova/compute/resource_tracker.py:1097
Oct 09 16:30:32 compute-0 nova_compute[117331]: 2025-10-09 16:30:32.922 2 DEBUG oslo_concurrency.lockutils [None req-92a13bed-c081-4c7b-87f3-bafebefb79f2 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 3.659s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:424
Oct 09 16:30:33 compute-0 podman[149424]: 2025-10-09 16:30:33.854554803 +0000 UTC m=+0.075660670 container health_status 64fa251112d01abb34ec1bb8e22019815ddcaaaa391ce5ec91517147ac5a9994 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_id=edpm, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., managed_by=edpm_ansible, name=ubi9-minimal, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, com.redhat.component=ubi9-minimal-container, distribution-scope=public, build-date=2025-08-20T13:12:41, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., maintainer=Red Hat, Inc., url=https://catalog.redhat.com/en/search?searchType=containers, io.openshift.expose-services=, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.tags=minimal rhel9, release=1755695350, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, version=9.6, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, container_name=openstack_network_exporter, vendor=Red Hat, Inc., vcs-type=git, io.buildah.version=1.33.7)
Oct 09 16:30:33 compute-0 podman[149425]: 2025-10-09 16:30:33.902326971 +0000 UTC m=+0.119947628 container health_status d615e4f24ff62fbb14be4a63e6da568ec5b22309773c1c66103b6b4de64a8a2f (image=38.102.83.66:5001/podified-master-centos10/openstack-ovn-controller:watcher_latest, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_id=ovn_controller, container_name=ovn_controller, io.buildah.version=1.41.4, org.label-schema.build-date=20251007, org.label-schema.name=CentOS Stream 10 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': '38.102.83.66:5001/podified-master-centos10/openstack-ovn-controller:watcher_latest', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.vendor=CentOS, tcib_build_tag=watcher_latest, managed_by=edpm_ansible, org.label-schema.schema-version=1.0)
Oct 09 16:30:34 compute-0 nova_compute[117331]: 2025-10-09 16:30:34.923 2 DEBUG oslo_service.periodic_task [None req-92a13bed-c081-4c7b-87f3-bafebefb79f2 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.12/site-packages/oslo_service/periodic_task.py:210
Oct 09 16:30:34 compute-0 nova_compute[117331]: 2025-10-09 16:30:34.924 2 DEBUG oslo_service.periodic_task [None req-92a13bed-c081-4c7b-87f3-bafebefb79f2 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.12/site-packages/oslo_service/periodic_task.py:210
Oct 09 16:30:34 compute-0 nova_compute[117331]: 2025-10-09 16:30:34.924 2 DEBUG oslo_service.periodic_task [None req-92a13bed-c081-4c7b-87f3-bafebefb79f2 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.12/site-packages/oslo_service/periodic_task.py:210
Oct 09 16:30:34 compute-0 nova_compute[117331]: 2025-10-09 16:30:34.925 2 DEBUG oslo_service.periodic_task [None req-92a13bed-c081-4c7b-87f3-bafebefb79f2 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.12/site-packages/oslo_service/periodic_task.py:210
Oct 09 16:30:35 compute-0 nova_compute[117331]: 2025-10-09 16:30:35.061 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 09 16:30:35 compute-0 systemd[1]: Starting libvirt proxy daemon...
Oct 09 16:30:35 compute-0 systemd[1]: Started libvirt proxy daemon.
Oct 09 16:30:35 compute-0 nova_compute[117331]: 2025-10-09 16:30:35.302 2 DEBUG oslo_service.periodic_task [None req-92a13bed-c081-4c7b-87f3-bafebefb79f2 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.12/site-packages/oslo_service/periodic_task.py:210
Oct 09 16:30:35 compute-0 ovn_metadata_agent[28608]: 2025-10-09 16:30:35.323 28613 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:405
Oct 09 16:30:35 compute-0 ovn_metadata_agent[28608]: 2025-10-09 16:30:35.323 28613 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.000s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:410
Oct 09 16:30:35 compute-0 ovn_metadata_agent[28608]: 2025-10-09 16:30:35.323 28613 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:424
Oct 09 16:30:35 compute-0 kernel: tap2a45be1f-c2: entered promiscuous mode
Oct 09 16:30:35 compute-0 ovn_controller[19752]: 2025-10-09T16:30:35Z|00190|binding|INFO|Claiming lport 2a45be1f-c21d-4446-9ec2-ecbb726e290e for this additional chassis.
Oct 09 16:30:35 compute-0 nova_compute[117331]: 2025-10-09 16:30:35.403 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 09 16:30:35 compute-0 ovn_controller[19752]: 2025-10-09T16:30:35Z|00191|binding|INFO|2a45be1f-c21d-4446-9ec2-ecbb726e290e: Claiming fa:16:3e:f7:5a:4f 10.100.0.10
Oct 09 16:30:35 compute-0 NetworkManager[1028]: <info>  [1760027435.4038] manager: (tap2a45be1f-c2): new Tun device (/org/freedesktop/NetworkManager/Devices/73)
Oct 09 16:30:35 compute-0 ovn_metadata_agent[28608]: 2025-10-09 16:30:35.411 28613 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:f7:5a:4f 10.100.0.10'], port_security=['fa:16:3e:f7:5a:4f 10.100.0.10'], type=, nat_addresses=[], virtual_parent=[], up=[True], options={'requested-chassis': 'compute-1.ctlplane.example.com,compute-0.ctlplane.example.com', 'activation-strategy': 'rarp'}, parent_port=[], requested_additional_chassis=[<ovs.db.idl.Row object at 0x7f37b2576840>], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.10/28', 'neutron:device_id': '61ff0a09-3e7d-4597-9e71-af032c6a774f', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-84247d99-b9fb-4f75-af06-dd3e92557a34', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'c9d397897ee84f2e91c7e33f6c4052a3', 'neutron:revision_number': '10', 'neutron:security_group_ids': '2f00962f-5e5e-41c9-9e03-af9a8b3a3c40', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-1.ctlplane.example.com'}, additional_chassis=[<ovs.db.idl.Row object at 0x7f37b2576840>], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=83685448-9277-4bf7-9118-ff77ca4cb703, chassis=[], tunnel_key=4, gateway_chassis=[], requested_chassis=[], logical_port=2a45be1f-c21d-4446-9ec2-ecbb726e290e) old=Port_Binding(additional_chassis=[]) matches /usr/lib/python3.12/site-packages/ovsdbapp/backend/ovs_idl/event.py:55
Oct 09 16:30:35 compute-0 ovn_metadata_agent[28608]: 2025-10-09 16:30:35.411 28613 INFO neutron.agent.ovn.metadata.agent [-] Port 2a45be1f-c21d-4446-9ec2-ecbb726e290e in datapath 84247d99-b9fb-4f75-af06-dd3e92557a34 unbound from our chassis
Oct 09 16:30:35 compute-0 ovn_metadata_agent[28608]: 2025-10-09 16:30:35.413 28613 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 84247d99-b9fb-4f75-af06-dd3e92557a34
Oct 09 16:30:35 compute-0 ovn_controller[19752]: 2025-10-09T16:30:35Z|00192|binding|INFO|Setting lport 2a45be1f-c21d-4446-9ec2-ecbb726e290e ovn-installed in OVS
Oct 09 16:30:35 compute-0 nova_compute[117331]: 2025-10-09 16:30:35.418 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 09 16:30:35 compute-0 nova_compute[117331]: 2025-10-09 16:30:35.421 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 09 16:30:35 compute-0 ovn_metadata_agent[28608]: 2025-10-09 16:30:35.427 139687 DEBUG oslo.privsep.daemon [-] privsep: reply[930da760-5dfc-4c9a-b925-2a6505b9bbd1]: (4, True) _call_back /usr/lib/python3.12/site-packages/oslo_privsep/daemon.py:515
Oct 09 16:30:35 compute-0 systemd-udevd[149503]: Network interface NamePolicy= disabled on kernel command line.
Oct 09 16:30:35 compute-0 systemd-machined[77487]: New machine qemu-15-instance-00000015.
Oct 09 16:30:35 compute-0 NetworkManager[1028]: <info>  [1760027435.4501] device (tap2a45be1f-c2): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Oct 09 16:30:35 compute-0 NetworkManager[1028]: <info>  [1760027435.4508] device (tap2a45be1f-c2): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Oct 09 16:30:35 compute-0 ovn_metadata_agent[28608]: 2025-10-09 16:30:35.456 141048 DEBUG oslo.privsep.daemon [-] privsep: reply[5253d384-982e-4bbd-8396-9d1c1b49a0cb]: (4, None) _call_back /usr/lib/python3.12/site-packages/oslo_privsep/daemon.py:515
Oct 09 16:30:35 compute-0 ovn_metadata_agent[28608]: 2025-10-09 16:30:35.458 141048 DEBUG oslo.privsep.daemon [-] privsep: reply[1661e594-c8c3-457b-ab33-117af850e0c5]: (4, None) _call_back /usr/lib/python3.12/site-packages/oslo_privsep/daemon.py:515
Oct 09 16:30:35 compute-0 systemd[1]: Started Virtual Machine qemu-15-instance-00000015.
Oct 09 16:30:35 compute-0 ovn_metadata_agent[28608]: 2025-10-09 16:30:35.483 141048 DEBUG oslo.privsep.daemon [-] privsep: reply[9d225405-378a-4637-8609-7adf0f9784d4]: (4, None) _call_back /usr/lib/python3.12/site-packages/oslo_privsep/daemon.py:515
Oct 09 16:30:35 compute-0 sshd-session[149422]: Invalid user db2inst1 from 36.224.53.32 port 48230
Oct 09 16:30:35 compute-0 ovn_metadata_agent[28608]: 2025-10-09 16:30:35.504 139687 DEBUG oslo.privsep.daemon [-] privsep: reply[ad890412-b89c-44ab-b1f4-4f35a25fecdc]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap84247d99-b1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:10:14:aa'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 10, 'tx_packets': 5, 'rx_bytes': 916, 'tx_bytes': 354, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 10, 'tx_packets': 5, 'rx_bytes': 916, 'tx_bytes': 354, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 53], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 230833, 'reachable_time': 36167, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 8, 'inoctets': 720, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 8, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 720, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 8, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 149512, 'error': None, 'target': 'ovnmeta-84247d99-b9fb-4f75-af06-dd3e92557a34', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.12/site-packages/oslo_privsep/daemon.py:515
Oct 09 16:30:35 compute-0 ovn_metadata_agent[28608]: 2025-10-09 16:30:35.521 139687 DEBUG oslo.privsep.daemon [-] privsep: reply[dae2dd7c-7cd6-4c07-9772-6963b1829179]: (4, ({'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tap84247d99-b1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 230841, 'tstamp': 230841}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 149515, 'error': None, 'target': 'ovnmeta-84247d99-b9fb-4f75-af06-dd3e92557a34', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tap84247d99-b1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 230844, 'tstamp': 230844}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 149515, 'error': None, 'target': 'ovnmeta-84247d99-b9fb-4f75-af06-dd3e92557a34', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.12/site-packages/oslo_privsep/daemon.py:515
Oct 09 16:30:35 compute-0 ovn_metadata_agent[28608]: 2025-10-09 16:30:35.522 28613 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap84247d99-b0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.12/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 09 16:30:35 compute-0 nova_compute[117331]: 2025-10-09 16:30:35.524 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 09 16:30:35 compute-0 nova_compute[117331]: 2025-10-09 16:30:35.524 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 09 16:30:35 compute-0 ovn_metadata_agent[28608]: 2025-10-09 16:30:35.526 28613 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap84247d99-b0, may_exist=True, interface_attrs={}) do_commit /usr/lib/python3.12/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 09 16:30:35 compute-0 ovn_metadata_agent[28608]: 2025-10-09 16:30:35.526 28613 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.12/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 09 16:30:35 compute-0 ovn_metadata_agent[28608]: 2025-10-09 16:30:35.526 28613 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap84247d99-b0, col_values=(('external_ids', {'iface-id': '091675d2-c61b-4a23-b064-d4ca0295fca5'}),), if_exists=True) do_commit /usr/lib/python3.12/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 09 16:30:35 compute-0 ovn_metadata_agent[28608]: 2025-10-09 16:30:35.526 28613 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.12/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 09 16:30:35 compute-0 ovn_metadata_agent[28608]: 2025-10-09 16:30:35.531 139687 DEBUG oslo.privsep.daemon [-] privsep: reply[af9dad85-32e1-4021-8e2b-312586e9060f]: (4, '\nglobal\n    log         /dev/log local0 debug\n    log-tag     haproxy-metadata-proxy-84247d99-b9fb-4f75-af06-dd3e92557a34\n    user        root\n    group       root\n    maxconn     1024\n    pidfile     /var/lib/neutron/external/pids/84247d99-b9fb-4f75-af06-dd3e92557a34.pid.haproxy\n    daemon\n\ndefaults\n    log global\n    mode http\n    option httplog\n    option dontlognull\n    option http-server-close\n    option forwardfor\n    retries                 3\n    timeout http-request    30s\n    timeout connect         30s\n    timeout client          32s\n    timeout server          32s\n    timeout http-keep-alive 30s\n\nlisten listener\n    bind 169.254.169.254:80\n    \n    server metadata /var/lib/neutron/metadata_proxy\n\n    http-request add-header X-OVN-Network-ID 84247d99-b9fb-4f75-af06-dd3e92557a34\n') _call_back /usr/lib/python3.12/site-packages/oslo_privsep/daemon.py:515
Oct 09 16:30:36 compute-0 sshd-session[149422]: pam_unix(sshd:auth): check pass; user unknown
Oct 09 16:30:36 compute-0 sshd-session[149422]: pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=36.224.53.32
Oct 09 16:30:37 compute-0 nova_compute[117331]: 2025-10-09 16:30:37.035 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 09 16:30:37 compute-0 ovn_metadata_agent[28608]: 2025-10-09 16:30:37.589 28613 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=9954897f-aa83-45dd-8e84-289816676c2a, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '21'}),), if_exists=True) do_commit /usr/lib/python3.12/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 09 16:30:38 compute-0 ovn_controller[19752]: 2025-10-09T16:30:38Z|00193|binding|INFO|Claiming lport 2a45be1f-c21d-4446-9ec2-ecbb726e290e for this chassis.
Oct 09 16:30:38 compute-0 ovn_controller[19752]: 2025-10-09T16:30:38Z|00194|binding|INFO|2a45be1f-c21d-4446-9ec2-ecbb726e290e: Claiming fa:16:3e:f7:5a:4f 10.100.0.10
Oct 09 16:30:38 compute-0 ovn_controller[19752]: 2025-10-09T16:30:38Z|00195|binding|INFO|Setting lport 2a45be1f-c21d-4446-9ec2-ecbb726e290e up in Southbound
Oct 09 16:30:39 compute-0 sshd-session[149422]: Failed password for invalid user db2inst1 from 36.224.53.32 port 48230 ssh2
Oct 09 16:30:39 compute-0 nova_compute[117331]: 2025-10-09 16:30:39.245 2 INFO nova.compute.manager [None req-ef73c0c9-7283-4984-a115-f46b6c00b6e3 005258eefa5345409efe13f6a1de22ab c076c42271004e7aa86e84416e33f826 - - default default] [instance: 61ff0a09-3e7d-4597-9e71-af032c6a774f] Post operation of migration started
Oct 09 16:30:39 compute-0 nova_compute[117331]: 2025-10-09 16:30:39.246 2 WARNING neutronclient.v2_0.client [None req-ef73c0c9-7283-4984-a115-f46b6c00b6e3 005258eefa5345409efe13f6a1de22ab c076c42271004e7aa86e84416e33f826 - - default default] The python binding code in neutronclient is deprecated in favor of OpenstackSDK, please use that as this will be removed in a future release.
Oct 09 16:30:39 compute-0 nova_compute[117331]: 2025-10-09 16:30:39.351 2 WARNING neutronclient.v2_0.client [None req-ef73c0c9-7283-4984-a115-f46b6c00b6e3 005258eefa5345409efe13f6a1de22ab c076c42271004e7aa86e84416e33f826 - - default default] The python binding code in neutronclient is deprecated in favor of OpenstackSDK, please use that as this will be removed in a future release.
Oct 09 16:30:39 compute-0 nova_compute[117331]: 2025-10-09 16:30:39.352 2 WARNING neutronclient.v2_0.client [None req-ef73c0c9-7283-4984-a115-f46b6c00b6e3 005258eefa5345409efe13f6a1de22ab c076c42271004e7aa86e84416e33f826 - - default default] The python binding code in neutronclient is deprecated in favor of OpenstackSDK, please use that as this will be removed in a future release.
Oct 09 16:30:39 compute-0 nova_compute[117331]: 2025-10-09 16:30:39.427 2 DEBUG oslo_concurrency.lockutils [None req-ef73c0c9-7283-4984-a115-f46b6c00b6e3 005258eefa5345409efe13f6a1de22ab c076c42271004e7aa86e84416e33f826 - - default default] Acquiring lock "refresh_cache-61ff0a09-3e7d-4597-9e71-af032c6a774f" lock /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:313
Oct 09 16:30:39 compute-0 nova_compute[117331]: 2025-10-09 16:30:39.428 2 DEBUG oslo_concurrency.lockutils [None req-ef73c0c9-7283-4984-a115-f46b6c00b6e3 005258eefa5345409efe13f6a1de22ab c076c42271004e7aa86e84416e33f826 - - default default] Acquired lock "refresh_cache-61ff0a09-3e7d-4597-9e71-af032c6a774f" lock /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:316
Oct 09 16:30:39 compute-0 nova_compute[117331]: 2025-10-09 16:30:39.428 2 DEBUG nova.network.neutron [None req-ef73c0c9-7283-4984-a115-f46b6c00b6e3 005258eefa5345409efe13f6a1de22ab c076c42271004e7aa86e84416e33f826 - - default default] [instance: 61ff0a09-3e7d-4597-9e71-af032c6a774f] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.12/site-packages/nova/network/neutron.py:2070
Oct 09 16:30:39 compute-0 nova_compute[117331]: 2025-10-09 16:30:39.934 2 WARNING neutronclient.v2_0.client [None req-ef73c0c9-7283-4984-a115-f46b6c00b6e3 005258eefa5345409efe13f6a1de22ab c076c42271004e7aa86e84416e33f826 - - default default] The python binding code in neutronclient is deprecated in favor of OpenstackSDK, please use that as this will be removed in a future release.
Oct 09 16:30:40 compute-0 nova_compute[117331]: 2025-10-09 16:30:40.113 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 09 16:30:40 compute-0 nova_compute[117331]: 2025-10-09 16:30:40.382 2 WARNING neutronclient.v2_0.client [None req-ef73c0c9-7283-4984-a115-f46b6c00b6e3 005258eefa5345409efe13f6a1de22ab c076c42271004e7aa86e84416e33f826 - - default default] The python binding code in neutronclient is deprecated in favor of OpenstackSDK, please use that as this will be removed in a future release.
Oct 09 16:30:40 compute-0 nova_compute[117331]: 2025-10-09 16:30:40.544 2 DEBUG nova.network.neutron [None req-ef73c0c9-7283-4984-a115-f46b6c00b6e3 005258eefa5345409efe13f6a1de22ab c076c42271004e7aa86e84416e33f826 - - default default] [instance: 61ff0a09-3e7d-4597-9e71-af032c6a774f] Updating instance_info_cache with network_info: [{"id": "2a45be1f-c21d-4446-9ec2-ecbb726e290e", "address": "fa:16:3e:f7:5a:4f", "network": {"id": "84247d99-b9fb-4f75-af06-dd3e92557a34", "bridge": "br-int", "label": "tempest-TestExecuteVmWorkloadBalanceStrategy-187667545-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "3f838d4a2de74d8fbb8e91e7ef351b24", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2a45be1f-c2", "ovs_interfaceid": "2a45be1f-c21d-4446-9ec2-ecbb726e290e", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {"os_vif_delegation": true}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.12/site-packages/nova/network/neutron.py:116
Oct 09 16:30:41 compute-0 sshd-session[149422]: Connection closed by invalid user db2inst1 36.224.53.32 port 48230 [preauth]
Oct 09 16:30:41 compute-0 nova_compute[117331]: 2025-10-09 16:30:41.054 2 DEBUG oslo_concurrency.lockutils [None req-ef73c0c9-7283-4984-a115-f46b6c00b6e3 005258eefa5345409efe13f6a1de22ab c076c42271004e7aa86e84416e33f826 - - default default] Releasing lock "refresh_cache-61ff0a09-3e7d-4597-9e71-af032c6a774f" lock /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:334
Oct 09 16:30:41 compute-0 nova_compute[117331]: 2025-10-09 16:30:41.572 2 DEBUG oslo_concurrency.lockutils [None req-ef73c0c9-7283-4984-a115-f46b6c00b6e3 005258eefa5345409efe13f6a1de22ab c076c42271004e7aa86e84416e33f826 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.allocate_pci_devices_for_instance" inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:405
Oct 09 16:30:41 compute-0 nova_compute[117331]: 2025-10-09 16:30:41.572 2 DEBUG oslo_concurrency.lockutils [None req-ef73c0c9-7283-4984-a115-f46b6c00b6e3 005258eefa5345409efe13f6a1de22ab c076c42271004e7aa86e84416e33f826 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.allocate_pci_devices_for_instance" :: waited 0.001s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:410
Oct 09 16:30:41 compute-0 nova_compute[117331]: 2025-10-09 16:30:41.573 2 DEBUG oslo_concurrency.lockutils [None req-ef73c0c9-7283-4984-a115-f46b6c00b6e3 005258eefa5345409efe13f6a1de22ab c076c42271004e7aa86e84416e33f826 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.allocate_pci_devices_for_instance" :: held 0.000s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:424
Oct 09 16:30:41 compute-0 nova_compute[117331]: 2025-10-09 16:30:41.578 2 INFO nova.virt.libvirt.driver [None req-ef73c0c9-7283-4984-a115-f46b6c00b6e3 005258eefa5345409efe13f6a1de22ab c076c42271004e7aa86e84416e33f826 - - default default] [instance: 61ff0a09-3e7d-4597-9e71-af032c6a774f] Sending announce-self command to QEMU monitor. Attempt 1 of 3
Oct 09 16:30:41 compute-0 virtqemud[117629]: Domain id=15 name='instance-00000015' uuid=61ff0a09-3e7d-4597-9e71-af032c6a774f is tainted: custom-monitor
Oct 09 16:30:42 compute-0 nova_compute[117331]: 2025-10-09 16:30:42.037 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 09 16:30:42 compute-0 nova_compute[117331]: 2025-10-09 16:30:42.589 2 INFO nova.virt.libvirt.driver [None req-ef73c0c9-7283-4984-a115-f46b6c00b6e3 005258eefa5345409efe13f6a1de22ab c076c42271004e7aa86e84416e33f826 - - default default] [instance: 61ff0a09-3e7d-4597-9e71-af032c6a774f] Sending announce-self command to QEMU monitor. Attempt 2 of 3
Oct 09 16:30:43 compute-0 nova_compute[117331]: 2025-10-09 16:30:43.597 2 INFO nova.virt.libvirt.driver [None req-ef73c0c9-7283-4984-a115-f46b6c00b6e3 005258eefa5345409efe13f6a1de22ab c076c42271004e7aa86e84416e33f826 - - default default] [instance: 61ff0a09-3e7d-4597-9e71-af032c6a774f] Sending announce-self command to QEMU monitor. Attempt 3 of 3
Oct 09 16:30:43 compute-0 nova_compute[117331]: 2025-10-09 16:30:43.602 2 DEBUG nova.compute.manager [None req-ef73c0c9-7283-4984-a115-f46b6c00b6e3 005258eefa5345409efe13f6a1de22ab c076c42271004e7aa86e84416e33f826 - - default default] [instance: 61ff0a09-3e7d-4597-9e71-af032c6a774f] Checking state _get_power_state /usr/lib/python3.12/site-packages/nova/compute/manager.py:1825
Oct 09 16:30:44 compute-0 nova_compute[117331]: 2025-10-09 16:30:44.128 2 DEBUG nova.objects.instance [None req-ef73c0c9-7283-4984-a115-f46b6c00b6e3 005258eefa5345409efe13f6a1de22ab c076c42271004e7aa86e84416e33f826 - - default default] [instance: 61ff0a09-3e7d-4597-9e71-af032c6a774f] Trying to apply a migration context that does not seem to be set for this instance apply_migration_context /usr/lib/python3.12/site-packages/nova/objects/instance.py:1067
Oct 09 16:30:45 compute-0 nova_compute[117331]: 2025-10-09 16:30:45.114 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 09 16:30:45 compute-0 sshd-session[149538]: Invalid user glassfish from 36.224.53.32 port 56602
Oct 09 16:30:45 compute-0 nova_compute[117331]: 2025-10-09 16:30:45.157 2 WARNING neutronclient.v2_0.client [None req-ef73c0c9-7283-4984-a115-f46b6c00b6e3 005258eefa5345409efe13f6a1de22ab c076c42271004e7aa86e84416e33f826 - - default default] The python binding code in neutronclient is deprecated in favor of OpenstackSDK, please use that as this will be removed in a future release.
Oct 09 16:30:45 compute-0 nova_compute[117331]: 2025-10-09 16:30:45.657 2 WARNING neutronclient.v2_0.client [None req-ef73c0c9-7283-4984-a115-f46b6c00b6e3 005258eefa5345409efe13f6a1de22ab c076c42271004e7aa86e84416e33f826 - - default default] The python binding code in neutronclient is deprecated in favor of OpenstackSDK, please use that as this will be removed in a future release.
Oct 09 16:30:45 compute-0 nova_compute[117331]: 2025-10-09 16:30:45.657 2 WARNING neutronclient.v2_0.client [None req-ef73c0c9-7283-4984-a115-f46b6c00b6e3 005258eefa5345409efe13f6a1de22ab c076c42271004e7aa86e84416e33f826 - - default default] The python binding code in neutronclient is deprecated in favor of OpenstackSDK, please use that as this will be removed in a future release.
Oct 09 16:30:45 compute-0 sshd-session[149538]: pam_unix(sshd:auth): check pass; user unknown
Oct 09 16:30:45 compute-0 sshd-session[149538]: pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=36.224.53.32
Oct 09 16:30:47 compute-0 nova_compute[117331]: 2025-10-09 16:30:47.076 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 09 16:30:47 compute-0 nova_compute[117331]: 2025-10-09 16:30:47.794 2 DEBUG oslo_concurrency.lockutils [None req-47fd7829-673f-4e16-9093-4f5996520901 ea71b67fc37c45859db08bb5231d7a56 c9d397897ee84f2e91c7e33f6c4052a3 - - default default] Acquiring lock "61ff0a09-3e7d-4597-9e71-af032c6a774f" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:405
Oct 09 16:30:47 compute-0 nova_compute[117331]: 2025-10-09 16:30:47.795 2 DEBUG oslo_concurrency.lockutils [None req-47fd7829-673f-4e16-9093-4f5996520901 ea71b67fc37c45859db08bb5231d7a56 c9d397897ee84f2e91c7e33f6c4052a3 - - default default] Lock "61ff0a09-3e7d-4597-9e71-af032c6a774f" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:410
Oct 09 16:30:47 compute-0 nova_compute[117331]: 2025-10-09 16:30:47.795 2 DEBUG oslo_concurrency.lockutils [None req-47fd7829-673f-4e16-9093-4f5996520901 ea71b67fc37c45859db08bb5231d7a56 c9d397897ee84f2e91c7e33f6c4052a3 - - default default] Acquiring lock "61ff0a09-3e7d-4597-9e71-af032c6a774f-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:405
Oct 09 16:30:47 compute-0 nova_compute[117331]: 2025-10-09 16:30:47.796 2 DEBUG oslo_concurrency.lockutils [None req-47fd7829-673f-4e16-9093-4f5996520901 ea71b67fc37c45859db08bb5231d7a56 c9d397897ee84f2e91c7e33f6c4052a3 - - default default] Lock "61ff0a09-3e7d-4597-9e71-af032c6a774f-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:410
Oct 09 16:30:47 compute-0 nova_compute[117331]: 2025-10-09 16:30:47.796 2 DEBUG oslo_concurrency.lockutils [None req-47fd7829-673f-4e16-9093-4f5996520901 ea71b67fc37c45859db08bb5231d7a56 c9d397897ee84f2e91c7e33f6c4052a3 - - default default] Lock "61ff0a09-3e7d-4597-9e71-af032c6a774f-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:424
Oct 09 16:30:47 compute-0 nova_compute[117331]: 2025-10-09 16:30:47.816 2 INFO nova.compute.manager [None req-47fd7829-673f-4e16-9093-4f5996520901 ea71b67fc37c45859db08bb5231d7a56 c9d397897ee84f2e91c7e33f6c4052a3 - - default default] [instance: 61ff0a09-3e7d-4597-9e71-af032c6a774f] Terminating instance
Oct 09 16:30:48 compute-0 sshd-session[149538]: Failed password for invalid user glassfish from 36.224.53.32 port 56602 ssh2
Oct 09 16:30:48 compute-0 sshd-session[149538]: Connection closed by invalid user glassfish 36.224.53.32 port 56602 [preauth]
Oct 09 16:30:48 compute-0 nova_compute[117331]: 2025-10-09 16:30:48.376 2 DEBUG nova.compute.manager [None req-47fd7829-673f-4e16-9093-4f5996520901 ea71b67fc37c45859db08bb5231d7a56 c9d397897ee84f2e91c7e33f6c4052a3 - - default default] [instance: 61ff0a09-3e7d-4597-9e71-af032c6a774f] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.12/site-packages/nova/compute/manager.py:3197
Oct 09 16:30:48 compute-0 kernel: tap2a45be1f-c2 (unregistering): left promiscuous mode
Oct 09 16:30:48 compute-0 NetworkManager[1028]: <info>  [1760027448.4138] device (tap2a45be1f-c2): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Oct 09 16:30:48 compute-0 ovn_controller[19752]: 2025-10-09T16:30:48Z|00196|binding|INFO|Releasing lport 2a45be1f-c21d-4446-9ec2-ecbb726e290e from this chassis (sb_readonly=0)
Oct 09 16:30:48 compute-0 ovn_controller[19752]: 2025-10-09T16:30:48Z|00197|binding|INFO|Setting lport 2a45be1f-c21d-4446-9ec2-ecbb726e290e down in Southbound
Oct 09 16:30:48 compute-0 nova_compute[117331]: 2025-10-09 16:30:48.428 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 09 16:30:48 compute-0 ovn_controller[19752]: 2025-10-09T16:30:48Z|00198|binding|INFO|Removing iface tap2a45be1f-c2 ovn-installed in OVS
Oct 09 16:30:48 compute-0 nova_compute[117331]: 2025-10-09 16:30:48.430 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 09 16:30:48 compute-0 ovn_metadata_agent[28608]: 2025-10-09 16:30:48.434 28613 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:f7:5a:4f 10.100.0.10'], port_security=['fa:16:3e:f7:5a:4f 10.100.0.10'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.10/28', 'neutron:device_id': '61ff0a09-3e7d-4597-9e71-af032c6a774f', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-84247d99-b9fb-4f75-af06-dd3e92557a34', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'c9d397897ee84f2e91c7e33f6c4052a3', 'neutron:revision_number': '15', 'neutron:security_group_ids': '2f00962f-5e5e-41c9-9e03-af9a8b3a3c40', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=83685448-9277-4bf7-9118-ff77ca4cb703, chassis=[], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f37b2576840>], logical_port=2a45be1f-c21d-4446-9ec2-ecbb726e290e) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f37b2576840>]) matches /usr/lib/python3.12/site-packages/ovsdbapp/backend/ovs_idl/event.py:55
Oct 09 16:30:48 compute-0 ovn_metadata_agent[28608]: 2025-10-09 16:30:48.435 28613 INFO neutron.agent.ovn.metadata.agent [-] Port 2a45be1f-c21d-4446-9ec2-ecbb726e290e in datapath 84247d99-b9fb-4f75-af06-dd3e92557a34 unbound from our chassis
Oct 09 16:30:48 compute-0 ovn_metadata_agent[28608]: 2025-10-09 16:30:48.436 28613 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 84247d99-b9fb-4f75-af06-dd3e92557a34
Oct 09 16:30:48 compute-0 nova_compute[117331]: 2025-10-09 16:30:48.456 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 09 16:30:48 compute-0 ovn_metadata_agent[28608]: 2025-10-09 16:30:48.461 139687 DEBUG oslo.privsep.daemon [-] privsep: reply[038302c1-d612-4ce0-a4e4-ea016b614ea1]: (4, True) _call_back /usr/lib/python3.12/site-packages/oslo_privsep/daemon.py:515
Oct 09 16:30:48 compute-0 ovn_metadata_agent[28608]: 2025-10-09 16:30:48.491 141048 DEBUG oslo.privsep.daemon [-] privsep: reply[db127c30-0b34-4c0f-94be-bd1eee899466]: (4, None) _call_back /usr/lib/python3.12/site-packages/oslo_privsep/daemon.py:515
Oct 09 16:30:48 compute-0 ovn_metadata_agent[28608]: 2025-10-09 16:30:48.495 141048 DEBUG oslo.privsep.daemon [-] privsep: reply[9a14eeb1-f5f0-46ce-899e-a145ef5cbf8a]: (4, None) _call_back /usr/lib/python3.12/site-packages/oslo_privsep/daemon.py:515
Oct 09 16:30:48 compute-0 systemd[1]: machine-qemu\x2d15\x2dinstance\x2d00000015.scope: Deactivated successfully.
Oct 09 16:30:48 compute-0 systemd[1]: machine-qemu\x2d15\x2dinstance\x2d00000015.scope: Consumed 2.219s CPU time.
Oct 09 16:30:48 compute-0 systemd-machined[77487]: Machine qemu-15-instance-00000015 terminated.
Oct 09 16:30:48 compute-0 podman[149541]: 2025-10-09 16:30:48.507785941 +0000 UTC m=+0.063718223 container health_status 270830b5167cafbab735d8118980f26987e673cd0fa464dd2202394830bcca66 (image=38.102.83.66:5001/podified-master-centos10/openstack-multipathd:watcher_latest, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': '38.102.83.66:5001/podified-master-centos10/openstack-multipathd:watcher_latest', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true, config_id=multipathd, container_name=multipathd, io.buildah.version=1.41.4, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=watcher_latest, managed_by=edpm_ansible, org.label-schema.build-date=20251007)
Oct 09 16:30:48 compute-0 ovn_metadata_agent[28608]: 2025-10-09 16:30:48.524 141048 DEBUG oslo.privsep.daemon [-] privsep: reply[5645a1ea-aa8a-414f-b5f9-34ffb5c41c62]: (4, None) _call_back /usr/lib/python3.12/site-packages/oslo_privsep/daemon.py:515
Oct 09 16:30:48 compute-0 ovn_metadata_agent[28608]: 2025-10-09 16:30:48.540 139687 DEBUG oslo.privsep.daemon [-] privsep: reply[3f3a0636-3d04-4445-89ca-32ec01480d5e]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap84247d99-b1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:10:14:aa'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 30, 'tx_packets': 7, 'rx_bytes': 1756, 'tx_bytes': 438, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 30, 'tx_packets': 7, 'rx_bytes': 1756, 'tx_bytes': 438, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 
0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 53], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 230833, 'reachable_time': 36167, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 8, 'inoctets': 720, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 8, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 720, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 8, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 149570, 'error': None, 'target': 'ovnmeta-84247d99-b9fb-4f75-af06-dd3e92557a34', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.12/site-packages/oslo_privsep/daemon.py:515
Oct 09 16:30:48 compute-0 ovn_metadata_agent[28608]: 2025-10-09 16:30:48.554 139687 DEBUG oslo.privsep.daemon [-] privsep: reply[c50d4c12-cacf-428e-9913-1d9243d01af6]: (4, ({'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tap84247d99-b1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 230841, 'tstamp': 230841}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 149572, 'error': None, 'target': 'ovnmeta-84247d99-b9fb-4f75-af06-dd3e92557a34', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tap84247d99-b1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 230844, 'tstamp': 230844}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 149572, 'error': None, 'target': 'ovnmeta-84247d99-b9fb-4f75-af06-dd3e92557a34', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.12/site-packages/oslo_privsep/daemon.py:515
Oct 09 16:30:48 compute-0 ovn_metadata_agent[28608]: 2025-10-09 16:30:48.555 28613 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap84247d99-b0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.12/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 09 16:30:48 compute-0 nova_compute[117331]: 2025-10-09 16:30:48.557 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 09 16:30:48 compute-0 nova_compute[117331]: 2025-10-09 16:30:48.561 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 09 16:30:48 compute-0 ovn_metadata_agent[28608]: 2025-10-09 16:30:48.561 28613 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap84247d99-b0, may_exist=True, interface_attrs={}) do_commit /usr/lib/python3.12/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 09 16:30:48 compute-0 ovn_metadata_agent[28608]: 2025-10-09 16:30:48.562 28613 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.12/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 09 16:30:48 compute-0 ovn_metadata_agent[28608]: 2025-10-09 16:30:48.562 28613 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap84247d99-b0, col_values=(('external_ids', {'iface-id': '091675d2-c61b-4a23-b064-d4ca0295fca5'}),), if_exists=True) do_commit /usr/lib/python3.12/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 09 16:30:48 compute-0 ovn_metadata_agent[28608]: 2025-10-09 16:30:48.562 28613 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.12/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 09 16:30:48 compute-0 ovn_metadata_agent[28608]: 2025-10-09 16:30:48.563 139687 DEBUG oslo.privsep.daemon [-] privsep: reply[aa15cffc-232b-45e5-b460-8d8ceacd3510]: (4, '\nglobal\n    log         /dev/log local0 debug\n    log-tag     haproxy-metadata-proxy-84247d99-b9fb-4f75-af06-dd3e92557a34\n    user        root\n    group       root\n    maxconn     1024\n    pidfile     /var/lib/neutron/external/pids/84247d99-b9fb-4f75-af06-dd3e92557a34.pid.haproxy\n    daemon\n\ndefaults\n    log global\n    mode http\n    option httplog\n    option dontlognull\n    option http-server-close\n    option forwardfor\n    retries                 3\n    timeout http-request    30s\n    timeout connect         30s\n    timeout client          32s\n    timeout server          32s\n    timeout http-keep-alive 30s\n\nlisten listener\n    bind 169.254.169.254:80\n    \n    server metadata /var/lib/neutron/metadata_proxy\n\n    http-request add-header X-OVN-Network-ID 84247d99-b9fb-4f75-af06-dd3e92557a34\n') _call_back /usr/lib/python3.12/site-packages/oslo_privsep/daemon.py:515
Oct 09 16:30:48 compute-0 nova_compute[117331]: 2025-10-09 16:30:48.632 2 INFO nova.virt.libvirt.driver [-] [instance: 61ff0a09-3e7d-4597-9e71-af032c6a774f] Instance destroyed successfully.
Oct 09 16:30:48 compute-0 nova_compute[117331]: 2025-10-09 16:30:48.632 2 DEBUG nova.objects.instance [None req-47fd7829-673f-4e16-9093-4f5996520901 ea71b67fc37c45859db08bb5231d7a56 c9d397897ee84f2e91c7e33f6c4052a3 - - default default] Lazy-loading 'resources' on Instance uuid 61ff0a09-3e7d-4597-9e71-af032c6a774f obj_load_attr /usr/lib/python3.12/site-packages/nova/objects/instance.py:1141
Oct 09 16:30:49 compute-0 nova_compute[117331]: 2025-10-09 16:30:49.139 2 DEBUG nova.virt.libvirt.vif [None req-47fd7829-673f-4e16-9093-4f5996520901 ea71b67fc37c45859db08bb5231d7a56 c9d397897ee84f2e91c7e33f6c4052a3 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=True,compute_id=1,config_drive='True',created_at=2025-10-09T16:29:31Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description=None,display_name='tempest-TestExecuteVmWorkloadBalanceStrategy-server-513673060',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testexecutevmworkloadbalancestrategy-server-513673060',id=21,image_ref='b7d6e0af-25e4-4227-9dc6-43143898ceee',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-10-09T16:29:47Z,launched_on='compute-1.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='c9d397897ee84f2e91c7e33f6c4052a3',ramdisk_id='',reservation_id='r-ncyxwz27',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member,manager,admin',clean_attempts='1',image_base_image_ref='b7d6e0af-25e4-4227-9dc6-43143898ceee',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_m
odel='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestExecuteVmWorkloadBalanceStrategy-1125458264',owner_user_name='tempest-TestExecuteVmWorkloadBalanceStrategy-1125458264-project-admin'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-10-09T16:30:44Z,user_data=None,user_id='ea71b67fc37c45859db08bb5231d7a56',uuid=61ff0a09-3e7d-4597-9e71-af032c6a774f,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "2a45be1f-c21d-4446-9ec2-ecbb726e290e", "address": "fa:16:3e:f7:5a:4f", "network": {"id": "84247d99-b9fb-4f75-af06-dd3e92557a34", "bridge": "br-int", "label": "tempest-TestExecuteVmWorkloadBalanceStrategy-187667545-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "3f838d4a2de74d8fbb8e91e7ef351b24", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2a45be1f-c2", "ovs_interfaceid": "2a45be1f-c21d-4446-9ec2-ecbb726e290e", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {"os_vif_delegation": true}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.12/site-packages/nova/virt/libvirt/vif.py:839
Oct 09 16:30:49 compute-0 nova_compute[117331]: 2025-10-09 16:30:49.139 2 DEBUG nova.network.os_vif_util [None req-47fd7829-673f-4e16-9093-4f5996520901 ea71b67fc37c45859db08bb5231d7a56 c9d397897ee84f2e91c7e33f6c4052a3 - - default default] Converting VIF {"id": "2a45be1f-c21d-4446-9ec2-ecbb726e290e", "address": "fa:16:3e:f7:5a:4f", "network": {"id": "84247d99-b9fb-4f75-af06-dd3e92557a34", "bridge": "br-int", "label": "tempest-TestExecuteVmWorkloadBalanceStrategy-187667545-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "3f838d4a2de74d8fbb8e91e7ef351b24", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2a45be1f-c2", "ovs_interfaceid": "2a45be1f-c21d-4446-9ec2-ecbb726e290e", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {"os_vif_delegation": true}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.12/site-packages/nova/network/os_vif_util.py:511
Oct 09 16:30:49 compute-0 nova_compute[117331]: 2025-10-09 16:30:49.139 2 DEBUG nova.network.os_vif_util [None req-47fd7829-673f-4e16-9093-4f5996520901 ea71b67fc37c45859db08bb5231d7a56 c9d397897ee84f2e91c7e33f6c4052a3 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:f7:5a:4f,bridge_name='br-int',has_traffic_filtering=True,id=2a45be1f-c21d-4446-9ec2-ecbb726e290e,network=Network(84247d99-b9fb-4f75-af06-dd3e92557a34),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap2a45be1f-c2') nova_to_osvif_vif /usr/lib/python3.12/site-packages/nova/network/os_vif_util.py:548
Oct 09 16:30:49 compute-0 nova_compute[117331]: 2025-10-09 16:30:49.140 2 DEBUG os_vif [None req-47fd7829-673f-4e16-9093-4f5996520901 ea71b67fc37c45859db08bb5231d7a56 c9d397897ee84f2e91c7e33f6c4052a3 - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:f7:5a:4f,bridge_name='br-int',has_traffic_filtering=True,id=2a45be1f-c21d-4446-9ec2-ecbb726e290e,network=Network(84247d99-b9fb-4f75-af06-dd3e92557a34),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap2a45be1f-c2') unplug /usr/lib/python3.12/site-packages/os_vif/__init__.py:109
Oct 09 16:30:49 compute-0 nova_compute[117331]: 2025-10-09 16:30:49.142 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 22 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 09 16:30:49 compute-0 nova_compute[117331]: 2025-10-09 16:30:49.142 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap2a45be1f-c2, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.12/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 09 16:30:49 compute-0 nova_compute[117331]: 2025-10-09 16:30:49.144 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 09 16:30:49 compute-0 nova_compute[117331]: 2025-10-09 16:30:49.145 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 22 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 09 16:30:49 compute-0 nova_compute[117331]: 2025-10-09 16:30:49.145 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbDestroyCommand(_result=None, table=QoS, record=0ca7b11a-26fb-4513-994a-79c348018022) do_commit /usr/lib/python3.12/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 09 16:30:49 compute-0 nova_compute[117331]: 2025-10-09 16:30:49.146 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 09 16:30:49 compute-0 nova_compute[117331]: 2025-10-09 16:30:49.147 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 09 16:30:49 compute-0 nova_compute[117331]: 2025-10-09 16:30:49.149 2 INFO os_vif [None req-47fd7829-673f-4e16-9093-4f5996520901 ea71b67fc37c45859db08bb5231d7a56 c9d397897ee84f2e91c7e33f6c4052a3 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:f7:5a:4f,bridge_name='br-int',has_traffic_filtering=True,id=2a45be1f-c21d-4446-9ec2-ecbb726e290e,network=Network(84247d99-b9fb-4f75-af06-dd3e92557a34),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap2a45be1f-c2')
Oct 09 16:30:49 compute-0 nova_compute[117331]: 2025-10-09 16:30:49.149 2 INFO nova.virt.libvirt.driver [None req-47fd7829-673f-4e16-9093-4f5996520901 ea71b67fc37c45859db08bb5231d7a56 c9d397897ee84f2e91c7e33f6c4052a3 - - default default] [instance: 61ff0a09-3e7d-4597-9e71-af032c6a774f] Deleting instance files /var/lib/nova/instances/61ff0a09-3e7d-4597-9e71-af032c6a774f_del
Oct 09 16:30:49 compute-0 nova_compute[117331]: 2025-10-09 16:30:49.150 2 INFO nova.virt.libvirt.driver [None req-47fd7829-673f-4e16-9093-4f5996520901 ea71b67fc37c45859db08bb5231d7a56 c9d397897ee84f2e91c7e33f6c4052a3 - - default default] [instance: 61ff0a09-3e7d-4597-9e71-af032c6a774f] Deletion of /var/lib/nova/instances/61ff0a09-3e7d-4597-9e71-af032c6a774f_del complete
Oct 09 16:30:49 compute-0 nova_compute[117331]: 2025-10-09 16:30:49.240 2 DEBUG nova.compute.manager [req-1b8456f2-9023-45a7-beae-64759c3af00b req-b6474cfa-9f4f-4572-a23b-9edfc6d066be ee15961ad2be4bada6c106ac4f69f422 c076c42271004e7aa86e84416e33f826 - - default default] [instance: 61ff0a09-3e7d-4597-9e71-af032c6a774f] Received event network-vif-unplugged-2a45be1f-c21d-4446-9ec2-ecbb726e290e external_instance_event /usr/lib/python3.12/site-packages/nova/compute/manager.py:11812
Oct 09 16:30:49 compute-0 nova_compute[117331]: 2025-10-09 16:30:49.241 2 DEBUG oslo_concurrency.lockutils [req-1b8456f2-9023-45a7-beae-64759c3af00b req-b6474cfa-9f4f-4572-a23b-9edfc6d066be ee15961ad2be4bada6c106ac4f69f422 c076c42271004e7aa86e84416e33f826 - - default default] Acquiring lock "61ff0a09-3e7d-4597-9e71-af032c6a774f-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:405
Oct 09 16:30:49 compute-0 nova_compute[117331]: 2025-10-09 16:30:49.241 2 DEBUG oslo_concurrency.lockutils [req-1b8456f2-9023-45a7-beae-64759c3af00b req-b6474cfa-9f4f-4572-a23b-9edfc6d066be ee15961ad2be4bada6c106ac4f69f422 c076c42271004e7aa86e84416e33f826 - - default default] Lock "61ff0a09-3e7d-4597-9e71-af032c6a774f-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:410
Oct 09 16:30:49 compute-0 nova_compute[117331]: 2025-10-09 16:30:49.241 2 DEBUG oslo_concurrency.lockutils [req-1b8456f2-9023-45a7-beae-64759c3af00b req-b6474cfa-9f4f-4572-a23b-9edfc6d066be ee15961ad2be4bada6c106ac4f69f422 c076c42271004e7aa86e84416e33f826 - - default default] Lock "61ff0a09-3e7d-4597-9e71-af032c6a774f-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:424
Oct 09 16:30:49 compute-0 nova_compute[117331]: 2025-10-09 16:30:49.241 2 DEBUG nova.compute.manager [req-1b8456f2-9023-45a7-beae-64759c3af00b req-b6474cfa-9f4f-4572-a23b-9edfc6d066be ee15961ad2be4bada6c106ac4f69f422 c076c42271004e7aa86e84416e33f826 - - default default] [instance: 61ff0a09-3e7d-4597-9e71-af032c6a774f] No waiting events found dispatching network-vif-unplugged-2a45be1f-c21d-4446-9ec2-ecbb726e290e pop_instance_event /usr/lib/python3.12/site-packages/nova/compute/manager.py:344
Oct 09 16:30:49 compute-0 nova_compute[117331]: 2025-10-09 16:30:49.241 2 DEBUG nova.compute.manager [req-1b8456f2-9023-45a7-beae-64759c3af00b req-b6474cfa-9f4f-4572-a23b-9edfc6d066be ee15961ad2be4bada6c106ac4f69f422 c076c42271004e7aa86e84416e33f826 - - default default] [instance: 61ff0a09-3e7d-4597-9e71-af032c6a774f] Received event network-vif-unplugged-2a45be1f-c21d-4446-9ec2-ecbb726e290e for instance with task_state deleting. _process_instance_event /usr/lib/python3.12/site-packages/nova/compute/manager.py:11590
Oct 09 16:30:49 compute-0 nova_compute[117331]: 2025-10-09 16:30:49.662 2 INFO nova.compute.manager [None req-47fd7829-673f-4e16-9093-4f5996520901 ea71b67fc37c45859db08bb5231d7a56 c9d397897ee84f2e91c7e33f6c4052a3 - - default default] [instance: 61ff0a09-3e7d-4597-9e71-af032c6a774f] Took 1.29 seconds to destroy the instance on the hypervisor.
Oct 09 16:30:49 compute-0 nova_compute[117331]: 2025-10-09 16:30:49.663 2 DEBUG oslo.service.backend._eventlet.loopingcall [None req-47fd7829-673f-4e16-9093-4f5996520901 ea71b67fc37c45859db08bb5231d7a56 c9d397897ee84f2e91c7e33f6c4052a3 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.12/site-packages/oslo_service/backend/_eventlet/loopingcall.py:437
Oct 09 16:30:49 compute-0 nova_compute[117331]: 2025-10-09 16:30:49.663 2 DEBUG nova.compute.manager [-] [instance: 61ff0a09-3e7d-4597-9e71-af032c6a774f] Deallocating network for instance _deallocate_network /usr/lib/python3.12/site-packages/nova/compute/manager.py:2324
Oct 09 16:30:49 compute-0 nova_compute[117331]: 2025-10-09 16:30:49.664 2 DEBUG nova.network.neutron [-] [instance: 61ff0a09-3e7d-4597-9e71-af032c6a774f] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.12/site-packages/nova/network/neutron.py:1863
Oct 09 16:30:49 compute-0 nova_compute[117331]: 2025-10-09 16:30:49.664 2 WARNING neutronclient.v2_0.client [-] The python binding code in neutronclient is deprecated in favor of OpenstackSDK, please use that as this will be removed in a future release.
Oct 09 16:30:50 compute-0 nova_compute[117331]: 2025-10-09 16:30:50.100 2 WARNING neutronclient.v2_0.client [-] The python binding code in neutronclient is deprecated in favor of OpenstackSDK, please use that as this will be removed in a future release.
Oct 09 16:30:50 compute-0 nova_compute[117331]: 2025-10-09 16:30:50.427 2 DEBUG nova.compute.manager [req-b30f9498-f926-4d75-a50d-b82257d8bd14 req-8b900a6a-a732-49af-87c0-4c614288c419 ee15961ad2be4bada6c106ac4f69f422 c076c42271004e7aa86e84416e33f826 - - default default] [instance: 61ff0a09-3e7d-4597-9e71-af032c6a774f] Received event network-vif-deleted-2a45be1f-c21d-4446-9ec2-ecbb726e290e external_instance_event /usr/lib/python3.12/site-packages/nova/compute/manager.py:11812
Oct 09 16:30:50 compute-0 nova_compute[117331]: 2025-10-09 16:30:50.428 2 INFO nova.compute.manager [req-b30f9498-f926-4d75-a50d-b82257d8bd14 req-8b900a6a-a732-49af-87c0-4c614288c419 ee15961ad2be4bada6c106ac4f69f422 c076c42271004e7aa86e84416e33f826 - - default default] [instance: 61ff0a09-3e7d-4597-9e71-af032c6a774f] Neutron deleted interface 2a45be1f-c21d-4446-9ec2-ecbb726e290e; detaching it from the instance and deleting it from the info cache
Oct 09 16:30:50 compute-0 nova_compute[117331]: 2025-10-09 16:30:50.428 2 DEBUG nova.network.neutron [req-b30f9498-f926-4d75-a50d-b82257d8bd14 req-8b900a6a-a732-49af-87c0-4c614288c419 ee15961ad2be4bada6c106ac4f69f422 c076c42271004e7aa86e84416e33f826 - - default default] [instance: 61ff0a09-3e7d-4597-9e71-af032c6a774f] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.12/site-packages/nova/network/neutron.py:116
Oct 09 16:30:50 compute-0 podman[149592]: 2025-10-09 16:30:50.824496599 +0000 UTC m=+0.055541244 container health_status 86aa93e887899e54d383b0fc17fb3c5637680d1d8e146a130a557c837f0260fe (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter)
Oct 09 16:30:50 compute-0 nova_compute[117331]: 2025-10-09 16:30:50.876 2 DEBUG nova.network.neutron [-] [instance: 61ff0a09-3e7d-4597-9e71-af032c6a774f] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.12/site-packages/nova/network/neutron.py:116
Oct 09 16:30:50 compute-0 nova_compute[117331]: 2025-10-09 16:30:50.946 2 DEBUG nova.compute.manager [req-b30f9498-f926-4d75-a50d-b82257d8bd14 req-8b900a6a-a732-49af-87c0-4c614288c419 ee15961ad2be4bada6c106ac4f69f422 c076c42271004e7aa86e84416e33f826 - - default default] [instance: 61ff0a09-3e7d-4597-9e71-af032c6a774f] Detach interface failed, port_id=2a45be1f-c21d-4446-9ec2-ecbb726e290e, reason: Instance 61ff0a09-3e7d-4597-9e71-af032c6a774f could not be found. _process_instance_vif_deleted_event /usr/lib/python3.12/site-packages/nova/compute/manager.py:11646
Oct 09 16:30:51 compute-0 nova_compute[117331]: 2025-10-09 16:30:51.308 2 DEBUG nova.compute.manager [req-3ec05310-d059-4eaa-96f6-6480c8836f10 req-4b1b909c-4d6b-4857-a1f1-625585a08a00 ee15961ad2be4bada6c106ac4f69f422 c076c42271004e7aa86e84416e33f826 - - default default] [instance: 61ff0a09-3e7d-4597-9e71-af032c6a774f] Received event network-vif-unplugged-2a45be1f-c21d-4446-9ec2-ecbb726e290e external_instance_event /usr/lib/python3.12/site-packages/nova/compute/manager.py:11812
Oct 09 16:30:51 compute-0 nova_compute[117331]: 2025-10-09 16:30:51.308 2 DEBUG oslo_concurrency.lockutils [req-3ec05310-d059-4eaa-96f6-6480c8836f10 req-4b1b909c-4d6b-4857-a1f1-625585a08a00 ee15961ad2be4bada6c106ac4f69f422 c076c42271004e7aa86e84416e33f826 - - default default] Acquiring lock "61ff0a09-3e7d-4597-9e71-af032c6a774f-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:405
Oct 09 16:30:51 compute-0 nova_compute[117331]: 2025-10-09 16:30:51.309 2 DEBUG oslo_concurrency.lockutils [req-3ec05310-d059-4eaa-96f6-6480c8836f10 req-4b1b909c-4d6b-4857-a1f1-625585a08a00 ee15961ad2be4bada6c106ac4f69f422 c076c42271004e7aa86e84416e33f826 - - default default] Lock "61ff0a09-3e7d-4597-9e71-af032c6a774f-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:410
Oct 09 16:30:51 compute-0 nova_compute[117331]: 2025-10-09 16:30:51.309 2 DEBUG oslo_concurrency.lockutils [req-3ec05310-d059-4eaa-96f6-6480c8836f10 req-4b1b909c-4d6b-4857-a1f1-625585a08a00 ee15961ad2be4bada6c106ac4f69f422 c076c42271004e7aa86e84416e33f826 - - default default] Lock "61ff0a09-3e7d-4597-9e71-af032c6a774f-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:424
Oct 09 16:30:51 compute-0 nova_compute[117331]: 2025-10-09 16:30:51.309 2 DEBUG nova.compute.manager [req-3ec05310-d059-4eaa-96f6-6480c8836f10 req-4b1b909c-4d6b-4857-a1f1-625585a08a00 ee15961ad2be4bada6c106ac4f69f422 c076c42271004e7aa86e84416e33f826 - - default default] [instance: 61ff0a09-3e7d-4597-9e71-af032c6a774f] No waiting events found dispatching network-vif-unplugged-2a45be1f-c21d-4446-9ec2-ecbb726e290e pop_instance_event /usr/lib/python3.12/site-packages/nova/compute/manager.py:344
Oct 09 16:30:51 compute-0 nova_compute[117331]: 2025-10-09 16:30:51.309 2 DEBUG nova.compute.manager [req-3ec05310-d059-4eaa-96f6-6480c8836f10 req-4b1b909c-4d6b-4857-a1f1-625585a08a00 ee15961ad2be4bada6c106ac4f69f422 c076c42271004e7aa86e84416e33f826 - - default default] [instance: 61ff0a09-3e7d-4597-9e71-af032c6a774f] Received event network-vif-unplugged-2a45be1f-c21d-4446-9ec2-ecbb726e290e for instance with task_state deleting. _process_instance_event /usr/lib/python3.12/site-packages/nova/compute/manager.py:11590
Oct 09 16:30:51 compute-0 nova_compute[117331]: 2025-10-09 16:30:51.384 2 INFO nova.compute.manager [-] [instance: 61ff0a09-3e7d-4597-9e71-af032c6a774f] Took 1.72 seconds to deallocate network for instance.
Oct 09 16:30:51 compute-0 nova_compute[117331]: 2025-10-09 16:30:51.910 2 DEBUG oslo_concurrency.lockutils [None req-47fd7829-673f-4e16-9093-4f5996520901 ea71b67fc37c45859db08bb5231d7a56 c9d397897ee84f2e91c7e33f6c4052a3 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:405
Oct 09 16:30:51 compute-0 nova_compute[117331]: 2025-10-09 16:30:51.911 2 DEBUG oslo_concurrency.lockutils [None req-47fd7829-673f-4e16-9093-4f5996520901 ea71b67fc37c45859db08bb5231d7a56 c9d397897ee84f2e91c7e33f6c4052a3 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.000s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:410
Oct 09 16:30:51 compute-0 nova_compute[117331]: 2025-10-09 16:30:51.916 2 DEBUG oslo_concurrency.lockutils [None req-47fd7829-673f-4e16-9093-4f5996520901 ea71b67fc37c45859db08bb5231d7a56 c9d397897ee84f2e91c7e33f6c4052a3 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.006s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:424
Oct 09 16:30:51 compute-0 nova_compute[117331]: 2025-10-09 16:30:51.939 2 INFO nova.scheduler.client.report [None req-47fd7829-673f-4e16-9093-4f5996520901 ea71b67fc37c45859db08bb5231d7a56 c9d397897ee84f2e91c7e33f6c4052a3 - - default default] Deleted allocations for instance 61ff0a09-3e7d-4597-9e71-af032c6a774f
Oct 09 16:30:52 compute-0 nova_compute[117331]: 2025-10-09 16:30:52.078 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 09 16:30:52 compute-0 nova_compute[117331]: 2025-10-09 16:30:52.970 2 DEBUG oslo_concurrency.lockutils [None req-47fd7829-673f-4e16-9093-4f5996520901 ea71b67fc37c45859db08bb5231d7a56 c9d397897ee84f2e91c7e33f6c4052a3 - - default default] Lock "61ff0a09-3e7d-4597-9e71-af032c6a774f" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 5.175s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:424
Oct 09 16:30:53 compute-0 sshd-session[149540]: Invalid user vagrant from 36.224.53.32 port 37350
Oct 09 16:30:53 compute-0 nova_compute[117331]: 2025-10-09 16:30:53.656 2 DEBUG oslo_concurrency.lockutils [None req-875a7b71-7a49-4a86-94a6-106f12f47146 ea71b67fc37c45859db08bb5231d7a56 c9d397897ee84f2e91c7e33f6c4052a3 - - default default] Acquiring lock "144cc4ab-6004-4896-8899-8860557341b0" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:405
Oct 09 16:30:53 compute-0 nova_compute[117331]: 2025-10-09 16:30:53.656 2 DEBUG oslo_concurrency.lockutils [None req-875a7b71-7a49-4a86-94a6-106f12f47146 ea71b67fc37c45859db08bb5231d7a56 c9d397897ee84f2e91c7e33f6c4052a3 - - default default] Lock "144cc4ab-6004-4896-8899-8860557341b0" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:410
Oct 09 16:30:53 compute-0 nova_compute[117331]: 2025-10-09 16:30:53.657 2 DEBUG oslo_concurrency.lockutils [None req-875a7b71-7a49-4a86-94a6-106f12f47146 ea71b67fc37c45859db08bb5231d7a56 c9d397897ee84f2e91c7e33f6c4052a3 - - default default] Acquiring lock "144cc4ab-6004-4896-8899-8860557341b0-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:405
Oct 09 16:30:53 compute-0 nova_compute[117331]: 2025-10-09 16:30:53.657 2 DEBUG oslo_concurrency.lockutils [None req-875a7b71-7a49-4a86-94a6-106f12f47146 ea71b67fc37c45859db08bb5231d7a56 c9d397897ee84f2e91c7e33f6c4052a3 - - default default] Lock "144cc4ab-6004-4896-8899-8860557341b0-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:410
Oct 09 16:30:53 compute-0 nova_compute[117331]: 2025-10-09 16:30:53.657 2 DEBUG oslo_concurrency.lockutils [None req-875a7b71-7a49-4a86-94a6-106f12f47146 ea71b67fc37c45859db08bb5231d7a56 c9d397897ee84f2e91c7e33f6c4052a3 - - default default] Lock "144cc4ab-6004-4896-8899-8860557341b0-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:424
Oct 09 16:30:53 compute-0 nova_compute[117331]: 2025-10-09 16:30:53.669 2 INFO nova.compute.manager [None req-875a7b71-7a49-4a86-94a6-106f12f47146 ea71b67fc37c45859db08bb5231d7a56 c9d397897ee84f2e91c7e33f6c4052a3 - - default default] [instance: 144cc4ab-6004-4896-8899-8860557341b0] Terminating instance
Oct 09 16:30:54 compute-0 nova_compute[117331]: 2025-10-09 16:30:54.148 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 09 16:30:54 compute-0 nova_compute[117331]: 2025-10-09 16:30:54.182 2 DEBUG nova.compute.manager [None req-875a7b71-7a49-4a86-94a6-106f12f47146 ea71b67fc37c45859db08bb5231d7a56 c9d397897ee84f2e91c7e33f6c4052a3 - - default default] [instance: 144cc4ab-6004-4896-8899-8860557341b0] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.12/site-packages/nova/compute/manager.py:3197
Oct 09 16:30:54 compute-0 kernel: tap1e79672b-17 (unregistering): left promiscuous mode
Oct 09 16:30:54 compute-0 NetworkManager[1028]: <info>  [1760027454.2082] device (tap1e79672b-17): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Oct 09 16:30:54 compute-0 sshd-session[149540]: pam_unix(sshd:auth): check pass; user unknown
Oct 09 16:30:54 compute-0 sshd-session[149540]: pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=36.224.53.32
Oct 09 16:30:54 compute-0 nova_compute[117331]: 2025-10-09 16:30:54.217 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 09 16:30:54 compute-0 ovn_controller[19752]: 2025-10-09T16:30:54Z|00199|binding|INFO|Releasing lport 1e79672b-1776-4292-b861-3666e6b4dc69 from this chassis (sb_readonly=0)
Oct 09 16:30:54 compute-0 ovn_controller[19752]: 2025-10-09T16:30:54Z|00200|binding|INFO|Setting lport 1e79672b-1776-4292-b861-3666e6b4dc69 down in Southbound
Oct 09 16:30:54 compute-0 ovn_controller[19752]: 2025-10-09T16:30:54Z|00201|binding|INFO|Removing iface tap1e79672b-17 ovn-installed in OVS
Oct 09 16:30:54 compute-0 ovn_metadata_agent[28608]: 2025-10-09 16:30:54.225 28613 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:de:0a:d8 10.100.0.8'], port_security=['fa:16:3e:de:0a:d8 10.100.0.8'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.8/28', 'neutron:device_id': '144cc4ab-6004-4896-8899-8860557341b0', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-84247d99-b9fb-4f75-af06-dd3e92557a34', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'c9d397897ee84f2e91c7e33f6c4052a3', 'neutron:revision_number': '5', 'neutron:security_group_ids': '2f00962f-5e5e-41c9-9e03-af9a8b3a3c40', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=83685448-9277-4bf7-9118-ff77ca4cb703, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f37b2576840>], logical_port=1e79672b-1776-4292-b861-3666e6b4dc69) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f37b2576840>]) matches /usr/lib/python3.12/site-packages/ovsdbapp/backend/ovs_idl/event.py:55
Oct 09 16:30:54 compute-0 ovn_metadata_agent[28608]: 2025-10-09 16:30:54.227 28613 INFO neutron.agent.ovn.metadata.agent [-] Port 1e79672b-1776-4292-b861-3666e6b4dc69 in datapath 84247d99-b9fb-4f75-af06-dd3e92557a34 unbound from our chassis
Oct 09 16:30:54 compute-0 ovn_metadata_agent[28608]: 2025-10-09 16:30:54.228 28613 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 84247d99-b9fb-4f75-af06-dd3e92557a34, tearing the namespace down if needed _get_provision_params /usr/lib/python3.12/site-packages/neutron/agent/ovn/metadata/agent.py:756
Oct 09 16:30:54 compute-0 ovn_metadata_agent[28608]: 2025-10-09 16:30:54.229 139687 DEBUG oslo.privsep.daemon [-] privsep: reply[3525c193-b956-4638-8ab1-3f01f0b5d597]: (4, True) _call_back /usr/lib/python3.12/site-packages/oslo_privsep/daemon.py:515
Oct 09 16:30:54 compute-0 ovn_metadata_agent[28608]: 2025-10-09 16:30:54.229 28613 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-84247d99-b9fb-4f75-af06-dd3e92557a34 namespace which is not needed anymore
Oct 09 16:30:54 compute-0 nova_compute[117331]: 2025-10-09 16:30:54.247 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 09 16:30:54 compute-0 systemd[1]: machine-qemu\x2d14\x2dinstance\x2d00000014.scope: Deactivated successfully.
Oct 09 16:30:54 compute-0 systemd[1]: machine-qemu\x2d14\x2dinstance\x2d00000014.scope: Consumed 15.372s CPU time.
Oct 09 16:30:54 compute-0 systemd-machined[77487]: Machine qemu-14-instance-00000014 terminated.
Oct 09 16:30:54 compute-0 podman[149642]: 2025-10-09 16:30:54.349424825 +0000 UTC m=+0.029727780 container kill f05642e7a9cb4e79d64d2d1dd00dd50188be0c698fc2836314fcedb587c79c63 (image=38.102.83.66:5001/podified-master-centos10/openstack-neutron-metadata-agent-ovn:watcher_latest, name=neutron-haproxy-ovnmeta-84247d99-b9fb-4f75-af06-dd3e92557a34, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_managed=true, io.buildah.version=1.41.4, org.label-schema.build-date=20251007, tcib_build_tag=watcher_latest, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.schema-version=1.0)
Oct 09 16:30:54 compute-0 neutron-haproxy-ovnmeta-84247d99-b9fb-4f75-af06-dd3e92557a34[149073]: [NOTICE]   (149077) : haproxy version is 3.0.5-8e879a5
Oct 09 16:30:54 compute-0 neutron-haproxy-ovnmeta-84247d99-b9fb-4f75-af06-dd3e92557a34[149073]: [NOTICE]   (149077) : path to executable is /usr/sbin/haproxy
Oct 09 16:30:54 compute-0 neutron-haproxy-ovnmeta-84247d99-b9fb-4f75-af06-dd3e92557a34[149073]: [WARNING]  (149077) : Exiting Master process...
Oct 09 16:30:54 compute-0 nova_compute[117331]: 2025-10-09 16:30:54.349 2 DEBUG nova.compute.manager [req-8211be1b-467c-424d-927b-ff9bdcaf846c req-013ffaaf-af08-4ff3-883f-62066ad0ff6a ee15961ad2be4bada6c106ac4f69f422 c076c42271004e7aa86e84416e33f826 - - default default] [instance: 144cc4ab-6004-4896-8899-8860557341b0] Received event network-vif-unplugged-1e79672b-1776-4292-b861-3666e6b4dc69 external_instance_event /usr/lib/python3.12/site-packages/nova/compute/manager.py:11812
Oct 09 16:30:54 compute-0 nova_compute[117331]: 2025-10-09 16:30:54.349 2 DEBUG oslo_concurrency.lockutils [req-8211be1b-467c-424d-927b-ff9bdcaf846c req-013ffaaf-af08-4ff3-883f-62066ad0ff6a ee15961ad2be4bada6c106ac4f69f422 c076c42271004e7aa86e84416e33f826 - - default default] Acquiring lock "144cc4ab-6004-4896-8899-8860557341b0-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:405
Oct 09 16:30:54 compute-0 nova_compute[117331]: 2025-10-09 16:30:54.350 2 DEBUG oslo_concurrency.lockutils [req-8211be1b-467c-424d-927b-ff9bdcaf846c req-013ffaaf-af08-4ff3-883f-62066ad0ff6a ee15961ad2be4bada6c106ac4f69f422 c076c42271004e7aa86e84416e33f826 - - default default] Lock "144cc4ab-6004-4896-8899-8860557341b0-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:410
Oct 09 16:30:54 compute-0 nova_compute[117331]: 2025-10-09 16:30:54.350 2 DEBUG oslo_concurrency.lockutils [req-8211be1b-467c-424d-927b-ff9bdcaf846c req-013ffaaf-af08-4ff3-883f-62066ad0ff6a ee15961ad2be4bada6c106ac4f69f422 c076c42271004e7aa86e84416e33f826 - - default default] Lock "144cc4ab-6004-4896-8899-8860557341b0-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:424
Oct 09 16:30:54 compute-0 nova_compute[117331]: 2025-10-09 16:30:54.350 2 DEBUG nova.compute.manager [req-8211be1b-467c-424d-927b-ff9bdcaf846c req-013ffaaf-af08-4ff3-883f-62066ad0ff6a ee15961ad2be4bada6c106ac4f69f422 c076c42271004e7aa86e84416e33f826 - - default default] [instance: 144cc4ab-6004-4896-8899-8860557341b0] No waiting events found dispatching network-vif-unplugged-1e79672b-1776-4292-b861-3666e6b4dc69 pop_instance_event /usr/lib/python3.12/site-packages/nova/compute/manager.py:344
Oct 09 16:30:54 compute-0 nova_compute[117331]: 2025-10-09 16:30:54.350 2 DEBUG nova.compute.manager [req-8211be1b-467c-424d-927b-ff9bdcaf846c req-013ffaaf-af08-4ff3-883f-62066ad0ff6a ee15961ad2be4bada6c106ac4f69f422 c076c42271004e7aa86e84416e33f826 - - default default] [instance: 144cc4ab-6004-4896-8899-8860557341b0] Received event network-vif-unplugged-1e79672b-1776-4292-b861-3666e6b4dc69 for instance with task_state deleting. _process_instance_event /usr/lib/python3.12/site-packages/nova/compute/manager.py:11590
Oct 09 16:30:54 compute-0 neutron-haproxy-ovnmeta-84247d99-b9fb-4f75-af06-dd3e92557a34[149073]: [ALERT]    (149077) : Current worker (149079) exited with code 143 (Terminated)
Oct 09 16:30:54 compute-0 neutron-haproxy-ovnmeta-84247d99-b9fb-4f75-af06-dd3e92557a34[149073]: [WARNING]  (149077) : All workers exited. Exiting... (0)
Oct 09 16:30:54 compute-0 systemd[1]: libpod-f05642e7a9cb4e79d64d2d1dd00dd50188be0c698fc2836314fcedb587c79c63.scope: Deactivated successfully.
Oct 09 16:30:54 compute-0 podman[149657]: 2025-10-09 16:30:54.391032899 +0000 UTC m=+0.023964089 container died f05642e7a9cb4e79d64d2d1dd00dd50188be0c698fc2836314fcedb587c79c63 (image=38.102.83.66:5001/podified-master-centos10/openstack-neutron-metadata-agent-ovn:watcher_latest, name=neutron-haproxy-ovnmeta-84247d99-b9fb-4f75-af06-dd3e92557a34, org.label-schema.build-date=20251007, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 10 Base Image, tcib_managed=true, tcib_build_tag=watcher_latest, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.4)
Oct 09 16:30:54 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-f05642e7a9cb4e79d64d2d1dd00dd50188be0c698fc2836314fcedb587c79c63-userdata-shm.mount: Deactivated successfully.
Oct 09 16:30:54 compute-0 systemd[1]: var-lib-containers-storage-overlay-5662258e0c0a0d8b044a81e74030053f016eba96b96ec3641000824b1b2c0068-merged.mount: Deactivated successfully.
Oct 09 16:30:54 compute-0 podman[149657]: 2025-10-09 16:30:54.447238593 +0000 UTC m=+0.080169763 container cleanup f05642e7a9cb4e79d64d2d1dd00dd50188be0c698fc2836314fcedb587c79c63 (image=38.102.83.66:5001/podified-master-centos10/openstack-neutron-metadata-agent-ovn:watcher_latest, name=neutron-haproxy-ovnmeta-84247d99-b9fb-4f75-af06-dd3e92557a34, org.label-schema.build-date=20251007, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.4, org.label-schema.license=GPLv2, tcib_managed=true, tcib_build_tag=watcher_latest, maintainer=OpenStack Kubernetes Operator team)
Oct 09 16:30:54 compute-0 nova_compute[117331]: 2025-10-09 16:30:54.451 2 INFO nova.virt.libvirt.driver [-] [instance: 144cc4ab-6004-4896-8899-8860557341b0] Instance destroyed successfully.
Oct 09 16:30:54 compute-0 nova_compute[117331]: 2025-10-09 16:30:54.451 2 DEBUG nova.objects.instance [None req-875a7b71-7a49-4a86-94a6-106f12f47146 ea71b67fc37c45859db08bb5231d7a56 c9d397897ee84f2e91c7e33f6c4052a3 - - default default] Lazy-loading 'resources' on Instance uuid 144cc4ab-6004-4896-8899-8860557341b0 obj_load_attr /usr/lib/python3.12/site-packages/nova/objects/instance.py:1141
Oct 09 16:30:54 compute-0 systemd[1]: libpod-conmon-f05642e7a9cb4e79d64d2d1dd00dd50188be0c698fc2836314fcedb587c79c63.scope: Deactivated successfully.
Oct 09 16:30:54 compute-0 podman[149659]: 2025-10-09 16:30:54.460923935 +0000 UTC m=+0.085490660 container remove f05642e7a9cb4e79d64d2d1dd00dd50188be0c698fc2836314fcedb587c79c63 (image=38.102.83.66:5001/podified-master-centos10/openstack-neutron-metadata-agent-ovn:watcher_latest, name=neutron-haproxy-ovnmeta-84247d99-b9fb-4f75-af06-dd3e92557a34, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, tcib_build_tag=watcher_latest, org.label-schema.build-date=20251007, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.41.4, maintainer=OpenStack Kubernetes Operator team)
Oct 09 16:30:54 compute-0 ovn_metadata_agent[28608]: 2025-10-09 16:30:54.481 139687 DEBUG oslo.privsep.daemon [-] privsep: reply[60820572-c552-4a79-9754-a082a57b4f07]: (4, ("Thu Oct  9 04:30:54 PM UTC 2025 Sending signal '15' to neutron-haproxy-ovnmeta-84247d99-b9fb-4f75-af06-dd3e92557a34 (f05642e7a9cb4e79d64d2d1dd00dd50188be0c698fc2836314fcedb587c79c63)\nf05642e7a9cb4e79d64d2d1dd00dd50188be0c698fc2836314fcedb587c79c63\nThu Oct  9 04:30:54 PM UTC 2025 Deleting container neutron-haproxy-ovnmeta-84247d99-b9fb-4f75-af06-dd3e92557a34 (f05642e7a9cb4e79d64d2d1dd00dd50188be0c698fc2836314fcedb587c79c63)\nf05642e7a9cb4e79d64d2d1dd00dd50188be0c698fc2836314fcedb587c79c63\n", '', 0)) _call_back /usr/lib/python3.12/site-packages/oslo_privsep/daemon.py:515
Oct 09 16:30:54 compute-0 ovn_metadata_agent[28608]: 2025-10-09 16:30:54.482 139687 DEBUG oslo.privsep.daemon [-] privsep: reply[fe9fc316-e99e-4da2-99e8-25e0953d5801]: (4, None) _call_back /usr/lib/python3.12/site-packages/oslo_privsep/daemon.py:515
Oct 09 16:30:54 compute-0 ovn_metadata_agent[28608]: 2025-10-09 16:30:54.483 28613 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/84247d99-b9fb-4f75-af06-dd3e92557a34.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/84247d99-b9fb-4f75-af06-dd3e92557a34.pid.haproxy' get_value_from_file /usr/lib/python3.12/site-packages/neutron/agent/linux/utils.py:268
Oct 09 16:30:54 compute-0 ovn_metadata_agent[28608]: 2025-10-09 16:30:54.483 139687 DEBUG oslo.privsep.daemon [-] privsep: reply[69c753a4-6473-4993-91c8-d6c42abbd4e6]: (4, None) _call_back /usr/lib/python3.12/site-packages/oslo_privsep/daemon.py:515
Oct 09 16:30:54 compute-0 ovn_metadata_agent[28608]: 2025-10-09 16:30:54.484 28613 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap84247d99-b0, bridge=None, if_exists=True) do_commit /usr/lib/python3.12/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 09 16:30:54 compute-0 nova_compute[117331]: 2025-10-09 16:30:54.516 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 09 16:30:54 compute-0 kernel: tap84247d99-b0: left promiscuous mode
Oct 09 16:30:54 compute-0 nova_compute[117331]: 2025-10-09 16:30:54.535 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 09 16:30:54 compute-0 ovn_metadata_agent[28608]: 2025-10-09 16:30:54.537 139687 DEBUG oslo.privsep.daemon [-] privsep: reply[695b75f9-729a-46df-bd54-9c9890acab48]: (4, True) _call_back /usr/lib/python3.12/site-packages/oslo_privsep/daemon.py:515
Oct 09 16:30:54 compute-0 ovn_metadata_agent[28608]: 2025-10-09 16:30:54.568 139687 DEBUG oslo.privsep.daemon [-] privsep: reply[db9528e4-701a-4760-b33d-9482a4e36fac]: (4, None) _call_back /usr/lib/python3.12/site-packages/oslo_privsep/daemon.py:515
Oct 09 16:30:54 compute-0 ovn_metadata_agent[28608]: 2025-10-09 16:30:54.569 139687 DEBUG oslo.privsep.daemon [-] privsep: reply[66954fc9-9fd2-4129-a20b-8a5a1d074077]: (4, True) _call_back /usr/lib/python3.12/site-packages/oslo_privsep/daemon.py:515
Oct 09 16:30:54 compute-0 ovn_metadata_agent[28608]: 2025-10-09 16:30:54.588 139687 DEBUG oslo.privsep.daemon [-] privsep: reply[1f95f614-a4f7-47ff-b238-6c7f56484304]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 230827, 'reachable_time': 39021, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 149707, 'error': None, 'target': 'ovnmeta-84247d99-b9fb-4f75-af06-dd3e92557a34', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.12/site-packages/oslo_privsep/daemon.py:515
Oct 09 16:30:54 compute-0 ovn_metadata_agent[28608]: 2025-10-09 16:30:54.590 28727 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-84247d99-b9fb-4f75-af06-dd3e92557a34 deleted. remove_netns /usr/lib/python3.12/site-packages/neutron/privileged/agent/linux/ip_lib.py:603
Oct 09 16:30:54 compute-0 systemd[1]: run-netns-ovnmeta\x2d84247d99\x2db9fb\x2d4f75\x2daf06\x2ddd3e92557a34.mount: Deactivated successfully.
Oct 09 16:30:54 compute-0 ovn_metadata_agent[28608]: 2025-10-09 16:30:54.590 28727 DEBUG oslo.privsep.daemon [-] privsep: reply[e67e2576-0d0c-4497-b2a4-49c5a37fb497]: (4, None) _call_back /usr/lib/python3.12/site-packages/oslo_privsep/daemon.py:515
Oct 09 16:30:54 compute-0 nova_compute[117331]: 2025-10-09 16:30:54.960 2 DEBUG nova.virt.libvirt.vif [None req-875a7b71-7a49-4a86-94a6-106f12f47146 ea71b67fc37c45859db08bb5231d7a56 c9d397897ee84f2e91c7e33f6c4052a3 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,compute_id=1,config_drive='True',created_at=2025-10-09T16:29:10Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description=None,display_name='tempest-TestExecuteVmWorkloadBalanceStrategy-server-1280932969',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testexecutevmworkloadbalancestrategy-server-1280932969',id=20,image_ref='b7d6e0af-25e4-4227-9dc6-43143898ceee',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-10-09T16:29:26Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='c9d397897ee84f2e91c7e33f6c4052a3',ramdisk_id='',reservation_id='r-5y8wuj64',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member,manager,admin',image_base_image_ref='b7d6e0af-25e4-4227-9dc6-43143898ceee',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestExecuteVmWorkloadBalanceStrategy-1125458264',owner_user_name='tempest-TestExecuteVmWorkloadBalanceStrategy-1125458264-project-admin'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-10-09T16:29:26Z,user_data=None,user_id='ea71b67fc37c45859db08bb5231d7a56',uuid=144cc4ab-6004-4896-8899-8860557341b0,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "1e79672b-1776-4292-b861-3666e6b4dc69", "address": "fa:16:3e:de:0a:d8", "network": {"id": "84247d99-b9fb-4f75-af06-dd3e92557a34", "bridge": "br-int", "label": "tempest-TestExecuteVmWorkloadBalanceStrategy-187667545-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "3f838d4a2de74d8fbb8e91e7ef351b24", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap1e79672b-17", "ovs_interfaceid": "1e79672b-1776-4292-b861-3666e6b4dc69", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.12/site-packages/nova/virt/libvirt/vif.py:839
Oct 09 16:30:54 compute-0 nova_compute[117331]: 2025-10-09 16:30:54.960 2 DEBUG nova.network.os_vif_util [None req-875a7b71-7a49-4a86-94a6-106f12f47146 ea71b67fc37c45859db08bb5231d7a56 c9d397897ee84f2e91c7e33f6c4052a3 - - default default] Converting VIF {"id": "1e79672b-1776-4292-b861-3666e6b4dc69", "address": "fa:16:3e:de:0a:d8", "network": {"id": "84247d99-b9fb-4f75-af06-dd3e92557a34", "bridge": "br-int", "label": "tempest-TestExecuteVmWorkloadBalanceStrategy-187667545-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "3f838d4a2de74d8fbb8e91e7ef351b24", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap1e79672b-17", "ovs_interfaceid": "1e79672b-1776-4292-b861-3666e6b4dc69", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.12/site-packages/nova/network/os_vif_util.py:511
Oct 09 16:30:54 compute-0 nova_compute[117331]: 2025-10-09 16:30:54.961 2 DEBUG nova.network.os_vif_util [None req-875a7b71-7a49-4a86-94a6-106f12f47146 ea71b67fc37c45859db08bb5231d7a56 c9d397897ee84f2e91c7e33f6c4052a3 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:de:0a:d8,bridge_name='br-int',has_traffic_filtering=True,id=1e79672b-1776-4292-b861-3666e6b4dc69,network=Network(84247d99-b9fb-4f75-af06-dd3e92557a34),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap1e79672b-17') nova_to_osvif_vif /usr/lib/python3.12/site-packages/nova/network/os_vif_util.py:548
Oct 09 16:30:54 compute-0 nova_compute[117331]: 2025-10-09 16:30:54.961 2 DEBUG os_vif [None req-875a7b71-7a49-4a86-94a6-106f12f47146 ea71b67fc37c45859db08bb5231d7a56 c9d397897ee84f2e91c7e33f6c4052a3 - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:de:0a:d8,bridge_name='br-int',has_traffic_filtering=True,id=1e79672b-1776-4292-b861-3666e6b4dc69,network=Network(84247d99-b9fb-4f75-af06-dd3e92557a34),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap1e79672b-17') unplug /usr/lib/python3.12/site-packages/os_vif/__init__.py:109
Oct 09 16:30:54 compute-0 nova_compute[117331]: 2025-10-09 16:30:54.962 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 22 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 09 16:30:54 compute-0 nova_compute[117331]: 2025-10-09 16:30:54.962 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap1e79672b-17, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.12/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 09 16:30:54 compute-0 nova_compute[117331]: 2025-10-09 16:30:54.963 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 09 16:30:54 compute-0 nova_compute[117331]: 2025-10-09 16:30:54.965 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 09 16:30:54 compute-0 nova_compute[117331]: 2025-10-09 16:30:54.965 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 22 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 09 16:30:54 compute-0 nova_compute[117331]: 2025-10-09 16:30:54.965 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbDestroyCommand(_result=None, table=QoS, record=c3de9522-a440-4d52-910d-396d4d624273) do_commit /usr/lib/python3.12/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 09 16:30:54 compute-0 nova_compute[117331]: 2025-10-09 16:30:54.966 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 09 16:30:54 compute-0 nova_compute[117331]: 2025-10-09 16:30:54.966 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 09 16:30:54 compute-0 nova_compute[117331]: 2025-10-09 16:30:54.968 2 INFO os_vif [None req-875a7b71-7a49-4a86-94a6-106f12f47146 ea71b67fc37c45859db08bb5231d7a56 c9d397897ee84f2e91c7e33f6c4052a3 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:de:0a:d8,bridge_name='br-int',has_traffic_filtering=True,id=1e79672b-1776-4292-b861-3666e6b4dc69,network=Network(84247d99-b9fb-4f75-af06-dd3e92557a34),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap1e79672b-17')
Oct 09 16:30:54 compute-0 nova_compute[117331]: 2025-10-09 16:30:54.968 2 INFO nova.virt.libvirt.driver [None req-875a7b71-7a49-4a86-94a6-106f12f47146 ea71b67fc37c45859db08bb5231d7a56 c9d397897ee84f2e91c7e33f6c4052a3 - - default default] [instance: 144cc4ab-6004-4896-8899-8860557341b0] Deleting instance files /var/lib/nova/instances/144cc4ab-6004-4896-8899-8860557341b0_del
Oct 09 16:30:54 compute-0 nova_compute[117331]: 2025-10-09 16:30:54.969 2 INFO nova.virt.libvirt.driver [None req-875a7b71-7a49-4a86-94a6-106f12f47146 ea71b67fc37c45859db08bb5231d7a56 c9d397897ee84f2e91c7e33f6c4052a3 - - default default] [instance: 144cc4ab-6004-4896-8899-8860557341b0] Deletion of /var/lib/nova/instances/144cc4ab-6004-4896-8899-8860557341b0_del complete
Oct 09 16:30:55 compute-0 nova_compute[117331]: 2025-10-09 16:30:55.480 2 INFO nova.compute.manager [None req-875a7b71-7a49-4a86-94a6-106f12f47146 ea71b67fc37c45859db08bb5231d7a56 c9d397897ee84f2e91c7e33f6c4052a3 - - default default] [instance: 144cc4ab-6004-4896-8899-8860557341b0] Took 1.30 seconds to destroy the instance on the hypervisor.
Oct 09 16:30:55 compute-0 nova_compute[117331]: 2025-10-09 16:30:55.480 2 DEBUG oslo.service.backend._eventlet.loopingcall [None req-875a7b71-7a49-4a86-94a6-106f12f47146 ea71b67fc37c45859db08bb5231d7a56 c9d397897ee84f2e91c7e33f6c4052a3 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.12/site-packages/oslo_service/backend/_eventlet/loopingcall.py:437
Oct 09 16:30:55 compute-0 nova_compute[117331]: 2025-10-09 16:30:55.481 2 DEBUG nova.compute.manager [-] [instance: 144cc4ab-6004-4896-8899-8860557341b0] Deallocating network for instance _deallocate_network /usr/lib/python3.12/site-packages/nova/compute/manager.py:2324
Oct 09 16:30:55 compute-0 nova_compute[117331]: 2025-10-09 16:30:55.481 2 DEBUG nova.network.neutron [-] [instance: 144cc4ab-6004-4896-8899-8860557341b0] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.12/site-packages/nova/network/neutron.py:1863
Oct 09 16:30:55 compute-0 nova_compute[117331]: 2025-10-09 16:30:55.481 2 WARNING neutronclient.v2_0.client [-] The python binding code in neutronclient is deprecated in favor of OpenstackSDK, please use that as this will be removed in a future release.
Oct 09 16:30:56 compute-0 sshd-session[149540]: Failed password for invalid user vagrant from 36.224.53.32 port 37350 ssh2
Oct 09 16:30:56 compute-0 nova_compute[117331]: 2025-10-09 16:30:56.222 2 WARNING neutronclient.v2_0.client [-] The python binding code in neutronclient is deprecated in favor of OpenstackSDK, please use that as this will be removed in a future release.
Oct 09 16:30:56 compute-0 nova_compute[117331]: 2025-10-09 16:30:56.418 2 DEBUG nova.compute.manager [req-d36ceec3-4e06-4256-954a-0a13c587d9b6 req-893d5fa2-52b9-452a-8142-9340ec78bdb9 ee15961ad2be4bada6c106ac4f69f422 c076c42271004e7aa86e84416e33f826 - - default default] [instance: 144cc4ab-6004-4896-8899-8860557341b0] Received event network-vif-unplugged-1e79672b-1776-4292-b861-3666e6b4dc69 external_instance_event /usr/lib/python3.12/site-packages/nova/compute/manager.py:11812
Oct 09 16:30:56 compute-0 nova_compute[117331]: 2025-10-09 16:30:56.419 2 DEBUG oslo_concurrency.lockutils [req-d36ceec3-4e06-4256-954a-0a13c587d9b6 req-893d5fa2-52b9-452a-8142-9340ec78bdb9 ee15961ad2be4bada6c106ac4f69f422 c076c42271004e7aa86e84416e33f826 - - default default] Acquiring lock "144cc4ab-6004-4896-8899-8860557341b0-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:405
Oct 09 16:30:56 compute-0 nova_compute[117331]: 2025-10-09 16:30:56.419 2 DEBUG oslo_concurrency.lockutils [req-d36ceec3-4e06-4256-954a-0a13c587d9b6 req-893d5fa2-52b9-452a-8142-9340ec78bdb9 ee15961ad2be4bada6c106ac4f69f422 c076c42271004e7aa86e84416e33f826 - - default default] Lock "144cc4ab-6004-4896-8899-8860557341b0-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:410
Oct 09 16:30:56 compute-0 nova_compute[117331]: 2025-10-09 16:30:56.419 2 DEBUG oslo_concurrency.lockutils [req-d36ceec3-4e06-4256-954a-0a13c587d9b6 req-893d5fa2-52b9-452a-8142-9340ec78bdb9 ee15961ad2be4bada6c106ac4f69f422 c076c42271004e7aa86e84416e33f826 - - default default] Lock "144cc4ab-6004-4896-8899-8860557341b0-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:424
Oct 09 16:30:56 compute-0 nova_compute[117331]: 2025-10-09 16:30:56.420 2 DEBUG nova.compute.manager [req-d36ceec3-4e06-4256-954a-0a13c587d9b6 req-893d5fa2-52b9-452a-8142-9340ec78bdb9 ee15961ad2be4bada6c106ac4f69f422 c076c42271004e7aa86e84416e33f826 - - default default] [instance: 144cc4ab-6004-4896-8899-8860557341b0] No waiting events found dispatching network-vif-unplugged-1e79672b-1776-4292-b861-3666e6b4dc69 pop_instance_event /usr/lib/python3.12/site-packages/nova/compute/manager.py:344
Oct 09 16:30:56 compute-0 nova_compute[117331]: 2025-10-09 16:30:56.420 2 DEBUG nova.compute.manager [req-d36ceec3-4e06-4256-954a-0a13c587d9b6 req-893d5fa2-52b9-452a-8142-9340ec78bdb9 ee15961ad2be4bada6c106ac4f69f422 c076c42271004e7aa86e84416e33f826 - - default default] [instance: 144cc4ab-6004-4896-8899-8860557341b0] Received event network-vif-unplugged-1e79672b-1776-4292-b861-3666e6b4dc69 for instance with task_state deleting. _process_instance_event /usr/lib/python3.12/site-packages/nova/compute/manager.py:11590
Oct 09 16:30:56 compute-0 podman[149708]: 2025-10-09 16:30:56.829041066 +0000 UTC m=+0.060277236 container health_status 0e966c7a283689daf23c5db5017f23f685ee836319240188a9f70ddd246563c5 (image=38.102.83.66:5001/podified-master-centos10/openstack-neutron-metadata-agent-ovn:watcher_latest, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.build-date=20251007, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=watcher_latest, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': '38.102.83.66:5001/podified-master-centos10/openstack-neutron-metadata-agent-ovn:watcher_latest', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, io.buildah.version=1.41.4, org.label-schema.license=GPLv2)
Oct 09 16:30:56 compute-0 podman[149709]: 2025-10-09 16:30:56.838610078 +0000 UTC m=+0.065072816 container health_status 8227728d23f374cb9528be6ddf01dff38d8c792912a17b0d137932a18390b820 (image=38.102.83.66:5001/podified-master-centos10/openstack-iscsid:watcher_latest, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 10 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251007, tcib_build_tag=watcher_latest, tcib_managed=true, config_id=iscsid, org.label-schema.vendor=CentOS, container_name=iscsid, io.buildah.version=1.41.4, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': '38.102.83.66:5001/podified-master-centos10/openstack-iscsid:watcher_latest', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, managed_by=edpm_ansible)
Oct 09 16:30:57 compute-0 nova_compute[117331]: 2025-10-09 16:30:57.080 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 09 16:30:57 compute-0 nova_compute[117331]: 2025-10-09 16:30:57.112 2 DEBUG nova.network.neutron [-] [instance: 144cc4ab-6004-4896-8899-8860557341b0] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.12/site-packages/nova/network/neutron.py:116
Oct 09 16:30:57 compute-0 nova_compute[117331]: 2025-10-09 16:30:57.626 2 INFO nova.compute.manager [-] [instance: 144cc4ab-6004-4896-8899-8860557341b0] Took 2.15 seconds to deallocate network for instance.
Oct 09 16:30:57 compute-0 sshd-session[149540]: Connection closed by invalid user vagrant 36.224.53.32 port 37350 [preauth]
Oct 09 16:30:58 compute-0 nova_compute[117331]: 2025-10-09 16:30:58.152 2 DEBUG oslo_concurrency.lockutils [None req-875a7b71-7a49-4a86-94a6-106f12f47146 ea71b67fc37c45859db08bb5231d7a56 c9d397897ee84f2e91c7e33f6c4052a3 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:405
Oct 09 16:30:58 compute-0 nova_compute[117331]: 2025-10-09 16:30:58.153 2 DEBUG oslo_concurrency.lockutils [None req-875a7b71-7a49-4a86-94a6-106f12f47146 ea71b67fc37c45859db08bb5231d7a56 c9d397897ee84f2e91c7e33f6c4052a3 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:410
Oct 09 16:30:58 compute-0 nova_compute[117331]: 2025-10-09 16:30:58.206 2 DEBUG nova.compute.provider_tree [None req-875a7b71-7a49-4a86-94a6-106f12f47146 ea71b67fc37c45859db08bb5231d7a56 c9d397897ee84f2e91c7e33f6c4052a3 - - default default] Inventory has not changed in ProviderTree for provider: 593051b8-2000-437f-a915-2616fc8b1671 update_inventory /usr/lib/python3.12/site-packages/nova/compute/provider_tree.py:180
Oct 09 16:30:58 compute-0 nova_compute[117331]: 2025-10-09 16:30:58.479 2 DEBUG nova.compute.manager [req-d2f5e539-aeec-4712-842e-8ae009969ecf req-143b075b-da27-445b-a1c7-f99133357908 ee15961ad2be4bada6c106ac4f69f422 c076c42271004e7aa86e84416e33f826 - - default default] [instance: 144cc4ab-6004-4896-8899-8860557341b0] Received event network-vif-deleted-1e79672b-1776-4292-b861-3666e6b4dc69 external_instance_event /usr/lib/python3.12/site-packages/nova/compute/manager.py:11812
Oct 09 16:30:58 compute-0 nova_compute[117331]: 2025-10-09 16:30:58.716 2 DEBUG nova.scheduler.client.report [None req-875a7b71-7a49-4a86-94a6-106f12f47146 ea71b67fc37c45859db08bb5231d7a56 c9d397897ee84f2e91c7e33f6c4052a3 - - default default] Inventory has not changed for provider 593051b8-2000-437f-a915-2616fc8b1671 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 79, 'reserved': 1, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.12/site-packages/nova/scheduler/client/report.py:958
Oct 09 16:30:59 compute-0 nova_compute[117331]: 2025-10-09 16:30:59.227 2 DEBUG oslo_concurrency.lockutils [None req-875a7b71-7a49-4a86-94a6-106f12f47146 ea71b67fc37c45859db08bb5231d7a56 c9d397897ee84f2e91c7e33f6c4052a3 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 1.074s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:424
Oct 09 16:30:59 compute-0 nova_compute[117331]: 2025-10-09 16:30:59.267 2 INFO nova.scheduler.client.report [None req-875a7b71-7a49-4a86-94a6-106f12f47146 ea71b67fc37c45859db08bb5231d7a56 c9d397897ee84f2e91c7e33f6c4052a3 - - default default] Deleted allocations for instance 144cc4ab-6004-4896-8899-8860557341b0
Oct 09 16:30:59 compute-0 podman[127775]: time="2025-10-09T16:30:59Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Oct 09 16:30:59 compute-0 podman[127775]: @ - - [09/Oct/2025:16:30:59 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 19526 "" "Go-http-client/1.1"
Oct 09 16:30:59 compute-0 podman[127775]: @ - - [09/Oct/2025:16:30:59 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 3032 "" "Go-http-client/1.1"
Oct 09 16:30:59 compute-0 nova_compute[117331]: 2025-10-09 16:30:59.966 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 09 16:30:59 compute-0 sshd-session[149748]: Invalid user default from 36.224.53.32 port 46886
Oct 09 16:31:00 compute-0 nova_compute[117331]: 2025-10-09 16:31:00.313 2 DEBUG oslo_concurrency.lockutils [None req-875a7b71-7a49-4a86-94a6-106f12f47146 ea71b67fc37c45859db08bb5231d7a56 c9d397897ee84f2e91c7e33f6c4052a3 - - default default] Lock "144cc4ab-6004-4896-8899-8860557341b0" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 6.657s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:424
Oct 09 16:31:01 compute-0 openstack_network_exporter[129925]: ERROR   16:31:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Oct 09 16:31:01 compute-0 openstack_network_exporter[129925]: ERROR   16:31:01 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Oct 09 16:31:01 compute-0 openstack_network_exporter[129925]: ERROR   16:31:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Oct 09 16:31:01 compute-0 openstack_network_exporter[129925]: ERROR   16:31:01 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Oct 09 16:31:01 compute-0 openstack_network_exporter[129925]: 
Oct 09 16:31:01 compute-0 openstack_network_exporter[129925]: ERROR   16:31:01 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Oct 09 16:31:01 compute-0 openstack_network_exporter[129925]: 
Oct 09 16:31:02 compute-0 nova_compute[117331]: 2025-10-09 16:31:02.081 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 09 16:31:02 compute-0 sshd-session[149748]: pam_unix(sshd:auth): check pass; user unknown
Oct 09 16:31:02 compute-0 sshd-session[149748]: pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=36.224.53.32
Oct 09 16:31:04 compute-0 sshd-session[149748]: Failed password for invalid user default from 36.224.53.32 port 46886 ssh2
Oct 09 16:31:04 compute-0 podman[149754]: 2025-10-09 16:31:04.874391869 +0000 UTC m=+0.094816954 container health_status d615e4f24ff62fbb14be4a63e6da568ec5b22309773c1c66103b6b4de64a8a2f (image=38.102.83.66:5001/podified-master-centos10/openstack-ovn-controller:watcher_latest, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': '38.102.83.66:5001/podified-master-centos10/openstack-ovn-controller:watcher_latest', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, container_name=ovn_controller, io.buildah.version=1.41.4, org.label-schema.build-date=20251007, tcib_build_tag=watcher_latest, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2)
Oct 09 16:31:04 compute-0 podman[149753]: 2025-10-09 16:31:04.874249465 +0000 UTC m=+0.094409092 container health_status 64fa251112d01abb34ec1bb8e22019815ddcaaaa391ce5ec91517147ac5a9994 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, version=9.6, maintainer=Red Hat, Inc., vendor=Red Hat, Inc., container_name=openstack_network_exporter, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.expose-services=, vcs-type=git, managed_by=edpm_ansible, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., architecture=x86_64, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, name=ubi9-minimal, distribution-scope=public, release=1755695350, build-date=2025-08-20T13:12:41, io.buildah.version=1.33.7, config_id=edpm, com.redhat.component=ubi9-minimal-container, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, url=https://catalog.redhat.com/en/search?searchType=containers, io.openshift.tags=minimal rhel9)
Oct 09 16:31:04 compute-0 nova_compute[117331]: 2025-10-09 16:31:04.968 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 09 16:31:05 compute-0 sshd-session[149748]: Connection closed by invalid user default 36.224.53.32 port 46886 [preauth]
Oct 09 16:31:07 compute-0 nova_compute[117331]: 2025-10-09 16:31:07.084 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 09 16:31:07 compute-0 sshd-session[149752]: Invalid user gitea from 36.224.53.32 port 55194
Oct 09 16:31:08 compute-0 nova_compute[117331]: 2025-10-09 16:31:08.011 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 09 16:31:08 compute-0 sshd-session[149752]: pam_unix(sshd:auth): check pass; user unknown
Oct 09 16:31:08 compute-0 sshd-session[149752]: pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=36.224.53.32
Oct 09 16:31:09 compute-0 nova_compute[117331]: 2025-10-09 16:31:09.969 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 09 16:31:10 compute-0 sshd-session[149752]: Failed password for invalid user gitea from 36.224.53.32 port 55194 ssh2
Oct 09 16:31:11 compute-0 sshd-session[149752]: Connection closed by invalid user gitea 36.224.53.32 port 55194 [preauth]
Oct 09 16:31:12 compute-0 nova_compute[117331]: 2025-10-09 16:31:12.088 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 09 16:31:14 compute-0 nova_compute[117331]: 2025-10-09 16:31:14.972 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 09 16:31:16 compute-0 sshd-session[149802]: Invalid user kali from 36.224.53.32 port 34902
Oct 09 16:31:17 compute-0 nova_compute[117331]: 2025-10-09 16:31:17.090 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 09 16:31:17 compute-0 sshd-session[149802]: pam_unix(sshd:auth): check pass; user unknown
Oct 09 16:31:17 compute-0 sshd-session[149802]: pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=36.224.53.32
Oct 09 16:31:18 compute-0 podman[149804]: 2025-10-09 16:31:18.829456743 +0000 UTC m=+0.056891577 container health_status 270830b5167cafbab735d8118980f26987e673cd0fa464dd2202394830bcca66 (image=38.102.83.66:5001/podified-master-centos10/openstack-multipathd:watcher_latest, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': '38.102.83.66:5001/podified-master-centos10/openstack-multipathd:watcher_latest', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 10 Base Image, tcib_build_tag=watcher_latest, config_id=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251007, io.buildah.version=1.41.4, managed_by=edpm_ansible, org.label-schema.schema-version=1.0)
Oct 09 16:31:19 compute-0 sshd-session[149802]: Failed password for invalid user kali from 36.224.53.32 port 34902 ssh2
Oct 09 16:31:19 compute-0 sshd-session[149802]: Connection closed by invalid user kali 36.224.53.32 port 34902 [preauth]
Oct 09 16:31:19 compute-0 ovn_metadata_agent[28608]: 2025-10-09 16:31:19.129 28613 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:50:3a:49 10.100.0.2'], port_security=[], type=localport, nat_addresses=[], virtual_parent=[], up=[False], options={}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.2/28', 'neutron:device_id': 'ovnmeta-54b37568-476a-40a0-b545-fe5401f85653', 'neutron:device_owner': 'network:distributed', 'neutron:mtu': '', 'neutron:network_name': 'neutron-54b37568-476a-40a0-b545-fe5401f85653', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '036a4e356fb34effb6775ffe5bd9a19f', 'neutron:revision_number': '2', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=dc641581-8c4b-4cef-982e-bb0cf11a52ba, chassis=[], tunnel_key=1, gateway_chassis=[], requested_chassis=[], logical_port=01dee844-02ac-4caa-80f7-8a16019cbd9d) old=Port_Binding(mac=['fa:16:3e:50:3a:49'], external_ids={'neutron:cidrs': '', 'neutron:device_id': 'ovnmeta-54b37568-476a-40a0-b545-fe5401f85653', 'neutron:device_owner': 'network:distributed', 'neutron:mtu': '', 'neutron:network_name': 'neutron-54b37568-476a-40a0-b545-fe5401f85653', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '036a4e356fb34effb6775ffe5bd9a19f', 'neutron:revision_number': '1', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}) matches /usr/lib/python3.12/site-packages/ovsdbapp/backend/ovs_idl/event.py:55
Oct 09 16:31:19 compute-0 ovn_metadata_agent[28608]: 2025-10-09 16:31:19.130 28613 INFO neutron.agent.ovn.metadata.agent [-] Metadata Port 01dee844-02ac-4caa-80f7-8a16019cbd9d in datapath 54b37568-476a-40a0-b545-fe5401f85653 updated
Oct 09 16:31:19 compute-0 ovn_metadata_agent[28608]: 2025-10-09 16:31:19.132 28613 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 54b37568-476a-40a0-b545-fe5401f85653, tearing the namespace down if needed _get_provision_params /usr/lib/python3.12/site-packages/neutron/agent/ovn/metadata/agent.py:756
Oct 09 16:31:19 compute-0 ovn_metadata_agent[28608]: 2025-10-09 16:31:19.132 139687 DEBUG oslo.privsep.daemon [-] privsep: reply[0eaab97b-0dec-46af-85fd-c259501febcf]: (4, False) _call_back /usr/lib/python3.12/site-packages/oslo_privsep/daemon.py:515
Oct 09 16:31:19 compute-0 nova_compute[117331]: 2025-10-09 16:31:19.974 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 09 16:31:21 compute-0 podman[149827]: 2025-10-09 16:31:21.873628075 +0000 UTC m=+0.095797733 container health_status 86aa93e887899e54d383b0fc17fb3c5637680d1d8e146a130a557c837f0260fe (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>)
Oct 09 16:31:22 compute-0 nova_compute[117331]: 2025-10-09 16:31:22.091 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 09 16:31:22 compute-0 nova_compute[117331]: 2025-10-09 16:31:22.305 2 DEBUG oslo_service.periodic_task [None req-92a13bed-c081-4c7b-87f3-bafebefb79f2 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.12/site-packages/oslo_service/periodic_task.py:210
Oct 09 16:31:23 compute-0 sshd-session[149825]: Invalid user apache from 36.224.53.32 port 43492
Oct 09 16:31:24 compute-0 sshd-session[149825]: pam_unix(sshd:auth): check pass; user unknown
Oct 09 16:31:24 compute-0 sshd-session[149825]: pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=36.224.53.32
Oct 09 16:31:24 compute-0 nova_compute[117331]: 2025-10-09 16:31:24.976 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 09 16:31:26 compute-0 nova_compute[117331]: 2025-10-09 16:31:26.306 2 DEBUG oslo_service.periodic_task [None req-92a13bed-c081-4c7b-87f3-bafebefb79f2 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.12/site-packages/oslo_service/periodic_task.py:210
Oct 09 16:31:26 compute-0 nova_compute[117331]: 2025-10-09 16:31:26.306 2 DEBUG nova.compute.manager [None req-92a13bed-c081-4c7b-87f3-bafebefb79f2 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.12/site-packages/nova/compute/manager.py:11228
Oct 09 16:31:26 compute-0 sshd-session[149825]: Failed password for invalid user apache from 36.224.53.32 port 43492 ssh2
Oct 09 16:31:27 compute-0 nova_compute[117331]: 2025-10-09 16:31:27.101 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 09 16:31:27 compute-0 nova_compute[117331]: 2025-10-09 16:31:27.306 2 DEBUG oslo_service.periodic_task [None req-92a13bed-c081-4c7b-87f3-bafebefb79f2 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.12/site-packages/oslo_service/periodic_task.py:210
Oct 09 16:31:27 compute-0 sshd-session[149825]: Connection closed by invalid user apache 36.224.53.32 port 43492 [preauth]
Oct 09 16:31:27 compute-0 podman[149853]: 2025-10-09 16:31:27.831754005 +0000 UTC m=+0.056628930 container health_status 0e966c7a283689daf23c5db5017f23f685ee836319240188a9f70ddd246563c5 (image=38.102.83.66:5001/podified-master-centos10/openstack-neutron-metadata-agent-ovn:watcher_latest, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.4, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_build_tag=watcher_latest, config_id=ovn_metadata_agent, org.label-schema.license=GPLv2, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': '38.102.83.66:5001/podified-master-centos10/openstack-neutron-metadata-agent-ovn:watcher_latest', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 10 Base Image, managed_by=edpm_ansible, org.label-schema.build-date=20251007, org.label-schema.schema-version=1.0, container_name=ovn_metadata_agent)
Oct 09 16:31:27 compute-0 podman[149854]: 2025-10-09 16:31:27.877539388 +0000 UTC m=+0.096485854 container health_status 8227728d23f374cb9528be6ddf01dff38d8c792912a17b0d137932a18390b820 (image=38.102.83.66:5001/podified-master-centos10/openstack-iscsid:watcher_latest, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.41.4, managed_by=edpm_ansible, org.label-schema.build-date=20251007, org.label-schema.vendor=CentOS, tcib_build_tag=watcher_latest, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': '38.102.83.66:5001/podified-master-centos10/openstack-iscsid:watcher_latest', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, container_name=iscsid, maintainer=OpenStack Kubernetes Operator team, config_id=iscsid)
Oct 09 16:31:28 compute-0 nova_compute[117331]: 2025-10-09 16:31:28.305 2 DEBUG oslo_service.periodic_task [None req-92a13bed-c081-4c7b-87f3-bafebefb79f2 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.12/site-packages/oslo_service/periodic_task.py:210
Oct 09 16:31:28 compute-0 nova_compute[117331]: 2025-10-09 16:31:28.821 2 DEBUG oslo_concurrency.lockutils [None req-92a13bed-c081-4c7b-87f3-bafebefb79f2 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:405
Oct 09 16:31:28 compute-0 nova_compute[117331]: 2025-10-09 16:31:28.822 2 DEBUG oslo_concurrency.lockutils [None req-92a13bed-c081-4c7b-87f3-bafebefb79f2 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:410
Oct 09 16:31:28 compute-0 nova_compute[117331]: 2025-10-09 16:31:28.822 2 DEBUG oslo_concurrency.lockutils [None req-92a13bed-c081-4c7b-87f3-bafebefb79f2 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:424
Oct 09 16:31:28 compute-0 nova_compute[117331]: 2025-10-09 16:31:28.822 2 DEBUG nova.compute.resource_tracker [None req-92a13bed-c081-4c7b-87f3-bafebefb79f2 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.12/site-packages/nova/compute/resource_tracker.py:937
Oct 09 16:31:29 compute-0 nova_compute[117331]: 2025-10-09 16:31:29.000 2 WARNING nova.virt.libvirt.driver [None req-92a13bed-c081-4c7b-87f3-bafebefb79f2 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Oct 09 16:31:29 compute-0 nova_compute[117331]: 2025-10-09 16:31:29.001 2 DEBUG oslo_concurrency.processutils [None req-92a13bed-c081-4c7b-87f3-bafebefb79f2 - - - - - -] Running cmd (subprocess): env LANG=C uptime execute /usr/lib/python3.12/site-packages/oslo_concurrency/processutils.py:349
Oct 09 16:31:29 compute-0 nova_compute[117331]: 2025-10-09 16:31:29.023 2 DEBUG oslo_concurrency.processutils [None req-92a13bed-c081-4c7b-87f3-bafebefb79f2 - - - - - -] CMD "env LANG=C uptime" returned: 0 in 0.022s execute /usr/lib/python3.12/site-packages/oslo_concurrency/processutils.py:372
Oct 09 16:31:29 compute-0 nova_compute[117331]: 2025-10-09 16:31:29.024 2 DEBUG nova.compute.resource_tracker [None req-92a13bed-c081-4c7b-87f3-bafebefb79f2 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=6142MB free_disk=73.25733947753906GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.12/site-packages/nova/compute/resource_tracker.py:1136
Oct 09 16:31:29 compute-0 nova_compute[117331]: 2025-10-09 16:31:29.025 2 DEBUG oslo_concurrency.lockutils [None req-92a13bed-c081-4c7b-87f3-bafebefb79f2 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:405
Oct 09 16:31:29 compute-0 nova_compute[117331]: 2025-10-09 16:31:29.025 2 DEBUG oslo_concurrency.lockutils [None req-92a13bed-c081-4c7b-87f3-bafebefb79f2 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.001s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:410
Oct 09 16:31:29 compute-0 ovn_metadata_agent[28608]: 2025-10-09 16:31:29.628 28613 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:76:09:2b 10.100.0.2'], port_security=[], type=localport, nat_addresses=[], virtual_parent=[], up=[False], options={}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.2/28', 'neutron:device_id': 'ovnmeta-fb38d13f-f011-4a1a-94a7-194066f7105e', 'neutron:device_owner': 'network:distributed', 'neutron:mtu': '', 'neutron:network_name': 'neutron-fb38d13f-f011-4a1a-94a7-194066f7105e', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '6d67ac3076434a4582e5db1ca7d043ff', 'neutron:revision_number': '2', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=9bcfbb3d-0abd-45a1-bc0a-f8f1c590eb87, chassis=[], tunnel_key=1, gateway_chassis=[], requested_chassis=[], logical_port=29f47ef7-4539-48ee-9997-45ec55c3f208) old=Port_Binding(mac=['fa:16:3e:76:09:2b'], external_ids={'neutron:cidrs': '', 'neutron:device_id': 'ovnmeta-fb38d13f-f011-4a1a-94a7-194066f7105e', 'neutron:device_owner': 'network:distributed', 'neutron:mtu': '', 'neutron:network_name': 'neutron-fb38d13f-f011-4a1a-94a7-194066f7105e', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '6d67ac3076434a4582e5db1ca7d043ff', 'neutron:revision_number': '1', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}) matches /usr/lib/python3.12/site-packages/ovsdbapp/backend/ovs_idl/event.py:55
Oct 09 16:31:29 compute-0 ovn_metadata_agent[28608]: 2025-10-09 16:31:29.628 28613 INFO neutron.agent.ovn.metadata.agent [-] Metadata Port 29f47ef7-4539-48ee-9997-45ec55c3f208 in datapath fb38d13f-f011-4a1a-94a7-194066f7105e updated
Oct 09 16:31:29 compute-0 ovn_metadata_agent[28608]: 2025-10-09 16:31:29.629 28613 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network fb38d13f-f011-4a1a-94a7-194066f7105e, tearing the namespace down if needed _get_provision_params /usr/lib/python3.12/site-packages/neutron/agent/ovn/metadata/agent.py:756
Oct 09 16:31:29 compute-0 ovn_metadata_agent[28608]: 2025-10-09 16:31:29.630 139687 DEBUG oslo.privsep.daemon [-] privsep: reply[96e499f0-699c-407e-a153-dd8b5019f3f3]: (4, False) _call_back /usr/lib/python3.12/site-packages/oslo_privsep/daemon.py:515
Oct 09 16:31:29 compute-0 podman[127775]: time="2025-10-09T16:31:29Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Oct 09 16:31:29 compute-0 podman[127775]: @ - - [09/Oct/2025:16:31:29 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 19526 "" "Go-http-client/1.1"
Oct 09 16:31:29 compute-0 podman[127775]: @ - - [09/Oct/2025:16:31:29 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 3026 "" "Go-http-client/1.1"
Oct 09 16:31:29 compute-0 nova_compute[117331]: 2025-10-09 16:31:29.978 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 09 16:31:30 compute-0 nova_compute[117331]: 2025-10-09 16:31:30.063 2 DEBUG nova.compute.resource_tracker [None req-92a13bed-c081-4c7b-87f3-bafebefb79f2 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.12/site-packages/nova/compute/resource_tracker.py:1159
Oct 09 16:31:30 compute-0 nova_compute[117331]: 2025-10-09 16:31:30.063 2 DEBUG nova.compute.resource_tracker [None req-92a13bed-c081-4c7b-87f3-bafebefb79f2 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=79GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] stats={'failed_builds': '0', 'uptime': ' 16:31:29 up 40 min,  0 user,  load average: 0.63, 0.64, 0.48\n'} _report_final_resource_view /usr/lib/python3.12/site-packages/nova/compute/resource_tracker.py:1168
Oct 09 16:31:30 compute-0 nova_compute[117331]: 2025-10-09 16:31:30.082 2 DEBUG nova.scheduler.client.report [None req-92a13bed-c081-4c7b-87f3-bafebefb79f2 - - - - - -] Refreshing inventories for resource provider 593051b8-2000-437f-a915-2616fc8b1671 _refresh_associations /usr/lib/python3.12/site-packages/nova/scheduler/client/report.py:822
Oct 09 16:31:30 compute-0 nova_compute[117331]: 2025-10-09 16:31:30.098 2 DEBUG nova.scheduler.client.report [None req-92a13bed-c081-4c7b-87f3-bafebefb79f2 - - - - - -] Updating ProviderTree inventory for provider 593051b8-2000-437f-a915-2616fc8b1671 from _refresh_and_get_inventory using data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 79, 'reserved': 1, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} _refresh_and_get_inventory /usr/lib/python3.12/site-packages/nova/scheduler/client/report.py:786
Oct 09 16:31:30 compute-0 nova_compute[117331]: 2025-10-09 16:31:30.099 2 DEBUG nova.compute.provider_tree [None req-92a13bed-c081-4c7b-87f3-bafebefb79f2 - - - - - -] Updating inventory in ProviderTree for provider 593051b8-2000-437f-a915-2616fc8b1671 with inventory: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 79, 'reserved': 1, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} update_inventory /usr/lib/python3.12/site-packages/nova/compute/provider_tree.py:176
Oct 09 16:31:30 compute-0 nova_compute[117331]: 2025-10-09 16:31:30.113 2 DEBUG nova.scheduler.client.report [None req-92a13bed-c081-4c7b-87f3-bafebefb79f2 - - - - - -] Refreshing aggregate associations for resource provider 593051b8-2000-437f-a915-2616fc8b1671, aggregates: None _refresh_associations /usr/lib/python3.12/site-packages/nova/scheduler/client/report.py:831
Oct 09 16:31:30 compute-0 nova_compute[117331]: 2025-10-09 16:31:30.139 2 DEBUG nova.scheduler.client.report [None req-92a13bed-c081-4c7b-87f3-bafebefb79f2 - - - - - -] Refreshing trait associations for resource provider 593051b8-2000-437f-a915-2616fc8b1671, traits: HW_CPU_X86_AVX2,HW_CPU_X86_BMI,COMPUTE_SOUND_MODEL_ICH6,COMPUTE_IMAGE_TYPE_RAW,COMPUTE_ARCH_X86_64,COMPUTE_IMAGE_TYPE_AMI,HW_CPU_X86_SSSE3,COMPUTE_SECURITY_TPM_CRB,COMPUTE_GRAPHICS_MODEL_CIRRUS,COMPUTE_GRAPHICS_MODEL_NONE,COMPUTE_VIOMMU_MODEL_VIRTIO,HW_CPU_X86_AESNI,COMPUTE_NET_VIF_MODEL_VMXNET3,COMPUTE_SOUND_MODEL_PCSPK,COMPUTE_VOLUME_ATTACH_WITH_TAG,COMPUTE_SOUND_MODEL_ES1370,COMPUTE_NODE,COMPUTE_VIOMMU_MODEL_AUTO,COMPUTE_SOUND_MODEL_ICH9,COMPUTE_IMAGE_TYPE_QCOW2,COMPUTE_VOLUME_MULTI_ATTACH,COMPUTE_SECURITY_UEFI_SECURE_BOOT,COMPUTE_STORAGE_BUS_USB,COMPUTE_GRAPHICS_MODEL_BOCHS,COMPUTE_SECURITY_TPM_TIS,HW_ARCH_X86_64,COMPUTE_SOUND_MODEL_VIRTIO,HW_CPU_X86_BMI2,COMPUTE_STORAGE_BUS_FDC,COMPUTE_SOUND_MODEL_USB,COMPUTE_STORAGE_BUS_SATA,COMPUTE_SOUND_MODEL_AC97,COMPUTE_NET_VIF_MODEL_SPAPR_VLAN,COMPUTE_IMAGE_TYPE_ISO,COMPUTE_VIOMMU_MODEL_INTEL,COMPUTE_NET_VIF_MODEL_PCNET,HW_CPU_X86_F16C,COMPUTE_NET_VIF_MODEL_IGB,COMPUTE_ACCELERATORS,COMPUTE_NET_VIF_MODEL_RTL8139,HW_CPU_X86_AMD_SVM,COMPUTE_NET_VIF_MODEL_E1000E,HW_CPU_X86_CLMUL,COMPUTE_NET_VIF_MODEL_NE2K_PCI,HW_CPU_X86_SHA,COMPUTE_NET_VIF_MODEL_VIRTIO,HW_CPU_X86_AVX,COMPUTE_ADDRESS_SPACE_EMULATED,COMPUTE_NET_VIF_MODEL_E1000,COMPUTE_IMAGE_TYPE_ARI,COMPUTE_NET_ATTACH_INTERFACE,HW_CPU_X86_SSE4A,COMPUTE_TRUSTED_CERTS,COMPUTE_GRAPHICS_MODEL_VGA,HW_CPU_X86_SSE2,COMPUTE_GRAPHICS_MODEL_VIRTIO,COMPUTE_STORAGE_VIRTIO_FS,COMPUTE_SECURITY_STATELESS_FIRMWARE,COMPUTE_NET_ATTACH_INTERFACE_WITH_TAG,HW_CPU_X86_SSE,HW_CPU_X86_SVM,COMPUTE_STORAGE_BUS_VIRTIO,COMPUTE_NET_VIRTIO_PACKED,COMPUTE_SECURITY_TPM_2_0,HW_CPU_X86_MMX,HW_CPU_X86_FMA3,COMPUTE_VOLUME_EXTEND,COMPUTE_DEVICE_TAGGING,HW_CPU_X86_SSE42,COMPUTE_SOCKET_PCI_NUMA_AFFINITY,COMPUTE_IMAGE_TYPE_AKI,COMPUTE_STORAGE_BUS_SCSI,COMPUTE_ADDRESS_SPACE_PASSTHROUGH,COMPUTE_RESCUE_BFV,COMPUTE_SOUND_MODEL_SB16,HW_CPU_X86_SSE41,COMPUTE_STORAGE_BUS_IDE,HW_CPU_X86_ABM _refresh_associations /usr/lib/python3.12/site-packages/nova/scheduler/client/report.py:843
Oct 09 16:31:30 compute-0 nova_compute[117331]: 2025-10-09 16:31:30.156 2 DEBUG nova.compute.provider_tree [None req-92a13bed-c081-4c7b-87f3-bafebefb79f2 - - - - - -] Inventory has not changed in ProviderTree for provider: 593051b8-2000-437f-a915-2616fc8b1671 update_inventory /usr/lib/python3.12/site-packages/nova/compute/provider_tree.py:180
Oct 09 16:31:30 compute-0 nova_compute[117331]: 2025-10-09 16:31:30.663 2 DEBUG nova.scheduler.client.report [None req-92a13bed-c081-4c7b-87f3-bafebefb79f2 - - - - - -] Inventory has not changed for provider 593051b8-2000-437f-a915-2616fc8b1671 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 79, 'reserved': 1, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.12/site-packages/nova/scheduler/client/report.py:958
Oct 09 16:31:31 compute-0 nova_compute[117331]: 2025-10-09 16:31:31.173 2 DEBUG nova.compute.resource_tracker [None req-92a13bed-c081-4c7b-87f3-bafebefb79f2 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.12/site-packages/nova/compute/resource_tracker.py:1097
Oct 09 16:31:31 compute-0 nova_compute[117331]: 2025-10-09 16:31:31.173 2 DEBUG oslo_concurrency.lockutils [None req-92a13bed-c081-4c7b-87f3-bafebefb79f2 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 2.148s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:424
Oct 09 16:31:31 compute-0 openstack_network_exporter[129925]: ERROR   16:31:31 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Oct 09 16:31:31 compute-0 openstack_network_exporter[129925]: ERROR   16:31:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Oct 09 16:31:31 compute-0 openstack_network_exporter[129925]: ERROR   16:31:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Oct 09 16:31:31 compute-0 openstack_network_exporter[129925]: ERROR   16:31:31 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Oct 09 16:31:31 compute-0 openstack_network_exporter[129925]: 
Oct 09 16:31:31 compute-0 openstack_network_exporter[129925]: ERROR   16:31:31 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Oct 09 16:31:31 compute-0 openstack_network_exporter[129925]: 
Oct 09 16:31:32 compute-0 nova_compute[117331]: 2025-10-09 16:31:32.095 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 09 16:31:33 compute-0 nova_compute[117331]: 2025-10-09 16:31:33.170 2 DEBUG oslo_service.periodic_task [None req-92a13bed-c081-4c7b-87f3-bafebefb79f2 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.12/site-packages/oslo_service/periodic_task.py:210
Oct 09 16:31:33 compute-0 nova_compute[117331]: 2025-10-09 16:31:33.170 2 DEBUG oslo_service.periodic_task [None req-92a13bed-c081-4c7b-87f3-bafebefb79f2 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.12/site-packages/oslo_service/periodic_task.py:210
Oct 09 16:31:33 compute-0 nova_compute[117331]: 2025-10-09 16:31:33.170 2 DEBUG oslo_service.periodic_task [None req-92a13bed-c081-4c7b-87f3-bafebefb79f2 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.12/site-packages/oslo_service/periodic_task.py:210
Oct 09 16:31:34 compute-0 sshd-session[149889]: Invalid user devopsuser from 36.224.53.32 port 53530
Oct 09 16:31:34 compute-0 nova_compute[117331]: 2025-10-09 16:31:34.306 2 DEBUG oslo_service.periodic_task [None req-92a13bed-c081-4c7b-87f3-bafebefb79f2 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.12/site-packages/oslo_service/periodic_task.py:210
Oct 09 16:31:34 compute-0 sshd-session[149889]: pam_unix(sshd:auth): check pass; user unknown
Oct 09 16:31:34 compute-0 sshd-session[149889]: pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=36.224.53.32
Oct 09 16:31:34 compute-0 nova_compute[117331]: 2025-10-09 16:31:34.981 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 09 16:31:35 compute-0 ovn_metadata_agent[28608]: 2025-10-09 16:31:35.324 28613 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:405
Oct 09 16:31:35 compute-0 ovn_metadata_agent[28608]: 2025-10-09 16:31:35.325 28613 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:410
Oct 09 16:31:35 compute-0 ovn_metadata_agent[28608]: 2025-10-09 16:31:35.325 28613 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:424
Oct 09 16:31:35 compute-0 podman[149895]: 2025-10-09 16:31:35.825972606 +0000 UTC m=+0.055704709 container health_status 64fa251112d01abb34ec1bb8e22019815ddcaaaa391ce5ec91517147ac5a9994 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, vcs-type=git, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., name=ubi9-minimal, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.tags=minimal rhel9, managed_by=edpm_ansible, version=9.6, com.redhat.component=ubi9-minimal-container, architecture=x86_64, distribution-scope=public, url=https://catalog.redhat.com/en/search?searchType=containers, vendor=Red Hat, Inc., config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, config_id=edpm, build-date=2025-08-20T13:12:41, io.buildah.version=1.33.7, io.openshift.expose-services=, container_name=openstack_network_exporter, maintainer=Red Hat, Inc., release=1755695350, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b)
Oct 09 16:31:35 compute-0 podman[149896]: 2025-10-09 16:31:35.855530895 +0000 UTC m=+0.080000361 container health_status d615e4f24ff62fbb14be4a63e6da568ec5b22309773c1c66103b6b4de64a8a2f (image=38.102.83.66:5001/podified-master-centos10/openstack-ovn-controller:watcher_latest, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.build-date=20251007, tcib_build_tag=watcher_latest, config_id=ovn_controller, container_name=ovn_controller, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': '38.102.83.66:5001/podified-master-centos10/openstack-ovn-controller:watcher_latest', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.4)
Oct 09 16:31:36 compute-0 sshd-session[149889]: Failed password for invalid user devopsuser from 36.224.53.32 port 53530 ssh2
Oct 09 16:31:37 compute-0 nova_compute[117331]: 2025-10-09 16:31:37.096 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 09 16:31:37 compute-0 sshd-session[149889]: Connection closed by invalid user devopsuser 36.224.53.32 port 53530 [preauth]
Oct 09 16:31:38 compute-0 sshd-session[149893]: Invalid user odoo18 from 36.224.53.32 port 33018
Oct 09 16:31:39 compute-0 nova_compute[117331]: 2025-10-09 16:31:39.362 2 DEBUG oslo_concurrency.lockutils [None req-51bdc6fb-04ea-4bf5-89fb-88b926e5e428 1c793380a6e945d69dacfd07f1f156f8 6d67ac3076434a4582e5db1ca7d043ff - - default default] Acquiring lock "4c51c9df-777a-497d-bf84-a8001b45a4f0" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:405
Oct 09 16:31:39 compute-0 nova_compute[117331]: 2025-10-09 16:31:39.362 2 DEBUG oslo_concurrency.lockutils [None req-51bdc6fb-04ea-4bf5-89fb-88b926e5e428 1c793380a6e945d69dacfd07f1f156f8 6d67ac3076434a4582e5db1ca7d043ff - - default default] Lock "4c51c9df-777a-497d-bf84-a8001b45a4f0" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.000s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:410
Oct 09 16:31:39 compute-0 sshd-session[149893]: pam_unix(sshd:auth): check pass; user unknown
Oct 09 16:31:39 compute-0 sshd-session[149893]: pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=36.224.53.32
Oct 09 16:31:39 compute-0 nova_compute[117331]: 2025-10-09 16:31:39.879 2 DEBUG nova.compute.manager [None req-51bdc6fb-04ea-4bf5-89fb-88b926e5e428 1c793380a6e945d69dacfd07f1f156f8 6d67ac3076434a4582e5db1ca7d043ff - - default default] [instance: 4c51c9df-777a-497d-bf84-a8001b45a4f0] Starting instance... _do_build_and_run_instance /usr/lib/python3.12/site-packages/nova/compute/manager.py:2472
Oct 09 16:31:40 compute-0 nova_compute[117331]: 2025-10-09 16:31:40.000 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 09 16:31:40 compute-0 nova_compute[117331]: 2025-10-09 16:31:40.444 2 DEBUG oslo_concurrency.lockutils [None req-51bdc6fb-04ea-4bf5-89fb-88b926e5e428 1c793380a6e945d69dacfd07f1f156f8 6d67ac3076434a4582e5db1ca7d043ff - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:405
Oct 09 16:31:40 compute-0 nova_compute[117331]: 2025-10-09 16:31:40.445 2 DEBUG oslo_concurrency.lockutils [None req-51bdc6fb-04ea-4bf5-89fb-88b926e5e428 1c793380a6e945d69dacfd07f1f156f8 6d67ac3076434a4582e5db1ca7d043ff - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:410
Oct 09 16:31:40 compute-0 nova_compute[117331]: 2025-10-09 16:31:40.450 2 DEBUG nova.virt.hardware [None req-51bdc6fb-04ea-4bf5-89fb-88b926e5e428 1c793380a6e945d69dacfd07f1f156f8 6d67ac3076434a4582e5db1ca7d043ff - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.12/site-packages/nova/virt/hardware.py:2528
Oct 09 16:31:40 compute-0 nova_compute[117331]: 2025-10-09 16:31:40.451 2 INFO nova.compute.claims [None req-51bdc6fb-04ea-4bf5-89fb-88b926e5e428 1c793380a6e945d69dacfd07f1f156f8 6d67ac3076434a4582e5db1ca7d043ff - - default default] [instance: 4c51c9df-777a-497d-bf84-a8001b45a4f0] Claim successful on node compute-0.ctlplane.example.com
Oct 09 16:31:41 compute-0 nova_compute[117331]: 2025-10-09 16:31:41.517 2 DEBUG nova.compute.provider_tree [None req-51bdc6fb-04ea-4bf5-89fb-88b926e5e428 1c793380a6e945d69dacfd07f1f156f8 6d67ac3076434a4582e5db1ca7d043ff - - default default] Inventory has not changed in ProviderTree for provider: 593051b8-2000-437f-a915-2616fc8b1671 update_inventory /usr/lib/python3.12/site-packages/nova/compute/provider_tree.py:180
Oct 09 16:31:41 compute-0 sshd-session[149893]: Failed password for invalid user odoo18 from 36.224.53.32 port 33018 ssh2
Oct 09 16:31:42 compute-0 nova_compute[117331]: 2025-10-09 16:31:42.038 2 DEBUG nova.scheduler.client.report [None req-51bdc6fb-04ea-4bf5-89fb-88b926e5e428 1c793380a6e945d69dacfd07f1f156f8 6d67ac3076434a4582e5db1ca7d043ff - - default default] Inventory has not changed for provider 593051b8-2000-437f-a915-2616fc8b1671 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 79, 'reserved': 1, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.12/site-packages/nova/scheduler/client/report.py:958
Oct 09 16:31:42 compute-0 nova_compute[117331]: 2025-10-09 16:31:42.098 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 09 16:31:42 compute-0 ovn_controller[19752]: 2025-10-09T16:31:42Z|00202|memory_trim|INFO|Detected inactivity (last active 30003 ms ago): trimming memory
Oct 09 16:31:42 compute-0 nova_compute[117331]: 2025-10-09 16:31:42.552 2 DEBUG oslo_concurrency.lockutils [None req-51bdc6fb-04ea-4bf5-89fb-88b926e5e428 1c793380a6e945d69dacfd07f1f156f8 6d67ac3076434a4582e5db1ca7d043ff - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 2.107s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:424
Oct 09 16:31:42 compute-0 nova_compute[117331]: 2025-10-09 16:31:42.554 2 DEBUG nova.compute.manager [None req-51bdc6fb-04ea-4bf5-89fb-88b926e5e428 1c793380a6e945d69dacfd07f1f156f8 6d67ac3076434a4582e5db1ca7d043ff - - default default] [instance: 4c51c9df-777a-497d-bf84-a8001b45a4f0] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.12/site-packages/nova/compute/manager.py:2869
Oct 09 16:31:43 compute-0 nova_compute[117331]: 2025-10-09 16:31:43.066 2 DEBUG nova.compute.manager [None req-51bdc6fb-04ea-4bf5-89fb-88b926e5e428 1c793380a6e945d69dacfd07f1f156f8 6d67ac3076434a4582e5db1ca7d043ff - - default default] [instance: 4c51c9df-777a-497d-bf84-a8001b45a4f0] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.12/site-packages/nova/compute/manager.py:2016
Oct 09 16:31:43 compute-0 nova_compute[117331]: 2025-10-09 16:31:43.066 2 DEBUG nova.network.neutron [None req-51bdc6fb-04ea-4bf5-89fb-88b926e5e428 1c793380a6e945d69dacfd07f1f156f8 6d67ac3076434a4582e5db1ca7d043ff - - default default] [instance: 4c51c9df-777a-497d-bf84-a8001b45a4f0] allocate_for_instance() allocate_for_instance /usr/lib/python3.12/site-packages/nova/network/neutron.py:1208
Oct 09 16:31:43 compute-0 nova_compute[117331]: 2025-10-09 16:31:43.067 2 WARNING neutronclient.v2_0.client [None req-51bdc6fb-04ea-4bf5-89fb-88b926e5e428 1c793380a6e945d69dacfd07f1f156f8 6d67ac3076434a4582e5db1ca7d043ff - - default default] The python binding code in neutronclient is deprecated in favor of OpenstackSDK, please use that as this will be removed in a future release.
Oct 09 16:31:43 compute-0 nova_compute[117331]: 2025-10-09 16:31:43.067 2 WARNING neutronclient.v2_0.client [None req-51bdc6fb-04ea-4bf5-89fb-88b926e5e428 1c793380a6e945d69dacfd07f1f156f8 6d67ac3076434a4582e5db1ca7d043ff - - default default] The python binding code in neutronclient is deprecated in favor of OpenstackSDK, please use that as this will be removed in a future release.
Oct 09 16:31:43 compute-0 nova_compute[117331]: 2025-10-09 16:31:43.576 2 INFO nova.virt.libvirt.driver [None req-51bdc6fb-04ea-4bf5-89fb-88b926e5e428 1c793380a6e945d69dacfd07f1f156f8 6d67ac3076434a4582e5db1ca7d043ff - - default default] [instance: 4c51c9df-777a-497d-bf84-a8001b45a4f0] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Oct 09 16:31:43 compute-0 sshd-session[149893]: Connection closed by invalid user odoo18 36.224.53.32 port 33018 [preauth]
Oct 09 16:31:44 compute-0 nova_compute[117331]: 2025-10-09 16:31:44.083 2 DEBUG nova.compute.manager [None req-51bdc6fb-04ea-4bf5-89fb-88b926e5e428 1c793380a6e945d69dacfd07f1f156f8 6d67ac3076434a4582e5db1ca7d043ff - - default default] [instance: 4c51c9df-777a-497d-bf84-a8001b45a4f0] Start building block device mappings for instance. _build_resources /usr/lib/python3.12/site-packages/nova/compute/manager.py:2904
Oct 09 16:31:44 compute-0 ovn_metadata_agent[28608]: 2025-10-09 16:31:44.165 28613 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=22, options={'arp_ns_explicit_output': 'true', 'fdb_removal_limit': '0', 'ignore_lsp_down': 'false', 'mac_binding_removal_limit': '0', 'mac_prefix': '4a:65:5f', 'max_tunid': '16711680', 'northd_internal_version': '24.09.4-20.37.0-77.8', 'svc_monitor_mac': '32:24:1e:e3:5d:e8'}, ipsec=False) old=SB_Global(nb_cfg=21) matches /usr/lib/python3.12/site-packages/ovsdbapp/backend/ovs_idl/event.py:55
Oct 09 16:31:44 compute-0 nova_compute[117331]: 2025-10-09 16:31:44.168 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 09 16:31:44 compute-0 ovn_metadata_agent[28608]: 2025-10-09 16:31:44.168 28613 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 6 seconds run /usr/lib/python3.12/site-packages/neutron/agent/ovn/metadata/agent.py:367
Oct 09 16:31:44 compute-0 nova_compute[117331]: 2025-10-09 16:31:44.392 2 DEBUG nova.network.neutron [None req-51bdc6fb-04ea-4bf5-89fb-88b926e5e428 1c793380a6e945d69dacfd07f1f156f8 6d67ac3076434a4582e5db1ca7d043ff - - default default] [instance: 4c51c9df-777a-497d-bf84-a8001b45a4f0] Successfully created port: 783ccf1e-b85c-4420-b3e9-705ddf5495d3 _create_port_minimal /usr/lib/python3.12/site-packages/nova/network/neutron.py:550
Oct 09 16:31:45 compute-0 nova_compute[117331]: 2025-10-09 16:31:45.003 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 09 16:31:45 compute-0 nova_compute[117331]: 2025-10-09 16:31:45.103 2 DEBUG nova.compute.manager [None req-51bdc6fb-04ea-4bf5-89fb-88b926e5e428 1c793380a6e945d69dacfd07f1f156f8 6d67ac3076434a4582e5db1ca7d043ff - - default default] [instance: 4c51c9df-777a-497d-bf84-a8001b45a4f0] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.12/site-packages/nova/compute/manager.py:2678
Oct 09 16:31:45 compute-0 nova_compute[117331]: 2025-10-09 16:31:45.106 2 DEBUG nova.virt.libvirt.driver [None req-51bdc6fb-04ea-4bf5-89fb-88b926e5e428 1c793380a6e945d69dacfd07f1f156f8 6d67ac3076434a4582e5db1ca7d043ff - - default default] [instance: 4c51c9df-777a-497d-bf84-a8001b45a4f0] Creating instance directory _create_image /usr/lib/python3.12/site-packages/nova/virt/libvirt/driver.py:5138
Oct 09 16:31:45 compute-0 nova_compute[117331]: 2025-10-09 16:31:45.106 2 INFO nova.virt.libvirt.driver [None req-51bdc6fb-04ea-4bf5-89fb-88b926e5e428 1c793380a6e945d69dacfd07f1f156f8 6d67ac3076434a4582e5db1ca7d043ff - - default default] [instance: 4c51c9df-777a-497d-bf84-a8001b45a4f0] Creating image(s)
Oct 09 16:31:45 compute-0 nova_compute[117331]: 2025-10-09 16:31:45.108 2 DEBUG oslo_concurrency.lockutils [None req-51bdc6fb-04ea-4bf5-89fb-88b926e5e428 1c793380a6e945d69dacfd07f1f156f8 6d67ac3076434a4582e5db1ca7d043ff - - default default] Acquiring lock "/var/lib/nova/instances/4c51c9df-777a-497d-bf84-a8001b45a4f0/disk.info" by "nova.virt.libvirt.imagebackend.Image.resolve_driver_format.<locals>.write_to_disk_info_file" inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:405
Oct 09 16:31:45 compute-0 nova_compute[117331]: 2025-10-09 16:31:45.108 2 DEBUG oslo_concurrency.lockutils [None req-51bdc6fb-04ea-4bf5-89fb-88b926e5e428 1c793380a6e945d69dacfd07f1f156f8 6d67ac3076434a4582e5db1ca7d043ff - - default default] Lock "/var/lib/nova/instances/4c51c9df-777a-497d-bf84-a8001b45a4f0/disk.info" acquired by "nova.virt.libvirt.imagebackend.Image.resolve_driver_format.<locals>.write_to_disk_info_file" :: waited 0.001s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:410
Oct 09 16:31:45 compute-0 nova_compute[117331]: 2025-10-09 16:31:45.110 2 DEBUG oslo_concurrency.lockutils [None req-51bdc6fb-04ea-4bf5-89fb-88b926e5e428 1c793380a6e945d69dacfd07f1f156f8 6d67ac3076434a4582e5db1ca7d043ff - - default default] Lock "/var/lib/nova/instances/4c51c9df-777a-497d-bf84-a8001b45a4f0/disk.info" "released" by "nova.virt.libvirt.imagebackend.Image.resolve_driver_format.<locals>.write_to_disk_info_file" :: held 0.002s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:424
Oct 09 16:31:45 compute-0 nova_compute[117331]: 2025-10-09 16:31:45.111 2 DEBUG oslo_utils.imageutils.format_inspector [None req-51bdc6fb-04ea-4bf5-89fb-88b926e5e428 1c793380a6e945d69dacfd07f1f156f8 6d67ac3076434a4582e5db1ca7d043ff - - default default] Format inspector for vmdk does not match, excluding from consideration (Signature KDMV not found: b'\xebH\x90\x00') _process_chunk /usr/lib/python3.12/site-packages/oslo_utils/imageutils/format_inspector.py:1365
Oct 09 16:31:45 compute-0 nova_compute[117331]: 2025-10-09 16:31:45.117 2 DEBUG oslo_utils.imageutils.format_inspector [None req-51bdc6fb-04ea-4bf5-89fb-88b926e5e428 1c793380a6e945d69dacfd07f1f156f8 6d67ac3076434a4582e5db1ca7d043ff - - default default] Format inspector for vhdx does not match, excluding from consideration (Region signature not found at 30000) _process_chunk /usr/lib/python3.12/site-packages/oslo_utils/imageutils/format_inspector.py:1365
Oct 09 16:31:45 compute-0 nova_compute[117331]: 2025-10-09 16:31:45.120 2 DEBUG oslo_concurrency.processutils [None req-51bdc6fb-04ea-4bf5-89fb-88b926e5e428 1c793380a6e945d69dacfd07f1f156f8 6d67ac3076434a4582e5db1ca7d043ff - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/cea3dacfdea0f3734ae526b812744e847bc2d356 --force-share --output=json execute /usr/lib/python3.12/site-packages/oslo_concurrency/processutils.py:349
Oct 09 16:31:45 compute-0 nova_compute[117331]: 2025-10-09 16:31:45.179 2 DEBUG oslo_concurrency.processutils [None req-51bdc6fb-04ea-4bf5-89fb-88b926e5e428 1c793380a6e945d69dacfd07f1f156f8 6d67ac3076434a4582e5db1ca7d043ff - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/cea3dacfdea0f3734ae526b812744e847bc2d356 --force-share --output=json" returned: 0 in 0.059s execute /usr/lib/python3.12/site-packages/oslo_concurrency/processutils.py:372
Oct 09 16:31:45 compute-0 nova_compute[117331]: 2025-10-09 16:31:45.180 2 DEBUG oslo_concurrency.lockutils [None req-51bdc6fb-04ea-4bf5-89fb-88b926e5e428 1c793380a6e945d69dacfd07f1f156f8 6d67ac3076434a4582e5db1ca7d043ff - - default default] Acquiring lock "cea3dacfdea0f3734ae526b812744e847bc2d356" by "nova.virt.libvirt.imagebackend.Qcow2.create_image.<locals>.create_qcow2_image" inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:405
Oct 09 16:31:45 compute-0 nova_compute[117331]: 2025-10-09 16:31:45.181 2 DEBUG oslo_concurrency.lockutils [None req-51bdc6fb-04ea-4bf5-89fb-88b926e5e428 1c793380a6e945d69dacfd07f1f156f8 6d67ac3076434a4582e5db1ca7d043ff - - default default] Lock "cea3dacfdea0f3734ae526b812744e847bc2d356" acquired by "nova.virt.libvirt.imagebackend.Qcow2.create_image.<locals>.create_qcow2_image" :: waited 0.001s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:410
Oct 09 16:31:45 compute-0 nova_compute[117331]: 2025-10-09 16:31:45.182 2 DEBUG oslo_utils.imageutils.format_inspector [None req-51bdc6fb-04ea-4bf5-89fb-88b926e5e428 1c793380a6e945d69dacfd07f1f156f8 6d67ac3076434a4582e5db1ca7d043ff - - default default] Format inspector for vmdk does not match, excluding from consideration (Signature KDMV not found: b'\xebH\x90\x00') _process_chunk /usr/lib/python3.12/site-packages/oslo_utils/imageutils/format_inspector.py:1365
Oct 09 16:31:45 compute-0 nova_compute[117331]: 2025-10-09 16:31:45.187 2 DEBUG oslo_utils.imageutils.format_inspector [None req-51bdc6fb-04ea-4bf5-89fb-88b926e5e428 1c793380a6e945d69dacfd07f1f156f8 6d67ac3076434a4582e5db1ca7d043ff - - default default] Format inspector for vhdx does not match, excluding from consideration (Region signature not found at 30000) _process_chunk /usr/lib/python3.12/site-packages/oslo_utils/imageutils/format_inspector.py:1365
Oct 09 16:31:45 compute-0 nova_compute[117331]: 2025-10-09 16:31:45.187 2 DEBUG oslo_concurrency.processutils [None req-51bdc6fb-04ea-4bf5-89fb-88b926e5e428 1c793380a6e945d69dacfd07f1f156f8 6d67ac3076434a4582e5db1ca7d043ff - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/cea3dacfdea0f3734ae526b812744e847bc2d356 --force-share --output=json execute /usr/lib/python3.12/site-packages/oslo_concurrency/processutils.py:349
Oct 09 16:31:45 compute-0 nova_compute[117331]: 2025-10-09 16:31:45.246 2 DEBUG oslo_concurrency.processutils [None req-51bdc6fb-04ea-4bf5-89fb-88b926e5e428 1c793380a6e945d69dacfd07f1f156f8 6d67ac3076434a4582e5db1ca7d043ff - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/cea3dacfdea0f3734ae526b812744e847bc2d356 --force-share --output=json" returned: 0 in 0.059s execute /usr/lib/python3.12/site-packages/oslo_concurrency/processutils.py:372
Oct 09 16:31:45 compute-0 nova_compute[117331]: 2025-10-09 16:31:45.247 2 DEBUG oslo_concurrency.processutils [None req-51bdc6fb-04ea-4bf5-89fb-88b926e5e428 1c793380a6e945d69dacfd07f1f156f8 6d67ac3076434a4582e5db1ca7d043ff - - default default] Running cmd (subprocess): env LC_ALL=C LANG=C qemu-img create -f qcow2 -o backing_file=/var/lib/nova/instances/_base/cea3dacfdea0f3734ae526b812744e847bc2d356,backing_fmt=raw /var/lib/nova/instances/4c51c9df-777a-497d-bf84-a8001b45a4f0/disk 1073741824 execute /usr/lib/python3.12/site-packages/oslo_concurrency/processutils.py:349
Oct 09 16:31:45 compute-0 nova_compute[117331]: 2025-10-09 16:31:45.288 2 DEBUG oslo_concurrency.processutils [None req-51bdc6fb-04ea-4bf5-89fb-88b926e5e428 1c793380a6e945d69dacfd07f1f156f8 6d67ac3076434a4582e5db1ca7d043ff - - default default] CMD "env LC_ALL=C LANG=C qemu-img create -f qcow2 -o backing_file=/var/lib/nova/instances/_base/cea3dacfdea0f3734ae526b812744e847bc2d356,backing_fmt=raw /var/lib/nova/instances/4c51c9df-777a-497d-bf84-a8001b45a4f0/disk 1073741824" returned: 0 in 0.041s execute /usr/lib/python3.12/site-packages/oslo_concurrency/processutils.py:372
Oct 09 16:31:45 compute-0 nova_compute[117331]: 2025-10-09 16:31:45.289 2 DEBUG oslo_concurrency.lockutils [None req-51bdc6fb-04ea-4bf5-89fb-88b926e5e428 1c793380a6e945d69dacfd07f1f156f8 6d67ac3076434a4582e5db1ca7d043ff - - default default] Lock "cea3dacfdea0f3734ae526b812744e847bc2d356" "released" by "nova.virt.libvirt.imagebackend.Qcow2.create_image.<locals>.create_qcow2_image" :: held 0.108s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:424
Oct 09 16:31:45 compute-0 nova_compute[117331]: 2025-10-09 16:31:45.289 2 DEBUG oslo_concurrency.processutils [None req-51bdc6fb-04ea-4bf5-89fb-88b926e5e428 1c793380a6e945d69dacfd07f1f156f8 6d67ac3076434a4582e5db1ca7d043ff - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/cea3dacfdea0f3734ae526b812744e847bc2d356 --force-share --output=json execute /usr/lib/python3.12/site-packages/oslo_concurrency/processutils.py:349
Oct 09 16:31:45 compute-0 nova_compute[117331]: 2025-10-09 16:31:45.352 2 DEBUG oslo_concurrency.processutils [None req-51bdc6fb-04ea-4bf5-89fb-88b926e5e428 1c793380a6e945d69dacfd07f1f156f8 6d67ac3076434a4582e5db1ca7d043ff - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/cea3dacfdea0f3734ae526b812744e847bc2d356 --force-share --output=json" returned: 0 in 0.062s execute /usr/lib/python3.12/site-packages/oslo_concurrency/processutils.py:372
Oct 09 16:31:45 compute-0 nova_compute[117331]: 2025-10-09 16:31:45.353 2 DEBUG nova.virt.disk.api [None req-51bdc6fb-04ea-4bf5-89fb-88b926e5e428 1c793380a6e945d69dacfd07f1f156f8 6d67ac3076434a4582e5db1ca7d043ff - - default default] Checking if we can resize image /var/lib/nova/instances/4c51c9df-777a-497d-bf84-a8001b45a4f0/disk. size=1073741824 can_resize_image /usr/lib/python3.12/site-packages/nova/virt/disk/api.py:164
Oct 09 16:31:45 compute-0 nova_compute[117331]: 2025-10-09 16:31:45.353 2 DEBUG oslo_concurrency.processutils [None req-51bdc6fb-04ea-4bf5-89fb-88b926e5e428 1c793380a6e945d69dacfd07f1f156f8 6d67ac3076434a4582e5db1ca7d043ff - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/4c51c9df-777a-497d-bf84-a8001b45a4f0/disk --force-share --output=json execute /usr/lib/python3.12/site-packages/oslo_concurrency/processutils.py:349
Oct 09 16:31:45 compute-0 nova_compute[117331]: 2025-10-09 16:31:45.408 2 DEBUG oslo_concurrency.processutils [None req-51bdc6fb-04ea-4bf5-89fb-88b926e5e428 1c793380a6e945d69dacfd07f1f156f8 6d67ac3076434a4582e5db1ca7d043ff - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/4c51c9df-777a-497d-bf84-a8001b45a4f0/disk --force-share --output=json" returned: 0 in 0.055s execute /usr/lib/python3.12/site-packages/oslo_concurrency/processutils.py:372
Oct 09 16:31:45 compute-0 nova_compute[117331]: 2025-10-09 16:31:45.409 2 DEBUG nova.virt.disk.api [None req-51bdc6fb-04ea-4bf5-89fb-88b926e5e428 1c793380a6e945d69dacfd07f1f156f8 6d67ac3076434a4582e5db1ca7d043ff - - default default] Cannot resize image /var/lib/nova/instances/4c51c9df-777a-497d-bf84-a8001b45a4f0/disk to a smaller size. can_resize_image /usr/lib/python3.12/site-packages/nova/virt/disk/api.py:170
Oct 09 16:31:45 compute-0 nova_compute[117331]: 2025-10-09 16:31:45.409 2 DEBUG nova.virt.libvirt.driver [None req-51bdc6fb-04ea-4bf5-89fb-88b926e5e428 1c793380a6e945d69dacfd07f1f156f8 6d67ac3076434a4582e5db1ca7d043ff - - default default] [instance: 4c51c9df-777a-497d-bf84-a8001b45a4f0] Created local disks _create_image /usr/lib/python3.12/site-packages/nova/virt/libvirt/driver.py:5270
Oct 09 16:31:45 compute-0 nova_compute[117331]: 2025-10-09 16:31:45.409 2 DEBUG nova.virt.libvirt.driver [None req-51bdc6fb-04ea-4bf5-89fb-88b926e5e428 1c793380a6e945d69dacfd07f1f156f8 6d67ac3076434a4582e5db1ca7d043ff - - default default] [instance: 4c51c9df-777a-497d-bf84-a8001b45a4f0] Ensure instance console log exists: /var/lib/nova/instances/4c51c9df-777a-497d-bf84-a8001b45a4f0/console.log _ensure_console_log_for_instance /usr/lib/python3.12/site-packages/nova/virt/libvirt/driver.py:5017
Oct 09 16:31:45 compute-0 nova_compute[117331]: 2025-10-09 16:31:45.410 2 DEBUG oslo_concurrency.lockutils [None req-51bdc6fb-04ea-4bf5-89fb-88b926e5e428 1c793380a6e945d69dacfd07f1f156f8 6d67ac3076434a4582e5db1ca7d043ff - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:405
Oct 09 16:31:45 compute-0 nova_compute[117331]: 2025-10-09 16:31:45.410 2 DEBUG oslo_concurrency.lockutils [None req-51bdc6fb-04ea-4bf5-89fb-88b926e5e428 1c793380a6e945d69dacfd07f1f156f8 6d67ac3076434a4582e5db1ca7d043ff - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:410
Oct 09 16:31:45 compute-0 nova_compute[117331]: 2025-10-09 16:31:45.410 2 DEBUG oslo_concurrency.lockutils [None req-51bdc6fb-04ea-4bf5-89fb-88b926e5e428 1c793380a6e945d69dacfd07f1f156f8 6d67ac3076434a4582e5db1ca7d043ff - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:424
Oct 09 16:31:45 compute-0 nova_compute[117331]: 2025-10-09 16:31:45.683 2 DEBUG nova.network.neutron [None req-51bdc6fb-04ea-4bf5-89fb-88b926e5e428 1c793380a6e945d69dacfd07f1f156f8 6d67ac3076434a4582e5db1ca7d043ff - - default default] [instance: 4c51c9df-777a-497d-bf84-a8001b45a4f0] Successfully updated port: 783ccf1e-b85c-4420-b3e9-705ddf5495d3 _update_port /usr/lib/python3.12/site-packages/nova/network/neutron.py:588
Oct 09 16:31:45 compute-0 nova_compute[117331]: 2025-10-09 16:31:45.743 2 DEBUG nova.compute.manager [req-3c23e8f0-478a-424d-ae52-94c6e500888f req-44a61115-d37e-482a-a05d-7a71c7ea7988 ee15961ad2be4bada6c106ac4f69f422 c076c42271004e7aa86e84416e33f826 - - default default] [instance: 4c51c9df-777a-497d-bf84-a8001b45a4f0] Received event network-changed-783ccf1e-b85c-4420-b3e9-705ddf5495d3 external_instance_event /usr/lib/python3.12/site-packages/nova/compute/manager.py:11812
Oct 09 16:31:45 compute-0 nova_compute[117331]: 2025-10-09 16:31:45.744 2 DEBUG nova.compute.manager [req-3c23e8f0-478a-424d-ae52-94c6e500888f req-44a61115-d37e-482a-a05d-7a71c7ea7988 ee15961ad2be4bada6c106ac4f69f422 c076c42271004e7aa86e84416e33f826 - - default default] [instance: 4c51c9df-777a-497d-bf84-a8001b45a4f0] Refreshing instance network info cache due to event network-changed-783ccf1e-b85c-4420-b3e9-705ddf5495d3. external_instance_event /usr/lib/python3.12/site-packages/nova/compute/manager.py:11817
Oct 09 16:31:45 compute-0 nova_compute[117331]: 2025-10-09 16:31:45.744 2 DEBUG oslo_concurrency.lockutils [req-3c23e8f0-478a-424d-ae52-94c6e500888f req-44a61115-d37e-482a-a05d-7a71c7ea7988 ee15961ad2be4bada6c106ac4f69f422 c076c42271004e7aa86e84416e33f826 - - default default] Acquiring lock "refresh_cache-4c51c9df-777a-497d-bf84-a8001b45a4f0" lock /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:313
Oct 09 16:31:45 compute-0 nova_compute[117331]: 2025-10-09 16:31:45.744 2 DEBUG oslo_concurrency.lockutils [req-3c23e8f0-478a-424d-ae52-94c6e500888f req-44a61115-d37e-482a-a05d-7a71c7ea7988 ee15961ad2be4bada6c106ac4f69f422 c076c42271004e7aa86e84416e33f826 - - default default] Acquired lock "refresh_cache-4c51c9df-777a-497d-bf84-a8001b45a4f0" lock /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:316
Oct 09 16:31:45 compute-0 nova_compute[117331]: 2025-10-09 16:31:45.744 2 DEBUG nova.network.neutron [req-3c23e8f0-478a-424d-ae52-94c6e500888f req-44a61115-d37e-482a-a05d-7a71c7ea7988 ee15961ad2be4bada6c106ac4f69f422 c076c42271004e7aa86e84416e33f826 - - default default] [instance: 4c51c9df-777a-497d-bf84-a8001b45a4f0] Refreshing network info cache for port 783ccf1e-b85c-4420-b3e9-705ddf5495d3 _get_instance_nw_info /usr/lib/python3.12/site-packages/nova/network/neutron.py:2067
Oct 09 16:31:46 compute-0 nova_compute[117331]: 2025-10-09 16:31:46.193 2 DEBUG oslo_concurrency.lockutils [None req-51bdc6fb-04ea-4bf5-89fb-88b926e5e428 1c793380a6e945d69dacfd07f1f156f8 6d67ac3076434a4582e5db1ca7d043ff - - default default] Acquiring lock "refresh_cache-4c51c9df-777a-497d-bf84-a8001b45a4f0" lock /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:313
Oct 09 16:31:46 compute-0 nova_compute[117331]: 2025-10-09 16:31:46.250 2 WARNING neutronclient.v2_0.client [req-3c23e8f0-478a-424d-ae52-94c6e500888f req-44a61115-d37e-482a-a05d-7a71c7ea7988 ee15961ad2be4bada6c106ac4f69f422 c076c42271004e7aa86e84416e33f826 - - default default] The python binding code in neutronclient is deprecated in favor of OpenstackSDK, please use that as this will be removed in a future release.
Oct 09 16:31:46 compute-0 nova_compute[117331]: 2025-10-09 16:31:46.345 2 DEBUG nova.network.neutron [req-3c23e8f0-478a-424d-ae52-94c6e500888f req-44a61115-d37e-482a-a05d-7a71c7ea7988 ee15961ad2be4bada6c106ac4f69f422 c076c42271004e7aa86e84416e33f826 - - default default] [instance: 4c51c9df-777a-497d-bf84-a8001b45a4f0] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.12/site-packages/nova/network/neutron.py:3383
Oct 09 16:31:47 compute-0 nova_compute[117331]: 2025-10-09 16:31:47.098 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 09 16:31:47 compute-0 nova_compute[117331]: 2025-10-09 16:31:47.112 2 DEBUG nova.network.neutron [req-3c23e8f0-478a-424d-ae52-94c6e500888f req-44a61115-d37e-482a-a05d-7a71c7ea7988 ee15961ad2be4bada6c106ac4f69f422 c076c42271004e7aa86e84416e33f826 - - default default] [instance: 4c51c9df-777a-497d-bf84-a8001b45a4f0] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.12/site-packages/nova/network/neutron.py:116
Oct 09 16:31:47 compute-0 nova_compute[117331]: 2025-10-09 16:31:47.620 2 DEBUG oslo_concurrency.lockutils [req-3c23e8f0-478a-424d-ae52-94c6e500888f req-44a61115-d37e-482a-a05d-7a71c7ea7988 ee15961ad2be4bada6c106ac4f69f422 c076c42271004e7aa86e84416e33f826 - - default default] Releasing lock "refresh_cache-4c51c9df-777a-497d-bf84-a8001b45a4f0" lock /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:334
Oct 09 16:31:47 compute-0 nova_compute[117331]: 2025-10-09 16:31:47.620 2 DEBUG oslo_concurrency.lockutils [None req-51bdc6fb-04ea-4bf5-89fb-88b926e5e428 1c793380a6e945d69dacfd07f1f156f8 6d67ac3076434a4582e5db1ca7d043ff - - default default] Acquired lock "refresh_cache-4c51c9df-777a-497d-bf84-a8001b45a4f0" lock /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:316
Oct 09 16:31:47 compute-0 nova_compute[117331]: 2025-10-09 16:31:47.621 2 DEBUG nova.network.neutron [None req-51bdc6fb-04ea-4bf5-89fb-88b926e5e428 1c793380a6e945d69dacfd07f1f156f8 6d67ac3076434a4582e5db1ca7d043ff - - default default] [instance: 4c51c9df-777a-497d-bf84-a8001b45a4f0] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.12/site-packages/nova/network/neutron.py:2070
Oct 09 16:31:49 compute-0 nova_compute[117331]: 2025-10-09 16:31:49.117 2 DEBUG nova.network.neutron [None req-51bdc6fb-04ea-4bf5-89fb-88b926e5e428 1c793380a6e945d69dacfd07f1f156f8 6d67ac3076434a4582e5db1ca7d043ff - - default default] [instance: 4c51c9df-777a-497d-bf84-a8001b45a4f0] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.12/site-packages/nova/network/neutron.py:3383
Oct 09 16:31:49 compute-0 podman[149958]: 2025-10-09 16:31:49.828549903 +0000 UTC m=+0.060660508 container health_status 270830b5167cafbab735d8118980f26987e673cd0fa464dd2202394830bcca66 (image=38.102.83.66:5001/podified-master-centos10/openstack-multipathd:watcher_latest, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, io.buildah.version=1.41.4, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 10 Base Image, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': '38.102.83.66:5001/podified-master-centos10/openstack-multipathd:watcher_latest', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, container_name=multipathd, org.label-schema.build-date=20251007, org.label-schema.schema-version=1.0, tcib_build_tag=watcher_latest)
Oct 09 16:31:50 compute-0 nova_compute[117331]: 2025-10-09 16:31:50.005 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 09 16:31:50 compute-0 nova_compute[117331]: 2025-10-09 16:31:50.167 2 WARNING neutronclient.v2_0.client [None req-51bdc6fb-04ea-4bf5-89fb-88b926e5e428 1c793380a6e945d69dacfd07f1f156f8 6d67ac3076434a4582e5db1ca7d043ff - - default default] The python binding code in neutronclient is deprecated in favor of OpenstackSDK, please use that as this will be removed in a future release.
Oct 09 16:31:50 compute-0 ovn_metadata_agent[28608]: 2025-10-09 16:31:50.170 28613 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=9954897f-aa83-45dd-8e84-289816676c2a, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '22'}),), if_exists=True) do_commit /usr/lib/python3.12/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 09 16:31:50 compute-0 sshd-session[149941]: Invalid user db2inst1 from 36.224.53.32 port 43576
Oct 09 16:31:51 compute-0 nova_compute[117331]: 2025-10-09 16:31:51.225 2 DEBUG nova.network.neutron [None req-51bdc6fb-04ea-4bf5-89fb-88b926e5e428 1c793380a6e945d69dacfd07f1f156f8 6d67ac3076434a4582e5db1ca7d043ff - - default default] [instance: 4c51c9df-777a-497d-bf84-a8001b45a4f0] Updating instance_info_cache with network_info: [{"id": "783ccf1e-b85c-4420-b3e9-705ddf5495d3", "address": "fa:16:3e:95:fd:24", "network": {"id": "54b37568-476a-40a0-b545-fe5401f85653", "bridge": "br-int", "label": "tempest-TestExecuteWorkloadBalanceStrategy-563305466-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "036a4e356fb34effb6775ffe5bd9a19f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap783ccf1e-b8", "ovs_interfaceid": "783ccf1e-b85c-4420-b3e9-705ddf5495d3", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.12/site-packages/nova/network/neutron.py:116
Oct 09 16:31:51 compute-0 sshd-session[149941]: pam_unix(sshd:auth): check pass; user unknown
Oct 09 16:31:51 compute-0 sshd-session[149941]: pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=36.224.53.32
Oct 09 16:31:51 compute-0 nova_compute[117331]: 2025-10-09 16:31:51.732 2 DEBUG oslo_concurrency.lockutils [None req-51bdc6fb-04ea-4bf5-89fb-88b926e5e428 1c793380a6e945d69dacfd07f1f156f8 6d67ac3076434a4582e5db1ca7d043ff - - default default] Releasing lock "refresh_cache-4c51c9df-777a-497d-bf84-a8001b45a4f0" lock /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:334
Oct 09 16:31:51 compute-0 nova_compute[117331]: 2025-10-09 16:31:51.733 2 DEBUG nova.compute.manager [None req-51bdc6fb-04ea-4bf5-89fb-88b926e5e428 1c793380a6e945d69dacfd07f1f156f8 6d67ac3076434a4582e5db1ca7d043ff - - default default] [instance: 4c51c9df-777a-497d-bf84-a8001b45a4f0] Instance network_info: |[{"id": "783ccf1e-b85c-4420-b3e9-705ddf5495d3", "address": "fa:16:3e:95:fd:24", "network": {"id": "54b37568-476a-40a0-b545-fe5401f85653", "bridge": "br-int", "label": "tempest-TestExecuteWorkloadBalanceStrategy-563305466-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "036a4e356fb34effb6775ffe5bd9a19f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap783ccf1e-b8", "ovs_interfaceid": "783ccf1e-b85c-4420-b3e9-705ddf5495d3", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.12/site-packages/nova/compute/manager.py:2031
Oct 09 16:31:51 compute-0 nova_compute[117331]: 2025-10-09 16:31:51.736 2 DEBUG nova.virt.libvirt.driver [None req-51bdc6fb-04ea-4bf5-89fb-88b926e5e428 1c793380a6e945d69dacfd07f1f156f8 6d67ac3076434a4582e5db1ca7d043ff - - default default] [instance: 4c51c9df-777a-497d-bf84-a8001b45a4f0] Start _get_guest_xml network_info=[{"id": "783ccf1e-b85c-4420-b3e9-705ddf5495d3", "address": "fa:16:3e:95:fd:24", "network": {"id": "54b37568-476a-40a0-b545-fe5401f85653", "bridge": "br-int", "label": "tempest-TestExecuteWorkloadBalanceStrategy-563305466-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "036a4e356fb34effb6775ffe5bd9a19f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap783ccf1e-b8", "ovs_interfaceid": "783ccf1e-b85c-4420-b3e9-705ddf5495d3", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-09T16:07:02Z,direct_url=<?>,disk_format='qcow2',id=b7d6e0af-25e4-4227-9dc6-43143898ceee,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='b30e8cf5e10742f190212b4cb97ce2c9',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-09T16:07:04Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'device_type': 'disk', 'encrypted': False, 'size': 0, 'encryption_format': None, 'guest_format': None, 'boot_index': 0, 'device_name': '/dev/vda', 'disk_bus': 'virtio', 'encryption_options': None, 'encryption_secret_uuid': None, 'image_id': 'b7d6e0af-25e4-4227-9dc6-43143898ceee'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None}share_info=None _get_guest_xml /usr/lib/python3.12/site-packages/nova/virt/libvirt/driver.py:8046
Oct 09 16:31:51 compute-0 nova_compute[117331]: 2025-10-09 16:31:51.741 2 WARNING nova.virt.libvirt.driver [None req-51bdc6fb-04ea-4bf5-89fb-88b926e5e428 1c793380a6e945d69dacfd07f1f156f8 6d67ac3076434a4582e5db1ca7d043ff - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Oct 09 16:31:51 compute-0 nova_compute[117331]: 2025-10-09 16:31:51.743 2 DEBUG nova.virt.driver [None req-51bdc6fb-04ea-4bf5-89fb-88b926e5e428 1c793380a6e945d69dacfd07f1f156f8 6d67ac3076434a4582e5db1ca7d043ff - - default default] InstanceDriverMetadata: InstanceDriverMetadata(root_type='image', root_id='b7d6e0af-25e4-4227-9dc6-43143898ceee', instance_meta=NovaInstanceMeta(name='tempest-TestExecuteWorkloadBalanceStrategy-server-445686334', uuid='4c51c9df-777a-497d-bf84-a8001b45a4f0'), owner=OwnerMeta(userid='1c793380a6e945d69dacfd07f1f156f8', username='tempest-TestExecuteWorkloadBalanceStrategy-2100042169-project-admin', projectid='6d67ac3076434a4582e5db1ca7d043ff', projectname='tempest-TestExecuteWorkloadBalanceStrategy-2100042169'), image=ImageMeta(id='b7d6e0af-25e4-4227-9dc6-43143898ceee', name=None, container_format='bare', disk_format='qcow2', min_disk=1, min_ram=0, properties={'hw_rng_model': 'virtio'}), flavor=FlavorMeta(name='m1.nano', flavorid='5aeb5bf9-70f3-4ab6-b418-2c3319a7d2b3', memory_mb=128, vcpus=1, root_gb=1, ephemeral_gb=0, extra_specs={'hw_rng:allowed': 'True'}, swap=0), network_info=[{"id": "783ccf1e-b85c-4420-b3e9-705ddf5495d3", "address": "fa:16:3e:95:fd:24", "network": {"id": "54b37568-476a-40a0-b545-fe5401f85653", "bridge": "br-int", "label": "tempest-TestExecuteWorkloadBalanceStrategy-563305466-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "036a4e356fb34effb6775ffe5bd9a19f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap783ccf1e-b8", "ovs_interfaceid": 
"783ccf1e-b85c-4420-b3e9-705ddf5495d3", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}], nova_package='32.1.0-0.20251008143712.076498e.el10', creation_time=1760027511.7431946) get_instance_driver_metadata /usr/lib/python3.12/site-packages/nova/virt/driver.py:437
Oct 09 16:31:51 compute-0 nova_compute[117331]: 2025-10-09 16:31:51.751 2 DEBUG nova.virt.libvirt.host [None req-51bdc6fb-04ea-4bf5-89fb-88b926e5e428 1c793380a6e945d69dacfd07f1f156f8 6d67ac3076434a4582e5db1ca7d043ff - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.12/site-packages/nova/virt/libvirt/host.py:1698
Oct 09 16:31:51 compute-0 nova_compute[117331]: 2025-10-09 16:31:51.752 2 DEBUG nova.virt.libvirt.host [None req-51bdc6fb-04ea-4bf5-89fb-88b926e5e428 1c793380a6e945d69dacfd07f1f156f8 6d67ac3076434a4582e5db1ca7d043ff - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.12/site-packages/nova/virt/libvirt/host.py:1708
Oct 09 16:31:51 compute-0 nova_compute[117331]: 2025-10-09 16:31:51.756 2 DEBUG nova.virt.libvirt.host [None req-51bdc6fb-04ea-4bf5-89fb-88b926e5e428 1c793380a6e945d69dacfd07f1f156f8 6d67ac3076434a4582e5db1ca7d043ff - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.12/site-packages/nova/virt/libvirt/host.py:1717
Oct 09 16:31:51 compute-0 nova_compute[117331]: 2025-10-09 16:31:51.756 2 DEBUG nova.virt.libvirt.host [None req-51bdc6fb-04ea-4bf5-89fb-88b926e5e428 1c793380a6e945d69dacfd07f1f156f8 6d67ac3076434a4582e5db1ca7d043ff - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.12/site-packages/nova/virt/libvirt/host.py:1724
Oct 09 16:31:51 compute-0 nova_compute[117331]: 2025-10-09 16:31:51.757 2 DEBUG nova.virt.libvirt.driver [None req-51bdc6fb-04ea-4bf5-89fb-88b926e5e428 1c793380a6e945d69dacfd07f1f156f8 6d67ac3076434a4582e5db1ca7d043ff - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.12/site-packages/nova/virt/libvirt/driver.py:5809
Oct 09 16:31:51 compute-0 nova_compute[117331]: 2025-10-09 16:31:51.757 2 DEBUG nova.virt.hardware [None req-51bdc6fb-04ea-4bf5-89fb-88b926e5e428 1c793380a6e945d69dacfd07f1f156f8 6d67ac3076434a4582e5db1ca7d043ff - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-10-09T16:07:00Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='5aeb5bf9-70f3-4ab6-b418-2c3319a7d2b3',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-09T16:07:02Z,direct_url=<?>,disk_format='qcow2',id=b7d6e0af-25e4-4227-9dc6-43143898ceee,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='b30e8cf5e10742f190212b4cb97ce2c9',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-09T16:07:04Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.12/site-packages/nova/virt/hardware.py:571
Oct 09 16:31:51 compute-0 nova_compute[117331]: 2025-10-09 16:31:51.758 2 DEBUG nova.virt.hardware [None req-51bdc6fb-04ea-4bf5-89fb-88b926e5e428 1c793380a6e945d69dacfd07f1f156f8 6d67ac3076434a4582e5db1ca7d043ff - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.12/site-packages/nova/virt/hardware.py:356
Oct 09 16:31:51 compute-0 nova_compute[117331]: 2025-10-09 16:31:51.758 2 DEBUG nova.virt.hardware [None req-51bdc6fb-04ea-4bf5-89fb-88b926e5e428 1c793380a6e945d69dacfd07f1f156f8 6d67ac3076434a4582e5db1ca7d043ff - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.12/site-packages/nova/virt/hardware.py:360
Oct 09 16:31:51 compute-0 nova_compute[117331]: 2025-10-09 16:31:51.758 2 DEBUG nova.virt.hardware [None req-51bdc6fb-04ea-4bf5-89fb-88b926e5e428 1c793380a6e945d69dacfd07f1f156f8 6d67ac3076434a4582e5db1ca7d043ff - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.12/site-packages/nova/virt/hardware.py:396
Oct 09 16:31:51 compute-0 nova_compute[117331]: 2025-10-09 16:31:51.759 2 DEBUG nova.virt.hardware [None req-51bdc6fb-04ea-4bf5-89fb-88b926e5e428 1c793380a6e945d69dacfd07f1f156f8 6d67ac3076434a4582e5db1ca7d043ff - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.12/site-packages/nova/virt/hardware.py:400
Oct 09 16:31:51 compute-0 nova_compute[117331]: 2025-10-09 16:31:51.759 2 DEBUG nova.virt.hardware [None req-51bdc6fb-04ea-4bf5-89fb-88b926e5e428 1c793380a6e945d69dacfd07f1f156f8 6d67ac3076434a4582e5db1ca7d043ff - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.12/site-packages/nova/virt/hardware.py:438
Oct 09 16:31:51 compute-0 nova_compute[117331]: 2025-10-09 16:31:51.759 2 DEBUG nova.virt.hardware [None req-51bdc6fb-04ea-4bf5-89fb-88b926e5e428 1c793380a6e945d69dacfd07f1f156f8 6d67ac3076434a4582e5db1ca7d043ff - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.12/site-packages/nova/virt/hardware.py:577
Oct 09 16:31:51 compute-0 nova_compute[117331]: 2025-10-09 16:31:51.760 2 DEBUG nova.virt.hardware [None req-51bdc6fb-04ea-4bf5-89fb-88b926e5e428 1c793380a6e945d69dacfd07f1f156f8 6d67ac3076434a4582e5db1ca7d043ff - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.12/site-packages/nova/virt/hardware.py:479
Oct 09 16:31:51 compute-0 nova_compute[117331]: 2025-10-09 16:31:51.760 2 DEBUG nova.virt.hardware [None req-51bdc6fb-04ea-4bf5-89fb-88b926e5e428 1c793380a6e945d69dacfd07f1f156f8 6d67ac3076434a4582e5db1ca7d043ff - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.12/site-packages/nova/virt/hardware.py:509
Oct 09 16:31:51 compute-0 nova_compute[117331]: 2025-10-09 16:31:51.760 2 DEBUG nova.virt.hardware [None req-51bdc6fb-04ea-4bf5-89fb-88b926e5e428 1c793380a6e945d69dacfd07f1f156f8 6d67ac3076434a4582e5db1ca7d043ff - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.12/site-packages/nova/virt/hardware.py:583
Oct 09 16:31:51 compute-0 nova_compute[117331]: 2025-10-09 16:31:51.760 2 DEBUG nova.virt.hardware [None req-51bdc6fb-04ea-4bf5-89fb-88b926e5e428 1c793380a6e945d69dacfd07f1f156f8 6d67ac3076434a4582e5db1ca7d043ff - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.12/site-packages/nova/virt/hardware.py:585
Oct 09 16:31:51 compute-0 nova_compute[117331]: 2025-10-09 16:31:51.765 2 DEBUG nova.virt.libvirt.vif [None req-51bdc6fb-04ea-4bf5-89fb-88b926e5e428 1c793380a6e945d69dacfd07f1f156f8 6d67ac3076434a4582e5db1ca7d043ff - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,compute_id=1,config_drive='',created_at=2025-10-09T16:31:38Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description=None,display_name='tempest-TestExecuteWorkloadBalanceStrategy-server-445686334',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testexecuteworkloadbalancestrategy-server-445686334',id=22,image_ref='b7d6e0af-25e4-4227-9dc6-43143898ceee',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='6d67ac3076434a4582e5db1ca7d043ff',ramdisk_id='',reservation_id='r-0ja4884i',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,manager,admin,member',image_base_image_ref='b7d6e0af-25e4-4227-9dc6-43143898ceee',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestExecuteWorkloadBalanceStrategy-2100042169',owner_user_name='tempest-TestExec
uteWorkloadBalanceStrategy-2100042169-project-admin'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-09T16:31:44Z,user_data=None,user_id='1c793380a6e945d69dacfd07f1f156f8',uuid=4c51c9df-777a-497d-bf84-a8001b45a4f0,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "783ccf1e-b85c-4420-b3e9-705ddf5495d3", "address": "fa:16:3e:95:fd:24", "network": {"id": "54b37568-476a-40a0-b545-fe5401f85653", "bridge": "br-int", "label": "tempest-TestExecuteWorkloadBalanceStrategy-563305466-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "036a4e356fb34effb6775ffe5bd9a19f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap783ccf1e-b8", "ovs_interfaceid": "783ccf1e-b85c-4420-b3e9-705ddf5495d3", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.12/site-packages/nova/virt/libvirt/vif.py:574
Oct 09 16:31:51 compute-0 nova_compute[117331]: 2025-10-09 16:31:51.765 2 DEBUG nova.network.os_vif_util [None req-51bdc6fb-04ea-4bf5-89fb-88b926e5e428 1c793380a6e945d69dacfd07f1f156f8 6d67ac3076434a4582e5db1ca7d043ff - - default default] Converting VIF {"id": "783ccf1e-b85c-4420-b3e9-705ddf5495d3", "address": "fa:16:3e:95:fd:24", "network": {"id": "54b37568-476a-40a0-b545-fe5401f85653", "bridge": "br-int", "label": "tempest-TestExecuteWorkloadBalanceStrategy-563305466-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "036a4e356fb34effb6775ffe5bd9a19f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap783ccf1e-b8", "ovs_interfaceid": "783ccf1e-b85c-4420-b3e9-705ddf5495d3", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.12/site-packages/nova/network/os_vif_util.py:511
Oct 09 16:31:51 compute-0 nova_compute[117331]: 2025-10-09 16:31:51.766 2 DEBUG nova.network.os_vif_util [None req-51bdc6fb-04ea-4bf5-89fb-88b926e5e428 1c793380a6e945d69dacfd07f1f156f8 6d67ac3076434a4582e5db1ca7d043ff - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:95:fd:24,bridge_name='br-int',has_traffic_filtering=True,id=783ccf1e-b85c-4420-b3e9-705ddf5495d3,network=Network(54b37568-476a-40a0-b545-fe5401f85653),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap783ccf1e-b8') nova_to_osvif_vif /usr/lib/python3.12/site-packages/nova/network/os_vif_util.py:548
Oct 09 16:31:51 compute-0 nova_compute[117331]: 2025-10-09 16:31:51.767 2 DEBUG nova.objects.instance [None req-51bdc6fb-04ea-4bf5-89fb-88b926e5e428 1c793380a6e945d69dacfd07f1f156f8 6d67ac3076434a4582e5db1ca7d043ff - - default default] Lazy-loading 'pci_devices' on Instance uuid 4c51c9df-777a-497d-bf84-a8001b45a4f0 obj_load_attr /usr/lib/python3.12/site-packages/nova/objects/instance.py:1141
Oct 09 16:31:52 compute-0 nova_compute[117331]: 2025-10-09 16:31:52.101 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 09 16:31:52 compute-0 nova_compute[117331]: 2025-10-09 16:31:52.275 2 DEBUG nova.virt.libvirt.driver [None req-51bdc6fb-04ea-4bf5-89fb-88b926e5e428 1c793380a6e945d69dacfd07f1f156f8 6d67ac3076434a4582e5db1ca7d043ff - - default default] [instance: 4c51c9df-777a-497d-bf84-a8001b45a4f0] End _get_guest_xml xml=<domain type="kvm">
Oct 09 16:31:52 compute-0 nova_compute[117331]:   <uuid>4c51c9df-777a-497d-bf84-a8001b45a4f0</uuid>
Oct 09 16:31:52 compute-0 nova_compute[117331]:   <name>instance-00000016</name>
Oct 09 16:31:52 compute-0 nova_compute[117331]:   <memory>131072</memory>
Oct 09 16:31:52 compute-0 nova_compute[117331]:   <vcpu>1</vcpu>
Oct 09 16:31:52 compute-0 nova_compute[117331]:   <metadata>
Oct 09 16:31:52 compute-0 nova_compute[117331]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Oct 09 16:31:52 compute-0 nova_compute[117331]:       <nova:package version="32.1.0-0.20251008143712.076498e.el10"/>
Oct 09 16:31:52 compute-0 nova_compute[117331]:       <nova:name>tempest-TestExecuteWorkloadBalanceStrategy-server-445686334</nova:name>
Oct 09 16:31:52 compute-0 nova_compute[117331]:       <nova:creationTime>2025-10-09 16:31:51</nova:creationTime>
Oct 09 16:31:52 compute-0 nova_compute[117331]:       <nova:flavor name="m1.nano" id="5aeb5bf9-70f3-4ab6-b418-2c3319a7d2b3">
Oct 09 16:31:52 compute-0 nova_compute[117331]:         <nova:memory>128</nova:memory>
Oct 09 16:31:52 compute-0 nova_compute[117331]:         <nova:disk>1</nova:disk>
Oct 09 16:31:52 compute-0 nova_compute[117331]:         <nova:swap>0</nova:swap>
Oct 09 16:31:52 compute-0 nova_compute[117331]:         <nova:ephemeral>0</nova:ephemeral>
Oct 09 16:31:52 compute-0 nova_compute[117331]:         <nova:vcpus>1</nova:vcpus>
Oct 09 16:31:52 compute-0 nova_compute[117331]:         <nova:extraSpecs>
Oct 09 16:31:52 compute-0 nova_compute[117331]:           <nova:extraSpec name="hw_rng:allowed">True</nova:extraSpec>
Oct 09 16:31:52 compute-0 nova_compute[117331]:         </nova:extraSpecs>
Oct 09 16:31:52 compute-0 nova_compute[117331]:       </nova:flavor>
Oct 09 16:31:52 compute-0 nova_compute[117331]:       <nova:image uuid="b7d6e0af-25e4-4227-9dc6-43143898ceee">
Oct 09 16:31:52 compute-0 nova_compute[117331]:         <nova:containerFormat>bare</nova:containerFormat>
Oct 09 16:31:52 compute-0 nova_compute[117331]:         <nova:diskFormat>qcow2</nova:diskFormat>
Oct 09 16:31:52 compute-0 nova_compute[117331]:         <nova:minDisk>1</nova:minDisk>
Oct 09 16:31:52 compute-0 nova_compute[117331]:         <nova:minRam>0</nova:minRam>
Oct 09 16:31:52 compute-0 nova_compute[117331]:         <nova:properties>
Oct 09 16:31:52 compute-0 nova_compute[117331]:           <nova:property name="hw_rng_model">virtio</nova:property>
Oct 09 16:31:52 compute-0 nova_compute[117331]:         </nova:properties>
Oct 09 16:31:52 compute-0 nova_compute[117331]:       </nova:image>
Oct 09 16:31:52 compute-0 nova_compute[117331]:       <nova:owner>
Oct 09 16:31:52 compute-0 nova_compute[117331]:         <nova:user uuid="1c793380a6e945d69dacfd07f1f156f8">tempest-TestExecuteWorkloadBalanceStrategy-2100042169-project-admin</nova:user>
Oct 09 16:31:52 compute-0 nova_compute[117331]:         <nova:project uuid="6d67ac3076434a4582e5db1ca7d043ff">tempest-TestExecuteWorkloadBalanceStrategy-2100042169</nova:project>
Oct 09 16:31:52 compute-0 nova_compute[117331]:       </nova:owner>
Oct 09 16:31:52 compute-0 nova_compute[117331]:       <nova:root type="image" uuid="b7d6e0af-25e4-4227-9dc6-43143898ceee"/>
Oct 09 16:31:52 compute-0 nova_compute[117331]:       <nova:ports>
Oct 09 16:31:52 compute-0 nova_compute[117331]:         <nova:port uuid="783ccf1e-b85c-4420-b3e9-705ddf5495d3">
Oct 09 16:31:52 compute-0 nova_compute[117331]:           <nova:ip type="fixed" address="10.100.0.10" ipVersion="4"/>
Oct 09 16:31:52 compute-0 nova_compute[117331]:         </nova:port>
Oct 09 16:31:52 compute-0 nova_compute[117331]:       </nova:ports>
Oct 09 16:31:52 compute-0 nova_compute[117331]:     </nova:instance>
Oct 09 16:31:52 compute-0 nova_compute[117331]:   </metadata>
Oct 09 16:31:52 compute-0 nova_compute[117331]:   <sysinfo type="smbios">
Oct 09 16:31:52 compute-0 nova_compute[117331]:     <system>
Oct 09 16:31:52 compute-0 nova_compute[117331]:       <entry name="manufacturer">RDO</entry>
Oct 09 16:31:52 compute-0 nova_compute[117331]:       <entry name="product">OpenStack Compute</entry>
Oct 09 16:31:52 compute-0 nova_compute[117331]:       <entry name="version">32.1.0-0.20251008143712.076498e.el10</entry>
Oct 09 16:31:52 compute-0 nova_compute[117331]:       <entry name="serial">4c51c9df-777a-497d-bf84-a8001b45a4f0</entry>
Oct 09 16:31:52 compute-0 nova_compute[117331]:       <entry name="uuid">4c51c9df-777a-497d-bf84-a8001b45a4f0</entry>
Oct 09 16:31:52 compute-0 nova_compute[117331]:       <entry name="family">Virtual Machine</entry>
Oct 09 16:31:52 compute-0 nova_compute[117331]:     </system>
Oct 09 16:31:52 compute-0 nova_compute[117331]:   </sysinfo>
Oct 09 16:31:52 compute-0 nova_compute[117331]:   <os>
Oct 09 16:31:52 compute-0 nova_compute[117331]:     <type arch="x86_64" machine="q35">hvm</type>
Oct 09 16:31:52 compute-0 nova_compute[117331]:     <boot dev="hd"/>
Oct 09 16:31:52 compute-0 nova_compute[117331]:     <smbios mode="sysinfo"/>
Oct 09 16:31:52 compute-0 nova_compute[117331]:   </os>
Oct 09 16:31:52 compute-0 nova_compute[117331]:   <features>
Oct 09 16:31:52 compute-0 nova_compute[117331]:     <acpi/>
Oct 09 16:31:52 compute-0 nova_compute[117331]:     <apic/>
Oct 09 16:31:52 compute-0 nova_compute[117331]:     <vmcoreinfo/>
Oct 09 16:31:52 compute-0 nova_compute[117331]:   </features>
Oct 09 16:31:52 compute-0 nova_compute[117331]:   <clock offset="utc">
Oct 09 16:31:52 compute-0 nova_compute[117331]:     <timer name="pit" tickpolicy="delay"/>
Oct 09 16:31:52 compute-0 nova_compute[117331]:     <timer name="rtc" tickpolicy="catchup"/>
Oct 09 16:31:52 compute-0 nova_compute[117331]:     <timer name="hpet" present="no"/>
Oct 09 16:31:52 compute-0 nova_compute[117331]:   </clock>
Oct 09 16:31:52 compute-0 nova_compute[117331]:   <cpu mode="host-model" match="exact">
Oct 09 16:31:52 compute-0 nova_compute[117331]:     <topology sockets="1" cores="1" threads="1"/>
Oct 09 16:31:52 compute-0 nova_compute[117331]:   </cpu>
Oct 09 16:31:52 compute-0 nova_compute[117331]:   <devices>
Oct 09 16:31:52 compute-0 nova_compute[117331]:     <disk type="file" device="disk">
Oct 09 16:31:52 compute-0 nova_compute[117331]:       <driver name="qemu" type="qcow2" cache="none"/>
Oct 09 16:31:52 compute-0 nova_compute[117331]:       <source file="/var/lib/nova/instances/4c51c9df-777a-497d-bf84-a8001b45a4f0/disk"/>
Oct 09 16:31:52 compute-0 nova_compute[117331]:       <target dev="vda" bus="virtio"/>
Oct 09 16:31:52 compute-0 nova_compute[117331]:     </disk>
Oct 09 16:31:52 compute-0 nova_compute[117331]:     <disk type="file" device="cdrom">
Oct 09 16:31:52 compute-0 nova_compute[117331]:       <driver name="qemu" type="raw" cache="none"/>
Oct 09 16:31:52 compute-0 nova_compute[117331]:       <source file="/var/lib/nova/instances/4c51c9df-777a-497d-bf84-a8001b45a4f0/disk.config"/>
Oct 09 16:31:52 compute-0 nova_compute[117331]:       <target dev="sda" bus="sata"/>
Oct 09 16:31:52 compute-0 nova_compute[117331]:     </disk>
Oct 09 16:31:52 compute-0 nova_compute[117331]:     <interface type="ethernet">
Oct 09 16:31:52 compute-0 nova_compute[117331]:       <mac address="fa:16:3e:95:fd:24"/>
Oct 09 16:31:52 compute-0 nova_compute[117331]:       <model type="virtio"/>
Oct 09 16:31:52 compute-0 nova_compute[117331]:       <driver name="vhost" rx_queue_size="512"/>
Oct 09 16:31:52 compute-0 nova_compute[117331]:       <mtu size="1442"/>
Oct 09 16:31:52 compute-0 nova_compute[117331]:       <target dev="tap783ccf1e-b8"/>
Oct 09 16:31:52 compute-0 nova_compute[117331]:     </interface>
Oct 09 16:31:52 compute-0 nova_compute[117331]:     <serial type="pty">
Oct 09 16:31:52 compute-0 nova_compute[117331]:       <log file="/var/lib/nova/instances/4c51c9df-777a-497d-bf84-a8001b45a4f0/console.log" append="off"/>
Oct 09 16:31:52 compute-0 nova_compute[117331]:     </serial>
Oct 09 16:31:52 compute-0 nova_compute[117331]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Oct 09 16:31:52 compute-0 nova_compute[117331]:     <video>
Oct 09 16:31:52 compute-0 nova_compute[117331]:       <model type="virtio"/>
Oct 09 16:31:52 compute-0 nova_compute[117331]:     </video>
Oct 09 16:31:52 compute-0 nova_compute[117331]:     <input type="tablet" bus="usb"/>
Oct 09 16:31:52 compute-0 nova_compute[117331]:     <rng model="virtio">
Oct 09 16:31:52 compute-0 nova_compute[117331]:       <backend model="random">/dev/urandom</backend>
Oct 09 16:31:52 compute-0 nova_compute[117331]:     </rng>
Oct 09 16:31:52 compute-0 nova_compute[117331]:     <controller type="pci" model="pcie-root"/>
Oct 09 16:31:52 compute-0 nova_compute[117331]:     <controller type="pci" model="pcie-root-port"/>
Oct 09 16:31:52 compute-0 nova_compute[117331]:     <controller type="pci" model="pcie-root-port"/>
Oct 09 16:31:52 compute-0 nova_compute[117331]:     <controller type="pci" model="pcie-root-port"/>
Oct 09 16:31:52 compute-0 nova_compute[117331]:     <controller type="pci" model="pcie-root-port"/>
Oct 09 16:31:52 compute-0 nova_compute[117331]:     <controller type="pci" model="pcie-root-port"/>
Oct 09 16:31:52 compute-0 nova_compute[117331]:     <controller type="pci" model="pcie-root-port"/>
Oct 09 16:31:52 compute-0 nova_compute[117331]:     <controller type="pci" model="pcie-root-port"/>
Oct 09 16:31:52 compute-0 nova_compute[117331]:     <controller type="pci" model="pcie-root-port"/>
Oct 09 16:31:52 compute-0 nova_compute[117331]:     <controller type="pci" model="pcie-root-port"/>
Oct 09 16:31:52 compute-0 nova_compute[117331]:     <controller type="pci" model="pcie-root-port"/>
Oct 09 16:31:52 compute-0 nova_compute[117331]:     <controller type="pci" model="pcie-root-port"/>
Oct 09 16:31:52 compute-0 nova_compute[117331]:     <controller type="pci" model="pcie-root-port"/>
Oct 09 16:31:52 compute-0 nova_compute[117331]:     <controller type="pci" model="pcie-root-port"/>
Oct 09 16:31:52 compute-0 nova_compute[117331]:     <controller type="pci" model="pcie-root-port"/>
Oct 09 16:31:52 compute-0 nova_compute[117331]:     <controller type="pci" model="pcie-root-port"/>
Oct 09 16:31:52 compute-0 nova_compute[117331]:     <controller type="pci" model="pcie-root-port"/>
Oct 09 16:31:52 compute-0 nova_compute[117331]:     <controller type="pci" model="pcie-root-port"/>
Oct 09 16:31:52 compute-0 nova_compute[117331]:     <controller type="pci" model="pcie-root-port"/>
Oct 09 16:31:52 compute-0 nova_compute[117331]:     <controller type="pci" model="pcie-root-port"/>
Oct 09 16:31:52 compute-0 nova_compute[117331]:     <controller type="pci" model="pcie-root-port"/>
Oct 09 16:31:52 compute-0 nova_compute[117331]:     <controller type="pci" model="pcie-root-port"/>
Oct 09 16:31:52 compute-0 nova_compute[117331]:     <controller type="pci" model="pcie-root-port"/>
Oct 09 16:31:52 compute-0 nova_compute[117331]:     <controller type="pci" model="pcie-root-port"/>
Oct 09 16:31:52 compute-0 nova_compute[117331]:     <controller type="pci" model="pcie-root-port"/>
Oct 09 16:31:52 compute-0 nova_compute[117331]:     <controller type="usb" index="0"/>
Oct 09 16:31:52 compute-0 nova_compute[117331]:     <memballoon model="virtio" autodeflate="on" freePageReporting="on">
Oct 09 16:31:52 compute-0 nova_compute[117331]:       <stats period="10"/>
Oct 09 16:31:52 compute-0 nova_compute[117331]:     </memballoon>
Oct 09 16:31:52 compute-0 nova_compute[117331]:   </devices>
Oct 09 16:31:52 compute-0 nova_compute[117331]: </domain>
Oct 09 16:31:52 compute-0 nova_compute[117331]:  _get_guest_xml /usr/lib/python3.12/site-packages/nova/virt/libvirt/driver.py:8052
Oct 09 16:31:52 compute-0 nova_compute[117331]: 2025-10-09 16:31:52.276 2 DEBUG nova.compute.manager [None req-51bdc6fb-04ea-4bf5-89fb-88b926e5e428 1c793380a6e945d69dacfd07f1f156f8 6d67ac3076434a4582e5db1ca7d043ff - - default default] [instance: 4c51c9df-777a-497d-bf84-a8001b45a4f0] Preparing to wait for external event network-vif-plugged-783ccf1e-b85c-4420-b3e9-705ddf5495d3 prepare_for_instance_event /usr/lib/python3.12/site-packages/nova/compute/manager.py:307
Oct 09 16:31:52 compute-0 nova_compute[117331]: 2025-10-09 16:31:52.277 2 DEBUG oslo_concurrency.lockutils [None req-51bdc6fb-04ea-4bf5-89fb-88b926e5e428 1c793380a6e945d69dacfd07f1f156f8 6d67ac3076434a4582e5db1ca7d043ff - - default default] Acquiring lock "4c51c9df-777a-497d-bf84-a8001b45a4f0-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:405
Oct 09 16:31:52 compute-0 nova_compute[117331]: 2025-10-09 16:31:52.277 2 DEBUG oslo_concurrency.lockutils [None req-51bdc6fb-04ea-4bf5-89fb-88b926e5e428 1c793380a6e945d69dacfd07f1f156f8 6d67ac3076434a4582e5db1ca7d043ff - - default default] Lock "4c51c9df-777a-497d-bf84-a8001b45a4f0-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:410
Oct 09 16:31:52 compute-0 nova_compute[117331]: 2025-10-09 16:31:52.277 2 DEBUG oslo_concurrency.lockutils [None req-51bdc6fb-04ea-4bf5-89fb-88b926e5e428 1c793380a6e945d69dacfd07f1f156f8 6d67ac3076434a4582e5db1ca7d043ff - - default default] Lock "4c51c9df-777a-497d-bf84-a8001b45a4f0-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:424
Oct 09 16:31:52 compute-0 nova_compute[117331]: 2025-10-09 16:31:52.278 2 DEBUG nova.virt.libvirt.vif [None req-51bdc6fb-04ea-4bf5-89fb-88b926e5e428 1c793380a6e945d69dacfd07f1f156f8 6d67ac3076434a4582e5db1ca7d043ff - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,compute_id=1,config_drive='',created_at=2025-10-09T16:31:38Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description=None,display_name='tempest-TestExecuteWorkloadBalanceStrategy-server-445686334',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testexecuteworkloadbalancestrategy-server-445686334',id=22,image_ref='b7d6e0af-25e4-4227-9dc6-43143898ceee',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='6d67ac3076434a4582e5db1ca7d043ff',ramdisk_id='',reservation_id='r-0ja4884i',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,manager,admin,member',image_base_image_ref='b7d6e0af-25e4-4227-9dc6-43143898ceee',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestExecuteWorkloadBalanceStrategy-2100042169',owner_user_name='tempest-TestExecuteWorkloadBalanceStrategy-2100042169-project-admin'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-09T16:31:44Z,user_data=None,user_id='1c793380a6e945d69dacfd07f1f156f8',uuid=4c51c9df-777a-497d-bf84-a8001b45a4f0,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "783ccf1e-b85c-4420-b3e9-705ddf5495d3", "address": "fa:16:3e:95:fd:24", "network": {"id": "54b37568-476a-40a0-b545-fe5401f85653", "bridge": "br-int", "label": "tempest-TestExecuteWorkloadBalanceStrategy-563305466-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "036a4e356fb34effb6775ffe5bd9a19f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap783ccf1e-b8", "ovs_interfaceid": "783ccf1e-b85c-4420-b3e9-705ddf5495d3", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.12/site-packages/nova/virt/libvirt/vif.py:721
Oct 09 16:31:52 compute-0 nova_compute[117331]: 2025-10-09 16:31:52.278 2 DEBUG nova.network.os_vif_util [None req-51bdc6fb-04ea-4bf5-89fb-88b926e5e428 1c793380a6e945d69dacfd07f1f156f8 6d67ac3076434a4582e5db1ca7d043ff - - default default] Converting VIF {"id": "783ccf1e-b85c-4420-b3e9-705ddf5495d3", "address": "fa:16:3e:95:fd:24", "network": {"id": "54b37568-476a-40a0-b545-fe5401f85653", "bridge": "br-int", "label": "tempest-TestExecuteWorkloadBalanceStrategy-563305466-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "036a4e356fb34effb6775ffe5bd9a19f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap783ccf1e-b8", "ovs_interfaceid": "783ccf1e-b85c-4420-b3e9-705ddf5495d3", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.12/site-packages/nova/network/os_vif_util.py:511
Oct 09 16:31:52 compute-0 nova_compute[117331]: 2025-10-09 16:31:52.279 2 DEBUG nova.network.os_vif_util [None req-51bdc6fb-04ea-4bf5-89fb-88b926e5e428 1c793380a6e945d69dacfd07f1f156f8 6d67ac3076434a4582e5db1ca7d043ff - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:95:fd:24,bridge_name='br-int',has_traffic_filtering=True,id=783ccf1e-b85c-4420-b3e9-705ddf5495d3,network=Network(54b37568-476a-40a0-b545-fe5401f85653),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap783ccf1e-b8') nova_to_osvif_vif /usr/lib/python3.12/site-packages/nova/network/os_vif_util.py:548
Oct 09 16:31:52 compute-0 nova_compute[117331]: 2025-10-09 16:31:52.279 2 DEBUG os_vif [None req-51bdc6fb-04ea-4bf5-89fb-88b926e5e428 1c793380a6e945d69dacfd07f1f156f8 6d67ac3076434a4582e5db1ca7d043ff - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:95:fd:24,bridge_name='br-int',has_traffic_filtering=True,id=783ccf1e-b85c-4420-b3e9-705ddf5495d3,network=Network(54b37568-476a-40a0-b545-fe5401f85653),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap783ccf1e-b8') plug /usr/lib/python3.12/site-packages/os_vif/__init__.py:76
Oct 09 16:31:52 compute-0 nova_compute[117331]: 2025-10-09 16:31:52.280 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 22 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 09 16:31:52 compute-0 nova_compute[117331]: 2025-10-09 16:31:52.280 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.12/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 09 16:31:52 compute-0 nova_compute[117331]: 2025-10-09 16:31:52.280 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.12/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 09 16:31:52 compute-0 nova_compute[117331]: 2025-10-09 16:31:52.281 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 22 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 09 16:31:52 compute-0 nova_compute[117331]: 2025-10-09 16:31:52.281 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbCreateCommand(_result=None, table=QoS, columns={'type': 'linux-noop', 'external_ids': {'id': 'a36f9769-b472-5edf-b9b9-f54b4746dc94', '_type': 'linux-noop'}}, row=False) do_commit /usr/lib/python3.12/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 09 16:31:52 compute-0 nova_compute[117331]: 2025-10-09 16:31:52.282 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 09 16:31:52 compute-0 nova_compute[117331]: 2025-10-09 16:31:52.284 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 09 16:31:52 compute-0 nova_compute[117331]: 2025-10-09 16:31:52.287 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 22 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 09 16:31:52 compute-0 nova_compute[117331]: 2025-10-09 16:31:52.288 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap783ccf1e-b8, may_exist=True, interface_attrs={}) do_commit /usr/lib/python3.12/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 09 16:31:52 compute-0 nova_compute[117331]: 2025-10-09 16:31:52.288 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Port, record=tap783ccf1e-b8, col_values=(('qos', UUID('e102b0c8-6b9c-4a2b-9eda-b0d27ccea77b')),), if_exists=True) do_commit /usr/lib/python3.12/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 09 16:31:52 compute-0 nova_compute[117331]: 2025-10-09 16:31:52.288 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=2): DbSetCommand(_result=None, table=Interface, record=tap783ccf1e-b8, col_values=(('external_ids', {'iface-id': '783ccf1e-b85c-4420-b3e9-705ddf5495d3', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:95:fd:24', 'vm-uuid': '4c51c9df-777a-497d-bf84-a8001b45a4f0'}),), if_exists=True) do_commit /usr/lib/python3.12/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 09 16:31:52 compute-0 nova_compute[117331]: 2025-10-09 16:31:52.290 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 09 16:31:52 compute-0 nova_compute[117331]: 2025-10-09 16:31:52.290 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 09 16:31:52 compute-0 NetworkManager[1028]: <info>  [1760027512.2917] manager: (tap783ccf1e-b8): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/74)
Oct 09 16:31:52 compute-0 nova_compute[117331]: 2025-10-09 16:31:52.292 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:248
Oct 09 16:31:52 compute-0 nova_compute[117331]: 2025-10-09 16:31:52.296 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 09 16:31:52 compute-0 nova_compute[117331]: 2025-10-09 16:31:52.297 2 INFO os_vif [None req-51bdc6fb-04ea-4bf5-89fb-88b926e5e428 1c793380a6e945d69dacfd07f1f156f8 6d67ac3076434a4582e5db1ca7d043ff - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:95:fd:24,bridge_name='br-int',has_traffic_filtering=True,id=783ccf1e-b85c-4420-b3e9-705ddf5495d3,network=Network(54b37568-476a-40a0-b545-fe5401f85653),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap783ccf1e-b8')
Oct 09 16:31:52 compute-0 podman[149981]: 2025-10-09 16:31:52.833473539 +0000 UTC m=+0.062254598 container health_status 86aa93e887899e54d383b0fc17fb3c5637680d1d8e146a130a557c837f0260fe (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter)
Oct 09 16:31:53 compute-0 sshd-session[149941]: Failed password for invalid user db2inst1 from 36.224.53.32 port 43576 ssh2
Oct 09 16:31:53 compute-0 sshd-session[149941]: Connection closed by invalid user db2inst1 36.224.53.32 port 43576 [preauth]
Oct 09 16:31:53 compute-0 nova_compute[117331]: 2025-10-09 16:31:53.835 2 DEBUG nova.virt.libvirt.driver [None req-51bdc6fb-04ea-4bf5-89fb-88b926e5e428 1c793380a6e945d69dacfd07f1f156f8 6d67ac3076434a4582e5db1ca7d043ff - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.12/site-packages/nova/virt/libvirt/driver.py:13022
Oct 09 16:31:53 compute-0 nova_compute[117331]: 2025-10-09 16:31:53.835 2 DEBUG nova.virt.libvirt.driver [None req-51bdc6fb-04ea-4bf5-89fb-88b926e5e428 1c793380a6e945d69dacfd07f1f156f8 6d67ac3076434a4582e5db1ca7d043ff - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.12/site-packages/nova/virt/libvirt/driver.py:13022
Oct 09 16:31:53 compute-0 nova_compute[117331]: 2025-10-09 16:31:53.835 2 DEBUG nova.virt.libvirt.driver [None req-51bdc6fb-04ea-4bf5-89fb-88b926e5e428 1c793380a6e945d69dacfd07f1f156f8 6d67ac3076434a4582e5db1ca7d043ff - - default default] No VIF found with MAC fa:16:3e:95:fd:24, not building metadata _build_interface_metadata /usr/lib/python3.12/site-packages/nova/virt/libvirt/driver.py:12998
Oct 09 16:31:53 compute-0 nova_compute[117331]: 2025-10-09 16:31:53.836 2 INFO nova.virt.libvirt.driver [None req-51bdc6fb-04ea-4bf5-89fb-88b926e5e428 1c793380a6e945d69dacfd07f1f156f8 6d67ac3076434a4582e5db1ca7d043ff - - default default] [instance: 4c51c9df-777a-497d-bf84-a8001b45a4f0] Using config drive
Oct 09 16:31:54 compute-0 nova_compute[117331]: 2025-10-09 16:31:54.346 2 WARNING neutronclient.v2_0.client [None req-51bdc6fb-04ea-4bf5-89fb-88b926e5e428 1c793380a6e945d69dacfd07f1f156f8 6d67ac3076434a4582e5db1ca7d043ff - - default default] The python binding code in neutronclient is deprecated in favor of OpenstackSDK, please use that as this will be removed in a future release.
Oct 09 16:31:54 compute-0 nova_compute[117331]: 2025-10-09 16:31:54.531 2 INFO nova.virt.libvirt.driver [None req-51bdc6fb-04ea-4bf5-89fb-88b926e5e428 1c793380a6e945d69dacfd07f1f156f8 6d67ac3076434a4582e5db1ca7d043ff - - default default] [instance: 4c51c9df-777a-497d-bf84-a8001b45a4f0] Creating config drive at /var/lib/nova/instances/4c51c9df-777a-497d-bf84-a8001b45a4f0/disk.config
Oct 09 16:31:54 compute-0 nova_compute[117331]: 2025-10-09 16:31:54.537 2 DEBUG oslo_concurrency.processutils [None req-51bdc6fb-04ea-4bf5-89fb-88b926e5e428 1c793380a6e945d69dacfd07f1f156f8 6d67ac3076434a4582e5db1ca7d043ff - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/4c51c9df-777a-497d-bf84-a8001b45a4f0/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 32.1.0-0.20251008143712.076498e.el10 -quiet -J -r -V config-2 /tmp/tmpedjo6nh5 execute /usr/lib/python3.12/site-packages/oslo_concurrency/processutils.py:349
Oct 09 16:31:54 compute-0 nova_compute[117331]: 2025-10-09 16:31:54.686 2 DEBUG oslo_concurrency.processutils [None req-51bdc6fb-04ea-4bf5-89fb-88b926e5e428 1c793380a6e945d69dacfd07f1f156f8 6d67ac3076434a4582e5db1ca7d043ff - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/4c51c9df-777a-497d-bf84-a8001b45a4f0/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 32.1.0-0.20251008143712.076498e.el10 -quiet -J -r -V config-2 /tmp/tmpedjo6nh5" returned: 0 in 0.149s execute /usr/lib/python3.12/site-packages/oslo_concurrency/processutils.py:372
Oct 09 16:31:54 compute-0 kernel: tap783ccf1e-b8: entered promiscuous mode
Oct 09 16:31:54 compute-0 NetworkManager[1028]: <info>  [1760027514.7565] manager: (tap783ccf1e-b8): new Tun device (/org/freedesktop/NetworkManager/Devices/75)
Oct 09 16:31:54 compute-0 ovn_controller[19752]: 2025-10-09T16:31:54Z|00203|binding|INFO|Claiming lport 783ccf1e-b85c-4420-b3e9-705ddf5495d3 for this chassis.
Oct 09 16:31:54 compute-0 ovn_controller[19752]: 2025-10-09T16:31:54Z|00204|binding|INFO|783ccf1e-b85c-4420-b3e9-705ddf5495d3: Claiming fa:16:3e:95:fd:24 10.100.0.10
Oct 09 16:31:54 compute-0 nova_compute[117331]: 2025-10-09 16:31:54.758 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 09 16:31:54 compute-0 nova_compute[117331]: 2025-10-09 16:31:54.763 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 09 16:31:54 compute-0 ovn_metadata_agent[28608]: 2025-10-09 16:31:54.772 28613 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:95:fd:24 10.100.0.10'], port_security=['fa:16:3e:95:fd:24 10.100.0.10'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.10/28', 'neutron:device_id': '4c51c9df-777a-497d-bf84-a8001b45a4f0', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-54b37568-476a-40a0-b545-fe5401f85653', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '6d67ac3076434a4582e5db1ca7d043ff', 'neutron:revision_number': '4', 'neutron:security_group_ids': '94abd997-3903-47b3-abe3-e283a4232c96', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=dc641581-8c4b-4cef-982e-bb0cf11a52ba, chassis=[<ovs.db.idl.Row object at 0x7f37b2576840>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f37b2576840>], logical_port=783ccf1e-b85c-4420-b3e9-705ddf5495d3) old=Port_Binding(chassis=[]) matches /usr/lib/python3.12/site-packages/ovsdbapp/backend/ovs_idl/event.py:55
Oct 09 16:31:54 compute-0 ovn_metadata_agent[28608]: 2025-10-09 16:31:54.773 28613 INFO neutron.agent.ovn.metadata.agent [-] Port 783ccf1e-b85c-4420-b3e9-705ddf5495d3 in datapath 54b37568-476a-40a0-b545-fe5401f85653 bound to our chassis
Oct 09 16:31:54 compute-0 ovn_metadata_agent[28608]: 2025-10-09 16:31:54.774 28613 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 54b37568-476a-40a0-b545-fe5401f85653
Oct 09 16:31:54 compute-0 ovn_metadata_agent[28608]: 2025-10-09 16:31:54.787 139687 DEBUG oslo.privsep.daemon [-] privsep: reply[fd5d5e60-7c21-40c2-b5fb-bc1ec99fb770]: (4, False) _call_back /usr/lib/python3.12/site-packages/oslo_privsep/daemon.py:515
Oct 09 16:31:54 compute-0 ovn_metadata_agent[28608]: 2025-10-09 16:31:54.788 28613 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap54b37568-41 in ovnmeta-54b37568-476a-40a0-b545-fe5401f85653 namespace provision_datapath /usr/lib/python3.12/site-packages/neutron/agent/ovn/metadata/agent.py:794
Oct 09 16:31:54 compute-0 ovn_metadata_agent[28608]: 2025-10-09 16:31:54.790 139687 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap54b37568-40 not found in namespace None get_link_id /usr/lib/python3.12/site-packages/neutron/privileged/agent/linux/ip_lib.py:203
Oct 09 16:31:54 compute-0 ovn_metadata_agent[28608]: 2025-10-09 16:31:54.790 139687 DEBUG oslo.privsep.daemon [-] privsep: reply[5419830f-fba6-4820-b753-44a7651267ab]: (4, False) _call_back /usr/lib/python3.12/site-packages/oslo_privsep/daemon.py:515
Oct 09 16:31:54 compute-0 ovn_metadata_agent[28608]: 2025-10-09 16:31:54.790 139687 DEBUG oslo.privsep.daemon [-] privsep: reply[51f1f45a-b21b-4e1f-88e6-63d0e15e5bfe]: (4, False) _call_back /usr/lib/python3.12/site-packages/oslo_privsep/daemon.py:515
Oct 09 16:31:54 compute-0 systemd-machined[77487]: New machine qemu-16-instance-00000016.
Oct 09 16:31:54 compute-0 systemd-udevd[150027]: Network interface NamePolicy= disabled on kernel command line.
Oct 09 16:31:54 compute-0 ovn_metadata_agent[28608]: 2025-10-09 16:31:54.801 28727 DEBUG oslo.privsep.daemon [-] privsep: reply[d37f9b5d-d7f7-4237-b7de-c32349509646]: (4, None) _call_back /usr/lib/python3.12/site-packages/oslo_privsep/daemon.py:515
Oct 09 16:31:54 compute-0 NetworkManager[1028]: <info>  [1760027514.8159] device (tap783ccf1e-b8): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Oct 09 16:31:54 compute-0 NetworkManager[1028]: <info>  [1760027514.8168] device (tap783ccf1e-b8): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Oct 09 16:31:54 compute-0 systemd[1]: Started Virtual Machine qemu-16-instance-00000016.
Oct 09 16:31:54 compute-0 nova_compute[117331]: 2025-10-09 16:31:54.817 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 09 16:31:54 compute-0 ovn_controller[19752]: 2025-10-09T16:31:54Z|00205|binding|INFO|Setting lport 783ccf1e-b85c-4420-b3e9-705ddf5495d3 ovn-installed in OVS
Oct 09 16:31:54 compute-0 ovn_controller[19752]: 2025-10-09T16:31:54Z|00206|binding|INFO|Setting lport 783ccf1e-b85c-4420-b3e9-705ddf5495d3 up in Southbound
Oct 09 16:31:54 compute-0 nova_compute[117331]: 2025-10-09 16:31:54.824 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 09 16:31:54 compute-0 ovn_metadata_agent[28608]: 2025-10-09 16:31:54.825 139687 DEBUG oslo.privsep.daemon [-] privsep: reply[f11f6bb8-966a-4f09-9440-72b23605f252]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.12/site-packages/oslo_privsep/daemon.py:515
Oct 09 16:31:54 compute-0 ovn_metadata_agent[28608]: 2025-10-09 16:31:54.859 141048 DEBUG oslo.privsep.daemon [-] privsep: reply[82df4ffc-5e26-458a-9484-23a0aa208f4c]: (4, None) _call_back /usr/lib/python3.12/site-packages/oslo_privsep/daemon.py:515
Oct 09 16:31:54 compute-0 ovn_metadata_agent[28608]: 2025-10-09 16:31:54.863 139687 DEBUG oslo.privsep.daemon [-] privsep: reply[5e1d62bf-1038-4043-a4ae-7712905781ae]: (4, None) _call_back /usr/lib/python3.12/site-packages/oslo_privsep/daemon.py:515
Oct 09 16:31:54 compute-0 NetworkManager[1028]: <info>  [1760027514.8647] manager: (tap54b37568-40): new Veth device (/org/freedesktop/NetworkManager/Devices/76)
Oct 09 16:31:54 compute-0 systemd-udevd[150030]: Network interface NamePolicy= disabled on kernel command line.
Oct 09 16:31:54 compute-0 ovn_metadata_agent[28608]: 2025-10-09 16:31:54.901 141048 DEBUG oslo.privsep.daemon [-] privsep: reply[8e3f33d6-d7b4-4576-a740-204429bf2f44]: (4, None) _call_back /usr/lib/python3.12/site-packages/oslo_privsep/daemon.py:515
Oct 09 16:31:54 compute-0 ovn_metadata_agent[28608]: 2025-10-09 16:31:54.903 141048 DEBUG oslo.privsep.daemon [-] privsep: reply[3cbb5457-9967-41e6-9298-2c390b0452ef]: (4, None) _call_back /usr/lib/python3.12/site-packages/oslo_privsep/daemon.py:515
Oct 09 16:31:54 compute-0 NetworkManager[1028]: <info>  [1760027514.9235] device (tap54b37568-40): carrier: link connected
Oct 09 16:31:54 compute-0 ovn_metadata_agent[28608]: 2025-10-09 16:31:54.930 141048 DEBUG oslo.privsep.daemon [-] privsep: reply[92da3de7-b74d-4810-a713-f4fd9fc9f136]: (4, None) _call_back /usr/lib/python3.12/site-packages/oslo_privsep/daemon.py:515
Oct 09 16:31:54 compute-0 ovn_metadata_agent[28608]: 2025-10-09 16:31:54.947 139687 DEBUG oslo.privsep.daemon [-] privsep: reply[bb1c29fe-9ecb-49ba-9fda-e2287e53ae32]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap54b37568-41'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:50:3a:49'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 58], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 245852, 'reachable_time': 39101, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 150058, 'error': None, 'target': 'ovnmeta-54b37568-476a-40a0-b545-fe5401f85653', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.12/site-packages/oslo_privsep/daemon.py:515
Oct 09 16:31:54 compute-0 ovn_metadata_agent[28608]: 2025-10-09 16:31:54.964 139687 DEBUG oslo.privsep.daemon [-] privsep: reply[3c5ea2e5-0064-40c5-975c-21b8e0d22ac6]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe50:3a49'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 245852, 'tstamp': 245852}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 150059, 'error': None, 'target': 'ovnmeta-54b37568-476a-40a0-b545-fe5401f85653', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.12/site-packages/oslo_privsep/daemon.py:515
Oct 09 16:31:54 compute-0 ovn_metadata_agent[28608]: 2025-10-09 16:31:54.980 139687 DEBUG oslo.privsep.daemon [-] privsep: reply[e187cbcb-b237-4aa1-a828-98beb577cccf]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap54b37568-41'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:50:3a:49'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 58], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 245852, 'reachable_time': 39101, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 150060, 'error': None, 'target': 'ovnmeta-54b37568-476a-40a0-b545-fe5401f85653', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.12/site-packages/oslo_privsep/daemon.py:515
Oct 09 16:31:55 compute-0 ovn_metadata_agent[28608]: 2025-10-09 16:31:55.010 139687 DEBUG oslo.privsep.daemon [-] privsep: reply[959a25f7-f573-4564-a32b-fc3749be6b3c]: (4, None) _call_back /usr/lib/python3.12/site-packages/oslo_privsep/daemon.py:515
Oct 09 16:31:55 compute-0 ovn_metadata_agent[28608]: 2025-10-09 16:31:55.072 139687 DEBUG oslo.privsep.daemon [-] privsep: reply[2327bc10-847d-45ef-a3ef-f9b852e890b6]: (4, None) _call_back /usr/lib/python3.12/site-packages/oslo_privsep/daemon.py:515
Oct 09 16:31:55 compute-0 ovn_metadata_agent[28608]: 2025-10-09 16:31:55.074 28613 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap54b37568-40, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.12/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 09 16:31:55 compute-0 ovn_metadata_agent[28608]: 2025-10-09 16:31:55.074 28613 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.12/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 09 16:31:55 compute-0 ovn_metadata_agent[28608]: 2025-10-09 16:31:55.074 28613 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap54b37568-40, may_exist=True, interface_attrs={}) do_commit /usr/lib/python3.12/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 09 16:31:55 compute-0 kernel: tap54b37568-40: entered promiscuous mode
Oct 09 16:31:55 compute-0 nova_compute[117331]: 2025-10-09 16:31:55.075 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 09 16:31:55 compute-0 NetworkManager[1028]: <info>  [1760027515.0767] manager: (tap54b37568-40): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/77)
Oct 09 16:31:55 compute-0 nova_compute[117331]: 2025-10-09 16:31:55.078 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 09 16:31:55 compute-0 ovn_metadata_agent[28608]: 2025-10-09 16:31:55.080 28613 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap54b37568-40, col_values=(('external_ids', {'iface-id': '01dee844-02ac-4caa-80f7-8a16019cbd9d'}),), if_exists=True) do_commit /usr/lib/python3.12/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 09 16:31:55 compute-0 nova_compute[117331]: 2025-10-09 16:31:55.081 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 09 16:31:55 compute-0 nova_compute[117331]: 2025-10-09 16:31:55.081 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 09 16:31:55 compute-0 ovn_controller[19752]: 2025-10-09T16:31:55Z|00207|binding|INFO|Releasing lport 01dee844-02ac-4caa-80f7-8a16019cbd9d from this chassis (sb_readonly=0)
Oct 09 16:31:55 compute-0 ovn_metadata_agent[28608]: 2025-10-09 16:31:55.083 139687 DEBUG oslo.privsep.daemon [-] privsep: reply[671b8bd9-21dc-4271-b736-4ec91525788f]: (4, '') _call_back /usr/lib/python3.12/site-packages/oslo_privsep/daemon.py:515
Oct 09 16:31:55 compute-0 ovn_metadata_agent[28608]: 2025-10-09 16:31:55.084 28613 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/54b37568-476a-40a0-b545-fe5401f85653.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/54b37568-476a-40a0-b545-fe5401f85653.pid.haproxy' get_value_from_file /usr/lib/python3.12/site-packages/neutron/agent/linux/utils.py:268
Oct 09 16:31:55 compute-0 ovn_metadata_agent[28608]: 2025-10-09 16:31:55.084 28613 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/54b37568-476a-40a0-b545-fe5401f85653.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/54b37568-476a-40a0-b545-fe5401f85653.pid.haproxy' get_value_from_file /usr/lib/python3.12/site-packages/neutron/agent/linux/utils.py:268
Oct 09 16:31:55 compute-0 ovn_metadata_agent[28608]: 2025-10-09 16:31:55.084 28613 DEBUG neutron.agent.linux.external_process [-] No haproxy process started for 54b37568-476a-40a0-b545-fe5401f85653 disable /usr/lib/python3.12/site-packages/neutron/agent/linux/external_process.py:173
Oct 09 16:31:55 compute-0 ovn_metadata_agent[28608]: 2025-10-09 16:31:55.084 28613 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/54b37568-476a-40a0-b545-fe5401f85653.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/54b37568-476a-40a0-b545-fe5401f85653.pid.haproxy' get_value_from_file /usr/lib/python3.12/site-packages/neutron/agent/linux/utils.py:268
Oct 09 16:31:55 compute-0 ovn_metadata_agent[28608]: 2025-10-09 16:31:55.084 139687 DEBUG oslo.privsep.daemon [-] privsep: reply[ea2f655e-122c-41b7-8983-f9743a9d8d21]: (4, None) _call_back /usr/lib/python3.12/site-packages/oslo_privsep/daemon.py:515
Oct 09 16:31:55 compute-0 ovn_metadata_agent[28608]: 2025-10-09 16:31:55.085 28613 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/54b37568-476a-40a0-b545-fe5401f85653.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/54b37568-476a-40a0-b545-fe5401f85653.pid.haproxy' get_value_from_file /usr/lib/python3.12/site-packages/neutron/agent/linux/utils.py:268
Oct 09 16:31:55 compute-0 ovn_metadata_agent[28608]: 2025-10-09 16:31:55.085 139687 DEBUG oslo.privsep.daemon [-] privsep: reply[dfaaf23d-e298-4b88-8ba7-517cd0117bbf]: (4, None) _call_back /usr/lib/python3.12/site-packages/oslo_privsep/daemon.py:515
Oct 09 16:31:55 compute-0 ovn_metadata_agent[28608]: 2025-10-09 16:31:55.085 28613 DEBUG neutron.agent.metadata.driver_base [-] haproxy_cfg = 
Oct 09 16:31:55 compute-0 ovn_metadata_agent[28608]: global
Oct 09 16:31:55 compute-0 ovn_metadata_agent[28608]:     log         /dev/log local0 debug
Oct 09 16:31:55 compute-0 ovn_metadata_agent[28608]:     log-tag     haproxy-metadata-proxy-54b37568-476a-40a0-b545-fe5401f85653
Oct 09 16:31:55 compute-0 ovn_metadata_agent[28608]:     user        root
Oct 09 16:31:55 compute-0 ovn_metadata_agent[28608]:     group       root
Oct 09 16:31:55 compute-0 ovn_metadata_agent[28608]:     maxconn     1024
Oct 09 16:31:55 compute-0 ovn_metadata_agent[28608]:     pidfile     /var/lib/neutron/external/pids/54b37568-476a-40a0-b545-fe5401f85653.pid.haproxy
Oct 09 16:31:55 compute-0 ovn_metadata_agent[28608]:     daemon
Oct 09 16:31:55 compute-0 ovn_metadata_agent[28608]: 
Oct 09 16:31:55 compute-0 ovn_metadata_agent[28608]: defaults
Oct 09 16:31:55 compute-0 ovn_metadata_agent[28608]:     log global
Oct 09 16:31:55 compute-0 ovn_metadata_agent[28608]:     mode http
Oct 09 16:31:55 compute-0 ovn_metadata_agent[28608]:     option httplog
Oct 09 16:31:55 compute-0 ovn_metadata_agent[28608]:     option dontlognull
Oct 09 16:31:55 compute-0 ovn_metadata_agent[28608]:     option http-server-close
Oct 09 16:31:55 compute-0 ovn_metadata_agent[28608]:     option forwardfor
Oct 09 16:31:55 compute-0 ovn_metadata_agent[28608]:     retries                 3
Oct 09 16:31:55 compute-0 ovn_metadata_agent[28608]:     timeout http-request    30s
Oct 09 16:31:55 compute-0 ovn_metadata_agent[28608]:     timeout connect         30s
Oct 09 16:31:55 compute-0 ovn_metadata_agent[28608]:     timeout client          32s
Oct 09 16:31:55 compute-0 ovn_metadata_agent[28608]:     timeout server          32s
Oct 09 16:31:55 compute-0 ovn_metadata_agent[28608]:     timeout http-keep-alive 30s
Oct 09 16:31:55 compute-0 ovn_metadata_agent[28608]: 
Oct 09 16:31:55 compute-0 ovn_metadata_agent[28608]: listen listener
Oct 09 16:31:55 compute-0 ovn_metadata_agent[28608]:     bind 169.254.169.254:80
Oct 09 16:31:55 compute-0 ovn_metadata_agent[28608]:     
Oct 09 16:31:55 compute-0 ovn_metadata_agent[28608]:     server metadata /var/lib/neutron/metadata_proxy
Oct 09 16:31:55 compute-0 ovn_metadata_agent[28608]: 
Oct 09 16:31:55 compute-0 ovn_metadata_agent[28608]:     http-request add-header X-OVN-Network-ID 54b37568-476a-40a0-b545-fe5401f85653
Oct 09 16:31:55 compute-0 ovn_metadata_agent[28608]:  create_config_file /usr/lib/python3.12/site-packages/neutron/agent/metadata/driver_base.py:155
Oct 09 16:31:55 compute-0 ovn_metadata_agent[28608]: 2025-10-09 16:31:55.086 28613 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-54b37568-476a-40a0-b545-fe5401f85653', 'env', 'PROCESS_TAG=haproxy-54b37568-476a-40a0-b545-fe5401f85653', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/54b37568-476a-40a0-b545-fe5401f85653.conf'] create_process /usr/lib/python3.12/site-packages/neutron/agent/linux/utils.py:84
Oct 09 16:31:55 compute-0 nova_compute[117331]: 2025-10-09 16:31:55.092 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 09 16:31:55 compute-0 nova_compute[117331]: 2025-10-09 16:31:55.232 2 DEBUG nova.compute.manager [req-3b2a31a6-8756-4e65-a95f-7835f316608b req-96856a80-675b-4a59-bbae-8756a0387231 ee15961ad2be4bada6c106ac4f69f422 c076c42271004e7aa86e84416e33f826 - - default default] [instance: 4c51c9df-777a-497d-bf84-a8001b45a4f0] Received event network-vif-plugged-783ccf1e-b85c-4420-b3e9-705ddf5495d3 external_instance_event /usr/lib/python3.12/site-packages/nova/compute/manager.py:11812
Oct 09 16:31:55 compute-0 nova_compute[117331]: 2025-10-09 16:31:55.233 2 DEBUG oslo_concurrency.lockutils [req-3b2a31a6-8756-4e65-a95f-7835f316608b req-96856a80-675b-4a59-bbae-8756a0387231 ee15961ad2be4bada6c106ac4f69f422 c076c42271004e7aa86e84416e33f826 - - default default] Acquiring lock "4c51c9df-777a-497d-bf84-a8001b45a4f0-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:405
Oct 09 16:31:55 compute-0 nova_compute[117331]: 2025-10-09 16:31:55.234 2 DEBUG oslo_concurrency.lockutils [req-3b2a31a6-8756-4e65-a95f-7835f316608b req-96856a80-675b-4a59-bbae-8756a0387231 ee15961ad2be4bada6c106ac4f69f422 c076c42271004e7aa86e84416e33f826 - - default default] Lock "4c51c9df-777a-497d-bf84-a8001b45a4f0-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:410
Oct 09 16:31:55 compute-0 nova_compute[117331]: 2025-10-09 16:31:55.235 2 DEBUG oslo_concurrency.lockutils [req-3b2a31a6-8756-4e65-a95f-7835f316608b req-96856a80-675b-4a59-bbae-8756a0387231 ee15961ad2be4bada6c106ac4f69f422 c076c42271004e7aa86e84416e33f826 - - default default] Lock "4c51c9df-777a-497d-bf84-a8001b45a4f0-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:424
Oct 09 16:31:55 compute-0 nova_compute[117331]: 2025-10-09 16:31:55.235 2 DEBUG nova.compute.manager [req-3b2a31a6-8756-4e65-a95f-7835f316608b req-96856a80-675b-4a59-bbae-8756a0387231 ee15961ad2be4bada6c106ac4f69f422 c076c42271004e7aa86e84416e33f826 - - default default] [instance: 4c51c9df-777a-497d-bf84-a8001b45a4f0] Processing event network-vif-plugged-783ccf1e-b85c-4420-b3e9-705ddf5495d3 _process_instance_event /usr/lib/python3.12/site-packages/nova/compute/manager.py:11572
Oct 09 16:31:55 compute-0 podman[150097]: 2025-10-09 16:31:55.532023196 +0000 UTC m=+0.063929841 container create cda4fdce59ea0b46106250ef11fdeea8d0cdca852ffb79593ac2c806a6408e2c (image=38.102.83.66:5001/podified-master-centos10/openstack-neutron-metadata-agent-ovn:watcher_latest, name=neutron-haproxy-ovnmeta-54b37568-476a-40a0-b545-fe5401f85653, org.label-schema.build-date=20251007, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=watcher_latest, tcib_managed=true, io.buildah.version=1.41.4, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 10 Base Image)
Oct 09 16:31:55 compute-0 systemd[1]: Started libpod-conmon-cda4fdce59ea0b46106250ef11fdeea8d0cdca852ffb79593ac2c806a6408e2c.scope.
Oct 09 16:31:55 compute-0 systemd[1]: Started libcrun container.
Oct 09 16:31:55 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1e33a5bff9a3786cb3709bb5bf26926b733d50a17ec3d7b6ae96c1afcf8c70d8/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Oct 09 16:31:55 compute-0 podman[150097]: 2025-10-09 16:31:55.499656149 +0000 UTC m=+0.031562844 image pull 4e15c189a95c5dcd1203c76fe304de5c8f8616da9a258ca168cc54a517f49b87 38.102.83.66:5001/podified-master-centos10/openstack-neutron-metadata-agent-ovn:watcher_latest
Oct 09 16:31:55 compute-0 podman[150097]: 2025-10-09 16:31:55.600226122 +0000 UTC m=+0.132132787 container init cda4fdce59ea0b46106250ef11fdeea8d0cdca852ffb79593ac2c806a6408e2c (image=38.102.83.66:5001/podified-master-centos10/openstack-neutron-metadata-agent-ovn:watcher_latest, name=neutron-haproxy-ovnmeta-54b37568-476a-40a0-b545-fe5401f85653, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 10 Base Image, io.buildah.version=1.41.4, tcib_managed=true, org.label-schema.build-date=20251007, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=watcher_latest, org.label-schema.vendor=CentOS)
Oct 09 16:31:55 compute-0 podman[150097]: 2025-10-09 16:31:55.605298793 +0000 UTC m=+0.137205438 container start cda4fdce59ea0b46106250ef11fdeea8d0cdca852ffb79593ac2c806a6408e2c (image=38.102.83.66:5001/podified-master-centos10/openstack-neutron-metadata-agent-ovn:watcher_latest, name=neutron-haproxy-ovnmeta-54b37568-476a-40a0-b545-fe5401f85653, io.buildah.version=1.41.4, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=watcher_latest, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 10 Base Image, tcib_managed=true, org.label-schema.build-date=20251007)
Oct 09 16:31:55 compute-0 neutron-haproxy-ovnmeta-54b37568-476a-40a0-b545-fe5401f85653[150112]: [NOTICE]   (150116) : New worker (150118) forked
Oct 09 16:31:55 compute-0 neutron-haproxy-ovnmeta-54b37568-476a-40a0-b545-fe5401f85653[150112]: [NOTICE]   (150116) : Loading success.
Oct 09 16:31:55 compute-0 nova_compute[117331]: 2025-10-09 16:31:55.785 2 DEBUG nova.compute.manager [None req-51bdc6fb-04ea-4bf5-89fb-88b926e5e428 1c793380a6e945d69dacfd07f1f156f8 6d67ac3076434a4582e5db1ca7d043ff - - default default] [instance: 4c51c9df-777a-497d-bf84-a8001b45a4f0] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.12/site-packages/nova/compute/manager.py:602
Oct 09 16:31:55 compute-0 nova_compute[117331]: 2025-10-09 16:31:55.790 2 DEBUG nova.virt.libvirt.driver [None req-51bdc6fb-04ea-4bf5-89fb-88b926e5e428 1c793380a6e945d69dacfd07f1f156f8 6d67ac3076434a4582e5db1ca7d043ff - - default default] [instance: 4c51c9df-777a-497d-bf84-a8001b45a4f0] Guest created on hypervisor spawn /usr/lib/python3.12/site-packages/nova/virt/libvirt/driver.py:4816
Oct 09 16:31:55 compute-0 nova_compute[117331]: 2025-10-09 16:31:55.794 2 INFO nova.virt.libvirt.driver [-] [instance: 4c51c9df-777a-497d-bf84-a8001b45a4f0] Instance spawned successfully.
Oct 09 16:31:55 compute-0 nova_compute[117331]: 2025-10-09 16:31:55.794 2 DEBUG nova.virt.libvirt.driver [None req-51bdc6fb-04ea-4bf5-89fb-88b926e5e428 1c793380a6e945d69dacfd07f1f156f8 6d67ac3076434a4582e5db1ca7d043ff - - default default] [instance: 4c51c9df-777a-497d-bf84-a8001b45a4f0] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.12/site-packages/nova/virt/libvirt/driver.py:985
Oct 09 16:31:56 compute-0 nova_compute[117331]: 2025-10-09 16:31:56.308 2 DEBUG nova.virt.libvirt.driver [None req-51bdc6fb-04ea-4bf5-89fb-88b926e5e428 1c793380a6e945d69dacfd07f1f156f8 6d67ac3076434a4582e5db1ca7d043ff - - default default] [instance: 4c51c9df-777a-497d-bf84-a8001b45a4f0] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.12/site-packages/nova/virt/libvirt/driver.py:1014
Oct 09 16:31:56 compute-0 nova_compute[117331]: 2025-10-09 16:31:56.308 2 DEBUG nova.virt.libvirt.driver [None req-51bdc6fb-04ea-4bf5-89fb-88b926e5e428 1c793380a6e945d69dacfd07f1f156f8 6d67ac3076434a4582e5db1ca7d043ff - - default default] [instance: 4c51c9df-777a-497d-bf84-a8001b45a4f0] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.12/site-packages/nova/virt/libvirt/driver.py:1014
Oct 09 16:31:56 compute-0 nova_compute[117331]: 2025-10-09 16:31:56.309 2 DEBUG nova.virt.libvirt.driver [None req-51bdc6fb-04ea-4bf5-89fb-88b926e5e428 1c793380a6e945d69dacfd07f1f156f8 6d67ac3076434a4582e5db1ca7d043ff - - default default] [instance: 4c51c9df-777a-497d-bf84-a8001b45a4f0] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.12/site-packages/nova/virt/libvirt/driver.py:1014
Oct 09 16:31:56 compute-0 nova_compute[117331]: 2025-10-09 16:31:56.309 2 DEBUG nova.virt.libvirt.driver [None req-51bdc6fb-04ea-4bf5-89fb-88b926e5e428 1c793380a6e945d69dacfd07f1f156f8 6d67ac3076434a4582e5db1ca7d043ff - - default default] [instance: 4c51c9df-777a-497d-bf84-a8001b45a4f0] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.12/site-packages/nova/virt/libvirt/driver.py:1014
Oct 09 16:31:56 compute-0 nova_compute[117331]: 2025-10-09 16:31:56.309 2 DEBUG nova.virt.libvirt.driver [None req-51bdc6fb-04ea-4bf5-89fb-88b926e5e428 1c793380a6e945d69dacfd07f1f156f8 6d67ac3076434a4582e5db1ca7d043ff - - default default] [instance: 4c51c9df-777a-497d-bf84-a8001b45a4f0] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.12/site-packages/nova/virt/libvirt/driver.py:1014
Oct 09 16:31:56 compute-0 nova_compute[117331]: 2025-10-09 16:31:56.310 2 DEBUG nova.virt.libvirt.driver [None req-51bdc6fb-04ea-4bf5-89fb-88b926e5e428 1c793380a6e945d69dacfd07f1f156f8 6d67ac3076434a4582e5db1ca7d043ff - - default default] [instance: 4c51c9df-777a-497d-bf84-a8001b45a4f0] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.12/site-packages/nova/virt/libvirt/driver.py:1014
Oct 09 16:31:56 compute-0 nova_compute[117331]: 2025-10-09 16:31:56.820 2 INFO nova.compute.manager [None req-51bdc6fb-04ea-4bf5-89fb-88b926e5e428 1c793380a6e945d69dacfd07f1f156f8 6d67ac3076434a4582e5db1ca7d043ff - - default default] [instance: 4c51c9df-777a-497d-bf84-a8001b45a4f0] Took 11.72 seconds to spawn the instance on the hypervisor.
Oct 09 16:31:56 compute-0 nova_compute[117331]: 2025-10-09 16:31:56.820 2 DEBUG nova.compute.manager [None req-51bdc6fb-04ea-4bf5-89fb-88b926e5e428 1c793380a6e945d69dacfd07f1f156f8 6d67ac3076434a4582e5db1ca7d043ff - - default default] [instance: 4c51c9df-777a-497d-bf84-a8001b45a4f0] Checking state _get_power_state /usr/lib/python3.12/site-packages/nova/compute/manager.py:1825
Oct 09 16:31:57 compute-0 sshd-session[150005]: Invalid user steam from 36.224.53.32 port 54582
Oct 09 16:31:57 compute-0 nova_compute[117331]: 2025-10-09 16:31:57.121 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 09 16:31:57 compute-0 nova_compute[117331]: 2025-10-09 16:31:57.280 2 DEBUG nova.compute.manager [req-288306c9-e94a-4742-8887-233b20c07dbc req-f75b6160-038b-4951-9cc7-d0e5421f952f ee15961ad2be4bada6c106ac4f69f422 c076c42271004e7aa86e84416e33f826 - - default default] [instance: 4c51c9df-777a-497d-bf84-a8001b45a4f0] Received event network-vif-plugged-783ccf1e-b85c-4420-b3e9-705ddf5495d3 external_instance_event /usr/lib/python3.12/site-packages/nova/compute/manager.py:11812
Oct 09 16:31:57 compute-0 nova_compute[117331]: 2025-10-09 16:31:57.281 2 DEBUG oslo_concurrency.lockutils [req-288306c9-e94a-4742-8887-233b20c07dbc req-f75b6160-038b-4951-9cc7-d0e5421f952f ee15961ad2be4bada6c106ac4f69f422 c076c42271004e7aa86e84416e33f826 - - default default] Acquiring lock "4c51c9df-777a-497d-bf84-a8001b45a4f0-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:405
Oct 09 16:31:57 compute-0 nova_compute[117331]: 2025-10-09 16:31:57.281 2 DEBUG oslo_concurrency.lockutils [req-288306c9-e94a-4742-8887-233b20c07dbc req-f75b6160-038b-4951-9cc7-d0e5421f952f ee15961ad2be4bada6c106ac4f69f422 c076c42271004e7aa86e84416e33f826 - - default default] Lock "4c51c9df-777a-497d-bf84-a8001b45a4f0-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:410
Oct 09 16:31:57 compute-0 nova_compute[117331]: 2025-10-09 16:31:57.281 2 DEBUG oslo_concurrency.lockutils [req-288306c9-e94a-4742-8887-233b20c07dbc req-f75b6160-038b-4951-9cc7-d0e5421f952f ee15961ad2be4bada6c106ac4f69f422 c076c42271004e7aa86e84416e33f826 - - default default] Lock "4c51c9df-777a-497d-bf84-a8001b45a4f0-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:424
Oct 09 16:31:57 compute-0 nova_compute[117331]: 2025-10-09 16:31:57.281 2 DEBUG nova.compute.manager [req-288306c9-e94a-4742-8887-233b20c07dbc req-f75b6160-038b-4951-9cc7-d0e5421f952f ee15961ad2be4bada6c106ac4f69f422 c076c42271004e7aa86e84416e33f826 - - default default] [instance: 4c51c9df-777a-497d-bf84-a8001b45a4f0] No waiting events found dispatching network-vif-plugged-783ccf1e-b85c-4420-b3e9-705ddf5495d3 pop_instance_event /usr/lib/python3.12/site-packages/nova/compute/manager.py:344
Oct 09 16:31:57 compute-0 nova_compute[117331]: 2025-10-09 16:31:57.281 2 WARNING nova.compute.manager [req-288306c9-e94a-4742-8887-233b20c07dbc req-f75b6160-038b-4951-9cc7-d0e5421f952f ee15961ad2be4bada6c106ac4f69f422 c076c42271004e7aa86e84416e33f826 - - default default] [instance: 4c51c9df-777a-497d-bf84-a8001b45a4f0] Received unexpected event network-vif-plugged-783ccf1e-b85c-4420-b3e9-705ddf5495d3 for instance with vm_state active and task_state None.
Oct 09 16:31:57 compute-0 nova_compute[117331]: 2025-10-09 16:31:57.290 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 09 16:31:57 compute-0 nova_compute[117331]: 2025-10-09 16:31:57.348 2 INFO nova.compute.manager [None req-51bdc6fb-04ea-4bf5-89fb-88b926e5e428 1c793380a6e945d69dacfd07f1f156f8 6d67ac3076434a4582e5db1ca7d043ff - - default default] [instance: 4c51c9df-777a-497d-bf84-a8001b45a4f0] Took 16.96 seconds to build instance.
Oct 09 16:31:57 compute-0 sshd-session[150005]: pam_unix(sshd:auth): check pass; user unknown
Oct 09 16:31:57 compute-0 sshd-session[150005]: pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=36.224.53.32
Oct 09 16:31:57 compute-0 nova_compute[117331]: 2025-10-09 16:31:57.853 2 DEBUG oslo_concurrency.lockutils [None req-51bdc6fb-04ea-4bf5-89fb-88b926e5e428 1c793380a6e945d69dacfd07f1f156f8 6d67ac3076434a4582e5db1ca7d043ff - - default default] Lock "4c51c9df-777a-497d-bf84-a8001b45a4f0" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 18.491s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:424
Oct 09 16:31:58 compute-0 podman[150127]: 2025-10-09 16:31:58.872457696 +0000 UTC m=+0.088127259 container health_status 0e966c7a283689daf23c5db5017f23f685ee836319240188a9f70ddd246563c5 (image=38.102.83.66:5001/podified-master-centos10/openstack-neutron-metadata-agent-ovn:watcher_latest, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_id=ovn_metadata_agent, org.label-schema.schema-version=1.0, tcib_build_tag=watcher_latest, container_name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.build-date=20251007, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': '38.102.83.66:5001/podified-master-centos10/openstack-neutron-metadata-agent-ovn:watcher_latest', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.4, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2)
Oct 09 16:31:58 compute-0 podman[150128]: 2025-10-09 16:31:58.877560998 +0000 UTC m=+0.079889968 container health_status 8227728d23f374cb9528be6ddf01dff38d8c792912a17b0d137932a18390b820 (image=38.102.83.66:5001/podified-master-centos10/openstack-iscsid:watcher_latest, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, tcib_managed=true, io.buildah.version=1.41.4, org.label-schema.build-date=20251007, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 10 Base Image, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': '38.102.83.66:5001/podified-master-centos10/openstack-iscsid:watcher_latest', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, container_name=iscsid, org.label-schema.vendor=CentOS, tcib_build_tag=watcher_latest, config_id=iscsid)
Oct 09 16:31:59 compute-0 podman[127775]: time="2025-10-09T16:31:59Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Oct 09 16:31:59 compute-0 podman[127775]: @ - - [09/Oct/2025:16:31:59 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 20748 "" "Go-http-client/1.1"
Oct 09 16:31:59 compute-0 podman[127775]: @ - - [09/Oct/2025:16:31:59 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 3488 "" "Go-http-client/1.1"
Oct 09 16:32:00 compute-0 sshd-session[150005]: Failed password for invalid user steam from 36.224.53.32 port 54582 ssh2
Oct 09 16:32:01 compute-0 openstack_network_exporter[129925]: ERROR   16:32:01 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Oct 09 16:32:01 compute-0 openstack_network_exporter[129925]: ERROR   16:32:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Oct 09 16:32:01 compute-0 openstack_network_exporter[129925]: ERROR   16:32:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Oct 09 16:32:01 compute-0 openstack_network_exporter[129925]: ERROR   16:32:01 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Oct 09 16:32:01 compute-0 openstack_network_exporter[129925]: 
Oct 09 16:32:01 compute-0 openstack_network_exporter[129925]: ERROR   16:32:01 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Oct 09 16:32:01 compute-0 openstack_network_exporter[129925]: 
Oct 09 16:32:01 compute-0 nova_compute[117331]: 2025-10-09 16:32:01.971 2 DEBUG oslo_concurrency.lockutils [None req-ab62bb55-f443-4585-a5d7-8bd85d66e3a5 1c793380a6e945d69dacfd07f1f156f8 6d67ac3076434a4582e5db1ca7d043ff - - default default] Acquiring lock "861b59b6-7073-418e-8e9a-9281ce63040c" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:405
Oct 09 16:32:01 compute-0 nova_compute[117331]: 2025-10-09 16:32:01.971 2 DEBUG oslo_concurrency.lockutils [None req-ab62bb55-f443-4585-a5d7-8bd85d66e3a5 1c793380a6e945d69dacfd07f1f156f8 6d67ac3076434a4582e5db1ca7d043ff - - default default] Lock "861b59b6-7073-418e-8e9a-9281ce63040c" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:410
Oct 09 16:32:02 compute-0 nova_compute[117331]: 2025-10-09 16:32:02.123 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 09 16:32:02 compute-0 nova_compute[117331]: 2025-10-09 16:32:02.291 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 09 16:32:02 compute-0 sshd-session[150005]: Connection closed by invalid user steam 36.224.53.32 port 54582 [preauth]
Oct 09 16:32:02 compute-0 nova_compute[117331]: 2025-10-09 16:32:02.476 2 DEBUG nova.compute.manager [None req-ab62bb55-f443-4585-a5d7-8bd85d66e3a5 1c793380a6e945d69dacfd07f1f156f8 6d67ac3076434a4582e5db1ca7d043ff - - default default] [instance: 861b59b6-7073-418e-8e9a-9281ce63040c] Starting instance... _do_build_and_run_instance /usr/lib/python3.12/site-packages/nova/compute/manager.py:2472
Oct 09 16:32:03 compute-0 nova_compute[117331]: 2025-10-09 16:32:03.021 2 DEBUG oslo_concurrency.lockutils [None req-ab62bb55-f443-4585-a5d7-8bd85d66e3a5 1c793380a6e945d69dacfd07f1f156f8 6d67ac3076434a4582e5db1ca7d043ff - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:405
Oct 09 16:32:03 compute-0 nova_compute[117331]: 2025-10-09 16:32:03.022 2 DEBUG oslo_concurrency.lockutils [None req-ab62bb55-f443-4585-a5d7-8bd85d66e3a5 1c793380a6e945d69dacfd07f1f156f8 6d67ac3076434a4582e5db1ca7d043ff - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:410
Oct 09 16:32:03 compute-0 nova_compute[117331]: 2025-10-09 16:32:03.030 2 DEBUG nova.virt.hardware [None req-ab62bb55-f443-4585-a5d7-8bd85d66e3a5 1c793380a6e945d69dacfd07f1f156f8 6d67ac3076434a4582e5db1ca7d043ff - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.12/site-packages/nova/virt/hardware.py:2528
Oct 09 16:32:03 compute-0 nova_compute[117331]: 2025-10-09 16:32:03.030 2 INFO nova.compute.claims [None req-ab62bb55-f443-4585-a5d7-8bd85d66e3a5 1c793380a6e945d69dacfd07f1f156f8 6d67ac3076434a4582e5db1ca7d043ff - - default default] [instance: 861b59b6-7073-418e-8e9a-9281ce63040c] Claim successful on node compute-0.ctlplane.example.com
Oct 09 16:32:04 compute-0 nova_compute[117331]: 2025-10-09 16:32:04.087 2 DEBUG nova.compute.provider_tree [None req-ab62bb55-f443-4585-a5d7-8bd85d66e3a5 1c793380a6e945d69dacfd07f1f156f8 6d67ac3076434a4582e5db1ca7d043ff - - default default] Inventory has not changed in ProviderTree for provider: 593051b8-2000-437f-a915-2616fc8b1671 update_inventory /usr/lib/python3.12/site-packages/nova/compute/provider_tree.py:180
Oct 09 16:32:04 compute-0 sshd-session[150167]: Invalid user test from 36.224.53.32 port 33922
Oct 09 16:32:04 compute-0 nova_compute[117331]: 2025-10-09 16:32:04.595 2 DEBUG nova.scheduler.client.report [None req-ab62bb55-f443-4585-a5d7-8bd85d66e3a5 1c793380a6e945d69dacfd07f1f156f8 6d67ac3076434a4582e5db1ca7d043ff - - default default] Inventory has not changed for provider 593051b8-2000-437f-a915-2616fc8b1671 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 79, 'reserved': 1, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.12/site-packages/nova/scheduler/client/report.py:958
Oct 09 16:32:04 compute-0 sshd-session[150167]: pam_unix(sshd:auth): check pass; user unknown
Oct 09 16:32:04 compute-0 sshd-session[150167]: pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=36.224.53.32
Oct 09 16:32:05 compute-0 nova_compute[117331]: 2025-10-09 16:32:05.109 2 DEBUG oslo_concurrency.lockutils [None req-ab62bb55-f443-4585-a5d7-8bd85d66e3a5 1c793380a6e945d69dacfd07f1f156f8 6d67ac3076434a4582e5db1ca7d043ff - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 2.087s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:424
Oct 09 16:32:05 compute-0 nova_compute[117331]: 2025-10-09 16:32:05.110 2 DEBUG nova.compute.manager [None req-ab62bb55-f443-4585-a5d7-8bd85d66e3a5 1c793380a6e945d69dacfd07f1f156f8 6d67ac3076434a4582e5db1ca7d043ff - - default default] [instance: 861b59b6-7073-418e-8e9a-9281ce63040c] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.12/site-packages/nova/compute/manager.py:2869
Oct 09 16:32:05 compute-0 nova_compute[117331]: 2025-10-09 16:32:05.619 2 DEBUG nova.compute.manager [None req-ab62bb55-f443-4585-a5d7-8bd85d66e3a5 1c793380a6e945d69dacfd07f1f156f8 6d67ac3076434a4582e5db1ca7d043ff - - default default] [instance: 861b59b6-7073-418e-8e9a-9281ce63040c] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.12/site-packages/nova/compute/manager.py:2016
Oct 09 16:32:05 compute-0 nova_compute[117331]: 2025-10-09 16:32:05.620 2 DEBUG nova.network.neutron [None req-ab62bb55-f443-4585-a5d7-8bd85d66e3a5 1c793380a6e945d69dacfd07f1f156f8 6d67ac3076434a4582e5db1ca7d043ff - - default default] [instance: 861b59b6-7073-418e-8e9a-9281ce63040c] allocate_for_instance() allocate_for_instance /usr/lib/python3.12/site-packages/nova/network/neutron.py:1208
Oct 09 16:32:05 compute-0 nova_compute[117331]: 2025-10-09 16:32:05.621 2 WARNING neutronclient.v2_0.client [None req-ab62bb55-f443-4585-a5d7-8bd85d66e3a5 1c793380a6e945d69dacfd07f1f156f8 6d67ac3076434a4582e5db1ca7d043ff - - default default] The python binding code in neutronclient is deprecated in favor of OpenstackSDK, please use that as this will be removed in a future release.
Oct 09 16:32:05 compute-0 nova_compute[117331]: 2025-10-09 16:32:05.622 2 WARNING neutronclient.v2_0.client [None req-ab62bb55-f443-4585-a5d7-8bd85d66e3a5 1c793380a6e945d69dacfd07f1f156f8 6d67ac3076434a4582e5db1ca7d043ff - - default default] The python binding code in neutronclient is deprecated in favor of OpenstackSDK, please use that as this will be removed in a future release.
Oct 09 16:32:06 compute-0 nova_compute[117331]: 2025-10-09 16:32:06.128 2 DEBUG nova.network.neutron [None req-ab62bb55-f443-4585-a5d7-8bd85d66e3a5 1c793380a6e945d69dacfd07f1f156f8 6d67ac3076434a4582e5db1ca7d043ff - - default default] [instance: 861b59b6-7073-418e-8e9a-9281ce63040c] Successfully created port: ddaf1c36-334b-432c-8d88-e0d7678f6c28 _create_port_minimal /usr/lib/python3.12/site-packages/nova/network/neutron.py:550
Oct 09 16:32:06 compute-0 nova_compute[117331]: 2025-10-09 16:32:06.132 2 INFO nova.virt.libvirt.driver [None req-ab62bb55-f443-4585-a5d7-8bd85d66e3a5 1c793380a6e945d69dacfd07f1f156f8 6d67ac3076434a4582e5db1ca7d043ff - - default default] [instance: 861b59b6-7073-418e-8e9a-9281ce63040c] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Oct 09 16:32:06 compute-0 sshd-session[150167]: Failed password for invalid user test from 36.224.53.32 port 33922 ssh2
Oct 09 16:32:06 compute-0 nova_compute[117331]: 2025-10-09 16:32:06.640 2 DEBUG nova.compute.manager [None req-ab62bb55-f443-4585-a5d7-8bd85d66e3a5 1c793380a6e945d69dacfd07f1f156f8 6d67ac3076434a4582e5db1ca7d043ff - - default default] [instance: 861b59b6-7073-418e-8e9a-9281ce63040c] Start building block device mappings for instance. _build_resources /usr/lib/python3.12/site-packages/nova/compute/manager.py:2904
Oct 09 16:32:06 compute-0 podman[150180]: 2025-10-09 16:32:06.835582381 +0000 UTC m=+0.065861703 container health_status 64fa251112d01abb34ec1bb8e22019815ddcaaaa391ce5ec91517147ac5a9994 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., maintainer=Red Hat, Inc., release=1755695350, managed_by=edpm_ansible, architecture=x86_64, vcs-type=git, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, config_id=edpm, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, io.openshift.tags=minimal rhel9, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, io.openshift.expose-services=, container_name=openstack_network_exporter, vendor=Red Hat, Inc., build-date=2025-08-20T13:12:41, io.buildah.version=1.33.7, com.redhat.component=ubi9-minimal-container, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, distribution-scope=public, version=9.6, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., name=ubi9-minimal, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., url=https://catalog.redhat.com/en/search?searchType=containers)
Oct 09 16:32:06 compute-0 podman[150181]: 2025-10-09 16:32:06.87680094 +0000 UTC m=+0.100546584 container health_status d615e4f24ff62fbb14be4a63e6da568ec5b22309773c1c66103b6b4de64a8a2f (image=38.102.83.66:5001/podified-master-centos10/openstack-ovn-controller:watcher_latest, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=watcher_latest, container_name=ovn_controller, io.buildah.version=1.41.4, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_id=ovn_controller, managed_by=edpm_ansible, org.label-schema.build-date=20251007, org.label-schema.name=CentOS Stream 10 Base Image, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': '38.102.83.66:5001/podified-master-centos10/openstack-ovn-controller:watcher_latest', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.vendor=CentOS, tcib_managed=true)
Oct 09 16:32:06 compute-0 ovn_controller[19752]: 2025-10-09T16:32:06Z|00025|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:95:fd:24 10.100.0.10
Oct 09 16:32:06 compute-0 ovn_controller[19752]: 2025-10-09T16:32:06Z|00026|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:95:fd:24 10.100.0.10
Oct 09 16:32:07 compute-0 nova_compute[117331]: 2025-10-09 16:32:07.125 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 09 16:32:07 compute-0 nova_compute[117331]: 2025-10-09 16:32:07.292 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 09 16:32:07 compute-0 nova_compute[117331]: 2025-10-09 16:32:07.385 2 DEBUG nova.network.neutron [None req-ab62bb55-f443-4585-a5d7-8bd85d66e3a5 1c793380a6e945d69dacfd07f1f156f8 6d67ac3076434a4582e5db1ca7d043ff - - default default] [instance: 861b59b6-7073-418e-8e9a-9281ce63040c] Successfully updated port: ddaf1c36-334b-432c-8d88-e0d7678f6c28 _update_port /usr/lib/python3.12/site-packages/nova/network/neutron.py:588
Oct 09 16:32:07 compute-0 nova_compute[117331]: 2025-10-09 16:32:07.439 2 DEBUG nova.compute.manager [req-0f4844a9-ff14-4901-a883-3c726b892a34 req-c16b4c16-22d3-49cd-98f9-3c6e870d0516 ee15961ad2be4bada6c106ac4f69f422 c076c42271004e7aa86e84416e33f826 - - default default] [instance: 861b59b6-7073-418e-8e9a-9281ce63040c] Received event network-changed-ddaf1c36-334b-432c-8d88-e0d7678f6c28 external_instance_event /usr/lib/python3.12/site-packages/nova/compute/manager.py:11812
Oct 09 16:32:07 compute-0 nova_compute[117331]: 2025-10-09 16:32:07.440 2 DEBUG nova.compute.manager [req-0f4844a9-ff14-4901-a883-3c726b892a34 req-c16b4c16-22d3-49cd-98f9-3c6e870d0516 ee15961ad2be4bada6c106ac4f69f422 c076c42271004e7aa86e84416e33f826 - - default default] [instance: 861b59b6-7073-418e-8e9a-9281ce63040c] Refreshing instance network info cache due to event network-changed-ddaf1c36-334b-432c-8d88-e0d7678f6c28. external_instance_event /usr/lib/python3.12/site-packages/nova/compute/manager.py:11817
Oct 09 16:32:07 compute-0 nova_compute[117331]: 2025-10-09 16:32:07.440 2 DEBUG oslo_concurrency.lockutils [req-0f4844a9-ff14-4901-a883-3c726b892a34 req-c16b4c16-22d3-49cd-98f9-3c6e870d0516 ee15961ad2be4bada6c106ac4f69f422 c076c42271004e7aa86e84416e33f826 - - default default] Acquiring lock "refresh_cache-861b59b6-7073-418e-8e9a-9281ce63040c" lock /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:313
Oct 09 16:32:07 compute-0 nova_compute[117331]: 2025-10-09 16:32:07.440 2 DEBUG oslo_concurrency.lockutils [req-0f4844a9-ff14-4901-a883-3c726b892a34 req-c16b4c16-22d3-49cd-98f9-3c6e870d0516 ee15961ad2be4bada6c106ac4f69f422 c076c42271004e7aa86e84416e33f826 - - default default] Acquired lock "refresh_cache-861b59b6-7073-418e-8e9a-9281ce63040c" lock /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:316
Oct 09 16:32:07 compute-0 nova_compute[117331]: 2025-10-09 16:32:07.441 2 DEBUG nova.network.neutron [req-0f4844a9-ff14-4901-a883-3c726b892a34 req-c16b4c16-22d3-49cd-98f9-3c6e870d0516 ee15961ad2be4bada6c106ac4f69f422 c076c42271004e7aa86e84416e33f826 - - default default] [instance: 861b59b6-7073-418e-8e9a-9281ce63040c] Refreshing network info cache for port ddaf1c36-334b-432c-8d88-e0d7678f6c28 _get_instance_nw_info /usr/lib/python3.12/site-packages/nova/network/neutron.py:2067
Oct 09 16:32:07 compute-0 nova_compute[117331]: 2025-10-09 16:32:07.658 2 DEBUG nova.compute.manager [None req-ab62bb55-f443-4585-a5d7-8bd85d66e3a5 1c793380a6e945d69dacfd07f1f156f8 6d67ac3076434a4582e5db1ca7d043ff - - default default] [instance: 861b59b6-7073-418e-8e9a-9281ce63040c] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.12/site-packages/nova/compute/manager.py:2678
Oct 09 16:32:07 compute-0 nova_compute[117331]: 2025-10-09 16:32:07.659 2 DEBUG nova.virt.libvirt.driver [None req-ab62bb55-f443-4585-a5d7-8bd85d66e3a5 1c793380a6e945d69dacfd07f1f156f8 6d67ac3076434a4582e5db1ca7d043ff - - default default] [instance: 861b59b6-7073-418e-8e9a-9281ce63040c] Creating instance directory _create_image /usr/lib/python3.12/site-packages/nova/virt/libvirt/driver.py:5138
Oct 09 16:32:07 compute-0 nova_compute[117331]: 2025-10-09 16:32:07.660 2 INFO nova.virt.libvirt.driver [None req-ab62bb55-f443-4585-a5d7-8bd85d66e3a5 1c793380a6e945d69dacfd07f1f156f8 6d67ac3076434a4582e5db1ca7d043ff - - default default] [instance: 861b59b6-7073-418e-8e9a-9281ce63040c] Creating image(s)
Oct 09 16:32:07 compute-0 nova_compute[117331]: 2025-10-09 16:32:07.660 2 DEBUG oslo_concurrency.lockutils [None req-ab62bb55-f443-4585-a5d7-8bd85d66e3a5 1c793380a6e945d69dacfd07f1f156f8 6d67ac3076434a4582e5db1ca7d043ff - - default default] Acquiring lock "/var/lib/nova/instances/861b59b6-7073-418e-8e9a-9281ce63040c/disk.info" by "nova.virt.libvirt.imagebackend.Image.resolve_driver_format.<locals>.write_to_disk_info_file" inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:405
Oct 09 16:32:07 compute-0 nova_compute[117331]: 2025-10-09 16:32:07.660 2 DEBUG oslo_concurrency.lockutils [None req-ab62bb55-f443-4585-a5d7-8bd85d66e3a5 1c793380a6e945d69dacfd07f1f156f8 6d67ac3076434a4582e5db1ca7d043ff - - default default] Lock "/var/lib/nova/instances/861b59b6-7073-418e-8e9a-9281ce63040c/disk.info" acquired by "nova.virt.libvirt.imagebackend.Image.resolve_driver_format.<locals>.write_to_disk_info_file" :: waited 0.000s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:410
Oct 09 16:32:07 compute-0 nova_compute[117331]: 2025-10-09 16:32:07.661 2 DEBUG oslo_concurrency.lockutils [None req-ab62bb55-f443-4585-a5d7-8bd85d66e3a5 1c793380a6e945d69dacfd07f1f156f8 6d67ac3076434a4582e5db1ca7d043ff - - default default] Lock "/var/lib/nova/instances/861b59b6-7073-418e-8e9a-9281ce63040c/disk.info" "released" by "nova.virt.libvirt.imagebackend.Image.resolve_driver_format.<locals>.write_to_disk_info_file" :: held 0.001s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:424
Oct 09 16:32:07 compute-0 nova_compute[117331]: 2025-10-09 16:32:07.661 2 DEBUG oslo_utils.imageutils.format_inspector [None req-ab62bb55-f443-4585-a5d7-8bd85d66e3a5 1c793380a6e945d69dacfd07f1f156f8 6d67ac3076434a4582e5db1ca7d043ff - - default default] Format inspector for vmdk does not match, excluding from consideration (Signature KDMV not found: b'\xebH\x90\x00') _process_chunk /usr/lib/python3.12/site-packages/oslo_utils/imageutils/format_inspector.py:1365
Oct 09 16:32:07 compute-0 nova_compute[117331]: 2025-10-09 16:32:07.664 2 DEBUG oslo_utils.imageutils.format_inspector [None req-ab62bb55-f443-4585-a5d7-8bd85d66e3a5 1c793380a6e945d69dacfd07f1f156f8 6d67ac3076434a4582e5db1ca7d043ff - - default default] Format inspector for vhdx does not match, excluding from consideration (Region signature not found at 30000) _process_chunk /usr/lib/python3.12/site-packages/oslo_utils/imageutils/format_inspector.py:1365
Oct 09 16:32:07 compute-0 nova_compute[117331]: 2025-10-09 16:32:07.666 2 DEBUG oslo_concurrency.processutils [None req-ab62bb55-f443-4585-a5d7-8bd85d66e3a5 1c793380a6e945d69dacfd07f1f156f8 6d67ac3076434a4582e5db1ca7d043ff - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/cea3dacfdea0f3734ae526b812744e847bc2d356 --force-share --output=json execute /usr/lib/python3.12/site-packages/oslo_concurrency/processutils.py:349
Oct 09 16:32:07 compute-0 sshd-session[150167]: Connection closed by invalid user test 36.224.53.32 port 33922 [preauth]
Oct 09 16:32:07 compute-0 nova_compute[117331]: 2025-10-09 16:32:07.754 2 DEBUG oslo_concurrency.processutils [None req-ab62bb55-f443-4585-a5d7-8bd85d66e3a5 1c793380a6e945d69dacfd07f1f156f8 6d67ac3076434a4582e5db1ca7d043ff - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/cea3dacfdea0f3734ae526b812744e847bc2d356 --force-share --output=json" returned: 0 in 0.088s execute /usr/lib/python3.12/site-packages/oslo_concurrency/processutils.py:372
Oct 09 16:32:07 compute-0 nova_compute[117331]: 2025-10-09 16:32:07.755 2 DEBUG oslo_concurrency.lockutils [None req-ab62bb55-f443-4585-a5d7-8bd85d66e3a5 1c793380a6e945d69dacfd07f1f156f8 6d67ac3076434a4582e5db1ca7d043ff - - default default] Acquiring lock "cea3dacfdea0f3734ae526b812744e847bc2d356" by "nova.virt.libvirt.imagebackend.Qcow2.create_image.<locals>.create_qcow2_image" inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:405
Oct 09 16:32:07 compute-0 nova_compute[117331]: 2025-10-09 16:32:07.756 2 DEBUG oslo_concurrency.lockutils [None req-ab62bb55-f443-4585-a5d7-8bd85d66e3a5 1c793380a6e945d69dacfd07f1f156f8 6d67ac3076434a4582e5db1ca7d043ff - - default default] Lock "cea3dacfdea0f3734ae526b812744e847bc2d356" acquired by "nova.virt.libvirt.imagebackend.Qcow2.create_image.<locals>.create_qcow2_image" :: waited 0.001s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:410
Oct 09 16:32:07 compute-0 nova_compute[117331]: 2025-10-09 16:32:07.757 2 DEBUG oslo_utils.imageutils.format_inspector [None req-ab62bb55-f443-4585-a5d7-8bd85d66e3a5 1c793380a6e945d69dacfd07f1f156f8 6d67ac3076434a4582e5db1ca7d043ff - - default default] Format inspector for vmdk does not match, excluding from consideration (Signature KDMV not found: b'\xebH\x90\x00') _process_chunk /usr/lib/python3.12/site-packages/oslo_utils/imageutils/format_inspector.py:1365
Oct 09 16:32:07 compute-0 nova_compute[117331]: 2025-10-09 16:32:07.764 2 DEBUG oslo_utils.imageutils.format_inspector [None req-ab62bb55-f443-4585-a5d7-8bd85d66e3a5 1c793380a6e945d69dacfd07f1f156f8 6d67ac3076434a4582e5db1ca7d043ff - - default default] Format inspector for vhdx does not match, excluding from consideration (Region signature not found at 30000) _process_chunk /usr/lib/python3.12/site-packages/oslo_utils/imageutils/format_inspector.py:1365
Oct 09 16:32:07 compute-0 nova_compute[117331]: 2025-10-09 16:32:07.765 2 DEBUG oslo_concurrency.processutils [None req-ab62bb55-f443-4585-a5d7-8bd85d66e3a5 1c793380a6e945d69dacfd07f1f156f8 6d67ac3076434a4582e5db1ca7d043ff - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/cea3dacfdea0f3734ae526b812744e847bc2d356 --force-share --output=json execute /usr/lib/python3.12/site-packages/oslo_concurrency/processutils.py:349
Oct 09 16:32:07 compute-0 nova_compute[117331]: 2025-10-09 16:32:07.828 2 DEBUG oslo_concurrency.processutils [None req-ab62bb55-f443-4585-a5d7-8bd85d66e3a5 1c793380a6e945d69dacfd07f1f156f8 6d67ac3076434a4582e5db1ca7d043ff - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/cea3dacfdea0f3734ae526b812744e847bc2d356 --force-share --output=json" returned: 0 in 0.063s execute /usr/lib/python3.12/site-packages/oslo_concurrency/processutils.py:372
Oct 09 16:32:07 compute-0 nova_compute[117331]: 2025-10-09 16:32:07.829 2 DEBUG oslo_concurrency.processutils [None req-ab62bb55-f443-4585-a5d7-8bd85d66e3a5 1c793380a6e945d69dacfd07f1f156f8 6d67ac3076434a4582e5db1ca7d043ff - - default default] Running cmd (subprocess): env LC_ALL=C LANG=C qemu-img create -f qcow2 -o backing_file=/var/lib/nova/instances/_base/cea3dacfdea0f3734ae526b812744e847bc2d356,backing_fmt=raw /var/lib/nova/instances/861b59b6-7073-418e-8e9a-9281ce63040c/disk 1073741824 execute /usr/lib/python3.12/site-packages/oslo_concurrency/processutils.py:349
Oct 09 16:32:07 compute-0 nova_compute[117331]: 2025-10-09 16:32:07.871 2 DEBUG oslo_concurrency.processutils [None req-ab62bb55-f443-4585-a5d7-8bd85d66e3a5 1c793380a6e945d69dacfd07f1f156f8 6d67ac3076434a4582e5db1ca7d043ff - - default default] CMD "env LC_ALL=C LANG=C qemu-img create -f qcow2 -o backing_file=/var/lib/nova/instances/_base/cea3dacfdea0f3734ae526b812744e847bc2d356,backing_fmt=raw /var/lib/nova/instances/861b59b6-7073-418e-8e9a-9281ce63040c/disk 1073741824" returned: 0 in 0.041s execute /usr/lib/python3.12/site-packages/oslo_concurrency/processutils.py:372
Oct 09 16:32:07 compute-0 nova_compute[117331]: 2025-10-09 16:32:07.872 2 DEBUG oslo_concurrency.lockutils [None req-ab62bb55-f443-4585-a5d7-8bd85d66e3a5 1c793380a6e945d69dacfd07f1f156f8 6d67ac3076434a4582e5db1ca7d043ff - - default default] Lock "cea3dacfdea0f3734ae526b812744e847bc2d356" "released" by "nova.virt.libvirt.imagebackend.Qcow2.create_image.<locals>.create_qcow2_image" :: held 0.115s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:424
Oct 09 16:32:07 compute-0 nova_compute[117331]: 2025-10-09 16:32:07.873 2 DEBUG oslo_concurrency.processutils [None req-ab62bb55-f443-4585-a5d7-8bd85d66e3a5 1c793380a6e945d69dacfd07f1f156f8 6d67ac3076434a4582e5db1ca7d043ff - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/cea3dacfdea0f3734ae526b812744e847bc2d356 --force-share --output=json execute /usr/lib/python3.12/site-packages/oslo_concurrency/processutils.py:349
Oct 09 16:32:07 compute-0 nova_compute[117331]: 2025-10-09 16:32:07.891 2 DEBUG oslo_concurrency.lockutils [None req-ab62bb55-f443-4585-a5d7-8bd85d66e3a5 1c793380a6e945d69dacfd07f1f156f8 6d67ac3076434a4582e5db1ca7d043ff - - default default] Acquiring lock "refresh_cache-861b59b6-7073-418e-8e9a-9281ce63040c" lock /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:313
Oct 09 16:32:07 compute-0 nova_compute[117331]: 2025-10-09 16:32:07.936 2 DEBUG oslo_concurrency.processutils [None req-ab62bb55-f443-4585-a5d7-8bd85d66e3a5 1c793380a6e945d69dacfd07f1f156f8 6d67ac3076434a4582e5db1ca7d043ff - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/cea3dacfdea0f3734ae526b812744e847bc2d356 --force-share --output=json" returned: 0 in 0.064s execute /usr/lib/python3.12/site-packages/oslo_concurrency/processutils.py:372
Oct 09 16:32:07 compute-0 nova_compute[117331]: 2025-10-09 16:32:07.937 2 DEBUG nova.virt.disk.api [None req-ab62bb55-f443-4585-a5d7-8bd85d66e3a5 1c793380a6e945d69dacfd07f1f156f8 6d67ac3076434a4582e5db1ca7d043ff - - default default] Checking if we can resize image /var/lib/nova/instances/861b59b6-7073-418e-8e9a-9281ce63040c/disk. size=1073741824 can_resize_image /usr/lib/python3.12/site-packages/nova/virt/disk/api.py:164
Oct 09 16:32:07 compute-0 nova_compute[117331]: 2025-10-09 16:32:07.937 2 DEBUG oslo_concurrency.processutils [None req-ab62bb55-f443-4585-a5d7-8bd85d66e3a5 1c793380a6e945d69dacfd07f1f156f8 6d67ac3076434a4582e5db1ca7d043ff - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/861b59b6-7073-418e-8e9a-9281ce63040c/disk --force-share --output=json execute /usr/lib/python3.12/site-packages/oslo_concurrency/processutils.py:349
Oct 09 16:32:07 compute-0 nova_compute[117331]: 2025-10-09 16:32:07.948 2 WARNING neutronclient.v2_0.client [req-0f4844a9-ff14-4901-a883-3c726b892a34 req-c16b4c16-22d3-49cd-98f9-3c6e870d0516 ee15961ad2be4bada6c106ac4f69f422 c076c42271004e7aa86e84416e33f826 - - default default] The python binding code in neutronclient is deprecated in favor of OpenstackSDK, please use that as this will be removed in a future release.
Oct 09 16:32:07 compute-0 nova_compute[117331]: 2025-10-09 16:32:07.998 2 DEBUG oslo_concurrency.processutils [None req-ab62bb55-f443-4585-a5d7-8bd85d66e3a5 1c793380a6e945d69dacfd07f1f156f8 6d67ac3076434a4582e5db1ca7d043ff - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/861b59b6-7073-418e-8e9a-9281ce63040c/disk --force-share --output=json" returned: 0 in 0.061s execute /usr/lib/python3.12/site-packages/oslo_concurrency/processutils.py:372
Oct 09 16:32:07 compute-0 nova_compute[117331]: 2025-10-09 16:32:07.999 2 DEBUG nova.virt.disk.api [None req-ab62bb55-f443-4585-a5d7-8bd85d66e3a5 1c793380a6e945d69dacfd07f1f156f8 6d67ac3076434a4582e5db1ca7d043ff - - default default] Cannot resize image /var/lib/nova/instances/861b59b6-7073-418e-8e9a-9281ce63040c/disk to a smaller size. can_resize_image /usr/lib/python3.12/site-packages/nova/virt/disk/api.py:170
Oct 09 16:32:07 compute-0 nova_compute[117331]: 2025-10-09 16:32:07.999 2 DEBUG nova.virt.libvirt.driver [None req-ab62bb55-f443-4585-a5d7-8bd85d66e3a5 1c793380a6e945d69dacfd07f1f156f8 6d67ac3076434a4582e5db1ca7d043ff - - default default] [instance: 861b59b6-7073-418e-8e9a-9281ce63040c] Created local disks _create_image /usr/lib/python3.12/site-packages/nova/virt/libvirt/driver.py:5270
Oct 09 16:32:07 compute-0 nova_compute[117331]: 2025-10-09 16:32:07.999 2 DEBUG nova.virt.libvirt.driver [None req-ab62bb55-f443-4585-a5d7-8bd85d66e3a5 1c793380a6e945d69dacfd07f1f156f8 6d67ac3076434a4582e5db1ca7d043ff - - default default] [instance: 861b59b6-7073-418e-8e9a-9281ce63040c] Ensure instance console log exists: /var/lib/nova/instances/861b59b6-7073-418e-8e9a-9281ce63040c/console.log _ensure_console_log_for_instance /usr/lib/python3.12/site-packages/nova/virt/libvirt/driver.py:5017
Oct 09 16:32:08 compute-0 nova_compute[117331]: 2025-10-09 16:32:08.000 2 DEBUG oslo_concurrency.lockutils [None req-ab62bb55-f443-4585-a5d7-8bd85d66e3a5 1c793380a6e945d69dacfd07f1f156f8 6d67ac3076434a4582e5db1ca7d043ff - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:405
Oct 09 16:32:08 compute-0 nova_compute[117331]: 2025-10-09 16:32:08.000 2 DEBUG oslo_concurrency.lockutils [None req-ab62bb55-f443-4585-a5d7-8bd85d66e3a5 1c793380a6e945d69dacfd07f1f156f8 6d67ac3076434a4582e5db1ca7d043ff - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:410
Oct 09 16:32:08 compute-0 nova_compute[117331]: 2025-10-09 16:32:08.000 2 DEBUG oslo_concurrency.lockutils [None req-ab62bb55-f443-4585-a5d7-8bd85d66e3a5 1c793380a6e945d69dacfd07f1f156f8 6d67ac3076434a4582e5db1ca7d043ff - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:424
Oct 09 16:32:08 compute-0 nova_compute[117331]: 2025-10-09 16:32:08.128 2 DEBUG nova.network.neutron [req-0f4844a9-ff14-4901-a883-3c726b892a34 req-c16b4c16-22d3-49cd-98f9-3c6e870d0516 ee15961ad2be4bada6c106ac4f69f422 c076c42271004e7aa86e84416e33f826 - - default default] [instance: 861b59b6-7073-418e-8e9a-9281ce63040c] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.12/site-packages/nova/network/neutron.py:3383
Oct 09 16:32:08 compute-0 nova_compute[117331]: 2025-10-09 16:32:08.291 2 DEBUG nova.network.neutron [req-0f4844a9-ff14-4901-a883-3c726b892a34 req-c16b4c16-22d3-49cd-98f9-3c6e870d0516 ee15961ad2be4bada6c106ac4f69f422 c076c42271004e7aa86e84416e33f826 - - default default] [instance: 861b59b6-7073-418e-8e9a-9281ce63040c] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.12/site-packages/nova/network/neutron.py:116
Oct 09 16:32:08 compute-0 nova_compute[117331]: 2025-10-09 16:32:08.797 2 DEBUG oslo_concurrency.lockutils [req-0f4844a9-ff14-4901-a883-3c726b892a34 req-c16b4c16-22d3-49cd-98f9-3c6e870d0516 ee15961ad2be4bada6c106ac4f69f422 c076c42271004e7aa86e84416e33f826 - - default default] Releasing lock "refresh_cache-861b59b6-7073-418e-8e9a-9281ce63040c" lock /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:334
Oct 09 16:32:08 compute-0 nova_compute[117331]: 2025-10-09 16:32:08.799 2 DEBUG oslo_concurrency.lockutils [None req-ab62bb55-f443-4585-a5d7-8bd85d66e3a5 1c793380a6e945d69dacfd07f1f156f8 6d67ac3076434a4582e5db1ca7d043ff - - default default] Acquired lock "refresh_cache-861b59b6-7073-418e-8e9a-9281ce63040c" lock /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:316
Oct 09 16:32:08 compute-0 nova_compute[117331]: 2025-10-09 16:32:08.799 2 DEBUG nova.network.neutron [None req-ab62bb55-f443-4585-a5d7-8bd85d66e3a5 1c793380a6e945d69dacfd07f1f156f8 6d67ac3076434a4582e5db1ca7d043ff - - default default] [instance: 861b59b6-7073-418e-8e9a-9281ce63040c] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.12/site-packages/nova/network/neutron.py:2070
Oct 09 16:32:10 compute-0 nova_compute[117331]: 2025-10-09 16:32:10.120 2 DEBUG nova.network.neutron [None req-ab62bb55-f443-4585-a5d7-8bd85d66e3a5 1c793380a6e945d69dacfd07f1f156f8 6d67ac3076434a4582e5db1ca7d043ff - - default default] [instance: 861b59b6-7073-418e-8e9a-9281ce63040c] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.12/site-packages/nova/network/neutron.py:3383
Oct 09 16:32:11 compute-0 nova_compute[117331]: 2025-10-09 16:32:11.112 2 WARNING neutronclient.v2_0.client [None req-ab62bb55-f443-4585-a5d7-8bd85d66e3a5 1c793380a6e945d69dacfd07f1f156f8 6d67ac3076434a4582e5db1ca7d043ff - - default default] The python binding code in neutronclient is deprecated in favor of OpenstackSDK, please use that as this will be removed in a future release.
Oct 09 16:32:11 compute-0 nova_compute[117331]: 2025-10-09 16:32:11.259 2 DEBUG nova.network.neutron [None req-ab62bb55-f443-4585-a5d7-8bd85d66e3a5 1c793380a6e945d69dacfd07f1f156f8 6d67ac3076434a4582e5db1ca7d043ff - - default default] [instance: 861b59b6-7073-418e-8e9a-9281ce63040c] Updating instance_info_cache with network_info: [{"id": "ddaf1c36-334b-432c-8d88-e0d7678f6c28", "address": "fa:16:3e:1b:0c:10", "network": {"id": "54b37568-476a-40a0-b545-fe5401f85653", "bridge": "br-int", "label": "tempest-TestExecuteWorkloadBalanceStrategy-563305466-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "036a4e356fb34effb6775ffe5bd9a19f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapddaf1c36-33", "ovs_interfaceid": "ddaf1c36-334b-432c-8d88-e0d7678f6c28", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.12/site-packages/nova/network/neutron.py:116
Oct 09 16:32:11 compute-0 nova_compute[117331]: 2025-10-09 16:32:11.769 2 DEBUG oslo_concurrency.lockutils [None req-ab62bb55-f443-4585-a5d7-8bd85d66e3a5 1c793380a6e945d69dacfd07f1f156f8 6d67ac3076434a4582e5db1ca7d043ff - - default default] Releasing lock "refresh_cache-861b59b6-7073-418e-8e9a-9281ce63040c" lock /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:334
Oct 09 16:32:11 compute-0 nova_compute[117331]: 2025-10-09 16:32:11.770 2 DEBUG nova.compute.manager [None req-ab62bb55-f443-4585-a5d7-8bd85d66e3a5 1c793380a6e945d69dacfd07f1f156f8 6d67ac3076434a4582e5db1ca7d043ff - - default default] [instance: 861b59b6-7073-418e-8e9a-9281ce63040c] Instance network_info: |[{"id": "ddaf1c36-334b-432c-8d88-e0d7678f6c28", "address": "fa:16:3e:1b:0c:10", "network": {"id": "54b37568-476a-40a0-b545-fe5401f85653", "bridge": "br-int", "label": "tempest-TestExecuteWorkloadBalanceStrategy-563305466-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "036a4e356fb34effb6775ffe5bd9a19f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapddaf1c36-33", "ovs_interfaceid": "ddaf1c36-334b-432c-8d88-e0d7678f6c28", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.12/site-packages/nova/compute/manager.py:2031
Oct 09 16:32:11 compute-0 nova_compute[117331]: 2025-10-09 16:32:11.772 2 DEBUG nova.virt.libvirt.driver [None req-ab62bb55-f443-4585-a5d7-8bd85d66e3a5 1c793380a6e945d69dacfd07f1f156f8 6d67ac3076434a4582e5db1ca7d043ff - - default default] [instance: 861b59b6-7073-418e-8e9a-9281ce63040c] Start _get_guest_xml network_info=[{"id": "ddaf1c36-334b-432c-8d88-e0d7678f6c28", "address": "fa:16:3e:1b:0c:10", "network": {"id": "54b37568-476a-40a0-b545-fe5401f85653", "bridge": "br-int", "label": "tempest-TestExecuteWorkloadBalanceStrategy-563305466-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "036a4e356fb34effb6775ffe5bd9a19f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapddaf1c36-33", "ovs_interfaceid": "ddaf1c36-334b-432c-8d88-e0d7678f6c28", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-09T16:07:02Z,direct_url=<?>,disk_format='qcow2',id=b7d6e0af-25e4-4227-9dc6-43143898ceee,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='b30e8cf5e10742f190212b4cb97ce2c9',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-09T16:07:04Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'device_type': 'disk', 'encrypted': False, 'size': 0, 'encryption_format': None, 'guest_format': None, 'boot_index': 0, 'device_name': '/dev/vda', 'disk_bus': 'virtio', 'encryption_options': None, 'encryption_secret_uuid': None, 'image_id': 'b7d6e0af-25e4-4227-9dc6-43143898ceee'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None}share_info=None _get_guest_xml /usr/lib/python3.12/site-packages/nova/virt/libvirt/driver.py:8046
Oct 09 16:32:11 compute-0 nova_compute[117331]: 2025-10-09 16:32:11.781 2 WARNING nova.virt.libvirt.driver [None req-ab62bb55-f443-4585-a5d7-8bd85d66e3a5 1c793380a6e945d69dacfd07f1f156f8 6d67ac3076434a4582e5db1ca7d043ff - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Oct 09 16:32:11 compute-0 nova_compute[117331]: 2025-10-09 16:32:11.783 2 DEBUG nova.virt.driver [None req-ab62bb55-f443-4585-a5d7-8bd85d66e3a5 1c793380a6e945d69dacfd07f1f156f8 6d67ac3076434a4582e5db1ca7d043ff - - default default] InstanceDriverMetadata: InstanceDriverMetadata(root_type='image', root_id='b7d6e0af-25e4-4227-9dc6-43143898ceee', instance_meta=NovaInstanceMeta(name='tempest-TestExecuteWorkloadBalanceStrategy-server-790928385', uuid='861b59b6-7073-418e-8e9a-9281ce63040c'), owner=OwnerMeta(userid='1c793380a6e945d69dacfd07f1f156f8', username='tempest-TestExecuteWorkloadBalanceStrategy-2100042169-project-admin', projectid='6d67ac3076434a4582e5db1ca7d043ff', projectname='tempest-TestExecuteWorkloadBalanceStrategy-2100042169'), image=ImageMeta(id='b7d6e0af-25e4-4227-9dc6-43143898ceee', name=None, container_format='bare', disk_format='qcow2', min_disk=1, min_ram=0, properties={'hw_rng_model': 'virtio'}), flavor=FlavorMeta(name='m1.nano', flavorid='5aeb5bf9-70f3-4ab6-b418-2c3319a7d2b3', memory_mb=128, vcpus=1, root_gb=1, ephemeral_gb=0, extra_specs={'hw_rng:allowed': 'True'}, swap=0), network_info=[{"id": "ddaf1c36-334b-432c-8d88-e0d7678f6c28", "address": "fa:16:3e:1b:0c:10", "network": {"id": "54b37568-476a-40a0-b545-fe5401f85653", "bridge": "br-int", "label": "tempest-TestExecuteWorkloadBalanceStrategy-563305466-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "036a4e356fb34effb6775ffe5bd9a19f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapddaf1c36-33", "ovs_interfaceid": 
"ddaf1c36-334b-432c-8d88-e0d7678f6c28", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}], nova_package='32.1.0-0.20251008143712.076498e.el10', creation_time=1760027531.7830849) get_instance_driver_metadata /usr/lib/python3.12/site-packages/nova/virt/driver.py:437
Oct 09 16:32:11 compute-0 nova_compute[117331]: 2025-10-09 16:32:11.787 2 DEBUG nova.virt.libvirt.host [None req-ab62bb55-f443-4585-a5d7-8bd85d66e3a5 1c793380a6e945d69dacfd07f1f156f8 6d67ac3076434a4582e5db1ca7d043ff - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.12/site-packages/nova/virt/libvirt/host.py:1698
Oct 09 16:32:11 compute-0 nova_compute[117331]: 2025-10-09 16:32:11.787 2 DEBUG nova.virt.libvirt.host [None req-ab62bb55-f443-4585-a5d7-8bd85d66e3a5 1c793380a6e945d69dacfd07f1f156f8 6d67ac3076434a4582e5db1ca7d043ff - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.12/site-packages/nova/virt/libvirt/host.py:1708
Oct 09 16:32:11 compute-0 nova_compute[117331]: 2025-10-09 16:32:11.790 2 DEBUG nova.virt.libvirt.host [None req-ab62bb55-f443-4585-a5d7-8bd85d66e3a5 1c793380a6e945d69dacfd07f1f156f8 6d67ac3076434a4582e5db1ca7d043ff - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.12/site-packages/nova/virt/libvirt/host.py:1717
Oct 09 16:32:11 compute-0 nova_compute[117331]: 2025-10-09 16:32:11.790 2 DEBUG nova.virt.libvirt.host [None req-ab62bb55-f443-4585-a5d7-8bd85d66e3a5 1c793380a6e945d69dacfd07f1f156f8 6d67ac3076434a4582e5db1ca7d043ff - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.12/site-packages/nova/virt/libvirt/host.py:1724
Oct 09 16:32:11 compute-0 nova_compute[117331]: 2025-10-09 16:32:11.791 2 DEBUG nova.virt.libvirt.driver [None req-ab62bb55-f443-4585-a5d7-8bd85d66e3a5 1c793380a6e945d69dacfd07f1f156f8 6d67ac3076434a4582e5db1ca7d043ff - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.12/site-packages/nova/virt/libvirt/driver.py:5809
Oct 09 16:32:11 compute-0 nova_compute[117331]: 2025-10-09 16:32:11.791 2 DEBUG nova.virt.hardware [None req-ab62bb55-f443-4585-a5d7-8bd85d66e3a5 1c793380a6e945d69dacfd07f1f156f8 6d67ac3076434a4582e5db1ca7d043ff - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-10-09T16:07:00Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='5aeb5bf9-70f3-4ab6-b418-2c3319a7d2b3',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-09T16:07:02Z,direct_url=<?>,disk_format='qcow2',id=b7d6e0af-25e4-4227-9dc6-43143898ceee,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='b30e8cf5e10742f190212b4cb97ce2c9',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-09T16:07:04Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.12/site-packages/nova/virt/hardware.py:571
Oct 09 16:32:11 compute-0 nova_compute[117331]: 2025-10-09 16:32:11.792 2 DEBUG nova.virt.hardware [None req-ab62bb55-f443-4585-a5d7-8bd85d66e3a5 1c793380a6e945d69dacfd07f1f156f8 6d67ac3076434a4582e5db1ca7d043ff - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.12/site-packages/nova/virt/hardware.py:356
Oct 09 16:32:11 compute-0 nova_compute[117331]: 2025-10-09 16:32:11.792 2 DEBUG nova.virt.hardware [None req-ab62bb55-f443-4585-a5d7-8bd85d66e3a5 1c793380a6e945d69dacfd07f1f156f8 6d67ac3076434a4582e5db1ca7d043ff - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.12/site-packages/nova/virt/hardware.py:360
Oct 09 16:32:11 compute-0 nova_compute[117331]: 2025-10-09 16:32:11.792 2 DEBUG nova.virt.hardware [None req-ab62bb55-f443-4585-a5d7-8bd85d66e3a5 1c793380a6e945d69dacfd07f1f156f8 6d67ac3076434a4582e5db1ca7d043ff - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.12/site-packages/nova/virt/hardware.py:396
Oct 09 16:32:11 compute-0 nova_compute[117331]: 2025-10-09 16:32:11.792 2 DEBUG nova.virt.hardware [None req-ab62bb55-f443-4585-a5d7-8bd85d66e3a5 1c793380a6e945d69dacfd07f1f156f8 6d67ac3076434a4582e5db1ca7d043ff - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.12/site-packages/nova/virt/hardware.py:400
Oct 09 16:32:11 compute-0 nova_compute[117331]: 2025-10-09 16:32:11.792 2 DEBUG nova.virt.hardware [None req-ab62bb55-f443-4585-a5d7-8bd85d66e3a5 1c793380a6e945d69dacfd07f1f156f8 6d67ac3076434a4582e5db1ca7d043ff - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.12/site-packages/nova/virt/hardware.py:438
Oct 09 16:32:11 compute-0 nova_compute[117331]: 2025-10-09 16:32:11.793 2 DEBUG nova.virt.hardware [None req-ab62bb55-f443-4585-a5d7-8bd85d66e3a5 1c793380a6e945d69dacfd07f1f156f8 6d67ac3076434a4582e5db1ca7d043ff - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.12/site-packages/nova/virt/hardware.py:577
Oct 09 16:32:11 compute-0 nova_compute[117331]: 2025-10-09 16:32:11.793 2 DEBUG nova.virt.hardware [None req-ab62bb55-f443-4585-a5d7-8bd85d66e3a5 1c793380a6e945d69dacfd07f1f156f8 6d67ac3076434a4582e5db1ca7d043ff - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.12/site-packages/nova/virt/hardware.py:479
Oct 09 16:32:11 compute-0 nova_compute[117331]: 2025-10-09 16:32:11.793 2 DEBUG nova.virt.hardware [None req-ab62bb55-f443-4585-a5d7-8bd85d66e3a5 1c793380a6e945d69dacfd07f1f156f8 6d67ac3076434a4582e5db1ca7d043ff - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.12/site-packages/nova/virt/hardware.py:509
Oct 09 16:32:11 compute-0 nova_compute[117331]: 2025-10-09 16:32:11.793 2 DEBUG nova.virt.hardware [None req-ab62bb55-f443-4585-a5d7-8bd85d66e3a5 1c793380a6e945d69dacfd07f1f156f8 6d67ac3076434a4582e5db1ca7d043ff - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.12/site-packages/nova/virt/hardware.py:583
Oct 09 16:32:11 compute-0 nova_compute[117331]: 2025-10-09 16:32:11.793 2 DEBUG nova.virt.hardware [None req-ab62bb55-f443-4585-a5d7-8bd85d66e3a5 1c793380a6e945d69dacfd07f1f156f8 6d67ac3076434a4582e5db1ca7d043ff - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.12/site-packages/nova/virt/hardware.py:585
Oct 09 16:32:11 compute-0 nova_compute[117331]: 2025-10-09 16:32:11.797 2 DEBUG nova.virt.libvirt.vif [None req-ab62bb55-f443-4585-a5d7-8bd85d66e3a5 1c793380a6e945d69dacfd07f1f156f8 6d67ac3076434a4582e5db1ca7d043ff - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,compute_id=1,config_drive='',created_at=2025-10-09T16:32:00Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description=None,display_name='tempest-TestExecuteWorkloadBalanceStrategy-server-790928385',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testexecuteworkloadbalancestrategy-server-790928385',id=23,image_ref='b7d6e0af-25e4-4227-9dc6-43143898ceee',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='6d67ac3076434a4582e5db1ca7d043ff',ramdisk_id='',reservation_id='r-fmbb4iaa',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,manager,admin,member',image_base_image_ref='b7d6e0af-25e4-4227-9dc6-43143898ceee',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestExecuteWorkloadBalanceStrategy-2100042169',owner_user_name='tempest-TestExec
uteWorkloadBalanceStrategy-2100042169-project-admin'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-09T16:32:06Z,user_data=None,user_id='1c793380a6e945d69dacfd07f1f156f8',uuid=861b59b6-7073-418e-8e9a-9281ce63040c,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "ddaf1c36-334b-432c-8d88-e0d7678f6c28", "address": "fa:16:3e:1b:0c:10", "network": {"id": "54b37568-476a-40a0-b545-fe5401f85653", "bridge": "br-int", "label": "tempest-TestExecuteWorkloadBalanceStrategy-563305466-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "036a4e356fb34effb6775ffe5bd9a19f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapddaf1c36-33", "ovs_interfaceid": "ddaf1c36-334b-432c-8d88-e0d7678f6c28", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.12/site-packages/nova/virt/libvirt/vif.py:574
Oct 09 16:32:11 compute-0 nova_compute[117331]: 2025-10-09 16:32:11.798 2 DEBUG nova.network.os_vif_util [None req-ab62bb55-f443-4585-a5d7-8bd85d66e3a5 1c793380a6e945d69dacfd07f1f156f8 6d67ac3076434a4582e5db1ca7d043ff - - default default] Converting VIF {"id": "ddaf1c36-334b-432c-8d88-e0d7678f6c28", "address": "fa:16:3e:1b:0c:10", "network": {"id": "54b37568-476a-40a0-b545-fe5401f85653", "bridge": "br-int", "label": "tempest-TestExecuteWorkloadBalanceStrategy-563305466-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "036a4e356fb34effb6775ffe5bd9a19f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapddaf1c36-33", "ovs_interfaceid": "ddaf1c36-334b-432c-8d88-e0d7678f6c28", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.12/site-packages/nova/network/os_vif_util.py:511
Oct 09 16:32:11 compute-0 nova_compute[117331]: 2025-10-09 16:32:11.798 2 DEBUG nova.network.os_vif_util [None req-ab62bb55-f443-4585-a5d7-8bd85d66e3a5 1c793380a6e945d69dacfd07f1f156f8 6d67ac3076434a4582e5db1ca7d043ff - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:1b:0c:10,bridge_name='br-int',has_traffic_filtering=True,id=ddaf1c36-334b-432c-8d88-e0d7678f6c28,network=Network(54b37568-476a-40a0-b545-fe5401f85653),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapddaf1c36-33') nova_to_osvif_vif /usr/lib/python3.12/site-packages/nova/network/os_vif_util.py:548
Oct 09 16:32:11 compute-0 nova_compute[117331]: 2025-10-09 16:32:11.799 2 DEBUG nova.objects.instance [None req-ab62bb55-f443-4585-a5d7-8bd85d66e3a5 1c793380a6e945d69dacfd07f1f156f8 6d67ac3076434a4582e5db1ca7d043ff - - default default] Lazy-loading 'pci_devices' on Instance uuid 861b59b6-7073-418e-8e9a-9281ce63040c obj_load_attr /usr/lib/python3.12/site-packages/nova/objects/instance.py:1141
Oct 09 16:32:12 compute-0 sshd-session[150240]: Connection reset by 147.185.132.109 port 58928 [preauth]
Oct 09 16:32:12 compute-0 nova_compute[117331]: 2025-10-09 16:32:12.127 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 09 16:32:12 compute-0 nova_compute[117331]: 2025-10-09 16:32:12.294 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 09 16:32:12 compute-0 nova_compute[117331]: 2025-10-09 16:32:12.306 2 DEBUG nova.virt.libvirt.driver [None req-ab62bb55-f443-4585-a5d7-8bd85d66e3a5 1c793380a6e945d69dacfd07f1f156f8 6d67ac3076434a4582e5db1ca7d043ff - - default default] [instance: 861b59b6-7073-418e-8e9a-9281ce63040c] End _get_guest_xml xml=<domain type="kvm">
Oct 09 16:32:12 compute-0 nova_compute[117331]:   <uuid>861b59b6-7073-418e-8e9a-9281ce63040c</uuid>
Oct 09 16:32:12 compute-0 nova_compute[117331]:   <name>instance-00000017</name>
Oct 09 16:32:12 compute-0 nova_compute[117331]:   <memory>131072</memory>
Oct 09 16:32:12 compute-0 nova_compute[117331]:   <vcpu>1</vcpu>
Oct 09 16:32:12 compute-0 nova_compute[117331]:   <metadata>
Oct 09 16:32:12 compute-0 nova_compute[117331]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Oct 09 16:32:12 compute-0 nova_compute[117331]:       <nova:package version="32.1.0-0.20251008143712.076498e.el10"/>
Oct 09 16:32:12 compute-0 nova_compute[117331]:       <nova:name>tempest-TestExecuteWorkloadBalanceStrategy-server-790928385</nova:name>
Oct 09 16:32:12 compute-0 nova_compute[117331]:       <nova:creationTime>2025-10-09 16:32:11</nova:creationTime>
Oct 09 16:32:12 compute-0 nova_compute[117331]:       <nova:flavor name="m1.nano" id="5aeb5bf9-70f3-4ab6-b418-2c3319a7d2b3">
Oct 09 16:32:12 compute-0 nova_compute[117331]:         <nova:memory>128</nova:memory>
Oct 09 16:32:12 compute-0 nova_compute[117331]:         <nova:disk>1</nova:disk>
Oct 09 16:32:12 compute-0 nova_compute[117331]:         <nova:swap>0</nova:swap>
Oct 09 16:32:12 compute-0 nova_compute[117331]:         <nova:ephemeral>0</nova:ephemeral>
Oct 09 16:32:12 compute-0 nova_compute[117331]:         <nova:vcpus>1</nova:vcpus>
Oct 09 16:32:12 compute-0 nova_compute[117331]:         <nova:extraSpecs>
Oct 09 16:32:12 compute-0 nova_compute[117331]:           <nova:extraSpec name="hw_rng:allowed">True</nova:extraSpec>
Oct 09 16:32:12 compute-0 nova_compute[117331]:         </nova:extraSpecs>
Oct 09 16:32:12 compute-0 nova_compute[117331]:       </nova:flavor>
Oct 09 16:32:12 compute-0 nova_compute[117331]:       <nova:image uuid="b7d6e0af-25e4-4227-9dc6-43143898ceee">
Oct 09 16:32:12 compute-0 nova_compute[117331]:         <nova:containerFormat>bare</nova:containerFormat>
Oct 09 16:32:12 compute-0 nova_compute[117331]:         <nova:diskFormat>qcow2</nova:diskFormat>
Oct 09 16:32:12 compute-0 nova_compute[117331]:         <nova:minDisk>1</nova:minDisk>
Oct 09 16:32:12 compute-0 nova_compute[117331]:         <nova:minRam>0</nova:minRam>
Oct 09 16:32:12 compute-0 nova_compute[117331]:         <nova:properties>
Oct 09 16:32:12 compute-0 nova_compute[117331]:           <nova:property name="hw_rng_model">virtio</nova:property>
Oct 09 16:32:12 compute-0 nova_compute[117331]:         </nova:properties>
Oct 09 16:32:12 compute-0 nova_compute[117331]:       </nova:image>
Oct 09 16:32:12 compute-0 nova_compute[117331]:       <nova:owner>
Oct 09 16:32:12 compute-0 nova_compute[117331]:         <nova:user uuid="1c793380a6e945d69dacfd07f1f156f8">tempest-TestExecuteWorkloadBalanceStrategy-2100042169-project-admin</nova:user>
Oct 09 16:32:12 compute-0 nova_compute[117331]:         <nova:project uuid="6d67ac3076434a4582e5db1ca7d043ff">tempest-TestExecuteWorkloadBalanceStrategy-2100042169</nova:project>
Oct 09 16:32:12 compute-0 nova_compute[117331]:       </nova:owner>
Oct 09 16:32:12 compute-0 nova_compute[117331]:       <nova:root type="image" uuid="b7d6e0af-25e4-4227-9dc6-43143898ceee"/>
Oct 09 16:32:12 compute-0 nova_compute[117331]:       <nova:ports>
Oct 09 16:32:12 compute-0 nova_compute[117331]:         <nova:port uuid="ddaf1c36-334b-432c-8d88-e0d7678f6c28">
Oct 09 16:32:12 compute-0 nova_compute[117331]:           <nova:ip type="fixed" address="10.100.0.6" ipVersion="4"/>
Oct 09 16:32:12 compute-0 nova_compute[117331]:         </nova:port>
Oct 09 16:32:12 compute-0 nova_compute[117331]:       </nova:ports>
Oct 09 16:32:12 compute-0 nova_compute[117331]:     </nova:instance>
Oct 09 16:32:12 compute-0 nova_compute[117331]:   </metadata>
Oct 09 16:32:12 compute-0 nova_compute[117331]:   <sysinfo type="smbios">
Oct 09 16:32:12 compute-0 nova_compute[117331]:     <system>
Oct 09 16:32:12 compute-0 nova_compute[117331]:       <entry name="manufacturer">RDO</entry>
Oct 09 16:32:12 compute-0 nova_compute[117331]:       <entry name="product">OpenStack Compute</entry>
Oct 09 16:32:12 compute-0 nova_compute[117331]:       <entry name="version">32.1.0-0.20251008143712.076498e.el10</entry>
Oct 09 16:32:12 compute-0 nova_compute[117331]:       <entry name="serial">861b59b6-7073-418e-8e9a-9281ce63040c</entry>
Oct 09 16:32:12 compute-0 nova_compute[117331]:       <entry name="uuid">861b59b6-7073-418e-8e9a-9281ce63040c</entry>
Oct 09 16:32:12 compute-0 nova_compute[117331]:       <entry name="family">Virtual Machine</entry>
Oct 09 16:32:12 compute-0 nova_compute[117331]:     </system>
Oct 09 16:32:12 compute-0 nova_compute[117331]:   </sysinfo>
Oct 09 16:32:12 compute-0 nova_compute[117331]:   <os>
Oct 09 16:32:12 compute-0 nova_compute[117331]:     <type arch="x86_64" machine="q35">hvm</type>
Oct 09 16:32:12 compute-0 nova_compute[117331]:     <boot dev="hd"/>
Oct 09 16:32:12 compute-0 nova_compute[117331]:     <smbios mode="sysinfo"/>
Oct 09 16:32:12 compute-0 nova_compute[117331]:   </os>
Oct 09 16:32:12 compute-0 nova_compute[117331]:   <features>
Oct 09 16:32:12 compute-0 nova_compute[117331]:     <acpi/>
Oct 09 16:32:12 compute-0 nova_compute[117331]:     <apic/>
Oct 09 16:32:12 compute-0 nova_compute[117331]:     <vmcoreinfo/>
Oct 09 16:32:12 compute-0 nova_compute[117331]:   </features>
Oct 09 16:32:12 compute-0 nova_compute[117331]:   <clock offset="utc">
Oct 09 16:32:12 compute-0 nova_compute[117331]:     <timer name="pit" tickpolicy="delay"/>
Oct 09 16:32:12 compute-0 nova_compute[117331]:     <timer name="rtc" tickpolicy="catchup"/>
Oct 09 16:32:12 compute-0 nova_compute[117331]:     <timer name="hpet" present="no"/>
Oct 09 16:32:12 compute-0 nova_compute[117331]:   </clock>
Oct 09 16:32:12 compute-0 nova_compute[117331]:   <cpu mode="host-model" match="exact">
Oct 09 16:32:12 compute-0 nova_compute[117331]:     <topology sockets="1" cores="1" threads="1"/>
Oct 09 16:32:12 compute-0 nova_compute[117331]:   </cpu>
Oct 09 16:32:12 compute-0 nova_compute[117331]:   <devices>
Oct 09 16:32:12 compute-0 nova_compute[117331]:     <disk type="file" device="disk">
Oct 09 16:32:12 compute-0 nova_compute[117331]:       <driver name="qemu" type="qcow2" cache="none"/>
Oct 09 16:32:12 compute-0 nova_compute[117331]:       <source file="/var/lib/nova/instances/861b59b6-7073-418e-8e9a-9281ce63040c/disk"/>
Oct 09 16:32:12 compute-0 nova_compute[117331]:       <target dev="vda" bus="virtio"/>
Oct 09 16:32:12 compute-0 nova_compute[117331]:     </disk>
Oct 09 16:32:12 compute-0 nova_compute[117331]:     <disk type="file" device="cdrom">
Oct 09 16:32:12 compute-0 nova_compute[117331]:       <driver name="qemu" type="raw" cache="none"/>
Oct 09 16:32:12 compute-0 nova_compute[117331]:       <source file="/var/lib/nova/instances/861b59b6-7073-418e-8e9a-9281ce63040c/disk.config"/>
Oct 09 16:32:12 compute-0 nova_compute[117331]:       <target dev="sda" bus="sata"/>
Oct 09 16:32:12 compute-0 nova_compute[117331]:     </disk>
Oct 09 16:32:12 compute-0 nova_compute[117331]:     <interface type="ethernet">
Oct 09 16:32:12 compute-0 nova_compute[117331]:       <mac address="fa:16:3e:1b:0c:10"/>
Oct 09 16:32:12 compute-0 nova_compute[117331]:       <model type="virtio"/>
Oct 09 16:32:12 compute-0 nova_compute[117331]:       <driver name="vhost" rx_queue_size="512"/>
Oct 09 16:32:12 compute-0 nova_compute[117331]:       <mtu size="1442"/>
Oct 09 16:32:12 compute-0 nova_compute[117331]:       <target dev="tapddaf1c36-33"/>
Oct 09 16:32:12 compute-0 nova_compute[117331]:     </interface>
Oct 09 16:32:12 compute-0 nova_compute[117331]:     <serial type="pty">
Oct 09 16:32:12 compute-0 nova_compute[117331]:       <log file="/var/lib/nova/instances/861b59b6-7073-418e-8e9a-9281ce63040c/console.log" append="off"/>
Oct 09 16:32:12 compute-0 nova_compute[117331]:     </serial>
Oct 09 16:32:12 compute-0 nova_compute[117331]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Oct 09 16:32:12 compute-0 nova_compute[117331]:     <video>
Oct 09 16:32:12 compute-0 nova_compute[117331]:       <model type="virtio"/>
Oct 09 16:32:12 compute-0 nova_compute[117331]:     </video>
Oct 09 16:32:12 compute-0 nova_compute[117331]:     <input type="tablet" bus="usb"/>
Oct 09 16:32:12 compute-0 nova_compute[117331]:     <rng model="virtio">
Oct 09 16:32:12 compute-0 nova_compute[117331]:       <backend model="random">/dev/urandom</backend>
Oct 09 16:32:12 compute-0 nova_compute[117331]:     </rng>
Oct 09 16:32:12 compute-0 nova_compute[117331]:     <controller type="pci" model="pcie-root"/>
Oct 09 16:32:12 compute-0 nova_compute[117331]:     <controller type="pci" model="pcie-root-port"/>
Oct 09 16:32:12 compute-0 nova_compute[117331]:     <controller type="pci" model="pcie-root-port"/>
Oct 09 16:32:12 compute-0 nova_compute[117331]:     <controller type="pci" model="pcie-root-port"/>
Oct 09 16:32:12 compute-0 nova_compute[117331]:     <controller type="pci" model="pcie-root-port"/>
Oct 09 16:32:12 compute-0 nova_compute[117331]:     <controller type="pci" model="pcie-root-port"/>
Oct 09 16:32:12 compute-0 nova_compute[117331]:     <controller type="pci" model="pcie-root-port"/>
Oct 09 16:32:12 compute-0 nova_compute[117331]:     <controller type="pci" model="pcie-root-port"/>
Oct 09 16:32:12 compute-0 nova_compute[117331]:     <controller type="pci" model="pcie-root-port"/>
Oct 09 16:32:12 compute-0 nova_compute[117331]:     <controller type="pci" model="pcie-root-port"/>
Oct 09 16:32:12 compute-0 nova_compute[117331]:     <controller type="pci" model="pcie-root-port"/>
Oct 09 16:32:12 compute-0 nova_compute[117331]:     <controller type="pci" model="pcie-root-port"/>
Oct 09 16:32:12 compute-0 nova_compute[117331]:     <controller type="pci" model="pcie-root-port"/>
Oct 09 16:32:12 compute-0 nova_compute[117331]:     <controller type="pci" model="pcie-root-port"/>
Oct 09 16:32:12 compute-0 nova_compute[117331]:     <controller type="pci" model="pcie-root-port"/>
Oct 09 16:32:12 compute-0 nova_compute[117331]:     <controller type="pci" model="pcie-root-port"/>
Oct 09 16:32:12 compute-0 nova_compute[117331]:     <controller type="pci" model="pcie-root-port"/>
Oct 09 16:32:12 compute-0 nova_compute[117331]:     <controller type="pci" model="pcie-root-port"/>
Oct 09 16:32:12 compute-0 nova_compute[117331]:     <controller type="pci" model="pcie-root-port"/>
Oct 09 16:32:12 compute-0 nova_compute[117331]:     <controller type="pci" model="pcie-root-port"/>
Oct 09 16:32:12 compute-0 nova_compute[117331]:     <controller type="pci" model="pcie-root-port"/>
Oct 09 16:32:12 compute-0 nova_compute[117331]:     <controller type="pci" model="pcie-root-port"/>
Oct 09 16:32:12 compute-0 nova_compute[117331]:     <controller type="pci" model="pcie-root-port"/>
Oct 09 16:32:12 compute-0 nova_compute[117331]:     <controller type="pci" model="pcie-root-port"/>
Oct 09 16:32:12 compute-0 nova_compute[117331]:     <controller type="pci" model="pcie-root-port"/>
Oct 09 16:32:12 compute-0 nova_compute[117331]:     <controller type="usb" index="0"/>
Oct 09 16:32:12 compute-0 nova_compute[117331]:     <memballoon model="virtio" autodeflate="on" freePageReporting="on">
Oct 09 16:32:12 compute-0 nova_compute[117331]:       <stats period="10"/>
Oct 09 16:32:12 compute-0 nova_compute[117331]:     </memballoon>
Oct 09 16:32:12 compute-0 nova_compute[117331]:   </devices>
Oct 09 16:32:12 compute-0 nova_compute[117331]: </domain>
Oct 09 16:32:12 compute-0 nova_compute[117331]:  _get_guest_xml /usr/lib/python3.12/site-packages/nova/virt/libvirt/driver.py:8052
Oct 09 16:32:12 compute-0 nova_compute[117331]: 2025-10-09 16:32:12.307 2 DEBUG nova.compute.manager [None req-ab62bb55-f443-4585-a5d7-8bd85d66e3a5 1c793380a6e945d69dacfd07f1f156f8 6d67ac3076434a4582e5db1ca7d043ff - - default default] [instance: 861b59b6-7073-418e-8e9a-9281ce63040c] Preparing to wait for external event network-vif-plugged-ddaf1c36-334b-432c-8d88-e0d7678f6c28 prepare_for_instance_event /usr/lib/python3.12/site-packages/nova/compute/manager.py:307
Oct 09 16:32:12 compute-0 nova_compute[117331]: 2025-10-09 16:32:12.308 2 DEBUG oslo_concurrency.lockutils [None req-ab62bb55-f443-4585-a5d7-8bd85d66e3a5 1c793380a6e945d69dacfd07f1f156f8 6d67ac3076434a4582e5db1ca7d043ff - - default default] Acquiring lock "861b59b6-7073-418e-8e9a-9281ce63040c-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:405
Oct 09 16:32:12 compute-0 nova_compute[117331]: 2025-10-09 16:32:12.308 2 DEBUG oslo_concurrency.lockutils [None req-ab62bb55-f443-4585-a5d7-8bd85d66e3a5 1c793380a6e945d69dacfd07f1f156f8 6d67ac3076434a4582e5db1ca7d043ff - - default default] Lock "861b59b6-7073-418e-8e9a-9281ce63040c-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:410
Oct 09 16:32:12 compute-0 nova_compute[117331]: 2025-10-09 16:32:12.309 2 DEBUG oslo_concurrency.lockutils [None req-ab62bb55-f443-4585-a5d7-8bd85d66e3a5 1c793380a6e945d69dacfd07f1f156f8 6d67ac3076434a4582e5db1ca7d043ff - - default default] Lock "861b59b6-7073-418e-8e9a-9281ce63040c-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:424
Oct 09 16:32:12 compute-0 nova_compute[117331]: 2025-10-09 16:32:12.310 2 DEBUG nova.virt.libvirt.vif [None req-ab62bb55-f443-4585-a5d7-8bd85d66e3a5 1c793380a6e945d69dacfd07f1f156f8 6d67ac3076434a4582e5db1ca7d043ff - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,compute_id=1,config_drive='',created_at=2025-10-09T16:32:00Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description=None,display_name='tempest-TestExecuteWorkloadBalanceStrategy-server-790928385',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testexecuteworkloadbalancestrategy-server-790928385',id=23,image_ref='b7d6e0af-25e4-4227-9dc6-43143898ceee',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='6d67ac3076434a4582e5db1ca7d043ff',ramdisk_id='',reservation_id='r-fmbb4iaa',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,manager,admin,member',image_base_image_ref='b7d6e0af-25e4-4227-9dc6-43143898ceee',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestExecuteWorkloadBalanceStrategy-2100042169',owner_user_name='tempest-TestExecuteWorkloadBalanceStrategy-2100042169-project-admin'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-09T16:32:06Z,user_data=None,user_id='1c793380a6e945d69dacfd07f1f156f8',uuid=861b59b6-7073-418e-8e9a-9281ce63040c,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "ddaf1c36-334b-432c-8d88-e0d7678f6c28", "address": "fa:16:3e:1b:0c:10", "network": {"id": "54b37568-476a-40a0-b545-fe5401f85653", "bridge": "br-int", "label": "tempest-TestExecuteWorkloadBalanceStrategy-563305466-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "036a4e356fb34effb6775ffe5bd9a19f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapddaf1c36-33", "ovs_interfaceid": "ddaf1c36-334b-432c-8d88-e0d7678f6c28", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.12/site-packages/nova/virt/libvirt/vif.py:721
Oct 09 16:32:12 compute-0 nova_compute[117331]: 2025-10-09 16:32:12.310 2 DEBUG nova.network.os_vif_util [None req-ab62bb55-f443-4585-a5d7-8bd85d66e3a5 1c793380a6e945d69dacfd07f1f156f8 6d67ac3076434a4582e5db1ca7d043ff - - default default] Converting VIF {"id": "ddaf1c36-334b-432c-8d88-e0d7678f6c28", "address": "fa:16:3e:1b:0c:10", "network": {"id": "54b37568-476a-40a0-b545-fe5401f85653", "bridge": "br-int", "label": "tempest-TestExecuteWorkloadBalanceStrategy-563305466-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "036a4e356fb34effb6775ffe5bd9a19f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapddaf1c36-33", "ovs_interfaceid": "ddaf1c36-334b-432c-8d88-e0d7678f6c28", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.12/site-packages/nova/network/os_vif_util.py:511
Oct 09 16:32:12 compute-0 nova_compute[117331]: 2025-10-09 16:32:12.311 2 DEBUG nova.network.os_vif_util [None req-ab62bb55-f443-4585-a5d7-8bd85d66e3a5 1c793380a6e945d69dacfd07f1f156f8 6d67ac3076434a4582e5db1ca7d043ff - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:1b:0c:10,bridge_name='br-int',has_traffic_filtering=True,id=ddaf1c36-334b-432c-8d88-e0d7678f6c28,network=Network(54b37568-476a-40a0-b545-fe5401f85653),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapddaf1c36-33') nova_to_osvif_vif /usr/lib/python3.12/site-packages/nova/network/os_vif_util.py:548
Oct 09 16:32:12 compute-0 nova_compute[117331]: 2025-10-09 16:32:12.311 2 DEBUG os_vif [None req-ab62bb55-f443-4585-a5d7-8bd85d66e3a5 1c793380a6e945d69dacfd07f1f156f8 6d67ac3076434a4582e5db1ca7d043ff - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:1b:0c:10,bridge_name='br-int',has_traffic_filtering=True,id=ddaf1c36-334b-432c-8d88-e0d7678f6c28,network=Network(54b37568-476a-40a0-b545-fe5401f85653),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapddaf1c36-33') plug /usr/lib/python3.12/site-packages/os_vif/__init__.py:76
Oct 09 16:32:12 compute-0 nova_compute[117331]: 2025-10-09 16:32:12.312 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 22 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 09 16:32:12 compute-0 nova_compute[117331]: 2025-10-09 16:32:12.312 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.12/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 09 16:32:12 compute-0 nova_compute[117331]: 2025-10-09 16:32:12.312 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.12/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 09 16:32:12 compute-0 nova_compute[117331]: 2025-10-09 16:32:12.313 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 22 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 09 16:32:12 compute-0 nova_compute[117331]: 2025-10-09 16:32:12.313 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbCreateCommand(_result=None, table=QoS, columns={'type': 'linux-noop', 'external_ids': {'id': '9537c773-2bdd-55e5-9d67-a53198e6e913', '_type': 'linux-noop'}}, row=False) do_commit /usr/lib/python3.12/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 09 16:32:12 compute-0 nova_compute[117331]: 2025-10-09 16:32:12.315 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 09 16:32:12 compute-0 nova_compute[117331]: 2025-10-09 16:32:12.316 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:248
Oct 09 16:32:12 compute-0 nova_compute[117331]: 2025-10-09 16:32:12.317 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 09 16:32:12 compute-0 nova_compute[117331]: 2025-10-09 16:32:12.319 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 22 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 09 16:32:12 compute-0 nova_compute[117331]: 2025-10-09 16:32:12.319 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapddaf1c36-33, may_exist=True, interface_attrs={}) do_commit /usr/lib/python3.12/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 09 16:32:12 compute-0 nova_compute[117331]: 2025-10-09 16:32:12.320 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Port, record=tapddaf1c36-33, col_values=(('qos', UUID('0c43c53c-e3ac-4495-8c7c-c708c0854d05')),), if_exists=True) do_commit /usr/lib/python3.12/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 09 16:32:12 compute-0 nova_compute[117331]: 2025-10-09 16:32:12.320 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=2): DbSetCommand(_result=None, table=Interface, record=tapddaf1c36-33, col_values=(('external_ids', {'iface-id': 'ddaf1c36-334b-432c-8d88-e0d7678f6c28', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:1b:0c:10', 'vm-uuid': '861b59b6-7073-418e-8e9a-9281ce63040c'}),), if_exists=True) do_commit /usr/lib/python3.12/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 09 16:32:12 compute-0 nova_compute[117331]: 2025-10-09 16:32:12.321 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 09 16:32:12 compute-0 NetworkManager[1028]: <info>  [1760027532.3228] manager: (tapddaf1c36-33): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/78)
Oct 09 16:32:12 compute-0 nova_compute[117331]: 2025-10-09 16:32:12.323 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:248
Oct 09 16:32:12 compute-0 nova_compute[117331]: 2025-10-09 16:32:12.329 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 09 16:32:12 compute-0 nova_compute[117331]: 2025-10-09 16:32:12.330 2 INFO os_vif [None req-ab62bb55-f443-4585-a5d7-8bd85d66e3a5 1c793380a6e945d69dacfd07f1f156f8 6d67ac3076434a4582e5db1ca7d043ff - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:1b:0c:10,bridge_name='br-int',has_traffic_filtering=True,id=ddaf1c36-334b-432c-8d88-e0d7678f6c28,network=Network(54b37568-476a-40a0-b545-fe5401f85653),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapddaf1c36-33')
Oct 09 16:32:12 compute-0 sshd-session[150228]: Invalid user dev from 36.224.53.32 port 41904
Oct 09 16:32:13 compute-0 sshd-session[150228]: pam_unix(sshd:auth): check pass; user unknown
Oct 09 16:32:13 compute-0 sshd-session[150228]: pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=36.224.53.32
Oct 09 16:32:13 compute-0 nova_compute[117331]: 2025-10-09 16:32:13.883 2 DEBUG nova.virt.libvirt.driver [None req-ab62bb55-f443-4585-a5d7-8bd85d66e3a5 1c793380a6e945d69dacfd07f1f156f8 6d67ac3076434a4582e5db1ca7d043ff - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.12/site-packages/nova/virt/libvirt/driver.py:13022
Oct 09 16:32:13 compute-0 nova_compute[117331]: 2025-10-09 16:32:13.883 2 DEBUG nova.virt.libvirt.driver [None req-ab62bb55-f443-4585-a5d7-8bd85d66e3a5 1c793380a6e945d69dacfd07f1f156f8 6d67ac3076434a4582e5db1ca7d043ff - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.12/site-packages/nova/virt/libvirt/driver.py:13022
Oct 09 16:32:13 compute-0 nova_compute[117331]: 2025-10-09 16:32:13.883 2 DEBUG nova.virt.libvirt.driver [None req-ab62bb55-f443-4585-a5d7-8bd85d66e3a5 1c793380a6e945d69dacfd07f1f156f8 6d67ac3076434a4582e5db1ca7d043ff - - default default] No VIF found with MAC fa:16:3e:1b:0c:10, not building metadata _build_interface_metadata /usr/lib/python3.12/site-packages/nova/virt/libvirt/driver.py:12998
Oct 09 16:32:13 compute-0 nova_compute[117331]: 2025-10-09 16:32:13.884 2 INFO nova.virt.libvirt.driver [None req-ab62bb55-f443-4585-a5d7-8bd85d66e3a5 1c793380a6e945d69dacfd07f1f156f8 6d67ac3076434a4582e5db1ca7d043ff - - default default] [instance: 861b59b6-7073-418e-8e9a-9281ce63040c] Using config drive
Oct 09 16:32:14 compute-0 nova_compute[117331]: 2025-10-09 16:32:14.394 2 WARNING neutronclient.v2_0.client [None req-ab62bb55-f443-4585-a5d7-8bd85d66e3a5 1c793380a6e945d69dacfd07f1f156f8 6d67ac3076434a4582e5db1ca7d043ff - - default default] The python binding code in neutronclient is deprecated in favor of OpenstackSDK, please use that as this will be removed in a future release.
Oct 09 16:32:15 compute-0 nova_compute[117331]: 2025-10-09 16:32:15.233 2 INFO nova.virt.libvirt.driver [None req-ab62bb55-f443-4585-a5d7-8bd85d66e3a5 1c793380a6e945d69dacfd07f1f156f8 6d67ac3076434a4582e5db1ca7d043ff - - default default] [instance: 861b59b6-7073-418e-8e9a-9281ce63040c] Creating config drive at /var/lib/nova/instances/861b59b6-7073-418e-8e9a-9281ce63040c/disk.config
Oct 09 16:32:15 compute-0 nova_compute[117331]: 2025-10-09 16:32:15.237 2 DEBUG oslo_concurrency.processutils [None req-ab62bb55-f443-4585-a5d7-8bd85d66e3a5 1c793380a6e945d69dacfd07f1f156f8 6d67ac3076434a4582e5db1ca7d043ff - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/861b59b6-7073-418e-8e9a-9281ce63040c/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 32.1.0-0.20251008143712.076498e.el10 -quiet -J -r -V config-2 /tmp/tmplxtg_8x4 execute /usr/lib/python3.12/site-packages/oslo_concurrency/processutils.py:349
Oct 09 16:32:15 compute-0 sshd-session[150228]: Failed password for invalid user dev from 36.224.53.32 port 41904 ssh2
Oct 09 16:32:15 compute-0 nova_compute[117331]: 2025-10-09 16:32:15.379 2 DEBUG oslo_concurrency.processutils [None req-ab62bb55-f443-4585-a5d7-8bd85d66e3a5 1c793380a6e945d69dacfd07f1f156f8 6d67ac3076434a4582e5db1ca7d043ff - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/861b59b6-7073-418e-8e9a-9281ce63040c/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 32.1.0-0.20251008143712.076498e.el10 -quiet -J -r -V config-2 /tmp/tmplxtg_8x4" returned: 0 in 0.142s execute /usr/lib/python3.12/site-packages/oslo_concurrency/processutils.py:372
Oct 09 16:32:15 compute-0 kernel: tapddaf1c36-33: entered promiscuous mode
Oct 09 16:32:15 compute-0 NetworkManager[1028]: <info>  [1760027535.4538] manager: (tapddaf1c36-33): new Tun device (/org/freedesktop/NetworkManager/Devices/79)
Oct 09 16:32:15 compute-0 ovn_controller[19752]: 2025-10-09T16:32:15Z|00208|binding|INFO|Claiming lport ddaf1c36-334b-432c-8d88-e0d7678f6c28 for this chassis.
Oct 09 16:32:15 compute-0 ovn_controller[19752]: 2025-10-09T16:32:15Z|00209|binding|INFO|ddaf1c36-334b-432c-8d88-e0d7678f6c28: Claiming fa:16:3e:1b:0c:10 10.100.0.6
Oct 09 16:32:15 compute-0 nova_compute[117331]: 2025-10-09 16:32:15.457 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 09 16:32:15 compute-0 ovn_metadata_agent[28608]: 2025-10-09 16:32:15.473 28613 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:1b:0c:10 10.100.0.6'], port_security=['fa:16:3e:1b:0c:10 10.100.0.6'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.6/28', 'neutron:device_id': '861b59b6-7073-418e-8e9a-9281ce63040c', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-54b37568-476a-40a0-b545-fe5401f85653', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '6d67ac3076434a4582e5db1ca7d043ff', 'neutron:revision_number': '4', 'neutron:security_group_ids': '94abd997-3903-47b3-abe3-e283a4232c96', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=dc641581-8c4b-4cef-982e-bb0cf11a52ba, chassis=[<ovs.db.idl.Row object at 0x7f37b2576840>], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f37b2576840>], logical_port=ddaf1c36-334b-432c-8d88-e0d7678f6c28) old=Port_Binding(chassis=[]) matches /usr/lib/python3.12/site-packages/ovsdbapp/backend/ovs_idl/event.py:55
Oct 09 16:32:15 compute-0 ovn_controller[19752]: 2025-10-09T16:32:15Z|00210|binding|INFO|Setting lport ddaf1c36-334b-432c-8d88-e0d7678f6c28 ovn-installed in OVS
Oct 09 16:32:15 compute-0 ovn_controller[19752]: 2025-10-09T16:32:15Z|00211|binding|INFO|Setting lport ddaf1c36-334b-432c-8d88-e0d7678f6c28 up in Southbound
Oct 09 16:32:15 compute-0 ovn_metadata_agent[28608]: 2025-10-09 16:32:15.474 28613 INFO neutron.agent.ovn.metadata.agent [-] Port ddaf1c36-334b-432c-8d88-e0d7678f6c28 in datapath 54b37568-476a-40a0-b545-fe5401f85653 bound to our chassis
Oct 09 16:32:15 compute-0 ovn_metadata_agent[28608]: 2025-10-09 16:32:15.475 28613 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 54b37568-476a-40a0-b545-fe5401f85653
Oct 09 16:32:15 compute-0 nova_compute[117331]: 2025-10-09 16:32:15.474 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 09 16:32:15 compute-0 nova_compute[117331]: 2025-10-09 16:32:15.479 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 09 16:32:15 compute-0 systemd-udevd[150264]: Network interface NamePolicy= disabled on kernel command line.
Oct 09 16:32:15 compute-0 ovn_metadata_agent[28608]: 2025-10-09 16:32:15.496 139687 DEBUG oslo.privsep.daemon [-] privsep: reply[d220d638-377c-4ced-adce-3fd066072479]: (4, True) _call_back /usr/lib/python3.12/site-packages/oslo_privsep/daemon.py:515
Oct 09 16:32:15 compute-0 NetworkManager[1028]: <info>  [1760027535.5026] device (tapddaf1c36-33): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Oct 09 16:32:15 compute-0 NetworkManager[1028]: <info>  [1760027535.5033] device (tapddaf1c36-33): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Oct 09 16:32:15 compute-0 systemd-machined[77487]: New machine qemu-17-instance-00000017.
Oct 09 16:32:15 compute-0 systemd[1]: Started Virtual Machine qemu-17-instance-00000017.
Oct 09 16:32:15 compute-0 ovn_metadata_agent[28608]: 2025-10-09 16:32:15.530 141048 DEBUG oslo.privsep.daemon [-] privsep: reply[f79ae0be-f6ce-4200-b65c-cb38767ded34]: (4, None) _call_back /usr/lib/python3.12/site-packages/oslo_privsep/daemon.py:515
Oct 09 16:32:15 compute-0 ovn_metadata_agent[28608]: 2025-10-09 16:32:15.532 141048 DEBUG oslo.privsep.daemon [-] privsep: reply[2736b460-dd94-4b09-8e41-7ae030de33fe]: (4, None) _call_back /usr/lib/python3.12/site-packages/oslo_privsep/daemon.py:515
Oct 09 16:32:15 compute-0 ovn_metadata_agent[28608]: 2025-10-09 16:32:15.558 141048 DEBUG oslo.privsep.daemon [-] privsep: reply[66c5afbc-fa37-4528-96af-45401b7e7e14]: (4, None) _call_back /usr/lib/python3.12/site-packages/oslo_privsep/daemon.py:515
Oct 09 16:32:15 compute-0 ovn_metadata_agent[28608]: 2025-10-09 16:32:15.578 139687 DEBUG oslo.privsep.daemon [-] privsep: reply[cdee4f00-568a-41f3-8f3c-70d77cbb29fa]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap54b37568-41'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:50:3a:49'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 10, 'tx_packets': 5, 'rx_bytes': 916, 'tx_bytes': 354, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 10, 'tx_packets': 5, 'rx_bytes': 916, 'tx_bytes': 354, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 58], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 245852, 'reachable_time': 39101, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 8, 'inoctets': 720, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 8, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 720, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 8, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 150278, 'error': None, 'target': 'ovnmeta-54b37568-476a-40a0-b545-fe5401f85653', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.12/site-packages/oslo_privsep/daemon.py:515
Oct 09 16:32:15 compute-0 ovn_metadata_agent[28608]: 2025-10-09 16:32:15.594 139687 DEBUG oslo.privsep.daemon [-] privsep: reply[7c8d15a7-ea40-450d-a3c9-4722debf5bd1]: (4, ({'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tap54b37568-41'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 245863, 'tstamp': 245863}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 150280, 'error': None, 'target': 'ovnmeta-54b37568-476a-40a0-b545-fe5401f85653', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tap54b37568-41'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 245867, 'tstamp': 245867}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 150280, 'error': None, 'target': 'ovnmeta-54b37568-476a-40a0-b545-fe5401f85653', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.12/site-packages/oslo_privsep/daemon.py:515
Oct 09 16:32:15 compute-0 ovn_metadata_agent[28608]: 2025-10-09 16:32:15.596 28613 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap54b37568-40, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.12/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 09 16:32:15 compute-0 ovn_metadata_agent[28608]: 2025-10-09 16:32:15.598 28613 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap54b37568-40, may_exist=True, interface_attrs={}) do_commit /usr/lib/python3.12/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 09 16:32:15 compute-0 ovn_metadata_agent[28608]: 2025-10-09 16:32:15.598 28613 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.12/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 09 16:32:15 compute-0 nova_compute[117331]: 2025-10-09 16:32:15.598 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 09 16:32:15 compute-0 ovn_metadata_agent[28608]: 2025-10-09 16:32:15.598 28613 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap54b37568-40, col_values=(('external_ids', {'iface-id': '01dee844-02ac-4caa-80f7-8a16019cbd9d'}),), if_exists=True) do_commit /usr/lib/python3.12/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 09 16:32:15 compute-0 ovn_metadata_agent[28608]: 2025-10-09 16:32:15.599 28613 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.12/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 09 16:32:15 compute-0 ovn_metadata_agent[28608]: 2025-10-09 16:32:15.600 139687 DEBUG oslo.privsep.daemon [-] privsep: reply[d3c0c1a2-7506-4a66-ae5a-c33df6ed9684]: (4, '\nglobal\n    log         /dev/log local0 debug\n    log-tag     haproxy-metadata-proxy-54b37568-476a-40a0-b545-fe5401f85653\n    user        root\n    group       root\n    maxconn     1024\n    pidfile     /var/lib/neutron/external/pids/54b37568-476a-40a0-b545-fe5401f85653.pid.haproxy\n    daemon\n\ndefaults\n    log global\n    mode http\n    option httplog\n    option dontlognull\n    option http-server-close\n    option forwardfor\n    retries                 3\n    timeout http-request    30s\n    timeout connect         30s\n    timeout client          32s\n    timeout server          32s\n    timeout http-keep-alive 30s\n\nlisten listener\n    bind 169.254.169.254:80\n    \n    server metadata /var/lib/neutron/metadata_proxy\n\n    http-request add-header X-OVN-Network-ID 54b37568-476a-40a0-b545-fe5401f85653\n') _call_back /usr/lib/python3.12/site-packages/oslo_privsep/daemon.py:515
Oct 09 16:32:15 compute-0 nova_compute[117331]: 2025-10-09 16:32:15.735 2 DEBUG nova.compute.manager [req-b9dce01c-3b78-4591-bc95-29465fe2d8f0 req-cf1f26bf-2736-49ee-82b1-2b8be05447f6 ee15961ad2be4bada6c106ac4f69f422 c076c42271004e7aa86e84416e33f826 - - default default] [instance: 861b59b6-7073-418e-8e9a-9281ce63040c] Received event network-vif-plugged-ddaf1c36-334b-432c-8d88-e0d7678f6c28 external_instance_event /usr/lib/python3.12/site-packages/nova/compute/manager.py:11812
Oct 09 16:32:15 compute-0 nova_compute[117331]: 2025-10-09 16:32:15.736 2 DEBUG oslo_concurrency.lockutils [req-b9dce01c-3b78-4591-bc95-29465fe2d8f0 req-cf1f26bf-2736-49ee-82b1-2b8be05447f6 ee15961ad2be4bada6c106ac4f69f422 c076c42271004e7aa86e84416e33f826 - - default default] Acquiring lock "861b59b6-7073-418e-8e9a-9281ce63040c-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:405
Oct 09 16:32:15 compute-0 nova_compute[117331]: 2025-10-09 16:32:15.737 2 DEBUG oslo_concurrency.lockutils [req-b9dce01c-3b78-4591-bc95-29465fe2d8f0 req-cf1f26bf-2736-49ee-82b1-2b8be05447f6 ee15961ad2be4bada6c106ac4f69f422 c076c42271004e7aa86e84416e33f826 - - default default] Lock "861b59b6-7073-418e-8e9a-9281ce63040c-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:410
Oct 09 16:32:15 compute-0 nova_compute[117331]: 2025-10-09 16:32:15.737 2 DEBUG oslo_concurrency.lockutils [req-b9dce01c-3b78-4591-bc95-29465fe2d8f0 req-cf1f26bf-2736-49ee-82b1-2b8be05447f6 ee15961ad2be4bada6c106ac4f69f422 c076c42271004e7aa86e84416e33f826 - - default default] Lock "861b59b6-7073-418e-8e9a-9281ce63040c-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:424
Oct 09 16:32:15 compute-0 nova_compute[117331]: 2025-10-09 16:32:15.738 2 DEBUG nova.compute.manager [req-b9dce01c-3b78-4591-bc95-29465fe2d8f0 req-cf1f26bf-2736-49ee-82b1-2b8be05447f6 ee15961ad2be4bada6c106ac4f69f422 c076c42271004e7aa86e84416e33f826 - - default default] [instance: 861b59b6-7073-418e-8e9a-9281ce63040c] Processing event network-vif-plugged-ddaf1c36-334b-432c-8d88-e0d7678f6c28 _process_instance_event /usr/lib/python3.12/site-packages/nova/compute/manager.py:11572
Oct 09 16:32:16 compute-0 nova_compute[117331]: 2025-10-09 16:32:16.200 2 DEBUG nova.compute.manager [None req-ab62bb55-f443-4585-a5d7-8bd85d66e3a5 1c793380a6e945d69dacfd07f1f156f8 6d67ac3076434a4582e5db1ca7d043ff - - default default] [instance: 861b59b6-7073-418e-8e9a-9281ce63040c] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.12/site-packages/nova/compute/manager.py:602
Oct 09 16:32:16 compute-0 nova_compute[117331]: 2025-10-09 16:32:16.204 2 DEBUG nova.virt.libvirt.driver [None req-ab62bb55-f443-4585-a5d7-8bd85d66e3a5 1c793380a6e945d69dacfd07f1f156f8 6d67ac3076434a4582e5db1ca7d043ff - - default default] [instance: 861b59b6-7073-418e-8e9a-9281ce63040c] Guest created on hypervisor spawn /usr/lib/python3.12/site-packages/nova/virt/libvirt/driver.py:4816
Oct 09 16:32:16 compute-0 nova_compute[117331]: 2025-10-09 16:32:16.207 2 INFO nova.virt.libvirt.driver [-] [instance: 861b59b6-7073-418e-8e9a-9281ce63040c] Instance spawned successfully.
Oct 09 16:32:16 compute-0 nova_compute[117331]: 2025-10-09 16:32:16.207 2 DEBUG nova.virt.libvirt.driver [None req-ab62bb55-f443-4585-a5d7-8bd85d66e3a5 1c793380a6e945d69dacfd07f1f156f8 6d67ac3076434a4582e5db1ca7d043ff - - default default] [instance: 861b59b6-7073-418e-8e9a-9281ce63040c] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.12/site-packages/nova/virt/libvirt/driver.py:985
Oct 09 16:32:16 compute-0 sshd-session[150228]: Connection closed by invalid user dev 36.224.53.32 port 41904 [preauth]
Oct 09 16:32:16 compute-0 nova_compute[117331]: 2025-10-09 16:32:16.719 2 DEBUG nova.virt.libvirt.driver [None req-ab62bb55-f443-4585-a5d7-8bd85d66e3a5 1c793380a6e945d69dacfd07f1f156f8 6d67ac3076434a4582e5db1ca7d043ff - - default default] [instance: 861b59b6-7073-418e-8e9a-9281ce63040c] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.12/site-packages/nova/virt/libvirt/driver.py:1014
Oct 09 16:32:16 compute-0 nova_compute[117331]: 2025-10-09 16:32:16.720 2 DEBUG nova.virt.libvirt.driver [None req-ab62bb55-f443-4585-a5d7-8bd85d66e3a5 1c793380a6e945d69dacfd07f1f156f8 6d67ac3076434a4582e5db1ca7d043ff - - default default] [instance: 861b59b6-7073-418e-8e9a-9281ce63040c] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.12/site-packages/nova/virt/libvirt/driver.py:1014
Oct 09 16:32:16 compute-0 nova_compute[117331]: 2025-10-09 16:32:16.721 2 DEBUG nova.virt.libvirt.driver [None req-ab62bb55-f443-4585-a5d7-8bd85d66e3a5 1c793380a6e945d69dacfd07f1f156f8 6d67ac3076434a4582e5db1ca7d043ff - - default default] [instance: 861b59b6-7073-418e-8e9a-9281ce63040c] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.12/site-packages/nova/virt/libvirt/driver.py:1014
Oct 09 16:32:16 compute-0 nova_compute[117331]: 2025-10-09 16:32:16.722 2 DEBUG nova.virt.libvirt.driver [None req-ab62bb55-f443-4585-a5d7-8bd85d66e3a5 1c793380a6e945d69dacfd07f1f156f8 6d67ac3076434a4582e5db1ca7d043ff - - default default] [instance: 861b59b6-7073-418e-8e9a-9281ce63040c] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.12/site-packages/nova/virt/libvirt/driver.py:1014
Oct 09 16:32:16 compute-0 nova_compute[117331]: 2025-10-09 16:32:16.722 2 DEBUG nova.virt.libvirt.driver [None req-ab62bb55-f443-4585-a5d7-8bd85d66e3a5 1c793380a6e945d69dacfd07f1f156f8 6d67ac3076434a4582e5db1ca7d043ff - - default default] [instance: 861b59b6-7073-418e-8e9a-9281ce63040c] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.12/site-packages/nova/virt/libvirt/driver.py:1014
Oct 09 16:32:16 compute-0 nova_compute[117331]: 2025-10-09 16:32:16.723 2 DEBUG nova.virt.libvirt.driver [None req-ab62bb55-f443-4585-a5d7-8bd85d66e3a5 1c793380a6e945d69dacfd07f1f156f8 6d67ac3076434a4582e5db1ca7d043ff - - default default] [instance: 861b59b6-7073-418e-8e9a-9281ce63040c] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.12/site-packages/nova/virt/libvirt/driver.py:1014
Oct 09 16:32:17 compute-0 nova_compute[117331]: 2025-10-09 16:32:17.129 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 09 16:32:17 compute-0 nova_compute[117331]: 2025-10-09 16:32:17.233 2 INFO nova.compute.manager [None req-ab62bb55-f443-4585-a5d7-8bd85d66e3a5 1c793380a6e945d69dacfd07f1f156f8 6d67ac3076434a4582e5db1ca7d043ff - - default default] [instance: 861b59b6-7073-418e-8e9a-9281ce63040c] Took 9.57 seconds to spawn the instance on the hypervisor.
Oct 09 16:32:17 compute-0 nova_compute[117331]: 2025-10-09 16:32:17.234 2 DEBUG nova.compute.manager [None req-ab62bb55-f443-4585-a5d7-8bd85d66e3a5 1c793380a6e945d69dacfd07f1f156f8 6d67ac3076434a4582e5db1ca7d043ff - - default default] [instance: 861b59b6-7073-418e-8e9a-9281ce63040c] Checking state _get_power_state /usr/lib/python3.12/site-packages/nova/compute/manager.py:1825
Oct 09 16:32:17 compute-0 nova_compute[117331]: 2025-10-09 16:32:17.321 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 09 16:32:17 compute-0 nova_compute[117331]: 2025-10-09 16:32:17.764 2 INFO nova.compute.manager [None req-ab62bb55-f443-4585-a5d7-8bd85d66e3a5 1c793380a6e945d69dacfd07f1f156f8 6d67ac3076434a4582e5db1ca7d043ff - - default default] [instance: 861b59b6-7073-418e-8e9a-9281ce63040c] Took 14.78 seconds to build instance.
Oct 09 16:32:17 compute-0 nova_compute[117331]: 2025-10-09 16:32:17.786 2 DEBUG nova.compute.manager [req-cb3f785d-1e41-49cb-8632-1f2821df1943 req-c18b02d3-3ef9-4ecd-8209-5a0fab8205f3 ee15961ad2be4bada6c106ac4f69f422 c076c42271004e7aa86e84416e33f826 - - default default] [instance: 861b59b6-7073-418e-8e9a-9281ce63040c] Received event network-vif-plugged-ddaf1c36-334b-432c-8d88-e0d7678f6c28 external_instance_event /usr/lib/python3.12/site-packages/nova/compute/manager.py:11812
Oct 09 16:32:17 compute-0 nova_compute[117331]: 2025-10-09 16:32:17.786 2 DEBUG oslo_concurrency.lockutils [req-cb3f785d-1e41-49cb-8632-1f2821df1943 req-c18b02d3-3ef9-4ecd-8209-5a0fab8205f3 ee15961ad2be4bada6c106ac4f69f422 c076c42271004e7aa86e84416e33f826 - - default default] Acquiring lock "861b59b6-7073-418e-8e9a-9281ce63040c-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:405
Oct 09 16:32:17 compute-0 nova_compute[117331]: 2025-10-09 16:32:17.787 2 DEBUG oslo_concurrency.lockutils [req-cb3f785d-1e41-49cb-8632-1f2821df1943 req-c18b02d3-3ef9-4ecd-8209-5a0fab8205f3 ee15961ad2be4bada6c106ac4f69f422 c076c42271004e7aa86e84416e33f826 - - default default] Lock "861b59b6-7073-418e-8e9a-9281ce63040c-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:410
Oct 09 16:32:17 compute-0 nova_compute[117331]: 2025-10-09 16:32:17.787 2 DEBUG oslo_concurrency.lockutils [req-cb3f785d-1e41-49cb-8632-1f2821df1943 req-c18b02d3-3ef9-4ecd-8209-5a0fab8205f3 ee15961ad2be4bada6c106ac4f69f422 c076c42271004e7aa86e84416e33f826 - - default default] Lock "861b59b6-7073-418e-8e9a-9281ce63040c-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:424
Oct 09 16:32:17 compute-0 nova_compute[117331]: 2025-10-09 16:32:17.787 2 DEBUG nova.compute.manager [req-cb3f785d-1e41-49cb-8632-1f2821df1943 req-c18b02d3-3ef9-4ecd-8209-5a0fab8205f3 ee15961ad2be4bada6c106ac4f69f422 c076c42271004e7aa86e84416e33f826 - - default default] [instance: 861b59b6-7073-418e-8e9a-9281ce63040c] No waiting events found dispatching network-vif-plugged-ddaf1c36-334b-432c-8d88-e0d7678f6c28 pop_instance_event /usr/lib/python3.12/site-packages/nova/compute/manager.py:344
Oct 09 16:32:17 compute-0 nova_compute[117331]: 2025-10-09 16:32:17.787 2 WARNING nova.compute.manager [req-cb3f785d-1e41-49cb-8632-1f2821df1943 req-c18b02d3-3ef9-4ecd-8209-5a0fab8205f3 ee15961ad2be4bada6c106ac4f69f422 c076c42271004e7aa86e84416e33f826 - - default default] [instance: 861b59b6-7073-418e-8e9a-9281ce63040c] Received unexpected event network-vif-plugged-ddaf1c36-334b-432c-8d88-e0d7678f6c28 for instance with vm_state active and task_state None.
Oct 09 16:32:18 compute-0 nova_compute[117331]: 2025-10-09 16:32:18.270 2 DEBUG oslo_concurrency.lockutils [None req-ab62bb55-f443-4585-a5d7-8bd85d66e3a5 1c793380a6e945d69dacfd07f1f156f8 6d67ac3076434a4582e5db1ca7d043ff - - default default] Lock "861b59b6-7073-418e-8e9a-9281ce63040c" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 16.299s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:424
Oct 09 16:32:19 compute-0 sshd-session[150246]: Invalid user kubernetes from 36.224.53.32 port 50140
Oct 09 16:32:19 compute-0 podman[150289]: 2025-10-09 16:32:19.984136238 +0000 UTC m=+0.076979065 container health_status 270830b5167cafbab735d8118980f26987e673cd0fa464dd2202394830bcca66 (image=38.102.83.66:5001/podified-master-centos10/openstack-multipathd:watcher_latest, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=multipathd, io.buildah.version=1.41.4, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251007, tcib_build_tag=watcher_latest, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': '38.102.83.66:5001/podified-master-centos10/openstack-multipathd:watcher_latest', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 10 Base Image)
Oct 09 16:32:20 compute-0 sshd-session[150246]: pam_unix(sshd:auth): check pass; user unknown
Oct 09 16:32:20 compute-0 sshd-session[150246]: pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=36.224.53.32
Oct 09 16:32:22 compute-0 nova_compute[117331]: 2025-10-09 16:32:22.133 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 09 16:32:22 compute-0 nova_compute[117331]: 2025-10-09 16:32:22.375 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 09 16:32:23 compute-0 sshd-session[150246]: Failed password for invalid user kubernetes from 36.224.53.32 port 50140 ssh2
Oct 09 16:32:23 compute-0 podman[150311]: 2025-10-09 16:32:23.822327512 +0000 UTC m=+0.055737030 container health_status 86aa93e887899e54d383b0fc17fb3c5637680d1d8e146a130a557c837f0260fe (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible)
Oct 09 16:32:23 compute-0 sshd-session[150246]: Connection closed by invalid user kubernetes 36.224.53.32 port 50140 [preauth]
Oct 09 16:32:24 compute-0 nova_compute[117331]: 2025-10-09 16:32:24.307 2 DEBUG oslo_service.periodic_task [None req-92a13bed-c081-4c7b-87f3-bafebefb79f2 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.12/site-packages/oslo_service/periodic_task.py:210
Oct 09 16:32:25 compute-0 sshd-session[150309]: Invalid user guest from 36.224.53.32 port 58410
Oct 09 16:32:26 compute-0 ovn_controller[19752]: 2025-10-09T16:32:26Z|00027|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:1b:0c:10 10.100.0.6
Oct 09 16:32:26 compute-0 ovn_controller[19752]: 2025-10-09T16:32:26Z|00028|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:1b:0c:10 10.100.0.6
Oct 09 16:32:27 compute-0 nova_compute[117331]: 2025-10-09 16:32:27.133 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 09 16:32:27 compute-0 nova_compute[117331]: 2025-10-09 16:32:27.306 2 DEBUG oslo_service.periodic_task [None req-92a13bed-c081-4c7b-87f3-bafebefb79f2 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.12/site-packages/oslo_service/periodic_task.py:210
Oct 09 16:32:27 compute-0 nova_compute[117331]: 2025-10-09 16:32:27.378 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 09 16:32:27 compute-0 sshd-session[150309]: pam_unix(sshd:auth): check pass; user unknown
Oct 09 16:32:27 compute-0 sshd-session[150309]: pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=36.224.53.32
Oct 09 16:32:28 compute-0 nova_compute[117331]: 2025-10-09 16:32:28.306 2 DEBUG oslo_service.periodic_task [None req-92a13bed-c081-4c7b-87f3-bafebefb79f2 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.12/site-packages/oslo_service/periodic_task.py:210
Oct 09 16:32:28 compute-0 nova_compute[117331]: 2025-10-09 16:32:28.306 2 DEBUG nova.compute.manager [None req-92a13bed-c081-4c7b-87f3-bafebefb79f2 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.12/site-packages/nova/compute/manager.py:11228
Oct 09 16:32:29 compute-0 sshd-session[150309]: Failed password for invalid user guest from 36.224.53.32 port 58410 ssh2
Oct 09 16:32:29 compute-0 nova_compute[117331]: 2025-10-09 16:32:29.305 2 DEBUG oslo_service.periodic_task [None req-92a13bed-c081-4c7b-87f3-bafebefb79f2 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.12/site-packages/oslo_service/periodic_task.py:210
Oct 09 16:32:29 compute-0 sshd-session[150309]: Connection closed by invalid user guest 36.224.53.32 port 58410 [preauth]
Oct 09 16:32:29 compute-0 podman[127775]: time="2025-10-09T16:32:29Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Oct 09 16:32:29 compute-0 podman[127775]: @ - - [09/Oct/2025:16:32:29 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 20748 "" "Go-http-client/1.1"
Oct 09 16:32:29 compute-0 podman[127775]: @ - - [09/Oct/2025:16:32:29 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 3488 "" "Go-http-client/1.1"
Oct 09 16:32:29 compute-0 nova_compute[117331]: 2025-10-09 16:32:29.822 2 DEBUG oslo_concurrency.lockutils [None req-92a13bed-c081-4c7b-87f3-bafebefb79f2 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:405
Oct 09 16:32:29 compute-0 nova_compute[117331]: 2025-10-09 16:32:29.822 2 DEBUG oslo_concurrency.lockutils [None req-92a13bed-c081-4c7b-87f3-bafebefb79f2 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:410
Oct 09 16:32:29 compute-0 nova_compute[117331]: 2025-10-09 16:32:29.823 2 DEBUG oslo_concurrency.lockutils [None req-92a13bed-c081-4c7b-87f3-bafebefb79f2 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:424
Oct 09 16:32:29 compute-0 nova_compute[117331]: 2025-10-09 16:32:29.823 2 DEBUG nova.compute.resource_tracker [None req-92a13bed-c081-4c7b-87f3-bafebefb79f2 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.12/site-packages/nova/compute/resource_tracker.py:937
Oct 09 16:32:29 compute-0 podman[150346]: 2025-10-09 16:32:29.832553926 +0000 UTC m=+0.056546327 container health_status 8227728d23f374cb9528be6ddf01dff38d8c792912a17b0d137932a18390b820 (image=38.102.83.66:5001/podified-master-centos10/openstack-iscsid:watcher_latest, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=watcher_latest, config_id=iscsid, container_name=iscsid, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': '38.102.83.66:5001/podified-master-centos10/openstack-iscsid:watcher_latest', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 10 Base Image, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.41.4, org.label-schema.build-date=20251007, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Oct 09 16:32:29 compute-0 podman[150345]: 2025-10-09 16:32:29.836081708 +0000 UTC m=+0.060361577 container health_status 0e966c7a283689daf23c5db5017f23f685ee836319240188a9f70ddd246563c5 (image=38.102.83.66:5001/podified-master-centos10/openstack-neutron-metadata-agent-ovn:watcher_latest, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.build-date=20251007, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': '38.102.83.66:5001/podified-master-centos10/openstack-neutron-metadata-agent-ovn:watcher_latest', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 10 Base Image, config_id=ovn_metadata_agent, org.label-schema.license=GPLv2, io.buildah.version=1.41.4, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=watcher_latest, container_name=ovn_metadata_agent)
Oct 09 16:32:30 compute-0 nova_compute[117331]: 2025-10-09 16:32:30.884 2 DEBUG oslo_concurrency.processutils [None req-92a13bed-c081-4c7b-87f3-bafebefb79f2 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/4c51c9df-777a-497d-bf84-a8001b45a4f0/disk --force-share --output=json execute /usr/lib/python3.12/site-packages/oslo_concurrency/processutils.py:349
Oct 09 16:32:30 compute-0 nova_compute[117331]: 2025-10-09 16:32:30.976 2 DEBUG oslo_concurrency.processutils [None req-92a13bed-c081-4c7b-87f3-bafebefb79f2 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/4c51c9df-777a-497d-bf84-a8001b45a4f0/disk --force-share --output=json" returned: 0 in 0.092s execute /usr/lib/python3.12/site-packages/oslo_concurrency/processutils.py:372
Oct 09 16:32:30 compute-0 nova_compute[117331]: 2025-10-09 16:32:30.977 2 DEBUG oslo_concurrency.processutils [None req-92a13bed-c081-4c7b-87f3-bafebefb79f2 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/4c51c9df-777a-497d-bf84-a8001b45a4f0/disk --force-share --output=json execute /usr/lib/python3.12/site-packages/oslo_concurrency/processutils.py:349
Oct 09 16:32:31 compute-0 nova_compute[117331]: 2025-10-09 16:32:31.046 2 DEBUG oslo_concurrency.processutils [None req-92a13bed-c081-4c7b-87f3-bafebefb79f2 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/4c51c9df-777a-497d-bf84-a8001b45a4f0/disk --force-share --output=json" returned: 0 in 0.069s execute /usr/lib/python3.12/site-packages/oslo_concurrency/processutils.py:372
Oct 09 16:32:31 compute-0 nova_compute[117331]: 2025-10-09 16:32:31.052 2 DEBUG oslo_concurrency.processutils [None req-92a13bed-c081-4c7b-87f3-bafebefb79f2 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/861b59b6-7073-418e-8e9a-9281ce63040c/disk --force-share --output=json execute /usr/lib/python3.12/site-packages/oslo_concurrency/processutils.py:349
Oct 09 16:32:31 compute-0 nova_compute[117331]: 2025-10-09 16:32:31.118 2 DEBUG oslo_concurrency.processutils [None req-92a13bed-c081-4c7b-87f3-bafebefb79f2 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/861b59b6-7073-418e-8e9a-9281ce63040c/disk --force-share --output=json" returned: 0 in 0.065s execute /usr/lib/python3.12/site-packages/oslo_concurrency/processutils.py:372
Oct 09 16:32:31 compute-0 nova_compute[117331]: 2025-10-09 16:32:31.119 2 DEBUG oslo_concurrency.processutils [None req-92a13bed-c081-4c7b-87f3-bafebefb79f2 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/861b59b6-7073-418e-8e9a-9281ce63040c/disk --force-share --output=json execute /usr/lib/python3.12/site-packages/oslo_concurrency/processutils.py:349
Oct 09 16:32:31 compute-0 nova_compute[117331]: 2025-10-09 16:32:31.173 2 DEBUG oslo_concurrency.processutils [None req-92a13bed-c081-4c7b-87f3-bafebefb79f2 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/861b59b6-7073-418e-8e9a-9281ce63040c/disk --force-share --output=json" returned: 0 in 0.054s execute /usr/lib/python3.12/site-packages/oslo_concurrency/processutils.py:372
Oct 09 16:32:31 compute-0 nova_compute[117331]: 2025-10-09 16:32:31.324 2 WARNING nova.virt.libvirt.driver [None req-92a13bed-c081-4c7b-87f3-bafebefb79f2 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Oct 09 16:32:31 compute-0 nova_compute[117331]: 2025-10-09 16:32:31.325 2 DEBUG oslo_concurrency.processutils [None req-92a13bed-c081-4c7b-87f3-bafebefb79f2 - - - - - -] Running cmd (subprocess): env LANG=C uptime execute /usr/lib/python3.12/site-packages/oslo_concurrency/processutils.py:349
Oct 09 16:32:31 compute-0 nova_compute[117331]: 2025-10-09 16:32:31.342 2 DEBUG oslo_concurrency.processutils [None req-92a13bed-c081-4c7b-87f3-bafebefb79f2 - - - - - -] CMD "env LANG=C uptime" returned: 0 in 0.017s execute /usr/lib/python3.12/site-packages/oslo_concurrency/processutils.py:372
Oct 09 16:32:31 compute-0 nova_compute[117331]: 2025-10-09 16:32:31.343 2 DEBUG nova.compute.resource_tracker [None req-92a13bed-c081-4c7b-87f3-bafebefb79f2 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=5783MB free_disk=73.19897079467773GB free_vcpus=6 pci_devices=[{"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.12/site-packages/nova/compute/resource_tracker.py:1136
Oct 09 16:32:31 compute-0 nova_compute[117331]: 2025-10-09 16:32:31.343 2 DEBUG oslo_concurrency.lockutils [None req-92a13bed-c081-4c7b-87f3-bafebefb79f2 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:405
Oct 09 16:32:31 compute-0 nova_compute[117331]: 2025-10-09 16:32:31.343 2 DEBUG oslo_concurrency.lockutils [None req-92a13bed-c081-4c7b-87f3-bafebefb79f2 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:410
Oct 09 16:32:31 compute-0 openstack_network_exporter[129925]: ERROR   16:32:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Oct 09 16:32:31 compute-0 openstack_network_exporter[129925]: ERROR   16:32:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Oct 09 16:32:31 compute-0 openstack_network_exporter[129925]: ERROR   16:32:31 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Oct 09 16:32:31 compute-0 openstack_network_exporter[129925]: ERROR   16:32:31 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Oct 09 16:32:31 compute-0 openstack_network_exporter[129925]: 
Oct 09 16:32:31 compute-0 openstack_network_exporter[129925]: ERROR   16:32:31 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Oct 09 16:32:31 compute-0 openstack_network_exporter[129925]: 
Oct 09 16:32:32 compute-0 nova_compute[117331]: 2025-10-09 16:32:32.137 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 09 16:32:32 compute-0 nova_compute[117331]: 2025-10-09 16:32:32.362 2 DEBUG nova.virt.libvirt.driver [None req-a98bf568-4bf7-459f-a8c7-be928ce9dc17 005258eefa5345409efe13f6a1de22ab c076c42271004e7aa86e84416e33f826 - - default default] [instance: 4c51c9df-777a-497d-bf84-a8001b45a4f0] Check if temp file /var/lib/nova/instances/tmp7d726p3e exists to indicate shared storage is being used for migration. Exists? False _check_shared_storage_test_file /usr/lib/python3.12/site-packages/nova/virt/libvirt/driver.py:10968
Oct 09 16:32:32 compute-0 nova_compute[117331]: 2025-10-09 16:32:32.377 2 DEBUG nova.compute.manager [None req-a98bf568-4bf7-459f-a8c7-be928ce9dc17 005258eefa5345409efe13f6a1de22ab c076c42271004e7aa86e84416e33f826 - - default default] source check data is LibvirtLiveMigrateData(bdms=<?>,block_migration=True,disk_available_mb=74752,disk_over_commit=<?>,dst_cpu_shared_set_info=set([]),dst_numa_info=<?>,dst_supports_mdev_live_migration=True,dst_supports_numa_live_migration=<?>,dst_wants_file_backed_memory=False,file_backed_memory_discard=<?>,filename='tmp7d726p3e',graphics_listen_addr_spice=127.0.0.1,graphics_listen_addr_vnc=::,image_type='qcow2',instance_relative_path='4c51c9df-777a-497d-bf84-a8001b45a4f0',is_shared_block_storage=False,is_shared_instance_path=False,is_volume_backed=False,migration=<?>,old_vol_attachment_ids=<?>,pci_dev_map_src_dst=<?>,serial_listen_addr=None,serial_listen_ports=<?>,source_mdev_types=<?>,src_supports_native_luks=<?>,src_supports_numa_live_migration=<?>,supported_perf_events=<?>,target_connect_addr=<?>,target_mdevs=<?>,vifs=[VIFMigrateData],wait_for_vif_plugged=<?>) check_can_live_migrate_source /usr/lib/python3.12/site-packages/nova/compute/manager.py:9294
Oct 09 16:32:32 compute-0 nova_compute[117331]: 2025-10-09 16:32:32.384 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 09 16:32:32 compute-0 nova_compute[117331]: 2025-10-09 16:32:32.405 2 DEBUG nova.compute.resource_tracker [None req-92a13bed-c081-4c7b-87f3-bafebefb79f2 - - - - - -] Instance 861b59b6-7073-418e-8e9a-9281ce63040c actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.12/site-packages/nova/compute/resource_tracker.py:1740
Oct 09 16:32:32 compute-0 nova_compute[117331]: 2025-10-09 16:32:32.911 2 INFO nova.compute.resource_tracker [None req-92a13bed-c081-4c7b-87f3-bafebefb79f2 - - - - - -] Instance 5e29434f-93a8-4705-b4f3-b51901ab57ee has allocations against this compute host but is not found in the database.
Oct 09 16:32:32 compute-0 nova_compute[117331]: 2025-10-09 16:32:32.912 2 DEBUG nova.compute.resource_tracker [None req-92a13bed-c081-4c7b-87f3-bafebefb79f2 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 2 _report_final_resource_view /usr/lib/python3.12/site-packages/nova/compute/resource_tracker.py:1159
Oct 09 16:32:32 compute-0 nova_compute[117331]: 2025-10-09 16:32:32.912 2 DEBUG nova.compute.resource_tracker [None req-92a13bed-c081-4c7b-87f3-bafebefb79f2 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=768MB phys_disk=79GB used_disk=2GB total_vcpus=8 used_vcpus=2 pci_stats=[] stats={'failed_builds': '0', 'uptime': ' 16:32:31 up 41 min,  0 user,  load average: 0.82, 0.68, 0.50\n', 'num_instances': '2', 'num_vm_active': '2', 'num_task_migrating': '1', 'num_os_type_None': '2', 'num_proj_6d67ac3076434a4582e5db1ca7d043ff': '2', 'io_workload': '0', 'num_task_None': '1'} _report_final_resource_view /usr/lib/python3.12/site-packages/nova/compute/resource_tracker.py:1168
Oct 09 16:32:32 compute-0 nova_compute[117331]: 2025-10-09 16:32:32.981 2 DEBUG nova.compute.provider_tree [None req-92a13bed-c081-4c7b-87f3-bafebefb79f2 - - - - - -] Inventory has not changed in ProviderTree for provider: 593051b8-2000-437f-a915-2616fc8b1671 update_inventory /usr/lib/python3.12/site-packages/nova/compute/provider_tree.py:180
Oct 09 16:32:33 compute-0 nova_compute[117331]: 2025-10-09 16:32:33.505 2 DEBUG nova.scheduler.client.report [None req-92a13bed-c081-4c7b-87f3-bafebefb79f2 - - - - - -] Inventory has not changed for provider 593051b8-2000-437f-a915-2616fc8b1671 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 79, 'reserved': 1, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.12/site-packages/nova/scheduler/client/report.py:958
Oct 09 16:32:34 compute-0 nova_compute[117331]: 2025-10-09 16:32:34.034 2 DEBUG nova.compute.resource_tracker [None req-92a13bed-c081-4c7b-87f3-bafebefb79f2 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.12/site-packages/nova/compute/resource_tracker.py:1097
Oct 09 16:32:34 compute-0 nova_compute[117331]: 2025-10-09 16:32:34.035 2 DEBUG oslo_concurrency.lockutils [None req-92a13bed-c081-4c7b-87f3-bafebefb79f2 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 2.691s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:424
Oct 09 16:32:34 compute-0 sshd-session[150344]: Invalid user orangepi from 36.224.53.32 port 38062
Oct 09 16:32:34 compute-0 sshd-session[150344]: pam_unix(sshd:auth): check pass; user unknown
Oct 09 16:32:34 compute-0 sshd-session[150344]: pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=36.224.53.32
Oct 09 16:32:35 compute-0 nova_compute[117331]: 2025-10-09 16:32:35.032 2 DEBUG oslo_service.periodic_task [None req-92a13bed-c081-4c7b-87f3-bafebefb79f2 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.12/site-packages/oslo_service/periodic_task.py:210
Oct 09 16:32:35 compute-0 nova_compute[117331]: 2025-10-09 16:32:35.032 2 DEBUG oslo_service.periodic_task [None req-92a13bed-c081-4c7b-87f3-bafebefb79f2 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.12/site-packages/oslo_service/periodic_task.py:210
Oct 09 16:32:35 compute-0 nova_compute[117331]: 2025-10-09 16:32:35.033 2 DEBUG oslo_service.periodic_task [None req-92a13bed-c081-4c7b-87f3-bafebefb79f2 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.12/site-packages/oslo_service/periodic_task.py:210
Oct 09 16:32:35 compute-0 ovn_metadata_agent[28608]: 2025-10-09 16:32:35.326 28613 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:405
Oct 09 16:32:35 compute-0 ovn_metadata_agent[28608]: 2025-10-09 16:32:35.326 28613 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.000s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:410
Oct 09 16:32:35 compute-0 ovn_metadata_agent[28608]: 2025-10-09 16:32:35.326 28613 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:424
Oct 09 16:32:36 compute-0 sshd-session[150344]: Failed password for invalid user orangepi from 36.224.53.32 port 38062 ssh2
Oct 09 16:32:36 compute-0 nova_compute[117331]: 2025-10-09 16:32:36.306 2 DEBUG oslo_service.periodic_task [None req-92a13bed-c081-4c7b-87f3-bafebefb79f2 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.12/site-packages/oslo_service/periodic_task.py:210
Oct 09 16:32:36 compute-0 nova_compute[117331]: 2025-10-09 16:32:36.307 2 DEBUG oslo_service.periodic_task [None req-92a13bed-c081-4c7b-87f3-bafebefb79f2 - - - - - -] Running periodic task ComputeManager._cleanup_incomplete_migrations run_periodic_tasks /usr/lib/python3.12/site-packages/oslo_service/periodic_task.py:210
Oct 09 16:32:36 compute-0 nova_compute[117331]: 2025-10-09 16:32:36.307 2 DEBUG nova.compute.manager [None req-92a13bed-c081-4c7b-87f3-bafebefb79f2 - - - - - -] Cleaning up deleted instances with incomplete migration  _cleanup_incomplete_migrations /usr/lib/python3.12/site-packages/nova/compute/manager.py:11947
Oct 09 16:32:37 compute-0 nova_compute[117331]: 2025-10-09 16:32:37.139 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 09 16:32:37 compute-0 sshd-session[150344]: Connection closed by invalid user orangepi 36.224.53.32 port 38062 [preauth]
Oct 09 16:32:37 compute-0 nova_compute[117331]: 2025-10-09 16:32:37.385 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 09 16:32:37 compute-0 nova_compute[117331]: 2025-10-09 16:32:37.393 2 DEBUG oslo_concurrency.processutils [None req-a98bf568-4bf7-459f-a8c7-be928ce9dc17 005258eefa5345409efe13f6a1de22ab c076c42271004e7aa86e84416e33f826 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/4c51c9df-777a-497d-bf84-a8001b45a4f0/disk --force-share --output=json execute /usr/lib/python3.12/site-packages/oslo_concurrency/processutils.py:349
Oct 09 16:32:37 compute-0 nova_compute[117331]: 2025-10-09 16:32:37.476 2 DEBUG oslo_concurrency.processutils [None req-a98bf568-4bf7-459f-a8c7-be928ce9dc17 005258eefa5345409efe13f6a1de22ab c076c42271004e7aa86e84416e33f826 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/4c51c9df-777a-497d-bf84-a8001b45a4f0/disk --force-share --output=json" returned: 0 in 0.083s execute /usr/lib/python3.12/site-packages/oslo_concurrency/processutils.py:372
Oct 09 16:32:37 compute-0 nova_compute[117331]: 2025-10-09 16:32:37.478 2 DEBUG oslo_concurrency.processutils [None req-a98bf568-4bf7-459f-a8c7-be928ce9dc17 005258eefa5345409efe13f6a1de22ab c076c42271004e7aa86e84416e33f826 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/4c51c9df-777a-497d-bf84-a8001b45a4f0/disk --force-share --output=json execute /usr/lib/python3.12/site-packages/oslo_concurrency/processutils.py:349
Oct 09 16:32:37 compute-0 nova_compute[117331]: 2025-10-09 16:32:37.542 2 DEBUG oslo_concurrency.processutils [None req-a98bf568-4bf7-459f-a8c7-be928ce9dc17 005258eefa5345409efe13f6a1de22ab c076c42271004e7aa86e84416e33f826 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/4c51c9df-777a-497d-bf84-a8001b45a4f0/disk --force-share --output=json" returned: 0 in 0.064s execute /usr/lib/python3.12/site-packages/oslo_concurrency/processutils.py:372
Oct 09 16:32:37 compute-0 nova_compute[117331]: 2025-10-09 16:32:37.543 2 DEBUG nova.compute.manager [None req-a98bf568-4bf7-459f-a8c7-be928ce9dc17 005258eefa5345409efe13f6a1de22ab c076c42271004e7aa86e84416e33f826 - - default default] [instance: 4c51c9df-777a-497d-bf84-a8001b45a4f0] Preparing to wait for external event network-vif-plugged-783ccf1e-b85c-4420-b3e9-705ddf5495d3 prepare_for_instance_event /usr/lib/python3.12/site-packages/nova/compute/manager.py:307
Oct 09 16:32:37 compute-0 nova_compute[117331]: 2025-10-09 16:32:37.543 2 DEBUG oslo_concurrency.lockutils [None req-a98bf568-4bf7-459f-a8c7-be928ce9dc17 005258eefa5345409efe13f6a1de22ab c076c42271004e7aa86e84416e33f826 - - default default] Acquiring lock "4c51c9df-777a-497d-bf84-a8001b45a4f0-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:405
Oct 09 16:32:37 compute-0 nova_compute[117331]: 2025-10-09 16:32:37.544 2 DEBUG oslo_concurrency.lockutils [None req-a98bf568-4bf7-459f-a8c7-be928ce9dc17 005258eefa5345409efe13f6a1de22ab c076c42271004e7aa86e84416e33f826 - - default default] Lock "4c51c9df-777a-497d-bf84-a8001b45a4f0-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:410
Oct 09 16:32:37 compute-0 nova_compute[117331]: 2025-10-09 16:32:37.544 2 DEBUG oslo_concurrency.lockutils [None req-a98bf568-4bf7-459f-a8c7-be928ce9dc17 005258eefa5345409efe13f6a1de22ab c076c42271004e7aa86e84416e33f826 - - default default] Lock "4c51c9df-777a-497d-bf84-a8001b45a4f0-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:424
Oct 09 16:32:37 compute-0 systemd[1]: virtproxyd.service: Deactivated successfully.
Oct 09 16:32:37 compute-0 nova_compute[117331]: 2025-10-09 16:32:37.808 2 DEBUG oslo_service.periodic_task [None req-92a13bed-c081-4c7b-87f3-bafebefb79f2 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.12/site-packages/oslo_service/periodic_task.py:210
Oct 09 16:32:37 compute-0 podman[150408]: 2025-10-09 16:32:37.848327686 +0000 UTC m=+0.073334810 container health_status 64fa251112d01abb34ec1bb8e22019815ddcaaaa391ce5ec91517147ac5a9994 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, name=ubi9-minimal, io.openshift.expose-services=, url=https://catalog.redhat.com/en/search?searchType=containers, vendor=Red Hat, Inc., version=9.6, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, managed_by=edpm_ansible, build-date=2025-08-20T13:12:41, maintainer=Red Hat, Inc., architecture=x86_64, release=1755695350, io.buildah.version=1.33.7, container_name=openstack_network_exporter, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vcs-type=git, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., distribution-scope=public, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, config_id=edpm, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., com.redhat.component=ubi9-minimal-container, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, io.openshift.tags=minimal rhel9)
Oct 09 16:32:37 compute-0 podman[150409]: 2025-10-09 16:32:37.875199239 +0000 UTC m=+0.098534259 container health_status d615e4f24ff62fbb14be4a63e6da568ec5b22309773c1c66103b6b4de64a8a2f (image=38.102.83.66:5001/podified-master-centos10/openstack-ovn-controller:watcher_latest, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251007, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 10 Base Image, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': '38.102.83.66:5001/podified-master-centos10/openstack-ovn-controller:watcher_latest', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, container_name=ovn_controller, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, tcib_build_tag=watcher_latest, tcib_managed=true, io.buildah.version=1.41.4, org.label-schema.vendor=CentOS)
Oct 09 16:32:38 compute-0 nova_compute[117331]: 2025-10-09 16:32:38.318 2 DEBUG oslo_service.periodic_task [None req-92a13bed-c081-4c7b-87f3-bafebefb79f2 - - - - - -] Running periodic task ComputeManager._run_pending_deletes run_periodic_tasks /usr/lib/python3.12/site-packages/oslo_service/periodic_task.py:210
Oct 09 16:32:38 compute-0 nova_compute[117331]: 2025-10-09 16:32:38.318 2 DEBUG nova.compute.manager [None req-92a13bed-c081-4c7b-87f3-bafebefb79f2 - - - - - -] Cleaning up deleted instances _run_pending_deletes /usr/lib/python3.12/site-packages/nova/compute/manager.py:11909
Oct 09 16:32:38 compute-0 nova_compute[117331]: 2025-10-09 16:32:38.852 2 DEBUG nova.compute.manager [None req-92a13bed-c081-4c7b-87f3-bafebefb79f2 - - - - - -] There are 0 instances to clean _run_pending_deletes /usr/lib/python3.12/site-packages/nova/compute/manager.py:11918
Oct 09 16:32:38 compute-0 nova_compute[117331]: 2025-10-09 16:32:38.853 2 DEBUG oslo_service.periodic_task [None req-92a13bed-c081-4c7b-87f3-bafebefb79f2 - - - - - -] Running periodic task ComputeManager._cleanup_expired_console_auth_tokens run_periodic_tasks /usr/lib/python3.12/site-packages/oslo_service/periodic_task.py:210
Oct 09 16:32:39 compute-0 sshd-session[150400]: Invalid user fedora from 36.224.53.32 port 46058
Oct 09 16:32:41 compute-0 sshd-session[150400]: pam_unix(sshd:auth): check pass; user unknown
Oct 09 16:32:41 compute-0 sshd-session[150400]: pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=36.224.53.32
Oct 09 16:32:42 compute-0 nova_compute[117331]: 2025-10-09 16:32:42.140 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 09 16:32:42 compute-0 nova_compute[117331]: 2025-10-09 16:32:42.387 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 09 16:32:42 compute-0 sshd-session[150400]: Failed password for invalid user fedora from 36.224.53.32 port 46058 ssh2
Oct 09 16:32:43 compute-0 sshd-session[150400]: Connection closed by invalid user fedora 36.224.53.32 port 46058 [preauth]
Oct 09 16:32:43 compute-0 nova_compute[117331]: 2025-10-09 16:32:43.489 2 DEBUG nova.compute.manager [req-cfff732e-87a8-4ac9-8571-33ccfce71251 req-ee97b07b-518f-4dd9-88dd-61808dcb76f1 ee15961ad2be4bada6c106ac4f69f422 c076c42271004e7aa86e84416e33f826 - - default default] [instance: 4c51c9df-777a-497d-bf84-a8001b45a4f0] Received event network-vif-unplugged-783ccf1e-b85c-4420-b3e9-705ddf5495d3 external_instance_event /usr/lib/python3.12/site-packages/nova/compute/manager.py:11812
Oct 09 16:32:43 compute-0 nova_compute[117331]: 2025-10-09 16:32:43.490 2 DEBUG oslo_concurrency.lockutils [req-cfff732e-87a8-4ac9-8571-33ccfce71251 req-ee97b07b-518f-4dd9-88dd-61808dcb76f1 ee15961ad2be4bada6c106ac4f69f422 c076c42271004e7aa86e84416e33f826 - - default default] Acquiring lock "4c51c9df-777a-497d-bf84-a8001b45a4f0-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:405
Oct 09 16:32:43 compute-0 nova_compute[117331]: 2025-10-09 16:32:43.490 2 DEBUG oslo_concurrency.lockutils [req-cfff732e-87a8-4ac9-8571-33ccfce71251 req-ee97b07b-518f-4dd9-88dd-61808dcb76f1 ee15961ad2be4bada6c106ac4f69f422 c076c42271004e7aa86e84416e33f826 - - default default] Lock "4c51c9df-777a-497d-bf84-a8001b45a4f0-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:410
Oct 09 16:32:43 compute-0 nova_compute[117331]: 2025-10-09 16:32:43.490 2 DEBUG oslo_concurrency.lockutils [req-cfff732e-87a8-4ac9-8571-33ccfce71251 req-ee97b07b-518f-4dd9-88dd-61808dcb76f1 ee15961ad2be4bada6c106ac4f69f422 c076c42271004e7aa86e84416e33f826 - - default default] Lock "4c51c9df-777a-497d-bf84-a8001b45a4f0-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:424
Oct 09 16:32:43 compute-0 nova_compute[117331]: 2025-10-09 16:32:43.490 2 DEBUG nova.compute.manager [req-cfff732e-87a8-4ac9-8571-33ccfce71251 req-ee97b07b-518f-4dd9-88dd-61808dcb76f1 ee15961ad2be4bada6c106ac4f69f422 c076c42271004e7aa86e84416e33f826 - - default default] [instance: 4c51c9df-777a-497d-bf84-a8001b45a4f0] No event matching network-vif-unplugged-783ccf1e-b85c-4420-b3e9-705ddf5495d3 in dict_keys([('network-vif-plugged', '783ccf1e-b85c-4420-b3e9-705ddf5495d3')]) pop_instance_event /usr/lib/python3.12/site-packages/nova/compute/manager.py:349
Oct 09 16:32:43 compute-0 nova_compute[117331]: 2025-10-09 16:32:43.491 2 DEBUG nova.compute.manager [req-cfff732e-87a8-4ac9-8571-33ccfce71251 req-ee97b07b-518f-4dd9-88dd-61808dcb76f1 ee15961ad2be4bada6c106ac4f69f422 c076c42271004e7aa86e84416e33f826 - - default default] [instance: 4c51c9df-777a-497d-bf84-a8001b45a4f0] Received event network-vif-unplugged-783ccf1e-b85c-4420-b3e9-705ddf5495d3 for instance with task_state migrating. _process_instance_event /usr/lib/python3.12/site-packages/nova/compute/manager.py:11590
Oct 09 16:32:45 compute-0 nova_compute[117331]: 2025-10-09 16:32:45.072 2 INFO nova.compute.manager [None req-a98bf568-4bf7-459f-a8c7-be928ce9dc17 005258eefa5345409efe13f6a1de22ab c076c42271004e7aa86e84416e33f826 - - default default] [instance: 4c51c9df-777a-497d-bf84-a8001b45a4f0] Took 7.53 seconds for pre_live_migration on destination host compute-1.ctlplane.example.com.
Oct 09 16:32:45 compute-0 ovn_controller[19752]: 2025-10-09T16:32:45Z|00212|memory_trim|INFO|Detected inactivity (last active 30003 ms ago): trimming memory
Oct 09 16:32:45 compute-0 nova_compute[117331]: 2025-10-09 16:32:45.574 2 DEBUG nova.compute.manager [req-2c6fa29a-e5ca-4aa5-a050-a5a7d68b32ff req-c69d26b1-57ee-48ec-8c04-8fc3dedc1899 ee15961ad2be4bada6c106ac4f69f422 c076c42271004e7aa86e84416e33f826 - - default default] [instance: 4c51c9df-777a-497d-bf84-a8001b45a4f0] Received event network-vif-plugged-783ccf1e-b85c-4420-b3e9-705ddf5495d3 external_instance_event /usr/lib/python3.12/site-packages/nova/compute/manager.py:11812
Oct 09 16:32:45 compute-0 nova_compute[117331]: 2025-10-09 16:32:45.575 2 DEBUG oslo_concurrency.lockutils [req-2c6fa29a-e5ca-4aa5-a050-a5a7d68b32ff req-c69d26b1-57ee-48ec-8c04-8fc3dedc1899 ee15961ad2be4bada6c106ac4f69f422 c076c42271004e7aa86e84416e33f826 - - default default] Acquiring lock "4c51c9df-777a-497d-bf84-a8001b45a4f0-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:405
Oct 09 16:32:45 compute-0 nova_compute[117331]: 2025-10-09 16:32:45.575 2 DEBUG oslo_concurrency.lockutils [req-2c6fa29a-e5ca-4aa5-a050-a5a7d68b32ff req-c69d26b1-57ee-48ec-8c04-8fc3dedc1899 ee15961ad2be4bada6c106ac4f69f422 c076c42271004e7aa86e84416e33f826 - - default default] Lock "4c51c9df-777a-497d-bf84-a8001b45a4f0-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:410
Oct 09 16:32:45 compute-0 nova_compute[117331]: 2025-10-09 16:32:45.575 2 DEBUG oslo_concurrency.lockutils [req-2c6fa29a-e5ca-4aa5-a050-a5a7d68b32ff req-c69d26b1-57ee-48ec-8c04-8fc3dedc1899 ee15961ad2be4bada6c106ac4f69f422 c076c42271004e7aa86e84416e33f826 - - default default] Lock "4c51c9df-777a-497d-bf84-a8001b45a4f0-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:424
Oct 09 16:32:45 compute-0 nova_compute[117331]: 2025-10-09 16:32:45.575 2 DEBUG nova.compute.manager [req-2c6fa29a-e5ca-4aa5-a050-a5a7d68b32ff req-c69d26b1-57ee-48ec-8c04-8fc3dedc1899 ee15961ad2be4bada6c106ac4f69f422 c076c42271004e7aa86e84416e33f826 - - default default] [instance: 4c51c9df-777a-497d-bf84-a8001b45a4f0] Processing event network-vif-plugged-783ccf1e-b85c-4420-b3e9-705ddf5495d3 _process_instance_event /usr/lib/python3.12/site-packages/nova/compute/manager.py:11572
Oct 09 16:32:45 compute-0 nova_compute[117331]: 2025-10-09 16:32:45.576 2 DEBUG nova.compute.manager [req-2c6fa29a-e5ca-4aa5-a050-a5a7d68b32ff req-c69d26b1-57ee-48ec-8c04-8fc3dedc1899 ee15961ad2be4bada6c106ac4f69f422 c076c42271004e7aa86e84416e33f826 - - default default] [instance: 4c51c9df-777a-497d-bf84-a8001b45a4f0] Received event network-changed-783ccf1e-b85c-4420-b3e9-705ddf5495d3 external_instance_event /usr/lib/python3.12/site-packages/nova/compute/manager.py:11812
Oct 09 16:32:45 compute-0 nova_compute[117331]: 2025-10-09 16:32:45.576 2 DEBUG nova.compute.manager [req-2c6fa29a-e5ca-4aa5-a050-a5a7d68b32ff req-c69d26b1-57ee-48ec-8c04-8fc3dedc1899 ee15961ad2be4bada6c106ac4f69f422 c076c42271004e7aa86e84416e33f826 - - default default] [instance: 4c51c9df-777a-497d-bf84-a8001b45a4f0] Refreshing instance network info cache due to event network-changed-783ccf1e-b85c-4420-b3e9-705ddf5495d3. external_instance_event /usr/lib/python3.12/site-packages/nova/compute/manager.py:11817
Oct 09 16:32:45 compute-0 nova_compute[117331]: 2025-10-09 16:32:45.576 2 DEBUG oslo_concurrency.lockutils [req-2c6fa29a-e5ca-4aa5-a050-a5a7d68b32ff req-c69d26b1-57ee-48ec-8c04-8fc3dedc1899 ee15961ad2be4bada6c106ac4f69f422 c076c42271004e7aa86e84416e33f826 - - default default] Acquiring lock "refresh_cache-4c51c9df-777a-497d-bf84-a8001b45a4f0" lock /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:313
Oct 09 16:32:45 compute-0 nova_compute[117331]: 2025-10-09 16:32:45.576 2 DEBUG oslo_concurrency.lockutils [req-2c6fa29a-e5ca-4aa5-a050-a5a7d68b32ff req-c69d26b1-57ee-48ec-8c04-8fc3dedc1899 ee15961ad2be4bada6c106ac4f69f422 c076c42271004e7aa86e84416e33f826 - - default default] Acquired lock "refresh_cache-4c51c9df-777a-497d-bf84-a8001b45a4f0" lock /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:316
Oct 09 16:32:45 compute-0 nova_compute[117331]: 2025-10-09 16:32:45.576 2 DEBUG nova.network.neutron [req-2c6fa29a-e5ca-4aa5-a050-a5a7d68b32ff req-c69d26b1-57ee-48ec-8c04-8fc3dedc1899 ee15961ad2be4bada6c106ac4f69f422 c076c42271004e7aa86e84416e33f826 - - default default] [instance: 4c51c9df-777a-497d-bf84-a8001b45a4f0] Refreshing network info cache for port 783ccf1e-b85c-4420-b3e9-705ddf5495d3 _get_instance_nw_info /usr/lib/python3.12/site-packages/nova/network/neutron.py:2067
Oct 09 16:32:45 compute-0 nova_compute[117331]: 2025-10-09 16:32:45.577 2 DEBUG nova.compute.manager [None req-a98bf568-4bf7-459f-a8c7-be928ce9dc17 005258eefa5345409efe13f6a1de22ab c076c42271004e7aa86e84416e33f826 - - default default] [instance: 4c51c9df-777a-497d-bf84-a8001b45a4f0] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.12/site-packages/nova/compute/manager.py:602
Oct 09 16:32:46 compute-0 nova_compute[117331]: 2025-10-09 16:32:46.082 2 WARNING neutronclient.v2_0.client [req-2c6fa29a-e5ca-4aa5-a050-a5a7d68b32ff req-c69d26b1-57ee-48ec-8c04-8fc3dedc1899 ee15961ad2be4bada6c106ac4f69f422 c076c42271004e7aa86e84416e33f826 - - default default] The python binding code in neutronclient is deprecated in favor of OpenstackSDK, please use that as this will be removed in a future release.
Oct 09 16:32:46 compute-0 nova_compute[117331]: 2025-10-09 16:32:46.087 2 DEBUG nova.compute.manager [None req-a98bf568-4bf7-459f-a8c7-be928ce9dc17 005258eefa5345409efe13f6a1de22ab c076c42271004e7aa86e84416e33f826 - - default default] live_migration data is LibvirtLiveMigrateData(bdms=[],block_migration=True,disk_available_mb=74752,disk_over_commit=<?>,dst_cpu_shared_set_info=set([]),dst_numa_info=<?>,dst_supports_mdev_live_migration=True,dst_supports_numa_live_migration=<?>,dst_wants_file_backed_memory=False,file_backed_memory_discard=<?>,filename='tmp7d726p3e',graphics_listen_addr_spice=127.0.0.1,graphics_listen_addr_vnc=::,image_type='qcow2',instance_relative_path='4c51c9df-777a-497d-bf84-a8001b45a4f0',is_shared_block_storage=False,is_shared_instance_path=False,is_volume_backed=False,migration=Migration(5e29434f-93a8-4705-b4f3-b51901ab57ee),old_vol_attachment_ids={},pci_dev_map_src_dst={},serial_listen_addr=None,serial_listen_ports=[],source_mdev_types=<?>,src_supports_native_luks=<?>,src_supports_numa_live_migration=<?>,supported_perf_events=[],target_connect_addr=None,target_mdevs=<?>,vifs=[VIFMigrateData],wait_for_vif_plugged=True) _do_live_migration /usr/lib/python3.12/site-packages/nova/compute/manager.py:9659
Oct 09 16:32:46 compute-0 nova_compute[117331]: 2025-10-09 16:32:46.443 2 WARNING neutronclient.v2_0.client [req-2c6fa29a-e5ca-4aa5-a050-a5a7d68b32ff req-c69d26b1-57ee-48ec-8c04-8fc3dedc1899 ee15961ad2be4bada6c106ac4f69f422 c076c42271004e7aa86e84416e33f826 - - default default] The python binding code in neutronclient is deprecated in favor of OpenstackSDK, please use that as this will be removed in a future release.
Oct 09 16:32:46 compute-0 nova_compute[117331]: 2025-10-09 16:32:46.595 2 DEBUG nova.network.neutron [req-2c6fa29a-e5ca-4aa5-a050-a5a7d68b32ff req-c69d26b1-57ee-48ec-8c04-8fc3dedc1899 ee15961ad2be4bada6c106ac4f69f422 c076c42271004e7aa86e84416e33f826 - - default default] [instance: 4c51c9df-777a-497d-bf84-a8001b45a4f0] Updated VIF entry in instance network info cache for port 783ccf1e-b85c-4420-b3e9-705ddf5495d3. _build_network_info_model /usr/lib/python3.12/site-packages/nova/network/neutron.py:3542
Oct 09 16:32:46 compute-0 nova_compute[117331]: 2025-10-09 16:32:46.595 2 DEBUG nova.network.neutron [req-2c6fa29a-e5ca-4aa5-a050-a5a7d68b32ff req-c69d26b1-57ee-48ec-8c04-8fc3dedc1899 ee15961ad2be4bada6c106ac4f69f422 c076c42271004e7aa86e84416e33f826 - - default default] [instance: 4c51c9df-777a-497d-bf84-a8001b45a4f0] Updating instance_info_cache with network_info: [{"id": "783ccf1e-b85c-4420-b3e9-705ddf5495d3", "address": "fa:16:3e:95:fd:24", "network": {"id": "54b37568-476a-40a0-b545-fe5401f85653", "bridge": "br-int", "label": "tempest-TestExecuteWorkloadBalanceStrategy-563305466-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "036a4e356fb34effb6775ffe5bd9a19f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap783ccf1e-b8", "ovs_interfaceid": "783ccf1e-b85c-4420-b3e9-705ddf5495d3", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {"migrating_to": "compute-1.ctlplane.example.com"}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.12/site-packages/nova/network/neutron.py:116
Oct 09 16:32:46 compute-0 nova_compute[117331]: 2025-10-09 16:32:46.601 2 DEBUG nova.objects.instance [None req-a98bf568-4bf7-459f-a8c7-be928ce9dc17 005258eefa5345409efe13f6a1de22ab c076c42271004e7aa86e84416e33f826 - - default default] Lazy-loading 'migration_context' on Instance uuid 4c51c9df-777a-497d-bf84-a8001b45a4f0 obj_load_attr /usr/lib/python3.12/site-packages/nova/objects/instance.py:1141
Oct 09 16:32:46 compute-0 nova_compute[117331]: 2025-10-09 16:32:46.602 2 DEBUG nova.virt.libvirt.driver [None req-a98bf568-4bf7-459f-a8c7-be928ce9dc17 005258eefa5345409efe13f6a1de22ab c076c42271004e7aa86e84416e33f826 - - default default] [instance: 4c51c9df-777a-497d-bf84-a8001b45a4f0] Starting monitoring of live migration _live_migration /usr/lib/python3.12/site-packages/nova/virt/libvirt/driver.py:11543
Oct 09 16:32:46 compute-0 nova_compute[117331]: 2025-10-09 16:32:46.603 2 DEBUG nova.virt.libvirt.driver [None req-a98bf568-4bf7-459f-a8c7-be928ce9dc17 005258eefa5345409efe13f6a1de22ab c076c42271004e7aa86e84416e33f826 - - default default] [instance: 4c51c9df-777a-497d-bf84-a8001b45a4f0] Operation thread is still running _live_migration_monitor /usr/lib/python3.12/site-packages/nova/virt/libvirt/driver.py:11343
Oct 09 16:32:46 compute-0 nova_compute[117331]: 2025-10-09 16:32:46.604 2 DEBUG nova.virt.libvirt.driver [None req-a98bf568-4bf7-459f-a8c7-be928ce9dc17 005258eefa5345409efe13f6a1de22ab c076c42271004e7aa86e84416e33f826 - - default default] [instance: 4c51c9df-777a-497d-bf84-a8001b45a4f0] Migration not running yet _live_migration_monitor /usr/lib/python3.12/site-packages/nova/virt/libvirt/driver.py:11352
Oct 09 16:32:47 compute-0 nova_compute[117331]: 2025-10-09 16:32:47.106 2 DEBUG nova.virt.libvirt.driver [None req-a98bf568-4bf7-459f-a8c7-be928ce9dc17 005258eefa5345409efe13f6a1de22ab c076c42271004e7aa86e84416e33f826 - - default default] [instance: 4c51c9df-777a-497d-bf84-a8001b45a4f0] Operation thread is still running _live_migration_monitor /usr/lib/python3.12/site-packages/nova/virt/libvirt/driver.py:11343
Oct 09 16:32:47 compute-0 nova_compute[117331]: 2025-10-09 16:32:47.106 2 DEBUG nova.virt.libvirt.driver [None req-a98bf568-4bf7-459f-a8c7-be928ce9dc17 005258eefa5345409efe13f6a1de22ab c076c42271004e7aa86e84416e33f826 - - default default] [instance: 4c51c9df-777a-497d-bf84-a8001b45a4f0] Migration not running yet _live_migration_monitor /usr/lib/python3.12/site-packages/nova/virt/libvirt/driver.py:11352
Oct 09 16:32:47 compute-0 nova_compute[117331]: 2025-10-09 16:32:47.117 2 DEBUG oslo_concurrency.lockutils [req-2c6fa29a-e5ca-4aa5-a050-a5a7d68b32ff req-c69d26b1-57ee-48ec-8c04-8fc3dedc1899 ee15961ad2be4bada6c106ac4f69f422 c076c42271004e7aa86e84416e33f826 - - default default] Releasing lock "refresh_cache-4c51c9df-777a-497d-bf84-a8001b45a4f0" lock /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:334
Oct 09 16:32:47 compute-0 nova_compute[117331]: 2025-10-09 16:32:47.137 2 DEBUG nova.virt.libvirt.vif [None req-a98bf568-4bf7-459f-a8c7-be928ce9dc17 005258eefa5345409efe13f6a1de22ab c076c42271004e7aa86e84416e33f826 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,compute_id=1,config_drive='True',created_at=2025-10-09T16:31:38Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description=None,display_name='tempest-TestExecuteWorkloadBalanceStrategy-server-445686334',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testexecuteworkloadbalancestrategy-server-445686334',id=22,image_ref='b7d6e0af-25e4-4227-9dc6-43143898ceee',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-10-09T16:31:56Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=1,progress=0,project_id='6d67ac3076434a4582e5db1ca7d043ff',ramdisk_id='',reservation_id='r-0ja4884i',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,manager,admin,member',image_base_image_ref='b7d6e0af-25e4-4227-9dc6-43143898ceee',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model=
'virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestExecuteWorkloadBalanceStrategy-2100042169',owner_user_name='tempest-TestExecuteWorkloadBalanceStrategy-2100042169-project-admin'},tags=<?>,task_state='migrating',terminated_at=None,trusted_certs=<?>,updated_at=2025-10-09T16:31:56Z,user_data=None,user_id='1c793380a6e945d69dacfd07f1f156f8',uuid=4c51c9df-777a-497d-bf84-a8001b45a4f0,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "783ccf1e-b85c-4420-b3e9-705ddf5495d3", "address": "fa:16:3e:95:fd:24", "network": {"id": "54b37568-476a-40a0-b545-fe5401f85653", "bridge": "br-int", "label": "tempest-TestExecuteWorkloadBalanceStrategy-563305466-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "036a4e356fb34effb6775ffe5bd9a19f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system"}, "devname": "tap783ccf1e-b8", "ovs_interfaceid": "783ccf1e-b85c-4420-b3e9-705ddf5495d3", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {"os_vif_delegation": true}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.12/site-packages/nova/virt/libvirt/vif.py:574
Oct 09 16:32:47 compute-0 nova_compute[117331]: 2025-10-09 16:32:47.137 2 DEBUG nova.network.os_vif_util [None req-a98bf568-4bf7-459f-a8c7-be928ce9dc17 005258eefa5345409efe13f6a1de22ab c076c42271004e7aa86e84416e33f826 - - default default] Converting VIF {"id": "783ccf1e-b85c-4420-b3e9-705ddf5495d3", "address": "fa:16:3e:95:fd:24", "network": {"id": "54b37568-476a-40a0-b545-fe5401f85653", "bridge": "br-int", "label": "tempest-TestExecuteWorkloadBalanceStrategy-563305466-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "036a4e356fb34effb6775ffe5bd9a19f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system"}, "devname": "tap783ccf1e-b8", "ovs_interfaceid": "783ccf1e-b85c-4420-b3e9-705ddf5495d3", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {"os_vif_delegation": true}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.12/site-packages/nova/network/os_vif_util.py:511
Oct 09 16:32:47 compute-0 nova_compute[117331]: 2025-10-09 16:32:47.138 2 DEBUG nova.network.os_vif_util [None req-a98bf568-4bf7-459f-a8c7-be928ce9dc17 005258eefa5345409efe13f6a1de22ab c076c42271004e7aa86e84416e33f826 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:95:fd:24,bridge_name='br-int',has_traffic_filtering=True,id=783ccf1e-b85c-4420-b3e9-705ddf5495d3,network=Network(54b37568-476a-40a0-b545-fe5401f85653),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap783ccf1e-b8') nova_to_osvif_vif /usr/lib/python3.12/site-packages/nova/network/os_vif_util.py:548
Oct 09 16:32:47 compute-0 nova_compute[117331]: 2025-10-09 16:32:47.138 2 DEBUG nova.virt.libvirt.migration [None req-a98bf568-4bf7-459f-a8c7-be928ce9dc17 005258eefa5345409efe13f6a1de22ab c076c42271004e7aa86e84416e33f826 - - default default] [instance: 4c51c9df-777a-497d-bf84-a8001b45a4f0] Updating guest XML with vif config: <interface type="ethernet">
Oct 09 16:32:47 compute-0 nova_compute[117331]:   <mac address="fa:16:3e:95:fd:24"/>
Oct 09 16:32:47 compute-0 nova_compute[117331]:   <model type="virtio"/>
Oct 09 16:32:47 compute-0 nova_compute[117331]:   <driver name="vhost" rx_queue_size="512"/>
Oct 09 16:32:47 compute-0 nova_compute[117331]:   <mtu size="1442"/>
Oct 09 16:32:47 compute-0 nova_compute[117331]:   <target dev="tap783ccf1e-b8"/>
Oct 09 16:32:47 compute-0 nova_compute[117331]: </interface>
Oct 09 16:32:47 compute-0 nova_compute[117331]:  _update_vif_xml /usr/lib/python3.12/site-packages/nova/virt/libvirt/migration.py:534
Oct 09 16:32:47 compute-0 nova_compute[117331]: 2025-10-09 16:32:47.139 2 DEBUG nova.virt.libvirt.migration [None req-a98bf568-4bf7-459f-a8c7-be928ce9dc17 005258eefa5345409efe13f6a1de22ab c076c42271004e7aa86e84416e33f826 - - default default] _remove_cpu_shared_set_xml input xml=<domain type="kvm">
Oct 09 16:32:47 compute-0 nova_compute[117331]:   <name>instance-00000016</name>
Oct 09 16:32:47 compute-0 nova_compute[117331]:   <uuid>4c51c9df-777a-497d-bf84-a8001b45a4f0</uuid>
Oct 09 16:32:47 compute-0 nova_compute[117331]:   <metadata>
Oct 09 16:32:47 compute-0 nova_compute[117331]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Oct 09 16:32:47 compute-0 nova_compute[117331]:       <nova:package version="32.1.0-0.20251008143712.076498e.el10"/>
Oct 09 16:32:47 compute-0 nova_compute[117331]:       <nova:name>tempest-TestExecuteWorkloadBalanceStrategy-server-445686334</nova:name>
Oct 09 16:32:47 compute-0 nova_compute[117331]:       <nova:creationTime>2025-10-09 16:31:51</nova:creationTime>
Oct 09 16:32:47 compute-0 nova_compute[117331]:       <nova:flavor name="m1.nano" id="5aeb5bf9-70f3-4ab6-b418-2c3319a7d2b3">
Oct 09 16:32:47 compute-0 nova_compute[117331]:         <nova:memory>128</nova:memory>
Oct 09 16:32:47 compute-0 nova_compute[117331]:         <nova:disk>1</nova:disk>
Oct 09 16:32:47 compute-0 nova_compute[117331]:         <nova:swap>0</nova:swap>
Oct 09 16:32:47 compute-0 nova_compute[117331]:         <nova:ephemeral>0</nova:ephemeral>
Oct 09 16:32:47 compute-0 nova_compute[117331]:         <nova:vcpus>1</nova:vcpus>
Oct 09 16:32:47 compute-0 nova_compute[117331]:         <nova:extraSpecs>
Oct 09 16:32:47 compute-0 nova_compute[117331]:           <nova:extraSpec name="hw_rng:allowed">True</nova:extraSpec>
Oct 09 16:32:47 compute-0 nova_compute[117331]:         </nova:extraSpecs>
Oct 09 16:32:47 compute-0 nova_compute[117331]:       </nova:flavor>
Oct 09 16:32:47 compute-0 nova_compute[117331]:       <nova:image uuid="b7d6e0af-25e4-4227-9dc6-43143898ceee">
Oct 09 16:32:47 compute-0 nova_compute[117331]:         <nova:containerFormat>bare</nova:containerFormat>
Oct 09 16:32:47 compute-0 nova_compute[117331]:         <nova:diskFormat>qcow2</nova:diskFormat>
Oct 09 16:32:47 compute-0 nova_compute[117331]:         <nova:minDisk>1</nova:minDisk>
Oct 09 16:32:47 compute-0 nova_compute[117331]:         <nova:minRam>0</nova:minRam>
Oct 09 16:32:47 compute-0 nova_compute[117331]:         <nova:properties>
Oct 09 16:32:47 compute-0 nova_compute[117331]:           <nova:property name="hw_rng_model">virtio</nova:property>
Oct 09 16:32:47 compute-0 nova_compute[117331]:         </nova:properties>
Oct 09 16:32:47 compute-0 nova_compute[117331]:       </nova:image>
Oct 09 16:32:47 compute-0 nova_compute[117331]:       <nova:owner>
Oct 09 16:32:47 compute-0 nova_compute[117331]:         <nova:user uuid="1c793380a6e945d69dacfd07f1f156f8">tempest-TestExecuteWorkloadBalanceStrategy-2100042169-project-admin</nova:user>
Oct 09 16:32:47 compute-0 nova_compute[117331]:         <nova:project uuid="6d67ac3076434a4582e5db1ca7d043ff">tempest-TestExecuteWorkloadBalanceStrategy-2100042169</nova:project>
Oct 09 16:32:47 compute-0 nova_compute[117331]:       </nova:owner>
Oct 09 16:32:47 compute-0 nova_compute[117331]:       <nova:root type="image" uuid="b7d6e0af-25e4-4227-9dc6-43143898ceee"/>
Oct 09 16:32:47 compute-0 nova_compute[117331]:       <nova:ports>
Oct 09 16:32:47 compute-0 nova_compute[117331]:         <nova:port uuid="783ccf1e-b85c-4420-b3e9-705ddf5495d3">
Oct 09 16:32:47 compute-0 nova_compute[117331]:           <nova:ip type="fixed" address="10.100.0.10" ipVersion="4"/>
Oct 09 16:32:47 compute-0 nova_compute[117331]:         </nova:port>
Oct 09 16:32:47 compute-0 nova_compute[117331]:       </nova:ports>
Oct 09 16:32:47 compute-0 nova_compute[117331]:     </nova:instance>
Oct 09 16:32:47 compute-0 nova_compute[117331]:   </metadata>
Oct 09 16:32:47 compute-0 nova_compute[117331]:   <memory unit="KiB">131072</memory>
Oct 09 16:32:47 compute-0 nova_compute[117331]:   <currentMemory unit="KiB">131072</currentMemory>
Oct 09 16:32:47 compute-0 nova_compute[117331]:   <vcpu placement="static">1</vcpu>
Oct 09 16:32:47 compute-0 nova_compute[117331]:   <resource>
Oct 09 16:32:47 compute-0 nova_compute[117331]:     <partition>/machine</partition>
Oct 09 16:32:47 compute-0 nova_compute[117331]:   </resource>
Oct 09 16:32:47 compute-0 nova_compute[117331]:   <sysinfo type="smbios">
Oct 09 16:32:47 compute-0 nova_compute[117331]:     <system>
Oct 09 16:32:47 compute-0 nova_compute[117331]:       <entry name="manufacturer">RDO</entry>
Oct 09 16:32:47 compute-0 nova_compute[117331]:       <entry name="product">OpenStack Compute</entry>
Oct 09 16:32:47 compute-0 nova_compute[117331]:       <entry name="version">32.1.0-0.20251008143712.076498e.el10</entry>
Oct 09 16:32:47 compute-0 nova_compute[117331]:       <entry name="serial">4c51c9df-777a-497d-bf84-a8001b45a4f0</entry>
Oct 09 16:32:47 compute-0 nova_compute[117331]:       <entry name="uuid">4c51c9df-777a-497d-bf84-a8001b45a4f0</entry>
Oct 09 16:32:47 compute-0 nova_compute[117331]:       <entry name="family">Virtual Machine</entry>
Oct 09 16:32:47 compute-0 nova_compute[117331]:     </system>
Oct 09 16:32:47 compute-0 nova_compute[117331]:   </sysinfo>
Oct 09 16:32:47 compute-0 nova_compute[117331]:   <os>
Oct 09 16:32:47 compute-0 nova_compute[117331]:     <type arch="x86_64" machine="pc-q35-rhel9.6.0">hvm</type>
Oct 09 16:32:47 compute-0 nova_compute[117331]:     <boot dev="hd"/>
Oct 09 16:32:47 compute-0 nova_compute[117331]:     <smbios mode="sysinfo"/>
Oct 09 16:32:47 compute-0 nova_compute[117331]:   </os>
Oct 09 16:32:47 compute-0 nova_compute[117331]:   <features>
Oct 09 16:32:47 compute-0 nova_compute[117331]:     <acpi/>
Oct 09 16:32:47 compute-0 nova_compute[117331]:     <apic/>
Oct 09 16:32:47 compute-0 nova_compute[117331]:     <vmcoreinfo state="on"/>
Oct 09 16:32:47 compute-0 nova_compute[117331]:   </features>
Oct 09 16:32:47 compute-0 nova_compute[117331]:   <cpu mode="host-model" check="partial">
Oct 09 16:32:47 compute-0 nova_compute[117331]:     <topology sockets="1" dies="1" clusters="1" cores="1" threads="1"/>
Oct 09 16:32:47 compute-0 nova_compute[117331]:   </cpu>
Oct 09 16:32:47 compute-0 nova_compute[117331]:   <clock offset="utc">
Oct 09 16:32:47 compute-0 nova_compute[117331]:     <timer name="pit" tickpolicy="delay"/>
Oct 09 16:32:47 compute-0 nova_compute[117331]:     <timer name="rtc" tickpolicy="catchup"/>
Oct 09 16:32:47 compute-0 nova_compute[117331]:     <timer name="hpet" present="no"/>
Oct 09 16:32:47 compute-0 nova_compute[117331]:   </clock>
Oct 09 16:32:47 compute-0 nova_compute[117331]:   <on_poweroff>destroy</on_poweroff>
Oct 09 16:32:47 compute-0 nova_compute[117331]:   <on_reboot>restart</on_reboot>
Oct 09 16:32:47 compute-0 nova_compute[117331]:   <on_crash>destroy</on_crash>
Oct 09 16:32:47 compute-0 nova_compute[117331]:   <devices>
Oct 09 16:32:47 compute-0 nova_compute[117331]:     <emulator>/usr/libexec/qemu-kvm</emulator>
Oct 09 16:32:47 compute-0 nova_compute[117331]:     <disk type="file" device="disk">
Oct 09 16:32:47 compute-0 nova_compute[117331]:       <driver name="qemu" type="qcow2" cache="none"/>
Oct 09 16:32:47 compute-0 nova_compute[117331]:       <source file="/var/lib/nova/instances/4c51c9df-777a-497d-bf84-a8001b45a4f0/disk"/>
Oct 09 16:32:47 compute-0 nova_compute[117331]:       <target dev="vda" bus="virtio"/>
Oct 09 16:32:47 compute-0 nova_compute[117331]:       <address type="pci" domain="0x0000" bus="0x03" slot="0x00" function="0x0"/>
Oct 09 16:32:47 compute-0 nova_compute[117331]:     </disk>
Oct 09 16:32:47 compute-0 nova_compute[117331]:     <disk type="file" device="cdrom">
Oct 09 16:32:47 compute-0 nova_compute[117331]:       <driver name="qemu" type="raw" cache="none"/>
Oct 09 16:32:47 compute-0 nova_compute[117331]:       <source file="/var/lib/nova/instances/4c51c9df-777a-497d-bf84-a8001b45a4f0/disk.config"/>
Oct 09 16:32:47 compute-0 nova_compute[117331]:       <target dev="sda" bus="sata"/>
Oct 09 16:32:47 compute-0 nova_compute[117331]:       <readonly/>
Oct 09 16:32:47 compute-0 nova_compute[117331]:       <address type="drive" controller="0" bus="0" target="0" unit="0"/>
Oct 09 16:32:47 compute-0 nova_compute[117331]:     </disk>
Oct 09 16:32:47 compute-0 nova_compute[117331]:     <controller type="pci" index="0" model="pcie-root"/>
Oct 09 16:32:47 compute-0 nova_compute[117331]:     <controller type="pci" index="1" model="pcie-root-port">
Oct 09 16:32:47 compute-0 nova_compute[117331]:       <model name="pcie-root-port"/>
Oct 09 16:32:47 compute-0 nova_compute[117331]:       <target chassis="1" port="0x10"/>
Oct 09 16:32:47 compute-0 nova_compute[117331]:       <address type="pci" domain="0x0000" bus="0x00" slot="0x02" function="0x0" multifunction="on"/>
Oct 09 16:32:47 compute-0 nova_compute[117331]:     </controller>
Oct 09 16:32:47 compute-0 nova_compute[117331]:     <controller type="pci" index="2" model="pcie-root-port">
Oct 09 16:32:47 compute-0 nova_compute[117331]:       <model name="pcie-root-port"/>
Oct 09 16:32:47 compute-0 nova_compute[117331]:       <target chassis="2" port="0x11"/>
Oct 09 16:32:47 compute-0 nova_compute[117331]:       <address type="pci" domain="0x0000" bus="0x00" slot="0x02" function="0x1"/>
Oct 09 16:32:47 compute-0 nova_compute[117331]:     </controller>
Oct 09 16:32:47 compute-0 nova_compute[117331]:     <controller type="pci" index="3" model="pcie-root-port">
Oct 09 16:32:47 compute-0 nova_compute[117331]:       <model name="pcie-root-port"/>
Oct 09 16:32:47 compute-0 nova_compute[117331]:       <target chassis="3" port="0x12"/>
Oct 09 16:32:47 compute-0 nova_compute[117331]:       <address type="pci" domain="0x0000" bus="0x00" slot="0x02" function="0x2"/>
Oct 09 16:32:47 compute-0 nova_compute[117331]:     </controller>
Oct 09 16:32:47 compute-0 nova_compute[117331]:     <controller type="pci" index="4" model="pcie-root-port">
Oct 09 16:32:47 compute-0 nova_compute[117331]:       <model name="pcie-root-port"/>
Oct 09 16:32:47 compute-0 nova_compute[117331]:       <target chassis="4" port="0x13"/>
Oct 09 16:32:47 compute-0 nova_compute[117331]:       <address type="pci" domain="0x0000" bus="0x00" slot="0x02" function="0x3"/>
Oct 09 16:32:47 compute-0 nova_compute[117331]:     </controller>
Oct 09 16:32:47 compute-0 nova_compute[117331]:     <controller type="pci" index="5" model="pcie-root-port">
Oct 09 16:32:47 compute-0 nova_compute[117331]:       <model name="pcie-root-port"/>
Oct 09 16:32:47 compute-0 nova_compute[117331]:       <target chassis="5" port="0x14"/>
Oct 09 16:32:47 compute-0 nova_compute[117331]:       <address type="pci" domain="0x0000" bus="0x00" slot="0x02" function="0x4"/>
Oct 09 16:32:47 compute-0 nova_compute[117331]:     </controller>
Oct 09 16:32:47 compute-0 nova_compute[117331]:     <controller type="pci" index="6" model="pcie-root-port">
Oct 09 16:32:47 compute-0 nova_compute[117331]:       <model name="pcie-root-port"/>
Oct 09 16:32:47 compute-0 nova_compute[117331]:       <target chassis="6" port="0x15"/>
Oct 09 16:32:47 compute-0 nova_compute[117331]:       <address type="pci" domain="0x0000" bus="0x00" slot="0x02" function="0x5"/>
Oct 09 16:32:47 compute-0 nova_compute[117331]:     </controller>
Oct 09 16:32:47 compute-0 nova_compute[117331]:     <controller type="pci" index="7" model="pcie-root-port">
Oct 09 16:32:47 compute-0 nova_compute[117331]:       <model name="pcie-root-port"/>
Oct 09 16:32:47 compute-0 nova_compute[117331]:       <target chassis="7" port="0x16"/>
Oct 09 16:32:47 compute-0 nova_compute[117331]:       <address type="pci" domain="0x0000" bus="0x00" slot="0x02" function="0x6"/>
Oct 09 16:32:47 compute-0 nova_compute[117331]:     </controller>
Oct 09 16:32:47 compute-0 nova_compute[117331]:     <controller type="pci" index="8" model="pcie-root-port">
Oct 09 16:32:47 compute-0 nova_compute[117331]:       <model name="pcie-root-port"/>
Oct 09 16:32:47 compute-0 nova_compute[117331]:       <target chassis="8" port="0x17"/>
Oct 09 16:32:47 compute-0 nova_compute[117331]:       <address type="pci" domain="0x0000" bus="0x00" slot="0x02" function="0x7"/>
Oct 09 16:32:47 compute-0 nova_compute[117331]:     </controller>
Oct 09 16:32:47 compute-0 nova_compute[117331]:     <controller type="pci" index="9" model="pcie-root-port">
Oct 09 16:32:47 compute-0 nova_compute[117331]:       <model name="pcie-root-port"/>
Oct 09 16:32:47 compute-0 nova_compute[117331]:       <target chassis="9" port="0x18"/>
Oct 09 16:32:47 compute-0 nova_compute[117331]:       <address type="pci" domain="0x0000" bus="0x00" slot="0x03" function="0x0" multifunction="on"/>
Oct 09 16:32:47 compute-0 nova_compute[117331]:     </controller>
Oct 09 16:32:47 compute-0 nova_compute[117331]:     <controller type="pci" index="10" model="pcie-root-port">
Oct 09 16:32:47 compute-0 nova_compute[117331]:       <model name="pcie-root-port"/>
Oct 09 16:32:47 compute-0 nova_compute[117331]:       <target chassis="10" port="0x19"/>
Oct 09 16:32:47 compute-0 nova_compute[117331]:       <address type="pci" domain="0x0000" bus="0x00" slot="0x03" function="0x1"/>
Oct 09 16:32:47 compute-0 nova_compute[117331]:     </controller>
Oct 09 16:32:47 compute-0 nova_compute[117331]:     <controller type="pci" index="11" model="pcie-root-port">
Oct 09 16:32:47 compute-0 nova_compute[117331]:       <model name="pcie-root-port"/>
Oct 09 16:32:47 compute-0 nova_compute[117331]:       <target chassis="11" port="0x1a"/>
Oct 09 16:32:47 compute-0 nova_compute[117331]:       <address type="pci" domain="0x0000" bus="0x00" slot="0x03" function="0x2"/>
Oct 09 16:32:47 compute-0 nova_compute[117331]:     </controller>
Oct 09 16:32:47 compute-0 nova_compute[117331]:     <controller type="pci" index="12" model="pcie-root-port">
Oct 09 16:32:47 compute-0 nova_compute[117331]:       <model name="pcie-root-port"/>
Oct 09 16:32:47 compute-0 nova_compute[117331]:       <target chassis="12" port="0x1b"/>
Oct 09 16:32:47 compute-0 nova_compute[117331]:       <address type="pci" domain="0x0000" bus="0x00" slot="0x03" function="0x3"/>
Oct 09 16:32:47 compute-0 nova_compute[117331]:     </controller>
Oct 09 16:32:47 compute-0 nova_compute[117331]:     <controller type="pci" index="13" model="pcie-root-port">
Oct 09 16:32:47 compute-0 nova_compute[117331]:       <model name="pcie-root-port"/>
Oct 09 16:32:47 compute-0 nova_compute[117331]:       <target chassis="13" port="0x1c"/>
Oct 09 16:32:47 compute-0 nova_compute[117331]:       <address type="pci" domain="0x0000" bus="0x00" slot="0x03" function="0x4"/>
Oct 09 16:32:47 compute-0 nova_compute[117331]:     </controller>
Oct 09 16:32:47 compute-0 nova_compute[117331]:     <controller type="pci" index="14" model="pcie-root-port">
Oct 09 16:32:47 compute-0 nova_compute[117331]:       <model name="pcie-root-port"/>
Oct 09 16:32:47 compute-0 nova_compute[117331]:       <target chassis="14" port="0x1d"/>
Oct 09 16:32:47 compute-0 nova_compute[117331]:       <address type="pci" domain="0x0000" bus="0x00" slot="0x03" function="0x5"/>
Oct 09 16:32:47 compute-0 nova_compute[117331]:     </controller>
Oct 09 16:32:47 compute-0 nova_compute[117331]:     <controller type="pci" index="15" model="pcie-root-port">
Oct 09 16:32:47 compute-0 nova_compute[117331]:       <model name="pcie-root-port"/>
Oct 09 16:32:47 compute-0 nova_compute[117331]:       <target chassis="15" port="0x1e"/>
Oct 09 16:32:47 compute-0 nova_compute[117331]:       <address type="pci" domain="0x0000" bus="0x00" slot="0x03" function="0x6"/>
Oct 09 16:32:47 compute-0 nova_compute[117331]:     </controller>
Oct 09 16:32:47 compute-0 nova_compute[117331]:     <controller type="pci" index="16" model="pcie-root-port">
Oct 09 16:32:47 compute-0 nova_compute[117331]:       <model name="pcie-root-port"/>
Oct 09 16:32:47 compute-0 nova_compute[117331]:       <target chassis="16" port="0x1f"/>
Oct 09 16:32:47 compute-0 nova_compute[117331]:       <address type="pci" domain="0x0000" bus="0x00" slot="0x03" function="0x7"/>
Oct 09 16:32:47 compute-0 nova_compute[117331]:     </controller>
Oct 09 16:32:47 compute-0 nova_compute[117331]:     <controller type="pci" index="17" model="pcie-root-port">
Oct 09 16:32:47 compute-0 nova_compute[117331]:       <model name="pcie-root-port"/>
Oct 09 16:32:47 compute-0 nova_compute[117331]:       <target chassis="17" port="0x20"/>
Oct 09 16:32:47 compute-0 nova_compute[117331]:       <address type="pci" domain="0x0000" bus="0x00" slot="0x04" function="0x0" multifunction="on"/>
Oct 09 16:32:47 compute-0 nova_compute[117331]:     </controller>
Oct 09 16:32:47 compute-0 nova_compute[117331]:     <controller type="pci" index="18" model="pcie-root-port">
Oct 09 16:32:47 compute-0 nova_compute[117331]:       <model name="pcie-root-port"/>
Oct 09 16:32:47 compute-0 nova_compute[117331]:       <target chassis="18" port="0x21"/>
Oct 09 16:32:47 compute-0 nova_compute[117331]:       <address type="pci" domain="0x0000" bus="0x00" slot="0x04" function="0x1"/>
Oct 09 16:32:47 compute-0 nova_compute[117331]:     </controller>
Oct 09 16:32:47 compute-0 nova_compute[117331]:     <controller type="pci" index="19" model="pcie-root-port">
Oct 09 16:32:47 compute-0 nova_compute[117331]:       <model name="pcie-root-port"/>
Oct 09 16:32:47 compute-0 nova_compute[117331]:       <target chassis="19" port="0x22"/>
Oct 09 16:32:47 compute-0 nova_compute[117331]:       <address type="pci" domain="0x0000" bus="0x00" slot="0x04" function="0x2"/>
Oct 09 16:32:47 compute-0 nova_compute[117331]:     </controller>
Oct 09 16:32:47 compute-0 nova_compute[117331]:     <controller type="pci" index="20" model="pcie-root-port">
Oct 09 16:32:47 compute-0 nova_compute[117331]:       <model name="pcie-root-port"/>
Oct 09 16:32:47 compute-0 nova_compute[117331]:       <target chassis="20" port="0x23"/>
Oct 09 16:32:47 compute-0 nova_compute[117331]:       <address type="pci" domain="0x0000" bus="0x00" slot="0x04" function="0x3"/>
Oct 09 16:32:47 compute-0 nova_compute[117331]:     </controller>
Oct 09 16:32:47 compute-0 nova_compute[117331]:     <controller type="pci" index="21" model="pcie-root-port">
Oct 09 16:32:47 compute-0 nova_compute[117331]:       <model name="pcie-root-port"/>
Oct 09 16:32:47 compute-0 nova_compute[117331]:       <target chassis="21" port="0x24"/>
Oct 09 16:32:47 compute-0 nova_compute[117331]:       <address type="pci" domain="0x0000" bus="0x00" slot="0x04" function="0x4"/>
Oct 09 16:32:47 compute-0 nova_compute[117331]:     </controller>
Oct 09 16:32:47 compute-0 nova_compute[117331]:     <controller type="pci" index="22" model="pcie-root-port">
Oct 09 16:32:47 compute-0 nova_compute[117331]:       <model name="pcie-root-port"/>
Oct 09 16:32:47 compute-0 nova_compute[117331]:       <target chassis="22" port="0x25"/>
Oct 09 16:32:47 compute-0 nova_compute[117331]:       <address type="pci" domain="0x0000" bus="0x00" slot="0x04" function="0x5"/>
Oct 09 16:32:47 compute-0 nova_compute[117331]:     </controller>
Oct 09 16:32:47 compute-0 nova_compute[117331]:     <controller type="pci" index="23" model="pcie-root-port">
Oct 09 16:32:47 compute-0 nova_compute[117331]:       <model name="pcie-root-port"/>
Oct 09 16:32:47 compute-0 nova_compute[117331]:       <target chassis="23" port="0x26"/>
Oct 09 16:32:47 compute-0 nova_compute[117331]:       <address type="pci" domain="0x0000" bus="0x00" slot="0x04" function="0x6"/>
Oct 09 16:32:47 compute-0 nova_compute[117331]:     </controller>
Oct 09 16:32:47 compute-0 nova_compute[117331]:     <controller type="pci" index="24" model="pcie-root-port">
Oct 09 16:32:47 compute-0 nova_compute[117331]:       <model name="pcie-root-port"/>
Oct 09 16:32:47 compute-0 nova_compute[117331]:       <target chassis="24" port="0x27"/>
Oct 09 16:32:47 compute-0 nova_compute[117331]:       <address type="pci" domain="0x0000" bus="0x00" slot="0x04" function="0x7"/>
Oct 09 16:32:47 compute-0 nova_compute[117331]:     </controller>
Oct 09 16:32:47 compute-0 nova_compute[117331]:     <controller type="pci" index="25" model="pcie-root-port">
Oct 09 16:32:47 compute-0 nova_compute[117331]:       <model name="pcie-root-port"/>
Oct 09 16:32:47 compute-0 nova_compute[117331]:       <target chassis="25" port="0x28"/>
Oct 09 16:32:47 compute-0 nova_compute[117331]:       <address type="pci" domain="0x0000" bus="0x00" slot="0x05" function="0x0"/>
Oct 09 16:32:47 compute-0 nova_compute[117331]:     </controller>
Oct 09 16:32:47 compute-0 nova_compute[117331]:     <controller type="pci" index="26" model="pcie-to-pci-bridge">
Oct 09 16:32:47 compute-0 nova_compute[117331]:       <model name="pcie-pci-bridge"/>
Oct 09 16:32:47 compute-0 nova_compute[117331]:       <address type="pci" domain="0x0000" bus="0x01" slot="0x00" function="0x0"/>
Oct 09 16:32:47 compute-0 nova_compute[117331]:     </controller>
Oct 09 16:32:47 compute-0 nova_compute[117331]:     <controller type="usb" index="0" model="piix3-uhci">
Oct 09 16:32:47 compute-0 nova_compute[117331]:       <address type="pci" domain="0x0000" bus="0x1a" slot="0x01" function="0x0"/>
Oct 09 16:32:47 compute-0 nova_compute[117331]:     </controller>
Oct 09 16:32:47 compute-0 nova_compute[117331]:     <controller type="sata" index="0">
Oct 09 16:32:47 compute-0 nova_compute[117331]:       <address type="pci" domain="0x0000" bus="0x00" slot="0x1f" function="0x2"/>
Oct 09 16:32:47 compute-0 nova_compute[117331]:     </controller>
Oct 09 16:32:47 compute-0 nova_compute[117331]:     <interface type="ethernet"><mac address="fa:16:3e:95:fd:24"/><model type="virtio"/><driver name="vhost" rx_queue_size="512"/><mtu size="1442"/><target dev="tap783ccf1e-b8"/><address type="pci" domain="0x0000" bus="0x02" slot="0x00" function="0x0"/>
Oct 09 16:32:47 compute-0 nova_compute[117331]:     </interface><serial type="pty">
Oct 09 16:32:47 compute-0 nova_compute[117331]:       <log file="/var/lib/nova/instances/4c51c9df-777a-497d-bf84-a8001b45a4f0/console.log" append="off"/>
Oct 09 16:32:47 compute-0 nova_compute[117331]:       <target type="isa-serial" port="0">
Oct 09 16:32:47 compute-0 nova_compute[117331]:         <model name="isa-serial"/>
Oct 09 16:32:47 compute-0 nova_compute[117331]:       </target>
Oct 09 16:32:47 compute-0 nova_compute[117331]:     </serial>
Oct 09 16:32:47 compute-0 nova_compute[117331]:     <console type="pty">
Oct 09 16:32:47 compute-0 nova_compute[117331]:       <log file="/var/lib/nova/instances/4c51c9df-777a-497d-bf84-a8001b45a4f0/console.log" append="off"/>
Oct 09 16:32:47 compute-0 nova_compute[117331]:       <target type="serial" port="0"/>
Oct 09 16:32:47 compute-0 nova_compute[117331]:     </console>
Oct 09 16:32:47 compute-0 nova_compute[117331]:     <input type="tablet" bus="usb">
Oct 09 16:32:47 compute-0 nova_compute[117331]:       <address type="usb" bus="0" port="1"/>
Oct 09 16:32:47 compute-0 nova_compute[117331]:     </input>
Oct 09 16:32:47 compute-0 nova_compute[117331]:     <input type="mouse" bus="ps2"/>
Oct 09 16:32:47 compute-0 nova_compute[117331]:     <graphics type="vnc" port="-1" autoport="yes" listen="::">
Oct 09 16:32:47 compute-0 nova_compute[117331]:       <listen type="address" address="::"/>
Oct 09 16:32:47 compute-0 nova_compute[117331]:     </graphics>
Oct 09 16:32:47 compute-0 nova_compute[117331]:     <video>
Oct 09 16:32:47 compute-0 nova_compute[117331]:       <model type="virtio" heads="1" primary="yes"/>
Oct 09 16:32:47 compute-0 nova_compute[117331]:       <address type="pci" domain="0x0000" bus="0x00" slot="0x01" function="0x0"/>
Oct 09 16:32:47 compute-0 nova_compute[117331]:     </video>
Oct 09 16:32:47 compute-0 nova_compute[117331]:     <memballoon model="virtio" autodeflate="on" freePageReporting="on">
Oct 09 16:32:47 compute-0 nova_compute[117331]:       <stats period="10"/>
Oct 09 16:32:47 compute-0 nova_compute[117331]:       <address type="pci" domain="0x0000" bus="0x04" slot="0x00" function="0x0"/>
Oct 09 16:32:47 compute-0 nova_compute[117331]:     </memballoon>
Oct 09 16:32:47 compute-0 nova_compute[117331]:     <rng model="virtio">
Oct 09 16:32:47 compute-0 nova_compute[117331]:       <backend model="random">/dev/urandom</backend>
Oct 09 16:32:47 compute-0 nova_compute[117331]:       <address type="pci" domain="0x0000" bus="0x05" slot="0x00" function="0x0"/>
Oct 09 16:32:47 compute-0 nova_compute[117331]:     </rng>
Oct 09 16:32:47 compute-0 nova_compute[117331]:   </devices>
Oct 09 16:32:47 compute-0 nova_compute[117331]:   <seclabel type="dynamic" model="selinux" relabel="yes"/>
Oct 09 16:32:47 compute-0 nova_compute[117331]: </domain>
Oct 09 16:32:47 compute-0 nova_compute[117331]:  _remove_cpu_shared_set_xml /usr/lib/python3.12/site-packages/nova/virt/libvirt/migration.py:241
Oct 09 16:32:47 compute-0 nova_compute[117331]: 2025-10-09 16:32:47.140 2 DEBUG nova.virt.libvirt.migration [None req-a98bf568-4bf7-459f-a8c7-be928ce9dc17 005258eefa5345409efe13f6a1de22ab c076c42271004e7aa86e84416e33f826 - - default default] _remove_cpu_shared_set_xml output xml=<domain type="kvm">
Oct 09 16:32:47 compute-0 nova_compute[117331]:   <name>instance-00000016</name>
Oct 09 16:32:47 compute-0 nova_compute[117331]:   <uuid>4c51c9df-777a-497d-bf84-a8001b45a4f0</uuid>
Oct 09 16:32:47 compute-0 nova_compute[117331]:   <metadata>
Oct 09 16:32:47 compute-0 nova_compute[117331]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Oct 09 16:32:47 compute-0 nova_compute[117331]:       <nova:package version="32.1.0-0.20251008143712.076498e.el10"/>
Oct 09 16:32:47 compute-0 nova_compute[117331]:       <nova:name>tempest-TestExecuteWorkloadBalanceStrategy-server-445686334</nova:name>
Oct 09 16:32:47 compute-0 nova_compute[117331]:       <nova:creationTime>2025-10-09 16:31:51</nova:creationTime>
Oct 09 16:32:47 compute-0 nova_compute[117331]:       <nova:flavor name="m1.nano" id="5aeb5bf9-70f3-4ab6-b418-2c3319a7d2b3">
Oct 09 16:32:47 compute-0 nova_compute[117331]:         <nova:memory>128</nova:memory>
Oct 09 16:32:47 compute-0 nova_compute[117331]:         <nova:disk>1</nova:disk>
Oct 09 16:32:47 compute-0 nova_compute[117331]:         <nova:swap>0</nova:swap>
Oct 09 16:32:47 compute-0 nova_compute[117331]:         <nova:ephemeral>0</nova:ephemeral>
Oct 09 16:32:47 compute-0 nova_compute[117331]:         <nova:vcpus>1</nova:vcpus>
Oct 09 16:32:47 compute-0 nova_compute[117331]:         <nova:extraSpecs>
Oct 09 16:32:47 compute-0 nova_compute[117331]:           <nova:extraSpec name="hw_rng:allowed">True</nova:extraSpec>
Oct 09 16:32:47 compute-0 nova_compute[117331]:         </nova:extraSpecs>
Oct 09 16:32:47 compute-0 nova_compute[117331]:       </nova:flavor>
Oct 09 16:32:47 compute-0 nova_compute[117331]:       <nova:image uuid="b7d6e0af-25e4-4227-9dc6-43143898ceee">
Oct 09 16:32:47 compute-0 nova_compute[117331]:         <nova:containerFormat>bare</nova:containerFormat>
Oct 09 16:32:47 compute-0 nova_compute[117331]:         <nova:diskFormat>qcow2</nova:diskFormat>
Oct 09 16:32:47 compute-0 nova_compute[117331]:         <nova:minDisk>1</nova:minDisk>
Oct 09 16:32:47 compute-0 nova_compute[117331]:         <nova:minRam>0</nova:minRam>
Oct 09 16:32:47 compute-0 nova_compute[117331]:         <nova:properties>
Oct 09 16:32:47 compute-0 nova_compute[117331]:           <nova:property name="hw_rng_model">virtio</nova:property>
Oct 09 16:32:47 compute-0 nova_compute[117331]:         </nova:properties>
Oct 09 16:32:47 compute-0 nova_compute[117331]:       </nova:image>
Oct 09 16:32:47 compute-0 nova_compute[117331]:       <nova:owner>
Oct 09 16:32:47 compute-0 nova_compute[117331]:         <nova:user uuid="1c793380a6e945d69dacfd07f1f156f8">tempest-TestExecuteWorkloadBalanceStrategy-2100042169-project-admin</nova:user>
Oct 09 16:32:47 compute-0 nova_compute[117331]:         <nova:project uuid="6d67ac3076434a4582e5db1ca7d043ff">tempest-TestExecuteWorkloadBalanceStrategy-2100042169</nova:project>
Oct 09 16:32:47 compute-0 nova_compute[117331]:       </nova:owner>
Oct 09 16:32:47 compute-0 nova_compute[117331]:       <nova:root type="image" uuid="b7d6e0af-25e4-4227-9dc6-43143898ceee"/>
Oct 09 16:32:47 compute-0 nova_compute[117331]:       <nova:ports>
Oct 09 16:32:47 compute-0 nova_compute[117331]:         <nova:port uuid="783ccf1e-b85c-4420-b3e9-705ddf5495d3">
Oct 09 16:32:47 compute-0 nova_compute[117331]:           <nova:ip type="fixed" address="10.100.0.10" ipVersion="4"/>
Oct 09 16:32:47 compute-0 nova_compute[117331]:         </nova:port>
Oct 09 16:32:47 compute-0 nova_compute[117331]:       </nova:ports>
Oct 09 16:32:47 compute-0 nova_compute[117331]:     </nova:instance>
Oct 09 16:32:47 compute-0 nova_compute[117331]:   </metadata>
Oct 09 16:32:47 compute-0 nova_compute[117331]:   <memory unit="KiB">131072</memory>
Oct 09 16:32:47 compute-0 nova_compute[117331]:   <currentMemory unit="KiB">131072</currentMemory>
Oct 09 16:32:47 compute-0 nova_compute[117331]:   <vcpu placement="static">1</vcpu>
Oct 09 16:32:47 compute-0 nova_compute[117331]:   <resource>
Oct 09 16:32:47 compute-0 nova_compute[117331]:     <partition>/machine</partition>
Oct 09 16:32:47 compute-0 nova_compute[117331]:   </resource>
Oct 09 16:32:47 compute-0 nova_compute[117331]:   <sysinfo type="smbios">
Oct 09 16:32:47 compute-0 nova_compute[117331]:     <system>
Oct 09 16:32:47 compute-0 nova_compute[117331]:       <entry name="manufacturer">RDO</entry>
Oct 09 16:32:47 compute-0 nova_compute[117331]:       <entry name="product">OpenStack Compute</entry>
Oct 09 16:32:47 compute-0 nova_compute[117331]:       <entry name="version">32.1.0-0.20251008143712.076498e.el10</entry>
Oct 09 16:32:47 compute-0 nova_compute[117331]:       <entry name="serial">4c51c9df-777a-497d-bf84-a8001b45a4f0</entry>
Oct 09 16:32:47 compute-0 nova_compute[117331]:       <entry name="uuid">4c51c9df-777a-497d-bf84-a8001b45a4f0</entry>
Oct 09 16:32:47 compute-0 nova_compute[117331]:       <entry name="family">Virtual Machine</entry>
Oct 09 16:32:47 compute-0 nova_compute[117331]:     </system>
Oct 09 16:32:47 compute-0 nova_compute[117331]:   </sysinfo>
Oct 09 16:32:47 compute-0 nova_compute[117331]:   <os>
Oct 09 16:32:47 compute-0 nova_compute[117331]:     <type arch="x86_64" machine="pc-q35-rhel9.6.0">hvm</type>
Oct 09 16:32:47 compute-0 nova_compute[117331]:     <boot dev="hd"/>
Oct 09 16:32:47 compute-0 nova_compute[117331]:     <smbios mode="sysinfo"/>
Oct 09 16:32:47 compute-0 nova_compute[117331]:   </os>
Oct 09 16:32:47 compute-0 nova_compute[117331]:   <features>
Oct 09 16:32:47 compute-0 nova_compute[117331]:     <acpi/>
Oct 09 16:32:47 compute-0 nova_compute[117331]:     <apic/>
Oct 09 16:32:47 compute-0 nova_compute[117331]:     <vmcoreinfo state="on"/>
Oct 09 16:32:47 compute-0 nova_compute[117331]:   </features>
Oct 09 16:32:47 compute-0 nova_compute[117331]:   <cpu mode="host-model" check="partial">
Oct 09 16:32:47 compute-0 nova_compute[117331]:     <topology sockets="1" dies="1" clusters="1" cores="1" threads="1"/>
Oct 09 16:32:47 compute-0 nova_compute[117331]:   </cpu>
Oct 09 16:32:47 compute-0 nova_compute[117331]:   <clock offset="utc">
Oct 09 16:32:47 compute-0 nova_compute[117331]:     <timer name="pit" tickpolicy="delay"/>
Oct 09 16:32:47 compute-0 nova_compute[117331]:     <timer name="rtc" tickpolicy="catchup"/>
Oct 09 16:32:47 compute-0 nova_compute[117331]:     <timer name="hpet" present="no"/>
Oct 09 16:32:47 compute-0 nova_compute[117331]:   </clock>
Oct 09 16:32:47 compute-0 nova_compute[117331]:   <on_poweroff>destroy</on_poweroff>
Oct 09 16:32:47 compute-0 nova_compute[117331]:   <on_reboot>restart</on_reboot>
Oct 09 16:32:47 compute-0 nova_compute[117331]:   <on_crash>destroy</on_crash>
Oct 09 16:32:47 compute-0 nova_compute[117331]:   <devices>
Oct 09 16:32:47 compute-0 nova_compute[117331]:     <emulator>/usr/libexec/qemu-kvm</emulator>
Oct 09 16:32:47 compute-0 nova_compute[117331]:     <disk type="file" device="disk">
Oct 09 16:32:47 compute-0 nova_compute[117331]:       <driver name="qemu" type="qcow2" cache="none"/>
Oct 09 16:32:47 compute-0 nova_compute[117331]:       <source file="/var/lib/nova/instances/4c51c9df-777a-497d-bf84-a8001b45a4f0/disk"/>
Oct 09 16:32:47 compute-0 nova_compute[117331]:       <target dev="vda" bus="virtio"/>
Oct 09 16:32:47 compute-0 nova_compute[117331]:       <address type="pci" domain="0x0000" bus="0x03" slot="0x00" function="0x0"/>
Oct 09 16:32:47 compute-0 nova_compute[117331]:     </disk>
Oct 09 16:32:47 compute-0 nova_compute[117331]:     <disk type="file" device="cdrom">
Oct 09 16:32:47 compute-0 nova_compute[117331]:       <driver name="qemu" type="raw" cache="none"/>
Oct 09 16:32:47 compute-0 nova_compute[117331]:       <source file="/var/lib/nova/instances/4c51c9df-777a-497d-bf84-a8001b45a4f0/disk.config"/>
Oct 09 16:32:47 compute-0 nova_compute[117331]:       <target dev="sda" bus="sata"/>
Oct 09 16:32:47 compute-0 nova_compute[117331]:       <readonly/>
Oct 09 16:32:47 compute-0 nova_compute[117331]:       <address type="drive" controller="0" bus="0" target="0" unit="0"/>
Oct 09 16:32:47 compute-0 nova_compute[117331]:     </disk>
Oct 09 16:32:47 compute-0 nova_compute[117331]:     <controller type="pci" index="0" model="pcie-root"/>
Oct 09 16:32:47 compute-0 nova_compute[117331]:     <controller type="pci" index="1" model="pcie-root-port">
Oct 09 16:32:47 compute-0 nova_compute[117331]:       <model name="pcie-root-port"/>
Oct 09 16:32:47 compute-0 nova_compute[117331]:       <target chassis="1" port="0x10"/>
Oct 09 16:32:47 compute-0 nova_compute[117331]:       <address type="pci" domain="0x0000" bus="0x00" slot="0x02" function="0x0" multifunction="on"/>
Oct 09 16:32:47 compute-0 nova_compute[117331]:     </controller>
Oct 09 16:32:47 compute-0 nova_compute[117331]:     <controller type="pci" index="2" model="pcie-root-port">
Oct 09 16:32:47 compute-0 nova_compute[117331]:       <model name="pcie-root-port"/>
Oct 09 16:32:47 compute-0 nova_compute[117331]:       <target chassis="2" port="0x11"/>
Oct 09 16:32:47 compute-0 nova_compute[117331]:       <address type="pci" domain="0x0000" bus="0x00" slot="0x02" function="0x1"/>
Oct 09 16:32:47 compute-0 nova_compute[117331]:     </controller>
Oct 09 16:32:47 compute-0 nova_compute[117331]:     <controller type="pci" index="3" model="pcie-root-port">
Oct 09 16:32:47 compute-0 nova_compute[117331]:       <model name="pcie-root-port"/>
Oct 09 16:32:47 compute-0 nova_compute[117331]:       <target chassis="3" port="0x12"/>
Oct 09 16:32:47 compute-0 nova_compute[117331]:       <address type="pci" domain="0x0000" bus="0x00" slot="0x02" function="0x2"/>
Oct 09 16:32:47 compute-0 nova_compute[117331]:     </controller>
Oct 09 16:32:47 compute-0 nova_compute[117331]:     <controller type="pci" index="4" model="pcie-root-port">
Oct 09 16:32:47 compute-0 nova_compute[117331]:       <model name="pcie-root-port"/>
Oct 09 16:32:47 compute-0 nova_compute[117331]:       <target chassis="4" port="0x13"/>
Oct 09 16:32:47 compute-0 nova_compute[117331]:       <address type="pci" domain="0x0000" bus="0x00" slot="0x02" function="0x3"/>
Oct 09 16:32:47 compute-0 nova_compute[117331]:     </controller>
Oct 09 16:32:47 compute-0 nova_compute[117331]:     <controller type="pci" index="5" model="pcie-root-port">
Oct 09 16:32:47 compute-0 nova_compute[117331]:       <model name="pcie-root-port"/>
Oct 09 16:32:47 compute-0 nova_compute[117331]:       <target chassis="5" port="0x14"/>
Oct 09 16:32:47 compute-0 nova_compute[117331]:       <address type="pci" domain="0x0000" bus="0x00" slot="0x02" function="0x4"/>
Oct 09 16:32:47 compute-0 nova_compute[117331]:     </controller>
Oct 09 16:32:47 compute-0 nova_compute[117331]:     <controller type="pci" index="6" model="pcie-root-port">
Oct 09 16:32:47 compute-0 nova_compute[117331]:       <model name="pcie-root-port"/>
Oct 09 16:32:47 compute-0 nova_compute[117331]:       <target chassis="6" port="0x15"/>
Oct 09 16:32:47 compute-0 nova_compute[117331]:       <address type="pci" domain="0x0000" bus="0x00" slot="0x02" function="0x5"/>
Oct 09 16:32:47 compute-0 nova_compute[117331]:     </controller>
Oct 09 16:32:47 compute-0 nova_compute[117331]:     <controller type="pci" index="7" model="pcie-root-port">
Oct 09 16:32:47 compute-0 nova_compute[117331]:       <model name="pcie-root-port"/>
Oct 09 16:32:47 compute-0 nova_compute[117331]:       <target chassis="7" port="0x16"/>
Oct 09 16:32:47 compute-0 nova_compute[117331]:       <address type="pci" domain="0x0000" bus="0x00" slot="0x02" function="0x6"/>
Oct 09 16:32:47 compute-0 nova_compute[117331]:     </controller>
Oct 09 16:32:47 compute-0 nova_compute[117331]:     <controller type="pci" index="8" model="pcie-root-port">
Oct 09 16:32:47 compute-0 nova_compute[117331]:       <model name="pcie-root-port"/>
Oct 09 16:32:47 compute-0 nova_compute[117331]:       <target chassis="8" port="0x17"/>
Oct 09 16:32:47 compute-0 nova_compute[117331]:       <address type="pci" domain="0x0000" bus="0x00" slot="0x02" function="0x7"/>
Oct 09 16:32:47 compute-0 nova_compute[117331]:     </controller>
Oct 09 16:32:47 compute-0 nova_compute[117331]:     <controller type="pci" index="9" model="pcie-root-port">
Oct 09 16:32:47 compute-0 nova_compute[117331]:       <model name="pcie-root-port"/>
Oct 09 16:32:47 compute-0 nova_compute[117331]:       <target chassis="9" port="0x18"/>
Oct 09 16:32:47 compute-0 nova_compute[117331]:       <address type="pci" domain="0x0000" bus="0x00" slot="0x03" function="0x0" multifunction="on"/>
Oct 09 16:32:47 compute-0 nova_compute[117331]:     </controller>
Oct 09 16:32:47 compute-0 nova_compute[117331]:     <controller type="pci" index="10" model="pcie-root-port">
Oct 09 16:32:47 compute-0 nova_compute[117331]:       <model name="pcie-root-port"/>
Oct 09 16:32:47 compute-0 nova_compute[117331]:       <target chassis="10" port="0x19"/>
Oct 09 16:32:47 compute-0 nova_compute[117331]:       <address type="pci" domain="0x0000" bus="0x00" slot="0x03" function="0x1"/>
Oct 09 16:32:47 compute-0 nova_compute[117331]:     </controller>
Oct 09 16:32:47 compute-0 nova_compute[117331]:     <controller type="pci" index="11" model="pcie-root-port">
Oct 09 16:32:47 compute-0 nova_compute[117331]:       <model name="pcie-root-port"/>
Oct 09 16:32:47 compute-0 nova_compute[117331]:       <target chassis="11" port="0x1a"/>
Oct 09 16:32:47 compute-0 nova_compute[117331]:       <address type="pci" domain="0x0000" bus="0x00" slot="0x03" function="0x2"/>
Oct 09 16:32:47 compute-0 nova_compute[117331]:     </controller>
Oct 09 16:32:47 compute-0 nova_compute[117331]:     <controller type="pci" index="12" model="pcie-root-port">
Oct 09 16:32:47 compute-0 nova_compute[117331]:       <model name="pcie-root-port"/>
Oct 09 16:32:47 compute-0 nova_compute[117331]:       <target chassis="12" port="0x1b"/>
Oct 09 16:32:47 compute-0 nova_compute[117331]:       <address type="pci" domain="0x0000" bus="0x00" slot="0x03" function="0x3"/>
Oct 09 16:32:47 compute-0 nova_compute[117331]:     </controller>
Oct 09 16:32:47 compute-0 nova_compute[117331]:     <controller type="pci" index="13" model="pcie-root-port">
Oct 09 16:32:47 compute-0 nova_compute[117331]:       <model name="pcie-root-port"/>
Oct 09 16:32:47 compute-0 nova_compute[117331]:       <target chassis="13" port="0x1c"/>
Oct 09 16:32:47 compute-0 nova_compute[117331]:       <address type="pci" domain="0x0000" bus="0x00" slot="0x03" function="0x4"/>
Oct 09 16:32:47 compute-0 nova_compute[117331]:     </controller>
Oct 09 16:32:47 compute-0 nova_compute[117331]:     <controller type="pci" index="14" model="pcie-root-port">
Oct 09 16:32:47 compute-0 nova_compute[117331]:       <model name="pcie-root-port"/>
Oct 09 16:32:47 compute-0 nova_compute[117331]:       <target chassis="14" port="0x1d"/>
Oct 09 16:32:47 compute-0 nova_compute[117331]:       <address type="pci" domain="0x0000" bus="0x00" slot="0x03" function="0x5"/>
Oct 09 16:32:47 compute-0 nova_compute[117331]:     </controller>
Oct 09 16:32:47 compute-0 nova_compute[117331]:     <controller type="pci" index="15" model="pcie-root-port">
Oct 09 16:32:47 compute-0 nova_compute[117331]:       <model name="pcie-root-port"/>
Oct 09 16:32:47 compute-0 nova_compute[117331]:       <target chassis="15" port="0x1e"/>
Oct 09 16:32:47 compute-0 nova_compute[117331]:       <address type="pci" domain="0x0000" bus="0x00" slot="0x03" function="0x6"/>
Oct 09 16:32:47 compute-0 nova_compute[117331]:     </controller>
Oct 09 16:32:47 compute-0 nova_compute[117331]:     <controller type="pci" index="16" model="pcie-root-port">
Oct 09 16:32:47 compute-0 nova_compute[117331]:       <model name="pcie-root-port"/>
Oct 09 16:32:47 compute-0 nova_compute[117331]:       <target chassis="16" port="0x1f"/>
Oct 09 16:32:47 compute-0 nova_compute[117331]:       <address type="pci" domain="0x0000" bus="0x00" slot="0x03" function="0x7"/>
Oct 09 16:32:47 compute-0 nova_compute[117331]:     </controller>
Oct 09 16:32:47 compute-0 nova_compute[117331]:     <controller type="pci" index="17" model="pcie-root-port">
Oct 09 16:32:47 compute-0 nova_compute[117331]:       <model name="pcie-root-port"/>
Oct 09 16:32:47 compute-0 nova_compute[117331]:       <target chassis="17" port="0x20"/>
Oct 09 16:32:47 compute-0 nova_compute[117331]:       <address type="pci" domain="0x0000" bus="0x00" slot="0x04" function="0x0" multifunction="on"/>
Oct 09 16:32:47 compute-0 nova_compute[117331]:     </controller>
Oct 09 16:32:47 compute-0 nova_compute[117331]:     <controller type="pci" index="18" model="pcie-root-port">
Oct 09 16:32:47 compute-0 nova_compute[117331]:       <model name="pcie-root-port"/>
Oct 09 16:32:47 compute-0 nova_compute[117331]:       <target chassis="18" port="0x21"/>
Oct 09 16:32:47 compute-0 nova_compute[117331]:       <address type="pci" domain="0x0000" bus="0x00" slot="0x04" function="0x1"/>
Oct 09 16:32:47 compute-0 nova_compute[117331]:     </controller>
Oct 09 16:32:47 compute-0 nova_compute[117331]:     <controller type="pci" index="19" model="pcie-root-port">
Oct 09 16:32:47 compute-0 nova_compute[117331]:       <model name="pcie-root-port"/>
Oct 09 16:32:47 compute-0 nova_compute[117331]:       <target chassis="19" port="0x22"/>
Oct 09 16:32:47 compute-0 nova_compute[117331]:       <address type="pci" domain="0x0000" bus="0x00" slot="0x04" function="0x2"/>
Oct 09 16:32:47 compute-0 nova_compute[117331]:     </controller>
Oct 09 16:32:47 compute-0 nova_compute[117331]:     <controller type="pci" index="20" model="pcie-root-port">
Oct 09 16:32:47 compute-0 nova_compute[117331]:       <model name="pcie-root-port"/>
Oct 09 16:32:47 compute-0 nova_compute[117331]:       <target chassis="20" port="0x23"/>
Oct 09 16:32:47 compute-0 nova_compute[117331]:       <address type="pci" domain="0x0000" bus="0x00" slot="0x04" function="0x3"/>
Oct 09 16:32:47 compute-0 nova_compute[117331]:     </controller>
Oct 09 16:32:47 compute-0 nova_compute[117331]:     <controller type="pci" index="21" model="pcie-root-port">
Oct 09 16:32:47 compute-0 nova_compute[117331]:       <model name="pcie-root-port"/>
Oct 09 16:32:47 compute-0 nova_compute[117331]:       <target chassis="21" port="0x24"/>
Oct 09 16:32:47 compute-0 nova_compute[117331]:       <address type="pci" domain="0x0000" bus="0x00" slot="0x04" function="0x4"/>
Oct 09 16:32:47 compute-0 nova_compute[117331]:     </controller>
Oct 09 16:32:47 compute-0 nova_compute[117331]:     <controller type="pci" index="22" model="pcie-root-port">
Oct 09 16:32:47 compute-0 nova_compute[117331]:       <model name="pcie-root-port"/>
Oct 09 16:32:47 compute-0 nova_compute[117331]:       <target chassis="22" port="0x25"/>
Oct 09 16:32:47 compute-0 nova_compute[117331]:       <address type="pci" domain="0x0000" bus="0x00" slot="0x04" function="0x5"/>
Oct 09 16:32:47 compute-0 nova_compute[117331]:     </controller>
Oct 09 16:32:47 compute-0 nova_compute[117331]:     <controller type="pci" index="23" model="pcie-root-port">
Oct 09 16:32:47 compute-0 nova_compute[117331]:       <model name="pcie-root-port"/>
Oct 09 16:32:47 compute-0 nova_compute[117331]:       <target chassis="23" port="0x26"/>
Oct 09 16:32:47 compute-0 nova_compute[117331]:       <address type="pci" domain="0x0000" bus="0x00" slot="0x04" function="0x6"/>
Oct 09 16:32:47 compute-0 nova_compute[117331]:     </controller>
Oct 09 16:32:47 compute-0 nova_compute[117331]:     <controller type="pci" index="24" model="pcie-root-port">
Oct 09 16:32:47 compute-0 nova_compute[117331]:       <model name="pcie-root-port"/>
Oct 09 16:32:47 compute-0 nova_compute[117331]:       <target chassis="24" port="0x27"/>
Oct 09 16:32:47 compute-0 nova_compute[117331]:       <address type="pci" domain="0x0000" bus="0x00" slot="0x04" function="0x7"/>
Oct 09 16:32:47 compute-0 nova_compute[117331]:     </controller>
Oct 09 16:32:47 compute-0 nova_compute[117331]:     <controller type="pci" index="25" model="pcie-root-port">
Oct 09 16:32:47 compute-0 nova_compute[117331]:       <model name="pcie-root-port"/>
Oct 09 16:32:47 compute-0 nova_compute[117331]:       <target chassis="25" port="0x28"/>
Oct 09 16:32:47 compute-0 nova_compute[117331]:       <address type="pci" domain="0x0000" bus="0x00" slot="0x05" function="0x0"/>
Oct 09 16:32:47 compute-0 nova_compute[117331]:     </controller>
Oct 09 16:32:47 compute-0 nova_compute[117331]:     <controller type="pci" index="26" model="pcie-to-pci-bridge">
Oct 09 16:32:47 compute-0 nova_compute[117331]:       <model name="pcie-pci-bridge"/>
Oct 09 16:32:47 compute-0 nova_compute[117331]:       <address type="pci" domain="0x0000" bus="0x01" slot="0x00" function="0x0"/>
Oct 09 16:32:47 compute-0 nova_compute[117331]:     </controller>
Oct 09 16:32:47 compute-0 nova_compute[117331]:     <controller type="usb" index="0" model="piix3-uhci">
Oct 09 16:32:47 compute-0 nova_compute[117331]:       <address type="pci" domain="0x0000" bus="0x1a" slot="0x01" function="0x0"/>
Oct 09 16:32:47 compute-0 nova_compute[117331]:     </controller>
Oct 09 16:32:47 compute-0 nova_compute[117331]:     <controller type="sata" index="0">
Oct 09 16:32:47 compute-0 nova_compute[117331]:       <address type="pci" domain="0x0000" bus="0x00" slot="0x1f" function="0x2"/>
Oct 09 16:32:47 compute-0 nova_compute[117331]:     </controller>
Oct 09 16:32:47 compute-0 nova_compute[117331]:     <interface type="ethernet"><mac address="fa:16:3e:95:fd:24"/><model type="virtio"/><driver name="vhost" rx_queue_size="512"/><mtu size="1442"/><target dev="tap783ccf1e-b8"/><address type="pci" domain="0x0000" bus="0x02" slot="0x00" function="0x0"/>
Oct 09 16:32:47 compute-0 nova_compute[117331]:     </interface><serial type="pty">
Oct 09 16:32:47 compute-0 nova_compute[117331]:       <log file="/var/lib/nova/instances/4c51c9df-777a-497d-bf84-a8001b45a4f0/console.log" append="off"/>
Oct 09 16:32:47 compute-0 nova_compute[117331]:       <target type="isa-serial" port="0">
Oct 09 16:32:47 compute-0 nova_compute[117331]:         <model name="isa-serial"/>
Oct 09 16:32:47 compute-0 nova_compute[117331]:       </target>
Oct 09 16:32:47 compute-0 nova_compute[117331]:     </serial>
Oct 09 16:32:47 compute-0 nova_compute[117331]:     <console type="pty">
Oct 09 16:32:47 compute-0 nova_compute[117331]:       <log file="/var/lib/nova/instances/4c51c9df-777a-497d-bf84-a8001b45a4f0/console.log" append="off"/>
Oct 09 16:32:47 compute-0 nova_compute[117331]:       <target type="serial" port="0"/>
Oct 09 16:32:47 compute-0 nova_compute[117331]:     </console>
Oct 09 16:32:47 compute-0 nova_compute[117331]:     <input type="tablet" bus="usb">
Oct 09 16:32:47 compute-0 nova_compute[117331]:       <address type="usb" bus="0" port="1"/>
Oct 09 16:32:47 compute-0 nova_compute[117331]:     </input>
Oct 09 16:32:47 compute-0 nova_compute[117331]:     <input type="mouse" bus="ps2"/>
Oct 09 16:32:47 compute-0 nova_compute[117331]:     <graphics type="vnc" port="-1" autoport="yes" listen="::">
Oct 09 16:32:47 compute-0 nova_compute[117331]:       <listen type="address" address="::"/>
Oct 09 16:32:47 compute-0 nova_compute[117331]:     </graphics>
Oct 09 16:32:47 compute-0 nova_compute[117331]:     <video>
Oct 09 16:32:47 compute-0 nova_compute[117331]:       <model type="virtio" heads="1" primary="yes"/>
Oct 09 16:32:47 compute-0 nova_compute[117331]:       <address type="pci" domain="0x0000" bus="0x00" slot="0x01" function="0x0"/>
Oct 09 16:32:47 compute-0 nova_compute[117331]:     </video>
Oct 09 16:32:47 compute-0 nova_compute[117331]:     <memballoon model="virtio" autodeflate="on" freePageReporting="on">
Oct 09 16:32:47 compute-0 nova_compute[117331]:       <stats period="10"/>
Oct 09 16:32:47 compute-0 nova_compute[117331]:       <address type="pci" domain="0x0000" bus="0x04" slot="0x00" function="0x0"/>
Oct 09 16:32:47 compute-0 nova_compute[117331]:     </memballoon>
Oct 09 16:32:47 compute-0 nova_compute[117331]:     <rng model="virtio">
Oct 09 16:32:47 compute-0 nova_compute[117331]:       <backend model="random">/dev/urandom</backend>
Oct 09 16:32:47 compute-0 nova_compute[117331]:       <address type="pci" domain="0x0000" bus="0x05" slot="0x00" function="0x0"/>
Oct 09 16:32:47 compute-0 nova_compute[117331]:     </rng>
Oct 09 16:32:47 compute-0 nova_compute[117331]:   </devices>
Oct 09 16:32:47 compute-0 nova_compute[117331]:   <seclabel type="dynamic" model="selinux" relabel="yes"/>
Oct 09 16:32:47 compute-0 nova_compute[117331]: </domain>
Oct 09 16:32:47 compute-0 nova_compute[117331]:  _remove_cpu_shared_set_xml /usr/lib/python3.12/site-packages/nova/virt/libvirt/migration.py:250
Oct 09 16:32:47 compute-0 nova_compute[117331]: 2025-10-09 16:32:47.141 2 DEBUG nova.virt.libvirt.migration [None req-a98bf568-4bf7-459f-a8c7-be928ce9dc17 005258eefa5345409efe13f6a1de22ab c076c42271004e7aa86e84416e33f826 - - default default] _update_pci_xml output xml=<domain type="kvm">
Oct 09 16:32:47 compute-0 nova_compute[117331]:   <name>instance-00000016</name>
Oct 09 16:32:47 compute-0 nova_compute[117331]:   <uuid>4c51c9df-777a-497d-bf84-a8001b45a4f0</uuid>
Oct 09 16:32:47 compute-0 nova_compute[117331]:   <metadata>
Oct 09 16:32:47 compute-0 nova_compute[117331]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Oct 09 16:32:47 compute-0 nova_compute[117331]:       <nova:package version="32.1.0-0.20251008143712.076498e.el10"/>
Oct 09 16:32:47 compute-0 nova_compute[117331]:       <nova:name>tempest-TestExecuteWorkloadBalanceStrategy-server-445686334</nova:name>
Oct 09 16:32:47 compute-0 nova_compute[117331]:       <nova:creationTime>2025-10-09 16:31:51</nova:creationTime>
Oct 09 16:32:47 compute-0 nova_compute[117331]:       <nova:flavor name="m1.nano" id="5aeb5bf9-70f3-4ab6-b418-2c3319a7d2b3">
Oct 09 16:32:47 compute-0 nova_compute[117331]:         <nova:memory>128</nova:memory>
Oct 09 16:32:47 compute-0 nova_compute[117331]:         <nova:disk>1</nova:disk>
Oct 09 16:32:47 compute-0 nova_compute[117331]:         <nova:swap>0</nova:swap>
Oct 09 16:32:47 compute-0 nova_compute[117331]:         <nova:ephemeral>0</nova:ephemeral>
Oct 09 16:32:47 compute-0 nova_compute[117331]:         <nova:vcpus>1</nova:vcpus>
Oct 09 16:32:47 compute-0 nova_compute[117331]:         <nova:extraSpecs>
Oct 09 16:32:47 compute-0 nova_compute[117331]:           <nova:extraSpec name="hw_rng:allowed">True</nova:extraSpec>
Oct 09 16:32:47 compute-0 nova_compute[117331]:         </nova:extraSpecs>
Oct 09 16:32:47 compute-0 nova_compute[117331]:       </nova:flavor>
Oct 09 16:32:47 compute-0 nova_compute[117331]:       <nova:image uuid="b7d6e0af-25e4-4227-9dc6-43143898ceee">
Oct 09 16:32:47 compute-0 nova_compute[117331]:         <nova:containerFormat>bare</nova:containerFormat>
Oct 09 16:32:47 compute-0 nova_compute[117331]:         <nova:diskFormat>qcow2</nova:diskFormat>
Oct 09 16:32:47 compute-0 nova_compute[117331]:         <nova:minDisk>1</nova:minDisk>
Oct 09 16:32:47 compute-0 nova_compute[117331]:         <nova:minRam>0</nova:minRam>
Oct 09 16:32:47 compute-0 nova_compute[117331]:         <nova:properties>
Oct 09 16:32:47 compute-0 nova_compute[117331]:           <nova:property name="hw_rng_model">virtio</nova:property>
Oct 09 16:32:47 compute-0 nova_compute[117331]:         </nova:properties>
Oct 09 16:32:47 compute-0 nova_compute[117331]:       </nova:image>
Oct 09 16:32:47 compute-0 nova_compute[117331]:       <nova:owner>
Oct 09 16:32:47 compute-0 nova_compute[117331]:         <nova:user uuid="1c793380a6e945d69dacfd07f1f156f8">tempest-TestExecuteWorkloadBalanceStrategy-2100042169-project-admin</nova:user>
Oct 09 16:32:47 compute-0 nova_compute[117331]:         <nova:project uuid="6d67ac3076434a4582e5db1ca7d043ff">tempest-TestExecuteWorkloadBalanceStrategy-2100042169</nova:project>
Oct 09 16:32:47 compute-0 nova_compute[117331]:       </nova:owner>
Oct 09 16:32:47 compute-0 nova_compute[117331]:       <nova:root type="image" uuid="b7d6e0af-25e4-4227-9dc6-43143898ceee"/>
Oct 09 16:32:47 compute-0 nova_compute[117331]:       <nova:ports>
Oct 09 16:32:47 compute-0 nova_compute[117331]:         <nova:port uuid="783ccf1e-b85c-4420-b3e9-705ddf5495d3">
Oct 09 16:32:47 compute-0 nova_compute[117331]:           <nova:ip type="fixed" address="10.100.0.10" ipVersion="4"/>
Oct 09 16:32:47 compute-0 nova_compute[117331]:         </nova:port>
Oct 09 16:32:47 compute-0 nova_compute[117331]:       </nova:ports>
Oct 09 16:32:47 compute-0 nova_compute[117331]:     </nova:instance>
Oct 09 16:32:47 compute-0 nova_compute[117331]:   </metadata>
Oct 09 16:32:47 compute-0 nova_compute[117331]:   <memory unit="KiB">131072</memory>
Oct 09 16:32:47 compute-0 nova_compute[117331]:   <currentMemory unit="KiB">131072</currentMemory>
Oct 09 16:32:47 compute-0 nova_compute[117331]:   <vcpu placement="static">1</vcpu>
Oct 09 16:32:47 compute-0 nova_compute[117331]:   <resource>
Oct 09 16:32:47 compute-0 nova_compute[117331]:     <partition>/machine</partition>
Oct 09 16:32:47 compute-0 nova_compute[117331]:   </resource>
Oct 09 16:32:47 compute-0 nova_compute[117331]:   <sysinfo type="smbios">
Oct 09 16:32:47 compute-0 nova_compute[117331]:     <system>
Oct 09 16:32:47 compute-0 nova_compute[117331]:       <entry name="manufacturer">RDO</entry>
Oct 09 16:32:47 compute-0 nova_compute[117331]:       <entry name="product">OpenStack Compute</entry>
Oct 09 16:32:47 compute-0 nova_compute[117331]:       <entry name="version">32.1.0-0.20251008143712.076498e.el10</entry>
Oct 09 16:32:47 compute-0 nova_compute[117331]:       <entry name="serial">4c51c9df-777a-497d-bf84-a8001b45a4f0</entry>
Oct 09 16:32:47 compute-0 nova_compute[117331]:       <entry name="uuid">4c51c9df-777a-497d-bf84-a8001b45a4f0</entry>
Oct 09 16:32:47 compute-0 nova_compute[117331]:       <entry name="family">Virtual Machine</entry>
Oct 09 16:32:47 compute-0 nova_compute[117331]:     </system>
Oct 09 16:32:47 compute-0 nova_compute[117331]:   </sysinfo>
Oct 09 16:32:47 compute-0 nova_compute[117331]:   <os>
Oct 09 16:32:47 compute-0 nova_compute[117331]:     <type arch="x86_64" machine="pc-q35-rhel9.6.0">hvm</type>
Oct 09 16:32:47 compute-0 nova_compute[117331]:     <boot dev="hd"/>
Oct 09 16:32:47 compute-0 nova_compute[117331]:     <smbios mode="sysinfo"/>
Oct 09 16:32:47 compute-0 nova_compute[117331]:   </os>
Oct 09 16:32:47 compute-0 nova_compute[117331]:   <features>
Oct 09 16:32:47 compute-0 nova_compute[117331]:     <acpi/>
Oct 09 16:32:47 compute-0 nova_compute[117331]:     <apic/>
Oct 09 16:32:47 compute-0 nova_compute[117331]:     <vmcoreinfo state="on"/>
Oct 09 16:32:47 compute-0 nova_compute[117331]:   </features>
Oct 09 16:32:47 compute-0 nova_compute[117331]:   <cpu mode="host-model" check="partial">
Oct 09 16:32:47 compute-0 nova_compute[117331]:     <topology sockets="1" dies="1" clusters="1" cores="1" threads="1"/>
Oct 09 16:32:47 compute-0 nova_compute[117331]:   </cpu>
Oct 09 16:32:47 compute-0 nova_compute[117331]:   <clock offset="utc">
Oct 09 16:32:47 compute-0 nova_compute[117331]:     <timer name="pit" tickpolicy="delay"/>
Oct 09 16:32:47 compute-0 nova_compute[117331]:     <timer name="rtc" tickpolicy="catchup"/>
Oct 09 16:32:47 compute-0 nova_compute[117331]:     <timer name="hpet" present="no"/>
Oct 09 16:32:47 compute-0 nova_compute[117331]:   </clock>
Oct 09 16:32:47 compute-0 nova_compute[117331]:   <on_poweroff>destroy</on_poweroff>
Oct 09 16:32:47 compute-0 nova_compute[117331]:   <on_reboot>restart</on_reboot>
Oct 09 16:32:47 compute-0 nova_compute[117331]:   <on_crash>destroy</on_crash>
Oct 09 16:32:47 compute-0 nova_compute[117331]:   <devices>
Oct 09 16:32:47 compute-0 nova_compute[117331]:     <emulator>/usr/libexec/qemu-kvm</emulator>
Oct 09 16:32:47 compute-0 nova_compute[117331]:     <disk type="file" device="disk">
Oct 09 16:32:47 compute-0 nova_compute[117331]:       <driver name="qemu" type="qcow2" cache="none"/>
Oct 09 16:32:47 compute-0 nova_compute[117331]:       <source file="/var/lib/nova/instances/4c51c9df-777a-497d-bf84-a8001b45a4f0/disk"/>
Oct 09 16:32:47 compute-0 nova_compute[117331]:       <target dev="vda" bus="virtio"/>
Oct 09 16:32:47 compute-0 nova_compute[117331]:       <address type="pci" domain="0x0000" bus="0x03" slot="0x00" function="0x0"/>
Oct 09 16:32:47 compute-0 nova_compute[117331]:     </disk>
Oct 09 16:32:47 compute-0 nova_compute[117331]:     <disk type="file" device="cdrom">
Oct 09 16:32:47 compute-0 nova_compute[117331]:       <driver name="qemu" type="raw" cache="none"/>
Oct 09 16:32:47 compute-0 nova_compute[117331]:       <source file="/var/lib/nova/instances/4c51c9df-777a-497d-bf84-a8001b45a4f0/disk.config"/>
Oct 09 16:32:47 compute-0 nova_compute[117331]:       <target dev="sda" bus="sata"/>
Oct 09 16:32:47 compute-0 nova_compute[117331]:       <readonly/>
Oct 09 16:32:47 compute-0 nova_compute[117331]:       <address type="drive" controller="0" bus="0" target="0" unit="0"/>
Oct 09 16:32:47 compute-0 nova_compute[117331]:     </disk>
Oct 09 16:32:47 compute-0 nova_compute[117331]:     <controller type="pci" index="0" model="pcie-root"/>
Oct 09 16:32:47 compute-0 nova_compute[117331]:     <controller type="pci" index="1" model="pcie-root-port">
Oct 09 16:32:47 compute-0 nova_compute[117331]:       <model name="pcie-root-port"/>
Oct 09 16:32:47 compute-0 nova_compute[117331]:       <target chassis="1" port="0x10"/>
Oct 09 16:32:47 compute-0 nova_compute[117331]:       <address type="pci" domain="0x0000" bus="0x00" slot="0x02" function="0x0" multifunction="on"/>
Oct 09 16:32:47 compute-0 nova_compute[117331]:     </controller>
Oct 09 16:32:47 compute-0 nova_compute[117331]:     <controller type="pci" index="2" model="pcie-root-port">
Oct 09 16:32:47 compute-0 nova_compute[117331]:       <model name="pcie-root-port"/>
Oct 09 16:32:47 compute-0 nova_compute[117331]:       <target chassis="2" port="0x11"/>
Oct 09 16:32:47 compute-0 nova_compute[117331]:       <address type="pci" domain="0x0000" bus="0x00" slot="0x02" function="0x1"/>
Oct 09 16:32:47 compute-0 nova_compute[117331]:     </controller>
Oct 09 16:32:47 compute-0 nova_compute[117331]:     <controller type="pci" index="3" model="pcie-root-port">
Oct 09 16:32:47 compute-0 nova_compute[117331]:       <model name="pcie-root-port"/>
Oct 09 16:32:47 compute-0 nova_compute[117331]:       <target chassis="3" port="0x12"/>
Oct 09 16:32:47 compute-0 nova_compute[117331]:       <address type="pci" domain="0x0000" bus="0x00" slot="0x02" function="0x2"/>
Oct 09 16:32:47 compute-0 nova_compute[117331]:     </controller>
Oct 09 16:32:47 compute-0 nova_compute[117331]:     <controller type="pci" index="4" model="pcie-root-port">
Oct 09 16:32:47 compute-0 nova_compute[117331]:       <model name="pcie-root-port"/>
Oct 09 16:32:47 compute-0 nova_compute[117331]:       <target chassis="4" port="0x13"/>
Oct 09 16:32:47 compute-0 nova_compute[117331]:       <address type="pci" domain="0x0000" bus="0x00" slot="0x02" function="0x3"/>
Oct 09 16:32:47 compute-0 nova_compute[117331]:     </controller>
Oct 09 16:32:47 compute-0 nova_compute[117331]:     <controller type="pci" index="5" model="pcie-root-port">
Oct 09 16:32:47 compute-0 nova_compute[117331]:       <model name="pcie-root-port"/>
Oct 09 16:32:47 compute-0 nova_compute[117331]:       <target chassis="5" port="0x14"/>
Oct 09 16:32:47 compute-0 nova_compute[117331]:       <address type="pci" domain="0x0000" bus="0x00" slot="0x02" function="0x4"/>
Oct 09 16:32:47 compute-0 nova_compute[117331]:     </controller>
Oct 09 16:32:47 compute-0 nova_compute[117331]:     <controller type="pci" index="6" model="pcie-root-port">
Oct 09 16:32:47 compute-0 nova_compute[117331]:       <model name="pcie-root-port"/>
Oct 09 16:32:47 compute-0 nova_compute[117331]:       <target chassis="6" port="0x15"/>
Oct 09 16:32:47 compute-0 nova_compute[117331]:       <address type="pci" domain="0x0000" bus="0x00" slot="0x02" function="0x5"/>
Oct 09 16:32:47 compute-0 nova_compute[117331]:     </controller>
Oct 09 16:32:47 compute-0 nova_compute[117331]:     <controller type="pci" index="7" model="pcie-root-port">
Oct 09 16:32:47 compute-0 nova_compute[117331]:       <model name="pcie-root-port"/>
Oct 09 16:32:47 compute-0 nova_compute[117331]:       <target chassis="7" port="0x16"/>
Oct 09 16:32:47 compute-0 nova_compute[117331]:       <address type="pci" domain="0x0000" bus="0x00" slot="0x02" function="0x6"/>
Oct 09 16:32:47 compute-0 nova_compute[117331]:     </controller>
Oct 09 16:32:47 compute-0 nova_compute[117331]:     <controller type="pci" index="8" model="pcie-root-port">
Oct 09 16:32:47 compute-0 nova_compute[117331]:       <model name="pcie-root-port"/>
Oct 09 16:32:47 compute-0 nova_compute[117331]:       <target chassis="8" port="0x17"/>
Oct 09 16:32:47 compute-0 nova_compute[117331]:       <address type="pci" domain="0x0000" bus="0x00" slot="0x02" function="0x7"/>
Oct 09 16:32:47 compute-0 nova_compute[117331]:     </controller>
Oct 09 16:32:47 compute-0 nova_compute[117331]:     <controller type="pci" index="9" model="pcie-root-port">
Oct 09 16:32:47 compute-0 nova_compute[117331]:       <model name="pcie-root-port"/>
Oct 09 16:32:47 compute-0 nova_compute[117331]:       <target chassis="9" port="0x18"/>
Oct 09 16:32:47 compute-0 nova_compute[117331]:       <address type="pci" domain="0x0000" bus="0x00" slot="0x03" function="0x0" multifunction="on"/>
Oct 09 16:32:47 compute-0 nova_compute[117331]:     </controller>
Oct 09 16:32:47 compute-0 nova_compute[117331]:     <controller type="pci" index="10" model="pcie-root-port">
Oct 09 16:32:47 compute-0 nova_compute[117331]:       <model name="pcie-root-port"/>
Oct 09 16:32:47 compute-0 nova_compute[117331]:       <target chassis="10" port="0x19"/>
Oct 09 16:32:47 compute-0 nova_compute[117331]:       <address type="pci" domain="0x0000" bus="0x00" slot="0x03" function="0x1"/>
Oct 09 16:32:47 compute-0 nova_compute[117331]:     </controller>
Oct 09 16:32:47 compute-0 nova_compute[117331]:     <controller type="pci" index="11" model="pcie-root-port">
Oct 09 16:32:47 compute-0 nova_compute[117331]:       <model name="pcie-root-port"/>
Oct 09 16:32:47 compute-0 nova_compute[117331]:       <target chassis="11" port="0x1a"/>
Oct 09 16:32:47 compute-0 nova_compute[117331]:       <address type="pci" domain="0x0000" bus="0x00" slot="0x03" function="0x2"/>
Oct 09 16:32:47 compute-0 nova_compute[117331]:     </controller>
Oct 09 16:32:47 compute-0 nova_compute[117331]:     <controller type="pci" index="12" model="pcie-root-port">
Oct 09 16:32:47 compute-0 nova_compute[117331]:       <model name="pcie-root-port"/>
Oct 09 16:32:47 compute-0 nova_compute[117331]:       <target chassis="12" port="0x1b"/>
Oct 09 16:32:47 compute-0 nova_compute[117331]:       <address type="pci" domain="0x0000" bus="0x00" slot="0x03" function="0x3"/>
Oct 09 16:32:47 compute-0 nova_compute[117331]:     </controller>
Oct 09 16:32:47 compute-0 nova_compute[117331]:     <controller type="pci" index="13" model="pcie-root-port">
Oct 09 16:32:47 compute-0 nova_compute[117331]:       <model name="pcie-root-port"/>
Oct 09 16:32:47 compute-0 nova_compute[117331]:       <target chassis="13" port="0x1c"/>
Oct 09 16:32:47 compute-0 nova_compute[117331]:       <address type="pci" domain="0x0000" bus="0x00" slot="0x03" function="0x4"/>
Oct 09 16:32:47 compute-0 nova_compute[117331]:     </controller>
Oct 09 16:32:47 compute-0 nova_compute[117331]:     <controller type="pci" index="14" model="pcie-root-port">
Oct 09 16:32:47 compute-0 nova_compute[117331]:       <model name="pcie-root-port"/>
Oct 09 16:32:47 compute-0 nova_compute[117331]:       <target chassis="14" port="0x1d"/>
Oct 09 16:32:47 compute-0 nova_compute[117331]:       <address type="pci" domain="0x0000" bus="0x00" slot="0x03" function="0x5"/>
Oct 09 16:32:47 compute-0 nova_compute[117331]:     </controller>
Oct 09 16:32:47 compute-0 nova_compute[117331]:     <controller type="pci" index="15" model="pcie-root-port">
Oct 09 16:32:47 compute-0 nova_compute[117331]:       <model name="pcie-root-port"/>
Oct 09 16:32:47 compute-0 nova_compute[117331]:       <target chassis="15" port="0x1e"/>
Oct 09 16:32:47 compute-0 nova_compute[117331]:       <address type="pci" domain="0x0000" bus="0x00" slot="0x03" function="0x6"/>
Oct 09 16:32:47 compute-0 nova_compute[117331]:     </controller>
Oct 09 16:32:47 compute-0 nova_compute[117331]:     <controller type="pci" index="16" model="pcie-root-port">
Oct 09 16:32:47 compute-0 nova_compute[117331]:       <model name="pcie-root-port"/>
Oct 09 16:32:47 compute-0 nova_compute[117331]:       <target chassis="16" port="0x1f"/>
Oct 09 16:32:47 compute-0 nova_compute[117331]:       <address type="pci" domain="0x0000" bus="0x00" slot="0x03" function="0x7"/>
Oct 09 16:32:47 compute-0 nova_compute[117331]:     </controller>
Oct 09 16:32:47 compute-0 nova_compute[117331]:     <controller type="pci" index="17" model="pcie-root-port">
Oct 09 16:32:47 compute-0 nova_compute[117331]:       <model name="pcie-root-port"/>
Oct 09 16:32:47 compute-0 nova_compute[117331]:       <target chassis="17" port="0x20"/>
Oct 09 16:32:47 compute-0 nova_compute[117331]:       <address type="pci" domain="0x0000" bus="0x00" slot="0x04" function="0x0" multifunction="on"/>
Oct 09 16:32:47 compute-0 nova_compute[117331]:     </controller>
Oct 09 16:32:47 compute-0 nova_compute[117331]:     <controller type="pci" index="18" model="pcie-root-port">
Oct 09 16:32:47 compute-0 nova_compute[117331]:       <model name="pcie-root-port"/>
Oct 09 16:32:47 compute-0 nova_compute[117331]:       <target chassis="18" port="0x21"/>
Oct 09 16:32:47 compute-0 nova_compute[117331]:       <address type="pci" domain="0x0000" bus="0x00" slot="0x04" function="0x1"/>
Oct 09 16:32:47 compute-0 nova_compute[117331]:     </controller>
Oct 09 16:32:47 compute-0 nova_compute[117331]:     <controller type="pci" index="19" model="pcie-root-port">
Oct 09 16:32:47 compute-0 nova_compute[117331]:       <model name="pcie-root-port"/>
Oct 09 16:32:47 compute-0 nova_compute[117331]:       <target chassis="19" port="0x22"/>
Oct 09 16:32:47 compute-0 nova_compute[117331]:       <address type="pci" domain="0x0000" bus="0x00" slot="0x04" function="0x2"/>
Oct 09 16:32:47 compute-0 nova_compute[117331]:     </controller>
Oct 09 16:32:47 compute-0 nova_compute[117331]:     <controller type="pci" index="20" model="pcie-root-port">
Oct 09 16:32:47 compute-0 nova_compute[117331]:       <model name="pcie-root-port"/>
Oct 09 16:32:47 compute-0 nova_compute[117331]:       <target chassis="20" port="0x23"/>
Oct 09 16:32:47 compute-0 nova_compute[117331]:       <address type="pci" domain="0x0000" bus="0x00" slot="0x04" function="0x3"/>
Oct 09 16:32:47 compute-0 nova_compute[117331]:     </controller>
Oct 09 16:32:47 compute-0 nova_compute[117331]:     <controller type="pci" index="21" model="pcie-root-port">
Oct 09 16:32:47 compute-0 nova_compute[117331]:       <model name="pcie-root-port"/>
Oct 09 16:32:47 compute-0 nova_compute[117331]:       <target chassis="21" port="0x24"/>
Oct 09 16:32:47 compute-0 nova_compute[117331]:       <address type="pci" domain="0x0000" bus="0x00" slot="0x04" function="0x4"/>
Oct 09 16:32:47 compute-0 nova_compute[117331]:     </controller>
Oct 09 16:32:47 compute-0 nova_compute[117331]:     <controller type="pci" index="22" model="pcie-root-port">
Oct 09 16:32:47 compute-0 nova_compute[117331]:       <model name="pcie-root-port"/>
Oct 09 16:32:47 compute-0 nova_compute[117331]:       <target chassis="22" port="0x25"/>
Oct 09 16:32:47 compute-0 nova_compute[117331]:       <address type="pci" domain="0x0000" bus="0x00" slot="0x04" function="0x5"/>
Oct 09 16:32:47 compute-0 nova_compute[117331]:     </controller>
Oct 09 16:32:47 compute-0 nova_compute[117331]:     <controller type="pci" index="23" model="pcie-root-port">
Oct 09 16:32:47 compute-0 nova_compute[117331]:       <model name="pcie-root-port"/>
Oct 09 16:32:47 compute-0 nova_compute[117331]:       <target chassis="23" port="0x26"/>
Oct 09 16:32:47 compute-0 nova_compute[117331]:       <address type="pci" domain="0x0000" bus="0x00" slot="0x04" function="0x6"/>
Oct 09 16:32:47 compute-0 nova_compute[117331]:     </controller>
Oct 09 16:32:47 compute-0 nova_compute[117331]:     <controller type="pci" index="24" model="pcie-root-port">
Oct 09 16:32:47 compute-0 nova_compute[117331]:       <model name="pcie-root-port"/>
Oct 09 16:32:47 compute-0 nova_compute[117331]:       <target chassis="24" port="0x27"/>
Oct 09 16:32:47 compute-0 nova_compute[117331]:       <address type="pci" domain="0x0000" bus="0x00" slot="0x04" function="0x7"/>
Oct 09 16:32:47 compute-0 nova_compute[117331]:     </controller>
Oct 09 16:32:47 compute-0 nova_compute[117331]:     <controller type="pci" index="25" model="pcie-root-port">
Oct 09 16:32:47 compute-0 nova_compute[117331]:       <model name="pcie-root-port"/>
Oct 09 16:32:47 compute-0 nova_compute[117331]:       <target chassis="25" port="0x28"/>
Oct 09 16:32:47 compute-0 nova_compute[117331]:       <address type="pci" domain="0x0000" bus="0x00" slot="0x05" function="0x0"/>
Oct 09 16:32:47 compute-0 nova_compute[117331]:     </controller>
Oct 09 16:32:47 compute-0 nova_compute[117331]:     <controller type="pci" index="26" model="pcie-to-pci-bridge">
Oct 09 16:32:47 compute-0 nova_compute[117331]:       <model name="pcie-pci-bridge"/>
Oct 09 16:32:47 compute-0 nova_compute[117331]:       <address type="pci" domain="0x0000" bus="0x01" slot="0x00" function="0x0"/>
Oct 09 16:32:47 compute-0 nova_compute[117331]:     </controller>
Oct 09 16:32:47 compute-0 nova_compute[117331]:     <controller type="usb" index="0" model="piix3-uhci">
Oct 09 16:32:47 compute-0 nova_compute[117331]:       <address type="pci" domain="0x0000" bus="0x1a" slot="0x01" function="0x0"/>
Oct 09 16:32:47 compute-0 nova_compute[117331]:     </controller>
Oct 09 16:32:47 compute-0 nova_compute[117331]:     <controller type="sata" index="0">
Oct 09 16:32:47 compute-0 nova_compute[117331]:       <address type="pci" domain="0x0000" bus="0x00" slot="0x1f" function="0x2"/>
Oct 09 16:32:47 compute-0 nova_compute[117331]:     </controller>
Oct 09 16:32:47 compute-0 nova_compute[117331]:     <interface type="ethernet"><mac address="fa:16:3e:95:fd:24"/><model type="virtio"/><driver name="vhost" rx_queue_size="512"/><mtu size="1442"/><target dev="tap783ccf1e-b8"/><address type="pci" domain="0x0000" bus="0x02" slot="0x00" function="0x0"/>
Oct 09 16:32:47 compute-0 nova_compute[117331]:     </interface><serial type="pty">
Oct 09 16:32:47 compute-0 nova_compute[117331]:       <log file="/var/lib/nova/instances/4c51c9df-777a-497d-bf84-a8001b45a4f0/console.log" append="off"/>
Oct 09 16:32:47 compute-0 nova_compute[117331]:       <target type="isa-serial" port="0">
Oct 09 16:32:47 compute-0 nova_compute[117331]:         <model name="isa-serial"/>
Oct 09 16:32:47 compute-0 nova_compute[117331]:       </target>
Oct 09 16:32:47 compute-0 nova_compute[117331]:     </serial>
Oct 09 16:32:47 compute-0 nova_compute[117331]:     <console type="pty">
Oct 09 16:32:47 compute-0 nova_compute[117331]:       <log file="/var/lib/nova/instances/4c51c9df-777a-497d-bf84-a8001b45a4f0/console.log" append="off"/>
Oct 09 16:32:47 compute-0 nova_compute[117331]:       <target type="serial" port="0"/>
Oct 09 16:32:47 compute-0 nova_compute[117331]:     </console>
Oct 09 16:32:47 compute-0 nova_compute[117331]:     <input type="tablet" bus="usb">
Oct 09 16:32:47 compute-0 nova_compute[117331]:       <address type="usb" bus="0" port="1"/>
Oct 09 16:32:47 compute-0 nova_compute[117331]:     </input>
Oct 09 16:32:47 compute-0 nova_compute[117331]:     <input type="mouse" bus="ps2"/>
Oct 09 16:32:47 compute-0 nova_compute[117331]:     <graphics type="vnc" port="-1" autoport="yes" listen="::">
Oct 09 16:32:47 compute-0 nova_compute[117331]:       <listen type="address" address="::"/>
Oct 09 16:32:47 compute-0 nova_compute[117331]:     </graphics>
Oct 09 16:32:47 compute-0 nova_compute[117331]:     <video>
Oct 09 16:32:47 compute-0 nova_compute[117331]:       <model type="virtio" heads="1" primary="yes"/>
Oct 09 16:32:47 compute-0 nova_compute[117331]:       <address type="pci" domain="0x0000" bus="0x00" slot="0x01" function="0x0"/>
Oct 09 16:32:47 compute-0 nova_compute[117331]:     </video>
Oct 09 16:32:47 compute-0 nova_compute[117331]:     <memballoon model="virtio" autodeflate="on" freePageReporting="on">
Oct 09 16:32:47 compute-0 nova_compute[117331]:       <stats period="10"/>
Oct 09 16:32:47 compute-0 nova_compute[117331]:       <address type="pci" domain="0x0000" bus="0x04" slot="0x00" function="0x0"/>
Oct 09 16:32:47 compute-0 nova_compute[117331]:     </memballoon>
Oct 09 16:32:47 compute-0 nova_compute[117331]:     <rng model="virtio">
Oct 09 16:32:47 compute-0 nova_compute[117331]:       <backend model="random">/dev/urandom</backend>
Oct 09 16:32:47 compute-0 nova_compute[117331]:       <address type="pci" domain="0x0000" bus="0x05" slot="0x00" function="0x0"/>
Oct 09 16:32:47 compute-0 nova_compute[117331]:     </rng>
Oct 09 16:32:47 compute-0 nova_compute[117331]:   </devices>
Oct 09 16:32:47 compute-0 nova_compute[117331]:   <seclabel type="dynamic" model="selinux" relabel="yes"/>
Oct 09 16:32:47 compute-0 nova_compute[117331]: </domain>
Oct 09 16:32:47 compute-0 nova_compute[117331]:  _update_pci_dev_xml /usr/lib/python3.12/site-packages/nova/virt/libvirt/migration.py:166
Oct 09 16:32:47 compute-0 nova_compute[117331]: 2025-10-09 16:32:47.141 2 DEBUG nova.virt.libvirt.driver [None req-a98bf568-4bf7-459f-a8c7-be928ce9dc17 005258eefa5345409efe13f6a1de22ab c076c42271004e7aa86e84416e33f826 - - default default] [instance: 4c51c9df-777a-497d-bf84-a8001b45a4f0] About to invoke the migrate API _live_migration_operation /usr/lib/python3.12/site-packages/nova/virt/libvirt/driver.py:11175
Oct 09 16:32:47 compute-0 nova_compute[117331]: 2025-10-09 16:32:47.142 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 09 16:32:47 compute-0 nova_compute[117331]: 2025-10-09 16:32:47.389 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 09 16:32:47 compute-0 sshd-session[150462]: Invalid user mssql from 36.224.53.32 port 53774
Oct 09 16:32:47 compute-0 nova_compute[117331]: 2025-10-09 16:32:47.608 2 DEBUG nova.virt.libvirt.migration [None req-a98bf568-4bf7-459f-a8c7-be928ce9dc17 005258eefa5345409efe13f6a1de22ab c076c42271004e7aa86e84416e33f826 - - default default] [instance: 4c51c9df-777a-497d-bf84-a8001b45a4f0] Current None elapsed 1 steps [(0, 50), (300, 95), (600, 140), (900, 185), (1200, 230), (1500, 275), (1800, 320), (2100, 365), (2400, 410), (2700, 455), (3000, 500)] update_downtime /usr/lib/python3.12/site-packages/nova/virt/libvirt/migration.py:658
Oct 09 16:32:47 compute-0 nova_compute[117331]: 2025-10-09 16:32:47.609 2 INFO nova.virt.libvirt.migration [None req-a98bf568-4bf7-459f-a8c7-be928ce9dc17 005258eefa5345409efe13f6a1de22ab c076c42271004e7aa86e84416e33f826 - - default default] [instance: 4c51c9df-777a-497d-bf84-a8001b45a4f0] Increasing downtime to 50 ms after 0 sec elapsed time
Oct 09 16:32:48 compute-0 nova_compute[117331]: 2025-10-09 16:32:48.627 2 INFO nova.virt.libvirt.driver [None req-a98bf568-4bf7-459f-a8c7-be928ce9dc17 005258eefa5345409efe13f6a1de22ab c076c42271004e7aa86e84416e33f826 - - default default] [instance: 4c51c9df-777a-497d-bf84-a8001b45a4f0] Migration running for 1 secs, memory 100% remaining (bytes processed=0, remaining=0, total=0); disk 100% remaining (bytes processed=0, remaining=0, total=0).
Oct 09 16:32:48 compute-0 sshd-session[150462]: pam_unix(sshd:auth): check pass; user unknown
Oct 09 16:32:48 compute-0 sshd-session[150462]: pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=36.224.53.32
Oct 09 16:32:49 compute-0 kernel: tap783ccf1e-b8 (unregistering): left promiscuous mode
Oct 09 16:32:49 compute-0 NetworkManager[1028]: <info>  [1760027569.1002] device (tap783ccf1e-b8): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Oct 09 16:32:49 compute-0 nova_compute[117331]: 2025-10-09 16:32:49.107 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 09 16:32:49 compute-0 ovn_controller[19752]: 2025-10-09T16:32:49Z|00213|binding|INFO|Releasing lport 783ccf1e-b85c-4420-b3e9-705ddf5495d3 from this chassis (sb_readonly=0)
Oct 09 16:32:49 compute-0 ovn_controller[19752]: 2025-10-09T16:32:49Z|00214|binding|INFO|Setting lport 783ccf1e-b85c-4420-b3e9-705ddf5495d3 down in Southbound
Oct 09 16:32:49 compute-0 ovn_controller[19752]: 2025-10-09T16:32:49Z|00215|binding|INFO|Removing iface tap783ccf1e-b8 ovn-installed in OVS
Oct 09 16:32:49 compute-0 nova_compute[117331]: 2025-10-09 16:32:49.111 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 09 16:32:49 compute-0 ovn_metadata_agent[28608]: 2025-10-09 16:32:49.118 28613 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:95:fd:24 10.100.0.10'], port_security=['fa:16:3e:95:fd:24 10.100.0.10'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com,compute-1.ctlplane.example.com', 'activation-strategy': 'rarp', 'additional-chassis-activated': '2bd8bf21-1f6b-42c9-9656-9a72fa8dcbf3'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.10/28', 'neutron:device_id': '4c51c9df-777a-497d-bf84-a8001b45a4f0', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-54b37568-476a-40a0-b545-fe5401f85653', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '6d67ac3076434a4582e5db1ca7d043ff', 'neutron:revision_number': '10', 'neutron:security_group_ids': '94abd997-3903-47b3-abe3-e283a4232c96', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=dc641581-8c4b-4cef-982e-bb0cf11a52ba, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f37b2576840>], logical_port=783ccf1e-b85c-4420-b3e9-705ddf5495d3) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f37b2576840>]) matches /usr/lib/python3.12/site-packages/ovsdbapp/backend/ovs_idl/event.py:55
Oct 09 16:32:49 compute-0 ovn_metadata_agent[28608]: 2025-10-09 16:32:49.119 28613 INFO neutron.agent.ovn.metadata.agent [-] Port 783ccf1e-b85c-4420-b3e9-705ddf5495d3 in datapath 54b37568-476a-40a0-b545-fe5401f85653 unbound from our chassis
Oct 09 16:32:49 compute-0 ovn_metadata_agent[28608]: 2025-10-09 16:32:49.121 28613 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 54b37568-476a-40a0-b545-fe5401f85653
Oct 09 16:32:49 compute-0 nova_compute[117331]: 2025-10-09 16:32:49.125 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 09 16:32:49 compute-0 ovn_metadata_agent[28608]: 2025-10-09 16:32:49.147 139687 DEBUG oslo.privsep.daemon [-] privsep: reply[b9cf0963-04c9-4bce-b340-2f1761c393c5]: (4, True) _call_back /usr/lib/python3.12/site-packages/oslo_privsep/daemon.py:515
Oct 09 16:32:49 compute-0 systemd[1]: machine-qemu\x2d16\x2dinstance\x2d00000016.scope: Deactivated successfully.
Oct 09 16:32:49 compute-0 systemd[1]: machine-qemu\x2d16\x2dinstance\x2d00000016.scope: Consumed 14.302s CPU time.
Oct 09 16:32:49 compute-0 systemd-machined[77487]: Machine qemu-16-instance-00000016 terminated.
Oct 09 16:32:49 compute-0 ovn_metadata_agent[28608]: 2025-10-09 16:32:49.181 141048 DEBUG oslo.privsep.daemon [-] privsep: reply[65ea1be9-7d33-437d-a139-f038677d91ba]: (4, None) _call_back /usr/lib/python3.12/site-packages/oslo_privsep/daemon.py:515
Oct 09 16:32:49 compute-0 ovn_metadata_agent[28608]: 2025-10-09 16:32:49.183 141048 DEBUG oslo.privsep.daemon [-] privsep: reply[382e68e4-9d3b-4db3-9684-000019ca12f6]: (4, None) _call_back /usr/lib/python3.12/site-packages/oslo_privsep/daemon.py:515
Oct 09 16:32:49 compute-0 ovn_metadata_agent[28608]: 2025-10-09 16:32:49.211 141048 DEBUG oslo.privsep.daemon [-] privsep: reply[04e7e0a8-0261-4cbc-bbd9-dcbccefaaa0c]: (4, None) _call_back /usr/lib/python3.12/site-packages/oslo_privsep/daemon.py:515
Oct 09 16:32:49 compute-0 ovn_metadata_agent[28608]: 2025-10-09 16:32:49.228 139687 DEBUG oslo.privsep.daemon [-] privsep: reply[ce1ebb28-1012-4217-8aeb-4b36f4bc640b]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap54b37568-41'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:50:3a:49'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 12, 'tx_packets': 7, 'rx_bytes': 1000, 'tx_bytes': 438, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 12, 'tx_packets': 7, 'rx_bytes': 1000, 'tx_bytes': 438, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 
0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 58], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 245852, 'reachable_time': 39101, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 8, 'inoctets': 720, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 8, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 720, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 8, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 150487, 'error': None, 'target': 'ovnmeta-54b37568-476a-40a0-b545-fe5401f85653', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.12/site-packages/oslo_privsep/daemon.py:515
Oct 09 16:32:49 compute-0 ovn_metadata_agent[28608]: 2025-10-09 16:32:49.245 139687 DEBUG oslo.privsep.daemon [-] privsep: reply[95e5e879-fa13-46e1-912b-7ff46c1b5507]: (4, ({'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tap54b37568-41'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 245863, 'tstamp': 245863}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 150488, 'error': None, 'target': 'ovnmeta-54b37568-476a-40a0-b545-fe5401f85653', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tap54b37568-41'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 245867, 'tstamp': 245867}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 150488, 'error': None, 'target': 'ovnmeta-54b37568-476a-40a0-b545-fe5401f85653', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.12/site-packages/oslo_privsep/daemon.py:515
Oct 09 16:32:49 compute-0 ovn_metadata_agent[28608]: 2025-10-09 16:32:49.247 28613 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap54b37568-40, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.12/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 09 16:32:49 compute-0 nova_compute[117331]: 2025-10-09 16:32:49.268 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 09 16:32:49 compute-0 nova_compute[117331]: 2025-10-09 16:32:49.273 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 09 16:32:49 compute-0 ovn_metadata_agent[28608]: 2025-10-09 16:32:49.273 28613 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap54b37568-40, may_exist=True, interface_attrs={}) do_commit /usr/lib/python3.12/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 09 16:32:49 compute-0 ovn_metadata_agent[28608]: 2025-10-09 16:32:49.273 28613 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.12/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 09 16:32:49 compute-0 ovn_metadata_agent[28608]: 2025-10-09 16:32:49.274 28613 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap54b37568-40, col_values=(('external_ids', {'iface-id': '01dee844-02ac-4caa-80f7-8a16019cbd9d'}),), if_exists=True) do_commit /usr/lib/python3.12/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 09 16:32:49 compute-0 ovn_metadata_agent[28608]: 2025-10-09 16:32:49.274 28613 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.12/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 09 16:32:49 compute-0 ovn_metadata_agent[28608]: 2025-10-09 16:32:49.275 139687 DEBUG oslo.privsep.daemon [-] privsep: reply[8787b2b5-8b36-45c1-9c6c-1a40ffcdcf15]: (4, '\nglobal\n    log         /dev/log local0 debug\n    log-tag     haproxy-metadata-proxy-54b37568-476a-40a0-b545-fe5401f85653\n    user        root\n    group       root\n    maxconn     1024\n    pidfile     /var/lib/neutron/external/pids/54b37568-476a-40a0-b545-fe5401f85653.pid.haproxy\n    daemon\n\ndefaults\n    log global\n    mode http\n    option httplog\n    option dontlognull\n    option http-server-close\n    option forwardfor\n    retries                 3\n    timeout http-request    30s\n    timeout connect         30s\n    timeout client          32s\n    timeout server          32s\n    timeout http-keep-alive 30s\n\nlisten listener\n    bind 169.254.169.254:80\n    \n    server metadata /var/lib/neutron/metadata_proxy\n\n    http-request add-header X-OVN-Network-ID 54b37568-476a-40a0-b545-fe5401f85653\n') _call_back /usr/lib/python3.12/site-packages/oslo_privsep/daemon.py:515
Oct 09 16:32:49 compute-0 nova_compute[117331]: 2025-10-09 16:32:49.292 2 DEBUG nova.compute.manager [req-6a78d033-4949-42c6-84f9-17da1e29b3d5 req-53310f19-5088-4cc6-8885-0708e0f4bf14 ee15961ad2be4bada6c106ac4f69f422 c076c42271004e7aa86e84416e33f826 - - default default] [instance: 4c51c9df-777a-497d-bf84-a8001b45a4f0] Received event network-vif-unplugged-783ccf1e-b85c-4420-b3e9-705ddf5495d3 external_instance_event /usr/lib/python3.12/site-packages/nova/compute/manager.py:11812
Oct 09 16:32:49 compute-0 kernel: tap783ccf1e-b8: entered promiscuous mode
Oct 09 16:32:49 compute-0 nova_compute[117331]: 2025-10-09 16:32:49.293 2 DEBUG oslo_concurrency.lockutils [req-6a78d033-4949-42c6-84f9-17da1e29b3d5 req-53310f19-5088-4cc6-8885-0708e0f4bf14 ee15961ad2be4bada6c106ac4f69f422 c076c42271004e7aa86e84416e33f826 - - default default] Acquiring lock "4c51c9df-777a-497d-bf84-a8001b45a4f0-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:405
Oct 09 16:32:49 compute-0 nova_compute[117331]: 2025-10-09 16:32:49.294 2 DEBUG oslo_concurrency.lockutils [req-6a78d033-4949-42c6-84f9-17da1e29b3d5 req-53310f19-5088-4cc6-8885-0708e0f4bf14 ee15961ad2be4bada6c106ac4f69f422 c076c42271004e7aa86e84416e33f826 - - default default] Lock "4c51c9df-777a-497d-bf84-a8001b45a4f0-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:410
Oct 09 16:32:49 compute-0 nova_compute[117331]: 2025-10-09 16:32:49.294 2 DEBUG oslo_concurrency.lockutils [req-6a78d033-4949-42c6-84f9-17da1e29b3d5 req-53310f19-5088-4cc6-8885-0708e0f4bf14 ee15961ad2be4bada6c106ac4f69f422 c076c42271004e7aa86e84416e33f826 - - default default] Lock "4c51c9df-777a-497d-bf84-a8001b45a4f0-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:424
Oct 09 16:32:49 compute-0 kernel: tap783ccf1e-b8 (unregistering): left promiscuous mode
Oct 09 16:32:49 compute-0 nova_compute[117331]: 2025-10-09 16:32:49.295 2 DEBUG nova.compute.manager [req-6a78d033-4949-42c6-84f9-17da1e29b3d5 req-53310f19-5088-4cc6-8885-0708e0f4bf14 ee15961ad2be4bada6c106ac4f69f422 c076c42271004e7aa86e84416e33f826 - - default default] [instance: 4c51c9df-777a-497d-bf84-a8001b45a4f0] No waiting events found dispatching network-vif-unplugged-783ccf1e-b85c-4420-b3e9-705ddf5495d3 pop_instance_event /usr/lib/python3.12/site-packages/nova/compute/manager.py:344
Oct 09 16:32:49 compute-0 nova_compute[117331]: 2025-10-09 16:32:49.295 2 DEBUG nova.compute.manager [req-6a78d033-4949-42c6-84f9-17da1e29b3d5 req-53310f19-5088-4cc6-8885-0708e0f4bf14 ee15961ad2be4bada6c106ac4f69f422 c076c42271004e7aa86e84416e33f826 - - default default] [instance: 4c51c9df-777a-497d-bf84-a8001b45a4f0] Received event network-vif-unplugged-783ccf1e-b85c-4420-b3e9-705ddf5495d3 for instance with task_state migrating. _process_instance_event /usr/lib/python3.12/site-packages/nova/compute/manager.py:11590
Oct 09 16:32:49 compute-0 NetworkManager[1028]: <info>  [1760027569.2969] manager: (tap783ccf1e-b8): new Tun device (/org/freedesktop/NetworkManager/Devices/80)
Oct 09 16:32:49 compute-0 nova_compute[117331]: 2025-10-09 16:32:49.299 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 09 16:32:49 compute-0 nova_compute[117331]: 2025-10-09 16:32:49.331 2 DEBUG nova.virt.libvirt.guest [None req-a98bf568-4bf7-459f-a8c7-be928ce9dc17 005258eefa5345409efe13f6a1de22ab c076c42271004e7aa86e84416e33f826 - - default default] Domain has shutdown/gone away: Requested operation is not valid: domain is not running get_job_info /usr/lib/python3.12/site-packages/nova/virt/libvirt/guest.py:687
Oct 09 16:32:49 compute-0 nova_compute[117331]: 2025-10-09 16:32:49.332 2 INFO nova.virt.libvirt.driver [None req-a98bf568-4bf7-459f-a8c7-be928ce9dc17 005258eefa5345409efe13f6a1de22ab c076c42271004e7aa86e84416e33f826 - - default default] [instance: 4c51c9df-777a-497d-bf84-a8001b45a4f0] Migration operation has completed
Oct 09 16:32:49 compute-0 nova_compute[117331]: 2025-10-09 16:32:49.332 2 INFO nova.compute.manager [None req-a98bf568-4bf7-459f-a8c7-be928ce9dc17 005258eefa5345409efe13f6a1de22ab c076c42271004e7aa86e84416e33f826 - - default default] [instance: 4c51c9df-777a-497d-bf84-a8001b45a4f0] _post_live_migration() is started..
Oct 09 16:32:49 compute-0 nova_compute[117331]: 2025-10-09 16:32:49.334 2 DEBUG nova.virt.libvirt.driver [None req-a98bf568-4bf7-459f-a8c7-be928ce9dc17 005258eefa5345409efe13f6a1de22ab c076c42271004e7aa86e84416e33f826 - - default default] [instance: 4c51c9df-777a-497d-bf84-a8001b45a4f0] Migrate API has completed _live_migration_operation /usr/lib/python3.12/site-packages/nova/virt/libvirt/driver.py:11182
Oct 09 16:32:49 compute-0 nova_compute[117331]: 2025-10-09 16:32:49.335 2 DEBUG nova.virt.libvirt.driver [None req-a98bf568-4bf7-459f-a8c7-be928ce9dc17 005258eefa5345409efe13f6a1de22ab c076c42271004e7aa86e84416e33f826 - - default default] [instance: 4c51c9df-777a-497d-bf84-a8001b45a4f0] Migration operation thread has finished _live_migration_operation /usr/lib/python3.12/site-packages/nova/virt/libvirt/driver.py:11230
Oct 09 16:32:49 compute-0 nova_compute[117331]: 2025-10-09 16:32:49.335 2 DEBUG nova.virt.libvirt.driver [None req-a98bf568-4bf7-459f-a8c7-be928ce9dc17 005258eefa5345409efe13f6a1de22ab c076c42271004e7aa86e84416e33f826 - - default default] [instance: 4c51c9df-777a-497d-bf84-a8001b45a4f0] Migration operation thread notification thread_finished /usr/lib/python3.12/site-packages/nova/virt/libvirt/driver.py:11533
Oct 09 16:32:49 compute-0 nova_compute[117331]: 2025-10-09 16:32:49.398 2 WARNING neutronclient.v2_0.client [None req-a98bf568-4bf7-459f-a8c7-be928ce9dc17 005258eefa5345409efe13f6a1de22ab c076c42271004e7aa86e84416e33f826 - - default default] The python binding code in neutronclient is deprecated in favor of OpenstackSDK, please use that as this will be removed in a future release.
Oct 09 16:32:49 compute-0 nova_compute[117331]: 2025-10-09 16:32:49.400 2 WARNING neutronclient.v2_0.client [None req-a98bf568-4bf7-459f-a8c7-be928ce9dc17 005258eefa5345409efe13f6a1de22ab c076c42271004e7aa86e84416e33f826 - - default default] The python binding code in neutronclient is deprecated in favor of OpenstackSDK, please use that as this will be removed in a future release.
Oct 09 16:32:50 compute-0 nova_compute[117331]: 2025-10-09 16:32:50.186 2 DEBUG nova.network.neutron [None req-a98bf568-4bf7-459f-a8c7-be928ce9dc17 005258eefa5345409efe13f6a1de22ab c076c42271004e7aa86e84416e33f826 - - default default] Activated binding for port 783ccf1e-b85c-4420-b3e9-705ddf5495d3 and host compute-1.ctlplane.example.com migrate_instance_start /usr/lib/python3.12/site-packages/nova/network/neutron.py:3241
Oct 09 16:32:50 compute-0 nova_compute[117331]: 2025-10-09 16:32:50.187 2 DEBUG nova.compute.manager [None req-a98bf568-4bf7-459f-a8c7-be928ce9dc17 005258eefa5345409efe13f6a1de22ab c076c42271004e7aa86e84416e33f826 - - default default] [instance: 4c51c9df-777a-497d-bf84-a8001b45a4f0] Calling driver.post_live_migration_at_source with original source VIFs from migrate_data: [{"id": "783ccf1e-b85c-4420-b3e9-705ddf5495d3", "address": "fa:16:3e:95:fd:24", "network": {"id": "54b37568-476a-40a0-b545-fe5401f85653", "bridge": "br-int", "label": "tempest-TestExecuteWorkloadBalanceStrategy-563305466-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "036a4e356fb34effb6775ffe5bd9a19f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap783ccf1e-b8", "ovs_interfaceid": "783ccf1e-b85c-4420-b3e9-705ddf5495d3", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] _post_live_migration /usr/lib/python3.12/site-packages/nova/compute/manager.py:10059
Oct 09 16:32:50 compute-0 nova_compute[117331]: 2025-10-09 16:32:50.188 2 DEBUG nova.virt.libvirt.vif [None req-a98bf568-4bf7-459f-a8c7-be928ce9dc17 005258eefa5345409efe13f6a1de22ab c076c42271004e7aa86e84416e33f826 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,compute_id=1,config_drive='True',created_at=2025-10-09T16:31:38Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description=None,display_name='tempest-TestExecuteWorkloadBalanceStrategy-server-445686334',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testexecuteworkloadbalancestrategy-server-445686334',id=22,image_ref='b7d6e0af-25e4-4227-9dc6-43143898ceee',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-10-09T16:31:56Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=1,progress=0,project_id='6d67ac3076434a4582e5db1ca7d043ff',ramdisk_id='',reservation_id='r-0ja4884i',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,manager,admin,member',image_base_image_ref='b7d6e0af-25e4-4227-9dc6-43143898ceee',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model=
'virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestExecuteWorkloadBalanceStrategy-2100042169',owner_user_name='tempest-TestExecuteWorkloadBalanceStrategy-2100042169-project-admin'},tags=<?>,task_state='migrating',terminated_at=None,trusted_certs=<?>,updated_at=2025-10-09T16:32:27Z,user_data=None,user_id='1c793380a6e945d69dacfd07f1f156f8',uuid=4c51c9df-777a-497d-bf84-a8001b45a4f0,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "783ccf1e-b85c-4420-b3e9-705ddf5495d3", "address": "fa:16:3e:95:fd:24", "network": {"id": "54b37568-476a-40a0-b545-fe5401f85653", "bridge": "br-int", "label": "tempest-TestExecuteWorkloadBalanceStrategy-563305466-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "036a4e356fb34effb6775ffe5bd9a19f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap783ccf1e-b8", "ovs_interfaceid": "783ccf1e-b85c-4420-b3e9-705ddf5495d3", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.12/site-packages/nova/virt/libvirt/vif.py:839
Oct 09 16:32:50 compute-0 nova_compute[117331]: 2025-10-09 16:32:50.188 2 DEBUG nova.network.os_vif_util [None req-a98bf568-4bf7-459f-a8c7-be928ce9dc17 005258eefa5345409efe13f6a1de22ab c076c42271004e7aa86e84416e33f826 - - default default] Converting VIF {"id": "783ccf1e-b85c-4420-b3e9-705ddf5495d3", "address": "fa:16:3e:95:fd:24", "network": {"id": "54b37568-476a-40a0-b545-fe5401f85653", "bridge": "br-int", "label": "tempest-TestExecuteWorkloadBalanceStrategy-563305466-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "036a4e356fb34effb6775ffe5bd9a19f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap783ccf1e-b8", "ovs_interfaceid": "783ccf1e-b85c-4420-b3e9-705ddf5495d3", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.12/site-packages/nova/network/os_vif_util.py:511
Oct 09 16:32:50 compute-0 nova_compute[117331]: 2025-10-09 16:32:50.189 2 DEBUG nova.network.os_vif_util [None req-a98bf568-4bf7-459f-a8c7-be928ce9dc17 005258eefa5345409efe13f6a1de22ab c076c42271004e7aa86e84416e33f826 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:95:fd:24,bridge_name='br-int',has_traffic_filtering=True,id=783ccf1e-b85c-4420-b3e9-705ddf5495d3,network=Network(54b37568-476a-40a0-b545-fe5401f85653),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap783ccf1e-b8') nova_to_osvif_vif /usr/lib/python3.12/site-packages/nova/network/os_vif_util.py:548
Oct 09 16:32:50 compute-0 nova_compute[117331]: 2025-10-09 16:32:50.189 2 DEBUG os_vif [None req-a98bf568-4bf7-459f-a8c7-be928ce9dc17 005258eefa5345409efe13f6a1de22ab c076c42271004e7aa86e84416e33f826 - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:95:fd:24,bridge_name='br-int',has_traffic_filtering=True,id=783ccf1e-b85c-4420-b3e9-705ddf5495d3,network=Network(54b37568-476a-40a0-b545-fe5401f85653),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap783ccf1e-b8') unplug /usr/lib/python3.12/site-packages/os_vif/__init__.py:109
Oct 09 16:32:50 compute-0 nova_compute[117331]: 2025-10-09 16:32:50.191 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 22 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 09 16:32:50 compute-0 nova_compute[117331]: 2025-10-09 16:32:50.191 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap783ccf1e-b8, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.12/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 09 16:32:50 compute-0 nova_compute[117331]: 2025-10-09 16:32:50.193 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 09 16:32:50 compute-0 nova_compute[117331]: 2025-10-09 16:32:50.195 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 09 16:32:50 compute-0 nova_compute[117331]: 2025-10-09 16:32:50.196 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 22 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 09 16:32:50 compute-0 nova_compute[117331]: 2025-10-09 16:32:50.196 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbDestroyCommand(_result=None, table=QoS, record=e102b0c8-6b9c-4a2b-9eda-b0d27ccea77b) do_commit /usr/lib/python3.12/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 09 16:32:50 compute-0 nova_compute[117331]: 2025-10-09 16:32:50.197 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 09 16:32:50 compute-0 nova_compute[117331]: 2025-10-09 16:32:50.198 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:248
Oct 09 16:32:50 compute-0 nova_compute[117331]: 2025-10-09 16:32:50.200 2 INFO os_vif [None req-a98bf568-4bf7-459f-a8c7-be928ce9dc17 005258eefa5345409efe13f6a1de22ab c076c42271004e7aa86e84416e33f826 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:95:fd:24,bridge_name='br-int',has_traffic_filtering=True,id=783ccf1e-b85c-4420-b3e9-705ddf5495d3,network=Network(54b37568-476a-40a0-b545-fe5401f85653),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap783ccf1e-b8')
Oct 09 16:32:50 compute-0 nova_compute[117331]: 2025-10-09 16:32:50.201 2 DEBUG oslo_concurrency.lockutils [None req-a98bf568-4bf7-459f-a8c7-be928ce9dc17 005258eefa5345409efe13f6a1de22ab c076c42271004e7aa86e84416e33f826 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.free_pci_device_allocations_for_instance" inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:405
Oct 09 16:32:50 compute-0 nova_compute[117331]: 2025-10-09 16:32:50.202 2 DEBUG oslo_concurrency.lockutils [None req-a98bf568-4bf7-459f-a8c7-be928ce9dc17 005258eefa5345409efe13f6a1de22ab c076c42271004e7aa86e84416e33f826 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.free_pci_device_allocations_for_instance" :: waited 0.001s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:410
Oct 09 16:32:50 compute-0 nova_compute[117331]: 2025-10-09 16:32:50.202 2 DEBUG oslo_concurrency.lockutils [None req-a98bf568-4bf7-459f-a8c7-be928ce9dc17 005258eefa5345409efe13f6a1de22ab c076c42271004e7aa86e84416e33f826 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.free_pci_device_allocations_for_instance" :: held 0.001s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:424
Oct 09 16:32:50 compute-0 nova_compute[117331]: 2025-10-09 16:32:50.203 2 DEBUG nova.compute.manager [None req-a98bf568-4bf7-459f-a8c7-be928ce9dc17 005258eefa5345409efe13f6a1de22ab c076c42271004e7aa86e84416e33f826 - - default default] [instance: 4c51c9df-777a-497d-bf84-a8001b45a4f0] Calling driver.cleanup from _post_live_migration _post_live_migration /usr/lib/python3.12/site-packages/nova/compute/manager.py:10082
Oct 09 16:32:50 compute-0 nova_compute[117331]: 2025-10-09 16:32:50.203 2 INFO nova.virt.libvirt.driver [None req-a98bf568-4bf7-459f-a8c7-be928ce9dc17 005258eefa5345409efe13f6a1de22ab c076c42271004e7aa86e84416e33f826 - - default default] [instance: 4c51c9df-777a-497d-bf84-a8001b45a4f0] Deleting instance files /var/lib/nova/instances/4c51c9df-777a-497d-bf84-a8001b45a4f0_del
Oct 09 16:32:50 compute-0 nova_compute[117331]: 2025-10-09 16:32:50.204 2 INFO nova.virt.libvirt.driver [None req-a98bf568-4bf7-459f-a8c7-be928ce9dc17 005258eefa5345409efe13f6a1de22ab c076c42271004e7aa86e84416e33f826 - - default default] [instance: 4c51c9df-777a-497d-bf84-a8001b45a4f0] Deletion of /var/lib/nova/instances/4c51c9df-777a-497d-bf84-a8001b45a4f0_del complete
Oct 09 16:32:50 compute-0 sshd-session[150462]: Failed password for invalid user mssql from 36.224.53.32 port 53774 ssh2
Oct 09 16:32:50 compute-0 podman[150502]: 2025-10-09 16:32:50.857136384 +0000 UTC m=+0.073999920 container health_status 270830b5167cafbab735d8118980f26987e673cd0fa464dd2202394830bcca66 (image=38.102.83.66:5001/podified-master-centos10/openstack-multipathd:watcher_latest, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': '38.102.83.66:5001/podified-master-centos10/openstack-multipathd:watcher_latest', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, container_name=multipathd, io.buildah.version=1.41.4, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=watcher_latest, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251007)
Oct 09 16:32:51 compute-0 nova_compute[117331]: 2025-10-09 16:32:51.340 2 DEBUG nova.compute.manager [req-812310f2-4316-4d7b-a80f-280e98a63340 req-2caafaf6-3399-4227-8f4c-96c7ab41c69b ee15961ad2be4bada6c106ac4f69f422 c076c42271004e7aa86e84416e33f826 - - default default] [instance: 4c51c9df-777a-497d-bf84-a8001b45a4f0] Received event network-vif-plugged-783ccf1e-b85c-4420-b3e9-705ddf5495d3 external_instance_event /usr/lib/python3.12/site-packages/nova/compute/manager.py:11812
Oct 09 16:32:51 compute-0 nova_compute[117331]: 2025-10-09 16:32:51.341 2 DEBUG oslo_concurrency.lockutils [req-812310f2-4316-4d7b-a80f-280e98a63340 req-2caafaf6-3399-4227-8f4c-96c7ab41c69b ee15961ad2be4bada6c106ac4f69f422 c076c42271004e7aa86e84416e33f826 - - default default] Acquiring lock "4c51c9df-777a-497d-bf84-a8001b45a4f0-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:405
Oct 09 16:32:51 compute-0 nova_compute[117331]: 2025-10-09 16:32:51.341 2 DEBUG oslo_concurrency.lockutils [req-812310f2-4316-4d7b-a80f-280e98a63340 req-2caafaf6-3399-4227-8f4c-96c7ab41c69b ee15961ad2be4bada6c106ac4f69f422 c076c42271004e7aa86e84416e33f826 - - default default] Lock "4c51c9df-777a-497d-bf84-a8001b45a4f0-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:410
Oct 09 16:32:51 compute-0 nova_compute[117331]: 2025-10-09 16:32:51.341 2 DEBUG oslo_concurrency.lockutils [req-812310f2-4316-4d7b-a80f-280e98a63340 req-2caafaf6-3399-4227-8f4c-96c7ab41c69b ee15961ad2be4bada6c106ac4f69f422 c076c42271004e7aa86e84416e33f826 - - default default] Lock "4c51c9df-777a-497d-bf84-a8001b45a4f0-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:424
Oct 09 16:32:51 compute-0 nova_compute[117331]: 2025-10-09 16:32:51.341 2 DEBUG nova.compute.manager [req-812310f2-4316-4d7b-a80f-280e98a63340 req-2caafaf6-3399-4227-8f4c-96c7ab41c69b ee15961ad2be4bada6c106ac4f69f422 c076c42271004e7aa86e84416e33f826 - - default default] [instance: 4c51c9df-777a-497d-bf84-a8001b45a4f0] No waiting events found dispatching network-vif-plugged-783ccf1e-b85c-4420-b3e9-705ddf5495d3 pop_instance_event /usr/lib/python3.12/site-packages/nova/compute/manager.py:344
Oct 09 16:32:51 compute-0 nova_compute[117331]: 2025-10-09 16:32:51.341 2 WARNING nova.compute.manager [req-812310f2-4316-4d7b-a80f-280e98a63340 req-2caafaf6-3399-4227-8f4c-96c7ab41c69b ee15961ad2be4bada6c106ac4f69f422 c076c42271004e7aa86e84416e33f826 - - default default] [instance: 4c51c9df-777a-497d-bf84-a8001b45a4f0] Received unexpected event network-vif-plugged-783ccf1e-b85c-4420-b3e9-705ddf5495d3 for instance with vm_state active and task_state migrating.
Oct 09 16:32:51 compute-0 nova_compute[117331]: 2025-10-09 16:32:51.341 2 DEBUG nova.compute.manager [req-812310f2-4316-4d7b-a80f-280e98a63340 req-2caafaf6-3399-4227-8f4c-96c7ab41c69b ee15961ad2be4bada6c106ac4f69f422 c076c42271004e7aa86e84416e33f826 - - default default] [instance: 4c51c9df-777a-497d-bf84-a8001b45a4f0] Received event network-vif-unplugged-783ccf1e-b85c-4420-b3e9-705ddf5495d3 external_instance_event /usr/lib/python3.12/site-packages/nova/compute/manager.py:11812
Oct 09 16:32:51 compute-0 nova_compute[117331]: 2025-10-09 16:32:51.341 2 DEBUG oslo_concurrency.lockutils [req-812310f2-4316-4d7b-a80f-280e98a63340 req-2caafaf6-3399-4227-8f4c-96c7ab41c69b ee15961ad2be4bada6c106ac4f69f422 c076c42271004e7aa86e84416e33f826 - - default default] Acquiring lock "4c51c9df-777a-497d-bf84-a8001b45a4f0-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:405
Oct 09 16:32:51 compute-0 nova_compute[117331]: 2025-10-09 16:32:51.342 2 DEBUG oslo_concurrency.lockutils [req-812310f2-4316-4d7b-a80f-280e98a63340 req-2caafaf6-3399-4227-8f4c-96c7ab41c69b ee15961ad2be4bada6c106ac4f69f422 c076c42271004e7aa86e84416e33f826 - - default default] Lock "4c51c9df-777a-497d-bf84-a8001b45a4f0-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:410
Oct 09 16:32:51 compute-0 nova_compute[117331]: 2025-10-09 16:32:51.342 2 DEBUG oslo_concurrency.lockutils [req-812310f2-4316-4d7b-a80f-280e98a63340 req-2caafaf6-3399-4227-8f4c-96c7ab41c69b ee15961ad2be4bada6c106ac4f69f422 c076c42271004e7aa86e84416e33f826 - - default default] Lock "4c51c9df-777a-497d-bf84-a8001b45a4f0-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:424
Oct 09 16:32:51 compute-0 nova_compute[117331]: 2025-10-09 16:32:51.342 2 DEBUG nova.compute.manager [req-812310f2-4316-4d7b-a80f-280e98a63340 req-2caafaf6-3399-4227-8f4c-96c7ab41c69b ee15961ad2be4bada6c106ac4f69f422 c076c42271004e7aa86e84416e33f826 - - default default] [instance: 4c51c9df-777a-497d-bf84-a8001b45a4f0] No waiting events found dispatching network-vif-unplugged-783ccf1e-b85c-4420-b3e9-705ddf5495d3 pop_instance_event /usr/lib/python3.12/site-packages/nova/compute/manager.py:344
Oct 09 16:32:51 compute-0 nova_compute[117331]: 2025-10-09 16:32:51.342 2 DEBUG nova.compute.manager [req-812310f2-4316-4d7b-a80f-280e98a63340 req-2caafaf6-3399-4227-8f4c-96c7ab41c69b ee15961ad2be4bada6c106ac4f69f422 c076c42271004e7aa86e84416e33f826 - - default default] [instance: 4c51c9df-777a-497d-bf84-a8001b45a4f0] Received event network-vif-unplugged-783ccf1e-b85c-4420-b3e9-705ddf5495d3 for instance with task_state migrating. _process_instance_event /usr/lib/python3.12/site-packages/nova/compute/manager.py:11590
Oct 09 16:32:51 compute-0 nova_compute[117331]: 2025-10-09 16:32:51.342 2 DEBUG nova.compute.manager [req-812310f2-4316-4d7b-a80f-280e98a63340 req-2caafaf6-3399-4227-8f4c-96c7ab41c69b ee15961ad2be4bada6c106ac4f69f422 c076c42271004e7aa86e84416e33f826 - - default default] [instance: 4c51c9df-777a-497d-bf84-a8001b45a4f0] Received event network-vif-unplugged-783ccf1e-b85c-4420-b3e9-705ddf5495d3 external_instance_event /usr/lib/python3.12/site-packages/nova/compute/manager.py:11812
Oct 09 16:32:51 compute-0 nova_compute[117331]: 2025-10-09 16:32:51.342 2 DEBUG oslo_concurrency.lockutils [req-812310f2-4316-4d7b-a80f-280e98a63340 req-2caafaf6-3399-4227-8f4c-96c7ab41c69b ee15961ad2be4bada6c106ac4f69f422 c076c42271004e7aa86e84416e33f826 - - default default] Acquiring lock "4c51c9df-777a-497d-bf84-a8001b45a4f0-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:405
Oct 09 16:32:51 compute-0 nova_compute[117331]: 2025-10-09 16:32:51.342 2 DEBUG oslo_concurrency.lockutils [req-812310f2-4316-4d7b-a80f-280e98a63340 req-2caafaf6-3399-4227-8f4c-96c7ab41c69b ee15961ad2be4bada6c106ac4f69f422 c076c42271004e7aa86e84416e33f826 - - default default] Lock "4c51c9df-777a-497d-bf84-a8001b45a4f0-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:410
Oct 09 16:32:51 compute-0 nova_compute[117331]: 2025-10-09 16:32:51.342 2 DEBUG oslo_concurrency.lockutils [req-812310f2-4316-4d7b-a80f-280e98a63340 req-2caafaf6-3399-4227-8f4c-96c7ab41c69b ee15961ad2be4bada6c106ac4f69f422 c076c42271004e7aa86e84416e33f826 - - default default] Lock "4c51c9df-777a-497d-bf84-a8001b45a4f0-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:424
Oct 09 16:32:51 compute-0 nova_compute[117331]: 2025-10-09 16:32:51.343 2 DEBUG nova.compute.manager [req-812310f2-4316-4d7b-a80f-280e98a63340 req-2caafaf6-3399-4227-8f4c-96c7ab41c69b ee15961ad2be4bada6c106ac4f69f422 c076c42271004e7aa86e84416e33f826 - - default default] [instance: 4c51c9df-777a-497d-bf84-a8001b45a4f0] No waiting events found dispatching network-vif-unplugged-783ccf1e-b85c-4420-b3e9-705ddf5495d3 pop_instance_event /usr/lib/python3.12/site-packages/nova/compute/manager.py:344
Oct 09 16:32:51 compute-0 nova_compute[117331]: 2025-10-09 16:32:51.343 2 DEBUG nova.compute.manager [req-812310f2-4316-4d7b-a80f-280e98a63340 req-2caafaf6-3399-4227-8f4c-96c7ab41c69b ee15961ad2be4bada6c106ac4f69f422 c076c42271004e7aa86e84416e33f826 - - default default] [instance: 4c51c9df-777a-497d-bf84-a8001b45a4f0] Received event network-vif-unplugged-783ccf1e-b85c-4420-b3e9-705ddf5495d3 for instance with task_state migrating. _process_instance_event /usr/lib/python3.12/site-packages/nova/compute/manager.py:11590
Oct 09 16:32:51 compute-0 nova_compute[117331]: 2025-10-09 16:32:51.343 2 DEBUG nova.compute.manager [req-812310f2-4316-4d7b-a80f-280e98a63340 req-2caafaf6-3399-4227-8f4c-96c7ab41c69b ee15961ad2be4bada6c106ac4f69f422 c076c42271004e7aa86e84416e33f826 - - default default] [instance: 4c51c9df-777a-497d-bf84-a8001b45a4f0] Received event network-vif-plugged-783ccf1e-b85c-4420-b3e9-705ddf5495d3 external_instance_event /usr/lib/python3.12/site-packages/nova/compute/manager.py:11812
Oct 09 16:32:51 compute-0 nova_compute[117331]: 2025-10-09 16:32:51.343 2 DEBUG oslo_concurrency.lockutils [req-812310f2-4316-4d7b-a80f-280e98a63340 req-2caafaf6-3399-4227-8f4c-96c7ab41c69b ee15961ad2be4bada6c106ac4f69f422 c076c42271004e7aa86e84416e33f826 - - default default] Acquiring lock "4c51c9df-777a-497d-bf84-a8001b45a4f0-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:405
Oct 09 16:32:51 compute-0 nova_compute[117331]: 2025-10-09 16:32:51.343 2 DEBUG oslo_concurrency.lockutils [req-812310f2-4316-4d7b-a80f-280e98a63340 req-2caafaf6-3399-4227-8f4c-96c7ab41c69b ee15961ad2be4bada6c106ac4f69f422 c076c42271004e7aa86e84416e33f826 - - default default] Lock "4c51c9df-777a-497d-bf84-a8001b45a4f0-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:410
Oct 09 16:32:51 compute-0 nova_compute[117331]: 2025-10-09 16:32:51.343 2 DEBUG oslo_concurrency.lockutils [req-812310f2-4316-4d7b-a80f-280e98a63340 req-2caafaf6-3399-4227-8f4c-96c7ab41c69b ee15961ad2be4bada6c106ac4f69f422 c076c42271004e7aa86e84416e33f826 - - default default] Lock "4c51c9df-777a-497d-bf84-a8001b45a4f0-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:424
Oct 09 16:32:51 compute-0 nova_compute[117331]: 2025-10-09 16:32:51.343 2 DEBUG nova.compute.manager [req-812310f2-4316-4d7b-a80f-280e98a63340 req-2caafaf6-3399-4227-8f4c-96c7ab41c69b ee15961ad2be4bada6c106ac4f69f422 c076c42271004e7aa86e84416e33f826 - - default default] [instance: 4c51c9df-777a-497d-bf84-a8001b45a4f0] No waiting events found dispatching network-vif-plugged-783ccf1e-b85c-4420-b3e9-705ddf5495d3 pop_instance_event /usr/lib/python3.12/site-packages/nova/compute/manager.py:344
Oct 09 16:32:51 compute-0 nova_compute[117331]: 2025-10-09 16:32:51.343 2 WARNING nova.compute.manager [req-812310f2-4316-4d7b-a80f-280e98a63340 req-2caafaf6-3399-4227-8f4c-96c7ab41c69b ee15961ad2be4bada6c106ac4f69f422 c076c42271004e7aa86e84416e33f826 - - default default] [instance: 4c51c9df-777a-497d-bf84-a8001b45a4f0] Received unexpected event network-vif-plugged-783ccf1e-b85c-4420-b3e9-705ddf5495d3 for instance with vm_state active and task_state migrating.
Oct 09 16:32:51 compute-0 nova_compute[117331]: 2025-10-09 16:32:51.344 2 DEBUG nova.compute.manager [req-812310f2-4316-4d7b-a80f-280e98a63340 req-2caafaf6-3399-4227-8f4c-96c7ab41c69b ee15961ad2be4bada6c106ac4f69f422 c076c42271004e7aa86e84416e33f826 - - default default] [instance: 4c51c9df-777a-497d-bf84-a8001b45a4f0] Received event network-vif-plugged-783ccf1e-b85c-4420-b3e9-705ddf5495d3 external_instance_event /usr/lib/python3.12/site-packages/nova/compute/manager.py:11812
Oct 09 16:32:51 compute-0 nova_compute[117331]: 2025-10-09 16:32:51.344 2 DEBUG oslo_concurrency.lockutils [req-812310f2-4316-4d7b-a80f-280e98a63340 req-2caafaf6-3399-4227-8f4c-96c7ab41c69b ee15961ad2be4bada6c106ac4f69f422 c076c42271004e7aa86e84416e33f826 - - default default] Acquiring lock "4c51c9df-777a-497d-bf84-a8001b45a4f0-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:405
Oct 09 16:32:51 compute-0 nova_compute[117331]: 2025-10-09 16:32:51.344 2 DEBUG oslo_concurrency.lockutils [req-812310f2-4316-4d7b-a80f-280e98a63340 req-2caafaf6-3399-4227-8f4c-96c7ab41c69b ee15961ad2be4bada6c106ac4f69f422 c076c42271004e7aa86e84416e33f826 - - default default] Lock "4c51c9df-777a-497d-bf84-a8001b45a4f0-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:410
Oct 09 16:32:51 compute-0 nova_compute[117331]: 2025-10-09 16:32:51.344 2 DEBUG oslo_concurrency.lockutils [req-812310f2-4316-4d7b-a80f-280e98a63340 req-2caafaf6-3399-4227-8f4c-96c7ab41c69b ee15961ad2be4bada6c106ac4f69f422 c076c42271004e7aa86e84416e33f826 - - default default] Lock "4c51c9df-777a-497d-bf84-a8001b45a4f0-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:424
Oct 09 16:32:51 compute-0 nova_compute[117331]: 2025-10-09 16:32:51.344 2 DEBUG nova.compute.manager [req-812310f2-4316-4d7b-a80f-280e98a63340 req-2caafaf6-3399-4227-8f4c-96c7ab41c69b ee15961ad2be4bada6c106ac4f69f422 c076c42271004e7aa86e84416e33f826 - - default default] [instance: 4c51c9df-777a-497d-bf84-a8001b45a4f0] No waiting events found dispatching network-vif-plugged-783ccf1e-b85c-4420-b3e9-705ddf5495d3 pop_instance_event /usr/lib/python3.12/site-packages/nova/compute/manager.py:344
Oct 09 16:32:51 compute-0 nova_compute[117331]: 2025-10-09 16:32:51.344 2 WARNING nova.compute.manager [req-812310f2-4316-4d7b-a80f-280e98a63340 req-2caafaf6-3399-4227-8f4c-96c7ab41c69b ee15961ad2be4bada6c106ac4f69f422 c076c42271004e7aa86e84416e33f826 - - default default] [instance: 4c51c9df-777a-497d-bf84-a8001b45a4f0] Received unexpected event network-vif-plugged-783ccf1e-b85c-4420-b3e9-705ddf5495d3 for instance with vm_state active and task_state migrating.
Oct 09 16:32:51 compute-0 sshd-session[150462]: Connection closed by invalid user mssql 36.224.53.32 port 53774 [preauth]
Oct 09 16:32:52 compute-0 nova_compute[117331]: 2025-10-09 16:32:52.144 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 09 16:32:54 compute-0 sshd-session[150501]: Invalid user storm from 36.224.53.32 port 34162
Oct 09 16:32:54 compute-0 podman[150523]: 2025-10-09 16:32:54.356822801 +0000 UTC m=+0.059730817 container health_status 86aa93e887899e54d383b0fc17fb3c5637680d1d8e146a130a557c837f0260fe (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible)
Oct 09 16:32:55 compute-0 sshd-session[150501]: pam_unix(sshd:auth): check pass; user unknown
Oct 09 16:32:55 compute-0 sshd-session[150501]: pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=36.224.53.32
Oct 09 16:32:55 compute-0 nova_compute[117331]: 2025-10-09 16:32:55.197 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 09 16:32:57 compute-0 nova_compute[117331]: 2025-10-09 16:32:57.147 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 09 16:32:57 compute-0 sshd-session[150501]: Failed password for invalid user storm from 36.224.53.32 port 34162 ssh2
Oct 09 16:32:58 compute-0 sshd-session[150501]: Connection closed by invalid user storm 36.224.53.32 port 34162 [preauth]
Oct 09 16:32:58 compute-0 nova_compute[117331]: 2025-10-09 16:32:58.762 2 DEBUG oslo_concurrency.lockutils [None req-a98bf568-4bf7-459f-a8c7-be928ce9dc17 005258eefa5345409efe13f6a1de22ab c076c42271004e7aa86e84416e33f826 - - default default] Acquiring lock "4c51c9df-777a-497d-bf84-a8001b45a4f0-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:405
Oct 09 16:32:58 compute-0 nova_compute[117331]: 2025-10-09 16:32:58.762 2 DEBUG oslo_concurrency.lockutils [None req-a98bf568-4bf7-459f-a8c7-be928ce9dc17 005258eefa5345409efe13f6a1de22ab c076c42271004e7aa86e84416e33f826 - - default default] Lock "4c51c9df-777a-497d-bf84-a8001b45a4f0-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.001s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:410
Oct 09 16:32:58 compute-0 nova_compute[117331]: 2025-10-09 16:32:58.762 2 DEBUG oslo_concurrency.lockutils [None req-a98bf568-4bf7-459f-a8c7-be928ce9dc17 005258eefa5345409efe13f6a1de22ab c076c42271004e7aa86e84416e33f826 - - default default] Lock "4c51c9df-777a-497d-bf84-a8001b45a4f0-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:424
Oct 09 16:32:59 compute-0 nova_compute[117331]: 2025-10-09 16:32:59.275 2 DEBUG oslo_concurrency.lockutils [None req-a98bf568-4bf7-459f-a8c7-be928ce9dc17 005258eefa5345409efe13f6a1de22ab c076c42271004e7aa86e84416e33f826 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:405
Oct 09 16:32:59 compute-0 nova_compute[117331]: 2025-10-09 16:32:59.276 2 DEBUG oslo_concurrency.lockutils [None req-a98bf568-4bf7-459f-a8c7-be928ce9dc17 005258eefa5345409efe13f6a1de22ab c076c42271004e7aa86e84416e33f826 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:410
Oct 09 16:32:59 compute-0 nova_compute[117331]: 2025-10-09 16:32:59.276 2 DEBUG oslo_concurrency.lockutils [None req-a98bf568-4bf7-459f-a8c7-be928ce9dc17 005258eefa5345409efe13f6a1de22ab c076c42271004e7aa86e84416e33f826 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:424
Oct 09 16:32:59 compute-0 nova_compute[117331]: 2025-10-09 16:32:59.276 2 DEBUG nova.compute.resource_tracker [None req-a98bf568-4bf7-459f-a8c7-be928ce9dc17 005258eefa5345409efe13f6a1de22ab c076c42271004e7aa86e84416e33f826 - - default default] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.12/site-packages/nova/compute/resource_tracker.py:937
Oct 09 16:32:59 compute-0 podman[127775]: time="2025-10-09T16:32:59Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Oct 09 16:32:59 compute-0 podman[127775]: @ - - [09/Oct/2025:16:32:59 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 20748 "" "Go-http-client/1.1"
Oct 09 16:32:59 compute-0 podman[127775]: @ - - [09/Oct/2025:16:32:59 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 3492 "" "Go-http-client/1.1"
Oct 09 16:33:00 compute-0 nova_compute[117331]: 2025-10-09 16:33:00.228 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 09 16:33:00 compute-0 nova_compute[117331]: 2025-10-09 16:33:00.322 2 DEBUG oslo_concurrency.processutils [None req-a98bf568-4bf7-459f-a8c7-be928ce9dc17 005258eefa5345409efe13f6a1de22ab c076c42271004e7aa86e84416e33f826 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/861b59b6-7073-418e-8e9a-9281ce63040c/disk --force-share --output=json execute /usr/lib/python3.12/site-packages/oslo_concurrency/processutils.py:349
Oct 09 16:33:00 compute-0 nova_compute[117331]: 2025-10-09 16:33:00.390 2 DEBUG oslo_concurrency.processutils [None req-a98bf568-4bf7-459f-a8c7-be928ce9dc17 005258eefa5345409efe13f6a1de22ab c076c42271004e7aa86e84416e33f826 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/861b59b6-7073-418e-8e9a-9281ce63040c/disk --force-share --output=json" returned: 0 in 0.068s execute /usr/lib/python3.12/site-packages/oslo_concurrency/processutils.py:372
Oct 09 16:33:00 compute-0 nova_compute[117331]: 2025-10-09 16:33:00.392 2 DEBUG oslo_concurrency.processutils [None req-a98bf568-4bf7-459f-a8c7-be928ce9dc17 005258eefa5345409efe13f6a1de22ab c076c42271004e7aa86e84416e33f826 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/861b59b6-7073-418e-8e9a-9281ce63040c/disk --force-share --output=json execute /usr/lib/python3.12/site-packages/oslo_concurrency/processutils.py:349
Oct 09 16:33:00 compute-0 nova_compute[117331]: 2025-10-09 16:33:00.443 2 DEBUG oslo_concurrency.processutils [None req-a98bf568-4bf7-459f-a8c7-be928ce9dc17 005258eefa5345409efe13f6a1de22ab c076c42271004e7aa86e84416e33f826 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/861b59b6-7073-418e-8e9a-9281ce63040c/disk --force-share --output=json" returned: 0 in 0.051s execute /usr/lib/python3.12/site-packages/oslo_concurrency/processutils.py:372
Oct 09 16:33:00 compute-0 nova_compute[117331]: 2025-10-09 16:33:00.581 2 WARNING nova.virt.libvirt.driver [None req-a98bf568-4bf7-459f-a8c7-be928ce9dc17 005258eefa5345409efe13f6a1de22ab c076c42271004e7aa86e84416e33f826 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Oct 09 16:33:00 compute-0 nova_compute[117331]: 2025-10-09 16:33:00.583 2 DEBUG oslo_concurrency.processutils [None req-a98bf568-4bf7-459f-a8c7-be928ce9dc17 005258eefa5345409efe13f6a1de22ab c076c42271004e7aa86e84416e33f826 - - default default] Running cmd (subprocess): env LANG=C uptime execute /usr/lib/python3.12/site-packages/oslo_concurrency/processutils.py:349
Oct 09 16:33:00 compute-0 nova_compute[117331]: 2025-10-09 16:33:00.598 2 DEBUG oslo_concurrency.processutils [None req-a98bf568-4bf7-459f-a8c7-be928ce9dc17 005258eefa5345409efe13f6a1de22ab c076c42271004e7aa86e84416e33f826 - - default default] CMD "env LANG=C uptime" returned: 0 in 0.015s execute /usr/lib/python3.12/site-packages/oslo_concurrency/processutils.py:372
Oct 09 16:33:00 compute-0 nova_compute[117331]: 2025-10-09 16:33:00.598 2 DEBUG nova.compute.resource_tracker [None req-a98bf568-4bf7-459f-a8c7-be928ce9dc17 005258eefa5345409efe13f6a1de22ab c076c42271004e7aa86e84416e33f826 - - default default] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=5960MB free_disk=73.2280387878418GB free_vcpus=7 pci_devices=[{"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.12/site-packages/nova/compute/resource_tracker.py:1136
Oct 09 16:33:00 compute-0 nova_compute[117331]: 2025-10-09 16:33:00.599 2 DEBUG oslo_concurrency.lockutils [None req-a98bf568-4bf7-459f-a8c7-be928ce9dc17 005258eefa5345409efe13f6a1de22ab c076c42271004e7aa86e84416e33f826 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:405
Oct 09 16:33:00 compute-0 nova_compute[117331]: 2025-10-09 16:33:00.599 2 DEBUG oslo_concurrency.lockutils [None req-a98bf568-4bf7-459f-a8c7-be928ce9dc17 005258eefa5345409efe13f6a1de22ab c076c42271004e7aa86e84416e33f826 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:410
Oct 09 16:33:00 compute-0 podman[150558]: 2025-10-09 16:33:00.832205053 +0000 UTC m=+0.060908924 container health_status 0e966c7a283689daf23c5db5017f23f685ee836319240188a9f70ddd246563c5 (image=38.102.83.66:5001/podified-master-centos10/openstack-neutron-metadata-agent-ovn:watcher_latest, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': '38.102.83.66:5001/podified-master-centos10/openstack-neutron-metadata-agent-ovn:watcher_latest', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, io.buildah.version=1.41.4, org.label-schema.license=GPLv2, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, tcib_build_tag=watcher_latest, org.label-schema.build-date=20251007, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.schema-version=1.0)
Oct 09 16:33:00 compute-0 podman[150559]: 2025-10-09 16:33:00.838245695 +0000 UTC m=+0.064058555 container health_status 8227728d23f374cb9528be6ddf01dff38d8c792912a17b0d137932a18390b820 (image=38.102.83.66:5001/podified-master-centos10/openstack-iscsid:watcher_latest, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.vendor=CentOS, container_name=iscsid, org.label-schema.build-date=20251007, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': '38.102.83.66:5001/podified-master-centos10/openstack-iscsid:watcher_latest', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, tcib_managed=true, tcib_build_tag=watcher_latest, config_id=iscsid, io.buildah.version=1.41.4, maintainer=OpenStack Kubernetes Operator team)
Oct 09 16:33:01 compute-0 nova_compute[117331]: 2025-10-09 16:33:01.003 2 DEBUG oslo_concurrency.lockutils [None req-a86bc30b-4ad1-4ca4-9471-a69a8079e1d0 1c793380a6e945d69dacfd07f1f156f8 6d67ac3076434a4582e5db1ca7d043ff - - default default] Acquiring lock "861b59b6-7073-418e-8e9a-9281ce63040c" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:405
Oct 09 16:33:01 compute-0 nova_compute[117331]: 2025-10-09 16:33:01.003 2 DEBUG oslo_concurrency.lockutils [None req-a86bc30b-4ad1-4ca4-9471-a69a8079e1d0 1c793380a6e945d69dacfd07f1f156f8 6d67ac3076434a4582e5db1ca7d043ff - - default default] Lock "861b59b6-7073-418e-8e9a-9281ce63040c" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:410
Oct 09 16:33:01 compute-0 nova_compute[117331]: 2025-10-09 16:33:01.004 2 DEBUG oslo_concurrency.lockutils [None req-a86bc30b-4ad1-4ca4-9471-a69a8079e1d0 1c793380a6e945d69dacfd07f1f156f8 6d67ac3076434a4582e5db1ca7d043ff - - default default] Acquiring lock "861b59b6-7073-418e-8e9a-9281ce63040c-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:405
Oct 09 16:33:01 compute-0 nova_compute[117331]: 2025-10-09 16:33:01.004 2 DEBUG oslo_concurrency.lockutils [None req-a86bc30b-4ad1-4ca4-9471-a69a8079e1d0 1c793380a6e945d69dacfd07f1f156f8 6d67ac3076434a4582e5db1ca7d043ff - - default default] Lock "861b59b6-7073-418e-8e9a-9281ce63040c-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:410
Oct 09 16:33:01 compute-0 nova_compute[117331]: 2025-10-09 16:33:01.004 2 DEBUG oslo_concurrency.lockutils [None req-a86bc30b-4ad1-4ca4-9471-a69a8079e1d0 1c793380a6e945d69dacfd07f1f156f8 6d67ac3076434a4582e5db1ca7d043ff - - default default] Lock "861b59b6-7073-418e-8e9a-9281ce63040c-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:424
Oct 09 16:33:01 compute-0 nova_compute[117331]: 2025-10-09 16:33:01.019 2 INFO nova.compute.manager [None req-a86bc30b-4ad1-4ca4-9471-a69a8079e1d0 1c793380a6e945d69dacfd07f1f156f8 6d67ac3076434a4582e5db1ca7d043ff - - default default] [instance: 861b59b6-7073-418e-8e9a-9281ce63040c] Terminating instance
Oct 09 16:33:01 compute-0 openstack_network_exporter[129925]: ERROR   16:33:01 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Oct 09 16:33:01 compute-0 openstack_network_exporter[129925]: ERROR   16:33:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Oct 09 16:33:01 compute-0 openstack_network_exporter[129925]: ERROR   16:33:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Oct 09 16:33:01 compute-0 openstack_network_exporter[129925]: ERROR   16:33:01 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Oct 09 16:33:01 compute-0 openstack_network_exporter[129925]: 
Oct 09 16:33:01 compute-0 openstack_network_exporter[129925]: ERROR   16:33:01 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Oct 09 16:33:01 compute-0 openstack_network_exporter[129925]: 
Oct 09 16:33:01 compute-0 nova_compute[117331]: 2025-10-09 16:33:01.543 2 DEBUG nova.compute.manager [None req-a86bc30b-4ad1-4ca4-9471-a69a8079e1d0 1c793380a6e945d69dacfd07f1f156f8 6d67ac3076434a4582e5db1ca7d043ff - - default default] [instance: 861b59b6-7073-418e-8e9a-9281ce63040c] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.12/site-packages/nova/compute/manager.py:3197
Oct 09 16:33:01 compute-0 kernel: tapddaf1c36-33 (unregistering): left promiscuous mode
Oct 09 16:33:01 compute-0 NetworkManager[1028]: <info>  [1760027581.5851] device (tapddaf1c36-33): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Oct 09 16:33:01 compute-0 ovn_controller[19752]: 2025-10-09T16:33:01Z|00216|binding|INFO|Releasing lport ddaf1c36-334b-432c-8d88-e0d7678f6c28 from this chassis (sb_readonly=0)
Oct 09 16:33:01 compute-0 ovn_controller[19752]: 2025-10-09T16:33:01Z|00217|binding|INFO|Setting lport ddaf1c36-334b-432c-8d88-e0d7678f6c28 down in Southbound
Oct 09 16:33:01 compute-0 nova_compute[117331]: 2025-10-09 16:33:01.594 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 09 16:33:01 compute-0 ovn_controller[19752]: 2025-10-09T16:33:01Z|00218|binding|INFO|Removing iface tapddaf1c36-33 ovn-installed in OVS
Oct 09 16:33:01 compute-0 ovn_metadata_agent[28608]: 2025-10-09 16:33:01.602 28613 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:1b:0c:10 10.100.0.6'], port_security=['fa:16:3e:1b:0c:10 10.100.0.6'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.6/28', 'neutron:device_id': '861b59b6-7073-418e-8e9a-9281ce63040c', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-54b37568-476a-40a0-b545-fe5401f85653', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '6d67ac3076434a4582e5db1ca7d043ff', 'neutron:revision_number': '5', 'neutron:security_group_ids': '94abd997-3903-47b3-abe3-e283a4232c96', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=dc641581-8c4b-4cef-982e-bb0cf11a52ba, chassis=[], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f37b2576840>], logical_port=ddaf1c36-334b-432c-8d88-e0d7678f6c28) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f37b2576840>]) matches /usr/lib/python3.12/site-packages/ovsdbapp/backend/ovs_idl/event.py:55
Oct 09 16:33:01 compute-0 ovn_metadata_agent[28608]: 2025-10-09 16:33:01.603 28613 INFO neutron.agent.ovn.metadata.agent [-] Port ddaf1c36-334b-432c-8d88-e0d7678f6c28 in datapath 54b37568-476a-40a0-b545-fe5401f85653 unbound from our chassis
Oct 09 16:33:01 compute-0 ovn_metadata_agent[28608]: 2025-10-09 16:33:01.604 28613 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 54b37568-476a-40a0-b545-fe5401f85653, tearing the namespace down if needed _get_provision_params /usr/lib/python3.12/site-packages/neutron/agent/ovn/metadata/agent.py:756
Oct 09 16:33:01 compute-0 ovn_metadata_agent[28608]: 2025-10-09 16:33:01.604 139687 DEBUG oslo.privsep.daemon [-] privsep: reply[ce52e607-6d14-42ab-9f61-7be227fc6a7b]: (4, True) _call_back /usr/lib/python3.12/site-packages/oslo_privsep/daemon.py:515
Oct 09 16:33:01 compute-0 ovn_metadata_agent[28608]: 2025-10-09 16:33:01.605 28613 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-54b37568-476a-40a0-b545-fe5401f85653 namespace which is not needed anymore
Oct 09 16:33:01 compute-0 nova_compute[117331]: 2025-10-09 16:33:01.618 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 09 16:33:01 compute-0 nova_compute[117331]: 2025-10-09 16:33:01.619 2 DEBUG nova.compute.resource_tracker [None req-a98bf568-4bf7-459f-a8c7-be928ce9dc17 005258eefa5345409efe13f6a1de22ab c076c42271004e7aa86e84416e33f826 - - default default] Migration for instance 4c51c9df-777a-497d-bf84-a8001b45a4f0 refers to another host's instance! _pair_instances_to_migrations /usr/lib/python3.12/site-packages/nova/compute/resource_tracker.py:979
Oct 09 16:33:01 compute-0 systemd[1]: machine-qemu\x2d17\x2dinstance\x2d00000017.scope: Deactivated successfully.
Oct 09 16:33:01 compute-0 systemd[1]: machine-qemu\x2d17\x2dinstance\x2d00000017.scope: Consumed 13.119s CPU time.
Oct 09 16:33:01 compute-0 systemd-machined[77487]: Machine qemu-17-instance-00000017 terminated.
Oct 09 16:33:01 compute-0 neutron-haproxy-ovnmeta-54b37568-476a-40a0-b545-fe5401f85653[150112]: [NOTICE]   (150116) : haproxy version is 3.0.5-8e879a5
Oct 09 16:33:01 compute-0 neutron-haproxy-ovnmeta-54b37568-476a-40a0-b545-fe5401f85653[150112]: [NOTICE]   (150116) : path to executable is /usr/sbin/haproxy
Oct 09 16:33:01 compute-0 neutron-haproxy-ovnmeta-54b37568-476a-40a0-b545-fe5401f85653[150112]: [WARNING]  (150116) : Exiting Master process...
Oct 09 16:33:01 compute-0 neutron-haproxy-ovnmeta-54b37568-476a-40a0-b545-fe5401f85653[150112]: [ALERT]    (150116) : Current worker (150118) exited with code 143 (Terminated)
Oct 09 16:33:01 compute-0 podman[150619]: 2025-10-09 16:33:01.742363374 +0000 UTC m=+0.038013278 container kill cda4fdce59ea0b46106250ef11fdeea8d0cdca852ffb79593ac2c806a6408e2c (image=38.102.83.66:5001/podified-master-centos10/openstack-neutron-metadata-agent-ovn:watcher_latest, name=neutron-haproxy-ovnmeta-54b37568-476a-40a0-b545-fe5401f85653, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.name=CentOS Stream 10 Base Image, io.buildah.version=1.41.4, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=watcher_latest, org.label-schema.build-date=20251007)
Oct 09 16:33:01 compute-0 neutron-haproxy-ovnmeta-54b37568-476a-40a0-b545-fe5401f85653[150112]: [WARNING]  (150116) : All workers exited. Exiting... (0)
Oct 09 16:33:01 compute-0 systemd[1]: libpod-cda4fdce59ea0b46106250ef11fdeea8d0cdca852ffb79593ac2c806a6408e2c.scope: Deactivated successfully.
Oct 09 16:33:01 compute-0 podman[150635]: 2025-10-09 16:33:01.802010048 +0000 UTC m=+0.037158531 container died cda4fdce59ea0b46106250ef11fdeea8d0cdca852ffb79593ac2c806a6408e2c (image=38.102.83.66:5001/podified-master-centos10/openstack-neutron-metadata-agent-ovn:watcher_latest, name=neutron-haproxy-ovnmeta-54b37568-476a-40a0-b545-fe5401f85653, io.buildah.version=1.41.4, org.label-schema.build-date=20251007, tcib_build_tag=watcher_latest, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true)
Oct 09 16:33:01 compute-0 nova_compute[117331]: 2025-10-09 16:33:01.812 2 INFO nova.virt.libvirt.driver [-] [instance: 861b59b6-7073-418e-8e9a-9281ce63040c] Instance destroyed successfully.
Oct 09 16:33:01 compute-0 nova_compute[117331]: 2025-10-09 16:33:01.813 2 DEBUG nova.objects.instance [None req-a86bc30b-4ad1-4ca4-9471-a69a8079e1d0 1c793380a6e945d69dacfd07f1f156f8 6d67ac3076434a4582e5db1ca7d043ff - - default default] Lazy-loading 'resources' on Instance uuid 861b59b6-7073-418e-8e9a-9281ce63040c obj_load_attr /usr/lib/python3.12/site-packages/nova/objects/instance.py:1141
Oct 09 16:33:01 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-cda4fdce59ea0b46106250ef11fdeea8d0cdca852ffb79593ac2c806a6408e2c-userdata-shm.mount: Deactivated successfully.
Oct 09 16:33:01 compute-0 systemd[1]: var-lib-containers-storage-overlay-1e33a5bff9a3786cb3709bb5bf26926b733d50a17ec3d7b6ae96c1afcf8c70d8-merged.mount: Deactivated successfully.
Oct 09 16:33:01 compute-0 podman[150635]: 2025-10-09 16:33:01.852017356 +0000 UTC m=+0.087165749 container cleanup cda4fdce59ea0b46106250ef11fdeea8d0cdca852ffb79593ac2c806a6408e2c (image=38.102.83.66:5001/podified-master-centos10/openstack-neutron-metadata-agent-ovn:watcher_latest, name=neutron-haproxy-ovnmeta-54b37568-476a-40a0-b545-fe5401f85653, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251007, tcib_managed=true, io.buildah.version=1.41.4, org.label-schema.name=CentOS Stream 10 Base Image, tcib_build_tag=watcher_latest, maintainer=OpenStack Kubernetes Operator team)
Oct 09 16:33:01 compute-0 systemd[1]: libpod-conmon-cda4fdce59ea0b46106250ef11fdeea8d0cdca852ffb79593ac2c806a6408e2c.scope: Deactivated successfully.
Oct 09 16:33:01 compute-0 podman[150639]: 2025-10-09 16:33:01.871913508 +0000 UTC m=+0.091102715 container remove cda4fdce59ea0b46106250ef11fdeea8d0cdca852ffb79593ac2c806a6408e2c (image=38.102.83.66:5001/podified-master-centos10/openstack-neutron-metadata-agent-ovn:watcher_latest, name=neutron-haproxy-ovnmeta-54b37568-476a-40a0-b545-fe5401f85653, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251007, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=watcher_latest, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.4)
Oct 09 16:33:01 compute-0 ovn_metadata_agent[28608]: 2025-10-09 16:33:01.877 139687 DEBUG oslo.privsep.daemon [-] privsep: reply[e892e7b8-f86a-49f8-8f9e-cdac522ab9d2]: (4, ("Thu Oct  9 04:33:01 PM UTC 2025 Sending signal '15' to neutron-haproxy-ovnmeta-54b37568-476a-40a0-b545-fe5401f85653 (cda4fdce59ea0b46106250ef11fdeea8d0cdca852ffb79593ac2c806a6408e2c)\ncda4fdce59ea0b46106250ef11fdeea8d0cdca852ffb79593ac2c806a6408e2c\nThu Oct  9 04:33:01 PM UTC 2025 Deleting container neutron-haproxy-ovnmeta-54b37568-476a-40a0-b545-fe5401f85653 (cda4fdce59ea0b46106250ef11fdeea8d0cdca852ffb79593ac2c806a6408e2c)\ncda4fdce59ea0b46106250ef11fdeea8d0cdca852ffb79593ac2c806a6408e2c\n", '', 0)) _call_back /usr/lib/python3.12/site-packages/oslo_privsep/daemon.py:515
Oct 09 16:33:01 compute-0 ovn_metadata_agent[28608]: 2025-10-09 16:33:01.879 139687 DEBUG oslo.privsep.daemon [-] privsep: reply[c9878a0f-576c-4629-bda0-04c8ab74f4fa]: (4, None) _call_back /usr/lib/python3.12/site-packages/oslo_privsep/daemon.py:515
Oct 09 16:33:01 compute-0 ovn_metadata_agent[28608]: 2025-10-09 16:33:01.879 28613 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/54b37568-476a-40a0-b545-fe5401f85653.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/54b37568-476a-40a0-b545-fe5401f85653.pid.haproxy' get_value_from_file /usr/lib/python3.12/site-packages/neutron/agent/linux/utils.py:268
Oct 09 16:33:01 compute-0 ovn_metadata_agent[28608]: 2025-10-09 16:33:01.880 139687 DEBUG oslo.privsep.daemon [-] privsep: reply[ac1c78f7-1728-45a6-9793-87addba8b0a3]: (4, None) _call_back /usr/lib/python3.12/site-packages/oslo_privsep/daemon.py:515
Oct 09 16:33:01 compute-0 ovn_metadata_agent[28608]: 2025-10-09 16:33:01.880 28613 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap54b37568-40, bridge=None, if_exists=True) do_commit /usr/lib/python3.12/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 09 16:33:01 compute-0 nova_compute[117331]: 2025-10-09 16:33:01.881 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 09 16:33:01 compute-0 kernel: tap54b37568-40: left promiscuous mode
Oct 09 16:33:01 compute-0 nova_compute[117331]: 2025-10-09 16:33:01.895 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 09 16:33:01 compute-0 ovn_metadata_agent[28608]: 2025-10-09 16:33:01.898 139687 DEBUG oslo.privsep.daemon [-] privsep: reply[9964bca8-8def-4eb6-83b5-9fb7cb57a690]: (4, True) _call_back /usr/lib/python3.12/site-packages/oslo_privsep/daemon.py:515
Oct 09 16:33:01 compute-0 ovn_metadata_agent[28608]: 2025-10-09 16:33:01.927 139687 DEBUG oslo.privsep.daemon [-] privsep: reply[1ff2b02d-7ae0-4e49-854f-27690b687e81]: (4, None) _call_back /usr/lib/python3.12/site-packages/oslo_privsep/daemon.py:515
Oct 09 16:33:01 compute-0 ovn_metadata_agent[28608]: 2025-10-09 16:33:01.928 139687 DEBUG oslo.privsep.daemon [-] privsep: reply[f38cc18a-16d1-408b-ad77-a84a89c892ab]: (4, True) _call_back /usr/lib/python3.12/site-packages/oslo_privsep/daemon.py:515
Oct 09 16:33:01 compute-0 ovn_metadata_agent[28608]: 2025-10-09 16:33:01.944 139687 DEBUG oslo.privsep.daemon [-] privsep: reply[90e54376-683d-442e-8dc6-96d8d1148e28]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 245846, 'reachable_time': 18171, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 150687, 'error': None, 'target': 'ovnmeta-54b37568-476a-40a0-b545-fe5401f85653', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.12/site-packages/oslo_privsep/daemon.py:515
Oct 09 16:33:01 compute-0 ovn_metadata_agent[28608]: 2025-10-09 16:33:01.947 28727 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-54b37568-476a-40a0-b545-fe5401f85653 deleted. remove_netns /usr/lib/python3.12/site-packages/neutron/privileged/agent/linux/ip_lib.py:603
Oct 09 16:33:01 compute-0 ovn_metadata_agent[28608]: 2025-10-09 16:33:01.947 28727 DEBUG oslo.privsep.daemon [-] privsep: reply[3c8ddfdb-673f-459b-86b5-d21bbbf43462]: (4, None) _call_back /usr/lib/python3.12/site-packages/oslo_privsep/daemon.py:515
Oct 09 16:33:01 compute-0 systemd[1]: run-netns-ovnmeta\x2d54b37568\x2d476a\x2d40a0\x2db545\x2dfe5401f85653.mount: Deactivated successfully.
Oct 09 16:33:01 compute-0 nova_compute[117331]: 2025-10-09 16:33:01.965 2 DEBUG nova.compute.manager [req-c6fbd17f-7dcf-4ded-86fb-1ae2c382ca17 req-cfe585a7-36ee-495d-8dae-45e1c83667ba ee15961ad2be4bada6c106ac4f69f422 c076c42271004e7aa86e84416e33f826 - - default default] [instance: 861b59b6-7073-418e-8e9a-9281ce63040c] Received event network-vif-unplugged-ddaf1c36-334b-432c-8d88-e0d7678f6c28 external_instance_event /usr/lib/python3.12/site-packages/nova/compute/manager.py:11812
Oct 09 16:33:01 compute-0 nova_compute[117331]: 2025-10-09 16:33:01.965 2 DEBUG oslo_concurrency.lockutils [req-c6fbd17f-7dcf-4ded-86fb-1ae2c382ca17 req-cfe585a7-36ee-495d-8dae-45e1c83667ba ee15961ad2be4bada6c106ac4f69f422 c076c42271004e7aa86e84416e33f826 - - default default] Acquiring lock "861b59b6-7073-418e-8e9a-9281ce63040c-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:405
Oct 09 16:33:01 compute-0 nova_compute[117331]: 2025-10-09 16:33:01.965 2 DEBUG oslo_concurrency.lockutils [req-c6fbd17f-7dcf-4ded-86fb-1ae2c382ca17 req-cfe585a7-36ee-495d-8dae-45e1c83667ba ee15961ad2be4bada6c106ac4f69f422 c076c42271004e7aa86e84416e33f826 - - default default] Lock "861b59b6-7073-418e-8e9a-9281ce63040c-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:410
Oct 09 16:33:01 compute-0 nova_compute[117331]: 2025-10-09 16:33:01.966 2 DEBUG oslo_concurrency.lockutils [req-c6fbd17f-7dcf-4ded-86fb-1ae2c382ca17 req-cfe585a7-36ee-495d-8dae-45e1c83667ba ee15961ad2be4bada6c106ac4f69f422 c076c42271004e7aa86e84416e33f826 - - default default] Lock "861b59b6-7073-418e-8e9a-9281ce63040c-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:424
Oct 09 16:33:01 compute-0 nova_compute[117331]: 2025-10-09 16:33:01.966 2 DEBUG nova.compute.manager [req-c6fbd17f-7dcf-4ded-86fb-1ae2c382ca17 req-cfe585a7-36ee-495d-8dae-45e1c83667ba ee15961ad2be4bada6c106ac4f69f422 c076c42271004e7aa86e84416e33f826 - - default default] [instance: 861b59b6-7073-418e-8e9a-9281ce63040c] No waiting events found dispatching network-vif-unplugged-ddaf1c36-334b-432c-8d88-e0d7678f6c28 pop_instance_event /usr/lib/python3.12/site-packages/nova/compute/manager.py:344
Oct 09 16:33:01 compute-0 nova_compute[117331]: 2025-10-09 16:33:01.966 2 DEBUG nova.compute.manager [req-c6fbd17f-7dcf-4ded-86fb-1ae2c382ca17 req-cfe585a7-36ee-495d-8dae-45e1c83667ba ee15961ad2be4bada6c106ac4f69f422 c076c42271004e7aa86e84416e33f826 - - default default] [instance: 861b59b6-7073-418e-8e9a-9281ce63040c] Received event network-vif-unplugged-ddaf1c36-334b-432c-8d88-e0d7678f6c28 for instance with task_state deleting. _process_instance_event /usr/lib/python3.12/site-packages/nova/compute/manager.py:11590
Oct 09 16:33:02 compute-0 ovn_metadata_agent[28608]: 2025-10-09 16:33:02.029 28613 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=23, options={'arp_ns_explicit_output': 'true', 'fdb_removal_limit': '0', 'ignore_lsp_down': 'false', 'mac_binding_removal_limit': '0', 'mac_prefix': '4a:65:5f', 'max_tunid': '16711680', 'northd_internal_version': '24.09.4-20.37.0-77.8', 'svc_monitor_mac': '32:24:1e:e3:5d:e8'}, ipsec=False) old=SB_Global(nb_cfg=22) matches /usr/lib/python3.12/site-packages/ovsdbapp/backend/ovs_idl/event.py:55
Oct 09 16:33:02 compute-0 ovn_metadata_agent[28608]: 2025-10-09 16:33:02.030 28613 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 5 seconds run /usr/lib/python3.12/site-packages/neutron/agent/ovn/metadata/agent.py:367
Oct 09 16:33:02 compute-0 nova_compute[117331]: 2025-10-09 16:33:02.030 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 09 16:33:02 compute-0 nova_compute[117331]: 2025-10-09 16:33:02.126 2 DEBUG nova.compute.resource_tracker [None req-a98bf568-4bf7-459f-a8c7-be928ce9dc17 005258eefa5345409efe13f6a1de22ab c076c42271004e7aa86e84416e33f826 - - default default] [instance: 4c51c9df-777a-497d-bf84-a8001b45a4f0] Skipping migration as instance is neither resizing nor live-migrating. _update_usage_from_migrations /usr/lib/python3.12/site-packages/nova/compute/resource_tracker.py:1596
Oct 09 16:33:02 compute-0 nova_compute[117331]: 2025-10-09 16:33:02.149 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 09 16:33:02 compute-0 nova_compute[117331]: 2025-10-09 16:33:02.160 2 DEBUG nova.compute.resource_tracker [None req-a98bf568-4bf7-459f-a8c7-be928ce9dc17 005258eefa5345409efe13f6a1de22ab c076c42271004e7aa86e84416e33f826 - - default default] Instance 861b59b6-7073-418e-8e9a-9281ce63040c actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.12/site-packages/nova/compute/resource_tracker.py:1740
Oct 09 16:33:02 compute-0 nova_compute[117331]: 2025-10-09 16:33:02.161 2 DEBUG nova.compute.resource_tracker [None req-a98bf568-4bf7-459f-a8c7-be928ce9dc17 005258eefa5345409efe13f6a1de22ab c076c42271004e7aa86e84416e33f826 - - default default] Migration 5e29434f-93a8-4705-b4f3-b51901ab57ee is active on this compute host and has allocations in placement: {'resources': {'VCPU': 1, 'MEMORY_MB': 128, 'DISK_GB': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.12/site-packages/nova/compute/resource_tracker.py:1745
Oct 09 16:33:02 compute-0 nova_compute[117331]: 2025-10-09 16:33:02.161 2 DEBUG nova.compute.resource_tracker [None req-a98bf568-4bf7-459f-a8c7-be928ce9dc17 005258eefa5345409efe13f6a1de22ab c076c42271004e7aa86e84416e33f826 - - default default] Total usable vcpus: 8, total allocated vcpus: 1 _report_final_resource_view /usr/lib/python3.12/site-packages/nova/compute/resource_tracker.py:1159
Oct 09 16:33:02 compute-0 nova_compute[117331]: 2025-10-09 16:33:02.162 2 DEBUG nova.compute.resource_tracker [None req-a98bf568-4bf7-459f-a8c7-be928ce9dc17 005258eefa5345409efe13f6a1de22ab c076c42271004e7aa86e84416e33f826 - - default default] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=640MB phys_disk=79GB used_disk=1GB total_vcpus=8 used_vcpus=1 pci_stats=[] stats={'failed_builds': '0', 'uptime': ' 16:33:00 up 42 min,  0 user,  load average: 0.55, 0.63, 0.49\n', 'num_instances': '1', 'num_vm_active': '1', 'num_task_deleting': '1', 'num_os_type_None': '1', 'num_proj_6d67ac3076434a4582e5db1ca7d043ff': '1', 'io_workload': '0'} _report_final_resource_view /usr/lib/python3.12/site-packages/nova/compute/resource_tracker.py:1168
Oct 09 16:33:02 compute-0 nova_compute[117331]: 2025-10-09 16:33:02.225 2 DEBUG nova.compute.provider_tree [None req-a98bf568-4bf7-459f-a8c7-be928ce9dc17 005258eefa5345409efe13f6a1de22ab c076c42271004e7aa86e84416e33f826 - - default default] Inventory has not changed in ProviderTree for provider: 593051b8-2000-437f-a915-2616fc8b1671 update_inventory /usr/lib/python3.12/site-packages/nova/compute/provider_tree.py:180
Oct 09 16:33:02 compute-0 nova_compute[117331]: 2025-10-09 16:33:02.322 2 DEBUG nova.virt.libvirt.vif [None req-a86bc30b-4ad1-4ca4-9471-a69a8079e1d0 1c793380a6e945d69dacfd07f1f156f8 6d67ac3076434a4582e5db1ca7d043ff - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,compute_id=1,config_drive='True',created_at=2025-10-09T16:32:00Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description=None,display_name='tempest-TestExecuteWorkloadBalanceStrategy-server-790928385',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testexecuteworkloadbalancestrategy-server-790928385',id=23,image_ref='b7d6e0af-25e4-4227-9dc6-43143898ceee',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-10-09T16:32:17Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='6d67ac3076434a4582e5db1ca7d043ff',ramdisk_id='',reservation_id='r-fmbb4iaa',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,manager,admin,member',image_base_image_ref='b7d6e0af-25e4-4227-9dc6-43143898ceee',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestExecuteWorkloadBalanceStrategy-2100042169',owner_user_name='tempest-TestExecuteWorkloadBalanceStrategy-2100042169-project-admin'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-10-09T16:32:17Z,user_data=None,user_id='1c793380a6e945d69dacfd07f1f156f8',uuid=861b59b6-7073-418e-8e9a-9281ce63040c,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "ddaf1c36-334b-432c-8d88-e0d7678f6c28", "address": "fa:16:3e:1b:0c:10", "network": {"id": "54b37568-476a-40a0-b545-fe5401f85653", "bridge": "br-int", "label": "tempest-TestExecuteWorkloadBalanceStrategy-563305466-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "036a4e356fb34effb6775ffe5bd9a19f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapddaf1c36-33", "ovs_interfaceid": "ddaf1c36-334b-432c-8d88-e0d7678f6c28", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.12/site-packages/nova/virt/libvirt/vif.py:839
Oct 09 16:33:02 compute-0 nova_compute[117331]: 2025-10-09 16:33:02.323 2 DEBUG nova.network.os_vif_util [None req-a86bc30b-4ad1-4ca4-9471-a69a8079e1d0 1c793380a6e945d69dacfd07f1f156f8 6d67ac3076434a4582e5db1ca7d043ff - - default default] Converting VIF {"id": "ddaf1c36-334b-432c-8d88-e0d7678f6c28", "address": "fa:16:3e:1b:0c:10", "network": {"id": "54b37568-476a-40a0-b545-fe5401f85653", "bridge": "br-int", "label": "tempest-TestExecuteWorkloadBalanceStrategy-563305466-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "036a4e356fb34effb6775ffe5bd9a19f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapddaf1c36-33", "ovs_interfaceid": "ddaf1c36-334b-432c-8d88-e0d7678f6c28", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.12/site-packages/nova/network/os_vif_util.py:511
Oct 09 16:33:02 compute-0 nova_compute[117331]: 2025-10-09 16:33:02.323 2 DEBUG nova.network.os_vif_util [None req-a86bc30b-4ad1-4ca4-9471-a69a8079e1d0 1c793380a6e945d69dacfd07f1f156f8 6d67ac3076434a4582e5db1ca7d043ff - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:1b:0c:10,bridge_name='br-int',has_traffic_filtering=True,id=ddaf1c36-334b-432c-8d88-e0d7678f6c28,network=Network(54b37568-476a-40a0-b545-fe5401f85653),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapddaf1c36-33') nova_to_osvif_vif /usr/lib/python3.12/site-packages/nova/network/os_vif_util.py:548
Oct 09 16:33:02 compute-0 nova_compute[117331]: 2025-10-09 16:33:02.324 2 DEBUG os_vif [None req-a86bc30b-4ad1-4ca4-9471-a69a8079e1d0 1c793380a6e945d69dacfd07f1f156f8 6d67ac3076434a4582e5db1ca7d043ff - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:1b:0c:10,bridge_name='br-int',has_traffic_filtering=True,id=ddaf1c36-334b-432c-8d88-e0d7678f6c28,network=Network(54b37568-476a-40a0-b545-fe5401f85653),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapddaf1c36-33') unplug /usr/lib/python3.12/site-packages/os_vif/__init__.py:109
Oct 09 16:33:02 compute-0 nova_compute[117331]: 2025-10-09 16:33:02.325 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 22 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 09 16:33:02 compute-0 nova_compute[117331]: 2025-10-09 16:33:02.325 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapddaf1c36-33, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.12/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 09 16:33:02 compute-0 nova_compute[117331]: 2025-10-09 16:33:02.360 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 09 16:33:02 compute-0 nova_compute[117331]: 2025-10-09 16:33:02.361 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 09 16:33:02 compute-0 nova_compute[117331]: 2025-10-09 16:33:02.362 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 22 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 09 16:33:02 compute-0 nova_compute[117331]: 2025-10-09 16:33:02.362 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbDestroyCommand(_result=None, table=QoS, record=0c43c53c-e3ac-4495-8c7c-c708c0854d05) do_commit /usr/lib/python3.12/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 09 16:33:02 compute-0 nova_compute[117331]: 2025-10-09 16:33:02.363 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 09 16:33:02 compute-0 nova_compute[117331]: 2025-10-09 16:33:02.364 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 09 16:33:02 compute-0 nova_compute[117331]: 2025-10-09 16:33:02.366 2 INFO os_vif [None req-a86bc30b-4ad1-4ca4-9471-a69a8079e1d0 1c793380a6e945d69dacfd07f1f156f8 6d67ac3076434a4582e5db1ca7d043ff - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:1b:0c:10,bridge_name='br-int',has_traffic_filtering=True,id=ddaf1c36-334b-432c-8d88-e0d7678f6c28,network=Network(54b37568-476a-40a0-b545-fe5401f85653),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapddaf1c36-33')
Oct 09 16:33:02 compute-0 nova_compute[117331]: 2025-10-09 16:33:02.366 2 INFO nova.virt.libvirt.driver [None req-a86bc30b-4ad1-4ca4-9471-a69a8079e1d0 1c793380a6e945d69dacfd07f1f156f8 6d67ac3076434a4582e5db1ca7d043ff - - default default] [instance: 861b59b6-7073-418e-8e9a-9281ce63040c] Deleting instance files /var/lib/nova/instances/861b59b6-7073-418e-8e9a-9281ce63040c_del
Oct 09 16:33:02 compute-0 nova_compute[117331]: 2025-10-09 16:33:02.367 2 INFO nova.virt.libvirt.driver [None req-a86bc30b-4ad1-4ca4-9471-a69a8079e1d0 1c793380a6e945d69dacfd07f1f156f8 6d67ac3076434a4582e5db1ca7d043ff - - default default] [instance: 861b59b6-7073-418e-8e9a-9281ce63040c] Deletion of /var/lib/nova/instances/861b59b6-7073-418e-8e9a-9281ce63040c_del complete
Oct 09 16:33:02 compute-0 unix_chkpwd[150690]: password check failed for user (root)
Oct 09 16:33:02 compute-0 sshd-session[150548]: pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=36.224.53.32  user=root
Oct 09 16:33:02 compute-0 nova_compute[117331]: 2025-10-09 16:33:02.734 2 DEBUG nova.scheduler.client.report [None req-a98bf568-4bf7-459f-a8c7-be928ce9dc17 005258eefa5345409efe13f6a1de22ab c076c42271004e7aa86e84416e33f826 - - default default] Inventory has not changed for provider 593051b8-2000-437f-a915-2616fc8b1671 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 79, 'reserved': 1, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.12/site-packages/nova/scheduler/client/report.py:958
Oct 09 16:33:02 compute-0 nova_compute[117331]: 2025-10-09 16:33:02.880 2 INFO nova.compute.manager [None req-a86bc30b-4ad1-4ca4-9471-a69a8079e1d0 1c793380a6e945d69dacfd07f1f156f8 6d67ac3076434a4582e5db1ca7d043ff - - default default] [instance: 861b59b6-7073-418e-8e9a-9281ce63040c] Took 1.34 seconds to destroy the instance on the hypervisor.
Oct 09 16:33:02 compute-0 nova_compute[117331]: 2025-10-09 16:33:02.881 2 DEBUG oslo.service.backend._eventlet.loopingcall [None req-a86bc30b-4ad1-4ca4-9471-a69a8079e1d0 1c793380a6e945d69dacfd07f1f156f8 6d67ac3076434a4582e5db1ca7d043ff - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.12/site-packages/oslo_service/backend/_eventlet/loopingcall.py:437
Oct 09 16:33:02 compute-0 nova_compute[117331]: 2025-10-09 16:33:02.882 2 DEBUG nova.compute.manager [-] [instance: 861b59b6-7073-418e-8e9a-9281ce63040c] Deallocating network for instance _deallocate_network /usr/lib/python3.12/site-packages/nova/compute/manager.py:2324
Oct 09 16:33:02 compute-0 nova_compute[117331]: 2025-10-09 16:33:02.882 2 DEBUG nova.network.neutron [-] [instance: 861b59b6-7073-418e-8e9a-9281ce63040c] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.12/site-packages/nova/network/neutron.py:1863
Oct 09 16:33:02 compute-0 nova_compute[117331]: 2025-10-09 16:33:02.882 2 WARNING neutronclient.v2_0.client [-] The python binding code in neutronclient is deprecated in favor of OpenstackSDK, please use that as this will be removed in a future release.
Oct 09 16:33:03 compute-0 nova_compute[117331]: 2025-10-09 16:33:03.198 2 WARNING neutronclient.v2_0.client [-] The python binding code in neutronclient is deprecated in favor of OpenstackSDK, please use that as this will be removed in a future release.
Oct 09 16:33:03 compute-0 nova_compute[117331]: 2025-10-09 16:33:03.242 2 DEBUG nova.compute.resource_tracker [None req-a98bf568-4bf7-459f-a8c7-be928ce9dc17 005258eefa5345409efe13f6a1de22ab c076c42271004e7aa86e84416e33f826 - - default default] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.12/site-packages/nova/compute/resource_tracker.py:1097
Oct 09 16:33:03 compute-0 nova_compute[117331]: 2025-10-09 16:33:03.242 2 DEBUG oslo_concurrency.lockutils [None req-a98bf568-4bf7-459f-a8c7-be928ce9dc17 005258eefa5345409efe13f6a1de22ab c076c42271004e7aa86e84416e33f826 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 2.643s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:424
Oct 09 16:33:03 compute-0 nova_compute[117331]: 2025-10-09 16:33:03.258 2 INFO nova.compute.manager [None req-a98bf568-4bf7-459f-a8c7-be928ce9dc17 005258eefa5345409efe13f6a1de22ab c076c42271004e7aa86e84416e33f826 - - default default] [instance: 4c51c9df-777a-497d-bf84-a8001b45a4f0] Migrating instance to compute-1.ctlplane.example.com finished successfully.
Oct 09 16:33:03 compute-0 nova_compute[117331]: 2025-10-09 16:33:03.561 2 DEBUG nova.compute.manager [req-d1545d32-d6d3-4b65-9aa1-f57ca8bbb342 req-a3263bae-05d6-43b1-b807-c9284d75fcef ee15961ad2be4bada6c106ac4f69f422 c076c42271004e7aa86e84416e33f826 - - default default] [instance: 861b59b6-7073-418e-8e9a-9281ce63040c] Received event network-vif-deleted-ddaf1c36-334b-432c-8d88-e0d7678f6c28 external_instance_event /usr/lib/python3.12/site-packages/nova/compute/manager.py:11812
Oct 09 16:33:03 compute-0 nova_compute[117331]: 2025-10-09 16:33:03.562 2 INFO nova.compute.manager [req-d1545d32-d6d3-4b65-9aa1-f57ca8bbb342 req-a3263bae-05d6-43b1-b807-c9284d75fcef ee15961ad2be4bada6c106ac4f69f422 c076c42271004e7aa86e84416e33f826 - - default default] [instance: 861b59b6-7073-418e-8e9a-9281ce63040c] Neutron deleted interface ddaf1c36-334b-432c-8d88-e0d7678f6c28; detaching it from the instance and deleting it from the info cache
Oct 09 16:33:03 compute-0 nova_compute[117331]: 2025-10-09 16:33:03.562 2 DEBUG nova.network.neutron [req-d1545d32-d6d3-4b65-9aa1-f57ca8bbb342 req-a3263bae-05d6-43b1-b807-c9284d75fcef ee15961ad2be4bada6c106ac4f69f422 c076c42271004e7aa86e84416e33f826 - - default default] [instance: 861b59b6-7073-418e-8e9a-9281ce63040c] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.12/site-packages/nova/network/neutron.py:116
Oct 09 16:33:04 compute-0 nova_compute[117331]: 2025-10-09 16:33:04.011 2 DEBUG nova.network.neutron [-] [instance: 861b59b6-7073-418e-8e9a-9281ce63040c] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.12/site-packages/nova/network/neutron.py:116
Oct 09 16:33:04 compute-0 nova_compute[117331]: 2025-10-09 16:33:04.029 2 DEBUG nova.compute.manager [req-0e9ce930-3e60-461a-8ffd-14c0d89211dd req-94a57894-4613-40cb-b85f-bdfd0c945f02 ee15961ad2be4bada6c106ac4f69f422 c076c42271004e7aa86e84416e33f826 - - default default] [instance: 861b59b6-7073-418e-8e9a-9281ce63040c] Received event network-vif-unplugged-ddaf1c36-334b-432c-8d88-e0d7678f6c28 external_instance_event /usr/lib/python3.12/site-packages/nova/compute/manager.py:11812
Oct 09 16:33:04 compute-0 nova_compute[117331]: 2025-10-09 16:33:04.029 2 DEBUG oslo_concurrency.lockutils [req-0e9ce930-3e60-461a-8ffd-14c0d89211dd req-94a57894-4613-40cb-b85f-bdfd0c945f02 ee15961ad2be4bada6c106ac4f69f422 c076c42271004e7aa86e84416e33f826 - - default default] Acquiring lock "861b59b6-7073-418e-8e9a-9281ce63040c-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:405
Oct 09 16:33:04 compute-0 nova_compute[117331]: 2025-10-09 16:33:04.030 2 DEBUG oslo_concurrency.lockutils [req-0e9ce930-3e60-461a-8ffd-14c0d89211dd req-94a57894-4613-40cb-b85f-bdfd0c945f02 ee15961ad2be4bada6c106ac4f69f422 c076c42271004e7aa86e84416e33f826 - - default default] Lock "861b59b6-7073-418e-8e9a-9281ce63040c-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:410
Oct 09 16:33:04 compute-0 nova_compute[117331]: 2025-10-09 16:33:04.030 2 DEBUG oslo_concurrency.lockutils [req-0e9ce930-3e60-461a-8ffd-14c0d89211dd req-94a57894-4613-40cb-b85f-bdfd0c945f02 ee15961ad2be4bada6c106ac4f69f422 c076c42271004e7aa86e84416e33f826 - - default default] Lock "861b59b6-7073-418e-8e9a-9281ce63040c-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:424
Oct 09 16:33:04 compute-0 nova_compute[117331]: 2025-10-09 16:33:04.030 2 DEBUG nova.compute.manager [req-0e9ce930-3e60-461a-8ffd-14c0d89211dd req-94a57894-4613-40cb-b85f-bdfd0c945f02 ee15961ad2be4bada6c106ac4f69f422 c076c42271004e7aa86e84416e33f826 - - default default] [instance: 861b59b6-7073-418e-8e9a-9281ce63040c] No waiting events found dispatching network-vif-unplugged-ddaf1c36-334b-432c-8d88-e0d7678f6c28 pop_instance_event /usr/lib/python3.12/site-packages/nova/compute/manager.py:344
Oct 09 16:33:04 compute-0 nova_compute[117331]: 2025-10-09 16:33:04.030 2 DEBUG nova.compute.manager [req-0e9ce930-3e60-461a-8ffd-14c0d89211dd req-94a57894-4613-40cb-b85f-bdfd0c945f02 ee15961ad2be4bada6c106ac4f69f422 c076c42271004e7aa86e84416e33f826 - - default default] [instance: 861b59b6-7073-418e-8e9a-9281ce63040c] Received event network-vif-unplugged-ddaf1c36-334b-432c-8d88-e0d7678f6c28 for instance with task_state deleting. _process_instance_event /usr/lib/python3.12/site-packages/nova/compute/manager.py:11590
Oct 09 16:33:04 compute-0 nova_compute[117331]: 2025-10-09 16:33:04.071 2 DEBUG nova.compute.manager [req-d1545d32-d6d3-4b65-9aa1-f57ca8bbb342 req-a3263bae-05d6-43b1-b807-c9284d75fcef ee15961ad2be4bada6c106ac4f69f422 c076c42271004e7aa86e84416e33f826 - - default default] [instance: 861b59b6-7073-418e-8e9a-9281ce63040c] Detach interface failed, port_id=ddaf1c36-334b-432c-8d88-e0d7678f6c28, reason: Instance 861b59b6-7073-418e-8e9a-9281ce63040c could not be found. _process_instance_vif_deleted_event /usr/lib/python3.12/site-packages/nova/compute/manager.py:11646
Oct 09 16:33:04 compute-0 nova_compute[117331]: 2025-10-09 16:33:04.318 2 INFO nova.scheduler.client.report [None req-a98bf568-4bf7-459f-a8c7-be928ce9dc17 005258eefa5345409efe13f6a1de22ab c076c42271004e7aa86e84416e33f826 - - default default] Deleted allocation for migration 5e29434f-93a8-4705-b4f3-b51901ab57ee
Oct 09 16:33:04 compute-0 nova_compute[117331]: 2025-10-09 16:33:04.319 2 DEBUG nova.virt.libvirt.driver [None req-a98bf568-4bf7-459f-a8c7-be928ce9dc17 005258eefa5345409efe13f6a1de22ab c076c42271004e7aa86e84416e33f826 - - default default] [instance: 4c51c9df-777a-497d-bf84-a8001b45a4f0] Live migration monitoring is all done _live_migration /usr/lib/python3.12/site-packages/nova/virt/libvirt/driver.py:11566
Oct 09 16:33:04 compute-0 nova_compute[117331]: 2025-10-09 16:33:04.516 2 INFO nova.compute.manager [-] [instance: 861b59b6-7073-418e-8e9a-9281ce63040c] Took 1.63 seconds to deallocate network for instance.
Oct 09 16:33:04 compute-0 sshd-session[150548]: Failed password for root from 36.224.53.32 port 42116 ssh2
Oct 09 16:33:05 compute-0 nova_compute[117331]: 2025-10-09 16:33:05.038 2 DEBUG oslo_concurrency.lockutils [None req-a86bc30b-4ad1-4ca4-9471-a69a8079e1d0 1c793380a6e945d69dacfd07f1f156f8 6d67ac3076434a4582e5db1ca7d043ff - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:405
Oct 09 16:33:05 compute-0 nova_compute[117331]: 2025-10-09 16:33:05.039 2 DEBUG oslo_concurrency.lockutils [None req-a86bc30b-4ad1-4ca4-9471-a69a8079e1d0 1c793380a6e945d69dacfd07f1f156f8 6d67ac3076434a4582e5db1ca7d043ff - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:410
Oct 09 16:33:05 compute-0 nova_compute[117331]: 2025-10-09 16:33:05.082 2 DEBUG nova.compute.provider_tree [None req-a86bc30b-4ad1-4ca4-9471-a69a8079e1d0 1c793380a6e945d69dacfd07f1f156f8 6d67ac3076434a4582e5db1ca7d043ff - - default default] Inventory has not changed in ProviderTree for provider: 593051b8-2000-437f-a915-2616fc8b1671 update_inventory /usr/lib/python3.12/site-packages/nova/compute/provider_tree.py:180
Oct 09 16:33:05 compute-0 nova_compute[117331]: 2025-10-09 16:33:05.605 2 DEBUG nova.scheduler.client.report [None req-a86bc30b-4ad1-4ca4-9471-a69a8079e1d0 1c793380a6e945d69dacfd07f1f156f8 6d67ac3076434a4582e5db1ca7d043ff - - default default] Inventory has not changed for provider 593051b8-2000-437f-a915-2616fc8b1671 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 79, 'reserved': 1, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.12/site-packages/nova/scheduler/client/report.py:958
Oct 09 16:33:06 compute-0 nova_compute[117331]: 2025-10-09 16:33:06.113 2 DEBUG oslo_concurrency.lockutils [None req-a86bc30b-4ad1-4ca4-9471-a69a8079e1d0 1c793380a6e945d69dacfd07f1f156f8 6d67ac3076434a4582e5db1ca7d043ff - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 1.074s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:424
Oct 09 16:33:06 compute-0 nova_compute[117331]: 2025-10-09 16:33:06.141 2 INFO nova.scheduler.client.report [None req-a86bc30b-4ad1-4ca4-9471-a69a8079e1d0 1c793380a6e945d69dacfd07f1f156f8 6d67ac3076434a4582e5db1ca7d043ff - - default default] Deleted allocations for instance 861b59b6-7073-418e-8e9a-9281ce63040c
Oct 09 16:33:06 compute-0 sshd-session[150548]: Connection closed by authenticating user root 36.224.53.32 port 42116 [preauth]
Oct 09 16:33:07 compute-0 ovn_metadata_agent[28608]: 2025-10-09 16:33:07.032 28613 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=9954897f-aa83-45dd-8e84-289816676c2a, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '23'}),), if_exists=True) do_commit /usr/lib/python3.12/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 09 16:33:07 compute-0 nova_compute[117331]: 2025-10-09 16:33:07.151 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 09 16:33:07 compute-0 nova_compute[117331]: 2025-10-09 16:33:07.165 2 DEBUG oslo_concurrency.lockutils [None req-a86bc30b-4ad1-4ca4-9471-a69a8079e1d0 1c793380a6e945d69dacfd07f1f156f8 6d67ac3076434a4582e5db1ca7d043ff - - default default] Lock "861b59b6-7073-418e-8e9a-9281ce63040c" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 6.161s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:424
Oct 09 16:33:07 compute-0 nova_compute[117331]: 2025-10-09 16:33:07.363 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 09 16:33:08 compute-0 sshd-session[150691]: Invalid user mysql from 36.224.53.32 port 49982
Oct 09 16:33:08 compute-0 podman[150693]: 2025-10-09 16:33:08.751229847 +0000 UTC m=+0.069370884 container health_status 64fa251112d01abb34ec1bb8e22019815ddcaaaa391ce5ec91517147ac5a9994 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, vcs-type=git, build-date=2025-08-20T13:12:41, com.redhat.component=ubi9-minimal-container, distribution-scope=public, version=9.6, config_id=edpm, url=https://catalog.redhat.com/en/search?searchType=containers, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.buildah.version=1.33.7, maintainer=Red Hat, Inc., managed_by=edpm_ansible, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., architecture=x86_64, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., release=1755695350, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, container_name=openstack_network_exporter, io.openshift.tags=minimal rhel9, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.expose-services=, name=ubi9-minimal)
Oct 09 16:33:08 compute-0 podman[150694]: 2025-10-09 16:33:08.782284383 +0000 UTC m=+0.106628627 container health_status d615e4f24ff62fbb14be4a63e6da568ec5b22309773c1c66103b6b4de64a8a2f (image=38.102.83.66:5001/podified-master-centos10/openstack-ovn-controller:watcher_latest, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_id=ovn_controller, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': '38.102.83.66:5001/podified-master-centos10/openstack-ovn-controller:watcher_latest', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.build-date=20251007, org.label-schema.license=GPLv2, io.buildah.version=1.41.4, tcib_build_tag=watcher_latest, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0)
Oct 09 16:33:09 compute-0 sshd-session[150691]: pam_unix(sshd:auth): check pass; user unknown
Oct 09 16:33:09 compute-0 sshd-session[150691]: pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=36.224.53.32
Oct 09 16:33:12 compute-0 sshd-session[150691]: Failed password for invalid user mysql from 36.224.53.32 port 49982 ssh2
Oct 09 16:33:12 compute-0 nova_compute[117331]: 2025-10-09 16:33:12.152 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 09 16:33:12 compute-0 nova_compute[117331]: 2025-10-09 16:33:12.364 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 09 16:33:13 compute-0 sshd-session[150691]: Connection closed by invalid user mysql 36.224.53.32 port 49982 [preauth]
Oct 09 16:33:13 compute-0 nova_compute[117331]: 2025-10-09 16:33:13.986 2 DEBUG oslo_service.periodic_task [None req-92a13bed-c081-4c7b-87f3-bafebefb79f2 - - - - - -] Running periodic task ComputeManager._cleanup_running_deleted_instances run_periodic_tasks /usr/lib/python3.12/site-packages/oslo_service/periodic_task.py:210
Oct 09 16:33:15 compute-0 sshd-session[150741]: Invalid user git from 36.224.53.32 port 58018
Oct 09 16:33:16 compute-0 sshd-session[150741]: pam_unix(sshd:auth): check pass; user unknown
Oct 09 16:33:16 compute-0 sshd-session[150741]: pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=36.224.53.32
Oct 09 16:33:17 compute-0 nova_compute[117331]: 2025-10-09 16:33:17.154 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 09 16:33:17 compute-0 nova_compute[117331]: 2025-10-09 16:33:17.366 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 09 16:33:18 compute-0 sshd-session[150741]: Failed password for invalid user git from 36.224.53.32 port 58018 ssh2
Oct 09 16:33:18 compute-0 sshd-session[150741]: Connection closed by invalid user git 36.224.53.32 port 58018 [preauth]
Oct 09 16:33:19 compute-0 nova_compute[117331]: 2025-10-09 16:33:19.174 2 DEBUG oslo_concurrency.lockutils [None req-81abd526-1b0a-42e6-a509-19b859194087 1c793380a6e945d69dacfd07f1f156f8 6d67ac3076434a4582e5db1ca7d043ff - - default default] Acquiring lock "e08b0fc8-1e1e-4d97-b613-761b8f6f9674" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:405
Oct 09 16:33:19 compute-0 nova_compute[117331]: 2025-10-09 16:33:19.174 2 DEBUG oslo_concurrency.lockutils [None req-81abd526-1b0a-42e6-a509-19b859194087 1c793380a6e945d69dacfd07f1f156f8 6d67ac3076434a4582e5db1ca7d043ff - - default default] Lock "e08b0fc8-1e1e-4d97-b613-761b8f6f9674" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.000s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:410
Oct 09 16:33:19 compute-0 nova_compute[117331]: 2025-10-09 16:33:19.684 2 DEBUG nova.compute.manager [None req-81abd526-1b0a-42e6-a509-19b859194087 1c793380a6e945d69dacfd07f1f156f8 6d67ac3076434a4582e5db1ca7d043ff - - default default] [instance: e08b0fc8-1e1e-4d97-b613-761b8f6f9674] Starting instance... _do_build_and_run_instance /usr/lib/python3.12/site-packages/nova/compute/manager.py:2472
Oct 09 16:33:20 compute-0 nova_compute[117331]: 2025-10-09 16:33:20.227 2 DEBUG oslo_concurrency.lockutils [None req-81abd526-1b0a-42e6-a509-19b859194087 1c793380a6e945d69dacfd07f1f156f8 6d67ac3076434a4582e5db1ca7d043ff - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:405
Oct 09 16:33:20 compute-0 nova_compute[117331]: 2025-10-09 16:33:20.227 2 DEBUG oslo_concurrency.lockutils [None req-81abd526-1b0a-42e6-a509-19b859194087 1c793380a6e945d69dacfd07f1f156f8 6d67ac3076434a4582e5db1ca7d043ff - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:410
Oct 09 16:33:20 compute-0 nova_compute[117331]: 2025-10-09 16:33:20.232 2 DEBUG nova.virt.hardware [None req-81abd526-1b0a-42e6-a509-19b859194087 1c793380a6e945d69dacfd07f1f156f8 6d67ac3076434a4582e5db1ca7d043ff - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.12/site-packages/nova/virt/hardware.py:2528
Oct 09 16:33:20 compute-0 nova_compute[117331]: 2025-10-09 16:33:20.233 2 INFO nova.compute.claims [None req-81abd526-1b0a-42e6-a509-19b859194087 1c793380a6e945d69dacfd07f1f156f8 6d67ac3076434a4582e5db1ca7d043ff - - default default] [instance: e08b0fc8-1e1e-4d97-b613-761b8f6f9674] Claim successful on node compute-0.ctlplane.example.com
Oct 09 16:33:20 compute-0 sshd-session[150743]: Invalid user cs2server from 36.224.53.32 port 37516
Oct 09 16:33:21 compute-0 podman[150745]: 2025-10-09 16:33:21.061839829 +0000 UTC m=+0.066795182 container health_status 270830b5167cafbab735d8118980f26987e673cd0fa464dd2202394830bcca66 (image=38.102.83.66:5001/podified-master-centos10/openstack-multipathd:watcher_latest, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': '38.102.83.66:5001/podified-master-centos10/openstack-multipathd:watcher_latest', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.build-date=20251007, org.label-schema.schema-version=1.0, container_name=multipathd, io.buildah.version=1.41.4, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_build_tag=watcher_latest, config_id=multipathd, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 10 Base Image, tcib_managed=true)
Oct 09 16:33:21 compute-0 nova_compute[117331]: 2025-10-09 16:33:21.277 2 DEBUG nova.compute.provider_tree [None req-81abd526-1b0a-42e6-a509-19b859194087 1c793380a6e945d69dacfd07f1f156f8 6d67ac3076434a4582e5db1ca7d043ff - - default default] Inventory has not changed in ProviderTree for provider: 593051b8-2000-437f-a915-2616fc8b1671 update_inventory /usr/lib/python3.12/site-packages/nova/compute/provider_tree.py:180
Oct 09 16:33:21 compute-0 nova_compute[117331]: 2025-10-09 16:33:21.785 2 DEBUG nova.scheduler.client.report [None req-81abd526-1b0a-42e6-a509-19b859194087 1c793380a6e945d69dacfd07f1f156f8 6d67ac3076434a4582e5db1ca7d043ff - - default default] Inventory has not changed for provider 593051b8-2000-437f-a915-2616fc8b1671 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 79, 'reserved': 1, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.12/site-packages/nova/scheduler/client/report.py:958
Oct 09 16:33:21 compute-0 sshd-session[150743]: pam_unix(sshd:auth): check pass; user unknown
Oct 09 16:33:21 compute-0 sshd-session[150743]: pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=36.224.53.32
Oct 09 16:33:22 compute-0 nova_compute[117331]: 2025-10-09 16:33:22.155 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 09 16:33:22 compute-0 nova_compute[117331]: 2025-10-09 16:33:22.296 2 DEBUG oslo_concurrency.lockutils [None req-81abd526-1b0a-42e6-a509-19b859194087 1c793380a6e945d69dacfd07f1f156f8 6d67ac3076434a4582e5db1ca7d043ff - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 2.069s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:424
Oct 09 16:33:22 compute-0 nova_compute[117331]: 2025-10-09 16:33:22.297 2 DEBUG nova.compute.manager [None req-81abd526-1b0a-42e6-a509-19b859194087 1c793380a6e945d69dacfd07f1f156f8 6d67ac3076434a4582e5db1ca7d043ff - - default default] [instance: e08b0fc8-1e1e-4d97-b613-761b8f6f9674] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.12/site-packages/nova/compute/manager.py:2869
Oct 09 16:33:22 compute-0 nova_compute[117331]: 2025-10-09 16:33:22.368 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 09 16:33:22 compute-0 nova_compute[117331]: 2025-10-09 16:33:22.808 2 DEBUG nova.compute.manager [None req-81abd526-1b0a-42e6-a509-19b859194087 1c793380a6e945d69dacfd07f1f156f8 6d67ac3076434a4582e5db1ca7d043ff - - default default] [instance: e08b0fc8-1e1e-4d97-b613-761b8f6f9674] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.12/site-packages/nova/compute/manager.py:2016
Oct 09 16:33:22 compute-0 nova_compute[117331]: 2025-10-09 16:33:22.809 2 DEBUG nova.network.neutron [None req-81abd526-1b0a-42e6-a509-19b859194087 1c793380a6e945d69dacfd07f1f156f8 6d67ac3076434a4582e5db1ca7d043ff - - default default] [instance: e08b0fc8-1e1e-4d97-b613-761b8f6f9674] allocate_for_instance() allocate_for_instance /usr/lib/python3.12/site-packages/nova/network/neutron.py:1208
Oct 09 16:33:22 compute-0 nova_compute[117331]: 2025-10-09 16:33:22.809 2 WARNING neutronclient.v2_0.client [None req-81abd526-1b0a-42e6-a509-19b859194087 1c793380a6e945d69dacfd07f1f156f8 6d67ac3076434a4582e5db1ca7d043ff - - default default] The python binding code in neutronclient is deprecated in favor of OpenstackSDK, please use that as this will be removed in a future release.
Oct 09 16:33:22 compute-0 nova_compute[117331]: 2025-10-09 16:33:22.809 2 WARNING neutronclient.v2_0.client [None req-81abd526-1b0a-42e6-a509-19b859194087 1c793380a6e945d69dacfd07f1f156f8 6d67ac3076434a4582e5db1ca7d043ff - - default default] The python binding code in neutronclient is deprecated in favor of OpenstackSDK, please use that as this will be removed in a future release.
Oct 09 16:33:23 compute-0 nova_compute[117331]: 2025-10-09 16:33:23.305 2 DEBUG nova.network.neutron [None req-81abd526-1b0a-42e6-a509-19b859194087 1c793380a6e945d69dacfd07f1f156f8 6d67ac3076434a4582e5db1ca7d043ff - - default default] [instance: e08b0fc8-1e1e-4d97-b613-761b8f6f9674] Successfully created port: 8bc917d5-4807-4e8c-8941-29822aa14c94 _create_port_minimal /usr/lib/python3.12/site-packages/nova/network/neutron.py:550
Oct 09 16:33:23 compute-0 nova_compute[117331]: 2025-10-09 16:33:23.316 2 INFO nova.virt.libvirt.driver [None req-81abd526-1b0a-42e6-a509-19b859194087 1c793380a6e945d69dacfd07f1f156f8 6d67ac3076434a4582e5db1ca7d043ff - - default default] [instance: e08b0fc8-1e1e-4d97-b613-761b8f6f9674] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Oct 09 16:33:23 compute-0 nova_compute[117331]: 2025-10-09 16:33:23.823 2 DEBUG nova.compute.manager [None req-81abd526-1b0a-42e6-a509-19b859194087 1c793380a6e945d69dacfd07f1f156f8 6d67ac3076434a4582e5db1ca7d043ff - - default default] [instance: e08b0fc8-1e1e-4d97-b613-761b8f6f9674] Start building block device mappings for instance. _build_resources /usr/lib/python3.12/site-packages/nova/compute/manager.py:2904
Oct 09 16:33:24 compute-0 nova_compute[117331]: 2025-10-09 16:33:24.437 2 DEBUG nova.network.neutron [None req-81abd526-1b0a-42e6-a509-19b859194087 1c793380a6e945d69dacfd07f1f156f8 6d67ac3076434a4582e5db1ca7d043ff - - default default] [instance: e08b0fc8-1e1e-4d97-b613-761b8f6f9674] Successfully updated port: 8bc917d5-4807-4e8c-8941-29822aa14c94 _update_port /usr/lib/python3.12/site-packages/nova/network/neutron.py:588
Oct 09 16:33:24 compute-0 sshd-session[150743]: Failed password for invalid user cs2server from 36.224.53.32 port 37516 ssh2
Oct 09 16:33:24 compute-0 nova_compute[117331]: 2025-10-09 16:33:24.495 2 DEBUG nova.compute.manager [req-fcf327db-1296-4d6e-8214-ab2830aaea29 req-ed7cdb22-d766-4617-b86f-847237f172a4 ee15961ad2be4bada6c106ac4f69f422 c076c42271004e7aa86e84416e33f826 - - default default] [instance: e08b0fc8-1e1e-4d97-b613-761b8f6f9674] Received event network-changed-8bc917d5-4807-4e8c-8941-29822aa14c94 external_instance_event /usr/lib/python3.12/site-packages/nova/compute/manager.py:11812
Oct 09 16:33:24 compute-0 nova_compute[117331]: 2025-10-09 16:33:24.495 2 DEBUG nova.compute.manager [req-fcf327db-1296-4d6e-8214-ab2830aaea29 req-ed7cdb22-d766-4617-b86f-847237f172a4 ee15961ad2be4bada6c106ac4f69f422 c076c42271004e7aa86e84416e33f826 - - default default] [instance: e08b0fc8-1e1e-4d97-b613-761b8f6f9674] Refreshing instance network info cache due to event network-changed-8bc917d5-4807-4e8c-8941-29822aa14c94. external_instance_event /usr/lib/python3.12/site-packages/nova/compute/manager.py:11817
Oct 09 16:33:24 compute-0 nova_compute[117331]: 2025-10-09 16:33:24.495 2 DEBUG oslo_concurrency.lockutils [req-fcf327db-1296-4d6e-8214-ab2830aaea29 req-ed7cdb22-d766-4617-b86f-847237f172a4 ee15961ad2be4bada6c106ac4f69f422 c076c42271004e7aa86e84416e33f826 - - default default] Acquiring lock "refresh_cache-e08b0fc8-1e1e-4d97-b613-761b8f6f9674" lock /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:313
Oct 09 16:33:24 compute-0 nova_compute[117331]: 2025-10-09 16:33:24.495 2 DEBUG oslo_concurrency.lockutils [req-fcf327db-1296-4d6e-8214-ab2830aaea29 req-ed7cdb22-d766-4617-b86f-847237f172a4 ee15961ad2be4bada6c106ac4f69f422 c076c42271004e7aa86e84416e33f826 - - default default] Acquired lock "refresh_cache-e08b0fc8-1e1e-4d97-b613-761b8f6f9674" lock /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:316
Oct 09 16:33:24 compute-0 nova_compute[117331]: 2025-10-09 16:33:24.496 2 DEBUG nova.network.neutron [req-fcf327db-1296-4d6e-8214-ab2830aaea29 req-ed7cdb22-d766-4617-b86f-847237f172a4 ee15961ad2be4bada6c106ac4f69f422 c076c42271004e7aa86e84416e33f826 - - default default] [instance: e08b0fc8-1e1e-4d97-b613-761b8f6f9674] Refreshing network info cache for port 8bc917d5-4807-4e8c-8941-29822aa14c94 _get_instance_nw_info /usr/lib/python3.12/site-packages/nova/network/neutron.py:2067
Oct 09 16:33:24 compute-0 nova_compute[117331]: 2025-10-09 16:33:24.845 2 DEBUG nova.compute.manager [None req-81abd526-1b0a-42e6-a509-19b859194087 1c793380a6e945d69dacfd07f1f156f8 6d67ac3076434a4582e5db1ca7d043ff - - default default] [instance: e08b0fc8-1e1e-4d97-b613-761b8f6f9674] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.12/site-packages/nova/compute/manager.py:2678
Oct 09 16:33:24 compute-0 nova_compute[117331]: 2025-10-09 16:33:24.848 2 DEBUG nova.virt.libvirt.driver [None req-81abd526-1b0a-42e6-a509-19b859194087 1c793380a6e945d69dacfd07f1f156f8 6d67ac3076434a4582e5db1ca7d043ff - - default default] [instance: e08b0fc8-1e1e-4d97-b613-761b8f6f9674] Creating instance directory _create_image /usr/lib/python3.12/site-packages/nova/virt/libvirt/driver.py:5138
Oct 09 16:33:24 compute-0 nova_compute[117331]: 2025-10-09 16:33:24.848 2 INFO nova.virt.libvirt.driver [None req-81abd526-1b0a-42e6-a509-19b859194087 1c793380a6e945d69dacfd07f1f156f8 6d67ac3076434a4582e5db1ca7d043ff - - default default] [instance: e08b0fc8-1e1e-4d97-b613-761b8f6f9674] Creating image(s)
Oct 09 16:33:24 compute-0 nova_compute[117331]: 2025-10-09 16:33:24.849 2 DEBUG oslo_concurrency.lockutils [None req-81abd526-1b0a-42e6-a509-19b859194087 1c793380a6e945d69dacfd07f1f156f8 6d67ac3076434a4582e5db1ca7d043ff - - default default] Acquiring lock "/var/lib/nova/instances/e08b0fc8-1e1e-4d97-b613-761b8f6f9674/disk.info" by "nova.virt.libvirt.imagebackend.Image.resolve_driver_format.<locals>.write_to_disk_info_file" inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:405
Oct 09 16:33:24 compute-0 nova_compute[117331]: 2025-10-09 16:33:24.850 2 DEBUG oslo_concurrency.lockutils [None req-81abd526-1b0a-42e6-a509-19b859194087 1c793380a6e945d69dacfd07f1f156f8 6d67ac3076434a4582e5db1ca7d043ff - - default default] Lock "/var/lib/nova/instances/e08b0fc8-1e1e-4d97-b613-761b8f6f9674/disk.info" acquired by "nova.virt.libvirt.imagebackend.Image.resolve_driver_format.<locals>.write_to_disk_info_file" :: waited 0.000s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:410
Oct 09 16:33:24 compute-0 nova_compute[117331]: 2025-10-09 16:33:24.851 2 DEBUG oslo_concurrency.lockutils [None req-81abd526-1b0a-42e6-a509-19b859194087 1c793380a6e945d69dacfd07f1f156f8 6d67ac3076434a4582e5db1ca7d043ff - - default default] Lock "/var/lib/nova/instances/e08b0fc8-1e1e-4d97-b613-761b8f6f9674/disk.info" "released" by "nova.virt.libvirt.imagebackend.Image.resolve_driver_format.<locals>.write_to_disk_info_file" :: held 0.001s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:424
Oct 09 16:33:24 compute-0 nova_compute[117331]: 2025-10-09 16:33:24.852 2 DEBUG oslo_utils.imageutils.format_inspector [None req-81abd526-1b0a-42e6-a509-19b859194087 1c793380a6e945d69dacfd07f1f156f8 6d67ac3076434a4582e5db1ca7d043ff - - default default] Format inspector for vmdk does not match, excluding from consideration (Signature KDMV not found: b'\xebH\x90\x00') _process_chunk /usr/lib/python3.12/site-packages/oslo_utils/imageutils/format_inspector.py:1365
Oct 09 16:33:24 compute-0 nova_compute[117331]: 2025-10-09 16:33:24.860 2 DEBUG oslo_utils.imageutils.format_inspector [None req-81abd526-1b0a-42e6-a509-19b859194087 1c793380a6e945d69dacfd07f1f156f8 6d67ac3076434a4582e5db1ca7d043ff - - default default] Format inspector for vhdx does not match, excluding from consideration (Region signature not found at 30000) _process_chunk /usr/lib/python3.12/site-packages/oslo_utils/imageutils/format_inspector.py:1365
Oct 09 16:33:24 compute-0 nova_compute[117331]: 2025-10-09 16:33:24.863 2 DEBUG oslo_concurrency.processutils [None req-81abd526-1b0a-42e6-a509-19b859194087 1c793380a6e945d69dacfd07f1f156f8 6d67ac3076434a4582e5db1ca7d043ff - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/cea3dacfdea0f3734ae526b812744e847bc2d356 --force-share --output=json execute /usr/lib/python3.12/site-packages/oslo_concurrency/processutils.py:349
Oct 09 16:33:24 compute-0 podman[150765]: 2025-10-09 16:33:24.870555249 +0000 UTC m=+0.089079700 container health_status 86aa93e887899e54d383b0fc17fb3c5637680d1d8e146a130a557c837f0260fe (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>)
Oct 09 16:33:24 compute-0 nova_compute[117331]: 2025-10-09 16:33:24.921 2 DEBUG oslo_concurrency.processutils [None req-81abd526-1b0a-42e6-a509-19b859194087 1c793380a6e945d69dacfd07f1f156f8 6d67ac3076434a4582e5db1ca7d043ff - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/cea3dacfdea0f3734ae526b812744e847bc2d356 --force-share --output=json" returned: 0 in 0.058s execute /usr/lib/python3.12/site-packages/oslo_concurrency/processutils.py:372
Oct 09 16:33:24 compute-0 nova_compute[117331]: 2025-10-09 16:33:24.922 2 DEBUG oslo_concurrency.lockutils [None req-81abd526-1b0a-42e6-a509-19b859194087 1c793380a6e945d69dacfd07f1f156f8 6d67ac3076434a4582e5db1ca7d043ff - - default default] Acquiring lock "cea3dacfdea0f3734ae526b812744e847bc2d356" by "nova.virt.libvirt.imagebackend.Qcow2.create_image.<locals>.create_qcow2_image" inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:405
Oct 09 16:33:24 compute-0 nova_compute[117331]: 2025-10-09 16:33:24.923 2 DEBUG oslo_concurrency.lockutils [None req-81abd526-1b0a-42e6-a509-19b859194087 1c793380a6e945d69dacfd07f1f156f8 6d67ac3076434a4582e5db1ca7d043ff - - default default] Lock "cea3dacfdea0f3734ae526b812744e847bc2d356" acquired by "nova.virt.libvirt.imagebackend.Qcow2.create_image.<locals>.create_qcow2_image" :: waited 0.001s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:410
Oct 09 16:33:24 compute-0 nova_compute[117331]: 2025-10-09 16:33:24.923 2 DEBUG oslo_utils.imageutils.format_inspector [None req-81abd526-1b0a-42e6-a509-19b859194087 1c793380a6e945d69dacfd07f1f156f8 6d67ac3076434a4582e5db1ca7d043ff - - default default] Format inspector for vmdk does not match, excluding from consideration (Signature KDMV not found: b'\xebH\x90\x00') _process_chunk /usr/lib/python3.12/site-packages/oslo_utils/imageutils/format_inspector.py:1365
Oct 09 16:33:24 compute-0 nova_compute[117331]: 2025-10-09 16:33:24.927 2 DEBUG oslo_utils.imageutils.format_inspector [None req-81abd526-1b0a-42e6-a509-19b859194087 1c793380a6e945d69dacfd07f1f156f8 6d67ac3076434a4582e5db1ca7d043ff - - default default] Format inspector for vhdx does not match, excluding from consideration (Region signature not found at 30000) _process_chunk /usr/lib/python3.12/site-packages/oslo_utils/imageutils/format_inspector.py:1365
Oct 09 16:33:24 compute-0 nova_compute[117331]: 2025-10-09 16:33:24.928 2 DEBUG oslo_concurrency.processutils [None req-81abd526-1b0a-42e6-a509-19b859194087 1c793380a6e945d69dacfd07f1f156f8 6d67ac3076434a4582e5db1ca7d043ff - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/cea3dacfdea0f3734ae526b812744e847bc2d356 --force-share --output=json execute /usr/lib/python3.12/site-packages/oslo_concurrency/processutils.py:349
Oct 09 16:33:24 compute-0 nova_compute[117331]: 2025-10-09 16:33:24.943 2 DEBUG oslo_concurrency.lockutils [None req-81abd526-1b0a-42e6-a509-19b859194087 1c793380a6e945d69dacfd07f1f156f8 6d67ac3076434a4582e5db1ca7d043ff - - default default] Acquiring lock "refresh_cache-e08b0fc8-1e1e-4d97-b613-761b8f6f9674" lock /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:313
Oct 09 16:33:24 compute-0 nova_compute[117331]: 2025-10-09 16:33:24.983 2 DEBUG oslo_concurrency.processutils [None req-81abd526-1b0a-42e6-a509-19b859194087 1c793380a6e945d69dacfd07f1f156f8 6d67ac3076434a4582e5db1ca7d043ff - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/cea3dacfdea0f3734ae526b812744e847bc2d356 --force-share --output=json" returned: 0 in 0.055s execute /usr/lib/python3.12/site-packages/oslo_concurrency/processutils.py:372
Oct 09 16:33:24 compute-0 nova_compute[117331]: 2025-10-09 16:33:24.983 2 DEBUG oslo_concurrency.processutils [None req-81abd526-1b0a-42e6-a509-19b859194087 1c793380a6e945d69dacfd07f1f156f8 6d67ac3076434a4582e5db1ca7d043ff - - default default] Running cmd (subprocess): env LC_ALL=C LANG=C qemu-img create -f qcow2 -o backing_file=/var/lib/nova/instances/_base/cea3dacfdea0f3734ae526b812744e847bc2d356,backing_fmt=raw /var/lib/nova/instances/e08b0fc8-1e1e-4d97-b613-761b8f6f9674/disk 1073741824 execute /usr/lib/python3.12/site-packages/oslo_concurrency/processutils.py:349
Oct 09 16:33:25 compute-0 nova_compute[117331]: 2025-10-09 16:33:25.000 2 WARNING neutronclient.v2_0.client [req-fcf327db-1296-4d6e-8214-ab2830aaea29 req-ed7cdb22-d766-4617-b86f-847237f172a4 ee15961ad2be4bada6c106ac4f69f422 c076c42271004e7aa86e84416e33f826 - - default default] The python binding code in neutronclient is deprecated in favor of OpenstackSDK, please use that as this will be removed in a future release.
Oct 09 16:33:25 compute-0 nova_compute[117331]: 2025-10-09 16:33:25.018 2 DEBUG oslo_concurrency.processutils [None req-81abd526-1b0a-42e6-a509-19b859194087 1c793380a6e945d69dacfd07f1f156f8 6d67ac3076434a4582e5db1ca7d043ff - - default default] CMD "env LC_ALL=C LANG=C qemu-img create -f qcow2 -o backing_file=/var/lib/nova/instances/_base/cea3dacfdea0f3734ae526b812744e847bc2d356,backing_fmt=raw /var/lib/nova/instances/e08b0fc8-1e1e-4d97-b613-761b8f6f9674/disk 1073741824" returned: 0 in 0.034s execute /usr/lib/python3.12/site-packages/oslo_concurrency/processutils.py:372
Oct 09 16:33:25 compute-0 nova_compute[117331]: 2025-10-09 16:33:25.018 2 DEBUG oslo_concurrency.lockutils [None req-81abd526-1b0a-42e6-a509-19b859194087 1c793380a6e945d69dacfd07f1f156f8 6d67ac3076434a4582e5db1ca7d043ff - - default default] Lock "cea3dacfdea0f3734ae526b812744e847bc2d356" "released" by "nova.virt.libvirt.imagebackend.Qcow2.create_image.<locals>.create_qcow2_image" :: held 0.096s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:424
Oct 09 16:33:25 compute-0 nova_compute[117331]: 2025-10-09 16:33:25.019 2 DEBUG oslo_concurrency.processutils [None req-81abd526-1b0a-42e6-a509-19b859194087 1c793380a6e945d69dacfd07f1f156f8 6d67ac3076434a4582e5db1ca7d043ff - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/cea3dacfdea0f3734ae526b812744e847bc2d356 --force-share --output=json execute /usr/lib/python3.12/site-packages/oslo_concurrency/processutils.py:349
Oct 09 16:33:25 compute-0 nova_compute[117331]: 2025-10-09 16:33:25.068 2 DEBUG oslo_concurrency.processutils [None req-81abd526-1b0a-42e6-a509-19b859194087 1c793380a6e945d69dacfd07f1f156f8 6d67ac3076434a4582e5db1ca7d043ff - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/cea3dacfdea0f3734ae526b812744e847bc2d356 --force-share --output=json" returned: 0 in 0.049s execute /usr/lib/python3.12/site-packages/oslo_concurrency/processutils.py:372
Oct 09 16:33:25 compute-0 nova_compute[117331]: 2025-10-09 16:33:25.069 2 DEBUG nova.virt.disk.api [None req-81abd526-1b0a-42e6-a509-19b859194087 1c793380a6e945d69dacfd07f1f156f8 6d67ac3076434a4582e5db1ca7d043ff - - default default] Checking if we can resize image /var/lib/nova/instances/e08b0fc8-1e1e-4d97-b613-761b8f6f9674/disk. size=1073741824 can_resize_image /usr/lib/python3.12/site-packages/nova/virt/disk/api.py:164
Oct 09 16:33:25 compute-0 nova_compute[117331]: 2025-10-09 16:33:25.069 2 DEBUG oslo_concurrency.processutils [None req-81abd526-1b0a-42e6-a509-19b859194087 1c793380a6e945d69dacfd07f1f156f8 6d67ac3076434a4582e5db1ca7d043ff - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/e08b0fc8-1e1e-4d97-b613-761b8f6f9674/disk --force-share --output=json execute /usr/lib/python3.12/site-packages/oslo_concurrency/processutils.py:349
Oct 09 16:33:25 compute-0 nova_compute[117331]: 2025-10-09 16:33:25.119 2 DEBUG oslo_concurrency.processutils [None req-81abd526-1b0a-42e6-a509-19b859194087 1c793380a6e945d69dacfd07f1f156f8 6d67ac3076434a4582e5db1ca7d043ff - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/e08b0fc8-1e1e-4d97-b613-761b8f6f9674/disk --force-share --output=json" returned: 0 in 0.049s execute /usr/lib/python3.12/site-packages/oslo_concurrency/processutils.py:372
Oct 09 16:33:25 compute-0 nova_compute[117331]: 2025-10-09 16:33:25.121 2 DEBUG nova.virt.disk.api [None req-81abd526-1b0a-42e6-a509-19b859194087 1c793380a6e945d69dacfd07f1f156f8 6d67ac3076434a4582e5db1ca7d043ff - - default default] Cannot resize image /var/lib/nova/instances/e08b0fc8-1e1e-4d97-b613-761b8f6f9674/disk to a smaller size. can_resize_image /usr/lib/python3.12/site-packages/nova/virt/disk/api.py:170
Oct 09 16:33:25 compute-0 nova_compute[117331]: 2025-10-09 16:33:25.122 2 DEBUG nova.virt.libvirt.driver [None req-81abd526-1b0a-42e6-a509-19b859194087 1c793380a6e945d69dacfd07f1f156f8 6d67ac3076434a4582e5db1ca7d043ff - - default default] [instance: e08b0fc8-1e1e-4d97-b613-761b8f6f9674] Created local disks _create_image /usr/lib/python3.12/site-packages/nova/virt/libvirt/driver.py:5270
Oct 09 16:33:25 compute-0 nova_compute[117331]: 2025-10-09 16:33:25.123 2 DEBUG nova.virt.libvirt.driver [None req-81abd526-1b0a-42e6-a509-19b859194087 1c793380a6e945d69dacfd07f1f156f8 6d67ac3076434a4582e5db1ca7d043ff - - default default] [instance: e08b0fc8-1e1e-4d97-b613-761b8f6f9674] Ensure instance console log exists: /var/lib/nova/instances/e08b0fc8-1e1e-4d97-b613-761b8f6f9674/console.log _ensure_console_log_for_instance /usr/lib/python3.12/site-packages/nova/virt/libvirt/driver.py:5017
Oct 09 16:33:25 compute-0 nova_compute[117331]: 2025-10-09 16:33:25.124 2 DEBUG oslo_concurrency.lockutils [None req-81abd526-1b0a-42e6-a509-19b859194087 1c793380a6e945d69dacfd07f1f156f8 6d67ac3076434a4582e5db1ca7d043ff - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:405
Oct 09 16:33:25 compute-0 nova_compute[117331]: 2025-10-09 16:33:25.124 2 DEBUG oslo_concurrency.lockutils [None req-81abd526-1b0a-42e6-a509-19b859194087 1c793380a6e945d69dacfd07f1f156f8 6d67ac3076434a4582e5db1ca7d043ff - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.001s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:410
Oct 09 16:33:25 compute-0 nova_compute[117331]: 2025-10-09 16:33:25.125 2 DEBUG oslo_concurrency.lockutils [None req-81abd526-1b0a-42e6-a509-19b859194087 1c793380a6e945d69dacfd07f1f156f8 6d67ac3076434a4582e5db1ca7d043ff - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:424
Oct 09 16:33:25 compute-0 nova_compute[117331]: 2025-10-09 16:33:25.191 2 DEBUG nova.network.neutron [req-fcf327db-1296-4d6e-8214-ab2830aaea29 req-ed7cdb22-d766-4617-b86f-847237f172a4 ee15961ad2be4bada6c106ac4f69f422 c076c42271004e7aa86e84416e33f826 - - default default] [instance: e08b0fc8-1e1e-4d97-b613-761b8f6f9674] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.12/site-packages/nova/network/neutron.py:3383
Oct 09 16:33:25 compute-0 nova_compute[117331]: 2025-10-09 16:33:25.339 2 DEBUG nova.network.neutron [req-fcf327db-1296-4d6e-8214-ab2830aaea29 req-ed7cdb22-d766-4617-b86f-847237f172a4 ee15961ad2be4bada6c106ac4f69f422 c076c42271004e7aa86e84416e33f826 - - default default] [instance: e08b0fc8-1e1e-4d97-b613-761b8f6f9674] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.12/site-packages/nova/network/neutron.py:116
Oct 09 16:33:25 compute-0 nova_compute[117331]: 2025-10-09 16:33:25.849 2 DEBUG oslo_concurrency.lockutils [req-fcf327db-1296-4d6e-8214-ab2830aaea29 req-ed7cdb22-d766-4617-b86f-847237f172a4 ee15961ad2be4bada6c106ac4f69f422 c076c42271004e7aa86e84416e33f826 - - default default] Releasing lock "refresh_cache-e08b0fc8-1e1e-4d97-b613-761b8f6f9674" lock /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:334
Oct 09 16:33:25 compute-0 nova_compute[117331]: 2025-10-09 16:33:25.850 2 DEBUG oslo_concurrency.lockutils [None req-81abd526-1b0a-42e6-a509-19b859194087 1c793380a6e945d69dacfd07f1f156f8 6d67ac3076434a4582e5db1ca7d043ff - - default default] Acquired lock "refresh_cache-e08b0fc8-1e1e-4d97-b613-761b8f6f9674" lock /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:316
Oct 09 16:33:25 compute-0 nova_compute[117331]: 2025-10-09 16:33:25.850 2 DEBUG nova.network.neutron [None req-81abd526-1b0a-42e6-a509-19b859194087 1c793380a6e945d69dacfd07f1f156f8 6d67ac3076434a4582e5db1ca7d043ff - - default default] [instance: e08b0fc8-1e1e-4d97-b613-761b8f6f9674] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.12/site-packages/nova/network/neutron.py:2070
Oct 09 16:33:26 compute-0 nova_compute[117331]: 2025-10-09 16:33:26.308 2 DEBUG oslo_service.periodic_task [None req-92a13bed-c081-4c7b-87f3-bafebefb79f2 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.12/site-packages/oslo_service/periodic_task.py:210
Oct 09 16:33:26 compute-0 sshd-session[150743]: Connection closed by invalid user cs2server 36.224.53.32 port 37516 [preauth]
Oct 09 16:33:27 compute-0 nova_compute[117331]: 2025-10-09 16:33:27.156 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 09 16:33:27 compute-0 nova_compute[117331]: 2025-10-09 16:33:27.190 2 DEBUG nova.network.neutron [None req-81abd526-1b0a-42e6-a509-19b859194087 1c793380a6e945d69dacfd07f1f156f8 6d67ac3076434a4582e5db1ca7d043ff - - default default] [instance: e08b0fc8-1e1e-4d97-b613-761b8f6f9674] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.12/site-packages/nova/network/neutron.py:3383
Oct 09 16:33:27 compute-0 nova_compute[117331]: 2025-10-09 16:33:27.306 2 DEBUG oslo_service.periodic_task [None req-92a13bed-c081-4c7b-87f3-bafebefb79f2 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.12/site-packages/oslo_service/periodic_task.py:210
Oct 09 16:33:27 compute-0 nova_compute[117331]: 2025-10-09 16:33:27.369 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 09 16:33:27 compute-0 nova_compute[117331]: 2025-10-09 16:33:27.420 2 WARNING neutronclient.v2_0.client [None req-81abd526-1b0a-42e6-a509-19b859194087 1c793380a6e945d69dacfd07f1f156f8 6d67ac3076434a4582e5db1ca7d043ff - - default default] The python binding code in neutronclient is deprecated in favor of OpenstackSDK, please use that as this will be removed in a future release.
Oct 09 16:33:27 compute-0 nova_compute[117331]: 2025-10-09 16:33:27.697 2 DEBUG nova.network.neutron [None req-81abd526-1b0a-42e6-a509-19b859194087 1c793380a6e945d69dacfd07f1f156f8 6d67ac3076434a4582e5db1ca7d043ff - - default default] [instance: e08b0fc8-1e1e-4d97-b613-761b8f6f9674] Updating instance_info_cache with network_info: [{"id": "8bc917d5-4807-4e8c-8941-29822aa14c94", "address": "fa:16:3e:a0:cf:4a", "network": {"id": "54b37568-476a-40a0-b545-fe5401f85653", "bridge": "br-int", "label": "tempest-TestExecuteWorkloadBalanceStrategy-563305466-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "036a4e356fb34effb6775ffe5bd9a19f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap8bc917d5-48", "ovs_interfaceid": "8bc917d5-4807-4e8c-8941-29822aa14c94", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.12/site-packages/nova/network/neutron.py:116
Oct 09 16:33:28 compute-0 nova_compute[117331]: 2025-10-09 16:33:28.205 2 DEBUG oslo_concurrency.lockutils [None req-81abd526-1b0a-42e6-a509-19b859194087 1c793380a6e945d69dacfd07f1f156f8 6d67ac3076434a4582e5db1ca7d043ff - - default default] Releasing lock "refresh_cache-e08b0fc8-1e1e-4d97-b613-761b8f6f9674" lock /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:334
Oct 09 16:33:28 compute-0 nova_compute[117331]: 2025-10-09 16:33:28.206 2 DEBUG nova.compute.manager [None req-81abd526-1b0a-42e6-a509-19b859194087 1c793380a6e945d69dacfd07f1f156f8 6d67ac3076434a4582e5db1ca7d043ff - - default default] [instance: e08b0fc8-1e1e-4d97-b613-761b8f6f9674] Instance network_info: |[{"id": "8bc917d5-4807-4e8c-8941-29822aa14c94", "address": "fa:16:3e:a0:cf:4a", "network": {"id": "54b37568-476a-40a0-b545-fe5401f85653", "bridge": "br-int", "label": "tempest-TestExecuteWorkloadBalanceStrategy-563305466-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "036a4e356fb34effb6775ffe5bd9a19f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap8bc917d5-48", "ovs_interfaceid": "8bc917d5-4807-4e8c-8941-29822aa14c94", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.12/site-packages/nova/compute/manager.py:2031
Oct 09 16:33:28 compute-0 nova_compute[117331]: 2025-10-09 16:33:28.210 2 DEBUG nova.virt.libvirt.driver [None req-81abd526-1b0a-42e6-a509-19b859194087 1c793380a6e945d69dacfd07f1f156f8 6d67ac3076434a4582e5db1ca7d043ff - - default default] [instance: e08b0fc8-1e1e-4d97-b613-761b8f6f9674] Start _get_guest_xml network_info=[{"id": "8bc917d5-4807-4e8c-8941-29822aa14c94", "address": "fa:16:3e:a0:cf:4a", "network": {"id": "54b37568-476a-40a0-b545-fe5401f85653", "bridge": "br-int", "label": "tempest-TestExecuteWorkloadBalanceStrategy-563305466-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "036a4e356fb34effb6775ffe5bd9a19f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap8bc917d5-48", "ovs_interfaceid": "8bc917d5-4807-4e8c-8941-29822aa14c94", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-09T16:07:02Z,direct_url=<?>,disk_format='qcow2',id=b7d6e0af-25e4-4227-9dc6-43143898ceee,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='b30e8cf5e10742f190212b4cb97ce2c9',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-09T16:07:04Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'device_type': 'disk', 'encrypted': False, 'size': 0, 'encryption_format': None, 'guest_format': None, 'boot_index': 0, 'device_name': '/dev/vda', 'disk_bus': 'virtio', 'encryption_options': None, 'encryption_secret_uuid': None, 'image_id': 'b7d6e0af-25e4-4227-9dc6-43143898ceee'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None}share_info=None _get_guest_xml /usr/lib/python3.12/site-packages/nova/virt/libvirt/driver.py:8046
Oct 09 16:33:28 compute-0 nova_compute[117331]: 2025-10-09 16:33:28.216 2 WARNING nova.virt.libvirt.driver [None req-81abd526-1b0a-42e6-a509-19b859194087 1c793380a6e945d69dacfd07f1f156f8 6d67ac3076434a4582e5db1ca7d043ff - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Oct 09 16:33:28 compute-0 nova_compute[117331]: 2025-10-09 16:33:28.219 2 DEBUG nova.virt.driver [None req-81abd526-1b0a-42e6-a509-19b859194087 1c793380a6e945d69dacfd07f1f156f8 6d67ac3076434a4582e5db1ca7d043ff - - default default] InstanceDriverMetadata: InstanceDriverMetadata(root_type='image', root_id='b7d6e0af-25e4-4227-9dc6-43143898ceee', instance_meta=NovaInstanceMeta(name='tempest-TestExecuteWorkloadBalanceStrategy-server-487819226', uuid='e08b0fc8-1e1e-4d97-b613-761b8f6f9674'), owner=OwnerMeta(userid='1c793380a6e945d69dacfd07f1f156f8', username='tempest-TestExecuteWorkloadBalanceStrategy-2100042169-project-admin', projectid='6d67ac3076434a4582e5db1ca7d043ff', projectname='tempest-TestExecuteWorkloadBalanceStrategy-2100042169'), image=ImageMeta(id='b7d6e0af-25e4-4227-9dc6-43143898ceee', name=None, container_format='bare', disk_format='qcow2', min_disk=1, min_ram=0, properties={'hw_rng_model': 'virtio'}), flavor=FlavorMeta(name='tempest-watcher_flavor-1801535589', flavorid='91ea4b87-742a-4f2b-9b5d-34de6e13f85a', memory_mb=1151, vcpus=1, root_gb=1, ephemeral_gb=0, extra_specs={}, swap=0), network_info=[{"id": "8bc917d5-4807-4e8c-8941-29822aa14c94", "address": "fa:16:3e:a0:cf:4a", "network": {"id": "54b37568-476a-40a0-b545-fe5401f85653", "bridge": "br-int", "label": "tempest-TestExecuteWorkloadBalanceStrategy-563305466-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "036a4e356fb34effb6775ffe5bd9a19f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap8bc917d5-48", "ovs_interfaceid": "8bc917d5-4807-4e8c-8941-29822aa14c94", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}], nova_package='32.1.0-0.20251008143712.076498e.el10', creation_time=1760027608.2188506) get_instance_driver_metadata /usr/lib/python3.12/site-packages/nova/virt/driver.py:437
Oct 09 16:33:28 compute-0 nova_compute[117331]: 2025-10-09 16:33:28.226 2 DEBUG nova.virt.libvirt.host [None req-81abd526-1b0a-42e6-a509-19b859194087 1c793380a6e945d69dacfd07f1f156f8 6d67ac3076434a4582e5db1ca7d043ff - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.12/site-packages/nova/virt/libvirt/host.py:1698
Oct 09 16:33:28 compute-0 nova_compute[117331]: 2025-10-09 16:33:28.228 2 DEBUG nova.virt.libvirt.host [None req-81abd526-1b0a-42e6-a509-19b859194087 1c793380a6e945d69dacfd07f1f156f8 6d67ac3076434a4582e5db1ca7d043ff - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.12/site-packages/nova/virt/libvirt/host.py:1708
Oct 09 16:33:28 compute-0 nova_compute[117331]: 2025-10-09 16:33:28.232 2 DEBUG nova.virt.libvirt.host [None req-81abd526-1b0a-42e6-a509-19b859194087 1c793380a6e945d69dacfd07f1f156f8 6d67ac3076434a4582e5db1ca7d043ff - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.12/site-packages/nova/virt/libvirt/host.py:1717
Oct 09 16:33:28 compute-0 nova_compute[117331]: 2025-10-09 16:33:28.232 2 DEBUG nova.virt.libvirt.host [None req-81abd526-1b0a-42e6-a509-19b859194087 1c793380a6e945d69dacfd07f1f156f8 6d67ac3076434a4582e5db1ca7d043ff - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.12/site-packages/nova/virt/libvirt/host.py:1724
Oct 09 16:33:28 compute-0 nova_compute[117331]: 2025-10-09 16:33:28.233 2 DEBUG nova.virt.libvirt.driver [None req-81abd526-1b0a-42e6-a509-19b859194087 1c793380a6e945d69dacfd07f1f156f8 6d67ac3076434a4582e5db1ca7d043ff - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.12/site-packages/nova/virt/libvirt/driver.py:5809
Oct 09 16:33:28 compute-0 nova_compute[117331]: 2025-10-09 16:33:28.233 2 DEBUG nova.virt.hardware [None req-81abd526-1b0a-42e6-a509-19b859194087 1c793380a6e945d69dacfd07f1f156f8 6d67ac3076434a4582e5db1ca7d043ff - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-10-09T16:33:16Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={},flavorid='91ea4b87-742a-4f2b-9b5d-34de6e13f85a',id=3,is_public=True,memory_mb=1151,name='tempest-watcher_flavor-1801535589',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-09T16:07:02Z,direct_url=<?>,disk_format='qcow2',id=b7d6e0af-25e4-4227-9dc6-43143898ceee,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='b30e8cf5e10742f190212b4cb97ce2c9',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-09T16:07:04Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.12/site-packages/nova/virt/hardware.py:571
Oct 09 16:33:28 compute-0 nova_compute[117331]: 2025-10-09 16:33:28.234 2 DEBUG nova.virt.hardware [None req-81abd526-1b0a-42e6-a509-19b859194087 1c793380a6e945d69dacfd07f1f156f8 6d67ac3076434a4582e5db1ca7d043ff - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.12/site-packages/nova/virt/hardware.py:356
Oct 09 16:33:28 compute-0 nova_compute[117331]: 2025-10-09 16:33:28.234 2 DEBUG nova.virt.hardware [None req-81abd526-1b0a-42e6-a509-19b859194087 1c793380a6e945d69dacfd07f1f156f8 6d67ac3076434a4582e5db1ca7d043ff - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.12/site-packages/nova/virt/hardware.py:360
Oct 09 16:33:28 compute-0 nova_compute[117331]: 2025-10-09 16:33:28.234 2 DEBUG nova.virt.hardware [None req-81abd526-1b0a-42e6-a509-19b859194087 1c793380a6e945d69dacfd07f1f156f8 6d67ac3076434a4582e5db1ca7d043ff - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.12/site-packages/nova/virt/hardware.py:396
Oct 09 16:33:28 compute-0 nova_compute[117331]: 2025-10-09 16:33:28.234 2 DEBUG nova.virt.hardware [None req-81abd526-1b0a-42e6-a509-19b859194087 1c793380a6e945d69dacfd07f1f156f8 6d67ac3076434a4582e5db1ca7d043ff - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.12/site-packages/nova/virt/hardware.py:400
Oct 09 16:33:28 compute-0 nova_compute[117331]: 2025-10-09 16:33:28.234 2 DEBUG nova.virt.hardware [None req-81abd526-1b0a-42e6-a509-19b859194087 1c793380a6e945d69dacfd07f1f156f8 6d67ac3076434a4582e5db1ca7d043ff - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.12/site-packages/nova/virt/hardware.py:438
Oct 09 16:33:28 compute-0 nova_compute[117331]: 2025-10-09 16:33:28.235 2 DEBUG nova.virt.hardware [None req-81abd526-1b0a-42e6-a509-19b859194087 1c793380a6e945d69dacfd07f1f156f8 6d67ac3076434a4582e5db1ca7d043ff - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.12/site-packages/nova/virt/hardware.py:577
Oct 09 16:33:28 compute-0 nova_compute[117331]: 2025-10-09 16:33:28.235 2 DEBUG nova.virt.hardware [None req-81abd526-1b0a-42e6-a509-19b859194087 1c793380a6e945d69dacfd07f1f156f8 6d67ac3076434a4582e5db1ca7d043ff - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.12/site-packages/nova/virt/hardware.py:479
Oct 09 16:33:28 compute-0 nova_compute[117331]: 2025-10-09 16:33:28.235 2 DEBUG nova.virt.hardware [None req-81abd526-1b0a-42e6-a509-19b859194087 1c793380a6e945d69dacfd07f1f156f8 6d67ac3076434a4582e5db1ca7d043ff - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.12/site-packages/nova/virt/hardware.py:509
Oct 09 16:33:28 compute-0 nova_compute[117331]: 2025-10-09 16:33:28.235 2 DEBUG nova.virt.hardware [None req-81abd526-1b0a-42e6-a509-19b859194087 1c793380a6e945d69dacfd07f1f156f8 6d67ac3076434a4582e5db1ca7d043ff - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.12/site-packages/nova/virt/hardware.py:583
Oct 09 16:33:28 compute-0 nova_compute[117331]: 2025-10-09 16:33:28.236 2 DEBUG nova.virt.hardware [None req-81abd526-1b0a-42e6-a509-19b859194087 1c793380a6e945d69dacfd07f1f156f8 6d67ac3076434a4582e5db1ca7d043ff - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.12/site-packages/nova/virt/hardware.py:585
Oct 09 16:33:28 compute-0 nova_compute[117331]: 2025-10-09 16:33:28.239 2 DEBUG nova.virt.libvirt.vif [None req-81abd526-1b0a-42e6-a509-19b859194087 1c793380a6e945d69dacfd07f1f156f8 6d67ac3076434a4582e5db1ca7d043ff - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,compute_id=1,config_drive='',created_at=2025-10-09T16:33:18Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description=None,display_name='tempest-TestExecuteWorkloadBalanceStrategy-server-487819226',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(3),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testexecuteworkloadbalancestrategy-server-487819226',id=24,image_ref='b7d6e0af-25e4-4227-9dc6-43143898ceee',info_cache=InstanceInfoCache,instance_type_id=3,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=1151,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='6d67ac3076434a4582e5db1ca7d043ff',ramdisk_id='',reservation_id='r-2km05070',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,manager,admin,member',image_base_image_ref='b7d6e0af-25e4-4227-9dc6-43143898ceee',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestExecuteWorkloadBalanceStrategy-2100042169',owner_user_name='tempest-TestExecuteWorkloadBalanceStrategy-2100042169-project-admin'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-09T16:33:23Z,user_data=None,user_id='1c793380a6e945d69dacfd07f1f156f8',uuid=e08b0fc8-1e1e-4d97-b613-761b8f6f9674,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "8bc917d5-4807-4e8c-8941-29822aa14c94", "address": "fa:16:3e:a0:cf:4a", "network": {"id": "54b37568-476a-40a0-b545-fe5401f85653", "bridge": "br-int", "label": "tempest-TestExecuteWorkloadBalanceStrategy-563305466-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "036a4e356fb34effb6775ffe5bd9a19f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap8bc917d5-48", "ovs_interfaceid": "8bc917d5-4807-4e8c-8941-29822aa14c94", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.12/site-packages/nova/virt/libvirt/vif.py:574
Oct 09 16:33:28 compute-0 nova_compute[117331]: 2025-10-09 16:33:28.240 2 DEBUG nova.network.os_vif_util [None req-81abd526-1b0a-42e6-a509-19b859194087 1c793380a6e945d69dacfd07f1f156f8 6d67ac3076434a4582e5db1ca7d043ff - - default default] Converting VIF {"id": "8bc917d5-4807-4e8c-8941-29822aa14c94", "address": "fa:16:3e:a0:cf:4a", "network": {"id": "54b37568-476a-40a0-b545-fe5401f85653", "bridge": "br-int", "label": "tempest-TestExecuteWorkloadBalanceStrategy-563305466-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "036a4e356fb34effb6775ffe5bd9a19f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap8bc917d5-48", "ovs_interfaceid": "8bc917d5-4807-4e8c-8941-29822aa14c94", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.12/site-packages/nova/network/os_vif_util.py:511
Oct 09 16:33:28 compute-0 nova_compute[117331]: 2025-10-09 16:33:28.241 2 DEBUG nova.network.os_vif_util [None req-81abd526-1b0a-42e6-a509-19b859194087 1c793380a6e945d69dacfd07f1f156f8 6d67ac3076434a4582e5db1ca7d043ff - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:a0:cf:4a,bridge_name='br-int',has_traffic_filtering=True,id=8bc917d5-4807-4e8c-8941-29822aa14c94,network=Network(54b37568-476a-40a0-b545-fe5401f85653),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap8bc917d5-48') nova_to_osvif_vif /usr/lib/python3.12/site-packages/nova/network/os_vif_util.py:548
Oct 09 16:33:28 compute-0 nova_compute[117331]: 2025-10-09 16:33:28.241 2 DEBUG nova.objects.instance [None req-81abd526-1b0a-42e6-a509-19b859194087 1c793380a6e945d69dacfd07f1f156f8 6d67ac3076434a4582e5db1ca7d043ff - - default default] Lazy-loading 'pci_devices' on Instance uuid e08b0fc8-1e1e-4d97-b613-761b8f6f9674 obj_load_attr /usr/lib/python3.12/site-packages/nova/objects/instance.py:1141
Oct 09 16:33:28 compute-0 nova_compute[117331]: 2025-10-09 16:33:28.750 2 DEBUG nova.virt.libvirt.driver [None req-81abd526-1b0a-42e6-a509-19b859194087 1c793380a6e945d69dacfd07f1f156f8 6d67ac3076434a4582e5db1ca7d043ff - - default default] [instance: e08b0fc8-1e1e-4d97-b613-761b8f6f9674] End _get_guest_xml xml=<domain type="kvm">
Oct 09 16:33:28 compute-0 nova_compute[117331]:   <uuid>e08b0fc8-1e1e-4d97-b613-761b8f6f9674</uuid>
Oct 09 16:33:28 compute-0 nova_compute[117331]:   <name>instance-00000018</name>
Oct 09 16:33:28 compute-0 nova_compute[117331]:   <memory>1178624</memory>
Oct 09 16:33:28 compute-0 nova_compute[117331]:   <vcpu>1</vcpu>
Oct 09 16:33:28 compute-0 nova_compute[117331]:   <metadata>
Oct 09 16:33:28 compute-0 nova_compute[117331]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Oct 09 16:33:28 compute-0 nova_compute[117331]:       <nova:package version="32.1.0-0.20251008143712.076498e.el10"/>
Oct 09 16:33:28 compute-0 nova_compute[117331]:       <nova:name>tempest-TestExecuteWorkloadBalanceStrategy-server-487819226</nova:name>
Oct 09 16:33:28 compute-0 nova_compute[117331]:       <nova:creationTime>2025-10-09 16:33:28</nova:creationTime>
Oct 09 16:33:28 compute-0 nova_compute[117331]:       <nova:flavor name="tempest-watcher_flavor-1801535589" id="91ea4b87-742a-4f2b-9b5d-34de6e13f85a">
Oct 09 16:33:28 compute-0 nova_compute[117331]:         <nova:memory>1151</nova:memory>
Oct 09 16:33:28 compute-0 nova_compute[117331]:         <nova:disk>1</nova:disk>
Oct 09 16:33:28 compute-0 nova_compute[117331]:         <nova:swap>0</nova:swap>
Oct 09 16:33:28 compute-0 nova_compute[117331]:         <nova:ephemeral>0</nova:ephemeral>
Oct 09 16:33:28 compute-0 nova_compute[117331]:         <nova:vcpus>1</nova:vcpus>
Oct 09 16:33:28 compute-0 nova_compute[117331]:         <nova:extraSpecs/>
Oct 09 16:33:28 compute-0 nova_compute[117331]:       </nova:flavor>
Oct 09 16:33:28 compute-0 nova_compute[117331]:       <nova:image uuid="b7d6e0af-25e4-4227-9dc6-43143898ceee">
Oct 09 16:33:28 compute-0 nova_compute[117331]:         <nova:containerFormat>bare</nova:containerFormat>
Oct 09 16:33:28 compute-0 nova_compute[117331]:         <nova:diskFormat>qcow2</nova:diskFormat>
Oct 09 16:33:28 compute-0 nova_compute[117331]:         <nova:minDisk>1</nova:minDisk>
Oct 09 16:33:28 compute-0 nova_compute[117331]:         <nova:minRam>0</nova:minRam>
Oct 09 16:33:28 compute-0 nova_compute[117331]:         <nova:properties>
Oct 09 16:33:28 compute-0 nova_compute[117331]:           <nova:property name="hw_rng_model">virtio</nova:property>
Oct 09 16:33:28 compute-0 nova_compute[117331]:         </nova:properties>
Oct 09 16:33:28 compute-0 nova_compute[117331]:       </nova:image>
Oct 09 16:33:28 compute-0 nova_compute[117331]:       <nova:owner>
Oct 09 16:33:28 compute-0 nova_compute[117331]:         <nova:user uuid="1c793380a6e945d69dacfd07f1f156f8">tempest-TestExecuteWorkloadBalanceStrategy-2100042169-project-admin</nova:user>
Oct 09 16:33:28 compute-0 nova_compute[117331]:         <nova:project uuid="6d67ac3076434a4582e5db1ca7d043ff">tempest-TestExecuteWorkloadBalanceStrategy-2100042169</nova:project>
Oct 09 16:33:28 compute-0 nova_compute[117331]:       </nova:owner>
Oct 09 16:33:28 compute-0 nova_compute[117331]:       <nova:root type="image" uuid="b7d6e0af-25e4-4227-9dc6-43143898ceee"/>
Oct 09 16:33:28 compute-0 nova_compute[117331]:       <nova:ports>
Oct 09 16:33:28 compute-0 nova_compute[117331]:         <nova:port uuid="8bc917d5-4807-4e8c-8941-29822aa14c94">
Oct 09 16:33:28 compute-0 nova_compute[117331]:           <nova:ip type="fixed" address="10.100.0.4" ipVersion="4"/>
Oct 09 16:33:28 compute-0 nova_compute[117331]:         </nova:port>
Oct 09 16:33:28 compute-0 nova_compute[117331]:       </nova:ports>
Oct 09 16:33:28 compute-0 nova_compute[117331]:     </nova:instance>
Oct 09 16:33:28 compute-0 nova_compute[117331]:   </metadata>
Oct 09 16:33:28 compute-0 nova_compute[117331]:   <sysinfo type="smbios">
Oct 09 16:33:28 compute-0 nova_compute[117331]:     <system>
Oct 09 16:33:28 compute-0 nova_compute[117331]:       <entry name="manufacturer">RDO</entry>
Oct 09 16:33:28 compute-0 nova_compute[117331]:       <entry name="product">OpenStack Compute</entry>
Oct 09 16:33:28 compute-0 nova_compute[117331]:       <entry name="version">32.1.0-0.20251008143712.076498e.el10</entry>
Oct 09 16:33:28 compute-0 nova_compute[117331]:       <entry name="serial">e08b0fc8-1e1e-4d97-b613-761b8f6f9674</entry>
Oct 09 16:33:28 compute-0 nova_compute[117331]:       <entry name="uuid">e08b0fc8-1e1e-4d97-b613-761b8f6f9674</entry>
Oct 09 16:33:28 compute-0 nova_compute[117331]:       <entry name="family">Virtual Machine</entry>
Oct 09 16:33:28 compute-0 nova_compute[117331]:     </system>
Oct 09 16:33:28 compute-0 nova_compute[117331]:   </sysinfo>
Oct 09 16:33:28 compute-0 nova_compute[117331]:   <os>
Oct 09 16:33:28 compute-0 nova_compute[117331]:     <type arch="x86_64" machine="q35">hvm</type>
Oct 09 16:33:28 compute-0 nova_compute[117331]:     <boot dev="hd"/>
Oct 09 16:33:28 compute-0 nova_compute[117331]:     <smbios mode="sysinfo"/>
Oct 09 16:33:28 compute-0 nova_compute[117331]:   </os>
Oct 09 16:33:28 compute-0 nova_compute[117331]:   <features>
Oct 09 16:33:28 compute-0 nova_compute[117331]:     <acpi/>
Oct 09 16:33:28 compute-0 nova_compute[117331]:     <apic/>
Oct 09 16:33:28 compute-0 nova_compute[117331]:     <vmcoreinfo/>
Oct 09 16:33:28 compute-0 nova_compute[117331]:   </features>
Oct 09 16:33:28 compute-0 nova_compute[117331]:   <clock offset="utc">
Oct 09 16:33:28 compute-0 nova_compute[117331]:     <timer name="pit" tickpolicy="delay"/>
Oct 09 16:33:28 compute-0 nova_compute[117331]:     <timer name="rtc" tickpolicy="catchup"/>
Oct 09 16:33:28 compute-0 nova_compute[117331]:     <timer name="hpet" present="no"/>
Oct 09 16:33:28 compute-0 nova_compute[117331]:   </clock>
Oct 09 16:33:28 compute-0 nova_compute[117331]:   <cpu mode="host-model" match="exact">
Oct 09 16:33:28 compute-0 nova_compute[117331]:     <topology sockets="1" cores="1" threads="1"/>
Oct 09 16:33:28 compute-0 nova_compute[117331]:   </cpu>
Oct 09 16:33:28 compute-0 nova_compute[117331]:   <devices>
Oct 09 16:33:28 compute-0 nova_compute[117331]:     <disk type="file" device="disk">
Oct 09 16:33:28 compute-0 nova_compute[117331]:       <driver name="qemu" type="qcow2" cache="none"/>
Oct 09 16:33:28 compute-0 nova_compute[117331]:       <source file="/var/lib/nova/instances/e08b0fc8-1e1e-4d97-b613-761b8f6f9674/disk"/>
Oct 09 16:33:28 compute-0 nova_compute[117331]:       <target dev="vda" bus="virtio"/>
Oct 09 16:33:28 compute-0 nova_compute[117331]:     </disk>
Oct 09 16:33:28 compute-0 nova_compute[117331]:     <disk type="file" device="cdrom">
Oct 09 16:33:28 compute-0 nova_compute[117331]:       <driver name="qemu" type="raw" cache="none"/>
Oct 09 16:33:28 compute-0 nova_compute[117331]:       <source file="/var/lib/nova/instances/e08b0fc8-1e1e-4d97-b613-761b8f6f9674/disk.config"/>
Oct 09 16:33:28 compute-0 nova_compute[117331]:       <target dev="sda" bus="sata"/>
Oct 09 16:33:28 compute-0 nova_compute[117331]:     </disk>
Oct 09 16:33:28 compute-0 nova_compute[117331]:     <interface type="ethernet">
Oct 09 16:33:28 compute-0 nova_compute[117331]:       <mac address="fa:16:3e:a0:cf:4a"/>
Oct 09 16:33:28 compute-0 nova_compute[117331]:       <model type="virtio"/>
Oct 09 16:33:28 compute-0 nova_compute[117331]:       <driver name="vhost" rx_queue_size="512"/>
Oct 09 16:33:28 compute-0 nova_compute[117331]:       <mtu size="1442"/>
Oct 09 16:33:28 compute-0 nova_compute[117331]:       <target dev="tap8bc917d5-48"/>
Oct 09 16:33:28 compute-0 nova_compute[117331]:     </interface>
Oct 09 16:33:28 compute-0 nova_compute[117331]:     <serial type="pty">
Oct 09 16:33:28 compute-0 nova_compute[117331]:       <log file="/var/lib/nova/instances/e08b0fc8-1e1e-4d97-b613-761b8f6f9674/console.log" append="off"/>
Oct 09 16:33:28 compute-0 nova_compute[117331]:     </serial>
Oct 09 16:33:28 compute-0 nova_compute[117331]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Oct 09 16:33:28 compute-0 nova_compute[117331]:     <video>
Oct 09 16:33:28 compute-0 nova_compute[117331]:       <model type="virtio"/>
Oct 09 16:33:28 compute-0 nova_compute[117331]:     </video>
Oct 09 16:33:28 compute-0 nova_compute[117331]:     <input type="tablet" bus="usb"/>
Oct 09 16:33:28 compute-0 nova_compute[117331]:     <rng model="virtio">
Oct 09 16:33:28 compute-0 nova_compute[117331]:       <backend model="random">/dev/urandom</backend>
Oct 09 16:33:28 compute-0 nova_compute[117331]:     </rng>
Oct 09 16:33:28 compute-0 nova_compute[117331]:     <controller type="pci" model="pcie-root"/>
Oct 09 16:33:28 compute-0 nova_compute[117331]:     <controller type="pci" model="pcie-root-port"/>
Oct 09 16:33:28 compute-0 nova_compute[117331]:     <controller type="pci" model="pcie-root-port"/>
Oct 09 16:33:28 compute-0 nova_compute[117331]:     <controller type="pci" model="pcie-root-port"/>
Oct 09 16:33:28 compute-0 nova_compute[117331]:     <controller type="pci" model="pcie-root-port"/>
Oct 09 16:33:28 compute-0 nova_compute[117331]:     <controller type="pci" model="pcie-root-port"/>
Oct 09 16:33:28 compute-0 nova_compute[117331]:     <controller type="pci" model="pcie-root-port"/>
Oct 09 16:33:28 compute-0 nova_compute[117331]:     <controller type="pci" model="pcie-root-port"/>
Oct 09 16:33:28 compute-0 nova_compute[117331]:     <controller type="pci" model="pcie-root-port"/>
Oct 09 16:33:28 compute-0 nova_compute[117331]:     <controller type="pci" model="pcie-root-port"/>
Oct 09 16:33:28 compute-0 nova_compute[117331]:     <controller type="pci" model="pcie-root-port"/>
Oct 09 16:33:28 compute-0 nova_compute[117331]:     <controller type="pci" model="pcie-root-port"/>
Oct 09 16:33:28 compute-0 nova_compute[117331]:     <controller type="pci" model="pcie-root-port"/>
Oct 09 16:33:28 compute-0 nova_compute[117331]:     <controller type="pci" model="pcie-root-port"/>
Oct 09 16:33:28 compute-0 nova_compute[117331]:     <controller type="pci" model="pcie-root-port"/>
Oct 09 16:33:28 compute-0 nova_compute[117331]:     <controller type="pci" model="pcie-root-port"/>
Oct 09 16:33:28 compute-0 nova_compute[117331]:     <controller type="pci" model="pcie-root-port"/>
Oct 09 16:33:28 compute-0 nova_compute[117331]:     <controller type="pci" model="pcie-root-port"/>
Oct 09 16:33:28 compute-0 nova_compute[117331]:     <controller type="pci" model="pcie-root-port"/>
Oct 09 16:33:28 compute-0 nova_compute[117331]:     <controller type="pci" model="pcie-root-port"/>
Oct 09 16:33:28 compute-0 nova_compute[117331]:     <controller type="pci" model="pcie-root-port"/>
Oct 09 16:33:28 compute-0 nova_compute[117331]:     <controller type="pci" model="pcie-root-port"/>
Oct 09 16:33:28 compute-0 nova_compute[117331]:     <controller type="pci" model="pcie-root-port"/>
Oct 09 16:33:28 compute-0 nova_compute[117331]:     <controller type="pci" model="pcie-root-port"/>
Oct 09 16:33:28 compute-0 nova_compute[117331]:     <controller type="pci" model="pcie-root-port"/>
Oct 09 16:33:28 compute-0 nova_compute[117331]:     <controller type="usb" index="0"/>
Oct 09 16:33:28 compute-0 nova_compute[117331]:     <memballoon model="virtio" autodeflate="on" freePageReporting="on">
Oct 09 16:33:28 compute-0 nova_compute[117331]:       <stats period="10"/>
Oct 09 16:33:28 compute-0 nova_compute[117331]:     </memballoon>
Oct 09 16:33:28 compute-0 nova_compute[117331]:   </devices>
Oct 09 16:33:28 compute-0 nova_compute[117331]: </domain>
Oct 09 16:33:28 compute-0 nova_compute[117331]:  _get_guest_xml /usr/lib/python3.12/site-packages/nova/virt/libvirt/driver.py:8052
Oct 09 16:33:28 compute-0 nova_compute[117331]: 2025-10-09 16:33:28.751 2 DEBUG nova.compute.manager [None req-81abd526-1b0a-42e6-a509-19b859194087 1c793380a6e945d69dacfd07f1f156f8 6d67ac3076434a4582e5db1ca7d043ff - - default default] [instance: e08b0fc8-1e1e-4d97-b613-761b8f6f9674] Preparing to wait for external event network-vif-plugged-8bc917d5-4807-4e8c-8941-29822aa14c94 prepare_for_instance_event /usr/lib/python3.12/site-packages/nova/compute/manager.py:307
Oct 09 16:33:28 compute-0 nova_compute[117331]: 2025-10-09 16:33:28.752 2 DEBUG oslo_concurrency.lockutils [None req-81abd526-1b0a-42e6-a509-19b859194087 1c793380a6e945d69dacfd07f1f156f8 6d67ac3076434a4582e5db1ca7d043ff - - default default] Acquiring lock "e08b0fc8-1e1e-4d97-b613-761b8f6f9674-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:405
Oct 09 16:33:28 compute-0 nova_compute[117331]: 2025-10-09 16:33:28.752 2 DEBUG oslo_concurrency.lockutils [None req-81abd526-1b0a-42e6-a509-19b859194087 1c793380a6e945d69dacfd07f1f156f8 6d67ac3076434a4582e5db1ca7d043ff - - default default] Lock "e08b0fc8-1e1e-4d97-b613-761b8f6f9674-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:410
Oct 09 16:33:28 compute-0 nova_compute[117331]: 2025-10-09 16:33:28.752 2 DEBUG oslo_concurrency.lockutils [None req-81abd526-1b0a-42e6-a509-19b859194087 1c793380a6e945d69dacfd07f1f156f8 6d67ac3076434a4582e5db1ca7d043ff - - default default] Lock "e08b0fc8-1e1e-4d97-b613-761b8f6f9674-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:424
Oct 09 16:33:28 compute-0 nova_compute[117331]: 2025-10-09 16:33:28.753 2 DEBUG nova.virt.libvirt.vif [None req-81abd526-1b0a-42e6-a509-19b859194087 1c793380a6e945d69dacfd07f1f156f8 6d67ac3076434a4582e5db1ca7d043ff - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,compute_id=1,config_drive='',created_at=2025-10-09T16:33:18Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description=None,display_name='tempest-TestExecuteWorkloadBalanceStrategy-server-487819226',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(3),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testexecuteworkloadbalancestrategy-server-487819226',id=24,image_ref='b7d6e0af-25e4-4227-9dc6-43143898ceee',info_cache=InstanceInfoCache,instance_type_id=3,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=1151,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='6d67ac3076434a4582e5db1ca7d043ff',ramdisk_id='',reservation_id='r-2km05070',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,manager,admin,member',image_base_image_ref='b7d6e0af-25e4-4227-9dc6-43143898ceee',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestExecuteWorkloadBalanceStrategy-2100042169',owner_user_name='tempest-TestExecuteWorkloadBalanceStrategy-2100042169-project-admin'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-09T16:33:23Z,user_data=None,user_id='1c793380a6e945d69dacfd07f1f156f8',uuid=e08b0fc8-1e1e-4d97-b613-761b8f6f9674,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "8bc917d5-4807-4e8c-8941-29822aa14c94", "address": "fa:16:3e:a0:cf:4a", "network": {"id": "54b37568-476a-40a0-b545-fe5401f85653", "bridge": "br-int", "label": "tempest-TestExecuteWorkloadBalanceStrategy-563305466-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "036a4e356fb34effb6775ffe5bd9a19f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap8bc917d5-48", "ovs_interfaceid": "8bc917d5-4807-4e8c-8941-29822aa14c94", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.12/site-packages/nova/virt/libvirt/vif.py:721
Oct 09 16:33:28 compute-0 nova_compute[117331]: 2025-10-09 16:33:28.753 2 DEBUG nova.network.os_vif_util [None req-81abd526-1b0a-42e6-a509-19b859194087 1c793380a6e945d69dacfd07f1f156f8 6d67ac3076434a4582e5db1ca7d043ff - - default default] Converting VIF {"id": "8bc917d5-4807-4e8c-8941-29822aa14c94", "address": "fa:16:3e:a0:cf:4a", "network": {"id": "54b37568-476a-40a0-b545-fe5401f85653", "bridge": "br-int", "label": "tempest-TestExecuteWorkloadBalanceStrategy-563305466-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "036a4e356fb34effb6775ffe5bd9a19f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap8bc917d5-48", "ovs_interfaceid": "8bc917d5-4807-4e8c-8941-29822aa14c94", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.12/site-packages/nova/network/os_vif_util.py:511
Oct 09 16:33:28 compute-0 nova_compute[117331]: 2025-10-09 16:33:28.754 2 DEBUG nova.network.os_vif_util [None req-81abd526-1b0a-42e6-a509-19b859194087 1c793380a6e945d69dacfd07f1f156f8 6d67ac3076434a4582e5db1ca7d043ff - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:a0:cf:4a,bridge_name='br-int',has_traffic_filtering=True,id=8bc917d5-4807-4e8c-8941-29822aa14c94,network=Network(54b37568-476a-40a0-b545-fe5401f85653),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap8bc917d5-48') nova_to_osvif_vif /usr/lib/python3.12/site-packages/nova/network/os_vif_util.py:548
Oct 09 16:33:28 compute-0 nova_compute[117331]: 2025-10-09 16:33:28.755 2 DEBUG os_vif [None req-81abd526-1b0a-42e6-a509-19b859194087 1c793380a6e945d69dacfd07f1f156f8 6d67ac3076434a4582e5db1ca7d043ff - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:a0:cf:4a,bridge_name='br-int',has_traffic_filtering=True,id=8bc917d5-4807-4e8c-8941-29822aa14c94,network=Network(54b37568-476a-40a0-b545-fe5401f85653),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap8bc917d5-48') plug /usr/lib/python3.12/site-packages/os_vif/__init__.py:76
Oct 09 16:33:28 compute-0 nova_compute[117331]: 2025-10-09 16:33:28.755 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 22 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 09 16:33:28 compute-0 nova_compute[117331]: 2025-10-09 16:33:28.756 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.12/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 09 16:33:28 compute-0 nova_compute[117331]: 2025-10-09 16:33:28.756 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.12/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 09 16:33:28 compute-0 nova_compute[117331]: 2025-10-09 16:33:28.757 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 22 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 09 16:33:28 compute-0 nova_compute[117331]: 2025-10-09 16:33:28.758 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbCreateCommand(_result=None, table=QoS, columns={'type': 'linux-noop', 'external_ids': {'id': '89f9d302-02f1-520d-9e3a-a8e83b753014', '_type': 'linux-noop'}}, row=False) do_commit /usr/lib/python3.12/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 09 16:33:28 compute-0 nova_compute[117331]: 2025-10-09 16:33:28.759 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 09 16:33:28 compute-0 nova_compute[117331]: 2025-10-09 16:33:28.760 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 09 16:33:28 compute-0 nova_compute[117331]: 2025-10-09 16:33:28.763 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 22 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 09 16:33:28 compute-0 nova_compute[117331]: 2025-10-09 16:33:28.763 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap8bc917d5-48, may_exist=True, interface_attrs={}) do_commit /usr/lib/python3.12/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 09 16:33:28 compute-0 nova_compute[117331]: 2025-10-09 16:33:28.764 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Port, record=tap8bc917d5-48, col_values=(('qos', UUID('cdbd09ad-618a-48f7-82a4-935b69b66548')),), if_exists=True) do_commit /usr/lib/python3.12/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 09 16:33:28 compute-0 nova_compute[117331]: 2025-10-09 16:33:28.764 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=2): DbSetCommand(_result=None, table=Interface, record=tap8bc917d5-48, col_values=(('external_ids', {'iface-id': '8bc917d5-4807-4e8c-8941-29822aa14c94', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:a0:cf:4a', 'vm-uuid': 'e08b0fc8-1e1e-4d97-b613-761b8f6f9674'}),), if_exists=True) do_commit /usr/lib/python3.12/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 09 16:33:28 compute-0 nova_compute[117331]: 2025-10-09 16:33:28.765 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 09 16:33:28 compute-0 NetworkManager[1028]: <info>  [1760027608.7671] manager: (tap8bc917d5-48): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/81)
Oct 09 16:33:28 compute-0 nova_compute[117331]: 2025-10-09 16:33:28.768 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:248
Oct 09 16:33:28 compute-0 nova_compute[117331]: 2025-10-09 16:33:28.773 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 09 16:33:28 compute-0 nova_compute[117331]: 2025-10-09 16:33:28.774 2 INFO os_vif [None req-81abd526-1b0a-42e6-a509-19b859194087 1c793380a6e945d69dacfd07f1f156f8 6d67ac3076434a4582e5db1ca7d043ff - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:a0:cf:4a,bridge_name='br-int',has_traffic_filtering=True,id=8bc917d5-4807-4e8c-8941-29822aa14c94,network=Network(54b37568-476a-40a0-b545-fe5401f85653),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap8bc917d5-48')
Oct 09 16:33:29 compute-0 sshd-session[150804]: Invalid user zabbix from 36.224.53.32 port 46356
Oct 09 16:33:29 compute-0 podman[127775]: time="2025-10-09T16:33:29Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Oct 09 16:33:29 compute-0 podman[127775]: @ - - [09/Oct/2025:16:33:29 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 19526 "" "Go-http-client/1.1"
Oct 09 16:33:29 compute-0 podman[127775]: @ - - [09/Oct/2025:16:33:29 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 3032 "" "Go-http-client/1.1"
Oct 09 16:33:30 compute-0 nova_compute[117331]: 2025-10-09 16:33:30.303 2 DEBUG oslo_service.periodic_task [None req-92a13bed-c081-4c7b-87f3-bafebefb79f2 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.12/site-packages/oslo_service/periodic_task.py:210
Oct 09 16:33:30 compute-0 nova_compute[117331]: 2025-10-09 16:33:30.305 2 DEBUG oslo_service.periodic_task [None req-92a13bed-c081-4c7b-87f3-bafebefb79f2 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.12/site-packages/oslo_service/periodic_task.py:210
Oct 09 16:33:30 compute-0 nova_compute[117331]: 2025-10-09 16:33:30.306 2 DEBUG nova.compute.manager [None req-92a13bed-c081-4c7b-87f3-bafebefb79f2 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.12/site-packages/nova/compute/manager.py:11228
Oct 09 16:33:30 compute-0 nova_compute[117331]: 2025-10-09 16:33:30.306 2 DEBUG oslo_service.periodic_task [None req-92a13bed-c081-4c7b-87f3-bafebefb79f2 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.12/site-packages/oslo_service/periodic_task.py:210
Oct 09 16:33:30 compute-0 nova_compute[117331]: 2025-10-09 16:33:30.321 2 DEBUG nova.virt.libvirt.driver [None req-81abd526-1b0a-42e6-a509-19b859194087 1c793380a6e945d69dacfd07f1f156f8 6d67ac3076434a4582e5db1ca7d043ff - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.12/site-packages/nova/virt/libvirt/driver.py:13022
Oct 09 16:33:30 compute-0 nova_compute[117331]: 2025-10-09 16:33:30.321 2 DEBUG nova.virt.libvirt.driver [None req-81abd526-1b0a-42e6-a509-19b859194087 1c793380a6e945d69dacfd07f1f156f8 6d67ac3076434a4582e5db1ca7d043ff - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.12/site-packages/nova/virt/libvirt/driver.py:13022
Oct 09 16:33:30 compute-0 nova_compute[117331]: 2025-10-09 16:33:30.321 2 DEBUG nova.virt.libvirt.driver [None req-81abd526-1b0a-42e6-a509-19b859194087 1c793380a6e945d69dacfd07f1f156f8 6d67ac3076434a4582e5db1ca7d043ff - - default default] No VIF found with MAC fa:16:3e:a0:cf:4a, not building metadata _build_interface_metadata /usr/lib/python3.12/site-packages/nova/virt/libvirt/driver.py:12998
Oct 09 16:33:30 compute-0 nova_compute[117331]: 2025-10-09 16:33:30.322 2 INFO nova.virt.libvirt.driver [None req-81abd526-1b0a-42e6-a509-19b859194087 1c793380a6e945d69dacfd07f1f156f8 6d67ac3076434a4582e5db1ca7d043ff - - default default] [instance: e08b0fc8-1e1e-4d97-b613-761b8f6f9674] Using config drive
Oct 09 16:33:30 compute-0 sshd-session[150804]: pam_unix(sshd:auth): check pass; user unknown
Oct 09 16:33:30 compute-0 sshd-session[150804]: pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=36.224.53.32
Oct 09 16:33:30 compute-0 nova_compute[117331]: 2025-10-09 16:33:30.817 2 DEBUG oslo_concurrency.lockutils [None req-92a13bed-c081-4c7b-87f3-bafebefb79f2 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:405
Oct 09 16:33:30 compute-0 nova_compute[117331]: 2025-10-09 16:33:30.818 2 DEBUG oslo_concurrency.lockutils [None req-92a13bed-c081-4c7b-87f3-bafebefb79f2 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:410
Oct 09 16:33:30 compute-0 nova_compute[117331]: 2025-10-09 16:33:30.818 2 DEBUG oslo_concurrency.lockutils [None req-92a13bed-c081-4c7b-87f3-bafebefb79f2 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:424
Oct 09 16:33:30 compute-0 nova_compute[117331]: 2025-10-09 16:33:30.818 2 DEBUG nova.compute.resource_tracker [None req-92a13bed-c081-4c7b-87f3-bafebefb79f2 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.12/site-packages/nova/compute/resource_tracker.py:937
Oct 09 16:33:30 compute-0 nova_compute[117331]: 2025-10-09 16:33:30.830 2 WARNING neutronclient.v2_0.client [None req-81abd526-1b0a-42e6-a509-19b859194087 1c793380a6e945d69dacfd07f1f156f8 6d67ac3076434a4582e5db1ca7d043ff - - default default] The python binding code in neutronclient is deprecated in favor of OpenstackSDK, please use that as this will be removed in a future release.
Oct 09 16:33:31 compute-0 nova_compute[117331]: 2025-10-09 16:33:31.380 2 INFO nova.virt.libvirt.driver [None req-81abd526-1b0a-42e6-a509-19b859194087 1c793380a6e945d69dacfd07f1f156f8 6d67ac3076434a4582e5db1ca7d043ff - - default default] [instance: e08b0fc8-1e1e-4d97-b613-761b8f6f9674] Creating config drive at /var/lib/nova/instances/e08b0fc8-1e1e-4d97-b613-761b8f6f9674/disk.config
Oct 09 16:33:31 compute-0 nova_compute[117331]: 2025-10-09 16:33:31.385 2 DEBUG oslo_concurrency.processutils [None req-81abd526-1b0a-42e6-a509-19b859194087 1c793380a6e945d69dacfd07f1f156f8 6d67ac3076434a4582e5db1ca7d043ff - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/e08b0fc8-1e1e-4d97-b613-761b8f6f9674/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 32.1.0-0.20251008143712.076498e.el10 -quiet -J -r -V config-2 /tmp/tmpkmochk15 execute /usr/lib/python3.12/site-packages/oslo_concurrency/processutils.py:349
Oct 09 16:33:31 compute-0 openstack_network_exporter[129925]: ERROR   16:33:31 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Oct 09 16:33:31 compute-0 openstack_network_exporter[129925]: ERROR   16:33:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Oct 09 16:33:31 compute-0 openstack_network_exporter[129925]: ERROR   16:33:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Oct 09 16:33:31 compute-0 openstack_network_exporter[129925]: ERROR   16:33:31 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Oct 09 16:33:31 compute-0 openstack_network_exporter[129925]: 
Oct 09 16:33:31 compute-0 openstack_network_exporter[129925]: ERROR   16:33:31 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Oct 09 16:33:31 compute-0 openstack_network_exporter[129925]: 
Oct 09 16:33:31 compute-0 nova_compute[117331]: 2025-10-09 16:33:31.509 2 DEBUG oslo_concurrency.processutils [None req-81abd526-1b0a-42e6-a509-19b859194087 1c793380a6e945d69dacfd07f1f156f8 6d67ac3076434a4582e5db1ca7d043ff - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/e08b0fc8-1e1e-4d97-b613-761b8f6f9674/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 32.1.0-0.20251008143712.076498e.el10 -quiet -J -r -V config-2 /tmp/tmpkmochk15" returned: 0 in 0.124s execute /usr/lib/python3.12/site-packages/oslo_concurrency/processutils.py:372
Oct 09 16:33:31 compute-0 kernel: tap8bc917d5-48: entered promiscuous mode
Oct 09 16:33:31 compute-0 NetworkManager[1028]: <info>  [1760027611.5862] manager: (tap8bc917d5-48): new Tun device (/org/freedesktop/NetworkManager/Devices/82)
Oct 09 16:33:31 compute-0 ovn_controller[19752]: 2025-10-09T16:33:31Z|00219|binding|INFO|Claiming lport 8bc917d5-4807-4e8c-8941-29822aa14c94 for this chassis.
Oct 09 16:33:31 compute-0 ovn_controller[19752]: 2025-10-09T16:33:31Z|00220|binding|INFO|8bc917d5-4807-4e8c-8941-29822aa14c94: Claiming fa:16:3e:a0:cf:4a 10.100.0.4
Oct 09 16:33:31 compute-0 nova_compute[117331]: 2025-10-09 16:33:31.587 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 09 16:33:31 compute-0 ovn_metadata_agent[28608]: 2025-10-09 16:33:31.595 28613 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:a0:cf:4a 10.100.0.4'], port_security=['fa:16:3e:a0:cf:4a 10.100.0.4'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.4/28', 'neutron:device_id': 'e08b0fc8-1e1e-4d97-b613-761b8f6f9674', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-54b37568-476a-40a0-b545-fe5401f85653', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '6d67ac3076434a4582e5db1ca7d043ff', 'neutron:revision_number': '4', 'neutron:security_group_ids': '94abd997-3903-47b3-abe3-e283a4232c96', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=dc641581-8c4b-4cef-982e-bb0cf11a52ba, chassis=[<ovs.db.idl.Row object at 0x7f37b2576840>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f37b2576840>], logical_port=8bc917d5-4807-4e8c-8941-29822aa14c94) old=Port_Binding(chassis=[]) matches /usr/lib/python3.12/site-packages/ovsdbapp/backend/ovs_idl/event.py:55
Oct 09 16:33:31 compute-0 ovn_metadata_agent[28608]: 2025-10-09 16:33:31.596 28613 INFO neutron.agent.ovn.metadata.agent [-] Port 8bc917d5-4807-4e8c-8941-29822aa14c94 in datapath 54b37568-476a-40a0-b545-fe5401f85653 bound to our chassis
Oct 09 16:33:31 compute-0 ovn_metadata_agent[28608]: 2025-10-09 16:33:31.597 28613 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 54b37568-476a-40a0-b545-fe5401f85653
Oct 09 16:33:31 compute-0 ovn_controller[19752]: 2025-10-09T16:33:31Z|00221|binding|INFO|Setting lport 8bc917d5-4807-4e8c-8941-29822aa14c94 ovn-installed in OVS
Oct 09 16:33:31 compute-0 ovn_controller[19752]: 2025-10-09T16:33:31Z|00222|binding|INFO|Setting lport 8bc917d5-4807-4e8c-8941-29822aa14c94 up in Southbound
Oct 09 16:33:31 compute-0 nova_compute[117331]: 2025-10-09 16:33:31.600 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 09 16:33:31 compute-0 nova_compute[117331]: 2025-10-09 16:33:31.602 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 09 16:33:31 compute-0 nova_compute[117331]: 2025-10-09 16:33:31.605 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 09 16:33:31 compute-0 ovn_metadata_agent[28608]: 2025-10-09 16:33:31.610 139687 DEBUG oslo.privsep.daemon [-] privsep: reply[a66ce95d-a9c0-4d51-b279-7c26c5742c79]: (4, False) _call_back /usr/lib/python3.12/site-packages/oslo_privsep/daemon.py:515
Oct 09 16:33:31 compute-0 ovn_metadata_agent[28608]: 2025-10-09 16:33:31.611 28613 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap54b37568-41 in ovnmeta-54b37568-476a-40a0-b545-fe5401f85653 namespace provision_datapath /usr/lib/python3.12/site-packages/neutron/agent/ovn/metadata/agent.py:794
Oct 09 16:33:31 compute-0 ovn_metadata_agent[28608]: 2025-10-09 16:33:31.613 139687 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap54b37568-40 not found in namespace None get_link_id /usr/lib/python3.12/site-packages/neutron/privileged/agent/linux/ip_lib.py:203
Oct 09 16:33:31 compute-0 ovn_metadata_agent[28608]: 2025-10-09 16:33:31.613 139687 DEBUG oslo.privsep.daemon [-] privsep: reply[e3b25de5-6247-4327-a6c9-2b360fb419a6]: (4, False) _call_back /usr/lib/python3.12/site-packages/oslo_privsep/daemon.py:515
Oct 09 16:33:31 compute-0 ovn_metadata_agent[28608]: 2025-10-09 16:33:31.613 139687 DEBUG oslo.privsep.daemon [-] privsep: reply[289fb13a-d451-4421-8c3e-c45510e5fe74]: (4, False) _call_back /usr/lib/python3.12/site-packages/oslo_privsep/daemon.py:515
Oct 09 16:33:31 compute-0 systemd-udevd[150845]: Network interface NamePolicy= disabled on kernel command line.
Oct 09 16:33:31 compute-0 ovn_metadata_agent[28608]: 2025-10-09 16:33:31.625 28727 DEBUG oslo.privsep.daemon [-] privsep: reply[53df9f7d-9007-40de-8ed7-cdef35698d06]: (4, None) _call_back /usr/lib/python3.12/site-packages/oslo_privsep/daemon.py:515
Oct 09 16:33:31 compute-0 NetworkManager[1028]: <info>  [1760027611.6414] device (tap8bc917d5-48): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Oct 09 16:33:31 compute-0 NetworkManager[1028]: <info>  [1760027611.6422] device (tap8bc917d5-48): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Oct 09 16:33:31 compute-0 systemd-machined[77487]: New machine qemu-18-instance-00000018.
Oct 09 16:33:31 compute-0 ovn_metadata_agent[28608]: 2025-10-09 16:33:31.650 139687 DEBUG oslo.privsep.daemon [-] privsep: reply[bb8b0997-f8b4-4368-9df0-6d57200c7ff3]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.12/site-packages/oslo_privsep/daemon.py:515
Oct 09 16:33:31 compute-0 systemd[1]: Started Virtual Machine qemu-18-instance-00000018.
Oct 09 16:33:31 compute-0 podman[150820]: 2025-10-09 16:33:31.671530092 +0000 UTC m=+0.090877327 container health_status 8227728d23f374cb9528be6ddf01dff38d8c792912a17b0d137932a18390b820 (image=38.102.83.66:5001/podified-master-centos10/openstack-iscsid:watcher_latest, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, container_name=iscsid, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251007, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': '38.102.83.66:5001/podified-master-centos10/openstack-iscsid:watcher_latest', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, config_id=iscsid, org.label-schema.vendor=CentOS, io.buildah.version=1.41.4, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 10 Base Image, tcib_build_tag=watcher_latest)
Oct 09 16:33:31 compute-0 podman[150819]: 2025-10-09 16:33:31.671725257 +0000 UTC m=+0.092659452 container health_status 0e966c7a283689daf23c5db5017f23f685ee836319240188a9f70ddd246563c5 (image=38.102.83.66:5001/podified-master-centos10/openstack-neutron-metadata-agent-ovn:watcher_latest, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 10 Base Image, tcib_managed=true, org.label-schema.vendor=CentOS, tcib_build_tag=watcher_latest, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': '38.102.83.66:5001/podified-master-centos10/openstack-neutron-metadata-agent-ovn:watcher_latest', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.4, 
managed_by=edpm_ansible, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251007, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2)
Oct 09 16:33:31 compute-0 ovn_metadata_agent[28608]: 2025-10-09 16:33:31.680 141048 DEBUG oslo.privsep.daemon [-] privsep: reply[82894eba-cada-42d1-af90-34bb7e2638f7]: (4, None) _call_back /usr/lib/python3.12/site-packages/oslo_privsep/daemon.py:515
Oct 09 16:33:31 compute-0 systemd-udevd[150859]: Network interface NamePolicy= disabled on kernel command line.
Oct 09 16:33:31 compute-0 ovn_metadata_agent[28608]: 2025-10-09 16:33:31.684 139687 DEBUG oslo.privsep.daemon [-] privsep: reply[701db321-ec37-4fcb-9859-c13e2fea1752]: (4, None) _call_back /usr/lib/python3.12/site-packages/oslo_privsep/daemon.py:515
Oct 09 16:33:31 compute-0 NetworkManager[1028]: <info>  [1760027611.6853] manager: (tap54b37568-40): new Veth device (/org/freedesktop/NetworkManager/Devices/83)
Oct 09 16:33:31 compute-0 ovn_metadata_agent[28608]: 2025-10-09 16:33:31.717 141048 DEBUG oslo.privsep.daemon [-] privsep: reply[d8f3cb42-6510-4611-935c-333df05c8d1e]: (4, None) _call_back /usr/lib/python3.12/site-packages/oslo_privsep/daemon.py:515
Oct 09 16:33:31 compute-0 ovn_metadata_agent[28608]: 2025-10-09 16:33:31.720 141048 DEBUG oslo.privsep.daemon [-] privsep: reply[3d3cb5c3-983a-4681-a267-7293f78bbc6f]: (4, None) _call_back /usr/lib/python3.12/site-packages/oslo_privsep/daemon.py:515
Oct 09 16:33:31 compute-0 NetworkManager[1028]: <info>  [1760027611.7422] device (tap54b37568-40): carrier: link connected
Oct 09 16:33:31 compute-0 ovn_metadata_agent[28608]: 2025-10-09 16:33:31.748 141048 DEBUG oslo.privsep.daemon [-] privsep: reply[2599a527-4866-4b60-8196-1c77d5c271f0]: (4, None) _call_back /usr/lib/python3.12/site-packages/oslo_privsep/daemon.py:515
Oct 09 16:33:31 compute-0 ovn_metadata_agent[28608]: 2025-10-09 16:33:31.765 139687 DEBUG oslo.privsep.daemon [-] privsep: reply[ae83f334-dae9-44f3-8709-8df6aa10a0e9]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap54b37568-41'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:50:3a:49'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 63], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 255534, 'reachable_time': 20185, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 150896, 'error': None, 'target': 'ovnmeta-54b37568-476a-40a0-b545-fe5401f85653', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.12/site-packages/oslo_privsep/daemon.py:515
Oct 09 16:33:31 compute-0 ovn_metadata_agent[28608]: 2025-10-09 16:33:31.780 139687 DEBUG oslo.privsep.daemon [-] privsep: reply[071bfc02-b138-4bb7-b1aa-2551a3f4ecfa]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe50:3a49'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 255534, 'tstamp': 255534}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 150897, 'error': None, 'target': 'ovnmeta-54b37568-476a-40a0-b545-fe5401f85653', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.12/site-packages/oslo_privsep/daemon.py:515
Oct 09 16:33:31 compute-0 ovn_metadata_agent[28608]: 2025-10-09 16:33:31.797 139687 DEBUG oslo.privsep.daemon [-] privsep: reply[66451c07-82b5-455c-80c5-ea2644b7ef2f]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap54b37568-41'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:50:3a:49'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 63], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 255534, 'reachable_time': 20185, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 150898, 'error': None, 'target': 'ovnmeta-54b37568-476a-40a0-b545-fe5401f85653', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.12/site-packages/oslo_privsep/daemon.py:515
Oct 09 16:33:31 compute-0 ovn_metadata_agent[28608]: 2025-10-09 16:33:31.829 139687 DEBUG oslo.privsep.daemon [-] privsep: reply[9db8d4ce-3d5a-4268-84f6-bee96f204528]: (4, None) _call_back /usr/lib/python3.12/site-packages/oslo_privsep/daemon.py:515
Oct 09 16:33:31 compute-0 nova_compute[117331]: 2025-10-09 16:33:31.866 2 DEBUG oslo_concurrency.processutils [None req-92a13bed-c081-4c7b-87f3-bafebefb79f2 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/e08b0fc8-1e1e-4d97-b613-761b8f6f9674/disk --force-share --output=json execute /usr/lib/python3.12/site-packages/oslo_concurrency/processutils.py:349
Oct 09 16:33:31 compute-0 ovn_metadata_agent[28608]: 2025-10-09 16:33:31.909 139687 DEBUG oslo.privsep.daemon [-] privsep: reply[3afeafa7-4d61-4256-9106-aa429128c1ff]: (4, None) _call_back /usr/lib/python3.12/site-packages/oslo_privsep/daemon.py:515
Oct 09 16:33:31 compute-0 ovn_metadata_agent[28608]: 2025-10-09 16:33:31.911 28613 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap54b37568-40, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.12/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 09 16:33:31 compute-0 ovn_metadata_agent[28608]: 2025-10-09 16:33:31.911 28613 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.12/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 09 16:33:31 compute-0 ovn_metadata_agent[28608]: 2025-10-09 16:33:31.912 28613 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap54b37568-40, may_exist=True, interface_attrs={}) do_commit /usr/lib/python3.12/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 09 16:33:31 compute-0 kernel: tap54b37568-40: entered promiscuous mode
Oct 09 16:33:31 compute-0 NetworkManager[1028]: <info>  [1760027611.9157] manager: (tap54b37568-40): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/84)
Oct 09 16:33:31 compute-0 nova_compute[117331]: 2025-10-09 16:33:31.914 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 09 16:33:31 compute-0 ovn_metadata_agent[28608]: 2025-10-09 16:33:31.918 28613 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap54b37568-40, col_values=(('external_ids', {'iface-id': '01dee844-02ac-4caa-80f7-8a16019cbd9d'}),), if_exists=True) do_commit /usr/lib/python3.12/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 09 16:33:31 compute-0 nova_compute[117331]: 2025-10-09 16:33:31.918 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 09 16:33:31 compute-0 nova_compute[117331]: 2025-10-09 16:33:31.919 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 09 16:33:31 compute-0 ovn_controller[19752]: 2025-10-09T16:33:31Z|00223|binding|INFO|Releasing lport 01dee844-02ac-4caa-80f7-8a16019cbd9d from this chassis (sb_readonly=0)
Oct 09 16:33:31 compute-0 nova_compute[117331]: 2025-10-09 16:33:31.931 2 DEBUG oslo_concurrency.processutils [None req-92a13bed-c081-4c7b-87f3-bafebefb79f2 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/e08b0fc8-1e1e-4d97-b613-761b8f6f9674/disk --force-share --output=json" returned: 0 in 0.065s execute /usr/lib/python3.12/site-packages/oslo_concurrency/processutils.py:372
Oct 09 16:33:31 compute-0 nova_compute[117331]: 2025-10-09 16:33:31.932 2 DEBUG oslo_concurrency.processutils [None req-92a13bed-c081-4c7b-87f3-bafebefb79f2 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/e08b0fc8-1e1e-4d97-b613-761b8f6f9674/disk --force-share --output=json execute /usr/lib/python3.12/site-packages/oslo_concurrency/processutils.py:349
Oct 09 16:33:31 compute-0 ovn_metadata_agent[28608]: 2025-10-09 16:33:31.939 139687 DEBUG oslo.privsep.daemon [-] privsep: reply[4d368150-8693-4866-8994-c9894bead045]: (4, '') _call_back /usr/lib/python3.12/site-packages/oslo_privsep/daemon.py:515
Oct 09 16:33:31 compute-0 ovn_metadata_agent[28608]: 2025-10-09 16:33:31.940 28613 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/54b37568-476a-40a0-b545-fe5401f85653.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/54b37568-476a-40a0-b545-fe5401f85653.pid.haproxy' get_value_from_file /usr/lib/python3.12/site-packages/neutron/agent/linux/utils.py:268
Oct 09 16:33:31 compute-0 ovn_metadata_agent[28608]: 2025-10-09 16:33:31.940 28613 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/54b37568-476a-40a0-b545-fe5401f85653.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/54b37568-476a-40a0-b545-fe5401f85653.pid.haproxy' get_value_from_file /usr/lib/python3.12/site-packages/neutron/agent/linux/utils.py:268
Oct 09 16:33:31 compute-0 ovn_metadata_agent[28608]: 2025-10-09 16:33:31.940 28613 DEBUG neutron.agent.linux.external_process [-] No haproxy process started for 54b37568-476a-40a0-b545-fe5401f85653 disable /usr/lib/python3.12/site-packages/neutron/agent/linux/external_process.py:173
Oct 09 16:33:31 compute-0 ovn_metadata_agent[28608]: 2025-10-09 16:33:31.940 28613 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/54b37568-476a-40a0-b545-fe5401f85653.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/54b37568-476a-40a0-b545-fe5401f85653.pid.haproxy' get_value_from_file /usr/lib/python3.12/site-packages/neutron/agent/linux/utils.py:268
Oct 09 16:33:31 compute-0 nova_compute[117331]: 2025-10-09 16:33:31.940 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 09 16:33:31 compute-0 ovn_metadata_agent[28608]: 2025-10-09 16:33:31.941 139687 DEBUG oslo.privsep.daemon [-] privsep: reply[a1584a2b-89fb-4e97-a06b-221e029aea12]: (4, None) _call_back /usr/lib/python3.12/site-packages/oslo_privsep/daemon.py:515
Oct 09 16:33:31 compute-0 ovn_metadata_agent[28608]: 2025-10-09 16:33:31.941 28613 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/54b37568-476a-40a0-b545-fe5401f85653.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/54b37568-476a-40a0-b545-fe5401f85653.pid.haproxy' get_value_from_file /usr/lib/python3.12/site-packages/neutron/agent/linux/utils.py:268
Oct 09 16:33:31 compute-0 ovn_metadata_agent[28608]: 2025-10-09 16:33:31.941 139687 DEBUG oslo.privsep.daemon [-] privsep: reply[891351c3-938a-490f-a769-472b11cf725f]: (4, None) _call_back /usr/lib/python3.12/site-packages/oslo_privsep/daemon.py:515
Oct 09 16:33:31 compute-0 ovn_metadata_agent[28608]: 2025-10-09 16:33:31.942 28613 DEBUG neutron.agent.metadata.driver_base [-] haproxy_cfg = 
Oct 09 16:33:31 compute-0 ovn_metadata_agent[28608]: global
Oct 09 16:33:31 compute-0 ovn_metadata_agent[28608]:     log         /dev/log local0 debug
Oct 09 16:33:31 compute-0 ovn_metadata_agent[28608]:     log-tag     haproxy-metadata-proxy-54b37568-476a-40a0-b545-fe5401f85653
Oct 09 16:33:31 compute-0 ovn_metadata_agent[28608]:     user        root
Oct 09 16:33:31 compute-0 ovn_metadata_agent[28608]:     group       root
Oct 09 16:33:31 compute-0 ovn_metadata_agent[28608]:     maxconn     1024
Oct 09 16:33:31 compute-0 ovn_metadata_agent[28608]:     pidfile     /var/lib/neutron/external/pids/54b37568-476a-40a0-b545-fe5401f85653.pid.haproxy
Oct 09 16:33:31 compute-0 ovn_metadata_agent[28608]:     daemon
Oct 09 16:33:31 compute-0 ovn_metadata_agent[28608]: 
Oct 09 16:33:31 compute-0 ovn_metadata_agent[28608]: defaults
Oct 09 16:33:31 compute-0 ovn_metadata_agent[28608]:     log global
Oct 09 16:33:31 compute-0 ovn_metadata_agent[28608]:     mode http
Oct 09 16:33:31 compute-0 ovn_metadata_agent[28608]:     option httplog
Oct 09 16:33:31 compute-0 ovn_metadata_agent[28608]:     option dontlognull
Oct 09 16:33:31 compute-0 ovn_metadata_agent[28608]:     option http-server-close
Oct 09 16:33:31 compute-0 ovn_metadata_agent[28608]:     option forwardfor
Oct 09 16:33:31 compute-0 ovn_metadata_agent[28608]:     retries                 3
Oct 09 16:33:31 compute-0 ovn_metadata_agent[28608]:     timeout http-request    30s
Oct 09 16:33:31 compute-0 ovn_metadata_agent[28608]:     timeout connect         30s
Oct 09 16:33:31 compute-0 ovn_metadata_agent[28608]:     timeout client          32s
Oct 09 16:33:31 compute-0 ovn_metadata_agent[28608]:     timeout server          32s
Oct 09 16:33:31 compute-0 ovn_metadata_agent[28608]:     timeout http-keep-alive 30s
Oct 09 16:33:31 compute-0 ovn_metadata_agent[28608]: 
Oct 09 16:33:31 compute-0 ovn_metadata_agent[28608]: listen listener
Oct 09 16:33:31 compute-0 ovn_metadata_agent[28608]:     bind 169.254.169.254:80
Oct 09 16:33:31 compute-0 ovn_metadata_agent[28608]:     
Oct 09 16:33:31 compute-0 ovn_metadata_agent[28608]:     server metadata /var/lib/neutron/metadata_proxy
Oct 09 16:33:31 compute-0 ovn_metadata_agent[28608]: 
Oct 09 16:33:31 compute-0 ovn_metadata_agent[28608]:     http-request add-header X-OVN-Network-ID 54b37568-476a-40a0-b545-fe5401f85653
Oct 09 16:33:31 compute-0 ovn_metadata_agent[28608]:  create_config_file /usr/lib/python3.12/site-packages/neutron/agent/metadata/driver_base.py:155
Oct 09 16:33:31 compute-0 ovn_metadata_agent[28608]: 2025-10-09 16:33:31.943 28613 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-54b37568-476a-40a0-b545-fe5401f85653', 'env', 'PROCESS_TAG=haproxy-54b37568-476a-40a0-b545-fe5401f85653', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/54b37568-476a-40a0-b545-fe5401f85653.conf'] create_process /usr/lib/python3.12/site-packages/neutron/agent/linux/utils.py:84
Oct 09 16:33:31 compute-0 nova_compute[117331]: 2025-10-09 16:33:31.988 2 DEBUG oslo_concurrency.processutils [None req-92a13bed-c081-4c7b-87f3-bafebefb79f2 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/e08b0fc8-1e1e-4d97-b613-761b8f6f9674/disk --force-share --output=json" returned: 0 in 0.055s execute /usr/lib/python3.12/site-packages/oslo_concurrency/processutils.py:372
Oct 09 16:33:31 compute-0 nova_compute[117331]: 2025-10-09 16:33:31.992 2 DEBUG nova.compute.manager [req-1b7f324f-288b-4e97-a370-7f2095895027 req-c3c85cf4-f97c-4e7d-9847-12fce42280e8 ee15961ad2be4bada6c106ac4f69f422 c076c42271004e7aa86e84416e33f826 - - default default] [instance: e08b0fc8-1e1e-4d97-b613-761b8f6f9674] Received event network-vif-plugged-8bc917d5-4807-4e8c-8941-29822aa14c94 external_instance_event /usr/lib/python3.12/site-packages/nova/compute/manager.py:11812
Oct 09 16:33:31 compute-0 nova_compute[117331]: 2025-10-09 16:33:31.992 2 DEBUG oslo_concurrency.lockutils [req-1b7f324f-288b-4e97-a370-7f2095895027 req-c3c85cf4-f97c-4e7d-9847-12fce42280e8 ee15961ad2be4bada6c106ac4f69f422 c076c42271004e7aa86e84416e33f826 - - default default] Acquiring lock "e08b0fc8-1e1e-4d97-b613-761b8f6f9674-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:405
Oct 09 16:33:31 compute-0 nova_compute[117331]: 2025-10-09 16:33:31.992 2 DEBUG oslo_concurrency.lockutils [req-1b7f324f-288b-4e97-a370-7f2095895027 req-c3c85cf4-f97c-4e7d-9847-12fce42280e8 ee15961ad2be4bada6c106ac4f69f422 c076c42271004e7aa86e84416e33f826 - - default default] Lock "e08b0fc8-1e1e-4d97-b613-761b8f6f9674-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:410
Oct 09 16:33:31 compute-0 nova_compute[117331]: 2025-10-09 16:33:31.992 2 DEBUG oslo_concurrency.lockutils [req-1b7f324f-288b-4e97-a370-7f2095895027 req-c3c85cf4-f97c-4e7d-9847-12fce42280e8 ee15961ad2be4bada6c106ac4f69f422 c076c42271004e7aa86e84416e33f826 - - default default] Lock "e08b0fc8-1e1e-4d97-b613-761b8f6f9674-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:424
Oct 09 16:33:31 compute-0 nova_compute[117331]: 2025-10-09 16:33:31.993 2 DEBUG nova.compute.manager [req-1b7f324f-288b-4e97-a370-7f2095895027 req-c3c85cf4-f97c-4e7d-9847-12fce42280e8 ee15961ad2be4bada6c106ac4f69f422 c076c42271004e7aa86e84416e33f826 - - default default] [instance: e08b0fc8-1e1e-4d97-b613-761b8f6f9674] Processing event network-vif-plugged-8bc917d5-4807-4e8c-8941-29822aa14c94 _process_instance_event /usr/lib/python3.12/site-packages/nova/compute/manager.py:11572
Oct 09 16:33:32 compute-0 nova_compute[117331]: 2025-10-09 16:33:32.148 2 WARNING nova.virt.libvirt.driver [None req-92a13bed-c081-4c7b-87f3-bafebefb79f2 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Oct 09 16:33:32 compute-0 nova_compute[117331]: 2025-10-09 16:33:32.149 2 DEBUG oslo_concurrency.processutils [None req-92a13bed-c081-4c7b-87f3-bafebefb79f2 - - - - - -] Running cmd (subprocess): env LANG=C uptime execute /usr/lib/python3.12/site-packages/oslo_concurrency/processutils.py:349
Oct 09 16:33:32 compute-0 nova_compute[117331]: 2025-10-09 16:33:32.157 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 09 16:33:32 compute-0 nova_compute[117331]: 2025-10-09 16:33:32.171 2 DEBUG oslo_concurrency.processutils [None req-92a13bed-c081-4c7b-87f3-bafebefb79f2 - - - - - -] CMD "env LANG=C uptime" returned: 0 in 0.022s execute /usr/lib/python3.12/site-packages/oslo_concurrency/processutils.py:372
Oct 09 16:33:32 compute-0 nova_compute[117331]: 2025-10-09 16:33:32.172 2 DEBUG nova.compute.resource_tracker [None req-92a13bed-c081-4c7b-87f3-bafebefb79f2 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=6162MB free_disk=73.2570915222168GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": 
"label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.12/site-packages/nova/compute/resource_tracker.py:1136
Oct 09 16:33:32 compute-0 nova_compute[117331]: 2025-10-09 16:33:32.172 2 DEBUG oslo_concurrency.lockutils [None req-92a13bed-c081-4c7b-87f3-bafebefb79f2 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:405
Oct 09 16:33:32 compute-0 nova_compute[117331]: 2025-10-09 16:33:32.172 2 DEBUG oslo_concurrency.lockutils [None req-92a13bed-c081-4c7b-87f3-bafebefb79f2 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:410
Oct 09 16:33:32 compute-0 podman[150937]: 2025-10-09 16:33:32.386102271 +0000 UTC m=+0.053199000 container create 1610d2110c9c8a73ff26120c621cc4a6687ef8af17d13c702380a2f01b516bf0 (image=38.102.83.66:5001/podified-master-centos10/openstack-neutron-metadata-agent-ovn:watcher_latest, name=neutron-haproxy-ovnmeta-54b37568-476a-40a0-b545-fe5401f85653, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 10 Base Image, tcib_build_tag=watcher_latest, org.label-schema.build-date=20251007, org.label-schema.vendor=CentOS, io.buildah.version=1.41.4, org.label-schema.schema-version=1.0, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team)
Oct 09 16:33:32 compute-0 systemd[1]: Started libpod-conmon-1610d2110c9c8a73ff26120c621cc4a6687ef8af17d13c702380a2f01b516bf0.scope.
Oct 09 16:33:32 compute-0 podman[150937]: 2025-10-09 16:33:32.353823277 +0000 UTC m=+0.020920036 image pull 4e15c189a95c5dcd1203c76fe304de5c8f8616da9a258ca168cc54a517f49b87 38.102.83.66:5001/podified-master-centos10/openstack-neutron-metadata-agent-ovn:watcher_latest
Oct 09 16:33:32 compute-0 systemd[1]: Started libcrun container.
Oct 09 16:33:32 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5ada7c0d9f4e9a0f8f130479c33c5498c4cc90405e1695e99353c1264445925f/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Oct 09 16:33:32 compute-0 podman[150937]: 2025-10-09 16:33:32.481123228 +0000 UTC m=+0.148219997 container init 1610d2110c9c8a73ff26120c621cc4a6687ef8af17d13c702380a2f01b516bf0 (image=38.102.83.66:5001/podified-master-centos10/openstack-neutron-metadata-agent-ovn:watcher_latest, name=neutron-haproxy-ovnmeta-54b37568-476a-40a0-b545-fe5401f85653, org.label-schema.schema-version=1.0, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 10 Base Image, tcib_build_tag=watcher_latest, org.label-schema.build-date=20251007, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, io.buildah.version=1.41.4)
Oct 09 16:33:32 compute-0 podman[150937]: 2025-10-09 16:33:32.488295617 +0000 UTC m=+0.155392356 container start 1610d2110c9c8a73ff26120c621cc4a6687ef8af17d13c702380a2f01b516bf0 (image=38.102.83.66:5001/podified-master-centos10/openstack-neutron-metadata-agent-ovn:watcher_latest, name=neutron-haproxy-ovnmeta-54b37568-476a-40a0-b545-fe5401f85653, org.label-schema.build-date=20251007, io.buildah.version=1.41.4, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=watcher_latest)
Oct 09 16:33:32 compute-0 neutron-haproxy-ovnmeta-54b37568-476a-40a0-b545-fe5401f85653[150952]: [NOTICE]   (150956) : New worker (150958) forked
Oct 09 16:33:32 compute-0 neutron-haproxy-ovnmeta-54b37568-476a-40a0-b545-fe5401f85653[150952]: [NOTICE]   (150956) : Loading success.
Oct 09 16:33:32 compute-0 sshd-session[150804]: Failed password for invalid user zabbix from 36.224.53.32 port 46356 ssh2
Oct 09 16:33:32 compute-0 sshd-session[150804]: Connection closed by invalid user zabbix 36.224.53.32 port 46356 [preauth]
Oct 09 16:33:33 compute-0 nova_compute[117331]: 2025-10-09 16:33:33.224 2 DEBUG nova.compute.resource_tracker [None req-92a13bed-c081-4c7b-87f3-bafebefb79f2 - - - - - -] Instance e08b0fc8-1e1e-4d97-b613-761b8f6f9674 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 1151, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.12/site-packages/nova/compute/resource_tracker.py:1740
Oct 09 16:33:33 compute-0 nova_compute[117331]: 2025-10-09 16:33:33.225 2 DEBUG nova.compute.resource_tracker [None req-92a13bed-c081-4c7b-87f3-bafebefb79f2 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 1 _report_final_resource_view /usr/lib/python3.12/site-packages/nova/compute/resource_tracker.py:1159
Oct 09 16:33:33 compute-0 nova_compute[117331]: 2025-10-09 16:33:33.225 2 DEBUG nova.compute.resource_tracker [None req-92a13bed-c081-4c7b-87f3-bafebefb79f2 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=1663MB phys_disk=79GB used_disk=1GB total_vcpus=8 used_vcpus=1 pci_stats=[] stats={'failed_builds': '0', 'uptime': ' 16:33:32 up 42 min,  0 user,  load average: 0.59, 0.63, 0.49\n', 'num_instances': '1', 'num_vm_building': '1', 'num_task_spawning': '1', 'num_os_type_None': '1', 'num_proj_6d67ac3076434a4582e5db1ca7d043ff': '1', 'io_workload': '1'} _report_final_resource_view /usr/lib/python3.12/site-packages/nova/compute/resource_tracker.py:1168
Oct 09 16:33:33 compute-0 nova_compute[117331]: 2025-10-09 16:33:33.243 2 DEBUG nova.compute.manager [None req-81abd526-1b0a-42e6-a509-19b859194087 1c793380a6e945d69dacfd07f1f156f8 6d67ac3076434a4582e5db1ca7d043ff - - default default] [instance: e08b0fc8-1e1e-4d97-b613-761b8f6f9674] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.12/site-packages/nova/compute/manager.py:602
Oct 09 16:33:33 compute-0 nova_compute[117331]: 2025-10-09 16:33:33.247 2 DEBUG nova.virt.libvirt.driver [None req-81abd526-1b0a-42e6-a509-19b859194087 1c793380a6e945d69dacfd07f1f156f8 6d67ac3076434a4582e5db1ca7d043ff - - default default] [instance: e08b0fc8-1e1e-4d97-b613-761b8f6f9674] Guest created on hypervisor spawn /usr/lib/python3.12/site-packages/nova/virt/libvirt/driver.py:4816
Oct 09 16:33:33 compute-0 nova_compute[117331]: 2025-10-09 16:33:33.249 2 INFO nova.virt.libvirt.driver [-] [instance: e08b0fc8-1e1e-4d97-b613-761b8f6f9674] Instance spawned successfully.
Oct 09 16:33:33 compute-0 nova_compute[117331]: 2025-10-09 16:33:33.249 2 DEBUG nova.virt.libvirt.driver [None req-81abd526-1b0a-42e6-a509-19b859194087 1c793380a6e945d69dacfd07f1f156f8 6d67ac3076434a4582e5db1ca7d043ff - - default default] [instance: e08b0fc8-1e1e-4d97-b613-761b8f6f9674] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.12/site-packages/nova/virt/libvirt/driver.py:985
Oct 09 16:33:33 compute-0 nova_compute[117331]: 2025-10-09 16:33:33.300 2 DEBUG nova.compute.provider_tree [None req-92a13bed-c081-4c7b-87f3-bafebefb79f2 - - - - - -] Inventory has not changed in ProviderTree for provider: 593051b8-2000-437f-a915-2616fc8b1671 update_inventory /usr/lib/python3.12/site-packages/nova/compute/provider_tree.py:180
Oct 09 16:33:33 compute-0 nova_compute[117331]: 2025-10-09 16:33:33.765 2 DEBUG nova.virt.libvirt.driver [None req-81abd526-1b0a-42e6-a509-19b859194087 1c793380a6e945d69dacfd07f1f156f8 6d67ac3076434a4582e5db1ca7d043ff - - default default] [instance: e08b0fc8-1e1e-4d97-b613-761b8f6f9674] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.12/site-packages/nova/virt/libvirt/driver.py:1014
Oct 09 16:33:33 compute-0 nova_compute[117331]: 2025-10-09 16:33:33.765 2 DEBUG nova.virt.libvirt.driver [None req-81abd526-1b0a-42e6-a509-19b859194087 1c793380a6e945d69dacfd07f1f156f8 6d67ac3076434a4582e5db1ca7d043ff - - default default] [instance: e08b0fc8-1e1e-4d97-b613-761b8f6f9674] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.12/site-packages/nova/virt/libvirt/driver.py:1014
Oct 09 16:33:33 compute-0 nova_compute[117331]: 2025-10-09 16:33:33.766 2 DEBUG nova.virt.libvirt.driver [None req-81abd526-1b0a-42e6-a509-19b859194087 1c793380a6e945d69dacfd07f1f156f8 6d67ac3076434a4582e5db1ca7d043ff - - default default] [instance: e08b0fc8-1e1e-4d97-b613-761b8f6f9674] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.12/site-packages/nova/virt/libvirt/driver.py:1014
Oct 09 16:33:33 compute-0 nova_compute[117331]: 2025-10-09 16:33:33.766 2 DEBUG nova.virt.libvirt.driver [None req-81abd526-1b0a-42e6-a509-19b859194087 1c793380a6e945d69dacfd07f1f156f8 6d67ac3076434a4582e5db1ca7d043ff - - default default] [instance: e08b0fc8-1e1e-4d97-b613-761b8f6f9674] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.12/site-packages/nova/virt/libvirt/driver.py:1014
Oct 09 16:33:33 compute-0 nova_compute[117331]: 2025-10-09 16:33:33.767 2 DEBUG nova.virt.libvirt.driver [None req-81abd526-1b0a-42e6-a509-19b859194087 1c793380a6e945d69dacfd07f1f156f8 6d67ac3076434a4582e5db1ca7d043ff - - default default] [instance: e08b0fc8-1e1e-4d97-b613-761b8f6f9674] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.12/site-packages/nova/virt/libvirt/driver.py:1014
Oct 09 16:33:33 compute-0 nova_compute[117331]: 2025-10-09 16:33:33.767 2 DEBUG nova.virt.libvirt.driver [None req-81abd526-1b0a-42e6-a509-19b859194087 1c793380a6e945d69dacfd07f1f156f8 6d67ac3076434a4582e5db1ca7d043ff - - default default] [instance: e08b0fc8-1e1e-4d97-b613-761b8f6f9674] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.12/site-packages/nova/virt/libvirt/driver.py:1014
Oct 09 16:33:33 compute-0 nova_compute[117331]: 2025-10-09 16:33:33.770 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 09 16:33:33 compute-0 nova_compute[117331]: 2025-10-09 16:33:33.807 2 DEBUG nova.scheduler.client.report [None req-92a13bed-c081-4c7b-87f3-bafebefb79f2 - - - - - -] Inventory has not changed for provider 593051b8-2000-437f-a915-2616fc8b1671 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 79, 'reserved': 1, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.12/site-packages/nova/scheduler/client/report.py:958
Oct 09 16:33:34 compute-0 nova_compute[117331]: 2025-10-09 16:33:34.057 2 DEBUG nova.compute.manager [req-f7c7ea4f-84ba-40c7-9860-2f730c06442e req-ed04c23e-0d24-4da4-a677-2e18b9bb4457 ee15961ad2be4bada6c106ac4f69f422 c076c42271004e7aa86e84416e33f826 - - default default] [instance: e08b0fc8-1e1e-4d97-b613-761b8f6f9674] Received event network-vif-plugged-8bc917d5-4807-4e8c-8941-29822aa14c94 external_instance_event /usr/lib/python3.12/site-packages/nova/compute/manager.py:11812
Oct 09 16:33:34 compute-0 nova_compute[117331]: 2025-10-09 16:33:34.057 2 DEBUG oslo_concurrency.lockutils [req-f7c7ea4f-84ba-40c7-9860-2f730c06442e req-ed04c23e-0d24-4da4-a677-2e18b9bb4457 ee15961ad2be4bada6c106ac4f69f422 c076c42271004e7aa86e84416e33f826 - - default default] Acquiring lock "e08b0fc8-1e1e-4d97-b613-761b8f6f9674-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:405
Oct 09 16:33:34 compute-0 nova_compute[117331]: 2025-10-09 16:33:34.057 2 DEBUG oslo_concurrency.lockutils [req-f7c7ea4f-84ba-40c7-9860-2f730c06442e req-ed04c23e-0d24-4da4-a677-2e18b9bb4457 ee15961ad2be4bada6c106ac4f69f422 c076c42271004e7aa86e84416e33f826 - - default default] Lock "e08b0fc8-1e1e-4d97-b613-761b8f6f9674-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:410
Oct 09 16:33:34 compute-0 nova_compute[117331]: 2025-10-09 16:33:34.057 2 DEBUG oslo_concurrency.lockutils [req-f7c7ea4f-84ba-40c7-9860-2f730c06442e req-ed04c23e-0d24-4da4-a677-2e18b9bb4457 ee15961ad2be4bada6c106ac4f69f422 c076c42271004e7aa86e84416e33f826 - - default default] Lock "e08b0fc8-1e1e-4d97-b613-761b8f6f9674-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:424
Oct 09 16:33:34 compute-0 nova_compute[117331]: 2025-10-09 16:33:34.057 2 DEBUG nova.compute.manager [req-f7c7ea4f-84ba-40c7-9860-2f730c06442e req-ed04c23e-0d24-4da4-a677-2e18b9bb4457 ee15961ad2be4bada6c106ac4f69f422 c076c42271004e7aa86e84416e33f826 - - default default] [instance: e08b0fc8-1e1e-4d97-b613-761b8f6f9674] No waiting events found dispatching network-vif-plugged-8bc917d5-4807-4e8c-8941-29822aa14c94 pop_instance_event /usr/lib/python3.12/site-packages/nova/compute/manager.py:344
Oct 09 16:33:34 compute-0 nova_compute[117331]: 2025-10-09 16:33:34.058 2 WARNING nova.compute.manager [req-f7c7ea4f-84ba-40c7-9860-2f730c06442e req-ed04c23e-0d24-4da4-a677-2e18b9bb4457 ee15961ad2be4bada6c106ac4f69f422 c076c42271004e7aa86e84416e33f826 - - default default] [instance: e08b0fc8-1e1e-4d97-b613-761b8f6f9674] Received unexpected event network-vif-plugged-8bc917d5-4807-4e8c-8941-29822aa14c94 for instance with vm_state building and task_state spawning.
Oct 09 16:33:34 compute-0 nova_compute[117331]: 2025-10-09 16:33:34.276 2 INFO nova.compute.manager [None req-81abd526-1b0a-42e6-a509-19b859194087 1c793380a6e945d69dacfd07f1f156f8 6d67ac3076434a4582e5db1ca7d043ff - - default default] [instance: e08b0fc8-1e1e-4d97-b613-761b8f6f9674] Took 9.43 seconds to spawn the instance on the hypervisor.
Oct 09 16:33:34 compute-0 nova_compute[117331]: 2025-10-09 16:33:34.276 2 DEBUG nova.compute.manager [None req-81abd526-1b0a-42e6-a509-19b859194087 1c793380a6e945d69dacfd07f1f156f8 6d67ac3076434a4582e5db1ca7d043ff - - default default] [instance: e08b0fc8-1e1e-4d97-b613-761b8f6f9674] Checking state _get_power_state /usr/lib/python3.12/site-packages/nova/compute/manager.py:1825
Oct 09 16:33:34 compute-0 nova_compute[117331]: 2025-10-09 16:33:34.315 2 DEBUG nova.compute.resource_tracker [None req-92a13bed-c081-4c7b-87f3-bafebefb79f2 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.12/site-packages/nova/compute/resource_tracker.py:1097
Oct 09 16:33:34 compute-0 nova_compute[117331]: 2025-10-09 16:33:34.316 2 DEBUG oslo_concurrency.lockutils [None req-92a13bed-c081-4c7b-87f3-bafebefb79f2 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 2.143s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:424
Oct 09 16:33:34 compute-0 nova_compute[117331]: 2025-10-09 16:33:34.810 2 INFO nova.compute.manager [None req-81abd526-1b0a-42e6-a509-19b859194087 1c793380a6e945d69dacfd07f1f156f8 6d67ac3076434a4582e5db1ca7d043ff - - default default] [instance: e08b0fc8-1e1e-4d97-b613-761b8f6f9674] Took 14.62 seconds to build instance.
Oct 09 16:33:35 compute-0 nova_compute[117331]: 2025-10-09 16:33:35.318 2 DEBUG oslo_concurrency.lockutils [None req-81abd526-1b0a-42e6-a509-19b859194087 1c793380a6e945d69dacfd07f1f156f8 6d67ac3076434a4582e5db1ca7d043ff - - default default] Lock "e08b0fc8-1e1e-4d97-b613-761b8f6f9674" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 16.144s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:424
Oct 09 16:33:35 compute-0 ovn_metadata_agent[28608]: 2025-10-09 16:33:35.328 28613 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:405
Oct 09 16:33:35 compute-0 ovn_metadata_agent[28608]: 2025-10-09 16:33:35.329 28613 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:410
Oct 09 16:33:35 compute-0 ovn_metadata_agent[28608]: 2025-10-09 16:33:35.329 28613 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:424
Oct 09 16:33:37 compute-0 sshd-session[150974]: Invalid user test from 36.224.53.32 port 55188
Oct 09 16:33:37 compute-0 nova_compute[117331]: 2025-10-09 16:33:37.159 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 09 16:33:37 compute-0 nova_compute[117331]: 2025-10-09 16:33:37.316 2 DEBUG oslo_service.periodic_task [None req-92a13bed-c081-4c7b-87f3-bafebefb79f2 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.12/site-packages/oslo_service/periodic_task.py:210
Oct 09 16:33:37 compute-0 nova_compute[117331]: 2025-10-09 16:33:37.316 2 DEBUG oslo_service.periodic_task [None req-92a13bed-c081-4c7b-87f3-bafebefb79f2 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.12/site-packages/oslo_service/periodic_task.py:210
Oct 09 16:33:37 compute-0 nova_compute[117331]: 2025-10-09 16:33:37.316 2 DEBUG oslo_service.periodic_task [None req-92a13bed-c081-4c7b-87f3-bafebefb79f2 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.12/site-packages/oslo_service/periodic_task.py:210
Oct 09 16:33:37 compute-0 sshd-session[150974]: pam_unix(sshd:auth): check pass; user unknown
Oct 09 16:33:37 compute-0 sshd-session[150974]: pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=36.224.53.32
Oct 09 16:33:38 compute-0 nova_compute[117331]: 2025-10-09 16:33:38.314 2 DEBUG oslo_concurrency.lockutils [None req-7ad2ce8a-2ab7-48f4-923c-a933cdddadfb 1c793380a6e945d69dacfd07f1f156f8 6d67ac3076434a4582e5db1ca7d043ff - - default default] Acquiring lock "d2bb4f53-7374-48bc-8bd3-5de9d41372b6" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:405
Oct 09 16:33:38 compute-0 nova_compute[117331]: 2025-10-09 16:33:38.314 2 DEBUG oslo_concurrency.lockutils [None req-7ad2ce8a-2ab7-48f4-923c-a933cdddadfb 1c793380a6e945d69dacfd07f1f156f8 6d67ac3076434a4582e5db1ca7d043ff - - default default] Lock "d2bb4f53-7374-48bc-8bd3-5de9d41372b6" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.000s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:410
Oct 09 16:33:38 compute-0 nova_compute[117331]: 2025-10-09 16:33:38.772 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 09 16:33:38 compute-0 nova_compute[117331]: 2025-10-09 16:33:38.820 2 DEBUG nova.compute.manager [None req-7ad2ce8a-2ab7-48f4-923c-a933cdddadfb 1c793380a6e945d69dacfd07f1f156f8 6d67ac3076434a4582e5db1ca7d043ff - - default default] [instance: d2bb4f53-7374-48bc-8bd3-5de9d41372b6] Starting instance... _do_build_and_run_instance /usr/lib/python3.12/site-packages/nova/compute/manager.py:2472
Oct 09 16:33:39 compute-0 nova_compute[117331]: 2025-10-09 16:33:39.368 2 DEBUG oslo_concurrency.lockutils [None req-7ad2ce8a-2ab7-48f4-923c-a933cdddadfb 1c793380a6e945d69dacfd07f1f156f8 6d67ac3076434a4582e5db1ca7d043ff - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:405
Oct 09 16:33:39 compute-0 nova_compute[117331]: 2025-10-09 16:33:39.369 2 DEBUG oslo_concurrency.lockutils [None req-7ad2ce8a-2ab7-48f4-923c-a933cdddadfb 1c793380a6e945d69dacfd07f1f156f8 6d67ac3076434a4582e5db1ca7d043ff - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.000s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:410
Oct 09 16:33:39 compute-0 nova_compute[117331]: 2025-10-09 16:33:39.375 2 DEBUG nova.virt.hardware [None req-7ad2ce8a-2ab7-48f4-923c-a933cdddadfb 1c793380a6e945d69dacfd07f1f156f8 6d67ac3076434a4582e5db1ca7d043ff - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.12/site-packages/nova/virt/hardware.py:2528
Oct 09 16:33:39 compute-0 nova_compute[117331]: 2025-10-09 16:33:39.375 2 INFO nova.compute.claims [None req-7ad2ce8a-2ab7-48f4-923c-a933cdddadfb 1c793380a6e945d69dacfd07f1f156f8 6d67ac3076434a4582e5db1ca7d043ff - - default default] [instance: d2bb4f53-7374-48bc-8bd3-5de9d41372b6] Claim successful on node compute-0.ctlplane.example.com
Oct 09 16:33:39 compute-0 podman[150977]: 2025-10-09 16:33:39.845367388 +0000 UTC m=+0.064563772 container health_status 64fa251112d01abb34ec1bb8e22019815ddcaaaa391ce5ec91517147ac5a9994 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, io.openshift.expose-services=, distribution-scope=public, vendor=Red Hat, Inc., com.redhat.component=ubi9-minimal-container, container_name=openstack_network_exporter, name=ubi9-minimal, vcs-type=git, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, config_id=edpm, maintainer=Red Hat, Inc., summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., managed_by=edpm_ansible, release=1755695350, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, architecture=x86_64, url=https://catalog.redhat.com/en/search?searchType=containers, io.openshift.tags=minimal rhel9, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, build-date=2025-08-20T13:12:41, io.buildah.version=1.33.7, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., version=9.6, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI)
Oct 09 16:33:39 compute-0 podman[150978]: 2025-10-09 16:33:39.879405479 +0000 UTC m=+0.096576819 container health_status d615e4f24ff62fbb14be4a63e6da568ec5b22309773c1c66103b6b4de64a8a2f (image=38.102.83.66:5001/podified-master-centos10/openstack-ovn-controller:watcher_latest, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, config_id=ovn_controller, managed_by=edpm_ansible, org.label-schema.build-date=20251007, org.label-schema.vendor=CentOS, tcib_build_tag=watcher_latest, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': '38.102.83.66:5001/podified-master-centos10/openstack-ovn-controller:watcher_latest', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, io.buildah.version=1.41.4, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 10 Base Image)
Oct 09 16:33:39 compute-0 sshd-session[150974]: Failed password for invalid user test from 36.224.53.32 port 55188 ssh2
Oct 09 16:33:40 compute-0 nova_compute[117331]: 2025-10-09 16:33:40.438 2 DEBUG nova.compute.provider_tree [None req-7ad2ce8a-2ab7-48f4-923c-a933cdddadfb 1c793380a6e945d69dacfd07f1f156f8 6d67ac3076434a4582e5db1ca7d043ff - - default default] Inventory has not changed in ProviderTree for provider: 593051b8-2000-437f-a915-2616fc8b1671 update_inventory /usr/lib/python3.12/site-packages/nova/compute/provider_tree.py:180
Oct 09 16:33:40 compute-0 sshd-session[150974]: Connection closed by invalid user test 36.224.53.32 port 55188 [preauth]
Oct 09 16:33:41 compute-0 nova_compute[117331]: 2025-10-09 16:33:41.035 2 DEBUG nova.scheduler.client.report [None req-7ad2ce8a-2ab7-48f4-923c-a933cdddadfb 1c793380a6e945d69dacfd07f1f156f8 6d67ac3076434a4582e5db1ca7d043ff - - default default] Inventory has not changed for provider 593051b8-2000-437f-a915-2616fc8b1671 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 79, 'reserved': 1, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.12/site-packages/nova/scheduler/client/report.py:958
Oct 09 16:33:41 compute-0 nova_compute[117331]: 2025-10-09 16:33:41.628 2 DEBUG oslo_concurrency.lockutils [None req-7ad2ce8a-2ab7-48f4-923c-a933cdddadfb 1c793380a6e945d69dacfd07f1f156f8 6d67ac3076434a4582e5db1ca7d043ff - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 2.260s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:424
Oct 09 16:33:41 compute-0 nova_compute[117331]: 2025-10-09 16:33:41.630 2 DEBUG nova.compute.manager [None req-7ad2ce8a-2ab7-48f4-923c-a933cdddadfb 1c793380a6e945d69dacfd07f1f156f8 6d67ac3076434a4582e5db1ca7d043ff - - default default] [instance: d2bb4f53-7374-48bc-8bd3-5de9d41372b6] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.12/site-packages/nova/compute/manager.py:2869
Oct 09 16:33:42 compute-0 nova_compute[117331]: 2025-10-09 16:33:42.141 2 DEBUG nova.compute.manager [None req-7ad2ce8a-2ab7-48f4-923c-a933cdddadfb 1c793380a6e945d69dacfd07f1f156f8 6d67ac3076434a4582e5db1ca7d043ff - - default default] [instance: d2bb4f53-7374-48bc-8bd3-5de9d41372b6] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.12/site-packages/nova/compute/manager.py:2016
Oct 09 16:33:42 compute-0 nova_compute[117331]: 2025-10-09 16:33:42.142 2 DEBUG nova.network.neutron [None req-7ad2ce8a-2ab7-48f4-923c-a933cdddadfb 1c793380a6e945d69dacfd07f1f156f8 6d67ac3076434a4582e5db1ca7d043ff - - default default] [instance: d2bb4f53-7374-48bc-8bd3-5de9d41372b6] allocate_for_instance() allocate_for_instance /usr/lib/python3.12/site-packages/nova/network/neutron.py:1208
Oct 09 16:33:42 compute-0 nova_compute[117331]: 2025-10-09 16:33:42.142 2 WARNING neutronclient.v2_0.client [None req-7ad2ce8a-2ab7-48f4-923c-a933cdddadfb 1c793380a6e945d69dacfd07f1f156f8 6d67ac3076434a4582e5db1ca7d043ff - - default default] The python binding code in neutronclient is deprecated in favor of OpenstackSDK, please use that as this will be removed in a future release.
Oct 09 16:33:42 compute-0 nova_compute[117331]: 2025-10-09 16:33:42.143 2 WARNING neutronclient.v2_0.client [None req-7ad2ce8a-2ab7-48f4-923c-a933cdddadfb 1c793380a6e945d69dacfd07f1f156f8 6d67ac3076434a4582e5db1ca7d043ff - - default default] The python binding code in neutronclient is deprecated in favor of OpenstackSDK, please use that as this will be removed in a future release.
Oct 09 16:33:42 compute-0 nova_compute[117331]: 2025-10-09 16:33:42.161 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 09 16:33:42 compute-0 nova_compute[117331]: 2025-10-09 16:33:42.650 2 INFO nova.virt.libvirt.driver [None req-7ad2ce8a-2ab7-48f4-923c-a933cdddadfb 1c793380a6e945d69dacfd07f1f156f8 6d67ac3076434a4582e5db1ca7d043ff - - default default] [instance: d2bb4f53-7374-48bc-8bd3-5de9d41372b6] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Oct 09 16:33:43 compute-0 nova_compute[117331]: 2025-10-09 16:33:43.052 2 DEBUG nova.network.neutron [None req-7ad2ce8a-2ab7-48f4-923c-a933cdddadfb 1c793380a6e945d69dacfd07f1f156f8 6d67ac3076434a4582e5db1ca7d043ff - - default default] [instance: d2bb4f53-7374-48bc-8bd3-5de9d41372b6] Successfully created port: 4b893ed9-8bb2-41b7-9842-b530d7cc9ccd _create_port_minimal /usr/lib/python3.12/site-packages/nova/network/neutron.py:550
Oct 09 16:33:43 compute-0 nova_compute[117331]: 2025-10-09 16:33:43.160 2 DEBUG nova.compute.manager [None req-7ad2ce8a-2ab7-48f4-923c-a933cdddadfb 1c793380a6e945d69dacfd07f1f156f8 6d67ac3076434a4582e5db1ca7d043ff - - default default] [instance: d2bb4f53-7374-48bc-8bd3-5de9d41372b6] Start building block device mappings for instance. _build_resources /usr/lib/python3.12/site-packages/nova/compute/manager.py:2904
Oct 09 16:33:43 compute-0 sshd-session[151020]: Invalid user dspace from 36.224.53.32 port 35270
Oct 09 16:33:43 compute-0 nova_compute[117331]: 2025-10-09 16:33:43.657 2 DEBUG oslo_service.periodic_task [None req-92a13bed-c081-4c7b-87f3-bafebefb79f2 - - - - - -] Running periodic task ComputeManager._sync_power_states run_periodic_tasks /usr/lib/python3.12/site-packages/oslo_service/periodic_task.py:210
Oct 09 16:33:43 compute-0 nova_compute[117331]: 2025-10-09 16:33:43.776 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 09 16:33:44 compute-0 sshd-session[151020]: pam_unix(sshd:auth): check pass; user unknown
Oct 09 16:33:44 compute-0 sshd-session[151020]: pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=36.224.53.32
Oct 09 16:33:44 compute-0 nova_compute[117331]: 2025-10-09 16:33:44.168 2 WARNING nova.compute.manager [None req-92a13bed-c081-4c7b-87f3-bafebefb79f2 - - - - - -] While synchronizing instance power states, found 2 instances in the database and 1 instances on the hypervisor.
Oct 09 16:33:44 compute-0 nova_compute[117331]: 2025-10-09 16:33:44.168 2 DEBUG nova.compute.manager [None req-92a13bed-c081-4c7b-87f3-bafebefb79f2 - - - - - -] Triggering sync for uuid e08b0fc8-1e1e-4d97-b613-761b8f6f9674 _sync_power_states /usr/lib/python3.12/site-packages/nova/compute/manager.py:11020
Oct 09 16:33:44 compute-0 nova_compute[117331]: 2025-10-09 16:33:44.169 2 DEBUG nova.compute.manager [None req-92a13bed-c081-4c7b-87f3-bafebefb79f2 - - - - - -] Triggering sync for uuid d2bb4f53-7374-48bc-8bd3-5de9d41372b6 _sync_power_states /usr/lib/python3.12/site-packages/nova/compute/manager.py:11020
Oct 09 16:33:44 compute-0 nova_compute[117331]: 2025-10-09 16:33:44.170 2 DEBUG oslo_concurrency.lockutils [None req-92a13bed-c081-4c7b-87f3-bafebefb79f2 - - - - - -] Acquiring lock "e08b0fc8-1e1e-4d97-b613-761b8f6f9674" by "nova.compute.manager.ComputeManager._sync_power_states.<locals>._sync.<locals>.query_driver_power_state_and_sync" inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:405
Oct 09 16:33:44 compute-0 nova_compute[117331]: 2025-10-09 16:33:44.171 2 DEBUG oslo_concurrency.lockutils [None req-92a13bed-c081-4c7b-87f3-bafebefb79f2 - - - - - -] Lock "e08b0fc8-1e1e-4d97-b613-761b8f6f9674" acquired by "nova.compute.manager.ComputeManager._sync_power_states.<locals>._sync.<locals>.query_driver_power_state_and_sync" :: waited 0.001s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:410
Oct 09 16:33:44 compute-0 nova_compute[117331]: 2025-10-09 16:33:44.171 2 DEBUG oslo_concurrency.lockutils [None req-92a13bed-c081-4c7b-87f3-bafebefb79f2 - - - - - -] Acquiring lock "d2bb4f53-7374-48bc-8bd3-5de9d41372b6" by "nova.compute.manager.ComputeManager._sync_power_states.<locals>._sync.<locals>.query_driver_power_state_and_sync" inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:405
Oct 09 16:33:44 compute-0 nova_compute[117331]: 2025-10-09 16:33:44.178 2 DEBUG nova.compute.manager [None req-7ad2ce8a-2ab7-48f4-923c-a933cdddadfb 1c793380a6e945d69dacfd07f1f156f8 6d67ac3076434a4582e5db1ca7d043ff - - default default] [instance: d2bb4f53-7374-48bc-8bd3-5de9d41372b6] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.12/site-packages/nova/compute/manager.py:2678
Oct 09 16:33:44 compute-0 nova_compute[117331]: 2025-10-09 16:33:44.181 2 DEBUG nova.virt.libvirt.driver [None req-7ad2ce8a-2ab7-48f4-923c-a933cdddadfb 1c793380a6e945d69dacfd07f1f156f8 6d67ac3076434a4582e5db1ca7d043ff - - default default] [instance: d2bb4f53-7374-48bc-8bd3-5de9d41372b6] Creating instance directory _create_image /usr/lib/python3.12/site-packages/nova/virt/libvirt/driver.py:5138
Oct 09 16:33:44 compute-0 nova_compute[117331]: 2025-10-09 16:33:44.182 2 INFO nova.virt.libvirt.driver [None req-7ad2ce8a-2ab7-48f4-923c-a933cdddadfb 1c793380a6e945d69dacfd07f1f156f8 6d67ac3076434a4582e5db1ca7d043ff - - default default] [instance: d2bb4f53-7374-48bc-8bd3-5de9d41372b6] Creating image(s)
Oct 09 16:33:44 compute-0 nova_compute[117331]: 2025-10-09 16:33:44.183 2 DEBUG oslo_concurrency.lockutils [None req-7ad2ce8a-2ab7-48f4-923c-a933cdddadfb 1c793380a6e945d69dacfd07f1f156f8 6d67ac3076434a4582e5db1ca7d043ff - - default default] Acquiring lock "/var/lib/nova/instances/d2bb4f53-7374-48bc-8bd3-5de9d41372b6/disk.info" by "nova.virt.libvirt.imagebackend.Image.resolve_driver_format.<locals>.write_to_disk_info_file" inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:405
Oct 09 16:33:44 compute-0 nova_compute[117331]: 2025-10-09 16:33:44.183 2 DEBUG oslo_concurrency.lockutils [None req-7ad2ce8a-2ab7-48f4-923c-a933cdddadfb 1c793380a6e945d69dacfd07f1f156f8 6d67ac3076434a4582e5db1ca7d043ff - - default default] Lock "/var/lib/nova/instances/d2bb4f53-7374-48bc-8bd3-5de9d41372b6/disk.info" acquired by "nova.virt.libvirt.imagebackend.Image.resolve_driver_format.<locals>.write_to_disk_info_file" :: waited 0.001s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:410
Oct 09 16:33:44 compute-0 nova_compute[117331]: 2025-10-09 16:33:44.185 2 DEBUG oslo_concurrency.lockutils [None req-7ad2ce8a-2ab7-48f4-923c-a933cdddadfb 1c793380a6e945d69dacfd07f1f156f8 6d67ac3076434a4582e5db1ca7d043ff - - default default] Lock "/var/lib/nova/instances/d2bb4f53-7374-48bc-8bd3-5de9d41372b6/disk.info" "released" by "nova.virt.libvirt.imagebackend.Image.resolve_driver_format.<locals>.write_to_disk_info_file" :: held 0.001s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:424
Oct 09 16:33:44 compute-0 nova_compute[117331]: 2025-10-09 16:33:44.186 2 DEBUG oslo_utils.imageutils.format_inspector [None req-7ad2ce8a-2ab7-48f4-923c-a933cdddadfb 1c793380a6e945d69dacfd07f1f156f8 6d67ac3076434a4582e5db1ca7d043ff - - default default] Format inspector for vmdk does not match, excluding from consideration (Signature KDMV not found: b'\xebH\x90\x00') _process_chunk /usr/lib/python3.12/site-packages/oslo_utils/imageutils/format_inspector.py:1365
Oct 09 16:33:44 compute-0 nova_compute[117331]: 2025-10-09 16:33:44.193 2 DEBUG oslo_utils.imageutils.format_inspector [None req-7ad2ce8a-2ab7-48f4-923c-a933cdddadfb 1c793380a6e945d69dacfd07f1f156f8 6d67ac3076434a4582e5db1ca7d043ff - - default default] Format inspector for vhdx does not match, excluding from consideration (Region signature not found at 30000) _process_chunk /usr/lib/python3.12/site-packages/oslo_utils/imageutils/format_inspector.py:1365
Oct 09 16:33:44 compute-0 nova_compute[117331]: 2025-10-09 16:33:44.196 2 DEBUG oslo_concurrency.processutils [None req-7ad2ce8a-2ab7-48f4-923c-a933cdddadfb 1c793380a6e945d69dacfd07f1f156f8 6d67ac3076434a4582e5db1ca7d043ff - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/cea3dacfdea0f3734ae526b812744e847bc2d356 --force-share --output=json execute /usr/lib/python3.12/site-packages/oslo_concurrency/processutils.py:349
Oct 09 16:33:44 compute-0 nova_compute[117331]: 2025-10-09 16:33:44.259 2 DEBUG oslo_concurrency.processutils [None req-7ad2ce8a-2ab7-48f4-923c-a933cdddadfb 1c793380a6e945d69dacfd07f1f156f8 6d67ac3076434a4582e5db1ca7d043ff - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/cea3dacfdea0f3734ae526b812744e847bc2d356 --force-share --output=json" returned: 0 in 0.063s execute /usr/lib/python3.12/site-packages/oslo_concurrency/processutils.py:372
Oct 09 16:33:44 compute-0 nova_compute[117331]: 2025-10-09 16:33:44.260 2 DEBUG oslo_concurrency.lockutils [None req-7ad2ce8a-2ab7-48f4-923c-a933cdddadfb 1c793380a6e945d69dacfd07f1f156f8 6d67ac3076434a4582e5db1ca7d043ff - - default default] Acquiring lock "cea3dacfdea0f3734ae526b812744e847bc2d356" by "nova.virt.libvirt.imagebackend.Qcow2.create_image.<locals>.create_qcow2_image" inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:405
Oct 09 16:33:44 compute-0 nova_compute[117331]: 2025-10-09 16:33:44.262 2 DEBUG oslo_concurrency.lockutils [None req-7ad2ce8a-2ab7-48f4-923c-a933cdddadfb 1c793380a6e945d69dacfd07f1f156f8 6d67ac3076434a4582e5db1ca7d043ff - - default default] Lock "cea3dacfdea0f3734ae526b812744e847bc2d356" acquired by "nova.virt.libvirt.imagebackend.Qcow2.create_image.<locals>.create_qcow2_image" :: waited 0.001s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:410
Oct 09 16:33:44 compute-0 nova_compute[117331]: 2025-10-09 16:33:44.263 2 DEBUG oslo_utils.imageutils.format_inspector [None req-7ad2ce8a-2ab7-48f4-923c-a933cdddadfb 1c793380a6e945d69dacfd07f1f156f8 6d67ac3076434a4582e5db1ca7d043ff - - default default] Format inspector for vmdk does not match, excluding from consideration (Signature KDMV not found: b'\xebH\x90\x00') _process_chunk /usr/lib/python3.12/site-packages/oslo_utils/imageutils/format_inspector.py:1365
Oct 09 16:33:44 compute-0 nova_compute[117331]: 2025-10-09 16:33:44.269 2 DEBUG oslo_utils.imageutils.format_inspector [None req-7ad2ce8a-2ab7-48f4-923c-a933cdddadfb 1c793380a6e945d69dacfd07f1f156f8 6d67ac3076434a4582e5db1ca7d043ff - - default default] Format inspector for vhdx does not match, excluding from consideration (Region signature not found at 30000) _process_chunk /usr/lib/python3.12/site-packages/oslo_utils/imageutils/format_inspector.py:1365
Oct 09 16:33:44 compute-0 nova_compute[117331]: 2025-10-09 16:33:44.270 2 DEBUG oslo_concurrency.processutils [None req-7ad2ce8a-2ab7-48f4-923c-a933cdddadfb 1c793380a6e945d69dacfd07f1f156f8 6d67ac3076434a4582e5db1ca7d043ff - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/cea3dacfdea0f3734ae526b812744e847bc2d356 --force-share --output=json execute /usr/lib/python3.12/site-packages/oslo_concurrency/processutils.py:349
Oct 09 16:33:44 compute-0 nova_compute[117331]: 2025-10-09 16:33:44.331 2 DEBUG nova.network.neutron [None req-7ad2ce8a-2ab7-48f4-923c-a933cdddadfb 1c793380a6e945d69dacfd07f1f156f8 6d67ac3076434a4582e5db1ca7d043ff - - default default] [instance: d2bb4f53-7374-48bc-8bd3-5de9d41372b6] Successfully updated port: 4b893ed9-8bb2-41b7-9842-b530d7cc9ccd _update_port /usr/lib/python3.12/site-packages/nova/network/neutron.py:588
Oct 09 16:33:44 compute-0 nova_compute[117331]: 2025-10-09 16:33:44.356 2 DEBUG oslo_concurrency.processutils [None req-7ad2ce8a-2ab7-48f4-923c-a933cdddadfb 1c793380a6e945d69dacfd07f1f156f8 6d67ac3076434a4582e5db1ca7d043ff - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/cea3dacfdea0f3734ae526b812744e847bc2d356 --force-share --output=json" returned: 0 in 0.086s execute /usr/lib/python3.12/site-packages/oslo_concurrency/processutils.py:372
Oct 09 16:33:44 compute-0 nova_compute[117331]: 2025-10-09 16:33:44.357 2 DEBUG oslo_concurrency.processutils [None req-7ad2ce8a-2ab7-48f4-923c-a933cdddadfb 1c793380a6e945d69dacfd07f1f156f8 6d67ac3076434a4582e5db1ca7d043ff - - default default] Running cmd (subprocess): env LC_ALL=C LANG=C qemu-img create -f qcow2 -o backing_file=/var/lib/nova/instances/_base/cea3dacfdea0f3734ae526b812744e847bc2d356,backing_fmt=raw /var/lib/nova/instances/d2bb4f53-7374-48bc-8bd3-5de9d41372b6/disk 1073741824 execute /usr/lib/python3.12/site-packages/oslo_concurrency/processutils.py:349
Oct 09 16:33:44 compute-0 nova_compute[117331]: 2025-10-09 16:33:44.384 2 DEBUG nova.compute.manager [req-0c62d697-065f-48a8-b559-eb261ef92e64 req-88437917-2b9d-4628-9415-c9786cb17613 ee15961ad2be4bada6c106ac4f69f422 c076c42271004e7aa86e84416e33f826 - - default default] [instance: d2bb4f53-7374-48bc-8bd3-5de9d41372b6] Received event network-changed-4b893ed9-8bb2-41b7-9842-b530d7cc9ccd external_instance_event /usr/lib/python3.12/site-packages/nova/compute/manager.py:11812
Oct 09 16:33:44 compute-0 nova_compute[117331]: 2025-10-09 16:33:44.384 2 DEBUG nova.compute.manager [req-0c62d697-065f-48a8-b559-eb261ef92e64 req-88437917-2b9d-4628-9415-c9786cb17613 ee15961ad2be4bada6c106ac4f69f422 c076c42271004e7aa86e84416e33f826 - - default default] [instance: d2bb4f53-7374-48bc-8bd3-5de9d41372b6] Refreshing instance network info cache due to event network-changed-4b893ed9-8bb2-41b7-9842-b530d7cc9ccd. external_instance_event /usr/lib/python3.12/site-packages/nova/compute/manager.py:11817
Oct 09 16:33:44 compute-0 nova_compute[117331]: 2025-10-09 16:33:44.384 2 DEBUG oslo_concurrency.lockutils [req-0c62d697-065f-48a8-b559-eb261ef92e64 req-88437917-2b9d-4628-9415-c9786cb17613 ee15961ad2be4bada6c106ac4f69f422 c076c42271004e7aa86e84416e33f826 - - default default] Acquiring lock "refresh_cache-d2bb4f53-7374-48bc-8bd3-5de9d41372b6" lock /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:313
Oct 09 16:33:44 compute-0 nova_compute[117331]: 2025-10-09 16:33:44.385 2 DEBUG oslo_concurrency.lockutils [req-0c62d697-065f-48a8-b559-eb261ef92e64 req-88437917-2b9d-4628-9415-c9786cb17613 ee15961ad2be4bada6c106ac4f69f422 c076c42271004e7aa86e84416e33f826 - - default default] Acquired lock "refresh_cache-d2bb4f53-7374-48bc-8bd3-5de9d41372b6" lock /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:316
Oct 09 16:33:44 compute-0 nova_compute[117331]: 2025-10-09 16:33:44.385 2 DEBUG nova.network.neutron [req-0c62d697-065f-48a8-b559-eb261ef92e64 req-88437917-2b9d-4628-9415-c9786cb17613 ee15961ad2be4bada6c106ac4f69f422 c076c42271004e7aa86e84416e33f826 - - default default] [instance: d2bb4f53-7374-48bc-8bd3-5de9d41372b6] Refreshing network info cache for port 4b893ed9-8bb2-41b7-9842-b530d7cc9ccd _get_instance_nw_info /usr/lib/python3.12/site-packages/nova/network/neutron.py:2067
Oct 09 16:33:44 compute-0 nova_compute[117331]: 2025-10-09 16:33:44.386 2 DEBUG oslo_concurrency.processutils [None req-7ad2ce8a-2ab7-48f4-923c-a933cdddadfb 1c793380a6e945d69dacfd07f1f156f8 6d67ac3076434a4582e5db1ca7d043ff - - default default] CMD "env LC_ALL=C LANG=C qemu-img create -f qcow2 -o backing_file=/var/lib/nova/instances/_base/cea3dacfdea0f3734ae526b812744e847bc2d356,backing_fmt=raw /var/lib/nova/instances/d2bb4f53-7374-48bc-8bd3-5de9d41372b6/disk 1073741824" returned: 0 in 0.029s execute /usr/lib/python3.12/site-packages/oslo_concurrency/processutils.py:372
Oct 09 16:33:44 compute-0 nova_compute[117331]: 2025-10-09 16:33:44.387 2 DEBUG oslo_concurrency.lockutils [None req-7ad2ce8a-2ab7-48f4-923c-a933cdddadfb 1c793380a6e945d69dacfd07f1f156f8 6d67ac3076434a4582e5db1ca7d043ff - - default default] Lock "cea3dacfdea0f3734ae526b812744e847bc2d356" "released" by "nova.virt.libvirt.imagebackend.Qcow2.create_image.<locals>.create_qcow2_image" :: held 0.125s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:424
Oct 09 16:33:44 compute-0 nova_compute[117331]: 2025-10-09 16:33:44.387 2 DEBUG oslo_concurrency.processutils [None req-7ad2ce8a-2ab7-48f4-923c-a933cdddadfb 1c793380a6e945d69dacfd07f1f156f8 6d67ac3076434a4582e5db1ca7d043ff - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/cea3dacfdea0f3734ae526b812744e847bc2d356 --force-share --output=json execute /usr/lib/python3.12/site-packages/oslo_concurrency/processutils.py:349
Oct 09 16:33:44 compute-0 nova_compute[117331]: 2025-10-09 16:33:44.432 2 DEBUG oslo_concurrency.processutils [None req-7ad2ce8a-2ab7-48f4-923c-a933cdddadfb 1c793380a6e945d69dacfd07f1f156f8 6d67ac3076434a4582e5db1ca7d043ff - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/cea3dacfdea0f3734ae526b812744e847bc2d356 --force-share --output=json" returned: 0 in 0.045s execute /usr/lib/python3.12/site-packages/oslo_concurrency/processutils.py:372
Oct 09 16:33:44 compute-0 nova_compute[117331]: 2025-10-09 16:33:44.433 2 DEBUG nova.virt.disk.api [None req-7ad2ce8a-2ab7-48f4-923c-a933cdddadfb 1c793380a6e945d69dacfd07f1f156f8 6d67ac3076434a4582e5db1ca7d043ff - - default default] Checking if we can resize image /var/lib/nova/instances/d2bb4f53-7374-48bc-8bd3-5de9d41372b6/disk. size=1073741824 can_resize_image /usr/lib/python3.12/site-packages/nova/virt/disk/api.py:164
Oct 09 16:33:44 compute-0 nova_compute[117331]: 2025-10-09 16:33:44.433 2 DEBUG oslo_concurrency.processutils [None req-7ad2ce8a-2ab7-48f4-923c-a933cdddadfb 1c793380a6e945d69dacfd07f1f156f8 6d67ac3076434a4582e5db1ca7d043ff - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/d2bb4f53-7374-48bc-8bd3-5de9d41372b6/disk --force-share --output=json execute /usr/lib/python3.12/site-packages/oslo_concurrency/processutils.py:349
Oct 09 16:33:44 compute-0 nova_compute[117331]: 2025-10-09 16:33:44.480 2 DEBUG oslo_concurrency.processutils [None req-7ad2ce8a-2ab7-48f4-923c-a933cdddadfb 1c793380a6e945d69dacfd07f1f156f8 6d67ac3076434a4582e5db1ca7d043ff - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/d2bb4f53-7374-48bc-8bd3-5de9d41372b6/disk --force-share --output=json" returned: 0 in 0.046s execute /usr/lib/python3.12/site-packages/oslo_concurrency/processutils.py:372
Oct 09 16:33:44 compute-0 nova_compute[117331]: 2025-10-09 16:33:44.481 2 DEBUG nova.virt.disk.api [None req-7ad2ce8a-2ab7-48f4-923c-a933cdddadfb 1c793380a6e945d69dacfd07f1f156f8 6d67ac3076434a4582e5db1ca7d043ff - - default default] Cannot resize image /var/lib/nova/instances/d2bb4f53-7374-48bc-8bd3-5de9d41372b6/disk to a smaller size. can_resize_image /usr/lib/python3.12/site-packages/nova/virt/disk/api.py:170
Oct 09 16:33:44 compute-0 nova_compute[117331]: 2025-10-09 16:33:44.481 2 DEBUG nova.virt.libvirt.driver [None req-7ad2ce8a-2ab7-48f4-923c-a933cdddadfb 1c793380a6e945d69dacfd07f1f156f8 6d67ac3076434a4582e5db1ca7d043ff - - default default] [instance: d2bb4f53-7374-48bc-8bd3-5de9d41372b6] Created local disks _create_image /usr/lib/python3.12/site-packages/nova/virt/libvirt/driver.py:5270
Oct 09 16:33:44 compute-0 nova_compute[117331]: 2025-10-09 16:33:44.481 2 DEBUG nova.virt.libvirt.driver [None req-7ad2ce8a-2ab7-48f4-923c-a933cdddadfb 1c793380a6e945d69dacfd07f1f156f8 6d67ac3076434a4582e5db1ca7d043ff - - default default] [instance: d2bb4f53-7374-48bc-8bd3-5de9d41372b6] Ensure instance console log exists: /var/lib/nova/instances/d2bb4f53-7374-48bc-8bd3-5de9d41372b6/console.log _ensure_console_log_for_instance /usr/lib/python3.12/site-packages/nova/virt/libvirt/driver.py:5017
Oct 09 16:33:44 compute-0 nova_compute[117331]: 2025-10-09 16:33:44.482 2 DEBUG oslo_concurrency.lockutils [None req-7ad2ce8a-2ab7-48f4-923c-a933cdddadfb 1c793380a6e945d69dacfd07f1f156f8 6d67ac3076434a4582e5db1ca7d043ff - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:405
Oct 09 16:33:44 compute-0 nova_compute[117331]: 2025-10-09 16:33:44.482 2 DEBUG oslo_concurrency.lockutils [None req-7ad2ce8a-2ab7-48f4-923c-a933cdddadfb 1c793380a6e945d69dacfd07f1f156f8 6d67ac3076434a4582e5db1ca7d043ff - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:410
Oct 09 16:33:44 compute-0 nova_compute[117331]: 2025-10-09 16:33:44.482 2 DEBUG oslo_concurrency.lockutils [None req-7ad2ce8a-2ab7-48f4-923c-a933cdddadfb 1c793380a6e945d69dacfd07f1f156f8 6d67ac3076434a4582e5db1ca7d043ff - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:424
Oct 09 16:33:44 compute-0 nova_compute[117331]: 2025-10-09 16:33:44.710 2 DEBUG oslo_concurrency.lockutils [None req-92a13bed-c081-4c7b-87f3-bafebefb79f2 - - - - - -] Lock "e08b0fc8-1e1e-4d97-b613-761b8f6f9674" "released" by "nova.compute.manager.ComputeManager._sync_power_states.<locals>._sync.<locals>.query_driver_power_state_and_sync" :: held 0.539s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:424
Oct 09 16:33:44 compute-0 nova_compute[117331]: 2025-10-09 16:33:44.841 2 DEBUG oslo_concurrency.lockutils [None req-7ad2ce8a-2ab7-48f4-923c-a933cdddadfb 1c793380a6e945d69dacfd07f1f156f8 6d67ac3076434a4582e5db1ca7d043ff - - default default] Acquiring lock "refresh_cache-d2bb4f53-7374-48bc-8bd3-5de9d41372b6" lock /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:313
Oct 09 16:33:44 compute-0 nova_compute[117331]: 2025-10-09 16:33:44.895 2 WARNING neutronclient.v2_0.client [req-0c62d697-065f-48a8-b559-eb261ef92e64 req-88437917-2b9d-4628-9415-c9786cb17613 ee15961ad2be4bada6c106ac4f69f422 c076c42271004e7aa86e84416e33f826 - - default default] The python binding code in neutronclient is deprecated in favor of OpenstackSDK, please use that as this will be removed in a future release.
Oct 09 16:33:45 compute-0 nova_compute[117331]: 2025-10-09 16:33:45.245 2 DEBUG nova.network.neutron [req-0c62d697-065f-48a8-b559-eb261ef92e64 req-88437917-2b9d-4628-9415-c9786cb17613 ee15961ad2be4bada6c106ac4f69f422 c076c42271004e7aa86e84416e33f826 - - default default] [instance: d2bb4f53-7374-48bc-8bd3-5de9d41372b6] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.12/site-packages/nova/network/neutron.py:3383
Oct 09 16:33:45 compute-0 ovn_controller[19752]: 2025-10-09T16:33:45Z|00029|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:a0:cf:4a 10.100.0.4
Oct 09 16:33:45 compute-0 ovn_controller[19752]: 2025-10-09T16:33:45Z|00030|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:a0:cf:4a 10.100.0.4
Oct 09 16:33:45 compute-0 nova_compute[117331]: 2025-10-09 16:33:45.386 2 DEBUG nova.network.neutron [req-0c62d697-065f-48a8-b559-eb261ef92e64 req-88437917-2b9d-4628-9415-c9786cb17613 ee15961ad2be4bada6c106ac4f69f422 c076c42271004e7aa86e84416e33f826 - - default default] [instance: d2bb4f53-7374-48bc-8bd3-5de9d41372b6] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.12/site-packages/nova/network/neutron.py:116
Oct 09 16:33:45 compute-0 nova_compute[117331]: 2025-10-09 16:33:45.911 2 DEBUG oslo_concurrency.lockutils [req-0c62d697-065f-48a8-b559-eb261ef92e64 req-88437917-2b9d-4628-9415-c9786cb17613 ee15961ad2be4bada6c106ac4f69f422 c076c42271004e7aa86e84416e33f826 - - default default] Releasing lock "refresh_cache-d2bb4f53-7374-48bc-8bd3-5de9d41372b6" lock /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:334
Oct 09 16:33:45 compute-0 nova_compute[117331]: 2025-10-09 16:33:45.912 2 DEBUG oslo_concurrency.lockutils [None req-7ad2ce8a-2ab7-48f4-923c-a933cdddadfb 1c793380a6e945d69dacfd07f1f156f8 6d67ac3076434a4582e5db1ca7d043ff - - default default] Acquired lock "refresh_cache-d2bb4f53-7374-48bc-8bd3-5de9d41372b6" lock /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:316
Oct 09 16:33:45 compute-0 nova_compute[117331]: 2025-10-09 16:33:45.912 2 DEBUG nova.network.neutron [None req-7ad2ce8a-2ab7-48f4-923c-a933cdddadfb 1c793380a6e945d69dacfd07f1f156f8 6d67ac3076434a4582e5db1ca7d043ff - - default default] [instance: d2bb4f53-7374-48bc-8bd3-5de9d41372b6] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.12/site-packages/nova/network/neutron.py:2070
Oct 09 16:33:46 compute-0 sshd-session[151020]: Failed password for invalid user dspace from 36.224.53.32 port 35270 ssh2
Oct 09 16:33:46 compute-0 sshd-session[151020]: Connection closed by invalid user dspace 36.224.53.32 port 35270 [preauth]
Oct 09 16:33:47 compute-0 nova_compute[117331]: 2025-10-09 16:33:47.163 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 09 16:33:47 compute-0 nova_compute[117331]: 2025-10-09 16:33:47.233 2 DEBUG nova.network.neutron [None req-7ad2ce8a-2ab7-48f4-923c-a933cdddadfb 1c793380a6e945d69dacfd07f1f156f8 6d67ac3076434a4582e5db1ca7d043ff - - default default] [instance: d2bb4f53-7374-48bc-8bd3-5de9d41372b6] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.12/site-packages/nova/network/neutron.py:3383
Oct 09 16:33:47 compute-0 nova_compute[117331]: 2025-10-09 16:33:47.472 2 WARNING neutronclient.v2_0.client [None req-7ad2ce8a-2ab7-48f4-923c-a933cdddadfb 1c793380a6e945d69dacfd07f1f156f8 6d67ac3076434a4582e5db1ca7d043ff - - default default] The python binding code in neutronclient is deprecated in favor of OpenstackSDK, please use that as this will be removed in a future release.
Oct 09 16:33:48 compute-0 nova_compute[117331]: 2025-10-09 16:33:48.237 2 DEBUG nova.network.neutron [None req-7ad2ce8a-2ab7-48f4-923c-a933cdddadfb 1c793380a6e945d69dacfd07f1f156f8 6d67ac3076434a4582e5db1ca7d043ff - - default default] [instance: d2bb4f53-7374-48bc-8bd3-5de9d41372b6] Updating instance_info_cache with network_info: [{"id": "4b893ed9-8bb2-41b7-9842-b530d7cc9ccd", "address": "fa:16:3e:bc:f1:50", "network": {"id": "54b37568-476a-40a0-b545-fe5401f85653", "bridge": "br-int", "label": "tempest-TestExecuteWorkloadBalanceStrategy-563305466-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "036a4e356fb34effb6775ffe5bd9a19f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap4b893ed9-8b", "ovs_interfaceid": "4b893ed9-8bb2-41b7-9842-b530d7cc9ccd", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.12/site-packages/nova/network/neutron.py:116
Oct 09 16:33:48 compute-0 nova_compute[117331]: 2025-10-09 16:33:48.746 2 DEBUG oslo_concurrency.lockutils [None req-7ad2ce8a-2ab7-48f4-923c-a933cdddadfb 1c793380a6e945d69dacfd07f1f156f8 6d67ac3076434a4582e5db1ca7d043ff - - default default] Releasing lock "refresh_cache-d2bb4f53-7374-48bc-8bd3-5de9d41372b6" lock /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:334
Oct 09 16:33:48 compute-0 nova_compute[117331]: 2025-10-09 16:33:48.747 2 DEBUG nova.compute.manager [None req-7ad2ce8a-2ab7-48f4-923c-a933cdddadfb 1c793380a6e945d69dacfd07f1f156f8 6d67ac3076434a4582e5db1ca7d043ff - - default default] [instance: d2bb4f53-7374-48bc-8bd3-5de9d41372b6] Instance network_info: |[{"id": "4b893ed9-8bb2-41b7-9842-b530d7cc9ccd", "address": "fa:16:3e:bc:f1:50", "network": {"id": "54b37568-476a-40a0-b545-fe5401f85653", "bridge": "br-int", "label": "tempest-TestExecuteWorkloadBalanceStrategy-563305466-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "036a4e356fb34effb6775ffe5bd9a19f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap4b893ed9-8b", "ovs_interfaceid": "4b893ed9-8bb2-41b7-9842-b530d7cc9ccd", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.12/site-packages/nova/compute/manager.py:2031
Oct 09 16:33:48 compute-0 nova_compute[117331]: 2025-10-09 16:33:48.749 2 DEBUG nova.virt.libvirt.driver [None req-7ad2ce8a-2ab7-48f4-923c-a933cdddadfb 1c793380a6e945d69dacfd07f1f156f8 6d67ac3076434a4582e5db1ca7d043ff - - default default] [instance: d2bb4f53-7374-48bc-8bd3-5de9d41372b6] Start _get_guest_xml network_info=[{"id": "4b893ed9-8bb2-41b7-9842-b530d7cc9ccd", "address": "fa:16:3e:bc:f1:50", "network": {"id": "54b37568-476a-40a0-b545-fe5401f85653", "bridge": "br-int", "label": "tempest-TestExecuteWorkloadBalanceStrategy-563305466-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "036a4e356fb34effb6775ffe5bd9a19f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap4b893ed9-8b", "ovs_interfaceid": "4b893ed9-8bb2-41b7-9842-b530d7cc9ccd", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-09T16:07:02Z,direct_url=<?>,disk_format='qcow2',id=b7d6e0af-25e4-4227-9dc6-43143898ceee,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='b30e8cf5e10742f190212b4cb97ce2c9',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-09T16:07:04Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'device_type': 'disk', 'encrypted': False, 'size': 0, 'encryption_format': None, 'guest_format': None, 'boot_index': 0, 'device_name': '/dev/vda', 'disk_bus': 'virtio', 'encryption_options': None, 'encryption_secret_uuid': None, 'image_id': 'b7d6e0af-25e4-4227-9dc6-43143898ceee'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None}share_info=None _get_guest_xml /usr/lib/python3.12/site-packages/nova/virt/libvirt/driver.py:8046
Oct 09 16:33:48 compute-0 nova_compute[117331]: 2025-10-09 16:33:48.754 2 WARNING nova.virt.libvirt.driver [None req-7ad2ce8a-2ab7-48f4-923c-a933cdddadfb 1c793380a6e945d69dacfd07f1f156f8 6d67ac3076434a4582e5db1ca7d043ff - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Oct 09 16:33:48 compute-0 nova_compute[117331]: 2025-10-09 16:33:48.755 2 DEBUG nova.virt.driver [None req-7ad2ce8a-2ab7-48f4-923c-a933cdddadfb 1c793380a6e945d69dacfd07f1f156f8 6d67ac3076434a4582e5db1ca7d043ff - - default default] InstanceDriverMetadata: InstanceDriverMetadata(root_type='image', root_id='b7d6e0af-25e4-4227-9dc6-43143898ceee', instance_meta=NovaInstanceMeta(name='tempest-TestExecuteWorkloadBalanceStrategy-server-1819620832', uuid='d2bb4f53-7374-48bc-8bd3-5de9d41372b6'), owner=OwnerMeta(userid='1c793380a6e945d69dacfd07f1f156f8', username='tempest-TestExecuteWorkloadBalanceStrategy-2100042169-project-admin', projectid='6d67ac3076434a4582e5db1ca7d043ff', projectname='tempest-TestExecuteWorkloadBalanceStrategy-2100042169'), image=ImageMeta(id='b7d6e0af-25e4-4227-9dc6-43143898ceee', name=None, container_format='bare', disk_format='qcow2', min_disk=1, min_ram=0, properties={'hw_rng_model': 'virtio'}), flavor=FlavorMeta(name='tempest-watcher_flavor-1801535589', flavorid='91ea4b87-742a-4f2b-9b5d-34de6e13f85a', memory_mb=1151, vcpus=1, root_gb=1, ephemeral_gb=0, extra_specs={}, swap=0), network_info=[{"id": "4b893ed9-8bb2-41b7-9842-b530d7cc9ccd", "address": "fa:16:3e:bc:f1:50", "network": {"id": "54b37568-476a-40a0-b545-fe5401f85653", "bridge": "br-int", "label": "tempest-TestExecuteWorkloadBalanceStrategy-563305466-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "036a4e356fb34effb6775ffe5bd9a19f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap4b893ed9-8b", "ovs_interfaceid": "4b893ed9-8bb2-41b7-9842-b530d7cc9ccd", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}], nova_package='32.1.0-0.20251008143712.076498e.el10', creation_time=1760027628.7557726) get_instance_driver_metadata /usr/lib/python3.12/site-packages/nova/virt/driver.py:437
Oct 09 16:33:48 compute-0 nova_compute[117331]: 2025-10-09 16:33:48.760 2 DEBUG nova.virt.libvirt.host [None req-7ad2ce8a-2ab7-48f4-923c-a933cdddadfb 1c793380a6e945d69dacfd07f1f156f8 6d67ac3076434a4582e5db1ca7d043ff - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.12/site-packages/nova/virt/libvirt/host.py:1698
Oct 09 16:33:48 compute-0 nova_compute[117331]: 2025-10-09 16:33:48.760 2 DEBUG nova.virt.libvirt.host [None req-7ad2ce8a-2ab7-48f4-923c-a933cdddadfb 1c793380a6e945d69dacfd07f1f156f8 6d67ac3076434a4582e5db1ca7d043ff - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.12/site-packages/nova/virt/libvirt/host.py:1708
Oct 09 16:33:48 compute-0 nova_compute[117331]: 2025-10-09 16:33:48.763 2 DEBUG nova.virt.libvirt.host [None req-7ad2ce8a-2ab7-48f4-923c-a933cdddadfb 1c793380a6e945d69dacfd07f1f156f8 6d67ac3076434a4582e5db1ca7d043ff - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.12/site-packages/nova/virt/libvirt/host.py:1717
Oct 09 16:33:48 compute-0 nova_compute[117331]: 2025-10-09 16:33:48.763 2 DEBUG nova.virt.libvirt.host [None req-7ad2ce8a-2ab7-48f4-923c-a933cdddadfb 1c793380a6e945d69dacfd07f1f156f8 6d67ac3076434a4582e5db1ca7d043ff - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.12/site-packages/nova/virt/libvirt/host.py:1724
Oct 09 16:33:48 compute-0 nova_compute[117331]: 2025-10-09 16:33:48.764 2 DEBUG nova.virt.libvirt.driver [None req-7ad2ce8a-2ab7-48f4-923c-a933cdddadfb 1c793380a6e945d69dacfd07f1f156f8 6d67ac3076434a4582e5db1ca7d043ff - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.12/site-packages/nova/virt/libvirt/driver.py:5809
Oct 09 16:33:48 compute-0 nova_compute[117331]: 2025-10-09 16:33:48.764 2 DEBUG nova.virt.hardware [None req-7ad2ce8a-2ab7-48f4-923c-a933cdddadfb 1c793380a6e945d69dacfd07f1f156f8 6d67ac3076434a4582e5db1ca7d043ff - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-10-09T16:33:16Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={},flavorid='91ea4b87-742a-4f2b-9b5d-34de6e13f85a',id=3,is_public=True,memory_mb=1151,name='tempest-watcher_flavor-1801535589',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-09T16:07:02Z,direct_url=<?>,disk_format='qcow2',id=b7d6e0af-25e4-4227-9dc6-43143898ceee,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='b30e8cf5e10742f190212b4cb97ce2c9',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-09T16:07:04Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.12/site-packages/nova/virt/hardware.py:571
Oct 09 16:33:48 compute-0 nova_compute[117331]: 2025-10-09 16:33:48.765 2 DEBUG nova.virt.hardware [None req-7ad2ce8a-2ab7-48f4-923c-a933cdddadfb 1c793380a6e945d69dacfd07f1f156f8 6d67ac3076434a4582e5db1ca7d043ff - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.12/site-packages/nova/virt/hardware.py:356
Oct 09 16:33:48 compute-0 nova_compute[117331]: 2025-10-09 16:33:48.765 2 DEBUG nova.virt.hardware [None req-7ad2ce8a-2ab7-48f4-923c-a933cdddadfb 1c793380a6e945d69dacfd07f1f156f8 6d67ac3076434a4582e5db1ca7d043ff - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.12/site-packages/nova/virt/hardware.py:360
Oct 09 16:33:48 compute-0 nova_compute[117331]: 2025-10-09 16:33:48.765 2 DEBUG nova.virt.hardware [None req-7ad2ce8a-2ab7-48f4-923c-a933cdddadfb 1c793380a6e945d69dacfd07f1f156f8 6d67ac3076434a4582e5db1ca7d043ff - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.12/site-packages/nova/virt/hardware.py:396
Oct 09 16:33:48 compute-0 nova_compute[117331]: 2025-10-09 16:33:48.765 2 DEBUG nova.virt.hardware [None req-7ad2ce8a-2ab7-48f4-923c-a933cdddadfb 1c793380a6e945d69dacfd07f1f156f8 6d67ac3076434a4582e5db1ca7d043ff - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.12/site-packages/nova/virt/hardware.py:400
Oct 09 16:33:48 compute-0 nova_compute[117331]: 2025-10-09 16:33:48.765 2 DEBUG nova.virt.hardware [None req-7ad2ce8a-2ab7-48f4-923c-a933cdddadfb 1c793380a6e945d69dacfd07f1f156f8 6d67ac3076434a4582e5db1ca7d043ff - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.12/site-packages/nova/virt/hardware.py:438
Oct 09 16:33:48 compute-0 nova_compute[117331]: 2025-10-09 16:33:48.766 2 DEBUG nova.virt.hardware [None req-7ad2ce8a-2ab7-48f4-923c-a933cdddadfb 1c793380a6e945d69dacfd07f1f156f8 6d67ac3076434a4582e5db1ca7d043ff - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.12/site-packages/nova/virt/hardware.py:577
Oct 09 16:33:48 compute-0 nova_compute[117331]: 2025-10-09 16:33:48.766 2 DEBUG nova.virt.hardware [None req-7ad2ce8a-2ab7-48f4-923c-a933cdddadfb 1c793380a6e945d69dacfd07f1f156f8 6d67ac3076434a4582e5db1ca7d043ff - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.12/site-packages/nova/virt/hardware.py:479
Oct 09 16:33:48 compute-0 nova_compute[117331]: 2025-10-09 16:33:48.766 2 DEBUG nova.virt.hardware [None req-7ad2ce8a-2ab7-48f4-923c-a933cdddadfb 1c793380a6e945d69dacfd07f1f156f8 6d67ac3076434a4582e5db1ca7d043ff - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.12/site-packages/nova/virt/hardware.py:509
Oct 09 16:33:48 compute-0 nova_compute[117331]: 2025-10-09 16:33:48.766 2 DEBUG nova.virt.hardware [None req-7ad2ce8a-2ab7-48f4-923c-a933cdddadfb 1c793380a6e945d69dacfd07f1f156f8 6d67ac3076434a4582e5db1ca7d043ff - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.12/site-packages/nova/virt/hardware.py:583
Oct 09 16:33:48 compute-0 nova_compute[117331]: 2025-10-09 16:33:48.766 2 DEBUG nova.virt.hardware [None req-7ad2ce8a-2ab7-48f4-923c-a933cdddadfb 1c793380a6e945d69dacfd07f1f156f8 6d67ac3076434a4582e5db1ca7d043ff - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.12/site-packages/nova/virt/hardware.py:585
Oct 09 16:33:48 compute-0 nova_compute[117331]: 2025-10-09 16:33:48.770 2 DEBUG nova.virt.libvirt.vif [None req-7ad2ce8a-2ab7-48f4-923c-a933cdddadfb 1c793380a6e945d69dacfd07f1f156f8 6d67ac3076434a4582e5db1ca7d043ff - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,compute_id=1,config_drive='',created_at=2025-10-09T16:33:37Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description=None,display_name='tempest-TestExecuteWorkloadBalanceStrategy-server-1819620832',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(3),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testexecuteworkloadbalancestrategy-server-1819620832',id=25,image_ref='b7d6e0af-25e4-4227-9dc6-43143898ceee',info_cache=InstanceInfoCache,instance_type_id=3,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=1151,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='6d67ac3076434a4582e5db1ca7d043ff',ramdisk_id='',reservation_id='r-bt02xp9l',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,manager,admin,member',image_base_image_ref='b7d6e0af-25e4-4227-9dc6-43143898ceee',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestExecuteWorkloadBalanceStrategy-2100042169',owner_user_name='tempest-TestExecuteWorkloadBalanceStrategy-2100042169-project-admin'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-09T16:33:43Z,user_data=None,user_id='1c793380a6e945d69dacfd07f1f156f8',uuid=d2bb4f53-7374-48bc-8bd3-5de9d41372b6,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "4b893ed9-8bb2-41b7-9842-b530d7cc9ccd", "address": "fa:16:3e:bc:f1:50", "network": {"id": "54b37568-476a-40a0-b545-fe5401f85653", "bridge": "br-int", "label": "tempest-TestExecuteWorkloadBalanceStrategy-563305466-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "036a4e356fb34effb6775ffe5bd9a19f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap4b893ed9-8b", "ovs_interfaceid": "4b893ed9-8bb2-41b7-9842-b530d7cc9ccd", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.12/site-packages/nova/virt/libvirt/vif.py:574
Oct 09 16:33:48 compute-0 nova_compute[117331]: 2025-10-09 16:33:48.771 2 DEBUG nova.network.os_vif_util [None req-7ad2ce8a-2ab7-48f4-923c-a933cdddadfb 1c793380a6e945d69dacfd07f1f156f8 6d67ac3076434a4582e5db1ca7d043ff - - default default] Converting VIF {"id": "4b893ed9-8bb2-41b7-9842-b530d7cc9ccd", "address": "fa:16:3e:bc:f1:50", "network": {"id": "54b37568-476a-40a0-b545-fe5401f85653", "bridge": "br-int", "label": "tempest-TestExecuteWorkloadBalanceStrategy-563305466-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "036a4e356fb34effb6775ffe5bd9a19f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap4b893ed9-8b", "ovs_interfaceid": "4b893ed9-8bb2-41b7-9842-b530d7cc9ccd", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.12/site-packages/nova/network/os_vif_util.py:511
Oct 09 16:33:48 compute-0 nova_compute[117331]: 2025-10-09 16:33:48.771 2 DEBUG nova.network.os_vif_util [None req-7ad2ce8a-2ab7-48f4-923c-a933cdddadfb 1c793380a6e945d69dacfd07f1f156f8 6d67ac3076434a4582e5db1ca7d043ff - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:bc:f1:50,bridge_name='br-int',has_traffic_filtering=True,id=4b893ed9-8bb2-41b7-9842-b530d7cc9ccd,network=Network(54b37568-476a-40a0-b545-fe5401f85653),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap4b893ed9-8b') nova_to_osvif_vif /usr/lib/python3.12/site-packages/nova/network/os_vif_util.py:548
Oct 09 16:33:48 compute-0 nova_compute[117331]: 2025-10-09 16:33:48.772 2 DEBUG nova.objects.instance [None req-7ad2ce8a-2ab7-48f4-923c-a933cdddadfb 1c793380a6e945d69dacfd07f1f156f8 6d67ac3076434a4582e5db1ca7d043ff - - default default] Lazy-loading 'pci_devices' on Instance uuid d2bb4f53-7374-48bc-8bd3-5de9d41372b6 obj_load_attr /usr/lib/python3.12/site-packages/nova/objects/instance.py:1141
Oct 09 16:33:48 compute-0 nova_compute[117331]: 2025-10-09 16:33:48.778 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 09 16:33:49 compute-0 nova_compute[117331]: 2025-10-09 16:33:49.283 2 DEBUG nova.virt.libvirt.driver [None req-7ad2ce8a-2ab7-48f4-923c-a933cdddadfb 1c793380a6e945d69dacfd07f1f156f8 6d67ac3076434a4582e5db1ca7d043ff - - default default] [instance: d2bb4f53-7374-48bc-8bd3-5de9d41372b6] End _get_guest_xml xml=<domain type="kvm">
Oct 09 16:33:49 compute-0 nova_compute[117331]:   <uuid>d2bb4f53-7374-48bc-8bd3-5de9d41372b6</uuid>
Oct 09 16:33:49 compute-0 nova_compute[117331]:   <name>instance-00000019</name>
Oct 09 16:33:49 compute-0 nova_compute[117331]:   <memory>1178624</memory>
Oct 09 16:33:49 compute-0 nova_compute[117331]:   <vcpu>1</vcpu>
Oct 09 16:33:49 compute-0 nova_compute[117331]:   <metadata>
Oct 09 16:33:49 compute-0 nova_compute[117331]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Oct 09 16:33:49 compute-0 nova_compute[117331]:       <nova:package version="32.1.0-0.20251008143712.076498e.el10"/>
Oct 09 16:33:49 compute-0 nova_compute[117331]:       <nova:name>tempest-TestExecuteWorkloadBalanceStrategy-server-1819620832</nova:name>
Oct 09 16:33:49 compute-0 nova_compute[117331]:       <nova:creationTime>2025-10-09 16:33:48</nova:creationTime>
Oct 09 16:33:49 compute-0 nova_compute[117331]:       <nova:flavor name="tempest-watcher_flavor-1801535589" id="91ea4b87-742a-4f2b-9b5d-34de6e13f85a">
Oct 09 16:33:49 compute-0 nova_compute[117331]:         <nova:memory>1151</nova:memory>
Oct 09 16:33:49 compute-0 nova_compute[117331]:         <nova:disk>1</nova:disk>
Oct 09 16:33:49 compute-0 nova_compute[117331]:         <nova:swap>0</nova:swap>
Oct 09 16:33:49 compute-0 nova_compute[117331]:         <nova:ephemeral>0</nova:ephemeral>
Oct 09 16:33:49 compute-0 nova_compute[117331]:         <nova:vcpus>1</nova:vcpus>
Oct 09 16:33:49 compute-0 nova_compute[117331]:         <nova:extraSpecs/>
Oct 09 16:33:49 compute-0 nova_compute[117331]:       </nova:flavor>
Oct 09 16:33:49 compute-0 nova_compute[117331]:       <nova:image uuid="b7d6e0af-25e4-4227-9dc6-43143898ceee">
Oct 09 16:33:49 compute-0 nova_compute[117331]:         <nova:containerFormat>bare</nova:containerFormat>
Oct 09 16:33:49 compute-0 nova_compute[117331]:         <nova:diskFormat>qcow2</nova:diskFormat>
Oct 09 16:33:49 compute-0 nova_compute[117331]:         <nova:minDisk>1</nova:minDisk>
Oct 09 16:33:49 compute-0 nova_compute[117331]:         <nova:minRam>0</nova:minRam>
Oct 09 16:33:49 compute-0 nova_compute[117331]:         <nova:properties>
Oct 09 16:33:49 compute-0 nova_compute[117331]:           <nova:property name="hw_rng_model">virtio</nova:property>
Oct 09 16:33:49 compute-0 nova_compute[117331]:         </nova:properties>
Oct 09 16:33:49 compute-0 nova_compute[117331]:       </nova:image>
Oct 09 16:33:49 compute-0 nova_compute[117331]:       <nova:owner>
Oct 09 16:33:49 compute-0 nova_compute[117331]:         <nova:user uuid="1c793380a6e945d69dacfd07f1f156f8">tempest-TestExecuteWorkloadBalanceStrategy-2100042169-project-admin</nova:user>
Oct 09 16:33:49 compute-0 nova_compute[117331]:         <nova:project uuid="6d67ac3076434a4582e5db1ca7d043ff">tempest-TestExecuteWorkloadBalanceStrategy-2100042169</nova:project>
Oct 09 16:33:49 compute-0 nova_compute[117331]:       </nova:owner>
Oct 09 16:33:49 compute-0 nova_compute[117331]:       <nova:root type="image" uuid="b7d6e0af-25e4-4227-9dc6-43143898ceee"/>
Oct 09 16:33:49 compute-0 nova_compute[117331]:       <nova:ports>
Oct 09 16:33:49 compute-0 nova_compute[117331]:         <nova:port uuid="4b893ed9-8bb2-41b7-9842-b530d7cc9ccd">
Oct 09 16:33:49 compute-0 nova_compute[117331]:           <nova:ip type="fixed" address="10.100.0.7" ipVersion="4"/>
Oct 09 16:33:49 compute-0 nova_compute[117331]:         </nova:port>
Oct 09 16:33:49 compute-0 nova_compute[117331]:       </nova:ports>
Oct 09 16:33:49 compute-0 nova_compute[117331]:     </nova:instance>
Oct 09 16:33:49 compute-0 nova_compute[117331]:   </metadata>
Oct 09 16:33:49 compute-0 nova_compute[117331]:   <sysinfo type="smbios">
Oct 09 16:33:49 compute-0 nova_compute[117331]:     <system>
Oct 09 16:33:49 compute-0 nova_compute[117331]:       <entry name="manufacturer">RDO</entry>
Oct 09 16:33:49 compute-0 nova_compute[117331]:       <entry name="product">OpenStack Compute</entry>
Oct 09 16:33:49 compute-0 nova_compute[117331]:       <entry name="version">32.1.0-0.20251008143712.076498e.el10</entry>
Oct 09 16:33:49 compute-0 nova_compute[117331]:       <entry name="serial">d2bb4f53-7374-48bc-8bd3-5de9d41372b6</entry>
Oct 09 16:33:49 compute-0 nova_compute[117331]:       <entry name="uuid">d2bb4f53-7374-48bc-8bd3-5de9d41372b6</entry>
Oct 09 16:33:49 compute-0 nova_compute[117331]:       <entry name="family">Virtual Machine</entry>
Oct 09 16:33:49 compute-0 nova_compute[117331]:     </system>
Oct 09 16:33:49 compute-0 nova_compute[117331]:   </sysinfo>
Oct 09 16:33:49 compute-0 nova_compute[117331]:   <os>
Oct 09 16:33:49 compute-0 nova_compute[117331]:     <type arch="x86_64" machine="q35">hvm</type>
Oct 09 16:33:49 compute-0 nova_compute[117331]:     <boot dev="hd"/>
Oct 09 16:33:49 compute-0 nova_compute[117331]:     <smbios mode="sysinfo"/>
Oct 09 16:33:49 compute-0 nova_compute[117331]:   </os>
Oct 09 16:33:49 compute-0 nova_compute[117331]:   <features>
Oct 09 16:33:49 compute-0 nova_compute[117331]:     <acpi/>
Oct 09 16:33:49 compute-0 nova_compute[117331]:     <apic/>
Oct 09 16:33:49 compute-0 nova_compute[117331]:     <vmcoreinfo/>
Oct 09 16:33:49 compute-0 nova_compute[117331]:   </features>
Oct 09 16:33:49 compute-0 nova_compute[117331]:   <clock offset="utc">
Oct 09 16:33:49 compute-0 nova_compute[117331]:     <timer name="pit" tickpolicy="delay"/>
Oct 09 16:33:49 compute-0 nova_compute[117331]:     <timer name="rtc" tickpolicy="catchup"/>
Oct 09 16:33:49 compute-0 nova_compute[117331]:     <timer name="hpet" present="no"/>
Oct 09 16:33:49 compute-0 nova_compute[117331]:   </clock>
Oct 09 16:33:49 compute-0 nova_compute[117331]:   <cpu mode="host-model" match="exact">
Oct 09 16:33:49 compute-0 nova_compute[117331]:     <topology sockets="1" cores="1" threads="1"/>
Oct 09 16:33:49 compute-0 nova_compute[117331]:   </cpu>
Oct 09 16:33:49 compute-0 nova_compute[117331]:   <devices>
Oct 09 16:33:49 compute-0 nova_compute[117331]:     <disk type="file" device="disk">
Oct 09 16:33:49 compute-0 nova_compute[117331]:       <driver name="qemu" type="qcow2" cache="none"/>
Oct 09 16:33:49 compute-0 nova_compute[117331]:       <source file="/var/lib/nova/instances/d2bb4f53-7374-48bc-8bd3-5de9d41372b6/disk"/>
Oct 09 16:33:49 compute-0 nova_compute[117331]:       <target dev="vda" bus="virtio"/>
Oct 09 16:33:49 compute-0 nova_compute[117331]:     </disk>
Oct 09 16:33:49 compute-0 nova_compute[117331]:     <disk type="file" device="cdrom">
Oct 09 16:33:49 compute-0 nova_compute[117331]:       <driver name="qemu" type="raw" cache="none"/>
Oct 09 16:33:49 compute-0 nova_compute[117331]:       <source file="/var/lib/nova/instances/d2bb4f53-7374-48bc-8bd3-5de9d41372b6/disk.config"/>
Oct 09 16:33:49 compute-0 nova_compute[117331]:       <target dev="sda" bus="sata"/>
Oct 09 16:33:49 compute-0 nova_compute[117331]:     </disk>
Oct 09 16:33:49 compute-0 nova_compute[117331]:     <interface type="ethernet">
Oct 09 16:33:49 compute-0 nova_compute[117331]:       <mac address="fa:16:3e:bc:f1:50"/>
Oct 09 16:33:49 compute-0 nova_compute[117331]:       <model type="virtio"/>
Oct 09 16:33:49 compute-0 nova_compute[117331]:       <driver name="vhost" rx_queue_size="512"/>
Oct 09 16:33:49 compute-0 nova_compute[117331]:       <mtu size="1442"/>
Oct 09 16:33:49 compute-0 nova_compute[117331]:       <target dev="tap4b893ed9-8b"/>
Oct 09 16:33:49 compute-0 nova_compute[117331]:     </interface>
Oct 09 16:33:49 compute-0 nova_compute[117331]:     <serial type="pty">
Oct 09 16:33:49 compute-0 nova_compute[117331]:       <log file="/var/lib/nova/instances/d2bb4f53-7374-48bc-8bd3-5de9d41372b6/console.log" append="off"/>
Oct 09 16:33:49 compute-0 nova_compute[117331]:     </serial>
Oct 09 16:33:49 compute-0 nova_compute[117331]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Oct 09 16:33:49 compute-0 nova_compute[117331]:     <video>
Oct 09 16:33:49 compute-0 nova_compute[117331]:       <model type="virtio"/>
Oct 09 16:33:49 compute-0 nova_compute[117331]:     </video>
Oct 09 16:33:49 compute-0 nova_compute[117331]:     <input type="tablet" bus="usb"/>
Oct 09 16:33:49 compute-0 nova_compute[117331]:     <rng model="virtio">
Oct 09 16:33:49 compute-0 nova_compute[117331]:       <backend model="random">/dev/urandom</backend>
Oct 09 16:33:49 compute-0 nova_compute[117331]:     </rng>
Oct 09 16:33:49 compute-0 nova_compute[117331]:     <controller type="pci" model="pcie-root"/>
Oct 09 16:33:49 compute-0 nova_compute[117331]:     <controller type="pci" model="pcie-root-port"/>
Oct 09 16:33:49 compute-0 nova_compute[117331]:     <controller type="pci" model="pcie-root-port"/>
Oct 09 16:33:49 compute-0 nova_compute[117331]:     <controller type="pci" model="pcie-root-port"/>
Oct 09 16:33:49 compute-0 nova_compute[117331]:     <controller type="pci" model="pcie-root-port"/>
Oct 09 16:33:49 compute-0 nova_compute[117331]:     <controller type="pci" model="pcie-root-port"/>
Oct 09 16:33:49 compute-0 nova_compute[117331]:     <controller type="pci" model="pcie-root-port"/>
Oct 09 16:33:49 compute-0 nova_compute[117331]:     <controller type="pci" model="pcie-root-port"/>
Oct 09 16:33:49 compute-0 nova_compute[117331]:     <controller type="pci" model="pcie-root-port"/>
Oct 09 16:33:49 compute-0 nova_compute[117331]:     <controller type="pci" model="pcie-root-port"/>
Oct 09 16:33:49 compute-0 nova_compute[117331]:     <controller type="pci" model="pcie-root-port"/>
Oct 09 16:33:49 compute-0 nova_compute[117331]:     <controller type="pci" model="pcie-root-port"/>
Oct 09 16:33:49 compute-0 nova_compute[117331]:     <controller type="pci" model="pcie-root-port"/>
Oct 09 16:33:49 compute-0 nova_compute[117331]:     <controller type="pci" model="pcie-root-port"/>
Oct 09 16:33:49 compute-0 nova_compute[117331]:     <controller type="pci" model="pcie-root-port"/>
Oct 09 16:33:49 compute-0 nova_compute[117331]:     <controller type="pci" model="pcie-root-port"/>
Oct 09 16:33:49 compute-0 nova_compute[117331]:     <controller type="pci" model="pcie-root-port"/>
Oct 09 16:33:49 compute-0 nova_compute[117331]:     <controller type="pci" model="pcie-root-port"/>
Oct 09 16:33:49 compute-0 nova_compute[117331]:     <controller type="pci" model="pcie-root-port"/>
Oct 09 16:33:49 compute-0 nova_compute[117331]:     <controller type="pci" model="pcie-root-port"/>
Oct 09 16:33:49 compute-0 nova_compute[117331]:     <controller type="pci" model="pcie-root-port"/>
Oct 09 16:33:49 compute-0 nova_compute[117331]:     <controller type="pci" model="pcie-root-port"/>
Oct 09 16:33:49 compute-0 nova_compute[117331]:     <controller type="pci" model="pcie-root-port"/>
Oct 09 16:33:49 compute-0 nova_compute[117331]:     <controller type="pci" model="pcie-root-port"/>
Oct 09 16:33:49 compute-0 nova_compute[117331]:     <controller type="pci" model="pcie-root-port"/>
Oct 09 16:33:49 compute-0 nova_compute[117331]:     <controller type="usb" index="0"/>
Oct 09 16:33:49 compute-0 nova_compute[117331]:     <memballoon model="virtio" autodeflate="on" freePageReporting="on">
Oct 09 16:33:49 compute-0 nova_compute[117331]:       <stats period="10"/>
Oct 09 16:33:49 compute-0 nova_compute[117331]:     </memballoon>
Oct 09 16:33:49 compute-0 nova_compute[117331]:   </devices>
Oct 09 16:33:49 compute-0 nova_compute[117331]: </domain>
Oct 09 16:33:49 compute-0 nova_compute[117331]:  _get_guest_xml /usr/lib/python3.12/site-packages/nova/virt/libvirt/driver.py:8052
Oct 09 16:33:49 compute-0 nova_compute[117331]: 2025-10-09 16:33:49.286 2 DEBUG nova.compute.manager [None req-7ad2ce8a-2ab7-48f4-923c-a933cdddadfb 1c793380a6e945d69dacfd07f1f156f8 6d67ac3076434a4582e5db1ca7d043ff - - default default] [instance: d2bb4f53-7374-48bc-8bd3-5de9d41372b6] Preparing to wait for external event network-vif-plugged-4b893ed9-8bb2-41b7-9842-b530d7cc9ccd prepare_for_instance_event /usr/lib/python3.12/site-packages/nova/compute/manager.py:307
Oct 09 16:33:49 compute-0 nova_compute[117331]: 2025-10-09 16:33:49.286 2 DEBUG oslo_concurrency.lockutils [None req-7ad2ce8a-2ab7-48f4-923c-a933cdddadfb 1c793380a6e945d69dacfd07f1f156f8 6d67ac3076434a4582e5db1ca7d043ff - - default default] Acquiring lock "d2bb4f53-7374-48bc-8bd3-5de9d41372b6-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:405
Oct 09 16:33:49 compute-0 nova_compute[117331]: 2025-10-09 16:33:49.287 2 DEBUG oslo_concurrency.lockutils [None req-7ad2ce8a-2ab7-48f4-923c-a933cdddadfb 1c793380a6e945d69dacfd07f1f156f8 6d67ac3076434a4582e5db1ca7d043ff - - default default] Lock "d2bb4f53-7374-48bc-8bd3-5de9d41372b6-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:410
Oct 09 16:33:49 compute-0 nova_compute[117331]: 2025-10-09 16:33:49.287 2 DEBUG oslo_concurrency.lockutils [None req-7ad2ce8a-2ab7-48f4-923c-a933cdddadfb 1c793380a6e945d69dacfd07f1f156f8 6d67ac3076434a4582e5db1ca7d043ff - - default default] Lock "d2bb4f53-7374-48bc-8bd3-5de9d41372b6-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:424
Oct 09 16:33:49 compute-0 nova_compute[117331]: 2025-10-09 16:33:49.288 2 DEBUG nova.virt.libvirt.vif [None req-7ad2ce8a-2ab7-48f4-923c-a933cdddadfb 1c793380a6e945d69dacfd07f1f156f8 6d67ac3076434a4582e5db1ca7d043ff - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,compute_id=1,config_drive='',created_at=2025-10-09T16:33:37Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description=None,display_name='tempest-TestExecuteWorkloadBalanceStrategy-server-1819620832',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(3),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testexecuteworkloadbalancestrategy-server-1819620832',id=25,image_ref='b7d6e0af-25e4-4227-9dc6-43143898ceee',info_cache=InstanceInfoCache,instance_type_id=3,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=1151,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='6d67ac3076434a4582e5db1ca7d043ff',ramdisk_id='',reservation_id='r-bt02xp9l',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,manager,admin,member',image_base_image_ref='b7d6e0af-25e4-4227-9dc6-43143898ceee',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestExecuteWorkloadBalanceStrategy-2100042169',owner_user_name='tempest-TestExecuteWorkloadBalanceStrategy-2100042169-project-admin'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-09T16:33:43Z,user_data=None,user_id='1c793380a6e945d69dacfd07f1f156f8',uuid=d2bb4f53-7374-48bc-8bd3-5de9d41372b6,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "4b893ed9-8bb2-41b7-9842-b530d7cc9ccd", "address": "fa:16:3e:bc:f1:50", "network": {"id": "54b37568-476a-40a0-b545-fe5401f85653", "bridge": "br-int", "label": "tempest-TestExecuteWorkloadBalanceStrategy-563305466-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "036a4e356fb34effb6775ffe5bd9a19f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap4b893ed9-8b", "ovs_interfaceid": "4b893ed9-8bb2-41b7-9842-b530d7cc9ccd", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.12/site-packages/nova/virt/libvirt/vif.py:721
Oct 09 16:33:49 compute-0 nova_compute[117331]: 2025-10-09 16:33:49.288 2 DEBUG nova.network.os_vif_util [None req-7ad2ce8a-2ab7-48f4-923c-a933cdddadfb 1c793380a6e945d69dacfd07f1f156f8 6d67ac3076434a4582e5db1ca7d043ff - - default default] Converting VIF {"id": "4b893ed9-8bb2-41b7-9842-b530d7cc9ccd", "address": "fa:16:3e:bc:f1:50", "network": {"id": "54b37568-476a-40a0-b545-fe5401f85653", "bridge": "br-int", "label": "tempest-TestExecuteWorkloadBalanceStrategy-563305466-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "036a4e356fb34effb6775ffe5bd9a19f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap4b893ed9-8b", "ovs_interfaceid": "4b893ed9-8bb2-41b7-9842-b530d7cc9ccd", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.12/site-packages/nova/network/os_vif_util.py:511
Oct 09 16:33:49 compute-0 nova_compute[117331]: 2025-10-09 16:33:49.289 2 DEBUG nova.network.os_vif_util [None req-7ad2ce8a-2ab7-48f4-923c-a933cdddadfb 1c793380a6e945d69dacfd07f1f156f8 6d67ac3076434a4582e5db1ca7d043ff - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:bc:f1:50,bridge_name='br-int',has_traffic_filtering=True,id=4b893ed9-8bb2-41b7-9842-b530d7cc9ccd,network=Network(54b37568-476a-40a0-b545-fe5401f85653),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap4b893ed9-8b') nova_to_osvif_vif /usr/lib/python3.12/site-packages/nova/network/os_vif_util.py:548
Oct 09 16:33:49 compute-0 nova_compute[117331]: 2025-10-09 16:33:49.290 2 DEBUG os_vif [None req-7ad2ce8a-2ab7-48f4-923c-a933cdddadfb 1c793380a6e945d69dacfd07f1f156f8 6d67ac3076434a4582e5db1ca7d043ff - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:bc:f1:50,bridge_name='br-int',has_traffic_filtering=True,id=4b893ed9-8bb2-41b7-9842-b530d7cc9ccd,network=Network(54b37568-476a-40a0-b545-fe5401f85653),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap4b893ed9-8b') plug /usr/lib/python3.12/site-packages/os_vif/__init__.py:76
Oct 09 16:33:49 compute-0 nova_compute[117331]: 2025-10-09 16:33:49.291 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 22 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 09 16:33:49 compute-0 nova_compute[117331]: 2025-10-09 16:33:49.291 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.12/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 09 16:33:49 compute-0 nova_compute[117331]: 2025-10-09 16:33:49.292 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.12/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 09 16:33:49 compute-0 nova_compute[117331]: 2025-10-09 16:33:49.293 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 22 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 09 16:33:49 compute-0 nova_compute[117331]: 2025-10-09 16:33:49.293 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbCreateCommand(_result=None, table=QoS, columns={'type': 'linux-noop', 'external_ids': {'id': 'db4afb10-2fee-5282-a813-0e5977633d36', '_type': 'linux-noop'}}, row=False) do_commit /usr/lib/python3.12/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 09 16:33:49 compute-0 nova_compute[117331]: 2025-10-09 16:33:49.295 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 09 16:33:49 compute-0 nova_compute[117331]: 2025-10-09 16:33:49.296 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 09 16:33:49 compute-0 nova_compute[117331]: 2025-10-09 16:33:49.300 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 22 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 09 16:33:49 compute-0 nova_compute[117331]: 2025-10-09 16:33:49.301 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap4b893ed9-8b, may_exist=True, interface_attrs={}) do_commit /usr/lib/python3.12/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 09 16:33:49 compute-0 nova_compute[117331]: 2025-10-09 16:33:49.301 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Port, record=tap4b893ed9-8b, col_values=(('qos', UUID('e1cc9b99-c1a9-42a7-9b32-f899b05a38ad')),), if_exists=True) do_commit /usr/lib/python3.12/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 09 16:33:49 compute-0 nova_compute[117331]: 2025-10-09 16:33:49.301 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=2): DbSetCommand(_result=None, table=Interface, record=tap4b893ed9-8b, col_values=(('external_ids', {'iface-id': '4b893ed9-8bb2-41b7-9842-b530d7cc9ccd', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:bc:f1:50', 'vm-uuid': 'd2bb4f53-7374-48bc-8bd3-5de9d41372b6'}),), if_exists=True) do_commit /usr/lib/python3.12/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 09 16:33:49 compute-0 nova_compute[117331]: 2025-10-09 16:33:49.303 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 09 16:33:49 compute-0 NetworkManager[1028]: <info>  [1760027629.3039] manager: (tap4b893ed9-8b): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/85)
Oct 09 16:33:49 compute-0 nova_compute[117331]: 2025-10-09 16:33:49.305 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:248
Oct 09 16:33:49 compute-0 nova_compute[117331]: 2025-10-09 16:33:49.310 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 09 16:33:49 compute-0 nova_compute[117331]: 2025-10-09 16:33:49.311 2 INFO os_vif [None req-7ad2ce8a-2ab7-48f4-923c-a933cdddadfb 1c793380a6e945d69dacfd07f1f156f8 6d67ac3076434a4582e5db1ca7d043ff - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:bc:f1:50,bridge_name='br-int',has_traffic_filtering=True,id=4b893ed9-8bb2-41b7-9842-b530d7cc9ccd,network=Network(54b37568-476a-40a0-b545-fe5401f85653),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap4b893ed9-8b')
Oct 09 16:33:50 compute-0 nova_compute[117331]: 2025-10-09 16:33:50.852 2 DEBUG nova.virt.libvirt.driver [None req-7ad2ce8a-2ab7-48f4-923c-a933cdddadfb 1c793380a6e945d69dacfd07f1f156f8 6d67ac3076434a4582e5db1ca7d043ff - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.12/site-packages/nova/virt/libvirt/driver.py:13022
Oct 09 16:33:50 compute-0 nova_compute[117331]: 2025-10-09 16:33:50.853 2 DEBUG nova.virt.libvirt.driver [None req-7ad2ce8a-2ab7-48f4-923c-a933cdddadfb 1c793380a6e945d69dacfd07f1f156f8 6d67ac3076434a4582e5db1ca7d043ff - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.12/site-packages/nova/virt/libvirt/driver.py:13022
Oct 09 16:33:50 compute-0 nova_compute[117331]: 2025-10-09 16:33:50.854 2 DEBUG nova.virt.libvirt.driver [None req-7ad2ce8a-2ab7-48f4-923c-a933cdddadfb 1c793380a6e945d69dacfd07f1f156f8 6d67ac3076434a4582e5db1ca7d043ff - - default default] No VIF found with MAC fa:16:3e:bc:f1:50, not building metadata _build_interface_metadata /usr/lib/python3.12/site-packages/nova/virt/libvirt/driver.py:12998
Oct 09 16:33:50 compute-0 nova_compute[117331]: 2025-10-09 16:33:50.854 2 INFO nova.virt.libvirt.driver [None req-7ad2ce8a-2ab7-48f4-923c-a933cdddadfb 1c793380a6e945d69dacfd07f1f156f8 6d67ac3076434a4582e5db1ca7d043ff - - default default] [instance: d2bb4f53-7374-48bc-8bd3-5de9d41372b6] Using config drive
Oct 09 16:33:51 compute-0 nova_compute[117331]: 2025-10-09 16:33:51.365 2 WARNING neutronclient.v2_0.client [None req-7ad2ce8a-2ab7-48f4-923c-a933cdddadfb 1c793380a6e945d69dacfd07f1f156f8 6d67ac3076434a4582e5db1ca7d043ff - - default default] The python binding code in neutronclient is deprecated in favor of OpenstackSDK, please use that as this will be removed in a future release.
Oct 09 16:33:51 compute-0 sshd-session[151055]: Invalid user gitlab from 36.224.53.32 port 43598
Oct 09 16:33:51 compute-0 podman[151060]: 2025-10-09 16:33:51.84827983 +0000 UTC m=+0.090576576 container health_status 270830b5167cafbab735d8118980f26987e673cd0fa464dd2202394830bcca66 (image=38.102.83.66:5001/podified-master-centos10/openstack-multipathd:watcher_latest, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.4, managed_by=edpm_ansible, org.label-schema.build-date=20251007, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, config_id=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': '38.102.83.66:5001/podified-master-centos10/openstack-multipathd:watcher_latest', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd, org.label-schema.vendor=CentOS, tcib_build_tag=watcher_latest)
Oct 09 16:33:51 compute-0 nova_compute[117331]: 2025-10-09 16:33:51.959 2 INFO nova.virt.libvirt.driver [None req-7ad2ce8a-2ab7-48f4-923c-a933cdddadfb 1c793380a6e945d69dacfd07f1f156f8 6d67ac3076434a4582e5db1ca7d043ff - - default default] [instance: d2bb4f53-7374-48bc-8bd3-5de9d41372b6] Creating config drive at /var/lib/nova/instances/d2bb4f53-7374-48bc-8bd3-5de9d41372b6/disk.config
Oct 09 16:33:51 compute-0 nova_compute[117331]: 2025-10-09 16:33:51.967 2 DEBUG oslo_concurrency.processutils [None req-7ad2ce8a-2ab7-48f4-923c-a933cdddadfb 1c793380a6e945d69dacfd07f1f156f8 6d67ac3076434a4582e5db1ca7d043ff - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/d2bb4f53-7374-48bc-8bd3-5de9d41372b6/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 32.1.0-0.20251008143712.076498e.el10 -quiet -J -r -V config-2 /tmp/tmptgax0p9p execute /usr/lib/python3.12/site-packages/oslo_concurrency/processutils.py:349
Oct 09 16:33:52 compute-0 nova_compute[117331]: 2025-10-09 16:33:52.108 2 DEBUG oslo_concurrency.processutils [None req-7ad2ce8a-2ab7-48f4-923c-a933cdddadfb 1c793380a6e945d69dacfd07f1f156f8 6d67ac3076434a4582e5db1ca7d043ff - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/d2bb4f53-7374-48bc-8bd3-5de9d41372b6/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 32.1.0-0.20251008143712.076498e.el10 -quiet -J -r -V config-2 /tmp/tmptgax0p9p" returned: 0 in 0.141s execute /usr/lib/python3.12/site-packages/oslo_concurrency/processutils.py:372
Oct 09 16:33:52 compute-0 kernel: tap4b893ed9-8b: entered promiscuous mode
Oct 09 16:33:52 compute-0 NetworkManager[1028]: <info>  [1760027632.1623] manager: (tap4b893ed9-8b): new Tun device (/org/freedesktop/NetworkManager/Devices/86)
Oct 09 16:33:52 compute-0 nova_compute[117331]: 2025-10-09 16:33:52.164 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 09 16:33:52 compute-0 ovn_controller[19752]: 2025-10-09T16:33:52Z|00224|binding|INFO|Claiming lport 4b893ed9-8bb2-41b7-9842-b530d7cc9ccd for this chassis.
Oct 09 16:33:52 compute-0 ovn_controller[19752]: 2025-10-09T16:33:52Z|00225|binding|INFO|4b893ed9-8bb2-41b7-9842-b530d7cc9ccd: Claiming fa:16:3e:bc:f1:50 10.100.0.7
Oct 09 16:33:52 compute-0 ovn_metadata_agent[28608]: 2025-10-09 16:33:52.172 28613 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:bc:f1:50 10.100.0.7'], port_security=['fa:16:3e:bc:f1:50 10.100.0.7'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.7/28', 'neutron:device_id': 'd2bb4f53-7374-48bc-8bd3-5de9d41372b6', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-54b37568-476a-40a0-b545-fe5401f85653', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '6d67ac3076434a4582e5db1ca7d043ff', 'neutron:revision_number': '4', 'neutron:security_group_ids': '94abd997-3903-47b3-abe3-e283a4232c96', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=dc641581-8c4b-4cef-982e-bb0cf11a52ba, chassis=[<ovs.db.idl.Row object at 0x7f37b2576840>], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f37b2576840>], logical_port=4b893ed9-8bb2-41b7-9842-b530d7cc9ccd) old=Port_Binding(chassis=[]) matches /usr/lib/python3.12/site-packages/ovsdbapp/backend/ovs_idl/event.py:55
Oct 09 16:33:52 compute-0 ovn_metadata_agent[28608]: 2025-10-09 16:33:52.173 28613 INFO neutron.agent.ovn.metadata.agent [-] Port 4b893ed9-8bb2-41b7-9842-b530d7cc9ccd in datapath 54b37568-476a-40a0-b545-fe5401f85653 bound to our chassis
Oct 09 16:33:52 compute-0 ovn_metadata_agent[28608]: 2025-10-09 16:33:52.174 28613 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 54b37568-476a-40a0-b545-fe5401f85653
Oct 09 16:33:52 compute-0 ovn_controller[19752]: 2025-10-09T16:33:52Z|00226|binding|INFO|Setting lport 4b893ed9-8bb2-41b7-9842-b530d7cc9ccd ovn-installed in OVS
Oct 09 16:33:52 compute-0 ovn_controller[19752]: 2025-10-09T16:33:52Z|00227|binding|INFO|Setting lport 4b893ed9-8bb2-41b7-9842-b530d7cc9ccd up in Southbound
Oct 09 16:33:52 compute-0 nova_compute[117331]: 2025-10-09 16:33:52.179 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 09 16:33:52 compute-0 nova_compute[117331]: 2025-10-09 16:33:52.182 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 09 16:33:52 compute-0 ovn_metadata_agent[28608]: 2025-10-09 16:33:52.190 139687 DEBUG oslo.privsep.daemon [-] privsep: reply[2e26d2b8-2070-49fc-950d-b02f3fcf83de]: (4, True) _call_back /usr/lib/python3.12/site-packages/oslo_privsep/daemon.py:515
Oct 09 16:33:52 compute-0 systemd-udevd[151098]: Network interface NamePolicy= disabled on kernel command line.
Oct 09 16:33:52 compute-0 systemd-machined[77487]: New machine qemu-19-instance-00000019.
Oct 09 16:33:52 compute-0 NetworkManager[1028]: <info>  [1760027632.2042] device (tap4b893ed9-8b): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Oct 09 16:33:52 compute-0 NetworkManager[1028]: <info>  [1760027632.2051] device (tap4b893ed9-8b): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Oct 09 16:33:52 compute-0 systemd[1]: Started Virtual Machine qemu-19-instance-00000019.
Oct 09 16:33:52 compute-0 ovn_metadata_agent[28608]: 2025-10-09 16:33:52.217 141048 DEBUG oslo.privsep.daemon [-] privsep: reply[e8943af5-ac95-4d4c-9213-355e5723c6c1]: (4, None) _call_back /usr/lib/python3.12/site-packages/oslo_privsep/daemon.py:515
Oct 09 16:33:52 compute-0 ovn_metadata_agent[28608]: 2025-10-09 16:33:52.219 141048 DEBUG oslo.privsep.daemon [-] privsep: reply[5be7767d-2159-4832-ba49-fdd4098acd29]: (4, None) _call_back /usr/lib/python3.12/site-packages/oslo_privsep/daemon.py:515
Oct 09 16:33:52 compute-0 ovn_metadata_agent[28608]: 2025-10-09 16:33:52.246 141048 DEBUG oslo.privsep.daemon [-] privsep: reply[c5849795-137a-45bb-a456-8eb16a9d8375]: (4, None) _call_back /usr/lib/python3.12/site-packages/oslo_privsep/daemon.py:515
Oct 09 16:33:52 compute-0 ovn_metadata_agent[28608]: 2025-10-09 16:33:52.260 139687 DEBUG oslo.privsep.daemon [-] privsep: reply[04efdca4-8ca2-46a8-a7c8-53e4c5c1e4c1]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap54b37568-41'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:50:3a:49'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 9, 'tx_packets': 5, 'rx_bytes': 874, 'tx_bytes': 354, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 9, 'tx_packets': 5, 'rx_bytes': 874, 'tx_bytes': 354, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 63], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 255534, 'reachable_time': 20185, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 8, 'inoctets': 720, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 8, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 720, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 8, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 151108, 'error': None, 'target': 'ovnmeta-54b37568-476a-40a0-b545-fe5401f85653', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.12/site-packages/oslo_privsep/daemon.py:515
Oct 09 16:33:52 compute-0 ovn_metadata_agent[28608]: 2025-10-09 16:33:52.274 139687 DEBUG oslo.privsep.daemon [-] privsep: reply[2b318f18-64d5-4d90-8777-3347e86bbcba]: (4, ({'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tap54b37568-41'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 255546, 'tstamp': 255546}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 151112, 'error': None, 'target': 'ovnmeta-54b37568-476a-40a0-b545-fe5401f85653', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tap54b37568-41'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 255550, 'tstamp': 255550}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 151112, 'error': None, 'target': 'ovnmeta-54b37568-476a-40a0-b545-fe5401f85653', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.12/site-packages/oslo_privsep/daemon.py:515
Oct 09 16:33:52 compute-0 ovn_metadata_agent[28608]: 2025-10-09 16:33:52.275 28613 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap54b37568-40, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.12/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 09 16:33:52 compute-0 nova_compute[117331]: 2025-10-09 16:33:52.276 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 09 16:33:52 compute-0 ovn_metadata_agent[28608]: 2025-10-09 16:33:52.278 28613 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap54b37568-40, may_exist=True, interface_attrs={}) do_commit /usr/lib/python3.12/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 09 16:33:52 compute-0 nova_compute[117331]: 2025-10-09 16:33:52.278 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 09 16:33:52 compute-0 ovn_metadata_agent[28608]: 2025-10-09 16:33:52.278 28613 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.12/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 09 16:33:52 compute-0 ovn_metadata_agent[28608]: 2025-10-09 16:33:52.279 28613 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap54b37568-40, col_values=(('external_ids', {'iface-id': '01dee844-02ac-4caa-80f7-8a16019cbd9d'}),), if_exists=True) do_commit /usr/lib/python3.12/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 09 16:33:52 compute-0 ovn_metadata_agent[28608]: 2025-10-09 16:33:52.279 28613 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.12/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 09 16:33:52 compute-0 ovn_metadata_agent[28608]: 2025-10-09 16:33:52.280 139687 DEBUG oslo.privsep.daemon [-] privsep: reply[d322665b-2904-4466-bd53-7ba7783b4789]: (4, '\nglobal\n    log         /dev/log local0 debug\n    log-tag     haproxy-metadata-proxy-54b37568-476a-40a0-b545-fe5401f85653\n    user        root\n    group       root\n    maxconn     1024\n    pidfile     /var/lib/neutron/external/pids/54b37568-476a-40a0-b545-fe5401f85653.pid.haproxy\n    daemon\n\ndefaults\n    log global\n    mode http\n    option httplog\n    option dontlognull\n    option http-server-close\n    option forwardfor\n    retries                 3\n    timeout http-request    30s\n    timeout connect         30s\n    timeout client          32s\n    timeout server          32s\n    timeout http-keep-alive 30s\n\nlisten listener\n    bind 169.254.169.254:80\n    \n    server metadata /var/lib/neutron/metadata_proxy\n\n    http-request add-header X-OVN-Network-ID 54b37568-476a-40a0-b545-fe5401f85653\n') _call_back /usr/lib/python3.12/site-packages/oslo_privsep/daemon.py:515
Oct 09 16:33:52 compute-0 nova_compute[117331]: 2025-10-09 16:33:52.378 2 DEBUG nova.compute.manager [req-c2900272-d022-4785-a4f4-c04280f8e46c req-8aedbe2e-c071-4d70-b2b9-ff3a95166096 ee15961ad2be4bada6c106ac4f69f422 c076c42271004e7aa86e84416e33f826 - - default default] [instance: d2bb4f53-7374-48bc-8bd3-5de9d41372b6] Received event network-vif-plugged-4b893ed9-8bb2-41b7-9842-b530d7cc9ccd external_instance_event /usr/lib/python3.12/site-packages/nova/compute/manager.py:11812
Oct 09 16:33:52 compute-0 nova_compute[117331]: 2025-10-09 16:33:52.378 2 DEBUG oslo_concurrency.lockutils [req-c2900272-d022-4785-a4f4-c04280f8e46c req-8aedbe2e-c071-4d70-b2b9-ff3a95166096 ee15961ad2be4bada6c106ac4f69f422 c076c42271004e7aa86e84416e33f826 - - default default] Acquiring lock "d2bb4f53-7374-48bc-8bd3-5de9d41372b6-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:405
Oct 09 16:33:52 compute-0 nova_compute[117331]: 2025-10-09 16:33:52.379 2 DEBUG oslo_concurrency.lockutils [req-c2900272-d022-4785-a4f4-c04280f8e46c req-8aedbe2e-c071-4d70-b2b9-ff3a95166096 ee15961ad2be4bada6c106ac4f69f422 c076c42271004e7aa86e84416e33f826 - - default default] Lock "d2bb4f53-7374-48bc-8bd3-5de9d41372b6-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:410
Oct 09 16:33:52 compute-0 nova_compute[117331]: 2025-10-09 16:33:52.379 2 DEBUG oslo_concurrency.lockutils [req-c2900272-d022-4785-a4f4-c04280f8e46c req-8aedbe2e-c071-4d70-b2b9-ff3a95166096 ee15961ad2be4bada6c106ac4f69f422 c076c42271004e7aa86e84416e33f826 - - default default] Lock "d2bb4f53-7374-48bc-8bd3-5de9d41372b6-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:424
Oct 09 16:33:52 compute-0 nova_compute[117331]: 2025-10-09 16:33:52.379 2 DEBUG nova.compute.manager [req-c2900272-d022-4785-a4f4-c04280f8e46c req-8aedbe2e-c071-4d70-b2b9-ff3a95166096 ee15961ad2be4bada6c106ac4f69f422 c076c42271004e7aa86e84416e33f826 - - default default] [instance: d2bb4f53-7374-48bc-8bd3-5de9d41372b6] Processing event network-vif-plugged-4b893ed9-8bb2-41b7-9842-b530d7cc9ccd _process_instance_event /usr/lib/python3.12/site-packages/nova/compute/manager.py:11572
Oct 09 16:33:52 compute-0 sshd-session[151055]: pam_unix(sshd:auth): check pass; user unknown
Oct 09 16:33:52 compute-0 sshd-session[151055]: pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=36.224.53.32
Oct 09 16:33:53 compute-0 nova_compute[117331]: 2025-10-09 16:33:53.043 2 DEBUG nova.compute.manager [None req-7ad2ce8a-2ab7-48f4-923c-a933cdddadfb 1c793380a6e945d69dacfd07f1f156f8 6d67ac3076434a4582e5db1ca7d043ff - - default default] [instance: d2bb4f53-7374-48bc-8bd3-5de9d41372b6] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.12/site-packages/nova/compute/manager.py:602
Oct 09 16:33:53 compute-0 nova_compute[117331]: 2025-10-09 16:33:53.046 2 DEBUG nova.virt.libvirt.driver [None req-7ad2ce8a-2ab7-48f4-923c-a933cdddadfb 1c793380a6e945d69dacfd07f1f156f8 6d67ac3076434a4582e5db1ca7d043ff - - default default] [instance: d2bb4f53-7374-48bc-8bd3-5de9d41372b6] Guest created on hypervisor spawn /usr/lib/python3.12/site-packages/nova/virt/libvirt/driver.py:4816
Oct 09 16:33:53 compute-0 nova_compute[117331]: 2025-10-09 16:33:53.049 2 INFO nova.virt.libvirt.driver [-] [instance: d2bb4f53-7374-48bc-8bd3-5de9d41372b6] Instance spawned successfully.
Oct 09 16:33:53 compute-0 nova_compute[117331]: 2025-10-09 16:33:53.049 2 DEBUG nova.virt.libvirt.driver [None req-7ad2ce8a-2ab7-48f4-923c-a933cdddadfb 1c793380a6e945d69dacfd07f1f156f8 6d67ac3076434a4582e5db1ca7d043ff - - default default] [instance: d2bb4f53-7374-48bc-8bd3-5de9d41372b6] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.12/site-packages/nova/virt/libvirt/driver.py:985
Oct 09 16:33:53 compute-0 nova_compute[117331]: 2025-10-09 16:33:53.562 2 DEBUG nova.virt.libvirt.driver [None req-7ad2ce8a-2ab7-48f4-923c-a933cdddadfb 1c793380a6e945d69dacfd07f1f156f8 6d67ac3076434a4582e5db1ca7d043ff - - default default] [instance: d2bb4f53-7374-48bc-8bd3-5de9d41372b6] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.12/site-packages/nova/virt/libvirt/driver.py:1014
Oct 09 16:33:53 compute-0 nova_compute[117331]: 2025-10-09 16:33:53.563 2 DEBUG nova.virt.libvirt.driver [None req-7ad2ce8a-2ab7-48f4-923c-a933cdddadfb 1c793380a6e945d69dacfd07f1f156f8 6d67ac3076434a4582e5db1ca7d043ff - - default default] [instance: d2bb4f53-7374-48bc-8bd3-5de9d41372b6] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.12/site-packages/nova/virt/libvirt/driver.py:1014
Oct 09 16:33:53 compute-0 nova_compute[117331]: 2025-10-09 16:33:53.563 2 DEBUG nova.virt.libvirt.driver [None req-7ad2ce8a-2ab7-48f4-923c-a933cdddadfb 1c793380a6e945d69dacfd07f1f156f8 6d67ac3076434a4582e5db1ca7d043ff - - default default] [instance: d2bb4f53-7374-48bc-8bd3-5de9d41372b6] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.12/site-packages/nova/virt/libvirt/driver.py:1014
Oct 09 16:33:53 compute-0 nova_compute[117331]: 2025-10-09 16:33:53.564 2 DEBUG nova.virt.libvirt.driver [None req-7ad2ce8a-2ab7-48f4-923c-a933cdddadfb 1c793380a6e945d69dacfd07f1f156f8 6d67ac3076434a4582e5db1ca7d043ff - - default default] [instance: d2bb4f53-7374-48bc-8bd3-5de9d41372b6] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.12/site-packages/nova/virt/libvirt/driver.py:1014
Oct 09 16:33:53 compute-0 nova_compute[117331]: 2025-10-09 16:33:53.564 2 DEBUG nova.virt.libvirt.driver [None req-7ad2ce8a-2ab7-48f4-923c-a933cdddadfb 1c793380a6e945d69dacfd07f1f156f8 6d67ac3076434a4582e5db1ca7d043ff - - default default] [instance: d2bb4f53-7374-48bc-8bd3-5de9d41372b6] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.12/site-packages/nova/virt/libvirt/driver.py:1014
Oct 09 16:33:53 compute-0 nova_compute[117331]: 2025-10-09 16:33:53.565 2 DEBUG nova.virt.libvirt.driver [None req-7ad2ce8a-2ab7-48f4-923c-a933cdddadfb 1c793380a6e945d69dacfd07f1f156f8 6d67ac3076434a4582e5db1ca7d043ff - - default default] [instance: d2bb4f53-7374-48bc-8bd3-5de9d41372b6] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.12/site-packages/nova/virt/libvirt/driver.py:1014
Oct 09 16:33:53 compute-0 sshd-session[151055]: Failed password for invalid user gitlab from 36.224.53.32 port 43598 ssh2
Oct 09 16:33:54 compute-0 nova_compute[117331]: 2025-10-09 16:33:54.077 2 INFO nova.compute.manager [None req-7ad2ce8a-2ab7-48f4-923c-a933cdddadfb 1c793380a6e945d69dacfd07f1f156f8 6d67ac3076434a4582e5db1ca7d043ff - - default default] [instance: d2bb4f53-7374-48bc-8bd3-5de9d41372b6] Took 9.90 seconds to spawn the instance on the hypervisor.
Oct 09 16:33:54 compute-0 nova_compute[117331]: 2025-10-09 16:33:54.078 2 DEBUG nova.compute.manager [None req-7ad2ce8a-2ab7-48f4-923c-a933cdddadfb 1c793380a6e945d69dacfd07f1f156f8 6d67ac3076434a4582e5db1ca7d043ff - - default default] [instance: d2bb4f53-7374-48bc-8bd3-5de9d41372b6] Checking state _get_power_state /usr/lib/python3.12/site-packages/nova/compute/manager.py:1825
Oct 09 16:33:54 compute-0 sshd-session[151055]: Connection closed by invalid user gitlab 36.224.53.32 port 43598 [preauth]
Oct 09 16:33:54 compute-0 nova_compute[117331]: 2025-10-09 16:33:54.305 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 09 16:33:54 compute-0 nova_compute[117331]: 2025-10-09 16:33:54.435 2 DEBUG nova.compute.manager [req-acb7c29d-120f-4b57-97a3-988f9cba6504 req-207153db-74bd-4c62-b945-e5d753d89d5c ee15961ad2be4bada6c106ac4f69f422 c076c42271004e7aa86e84416e33f826 - - default default] [instance: d2bb4f53-7374-48bc-8bd3-5de9d41372b6] Received event network-vif-plugged-4b893ed9-8bb2-41b7-9842-b530d7cc9ccd external_instance_event /usr/lib/python3.12/site-packages/nova/compute/manager.py:11812
Oct 09 16:33:54 compute-0 nova_compute[117331]: 2025-10-09 16:33:54.435 2 DEBUG oslo_concurrency.lockutils [req-acb7c29d-120f-4b57-97a3-988f9cba6504 req-207153db-74bd-4c62-b945-e5d753d89d5c ee15961ad2be4bada6c106ac4f69f422 c076c42271004e7aa86e84416e33f826 - - default default] Acquiring lock "d2bb4f53-7374-48bc-8bd3-5de9d41372b6-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:405
Oct 09 16:33:54 compute-0 nova_compute[117331]: 2025-10-09 16:33:54.436 2 DEBUG oslo_concurrency.lockutils [req-acb7c29d-120f-4b57-97a3-988f9cba6504 req-207153db-74bd-4c62-b945-e5d753d89d5c ee15961ad2be4bada6c106ac4f69f422 c076c42271004e7aa86e84416e33f826 - - default default] Lock "d2bb4f53-7374-48bc-8bd3-5de9d41372b6-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:410
Oct 09 16:33:54 compute-0 nova_compute[117331]: 2025-10-09 16:33:54.436 2 DEBUG oslo_concurrency.lockutils [req-acb7c29d-120f-4b57-97a3-988f9cba6504 req-207153db-74bd-4c62-b945-e5d753d89d5c ee15961ad2be4bada6c106ac4f69f422 c076c42271004e7aa86e84416e33f826 - - default default] Lock "d2bb4f53-7374-48bc-8bd3-5de9d41372b6-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:424
Oct 09 16:33:54 compute-0 nova_compute[117331]: 2025-10-09 16:33:54.436 2 DEBUG nova.compute.manager [req-acb7c29d-120f-4b57-97a3-988f9cba6504 req-207153db-74bd-4c62-b945-e5d753d89d5c ee15961ad2be4bada6c106ac4f69f422 c076c42271004e7aa86e84416e33f826 - - default default] [instance: d2bb4f53-7374-48bc-8bd3-5de9d41372b6] No waiting events found dispatching network-vif-plugged-4b893ed9-8bb2-41b7-9842-b530d7cc9ccd pop_instance_event /usr/lib/python3.12/site-packages/nova/compute/manager.py:344
Oct 09 16:33:54 compute-0 nova_compute[117331]: 2025-10-09 16:33:54.436 2 WARNING nova.compute.manager [req-acb7c29d-120f-4b57-97a3-988f9cba6504 req-207153db-74bd-4c62-b945-e5d753d89d5c ee15961ad2be4bada6c106ac4f69f422 c076c42271004e7aa86e84416e33f826 - - default default] [instance: d2bb4f53-7374-48bc-8bd3-5de9d41372b6] Received unexpected event network-vif-plugged-4b893ed9-8bb2-41b7-9842-b530d7cc9ccd for instance with vm_state active and task_state None.
Oct 09 16:33:54 compute-0 nova_compute[117331]: 2025-10-09 16:33:54.609 2 INFO nova.compute.manager [None req-7ad2ce8a-2ab7-48f4-923c-a933cdddadfb 1c793380a6e945d69dacfd07f1f156f8 6d67ac3076434a4582e5db1ca7d043ff - - default default] [instance: d2bb4f53-7374-48bc-8bd3-5de9d41372b6] Took 15.28 seconds to build instance.
Oct 09 16:33:55 compute-0 nova_compute[117331]: 2025-10-09 16:33:55.114 2 DEBUG oslo_concurrency.lockutils [None req-7ad2ce8a-2ab7-48f4-923c-a933cdddadfb 1c793380a6e945d69dacfd07f1f156f8 6d67ac3076434a4582e5db1ca7d043ff - - default default] Lock "d2bb4f53-7374-48bc-8bd3-5de9d41372b6" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 16.800s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:424
Oct 09 16:33:55 compute-0 nova_compute[117331]: 2025-10-09 16:33:55.114 2 DEBUG oslo_concurrency.lockutils [None req-92a13bed-c081-4c7b-87f3-bafebefb79f2 - - - - - -] Lock "d2bb4f53-7374-48bc-8bd3-5de9d41372b6" acquired by "nova.compute.manager.ComputeManager._sync_power_states.<locals>._sync.<locals>.query_driver_power_state_and_sync" :: waited 10.943s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:410
Oct 09 16:33:55 compute-0 nova_compute[117331]: 2025-10-09 16:33:55.115 2 INFO nova.compute.manager [None req-92a13bed-c081-4c7b-87f3-bafebefb79f2 - - - - - -] [instance: d2bb4f53-7374-48bc-8bd3-5de9d41372b6] During sync_power_state the instance has a pending task (block_device_mapping). Skip.
Oct 09 16:33:55 compute-0 nova_compute[117331]: 2025-10-09 16:33:55.115 2 DEBUG oslo_concurrency.lockutils [None req-92a13bed-c081-4c7b-87f3-bafebefb79f2 - - - - - -] Lock "d2bb4f53-7374-48bc-8bd3-5de9d41372b6" "released" by "nova.compute.manager.ComputeManager._sync_power_states.<locals>._sync.<locals>.query_driver_power_state_and_sync" :: held 0.000s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:424
Oct 09 16:33:55 compute-0 podman[151123]: 2025-10-09 16:33:55.853678144 +0000 UTC m=+0.076178990 container health_status 86aa93e887899e54d383b0fc17fb3c5637680d1d8e146a130a557c837f0260fe (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm)
Oct 09 16:33:57 compute-0 nova_compute[117331]: 2025-10-09 16:33:57.184 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 09 16:33:58 compute-0 sshd-session[151121]: Invalid user elastic from 36.224.53.32 port 51412
Oct 09 16:33:59 compute-0 nova_compute[117331]: 2025-10-09 16:33:59.308 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 09 16:33:59 compute-0 sshd-session[151121]: pam_unix(sshd:auth): check pass; user unknown
Oct 09 16:33:59 compute-0 sshd-session[151121]: pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=36.224.53.32
Oct 09 16:33:59 compute-0 podman[127775]: time="2025-10-09T16:33:59Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Oct 09 16:33:59 compute-0 podman[127775]: @ - - [09/Oct/2025:16:33:59 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 20749 "" "Go-http-client/1.1"
Oct 09 16:33:59 compute-0 podman[127775]: @ - - [09/Oct/2025:16:33:59 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 3496 "" "Go-http-client/1.1"
Oct 09 16:34:00 compute-0 sshd-session[151121]: Failed password for invalid user elastic from 36.224.53.32 port 51412 ssh2
Oct 09 16:34:01 compute-0 openstack_network_exporter[129925]: ERROR   16:34:01 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Oct 09 16:34:01 compute-0 openstack_network_exporter[129925]: ERROR   16:34:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Oct 09 16:34:01 compute-0 openstack_network_exporter[129925]: ERROR   16:34:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Oct 09 16:34:01 compute-0 openstack_network_exporter[129925]: ERROR   16:34:01 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Oct 09 16:34:01 compute-0 openstack_network_exporter[129925]: 
Oct 09 16:34:01 compute-0 openstack_network_exporter[129925]: ERROR   16:34:01 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Oct 09 16:34:01 compute-0 openstack_network_exporter[129925]: 
Oct 09 16:34:01 compute-0 sshd-session[151121]: Connection closed by invalid user elastic 36.224.53.32 port 51412 [preauth]
Oct 09 16:34:01 compute-0 podman[151149]: 2025-10-09 16:34:01.817327219 +0000 UTC m=+0.044310169 container health_status 0e966c7a283689daf23c5db5017f23f685ee836319240188a9f70ddd246563c5 (image=38.102.83.66:5001/podified-master-centos10/openstack-neutron-metadata-agent-ovn:watcher_latest, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': '38.102.83.66:5001/podified-master-centos10/openstack-neutron-metadata-agent-ovn:watcher_latest', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.4, managed_by=edpm_ansible, tcib_build_tag=watcher_latest, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_id=ovn_metadata_agent, org.label-schema.build-date=20251007, org.label-schema.name=CentOS Stream 10 Base Image)
Oct 09 16:34:01 compute-0 podman[151150]: 2025-10-09 16:34:01.844246983 +0000 UTC m=+0.067962629 container health_status 8227728d23f374cb9528be6ddf01dff38d8c792912a17b0d137932a18390b820 (image=38.102.83.66:5001/podified-master-centos10/openstack-iscsid:watcher_latest, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=watcher_latest, org.label-schema.schema-version=1.0, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': '38.102.83.66:5001/podified-master-centos10/openstack-iscsid:watcher_latest', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, io.buildah.version=1.41.4, config_id=iscsid, managed_by=edpm_ansible, org.label-schema.build-date=20251007, tcib_managed=true, container_name=iscsid)
Oct 09 16:34:02 compute-0 nova_compute[117331]: 2025-10-09 16:34:02.219 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 09 16:34:04 compute-0 nova_compute[117331]: 2025-10-09 16:34:04.356 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 09 16:34:05 compute-0 ovn_controller[19752]: 2025-10-09T16:34:05Z|00031|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:bc:f1:50 10.100.0.7
Oct 09 16:34:05 compute-0 ovn_controller[19752]: 2025-10-09T16:34:05Z|00032|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:bc:f1:50 10.100.0.7
Oct 09 16:34:07 compute-0 nova_compute[117331]: 2025-10-09 16:34:07.219 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 09 16:34:07 compute-0 nova_compute[117331]: 2025-10-09 16:34:07.356 2 DEBUG nova.virt.libvirt.driver [None req-76aeb176-65ec-4856-ad96-194e2d74752e 005258eefa5345409efe13f6a1de22ab c076c42271004e7aa86e84416e33f826 - - default default] [instance: e08b0fc8-1e1e-4d97-b613-761b8f6f9674] Check if temp file /var/lib/nova/instances/tmpxjacdr_s exists to indicate shared storage is being used for migration. Exists? False _check_shared_storage_test_file /usr/lib/python3.12/site-packages/nova/virt/libvirt/driver.py:10968
Oct 09 16:34:07 compute-0 nova_compute[117331]: 2025-10-09 16:34:07.361 2 DEBUG nova.compute.manager [None req-76aeb176-65ec-4856-ad96-194e2d74752e 005258eefa5345409efe13f6a1de22ab c076c42271004e7aa86e84416e33f826 - - default default] source check data is LibvirtLiveMigrateData(bdms=<?>,block_migration=True,disk_available_mb=74752,disk_over_commit=<?>,dst_cpu_shared_set_info=set([]),dst_numa_info=<?>,dst_supports_mdev_live_migration=True,dst_supports_numa_live_migration=<?>,dst_wants_file_backed_memory=False,file_backed_memory_discard=<?>,filename='tmpxjacdr_s',graphics_listen_addr_spice=127.0.0.1,graphics_listen_addr_vnc=::,image_type='qcow2',instance_relative_path='e08b0fc8-1e1e-4d97-b613-761b8f6f9674',is_shared_block_storage=False,is_shared_instance_path=False,is_volume_backed=False,migration=<?>,old_vol_attachment_ids=<?>,pci_dev_map_src_dst=<?>,serial_listen_addr=None,serial_listen_ports=<?>,source_mdev_types=<?>,src_supports_native_luks=<?>,src_supports_numa_live_migration=<?>,supported_perf_events=<?>,target_connect_addr=<?>,target_mdevs=<?>,vifs=[VIFMigrateData],wait_for_vif_plugged=<?>) check_can_live_migrate_source /usr/lib/python3.12/site-packages/nova/compute/manager.py:9294
Oct 09 16:34:08 compute-0 sshd-session[151147]: Connection closed by 36.224.53.32 port 60222 [preauth]
Oct 09 16:34:09 compute-0 nova_compute[117331]: 2025-10-09 16:34:09.360 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 09 16:34:10 compute-0 podman[151205]: 2025-10-09 16:34:10.845639497 +0000 UTC m=+0.065078957 container health_status 64fa251112d01abb34ec1bb8e22019815ddcaaaa391ce5ec91517147ac5a9994 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, vcs-type=git, config_id=edpm, vendor=Red Hat, Inc., io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.expose-services=, maintainer=Red Hat, Inc., version=9.6, architecture=x86_64, build-date=2025-08-20T13:12:41, io.openshift.tags=minimal rhel9, io.buildah.version=1.33.7, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, managed_by=edpm_ansible, release=1755695350, com.redhat.component=ubi9-minimal-container, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, url=https://catalog.redhat.com/en/search?searchType=containers, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, container_name=openstack_network_exporter, name=ubi9-minimal, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., distribution-scope=public)
Oct 09 16:34:10 compute-0 podman[151206]: 2025-10-09 16:34:10.905118555 +0000 UTC m=+0.121970793 container health_status d615e4f24ff62fbb14be4a63e6da568ec5b22309773c1c66103b6b4de64a8a2f (image=38.102.83.66:5001/podified-master-centos10/openstack-ovn-controller:watcher_latest, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251007, org.label-schema.vendor=CentOS, tcib_build_tag=watcher_latest, config_id=ovn_controller, managed_by=edpm_ansible, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': '38.102.83.66:5001/podified-master-centos10/openstack-ovn-controller:watcher_latest', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.4, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, container_name=ovn_controller, org.label-schema.name=CentOS Stream 10 Base Image)
Oct 09 16:34:12 compute-0 sshd-session[151203]: Invalid user git from 36.224.53.32 port 40658
Oct 09 16:34:12 compute-0 nova_compute[117331]: 2025-10-09 16:34:12.220 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 09 16:34:12 compute-0 nova_compute[117331]: 2025-10-09 16:34:12.313 2 DEBUG oslo_concurrency.processutils [None req-76aeb176-65ec-4856-ad96-194e2d74752e 005258eefa5345409efe13f6a1de22ab c076c42271004e7aa86e84416e33f826 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/e08b0fc8-1e1e-4d97-b613-761b8f6f9674/disk --force-share --output=json execute /usr/lib/python3.12/site-packages/oslo_concurrency/processutils.py:349
Oct 09 16:34:12 compute-0 nova_compute[117331]: 2025-10-09 16:34:12.367 2 DEBUG oslo_concurrency.processutils [None req-76aeb176-65ec-4856-ad96-194e2d74752e 005258eefa5345409efe13f6a1de22ab c076c42271004e7aa86e84416e33f826 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/e08b0fc8-1e1e-4d97-b613-761b8f6f9674/disk --force-share --output=json" returned: 0 in 0.055s execute /usr/lib/python3.12/site-packages/oslo_concurrency/processutils.py:372
Oct 09 16:34:12 compute-0 nova_compute[117331]: 2025-10-09 16:34:12.368 2 DEBUG oslo_concurrency.processutils [None req-76aeb176-65ec-4856-ad96-194e2d74752e 005258eefa5345409efe13f6a1de22ab c076c42271004e7aa86e84416e33f826 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/e08b0fc8-1e1e-4d97-b613-761b8f6f9674/disk --force-share --output=json execute /usr/lib/python3.12/site-packages/oslo_concurrency/processutils.py:349
Oct 09 16:34:12 compute-0 nova_compute[117331]: 2025-10-09 16:34:12.421 2 DEBUG oslo_concurrency.processutils [None req-76aeb176-65ec-4856-ad96-194e2d74752e 005258eefa5345409efe13f6a1de22ab c076c42271004e7aa86e84416e33f826 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/e08b0fc8-1e1e-4d97-b613-761b8f6f9674/disk --force-share --output=json" returned: 0 in 0.053s execute /usr/lib/python3.12/site-packages/oslo_concurrency/processutils.py:372
Oct 09 16:34:12 compute-0 nova_compute[117331]: 2025-10-09 16:34:12.422 2 DEBUG nova.compute.manager [None req-76aeb176-65ec-4856-ad96-194e2d74752e 005258eefa5345409efe13f6a1de22ab c076c42271004e7aa86e84416e33f826 - - default default] [instance: e08b0fc8-1e1e-4d97-b613-761b8f6f9674] Preparing to wait for external event network-vif-plugged-8bc917d5-4807-4e8c-8941-29822aa14c94 prepare_for_instance_event /usr/lib/python3.12/site-packages/nova/compute/manager.py:307
Oct 09 16:34:12 compute-0 nova_compute[117331]: 2025-10-09 16:34:12.423 2 DEBUG oslo_concurrency.lockutils [None req-76aeb176-65ec-4856-ad96-194e2d74752e 005258eefa5345409efe13f6a1de22ab c076c42271004e7aa86e84416e33f826 - - default default] Acquiring lock "e08b0fc8-1e1e-4d97-b613-761b8f6f9674-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:405
Oct 09 16:34:12 compute-0 nova_compute[117331]: 2025-10-09 16:34:12.423 2 DEBUG oslo_concurrency.lockutils [None req-76aeb176-65ec-4856-ad96-194e2d74752e 005258eefa5345409efe13f6a1de22ab c076c42271004e7aa86e84416e33f826 - - default default] Lock "e08b0fc8-1e1e-4d97-b613-761b8f6f9674-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:410
Oct 09 16:34:12 compute-0 nova_compute[117331]: 2025-10-09 16:34:12.423 2 DEBUG oslo_concurrency.lockutils [None req-76aeb176-65ec-4856-ad96-194e2d74752e 005258eefa5345409efe13f6a1de22ab c076c42271004e7aa86e84416e33f826 - - default default] Lock "e08b0fc8-1e1e-4d97-b613-761b8f6f9674-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:424
Oct 09 16:34:14 compute-0 nova_compute[117331]: 2025-10-09 16:34:14.364 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 09 16:34:15 compute-0 sshd-session[151203]: Connection closed by invalid user git 36.224.53.32 port 40658 [preauth]
Oct 09 16:34:17 compute-0 nova_compute[117331]: 2025-10-09 16:34:17.222 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 09 16:34:17 compute-0 ovn_metadata_agent[28608]: 2025-10-09 16:34:17.444 28613 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=24, options={'arp_ns_explicit_output': 'true', 'fdb_removal_limit': '0', 'ignore_lsp_down': 'false', 'mac_binding_removal_limit': '0', 'mac_prefix': '4a:65:5f', 'max_tunid': '16711680', 'northd_internal_version': '24.09.4-20.37.0-77.8', 'svc_monitor_mac': '32:24:1e:e3:5d:e8'}, ipsec=False) old=SB_Global(nb_cfg=23) matches /usr/lib/python3.12/site-packages/ovsdbapp/backend/ovs_idl/event.py:55
Oct 09 16:34:17 compute-0 ovn_metadata_agent[28608]: 2025-10-09 16:34:17.445 28613 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 9 seconds run /usr/lib/python3.12/site-packages/neutron/agent/ovn/metadata/agent.py:367
Oct 09 16:34:17 compute-0 nova_compute[117331]: 2025-10-09 16:34:17.462 2 DEBUG nova.compute.manager [req-ceb3eb92-8e19-4851-8e7c-b55b7b48fceb req-9e5e1c46-e28a-4bf5-a09e-03e873610e53 ee15961ad2be4bada6c106ac4f69f422 c076c42271004e7aa86e84416e33f826 - - default default] [instance: e08b0fc8-1e1e-4d97-b613-761b8f6f9674] Received event network-vif-unplugged-8bc917d5-4807-4e8c-8941-29822aa14c94 external_instance_event /usr/lib/python3.12/site-packages/nova/compute/manager.py:11812
Oct 09 16:34:17 compute-0 nova_compute[117331]: 2025-10-09 16:34:17.462 2 DEBUG oslo_concurrency.lockutils [req-ceb3eb92-8e19-4851-8e7c-b55b7b48fceb req-9e5e1c46-e28a-4bf5-a09e-03e873610e53 ee15961ad2be4bada6c106ac4f69f422 c076c42271004e7aa86e84416e33f826 - - default default] Acquiring lock "e08b0fc8-1e1e-4d97-b613-761b8f6f9674-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:405
Oct 09 16:34:17 compute-0 nova_compute[117331]: 2025-10-09 16:34:17.462 2 DEBUG oslo_concurrency.lockutils [req-ceb3eb92-8e19-4851-8e7c-b55b7b48fceb req-9e5e1c46-e28a-4bf5-a09e-03e873610e53 ee15961ad2be4bada6c106ac4f69f422 c076c42271004e7aa86e84416e33f826 - - default default] Lock "e08b0fc8-1e1e-4d97-b613-761b8f6f9674-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:410
Oct 09 16:34:17 compute-0 nova_compute[117331]: 2025-10-09 16:34:17.463 2 DEBUG oslo_concurrency.lockutils [req-ceb3eb92-8e19-4851-8e7c-b55b7b48fceb req-9e5e1c46-e28a-4bf5-a09e-03e873610e53 ee15961ad2be4bada6c106ac4f69f422 c076c42271004e7aa86e84416e33f826 - - default default] Lock "e08b0fc8-1e1e-4d97-b613-761b8f6f9674-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:424
Oct 09 16:34:17 compute-0 nova_compute[117331]: 2025-10-09 16:34:17.463 2 DEBUG nova.compute.manager [req-ceb3eb92-8e19-4851-8e7c-b55b7b48fceb req-9e5e1c46-e28a-4bf5-a09e-03e873610e53 ee15961ad2be4bada6c106ac4f69f422 c076c42271004e7aa86e84416e33f826 - - default default] [instance: e08b0fc8-1e1e-4d97-b613-761b8f6f9674] No event matching network-vif-unplugged-8bc917d5-4807-4e8c-8941-29822aa14c94 in dict_keys([('network-vif-plugged', '8bc917d5-4807-4e8c-8941-29822aa14c94')]) pop_instance_event /usr/lib/python3.12/site-packages/nova/compute/manager.py:349
Oct 09 16:34:17 compute-0 nova_compute[117331]: 2025-10-09 16:34:17.463 2 DEBUG nova.compute.manager [req-ceb3eb92-8e19-4851-8e7c-b55b7b48fceb req-9e5e1c46-e28a-4bf5-a09e-03e873610e53 ee15961ad2be4bada6c106ac4f69f422 c076c42271004e7aa86e84416e33f826 - - default default] [instance: e08b0fc8-1e1e-4d97-b613-761b8f6f9674] Received event network-vif-unplugged-8bc917d5-4807-4e8c-8941-29822aa14c94 for instance with task_state migrating. _process_instance_event /usr/lib/python3.12/site-packages/nova/compute/manager.py:11590
Oct 09 16:34:17 compute-0 nova_compute[117331]: 2025-10-09 16:34:17.501 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 09 16:34:18 compute-0 nova_compute[117331]: 2025-10-09 16:34:18.941 2 INFO nova.compute.manager [None req-76aeb176-65ec-4856-ad96-194e2d74752e 005258eefa5345409efe13f6a1de22ab c076c42271004e7aa86e84416e33f826 - - default default] [instance: e08b0fc8-1e1e-4d97-b613-761b8f6f9674] Took 6.52 seconds for pre_live_migration on destination host compute-1.ctlplane.example.com.
Oct 09 16:34:19 compute-0 sshd-session[151258]: Invalid user mapred from 36.224.53.32 port 48768
Oct 09 16:34:19 compute-0 nova_compute[117331]: 2025-10-09 16:34:19.367 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 09 16:34:19 compute-0 nova_compute[117331]: 2025-10-09 16:34:19.517 2 DEBUG nova.compute.manager [req-2d385307-45a0-42a1-a404-3e0707f7be07 req-c6161ab1-c8e7-4897-8b87-ef3dc674c09b ee15961ad2be4bada6c106ac4f69f422 c076c42271004e7aa86e84416e33f826 - - default default] [instance: e08b0fc8-1e1e-4d97-b613-761b8f6f9674] Received event network-vif-plugged-8bc917d5-4807-4e8c-8941-29822aa14c94 external_instance_event /usr/lib/python3.12/site-packages/nova/compute/manager.py:11812
Oct 09 16:34:19 compute-0 nova_compute[117331]: 2025-10-09 16:34:19.517 2 DEBUG oslo_concurrency.lockutils [req-2d385307-45a0-42a1-a404-3e0707f7be07 req-c6161ab1-c8e7-4897-8b87-ef3dc674c09b ee15961ad2be4bada6c106ac4f69f422 c076c42271004e7aa86e84416e33f826 - - default default] Acquiring lock "e08b0fc8-1e1e-4d97-b613-761b8f6f9674-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:405
Oct 09 16:34:19 compute-0 nova_compute[117331]: 2025-10-09 16:34:19.518 2 DEBUG oslo_concurrency.lockutils [req-2d385307-45a0-42a1-a404-3e0707f7be07 req-c6161ab1-c8e7-4897-8b87-ef3dc674c09b ee15961ad2be4bada6c106ac4f69f422 c076c42271004e7aa86e84416e33f826 - - default default] Lock "e08b0fc8-1e1e-4d97-b613-761b8f6f9674-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:410
Oct 09 16:34:19 compute-0 nova_compute[117331]: 2025-10-09 16:34:19.518 2 DEBUG oslo_concurrency.lockutils [req-2d385307-45a0-42a1-a404-3e0707f7be07 req-c6161ab1-c8e7-4897-8b87-ef3dc674c09b ee15961ad2be4bada6c106ac4f69f422 c076c42271004e7aa86e84416e33f826 - - default default] Lock "e08b0fc8-1e1e-4d97-b613-761b8f6f9674-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:424
Oct 09 16:34:19 compute-0 nova_compute[117331]: 2025-10-09 16:34:19.518 2 DEBUG nova.compute.manager [req-2d385307-45a0-42a1-a404-3e0707f7be07 req-c6161ab1-c8e7-4897-8b87-ef3dc674c09b ee15961ad2be4bada6c106ac4f69f422 c076c42271004e7aa86e84416e33f826 - - default default] [instance: e08b0fc8-1e1e-4d97-b613-761b8f6f9674] Processing event network-vif-plugged-8bc917d5-4807-4e8c-8941-29822aa14c94 _process_instance_event /usr/lib/python3.12/site-packages/nova/compute/manager.py:11572
Oct 09 16:34:19 compute-0 nova_compute[117331]: 2025-10-09 16:34:19.518 2 DEBUG nova.compute.manager [req-2d385307-45a0-42a1-a404-3e0707f7be07 req-c6161ab1-c8e7-4897-8b87-ef3dc674c09b ee15961ad2be4bada6c106ac4f69f422 c076c42271004e7aa86e84416e33f826 - - default default] [instance: e08b0fc8-1e1e-4d97-b613-761b8f6f9674] Received event network-changed-8bc917d5-4807-4e8c-8941-29822aa14c94 external_instance_event /usr/lib/python3.12/site-packages/nova/compute/manager.py:11812
Oct 09 16:34:19 compute-0 nova_compute[117331]: 2025-10-09 16:34:19.519 2 DEBUG nova.compute.manager [req-2d385307-45a0-42a1-a404-3e0707f7be07 req-c6161ab1-c8e7-4897-8b87-ef3dc674c09b ee15961ad2be4bada6c106ac4f69f422 c076c42271004e7aa86e84416e33f826 - - default default] [instance: e08b0fc8-1e1e-4d97-b613-761b8f6f9674] Refreshing instance network info cache due to event network-changed-8bc917d5-4807-4e8c-8941-29822aa14c94. external_instance_event /usr/lib/python3.12/site-packages/nova/compute/manager.py:11817
Oct 09 16:34:19 compute-0 nova_compute[117331]: 2025-10-09 16:34:19.519 2 DEBUG oslo_concurrency.lockutils [req-2d385307-45a0-42a1-a404-3e0707f7be07 req-c6161ab1-c8e7-4897-8b87-ef3dc674c09b ee15961ad2be4bada6c106ac4f69f422 c076c42271004e7aa86e84416e33f826 - - default default] Acquiring lock "refresh_cache-e08b0fc8-1e1e-4d97-b613-761b8f6f9674" lock /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:313
Oct 09 16:34:19 compute-0 nova_compute[117331]: 2025-10-09 16:34:19.519 2 DEBUG oslo_concurrency.lockutils [req-2d385307-45a0-42a1-a404-3e0707f7be07 req-c6161ab1-c8e7-4897-8b87-ef3dc674c09b ee15961ad2be4bada6c106ac4f69f422 c076c42271004e7aa86e84416e33f826 - - default default] Acquired lock "refresh_cache-e08b0fc8-1e1e-4d97-b613-761b8f6f9674" lock /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:316
Oct 09 16:34:19 compute-0 nova_compute[117331]: 2025-10-09 16:34:19.520 2 DEBUG nova.network.neutron [req-2d385307-45a0-42a1-a404-3e0707f7be07 req-c6161ab1-c8e7-4897-8b87-ef3dc674c09b ee15961ad2be4bada6c106ac4f69f422 c076c42271004e7aa86e84416e33f826 - - default default] [instance: e08b0fc8-1e1e-4d97-b613-761b8f6f9674] Refreshing network info cache for port 8bc917d5-4807-4e8c-8941-29822aa14c94 _get_instance_nw_info /usr/lib/python3.12/site-packages/nova/network/neutron.py:2067
Oct 09 16:34:19 compute-0 nova_compute[117331]: 2025-10-09 16:34:19.521 2 DEBUG nova.compute.manager [None req-76aeb176-65ec-4856-ad96-194e2d74752e 005258eefa5345409efe13f6a1de22ab c076c42271004e7aa86e84416e33f826 - - default default] [instance: e08b0fc8-1e1e-4d97-b613-761b8f6f9674] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.12/site-packages/nova/compute/manager.py:602
Oct 09 16:34:20 compute-0 nova_compute[117331]: 2025-10-09 16:34:20.027 2 WARNING neutronclient.v2_0.client [req-2d385307-45a0-42a1-a404-3e0707f7be07 req-c6161ab1-c8e7-4897-8b87-ef3dc674c09b ee15961ad2be4bada6c106ac4f69f422 c076c42271004e7aa86e84416e33f826 - - default default] The python binding code in neutronclient is deprecated in favor of OpenstackSDK, please use that as this will be removed in a future release.
Oct 09 16:34:20 compute-0 sshd-session[151258]: pam_unix(sshd:auth): check pass; user unknown
Oct 09 16:34:20 compute-0 sshd-session[151258]: pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=36.224.53.32
Oct 09 16:34:20 compute-0 nova_compute[117331]: 2025-10-09 16:34:20.036 2 DEBUG nova.compute.manager [None req-76aeb176-65ec-4856-ad96-194e2d74752e 005258eefa5345409efe13f6a1de22ab c076c42271004e7aa86e84416e33f826 - - default default] live_migration data is LibvirtLiveMigrateData(bdms=[],block_migration=True,disk_available_mb=74752,disk_over_commit=<?>,dst_cpu_shared_set_info=set([]),dst_numa_info=<?>,dst_supports_mdev_live_migration=True,dst_supports_numa_live_migration=<?>,dst_wants_file_backed_memory=False,file_backed_memory_discard=<?>,filename='tmpxjacdr_s',graphics_listen_addr_spice=127.0.0.1,graphics_listen_addr_vnc=::,image_type='qcow2',instance_relative_path='e08b0fc8-1e1e-4d97-b613-761b8f6f9674',is_shared_block_storage=False,is_shared_instance_path=False,is_volume_backed=False,migration=Migration(eaf79dcb-78b9-44ec-8f72-8a19ead6b011),old_vol_attachment_ids={},pci_dev_map_src_dst={},serial_listen_addr=None,serial_listen_ports=[],source_mdev_types=<?>,src_supports_native_luks=<?>,src_supports_numa_live_migration=<?>,supported_perf_events=[],target_connect_addr=None,target_mdevs=<?>,vifs=[VIFMigrateData],wait_for_vif_plugged=True) _do_live_migration /usr/lib/python3.12/site-packages/nova/compute/manager.py:9659
Oct 09 16:34:20 compute-0 nova_compute[117331]: 2025-10-09 16:34:20.555 2 DEBUG nova.objects.instance [None req-76aeb176-65ec-4856-ad96-194e2d74752e 005258eefa5345409efe13f6a1de22ab c076c42271004e7aa86e84416e33f826 - - default default] Lazy-loading 'migration_context' on Instance uuid e08b0fc8-1e1e-4d97-b613-761b8f6f9674 obj_load_attr /usr/lib/python3.12/site-packages/nova/objects/instance.py:1141
Oct 09 16:34:20 compute-0 nova_compute[117331]: 2025-10-09 16:34:20.556 2 DEBUG nova.virt.libvirt.driver [None req-76aeb176-65ec-4856-ad96-194e2d74752e 005258eefa5345409efe13f6a1de22ab c076c42271004e7aa86e84416e33f826 - - default default] [instance: e08b0fc8-1e1e-4d97-b613-761b8f6f9674] Starting monitoring of live migration _live_migration /usr/lib/python3.12/site-packages/nova/virt/libvirt/driver.py:11543
Oct 09 16:34:20 compute-0 nova_compute[117331]: 2025-10-09 16:34:20.557 2 DEBUG nova.virt.libvirt.driver [None req-76aeb176-65ec-4856-ad96-194e2d74752e 005258eefa5345409efe13f6a1de22ab c076c42271004e7aa86e84416e33f826 - - default default] [instance: e08b0fc8-1e1e-4d97-b613-761b8f6f9674] Operation thread is still running _live_migration_monitor /usr/lib/python3.12/site-packages/nova/virt/libvirt/driver.py:11343
Oct 09 16:34:20 compute-0 nova_compute[117331]: 2025-10-09 16:34:20.558 2 DEBUG nova.virt.libvirt.driver [None req-76aeb176-65ec-4856-ad96-194e2d74752e 005258eefa5345409efe13f6a1de22ab c076c42271004e7aa86e84416e33f826 - - default default] [instance: e08b0fc8-1e1e-4d97-b613-761b8f6f9674] Migration not running yet _live_migration_monitor /usr/lib/python3.12/site-packages/nova/virt/libvirt/driver.py:11352
Oct 09 16:34:20 compute-0 nova_compute[117331]: 2025-10-09 16:34:20.571 2 WARNING neutronclient.v2_0.client [req-2d385307-45a0-42a1-a404-3e0707f7be07 req-c6161ab1-c8e7-4897-8b87-ef3dc674c09b ee15961ad2be4bada6c106ac4f69f422 c076c42271004e7aa86e84416e33f826 - - default default] The python binding code in neutronclient is deprecated in favor of OpenstackSDK, please use that as this will be removed in a future release.
Oct 09 16:34:21 compute-0 nova_compute[117331]: 2025-10-09 16:34:21.059 2 DEBUG nova.virt.libvirt.driver [None req-76aeb176-65ec-4856-ad96-194e2d74752e 005258eefa5345409efe13f6a1de22ab c076c42271004e7aa86e84416e33f826 - - default default] [instance: e08b0fc8-1e1e-4d97-b613-761b8f6f9674] Operation thread is still running _live_migration_monitor /usr/lib/python3.12/site-packages/nova/virt/libvirt/driver.py:11343
Oct 09 16:34:21 compute-0 nova_compute[117331]: 2025-10-09 16:34:21.060 2 DEBUG nova.virt.libvirt.driver [None req-76aeb176-65ec-4856-ad96-194e2d74752e 005258eefa5345409efe13f6a1de22ab c076c42271004e7aa86e84416e33f826 - - default default] [instance: e08b0fc8-1e1e-4d97-b613-761b8f6f9674] Migration not running yet _live_migration_monitor /usr/lib/python3.12/site-packages/nova/virt/libvirt/driver.py:11352
Oct 09 16:34:21 compute-0 nova_compute[117331]: 2025-10-09 16:34:21.068 2 DEBUG nova.virt.libvirt.vif [None req-76aeb176-65ec-4856-ad96-194e2d74752e 005258eefa5345409efe13f6a1de22ab c076c42271004e7aa86e84416e33f826 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,compute_id=1,config_drive='True',created_at=2025-10-09T16:33:18Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description=None,display_name='tempest-TestExecuteWorkloadBalanceStrategy-server-487819226',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(3),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testexecuteworkloadbalancestrategy-server-487819226',id=24,image_ref='b7d6e0af-25e4-4227-9dc6-43143898ceee',info_cache=InstanceInfoCache,instance_type_id=3,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-10-09T16:33:34Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=1151,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=1,progress=0,project_id='6d67ac3076434a4582e5db1ca7d043ff',ramdisk_id='',reservation_id='r-2km05070',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,manager,admin,member',image_base_image_ref='b7d6e0af-25e4-4227-9dc6-43143898ceee',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model
='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestExecuteWorkloadBalanceStrategy-2100042169',owner_user_name='tempest-TestExecuteWorkloadBalanceStrategy-2100042169-project-admin'},tags=<?>,task_state='migrating',terminated_at=None,trusted_certs=<?>,updated_at=2025-10-09T16:33:34Z,user_data=None,user_id='1c793380a6e945d69dacfd07f1f156f8',uuid=e08b0fc8-1e1e-4d97-b613-761b8f6f9674,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "8bc917d5-4807-4e8c-8941-29822aa14c94", "address": "fa:16:3e:a0:cf:4a", "network": {"id": "54b37568-476a-40a0-b545-fe5401f85653", "bridge": "br-int", "label": "tempest-TestExecuteWorkloadBalanceStrategy-563305466-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "036a4e356fb34effb6775ffe5bd9a19f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system"}, "devname": "tap8bc917d5-48", "ovs_interfaceid": "8bc917d5-4807-4e8c-8941-29822aa14c94", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {"os_vif_delegation": true}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.12/site-packages/nova/virt/libvirt/vif.py:574
Oct 09 16:34:21 compute-0 nova_compute[117331]: 2025-10-09 16:34:21.069 2 DEBUG nova.network.os_vif_util [None req-76aeb176-65ec-4856-ad96-194e2d74752e 005258eefa5345409efe13f6a1de22ab c076c42271004e7aa86e84416e33f826 - - default default] Converting VIF {"id": "8bc917d5-4807-4e8c-8941-29822aa14c94", "address": "fa:16:3e:a0:cf:4a", "network": {"id": "54b37568-476a-40a0-b545-fe5401f85653", "bridge": "br-int", "label": "tempest-TestExecuteWorkloadBalanceStrategy-563305466-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "036a4e356fb34effb6775ffe5bd9a19f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system"}, "devname": "tap8bc917d5-48", "ovs_interfaceid": "8bc917d5-4807-4e8c-8941-29822aa14c94", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {"os_vif_delegation": true}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.12/site-packages/nova/network/os_vif_util.py:511
Oct 09 16:34:21 compute-0 nova_compute[117331]: 2025-10-09 16:34:21.069 2 DEBUG nova.network.os_vif_util [None req-76aeb176-65ec-4856-ad96-194e2d74752e 005258eefa5345409efe13f6a1de22ab c076c42271004e7aa86e84416e33f826 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:a0:cf:4a,bridge_name='br-int',has_traffic_filtering=True,id=8bc917d5-4807-4e8c-8941-29822aa14c94,network=Network(54b37568-476a-40a0-b545-fe5401f85653),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap8bc917d5-48') nova_to_osvif_vif /usr/lib/python3.12/site-packages/nova/network/os_vif_util.py:548
Oct 09 16:34:21 compute-0 nova_compute[117331]: 2025-10-09 16:34:21.070 2 DEBUG nova.virt.libvirt.migration [None req-76aeb176-65ec-4856-ad96-194e2d74752e 005258eefa5345409efe13f6a1de22ab c076c42271004e7aa86e84416e33f826 - - default default] [instance: e08b0fc8-1e1e-4d97-b613-761b8f6f9674] Updating guest XML with vif config: <interface type="ethernet">
Oct 09 16:34:21 compute-0 nova_compute[117331]:   <mac address="fa:16:3e:a0:cf:4a"/>
Oct 09 16:34:21 compute-0 nova_compute[117331]:   <model type="virtio"/>
Oct 09 16:34:21 compute-0 nova_compute[117331]:   <driver name="vhost" rx_queue_size="512"/>
Oct 09 16:34:21 compute-0 nova_compute[117331]:   <mtu size="1442"/>
Oct 09 16:34:21 compute-0 nova_compute[117331]:   <target dev="tap8bc917d5-48"/>
Oct 09 16:34:21 compute-0 nova_compute[117331]: </interface>
Oct 09 16:34:21 compute-0 nova_compute[117331]:  _update_vif_xml /usr/lib/python3.12/site-packages/nova/virt/libvirt/migration.py:534
Oct 09 16:34:21 compute-0 nova_compute[117331]: 2025-10-09 16:34:21.070 2 DEBUG nova.virt.libvirt.migration [None req-76aeb176-65ec-4856-ad96-194e2d74752e 005258eefa5345409efe13f6a1de22ab c076c42271004e7aa86e84416e33f826 - - default default] _remove_cpu_shared_set_xml input xml=<domain type="kvm">
Oct 09 16:34:21 compute-0 nova_compute[117331]:   <name>instance-00000018</name>
Oct 09 16:34:21 compute-0 nova_compute[117331]:   <uuid>e08b0fc8-1e1e-4d97-b613-761b8f6f9674</uuid>
Oct 09 16:34:21 compute-0 nova_compute[117331]:   <metadata>
Oct 09 16:34:21 compute-0 nova_compute[117331]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Oct 09 16:34:21 compute-0 nova_compute[117331]:       <nova:package version="32.1.0-0.20251008143712.076498e.el10"/>
Oct 09 16:34:21 compute-0 nova_compute[117331]:       <nova:name>tempest-TestExecuteWorkloadBalanceStrategy-server-487819226</nova:name>
Oct 09 16:34:21 compute-0 nova_compute[117331]:       <nova:creationTime>2025-10-09 16:33:28</nova:creationTime>
Oct 09 16:34:21 compute-0 nova_compute[117331]:       <nova:flavor name="tempest-watcher_flavor-1801535589" id="91ea4b87-742a-4f2b-9b5d-34de6e13f85a">
Oct 09 16:34:21 compute-0 nova_compute[117331]:         <nova:memory>1151</nova:memory>
Oct 09 16:34:21 compute-0 nova_compute[117331]:         <nova:disk>1</nova:disk>
Oct 09 16:34:21 compute-0 nova_compute[117331]:         <nova:swap>0</nova:swap>
Oct 09 16:34:21 compute-0 nova_compute[117331]:         <nova:ephemeral>0</nova:ephemeral>
Oct 09 16:34:21 compute-0 nova_compute[117331]:         <nova:vcpus>1</nova:vcpus>
Oct 09 16:34:21 compute-0 nova_compute[117331]:         <nova:extraSpecs/>
Oct 09 16:34:21 compute-0 nova_compute[117331]:       </nova:flavor>
Oct 09 16:34:21 compute-0 nova_compute[117331]:       <nova:image uuid="b7d6e0af-25e4-4227-9dc6-43143898ceee">
Oct 09 16:34:21 compute-0 nova_compute[117331]:         <nova:containerFormat>bare</nova:containerFormat>
Oct 09 16:34:21 compute-0 nova_compute[117331]:         <nova:diskFormat>qcow2</nova:diskFormat>
Oct 09 16:34:21 compute-0 nova_compute[117331]:         <nova:minDisk>1</nova:minDisk>
Oct 09 16:34:21 compute-0 nova_compute[117331]:         <nova:minRam>0</nova:minRam>
Oct 09 16:34:21 compute-0 nova_compute[117331]:         <nova:properties>
Oct 09 16:34:21 compute-0 nova_compute[117331]:           <nova:property name="hw_rng_model">virtio</nova:property>
Oct 09 16:34:21 compute-0 nova_compute[117331]:         </nova:properties>
Oct 09 16:34:21 compute-0 nova_compute[117331]:       </nova:image>
Oct 09 16:34:21 compute-0 nova_compute[117331]:       <nova:owner>
Oct 09 16:34:21 compute-0 nova_compute[117331]:         <nova:user uuid="1c793380a6e945d69dacfd07f1f156f8">tempest-TestExecuteWorkloadBalanceStrategy-2100042169-project-admin</nova:user>
Oct 09 16:34:21 compute-0 nova_compute[117331]:         <nova:project uuid="6d67ac3076434a4582e5db1ca7d043ff">tempest-TestExecuteWorkloadBalanceStrategy-2100042169</nova:project>
Oct 09 16:34:21 compute-0 nova_compute[117331]:       </nova:owner>
Oct 09 16:34:21 compute-0 nova_compute[117331]:       <nova:root type="image" uuid="b7d6e0af-25e4-4227-9dc6-43143898ceee"/>
Oct 09 16:34:21 compute-0 nova_compute[117331]:       <nova:ports>
Oct 09 16:34:21 compute-0 nova_compute[117331]:         <nova:port uuid="8bc917d5-4807-4e8c-8941-29822aa14c94">
Oct 09 16:34:21 compute-0 nova_compute[117331]:           <nova:ip type="fixed" address="10.100.0.4" ipVersion="4"/>
Oct 09 16:34:21 compute-0 nova_compute[117331]:         </nova:port>
Oct 09 16:34:21 compute-0 nova_compute[117331]:       </nova:ports>
Oct 09 16:34:21 compute-0 nova_compute[117331]:     </nova:instance>
Oct 09 16:34:21 compute-0 nova_compute[117331]:   </metadata>
Oct 09 16:34:21 compute-0 nova_compute[117331]:   <memory unit="KiB">1178624</memory>
Oct 09 16:34:21 compute-0 nova_compute[117331]:   <currentMemory unit="KiB">1178624</currentMemory>
Oct 09 16:34:21 compute-0 nova_compute[117331]:   <vcpu placement="static">1</vcpu>
Oct 09 16:34:21 compute-0 nova_compute[117331]:   <resource>
Oct 09 16:34:21 compute-0 nova_compute[117331]:     <partition>/machine</partition>
Oct 09 16:34:21 compute-0 nova_compute[117331]:   </resource>
Oct 09 16:34:21 compute-0 nova_compute[117331]:   <sysinfo type="smbios">
Oct 09 16:34:21 compute-0 nova_compute[117331]:     <system>
Oct 09 16:34:21 compute-0 nova_compute[117331]:       <entry name="manufacturer">RDO</entry>
Oct 09 16:34:21 compute-0 nova_compute[117331]:       <entry name="product">OpenStack Compute</entry>
Oct 09 16:34:21 compute-0 nova_compute[117331]:       <entry name="version">32.1.0-0.20251008143712.076498e.el10</entry>
Oct 09 16:34:21 compute-0 nova_compute[117331]:       <entry name="serial">e08b0fc8-1e1e-4d97-b613-761b8f6f9674</entry>
Oct 09 16:34:21 compute-0 nova_compute[117331]:       <entry name="uuid">e08b0fc8-1e1e-4d97-b613-761b8f6f9674</entry>
Oct 09 16:34:21 compute-0 nova_compute[117331]:       <entry name="family">Virtual Machine</entry>
Oct 09 16:34:21 compute-0 nova_compute[117331]:     </system>
Oct 09 16:34:21 compute-0 nova_compute[117331]:   </sysinfo>
Oct 09 16:34:21 compute-0 nova_compute[117331]:   <os>
Oct 09 16:34:21 compute-0 nova_compute[117331]:     <type arch="x86_64" machine="pc-q35-rhel9.6.0">hvm</type>
Oct 09 16:34:21 compute-0 nova_compute[117331]:     <boot dev="hd"/>
Oct 09 16:34:21 compute-0 nova_compute[117331]:     <smbios mode="sysinfo"/>
Oct 09 16:34:21 compute-0 nova_compute[117331]:   </os>
Oct 09 16:34:21 compute-0 nova_compute[117331]:   <features>
Oct 09 16:34:21 compute-0 nova_compute[117331]:     <acpi/>
Oct 09 16:34:21 compute-0 nova_compute[117331]:     <apic/>
Oct 09 16:34:21 compute-0 nova_compute[117331]:     <vmcoreinfo state="on"/>
Oct 09 16:34:21 compute-0 nova_compute[117331]:   </features>
Oct 09 16:34:21 compute-0 nova_compute[117331]:   <cpu mode="host-model" check="partial">
Oct 09 16:34:21 compute-0 nova_compute[117331]:     <topology sockets="1" dies="1" clusters="1" cores="1" threads="1"/>
Oct 09 16:34:21 compute-0 nova_compute[117331]:   </cpu>
Oct 09 16:34:21 compute-0 nova_compute[117331]:   <clock offset="utc">
Oct 09 16:34:21 compute-0 nova_compute[117331]:     <timer name="pit" tickpolicy="delay"/>
Oct 09 16:34:21 compute-0 nova_compute[117331]:     <timer name="rtc" tickpolicy="catchup"/>
Oct 09 16:34:21 compute-0 nova_compute[117331]:     <timer name="hpet" present="no"/>
Oct 09 16:34:21 compute-0 nova_compute[117331]:   </clock>
Oct 09 16:34:21 compute-0 nova_compute[117331]:   <on_poweroff>destroy</on_poweroff>
Oct 09 16:34:21 compute-0 nova_compute[117331]:   <on_reboot>restart</on_reboot>
Oct 09 16:34:21 compute-0 nova_compute[117331]:   <on_crash>destroy</on_crash>
Oct 09 16:34:21 compute-0 nova_compute[117331]:   <devices>
Oct 09 16:34:21 compute-0 nova_compute[117331]:     <emulator>/usr/libexec/qemu-kvm</emulator>
Oct 09 16:34:21 compute-0 nova_compute[117331]:     <disk type="file" device="disk">
Oct 09 16:34:21 compute-0 nova_compute[117331]:       <driver name="qemu" type="qcow2" cache="none"/>
Oct 09 16:34:21 compute-0 nova_compute[117331]:       <source file="/var/lib/nova/instances/e08b0fc8-1e1e-4d97-b613-761b8f6f9674/disk"/>
Oct 09 16:34:21 compute-0 nova_compute[117331]:       <target dev="vda" bus="virtio"/>
Oct 09 16:34:21 compute-0 nova_compute[117331]:       <address type="pci" domain="0x0000" bus="0x03" slot="0x00" function="0x0"/>
Oct 09 16:34:21 compute-0 nova_compute[117331]:     </disk>
Oct 09 16:34:21 compute-0 nova_compute[117331]:     <disk type="file" device="cdrom">
Oct 09 16:34:21 compute-0 nova_compute[117331]:       <driver name="qemu" type="raw" cache="none"/>
Oct 09 16:34:21 compute-0 nova_compute[117331]:       <source file="/var/lib/nova/instances/e08b0fc8-1e1e-4d97-b613-761b8f6f9674/disk.config"/>
Oct 09 16:34:21 compute-0 nova_compute[117331]:       <target dev="sda" bus="sata"/>
Oct 09 16:34:21 compute-0 nova_compute[117331]:       <readonly/>
Oct 09 16:34:21 compute-0 nova_compute[117331]:       <address type="drive" controller="0" bus="0" target="0" unit="0"/>
Oct 09 16:34:21 compute-0 nova_compute[117331]:     </disk>
Oct 09 16:34:21 compute-0 nova_compute[117331]:     <controller type="pci" index="0" model="pcie-root"/>
Oct 09 16:34:21 compute-0 nova_compute[117331]:     <controller type="pci" index="1" model="pcie-root-port">
Oct 09 16:34:21 compute-0 nova_compute[117331]:       <model name="pcie-root-port"/>
Oct 09 16:34:21 compute-0 nova_compute[117331]:       <target chassis="1" port="0x10"/>
Oct 09 16:34:21 compute-0 nova_compute[117331]:       <address type="pci" domain="0x0000" bus="0x00" slot="0x02" function="0x0" multifunction="on"/>
Oct 09 16:34:21 compute-0 nova_compute[117331]:     </controller>
Oct 09 16:34:21 compute-0 nova_compute[117331]:     <controller type="pci" index="2" model="pcie-root-port">
Oct 09 16:34:21 compute-0 nova_compute[117331]:       <model name="pcie-root-port"/>
Oct 09 16:34:21 compute-0 nova_compute[117331]:       <target chassis="2" port="0x11"/>
Oct 09 16:34:21 compute-0 nova_compute[117331]:       <address type="pci" domain="0x0000" bus="0x00" slot="0x02" function="0x1"/>
Oct 09 16:34:21 compute-0 nova_compute[117331]:     </controller>
Oct 09 16:34:21 compute-0 nova_compute[117331]:     <controller type="pci" index="3" model="pcie-root-port">
Oct 09 16:34:21 compute-0 nova_compute[117331]:       <model name="pcie-root-port"/>
Oct 09 16:34:21 compute-0 nova_compute[117331]:       <target chassis="3" port="0x12"/>
Oct 09 16:34:21 compute-0 nova_compute[117331]:       <address type="pci" domain="0x0000" bus="0x00" slot="0x02" function="0x2"/>
Oct 09 16:34:21 compute-0 nova_compute[117331]:     </controller>
Oct 09 16:34:21 compute-0 nova_compute[117331]:     <controller type="pci" index="4" model="pcie-root-port">
Oct 09 16:34:21 compute-0 nova_compute[117331]:       <model name="pcie-root-port"/>
Oct 09 16:34:21 compute-0 nova_compute[117331]:       <target chassis="4" port="0x13"/>
Oct 09 16:34:21 compute-0 nova_compute[117331]:       <address type="pci" domain="0x0000" bus="0x00" slot="0x02" function="0x3"/>
Oct 09 16:34:21 compute-0 nova_compute[117331]:     </controller>
Oct 09 16:34:21 compute-0 nova_compute[117331]:     <controller type="pci" index="5" model="pcie-root-port">
Oct 09 16:34:21 compute-0 nova_compute[117331]:       <model name="pcie-root-port"/>
Oct 09 16:34:21 compute-0 nova_compute[117331]:       <target chassis="5" port="0x14"/>
Oct 09 16:34:21 compute-0 nova_compute[117331]:       <address type="pci" domain="0x0000" bus="0x00" slot="0x02" function="0x4"/>
Oct 09 16:34:21 compute-0 nova_compute[117331]:     </controller>
Oct 09 16:34:21 compute-0 nova_compute[117331]:     <controller type="pci" index="6" model="pcie-root-port">
Oct 09 16:34:21 compute-0 nova_compute[117331]:       <model name="pcie-root-port"/>
Oct 09 16:34:21 compute-0 nova_compute[117331]:       <target chassis="6" port="0x15"/>
Oct 09 16:34:21 compute-0 nova_compute[117331]:       <address type="pci" domain="0x0000" bus="0x00" slot="0x02" function="0x5"/>
Oct 09 16:34:21 compute-0 nova_compute[117331]:     </controller>
Oct 09 16:34:21 compute-0 nova_compute[117331]:     <controller type="pci" index="7" model="pcie-root-port">
Oct 09 16:34:21 compute-0 nova_compute[117331]:       <model name="pcie-root-port"/>
Oct 09 16:34:21 compute-0 nova_compute[117331]:       <target chassis="7" port="0x16"/>
Oct 09 16:34:21 compute-0 nova_compute[117331]:       <address type="pci" domain="0x0000" bus="0x00" slot="0x02" function="0x6"/>
Oct 09 16:34:21 compute-0 nova_compute[117331]:     </controller>
Oct 09 16:34:21 compute-0 nova_compute[117331]:     <controller type="pci" index="8" model="pcie-root-port">
Oct 09 16:34:21 compute-0 nova_compute[117331]:       <model name="pcie-root-port"/>
Oct 09 16:34:21 compute-0 nova_compute[117331]:       <target chassis="8" port="0x17"/>
Oct 09 16:34:21 compute-0 nova_compute[117331]:       <address type="pci" domain="0x0000" bus="0x00" slot="0x02" function="0x7"/>
Oct 09 16:34:21 compute-0 nova_compute[117331]:     </controller>
Oct 09 16:34:21 compute-0 nova_compute[117331]:     <controller type="pci" index="9" model="pcie-root-port">
Oct 09 16:34:21 compute-0 nova_compute[117331]:       <model name="pcie-root-port"/>
Oct 09 16:34:21 compute-0 nova_compute[117331]:       <target chassis="9" port="0x18"/>
Oct 09 16:34:21 compute-0 nova_compute[117331]:       <address type="pci" domain="0x0000" bus="0x00" slot="0x03" function="0x0" multifunction="on"/>
Oct 09 16:34:21 compute-0 nova_compute[117331]:     </controller>
Oct 09 16:34:21 compute-0 nova_compute[117331]:     <controller type="pci" index="10" model="pcie-root-port">
Oct 09 16:34:21 compute-0 nova_compute[117331]:       <model name="pcie-root-port"/>
Oct 09 16:34:21 compute-0 nova_compute[117331]:       <target chassis="10" port="0x19"/>
Oct 09 16:34:21 compute-0 nova_compute[117331]:       <address type="pci" domain="0x0000" bus="0x00" slot="0x03" function="0x1"/>
Oct 09 16:34:21 compute-0 nova_compute[117331]:     </controller>
Oct 09 16:34:21 compute-0 nova_compute[117331]:     <controller type="pci" index="11" model="pcie-root-port">
Oct 09 16:34:21 compute-0 nova_compute[117331]:       <model name="pcie-root-port"/>
Oct 09 16:34:21 compute-0 nova_compute[117331]:       <target chassis="11" port="0x1a"/>
Oct 09 16:34:21 compute-0 nova_compute[117331]:       <address type="pci" domain="0x0000" bus="0x00" slot="0x03" function="0x2"/>
Oct 09 16:34:21 compute-0 nova_compute[117331]:     </controller>
Oct 09 16:34:21 compute-0 nova_compute[117331]:     <controller type="pci" index="12" model="pcie-root-port">
Oct 09 16:34:21 compute-0 nova_compute[117331]:       <model name="pcie-root-port"/>
Oct 09 16:34:21 compute-0 nova_compute[117331]:       <target chassis="12" port="0x1b"/>
Oct 09 16:34:21 compute-0 nova_compute[117331]:       <address type="pci" domain="0x0000" bus="0x00" slot="0x03" function="0x3"/>
Oct 09 16:34:21 compute-0 nova_compute[117331]:     </controller>
Oct 09 16:34:21 compute-0 nova_compute[117331]:     <controller type="pci" index="13" model="pcie-root-port">
Oct 09 16:34:21 compute-0 nova_compute[117331]:       <model name="pcie-root-port"/>
Oct 09 16:34:21 compute-0 nova_compute[117331]:       <target chassis="13" port="0x1c"/>
Oct 09 16:34:21 compute-0 nova_compute[117331]:       <address type="pci" domain="0x0000" bus="0x00" slot="0x03" function="0x4"/>
Oct 09 16:34:21 compute-0 nova_compute[117331]:     </controller>
Oct 09 16:34:21 compute-0 nova_compute[117331]:     <controller type="pci" index="14" model="pcie-root-port">
Oct 09 16:34:21 compute-0 nova_compute[117331]:       <model name="pcie-root-port"/>
Oct 09 16:34:21 compute-0 nova_compute[117331]:       <target chassis="14" port="0x1d"/>
Oct 09 16:34:21 compute-0 nova_compute[117331]:       <address type="pci" domain="0x0000" bus="0x00" slot="0x03" function="0x5"/>
Oct 09 16:34:21 compute-0 nova_compute[117331]:     </controller>
Oct 09 16:34:21 compute-0 nova_compute[117331]:     <controller type="pci" index="15" model="pcie-root-port">
Oct 09 16:34:21 compute-0 nova_compute[117331]:       <model name="pcie-root-port"/>
Oct 09 16:34:21 compute-0 nova_compute[117331]:       <target chassis="15" port="0x1e"/>
Oct 09 16:34:21 compute-0 nova_compute[117331]:       <address type="pci" domain="0x0000" bus="0x00" slot="0x03" function="0x6"/>
Oct 09 16:34:21 compute-0 nova_compute[117331]:     </controller>
Oct 09 16:34:21 compute-0 nova_compute[117331]:     <controller type="pci" index="16" model="pcie-root-port">
Oct 09 16:34:21 compute-0 nova_compute[117331]:       <model name="pcie-root-port"/>
Oct 09 16:34:21 compute-0 nova_compute[117331]:       <target chassis="16" port="0x1f"/>
Oct 09 16:34:21 compute-0 nova_compute[117331]:       <address type="pci" domain="0x0000" bus="0x00" slot="0x03" function="0x7"/>
Oct 09 16:34:21 compute-0 nova_compute[117331]:     </controller>
Oct 09 16:34:21 compute-0 nova_compute[117331]:     <controller type="pci" index="17" model="pcie-root-port">
Oct 09 16:34:21 compute-0 nova_compute[117331]:       <model name="pcie-root-port"/>
Oct 09 16:34:21 compute-0 nova_compute[117331]:       <target chassis="17" port="0x20"/>
Oct 09 16:34:21 compute-0 nova_compute[117331]:       <address type="pci" domain="0x0000" bus="0x00" slot="0x04" function="0x0" multifunction="on"/>
Oct 09 16:34:21 compute-0 nova_compute[117331]:     </controller>
Oct 09 16:34:21 compute-0 nova_compute[117331]:     <controller type="pci" index="18" model="pcie-root-port">
Oct 09 16:34:21 compute-0 nova_compute[117331]:       <model name="pcie-root-port"/>
Oct 09 16:34:21 compute-0 nova_compute[117331]:       <target chassis="18" port="0x21"/>
Oct 09 16:34:21 compute-0 nova_compute[117331]:       <address type="pci" domain="0x0000" bus="0x00" slot="0x04" function="0x1"/>
Oct 09 16:34:21 compute-0 nova_compute[117331]:     </controller>
Oct 09 16:34:21 compute-0 nova_compute[117331]:     <controller type="pci" index="19" model="pcie-root-port">
Oct 09 16:34:21 compute-0 nova_compute[117331]:       <model name="pcie-root-port"/>
Oct 09 16:34:21 compute-0 nova_compute[117331]:       <target chassis="19" port="0x22"/>
Oct 09 16:34:21 compute-0 nova_compute[117331]:       <address type="pci" domain="0x0000" bus="0x00" slot="0x04" function="0x2"/>
Oct 09 16:34:21 compute-0 nova_compute[117331]:     </controller>
Oct 09 16:34:21 compute-0 nova_compute[117331]:     <controller type="pci" index="20" model="pcie-root-port">
Oct 09 16:34:21 compute-0 nova_compute[117331]:       <model name="pcie-root-port"/>
Oct 09 16:34:21 compute-0 nova_compute[117331]:       <target chassis="20" port="0x23"/>
Oct 09 16:34:21 compute-0 nova_compute[117331]:       <address type="pci" domain="0x0000" bus="0x00" slot="0x04" function="0x3"/>
Oct 09 16:34:21 compute-0 nova_compute[117331]:     </controller>
Oct 09 16:34:21 compute-0 nova_compute[117331]:     <controller type="pci" index="21" model="pcie-root-port">
Oct 09 16:34:21 compute-0 nova_compute[117331]:       <model name="pcie-root-port"/>
Oct 09 16:34:21 compute-0 nova_compute[117331]:       <target chassis="21" port="0x24"/>
Oct 09 16:34:21 compute-0 nova_compute[117331]:       <address type="pci" domain="0x0000" bus="0x00" slot="0x04" function="0x4"/>
Oct 09 16:34:21 compute-0 nova_compute[117331]:     </controller>
Oct 09 16:34:21 compute-0 nova_compute[117331]:     <controller type="pci" index="22" model="pcie-root-port">
Oct 09 16:34:21 compute-0 nova_compute[117331]:       <model name="pcie-root-port"/>
Oct 09 16:34:21 compute-0 nova_compute[117331]:       <target chassis="22" port="0x25"/>
Oct 09 16:34:21 compute-0 nova_compute[117331]:       <address type="pci" domain="0x0000" bus="0x00" slot="0x04" function="0x5"/>
Oct 09 16:34:21 compute-0 nova_compute[117331]:     </controller>
Oct 09 16:34:21 compute-0 nova_compute[117331]:     <controller type="pci" index="23" model="pcie-root-port">
Oct 09 16:34:21 compute-0 nova_compute[117331]:       <model name="pcie-root-port"/>
Oct 09 16:34:21 compute-0 nova_compute[117331]:       <target chassis="23" port="0x26"/>
Oct 09 16:34:21 compute-0 nova_compute[117331]:       <address type="pci" domain="0x0000" bus="0x00" slot="0x04" function="0x6"/>
Oct 09 16:34:21 compute-0 nova_compute[117331]:     </controller>
Oct 09 16:34:21 compute-0 nova_compute[117331]:     <controller type="pci" index="24" model="pcie-root-port">
Oct 09 16:34:21 compute-0 nova_compute[117331]:       <model name="pcie-root-port"/>
Oct 09 16:34:21 compute-0 nova_compute[117331]:       <target chassis="24" port="0x27"/>
Oct 09 16:34:21 compute-0 nova_compute[117331]:       <address type="pci" domain="0x0000" bus="0x00" slot="0x04" function="0x7"/>
Oct 09 16:34:21 compute-0 nova_compute[117331]:     </controller>
Oct 09 16:34:21 compute-0 nova_compute[117331]:     <controller type="pci" index="25" model="pcie-root-port">
Oct 09 16:34:21 compute-0 nova_compute[117331]:       <model name="pcie-root-port"/>
Oct 09 16:34:21 compute-0 nova_compute[117331]:       <target chassis="25" port="0x28"/>
Oct 09 16:34:21 compute-0 nova_compute[117331]:       <address type="pci" domain="0x0000" bus="0x00" slot="0x05" function="0x0"/>
Oct 09 16:34:21 compute-0 nova_compute[117331]:     </controller>
Oct 09 16:34:21 compute-0 nova_compute[117331]:     <controller type="pci" index="26" model="pcie-to-pci-bridge">
Oct 09 16:34:21 compute-0 nova_compute[117331]:       <model name="pcie-pci-bridge"/>
Oct 09 16:34:21 compute-0 nova_compute[117331]:       <address type="pci" domain="0x0000" bus="0x01" slot="0x00" function="0x0"/>
Oct 09 16:34:21 compute-0 nova_compute[117331]:     </controller>
Oct 09 16:34:21 compute-0 nova_compute[117331]:     <controller type="usb" index="0" model="piix3-uhci">
Oct 09 16:34:21 compute-0 nova_compute[117331]:       <address type="pci" domain="0x0000" bus="0x1a" slot="0x01" function="0x0"/>
Oct 09 16:34:21 compute-0 nova_compute[117331]:     </controller>
Oct 09 16:34:21 compute-0 nova_compute[117331]:     <controller type="sata" index="0">
Oct 09 16:34:21 compute-0 nova_compute[117331]:       <address type="pci" domain="0x0000" bus="0x00" slot="0x1f" function="0x2"/>
Oct 09 16:34:21 compute-0 nova_compute[117331]:     </controller>
Oct 09 16:34:21 compute-0 nova_compute[117331]:     <interface type="ethernet"><mac address="fa:16:3e:a0:cf:4a"/><model type="virtio"/><driver name="vhost" rx_queue_size="512"/><mtu size="1442"/><target dev="tap8bc917d5-48"/><address type="pci" domain="0x0000" bus="0x02" slot="0x00" function="0x0"/>
Oct 09 16:34:21 compute-0 nova_compute[117331]:     </interface><serial type="pty">
Oct 09 16:34:21 compute-0 nova_compute[117331]:       <log file="/var/lib/nova/instances/e08b0fc8-1e1e-4d97-b613-761b8f6f9674/console.log" append="off"/>
Oct 09 16:34:21 compute-0 nova_compute[117331]:       <target type="isa-serial" port="0">
Oct 09 16:34:21 compute-0 nova_compute[117331]:         <model name="isa-serial"/>
Oct 09 16:34:21 compute-0 nova_compute[117331]:       </target>
Oct 09 16:34:21 compute-0 nova_compute[117331]:     </serial>
Oct 09 16:34:21 compute-0 nova_compute[117331]:     <console type="pty">
Oct 09 16:34:21 compute-0 nova_compute[117331]:       <log file="/var/lib/nova/instances/e08b0fc8-1e1e-4d97-b613-761b8f6f9674/console.log" append="off"/>
Oct 09 16:34:21 compute-0 nova_compute[117331]:       <target type="serial" port="0"/>
Oct 09 16:34:21 compute-0 nova_compute[117331]:     </console>
Oct 09 16:34:21 compute-0 nova_compute[117331]:     <input type="tablet" bus="usb">
Oct 09 16:34:21 compute-0 nova_compute[117331]:       <address type="usb" bus="0" port="1"/>
Oct 09 16:34:21 compute-0 nova_compute[117331]:     </input>
Oct 09 16:34:21 compute-0 nova_compute[117331]:     <input type="mouse" bus="ps2"/>
Oct 09 16:34:21 compute-0 nova_compute[117331]:     <graphics type="vnc" port="-1" autoport="yes" listen="::">
Oct 09 16:34:21 compute-0 nova_compute[117331]:       <listen type="address" address="::"/>
Oct 09 16:34:21 compute-0 nova_compute[117331]:     </graphics>
Oct 09 16:34:21 compute-0 nova_compute[117331]:     <video>
Oct 09 16:34:21 compute-0 nova_compute[117331]:       <model type="virtio" heads="1" primary="yes"/>
Oct 09 16:34:21 compute-0 nova_compute[117331]:       <address type="pci" domain="0x0000" bus="0x00" slot="0x01" function="0x0"/>
Oct 09 16:34:21 compute-0 nova_compute[117331]:     </video>
Oct 09 16:34:21 compute-0 nova_compute[117331]:     <memballoon model="virtio" autodeflate="on" freePageReporting="on">
Oct 09 16:34:21 compute-0 nova_compute[117331]:       <stats period="10"/>
Oct 09 16:34:21 compute-0 nova_compute[117331]:       <address type="pci" domain="0x0000" bus="0x04" slot="0x00" function="0x0"/>
Oct 09 16:34:21 compute-0 nova_compute[117331]:     </memballoon>
Oct 09 16:34:21 compute-0 nova_compute[117331]:     <rng model="virtio">
Oct 09 16:34:21 compute-0 nova_compute[117331]:       <backend model="random">/dev/urandom</backend>
Oct 09 16:34:21 compute-0 nova_compute[117331]:       <address type="pci" domain="0x0000" bus="0x05" slot="0x00" function="0x0"/>
Oct 09 16:34:21 compute-0 nova_compute[117331]:     </rng>
Oct 09 16:34:21 compute-0 nova_compute[117331]:   </devices>
Oct 09 16:34:21 compute-0 nova_compute[117331]:   <seclabel type="dynamic" model="selinux" relabel="yes"/>
Oct 09 16:34:21 compute-0 nova_compute[117331]: </domain>
Oct 09 16:34:21 compute-0 nova_compute[117331]:  _remove_cpu_shared_set_xml /usr/lib/python3.12/site-packages/nova/virt/libvirt/migration.py:241
Oct 09 16:34:21 compute-0 nova_compute[117331]: 2025-10-09 16:34:21.072 2 DEBUG nova.virt.libvirt.migration [None req-76aeb176-65ec-4856-ad96-194e2d74752e 005258eefa5345409efe13f6a1de22ab c076c42271004e7aa86e84416e33f826 - - default default] _remove_cpu_shared_set_xml output xml=<domain type="kvm">
Oct 09 16:34:21 compute-0 nova_compute[117331]:   <name>instance-00000018</name>
Oct 09 16:34:21 compute-0 nova_compute[117331]:   <uuid>e08b0fc8-1e1e-4d97-b613-761b8f6f9674</uuid>
Oct 09 16:34:21 compute-0 nova_compute[117331]:   <metadata>
Oct 09 16:34:21 compute-0 nova_compute[117331]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Oct 09 16:34:21 compute-0 nova_compute[117331]:       <nova:package version="32.1.0-0.20251008143712.076498e.el10"/>
Oct 09 16:34:21 compute-0 nova_compute[117331]:       <nova:name>tempest-TestExecuteWorkloadBalanceStrategy-server-487819226</nova:name>
Oct 09 16:34:21 compute-0 nova_compute[117331]:       <nova:creationTime>2025-10-09 16:33:28</nova:creationTime>
Oct 09 16:34:21 compute-0 nova_compute[117331]:       <nova:flavor name="tempest-watcher_flavor-1801535589" id="91ea4b87-742a-4f2b-9b5d-34de6e13f85a">
Oct 09 16:34:21 compute-0 nova_compute[117331]:         <nova:memory>1151</nova:memory>
Oct 09 16:34:21 compute-0 nova_compute[117331]:         <nova:disk>1</nova:disk>
Oct 09 16:34:21 compute-0 nova_compute[117331]:         <nova:swap>0</nova:swap>
Oct 09 16:34:21 compute-0 nova_compute[117331]:         <nova:ephemeral>0</nova:ephemeral>
Oct 09 16:34:21 compute-0 nova_compute[117331]:         <nova:vcpus>1</nova:vcpus>
Oct 09 16:34:21 compute-0 nova_compute[117331]:         <nova:extraSpecs/>
Oct 09 16:34:21 compute-0 nova_compute[117331]:       </nova:flavor>
Oct 09 16:34:21 compute-0 nova_compute[117331]:       <nova:image uuid="b7d6e0af-25e4-4227-9dc6-43143898ceee">
Oct 09 16:34:21 compute-0 nova_compute[117331]:         <nova:containerFormat>bare</nova:containerFormat>
Oct 09 16:34:21 compute-0 nova_compute[117331]:         <nova:diskFormat>qcow2</nova:diskFormat>
Oct 09 16:34:21 compute-0 nova_compute[117331]:         <nova:minDisk>1</nova:minDisk>
Oct 09 16:34:21 compute-0 nova_compute[117331]:         <nova:minRam>0</nova:minRam>
Oct 09 16:34:21 compute-0 nova_compute[117331]:         <nova:properties>
Oct 09 16:34:21 compute-0 nova_compute[117331]:           <nova:property name="hw_rng_model">virtio</nova:property>
Oct 09 16:34:21 compute-0 nova_compute[117331]:         </nova:properties>
Oct 09 16:34:21 compute-0 nova_compute[117331]:       </nova:image>
Oct 09 16:34:21 compute-0 nova_compute[117331]:       <nova:owner>
Oct 09 16:34:21 compute-0 nova_compute[117331]:         <nova:user uuid="1c793380a6e945d69dacfd07f1f156f8">tempest-TestExecuteWorkloadBalanceStrategy-2100042169-project-admin</nova:user>
Oct 09 16:34:21 compute-0 nova_compute[117331]:         <nova:project uuid="6d67ac3076434a4582e5db1ca7d043ff">tempest-TestExecuteWorkloadBalanceStrategy-2100042169</nova:project>
Oct 09 16:34:21 compute-0 nova_compute[117331]:       </nova:owner>
Oct 09 16:34:21 compute-0 nova_compute[117331]:       <nova:root type="image" uuid="b7d6e0af-25e4-4227-9dc6-43143898ceee"/>
Oct 09 16:34:21 compute-0 nova_compute[117331]:       <nova:ports>
Oct 09 16:34:21 compute-0 nova_compute[117331]:         <nova:port uuid="8bc917d5-4807-4e8c-8941-29822aa14c94">
Oct 09 16:34:21 compute-0 nova_compute[117331]:           <nova:ip type="fixed" address="10.100.0.4" ipVersion="4"/>
Oct 09 16:34:21 compute-0 nova_compute[117331]:         </nova:port>
Oct 09 16:34:21 compute-0 nova_compute[117331]:       </nova:ports>
Oct 09 16:34:21 compute-0 nova_compute[117331]:     </nova:instance>
Oct 09 16:34:21 compute-0 nova_compute[117331]:   </metadata>
Oct 09 16:34:21 compute-0 nova_compute[117331]:   <memory unit="KiB">1178624</memory>
Oct 09 16:34:21 compute-0 nova_compute[117331]:   <currentMemory unit="KiB">1178624</currentMemory>
Oct 09 16:34:21 compute-0 nova_compute[117331]:   <vcpu placement="static">1</vcpu>
Oct 09 16:34:21 compute-0 nova_compute[117331]:   <resource>
Oct 09 16:34:21 compute-0 nova_compute[117331]:     <partition>/machine</partition>
Oct 09 16:34:21 compute-0 nova_compute[117331]:   </resource>
Oct 09 16:34:21 compute-0 nova_compute[117331]:   <sysinfo type="smbios">
Oct 09 16:34:21 compute-0 nova_compute[117331]:     <system>
Oct 09 16:34:21 compute-0 nova_compute[117331]:       <entry name="manufacturer">RDO</entry>
Oct 09 16:34:21 compute-0 nova_compute[117331]:       <entry name="product">OpenStack Compute</entry>
Oct 09 16:34:21 compute-0 nova_compute[117331]:       <entry name="version">32.1.0-0.20251008143712.076498e.el10</entry>
Oct 09 16:34:21 compute-0 nova_compute[117331]:       <entry name="serial">e08b0fc8-1e1e-4d97-b613-761b8f6f9674</entry>
Oct 09 16:34:21 compute-0 nova_compute[117331]:       <entry name="uuid">e08b0fc8-1e1e-4d97-b613-761b8f6f9674</entry>
Oct 09 16:34:21 compute-0 nova_compute[117331]:       <entry name="family">Virtual Machine</entry>
Oct 09 16:34:21 compute-0 nova_compute[117331]:     </system>
Oct 09 16:34:21 compute-0 nova_compute[117331]:   </sysinfo>
Oct 09 16:34:21 compute-0 nova_compute[117331]:   <os>
Oct 09 16:34:21 compute-0 nova_compute[117331]:     <type arch="x86_64" machine="pc-q35-rhel9.6.0">hvm</type>
Oct 09 16:34:21 compute-0 nova_compute[117331]:     <boot dev="hd"/>
Oct 09 16:34:21 compute-0 nova_compute[117331]:     <smbios mode="sysinfo"/>
Oct 09 16:34:21 compute-0 nova_compute[117331]:   </os>
Oct 09 16:34:21 compute-0 nova_compute[117331]:   <features>
Oct 09 16:34:21 compute-0 nova_compute[117331]:     <acpi/>
Oct 09 16:34:21 compute-0 nova_compute[117331]:     <apic/>
Oct 09 16:34:21 compute-0 nova_compute[117331]:     <vmcoreinfo state="on"/>
Oct 09 16:34:21 compute-0 nova_compute[117331]:   </features>
Oct 09 16:34:21 compute-0 nova_compute[117331]:   <cpu mode="host-model" check="partial">
Oct 09 16:34:21 compute-0 nova_compute[117331]:     <topology sockets="1" dies="1" clusters="1" cores="1" threads="1"/>
Oct 09 16:34:21 compute-0 nova_compute[117331]:   </cpu>
Oct 09 16:34:21 compute-0 nova_compute[117331]:   <clock offset="utc">
Oct 09 16:34:21 compute-0 nova_compute[117331]:     <timer name="pit" tickpolicy="delay"/>
Oct 09 16:34:21 compute-0 nova_compute[117331]:     <timer name="rtc" tickpolicy="catchup"/>
Oct 09 16:34:21 compute-0 nova_compute[117331]:     <timer name="hpet" present="no"/>
Oct 09 16:34:21 compute-0 nova_compute[117331]:   </clock>
Oct 09 16:34:21 compute-0 nova_compute[117331]:   <on_poweroff>destroy</on_poweroff>
Oct 09 16:34:21 compute-0 nova_compute[117331]:   <on_reboot>restart</on_reboot>
Oct 09 16:34:21 compute-0 nova_compute[117331]:   <on_crash>destroy</on_crash>
Oct 09 16:34:21 compute-0 nova_compute[117331]:   <devices>
Oct 09 16:34:21 compute-0 nova_compute[117331]:     <emulator>/usr/libexec/qemu-kvm</emulator>
Oct 09 16:34:21 compute-0 nova_compute[117331]:     <disk type="file" device="disk">
Oct 09 16:34:21 compute-0 nova_compute[117331]:       <driver name="qemu" type="qcow2" cache="none"/>
Oct 09 16:34:21 compute-0 nova_compute[117331]:       <source file="/var/lib/nova/instances/e08b0fc8-1e1e-4d97-b613-761b8f6f9674/disk"/>
Oct 09 16:34:21 compute-0 nova_compute[117331]:       <target dev="vda" bus="virtio"/>
Oct 09 16:34:21 compute-0 nova_compute[117331]:       <address type="pci" domain="0x0000" bus="0x03" slot="0x00" function="0x0"/>
Oct 09 16:34:21 compute-0 nova_compute[117331]:     </disk>
Oct 09 16:34:21 compute-0 nova_compute[117331]:     <disk type="file" device="cdrom">
Oct 09 16:34:21 compute-0 nova_compute[117331]:       <driver name="qemu" type="raw" cache="none"/>
Oct 09 16:34:21 compute-0 nova_compute[117331]:       <source file="/var/lib/nova/instances/e08b0fc8-1e1e-4d97-b613-761b8f6f9674/disk.config"/>
Oct 09 16:34:21 compute-0 nova_compute[117331]:       <target dev="sda" bus="sata"/>
Oct 09 16:34:21 compute-0 nova_compute[117331]:       <readonly/>
Oct 09 16:34:21 compute-0 nova_compute[117331]:       <address type="drive" controller="0" bus="0" target="0" unit="0"/>
Oct 09 16:34:21 compute-0 nova_compute[117331]:     </disk>
Oct 09 16:34:21 compute-0 nova_compute[117331]:     <controller type="pci" index="0" model="pcie-root"/>
Oct 09 16:34:21 compute-0 nova_compute[117331]:     <controller type="pci" index="1" model="pcie-root-port">
Oct 09 16:34:21 compute-0 nova_compute[117331]:       <model name="pcie-root-port"/>
Oct 09 16:34:21 compute-0 nova_compute[117331]:       <target chassis="1" port="0x10"/>
Oct 09 16:34:21 compute-0 nova_compute[117331]:       <address type="pci" domain="0x0000" bus="0x00" slot="0x02" function="0x0" multifunction="on"/>
Oct 09 16:34:21 compute-0 nova_compute[117331]:     </controller>
Oct 09 16:34:21 compute-0 nova_compute[117331]:     <controller type="pci" index="2" model="pcie-root-port">
Oct 09 16:34:21 compute-0 nova_compute[117331]:       <model name="pcie-root-port"/>
Oct 09 16:34:21 compute-0 nova_compute[117331]:       <target chassis="2" port="0x11"/>
Oct 09 16:34:21 compute-0 nova_compute[117331]:       <address type="pci" domain="0x0000" bus="0x00" slot="0x02" function="0x1"/>
Oct 09 16:34:21 compute-0 nova_compute[117331]:     </controller>
Oct 09 16:34:21 compute-0 nova_compute[117331]:     <controller type="pci" index="3" model="pcie-root-port">
Oct 09 16:34:21 compute-0 nova_compute[117331]:       <model name="pcie-root-port"/>
Oct 09 16:34:21 compute-0 nova_compute[117331]:       <target chassis="3" port="0x12"/>
Oct 09 16:34:21 compute-0 nova_compute[117331]:       <address type="pci" domain="0x0000" bus="0x00" slot="0x02" function="0x2"/>
Oct 09 16:34:21 compute-0 nova_compute[117331]:     </controller>
Oct 09 16:34:21 compute-0 nova_compute[117331]:     <controller type="pci" index="4" model="pcie-root-port">
Oct 09 16:34:21 compute-0 nova_compute[117331]:       <model name="pcie-root-port"/>
Oct 09 16:34:21 compute-0 nova_compute[117331]:       <target chassis="4" port="0x13"/>
Oct 09 16:34:21 compute-0 nova_compute[117331]:       <address type="pci" domain="0x0000" bus="0x00" slot="0x02" function="0x3"/>
Oct 09 16:34:21 compute-0 nova_compute[117331]:     </controller>
Oct 09 16:34:21 compute-0 nova_compute[117331]:     <controller type="pci" index="5" model="pcie-root-port">
Oct 09 16:34:21 compute-0 nova_compute[117331]:       <model name="pcie-root-port"/>
Oct 09 16:34:21 compute-0 nova_compute[117331]:       <target chassis="5" port="0x14"/>
Oct 09 16:34:21 compute-0 nova_compute[117331]:       <address type="pci" domain="0x0000" bus="0x00" slot="0x02" function="0x4"/>
Oct 09 16:34:21 compute-0 nova_compute[117331]:     </controller>
Oct 09 16:34:21 compute-0 nova_compute[117331]:     <controller type="pci" index="6" model="pcie-root-port">
Oct 09 16:34:21 compute-0 nova_compute[117331]:       <model name="pcie-root-port"/>
Oct 09 16:34:21 compute-0 nova_compute[117331]:       <target chassis="6" port="0x15"/>
Oct 09 16:34:21 compute-0 nova_compute[117331]:       <address type="pci" domain="0x0000" bus="0x00" slot="0x02" function="0x5"/>
Oct 09 16:34:21 compute-0 nova_compute[117331]:     </controller>
Oct 09 16:34:21 compute-0 nova_compute[117331]:     <controller type="pci" index="7" model="pcie-root-port">
Oct 09 16:34:21 compute-0 nova_compute[117331]:       <model name="pcie-root-port"/>
Oct 09 16:34:21 compute-0 nova_compute[117331]:       <target chassis="7" port="0x16"/>
Oct 09 16:34:21 compute-0 nova_compute[117331]:       <address type="pci" domain="0x0000" bus="0x00" slot="0x02" function="0x6"/>
Oct 09 16:34:21 compute-0 nova_compute[117331]:     </controller>
Oct 09 16:34:21 compute-0 nova_compute[117331]:     <controller type="pci" index="8" model="pcie-root-port">
Oct 09 16:34:21 compute-0 nova_compute[117331]:       <model name="pcie-root-port"/>
Oct 09 16:34:21 compute-0 nova_compute[117331]:       <target chassis="8" port="0x17"/>
Oct 09 16:34:21 compute-0 nova_compute[117331]:       <address type="pci" domain="0x0000" bus="0x00" slot="0x02" function="0x7"/>
Oct 09 16:34:21 compute-0 nova_compute[117331]:     </controller>
Oct 09 16:34:21 compute-0 nova_compute[117331]:     <controller type="pci" index="9" model="pcie-root-port">
Oct 09 16:34:21 compute-0 nova_compute[117331]:       <model name="pcie-root-port"/>
Oct 09 16:34:21 compute-0 nova_compute[117331]:       <target chassis="9" port="0x18"/>
Oct 09 16:34:21 compute-0 nova_compute[117331]:       <address type="pci" domain="0x0000" bus="0x00" slot="0x03" function="0x0" multifunction="on"/>
Oct 09 16:34:21 compute-0 nova_compute[117331]:     </controller>
Oct 09 16:34:21 compute-0 nova_compute[117331]:     <controller type="pci" index="10" model="pcie-root-port">
Oct 09 16:34:21 compute-0 nova_compute[117331]:       <model name="pcie-root-port"/>
Oct 09 16:34:21 compute-0 nova_compute[117331]:       <target chassis="10" port="0x19"/>
Oct 09 16:34:21 compute-0 nova_compute[117331]:       <address type="pci" domain="0x0000" bus="0x00" slot="0x03" function="0x1"/>
Oct 09 16:34:21 compute-0 nova_compute[117331]:     </controller>
Oct 09 16:34:21 compute-0 nova_compute[117331]:     <controller type="pci" index="11" model="pcie-root-port">
Oct 09 16:34:21 compute-0 nova_compute[117331]:       <model name="pcie-root-port"/>
Oct 09 16:34:21 compute-0 nova_compute[117331]:       <target chassis="11" port="0x1a"/>
Oct 09 16:34:21 compute-0 nova_compute[117331]:       <address type="pci" domain="0x0000" bus="0x00" slot="0x03" function="0x2"/>
Oct 09 16:34:21 compute-0 nova_compute[117331]:     </controller>
Oct 09 16:34:21 compute-0 nova_compute[117331]:     <controller type="pci" index="12" model="pcie-root-port">
Oct 09 16:34:21 compute-0 nova_compute[117331]:       <model name="pcie-root-port"/>
Oct 09 16:34:21 compute-0 nova_compute[117331]:       <target chassis="12" port="0x1b"/>
Oct 09 16:34:21 compute-0 nova_compute[117331]:       <address type="pci" domain="0x0000" bus="0x00" slot="0x03" function="0x3"/>
Oct 09 16:34:21 compute-0 nova_compute[117331]:     </controller>
Oct 09 16:34:21 compute-0 nova_compute[117331]:     <controller type="pci" index="13" model="pcie-root-port">
Oct 09 16:34:21 compute-0 nova_compute[117331]:       <model name="pcie-root-port"/>
Oct 09 16:34:21 compute-0 nova_compute[117331]:       <target chassis="13" port="0x1c"/>
Oct 09 16:34:21 compute-0 nova_compute[117331]:       <address type="pci" domain="0x0000" bus="0x00" slot="0x03" function="0x4"/>
Oct 09 16:34:21 compute-0 nova_compute[117331]:     </controller>
Oct 09 16:34:21 compute-0 nova_compute[117331]:     <controller type="pci" index="14" model="pcie-root-port">
Oct 09 16:34:21 compute-0 nova_compute[117331]:       <model name="pcie-root-port"/>
Oct 09 16:34:21 compute-0 nova_compute[117331]:       <target chassis="14" port="0x1d"/>
Oct 09 16:34:21 compute-0 nova_compute[117331]:       <address type="pci" domain="0x0000" bus="0x00" slot="0x03" function="0x5"/>
Oct 09 16:34:21 compute-0 nova_compute[117331]:     </controller>
Oct 09 16:34:21 compute-0 nova_compute[117331]:     <controller type="pci" index="15" model="pcie-root-port">
Oct 09 16:34:21 compute-0 nova_compute[117331]:       <model name="pcie-root-port"/>
Oct 09 16:34:21 compute-0 nova_compute[117331]:       <target chassis="15" port="0x1e"/>
Oct 09 16:34:21 compute-0 nova_compute[117331]:       <address type="pci" domain="0x0000" bus="0x00" slot="0x03" function="0x6"/>
Oct 09 16:34:21 compute-0 nova_compute[117331]:     </controller>
Oct 09 16:34:21 compute-0 nova_compute[117331]:     <controller type="pci" index="16" model="pcie-root-port">
Oct 09 16:34:21 compute-0 nova_compute[117331]:       <model name="pcie-root-port"/>
Oct 09 16:34:21 compute-0 nova_compute[117331]:       <target chassis="16" port="0x1f"/>
Oct 09 16:34:21 compute-0 nova_compute[117331]:       <address type="pci" domain="0x0000" bus="0x00" slot="0x03" function="0x7"/>
Oct 09 16:34:21 compute-0 nova_compute[117331]:     </controller>
Oct 09 16:34:21 compute-0 nova_compute[117331]:     <controller type="pci" index="17" model="pcie-root-port">
Oct 09 16:34:21 compute-0 nova_compute[117331]:       <model name="pcie-root-port"/>
Oct 09 16:34:21 compute-0 nova_compute[117331]:       <target chassis="17" port="0x20"/>
Oct 09 16:34:21 compute-0 nova_compute[117331]:       <address type="pci" domain="0x0000" bus="0x00" slot="0x04" function="0x0" multifunction="on"/>
Oct 09 16:34:21 compute-0 nova_compute[117331]:     </controller>
Oct 09 16:34:21 compute-0 nova_compute[117331]:     <controller type="pci" index="18" model="pcie-root-port">
Oct 09 16:34:21 compute-0 nova_compute[117331]:       <model name="pcie-root-port"/>
Oct 09 16:34:21 compute-0 nova_compute[117331]:       <target chassis="18" port="0x21"/>
Oct 09 16:34:21 compute-0 nova_compute[117331]:       <address type="pci" domain="0x0000" bus="0x00" slot="0x04" function="0x1"/>
Oct 09 16:34:21 compute-0 nova_compute[117331]:     </controller>
Oct 09 16:34:21 compute-0 nova_compute[117331]:     <controller type="pci" index="19" model="pcie-root-port">
Oct 09 16:34:21 compute-0 nova_compute[117331]:       <model name="pcie-root-port"/>
Oct 09 16:34:21 compute-0 nova_compute[117331]:       <target chassis="19" port="0x22"/>
Oct 09 16:34:21 compute-0 nova_compute[117331]:       <address type="pci" domain="0x0000" bus="0x00" slot="0x04" function="0x2"/>
Oct 09 16:34:21 compute-0 nova_compute[117331]:     </controller>
Oct 09 16:34:21 compute-0 nova_compute[117331]:     <controller type="pci" index="20" model="pcie-root-port">
Oct 09 16:34:21 compute-0 nova_compute[117331]:       <model name="pcie-root-port"/>
Oct 09 16:34:21 compute-0 nova_compute[117331]:       <target chassis="20" port="0x23"/>
Oct 09 16:34:21 compute-0 nova_compute[117331]:       <address type="pci" domain="0x0000" bus="0x00" slot="0x04" function="0x3"/>
Oct 09 16:34:21 compute-0 nova_compute[117331]:     </controller>
Oct 09 16:34:21 compute-0 nova_compute[117331]:     <controller type="pci" index="21" model="pcie-root-port">
Oct 09 16:34:21 compute-0 nova_compute[117331]:       <model name="pcie-root-port"/>
Oct 09 16:34:21 compute-0 nova_compute[117331]:       <target chassis="21" port="0x24"/>
Oct 09 16:34:21 compute-0 nova_compute[117331]:       <address type="pci" domain="0x0000" bus="0x00" slot="0x04" function="0x4"/>
Oct 09 16:34:21 compute-0 nova_compute[117331]:     </controller>
Oct 09 16:34:21 compute-0 nova_compute[117331]:     <controller type="pci" index="22" model="pcie-root-port">
Oct 09 16:34:21 compute-0 nova_compute[117331]:       <model name="pcie-root-port"/>
Oct 09 16:34:21 compute-0 nova_compute[117331]:       <target chassis="22" port="0x25"/>
Oct 09 16:34:21 compute-0 nova_compute[117331]:       <address type="pci" domain="0x0000" bus="0x00" slot="0x04" function="0x5"/>
Oct 09 16:34:21 compute-0 nova_compute[117331]:     </controller>
Oct 09 16:34:21 compute-0 nova_compute[117331]:     <controller type="pci" index="23" model="pcie-root-port">
Oct 09 16:34:21 compute-0 nova_compute[117331]:       <model name="pcie-root-port"/>
Oct 09 16:34:21 compute-0 nova_compute[117331]:       <target chassis="23" port="0x26"/>
Oct 09 16:34:21 compute-0 nova_compute[117331]:       <address type="pci" domain="0x0000" bus="0x00" slot="0x04" function="0x6"/>
Oct 09 16:34:21 compute-0 nova_compute[117331]:     </controller>
Oct 09 16:34:21 compute-0 nova_compute[117331]:     <controller type="pci" index="24" model="pcie-root-port">
Oct 09 16:34:21 compute-0 nova_compute[117331]:       <model name="pcie-root-port"/>
Oct 09 16:34:21 compute-0 nova_compute[117331]:       <target chassis="24" port="0x27"/>
Oct 09 16:34:21 compute-0 nova_compute[117331]:       <address type="pci" domain="0x0000" bus="0x00" slot="0x04" function="0x7"/>
Oct 09 16:34:21 compute-0 nova_compute[117331]:     </controller>
Oct 09 16:34:21 compute-0 nova_compute[117331]:     <controller type="pci" index="25" model="pcie-root-port">
Oct 09 16:34:21 compute-0 nova_compute[117331]:       <model name="pcie-root-port"/>
Oct 09 16:34:21 compute-0 nova_compute[117331]:       <target chassis="25" port="0x28"/>
Oct 09 16:34:21 compute-0 nova_compute[117331]:       <address type="pci" domain="0x0000" bus="0x00" slot="0x05" function="0x0"/>
Oct 09 16:34:21 compute-0 nova_compute[117331]:     </controller>
Oct 09 16:34:21 compute-0 nova_compute[117331]:     <controller type="pci" index="26" model="pcie-to-pci-bridge">
Oct 09 16:34:21 compute-0 nova_compute[117331]:       <model name="pcie-pci-bridge"/>
Oct 09 16:34:21 compute-0 nova_compute[117331]:       <address type="pci" domain="0x0000" bus="0x01" slot="0x00" function="0x0"/>
Oct 09 16:34:21 compute-0 nova_compute[117331]:     </controller>
Oct 09 16:34:21 compute-0 nova_compute[117331]:     <controller type="usb" index="0" model="piix3-uhci">
Oct 09 16:34:21 compute-0 nova_compute[117331]:       <address type="pci" domain="0x0000" bus="0x1a" slot="0x01" function="0x0"/>
Oct 09 16:34:21 compute-0 nova_compute[117331]:     </controller>
Oct 09 16:34:21 compute-0 nova_compute[117331]:     <controller type="sata" index="0">
Oct 09 16:34:21 compute-0 nova_compute[117331]:       <address type="pci" domain="0x0000" bus="0x00" slot="0x1f" function="0x2"/>
Oct 09 16:34:21 compute-0 nova_compute[117331]:     </controller>
Oct 09 16:34:21 compute-0 nova_compute[117331]:     <interface type="ethernet">
Oct 09 16:34:21 compute-0 nova_compute[117331]:       <mac address="fa:16:3e:a0:cf:4a"/>
Oct 09 16:34:21 compute-0 nova_compute[117331]:       <model type="virtio"/>
Oct 09 16:34:21 compute-0 nova_compute[117331]:       <driver name="vhost" rx_queue_size="512"/>
Oct 09 16:34:21 compute-0 nova_compute[117331]:       <mtu size="1442"/>
Oct 09 16:34:21 compute-0 nova_compute[117331]:       <target dev="tap8bc917d5-48"/>
Oct 09 16:34:21 compute-0 nova_compute[117331]:       <address type="pci" domain="0x0000" bus="0x02" slot="0x00" function="0x0"/>
Oct 09 16:34:21 compute-0 nova_compute[117331]:     </interface>
Oct 09 16:34:21 compute-0 nova_compute[117331]:     <serial type="pty">
Oct 09 16:34:21 compute-0 nova_compute[117331]:       <log file="/var/lib/nova/instances/e08b0fc8-1e1e-4d97-b613-761b8f6f9674/console.log" append="off"/>
Oct 09 16:34:21 compute-0 nova_compute[117331]:       <target type="isa-serial" port="0">
Oct 09 16:34:21 compute-0 nova_compute[117331]:         <model name="isa-serial"/>
Oct 09 16:34:21 compute-0 nova_compute[117331]:       </target>
Oct 09 16:34:21 compute-0 nova_compute[117331]:     </serial>
Oct 09 16:34:21 compute-0 nova_compute[117331]:     <console type="pty">
Oct 09 16:34:21 compute-0 nova_compute[117331]:       <log file="/var/lib/nova/instances/e08b0fc8-1e1e-4d97-b613-761b8f6f9674/console.log" append="off"/>
Oct 09 16:34:21 compute-0 nova_compute[117331]:       <target type="serial" port="0"/>
Oct 09 16:34:21 compute-0 nova_compute[117331]:     </console>
Oct 09 16:34:21 compute-0 nova_compute[117331]:     <input type="tablet" bus="usb">
Oct 09 16:34:21 compute-0 nova_compute[117331]:       <address type="usb" bus="0" port="1"/>
Oct 09 16:34:21 compute-0 nova_compute[117331]:     </input>
Oct 09 16:34:21 compute-0 nova_compute[117331]:     <input type="mouse" bus="ps2"/>
Oct 09 16:34:21 compute-0 nova_compute[117331]:     <graphics type="vnc" port="-1" autoport="yes" listen="::">
Oct 09 16:34:21 compute-0 nova_compute[117331]:       <listen type="address" address="::"/>
Oct 09 16:34:21 compute-0 nova_compute[117331]:     </graphics>
Oct 09 16:34:21 compute-0 nova_compute[117331]:     <video>
Oct 09 16:34:21 compute-0 nova_compute[117331]:       <model type="virtio" heads="1" primary="yes"/>
Oct 09 16:34:21 compute-0 nova_compute[117331]:       <address type="pci" domain="0x0000" bus="0x00" slot="0x01" function="0x0"/>
Oct 09 16:34:21 compute-0 nova_compute[117331]:     </video>
Oct 09 16:34:21 compute-0 nova_compute[117331]:     <memballoon model="virtio" autodeflate="on" freePageReporting="on">
Oct 09 16:34:21 compute-0 nova_compute[117331]:       <stats period="10"/>
Oct 09 16:34:21 compute-0 nova_compute[117331]:       <address type="pci" domain="0x0000" bus="0x04" slot="0x00" function="0x0"/>
Oct 09 16:34:21 compute-0 nova_compute[117331]:     </memballoon>
Oct 09 16:34:21 compute-0 nova_compute[117331]:     <rng model="virtio">
Oct 09 16:34:21 compute-0 nova_compute[117331]:       <backend model="random">/dev/urandom</backend>
Oct 09 16:34:21 compute-0 nova_compute[117331]:       <address type="pci" domain="0x0000" bus="0x05" slot="0x00" function="0x0"/>
Oct 09 16:34:21 compute-0 nova_compute[117331]:     </rng>
Oct 09 16:34:21 compute-0 nova_compute[117331]:   </devices>
Oct 09 16:34:21 compute-0 nova_compute[117331]:   <seclabel type="dynamic" model="selinux" relabel="yes"/>
Oct 09 16:34:21 compute-0 nova_compute[117331]: </domain>
Oct 09 16:34:21 compute-0 nova_compute[117331]:  _remove_cpu_shared_set_xml /usr/lib/python3.12/site-packages/nova/virt/libvirt/migration.py:250
Oct 09 16:34:21 compute-0 nova_compute[117331]: 2025-10-09 16:34:21.074 2 DEBUG nova.virt.libvirt.migration [None req-76aeb176-65ec-4856-ad96-194e2d74752e 005258eefa5345409efe13f6a1de22ab c076c42271004e7aa86e84416e33f826 - - default default] _update_pci_xml output xml=<domain type="kvm">
Oct 09 16:34:21 compute-0 nova_compute[117331]:   <name>instance-00000018</name>
Oct 09 16:34:21 compute-0 nova_compute[117331]:   <uuid>e08b0fc8-1e1e-4d97-b613-761b8f6f9674</uuid>
Oct 09 16:34:21 compute-0 nova_compute[117331]:   <metadata>
Oct 09 16:34:21 compute-0 nova_compute[117331]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Oct 09 16:34:21 compute-0 nova_compute[117331]:       <nova:package version="32.1.0-0.20251008143712.076498e.el10"/>
Oct 09 16:34:21 compute-0 nova_compute[117331]:       <nova:name>tempest-TestExecuteWorkloadBalanceStrategy-server-487819226</nova:name>
Oct 09 16:34:21 compute-0 nova_compute[117331]:       <nova:creationTime>2025-10-09 16:33:28</nova:creationTime>
Oct 09 16:34:21 compute-0 nova_compute[117331]:       <nova:flavor name="tempest-watcher_flavor-1801535589" id="91ea4b87-742a-4f2b-9b5d-34de6e13f85a">
Oct 09 16:34:21 compute-0 nova_compute[117331]:         <nova:memory>1151</nova:memory>
Oct 09 16:34:21 compute-0 nova_compute[117331]:         <nova:disk>1</nova:disk>
Oct 09 16:34:21 compute-0 nova_compute[117331]:         <nova:swap>0</nova:swap>
Oct 09 16:34:21 compute-0 nova_compute[117331]:         <nova:ephemeral>0</nova:ephemeral>
Oct 09 16:34:21 compute-0 nova_compute[117331]:         <nova:vcpus>1</nova:vcpus>
Oct 09 16:34:21 compute-0 nova_compute[117331]:         <nova:extraSpecs/>
Oct 09 16:34:21 compute-0 nova_compute[117331]:       </nova:flavor>
Oct 09 16:34:21 compute-0 nova_compute[117331]:       <nova:image uuid="b7d6e0af-25e4-4227-9dc6-43143898ceee">
Oct 09 16:34:21 compute-0 nova_compute[117331]:         <nova:containerFormat>bare</nova:containerFormat>
Oct 09 16:34:21 compute-0 nova_compute[117331]:         <nova:diskFormat>qcow2</nova:diskFormat>
Oct 09 16:34:21 compute-0 nova_compute[117331]:         <nova:minDisk>1</nova:minDisk>
Oct 09 16:34:21 compute-0 nova_compute[117331]:         <nova:minRam>0</nova:minRam>
Oct 09 16:34:21 compute-0 nova_compute[117331]:         <nova:properties>
Oct 09 16:34:21 compute-0 nova_compute[117331]:           <nova:property name="hw_rng_model">virtio</nova:property>
Oct 09 16:34:21 compute-0 nova_compute[117331]:         </nova:properties>
Oct 09 16:34:21 compute-0 nova_compute[117331]:       </nova:image>
Oct 09 16:34:21 compute-0 nova_compute[117331]:       <nova:owner>
Oct 09 16:34:21 compute-0 nova_compute[117331]:         <nova:user uuid="1c793380a6e945d69dacfd07f1f156f8">tempest-TestExecuteWorkloadBalanceStrategy-2100042169-project-admin</nova:user>
Oct 09 16:34:21 compute-0 nova_compute[117331]:         <nova:project uuid="6d67ac3076434a4582e5db1ca7d043ff">tempest-TestExecuteWorkloadBalanceStrategy-2100042169</nova:project>
Oct 09 16:34:21 compute-0 nova_compute[117331]:       </nova:owner>
Oct 09 16:34:21 compute-0 nova_compute[117331]:       <nova:root type="image" uuid="b7d6e0af-25e4-4227-9dc6-43143898ceee"/>
Oct 09 16:34:21 compute-0 nova_compute[117331]:       <nova:ports>
Oct 09 16:34:21 compute-0 nova_compute[117331]:         <nova:port uuid="8bc917d5-4807-4e8c-8941-29822aa14c94">
Oct 09 16:34:21 compute-0 nova_compute[117331]:           <nova:ip type="fixed" address="10.100.0.4" ipVersion="4"/>
Oct 09 16:34:21 compute-0 nova_compute[117331]:         </nova:port>
Oct 09 16:34:21 compute-0 nova_compute[117331]:       </nova:ports>
Oct 09 16:34:21 compute-0 nova_compute[117331]:     </nova:instance>
Oct 09 16:34:21 compute-0 nova_compute[117331]:   </metadata>
Oct 09 16:34:21 compute-0 nova_compute[117331]:   <memory unit="KiB">1178624</memory>
Oct 09 16:34:21 compute-0 nova_compute[117331]:   <currentMemory unit="KiB">1178624</currentMemory>
Oct 09 16:34:21 compute-0 nova_compute[117331]:   <vcpu placement="static">1</vcpu>
Oct 09 16:34:21 compute-0 nova_compute[117331]:   <resource>
Oct 09 16:34:21 compute-0 nova_compute[117331]:     <partition>/machine</partition>
Oct 09 16:34:21 compute-0 nova_compute[117331]:   </resource>
Oct 09 16:34:21 compute-0 nova_compute[117331]:   <sysinfo type="smbios">
Oct 09 16:34:21 compute-0 nova_compute[117331]:     <system>
Oct 09 16:34:21 compute-0 nova_compute[117331]:       <entry name="manufacturer">RDO</entry>
Oct 09 16:34:21 compute-0 nova_compute[117331]:       <entry name="product">OpenStack Compute</entry>
Oct 09 16:34:21 compute-0 nova_compute[117331]:       <entry name="version">32.1.0-0.20251008143712.076498e.el10</entry>
Oct 09 16:34:21 compute-0 nova_compute[117331]:       <entry name="serial">e08b0fc8-1e1e-4d97-b613-761b8f6f9674</entry>
Oct 09 16:34:21 compute-0 nova_compute[117331]:       <entry name="uuid">e08b0fc8-1e1e-4d97-b613-761b8f6f9674</entry>
Oct 09 16:34:21 compute-0 nova_compute[117331]:       <entry name="family">Virtual Machine</entry>
Oct 09 16:34:21 compute-0 nova_compute[117331]:     </system>
Oct 09 16:34:21 compute-0 nova_compute[117331]:   </sysinfo>
Oct 09 16:34:21 compute-0 nova_compute[117331]:   <os>
Oct 09 16:34:21 compute-0 nova_compute[117331]:     <type arch="x86_64" machine="pc-q35-rhel9.6.0">hvm</type>
Oct 09 16:34:21 compute-0 nova_compute[117331]:     <boot dev="hd"/>
Oct 09 16:34:21 compute-0 nova_compute[117331]:     <smbios mode="sysinfo"/>
Oct 09 16:34:21 compute-0 nova_compute[117331]:   </os>
Oct 09 16:34:21 compute-0 nova_compute[117331]:   <features>
Oct 09 16:34:21 compute-0 nova_compute[117331]:     <acpi/>
Oct 09 16:34:21 compute-0 nova_compute[117331]:     <apic/>
Oct 09 16:34:21 compute-0 nova_compute[117331]:     <vmcoreinfo state="on"/>
Oct 09 16:34:21 compute-0 nova_compute[117331]:   </features>
Oct 09 16:34:21 compute-0 nova_compute[117331]:   <cpu mode="host-model" check="partial">
Oct 09 16:34:21 compute-0 nova_compute[117331]:     <topology sockets="1" dies="1" clusters="1" cores="1" threads="1"/>
Oct 09 16:34:21 compute-0 nova_compute[117331]:   </cpu>
Oct 09 16:34:21 compute-0 nova_compute[117331]:   <clock offset="utc">
Oct 09 16:34:21 compute-0 nova_compute[117331]:     <timer name="pit" tickpolicy="delay"/>
Oct 09 16:34:21 compute-0 nova_compute[117331]:     <timer name="rtc" tickpolicy="catchup"/>
Oct 09 16:34:21 compute-0 nova_compute[117331]:     <timer name="hpet" present="no"/>
Oct 09 16:34:21 compute-0 nova_compute[117331]:   </clock>
Oct 09 16:34:21 compute-0 nova_compute[117331]:   <on_poweroff>destroy</on_poweroff>
Oct 09 16:34:21 compute-0 nova_compute[117331]:   <on_reboot>restart</on_reboot>
Oct 09 16:34:21 compute-0 nova_compute[117331]:   <on_crash>destroy</on_crash>
Oct 09 16:34:21 compute-0 nova_compute[117331]:   <devices>
Oct 09 16:34:21 compute-0 nova_compute[117331]:     <emulator>/usr/libexec/qemu-kvm</emulator>
Oct 09 16:34:21 compute-0 nova_compute[117331]:     <disk type="file" device="disk">
Oct 09 16:34:21 compute-0 nova_compute[117331]:       <driver name="qemu" type="qcow2" cache="none"/>
Oct 09 16:34:21 compute-0 nova_compute[117331]:       <source file="/var/lib/nova/instances/e08b0fc8-1e1e-4d97-b613-761b8f6f9674/disk"/>
Oct 09 16:34:21 compute-0 nova_compute[117331]:       <target dev="vda" bus="virtio"/>
Oct 09 16:34:21 compute-0 nova_compute[117331]:       <address type="pci" domain="0x0000" bus="0x03" slot="0x00" function="0x0"/>
Oct 09 16:34:21 compute-0 nova_compute[117331]:     </disk>
Oct 09 16:34:21 compute-0 nova_compute[117331]:     <disk type="file" device="cdrom">
Oct 09 16:34:21 compute-0 nova_compute[117331]:       <driver name="qemu" type="raw" cache="none"/>
Oct 09 16:34:21 compute-0 nova_compute[117331]:       <source file="/var/lib/nova/instances/e08b0fc8-1e1e-4d97-b613-761b8f6f9674/disk.config"/>
Oct 09 16:34:21 compute-0 nova_compute[117331]:       <target dev="sda" bus="sata"/>
Oct 09 16:34:21 compute-0 nova_compute[117331]:       <readonly/>
Oct 09 16:34:21 compute-0 nova_compute[117331]:       <address type="drive" controller="0" bus="0" target="0" unit="0"/>
Oct 09 16:34:21 compute-0 nova_compute[117331]:     </disk>
Oct 09 16:34:21 compute-0 nova_compute[117331]:     <controller type="pci" index="0" model="pcie-root"/>
Oct 09 16:34:21 compute-0 nova_compute[117331]:     <controller type="pci" index="1" model="pcie-root-port">
Oct 09 16:34:21 compute-0 nova_compute[117331]:       <model name="pcie-root-port"/>
Oct 09 16:34:21 compute-0 nova_compute[117331]:       <target chassis="1" port="0x10"/>
Oct 09 16:34:21 compute-0 nova_compute[117331]:       <address type="pci" domain="0x0000" bus="0x00" slot="0x02" function="0x0" multifunction="on"/>
Oct 09 16:34:21 compute-0 nova_compute[117331]:     </controller>
Oct 09 16:34:21 compute-0 nova_compute[117331]:     <controller type="pci" index="2" model="pcie-root-port">
Oct 09 16:34:21 compute-0 nova_compute[117331]:       <model name="pcie-root-port"/>
Oct 09 16:34:21 compute-0 nova_compute[117331]:       <target chassis="2" port="0x11"/>
Oct 09 16:34:21 compute-0 nova_compute[117331]:       <address type="pci" domain="0x0000" bus="0x00" slot="0x02" function="0x1"/>
Oct 09 16:34:21 compute-0 nova_compute[117331]:     </controller>
Oct 09 16:34:21 compute-0 nova_compute[117331]:     <controller type="pci" index="3" model="pcie-root-port">
Oct 09 16:34:21 compute-0 nova_compute[117331]:       <model name="pcie-root-port"/>
Oct 09 16:34:21 compute-0 nova_compute[117331]:       <target chassis="3" port="0x12"/>
Oct 09 16:34:21 compute-0 nova_compute[117331]:       <address type="pci" domain="0x0000" bus="0x00" slot="0x02" function="0x2"/>
Oct 09 16:34:21 compute-0 nova_compute[117331]:     </controller>
Oct 09 16:34:21 compute-0 nova_compute[117331]:     <controller type="pci" index="4" model="pcie-root-port">
Oct 09 16:34:21 compute-0 nova_compute[117331]:       <model name="pcie-root-port"/>
Oct 09 16:34:21 compute-0 nova_compute[117331]:       <target chassis="4" port="0x13"/>
Oct 09 16:34:21 compute-0 nova_compute[117331]:       <address type="pci" domain="0x0000" bus="0x00" slot="0x02" function="0x3"/>
Oct 09 16:34:21 compute-0 nova_compute[117331]:     </controller>
Oct 09 16:34:21 compute-0 nova_compute[117331]:     <controller type="pci" index="5" model="pcie-root-port">
Oct 09 16:34:21 compute-0 nova_compute[117331]:       <model name="pcie-root-port"/>
Oct 09 16:34:21 compute-0 nova_compute[117331]:       <target chassis="5" port="0x14"/>
Oct 09 16:34:21 compute-0 nova_compute[117331]:       <address type="pci" domain="0x0000" bus="0x00" slot="0x02" function="0x4"/>
Oct 09 16:34:21 compute-0 nova_compute[117331]:     </controller>
Oct 09 16:34:21 compute-0 nova_compute[117331]:     <controller type="pci" index="6" model="pcie-root-port">
Oct 09 16:34:21 compute-0 nova_compute[117331]:       <model name="pcie-root-port"/>
Oct 09 16:34:21 compute-0 nova_compute[117331]:       <target chassis="6" port="0x15"/>
Oct 09 16:34:21 compute-0 nova_compute[117331]:       <address type="pci" domain="0x0000" bus="0x00" slot="0x02" function="0x5"/>
Oct 09 16:34:21 compute-0 nova_compute[117331]:     </controller>
Oct 09 16:34:21 compute-0 nova_compute[117331]:     <controller type="pci" index="7" model="pcie-root-port">
Oct 09 16:34:21 compute-0 nova_compute[117331]:       <model name="pcie-root-port"/>
Oct 09 16:34:21 compute-0 nova_compute[117331]:       <target chassis="7" port="0x16"/>
Oct 09 16:34:21 compute-0 nova_compute[117331]:       <address type="pci" domain="0x0000" bus="0x00" slot="0x02" function="0x6"/>
Oct 09 16:34:21 compute-0 nova_compute[117331]:     </controller>
Oct 09 16:34:21 compute-0 nova_compute[117331]:     <controller type="pci" index="8" model="pcie-root-port">
Oct 09 16:34:21 compute-0 nova_compute[117331]:       <model name="pcie-root-port"/>
Oct 09 16:34:21 compute-0 nova_compute[117331]:       <target chassis="8" port="0x17"/>
Oct 09 16:34:21 compute-0 nova_compute[117331]:       <address type="pci" domain="0x0000" bus="0x00" slot="0x02" function="0x7"/>
Oct 09 16:34:21 compute-0 nova_compute[117331]:     </controller>
Oct 09 16:34:21 compute-0 nova_compute[117331]:     <controller type="pci" index="9" model="pcie-root-port">
Oct 09 16:34:21 compute-0 nova_compute[117331]:       <model name="pcie-root-port"/>
Oct 09 16:34:21 compute-0 nova_compute[117331]:       <target chassis="9" port="0x18"/>
Oct 09 16:34:21 compute-0 nova_compute[117331]:       <address type="pci" domain="0x0000" bus="0x00" slot="0x03" function="0x0" multifunction="on"/>
Oct 09 16:34:21 compute-0 nova_compute[117331]:     </controller>
Oct 09 16:34:21 compute-0 nova_compute[117331]:     <controller type="pci" index="10" model="pcie-root-port">
Oct 09 16:34:21 compute-0 nova_compute[117331]:       <model name="pcie-root-port"/>
Oct 09 16:34:21 compute-0 nova_compute[117331]:       <target chassis="10" port="0x19"/>
Oct 09 16:34:21 compute-0 nova_compute[117331]:       <address type="pci" domain="0x0000" bus="0x00" slot="0x03" function="0x1"/>
Oct 09 16:34:21 compute-0 nova_compute[117331]:     </controller>
Oct 09 16:34:21 compute-0 nova_compute[117331]:     <controller type="pci" index="11" model="pcie-root-port">
Oct 09 16:34:21 compute-0 nova_compute[117331]:       <model name="pcie-root-port"/>
Oct 09 16:34:21 compute-0 nova_compute[117331]:       <target chassis="11" port="0x1a"/>
Oct 09 16:34:21 compute-0 nova_compute[117331]:       <address type="pci" domain="0x0000" bus="0x00" slot="0x03" function="0x2"/>
Oct 09 16:34:21 compute-0 nova_compute[117331]:     </controller>
Oct 09 16:34:21 compute-0 nova_compute[117331]:     <controller type="pci" index="12" model="pcie-root-port">
Oct 09 16:34:21 compute-0 nova_compute[117331]:       <model name="pcie-root-port"/>
Oct 09 16:34:21 compute-0 nova_compute[117331]:       <target chassis="12" port="0x1b"/>
Oct 09 16:34:21 compute-0 nova_compute[117331]:       <address type="pci" domain="0x0000" bus="0x00" slot="0x03" function="0x3"/>
Oct 09 16:34:21 compute-0 nova_compute[117331]:     </controller>
Oct 09 16:34:21 compute-0 nova_compute[117331]:     <controller type="pci" index="13" model="pcie-root-port">
Oct 09 16:34:21 compute-0 nova_compute[117331]:       <model name="pcie-root-port"/>
Oct 09 16:34:21 compute-0 nova_compute[117331]:       <target chassis="13" port="0x1c"/>
Oct 09 16:34:21 compute-0 nova_compute[117331]:       <address type="pci" domain="0x0000" bus="0x00" slot="0x03" function="0x4"/>
Oct 09 16:34:21 compute-0 nova_compute[117331]:     </controller>
Oct 09 16:34:21 compute-0 nova_compute[117331]:     <controller type="pci" index="14" model="pcie-root-port">
Oct 09 16:34:21 compute-0 nova_compute[117331]:       <model name="pcie-root-port"/>
Oct 09 16:34:21 compute-0 nova_compute[117331]:       <target chassis="14" port="0x1d"/>
Oct 09 16:34:21 compute-0 nova_compute[117331]:       <address type="pci" domain="0x0000" bus="0x00" slot="0x03" function="0x5"/>
Oct 09 16:34:21 compute-0 nova_compute[117331]:     </controller>
Oct 09 16:34:21 compute-0 nova_compute[117331]:     <controller type="pci" index="15" model="pcie-root-port">
Oct 09 16:34:21 compute-0 nova_compute[117331]:       <model name="pcie-root-port"/>
Oct 09 16:34:21 compute-0 nova_compute[117331]:       <target chassis="15" port="0x1e"/>
Oct 09 16:34:21 compute-0 nova_compute[117331]:       <address type="pci" domain="0x0000" bus="0x00" slot="0x03" function="0x6"/>
Oct 09 16:34:21 compute-0 nova_compute[117331]:     </controller>
Oct 09 16:34:21 compute-0 nova_compute[117331]:     <controller type="pci" index="16" model="pcie-root-port">
Oct 09 16:34:21 compute-0 nova_compute[117331]:       <model name="pcie-root-port"/>
Oct 09 16:34:21 compute-0 nova_compute[117331]:       <target chassis="16" port="0x1f"/>
Oct 09 16:34:21 compute-0 nova_compute[117331]:       <address type="pci" domain="0x0000" bus="0x00" slot="0x03" function="0x7"/>
Oct 09 16:34:21 compute-0 nova_compute[117331]:     </controller>
Oct 09 16:34:21 compute-0 nova_compute[117331]:     <controller type="pci" index="17" model="pcie-root-port">
Oct 09 16:34:21 compute-0 nova_compute[117331]:       <model name="pcie-root-port"/>
Oct 09 16:34:21 compute-0 nova_compute[117331]:       <target chassis="17" port="0x20"/>
Oct 09 16:34:21 compute-0 nova_compute[117331]:       <address type="pci" domain="0x0000" bus="0x00" slot="0x04" function="0x0" multifunction="on"/>
Oct 09 16:34:21 compute-0 nova_compute[117331]:     </controller>
Oct 09 16:34:21 compute-0 nova_compute[117331]:     <controller type="pci" index="18" model="pcie-root-port">
Oct 09 16:34:21 compute-0 nova_compute[117331]:       <model name="pcie-root-port"/>
Oct 09 16:34:21 compute-0 nova_compute[117331]:       <target chassis="18" port="0x21"/>
Oct 09 16:34:21 compute-0 nova_compute[117331]:       <address type="pci" domain="0x0000" bus="0x00" slot="0x04" function="0x1"/>
Oct 09 16:34:21 compute-0 nova_compute[117331]:     </controller>
Oct 09 16:34:21 compute-0 nova_compute[117331]:     <controller type="pci" index="19" model="pcie-root-port">
Oct 09 16:34:21 compute-0 nova_compute[117331]:       <model name="pcie-root-port"/>
Oct 09 16:34:21 compute-0 nova_compute[117331]:       <target chassis="19" port="0x22"/>
Oct 09 16:34:21 compute-0 nova_compute[117331]:       <address type="pci" domain="0x0000" bus="0x00" slot="0x04" function="0x2"/>
Oct 09 16:34:21 compute-0 nova_compute[117331]:     </controller>
Oct 09 16:34:21 compute-0 nova_compute[117331]:     <controller type="pci" index="20" model="pcie-root-port">
Oct 09 16:34:21 compute-0 nova_compute[117331]:       <model name="pcie-root-port"/>
Oct 09 16:34:21 compute-0 nova_compute[117331]:       <target chassis="20" port="0x23"/>
Oct 09 16:34:21 compute-0 nova_compute[117331]:       <address type="pci" domain="0x0000" bus="0x00" slot="0x04" function="0x3"/>
Oct 09 16:34:21 compute-0 nova_compute[117331]:     </controller>
Oct 09 16:34:21 compute-0 nova_compute[117331]:     <controller type="pci" index="21" model="pcie-root-port">
Oct 09 16:34:21 compute-0 nova_compute[117331]:       <model name="pcie-root-port"/>
Oct 09 16:34:21 compute-0 nova_compute[117331]:       <target chassis="21" port="0x24"/>
Oct 09 16:34:21 compute-0 nova_compute[117331]:       <address type="pci" domain="0x0000" bus="0x00" slot="0x04" function="0x4"/>
Oct 09 16:34:21 compute-0 nova_compute[117331]:     </controller>
Oct 09 16:34:21 compute-0 nova_compute[117331]:     <controller type="pci" index="22" model="pcie-root-port">
Oct 09 16:34:21 compute-0 nova_compute[117331]:       <model name="pcie-root-port"/>
Oct 09 16:34:21 compute-0 nova_compute[117331]:       <target chassis="22" port="0x25"/>
Oct 09 16:34:21 compute-0 nova_compute[117331]:       <address type="pci" domain="0x0000" bus="0x00" slot="0x04" function="0x5"/>
Oct 09 16:34:21 compute-0 nova_compute[117331]:     </controller>
Oct 09 16:34:21 compute-0 nova_compute[117331]:     <controller type="pci" index="23" model="pcie-root-port">
Oct 09 16:34:21 compute-0 nova_compute[117331]:       <model name="pcie-root-port"/>
Oct 09 16:34:21 compute-0 nova_compute[117331]:       <target chassis="23" port="0x26"/>
Oct 09 16:34:21 compute-0 nova_compute[117331]:       <address type="pci" domain="0x0000" bus="0x00" slot="0x04" function="0x6"/>
Oct 09 16:34:21 compute-0 nova_compute[117331]:     </controller>
Oct 09 16:34:21 compute-0 nova_compute[117331]:     <controller type="pci" index="24" model="pcie-root-port">
Oct 09 16:34:21 compute-0 nova_compute[117331]:       <model name="pcie-root-port"/>
Oct 09 16:34:21 compute-0 nova_compute[117331]:       <target chassis="24" port="0x27"/>
Oct 09 16:34:21 compute-0 nova_compute[117331]:       <address type="pci" domain="0x0000" bus="0x00" slot="0x04" function="0x7"/>
Oct 09 16:34:21 compute-0 nova_compute[117331]:     </controller>
Oct 09 16:34:21 compute-0 nova_compute[117331]:     <controller type="pci" index="25" model="pcie-root-port">
Oct 09 16:34:21 compute-0 nova_compute[117331]:       <model name="pcie-root-port"/>
Oct 09 16:34:21 compute-0 nova_compute[117331]:       <target chassis="25" port="0x28"/>
Oct 09 16:34:21 compute-0 nova_compute[117331]:       <address type="pci" domain="0x0000" bus="0x00" slot="0x05" function="0x0"/>
Oct 09 16:34:21 compute-0 nova_compute[117331]:     </controller>
Oct 09 16:34:21 compute-0 nova_compute[117331]:     <controller type="pci" index="26" model="pcie-to-pci-bridge">
Oct 09 16:34:21 compute-0 nova_compute[117331]:       <model name="pcie-pci-bridge"/>
Oct 09 16:34:21 compute-0 nova_compute[117331]:       <address type="pci" domain="0x0000" bus="0x01" slot="0x00" function="0x0"/>
Oct 09 16:34:21 compute-0 nova_compute[117331]:     </controller>
Oct 09 16:34:21 compute-0 nova_compute[117331]:     <controller type="usb" index="0" model="piix3-uhci">
Oct 09 16:34:21 compute-0 nova_compute[117331]:       <address type="pci" domain="0x0000" bus="0x1a" slot="0x01" function="0x0"/>
Oct 09 16:34:21 compute-0 nova_compute[117331]:     </controller>
Oct 09 16:34:21 compute-0 nova_compute[117331]:     <controller type="sata" index="0">
Oct 09 16:34:21 compute-0 nova_compute[117331]:       <address type="pci" domain="0x0000" bus="0x00" slot="0x1f" function="0x2"/>
Oct 09 16:34:21 compute-0 nova_compute[117331]:     </controller>
Oct 09 16:34:21 compute-0 nova_compute[117331]:     <interface type="ethernet">
Oct 09 16:34:21 compute-0 nova_compute[117331]:       <mac address="fa:16:3e:a0:cf:4a"/>
Oct 09 16:34:21 compute-0 nova_compute[117331]:       <model type="virtio"/>
Oct 09 16:34:21 compute-0 nova_compute[117331]:       <driver name="vhost" rx_queue_size="512"/>
Oct 09 16:34:21 compute-0 nova_compute[117331]:       <mtu size="1442"/>
Oct 09 16:34:21 compute-0 nova_compute[117331]:       <target dev="tap8bc917d5-48"/>
Oct 09 16:34:21 compute-0 nova_compute[117331]:       <address type="pci" domain="0x0000" bus="0x02" slot="0x00" function="0x0"/>
Oct 09 16:34:21 compute-0 nova_compute[117331]:     </interface>
Oct 09 16:34:21 compute-0 nova_compute[117331]:     <serial type="pty">
Oct 09 16:34:21 compute-0 nova_compute[117331]:       <log file="/var/lib/nova/instances/e08b0fc8-1e1e-4d97-b613-761b8f6f9674/console.log" append="off"/>
Oct 09 16:34:21 compute-0 nova_compute[117331]:       <target type="isa-serial" port="0">
Oct 09 16:34:21 compute-0 nova_compute[117331]:         <model name="isa-serial"/>
Oct 09 16:34:21 compute-0 nova_compute[117331]:       </target>
Oct 09 16:34:21 compute-0 nova_compute[117331]:     </serial>
Oct 09 16:34:21 compute-0 nova_compute[117331]:     <console type="pty">
Oct 09 16:34:21 compute-0 nova_compute[117331]:       <log file="/var/lib/nova/instances/e08b0fc8-1e1e-4d97-b613-761b8f6f9674/console.log" append="off"/>
Oct 09 16:34:21 compute-0 nova_compute[117331]:       <target type="serial" port="0"/>
Oct 09 16:34:21 compute-0 nova_compute[117331]:     </console>
Oct 09 16:34:21 compute-0 nova_compute[117331]:     <input type="tablet" bus="usb">
Oct 09 16:34:21 compute-0 nova_compute[117331]:       <address type="usb" bus="0" port="1"/>
Oct 09 16:34:21 compute-0 nova_compute[117331]:     </input>
Oct 09 16:34:21 compute-0 nova_compute[117331]:     <input type="mouse" bus="ps2"/>
Oct 09 16:34:21 compute-0 nova_compute[117331]:     <graphics type="vnc" port="-1" autoport="yes" listen="::">
Oct 09 16:34:21 compute-0 nova_compute[117331]:       <listen type="address" address="::"/>
Oct 09 16:34:21 compute-0 nova_compute[117331]:     </graphics>
Oct 09 16:34:21 compute-0 nova_compute[117331]:     <video>
Oct 09 16:34:21 compute-0 nova_compute[117331]:       <model type="virtio" heads="1" primary="yes"/>
Oct 09 16:34:21 compute-0 nova_compute[117331]:       <address type="pci" domain="0x0000" bus="0x00" slot="0x01" function="0x0"/>
Oct 09 16:34:21 compute-0 nova_compute[117331]:     </video>
Oct 09 16:34:21 compute-0 nova_compute[117331]:     <memballoon model="virtio" autodeflate="on" freePageReporting="on">
Oct 09 16:34:21 compute-0 nova_compute[117331]:       <stats period="10"/>
Oct 09 16:34:21 compute-0 nova_compute[117331]:       <address type="pci" domain="0x0000" bus="0x04" slot="0x00" function="0x0"/>
Oct 09 16:34:21 compute-0 nova_compute[117331]:     </memballoon>
Oct 09 16:34:21 compute-0 nova_compute[117331]:     <rng model="virtio">
Oct 09 16:34:21 compute-0 nova_compute[117331]:       <backend model="random">/dev/urandom</backend>
Oct 09 16:34:21 compute-0 nova_compute[117331]:       <address type="pci" domain="0x0000" bus="0x05" slot="0x00" function="0x0"/>
Oct 09 16:34:21 compute-0 nova_compute[117331]:     </rng>
Oct 09 16:34:21 compute-0 nova_compute[117331]:   </devices>
Oct 09 16:34:21 compute-0 nova_compute[117331]:   <seclabel type="dynamic" model="selinux" relabel="yes"/>
Oct 09 16:34:21 compute-0 nova_compute[117331]: </domain>
Oct 09 16:34:21 compute-0 nova_compute[117331]:  _update_pci_dev_xml /usr/lib/python3.12/site-packages/nova/virt/libvirt/migration.py:166
Oct 09 16:34:21 compute-0 nova_compute[117331]: 2025-10-09 16:34:21.075 2 DEBUG nova.virt.libvirt.driver [None req-76aeb176-65ec-4856-ad96-194e2d74752e 005258eefa5345409efe13f6a1de22ab c076c42271004e7aa86e84416e33f826 - - default default] [instance: e08b0fc8-1e1e-4d97-b613-761b8f6f9674] About to invoke the migrate API _live_migration_operation /usr/lib/python3.12/site-packages/nova/virt/libvirt/driver.py:11175
Oct 09 16:34:21 compute-0 nova_compute[117331]: 2025-10-09 16:34:21.325 2 DEBUG nova.network.neutron [req-2d385307-45a0-42a1-a404-3e0707f7be07 req-c6161ab1-c8e7-4897-8b87-ef3dc674c09b ee15961ad2be4bada6c106ac4f69f422 c076c42271004e7aa86e84416e33f826 - - default default] [instance: e08b0fc8-1e1e-4d97-b613-761b8f6f9674] Updated VIF entry in instance network info cache for port 8bc917d5-4807-4e8c-8941-29822aa14c94. _build_network_info_model /usr/lib/python3.12/site-packages/nova/network/neutron.py:3542
Oct 09 16:34:21 compute-0 nova_compute[117331]: 2025-10-09 16:34:21.325 2 DEBUG nova.network.neutron [req-2d385307-45a0-42a1-a404-3e0707f7be07 req-c6161ab1-c8e7-4897-8b87-ef3dc674c09b ee15961ad2be4bada6c106ac4f69f422 c076c42271004e7aa86e84416e33f826 - - default default] [instance: e08b0fc8-1e1e-4d97-b613-761b8f6f9674] Updating instance_info_cache with network_info: [{"id": "8bc917d5-4807-4e8c-8941-29822aa14c94", "address": "fa:16:3e:a0:cf:4a", "network": {"id": "54b37568-476a-40a0-b545-fe5401f85653", "bridge": "br-int", "label": "tempest-TestExecuteWorkloadBalanceStrategy-563305466-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "036a4e356fb34effb6775ffe5bd9a19f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap8bc917d5-48", "ovs_interfaceid": "8bc917d5-4807-4e8c-8941-29822aa14c94", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {"migrating_to": "compute-1.ctlplane.example.com"}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.12/site-packages/nova/network/neutron.py:116
Oct 09 16:34:21 compute-0 nova_compute[117331]: 2025-10-09 16:34:21.562 2 DEBUG nova.virt.libvirt.migration [None req-76aeb176-65ec-4856-ad96-194e2d74752e 005258eefa5345409efe13f6a1de22ab c076c42271004e7aa86e84416e33f826 - - default default] [instance: e08b0fc8-1e1e-4d97-b613-761b8f6f9674] Current None elapsed 1 steps [(0, 50), (300, 95), (600, 140), (900, 185), (1200, 230), (1500, 275), (1800, 320), (2100, 365), (2400, 410), (2700, 455), (3000, 500)] update_downtime /usr/lib/python3.12/site-packages/nova/virt/libvirt/migration.py:658
Oct 09 16:34:21 compute-0 nova_compute[117331]: 2025-10-09 16:34:21.563 2 INFO nova.virt.libvirt.migration [None req-76aeb176-65ec-4856-ad96-194e2d74752e 005258eefa5345409efe13f6a1de22ab c076c42271004e7aa86e84416e33f826 - - default default] [instance: e08b0fc8-1e1e-4d97-b613-761b8f6f9674] Increasing downtime to 50 ms after 0 sec elapsed time
Oct 09 16:34:21 compute-0 nova_compute[117331]: 2025-10-09 16:34:21.832 2 DEBUG oslo_concurrency.lockutils [req-2d385307-45a0-42a1-a404-3e0707f7be07 req-c6161ab1-c8e7-4897-8b87-ef3dc674c09b ee15961ad2be4bada6c106ac4f69f422 c076c42271004e7aa86e84416e33f826 - - default default] Releasing lock "refresh_cache-e08b0fc8-1e1e-4d97-b613-761b8f6f9674" lock /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:334
Oct 09 16:34:21 compute-0 sshd-session[151258]: Failed password for invalid user mapred from 36.224.53.32 port 48768 ssh2
Oct 09 16:34:22 compute-0 nova_compute[117331]: 2025-10-09 16:34:22.223 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 09 16:34:22 compute-0 ovn_controller[19752]: 2025-10-09T16:34:22Z|00228|memory_trim|INFO|Detected inactivity (last active 30001 ms ago): trimming memory
Oct 09 16:34:22 compute-0 nova_compute[117331]: 2025-10-09 16:34:22.578 2 INFO nova.virt.libvirt.driver [None req-76aeb176-65ec-4856-ad96-194e2d74752e 005258eefa5345409efe13f6a1de22ab c076c42271004e7aa86e84416e33f826 - - default default] [instance: e08b0fc8-1e1e-4d97-b613-761b8f6f9674] Migration running for 1 secs, memory 100% remaining (bytes processed=0, remaining=0, total=0); disk 100% remaining (bytes processed=0, remaining=0, total=0).
Oct 09 16:34:22 compute-0 podman[151261]: 2025-10-09 16:34:22.8262847 +0000 UTC m=+0.062319880 container health_status 270830b5167cafbab735d8118980f26987e673cd0fa464dd2202394830bcca66 (image=38.102.83.66:5001/podified-master-centos10/openstack-multipathd:watcher_latest, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, config_id=multipathd, io.buildah.version=1.41.4, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_build_tag=watcher_latest, tcib_managed=true, container_name=multipathd, org.label-schema.build-date=20251007, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': '38.102.83.66:5001/podified-master-centos10/openstack-multipathd:watcher_latest', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']})
Oct 09 16:34:23 compute-0 nova_compute[117331]: 2025-10-09 16:34:23.083 2 DEBUG nova.virt.libvirt.migration [None req-76aeb176-65ec-4856-ad96-194e2d74752e 005258eefa5345409efe13f6a1de22ab c076c42271004e7aa86e84416e33f826 - - default default] [instance: e08b0fc8-1e1e-4d97-b613-761b8f6f9674] Current 50 elapsed 2 steps [(0, 50), (300, 95), (600, 140), (900, 185), (1200, 230), (1500, 275), (1800, 320), (2100, 365), (2400, 410), (2700, 455), (3000, 500)] update_downtime /usr/lib/python3.12/site-packages/nova/virt/libvirt/migration.py:658
Oct 09 16:34:23 compute-0 nova_compute[117331]: 2025-10-09 16:34:23.084 2 DEBUG nova.virt.libvirt.migration [None req-76aeb176-65ec-4856-ad96-194e2d74752e 005258eefa5345409efe13f6a1de22ab c076c42271004e7aa86e84416e33f826 - - default default] [instance: e08b0fc8-1e1e-4d97-b613-761b8f6f9674] Downtime does not need to change update_downtime /usr/lib/python3.12/site-packages/nova/virt/libvirt/migration.py:671
Oct 09 16:34:23 compute-0 sshd-session[151258]: Connection closed by invalid user mapred 36.224.53.32 port 48768 [preauth]
Oct 09 16:34:23 compute-0 nova_compute[117331]: 2025-10-09 16:34:23.587 2 DEBUG nova.virt.libvirt.migration [None req-76aeb176-65ec-4856-ad96-194e2d74752e 005258eefa5345409efe13f6a1de22ab c076c42271004e7aa86e84416e33f826 - - default default] [instance: e08b0fc8-1e1e-4d97-b613-761b8f6f9674] Current 50 elapsed 3 steps [(0, 50), (300, 95), (600, 140), (900, 185), (1200, 230), (1500, 275), (1800, 320), (2100, 365), (2400, 410), (2700, 455), (3000, 500)] update_downtime /usr/lib/python3.12/site-packages/nova/virt/libvirt/migration.py:658
Oct 09 16:34:23 compute-0 nova_compute[117331]: 2025-10-09 16:34:23.587 2 DEBUG nova.virt.libvirt.migration [None req-76aeb176-65ec-4856-ad96-194e2d74752e 005258eefa5345409efe13f6a1de22ab c076c42271004e7aa86e84416e33f826 - - default default] [instance: e08b0fc8-1e1e-4d97-b613-761b8f6f9674] Downtime does not need to change update_downtime /usr/lib/python3.12/site-packages/nova/virt/libvirt/migration.py:671
Oct 09 16:34:23 compute-0 kernel: tap8bc917d5-48 (unregistering): left promiscuous mode
Oct 09 16:34:23 compute-0 NetworkManager[1028]: <info>  [1760027663.9654] device (tap8bc917d5-48): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Oct 09 16:34:23 compute-0 nova_compute[117331]: 2025-10-09 16:34:23.975 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 09 16:34:23 compute-0 ovn_controller[19752]: 2025-10-09T16:34:23Z|00229|binding|INFO|Releasing lport 8bc917d5-4807-4e8c-8941-29822aa14c94 from this chassis (sb_readonly=0)
Oct 09 16:34:23 compute-0 ovn_controller[19752]: 2025-10-09T16:34:23Z|00230|binding|INFO|Setting lport 8bc917d5-4807-4e8c-8941-29822aa14c94 down in Southbound
Oct 09 16:34:23 compute-0 ovn_controller[19752]: 2025-10-09T16:34:23Z|00231|binding|INFO|Removing iface tap8bc917d5-48 ovn-installed in OVS
Oct 09 16:34:23 compute-0 nova_compute[117331]: 2025-10-09 16:34:23.979 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 09 16:34:23 compute-0 ovn_metadata_agent[28608]: 2025-10-09 16:34:23.984 28613 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:a0:cf:4a 10.100.0.4'], port_security=['fa:16:3e:a0:cf:4a 10.100.0.4'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com,compute-1.ctlplane.example.com', 'activation-strategy': 'rarp', 'additional-chassis-activated': '2bd8bf21-1f6b-42c9-9656-9a72fa8dcbf3'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.4/28', 'neutron:device_id': 'e08b0fc8-1e1e-4d97-b613-761b8f6f9674', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-54b37568-476a-40a0-b545-fe5401f85653', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '6d67ac3076434a4582e5db1ca7d043ff', 'neutron:revision_number': '10', 'neutron:security_group_ids': '94abd997-3903-47b3-abe3-e283a4232c96', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=dc641581-8c4b-4cef-982e-bb0cf11a52ba, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f37b2576840>], logical_port=8bc917d5-4807-4e8c-8941-29822aa14c94) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f37b2576840>]) matches /usr/lib/python3.12/site-packages/ovsdbapp/backend/ovs_idl/event.py:55
Oct 09 16:34:23 compute-0 ovn_metadata_agent[28608]: 2025-10-09 16:34:23.985 28613 INFO neutron.agent.ovn.metadata.agent [-] Port 8bc917d5-4807-4e8c-8941-29822aa14c94 in datapath 54b37568-476a-40a0-b545-fe5401f85653 unbound from our chassis
Oct 09 16:34:23 compute-0 ovn_metadata_agent[28608]: 2025-10-09 16:34:23.987 28613 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 54b37568-476a-40a0-b545-fe5401f85653
Oct 09 16:34:23 compute-0 nova_compute[117331]: 2025-10-09 16:34:23.994 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 09 16:34:24 compute-0 ovn_metadata_agent[28608]: 2025-10-09 16:34:24.004 139687 DEBUG oslo.privsep.daemon [-] privsep: reply[dbceaa30-f594-4ae3-a69a-03ea1abfb0bf]: (4, True) _call_back /usr/lib/python3.12/site-packages/oslo_privsep/daemon.py:515
Oct 09 16:34:24 compute-0 ovn_metadata_agent[28608]: 2025-10-09 16:34:24.029 141048 DEBUG oslo.privsep.daemon [-] privsep: reply[0e1feaf2-f838-4522-ba84-c06e29718eda]: (4, None) _call_back /usr/lib/python3.12/site-packages/oslo_privsep/daemon.py:515
Oct 09 16:34:24 compute-0 systemd[1]: machine-qemu\x2d18\x2dinstance\x2d00000018.scope: Deactivated successfully.
Oct 09 16:34:24 compute-0 ovn_metadata_agent[28608]: 2025-10-09 16:34:24.032 141048 DEBUG oslo.privsep.daemon [-] privsep: reply[eee2fedc-c175-4ad9-83ce-054e5e441cf0]: (4, None) _call_back /usr/lib/python3.12/site-packages/oslo_privsep/daemon.py:515
Oct 09 16:34:24 compute-0 systemd[1]: machine-qemu\x2d18\x2dinstance\x2d00000018.scope: Consumed 14.993s CPU time.
Oct 09 16:34:24 compute-0 systemd-machined[77487]: Machine qemu-18-instance-00000018 terminated.
Oct 09 16:34:24 compute-0 ovn_metadata_agent[28608]: 2025-10-09 16:34:24.060 141048 DEBUG oslo.privsep.daemon [-] privsep: reply[d9f7c5aa-ec7e-4a53-801b-518a90d1728b]: (4, None) _call_back /usr/lib/python3.12/site-packages/oslo_privsep/daemon.py:515
Oct 09 16:34:24 compute-0 ovn_metadata_agent[28608]: 2025-10-09 16:34:24.074 139687 DEBUG oslo.privsep.daemon [-] privsep: reply[f1339e16-7a8a-4f3f-9b02-36e68f4a2576]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap54b37568-41'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:50:3a:49'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 12, 'tx_packets': 7, 'rx_bytes': 1000, 'tx_bytes': 438, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 12, 'tx_packets': 7, 'rx_bytes': 1000, 'tx_bytes': 438, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 
0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 63], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 255534, 'reachable_time': 20185, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 8, 'inoctets': 720, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 8, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 720, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 8, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 151301, 'error': None, 'target': 'ovnmeta-54b37568-476a-40a0-b545-fe5401f85653', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.12/site-packages/oslo_privsep/daemon.py:515
Oct 09 16:34:24 compute-0 ovn_metadata_agent[28608]: 2025-10-09 16:34:24.089 139687 DEBUG oslo.privsep.daemon [-] privsep: reply[57fd61b1-1bcb-491c-b27b-b158f1c47631]: (4, ({'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tap54b37568-41'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 255546, 'tstamp': 255546}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 151302, 'error': None, 'target': 'ovnmeta-54b37568-476a-40a0-b545-fe5401f85653', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tap54b37568-41'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 255550, 'tstamp': 255550}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 151302, 'error': None, 'target': 'ovnmeta-54b37568-476a-40a0-b545-fe5401f85653', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.12/site-packages/oslo_privsep/daemon.py:515
Oct 09 16:34:24 compute-0 ovn_metadata_agent[28608]: 2025-10-09 16:34:24.090 28613 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap54b37568-40, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.12/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 09 16:34:24 compute-0 nova_compute[117331]: 2025-10-09 16:34:24.092 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 09 16:34:24 compute-0 nova_compute[117331]: 2025-10-09 16:34:24.097 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 09 16:34:24 compute-0 ovn_metadata_agent[28608]: 2025-10-09 16:34:24.097 28613 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap54b37568-40, may_exist=True, interface_attrs={}) do_commit /usr/lib/python3.12/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 09 16:34:24 compute-0 ovn_metadata_agent[28608]: 2025-10-09 16:34:24.097 28613 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.12/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 09 16:34:24 compute-0 ovn_metadata_agent[28608]: 2025-10-09 16:34:24.098 28613 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap54b37568-40, col_values=(('external_ids', {'iface-id': '01dee844-02ac-4caa-80f7-8a16019cbd9d'}),), if_exists=True) do_commit /usr/lib/python3.12/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 09 16:34:24 compute-0 ovn_metadata_agent[28608]: 2025-10-09 16:34:24.098 28613 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.12/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 09 16:34:24 compute-0 ovn_metadata_agent[28608]: 2025-10-09 16:34:24.099 139687 DEBUG oslo.privsep.daemon [-] privsep: reply[3851ec62-25be-42f9-8278-d64d23cd006b]: (4, '\nglobal\n    log         /dev/log local0 debug\n    log-tag     haproxy-metadata-proxy-54b37568-476a-40a0-b545-fe5401f85653\n    user        root\n    group       root\n    maxconn     1024\n    pidfile     /var/lib/neutron/external/pids/54b37568-476a-40a0-b545-fe5401f85653.pid.haproxy\n    daemon\n\ndefaults\n    log global\n    mode http\n    option httplog\n    option dontlognull\n    option http-server-close\n    option forwardfor\n    retries                 3\n    timeout http-request    30s\n    timeout connect         30s\n    timeout client          32s\n    timeout server          32s\n    timeout http-keep-alive 30s\n\nlisten listener\n    bind 169.254.169.254:80\n    \n    server metadata /var/lib/neutron/metadata_proxy\n\n    http-request add-header X-OVN-Network-ID 54b37568-476a-40a0-b545-fe5401f85653\n') _call_back /usr/lib/python3.12/site-packages/oslo_privsep/daemon.py:515
Oct 09 16:34:24 compute-0 nova_compute[117331]: 2025-10-09 16:34:24.199 2 DEBUG nova.virt.libvirt.guest [None req-76aeb176-65ec-4856-ad96-194e2d74752e 005258eefa5345409efe13f6a1de22ab c076c42271004e7aa86e84416e33f826 - - default default] Domain has shutdown/gone away: Requested operation is not valid: domain is not running get_job_info /usr/lib/python3.12/site-packages/nova/virt/libvirt/guest.py:687
Oct 09 16:34:24 compute-0 nova_compute[117331]: 2025-10-09 16:34:24.199 2 INFO nova.virt.libvirt.driver [None req-76aeb176-65ec-4856-ad96-194e2d74752e 005258eefa5345409efe13f6a1de22ab c076c42271004e7aa86e84416e33f826 - - default default] [instance: e08b0fc8-1e1e-4d97-b613-761b8f6f9674] Migration operation has completed
Oct 09 16:34:24 compute-0 nova_compute[117331]: 2025-10-09 16:34:24.199 2 INFO nova.compute.manager [None req-76aeb176-65ec-4856-ad96-194e2d74752e 005258eefa5345409efe13f6a1de22ab c076c42271004e7aa86e84416e33f826 - - default default] [instance: e08b0fc8-1e1e-4d97-b613-761b8f6f9674] _post_live_migration() is started..
Oct 09 16:34:24 compute-0 nova_compute[117331]: 2025-10-09 16:34:24.201 2 DEBUG nova.virt.libvirt.driver [None req-76aeb176-65ec-4856-ad96-194e2d74752e 005258eefa5345409efe13f6a1de22ab c076c42271004e7aa86e84416e33f826 - - default default] [instance: e08b0fc8-1e1e-4d97-b613-761b8f6f9674] Migrate API has completed _live_migration_operation /usr/lib/python3.12/site-packages/nova/virt/libvirt/driver.py:11182
Oct 09 16:34:24 compute-0 nova_compute[117331]: 2025-10-09 16:34:24.201 2 DEBUG nova.virt.libvirt.driver [None req-76aeb176-65ec-4856-ad96-194e2d74752e 005258eefa5345409efe13f6a1de22ab c076c42271004e7aa86e84416e33f826 - - default default] [instance: e08b0fc8-1e1e-4d97-b613-761b8f6f9674] Migration operation thread has finished _live_migration_operation /usr/lib/python3.12/site-packages/nova/virt/libvirt/driver.py:11230
Oct 09 16:34:24 compute-0 nova_compute[117331]: 2025-10-09 16:34:24.201 2 DEBUG nova.virt.libvirt.driver [None req-76aeb176-65ec-4856-ad96-194e2d74752e 005258eefa5345409efe13f6a1de22ab c076c42271004e7aa86e84416e33f826 - - default default] [instance: e08b0fc8-1e1e-4d97-b613-761b8f6f9674] Migration operation thread notification thread_finished /usr/lib/python3.12/site-packages/nova/virt/libvirt/driver.py:11533
Oct 09 16:34:24 compute-0 nova_compute[117331]: 2025-10-09 16:34:24.211 2 WARNING neutronclient.v2_0.client [None req-76aeb176-65ec-4856-ad96-194e2d74752e 005258eefa5345409efe13f6a1de22ab c076c42271004e7aa86e84416e33f826 - - default default] The python binding code in neutronclient is deprecated in favor of OpenstackSDK, please use that as this will be removed in a future release.
Oct 09 16:34:24 compute-0 nova_compute[117331]: 2025-10-09 16:34:24.212 2 WARNING neutronclient.v2_0.client [None req-76aeb176-65ec-4856-ad96-194e2d74752e 005258eefa5345409efe13f6a1de22ab c076c42271004e7aa86e84416e33f826 - - default default] The python binding code in neutronclient is deprecated in favor of OpenstackSDK, please use that as this will be removed in a future release.
Oct 09 16:34:24 compute-0 nova_compute[117331]: 2025-10-09 16:34:24.263 2 DEBUG nova.compute.manager [req-190d2c36-6601-47ff-86b2-0c4aefc19cfa req-a60b26c2-3725-4e42-9dee-a884b518bcda ee15961ad2be4bada6c106ac4f69f422 c076c42271004e7aa86e84416e33f826 - - default default] [instance: e08b0fc8-1e1e-4d97-b613-761b8f6f9674] Received event network-vif-unplugged-8bc917d5-4807-4e8c-8941-29822aa14c94 external_instance_event /usr/lib/python3.12/site-packages/nova/compute/manager.py:11812
Oct 09 16:34:24 compute-0 nova_compute[117331]: 2025-10-09 16:34:24.263 2 DEBUG oslo_concurrency.lockutils [req-190d2c36-6601-47ff-86b2-0c4aefc19cfa req-a60b26c2-3725-4e42-9dee-a884b518bcda ee15961ad2be4bada6c106ac4f69f422 c076c42271004e7aa86e84416e33f826 - - default default] Acquiring lock "e08b0fc8-1e1e-4d97-b613-761b8f6f9674-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:405
Oct 09 16:34:24 compute-0 nova_compute[117331]: 2025-10-09 16:34:24.263 2 DEBUG oslo_concurrency.lockutils [req-190d2c36-6601-47ff-86b2-0c4aefc19cfa req-a60b26c2-3725-4e42-9dee-a884b518bcda ee15961ad2be4bada6c106ac4f69f422 c076c42271004e7aa86e84416e33f826 - - default default] Lock "e08b0fc8-1e1e-4d97-b613-761b8f6f9674-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:410
Oct 09 16:34:24 compute-0 nova_compute[117331]: 2025-10-09 16:34:24.263 2 DEBUG oslo_concurrency.lockutils [req-190d2c36-6601-47ff-86b2-0c4aefc19cfa req-a60b26c2-3725-4e42-9dee-a884b518bcda ee15961ad2be4bada6c106ac4f69f422 c076c42271004e7aa86e84416e33f826 - - default default] Lock "e08b0fc8-1e1e-4d97-b613-761b8f6f9674-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:424
Oct 09 16:34:24 compute-0 nova_compute[117331]: 2025-10-09 16:34:24.264 2 DEBUG nova.compute.manager [req-190d2c36-6601-47ff-86b2-0c4aefc19cfa req-a60b26c2-3725-4e42-9dee-a884b518bcda ee15961ad2be4bada6c106ac4f69f422 c076c42271004e7aa86e84416e33f826 - - default default] [instance: e08b0fc8-1e1e-4d97-b613-761b8f6f9674] No waiting events found dispatching network-vif-unplugged-8bc917d5-4807-4e8c-8941-29822aa14c94 pop_instance_event /usr/lib/python3.12/site-packages/nova/compute/manager.py:344
Oct 09 16:34:24 compute-0 nova_compute[117331]: 2025-10-09 16:34:24.264 2 DEBUG nova.compute.manager [req-190d2c36-6601-47ff-86b2-0c4aefc19cfa req-a60b26c2-3725-4e42-9dee-a884b518bcda ee15961ad2be4bada6c106ac4f69f422 c076c42271004e7aa86e84416e33f826 - - default default] [instance: e08b0fc8-1e1e-4d97-b613-761b8f6f9674] Received event network-vif-unplugged-8bc917d5-4807-4e8c-8941-29822aa14c94 for instance with task_state migrating. _process_instance_event /usr/lib/python3.12/site-packages/nova/compute/manager.py:11590
Oct 09 16:34:24 compute-0 nova_compute[117331]: 2025-10-09 16:34:24.368 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 09 16:34:24 compute-0 nova_compute[117331]: 2025-10-09 16:34:24.572 2 DEBUG nova.compute.manager [req-7ef31726-ebf2-419d-aa79-5ca72ad49bf8 req-a36aaa76-3394-4f7d-a2c7-9a3f89ec2590 ee15961ad2be4bada6c106ac4f69f422 c076c42271004e7aa86e84416e33f826 - - default default] [instance: e08b0fc8-1e1e-4d97-b613-761b8f6f9674] Received event network-vif-unplugged-8bc917d5-4807-4e8c-8941-29822aa14c94 external_instance_event /usr/lib/python3.12/site-packages/nova/compute/manager.py:11812
Oct 09 16:34:24 compute-0 nova_compute[117331]: 2025-10-09 16:34:24.573 2 DEBUG oslo_concurrency.lockutils [req-7ef31726-ebf2-419d-aa79-5ca72ad49bf8 req-a36aaa76-3394-4f7d-a2c7-9a3f89ec2590 ee15961ad2be4bada6c106ac4f69f422 c076c42271004e7aa86e84416e33f826 - - default default] Acquiring lock "e08b0fc8-1e1e-4d97-b613-761b8f6f9674-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:405
Oct 09 16:34:24 compute-0 nova_compute[117331]: 2025-10-09 16:34:24.573 2 DEBUG oslo_concurrency.lockutils [req-7ef31726-ebf2-419d-aa79-5ca72ad49bf8 req-a36aaa76-3394-4f7d-a2c7-9a3f89ec2590 ee15961ad2be4bada6c106ac4f69f422 c076c42271004e7aa86e84416e33f826 - - default default] Lock "e08b0fc8-1e1e-4d97-b613-761b8f6f9674-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:410
Oct 09 16:34:24 compute-0 nova_compute[117331]: 2025-10-09 16:34:24.573 2 DEBUG oslo_concurrency.lockutils [req-7ef31726-ebf2-419d-aa79-5ca72ad49bf8 req-a36aaa76-3394-4f7d-a2c7-9a3f89ec2590 ee15961ad2be4bada6c106ac4f69f422 c076c42271004e7aa86e84416e33f826 - - default default] Lock "e08b0fc8-1e1e-4d97-b613-761b8f6f9674-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:424
Oct 09 16:34:24 compute-0 nova_compute[117331]: 2025-10-09 16:34:24.573 2 DEBUG nova.compute.manager [req-7ef31726-ebf2-419d-aa79-5ca72ad49bf8 req-a36aaa76-3394-4f7d-a2c7-9a3f89ec2590 ee15961ad2be4bada6c106ac4f69f422 c076c42271004e7aa86e84416e33f826 - - default default] [instance: e08b0fc8-1e1e-4d97-b613-761b8f6f9674] No waiting events found dispatching network-vif-unplugged-8bc917d5-4807-4e8c-8941-29822aa14c94 pop_instance_event /usr/lib/python3.12/site-packages/nova/compute/manager.py:344
Oct 09 16:34:24 compute-0 nova_compute[117331]: 2025-10-09 16:34:24.573 2 DEBUG nova.compute.manager [req-7ef31726-ebf2-419d-aa79-5ca72ad49bf8 req-a36aaa76-3394-4f7d-a2c7-9a3f89ec2590 ee15961ad2be4bada6c106ac4f69f422 c076c42271004e7aa86e84416e33f826 - - default default] [instance: e08b0fc8-1e1e-4d97-b613-761b8f6f9674] Received event network-vif-unplugged-8bc917d5-4807-4e8c-8941-29822aa14c94 for instance with task_state migrating. _process_instance_event /usr/lib/python3.12/site-packages/nova/compute/manager.py:11590
Oct 09 16:34:24 compute-0 nova_compute[117331]: 2025-10-09 16:34:24.708 2 DEBUG nova.network.neutron [None req-76aeb176-65ec-4856-ad96-194e2d74752e 005258eefa5345409efe13f6a1de22ab c076c42271004e7aa86e84416e33f826 - - default default] Activated binding for port 8bc917d5-4807-4e8c-8941-29822aa14c94 and host compute-1.ctlplane.example.com migrate_instance_start /usr/lib/python3.12/site-packages/nova/network/neutron.py:3241
Oct 09 16:34:24 compute-0 nova_compute[117331]: 2025-10-09 16:34:24.708 2 DEBUG nova.compute.manager [None req-76aeb176-65ec-4856-ad96-194e2d74752e 005258eefa5345409efe13f6a1de22ab c076c42271004e7aa86e84416e33f826 - - default default] [instance: e08b0fc8-1e1e-4d97-b613-761b8f6f9674] Calling driver.post_live_migration_at_source with original source VIFs from migrate_data: [{"id": "8bc917d5-4807-4e8c-8941-29822aa14c94", "address": "fa:16:3e:a0:cf:4a", "network": {"id": "54b37568-476a-40a0-b545-fe5401f85653", "bridge": "br-int", "label": "tempest-TestExecuteWorkloadBalanceStrategy-563305466-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "036a4e356fb34effb6775ffe5bd9a19f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap8bc917d5-48", "ovs_interfaceid": "8bc917d5-4807-4e8c-8941-29822aa14c94", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] _post_live_migration /usr/lib/python3.12/site-packages/nova/compute/manager.py:10059
Oct 09 16:34:24 compute-0 nova_compute[117331]: 2025-10-09 16:34:24.709 2 DEBUG nova.virt.libvirt.vif [None req-76aeb176-65ec-4856-ad96-194e2d74752e 005258eefa5345409efe13f6a1de22ab c076c42271004e7aa86e84416e33f826 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,compute_id=1,config_drive='True',created_at=2025-10-09T16:33:18Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description=None,display_name='tempest-TestExecuteWorkloadBalanceStrategy-server-487819226',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(3),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testexecuteworkloadbalancestrategy-server-487819226',id=24,image_ref='b7d6e0af-25e4-4227-9dc6-43143898ceee',info_cache=InstanceInfoCache,instance_type_id=3,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-10-09T16:33:34Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=1151,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=1,progress=0,project_id='6d67ac3076434a4582e5db1ca7d043ff',ramdisk_id='',reservation_id='r-2km05070',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,manager,admin,member',image_base_image_ref='b7d6e0af-25e4-4227-9dc6-43143898ceee',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model
='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestExecuteWorkloadBalanceStrategy-2100042169',owner_user_name='tempest-TestExecuteWorkloadBalanceStrategy-2100042169-project-admin'},tags=<?>,task_state='migrating',terminated_at=None,trusted_certs=<?>,updated_at=2025-10-09T16:34:03Z,user_data=None,user_id='1c793380a6e945d69dacfd07f1f156f8',uuid=e08b0fc8-1e1e-4d97-b613-761b8f6f9674,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "8bc917d5-4807-4e8c-8941-29822aa14c94", "address": "fa:16:3e:a0:cf:4a", "network": {"id": "54b37568-476a-40a0-b545-fe5401f85653", "bridge": "br-int", "label": "tempest-TestExecuteWorkloadBalanceStrategy-563305466-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "036a4e356fb34effb6775ffe5bd9a19f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap8bc917d5-48", "ovs_interfaceid": "8bc917d5-4807-4e8c-8941-29822aa14c94", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.12/site-packages/nova/virt/libvirt/vif.py:839
Oct 09 16:34:24 compute-0 nova_compute[117331]: 2025-10-09 16:34:24.709 2 DEBUG nova.network.os_vif_util [None req-76aeb176-65ec-4856-ad96-194e2d74752e 005258eefa5345409efe13f6a1de22ab c076c42271004e7aa86e84416e33f826 - - default default] Converting VIF {"id": "8bc917d5-4807-4e8c-8941-29822aa14c94", "address": "fa:16:3e:a0:cf:4a", "network": {"id": "54b37568-476a-40a0-b545-fe5401f85653", "bridge": "br-int", "label": "tempest-TestExecuteWorkloadBalanceStrategy-563305466-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "036a4e356fb34effb6775ffe5bd9a19f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap8bc917d5-48", "ovs_interfaceid": "8bc917d5-4807-4e8c-8941-29822aa14c94", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.12/site-packages/nova/network/os_vif_util.py:511
Oct 09 16:34:24 compute-0 nova_compute[117331]: 2025-10-09 16:34:24.710 2 DEBUG nova.network.os_vif_util [None req-76aeb176-65ec-4856-ad96-194e2d74752e 005258eefa5345409efe13f6a1de22ab c076c42271004e7aa86e84416e33f826 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:a0:cf:4a,bridge_name='br-int',has_traffic_filtering=True,id=8bc917d5-4807-4e8c-8941-29822aa14c94,network=Network(54b37568-476a-40a0-b545-fe5401f85653),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap8bc917d5-48') nova_to_osvif_vif /usr/lib/python3.12/site-packages/nova/network/os_vif_util.py:548
Oct 09 16:34:24 compute-0 nova_compute[117331]: 2025-10-09 16:34:24.710 2 DEBUG os_vif [None req-76aeb176-65ec-4856-ad96-194e2d74752e 005258eefa5345409efe13f6a1de22ab c076c42271004e7aa86e84416e33f826 - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:a0:cf:4a,bridge_name='br-int',has_traffic_filtering=True,id=8bc917d5-4807-4e8c-8941-29822aa14c94,network=Network(54b37568-476a-40a0-b545-fe5401f85653),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap8bc917d5-48') unplug /usr/lib/python3.12/site-packages/os_vif/__init__.py:109
Oct 09 16:34:24 compute-0 nova_compute[117331]: 2025-10-09 16:34:24.713 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 22 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 09 16:34:24 compute-0 nova_compute[117331]: 2025-10-09 16:34:24.713 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap8bc917d5-48, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.12/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 09 16:34:24 compute-0 nova_compute[117331]: 2025-10-09 16:34:24.714 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 09 16:34:24 compute-0 nova_compute[117331]: 2025-10-09 16:34:24.716 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 09 16:34:24 compute-0 nova_compute[117331]: 2025-10-09 16:34:24.716 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 22 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 09 16:34:24 compute-0 nova_compute[117331]: 2025-10-09 16:34:24.717 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbDestroyCommand(_result=None, table=QoS, record=cdbd09ad-618a-48f7-82a4-935b69b66548) do_commit /usr/lib/python3.12/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 09 16:34:24 compute-0 nova_compute[117331]: 2025-10-09 16:34:24.717 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 09 16:34:24 compute-0 nova_compute[117331]: 2025-10-09 16:34:24.719 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:248
Oct 09 16:34:24 compute-0 nova_compute[117331]: 2025-10-09 16:34:24.719 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 09 16:34:24 compute-0 nova_compute[117331]: 2025-10-09 16:34:24.721 2 INFO os_vif [None req-76aeb176-65ec-4856-ad96-194e2d74752e 005258eefa5345409efe13f6a1de22ab c076c42271004e7aa86e84416e33f826 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:a0:cf:4a,bridge_name='br-int',has_traffic_filtering=True,id=8bc917d5-4807-4e8c-8941-29822aa14c94,network=Network(54b37568-476a-40a0-b545-fe5401f85653),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap8bc917d5-48')
Oct 09 16:34:24 compute-0 nova_compute[117331]: 2025-10-09 16:34:24.721 2 DEBUG oslo_concurrency.lockutils [None req-76aeb176-65ec-4856-ad96-194e2d74752e 005258eefa5345409efe13f6a1de22ab c076c42271004e7aa86e84416e33f826 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.free_pci_device_allocations_for_instance" inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:405
Oct 09 16:34:24 compute-0 nova_compute[117331]: 2025-10-09 16:34:24.721 2 DEBUG oslo_concurrency.lockutils [None req-76aeb176-65ec-4856-ad96-194e2d74752e 005258eefa5345409efe13f6a1de22ab c076c42271004e7aa86e84416e33f826 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.free_pci_device_allocations_for_instance" :: waited 0.000s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:410
Oct 09 16:34:24 compute-0 nova_compute[117331]: 2025-10-09 16:34:24.722 2 DEBUG oslo_concurrency.lockutils [None req-76aeb176-65ec-4856-ad96-194e2d74752e 005258eefa5345409efe13f6a1de22ab c076c42271004e7aa86e84416e33f826 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.free_pci_device_allocations_for_instance" :: held 0.000s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:424
Oct 09 16:34:24 compute-0 nova_compute[117331]: 2025-10-09 16:34:24.722 2 DEBUG nova.compute.manager [None req-76aeb176-65ec-4856-ad96-194e2d74752e 005258eefa5345409efe13f6a1de22ab c076c42271004e7aa86e84416e33f826 - - default default] [instance: e08b0fc8-1e1e-4d97-b613-761b8f6f9674] Calling driver.cleanup from _post_live_migration _post_live_migration /usr/lib/python3.12/site-packages/nova/compute/manager.py:10082
Oct 09 16:34:24 compute-0 nova_compute[117331]: 2025-10-09 16:34:24.722 2 INFO nova.virt.libvirt.driver [None req-76aeb176-65ec-4856-ad96-194e2d74752e 005258eefa5345409efe13f6a1de22ab c076c42271004e7aa86e84416e33f826 - - default default] [instance: e08b0fc8-1e1e-4d97-b613-761b8f6f9674] Deleting instance files /var/lib/nova/instances/e08b0fc8-1e1e-4d97-b613-761b8f6f9674_del
Oct 09 16:34:24 compute-0 nova_compute[117331]: 2025-10-09 16:34:24.723 2 INFO nova.virt.libvirt.driver [None req-76aeb176-65ec-4856-ad96-194e2d74752e 005258eefa5345409efe13f6a1de22ab c076c42271004e7aa86e84416e33f826 - - default default] [instance: e08b0fc8-1e1e-4d97-b613-761b8f6f9674] Deletion of /var/lib/nova/instances/e08b0fc8-1e1e-4d97-b613-761b8f6f9674_del complete
Oct 09 16:34:26 compute-0 nova_compute[117331]: 2025-10-09 16:34:26.387 2 DEBUG nova.compute.manager [req-b717ab81-0385-4b21-a6cd-f846a85b9791 req-4132622b-a958-40ae-8c35-d1a6b069689b ee15961ad2be4bada6c106ac4f69f422 c076c42271004e7aa86e84416e33f826 - - default default] [instance: e08b0fc8-1e1e-4d97-b613-761b8f6f9674] Received event network-vif-plugged-8bc917d5-4807-4e8c-8941-29822aa14c94 external_instance_event /usr/lib/python3.12/site-packages/nova/compute/manager.py:11812
Oct 09 16:34:26 compute-0 nova_compute[117331]: 2025-10-09 16:34:26.388 2 DEBUG oslo_concurrency.lockutils [req-b717ab81-0385-4b21-a6cd-f846a85b9791 req-4132622b-a958-40ae-8c35-d1a6b069689b ee15961ad2be4bada6c106ac4f69f422 c076c42271004e7aa86e84416e33f826 - - default default] Acquiring lock "e08b0fc8-1e1e-4d97-b613-761b8f6f9674-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:405
Oct 09 16:34:26 compute-0 nova_compute[117331]: 2025-10-09 16:34:26.389 2 DEBUG oslo_concurrency.lockutils [req-b717ab81-0385-4b21-a6cd-f846a85b9791 req-4132622b-a958-40ae-8c35-d1a6b069689b ee15961ad2be4bada6c106ac4f69f422 c076c42271004e7aa86e84416e33f826 - - default default] Lock "e08b0fc8-1e1e-4d97-b613-761b8f6f9674-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:410
Oct 09 16:34:26 compute-0 nova_compute[117331]: 2025-10-09 16:34:26.389 2 DEBUG oslo_concurrency.lockutils [req-b717ab81-0385-4b21-a6cd-f846a85b9791 req-4132622b-a958-40ae-8c35-d1a6b069689b ee15961ad2be4bada6c106ac4f69f422 c076c42271004e7aa86e84416e33f826 - - default default] Lock "e08b0fc8-1e1e-4d97-b613-761b8f6f9674-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:424
Oct 09 16:34:26 compute-0 nova_compute[117331]: 2025-10-09 16:34:26.390 2 DEBUG nova.compute.manager [req-b717ab81-0385-4b21-a6cd-f846a85b9791 req-4132622b-a958-40ae-8c35-d1a6b069689b ee15961ad2be4bada6c106ac4f69f422 c076c42271004e7aa86e84416e33f826 - - default default] [instance: e08b0fc8-1e1e-4d97-b613-761b8f6f9674] No waiting events found dispatching network-vif-plugged-8bc917d5-4807-4e8c-8941-29822aa14c94 pop_instance_event /usr/lib/python3.12/site-packages/nova/compute/manager.py:344
Oct 09 16:34:26 compute-0 nova_compute[117331]: 2025-10-09 16:34:26.390 2 WARNING nova.compute.manager [req-b717ab81-0385-4b21-a6cd-f846a85b9791 req-4132622b-a958-40ae-8c35-d1a6b069689b ee15961ad2be4bada6c106ac4f69f422 c076c42271004e7aa86e84416e33f826 - - default default] [instance: e08b0fc8-1e1e-4d97-b613-761b8f6f9674] Received unexpected event network-vif-plugged-8bc917d5-4807-4e8c-8941-29822aa14c94 for instance with vm_state active and task_state migrating.
Oct 09 16:34:26 compute-0 nova_compute[117331]: 2025-10-09 16:34:26.391 2 DEBUG nova.compute.manager [req-b717ab81-0385-4b21-a6cd-f846a85b9791 req-4132622b-a958-40ae-8c35-d1a6b069689b ee15961ad2be4bada6c106ac4f69f422 c076c42271004e7aa86e84416e33f826 - - default default] [instance: e08b0fc8-1e1e-4d97-b613-761b8f6f9674] Received event network-vif-unplugged-8bc917d5-4807-4e8c-8941-29822aa14c94 external_instance_event /usr/lib/python3.12/site-packages/nova/compute/manager.py:11812
Oct 09 16:34:26 compute-0 nova_compute[117331]: 2025-10-09 16:34:26.391 2 DEBUG oslo_concurrency.lockutils [req-b717ab81-0385-4b21-a6cd-f846a85b9791 req-4132622b-a958-40ae-8c35-d1a6b069689b ee15961ad2be4bada6c106ac4f69f422 c076c42271004e7aa86e84416e33f826 - - default default] Acquiring lock "e08b0fc8-1e1e-4d97-b613-761b8f6f9674-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:405
Oct 09 16:34:26 compute-0 nova_compute[117331]: 2025-10-09 16:34:26.392 2 DEBUG oslo_concurrency.lockutils [req-b717ab81-0385-4b21-a6cd-f846a85b9791 req-4132622b-a958-40ae-8c35-d1a6b069689b ee15961ad2be4bada6c106ac4f69f422 c076c42271004e7aa86e84416e33f826 - - default default] Lock "e08b0fc8-1e1e-4d97-b613-761b8f6f9674-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:410
Oct 09 16:34:26 compute-0 nova_compute[117331]: 2025-10-09 16:34:26.392 2 DEBUG oslo_concurrency.lockutils [req-b717ab81-0385-4b21-a6cd-f846a85b9791 req-4132622b-a958-40ae-8c35-d1a6b069689b ee15961ad2be4bada6c106ac4f69f422 c076c42271004e7aa86e84416e33f826 - - default default] Lock "e08b0fc8-1e1e-4d97-b613-761b8f6f9674-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:424
Oct 09 16:34:26 compute-0 nova_compute[117331]: 2025-10-09 16:34:26.392 2 DEBUG nova.compute.manager [req-b717ab81-0385-4b21-a6cd-f846a85b9791 req-4132622b-a958-40ae-8c35-d1a6b069689b ee15961ad2be4bada6c106ac4f69f422 c076c42271004e7aa86e84416e33f826 - - default default] [instance: e08b0fc8-1e1e-4d97-b613-761b8f6f9674] No waiting events found dispatching network-vif-unplugged-8bc917d5-4807-4e8c-8941-29822aa14c94 pop_instance_event /usr/lib/python3.12/site-packages/nova/compute/manager.py:344
Oct 09 16:34:26 compute-0 nova_compute[117331]: 2025-10-09 16:34:26.393 2 DEBUG nova.compute.manager [req-b717ab81-0385-4b21-a6cd-f846a85b9791 req-4132622b-a958-40ae-8c35-d1a6b069689b ee15961ad2be4bada6c106ac4f69f422 c076c42271004e7aa86e84416e33f826 - - default default] [instance: e08b0fc8-1e1e-4d97-b613-761b8f6f9674] Received event network-vif-unplugged-8bc917d5-4807-4e8c-8941-29822aa14c94 for instance with task_state migrating. _process_instance_event /usr/lib/python3.12/site-packages/nova/compute/manager.py:11590
Oct 09 16:34:26 compute-0 nova_compute[117331]: 2025-10-09 16:34:26.393 2 DEBUG nova.compute.manager [req-b717ab81-0385-4b21-a6cd-f846a85b9791 req-4132622b-a958-40ae-8c35-d1a6b069689b ee15961ad2be4bada6c106ac4f69f422 c076c42271004e7aa86e84416e33f826 - - default default] [instance: e08b0fc8-1e1e-4d97-b613-761b8f6f9674] Received event network-vif-plugged-8bc917d5-4807-4e8c-8941-29822aa14c94 external_instance_event /usr/lib/python3.12/site-packages/nova/compute/manager.py:11812
Oct 09 16:34:26 compute-0 nova_compute[117331]: 2025-10-09 16:34:26.394 2 DEBUG oslo_concurrency.lockutils [req-b717ab81-0385-4b21-a6cd-f846a85b9791 req-4132622b-a958-40ae-8c35-d1a6b069689b ee15961ad2be4bada6c106ac4f69f422 c076c42271004e7aa86e84416e33f826 - - default default] Acquiring lock "e08b0fc8-1e1e-4d97-b613-761b8f6f9674-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:405
Oct 09 16:34:26 compute-0 nova_compute[117331]: 2025-10-09 16:34:26.394 2 DEBUG oslo_concurrency.lockutils [req-b717ab81-0385-4b21-a6cd-f846a85b9791 req-4132622b-a958-40ae-8c35-d1a6b069689b ee15961ad2be4bada6c106ac4f69f422 c076c42271004e7aa86e84416e33f826 - - default default] Lock "e08b0fc8-1e1e-4d97-b613-761b8f6f9674-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:410
Oct 09 16:34:26 compute-0 nova_compute[117331]: 2025-10-09 16:34:26.394 2 DEBUG oslo_concurrency.lockutils [req-b717ab81-0385-4b21-a6cd-f846a85b9791 req-4132622b-a958-40ae-8c35-d1a6b069689b ee15961ad2be4bada6c106ac4f69f422 c076c42271004e7aa86e84416e33f826 - - default default] Lock "e08b0fc8-1e1e-4d97-b613-761b8f6f9674-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:424
Oct 09 16:34:26 compute-0 nova_compute[117331]: 2025-10-09 16:34:26.395 2 DEBUG nova.compute.manager [req-b717ab81-0385-4b21-a6cd-f846a85b9791 req-4132622b-a958-40ae-8c35-d1a6b069689b ee15961ad2be4bada6c106ac4f69f422 c076c42271004e7aa86e84416e33f826 - - default default] [instance: e08b0fc8-1e1e-4d97-b613-761b8f6f9674] No waiting events found dispatching network-vif-plugged-8bc917d5-4807-4e8c-8941-29822aa14c94 pop_instance_event /usr/lib/python3.12/site-packages/nova/compute/manager.py:344
Oct 09 16:34:26 compute-0 nova_compute[117331]: 2025-10-09 16:34:26.395 2 WARNING nova.compute.manager [req-b717ab81-0385-4b21-a6cd-f846a85b9791 req-4132622b-a958-40ae-8c35-d1a6b069689b ee15961ad2be4bada6c106ac4f69f422 c076c42271004e7aa86e84416e33f826 - - default default] [instance: e08b0fc8-1e1e-4d97-b613-761b8f6f9674] Received unexpected event network-vif-plugged-8bc917d5-4807-4e8c-8941-29822aa14c94 for instance with vm_state active and task_state migrating.
Oct 09 16:34:26 compute-0 nova_compute[117331]: 2025-10-09 16:34:26.396 2 DEBUG nova.compute.manager [req-b717ab81-0385-4b21-a6cd-f846a85b9791 req-4132622b-a958-40ae-8c35-d1a6b069689b ee15961ad2be4bada6c106ac4f69f422 c076c42271004e7aa86e84416e33f826 - - default default] [instance: e08b0fc8-1e1e-4d97-b613-761b8f6f9674] Received event network-vif-plugged-8bc917d5-4807-4e8c-8941-29822aa14c94 external_instance_event /usr/lib/python3.12/site-packages/nova/compute/manager.py:11812
Oct 09 16:34:26 compute-0 nova_compute[117331]: 2025-10-09 16:34:26.396 2 DEBUG oslo_concurrency.lockutils [req-b717ab81-0385-4b21-a6cd-f846a85b9791 req-4132622b-a958-40ae-8c35-d1a6b069689b ee15961ad2be4bada6c106ac4f69f422 c076c42271004e7aa86e84416e33f826 - - default default] Acquiring lock "e08b0fc8-1e1e-4d97-b613-761b8f6f9674-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:405
Oct 09 16:34:26 compute-0 nova_compute[117331]: 2025-10-09 16:34:26.396 2 DEBUG oslo_concurrency.lockutils [req-b717ab81-0385-4b21-a6cd-f846a85b9791 req-4132622b-a958-40ae-8c35-d1a6b069689b ee15961ad2be4bada6c106ac4f69f422 c076c42271004e7aa86e84416e33f826 - - default default] Lock "e08b0fc8-1e1e-4d97-b613-761b8f6f9674-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:410
Oct 09 16:34:26 compute-0 nova_compute[117331]: 2025-10-09 16:34:26.397 2 DEBUG oslo_concurrency.lockutils [req-b717ab81-0385-4b21-a6cd-f846a85b9791 req-4132622b-a958-40ae-8c35-d1a6b069689b ee15961ad2be4bada6c106ac4f69f422 c076c42271004e7aa86e84416e33f826 - - default default] Lock "e08b0fc8-1e1e-4d97-b613-761b8f6f9674-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:424
Oct 09 16:34:26 compute-0 nova_compute[117331]: 2025-10-09 16:34:26.397 2 DEBUG nova.compute.manager [req-b717ab81-0385-4b21-a6cd-f846a85b9791 req-4132622b-a958-40ae-8c35-d1a6b069689b ee15961ad2be4bada6c106ac4f69f422 c076c42271004e7aa86e84416e33f826 - - default default] [instance: e08b0fc8-1e1e-4d97-b613-761b8f6f9674] No waiting events found dispatching network-vif-plugged-8bc917d5-4807-4e8c-8941-29822aa14c94 pop_instance_event /usr/lib/python3.12/site-packages/nova/compute/manager.py:344
Oct 09 16:34:26 compute-0 nova_compute[117331]: 2025-10-09 16:34:26.398 2 WARNING nova.compute.manager [req-b717ab81-0385-4b21-a6cd-f846a85b9791 req-4132622b-a958-40ae-8c35-d1a6b069689b ee15961ad2be4bada6c106ac4f69f422 c076c42271004e7aa86e84416e33f826 - - default default] [instance: e08b0fc8-1e1e-4d97-b613-761b8f6f9674] Received unexpected event network-vif-plugged-8bc917d5-4807-4e8c-8941-29822aa14c94 for instance with vm_state active and task_state migrating.
Oct 09 16:34:26 compute-0 ovn_metadata_agent[28608]: 2025-10-09 16:34:26.447 28613 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=9954897f-aa83-45dd-8e84-289816676c2a, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '24'}),), if_exists=True) do_commit /usr/lib/python3.12/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 09 16:34:26 compute-0 nova_compute[117331]: 2025-10-09 16:34:26.821 2 DEBUG oslo_service.periodic_task [None req-92a13bed-c081-4c7b-87f3-bafebefb79f2 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.12/site-packages/oslo_service/periodic_task.py:210
Oct 09 16:34:26 compute-0 podman[151323]: 2025-10-09 16:34:26.834219317 +0000 UTC m=+0.067023540 container health_status 86aa93e887899e54d383b0fc17fb3c5637680d1d8e146a130a557c837f0260fe (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm)
Oct 09 16:34:27 compute-0 nova_compute[117331]: 2025-10-09 16:34:27.226 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 09 16:34:27 compute-0 sshd-session[151316]: Invalid user kube from 36.224.53.32 port 57986
Oct 09 16:34:28 compute-0 sshd-session[151316]: pam_unix(sshd:auth): check pass; user unknown
Oct 09 16:34:28 compute-0 sshd-session[151316]: pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=36.224.53.32
Oct 09 16:34:29 compute-0 nova_compute[117331]: 2025-10-09 16:34:29.306 2 DEBUG oslo_service.periodic_task [None req-92a13bed-c081-4c7b-87f3-bafebefb79f2 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.12/site-packages/oslo_service/periodic_task.py:210
Oct 09 16:34:29 compute-0 sshd-session[151316]: Failed password for invalid user kube from 36.224.53.32 port 57986 ssh2
Oct 09 16:34:29 compute-0 nova_compute[117331]: 2025-10-09 16:34:29.719 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 09 16:34:29 compute-0 podman[127775]: time="2025-10-09T16:34:29Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Oct 09 16:34:29 compute-0 podman[127775]: @ - - [09/Oct/2025:16:34:29 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 20749 "" "Go-http-client/1.1"
Oct 09 16:34:29 compute-0 podman[127775]: @ - - [09/Oct/2025:16:34:29 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 3492 "" "Go-http-client/1.1"
Oct 09 16:34:30 compute-0 nova_compute[117331]: 2025-10-09 16:34:30.303 2 DEBUG oslo_service.periodic_task [None req-92a13bed-c081-4c7b-87f3-bafebefb79f2 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.12/site-packages/oslo_service/periodic_task.py:210
Oct 09 16:34:30 compute-0 nova_compute[117331]: 2025-10-09 16:34:30.306 2 DEBUG oslo_service.periodic_task [None req-92a13bed-c081-4c7b-87f3-bafebefb79f2 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.12/site-packages/oslo_service/periodic_task.py:210
Oct 09 16:34:30 compute-0 nova_compute[117331]: 2025-10-09 16:34:30.819 2 DEBUG oslo_concurrency.lockutils [None req-92a13bed-c081-4c7b-87f3-bafebefb79f2 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:405
Oct 09 16:34:30 compute-0 nova_compute[117331]: 2025-10-09 16:34:30.819 2 DEBUG oslo_concurrency.lockutils [None req-92a13bed-c081-4c7b-87f3-bafebefb79f2 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:410
Oct 09 16:34:30 compute-0 nova_compute[117331]: 2025-10-09 16:34:30.820 2 DEBUG oslo_concurrency.lockutils [None req-92a13bed-c081-4c7b-87f3-bafebefb79f2 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:424
Oct 09 16:34:30 compute-0 nova_compute[117331]: 2025-10-09 16:34:30.820 2 DEBUG nova.compute.resource_tracker [None req-92a13bed-c081-4c7b-87f3-bafebefb79f2 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.12/site-packages/nova/compute/resource_tracker.py:937
Oct 09 16:34:30 compute-0 sshd-session[151316]: Connection closed by invalid user kube 36.224.53.32 port 57986 [preauth]
Oct 09 16:34:31 compute-0 openstack_network_exporter[129925]: ERROR   16:34:31 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Oct 09 16:34:31 compute-0 openstack_network_exporter[129925]: ERROR   16:34:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Oct 09 16:34:31 compute-0 openstack_network_exporter[129925]: ERROR   16:34:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Oct 09 16:34:31 compute-0 openstack_network_exporter[129925]: ERROR   16:34:31 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Oct 09 16:34:31 compute-0 openstack_network_exporter[129925]: 
Oct 09 16:34:31 compute-0 openstack_network_exporter[129925]: ERROR   16:34:31 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Oct 09 16:34:31 compute-0 openstack_network_exporter[129925]: 
Oct 09 16:34:31 compute-0 nova_compute[117331]: 2025-10-09 16:34:31.867 2 DEBUG oslo_concurrency.processutils [None req-92a13bed-c081-4c7b-87f3-bafebefb79f2 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/d2bb4f53-7374-48bc-8bd3-5de9d41372b6/disk --force-share --output=json execute /usr/lib/python3.12/site-packages/oslo_concurrency/processutils.py:349
Oct 09 16:34:31 compute-0 nova_compute[117331]: 2025-10-09 16:34:31.942 2 DEBUG oslo_concurrency.processutils [None req-92a13bed-c081-4c7b-87f3-bafebefb79f2 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/d2bb4f53-7374-48bc-8bd3-5de9d41372b6/disk --force-share --output=json" returned: 0 in 0.074s execute /usr/lib/python3.12/site-packages/oslo_concurrency/processutils.py:372
Oct 09 16:34:31 compute-0 nova_compute[117331]: 2025-10-09 16:34:31.943 2 DEBUG oslo_concurrency.processutils [None req-92a13bed-c081-4c7b-87f3-bafebefb79f2 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/d2bb4f53-7374-48bc-8bd3-5de9d41372b6/disk --force-share --output=json execute /usr/lib/python3.12/site-packages/oslo_concurrency/processutils.py:349
Oct 09 16:34:31 compute-0 nova_compute[117331]: 2025-10-09 16:34:31.996 2 DEBUG oslo_concurrency.processutils [None req-92a13bed-c081-4c7b-87f3-bafebefb79f2 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/d2bb4f53-7374-48bc-8bd3-5de9d41372b6/disk --force-share --output=json" returned: 0 in 0.054s execute /usr/lib/python3.12/site-packages/oslo_concurrency/processutils.py:372
Oct 09 16:34:32 compute-0 nova_compute[117331]: 2025-10-09 16:34:32.126 2 WARNING nova.virt.libvirt.driver [None req-92a13bed-c081-4c7b-87f3-bafebefb79f2 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Oct 09 16:34:32 compute-0 nova_compute[117331]: 2025-10-09 16:34:32.127 2 DEBUG oslo_concurrency.processutils [None req-92a13bed-c081-4c7b-87f3-bafebefb79f2 - - - - - -] Running cmd (subprocess): env LANG=C uptime execute /usr/lib/python3.12/site-packages/oslo_concurrency/processutils.py:349
Oct 09 16:34:32 compute-0 nova_compute[117331]: 2025-10-09 16:34:32.142 2 DEBUG oslo_concurrency.processutils [None req-92a13bed-c081-4c7b-87f3-bafebefb79f2 - - - - - -] CMD "env LANG=C uptime" returned: 0 in 0.015s execute /usr/lib/python3.12/site-packages/oslo_concurrency/processutils.py:372
Oct 09 16:34:32 compute-0 nova_compute[117331]: 2025-10-09 16:34:32.143 2 DEBUG nova.compute.resource_tracker [None req-92a13bed-c081-4c7b-87f3-bafebefb79f2 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=5948MB free_disk=73.22816467285156GB free_vcpus=7 pci_devices=[{"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.12/site-packages/nova/compute/resource_tracker.py:1136
Oct 09 16:34:32 compute-0 nova_compute[117331]: 2025-10-09 16:34:32.143 2 DEBUG oslo_concurrency.lockutils [None req-92a13bed-c081-4c7b-87f3-bafebefb79f2 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:405
Oct 09 16:34:32 compute-0 nova_compute[117331]: 2025-10-09 16:34:32.143 2 DEBUG oslo_concurrency.lockutils [None req-92a13bed-c081-4c7b-87f3-bafebefb79f2 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:410
Oct 09 16:34:32 compute-0 nova_compute[117331]: 2025-10-09 16:34:32.227 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 09 16:34:32 compute-0 podman[151357]: 2025-10-09 16:34:32.820452069 +0000 UTC m=+0.052333292 container health_status 0e966c7a283689daf23c5db5017f23f685ee836319240188a9f70ddd246563c5 (image=38.102.83.66:5001/podified-master-centos10/openstack-neutron-metadata-agent-ovn:watcher_latest, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': '38.102.83.66:5001/podified-master-centos10/openstack-neutron-metadata-agent-ovn:watcher_latest', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.4, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=watcher_latest, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251007, config_id=ovn_metadata_agent, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.license=GPLv2)
Oct 09 16:34:32 compute-0 podman[151358]: 2025-10-09 16:34:32.853494639 +0000 UTC m=+0.083783292 container health_status 8227728d23f374cb9528be6ddf01dff38d8c792912a17b0d137932a18390b820 (image=38.102.83.66:5001/podified-master-centos10/openstack-iscsid:watcher_latest, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, container_name=iscsid, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': '38.102.83.66:5001/podified-master-centos10/openstack-iscsid:watcher_latest', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, managed_by=edpm_ansible, tcib_build_tag=watcher_latest, config_id=iscsid, io.buildah.version=1.41.4, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251007, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 10 Base Image, tcib_managed=true)
Oct 09 16:34:33 compute-0 nova_compute[117331]: 2025-10-09 16:34:33.160 2 DEBUG nova.compute.resource_tracker [None req-92a13bed-c081-4c7b-87f3-bafebefb79f2 - - - - - -] Migration for instance e08b0fc8-1e1e-4d97-b613-761b8f6f9674 refers to another host's instance! _pair_instances_to_migrations /usr/lib/python3.12/site-packages/nova/compute/resource_tracker.py:979
Oct 09 16:34:33 compute-0 nova_compute[117331]: 2025-10-09 16:34:33.666 2 DEBUG nova.compute.resource_tracker [None req-92a13bed-c081-4c7b-87f3-bafebefb79f2 - - - - - -] [instance: e08b0fc8-1e1e-4d97-b613-761b8f6f9674] Skipping migration as instance is neither resizing nor live-migrating. _update_usage_from_migrations /usr/lib/python3.12/site-packages/nova/compute/resource_tracker.py:1596
Oct 09 16:34:33 compute-0 nova_compute[117331]: 2025-10-09 16:34:33.723 2 DEBUG nova.compute.resource_tracker [None req-92a13bed-c081-4c7b-87f3-bafebefb79f2 - - - - - -] Instance d2bb4f53-7374-48bc-8bd3-5de9d41372b6 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 1151, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.12/site-packages/nova/compute/resource_tracker.py:1740
Oct 09 16:34:33 compute-0 nova_compute[117331]: 2025-10-09 16:34:33.723 2 DEBUG nova.compute.resource_tracker [None req-92a13bed-c081-4c7b-87f3-bafebefb79f2 - - - - - -] Migration eaf79dcb-78b9-44ec-8f72-8a19ead6b011 is active on this compute host and has allocations in placement: {'resources': {'VCPU': 1, 'MEMORY_MB': 1151, 'DISK_GB': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.12/site-packages/nova/compute/resource_tracker.py:1745
Oct 09 16:34:33 compute-0 nova_compute[117331]: 2025-10-09 16:34:33.724 2 DEBUG nova.compute.resource_tracker [None req-92a13bed-c081-4c7b-87f3-bafebefb79f2 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 1 _report_final_resource_view /usr/lib/python3.12/site-packages/nova/compute/resource_tracker.py:1159
Oct 09 16:34:33 compute-0 nova_compute[117331]: 2025-10-09 16:34:33.724 2 DEBUG nova.compute.resource_tracker [None req-92a13bed-c081-4c7b-87f3-bafebefb79f2 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=1663MB phys_disk=79GB used_disk=1GB total_vcpus=8 used_vcpus=1 pci_stats=[] stats={'failed_builds': '0', 'uptime': ' 16:34:32 up 43 min,  0 user,  load average: 0.53, 0.61, 0.50\n', 'num_instances': '1', 'num_vm_active': '1', 'num_task_None': '1', 'num_os_type_None': '1', 'num_proj_6d67ac3076434a4582e5db1ca7d043ff': '1', 'io_workload': '0'} _report_final_resource_view /usr/lib/python3.12/site-packages/nova/compute/resource_tracker.py:1168
Oct 09 16:34:33 compute-0 nova_compute[117331]: 2025-10-09 16:34:33.751 2 DEBUG oslo_concurrency.lockutils [None req-76aeb176-65ec-4856-ad96-194e2d74752e 005258eefa5345409efe13f6a1de22ab c076c42271004e7aa86e84416e33f826 - - default default] Acquiring lock "e08b0fc8-1e1e-4d97-b613-761b8f6f9674-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:405
Oct 09 16:34:33 compute-0 nova_compute[117331]: 2025-10-09 16:34:33.752 2 DEBUG oslo_concurrency.lockutils [None req-76aeb176-65ec-4856-ad96-194e2d74752e 005258eefa5345409efe13f6a1de22ab c076c42271004e7aa86e84416e33f826 - - default default] Lock "e08b0fc8-1e1e-4d97-b613-761b8f6f9674-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.001s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:410
Oct 09 16:34:33 compute-0 nova_compute[117331]: 2025-10-09 16:34:33.753 2 DEBUG oslo_concurrency.lockutils [None req-76aeb176-65ec-4856-ad96-194e2d74752e 005258eefa5345409efe13f6a1de22ab c076c42271004e7aa86e84416e33f826 - - default default] Lock "e08b0fc8-1e1e-4d97-b613-761b8f6f9674-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:424
Oct 09 16:34:33 compute-0 nova_compute[117331]: 2025-10-09 16:34:33.885 2 DEBUG nova.compute.provider_tree [None req-92a13bed-c081-4c7b-87f3-bafebefb79f2 - - - - - -] Inventory has not changed in ProviderTree for provider: 593051b8-2000-437f-a915-2616fc8b1671 update_inventory /usr/lib/python3.12/site-packages/nova/compute/provider_tree.py:180
Oct 09 16:34:34 compute-0 nova_compute[117331]: 2025-10-09 16:34:34.264 2 DEBUG oslo_concurrency.lockutils [None req-76aeb176-65ec-4856-ad96-194e2d74752e 005258eefa5345409efe13f6a1de22ab c076c42271004e7aa86e84416e33f826 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:405
Oct 09 16:34:34 compute-0 sshd-session[151348]: Invalid user logstash from 36.224.53.32 port 36798
Oct 09 16:34:34 compute-0 nova_compute[117331]: 2025-10-09 16:34:34.393 2 DEBUG nova.scheduler.client.report [None req-92a13bed-c081-4c7b-87f3-bafebefb79f2 - - - - - -] Inventory has not changed for provider 593051b8-2000-437f-a915-2616fc8b1671 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 79, 'reserved': 1, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.12/site-packages/nova/scheduler/client/report.py:958
Oct 09 16:34:34 compute-0 nova_compute[117331]: 2025-10-09 16:34:34.720 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 09 16:34:34 compute-0 nova_compute[117331]: 2025-10-09 16:34:34.903 2 DEBUG nova.compute.resource_tracker [None req-92a13bed-c081-4c7b-87f3-bafebefb79f2 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.12/site-packages/nova/compute/resource_tracker.py:1097
Oct 09 16:34:34 compute-0 nova_compute[117331]: 2025-10-09 16:34:34.904 2 DEBUG oslo_concurrency.lockutils [None req-92a13bed-c081-4c7b-87f3-bafebefb79f2 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 2.761s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:424
Oct 09 16:34:34 compute-0 nova_compute[117331]: 2025-10-09 16:34:34.904 2 DEBUG oslo_concurrency.lockutils [None req-76aeb176-65ec-4856-ad96-194e2d74752e 005258eefa5345409efe13f6a1de22ab c076c42271004e7aa86e84416e33f826 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.641s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:410
Oct 09 16:34:34 compute-0 nova_compute[117331]: 2025-10-09 16:34:34.905 2 DEBUG oslo_concurrency.lockutils [None req-76aeb176-65ec-4856-ad96-194e2d74752e 005258eefa5345409efe13f6a1de22ab c076c42271004e7aa86e84416e33f826 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.001s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:424
Oct 09 16:34:34 compute-0 nova_compute[117331]: 2025-10-09 16:34:34.905 2 DEBUG nova.compute.resource_tracker [None req-76aeb176-65ec-4856-ad96-194e2d74752e 005258eefa5345409efe13f6a1de22ab c076c42271004e7aa86e84416e33f826 - - default default] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.12/site-packages/nova/compute/resource_tracker.py:937
Oct 09 16:34:34 compute-0 sshd-session[151348]: pam_unix(sshd:auth): check pass; user unknown
Oct 09 16:34:34 compute-0 sshd-session[151348]: pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=36.224.53.32
Oct 09 16:34:35 compute-0 ovn_metadata_agent[28608]: 2025-10-09 16:34:35.330 28613 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:405
Oct 09 16:34:35 compute-0 ovn_metadata_agent[28608]: 2025-10-09 16:34:35.330 28613 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.000s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:410
Oct 09 16:34:35 compute-0 ovn_metadata_agent[28608]: 2025-10-09 16:34:35.331 28613 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:424
Oct 09 16:34:35 compute-0 nova_compute[117331]: 2025-10-09 16:34:35.906 2 DEBUG oslo_service.periodic_task [None req-92a13bed-c081-4c7b-87f3-bafebefb79f2 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.12/site-packages/oslo_service/periodic_task.py:210
Oct 09 16:34:35 compute-0 nova_compute[117331]: 2025-10-09 16:34:35.907 2 DEBUG oslo_service.periodic_task [None req-92a13bed-c081-4c7b-87f3-bafebefb79f2 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.12/site-packages/oslo_service/periodic_task.py:210
Oct 09 16:34:35 compute-0 nova_compute[117331]: 2025-10-09 16:34:35.907 2 DEBUG nova.compute.manager [None req-92a13bed-c081-4c7b-87f3-bafebefb79f2 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.12/site-packages/nova/compute/manager.py:11228
Oct 09 16:34:35 compute-0 nova_compute[117331]: 2025-10-09 16:34:35.938 2 DEBUG oslo_concurrency.processutils [None req-76aeb176-65ec-4856-ad96-194e2d74752e 005258eefa5345409efe13f6a1de22ab c076c42271004e7aa86e84416e33f826 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/d2bb4f53-7374-48bc-8bd3-5de9d41372b6/disk --force-share --output=json execute /usr/lib/python3.12/site-packages/oslo_concurrency/processutils.py:349
Oct 09 16:34:35 compute-0 nova_compute[117331]: 2025-10-09 16:34:35.995 2 DEBUG oslo_concurrency.processutils [None req-76aeb176-65ec-4856-ad96-194e2d74752e 005258eefa5345409efe13f6a1de22ab c076c42271004e7aa86e84416e33f826 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/d2bb4f53-7374-48bc-8bd3-5de9d41372b6/disk --force-share --output=json" returned: 0 in 0.057s execute /usr/lib/python3.12/site-packages/oslo_concurrency/processutils.py:372
Oct 09 16:34:35 compute-0 nova_compute[117331]: 2025-10-09 16:34:35.996 2 DEBUG oslo_concurrency.processutils [None req-76aeb176-65ec-4856-ad96-194e2d74752e 005258eefa5345409efe13f6a1de22ab c076c42271004e7aa86e84416e33f826 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/d2bb4f53-7374-48bc-8bd3-5de9d41372b6/disk --force-share --output=json execute /usr/lib/python3.12/site-packages/oslo_concurrency/processutils.py:349
Oct 09 16:34:36 compute-0 nova_compute[117331]: 2025-10-09 16:34:36.048 2 DEBUG oslo_concurrency.processutils [None req-76aeb176-65ec-4856-ad96-194e2d74752e 005258eefa5345409efe13f6a1de22ab c076c42271004e7aa86e84416e33f826 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/d2bb4f53-7374-48bc-8bd3-5de9d41372b6/disk --force-share --output=json" returned: 0 in 0.052s execute /usr/lib/python3.12/site-packages/oslo_concurrency/processutils.py:372
Oct 09 16:34:36 compute-0 nova_compute[117331]: 2025-10-09 16:34:36.191 2 WARNING nova.virt.libvirt.driver [None req-76aeb176-65ec-4856-ad96-194e2d74752e 005258eefa5345409efe13f6a1de22ab c076c42271004e7aa86e84416e33f826 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Oct 09 16:34:36 compute-0 nova_compute[117331]: 2025-10-09 16:34:36.193 2 DEBUG oslo_concurrency.processutils [None req-76aeb176-65ec-4856-ad96-194e2d74752e 005258eefa5345409efe13f6a1de22ab c076c42271004e7aa86e84416e33f826 - - default default] Running cmd (subprocess): env LANG=C uptime execute /usr/lib/python3.12/site-packages/oslo_concurrency/processutils.py:349
Oct 09 16:34:36 compute-0 nova_compute[117331]: 2025-10-09 16:34:36.210 2 DEBUG oslo_concurrency.processutils [None req-76aeb176-65ec-4856-ad96-194e2d74752e 005258eefa5345409efe13f6a1de22ab c076c42271004e7aa86e84416e33f826 - - default default] CMD "env LANG=C uptime" returned: 0 in 0.017s execute /usr/lib/python3.12/site-packages/oslo_concurrency/processutils.py:372
Oct 09 16:34:36 compute-0 nova_compute[117331]: 2025-10-09 16:34:36.211 2 DEBUG nova.compute.resource_tracker [None req-76aeb176-65ec-4856-ad96-194e2d74752e 005258eefa5345409efe13f6a1de22ab c076c42271004e7aa86e84416e33f826 - - default default] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=5946MB free_disk=73.22818374633789GB free_vcpus=7 pci_devices=[{"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", 
"product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.12/site-packages/nova/compute/resource_tracker.py:1136
Oct 09 16:34:36 compute-0 nova_compute[117331]: 2025-10-09 16:34:36.212 2 DEBUG oslo_concurrency.lockutils [None req-76aeb176-65ec-4856-ad96-194e2d74752e 005258eefa5345409efe13f6a1de22ab c076c42271004e7aa86e84416e33f826 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:405
Oct 09 16:34:36 compute-0 nova_compute[117331]: 2025-10-09 16:34:36.212 2 DEBUG oslo_concurrency.lockutils [None req-76aeb176-65ec-4856-ad96-194e2d74752e 005258eefa5345409efe13f6a1de22ab c076c42271004e7aa86e84416e33f826 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:410
Oct 09 16:34:36 compute-0 nova_compute[117331]: 2025-10-09 16:34:36.306 2 DEBUG oslo_service.periodic_task [None req-92a13bed-c081-4c7b-87f3-bafebefb79f2 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.12/site-packages/oslo_service/periodic_task.py:210
Oct 09 16:34:37 compute-0 sshd-session[151348]: Failed password for invalid user logstash from 36.224.53.32 port 36798 ssh2
Oct 09 16:34:37 compute-0 nova_compute[117331]: 2025-10-09 16:34:37.229 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 09 16:34:37 compute-0 nova_compute[117331]: 2025-10-09 16:34:37.306 2 DEBUG oslo_service.periodic_task [None req-92a13bed-c081-4c7b-87f3-bafebefb79f2 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.12/site-packages/oslo_service/periodic_task.py:210
Oct 09 16:34:37 compute-0 nova_compute[117331]: 2025-10-09 16:34:37.392 2 DEBUG oslo_concurrency.lockutils [None req-acc96a33-a29a-4c7c-9eec-e3964fafaa5c 1c793380a6e945d69dacfd07f1f156f8 6d67ac3076434a4582e5db1ca7d043ff - - default default] Acquiring lock "d2bb4f53-7374-48bc-8bd3-5de9d41372b6" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:405
Oct 09 16:34:37 compute-0 nova_compute[117331]: 2025-10-09 16:34:37.392 2 DEBUG oslo_concurrency.lockutils [None req-acc96a33-a29a-4c7c-9eec-e3964fafaa5c 1c793380a6e945d69dacfd07f1f156f8 6d67ac3076434a4582e5db1ca7d043ff - - default default] Lock "d2bb4f53-7374-48bc-8bd3-5de9d41372b6" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:410
Oct 09 16:34:37 compute-0 nova_compute[117331]: 2025-10-09 16:34:37.393 2 DEBUG oslo_concurrency.lockutils [None req-acc96a33-a29a-4c7c-9eec-e3964fafaa5c 1c793380a6e945d69dacfd07f1f156f8 6d67ac3076434a4582e5db1ca7d043ff - - default default] Acquiring lock "d2bb4f53-7374-48bc-8bd3-5de9d41372b6-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:405
Oct 09 16:34:37 compute-0 nova_compute[117331]: 2025-10-09 16:34:37.393 2 DEBUG oslo_concurrency.lockutils [None req-acc96a33-a29a-4c7c-9eec-e3964fafaa5c 1c793380a6e945d69dacfd07f1f156f8 6d67ac3076434a4582e5db1ca7d043ff - - default default] Lock "d2bb4f53-7374-48bc-8bd3-5de9d41372b6-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:410
Oct 09 16:34:37 compute-0 nova_compute[117331]: 2025-10-09 16:34:37.393 2 DEBUG oslo_concurrency.lockutils [None req-acc96a33-a29a-4c7c-9eec-e3964fafaa5c 1c793380a6e945d69dacfd07f1f156f8 6d67ac3076434a4582e5db1ca7d043ff - - default default] Lock "d2bb4f53-7374-48bc-8bd3-5de9d41372b6-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:424
Oct 09 16:34:37 compute-0 nova_compute[117331]: 2025-10-09 16:34:37.411 2 INFO nova.compute.manager [None req-acc96a33-a29a-4c7c-9eec-e3964fafaa5c 1c793380a6e945d69dacfd07f1f156f8 6d67ac3076434a4582e5db1ca7d043ff - - default default] [instance: d2bb4f53-7374-48bc-8bd3-5de9d41372b6] Terminating instance
Oct 09 16:34:37 compute-0 nova_compute[117331]: 2025-10-09 16:34:37.786 2 DEBUG nova.compute.resource_tracker [None req-76aeb176-65ec-4856-ad96-194e2d74752e 005258eefa5345409efe13f6a1de22ab c076c42271004e7aa86e84416e33f826 - - default default] Migration for instance e08b0fc8-1e1e-4d97-b613-761b8f6f9674 refers to another host's instance! _pair_instances_to_migrations /usr/lib/python3.12/site-packages/nova/compute/resource_tracker.py:979
Oct 09 16:34:37 compute-0 nova_compute[117331]: 2025-10-09 16:34:37.937 2 DEBUG nova.compute.manager [None req-acc96a33-a29a-4c7c-9eec-e3964fafaa5c 1c793380a6e945d69dacfd07f1f156f8 6d67ac3076434a4582e5db1ca7d043ff - - default default] [instance: d2bb4f53-7374-48bc-8bd3-5de9d41372b6] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.12/site-packages/nova/compute/manager.py:3197
Oct 09 16:34:37 compute-0 kernel: tap4b893ed9-8b (unregistering): left promiscuous mode
Oct 09 16:34:37 compute-0 NetworkManager[1028]: <info>  [1760027677.9913] device (tap4b893ed9-8b): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Oct 09 16:34:38 compute-0 nova_compute[117331]: 2025-10-09 16:34:38.002 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 09 16:34:38 compute-0 ovn_controller[19752]: 2025-10-09T16:34:38Z|00232|binding|INFO|Releasing lport 4b893ed9-8bb2-41b7-9842-b530d7cc9ccd from this chassis (sb_readonly=0)
Oct 09 16:34:38 compute-0 ovn_controller[19752]: 2025-10-09T16:34:38Z|00233|binding|INFO|Setting lport 4b893ed9-8bb2-41b7-9842-b530d7cc9ccd down in Southbound
Oct 09 16:34:38 compute-0 ovn_controller[19752]: 2025-10-09T16:34:38Z|00234|binding|INFO|Removing iface tap4b893ed9-8b ovn-installed in OVS
Oct 09 16:34:38 compute-0 ovn_metadata_agent[28608]: 2025-10-09 16:34:38.010 28613 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:bc:f1:50 10.100.0.7'], port_security=['fa:16:3e:bc:f1:50 10.100.0.7'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.7/28', 'neutron:device_id': 'd2bb4f53-7374-48bc-8bd3-5de9d41372b6', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-54b37568-476a-40a0-b545-fe5401f85653', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '6d67ac3076434a4582e5db1ca7d043ff', 'neutron:revision_number': '5', 'neutron:security_group_ids': '94abd997-3903-47b3-abe3-e283a4232c96', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=dc641581-8c4b-4cef-982e-bb0cf11a52ba, chassis=[], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f37b2576840>], logical_port=4b893ed9-8bb2-41b7-9842-b530d7cc9ccd) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f37b2576840>]) matches /usr/lib/python3.12/site-packages/ovsdbapp/backend/ovs_idl/event.py:55
Oct 09 16:34:38 compute-0 ovn_metadata_agent[28608]: 2025-10-09 16:34:38.011 28613 INFO neutron.agent.ovn.metadata.agent [-] Port 4b893ed9-8bb2-41b7-9842-b530d7cc9ccd in datapath 54b37568-476a-40a0-b545-fe5401f85653 unbound from our chassis
Oct 09 16:34:38 compute-0 ovn_metadata_agent[28608]: 2025-10-09 16:34:38.013 28613 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 54b37568-476a-40a0-b545-fe5401f85653, tearing the namespace down if needed _get_provision_params /usr/lib/python3.12/site-packages/neutron/agent/ovn/metadata/agent.py:756
Oct 09 16:34:38 compute-0 ovn_metadata_agent[28608]: 2025-10-09 16:34:38.013 139687 DEBUG oslo.privsep.daemon [-] privsep: reply[f24202e6-e29d-438b-a43e-dc9e5e4e2883]: (4, True) _call_back /usr/lib/python3.12/site-packages/oslo_privsep/daemon.py:515
Oct 09 16:34:38 compute-0 ovn_metadata_agent[28608]: 2025-10-09 16:34:38.014 28613 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-54b37568-476a-40a0-b545-fe5401f85653 namespace which is not needed anymore
Oct 09 16:34:38 compute-0 nova_compute[117331]: 2025-10-09 16:34:38.028 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 09 16:34:38 compute-0 systemd[1]: machine-qemu\x2d19\x2dinstance\x2d00000019.scope: Deactivated successfully.
Oct 09 16:34:38 compute-0 systemd[1]: machine-qemu\x2d19\x2dinstance\x2d00000019.scope: Consumed 13.096s CPU time.
Oct 09 16:34:38 compute-0 systemd-machined[77487]: Machine qemu-19-instance-00000019 terminated.
Oct 09 16:34:38 compute-0 sshd-session[151348]: Connection closed by invalid user logstash 36.224.53.32 port 36798 [preauth]
Oct 09 16:34:38 compute-0 neutron-haproxy-ovnmeta-54b37568-476a-40a0-b545-fe5401f85653[150952]: [NOTICE]   (150956) : haproxy version is 3.0.5-8e879a5
Oct 09 16:34:38 compute-0 neutron-haproxy-ovnmeta-54b37568-476a-40a0-b545-fe5401f85653[150952]: [NOTICE]   (150956) : path to executable is /usr/sbin/haproxy
Oct 09 16:34:38 compute-0 neutron-haproxy-ovnmeta-54b37568-476a-40a0-b545-fe5401f85653[150952]: [WARNING]  (150956) : Exiting Master process...
Oct 09 16:34:38 compute-0 podman[151427]: 2025-10-09 16:34:38.128823735 +0000 UTC m=+0.026750950 container kill 1610d2110c9c8a73ff26120c621cc4a6687ef8af17d13c702380a2f01b516bf0 (image=38.102.83.66:5001/podified-master-centos10/openstack-neutron-metadata-agent-ovn:watcher_latest, name=neutron-haproxy-ovnmeta-54b37568-476a-40a0-b545-fe5401f85653, io.buildah.version=1.41.4, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251007, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=watcher_latest, tcib_managed=true)
Oct 09 16:34:38 compute-0 neutron-haproxy-ovnmeta-54b37568-476a-40a0-b545-fe5401f85653[150952]: [ALERT]    (150956) : Current worker (150958) exited with code 143 (Terminated)
Oct 09 16:34:38 compute-0 neutron-haproxy-ovnmeta-54b37568-476a-40a0-b545-fe5401f85653[150952]: [WARNING]  (150956) : All workers exited. Exiting... (0)
Oct 09 16:34:38 compute-0 systemd[1]: libpod-1610d2110c9c8a73ff26120c621cc4a6687ef8af17d13c702380a2f01b516bf0.scope: Deactivated successfully.
Oct 09 16:34:38 compute-0 podman[151443]: 2025-10-09 16:34:38.171724028 +0000 UTC m=+0.022832026 container died 1610d2110c9c8a73ff26120c621cc4a6687ef8af17d13c702380a2f01b516bf0 (image=38.102.83.66:5001/podified-master-centos10/openstack-neutron-metadata-agent-ovn:watcher_latest, name=neutron-haproxy-ovnmeta-54b37568-476a-40a0-b545-fe5401f85653, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.4, org.label-schema.build-date=20251007, org.label-schema.license=GPLv2, tcib_build_tag=watcher_latest)
Oct 09 16:34:38 compute-0 nova_compute[117331]: 2025-10-09 16:34:38.196 2 INFO nova.virt.libvirt.driver [-] [instance: d2bb4f53-7374-48bc-8bd3-5de9d41372b6] Instance destroyed successfully.
Oct 09 16:34:38 compute-0 nova_compute[117331]: 2025-10-09 16:34:38.197 2 DEBUG nova.objects.instance [None req-acc96a33-a29a-4c7c-9eec-e3964fafaa5c 1c793380a6e945d69dacfd07f1f156f8 6d67ac3076434a4582e5db1ca7d043ff - - default default] Lazy-loading 'resources' on Instance uuid d2bb4f53-7374-48bc-8bd3-5de9d41372b6 obj_load_attr /usr/lib/python3.12/site-packages/nova/objects/instance.py:1141
Oct 09 16:34:38 compute-0 nova_compute[117331]: 2025-10-09 16:34:38.341 2 DEBUG nova.compute.resource_tracker [None req-76aeb176-65ec-4856-ad96-194e2d74752e 005258eefa5345409efe13f6a1de22ab c076c42271004e7aa86e84416e33f826 - - default default] [instance: e08b0fc8-1e1e-4d97-b613-761b8f6f9674] Skipping migration as instance is neither resizing nor live-migrating. _update_usage_from_migrations /usr/lib/python3.12/site-packages/nova/compute/resource_tracker.py:1596
Oct 09 16:34:38 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-1610d2110c9c8a73ff26120c621cc4a6687ef8af17d13c702380a2f01b516bf0-userdata-shm.mount: Deactivated successfully.
Oct 09 16:34:38 compute-0 systemd[1]: var-lib-containers-storage-overlay-5ada7c0d9f4e9a0f8f130479c33c5498c4cc90405e1695e99353c1264445925f-merged.mount: Deactivated successfully.
Oct 09 16:34:38 compute-0 unix_chkpwd[151493]: password check failed for user (root)
Oct 09 16:34:38 compute-0 sshd-session[151401]: pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=193.46.255.7  user=root
Oct 09 16:34:38 compute-0 podman[151443]: 2025-10-09 16:34:38.430728641 +0000 UTC m=+0.281836619 container cleanup 1610d2110c9c8a73ff26120c621cc4a6687ef8af17d13c702380a2f01b516bf0 (image=38.102.83.66:5001/podified-master-centos10/openstack-neutron-metadata-agent-ovn:watcher_latest, name=neutron-haproxy-ovnmeta-54b37568-476a-40a0-b545-fe5401f85653, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=watcher_latest, io.buildah.version=1.41.4, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.build-date=20251007)
Oct 09 16:34:38 compute-0 systemd[1]: libpod-conmon-1610d2110c9c8a73ff26120c621cc4a6687ef8af17d13c702380a2f01b516bf0.scope: Deactivated successfully.
Oct 09 16:34:38 compute-0 podman[151451]: 2025-10-09 16:34:38.47004516 +0000 UTC m=+0.309799258 container remove 1610d2110c9c8a73ff26120c621cc4a6687ef8af17d13c702380a2f01b516bf0 (image=38.102.83.66:5001/podified-master-centos10/openstack-neutron-metadata-agent-ovn:watcher_latest, name=neutron-haproxy-ovnmeta-54b37568-476a-40a0-b545-fe5401f85653, io.buildah.version=1.41.4, org.label-schema.vendor=CentOS, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_build_tag=watcher_latest, org.label-schema.build-date=20251007, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 10 Base Image)
Oct 09 16:34:38 compute-0 ovn_metadata_agent[28608]: 2025-10-09 16:34:38.478 139687 DEBUG oslo.privsep.daemon [-] privsep: reply[5d97ef93-896b-4e7d-8909-221de76a8314]: (4, ("Thu Oct  9 04:34:38 PM UTC 2025 Sending signal '15' to neutron-haproxy-ovnmeta-54b37568-476a-40a0-b545-fe5401f85653 (1610d2110c9c8a73ff26120c621cc4a6687ef8af17d13c702380a2f01b516bf0)\n1610d2110c9c8a73ff26120c621cc4a6687ef8af17d13c702380a2f01b516bf0\nThu Oct  9 04:34:38 PM UTC 2025 Deleting container neutron-haproxy-ovnmeta-54b37568-476a-40a0-b545-fe5401f85653 (1610d2110c9c8a73ff26120c621cc4a6687ef8af17d13c702380a2f01b516bf0)\n1610d2110c9c8a73ff26120c621cc4a6687ef8af17d13c702380a2f01b516bf0\n", '', 0)) _call_back /usr/lib/python3.12/site-packages/oslo_privsep/daemon.py:515
Oct 09 16:34:38 compute-0 ovn_metadata_agent[28608]: 2025-10-09 16:34:38.480 139687 DEBUG oslo.privsep.daemon [-] privsep: reply[a7413c02-02fc-402e-be11-65c643611517]: (4, None) _call_back /usr/lib/python3.12/site-packages/oslo_privsep/daemon.py:515
Oct 09 16:34:38 compute-0 ovn_metadata_agent[28608]: 2025-10-09 16:34:38.480 28613 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/54b37568-476a-40a0-b545-fe5401f85653.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/54b37568-476a-40a0-b545-fe5401f85653.pid.haproxy' get_value_from_file /usr/lib/python3.12/site-packages/neutron/agent/linux/utils.py:268
Oct 09 16:34:38 compute-0 ovn_metadata_agent[28608]: 2025-10-09 16:34:38.481 139687 DEBUG oslo.privsep.daemon [-] privsep: reply[12c52de7-08c4-4b8a-9e73-f96cf599fe43]: (4, None) _call_back /usr/lib/python3.12/site-packages/oslo_privsep/daemon.py:515
Oct 09 16:34:38 compute-0 ovn_metadata_agent[28608]: 2025-10-09 16:34:38.482 28613 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap54b37568-40, bridge=None, if_exists=True) do_commit /usr/lib/python3.12/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 09 16:34:38 compute-0 nova_compute[117331]: 2025-10-09 16:34:38.483 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 09 16:34:38 compute-0 kernel: tap54b37568-40: left promiscuous mode
Oct 09 16:34:38 compute-0 nova_compute[117331]: 2025-10-09 16:34:38.500 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 09 16:34:38 compute-0 ovn_metadata_agent[28608]: 2025-10-09 16:34:38.502 139687 DEBUG oslo.privsep.daemon [-] privsep: reply[d4344819-3e79-464b-8d0f-fa974c302002]: (4, True) _call_back /usr/lib/python3.12/site-packages/oslo_privsep/daemon.py:515
Oct 09 16:34:38 compute-0 ovn_metadata_agent[28608]: 2025-10-09 16:34:38.533 139687 DEBUG oslo.privsep.daemon [-] privsep: reply[9a81bda6-c9eb-4f12-9a6e-a594ec4f852d]: (4, None) _call_back /usr/lib/python3.12/site-packages/oslo_privsep/daemon.py:515
Oct 09 16:34:38 compute-0 ovn_metadata_agent[28608]: 2025-10-09 16:34:38.534 139687 DEBUG oslo.privsep.daemon [-] privsep: reply[537a4e63-e66e-4f72-915e-43e10e70ba16]: (4, True) _call_back /usr/lib/python3.12/site-packages/oslo_privsep/daemon.py:515
Oct 09 16:34:38 compute-0 ovn_metadata_agent[28608]: 2025-10-09 16:34:38.552 139687 DEBUG oslo.privsep.daemon [-] privsep: reply[3cce86b7-72e9-4f68-a6f6-90f77b727039]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 255528, 'reachable_time': 39058, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 151499, 'error': None, 'target': 'ovnmeta-54b37568-476a-40a0-b545-fe5401f85653', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.12/site-packages/oslo_privsep/daemon.py:515
Oct 09 16:34:38 compute-0 ovn_metadata_agent[28608]: 2025-10-09 16:34:38.554 28727 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-54b37568-476a-40a0-b545-fe5401f85653 deleted. remove_netns /usr/lib/python3.12/site-packages/neutron/privileged/agent/linux/ip_lib.py:603
Oct 09 16:34:38 compute-0 ovn_metadata_agent[28608]: 2025-10-09 16:34:38.554 28727 DEBUG oslo.privsep.daemon [-] privsep: reply[be1dd184-b1af-4844-9a59-2fba54e6959b]: (4, None) _call_back /usr/lib/python3.12/site-packages/oslo_privsep/daemon.py:515
Oct 09 16:34:38 compute-0 systemd[1]: run-netns-ovnmeta\x2d54b37568\x2d476a\x2d40a0\x2db545\x2dfe5401f85653.mount: Deactivated successfully.
Oct 09 16:34:38 compute-0 nova_compute[117331]: 2025-10-09 16:34:38.653 2 DEBUG nova.compute.resource_tracker [None req-76aeb176-65ec-4856-ad96-194e2d74752e 005258eefa5345409efe13f6a1de22ab c076c42271004e7aa86e84416e33f826 - - default default] Instance d2bb4f53-7374-48bc-8bd3-5de9d41372b6 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 1151, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.12/site-packages/nova/compute/resource_tracker.py:1740
Oct 09 16:34:38 compute-0 nova_compute[117331]: 2025-10-09 16:34:38.654 2 DEBUG nova.compute.resource_tracker [None req-76aeb176-65ec-4856-ad96-194e2d74752e 005258eefa5345409efe13f6a1de22ab c076c42271004e7aa86e84416e33f826 - - default default] Migration eaf79dcb-78b9-44ec-8f72-8a19ead6b011 is active on this compute host and has allocations in placement: {'resources': {'VCPU': 1, 'MEMORY_MB': 1151, 'DISK_GB': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.12/site-packages/nova/compute/resource_tracker.py:1745
Oct 09 16:34:38 compute-0 nova_compute[117331]: 2025-10-09 16:34:38.654 2 DEBUG nova.compute.resource_tracker [None req-76aeb176-65ec-4856-ad96-194e2d74752e 005258eefa5345409efe13f6a1de22ab c076c42271004e7aa86e84416e33f826 - - default default] Total usable vcpus: 8, total allocated vcpus: 1 _report_final_resource_view /usr/lib/python3.12/site-packages/nova/compute/resource_tracker.py:1159
Oct 09 16:34:38 compute-0 nova_compute[117331]: 2025-10-09 16:34:38.654 2 DEBUG nova.compute.resource_tracker [None req-76aeb176-65ec-4856-ad96-194e2d74752e 005258eefa5345409efe13f6a1de22ab c076c42271004e7aa86e84416e33f826 - - default default] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=1663MB phys_disk=79GB used_disk=1GB total_vcpus=8 used_vcpus=1 pci_stats=[] stats={'failed_builds': '0', 'uptime': ' 16:34:36 up 43 min,  0 user,  load average: 0.53, 0.61, 0.50\n', 'num_instances': '1', 'num_vm_active': '1', 'num_task_None': '1', 'num_os_type_None': '1', 'num_proj_6d67ac3076434a4582e5db1ca7d043ff': '1', 'io_workload': '0'} _report_final_resource_view /usr/lib/python3.12/site-packages/nova/compute/resource_tracker.py:1168
Oct 09 16:34:38 compute-0 nova_compute[117331]: 2025-10-09 16:34:38.710 2 DEBUG nova.compute.provider_tree [None req-76aeb176-65ec-4856-ad96-194e2d74752e 005258eefa5345409efe13f6a1de22ab c076c42271004e7aa86e84416e33f826 - - default default] Inventory has not changed in ProviderTree for provider: 593051b8-2000-437f-a915-2616fc8b1671 update_inventory /usr/lib/python3.12/site-packages/nova/compute/provider_tree.py:180
Oct 09 16:34:38 compute-0 nova_compute[117331]: 2025-10-09 16:34:38.775 2 DEBUG nova.compute.manager [req-baa5a1ff-2391-4e64-9ddc-f4daeb77e36c req-4842f9e6-8a66-4881-9ee9-7f07fc3b1c74 ee15961ad2be4bada6c106ac4f69f422 c076c42271004e7aa86e84416e33f826 - - default default] [instance: d2bb4f53-7374-48bc-8bd3-5de9d41372b6] Received event network-vif-unplugged-4b893ed9-8bb2-41b7-9842-b530d7cc9ccd external_instance_event /usr/lib/python3.12/site-packages/nova/compute/manager.py:11812
Oct 09 16:34:38 compute-0 nova_compute[117331]: 2025-10-09 16:34:38.776 2 DEBUG oslo_concurrency.lockutils [req-baa5a1ff-2391-4e64-9ddc-f4daeb77e36c req-4842f9e6-8a66-4881-9ee9-7f07fc3b1c74 ee15961ad2be4bada6c106ac4f69f422 c076c42271004e7aa86e84416e33f826 - - default default] Acquiring lock "d2bb4f53-7374-48bc-8bd3-5de9d41372b6-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:405
Oct 09 16:34:38 compute-0 nova_compute[117331]: 2025-10-09 16:34:38.776 2 DEBUG oslo_concurrency.lockutils [req-baa5a1ff-2391-4e64-9ddc-f4daeb77e36c req-4842f9e6-8a66-4881-9ee9-7f07fc3b1c74 ee15961ad2be4bada6c106ac4f69f422 c076c42271004e7aa86e84416e33f826 - - default default] Lock "d2bb4f53-7374-48bc-8bd3-5de9d41372b6-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:410
Oct 09 16:34:38 compute-0 nova_compute[117331]: 2025-10-09 16:34:38.776 2 DEBUG oslo_concurrency.lockutils [req-baa5a1ff-2391-4e64-9ddc-f4daeb77e36c req-4842f9e6-8a66-4881-9ee9-7f07fc3b1c74 ee15961ad2be4bada6c106ac4f69f422 c076c42271004e7aa86e84416e33f826 - - default default] Lock "d2bb4f53-7374-48bc-8bd3-5de9d41372b6-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:424
Oct 09 16:34:38 compute-0 nova_compute[117331]: 2025-10-09 16:34:38.776 2 DEBUG nova.compute.manager [req-baa5a1ff-2391-4e64-9ddc-f4daeb77e36c req-4842f9e6-8a66-4881-9ee9-7f07fc3b1c74 ee15961ad2be4bada6c106ac4f69f422 c076c42271004e7aa86e84416e33f826 - - default default] [instance: d2bb4f53-7374-48bc-8bd3-5de9d41372b6] No waiting events found dispatching network-vif-unplugged-4b893ed9-8bb2-41b7-9842-b530d7cc9ccd pop_instance_event /usr/lib/python3.12/site-packages/nova/compute/manager.py:344
Oct 09 16:34:38 compute-0 nova_compute[117331]: 2025-10-09 16:34:38.777 2 DEBUG nova.compute.manager [req-baa5a1ff-2391-4e64-9ddc-f4daeb77e36c req-4842f9e6-8a66-4881-9ee9-7f07fc3b1c74 ee15961ad2be4bada6c106ac4f69f422 c076c42271004e7aa86e84416e33f826 - - default default] [instance: d2bb4f53-7374-48bc-8bd3-5de9d41372b6] Received event network-vif-unplugged-4b893ed9-8bb2-41b7-9842-b530d7cc9ccd for instance with task_state deleting. _process_instance_event /usr/lib/python3.12/site-packages/nova/compute/manager.py:11590
Oct 09 16:34:39 compute-0 nova_compute[117331]: 2025-10-09 16:34:39.141 2 DEBUG nova.virt.libvirt.vif [None req-acc96a33-a29a-4c7c-9eec-e3964fafaa5c 1c793380a6e945d69dacfd07f1f156f8 6d67ac3076434a4582e5db1ca7d043ff - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,compute_id=1,config_drive='True',created_at=2025-10-09T16:33:37Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description=None,display_name='tempest-TestExecuteWorkloadBalanceStrategy-server-1819620832',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(3),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testexecuteworkloadbalancestrategy-server-1819620832',id=25,image_ref='b7d6e0af-25e4-4227-9dc6-43143898ceee',info_cache=InstanceInfoCache,instance_type_id=3,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-10-09T16:33:54Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=1151,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='6d67ac3076434a4582e5db1ca7d043ff',ramdisk_id='',reservation_id='r-bt02xp9l',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,manager,admin,member',image_base_image_ref='b7d6e0af-25e4-4227-9dc6-43143898ceee',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image
_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestExecuteWorkloadBalanceStrategy-2100042169',owner_user_name='tempest-TestExecuteWorkloadBalanceStrategy-2100042169-project-admin'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-10-09T16:33:54Z,user_data=None,user_id='1c793380a6e945d69dacfd07f1f156f8',uuid=d2bb4f53-7374-48bc-8bd3-5de9d41372b6,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "4b893ed9-8bb2-41b7-9842-b530d7cc9ccd", "address": "fa:16:3e:bc:f1:50", "network": {"id": "54b37568-476a-40a0-b545-fe5401f85653", "bridge": "br-int", "label": "tempest-TestExecuteWorkloadBalanceStrategy-563305466-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "036a4e356fb34effb6775ffe5bd9a19f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap4b893ed9-8b", "ovs_interfaceid": "4b893ed9-8bb2-41b7-9842-b530d7cc9ccd", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.12/site-packages/nova/virt/libvirt/vif.py:839
Oct 09 16:34:39 compute-0 nova_compute[117331]: 2025-10-09 16:34:39.141 2 DEBUG nova.network.os_vif_util [None req-acc96a33-a29a-4c7c-9eec-e3964fafaa5c 1c793380a6e945d69dacfd07f1f156f8 6d67ac3076434a4582e5db1ca7d043ff - - default default] Converting VIF {"id": "4b893ed9-8bb2-41b7-9842-b530d7cc9ccd", "address": "fa:16:3e:bc:f1:50", "network": {"id": "54b37568-476a-40a0-b545-fe5401f85653", "bridge": "br-int", "label": "tempest-TestExecuteWorkloadBalanceStrategy-563305466-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "036a4e356fb34effb6775ffe5bd9a19f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap4b893ed9-8b", "ovs_interfaceid": "4b893ed9-8bb2-41b7-9842-b530d7cc9ccd", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.12/site-packages/nova/network/os_vif_util.py:511
Oct 09 16:34:39 compute-0 nova_compute[117331]: 2025-10-09 16:34:39.142 2 DEBUG nova.network.os_vif_util [None req-acc96a33-a29a-4c7c-9eec-e3964fafaa5c 1c793380a6e945d69dacfd07f1f156f8 6d67ac3076434a4582e5db1ca7d043ff - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:bc:f1:50,bridge_name='br-int',has_traffic_filtering=True,id=4b893ed9-8bb2-41b7-9842-b530d7cc9ccd,network=Network(54b37568-476a-40a0-b545-fe5401f85653),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap4b893ed9-8b') nova_to_osvif_vif /usr/lib/python3.12/site-packages/nova/network/os_vif_util.py:548
Oct 09 16:34:39 compute-0 nova_compute[117331]: 2025-10-09 16:34:39.143 2 DEBUG os_vif [None req-acc96a33-a29a-4c7c-9eec-e3964fafaa5c 1c793380a6e945d69dacfd07f1f156f8 6d67ac3076434a4582e5db1ca7d043ff - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:bc:f1:50,bridge_name='br-int',has_traffic_filtering=True,id=4b893ed9-8bb2-41b7-9842-b530d7cc9ccd,network=Network(54b37568-476a-40a0-b545-fe5401f85653),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap4b893ed9-8b') unplug /usr/lib/python3.12/site-packages/os_vif/__init__.py:109
Oct 09 16:34:39 compute-0 nova_compute[117331]: 2025-10-09 16:34:39.146 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 22 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 09 16:34:39 compute-0 nova_compute[117331]: 2025-10-09 16:34:39.147 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap4b893ed9-8b, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.12/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 09 16:34:39 compute-0 nova_compute[117331]: 2025-10-09 16:34:39.180 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 09 16:34:39 compute-0 nova_compute[117331]: 2025-10-09 16:34:39.182 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 09 16:34:39 compute-0 nova_compute[117331]: 2025-10-09 16:34:39.183 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 22 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 09 16:34:39 compute-0 nova_compute[117331]: 2025-10-09 16:34:39.183 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbDestroyCommand(_result=None, table=QoS, record=e1cc9b99-c1a9-42a7-9b32-f899b05a38ad) do_commit /usr/lib/python3.12/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 09 16:34:39 compute-0 nova_compute[117331]: 2025-10-09 16:34:39.184 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 09 16:34:39 compute-0 nova_compute[117331]: 2025-10-09 16:34:39.185 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 09 16:34:39 compute-0 nova_compute[117331]: 2025-10-09 16:34:39.189 2 INFO os_vif [None req-acc96a33-a29a-4c7c-9eec-e3964fafaa5c 1c793380a6e945d69dacfd07f1f156f8 6d67ac3076434a4582e5db1ca7d043ff - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:bc:f1:50,bridge_name='br-int',has_traffic_filtering=True,id=4b893ed9-8bb2-41b7-9842-b530d7cc9ccd,network=Network(54b37568-476a-40a0-b545-fe5401f85653),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap4b893ed9-8b')
Oct 09 16:34:39 compute-0 nova_compute[117331]: 2025-10-09 16:34:39.190 2 INFO nova.virt.libvirt.driver [None req-acc96a33-a29a-4c7c-9eec-e3964fafaa5c 1c793380a6e945d69dacfd07f1f156f8 6d67ac3076434a4582e5db1ca7d043ff - - default default] [instance: d2bb4f53-7374-48bc-8bd3-5de9d41372b6] Deleting instance files /var/lib/nova/instances/d2bb4f53-7374-48bc-8bd3-5de9d41372b6_del
Oct 09 16:34:39 compute-0 nova_compute[117331]: 2025-10-09 16:34:39.190 2 INFO nova.virt.libvirt.driver [None req-acc96a33-a29a-4c7c-9eec-e3964fafaa5c 1c793380a6e945d69dacfd07f1f156f8 6d67ac3076434a4582e5db1ca7d043ff - - default default] [instance: d2bb4f53-7374-48bc-8bd3-5de9d41372b6] Deletion of /var/lib/nova/instances/d2bb4f53-7374-48bc-8bd3-5de9d41372b6_del complete
Oct 09 16:34:39 compute-0 nova_compute[117331]: 2025-10-09 16:34:39.217 2 DEBUG nova.scheduler.client.report [None req-76aeb176-65ec-4856-ad96-194e2d74752e 005258eefa5345409efe13f6a1de22ab c076c42271004e7aa86e84416e33f826 - - default default] Inventory has not changed for provider 593051b8-2000-437f-a915-2616fc8b1671 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 79, 'reserved': 1, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.12/site-packages/nova/scheduler/client/report.py:958
Oct 09 16:34:39 compute-0 nova_compute[117331]: 2025-10-09 16:34:39.731 2 INFO nova.compute.manager [None req-acc96a33-a29a-4c7c-9eec-e3964fafaa5c 1c793380a6e945d69dacfd07f1f156f8 6d67ac3076434a4582e5db1ca7d043ff - - default default] [instance: d2bb4f53-7374-48bc-8bd3-5de9d41372b6] Took 1.79 seconds to destroy the instance on the hypervisor.
Oct 09 16:34:39 compute-0 nova_compute[117331]: 2025-10-09 16:34:39.732 2 DEBUG oslo.service.backend._eventlet.loopingcall [None req-acc96a33-a29a-4c7c-9eec-e3964fafaa5c 1c793380a6e945d69dacfd07f1f156f8 6d67ac3076434a4582e5db1ca7d043ff - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.12/site-packages/oslo_service/backend/_eventlet/loopingcall.py:437
Oct 09 16:34:39 compute-0 nova_compute[117331]: 2025-10-09 16:34:39.732 2 DEBUG nova.compute.manager [-] [instance: d2bb4f53-7374-48bc-8bd3-5de9d41372b6] Deallocating network for instance _deallocate_network /usr/lib/python3.12/site-packages/nova/compute/manager.py:2324
Oct 09 16:34:39 compute-0 nova_compute[117331]: 2025-10-09 16:34:39.732 2 DEBUG nova.network.neutron [-] [instance: d2bb4f53-7374-48bc-8bd3-5de9d41372b6] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.12/site-packages/nova/network/neutron.py:1863
Oct 09 16:34:39 compute-0 nova_compute[117331]: 2025-10-09 16:34:39.733 2 WARNING neutronclient.v2_0.client [-] The python binding code in neutronclient is deprecated in favor of OpenstackSDK, please use that as this will be removed in a future release.
Oct 09 16:34:39 compute-0 nova_compute[117331]: 2025-10-09 16:34:39.745 2 DEBUG nova.compute.resource_tracker [None req-76aeb176-65ec-4856-ad96-194e2d74752e 005258eefa5345409efe13f6a1de22ab c076c42271004e7aa86e84416e33f826 - - default default] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.12/site-packages/nova/compute/resource_tracker.py:1097
Oct 09 16:34:39 compute-0 nova_compute[117331]: 2025-10-09 16:34:39.746 2 DEBUG oslo_concurrency.lockutils [None req-76aeb176-65ec-4856-ad96-194e2d74752e 005258eefa5345409efe13f6a1de22ab c076c42271004e7aa86e84416e33f826 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 3.534s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:424
Oct 09 16:34:39 compute-0 nova_compute[117331]: 2025-10-09 16:34:39.764 2 INFO nova.compute.manager [None req-76aeb176-65ec-4856-ad96-194e2d74752e 005258eefa5345409efe13f6a1de22ab c076c42271004e7aa86e84416e33f826 - - default default] [instance: e08b0fc8-1e1e-4d97-b613-761b8f6f9674] Migrating instance to compute-1.ctlplane.example.com finished successfully.
Oct 09 16:34:40 compute-0 nova_compute[117331]: 2025-10-09 16:34:40.245 2 WARNING neutronclient.v2_0.client [-] The python binding code in neutronclient is deprecated in favor of OpenstackSDK, please use that as this will be removed in a future release.
Oct 09 16:34:40 compute-0 nova_compute[117331]: 2025-10-09 16:34:40.301 2 DEBUG oslo_service.periodic_task [None req-92a13bed-c081-4c7b-87f3-bafebefb79f2 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.12/site-packages/oslo_service/periodic_task.py:210
Oct 09 16:34:40 compute-0 sshd-session[151401]: Failed password for root from 193.46.255.7 port 39904 ssh2
Oct 09 16:34:40 compute-0 nova_compute[117331]: 2025-10-09 16:34:40.847 2 DEBUG nova.compute.manager [req-0b63c985-c9c0-4e60-acd1-07b05d3301b1 req-f43701fa-2203-4f14-b3c9-8326941f1dc5 ee15961ad2be4bada6c106ac4f69f422 c076c42271004e7aa86e84416e33f826 - - default default] [instance: d2bb4f53-7374-48bc-8bd3-5de9d41372b6] Received event network-vif-unplugged-4b893ed9-8bb2-41b7-9842-b530d7cc9ccd external_instance_event /usr/lib/python3.12/site-packages/nova/compute/manager.py:11812
Oct 09 16:34:40 compute-0 nova_compute[117331]: 2025-10-09 16:34:40.848 2 DEBUG oslo_concurrency.lockutils [req-0b63c985-c9c0-4e60-acd1-07b05d3301b1 req-f43701fa-2203-4f14-b3c9-8326941f1dc5 ee15961ad2be4bada6c106ac4f69f422 c076c42271004e7aa86e84416e33f826 - - default default] Acquiring lock "d2bb4f53-7374-48bc-8bd3-5de9d41372b6-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:405
Oct 09 16:34:40 compute-0 nova_compute[117331]: 2025-10-09 16:34:40.848 2 DEBUG oslo_concurrency.lockutils [req-0b63c985-c9c0-4e60-acd1-07b05d3301b1 req-f43701fa-2203-4f14-b3c9-8326941f1dc5 ee15961ad2be4bada6c106ac4f69f422 c076c42271004e7aa86e84416e33f826 - - default default] Lock "d2bb4f53-7374-48bc-8bd3-5de9d41372b6-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:410
Oct 09 16:34:40 compute-0 nova_compute[117331]: 2025-10-09 16:34:40.848 2 DEBUG oslo_concurrency.lockutils [req-0b63c985-c9c0-4e60-acd1-07b05d3301b1 req-f43701fa-2203-4f14-b3c9-8326941f1dc5 ee15961ad2be4bada6c106ac4f69f422 c076c42271004e7aa86e84416e33f826 - - default default] Lock "d2bb4f53-7374-48bc-8bd3-5de9d41372b6-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:424
Oct 09 16:34:40 compute-0 nova_compute[117331]: 2025-10-09 16:34:40.849 2 DEBUG nova.compute.manager [req-0b63c985-c9c0-4e60-acd1-07b05d3301b1 req-f43701fa-2203-4f14-b3c9-8326941f1dc5 ee15961ad2be4bada6c106ac4f69f422 c076c42271004e7aa86e84416e33f826 - - default default] [instance: d2bb4f53-7374-48bc-8bd3-5de9d41372b6] No waiting events found dispatching network-vif-unplugged-4b893ed9-8bb2-41b7-9842-b530d7cc9ccd pop_instance_event /usr/lib/python3.12/site-packages/nova/compute/manager.py:344
Oct 09 16:34:40 compute-0 nova_compute[117331]: 2025-10-09 16:34:40.849 2 DEBUG nova.compute.manager [req-0b63c985-c9c0-4e60-acd1-07b05d3301b1 req-f43701fa-2203-4f14-b3c9-8326941f1dc5 ee15961ad2be4bada6c106ac4f69f422 c076c42271004e7aa86e84416e33f826 - - default default] [instance: d2bb4f53-7374-48bc-8bd3-5de9d41372b6] Received event network-vif-unplugged-4b893ed9-8bb2-41b7-9842-b530d7cc9ccd for instance with task_state deleting. _process_instance_event /usr/lib/python3.12/site-packages/nova/compute/manager.py:11590
Oct 09 16:34:40 compute-0 nova_compute[117331]: 2025-10-09 16:34:40.883 2 INFO nova.scheduler.client.report [None req-76aeb176-65ec-4856-ad96-194e2d74752e 005258eefa5345409efe13f6a1de22ab c076c42271004e7aa86e84416e33f826 - - default default] Deleted allocation for migration eaf79dcb-78b9-44ec-8f72-8a19ead6b011
Oct 09 16:34:40 compute-0 nova_compute[117331]: 2025-10-09 16:34:40.883 2 DEBUG nova.virt.libvirt.driver [None req-76aeb176-65ec-4856-ad96-194e2d74752e 005258eefa5345409efe13f6a1de22ab c076c42271004e7aa86e84416e33f826 - - default default] [instance: e08b0fc8-1e1e-4d97-b613-761b8f6f9674] Live migration monitoring is all done _live_migration /usr/lib/python3.12/site-packages/nova/virt/libvirt/driver.py:11566
Oct 09 16:34:41 compute-0 nova_compute[117331]: 2025-10-09 16:34:41.558 2 DEBUG nova.network.neutron [-] [instance: d2bb4f53-7374-48bc-8bd3-5de9d41372b6] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.12/site-packages/nova/network/neutron.py:116
Oct 09 16:34:41 compute-0 sshd-session[151487]: Invalid user grafana from 36.224.53.32 port 43960
Oct 09 16:34:41 compute-0 podman[151501]: 2025-10-09 16:34:41.866456017 +0000 UTC m=+0.088293194 container health_status 64fa251112d01abb34ec1bb8e22019815ddcaaaa391ce5ec91517147ac5a9994 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, com.redhat.component=ubi9-minimal-container, container_name=openstack_network_exporter, io.openshift.expose-services=, build-date=2025-08-20T13:12:41, config_id=edpm, maintainer=Red Hat, Inc., name=ubi9-minimal, architecture=x86_64, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., distribution-scope=public, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, io.openshift.tags=minimal rhel9, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, release=1755695350, url=https://catalog.redhat.com/en/search?searchType=containers, vendor=Red Hat, Inc., version=9.6, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, io.buildah.version=1.33.7, managed_by=edpm_ansible, vcs-type=git)
Oct 09 16:34:41 compute-0 podman[151502]: 2025-10-09 16:34:41.889666205 +0000 UTC m=+0.104448578 container health_status d615e4f24ff62fbb14be4a63e6da568ec5b22309773c1c66103b6b4de64a8a2f (image=38.102.83.66:5001/podified-master-centos10/openstack-ovn-controller:watcher_latest, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_id=ovn_controller, org.label-schema.build-date=20251007, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.schema-version=1.0, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': '38.102.83.66:5001/podified-master-centos10/openstack-ovn-controller:watcher_latest', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, tcib_build_tag=watcher_latest, io.buildah.version=1.41.4, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.vendor=CentOS)
Oct 09 16:34:42 compute-0 nova_compute[117331]: 2025-10-09 16:34:42.068 2 INFO nova.compute.manager [-] [instance: d2bb4f53-7374-48bc-8bd3-5de9d41372b6] Took 2.34 seconds to deallocate network for instance.
Oct 09 16:34:42 compute-0 nova_compute[117331]: 2025-10-09 16:34:42.231 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 09 16:34:42 compute-0 nova_compute[117331]: 2025-10-09 16:34:42.592 2 DEBUG oslo_concurrency.lockutils [None req-acc96a33-a29a-4c7c-9eec-e3964fafaa5c 1c793380a6e945d69dacfd07f1f156f8 6d67ac3076434a4582e5db1ca7d043ff - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:405
Oct 09 16:34:42 compute-0 nova_compute[117331]: 2025-10-09 16:34:42.593 2 DEBUG oslo_concurrency.lockutils [None req-acc96a33-a29a-4c7c-9eec-e3964fafaa5c 1c793380a6e945d69dacfd07f1f156f8 6d67ac3076434a4582e5db1ca7d043ff - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:410
Oct 09 16:34:42 compute-0 nova_compute[117331]: 2025-10-09 16:34:42.631 2 DEBUG nova.compute.provider_tree [None req-acc96a33-a29a-4c7c-9eec-e3964fafaa5c 1c793380a6e945d69dacfd07f1f156f8 6d67ac3076434a4582e5db1ca7d043ff - - default default] Inventory has not changed in ProviderTree for provider: 593051b8-2000-437f-a915-2616fc8b1671 update_inventory /usr/lib/python3.12/site-packages/nova/compute/provider_tree.py:180
Oct 09 16:34:42 compute-0 unix_chkpwd[151546]: password check failed for user (root)
Oct 09 16:34:42 compute-0 nova_compute[117331]: 2025-10-09 16:34:42.909 2 DEBUG nova.compute.manager [req-2f3fcd07-49ba-48bd-8f38-b671bba8b54b req-1c233408-1664-4b30-8f4f-b8cc28faa57e ee15961ad2be4bada6c106ac4f69f422 c076c42271004e7aa86e84416e33f826 - - default default] [instance: d2bb4f53-7374-48bc-8bd3-5de9d41372b6] Received event network-vif-deleted-4b893ed9-8bb2-41b7-9842-b530d7cc9ccd external_instance_event /usr/lib/python3.12/site-packages/nova/compute/manager.py:11812
Oct 09 16:34:42 compute-0 sshd-session[151487]: pam_unix(sshd:auth): check pass; user unknown
Oct 09 16:34:42 compute-0 sshd-session[151487]: pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=36.224.53.32
Oct 09 16:34:43 compute-0 nova_compute[117331]: 2025-10-09 16:34:43.137 2 DEBUG nova.scheduler.client.report [None req-acc96a33-a29a-4c7c-9eec-e3964fafaa5c 1c793380a6e945d69dacfd07f1f156f8 6d67ac3076434a4582e5db1ca7d043ff - - default default] Inventory has not changed for provider 593051b8-2000-437f-a915-2616fc8b1671 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 79, 'reserved': 1, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.12/site-packages/nova/scheduler/client/report.py:958
Oct 09 16:34:43 compute-0 nova_compute[117331]: 2025-10-09 16:34:43.645 2 DEBUG oslo_concurrency.lockutils [None req-acc96a33-a29a-4c7c-9eec-e3964fafaa5c 1c793380a6e945d69dacfd07f1f156f8 6d67ac3076434a4582e5db1ca7d043ff - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 1.052s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:424
Oct 09 16:34:43 compute-0 nova_compute[117331]: 2025-10-09 16:34:43.682 2 INFO nova.scheduler.client.report [None req-acc96a33-a29a-4c7c-9eec-e3964fafaa5c 1c793380a6e945d69dacfd07f1f156f8 6d67ac3076434a4582e5db1ca7d043ff - - default default] Deleted allocations for instance d2bb4f53-7374-48bc-8bd3-5de9d41372b6
Oct 09 16:34:44 compute-0 nova_compute[117331]: 2025-10-09 16:34:44.186 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 09 16:34:44 compute-0 sshd-session[151401]: Failed password for root from 193.46.255.7 port 39904 ssh2
Oct 09 16:34:44 compute-0 nova_compute[117331]: 2025-10-09 16:34:44.706 2 DEBUG oslo_concurrency.lockutils [None req-acc96a33-a29a-4c7c-9eec-e3964fafaa5c 1c793380a6e945d69dacfd07f1f156f8 6d67ac3076434a4582e5db1ca7d043ff - - default default] Lock "d2bb4f53-7374-48bc-8bd3-5de9d41372b6" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 7.313s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:424
Oct 09 16:34:44 compute-0 sshd-session[151487]: Failed password for invalid user grafana from 36.224.53.32 port 43960 ssh2
Oct 09 16:34:44 compute-0 unix_chkpwd[151547]: password check failed for user (root)
Oct 09 16:34:45 compute-0 sshd-session[151487]: Connection closed by invalid user grafana 36.224.53.32 port 43960 [preauth]
Oct 09 16:34:47 compute-0 sshd-session[151401]: Failed password for root from 193.46.255.7 port 39904 ssh2
Oct 09 16:34:47 compute-0 nova_compute[117331]: 2025-10-09 16:34:47.235 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 09 16:34:49 compute-0 sshd-session[151401]: Received disconnect from 193.46.255.7 port 39904:11:  [preauth]
Oct 09 16:34:49 compute-0 sshd-session[151401]: Disconnected from authenticating user root 193.46.255.7 port 39904 [preauth]
Oct 09 16:34:49 compute-0 sshd-session[151401]: PAM 2 more authentication failures; logname= uid=0 euid=0 tty=ssh ruser= rhost=193.46.255.7  user=root
Oct 09 16:34:49 compute-0 sshd-session[151548]: Invalid user hadoop from 36.224.53.32 port 50710
Oct 09 16:34:49 compute-0 nova_compute[117331]: 2025-10-09 16:34:49.190 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 09 16:34:49 compute-0 sshd-session[151548]: pam_unix(sshd:auth): check pass; user unknown
Oct 09 16:34:49 compute-0 sshd-session[151548]: pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=36.224.53.32
Oct 09 16:34:49 compute-0 unix_chkpwd[151552]: password check failed for user (root)
Oct 09 16:34:49 compute-0 sshd-session[151550]: pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=193.46.255.7  user=root
Oct 09 16:34:51 compute-0 sshd-session[151548]: Failed password for invalid user hadoop from 36.224.53.32 port 50710 ssh2
Oct 09 16:34:52 compute-0 sshd-session[151550]: Failed password for root from 193.46.255.7 port 55970 ssh2
Oct 09 16:34:52 compute-0 nova_compute[117331]: 2025-10-09 16:34:52.238 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 09 16:34:53 compute-0 sshd-session[151548]: Connection closed by invalid user hadoop 36.224.53.32 port 50710 [preauth]
Oct 09 16:34:53 compute-0 podman[151553]: 2025-10-09 16:34:53.834860962 +0000 UTC m=+0.059992506 container health_status 270830b5167cafbab735d8118980f26987e673cd0fa464dd2202394830bcca66 (image=38.102.83.66:5001/podified-master-centos10/openstack-multipathd:watcher_latest, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': '38.102.83.66:5001/podified-master-centos10/openstack-multipathd:watcher_latest', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.build-date=20251007, config_id=multipathd, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=watcher_latest, container_name=multipathd, io.buildah.version=1.41.4)
Oct 09 16:34:54 compute-0 nova_compute[117331]: 2025-10-09 16:34:54.193 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 09 16:34:54 compute-0 unix_chkpwd[151574]: password check failed for user (root)
Oct 09 16:34:56 compute-0 sshd-session[151550]: Failed password for root from 193.46.255.7 port 55970 ssh2
Oct 09 16:34:56 compute-0 unix_chkpwd[151575]: password check failed for user (root)
Oct 09 16:34:56 compute-0 nova_compute[117331]: 2025-10-09 16:34:56.795 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 09 16:34:57 compute-0 nova_compute[117331]: 2025-10-09 16:34:57.240 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 09 16:34:57 compute-0 podman[151576]: 2025-10-09 16:34:57.826086936 +0000 UTC m=+0.056198875 container health_status 86aa93e887899e54d383b0fc17fb3c5637680d1d8e146a130a557c837f0260fe (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter)
Oct 09 16:34:58 compute-0 sshd-session[151550]: Failed password for root from 193.46.255.7 port 55970 ssh2
Oct 09 16:34:59 compute-0 nova_compute[117331]: 2025-10-09 16:34:59.196 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 09 16:34:59 compute-0 podman[127775]: time="2025-10-09T16:34:59Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Oct 09 16:34:59 compute-0 podman[127775]: @ - - [09/Oct/2025:16:34:59 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 19526 "" "Go-http-client/1.1"
Oct 09 16:34:59 compute-0 podman[127775]: @ - - [09/Oct/2025:16:34:59 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 3031 "" "Go-http-client/1.1"
Oct 09 16:35:00 compute-0 sshd-session[151550]: Received disconnect from 193.46.255.7 port 55970:11:  [preauth]
Oct 09 16:35:00 compute-0 sshd-session[151550]: Disconnected from authenticating user root 193.46.255.7 port 55970 [preauth]
Oct 09 16:35:00 compute-0 sshd-session[151550]: PAM 2 more authentication failures; logname= uid=0 euid=0 tty=ssh ruser= rhost=193.46.255.7  user=root
Oct 09 16:35:01 compute-0 openstack_network_exporter[129925]: ERROR   16:35:01 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Oct 09 16:35:01 compute-0 openstack_network_exporter[129925]: ERROR   16:35:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Oct 09 16:35:01 compute-0 openstack_network_exporter[129925]: ERROR   16:35:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Oct 09 16:35:01 compute-0 openstack_network_exporter[129925]: ERROR   16:35:01 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Oct 09 16:35:01 compute-0 openstack_network_exporter[129925]: 
Oct 09 16:35:01 compute-0 openstack_network_exporter[129925]: ERROR   16:35:01 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Oct 09 16:35:01 compute-0 openstack_network_exporter[129925]: 
Oct 09 16:35:01 compute-0 unix_chkpwd[151603]: password check failed for user (root)
Oct 09 16:35:01 compute-0 sshd-session[151601]: pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=193.46.255.7  user=root
Oct 09 16:35:02 compute-0 nova_compute[117331]: 2025-10-09 16:35:02.243 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 09 16:35:03 compute-0 sshd-session[151601]: Failed password for root from 193.46.255.7 port 50866 ssh2
Oct 09 16:35:03 compute-0 podman[151605]: 2025-10-09 16:35:03.834874995 +0000 UTC m=+0.063146436 container health_status 0e966c7a283689daf23c5db5017f23f685ee836319240188a9f70ddd246563c5 (image=38.102.83.66:5001/podified-master-centos10/openstack-neutron-metadata-agent-ovn:watcher_latest, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251007, org.label-schema.license=GPLv2, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': '38.102.83.66:5001/podified-master-centos10/openstack-neutron-metadata-agent-ovn:watcher_latest', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=watcher_latest, io.buildah.version=1.41.4, managed_by=edpm_ansible)
Oct 09 16:35:03 compute-0 podman[151606]: 2025-10-09 16:35:03.83910436 +0000 UTC m=+0.063607572 container health_status 8227728d23f374cb9528be6ddf01dff38d8c792912a17b0d137932a18390b820 (image=38.102.83.66:5001/podified-master-centos10/openstack-iscsid:watcher_latest, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.4, managed_by=edpm_ansible, container_name=iscsid, org.label-schema.name=CentOS Stream 10 Base Image, tcib_build_tag=watcher_latest, config_id=iscsid, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251007, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': '38.102.83.66:5001/podified-master-centos10/openstack-iscsid:watcher_latest', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Oct 09 16:35:04 compute-0 nova_compute[117331]: 2025-10-09 16:35:04.199 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 09 16:35:05 compute-0 unix_chkpwd[151644]: password check failed for user (root)
Oct 09 16:35:07 compute-0 nova_compute[117331]: 2025-10-09 16:35:07.246 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 09 16:35:07 compute-0 sshd-session[151601]: Failed password for root from 193.46.255.7 port 50866 ssh2
Oct 09 16:35:08 compute-0 unix_chkpwd[151645]: password check failed for user (root)
Oct 09 16:35:08 compute-0 ovn_metadata_agent[28608]: 2025-10-09 16:35:08.387 28613 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:2a:10:2c 10.100.0.2'], port_security=[], type=localport, nat_addresses=[], virtual_parent=[], up=[False], options={}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.2/28', 'neutron:device_id': 'ovnmeta-cf3aa351-4d26-41f3-8cb5-1ff2d3d995c8', 'neutron:device_owner': 'network:distributed', 'neutron:mtu': '', 'neutron:network_name': 'neutron-cf3aa351-4d26-41f3-8cb5-1ff2d3d995c8', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '5b6d650a71ca47d58b10f5fd874c4898', 'neutron:revision_number': '2', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=f19947fc-c2ef-4762-adfc-471bb24a038d, chassis=[], tunnel_key=1, gateway_chassis=[], requested_chassis=[], logical_port=7d2fd2fc-650d-4abc-8268-a14a8cdfd51e) old=Port_Binding(mac=['fa:16:3e:2a:10:2c'], external_ids={'neutron:cidrs': '', 'neutron:device_id': 'ovnmeta-cf3aa351-4d26-41f3-8cb5-1ff2d3d995c8', 'neutron:device_owner': 'network:distributed', 'neutron:mtu': '', 'neutron:network_name': 'neutron-cf3aa351-4d26-41f3-8cb5-1ff2d3d995c8', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '5b6d650a71ca47d58b10f5fd874c4898', 'neutron:revision_number': '1', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}) matches /usr/lib/python3.12/site-packages/ovsdbapp/backend/ovs_idl/event.py:55
Oct 09 16:35:08 compute-0 ovn_metadata_agent[28608]: 2025-10-09 16:35:08.388 28613 INFO neutron.agent.ovn.metadata.agent [-] Metadata Port 7d2fd2fc-650d-4abc-8268-a14a8cdfd51e in datapath cf3aa351-4d26-41f3-8cb5-1ff2d3d995c8 updated
Oct 09 16:35:08 compute-0 ovn_metadata_agent[28608]: 2025-10-09 16:35:08.389 28613 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network cf3aa351-4d26-41f3-8cb5-1ff2d3d995c8, tearing the namespace down if needed _get_provision_params /usr/lib/python3.12/site-packages/neutron/agent/ovn/metadata/agent.py:756
Oct 09 16:35:08 compute-0 ovn_metadata_agent[28608]: 2025-10-09 16:35:08.391 139687 DEBUG oslo.privsep.daemon [-] privsep: reply[80fb773e-02bd-47fe-8c9f-6deb1822e3e6]: (4, False) _call_back /usr/lib/python3.12/site-packages/oslo_privsep/daemon.py:515
Oct 09 16:35:09 compute-0 nova_compute[117331]: 2025-10-09 16:35:09.251 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 09 16:35:10 compute-0 sshd-session[151601]: Failed password for root from 193.46.255.7 port 50866 ssh2
Oct 09 16:35:12 compute-0 nova_compute[117331]: 2025-10-09 16:35:12.249 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 09 16:35:12 compute-0 sshd-session[151601]: Received disconnect from 193.46.255.7 port 50866:11:  [preauth]
Oct 09 16:35:12 compute-0 sshd-session[151601]: Disconnected from authenticating user root 193.46.255.7 port 50866 [preauth]
Oct 09 16:35:12 compute-0 sshd-session[151601]: PAM 2 more authentication failures; logname= uid=0 euid=0 tty=ssh ruser= rhost=193.46.255.7  user=root
Oct 09 16:35:12 compute-0 podman[151646]: 2025-10-09 16:35:12.822214403 +0000 UTC m=+0.055184173 container health_status 64fa251112d01abb34ec1bb8e22019815ddcaaaa391ce5ec91517147ac5a9994 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, distribution-scope=public, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., url=https://catalog.redhat.com/en/search?searchType=containers, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, com.redhat.component=ubi9-minimal-container, managed_by=edpm_ansible, release=1755695350, io.openshift.tags=minimal rhel9, name=ubi9-minimal, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, vcs-type=git, config_id=edpm, container_name=openstack_network_exporter, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.expose-services=, vendor=Red Hat, Inc., architecture=x86_64, build-date=2025-08-20T13:12:41, io.buildah.version=1.33.7, maintainer=Red Hat, Inc., version=9.6, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']})
Oct 09 16:35:12 compute-0 podman[151647]: 2025-10-09 16:35:12.854559789 +0000 UTC m=+0.082605323 container health_status d615e4f24ff62fbb14be4a63e6da568ec5b22309773c1c66103b6b4de64a8a2f (image=38.102.83.66:5001/podified-master-centos10/openstack-ovn-controller:watcher_latest, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_controller, org.label-schema.build-date=20251007, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': '38.102.83.66:5001/podified-master-centos10/openstack-ovn-controller:watcher_latest', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.4, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, config_id=ovn_controller, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=watcher_latest)
Oct 09 16:35:14 compute-0 nova_compute[117331]: 2025-10-09 16:35:14.255 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 09 16:35:15 compute-0 ovn_metadata_agent[28608]: 2025-10-09 16:35:15.355 28613 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:a1:46:a7 10.100.0.2'], port_security=[], type=localport, nat_addresses=[], virtual_parent=[], up=[False], options={}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.2/28', 'neutron:device_id': 'ovnmeta-5680d7ca-d08a-4358-8ab2-ab154ca870a2', 'neutron:device_owner': 'network:distributed', 'neutron:mtu': '', 'neutron:network_name': 'neutron-5680d7ca-d08a-4358-8ab2-ab154ca870a2', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'a60acfe52e4b4b7f912654a59f0978b7', 'neutron:revision_number': '2', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=52ce4592-b468-4396-9c03-279491d95f58, chassis=[], tunnel_key=1, gateway_chassis=[], requested_chassis=[], logical_port=d9000b20-7be5-4763-95d0-76ee5c2be13b) old=Port_Binding(mac=['fa:16:3e:a1:46:a7'], external_ids={'neutron:cidrs': '', 'neutron:device_id': 'ovnmeta-5680d7ca-d08a-4358-8ab2-ab154ca870a2', 'neutron:device_owner': 'network:distributed', 'neutron:mtu': '', 'neutron:network_name': 'neutron-5680d7ca-d08a-4358-8ab2-ab154ca870a2', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'a60acfe52e4b4b7f912654a59f0978b7', 'neutron:revision_number': '1', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}) matches /usr/lib/python3.12/site-packages/ovsdbapp/backend/ovs_idl/event.py:55
Oct 09 16:35:15 compute-0 ovn_metadata_agent[28608]: 2025-10-09 16:35:15.357 28613 INFO neutron.agent.ovn.metadata.agent [-] Metadata Port d9000b20-7be5-4763-95d0-76ee5c2be13b in datapath 5680d7ca-d08a-4358-8ab2-ab154ca870a2 updated
Oct 09 16:35:15 compute-0 ovn_metadata_agent[28608]: 2025-10-09 16:35:15.358 28613 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 5680d7ca-d08a-4358-8ab2-ab154ca870a2, tearing the namespace down if needed _get_provision_params /usr/lib/python3.12/site-packages/neutron/agent/ovn/metadata/agent.py:756
Oct 09 16:35:15 compute-0 ovn_metadata_agent[28608]: 2025-10-09 16:35:15.359 139687 DEBUG oslo.privsep.daemon [-] privsep: reply[14366e3c-d84f-403c-913d-7e7d551d7eee]: (4, False) _call_back /usr/lib/python3.12/site-packages/oslo_privsep/daemon.py:515
Oct 09 16:35:17 compute-0 nova_compute[117331]: 2025-10-09 16:35:17.251 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 09 16:35:18 compute-0 ovn_metadata_agent[28608]: 2025-10-09 16:35:18.275 28613 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=25, options={'arp_ns_explicit_output': 'true', 'fdb_removal_limit': '0', 'ignore_lsp_down': 'false', 'mac_binding_removal_limit': '0', 'mac_prefix': '4a:65:5f', 'max_tunid': '16711680', 'northd_internal_version': '24.09.4-20.37.0-77.8', 'svc_monitor_mac': '32:24:1e:e3:5d:e8'}, ipsec=False) old=SB_Global(nb_cfg=24) matches /usr/lib/python3.12/site-packages/ovsdbapp/backend/ovs_idl/event.py:55
Oct 09 16:35:18 compute-0 nova_compute[117331]: 2025-10-09 16:35:18.276 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 09 16:35:18 compute-0 ovn_metadata_agent[28608]: 2025-10-09 16:35:18.277 28613 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 6 seconds run /usr/lib/python3.12/site-packages/neutron/agent/ovn/metadata/agent.py:367
Oct 09 16:35:19 compute-0 nova_compute[117331]: 2025-10-09 16:35:19.257 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 09 16:35:22 compute-0 nova_compute[117331]: 2025-10-09 16:35:22.253 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 09 16:35:23 compute-0 nova_compute[117331]: 2025-10-09 16:35:23.183 2 DEBUG oslo_concurrency.lockutils [None req-cb9cdc7e-df8e-4d08-b875-ea44c8ddd4b4 685b4924c5a04af7ae6f4a328bb50f14 a60acfe52e4b4b7f912654a59f0978b7 - - default default] Acquiring lock "85c9d87b-0e28-425f-b54e-c14066ba6918" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:405
Oct 09 16:35:23 compute-0 nova_compute[117331]: 2025-10-09 16:35:23.184 2 DEBUG oslo_concurrency.lockutils [None req-cb9cdc7e-df8e-4d08-b875-ea44c8ddd4b4 685b4924c5a04af7ae6f4a328bb50f14 a60acfe52e4b4b7f912654a59f0978b7 - - default default] Lock "85c9d87b-0e28-425f-b54e-c14066ba6918" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:410
Oct 09 16:35:23 compute-0 nova_compute[117331]: 2025-10-09 16:35:23.690 2 DEBUG nova.compute.manager [None req-cb9cdc7e-df8e-4d08-b875-ea44c8ddd4b4 685b4924c5a04af7ae6f4a328bb50f14 a60acfe52e4b4b7f912654a59f0978b7 - - default default] [instance: 85c9d87b-0e28-425f-b54e-c14066ba6918] Starting instance... _do_build_and_run_instance /usr/lib/python3.12/site-packages/nova/compute/manager.py:2472
Oct 09 16:35:24 compute-0 nova_compute[117331]: 2025-10-09 16:35:24.250 2 DEBUG oslo_concurrency.lockutils [None req-cb9cdc7e-df8e-4d08-b875-ea44c8ddd4b4 685b4924c5a04af7ae6f4a328bb50f14 a60acfe52e4b4b7f912654a59f0978b7 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:405
Oct 09 16:35:24 compute-0 nova_compute[117331]: 2025-10-09 16:35:24.251 2 DEBUG oslo_concurrency.lockutils [None req-cb9cdc7e-df8e-4d08-b875-ea44c8ddd4b4 685b4924c5a04af7ae6f4a328bb50f14 a60acfe52e4b4b7f912654a59f0978b7 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.000s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:410
Oct 09 16:35:24 compute-0 nova_compute[117331]: 2025-10-09 16:35:24.259 2 DEBUG nova.virt.hardware [None req-cb9cdc7e-df8e-4d08-b875-ea44c8ddd4b4 685b4924c5a04af7ae6f4a328bb50f14 a60acfe52e4b4b7f912654a59f0978b7 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.12/site-packages/nova/virt/hardware.py:2528
Oct 09 16:35:24 compute-0 nova_compute[117331]: 2025-10-09 16:35:24.260 2 INFO nova.compute.claims [None req-cb9cdc7e-df8e-4d08-b875-ea44c8ddd4b4 685b4924c5a04af7ae6f4a328bb50f14 a60acfe52e4b4b7f912654a59f0978b7 - - default default] [instance: 85c9d87b-0e28-425f-b54e-c14066ba6918] Claim successful on node compute-0.ctlplane.example.com
Oct 09 16:35:24 compute-0 nova_compute[117331]: 2025-10-09 16:35:24.263 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 09 16:35:24 compute-0 ovn_metadata_agent[28608]: 2025-10-09 16:35:24.279 28613 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=9954897f-aa83-45dd-8e84-289816676c2a, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '25'}),), if_exists=True) do_commit /usr/lib/python3.12/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 09 16:35:24 compute-0 podman[151691]: 2025-10-09 16:35:24.832473476 +0000 UTC m=+0.062566208 container health_status 270830b5167cafbab735d8118980f26987e673cd0fa464dd2202394830bcca66 (image=38.102.83.66:5001/podified-master-centos10/openstack-multipathd:watcher_latest, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.4, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_build_tag=watcher_latest, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': '38.102.83.66:5001/podified-master-centos10/openstack-multipathd:watcher_latest', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251007, config_id=multipathd)
Oct 09 16:35:25 compute-0 nova_compute[117331]: 2025-10-09 16:35:25.345 2 DEBUG nova.compute.provider_tree [None req-cb9cdc7e-df8e-4d08-b875-ea44c8ddd4b4 685b4924c5a04af7ae6f4a328bb50f14 a60acfe52e4b4b7f912654a59f0978b7 - - default default] Inventory has not changed in ProviderTree for provider: 593051b8-2000-437f-a915-2616fc8b1671 update_inventory /usr/lib/python3.12/site-packages/nova/compute/provider_tree.py:180
Oct 09 16:35:25 compute-0 nova_compute[117331]: 2025-10-09 16:35:25.854 2 DEBUG nova.scheduler.client.report [None req-cb9cdc7e-df8e-4d08-b875-ea44c8ddd4b4 685b4924c5a04af7ae6f4a328bb50f14 a60acfe52e4b4b7f912654a59f0978b7 - - default default] Inventory has not changed for provider 593051b8-2000-437f-a915-2616fc8b1671 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 79, 'reserved': 1, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.12/site-packages/nova/scheduler/client/report.py:958
Oct 09 16:35:26 compute-0 nova_compute[117331]: 2025-10-09 16:35:26.365 2 DEBUG oslo_concurrency.lockutils [None req-cb9cdc7e-df8e-4d08-b875-ea44c8ddd4b4 685b4924c5a04af7ae6f4a328bb50f14 a60acfe52e4b4b7f912654a59f0978b7 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 2.114s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:424
Oct 09 16:35:26 compute-0 nova_compute[117331]: 2025-10-09 16:35:26.366 2 DEBUG nova.compute.manager [None req-cb9cdc7e-df8e-4d08-b875-ea44c8ddd4b4 685b4924c5a04af7ae6f4a328bb50f14 a60acfe52e4b4b7f912654a59f0978b7 - - default default] [instance: 85c9d87b-0e28-425f-b54e-c14066ba6918] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.12/site-packages/nova/compute/manager.py:2869
Oct 09 16:35:26 compute-0 nova_compute[117331]: 2025-10-09 16:35:26.875 2 DEBUG nova.compute.manager [None req-cb9cdc7e-df8e-4d08-b875-ea44c8ddd4b4 685b4924c5a04af7ae6f4a328bb50f14 a60acfe52e4b4b7f912654a59f0978b7 - - default default] [instance: 85c9d87b-0e28-425f-b54e-c14066ba6918] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.12/site-packages/nova/compute/manager.py:2016
Oct 09 16:35:26 compute-0 nova_compute[117331]: 2025-10-09 16:35:26.876 2 DEBUG nova.network.neutron [None req-cb9cdc7e-df8e-4d08-b875-ea44c8ddd4b4 685b4924c5a04af7ae6f4a328bb50f14 a60acfe52e4b4b7f912654a59f0978b7 - - default default] [instance: 85c9d87b-0e28-425f-b54e-c14066ba6918] allocate_for_instance() allocate_for_instance /usr/lib/python3.12/site-packages/nova/network/neutron.py:1208
Oct 09 16:35:26 compute-0 nova_compute[117331]: 2025-10-09 16:35:26.876 2 WARNING neutronclient.v2_0.client [None req-cb9cdc7e-df8e-4d08-b875-ea44c8ddd4b4 685b4924c5a04af7ae6f4a328bb50f14 a60acfe52e4b4b7f912654a59f0978b7 - - default default] The python binding code in neutronclient is deprecated in favor of OpenstackSDK, please use that as this will be removed in a future release.
Oct 09 16:35:26 compute-0 nova_compute[117331]: 2025-10-09 16:35:26.877 2 WARNING neutronclient.v2_0.client [None req-cb9cdc7e-df8e-4d08-b875-ea44c8ddd4b4 685b4924c5a04af7ae6f4a328bb50f14 a60acfe52e4b4b7f912654a59f0978b7 - - default default] The python binding code in neutronclient is deprecated in favor of OpenstackSDK, please use that as this will be removed in a future release.
Oct 09 16:35:27 compute-0 nova_compute[117331]: 2025-10-09 16:35:27.255 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 09 16:35:27 compute-0 nova_compute[117331]: 2025-10-09 16:35:27.385 2 INFO nova.virt.libvirt.driver [None req-cb9cdc7e-df8e-4d08-b875-ea44c8ddd4b4 685b4924c5a04af7ae6f4a328bb50f14 a60acfe52e4b4b7f912654a59f0978b7 - - default default] [instance: 85c9d87b-0e28-425f-b54e-c14066ba6918] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Oct 09 16:35:27 compute-0 nova_compute[117331]: 2025-10-09 16:35:27.891 2 DEBUG nova.compute.manager [None req-cb9cdc7e-df8e-4d08-b875-ea44c8ddd4b4 685b4924c5a04af7ae6f4a328bb50f14 a60acfe52e4b4b7f912654a59f0978b7 - - default default] [instance: 85c9d87b-0e28-425f-b54e-c14066ba6918] Start building block device mappings for instance. _build_resources /usr/lib/python3.12/site-packages/nova/compute/manager.py:2904
Oct 09 16:35:28 compute-0 nova_compute[117331]: 2025-10-09 16:35:28.306 2 DEBUG oslo_service.periodic_task [None req-92a13bed-c081-4c7b-87f3-bafebefb79f2 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.12/site-packages/oslo_service/periodic_task.py:210
Oct 09 16:35:28 compute-0 nova_compute[117331]: 2025-10-09 16:35:28.317 2 DEBUG nova.network.neutron [None req-cb9cdc7e-df8e-4d08-b875-ea44c8ddd4b4 685b4924c5a04af7ae6f4a328bb50f14 a60acfe52e4b4b7f912654a59f0978b7 - - default default] [instance: 85c9d87b-0e28-425f-b54e-c14066ba6918] Successfully created port: 2cf33271-ebfe-4d2e-9668-8905e0bc34b8 _create_port_minimal /usr/lib/python3.12/site-packages/nova/network/neutron.py:550
Oct 09 16:35:28 compute-0 podman[151710]: 2025-10-09 16:35:28.845293136 +0000 UTC m=+0.069451636 container health_status 86aa93e887899e54d383b0fc17fb3c5637680d1d8e146a130a557c837f0260fe (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible)
Oct 09 16:35:28 compute-0 nova_compute[117331]: 2025-10-09 16:35:28.906 2 DEBUG nova.compute.manager [None req-cb9cdc7e-df8e-4d08-b875-ea44c8ddd4b4 685b4924c5a04af7ae6f4a328bb50f14 a60acfe52e4b4b7f912654a59f0978b7 - - default default] [instance: 85c9d87b-0e28-425f-b54e-c14066ba6918] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.12/site-packages/nova/compute/manager.py:2678
Oct 09 16:35:28 compute-0 nova_compute[117331]: 2025-10-09 16:35:28.908 2 DEBUG nova.virt.libvirt.driver [None req-cb9cdc7e-df8e-4d08-b875-ea44c8ddd4b4 685b4924c5a04af7ae6f4a328bb50f14 a60acfe52e4b4b7f912654a59f0978b7 - - default default] [instance: 85c9d87b-0e28-425f-b54e-c14066ba6918] Creating instance directory _create_image /usr/lib/python3.12/site-packages/nova/virt/libvirt/driver.py:5138
Oct 09 16:35:28 compute-0 nova_compute[117331]: 2025-10-09 16:35:28.909 2 INFO nova.virt.libvirt.driver [None req-cb9cdc7e-df8e-4d08-b875-ea44c8ddd4b4 685b4924c5a04af7ae6f4a328bb50f14 a60acfe52e4b4b7f912654a59f0978b7 - - default default] [instance: 85c9d87b-0e28-425f-b54e-c14066ba6918] Creating image(s)
Oct 09 16:35:28 compute-0 nova_compute[117331]: 2025-10-09 16:35:28.910 2 DEBUG oslo_concurrency.lockutils [None req-cb9cdc7e-df8e-4d08-b875-ea44c8ddd4b4 685b4924c5a04af7ae6f4a328bb50f14 a60acfe52e4b4b7f912654a59f0978b7 - - default default] Acquiring lock "/var/lib/nova/instances/85c9d87b-0e28-425f-b54e-c14066ba6918/disk.info" by "nova.virt.libvirt.imagebackend.Image.resolve_driver_format.<locals>.write_to_disk_info_file" inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:405
Oct 09 16:35:28 compute-0 nova_compute[117331]: 2025-10-09 16:35:28.911 2 DEBUG oslo_concurrency.lockutils [None req-cb9cdc7e-df8e-4d08-b875-ea44c8ddd4b4 685b4924c5a04af7ae6f4a328bb50f14 a60acfe52e4b4b7f912654a59f0978b7 - - default default] Lock "/var/lib/nova/instances/85c9d87b-0e28-425f-b54e-c14066ba6918/disk.info" acquired by "nova.virt.libvirt.imagebackend.Image.resolve_driver_format.<locals>.write_to_disk_info_file" :: waited 0.001s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:410
Oct 09 16:35:28 compute-0 nova_compute[117331]: 2025-10-09 16:35:28.912 2 DEBUG oslo_concurrency.lockutils [None req-cb9cdc7e-df8e-4d08-b875-ea44c8ddd4b4 685b4924c5a04af7ae6f4a328bb50f14 a60acfe52e4b4b7f912654a59f0978b7 - - default default] Lock "/var/lib/nova/instances/85c9d87b-0e28-425f-b54e-c14066ba6918/disk.info" "released" by "nova.virt.libvirt.imagebackend.Image.resolve_driver_format.<locals>.write_to_disk_info_file" :: held 0.001s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:424
Oct 09 16:35:28 compute-0 nova_compute[117331]: 2025-10-09 16:35:28.913 2 DEBUG oslo_utils.imageutils.format_inspector [None req-cb9cdc7e-df8e-4d08-b875-ea44c8ddd4b4 685b4924c5a04af7ae6f4a328bb50f14 a60acfe52e4b4b7f912654a59f0978b7 - - default default] Format inspector for vmdk does not match, excluding from consideration (Signature KDMV not found: b'\xebH\x90\x00') _process_chunk /usr/lib/python3.12/site-packages/oslo_utils/imageutils/format_inspector.py:1365
Oct 09 16:35:28 compute-0 nova_compute[117331]: 2025-10-09 16:35:28.919 2 DEBUG oslo_utils.imageutils.format_inspector [None req-cb9cdc7e-df8e-4d08-b875-ea44c8ddd4b4 685b4924c5a04af7ae6f4a328bb50f14 a60acfe52e4b4b7f912654a59f0978b7 - - default default] Format inspector for vhdx does not match, excluding from consideration (Region signature not found at 30000) _process_chunk /usr/lib/python3.12/site-packages/oslo_utils/imageutils/format_inspector.py:1365
Oct 09 16:35:28 compute-0 nova_compute[117331]: 2025-10-09 16:35:28.921 2 DEBUG oslo_concurrency.processutils [None req-cb9cdc7e-df8e-4d08-b875-ea44c8ddd4b4 685b4924c5a04af7ae6f4a328bb50f14 a60acfe52e4b4b7f912654a59f0978b7 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/cea3dacfdea0f3734ae526b812744e847bc2d356 --force-share --output=json execute /usr/lib/python3.12/site-packages/oslo_concurrency/processutils.py:349
Oct 09 16:35:29 compute-0 nova_compute[117331]: 2025-10-09 16:35:29.011 2 DEBUG oslo_concurrency.processutils [None req-cb9cdc7e-df8e-4d08-b875-ea44c8ddd4b4 685b4924c5a04af7ae6f4a328bb50f14 a60acfe52e4b4b7f912654a59f0978b7 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/cea3dacfdea0f3734ae526b812744e847bc2d356 --force-share --output=json" returned: 0 in 0.091s execute /usr/lib/python3.12/site-packages/oslo_concurrency/processutils.py:372
Oct 09 16:35:29 compute-0 nova_compute[117331]: 2025-10-09 16:35:29.013 2 DEBUG oslo_concurrency.lockutils [None req-cb9cdc7e-df8e-4d08-b875-ea44c8ddd4b4 685b4924c5a04af7ae6f4a328bb50f14 a60acfe52e4b4b7f912654a59f0978b7 - - default default] Acquiring lock "cea3dacfdea0f3734ae526b812744e847bc2d356" by "nova.virt.libvirt.imagebackend.Qcow2.create_image.<locals>.create_qcow2_image" inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:405
Oct 09 16:35:29 compute-0 nova_compute[117331]: 2025-10-09 16:35:29.013 2 DEBUG oslo_concurrency.lockutils [None req-cb9cdc7e-df8e-4d08-b875-ea44c8ddd4b4 685b4924c5a04af7ae6f4a328bb50f14 a60acfe52e4b4b7f912654a59f0978b7 - - default default] Lock "cea3dacfdea0f3734ae526b812744e847bc2d356" acquired by "nova.virt.libvirt.imagebackend.Qcow2.create_image.<locals>.create_qcow2_image" :: waited 0.001s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:410
Oct 09 16:35:29 compute-0 nova_compute[117331]: 2025-10-09 16:35:29.014 2 DEBUG oslo_utils.imageutils.format_inspector [None req-cb9cdc7e-df8e-4d08-b875-ea44c8ddd4b4 685b4924c5a04af7ae6f4a328bb50f14 a60acfe52e4b4b7f912654a59f0978b7 - - default default] Format inspector for vmdk does not match, excluding from consideration (Signature KDMV not found: b'\xebH\x90\x00') _process_chunk /usr/lib/python3.12/site-packages/oslo_utils/imageutils/format_inspector.py:1365
Oct 09 16:35:29 compute-0 nova_compute[117331]: 2025-10-09 16:35:29.018 2 DEBUG oslo_utils.imageutils.format_inspector [None req-cb9cdc7e-df8e-4d08-b875-ea44c8ddd4b4 685b4924c5a04af7ae6f4a328bb50f14 a60acfe52e4b4b7f912654a59f0978b7 - - default default] Format inspector for vhdx does not match, excluding from consideration (Region signature not found at 30000) _process_chunk /usr/lib/python3.12/site-packages/oslo_utils/imageutils/format_inspector.py:1365
Oct 09 16:35:29 compute-0 nova_compute[117331]: 2025-10-09 16:35:29.019 2 DEBUG oslo_concurrency.processutils [None req-cb9cdc7e-df8e-4d08-b875-ea44c8ddd4b4 685b4924c5a04af7ae6f4a328bb50f14 a60acfe52e4b4b7f912654a59f0978b7 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/cea3dacfdea0f3734ae526b812744e847bc2d356 --force-share --output=json execute /usr/lib/python3.12/site-packages/oslo_concurrency/processutils.py:349
Oct 09 16:35:29 compute-0 nova_compute[117331]: 2025-10-09 16:35:29.080 2 DEBUG oslo_concurrency.processutils [None req-cb9cdc7e-df8e-4d08-b875-ea44c8ddd4b4 685b4924c5a04af7ae6f4a328bb50f14 a60acfe52e4b4b7f912654a59f0978b7 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/cea3dacfdea0f3734ae526b812744e847bc2d356 --force-share --output=json" returned: 0 in 0.062s execute /usr/lib/python3.12/site-packages/oslo_concurrency/processutils.py:372
Oct 09 16:35:29 compute-0 nova_compute[117331]: 2025-10-09 16:35:29.081 2 DEBUG oslo_concurrency.processutils [None req-cb9cdc7e-df8e-4d08-b875-ea44c8ddd4b4 685b4924c5a04af7ae6f4a328bb50f14 a60acfe52e4b4b7f912654a59f0978b7 - - default default] Running cmd (subprocess): env LC_ALL=C LANG=C qemu-img create -f qcow2 -o backing_file=/var/lib/nova/instances/_base/cea3dacfdea0f3734ae526b812744e847bc2d356,backing_fmt=raw /var/lib/nova/instances/85c9d87b-0e28-425f-b54e-c14066ba6918/disk 1073741824 execute /usr/lib/python3.12/site-packages/oslo_concurrency/processutils.py:349
Oct 09 16:35:29 compute-0 nova_compute[117331]: 2025-10-09 16:35:29.116 2 DEBUG oslo_concurrency.processutils [None req-cb9cdc7e-df8e-4d08-b875-ea44c8ddd4b4 685b4924c5a04af7ae6f4a328bb50f14 a60acfe52e4b4b7f912654a59f0978b7 - - default default] CMD "env LC_ALL=C LANG=C qemu-img create -f qcow2 -o backing_file=/var/lib/nova/instances/_base/cea3dacfdea0f3734ae526b812744e847bc2d356,backing_fmt=raw /var/lib/nova/instances/85c9d87b-0e28-425f-b54e-c14066ba6918/disk 1073741824" returned: 0 in 0.034s execute /usr/lib/python3.12/site-packages/oslo_concurrency/processutils.py:372
Oct 09 16:35:29 compute-0 nova_compute[117331]: 2025-10-09 16:35:29.118 2 DEBUG oslo_concurrency.lockutils [None req-cb9cdc7e-df8e-4d08-b875-ea44c8ddd4b4 685b4924c5a04af7ae6f4a328bb50f14 a60acfe52e4b4b7f912654a59f0978b7 - - default default] Lock "cea3dacfdea0f3734ae526b812744e847bc2d356" "released" by "nova.virt.libvirt.imagebackend.Qcow2.create_image.<locals>.create_qcow2_image" :: held 0.105s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:424
Oct 09 16:35:29 compute-0 nova_compute[117331]: 2025-10-09 16:35:29.120 2 DEBUG oslo_concurrency.processutils [None req-cb9cdc7e-df8e-4d08-b875-ea44c8ddd4b4 685b4924c5a04af7ae6f4a328bb50f14 a60acfe52e4b4b7f912654a59f0978b7 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/cea3dacfdea0f3734ae526b812744e847bc2d356 --force-share --output=json execute /usr/lib/python3.12/site-packages/oslo_concurrency/processutils.py:349
Oct 09 16:35:29 compute-0 nova_compute[117331]: 2025-10-09 16:35:29.174 2 DEBUG oslo_concurrency.processutils [None req-cb9cdc7e-df8e-4d08-b875-ea44c8ddd4b4 685b4924c5a04af7ae6f4a328bb50f14 a60acfe52e4b4b7f912654a59f0978b7 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/cea3dacfdea0f3734ae526b812744e847bc2d356 --force-share --output=json" returned: 0 in 0.054s execute /usr/lib/python3.12/site-packages/oslo_concurrency/processutils.py:372
Oct 09 16:35:29 compute-0 nova_compute[117331]: 2025-10-09 16:35:29.175 2 DEBUG nova.virt.disk.api [None req-cb9cdc7e-df8e-4d08-b875-ea44c8ddd4b4 685b4924c5a04af7ae6f4a328bb50f14 a60acfe52e4b4b7f912654a59f0978b7 - - default default] Checking if we can resize image /var/lib/nova/instances/85c9d87b-0e28-425f-b54e-c14066ba6918/disk. size=1073741824 can_resize_image /usr/lib/python3.12/site-packages/nova/virt/disk/api.py:164
Oct 09 16:35:29 compute-0 nova_compute[117331]: 2025-10-09 16:35:29.175 2 DEBUG oslo_concurrency.processutils [None req-cb9cdc7e-df8e-4d08-b875-ea44c8ddd4b4 685b4924c5a04af7ae6f4a328bb50f14 a60acfe52e4b4b7f912654a59f0978b7 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/85c9d87b-0e28-425f-b54e-c14066ba6918/disk --force-share --output=json execute /usr/lib/python3.12/site-packages/oslo_concurrency/processutils.py:349
Oct 09 16:35:29 compute-0 nova_compute[117331]: 2025-10-09 16:35:29.225 2 DEBUG oslo_concurrency.processutils [None req-cb9cdc7e-df8e-4d08-b875-ea44c8ddd4b4 685b4924c5a04af7ae6f4a328bb50f14 a60acfe52e4b4b7f912654a59f0978b7 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/85c9d87b-0e28-425f-b54e-c14066ba6918/disk --force-share --output=json" returned: 0 in 0.049s execute /usr/lib/python3.12/site-packages/oslo_concurrency/processutils.py:372
Oct 09 16:35:29 compute-0 nova_compute[117331]: 2025-10-09 16:35:29.226 2 DEBUG nova.virt.disk.api [None req-cb9cdc7e-df8e-4d08-b875-ea44c8ddd4b4 685b4924c5a04af7ae6f4a328bb50f14 a60acfe52e4b4b7f912654a59f0978b7 - - default default] Cannot resize image /var/lib/nova/instances/85c9d87b-0e28-425f-b54e-c14066ba6918/disk to a smaller size. can_resize_image /usr/lib/python3.12/site-packages/nova/virt/disk/api.py:170
Oct 09 16:35:29 compute-0 nova_compute[117331]: 2025-10-09 16:35:29.226 2 DEBUG nova.virt.libvirt.driver [None req-cb9cdc7e-df8e-4d08-b875-ea44c8ddd4b4 685b4924c5a04af7ae6f4a328bb50f14 a60acfe52e4b4b7f912654a59f0978b7 - - default default] [instance: 85c9d87b-0e28-425f-b54e-c14066ba6918] Created local disks _create_image /usr/lib/python3.12/site-packages/nova/virt/libvirt/driver.py:5270
Oct 09 16:35:29 compute-0 nova_compute[117331]: 2025-10-09 16:35:29.227 2 DEBUG nova.virt.libvirt.driver [None req-cb9cdc7e-df8e-4d08-b875-ea44c8ddd4b4 685b4924c5a04af7ae6f4a328bb50f14 a60acfe52e4b4b7f912654a59f0978b7 - - default default] [instance: 85c9d87b-0e28-425f-b54e-c14066ba6918] Ensure instance console log exists: /var/lib/nova/instances/85c9d87b-0e28-425f-b54e-c14066ba6918/console.log _ensure_console_log_for_instance /usr/lib/python3.12/site-packages/nova/virt/libvirt/driver.py:5017
Oct 09 16:35:29 compute-0 nova_compute[117331]: 2025-10-09 16:35:29.227 2 DEBUG oslo_concurrency.lockutils [None req-cb9cdc7e-df8e-4d08-b875-ea44c8ddd4b4 685b4924c5a04af7ae6f4a328bb50f14 a60acfe52e4b4b7f912654a59f0978b7 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:405
Oct 09 16:35:29 compute-0 nova_compute[117331]: 2025-10-09 16:35:29.228 2 DEBUG oslo_concurrency.lockutils [None req-cb9cdc7e-df8e-4d08-b875-ea44c8ddd4b4 685b4924c5a04af7ae6f4a328bb50f14 a60acfe52e4b4b7f912654a59f0978b7 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:410
Oct 09 16:35:29 compute-0 nova_compute[117331]: 2025-10-09 16:35:29.228 2 DEBUG oslo_concurrency.lockutils [None req-cb9cdc7e-df8e-4d08-b875-ea44c8ddd4b4 685b4924c5a04af7ae6f4a328bb50f14 a60acfe52e4b4b7f912654a59f0978b7 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:424
Oct 09 16:35:29 compute-0 nova_compute[117331]: 2025-10-09 16:35:29.266 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 09 16:35:29 compute-0 nova_compute[117331]: 2025-10-09 16:35:29.503 2 DEBUG nova.network.neutron [None req-cb9cdc7e-df8e-4d08-b875-ea44c8ddd4b4 685b4924c5a04af7ae6f4a328bb50f14 a60acfe52e4b4b7f912654a59f0978b7 - - default default] [instance: 85c9d87b-0e28-425f-b54e-c14066ba6918] Successfully updated port: 2cf33271-ebfe-4d2e-9668-8905e0bc34b8 _update_port /usr/lib/python3.12/site-packages/nova/network/neutron.py:588
Oct 09 16:35:29 compute-0 nova_compute[117331]: 2025-10-09 16:35:29.584 2 DEBUG nova.compute.manager [req-288a7b56-3aab-4565-86cf-07dca12cd39d req-33277d2d-832e-47a0-8834-cf9408997eb1 ee15961ad2be4bada6c106ac4f69f422 c076c42271004e7aa86e84416e33f826 - - default default] [instance: 85c9d87b-0e28-425f-b54e-c14066ba6918] Received event network-changed-2cf33271-ebfe-4d2e-9668-8905e0bc34b8 external_instance_event /usr/lib/python3.12/site-packages/nova/compute/manager.py:11812
Oct 09 16:35:29 compute-0 nova_compute[117331]: 2025-10-09 16:35:29.584 2 DEBUG nova.compute.manager [req-288a7b56-3aab-4565-86cf-07dca12cd39d req-33277d2d-832e-47a0-8834-cf9408997eb1 ee15961ad2be4bada6c106ac4f69f422 c076c42271004e7aa86e84416e33f826 - - default default] [instance: 85c9d87b-0e28-425f-b54e-c14066ba6918] Refreshing instance network info cache due to event network-changed-2cf33271-ebfe-4d2e-9668-8905e0bc34b8. external_instance_event /usr/lib/python3.12/site-packages/nova/compute/manager.py:11817
Oct 09 16:35:29 compute-0 nova_compute[117331]: 2025-10-09 16:35:29.584 2 DEBUG oslo_concurrency.lockutils [req-288a7b56-3aab-4565-86cf-07dca12cd39d req-33277d2d-832e-47a0-8834-cf9408997eb1 ee15961ad2be4bada6c106ac4f69f422 c076c42271004e7aa86e84416e33f826 - - default default] Acquiring lock "refresh_cache-85c9d87b-0e28-425f-b54e-c14066ba6918" lock /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:313
Oct 09 16:35:29 compute-0 nova_compute[117331]: 2025-10-09 16:35:29.585 2 DEBUG oslo_concurrency.lockutils [req-288a7b56-3aab-4565-86cf-07dca12cd39d req-33277d2d-832e-47a0-8834-cf9408997eb1 ee15961ad2be4bada6c106ac4f69f422 c076c42271004e7aa86e84416e33f826 - - default default] Acquired lock "refresh_cache-85c9d87b-0e28-425f-b54e-c14066ba6918" lock /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:316
Oct 09 16:35:29 compute-0 nova_compute[117331]: 2025-10-09 16:35:29.585 2 DEBUG nova.network.neutron [req-288a7b56-3aab-4565-86cf-07dca12cd39d req-33277d2d-832e-47a0-8834-cf9408997eb1 ee15961ad2be4bada6c106ac4f69f422 c076c42271004e7aa86e84416e33f826 - - default default] [instance: 85c9d87b-0e28-425f-b54e-c14066ba6918] Refreshing network info cache for port 2cf33271-ebfe-4d2e-9668-8905e0bc34b8 _get_instance_nw_info /usr/lib/python3.12/site-packages/nova/network/neutron.py:2067
Oct 09 16:35:29 compute-0 podman[127775]: time="2025-10-09T16:35:29Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Oct 09 16:35:29 compute-0 podman[127775]: @ - - [09/Oct/2025:16:35:29 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 19526 "" "Go-http-client/1.1"
Oct 09 16:35:29 compute-0 podman[127775]: @ - - [09/Oct/2025:16:35:29 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 3034 "" "Go-http-client/1.1"
Oct 09 16:35:30 compute-0 nova_compute[117331]: 2025-10-09 16:35:30.011 2 DEBUG oslo_concurrency.lockutils [None req-cb9cdc7e-df8e-4d08-b875-ea44c8ddd4b4 685b4924c5a04af7ae6f4a328bb50f14 a60acfe52e4b4b7f912654a59f0978b7 - - default default] Acquiring lock "refresh_cache-85c9d87b-0e28-425f-b54e-c14066ba6918" lock /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:313
Oct 09 16:35:30 compute-0 nova_compute[117331]: 2025-10-09 16:35:30.091 2 WARNING neutronclient.v2_0.client [req-288a7b56-3aab-4565-86cf-07dca12cd39d req-33277d2d-832e-47a0-8834-cf9408997eb1 ee15961ad2be4bada6c106ac4f69f422 c076c42271004e7aa86e84416e33f826 - - default default] The python binding code in neutronclient is deprecated in favor of OpenstackSDK, please use that as this will be removed in a future release.
Oct 09 16:35:30 compute-0 nova_compute[117331]: 2025-10-09 16:35:30.280 2 DEBUG nova.network.neutron [req-288a7b56-3aab-4565-86cf-07dca12cd39d req-33277d2d-832e-47a0-8834-cf9408997eb1 ee15961ad2be4bada6c106ac4f69f422 c076c42271004e7aa86e84416e33f826 - - default default] [instance: 85c9d87b-0e28-425f-b54e-c14066ba6918] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.12/site-packages/nova/network/neutron.py:3383
Oct 09 16:35:30 compute-0 nova_compute[117331]: 2025-10-09 16:35:30.306 2 DEBUG oslo_service.periodic_task [None req-92a13bed-c081-4c7b-87f3-bafebefb79f2 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.12/site-packages/oslo_service/periodic_task.py:210
Oct 09 16:35:30 compute-0 ovn_controller[19752]: 2025-10-09T16:35:30Z|00235|memory_trim|INFO|Detected inactivity (last active 30001 ms ago): trimming memory
Oct 09 16:35:30 compute-0 nova_compute[117331]: 2025-10-09 16:35:30.429 2 DEBUG nova.network.neutron [req-288a7b56-3aab-4565-86cf-07dca12cd39d req-33277d2d-832e-47a0-8834-cf9408997eb1 ee15961ad2be4bada6c106ac4f69f422 c076c42271004e7aa86e84416e33f826 - - default default] [instance: 85c9d87b-0e28-425f-b54e-c14066ba6918] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.12/site-packages/nova/network/neutron.py:116
Oct 09 16:35:30 compute-0 nova_compute[117331]: 2025-10-09 16:35:30.938 2 DEBUG oslo_concurrency.lockutils [req-288a7b56-3aab-4565-86cf-07dca12cd39d req-33277d2d-832e-47a0-8834-cf9408997eb1 ee15961ad2be4bada6c106ac4f69f422 c076c42271004e7aa86e84416e33f826 - - default default] Releasing lock "refresh_cache-85c9d87b-0e28-425f-b54e-c14066ba6918" lock /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:334
Oct 09 16:35:30 compute-0 nova_compute[117331]: 2025-10-09 16:35:30.938 2 DEBUG oslo_concurrency.lockutils [None req-cb9cdc7e-df8e-4d08-b875-ea44c8ddd4b4 685b4924c5a04af7ae6f4a328bb50f14 a60acfe52e4b4b7f912654a59f0978b7 - - default default] Acquired lock "refresh_cache-85c9d87b-0e28-425f-b54e-c14066ba6918" lock /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:316
Oct 09 16:35:30 compute-0 nova_compute[117331]: 2025-10-09 16:35:30.938 2 DEBUG nova.network.neutron [None req-cb9cdc7e-df8e-4d08-b875-ea44c8ddd4b4 685b4924c5a04af7ae6f4a328bb50f14 a60acfe52e4b4b7f912654a59f0978b7 - - default default] [instance: 85c9d87b-0e28-425f-b54e-c14066ba6918] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.12/site-packages/nova/network/neutron.py:2070
Oct 09 16:35:31 compute-0 nova_compute[117331]: 2025-10-09 16:35:31.306 2 DEBUG oslo_service.periodic_task [None req-92a13bed-c081-4c7b-87f3-bafebefb79f2 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.12/site-packages/oslo_service/periodic_task.py:210
Oct 09 16:35:31 compute-0 openstack_network_exporter[129925]: ERROR   16:35:31 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Oct 09 16:35:31 compute-0 openstack_network_exporter[129925]: ERROR   16:35:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Oct 09 16:35:31 compute-0 openstack_network_exporter[129925]: ERROR   16:35:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Oct 09 16:35:31 compute-0 openstack_network_exporter[129925]: ERROR   16:35:31 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Oct 09 16:35:31 compute-0 openstack_network_exporter[129925]: 
Oct 09 16:35:31 compute-0 openstack_network_exporter[129925]: ERROR   16:35:31 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Oct 09 16:35:31 compute-0 openstack_network_exporter[129925]: 
Oct 09 16:35:31 compute-0 nova_compute[117331]: 2025-10-09 16:35:31.866 2 DEBUG oslo_concurrency.lockutils [None req-92a13bed-c081-4c7b-87f3-bafebefb79f2 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:405
Oct 09 16:35:31 compute-0 nova_compute[117331]: 2025-10-09 16:35:31.866 2 DEBUG oslo_concurrency.lockutils [None req-92a13bed-c081-4c7b-87f3-bafebefb79f2 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:410
Oct 09 16:35:31 compute-0 nova_compute[117331]: 2025-10-09 16:35:31.866 2 DEBUG oslo_concurrency.lockutils [None req-92a13bed-c081-4c7b-87f3-bafebefb79f2 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:424
Oct 09 16:35:31 compute-0 nova_compute[117331]: 2025-10-09 16:35:31.866 2 DEBUG nova.compute.resource_tracker [None req-92a13bed-c081-4c7b-87f3-bafebefb79f2 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.12/site-packages/nova/compute/resource_tracker.py:937
Oct 09 16:35:31 compute-0 nova_compute[117331]: 2025-10-09 16:35:31.895 2 DEBUG nova.network.neutron [None req-cb9cdc7e-df8e-4d08-b875-ea44c8ddd4b4 685b4924c5a04af7ae6f4a328bb50f14 a60acfe52e4b4b7f912654a59f0978b7 - - default default] [instance: 85c9d87b-0e28-425f-b54e-c14066ba6918] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.12/site-packages/nova/network/neutron.py:3383
Oct 09 16:35:32 compute-0 nova_compute[117331]: 2025-10-09 16:35:32.032 2 WARNING nova.virt.libvirt.driver [None req-92a13bed-c081-4c7b-87f3-bafebefb79f2 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Oct 09 16:35:32 compute-0 nova_compute[117331]: 2025-10-09 16:35:32.032 2 DEBUG oslo_concurrency.processutils [None req-92a13bed-c081-4c7b-87f3-bafebefb79f2 - - - - - -] Running cmd (subprocess): env LANG=C uptime execute /usr/lib/python3.12/site-packages/oslo_concurrency/processutils.py:349
Oct 09 16:35:32 compute-0 nova_compute[117331]: 2025-10-09 16:35:32.050 2 DEBUG oslo_concurrency.processutils [None req-92a13bed-c081-4c7b-87f3-bafebefb79f2 - - - - - -] CMD "env LANG=C uptime" returned: 0 in 0.017s execute /usr/lib/python3.12/site-packages/oslo_concurrency/processutils.py:372
Oct 09 16:35:32 compute-0 nova_compute[117331]: 2025-10-09 16:35:32.050 2 DEBUG nova.compute.resource_tracker [None req-92a13bed-c081-4c7b-87f3-bafebefb79f2 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=6184MB free_disk=73.249267578125GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": 
"label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.12/site-packages/nova/compute/resource_tracker.py:1136
Oct 09 16:35:32 compute-0 nova_compute[117331]: 2025-10-09 16:35:32.050 2 DEBUG oslo_concurrency.lockutils [None req-92a13bed-c081-4c7b-87f3-bafebefb79f2 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:405
Oct 09 16:35:32 compute-0 nova_compute[117331]: 2025-10-09 16:35:32.051 2 DEBUG oslo_concurrency.lockutils [None req-92a13bed-c081-4c7b-87f3-bafebefb79f2 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:410
Oct 09 16:35:32 compute-0 nova_compute[117331]: 2025-10-09 16:35:32.074 2 WARNING neutronclient.v2_0.client [None req-cb9cdc7e-df8e-4d08-b875-ea44c8ddd4b4 685b4924c5a04af7ae6f4a328bb50f14 a60acfe52e4b4b7f912654a59f0978b7 - - default default] The python binding code in neutronclient is deprecated in favor of OpenstackSDK, please use that as this will be removed in a future release.
Oct 09 16:35:32 compute-0 nova_compute[117331]: 2025-10-09 16:35:32.220 2 DEBUG nova.network.neutron [None req-cb9cdc7e-df8e-4d08-b875-ea44c8ddd4b4 685b4924c5a04af7ae6f4a328bb50f14 a60acfe52e4b4b7f912654a59f0978b7 - - default default] [instance: 85c9d87b-0e28-425f-b54e-c14066ba6918] Updating instance_info_cache with network_info: [{"id": "2cf33271-ebfe-4d2e-9668-8905e0bc34b8", "address": "fa:16:3e:5a:42:08", "network": {"id": "cf3aa351-4d26-41f3-8cb5-1ff2d3d995c8", "bridge": "br-int", "label": "tempest-TestExecuteWorkloadStabilizationStrategy-105109230-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "5b6d650a71ca47d58b10f5fd874c4898", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2cf33271-eb", "ovs_interfaceid": "2cf33271-ebfe-4d2e-9668-8905e0bc34b8", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.12/site-packages/nova/network/neutron.py:116
Oct 09 16:35:32 compute-0 nova_compute[117331]: 2025-10-09 16:35:32.257 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 09 16:35:32 compute-0 nova_compute[117331]: 2025-10-09 16:35:32.726 2 DEBUG oslo_concurrency.lockutils [None req-cb9cdc7e-df8e-4d08-b875-ea44c8ddd4b4 685b4924c5a04af7ae6f4a328bb50f14 a60acfe52e4b4b7f912654a59f0978b7 - - default default] Releasing lock "refresh_cache-85c9d87b-0e28-425f-b54e-c14066ba6918" lock /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:334
Oct 09 16:35:32 compute-0 nova_compute[117331]: 2025-10-09 16:35:32.727 2 DEBUG nova.compute.manager [None req-cb9cdc7e-df8e-4d08-b875-ea44c8ddd4b4 685b4924c5a04af7ae6f4a328bb50f14 a60acfe52e4b4b7f912654a59f0978b7 - - default default] [instance: 85c9d87b-0e28-425f-b54e-c14066ba6918] Instance network_info: |[{"id": "2cf33271-ebfe-4d2e-9668-8905e0bc34b8", "address": "fa:16:3e:5a:42:08", "network": {"id": "cf3aa351-4d26-41f3-8cb5-1ff2d3d995c8", "bridge": "br-int", "label": "tempest-TestExecuteWorkloadStabilizationStrategy-105109230-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "5b6d650a71ca47d58b10f5fd874c4898", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2cf33271-eb", "ovs_interfaceid": "2cf33271-ebfe-4d2e-9668-8905e0bc34b8", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.12/site-packages/nova/compute/manager.py:2031
Oct 09 16:35:32 compute-0 nova_compute[117331]: 2025-10-09 16:35:32.729 2 DEBUG nova.virt.libvirt.driver [None req-cb9cdc7e-df8e-4d08-b875-ea44c8ddd4b4 685b4924c5a04af7ae6f4a328bb50f14 a60acfe52e4b4b7f912654a59f0978b7 - - default default] [instance: 85c9d87b-0e28-425f-b54e-c14066ba6918] Start _get_guest_xml network_info=[{"id": "2cf33271-ebfe-4d2e-9668-8905e0bc34b8", "address": "fa:16:3e:5a:42:08", "network": {"id": "cf3aa351-4d26-41f3-8cb5-1ff2d3d995c8", "bridge": "br-int", "label": "tempest-TestExecuteWorkloadStabilizationStrategy-105109230-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "5b6d650a71ca47d58b10f5fd874c4898", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2cf33271-eb", "ovs_interfaceid": "2cf33271-ebfe-4d2e-9668-8905e0bc34b8", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-09T16:07:02Z,direct_url=<?>,disk_format='qcow2',id=b7d6e0af-25e4-4227-9dc6-43143898ceee,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='b30e8cf5e10742f190212b4cb97ce2c9',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-09T16:07:04Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'device_type': 'disk', 'encrypted': False, 'size': 0, 'encryption_format': None, 'guest_format': None, 'boot_index': 0, 'device_name': '/dev/vda', 'disk_bus': 'virtio', 'encryption_options': None, 'encryption_secret_uuid': None, 'image_id': 'b7d6e0af-25e4-4227-9dc6-43143898ceee'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None}share_info=None _get_guest_xml /usr/lib/python3.12/site-packages/nova/virt/libvirt/driver.py:8046
Oct 09 16:35:32 compute-0 nova_compute[117331]: 2025-10-09 16:35:32.732 2 WARNING nova.virt.libvirt.driver [None req-cb9cdc7e-df8e-4d08-b875-ea44c8ddd4b4 685b4924c5a04af7ae6f4a328bb50f14 a60acfe52e4b4b7f912654a59f0978b7 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Oct 09 16:35:32 compute-0 nova_compute[117331]: 2025-10-09 16:35:32.733 2 DEBUG nova.virt.driver [None req-cb9cdc7e-df8e-4d08-b875-ea44c8ddd4b4 685b4924c5a04af7ae6f4a328bb50f14 a60acfe52e4b4b7f912654a59f0978b7 - - default default] InstanceDriverMetadata: InstanceDriverMetadata(root_type='image', root_id='b7d6e0af-25e4-4227-9dc6-43143898ceee', instance_meta=NovaInstanceMeta(name='tempest-TestExecuteWorkloadStabilizationStrategy-server-512585471', uuid='85c9d87b-0e28-425f-b54e-c14066ba6918'), owner=OwnerMeta(userid='685b4924c5a04af7ae6f4a328bb50f14', username='tempest-TestExecuteWorkloadStabilizationStrategy-1000021673-project-admin', projectid='a60acfe52e4b4b7f912654a59f0978b7', projectname='tempest-TestExecuteWorkloadStabilizationStrategy-1000021673'), image=ImageMeta(id='b7d6e0af-25e4-4227-9dc6-43143898ceee', name=None, container_format='bare', disk_format='qcow2', min_disk=1, min_ram=0, properties={'hw_rng_model': 'virtio'}), flavor=FlavorMeta(name='m1.nano', flavorid='5aeb5bf9-70f3-4ab6-b418-2c3319a7d2b3', memory_mb=128, vcpus=1, root_gb=1, ephemeral_gb=0, extra_specs={'hw_rng:allowed': 'True'}, swap=0), network_info=[{"id": "2cf33271-ebfe-4d2e-9668-8905e0bc34b8", "address": "fa:16:3e:5a:42:08", "network": {"id": "cf3aa351-4d26-41f3-8cb5-1ff2d3d995c8", "bridge": "br-int", "label": "tempest-TestExecuteWorkloadStabilizationStrategy-105109230-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "5b6d650a71ca47d58b10f5fd874c4898", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2cf33271-eb", 
"ovs_interfaceid": "2cf33271-ebfe-4d2e-9668-8905e0bc34b8", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}], nova_package='32.1.0-0.20251008143712.076498e.el10', creation_time=1760027732.7338915) get_instance_driver_metadata /usr/lib/python3.12/site-packages/nova/virt/driver.py:437
Oct 09 16:35:32 compute-0 nova_compute[117331]: 2025-10-09 16:35:32.737 2 DEBUG nova.virt.libvirt.host [None req-cb9cdc7e-df8e-4d08-b875-ea44c8ddd4b4 685b4924c5a04af7ae6f4a328bb50f14 a60acfe52e4b4b7f912654a59f0978b7 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.12/site-packages/nova/virt/libvirt/host.py:1698
Oct 09 16:35:32 compute-0 nova_compute[117331]: 2025-10-09 16:35:32.737 2 DEBUG nova.virt.libvirt.host [None req-cb9cdc7e-df8e-4d08-b875-ea44c8ddd4b4 685b4924c5a04af7ae6f4a328bb50f14 a60acfe52e4b4b7f912654a59f0978b7 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.12/site-packages/nova/virt/libvirt/host.py:1708
Oct 09 16:35:32 compute-0 nova_compute[117331]: 2025-10-09 16:35:32.740 2 DEBUG nova.virt.libvirt.host [None req-cb9cdc7e-df8e-4d08-b875-ea44c8ddd4b4 685b4924c5a04af7ae6f4a328bb50f14 a60acfe52e4b4b7f912654a59f0978b7 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.12/site-packages/nova/virt/libvirt/host.py:1717
Oct 09 16:35:32 compute-0 nova_compute[117331]: 2025-10-09 16:35:32.741 2 DEBUG nova.virt.libvirt.host [None req-cb9cdc7e-df8e-4d08-b875-ea44c8ddd4b4 685b4924c5a04af7ae6f4a328bb50f14 a60acfe52e4b4b7f912654a59f0978b7 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.12/site-packages/nova/virt/libvirt/host.py:1724
Oct 09 16:35:32 compute-0 nova_compute[117331]: 2025-10-09 16:35:32.741 2 DEBUG nova.virt.libvirt.driver [None req-cb9cdc7e-df8e-4d08-b875-ea44c8ddd4b4 685b4924c5a04af7ae6f4a328bb50f14 a60acfe52e4b4b7f912654a59f0978b7 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.12/site-packages/nova/virt/libvirt/driver.py:5809
Oct 09 16:35:32 compute-0 nova_compute[117331]: 2025-10-09 16:35:32.741 2 DEBUG nova.virt.hardware [None req-cb9cdc7e-df8e-4d08-b875-ea44c8ddd4b4 685b4924c5a04af7ae6f4a328bb50f14 a60acfe52e4b4b7f912654a59f0978b7 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-10-09T16:07:00Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='5aeb5bf9-70f3-4ab6-b418-2c3319a7d2b3',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-09T16:07:02Z,direct_url=<?>,disk_format='qcow2',id=b7d6e0af-25e4-4227-9dc6-43143898ceee,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='b30e8cf5e10742f190212b4cb97ce2c9',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-09T16:07:04Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.12/site-packages/nova/virt/hardware.py:571
Oct 09 16:35:32 compute-0 nova_compute[117331]: 2025-10-09 16:35:32.741 2 DEBUG nova.virt.hardware [None req-cb9cdc7e-df8e-4d08-b875-ea44c8ddd4b4 685b4924c5a04af7ae6f4a328bb50f14 a60acfe52e4b4b7f912654a59f0978b7 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.12/site-packages/nova/virt/hardware.py:356
Oct 09 16:35:32 compute-0 nova_compute[117331]: 2025-10-09 16:35:32.742 2 DEBUG nova.virt.hardware [None req-cb9cdc7e-df8e-4d08-b875-ea44c8ddd4b4 685b4924c5a04af7ae6f4a328bb50f14 a60acfe52e4b4b7f912654a59f0978b7 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.12/site-packages/nova/virt/hardware.py:360
Oct 09 16:35:32 compute-0 nova_compute[117331]: 2025-10-09 16:35:32.742 2 DEBUG nova.virt.hardware [None req-cb9cdc7e-df8e-4d08-b875-ea44c8ddd4b4 685b4924c5a04af7ae6f4a328bb50f14 a60acfe52e4b4b7f912654a59f0978b7 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.12/site-packages/nova/virt/hardware.py:396
Oct 09 16:35:32 compute-0 nova_compute[117331]: 2025-10-09 16:35:32.742 2 DEBUG nova.virt.hardware [None req-cb9cdc7e-df8e-4d08-b875-ea44c8ddd4b4 685b4924c5a04af7ae6f4a328bb50f14 a60acfe52e4b4b7f912654a59f0978b7 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.12/site-packages/nova/virt/hardware.py:400
Oct 09 16:35:32 compute-0 nova_compute[117331]: 2025-10-09 16:35:32.742 2 DEBUG nova.virt.hardware [None req-cb9cdc7e-df8e-4d08-b875-ea44c8ddd4b4 685b4924c5a04af7ae6f4a328bb50f14 a60acfe52e4b4b7f912654a59f0978b7 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.12/site-packages/nova/virt/hardware.py:438
Oct 09 16:35:32 compute-0 nova_compute[117331]: 2025-10-09 16:35:32.743 2 DEBUG nova.virt.hardware [None req-cb9cdc7e-df8e-4d08-b875-ea44c8ddd4b4 685b4924c5a04af7ae6f4a328bb50f14 a60acfe52e4b4b7f912654a59f0978b7 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.12/site-packages/nova/virt/hardware.py:577
Oct 09 16:35:32 compute-0 nova_compute[117331]: 2025-10-09 16:35:32.743 2 DEBUG nova.virt.hardware [None req-cb9cdc7e-df8e-4d08-b875-ea44c8ddd4b4 685b4924c5a04af7ae6f4a328bb50f14 a60acfe52e4b4b7f912654a59f0978b7 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.12/site-packages/nova/virt/hardware.py:479
Oct 09 16:35:32 compute-0 nova_compute[117331]: 2025-10-09 16:35:32.743 2 DEBUG nova.virt.hardware [None req-cb9cdc7e-df8e-4d08-b875-ea44c8ddd4b4 685b4924c5a04af7ae6f4a328bb50f14 a60acfe52e4b4b7f912654a59f0978b7 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.12/site-packages/nova/virt/hardware.py:509
Oct 09 16:35:32 compute-0 nova_compute[117331]: 2025-10-09 16:35:32.743 2 DEBUG nova.virt.hardware [None req-cb9cdc7e-df8e-4d08-b875-ea44c8ddd4b4 685b4924c5a04af7ae6f4a328bb50f14 a60acfe52e4b4b7f912654a59f0978b7 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.12/site-packages/nova/virt/hardware.py:583
Oct 09 16:35:32 compute-0 nova_compute[117331]: 2025-10-09 16:35:32.744 2 DEBUG nova.virt.hardware [None req-cb9cdc7e-df8e-4d08-b875-ea44c8ddd4b4 685b4924c5a04af7ae6f4a328bb50f14 a60acfe52e4b4b7f912654a59f0978b7 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.12/site-packages/nova/virt/hardware.py:585
Oct 09 16:35:32 compute-0 nova_compute[117331]: 2025-10-09 16:35:32.747 2 DEBUG nova.virt.libvirt.vif [None req-cb9cdc7e-df8e-4d08-b875-ea44c8ddd4b4 685b4924c5a04af7ae6f4a328bb50f14 a60acfe52e4b4b7f912654a59f0978b7 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,compute_id=1,config_drive='',created_at=2025-10-09T16:35:22Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description=None,display_name='tempest-TestExecuteWorkloadStabilizationStrategy-server-512585471',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testexecuteworkloadstabilizationstrategy-server-5125854',id=26,image_ref='b7d6e0af-25e4-4227-9dc6-43143898ceee',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='a60acfe52e4b4b7f912654a59f0978b7',ramdisk_id='',reservation_id='r-52wfk6ph',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='admin,member,reader,manager',image_base_image_ref='b7d6e0af-25e4-4227-9dc6-43143898ceee',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestExecuteWorkloadStabilizationStrategy-1000021673',owner_user_name='
tempest-TestExecuteWorkloadStabilizationStrategy-1000021673-project-admin'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-09T16:35:27Z,user_data=None,user_id='685b4924c5a04af7ae6f4a328bb50f14',uuid=85c9d87b-0e28-425f-b54e-c14066ba6918,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "2cf33271-ebfe-4d2e-9668-8905e0bc34b8", "address": "fa:16:3e:5a:42:08", "network": {"id": "cf3aa351-4d26-41f3-8cb5-1ff2d3d995c8", "bridge": "br-int", "label": "tempest-TestExecuteWorkloadStabilizationStrategy-105109230-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "5b6d650a71ca47d58b10f5fd874c4898", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2cf33271-eb", "ovs_interfaceid": "2cf33271-ebfe-4d2e-9668-8905e0bc34b8", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.12/site-packages/nova/virt/libvirt/vif.py:574
Oct 09 16:35:32 compute-0 nova_compute[117331]: 2025-10-09 16:35:32.747 2 DEBUG nova.network.os_vif_util [None req-cb9cdc7e-df8e-4d08-b875-ea44c8ddd4b4 685b4924c5a04af7ae6f4a328bb50f14 a60acfe52e4b4b7f912654a59f0978b7 - - default default] Converting VIF {"id": "2cf33271-ebfe-4d2e-9668-8905e0bc34b8", "address": "fa:16:3e:5a:42:08", "network": {"id": "cf3aa351-4d26-41f3-8cb5-1ff2d3d995c8", "bridge": "br-int", "label": "tempest-TestExecuteWorkloadStabilizationStrategy-105109230-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "5b6d650a71ca47d58b10f5fd874c4898", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2cf33271-eb", "ovs_interfaceid": "2cf33271-ebfe-4d2e-9668-8905e0bc34b8", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.12/site-packages/nova/network/os_vif_util.py:511
Oct 09 16:35:32 compute-0 nova_compute[117331]: 2025-10-09 16:35:32.748 2 DEBUG nova.network.os_vif_util [None req-cb9cdc7e-df8e-4d08-b875-ea44c8ddd4b4 685b4924c5a04af7ae6f4a328bb50f14 a60acfe52e4b4b7f912654a59f0978b7 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:5a:42:08,bridge_name='br-int',has_traffic_filtering=True,id=2cf33271-ebfe-4d2e-9668-8905e0bc34b8,network=Network(cf3aa351-4d26-41f3-8cb5-1ff2d3d995c8),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap2cf33271-eb') nova_to_osvif_vif /usr/lib/python3.12/site-packages/nova/network/os_vif_util.py:548
Oct 09 16:35:32 compute-0 nova_compute[117331]: 2025-10-09 16:35:32.748 2 DEBUG nova.objects.instance [None req-cb9cdc7e-df8e-4d08-b875-ea44c8ddd4b4 685b4924c5a04af7ae6f4a328bb50f14 a60acfe52e4b4b7f912654a59f0978b7 - - default default] Lazy-loading 'pci_devices' on Instance uuid 85c9d87b-0e28-425f-b54e-c14066ba6918 obj_load_attr /usr/lib/python3.12/site-packages/nova/objects/instance.py:1141
Oct 09 16:35:33 compute-0 nova_compute[117331]: 2025-10-09 16:35:33.104 2 DEBUG nova.compute.resource_tracker [None req-92a13bed-c081-4c7b-87f3-bafebefb79f2 - - - - - -] Instance 85c9d87b-0e28-425f-b54e-c14066ba6918 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.12/site-packages/nova/compute/resource_tracker.py:1740
Oct 09 16:35:33 compute-0 nova_compute[117331]: 2025-10-09 16:35:33.105 2 DEBUG nova.compute.resource_tracker [None req-92a13bed-c081-4c7b-87f3-bafebefb79f2 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 1 _report_final_resource_view /usr/lib/python3.12/site-packages/nova/compute/resource_tracker.py:1159
Oct 09 16:35:33 compute-0 nova_compute[117331]: 2025-10-09 16:35:33.105 2 DEBUG nova.compute.resource_tracker [None req-92a13bed-c081-4c7b-87f3-bafebefb79f2 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=640MB phys_disk=79GB used_disk=1GB total_vcpus=8 used_vcpus=1 pci_stats=[] stats={'failed_builds': '0', 'uptime': ' 16:35:32 up 44 min,  0 user,  load average: 0.19, 0.50, 0.46\n', 'num_instances': '1', 'num_vm_building': '1', 'num_task_spawning': '1', 'num_os_type_None': '1', 'num_proj_a60acfe52e4b4b7f912654a59f0978b7': '1', 'io_workload': '1'} _report_final_resource_view /usr/lib/python3.12/site-packages/nova/compute/resource_tracker.py:1168
Oct 09 16:35:33 compute-0 nova_compute[117331]: 2025-10-09 16:35:33.147 2 DEBUG nova.compute.provider_tree [None req-92a13bed-c081-4c7b-87f3-bafebefb79f2 - - - - - -] Inventory has not changed in ProviderTree for provider: 593051b8-2000-437f-a915-2616fc8b1671 update_inventory /usr/lib/python3.12/site-packages/nova/compute/provider_tree.py:180
Oct 09 16:35:33 compute-0 nova_compute[117331]: 2025-10-09 16:35:33.279 2 DEBUG nova.virt.libvirt.driver [None req-cb9cdc7e-df8e-4d08-b875-ea44c8ddd4b4 685b4924c5a04af7ae6f4a328bb50f14 a60acfe52e4b4b7f912654a59f0978b7 - - default default] [instance: 85c9d87b-0e28-425f-b54e-c14066ba6918] End _get_guest_xml xml=<domain type="kvm">
Oct 09 16:35:33 compute-0 nova_compute[117331]:   <uuid>85c9d87b-0e28-425f-b54e-c14066ba6918</uuid>
Oct 09 16:35:33 compute-0 nova_compute[117331]:   <name>instance-0000001a</name>
Oct 09 16:35:33 compute-0 nova_compute[117331]:   <memory>131072</memory>
Oct 09 16:35:33 compute-0 nova_compute[117331]:   <vcpu>1</vcpu>
Oct 09 16:35:33 compute-0 nova_compute[117331]:   <metadata>
Oct 09 16:35:33 compute-0 nova_compute[117331]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Oct 09 16:35:33 compute-0 nova_compute[117331]:       <nova:package version="32.1.0-0.20251008143712.076498e.el10"/>
Oct 09 16:35:33 compute-0 nova_compute[117331]:       <nova:name>tempest-TestExecuteWorkloadStabilizationStrategy-server-512585471</nova:name>
Oct 09 16:35:33 compute-0 nova_compute[117331]:       <nova:creationTime>2025-10-09 16:35:32</nova:creationTime>
Oct 09 16:35:33 compute-0 nova_compute[117331]:       <nova:flavor name="m1.nano" id="5aeb5bf9-70f3-4ab6-b418-2c3319a7d2b3">
Oct 09 16:35:33 compute-0 nova_compute[117331]:         <nova:memory>128</nova:memory>
Oct 09 16:35:33 compute-0 nova_compute[117331]:         <nova:disk>1</nova:disk>
Oct 09 16:35:33 compute-0 nova_compute[117331]:         <nova:swap>0</nova:swap>
Oct 09 16:35:33 compute-0 nova_compute[117331]:         <nova:ephemeral>0</nova:ephemeral>
Oct 09 16:35:33 compute-0 nova_compute[117331]:         <nova:vcpus>1</nova:vcpus>
Oct 09 16:35:33 compute-0 nova_compute[117331]:         <nova:extraSpecs>
Oct 09 16:35:33 compute-0 nova_compute[117331]:           <nova:extraSpec name="hw_rng:allowed">True</nova:extraSpec>
Oct 09 16:35:33 compute-0 nova_compute[117331]:         </nova:extraSpecs>
Oct 09 16:35:33 compute-0 nova_compute[117331]:       </nova:flavor>
Oct 09 16:35:33 compute-0 nova_compute[117331]:       <nova:image uuid="b7d6e0af-25e4-4227-9dc6-43143898ceee">
Oct 09 16:35:33 compute-0 nova_compute[117331]:         <nova:containerFormat>bare</nova:containerFormat>
Oct 09 16:35:33 compute-0 nova_compute[117331]:         <nova:diskFormat>qcow2</nova:diskFormat>
Oct 09 16:35:33 compute-0 nova_compute[117331]:         <nova:minDisk>1</nova:minDisk>
Oct 09 16:35:33 compute-0 nova_compute[117331]:         <nova:minRam>0</nova:minRam>
Oct 09 16:35:33 compute-0 nova_compute[117331]:         <nova:properties>
Oct 09 16:35:33 compute-0 nova_compute[117331]:           <nova:property name="hw_rng_model">virtio</nova:property>
Oct 09 16:35:33 compute-0 nova_compute[117331]:         </nova:properties>
Oct 09 16:35:33 compute-0 nova_compute[117331]:       </nova:image>
Oct 09 16:35:33 compute-0 nova_compute[117331]:       <nova:owner>
Oct 09 16:35:33 compute-0 nova_compute[117331]:         <nova:user uuid="685b4924c5a04af7ae6f4a328bb50f14">tempest-TestExecuteWorkloadStabilizationStrategy-1000021673-project-admin</nova:user>
Oct 09 16:35:33 compute-0 nova_compute[117331]:         <nova:project uuid="a60acfe52e4b4b7f912654a59f0978b7">tempest-TestExecuteWorkloadStabilizationStrategy-1000021673</nova:project>
Oct 09 16:35:33 compute-0 nova_compute[117331]:       </nova:owner>
Oct 09 16:35:33 compute-0 nova_compute[117331]:       <nova:root type="image" uuid="b7d6e0af-25e4-4227-9dc6-43143898ceee"/>
Oct 09 16:35:33 compute-0 nova_compute[117331]:       <nova:ports>
Oct 09 16:35:33 compute-0 nova_compute[117331]:         <nova:port uuid="2cf33271-ebfe-4d2e-9668-8905e0bc34b8">
Oct 09 16:35:33 compute-0 nova_compute[117331]:           <nova:ip type="fixed" address="10.100.0.7" ipVersion="4"/>
Oct 09 16:35:33 compute-0 nova_compute[117331]:         </nova:port>
Oct 09 16:35:33 compute-0 nova_compute[117331]:       </nova:ports>
Oct 09 16:35:33 compute-0 nova_compute[117331]:     </nova:instance>
Oct 09 16:35:33 compute-0 nova_compute[117331]:   </metadata>
Oct 09 16:35:33 compute-0 nova_compute[117331]:   <sysinfo type="smbios">
Oct 09 16:35:33 compute-0 nova_compute[117331]:     <system>
Oct 09 16:35:33 compute-0 nova_compute[117331]:       <entry name="manufacturer">RDO</entry>
Oct 09 16:35:33 compute-0 nova_compute[117331]:       <entry name="product">OpenStack Compute</entry>
Oct 09 16:35:33 compute-0 nova_compute[117331]:       <entry name="version">32.1.0-0.20251008143712.076498e.el10</entry>
Oct 09 16:35:33 compute-0 nova_compute[117331]:       <entry name="serial">85c9d87b-0e28-425f-b54e-c14066ba6918</entry>
Oct 09 16:35:33 compute-0 nova_compute[117331]:       <entry name="uuid">85c9d87b-0e28-425f-b54e-c14066ba6918</entry>
Oct 09 16:35:33 compute-0 nova_compute[117331]:       <entry name="family">Virtual Machine</entry>
Oct 09 16:35:33 compute-0 nova_compute[117331]:     </system>
Oct 09 16:35:33 compute-0 nova_compute[117331]:   </sysinfo>
Oct 09 16:35:33 compute-0 nova_compute[117331]:   <os>
Oct 09 16:35:33 compute-0 nova_compute[117331]:     <type arch="x86_64" machine="q35">hvm</type>
Oct 09 16:35:33 compute-0 nova_compute[117331]:     <boot dev="hd"/>
Oct 09 16:35:33 compute-0 nova_compute[117331]:     <smbios mode="sysinfo"/>
Oct 09 16:35:33 compute-0 nova_compute[117331]:   </os>
Oct 09 16:35:33 compute-0 nova_compute[117331]:   <features>
Oct 09 16:35:33 compute-0 nova_compute[117331]:     <acpi/>
Oct 09 16:35:33 compute-0 nova_compute[117331]:     <apic/>
Oct 09 16:35:33 compute-0 nova_compute[117331]:     <vmcoreinfo/>
Oct 09 16:35:33 compute-0 nova_compute[117331]:   </features>
Oct 09 16:35:33 compute-0 nova_compute[117331]:   <clock offset="utc">
Oct 09 16:35:33 compute-0 nova_compute[117331]:     <timer name="pit" tickpolicy="delay"/>
Oct 09 16:35:33 compute-0 nova_compute[117331]:     <timer name="rtc" tickpolicy="catchup"/>
Oct 09 16:35:33 compute-0 nova_compute[117331]:     <timer name="hpet" present="no"/>
Oct 09 16:35:33 compute-0 nova_compute[117331]:   </clock>
Oct 09 16:35:33 compute-0 nova_compute[117331]:   <cpu mode="host-model" match="exact">
Oct 09 16:35:33 compute-0 nova_compute[117331]:     <topology sockets="1" cores="1" threads="1"/>
Oct 09 16:35:33 compute-0 nova_compute[117331]:   </cpu>
Oct 09 16:35:33 compute-0 nova_compute[117331]:   <devices>
Oct 09 16:35:33 compute-0 nova_compute[117331]:     <disk type="file" device="disk">
Oct 09 16:35:33 compute-0 nova_compute[117331]:       <driver name="qemu" type="qcow2" cache="none"/>
Oct 09 16:35:33 compute-0 nova_compute[117331]:       <source file="/var/lib/nova/instances/85c9d87b-0e28-425f-b54e-c14066ba6918/disk"/>
Oct 09 16:35:33 compute-0 nova_compute[117331]:       <target dev="vda" bus="virtio"/>
Oct 09 16:35:33 compute-0 nova_compute[117331]:     </disk>
Oct 09 16:35:33 compute-0 nova_compute[117331]:     <disk type="file" device="cdrom">
Oct 09 16:35:33 compute-0 nova_compute[117331]:       <driver name="qemu" type="raw" cache="none"/>
Oct 09 16:35:33 compute-0 nova_compute[117331]:       <source file="/var/lib/nova/instances/85c9d87b-0e28-425f-b54e-c14066ba6918/disk.config"/>
Oct 09 16:35:33 compute-0 nova_compute[117331]:       <target dev="sda" bus="sata"/>
Oct 09 16:35:33 compute-0 nova_compute[117331]:     </disk>
Oct 09 16:35:33 compute-0 nova_compute[117331]:     <interface type="ethernet">
Oct 09 16:35:33 compute-0 nova_compute[117331]:       <mac address="fa:16:3e:5a:42:08"/>
Oct 09 16:35:33 compute-0 nova_compute[117331]:       <model type="virtio"/>
Oct 09 16:35:33 compute-0 nova_compute[117331]:       <driver name="vhost" rx_queue_size="512"/>
Oct 09 16:35:33 compute-0 nova_compute[117331]:       <mtu size="1442"/>
Oct 09 16:35:33 compute-0 nova_compute[117331]:       <target dev="tap2cf33271-eb"/>
Oct 09 16:35:33 compute-0 nova_compute[117331]:     </interface>
Oct 09 16:35:33 compute-0 nova_compute[117331]:     <serial type="pty">
Oct 09 16:35:33 compute-0 nova_compute[117331]:       <log file="/var/lib/nova/instances/85c9d87b-0e28-425f-b54e-c14066ba6918/console.log" append="off"/>
Oct 09 16:35:33 compute-0 nova_compute[117331]:     </serial>
Oct 09 16:35:33 compute-0 nova_compute[117331]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Oct 09 16:35:33 compute-0 nova_compute[117331]:     <video>
Oct 09 16:35:33 compute-0 nova_compute[117331]:       <model type="virtio"/>
Oct 09 16:35:33 compute-0 nova_compute[117331]:     </video>
Oct 09 16:35:33 compute-0 nova_compute[117331]:     <input type="tablet" bus="usb"/>
Oct 09 16:35:33 compute-0 nova_compute[117331]:     <rng model="virtio">
Oct 09 16:35:33 compute-0 nova_compute[117331]:       <backend model="random">/dev/urandom</backend>
Oct 09 16:35:33 compute-0 nova_compute[117331]:     </rng>
Oct 09 16:35:33 compute-0 nova_compute[117331]:     <controller type="pci" model="pcie-root"/>
Oct 09 16:35:33 compute-0 nova_compute[117331]:     <controller type="pci" model="pcie-root-port"/>
Oct 09 16:35:33 compute-0 nova_compute[117331]:     <controller type="pci" model="pcie-root-port"/>
Oct 09 16:35:33 compute-0 nova_compute[117331]:     <controller type="pci" model="pcie-root-port"/>
Oct 09 16:35:33 compute-0 nova_compute[117331]:     <controller type="pci" model="pcie-root-port"/>
Oct 09 16:35:33 compute-0 nova_compute[117331]:     <controller type="pci" model="pcie-root-port"/>
Oct 09 16:35:33 compute-0 nova_compute[117331]:     <controller type="pci" model="pcie-root-port"/>
Oct 09 16:35:33 compute-0 nova_compute[117331]:     <controller type="pci" model="pcie-root-port"/>
Oct 09 16:35:33 compute-0 nova_compute[117331]:     <controller type="pci" model="pcie-root-port"/>
Oct 09 16:35:33 compute-0 nova_compute[117331]:     <controller type="pci" model="pcie-root-port"/>
Oct 09 16:35:33 compute-0 nova_compute[117331]:     <controller type="pci" model="pcie-root-port"/>
Oct 09 16:35:33 compute-0 nova_compute[117331]:     <controller type="pci" model="pcie-root-port"/>
Oct 09 16:35:33 compute-0 nova_compute[117331]:     <controller type="pci" model="pcie-root-port"/>
Oct 09 16:35:33 compute-0 nova_compute[117331]:     <controller type="pci" model="pcie-root-port"/>
Oct 09 16:35:33 compute-0 nova_compute[117331]:     <controller type="pci" model="pcie-root-port"/>
Oct 09 16:35:33 compute-0 nova_compute[117331]:     <controller type="pci" model="pcie-root-port"/>
Oct 09 16:35:33 compute-0 nova_compute[117331]:     <controller type="pci" model="pcie-root-port"/>
Oct 09 16:35:33 compute-0 nova_compute[117331]:     <controller type="pci" model="pcie-root-port"/>
Oct 09 16:35:33 compute-0 nova_compute[117331]:     <controller type="pci" model="pcie-root-port"/>
Oct 09 16:35:33 compute-0 nova_compute[117331]:     <controller type="pci" model="pcie-root-port"/>
Oct 09 16:35:33 compute-0 nova_compute[117331]:     <controller type="pci" model="pcie-root-port"/>
Oct 09 16:35:33 compute-0 nova_compute[117331]:     <controller type="pci" model="pcie-root-port"/>
Oct 09 16:35:33 compute-0 nova_compute[117331]:     <controller type="pci" model="pcie-root-port"/>
Oct 09 16:35:33 compute-0 nova_compute[117331]:     <controller type="pci" model="pcie-root-port"/>
Oct 09 16:35:33 compute-0 nova_compute[117331]:     <controller type="pci" model="pcie-root-port"/>
Oct 09 16:35:33 compute-0 nova_compute[117331]:     <controller type="usb" index="0"/>
Oct 09 16:35:33 compute-0 nova_compute[117331]:     <memballoon model="virtio" autodeflate="on" freePageReporting="on">
Oct 09 16:35:33 compute-0 nova_compute[117331]:       <stats period="10"/>
Oct 09 16:35:33 compute-0 nova_compute[117331]:     </memballoon>
Oct 09 16:35:33 compute-0 nova_compute[117331]:   </devices>
Oct 09 16:35:33 compute-0 nova_compute[117331]: </domain>
Oct 09 16:35:33 compute-0 nova_compute[117331]:  _get_guest_xml /usr/lib/python3.12/site-packages/nova/virt/libvirt/driver.py:8052
Oct 09 16:35:33 compute-0 nova_compute[117331]: 2025-10-09 16:35:33.280 2 DEBUG nova.compute.manager [None req-cb9cdc7e-df8e-4d08-b875-ea44c8ddd4b4 685b4924c5a04af7ae6f4a328bb50f14 a60acfe52e4b4b7f912654a59f0978b7 - - default default] [instance: 85c9d87b-0e28-425f-b54e-c14066ba6918] Preparing to wait for external event network-vif-plugged-2cf33271-ebfe-4d2e-9668-8905e0bc34b8 prepare_for_instance_event /usr/lib/python3.12/site-packages/nova/compute/manager.py:307
Oct 09 16:35:33 compute-0 nova_compute[117331]: 2025-10-09 16:35:33.280 2 DEBUG oslo_concurrency.lockutils [None req-cb9cdc7e-df8e-4d08-b875-ea44c8ddd4b4 685b4924c5a04af7ae6f4a328bb50f14 a60acfe52e4b4b7f912654a59f0978b7 - - default default] Acquiring lock "85c9d87b-0e28-425f-b54e-c14066ba6918-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:405
Oct 09 16:35:33 compute-0 nova_compute[117331]: 2025-10-09 16:35:33.281 2 DEBUG oslo_concurrency.lockutils [None req-cb9cdc7e-df8e-4d08-b875-ea44c8ddd4b4 685b4924c5a04af7ae6f4a328bb50f14 a60acfe52e4b4b7f912654a59f0978b7 - - default default] Lock "85c9d87b-0e28-425f-b54e-c14066ba6918-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:410
Oct 09 16:35:33 compute-0 nova_compute[117331]: 2025-10-09 16:35:33.281 2 DEBUG oslo_concurrency.lockutils [None req-cb9cdc7e-df8e-4d08-b875-ea44c8ddd4b4 685b4924c5a04af7ae6f4a328bb50f14 a60acfe52e4b4b7f912654a59f0978b7 - - default default] Lock "85c9d87b-0e28-425f-b54e-c14066ba6918-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:424
Oct 09 16:35:33 compute-0 nova_compute[117331]: 2025-10-09 16:35:33.282 2 DEBUG nova.virt.libvirt.vif [None req-cb9cdc7e-df8e-4d08-b875-ea44c8ddd4b4 685b4924c5a04af7ae6f4a328bb50f14 a60acfe52e4b4b7f912654a59f0978b7 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,compute_id=1,config_drive='',created_at=2025-10-09T16:35:22Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description=None,display_name='tempest-TestExecuteWorkloadStabilizationStrategy-server-512585471',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testexecuteworkloadstabilizationstrategy-server-5125854',id=26,image_ref='b7d6e0af-25e4-4227-9dc6-43143898ceee',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='a60acfe52e4b4b7f912654a59f0978b7',ramdisk_id='',reservation_id='r-52wfk6ph',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='admin,member,reader,manager',image_base_image_ref='b7d6e0af-25e4-4227-9dc6-43143898ceee',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestExecuteWorkloadStabilizationStrategy-1000021673',owner_user_name='tempest-TestExecuteWorkloadStabilizationStrategy-1000021673-project-admin'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-09T16:35:27Z,user_data=None,user_id='685b4924c5a04af7ae6f4a328bb50f14',uuid=85c9d87b-0e28-425f-b54e-c14066ba6918,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "2cf33271-ebfe-4d2e-9668-8905e0bc34b8", "address": "fa:16:3e:5a:42:08", "network": {"id": "cf3aa351-4d26-41f3-8cb5-1ff2d3d995c8", "bridge": "br-int", "label": "tempest-TestExecuteWorkloadStabilizationStrategy-105109230-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "5b6d650a71ca47d58b10f5fd874c4898", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2cf33271-eb", "ovs_interfaceid": "2cf33271-ebfe-4d2e-9668-8905e0bc34b8", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.12/site-packages/nova/virt/libvirt/vif.py:721
Oct 09 16:35:33 compute-0 nova_compute[117331]: 2025-10-09 16:35:33.282 2 DEBUG nova.network.os_vif_util [None req-cb9cdc7e-df8e-4d08-b875-ea44c8ddd4b4 685b4924c5a04af7ae6f4a328bb50f14 a60acfe52e4b4b7f912654a59f0978b7 - - default default] Converting VIF {"id": "2cf33271-ebfe-4d2e-9668-8905e0bc34b8", "address": "fa:16:3e:5a:42:08", "network": {"id": "cf3aa351-4d26-41f3-8cb5-1ff2d3d995c8", "bridge": "br-int", "label": "tempest-TestExecuteWorkloadStabilizationStrategy-105109230-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "5b6d650a71ca47d58b10f5fd874c4898", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2cf33271-eb", "ovs_interfaceid": "2cf33271-ebfe-4d2e-9668-8905e0bc34b8", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.12/site-packages/nova/network/os_vif_util.py:511
Oct 09 16:35:33 compute-0 nova_compute[117331]: 2025-10-09 16:35:33.282 2 DEBUG nova.network.os_vif_util [None req-cb9cdc7e-df8e-4d08-b875-ea44c8ddd4b4 685b4924c5a04af7ae6f4a328bb50f14 a60acfe52e4b4b7f912654a59f0978b7 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:5a:42:08,bridge_name='br-int',has_traffic_filtering=True,id=2cf33271-ebfe-4d2e-9668-8905e0bc34b8,network=Network(cf3aa351-4d26-41f3-8cb5-1ff2d3d995c8),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap2cf33271-eb') nova_to_osvif_vif /usr/lib/python3.12/site-packages/nova/network/os_vif_util.py:548
Oct 09 16:35:33 compute-0 nova_compute[117331]: 2025-10-09 16:35:33.283 2 DEBUG os_vif [None req-cb9cdc7e-df8e-4d08-b875-ea44c8ddd4b4 685b4924c5a04af7ae6f4a328bb50f14 a60acfe52e4b4b7f912654a59f0978b7 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:5a:42:08,bridge_name='br-int',has_traffic_filtering=True,id=2cf33271-ebfe-4d2e-9668-8905e0bc34b8,network=Network(cf3aa351-4d26-41f3-8cb5-1ff2d3d995c8),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap2cf33271-eb') plug /usr/lib/python3.12/site-packages/os_vif/__init__.py:76
Oct 09 16:35:33 compute-0 nova_compute[117331]: 2025-10-09 16:35:33.283 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 22 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 09 16:35:33 compute-0 nova_compute[117331]: 2025-10-09 16:35:33.284 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.12/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 09 16:35:33 compute-0 nova_compute[117331]: 2025-10-09 16:35:33.284 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.12/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 09 16:35:33 compute-0 nova_compute[117331]: 2025-10-09 16:35:33.285 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 22 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 09 16:35:33 compute-0 nova_compute[117331]: 2025-10-09 16:35:33.285 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbCreateCommand(_result=None, table=QoS, columns={'type': 'linux-noop', 'external_ids': {'id': 'a957bd17-dde4-5f7c-956b-6d702e740ca9', '_type': 'linux-noop'}}, row=False) do_commit /usr/lib/python3.12/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 09 16:35:33 compute-0 nova_compute[117331]: 2025-10-09 16:35:33.286 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 09 16:35:33 compute-0 nova_compute[117331]: 2025-10-09 16:35:33.288 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 09 16:35:33 compute-0 nova_compute[117331]: 2025-10-09 16:35:33.290 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 22 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 09 16:35:33 compute-0 nova_compute[117331]: 2025-10-09 16:35:33.290 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap2cf33271-eb, may_exist=True, interface_attrs={}) do_commit /usr/lib/python3.12/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 09 16:35:33 compute-0 nova_compute[117331]: 2025-10-09 16:35:33.290 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Port, record=tap2cf33271-eb, col_values=(('qos', UUID('657c5bea-8359-44ce-afcd-e6544c668832')),), if_exists=True) do_commit /usr/lib/python3.12/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 09 16:35:33 compute-0 nova_compute[117331]: 2025-10-09 16:35:33.291 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=2): DbSetCommand(_result=None, table=Interface, record=tap2cf33271-eb, col_values=(('external_ids', {'iface-id': '2cf33271-ebfe-4d2e-9668-8905e0bc34b8', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:5a:42:08', 'vm-uuid': '85c9d87b-0e28-425f-b54e-c14066ba6918'}),), if_exists=True) do_commit /usr/lib/python3.12/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 09 16:35:33 compute-0 nova_compute[117331]: 2025-10-09 16:35:33.292 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 09 16:35:33 compute-0 nova_compute[117331]: 2025-10-09 16:35:33.293 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 09 16:35:33 compute-0 NetworkManager[1028]: <info>  [1760027733.2948] manager: (tap2cf33271-eb): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/87)
Oct 09 16:35:33 compute-0 nova_compute[117331]: 2025-10-09 16:35:33.295 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:248
Oct 09 16:35:33 compute-0 nova_compute[117331]: 2025-10-09 16:35:33.302 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 09 16:35:33 compute-0 nova_compute[117331]: 2025-10-09 16:35:33.303 2 INFO os_vif [None req-cb9cdc7e-df8e-4d08-b875-ea44c8ddd4b4 685b4924c5a04af7ae6f4a328bb50f14 a60acfe52e4b4b7f912654a59f0978b7 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:5a:42:08,bridge_name='br-int',has_traffic_filtering=True,id=2cf33271-ebfe-4d2e-9668-8905e0bc34b8,network=Network(cf3aa351-4d26-41f3-8cb5-1ff2d3d995c8),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap2cf33271-eb')
Oct 09 16:35:33 compute-0 nova_compute[117331]: 2025-10-09 16:35:33.656 2 DEBUG nova.scheduler.client.report [None req-92a13bed-c081-4c7b-87f3-bafebefb79f2 - - - - - -] Inventory has not changed for provider 593051b8-2000-437f-a915-2616fc8b1671 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 79, 'reserved': 1, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.12/site-packages/nova/scheduler/client/report.py:958
Oct 09 16:35:34 compute-0 nova_compute[117331]: 2025-10-09 16:35:34.163 2 DEBUG nova.compute.resource_tracker [None req-92a13bed-c081-4c7b-87f3-bafebefb79f2 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.12/site-packages/nova/compute/resource_tracker.py:1097
Oct 09 16:35:34 compute-0 nova_compute[117331]: 2025-10-09 16:35:34.164 2 DEBUG oslo_concurrency.lockutils [None req-92a13bed-c081-4c7b-87f3-bafebefb79f2 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 2.113s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:424
Oct 09 16:35:34 compute-0 podman[151753]: 2025-10-09 16:35:34.833580091 +0000 UTC m=+0.064389509 container health_status 0e966c7a283689daf23c5db5017f23f685ee836319240188a9f70ddd246563c5 (image=38.102.83.66:5001/podified-master-centos10/openstack-neutron-metadata-agent-ovn:watcher_latest, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 10 Base Image, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': '38.102.83.66:5001/podified-master-centos10/openstack-neutron-metadata-agent-ovn:watcher_latest', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, tcib_build_tag=watcher_latest, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.4, org.label-schema.build-date=20251007, tcib_managed=true, container_name=ovn_metadata_agent)
Oct 09 16:35:34 compute-0 podman[151754]: 2025-10-09 16:35:34.833869671 +0000 UTC m=+0.063247873 container health_status 8227728d23f374cb9528be6ddf01dff38d8c792912a17b0d137932a18390b820 (image=38.102.83.66:5001/podified-master-centos10/openstack-iscsid:watcher_latest, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, tcib_build_tag=watcher_latest, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': '38.102.83.66:5001/podified-master-centos10/openstack-iscsid:watcher_latest', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, config_id=iscsid, container_name=iscsid, io.buildah.version=1.41.4, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251007)
Oct 09 16:35:35 compute-0 nova_compute[117331]: 2025-10-09 16:35:35.029 2 DEBUG nova.virt.libvirt.driver [None req-cb9cdc7e-df8e-4d08-b875-ea44c8ddd4b4 685b4924c5a04af7ae6f4a328bb50f14 a60acfe52e4b4b7f912654a59f0978b7 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.12/site-packages/nova/virt/libvirt/driver.py:13022
Oct 09 16:35:35 compute-0 nova_compute[117331]: 2025-10-09 16:35:35.029 2 DEBUG nova.virt.libvirt.driver [None req-cb9cdc7e-df8e-4d08-b875-ea44c8ddd4b4 685b4924c5a04af7ae6f4a328bb50f14 a60acfe52e4b4b7f912654a59f0978b7 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.12/site-packages/nova/virt/libvirt/driver.py:13022
Oct 09 16:35:35 compute-0 nova_compute[117331]: 2025-10-09 16:35:35.029 2 DEBUG nova.virt.libvirt.driver [None req-cb9cdc7e-df8e-4d08-b875-ea44c8ddd4b4 685b4924c5a04af7ae6f4a328bb50f14 a60acfe52e4b4b7f912654a59f0978b7 - - default default] No VIF found with MAC fa:16:3e:5a:42:08, not building metadata _build_interface_metadata /usr/lib/python3.12/site-packages/nova/virt/libvirt/driver.py:12998
Oct 09 16:35:35 compute-0 nova_compute[117331]: 2025-10-09 16:35:35.030 2 INFO nova.virt.libvirt.driver [None req-cb9cdc7e-df8e-4d08-b875-ea44c8ddd4b4 685b4924c5a04af7ae6f4a328bb50f14 a60acfe52e4b4b7f912654a59f0978b7 - - default default] [instance: 85c9d87b-0e28-425f-b54e-c14066ba6918] Using config drive
Oct 09 16:35:35 compute-0 nova_compute[117331]: 2025-10-09 16:35:35.160 2 DEBUG oslo_service.periodic_task [None req-92a13bed-c081-4c7b-87f3-bafebefb79f2 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.12/site-packages/oslo_service/periodic_task.py:210
Oct 09 16:35:35 compute-0 nova_compute[117331]: 2025-10-09 16:35:35.161 2 DEBUG oslo_service.periodic_task [None req-92a13bed-c081-4c7b-87f3-bafebefb79f2 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.12/site-packages/oslo_service/periodic_task.py:210
Oct 09 16:35:35 compute-0 nova_compute[117331]: 2025-10-09 16:35:35.161 2 DEBUG oslo_service.periodic_task [None req-92a13bed-c081-4c7b-87f3-bafebefb79f2 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.12/site-packages/oslo_service/periodic_task.py:210
Oct 09 16:35:35 compute-0 nova_compute[117331]: 2025-10-09 16:35:35.161 2 DEBUG nova.compute.manager [None req-92a13bed-c081-4c7b-87f3-bafebefb79f2 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.12/site-packages/nova/compute/manager.py:11228
Oct 09 16:35:35 compute-0 ovn_metadata_agent[28608]: 2025-10-09 16:35:35.331 28613 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:405
Oct 09 16:35:35 compute-0 ovn_metadata_agent[28608]: 2025-10-09 16:35:35.332 28613 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.000s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:410
Oct 09 16:35:35 compute-0 ovn_metadata_agent[28608]: 2025-10-09 16:35:35.332 28613 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:424
Oct 09 16:35:35 compute-0 nova_compute[117331]: 2025-10-09 16:35:35.591 2 WARNING neutronclient.v2_0.client [None req-cb9cdc7e-df8e-4d08-b875-ea44c8ddd4b4 685b4924c5a04af7ae6f4a328bb50f14 a60acfe52e4b4b7f912654a59f0978b7 - - default default] The python binding code in neutronclient is deprecated in favor of OpenstackSDK, please use that as this will be removed in a future release.
Oct 09 16:35:36 compute-0 nova_compute[117331]: 2025-10-09 16:35:36.393 2 INFO nova.virt.libvirt.driver [None req-cb9cdc7e-df8e-4d08-b875-ea44c8ddd4b4 685b4924c5a04af7ae6f4a328bb50f14 a60acfe52e4b4b7f912654a59f0978b7 - - default default] [instance: 85c9d87b-0e28-425f-b54e-c14066ba6918] Creating config drive at /var/lib/nova/instances/85c9d87b-0e28-425f-b54e-c14066ba6918/disk.config
Oct 09 16:35:36 compute-0 nova_compute[117331]: 2025-10-09 16:35:36.399 2 DEBUG oslo_concurrency.processutils [None req-cb9cdc7e-df8e-4d08-b875-ea44c8ddd4b4 685b4924c5a04af7ae6f4a328bb50f14 a60acfe52e4b4b7f912654a59f0978b7 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/85c9d87b-0e28-425f-b54e-c14066ba6918/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 32.1.0-0.20251008143712.076498e.el10 -quiet -J -r -V config-2 /tmp/tmpgoa3zhkl execute /usr/lib/python3.12/site-packages/oslo_concurrency/processutils.py:349
Oct 09 16:35:36 compute-0 nova_compute[117331]: 2025-10-09 16:35:36.529 2 DEBUG oslo_concurrency.processutils [None req-cb9cdc7e-df8e-4d08-b875-ea44c8ddd4b4 685b4924c5a04af7ae6f4a328bb50f14 a60acfe52e4b4b7f912654a59f0978b7 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/85c9d87b-0e28-425f-b54e-c14066ba6918/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 32.1.0-0.20251008143712.076498e.el10 -quiet -J -r -V config-2 /tmp/tmpgoa3zhkl" returned: 0 in 0.130s execute /usr/lib/python3.12/site-packages/oslo_concurrency/processutils.py:372
Oct 09 16:35:36 compute-0 kernel: tap2cf33271-eb: entered promiscuous mode
Oct 09 16:35:36 compute-0 NetworkManager[1028]: <info>  [1760027736.5999] manager: (tap2cf33271-eb): new Tun device (/org/freedesktop/NetworkManager/Devices/88)
Oct 09 16:35:36 compute-0 ovn_controller[19752]: 2025-10-09T16:35:36Z|00236|binding|INFO|Claiming lport 2cf33271-ebfe-4d2e-9668-8905e0bc34b8 for this chassis.
Oct 09 16:35:36 compute-0 ovn_controller[19752]: 2025-10-09T16:35:36Z|00237|binding|INFO|2cf33271-ebfe-4d2e-9668-8905e0bc34b8: Claiming fa:16:3e:5a:42:08 10.100.0.7
Oct 09 16:35:36 compute-0 nova_compute[117331]: 2025-10-09 16:35:36.602 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 09 16:35:36 compute-0 nova_compute[117331]: 2025-10-09 16:35:36.607 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 09 16:35:36 compute-0 nova_compute[117331]: 2025-10-09 16:35:36.609 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 09 16:35:36 compute-0 ovn_metadata_agent[28608]: 2025-10-09 16:35:36.615 28613 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:5a:42:08 10.100.0.7'], port_security=['fa:16:3e:5a:42:08 10.100.0.7'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.7/28', 'neutron:device_id': '85c9d87b-0e28-425f-b54e-c14066ba6918', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-cf3aa351-4d26-41f3-8cb5-1ff2d3d995c8', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'a60acfe52e4b4b7f912654a59f0978b7', 'neutron:revision_number': '4', 'neutron:security_group_ids': 'c47ae156-dc3a-4eb3-8eb2-126be8ba5497', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=f19947fc-c2ef-4762-adfc-471bb24a038d, chassis=[<ovs.db.idl.Row object at 0x7f37b2576840>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f37b2576840>], logical_port=2cf33271-ebfe-4d2e-9668-8905e0bc34b8) old=Port_Binding(chassis=[]) matches /usr/lib/python3.12/site-packages/ovsdbapp/backend/ovs_idl/event.py:55
Oct 09 16:35:36 compute-0 ovn_metadata_agent[28608]: 2025-10-09 16:35:36.616 28613 INFO neutron.agent.ovn.metadata.agent [-] Port 2cf33271-ebfe-4d2e-9668-8905e0bc34b8 in datapath cf3aa351-4d26-41f3-8cb5-1ff2d3d995c8 bound to our chassis
Oct 09 16:35:36 compute-0 ovn_metadata_agent[28608]: 2025-10-09 16:35:36.619 28613 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network cf3aa351-4d26-41f3-8cb5-1ff2d3d995c8
Oct 09 16:35:36 compute-0 ovn_metadata_agent[28608]: 2025-10-09 16:35:36.632 139687 DEBUG oslo.privsep.daemon [-] privsep: reply[6053aca0-41b4-4f1e-8723-17fce495037b]: (4, False) _call_back /usr/lib/python3.12/site-packages/oslo_privsep/daemon.py:515
Oct 09 16:35:36 compute-0 ovn_metadata_agent[28608]: 2025-10-09 16:35:36.632 28613 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tapcf3aa351-41 in ovnmeta-cf3aa351-4d26-41f3-8cb5-1ff2d3d995c8 namespace provision_datapath /usr/lib/python3.12/site-packages/neutron/agent/ovn/metadata/agent.py:794
Oct 09 16:35:36 compute-0 ovn_metadata_agent[28608]: 2025-10-09 16:35:36.634 139687 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tapcf3aa351-40 not found in namespace None get_link_id /usr/lib/python3.12/site-packages/neutron/privileged/agent/linux/ip_lib.py:203
Oct 09 16:35:36 compute-0 ovn_metadata_agent[28608]: 2025-10-09 16:35:36.635 139687 DEBUG oslo.privsep.daemon [-] privsep: reply[9c0f6679-d4cf-46a1-9200-e6926bd66333]: (4, False) _call_back /usr/lib/python3.12/site-packages/oslo_privsep/daemon.py:515
Oct 09 16:35:36 compute-0 systemd-udevd[151809]: Network interface NamePolicy= disabled on kernel command line.
Oct 09 16:35:36 compute-0 ovn_metadata_agent[28608]: 2025-10-09 16:35:36.635 139687 DEBUG oslo.privsep.daemon [-] privsep: reply[ccf0380b-419a-47ec-ab81-26049aa12819]: (4, False) _call_back /usr/lib/python3.12/site-packages/oslo_privsep/daemon.py:515
Oct 09 16:35:36 compute-0 systemd-machined[77487]: New machine qemu-20-instance-0000001a.
Oct 09 16:35:36 compute-0 ovn_metadata_agent[28608]: 2025-10-09 16:35:36.645 28727 DEBUG oslo.privsep.daemon [-] privsep: reply[ce72441a-9ac8-4dd5-9c3e-a26ff8408e75]: (4, None) _call_back /usr/lib/python3.12/site-packages/oslo_privsep/daemon.py:515
Oct 09 16:35:36 compute-0 NetworkManager[1028]: <info>  [1760027736.6514] device (tap2cf33271-eb): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Oct 09 16:35:36 compute-0 NetworkManager[1028]: <info>  [1760027736.6520] device (tap2cf33271-eb): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Oct 09 16:35:36 compute-0 systemd[1]: Started Virtual Machine qemu-20-instance-0000001a.
Oct 09 16:35:36 compute-0 ovn_controller[19752]: 2025-10-09T16:35:36Z|00238|binding|INFO|Setting lport 2cf33271-ebfe-4d2e-9668-8905e0bc34b8 ovn-installed in OVS
Oct 09 16:35:36 compute-0 ovn_controller[19752]: 2025-10-09T16:35:36Z|00239|binding|INFO|Setting lport 2cf33271-ebfe-4d2e-9668-8905e0bc34b8 up in Southbound
Oct 09 16:35:36 compute-0 nova_compute[117331]: 2025-10-09 16:35:36.669 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 09 16:35:36 compute-0 nova_compute[117331]: 2025-10-09 16:35:36.671 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 09 16:35:36 compute-0 ovn_metadata_agent[28608]: 2025-10-09 16:35:36.674 139687 DEBUG oslo.privsep.daemon [-] privsep: reply[2b4adf2c-7641-4dfe-a691-49995c81a9f4]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.12/site-packages/oslo_privsep/daemon.py:515
Oct 09 16:35:36 compute-0 ovn_metadata_agent[28608]: 2025-10-09 16:35:36.700 141048 DEBUG oslo.privsep.daemon [-] privsep: reply[5b4a06f4-5d53-46a8-9ab8-978166d2eb66]: (4, None) _call_back /usr/lib/python3.12/site-packages/oslo_privsep/daemon.py:515
Oct 09 16:35:36 compute-0 systemd-udevd[151814]: Network interface NamePolicy= disabled on kernel command line.
Oct 09 16:35:36 compute-0 ovn_metadata_agent[28608]: 2025-10-09 16:35:36.705 139687 DEBUG oslo.privsep.daemon [-] privsep: reply[fbfae4e8-10f2-44df-b1b7-5efada8be03f]: (4, None) _call_back /usr/lib/python3.12/site-packages/oslo_privsep/daemon.py:515
Oct 09 16:35:36 compute-0 NetworkManager[1028]: <info>  [1760027736.7067] manager: (tapcf3aa351-40): new Veth device (/org/freedesktop/NetworkManager/Devices/89)
Oct 09 16:35:36 compute-0 ovn_metadata_agent[28608]: 2025-10-09 16:35:36.740 141048 DEBUG oslo.privsep.daemon [-] privsep: reply[6161fd5d-2aac-40c7-9628-e5d2a127379f]: (4, None) _call_back /usr/lib/python3.12/site-packages/oslo_privsep/daemon.py:515
Oct 09 16:35:36 compute-0 ovn_metadata_agent[28608]: 2025-10-09 16:35:36.743 141048 DEBUG oslo.privsep.daemon [-] privsep: reply[d42735cb-9bad-45b9-88cd-8e4f9c81db32]: (4, None) _call_back /usr/lib/python3.12/site-packages/oslo_privsep/daemon.py:515
Oct 09 16:35:36 compute-0 NetworkManager[1028]: <info>  [1760027736.7634] device (tapcf3aa351-40): carrier: link connected
Oct 09 16:35:36 compute-0 ovn_metadata_agent[28608]: 2025-10-09 16:35:36.768 141048 DEBUG oslo.privsep.daemon [-] privsep: reply[0984c309-f30c-4c9e-9e0d-2d81ea031fba]: (4, None) _call_back /usr/lib/python3.12/site-packages/oslo_privsep/daemon.py:515
Oct 09 16:35:36 compute-0 ovn_metadata_agent[28608]: 2025-10-09 16:35:36.787 139687 DEBUG oslo.privsep.daemon [-] privsep: reply[ce0412b7-e8c2-4d11-813a-ada4cac651c7]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapcf3aa351-41'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:2a:10:2c'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 68], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 268036, 'reachable_time': 31043, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 151842, 'error': None, 'target': 'ovnmeta-cf3aa351-4d26-41f3-8cb5-1ff2d3d995c8', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.12/site-packages/oslo_privsep/daemon.py:515
Oct 09 16:35:36 compute-0 ovn_metadata_agent[28608]: 2025-10-09 16:35:36.799 139687 DEBUG oslo.privsep.daemon [-] privsep: reply[942b9a63-9847-4dce-a04c-09aab07d0967]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe2a:102c'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 268036, 'tstamp': 268036}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 151843, 'error': None, 'target': 'ovnmeta-cf3aa351-4d26-41f3-8cb5-1ff2d3d995c8', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.12/site-packages/oslo_privsep/daemon.py:515
Oct 09 16:35:36 compute-0 ovn_metadata_agent[28608]: 2025-10-09 16:35:36.811 139687 DEBUG oslo.privsep.daemon [-] privsep: reply[92643058-d111-4eaf-9198-01969ff3956e]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapcf3aa351-41'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:2a:10:2c'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 68], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 268036, 'reachable_time': 31043, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 151844, 'error': None, 'target': 'ovnmeta-cf3aa351-4d26-41f3-8cb5-1ff2d3d995c8', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.12/site-packages/oslo_privsep/daemon.py:515
Oct 09 16:35:36 compute-0 ovn_metadata_agent[28608]: 2025-10-09 16:35:36.843 139687 DEBUG oslo.privsep.daemon [-] privsep: reply[e54ae38c-06a0-48c4-b71e-27de4438d35f]: (4, None) _call_back /usr/lib/python3.12/site-packages/oslo_privsep/daemon.py:515
Oct 09 16:35:36 compute-0 ovn_metadata_agent[28608]: 2025-10-09 16:35:36.901 139687 DEBUG oslo.privsep.daemon [-] privsep: reply[dbf84832-2001-4477-a223-bb0bbaa92461]: (4, None) _call_back /usr/lib/python3.12/site-packages/oslo_privsep/daemon.py:515
Oct 09 16:35:36 compute-0 ovn_metadata_agent[28608]: 2025-10-09 16:35:36.902 28613 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapcf3aa351-40, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.12/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 09 16:35:36 compute-0 ovn_metadata_agent[28608]: 2025-10-09 16:35:36.902 28613 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.12/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 09 16:35:36 compute-0 ovn_metadata_agent[28608]: 2025-10-09 16:35:36.903 28613 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapcf3aa351-40, may_exist=True, interface_attrs={}) do_commit /usr/lib/python3.12/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 09 16:35:36 compute-0 nova_compute[117331]: 2025-10-09 16:35:36.904 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 09 16:35:36 compute-0 NetworkManager[1028]: <info>  [1760027736.9051] manager: (tapcf3aa351-40): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/90)
Oct 09 16:35:36 compute-0 kernel: tapcf3aa351-40: entered promiscuous mode
Oct 09 16:35:36 compute-0 nova_compute[117331]: 2025-10-09 16:35:36.906 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 09 16:35:36 compute-0 ovn_metadata_agent[28608]: 2025-10-09 16:35:36.907 28613 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapcf3aa351-40, col_values=(('external_ids', {'iface-id': '7d2fd2fc-650d-4abc-8268-a14a8cdfd51e'}),), if_exists=True) do_commit /usr/lib/python3.12/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 09 16:35:36 compute-0 nova_compute[117331]: 2025-10-09 16:35:36.908 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 09 16:35:36 compute-0 ovn_controller[19752]: 2025-10-09T16:35:36Z|00240|binding|INFO|Releasing lport 7d2fd2fc-650d-4abc-8268-a14a8cdfd51e from this chassis (sb_readonly=0)
Oct 09 16:35:36 compute-0 nova_compute[117331]: 2025-10-09 16:35:36.918 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 09 16:35:36 compute-0 ovn_metadata_agent[28608]: 2025-10-09 16:35:36.920 139687 DEBUG oslo.privsep.daemon [-] privsep: reply[fc89078f-1119-4484-a1e0-7b1d723abc4d]: (4, '') _call_back /usr/lib/python3.12/site-packages/oslo_privsep/daemon.py:515
Oct 09 16:35:36 compute-0 ovn_metadata_agent[28608]: 2025-10-09 16:35:36.920 28613 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/cf3aa351-4d26-41f3-8cb5-1ff2d3d995c8.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/cf3aa351-4d26-41f3-8cb5-1ff2d3d995c8.pid.haproxy' get_value_from_file /usr/lib/python3.12/site-packages/neutron/agent/linux/utils.py:268
Oct 09 16:35:36 compute-0 ovn_metadata_agent[28608]: 2025-10-09 16:35:36.921 28613 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/cf3aa351-4d26-41f3-8cb5-1ff2d3d995c8.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/cf3aa351-4d26-41f3-8cb5-1ff2d3d995c8.pid.haproxy' get_value_from_file /usr/lib/python3.12/site-packages/neutron/agent/linux/utils.py:268
Oct 09 16:35:36 compute-0 ovn_metadata_agent[28608]: 2025-10-09 16:35:36.921 28613 DEBUG neutron.agent.linux.external_process [-] No haproxy process started for cf3aa351-4d26-41f3-8cb5-1ff2d3d995c8 disable /usr/lib/python3.12/site-packages/neutron/agent/linux/external_process.py:173
Oct 09 16:35:36 compute-0 ovn_metadata_agent[28608]: 2025-10-09 16:35:36.921 28613 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/cf3aa351-4d26-41f3-8cb5-1ff2d3d995c8.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/cf3aa351-4d26-41f3-8cb5-1ff2d3d995c8.pid.haproxy' get_value_from_file /usr/lib/python3.12/site-packages/neutron/agent/linux/utils.py:268
Oct 09 16:35:36 compute-0 ovn_metadata_agent[28608]: 2025-10-09 16:35:36.922 139687 DEBUG oslo.privsep.daemon [-] privsep: reply[7c7a14d5-e5ff-4829-9f84-647bfefc35b4]: (4, None) _call_back /usr/lib/python3.12/site-packages/oslo_privsep/daemon.py:515
Oct 09 16:35:36 compute-0 ovn_metadata_agent[28608]: 2025-10-09 16:35:36.922 28613 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/cf3aa351-4d26-41f3-8cb5-1ff2d3d995c8.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/cf3aa351-4d26-41f3-8cb5-1ff2d3d995c8.pid.haproxy' get_value_from_file /usr/lib/python3.12/site-packages/neutron/agent/linux/utils.py:268
Oct 09 16:35:36 compute-0 ovn_metadata_agent[28608]: 2025-10-09 16:35:36.923 139687 DEBUG oslo.privsep.daemon [-] privsep: reply[b5e9f4e3-7bd6-441e-91a0-7368c300d1b5]: (4, None) _call_back /usr/lib/python3.12/site-packages/oslo_privsep/daemon.py:515
Oct 09 16:35:36 compute-0 ovn_metadata_agent[28608]: 2025-10-09 16:35:36.923 28613 DEBUG neutron.agent.metadata.driver_base [-] haproxy_cfg = 
Oct 09 16:35:36 compute-0 ovn_metadata_agent[28608]: global
Oct 09 16:35:36 compute-0 ovn_metadata_agent[28608]:     log         /dev/log local0 debug
Oct 09 16:35:36 compute-0 ovn_metadata_agent[28608]:     log-tag     haproxy-metadata-proxy-cf3aa351-4d26-41f3-8cb5-1ff2d3d995c8
Oct 09 16:35:36 compute-0 ovn_metadata_agent[28608]:     user        root
Oct 09 16:35:36 compute-0 ovn_metadata_agent[28608]:     group       root
Oct 09 16:35:36 compute-0 ovn_metadata_agent[28608]:     maxconn     1024
Oct 09 16:35:36 compute-0 ovn_metadata_agent[28608]:     pidfile     /var/lib/neutron/external/pids/cf3aa351-4d26-41f3-8cb5-1ff2d3d995c8.pid.haproxy
Oct 09 16:35:36 compute-0 ovn_metadata_agent[28608]:     daemon
Oct 09 16:35:36 compute-0 ovn_metadata_agent[28608]: 
Oct 09 16:35:36 compute-0 ovn_metadata_agent[28608]: defaults
Oct 09 16:35:36 compute-0 ovn_metadata_agent[28608]:     log global
Oct 09 16:35:36 compute-0 ovn_metadata_agent[28608]:     mode http
Oct 09 16:35:36 compute-0 ovn_metadata_agent[28608]:     option httplog
Oct 09 16:35:36 compute-0 ovn_metadata_agent[28608]:     option dontlognull
Oct 09 16:35:36 compute-0 ovn_metadata_agent[28608]:     option http-server-close
Oct 09 16:35:36 compute-0 ovn_metadata_agent[28608]:     option forwardfor
Oct 09 16:35:36 compute-0 ovn_metadata_agent[28608]:     retries                 3
Oct 09 16:35:36 compute-0 ovn_metadata_agent[28608]:     timeout http-request    30s
Oct 09 16:35:36 compute-0 ovn_metadata_agent[28608]:     timeout connect         30s
Oct 09 16:35:36 compute-0 ovn_metadata_agent[28608]:     timeout client          32s
Oct 09 16:35:36 compute-0 ovn_metadata_agent[28608]:     timeout server          32s
Oct 09 16:35:36 compute-0 ovn_metadata_agent[28608]:     timeout http-keep-alive 30s
Oct 09 16:35:36 compute-0 ovn_metadata_agent[28608]: 
Oct 09 16:35:36 compute-0 ovn_metadata_agent[28608]: listen listener
Oct 09 16:35:36 compute-0 ovn_metadata_agent[28608]:     bind 169.254.169.254:80
Oct 09 16:35:36 compute-0 ovn_metadata_agent[28608]:     
Oct 09 16:35:36 compute-0 ovn_metadata_agent[28608]:     server metadata /var/lib/neutron/metadata_proxy
Oct 09 16:35:36 compute-0 ovn_metadata_agent[28608]: 
Oct 09 16:35:36 compute-0 ovn_metadata_agent[28608]:     http-request add-header X-OVN-Network-ID cf3aa351-4d26-41f3-8cb5-1ff2d3d995c8
Oct 09 16:35:36 compute-0 ovn_metadata_agent[28608]:  create_config_file /usr/lib/python3.12/site-packages/neutron/agent/metadata/driver_base.py:155
Oct 09 16:35:36 compute-0 ovn_metadata_agent[28608]: 2025-10-09 16:35:36.923 28613 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-cf3aa351-4d26-41f3-8cb5-1ff2d3d995c8', 'env', 'PROCESS_TAG=haproxy-cf3aa351-4d26-41f3-8cb5-1ff2d3d995c8', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/cf3aa351-4d26-41f3-8cb5-1ff2d3d995c8.conf'] create_process /usr/lib/python3.12/site-packages/neutron/agent/linux/utils.py:84
Oct 09 16:35:37 compute-0 nova_compute[117331]: 2025-10-09 16:35:37.258 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 09 16:35:37 compute-0 podman[151883]: 2025-10-09 16:35:37.282748716 +0000 UTC m=+0.020463709 image pull 4e15c189a95c5dcd1203c76fe304de5c8f8616da9a258ca168cc54a517f49b87 38.102.83.66:5001/podified-master-centos10/openstack-neutron-metadata-agent-ovn:watcher_latest
Oct 09 16:35:37 compute-0 nova_compute[117331]: 2025-10-09 16:35:37.448 2 DEBUG nova.compute.manager [req-57649010-9602-43e4-b916-20904a0b834d req-1127f7dd-6a5f-4aa4-84e2-1751fcd029fd ee15961ad2be4bada6c106ac4f69f422 c076c42271004e7aa86e84416e33f826 - - default default] [instance: 85c9d87b-0e28-425f-b54e-c14066ba6918] Received event network-vif-plugged-2cf33271-ebfe-4d2e-9668-8905e0bc34b8 external_instance_event /usr/lib/python3.12/site-packages/nova/compute/manager.py:11812
Oct 09 16:35:37 compute-0 nova_compute[117331]: 2025-10-09 16:35:37.448 2 DEBUG oslo_concurrency.lockutils [req-57649010-9602-43e4-b916-20904a0b834d req-1127f7dd-6a5f-4aa4-84e2-1751fcd029fd ee15961ad2be4bada6c106ac4f69f422 c076c42271004e7aa86e84416e33f826 - - default default] Acquiring lock "85c9d87b-0e28-425f-b54e-c14066ba6918-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:405
Oct 09 16:35:37 compute-0 nova_compute[117331]: 2025-10-09 16:35:37.449 2 DEBUG oslo_concurrency.lockutils [req-57649010-9602-43e4-b916-20904a0b834d req-1127f7dd-6a5f-4aa4-84e2-1751fcd029fd ee15961ad2be4bada6c106ac4f69f422 c076c42271004e7aa86e84416e33f826 - - default default] Lock "85c9d87b-0e28-425f-b54e-c14066ba6918-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:410
Oct 09 16:35:37 compute-0 nova_compute[117331]: 2025-10-09 16:35:37.449 2 DEBUG oslo_concurrency.lockutils [req-57649010-9602-43e4-b916-20904a0b834d req-1127f7dd-6a5f-4aa4-84e2-1751fcd029fd ee15961ad2be4bada6c106ac4f69f422 c076c42271004e7aa86e84416e33f826 - - default default] Lock "85c9d87b-0e28-425f-b54e-c14066ba6918-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:424
Oct 09 16:35:37 compute-0 nova_compute[117331]: 2025-10-09 16:35:37.449 2 DEBUG nova.compute.manager [req-57649010-9602-43e4-b916-20904a0b834d req-1127f7dd-6a5f-4aa4-84e2-1751fcd029fd ee15961ad2be4bada6c106ac4f69f422 c076c42271004e7aa86e84416e33f826 - - default default] [instance: 85c9d87b-0e28-425f-b54e-c14066ba6918] Processing event network-vif-plugged-2cf33271-ebfe-4d2e-9668-8905e0bc34b8 _process_instance_event /usr/lib/python3.12/site-packages/nova/compute/manager.py:11572
Oct 09 16:35:37 compute-0 nova_compute[117331]: 2025-10-09 16:35:37.501 2 DEBUG nova.compute.manager [None req-cb9cdc7e-df8e-4d08-b875-ea44c8ddd4b4 685b4924c5a04af7ae6f4a328bb50f14 a60acfe52e4b4b7f912654a59f0978b7 - - default default] [instance: 85c9d87b-0e28-425f-b54e-c14066ba6918] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.12/site-packages/nova/compute/manager.py:602
Oct 09 16:35:37 compute-0 nova_compute[117331]: 2025-10-09 16:35:37.505 2 DEBUG nova.virt.libvirt.driver [None req-cb9cdc7e-df8e-4d08-b875-ea44c8ddd4b4 685b4924c5a04af7ae6f4a328bb50f14 a60acfe52e4b4b7f912654a59f0978b7 - - default default] [instance: 85c9d87b-0e28-425f-b54e-c14066ba6918] Guest created on hypervisor spawn /usr/lib/python3.12/site-packages/nova/virt/libvirt/driver.py:4816
Oct 09 16:35:37 compute-0 nova_compute[117331]: 2025-10-09 16:35:37.508 2 INFO nova.virt.libvirt.driver [-] [instance: 85c9d87b-0e28-425f-b54e-c14066ba6918] Instance spawned successfully.
Oct 09 16:35:37 compute-0 nova_compute[117331]: 2025-10-09 16:35:37.508 2 DEBUG nova.virt.libvirt.driver [None req-cb9cdc7e-df8e-4d08-b875-ea44c8ddd4b4 685b4924c5a04af7ae6f4a328bb50f14 a60acfe52e4b4b7f912654a59f0978b7 - - default default] [instance: 85c9d87b-0e28-425f-b54e-c14066ba6918] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.12/site-packages/nova/virt/libvirt/driver.py:985
Oct 09 16:35:37 compute-0 podman[151883]: 2025-10-09 16:35:37.573297755 +0000 UTC m=+0.311012698 container create 228a7db78720641cf3f58f4b57cfac2aacc540265ef0151826f25be2f2b1f4ac (image=38.102.83.66:5001/podified-master-centos10/openstack-neutron-metadata-agent-ovn:watcher_latest, name=neutron-haproxy-ovnmeta-cf3aa351-4d26-41f3-8cb5-1ff2d3d995c8, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=watcher_latest, org.label-schema.license=GPLv2, io.buildah.version=1.41.4, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.build-date=20251007)
Oct 09 16:35:37 compute-0 systemd[1]: Started libpod-conmon-228a7db78720641cf3f58f4b57cfac2aacc540265ef0151826f25be2f2b1f4ac.scope.
Oct 09 16:35:37 compute-0 systemd[1]: Started libcrun container.
Oct 09 16:35:37 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0bf16e2b8255c61faf23874b9e665f78971a169aa66798d3eb4b7358e16e9a5f/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Oct 09 16:35:37 compute-0 podman[151883]: 2025-10-09 16:35:37.836568621 +0000 UTC m=+0.574283604 container init 228a7db78720641cf3f58f4b57cfac2aacc540265ef0151826f25be2f2b1f4ac (image=38.102.83.66:5001/podified-master-centos10/openstack-neutron-metadata-agent-ovn:watcher_latest, name=neutron-haproxy-ovnmeta-cf3aa351-4d26-41f3-8cb5-1ff2d3d995c8, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.build-date=20251007, io.buildah.version=1.41.4, org.label-schema.name=CentOS Stream 10 Base Image, tcib_build_tag=watcher_latest, tcib_managed=true)
Oct 09 16:35:37 compute-0 podman[151883]: 2025-10-09 16:35:37.84350711 +0000 UTC m=+0.581222063 container start 228a7db78720641cf3f58f4b57cfac2aacc540265ef0151826f25be2f2b1f4ac (image=38.102.83.66:5001/podified-master-centos10/openstack-neutron-metadata-agent-ovn:watcher_latest, name=neutron-haproxy-ovnmeta-cf3aa351-4d26-41f3-8cb5-1ff2d3d995c8, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.build-date=20251007, io.buildah.version=1.41.4, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=watcher_latest)
Oct 09 16:35:37 compute-0 neutron-haproxy-ovnmeta-cf3aa351-4d26-41f3-8cb5-1ff2d3d995c8[151899]: [NOTICE]   (151903) : New worker (151905) forked
Oct 09 16:35:37 compute-0 neutron-haproxy-ovnmeta-cf3aa351-4d26-41f3-8cb5-1ff2d3d995c8[151899]: [NOTICE]   (151903) : Loading success.
Oct 09 16:35:38 compute-0 nova_compute[117331]: 2025-10-09 16:35:38.063 2 DEBUG nova.virt.libvirt.driver [None req-cb9cdc7e-df8e-4d08-b875-ea44c8ddd4b4 685b4924c5a04af7ae6f4a328bb50f14 a60acfe52e4b4b7f912654a59f0978b7 - - default default] [instance: 85c9d87b-0e28-425f-b54e-c14066ba6918] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.12/site-packages/nova/virt/libvirt/driver.py:1014
Oct 09 16:35:38 compute-0 nova_compute[117331]: 2025-10-09 16:35:38.064 2 DEBUG nova.virt.libvirt.driver [None req-cb9cdc7e-df8e-4d08-b875-ea44c8ddd4b4 685b4924c5a04af7ae6f4a328bb50f14 a60acfe52e4b4b7f912654a59f0978b7 - - default default] [instance: 85c9d87b-0e28-425f-b54e-c14066ba6918] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.12/site-packages/nova/virt/libvirt/driver.py:1014
Oct 09 16:35:38 compute-0 nova_compute[117331]: 2025-10-09 16:35:38.065 2 DEBUG nova.virt.libvirt.driver [None req-cb9cdc7e-df8e-4d08-b875-ea44c8ddd4b4 685b4924c5a04af7ae6f4a328bb50f14 a60acfe52e4b4b7f912654a59f0978b7 - - default default] [instance: 85c9d87b-0e28-425f-b54e-c14066ba6918] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.12/site-packages/nova/virt/libvirt/driver.py:1014
Oct 09 16:35:38 compute-0 nova_compute[117331]: 2025-10-09 16:35:38.065 2 DEBUG nova.virt.libvirt.driver [None req-cb9cdc7e-df8e-4d08-b875-ea44c8ddd4b4 685b4924c5a04af7ae6f4a328bb50f14 a60acfe52e4b4b7f912654a59f0978b7 - - default default] [instance: 85c9d87b-0e28-425f-b54e-c14066ba6918] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.12/site-packages/nova/virt/libvirt/driver.py:1014
Oct 09 16:35:38 compute-0 nova_compute[117331]: 2025-10-09 16:35:38.066 2 DEBUG nova.virt.libvirt.driver [None req-cb9cdc7e-df8e-4d08-b875-ea44c8ddd4b4 685b4924c5a04af7ae6f4a328bb50f14 a60acfe52e4b4b7f912654a59f0978b7 - - default default] [instance: 85c9d87b-0e28-425f-b54e-c14066ba6918] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.12/site-packages/nova/virt/libvirt/driver.py:1014
Oct 09 16:35:38 compute-0 nova_compute[117331]: 2025-10-09 16:35:38.067 2 DEBUG nova.virt.libvirt.driver [None req-cb9cdc7e-df8e-4d08-b875-ea44c8ddd4b4 685b4924c5a04af7ae6f4a328bb50f14 a60acfe52e4b4b7f912654a59f0978b7 - - default default] [instance: 85c9d87b-0e28-425f-b54e-c14066ba6918] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.12/site-packages/nova/virt/libvirt/driver.py:1014
Oct 09 16:35:38 compute-0 nova_compute[117331]: 2025-10-09 16:35:38.292 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 09 16:35:38 compute-0 nova_compute[117331]: 2025-10-09 16:35:38.306 2 DEBUG oslo_service.periodic_task [None req-92a13bed-c081-4c7b-87f3-bafebefb79f2 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.12/site-packages/oslo_service/periodic_task.py:210
Oct 09 16:35:38 compute-0 nova_compute[117331]: 2025-10-09 16:35:38.307 2 DEBUG oslo_service.periodic_task [None req-92a13bed-c081-4c7b-87f3-bafebefb79f2 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.12/site-packages/oslo_service/periodic_task.py:210
Oct 09 16:35:38 compute-0 nova_compute[117331]: 2025-10-09 16:35:38.890 2 INFO nova.compute.manager [None req-cb9cdc7e-df8e-4d08-b875-ea44c8ddd4b4 685b4924c5a04af7ae6f4a328bb50f14 a60acfe52e4b4b7f912654a59f0978b7 - - default default] [instance: 85c9d87b-0e28-425f-b54e-c14066ba6918] Took 9.98 seconds to spawn the instance on the hypervisor.
Oct 09 16:35:38 compute-0 nova_compute[117331]: 2025-10-09 16:35:38.891 2 DEBUG nova.compute.manager [None req-cb9cdc7e-df8e-4d08-b875-ea44c8ddd4b4 685b4924c5a04af7ae6f4a328bb50f14 a60acfe52e4b4b7f912654a59f0978b7 - - default default] [instance: 85c9d87b-0e28-425f-b54e-c14066ba6918] Checking state _get_power_state /usr/lib/python3.12/site-packages/nova/compute/manager.py:1825
Oct 09 16:35:39 compute-0 nova_compute[117331]: 2025-10-09 16:35:39.425 2 INFO nova.compute.manager [None req-cb9cdc7e-df8e-4d08-b875-ea44c8ddd4b4 685b4924c5a04af7ae6f4a328bb50f14 a60acfe52e4b4b7f912654a59f0978b7 - - default default] [instance: 85c9d87b-0e28-425f-b54e-c14066ba6918] Took 15.22 seconds to build instance.
Oct 09 16:35:39 compute-0 nova_compute[117331]: 2025-10-09 16:35:39.499 2 DEBUG nova.compute.manager [req-0e391742-070c-4a80-85df-69e8f2a3c21e req-abefd244-1718-434f-b137-d48eaf4f19a3 ee15961ad2be4bada6c106ac4f69f422 c076c42271004e7aa86e84416e33f826 - - default default] [instance: 85c9d87b-0e28-425f-b54e-c14066ba6918] Received event network-vif-plugged-2cf33271-ebfe-4d2e-9668-8905e0bc34b8 external_instance_event /usr/lib/python3.12/site-packages/nova/compute/manager.py:11812
Oct 09 16:35:39 compute-0 nova_compute[117331]: 2025-10-09 16:35:39.499 2 DEBUG oslo_concurrency.lockutils [req-0e391742-070c-4a80-85df-69e8f2a3c21e req-abefd244-1718-434f-b137-d48eaf4f19a3 ee15961ad2be4bada6c106ac4f69f422 c076c42271004e7aa86e84416e33f826 - - default default] Acquiring lock "85c9d87b-0e28-425f-b54e-c14066ba6918-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:405
Oct 09 16:35:39 compute-0 nova_compute[117331]: 2025-10-09 16:35:39.499 2 DEBUG oslo_concurrency.lockutils [req-0e391742-070c-4a80-85df-69e8f2a3c21e req-abefd244-1718-434f-b137-d48eaf4f19a3 ee15961ad2be4bada6c106ac4f69f422 c076c42271004e7aa86e84416e33f826 - - default default] Lock "85c9d87b-0e28-425f-b54e-c14066ba6918-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:410
Oct 09 16:35:39 compute-0 nova_compute[117331]: 2025-10-09 16:35:39.500 2 DEBUG oslo_concurrency.lockutils [req-0e391742-070c-4a80-85df-69e8f2a3c21e req-abefd244-1718-434f-b137-d48eaf4f19a3 ee15961ad2be4bada6c106ac4f69f422 c076c42271004e7aa86e84416e33f826 - - default default] Lock "85c9d87b-0e28-425f-b54e-c14066ba6918-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:424
Oct 09 16:35:39 compute-0 nova_compute[117331]: 2025-10-09 16:35:39.500 2 DEBUG nova.compute.manager [req-0e391742-070c-4a80-85df-69e8f2a3c21e req-abefd244-1718-434f-b137-d48eaf4f19a3 ee15961ad2be4bada6c106ac4f69f422 c076c42271004e7aa86e84416e33f826 - - default default] [instance: 85c9d87b-0e28-425f-b54e-c14066ba6918] No waiting events found dispatching network-vif-plugged-2cf33271-ebfe-4d2e-9668-8905e0bc34b8 pop_instance_event /usr/lib/python3.12/site-packages/nova/compute/manager.py:344
Oct 09 16:35:39 compute-0 nova_compute[117331]: 2025-10-09 16:35:39.500 2 WARNING nova.compute.manager [req-0e391742-070c-4a80-85df-69e8f2a3c21e req-abefd244-1718-434f-b137-d48eaf4f19a3 ee15961ad2be4bada6c106ac4f69f422 c076c42271004e7aa86e84416e33f826 - - default default] [instance: 85c9d87b-0e28-425f-b54e-c14066ba6918] Received unexpected event network-vif-plugged-2cf33271-ebfe-4d2e-9668-8905e0bc34b8 for instance with vm_state active and task_state None.
Oct 09 16:35:39 compute-0 nova_compute[117331]: 2025-10-09 16:35:39.931 2 DEBUG oslo_concurrency.lockutils [None req-cb9cdc7e-df8e-4d08-b875-ea44c8ddd4b4 685b4924c5a04af7ae6f4a328bb50f14 a60acfe52e4b4b7f912654a59f0978b7 - - default default] Lock "85c9d87b-0e28-425f-b54e-c14066ba6918" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 16.747s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:424
Oct 09 16:35:42 compute-0 nova_compute[117331]: 2025-10-09 16:35:42.261 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 09 16:35:43 compute-0 nova_compute[117331]: 2025-10-09 16:35:43.294 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 09 16:35:43 compute-0 podman[151914]: 2025-10-09 16:35:43.851688457 +0000 UTC m=+0.081562063 container health_status 64fa251112d01abb34ec1bb8e22019815ddcaaaa391ce5ec91517147ac5a9994 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, release=1755695350, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, distribution-scope=public, vendor=Red Hat, Inc., build-date=2025-08-20T13:12:41, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., url=https://catalog.redhat.com/en/search?searchType=containers, architecture=x86_64, managed_by=edpm_ansible, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_id=edpm, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, io.openshift.expose-services=, version=9.6, com.redhat.component=ubi9-minimal-container, container_name=openstack_network_exporter, io.buildah.version=1.33.7, io.openshift.tags=minimal rhel9, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., vcs-type=git, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, maintainer=Red Hat, Inc., name=ubi9-minimal)
Oct 09 16:35:43 compute-0 podman[151915]: 2025-10-09 16:35:43.861690914 +0000 UTC m=+0.089631358 container health_status d615e4f24ff62fbb14be4a63e6da568ec5b22309773c1c66103b6b4de64a8a2f (image=38.102.83.66:5001/podified-master-centos10/openstack-ovn-controller:watcher_latest, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, io.buildah.version=1.41.4, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, container_name=ovn_controller, config_id=ovn_controller, org.label-schema.build-date=20251007, org.label-schema.name=CentOS Stream 10 Base Image, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': '38.102.83.66:5001/podified-master-centos10/openstack-ovn-controller:watcher_latest', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=watcher_latest)
Oct 09 16:35:44 compute-0 nova_compute[117331]: 2025-10-09 16:35:44.429 2 DEBUG oslo_concurrency.lockutils [None req-32474ca4-854b-44fb-9ea1-9f55ddd1f618 685b4924c5a04af7ae6f4a328bb50f14 a60acfe52e4b4b7f912654a59f0978b7 - - default default] Acquiring lock "036b9f54-5759-43e6-9666-ec39d96c1729" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:405
Oct 09 16:35:44 compute-0 nova_compute[117331]: 2025-10-09 16:35:44.429 2 DEBUG oslo_concurrency.lockutils [None req-32474ca4-854b-44fb-9ea1-9f55ddd1f618 685b4924c5a04af7ae6f4a328bb50f14 a60acfe52e4b4b7f912654a59f0978b7 - - default default] Lock "036b9f54-5759-43e6-9666-ec39d96c1729" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.000s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:410
Oct 09 16:35:44 compute-0 nova_compute[117331]: 2025-10-09 16:35:44.934 2 DEBUG nova.compute.manager [None req-32474ca4-854b-44fb-9ea1-9f55ddd1f618 685b4924c5a04af7ae6f4a328bb50f14 a60acfe52e4b4b7f912654a59f0978b7 - - default default] [instance: 036b9f54-5759-43e6-9666-ec39d96c1729] Starting instance... _do_build_and_run_instance /usr/lib/python3.12/site-packages/nova/compute/manager.py:2472
Oct 09 16:35:45 compute-0 nova_compute[117331]: 2025-10-09 16:35:45.487 2 DEBUG oslo_concurrency.lockutils [None req-32474ca4-854b-44fb-9ea1-9f55ddd1f618 685b4924c5a04af7ae6f4a328bb50f14 a60acfe52e4b4b7f912654a59f0978b7 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:405
Oct 09 16:35:45 compute-0 nova_compute[117331]: 2025-10-09 16:35:45.488 2 DEBUG oslo_concurrency.lockutils [None req-32474ca4-854b-44fb-9ea1-9f55ddd1f618 685b4924c5a04af7ae6f4a328bb50f14 a60acfe52e4b4b7f912654a59f0978b7 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:410
Oct 09 16:35:45 compute-0 nova_compute[117331]: 2025-10-09 16:35:45.494 2 DEBUG nova.virt.hardware [None req-32474ca4-854b-44fb-9ea1-9f55ddd1f618 685b4924c5a04af7ae6f4a328bb50f14 a60acfe52e4b4b7f912654a59f0978b7 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.12/site-packages/nova/virt/hardware.py:2528
Oct 09 16:35:45 compute-0 nova_compute[117331]: 2025-10-09 16:35:45.495 2 INFO nova.compute.claims [None req-32474ca4-854b-44fb-9ea1-9f55ddd1f618 685b4924c5a04af7ae6f4a328bb50f14 a60acfe52e4b4b7f912654a59f0978b7 - - default default] [instance: 036b9f54-5759-43e6-9666-ec39d96c1729] Claim successful on node compute-0.ctlplane.example.com
Oct 09 16:35:46 compute-0 nova_compute[117331]: 2025-10-09 16:35:46.563 2 DEBUG nova.compute.provider_tree [None req-32474ca4-854b-44fb-9ea1-9f55ddd1f618 685b4924c5a04af7ae6f4a328bb50f14 a60acfe52e4b4b7f912654a59f0978b7 - - default default] Inventory has not changed in ProviderTree for provider: 593051b8-2000-437f-a915-2616fc8b1671 update_inventory /usr/lib/python3.12/site-packages/nova/compute/provider_tree.py:180
Oct 09 16:35:47 compute-0 nova_compute[117331]: 2025-10-09 16:35:47.070 2 DEBUG nova.scheduler.client.report [None req-32474ca4-854b-44fb-9ea1-9f55ddd1f618 685b4924c5a04af7ae6f4a328bb50f14 a60acfe52e4b4b7f912654a59f0978b7 - - default default] Inventory has not changed for provider 593051b8-2000-437f-a915-2616fc8b1671 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 79, 'reserved': 1, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.12/site-packages/nova/scheduler/client/report.py:958
Oct 09 16:35:47 compute-0 nova_compute[117331]: 2025-10-09 16:35:47.263 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 09 16:35:47 compute-0 nova_compute[117331]: 2025-10-09 16:35:47.579 2 DEBUG oslo_concurrency.lockutils [None req-32474ca4-854b-44fb-9ea1-9f55ddd1f618 685b4924c5a04af7ae6f4a328bb50f14 a60acfe52e4b4b7f912654a59f0978b7 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 2.091s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:424
Oct 09 16:35:47 compute-0 nova_compute[117331]: 2025-10-09 16:35:47.580 2 DEBUG nova.compute.manager [None req-32474ca4-854b-44fb-9ea1-9f55ddd1f618 685b4924c5a04af7ae6f4a328bb50f14 a60acfe52e4b4b7f912654a59f0978b7 - - default default] [instance: 036b9f54-5759-43e6-9666-ec39d96c1729] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.12/site-packages/nova/compute/manager.py:2869
Oct 09 16:35:48 compute-0 nova_compute[117331]: 2025-10-09 16:35:48.091 2 DEBUG nova.compute.manager [None req-32474ca4-854b-44fb-9ea1-9f55ddd1f618 685b4924c5a04af7ae6f4a328bb50f14 a60acfe52e4b4b7f912654a59f0978b7 - - default default] [instance: 036b9f54-5759-43e6-9666-ec39d96c1729] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.12/site-packages/nova/compute/manager.py:2016
Oct 09 16:35:48 compute-0 nova_compute[117331]: 2025-10-09 16:35:48.092 2 DEBUG nova.network.neutron [None req-32474ca4-854b-44fb-9ea1-9f55ddd1f618 685b4924c5a04af7ae6f4a328bb50f14 a60acfe52e4b4b7f912654a59f0978b7 - - default default] [instance: 036b9f54-5759-43e6-9666-ec39d96c1729] allocate_for_instance() allocate_for_instance /usr/lib/python3.12/site-packages/nova/network/neutron.py:1208
Oct 09 16:35:48 compute-0 nova_compute[117331]: 2025-10-09 16:35:48.092 2 WARNING neutronclient.v2_0.client [None req-32474ca4-854b-44fb-9ea1-9f55ddd1f618 685b4924c5a04af7ae6f4a328bb50f14 a60acfe52e4b4b7f912654a59f0978b7 - - default default] The python binding code in neutronclient is deprecated in favor of OpenstackSDK, please use that as this will be removed in a future release.
Oct 09 16:35:48 compute-0 nova_compute[117331]: 2025-10-09 16:35:48.092 2 WARNING neutronclient.v2_0.client [None req-32474ca4-854b-44fb-9ea1-9f55ddd1f618 685b4924c5a04af7ae6f4a328bb50f14 a60acfe52e4b4b7f912654a59f0978b7 - - default default] The python binding code in neutronclient is deprecated in favor of OpenstackSDK, please use that as this will be removed in a future release.
Oct 09 16:35:48 compute-0 nova_compute[117331]: 2025-10-09 16:35:48.297 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 09 16:35:48 compute-0 nova_compute[117331]: 2025-10-09 16:35:48.598 2 INFO nova.virt.libvirt.driver [None req-32474ca4-854b-44fb-9ea1-9f55ddd1f618 685b4924c5a04af7ae6f4a328bb50f14 a60acfe52e4b4b7f912654a59f0978b7 - - default default] [instance: 036b9f54-5759-43e6-9666-ec39d96c1729] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Oct 09 16:35:48 compute-0 nova_compute[117331]: 2025-10-09 16:35:48.720 2 DEBUG nova.network.neutron [None req-32474ca4-854b-44fb-9ea1-9f55ddd1f618 685b4924c5a04af7ae6f4a328bb50f14 a60acfe52e4b4b7f912654a59f0978b7 - - default default] [instance: 036b9f54-5759-43e6-9666-ec39d96c1729] Successfully created port: 19fbb24b-757a-4067-a3e1-7a2c326cb886 _create_port_minimal /usr/lib/python3.12/site-packages/nova/network/neutron.py:550
Oct 09 16:35:49 compute-0 nova_compute[117331]: 2025-10-09 16:35:49.106 2 DEBUG nova.compute.manager [None req-32474ca4-854b-44fb-9ea1-9f55ddd1f618 685b4924c5a04af7ae6f4a328bb50f14 a60acfe52e4b4b7f912654a59f0978b7 - - default default] [instance: 036b9f54-5759-43e6-9666-ec39d96c1729] Start building block device mappings for instance. _build_resources /usr/lib/python3.12/site-packages/nova/compute/manager.py:2904
Oct 09 16:35:49 compute-0 nova_compute[117331]: 2025-10-09 16:35:49.328 2 DEBUG nova.network.neutron [None req-32474ca4-854b-44fb-9ea1-9f55ddd1f618 685b4924c5a04af7ae6f4a328bb50f14 a60acfe52e4b4b7f912654a59f0978b7 - - default default] [instance: 036b9f54-5759-43e6-9666-ec39d96c1729] Successfully updated port: 19fbb24b-757a-4067-a3e1-7a2c326cb886 _update_port /usr/lib/python3.12/site-packages/nova/network/neutron.py:588
Oct 09 16:35:49 compute-0 nova_compute[117331]: 2025-10-09 16:35:49.381 2 DEBUG nova.compute.manager [req-5bf80928-806a-421d-a4be-0c0d7eb7abb4 req-8d0f92d6-697b-4be8-8475-0368e9400fb3 ee15961ad2be4bada6c106ac4f69f422 c076c42271004e7aa86e84416e33f826 - - default default] [instance: 036b9f54-5759-43e6-9666-ec39d96c1729] Received event network-changed-19fbb24b-757a-4067-a3e1-7a2c326cb886 external_instance_event /usr/lib/python3.12/site-packages/nova/compute/manager.py:11812
Oct 09 16:35:49 compute-0 nova_compute[117331]: 2025-10-09 16:35:49.381 2 DEBUG nova.compute.manager [req-5bf80928-806a-421d-a4be-0c0d7eb7abb4 req-8d0f92d6-697b-4be8-8475-0368e9400fb3 ee15961ad2be4bada6c106ac4f69f422 c076c42271004e7aa86e84416e33f826 - - default default] [instance: 036b9f54-5759-43e6-9666-ec39d96c1729] Refreshing instance network info cache due to event network-changed-19fbb24b-757a-4067-a3e1-7a2c326cb886. external_instance_event /usr/lib/python3.12/site-packages/nova/compute/manager.py:11817
Oct 09 16:35:49 compute-0 nova_compute[117331]: 2025-10-09 16:35:49.382 2 DEBUG oslo_concurrency.lockutils [req-5bf80928-806a-421d-a4be-0c0d7eb7abb4 req-8d0f92d6-697b-4be8-8475-0368e9400fb3 ee15961ad2be4bada6c106ac4f69f422 c076c42271004e7aa86e84416e33f826 - - default default] Acquiring lock "refresh_cache-036b9f54-5759-43e6-9666-ec39d96c1729" lock /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:313
Oct 09 16:35:49 compute-0 nova_compute[117331]: 2025-10-09 16:35:49.382 2 DEBUG oslo_concurrency.lockutils [req-5bf80928-806a-421d-a4be-0c0d7eb7abb4 req-8d0f92d6-697b-4be8-8475-0368e9400fb3 ee15961ad2be4bada6c106ac4f69f422 c076c42271004e7aa86e84416e33f826 - - default default] Acquired lock "refresh_cache-036b9f54-5759-43e6-9666-ec39d96c1729" lock /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:316
Oct 09 16:35:49 compute-0 nova_compute[117331]: 2025-10-09 16:35:49.382 2 DEBUG nova.network.neutron [req-5bf80928-806a-421d-a4be-0c0d7eb7abb4 req-8d0f92d6-697b-4be8-8475-0368e9400fb3 ee15961ad2be4bada6c106ac4f69f422 c076c42271004e7aa86e84416e33f826 - - default default] [instance: 036b9f54-5759-43e6-9666-ec39d96c1729] Refreshing network info cache for port 19fbb24b-757a-4067-a3e1-7a2c326cb886 _get_instance_nw_info /usr/lib/python3.12/site-packages/nova/network/neutron.py:2067
Oct 09 16:35:49 compute-0 ovn_controller[19752]: 2025-10-09T16:35:49Z|00033|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:5a:42:08 10.100.0.7
Oct 09 16:35:49 compute-0 ovn_controller[19752]: 2025-10-09T16:35:49Z|00034|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:5a:42:08 10.100.0.7
Oct 09 16:35:49 compute-0 nova_compute[117331]: 2025-10-09 16:35:49.833 2 DEBUG oslo_concurrency.lockutils [None req-32474ca4-854b-44fb-9ea1-9f55ddd1f618 685b4924c5a04af7ae6f4a328bb50f14 a60acfe52e4b4b7f912654a59f0978b7 - - default default] Acquiring lock "refresh_cache-036b9f54-5759-43e6-9666-ec39d96c1729" lock /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:313
Oct 09 16:35:49 compute-0 nova_compute[117331]: 2025-10-09 16:35:49.886 2 WARNING neutronclient.v2_0.client [req-5bf80928-806a-421d-a4be-0c0d7eb7abb4 req-8d0f92d6-697b-4be8-8475-0368e9400fb3 ee15961ad2be4bada6c106ac4f69f422 c076c42271004e7aa86e84416e33f826 - - default default] The python binding code in neutronclient is deprecated in favor of OpenstackSDK, please use that as this will be removed in a future release.
Oct 09 16:35:49 compute-0 nova_compute[117331]: 2025-10-09 16:35:49.955 2 DEBUG nova.network.neutron [req-5bf80928-806a-421d-a4be-0c0d7eb7abb4 req-8d0f92d6-697b-4be8-8475-0368e9400fb3 ee15961ad2be4bada6c106ac4f69f422 c076c42271004e7aa86e84416e33f826 - - default default] [instance: 036b9f54-5759-43e6-9666-ec39d96c1729] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.12/site-packages/nova/network/neutron.py:3383
Oct 09 16:35:50 compute-0 nova_compute[117331]: 2025-10-09 16:35:50.076 2 DEBUG nova.network.neutron [req-5bf80928-806a-421d-a4be-0c0d7eb7abb4 req-8d0f92d6-697b-4be8-8475-0368e9400fb3 ee15961ad2be4bada6c106ac4f69f422 c076c42271004e7aa86e84416e33f826 - - default default] [instance: 036b9f54-5759-43e6-9666-ec39d96c1729] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.12/site-packages/nova/network/neutron.py:116
Oct 09 16:35:50 compute-0 nova_compute[117331]: 2025-10-09 16:35:50.123 2 DEBUG nova.compute.manager [None req-32474ca4-854b-44fb-9ea1-9f55ddd1f618 685b4924c5a04af7ae6f4a328bb50f14 a60acfe52e4b4b7f912654a59f0978b7 - - default default] [instance: 036b9f54-5759-43e6-9666-ec39d96c1729] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.12/site-packages/nova/compute/manager.py:2678
Oct 09 16:35:50 compute-0 nova_compute[117331]: 2025-10-09 16:35:50.125 2 DEBUG nova.virt.libvirt.driver [None req-32474ca4-854b-44fb-9ea1-9f55ddd1f618 685b4924c5a04af7ae6f4a328bb50f14 a60acfe52e4b4b7f912654a59f0978b7 - - default default] [instance: 036b9f54-5759-43e6-9666-ec39d96c1729] Creating instance directory _create_image /usr/lib/python3.12/site-packages/nova/virt/libvirt/driver.py:5138
Oct 09 16:35:50 compute-0 nova_compute[117331]: 2025-10-09 16:35:50.125 2 INFO nova.virt.libvirt.driver [None req-32474ca4-854b-44fb-9ea1-9f55ddd1f618 685b4924c5a04af7ae6f4a328bb50f14 a60acfe52e4b4b7f912654a59f0978b7 - - default default] [instance: 036b9f54-5759-43e6-9666-ec39d96c1729] Creating image(s)
Oct 09 16:35:50 compute-0 nova_compute[117331]: 2025-10-09 16:35:50.126 2 DEBUG oslo_concurrency.lockutils [None req-32474ca4-854b-44fb-9ea1-9f55ddd1f618 685b4924c5a04af7ae6f4a328bb50f14 a60acfe52e4b4b7f912654a59f0978b7 - - default default] Acquiring lock "/var/lib/nova/instances/036b9f54-5759-43e6-9666-ec39d96c1729/disk.info" by "nova.virt.libvirt.imagebackend.Image.resolve_driver_format.<locals>.write_to_disk_info_file" inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:405
Oct 09 16:35:50 compute-0 nova_compute[117331]: 2025-10-09 16:35:50.126 2 DEBUG oslo_concurrency.lockutils [None req-32474ca4-854b-44fb-9ea1-9f55ddd1f618 685b4924c5a04af7ae6f4a328bb50f14 a60acfe52e4b4b7f912654a59f0978b7 - - default default] Lock "/var/lib/nova/instances/036b9f54-5759-43e6-9666-ec39d96c1729/disk.info" acquired by "nova.virt.libvirt.imagebackend.Image.resolve_driver_format.<locals>.write_to_disk_info_file" :: waited 0.000s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:410
Oct 09 16:35:50 compute-0 nova_compute[117331]: 2025-10-09 16:35:50.127 2 DEBUG oslo_concurrency.lockutils [None req-32474ca4-854b-44fb-9ea1-9f55ddd1f618 685b4924c5a04af7ae6f4a328bb50f14 a60acfe52e4b4b7f912654a59f0978b7 - - default default] Lock "/var/lib/nova/instances/036b9f54-5759-43e6-9666-ec39d96c1729/disk.info" "released" by "nova.virt.libvirt.imagebackend.Image.resolve_driver_format.<locals>.write_to_disk_info_file" :: held 0.001s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:424
Oct 09 16:35:50 compute-0 nova_compute[117331]: 2025-10-09 16:35:50.128 2 DEBUG oslo_utils.imageutils.format_inspector [None req-32474ca4-854b-44fb-9ea1-9f55ddd1f618 685b4924c5a04af7ae6f4a328bb50f14 a60acfe52e4b4b7f912654a59f0978b7 - - default default] Format inspector for vmdk does not match, excluding from consideration (Signature KDMV not found: b'\xebH\x90\x00') _process_chunk /usr/lib/python3.12/site-packages/oslo_utils/imageutils/format_inspector.py:1365
Oct 09 16:35:50 compute-0 nova_compute[117331]: 2025-10-09 16:35:50.132 2 DEBUG oslo_utils.imageutils.format_inspector [None req-32474ca4-854b-44fb-9ea1-9f55ddd1f618 685b4924c5a04af7ae6f4a328bb50f14 a60acfe52e4b4b7f912654a59f0978b7 - - default default] Format inspector for vhdx does not match, excluding from consideration (Region signature not found at 30000) _process_chunk /usr/lib/python3.12/site-packages/oslo_utils/imageutils/format_inspector.py:1365
Oct 09 16:35:50 compute-0 nova_compute[117331]: 2025-10-09 16:35:50.134 2 DEBUG oslo_concurrency.processutils [None req-32474ca4-854b-44fb-9ea1-9f55ddd1f618 685b4924c5a04af7ae6f4a328bb50f14 a60acfe52e4b4b7f912654a59f0978b7 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/cea3dacfdea0f3734ae526b812744e847bc2d356 --force-share --output=json execute /usr/lib/python3.12/site-packages/oslo_concurrency/processutils.py:349
Oct 09 16:35:50 compute-0 nova_compute[117331]: 2025-10-09 16:35:50.203 2 DEBUG oslo_concurrency.processutils [None req-32474ca4-854b-44fb-9ea1-9f55ddd1f618 685b4924c5a04af7ae6f4a328bb50f14 a60acfe52e4b4b7f912654a59f0978b7 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/cea3dacfdea0f3734ae526b812744e847bc2d356 --force-share --output=json" returned: 0 in 0.068s execute /usr/lib/python3.12/site-packages/oslo_concurrency/processutils.py:372
Oct 09 16:35:50 compute-0 nova_compute[117331]: 2025-10-09 16:35:50.204 2 DEBUG oslo_concurrency.lockutils [None req-32474ca4-854b-44fb-9ea1-9f55ddd1f618 685b4924c5a04af7ae6f4a328bb50f14 a60acfe52e4b4b7f912654a59f0978b7 - - default default] Acquiring lock "cea3dacfdea0f3734ae526b812744e847bc2d356" by "nova.virt.libvirt.imagebackend.Qcow2.create_image.<locals>.create_qcow2_image" inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:405
Oct 09 16:35:50 compute-0 nova_compute[117331]: 2025-10-09 16:35:50.205 2 DEBUG oslo_concurrency.lockutils [None req-32474ca4-854b-44fb-9ea1-9f55ddd1f618 685b4924c5a04af7ae6f4a328bb50f14 a60acfe52e4b4b7f912654a59f0978b7 - - default default] Lock "cea3dacfdea0f3734ae526b812744e847bc2d356" acquired by "nova.virt.libvirt.imagebackend.Qcow2.create_image.<locals>.create_qcow2_image" :: waited 0.001s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:410
Oct 09 16:35:50 compute-0 nova_compute[117331]: 2025-10-09 16:35:50.206 2 DEBUG oslo_utils.imageutils.format_inspector [None req-32474ca4-854b-44fb-9ea1-9f55ddd1f618 685b4924c5a04af7ae6f4a328bb50f14 a60acfe52e4b4b7f912654a59f0978b7 - - default default] Format inspector for vmdk does not match, excluding from consideration (Signature KDMV not found: b'\xebH\x90\x00') _process_chunk /usr/lib/python3.12/site-packages/oslo_utils/imageutils/format_inspector.py:1365
Oct 09 16:35:50 compute-0 nova_compute[117331]: 2025-10-09 16:35:50.212 2 DEBUG oslo_utils.imageutils.format_inspector [None req-32474ca4-854b-44fb-9ea1-9f55ddd1f618 685b4924c5a04af7ae6f4a328bb50f14 a60acfe52e4b4b7f912654a59f0978b7 - - default default] Format inspector for vhdx does not match, excluding from consideration (Region signature not found at 30000) _process_chunk /usr/lib/python3.12/site-packages/oslo_utils/imageutils/format_inspector.py:1365
Oct 09 16:35:50 compute-0 nova_compute[117331]: 2025-10-09 16:35:50.212 2 DEBUG oslo_concurrency.processutils [None req-32474ca4-854b-44fb-9ea1-9f55ddd1f618 685b4924c5a04af7ae6f4a328bb50f14 a60acfe52e4b4b7f912654a59f0978b7 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/cea3dacfdea0f3734ae526b812744e847bc2d356 --force-share --output=json execute /usr/lib/python3.12/site-packages/oslo_concurrency/processutils.py:349
Oct 09 16:35:50 compute-0 nova_compute[117331]: 2025-10-09 16:35:50.283 2 DEBUG oslo_concurrency.processutils [None req-32474ca4-854b-44fb-9ea1-9f55ddd1f618 685b4924c5a04af7ae6f4a328bb50f14 a60acfe52e4b4b7f912654a59f0978b7 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/cea3dacfdea0f3734ae526b812744e847bc2d356 --force-share --output=json" returned: 0 in 0.070s execute /usr/lib/python3.12/site-packages/oslo_concurrency/processutils.py:372
Oct 09 16:35:50 compute-0 nova_compute[117331]: 2025-10-09 16:35:50.284 2 DEBUG oslo_concurrency.processutils [None req-32474ca4-854b-44fb-9ea1-9f55ddd1f618 685b4924c5a04af7ae6f4a328bb50f14 a60acfe52e4b4b7f912654a59f0978b7 - - default default] Running cmd (subprocess): env LC_ALL=C LANG=C qemu-img create -f qcow2 -o backing_file=/var/lib/nova/instances/_base/cea3dacfdea0f3734ae526b812744e847bc2d356,backing_fmt=raw /var/lib/nova/instances/036b9f54-5759-43e6-9666-ec39d96c1729/disk 1073741824 execute /usr/lib/python3.12/site-packages/oslo_concurrency/processutils.py:349
Oct 09 16:35:50 compute-0 nova_compute[117331]: 2025-10-09 16:35:50.324 2 DEBUG oslo_concurrency.processutils [None req-32474ca4-854b-44fb-9ea1-9f55ddd1f618 685b4924c5a04af7ae6f4a328bb50f14 a60acfe52e4b4b7f912654a59f0978b7 - - default default] CMD "env LC_ALL=C LANG=C qemu-img create -f qcow2 -o backing_file=/var/lib/nova/instances/_base/cea3dacfdea0f3734ae526b812744e847bc2d356,backing_fmt=raw /var/lib/nova/instances/036b9f54-5759-43e6-9666-ec39d96c1729/disk 1073741824" returned: 0 in 0.040s execute /usr/lib/python3.12/site-packages/oslo_concurrency/processutils.py:372
Oct 09 16:35:50 compute-0 nova_compute[117331]: 2025-10-09 16:35:50.325 2 DEBUG oslo_concurrency.lockutils [None req-32474ca4-854b-44fb-9ea1-9f55ddd1f618 685b4924c5a04af7ae6f4a328bb50f14 a60acfe52e4b4b7f912654a59f0978b7 - - default default] Lock "cea3dacfdea0f3734ae526b812744e847bc2d356" "released" by "nova.virt.libvirt.imagebackend.Qcow2.create_image.<locals>.create_qcow2_image" :: held 0.120s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:424
Oct 09 16:35:50 compute-0 nova_compute[117331]: 2025-10-09 16:35:50.325 2 DEBUG oslo_concurrency.processutils [None req-32474ca4-854b-44fb-9ea1-9f55ddd1f618 685b4924c5a04af7ae6f4a328bb50f14 a60acfe52e4b4b7f912654a59f0978b7 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/cea3dacfdea0f3734ae526b812744e847bc2d356 --force-share --output=json execute /usr/lib/python3.12/site-packages/oslo_concurrency/processutils.py:349
Oct 09 16:35:50 compute-0 nova_compute[117331]: 2025-10-09 16:35:50.406 2 DEBUG oslo_concurrency.processutils [None req-32474ca4-854b-44fb-9ea1-9f55ddd1f618 685b4924c5a04af7ae6f4a328bb50f14 a60acfe52e4b4b7f912654a59f0978b7 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/cea3dacfdea0f3734ae526b812744e847bc2d356 --force-share --output=json" returned: 0 in 0.080s execute /usr/lib/python3.12/site-packages/oslo_concurrency/processutils.py:372
Oct 09 16:35:50 compute-0 nova_compute[117331]: 2025-10-09 16:35:50.406 2 DEBUG nova.virt.disk.api [None req-32474ca4-854b-44fb-9ea1-9f55ddd1f618 685b4924c5a04af7ae6f4a328bb50f14 a60acfe52e4b4b7f912654a59f0978b7 - - default default] Checking if we can resize image /var/lib/nova/instances/036b9f54-5759-43e6-9666-ec39d96c1729/disk. size=1073741824 can_resize_image /usr/lib/python3.12/site-packages/nova/virt/disk/api.py:164
Oct 09 16:35:50 compute-0 nova_compute[117331]: 2025-10-09 16:35:50.407 2 DEBUG oslo_concurrency.processutils [None req-32474ca4-854b-44fb-9ea1-9f55ddd1f618 685b4924c5a04af7ae6f4a328bb50f14 a60acfe52e4b4b7f912654a59f0978b7 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/036b9f54-5759-43e6-9666-ec39d96c1729/disk --force-share --output=json execute /usr/lib/python3.12/site-packages/oslo_concurrency/processutils.py:349
Oct 09 16:35:50 compute-0 nova_compute[117331]: 2025-10-09 16:35:50.466 2 DEBUG oslo_concurrency.processutils [None req-32474ca4-854b-44fb-9ea1-9f55ddd1f618 685b4924c5a04af7ae6f4a328bb50f14 a60acfe52e4b4b7f912654a59f0978b7 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/036b9f54-5759-43e6-9666-ec39d96c1729/disk --force-share --output=json" returned: 0 in 0.059s execute /usr/lib/python3.12/site-packages/oslo_concurrency/processutils.py:372
Oct 09 16:35:50 compute-0 nova_compute[117331]: 2025-10-09 16:35:50.467 2 DEBUG nova.virt.disk.api [None req-32474ca4-854b-44fb-9ea1-9f55ddd1f618 685b4924c5a04af7ae6f4a328bb50f14 a60acfe52e4b4b7f912654a59f0978b7 - - default default] Cannot resize image /var/lib/nova/instances/036b9f54-5759-43e6-9666-ec39d96c1729/disk to a smaller size. can_resize_image /usr/lib/python3.12/site-packages/nova/virt/disk/api.py:170
Oct 09 16:35:50 compute-0 nova_compute[117331]: 2025-10-09 16:35:50.467 2 DEBUG nova.virt.libvirt.driver [None req-32474ca4-854b-44fb-9ea1-9f55ddd1f618 685b4924c5a04af7ae6f4a328bb50f14 a60acfe52e4b4b7f912654a59f0978b7 - - default default] [instance: 036b9f54-5759-43e6-9666-ec39d96c1729] Created local disks _create_image /usr/lib/python3.12/site-packages/nova/virt/libvirt/driver.py:5270
Oct 09 16:35:50 compute-0 nova_compute[117331]: 2025-10-09 16:35:50.468 2 DEBUG nova.virt.libvirt.driver [None req-32474ca4-854b-44fb-9ea1-9f55ddd1f618 685b4924c5a04af7ae6f4a328bb50f14 a60acfe52e4b4b7f912654a59f0978b7 - - default default] [instance: 036b9f54-5759-43e6-9666-ec39d96c1729] Ensure instance console log exists: /var/lib/nova/instances/036b9f54-5759-43e6-9666-ec39d96c1729/console.log _ensure_console_log_for_instance /usr/lib/python3.12/site-packages/nova/virt/libvirt/driver.py:5017
Oct 09 16:35:50 compute-0 nova_compute[117331]: 2025-10-09 16:35:50.468 2 DEBUG oslo_concurrency.lockutils [None req-32474ca4-854b-44fb-9ea1-9f55ddd1f618 685b4924c5a04af7ae6f4a328bb50f14 a60acfe52e4b4b7f912654a59f0978b7 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:405
Oct 09 16:35:50 compute-0 nova_compute[117331]: 2025-10-09 16:35:50.469 2 DEBUG oslo_concurrency.lockutils [None req-32474ca4-854b-44fb-9ea1-9f55ddd1f618 685b4924c5a04af7ae6f4a328bb50f14 a60acfe52e4b4b7f912654a59f0978b7 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:410
Oct 09 16:35:50 compute-0 nova_compute[117331]: 2025-10-09 16:35:50.469 2 DEBUG oslo_concurrency.lockutils [None req-32474ca4-854b-44fb-9ea1-9f55ddd1f618 685b4924c5a04af7ae6f4a328bb50f14 a60acfe52e4b4b7f912654a59f0978b7 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:424
Oct 09 16:35:50 compute-0 nova_compute[117331]: 2025-10-09 16:35:50.582 2 DEBUG oslo_concurrency.lockutils [req-5bf80928-806a-421d-a4be-0c0d7eb7abb4 req-8d0f92d6-697b-4be8-8475-0368e9400fb3 ee15961ad2be4bada6c106ac4f69f422 c076c42271004e7aa86e84416e33f826 - - default default] Releasing lock "refresh_cache-036b9f54-5759-43e6-9666-ec39d96c1729" lock /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:334
Oct 09 16:35:50 compute-0 nova_compute[117331]: 2025-10-09 16:35:50.583 2 DEBUG oslo_concurrency.lockutils [None req-32474ca4-854b-44fb-9ea1-9f55ddd1f618 685b4924c5a04af7ae6f4a328bb50f14 a60acfe52e4b4b7f912654a59f0978b7 - - default default] Acquired lock "refresh_cache-036b9f54-5759-43e6-9666-ec39d96c1729" lock /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:316
Oct 09 16:35:50 compute-0 nova_compute[117331]: 2025-10-09 16:35:50.583 2 DEBUG nova.network.neutron [None req-32474ca4-854b-44fb-9ea1-9f55ddd1f618 685b4924c5a04af7ae6f4a328bb50f14 a60acfe52e4b4b7f912654a59f0978b7 - - default default] [instance: 036b9f54-5759-43e6-9666-ec39d96c1729] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.12/site-packages/nova/network/neutron.py:2070
Oct 09 16:35:51 compute-0 nova_compute[117331]: 2025-10-09 16:35:51.309 2 DEBUG nova.network.neutron [None req-32474ca4-854b-44fb-9ea1-9f55ddd1f618 685b4924c5a04af7ae6f4a328bb50f14 a60acfe52e4b4b7f912654a59f0978b7 - - default default] [instance: 036b9f54-5759-43e6-9666-ec39d96c1729] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.12/site-packages/nova/network/neutron.py:3383
Oct 09 16:35:51 compute-0 nova_compute[117331]: 2025-10-09 16:35:51.519 2 WARNING neutronclient.v2_0.client [None req-32474ca4-854b-44fb-9ea1-9f55ddd1f618 685b4924c5a04af7ae6f4a328bb50f14 a60acfe52e4b4b7f912654a59f0978b7 - - default default] The python binding code in neutronclient is deprecated in favor of OpenstackSDK, please use that as this will be removed in a future release.
Oct 09 16:35:51 compute-0 nova_compute[117331]: 2025-10-09 16:35:51.651 2 DEBUG nova.network.neutron [None req-32474ca4-854b-44fb-9ea1-9f55ddd1f618 685b4924c5a04af7ae6f4a328bb50f14 a60acfe52e4b4b7f912654a59f0978b7 - - default default] [instance: 036b9f54-5759-43e6-9666-ec39d96c1729] Updating instance_info_cache with network_info: [{"id": "19fbb24b-757a-4067-a3e1-7a2c326cb886", "address": "fa:16:3e:60:6b:b1", "network": {"id": "cf3aa351-4d26-41f3-8cb5-1ff2d3d995c8", "bridge": "br-int", "label": "tempest-TestExecuteWorkloadStabilizationStrategy-105109230-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "5b6d650a71ca47d58b10f5fd874c4898", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap19fbb24b-75", "ovs_interfaceid": "19fbb24b-757a-4067-a3e1-7a2c326cb886", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.12/site-packages/nova/network/neutron.py:116
Oct 09 16:35:52 compute-0 nova_compute[117331]: 2025-10-09 16:35:52.158 2 DEBUG oslo_concurrency.lockutils [None req-32474ca4-854b-44fb-9ea1-9f55ddd1f618 685b4924c5a04af7ae6f4a328bb50f14 a60acfe52e4b4b7f912654a59f0978b7 - - default default] Releasing lock "refresh_cache-036b9f54-5759-43e6-9666-ec39d96c1729" lock /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:334
Oct 09 16:35:52 compute-0 nova_compute[117331]: 2025-10-09 16:35:52.158 2 DEBUG nova.compute.manager [None req-32474ca4-854b-44fb-9ea1-9f55ddd1f618 685b4924c5a04af7ae6f4a328bb50f14 a60acfe52e4b4b7f912654a59f0978b7 - - default default] [instance: 036b9f54-5759-43e6-9666-ec39d96c1729] Instance network_info: |[{"id": "19fbb24b-757a-4067-a3e1-7a2c326cb886", "address": "fa:16:3e:60:6b:b1", "network": {"id": "cf3aa351-4d26-41f3-8cb5-1ff2d3d995c8", "bridge": "br-int", "label": "tempest-TestExecuteWorkloadStabilizationStrategy-105109230-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "5b6d650a71ca47d58b10f5fd874c4898", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap19fbb24b-75", "ovs_interfaceid": "19fbb24b-757a-4067-a3e1-7a2c326cb886", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.12/site-packages/nova/compute/manager.py:2031
Oct 09 16:35:52 compute-0 nova_compute[117331]: 2025-10-09 16:35:52.161 2 DEBUG nova.virt.libvirt.driver [None req-32474ca4-854b-44fb-9ea1-9f55ddd1f618 685b4924c5a04af7ae6f4a328bb50f14 a60acfe52e4b4b7f912654a59f0978b7 - - default default] [instance: 036b9f54-5759-43e6-9666-ec39d96c1729] Start _get_guest_xml network_info=[{"id": "19fbb24b-757a-4067-a3e1-7a2c326cb886", "address": "fa:16:3e:60:6b:b1", "network": {"id": "cf3aa351-4d26-41f3-8cb5-1ff2d3d995c8", "bridge": "br-int", "label": "tempest-TestExecuteWorkloadStabilizationStrategy-105109230-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "5b6d650a71ca47d58b10f5fd874c4898", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap19fbb24b-75", "ovs_interfaceid": "19fbb24b-757a-4067-a3e1-7a2c326cb886", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-09T16:07:02Z,direct_url=<?>,disk_format='qcow2',id=b7d6e0af-25e4-4227-9dc6-43143898ceee,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='b30e8cf5e10742f190212b4cb97ce2c9',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-09T16:07:04Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'device_type': 'disk', 'encrypted': False, 'size': 0, 'encryption_format': None, 'guest_format': None, 'boot_index': 0, 'device_name': '/dev/vda', 'disk_bus': 'virtio', 'encryption_options': None, 'encryption_secret_uuid': None, 'image_id': 'b7d6e0af-25e4-4227-9dc6-43143898ceee'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None}share_info=None _get_guest_xml /usr/lib/python3.12/site-packages/nova/virt/libvirt/driver.py:8046
Oct 09 16:35:52 compute-0 nova_compute[117331]: 2025-10-09 16:35:52.165 2 WARNING nova.virt.libvirt.driver [None req-32474ca4-854b-44fb-9ea1-9f55ddd1f618 685b4924c5a04af7ae6f4a328bb50f14 a60acfe52e4b4b7f912654a59f0978b7 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Oct 09 16:35:52 compute-0 nova_compute[117331]: 2025-10-09 16:35:52.166 2 DEBUG nova.virt.driver [None req-32474ca4-854b-44fb-9ea1-9f55ddd1f618 685b4924c5a04af7ae6f4a328bb50f14 a60acfe52e4b4b7f912654a59f0978b7 - - default default] InstanceDriverMetadata: InstanceDriverMetadata(root_type='image', root_id='b7d6e0af-25e4-4227-9dc6-43143898ceee', instance_meta=NovaInstanceMeta(name='tempest-TestExecuteWorkloadStabilizationStrategy-server-1046217673', uuid='036b9f54-5759-43e6-9666-ec39d96c1729'), owner=OwnerMeta(userid='685b4924c5a04af7ae6f4a328bb50f14', username='tempest-TestExecuteWorkloadStabilizationStrategy-1000021673-project-admin', projectid='a60acfe52e4b4b7f912654a59f0978b7', projectname='tempest-TestExecuteWorkloadStabilizationStrategy-1000021673'), image=ImageMeta(id='b7d6e0af-25e4-4227-9dc6-43143898ceee', name=None, container_format='bare', disk_format='qcow2', min_disk=1, min_ram=0, properties={'hw_rng_model': 'virtio'}), flavor=FlavorMeta(name='m1.nano', flavorid='5aeb5bf9-70f3-4ab6-b418-2c3319a7d2b3', memory_mb=128, vcpus=1, root_gb=1, ephemeral_gb=0, extra_specs={'hw_rng:allowed': 'True'}, swap=0), network_info=[{"id": "19fbb24b-757a-4067-a3e1-7a2c326cb886", "address": "fa:16:3e:60:6b:b1", "network": {"id": "cf3aa351-4d26-41f3-8cb5-1ff2d3d995c8", "bridge": "br-int", "label": "tempest-TestExecuteWorkloadStabilizationStrategy-105109230-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "5b6d650a71ca47d58b10f5fd874c4898", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap19fbb24b-75", 
"ovs_interfaceid": "19fbb24b-757a-4067-a3e1-7a2c326cb886", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}], nova_package='32.1.0-0.20251008143712.076498e.el10', creation_time=1760027752.1663537) get_instance_driver_metadata /usr/lib/python3.12/site-packages/nova/virt/driver.py:437
Oct 09 16:35:52 compute-0 nova_compute[117331]: 2025-10-09 16:35:52.170 2 DEBUG nova.virt.libvirt.host [None req-32474ca4-854b-44fb-9ea1-9f55ddd1f618 685b4924c5a04af7ae6f4a328bb50f14 a60acfe52e4b4b7f912654a59f0978b7 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.12/site-packages/nova/virt/libvirt/host.py:1698
Oct 09 16:35:52 compute-0 nova_compute[117331]: 2025-10-09 16:35:52.170 2 DEBUG nova.virt.libvirt.host [None req-32474ca4-854b-44fb-9ea1-9f55ddd1f618 685b4924c5a04af7ae6f4a328bb50f14 a60acfe52e4b4b7f912654a59f0978b7 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.12/site-packages/nova/virt/libvirt/host.py:1708
Oct 09 16:35:52 compute-0 nova_compute[117331]: 2025-10-09 16:35:52.173 2 DEBUG nova.virt.libvirt.host [None req-32474ca4-854b-44fb-9ea1-9f55ddd1f618 685b4924c5a04af7ae6f4a328bb50f14 a60acfe52e4b4b7f912654a59f0978b7 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.12/site-packages/nova/virt/libvirt/host.py:1717
Oct 09 16:35:52 compute-0 nova_compute[117331]: 2025-10-09 16:35:52.174 2 DEBUG nova.virt.libvirt.host [None req-32474ca4-854b-44fb-9ea1-9f55ddd1f618 685b4924c5a04af7ae6f4a328bb50f14 a60acfe52e4b4b7f912654a59f0978b7 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.12/site-packages/nova/virt/libvirt/host.py:1724
Oct 09 16:35:52 compute-0 nova_compute[117331]: 2025-10-09 16:35:52.175 2 DEBUG nova.virt.libvirt.driver [None req-32474ca4-854b-44fb-9ea1-9f55ddd1f618 685b4924c5a04af7ae6f4a328bb50f14 a60acfe52e4b4b7f912654a59f0978b7 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.12/site-packages/nova/virt/libvirt/driver.py:5809
Oct 09 16:35:52 compute-0 nova_compute[117331]: 2025-10-09 16:35:52.175 2 DEBUG nova.virt.hardware [None req-32474ca4-854b-44fb-9ea1-9f55ddd1f618 685b4924c5a04af7ae6f4a328bb50f14 a60acfe52e4b4b7f912654a59f0978b7 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-10-09T16:07:00Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='5aeb5bf9-70f3-4ab6-b418-2c3319a7d2b3',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-09T16:07:02Z,direct_url=<?>,disk_format='qcow2',id=b7d6e0af-25e4-4227-9dc6-43143898ceee,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='b30e8cf5e10742f190212b4cb97ce2c9',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-09T16:07:04Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.12/site-packages/nova/virt/hardware.py:571
Oct 09 16:35:52 compute-0 nova_compute[117331]: 2025-10-09 16:35:52.176 2 DEBUG nova.virt.hardware [None req-32474ca4-854b-44fb-9ea1-9f55ddd1f618 685b4924c5a04af7ae6f4a328bb50f14 a60acfe52e4b4b7f912654a59f0978b7 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.12/site-packages/nova/virt/hardware.py:356
Oct 09 16:35:52 compute-0 nova_compute[117331]: 2025-10-09 16:35:52.176 2 DEBUG nova.virt.hardware [None req-32474ca4-854b-44fb-9ea1-9f55ddd1f618 685b4924c5a04af7ae6f4a328bb50f14 a60acfe52e4b4b7f912654a59f0978b7 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.12/site-packages/nova/virt/hardware.py:360
Oct 09 16:35:52 compute-0 nova_compute[117331]: 2025-10-09 16:35:52.177 2 DEBUG nova.virt.hardware [None req-32474ca4-854b-44fb-9ea1-9f55ddd1f618 685b4924c5a04af7ae6f4a328bb50f14 a60acfe52e4b4b7f912654a59f0978b7 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.12/site-packages/nova/virt/hardware.py:396
Oct 09 16:35:52 compute-0 nova_compute[117331]: 2025-10-09 16:35:52.177 2 DEBUG nova.virt.hardware [None req-32474ca4-854b-44fb-9ea1-9f55ddd1f618 685b4924c5a04af7ae6f4a328bb50f14 a60acfe52e4b4b7f912654a59f0978b7 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.12/site-packages/nova/virt/hardware.py:400
Oct 09 16:35:52 compute-0 nova_compute[117331]: 2025-10-09 16:35:52.178 2 DEBUG nova.virt.hardware [None req-32474ca4-854b-44fb-9ea1-9f55ddd1f618 685b4924c5a04af7ae6f4a328bb50f14 a60acfe52e4b4b7f912654a59f0978b7 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.12/site-packages/nova/virt/hardware.py:438
Oct 09 16:35:52 compute-0 nova_compute[117331]: 2025-10-09 16:35:52.178 2 DEBUG nova.virt.hardware [None req-32474ca4-854b-44fb-9ea1-9f55ddd1f618 685b4924c5a04af7ae6f4a328bb50f14 a60acfe52e4b4b7f912654a59f0978b7 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.12/site-packages/nova/virt/hardware.py:577
Oct 09 16:35:52 compute-0 nova_compute[117331]: 2025-10-09 16:35:52.179 2 DEBUG nova.virt.hardware [None req-32474ca4-854b-44fb-9ea1-9f55ddd1f618 685b4924c5a04af7ae6f4a328bb50f14 a60acfe52e4b4b7f912654a59f0978b7 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.12/site-packages/nova/virt/hardware.py:479
Oct 09 16:35:52 compute-0 nova_compute[117331]: 2025-10-09 16:35:52.179 2 DEBUG nova.virt.hardware [None req-32474ca4-854b-44fb-9ea1-9f55ddd1f618 685b4924c5a04af7ae6f4a328bb50f14 a60acfe52e4b4b7f912654a59f0978b7 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.12/site-packages/nova/virt/hardware.py:509
Oct 09 16:35:52 compute-0 nova_compute[117331]: 2025-10-09 16:35:52.179 2 DEBUG nova.virt.hardware [None req-32474ca4-854b-44fb-9ea1-9f55ddd1f618 685b4924c5a04af7ae6f4a328bb50f14 a60acfe52e4b4b7f912654a59f0978b7 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.12/site-packages/nova/virt/hardware.py:583
Oct 09 16:35:52 compute-0 nova_compute[117331]: 2025-10-09 16:35:52.179 2 DEBUG nova.virt.hardware [None req-32474ca4-854b-44fb-9ea1-9f55ddd1f618 685b4924c5a04af7ae6f4a328bb50f14 a60acfe52e4b4b7f912654a59f0978b7 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.12/site-packages/nova/virt/hardware.py:585
Oct 09 16:35:52 compute-0 nova_compute[117331]: 2025-10-09 16:35:52.185 2 DEBUG nova.virt.libvirt.vif [None req-32474ca4-854b-44fb-9ea1-9f55ddd1f618 685b4924c5a04af7ae6f4a328bb50f14 a60acfe52e4b4b7f912654a59f0978b7 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,compute_id=1,config_drive='',created_at=2025-10-09T16:35:43Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description=None,display_name='tempest-TestExecuteWorkloadStabilizationStrategy-server-1046217673',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testexecuteworkloadstabilizationstrategy-server-1046217',id=27,image_ref='b7d6e0af-25e4-4227-9dc6-43143898ceee',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='a60acfe52e4b4b7f912654a59f0978b7',ramdisk_id='',reservation_id='r-r1q6w401',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='admin,member,reader,manager',image_base_image_ref='b7d6e0af-25e4-4227-9dc6-43143898ceee',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestExecuteWorkloadStabilizationStrategy-1000021673',owner_user_name=
'tempest-TestExecuteWorkloadStabilizationStrategy-1000021673-project-admin'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-09T16:35:49Z,user_data=None,user_id='685b4924c5a04af7ae6f4a328bb50f14',uuid=036b9f54-5759-43e6-9666-ec39d96c1729,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "19fbb24b-757a-4067-a3e1-7a2c326cb886", "address": "fa:16:3e:60:6b:b1", "network": {"id": "cf3aa351-4d26-41f3-8cb5-1ff2d3d995c8", "bridge": "br-int", "label": "tempest-TestExecuteWorkloadStabilizationStrategy-105109230-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "5b6d650a71ca47d58b10f5fd874c4898", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap19fbb24b-75", "ovs_interfaceid": "19fbb24b-757a-4067-a3e1-7a2c326cb886", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.12/site-packages/nova/virt/libvirt/vif.py:574
Oct 09 16:35:52 compute-0 nova_compute[117331]: 2025-10-09 16:35:52.185 2 DEBUG nova.network.os_vif_util [None req-32474ca4-854b-44fb-9ea1-9f55ddd1f618 685b4924c5a04af7ae6f4a328bb50f14 a60acfe52e4b4b7f912654a59f0978b7 - - default default] Converting VIF {"id": "19fbb24b-757a-4067-a3e1-7a2c326cb886", "address": "fa:16:3e:60:6b:b1", "network": {"id": "cf3aa351-4d26-41f3-8cb5-1ff2d3d995c8", "bridge": "br-int", "label": "tempest-TestExecuteWorkloadStabilizationStrategy-105109230-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "5b6d650a71ca47d58b10f5fd874c4898", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap19fbb24b-75", "ovs_interfaceid": "19fbb24b-757a-4067-a3e1-7a2c326cb886", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.12/site-packages/nova/network/os_vif_util.py:511
Oct 09 16:35:52 compute-0 nova_compute[117331]: 2025-10-09 16:35:52.186 2 DEBUG nova.network.os_vif_util [None req-32474ca4-854b-44fb-9ea1-9f55ddd1f618 685b4924c5a04af7ae6f4a328bb50f14 a60acfe52e4b4b7f912654a59f0978b7 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:60:6b:b1,bridge_name='br-int',has_traffic_filtering=True,id=19fbb24b-757a-4067-a3e1-7a2c326cb886,network=Network(cf3aa351-4d26-41f3-8cb5-1ff2d3d995c8),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap19fbb24b-75') nova_to_osvif_vif /usr/lib/python3.12/site-packages/nova/network/os_vif_util.py:548
Oct 09 16:35:52 compute-0 nova_compute[117331]: 2025-10-09 16:35:52.186 2 DEBUG nova.objects.instance [None req-32474ca4-854b-44fb-9ea1-9f55ddd1f618 685b4924c5a04af7ae6f4a328bb50f14 a60acfe52e4b4b7f912654a59f0978b7 - - default default] Lazy-loading 'pci_devices' on Instance uuid 036b9f54-5759-43e6-9666-ec39d96c1729 obj_load_attr /usr/lib/python3.12/site-packages/nova/objects/instance.py:1141
Oct 09 16:35:52 compute-0 nova_compute[117331]: 2025-10-09 16:35:52.265 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 09 16:35:52 compute-0 nova_compute[117331]: 2025-10-09 16:35:52.694 2 DEBUG nova.virt.libvirt.driver [None req-32474ca4-854b-44fb-9ea1-9f55ddd1f618 685b4924c5a04af7ae6f4a328bb50f14 a60acfe52e4b4b7f912654a59f0978b7 - - default default] [instance: 036b9f54-5759-43e6-9666-ec39d96c1729] End _get_guest_xml xml=<domain type="kvm">
Oct 09 16:35:52 compute-0 nova_compute[117331]:   <uuid>036b9f54-5759-43e6-9666-ec39d96c1729</uuid>
Oct 09 16:35:52 compute-0 nova_compute[117331]:   <name>instance-0000001b</name>
Oct 09 16:35:52 compute-0 nova_compute[117331]:   <memory>131072</memory>
Oct 09 16:35:52 compute-0 nova_compute[117331]:   <vcpu>1</vcpu>
Oct 09 16:35:52 compute-0 nova_compute[117331]:   <metadata>
Oct 09 16:35:52 compute-0 nova_compute[117331]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Oct 09 16:35:52 compute-0 nova_compute[117331]:       <nova:package version="32.1.0-0.20251008143712.076498e.el10"/>
Oct 09 16:35:52 compute-0 nova_compute[117331]:       <nova:name>tempest-TestExecuteWorkloadStabilizationStrategy-server-1046217673</nova:name>
Oct 09 16:35:52 compute-0 nova_compute[117331]:       <nova:creationTime>2025-10-09 16:35:52</nova:creationTime>
Oct 09 16:35:52 compute-0 nova_compute[117331]:       <nova:flavor name="m1.nano" id="5aeb5bf9-70f3-4ab6-b418-2c3319a7d2b3">
Oct 09 16:35:52 compute-0 nova_compute[117331]:         <nova:memory>128</nova:memory>
Oct 09 16:35:52 compute-0 nova_compute[117331]:         <nova:disk>1</nova:disk>
Oct 09 16:35:52 compute-0 nova_compute[117331]:         <nova:swap>0</nova:swap>
Oct 09 16:35:52 compute-0 nova_compute[117331]:         <nova:ephemeral>0</nova:ephemeral>
Oct 09 16:35:52 compute-0 nova_compute[117331]:         <nova:vcpus>1</nova:vcpus>
Oct 09 16:35:52 compute-0 nova_compute[117331]:         <nova:extraSpecs>
Oct 09 16:35:52 compute-0 nova_compute[117331]:           <nova:extraSpec name="hw_rng:allowed">True</nova:extraSpec>
Oct 09 16:35:52 compute-0 nova_compute[117331]:         </nova:extraSpecs>
Oct 09 16:35:52 compute-0 nova_compute[117331]:       </nova:flavor>
Oct 09 16:35:52 compute-0 nova_compute[117331]:       <nova:image uuid="b7d6e0af-25e4-4227-9dc6-43143898ceee">
Oct 09 16:35:52 compute-0 nova_compute[117331]:         <nova:containerFormat>bare</nova:containerFormat>
Oct 09 16:35:52 compute-0 nova_compute[117331]:         <nova:diskFormat>qcow2</nova:diskFormat>
Oct 09 16:35:52 compute-0 nova_compute[117331]:         <nova:minDisk>1</nova:minDisk>
Oct 09 16:35:52 compute-0 nova_compute[117331]:         <nova:minRam>0</nova:minRam>
Oct 09 16:35:52 compute-0 nova_compute[117331]:         <nova:properties>
Oct 09 16:35:52 compute-0 nova_compute[117331]:           <nova:property name="hw_rng_model">virtio</nova:property>
Oct 09 16:35:52 compute-0 nova_compute[117331]:         </nova:properties>
Oct 09 16:35:52 compute-0 nova_compute[117331]:       </nova:image>
Oct 09 16:35:52 compute-0 nova_compute[117331]:       <nova:owner>
Oct 09 16:35:52 compute-0 nova_compute[117331]:         <nova:user uuid="685b4924c5a04af7ae6f4a328bb50f14">tempest-TestExecuteWorkloadStabilizationStrategy-1000021673-project-admin</nova:user>
Oct 09 16:35:52 compute-0 nova_compute[117331]:         <nova:project uuid="a60acfe52e4b4b7f912654a59f0978b7">tempest-TestExecuteWorkloadStabilizationStrategy-1000021673</nova:project>
Oct 09 16:35:52 compute-0 nova_compute[117331]:       </nova:owner>
Oct 09 16:35:52 compute-0 nova_compute[117331]:       <nova:root type="image" uuid="b7d6e0af-25e4-4227-9dc6-43143898ceee"/>
Oct 09 16:35:52 compute-0 nova_compute[117331]:       <nova:ports>
Oct 09 16:35:52 compute-0 nova_compute[117331]:         <nova:port uuid="19fbb24b-757a-4067-a3e1-7a2c326cb886">
Oct 09 16:35:52 compute-0 nova_compute[117331]:           <nova:ip type="fixed" address="10.100.0.4" ipVersion="4"/>
Oct 09 16:35:52 compute-0 nova_compute[117331]:         </nova:port>
Oct 09 16:35:52 compute-0 nova_compute[117331]:       </nova:ports>
Oct 09 16:35:52 compute-0 nova_compute[117331]:     </nova:instance>
Oct 09 16:35:52 compute-0 nova_compute[117331]:   </metadata>
Oct 09 16:35:52 compute-0 nova_compute[117331]:   <sysinfo type="smbios">
Oct 09 16:35:52 compute-0 nova_compute[117331]:     <system>
Oct 09 16:35:52 compute-0 nova_compute[117331]:       <entry name="manufacturer">RDO</entry>
Oct 09 16:35:52 compute-0 nova_compute[117331]:       <entry name="product">OpenStack Compute</entry>
Oct 09 16:35:52 compute-0 nova_compute[117331]:       <entry name="version">32.1.0-0.20251008143712.076498e.el10</entry>
Oct 09 16:35:52 compute-0 nova_compute[117331]:       <entry name="serial">036b9f54-5759-43e6-9666-ec39d96c1729</entry>
Oct 09 16:35:52 compute-0 nova_compute[117331]:       <entry name="uuid">036b9f54-5759-43e6-9666-ec39d96c1729</entry>
Oct 09 16:35:52 compute-0 nova_compute[117331]:       <entry name="family">Virtual Machine</entry>
Oct 09 16:35:52 compute-0 nova_compute[117331]:     </system>
Oct 09 16:35:52 compute-0 nova_compute[117331]:   </sysinfo>
Oct 09 16:35:52 compute-0 nova_compute[117331]:   <os>
Oct 09 16:35:52 compute-0 nova_compute[117331]:     <type arch="x86_64" machine="q35">hvm</type>
Oct 09 16:35:52 compute-0 nova_compute[117331]:     <boot dev="hd"/>
Oct 09 16:35:52 compute-0 nova_compute[117331]:     <smbios mode="sysinfo"/>
Oct 09 16:35:52 compute-0 nova_compute[117331]:   </os>
Oct 09 16:35:52 compute-0 nova_compute[117331]:   <features>
Oct 09 16:35:52 compute-0 nova_compute[117331]:     <acpi/>
Oct 09 16:35:52 compute-0 nova_compute[117331]:     <apic/>
Oct 09 16:35:52 compute-0 nova_compute[117331]:     <vmcoreinfo/>
Oct 09 16:35:52 compute-0 nova_compute[117331]:   </features>
Oct 09 16:35:52 compute-0 nova_compute[117331]:   <clock offset="utc">
Oct 09 16:35:52 compute-0 nova_compute[117331]:     <timer name="pit" tickpolicy="delay"/>
Oct 09 16:35:52 compute-0 nova_compute[117331]:     <timer name="rtc" tickpolicy="catchup"/>
Oct 09 16:35:52 compute-0 nova_compute[117331]:     <timer name="hpet" present="no"/>
Oct 09 16:35:52 compute-0 nova_compute[117331]:   </clock>
Oct 09 16:35:52 compute-0 nova_compute[117331]:   <cpu mode="host-model" match="exact">
Oct 09 16:35:52 compute-0 nova_compute[117331]:     <topology sockets="1" cores="1" threads="1"/>
Oct 09 16:35:52 compute-0 nova_compute[117331]:   </cpu>
Oct 09 16:35:52 compute-0 nova_compute[117331]:   <devices>
Oct 09 16:35:52 compute-0 nova_compute[117331]:     <disk type="file" device="disk">
Oct 09 16:35:52 compute-0 nova_compute[117331]:       <driver name="qemu" type="qcow2" cache="none"/>
Oct 09 16:35:52 compute-0 nova_compute[117331]:       <source file="/var/lib/nova/instances/036b9f54-5759-43e6-9666-ec39d96c1729/disk"/>
Oct 09 16:35:52 compute-0 nova_compute[117331]:       <target dev="vda" bus="virtio"/>
Oct 09 16:35:52 compute-0 nova_compute[117331]:     </disk>
Oct 09 16:35:52 compute-0 nova_compute[117331]:     <disk type="file" device="cdrom">
Oct 09 16:35:52 compute-0 nova_compute[117331]:       <driver name="qemu" type="raw" cache="none"/>
Oct 09 16:35:52 compute-0 nova_compute[117331]:       <source file="/var/lib/nova/instances/036b9f54-5759-43e6-9666-ec39d96c1729/disk.config"/>
Oct 09 16:35:52 compute-0 nova_compute[117331]:       <target dev="sda" bus="sata"/>
Oct 09 16:35:52 compute-0 nova_compute[117331]:     </disk>
Oct 09 16:35:52 compute-0 nova_compute[117331]:     <interface type="ethernet">
Oct 09 16:35:52 compute-0 nova_compute[117331]:       <mac address="fa:16:3e:60:6b:b1"/>
Oct 09 16:35:52 compute-0 nova_compute[117331]:       <model type="virtio"/>
Oct 09 16:35:52 compute-0 nova_compute[117331]:       <driver name="vhost" rx_queue_size="512"/>
Oct 09 16:35:52 compute-0 nova_compute[117331]:       <mtu size="1442"/>
Oct 09 16:35:52 compute-0 nova_compute[117331]:       <target dev="tap19fbb24b-75"/>
Oct 09 16:35:52 compute-0 nova_compute[117331]:     </interface>
Oct 09 16:35:52 compute-0 nova_compute[117331]:     <serial type="pty">
Oct 09 16:35:52 compute-0 nova_compute[117331]:       <log file="/var/lib/nova/instances/036b9f54-5759-43e6-9666-ec39d96c1729/console.log" append="off"/>
Oct 09 16:35:52 compute-0 nova_compute[117331]:     </serial>
Oct 09 16:35:52 compute-0 nova_compute[117331]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Oct 09 16:35:52 compute-0 nova_compute[117331]:     <video>
Oct 09 16:35:52 compute-0 nova_compute[117331]:       <model type="virtio"/>
Oct 09 16:35:52 compute-0 nova_compute[117331]:     </video>
Oct 09 16:35:52 compute-0 nova_compute[117331]:     <input type="tablet" bus="usb"/>
Oct 09 16:35:52 compute-0 nova_compute[117331]:     <rng model="virtio">
Oct 09 16:35:52 compute-0 nova_compute[117331]:       <backend model="random">/dev/urandom</backend>
Oct 09 16:35:52 compute-0 nova_compute[117331]:     </rng>
Oct 09 16:35:52 compute-0 nova_compute[117331]:     <controller type="pci" model="pcie-root"/>
Oct 09 16:35:52 compute-0 nova_compute[117331]:     <controller type="pci" model="pcie-root-port"/>
Oct 09 16:35:52 compute-0 nova_compute[117331]:     <controller type="pci" model="pcie-root-port"/>
Oct 09 16:35:52 compute-0 nova_compute[117331]:     <controller type="pci" model="pcie-root-port"/>
Oct 09 16:35:52 compute-0 nova_compute[117331]:     <controller type="pci" model="pcie-root-port"/>
Oct 09 16:35:52 compute-0 nova_compute[117331]:     <controller type="pci" model="pcie-root-port"/>
Oct 09 16:35:52 compute-0 nova_compute[117331]:     <controller type="pci" model="pcie-root-port"/>
Oct 09 16:35:52 compute-0 nova_compute[117331]:     <controller type="pci" model="pcie-root-port"/>
Oct 09 16:35:52 compute-0 nova_compute[117331]:     <controller type="pci" model="pcie-root-port"/>
Oct 09 16:35:52 compute-0 nova_compute[117331]:     <controller type="pci" model="pcie-root-port"/>
Oct 09 16:35:52 compute-0 nova_compute[117331]:     <controller type="pci" model="pcie-root-port"/>
Oct 09 16:35:52 compute-0 nova_compute[117331]:     <controller type="pci" model="pcie-root-port"/>
Oct 09 16:35:52 compute-0 nova_compute[117331]:     <controller type="pci" model="pcie-root-port"/>
Oct 09 16:35:52 compute-0 nova_compute[117331]:     <controller type="pci" model="pcie-root-port"/>
Oct 09 16:35:52 compute-0 nova_compute[117331]:     <controller type="pci" model="pcie-root-port"/>
Oct 09 16:35:52 compute-0 nova_compute[117331]:     <controller type="pci" model="pcie-root-port"/>
Oct 09 16:35:52 compute-0 nova_compute[117331]:     <controller type="pci" model="pcie-root-port"/>
Oct 09 16:35:52 compute-0 nova_compute[117331]:     <controller type="pci" model="pcie-root-port"/>
Oct 09 16:35:52 compute-0 nova_compute[117331]:     <controller type="pci" model="pcie-root-port"/>
Oct 09 16:35:52 compute-0 nova_compute[117331]:     <controller type="pci" model="pcie-root-port"/>
Oct 09 16:35:52 compute-0 nova_compute[117331]:     <controller type="pci" model="pcie-root-port"/>
Oct 09 16:35:52 compute-0 nova_compute[117331]:     <controller type="pci" model="pcie-root-port"/>
Oct 09 16:35:52 compute-0 nova_compute[117331]:     <controller type="pci" model="pcie-root-port"/>
Oct 09 16:35:52 compute-0 nova_compute[117331]:     <controller type="pci" model="pcie-root-port"/>
Oct 09 16:35:52 compute-0 nova_compute[117331]:     <controller type="pci" model="pcie-root-port"/>
Oct 09 16:35:52 compute-0 nova_compute[117331]:     <controller type="usb" index="0"/>
Oct 09 16:35:52 compute-0 nova_compute[117331]:     <memballoon model="virtio" autodeflate="on" freePageReporting="on">
Oct 09 16:35:52 compute-0 nova_compute[117331]:       <stats period="10"/>
Oct 09 16:35:52 compute-0 nova_compute[117331]:     </memballoon>
Oct 09 16:35:52 compute-0 nova_compute[117331]:   </devices>
Oct 09 16:35:52 compute-0 nova_compute[117331]: </domain>
Oct 09 16:35:52 compute-0 nova_compute[117331]:  _get_guest_xml /usr/lib/python3.12/site-packages/nova/virt/libvirt/driver.py:8052
Oct 09 16:35:52 compute-0 nova_compute[117331]: 2025-10-09 16:35:52.694 2 DEBUG nova.compute.manager [None req-32474ca4-854b-44fb-9ea1-9f55ddd1f618 685b4924c5a04af7ae6f4a328bb50f14 a60acfe52e4b4b7f912654a59f0978b7 - - default default] [instance: 036b9f54-5759-43e6-9666-ec39d96c1729] Preparing to wait for external event network-vif-plugged-19fbb24b-757a-4067-a3e1-7a2c326cb886 prepare_for_instance_event /usr/lib/python3.12/site-packages/nova/compute/manager.py:307
Oct 09 16:35:52 compute-0 nova_compute[117331]: 2025-10-09 16:35:52.695 2 DEBUG oslo_concurrency.lockutils [None req-32474ca4-854b-44fb-9ea1-9f55ddd1f618 685b4924c5a04af7ae6f4a328bb50f14 a60acfe52e4b4b7f912654a59f0978b7 - - default default] Acquiring lock "036b9f54-5759-43e6-9666-ec39d96c1729-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:405
Oct 09 16:35:52 compute-0 nova_compute[117331]: 2025-10-09 16:35:52.695 2 DEBUG oslo_concurrency.lockutils [None req-32474ca4-854b-44fb-9ea1-9f55ddd1f618 685b4924c5a04af7ae6f4a328bb50f14 a60acfe52e4b4b7f912654a59f0978b7 - - default default] Lock "036b9f54-5759-43e6-9666-ec39d96c1729-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:410
Oct 09 16:35:52 compute-0 nova_compute[117331]: 2025-10-09 16:35:52.695 2 DEBUG oslo_concurrency.lockutils [None req-32474ca4-854b-44fb-9ea1-9f55ddd1f618 685b4924c5a04af7ae6f4a328bb50f14 a60acfe52e4b4b7f912654a59f0978b7 - - default default] Lock "036b9f54-5759-43e6-9666-ec39d96c1729-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:424
Oct 09 16:35:52 compute-0 nova_compute[117331]: 2025-10-09 16:35:52.696 2 DEBUG nova.virt.libvirt.vif [None req-32474ca4-854b-44fb-9ea1-9f55ddd1f618 685b4924c5a04af7ae6f4a328bb50f14 a60acfe52e4b4b7f912654a59f0978b7 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,compute_id=1,config_drive='',created_at=2025-10-09T16:35:43Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description=None,display_name='tempest-TestExecuteWorkloadStabilizationStrategy-server-1046217673',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testexecuteworkloadstabilizationstrategy-server-1046217',id=27,image_ref='b7d6e0af-25e4-4227-9dc6-43143898ceee',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='a60acfe52e4b4b7f912654a59f0978b7',ramdisk_id='',reservation_id='r-r1q6w401',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='admin,member,reader,manager',image_base_image_ref='b7d6e0af-25e4-4227-9dc6-43143898ceee',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestExecuteWorkloadStabilizationStrategy-1000021673',owner_user_name='tempest-TestExecuteWorkloadStabilizationStrategy-1000021673-project-admin'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-09T16:35:49Z,user_data=None,user_id='685b4924c5a04af7ae6f4a328bb50f14',uuid=036b9f54-5759-43e6-9666-ec39d96c1729,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "19fbb24b-757a-4067-a3e1-7a2c326cb886", "address": "fa:16:3e:60:6b:b1", "network": {"id": "cf3aa351-4d26-41f3-8cb5-1ff2d3d995c8", "bridge": "br-int", "label": "tempest-TestExecuteWorkloadStabilizationStrategy-105109230-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "5b6d650a71ca47d58b10f5fd874c4898", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap19fbb24b-75", "ovs_interfaceid": "19fbb24b-757a-4067-a3e1-7a2c326cb886", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.12/site-packages/nova/virt/libvirt/vif.py:721
Oct 09 16:35:52 compute-0 nova_compute[117331]: 2025-10-09 16:35:52.697 2 DEBUG nova.network.os_vif_util [None req-32474ca4-854b-44fb-9ea1-9f55ddd1f618 685b4924c5a04af7ae6f4a328bb50f14 a60acfe52e4b4b7f912654a59f0978b7 - - default default] Converting VIF {"id": "19fbb24b-757a-4067-a3e1-7a2c326cb886", "address": "fa:16:3e:60:6b:b1", "network": {"id": "cf3aa351-4d26-41f3-8cb5-1ff2d3d995c8", "bridge": "br-int", "label": "tempest-TestExecuteWorkloadStabilizationStrategy-105109230-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "5b6d650a71ca47d58b10f5fd874c4898", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap19fbb24b-75", "ovs_interfaceid": "19fbb24b-757a-4067-a3e1-7a2c326cb886", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.12/site-packages/nova/network/os_vif_util.py:511
Oct 09 16:35:52 compute-0 nova_compute[117331]: 2025-10-09 16:35:52.698 2 DEBUG nova.network.os_vif_util [None req-32474ca4-854b-44fb-9ea1-9f55ddd1f618 685b4924c5a04af7ae6f4a328bb50f14 a60acfe52e4b4b7f912654a59f0978b7 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:60:6b:b1,bridge_name='br-int',has_traffic_filtering=True,id=19fbb24b-757a-4067-a3e1-7a2c326cb886,network=Network(cf3aa351-4d26-41f3-8cb5-1ff2d3d995c8),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap19fbb24b-75') nova_to_osvif_vif /usr/lib/python3.12/site-packages/nova/network/os_vif_util.py:548
Oct 09 16:35:52 compute-0 nova_compute[117331]: 2025-10-09 16:35:52.698 2 DEBUG os_vif [None req-32474ca4-854b-44fb-9ea1-9f55ddd1f618 685b4924c5a04af7ae6f4a328bb50f14 a60acfe52e4b4b7f912654a59f0978b7 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:60:6b:b1,bridge_name='br-int',has_traffic_filtering=True,id=19fbb24b-757a-4067-a3e1-7a2c326cb886,network=Network(cf3aa351-4d26-41f3-8cb5-1ff2d3d995c8),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap19fbb24b-75') plug /usr/lib/python3.12/site-packages/os_vif/__init__.py:76
Oct 09 16:35:52 compute-0 nova_compute[117331]: 2025-10-09 16:35:52.699 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 22 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 09 16:35:52 compute-0 nova_compute[117331]: 2025-10-09 16:35:52.699 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.12/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 09 16:35:52 compute-0 nova_compute[117331]: 2025-10-09 16:35:52.700 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.12/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 09 16:35:52 compute-0 nova_compute[117331]: 2025-10-09 16:35:52.701 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 22 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 09 16:35:52 compute-0 nova_compute[117331]: 2025-10-09 16:35:52.701 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbCreateCommand(_result=None, table=QoS, columns={'type': 'linux-noop', 'external_ids': {'id': 'c184eed1-032b-5a7f-8410-04d1f33cb0ce', '_type': 'linux-noop'}}, row=False) do_commit /usr/lib/python3.12/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 09 16:35:52 compute-0 nova_compute[117331]: 2025-10-09 16:35:52.702 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 09 16:35:52 compute-0 nova_compute[117331]: 2025-10-09 16:35:52.705 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:248
Oct 09 16:35:52 compute-0 nova_compute[117331]: 2025-10-09 16:35:52.707 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 22 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 09 16:35:52 compute-0 nova_compute[117331]: 2025-10-09 16:35:52.708 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap19fbb24b-75, may_exist=True, interface_attrs={}) do_commit /usr/lib/python3.12/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 09 16:35:52 compute-0 nova_compute[117331]: 2025-10-09 16:35:52.708 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Port, record=tap19fbb24b-75, col_values=(('qos', UUID('72834c90-9b8d-4951-a2a2-4f31fd6f1434')),), if_exists=True) do_commit /usr/lib/python3.12/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 09 16:35:52 compute-0 nova_compute[117331]: 2025-10-09 16:35:52.708 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=2): DbSetCommand(_result=None, table=Interface, record=tap19fbb24b-75, col_values=(('external_ids', {'iface-id': '19fbb24b-757a-4067-a3e1-7a2c326cb886', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:60:6b:b1', 'vm-uuid': '036b9f54-5759-43e6-9666-ec39d96c1729'}),), if_exists=True) do_commit /usr/lib/python3.12/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 09 16:35:52 compute-0 nova_compute[117331]: 2025-10-09 16:35:52.709 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 09 16:35:52 compute-0 NetworkManager[1028]: <info>  [1760027752.7106] manager: (tap19fbb24b-75): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/91)
Oct 09 16:35:52 compute-0 nova_compute[117331]: 2025-10-09 16:35:52.712 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:248
Oct 09 16:35:52 compute-0 nova_compute[117331]: 2025-10-09 16:35:52.716 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 09 16:35:52 compute-0 nova_compute[117331]: 2025-10-09 16:35:52.716 2 INFO os_vif [None req-32474ca4-854b-44fb-9ea1-9f55ddd1f618 685b4924c5a04af7ae6f4a328bb50f14 a60acfe52e4b4b7f912654a59f0978b7 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:60:6b:b1,bridge_name='br-int',has_traffic_filtering=True,id=19fbb24b-757a-4067-a3e1-7a2c326cb886,network=Network(cf3aa351-4d26-41f3-8cb5-1ff2d3d995c8),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap19fbb24b-75')
Oct 09 16:35:54 compute-0 nova_compute[117331]: 2025-10-09 16:35:54.262 2 DEBUG nova.virt.libvirt.driver [None req-32474ca4-854b-44fb-9ea1-9f55ddd1f618 685b4924c5a04af7ae6f4a328bb50f14 a60acfe52e4b4b7f912654a59f0978b7 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.12/site-packages/nova/virt/libvirt/driver.py:13022
Oct 09 16:35:54 compute-0 nova_compute[117331]: 2025-10-09 16:35:54.263 2 DEBUG nova.virt.libvirt.driver [None req-32474ca4-854b-44fb-9ea1-9f55ddd1f618 685b4924c5a04af7ae6f4a328bb50f14 a60acfe52e4b4b7f912654a59f0978b7 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.12/site-packages/nova/virt/libvirt/driver.py:13022
Oct 09 16:35:54 compute-0 nova_compute[117331]: 2025-10-09 16:35:54.263 2 DEBUG nova.virt.libvirt.driver [None req-32474ca4-854b-44fb-9ea1-9f55ddd1f618 685b4924c5a04af7ae6f4a328bb50f14 a60acfe52e4b4b7f912654a59f0978b7 - - default default] No VIF found with MAC fa:16:3e:60:6b:b1, not building metadata _build_interface_metadata /usr/lib/python3.12/site-packages/nova/virt/libvirt/driver.py:12998
Oct 09 16:35:54 compute-0 nova_compute[117331]: 2025-10-09 16:35:54.264 2 INFO nova.virt.libvirt.driver [None req-32474ca4-854b-44fb-9ea1-9f55ddd1f618 685b4924c5a04af7ae6f4a328bb50f14 a60acfe52e4b4b7f912654a59f0978b7 - - default default] [instance: 036b9f54-5759-43e6-9666-ec39d96c1729] Using config drive
Oct 09 16:35:54 compute-0 nova_compute[117331]: 2025-10-09 16:35:54.774 2 WARNING neutronclient.v2_0.client [None req-32474ca4-854b-44fb-9ea1-9f55ddd1f618 685b4924c5a04af7ae6f4a328bb50f14 a60acfe52e4b4b7f912654a59f0978b7 - - default default] The python binding code in neutronclient is deprecated in favor of OpenstackSDK, please use that as this will be removed in a future release.
Oct 09 16:35:55 compute-0 nova_compute[117331]: 2025-10-09 16:35:55.391 2 INFO nova.virt.libvirt.driver [None req-32474ca4-854b-44fb-9ea1-9f55ddd1f618 685b4924c5a04af7ae6f4a328bb50f14 a60acfe52e4b4b7f912654a59f0978b7 - - default default] [instance: 036b9f54-5759-43e6-9666-ec39d96c1729] Creating config drive at /var/lib/nova/instances/036b9f54-5759-43e6-9666-ec39d96c1729/disk.config
Oct 09 16:35:55 compute-0 nova_compute[117331]: 2025-10-09 16:35:55.397 2 DEBUG oslo_concurrency.processutils [None req-32474ca4-854b-44fb-9ea1-9f55ddd1f618 685b4924c5a04af7ae6f4a328bb50f14 a60acfe52e4b4b7f912654a59f0978b7 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/036b9f54-5759-43e6-9666-ec39d96c1729/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 32.1.0-0.20251008143712.076498e.el10 -quiet -J -r -V config-2 /tmp/tmpuf_92xs6 execute /usr/lib/python3.12/site-packages/oslo_concurrency/processutils.py:349
Oct 09 16:35:55 compute-0 nova_compute[117331]: 2025-10-09 16:35:55.523 2 DEBUG oslo_concurrency.processutils [None req-32474ca4-854b-44fb-9ea1-9f55ddd1f618 685b4924c5a04af7ae6f4a328bb50f14 a60acfe52e4b4b7f912654a59f0978b7 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/036b9f54-5759-43e6-9666-ec39d96c1729/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 32.1.0-0.20251008143712.076498e.el10 -quiet -J -r -V config-2 /tmp/tmpuf_92xs6" returned: 0 in 0.126s execute /usr/lib/python3.12/site-packages/oslo_concurrency/processutils.py:372
Oct 09 16:35:55 compute-0 kernel: tap19fbb24b-75: entered promiscuous mode
Oct 09 16:35:55 compute-0 NetworkManager[1028]: <info>  [1760027755.6015] manager: (tap19fbb24b-75): new Tun device (/org/freedesktop/NetworkManager/Devices/92)
Oct 09 16:35:55 compute-0 ovn_controller[19752]: 2025-10-09T16:35:55Z|00241|binding|INFO|Claiming lport 19fbb24b-757a-4067-a3e1-7a2c326cb886 for this chassis.
Oct 09 16:35:55 compute-0 ovn_controller[19752]: 2025-10-09T16:35:55Z|00242|binding|INFO|19fbb24b-757a-4067-a3e1-7a2c326cb886: Claiming fa:16:3e:60:6b:b1 10.100.0.4
Oct 09 16:35:55 compute-0 nova_compute[117331]: 2025-10-09 16:35:55.605 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 09 16:35:55 compute-0 ovn_metadata_agent[28608]: 2025-10-09 16:35:55.616 28613 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:60:6b:b1 10.100.0.4'], port_security=['fa:16:3e:60:6b:b1 10.100.0.4'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.4/28', 'neutron:device_id': '036b9f54-5759-43e6-9666-ec39d96c1729', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-cf3aa351-4d26-41f3-8cb5-1ff2d3d995c8', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'a60acfe52e4b4b7f912654a59f0978b7', 'neutron:revision_number': '4', 'neutron:security_group_ids': 'c47ae156-dc3a-4eb3-8eb2-126be8ba5497', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=f19947fc-c2ef-4762-adfc-471bb24a038d, chassis=[<ovs.db.idl.Row object at 0x7f37b2576840>], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f37b2576840>], logical_port=19fbb24b-757a-4067-a3e1-7a2c326cb886) old=Port_Binding(chassis=[]) matches /usr/lib/python3.12/site-packages/ovsdbapp/backend/ovs_idl/event.py:55
Oct 09 16:35:55 compute-0 ovn_metadata_agent[28608]: 2025-10-09 16:35:55.617 28613 INFO neutron.agent.ovn.metadata.agent [-] Port 19fbb24b-757a-4067-a3e1-7a2c326cb886 in datapath cf3aa351-4d26-41f3-8cb5-1ff2d3d995c8 bound to our chassis
Oct 09 16:35:55 compute-0 ovn_metadata_agent[28608]: 2025-10-09 16:35:55.619 28613 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network cf3aa351-4d26-41f3-8cb5-1ff2d3d995c8
Oct 09 16:35:55 compute-0 ovn_controller[19752]: 2025-10-09T16:35:55Z|00243|binding|INFO|Setting lport 19fbb24b-757a-4067-a3e1-7a2c326cb886 ovn-installed in OVS
Oct 09 16:35:55 compute-0 ovn_controller[19752]: 2025-10-09T16:35:55Z|00244|binding|INFO|Setting lport 19fbb24b-757a-4067-a3e1-7a2c326cb886 up in Southbound
Oct 09 16:35:55 compute-0 nova_compute[117331]: 2025-10-09 16:35:55.623 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 09 16:35:55 compute-0 nova_compute[117331]: 2025-10-09 16:35:55.631 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 09 16:35:55 compute-0 ovn_metadata_agent[28608]: 2025-10-09 16:35:55.637 139687 DEBUG oslo.privsep.daemon [-] privsep: reply[83091afe-e7a5-40ba-a557-4267e6743628]: (4, True) _call_back /usr/lib/python3.12/site-packages/oslo_privsep/daemon.py:515
Oct 09 16:35:55 compute-0 systemd-udevd[152021]: Network interface NamePolicy= disabled on kernel command line.
Oct 09 16:35:55 compute-0 systemd-machined[77487]: New machine qemu-21-instance-0000001b.
Oct 09 16:35:55 compute-0 NetworkManager[1028]: <info>  [1760027755.6583] device (tap19fbb24b-75): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Oct 09 16:35:55 compute-0 NetworkManager[1028]: <info>  [1760027755.6590] device (tap19fbb24b-75): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Oct 09 16:35:55 compute-0 systemd[1]: Started Virtual Machine qemu-21-instance-0000001b.
Oct 09 16:35:55 compute-0 ovn_metadata_agent[28608]: 2025-10-09 16:35:55.668 141048 DEBUG oslo.privsep.daemon [-] privsep: reply[71d04165-5606-48cd-897e-2345408dabed]: (4, None) _call_back /usr/lib/python3.12/site-packages/oslo_privsep/daemon.py:515
Oct 09 16:35:55 compute-0 ovn_metadata_agent[28608]: 2025-10-09 16:35:55.671 141048 DEBUG oslo.privsep.daemon [-] privsep: reply[2bd6c7c7-4351-48b4-b0ff-2a2b72bdfedd]: (4, None) _call_back /usr/lib/python3.12/site-packages/oslo_privsep/daemon.py:515
Oct 09 16:35:55 compute-0 podman[151998]: 2025-10-09 16:35:55.688337322 +0000 UTC m=+0.092210250 container health_status 270830b5167cafbab735d8118980f26987e673cd0fa464dd2202394830bcca66 (image=38.102.83.66:5001/podified-master-centos10/openstack-multipathd:watcher_latest, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, config_id=multipathd, container_name=multipathd, org.label-schema.schema-version=1.0, io.buildah.version=1.41.4, tcib_build_tag=watcher_latest, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251007, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': '38.102.83.66:5001/podified-master-centos10/openstack-multipathd:watcher_latest', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']})
Oct 09 16:35:55 compute-0 ovn_metadata_agent[28608]: 2025-10-09 16:35:55.697 141048 DEBUG oslo.privsep.daemon [-] privsep: reply[ca9d5d21-34e1-4c33-8010-131abe379600]: (4, None) _call_back /usr/lib/python3.12/site-packages/oslo_privsep/daemon.py:515
Oct 09 16:35:55 compute-0 ovn_metadata_agent[28608]: 2025-10-09 16:35:55.717 139687 DEBUG oslo.privsep.daemon [-] privsep: reply[f3766f94-5874-4299-82a1-6f32f4c06855]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapcf3aa351-41'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:2a:10:2c'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 9, 'tx_packets': 5, 'rx_bytes': 874, 'tx_bytes': 354, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 9, 'tx_packets': 5, 'rx_bytes': 874, 'tx_bytes': 354, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 68], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 268036, 'reachable_time': 31043, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 8, 'inoctets': 720, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 8, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 720, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 8, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 152034, 'error': None, 'target': 'ovnmeta-cf3aa351-4d26-41f3-8cb5-1ff2d3d995c8', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.12/site-packages/oslo_privsep/daemon.py:515
Oct 09 16:35:55 compute-0 ovn_metadata_agent[28608]: 2025-10-09 16:35:55.736 139687 DEBUG oslo.privsep.daemon [-] privsep: reply[4f4a74ba-fd18-4658-b5ca-70c592295e6b]: (4, ({'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tapcf3aa351-41'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 268047, 'tstamp': 268047}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 152038, 'error': None, 'target': 'ovnmeta-cf3aa351-4d26-41f3-8cb5-1ff2d3d995c8', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tapcf3aa351-41'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 268050, 'tstamp': 268050}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 152038, 'error': None, 'target': 'ovnmeta-cf3aa351-4d26-41f3-8cb5-1ff2d3d995c8', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.12/site-packages/oslo_privsep/daemon.py:515
Oct 09 16:35:55 compute-0 ovn_metadata_agent[28608]: 2025-10-09 16:35:55.738 28613 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapcf3aa351-40, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.12/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 09 16:35:55 compute-0 nova_compute[117331]: 2025-10-09 16:35:55.739 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 09 16:35:55 compute-0 nova_compute[117331]: 2025-10-09 16:35:55.741 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 09 16:35:55 compute-0 ovn_metadata_agent[28608]: 2025-10-09 16:35:55.741 28613 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapcf3aa351-40, may_exist=True, interface_attrs={}) do_commit /usr/lib/python3.12/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 09 16:35:55 compute-0 ovn_metadata_agent[28608]: 2025-10-09 16:35:55.741 28613 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.12/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 09 16:35:55 compute-0 ovn_metadata_agent[28608]: 2025-10-09 16:35:55.742 28613 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapcf3aa351-40, col_values=(('external_ids', {'iface-id': '7d2fd2fc-650d-4abc-8268-a14a8cdfd51e'}),), if_exists=True) do_commit /usr/lib/python3.12/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 09 16:35:55 compute-0 ovn_metadata_agent[28608]: 2025-10-09 16:35:55.742 28613 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.12/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 09 16:35:55 compute-0 ovn_metadata_agent[28608]: 2025-10-09 16:35:55.743 139687 DEBUG oslo.privsep.daemon [-] privsep: reply[1ef9bcce-907f-4606-b09c-5bae1fc5059b]: (4, '\nglobal\n    log         /dev/log local0 debug\n    log-tag     haproxy-metadata-proxy-cf3aa351-4d26-41f3-8cb5-1ff2d3d995c8\n    user        root\n    group       root\n    maxconn     1024\n    pidfile     /var/lib/neutron/external/pids/cf3aa351-4d26-41f3-8cb5-1ff2d3d995c8.pid.haproxy\n    daemon\n\ndefaults\n    log global\n    mode http\n    option httplog\n    option dontlognull\n    option http-server-close\n    option forwardfor\n    retries                 3\n    timeout http-request    30s\n    timeout connect         30s\n    timeout client          32s\n    timeout server          32s\n    timeout http-keep-alive 30s\n\nlisten listener\n    bind 169.254.169.254:80\n    \n    server metadata /var/lib/neutron/metadata_proxy\n\n    http-request add-header X-OVN-Network-ID cf3aa351-4d26-41f3-8cb5-1ff2d3d995c8\n') _call_back /usr/lib/python3.12/site-packages/oslo_privsep/daemon.py:515
Oct 09 16:35:55 compute-0 nova_compute[117331]: 2025-10-09 16:35:55.752 2 DEBUG nova.compute.manager [req-d22e4779-e6c7-426d-a0a4-ca49a04e3d89 req-6a099bc0-e0cd-409e-9f8b-0247f6e4434f ee15961ad2be4bada6c106ac4f69f422 c076c42271004e7aa86e84416e33f826 - - default default] [instance: 036b9f54-5759-43e6-9666-ec39d96c1729] Received event network-vif-plugged-19fbb24b-757a-4067-a3e1-7a2c326cb886 external_instance_event /usr/lib/python3.12/site-packages/nova/compute/manager.py:11812
Oct 09 16:35:55 compute-0 nova_compute[117331]: 2025-10-09 16:35:55.752 2 DEBUG oslo_concurrency.lockutils [req-d22e4779-e6c7-426d-a0a4-ca49a04e3d89 req-6a099bc0-e0cd-409e-9f8b-0247f6e4434f ee15961ad2be4bada6c106ac4f69f422 c076c42271004e7aa86e84416e33f826 - - default default] Acquiring lock "036b9f54-5759-43e6-9666-ec39d96c1729-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:405
Oct 09 16:35:55 compute-0 nova_compute[117331]: 2025-10-09 16:35:55.753 2 DEBUG oslo_concurrency.lockutils [req-d22e4779-e6c7-426d-a0a4-ca49a04e3d89 req-6a099bc0-e0cd-409e-9f8b-0247f6e4434f ee15961ad2be4bada6c106ac4f69f422 c076c42271004e7aa86e84416e33f826 - - default default] Lock "036b9f54-5759-43e6-9666-ec39d96c1729-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:410
Oct 09 16:35:55 compute-0 nova_compute[117331]: 2025-10-09 16:35:55.753 2 DEBUG oslo_concurrency.lockutils [req-d22e4779-e6c7-426d-a0a4-ca49a04e3d89 req-6a099bc0-e0cd-409e-9f8b-0247f6e4434f ee15961ad2be4bada6c106ac4f69f422 c076c42271004e7aa86e84416e33f826 - - default default] Lock "036b9f54-5759-43e6-9666-ec39d96c1729-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:424
Oct 09 16:35:55 compute-0 nova_compute[117331]: 2025-10-09 16:35:55.753 2 DEBUG nova.compute.manager [req-d22e4779-e6c7-426d-a0a4-ca49a04e3d89 req-6a099bc0-e0cd-409e-9f8b-0247f6e4434f ee15961ad2be4bada6c106ac4f69f422 c076c42271004e7aa86e84416e33f826 - - default default] [instance: 036b9f54-5759-43e6-9666-ec39d96c1729] Processing event network-vif-plugged-19fbb24b-757a-4067-a3e1-7a2c326cb886 _process_instance_event /usr/lib/python3.12/site-packages/nova/compute/manager.py:11572
Oct 09 16:35:56 compute-0 nova_compute[117331]: 2025-10-09 16:35:56.424 2 DEBUG nova.compute.manager [None req-32474ca4-854b-44fb-9ea1-9f55ddd1f618 685b4924c5a04af7ae6f4a328bb50f14 a60acfe52e4b4b7f912654a59f0978b7 - - default default] [instance: 036b9f54-5759-43e6-9666-ec39d96c1729] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.12/site-packages/nova/compute/manager.py:602
Oct 09 16:35:56 compute-0 nova_compute[117331]: 2025-10-09 16:35:56.427 2 DEBUG nova.virt.libvirt.driver [None req-32474ca4-854b-44fb-9ea1-9f55ddd1f618 685b4924c5a04af7ae6f4a328bb50f14 a60acfe52e4b4b7f912654a59f0978b7 - - default default] [instance: 036b9f54-5759-43e6-9666-ec39d96c1729] Guest created on hypervisor spawn /usr/lib/python3.12/site-packages/nova/virt/libvirt/driver.py:4816
Oct 09 16:35:56 compute-0 nova_compute[117331]: 2025-10-09 16:35:56.430 2 INFO nova.virt.libvirt.driver [-] [instance: 036b9f54-5759-43e6-9666-ec39d96c1729] Instance spawned successfully.
Oct 09 16:35:56 compute-0 nova_compute[117331]: 2025-10-09 16:35:56.430 2 DEBUG nova.virt.libvirt.driver [None req-32474ca4-854b-44fb-9ea1-9f55ddd1f618 685b4924c5a04af7ae6f4a328bb50f14 a60acfe52e4b4b7f912654a59f0978b7 - - default default] [instance: 036b9f54-5759-43e6-9666-ec39d96c1729] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.12/site-packages/nova/virt/libvirt/driver.py:985
Oct 09 16:35:56 compute-0 nova_compute[117331]: 2025-10-09 16:35:56.992 2 DEBUG nova.virt.libvirt.driver [None req-32474ca4-854b-44fb-9ea1-9f55ddd1f618 685b4924c5a04af7ae6f4a328bb50f14 a60acfe52e4b4b7f912654a59f0978b7 - - default default] [instance: 036b9f54-5759-43e6-9666-ec39d96c1729] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.12/site-packages/nova/virt/libvirt/driver.py:1014
Oct 09 16:35:56 compute-0 nova_compute[117331]: 2025-10-09 16:35:56.993 2 DEBUG nova.virt.libvirt.driver [None req-32474ca4-854b-44fb-9ea1-9f55ddd1f618 685b4924c5a04af7ae6f4a328bb50f14 a60acfe52e4b4b7f912654a59f0978b7 - - default default] [instance: 036b9f54-5759-43e6-9666-ec39d96c1729] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.12/site-packages/nova/virt/libvirt/driver.py:1014
Oct 09 16:35:56 compute-0 nova_compute[117331]: 2025-10-09 16:35:56.993 2 DEBUG nova.virt.libvirt.driver [None req-32474ca4-854b-44fb-9ea1-9f55ddd1f618 685b4924c5a04af7ae6f4a328bb50f14 a60acfe52e4b4b7f912654a59f0978b7 - - default default] [instance: 036b9f54-5759-43e6-9666-ec39d96c1729] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.12/site-packages/nova/virt/libvirt/driver.py:1014
Oct 09 16:35:56 compute-0 nova_compute[117331]: 2025-10-09 16:35:56.994 2 DEBUG nova.virt.libvirt.driver [None req-32474ca4-854b-44fb-9ea1-9f55ddd1f618 685b4924c5a04af7ae6f4a328bb50f14 a60acfe52e4b4b7f912654a59f0978b7 - - default default] [instance: 036b9f54-5759-43e6-9666-ec39d96c1729] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.12/site-packages/nova/virt/libvirt/driver.py:1014
Oct 09 16:35:56 compute-0 nova_compute[117331]: 2025-10-09 16:35:56.994 2 DEBUG nova.virt.libvirt.driver [None req-32474ca4-854b-44fb-9ea1-9f55ddd1f618 685b4924c5a04af7ae6f4a328bb50f14 a60acfe52e4b4b7f912654a59f0978b7 - - default default] [instance: 036b9f54-5759-43e6-9666-ec39d96c1729] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.12/site-packages/nova/virt/libvirt/driver.py:1014
Oct 09 16:35:56 compute-0 nova_compute[117331]: 2025-10-09 16:35:56.995 2 DEBUG nova.virt.libvirt.driver [None req-32474ca4-854b-44fb-9ea1-9f55ddd1f618 685b4924c5a04af7ae6f4a328bb50f14 a60acfe52e4b4b7f912654a59f0978b7 - - default default] [instance: 036b9f54-5759-43e6-9666-ec39d96c1729] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.12/site-packages/nova/virt/libvirt/driver.py:1014
Oct 09 16:35:57 compute-0 nova_compute[117331]: 2025-10-09 16:35:57.267 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 09 16:35:57 compute-0 nova_compute[117331]: 2025-10-09 16:35:57.566 2 INFO nova.compute.manager [None req-32474ca4-854b-44fb-9ea1-9f55ddd1f618 685b4924c5a04af7ae6f4a328bb50f14 a60acfe52e4b4b7f912654a59f0978b7 - - default default] [instance: 036b9f54-5759-43e6-9666-ec39d96c1729] Took 7.44 seconds to spawn the instance on the hypervisor.
Oct 09 16:35:57 compute-0 nova_compute[117331]: 2025-10-09 16:35:57.566 2 DEBUG nova.compute.manager [None req-32474ca4-854b-44fb-9ea1-9f55ddd1f618 685b4924c5a04af7ae6f4a328bb50f14 a60acfe52e4b4b7f912654a59f0978b7 - - default default] [instance: 036b9f54-5759-43e6-9666-ec39d96c1729] Checking state _get_power_state /usr/lib/python3.12/site-packages/nova/compute/manager.py:1825
Oct 09 16:35:57 compute-0 nova_compute[117331]: 2025-10-09 16:35:57.746 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 09 16:35:57 compute-0 nova_compute[117331]: 2025-10-09 16:35:57.802 2 DEBUG nova.compute.manager [req-70926679-50e1-41ef-9b08-44d1c02b643c req-6eea32b2-8acf-4fff-8306-9149d8131212 ee15961ad2be4bada6c106ac4f69f422 c076c42271004e7aa86e84416e33f826 - - default default] [instance: 036b9f54-5759-43e6-9666-ec39d96c1729] Received event network-vif-plugged-19fbb24b-757a-4067-a3e1-7a2c326cb886 external_instance_event /usr/lib/python3.12/site-packages/nova/compute/manager.py:11812
Oct 09 16:35:57 compute-0 nova_compute[117331]: 2025-10-09 16:35:57.802 2 DEBUG oslo_concurrency.lockutils [req-70926679-50e1-41ef-9b08-44d1c02b643c req-6eea32b2-8acf-4fff-8306-9149d8131212 ee15961ad2be4bada6c106ac4f69f422 c076c42271004e7aa86e84416e33f826 - - default default] Acquiring lock "036b9f54-5759-43e6-9666-ec39d96c1729-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:405
Oct 09 16:35:57 compute-0 nova_compute[117331]: 2025-10-09 16:35:57.802 2 DEBUG oslo_concurrency.lockutils [req-70926679-50e1-41ef-9b08-44d1c02b643c req-6eea32b2-8acf-4fff-8306-9149d8131212 ee15961ad2be4bada6c106ac4f69f422 c076c42271004e7aa86e84416e33f826 - - default default] Lock "036b9f54-5759-43e6-9666-ec39d96c1729-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:410
Oct 09 16:35:57 compute-0 nova_compute[117331]: 2025-10-09 16:35:57.802 2 DEBUG oslo_concurrency.lockutils [req-70926679-50e1-41ef-9b08-44d1c02b643c req-6eea32b2-8acf-4fff-8306-9149d8131212 ee15961ad2be4bada6c106ac4f69f422 c076c42271004e7aa86e84416e33f826 - - default default] Lock "036b9f54-5759-43e6-9666-ec39d96c1729-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:424
Oct 09 16:35:57 compute-0 nova_compute[117331]: 2025-10-09 16:35:57.803 2 DEBUG nova.compute.manager [req-70926679-50e1-41ef-9b08-44d1c02b643c req-6eea32b2-8acf-4fff-8306-9149d8131212 ee15961ad2be4bada6c106ac4f69f422 c076c42271004e7aa86e84416e33f826 - - default default] [instance: 036b9f54-5759-43e6-9666-ec39d96c1729] No waiting events found dispatching network-vif-plugged-19fbb24b-757a-4067-a3e1-7a2c326cb886 pop_instance_event /usr/lib/python3.12/site-packages/nova/compute/manager.py:344
Oct 09 16:35:57 compute-0 nova_compute[117331]: 2025-10-09 16:35:57.803 2 WARNING nova.compute.manager [req-70926679-50e1-41ef-9b08-44d1c02b643c req-6eea32b2-8acf-4fff-8306-9149d8131212 ee15961ad2be4bada6c106ac4f69f422 c076c42271004e7aa86e84416e33f826 - - default default] [instance: 036b9f54-5759-43e6-9666-ec39d96c1729] Received unexpected event network-vif-plugged-19fbb24b-757a-4067-a3e1-7a2c326cb886 for instance with vm_state active and task_state None.
Oct 09 16:35:58 compute-0 nova_compute[117331]: 2025-10-09 16:35:58.096 2 INFO nova.compute.manager [None req-32474ca4-854b-44fb-9ea1-9f55ddd1f618 685b4924c5a04af7ae6f4a328bb50f14 a60acfe52e4b4b7f912654a59f0978b7 - - default default] [instance: 036b9f54-5759-43e6-9666-ec39d96c1729] Took 12.64 seconds to build instance.
Oct 09 16:35:58 compute-0 nova_compute[117331]: 2025-10-09 16:35:58.604 2 DEBUG oslo_concurrency.lockutils [None req-32474ca4-854b-44fb-9ea1-9f55ddd1f618 685b4924c5a04af7ae6f4a328bb50f14 a60acfe52e4b4b7f912654a59f0978b7 - - default default] Lock "036b9f54-5759-43e6-9666-ec39d96c1729" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 14.174s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:424
Oct 09 16:35:59 compute-0 podman[127775]: time="2025-10-09T16:35:59Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Oct 09 16:35:59 compute-0 podman[127775]: @ - - [09/Oct/2025:16:35:59 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 20749 "" "Go-http-client/1.1"
Oct 09 16:35:59 compute-0 podman[127775]: @ - - [09/Oct/2025:16:35:59 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 3491 "" "Go-http-client/1.1"
Oct 09 16:35:59 compute-0 podman[152047]: 2025-10-09 16:35:59.840294779 +0000 UTC m=+0.064404430 container health_status 86aa93e887899e54d383b0fc17fb3c5637680d1d8e146a130a557c837f0260fe (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible)
Oct 09 16:36:01 compute-0 openstack_network_exporter[129925]: ERROR   16:36:01 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Oct 09 16:36:01 compute-0 openstack_network_exporter[129925]: ERROR   16:36:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Oct 09 16:36:01 compute-0 openstack_network_exporter[129925]: ERROR   16:36:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Oct 09 16:36:01 compute-0 openstack_network_exporter[129925]: ERROR   16:36:01 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Oct 09 16:36:01 compute-0 openstack_network_exporter[129925]: 
Oct 09 16:36:01 compute-0 openstack_network_exporter[129925]: ERROR   16:36:01 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Oct 09 16:36:01 compute-0 openstack_network_exporter[129925]: 
Oct 09 16:36:02 compute-0 nova_compute[117331]: 2025-10-09 16:36:02.269 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 09 16:36:02 compute-0 nova_compute[117331]: 2025-10-09 16:36:02.749 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 09 16:36:05 compute-0 podman[152073]: 2025-10-09 16:36:05.851339387 +0000 UTC m=+0.067779827 container health_status 8227728d23f374cb9528be6ddf01dff38d8c792912a17b0d137932a18390b820 (image=38.102.83.66:5001/podified-master-centos10/openstack-iscsid:watcher_latest, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251007, org.label-schema.vendor=CentOS, io.buildah.version=1.41.4, managed_by=edpm_ansible, org.label-schema.license=GPLv2, config_id=iscsid, container_name=iscsid, org.label-schema.schema-version=1.0, tcib_build_tag=watcher_latest, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': '38.102.83.66:5001/podified-master-centos10/openstack-iscsid:watcher_latest', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 10 Base Image, maintainer=OpenStack Kubernetes Operator team)
Oct 09 16:36:05 compute-0 podman[152072]: 2025-10-09 16:36:05.853248757 +0000 UTC m=+0.085134566 container health_status 0e966c7a283689daf23c5db5017f23f685ee836319240188a9f70ddd246563c5 (image=38.102.83.66:5001/podified-master-centos10/openstack-neutron-metadata-agent-ovn:watcher_latest, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 10 Base Image, managed_by=edpm_ansible, org.label-schema.build-date=20251007, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.41.4, org.label-schema.schema-version=1.0, tcib_build_tag=watcher_latest, container_name=ovn_metadata_agent, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': '38.102.83.66:5001/podified-master-centos10/openstack-neutron-metadata-agent-ovn:watcher_latest', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent)
Oct 09 16:36:07 compute-0 nova_compute[117331]: 2025-10-09 16:36:07.270 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 09 16:36:07 compute-0 nova_compute[117331]: 2025-10-09 16:36:07.789 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 09 16:36:08 compute-0 ovn_controller[19752]: 2025-10-09T16:36:08Z|00035|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:60:6b:b1 10.100.0.4
Oct 09 16:36:08 compute-0 ovn_controller[19752]: 2025-10-09T16:36:08Z|00036|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:60:6b:b1 10.100.0.4
Oct 09 16:36:12 compute-0 nova_compute[117331]: 2025-10-09 16:36:12.273 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 09 16:36:12 compute-0 nova_compute[117331]: 2025-10-09 16:36:12.819 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 09 16:36:14 compute-0 podman[152119]: 2025-10-09 16:36:14.84376186 +0000 UTC m=+0.076713691 container health_status 64fa251112d01abb34ec1bb8e22019815ddcaaaa391ce5ec91517147ac5a9994 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, distribution-scope=public, url=https://catalog.redhat.com/en/search?searchType=containers, build-date=2025-08-20T13:12:41, io.buildah.version=1.33.7, vendor=Red Hat, Inc., architecture=x86_64, name=ubi9-minimal, version=9.6, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, io.openshift.expose-services=, maintainer=Red Hat, Inc., com.redhat.component=ubi9-minimal-container, config_id=edpm, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., managed_by=edpm_ansible, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, container_name=openstack_network_exporter, release=1755695350, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.tags=minimal rhel9)
Oct 09 16:36:14 compute-0 podman[152120]: 2025-10-09 16:36:14.911241636 +0000 UTC m=+0.130512414 container health_status d615e4f24ff62fbb14be4a63e6da568ec5b22309773c1c66103b6b4de64a8a2f (image=38.102.83.66:5001/podified-master-centos10/openstack-ovn-controller:watcher_latest, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251007, org.label-schema.license=GPLv2, config_id=ovn_controller, tcib_build_tag=watcher_latest, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': '38.102.83.66:5001/podified-master-centos10/openstack-ovn-controller:watcher_latest', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 10 Base Image, tcib_managed=true, container_name=ovn_controller, io.buildah.version=1.41.4)
Oct 09 16:36:17 compute-0 nova_compute[117331]: 2025-10-09 16:36:17.275 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 09 16:36:17 compute-0 nova_compute[117331]: 2025-10-09 16:36:17.821 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 09 16:36:22 compute-0 nova_compute[117331]: 2025-10-09 16:36:22.276 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 09 16:36:22 compute-0 nova_compute[117331]: 2025-10-09 16:36:22.823 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 09 16:36:25 compute-0 ovn_controller[19752]: 2025-10-09T16:36:25Z|00245|memory_trim|INFO|Detected inactivity (last active 30012 ms ago): trimming memory
Oct 09 16:36:25 compute-0 nova_compute[117331]: 2025-10-09 16:36:25.672 2 DEBUG nova.virt.libvirt.driver [None req-26b0179c-bd04-4969-b6b1-57aad3a406cd 005258eefa5345409efe13f6a1de22ab c076c42271004e7aa86e84416e33f826 - - default default] [instance: 036b9f54-5759-43e6-9666-ec39d96c1729] Check if temp file /var/lib/nova/instances/tmpw_dmwcnj exists to indicate shared storage is being used for migration. Exists? False _check_shared_storage_test_file /usr/lib/python3.12/site-packages/nova/virt/libvirt/driver.py:10968
Oct 09 16:36:25 compute-0 nova_compute[117331]: 2025-10-09 16:36:25.676 2 DEBUG nova.compute.manager [None req-26b0179c-bd04-4969-b6b1-57aad3a406cd 005258eefa5345409efe13f6a1de22ab c076c42271004e7aa86e84416e33f826 - - default default] source check data is LibvirtLiveMigrateData(bdms=<?>,block_migration=True,disk_available_mb=74752,disk_over_commit=<?>,dst_cpu_shared_set_info=set([]),dst_numa_info=<?>,dst_supports_mdev_live_migration=True,dst_supports_numa_live_migration=<?>,dst_wants_file_backed_memory=False,file_backed_memory_discard=<?>,filename='tmpw_dmwcnj',graphics_listen_addr_spice=127.0.0.1,graphics_listen_addr_vnc=::,image_type='qcow2',instance_relative_path='036b9f54-5759-43e6-9666-ec39d96c1729',is_shared_block_storage=False,is_shared_instance_path=False,is_volume_backed=False,migration=<?>,old_vol_attachment_ids=<?>,pci_dev_map_src_dst=<?>,serial_listen_addr=None,serial_listen_ports=<?>,source_mdev_types=<?>,src_supports_native_luks=<?>,src_supports_numa_live_migration=<?>,supported_perf_events=<?>,target_connect_addr=<?>,target_mdevs=<?>,vifs=[VIFMigrateData],wait_for_vif_plugged=<?>) check_can_live_migrate_source /usr/lib/python3.12/site-packages/nova/compute/manager.py:9294
Oct 09 16:36:25 compute-0 nova_compute[117331]: 2025-10-09 16:36:25.723 2 DEBUG nova.virt.libvirt.driver [None req-fe59cf1f-038c-48fc-a686-85e22eb4925e 005258eefa5345409efe13f6a1de22ab c076c42271004e7aa86e84416e33f826 - - default default] [instance: 85c9d87b-0e28-425f-b54e-c14066ba6918] Check if temp file /var/lib/nova/instances/tmpfhqgjh_d exists to indicate shared storage is being used for migration. Exists? False _check_shared_storage_test_file /usr/lib/python3.12/site-packages/nova/virt/libvirt/driver.py:10968
Oct 09 16:36:25 compute-0 nova_compute[117331]: 2025-10-09 16:36:25.726 2 DEBUG nova.compute.manager [None req-fe59cf1f-038c-48fc-a686-85e22eb4925e 005258eefa5345409efe13f6a1de22ab c076c42271004e7aa86e84416e33f826 - - default default] source check data is LibvirtLiveMigrateData(bdms=<?>,block_migration=True,disk_available_mb=74752,disk_over_commit=<?>,dst_cpu_shared_set_info=set([]),dst_numa_info=<?>,dst_supports_mdev_live_migration=True,dst_supports_numa_live_migration=<?>,dst_wants_file_backed_memory=False,file_backed_memory_discard=<?>,filename='tmpfhqgjh_d',graphics_listen_addr_spice=127.0.0.1,graphics_listen_addr_vnc=::,image_type='qcow2',instance_relative_path='85c9d87b-0e28-425f-b54e-c14066ba6918',is_shared_block_storage=False,is_shared_instance_path=False,is_volume_backed=False,migration=<?>,old_vol_attachment_ids=<?>,pci_dev_map_src_dst=<?>,serial_listen_addr=None,serial_listen_ports=<?>,source_mdev_types=<?>,src_supports_native_luks=<?>,src_supports_numa_live_migration=<?>,supported_perf_events=<?>,target_connect_addr=<?>,target_mdevs=<?>,vifs=[VIFMigrateData],wait_for_vif_plugged=<?>) check_can_live_migrate_source /usr/lib/python3.12/site-packages/nova/compute/manager.py:9294
Oct 09 16:36:25 compute-0 podman[152166]: 2025-10-09 16:36:25.813919754 +0000 UTC m=+0.052220484 container health_status 270830b5167cafbab735d8118980f26987e673cd0fa464dd2202394830bcca66 (image=38.102.83.66:5001/podified-master-centos10/openstack-multipathd:watcher_latest, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=watcher_latest, config_id=multipathd, container_name=multipathd, managed_by=edpm_ansible, org.label-schema.build-date=20251007, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': '38.102.83.66:5001/podified-master-centos10/openstack-multipathd:watcher_latest', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 10 Base Image, io.buildah.version=1.41.4)
Oct 09 16:36:27 compute-0 nova_compute[117331]: 2025-10-09 16:36:27.278 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 09 16:36:27 compute-0 nova_compute[117331]: 2025-10-09 16:36:27.826 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 09 16:36:29 compute-0 podman[127775]: time="2025-10-09T16:36:29Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Oct 09 16:36:29 compute-0 podman[127775]: @ - - [09/Oct/2025:16:36:29 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 20749 "" "Go-http-client/1.1"
Oct 09 16:36:29 compute-0 podman[127775]: @ - - [09/Oct/2025:16:36:29 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 3492 "" "Go-http-client/1.1"
Oct 09 16:36:30 compute-0 nova_compute[117331]: 2025-10-09 16:36:30.146 2 DEBUG oslo_concurrency.processutils [None req-26b0179c-bd04-4969-b6b1-57aad3a406cd 005258eefa5345409efe13f6a1de22ab c076c42271004e7aa86e84416e33f826 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/036b9f54-5759-43e6-9666-ec39d96c1729/disk --force-share --output=json execute /usr/lib/python3.12/site-packages/oslo_concurrency/processutils.py:349
Oct 09 16:36:30 compute-0 nova_compute[117331]: 2025-10-09 16:36:30.217 2 DEBUG oslo_concurrency.processutils [None req-26b0179c-bd04-4969-b6b1-57aad3a406cd 005258eefa5345409efe13f6a1de22ab c076c42271004e7aa86e84416e33f826 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/036b9f54-5759-43e6-9666-ec39d96c1729/disk --force-share --output=json" returned: 0 in 0.071s execute /usr/lib/python3.12/site-packages/oslo_concurrency/processutils.py:372
Oct 09 16:36:30 compute-0 nova_compute[117331]: 2025-10-09 16:36:30.217 2 DEBUG oslo_concurrency.processutils [None req-26b0179c-bd04-4969-b6b1-57aad3a406cd 005258eefa5345409efe13f6a1de22ab c076c42271004e7aa86e84416e33f826 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/036b9f54-5759-43e6-9666-ec39d96c1729/disk --force-share --output=json execute /usr/lib/python3.12/site-packages/oslo_concurrency/processutils.py:349
Oct 09 16:36:30 compute-0 nova_compute[117331]: 2025-10-09 16:36:30.278 2 DEBUG oslo_concurrency.processutils [None req-26b0179c-bd04-4969-b6b1-57aad3a406cd 005258eefa5345409efe13f6a1de22ab c076c42271004e7aa86e84416e33f826 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/036b9f54-5759-43e6-9666-ec39d96c1729/disk --force-share --output=json" returned: 0 in 0.060s execute /usr/lib/python3.12/site-packages/oslo_concurrency/processutils.py:372
Oct 09 16:36:30 compute-0 nova_compute[117331]: 2025-10-09 16:36:30.280 2 DEBUG nova.compute.manager [None req-26b0179c-bd04-4969-b6b1-57aad3a406cd 005258eefa5345409efe13f6a1de22ab c076c42271004e7aa86e84416e33f826 - - default default] [instance: 036b9f54-5759-43e6-9666-ec39d96c1729] Preparing to wait for external event network-vif-plugged-19fbb24b-757a-4067-a3e1-7a2c326cb886 prepare_for_instance_event /usr/lib/python3.12/site-packages/nova/compute/manager.py:307
Oct 09 16:36:30 compute-0 nova_compute[117331]: 2025-10-09 16:36:30.280 2 DEBUG oslo_concurrency.lockutils [None req-26b0179c-bd04-4969-b6b1-57aad3a406cd 005258eefa5345409efe13f6a1de22ab c076c42271004e7aa86e84416e33f826 - - default default] Acquiring lock "036b9f54-5759-43e6-9666-ec39d96c1729-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:405
Oct 09 16:36:30 compute-0 nova_compute[117331]: 2025-10-09 16:36:30.280 2 DEBUG oslo_concurrency.lockutils [None req-26b0179c-bd04-4969-b6b1-57aad3a406cd 005258eefa5345409efe13f6a1de22ab c076c42271004e7aa86e84416e33f826 - - default default] Lock "036b9f54-5759-43e6-9666-ec39d96c1729-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:410
Oct 09 16:36:30 compute-0 nova_compute[117331]: 2025-10-09 16:36:30.281 2 DEBUG oslo_concurrency.lockutils [None req-26b0179c-bd04-4969-b6b1-57aad3a406cd 005258eefa5345409efe13f6a1de22ab c076c42271004e7aa86e84416e33f826 - - default default] Lock "036b9f54-5759-43e6-9666-ec39d96c1729-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:424
Oct 09 16:36:30 compute-0 nova_compute[117331]: 2025-10-09 16:36:30.307 2 DEBUG oslo_service.periodic_task [None req-92a13bed-c081-4c7b-87f3-bafebefb79f2 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.12/site-packages/oslo_service/periodic_task.py:210
Oct 09 16:36:30 compute-0 nova_compute[117331]: 2025-10-09 16:36:30.308 2 DEBUG oslo_service.periodic_task [None req-92a13bed-c081-4c7b-87f3-bafebefb79f2 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.12/site-packages/oslo_service/periodic_task.py:210
Oct 09 16:36:30 compute-0 podman[152193]: 2025-10-09 16:36:30.850060245 +0000 UTC m=+0.076388029 container health_status 86aa93e887899e54d383b0fc17fb3c5637680d1d8e146a130a557c837f0260fe (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm)
Oct 09 16:36:31 compute-0 openstack_network_exporter[129925]: ERROR   16:36:31 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Oct 09 16:36:31 compute-0 openstack_network_exporter[129925]: ERROR   16:36:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Oct 09 16:36:31 compute-0 openstack_network_exporter[129925]: ERROR   16:36:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Oct 09 16:36:31 compute-0 openstack_network_exporter[129925]: ERROR   16:36:31 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Oct 09 16:36:31 compute-0 openstack_network_exporter[129925]: 
Oct 09 16:36:31 compute-0 openstack_network_exporter[129925]: ERROR   16:36:31 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Oct 09 16:36:31 compute-0 openstack_network_exporter[129925]: 
Oct 09 16:36:32 compute-0 nova_compute[117331]: 2025-10-09 16:36:32.279 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 09 16:36:32 compute-0 nova_compute[117331]: 2025-10-09 16:36:32.304 2 DEBUG oslo_service.periodic_task [None req-92a13bed-c081-4c7b-87f3-bafebefb79f2 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.12/site-packages/oslo_service/periodic_task.py:210
Oct 09 16:36:32 compute-0 nova_compute[117331]: 2025-10-09 16:36:32.305 2 DEBUG oslo_service.periodic_task [None req-92a13bed-c081-4c7b-87f3-bafebefb79f2 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.12/site-packages/oslo_service/periodic_task.py:210
Oct 09 16:36:32 compute-0 nova_compute[117331]: 2025-10-09 16:36:32.827 2 DEBUG oslo_concurrency.lockutils [None req-92a13bed-c081-4c7b-87f3-bafebefb79f2 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:405
Oct 09 16:36:32 compute-0 nova_compute[117331]: 2025-10-09 16:36:32.828 2 DEBUG oslo_concurrency.lockutils [None req-92a13bed-c081-4c7b-87f3-bafebefb79f2 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:410
Oct 09 16:36:32 compute-0 nova_compute[117331]: 2025-10-09 16:36:32.828 2 DEBUG oslo_concurrency.lockutils [None req-92a13bed-c081-4c7b-87f3-bafebefb79f2 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:424
Oct 09 16:36:32 compute-0 nova_compute[117331]: 2025-10-09 16:36:32.828 2 DEBUG nova.compute.resource_tracker [None req-92a13bed-c081-4c7b-87f3-bafebefb79f2 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.12/site-packages/nova/compute/resource_tracker.py:937
Oct 09 16:36:32 compute-0 nova_compute[117331]: 2025-10-09 16:36:32.828 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 09 16:36:33 compute-0 nova_compute[117331]: 2025-10-09 16:36:33.874 2 DEBUG oslo_concurrency.processutils [None req-92a13bed-c081-4c7b-87f3-bafebefb79f2 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/036b9f54-5759-43e6-9666-ec39d96c1729/disk --force-share --output=json execute /usr/lib/python3.12/site-packages/oslo_concurrency/processutils.py:349
Oct 09 16:36:33 compute-0 nova_compute[117331]: 2025-10-09 16:36:33.927 2 DEBUG oslo_concurrency.processutils [None req-92a13bed-c081-4c7b-87f3-bafebefb79f2 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/036b9f54-5759-43e6-9666-ec39d96c1729/disk --force-share --output=json" returned: 0 in 0.053s execute /usr/lib/python3.12/site-packages/oslo_concurrency/processutils.py:372
Oct 09 16:36:33 compute-0 nova_compute[117331]: 2025-10-09 16:36:33.928 2 DEBUG oslo_concurrency.processutils [None req-92a13bed-c081-4c7b-87f3-bafebefb79f2 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/036b9f54-5759-43e6-9666-ec39d96c1729/disk --force-share --output=json execute /usr/lib/python3.12/site-packages/oslo_concurrency/processutils.py:349
Oct 09 16:36:34 compute-0 nova_compute[117331]: 2025-10-09 16:36:33.999 2 DEBUG oslo_concurrency.processutils [None req-92a13bed-c081-4c7b-87f3-bafebefb79f2 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/036b9f54-5759-43e6-9666-ec39d96c1729/disk --force-share --output=json" returned: 0 in 0.072s execute /usr/lib/python3.12/site-packages/oslo_concurrency/processutils.py:372
Oct 09 16:36:34 compute-0 nova_compute[117331]: 2025-10-09 16:36:34.010 2 DEBUG oslo_concurrency.processutils [None req-92a13bed-c081-4c7b-87f3-bafebefb79f2 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/85c9d87b-0e28-425f-b54e-c14066ba6918/disk --force-share --output=json execute /usr/lib/python3.12/site-packages/oslo_concurrency/processutils.py:349
Oct 09 16:36:34 compute-0 nova_compute[117331]: 2025-10-09 16:36:34.080 2 DEBUG oslo_concurrency.processutils [None req-92a13bed-c081-4c7b-87f3-bafebefb79f2 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/85c9d87b-0e28-425f-b54e-c14066ba6918/disk --force-share --output=json" returned: 0 in 0.069s execute /usr/lib/python3.12/site-packages/oslo_concurrency/processutils.py:372
Oct 09 16:36:34 compute-0 nova_compute[117331]: 2025-10-09 16:36:34.081 2 DEBUG oslo_concurrency.processutils [None req-92a13bed-c081-4c7b-87f3-bafebefb79f2 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/85c9d87b-0e28-425f-b54e-c14066ba6918/disk --force-share --output=json execute /usr/lib/python3.12/site-packages/oslo_concurrency/processutils.py:349
Oct 09 16:36:34 compute-0 nova_compute[117331]: 2025-10-09 16:36:34.163 2 DEBUG oslo_concurrency.processutils [None req-92a13bed-c081-4c7b-87f3-bafebefb79f2 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/85c9d87b-0e28-425f-b54e-c14066ba6918/disk --force-share --output=json" returned: 0 in 0.082s execute /usr/lib/python3.12/site-packages/oslo_concurrency/processutils.py:372
Oct 09 16:36:34 compute-0 nova_compute[117331]: 2025-10-09 16:36:34.298 2 WARNING nova.virt.libvirt.driver [None req-92a13bed-c081-4c7b-87f3-bafebefb79f2 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Oct 09 16:36:34 compute-0 nova_compute[117331]: 2025-10-09 16:36:34.299 2 DEBUG oslo_concurrency.processutils [None req-92a13bed-c081-4c7b-87f3-bafebefb79f2 - - - - - -] Running cmd (subprocess): env LANG=C uptime execute /usr/lib/python3.12/site-packages/oslo_concurrency/processutils.py:349
Oct 09 16:36:34 compute-0 nova_compute[117331]: 2025-10-09 16:36:34.314 2 DEBUG oslo_concurrency.processutils [None req-92a13bed-c081-4c7b-87f3-bafebefb79f2 - - - - - -] CMD "env LANG=C uptime" returned: 0 in 0.015s execute /usr/lib/python3.12/site-packages/oslo_concurrency/processutils.py:372
Oct 09 16:36:34 compute-0 nova_compute[117331]: 2025-10-09 16:36:34.315 2 DEBUG nova.compute.resource_tracker [None req-92a13bed-c081-4c7b-87f3-bafebefb79f2 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=5797MB free_disk=73.19185638427734GB free_vcpus=6 pci_devices=[{"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": 
"label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.12/site-packages/nova/compute/resource_tracker.py:1136
Oct 09 16:36:34 compute-0 nova_compute[117331]: 2025-10-09 16:36:34.315 2 DEBUG oslo_concurrency.lockutils [None req-92a13bed-c081-4c7b-87f3-bafebefb79f2 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:405
Oct 09 16:36:34 compute-0 nova_compute[117331]: 2025-10-09 16:36:34.315 2 DEBUG oslo_concurrency.lockutils [None req-92a13bed-c081-4c7b-87f3-bafebefb79f2 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:410
Oct 09 16:36:35 compute-0 ovn_metadata_agent[28608]: 2025-10-09 16:36:35.332 28613 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:405
Oct 09 16:36:35 compute-0 ovn_metadata_agent[28608]: 2025-10-09 16:36:35.333 28613 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.000s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:410
Oct 09 16:36:35 compute-0 ovn_metadata_agent[28608]: 2025-10-09 16:36:35.333 28613 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:424
Oct 09 16:36:35 compute-0 nova_compute[117331]: 2025-10-09 16:36:35.336 2 INFO nova.compute.resource_tracker [None req-92a13bed-c081-4c7b-87f3-bafebefb79f2 - - - - - -] [instance: 036b9f54-5759-43e6-9666-ec39d96c1729] Updating resource usage from migration 5886d8ac-4e0a-475a-946f-65da225e1ab5
Oct 09 16:36:35 compute-0 nova_compute[117331]: 2025-10-09 16:36:35.337 2 INFO nova.compute.resource_tracker [None req-92a13bed-c081-4c7b-87f3-bafebefb79f2 - - - - - -] [instance: 85c9d87b-0e28-425f-b54e-c14066ba6918] Updating resource usage from migration 1ddf8509-0504-43ef-bcb0-1ae3ece7dd4d
Oct 09 16:36:35 compute-0 nova_compute[117331]: 2025-10-09 16:36:35.392 2 DEBUG nova.compute.resource_tracker [None req-92a13bed-c081-4c7b-87f3-bafebefb79f2 - - - - - -] Migration 5886d8ac-4e0a-475a-946f-65da225e1ab5 is active on this compute host and has allocations in placement: {'resources': {'VCPU': 1, 'MEMORY_MB': 128, 'DISK_GB': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.12/site-packages/nova/compute/resource_tracker.py:1745
Oct 09 16:36:35 compute-0 nova_compute[117331]: 2025-10-09 16:36:35.392 2 DEBUG nova.compute.resource_tracker [None req-92a13bed-c081-4c7b-87f3-bafebefb79f2 - - - - - -] Migration 1ddf8509-0504-43ef-bcb0-1ae3ece7dd4d is active on this compute host and has allocations in placement: {'resources': {'VCPU': 1, 'MEMORY_MB': 128, 'DISK_GB': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.12/site-packages/nova/compute/resource_tracker.py:1745
Oct 09 16:36:35 compute-0 nova_compute[117331]: 2025-10-09 16:36:35.393 2 DEBUG nova.compute.resource_tracker [None req-92a13bed-c081-4c7b-87f3-bafebefb79f2 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 2 _report_final_resource_view /usr/lib/python3.12/site-packages/nova/compute/resource_tracker.py:1159
Oct 09 16:36:35 compute-0 nova_compute[117331]: 2025-10-09 16:36:35.393 2 DEBUG nova.compute.resource_tracker [None req-92a13bed-c081-4c7b-87f3-bafebefb79f2 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=768MB phys_disk=79GB used_disk=2GB total_vcpus=8 used_vcpus=2 pci_stats=[] stats={'failed_builds': '0', 'uptime': ' 16:36:34 up 45 min,  0 user,  load average: 0.55, 0.57, 0.49\n', 'num_instances': '2', 'num_vm_active': '2', 'num_task_migrating': '2', 'num_os_type_None': '2', 'num_proj_a60acfe52e4b4b7f912654a59f0978b7': '2', 'io_workload': '0'} _report_final_resource_view /usr/lib/python3.12/site-packages/nova/compute/resource_tracker.py:1168
Oct 09 16:36:35 compute-0 nova_compute[117331]: 2025-10-09 16:36:35.435 2 DEBUG nova.scheduler.client.report [None req-92a13bed-c081-4c7b-87f3-bafebefb79f2 - - - - - -] Refreshing inventories for resource provider 593051b8-2000-437f-a915-2616fc8b1671 _refresh_associations /usr/lib/python3.12/site-packages/nova/scheduler/client/report.py:822
Oct 09 16:36:35 compute-0 nova_compute[117331]: 2025-10-09 16:36:35.472 2 DEBUG nova.scheduler.client.report [None req-92a13bed-c081-4c7b-87f3-bafebefb79f2 - - - - - -] Updating ProviderTree inventory for provider 593051b8-2000-437f-a915-2616fc8b1671 from _refresh_and_get_inventory using data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 79, 'reserved': 1, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} _refresh_and_get_inventory /usr/lib/python3.12/site-packages/nova/scheduler/client/report.py:786
Oct 09 16:36:35 compute-0 nova_compute[117331]: 2025-10-09 16:36:35.473 2 DEBUG nova.compute.provider_tree [None req-92a13bed-c081-4c7b-87f3-bafebefb79f2 - - - - - -] Updating inventory in ProviderTree for provider 593051b8-2000-437f-a915-2616fc8b1671 with inventory: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 79, 'reserved': 1, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} update_inventory /usr/lib/python3.12/site-packages/nova/compute/provider_tree.py:176
Oct 09 16:36:35 compute-0 nova_compute[117331]: 2025-10-09 16:36:35.488 2 DEBUG nova.scheduler.client.report [None req-92a13bed-c081-4c7b-87f3-bafebefb79f2 - - - - - -] Refreshing aggregate associations for resource provider 593051b8-2000-437f-a915-2616fc8b1671, aggregates: None _refresh_associations /usr/lib/python3.12/site-packages/nova/scheduler/client/report.py:831
Oct 09 16:36:35 compute-0 nova_compute[117331]: 2025-10-09 16:36:35.561 2 DEBUG nova.scheduler.client.report [None req-92a13bed-c081-4c7b-87f3-bafebefb79f2 - - - - - -] Refreshing trait associations for resource provider 593051b8-2000-437f-a915-2616fc8b1671, traits: HW_CPU_X86_AVX2,HW_CPU_X86_BMI,COMPUTE_SOUND_MODEL_ICH6,COMPUTE_IMAGE_TYPE_RAW,COMPUTE_ARCH_X86_64,COMPUTE_IMAGE_TYPE_AMI,HW_CPU_X86_SSSE3,COMPUTE_SECURITY_TPM_CRB,COMPUTE_GRAPHICS_MODEL_CIRRUS,COMPUTE_GRAPHICS_MODEL_NONE,COMPUTE_VIOMMU_MODEL_VIRTIO,HW_CPU_X86_AESNI,COMPUTE_NET_VIF_MODEL_VMXNET3,COMPUTE_SOUND_MODEL_PCSPK,COMPUTE_VOLUME_ATTACH_WITH_TAG,COMPUTE_SOUND_MODEL_ES1370,COMPUTE_NODE,COMPUTE_VIOMMU_MODEL_AUTO,COMPUTE_SOUND_MODEL_ICH9,COMPUTE_IMAGE_TYPE_QCOW2,COMPUTE_VOLUME_MULTI_ATTACH,COMPUTE_SECURITY_UEFI_SECURE_BOOT,COMPUTE_STORAGE_BUS_USB,COMPUTE_GRAPHICS_MODEL_BOCHS,COMPUTE_SECURITY_TPM_TIS,HW_ARCH_X86_64,COMPUTE_SOUND_MODEL_VIRTIO,HW_CPU_X86_BMI2,COMPUTE_STORAGE_BUS_FDC,COMPUTE_SOUND_MODEL_USB,COMPUTE_STORAGE_BUS_SATA,COMPUTE_SOUND_MODEL_AC97,COMPUTE_NET_VIF_MODEL_SPAPR_VLAN,COMPUTE_IMAGE_TYPE_ISO,COMPUTE_VIOMMU_MODEL_INTEL,COMPUTE_NET_VIF_MODEL_PCNET,HW_CPU_X86_F16C,COMPUTE_NET_VIF_MODEL_IGB,COMPUTE_ACCELERATORS,COMPUTE_NET_VIF_MODEL_RTL8139,HW_CPU_X86_AMD_SVM,COMPUTE_NET_VIF_MODEL_E1000E,HW_CPU_X86_CLMUL,COMPUTE_NET_VIF_MODEL_NE2K_PCI,HW_CPU_X86_SHA,COMPUTE_NET_VIF_MODEL_VIRTIO,HW_CPU_X86_AVX,COMPUTE_ADDRESS_SPACE_EMULATED,COMPUTE_NET_VIF_MODEL_E1000,COMPUTE_IMAGE_TYPE_ARI,COMPUTE_NET_ATTACH_INTERFACE,HW_CPU_X86_SSE4A,COMPUTE_TRUSTED_CERTS,COMPUTE_GRAPHICS_MODEL_VGA,HW_CPU_X86_SSE2,COMPUTE_GRAPHICS_MODEL_VIRTIO,COMPUTE_STORAGE_VIRTIO_FS,COMPUTE_SECURITY_STATELESS_FIRMWARE,COMPUTE_NET_ATTACH_INTERFACE_WITH_TAG,HW_CPU_X86_SSE,HW_CPU_X86_SVM,COMPUTE_STORAGE_BUS_VIRTIO,COMPUTE_NET_VIRTIO_PACKED,COMPUTE_SECURITY_TPM_2_0,HW_CPU_X86_MMX,HW_CPU_X86_FMA3,COMPUTE_VOLUME_EXTEND,COMPUTE_DEVICE_TAGGING,HW_CPU_X86_SSE42,COMPUTE_SOCKET_PCI_NUMA_AFFINITY,COMPUTE_IMAGE_TYPE_AKI,COMPUTE_STORAGE_BUS_SCSI,COMPUTE_ADDRESS_SPACE_PASSTHROUGH,COMPUTE_RESCUE_BFV,COMPUTE_SOUND_MODEL_SB16,HW_CPU_X86_SSE41,COMPUTE_STORAGE_BUS_IDE,HW_CPU_X86_ABM _refresh_associations /usr/lib/python3.12/site-packages/nova/scheduler/client/report.py:843
Oct 09 16:36:35 compute-0 nova_compute[117331]: 2025-10-09 16:36:35.624 2 DEBUG nova.compute.provider_tree [None req-92a13bed-c081-4c7b-87f3-bafebefb79f2 - - - - - -] Inventory has not changed in ProviderTree for provider: 593051b8-2000-437f-a915-2616fc8b1671 update_inventory /usr/lib/python3.12/site-packages/nova/compute/provider_tree.py:180
Oct 09 16:36:36 compute-0 nova_compute[117331]: 2025-10-09 16:36:36.130 2 DEBUG nova.scheduler.client.report [None req-92a13bed-c081-4c7b-87f3-bafebefb79f2 - - - - - -] Inventory has not changed for provider 593051b8-2000-437f-a915-2616fc8b1671 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 79, 'reserved': 1, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.12/site-packages/nova/scheduler/client/report.py:958
Oct 09 16:36:36 compute-0 nova_compute[117331]: 2025-10-09 16:36:36.644 2 DEBUG nova.compute.resource_tracker [None req-92a13bed-c081-4c7b-87f3-bafebefb79f2 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.12/site-packages/nova/compute/resource_tracker.py:1097
Oct 09 16:36:36 compute-0 nova_compute[117331]: 2025-10-09 16:36:36.645 2 DEBUG oslo_concurrency.lockutils [None req-92a13bed-c081-4c7b-87f3-bafebefb79f2 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 2.329s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:424
Oct 09 16:36:36 compute-0 nova_compute[117331]: 2025-10-09 16:36:36.697 2 DEBUG nova.compute.manager [req-f11763af-69b6-444d-820a-b62221ca1fdb req-ba4eea80-743b-4af9-b110-d883d4e2a8fb ee15961ad2be4bada6c106ac4f69f422 c076c42271004e7aa86e84416e33f826 - - default default] [instance: 036b9f54-5759-43e6-9666-ec39d96c1729] Received event network-vif-unplugged-19fbb24b-757a-4067-a3e1-7a2c326cb886 external_instance_event /usr/lib/python3.12/site-packages/nova/compute/manager.py:11812
Oct 09 16:36:36 compute-0 nova_compute[117331]: 2025-10-09 16:36:36.697 2 DEBUG oslo_concurrency.lockutils [req-f11763af-69b6-444d-820a-b62221ca1fdb req-ba4eea80-743b-4af9-b110-d883d4e2a8fb ee15961ad2be4bada6c106ac4f69f422 c076c42271004e7aa86e84416e33f826 - - default default] Acquiring lock "036b9f54-5759-43e6-9666-ec39d96c1729-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:405
Oct 09 16:36:36 compute-0 nova_compute[117331]: 2025-10-09 16:36:36.697 2 DEBUG oslo_concurrency.lockutils [req-f11763af-69b6-444d-820a-b62221ca1fdb req-ba4eea80-743b-4af9-b110-d883d4e2a8fb ee15961ad2be4bada6c106ac4f69f422 c076c42271004e7aa86e84416e33f826 - - default default] Lock "036b9f54-5759-43e6-9666-ec39d96c1729-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:410
Oct 09 16:36:36 compute-0 nova_compute[117331]: 2025-10-09 16:36:36.698 2 DEBUG oslo_concurrency.lockutils [req-f11763af-69b6-444d-820a-b62221ca1fdb req-ba4eea80-743b-4af9-b110-d883d4e2a8fb ee15961ad2be4bada6c106ac4f69f422 c076c42271004e7aa86e84416e33f826 - - default default] Lock "036b9f54-5759-43e6-9666-ec39d96c1729-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:424
Oct 09 16:36:36 compute-0 nova_compute[117331]: 2025-10-09 16:36:36.698 2 DEBUG nova.compute.manager [req-f11763af-69b6-444d-820a-b62221ca1fdb req-ba4eea80-743b-4af9-b110-d883d4e2a8fb ee15961ad2be4bada6c106ac4f69f422 c076c42271004e7aa86e84416e33f826 - - default default] [instance: 036b9f54-5759-43e6-9666-ec39d96c1729] No event matching network-vif-unplugged-19fbb24b-757a-4067-a3e1-7a2c326cb886 in dict_keys([('network-vif-plugged', '19fbb24b-757a-4067-a3e1-7a2c326cb886')]) pop_instance_event /usr/lib/python3.12/site-packages/nova/compute/manager.py:349
Oct 09 16:36:36 compute-0 nova_compute[117331]: 2025-10-09 16:36:36.698 2 DEBUG nova.compute.manager [req-f11763af-69b6-444d-820a-b62221ca1fdb req-ba4eea80-743b-4af9-b110-d883d4e2a8fb ee15961ad2be4bada6c106ac4f69f422 c076c42271004e7aa86e84416e33f826 - - default default] [instance: 036b9f54-5759-43e6-9666-ec39d96c1729] Received event network-vif-unplugged-19fbb24b-757a-4067-a3e1-7a2c326cb886 for instance with task_state migrating. _process_instance_event /usr/lib/python3.12/site-packages/nova/compute/manager.py:11590
Oct 09 16:36:36 compute-0 podman[152232]: 2025-10-09 16:36:36.829094889 +0000 UTC m=+0.062302644 container health_status 0e966c7a283689daf23c5db5017f23f685ee836319240188a9f70ddd246563c5 (image=38.102.83.66:5001/podified-master-centos10/openstack-neutron-metadata-agent-ovn:watcher_latest, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, tcib_build_tag=watcher_latest, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251007, org.label-schema.name=CentOS Stream 10 Base Image, tcib_managed=true, org.label-schema.license=GPLv2, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': '38.102.83.66:5001/podified-master-centos10/openstack-neutron-metadata-agent-ovn:watcher_latest', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.4, org.label-schema.schema-version=1.0)
Oct 09 16:36:36 compute-0 podman[152233]: 2025-10-09 16:36:36.830777932 +0000 UTC m=+0.054745144 container health_status 8227728d23f374cb9528be6ddf01dff38d8c792912a17b0d137932a18390b820 (image=38.102.83.66:5001/podified-master-centos10/openstack-iscsid:watcher_latest, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, config_id=iscsid, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=watcher_latest, container_name=iscsid, org.label-schema.build-date=20251007, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 10 Base Image, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': '38.102.83.66:5001/podified-master-centos10/openstack-iscsid:watcher_latest', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, io.buildah.version=1.41.4, maintainer=OpenStack Kubernetes Operator team)
Oct 09 16:36:37 compute-0 nova_compute[117331]: 2025-10-09 16:36:37.281 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 09 16:36:37 compute-0 nova_compute[117331]: 2025-10-09 16:36:37.830 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 09 16:36:38 compute-0 nova_compute[117331]: 2025-10-09 16:36:38.645 2 DEBUG oslo_service.periodic_task [None req-92a13bed-c081-4c7b-87f3-bafebefb79f2 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.12/site-packages/oslo_service/periodic_task.py:210
Oct 09 16:36:38 compute-0 nova_compute[117331]: 2025-10-09 16:36:38.646 2 DEBUG oslo_service.periodic_task [None req-92a13bed-c081-4c7b-87f3-bafebefb79f2 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.12/site-packages/oslo_service/periodic_task.py:210
Oct 09 16:36:38 compute-0 nova_compute[117331]: 2025-10-09 16:36:38.646 2 DEBUG nova.compute.manager [None req-92a13bed-c081-4c7b-87f3-bafebefb79f2 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.12/site-packages/nova/compute/manager.py:11228
Oct 09 16:36:38 compute-0 ovn_metadata_agent[28608]: 2025-10-09 16:36:38.667 28613 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=26, options={'arp_ns_explicit_output': 'true', 'fdb_removal_limit': '0', 'ignore_lsp_down': 'false', 'mac_binding_removal_limit': '0', 'mac_prefix': '4a:65:5f', 'max_tunid': '16711680', 'northd_internal_version': '24.09.4-20.37.0-77.8', 'svc_monitor_mac': '32:24:1e:e3:5d:e8'}, ipsec=False) old=SB_Global(nb_cfg=25) matches /usr/lib/python3.12/site-packages/ovsdbapp/backend/ovs_idl/event.py:55
Oct 09 16:36:38 compute-0 ovn_metadata_agent[28608]: 2025-10-09 16:36:38.667 28613 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 9 seconds run /usr/lib/python3.12/site-packages/neutron/agent/ovn/metadata/agent.py:367
Oct 09 16:36:38 compute-0 nova_compute[117331]: 2025-10-09 16:36:38.668 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 09 16:36:38 compute-0 nova_compute[117331]: 2025-10-09 16:36:38.803 2 DEBUG nova.compute.manager [req-a73f1d40-9dd2-46b8-84f2-06916510a745 req-9fab97ad-b4bd-47b7-88d1-bafed74596ad ee15961ad2be4bada6c106ac4f69f422 c076c42271004e7aa86e84416e33f826 - - default default] [instance: 036b9f54-5759-43e6-9666-ec39d96c1729] Received event network-vif-plugged-19fbb24b-757a-4067-a3e1-7a2c326cb886 external_instance_event /usr/lib/python3.12/site-packages/nova/compute/manager.py:11812
Oct 09 16:36:38 compute-0 nova_compute[117331]: 2025-10-09 16:36:38.803 2 DEBUG oslo_concurrency.lockutils [req-a73f1d40-9dd2-46b8-84f2-06916510a745 req-9fab97ad-b4bd-47b7-88d1-bafed74596ad ee15961ad2be4bada6c106ac4f69f422 c076c42271004e7aa86e84416e33f826 - - default default] Acquiring lock "036b9f54-5759-43e6-9666-ec39d96c1729-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:405
Oct 09 16:36:38 compute-0 nova_compute[117331]: 2025-10-09 16:36:38.804 2 DEBUG oslo_concurrency.lockutils [req-a73f1d40-9dd2-46b8-84f2-06916510a745 req-9fab97ad-b4bd-47b7-88d1-bafed74596ad ee15961ad2be4bada6c106ac4f69f422 c076c42271004e7aa86e84416e33f826 - - default default] Lock "036b9f54-5759-43e6-9666-ec39d96c1729-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:410
Oct 09 16:36:38 compute-0 nova_compute[117331]: 2025-10-09 16:36:38.804 2 DEBUG oslo_concurrency.lockutils [req-a73f1d40-9dd2-46b8-84f2-06916510a745 req-9fab97ad-b4bd-47b7-88d1-bafed74596ad ee15961ad2be4bada6c106ac4f69f422 c076c42271004e7aa86e84416e33f826 - - default default] Lock "036b9f54-5759-43e6-9666-ec39d96c1729-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:424
Oct 09 16:36:38 compute-0 nova_compute[117331]: 2025-10-09 16:36:38.804 2 DEBUG nova.compute.manager [req-a73f1d40-9dd2-46b8-84f2-06916510a745 req-9fab97ad-b4bd-47b7-88d1-bafed74596ad ee15961ad2be4bada6c106ac4f69f422 c076c42271004e7aa86e84416e33f826 - - default default] [instance: 036b9f54-5759-43e6-9666-ec39d96c1729] Processing event network-vif-plugged-19fbb24b-757a-4067-a3e1-7a2c326cb886 _process_instance_event /usr/lib/python3.12/site-packages/nova/compute/manager.py:11572
Oct 09 16:36:38 compute-0 nova_compute[117331]: 2025-10-09 16:36:38.804 2 DEBUG nova.compute.manager [req-a73f1d40-9dd2-46b8-84f2-06916510a745 req-9fab97ad-b4bd-47b7-88d1-bafed74596ad ee15961ad2be4bada6c106ac4f69f422 c076c42271004e7aa86e84416e33f826 - - default default] [instance: 036b9f54-5759-43e6-9666-ec39d96c1729] Received event network-changed-19fbb24b-757a-4067-a3e1-7a2c326cb886 external_instance_event /usr/lib/python3.12/site-packages/nova/compute/manager.py:11812
Oct 09 16:36:38 compute-0 nova_compute[117331]: 2025-10-09 16:36:38.805 2 DEBUG nova.compute.manager [req-a73f1d40-9dd2-46b8-84f2-06916510a745 req-9fab97ad-b4bd-47b7-88d1-bafed74596ad ee15961ad2be4bada6c106ac4f69f422 c076c42271004e7aa86e84416e33f826 - - default default] [instance: 036b9f54-5759-43e6-9666-ec39d96c1729] Refreshing instance network info cache due to event network-changed-19fbb24b-757a-4067-a3e1-7a2c326cb886. external_instance_event /usr/lib/python3.12/site-packages/nova/compute/manager.py:11817
Oct 09 16:36:38 compute-0 nova_compute[117331]: 2025-10-09 16:36:38.805 2 DEBUG oslo_concurrency.lockutils [req-a73f1d40-9dd2-46b8-84f2-06916510a745 req-9fab97ad-b4bd-47b7-88d1-bafed74596ad ee15961ad2be4bada6c106ac4f69f422 c076c42271004e7aa86e84416e33f826 - - default default] Acquiring lock "refresh_cache-036b9f54-5759-43e6-9666-ec39d96c1729" lock /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:313
Oct 09 16:36:38 compute-0 nova_compute[117331]: 2025-10-09 16:36:38.805 2 DEBUG oslo_concurrency.lockutils [req-a73f1d40-9dd2-46b8-84f2-06916510a745 req-9fab97ad-b4bd-47b7-88d1-bafed74596ad ee15961ad2be4bada6c106ac4f69f422 c076c42271004e7aa86e84416e33f826 - - default default] Acquired lock "refresh_cache-036b9f54-5759-43e6-9666-ec39d96c1729" lock /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:316
Oct 09 16:36:38 compute-0 nova_compute[117331]: 2025-10-09 16:36:38.806 2 DEBUG nova.network.neutron [req-a73f1d40-9dd2-46b8-84f2-06916510a745 req-9fab97ad-b4bd-47b7-88d1-bafed74596ad ee15961ad2be4bada6c106ac4f69f422 c076c42271004e7aa86e84416e33f826 - - default default] [instance: 036b9f54-5759-43e6-9666-ec39d96c1729] Refreshing network info cache for port 19fbb24b-757a-4067-a3e1-7a2c326cb886 _get_instance_nw_info /usr/lib/python3.12/site-packages/nova/network/neutron.py:2067
Oct 09 16:36:39 compute-0 nova_compute[117331]: 2025-10-09 16:36:39.307 2 DEBUG oslo_service.periodic_task [None req-92a13bed-c081-4c7b-87f3-bafebefb79f2 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.12/site-packages/oslo_service/periodic_task.py:210
Oct 09 16:36:39 compute-0 nova_compute[117331]: 2025-10-09 16:36:39.322 2 WARNING neutronclient.v2_0.client [req-a73f1d40-9dd2-46b8-84f2-06916510a745 req-9fab97ad-b4bd-47b7-88d1-bafed74596ad ee15961ad2be4bada6c106ac4f69f422 c076c42271004e7aa86e84416e33f826 - - default default] The python binding code in neutronclient is deprecated in favor of OpenstackSDK, please use that as this will be removed in a future release.
Oct 09 16:36:39 compute-0 nova_compute[117331]: 2025-10-09 16:36:39.809 2 INFO nova.compute.manager [None req-26b0179c-bd04-4969-b6b1-57aad3a406cd 005258eefa5345409efe13f6a1de22ab c076c42271004e7aa86e84416e33f826 - - default default] [instance: 036b9f54-5759-43e6-9666-ec39d96c1729] Took 9.53 seconds for pre_live_migration on destination host compute-1.ctlplane.example.com.
Oct 09 16:36:39 compute-0 nova_compute[117331]: 2025-10-09 16:36:39.810 2 DEBUG nova.compute.manager [None req-26b0179c-bd04-4969-b6b1-57aad3a406cd 005258eefa5345409efe13f6a1de22ab c076c42271004e7aa86e84416e33f826 - - default default] [instance: 036b9f54-5759-43e6-9666-ec39d96c1729] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.12/site-packages/nova/compute/manager.py:602
Oct 09 16:36:40 compute-0 nova_compute[117331]: 2025-10-09 16:36:40.303 2 DEBUG oslo_service.periodic_task [None req-92a13bed-c081-4c7b-87f3-bafebefb79f2 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.12/site-packages/oslo_service/periodic_task.py:210
Oct 09 16:36:40 compute-0 nova_compute[117331]: 2025-10-09 16:36:40.340 2 WARNING neutronclient.v2_0.client [req-a73f1d40-9dd2-46b8-84f2-06916510a745 req-9fab97ad-b4bd-47b7-88d1-bafed74596ad ee15961ad2be4bada6c106ac4f69f422 c076c42271004e7aa86e84416e33f826 - - default default] The python binding code in neutronclient is deprecated in favor of OpenstackSDK, please use that as this will be removed in a future release.
Oct 09 16:36:40 compute-0 nova_compute[117331]: 2025-10-09 16:36:40.354 2 DEBUG nova.compute.manager [None req-26b0179c-bd04-4969-b6b1-57aad3a406cd 005258eefa5345409efe13f6a1de22ab c076c42271004e7aa86e84416e33f826 - - default default] live_migration data is LibvirtLiveMigrateData(bdms=[],block_migration=True,disk_available_mb=74752,disk_over_commit=<?>,dst_cpu_shared_set_info=set([]),dst_numa_info=<?>,dst_supports_mdev_live_migration=True,dst_supports_numa_live_migration=<?>,dst_wants_file_backed_memory=False,file_backed_memory_discard=<?>,filename='tmpw_dmwcnj',graphics_listen_addr_spice=127.0.0.1,graphics_listen_addr_vnc=::,image_type='qcow2',instance_relative_path='036b9f54-5759-43e6-9666-ec39d96c1729',is_shared_block_storage=False,is_shared_instance_path=False,is_volume_backed=False,migration=Migration(5886d8ac-4e0a-475a-946f-65da225e1ab5),old_vol_attachment_ids={},pci_dev_map_src_dst={},serial_listen_addr=None,serial_listen_ports=[],source_mdev_types=<?>,src_supports_native_luks=<?>,src_supports_numa_live_migration=<?>,supported_perf_events=[],target_connect_addr=None,target_mdevs=<?>,vifs=[VIFMigrateData],wait_for_vif_plugged=True) _do_live_migration /usr/lib/python3.12/site-packages/nova/compute/manager.py:9659
Oct 09 16:36:40 compute-0 nova_compute[117331]: 2025-10-09 16:36:40.540 2 DEBUG nova.network.neutron [req-a73f1d40-9dd2-46b8-84f2-06916510a745 req-9fab97ad-b4bd-47b7-88d1-bafed74596ad ee15961ad2be4bada6c106ac4f69f422 c076c42271004e7aa86e84416e33f826 - - default default] [instance: 036b9f54-5759-43e6-9666-ec39d96c1729] Updated VIF entry in instance network info cache for port 19fbb24b-757a-4067-a3e1-7a2c326cb886. _build_network_info_model /usr/lib/python3.12/site-packages/nova/network/neutron.py:3542
Oct 09 16:36:40 compute-0 nova_compute[117331]: 2025-10-09 16:36:40.541 2 DEBUG nova.network.neutron [req-a73f1d40-9dd2-46b8-84f2-06916510a745 req-9fab97ad-b4bd-47b7-88d1-bafed74596ad ee15961ad2be4bada6c106ac4f69f422 c076c42271004e7aa86e84416e33f826 - - default default] [instance: 036b9f54-5759-43e6-9666-ec39d96c1729] Updating instance_info_cache with network_info: [{"id": "19fbb24b-757a-4067-a3e1-7a2c326cb886", "address": "fa:16:3e:60:6b:b1", "network": {"id": "cf3aa351-4d26-41f3-8cb5-1ff2d3d995c8", "bridge": "br-int", "label": "tempest-TestExecuteWorkloadStabilizationStrategy-105109230-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "5b6d650a71ca47d58b10f5fd874c4898", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap19fbb24b-75", "ovs_interfaceid": "19fbb24b-757a-4067-a3e1-7a2c326cb886", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {"migrating_to": "compute-1.ctlplane.example.com"}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.12/site-packages/nova/network/neutron.py:116
Oct 09 16:36:40 compute-0 nova_compute[117331]: 2025-10-09 16:36:40.838 2 DEBUG oslo_service.periodic_task [None req-92a13bed-c081-4c7b-87f3-bafebefb79f2 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.12/site-packages/oslo_service/periodic_task.py:210
Oct 09 16:36:40 compute-0 nova_compute[117331]: 2025-10-09 16:36:40.928 2 DEBUG nova.objects.instance [None req-26b0179c-bd04-4969-b6b1-57aad3a406cd 005258eefa5345409efe13f6a1de22ab c076c42271004e7aa86e84416e33f826 - - default default] Lazy-loading 'migration_context' on Instance uuid 036b9f54-5759-43e6-9666-ec39d96c1729 obj_load_attr /usr/lib/python3.12/site-packages/nova/objects/instance.py:1141
Oct 09 16:36:40 compute-0 nova_compute[117331]: 2025-10-09 16:36:40.929 2 DEBUG nova.virt.libvirt.driver [None req-26b0179c-bd04-4969-b6b1-57aad3a406cd 005258eefa5345409efe13f6a1de22ab c076c42271004e7aa86e84416e33f826 - - default default] [instance: 036b9f54-5759-43e6-9666-ec39d96c1729] Starting monitoring of live migration _live_migration /usr/lib/python3.12/site-packages/nova/virt/libvirt/driver.py:11543
Oct 09 16:36:40 compute-0 nova_compute[117331]: 2025-10-09 16:36:40.931 2 DEBUG nova.virt.libvirt.driver [None req-26b0179c-bd04-4969-b6b1-57aad3a406cd 005258eefa5345409efe13f6a1de22ab c076c42271004e7aa86e84416e33f826 - - default default] [instance: 036b9f54-5759-43e6-9666-ec39d96c1729] Operation thread is still running _live_migration_monitor /usr/lib/python3.12/site-packages/nova/virt/libvirt/driver.py:11343
Oct 09 16:36:40 compute-0 nova_compute[117331]: 2025-10-09 16:36:40.931 2 DEBUG nova.virt.libvirt.driver [None req-26b0179c-bd04-4969-b6b1-57aad3a406cd 005258eefa5345409efe13f6a1de22ab c076c42271004e7aa86e84416e33f826 - - default default] [instance: 036b9f54-5759-43e6-9666-ec39d96c1729] Migration not running yet _live_migration_monitor /usr/lib/python3.12/site-packages/nova/virt/libvirt/driver.py:11352
Oct 09 16:36:41 compute-0 nova_compute[117331]: 2025-10-09 16:36:41.121 2 DEBUG oslo_concurrency.lockutils [req-a73f1d40-9dd2-46b8-84f2-06916510a745 req-9fab97ad-b4bd-47b7-88d1-bafed74596ad ee15961ad2be4bada6c106ac4f69f422 c076c42271004e7aa86e84416e33f826 - - default default] Releasing lock "refresh_cache-036b9f54-5759-43e6-9666-ec39d96c1729" lock /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:334
Oct 09 16:36:41 compute-0 nova_compute[117331]: 2025-10-09 16:36:41.433 2 DEBUG nova.virt.libvirt.driver [None req-26b0179c-bd04-4969-b6b1-57aad3a406cd 005258eefa5345409efe13f6a1de22ab c076c42271004e7aa86e84416e33f826 - - default default] [instance: 036b9f54-5759-43e6-9666-ec39d96c1729] Operation thread is still running _live_migration_monitor /usr/lib/python3.12/site-packages/nova/virt/libvirt/driver.py:11343
Oct 09 16:36:41 compute-0 nova_compute[117331]: 2025-10-09 16:36:41.434 2 DEBUG nova.virt.libvirt.driver [None req-26b0179c-bd04-4969-b6b1-57aad3a406cd 005258eefa5345409efe13f6a1de22ab c076c42271004e7aa86e84416e33f826 - - default default] [instance: 036b9f54-5759-43e6-9666-ec39d96c1729] Migration not running yet _live_migration_monitor /usr/lib/python3.12/site-packages/nova/virt/libvirt/driver.py:11352
Oct 09 16:36:41 compute-0 nova_compute[117331]: 2025-10-09 16:36:41.466 2 DEBUG nova.virt.libvirt.vif [None req-26b0179c-bd04-4969-b6b1-57aad3a406cd 005258eefa5345409efe13f6a1de22ab c076c42271004e7aa86e84416e33f826 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,compute_id=1,config_drive='True',created_at=2025-10-09T16:35:43Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description=None,display_name='tempest-TestExecuteWorkloadStabilizationStrategy-server-1046217673',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testexecuteworkloadstabilizationstrategy-server-1046217',id=27,image_ref='b7d6e0af-25e4-4227-9dc6-43143898ceee',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-10-09T16:35:57Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=1,progress=0,project_id='a60acfe52e4b4b7f912654a59f0978b7',ramdisk_id='',reservation_id='r-r1q6w401',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='admin,member,reader,manager',image_base_image_ref='b7d6e0af-25e4-4227-9dc6-43143898ceee',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestExecuteWorkloadStabilizationStrategy-1000021673',owner_user_name='tempest-TestExecuteWorkloadStabilizationStrategy-1000021673-project-admin'},tags=<?>,task_state='migrating',terminated_at=None,trusted_certs=<?>,updated_at=2025-10-09T16:35:57Z,user_data=None,user_id='685b4924c5a04af7ae6f4a328bb50f14',uuid=036b9f54-5759-43e6-9666-ec39d96c1729,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "19fbb24b-757a-4067-a3e1-7a2c326cb886", "address": "fa:16:3e:60:6b:b1", "network": {"id": "cf3aa351-4d26-41f3-8cb5-1ff2d3d995c8", "bridge": "br-int", "label": "tempest-TestExecuteWorkloadStabilizationStrategy-105109230-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "5b6d650a71ca47d58b10f5fd874c4898", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system"}, "devname": "tap19fbb24b-75", "ovs_interfaceid": "19fbb24b-757a-4067-a3e1-7a2c326cb886", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {"os_vif_delegation": true}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.12/site-packages/nova/virt/libvirt/vif.py:574
Oct 09 16:36:41 compute-0 nova_compute[117331]: 2025-10-09 16:36:41.467 2 DEBUG nova.network.os_vif_util [None req-26b0179c-bd04-4969-b6b1-57aad3a406cd 005258eefa5345409efe13f6a1de22ab c076c42271004e7aa86e84416e33f826 - - default default] Converting VIF {"id": "19fbb24b-757a-4067-a3e1-7a2c326cb886", "address": "fa:16:3e:60:6b:b1", "network": {"id": "cf3aa351-4d26-41f3-8cb5-1ff2d3d995c8", "bridge": "br-int", "label": "tempest-TestExecuteWorkloadStabilizationStrategy-105109230-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "5b6d650a71ca47d58b10f5fd874c4898", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system"}, "devname": "tap19fbb24b-75", "ovs_interfaceid": "19fbb24b-757a-4067-a3e1-7a2c326cb886", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {"os_vif_delegation": true}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.12/site-packages/nova/network/os_vif_util.py:511
Oct 09 16:36:41 compute-0 nova_compute[117331]: 2025-10-09 16:36:41.468 2 DEBUG nova.network.os_vif_util [None req-26b0179c-bd04-4969-b6b1-57aad3a406cd 005258eefa5345409efe13f6a1de22ab c076c42271004e7aa86e84416e33f826 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:60:6b:b1,bridge_name='br-int',has_traffic_filtering=True,id=19fbb24b-757a-4067-a3e1-7a2c326cb886,network=Network(cf3aa351-4d26-41f3-8cb5-1ff2d3d995c8),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap19fbb24b-75') nova_to_osvif_vif /usr/lib/python3.12/site-packages/nova/network/os_vif_util.py:548
Oct 09 16:36:41 compute-0 nova_compute[117331]: 2025-10-09 16:36:41.468 2 DEBUG nova.virt.libvirt.migration [None req-26b0179c-bd04-4969-b6b1-57aad3a406cd 005258eefa5345409efe13f6a1de22ab c076c42271004e7aa86e84416e33f826 - - default default] [instance: 036b9f54-5759-43e6-9666-ec39d96c1729] Updating guest XML with vif config: <interface type="ethernet">
Oct 09 16:36:41 compute-0 nova_compute[117331]:   <mac address="fa:16:3e:60:6b:b1"/>
Oct 09 16:36:41 compute-0 nova_compute[117331]:   <model type="virtio"/>
Oct 09 16:36:41 compute-0 nova_compute[117331]:   <driver name="vhost" rx_queue_size="512"/>
Oct 09 16:36:41 compute-0 nova_compute[117331]:   <mtu size="1442"/>
Oct 09 16:36:41 compute-0 nova_compute[117331]:   <target dev="tap19fbb24b-75"/>
Oct 09 16:36:41 compute-0 nova_compute[117331]: </interface>
Oct 09 16:36:41 compute-0 nova_compute[117331]:  _update_vif_xml /usr/lib/python3.12/site-packages/nova/virt/libvirt/migration.py:534
Oct 09 16:36:41 compute-0 nova_compute[117331]: 2025-10-09 16:36:41.469 2 DEBUG nova.virt.libvirt.migration [None req-26b0179c-bd04-4969-b6b1-57aad3a406cd 005258eefa5345409efe13f6a1de22ab c076c42271004e7aa86e84416e33f826 - - default default] _remove_cpu_shared_set_xml input xml=<domain type="kvm">
Oct 09 16:36:41 compute-0 nova_compute[117331]:   <name>instance-0000001b</name>
Oct 09 16:36:41 compute-0 nova_compute[117331]:   <uuid>036b9f54-5759-43e6-9666-ec39d96c1729</uuid>
Oct 09 16:36:41 compute-0 nova_compute[117331]:   <metadata>
Oct 09 16:36:41 compute-0 nova_compute[117331]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Oct 09 16:36:41 compute-0 nova_compute[117331]:       <nova:package version="32.1.0-0.20251008143712.076498e.el10"/>
Oct 09 16:36:41 compute-0 nova_compute[117331]:       <nova:name>tempest-TestExecuteWorkloadStabilizationStrategy-server-1046217673</nova:name>
Oct 09 16:36:41 compute-0 nova_compute[117331]:       <nova:creationTime>2025-10-09 16:35:52</nova:creationTime>
Oct 09 16:36:41 compute-0 nova_compute[117331]:       <nova:flavor name="m1.nano" id="5aeb5bf9-70f3-4ab6-b418-2c3319a7d2b3">
Oct 09 16:36:41 compute-0 nova_compute[117331]:         <nova:memory>128</nova:memory>
Oct 09 16:36:41 compute-0 nova_compute[117331]:         <nova:disk>1</nova:disk>
Oct 09 16:36:41 compute-0 nova_compute[117331]:         <nova:swap>0</nova:swap>
Oct 09 16:36:41 compute-0 nova_compute[117331]:         <nova:ephemeral>0</nova:ephemeral>
Oct 09 16:36:41 compute-0 nova_compute[117331]:         <nova:vcpus>1</nova:vcpus>
Oct 09 16:36:41 compute-0 nova_compute[117331]:         <nova:extraSpecs>
Oct 09 16:36:41 compute-0 nova_compute[117331]:           <nova:extraSpec name="hw_rng:allowed">True</nova:extraSpec>
Oct 09 16:36:41 compute-0 nova_compute[117331]:         </nova:extraSpecs>
Oct 09 16:36:41 compute-0 nova_compute[117331]:       </nova:flavor>
Oct 09 16:36:41 compute-0 nova_compute[117331]:       <nova:image uuid="b7d6e0af-25e4-4227-9dc6-43143898ceee">
Oct 09 16:36:41 compute-0 nova_compute[117331]:         <nova:containerFormat>bare</nova:containerFormat>
Oct 09 16:36:41 compute-0 nova_compute[117331]:         <nova:diskFormat>qcow2</nova:diskFormat>
Oct 09 16:36:41 compute-0 nova_compute[117331]:         <nova:minDisk>1</nova:minDisk>
Oct 09 16:36:41 compute-0 nova_compute[117331]:         <nova:minRam>0</nova:minRam>
Oct 09 16:36:41 compute-0 nova_compute[117331]:         <nova:properties>
Oct 09 16:36:41 compute-0 nova_compute[117331]:           <nova:property name="hw_rng_model">virtio</nova:property>
Oct 09 16:36:41 compute-0 nova_compute[117331]:         </nova:properties>
Oct 09 16:36:41 compute-0 nova_compute[117331]:       </nova:image>
Oct 09 16:36:41 compute-0 nova_compute[117331]:       <nova:owner>
Oct 09 16:36:41 compute-0 nova_compute[117331]:         <nova:user uuid="685b4924c5a04af7ae6f4a328bb50f14">tempest-TestExecuteWorkloadStabilizationStrategy-1000021673-project-admin</nova:user>
Oct 09 16:36:41 compute-0 nova_compute[117331]:         <nova:project uuid="a60acfe52e4b4b7f912654a59f0978b7">tempest-TestExecuteWorkloadStabilizationStrategy-1000021673</nova:project>
Oct 09 16:36:41 compute-0 nova_compute[117331]:       </nova:owner>
Oct 09 16:36:41 compute-0 nova_compute[117331]:       <nova:root type="image" uuid="b7d6e0af-25e4-4227-9dc6-43143898ceee"/>
Oct 09 16:36:41 compute-0 nova_compute[117331]:       <nova:ports>
Oct 09 16:36:41 compute-0 nova_compute[117331]:         <nova:port uuid="19fbb24b-757a-4067-a3e1-7a2c326cb886">
Oct 09 16:36:41 compute-0 nova_compute[117331]:           <nova:ip type="fixed" address="10.100.0.4" ipVersion="4"/>
Oct 09 16:36:41 compute-0 nova_compute[117331]:         </nova:port>
Oct 09 16:36:41 compute-0 nova_compute[117331]:       </nova:ports>
Oct 09 16:36:41 compute-0 nova_compute[117331]:     </nova:instance>
Oct 09 16:36:41 compute-0 nova_compute[117331]:   </metadata>
Oct 09 16:36:41 compute-0 nova_compute[117331]:   <memory unit="KiB">131072</memory>
Oct 09 16:36:41 compute-0 nova_compute[117331]:   <currentMemory unit="KiB">131072</currentMemory>
Oct 09 16:36:41 compute-0 nova_compute[117331]:   <vcpu placement="static">1</vcpu>
Oct 09 16:36:41 compute-0 nova_compute[117331]:   <resource>
Oct 09 16:36:41 compute-0 nova_compute[117331]:     <partition>/machine</partition>
Oct 09 16:36:41 compute-0 nova_compute[117331]:   </resource>
Oct 09 16:36:41 compute-0 nova_compute[117331]:   <sysinfo type="smbios">
Oct 09 16:36:41 compute-0 nova_compute[117331]:     <system>
Oct 09 16:36:41 compute-0 nova_compute[117331]:       <entry name="manufacturer">RDO</entry>
Oct 09 16:36:41 compute-0 nova_compute[117331]:       <entry name="product">OpenStack Compute</entry>
Oct 09 16:36:41 compute-0 nova_compute[117331]:       <entry name="version">32.1.0-0.20251008143712.076498e.el10</entry>
Oct 09 16:36:41 compute-0 nova_compute[117331]:       <entry name="serial">036b9f54-5759-43e6-9666-ec39d96c1729</entry>
Oct 09 16:36:41 compute-0 nova_compute[117331]:       <entry name="uuid">036b9f54-5759-43e6-9666-ec39d96c1729</entry>
Oct 09 16:36:41 compute-0 nova_compute[117331]:       <entry name="family">Virtual Machine</entry>
Oct 09 16:36:41 compute-0 nova_compute[117331]:     </system>
Oct 09 16:36:41 compute-0 nova_compute[117331]:   </sysinfo>
Oct 09 16:36:41 compute-0 nova_compute[117331]:   <os>
Oct 09 16:36:41 compute-0 nova_compute[117331]:     <type arch="x86_64" machine="pc-q35-rhel9.6.0">hvm</type>
Oct 09 16:36:41 compute-0 nova_compute[117331]:     <boot dev="hd"/>
Oct 09 16:36:41 compute-0 nova_compute[117331]:     <smbios mode="sysinfo"/>
Oct 09 16:36:41 compute-0 nova_compute[117331]:   </os>
Oct 09 16:36:41 compute-0 nova_compute[117331]:   <features>
Oct 09 16:36:41 compute-0 nova_compute[117331]:     <acpi/>
Oct 09 16:36:41 compute-0 nova_compute[117331]:     <apic/>
Oct 09 16:36:41 compute-0 nova_compute[117331]:     <vmcoreinfo state="on"/>
Oct 09 16:36:41 compute-0 nova_compute[117331]:   </features>
Oct 09 16:36:41 compute-0 nova_compute[117331]:   <cpu mode="host-model" check="partial">
Oct 09 16:36:41 compute-0 nova_compute[117331]:     <topology sockets="1" dies="1" clusters="1" cores="1" threads="1"/>
Oct 09 16:36:41 compute-0 nova_compute[117331]:   </cpu>
Oct 09 16:36:41 compute-0 nova_compute[117331]:   <clock offset="utc">
Oct 09 16:36:41 compute-0 nova_compute[117331]:     <timer name="pit" tickpolicy="delay"/>
Oct 09 16:36:41 compute-0 nova_compute[117331]:     <timer name="rtc" tickpolicy="catchup"/>
Oct 09 16:36:41 compute-0 nova_compute[117331]:     <timer name="hpet" present="no"/>
Oct 09 16:36:41 compute-0 nova_compute[117331]:   </clock>
Oct 09 16:36:41 compute-0 nova_compute[117331]:   <on_poweroff>destroy</on_poweroff>
Oct 09 16:36:41 compute-0 nova_compute[117331]:   <on_reboot>restart</on_reboot>
Oct 09 16:36:41 compute-0 nova_compute[117331]:   <on_crash>destroy</on_crash>
Oct 09 16:36:41 compute-0 nova_compute[117331]:   <devices>
Oct 09 16:36:41 compute-0 nova_compute[117331]:     <emulator>/usr/libexec/qemu-kvm</emulator>
Oct 09 16:36:41 compute-0 nova_compute[117331]:     <disk type="file" device="disk">
Oct 09 16:36:41 compute-0 nova_compute[117331]:       <driver name="qemu" type="qcow2" cache="none"/>
Oct 09 16:36:41 compute-0 nova_compute[117331]:       <source file="/var/lib/nova/instances/036b9f54-5759-43e6-9666-ec39d96c1729/disk"/>
Oct 09 16:36:41 compute-0 nova_compute[117331]:       <target dev="vda" bus="virtio"/>
Oct 09 16:36:41 compute-0 nova_compute[117331]:       <address type="pci" domain="0x0000" bus="0x03" slot="0x00" function="0x0"/>
Oct 09 16:36:41 compute-0 nova_compute[117331]:     </disk>
Oct 09 16:36:41 compute-0 nova_compute[117331]:     <disk type="file" device="cdrom">
Oct 09 16:36:41 compute-0 nova_compute[117331]:       <driver name="qemu" type="raw" cache="none"/>
Oct 09 16:36:41 compute-0 nova_compute[117331]:       <source file="/var/lib/nova/instances/036b9f54-5759-43e6-9666-ec39d96c1729/disk.config"/>
Oct 09 16:36:41 compute-0 nova_compute[117331]:       <target dev="sda" bus="sata"/>
Oct 09 16:36:41 compute-0 nova_compute[117331]:       <readonly/>
Oct 09 16:36:41 compute-0 nova_compute[117331]:       <address type="drive" controller="0" bus="0" target="0" unit="0"/>
Oct 09 16:36:41 compute-0 nova_compute[117331]:     </disk>
Oct 09 16:36:41 compute-0 nova_compute[117331]:     <controller type="pci" index="0" model="pcie-root"/>
Oct 09 16:36:41 compute-0 nova_compute[117331]:     <controller type="pci" index="1" model="pcie-root-port">
Oct 09 16:36:41 compute-0 nova_compute[117331]:       <model name="pcie-root-port"/>
Oct 09 16:36:41 compute-0 nova_compute[117331]:       <target chassis="1" port="0x10"/>
Oct 09 16:36:41 compute-0 nova_compute[117331]:       <address type="pci" domain="0x0000" bus="0x00" slot="0x02" function="0x0" multifunction="on"/>
Oct 09 16:36:41 compute-0 nova_compute[117331]:     </controller>
Oct 09 16:36:41 compute-0 nova_compute[117331]:     <controller type="pci" index="2" model="pcie-root-port">
Oct 09 16:36:41 compute-0 nova_compute[117331]:       <model name="pcie-root-port"/>
Oct 09 16:36:41 compute-0 nova_compute[117331]:       <target chassis="2" port="0x11"/>
Oct 09 16:36:41 compute-0 nova_compute[117331]:       <address type="pci" domain="0x0000" bus="0x00" slot="0x02" function="0x1"/>
Oct 09 16:36:41 compute-0 nova_compute[117331]:     </controller>
Oct 09 16:36:41 compute-0 nova_compute[117331]:     <controller type="pci" index="3" model="pcie-root-port">
Oct 09 16:36:41 compute-0 nova_compute[117331]:       <model name="pcie-root-port"/>
Oct 09 16:36:41 compute-0 nova_compute[117331]:       <target chassis="3" port="0x12"/>
Oct 09 16:36:41 compute-0 nova_compute[117331]:       <address type="pci" domain="0x0000" bus="0x00" slot="0x02" function="0x2"/>
Oct 09 16:36:41 compute-0 nova_compute[117331]:     </controller>
Oct 09 16:36:41 compute-0 nova_compute[117331]:     <controller type="pci" index="4" model="pcie-root-port">
Oct 09 16:36:41 compute-0 nova_compute[117331]:       <model name="pcie-root-port"/>
Oct 09 16:36:41 compute-0 nova_compute[117331]:       <target chassis="4" port="0x13"/>
Oct 09 16:36:41 compute-0 nova_compute[117331]:       <address type="pci" domain="0x0000" bus="0x00" slot="0x02" function="0x3"/>
Oct 09 16:36:41 compute-0 nova_compute[117331]:     </controller>
Oct 09 16:36:41 compute-0 nova_compute[117331]:     <controller type="pci" index="5" model="pcie-root-port">
Oct 09 16:36:41 compute-0 nova_compute[117331]:       <model name="pcie-root-port"/>
Oct 09 16:36:41 compute-0 nova_compute[117331]:       <target chassis="5" port="0x14"/>
Oct 09 16:36:41 compute-0 nova_compute[117331]:       <address type="pci" domain="0x0000" bus="0x00" slot="0x02" function="0x4"/>
Oct 09 16:36:41 compute-0 nova_compute[117331]:     </controller>
Oct 09 16:36:41 compute-0 nova_compute[117331]:     <controller type="pci" index="6" model="pcie-root-port">
Oct 09 16:36:41 compute-0 nova_compute[117331]:       <model name="pcie-root-port"/>
Oct 09 16:36:41 compute-0 nova_compute[117331]:       <target chassis="6" port="0x15"/>
Oct 09 16:36:41 compute-0 nova_compute[117331]:       <address type="pci" domain="0x0000" bus="0x00" slot="0x02" function="0x5"/>
Oct 09 16:36:41 compute-0 nova_compute[117331]:     </controller>
Oct 09 16:36:41 compute-0 nova_compute[117331]:     <controller type="pci" index="7" model="pcie-root-port">
Oct 09 16:36:41 compute-0 nova_compute[117331]:       <model name="pcie-root-port"/>
Oct 09 16:36:41 compute-0 nova_compute[117331]:       <target chassis="7" port="0x16"/>
Oct 09 16:36:41 compute-0 nova_compute[117331]:       <address type="pci" domain="0x0000" bus="0x00" slot="0x02" function="0x6"/>
Oct 09 16:36:41 compute-0 nova_compute[117331]:     </controller>
Oct 09 16:36:41 compute-0 nova_compute[117331]:     <controller type="pci" index="8" model="pcie-root-port">
Oct 09 16:36:41 compute-0 nova_compute[117331]:       <model name="pcie-root-port"/>
Oct 09 16:36:41 compute-0 nova_compute[117331]:       <target chassis="8" port="0x17"/>
Oct 09 16:36:41 compute-0 nova_compute[117331]:       <address type="pci" domain="0x0000" bus="0x00" slot="0x02" function="0x7"/>
Oct 09 16:36:41 compute-0 nova_compute[117331]:     </controller>
Oct 09 16:36:41 compute-0 nova_compute[117331]:     <controller type="pci" index="9" model="pcie-root-port">
Oct 09 16:36:41 compute-0 nova_compute[117331]:       <model name="pcie-root-port"/>
Oct 09 16:36:41 compute-0 nova_compute[117331]:       <target chassis="9" port="0x18"/>
Oct 09 16:36:41 compute-0 nova_compute[117331]:       <address type="pci" domain="0x0000" bus="0x00" slot="0x03" function="0x0" multifunction="on"/>
Oct 09 16:36:41 compute-0 nova_compute[117331]:     </controller>
Oct 09 16:36:41 compute-0 nova_compute[117331]:     <controller type="pci" index="10" model="pcie-root-port">
Oct 09 16:36:41 compute-0 nova_compute[117331]:       <model name="pcie-root-port"/>
Oct 09 16:36:41 compute-0 nova_compute[117331]:       <target chassis="10" port="0x19"/>
Oct 09 16:36:41 compute-0 nova_compute[117331]:       <address type="pci" domain="0x0000" bus="0x00" slot="0x03" function="0x1"/>
Oct 09 16:36:41 compute-0 nova_compute[117331]:     </controller>
Oct 09 16:36:41 compute-0 nova_compute[117331]:     <controller type="pci" index="11" model="pcie-root-port">
Oct 09 16:36:41 compute-0 nova_compute[117331]:       <model name="pcie-root-port"/>
Oct 09 16:36:41 compute-0 nova_compute[117331]:       <target chassis="11" port="0x1a"/>
Oct 09 16:36:41 compute-0 nova_compute[117331]:       <address type="pci" domain="0x0000" bus="0x00" slot="0x03" function="0x2"/>
Oct 09 16:36:41 compute-0 nova_compute[117331]:     </controller>
Oct 09 16:36:41 compute-0 nova_compute[117331]:     <controller type="pci" index="12" model="pcie-root-port">
Oct 09 16:36:41 compute-0 nova_compute[117331]:       <model name="pcie-root-port"/>
Oct 09 16:36:41 compute-0 nova_compute[117331]:       <target chassis="12" port="0x1b"/>
Oct 09 16:36:41 compute-0 nova_compute[117331]:       <address type="pci" domain="0x0000" bus="0x00" slot="0x03" function="0x3"/>
Oct 09 16:36:41 compute-0 nova_compute[117331]:     </controller>
Oct 09 16:36:41 compute-0 nova_compute[117331]:     <controller type="pci" index="13" model="pcie-root-port">
Oct 09 16:36:41 compute-0 nova_compute[117331]:       <model name="pcie-root-port"/>
Oct 09 16:36:41 compute-0 nova_compute[117331]:       <target chassis="13" port="0x1c"/>
Oct 09 16:36:41 compute-0 nova_compute[117331]:       <address type="pci" domain="0x0000" bus="0x00" slot="0x03" function="0x4"/>
Oct 09 16:36:41 compute-0 nova_compute[117331]:     </controller>
Oct 09 16:36:41 compute-0 nova_compute[117331]:     <controller type="pci" index="14" model="pcie-root-port">
Oct 09 16:36:41 compute-0 nova_compute[117331]:       <model name="pcie-root-port"/>
Oct 09 16:36:41 compute-0 nova_compute[117331]:       <target chassis="14" port="0x1d"/>
Oct 09 16:36:41 compute-0 nova_compute[117331]:       <address type="pci" domain="0x0000" bus="0x00" slot="0x03" function="0x5"/>
Oct 09 16:36:41 compute-0 nova_compute[117331]:     </controller>
Oct 09 16:36:41 compute-0 nova_compute[117331]:     <controller type="pci" index="15" model="pcie-root-port">
Oct 09 16:36:41 compute-0 nova_compute[117331]:       <model name="pcie-root-port"/>
Oct 09 16:36:41 compute-0 nova_compute[117331]:       <target chassis="15" port="0x1e"/>
Oct 09 16:36:41 compute-0 nova_compute[117331]:       <address type="pci" domain="0x0000" bus="0x00" slot="0x03" function="0x6"/>
Oct 09 16:36:41 compute-0 nova_compute[117331]:     </controller>
Oct 09 16:36:41 compute-0 nova_compute[117331]:     <controller type="pci" index="16" model="pcie-root-port">
Oct 09 16:36:41 compute-0 nova_compute[117331]:       <model name="pcie-root-port"/>
Oct 09 16:36:41 compute-0 nova_compute[117331]:       <target chassis="16" port="0x1f"/>
Oct 09 16:36:41 compute-0 nova_compute[117331]:       <address type="pci" domain="0x0000" bus="0x00" slot="0x03" function="0x7"/>
Oct 09 16:36:41 compute-0 nova_compute[117331]:     </controller>
Oct 09 16:36:41 compute-0 nova_compute[117331]:     <controller type="pci" index="17" model="pcie-root-port">
Oct 09 16:36:41 compute-0 nova_compute[117331]:       <model name="pcie-root-port"/>
Oct 09 16:36:41 compute-0 nova_compute[117331]:       <target chassis="17" port="0x20"/>
Oct 09 16:36:41 compute-0 nova_compute[117331]:       <address type="pci" domain="0x0000" bus="0x00" slot="0x04" function="0x0" multifunction="on"/>
Oct 09 16:36:41 compute-0 nova_compute[117331]:     </controller>
Oct 09 16:36:41 compute-0 nova_compute[117331]:     <controller type="pci" index="18" model="pcie-root-port">
Oct 09 16:36:41 compute-0 nova_compute[117331]:       <model name="pcie-root-port"/>
Oct 09 16:36:41 compute-0 nova_compute[117331]:       <target chassis="18" port="0x21"/>
Oct 09 16:36:41 compute-0 nova_compute[117331]:       <address type="pci" domain="0x0000" bus="0x00" slot="0x04" function="0x1"/>
Oct 09 16:36:41 compute-0 nova_compute[117331]:     </controller>
Oct 09 16:36:41 compute-0 nova_compute[117331]:     <controller type="pci" index="19" model="pcie-root-port">
Oct 09 16:36:41 compute-0 nova_compute[117331]:       <model name="pcie-root-port"/>
Oct 09 16:36:41 compute-0 nova_compute[117331]:       <target chassis="19" port="0x22"/>
Oct 09 16:36:41 compute-0 nova_compute[117331]:       <address type="pci" domain="0x0000" bus="0x00" slot="0x04" function="0x2"/>
Oct 09 16:36:41 compute-0 nova_compute[117331]:     </controller>
Oct 09 16:36:41 compute-0 nova_compute[117331]:     <controller type="pci" index="20" model="pcie-root-port">
Oct 09 16:36:41 compute-0 nova_compute[117331]:       <model name="pcie-root-port"/>
Oct 09 16:36:41 compute-0 nova_compute[117331]:       <target chassis="20" port="0x23"/>
Oct 09 16:36:41 compute-0 nova_compute[117331]:       <address type="pci" domain="0x0000" bus="0x00" slot="0x04" function="0x3"/>
Oct 09 16:36:41 compute-0 nova_compute[117331]:     </controller>
Oct 09 16:36:41 compute-0 nova_compute[117331]:     <controller type="pci" index="21" model="pcie-root-port">
Oct 09 16:36:41 compute-0 nova_compute[117331]:       <model name="pcie-root-port"/>
Oct 09 16:36:41 compute-0 nova_compute[117331]:       <target chassis="21" port="0x24"/>
Oct 09 16:36:41 compute-0 nova_compute[117331]:       <address type="pci" domain="0x0000" bus="0x00" slot="0x04" function="0x4"/>
Oct 09 16:36:41 compute-0 nova_compute[117331]:     </controller>
Oct 09 16:36:41 compute-0 nova_compute[117331]:     <controller type="pci" index="22" model="pcie-root-port">
Oct 09 16:36:41 compute-0 nova_compute[117331]:       <model name="pcie-root-port"/>
Oct 09 16:36:41 compute-0 nova_compute[117331]:       <target chassis="22" port="0x25"/>
Oct 09 16:36:41 compute-0 nova_compute[117331]:       <address type="pci" domain="0x0000" bus="0x00" slot="0x04" function="0x5"/>
Oct 09 16:36:41 compute-0 nova_compute[117331]:     </controller>
Oct 09 16:36:41 compute-0 nova_compute[117331]:     <controller type="pci" index="23" model="pcie-root-port">
Oct 09 16:36:41 compute-0 nova_compute[117331]:       <model name="pcie-root-port"/>
Oct 09 16:36:41 compute-0 nova_compute[117331]:       <target chassis="23" port="0x26"/>
Oct 09 16:36:41 compute-0 nova_compute[117331]:       <address type="pci" domain="0x0000" bus="0x00" slot="0x04" function="0x6"/>
Oct 09 16:36:41 compute-0 nova_compute[117331]:     </controller>
Oct 09 16:36:41 compute-0 nova_compute[117331]:     <controller type="pci" index="24" model="pcie-root-port">
Oct 09 16:36:41 compute-0 nova_compute[117331]:       <model name="pcie-root-port"/>
Oct 09 16:36:41 compute-0 nova_compute[117331]:       <target chassis="24" port="0x27"/>
Oct 09 16:36:41 compute-0 nova_compute[117331]:       <address type="pci" domain="0x0000" bus="0x00" slot="0x04" function="0x7"/>
Oct 09 16:36:41 compute-0 nova_compute[117331]:     </controller>
Oct 09 16:36:41 compute-0 nova_compute[117331]:     <controller type="pci" index="25" model="pcie-root-port">
Oct 09 16:36:41 compute-0 nova_compute[117331]:       <model name="pcie-root-port"/>
Oct 09 16:36:41 compute-0 nova_compute[117331]:       <target chassis="25" port="0x28"/>
Oct 09 16:36:41 compute-0 nova_compute[117331]:       <address type="pci" domain="0x0000" bus="0x00" slot="0x05" function="0x0"/>
Oct 09 16:36:41 compute-0 nova_compute[117331]:     </controller>
Oct 09 16:36:41 compute-0 nova_compute[117331]:     <controller type="pci" index="26" model="pcie-to-pci-bridge">
Oct 09 16:36:41 compute-0 nova_compute[117331]:       <model name="pcie-pci-bridge"/>
Oct 09 16:36:41 compute-0 nova_compute[117331]:       <address type="pci" domain="0x0000" bus="0x01" slot="0x00" function="0x0"/>
Oct 09 16:36:41 compute-0 nova_compute[117331]:     </controller>
Oct 09 16:36:41 compute-0 nova_compute[117331]:     <controller type="usb" index="0" model="piix3-uhci">
Oct 09 16:36:41 compute-0 nova_compute[117331]:       <address type="pci" domain="0x0000" bus="0x1a" slot="0x01" function="0x0"/>
Oct 09 16:36:41 compute-0 nova_compute[117331]:     </controller>
Oct 09 16:36:41 compute-0 nova_compute[117331]:     <controller type="sata" index="0">
Oct 09 16:36:41 compute-0 nova_compute[117331]:       <address type="pci" domain="0x0000" bus="0x00" slot="0x1f" function="0x2"/>
Oct 09 16:36:41 compute-0 nova_compute[117331]:     </controller>
Oct 09 16:36:41 compute-0 nova_compute[117331]:     <interface type="ethernet"><mac address="fa:16:3e:60:6b:b1"/><model type="virtio"/><driver name="vhost" rx_queue_size="512"/><mtu size="1442"/><target dev="tap19fbb24b-75"/><address type="pci" domain="0x0000" bus="0x02" slot="0x00" function="0x0"/>
Oct 09 16:36:41 compute-0 nova_compute[117331]:     </interface><serial type="pty">
Oct 09 16:36:41 compute-0 nova_compute[117331]:       <log file="/var/lib/nova/instances/036b9f54-5759-43e6-9666-ec39d96c1729/console.log" append="off"/>
Oct 09 16:36:41 compute-0 nova_compute[117331]:       <target type="isa-serial" port="0">
Oct 09 16:36:41 compute-0 nova_compute[117331]:         <model name="isa-serial"/>
Oct 09 16:36:41 compute-0 nova_compute[117331]:       </target>
Oct 09 16:36:41 compute-0 nova_compute[117331]:     </serial>
Oct 09 16:36:41 compute-0 nova_compute[117331]:     <console type="pty">
Oct 09 16:36:41 compute-0 nova_compute[117331]:       <log file="/var/lib/nova/instances/036b9f54-5759-43e6-9666-ec39d96c1729/console.log" append="off"/>
Oct 09 16:36:41 compute-0 nova_compute[117331]:       <target type="serial" port="0"/>
Oct 09 16:36:41 compute-0 nova_compute[117331]:     </console>
Oct 09 16:36:41 compute-0 nova_compute[117331]:     <input type="tablet" bus="usb">
Oct 09 16:36:41 compute-0 nova_compute[117331]:       <address type="usb" bus="0" port="1"/>
Oct 09 16:36:41 compute-0 nova_compute[117331]:     </input>
Oct 09 16:36:41 compute-0 nova_compute[117331]:     <input type="mouse" bus="ps2"/>
Oct 09 16:36:41 compute-0 nova_compute[117331]:     <graphics type="vnc" port="-1" autoport="yes" listen="::">
Oct 09 16:36:41 compute-0 nova_compute[117331]:       <listen type="address" address="::"/>
Oct 09 16:36:41 compute-0 nova_compute[117331]:     </graphics>
Oct 09 16:36:41 compute-0 nova_compute[117331]:     <video>
Oct 09 16:36:41 compute-0 nova_compute[117331]:       <model type="virtio" heads="1" primary="yes"/>
Oct 09 16:36:41 compute-0 nova_compute[117331]:       <address type="pci" domain="0x0000" bus="0x00" slot="0x01" function="0x0"/>
Oct 09 16:36:41 compute-0 nova_compute[117331]:     </video>
Oct 09 16:36:41 compute-0 nova_compute[117331]:     <memballoon model="virtio" autodeflate="on" freePageReporting="on">
Oct 09 16:36:41 compute-0 nova_compute[117331]:       <stats period="10"/>
Oct 09 16:36:41 compute-0 nova_compute[117331]:       <address type="pci" domain="0x0000" bus="0x04" slot="0x00" function="0x0"/>
Oct 09 16:36:41 compute-0 nova_compute[117331]:     </memballoon>
Oct 09 16:36:41 compute-0 nova_compute[117331]:     <rng model="virtio">
Oct 09 16:36:41 compute-0 nova_compute[117331]:       <backend model="random">/dev/urandom</backend>
Oct 09 16:36:41 compute-0 nova_compute[117331]:       <address type="pci" domain="0x0000" bus="0x05" slot="0x00" function="0x0"/>
Oct 09 16:36:41 compute-0 nova_compute[117331]:     </rng>
Oct 09 16:36:41 compute-0 nova_compute[117331]:   </devices>
Oct 09 16:36:41 compute-0 nova_compute[117331]:   <seclabel type="dynamic" model="selinux" relabel="yes"/>
Oct 09 16:36:41 compute-0 nova_compute[117331]: </domain>
Oct 09 16:36:41 compute-0 nova_compute[117331]:  _remove_cpu_shared_set_xml /usr/lib/python3.12/site-packages/nova/virt/libvirt/migration.py:241
Oct 09 16:36:41 compute-0 nova_compute[117331]: 2025-10-09 16:36:41.470 2 DEBUG nova.virt.libvirt.migration [None req-26b0179c-bd04-4969-b6b1-57aad3a406cd 005258eefa5345409efe13f6a1de22ab c076c42271004e7aa86e84416e33f826 - - default default] _remove_cpu_shared_set_xml output xml=<domain type="kvm">
Oct 09 16:36:41 compute-0 nova_compute[117331]:   <name>instance-0000001b</name>
Oct 09 16:36:41 compute-0 nova_compute[117331]:   <uuid>036b9f54-5759-43e6-9666-ec39d96c1729</uuid>
Oct 09 16:36:41 compute-0 nova_compute[117331]:   <metadata>
Oct 09 16:36:41 compute-0 nova_compute[117331]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Oct 09 16:36:41 compute-0 nova_compute[117331]:       <nova:package version="32.1.0-0.20251008143712.076498e.el10"/>
Oct 09 16:36:41 compute-0 nova_compute[117331]:       <nova:name>tempest-TestExecuteWorkloadStabilizationStrategy-server-1046217673</nova:name>
Oct 09 16:36:41 compute-0 nova_compute[117331]:       <nova:creationTime>2025-10-09 16:35:52</nova:creationTime>
Oct 09 16:36:41 compute-0 nova_compute[117331]:       <nova:flavor name="m1.nano" id="5aeb5bf9-70f3-4ab6-b418-2c3319a7d2b3">
Oct 09 16:36:41 compute-0 nova_compute[117331]:         <nova:memory>128</nova:memory>
Oct 09 16:36:41 compute-0 nova_compute[117331]:         <nova:disk>1</nova:disk>
Oct 09 16:36:41 compute-0 nova_compute[117331]:         <nova:swap>0</nova:swap>
Oct 09 16:36:41 compute-0 nova_compute[117331]:         <nova:ephemeral>0</nova:ephemeral>
Oct 09 16:36:41 compute-0 nova_compute[117331]:         <nova:vcpus>1</nova:vcpus>
Oct 09 16:36:41 compute-0 nova_compute[117331]:         <nova:extraSpecs>
Oct 09 16:36:41 compute-0 nova_compute[117331]:           <nova:extraSpec name="hw_rng:allowed">True</nova:extraSpec>
Oct 09 16:36:41 compute-0 nova_compute[117331]:         </nova:extraSpecs>
Oct 09 16:36:41 compute-0 nova_compute[117331]:       </nova:flavor>
Oct 09 16:36:41 compute-0 nova_compute[117331]:       <nova:image uuid="b7d6e0af-25e4-4227-9dc6-43143898ceee">
Oct 09 16:36:41 compute-0 nova_compute[117331]:         <nova:containerFormat>bare</nova:containerFormat>
Oct 09 16:36:41 compute-0 nova_compute[117331]:         <nova:diskFormat>qcow2</nova:diskFormat>
Oct 09 16:36:41 compute-0 nova_compute[117331]:         <nova:minDisk>1</nova:minDisk>
Oct 09 16:36:41 compute-0 nova_compute[117331]:         <nova:minRam>0</nova:minRam>
Oct 09 16:36:41 compute-0 nova_compute[117331]:         <nova:properties>
Oct 09 16:36:41 compute-0 nova_compute[117331]:           <nova:property name="hw_rng_model">virtio</nova:property>
Oct 09 16:36:41 compute-0 nova_compute[117331]:         </nova:properties>
Oct 09 16:36:41 compute-0 nova_compute[117331]:       </nova:image>
Oct 09 16:36:41 compute-0 nova_compute[117331]:       <nova:owner>
Oct 09 16:36:41 compute-0 nova_compute[117331]:         <nova:user uuid="685b4924c5a04af7ae6f4a328bb50f14">tempest-TestExecuteWorkloadStabilizationStrategy-1000021673-project-admin</nova:user>
Oct 09 16:36:41 compute-0 nova_compute[117331]:         <nova:project uuid="a60acfe52e4b4b7f912654a59f0978b7">tempest-TestExecuteWorkloadStabilizationStrategy-1000021673</nova:project>
Oct 09 16:36:41 compute-0 nova_compute[117331]:       </nova:owner>
Oct 09 16:36:41 compute-0 nova_compute[117331]:       <nova:root type="image" uuid="b7d6e0af-25e4-4227-9dc6-43143898ceee"/>
Oct 09 16:36:41 compute-0 nova_compute[117331]:       <nova:ports>
Oct 09 16:36:41 compute-0 nova_compute[117331]:         <nova:port uuid="19fbb24b-757a-4067-a3e1-7a2c326cb886">
Oct 09 16:36:41 compute-0 nova_compute[117331]:           <nova:ip type="fixed" address="10.100.0.4" ipVersion="4"/>
Oct 09 16:36:41 compute-0 nova_compute[117331]:         </nova:port>
Oct 09 16:36:41 compute-0 nova_compute[117331]:       </nova:ports>
Oct 09 16:36:41 compute-0 nova_compute[117331]:     </nova:instance>
Oct 09 16:36:41 compute-0 nova_compute[117331]:   </metadata>
Oct 09 16:36:41 compute-0 nova_compute[117331]:   <memory unit="KiB">131072</memory>
Oct 09 16:36:41 compute-0 nova_compute[117331]:   <currentMemory unit="KiB">131072</currentMemory>
Oct 09 16:36:41 compute-0 nova_compute[117331]:   <vcpu placement="static">1</vcpu>
Oct 09 16:36:41 compute-0 nova_compute[117331]:   <resource>
Oct 09 16:36:41 compute-0 nova_compute[117331]:     <partition>/machine</partition>
Oct 09 16:36:41 compute-0 nova_compute[117331]:   </resource>
Oct 09 16:36:41 compute-0 nova_compute[117331]:   <sysinfo type="smbios">
Oct 09 16:36:41 compute-0 nova_compute[117331]:     <system>
Oct 09 16:36:41 compute-0 nova_compute[117331]:       <entry name="manufacturer">RDO</entry>
Oct 09 16:36:41 compute-0 nova_compute[117331]:       <entry name="product">OpenStack Compute</entry>
Oct 09 16:36:41 compute-0 nova_compute[117331]:       <entry name="version">32.1.0-0.20251008143712.076498e.el10</entry>
Oct 09 16:36:41 compute-0 nova_compute[117331]:       <entry name="serial">036b9f54-5759-43e6-9666-ec39d96c1729</entry>
Oct 09 16:36:41 compute-0 nova_compute[117331]:       <entry name="uuid">036b9f54-5759-43e6-9666-ec39d96c1729</entry>
Oct 09 16:36:41 compute-0 nova_compute[117331]:       <entry name="family">Virtual Machine</entry>
Oct 09 16:36:41 compute-0 nova_compute[117331]:     </system>
Oct 09 16:36:41 compute-0 nova_compute[117331]:   </sysinfo>
Oct 09 16:36:41 compute-0 nova_compute[117331]:   <os>
Oct 09 16:36:41 compute-0 nova_compute[117331]:     <type arch="x86_64" machine="pc-q35-rhel9.6.0">hvm</type>
Oct 09 16:36:41 compute-0 nova_compute[117331]:     <boot dev="hd"/>
Oct 09 16:36:41 compute-0 nova_compute[117331]:     <smbios mode="sysinfo"/>
Oct 09 16:36:41 compute-0 nova_compute[117331]:   </os>
Oct 09 16:36:41 compute-0 nova_compute[117331]:   <features>
Oct 09 16:36:41 compute-0 nova_compute[117331]:     <acpi/>
Oct 09 16:36:41 compute-0 nova_compute[117331]:     <apic/>
Oct 09 16:36:41 compute-0 nova_compute[117331]:     <vmcoreinfo state="on"/>
Oct 09 16:36:41 compute-0 nova_compute[117331]:   </features>
Oct 09 16:36:41 compute-0 nova_compute[117331]:   <cpu mode="host-model" check="partial">
Oct 09 16:36:41 compute-0 nova_compute[117331]:     <topology sockets="1" dies="1" clusters="1" cores="1" threads="1"/>
Oct 09 16:36:41 compute-0 nova_compute[117331]:   </cpu>
Oct 09 16:36:41 compute-0 nova_compute[117331]:   <clock offset="utc">
Oct 09 16:36:41 compute-0 nova_compute[117331]:     <timer name="pit" tickpolicy="delay"/>
Oct 09 16:36:41 compute-0 nova_compute[117331]:     <timer name="rtc" tickpolicy="catchup"/>
Oct 09 16:36:41 compute-0 nova_compute[117331]:     <timer name="hpet" present="no"/>
Oct 09 16:36:41 compute-0 nova_compute[117331]:   </clock>
Oct 09 16:36:41 compute-0 nova_compute[117331]:   <on_poweroff>destroy</on_poweroff>
Oct 09 16:36:41 compute-0 nova_compute[117331]:   <on_reboot>restart</on_reboot>
Oct 09 16:36:41 compute-0 nova_compute[117331]:   <on_crash>destroy</on_crash>
Oct 09 16:36:41 compute-0 nova_compute[117331]:   <devices>
Oct 09 16:36:41 compute-0 nova_compute[117331]:     <emulator>/usr/libexec/qemu-kvm</emulator>
Oct 09 16:36:41 compute-0 nova_compute[117331]:     <disk type="file" device="disk">
Oct 09 16:36:41 compute-0 nova_compute[117331]:       <driver name="qemu" type="qcow2" cache="none"/>
Oct 09 16:36:41 compute-0 nova_compute[117331]:       <source file="/var/lib/nova/instances/036b9f54-5759-43e6-9666-ec39d96c1729/disk"/>
Oct 09 16:36:41 compute-0 nova_compute[117331]:       <target dev="vda" bus="virtio"/>
Oct 09 16:36:41 compute-0 nova_compute[117331]:       <address type="pci" domain="0x0000" bus="0x03" slot="0x00" function="0x0"/>
Oct 09 16:36:41 compute-0 nova_compute[117331]:     </disk>
Oct 09 16:36:41 compute-0 nova_compute[117331]:     <disk type="file" device="cdrom">
Oct 09 16:36:41 compute-0 nova_compute[117331]:       <driver name="qemu" type="raw" cache="none"/>
Oct 09 16:36:41 compute-0 nova_compute[117331]:       <source file="/var/lib/nova/instances/036b9f54-5759-43e6-9666-ec39d96c1729/disk.config"/>
Oct 09 16:36:41 compute-0 nova_compute[117331]:       <target dev="sda" bus="sata"/>
Oct 09 16:36:41 compute-0 nova_compute[117331]:       <readonly/>
Oct 09 16:36:41 compute-0 nova_compute[117331]:       <address type="drive" controller="0" bus="0" target="0" unit="0"/>
Oct 09 16:36:41 compute-0 nova_compute[117331]:     </disk>
Oct 09 16:36:41 compute-0 nova_compute[117331]:     <controller type="pci" index="0" model="pcie-root"/>
Oct 09 16:36:41 compute-0 nova_compute[117331]:     <controller type="pci" index="1" model="pcie-root-port">
Oct 09 16:36:41 compute-0 nova_compute[117331]:       <model name="pcie-root-port"/>
Oct 09 16:36:41 compute-0 nova_compute[117331]:       <target chassis="1" port="0x10"/>
Oct 09 16:36:41 compute-0 nova_compute[117331]:       <address type="pci" domain="0x0000" bus="0x00" slot="0x02" function="0x0" multifunction="on"/>
Oct 09 16:36:41 compute-0 nova_compute[117331]:     </controller>
Oct 09 16:36:41 compute-0 nova_compute[117331]:     <controller type="pci" index="2" model="pcie-root-port">
Oct 09 16:36:41 compute-0 nova_compute[117331]:       <model name="pcie-root-port"/>
Oct 09 16:36:41 compute-0 nova_compute[117331]:       <target chassis="2" port="0x11"/>
Oct 09 16:36:41 compute-0 nova_compute[117331]:       <address type="pci" domain="0x0000" bus="0x00" slot="0x02" function="0x1"/>
Oct 09 16:36:41 compute-0 nova_compute[117331]:     </controller>
Oct 09 16:36:41 compute-0 nova_compute[117331]:     <controller type="pci" index="3" model="pcie-root-port">
Oct 09 16:36:41 compute-0 nova_compute[117331]:       <model name="pcie-root-port"/>
Oct 09 16:36:41 compute-0 nova_compute[117331]:       <target chassis="3" port="0x12"/>
Oct 09 16:36:41 compute-0 nova_compute[117331]:       <address type="pci" domain="0x0000" bus="0x00" slot="0x02" function="0x2"/>
Oct 09 16:36:41 compute-0 nova_compute[117331]:     </controller>
Oct 09 16:36:41 compute-0 nova_compute[117331]:     <controller type="pci" index="4" model="pcie-root-port">
Oct 09 16:36:41 compute-0 nova_compute[117331]:       <model name="pcie-root-port"/>
Oct 09 16:36:41 compute-0 nova_compute[117331]:       <target chassis="4" port="0x13"/>
Oct 09 16:36:41 compute-0 nova_compute[117331]:       <address type="pci" domain="0x0000" bus="0x00" slot="0x02" function="0x3"/>
Oct 09 16:36:41 compute-0 nova_compute[117331]:     </controller>
Oct 09 16:36:41 compute-0 nova_compute[117331]:     <controller type="pci" index="5" model="pcie-root-port">
Oct 09 16:36:41 compute-0 nova_compute[117331]:       <model name="pcie-root-port"/>
Oct 09 16:36:41 compute-0 nova_compute[117331]:       <target chassis="5" port="0x14"/>
Oct 09 16:36:41 compute-0 nova_compute[117331]:       <address type="pci" domain="0x0000" bus="0x00" slot="0x02" function="0x4"/>
Oct 09 16:36:41 compute-0 nova_compute[117331]:     </controller>
Oct 09 16:36:41 compute-0 nova_compute[117331]:     <controller type="pci" index="6" model="pcie-root-port">
Oct 09 16:36:41 compute-0 nova_compute[117331]:       <model name="pcie-root-port"/>
Oct 09 16:36:41 compute-0 nova_compute[117331]:       <target chassis="6" port="0x15"/>
Oct 09 16:36:41 compute-0 nova_compute[117331]:       <address type="pci" domain="0x0000" bus="0x00" slot="0x02" function="0x5"/>
Oct 09 16:36:41 compute-0 nova_compute[117331]:     </controller>
Oct 09 16:36:41 compute-0 nova_compute[117331]:     <controller type="pci" index="7" model="pcie-root-port">
Oct 09 16:36:41 compute-0 nova_compute[117331]:       <model name="pcie-root-port"/>
Oct 09 16:36:41 compute-0 nova_compute[117331]:       <target chassis="7" port="0x16"/>
Oct 09 16:36:41 compute-0 nova_compute[117331]:       <address type="pci" domain="0x0000" bus="0x00" slot="0x02" function="0x6"/>
Oct 09 16:36:41 compute-0 nova_compute[117331]:     </controller>
Oct 09 16:36:41 compute-0 nova_compute[117331]:     <controller type="pci" index="8" model="pcie-root-port">
Oct 09 16:36:41 compute-0 nova_compute[117331]:       <model name="pcie-root-port"/>
Oct 09 16:36:41 compute-0 nova_compute[117331]:       <target chassis="8" port="0x17"/>
Oct 09 16:36:41 compute-0 nova_compute[117331]:       <address type="pci" domain="0x0000" bus="0x00" slot="0x02" function="0x7"/>
Oct 09 16:36:41 compute-0 nova_compute[117331]:     </controller>
Oct 09 16:36:41 compute-0 nova_compute[117331]:     <controller type="pci" index="9" model="pcie-root-port">
Oct 09 16:36:41 compute-0 nova_compute[117331]:       <model name="pcie-root-port"/>
Oct 09 16:36:41 compute-0 nova_compute[117331]:       <target chassis="9" port="0x18"/>
Oct 09 16:36:41 compute-0 nova_compute[117331]:       <address type="pci" domain="0x0000" bus="0x00" slot="0x03" function="0x0" multifunction="on"/>
Oct 09 16:36:41 compute-0 nova_compute[117331]:     </controller>
Oct 09 16:36:41 compute-0 nova_compute[117331]:     <controller type="pci" index="10" model="pcie-root-port">
Oct 09 16:36:41 compute-0 nova_compute[117331]:       <model name="pcie-root-port"/>
Oct 09 16:36:41 compute-0 nova_compute[117331]:       <target chassis="10" port="0x19"/>
Oct 09 16:36:41 compute-0 nova_compute[117331]:       <address type="pci" domain="0x0000" bus="0x00" slot="0x03" function="0x1"/>
Oct 09 16:36:41 compute-0 nova_compute[117331]:     </controller>
Oct 09 16:36:41 compute-0 nova_compute[117331]:     <controller type="pci" index="11" model="pcie-root-port">
Oct 09 16:36:41 compute-0 nova_compute[117331]:       <model name="pcie-root-port"/>
Oct 09 16:36:41 compute-0 nova_compute[117331]:       <target chassis="11" port="0x1a"/>
Oct 09 16:36:41 compute-0 nova_compute[117331]:       <address type="pci" domain="0x0000" bus="0x00" slot="0x03" function="0x2"/>
Oct 09 16:36:41 compute-0 nova_compute[117331]:     </controller>
Oct 09 16:36:41 compute-0 nova_compute[117331]:     <controller type="pci" index="12" model="pcie-root-port">
Oct 09 16:36:41 compute-0 nova_compute[117331]:       <model name="pcie-root-port"/>
Oct 09 16:36:41 compute-0 nova_compute[117331]:       <target chassis="12" port="0x1b"/>
Oct 09 16:36:41 compute-0 nova_compute[117331]:       <address type="pci" domain="0x0000" bus="0x00" slot="0x03" function="0x3"/>
Oct 09 16:36:41 compute-0 nova_compute[117331]:     </controller>
Oct 09 16:36:41 compute-0 nova_compute[117331]:     <controller type="pci" index="13" model="pcie-root-port">
Oct 09 16:36:41 compute-0 nova_compute[117331]:       <model name="pcie-root-port"/>
Oct 09 16:36:41 compute-0 nova_compute[117331]:       <target chassis="13" port="0x1c"/>
Oct 09 16:36:41 compute-0 nova_compute[117331]:       <address type="pci" domain="0x0000" bus="0x00" slot="0x03" function="0x4"/>
Oct 09 16:36:41 compute-0 nova_compute[117331]:     </controller>
Oct 09 16:36:41 compute-0 nova_compute[117331]:     <controller type="pci" index="14" model="pcie-root-port">
Oct 09 16:36:41 compute-0 nova_compute[117331]:       <model name="pcie-root-port"/>
Oct 09 16:36:41 compute-0 nova_compute[117331]:       <target chassis="14" port="0x1d"/>
Oct 09 16:36:41 compute-0 nova_compute[117331]:       <address type="pci" domain="0x0000" bus="0x00" slot="0x03" function="0x5"/>
Oct 09 16:36:41 compute-0 nova_compute[117331]:     </controller>
Oct 09 16:36:41 compute-0 nova_compute[117331]:     <controller type="pci" index="15" model="pcie-root-port">
Oct 09 16:36:41 compute-0 nova_compute[117331]:       <model name="pcie-root-port"/>
Oct 09 16:36:41 compute-0 nova_compute[117331]:       <target chassis="15" port="0x1e"/>
Oct 09 16:36:41 compute-0 nova_compute[117331]:       <address type="pci" domain="0x0000" bus="0x00" slot="0x03" function="0x6"/>
Oct 09 16:36:41 compute-0 nova_compute[117331]:     </controller>
Oct 09 16:36:41 compute-0 nova_compute[117331]:     <controller type="pci" index="16" model="pcie-root-port">
Oct 09 16:36:41 compute-0 nova_compute[117331]:       <model name="pcie-root-port"/>
Oct 09 16:36:41 compute-0 nova_compute[117331]:       <target chassis="16" port="0x1f"/>
Oct 09 16:36:41 compute-0 nova_compute[117331]:       <address type="pci" domain="0x0000" bus="0x00" slot="0x03" function="0x7"/>
Oct 09 16:36:41 compute-0 nova_compute[117331]:     </controller>
Oct 09 16:36:41 compute-0 nova_compute[117331]:     <controller type="pci" index="17" model="pcie-root-port">
Oct 09 16:36:41 compute-0 nova_compute[117331]:       <model name="pcie-root-port"/>
Oct 09 16:36:41 compute-0 nova_compute[117331]:       <target chassis="17" port="0x20"/>
Oct 09 16:36:41 compute-0 nova_compute[117331]:       <address type="pci" domain="0x0000" bus="0x00" slot="0x04" function="0x0" multifunction="on"/>
Oct 09 16:36:41 compute-0 nova_compute[117331]:     </controller>
Oct 09 16:36:41 compute-0 nova_compute[117331]:     <controller type="pci" index="18" model="pcie-root-port">
Oct 09 16:36:41 compute-0 nova_compute[117331]:       <model name="pcie-root-port"/>
Oct 09 16:36:41 compute-0 nova_compute[117331]:       <target chassis="18" port="0x21"/>
Oct 09 16:36:41 compute-0 nova_compute[117331]:       <address type="pci" domain="0x0000" bus="0x00" slot="0x04" function="0x1"/>
Oct 09 16:36:41 compute-0 nova_compute[117331]:     </controller>
Oct 09 16:36:41 compute-0 nova_compute[117331]:     <controller type="pci" index="19" model="pcie-root-port">
Oct 09 16:36:41 compute-0 nova_compute[117331]:       <model name="pcie-root-port"/>
Oct 09 16:36:41 compute-0 nova_compute[117331]:       <target chassis="19" port="0x22"/>
Oct 09 16:36:41 compute-0 nova_compute[117331]:       <address type="pci" domain="0x0000" bus="0x00" slot="0x04" function="0x2"/>
Oct 09 16:36:41 compute-0 nova_compute[117331]:     </controller>
Oct 09 16:36:41 compute-0 nova_compute[117331]:     <controller type="pci" index="20" model="pcie-root-port">
Oct 09 16:36:41 compute-0 nova_compute[117331]:       <model name="pcie-root-port"/>
Oct 09 16:36:41 compute-0 nova_compute[117331]:       <target chassis="20" port="0x23"/>
Oct 09 16:36:41 compute-0 nova_compute[117331]:       <address type="pci" domain="0x0000" bus="0x00" slot="0x04" function="0x3"/>
Oct 09 16:36:41 compute-0 nova_compute[117331]:     </controller>
Oct 09 16:36:41 compute-0 nova_compute[117331]:     <controller type="pci" index="21" model="pcie-root-port">
Oct 09 16:36:41 compute-0 nova_compute[117331]:       <model name="pcie-root-port"/>
Oct 09 16:36:41 compute-0 nova_compute[117331]:       <target chassis="21" port="0x24"/>
Oct 09 16:36:41 compute-0 nova_compute[117331]:       <address type="pci" domain="0x0000" bus="0x00" slot="0x04" function="0x4"/>
Oct 09 16:36:41 compute-0 nova_compute[117331]:     </controller>
Oct 09 16:36:41 compute-0 nova_compute[117331]:     <controller type="pci" index="22" model="pcie-root-port">
Oct 09 16:36:41 compute-0 nova_compute[117331]:       <model name="pcie-root-port"/>
Oct 09 16:36:41 compute-0 nova_compute[117331]:       <target chassis="22" port="0x25"/>
Oct 09 16:36:41 compute-0 nova_compute[117331]:       <address type="pci" domain="0x0000" bus="0x00" slot="0x04" function="0x5"/>
Oct 09 16:36:41 compute-0 nova_compute[117331]:     </controller>
Oct 09 16:36:41 compute-0 nova_compute[117331]:     <controller type="pci" index="23" model="pcie-root-port">
Oct 09 16:36:41 compute-0 nova_compute[117331]:       <model name="pcie-root-port"/>
Oct 09 16:36:41 compute-0 nova_compute[117331]:       <target chassis="23" port="0x26"/>
Oct 09 16:36:41 compute-0 nova_compute[117331]:       <address type="pci" domain="0x0000" bus="0x00" slot="0x04" function="0x6"/>
Oct 09 16:36:41 compute-0 nova_compute[117331]:     </controller>
Oct 09 16:36:41 compute-0 nova_compute[117331]:     <controller type="pci" index="24" model="pcie-root-port">
Oct 09 16:36:41 compute-0 nova_compute[117331]:       <model name="pcie-root-port"/>
Oct 09 16:36:41 compute-0 nova_compute[117331]:       <target chassis="24" port="0x27"/>
Oct 09 16:36:41 compute-0 nova_compute[117331]:       <address type="pci" domain="0x0000" bus="0x00" slot="0x04" function="0x7"/>
Oct 09 16:36:41 compute-0 nova_compute[117331]:     </controller>
Oct 09 16:36:41 compute-0 nova_compute[117331]:     <controller type="pci" index="25" model="pcie-root-port">
Oct 09 16:36:41 compute-0 nova_compute[117331]:       <model name="pcie-root-port"/>
Oct 09 16:36:41 compute-0 nova_compute[117331]:       <target chassis="25" port="0x28"/>
Oct 09 16:36:41 compute-0 nova_compute[117331]:       <address type="pci" domain="0x0000" bus="0x00" slot="0x05" function="0x0"/>
Oct 09 16:36:41 compute-0 nova_compute[117331]:     </controller>
Oct 09 16:36:41 compute-0 nova_compute[117331]:     <controller type="pci" index="26" model="pcie-to-pci-bridge">
Oct 09 16:36:41 compute-0 nova_compute[117331]:       <model name="pcie-pci-bridge"/>
Oct 09 16:36:41 compute-0 nova_compute[117331]:       <address type="pci" domain="0x0000" bus="0x01" slot="0x00" function="0x0"/>
Oct 09 16:36:41 compute-0 nova_compute[117331]:     </controller>
Oct 09 16:36:41 compute-0 nova_compute[117331]:     <controller type="usb" index="0" model="piix3-uhci">
Oct 09 16:36:41 compute-0 nova_compute[117331]:       <address type="pci" domain="0x0000" bus="0x1a" slot="0x01" function="0x0"/>
Oct 09 16:36:41 compute-0 nova_compute[117331]:     </controller>
Oct 09 16:36:41 compute-0 nova_compute[117331]:     <controller type="sata" index="0">
Oct 09 16:36:41 compute-0 nova_compute[117331]:       <address type="pci" domain="0x0000" bus="0x00" slot="0x1f" function="0x2"/>
Oct 09 16:36:41 compute-0 nova_compute[117331]:     </controller>
Oct 09 16:36:41 compute-0 nova_compute[117331]:     <interface type="ethernet"><mac address="fa:16:3e:60:6b:b1"/><model type="virtio"/><driver name="vhost" rx_queue_size="512"/><mtu size="1442"/><target dev="tap19fbb24b-75"/><address type="pci" domain="0x0000" bus="0x02" slot="0x00" function="0x0"/>
Oct 09 16:36:41 compute-0 nova_compute[117331]:     </interface><serial type="pty">
Oct 09 16:36:41 compute-0 nova_compute[117331]:       <log file="/var/lib/nova/instances/036b9f54-5759-43e6-9666-ec39d96c1729/console.log" append="off"/>
Oct 09 16:36:41 compute-0 nova_compute[117331]:       <target type="isa-serial" port="0">
Oct 09 16:36:41 compute-0 nova_compute[117331]:         <model name="isa-serial"/>
Oct 09 16:36:41 compute-0 nova_compute[117331]:       </target>
Oct 09 16:36:41 compute-0 nova_compute[117331]:     </serial>
Oct 09 16:36:41 compute-0 nova_compute[117331]:     <console type="pty">
Oct 09 16:36:41 compute-0 nova_compute[117331]:       <log file="/var/lib/nova/instances/036b9f54-5759-43e6-9666-ec39d96c1729/console.log" append="off"/>
Oct 09 16:36:41 compute-0 nova_compute[117331]:       <target type="serial" port="0"/>
Oct 09 16:36:41 compute-0 nova_compute[117331]:     </console>
Oct 09 16:36:41 compute-0 nova_compute[117331]:     <input type="tablet" bus="usb">
Oct 09 16:36:41 compute-0 nova_compute[117331]:       <address type="usb" bus="0" port="1"/>
Oct 09 16:36:41 compute-0 nova_compute[117331]:     </input>
Oct 09 16:36:41 compute-0 nova_compute[117331]:     <input type="mouse" bus="ps2"/>
Oct 09 16:36:41 compute-0 nova_compute[117331]:     <graphics type="vnc" port="-1" autoport="yes" listen="::">
Oct 09 16:36:41 compute-0 nova_compute[117331]:       <listen type="address" address="::"/>
Oct 09 16:36:41 compute-0 nova_compute[117331]:     </graphics>
Oct 09 16:36:41 compute-0 nova_compute[117331]:     <video>
Oct 09 16:36:41 compute-0 nova_compute[117331]:       <model type="virtio" heads="1" primary="yes"/>
Oct 09 16:36:41 compute-0 nova_compute[117331]:       <address type="pci" domain="0x0000" bus="0x00" slot="0x01" function="0x0"/>
Oct 09 16:36:41 compute-0 nova_compute[117331]:     </video>
Oct 09 16:36:41 compute-0 nova_compute[117331]:     <memballoon model="virtio" autodeflate="on" freePageReporting="on">
Oct 09 16:36:41 compute-0 nova_compute[117331]:       <stats period="10"/>
Oct 09 16:36:41 compute-0 nova_compute[117331]:       <address type="pci" domain="0x0000" bus="0x04" slot="0x00" function="0x0"/>
Oct 09 16:36:41 compute-0 nova_compute[117331]:     </memballoon>
Oct 09 16:36:41 compute-0 nova_compute[117331]:     <rng model="virtio">
Oct 09 16:36:41 compute-0 nova_compute[117331]:       <backend model="random">/dev/urandom</backend>
Oct 09 16:36:41 compute-0 nova_compute[117331]:       <address type="pci" domain="0x0000" bus="0x05" slot="0x00" function="0x0"/>
Oct 09 16:36:41 compute-0 nova_compute[117331]:     </rng>
Oct 09 16:36:41 compute-0 nova_compute[117331]:   </devices>
Oct 09 16:36:41 compute-0 nova_compute[117331]:   <seclabel type="dynamic" model="selinux" relabel="yes"/>
Oct 09 16:36:41 compute-0 nova_compute[117331]: </domain>
Oct 09 16:36:41 compute-0 nova_compute[117331]:  _remove_cpu_shared_set_xml /usr/lib/python3.12/site-packages/nova/virt/libvirt/migration.py:250
Oct 09 16:36:41 compute-0 nova_compute[117331]: 2025-10-09 16:36:41.471 2 DEBUG nova.virt.libvirt.migration [None req-26b0179c-bd04-4969-b6b1-57aad3a406cd 005258eefa5345409efe13f6a1de22ab c076c42271004e7aa86e84416e33f826 - - default default] _update_pci_xml output xml=<domain type="kvm">
Oct 09 16:36:41 compute-0 nova_compute[117331]:   <name>instance-0000001b</name>
Oct 09 16:36:41 compute-0 nova_compute[117331]:   <uuid>036b9f54-5759-43e6-9666-ec39d96c1729</uuid>
Oct 09 16:36:41 compute-0 nova_compute[117331]:   <metadata>
Oct 09 16:36:41 compute-0 nova_compute[117331]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Oct 09 16:36:41 compute-0 nova_compute[117331]:       <nova:package version="32.1.0-0.20251008143712.076498e.el10"/>
Oct 09 16:36:41 compute-0 nova_compute[117331]:       <nova:name>tempest-TestExecuteWorkloadStabilizationStrategy-server-1046217673</nova:name>
Oct 09 16:36:41 compute-0 nova_compute[117331]:       <nova:creationTime>2025-10-09 16:35:52</nova:creationTime>
Oct 09 16:36:41 compute-0 nova_compute[117331]:       <nova:flavor name="m1.nano" id="5aeb5bf9-70f3-4ab6-b418-2c3319a7d2b3">
Oct 09 16:36:41 compute-0 nova_compute[117331]:         <nova:memory>128</nova:memory>
Oct 09 16:36:41 compute-0 nova_compute[117331]:         <nova:disk>1</nova:disk>
Oct 09 16:36:41 compute-0 nova_compute[117331]:         <nova:swap>0</nova:swap>
Oct 09 16:36:41 compute-0 nova_compute[117331]:         <nova:ephemeral>0</nova:ephemeral>
Oct 09 16:36:41 compute-0 nova_compute[117331]:         <nova:vcpus>1</nova:vcpus>
Oct 09 16:36:41 compute-0 nova_compute[117331]:         <nova:extraSpecs>
Oct 09 16:36:41 compute-0 nova_compute[117331]:           <nova:extraSpec name="hw_rng:allowed">True</nova:extraSpec>
Oct 09 16:36:41 compute-0 nova_compute[117331]:         </nova:extraSpecs>
Oct 09 16:36:41 compute-0 nova_compute[117331]:       </nova:flavor>
Oct 09 16:36:41 compute-0 nova_compute[117331]:       <nova:image uuid="b7d6e0af-25e4-4227-9dc6-43143898ceee">
Oct 09 16:36:41 compute-0 nova_compute[117331]:         <nova:containerFormat>bare</nova:containerFormat>
Oct 09 16:36:41 compute-0 nova_compute[117331]:         <nova:diskFormat>qcow2</nova:diskFormat>
Oct 09 16:36:41 compute-0 nova_compute[117331]:         <nova:minDisk>1</nova:minDisk>
Oct 09 16:36:41 compute-0 nova_compute[117331]:         <nova:minRam>0</nova:minRam>
Oct 09 16:36:41 compute-0 nova_compute[117331]:         <nova:properties>
Oct 09 16:36:41 compute-0 nova_compute[117331]:           <nova:property name="hw_rng_model">virtio</nova:property>
Oct 09 16:36:41 compute-0 nova_compute[117331]:         </nova:properties>
Oct 09 16:36:41 compute-0 nova_compute[117331]:       </nova:image>
Oct 09 16:36:41 compute-0 nova_compute[117331]:       <nova:owner>
Oct 09 16:36:41 compute-0 nova_compute[117331]:         <nova:user uuid="685b4924c5a04af7ae6f4a328bb50f14">tempest-TestExecuteWorkloadStabilizationStrategy-1000021673-project-admin</nova:user>
Oct 09 16:36:41 compute-0 nova_compute[117331]:         <nova:project uuid="a60acfe52e4b4b7f912654a59f0978b7">tempest-TestExecuteWorkloadStabilizationStrategy-1000021673</nova:project>
Oct 09 16:36:41 compute-0 nova_compute[117331]:       </nova:owner>
Oct 09 16:36:41 compute-0 nova_compute[117331]:       <nova:root type="image" uuid="b7d6e0af-25e4-4227-9dc6-43143898ceee"/>
Oct 09 16:36:41 compute-0 nova_compute[117331]:       <nova:ports>
Oct 09 16:36:41 compute-0 nova_compute[117331]:         <nova:port uuid="19fbb24b-757a-4067-a3e1-7a2c326cb886">
Oct 09 16:36:41 compute-0 nova_compute[117331]:           <nova:ip type="fixed" address="10.100.0.4" ipVersion="4"/>
Oct 09 16:36:41 compute-0 nova_compute[117331]:         </nova:port>
Oct 09 16:36:41 compute-0 nova_compute[117331]:       </nova:ports>
Oct 09 16:36:41 compute-0 nova_compute[117331]:     </nova:instance>
Oct 09 16:36:41 compute-0 nova_compute[117331]:   </metadata>
Oct 09 16:36:41 compute-0 nova_compute[117331]:   <memory unit="KiB">131072</memory>
Oct 09 16:36:41 compute-0 nova_compute[117331]:   <currentMemory unit="KiB">131072</currentMemory>
Oct 09 16:36:41 compute-0 nova_compute[117331]:   <vcpu placement="static">1</vcpu>
Oct 09 16:36:41 compute-0 nova_compute[117331]:   <resource>
Oct 09 16:36:41 compute-0 nova_compute[117331]:     <partition>/machine</partition>
Oct 09 16:36:41 compute-0 nova_compute[117331]:   </resource>
Oct 09 16:36:41 compute-0 nova_compute[117331]:   <sysinfo type="smbios">
Oct 09 16:36:41 compute-0 nova_compute[117331]:     <system>
Oct 09 16:36:41 compute-0 nova_compute[117331]:       <entry name="manufacturer">RDO</entry>
Oct 09 16:36:41 compute-0 nova_compute[117331]:       <entry name="product">OpenStack Compute</entry>
Oct 09 16:36:41 compute-0 nova_compute[117331]:       <entry name="version">32.1.0-0.20251008143712.076498e.el10</entry>
Oct 09 16:36:41 compute-0 nova_compute[117331]:       <entry name="serial">036b9f54-5759-43e6-9666-ec39d96c1729</entry>
Oct 09 16:36:41 compute-0 nova_compute[117331]:       <entry name="uuid">036b9f54-5759-43e6-9666-ec39d96c1729</entry>
Oct 09 16:36:41 compute-0 nova_compute[117331]:       <entry name="family">Virtual Machine</entry>
Oct 09 16:36:41 compute-0 nova_compute[117331]:     </system>
Oct 09 16:36:41 compute-0 nova_compute[117331]:   </sysinfo>
Oct 09 16:36:41 compute-0 nova_compute[117331]:   <os>
Oct 09 16:36:41 compute-0 nova_compute[117331]:     <type arch="x86_64" machine="pc-q35-rhel9.6.0">hvm</type>
Oct 09 16:36:41 compute-0 nova_compute[117331]:     <boot dev="hd"/>
Oct 09 16:36:41 compute-0 nova_compute[117331]:     <smbios mode="sysinfo"/>
Oct 09 16:36:41 compute-0 nova_compute[117331]:   </os>
Oct 09 16:36:41 compute-0 nova_compute[117331]:   <features>
Oct 09 16:36:41 compute-0 nova_compute[117331]:     <acpi/>
Oct 09 16:36:41 compute-0 nova_compute[117331]:     <apic/>
Oct 09 16:36:41 compute-0 nova_compute[117331]:     <vmcoreinfo state="on"/>
Oct 09 16:36:41 compute-0 nova_compute[117331]:   </features>
Oct 09 16:36:41 compute-0 nova_compute[117331]:   <cpu mode="host-model" check="partial">
Oct 09 16:36:41 compute-0 nova_compute[117331]:     <topology sockets="1" dies="1" clusters="1" cores="1" threads="1"/>
Oct 09 16:36:41 compute-0 nova_compute[117331]:   </cpu>
Oct 09 16:36:41 compute-0 nova_compute[117331]:   <clock offset="utc">
Oct 09 16:36:41 compute-0 nova_compute[117331]:     <timer name="pit" tickpolicy="delay"/>
Oct 09 16:36:41 compute-0 nova_compute[117331]:     <timer name="rtc" tickpolicy="catchup"/>
Oct 09 16:36:41 compute-0 nova_compute[117331]:     <timer name="hpet" present="no"/>
Oct 09 16:36:41 compute-0 nova_compute[117331]:   </clock>
Oct 09 16:36:41 compute-0 nova_compute[117331]:   <on_poweroff>destroy</on_poweroff>
Oct 09 16:36:41 compute-0 nova_compute[117331]:   <on_reboot>restart</on_reboot>
Oct 09 16:36:41 compute-0 nova_compute[117331]:   <on_crash>destroy</on_crash>
Oct 09 16:36:41 compute-0 nova_compute[117331]:   <devices>
Oct 09 16:36:41 compute-0 nova_compute[117331]:     <emulator>/usr/libexec/qemu-kvm</emulator>
Oct 09 16:36:41 compute-0 nova_compute[117331]:     <disk type="file" device="disk">
Oct 09 16:36:41 compute-0 nova_compute[117331]:       <driver name="qemu" type="qcow2" cache="none"/>
Oct 09 16:36:41 compute-0 nova_compute[117331]:       <source file="/var/lib/nova/instances/036b9f54-5759-43e6-9666-ec39d96c1729/disk"/>
Oct 09 16:36:41 compute-0 nova_compute[117331]:       <target dev="vda" bus="virtio"/>
Oct 09 16:36:41 compute-0 nova_compute[117331]:       <address type="pci" domain="0x0000" bus="0x03" slot="0x00" function="0x0"/>
Oct 09 16:36:41 compute-0 nova_compute[117331]:     </disk>
Oct 09 16:36:41 compute-0 nova_compute[117331]:     <disk type="file" device="cdrom">
Oct 09 16:36:41 compute-0 nova_compute[117331]:       <driver name="qemu" type="raw" cache="none"/>
Oct 09 16:36:41 compute-0 nova_compute[117331]:       <source file="/var/lib/nova/instances/036b9f54-5759-43e6-9666-ec39d96c1729/disk.config"/>
Oct 09 16:36:41 compute-0 nova_compute[117331]:       <target dev="sda" bus="sata"/>
Oct 09 16:36:41 compute-0 nova_compute[117331]:       <readonly/>
Oct 09 16:36:41 compute-0 nova_compute[117331]:       <address type="drive" controller="0" bus="0" target="0" unit="0"/>
Oct 09 16:36:41 compute-0 nova_compute[117331]:     </disk>
Oct 09 16:36:41 compute-0 nova_compute[117331]:     <controller type="pci" index="0" model="pcie-root"/>
Oct 09 16:36:41 compute-0 nova_compute[117331]:     <controller type="pci" index="1" model="pcie-root-port">
Oct 09 16:36:41 compute-0 nova_compute[117331]:       <model name="pcie-root-port"/>
Oct 09 16:36:41 compute-0 nova_compute[117331]:       <target chassis="1" port="0x10"/>
Oct 09 16:36:41 compute-0 nova_compute[117331]:       <address type="pci" domain="0x0000" bus="0x00" slot="0x02" function="0x0" multifunction="on"/>
Oct 09 16:36:41 compute-0 nova_compute[117331]:     </controller>
Oct 09 16:36:41 compute-0 nova_compute[117331]:     <controller type="pci" index="2" model="pcie-root-port">
Oct 09 16:36:41 compute-0 nova_compute[117331]:       <model name="pcie-root-port"/>
Oct 09 16:36:41 compute-0 nova_compute[117331]:       <target chassis="2" port="0x11"/>
Oct 09 16:36:41 compute-0 nova_compute[117331]:       <address type="pci" domain="0x0000" bus="0x00" slot="0x02" function="0x1"/>
Oct 09 16:36:41 compute-0 nova_compute[117331]:     </controller>
Oct 09 16:36:41 compute-0 nova_compute[117331]:     <controller type="pci" index="3" model="pcie-root-port">
Oct 09 16:36:41 compute-0 nova_compute[117331]:       <model name="pcie-root-port"/>
Oct 09 16:36:41 compute-0 nova_compute[117331]:       <target chassis="3" port="0x12"/>
Oct 09 16:36:41 compute-0 nova_compute[117331]:       <address type="pci" domain="0x0000" bus="0x00" slot="0x02" function="0x2"/>
Oct 09 16:36:41 compute-0 nova_compute[117331]:     </controller>
Oct 09 16:36:41 compute-0 nova_compute[117331]:     <controller type="pci" index="4" model="pcie-root-port">
Oct 09 16:36:41 compute-0 nova_compute[117331]:       <model name="pcie-root-port"/>
Oct 09 16:36:41 compute-0 nova_compute[117331]:       <target chassis="4" port="0x13"/>
Oct 09 16:36:41 compute-0 nova_compute[117331]:       <address type="pci" domain="0x0000" bus="0x00" slot="0x02" function="0x3"/>
Oct 09 16:36:41 compute-0 nova_compute[117331]:     </controller>
Oct 09 16:36:41 compute-0 nova_compute[117331]:     <controller type="pci" index="5" model="pcie-root-port">
Oct 09 16:36:41 compute-0 nova_compute[117331]:       <model name="pcie-root-port"/>
Oct 09 16:36:41 compute-0 nova_compute[117331]:       <target chassis="5" port="0x14"/>
Oct 09 16:36:41 compute-0 nova_compute[117331]:       <address type="pci" domain="0x0000" bus="0x00" slot="0x02" function="0x4"/>
Oct 09 16:36:41 compute-0 nova_compute[117331]:     </controller>
Oct 09 16:36:41 compute-0 nova_compute[117331]:     <controller type="pci" index="6" model="pcie-root-port">
Oct 09 16:36:41 compute-0 nova_compute[117331]:       <model name="pcie-root-port"/>
Oct 09 16:36:41 compute-0 nova_compute[117331]:       <target chassis="6" port="0x15"/>
Oct 09 16:36:41 compute-0 nova_compute[117331]:       <address type="pci" domain="0x0000" bus="0x00" slot="0x02" function="0x5"/>
Oct 09 16:36:41 compute-0 nova_compute[117331]:     </controller>
Oct 09 16:36:41 compute-0 nova_compute[117331]:     <controller type="pci" index="7" model="pcie-root-port">
Oct 09 16:36:41 compute-0 nova_compute[117331]:       <model name="pcie-root-port"/>
Oct 09 16:36:41 compute-0 nova_compute[117331]:       <target chassis="7" port="0x16"/>
Oct 09 16:36:41 compute-0 nova_compute[117331]:       <address type="pci" domain="0x0000" bus="0x00" slot="0x02" function="0x6"/>
Oct 09 16:36:41 compute-0 nova_compute[117331]:     </controller>
Oct 09 16:36:41 compute-0 nova_compute[117331]:     <controller type="pci" index="8" model="pcie-root-port">
Oct 09 16:36:41 compute-0 nova_compute[117331]:       <model name="pcie-root-port"/>
Oct 09 16:36:41 compute-0 nova_compute[117331]:       <target chassis="8" port="0x17"/>
Oct 09 16:36:41 compute-0 nova_compute[117331]:       <address type="pci" domain="0x0000" bus="0x00" slot="0x02" function="0x7"/>
Oct 09 16:36:41 compute-0 nova_compute[117331]:     </controller>
Oct 09 16:36:41 compute-0 nova_compute[117331]:     <controller type="pci" index="9" model="pcie-root-port">
Oct 09 16:36:41 compute-0 nova_compute[117331]:       <model name="pcie-root-port"/>
Oct 09 16:36:41 compute-0 nova_compute[117331]:       <target chassis="9" port="0x18"/>
Oct 09 16:36:41 compute-0 nova_compute[117331]:       <address type="pci" domain="0x0000" bus="0x00" slot="0x03" function="0x0" multifunction="on"/>
Oct 09 16:36:41 compute-0 nova_compute[117331]:     </controller>
Oct 09 16:36:41 compute-0 nova_compute[117331]:     <controller type="pci" index="10" model="pcie-root-port">
Oct 09 16:36:41 compute-0 nova_compute[117331]:       <model name="pcie-root-port"/>
Oct 09 16:36:41 compute-0 nova_compute[117331]:       <target chassis="10" port="0x19"/>
Oct 09 16:36:41 compute-0 nova_compute[117331]:       <address type="pci" domain="0x0000" bus="0x00" slot="0x03" function="0x1"/>
Oct 09 16:36:41 compute-0 nova_compute[117331]:     </controller>
Oct 09 16:36:41 compute-0 nova_compute[117331]:     <controller type="pci" index="11" model="pcie-root-port">
Oct 09 16:36:41 compute-0 nova_compute[117331]:       <model name="pcie-root-port"/>
Oct 09 16:36:41 compute-0 nova_compute[117331]:       <target chassis="11" port="0x1a"/>
Oct 09 16:36:41 compute-0 nova_compute[117331]:       <address type="pci" domain="0x0000" bus="0x00" slot="0x03" function="0x2"/>
Oct 09 16:36:41 compute-0 nova_compute[117331]:     </controller>
Oct 09 16:36:41 compute-0 nova_compute[117331]:     <controller type="pci" index="12" model="pcie-root-port">
Oct 09 16:36:41 compute-0 nova_compute[117331]:       <model name="pcie-root-port"/>
Oct 09 16:36:41 compute-0 nova_compute[117331]:       <target chassis="12" port="0x1b"/>
Oct 09 16:36:41 compute-0 nova_compute[117331]:       <address type="pci" domain="0x0000" bus="0x00" slot="0x03" function="0x3"/>
Oct 09 16:36:41 compute-0 nova_compute[117331]:     </controller>
Oct 09 16:36:41 compute-0 nova_compute[117331]:     <controller type="pci" index="13" model="pcie-root-port">
Oct 09 16:36:41 compute-0 nova_compute[117331]:       <model name="pcie-root-port"/>
Oct 09 16:36:41 compute-0 nova_compute[117331]:       <target chassis="13" port="0x1c"/>
Oct 09 16:36:41 compute-0 nova_compute[117331]:       <address type="pci" domain="0x0000" bus="0x00" slot="0x03" function="0x4"/>
Oct 09 16:36:41 compute-0 nova_compute[117331]:     </controller>
Oct 09 16:36:41 compute-0 nova_compute[117331]:     <controller type="pci" index="14" model="pcie-root-port">
Oct 09 16:36:41 compute-0 nova_compute[117331]:       <model name="pcie-root-port"/>
Oct 09 16:36:41 compute-0 nova_compute[117331]:       <target chassis="14" port="0x1d"/>
Oct 09 16:36:41 compute-0 nova_compute[117331]:       <address type="pci" domain="0x0000" bus="0x00" slot="0x03" function="0x5"/>
Oct 09 16:36:41 compute-0 nova_compute[117331]:     </controller>
Oct 09 16:36:41 compute-0 nova_compute[117331]:     <controller type="pci" index="15" model="pcie-root-port">
Oct 09 16:36:41 compute-0 nova_compute[117331]:       <model name="pcie-root-port"/>
Oct 09 16:36:41 compute-0 nova_compute[117331]:       <target chassis="15" port="0x1e"/>
Oct 09 16:36:41 compute-0 nova_compute[117331]:       <address type="pci" domain="0x0000" bus="0x00" slot="0x03" function="0x6"/>
Oct 09 16:36:41 compute-0 nova_compute[117331]:     </controller>
Oct 09 16:36:41 compute-0 nova_compute[117331]:     <controller type="pci" index="16" model="pcie-root-port">
Oct 09 16:36:41 compute-0 nova_compute[117331]:       <model name="pcie-root-port"/>
Oct 09 16:36:41 compute-0 nova_compute[117331]:       <target chassis="16" port="0x1f"/>
Oct 09 16:36:41 compute-0 nova_compute[117331]:       <address type="pci" domain="0x0000" bus="0x00" slot="0x03" function="0x7"/>
Oct 09 16:36:41 compute-0 nova_compute[117331]:     </controller>
Oct 09 16:36:41 compute-0 nova_compute[117331]:     <controller type="pci" index="17" model="pcie-root-port">
Oct 09 16:36:41 compute-0 nova_compute[117331]:       <model name="pcie-root-port"/>
Oct 09 16:36:41 compute-0 nova_compute[117331]:       <target chassis="17" port="0x20"/>
Oct 09 16:36:41 compute-0 nova_compute[117331]:       <address type="pci" domain="0x0000" bus="0x00" slot="0x04" function="0x0" multifunction="on"/>
Oct 09 16:36:41 compute-0 nova_compute[117331]:     </controller>
Oct 09 16:36:41 compute-0 nova_compute[117331]:     <controller type="pci" index="18" model="pcie-root-port">
Oct 09 16:36:41 compute-0 nova_compute[117331]:       <model name="pcie-root-port"/>
Oct 09 16:36:41 compute-0 nova_compute[117331]:       <target chassis="18" port="0x21"/>
Oct 09 16:36:41 compute-0 nova_compute[117331]:       <address type="pci" domain="0x0000" bus="0x00" slot="0x04" function="0x1"/>
Oct 09 16:36:41 compute-0 nova_compute[117331]:     </controller>
Oct 09 16:36:41 compute-0 nova_compute[117331]:     <controller type="pci" index="19" model="pcie-root-port">
Oct 09 16:36:41 compute-0 nova_compute[117331]:       <model name="pcie-root-port"/>
Oct 09 16:36:41 compute-0 nova_compute[117331]:       <target chassis="19" port="0x22"/>
Oct 09 16:36:41 compute-0 nova_compute[117331]:       <address type="pci" domain="0x0000" bus="0x00" slot="0x04" function="0x2"/>
Oct 09 16:36:41 compute-0 nova_compute[117331]:     </controller>
Oct 09 16:36:41 compute-0 nova_compute[117331]:     <controller type="pci" index="20" model="pcie-root-port">
Oct 09 16:36:41 compute-0 nova_compute[117331]:       <model name="pcie-root-port"/>
Oct 09 16:36:41 compute-0 nova_compute[117331]:       <target chassis="20" port="0x23"/>
Oct 09 16:36:41 compute-0 nova_compute[117331]:       <address type="pci" domain="0x0000" bus="0x00" slot="0x04" function="0x3"/>
Oct 09 16:36:41 compute-0 nova_compute[117331]:     </controller>
Oct 09 16:36:41 compute-0 nova_compute[117331]:     <controller type="pci" index="21" model="pcie-root-port">
Oct 09 16:36:41 compute-0 nova_compute[117331]:       <model name="pcie-root-port"/>
Oct 09 16:36:41 compute-0 nova_compute[117331]:       <target chassis="21" port="0x24"/>
Oct 09 16:36:41 compute-0 nova_compute[117331]:       <address type="pci" domain="0x0000" bus="0x00" slot="0x04" function="0x4"/>
Oct 09 16:36:41 compute-0 nova_compute[117331]:     </controller>
Oct 09 16:36:41 compute-0 nova_compute[117331]:     <controller type="pci" index="22" model="pcie-root-port">
Oct 09 16:36:41 compute-0 nova_compute[117331]:       <model name="pcie-root-port"/>
Oct 09 16:36:41 compute-0 nova_compute[117331]:       <target chassis="22" port="0x25"/>
Oct 09 16:36:41 compute-0 nova_compute[117331]:       <address type="pci" domain="0x0000" bus="0x00" slot="0x04" function="0x5"/>
Oct 09 16:36:41 compute-0 nova_compute[117331]:     </controller>
Oct 09 16:36:41 compute-0 nova_compute[117331]:     <controller type="pci" index="23" model="pcie-root-port">
Oct 09 16:36:41 compute-0 nova_compute[117331]:       <model name="pcie-root-port"/>
Oct 09 16:36:41 compute-0 nova_compute[117331]:       <target chassis="23" port="0x26"/>
Oct 09 16:36:41 compute-0 nova_compute[117331]:       <address type="pci" domain="0x0000" bus="0x00" slot="0x04" function="0x6"/>
Oct 09 16:36:41 compute-0 nova_compute[117331]:     </controller>
Oct 09 16:36:41 compute-0 nova_compute[117331]:     <controller type="pci" index="24" model="pcie-root-port">
Oct 09 16:36:41 compute-0 nova_compute[117331]:       <model name="pcie-root-port"/>
Oct 09 16:36:41 compute-0 nova_compute[117331]:       <target chassis="24" port="0x27"/>
Oct 09 16:36:41 compute-0 nova_compute[117331]:       <address type="pci" domain="0x0000" bus="0x00" slot="0x04" function="0x7"/>
Oct 09 16:36:41 compute-0 nova_compute[117331]:     </controller>
Oct 09 16:36:41 compute-0 nova_compute[117331]:     <controller type="pci" index="25" model="pcie-root-port">
Oct 09 16:36:41 compute-0 nova_compute[117331]:       <model name="pcie-root-port"/>
Oct 09 16:36:41 compute-0 nova_compute[117331]:       <target chassis="25" port="0x28"/>
Oct 09 16:36:41 compute-0 nova_compute[117331]:       <address type="pci" domain="0x0000" bus="0x00" slot="0x05" function="0x0"/>
Oct 09 16:36:41 compute-0 nova_compute[117331]:     </controller>
Oct 09 16:36:41 compute-0 nova_compute[117331]:     <controller type="pci" index="26" model="pcie-to-pci-bridge">
Oct 09 16:36:41 compute-0 nova_compute[117331]:       <model name="pcie-pci-bridge"/>
Oct 09 16:36:41 compute-0 nova_compute[117331]:       <address type="pci" domain="0x0000" bus="0x01" slot="0x00" function="0x0"/>
Oct 09 16:36:41 compute-0 nova_compute[117331]:     </controller>
Oct 09 16:36:41 compute-0 nova_compute[117331]:     <controller type="usb" index="0" model="piix3-uhci">
Oct 09 16:36:41 compute-0 nova_compute[117331]:       <address type="pci" domain="0x0000" bus="0x1a" slot="0x01" function="0x0"/>
Oct 09 16:36:41 compute-0 nova_compute[117331]:     </controller>
Oct 09 16:36:41 compute-0 nova_compute[117331]:     <controller type="sata" index="0">
Oct 09 16:36:41 compute-0 nova_compute[117331]:       <address type="pci" domain="0x0000" bus="0x00" slot="0x1f" function="0x2"/>
Oct 09 16:36:41 compute-0 nova_compute[117331]:     </controller>
Oct 09 16:36:41 compute-0 nova_compute[117331]:     <interface type="ethernet">
Oct 09 16:36:41 compute-0 nova_compute[117331]:       <mac address="fa:16:3e:60:6b:b1"/>
Oct 09 16:36:41 compute-0 nova_compute[117331]:       <model type="virtio"/>
Oct 09 16:36:41 compute-0 nova_compute[117331]:       <driver name="vhost" rx_queue_size="512"/>
Oct 09 16:36:41 compute-0 nova_compute[117331]:       <mtu size="1442"/>
Oct 09 16:36:41 compute-0 nova_compute[117331]:       <target dev="tap19fbb24b-75"/>
Oct 09 16:36:41 compute-0 nova_compute[117331]:       <address type="pci" domain="0x0000" bus="0x02" slot="0x00" function="0x0"/>
Oct 09 16:36:41 compute-0 nova_compute[117331]:     </interface>
Oct 09 16:36:41 compute-0 nova_compute[117331]:     <serial type="pty">
Oct 09 16:36:41 compute-0 nova_compute[117331]:       <log file="/var/lib/nova/instances/036b9f54-5759-43e6-9666-ec39d96c1729/console.log" append="off"/>
Oct 09 16:36:41 compute-0 nova_compute[117331]:       <target type="isa-serial" port="0">
Oct 09 16:36:41 compute-0 nova_compute[117331]:         <model name="isa-serial"/>
Oct 09 16:36:41 compute-0 nova_compute[117331]:       </target>
Oct 09 16:36:41 compute-0 nova_compute[117331]:     </serial>
Oct 09 16:36:41 compute-0 nova_compute[117331]:     <console type="pty">
Oct 09 16:36:41 compute-0 nova_compute[117331]:       <log file="/var/lib/nova/instances/036b9f54-5759-43e6-9666-ec39d96c1729/console.log" append="off"/>
Oct 09 16:36:41 compute-0 nova_compute[117331]:       <target type="serial" port="0"/>
Oct 09 16:36:41 compute-0 nova_compute[117331]:     </console>
Oct 09 16:36:41 compute-0 nova_compute[117331]:     <input type="tablet" bus="usb">
Oct 09 16:36:41 compute-0 nova_compute[117331]:       <address type="usb" bus="0" port="1"/>
Oct 09 16:36:41 compute-0 nova_compute[117331]:     </input>
Oct 09 16:36:41 compute-0 nova_compute[117331]:     <input type="mouse" bus="ps2"/>
Oct 09 16:36:41 compute-0 nova_compute[117331]:     <graphics type="vnc" port="-1" autoport="yes" listen="::">
Oct 09 16:36:41 compute-0 nova_compute[117331]:       <listen type="address" address="::"/>
Oct 09 16:36:41 compute-0 nova_compute[117331]:     </graphics>
Oct 09 16:36:41 compute-0 nova_compute[117331]:     <video>
Oct 09 16:36:41 compute-0 nova_compute[117331]:       <model type="virtio" heads="1" primary="yes"/>
Oct 09 16:36:41 compute-0 nova_compute[117331]:       <address type="pci" domain="0x0000" bus="0x00" slot="0x01" function="0x0"/>
Oct 09 16:36:41 compute-0 nova_compute[117331]:     </video>
Oct 09 16:36:41 compute-0 nova_compute[117331]:     <memballoon model="virtio" autodeflate="on" freePageReporting="on">
Oct 09 16:36:41 compute-0 nova_compute[117331]:       <stats period="10"/>
Oct 09 16:36:41 compute-0 nova_compute[117331]:       <address type="pci" domain="0x0000" bus="0x04" slot="0x00" function="0x0"/>
Oct 09 16:36:41 compute-0 nova_compute[117331]:     </memballoon>
Oct 09 16:36:41 compute-0 nova_compute[117331]:     <rng model="virtio">
Oct 09 16:36:41 compute-0 nova_compute[117331]:       <backend model="random">/dev/urandom</backend>
Oct 09 16:36:41 compute-0 nova_compute[117331]:       <address type="pci" domain="0x0000" bus="0x05" slot="0x00" function="0x0"/>
Oct 09 16:36:41 compute-0 nova_compute[117331]:     </rng>
Oct 09 16:36:41 compute-0 nova_compute[117331]:   </devices>
Oct 09 16:36:41 compute-0 nova_compute[117331]:   <seclabel type="dynamic" model="selinux" relabel="yes"/>
Oct 09 16:36:41 compute-0 nova_compute[117331]: </domain>
Oct 09 16:36:41 compute-0 nova_compute[117331]:  _update_pci_dev_xml /usr/lib/python3.12/site-packages/nova/virt/libvirt/migration.py:166
Oct 09 16:36:41 compute-0 nova_compute[117331]: 2025-10-09 16:36:41.471 2 DEBUG nova.virt.libvirt.driver [None req-26b0179c-bd04-4969-b6b1-57aad3a406cd 005258eefa5345409efe13f6a1de22ab c076c42271004e7aa86e84416e33f826 - - default default] [instance: 036b9f54-5759-43e6-9666-ec39d96c1729] About to invoke the migrate API _live_migration_operation /usr/lib/python3.12/site-packages/nova/virt/libvirt/driver.py:11175
Oct 09 16:36:41 compute-0 nova_compute[117331]: 2025-10-09 16:36:41.936 2 DEBUG nova.virt.libvirt.migration [None req-26b0179c-bd04-4969-b6b1-57aad3a406cd 005258eefa5345409efe13f6a1de22ab c076c42271004e7aa86e84416e33f826 - - default default] [instance: 036b9f54-5759-43e6-9666-ec39d96c1729] Current None elapsed 1 steps [(0, 50), (300, 95), (600, 140), (900, 185), (1200, 230), (1500, 275), (1800, 320), (2100, 365), (2400, 410), (2700, 455), (3000, 500)] update_downtime /usr/lib/python3.12/site-packages/nova/virt/libvirt/migration.py:658
Oct 09 16:36:41 compute-0 nova_compute[117331]: 2025-10-09 16:36:41.937 2 INFO nova.virt.libvirt.migration [None req-26b0179c-bd04-4969-b6b1-57aad3a406cd 005258eefa5345409efe13f6a1de22ab c076c42271004e7aa86e84416e33f826 - - default default] [instance: 036b9f54-5759-43e6-9666-ec39d96c1729] Increasing downtime to 50 ms after 0 sec elapsed time
Oct 09 16:36:42 compute-0 nova_compute[117331]: 2025-10-09 16:36:42.284 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 09 16:36:42 compute-0 nova_compute[117331]: 2025-10-09 16:36:42.832 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 09 16:36:42 compute-0 nova_compute[117331]: 2025-10-09 16:36:42.972 2 INFO nova.virt.libvirt.driver [None req-26b0179c-bd04-4969-b6b1-57aad3a406cd 005258eefa5345409efe13f6a1de22ab c076c42271004e7aa86e84416e33f826 - - default default] [instance: 036b9f54-5759-43e6-9666-ec39d96c1729] Migration running for 1 secs, memory 100% remaining (bytes processed=0, remaining=0, total=0); disk 100% remaining (bytes processed=0, remaining=0, total=0).
Oct 09 16:36:43 compute-0 nova_compute[117331]: 2025-10-09 16:36:43.476 2 DEBUG nova.virt.libvirt.migration [None req-26b0179c-bd04-4969-b6b1-57aad3a406cd 005258eefa5345409efe13f6a1de22ab c076c42271004e7aa86e84416e33f826 - - default default] [instance: 036b9f54-5759-43e6-9666-ec39d96c1729] Current 50 elapsed 2 steps [(0, 50), (300, 95), (600, 140), (900, 185), (1200, 230), (1500, 275), (1800, 320), (2100, 365), (2400, 410), (2700, 455), (3000, 500)] update_downtime /usr/lib/python3.12/site-packages/nova/virt/libvirt/migration.py:658
Oct 09 16:36:43 compute-0 nova_compute[117331]: 2025-10-09 16:36:43.478 2 DEBUG nova.virt.libvirt.migration [None req-26b0179c-bd04-4969-b6b1-57aad3a406cd 005258eefa5345409efe13f6a1de22ab c076c42271004e7aa86e84416e33f826 - - default default] [instance: 036b9f54-5759-43e6-9666-ec39d96c1729] Downtime does not need to change update_downtime /usr/lib/python3.12/site-packages/nova/virt/libvirt/migration.py:671
Oct 09 16:36:44 compute-0 kernel: tap19fbb24b-75 (unregistering): left promiscuous mode
Oct 09 16:36:44 compute-0 NetworkManager[1028]: <info>  [1760027804.0108] device (tap19fbb24b-75): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Oct 09 16:36:44 compute-0 ovn_controller[19752]: 2025-10-09T16:36:44Z|00246|binding|INFO|Releasing lport 19fbb24b-757a-4067-a3e1-7a2c326cb886 from this chassis (sb_readonly=0)
Oct 09 16:36:44 compute-0 ovn_controller[19752]: 2025-10-09T16:36:44Z|00247|binding|INFO|Setting lport 19fbb24b-757a-4067-a3e1-7a2c326cb886 down in Southbound
Oct 09 16:36:44 compute-0 nova_compute[117331]: 2025-10-09 16:36:44.019 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 09 16:36:44 compute-0 ovn_controller[19752]: 2025-10-09T16:36:44Z|00248|binding|INFO|Removing iface tap19fbb24b-75 ovn-installed in OVS
Oct 09 16:36:44 compute-0 nova_compute[117331]: 2025-10-09 16:36:44.022 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 09 16:36:44 compute-0 ovn_metadata_agent[28608]: 2025-10-09 16:36:44.035 28613 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:60:6b:b1 10.100.0.4'], port_security=['fa:16:3e:60:6b:b1 10.100.0.4'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com,compute-1.ctlplane.example.com', 'activation-strategy': 'rarp', 'additional-chassis-activated': '2bd8bf21-1f6b-42c9-9656-9a72fa8dcbf3'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.4/28', 'neutron:device_id': '036b9f54-5759-43e6-9666-ec39d96c1729', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-cf3aa351-4d26-41f3-8cb5-1ff2d3d995c8', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'a60acfe52e4b4b7f912654a59f0978b7', 'neutron:revision_number': '10', 'neutron:security_group_ids': 'c47ae156-dc3a-4eb3-8eb2-126be8ba5497', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=f19947fc-c2ef-4762-adfc-471bb24a038d, chassis=[], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f37b2576840>], logical_port=19fbb24b-757a-4067-a3e1-7a2c326cb886) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f37b2576840>]) matches /usr/lib/python3.12/site-packages/ovsdbapp/backend/ovs_idl/event.py:55
Oct 09 16:36:44 compute-0 ovn_metadata_agent[28608]: 2025-10-09 16:36:44.036 28613 INFO neutron.agent.ovn.metadata.agent [-] Port 19fbb24b-757a-4067-a3e1-7a2c326cb886 in datapath cf3aa351-4d26-41f3-8cb5-1ff2d3d995c8 unbound from our chassis
Oct 09 16:36:44 compute-0 nova_compute[117331]: 2025-10-09 16:36:44.037 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 09 16:36:44 compute-0 ovn_metadata_agent[28608]: 2025-10-09 16:36:44.038 28613 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network cf3aa351-4d26-41f3-8cb5-1ff2d3d995c8
Oct 09 16:36:44 compute-0 ovn_metadata_agent[28608]: 2025-10-09 16:36:44.065 139687 DEBUG oslo.privsep.daemon [-] privsep: reply[39e06c25-ac3a-4afd-ae90-901ad97e2a65]: (4, True) _call_back /usr/lib/python3.12/site-packages/oslo_privsep/daemon.py:515
Oct 09 16:36:44 compute-0 systemd[1]: machine-qemu\x2d21\x2dinstance\x2d0000001b.scope: Deactivated successfully.
Oct 09 16:36:44 compute-0 systemd[1]: machine-qemu\x2d21\x2dinstance\x2d0000001b.scope: Consumed 14.027s CPU time.
Oct 09 16:36:44 compute-0 systemd-machined[77487]: Machine qemu-21-instance-0000001b terminated.
Oct 09 16:36:44 compute-0 ovn_metadata_agent[28608]: 2025-10-09 16:36:44.106 141048 DEBUG oslo.privsep.daemon [-] privsep: reply[7b657692-9cec-4009-b5d1-9794f23a7799]: (4, None) _call_back /usr/lib/python3.12/site-packages/oslo_privsep/daemon.py:515
Oct 09 16:36:44 compute-0 ovn_metadata_agent[28608]: 2025-10-09 16:36:44.110 141048 DEBUG oslo.privsep.daemon [-] privsep: reply[89879068-37d4-4a6b-baf1-5863c40ec5c5]: (4, None) _call_back /usr/lib/python3.12/site-packages/oslo_privsep/daemon.py:515
Oct 09 16:36:44 compute-0 ovn_metadata_agent[28608]: 2025-10-09 16:36:44.152 141048 DEBUG oslo.privsep.daemon [-] privsep: reply[917f50ab-0045-402f-9953-fbaab047f810]: (4, None) _call_back /usr/lib/python3.12/site-packages/oslo_privsep/daemon.py:515
Oct 09 16:36:44 compute-0 ovn_metadata_agent[28608]: 2025-10-09 16:36:44.174 139687 DEBUG oslo.privsep.daemon [-] privsep: reply[2d8a4f99-904f-4f86-a9b0-979e7f5eb150]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapcf3aa351-41'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:2a:10:2c'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 12, 'tx_packets': 7, 'rx_bytes': 1000, 'tx_bytes': 438, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 12, 'tx_packets': 7, 'rx_bytes': 1000, 'tx_bytes': 438, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 68], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 268036, 'reachable_time': 31043, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 8, 'inoctets': 720, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 8, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 720, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 8, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 152292, 'error': None, 'target': 'ovnmeta-cf3aa351-4d26-41f3-8cb5-1ff2d3d995c8', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.12/site-packages/oslo_privsep/daemon.py:515
Oct 09 16:36:44 compute-0 ovn_metadata_agent[28608]: 2025-10-09 16:36:44.194 139687 DEBUG oslo.privsep.daemon [-] privsep: reply[30cdb803-55ac-466a-91d0-617b795762b8]: (4, ({'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tapcf3aa351-41'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 268047, 'tstamp': 268047}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 152293, 'error': None, 'target': 'ovnmeta-cf3aa351-4d26-41f3-8cb5-1ff2d3d995c8', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tapcf3aa351-41'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 268050, 'tstamp': 268050}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 152293, 'error': None, 'target': 'ovnmeta-cf3aa351-4d26-41f3-8cb5-1ff2d3d995c8', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.12/site-packages/oslo_privsep/daemon.py:515
Oct 09 16:36:44 compute-0 ovn_metadata_agent[28608]: 2025-10-09 16:36:44.196 28613 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapcf3aa351-40, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.12/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 09 16:36:44 compute-0 nova_compute[117331]: 2025-10-09 16:36:44.198 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 09 16:36:44 compute-0 systemd-udevd[152284]: Network interface NamePolicy= disabled on kernel command line.
Oct 09 16:36:44 compute-0 NetworkManager[1028]: <info>  [1760027804.2038] manager: (tap19fbb24b-75): new Tun device (/org/freedesktop/NetworkManager/Devices/93)
Oct 09 16:36:44 compute-0 nova_compute[117331]: 2025-10-09 16:36:44.205 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 09 16:36:44 compute-0 ovn_metadata_agent[28608]: 2025-10-09 16:36:44.207 28613 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapcf3aa351-40, may_exist=True, interface_attrs={}) do_commit /usr/lib/python3.12/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 09 16:36:44 compute-0 ovn_metadata_agent[28608]: 2025-10-09 16:36:44.207 28613 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.12/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 09 16:36:44 compute-0 ovn_metadata_agent[28608]: 2025-10-09 16:36:44.208 28613 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapcf3aa351-40, col_values=(('external_ids', {'iface-id': '7d2fd2fc-650d-4abc-8268-a14a8cdfd51e'}),), if_exists=True) do_commit /usr/lib/python3.12/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 09 16:36:44 compute-0 ovn_metadata_agent[28608]: 2025-10-09 16:36:44.208 28613 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.12/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 09 16:36:44 compute-0 ovn_metadata_agent[28608]: 2025-10-09 16:36:44.209 139687 DEBUG oslo.privsep.daemon [-] privsep: reply[884f39a8-7b8a-47b7-915f-aec6e9707cbd]: (4, '\nglobal\n    log         /dev/log local0 debug\n    log-tag     haproxy-metadata-proxy-cf3aa351-4d26-41f3-8cb5-1ff2d3d995c8\n    user        root\n    group       root\n    maxconn     1024\n    pidfile     /var/lib/neutron/external/pids/cf3aa351-4d26-41f3-8cb5-1ff2d3d995c8.pid.haproxy\n    daemon\n\ndefaults\n    log global\n    mode http\n    option httplog\n    option dontlognull\n    option http-server-close\n    option forwardfor\n    retries                 3\n    timeout http-request    30s\n    timeout connect         30s\n    timeout client          32s\n    timeout server          32s\n    timeout http-keep-alive 30s\n\nlisten listener\n    bind 169.254.169.254:80\n    \n    server metadata /var/lib/neutron/metadata_proxy\n\n    http-request add-header X-OVN-Network-ID cf3aa351-4d26-41f3-8cb5-1ff2d3d995c8\n') _call_back /usr/lib/python3.12/site-packages/oslo_privsep/daemon.py:515
Oct 09 16:36:44 compute-0 nova_compute[117331]: 2025-10-09 16:36:44.247 2 DEBUG nova.virt.libvirt.guest [None req-26b0179c-bd04-4969-b6b1-57aad3a406cd 005258eefa5345409efe13f6a1de22ab c076c42271004e7aa86e84416e33f826 - - default default] Domain has shutdown/gone away: Requested operation is not valid: domain is not running get_job_info /usr/lib/python3.12/site-packages/nova/virt/libvirt/guest.py:687
Oct 09 16:36:44 compute-0 nova_compute[117331]: 2025-10-09 16:36:44.248 2 INFO nova.virt.libvirt.driver [None req-26b0179c-bd04-4969-b6b1-57aad3a406cd 005258eefa5345409efe13f6a1de22ab c076c42271004e7aa86e84416e33f826 - - default default] [instance: 036b9f54-5759-43e6-9666-ec39d96c1729] Migration operation has completed
Oct 09 16:36:44 compute-0 nova_compute[117331]: 2025-10-09 16:36:44.248 2 INFO nova.compute.manager [None req-26b0179c-bd04-4969-b6b1-57aad3a406cd 005258eefa5345409efe13f6a1de22ab c076c42271004e7aa86e84416e33f826 - - default default] [instance: 036b9f54-5759-43e6-9666-ec39d96c1729] _post_live_migration() is started..
Oct 09 16:36:44 compute-0 nova_compute[117331]: 2025-10-09 16:36:44.249 2 DEBUG nova.virt.libvirt.driver [None req-26b0179c-bd04-4969-b6b1-57aad3a406cd 005258eefa5345409efe13f6a1de22ab c076c42271004e7aa86e84416e33f826 - - default default] [instance: 036b9f54-5759-43e6-9666-ec39d96c1729] Migrate API has completed _live_migration_operation /usr/lib/python3.12/site-packages/nova/virt/libvirt/driver.py:11182
Oct 09 16:36:44 compute-0 nova_compute[117331]: 2025-10-09 16:36:44.250 2 DEBUG nova.virt.libvirt.driver [None req-26b0179c-bd04-4969-b6b1-57aad3a406cd 005258eefa5345409efe13f6a1de22ab c076c42271004e7aa86e84416e33f826 - - default default] [instance: 036b9f54-5759-43e6-9666-ec39d96c1729] Migration operation thread has finished _live_migration_operation /usr/lib/python3.12/site-packages/nova/virt/libvirt/driver.py:11230
Oct 09 16:36:44 compute-0 nova_compute[117331]: 2025-10-09 16:36:44.250 2 DEBUG nova.virt.libvirt.driver [None req-26b0179c-bd04-4969-b6b1-57aad3a406cd 005258eefa5345409efe13f6a1de22ab c076c42271004e7aa86e84416e33f826 - - default default] [instance: 036b9f54-5759-43e6-9666-ec39d96c1729] Migration operation thread notification thread_finished /usr/lib/python3.12/site-packages/nova/virt/libvirt/driver.py:11533
Oct 09 16:36:44 compute-0 nova_compute[117331]: 2025-10-09 16:36:44.276 2 WARNING neutronclient.v2_0.client [None req-26b0179c-bd04-4969-b6b1-57aad3a406cd 005258eefa5345409efe13f6a1de22ab c076c42271004e7aa86e84416e33f826 - - default default] The python binding code in neutronclient is deprecated in favor of OpenstackSDK, please use that as this will be removed in a future release.
Oct 09 16:36:44 compute-0 nova_compute[117331]: 2025-10-09 16:36:44.276 2 WARNING neutronclient.v2_0.client [None req-26b0179c-bd04-4969-b6b1-57aad3a406cd 005258eefa5345409efe13f6a1de22ab c076c42271004e7aa86e84416e33f826 - - default default] The python binding code in neutronclient is deprecated in favor of OpenstackSDK, please use that as this will be removed in a future release.
Oct 09 16:36:44 compute-0 nova_compute[117331]: 2025-10-09 16:36:44.532 2 DEBUG nova.compute.manager [req-15ddace5-cd0f-4e63-b080-02d7ee144b28 req-cf65a012-09a4-493e-b6cd-e84a646f6cdb ee15961ad2be4bada6c106ac4f69f422 c076c42271004e7aa86e84416e33f826 - - default default] [instance: 036b9f54-5759-43e6-9666-ec39d96c1729] Received event network-vif-unplugged-19fbb24b-757a-4067-a3e1-7a2c326cb886 external_instance_event /usr/lib/python3.12/site-packages/nova/compute/manager.py:11812
Oct 09 16:36:44 compute-0 nova_compute[117331]: 2025-10-09 16:36:44.533 2 DEBUG oslo_concurrency.lockutils [req-15ddace5-cd0f-4e63-b080-02d7ee144b28 req-cf65a012-09a4-493e-b6cd-e84a646f6cdb ee15961ad2be4bada6c106ac4f69f422 c076c42271004e7aa86e84416e33f826 - - default default] Acquiring lock "036b9f54-5759-43e6-9666-ec39d96c1729-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:405
Oct 09 16:36:44 compute-0 nova_compute[117331]: 2025-10-09 16:36:44.533 2 DEBUG oslo_concurrency.lockutils [req-15ddace5-cd0f-4e63-b080-02d7ee144b28 req-cf65a012-09a4-493e-b6cd-e84a646f6cdb ee15961ad2be4bada6c106ac4f69f422 c076c42271004e7aa86e84416e33f826 - - default default] Lock "036b9f54-5759-43e6-9666-ec39d96c1729-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:410
Oct 09 16:36:44 compute-0 nova_compute[117331]: 2025-10-09 16:36:44.534 2 DEBUG oslo_concurrency.lockutils [req-15ddace5-cd0f-4e63-b080-02d7ee144b28 req-cf65a012-09a4-493e-b6cd-e84a646f6cdb ee15961ad2be4bada6c106ac4f69f422 c076c42271004e7aa86e84416e33f826 - - default default] Lock "036b9f54-5759-43e6-9666-ec39d96c1729-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:424
Oct 09 16:36:44 compute-0 nova_compute[117331]: 2025-10-09 16:36:44.534 2 DEBUG nova.compute.manager [req-15ddace5-cd0f-4e63-b080-02d7ee144b28 req-cf65a012-09a4-493e-b6cd-e84a646f6cdb ee15961ad2be4bada6c106ac4f69f422 c076c42271004e7aa86e84416e33f826 - - default default] [instance: 036b9f54-5759-43e6-9666-ec39d96c1729] No waiting events found dispatching network-vif-unplugged-19fbb24b-757a-4067-a3e1-7a2c326cb886 pop_instance_event /usr/lib/python3.12/site-packages/nova/compute/manager.py:344
Oct 09 16:36:44 compute-0 nova_compute[117331]: 2025-10-09 16:36:44.534 2 DEBUG nova.compute.manager [req-15ddace5-cd0f-4e63-b080-02d7ee144b28 req-cf65a012-09a4-493e-b6cd-e84a646f6cdb ee15961ad2be4bada6c106ac4f69f422 c076c42271004e7aa86e84416e33f826 - - default default] [instance: 036b9f54-5759-43e6-9666-ec39d96c1729] Received event network-vif-unplugged-19fbb24b-757a-4067-a3e1-7a2c326cb886 for instance with task_state migrating. _process_instance_event /usr/lib/python3.12/site-packages/nova/compute/manager.py:11590
Oct 09 16:36:44 compute-0 nova_compute[117331]: 2025-10-09 16:36:44.955 2 DEBUG nova.network.neutron [None req-26b0179c-bd04-4969-b6b1-57aad3a406cd 005258eefa5345409efe13f6a1de22ab c076c42271004e7aa86e84416e33f826 - - default default] Activated binding for port 19fbb24b-757a-4067-a3e1-7a2c326cb886 and host compute-1.ctlplane.example.com migrate_instance_start /usr/lib/python3.12/site-packages/nova/network/neutron.py:3241
Oct 09 16:36:44 compute-0 nova_compute[117331]: 2025-10-09 16:36:44.956 2 DEBUG nova.compute.manager [None req-26b0179c-bd04-4969-b6b1-57aad3a406cd 005258eefa5345409efe13f6a1de22ab c076c42271004e7aa86e84416e33f826 - - default default] [instance: 036b9f54-5759-43e6-9666-ec39d96c1729] Calling driver.post_live_migration_at_source with original source VIFs from migrate_data: [{"id": "19fbb24b-757a-4067-a3e1-7a2c326cb886", "address": "fa:16:3e:60:6b:b1", "network": {"id": "cf3aa351-4d26-41f3-8cb5-1ff2d3d995c8", "bridge": "br-int", "label": "tempest-TestExecuteWorkloadStabilizationStrategy-105109230-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "5b6d650a71ca47d58b10f5fd874c4898", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap19fbb24b-75", "ovs_interfaceid": "19fbb24b-757a-4067-a3e1-7a2c326cb886", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] _post_live_migration /usr/lib/python3.12/site-packages/nova/compute/manager.py:10059
Oct 09 16:36:44 compute-0 nova_compute[117331]: 2025-10-09 16:36:44.958 2 DEBUG nova.virt.libvirt.vif [None req-26b0179c-bd04-4969-b6b1-57aad3a406cd 005258eefa5345409efe13f6a1de22ab c076c42271004e7aa86e84416e33f826 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,compute_id=1,config_drive='True',created_at=2025-10-09T16:35:43Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description=None,display_name='tempest-TestExecuteWorkloadStabilizationStrategy-server-1046217673',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testexecuteworkloadstabilizationstrategy-server-1046217',id=27,image_ref='b7d6e0af-25e4-4227-9dc6-43143898ceee',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-10-09T16:35:57Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=1,progress=0,project_id='a60acfe52e4b4b7f912654a59f0978b7',ramdisk_id='',reservation_id='r-r1q6w401',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='admin,member,reader,manager',image_base_image_ref='b7d6e0af-25e4-4227-9dc6-43143898ceee',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw
_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestExecuteWorkloadStabilizationStrategy-1000021673',owner_user_name='tempest-TestExecuteWorkloadStabilizationStrategy-1000021673-project-admin'},tags=<?>,task_state='migrating',terminated_at=None,trusted_certs=<?>,updated_at=2025-10-09T16:36:20Z,user_data=None,user_id='685b4924c5a04af7ae6f4a328bb50f14',uuid=036b9f54-5759-43e6-9666-ec39d96c1729,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "19fbb24b-757a-4067-a3e1-7a2c326cb886", "address": "fa:16:3e:60:6b:b1", "network": {"id": "cf3aa351-4d26-41f3-8cb5-1ff2d3d995c8", "bridge": "br-int", "label": "tempest-TestExecuteWorkloadStabilizationStrategy-105109230-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "5b6d650a71ca47d58b10f5fd874c4898", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap19fbb24b-75", "ovs_interfaceid": "19fbb24b-757a-4067-a3e1-7a2c326cb886", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.12/site-packages/nova/virt/libvirt/vif.py:839
Oct 09 16:36:44 compute-0 nova_compute[117331]: 2025-10-09 16:36:44.958 2 DEBUG nova.network.os_vif_util [None req-26b0179c-bd04-4969-b6b1-57aad3a406cd 005258eefa5345409efe13f6a1de22ab c076c42271004e7aa86e84416e33f826 - - default default] Converting VIF {"id": "19fbb24b-757a-4067-a3e1-7a2c326cb886", "address": "fa:16:3e:60:6b:b1", "network": {"id": "cf3aa351-4d26-41f3-8cb5-1ff2d3d995c8", "bridge": "br-int", "label": "tempest-TestExecuteWorkloadStabilizationStrategy-105109230-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "5b6d650a71ca47d58b10f5fd874c4898", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap19fbb24b-75", "ovs_interfaceid": "19fbb24b-757a-4067-a3e1-7a2c326cb886", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.12/site-packages/nova/network/os_vif_util.py:511
Oct 09 16:36:44 compute-0 nova_compute[117331]: 2025-10-09 16:36:44.959 2 DEBUG nova.network.os_vif_util [None req-26b0179c-bd04-4969-b6b1-57aad3a406cd 005258eefa5345409efe13f6a1de22ab c076c42271004e7aa86e84416e33f826 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:60:6b:b1,bridge_name='br-int',has_traffic_filtering=True,id=19fbb24b-757a-4067-a3e1-7a2c326cb886,network=Network(cf3aa351-4d26-41f3-8cb5-1ff2d3d995c8),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap19fbb24b-75') nova_to_osvif_vif /usr/lib/python3.12/site-packages/nova/network/os_vif_util.py:548
Oct 09 16:36:44 compute-0 nova_compute[117331]: 2025-10-09 16:36:44.960 2 DEBUG os_vif [None req-26b0179c-bd04-4969-b6b1-57aad3a406cd 005258eefa5345409efe13f6a1de22ab c076c42271004e7aa86e84416e33f826 - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:60:6b:b1,bridge_name='br-int',has_traffic_filtering=True,id=19fbb24b-757a-4067-a3e1-7a2c326cb886,network=Network(cf3aa351-4d26-41f3-8cb5-1ff2d3d995c8),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap19fbb24b-75') unplug /usr/lib/python3.12/site-packages/os_vif/__init__.py:109
Oct 09 16:36:44 compute-0 nova_compute[117331]: 2025-10-09 16:36:44.963 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 22 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 09 16:36:44 compute-0 nova_compute[117331]: 2025-10-09 16:36:44.963 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap19fbb24b-75, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.12/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 09 16:36:45 compute-0 nova_compute[117331]: 2025-10-09 16:36:45.010 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 09 16:36:45 compute-0 nova_compute[117331]: 2025-10-09 16:36:45.012 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 09 16:36:45 compute-0 nova_compute[117331]: 2025-10-09 16:36:45.012 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 22 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 09 16:36:45 compute-0 nova_compute[117331]: 2025-10-09 16:36:45.013 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbDestroyCommand(_result=None, table=QoS, record=72834c90-9b8d-4951-a2a2-4f31fd6f1434) do_commit /usr/lib/python3.12/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 09 16:36:45 compute-0 nova_compute[117331]: 2025-10-09 16:36:45.014 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 09 16:36:45 compute-0 nova_compute[117331]: 2025-10-09 16:36:45.015 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 09 16:36:45 compute-0 nova_compute[117331]: 2025-10-09 16:36:45.017 2 INFO os_vif [None req-26b0179c-bd04-4969-b6b1-57aad3a406cd 005258eefa5345409efe13f6a1de22ab c076c42271004e7aa86e84416e33f826 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:60:6b:b1,bridge_name='br-int',has_traffic_filtering=True,id=19fbb24b-757a-4067-a3e1-7a2c326cb886,network=Network(cf3aa351-4d26-41f3-8cb5-1ff2d3d995c8),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap19fbb24b-75')
Oct 09 16:36:45 compute-0 nova_compute[117331]: 2025-10-09 16:36:45.017 2 DEBUG oslo_concurrency.lockutils [None req-26b0179c-bd04-4969-b6b1-57aad3a406cd 005258eefa5345409efe13f6a1de22ab c076c42271004e7aa86e84416e33f826 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.free_pci_device_allocations_for_instance" inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:405
Oct 09 16:36:45 compute-0 nova_compute[117331]: 2025-10-09 16:36:45.018 2 DEBUG oslo_concurrency.lockutils [None req-26b0179c-bd04-4969-b6b1-57aad3a406cd 005258eefa5345409efe13f6a1de22ab c076c42271004e7aa86e84416e33f826 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.free_pci_device_allocations_for_instance" :: waited 0.000s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:410
Oct 09 16:36:45 compute-0 nova_compute[117331]: 2025-10-09 16:36:45.018 2 DEBUG oslo_concurrency.lockutils [None req-26b0179c-bd04-4969-b6b1-57aad3a406cd 005258eefa5345409efe13f6a1de22ab c076c42271004e7aa86e84416e33f826 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.free_pci_device_allocations_for_instance" :: held 0.000s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:424
Oct 09 16:36:45 compute-0 nova_compute[117331]: 2025-10-09 16:36:45.018 2 DEBUG nova.compute.manager [None req-26b0179c-bd04-4969-b6b1-57aad3a406cd 005258eefa5345409efe13f6a1de22ab c076c42271004e7aa86e84416e33f826 - - default default] [instance: 036b9f54-5759-43e6-9666-ec39d96c1729] Calling driver.cleanup from _post_live_migration _post_live_migration /usr/lib/python3.12/site-packages/nova/compute/manager.py:10082
Oct 09 16:36:45 compute-0 nova_compute[117331]: 2025-10-09 16:36:45.019 2 INFO nova.virt.libvirt.driver [None req-26b0179c-bd04-4969-b6b1-57aad3a406cd 005258eefa5345409efe13f6a1de22ab c076c42271004e7aa86e84416e33f826 - - default default] [instance: 036b9f54-5759-43e6-9666-ec39d96c1729] Deleting instance files /var/lib/nova/instances/036b9f54-5759-43e6-9666-ec39d96c1729_del
Oct 09 16:36:45 compute-0 nova_compute[117331]: 2025-10-09 16:36:45.019 2 INFO nova.virt.libvirt.driver [None req-26b0179c-bd04-4969-b6b1-57aad3a406cd 005258eefa5345409efe13f6a1de22ab c076c42271004e7aa86e84416e33f826 - - default default] [instance: 036b9f54-5759-43e6-9666-ec39d96c1729] Deletion of /var/lib/nova/instances/036b9f54-5759-43e6-9666-ec39d96c1729_del complete
Oct 09 16:36:45 compute-0 podman[152305]: 2025-10-09 16:36:45.826455737 +0000 UTC m=+0.060943160 container health_status 64fa251112d01abb34ec1bb8e22019815ddcaaaa391ce5ec91517147ac5a9994 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, build-date=2025-08-20T13:12:41, container_name=openstack_network_exporter, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vcs-type=git, config_id=edpm, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, name=ubi9-minimal, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., vendor=Red Hat, Inc., version=9.6, release=1755695350, architecture=x86_64, distribution-scope=public, io.openshift.tags=minimal rhel9, managed_by=edpm_ansible, com.redhat.component=ubi9-minimal-container, maintainer=Red Hat, Inc., url=https://catalog.redhat.com/en/search?searchType=containers, io.buildah.version=1.33.7, io.openshift.expose-services=, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': 
['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly.)
Oct 09 16:36:45 compute-0 podman[152306]: 2025-10-09 16:36:45.850646073 +0000 UTC m=+0.084225598 container health_status d615e4f24ff62fbb14be4a63e6da568ec5b22309773c1c66103b6b4de64a8a2f (image=38.102.83.66:5001/podified-master-centos10/openstack-ovn-controller:watcher_latest, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': '38.102.83.66:5001/podified-master-centos10/openstack-ovn-controller:watcher_latest', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.build-date=20251007, tcib_build_tag=watcher_latest, config_id=ovn_controller, container_name=ovn_controller, io.buildah.version=1.41.4, managed_by=edpm_ansible)
Oct 09 16:36:46 compute-0 nova_compute[117331]: 2025-10-09 16:36:46.602 2 DEBUG nova.compute.manager [req-eae98c81-6614-4f28-9a35-aec0b37d3ea5 req-424af843-8ee9-4d4b-a551-b2c7b66d1cd4 ee15961ad2be4bada6c106ac4f69f422 c076c42271004e7aa86e84416e33f826 - - default default] [instance: 036b9f54-5759-43e6-9666-ec39d96c1729] Received event network-vif-plugged-19fbb24b-757a-4067-a3e1-7a2c326cb886 external_instance_event /usr/lib/python3.12/site-packages/nova/compute/manager.py:11812
Oct 09 16:36:46 compute-0 nova_compute[117331]: 2025-10-09 16:36:46.602 2 DEBUG oslo_concurrency.lockutils [req-eae98c81-6614-4f28-9a35-aec0b37d3ea5 req-424af843-8ee9-4d4b-a551-b2c7b66d1cd4 ee15961ad2be4bada6c106ac4f69f422 c076c42271004e7aa86e84416e33f826 - - default default] Acquiring lock "036b9f54-5759-43e6-9666-ec39d96c1729-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:405
Oct 09 16:36:46 compute-0 nova_compute[117331]: 2025-10-09 16:36:46.602 2 DEBUG oslo_concurrency.lockutils [req-eae98c81-6614-4f28-9a35-aec0b37d3ea5 req-424af843-8ee9-4d4b-a551-b2c7b66d1cd4 ee15961ad2be4bada6c106ac4f69f422 c076c42271004e7aa86e84416e33f826 - - default default] Lock "036b9f54-5759-43e6-9666-ec39d96c1729-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:410
Oct 09 16:36:46 compute-0 nova_compute[117331]: 2025-10-09 16:36:46.603 2 DEBUG oslo_concurrency.lockutils [req-eae98c81-6614-4f28-9a35-aec0b37d3ea5 req-424af843-8ee9-4d4b-a551-b2c7b66d1cd4 ee15961ad2be4bada6c106ac4f69f422 c076c42271004e7aa86e84416e33f826 - - default default] Lock "036b9f54-5759-43e6-9666-ec39d96c1729-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:424
Oct 09 16:36:46 compute-0 nova_compute[117331]: 2025-10-09 16:36:46.603 2 DEBUG nova.compute.manager [req-eae98c81-6614-4f28-9a35-aec0b37d3ea5 req-424af843-8ee9-4d4b-a551-b2c7b66d1cd4 ee15961ad2be4bada6c106ac4f69f422 c076c42271004e7aa86e84416e33f826 - - default default] [instance: 036b9f54-5759-43e6-9666-ec39d96c1729] No waiting events found dispatching network-vif-plugged-19fbb24b-757a-4067-a3e1-7a2c326cb886 pop_instance_event /usr/lib/python3.12/site-packages/nova/compute/manager.py:344
Oct 09 16:36:46 compute-0 nova_compute[117331]: 2025-10-09 16:36:46.603 2 WARNING nova.compute.manager [req-eae98c81-6614-4f28-9a35-aec0b37d3ea5 req-424af843-8ee9-4d4b-a551-b2c7b66d1cd4 ee15961ad2be4bada6c106ac4f69f422 c076c42271004e7aa86e84416e33f826 - - default default] [instance: 036b9f54-5759-43e6-9666-ec39d96c1729] Received unexpected event network-vif-plugged-19fbb24b-757a-4067-a3e1-7a2c326cb886 for instance with vm_state active and task_state migrating.
Oct 09 16:36:46 compute-0 nova_compute[117331]: 2025-10-09 16:36:46.603 2 DEBUG nova.compute.manager [req-eae98c81-6614-4f28-9a35-aec0b37d3ea5 req-424af843-8ee9-4d4b-a551-b2c7b66d1cd4 ee15961ad2be4bada6c106ac4f69f422 c076c42271004e7aa86e84416e33f826 - - default default] [instance: 036b9f54-5759-43e6-9666-ec39d96c1729] Received event network-vif-unplugged-19fbb24b-757a-4067-a3e1-7a2c326cb886 external_instance_event /usr/lib/python3.12/site-packages/nova/compute/manager.py:11812
Oct 09 16:36:46 compute-0 nova_compute[117331]: 2025-10-09 16:36:46.603 2 DEBUG oslo_concurrency.lockutils [req-eae98c81-6614-4f28-9a35-aec0b37d3ea5 req-424af843-8ee9-4d4b-a551-b2c7b66d1cd4 ee15961ad2be4bada6c106ac4f69f422 c076c42271004e7aa86e84416e33f826 - - default default] Acquiring lock "036b9f54-5759-43e6-9666-ec39d96c1729-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:405
Oct 09 16:36:46 compute-0 nova_compute[117331]: 2025-10-09 16:36:46.604 2 DEBUG oslo_concurrency.lockutils [req-eae98c81-6614-4f28-9a35-aec0b37d3ea5 req-424af843-8ee9-4d4b-a551-b2c7b66d1cd4 ee15961ad2be4bada6c106ac4f69f422 c076c42271004e7aa86e84416e33f826 - - default default] Lock "036b9f54-5759-43e6-9666-ec39d96c1729-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:410
Oct 09 16:36:46 compute-0 nova_compute[117331]: 2025-10-09 16:36:46.604 2 DEBUG oslo_concurrency.lockutils [req-eae98c81-6614-4f28-9a35-aec0b37d3ea5 req-424af843-8ee9-4d4b-a551-b2c7b66d1cd4 ee15961ad2be4bada6c106ac4f69f422 c076c42271004e7aa86e84416e33f826 - - default default] Lock "036b9f54-5759-43e6-9666-ec39d96c1729-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:424
Oct 09 16:36:46 compute-0 nova_compute[117331]: 2025-10-09 16:36:46.604 2 DEBUG nova.compute.manager [req-eae98c81-6614-4f28-9a35-aec0b37d3ea5 req-424af843-8ee9-4d4b-a551-b2c7b66d1cd4 ee15961ad2be4bada6c106ac4f69f422 c076c42271004e7aa86e84416e33f826 - - default default] [instance: 036b9f54-5759-43e6-9666-ec39d96c1729] No waiting events found dispatching network-vif-unplugged-19fbb24b-757a-4067-a3e1-7a2c326cb886 pop_instance_event /usr/lib/python3.12/site-packages/nova/compute/manager.py:344
Oct 09 16:36:46 compute-0 nova_compute[117331]: 2025-10-09 16:36:46.604 2 DEBUG nova.compute.manager [req-eae98c81-6614-4f28-9a35-aec0b37d3ea5 req-424af843-8ee9-4d4b-a551-b2c7b66d1cd4 ee15961ad2be4bada6c106ac4f69f422 c076c42271004e7aa86e84416e33f826 - - default default] [instance: 036b9f54-5759-43e6-9666-ec39d96c1729] Received event network-vif-unplugged-19fbb24b-757a-4067-a3e1-7a2c326cb886 for instance with task_state migrating. _process_instance_event /usr/lib/python3.12/site-packages/nova/compute/manager.py:11590
Oct 09 16:36:46 compute-0 nova_compute[117331]: 2025-10-09 16:36:46.604 2 DEBUG nova.compute.manager [req-eae98c81-6614-4f28-9a35-aec0b37d3ea5 req-424af843-8ee9-4d4b-a551-b2c7b66d1cd4 ee15961ad2be4bada6c106ac4f69f422 c076c42271004e7aa86e84416e33f826 - - default default] [instance: 036b9f54-5759-43e6-9666-ec39d96c1729] Received event network-vif-plugged-19fbb24b-757a-4067-a3e1-7a2c326cb886 external_instance_event /usr/lib/python3.12/site-packages/nova/compute/manager.py:11812
Oct 09 16:36:46 compute-0 nova_compute[117331]: 2025-10-09 16:36:46.605 2 DEBUG oslo_concurrency.lockutils [req-eae98c81-6614-4f28-9a35-aec0b37d3ea5 req-424af843-8ee9-4d4b-a551-b2c7b66d1cd4 ee15961ad2be4bada6c106ac4f69f422 c076c42271004e7aa86e84416e33f826 - - default default] Acquiring lock "036b9f54-5759-43e6-9666-ec39d96c1729-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:405
Oct 09 16:36:46 compute-0 nova_compute[117331]: 2025-10-09 16:36:46.605 2 DEBUG oslo_concurrency.lockutils [req-eae98c81-6614-4f28-9a35-aec0b37d3ea5 req-424af843-8ee9-4d4b-a551-b2c7b66d1cd4 ee15961ad2be4bada6c106ac4f69f422 c076c42271004e7aa86e84416e33f826 - - default default] Lock "036b9f54-5759-43e6-9666-ec39d96c1729-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:410
Oct 09 16:36:46 compute-0 nova_compute[117331]: 2025-10-09 16:36:46.605 2 DEBUG oslo_concurrency.lockutils [req-eae98c81-6614-4f28-9a35-aec0b37d3ea5 req-424af843-8ee9-4d4b-a551-b2c7b66d1cd4 ee15961ad2be4bada6c106ac4f69f422 c076c42271004e7aa86e84416e33f826 - - default default] Lock "036b9f54-5759-43e6-9666-ec39d96c1729-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:424
Oct 09 16:36:46 compute-0 nova_compute[117331]: 2025-10-09 16:36:46.605 2 DEBUG nova.compute.manager [req-eae98c81-6614-4f28-9a35-aec0b37d3ea5 req-424af843-8ee9-4d4b-a551-b2c7b66d1cd4 ee15961ad2be4bada6c106ac4f69f422 c076c42271004e7aa86e84416e33f826 - - default default] [instance: 036b9f54-5759-43e6-9666-ec39d96c1729] No waiting events found dispatching network-vif-plugged-19fbb24b-757a-4067-a3e1-7a2c326cb886 pop_instance_event /usr/lib/python3.12/site-packages/nova/compute/manager.py:344
Oct 09 16:36:46 compute-0 nova_compute[117331]: 2025-10-09 16:36:46.606 2 WARNING nova.compute.manager [req-eae98c81-6614-4f28-9a35-aec0b37d3ea5 req-424af843-8ee9-4d4b-a551-b2c7b66d1cd4 ee15961ad2be4bada6c106ac4f69f422 c076c42271004e7aa86e84416e33f826 - - default default] [instance: 036b9f54-5759-43e6-9666-ec39d96c1729] Received unexpected event network-vif-plugged-19fbb24b-757a-4067-a3e1-7a2c326cb886 for instance with vm_state active and task_state migrating.
Oct 09 16:36:47 compute-0 nova_compute[117331]: 2025-10-09 16:36:47.286 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 09 16:36:47 compute-0 ovn_metadata_agent[28608]: 2025-10-09 16:36:47.670 28613 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=9954897f-aa83-45dd-8e84-289816676c2a, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '26'}),), if_exists=True) do_commit /usr/lib/python3.12/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 09 16:36:50 compute-0 nova_compute[117331]: 2025-10-09 16:36:50.014 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 09 16:36:52 compute-0 nova_compute[117331]: 2025-10-09 16:36:52.289 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 09 16:36:55 compute-0 nova_compute[117331]: 2025-10-09 16:36:55.056 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 09 16:36:55 compute-0 nova_compute[117331]: 2025-10-09 16:36:55.057 2 DEBUG oslo_concurrency.lockutils [None req-26b0179c-bd04-4969-b6b1-57aad3a406cd 005258eefa5345409efe13f6a1de22ab c076c42271004e7aa86e84416e33f826 - - default default] Acquiring lock "036b9f54-5759-43e6-9666-ec39d96c1729-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:405
Oct 09 16:36:55 compute-0 nova_compute[117331]: 2025-10-09 16:36:55.058 2 DEBUG oslo_concurrency.lockutils [None req-26b0179c-bd04-4969-b6b1-57aad3a406cd 005258eefa5345409efe13f6a1de22ab c076c42271004e7aa86e84416e33f826 - - default default] Lock "036b9f54-5759-43e6-9666-ec39d96c1729-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:410
Oct 09 16:36:55 compute-0 nova_compute[117331]: 2025-10-09 16:36:55.058 2 DEBUG oslo_concurrency.lockutils [None req-26b0179c-bd04-4969-b6b1-57aad3a406cd 005258eefa5345409efe13f6a1de22ab c076c42271004e7aa86e84416e33f826 - - default default] Lock "036b9f54-5759-43e6-9666-ec39d96c1729-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:424
Oct 09 16:36:55 compute-0 nova_compute[117331]: 2025-10-09 16:36:55.631 2 DEBUG oslo_concurrency.lockutils [None req-26b0179c-bd04-4969-b6b1-57aad3a406cd 005258eefa5345409efe13f6a1de22ab c076c42271004e7aa86e84416e33f826 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:405
Oct 09 16:36:55 compute-0 nova_compute[117331]: 2025-10-09 16:36:55.631 2 DEBUG oslo_concurrency.lockutils [None req-26b0179c-bd04-4969-b6b1-57aad3a406cd 005258eefa5345409efe13f6a1de22ab c076c42271004e7aa86e84416e33f826 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:410
Oct 09 16:36:55 compute-0 nova_compute[117331]: 2025-10-09 16:36:55.632 2 DEBUG oslo_concurrency.lockutils [None req-26b0179c-bd04-4969-b6b1-57aad3a406cd 005258eefa5345409efe13f6a1de22ab c076c42271004e7aa86e84416e33f826 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:424
Oct 09 16:36:55 compute-0 nova_compute[117331]: 2025-10-09 16:36:55.632 2 DEBUG nova.compute.resource_tracker [None req-26b0179c-bd04-4969-b6b1-57aad3a406cd 005258eefa5345409efe13f6a1de22ab c076c42271004e7aa86e84416e33f826 - - default default] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.12/site-packages/nova/compute/resource_tracker.py:937
Oct 09 16:36:56 compute-0 podman[152363]: 2025-10-09 16:36:56.832444113 +0000 UTC m=+0.058265046 container health_status 270830b5167cafbab735d8118980f26987e673cd0fa464dd2202394830bcca66 (image=38.102.83.66:5001/podified-master-centos10/openstack-multipathd:watcher_latest, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=watcher_latest, tcib_managed=true, config_id=multipathd, container_name=multipathd, io.buildah.version=1.41.4, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 10 Base Image, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': '38.102.83.66:5001/podified-master-centos10/openstack-multipathd:watcher_latest', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.build-date=20251007)
Oct 09 16:36:56 compute-0 nova_compute[117331]: 2025-10-09 16:36:56.841 2 DEBUG oslo_concurrency.processutils [None req-26b0179c-bd04-4969-b6b1-57aad3a406cd 005258eefa5345409efe13f6a1de22ab c076c42271004e7aa86e84416e33f826 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/85c9d87b-0e28-425f-b54e-c14066ba6918/disk --force-share --output=json execute /usr/lib/python3.12/site-packages/oslo_concurrency/processutils.py:349
Oct 09 16:36:56 compute-0 nova_compute[117331]: 2025-10-09 16:36:56.910 2 DEBUG oslo_concurrency.processutils [None req-26b0179c-bd04-4969-b6b1-57aad3a406cd 005258eefa5345409efe13f6a1de22ab c076c42271004e7aa86e84416e33f826 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/85c9d87b-0e28-425f-b54e-c14066ba6918/disk --force-share --output=json" returned: 0 in 0.069s execute /usr/lib/python3.12/site-packages/oslo_concurrency/processutils.py:372
Oct 09 16:36:56 compute-0 nova_compute[117331]: 2025-10-09 16:36:56.911 2 DEBUG oslo_concurrency.processutils [None req-26b0179c-bd04-4969-b6b1-57aad3a406cd 005258eefa5345409efe13f6a1de22ab c076c42271004e7aa86e84416e33f826 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/85c9d87b-0e28-425f-b54e-c14066ba6918/disk --force-share --output=json execute /usr/lib/python3.12/site-packages/oslo_concurrency/processutils.py:349
Oct 09 16:36:56 compute-0 nova_compute[117331]: 2025-10-09 16:36:56.990 2 DEBUG oslo_concurrency.processutils [None req-26b0179c-bd04-4969-b6b1-57aad3a406cd 005258eefa5345409efe13f6a1de22ab c076c42271004e7aa86e84416e33f826 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/85c9d87b-0e28-425f-b54e-c14066ba6918/disk --force-share --output=json" returned: 0 in 0.079s execute /usr/lib/python3.12/site-packages/oslo_concurrency/processutils.py:372
Oct 09 16:36:57 compute-0 nova_compute[117331]: 2025-10-09 16:36:57.133 2 WARNING nova.virt.libvirt.driver [None req-26b0179c-bd04-4969-b6b1-57aad3a406cd 005258eefa5345409efe13f6a1de22ab c076c42271004e7aa86e84416e33f826 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Oct 09 16:36:57 compute-0 nova_compute[117331]: 2025-10-09 16:36:57.134 2 DEBUG oslo_concurrency.processutils [None req-26b0179c-bd04-4969-b6b1-57aad3a406cd 005258eefa5345409efe13f6a1de22ab c076c42271004e7aa86e84416e33f826 - - default default] Running cmd (subprocess): env LANG=C uptime execute /usr/lib/python3.12/site-packages/oslo_concurrency/processutils.py:349
Oct 09 16:36:57 compute-0 nova_compute[117331]: 2025-10-09 16:36:57.154 2 DEBUG oslo_concurrency.processutils [None req-26b0179c-bd04-4969-b6b1-57aad3a406cd 005258eefa5345409efe13f6a1de22ab c076c42271004e7aa86e84416e33f826 - - default default] CMD "env LANG=C uptime" returned: 0 in 0.020s execute /usr/lib/python3.12/site-packages/oslo_concurrency/processutils.py:372
Oct 09 16:36:57 compute-0 nova_compute[117331]: 2025-10-09 16:36:57.154 2 DEBUG nova.compute.resource_tracker [None req-26b0179c-bd04-4969-b6b1-57aad3a406cd 005258eefa5345409efe13f6a1de22ab c076c42271004e7aa86e84416e33f826 - - default default] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=5933MB free_disk=73.22056198120117GB free_vcpus=7 pci_devices=[{"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.12/site-packages/nova/compute/resource_tracker.py:1136
Oct 09 16:36:57 compute-0 nova_compute[117331]: 2025-10-09 16:36:57.155 2 DEBUG oslo_concurrency.lockutils [None req-26b0179c-bd04-4969-b6b1-57aad3a406cd 005258eefa5345409efe13f6a1de22ab c076c42271004e7aa86e84416e33f826 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:405
Oct 09 16:36:57 compute-0 nova_compute[117331]: 2025-10-09 16:36:57.155 2 DEBUG oslo_concurrency.lockutils [None req-26b0179c-bd04-4969-b6b1-57aad3a406cd 005258eefa5345409efe13f6a1de22ab c076c42271004e7aa86e84416e33f826 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:410
Oct 09 16:36:57 compute-0 nova_compute[117331]: 2025-10-09 16:36:57.291 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 09 16:36:58 compute-0 nova_compute[117331]: 2025-10-09 16:36:58.308 2 DEBUG nova.compute.resource_tracker [None req-26b0179c-bd04-4969-b6b1-57aad3a406cd 005258eefa5345409efe13f6a1de22ab c076c42271004e7aa86e84416e33f826 - - default default] Migration for instance 036b9f54-5759-43e6-9666-ec39d96c1729 refers to another host's instance! _pair_instances_to_migrations /usr/lib/python3.12/site-packages/nova/compute/resource_tracker.py:979
Oct 09 16:36:58 compute-0 nova_compute[117331]: 2025-10-09 16:36:58.899 2 DEBUG nova.compute.resource_tracker [None req-26b0179c-bd04-4969-b6b1-57aad3a406cd 005258eefa5345409efe13f6a1de22ab c076c42271004e7aa86e84416e33f826 - - default default] [instance: 036b9f54-5759-43e6-9666-ec39d96c1729] Skipping migration as instance is neither resizing nor live-migrating. _update_usage_from_migrations /usr/lib/python3.12/site-packages/nova/compute/resource_tracker.py:1596
Oct 09 16:36:58 compute-0 nova_compute[117331]: 2025-10-09 16:36:58.900 2 INFO nova.compute.resource_tracker [None req-26b0179c-bd04-4969-b6b1-57aad3a406cd 005258eefa5345409efe13f6a1de22ab c076c42271004e7aa86e84416e33f826 - - default default] [instance: 85c9d87b-0e28-425f-b54e-c14066ba6918] Updating resource usage from migration 1ddf8509-0504-43ef-bcb0-1ae3ece7dd4d
Oct 09 16:36:59 compute-0 nova_compute[117331]: 2025-10-09 16:36:59.001 2 DEBUG nova.compute.resource_tracker [None req-26b0179c-bd04-4969-b6b1-57aad3a406cd 005258eefa5345409efe13f6a1de22ab c076c42271004e7aa86e84416e33f826 - - default default] Migration 5886d8ac-4e0a-475a-946f-65da225e1ab5 is active on this compute host and has allocations in placement: {'resources': {'VCPU': 1, 'MEMORY_MB': 128, 'DISK_GB': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.12/site-packages/nova/compute/resource_tracker.py:1745
Oct 09 16:36:59 compute-0 nova_compute[117331]: 2025-10-09 16:36:59.001 2 DEBUG nova.compute.resource_tracker [None req-26b0179c-bd04-4969-b6b1-57aad3a406cd 005258eefa5345409efe13f6a1de22ab c076c42271004e7aa86e84416e33f826 - - default default] Migration 1ddf8509-0504-43ef-bcb0-1ae3ece7dd4d is active on this compute host and has allocations in placement: {'resources': {'VCPU': 1, 'MEMORY_MB': 128, 'DISK_GB': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.12/site-packages/nova/compute/resource_tracker.py:1745
Oct 09 16:36:59 compute-0 nova_compute[117331]: 2025-10-09 16:36:59.002 2 DEBUG nova.compute.resource_tracker [None req-26b0179c-bd04-4969-b6b1-57aad3a406cd 005258eefa5345409efe13f6a1de22ab c076c42271004e7aa86e84416e33f826 - - default default] Total usable vcpus: 8, total allocated vcpus: 1 _report_final_resource_view /usr/lib/python3.12/site-packages/nova/compute/resource_tracker.py:1159
Oct 09 16:36:59 compute-0 nova_compute[117331]: 2025-10-09 16:36:59.002 2 DEBUG nova.compute.resource_tracker [None req-26b0179c-bd04-4969-b6b1-57aad3a406cd 005258eefa5345409efe13f6a1de22ab c076c42271004e7aa86e84416e33f826 - - default default] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=640MB phys_disk=79GB used_disk=1GB total_vcpus=8 used_vcpus=1 pci_stats=[] stats={'failed_builds': '0', 'uptime': ' 16:36:57 up 46 min,  0 user,  load average: 0.44, 0.54, 0.48\n', 'num_instances': '1', 'num_vm_active': '1', 'num_task_migrating': '1', 'num_os_type_None': '1', 'num_proj_a60acfe52e4b4b7f912654a59f0978b7': '1', 'io_workload': '0'} _report_final_resource_view /usr/lib/python3.12/site-packages/nova/compute/resource_tracker.py:1168
Oct 09 16:36:59 compute-0 nova_compute[117331]: 2025-10-09 16:36:59.047 2 DEBUG nova.compute.provider_tree [None req-26b0179c-bd04-4969-b6b1-57aad3a406cd 005258eefa5345409efe13f6a1de22ab c076c42271004e7aa86e84416e33f826 - - default default] Inventory has not changed in ProviderTree for provider: 593051b8-2000-437f-a915-2616fc8b1671 update_inventory /usr/lib/python3.12/site-packages/nova/compute/provider_tree.py:180
Oct 09 16:36:59 compute-0 nova_compute[117331]: 2025-10-09 16:36:59.729 2 DEBUG nova.scheduler.client.report [None req-26b0179c-bd04-4969-b6b1-57aad3a406cd 005258eefa5345409efe13f6a1de22ab c076c42271004e7aa86e84416e33f826 - - default default] Inventory has not changed for provider 593051b8-2000-437f-a915-2616fc8b1671 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 79, 'reserved': 1, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.12/site-packages/nova/scheduler/client/report.py:958
Oct 09 16:36:59 compute-0 podman[127775]: time="2025-10-09T16:36:59Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Oct 09 16:36:59 compute-0 podman[127775]: @ - - [09/Oct/2025:16:36:59 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 20749 "" "Go-http-client/1.1"
Oct 09 16:36:59 compute-0 podman[127775]: @ - - [09/Oct/2025:16:36:59 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 3493 "" "Go-http-client/1.1"
Oct 09 16:37:00 compute-0 nova_compute[117331]: 2025-10-09 16:37:00.058 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 09 16:37:00 compute-0 nova_compute[117331]: 2025-10-09 16:37:00.851 2 DEBUG nova.compute.resource_tracker [None req-26b0179c-bd04-4969-b6b1-57aad3a406cd 005258eefa5345409efe13f6a1de22ab c076c42271004e7aa86e84416e33f826 - - default default] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.12/site-packages/nova/compute/resource_tracker.py:1097
Oct 09 16:37:00 compute-0 nova_compute[117331]: 2025-10-09 16:37:00.852 2 DEBUG oslo_concurrency.lockutils [None req-26b0179c-bd04-4969-b6b1-57aad3a406cd 005258eefa5345409efe13f6a1de22ab c076c42271004e7aa86e84416e33f826 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 3.697s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:424
Oct 09 16:37:01 compute-0 nova_compute[117331]: 2025-10-09 16:37:01.373 2 INFO nova.compute.manager [None req-26b0179c-bd04-4969-b6b1-57aad3a406cd 005258eefa5345409efe13f6a1de22ab c076c42271004e7aa86e84416e33f826 - - default default] [instance: 036b9f54-5759-43e6-9666-ec39d96c1729] Migrating instance to compute-1.ctlplane.example.com finished successfully.
Oct 09 16:37:01 compute-0 openstack_network_exporter[129925]: ERROR   16:37:01 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Oct 09 16:37:01 compute-0 openstack_network_exporter[129925]: ERROR   16:37:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Oct 09 16:37:01 compute-0 openstack_network_exporter[129925]: ERROR   16:37:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Oct 09 16:37:01 compute-0 openstack_network_exporter[129925]: ERROR   16:37:01 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Oct 09 16:37:01 compute-0 openstack_network_exporter[129925]: 
Oct 09 16:37:01 compute-0 openstack_network_exporter[129925]: ERROR   16:37:01 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Oct 09 16:37:01 compute-0 openstack_network_exporter[129925]: 
Oct 09 16:37:01 compute-0 podman[152390]: 2025-10-09 16:37:01.826343955 +0000 UTC m=+0.056810300 container health_status 86aa93e887899e54d383b0fc17fb3c5637680d1d8e146a130a557c837f0260fe (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter)
Oct 09 16:37:02 compute-0 nova_compute[117331]: 2025-10-09 16:37:02.293 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 09 16:37:02 compute-0 nova_compute[117331]: 2025-10-09 16:37:02.973 2 INFO nova.scheduler.client.report [None req-26b0179c-bd04-4969-b6b1-57aad3a406cd 005258eefa5345409efe13f6a1de22ab c076c42271004e7aa86e84416e33f826 - - default default] Deleted allocation for migration 5886d8ac-4e0a-475a-946f-65da225e1ab5
Oct 09 16:37:02 compute-0 nova_compute[117331]: 2025-10-09 16:37:02.974 2 DEBUG nova.virt.libvirt.driver [None req-26b0179c-bd04-4969-b6b1-57aad3a406cd 005258eefa5345409efe13f6a1de22ab c076c42271004e7aa86e84416e33f826 - - default default] [instance: 036b9f54-5759-43e6-9666-ec39d96c1729] Live migration monitoring is all done _live_migration /usr/lib/python3.12/site-packages/nova/virt/libvirt/driver.py:11566
Oct 09 16:37:04 compute-0 nova_compute[117331]: 2025-10-09 16:37:04.420 2 DEBUG oslo_concurrency.processutils [None req-fe59cf1f-038c-48fc-a686-85e22eb4925e 005258eefa5345409efe13f6a1de22ab c076c42271004e7aa86e84416e33f826 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/85c9d87b-0e28-425f-b54e-c14066ba6918/disk --force-share --output=json execute /usr/lib/python3.12/site-packages/oslo_concurrency/processutils.py:349
Oct 09 16:37:04 compute-0 nova_compute[117331]: 2025-10-09 16:37:04.480 2 DEBUG oslo_concurrency.processutils [None req-fe59cf1f-038c-48fc-a686-85e22eb4925e 005258eefa5345409efe13f6a1de22ab c076c42271004e7aa86e84416e33f826 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/85c9d87b-0e28-425f-b54e-c14066ba6918/disk --force-share --output=json" returned: 0 in 0.060s execute /usr/lib/python3.12/site-packages/oslo_concurrency/processutils.py:372
Oct 09 16:37:04 compute-0 nova_compute[117331]: 2025-10-09 16:37:04.481 2 DEBUG oslo_concurrency.processutils [None req-fe59cf1f-038c-48fc-a686-85e22eb4925e 005258eefa5345409efe13f6a1de22ab c076c42271004e7aa86e84416e33f826 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/85c9d87b-0e28-425f-b54e-c14066ba6918/disk --force-share --output=json execute /usr/lib/python3.12/site-packages/oslo_concurrency/processutils.py:349
Oct 09 16:37:04 compute-0 nova_compute[117331]: 2025-10-09 16:37:04.533 2 DEBUG oslo_concurrency.processutils [None req-fe59cf1f-038c-48fc-a686-85e22eb4925e 005258eefa5345409efe13f6a1de22ab c076c42271004e7aa86e84416e33f826 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/85c9d87b-0e28-425f-b54e-c14066ba6918/disk --force-share --output=json" returned: 0 in 0.052s execute /usr/lib/python3.12/site-packages/oslo_concurrency/processutils.py:372
Oct 09 16:37:04 compute-0 nova_compute[117331]: 2025-10-09 16:37:04.535 2 DEBUG nova.compute.manager [None req-fe59cf1f-038c-48fc-a686-85e22eb4925e 005258eefa5345409efe13f6a1de22ab c076c42271004e7aa86e84416e33f826 - - default default] [instance: 85c9d87b-0e28-425f-b54e-c14066ba6918] Preparing to wait for external event network-vif-plugged-2cf33271-ebfe-4d2e-9668-8905e0bc34b8 prepare_for_instance_event /usr/lib/python3.12/site-packages/nova/compute/manager.py:307
Oct 09 16:37:04 compute-0 nova_compute[117331]: 2025-10-09 16:37:04.535 2 DEBUG oslo_concurrency.lockutils [None req-fe59cf1f-038c-48fc-a686-85e22eb4925e 005258eefa5345409efe13f6a1de22ab c076c42271004e7aa86e84416e33f826 - - default default] Acquiring lock "85c9d87b-0e28-425f-b54e-c14066ba6918-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:405
Oct 09 16:37:04 compute-0 nova_compute[117331]: 2025-10-09 16:37:04.536 2 DEBUG oslo_concurrency.lockutils [None req-fe59cf1f-038c-48fc-a686-85e22eb4925e 005258eefa5345409efe13f6a1de22ab c076c42271004e7aa86e84416e33f826 - - default default] Lock "85c9d87b-0e28-425f-b54e-c14066ba6918-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:410
Oct 09 16:37:04 compute-0 nova_compute[117331]: 2025-10-09 16:37:04.536 2 DEBUG oslo_concurrency.lockutils [None req-fe59cf1f-038c-48fc-a686-85e22eb4925e 005258eefa5345409efe13f6a1de22ab c076c42271004e7aa86e84416e33f826 - - default default] Lock "85c9d87b-0e28-425f-b54e-c14066ba6918-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:424
Oct 09 16:37:05 compute-0 nova_compute[117331]: 2025-10-09 16:37:05.060 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 09 16:37:07 compute-0 nova_compute[117331]: 2025-10-09 16:37:07.295 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 09 16:37:07 compute-0 podman[152422]: 2025-10-09 16:37:07.823563225 +0000 UTC m=+0.056457639 container health_status 0e966c7a283689daf23c5db5017f23f685ee836319240188a9f70ddd246563c5 (image=38.102.83.66:5001/podified-master-centos10/openstack-neutron-metadata-agent-ovn:watcher_latest, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251007, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=watcher_latest, io.buildah.version=1.41.4, maintainer=OpenStack Kubernetes Operator team, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': '38.102.83.66:5001/podified-master-centos10/openstack-neutron-metadata-agent-ovn:watcher_latest', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, managed_by=edpm_ansible)
Oct 09 16:37:07 compute-0 podman[152423]: 2025-10-09 16:37:07.834221812 +0000 UTC m=+0.059983891 container health_status 8227728d23f374cb9528be6ddf01dff38d8c792912a17b0d137932a18390b820 (image=38.102.83.66:5001/podified-master-centos10/openstack-iscsid:watcher_latest, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, config_id=iscsid, org.label-schema.name=CentOS Stream 10 Base Image, tcib_managed=true, managed_by=edpm_ansible, tcib_build_tag=watcher_latest, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': '38.102.83.66:5001/podified-master-centos10/openstack-iscsid:watcher_latest', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, org.label-schema.build-date=20251007, container_name=iscsid, io.buildah.version=1.41.4, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Oct 09 16:37:10 compute-0 nova_compute[117331]: 2025-10-09 16:37:10.063 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 09 16:37:12 compute-0 nova_compute[117331]: 2025-10-09 16:37:12.224 2 DEBUG nova.compute.manager [req-b4a29987-2f3c-4f98-b257-f2d038053ec6 req-d27bae33-9feb-4da4-9461-9111480e3389 ee15961ad2be4bada6c106ac4f69f422 c076c42271004e7aa86e84416e33f826 - - default default] [instance: 85c9d87b-0e28-425f-b54e-c14066ba6918] Received event network-vif-unplugged-2cf33271-ebfe-4d2e-9668-8905e0bc34b8 external_instance_event /usr/lib/python3.12/site-packages/nova/compute/manager.py:11812
Oct 09 16:37:12 compute-0 nova_compute[117331]: 2025-10-09 16:37:12.225 2 DEBUG oslo_concurrency.lockutils [req-b4a29987-2f3c-4f98-b257-f2d038053ec6 req-d27bae33-9feb-4da4-9461-9111480e3389 ee15961ad2be4bada6c106ac4f69f422 c076c42271004e7aa86e84416e33f826 - - default default] Acquiring lock "85c9d87b-0e28-425f-b54e-c14066ba6918-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:405
Oct 09 16:37:12 compute-0 nova_compute[117331]: 2025-10-09 16:37:12.225 2 DEBUG oslo_concurrency.lockutils [req-b4a29987-2f3c-4f98-b257-f2d038053ec6 req-d27bae33-9feb-4da4-9461-9111480e3389 ee15961ad2be4bada6c106ac4f69f422 c076c42271004e7aa86e84416e33f826 - - default default] Lock "85c9d87b-0e28-425f-b54e-c14066ba6918-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:410
Oct 09 16:37:12 compute-0 nova_compute[117331]: 2025-10-09 16:37:12.225 2 DEBUG oslo_concurrency.lockutils [req-b4a29987-2f3c-4f98-b257-f2d038053ec6 req-d27bae33-9feb-4da4-9461-9111480e3389 ee15961ad2be4bada6c106ac4f69f422 c076c42271004e7aa86e84416e33f826 - - default default] Lock "85c9d87b-0e28-425f-b54e-c14066ba6918-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:424
Oct 09 16:37:12 compute-0 nova_compute[117331]: 2025-10-09 16:37:12.225 2 DEBUG nova.compute.manager [req-b4a29987-2f3c-4f98-b257-f2d038053ec6 req-d27bae33-9feb-4da4-9461-9111480e3389 ee15961ad2be4bada6c106ac4f69f422 c076c42271004e7aa86e84416e33f826 - - default default] [instance: 85c9d87b-0e28-425f-b54e-c14066ba6918] No event matching network-vif-unplugged-2cf33271-ebfe-4d2e-9668-8905e0bc34b8 in dict_keys([('network-vif-plugged', '2cf33271-ebfe-4d2e-9668-8905e0bc34b8')]) pop_instance_event /usr/lib/python3.12/site-packages/nova/compute/manager.py:349
Oct 09 16:37:12 compute-0 nova_compute[117331]: 2025-10-09 16:37:12.225 2 DEBUG nova.compute.manager [req-b4a29987-2f3c-4f98-b257-f2d038053ec6 req-d27bae33-9feb-4da4-9461-9111480e3389 ee15961ad2be4bada6c106ac4f69f422 c076c42271004e7aa86e84416e33f826 - - default default] [instance: 85c9d87b-0e28-425f-b54e-c14066ba6918] Received event network-vif-unplugged-2cf33271-ebfe-4d2e-9668-8905e0bc34b8 for instance with task_state migrating. _process_instance_event /usr/lib/python3.12/site-packages/nova/compute/manager.py:11590
Oct 09 16:37:12 compute-0 nova_compute[117331]: 2025-10-09 16:37:12.298 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 09 16:37:14 compute-0 nova_compute[117331]: 2025-10-09 16:37:14.133 2 INFO nova.compute.manager [None req-fe59cf1f-038c-48fc-a686-85e22eb4925e 005258eefa5345409efe13f6a1de22ab c076c42271004e7aa86e84416e33f826 - - default default] [instance: 85c9d87b-0e28-425f-b54e-c14066ba6918] Took 9.60 seconds for pre_live_migration on destination host compute-1.ctlplane.example.com.
Oct 09 16:37:14 compute-0 nova_compute[117331]: 2025-10-09 16:37:14.387 2 DEBUG nova.compute.manager [req-c3d628b9-4316-42ca-801b-295b9a6cbc8f req-33e5ac26-3f73-4909-871c-e7e3e89eef28 ee15961ad2be4bada6c106ac4f69f422 c076c42271004e7aa86e84416e33f826 - - default default] [instance: 85c9d87b-0e28-425f-b54e-c14066ba6918] Received event network-vif-plugged-2cf33271-ebfe-4d2e-9668-8905e0bc34b8 external_instance_event /usr/lib/python3.12/site-packages/nova/compute/manager.py:11812
Oct 09 16:37:14 compute-0 nova_compute[117331]: 2025-10-09 16:37:14.388 2 DEBUG oslo_concurrency.lockutils [req-c3d628b9-4316-42ca-801b-295b9a6cbc8f req-33e5ac26-3f73-4909-871c-e7e3e89eef28 ee15961ad2be4bada6c106ac4f69f422 c076c42271004e7aa86e84416e33f826 - - default default] Acquiring lock "85c9d87b-0e28-425f-b54e-c14066ba6918-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:405
Oct 09 16:37:14 compute-0 nova_compute[117331]: 2025-10-09 16:37:14.388 2 DEBUG oslo_concurrency.lockutils [req-c3d628b9-4316-42ca-801b-295b9a6cbc8f req-33e5ac26-3f73-4909-871c-e7e3e89eef28 ee15961ad2be4bada6c106ac4f69f422 c076c42271004e7aa86e84416e33f826 - - default default] Lock "85c9d87b-0e28-425f-b54e-c14066ba6918-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:410
Oct 09 16:37:14 compute-0 nova_compute[117331]: 2025-10-09 16:37:14.388 2 DEBUG oslo_concurrency.lockutils [req-c3d628b9-4316-42ca-801b-295b9a6cbc8f req-33e5ac26-3f73-4909-871c-e7e3e89eef28 ee15961ad2be4bada6c106ac4f69f422 c076c42271004e7aa86e84416e33f826 - - default default] Lock "85c9d87b-0e28-425f-b54e-c14066ba6918-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:424
Oct 09 16:37:14 compute-0 nova_compute[117331]: 2025-10-09 16:37:14.389 2 DEBUG nova.compute.manager [req-c3d628b9-4316-42ca-801b-295b9a6cbc8f req-33e5ac26-3f73-4909-871c-e7e3e89eef28 ee15961ad2be4bada6c106ac4f69f422 c076c42271004e7aa86e84416e33f826 - - default default] [instance: 85c9d87b-0e28-425f-b54e-c14066ba6918] Processing event network-vif-plugged-2cf33271-ebfe-4d2e-9668-8905e0bc34b8 _process_instance_event /usr/lib/python3.12/site-packages/nova/compute/manager.py:11572
Oct 09 16:37:14 compute-0 nova_compute[117331]: 2025-10-09 16:37:14.389 2 DEBUG nova.compute.manager [req-c3d628b9-4316-42ca-801b-295b9a6cbc8f req-33e5ac26-3f73-4909-871c-e7e3e89eef28 ee15961ad2be4bada6c106ac4f69f422 c076c42271004e7aa86e84416e33f826 - - default default] [instance: 85c9d87b-0e28-425f-b54e-c14066ba6918] Received event network-changed-2cf33271-ebfe-4d2e-9668-8905e0bc34b8 external_instance_event /usr/lib/python3.12/site-packages/nova/compute/manager.py:11812
Oct 09 16:37:14 compute-0 nova_compute[117331]: 2025-10-09 16:37:14.389 2 DEBUG nova.compute.manager [req-c3d628b9-4316-42ca-801b-295b9a6cbc8f req-33e5ac26-3f73-4909-871c-e7e3e89eef28 ee15961ad2be4bada6c106ac4f69f422 c076c42271004e7aa86e84416e33f826 - - default default] [instance: 85c9d87b-0e28-425f-b54e-c14066ba6918] Refreshing instance network info cache due to event network-changed-2cf33271-ebfe-4d2e-9668-8905e0bc34b8. external_instance_event /usr/lib/python3.12/site-packages/nova/compute/manager.py:11817
Oct 09 16:37:14 compute-0 nova_compute[117331]: 2025-10-09 16:37:14.390 2 DEBUG oslo_concurrency.lockutils [req-c3d628b9-4316-42ca-801b-295b9a6cbc8f req-33e5ac26-3f73-4909-871c-e7e3e89eef28 ee15961ad2be4bada6c106ac4f69f422 c076c42271004e7aa86e84416e33f826 - - default default] Acquiring lock "refresh_cache-85c9d87b-0e28-425f-b54e-c14066ba6918" lock /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:313
Oct 09 16:37:14 compute-0 nova_compute[117331]: 2025-10-09 16:37:14.390 2 DEBUG oslo_concurrency.lockutils [req-c3d628b9-4316-42ca-801b-295b9a6cbc8f req-33e5ac26-3f73-4909-871c-e7e3e89eef28 ee15961ad2be4bada6c106ac4f69f422 c076c42271004e7aa86e84416e33f826 - - default default] Acquired lock "refresh_cache-85c9d87b-0e28-425f-b54e-c14066ba6918" lock /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:316
Oct 09 16:37:14 compute-0 nova_compute[117331]: 2025-10-09 16:37:14.390 2 DEBUG nova.network.neutron [req-c3d628b9-4316-42ca-801b-295b9a6cbc8f req-33e5ac26-3f73-4909-871c-e7e3e89eef28 ee15961ad2be4bada6c106ac4f69f422 c076c42271004e7aa86e84416e33f826 - - default default] [instance: 85c9d87b-0e28-425f-b54e-c14066ba6918] Refreshing network info cache for port 2cf33271-ebfe-4d2e-9668-8905e0bc34b8 _get_instance_nw_info /usr/lib/python3.12/site-packages/nova/network/neutron.py:2067
Oct 09 16:37:14 compute-0 nova_compute[117331]: 2025-10-09 16:37:14.392 2 DEBUG nova.compute.manager [None req-fe59cf1f-038c-48fc-a686-85e22eb4925e 005258eefa5345409efe13f6a1de22ab c076c42271004e7aa86e84416e33f826 - - default default] [instance: 85c9d87b-0e28-425f-b54e-c14066ba6918] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.12/site-packages/nova/compute/manager.py:602
Oct 09 16:37:14 compute-0 ovn_controller[19752]: 2025-10-09T16:37:14Z|00249|memory_trim|INFO|Detected inactivity (last active 30003 ms ago): trimming memory
Oct 09 16:37:15 compute-0 nova_compute[117331]: 2025-10-09 16:37:15.039 2 WARNING neutronclient.v2_0.client [req-c3d628b9-4316-42ca-801b-295b9a6cbc8f req-33e5ac26-3f73-4909-871c-e7e3e89eef28 ee15961ad2be4bada6c106ac4f69f422 c076c42271004e7aa86e84416e33f826 - - default default] The python binding code in neutronclient is deprecated in favor of OpenstackSDK, please use that as this will be removed in a future release.
Oct 09 16:37:15 compute-0 nova_compute[117331]: 2025-10-09 16:37:15.041 2 DEBUG nova.compute.manager [None req-fe59cf1f-038c-48fc-a686-85e22eb4925e 005258eefa5345409efe13f6a1de22ab c076c42271004e7aa86e84416e33f826 - - default default] live_migration data is LibvirtLiveMigrateData(bdms=[],block_migration=True,disk_available_mb=74752,disk_over_commit=<?>,dst_cpu_shared_set_info=set([]),dst_numa_info=<?>,dst_supports_mdev_live_migration=True,dst_supports_numa_live_migration=<?>,dst_wants_file_backed_memory=False,file_backed_memory_discard=<?>,filename='tmpfhqgjh_d',graphics_listen_addr_spice=127.0.0.1,graphics_listen_addr_vnc=::,image_type='qcow2',instance_relative_path='85c9d87b-0e28-425f-b54e-c14066ba6918',is_shared_block_storage=False,is_shared_instance_path=False,is_volume_backed=False,migration=Migration(1ddf8509-0504-43ef-bcb0-1ae3ece7dd4d),old_vol_attachment_ids={},pci_dev_map_src_dst={},serial_listen_addr=None,serial_listen_ports=[],source_mdev_types=<?>,src_supports_native_luks=<?>,src_supports_numa_live_migration=<?>,supported_perf_events=[],target_connect_addr=None,target_mdevs=<?>,vifs=[VIFMigrateData],wait_for_vif_plugged=True) _do_live_migration /usr/lib/python3.12/site-packages/nova/compute/manager.py:9659
Oct 09 16:37:15 compute-0 nova_compute[117331]: 2025-10-09 16:37:15.065 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 09 16:37:15 compute-0 nova_compute[117331]: 2025-10-09 16:37:15.580 2 WARNING neutronclient.v2_0.client [req-c3d628b9-4316-42ca-801b-295b9a6cbc8f req-33e5ac26-3f73-4909-871c-e7e3e89eef28 ee15961ad2be4bada6c106ac4f69f422 c076c42271004e7aa86e84416e33f826 - - default default] The python binding code in neutronclient is deprecated in favor of OpenstackSDK, please use that as this will be removed in a future release.
Oct 09 16:37:15 compute-0 nova_compute[117331]: 2025-10-09 16:37:15.597 2 DEBUG nova.objects.instance [None req-fe59cf1f-038c-48fc-a686-85e22eb4925e 005258eefa5345409efe13f6a1de22ab c076c42271004e7aa86e84416e33f826 - - default default] Lazy-loading 'migration_context' on Instance uuid 85c9d87b-0e28-425f-b54e-c14066ba6918 obj_load_attr /usr/lib/python3.12/site-packages/nova/objects/instance.py:1141
Oct 09 16:37:15 compute-0 nova_compute[117331]: 2025-10-09 16:37:15.598 2 DEBUG nova.virt.libvirt.driver [None req-fe59cf1f-038c-48fc-a686-85e22eb4925e 005258eefa5345409efe13f6a1de22ab c076c42271004e7aa86e84416e33f826 - - default default] [instance: 85c9d87b-0e28-425f-b54e-c14066ba6918] Starting monitoring of live migration _live_migration /usr/lib/python3.12/site-packages/nova/virt/libvirt/driver.py:11543
Oct 09 16:37:15 compute-0 nova_compute[117331]: 2025-10-09 16:37:15.600 2 DEBUG nova.virt.libvirt.driver [None req-fe59cf1f-038c-48fc-a686-85e22eb4925e 005258eefa5345409efe13f6a1de22ab c076c42271004e7aa86e84416e33f826 - - default default] [instance: 85c9d87b-0e28-425f-b54e-c14066ba6918] Operation thread is still running _live_migration_monitor /usr/lib/python3.12/site-packages/nova/virt/libvirt/driver.py:11343
Oct 09 16:37:15 compute-0 nova_compute[117331]: 2025-10-09 16:37:15.601 2 DEBUG nova.virt.libvirt.driver [None req-fe59cf1f-038c-48fc-a686-85e22eb4925e 005258eefa5345409efe13f6a1de22ab c076c42271004e7aa86e84416e33f826 - - default default] [instance: 85c9d87b-0e28-425f-b54e-c14066ba6918] Migration not running yet _live_migration_monitor /usr/lib/python3.12/site-packages/nova/virt/libvirt/driver.py:11352
Oct 09 16:37:15 compute-0 nova_compute[117331]: 2025-10-09 16:37:15.717 2 DEBUG nova.network.neutron [req-c3d628b9-4316-42ca-801b-295b9a6cbc8f req-33e5ac26-3f73-4909-871c-e7e3e89eef28 ee15961ad2be4bada6c106ac4f69f422 c076c42271004e7aa86e84416e33f826 - - default default] [instance: 85c9d87b-0e28-425f-b54e-c14066ba6918] Updated VIF entry in instance network info cache for port 2cf33271-ebfe-4d2e-9668-8905e0bc34b8. _build_network_info_model /usr/lib/python3.12/site-packages/nova/network/neutron.py:3542
Oct 09 16:37:15 compute-0 nova_compute[117331]: 2025-10-09 16:37:15.718 2 DEBUG nova.network.neutron [req-c3d628b9-4316-42ca-801b-295b9a6cbc8f req-33e5ac26-3f73-4909-871c-e7e3e89eef28 ee15961ad2be4bada6c106ac4f69f422 c076c42271004e7aa86e84416e33f826 - - default default] [instance: 85c9d87b-0e28-425f-b54e-c14066ba6918] Updating instance_info_cache with network_info: [{"id": "2cf33271-ebfe-4d2e-9668-8905e0bc34b8", "address": "fa:16:3e:5a:42:08", "network": {"id": "cf3aa351-4d26-41f3-8cb5-1ff2d3d995c8", "bridge": "br-int", "label": "tempest-TestExecuteWorkloadStabilizationStrategy-105109230-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "5b6d650a71ca47d58b10f5fd874c4898", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2cf33271-eb", "ovs_interfaceid": "2cf33271-ebfe-4d2e-9668-8905e0bc34b8", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {"migrating_to": "compute-1.ctlplane.example.com"}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.12/site-packages/nova/network/neutron.py:116
Oct 09 16:37:16 compute-0 nova_compute[117331]: 2025-10-09 16:37:16.103 2 DEBUG nova.virt.libvirt.driver [None req-fe59cf1f-038c-48fc-a686-85e22eb4925e 005258eefa5345409efe13f6a1de22ab c076c42271004e7aa86e84416e33f826 - - default default] [instance: 85c9d87b-0e28-425f-b54e-c14066ba6918] Operation thread is still running _live_migration_monitor /usr/lib/python3.12/site-packages/nova/virt/libvirt/driver.py:11343
Oct 09 16:37:16 compute-0 nova_compute[117331]: 2025-10-09 16:37:16.104 2 DEBUG nova.virt.libvirt.driver [None req-fe59cf1f-038c-48fc-a686-85e22eb4925e 005258eefa5345409efe13f6a1de22ab c076c42271004e7aa86e84416e33f826 - - default default] [instance: 85c9d87b-0e28-425f-b54e-c14066ba6918] Migration not running yet _live_migration_monitor /usr/lib/python3.12/site-packages/nova/virt/libvirt/driver.py:11352
Oct 09 16:37:16 compute-0 nova_compute[117331]: 2025-10-09 16:37:16.144 2 DEBUG nova.virt.libvirt.vif [None req-fe59cf1f-038c-48fc-a686-85e22eb4925e 005258eefa5345409efe13f6a1de22ab c076c42271004e7aa86e84416e33f826 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,compute_id=1,config_drive='True',created_at=2025-10-09T16:35:22Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description=None,display_name='tempest-TestExecuteWorkloadStabilizationStrategy-server-512585471',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testexecuteworkloadstabilizationstrategy-server-5125854',id=26,image_ref='b7d6e0af-25e4-4227-9dc6-43143898ceee',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-10-09T16:35:38Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=1,progress=0,project_id='a60acfe52e4b4b7f912654a59f0978b7',ramdisk_id='',reservation_id='r-52wfk6ph',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='admin,member,reader,manager',image_base_image_ref='b7d6e0af-25e4-4227-9dc6-43143898ceee',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestExecuteWorkloadStabilizationStrategy-1000021673',owner_user_name='tempest-TestExecuteWorkloadStabilizationStrategy-1000021673-project-admin'},tags=<?>,task_state='migrating',terminated_at=None,trusted_certs=<?>,updated_at=2025-10-09T16:35:38Z,user_data=None,user_id='685b4924c5a04af7ae6f4a328bb50f14',uuid=85c9d87b-0e28-425f-b54e-c14066ba6918,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "2cf33271-ebfe-4d2e-9668-8905e0bc34b8", "address": "fa:16:3e:5a:42:08", "network": {"id": "cf3aa351-4d26-41f3-8cb5-1ff2d3d995c8", "bridge": "br-int", "label": "tempest-TestExecuteWorkloadStabilizationStrategy-105109230-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "5b6d650a71ca47d58b10f5fd874c4898", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system"}, "devname": "tap2cf33271-eb", "ovs_interfaceid": "2cf33271-ebfe-4d2e-9668-8905e0bc34b8", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {"os_vif_delegation": true}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.12/site-packages/nova/virt/libvirt/vif.py:574
Oct 09 16:37:16 compute-0 nova_compute[117331]: 2025-10-09 16:37:16.144 2 DEBUG nova.network.os_vif_util [None req-fe59cf1f-038c-48fc-a686-85e22eb4925e 005258eefa5345409efe13f6a1de22ab c076c42271004e7aa86e84416e33f826 - - default default] Converting VIF {"id": "2cf33271-ebfe-4d2e-9668-8905e0bc34b8", "address": "fa:16:3e:5a:42:08", "network": {"id": "cf3aa351-4d26-41f3-8cb5-1ff2d3d995c8", "bridge": "br-int", "label": "tempest-TestExecuteWorkloadStabilizationStrategy-105109230-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "5b6d650a71ca47d58b10f5fd874c4898", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system"}, "devname": "tap2cf33271-eb", "ovs_interfaceid": "2cf33271-ebfe-4d2e-9668-8905e0bc34b8", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {"os_vif_delegation": true}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.12/site-packages/nova/network/os_vif_util.py:511
Oct 09 16:37:16 compute-0 nova_compute[117331]: 2025-10-09 16:37:16.145 2 DEBUG nova.network.os_vif_util [None req-fe59cf1f-038c-48fc-a686-85e22eb4925e 005258eefa5345409efe13f6a1de22ab c076c42271004e7aa86e84416e33f826 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:5a:42:08,bridge_name='br-int',has_traffic_filtering=True,id=2cf33271-ebfe-4d2e-9668-8905e0bc34b8,network=Network(cf3aa351-4d26-41f3-8cb5-1ff2d3d995c8),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap2cf33271-eb') nova_to_osvif_vif /usr/lib/python3.12/site-packages/nova/network/os_vif_util.py:548
Oct 09 16:37:16 compute-0 nova_compute[117331]: 2025-10-09 16:37:16.145 2 DEBUG nova.virt.libvirt.migration [None req-fe59cf1f-038c-48fc-a686-85e22eb4925e 005258eefa5345409efe13f6a1de22ab c076c42271004e7aa86e84416e33f826 - - default default] [instance: 85c9d87b-0e28-425f-b54e-c14066ba6918] Updating guest XML with vif config: <interface type="ethernet">
Oct 09 16:37:16 compute-0 nova_compute[117331]:   <mac address="fa:16:3e:5a:42:08"/>
Oct 09 16:37:16 compute-0 nova_compute[117331]:   <model type="virtio"/>
Oct 09 16:37:16 compute-0 nova_compute[117331]:   <driver name="vhost" rx_queue_size="512"/>
Oct 09 16:37:16 compute-0 nova_compute[117331]:   <mtu size="1442"/>
Oct 09 16:37:16 compute-0 nova_compute[117331]:   <target dev="tap2cf33271-eb"/>
Oct 09 16:37:16 compute-0 nova_compute[117331]: </interface>
Oct 09 16:37:16 compute-0 nova_compute[117331]:  _update_vif_xml /usr/lib/python3.12/site-packages/nova/virt/libvirt/migration.py:534
Oct 09 16:37:16 compute-0 nova_compute[117331]: 2025-10-09 16:37:16.146 2 DEBUG nova.virt.libvirt.migration [None req-fe59cf1f-038c-48fc-a686-85e22eb4925e 005258eefa5345409efe13f6a1de22ab c076c42271004e7aa86e84416e33f826 - - default default] _remove_cpu_shared_set_xml input xml=<domain type="kvm">
Oct 09 16:37:16 compute-0 nova_compute[117331]:   <name>instance-0000001a</name>
Oct 09 16:37:16 compute-0 nova_compute[117331]:   <uuid>85c9d87b-0e28-425f-b54e-c14066ba6918</uuid>
Oct 09 16:37:16 compute-0 nova_compute[117331]:   <metadata>
Oct 09 16:37:16 compute-0 nova_compute[117331]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Oct 09 16:37:16 compute-0 nova_compute[117331]:       <nova:package version="32.1.0-0.20251008143712.076498e.el10"/>
Oct 09 16:37:16 compute-0 nova_compute[117331]:       <nova:name>tempest-TestExecuteWorkloadStabilizationStrategy-server-512585471</nova:name>
Oct 09 16:37:16 compute-0 nova_compute[117331]:       <nova:creationTime>2025-10-09 16:35:32</nova:creationTime>
Oct 09 16:37:16 compute-0 nova_compute[117331]:       <nova:flavor name="m1.nano" id="5aeb5bf9-70f3-4ab6-b418-2c3319a7d2b3">
Oct 09 16:37:16 compute-0 nova_compute[117331]:         <nova:memory>128</nova:memory>
Oct 09 16:37:16 compute-0 nova_compute[117331]:         <nova:disk>1</nova:disk>
Oct 09 16:37:16 compute-0 nova_compute[117331]:         <nova:swap>0</nova:swap>
Oct 09 16:37:16 compute-0 nova_compute[117331]:         <nova:ephemeral>0</nova:ephemeral>
Oct 09 16:37:16 compute-0 nova_compute[117331]:         <nova:vcpus>1</nova:vcpus>
Oct 09 16:37:16 compute-0 nova_compute[117331]:         <nova:extraSpecs>
Oct 09 16:37:16 compute-0 nova_compute[117331]:           <nova:extraSpec name="hw_rng:allowed">True</nova:extraSpec>
Oct 09 16:37:16 compute-0 nova_compute[117331]:         </nova:extraSpecs>
Oct 09 16:37:16 compute-0 nova_compute[117331]:       </nova:flavor>
Oct 09 16:37:16 compute-0 nova_compute[117331]:       <nova:image uuid="b7d6e0af-25e4-4227-9dc6-43143898ceee">
Oct 09 16:37:16 compute-0 nova_compute[117331]:         <nova:containerFormat>bare</nova:containerFormat>
Oct 09 16:37:16 compute-0 nova_compute[117331]:         <nova:diskFormat>qcow2</nova:diskFormat>
Oct 09 16:37:16 compute-0 nova_compute[117331]:         <nova:minDisk>1</nova:minDisk>
Oct 09 16:37:16 compute-0 nova_compute[117331]:         <nova:minRam>0</nova:minRam>
Oct 09 16:37:16 compute-0 nova_compute[117331]:         <nova:properties>
Oct 09 16:37:16 compute-0 nova_compute[117331]:           <nova:property name="hw_rng_model">virtio</nova:property>
Oct 09 16:37:16 compute-0 nova_compute[117331]:         </nova:properties>
Oct 09 16:37:16 compute-0 nova_compute[117331]:       </nova:image>
Oct 09 16:37:16 compute-0 nova_compute[117331]:       <nova:owner>
Oct 09 16:37:16 compute-0 nova_compute[117331]:         <nova:user uuid="685b4924c5a04af7ae6f4a328bb50f14">tempest-TestExecuteWorkloadStabilizationStrategy-1000021673-project-admin</nova:user>
Oct 09 16:37:16 compute-0 nova_compute[117331]:         <nova:project uuid="a60acfe52e4b4b7f912654a59f0978b7">tempest-TestExecuteWorkloadStabilizationStrategy-1000021673</nova:project>
Oct 09 16:37:16 compute-0 nova_compute[117331]:       </nova:owner>
Oct 09 16:37:16 compute-0 nova_compute[117331]:       <nova:root type="image" uuid="b7d6e0af-25e4-4227-9dc6-43143898ceee"/>
Oct 09 16:37:16 compute-0 nova_compute[117331]:       <nova:ports>
Oct 09 16:37:16 compute-0 nova_compute[117331]:         <nova:port uuid="2cf33271-ebfe-4d2e-9668-8905e0bc34b8">
Oct 09 16:37:16 compute-0 nova_compute[117331]:           <nova:ip type="fixed" address="10.100.0.7" ipVersion="4"/>
Oct 09 16:37:16 compute-0 nova_compute[117331]:         </nova:port>
Oct 09 16:37:16 compute-0 nova_compute[117331]:       </nova:ports>
Oct 09 16:37:16 compute-0 nova_compute[117331]:     </nova:instance>
Oct 09 16:37:16 compute-0 nova_compute[117331]:   </metadata>
Oct 09 16:37:16 compute-0 nova_compute[117331]:   <memory unit="KiB">131072</memory>
Oct 09 16:37:16 compute-0 nova_compute[117331]:   <currentMemory unit="KiB">131072</currentMemory>
Oct 09 16:37:16 compute-0 nova_compute[117331]:   <vcpu placement="static">1</vcpu>
Oct 09 16:37:16 compute-0 nova_compute[117331]:   <resource>
Oct 09 16:37:16 compute-0 nova_compute[117331]:     <partition>/machine</partition>
Oct 09 16:37:16 compute-0 nova_compute[117331]:   </resource>
Oct 09 16:37:16 compute-0 nova_compute[117331]:   <sysinfo type="smbios">
Oct 09 16:37:16 compute-0 nova_compute[117331]:     <system>
Oct 09 16:37:16 compute-0 nova_compute[117331]:       <entry name="manufacturer">RDO</entry>
Oct 09 16:37:16 compute-0 nova_compute[117331]:       <entry name="product">OpenStack Compute</entry>
Oct 09 16:37:16 compute-0 nova_compute[117331]:       <entry name="version">32.1.0-0.20251008143712.076498e.el10</entry>
Oct 09 16:37:16 compute-0 nova_compute[117331]:       <entry name="serial">85c9d87b-0e28-425f-b54e-c14066ba6918</entry>
Oct 09 16:37:16 compute-0 nova_compute[117331]:       <entry name="uuid">85c9d87b-0e28-425f-b54e-c14066ba6918</entry>
Oct 09 16:37:16 compute-0 nova_compute[117331]:       <entry name="family">Virtual Machine</entry>
Oct 09 16:37:16 compute-0 nova_compute[117331]:     </system>
Oct 09 16:37:16 compute-0 nova_compute[117331]:   </sysinfo>
Oct 09 16:37:16 compute-0 nova_compute[117331]:   <os>
Oct 09 16:37:16 compute-0 nova_compute[117331]:     <type arch="x86_64" machine="pc-q35-rhel9.6.0">hvm</type>
Oct 09 16:37:16 compute-0 nova_compute[117331]:     <boot dev="hd"/>
Oct 09 16:37:16 compute-0 nova_compute[117331]:     <smbios mode="sysinfo"/>
Oct 09 16:37:16 compute-0 nova_compute[117331]:   </os>
Oct 09 16:37:16 compute-0 nova_compute[117331]:   <features>
Oct 09 16:37:16 compute-0 nova_compute[117331]:     <acpi/>
Oct 09 16:37:16 compute-0 nova_compute[117331]:     <apic/>
Oct 09 16:37:16 compute-0 nova_compute[117331]:     <vmcoreinfo state="on"/>
Oct 09 16:37:16 compute-0 nova_compute[117331]:   </features>
Oct 09 16:37:16 compute-0 nova_compute[117331]:   <cpu mode="host-model" check="partial">
Oct 09 16:37:16 compute-0 nova_compute[117331]:     <topology sockets="1" dies="1" clusters="1" cores="1" threads="1"/>
Oct 09 16:37:16 compute-0 nova_compute[117331]:   </cpu>
Oct 09 16:37:16 compute-0 nova_compute[117331]:   <clock offset="utc">
Oct 09 16:37:16 compute-0 nova_compute[117331]:     <timer name="pit" tickpolicy="delay"/>
Oct 09 16:37:16 compute-0 nova_compute[117331]:     <timer name="rtc" tickpolicy="catchup"/>
Oct 09 16:37:16 compute-0 nova_compute[117331]:     <timer name="hpet" present="no"/>
Oct 09 16:37:16 compute-0 nova_compute[117331]:   </clock>
Oct 09 16:37:16 compute-0 nova_compute[117331]:   <on_poweroff>destroy</on_poweroff>
Oct 09 16:37:16 compute-0 nova_compute[117331]:   <on_reboot>restart</on_reboot>
Oct 09 16:37:16 compute-0 nova_compute[117331]:   <on_crash>destroy</on_crash>
Oct 09 16:37:16 compute-0 nova_compute[117331]:   <devices>
Oct 09 16:37:16 compute-0 nova_compute[117331]:     <emulator>/usr/libexec/qemu-kvm</emulator>
Oct 09 16:37:16 compute-0 nova_compute[117331]:     <disk type="file" device="disk">
Oct 09 16:37:16 compute-0 nova_compute[117331]:       <driver name="qemu" type="qcow2" cache="none"/>
Oct 09 16:37:16 compute-0 nova_compute[117331]:       <source file="/var/lib/nova/instances/85c9d87b-0e28-425f-b54e-c14066ba6918/disk"/>
Oct 09 16:37:16 compute-0 nova_compute[117331]:       <target dev="vda" bus="virtio"/>
Oct 09 16:37:16 compute-0 nova_compute[117331]:       <address type="pci" domain="0x0000" bus="0x03" slot="0x00" function="0x0"/>
Oct 09 16:37:16 compute-0 nova_compute[117331]:     </disk>
Oct 09 16:37:16 compute-0 nova_compute[117331]:     <disk type="file" device="cdrom">
Oct 09 16:37:16 compute-0 nova_compute[117331]:       <driver name="qemu" type="raw" cache="none"/>
Oct 09 16:37:16 compute-0 nova_compute[117331]:       <source file="/var/lib/nova/instances/85c9d87b-0e28-425f-b54e-c14066ba6918/disk.config"/>
Oct 09 16:37:16 compute-0 nova_compute[117331]:       <target dev="sda" bus="sata"/>
Oct 09 16:37:16 compute-0 nova_compute[117331]:       <readonly/>
Oct 09 16:37:16 compute-0 nova_compute[117331]:       <address type="drive" controller="0" bus="0" target="0" unit="0"/>
Oct 09 16:37:16 compute-0 nova_compute[117331]:     </disk>
Oct 09 16:37:16 compute-0 nova_compute[117331]:     <controller type="pci" index="0" model="pcie-root"/>
Oct 09 16:37:16 compute-0 nova_compute[117331]:     <controller type="pci" index="1" model="pcie-root-port">
Oct 09 16:37:16 compute-0 nova_compute[117331]:       <model name="pcie-root-port"/>
Oct 09 16:37:16 compute-0 nova_compute[117331]:       <target chassis="1" port="0x10"/>
Oct 09 16:37:16 compute-0 nova_compute[117331]:       <address type="pci" domain="0x0000" bus="0x00" slot="0x02" function="0x0" multifunction="on"/>
Oct 09 16:37:16 compute-0 nova_compute[117331]:     </controller>
Oct 09 16:37:16 compute-0 nova_compute[117331]:     <controller type="pci" index="2" model="pcie-root-port">
Oct 09 16:37:16 compute-0 nova_compute[117331]:       <model name="pcie-root-port"/>
Oct 09 16:37:16 compute-0 nova_compute[117331]:       <target chassis="2" port="0x11"/>
Oct 09 16:37:16 compute-0 nova_compute[117331]:       <address type="pci" domain="0x0000" bus="0x00" slot="0x02" function="0x1"/>
Oct 09 16:37:16 compute-0 nova_compute[117331]:     </controller>
Oct 09 16:37:16 compute-0 nova_compute[117331]:     <controller type="pci" index="3" model="pcie-root-port">
Oct 09 16:37:16 compute-0 nova_compute[117331]:       <model name="pcie-root-port"/>
Oct 09 16:37:16 compute-0 nova_compute[117331]:       <target chassis="3" port="0x12"/>
Oct 09 16:37:16 compute-0 nova_compute[117331]:       <address type="pci" domain="0x0000" bus="0x00" slot="0x02" function="0x2"/>
Oct 09 16:37:16 compute-0 nova_compute[117331]:     </controller>
Oct 09 16:37:16 compute-0 nova_compute[117331]:     <controller type="pci" index="4" model="pcie-root-port">
Oct 09 16:37:16 compute-0 nova_compute[117331]:       <model name="pcie-root-port"/>
Oct 09 16:37:16 compute-0 nova_compute[117331]:       <target chassis="4" port="0x13"/>
Oct 09 16:37:16 compute-0 nova_compute[117331]:       <address type="pci" domain="0x0000" bus="0x00" slot="0x02" function="0x3"/>
Oct 09 16:37:16 compute-0 nova_compute[117331]:     </controller>
Oct 09 16:37:16 compute-0 nova_compute[117331]:     <controller type="pci" index="5" model="pcie-root-port">
Oct 09 16:37:16 compute-0 nova_compute[117331]:       <model name="pcie-root-port"/>
Oct 09 16:37:16 compute-0 nova_compute[117331]:       <target chassis="5" port="0x14"/>
Oct 09 16:37:16 compute-0 nova_compute[117331]:       <address type="pci" domain="0x0000" bus="0x00" slot="0x02" function="0x4"/>
Oct 09 16:37:16 compute-0 nova_compute[117331]:     </controller>
Oct 09 16:37:16 compute-0 nova_compute[117331]:     <controller type="pci" index="6" model="pcie-root-port">
Oct 09 16:37:16 compute-0 nova_compute[117331]:       <model name="pcie-root-port"/>
Oct 09 16:37:16 compute-0 nova_compute[117331]:       <target chassis="6" port="0x15"/>
Oct 09 16:37:16 compute-0 nova_compute[117331]:       <address type="pci" domain="0x0000" bus="0x00" slot="0x02" function="0x5"/>
Oct 09 16:37:16 compute-0 nova_compute[117331]:     </controller>
Oct 09 16:37:16 compute-0 nova_compute[117331]:     <controller type="pci" index="7" model="pcie-root-port">
Oct 09 16:37:16 compute-0 nova_compute[117331]:       <model name="pcie-root-port"/>
Oct 09 16:37:16 compute-0 nova_compute[117331]:       <target chassis="7" port="0x16"/>
Oct 09 16:37:16 compute-0 nova_compute[117331]:       <address type="pci" domain="0x0000" bus="0x00" slot="0x02" function="0x6"/>
Oct 09 16:37:16 compute-0 nova_compute[117331]:     </controller>
Oct 09 16:37:16 compute-0 nova_compute[117331]:     <controller type="pci" index="8" model="pcie-root-port">
Oct 09 16:37:16 compute-0 nova_compute[117331]:       <model name="pcie-root-port"/>
Oct 09 16:37:16 compute-0 nova_compute[117331]:       <target chassis="8" port="0x17"/>
Oct 09 16:37:16 compute-0 nova_compute[117331]:       <address type="pci" domain="0x0000" bus="0x00" slot="0x02" function="0x7"/>
Oct 09 16:37:16 compute-0 nova_compute[117331]:     </controller>
Oct 09 16:37:16 compute-0 nova_compute[117331]:     <controller type="pci" index="9" model="pcie-root-port">
Oct 09 16:37:16 compute-0 nova_compute[117331]:       <model name="pcie-root-port"/>
Oct 09 16:37:16 compute-0 nova_compute[117331]:       <target chassis="9" port="0x18"/>
Oct 09 16:37:16 compute-0 nova_compute[117331]:       <address type="pci" domain="0x0000" bus="0x00" slot="0x03" function="0x0" multifunction="on"/>
Oct 09 16:37:16 compute-0 nova_compute[117331]:     </controller>
Oct 09 16:37:16 compute-0 nova_compute[117331]:     <controller type="pci" index="10" model="pcie-root-port">
Oct 09 16:37:16 compute-0 nova_compute[117331]:       <model name="pcie-root-port"/>
Oct 09 16:37:16 compute-0 nova_compute[117331]:       <target chassis="10" port="0x19"/>
Oct 09 16:37:16 compute-0 nova_compute[117331]:       <address type="pci" domain="0x0000" bus="0x00" slot="0x03" function="0x1"/>
Oct 09 16:37:16 compute-0 nova_compute[117331]:     </controller>
Oct 09 16:37:16 compute-0 nova_compute[117331]:     <controller type="pci" index="11" model="pcie-root-port">
Oct 09 16:37:16 compute-0 nova_compute[117331]:       <model name="pcie-root-port"/>
Oct 09 16:37:16 compute-0 nova_compute[117331]:       <target chassis="11" port="0x1a"/>
Oct 09 16:37:16 compute-0 nova_compute[117331]:       <address type="pci" domain="0x0000" bus="0x00" slot="0x03" function="0x2"/>
Oct 09 16:37:16 compute-0 nova_compute[117331]:     </controller>
Oct 09 16:37:16 compute-0 nova_compute[117331]:     <controller type="pci" index="12" model="pcie-root-port">
Oct 09 16:37:16 compute-0 nova_compute[117331]:       <model name="pcie-root-port"/>
Oct 09 16:37:16 compute-0 nova_compute[117331]:       <target chassis="12" port="0x1b"/>
Oct 09 16:37:16 compute-0 nova_compute[117331]:       <address type="pci" domain="0x0000" bus="0x00" slot="0x03" function="0x3"/>
Oct 09 16:37:16 compute-0 nova_compute[117331]:     </controller>
Oct 09 16:37:16 compute-0 nova_compute[117331]:     <controller type="pci" index="13" model="pcie-root-port">
Oct 09 16:37:16 compute-0 nova_compute[117331]:       <model name="pcie-root-port"/>
Oct 09 16:37:16 compute-0 nova_compute[117331]:       <target chassis="13" port="0x1c"/>
Oct 09 16:37:16 compute-0 nova_compute[117331]:       <address type="pci" domain="0x0000" bus="0x00" slot="0x03" function="0x4"/>
Oct 09 16:37:16 compute-0 nova_compute[117331]:     </controller>
Oct 09 16:37:16 compute-0 nova_compute[117331]:     <controller type="pci" index="14" model="pcie-root-port">
Oct 09 16:37:16 compute-0 nova_compute[117331]:       <model name="pcie-root-port"/>
Oct 09 16:37:16 compute-0 nova_compute[117331]:       <target chassis="14" port="0x1d"/>
Oct 09 16:37:16 compute-0 nova_compute[117331]:       <address type="pci" domain="0x0000" bus="0x00" slot="0x03" function="0x5"/>
Oct 09 16:37:16 compute-0 nova_compute[117331]:     </controller>
Oct 09 16:37:16 compute-0 nova_compute[117331]:     <controller type="pci" index="15" model="pcie-root-port">
Oct 09 16:37:16 compute-0 nova_compute[117331]:       <model name="pcie-root-port"/>
Oct 09 16:37:16 compute-0 nova_compute[117331]:       <target chassis="15" port="0x1e"/>
Oct 09 16:37:16 compute-0 nova_compute[117331]:       <address type="pci" domain="0x0000" bus="0x00" slot="0x03" function="0x6"/>
Oct 09 16:37:16 compute-0 nova_compute[117331]:     </controller>
Oct 09 16:37:16 compute-0 nova_compute[117331]:     <controller type="pci" index="16" model="pcie-root-port">
Oct 09 16:37:16 compute-0 nova_compute[117331]:       <model name="pcie-root-port"/>
Oct 09 16:37:16 compute-0 nova_compute[117331]:       <target chassis="16" port="0x1f"/>
Oct 09 16:37:16 compute-0 nova_compute[117331]:       <address type="pci" domain="0x0000" bus="0x00" slot="0x03" function="0x7"/>
Oct 09 16:37:16 compute-0 nova_compute[117331]:     </controller>
Oct 09 16:37:16 compute-0 nova_compute[117331]:     <controller type="pci" index="17" model="pcie-root-port">
Oct 09 16:37:16 compute-0 nova_compute[117331]:       <model name="pcie-root-port"/>
Oct 09 16:37:16 compute-0 nova_compute[117331]:       <target chassis="17" port="0x20"/>
Oct 09 16:37:16 compute-0 nova_compute[117331]:       <address type="pci" domain="0x0000" bus="0x00" slot="0x04" function="0x0" multifunction="on"/>
Oct 09 16:37:16 compute-0 nova_compute[117331]:     </controller>
Oct 09 16:37:16 compute-0 nova_compute[117331]:     <controller type="pci" index="18" model="pcie-root-port">
Oct 09 16:37:16 compute-0 nova_compute[117331]:       <model name="pcie-root-port"/>
Oct 09 16:37:16 compute-0 nova_compute[117331]:       <target chassis="18" port="0x21"/>
Oct 09 16:37:16 compute-0 nova_compute[117331]:       <address type="pci" domain="0x0000" bus="0x00" slot="0x04" function="0x1"/>
Oct 09 16:37:16 compute-0 nova_compute[117331]:     </controller>
Oct 09 16:37:16 compute-0 nova_compute[117331]:     <controller type="pci" index="19" model="pcie-root-port">
Oct 09 16:37:16 compute-0 nova_compute[117331]:       <model name="pcie-root-port"/>
Oct 09 16:37:16 compute-0 nova_compute[117331]:       <target chassis="19" port="0x22"/>
Oct 09 16:37:16 compute-0 nova_compute[117331]:       <address type="pci" domain="0x0000" bus="0x00" slot="0x04" function="0x2"/>
Oct 09 16:37:16 compute-0 nova_compute[117331]:     </controller>
Oct 09 16:37:16 compute-0 nova_compute[117331]:     <controller type="pci" index="20" model="pcie-root-port">
Oct 09 16:37:16 compute-0 nova_compute[117331]:       <model name="pcie-root-port"/>
Oct 09 16:37:16 compute-0 nova_compute[117331]:       <target chassis="20" port="0x23"/>
Oct 09 16:37:16 compute-0 nova_compute[117331]:       <address type="pci" domain="0x0000" bus="0x00" slot="0x04" function="0x3"/>
Oct 09 16:37:16 compute-0 nova_compute[117331]:     </controller>
Oct 09 16:37:16 compute-0 nova_compute[117331]:     <controller type="pci" index="21" model="pcie-root-port">
Oct 09 16:37:16 compute-0 nova_compute[117331]:       <model name="pcie-root-port"/>
Oct 09 16:37:16 compute-0 nova_compute[117331]:       <target chassis="21" port="0x24"/>
Oct 09 16:37:16 compute-0 nova_compute[117331]:       <address type="pci" domain="0x0000" bus="0x00" slot="0x04" function="0x4"/>
Oct 09 16:37:16 compute-0 nova_compute[117331]:     </controller>
Oct 09 16:37:16 compute-0 nova_compute[117331]:     <controller type="pci" index="22" model="pcie-root-port">
Oct 09 16:37:16 compute-0 nova_compute[117331]:       <model name="pcie-root-port"/>
Oct 09 16:37:16 compute-0 nova_compute[117331]:       <target chassis="22" port="0x25"/>
Oct 09 16:37:16 compute-0 nova_compute[117331]:       <address type="pci" domain="0x0000" bus="0x00" slot="0x04" function="0x5"/>
Oct 09 16:37:16 compute-0 nova_compute[117331]:     </controller>
Oct 09 16:37:16 compute-0 nova_compute[117331]:     <controller type="pci" index="23" model="pcie-root-port">
Oct 09 16:37:16 compute-0 nova_compute[117331]:       <model name="pcie-root-port"/>
Oct 09 16:37:16 compute-0 nova_compute[117331]:       <target chassis="23" port="0x26"/>
Oct 09 16:37:16 compute-0 nova_compute[117331]:       <address type="pci" domain="0x0000" bus="0x00" slot="0x04" function="0x6"/>
Oct 09 16:37:16 compute-0 nova_compute[117331]:     </controller>
Oct 09 16:37:16 compute-0 nova_compute[117331]:     <controller type="pci" index="24" model="pcie-root-port">
Oct 09 16:37:16 compute-0 nova_compute[117331]:       <model name="pcie-root-port"/>
Oct 09 16:37:16 compute-0 nova_compute[117331]:       <target chassis="24" port="0x27"/>
Oct 09 16:37:16 compute-0 nova_compute[117331]:       <address type="pci" domain="0x0000" bus="0x00" slot="0x04" function="0x7"/>
Oct 09 16:37:16 compute-0 nova_compute[117331]:     </controller>
Oct 09 16:37:16 compute-0 nova_compute[117331]:     <controller type="pci" index="25" model="pcie-root-port">
Oct 09 16:37:16 compute-0 nova_compute[117331]:       <model name="pcie-root-port"/>
Oct 09 16:37:16 compute-0 nova_compute[117331]:       <target chassis="25" port="0x28"/>
Oct 09 16:37:16 compute-0 nova_compute[117331]:       <address type="pci" domain="0x0000" bus="0x00" slot="0x05" function="0x0"/>
Oct 09 16:37:16 compute-0 nova_compute[117331]:     </controller>
Oct 09 16:37:16 compute-0 nova_compute[117331]:     <controller type="pci" index="26" model="pcie-to-pci-bridge">
Oct 09 16:37:16 compute-0 nova_compute[117331]:       <model name="pcie-pci-bridge"/>
Oct 09 16:37:16 compute-0 nova_compute[117331]:       <address type="pci" domain="0x0000" bus="0x01" slot="0x00" function="0x0"/>
Oct 09 16:37:16 compute-0 nova_compute[117331]:     </controller>
Oct 09 16:37:16 compute-0 nova_compute[117331]:     <controller type="usb" index="0" model="piix3-uhci">
Oct 09 16:37:16 compute-0 nova_compute[117331]:       <address type="pci" domain="0x0000" bus="0x1a" slot="0x01" function="0x0"/>
Oct 09 16:37:16 compute-0 nova_compute[117331]:     </controller>
Oct 09 16:37:16 compute-0 nova_compute[117331]:     <controller type="sata" index="0">
Oct 09 16:37:16 compute-0 nova_compute[117331]:       <address type="pci" domain="0x0000" bus="0x00" slot="0x1f" function="0x2"/>
Oct 09 16:37:16 compute-0 nova_compute[117331]:     </controller>
Oct 09 16:37:16 compute-0 nova_compute[117331]:     <interface type="ethernet">
Oct 09 16:37:16 compute-0 nova_compute[117331]:       <mac address="fa:16:3e:5a:42:08"/>
Oct 09 16:37:16 compute-0 nova_compute[117331]:       <model type="virtio"/>
Oct 09 16:37:16 compute-0 nova_compute[117331]:       <driver name="vhost" rx_queue_size="512"/>
Oct 09 16:37:16 compute-0 nova_compute[117331]:       <mtu size="1442"/>
Oct 09 16:37:16 compute-0 nova_compute[117331]:       <target dev="tap2cf33271-eb"/>
Oct 09 16:37:16 compute-0 nova_compute[117331]:       <address type="pci" domain="0x0000" bus="0x02" slot="0x00" function="0x0"/>
Oct 09 16:37:16 compute-0 nova_compute[117331]:     </interface>
Oct 09 16:37:16 compute-0 nova_compute[117331]:     <serial type="pty">
Oct 09 16:37:16 compute-0 nova_compute[117331]:       <log file="/var/lib/nova/instances/85c9d87b-0e28-425f-b54e-c14066ba6918/console.log" append="off"/>
Oct 09 16:37:16 compute-0 nova_compute[117331]:       <target type="isa-serial" port="0">
Oct 09 16:37:16 compute-0 nova_compute[117331]:         <model name="isa-serial"/>
Oct 09 16:37:16 compute-0 nova_compute[117331]:       </target>
Oct 09 16:37:16 compute-0 nova_compute[117331]:     </serial>
Oct 09 16:37:16 compute-0 nova_compute[117331]:     <console type="pty">
Oct 09 16:37:16 compute-0 nova_compute[117331]:       <log file="/var/lib/nova/instances/85c9d87b-0e28-425f-b54e-c14066ba6918/console.log" append="off"/>
Oct 09 16:37:16 compute-0 nova_compute[117331]:       <target type="serial" port="0"/>
Oct 09 16:37:16 compute-0 nova_compute[117331]:     </console>
Oct 09 16:37:16 compute-0 nova_compute[117331]:     <input type="tablet" bus="usb">
Oct 09 16:37:16 compute-0 nova_compute[117331]:       <address type="usb" bus="0" port="1"/>
Oct 09 16:37:16 compute-0 nova_compute[117331]:     </input>
Oct 09 16:37:16 compute-0 nova_compute[117331]:     <input type="mouse" bus="ps2"/>
Oct 09 16:37:16 compute-0 nova_compute[117331]:     <graphics type="vnc" port="-1" autoport="yes" listen="::">
Oct 09 16:37:16 compute-0 nova_compute[117331]:       <listen type="address" address="::"/>
Oct 09 16:37:16 compute-0 nova_compute[117331]:     </graphics>
Oct 09 16:37:16 compute-0 nova_compute[117331]:     <video>
Oct 09 16:37:16 compute-0 nova_compute[117331]:       <model type="virtio" heads="1" primary="yes"/>
Oct 09 16:37:16 compute-0 nova_compute[117331]:       <address type="pci" domain="0x0000" bus="0x00" slot="0x01" function="0x0"/>
Oct 09 16:37:16 compute-0 nova_compute[117331]:     </video>
Oct 09 16:37:16 compute-0 nova_compute[117331]:     <memballoon model="virtio" autodeflate="on" freePageReporting="on">
Oct 09 16:37:16 compute-0 nova_compute[117331]:       <stats period="10"/>
Oct 09 16:37:16 compute-0 nova_compute[117331]:       <address type="pci" domain="0x0000" bus="0x04" slot="0x00" function="0x0"/>
Oct 09 16:37:16 compute-0 nova_compute[117331]:     </memballoon>
Oct 09 16:37:16 compute-0 nova_compute[117331]:     <rng model="virtio">
Oct 09 16:37:16 compute-0 nova_compute[117331]:       <backend model="random">/dev/urandom</backend>
Oct 09 16:37:16 compute-0 nova_compute[117331]:       <address type="pci" domain="0x0000" bus="0x05" slot="0x00" function="0x0"/>
Oct 09 16:37:16 compute-0 nova_compute[117331]:     </rng>
Oct 09 16:37:16 compute-0 nova_compute[117331]:   </devices>
Oct 09 16:37:16 compute-0 nova_compute[117331]:   <seclabel type="dynamic" model="selinux" relabel="yes"/>
Oct 09 16:37:16 compute-0 nova_compute[117331]: </domain>
Oct 09 16:37:16 compute-0 nova_compute[117331]:  _remove_cpu_shared_set_xml /usr/lib/python3.12/site-packages/nova/virt/libvirt/migration.py:241
Oct 09 16:37:16 compute-0 nova_compute[117331]: 2025-10-09 16:37:16.147 2 DEBUG nova.virt.libvirt.migration [None req-fe59cf1f-038c-48fc-a686-85e22eb4925e 005258eefa5345409efe13f6a1de22ab c076c42271004e7aa86e84416e33f826 - - default default] _remove_cpu_shared_set_xml output xml=<domain type="kvm">
Oct 09 16:37:16 compute-0 nova_compute[117331]:   <name>instance-0000001a</name>
Oct 09 16:37:16 compute-0 nova_compute[117331]:   <uuid>85c9d87b-0e28-425f-b54e-c14066ba6918</uuid>
Oct 09 16:37:16 compute-0 nova_compute[117331]:   <metadata>
Oct 09 16:37:16 compute-0 nova_compute[117331]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Oct 09 16:37:16 compute-0 nova_compute[117331]:       <nova:package version="32.1.0-0.20251008143712.076498e.el10"/>
Oct 09 16:37:16 compute-0 nova_compute[117331]:       <nova:name>tempest-TestExecuteWorkloadStabilizationStrategy-server-512585471</nova:name>
Oct 09 16:37:16 compute-0 nova_compute[117331]:       <nova:creationTime>2025-10-09 16:35:32</nova:creationTime>
Oct 09 16:37:16 compute-0 nova_compute[117331]:       <nova:flavor name="m1.nano" id="5aeb5bf9-70f3-4ab6-b418-2c3319a7d2b3">
Oct 09 16:37:16 compute-0 nova_compute[117331]:         <nova:memory>128</nova:memory>
Oct 09 16:37:16 compute-0 nova_compute[117331]:         <nova:disk>1</nova:disk>
Oct 09 16:37:16 compute-0 nova_compute[117331]:         <nova:swap>0</nova:swap>
Oct 09 16:37:16 compute-0 nova_compute[117331]:         <nova:ephemeral>0</nova:ephemeral>
Oct 09 16:37:16 compute-0 nova_compute[117331]:         <nova:vcpus>1</nova:vcpus>
Oct 09 16:37:16 compute-0 nova_compute[117331]:         <nova:extraSpecs>
Oct 09 16:37:16 compute-0 nova_compute[117331]:           <nova:extraSpec name="hw_rng:allowed">True</nova:extraSpec>
Oct 09 16:37:16 compute-0 nova_compute[117331]:         </nova:extraSpecs>
Oct 09 16:37:16 compute-0 nova_compute[117331]:       </nova:flavor>
Oct 09 16:37:16 compute-0 nova_compute[117331]:       <nova:image uuid="b7d6e0af-25e4-4227-9dc6-43143898ceee">
Oct 09 16:37:16 compute-0 nova_compute[117331]:         <nova:containerFormat>bare</nova:containerFormat>
Oct 09 16:37:16 compute-0 nova_compute[117331]:         <nova:diskFormat>qcow2</nova:diskFormat>
Oct 09 16:37:16 compute-0 nova_compute[117331]:         <nova:minDisk>1</nova:minDisk>
Oct 09 16:37:16 compute-0 nova_compute[117331]:         <nova:minRam>0</nova:minRam>
Oct 09 16:37:16 compute-0 nova_compute[117331]:         <nova:properties>
Oct 09 16:37:16 compute-0 nova_compute[117331]:           <nova:property name="hw_rng_model">virtio</nova:property>
Oct 09 16:37:16 compute-0 nova_compute[117331]:         </nova:properties>
Oct 09 16:37:16 compute-0 nova_compute[117331]:       </nova:image>
Oct 09 16:37:16 compute-0 nova_compute[117331]:       <nova:owner>
Oct 09 16:37:16 compute-0 nova_compute[117331]:         <nova:user uuid="685b4924c5a04af7ae6f4a328bb50f14">tempest-TestExecuteWorkloadStabilizationStrategy-1000021673-project-admin</nova:user>
Oct 09 16:37:16 compute-0 nova_compute[117331]:         <nova:project uuid="a60acfe52e4b4b7f912654a59f0978b7">tempest-TestExecuteWorkloadStabilizationStrategy-1000021673</nova:project>
Oct 09 16:37:16 compute-0 nova_compute[117331]:       </nova:owner>
Oct 09 16:37:16 compute-0 nova_compute[117331]:       <nova:root type="image" uuid="b7d6e0af-25e4-4227-9dc6-43143898ceee"/>
Oct 09 16:37:16 compute-0 nova_compute[117331]:       <nova:ports>
Oct 09 16:37:16 compute-0 nova_compute[117331]:         <nova:port uuid="2cf33271-ebfe-4d2e-9668-8905e0bc34b8">
Oct 09 16:37:16 compute-0 nova_compute[117331]:           <nova:ip type="fixed" address="10.100.0.7" ipVersion="4"/>
Oct 09 16:37:16 compute-0 nova_compute[117331]:         </nova:port>
Oct 09 16:37:16 compute-0 nova_compute[117331]:       </nova:ports>
Oct 09 16:37:16 compute-0 nova_compute[117331]:     </nova:instance>
Oct 09 16:37:16 compute-0 nova_compute[117331]:   </metadata>
Oct 09 16:37:16 compute-0 nova_compute[117331]:   <memory unit="KiB">131072</memory>
Oct 09 16:37:16 compute-0 nova_compute[117331]:   <currentMemory unit="KiB">131072</currentMemory>
Oct 09 16:37:16 compute-0 nova_compute[117331]:   <vcpu placement="static">1</vcpu>
Oct 09 16:37:16 compute-0 nova_compute[117331]:   <resource>
Oct 09 16:37:16 compute-0 nova_compute[117331]:     <partition>/machine</partition>
Oct 09 16:37:16 compute-0 nova_compute[117331]:   </resource>
Oct 09 16:37:16 compute-0 nova_compute[117331]:   <sysinfo type="smbios">
Oct 09 16:37:16 compute-0 nova_compute[117331]:     <system>
Oct 09 16:37:16 compute-0 nova_compute[117331]:       <entry name="manufacturer">RDO</entry>
Oct 09 16:37:16 compute-0 nova_compute[117331]:       <entry name="product">OpenStack Compute</entry>
Oct 09 16:37:16 compute-0 nova_compute[117331]:       <entry name="version">32.1.0-0.20251008143712.076498e.el10</entry>
Oct 09 16:37:16 compute-0 nova_compute[117331]:       <entry name="serial">85c9d87b-0e28-425f-b54e-c14066ba6918</entry>
Oct 09 16:37:16 compute-0 nova_compute[117331]:       <entry name="uuid">85c9d87b-0e28-425f-b54e-c14066ba6918</entry>
Oct 09 16:37:16 compute-0 nova_compute[117331]:       <entry name="family">Virtual Machine</entry>
Oct 09 16:37:16 compute-0 nova_compute[117331]:     </system>
Oct 09 16:37:16 compute-0 nova_compute[117331]:   </sysinfo>
Oct 09 16:37:16 compute-0 nova_compute[117331]:   <os>
Oct 09 16:37:16 compute-0 nova_compute[117331]:     <type arch="x86_64" machine="pc-q35-rhel9.6.0">hvm</type>
Oct 09 16:37:16 compute-0 nova_compute[117331]:     <boot dev="hd"/>
Oct 09 16:37:16 compute-0 nova_compute[117331]:     <smbios mode="sysinfo"/>
Oct 09 16:37:16 compute-0 nova_compute[117331]:   </os>
Oct 09 16:37:16 compute-0 nova_compute[117331]:   <features>
Oct 09 16:37:16 compute-0 nova_compute[117331]:     <acpi/>
Oct 09 16:37:16 compute-0 nova_compute[117331]:     <apic/>
Oct 09 16:37:16 compute-0 nova_compute[117331]:     <vmcoreinfo state="on"/>
Oct 09 16:37:16 compute-0 nova_compute[117331]:   </features>
Oct 09 16:37:16 compute-0 nova_compute[117331]:   <cpu mode="host-model" check="partial">
Oct 09 16:37:16 compute-0 nova_compute[117331]:     <topology sockets="1" dies="1" clusters="1" cores="1" threads="1"/>
Oct 09 16:37:16 compute-0 nova_compute[117331]:   </cpu>
Oct 09 16:37:16 compute-0 nova_compute[117331]:   <clock offset="utc">
Oct 09 16:37:16 compute-0 nova_compute[117331]:     <timer name="pit" tickpolicy="delay"/>
Oct 09 16:37:16 compute-0 nova_compute[117331]:     <timer name="rtc" tickpolicy="catchup"/>
Oct 09 16:37:16 compute-0 nova_compute[117331]:     <timer name="hpet" present="no"/>
Oct 09 16:37:16 compute-0 nova_compute[117331]:   </clock>
Oct 09 16:37:16 compute-0 nova_compute[117331]:   <on_poweroff>destroy</on_poweroff>
Oct 09 16:37:16 compute-0 nova_compute[117331]:   <on_reboot>restart</on_reboot>
Oct 09 16:37:16 compute-0 nova_compute[117331]:   <on_crash>destroy</on_crash>
Oct 09 16:37:16 compute-0 nova_compute[117331]:   <devices>
Oct 09 16:37:16 compute-0 nova_compute[117331]:     <emulator>/usr/libexec/qemu-kvm</emulator>
Oct 09 16:37:16 compute-0 nova_compute[117331]:     <disk type="file" device="disk">
Oct 09 16:37:16 compute-0 nova_compute[117331]:       <driver name="qemu" type="qcow2" cache="none"/>
Oct 09 16:37:16 compute-0 nova_compute[117331]:       <source file="/var/lib/nova/instances/85c9d87b-0e28-425f-b54e-c14066ba6918/disk"/>
Oct 09 16:37:16 compute-0 nova_compute[117331]:       <target dev="vda" bus="virtio"/>
Oct 09 16:37:16 compute-0 nova_compute[117331]:       <address type="pci" domain="0x0000" bus="0x03" slot="0x00" function="0x0"/>
Oct 09 16:37:16 compute-0 nova_compute[117331]:     </disk>
Oct 09 16:37:16 compute-0 nova_compute[117331]:     <disk type="file" device="cdrom">
Oct 09 16:37:16 compute-0 nova_compute[117331]:       <driver name="qemu" type="raw" cache="none"/>
Oct 09 16:37:16 compute-0 nova_compute[117331]:       <source file="/var/lib/nova/instances/85c9d87b-0e28-425f-b54e-c14066ba6918/disk.config"/>
Oct 09 16:37:16 compute-0 nova_compute[117331]:       <target dev="sda" bus="sata"/>
Oct 09 16:37:16 compute-0 nova_compute[117331]:       <readonly/>
Oct 09 16:37:16 compute-0 nova_compute[117331]:       <address type="drive" controller="0" bus="0" target="0" unit="0"/>
Oct 09 16:37:16 compute-0 nova_compute[117331]:     </disk>
Oct 09 16:37:16 compute-0 nova_compute[117331]:     <controller type="pci" index="0" model="pcie-root"/>
Oct 09 16:37:16 compute-0 nova_compute[117331]:     <controller type="pci" index="1" model="pcie-root-port">
Oct 09 16:37:16 compute-0 nova_compute[117331]:       <model name="pcie-root-port"/>
Oct 09 16:37:16 compute-0 nova_compute[117331]:       <target chassis="1" port="0x10"/>
Oct 09 16:37:16 compute-0 nova_compute[117331]:       <address type="pci" domain="0x0000" bus="0x00" slot="0x02" function="0x0" multifunction="on"/>
Oct 09 16:37:16 compute-0 nova_compute[117331]:     </controller>
Oct 09 16:37:16 compute-0 nova_compute[117331]:     <controller type="pci" index="2" model="pcie-root-port">
Oct 09 16:37:16 compute-0 nova_compute[117331]:       <model name="pcie-root-port"/>
Oct 09 16:37:16 compute-0 nova_compute[117331]:       <target chassis="2" port="0x11"/>
Oct 09 16:37:16 compute-0 nova_compute[117331]:       <address type="pci" domain="0x0000" bus="0x00" slot="0x02" function="0x1"/>
Oct 09 16:37:16 compute-0 nova_compute[117331]:     </controller>
Oct 09 16:37:16 compute-0 nova_compute[117331]:     <controller type="pci" index="3" model="pcie-root-port">
Oct 09 16:37:16 compute-0 nova_compute[117331]:       <model name="pcie-root-port"/>
Oct 09 16:37:16 compute-0 nova_compute[117331]:       <target chassis="3" port="0x12"/>
Oct 09 16:37:16 compute-0 nova_compute[117331]:       <address type="pci" domain="0x0000" bus="0x00" slot="0x02" function="0x2"/>
Oct 09 16:37:16 compute-0 nova_compute[117331]:     </controller>
Oct 09 16:37:16 compute-0 nova_compute[117331]:     <controller type="pci" index="4" model="pcie-root-port">
Oct 09 16:37:16 compute-0 nova_compute[117331]:       <model name="pcie-root-port"/>
Oct 09 16:37:16 compute-0 nova_compute[117331]:       <target chassis="4" port="0x13"/>
Oct 09 16:37:16 compute-0 nova_compute[117331]:       <address type="pci" domain="0x0000" bus="0x00" slot="0x02" function="0x3"/>
Oct 09 16:37:16 compute-0 nova_compute[117331]:     </controller>
Oct 09 16:37:16 compute-0 nova_compute[117331]:     <controller type="pci" index="5" model="pcie-root-port">
Oct 09 16:37:16 compute-0 nova_compute[117331]:       <model name="pcie-root-port"/>
Oct 09 16:37:16 compute-0 nova_compute[117331]:       <target chassis="5" port="0x14"/>
Oct 09 16:37:16 compute-0 nova_compute[117331]:       <address type="pci" domain="0x0000" bus="0x00" slot="0x02" function="0x4"/>
Oct 09 16:37:16 compute-0 nova_compute[117331]:     </controller>
Oct 09 16:37:16 compute-0 nova_compute[117331]:     <controller type="pci" index="6" model="pcie-root-port">
Oct 09 16:37:16 compute-0 nova_compute[117331]:       <model name="pcie-root-port"/>
Oct 09 16:37:16 compute-0 nova_compute[117331]:       <target chassis="6" port="0x15"/>
Oct 09 16:37:16 compute-0 nova_compute[117331]:       <address type="pci" domain="0x0000" bus="0x00" slot="0x02" function="0x5"/>
Oct 09 16:37:16 compute-0 nova_compute[117331]:     </controller>
Oct 09 16:37:16 compute-0 nova_compute[117331]:     <controller type="pci" index="7" model="pcie-root-port">
Oct 09 16:37:16 compute-0 nova_compute[117331]:       <model name="pcie-root-port"/>
Oct 09 16:37:16 compute-0 nova_compute[117331]:       <target chassis="7" port="0x16"/>
Oct 09 16:37:16 compute-0 nova_compute[117331]:       <address type="pci" domain="0x0000" bus="0x00" slot="0x02" function="0x6"/>
Oct 09 16:37:16 compute-0 nova_compute[117331]:     </controller>
Oct 09 16:37:16 compute-0 nova_compute[117331]:     <controller type="pci" index="8" model="pcie-root-port">
Oct 09 16:37:16 compute-0 nova_compute[117331]:       <model name="pcie-root-port"/>
Oct 09 16:37:16 compute-0 nova_compute[117331]:       <target chassis="8" port="0x17"/>
Oct 09 16:37:16 compute-0 nova_compute[117331]:       <address type="pci" domain="0x0000" bus="0x00" slot="0x02" function="0x7"/>
Oct 09 16:37:16 compute-0 nova_compute[117331]:     </controller>
Oct 09 16:37:16 compute-0 nova_compute[117331]:     <controller type="pci" index="9" model="pcie-root-port">
Oct 09 16:37:16 compute-0 nova_compute[117331]:       <model name="pcie-root-port"/>
Oct 09 16:37:16 compute-0 nova_compute[117331]:       <target chassis="9" port="0x18"/>
Oct 09 16:37:16 compute-0 nova_compute[117331]:       <address type="pci" domain="0x0000" bus="0x00" slot="0x03" function="0x0" multifunction="on"/>
Oct 09 16:37:16 compute-0 nova_compute[117331]:     </controller>
Oct 09 16:37:16 compute-0 nova_compute[117331]:     <controller type="pci" index="10" model="pcie-root-port">
Oct 09 16:37:16 compute-0 nova_compute[117331]:       <model name="pcie-root-port"/>
Oct 09 16:37:16 compute-0 nova_compute[117331]:       <target chassis="10" port="0x19"/>
Oct 09 16:37:16 compute-0 nova_compute[117331]:       <address type="pci" domain="0x0000" bus="0x00" slot="0x03" function="0x1"/>
Oct 09 16:37:16 compute-0 nova_compute[117331]:     </controller>
Oct 09 16:37:16 compute-0 nova_compute[117331]:     <controller type="pci" index="11" model="pcie-root-port">
Oct 09 16:37:16 compute-0 nova_compute[117331]:       <model name="pcie-root-port"/>
Oct 09 16:37:16 compute-0 nova_compute[117331]:       <target chassis="11" port="0x1a"/>
Oct 09 16:37:16 compute-0 nova_compute[117331]:       <address type="pci" domain="0x0000" bus="0x00" slot="0x03" function="0x2"/>
Oct 09 16:37:16 compute-0 nova_compute[117331]:     </controller>
Oct 09 16:37:16 compute-0 nova_compute[117331]:     <controller type="pci" index="12" model="pcie-root-port">
Oct 09 16:37:16 compute-0 nova_compute[117331]:       <model name="pcie-root-port"/>
Oct 09 16:37:16 compute-0 nova_compute[117331]:       <target chassis="12" port="0x1b"/>
Oct 09 16:37:16 compute-0 nova_compute[117331]:       <address type="pci" domain="0x0000" bus="0x00" slot="0x03" function="0x3"/>
Oct 09 16:37:16 compute-0 nova_compute[117331]:     </controller>
Oct 09 16:37:16 compute-0 nova_compute[117331]:     <controller type="pci" index="13" model="pcie-root-port">
Oct 09 16:37:16 compute-0 nova_compute[117331]:       <model name="pcie-root-port"/>
Oct 09 16:37:16 compute-0 nova_compute[117331]:       <target chassis="13" port="0x1c"/>
Oct 09 16:37:16 compute-0 nova_compute[117331]:       <address type="pci" domain="0x0000" bus="0x00" slot="0x03" function="0x4"/>
Oct 09 16:37:16 compute-0 nova_compute[117331]:     </controller>
Oct 09 16:37:16 compute-0 nova_compute[117331]:     <controller type="pci" index="14" model="pcie-root-port">
Oct 09 16:37:16 compute-0 nova_compute[117331]:       <model name="pcie-root-port"/>
Oct 09 16:37:16 compute-0 nova_compute[117331]:       <target chassis="14" port="0x1d"/>
Oct 09 16:37:16 compute-0 nova_compute[117331]:       <address type="pci" domain="0x0000" bus="0x00" slot="0x03" function="0x5"/>
Oct 09 16:37:16 compute-0 nova_compute[117331]:     </controller>
Oct 09 16:37:16 compute-0 nova_compute[117331]:     <controller type="pci" index="15" model="pcie-root-port">
Oct 09 16:37:16 compute-0 nova_compute[117331]:       <model name="pcie-root-port"/>
Oct 09 16:37:16 compute-0 nova_compute[117331]:       <target chassis="15" port="0x1e"/>
Oct 09 16:37:16 compute-0 nova_compute[117331]:       <address type="pci" domain="0x0000" bus="0x00" slot="0x03" function="0x6"/>
Oct 09 16:37:16 compute-0 nova_compute[117331]:     </controller>
Oct 09 16:37:16 compute-0 nova_compute[117331]:     <controller type="pci" index="16" model="pcie-root-port">
Oct 09 16:37:16 compute-0 nova_compute[117331]:       <model name="pcie-root-port"/>
Oct 09 16:37:16 compute-0 nova_compute[117331]:       <target chassis="16" port="0x1f"/>
Oct 09 16:37:16 compute-0 nova_compute[117331]:       <address type="pci" domain="0x0000" bus="0x00" slot="0x03" function="0x7"/>
Oct 09 16:37:16 compute-0 nova_compute[117331]:     </controller>
Oct 09 16:37:16 compute-0 nova_compute[117331]:     <controller type="pci" index="17" model="pcie-root-port">
Oct 09 16:37:16 compute-0 nova_compute[117331]:       <model name="pcie-root-port"/>
Oct 09 16:37:16 compute-0 nova_compute[117331]:       <target chassis="17" port="0x20"/>
Oct 09 16:37:16 compute-0 nova_compute[117331]:       <address type="pci" domain="0x0000" bus="0x00" slot="0x04" function="0x0" multifunction="on"/>
Oct 09 16:37:16 compute-0 nova_compute[117331]:     </controller>
Oct 09 16:37:16 compute-0 nova_compute[117331]:     <controller type="pci" index="18" model="pcie-root-port">
Oct 09 16:37:16 compute-0 nova_compute[117331]:       <model name="pcie-root-port"/>
Oct 09 16:37:16 compute-0 nova_compute[117331]:       <target chassis="18" port="0x21"/>
Oct 09 16:37:16 compute-0 nova_compute[117331]:       <address type="pci" domain="0x0000" bus="0x00" slot="0x04" function="0x1"/>
Oct 09 16:37:16 compute-0 nova_compute[117331]:     </controller>
Oct 09 16:37:16 compute-0 nova_compute[117331]:     <controller type="pci" index="19" model="pcie-root-port">
Oct 09 16:37:16 compute-0 nova_compute[117331]:       <model name="pcie-root-port"/>
Oct 09 16:37:16 compute-0 nova_compute[117331]:       <target chassis="19" port="0x22"/>
Oct 09 16:37:16 compute-0 nova_compute[117331]:       <address type="pci" domain="0x0000" bus="0x00" slot="0x04" function="0x2"/>
Oct 09 16:37:16 compute-0 nova_compute[117331]:     </controller>
Oct 09 16:37:16 compute-0 nova_compute[117331]:     <controller type="pci" index="20" model="pcie-root-port">
Oct 09 16:37:16 compute-0 nova_compute[117331]:       <model name="pcie-root-port"/>
Oct 09 16:37:16 compute-0 nova_compute[117331]:       <target chassis="20" port="0x23"/>
Oct 09 16:37:16 compute-0 nova_compute[117331]:       <address type="pci" domain="0x0000" bus="0x00" slot="0x04" function="0x3"/>
Oct 09 16:37:16 compute-0 nova_compute[117331]:     </controller>
Oct 09 16:37:16 compute-0 nova_compute[117331]:     <controller type="pci" index="21" model="pcie-root-port">
Oct 09 16:37:16 compute-0 nova_compute[117331]:       <model name="pcie-root-port"/>
Oct 09 16:37:16 compute-0 nova_compute[117331]:       <target chassis="21" port="0x24"/>
Oct 09 16:37:16 compute-0 nova_compute[117331]:       <address type="pci" domain="0x0000" bus="0x00" slot="0x04" function="0x4"/>
Oct 09 16:37:16 compute-0 nova_compute[117331]:     </controller>
Oct 09 16:37:16 compute-0 nova_compute[117331]:     <controller type="pci" index="22" model="pcie-root-port">
Oct 09 16:37:16 compute-0 nova_compute[117331]:       <model name="pcie-root-port"/>
Oct 09 16:37:16 compute-0 nova_compute[117331]:       <target chassis="22" port="0x25"/>
Oct 09 16:37:16 compute-0 nova_compute[117331]:       <address type="pci" domain="0x0000" bus="0x00" slot="0x04" function="0x5"/>
Oct 09 16:37:16 compute-0 nova_compute[117331]:     </controller>
Oct 09 16:37:16 compute-0 nova_compute[117331]:     <controller type="pci" index="23" model="pcie-root-port">
Oct 09 16:37:16 compute-0 nova_compute[117331]:       <model name="pcie-root-port"/>
Oct 09 16:37:16 compute-0 nova_compute[117331]:       <target chassis="23" port="0x26"/>
Oct 09 16:37:16 compute-0 nova_compute[117331]:       <address type="pci" domain="0x0000" bus="0x00" slot="0x04" function="0x6"/>
Oct 09 16:37:16 compute-0 nova_compute[117331]:     </controller>
Oct 09 16:37:16 compute-0 nova_compute[117331]:     <controller type="pci" index="24" model="pcie-root-port">
Oct 09 16:37:16 compute-0 nova_compute[117331]:       <model name="pcie-root-port"/>
Oct 09 16:37:16 compute-0 nova_compute[117331]:       <target chassis="24" port="0x27"/>
Oct 09 16:37:16 compute-0 nova_compute[117331]:       <address type="pci" domain="0x0000" bus="0x00" slot="0x04" function="0x7"/>
Oct 09 16:37:16 compute-0 nova_compute[117331]:     </controller>
Oct 09 16:37:16 compute-0 nova_compute[117331]:     <controller type="pci" index="25" model="pcie-root-port">
Oct 09 16:37:16 compute-0 nova_compute[117331]:       <model name="pcie-root-port"/>
Oct 09 16:37:16 compute-0 nova_compute[117331]:       <target chassis="25" port="0x28"/>
Oct 09 16:37:16 compute-0 nova_compute[117331]:       <address type="pci" domain="0x0000" bus="0x00" slot="0x05" function="0x0"/>
Oct 09 16:37:16 compute-0 nova_compute[117331]:     </controller>
Oct 09 16:37:16 compute-0 nova_compute[117331]:     <controller type="pci" index="26" model="pcie-to-pci-bridge">
Oct 09 16:37:16 compute-0 nova_compute[117331]:       <model name="pcie-pci-bridge"/>
Oct 09 16:37:16 compute-0 nova_compute[117331]:       <address type="pci" domain="0x0000" bus="0x01" slot="0x00" function="0x0"/>
Oct 09 16:37:16 compute-0 nova_compute[117331]:     </controller>
Oct 09 16:37:16 compute-0 nova_compute[117331]:     <controller type="usb" index="0" model="piix3-uhci">
Oct 09 16:37:16 compute-0 nova_compute[117331]:       <address type="pci" domain="0x0000" bus="0x1a" slot="0x01" function="0x0"/>
Oct 09 16:37:16 compute-0 nova_compute[117331]:     </controller>
Oct 09 16:37:16 compute-0 nova_compute[117331]:     <controller type="sata" index="0">
Oct 09 16:37:16 compute-0 nova_compute[117331]:       <address type="pci" domain="0x0000" bus="0x00" slot="0x1f" function="0x2"/>
Oct 09 16:37:16 compute-0 nova_compute[117331]:     </controller>
Oct 09 16:37:16 compute-0 nova_compute[117331]:     <interface type="ethernet"><mac address="fa:16:3e:5a:42:08"/><model type="virtio"/><driver name="vhost" rx_queue_size="512"/><mtu size="1442"/><target dev="tap2cf33271-eb"/><address type="pci" domain="0x0000" bus="0x02" slot="0x00" function="0x0"/>
Oct 09 16:37:16 compute-0 nova_compute[117331]:     </interface><serial type="pty">
Oct 09 16:37:16 compute-0 nova_compute[117331]:       <log file="/var/lib/nova/instances/85c9d87b-0e28-425f-b54e-c14066ba6918/console.log" append="off"/>
Oct 09 16:37:16 compute-0 nova_compute[117331]:       <target type="isa-serial" port="0">
Oct 09 16:37:16 compute-0 nova_compute[117331]:         <model name="isa-serial"/>
Oct 09 16:37:16 compute-0 nova_compute[117331]:       </target>
Oct 09 16:37:16 compute-0 nova_compute[117331]:     </serial>
Oct 09 16:37:16 compute-0 nova_compute[117331]:     <console type="pty">
Oct 09 16:37:16 compute-0 nova_compute[117331]:       <log file="/var/lib/nova/instances/85c9d87b-0e28-425f-b54e-c14066ba6918/console.log" append="off"/>
Oct 09 16:37:16 compute-0 nova_compute[117331]:       <target type="serial" port="0"/>
Oct 09 16:37:16 compute-0 nova_compute[117331]:     </console>
Oct 09 16:37:16 compute-0 nova_compute[117331]:     <input type="tablet" bus="usb">
Oct 09 16:37:16 compute-0 nova_compute[117331]:       <address type="usb" bus="0" port="1"/>
Oct 09 16:37:16 compute-0 nova_compute[117331]:     </input>
Oct 09 16:37:16 compute-0 nova_compute[117331]:     <input type="mouse" bus="ps2"/>
Oct 09 16:37:16 compute-0 nova_compute[117331]:     <graphics type="vnc" port="-1" autoport="yes" listen="::">
Oct 09 16:37:16 compute-0 nova_compute[117331]:       <listen type="address" address="::"/>
Oct 09 16:37:16 compute-0 nova_compute[117331]:     </graphics>
Oct 09 16:37:16 compute-0 nova_compute[117331]:     <video>
Oct 09 16:37:16 compute-0 nova_compute[117331]:       <model type="virtio" heads="1" primary="yes"/>
Oct 09 16:37:16 compute-0 nova_compute[117331]:       <address type="pci" domain="0x0000" bus="0x00" slot="0x01" function="0x0"/>
Oct 09 16:37:16 compute-0 nova_compute[117331]:     </video>
Oct 09 16:37:16 compute-0 nova_compute[117331]:     <memballoon model="virtio" autodeflate="on" freePageReporting="on">
Oct 09 16:37:16 compute-0 nova_compute[117331]:       <stats period="10"/>
Oct 09 16:37:16 compute-0 nova_compute[117331]:       <address type="pci" domain="0x0000" bus="0x04" slot="0x00" function="0x0"/>
Oct 09 16:37:16 compute-0 nova_compute[117331]:     </memballoon>
Oct 09 16:37:16 compute-0 nova_compute[117331]:     <rng model="virtio">
Oct 09 16:37:16 compute-0 nova_compute[117331]:       <backend model="random">/dev/urandom</backend>
Oct 09 16:37:16 compute-0 nova_compute[117331]:       <address type="pci" domain="0x0000" bus="0x05" slot="0x00" function="0x0"/>
Oct 09 16:37:16 compute-0 nova_compute[117331]:     </rng>
Oct 09 16:37:16 compute-0 nova_compute[117331]:   </devices>
Oct 09 16:37:16 compute-0 nova_compute[117331]:   <seclabel type="dynamic" model="selinux" relabel="yes"/>
Oct 09 16:37:16 compute-0 nova_compute[117331]: </domain>
Oct 09 16:37:16 compute-0 nova_compute[117331]:  _remove_cpu_shared_set_xml /usr/lib/python3.12/site-packages/nova/virt/libvirt/migration.py:250
Oct 09 16:37:16 compute-0 nova_compute[117331]: 2025-10-09 16:37:16.148 2 DEBUG nova.virt.libvirt.migration [None req-fe59cf1f-038c-48fc-a686-85e22eb4925e 005258eefa5345409efe13f6a1de22ab c076c42271004e7aa86e84416e33f826 - - default default] _update_pci_xml output xml=<domain type="kvm">
Oct 09 16:37:16 compute-0 nova_compute[117331]:   <name>instance-0000001a</name>
Oct 09 16:37:16 compute-0 nova_compute[117331]:   <uuid>85c9d87b-0e28-425f-b54e-c14066ba6918</uuid>
Oct 09 16:37:16 compute-0 nova_compute[117331]:   <metadata>
Oct 09 16:37:16 compute-0 nova_compute[117331]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Oct 09 16:37:16 compute-0 nova_compute[117331]:       <nova:package version="32.1.0-0.20251008143712.076498e.el10"/>
Oct 09 16:37:16 compute-0 nova_compute[117331]:       <nova:name>tempest-TestExecuteWorkloadStabilizationStrategy-server-512585471</nova:name>
Oct 09 16:37:16 compute-0 nova_compute[117331]:       <nova:creationTime>2025-10-09 16:35:32</nova:creationTime>
Oct 09 16:37:16 compute-0 nova_compute[117331]:       <nova:flavor name="m1.nano" id="5aeb5bf9-70f3-4ab6-b418-2c3319a7d2b3">
Oct 09 16:37:16 compute-0 nova_compute[117331]:         <nova:memory>128</nova:memory>
Oct 09 16:37:16 compute-0 nova_compute[117331]:         <nova:disk>1</nova:disk>
Oct 09 16:37:16 compute-0 nova_compute[117331]:         <nova:swap>0</nova:swap>
Oct 09 16:37:16 compute-0 nova_compute[117331]:         <nova:ephemeral>0</nova:ephemeral>
Oct 09 16:37:16 compute-0 nova_compute[117331]:         <nova:vcpus>1</nova:vcpus>
Oct 09 16:37:16 compute-0 nova_compute[117331]:         <nova:extraSpecs>
Oct 09 16:37:16 compute-0 nova_compute[117331]:           <nova:extraSpec name="hw_rng:allowed">True</nova:extraSpec>
Oct 09 16:37:16 compute-0 nova_compute[117331]:         </nova:extraSpecs>
Oct 09 16:37:16 compute-0 nova_compute[117331]:       </nova:flavor>
Oct 09 16:37:16 compute-0 nova_compute[117331]:       <nova:image uuid="b7d6e0af-25e4-4227-9dc6-43143898ceee">
Oct 09 16:37:16 compute-0 nova_compute[117331]:         <nova:containerFormat>bare</nova:containerFormat>
Oct 09 16:37:16 compute-0 nova_compute[117331]:         <nova:diskFormat>qcow2</nova:diskFormat>
Oct 09 16:37:16 compute-0 nova_compute[117331]:         <nova:minDisk>1</nova:minDisk>
Oct 09 16:37:16 compute-0 nova_compute[117331]:         <nova:minRam>0</nova:minRam>
Oct 09 16:37:16 compute-0 nova_compute[117331]:         <nova:properties>
Oct 09 16:37:16 compute-0 nova_compute[117331]:           <nova:property name="hw_rng_model">virtio</nova:property>
Oct 09 16:37:16 compute-0 nova_compute[117331]:         </nova:properties>
Oct 09 16:37:16 compute-0 nova_compute[117331]:       </nova:image>
Oct 09 16:37:16 compute-0 nova_compute[117331]:       <nova:owner>
Oct 09 16:37:16 compute-0 nova_compute[117331]:         <nova:user uuid="685b4924c5a04af7ae6f4a328bb50f14">tempest-TestExecuteWorkloadStabilizationStrategy-1000021673-project-admin</nova:user>
Oct 09 16:37:16 compute-0 nova_compute[117331]:         <nova:project uuid="a60acfe52e4b4b7f912654a59f0978b7">tempest-TestExecuteWorkloadStabilizationStrategy-1000021673</nova:project>
Oct 09 16:37:16 compute-0 nova_compute[117331]:       </nova:owner>
Oct 09 16:37:16 compute-0 nova_compute[117331]:       <nova:root type="image" uuid="b7d6e0af-25e4-4227-9dc6-43143898ceee"/>
Oct 09 16:37:16 compute-0 nova_compute[117331]:       <nova:ports>
Oct 09 16:37:16 compute-0 nova_compute[117331]:         <nova:port uuid="2cf33271-ebfe-4d2e-9668-8905e0bc34b8">
Oct 09 16:37:16 compute-0 nova_compute[117331]:           <nova:ip type="fixed" address="10.100.0.7" ipVersion="4"/>
Oct 09 16:37:16 compute-0 nova_compute[117331]:         </nova:port>
Oct 09 16:37:16 compute-0 nova_compute[117331]:       </nova:ports>
Oct 09 16:37:16 compute-0 nova_compute[117331]:     </nova:instance>
Oct 09 16:37:16 compute-0 nova_compute[117331]:   </metadata>
Oct 09 16:37:16 compute-0 nova_compute[117331]:   <memory unit="KiB">131072</memory>
Oct 09 16:37:16 compute-0 nova_compute[117331]:   <currentMemory unit="KiB">131072</currentMemory>
Oct 09 16:37:16 compute-0 nova_compute[117331]:   <vcpu placement="static">1</vcpu>
Oct 09 16:37:16 compute-0 nova_compute[117331]:   <resource>
Oct 09 16:37:16 compute-0 nova_compute[117331]:     <partition>/machine</partition>
Oct 09 16:37:16 compute-0 nova_compute[117331]:   </resource>
Oct 09 16:37:16 compute-0 nova_compute[117331]:   <sysinfo type="smbios">
Oct 09 16:37:16 compute-0 nova_compute[117331]:     <system>
Oct 09 16:37:16 compute-0 nova_compute[117331]:       <entry name="manufacturer">RDO</entry>
Oct 09 16:37:16 compute-0 nova_compute[117331]:       <entry name="product">OpenStack Compute</entry>
Oct 09 16:37:16 compute-0 nova_compute[117331]:       <entry name="version">32.1.0-0.20251008143712.076498e.el10</entry>
Oct 09 16:37:16 compute-0 nova_compute[117331]:       <entry name="serial">85c9d87b-0e28-425f-b54e-c14066ba6918</entry>
Oct 09 16:37:16 compute-0 nova_compute[117331]:       <entry name="uuid">85c9d87b-0e28-425f-b54e-c14066ba6918</entry>
Oct 09 16:37:16 compute-0 nova_compute[117331]:       <entry name="family">Virtual Machine</entry>
Oct 09 16:37:16 compute-0 nova_compute[117331]:     </system>
Oct 09 16:37:16 compute-0 nova_compute[117331]:   </sysinfo>
Oct 09 16:37:16 compute-0 nova_compute[117331]:   <os>
Oct 09 16:37:16 compute-0 nova_compute[117331]:     <type arch="x86_64" machine="pc-q35-rhel9.6.0">hvm</type>
Oct 09 16:37:16 compute-0 nova_compute[117331]:     <boot dev="hd"/>
Oct 09 16:37:16 compute-0 nova_compute[117331]:     <smbios mode="sysinfo"/>
Oct 09 16:37:16 compute-0 nova_compute[117331]:   </os>
Oct 09 16:37:16 compute-0 nova_compute[117331]:   <features>
Oct 09 16:37:16 compute-0 nova_compute[117331]:     <acpi/>
Oct 09 16:37:16 compute-0 nova_compute[117331]:     <apic/>
Oct 09 16:37:16 compute-0 nova_compute[117331]:     <vmcoreinfo state="on"/>
Oct 09 16:37:16 compute-0 nova_compute[117331]:   </features>
Oct 09 16:37:16 compute-0 nova_compute[117331]:   <cpu mode="host-model" check="partial">
Oct 09 16:37:16 compute-0 nova_compute[117331]:     <topology sockets="1" dies="1" clusters="1" cores="1" threads="1"/>
Oct 09 16:37:16 compute-0 nova_compute[117331]:   </cpu>
Oct 09 16:37:16 compute-0 nova_compute[117331]:   <clock offset="utc">
Oct 09 16:37:16 compute-0 nova_compute[117331]:     <timer name="pit" tickpolicy="delay"/>
Oct 09 16:37:16 compute-0 nova_compute[117331]:     <timer name="rtc" tickpolicy="catchup"/>
Oct 09 16:37:16 compute-0 nova_compute[117331]:     <timer name="hpet" present="no"/>
Oct 09 16:37:16 compute-0 nova_compute[117331]:   </clock>
Oct 09 16:37:16 compute-0 nova_compute[117331]:   <on_poweroff>destroy</on_poweroff>
Oct 09 16:37:16 compute-0 nova_compute[117331]:   <on_reboot>restart</on_reboot>
Oct 09 16:37:16 compute-0 nova_compute[117331]:   <on_crash>destroy</on_crash>
Oct 09 16:37:16 compute-0 nova_compute[117331]:   <devices>
Oct 09 16:37:16 compute-0 nova_compute[117331]:     <emulator>/usr/libexec/qemu-kvm</emulator>
Oct 09 16:37:16 compute-0 nova_compute[117331]:     <disk type="file" device="disk">
Oct 09 16:37:16 compute-0 nova_compute[117331]:       <driver name="qemu" type="qcow2" cache="none"/>
Oct 09 16:37:16 compute-0 nova_compute[117331]:       <source file="/var/lib/nova/instances/85c9d87b-0e28-425f-b54e-c14066ba6918/disk"/>
Oct 09 16:37:16 compute-0 nova_compute[117331]:       <target dev="vda" bus="virtio"/>
Oct 09 16:37:16 compute-0 nova_compute[117331]:       <address type="pci" domain="0x0000" bus="0x03" slot="0x00" function="0x0"/>
Oct 09 16:37:16 compute-0 nova_compute[117331]:     </disk>
Oct 09 16:37:16 compute-0 nova_compute[117331]:     <disk type="file" device="cdrom">
Oct 09 16:37:16 compute-0 nova_compute[117331]:       <driver name="qemu" type="raw" cache="none"/>
Oct 09 16:37:16 compute-0 nova_compute[117331]:       <source file="/var/lib/nova/instances/85c9d87b-0e28-425f-b54e-c14066ba6918/disk.config"/>
Oct 09 16:37:16 compute-0 nova_compute[117331]:       <target dev="sda" bus="sata"/>
Oct 09 16:37:16 compute-0 nova_compute[117331]:       <readonly/>
Oct 09 16:37:16 compute-0 nova_compute[117331]:       <address type="drive" controller="0" bus="0" target="0" unit="0"/>
Oct 09 16:37:16 compute-0 nova_compute[117331]:     </disk>
Oct 09 16:37:16 compute-0 nova_compute[117331]:     <controller type="pci" index="0" model="pcie-root"/>
Oct 09 16:37:16 compute-0 nova_compute[117331]:     <controller type="pci" index="1" model="pcie-root-port">
Oct 09 16:37:16 compute-0 nova_compute[117331]:       <model name="pcie-root-port"/>
Oct 09 16:37:16 compute-0 nova_compute[117331]:       <target chassis="1" port="0x10"/>
Oct 09 16:37:16 compute-0 nova_compute[117331]:       <address type="pci" domain="0x0000" bus="0x00" slot="0x02" function="0x0" multifunction="on"/>
Oct 09 16:37:16 compute-0 nova_compute[117331]:     </controller>
Oct 09 16:37:16 compute-0 nova_compute[117331]:     <controller type="pci" index="2" model="pcie-root-port">
Oct 09 16:37:16 compute-0 nova_compute[117331]:       <model name="pcie-root-port"/>
Oct 09 16:37:16 compute-0 nova_compute[117331]:       <target chassis="2" port="0x11"/>
Oct 09 16:37:16 compute-0 nova_compute[117331]:       <address type="pci" domain="0x0000" bus="0x00" slot="0x02" function="0x1"/>
Oct 09 16:37:16 compute-0 nova_compute[117331]:     </controller>
Oct 09 16:37:16 compute-0 nova_compute[117331]:     <controller type="pci" index="3" model="pcie-root-port">
Oct 09 16:37:16 compute-0 nova_compute[117331]:       <model name="pcie-root-port"/>
Oct 09 16:37:16 compute-0 nova_compute[117331]:       <target chassis="3" port="0x12"/>
Oct 09 16:37:16 compute-0 nova_compute[117331]:       <address type="pci" domain="0x0000" bus="0x00" slot="0x02" function="0x2"/>
Oct 09 16:37:16 compute-0 nova_compute[117331]:     </controller>
Oct 09 16:37:16 compute-0 nova_compute[117331]:     <controller type="pci" index="4" model="pcie-root-port">
Oct 09 16:37:16 compute-0 nova_compute[117331]:       <model name="pcie-root-port"/>
Oct 09 16:37:16 compute-0 nova_compute[117331]:       <target chassis="4" port="0x13"/>
Oct 09 16:37:16 compute-0 nova_compute[117331]:       <address type="pci" domain="0x0000" bus="0x00" slot="0x02" function="0x3"/>
Oct 09 16:37:16 compute-0 nova_compute[117331]:     </controller>
Oct 09 16:37:16 compute-0 nova_compute[117331]:     <controller type="pci" index="5" model="pcie-root-port">
Oct 09 16:37:16 compute-0 nova_compute[117331]:       <model name="pcie-root-port"/>
Oct 09 16:37:16 compute-0 nova_compute[117331]:       <target chassis="5" port="0x14"/>
Oct 09 16:37:16 compute-0 nova_compute[117331]:       <address type="pci" domain="0x0000" bus="0x00" slot="0x02" function="0x4"/>
Oct 09 16:37:16 compute-0 nova_compute[117331]:     </controller>
Oct 09 16:37:16 compute-0 nova_compute[117331]:     <controller type="pci" index="6" model="pcie-root-port">
Oct 09 16:37:16 compute-0 nova_compute[117331]:       <model name="pcie-root-port"/>
Oct 09 16:37:16 compute-0 nova_compute[117331]:       <target chassis="6" port="0x15"/>
Oct 09 16:37:16 compute-0 nova_compute[117331]:       <address type="pci" domain="0x0000" bus="0x00" slot="0x02" function="0x5"/>
Oct 09 16:37:16 compute-0 nova_compute[117331]:     </controller>
Oct 09 16:37:16 compute-0 nova_compute[117331]:     <controller type="pci" index="7" model="pcie-root-port">
Oct 09 16:37:16 compute-0 nova_compute[117331]:       <model name="pcie-root-port"/>
Oct 09 16:37:16 compute-0 nova_compute[117331]:       <target chassis="7" port="0x16"/>
Oct 09 16:37:16 compute-0 nova_compute[117331]:       <address type="pci" domain="0x0000" bus="0x00" slot="0x02" function="0x6"/>
Oct 09 16:37:16 compute-0 nova_compute[117331]:     </controller>
Oct 09 16:37:16 compute-0 nova_compute[117331]:     <controller type="pci" index="8" model="pcie-root-port">
Oct 09 16:37:16 compute-0 nova_compute[117331]:       <model name="pcie-root-port"/>
Oct 09 16:37:16 compute-0 nova_compute[117331]:       <target chassis="8" port="0x17"/>
Oct 09 16:37:16 compute-0 nova_compute[117331]:       <address type="pci" domain="0x0000" bus="0x00" slot="0x02" function="0x7"/>
Oct 09 16:37:16 compute-0 nova_compute[117331]:     </controller>
Oct 09 16:37:16 compute-0 nova_compute[117331]:     <controller type="pci" index="9" model="pcie-root-port">
Oct 09 16:37:16 compute-0 nova_compute[117331]:       <model name="pcie-root-port"/>
Oct 09 16:37:16 compute-0 nova_compute[117331]:       <target chassis="9" port="0x18"/>
Oct 09 16:37:16 compute-0 nova_compute[117331]:       <address type="pci" domain="0x0000" bus="0x00" slot="0x03" function="0x0" multifunction="on"/>
Oct 09 16:37:16 compute-0 nova_compute[117331]:     </controller>
Oct 09 16:37:16 compute-0 nova_compute[117331]:     <controller type="pci" index="10" model="pcie-root-port">
Oct 09 16:37:16 compute-0 nova_compute[117331]:       <model name="pcie-root-port"/>
Oct 09 16:37:16 compute-0 nova_compute[117331]:       <target chassis="10" port="0x19"/>
Oct 09 16:37:16 compute-0 nova_compute[117331]:       <address type="pci" domain="0x0000" bus="0x00" slot="0x03" function="0x1"/>
Oct 09 16:37:16 compute-0 nova_compute[117331]:     </controller>
Oct 09 16:37:16 compute-0 nova_compute[117331]:     <controller type="pci" index="11" model="pcie-root-port">
Oct 09 16:37:16 compute-0 nova_compute[117331]:       <model name="pcie-root-port"/>
Oct 09 16:37:16 compute-0 nova_compute[117331]:       <target chassis="11" port="0x1a"/>
Oct 09 16:37:16 compute-0 nova_compute[117331]:       <address type="pci" domain="0x0000" bus="0x00" slot="0x03" function="0x2"/>
Oct 09 16:37:16 compute-0 nova_compute[117331]:     </controller>
Oct 09 16:37:16 compute-0 nova_compute[117331]:     <controller type="pci" index="12" model="pcie-root-port">
Oct 09 16:37:16 compute-0 nova_compute[117331]:       <model name="pcie-root-port"/>
Oct 09 16:37:16 compute-0 nova_compute[117331]:       <target chassis="12" port="0x1b"/>
Oct 09 16:37:16 compute-0 nova_compute[117331]:       <address type="pci" domain="0x0000" bus="0x00" slot="0x03" function="0x3"/>
Oct 09 16:37:16 compute-0 nova_compute[117331]:     </controller>
Oct 09 16:37:16 compute-0 nova_compute[117331]:     <controller type="pci" index="13" model="pcie-root-port">
Oct 09 16:37:16 compute-0 nova_compute[117331]:       <model name="pcie-root-port"/>
Oct 09 16:37:16 compute-0 nova_compute[117331]:       <target chassis="13" port="0x1c"/>
Oct 09 16:37:16 compute-0 nova_compute[117331]:       <address type="pci" domain="0x0000" bus="0x00" slot="0x03" function="0x4"/>
Oct 09 16:37:16 compute-0 nova_compute[117331]:     </controller>
Oct 09 16:37:16 compute-0 nova_compute[117331]:     <controller type="pci" index="14" model="pcie-root-port">
Oct 09 16:37:16 compute-0 nova_compute[117331]:       <model name="pcie-root-port"/>
Oct 09 16:37:16 compute-0 nova_compute[117331]:       <target chassis="14" port="0x1d"/>
Oct 09 16:37:16 compute-0 nova_compute[117331]:       <address type="pci" domain="0x0000" bus="0x00" slot="0x03" function="0x5"/>
Oct 09 16:37:16 compute-0 nova_compute[117331]:     </controller>
Oct 09 16:37:16 compute-0 nova_compute[117331]:     <controller type="pci" index="15" model="pcie-root-port">
Oct 09 16:37:16 compute-0 nova_compute[117331]:       <model name="pcie-root-port"/>
Oct 09 16:37:16 compute-0 nova_compute[117331]:       <target chassis="15" port="0x1e"/>
Oct 09 16:37:16 compute-0 nova_compute[117331]:       <address type="pci" domain="0x0000" bus="0x00" slot="0x03" function="0x6"/>
Oct 09 16:37:16 compute-0 nova_compute[117331]:     </controller>
Oct 09 16:37:16 compute-0 nova_compute[117331]:     <controller type="pci" index="16" model="pcie-root-port">
Oct 09 16:37:16 compute-0 nova_compute[117331]:       <model name="pcie-root-port"/>
Oct 09 16:37:16 compute-0 nova_compute[117331]:       <target chassis="16" port="0x1f"/>
Oct 09 16:37:16 compute-0 nova_compute[117331]:       <address type="pci" domain="0x0000" bus="0x00" slot="0x03" function="0x7"/>
Oct 09 16:37:16 compute-0 nova_compute[117331]:     </controller>
Oct 09 16:37:16 compute-0 nova_compute[117331]:     <controller type="pci" index="17" model="pcie-root-port">
Oct 09 16:37:16 compute-0 nova_compute[117331]:       <model name="pcie-root-port"/>
Oct 09 16:37:16 compute-0 nova_compute[117331]:       <target chassis="17" port="0x20"/>
Oct 09 16:37:16 compute-0 nova_compute[117331]:       <address type="pci" domain="0x0000" bus="0x00" slot="0x04" function="0x0" multifunction="on"/>
Oct 09 16:37:16 compute-0 nova_compute[117331]:     </controller>
Oct 09 16:37:16 compute-0 nova_compute[117331]:     <controller type="pci" index="18" model="pcie-root-port">
Oct 09 16:37:16 compute-0 nova_compute[117331]:       <model name="pcie-root-port"/>
Oct 09 16:37:16 compute-0 nova_compute[117331]:       <target chassis="18" port="0x21"/>
Oct 09 16:37:16 compute-0 nova_compute[117331]:       <address type="pci" domain="0x0000" bus="0x00" slot="0x04" function="0x1"/>
Oct 09 16:37:16 compute-0 nova_compute[117331]:     </controller>
Oct 09 16:37:16 compute-0 nova_compute[117331]:     <controller type="pci" index="19" model="pcie-root-port">
Oct 09 16:37:16 compute-0 nova_compute[117331]:       <model name="pcie-root-port"/>
Oct 09 16:37:16 compute-0 nova_compute[117331]:       <target chassis="19" port="0x22"/>
Oct 09 16:37:16 compute-0 nova_compute[117331]:       <address type="pci" domain="0x0000" bus="0x00" slot="0x04" function="0x2"/>
Oct 09 16:37:16 compute-0 nova_compute[117331]:     </controller>
Oct 09 16:37:16 compute-0 nova_compute[117331]:     <controller type="pci" index="20" model="pcie-root-port">
Oct 09 16:37:16 compute-0 nova_compute[117331]:       <model name="pcie-root-port"/>
Oct 09 16:37:16 compute-0 nova_compute[117331]:       <target chassis="20" port="0x23"/>
Oct 09 16:37:16 compute-0 nova_compute[117331]:       <address type="pci" domain="0x0000" bus="0x00" slot="0x04" function="0x3"/>
Oct 09 16:37:16 compute-0 nova_compute[117331]:     </controller>
Oct 09 16:37:16 compute-0 nova_compute[117331]:     <controller type="pci" index="21" model="pcie-root-port">
Oct 09 16:37:16 compute-0 nova_compute[117331]:       <model name="pcie-root-port"/>
Oct 09 16:37:16 compute-0 nova_compute[117331]:       <target chassis="21" port="0x24"/>
Oct 09 16:37:16 compute-0 nova_compute[117331]:       <address type="pci" domain="0x0000" bus="0x00" slot="0x04" function="0x4"/>
Oct 09 16:37:16 compute-0 nova_compute[117331]:     </controller>
Oct 09 16:37:16 compute-0 nova_compute[117331]:     <controller type="pci" index="22" model="pcie-root-port">
Oct 09 16:37:16 compute-0 nova_compute[117331]:       <model name="pcie-root-port"/>
Oct 09 16:37:16 compute-0 nova_compute[117331]:       <target chassis="22" port="0x25"/>
Oct 09 16:37:16 compute-0 nova_compute[117331]:       <address type="pci" domain="0x0000" bus="0x00" slot="0x04" function="0x5"/>
Oct 09 16:37:16 compute-0 nova_compute[117331]:     </controller>
Oct 09 16:37:16 compute-0 nova_compute[117331]:     <controller type="pci" index="23" model="pcie-root-port">
Oct 09 16:37:16 compute-0 nova_compute[117331]:       <model name="pcie-root-port"/>
Oct 09 16:37:16 compute-0 nova_compute[117331]:       <target chassis="23" port="0x26"/>
Oct 09 16:37:16 compute-0 nova_compute[117331]:       <address type="pci" domain="0x0000" bus="0x00" slot="0x04" function="0x6"/>
Oct 09 16:37:16 compute-0 nova_compute[117331]:     </controller>
Oct 09 16:37:16 compute-0 nova_compute[117331]:     <controller type="pci" index="24" model="pcie-root-port">
Oct 09 16:37:16 compute-0 nova_compute[117331]:       <model name="pcie-root-port"/>
Oct 09 16:37:16 compute-0 nova_compute[117331]:       <target chassis="24" port="0x27"/>
Oct 09 16:37:16 compute-0 nova_compute[117331]:       <address type="pci" domain="0x0000" bus="0x00" slot="0x04" function="0x7"/>
Oct 09 16:37:16 compute-0 nova_compute[117331]:     </controller>
Oct 09 16:37:16 compute-0 nova_compute[117331]:     <controller type="pci" index="25" model="pcie-root-port">
Oct 09 16:37:16 compute-0 nova_compute[117331]:       <model name="pcie-root-port"/>
Oct 09 16:37:16 compute-0 nova_compute[117331]:       <target chassis="25" port="0x28"/>
Oct 09 16:37:16 compute-0 nova_compute[117331]:       <address type="pci" domain="0x0000" bus="0x00" slot="0x05" function="0x0"/>
Oct 09 16:37:16 compute-0 nova_compute[117331]:     </controller>
Oct 09 16:37:16 compute-0 nova_compute[117331]:     <controller type="pci" index="26" model="pcie-to-pci-bridge">
Oct 09 16:37:16 compute-0 nova_compute[117331]:       <model name="pcie-pci-bridge"/>
Oct 09 16:37:16 compute-0 nova_compute[117331]:       <address type="pci" domain="0x0000" bus="0x01" slot="0x00" function="0x0"/>
Oct 09 16:37:16 compute-0 nova_compute[117331]:     </controller>
Oct 09 16:37:16 compute-0 nova_compute[117331]:     <controller type="usb" index="0" model="piix3-uhci">
Oct 09 16:37:16 compute-0 nova_compute[117331]:       <address type="pci" domain="0x0000" bus="0x1a" slot="0x01" function="0x0"/>
Oct 09 16:37:16 compute-0 nova_compute[117331]:     </controller>
Oct 09 16:37:16 compute-0 nova_compute[117331]:     <controller type="sata" index="0">
Oct 09 16:37:16 compute-0 nova_compute[117331]:       <address type="pci" domain="0x0000" bus="0x00" slot="0x1f" function="0x2"/>
Oct 09 16:37:16 compute-0 nova_compute[117331]:     </controller>
Oct 09 16:37:16 compute-0 nova_compute[117331]:     <interface type="ethernet"><mac address="fa:16:3e:5a:42:08"/><model type="virtio"/><driver name="vhost" rx_queue_size="512"/><mtu size="1442"/><target dev="tap2cf33271-eb"/><address type="pci" domain="0x0000" bus="0x02" slot="0x00" function="0x0"/>
Oct 09 16:37:16 compute-0 nova_compute[117331]:     </interface><serial type="pty">
Oct 09 16:37:16 compute-0 nova_compute[117331]:       <log file="/var/lib/nova/instances/85c9d87b-0e28-425f-b54e-c14066ba6918/console.log" append="off"/>
Oct 09 16:37:16 compute-0 nova_compute[117331]:       <target type="isa-serial" port="0">
Oct 09 16:37:16 compute-0 nova_compute[117331]:         <model name="isa-serial"/>
Oct 09 16:37:16 compute-0 nova_compute[117331]:       </target>
Oct 09 16:37:16 compute-0 nova_compute[117331]:     </serial>
Oct 09 16:37:16 compute-0 nova_compute[117331]:     <console type="pty">
Oct 09 16:37:16 compute-0 nova_compute[117331]:       <log file="/var/lib/nova/instances/85c9d87b-0e28-425f-b54e-c14066ba6918/console.log" append="off"/>
Oct 09 16:37:16 compute-0 nova_compute[117331]:       <target type="serial" port="0"/>
Oct 09 16:37:16 compute-0 nova_compute[117331]:     </console>
Oct 09 16:37:16 compute-0 nova_compute[117331]:     <input type="tablet" bus="usb">
Oct 09 16:37:16 compute-0 nova_compute[117331]:       <address type="usb" bus="0" port="1"/>
Oct 09 16:37:16 compute-0 nova_compute[117331]:     </input>
Oct 09 16:37:16 compute-0 nova_compute[117331]:     <input type="mouse" bus="ps2"/>
Oct 09 16:37:16 compute-0 nova_compute[117331]:     <graphics type="vnc" port="-1" autoport="yes" listen="::">
Oct 09 16:37:16 compute-0 nova_compute[117331]:       <listen type="address" address="::"/>
Oct 09 16:37:16 compute-0 nova_compute[117331]:     </graphics>
Oct 09 16:37:16 compute-0 nova_compute[117331]:     <video>
Oct 09 16:37:16 compute-0 nova_compute[117331]:       <model type="virtio" heads="1" primary="yes"/>
Oct 09 16:37:16 compute-0 nova_compute[117331]:       <address type="pci" domain="0x0000" bus="0x00" slot="0x01" function="0x0"/>
Oct 09 16:37:16 compute-0 nova_compute[117331]:     </video>
Oct 09 16:37:16 compute-0 nova_compute[117331]:     <memballoon model="virtio" autodeflate="on" freePageReporting="on">
Oct 09 16:37:16 compute-0 nova_compute[117331]:       <stats period="10"/>
Oct 09 16:37:16 compute-0 nova_compute[117331]:       <address type="pci" domain="0x0000" bus="0x04" slot="0x00" function="0x0"/>
Oct 09 16:37:16 compute-0 nova_compute[117331]:     </memballoon>
Oct 09 16:37:16 compute-0 nova_compute[117331]:     <rng model="virtio">
Oct 09 16:37:16 compute-0 nova_compute[117331]:       <backend model="random">/dev/urandom</backend>
Oct 09 16:37:16 compute-0 nova_compute[117331]:       <address type="pci" domain="0x0000" bus="0x05" slot="0x00" function="0x0"/>
Oct 09 16:37:16 compute-0 nova_compute[117331]:     </rng>
Oct 09 16:37:16 compute-0 nova_compute[117331]:   </devices>
Oct 09 16:37:16 compute-0 nova_compute[117331]:   <seclabel type="dynamic" model="selinux" relabel="yes"/>
Oct 09 16:37:16 compute-0 nova_compute[117331]: </domain>
Oct 09 16:37:16 compute-0 nova_compute[117331]:  _update_pci_dev_xml /usr/lib/python3.12/site-packages/nova/virt/libvirt/migration.py:166
Oct 09 16:37:16 compute-0 nova_compute[117331]: 2025-10-09 16:37:16.149 2 DEBUG nova.virt.libvirt.driver [None req-fe59cf1f-038c-48fc-a686-85e22eb4925e 005258eefa5345409efe13f6a1de22ab c076c42271004e7aa86e84416e33f826 - - default default] [instance: 85c9d87b-0e28-425f-b54e-c14066ba6918] About to invoke the migrate API _live_migration_operation /usr/lib/python3.12/site-packages/nova/virt/libvirt/driver.py:11175
Oct 09 16:37:16 compute-0 nova_compute[117331]: 2025-10-09 16:37:16.262 2 DEBUG oslo_concurrency.lockutils [req-c3d628b9-4316-42ca-801b-295b9a6cbc8f req-33e5ac26-3f73-4909-871c-e7e3e89eef28 ee15961ad2be4bada6c106ac4f69f422 c076c42271004e7aa86e84416e33f826 - - default default] Releasing lock "refresh_cache-85c9d87b-0e28-425f-b54e-c14066ba6918" lock /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:334
Oct 09 16:37:16 compute-0 nova_compute[117331]: 2025-10-09 16:37:16.606 2 DEBUG nova.virt.libvirt.migration [None req-fe59cf1f-038c-48fc-a686-85e22eb4925e 005258eefa5345409efe13f6a1de22ab c076c42271004e7aa86e84416e33f826 - - default default] [instance: 85c9d87b-0e28-425f-b54e-c14066ba6918] Current None elapsed 1 steps [(0, 50), (300, 95), (600, 140), (900, 185), (1200, 230), (1500, 275), (1800, 320), (2100, 365), (2400, 410), (2700, 455), (3000, 500)] update_downtime /usr/lib/python3.12/site-packages/nova/virt/libvirt/migration.py:658
Oct 09 16:37:16 compute-0 nova_compute[117331]: 2025-10-09 16:37:16.607 2 INFO nova.virt.libvirt.migration [None req-fe59cf1f-038c-48fc-a686-85e22eb4925e 005258eefa5345409efe13f6a1de22ab c076c42271004e7aa86e84416e33f826 - - default default] [instance: 85c9d87b-0e28-425f-b54e-c14066ba6918] Increasing downtime to 50 ms after 0 sec elapsed time
Oct 09 16:37:16 compute-0 podman[152462]: 2025-10-09 16:37:16.871299817 +0000 UTC m=+0.086783719 container health_status 64fa251112d01abb34ec1bb8e22019815ddcaaaa391ce5ec91517147ac5a9994 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., com.redhat.component=ubi9-minimal-container, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., architecture=x86_64, build-date=2025-08-20T13:12:41, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., io.openshift.tags=minimal rhel9, maintainer=Red Hat, Inc., vcs-type=git, name=ubi9-minimal, release=1755695350, vendor=Red Hat, Inc., config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, io.buildah.version=1.33.7, io.openshift.expose-services=, managed_by=edpm_ansible, version=9.6, distribution-scope=public, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, url=https://catalog.redhat.com/en/search?searchType=containers, config_id=edpm, container_name=openstack_network_exporter)
Oct 09 16:37:16 compute-0 podman[152463]: 2025-10-09 16:37:16.903081503 +0000 UTC m=+0.121750966 container health_status d615e4f24ff62fbb14be4a63e6da568ec5b22309773c1c66103b6b4de64a8a2f (image=38.102.83.66:5001/podified-master-centos10/openstack-ovn-controller:watcher_latest, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 10 Base Image, tcib_build_tag=watcher_latest, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.build-date=20251007, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': '38.102.83.66:5001/podified-master-centos10/openstack-ovn-controller:watcher_latest', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, io.buildah.version=1.41.4)
Oct 09 16:37:17 compute-0 nova_compute[117331]: 2025-10-09 16:37:17.302 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 09 16:37:17 compute-0 nova_compute[117331]: 2025-10-09 16:37:17.646 2 INFO nova.virt.libvirt.driver [None req-fe59cf1f-038c-48fc-a686-85e22eb4925e 005258eefa5345409efe13f6a1de22ab c076c42271004e7aa86e84416e33f826 - - default default] [instance: 85c9d87b-0e28-425f-b54e-c14066ba6918] Migration running for 1 secs, memory 100% remaining (bytes processed=0, remaining=0, total=0); disk 100% remaining (bytes processed=0, remaining=0, total=0).
Oct 09 16:37:18 compute-0 nova_compute[117331]: 2025-10-09 16:37:18.150 2 DEBUG nova.virt.libvirt.migration [None req-fe59cf1f-038c-48fc-a686-85e22eb4925e 005258eefa5345409efe13f6a1de22ab c076c42271004e7aa86e84416e33f826 - - default default] [instance: 85c9d87b-0e28-425f-b54e-c14066ba6918] Current 50 elapsed 2 steps [(0, 50), (300, 95), (600, 140), (900, 185), (1200, 230), (1500, 275), (1800, 320), (2100, 365), (2400, 410), (2700, 455), (3000, 500)] update_downtime /usr/lib/python3.12/site-packages/nova/virt/libvirt/migration.py:658
Oct 09 16:37:18 compute-0 nova_compute[117331]: 2025-10-09 16:37:18.151 2 DEBUG nova.virt.libvirt.migration [None req-fe59cf1f-038c-48fc-a686-85e22eb4925e 005258eefa5345409efe13f6a1de22ab c076c42271004e7aa86e84416e33f826 - - default default] [instance: 85c9d87b-0e28-425f-b54e-c14066ba6918] Downtime does not need to change update_downtime /usr/lib/python3.12/site-packages/nova/virt/libvirt/migration.py:671
Oct 09 16:37:18 compute-0 nova_compute[117331]: 2025-10-09 16:37:18.654 2 DEBUG nova.virt.libvirt.migration [None req-fe59cf1f-038c-48fc-a686-85e22eb4925e 005258eefa5345409efe13f6a1de22ab c076c42271004e7aa86e84416e33f826 - - default default] [instance: 85c9d87b-0e28-425f-b54e-c14066ba6918] Current 50 elapsed 3 steps [(0, 50), (300, 95), (600, 140), (900, 185), (1200, 230), (1500, 275), (1800, 320), (2100, 365), (2400, 410), (2700, 455), (3000, 500)] update_downtime /usr/lib/python3.12/site-packages/nova/virt/libvirt/migration.py:658
Oct 09 16:37:18 compute-0 nova_compute[117331]: 2025-10-09 16:37:18.655 2 DEBUG nova.virt.libvirt.migration [None req-fe59cf1f-038c-48fc-a686-85e22eb4925e 005258eefa5345409efe13f6a1de22ab c076c42271004e7aa86e84416e33f826 - - default default] [instance: 85c9d87b-0e28-425f-b54e-c14066ba6918] Downtime does not need to change update_downtime /usr/lib/python3.12/site-packages/nova/virt/libvirt/migration.py:671
Oct 09 16:37:19 compute-0 nova_compute[117331]: 2025-10-09 16:37:19.177 2 DEBUG nova.virt.libvirt.migration [None req-fe59cf1f-038c-48fc-a686-85e22eb4925e 005258eefa5345409efe13f6a1de22ab c076c42271004e7aa86e84416e33f826 - - default default] [instance: 85c9d87b-0e28-425f-b54e-c14066ba6918] Current 50 elapsed 3 steps [(0, 50), (300, 95), (600, 140), (900, 185), (1200, 230), (1500, 275), (1800, 320), (2100, 365), (2400, 410), (2700, 455), (3000, 500)] update_downtime /usr/lib/python3.12/site-packages/nova/virt/libvirt/migration.py:658
Oct 09 16:37:19 compute-0 nova_compute[117331]: 2025-10-09 16:37:19.178 2 DEBUG nova.virt.libvirt.migration [None req-fe59cf1f-038c-48fc-a686-85e22eb4925e 005258eefa5345409efe13f6a1de22ab c076c42271004e7aa86e84416e33f826 - - default default] [instance: 85c9d87b-0e28-425f-b54e-c14066ba6918] Downtime does not need to change update_downtime /usr/lib/python3.12/site-packages/nova/virt/libvirt/migration.py:671
Oct 09 16:37:19 compute-0 kernel: tap2cf33271-eb (unregistering): left promiscuous mode
Oct 09 16:37:19 compute-0 NetworkManager[1028]: <info>  [1760027839.2477] device (tap2cf33271-eb): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Oct 09 16:37:19 compute-0 ovn_controller[19752]: 2025-10-09T16:37:19Z|00250|binding|INFO|Releasing lport 2cf33271-ebfe-4d2e-9668-8905e0bc34b8 from this chassis (sb_readonly=0)
Oct 09 16:37:19 compute-0 ovn_controller[19752]: 2025-10-09T16:37:19Z|00251|binding|INFO|Setting lport 2cf33271-ebfe-4d2e-9668-8905e0bc34b8 down in Southbound
Oct 09 16:37:19 compute-0 nova_compute[117331]: 2025-10-09 16:37:19.254 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 09 16:37:19 compute-0 ovn_controller[19752]: 2025-10-09T16:37:19Z|00252|binding|INFO|Removing iface tap2cf33271-eb ovn-installed in OVS
Oct 09 16:37:19 compute-0 nova_compute[117331]: 2025-10-09 16:37:19.256 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 09 16:37:19 compute-0 ovn_metadata_agent[28608]: 2025-10-09 16:37:19.260 28613 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:5a:42:08 10.100.0.7'], port_security=['fa:16:3e:5a:42:08 10.100.0.7'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com,compute-1.ctlplane.example.com', 'activation-strategy': 'rarp', 'additional-chassis-activated': '2bd8bf21-1f6b-42c9-9656-9a72fa8dcbf3'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.7/28', 'neutron:device_id': '85c9d87b-0e28-425f-b54e-c14066ba6918', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-cf3aa351-4d26-41f3-8cb5-1ff2d3d995c8', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'a60acfe52e4b4b7f912654a59f0978b7', 'neutron:revision_number': '10', 'neutron:security_group_ids': 'c47ae156-dc3a-4eb3-8eb2-126be8ba5497', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=f19947fc-c2ef-4762-adfc-471bb24a038d, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f37b2576840>], logical_port=2cf33271-ebfe-4d2e-9668-8905e0bc34b8) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f37b2576840>]) matches /usr/lib/python3.12/site-packages/ovsdbapp/backend/ovs_idl/event.py:55
Oct 09 16:37:19 compute-0 ovn_metadata_agent[28608]: 2025-10-09 16:37:19.261 28613 INFO neutron.agent.ovn.metadata.agent [-] Port 2cf33271-ebfe-4d2e-9668-8905e0bc34b8 in datapath cf3aa351-4d26-41f3-8cb5-1ff2d3d995c8 unbound from our chassis
Oct 09 16:37:19 compute-0 ovn_metadata_agent[28608]: 2025-10-09 16:37:19.262 28613 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network cf3aa351-4d26-41f3-8cb5-1ff2d3d995c8, tearing the namespace down if needed _get_provision_params /usr/lib/python3.12/site-packages/neutron/agent/ovn/metadata/agent.py:756
Oct 09 16:37:19 compute-0 ovn_metadata_agent[28608]: 2025-10-09 16:37:19.263 139687 DEBUG oslo.privsep.daemon [-] privsep: reply[6aca9e0d-5fb5-4283-91d7-221aa26f7123]: (4, True) _call_back /usr/lib/python3.12/site-packages/oslo_privsep/daemon.py:515
Oct 09 16:37:19 compute-0 ovn_metadata_agent[28608]: 2025-10-09 16:37:19.264 28613 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-cf3aa351-4d26-41f3-8cb5-1ff2d3d995c8 namespace which is not needed anymore
Oct 09 16:37:19 compute-0 nova_compute[117331]: 2025-10-09 16:37:19.269 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 09 16:37:19 compute-0 systemd[1]: machine-qemu\x2d20\x2dinstance\x2d0000001a.scope: Deactivated successfully.
Oct 09 16:37:19 compute-0 systemd[1]: machine-qemu\x2d20\x2dinstance\x2d0000001a.scope: Consumed 16.648s CPU time.
Oct 09 16:37:19 compute-0 systemd-machined[77487]: Machine qemu-20-instance-0000001a terminated.
Oct 09 16:37:19 compute-0 neutron-haproxy-ovnmeta-cf3aa351-4d26-41f3-8cb5-1ff2d3d995c8[151899]: [NOTICE]   (151903) : haproxy version is 3.0.5-8e879a5
Oct 09 16:37:19 compute-0 neutron-haproxy-ovnmeta-cf3aa351-4d26-41f3-8cb5-1ff2d3d995c8[151899]: [NOTICE]   (151903) : path to executable is /usr/sbin/haproxy
Oct 09 16:37:19 compute-0 neutron-haproxy-ovnmeta-cf3aa351-4d26-41f3-8cb5-1ff2d3d995c8[151899]: [WARNING]  (151903) : Exiting Master process...
Oct 09 16:37:19 compute-0 podman[152546]: 2025-10-09 16:37:19.36723873 +0000 UTC m=+0.025928212 container kill 228a7db78720641cf3f58f4b57cfac2aacc540265ef0151826f25be2f2b1f4ac (image=38.102.83.66:5001/podified-master-centos10/openstack-neutron-metadata-agent-ovn:watcher_latest, name=neutron-haproxy-ovnmeta-cf3aa351-4d26-41f3-8cb5-1ff2d3d995c8, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251007, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, io.buildah.version=1.41.4, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=watcher_latest)
Oct 09 16:37:19 compute-0 neutron-haproxy-ovnmeta-cf3aa351-4d26-41f3-8cb5-1ff2d3d995c8[151899]: [ALERT]    (151903) : Current worker (151905) exited with code 143 (Terminated)
Oct 09 16:37:19 compute-0 neutron-haproxy-ovnmeta-cf3aa351-4d26-41f3-8cb5-1ff2d3d995c8[151899]: [WARNING]  (151903) : All workers exited. Exiting... (0)
Oct 09 16:37:19 compute-0 systemd[1]: libpod-228a7db78720641cf3f58f4b57cfac2aacc540265ef0151826f25be2f2b1f4ac.scope: Deactivated successfully.
Oct 09 16:37:19 compute-0 nova_compute[117331]: 2025-10-09 16:37:19.395 2 DEBUG nova.compute.manager [req-5e8040f3-f3ac-4050-b3cd-8c1b0252e987 req-f23be7da-9088-4e2a-a4ca-55ebcbbab32f ee15961ad2be4bada6c106ac4f69f422 c076c42271004e7aa86e84416e33f826 - - default default] [instance: 85c9d87b-0e28-425f-b54e-c14066ba6918] Received event network-vif-unplugged-2cf33271-ebfe-4d2e-9668-8905e0bc34b8 external_instance_event /usr/lib/python3.12/site-packages/nova/compute/manager.py:11812
Oct 09 16:37:19 compute-0 nova_compute[117331]: 2025-10-09 16:37:19.396 2 DEBUG oslo_concurrency.lockutils [req-5e8040f3-f3ac-4050-b3cd-8c1b0252e987 req-f23be7da-9088-4e2a-a4ca-55ebcbbab32f ee15961ad2be4bada6c106ac4f69f422 c076c42271004e7aa86e84416e33f826 - - default default] Acquiring lock "85c9d87b-0e28-425f-b54e-c14066ba6918-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:405
Oct 09 16:37:19 compute-0 nova_compute[117331]: 2025-10-09 16:37:19.396 2 DEBUG oslo_concurrency.lockutils [req-5e8040f3-f3ac-4050-b3cd-8c1b0252e987 req-f23be7da-9088-4e2a-a4ca-55ebcbbab32f ee15961ad2be4bada6c106ac4f69f422 c076c42271004e7aa86e84416e33f826 - - default default] Lock "85c9d87b-0e28-425f-b54e-c14066ba6918-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:410
Oct 09 16:37:19 compute-0 nova_compute[117331]: 2025-10-09 16:37:19.396 2 DEBUG oslo_concurrency.lockutils [req-5e8040f3-f3ac-4050-b3cd-8c1b0252e987 req-f23be7da-9088-4e2a-a4ca-55ebcbbab32f ee15961ad2be4bada6c106ac4f69f422 c076c42271004e7aa86e84416e33f826 - - default default] Lock "85c9d87b-0e28-425f-b54e-c14066ba6918-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:424
Oct 09 16:37:19 compute-0 nova_compute[117331]: 2025-10-09 16:37:19.397 2 DEBUG nova.compute.manager [req-5e8040f3-f3ac-4050-b3cd-8c1b0252e987 req-f23be7da-9088-4e2a-a4ca-55ebcbbab32f ee15961ad2be4bada6c106ac4f69f422 c076c42271004e7aa86e84416e33f826 - - default default] [instance: 85c9d87b-0e28-425f-b54e-c14066ba6918] No waiting events found dispatching network-vif-unplugged-2cf33271-ebfe-4d2e-9668-8905e0bc34b8 pop_instance_event /usr/lib/python3.12/site-packages/nova/compute/manager.py:344
Oct 09 16:37:19 compute-0 nova_compute[117331]: 2025-10-09 16:37:19.397 2 DEBUG nova.compute.manager [req-5e8040f3-f3ac-4050-b3cd-8c1b0252e987 req-f23be7da-9088-4e2a-a4ca-55ebcbbab32f ee15961ad2be4bada6c106ac4f69f422 c076c42271004e7aa86e84416e33f826 - - default default] [instance: 85c9d87b-0e28-425f-b54e-c14066ba6918] Received event network-vif-unplugged-2cf33271-ebfe-4d2e-9668-8905e0bc34b8 for instance with task_state migrating. _process_instance_event /usr/lib/python3.12/site-packages/nova/compute/manager.py:11590
Oct 09 16:37:19 compute-0 podman[152561]: 2025-10-09 16:37:19.409164608 +0000 UTC m=+0.023221046 container died 228a7db78720641cf3f58f4b57cfac2aacc540265ef0151826f25be2f2b1f4ac (image=38.102.83.66:5001/podified-master-centos10/openstack-neutron-metadata-agent-ovn:watcher_latest, name=neutron-haproxy-ovnmeta-cf3aa351-4d26-41f3-8cb5-1ff2d3d995c8, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251007, org.label-schema.license=GPLv2, io.buildah.version=1.41.4, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=watcher_latest, tcib_managed=true)
Oct 09 16:37:19 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-228a7db78720641cf3f58f4b57cfac2aacc540265ef0151826f25be2f2b1f4ac-userdata-shm.mount: Deactivated successfully.
Oct 09 16:37:19 compute-0 systemd[1]: var-lib-containers-storage-overlay-0bf16e2b8255c61faf23874b9e665f78971a169aa66798d3eb4b7358e16e9a5f-merged.mount: Deactivated successfully.
Oct 09 16:37:19 compute-0 podman[152561]: 2025-10-09 16:37:19.468248899 +0000 UTC m=+0.082305337 container cleanup 228a7db78720641cf3f58f4b57cfac2aacc540265ef0151826f25be2f2b1f4ac (image=38.102.83.66:5001/podified-master-centos10/openstack-neutron-metadata-agent-ovn:watcher_latest, name=neutron-haproxy-ovnmeta-cf3aa351-4d26-41f3-8cb5-1ff2d3d995c8, tcib_build_tag=watcher_latest, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251007, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 10 Base Image, io.buildah.version=1.41.4, tcib_managed=true, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Oct 09 16:37:19 compute-0 systemd[1]: libpod-conmon-228a7db78720641cf3f58f4b57cfac2aacc540265ef0151826f25be2f2b1f4ac.scope: Deactivated successfully.
Oct 09 16:37:19 compute-0 podman[152563]: 2025-10-09 16:37:19.488234341 +0000 UTC m=+0.093283244 container remove 228a7db78720641cf3f58f4b57cfac2aacc540265ef0151826f25be2f2b1f4ac (image=38.102.83.66:5001/podified-master-centos10/openstack-neutron-metadata-agent-ovn:watcher_latest, name=neutron-haproxy-ovnmeta-cf3aa351-4d26-41f3-8cb5-1ff2d3d995c8, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 10 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251007, org.label-schema.schema-version=1.0, tcib_build_tag=watcher_latest, io.buildah.version=1.41.4, org.label-schema.vendor=CentOS)
Oct 09 16:37:19 compute-0 nova_compute[117331]: 2025-10-09 16:37:19.487 2 DEBUG nova.virt.libvirt.driver [None req-fe59cf1f-038c-48fc-a686-85e22eb4925e 005258eefa5345409efe13f6a1de22ab c076c42271004e7aa86e84416e33f826 - - default default] [instance: 85c9d87b-0e28-425f-b54e-c14066ba6918] Migrate API has completed _live_migration_operation /usr/lib/python3.12/site-packages/nova/virt/libvirt/driver.py:11182
Oct 09 16:37:19 compute-0 nova_compute[117331]: 2025-10-09 16:37:19.488 2 DEBUG nova.virt.libvirt.driver [None req-fe59cf1f-038c-48fc-a686-85e22eb4925e 005258eefa5345409efe13f6a1de22ab c076c42271004e7aa86e84416e33f826 - - default default] [instance: 85c9d87b-0e28-425f-b54e-c14066ba6918] Migration operation thread has finished _live_migration_operation /usr/lib/python3.12/site-packages/nova/virt/libvirt/driver.py:11230
Oct 09 16:37:19 compute-0 nova_compute[117331]: 2025-10-09 16:37:19.488 2 DEBUG nova.virt.libvirt.driver [None req-fe59cf1f-038c-48fc-a686-85e22eb4925e 005258eefa5345409efe13f6a1de22ab c076c42271004e7aa86e84416e33f826 - - default default] [instance: 85c9d87b-0e28-425f-b54e-c14066ba6918] Migration operation thread notification thread_finished /usr/lib/python3.12/site-packages/nova/virt/libvirt/driver.py:11533
Oct 09 16:37:19 compute-0 ovn_metadata_agent[28608]: 2025-10-09 16:37:19.507 139687 DEBUG oslo.privsep.daemon [-] privsep: reply[3f32005f-40d3-43dd-b50d-0b7b486dc731]: (4, ("Thu Oct  9 04:37:19 PM UTC 2025 Sending signal '15' to neutron-haproxy-ovnmeta-cf3aa351-4d26-41f3-8cb5-1ff2d3d995c8 (228a7db78720641cf3f58f4b57cfac2aacc540265ef0151826f25be2f2b1f4ac)\n228a7db78720641cf3f58f4b57cfac2aacc540265ef0151826f25be2f2b1f4ac\nThu Oct  9 04:37:19 PM UTC 2025 Deleting container neutron-haproxy-ovnmeta-cf3aa351-4d26-41f3-8cb5-1ff2d3d995c8 (228a7db78720641cf3f58f4b57cfac2aacc540265ef0151826f25be2f2b1f4ac)\n228a7db78720641cf3f58f4b57cfac2aacc540265ef0151826f25be2f2b1f4ac\n", '', 0)) _call_back /usr/lib/python3.12/site-packages/oslo_privsep/daemon.py:515
Oct 09 16:37:19 compute-0 ovn_metadata_agent[28608]: 2025-10-09 16:37:19.509 139687 DEBUG oslo.privsep.daemon [-] privsep: reply[79e44696-abb8-45a0-81eb-b42033732e85]: (4, None) _call_back /usr/lib/python3.12/site-packages/oslo_privsep/daemon.py:515
Oct 09 16:37:19 compute-0 ovn_metadata_agent[28608]: 2025-10-09 16:37:19.509 28613 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/cf3aa351-4d26-41f3-8cb5-1ff2d3d995c8.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/cf3aa351-4d26-41f3-8cb5-1ff2d3d995c8.pid.haproxy' get_value_from_file /usr/lib/python3.12/site-packages/neutron/agent/linux/utils.py:268
Oct 09 16:37:19 compute-0 ovn_metadata_agent[28608]: 2025-10-09 16:37:19.509 139687 DEBUG oslo.privsep.daemon [-] privsep: reply[358d4865-1744-4c67-a3e9-7e9359c5cff7]: (4, None) _call_back /usr/lib/python3.12/site-packages/oslo_privsep/daemon.py:515
Oct 09 16:37:19 compute-0 ovn_metadata_agent[28608]: 2025-10-09 16:37:19.510 28613 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapcf3aa351-40, bridge=None, if_exists=True) do_commit /usr/lib/python3.12/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 09 16:37:19 compute-0 nova_compute[117331]: 2025-10-09 16:37:19.511 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 09 16:37:19 compute-0 kernel: tapcf3aa351-40: left promiscuous mode
Oct 09 16:37:19 compute-0 nova_compute[117331]: 2025-10-09 16:37:19.525 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 09 16:37:19 compute-0 ovn_metadata_agent[28608]: 2025-10-09 16:37:19.528 139687 DEBUG oslo.privsep.daemon [-] privsep: reply[7ac9e6ec-5285-4f0e-8581-e754c2be21f7]: (4, True) _call_back /usr/lib/python3.12/site-packages/oslo_privsep/daemon.py:515
Oct 09 16:37:19 compute-0 ovn_metadata_agent[28608]: 2025-10-09 16:37:19.560 139687 DEBUG oslo.privsep.daemon [-] privsep: reply[5d43f397-5c62-4f36-9908-e50c03c8aa2a]: (4, None) _call_back /usr/lib/python3.12/site-packages/oslo_privsep/daemon.py:515
Oct 09 16:37:19 compute-0 ovn_metadata_agent[28608]: 2025-10-09 16:37:19.561 139687 DEBUG oslo.privsep.daemon [-] privsep: reply[5ca41532-3e67-46bd-9262-5a51b7bd42c8]: (4, True) _call_back /usr/lib/python3.12/site-packages/oslo_privsep/daemon.py:515
Oct 09 16:37:19 compute-0 ovn_metadata_agent[28608]: 2025-10-09 16:37:19.576 139687 DEBUG oslo.privsep.daemon [-] privsep: reply[fa068192-99ae-48fb-9331-96d7b2a16206]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 268030, 'reachable_time': 40116, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 152614, 'error': None, 'target': 'ovnmeta-cf3aa351-4d26-41f3-8cb5-1ff2d3d995c8', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.12/site-packages/oslo_privsep/daemon.py:515
Oct 09 16:37:19 compute-0 ovn_metadata_agent[28608]: 2025-10-09 16:37:19.578 28727 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-cf3aa351-4d26-41f3-8cb5-1ff2d3d995c8 deleted. remove_netns /usr/lib/python3.12/site-packages/neutron/privileged/agent/linux/ip_lib.py:603
Oct 09 16:37:19 compute-0 ovn_metadata_agent[28608]: 2025-10-09 16:37:19.578 28727 DEBUG oslo.privsep.daemon [-] privsep: reply[52183390-0682-4926-9a67-a177f4dc008c]: (4, None) _call_back /usr/lib/python3.12/site-packages/oslo_privsep/daemon.py:515
Oct 09 16:37:19 compute-0 systemd[1]: run-netns-ovnmeta\x2dcf3aa351\x2d4d26\x2d41f3\x2d8cb5\x2d1ff2d3d995c8.mount: Deactivated successfully.
Oct 09 16:37:19 compute-0 nova_compute[117331]: 2025-10-09 16:37:19.680 2 DEBUG nova.virt.libvirt.guest [None req-fe59cf1f-038c-48fc-a686-85e22eb4925e 005258eefa5345409efe13f6a1de22ab c076c42271004e7aa86e84416e33f826 - - default default] Domain has shutdown/gone away: Domain not found: no domain with matching uuid '85c9d87b-0e28-425f-b54e-c14066ba6918' (instance-0000001a) get_job_info /usr/lib/python3.12/site-packages/nova/virt/libvirt/guest.py:687
Oct 09 16:37:19 compute-0 nova_compute[117331]: 2025-10-09 16:37:19.680 2 INFO nova.virt.libvirt.driver [None req-fe59cf1f-038c-48fc-a686-85e22eb4925e 005258eefa5345409efe13f6a1de22ab c076c42271004e7aa86e84416e33f826 - - default default] [instance: 85c9d87b-0e28-425f-b54e-c14066ba6918] Migration operation has completed
Oct 09 16:37:19 compute-0 nova_compute[117331]: 2025-10-09 16:37:19.680 2 INFO nova.compute.manager [None req-fe59cf1f-038c-48fc-a686-85e22eb4925e 005258eefa5345409efe13f6a1de22ab c076c42271004e7aa86e84416e33f826 - - default default] [instance: 85c9d87b-0e28-425f-b54e-c14066ba6918] _post_live_migration() is started..
Oct 09 16:37:19 compute-0 nova_compute[117331]: 2025-10-09 16:37:19.694 2 WARNING neutronclient.v2_0.client [None req-fe59cf1f-038c-48fc-a686-85e22eb4925e 005258eefa5345409efe13f6a1de22ab c076c42271004e7aa86e84416e33f826 - - default default] The python binding code in neutronclient is deprecated in favor of OpenstackSDK, please use that as this will be removed in a future release.
Oct 09 16:37:19 compute-0 nova_compute[117331]: 2025-10-09 16:37:19.695 2 WARNING neutronclient.v2_0.client [None req-fe59cf1f-038c-48fc-a686-85e22eb4925e 005258eefa5345409efe13f6a1de22ab c076c42271004e7aa86e84416e33f826 - - default default] The python binding code in neutronclient is deprecated in favor of OpenstackSDK, please use that as this will be removed in a future release.
Oct 09 16:37:20 compute-0 nova_compute[117331]: 2025-10-09 16:37:20.067 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 09 16:37:21 compute-0 nova_compute[117331]: 2025-10-09 16:37:21.368 2 DEBUG nova.network.neutron [None req-fe59cf1f-038c-48fc-a686-85e22eb4925e 005258eefa5345409efe13f6a1de22ab c076c42271004e7aa86e84416e33f826 - - default default] Activated binding for port 2cf33271-ebfe-4d2e-9668-8905e0bc34b8 and host compute-1.ctlplane.example.com migrate_instance_start /usr/lib/python3.12/site-packages/nova/network/neutron.py:3241
Oct 09 16:37:21 compute-0 nova_compute[117331]: 2025-10-09 16:37:21.369 2 DEBUG nova.compute.manager [None req-fe59cf1f-038c-48fc-a686-85e22eb4925e 005258eefa5345409efe13f6a1de22ab c076c42271004e7aa86e84416e33f826 - - default default] [instance: 85c9d87b-0e28-425f-b54e-c14066ba6918] Calling driver.post_live_migration_at_source with original source VIFs from migrate_data: [{"id": "2cf33271-ebfe-4d2e-9668-8905e0bc34b8", "address": "fa:16:3e:5a:42:08", "network": {"id": "cf3aa351-4d26-41f3-8cb5-1ff2d3d995c8", "bridge": "br-int", "label": "tempest-TestExecuteWorkloadStabilizationStrategy-105109230-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "5b6d650a71ca47d58b10f5fd874c4898", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2cf33271-eb", "ovs_interfaceid": "2cf33271-ebfe-4d2e-9668-8905e0bc34b8", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] _post_live_migration /usr/lib/python3.12/site-packages/nova/compute/manager.py:10059
Oct 09 16:37:21 compute-0 nova_compute[117331]: 2025-10-09 16:37:21.370 2 DEBUG nova.virt.libvirt.vif [None req-fe59cf1f-038c-48fc-a686-85e22eb4925e 005258eefa5345409efe13f6a1de22ab c076c42271004e7aa86e84416e33f826 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,compute_id=1,config_drive='True',created_at=2025-10-09T16:35:22Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description=None,display_name='tempest-TestExecuteWorkloadStabilizationStrategy-server-512585471',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testexecuteworkloadstabilizationstrategy-server-5125854',id=26,image_ref='b7d6e0af-25e4-4227-9dc6-43143898ceee',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-10-09T16:35:38Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=1,progress=0,project_id='a60acfe52e4b4b7f912654a59f0978b7',ramdisk_id='',reservation_id='r-52wfk6ph',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='admin,member,reader,manager',image_base_image_ref='b7d6e0af-25e4-4227-9dc6-43143898ceee',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_
vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestExecuteWorkloadStabilizationStrategy-1000021673',owner_user_name='tempest-TestExecuteWorkloadStabilizationStrategy-1000021673-project-admin'},tags=<?>,task_state='migrating',terminated_at=None,trusted_certs=<?>,updated_at=2025-10-09T16:36:20Z,user_data=None,user_id='685b4924c5a04af7ae6f4a328bb50f14',uuid=85c9d87b-0e28-425f-b54e-c14066ba6918,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "2cf33271-ebfe-4d2e-9668-8905e0bc34b8", "address": "fa:16:3e:5a:42:08", "network": {"id": "cf3aa351-4d26-41f3-8cb5-1ff2d3d995c8", "bridge": "br-int", "label": "tempest-TestExecuteWorkloadStabilizationStrategy-105109230-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "5b6d650a71ca47d58b10f5fd874c4898", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2cf33271-eb", "ovs_interfaceid": "2cf33271-ebfe-4d2e-9668-8905e0bc34b8", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.12/site-packages/nova/virt/libvirt/vif.py:839
Oct 09 16:37:21 compute-0 nova_compute[117331]: 2025-10-09 16:37:21.370 2 DEBUG nova.network.os_vif_util [None req-fe59cf1f-038c-48fc-a686-85e22eb4925e 005258eefa5345409efe13f6a1de22ab c076c42271004e7aa86e84416e33f826 - - default default] Converting VIF {"id": "2cf33271-ebfe-4d2e-9668-8905e0bc34b8", "address": "fa:16:3e:5a:42:08", "network": {"id": "cf3aa351-4d26-41f3-8cb5-1ff2d3d995c8", "bridge": "br-int", "label": "tempest-TestExecuteWorkloadStabilizationStrategy-105109230-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "5b6d650a71ca47d58b10f5fd874c4898", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2cf33271-eb", "ovs_interfaceid": "2cf33271-ebfe-4d2e-9668-8905e0bc34b8", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.12/site-packages/nova/network/os_vif_util.py:511
Oct 09 16:37:21 compute-0 nova_compute[117331]: 2025-10-09 16:37:21.372 2 DEBUG nova.network.os_vif_util [None req-fe59cf1f-038c-48fc-a686-85e22eb4925e 005258eefa5345409efe13f6a1de22ab c076c42271004e7aa86e84416e33f826 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:5a:42:08,bridge_name='br-int',has_traffic_filtering=True,id=2cf33271-ebfe-4d2e-9668-8905e0bc34b8,network=Network(cf3aa351-4d26-41f3-8cb5-1ff2d3d995c8),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap2cf33271-eb') nova_to_osvif_vif /usr/lib/python3.12/site-packages/nova/network/os_vif_util.py:548
Oct 09 16:37:21 compute-0 nova_compute[117331]: 2025-10-09 16:37:21.373 2 DEBUG os_vif [None req-fe59cf1f-038c-48fc-a686-85e22eb4925e 005258eefa5345409efe13f6a1de22ab c076c42271004e7aa86e84416e33f826 - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:5a:42:08,bridge_name='br-int',has_traffic_filtering=True,id=2cf33271-ebfe-4d2e-9668-8905e0bc34b8,network=Network(cf3aa351-4d26-41f3-8cb5-1ff2d3d995c8),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap2cf33271-eb') unplug /usr/lib/python3.12/site-packages/os_vif/__init__.py:109
Oct 09 16:37:21 compute-0 nova_compute[117331]: 2025-10-09 16:37:21.374 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 22 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 09 16:37:21 compute-0 nova_compute[117331]: 2025-10-09 16:37:21.375 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap2cf33271-eb, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.12/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 09 16:37:21 compute-0 nova_compute[117331]: 2025-10-09 16:37:21.376 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 09 16:37:21 compute-0 nova_compute[117331]: 2025-10-09 16:37:21.378 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:248
Oct 09 16:37:21 compute-0 nova_compute[117331]: 2025-10-09 16:37:21.379 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 22 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 09 16:37:21 compute-0 nova_compute[117331]: 2025-10-09 16:37:21.379 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbDestroyCommand(_result=None, table=QoS, record=657c5bea-8359-44ce-afcd-e6544c668832) do_commit /usr/lib/python3.12/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 09 16:37:21 compute-0 nova_compute[117331]: 2025-10-09 16:37:21.380 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 09 16:37:21 compute-0 nova_compute[117331]: 2025-10-09 16:37:21.381 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 09 16:37:21 compute-0 nova_compute[117331]: 2025-10-09 16:37:21.383 2 INFO os_vif [None req-fe59cf1f-038c-48fc-a686-85e22eb4925e 005258eefa5345409efe13f6a1de22ab c076c42271004e7aa86e84416e33f826 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:5a:42:08,bridge_name='br-int',has_traffic_filtering=True,id=2cf33271-ebfe-4d2e-9668-8905e0bc34b8,network=Network(cf3aa351-4d26-41f3-8cb5-1ff2d3d995c8),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap2cf33271-eb')
Oct 09 16:37:21 compute-0 nova_compute[117331]: 2025-10-09 16:37:21.383 2 DEBUG oslo_concurrency.lockutils [None req-fe59cf1f-038c-48fc-a686-85e22eb4925e 005258eefa5345409efe13f6a1de22ab c076c42271004e7aa86e84416e33f826 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.free_pci_device_allocations_for_instance" inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:405
Oct 09 16:37:21 compute-0 nova_compute[117331]: 2025-10-09 16:37:21.384 2 DEBUG oslo_concurrency.lockutils [None req-fe59cf1f-038c-48fc-a686-85e22eb4925e 005258eefa5345409efe13f6a1de22ab c076c42271004e7aa86e84416e33f826 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.free_pci_device_allocations_for_instance" :: waited 0.000s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:410
Oct 09 16:37:21 compute-0 nova_compute[117331]: 2025-10-09 16:37:21.384 2 DEBUG oslo_concurrency.lockutils [None req-fe59cf1f-038c-48fc-a686-85e22eb4925e 005258eefa5345409efe13f6a1de22ab c076c42271004e7aa86e84416e33f826 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.free_pci_device_allocations_for_instance" :: held 0.000s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:424
Oct 09 16:37:21 compute-0 nova_compute[117331]: 2025-10-09 16:37:21.384 2 DEBUG nova.compute.manager [None req-fe59cf1f-038c-48fc-a686-85e22eb4925e 005258eefa5345409efe13f6a1de22ab c076c42271004e7aa86e84416e33f826 - - default default] [instance: 85c9d87b-0e28-425f-b54e-c14066ba6918] Calling driver.cleanup from _post_live_migration _post_live_migration /usr/lib/python3.12/site-packages/nova/compute/manager.py:10082
Oct 09 16:37:21 compute-0 nova_compute[117331]: 2025-10-09 16:37:21.385 2 INFO nova.virt.libvirt.driver [None req-fe59cf1f-038c-48fc-a686-85e22eb4925e 005258eefa5345409efe13f6a1de22ab c076c42271004e7aa86e84416e33f826 - - default default] [instance: 85c9d87b-0e28-425f-b54e-c14066ba6918] Deleting instance files /var/lib/nova/instances/85c9d87b-0e28-425f-b54e-c14066ba6918_del
Oct 09 16:37:21 compute-0 nova_compute[117331]: 2025-10-09 16:37:21.386 2 INFO nova.virt.libvirt.driver [None req-fe59cf1f-038c-48fc-a686-85e22eb4925e 005258eefa5345409efe13f6a1de22ab c076c42271004e7aa86e84416e33f826 - - default default] [instance: 85c9d87b-0e28-425f-b54e-c14066ba6918] Deletion of /var/lib/nova/instances/85c9d87b-0e28-425f-b54e-c14066ba6918_del complete
Oct 09 16:37:21 compute-0 nova_compute[117331]: 2025-10-09 16:37:21.453 2 DEBUG nova.compute.manager [req-3bc31f19-7612-4794-94cb-7459692ad212 req-0b5875a2-85c1-470d-95ae-99a006d5de64 ee15961ad2be4bada6c106ac4f69f422 c076c42271004e7aa86e84416e33f826 - - default default] [instance: 85c9d87b-0e28-425f-b54e-c14066ba6918] Received event network-vif-plugged-2cf33271-ebfe-4d2e-9668-8905e0bc34b8 external_instance_event /usr/lib/python3.12/site-packages/nova/compute/manager.py:11812
Oct 09 16:37:21 compute-0 nova_compute[117331]: 2025-10-09 16:37:21.454 2 DEBUG oslo_concurrency.lockutils [req-3bc31f19-7612-4794-94cb-7459692ad212 req-0b5875a2-85c1-470d-95ae-99a006d5de64 ee15961ad2be4bada6c106ac4f69f422 c076c42271004e7aa86e84416e33f826 - - default default] Acquiring lock "85c9d87b-0e28-425f-b54e-c14066ba6918-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:405
Oct 09 16:37:21 compute-0 nova_compute[117331]: 2025-10-09 16:37:21.454 2 DEBUG oslo_concurrency.lockutils [req-3bc31f19-7612-4794-94cb-7459692ad212 req-0b5875a2-85c1-470d-95ae-99a006d5de64 ee15961ad2be4bada6c106ac4f69f422 c076c42271004e7aa86e84416e33f826 - - default default] Lock "85c9d87b-0e28-425f-b54e-c14066ba6918-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:410
Oct 09 16:37:21 compute-0 nova_compute[117331]: 2025-10-09 16:37:21.455 2 DEBUG oslo_concurrency.lockutils [req-3bc31f19-7612-4794-94cb-7459692ad212 req-0b5875a2-85c1-470d-95ae-99a006d5de64 ee15961ad2be4bada6c106ac4f69f422 c076c42271004e7aa86e84416e33f826 - - default default] Lock "85c9d87b-0e28-425f-b54e-c14066ba6918-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:424
Oct 09 16:37:21 compute-0 nova_compute[117331]: 2025-10-09 16:37:21.455 2 DEBUG nova.compute.manager [req-3bc31f19-7612-4794-94cb-7459692ad212 req-0b5875a2-85c1-470d-95ae-99a006d5de64 ee15961ad2be4bada6c106ac4f69f422 c076c42271004e7aa86e84416e33f826 - - default default] [instance: 85c9d87b-0e28-425f-b54e-c14066ba6918] No waiting events found dispatching network-vif-plugged-2cf33271-ebfe-4d2e-9668-8905e0bc34b8 pop_instance_event /usr/lib/python3.12/site-packages/nova/compute/manager.py:344
Oct 09 16:37:21 compute-0 nova_compute[117331]: 2025-10-09 16:37:21.455 2 WARNING nova.compute.manager [req-3bc31f19-7612-4794-94cb-7459692ad212 req-0b5875a2-85c1-470d-95ae-99a006d5de64 ee15961ad2be4bada6c106ac4f69f422 c076c42271004e7aa86e84416e33f826 - - default default] [instance: 85c9d87b-0e28-425f-b54e-c14066ba6918] Received unexpected event network-vif-plugged-2cf33271-ebfe-4d2e-9668-8905e0bc34b8 for instance with vm_state active and task_state migrating.
Oct 09 16:37:21 compute-0 nova_compute[117331]: 2025-10-09 16:37:21.455 2 DEBUG nova.compute.manager [req-3bc31f19-7612-4794-94cb-7459692ad212 req-0b5875a2-85c1-470d-95ae-99a006d5de64 ee15961ad2be4bada6c106ac4f69f422 c076c42271004e7aa86e84416e33f826 - - default default] [instance: 85c9d87b-0e28-425f-b54e-c14066ba6918] Received event network-vif-unplugged-2cf33271-ebfe-4d2e-9668-8905e0bc34b8 external_instance_event /usr/lib/python3.12/site-packages/nova/compute/manager.py:11812
Oct 09 16:37:21 compute-0 nova_compute[117331]: 2025-10-09 16:37:21.455 2 DEBUG oslo_concurrency.lockutils [req-3bc31f19-7612-4794-94cb-7459692ad212 req-0b5875a2-85c1-470d-95ae-99a006d5de64 ee15961ad2be4bada6c106ac4f69f422 c076c42271004e7aa86e84416e33f826 - - default default] Acquiring lock "85c9d87b-0e28-425f-b54e-c14066ba6918-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:405
Oct 09 16:37:21 compute-0 nova_compute[117331]: 2025-10-09 16:37:21.456 2 DEBUG oslo_concurrency.lockutils [req-3bc31f19-7612-4794-94cb-7459692ad212 req-0b5875a2-85c1-470d-95ae-99a006d5de64 ee15961ad2be4bada6c106ac4f69f422 c076c42271004e7aa86e84416e33f826 - - default default] Lock "85c9d87b-0e28-425f-b54e-c14066ba6918-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:410
Oct 09 16:37:21 compute-0 nova_compute[117331]: 2025-10-09 16:37:21.456 2 DEBUG oslo_concurrency.lockutils [req-3bc31f19-7612-4794-94cb-7459692ad212 req-0b5875a2-85c1-470d-95ae-99a006d5de64 ee15961ad2be4bada6c106ac4f69f422 c076c42271004e7aa86e84416e33f826 - - default default] Lock "85c9d87b-0e28-425f-b54e-c14066ba6918-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:424
Oct 09 16:37:21 compute-0 nova_compute[117331]: 2025-10-09 16:37:21.456 2 DEBUG nova.compute.manager [req-3bc31f19-7612-4794-94cb-7459692ad212 req-0b5875a2-85c1-470d-95ae-99a006d5de64 ee15961ad2be4bada6c106ac4f69f422 c076c42271004e7aa86e84416e33f826 - - default default] [instance: 85c9d87b-0e28-425f-b54e-c14066ba6918] No waiting events found dispatching network-vif-unplugged-2cf33271-ebfe-4d2e-9668-8905e0bc34b8 pop_instance_event /usr/lib/python3.12/site-packages/nova/compute/manager.py:344
Oct 09 16:37:21 compute-0 nova_compute[117331]: 2025-10-09 16:37:21.456 2 DEBUG nova.compute.manager [req-3bc31f19-7612-4794-94cb-7459692ad212 req-0b5875a2-85c1-470d-95ae-99a006d5de64 ee15961ad2be4bada6c106ac4f69f422 c076c42271004e7aa86e84416e33f826 - - default default] [instance: 85c9d87b-0e28-425f-b54e-c14066ba6918] Received event network-vif-unplugged-2cf33271-ebfe-4d2e-9668-8905e0bc34b8 for instance with task_state migrating. _process_instance_event /usr/lib/python3.12/site-packages/nova/compute/manager.py:11590
Oct 09 16:37:21 compute-0 nova_compute[117331]: 2025-10-09 16:37:21.456 2 DEBUG nova.compute.manager [req-3bc31f19-7612-4794-94cb-7459692ad212 req-0b5875a2-85c1-470d-95ae-99a006d5de64 ee15961ad2be4bada6c106ac4f69f422 c076c42271004e7aa86e84416e33f826 - - default default] [instance: 85c9d87b-0e28-425f-b54e-c14066ba6918] Received event network-vif-unplugged-2cf33271-ebfe-4d2e-9668-8905e0bc34b8 external_instance_event /usr/lib/python3.12/site-packages/nova/compute/manager.py:11812
Oct 09 16:37:21 compute-0 nova_compute[117331]: 2025-10-09 16:37:21.456 2 DEBUG oslo_concurrency.lockutils [req-3bc31f19-7612-4794-94cb-7459692ad212 req-0b5875a2-85c1-470d-95ae-99a006d5de64 ee15961ad2be4bada6c106ac4f69f422 c076c42271004e7aa86e84416e33f826 - - default default] Acquiring lock "85c9d87b-0e28-425f-b54e-c14066ba6918-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:405
Oct 09 16:37:21 compute-0 nova_compute[117331]: 2025-10-09 16:37:21.456 2 DEBUG oslo_concurrency.lockutils [req-3bc31f19-7612-4794-94cb-7459692ad212 req-0b5875a2-85c1-470d-95ae-99a006d5de64 ee15961ad2be4bada6c106ac4f69f422 c076c42271004e7aa86e84416e33f826 - - default default] Lock "85c9d87b-0e28-425f-b54e-c14066ba6918-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:410
Oct 09 16:37:21 compute-0 nova_compute[117331]: 2025-10-09 16:37:21.457 2 DEBUG oslo_concurrency.lockutils [req-3bc31f19-7612-4794-94cb-7459692ad212 req-0b5875a2-85c1-470d-95ae-99a006d5de64 ee15961ad2be4bada6c106ac4f69f422 c076c42271004e7aa86e84416e33f826 - - default default] Lock "85c9d87b-0e28-425f-b54e-c14066ba6918-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:424
Oct 09 16:37:21 compute-0 nova_compute[117331]: 2025-10-09 16:37:21.457 2 DEBUG nova.compute.manager [req-3bc31f19-7612-4794-94cb-7459692ad212 req-0b5875a2-85c1-470d-95ae-99a006d5de64 ee15961ad2be4bada6c106ac4f69f422 c076c42271004e7aa86e84416e33f826 - - default default] [instance: 85c9d87b-0e28-425f-b54e-c14066ba6918] No waiting events found dispatching network-vif-unplugged-2cf33271-ebfe-4d2e-9668-8905e0bc34b8 pop_instance_event /usr/lib/python3.12/site-packages/nova/compute/manager.py:344
Oct 09 16:37:21 compute-0 nova_compute[117331]: 2025-10-09 16:37:21.457 2 DEBUG nova.compute.manager [req-3bc31f19-7612-4794-94cb-7459692ad212 req-0b5875a2-85c1-470d-95ae-99a006d5de64 ee15961ad2be4bada6c106ac4f69f422 c076c42271004e7aa86e84416e33f826 - - default default] [instance: 85c9d87b-0e28-425f-b54e-c14066ba6918] Received event network-vif-unplugged-2cf33271-ebfe-4d2e-9668-8905e0bc34b8 for instance with task_state migrating. _process_instance_event /usr/lib/python3.12/site-packages/nova/compute/manager.py:11590
Oct 09 16:37:21 compute-0 nova_compute[117331]: 2025-10-09 16:37:21.457 2 DEBUG nova.compute.manager [req-3bc31f19-7612-4794-94cb-7459692ad212 req-0b5875a2-85c1-470d-95ae-99a006d5de64 ee15961ad2be4bada6c106ac4f69f422 c076c42271004e7aa86e84416e33f826 - - default default] [instance: 85c9d87b-0e28-425f-b54e-c14066ba6918] Received event network-vif-plugged-2cf33271-ebfe-4d2e-9668-8905e0bc34b8 external_instance_event /usr/lib/python3.12/site-packages/nova/compute/manager.py:11812
Oct 09 16:37:21 compute-0 nova_compute[117331]: 2025-10-09 16:37:21.457 2 DEBUG oslo_concurrency.lockutils [req-3bc31f19-7612-4794-94cb-7459692ad212 req-0b5875a2-85c1-470d-95ae-99a006d5de64 ee15961ad2be4bada6c106ac4f69f422 c076c42271004e7aa86e84416e33f826 - - default default] Acquiring lock "85c9d87b-0e28-425f-b54e-c14066ba6918-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:405
Oct 09 16:37:21 compute-0 nova_compute[117331]: 2025-10-09 16:37:21.457 2 DEBUG oslo_concurrency.lockutils [req-3bc31f19-7612-4794-94cb-7459692ad212 req-0b5875a2-85c1-470d-95ae-99a006d5de64 ee15961ad2be4bada6c106ac4f69f422 c076c42271004e7aa86e84416e33f826 - - default default] Lock "85c9d87b-0e28-425f-b54e-c14066ba6918-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:410
Oct 09 16:37:21 compute-0 nova_compute[117331]: 2025-10-09 16:37:21.458 2 DEBUG oslo_concurrency.lockutils [req-3bc31f19-7612-4794-94cb-7459692ad212 req-0b5875a2-85c1-470d-95ae-99a006d5de64 ee15961ad2be4bada6c106ac4f69f422 c076c42271004e7aa86e84416e33f826 - - default default] Lock "85c9d87b-0e28-425f-b54e-c14066ba6918-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:424
Oct 09 16:37:21 compute-0 nova_compute[117331]: 2025-10-09 16:37:21.458 2 DEBUG nova.compute.manager [req-3bc31f19-7612-4794-94cb-7459692ad212 req-0b5875a2-85c1-470d-95ae-99a006d5de64 ee15961ad2be4bada6c106ac4f69f422 c076c42271004e7aa86e84416e33f826 - - default default] [instance: 85c9d87b-0e28-425f-b54e-c14066ba6918] No waiting events found dispatching network-vif-plugged-2cf33271-ebfe-4d2e-9668-8905e0bc34b8 pop_instance_event /usr/lib/python3.12/site-packages/nova/compute/manager.py:344
Oct 09 16:37:21 compute-0 nova_compute[117331]: 2025-10-09 16:37:21.458 2 WARNING nova.compute.manager [req-3bc31f19-7612-4794-94cb-7459692ad212 req-0b5875a2-85c1-470d-95ae-99a006d5de64 ee15961ad2be4bada6c106ac4f69f422 c076c42271004e7aa86e84416e33f826 - - default default] [instance: 85c9d87b-0e28-425f-b54e-c14066ba6918] Received unexpected event network-vif-plugged-2cf33271-ebfe-4d2e-9668-8905e0bc34b8 for instance with vm_state active and task_state migrating.
Oct 09 16:37:21 compute-0 nova_compute[117331]: 2025-10-09 16:37:21.458 2 DEBUG nova.compute.manager [req-3bc31f19-7612-4794-94cb-7459692ad212 req-0b5875a2-85c1-470d-95ae-99a006d5de64 ee15961ad2be4bada6c106ac4f69f422 c076c42271004e7aa86e84416e33f826 - - default default] [instance: 85c9d87b-0e28-425f-b54e-c14066ba6918] Received event network-vif-plugged-2cf33271-ebfe-4d2e-9668-8905e0bc34b8 external_instance_event /usr/lib/python3.12/site-packages/nova/compute/manager.py:11812
Oct 09 16:37:21 compute-0 nova_compute[117331]: 2025-10-09 16:37:21.458 2 DEBUG oslo_concurrency.lockutils [req-3bc31f19-7612-4794-94cb-7459692ad212 req-0b5875a2-85c1-470d-95ae-99a006d5de64 ee15961ad2be4bada6c106ac4f69f422 c076c42271004e7aa86e84416e33f826 - - default default] Acquiring lock "85c9d87b-0e28-425f-b54e-c14066ba6918-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:405
Oct 09 16:37:21 compute-0 nova_compute[117331]: 2025-10-09 16:37:21.458 2 DEBUG oslo_concurrency.lockutils [req-3bc31f19-7612-4794-94cb-7459692ad212 req-0b5875a2-85c1-470d-95ae-99a006d5de64 ee15961ad2be4bada6c106ac4f69f422 c076c42271004e7aa86e84416e33f826 - - default default] Lock "85c9d87b-0e28-425f-b54e-c14066ba6918-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:410
Oct 09 16:37:21 compute-0 nova_compute[117331]: 2025-10-09 16:37:21.458 2 DEBUG oslo_concurrency.lockutils [req-3bc31f19-7612-4794-94cb-7459692ad212 req-0b5875a2-85c1-470d-95ae-99a006d5de64 ee15961ad2be4bada6c106ac4f69f422 c076c42271004e7aa86e84416e33f826 - - default default] Lock "85c9d87b-0e28-425f-b54e-c14066ba6918-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:424
Oct 09 16:37:21 compute-0 nova_compute[117331]: 2025-10-09 16:37:21.459 2 DEBUG nova.compute.manager [req-3bc31f19-7612-4794-94cb-7459692ad212 req-0b5875a2-85c1-470d-95ae-99a006d5de64 ee15961ad2be4bada6c106ac4f69f422 c076c42271004e7aa86e84416e33f826 - - default default] [instance: 85c9d87b-0e28-425f-b54e-c14066ba6918] No waiting events found dispatching network-vif-plugged-2cf33271-ebfe-4d2e-9668-8905e0bc34b8 pop_instance_event /usr/lib/python3.12/site-packages/nova/compute/manager.py:344
Oct 09 16:37:21 compute-0 nova_compute[117331]: 2025-10-09 16:37:21.459 2 WARNING nova.compute.manager [req-3bc31f19-7612-4794-94cb-7459692ad212 req-0b5875a2-85c1-470d-95ae-99a006d5de64 ee15961ad2be4bada6c106ac4f69f422 c076c42271004e7aa86e84416e33f826 - - default default] [instance: 85c9d87b-0e28-425f-b54e-c14066ba6918] Received unexpected event network-vif-plugged-2cf33271-ebfe-4d2e-9668-8905e0bc34b8 for instance with vm_state active and task_state migrating.
Oct 09 16:37:22 compute-0 nova_compute[117331]: 2025-10-09 16:37:22.302 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 09 16:37:26 compute-0 nova_compute[117331]: 2025-10-09 16:37:26.381 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 09 16:37:27 compute-0 nova_compute[117331]: 2025-10-09 16:37:27.304 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 09 16:37:27 compute-0 podman[152615]: 2025-10-09 16:37:27.832672655 +0000 UTC m=+0.059271736 container health_status 270830b5167cafbab735d8118980f26987e673cd0fa464dd2202394830bcca66 (image=38.102.83.66:5001/podified-master-centos10/openstack-multipathd:watcher_latest, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=watcher_latest, managed_by=edpm_ansible, org.label-schema.build-date=20251007, org.label-schema.name=CentOS Stream 10 Base Image, config_id=multipathd, container_name=multipathd, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': '38.102.83.66:5001/podified-master-centos10/openstack-multipathd:watcher_latest', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.4, org.label-schema.license=GPLv2)
Oct 09 16:37:29 compute-0 podman[127775]: time="2025-10-09T16:37:29Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Oct 09 16:37:29 compute-0 podman[127775]: @ - - [09/Oct/2025:16:37:29 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 19526 "" "Go-http-client/1.1"
Oct 09 16:37:29 compute-0 podman[127775]: @ - - [09/Oct/2025:16:37:29 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 3028 "" "Go-http-client/1.1"
Oct 09 16:37:30 compute-0 nova_compute[117331]: 2025-10-09 16:37:30.421 2 DEBUG oslo_concurrency.lockutils [None req-fe59cf1f-038c-48fc-a686-85e22eb4925e 005258eefa5345409efe13f6a1de22ab c076c42271004e7aa86e84416e33f826 - - default default] Acquiring lock "85c9d87b-0e28-425f-b54e-c14066ba6918-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:405
Oct 09 16:37:30 compute-0 nova_compute[117331]: 2025-10-09 16:37:30.422 2 DEBUG oslo_concurrency.lockutils [None req-fe59cf1f-038c-48fc-a686-85e22eb4925e 005258eefa5345409efe13f6a1de22ab c076c42271004e7aa86e84416e33f826 - - default default] Lock "85c9d87b-0e28-425f-b54e-c14066ba6918-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.001s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:410
Oct 09 16:37:30 compute-0 nova_compute[117331]: 2025-10-09 16:37:30.422 2 DEBUG oslo_concurrency.lockutils [None req-fe59cf1f-038c-48fc-a686-85e22eb4925e 005258eefa5345409efe13f6a1de22ab c076c42271004e7aa86e84416e33f826 - - default default] Lock "85c9d87b-0e28-425f-b54e-c14066ba6918-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:424
Oct 09 16:37:30 compute-0 nova_compute[117331]: 2025-10-09 16:37:30.940 2 DEBUG oslo_concurrency.lockutils [None req-fe59cf1f-038c-48fc-a686-85e22eb4925e 005258eefa5345409efe13f6a1de22ab c076c42271004e7aa86e84416e33f826 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:405
Oct 09 16:37:30 compute-0 nova_compute[117331]: 2025-10-09 16:37:30.940 2 DEBUG oslo_concurrency.lockutils [None req-fe59cf1f-038c-48fc-a686-85e22eb4925e 005258eefa5345409efe13f6a1de22ab c076c42271004e7aa86e84416e33f826 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:410
Oct 09 16:37:30 compute-0 nova_compute[117331]: 2025-10-09 16:37:30.941 2 DEBUG oslo_concurrency.lockutils [None req-fe59cf1f-038c-48fc-a686-85e22eb4925e 005258eefa5345409efe13f6a1de22ab c076c42271004e7aa86e84416e33f826 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:424
Oct 09 16:37:30 compute-0 nova_compute[117331]: 2025-10-09 16:37:30.941 2 DEBUG nova.compute.resource_tracker [None req-fe59cf1f-038c-48fc-a686-85e22eb4925e 005258eefa5345409efe13f6a1de22ab c076c42271004e7aa86e84416e33f826 - - default default] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.12/site-packages/nova/compute/resource_tracker.py:937
Oct 09 16:37:31 compute-0 nova_compute[117331]: 2025-10-09 16:37:31.061 2 WARNING nova.virt.libvirt.driver [None req-fe59cf1f-038c-48fc-a686-85e22eb4925e 005258eefa5345409efe13f6a1de22ab c076c42271004e7aa86e84416e33f826 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Oct 09 16:37:31 compute-0 nova_compute[117331]: 2025-10-09 16:37:31.062 2 DEBUG oslo_concurrency.processutils [None req-fe59cf1f-038c-48fc-a686-85e22eb4925e 005258eefa5345409efe13f6a1de22ab c076c42271004e7aa86e84416e33f826 - - default default] Running cmd (subprocess): env LANG=C uptime execute /usr/lib/python3.12/site-packages/oslo_concurrency/processutils.py:349
Oct 09 16:37:31 compute-0 nova_compute[117331]: 2025-10-09 16:37:31.078 2 DEBUG oslo_concurrency.processutils [None req-fe59cf1f-038c-48fc-a686-85e22eb4925e 005258eefa5345409efe13f6a1de22ab c076c42271004e7aa86e84416e33f826 - - default default] CMD "env LANG=C uptime" returned: 0 in 0.016s execute /usr/lib/python3.12/site-packages/oslo_concurrency/processutils.py:372
Oct 09 16:37:31 compute-0 nova_compute[117331]: 2025-10-09 16:37:31.079 2 DEBUG nova.compute.resource_tracker [None req-fe59cf1f-038c-48fc-a686-85e22eb4925e 005258eefa5345409efe13f6a1de22ab c076c42271004e7aa86e84416e33f826 - - default default] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=6164MB free_disk=73.24945068359375GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", 
"product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.12/site-packages/nova/compute/resource_tracker.py:1136
Oct 09 16:37:31 compute-0 nova_compute[117331]: 2025-10-09 16:37:31.079 2 DEBUG oslo_concurrency.lockutils [None req-fe59cf1f-038c-48fc-a686-85e22eb4925e 005258eefa5345409efe13f6a1de22ab c076c42271004e7aa86e84416e33f826 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:405
Oct 09 16:37:31 compute-0 nova_compute[117331]: 2025-10-09 16:37:31.079 2 DEBUG oslo_concurrency.lockutils [None req-fe59cf1f-038c-48fc-a686-85e22eb4925e 005258eefa5345409efe13f6a1de22ab c076c42271004e7aa86e84416e33f826 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:410
Oct 09 16:37:31 compute-0 nova_compute[117331]: 2025-10-09 16:37:31.306 2 DEBUG oslo_service.periodic_task [None req-92a13bed-c081-4c7b-87f3-bafebefb79f2 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.12/site-packages/oslo_service/periodic_task.py:210
Oct 09 16:37:31 compute-0 nova_compute[117331]: 2025-10-09 16:37:31.306 2 DEBUG oslo_service.periodic_task [None req-92a13bed-c081-4c7b-87f3-bafebefb79f2 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.12/site-packages/oslo_service/periodic_task.py:210
Oct 09 16:37:31 compute-0 nova_compute[117331]: 2025-10-09 16:37:31.383 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 09 16:37:31 compute-0 openstack_network_exporter[129925]: ERROR   16:37:31 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Oct 09 16:37:31 compute-0 openstack_network_exporter[129925]: ERROR   16:37:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Oct 09 16:37:31 compute-0 openstack_network_exporter[129925]: ERROR   16:37:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Oct 09 16:37:31 compute-0 openstack_network_exporter[129925]: ERROR   16:37:31 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Oct 09 16:37:31 compute-0 openstack_network_exporter[129925]: 
Oct 09 16:37:31 compute-0 openstack_network_exporter[129925]: ERROR   16:37:31 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Oct 09 16:37:31 compute-0 openstack_network_exporter[129925]: 
Oct 09 16:37:32 compute-0 nova_compute[117331]: 2025-10-09 16:37:32.101 2 DEBUG nova.compute.resource_tracker [None req-fe59cf1f-038c-48fc-a686-85e22eb4925e 005258eefa5345409efe13f6a1de22ab c076c42271004e7aa86e84416e33f826 - - default default] Migration for instance 85c9d87b-0e28-425f-b54e-c14066ba6918 refers to another host's instance! _pair_instances_to_migrations /usr/lib/python3.12/site-packages/nova/compute/resource_tracker.py:979
Oct 09 16:37:32 compute-0 nova_compute[117331]: 2025-10-09 16:37:32.305 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 09 16:37:32 compute-0 nova_compute[117331]: 2025-10-09 16:37:32.613 2 DEBUG nova.compute.resource_tracker [None req-fe59cf1f-038c-48fc-a686-85e22eb4925e 005258eefa5345409efe13f6a1de22ab c076c42271004e7aa86e84416e33f826 - - default default] [instance: 85c9d87b-0e28-425f-b54e-c14066ba6918] Skipping migration as instance is neither resizing nor live-migrating. _update_usage_from_migrations /usr/lib/python3.12/site-packages/nova/compute/resource_tracker.py:1596
Oct 09 16:37:32 compute-0 nova_compute[117331]: 2025-10-09 16:37:32.639 2 DEBUG nova.compute.resource_tracker [None req-fe59cf1f-038c-48fc-a686-85e22eb4925e 005258eefa5345409efe13f6a1de22ab c076c42271004e7aa86e84416e33f826 - - default default] Migration 1ddf8509-0504-43ef-bcb0-1ae3ece7dd4d is active on this compute host and has allocations in placement: {'resources': {'VCPU': 1, 'MEMORY_MB': 128, 'DISK_GB': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.12/site-packages/nova/compute/resource_tracker.py:1745
Oct 09 16:37:32 compute-0 nova_compute[117331]: 2025-10-09 16:37:32.640 2 DEBUG nova.compute.resource_tracker [None req-fe59cf1f-038c-48fc-a686-85e22eb4925e 005258eefa5345409efe13f6a1de22ab c076c42271004e7aa86e84416e33f826 - - default default] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.12/site-packages/nova/compute/resource_tracker.py:1159
Oct 09 16:37:32 compute-0 nova_compute[117331]: 2025-10-09 16:37:32.640 2 DEBUG nova.compute.resource_tracker [None req-fe59cf1f-038c-48fc-a686-85e22eb4925e 005258eefa5345409efe13f6a1de22ab c076c42271004e7aa86e84416e33f826 - - default default] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=79GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] stats={'failed_builds': '0', 'uptime': ' 16:37:31 up 46 min,  0 user,  load average: 0.33, 0.50, 0.47\n'} _report_final_resource_view /usr/lib/python3.12/site-packages/nova/compute/resource_tracker.py:1168
Oct 09 16:37:32 compute-0 nova_compute[117331]: 2025-10-09 16:37:32.667 2 DEBUG nova.compute.provider_tree [None req-fe59cf1f-038c-48fc-a686-85e22eb4925e 005258eefa5345409efe13f6a1de22ab c076c42271004e7aa86e84416e33f826 - - default default] Inventory has not changed in ProviderTree for provider: 593051b8-2000-437f-a915-2616fc8b1671 update_inventory /usr/lib/python3.12/site-packages/nova/compute/provider_tree.py:180
Oct 09 16:37:32 compute-0 podman[152637]: 2025-10-09 16:37:32.811402529 +0000 UTC m=+0.046743030 container health_status 86aa93e887899e54d383b0fc17fb3c5637680d1d8e146a130a557c837f0260fe (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter)
Oct 09 16:37:33 compute-0 nova_compute[117331]: 2025-10-09 16:37:33.174 2 DEBUG nova.scheduler.client.report [None req-fe59cf1f-038c-48fc-a686-85e22eb4925e 005258eefa5345409efe13f6a1de22ab c076c42271004e7aa86e84416e33f826 - - default default] Inventory has not changed for provider 593051b8-2000-437f-a915-2616fc8b1671 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 79, 'reserved': 1, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.12/site-packages/nova/scheduler/client/report.py:958
Oct 09 16:37:33 compute-0 nova_compute[117331]: 2025-10-09 16:37:33.305 2 DEBUG oslo_service.periodic_task [None req-92a13bed-c081-4c7b-87f3-bafebefb79f2 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.12/site-packages/oslo_service/periodic_task.py:210
Oct 09 16:37:33 compute-0 nova_compute[117331]: 2025-10-09 16:37:33.682 2 DEBUG nova.compute.resource_tracker [None req-fe59cf1f-038c-48fc-a686-85e22eb4925e 005258eefa5345409efe13f6a1de22ab c076c42271004e7aa86e84416e33f826 - - default default] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.12/site-packages/nova/compute/resource_tracker.py:1097
Oct 09 16:37:33 compute-0 nova_compute[117331]: 2025-10-09 16:37:33.683 2 DEBUG oslo_concurrency.lockutils [None req-fe59cf1f-038c-48fc-a686-85e22eb4925e 005258eefa5345409efe13f6a1de22ab c076c42271004e7aa86e84416e33f826 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 2.604s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:424
Oct 09 16:37:33 compute-0 nova_compute[117331]: 2025-10-09 16:37:33.704 2 INFO nova.compute.manager [None req-fe59cf1f-038c-48fc-a686-85e22eb4925e 005258eefa5345409efe13f6a1de22ab c076c42271004e7aa86e84416e33f826 - - default default] [instance: 85c9d87b-0e28-425f-b54e-c14066ba6918] Migrating instance to compute-1.ctlplane.example.com finished successfully.
Oct 09 16:37:33 compute-0 nova_compute[117331]: 2025-10-09 16:37:33.824 2 DEBUG oslo_concurrency.lockutils [None req-92a13bed-c081-4c7b-87f3-bafebefb79f2 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:405
Oct 09 16:37:33 compute-0 nova_compute[117331]: 2025-10-09 16:37:33.825 2 DEBUG oslo_concurrency.lockutils [None req-92a13bed-c081-4c7b-87f3-bafebefb79f2 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:410
Oct 09 16:37:33 compute-0 nova_compute[117331]: 2025-10-09 16:37:33.825 2 DEBUG oslo_concurrency.lockutils [None req-92a13bed-c081-4c7b-87f3-bafebefb79f2 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:424
Oct 09 16:37:33 compute-0 nova_compute[117331]: 2025-10-09 16:37:33.825 2 DEBUG nova.compute.resource_tracker [None req-92a13bed-c081-4c7b-87f3-bafebefb79f2 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.12/site-packages/nova/compute/resource_tracker.py:937
Oct 09 16:37:34 compute-0 nova_compute[117331]: 2025-10-09 16:37:34.047 2 WARNING nova.virt.libvirt.driver [None req-92a13bed-c081-4c7b-87f3-bafebefb79f2 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Oct 09 16:37:34 compute-0 nova_compute[117331]: 2025-10-09 16:37:34.048 2 DEBUG oslo_concurrency.processutils [None req-92a13bed-c081-4c7b-87f3-bafebefb79f2 - - - - - -] Running cmd (subprocess): env LANG=C uptime execute /usr/lib/python3.12/site-packages/oslo_concurrency/processutils.py:349
Oct 09 16:37:34 compute-0 nova_compute[117331]: 2025-10-09 16:37:34.070 2 DEBUG oslo_concurrency.processutils [None req-92a13bed-c081-4c7b-87f3-bafebefb79f2 - - - - - -] CMD "env LANG=C uptime" returned: 0 in 0.022s execute /usr/lib/python3.12/site-packages/oslo_concurrency/processutils.py:372
Oct 09 16:37:34 compute-0 nova_compute[117331]: 2025-10-09 16:37:34.071 2 DEBUG nova.compute.resource_tracker [None req-92a13bed-c081-4c7b-87f3-bafebefb79f2 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=6168MB free_disk=73.24945068359375GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": 
"label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.12/site-packages/nova/compute/resource_tracker.py:1136
Oct 09 16:37:34 compute-0 nova_compute[117331]: 2025-10-09 16:37:34.071 2 DEBUG oslo_concurrency.lockutils [None req-92a13bed-c081-4c7b-87f3-bafebefb79f2 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:405
Oct 09 16:37:34 compute-0 nova_compute[117331]: 2025-10-09 16:37:34.072 2 DEBUG oslo_concurrency.lockutils [None req-92a13bed-c081-4c7b-87f3-bafebefb79f2 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:410
Oct 09 16:37:34 compute-0 nova_compute[117331]: 2025-10-09 16:37:34.797 2 INFO nova.scheduler.client.report [None req-fe59cf1f-038c-48fc-a686-85e22eb4925e 005258eefa5345409efe13f6a1de22ab c076c42271004e7aa86e84416e33f826 - - default default] Deleted allocation for migration 1ddf8509-0504-43ef-bcb0-1ae3ece7dd4d
Oct 09 16:37:34 compute-0 nova_compute[117331]: 2025-10-09 16:37:34.798 2 DEBUG nova.virt.libvirt.driver [None req-fe59cf1f-038c-48fc-a686-85e22eb4925e 005258eefa5345409efe13f6a1de22ab c076c42271004e7aa86e84416e33f826 - - default default] [instance: 85c9d87b-0e28-425f-b54e-c14066ba6918] Live migration monitoring is all done _live_migration /usr/lib/python3.12/site-packages/nova/virt/libvirt/driver.py:11566
Oct 09 16:37:35 compute-0 nova_compute[117331]: 2025-10-09 16:37:35.111 2 DEBUG nova.compute.resource_tracker [None req-92a13bed-c081-4c7b-87f3-bafebefb79f2 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.12/site-packages/nova/compute/resource_tracker.py:1159
Oct 09 16:37:35 compute-0 nova_compute[117331]: 2025-10-09 16:37:35.112 2 DEBUG nova.compute.resource_tracker [None req-92a13bed-c081-4c7b-87f3-bafebefb79f2 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=79GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] stats={'failed_builds': '0', 'uptime': ' 16:37:34 up 46 min,  0 user,  load average: 0.31, 0.49, 0.47\n'} _report_final_resource_view /usr/lib/python3.12/site-packages/nova/compute/resource_tracker.py:1168
Oct 09 16:37:35 compute-0 nova_compute[117331]: 2025-10-09 16:37:35.151 2 DEBUG nova.compute.provider_tree [None req-92a13bed-c081-4c7b-87f3-bafebefb79f2 - - - - - -] Inventory has not changed in ProviderTree for provider: 593051b8-2000-437f-a915-2616fc8b1671 update_inventory /usr/lib/python3.12/site-packages/nova/compute/provider_tree.py:180
Oct 09 16:37:35 compute-0 ovn_metadata_agent[28608]: 2025-10-09 16:37:35.334 28613 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:405
Oct 09 16:37:35 compute-0 ovn_metadata_agent[28608]: 2025-10-09 16:37:35.334 28613 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.000s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:410
Oct 09 16:37:35 compute-0 ovn_metadata_agent[28608]: 2025-10-09 16:37:35.334 28613 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:424
Oct 09 16:37:35 compute-0 nova_compute[117331]: 2025-10-09 16:37:35.660 2 DEBUG nova.scheduler.client.report [None req-92a13bed-c081-4c7b-87f3-bafebefb79f2 - - - - - -] Inventory has not changed for provider 593051b8-2000-437f-a915-2616fc8b1671 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 79, 'reserved': 1, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.12/site-packages/nova/scheduler/client/report.py:958
Oct 09 16:37:36 compute-0 nova_compute[117331]: 2025-10-09 16:37:36.168 2 DEBUG nova.compute.resource_tracker [None req-92a13bed-c081-4c7b-87f3-bafebefb79f2 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.12/site-packages/nova/compute/resource_tracker.py:1097
Oct 09 16:37:36 compute-0 nova_compute[117331]: 2025-10-09 16:37:36.169 2 DEBUG oslo_concurrency.lockutils [None req-92a13bed-c081-4c7b-87f3-bafebefb79f2 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 2.097s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:424
Oct 09 16:37:36 compute-0 nova_compute[117331]: 2025-10-09 16:37:36.385 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 09 16:37:37 compute-0 nova_compute[117331]: 2025-10-09 16:37:37.165 2 DEBUG oslo_service.periodic_task [None req-92a13bed-c081-4c7b-87f3-bafebefb79f2 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.12/site-packages/oslo_service/periodic_task.py:210
Oct 09 16:37:37 compute-0 nova_compute[117331]: 2025-10-09 16:37:37.165 2 DEBUG oslo_service.periodic_task [None req-92a13bed-c081-4c7b-87f3-bafebefb79f2 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.12/site-packages/oslo_service/periodic_task.py:210
Oct 09 16:37:37 compute-0 nova_compute[117331]: 2025-10-09 16:37:37.166 2 DEBUG nova.compute.manager [None req-92a13bed-c081-4c7b-87f3-bafebefb79f2 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.12/site-packages/nova/compute/manager.py:11228
Oct 09 16:37:37 compute-0 nova_compute[117331]: 2025-10-09 16:37:37.306 2 DEBUG oslo_service.periodic_task [None req-92a13bed-c081-4c7b-87f3-bafebefb79f2 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.12/site-packages/oslo_service/periodic_task.py:210
Oct 09 16:37:37 compute-0 nova_compute[117331]: 2025-10-09 16:37:37.308 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 09 16:37:38 compute-0 nova_compute[117331]: 2025-10-09 16:37:38.307 2 DEBUG oslo_service.periodic_task [None req-92a13bed-c081-4c7b-87f3-bafebefb79f2 - - - - - -] Running periodic task ComputeManager._cleanup_expired_console_auth_tokens run_periodic_tasks /usr/lib/python3.12/site-packages/oslo_service/periodic_task.py:210
Oct 09 16:37:38 compute-0 podman[152663]: 2025-10-09 16:37:38.854279836 +0000 UTC m=+0.075701177 container health_status 0e966c7a283689daf23c5db5017f23f685ee836319240188a9f70ddd246563c5 (image=38.102.83.66:5001/podified-master-centos10/openstack-neutron-metadata-agent-ovn:watcher_latest, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_metadata_agent, io.buildah.version=1.41.4, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251007, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, tcib_build_tag=watcher_latest, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': '38.102.83.66:5001/podified-master-centos10/openstack-neutron-metadata-agent-ovn:watcher_latest', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent)
Oct 09 16:37:38 compute-0 podman[152664]: 2025-10-09 16:37:38.869551139 +0000 UTC m=+0.088130811 container health_status 8227728d23f374cb9528be6ddf01dff38d8c792912a17b0d137932a18390b820 (image=38.102.83.66:5001/podified-master-centos10/openstack-iscsid:watcher_latest, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251007, tcib_build_tag=watcher_latest, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': '38.102.83.66:5001/podified-master-centos10/openstack-iscsid:watcher_latest', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 10 Base Image, io.buildah.version=1.41.4, org.label-schema.schema-version=1.0, tcib_managed=true, config_id=iscsid, container_name=iscsid, managed_by=edpm_ansible)
Oct 09 16:37:41 compute-0 ovn_metadata_agent[28608]: 2025-10-09 16:37:41.126 28613 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=27, options={'arp_ns_explicit_output': 'true', 'fdb_removal_limit': '0', 'ignore_lsp_down': 'false', 'mac_binding_removal_limit': '0', 'mac_prefix': '4a:65:5f', 'max_tunid': '16711680', 'northd_internal_version': '24.09.4-20.37.0-77.8', 'svc_monitor_mac': '32:24:1e:e3:5d:e8'}, ipsec=False) old=SB_Global(nb_cfg=26) matches /usr/lib/python3.12/site-packages/ovsdbapp/backend/ovs_idl/event.py:55
Oct 09 16:37:41 compute-0 ovn_metadata_agent[28608]: 2025-10-09 16:37:41.127 28613 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 6 seconds run /usr/lib/python3.12/site-packages/neutron/agent/ovn/metadata/agent.py:367
Oct 09 16:37:41 compute-0 nova_compute[117331]: 2025-10-09 16:37:41.127 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 09 16:37:41 compute-0 nova_compute[117331]: 2025-10-09 16:37:41.419 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 09 16:37:41 compute-0 nova_compute[117331]: 2025-10-09 16:37:41.965 2 DEBUG oslo_service.periodic_task [None req-92a13bed-c081-4c7b-87f3-bafebefb79f2 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.12/site-packages/oslo_service/periodic_task.py:210
Oct 09 16:37:41 compute-0 nova_compute[117331]: 2025-10-09 16:37:41.966 2 DEBUG oslo_service.periodic_task [None req-92a13bed-c081-4c7b-87f3-bafebefb79f2 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.12/site-packages/oslo_service/periodic_task.py:210
Oct 09 16:37:42 compute-0 nova_compute[117331]: 2025-10-09 16:37:42.307 2 DEBUG oslo_service.periodic_task [None req-92a13bed-c081-4c7b-87f3-bafebefb79f2 - - - - - -] Running periodic task ComputeManager._run_pending_deletes run_periodic_tasks /usr/lib/python3.12/site-packages/oslo_service/periodic_task.py:210
Oct 09 16:37:42 compute-0 nova_compute[117331]: 2025-10-09 16:37:42.307 2 DEBUG nova.compute.manager [None req-92a13bed-c081-4c7b-87f3-bafebefb79f2 - - - - - -] Cleaning up deleted instances _run_pending_deletes /usr/lib/python3.12/site-packages/nova/compute/manager.py:11909
Oct 09 16:37:42 compute-0 nova_compute[117331]: 2025-10-09 16:37:42.310 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 09 16:37:42 compute-0 nova_compute[117331]: 2025-10-09 16:37:42.815 2 DEBUG nova.compute.manager [None req-92a13bed-c081-4c7b-87f3-bafebefb79f2 - - - - - -] There are 0 instances to clean _run_pending_deletes /usr/lib/python3.12/site-packages/nova/compute/manager.py:11918
Oct 09 16:37:46 compute-0 nova_compute[117331]: 2025-10-09 16:37:46.306 2 DEBUG oslo_service.periodic_task [None req-92a13bed-c081-4c7b-87f3-bafebefb79f2 - - - - - -] Running periodic task ComputeManager._cleanup_incomplete_migrations run_periodic_tasks /usr/lib/python3.12/site-packages/oslo_service/periodic_task.py:210
Oct 09 16:37:46 compute-0 nova_compute[117331]: 2025-10-09 16:37:46.306 2 DEBUG nova.compute.manager [None req-92a13bed-c081-4c7b-87f3-bafebefb79f2 - - - - - -] Cleaning up deleted instances with incomplete migration  _cleanup_incomplete_migrations /usr/lib/python3.12/site-packages/nova/compute/manager.py:11947
Oct 09 16:37:46 compute-0 nova_compute[117331]: 2025-10-09 16:37:46.421 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 09 16:37:47 compute-0 ovn_metadata_agent[28608]: 2025-10-09 16:37:47.129 28613 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=9954897f-aa83-45dd-8e84-289816676c2a, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '27'}),), if_exists=True) do_commit /usr/lib/python3.12/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 09 16:37:47 compute-0 nova_compute[117331]: 2025-10-09 16:37:47.311 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 09 16:37:47 compute-0 podman[152703]: 2025-10-09 16:37:47.809852211 +0000 UTC m=+0.046832744 container health_status 64fa251112d01abb34ec1bb8e22019815ddcaaaa391ce5ec91517147ac5a9994 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, url=https://catalog.redhat.com/en/search?searchType=containers, version=9.6, com.redhat.component=ubi9-minimal-container, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.tags=minimal rhel9, managed_by=edpm_ansible, io.buildah.version=1.33.7, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, architecture=x86_64, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, vcs-type=git, config_id=edpm, vendor=Red Hat, Inc., io.openshift.expose-services=, build-date=2025-08-20T13:12:41, distribution-scope=public, release=1755695350, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., name=ubi9-minimal, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, container_name=openstack_network_exporter, maintainer=Red Hat, Inc.)
Oct 09 16:37:47 compute-0 podman[152704]: 2025-10-09 16:37:47.837168166 +0000 UTC m=+0.070983439 container health_status d615e4f24ff62fbb14be4a63e6da568ec5b22309773c1c66103b6b4de64a8a2f (image=38.102.83.66:5001/podified-master-centos10/openstack-ovn-controller:watcher_latest, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=watcher_latest, managed_by=edpm_ansible, org.label-schema.build-date=20251007, org.label-schema.schema-version=1.0, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 10 Base Image, tcib_managed=true, container_name=ovn_controller, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': '38.102.83.66:5001/podified-master-centos10/openstack-ovn-controller:watcher_latest', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.4)
Oct 09 16:37:51 compute-0 nova_compute[117331]: 2025-10-09 16:37:51.423 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 09 16:37:52 compute-0 nova_compute[117331]: 2025-10-09 16:37:52.133 2 DEBUG oslo_concurrency.lockutils [None req-08ea4e12-77f1-4845-8b4e-dab1be785597 685b4924c5a04af7ae6f4a328bb50f14 a60acfe52e4b4b7f912654a59f0978b7 - - default default] Acquiring lock "5963f55d-5e3f-4b07-86ef-554333267b8f" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:405
Oct 09 16:37:52 compute-0 nova_compute[117331]: 2025-10-09 16:37:52.133 2 DEBUG oslo_concurrency.lockutils [None req-08ea4e12-77f1-4845-8b4e-dab1be785597 685b4924c5a04af7ae6f4a328bb50f14 a60acfe52e4b4b7f912654a59f0978b7 - - default default] Lock "5963f55d-5e3f-4b07-86ef-554333267b8f" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.000s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:410
Oct 09 16:37:52 compute-0 nova_compute[117331]: 2025-10-09 16:37:52.312 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 09 16:37:52 compute-0 nova_compute[117331]: 2025-10-09 16:37:52.639 2 DEBUG nova.compute.manager [None req-08ea4e12-77f1-4845-8b4e-dab1be785597 685b4924c5a04af7ae6f4a328bb50f14 a60acfe52e4b4b7f912654a59f0978b7 - - default default] [instance: 5963f55d-5e3f-4b07-86ef-554333267b8f] Starting instance... _do_build_and_run_instance /usr/lib/python3.12/site-packages/nova/compute/manager.py:2472
Oct 09 16:37:53 compute-0 nova_compute[117331]: 2025-10-09 16:37:53.187 2 DEBUG oslo_concurrency.lockutils [None req-08ea4e12-77f1-4845-8b4e-dab1be785597 685b4924c5a04af7ae6f4a328bb50f14 a60acfe52e4b4b7f912654a59f0978b7 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:405
Oct 09 16:37:53 compute-0 nova_compute[117331]: 2025-10-09 16:37:53.187 2 DEBUG oslo_concurrency.lockutils [None req-08ea4e12-77f1-4845-8b4e-dab1be785597 685b4924c5a04af7ae6f4a328bb50f14 a60acfe52e4b4b7f912654a59f0978b7 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.000s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:410
Oct 09 16:37:53 compute-0 nova_compute[117331]: 2025-10-09 16:37:53.194 2 DEBUG nova.virt.hardware [None req-08ea4e12-77f1-4845-8b4e-dab1be785597 685b4924c5a04af7ae6f4a328bb50f14 a60acfe52e4b4b7f912654a59f0978b7 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.12/site-packages/nova/virt/hardware.py:2528
Oct 09 16:37:53 compute-0 nova_compute[117331]: 2025-10-09 16:37:53.194 2 INFO nova.compute.claims [None req-08ea4e12-77f1-4845-8b4e-dab1be785597 685b4924c5a04af7ae6f4a328bb50f14 a60acfe52e4b4b7f912654a59f0978b7 - - default default] [instance: 5963f55d-5e3f-4b07-86ef-554333267b8f] Claim successful on node compute-0.ctlplane.example.com
Oct 09 16:37:54 compute-0 nova_compute[117331]: 2025-10-09 16:37:54.266 2 DEBUG nova.compute.provider_tree [None req-08ea4e12-77f1-4845-8b4e-dab1be785597 685b4924c5a04af7ae6f4a328bb50f14 a60acfe52e4b4b7f912654a59f0978b7 - - default default] Inventory has not changed in ProviderTree for provider: 593051b8-2000-437f-a915-2616fc8b1671 update_inventory /usr/lib/python3.12/site-packages/nova/compute/provider_tree.py:180
Oct 09 16:37:54 compute-0 nova_compute[117331]: 2025-10-09 16:37:54.780 2 DEBUG nova.scheduler.client.report [None req-08ea4e12-77f1-4845-8b4e-dab1be785597 685b4924c5a04af7ae6f4a328bb50f14 a60acfe52e4b4b7f912654a59f0978b7 - - default default] Inventory has not changed for provider 593051b8-2000-437f-a915-2616fc8b1671 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 79, 'reserved': 1, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.12/site-packages/nova/scheduler/client/report.py:958
Oct 09 16:37:55 compute-0 nova_compute[117331]: 2025-10-09 16:37:55.291 2 DEBUG oslo_concurrency.lockutils [None req-08ea4e12-77f1-4845-8b4e-dab1be785597 685b4924c5a04af7ae6f4a328bb50f14 a60acfe52e4b4b7f912654a59f0978b7 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 2.104s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:424
Oct 09 16:37:55 compute-0 nova_compute[117331]: 2025-10-09 16:37:55.292 2 DEBUG nova.compute.manager [None req-08ea4e12-77f1-4845-8b4e-dab1be785597 685b4924c5a04af7ae6f4a328bb50f14 a60acfe52e4b4b7f912654a59f0978b7 - - default default] [instance: 5963f55d-5e3f-4b07-86ef-554333267b8f] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.12/site-packages/nova/compute/manager.py:2869
Oct 09 16:37:55 compute-0 nova_compute[117331]: 2025-10-09 16:37:55.804 2 DEBUG nova.compute.manager [None req-08ea4e12-77f1-4845-8b4e-dab1be785597 685b4924c5a04af7ae6f4a328bb50f14 a60acfe52e4b4b7f912654a59f0978b7 - - default default] [instance: 5963f55d-5e3f-4b07-86ef-554333267b8f] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.12/site-packages/nova/compute/manager.py:2016
Oct 09 16:37:55 compute-0 nova_compute[117331]: 2025-10-09 16:37:55.804 2 DEBUG nova.network.neutron [None req-08ea4e12-77f1-4845-8b4e-dab1be785597 685b4924c5a04af7ae6f4a328bb50f14 a60acfe52e4b4b7f912654a59f0978b7 - - default default] [instance: 5963f55d-5e3f-4b07-86ef-554333267b8f] allocate_for_instance() allocate_for_instance /usr/lib/python3.12/site-packages/nova/network/neutron.py:1208
Oct 09 16:37:55 compute-0 nova_compute[117331]: 2025-10-09 16:37:55.805 2 WARNING neutronclient.v2_0.client [None req-08ea4e12-77f1-4845-8b4e-dab1be785597 685b4924c5a04af7ae6f4a328bb50f14 a60acfe52e4b4b7f912654a59f0978b7 - - default default] The python binding code in neutronclient is deprecated in favor of OpenstackSDK, please use that as this will be removed in a future release.
Oct 09 16:37:55 compute-0 nova_compute[117331]: 2025-10-09 16:37:55.805 2 WARNING neutronclient.v2_0.client [None req-08ea4e12-77f1-4845-8b4e-dab1be785597 685b4924c5a04af7ae6f4a328bb50f14 a60acfe52e4b4b7f912654a59f0978b7 - - default default] The python binding code in neutronclient is deprecated in favor of OpenstackSDK, please use that as this will be removed in a future release.
Oct 09 16:37:56 compute-0 nova_compute[117331]: 2025-10-09 16:37:56.313 2 INFO nova.virt.libvirt.driver [None req-08ea4e12-77f1-4845-8b4e-dab1be785597 685b4924c5a04af7ae6f4a328bb50f14 a60acfe52e4b4b7f912654a59f0978b7 - - default default] [instance: 5963f55d-5e3f-4b07-86ef-554333267b8f] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Oct 09 16:37:56 compute-0 nova_compute[117331]: 2025-10-09 16:37:56.457 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 09 16:37:56 compute-0 nova_compute[117331]: 2025-10-09 16:37:56.823 2 DEBUG nova.compute.manager [None req-08ea4e12-77f1-4845-8b4e-dab1be785597 685b4924c5a04af7ae6f4a328bb50f14 a60acfe52e4b4b7f912654a59f0978b7 - - default default] [instance: 5963f55d-5e3f-4b07-86ef-554333267b8f] Start building block device mappings for instance. _build_resources /usr/lib/python3.12/site-packages/nova/compute/manager.py:2904
Oct 09 16:37:57 compute-0 nova_compute[117331]: 2025-10-09 16:37:57.313 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 09 16:37:57 compute-0 nova_compute[117331]: 2025-10-09 16:37:57.841 2 DEBUG nova.compute.manager [None req-08ea4e12-77f1-4845-8b4e-dab1be785597 685b4924c5a04af7ae6f4a328bb50f14 a60acfe52e4b4b7f912654a59f0978b7 - - default default] [instance: 5963f55d-5e3f-4b07-86ef-554333267b8f] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.12/site-packages/nova/compute/manager.py:2678
Oct 09 16:37:57 compute-0 nova_compute[117331]: 2025-10-09 16:37:57.844 2 DEBUG nova.virt.libvirt.driver [None req-08ea4e12-77f1-4845-8b4e-dab1be785597 685b4924c5a04af7ae6f4a328bb50f14 a60acfe52e4b4b7f912654a59f0978b7 - - default default] [instance: 5963f55d-5e3f-4b07-86ef-554333267b8f] Creating instance directory _create_image /usr/lib/python3.12/site-packages/nova/virt/libvirt/driver.py:5138
Oct 09 16:37:57 compute-0 nova_compute[117331]: 2025-10-09 16:37:57.844 2 INFO nova.virt.libvirt.driver [None req-08ea4e12-77f1-4845-8b4e-dab1be785597 685b4924c5a04af7ae6f4a328bb50f14 a60acfe52e4b4b7f912654a59f0978b7 - - default default] [instance: 5963f55d-5e3f-4b07-86ef-554333267b8f] Creating image(s)
Oct 09 16:37:57 compute-0 nova_compute[117331]: 2025-10-09 16:37:57.845 2 DEBUG oslo_concurrency.lockutils [None req-08ea4e12-77f1-4845-8b4e-dab1be785597 685b4924c5a04af7ae6f4a328bb50f14 a60acfe52e4b4b7f912654a59f0978b7 - - default default] Acquiring lock "/var/lib/nova/instances/5963f55d-5e3f-4b07-86ef-554333267b8f/disk.info" by "nova.virt.libvirt.imagebackend.Image.resolve_driver_format.<locals>.write_to_disk_info_file" inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:405
Oct 09 16:37:57 compute-0 nova_compute[117331]: 2025-10-09 16:37:57.845 2 DEBUG oslo_concurrency.lockutils [None req-08ea4e12-77f1-4845-8b4e-dab1be785597 685b4924c5a04af7ae6f4a328bb50f14 a60acfe52e4b4b7f912654a59f0978b7 - - default default] Lock "/var/lib/nova/instances/5963f55d-5e3f-4b07-86ef-554333267b8f/disk.info" acquired by "nova.virt.libvirt.imagebackend.Image.resolve_driver_format.<locals>.write_to_disk_info_file" :: waited 0.000s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:410
Oct 09 16:37:57 compute-0 nova_compute[117331]: 2025-10-09 16:37:57.846 2 DEBUG oslo_concurrency.lockutils [None req-08ea4e12-77f1-4845-8b4e-dab1be785597 685b4924c5a04af7ae6f4a328bb50f14 a60acfe52e4b4b7f912654a59f0978b7 - - default default] Lock "/var/lib/nova/instances/5963f55d-5e3f-4b07-86ef-554333267b8f/disk.info" "released" by "nova.virt.libvirt.imagebackend.Image.resolve_driver_format.<locals>.write_to_disk_info_file" :: held 0.001s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:424
Oct 09 16:37:57 compute-0 nova_compute[117331]: 2025-10-09 16:37:57.847 2 DEBUG oslo_utils.imageutils.format_inspector [None req-08ea4e12-77f1-4845-8b4e-dab1be785597 685b4924c5a04af7ae6f4a328bb50f14 a60acfe52e4b4b7f912654a59f0978b7 - - default default] Format inspector for vmdk does not match, excluding from consideration (Signature KDMV not found: b'\xebH\x90\x00') _process_chunk /usr/lib/python3.12/site-packages/oslo_utils/imageutils/format_inspector.py:1365
Oct 09 16:37:57 compute-0 nova_compute[117331]: 2025-10-09 16:37:57.851 2 DEBUG oslo_utils.imageutils.format_inspector [None req-08ea4e12-77f1-4845-8b4e-dab1be785597 685b4924c5a04af7ae6f4a328bb50f14 a60acfe52e4b4b7f912654a59f0978b7 - - default default] Format inspector for vhdx does not match, excluding from consideration (Region signature not found at 30000) _process_chunk /usr/lib/python3.12/site-packages/oslo_utils/imageutils/format_inspector.py:1365
Oct 09 16:37:57 compute-0 nova_compute[117331]: 2025-10-09 16:37:57.853 2 DEBUG oslo_concurrency.processutils [None req-08ea4e12-77f1-4845-8b4e-dab1be785597 685b4924c5a04af7ae6f4a328bb50f14 a60acfe52e4b4b7f912654a59f0978b7 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/cea3dacfdea0f3734ae526b812744e847bc2d356 --force-share --output=json execute /usr/lib/python3.12/site-packages/oslo_concurrency/processutils.py:349
Oct 09 16:37:57 compute-0 nova_compute[117331]: 2025-10-09 16:37:57.906 2 DEBUG oslo_concurrency.processutils [None req-08ea4e12-77f1-4845-8b4e-dab1be785597 685b4924c5a04af7ae6f4a328bb50f14 a60acfe52e4b4b7f912654a59f0978b7 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/cea3dacfdea0f3734ae526b812744e847bc2d356 --force-share --output=json" returned: 0 in 0.052s execute /usr/lib/python3.12/site-packages/oslo_concurrency/processutils.py:372
Oct 09 16:37:57 compute-0 nova_compute[117331]: 2025-10-09 16:37:57.907 2 DEBUG oslo_concurrency.lockutils [None req-08ea4e12-77f1-4845-8b4e-dab1be785597 685b4924c5a04af7ae6f4a328bb50f14 a60acfe52e4b4b7f912654a59f0978b7 - - default default] Acquiring lock "cea3dacfdea0f3734ae526b812744e847bc2d356" by "nova.virt.libvirt.imagebackend.Qcow2.create_image.<locals>.create_qcow2_image" inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:405
Oct 09 16:37:57 compute-0 nova_compute[117331]: 2025-10-09 16:37:57.908 2 DEBUG oslo_concurrency.lockutils [None req-08ea4e12-77f1-4845-8b4e-dab1be785597 685b4924c5a04af7ae6f4a328bb50f14 a60acfe52e4b4b7f912654a59f0978b7 - - default default] Lock "cea3dacfdea0f3734ae526b812744e847bc2d356" acquired by "nova.virt.libvirt.imagebackend.Qcow2.create_image.<locals>.create_qcow2_image" :: waited 0.001s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:410
Oct 09 16:37:57 compute-0 nova_compute[117331]: 2025-10-09 16:37:57.909 2 DEBUG oslo_utils.imageutils.format_inspector [None req-08ea4e12-77f1-4845-8b4e-dab1be785597 685b4924c5a04af7ae6f4a328bb50f14 a60acfe52e4b4b7f912654a59f0978b7 - - default default] Format inspector for vmdk does not match, excluding from consideration (Signature KDMV not found: b'\xebH\x90\x00') _process_chunk /usr/lib/python3.12/site-packages/oslo_utils/imageutils/format_inspector.py:1365
Oct 09 16:37:57 compute-0 nova_compute[117331]: 2025-10-09 16:37:57.915 2 DEBUG oslo_utils.imageutils.format_inspector [None req-08ea4e12-77f1-4845-8b4e-dab1be785597 685b4924c5a04af7ae6f4a328bb50f14 a60acfe52e4b4b7f912654a59f0978b7 - - default default] Format inspector for vhdx does not match, excluding from consideration (Region signature not found at 30000) _process_chunk /usr/lib/python3.12/site-packages/oslo_utils/imageutils/format_inspector.py:1365
Oct 09 16:37:57 compute-0 nova_compute[117331]: 2025-10-09 16:37:57.915 2 DEBUG oslo_concurrency.processutils [None req-08ea4e12-77f1-4845-8b4e-dab1be785597 685b4924c5a04af7ae6f4a328bb50f14 a60acfe52e4b4b7f912654a59f0978b7 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/cea3dacfdea0f3734ae526b812744e847bc2d356 --force-share --output=json execute /usr/lib/python3.12/site-packages/oslo_concurrency/processutils.py:349
Oct 09 16:37:57 compute-0 nova_compute[117331]: 2025-10-09 16:37:57.952 2 DEBUG nova.network.neutron [None req-08ea4e12-77f1-4845-8b4e-dab1be785597 685b4924c5a04af7ae6f4a328bb50f14 a60acfe52e4b4b7f912654a59f0978b7 - - default default] [instance: 5963f55d-5e3f-4b07-86ef-554333267b8f] Successfully created port: 0060f48c-30fd-4d98-810c-c9953d57fbb2 _create_port_minimal /usr/lib/python3.12/site-packages/nova/network/neutron.py:550
Oct 09 16:37:57 compute-0 nova_compute[117331]: 2025-10-09 16:37:57.969 2 DEBUG oslo_concurrency.processutils [None req-08ea4e12-77f1-4845-8b4e-dab1be785597 685b4924c5a04af7ae6f4a328bb50f14 a60acfe52e4b4b7f912654a59f0978b7 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/cea3dacfdea0f3734ae526b812744e847bc2d356 --force-share --output=json" returned: 0 in 0.053s execute /usr/lib/python3.12/site-packages/oslo_concurrency/processutils.py:372
Oct 09 16:37:57 compute-0 nova_compute[117331]: 2025-10-09 16:37:57.969 2 DEBUG oslo_concurrency.processutils [None req-08ea4e12-77f1-4845-8b4e-dab1be785597 685b4924c5a04af7ae6f4a328bb50f14 a60acfe52e4b4b7f912654a59f0978b7 - - default default] Running cmd (subprocess): env LC_ALL=C LANG=C qemu-img create -f qcow2 -o backing_file=/var/lib/nova/instances/_base/cea3dacfdea0f3734ae526b812744e847bc2d356,backing_fmt=raw /var/lib/nova/instances/5963f55d-5e3f-4b07-86ef-554333267b8f/disk 1073741824 execute /usr/lib/python3.12/site-packages/oslo_concurrency/processutils.py:349
Oct 09 16:37:58 compute-0 nova_compute[117331]: 2025-10-09 16:37:58.008 2 DEBUG oslo_concurrency.processutils [None req-08ea4e12-77f1-4845-8b4e-dab1be785597 685b4924c5a04af7ae6f4a328bb50f14 a60acfe52e4b4b7f912654a59f0978b7 - - default default] CMD "env LC_ALL=C LANG=C qemu-img create -f qcow2 -o backing_file=/var/lib/nova/instances/_base/cea3dacfdea0f3734ae526b812744e847bc2d356,backing_fmt=raw /var/lib/nova/instances/5963f55d-5e3f-4b07-86ef-554333267b8f/disk 1073741824" returned: 0 in 0.038s execute /usr/lib/python3.12/site-packages/oslo_concurrency/processutils.py:372
Oct 09 16:37:58 compute-0 nova_compute[117331]: 2025-10-09 16:37:58.010 2 DEBUG oslo_concurrency.lockutils [None req-08ea4e12-77f1-4845-8b4e-dab1be785597 685b4924c5a04af7ae6f4a328bb50f14 a60acfe52e4b4b7f912654a59f0978b7 - - default default] Lock "cea3dacfdea0f3734ae526b812744e847bc2d356" "released" by "nova.virt.libvirt.imagebackend.Qcow2.create_image.<locals>.create_qcow2_image" :: held 0.102s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:424
Oct 09 16:37:58 compute-0 nova_compute[117331]: 2025-10-09 16:37:58.010 2 DEBUG oslo_concurrency.processutils [None req-08ea4e12-77f1-4845-8b4e-dab1be785597 685b4924c5a04af7ae6f4a328bb50f14 a60acfe52e4b4b7f912654a59f0978b7 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/cea3dacfdea0f3734ae526b812744e847bc2d356 --force-share --output=json execute /usr/lib/python3.12/site-packages/oslo_concurrency/processutils.py:349
Oct 09 16:37:58 compute-0 nova_compute[117331]: 2025-10-09 16:37:58.065 2 DEBUG oslo_concurrency.processutils [None req-08ea4e12-77f1-4845-8b4e-dab1be785597 685b4924c5a04af7ae6f4a328bb50f14 a60acfe52e4b4b7f912654a59f0978b7 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/cea3dacfdea0f3734ae526b812744e847bc2d356 --force-share --output=json" returned: 0 in 0.055s execute /usr/lib/python3.12/site-packages/oslo_concurrency/processutils.py:372
Oct 09 16:37:58 compute-0 nova_compute[117331]: 2025-10-09 16:37:58.067 2 DEBUG nova.virt.disk.api [None req-08ea4e12-77f1-4845-8b4e-dab1be785597 685b4924c5a04af7ae6f4a328bb50f14 a60acfe52e4b4b7f912654a59f0978b7 - - default default] Checking if we can resize image /var/lib/nova/instances/5963f55d-5e3f-4b07-86ef-554333267b8f/disk. size=1073741824 can_resize_image /usr/lib/python3.12/site-packages/nova/virt/disk/api.py:164
Oct 09 16:37:58 compute-0 nova_compute[117331]: 2025-10-09 16:37:58.068 2 DEBUG oslo_concurrency.processutils [None req-08ea4e12-77f1-4845-8b4e-dab1be785597 685b4924c5a04af7ae6f4a328bb50f14 a60acfe52e4b4b7f912654a59f0978b7 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/5963f55d-5e3f-4b07-86ef-554333267b8f/disk --force-share --output=json execute /usr/lib/python3.12/site-packages/oslo_concurrency/processutils.py:349
Oct 09 16:37:58 compute-0 nova_compute[117331]: 2025-10-09 16:37:58.126 2 DEBUG oslo_concurrency.processutils [None req-08ea4e12-77f1-4845-8b4e-dab1be785597 685b4924c5a04af7ae6f4a328bb50f14 a60acfe52e4b4b7f912654a59f0978b7 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/5963f55d-5e3f-4b07-86ef-554333267b8f/disk --force-share --output=json" returned: 0 in 0.058s execute /usr/lib/python3.12/site-packages/oslo_concurrency/processutils.py:372
Oct 09 16:37:58 compute-0 nova_compute[117331]: 2025-10-09 16:37:58.128 2 DEBUG nova.virt.disk.api [None req-08ea4e12-77f1-4845-8b4e-dab1be785597 685b4924c5a04af7ae6f4a328bb50f14 a60acfe52e4b4b7f912654a59f0978b7 - - default default] Cannot resize image /var/lib/nova/instances/5963f55d-5e3f-4b07-86ef-554333267b8f/disk to a smaller size. can_resize_image /usr/lib/python3.12/site-packages/nova/virt/disk/api.py:170
Oct 09 16:37:58 compute-0 nova_compute[117331]: 2025-10-09 16:37:58.128 2 DEBUG nova.virt.libvirt.driver [None req-08ea4e12-77f1-4845-8b4e-dab1be785597 685b4924c5a04af7ae6f4a328bb50f14 a60acfe52e4b4b7f912654a59f0978b7 - - default default] [instance: 5963f55d-5e3f-4b07-86ef-554333267b8f] Created local disks _create_image /usr/lib/python3.12/site-packages/nova/virt/libvirt/driver.py:5270
Oct 09 16:37:58 compute-0 nova_compute[117331]: 2025-10-09 16:37:58.129 2 DEBUG nova.virt.libvirt.driver [None req-08ea4e12-77f1-4845-8b4e-dab1be785597 685b4924c5a04af7ae6f4a328bb50f14 a60acfe52e4b4b7f912654a59f0978b7 - - default default] [instance: 5963f55d-5e3f-4b07-86ef-554333267b8f] Ensure instance console log exists: /var/lib/nova/instances/5963f55d-5e3f-4b07-86ef-554333267b8f/console.log _ensure_console_log_for_instance /usr/lib/python3.12/site-packages/nova/virt/libvirt/driver.py:5017
Oct 09 16:37:58 compute-0 nova_compute[117331]: 2025-10-09 16:37:58.130 2 DEBUG oslo_concurrency.lockutils [None req-08ea4e12-77f1-4845-8b4e-dab1be785597 685b4924c5a04af7ae6f4a328bb50f14 a60acfe52e4b4b7f912654a59f0978b7 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:405
Oct 09 16:37:58 compute-0 nova_compute[117331]: 2025-10-09 16:37:58.130 2 DEBUG oslo_concurrency.lockutils [None req-08ea4e12-77f1-4845-8b4e-dab1be785597 685b4924c5a04af7ae6f4a328bb50f14 a60acfe52e4b4b7f912654a59f0978b7 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.001s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:410
Oct 09 16:37:58 compute-0 nova_compute[117331]: 2025-10-09 16:37:58.131 2 DEBUG oslo_concurrency.lockutils [None req-08ea4e12-77f1-4845-8b4e-dab1be785597 685b4924c5a04af7ae6f4a328bb50f14 a60acfe52e4b4b7f912654a59f0978b7 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.001s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:424
Oct 09 16:37:58 compute-0 nova_compute[117331]: 2025-10-09 16:37:58.478 2 DEBUG nova.network.neutron [None req-08ea4e12-77f1-4845-8b4e-dab1be785597 685b4924c5a04af7ae6f4a328bb50f14 a60acfe52e4b4b7f912654a59f0978b7 - - default default] [instance: 5963f55d-5e3f-4b07-86ef-554333267b8f] Successfully updated port: 0060f48c-30fd-4d98-810c-c9953d57fbb2 _update_port /usr/lib/python3.12/site-packages/nova/network/neutron.py:588
Oct 09 16:37:58 compute-0 nova_compute[117331]: 2025-10-09 16:37:58.547 2 DEBUG nova.compute.manager [req-614d4472-a237-488e-bffe-2f046c39e4f0 req-4bf923ad-ad5a-4bf8-918b-ad700f91abd8 ee15961ad2be4bada6c106ac4f69f422 c076c42271004e7aa86e84416e33f826 - - default default] [instance: 5963f55d-5e3f-4b07-86ef-554333267b8f] Received event network-changed-0060f48c-30fd-4d98-810c-c9953d57fbb2 external_instance_event /usr/lib/python3.12/site-packages/nova/compute/manager.py:11812
Oct 09 16:37:58 compute-0 nova_compute[117331]: 2025-10-09 16:37:58.547 2 DEBUG nova.compute.manager [req-614d4472-a237-488e-bffe-2f046c39e4f0 req-4bf923ad-ad5a-4bf8-918b-ad700f91abd8 ee15961ad2be4bada6c106ac4f69f422 c076c42271004e7aa86e84416e33f826 - - default default] [instance: 5963f55d-5e3f-4b07-86ef-554333267b8f] Refreshing instance network info cache due to event network-changed-0060f48c-30fd-4d98-810c-c9953d57fbb2. external_instance_event /usr/lib/python3.12/site-packages/nova/compute/manager.py:11817
Oct 09 16:37:58 compute-0 nova_compute[117331]: 2025-10-09 16:37:58.548 2 DEBUG oslo_concurrency.lockutils [req-614d4472-a237-488e-bffe-2f046c39e4f0 req-4bf923ad-ad5a-4bf8-918b-ad700f91abd8 ee15961ad2be4bada6c106ac4f69f422 c076c42271004e7aa86e84416e33f826 - - default default] Acquiring lock "refresh_cache-5963f55d-5e3f-4b07-86ef-554333267b8f" lock /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:313
Oct 09 16:37:58 compute-0 nova_compute[117331]: 2025-10-09 16:37:58.548 2 DEBUG oslo_concurrency.lockutils [req-614d4472-a237-488e-bffe-2f046c39e4f0 req-4bf923ad-ad5a-4bf8-918b-ad700f91abd8 ee15961ad2be4bada6c106ac4f69f422 c076c42271004e7aa86e84416e33f826 - - default default] Acquired lock "refresh_cache-5963f55d-5e3f-4b07-86ef-554333267b8f" lock /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:316
Oct 09 16:37:58 compute-0 nova_compute[117331]: 2025-10-09 16:37:58.549 2 DEBUG nova.network.neutron [req-614d4472-a237-488e-bffe-2f046c39e4f0 req-4bf923ad-ad5a-4bf8-918b-ad700f91abd8 ee15961ad2be4bada6c106ac4f69f422 c076c42271004e7aa86e84416e33f826 - - default default] [instance: 5963f55d-5e3f-4b07-86ef-554333267b8f] Refreshing network info cache for port 0060f48c-30fd-4d98-810c-c9953d57fbb2 _get_instance_nw_info /usr/lib/python3.12/site-packages/nova/network/neutron.py:2067
Oct 09 16:37:58 compute-0 podman[152764]: 2025-10-09 16:37:58.866648855 +0000 UTC m=+0.087787861 container health_status 270830b5167cafbab735d8118980f26987e673cd0fa464dd2202394830bcca66 (image=38.102.83.66:5001/podified-master-centos10/openstack-multipathd:watcher_latest, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': '38.102.83.66:5001/podified-master-centos10/openstack-multipathd:watcher_latest', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, config_id=multipathd, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251007, org.label-schema.name=CentOS Stream 10 Base Image, tcib_build_tag=watcher_latest, io.buildah.version=1.41.4)
Oct 09 16:37:58 compute-0 nova_compute[117331]: 2025-10-09 16:37:58.984 2 DEBUG oslo_concurrency.lockutils [None req-08ea4e12-77f1-4845-8b4e-dab1be785597 685b4924c5a04af7ae6f4a328bb50f14 a60acfe52e4b4b7f912654a59f0978b7 - - default default] Acquiring lock "refresh_cache-5963f55d-5e3f-4b07-86ef-554333267b8f" lock /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:313
Oct 09 16:37:59 compute-0 nova_compute[117331]: 2025-10-09 16:37:59.055 2 WARNING neutronclient.v2_0.client [req-614d4472-a237-488e-bffe-2f046c39e4f0 req-4bf923ad-ad5a-4bf8-918b-ad700f91abd8 ee15961ad2be4bada6c106ac4f69f422 c076c42271004e7aa86e84416e33f826 - - default default] The python binding code in neutronclient is deprecated in favor of OpenstackSDK, please use that as this will be removed in a future release.
Oct 09 16:37:59 compute-0 nova_compute[117331]: 2025-10-09 16:37:59.396 2 DEBUG nova.network.neutron [req-614d4472-a237-488e-bffe-2f046c39e4f0 req-4bf923ad-ad5a-4bf8-918b-ad700f91abd8 ee15961ad2be4bada6c106ac4f69f422 c076c42271004e7aa86e84416e33f826 - - default default] [instance: 5963f55d-5e3f-4b07-86ef-554333267b8f] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.12/site-packages/nova/network/neutron.py:3383
Oct 09 16:37:59 compute-0 nova_compute[117331]: 2025-10-09 16:37:59.533 2 DEBUG nova.network.neutron [req-614d4472-a237-488e-bffe-2f046c39e4f0 req-4bf923ad-ad5a-4bf8-918b-ad700f91abd8 ee15961ad2be4bada6c106ac4f69f422 c076c42271004e7aa86e84416e33f826 - - default default] [instance: 5963f55d-5e3f-4b07-86ef-554333267b8f] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.12/site-packages/nova/network/neutron.py:116
Oct 09 16:37:59 compute-0 podman[127775]: time="2025-10-09T16:37:59Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Oct 09 16:37:59 compute-0 podman[127775]: @ - - [09/Oct/2025:16:37:59 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 19526 "" "Go-http-client/1.1"
Oct 09 16:37:59 compute-0 podman[127775]: @ - - [09/Oct/2025:16:37:59 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 3026 "" "Go-http-client/1.1"
Oct 09 16:38:00 compute-0 nova_compute[117331]: 2025-10-09 16:38:00.039 2 DEBUG oslo_concurrency.lockutils [req-614d4472-a237-488e-bffe-2f046c39e4f0 req-4bf923ad-ad5a-4bf8-918b-ad700f91abd8 ee15961ad2be4bada6c106ac4f69f422 c076c42271004e7aa86e84416e33f826 - - default default] Releasing lock "refresh_cache-5963f55d-5e3f-4b07-86ef-554333267b8f" lock /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:334
Oct 09 16:38:00 compute-0 nova_compute[117331]: 2025-10-09 16:38:00.041 2 DEBUG oslo_concurrency.lockutils [None req-08ea4e12-77f1-4845-8b4e-dab1be785597 685b4924c5a04af7ae6f4a328bb50f14 a60acfe52e4b4b7f912654a59f0978b7 - - default default] Acquired lock "refresh_cache-5963f55d-5e3f-4b07-86ef-554333267b8f" lock /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:316
Oct 09 16:38:00 compute-0 nova_compute[117331]: 2025-10-09 16:38:00.041 2 DEBUG nova.network.neutron [None req-08ea4e12-77f1-4845-8b4e-dab1be785597 685b4924c5a04af7ae6f4a328bb50f14 a60acfe52e4b4b7f912654a59f0978b7 - - default default] [instance: 5963f55d-5e3f-4b07-86ef-554333267b8f] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.12/site-packages/nova/network/neutron.py:2070
Oct 09 16:38:01 compute-0 nova_compute[117331]: 2025-10-09 16:38:01.394 2 DEBUG nova.network.neutron [None req-08ea4e12-77f1-4845-8b4e-dab1be785597 685b4924c5a04af7ae6f4a328bb50f14 a60acfe52e4b4b7f912654a59f0978b7 - - default default] [instance: 5963f55d-5e3f-4b07-86ef-554333267b8f] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.12/site-packages/nova/network/neutron.py:3383
Oct 09 16:38:01 compute-0 openstack_network_exporter[129925]: ERROR   16:38:01 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Oct 09 16:38:01 compute-0 openstack_network_exporter[129925]: ERROR   16:38:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Oct 09 16:38:01 compute-0 openstack_network_exporter[129925]: ERROR   16:38:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Oct 09 16:38:01 compute-0 openstack_network_exporter[129925]: ERROR   16:38:01 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Oct 09 16:38:01 compute-0 openstack_network_exporter[129925]: 
Oct 09 16:38:01 compute-0 openstack_network_exporter[129925]: ERROR   16:38:01 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Oct 09 16:38:01 compute-0 openstack_network_exporter[129925]: 
Oct 09 16:38:01 compute-0 nova_compute[117331]: 2025-10-09 16:38:01.459 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 09 16:38:01 compute-0 nova_compute[117331]: 2025-10-09 16:38:01.626 2 WARNING neutronclient.v2_0.client [None req-08ea4e12-77f1-4845-8b4e-dab1be785597 685b4924c5a04af7ae6f4a328bb50f14 a60acfe52e4b4b7f912654a59f0978b7 - - default default] The python binding code in neutronclient is deprecated in favor of OpenstackSDK, please use that as this will be removed in a future release.
Oct 09 16:38:02 compute-0 nova_compute[117331]: 2025-10-09 16:38:02.005 2 DEBUG nova.network.neutron [None req-08ea4e12-77f1-4845-8b4e-dab1be785597 685b4924c5a04af7ae6f4a328bb50f14 a60acfe52e4b4b7f912654a59f0978b7 - - default default] [instance: 5963f55d-5e3f-4b07-86ef-554333267b8f] Updating instance_info_cache with network_info: [{"id": "0060f48c-30fd-4d98-810c-c9953d57fbb2", "address": "fa:16:3e:36:08:54", "network": {"id": "cf3aa351-4d26-41f3-8cb5-1ff2d3d995c8", "bridge": "br-int", "label": "tempest-TestExecuteWorkloadStabilizationStrategy-105109230-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "5b6d650a71ca47d58b10f5fd874c4898", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap0060f48c-30", "ovs_interfaceid": "0060f48c-30fd-4d98-810c-c9953d57fbb2", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.12/site-packages/nova/network/neutron.py:116
Oct 09 16:38:02 compute-0 nova_compute[117331]: 2025-10-09 16:38:02.334 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 09 16:38:02 compute-0 nova_compute[117331]: 2025-10-09 16:38:02.520 2 DEBUG oslo_concurrency.lockutils [None req-08ea4e12-77f1-4845-8b4e-dab1be785597 685b4924c5a04af7ae6f4a328bb50f14 a60acfe52e4b4b7f912654a59f0978b7 - - default default] Releasing lock "refresh_cache-5963f55d-5e3f-4b07-86ef-554333267b8f" lock /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:334
Oct 09 16:38:02 compute-0 nova_compute[117331]: 2025-10-09 16:38:02.521 2 DEBUG nova.compute.manager [None req-08ea4e12-77f1-4845-8b4e-dab1be785597 685b4924c5a04af7ae6f4a328bb50f14 a60acfe52e4b4b7f912654a59f0978b7 - - default default] [instance: 5963f55d-5e3f-4b07-86ef-554333267b8f] Instance network_info: |[{"id": "0060f48c-30fd-4d98-810c-c9953d57fbb2", "address": "fa:16:3e:36:08:54", "network": {"id": "cf3aa351-4d26-41f3-8cb5-1ff2d3d995c8", "bridge": "br-int", "label": "tempest-TestExecuteWorkloadStabilizationStrategy-105109230-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "5b6d650a71ca47d58b10f5fd874c4898", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap0060f48c-30", "ovs_interfaceid": "0060f48c-30fd-4d98-810c-c9953d57fbb2", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.12/site-packages/nova/compute/manager.py:2031
Oct 09 16:38:02 compute-0 nova_compute[117331]: 2025-10-09 16:38:02.523 2 DEBUG nova.virt.libvirt.driver [None req-08ea4e12-77f1-4845-8b4e-dab1be785597 685b4924c5a04af7ae6f4a328bb50f14 a60acfe52e4b4b7f912654a59f0978b7 - - default default] [instance: 5963f55d-5e3f-4b07-86ef-554333267b8f] Start _get_guest_xml network_info=[{"id": "0060f48c-30fd-4d98-810c-c9953d57fbb2", "address": "fa:16:3e:36:08:54", "network": {"id": "cf3aa351-4d26-41f3-8cb5-1ff2d3d995c8", "bridge": "br-int", "label": "tempest-TestExecuteWorkloadStabilizationStrategy-105109230-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "5b6d650a71ca47d58b10f5fd874c4898", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap0060f48c-30", "ovs_interfaceid": "0060f48c-30fd-4d98-810c-c9953d57fbb2", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-09T16:07:02Z,direct_url=<?>,disk_format='qcow2',id=b7d6e0af-25e4-4227-9dc6-43143898ceee,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='b30e8cf5e10742f190212b4cb97ce2c9',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-09T16:07:04Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'device_type': 'disk', 'encrypted': False, 'size': 0, 'encryption_format': None, 'guest_format': None, 'boot_index': 0, 'device_name': '/dev/vda', 'disk_bus': 'virtio', 'encryption_options': None, 'encryption_secret_uuid': None, 'image_id': 'b7d6e0af-25e4-4227-9dc6-43143898ceee'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None}share_info=None _get_guest_xml /usr/lib/python3.12/site-packages/nova/virt/libvirt/driver.py:8046
Oct 09 16:38:02 compute-0 nova_compute[117331]: 2025-10-09 16:38:02.527 2 WARNING nova.virt.libvirt.driver [None req-08ea4e12-77f1-4845-8b4e-dab1be785597 685b4924c5a04af7ae6f4a328bb50f14 a60acfe52e4b4b7f912654a59f0978b7 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Oct 09 16:38:02 compute-0 nova_compute[117331]: 2025-10-09 16:38:02.528 2 DEBUG nova.virt.driver [None req-08ea4e12-77f1-4845-8b4e-dab1be785597 685b4924c5a04af7ae6f4a328bb50f14 a60acfe52e4b4b7f912654a59f0978b7 - - default default] InstanceDriverMetadata: InstanceDriverMetadata(root_type='image', root_id='b7d6e0af-25e4-4227-9dc6-43143898ceee', instance_meta=NovaInstanceMeta(name='tempest-TestExecuteWorkloadStabilizationStrategy-server-1807729568', uuid='5963f55d-5e3f-4b07-86ef-554333267b8f'), owner=OwnerMeta(userid='685b4924c5a04af7ae6f4a328bb50f14', username='tempest-TestExecuteWorkloadStabilizationStrategy-1000021673-project-admin', projectid='a60acfe52e4b4b7f912654a59f0978b7', projectname='tempest-TestExecuteWorkloadStabilizationStrategy-1000021673'), image=ImageMeta(id='b7d6e0af-25e4-4227-9dc6-43143898ceee', name=None, container_format='bare', disk_format='qcow2', min_disk=1, min_ram=0, properties={'hw_rng_model': 'virtio'}), flavor=FlavorMeta(name='m1.nano', flavorid='5aeb5bf9-70f3-4ab6-b418-2c3319a7d2b3', memory_mb=128, vcpus=1, root_gb=1, ephemeral_gb=0, extra_specs={'hw_rng:allowed': 'True'}, swap=0), network_info=[{"id": "0060f48c-30fd-4d98-810c-c9953d57fbb2", "address": "fa:16:3e:36:08:54", "network": {"id": "cf3aa351-4d26-41f3-8cb5-1ff2d3d995c8", "bridge": "br-int", "label": "tempest-TestExecuteWorkloadStabilizationStrategy-105109230-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "5b6d650a71ca47d58b10f5fd874c4898", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap0060f48c-30", "ovs_interfaceid": "0060f48c-30fd-4d98-810c-c9953d57fbb2", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}], nova_package='32.1.0-0.20251008143712.076498e.el10', creation_time=1760027882.5285022) get_instance_driver_metadata /usr/lib/python3.12/site-packages/nova/virt/driver.py:437
Oct 09 16:38:02 compute-0 nova_compute[117331]: 2025-10-09 16:38:02.532 2 DEBUG nova.virt.libvirt.host [None req-08ea4e12-77f1-4845-8b4e-dab1be785597 685b4924c5a04af7ae6f4a328bb50f14 a60acfe52e4b4b7f912654a59f0978b7 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.12/site-packages/nova/virt/libvirt/host.py:1698
Oct 09 16:38:02 compute-0 nova_compute[117331]: 2025-10-09 16:38:02.533 2 DEBUG nova.virt.libvirt.host [None req-08ea4e12-77f1-4845-8b4e-dab1be785597 685b4924c5a04af7ae6f4a328bb50f14 a60acfe52e4b4b7f912654a59f0978b7 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.12/site-packages/nova/virt/libvirt/host.py:1708
Oct 09 16:38:02 compute-0 nova_compute[117331]: 2025-10-09 16:38:02.536 2 DEBUG nova.virt.libvirt.host [None req-08ea4e12-77f1-4845-8b4e-dab1be785597 685b4924c5a04af7ae6f4a328bb50f14 a60acfe52e4b4b7f912654a59f0978b7 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.12/site-packages/nova/virt/libvirt/host.py:1717
Oct 09 16:38:02 compute-0 nova_compute[117331]: 2025-10-09 16:38:02.536 2 DEBUG nova.virt.libvirt.host [None req-08ea4e12-77f1-4845-8b4e-dab1be785597 685b4924c5a04af7ae6f4a328bb50f14 a60acfe52e4b4b7f912654a59f0978b7 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.12/site-packages/nova/virt/libvirt/host.py:1724
Oct 09 16:38:02 compute-0 nova_compute[117331]: 2025-10-09 16:38:02.537 2 DEBUG nova.virt.libvirt.driver [None req-08ea4e12-77f1-4845-8b4e-dab1be785597 685b4924c5a04af7ae6f4a328bb50f14 a60acfe52e4b4b7f912654a59f0978b7 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.12/site-packages/nova/virt/libvirt/driver.py:5809
Oct 09 16:38:02 compute-0 nova_compute[117331]: 2025-10-09 16:38:02.537 2 DEBUG nova.virt.hardware [None req-08ea4e12-77f1-4845-8b4e-dab1be785597 685b4924c5a04af7ae6f4a328bb50f14 a60acfe52e4b4b7f912654a59f0978b7 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-10-09T16:07:00Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='5aeb5bf9-70f3-4ab6-b418-2c3319a7d2b3',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-09T16:07:02Z,direct_url=<?>,disk_format='qcow2',id=b7d6e0af-25e4-4227-9dc6-43143898ceee,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='b30e8cf5e10742f190212b4cb97ce2c9',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-09T16:07:04Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.12/site-packages/nova/virt/hardware.py:571
Oct 09 16:38:02 compute-0 nova_compute[117331]: 2025-10-09 16:38:02.538 2 DEBUG nova.virt.hardware [None req-08ea4e12-77f1-4845-8b4e-dab1be785597 685b4924c5a04af7ae6f4a328bb50f14 a60acfe52e4b4b7f912654a59f0978b7 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.12/site-packages/nova/virt/hardware.py:356
Oct 09 16:38:02 compute-0 nova_compute[117331]: 2025-10-09 16:38:02.538 2 DEBUG nova.virt.hardware [None req-08ea4e12-77f1-4845-8b4e-dab1be785597 685b4924c5a04af7ae6f4a328bb50f14 a60acfe52e4b4b7f912654a59f0978b7 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.12/site-packages/nova/virt/hardware.py:360
Oct 09 16:38:02 compute-0 nova_compute[117331]: 2025-10-09 16:38:02.538 2 DEBUG nova.virt.hardware [None req-08ea4e12-77f1-4845-8b4e-dab1be785597 685b4924c5a04af7ae6f4a328bb50f14 a60acfe52e4b4b7f912654a59f0978b7 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.12/site-packages/nova/virt/hardware.py:396
Oct 09 16:38:02 compute-0 nova_compute[117331]: 2025-10-09 16:38:02.538 2 DEBUG nova.virt.hardware [None req-08ea4e12-77f1-4845-8b4e-dab1be785597 685b4924c5a04af7ae6f4a328bb50f14 a60acfe52e4b4b7f912654a59f0978b7 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.12/site-packages/nova/virt/hardware.py:400
Oct 09 16:38:02 compute-0 nova_compute[117331]: 2025-10-09 16:38:02.538 2 DEBUG nova.virt.hardware [None req-08ea4e12-77f1-4845-8b4e-dab1be785597 685b4924c5a04af7ae6f4a328bb50f14 a60acfe52e4b4b7f912654a59f0978b7 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.12/site-packages/nova/virt/hardware.py:438
Oct 09 16:38:02 compute-0 nova_compute[117331]: 2025-10-09 16:38:02.539 2 DEBUG nova.virt.hardware [None req-08ea4e12-77f1-4845-8b4e-dab1be785597 685b4924c5a04af7ae6f4a328bb50f14 a60acfe52e4b4b7f912654a59f0978b7 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.12/site-packages/nova/virt/hardware.py:577
Oct 09 16:38:02 compute-0 nova_compute[117331]: 2025-10-09 16:38:02.539 2 DEBUG nova.virt.hardware [None req-08ea4e12-77f1-4845-8b4e-dab1be785597 685b4924c5a04af7ae6f4a328bb50f14 a60acfe52e4b4b7f912654a59f0978b7 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.12/site-packages/nova/virt/hardware.py:479
Oct 09 16:38:02 compute-0 nova_compute[117331]: 2025-10-09 16:38:02.539 2 DEBUG nova.virt.hardware [None req-08ea4e12-77f1-4845-8b4e-dab1be785597 685b4924c5a04af7ae6f4a328bb50f14 a60acfe52e4b4b7f912654a59f0978b7 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.12/site-packages/nova/virt/hardware.py:509
Oct 09 16:38:02 compute-0 nova_compute[117331]: 2025-10-09 16:38:02.539 2 DEBUG nova.virt.hardware [None req-08ea4e12-77f1-4845-8b4e-dab1be785597 685b4924c5a04af7ae6f4a328bb50f14 a60acfe52e4b4b7f912654a59f0978b7 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.12/site-packages/nova/virt/hardware.py:583
Oct 09 16:38:02 compute-0 nova_compute[117331]: 2025-10-09 16:38:02.540 2 DEBUG nova.virt.hardware [None req-08ea4e12-77f1-4845-8b4e-dab1be785597 685b4924c5a04af7ae6f4a328bb50f14 a60acfe52e4b4b7f912654a59f0978b7 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.12/site-packages/nova/virt/hardware.py:585
Oct 09 16:38:02 compute-0 nova_compute[117331]: 2025-10-09 16:38:02.544 2 DEBUG nova.virt.libvirt.vif [None req-08ea4e12-77f1-4845-8b4e-dab1be785597 685b4924c5a04af7ae6f4a328bb50f14 a60acfe52e4b4b7f912654a59f0978b7 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,compute_id=1,config_drive='',created_at=2025-10-09T16:37:50Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description=None,display_name='tempest-TestExecuteWorkloadStabilizationStrategy-server-1807729568',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testexecuteworkloadstabilizationstrategy-server-1807729',id=28,image_ref='b7d6e0af-25e4-4227-9dc6-43143898ceee',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='a60acfe52e4b4b7f912654a59f0978b7',ramdisk_id='',reservation_id='r-hkns0u0y',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='admin,member,reader,manager',image_base_image_ref='b7d6e0af-25e4-4227-9dc6-43143898ceee',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestExecuteWorkloadStabilizationStrategy-1000021673',owner_user_name='tempest-TestExecuteWorkloadStabilizationStrategy-1000021673-project-admin'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-09T16:37:56Z,user_data=None,user_id='685b4924c5a04af7ae6f4a328bb50f14',uuid=5963f55d-5e3f-4b07-86ef-554333267b8f,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "0060f48c-30fd-4d98-810c-c9953d57fbb2", "address": "fa:16:3e:36:08:54", "network": {"id": "cf3aa351-4d26-41f3-8cb5-1ff2d3d995c8", "bridge": "br-int", "label": "tempest-TestExecuteWorkloadStabilizationStrategy-105109230-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "5b6d650a71ca47d58b10f5fd874c4898", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap0060f48c-30", "ovs_interfaceid": "0060f48c-30fd-4d98-810c-c9953d57fbb2", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.12/site-packages/nova/virt/libvirt/vif.py:574
Oct 09 16:38:02 compute-0 nova_compute[117331]: 2025-10-09 16:38:02.544 2 DEBUG nova.network.os_vif_util [None req-08ea4e12-77f1-4845-8b4e-dab1be785597 685b4924c5a04af7ae6f4a328bb50f14 a60acfe52e4b4b7f912654a59f0978b7 - - default default] Converting VIF {"id": "0060f48c-30fd-4d98-810c-c9953d57fbb2", "address": "fa:16:3e:36:08:54", "network": {"id": "cf3aa351-4d26-41f3-8cb5-1ff2d3d995c8", "bridge": "br-int", "label": "tempest-TestExecuteWorkloadStabilizationStrategy-105109230-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "5b6d650a71ca47d58b10f5fd874c4898", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap0060f48c-30", "ovs_interfaceid": "0060f48c-30fd-4d98-810c-c9953d57fbb2", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.12/site-packages/nova/network/os_vif_util.py:511
Oct 09 16:38:02 compute-0 nova_compute[117331]: 2025-10-09 16:38:02.545 2 DEBUG nova.network.os_vif_util [None req-08ea4e12-77f1-4845-8b4e-dab1be785597 685b4924c5a04af7ae6f4a328bb50f14 a60acfe52e4b4b7f912654a59f0978b7 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:36:08:54,bridge_name='br-int',has_traffic_filtering=True,id=0060f48c-30fd-4d98-810c-c9953d57fbb2,network=Network(cf3aa351-4d26-41f3-8cb5-1ff2d3d995c8),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap0060f48c-30') nova_to_osvif_vif /usr/lib/python3.12/site-packages/nova/network/os_vif_util.py:548
Oct 09 16:38:02 compute-0 nova_compute[117331]: 2025-10-09 16:38:02.546 2 DEBUG nova.objects.instance [None req-08ea4e12-77f1-4845-8b4e-dab1be785597 685b4924c5a04af7ae6f4a328bb50f14 a60acfe52e4b4b7f912654a59f0978b7 - - default default] Lazy-loading 'pci_devices' on Instance uuid 5963f55d-5e3f-4b07-86ef-554333267b8f obj_load_attr /usr/lib/python3.12/site-packages/nova/objects/instance.py:1141
Oct 09 16:38:03 compute-0 nova_compute[117331]: 2025-10-09 16:38:03.055 2 DEBUG nova.virt.libvirt.driver [None req-08ea4e12-77f1-4845-8b4e-dab1be785597 685b4924c5a04af7ae6f4a328bb50f14 a60acfe52e4b4b7f912654a59f0978b7 - - default default] [instance: 5963f55d-5e3f-4b07-86ef-554333267b8f] End _get_guest_xml xml=<domain type="kvm">
Oct 09 16:38:03 compute-0 nova_compute[117331]:   <uuid>5963f55d-5e3f-4b07-86ef-554333267b8f</uuid>
Oct 09 16:38:03 compute-0 nova_compute[117331]:   <name>instance-0000001c</name>
Oct 09 16:38:03 compute-0 nova_compute[117331]:   <memory>131072</memory>
Oct 09 16:38:03 compute-0 nova_compute[117331]:   <vcpu>1</vcpu>
Oct 09 16:38:03 compute-0 nova_compute[117331]:   <metadata>
Oct 09 16:38:03 compute-0 nova_compute[117331]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Oct 09 16:38:03 compute-0 nova_compute[117331]:       <nova:package version="32.1.0-0.20251008143712.076498e.el10"/>
Oct 09 16:38:03 compute-0 nova_compute[117331]:       <nova:name>tempest-TestExecuteWorkloadStabilizationStrategy-server-1807729568</nova:name>
Oct 09 16:38:03 compute-0 nova_compute[117331]:       <nova:creationTime>2025-10-09 16:38:02</nova:creationTime>
Oct 09 16:38:03 compute-0 nova_compute[117331]:       <nova:flavor name="m1.nano" id="5aeb5bf9-70f3-4ab6-b418-2c3319a7d2b3">
Oct 09 16:38:03 compute-0 nova_compute[117331]:         <nova:memory>128</nova:memory>
Oct 09 16:38:03 compute-0 nova_compute[117331]:         <nova:disk>1</nova:disk>
Oct 09 16:38:03 compute-0 nova_compute[117331]:         <nova:swap>0</nova:swap>
Oct 09 16:38:03 compute-0 nova_compute[117331]:         <nova:ephemeral>0</nova:ephemeral>
Oct 09 16:38:03 compute-0 nova_compute[117331]:         <nova:vcpus>1</nova:vcpus>
Oct 09 16:38:03 compute-0 nova_compute[117331]:         <nova:extraSpecs>
Oct 09 16:38:03 compute-0 nova_compute[117331]:           <nova:extraSpec name="hw_rng:allowed">True</nova:extraSpec>
Oct 09 16:38:03 compute-0 nova_compute[117331]:         </nova:extraSpecs>
Oct 09 16:38:03 compute-0 nova_compute[117331]:       </nova:flavor>
Oct 09 16:38:03 compute-0 nova_compute[117331]:       <nova:image uuid="b7d6e0af-25e4-4227-9dc6-43143898ceee">
Oct 09 16:38:03 compute-0 nova_compute[117331]:         <nova:containerFormat>bare</nova:containerFormat>
Oct 09 16:38:03 compute-0 nova_compute[117331]:         <nova:diskFormat>qcow2</nova:diskFormat>
Oct 09 16:38:03 compute-0 nova_compute[117331]:         <nova:minDisk>1</nova:minDisk>
Oct 09 16:38:03 compute-0 nova_compute[117331]:         <nova:minRam>0</nova:minRam>
Oct 09 16:38:03 compute-0 nova_compute[117331]:         <nova:properties>
Oct 09 16:38:03 compute-0 nova_compute[117331]:           <nova:property name="hw_rng_model">virtio</nova:property>
Oct 09 16:38:03 compute-0 nova_compute[117331]:         </nova:properties>
Oct 09 16:38:03 compute-0 nova_compute[117331]:       </nova:image>
Oct 09 16:38:03 compute-0 nova_compute[117331]:       <nova:owner>
Oct 09 16:38:03 compute-0 nova_compute[117331]:         <nova:user uuid="685b4924c5a04af7ae6f4a328bb50f14">tempest-TestExecuteWorkloadStabilizationStrategy-1000021673-project-admin</nova:user>
Oct 09 16:38:03 compute-0 nova_compute[117331]:         <nova:project uuid="a60acfe52e4b4b7f912654a59f0978b7">tempest-TestExecuteWorkloadStabilizationStrategy-1000021673</nova:project>
Oct 09 16:38:03 compute-0 nova_compute[117331]:       </nova:owner>
Oct 09 16:38:03 compute-0 nova_compute[117331]:       <nova:root type="image" uuid="b7d6e0af-25e4-4227-9dc6-43143898ceee"/>
Oct 09 16:38:03 compute-0 nova_compute[117331]:       <nova:ports>
Oct 09 16:38:03 compute-0 nova_compute[117331]:         <nova:port uuid="0060f48c-30fd-4d98-810c-c9953d57fbb2">
Oct 09 16:38:03 compute-0 nova_compute[117331]:           <nova:ip type="fixed" address="10.100.0.12" ipVersion="4"/>
Oct 09 16:38:03 compute-0 nova_compute[117331]:         </nova:port>
Oct 09 16:38:03 compute-0 nova_compute[117331]:       </nova:ports>
Oct 09 16:38:03 compute-0 nova_compute[117331]:     </nova:instance>
Oct 09 16:38:03 compute-0 nova_compute[117331]:   </metadata>
Oct 09 16:38:03 compute-0 nova_compute[117331]:   <sysinfo type="smbios">
Oct 09 16:38:03 compute-0 nova_compute[117331]:     <system>
Oct 09 16:38:03 compute-0 nova_compute[117331]:       <entry name="manufacturer">RDO</entry>
Oct 09 16:38:03 compute-0 nova_compute[117331]:       <entry name="product">OpenStack Compute</entry>
Oct 09 16:38:03 compute-0 nova_compute[117331]:       <entry name="version">32.1.0-0.20251008143712.076498e.el10</entry>
Oct 09 16:38:03 compute-0 nova_compute[117331]:       <entry name="serial">5963f55d-5e3f-4b07-86ef-554333267b8f</entry>
Oct 09 16:38:03 compute-0 nova_compute[117331]:       <entry name="uuid">5963f55d-5e3f-4b07-86ef-554333267b8f</entry>
Oct 09 16:38:03 compute-0 nova_compute[117331]:       <entry name="family">Virtual Machine</entry>
Oct 09 16:38:03 compute-0 nova_compute[117331]:     </system>
Oct 09 16:38:03 compute-0 nova_compute[117331]:   </sysinfo>
Oct 09 16:38:03 compute-0 nova_compute[117331]:   <os>
Oct 09 16:38:03 compute-0 nova_compute[117331]:     <type arch="x86_64" machine="q35">hvm</type>
Oct 09 16:38:03 compute-0 nova_compute[117331]:     <boot dev="hd"/>
Oct 09 16:38:03 compute-0 nova_compute[117331]:     <smbios mode="sysinfo"/>
Oct 09 16:38:03 compute-0 nova_compute[117331]:   </os>
Oct 09 16:38:03 compute-0 nova_compute[117331]:   <features>
Oct 09 16:38:03 compute-0 nova_compute[117331]:     <acpi/>
Oct 09 16:38:03 compute-0 nova_compute[117331]:     <apic/>
Oct 09 16:38:03 compute-0 nova_compute[117331]:     <vmcoreinfo/>
Oct 09 16:38:03 compute-0 nova_compute[117331]:   </features>
Oct 09 16:38:03 compute-0 nova_compute[117331]:   <clock offset="utc">
Oct 09 16:38:03 compute-0 nova_compute[117331]:     <timer name="pit" tickpolicy="delay"/>
Oct 09 16:38:03 compute-0 nova_compute[117331]:     <timer name="rtc" tickpolicy="catchup"/>
Oct 09 16:38:03 compute-0 nova_compute[117331]:     <timer name="hpet" present="no"/>
Oct 09 16:38:03 compute-0 nova_compute[117331]:   </clock>
Oct 09 16:38:03 compute-0 nova_compute[117331]:   <cpu mode="host-model" match="exact">
Oct 09 16:38:03 compute-0 nova_compute[117331]:     <topology sockets="1" cores="1" threads="1"/>
Oct 09 16:38:03 compute-0 nova_compute[117331]:   </cpu>
Oct 09 16:38:03 compute-0 nova_compute[117331]:   <devices>
Oct 09 16:38:03 compute-0 nova_compute[117331]:     <disk type="file" device="disk">
Oct 09 16:38:03 compute-0 nova_compute[117331]:       <driver name="qemu" type="qcow2" cache="none"/>
Oct 09 16:38:03 compute-0 nova_compute[117331]:       <source file="/var/lib/nova/instances/5963f55d-5e3f-4b07-86ef-554333267b8f/disk"/>
Oct 09 16:38:03 compute-0 nova_compute[117331]:       <target dev="vda" bus="virtio"/>
Oct 09 16:38:03 compute-0 nova_compute[117331]:     </disk>
Oct 09 16:38:03 compute-0 nova_compute[117331]:     <disk type="file" device="cdrom">
Oct 09 16:38:03 compute-0 nova_compute[117331]:       <driver name="qemu" type="raw" cache="none"/>
Oct 09 16:38:03 compute-0 nova_compute[117331]:       <source file="/var/lib/nova/instances/5963f55d-5e3f-4b07-86ef-554333267b8f/disk.config"/>
Oct 09 16:38:03 compute-0 nova_compute[117331]:       <target dev="sda" bus="sata"/>
Oct 09 16:38:03 compute-0 nova_compute[117331]:     </disk>
Oct 09 16:38:03 compute-0 nova_compute[117331]:     <interface type="ethernet">
Oct 09 16:38:03 compute-0 nova_compute[117331]:       <mac address="fa:16:3e:36:08:54"/>
Oct 09 16:38:03 compute-0 nova_compute[117331]:       <model type="virtio"/>
Oct 09 16:38:03 compute-0 nova_compute[117331]:       <driver name="vhost" rx_queue_size="512"/>
Oct 09 16:38:03 compute-0 nova_compute[117331]:       <mtu size="1442"/>
Oct 09 16:38:03 compute-0 nova_compute[117331]:       <target dev="tap0060f48c-30"/>
Oct 09 16:38:03 compute-0 nova_compute[117331]:     </interface>
Oct 09 16:38:03 compute-0 nova_compute[117331]:     <serial type="pty">
Oct 09 16:38:03 compute-0 nova_compute[117331]:       <log file="/var/lib/nova/instances/5963f55d-5e3f-4b07-86ef-554333267b8f/console.log" append="off"/>
Oct 09 16:38:03 compute-0 nova_compute[117331]:     </serial>
Oct 09 16:38:03 compute-0 nova_compute[117331]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Oct 09 16:38:03 compute-0 nova_compute[117331]:     <video>
Oct 09 16:38:03 compute-0 nova_compute[117331]:       <model type="virtio"/>
Oct 09 16:38:03 compute-0 nova_compute[117331]:     </video>
Oct 09 16:38:03 compute-0 nova_compute[117331]:     <input type="tablet" bus="usb"/>
Oct 09 16:38:03 compute-0 nova_compute[117331]:     <rng model="virtio">
Oct 09 16:38:03 compute-0 nova_compute[117331]:       <backend model="random">/dev/urandom</backend>
Oct 09 16:38:03 compute-0 nova_compute[117331]:     </rng>
Oct 09 16:38:03 compute-0 nova_compute[117331]:     <controller type="pci" model="pcie-root"/>
Oct 09 16:38:03 compute-0 nova_compute[117331]:     <controller type="pci" model="pcie-root-port"/>
Oct 09 16:38:03 compute-0 nova_compute[117331]:     <controller type="pci" model="pcie-root-port"/>
Oct 09 16:38:03 compute-0 nova_compute[117331]:     <controller type="pci" model="pcie-root-port"/>
Oct 09 16:38:03 compute-0 nova_compute[117331]:     <controller type="pci" model="pcie-root-port"/>
Oct 09 16:38:03 compute-0 nova_compute[117331]:     <controller type="pci" model="pcie-root-port"/>
Oct 09 16:38:03 compute-0 nova_compute[117331]:     <controller type="pci" model="pcie-root-port"/>
Oct 09 16:38:03 compute-0 nova_compute[117331]:     <controller type="pci" model="pcie-root-port"/>
Oct 09 16:38:03 compute-0 nova_compute[117331]:     <controller type="pci" model="pcie-root-port"/>
Oct 09 16:38:03 compute-0 nova_compute[117331]:     <controller type="pci" model="pcie-root-port"/>
Oct 09 16:38:03 compute-0 nova_compute[117331]:     <controller type="pci" model="pcie-root-port"/>
Oct 09 16:38:03 compute-0 nova_compute[117331]:     <controller type="pci" model="pcie-root-port"/>
Oct 09 16:38:03 compute-0 nova_compute[117331]:     <controller type="pci" model="pcie-root-port"/>
Oct 09 16:38:03 compute-0 nova_compute[117331]:     <controller type="pci" model="pcie-root-port"/>
Oct 09 16:38:03 compute-0 nova_compute[117331]:     <controller type="pci" model="pcie-root-port"/>
Oct 09 16:38:03 compute-0 nova_compute[117331]:     <controller type="pci" model="pcie-root-port"/>
Oct 09 16:38:03 compute-0 nova_compute[117331]:     <controller type="pci" model="pcie-root-port"/>
Oct 09 16:38:03 compute-0 nova_compute[117331]:     <controller type="pci" model="pcie-root-port"/>
Oct 09 16:38:03 compute-0 nova_compute[117331]:     <controller type="pci" model="pcie-root-port"/>
Oct 09 16:38:03 compute-0 nova_compute[117331]:     <controller type="pci" model="pcie-root-port"/>
Oct 09 16:38:03 compute-0 nova_compute[117331]:     <controller type="pci" model="pcie-root-port"/>
Oct 09 16:38:03 compute-0 nova_compute[117331]:     <controller type="pci" model="pcie-root-port"/>
Oct 09 16:38:03 compute-0 nova_compute[117331]:     <controller type="pci" model="pcie-root-port"/>
Oct 09 16:38:03 compute-0 nova_compute[117331]:     <controller type="pci" model="pcie-root-port"/>
Oct 09 16:38:03 compute-0 nova_compute[117331]:     <controller type="pci" model="pcie-root-port"/>
Oct 09 16:38:03 compute-0 nova_compute[117331]:     <controller type="usb" index="0"/>
Oct 09 16:38:03 compute-0 nova_compute[117331]:     <memballoon model="virtio" autodeflate="on" freePageReporting="on">
Oct 09 16:38:03 compute-0 nova_compute[117331]:       <stats period="10"/>
Oct 09 16:38:03 compute-0 nova_compute[117331]:     </memballoon>
Oct 09 16:38:03 compute-0 nova_compute[117331]:   </devices>
Oct 09 16:38:03 compute-0 nova_compute[117331]: </domain>
Oct 09 16:38:03 compute-0 nova_compute[117331]:  _get_guest_xml /usr/lib/python3.12/site-packages/nova/virt/libvirt/driver.py:8052
Oct 09 16:38:03 compute-0 nova_compute[117331]: 2025-10-09 16:38:03.057 2 DEBUG nova.compute.manager [None req-08ea4e12-77f1-4845-8b4e-dab1be785597 685b4924c5a04af7ae6f4a328bb50f14 a60acfe52e4b4b7f912654a59f0978b7 - - default default] [instance: 5963f55d-5e3f-4b07-86ef-554333267b8f] Preparing to wait for external event network-vif-plugged-0060f48c-30fd-4d98-810c-c9953d57fbb2 prepare_for_instance_event /usr/lib/python3.12/site-packages/nova/compute/manager.py:307
Oct 09 16:38:03 compute-0 nova_compute[117331]: 2025-10-09 16:38:03.058 2 DEBUG oslo_concurrency.lockutils [None req-08ea4e12-77f1-4845-8b4e-dab1be785597 685b4924c5a04af7ae6f4a328bb50f14 a60acfe52e4b4b7f912654a59f0978b7 - - default default] Acquiring lock "5963f55d-5e3f-4b07-86ef-554333267b8f-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:405
Oct 09 16:38:03 compute-0 nova_compute[117331]: 2025-10-09 16:38:03.058 2 DEBUG oslo_concurrency.lockutils [None req-08ea4e12-77f1-4845-8b4e-dab1be785597 685b4924c5a04af7ae6f4a328bb50f14 a60acfe52e4b4b7f912654a59f0978b7 - - default default] Lock "5963f55d-5e3f-4b07-86ef-554333267b8f-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:410
Oct 09 16:38:03 compute-0 nova_compute[117331]: 2025-10-09 16:38:03.059 2 DEBUG oslo_concurrency.lockutils [None req-08ea4e12-77f1-4845-8b4e-dab1be785597 685b4924c5a04af7ae6f4a328bb50f14 a60acfe52e4b4b7f912654a59f0978b7 - - default default] Lock "5963f55d-5e3f-4b07-86ef-554333267b8f-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:424
Oct 09 16:38:03 compute-0 nova_compute[117331]: 2025-10-09 16:38:03.059 2 DEBUG nova.virt.libvirt.vif [None req-08ea4e12-77f1-4845-8b4e-dab1be785597 685b4924c5a04af7ae6f4a328bb50f14 a60acfe52e4b4b7f912654a59f0978b7 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,compute_id=1,config_drive='',created_at=2025-10-09T16:37:50Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description=None,display_name='tempest-TestExecuteWorkloadStabilizationStrategy-server-1807729568',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testexecuteworkloadstabilizationstrategy-server-1807729',id=28,image_ref='b7d6e0af-25e4-4227-9dc6-43143898ceee',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='a60acfe52e4b4b7f912654a59f0978b7',ramdisk_id='',reservation_id='r-hkns0u0y',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='admin,member,reader,manager',image_base_image_ref='b7d6e0af-25e4-4227-9dc6-43143898ceee',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestExecuteWorkloadStabilizationStrategy-1000021673',owner_user_name='tempest-TestExecuteWorkloadStabilizationStrategy-1000021673-project-admin'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-09T16:37:56Z,user_data=None,user_id='685b4924c5a04af7ae6f4a328bb50f14',uuid=5963f55d-5e3f-4b07-86ef-554333267b8f,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "0060f48c-30fd-4d98-810c-c9953d57fbb2", "address": "fa:16:3e:36:08:54", "network": {"id": "cf3aa351-4d26-41f3-8cb5-1ff2d3d995c8", "bridge": "br-int", "label": "tempest-TestExecuteWorkloadStabilizationStrategy-105109230-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "5b6d650a71ca47d58b10f5fd874c4898", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap0060f48c-30", "ovs_interfaceid": "0060f48c-30fd-4d98-810c-c9953d57fbb2", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.12/site-packages/nova/virt/libvirt/vif.py:721
Oct 09 16:38:03 compute-0 nova_compute[117331]: 2025-10-09 16:38:03.060 2 DEBUG nova.network.os_vif_util [None req-08ea4e12-77f1-4845-8b4e-dab1be785597 685b4924c5a04af7ae6f4a328bb50f14 a60acfe52e4b4b7f912654a59f0978b7 - - default default] Converting VIF {"id": "0060f48c-30fd-4d98-810c-c9953d57fbb2", "address": "fa:16:3e:36:08:54", "network": {"id": "cf3aa351-4d26-41f3-8cb5-1ff2d3d995c8", "bridge": "br-int", "label": "tempest-TestExecuteWorkloadStabilizationStrategy-105109230-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "5b6d650a71ca47d58b10f5fd874c4898", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap0060f48c-30", "ovs_interfaceid": "0060f48c-30fd-4d98-810c-c9953d57fbb2", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.12/site-packages/nova/network/os_vif_util.py:511
Oct 09 16:38:03 compute-0 nova_compute[117331]: 2025-10-09 16:38:03.060 2 DEBUG nova.network.os_vif_util [None req-08ea4e12-77f1-4845-8b4e-dab1be785597 685b4924c5a04af7ae6f4a328bb50f14 a60acfe52e4b4b7f912654a59f0978b7 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:36:08:54,bridge_name='br-int',has_traffic_filtering=True,id=0060f48c-30fd-4d98-810c-c9953d57fbb2,network=Network(cf3aa351-4d26-41f3-8cb5-1ff2d3d995c8),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap0060f48c-30') nova_to_osvif_vif /usr/lib/python3.12/site-packages/nova/network/os_vif_util.py:548
Oct 09 16:38:03 compute-0 nova_compute[117331]: 2025-10-09 16:38:03.061 2 DEBUG os_vif [None req-08ea4e12-77f1-4845-8b4e-dab1be785597 685b4924c5a04af7ae6f4a328bb50f14 a60acfe52e4b4b7f912654a59f0978b7 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:36:08:54,bridge_name='br-int',has_traffic_filtering=True,id=0060f48c-30fd-4d98-810c-c9953d57fbb2,network=Network(cf3aa351-4d26-41f3-8cb5-1ff2d3d995c8),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap0060f48c-30') plug /usr/lib/python3.12/site-packages/os_vif/__init__.py:76
Oct 09 16:38:03 compute-0 nova_compute[117331]: 2025-10-09 16:38:03.061 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 22 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 09 16:38:03 compute-0 nova_compute[117331]: 2025-10-09 16:38:03.062 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.12/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 09 16:38:03 compute-0 nova_compute[117331]: 2025-10-09 16:38:03.062 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.12/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 09 16:38:03 compute-0 nova_compute[117331]: 2025-10-09 16:38:03.063 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 22 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 09 16:38:03 compute-0 nova_compute[117331]: 2025-10-09 16:38:03.063 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbCreateCommand(_result=None, table=QoS, columns={'type': 'linux-noop', 'external_ids': {'id': '732232af-a540-551d-85a5-a686f69e7058', '_type': 'linux-noop'}}, row=False) do_commit /usr/lib/python3.12/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 09 16:38:03 compute-0 nova_compute[117331]: 2025-10-09 16:38:03.064 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 09 16:38:03 compute-0 nova_compute[117331]: 2025-10-09 16:38:03.066 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 09 16:38:03 compute-0 nova_compute[117331]: 2025-10-09 16:38:03.068 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 22 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 09 16:38:03 compute-0 nova_compute[117331]: 2025-10-09 16:38:03.068 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap0060f48c-30, may_exist=True, interface_attrs={}) do_commit /usr/lib/python3.12/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 09 16:38:03 compute-0 nova_compute[117331]: 2025-10-09 16:38:03.069 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Port, record=tap0060f48c-30, col_values=(('qos', UUID('cc13877d-48da-4cfc-af44-8d56694e6945')),), if_exists=True) do_commit /usr/lib/python3.12/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 09 16:38:03 compute-0 nova_compute[117331]: 2025-10-09 16:38:03.069 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=2): DbSetCommand(_result=None, table=Interface, record=tap0060f48c-30, col_values=(('external_ids', {'iface-id': '0060f48c-30fd-4d98-810c-c9953d57fbb2', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:36:08:54', 'vm-uuid': '5963f55d-5e3f-4b07-86ef-554333267b8f'}),), if_exists=True) do_commit /usr/lib/python3.12/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 09 16:38:03 compute-0 nova_compute[117331]: 2025-10-09 16:38:03.070 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 09 16:38:03 compute-0 NetworkManager[1028]: <info>  [1760027883.0714] manager: (tap0060f48c-30): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/94)
Oct 09 16:38:03 compute-0 nova_compute[117331]: 2025-10-09 16:38:03.072 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:248
Oct 09 16:38:03 compute-0 nova_compute[117331]: 2025-10-09 16:38:03.077 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 09 16:38:03 compute-0 nova_compute[117331]: 2025-10-09 16:38:03.078 2 INFO os_vif [None req-08ea4e12-77f1-4845-8b4e-dab1be785597 685b4924c5a04af7ae6f4a328bb50f14 a60acfe52e4b4b7f912654a59f0978b7 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:36:08:54,bridge_name='br-int',has_traffic_filtering=True,id=0060f48c-30fd-4d98-810c-c9953d57fbb2,network=Network(cf3aa351-4d26-41f3-8cb5-1ff2d3d995c8),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap0060f48c-30')
Oct 09 16:38:03 compute-0 podman[152788]: 2025-10-09 16:38:03.842565349 +0000 UTC m=+0.064498023 container health_status 86aa93e887899e54d383b0fc17fb3c5637680d1d8e146a130a557c837f0260fe (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm)
Oct 09 16:38:04 compute-0 nova_compute[117331]: 2025-10-09 16:38:04.622 2 DEBUG nova.virt.libvirt.driver [None req-08ea4e12-77f1-4845-8b4e-dab1be785597 685b4924c5a04af7ae6f4a328bb50f14 a60acfe52e4b4b7f912654a59f0978b7 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.12/site-packages/nova/virt/libvirt/driver.py:13022
Oct 09 16:38:04 compute-0 nova_compute[117331]: 2025-10-09 16:38:04.622 2 DEBUG nova.virt.libvirt.driver [None req-08ea4e12-77f1-4845-8b4e-dab1be785597 685b4924c5a04af7ae6f4a328bb50f14 a60acfe52e4b4b7f912654a59f0978b7 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.12/site-packages/nova/virt/libvirt/driver.py:13022
Oct 09 16:38:04 compute-0 nova_compute[117331]: 2025-10-09 16:38:04.623 2 DEBUG nova.virt.libvirt.driver [None req-08ea4e12-77f1-4845-8b4e-dab1be785597 685b4924c5a04af7ae6f4a328bb50f14 a60acfe52e4b4b7f912654a59f0978b7 - - default default] No VIF found with MAC fa:16:3e:36:08:54, not building metadata _build_interface_metadata /usr/lib/python3.12/site-packages/nova/virt/libvirt/driver.py:12998
Oct 09 16:38:04 compute-0 nova_compute[117331]: 2025-10-09 16:38:04.623 2 INFO nova.virt.libvirt.driver [None req-08ea4e12-77f1-4845-8b4e-dab1be785597 685b4924c5a04af7ae6f4a328bb50f14 a60acfe52e4b4b7f912654a59f0978b7 - - default default] [instance: 5963f55d-5e3f-4b07-86ef-554333267b8f] Using config drive
Oct 09 16:38:05 compute-0 nova_compute[117331]: 2025-10-09 16:38:05.134 2 WARNING neutronclient.v2_0.client [None req-08ea4e12-77f1-4845-8b4e-dab1be785597 685b4924c5a04af7ae6f4a328bb50f14 a60acfe52e4b4b7f912654a59f0978b7 - - default default] The python binding code in neutronclient is deprecated in favor of OpenstackSDK, please use that as this will be removed in a future release.
Oct 09 16:38:05 compute-0 nova_compute[117331]: 2025-10-09 16:38:05.441 2 INFO nova.virt.libvirt.driver [None req-08ea4e12-77f1-4845-8b4e-dab1be785597 685b4924c5a04af7ae6f4a328bb50f14 a60acfe52e4b4b7f912654a59f0978b7 - - default default] [instance: 5963f55d-5e3f-4b07-86ef-554333267b8f] Creating config drive at /var/lib/nova/instances/5963f55d-5e3f-4b07-86ef-554333267b8f/disk.config
Oct 09 16:38:05 compute-0 nova_compute[117331]: 2025-10-09 16:38:05.446 2 DEBUG oslo_concurrency.processutils [None req-08ea4e12-77f1-4845-8b4e-dab1be785597 685b4924c5a04af7ae6f4a328bb50f14 a60acfe52e4b4b7f912654a59f0978b7 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/5963f55d-5e3f-4b07-86ef-554333267b8f/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 32.1.0-0.20251008143712.076498e.el10 -quiet -J -r -V config-2 /tmp/tmpu4adh31x execute /usr/lib/python3.12/site-packages/oslo_concurrency/processutils.py:349
Oct 09 16:38:05 compute-0 nova_compute[117331]: 2025-10-09 16:38:05.580 2 DEBUG oslo_concurrency.processutils [None req-08ea4e12-77f1-4845-8b4e-dab1be785597 685b4924c5a04af7ae6f4a328bb50f14 a60acfe52e4b4b7f912654a59f0978b7 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/5963f55d-5e3f-4b07-86ef-554333267b8f/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 32.1.0-0.20251008143712.076498e.el10 -quiet -J -r -V config-2 /tmp/tmpu4adh31x" returned: 0 in 0.134s execute /usr/lib/python3.12/site-packages/oslo_concurrency/processutils.py:372
Oct 09 16:38:05 compute-0 kernel: tap0060f48c-30: entered promiscuous mode
Oct 09 16:38:05 compute-0 NetworkManager[1028]: <info>  [1760027885.6542] manager: (tap0060f48c-30): new Tun device (/org/freedesktop/NetworkManager/Devices/95)
Oct 09 16:38:05 compute-0 ovn_controller[19752]: 2025-10-09T16:38:05Z|00253|binding|INFO|Claiming lport 0060f48c-30fd-4d98-810c-c9953d57fbb2 for this chassis.
Oct 09 16:38:05 compute-0 ovn_controller[19752]: 2025-10-09T16:38:05Z|00254|binding|INFO|0060f48c-30fd-4d98-810c-c9953d57fbb2: Claiming fa:16:3e:36:08:54 10.100.0.12
Oct 09 16:38:05 compute-0 nova_compute[117331]: 2025-10-09 16:38:05.657 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 09 16:38:05 compute-0 ovn_metadata_agent[28608]: 2025-10-09 16:38:05.662 28613 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:36:08:54 10.100.0.12'], port_security=['fa:16:3e:36:08:54 10.100.0.12'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.12/28', 'neutron:device_id': '5963f55d-5e3f-4b07-86ef-554333267b8f', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-cf3aa351-4d26-41f3-8cb5-1ff2d3d995c8', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'a60acfe52e4b4b7f912654a59f0978b7', 'neutron:revision_number': '4', 'neutron:security_group_ids': 'c47ae156-dc3a-4eb3-8eb2-126be8ba5497', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=f19947fc-c2ef-4762-adfc-471bb24a038d, chassis=[<ovs.db.idl.Row object at 0x7f37b2576840>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f37b2576840>], logical_port=0060f48c-30fd-4d98-810c-c9953d57fbb2) old=Port_Binding(chassis=[]) matches /usr/lib/python3.12/site-packages/ovsdbapp/backend/ovs_idl/event.py:55
Oct 09 16:38:05 compute-0 ovn_metadata_agent[28608]: 2025-10-09 16:38:05.663 28613 INFO neutron.agent.ovn.metadata.agent [-] Port 0060f48c-30fd-4d98-810c-c9953d57fbb2 in datapath cf3aa351-4d26-41f3-8cb5-1ff2d3d995c8 bound to our chassis
Oct 09 16:38:05 compute-0 ovn_metadata_agent[28608]: 2025-10-09 16:38:05.664 28613 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network cf3aa351-4d26-41f3-8cb5-1ff2d3d995c8
Oct 09 16:38:05 compute-0 ovn_metadata_agent[28608]: 2025-10-09 16:38:05.675 139687 DEBUG oslo.privsep.daemon [-] privsep: reply[e6daf9ce-25ce-4434-911b-9ba96ab3a2f1]: (4, False) _call_back /usr/lib/python3.12/site-packages/oslo_privsep/daemon.py:515
Oct 09 16:38:05 compute-0 ovn_metadata_agent[28608]: 2025-10-09 16:38:05.676 28613 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tapcf3aa351-41 in ovnmeta-cf3aa351-4d26-41f3-8cb5-1ff2d3d995c8 namespace provision_datapath /usr/lib/python3.12/site-packages/neutron/agent/ovn/metadata/agent.py:794
Oct 09 16:38:05 compute-0 ovn_metadata_agent[28608]: 2025-10-09 16:38:05.682 139687 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tapcf3aa351-40 not found in namespace None get_link_id /usr/lib/python3.12/site-packages/neutron/privileged/agent/linux/ip_lib.py:203
Oct 09 16:38:05 compute-0 ovn_metadata_agent[28608]: 2025-10-09 16:38:05.682 139687 DEBUG oslo.privsep.daemon [-] privsep: reply[ff2bc182-a286-4859-ab9d-7a6336e966ec]: (4, False) _call_back /usr/lib/python3.12/site-packages/oslo_privsep/daemon.py:515
Oct 09 16:38:05 compute-0 ovn_metadata_agent[28608]: 2025-10-09 16:38:05.683 139687 DEBUG oslo.privsep.daemon [-] privsep: reply[2e862031-31b8-4584-b79a-e85852949c27]: (4, False) _call_back /usr/lib/python3.12/site-packages/oslo_privsep/daemon.py:515
Oct 09 16:38:05 compute-0 ovn_controller[19752]: 2025-10-09T16:38:05Z|00255|binding|INFO|Setting lport 0060f48c-30fd-4d98-810c-c9953d57fbb2 ovn-installed in OVS
Oct 09 16:38:05 compute-0 ovn_controller[19752]: 2025-10-09T16:38:05Z|00256|binding|INFO|Setting lport 0060f48c-30fd-4d98-810c-c9953d57fbb2 up in Southbound
Oct 09 16:38:05 compute-0 nova_compute[117331]: 2025-10-09 16:38:05.685 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 09 16:38:05 compute-0 systemd-udevd[152830]: Network interface NamePolicy= disabled on kernel command line.
Oct 09 16:38:05 compute-0 nova_compute[117331]: 2025-10-09 16:38:05.692 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 09 16:38:05 compute-0 ovn_metadata_agent[28608]: 2025-10-09 16:38:05.695 28727 DEBUG oslo.privsep.daemon [-] privsep: reply[21eddb5e-06f7-45aa-b3fb-f25074a229bc]: (4, None) _call_back /usr/lib/python3.12/site-packages/oslo_privsep/daemon.py:515
Oct 09 16:38:05 compute-0 ovn_metadata_agent[28608]: 2025-10-09 16:38:05.700 139687 DEBUG oslo.privsep.daemon [-] privsep: reply[4031d70e-614f-4690-a024-d26a11ba998e]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.12/site-packages/oslo_privsep/daemon.py:515
Oct 09 16:38:05 compute-0 NetworkManager[1028]: <info>  [1760027885.7072] device (tap0060f48c-30): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Oct 09 16:38:05 compute-0 NetworkManager[1028]: <info>  [1760027885.7082] device (tap0060f48c-30): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Oct 09 16:38:05 compute-0 systemd-machined[77487]: New machine qemu-22-instance-0000001c.
Oct 09 16:38:05 compute-0 systemd[1]: Started Virtual Machine qemu-22-instance-0000001c.
Oct 09 16:38:05 compute-0 ovn_metadata_agent[28608]: 2025-10-09 16:38:05.737 141048 DEBUG oslo.privsep.daemon [-] privsep: reply[e03cb332-d11e-49ca-b92d-54174140a3db]: (4, None) _call_back /usr/lib/python3.12/site-packages/oslo_privsep/daemon.py:515
Oct 09 16:38:05 compute-0 NetworkManager[1028]: <info>  [1760027885.7421] manager: (tapcf3aa351-40): new Veth device (/org/freedesktop/NetworkManager/Devices/96)
Oct 09 16:38:05 compute-0 ovn_metadata_agent[28608]: 2025-10-09 16:38:05.741 139687 DEBUG oslo.privsep.daemon [-] privsep: reply[0f6b5ba6-c0f2-4268-baa7-fbb60f341559]: (4, None) _call_back /usr/lib/python3.12/site-packages/oslo_privsep/daemon.py:515
Oct 09 16:38:05 compute-0 ovn_metadata_agent[28608]: 2025-10-09 16:38:05.775 141048 DEBUG oslo.privsep.daemon [-] privsep: reply[a91d72d2-61eb-4c5d-b909-7f7bec0b2264]: (4, None) _call_back /usr/lib/python3.12/site-packages/oslo_privsep/daemon.py:515
Oct 09 16:38:05 compute-0 ovn_metadata_agent[28608]: 2025-10-09 16:38:05.778 141048 DEBUG oslo.privsep.daemon [-] privsep: reply[8ddb1fa6-700c-4356-ac75-d85a4d870fd3]: (4, None) _call_back /usr/lib/python3.12/site-packages/oslo_privsep/daemon.py:515
Oct 09 16:38:05 compute-0 NetworkManager[1028]: <info>  [1760027885.8099] device (tapcf3aa351-40): carrier: link connected
Oct 09 16:38:05 compute-0 nova_compute[117331]: 2025-10-09 16:38:05.815 2 DEBUG nova.compute.manager [req-8298f9af-937e-436e-953e-41d4a06a59f7 req-a02735ea-6c4f-41fe-944b-0eeb5fc521aa ee15961ad2be4bada6c106ac4f69f422 c076c42271004e7aa86e84416e33f826 - - default default] [instance: 5963f55d-5e3f-4b07-86ef-554333267b8f] Received event network-vif-plugged-0060f48c-30fd-4d98-810c-c9953d57fbb2 external_instance_event /usr/lib/python3.12/site-packages/nova/compute/manager.py:11812
Oct 09 16:38:05 compute-0 nova_compute[117331]: 2025-10-09 16:38:05.815 2 DEBUG oslo_concurrency.lockutils [req-8298f9af-937e-436e-953e-41d4a06a59f7 req-a02735ea-6c4f-41fe-944b-0eeb5fc521aa ee15961ad2be4bada6c106ac4f69f422 c076c42271004e7aa86e84416e33f826 - - default default] Acquiring lock "5963f55d-5e3f-4b07-86ef-554333267b8f-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:405
Oct 09 16:38:05 compute-0 nova_compute[117331]: 2025-10-09 16:38:05.816 2 DEBUG oslo_concurrency.lockutils [req-8298f9af-937e-436e-953e-41d4a06a59f7 req-a02735ea-6c4f-41fe-944b-0eeb5fc521aa ee15961ad2be4bada6c106ac4f69f422 c076c42271004e7aa86e84416e33f826 - - default default] Lock "5963f55d-5e3f-4b07-86ef-554333267b8f-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:410
Oct 09 16:38:05 compute-0 nova_compute[117331]: 2025-10-09 16:38:05.816 2 DEBUG oslo_concurrency.lockutils [req-8298f9af-937e-436e-953e-41d4a06a59f7 req-a02735ea-6c4f-41fe-944b-0eeb5fc521aa ee15961ad2be4bada6c106ac4f69f422 c076c42271004e7aa86e84416e33f826 - - default default] Lock "5963f55d-5e3f-4b07-86ef-554333267b8f-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:424
Oct 09 16:38:05 compute-0 nova_compute[117331]: 2025-10-09 16:38:05.816 2 DEBUG nova.compute.manager [req-8298f9af-937e-436e-953e-41d4a06a59f7 req-a02735ea-6c4f-41fe-944b-0eeb5fc521aa ee15961ad2be4bada6c106ac4f69f422 c076c42271004e7aa86e84416e33f826 - - default default] [instance: 5963f55d-5e3f-4b07-86ef-554333267b8f] Processing event network-vif-plugged-0060f48c-30fd-4d98-810c-c9953d57fbb2 _process_instance_event /usr/lib/python3.12/site-packages/nova/compute/manager.py:11572
Oct 09 16:38:05 compute-0 ovn_metadata_agent[28608]: 2025-10-09 16:38:05.821 141048 DEBUG oslo.privsep.daemon [-] privsep: reply[c7bb05c7-fd4c-4ab6-9571-a646a2b275a8]: (4, None) _call_back /usr/lib/python3.12/site-packages/oslo_privsep/daemon.py:515
Oct 09 16:38:05 compute-0 ovn_metadata_agent[28608]: 2025-10-09 16:38:05.842 139687 DEBUG oslo.privsep.daemon [-] privsep: reply[d5e65419-d493-4997-bdfa-78ea2e6f2d72]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapcf3aa351-41'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:2a:10:2c'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 73], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 282941, 'reachable_time': 24617, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 152864, 'error': None, 'target': 'ovnmeta-cf3aa351-4d26-41f3-8cb5-1ff2d3d995c8', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.12/site-packages/oslo_privsep/daemon.py:515
Oct 09 16:38:05 compute-0 ovn_metadata_agent[28608]: 2025-10-09 16:38:05.862 139687 DEBUG oslo.privsep.daemon [-] privsep: reply[3a700f61-6571-4db6-8c68-bb1d6c0d3f85]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe2a:102c'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 282941, 'tstamp': 282941}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 152865, 'error': None, 'target': 'ovnmeta-cf3aa351-4d26-41f3-8cb5-1ff2d3d995c8', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.12/site-packages/oslo_privsep/daemon.py:515
Oct 09 16:38:05 compute-0 ovn_metadata_agent[28608]: 2025-10-09 16:38:05.878 139687 DEBUG oslo.privsep.daemon [-] privsep: reply[83ce09be-f4e5-4a93-8e90-c360d9028a40]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapcf3aa351-41'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:2a:10:2c'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 73], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 282941, 'reachable_time': 24617, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 152866, 'error': None, 'target': 'ovnmeta-cf3aa351-4d26-41f3-8cb5-1ff2d3d995c8', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.12/site-packages/oslo_privsep/daemon.py:515
Oct 09 16:38:05 compute-0 ovn_metadata_agent[28608]: 2025-10-09 16:38:05.921 139687 DEBUG oslo.privsep.daemon [-] privsep: reply[32cf2bd1-6e6e-4c5d-8ca2-0a4dd646aff6]: (4, None) _call_back /usr/lib/python3.12/site-packages/oslo_privsep/daemon.py:515
Oct 09 16:38:05 compute-0 ovn_metadata_agent[28608]: 2025-10-09 16:38:05.998 139687 DEBUG oslo.privsep.daemon [-] privsep: reply[a34802aa-2bb1-4cc3-83f3-fe6ef89441b9]: (4, None) _call_back /usr/lib/python3.12/site-packages/oslo_privsep/daemon.py:515
Oct 09 16:38:06 compute-0 ovn_metadata_agent[28608]: 2025-10-09 16:38:05.999 28613 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapcf3aa351-40, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.12/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 09 16:38:06 compute-0 ovn_metadata_agent[28608]: 2025-10-09 16:38:05.999 28613 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.12/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 09 16:38:06 compute-0 ovn_metadata_agent[28608]: 2025-10-09 16:38:06.000 28613 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapcf3aa351-40, may_exist=True, interface_attrs={}) do_commit /usr/lib/python3.12/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 09 16:38:06 compute-0 nova_compute[117331]: 2025-10-09 16:38:06.001 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 09 16:38:06 compute-0 NetworkManager[1028]: <info>  [1760027886.0020] manager: (tapcf3aa351-40): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/97)
Oct 09 16:38:06 compute-0 kernel: tapcf3aa351-40: entered promiscuous mode
Oct 09 16:38:06 compute-0 nova_compute[117331]: 2025-10-09 16:38:06.004 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 09 16:38:06 compute-0 ovn_metadata_agent[28608]: 2025-10-09 16:38:06.005 28613 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapcf3aa351-40, col_values=(('external_ids', {'iface-id': '7d2fd2fc-650d-4abc-8268-a14a8cdfd51e'}),), if_exists=True) do_commit /usr/lib/python3.12/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 09 16:38:06 compute-0 nova_compute[117331]: 2025-10-09 16:38:06.006 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 09 16:38:06 compute-0 ovn_controller[19752]: 2025-10-09T16:38:06Z|00257|binding|INFO|Releasing lport 7d2fd2fc-650d-4abc-8268-a14a8cdfd51e from this chassis (sb_readonly=0)
Oct 09 16:38:06 compute-0 nova_compute[117331]: 2025-10-09 16:38:06.018 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 09 16:38:06 compute-0 ovn_metadata_agent[28608]: 2025-10-09 16:38:06.020 139687 DEBUG oslo.privsep.daemon [-] privsep: reply[bc84b92a-739d-4a6f-8d05-15eddb41b86d]: (4, '') _call_back /usr/lib/python3.12/site-packages/oslo_privsep/daemon.py:515
Oct 09 16:38:06 compute-0 ovn_metadata_agent[28608]: 2025-10-09 16:38:06.021 28613 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/cf3aa351-4d26-41f3-8cb5-1ff2d3d995c8.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/cf3aa351-4d26-41f3-8cb5-1ff2d3d995c8.pid.haproxy' get_value_from_file /usr/lib/python3.12/site-packages/neutron/agent/linux/utils.py:268
Oct 09 16:38:06 compute-0 ovn_metadata_agent[28608]: 2025-10-09 16:38:06.021 28613 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/cf3aa351-4d26-41f3-8cb5-1ff2d3d995c8.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/cf3aa351-4d26-41f3-8cb5-1ff2d3d995c8.pid.haproxy' get_value_from_file /usr/lib/python3.12/site-packages/neutron/agent/linux/utils.py:268
Oct 09 16:38:06 compute-0 ovn_metadata_agent[28608]: 2025-10-09 16:38:06.021 28613 DEBUG neutron.agent.linux.external_process [-] No haproxy process started for cf3aa351-4d26-41f3-8cb5-1ff2d3d995c8 disable /usr/lib/python3.12/site-packages/neutron/agent/linux/external_process.py:173
Oct 09 16:38:06 compute-0 ovn_metadata_agent[28608]: 2025-10-09 16:38:06.021 28613 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/cf3aa351-4d26-41f3-8cb5-1ff2d3d995c8.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/cf3aa351-4d26-41f3-8cb5-1ff2d3d995c8.pid.haproxy' get_value_from_file /usr/lib/python3.12/site-packages/neutron/agent/linux/utils.py:268
Oct 09 16:38:06 compute-0 ovn_metadata_agent[28608]: 2025-10-09 16:38:06.022 139687 DEBUG oslo.privsep.daemon [-] privsep: reply[c7e421af-aa2b-4d38-84fa-a8b564aa00eb]: (4, None) _call_back /usr/lib/python3.12/site-packages/oslo_privsep/daemon.py:515
Oct 09 16:38:06 compute-0 ovn_metadata_agent[28608]: 2025-10-09 16:38:06.022 28613 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/cf3aa351-4d26-41f3-8cb5-1ff2d3d995c8.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/cf3aa351-4d26-41f3-8cb5-1ff2d3d995c8.pid.haproxy' get_value_from_file /usr/lib/python3.12/site-packages/neutron/agent/linux/utils.py:268
Oct 09 16:38:06 compute-0 ovn_metadata_agent[28608]: 2025-10-09 16:38:06.022 139687 DEBUG oslo.privsep.daemon [-] privsep: reply[1162b355-e9cd-4813-a8a8-2b9fd98bb6e3]: (4, None) _call_back /usr/lib/python3.12/site-packages/oslo_privsep/daemon.py:515
Oct 09 16:38:06 compute-0 ovn_metadata_agent[28608]: 2025-10-09 16:38:06.023 28613 DEBUG neutron.agent.metadata.driver_base [-] haproxy_cfg = 
Oct 09 16:38:06 compute-0 ovn_metadata_agent[28608]: global
Oct 09 16:38:06 compute-0 ovn_metadata_agent[28608]:     log         /dev/log local0 debug
Oct 09 16:38:06 compute-0 ovn_metadata_agent[28608]:     log-tag     haproxy-metadata-proxy-cf3aa351-4d26-41f3-8cb5-1ff2d3d995c8
Oct 09 16:38:06 compute-0 ovn_metadata_agent[28608]:     user        root
Oct 09 16:38:06 compute-0 ovn_metadata_agent[28608]:     group       root
Oct 09 16:38:06 compute-0 ovn_metadata_agent[28608]:     maxconn     1024
Oct 09 16:38:06 compute-0 ovn_metadata_agent[28608]:     pidfile     /var/lib/neutron/external/pids/cf3aa351-4d26-41f3-8cb5-1ff2d3d995c8.pid.haproxy
Oct 09 16:38:06 compute-0 ovn_metadata_agent[28608]:     daemon
Oct 09 16:38:06 compute-0 ovn_metadata_agent[28608]: 
Oct 09 16:38:06 compute-0 ovn_metadata_agent[28608]: defaults
Oct 09 16:38:06 compute-0 ovn_metadata_agent[28608]:     log global
Oct 09 16:38:06 compute-0 ovn_metadata_agent[28608]:     mode http
Oct 09 16:38:06 compute-0 ovn_metadata_agent[28608]:     option httplog
Oct 09 16:38:06 compute-0 ovn_metadata_agent[28608]:     option dontlognull
Oct 09 16:38:06 compute-0 ovn_metadata_agent[28608]:     option http-server-close
Oct 09 16:38:06 compute-0 ovn_metadata_agent[28608]:     option forwardfor
Oct 09 16:38:06 compute-0 ovn_metadata_agent[28608]:     retries                 3
Oct 09 16:38:06 compute-0 ovn_metadata_agent[28608]:     timeout http-request    30s
Oct 09 16:38:06 compute-0 ovn_metadata_agent[28608]:     timeout connect         30s
Oct 09 16:38:06 compute-0 ovn_metadata_agent[28608]:     timeout client          32s
Oct 09 16:38:06 compute-0 ovn_metadata_agent[28608]:     timeout server          32s
Oct 09 16:38:06 compute-0 ovn_metadata_agent[28608]:     timeout http-keep-alive 30s
Oct 09 16:38:06 compute-0 ovn_metadata_agent[28608]: 
Oct 09 16:38:06 compute-0 ovn_metadata_agent[28608]: listen listener
Oct 09 16:38:06 compute-0 ovn_metadata_agent[28608]:     bind 169.254.169.254:80
Oct 09 16:38:06 compute-0 ovn_metadata_agent[28608]:     
Oct 09 16:38:06 compute-0 ovn_metadata_agent[28608]:     server metadata /var/lib/neutron/metadata_proxy
Oct 09 16:38:06 compute-0 ovn_metadata_agent[28608]: 
Oct 09 16:38:06 compute-0 ovn_metadata_agent[28608]:     http-request add-header X-OVN-Network-ID cf3aa351-4d26-41f3-8cb5-1ff2d3d995c8
Oct 09 16:38:06 compute-0 ovn_metadata_agent[28608]:  create_config_file /usr/lib/python3.12/site-packages/neutron/agent/metadata/driver_base.py:155
Oct 09 16:38:06 compute-0 ovn_metadata_agent[28608]: 2025-10-09 16:38:06.023 28613 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-cf3aa351-4d26-41f3-8cb5-1ff2d3d995c8', 'env', 'PROCESS_TAG=haproxy-cf3aa351-4d26-41f3-8cb5-1ff2d3d995c8', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/cf3aa351-4d26-41f3-8cb5-1ff2d3d995c8.conf'] create_process /usr/lib/python3.12/site-packages/neutron/agent/linux/utils.py:84
Oct 09 16:38:06 compute-0 podman[152898]: 2025-10-09 16:38:06.412027862 +0000 UTC m=+0.056867672 container create ac329dbdf5122e2af537b1cabcd723e52c3045b322dc8e533feff49f9291e1f5 (image=38.102.83.66:5001/podified-master-centos10/openstack-neutron-metadata-agent-ovn:watcher_latest, name=neutron-haproxy-ovnmeta-cf3aa351-4d26-41f3-8cb5-1ff2d3d995c8, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 10 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, io.buildah.version=1.41.4, tcib_build_tag=watcher_latest, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251007, tcib_managed=true)
Oct 09 16:38:06 compute-0 systemd[1]: Started libpod-conmon-ac329dbdf5122e2af537b1cabcd723e52c3045b322dc8e533feff49f9291e1f5.scope.
Oct 09 16:38:06 compute-0 podman[152898]: 2025-10-09 16:38:06.377944823 +0000 UTC m=+0.022784663 image pull 4e15c189a95c5dcd1203c76fe304de5c8f8616da9a258ca168cc54a517f49b87 38.102.83.66:5001/podified-master-centos10/openstack-neutron-metadata-agent-ovn:watcher_latest
Oct 09 16:38:06 compute-0 systemd[1]: Started libcrun container.
Oct 09 16:38:06 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7f933023e944918b30dd1d938c5d51661bb6b80909b10c9cb85c412159a998cf/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Oct 09 16:38:06 compute-0 podman[152898]: 2025-10-09 16:38:06.502017501 +0000 UTC m=+0.146857301 container init ac329dbdf5122e2af537b1cabcd723e52c3045b322dc8e533feff49f9291e1f5 (image=38.102.83.66:5001/podified-master-centos10/openstack-neutron-metadata-agent-ovn:watcher_latest, name=neutron-haproxy-ovnmeta-cf3aa351-4d26-41f3-8cb5-1ff2d3d995c8, tcib_build_tag=watcher_latest, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251007, io.buildah.version=1.41.4, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, tcib_managed=true, org.label-schema.schema-version=1.0)
Oct 09 16:38:06 compute-0 podman[152898]: 2025-10-09 16:38:06.507214676 +0000 UTC m=+0.152054466 container start ac329dbdf5122e2af537b1cabcd723e52c3045b322dc8e533feff49f9291e1f5 (image=38.102.83.66:5001/podified-master-centos10/openstack-neutron-metadata-agent-ovn:watcher_latest, name=neutron-haproxy-ovnmeta-cf3aa351-4d26-41f3-8cb5-1ff2d3d995c8, org.label-schema.schema-version=1.0, tcib_build_tag=watcher_latest, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.build-date=20251007, org.label-schema.license=GPLv2, tcib_managed=true, io.buildah.version=1.41.4, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team)
Oct 09 16:38:06 compute-0 neutron-haproxy-ovnmeta-cf3aa351-4d26-41f3-8cb5-1ff2d3d995c8[152921]: [NOTICE]   (152925) : New worker (152927) forked
Oct 09 16:38:06 compute-0 neutron-haproxy-ovnmeta-cf3aa351-4d26-41f3-8cb5-1ff2d3d995c8[152921]: [NOTICE]   (152925) : Loading success.
Oct 09 16:38:06 compute-0 nova_compute[117331]: 2025-10-09 16:38:06.878 2 DEBUG nova.compute.manager [None req-08ea4e12-77f1-4845-8b4e-dab1be785597 685b4924c5a04af7ae6f4a328bb50f14 a60acfe52e4b4b7f912654a59f0978b7 - - default default] [instance: 5963f55d-5e3f-4b07-86ef-554333267b8f] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.12/site-packages/nova/compute/manager.py:602
Oct 09 16:38:06 compute-0 nova_compute[117331]: 2025-10-09 16:38:06.882 2 DEBUG nova.virt.libvirt.driver [None req-08ea4e12-77f1-4845-8b4e-dab1be785597 685b4924c5a04af7ae6f4a328bb50f14 a60acfe52e4b4b7f912654a59f0978b7 - - default default] [instance: 5963f55d-5e3f-4b07-86ef-554333267b8f] Guest created on hypervisor spawn /usr/lib/python3.12/site-packages/nova/virt/libvirt/driver.py:4816
Oct 09 16:38:06 compute-0 nova_compute[117331]: 2025-10-09 16:38:06.886 2 INFO nova.virt.libvirt.driver [-] [instance: 5963f55d-5e3f-4b07-86ef-554333267b8f] Instance spawned successfully.
Oct 09 16:38:06 compute-0 nova_compute[117331]: 2025-10-09 16:38:06.886 2 DEBUG nova.virt.libvirt.driver [None req-08ea4e12-77f1-4845-8b4e-dab1be785597 685b4924c5a04af7ae6f4a328bb50f14 a60acfe52e4b4b7f912654a59f0978b7 - - default default] [instance: 5963f55d-5e3f-4b07-86ef-554333267b8f] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.12/site-packages/nova/virt/libvirt/driver.py:985
Oct 09 16:38:07 compute-0 nova_compute[117331]: 2025-10-09 16:38:07.338 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 09 16:38:07 compute-0 nova_compute[117331]: 2025-10-09 16:38:07.400 2 DEBUG nova.virt.libvirt.driver [None req-08ea4e12-77f1-4845-8b4e-dab1be785597 685b4924c5a04af7ae6f4a328bb50f14 a60acfe52e4b4b7f912654a59f0978b7 - - default default] [instance: 5963f55d-5e3f-4b07-86ef-554333267b8f] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.12/site-packages/nova/virt/libvirt/driver.py:1014
Oct 09 16:38:07 compute-0 nova_compute[117331]: 2025-10-09 16:38:07.400 2 DEBUG nova.virt.libvirt.driver [None req-08ea4e12-77f1-4845-8b4e-dab1be785597 685b4924c5a04af7ae6f4a328bb50f14 a60acfe52e4b4b7f912654a59f0978b7 - - default default] [instance: 5963f55d-5e3f-4b07-86ef-554333267b8f] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.12/site-packages/nova/virt/libvirt/driver.py:1014
Oct 09 16:38:07 compute-0 nova_compute[117331]: 2025-10-09 16:38:07.401 2 DEBUG nova.virt.libvirt.driver [None req-08ea4e12-77f1-4845-8b4e-dab1be785597 685b4924c5a04af7ae6f4a328bb50f14 a60acfe52e4b4b7f912654a59f0978b7 - - default default] [instance: 5963f55d-5e3f-4b07-86ef-554333267b8f] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.12/site-packages/nova/virt/libvirt/driver.py:1014
Oct 09 16:38:07 compute-0 nova_compute[117331]: 2025-10-09 16:38:07.401 2 DEBUG nova.virt.libvirt.driver [None req-08ea4e12-77f1-4845-8b4e-dab1be785597 685b4924c5a04af7ae6f4a328bb50f14 a60acfe52e4b4b7f912654a59f0978b7 - - default default] [instance: 5963f55d-5e3f-4b07-86ef-554333267b8f] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.12/site-packages/nova/virt/libvirt/driver.py:1014
Oct 09 16:38:07 compute-0 nova_compute[117331]: 2025-10-09 16:38:07.402 2 DEBUG nova.virt.libvirt.driver [None req-08ea4e12-77f1-4845-8b4e-dab1be785597 685b4924c5a04af7ae6f4a328bb50f14 a60acfe52e4b4b7f912654a59f0978b7 - - default default] [instance: 5963f55d-5e3f-4b07-86ef-554333267b8f] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.12/site-packages/nova/virt/libvirt/driver.py:1014
Oct 09 16:38:07 compute-0 nova_compute[117331]: 2025-10-09 16:38:07.402 2 DEBUG nova.virt.libvirt.driver [None req-08ea4e12-77f1-4845-8b4e-dab1be785597 685b4924c5a04af7ae6f4a328bb50f14 a60acfe52e4b4b7f912654a59f0978b7 - - default default] [instance: 5963f55d-5e3f-4b07-86ef-554333267b8f] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.12/site-packages/nova/virt/libvirt/driver.py:1014
Oct 09 16:38:07 compute-0 nova_compute[117331]: 2025-10-09 16:38:07.863 2 DEBUG nova.compute.manager [req-3f19e691-f96a-4f42-854b-e44262527b25 req-d39334d2-f071-4a25-8e18-ba645af866e4 ee15961ad2be4bada6c106ac4f69f422 c076c42271004e7aa86e84416e33f826 - - default default] [instance: 5963f55d-5e3f-4b07-86ef-554333267b8f] Received event network-vif-plugged-0060f48c-30fd-4d98-810c-c9953d57fbb2 external_instance_event /usr/lib/python3.12/site-packages/nova/compute/manager.py:11812
Oct 09 16:38:07 compute-0 nova_compute[117331]: 2025-10-09 16:38:07.864 2 DEBUG oslo_concurrency.lockutils [req-3f19e691-f96a-4f42-854b-e44262527b25 req-d39334d2-f071-4a25-8e18-ba645af866e4 ee15961ad2be4bada6c106ac4f69f422 c076c42271004e7aa86e84416e33f826 - - default default] Acquiring lock "5963f55d-5e3f-4b07-86ef-554333267b8f-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:405
Oct 09 16:38:07 compute-0 nova_compute[117331]: 2025-10-09 16:38:07.864 2 DEBUG oslo_concurrency.lockutils [req-3f19e691-f96a-4f42-854b-e44262527b25 req-d39334d2-f071-4a25-8e18-ba645af866e4 ee15961ad2be4bada6c106ac4f69f422 c076c42271004e7aa86e84416e33f826 - - default default] Lock "5963f55d-5e3f-4b07-86ef-554333267b8f-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:410
Oct 09 16:38:07 compute-0 nova_compute[117331]: 2025-10-09 16:38:07.864 2 DEBUG oslo_concurrency.lockutils [req-3f19e691-f96a-4f42-854b-e44262527b25 req-d39334d2-f071-4a25-8e18-ba645af866e4 ee15961ad2be4bada6c106ac4f69f422 c076c42271004e7aa86e84416e33f826 - - default default] Lock "5963f55d-5e3f-4b07-86ef-554333267b8f-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:424
Oct 09 16:38:07 compute-0 nova_compute[117331]: 2025-10-09 16:38:07.864 2 DEBUG nova.compute.manager [req-3f19e691-f96a-4f42-854b-e44262527b25 req-d39334d2-f071-4a25-8e18-ba645af866e4 ee15961ad2be4bada6c106ac4f69f422 c076c42271004e7aa86e84416e33f826 - - default default] [instance: 5963f55d-5e3f-4b07-86ef-554333267b8f] No waiting events found dispatching network-vif-plugged-0060f48c-30fd-4d98-810c-c9953d57fbb2 pop_instance_event /usr/lib/python3.12/site-packages/nova/compute/manager.py:344
Oct 09 16:38:07 compute-0 nova_compute[117331]: 2025-10-09 16:38:07.864 2 WARNING nova.compute.manager [req-3f19e691-f96a-4f42-854b-e44262527b25 req-d39334d2-f071-4a25-8e18-ba645af866e4 ee15961ad2be4bada6c106ac4f69f422 c076c42271004e7aa86e84416e33f826 - - default default] [instance: 5963f55d-5e3f-4b07-86ef-554333267b8f] Received unexpected event network-vif-plugged-0060f48c-30fd-4d98-810c-c9953d57fbb2 for instance with vm_state building and task_state spawning.
Oct 09 16:38:07 compute-0 nova_compute[117331]: 2025-10-09 16:38:07.910 2 INFO nova.compute.manager [None req-08ea4e12-77f1-4845-8b4e-dab1be785597 685b4924c5a04af7ae6f4a328bb50f14 a60acfe52e4b4b7f912654a59f0978b7 - - default default] [instance: 5963f55d-5e3f-4b07-86ef-554333267b8f] Took 10.07 seconds to spawn the instance on the hypervisor.
Oct 09 16:38:07 compute-0 nova_compute[117331]: 2025-10-09 16:38:07.911 2 DEBUG nova.compute.manager [None req-08ea4e12-77f1-4845-8b4e-dab1be785597 685b4924c5a04af7ae6f4a328bb50f14 a60acfe52e4b4b7f912654a59f0978b7 - - default default] [instance: 5963f55d-5e3f-4b07-86ef-554333267b8f] Checking state _get_power_state /usr/lib/python3.12/site-packages/nova/compute/manager.py:1825
Oct 09 16:38:08 compute-0 nova_compute[117331]: 2025-10-09 16:38:08.071 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 09 16:38:08 compute-0 nova_compute[117331]: 2025-10-09 16:38:08.439 2 INFO nova.compute.manager [None req-08ea4e12-77f1-4845-8b4e-dab1be785597 685b4924c5a04af7ae6f4a328bb50f14 a60acfe52e4b4b7f912654a59f0978b7 - - default default] [instance: 5963f55d-5e3f-4b07-86ef-554333267b8f] Took 15.29 seconds to build instance.
Oct 09 16:38:08 compute-0 nova_compute[117331]: 2025-10-09 16:38:08.945 2 DEBUG oslo_concurrency.lockutils [None req-08ea4e12-77f1-4845-8b4e-dab1be785597 685b4924c5a04af7ae6f4a328bb50f14 a60acfe52e4b4b7f912654a59f0978b7 - - default default] Lock "5963f55d-5e3f-4b07-86ef-554333267b8f" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 16.811s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:424
Oct 09 16:38:09 compute-0 podman[152937]: 2025-10-09 16:38:09.84707365 +0000 UTC m=+0.067481188 container health_status 8227728d23f374cb9528be6ddf01dff38d8c792912a17b0d137932a18390b820 (image=38.102.83.66:5001/podified-master-centos10/openstack-iscsid:watcher_latest, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, config_id=iscsid, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251007, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': '38.102.83.66:5001/podified-master-centos10/openstack-iscsid:watcher_latest', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true, container_name=iscsid, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=watcher_latest, io.buildah.version=1.41.4)
Oct 09 16:38:09 compute-0 podman[152936]: 2025-10-09 16:38:09.853200914 +0000 UTC m=+0.072928520 container health_status 0e966c7a283689daf23c5db5017f23f685ee836319240188a9f70ddd246563c5 (image=38.102.83.66:5001/podified-master-centos10/openstack-neutron-metadata-agent-ovn:watcher_latest, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 10 Base Image, tcib_build_tag=watcher_latest, tcib_managed=true, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, container_name=ovn_metadata_agent, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': '38.102.83.66:5001/podified-master-centos10/openstack-neutron-metadata-agent-ovn:watcher_latest', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.4, managed_by=edpm_ansible, org.label-schema.build-date=20251007, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2)
Oct 09 16:38:12 compute-0 nova_compute[117331]: 2025-10-09 16:38:12.344 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 09 16:38:12 compute-0 nova_compute[117331]: 2025-10-09 16:38:12.747 2 DEBUG oslo_concurrency.lockutils [None req-92e9cdf9-ad12-4559-b171-00e0a82ffffa 685b4924c5a04af7ae6f4a328bb50f14 a60acfe52e4b4b7f912654a59f0978b7 - - default default] Acquiring lock "56389b25-6fdb-41ad-ab62-b6872a816180" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:405
Oct 09 16:38:12 compute-0 nova_compute[117331]: 2025-10-09 16:38:12.747 2 DEBUG oslo_concurrency.lockutils [None req-92e9cdf9-ad12-4559-b171-00e0a82ffffa 685b4924c5a04af7ae6f4a328bb50f14 a60acfe52e4b4b7f912654a59f0978b7 - - default default] Lock "56389b25-6fdb-41ad-ab62-b6872a816180" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.000s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:410
Oct 09 16:38:13 compute-0 nova_compute[117331]: 2025-10-09 16:38:13.075 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 09 16:38:13 compute-0 nova_compute[117331]: 2025-10-09 16:38:13.252 2 DEBUG nova.compute.manager [None req-92e9cdf9-ad12-4559-b171-00e0a82ffffa 685b4924c5a04af7ae6f4a328bb50f14 a60acfe52e4b4b7f912654a59f0978b7 - - default default] [instance: 56389b25-6fdb-41ad-ab62-b6872a816180] Starting instance... _do_build_and_run_instance /usr/lib/python3.12/site-packages/nova/compute/manager.py:2472
Oct 09 16:38:13 compute-0 nova_compute[117331]: 2025-10-09 16:38:13.803 2 DEBUG oslo_concurrency.lockutils [None req-92e9cdf9-ad12-4559-b171-00e0a82ffffa 685b4924c5a04af7ae6f4a328bb50f14 a60acfe52e4b4b7f912654a59f0978b7 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:405
Oct 09 16:38:13 compute-0 nova_compute[117331]: 2025-10-09 16:38:13.803 2 DEBUG oslo_concurrency.lockutils [None req-92e9cdf9-ad12-4559-b171-00e0a82ffffa 685b4924c5a04af7ae6f4a328bb50f14 a60acfe52e4b4b7f912654a59f0978b7 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:410
Oct 09 16:38:13 compute-0 nova_compute[117331]: 2025-10-09 16:38:13.815 2 DEBUG nova.virt.hardware [None req-92e9cdf9-ad12-4559-b171-00e0a82ffffa 685b4924c5a04af7ae6f4a328bb50f14 a60acfe52e4b4b7f912654a59f0978b7 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.12/site-packages/nova/virt/hardware.py:2528
Oct 09 16:38:13 compute-0 nova_compute[117331]: 2025-10-09 16:38:13.816 2 INFO nova.compute.claims [None req-92e9cdf9-ad12-4559-b171-00e0a82ffffa 685b4924c5a04af7ae6f4a328bb50f14 a60acfe52e4b4b7f912654a59f0978b7 - - default default] [instance: 56389b25-6fdb-41ad-ab62-b6872a816180] Claim successful on node compute-0.ctlplane.example.com
Oct 09 16:38:15 compute-0 nova_compute[117331]: 2025-10-09 16:38:15.028 2 DEBUG nova.compute.provider_tree [None req-92e9cdf9-ad12-4559-b171-00e0a82ffffa 685b4924c5a04af7ae6f4a328bb50f14 a60acfe52e4b4b7f912654a59f0978b7 - - default default] Inventory has not changed in ProviderTree for provider: 593051b8-2000-437f-a915-2616fc8b1671 update_inventory /usr/lib/python3.12/site-packages/nova/compute/provider_tree.py:180
Oct 09 16:38:15 compute-0 nova_compute[117331]: 2025-10-09 16:38:15.535 2 DEBUG nova.scheduler.client.report [None req-92e9cdf9-ad12-4559-b171-00e0a82ffffa 685b4924c5a04af7ae6f4a328bb50f14 a60acfe52e4b4b7f912654a59f0978b7 - - default default] Inventory has not changed for provider 593051b8-2000-437f-a915-2616fc8b1671 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 79, 'reserved': 1, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.12/site-packages/nova/scheduler/client/report.py:958
Oct 09 16:38:16 compute-0 nova_compute[117331]: 2025-10-09 16:38:16.044 2 DEBUG oslo_concurrency.lockutils [None req-92e9cdf9-ad12-4559-b171-00e0a82ffffa 685b4924c5a04af7ae6f4a328bb50f14 a60acfe52e4b4b7f912654a59f0978b7 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 2.240s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:424
Oct 09 16:38:16 compute-0 nova_compute[117331]: 2025-10-09 16:38:16.045 2 DEBUG nova.compute.manager [None req-92e9cdf9-ad12-4559-b171-00e0a82ffffa 685b4924c5a04af7ae6f4a328bb50f14 a60acfe52e4b4b7f912654a59f0978b7 - - default default] [instance: 56389b25-6fdb-41ad-ab62-b6872a816180] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.12/site-packages/nova/compute/manager.py:2869
Oct 09 16:38:16 compute-0 nova_compute[117331]: 2025-10-09 16:38:16.557 2 DEBUG nova.compute.manager [None req-92e9cdf9-ad12-4559-b171-00e0a82ffffa 685b4924c5a04af7ae6f4a328bb50f14 a60acfe52e4b4b7f912654a59f0978b7 - - default default] [instance: 56389b25-6fdb-41ad-ab62-b6872a816180] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.12/site-packages/nova/compute/manager.py:2016
Oct 09 16:38:16 compute-0 nova_compute[117331]: 2025-10-09 16:38:16.558 2 DEBUG nova.network.neutron [None req-92e9cdf9-ad12-4559-b171-00e0a82ffffa 685b4924c5a04af7ae6f4a328bb50f14 a60acfe52e4b4b7f912654a59f0978b7 - - default default] [instance: 56389b25-6fdb-41ad-ab62-b6872a816180] allocate_for_instance() allocate_for_instance /usr/lib/python3.12/site-packages/nova/network/neutron.py:1208
Oct 09 16:38:16 compute-0 nova_compute[117331]: 2025-10-09 16:38:16.558 2 WARNING neutronclient.v2_0.client [None req-92e9cdf9-ad12-4559-b171-00e0a82ffffa 685b4924c5a04af7ae6f4a328bb50f14 a60acfe52e4b4b7f912654a59f0978b7 - - default default] The python binding code in neutronclient is deprecated in favor of OpenstackSDK, please use that as this will be removed in a future release.
Oct 09 16:38:16 compute-0 nova_compute[117331]: 2025-10-09 16:38:16.558 2 WARNING neutronclient.v2_0.client [None req-92e9cdf9-ad12-4559-b171-00e0a82ffffa 685b4924c5a04af7ae6f4a328bb50f14 a60acfe52e4b4b7f912654a59f0978b7 - - default default] The python binding code in neutronclient is deprecated in favor of OpenstackSDK, please use that as this will be removed in a future release.
Oct 09 16:38:17 compute-0 nova_compute[117331]: 2025-10-09 16:38:17.070 2 INFO nova.virt.libvirt.driver [None req-92e9cdf9-ad12-4559-b171-00e0a82ffffa 685b4924c5a04af7ae6f4a328bb50f14 a60acfe52e4b4b7f912654a59f0978b7 - - default default] [instance: 56389b25-6fdb-41ad-ab62-b6872a816180] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Oct 09 16:38:17 compute-0 nova_compute[117331]: 2025-10-09 16:38:17.346 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 09 16:38:17 compute-0 nova_compute[117331]: 2025-10-09 16:38:17.582 2 DEBUG nova.compute.manager [None req-92e9cdf9-ad12-4559-b171-00e0a82ffffa 685b4924c5a04af7ae6f4a328bb50f14 a60acfe52e4b4b7f912654a59f0978b7 - - default default] [instance: 56389b25-6fdb-41ad-ab62-b6872a816180] Start building block device mappings for instance. _build_resources /usr/lib/python3.12/site-packages/nova/compute/manager.py:2904
Oct 09 16:38:18 compute-0 nova_compute[117331]: 2025-10-09 16:38:18.078 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 09 16:38:18 compute-0 ovn_metadata_agent[28608]: 2025-10-09 16:38:18.416 28613 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=28, options={'arp_ns_explicit_output': 'true', 'fdb_removal_limit': '0', 'ignore_lsp_down': 'false', 'mac_binding_removal_limit': '0', 'mac_prefix': '4a:65:5f', 'max_tunid': '16711680', 'northd_internal_version': '24.09.4-20.37.0-77.8', 'svc_monitor_mac': '32:24:1e:e3:5d:e8'}, ipsec=False) old=SB_Global(nb_cfg=27) matches /usr/lib/python3.12/site-packages/ovsdbapp/backend/ovs_idl/event.py:55
Oct 09 16:38:18 compute-0 ovn_metadata_agent[28608]: 2025-10-09 16:38:18.417 28613 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 2 seconds run /usr/lib/python3.12/site-packages/neutron/agent/ovn/metadata/agent.py:367
Oct 09 16:38:18 compute-0 nova_compute[117331]: 2025-10-09 16:38:18.417 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 09 16:38:18 compute-0 nova_compute[117331]: 2025-10-09 16:38:18.564 2 DEBUG nova.network.neutron [None req-92e9cdf9-ad12-4559-b171-00e0a82ffffa 685b4924c5a04af7ae6f4a328bb50f14 a60acfe52e4b4b7f912654a59f0978b7 - - default default] [instance: 56389b25-6fdb-41ad-ab62-b6872a816180] Successfully created port: 50e8afe7-f8b4-4c50-b99a-5c611b86fe7e _create_port_minimal /usr/lib/python3.12/site-packages/nova/network/neutron.py:550
Oct 09 16:38:18 compute-0 nova_compute[117331]: 2025-10-09 16:38:18.601 2 DEBUG nova.compute.manager [None req-92e9cdf9-ad12-4559-b171-00e0a82ffffa 685b4924c5a04af7ae6f4a328bb50f14 a60acfe52e4b4b7f912654a59f0978b7 - - default default] [instance: 56389b25-6fdb-41ad-ab62-b6872a816180] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.12/site-packages/nova/compute/manager.py:2678
Oct 09 16:38:18 compute-0 nova_compute[117331]: 2025-10-09 16:38:18.602 2 DEBUG nova.virt.libvirt.driver [None req-92e9cdf9-ad12-4559-b171-00e0a82ffffa 685b4924c5a04af7ae6f4a328bb50f14 a60acfe52e4b4b7f912654a59f0978b7 - - default default] [instance: 56389b25-6fdb-41ad-ab62-b6872a816180] Creating instance directory _create_image /usr/lib/python3.12/site-packages/nova/virt/libvirt/driver.py:5138
Oct 09 16:38:18 compute-0 nova_compute[117331]: 2025-10-09 16:38:18.603 2 INFO nova.virt.libvirt.driver [None req-92e9cdf9-ad12-4559-b171-00e0a82ffffa 685b4924c5a04af7ae6f4a328bb50f14 a60acfe52e4b4b7f912654a59f0978b7 - - default default] [instance: 56389b25-6fdb-41ad-ab62-b6872a816180] Creating image(s)
Oct 09 16:38:18 compute-0 nova_compute[117331]: 2025-10-09 16:38:18.603 2 DEBUG oslo_concurrency.lockutils [None req-92e9cdf9-ad12-4559-b171-00e0a82ffffa 685b4924c5a04af7ae6f4a328bb50f14 a60acfe52e4b4b7f912654a59f0978b7 - - default default] Acquiring lock "/var/lib/nova/instances/56389b25-6fdb-41ad-ab62-b6872a816180/disk.info" by "nova.virt.libvirt.imagebackend.Image.resolve_driver_format.<locals>.write_to_disk_info_file" inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:405
Oct 09 16:38:18 compute-0 nova_compute[117331]: 2025-10-09 16:38:18.604 2 DEBUG oslo_concurrency.lockutils [None req-92e9cdf9-ad12-4559-b171-00e0a82ffffa 685b4924c5a04af7ae6f4a328bb50f14 a60acfe52e4b4b7f912654a59f0978b7 - - default default] Lock "/var/lib/nova/instances/56389b25-6fdb-41ad-ab62-b6872a816180/disk.info" acquired by "nova.virt.libvirt.imagebackend.Image.resolve_driver_format.<locals>.write_to_disk_info_file" :: waited 0.000s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:410
Oct 09 16:38:18 compute-0 nova_compute[117331]: 2025-10-09 16:38:18.604 2 DEBUG oslo_concurrency.lockutils [None req-92e9cdf9-ad12-4559-b171-00e0a82ffffa 685b4924c5a04af7ae6f4a328bb50f14 a60acfe52e4b4b7f912654a59f0978b7 - - default default] Lock "/var/lib/nova/instances/56389b25-6fdb-41ad-ab62-b6872a816180/disk.info" "released" by "nova.virt.libvirt.imagebackend.Image.resolve_driver_format.<locals>.write_to_disk_info_file" :: held 0.001s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:424
Oct 09 16:38:18 compute-0 nova_compute[117331]: 2025-10-09 16:38:18.605 2 DEBUG oslo_utils.imageutils.format_inspector [None req-92e9cdf9-ad12-4559-b171-00e0a82ffffa 685b4924c5a04af7ae6f4a328bb50f14 a60acfe52e4b4b7f912654a59f0978b7 - - default default] Format inspector for vmdk does not match, excluding from consideration (Signature KDMV not found: b'\xebH\x90\x00') _process_chunk /usr/lib/python3.12/site-packages/oslo_utils/imageutils/format_inspector.py:1365
Oct 09 16:38:18 compute-0 nova_compute[117331]: 2025-10-09 16:38:18.609 2 DEBUG oslo_utils.imageutils.format_inspector [None req-92e9cdf9-ad12-4559-b171-00e0a82ffffa 685b4924c5a04af7ae6f4a328bb50f14 a60acfe52e4b4b7f912654a59f0978b7 - - default default] Format inspector for vhdx does not match, excluding from consideration (Region signature not found at 30000) _process_chunk /usr/lib/python3.12/site-packages/oslo_utils/imageutils/format_inspector.py:1365
Oct 09 16:38:18 compute-0 nova_compute[117331]: 2025-10-09 16:38:18.610 2 DEBUG oslo_concurrency.processutils [None req-92e9cdf9-ad12-4559-b171-00e0a82ffffa 685b4924c5a04af7ae6f4a328bb50f14 a60acfe52e4b4b7f912654a59f0978b7 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/cea3dacfdea0f3734ae526b812744e847bc2d356 --force-share --output=json execute /usr/lib/python3.12/site-packages/oslo_concurrency/processutils.py:349
Oct 09 16:38:18 compute-0 nova_compute[117331]: 2025-10-09 16:38:18.680 2 DEBUG oslo_concurrency.processutils [None req-92e9cdf9-ad12-4559-b171-00e0a82ffffa 685b4924c5a04af7ae6f4a328bb50f14 a60acfe52e4b4b7f912654a59f0978b7 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/cea3dacfdea0f3734ae526b812744e847bc2d356 --force-share --output=json" returned: 0 in 0.069s execute /usr/lib/python3.12/site-packages/oslo_concurrency/processutils.py:372
Oct 09 16:38:18 compute-0 nova_compute[117331]: 2025-10-09 16:38:18.681 2 DEBUG oslo_concurrency.lockutils [None req-92e9cdf9-ad12-4559-b171-00e0a82ffffa 685b4924c5a04af7ae6f4a328bb50f14 a60acfe52e4b4b7f912654a59f0978b7 - - default default] Acquiring lock "cea3dacfdea0f3734ae526b812744e847bc2d356" by "nova.virt.libvirt.imagebackend.Qcow2.create_image.<locals>.create_qcow2_image" inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:405
Oct 09 16:38:18 compute-0 nova_compute[117331]: 2025-10-09 16:38:18.682 2 DEBUG oslo_concurrency.lockutils [None req-92e9cdf9-ad12-4559-b171-00e0a82ffffa 685b4924c5a04af7ae6f4a328bb50f14 a60acfe52e4b4b7f912654a59f0978b7 - - default default] Lock "cea3dacfdea0f3734ae526b812744e847bc2d356" acquired by "nova.virt.libvirt.imagebackend.Qcow2.create_image.<locals>.create_qcow2_image" :: waited 0.001s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:410
Oct 09 16:38:18 compute-0 nova_compute[117331]: 2025-10-09 16:38:18.683 2 DEBUG oslo_utils.imageutils.format_inspector [None req-92e9cdf9-ad12-4559-b171-00e0a82ffffa 685b4924c5a04af7ae6f4a328bb50f14 a60acfe52e4b4b7f912654a59f0978b7 - - default default] Format inspector for vmdk does not match, excluding from consideration (Signature KDMV not found: b'\xebH\x90\x00') _process_chunk /usr/lib/python3.12/site-packages/oslo_utils/imageutils/format_inspector.py:1365
Oct 09 16:38:18 compute-0 nova_compute[117331]: 2025-10-09 16:38:18.686 2 DEBUG oslo_utils.imageutils.format_inspector [None req-92e9cdf9-ad12-4559-b171-00e0a82ffffa 685b4924c5a04af7ae6f4a328bb50f14 a60acfe52e4b4b7f912654a59f0978b7 - - default default] Format inspector for vhdx does not match, excluding from consideration (Region signature not found at 30000) _process_chunk /usr/lib/python3.12/site-packages/oslo_utils/imageutils/format_inspector.py:1365
Oct 09 16:38:18 compute-0 nova_compute[117331]: 2025-10-09 16:38:18.686 2 DEBUG oslo_concurrency.processutils [None req-92e9cdf9-ad12-4559-b171-00e0a82ffffa 685b4924c5a04af7ae6f4a328bb50f14 a60acfe52e4b4b7f912654a59f0978b7 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/cea3dacfdea0f3734ae526b812744e847bc2d356 --force-share --output=json execute /usr/lib/python3.12/site-packages/oslo_concurrency/processutils.py:349
Oct 09 16:38:18 compute-0 nova_compute[117331]: 2025-10-09 16:38:18.735 2 DEBUG oslo_concurrency.processutils [None req-92e9cdf9-ad12-4559-b171-00e0a82ffffa 685b4924c5a04af7ae6f4a328bb50f14 a60acfe52e4b4b7f912654a59f0978b7 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/cea3dacfdea0f3734ae526b812744e847bc2d356 --force-share --output=json" returned: 0 in 0.049s execute /usr/lib/python3.12/site-packages/oslo_concurrency/processutils.py:372
Oct 09 16:38:18 compute-0 nova_compute[117331]: 2025-10-09 16:38:18.737 2 DEBUG oslo_concurrency.processutils [None req-92e9cdf9-ad12-4559-b171-00e0a82ffffa 685b4924c5a04af7ae6f4a328bb50f14 a60acfe52e4b4b7f912654a59f0978b7 - - default default] Running cmd (subprocess): env LC_ALL=C LANG=C qemu-img create -f qcow2 -o backing_file=/var/lib/nova/instances/_base/cea3dacfdea0f3734ae526b812744e847bc2d356,backing_fmt=raw /var/lib/nova/instances/56389b25-6fdb-41ad-ab62-b6872a816180/disk 1073741824 execute /usr/lib/python3.12/site-packages/oslo_concurrency/processutils.py:349
Oct 09 16:38:18 compute-0 nova_compute[117331]: 2025-10-09 16:38:18.772 2 DEBUG oslo_concurrency.processutils [None req-92e9cdf9-ad12-4559-b171-00e0a82ffffa 685b4924c5a04af7ae6f4a328bb50f14 a60acfe52e4b4b7f912654a59f0978b7 - - default default] CMD "env LC_ALL=C LANG=C qemu-img create -f qcow2 -o backing_file=/var/lib/nova/instances/_base/cea3dacfdea0f3734ae526b812744e847bc2d356,backing_fmt=raw /var/lib/nova/instances/56389b25-6fdb-41ad-ab62-b6872a816180/disk 1073741824" returned: 0 in 0.035s execute /usr/lib/python3.12/site-packages/oslo_concurrency/processutils.py:372
Oct 09 16:38:18 compute-0 nova_compute[117331]: 2025-10-09 16:38:18.773 2 DEBUG oslo_concurrency.lockutils [None req-92e9cdf9-ad12-4559-b171-00e0a82ffffa 685b4924c5a04af7ae6f4a328bb50f14 a60acfe52e4b4b7f912654a59f0978b7 - - default default] Lock "cea3dacfdea0f3734ae526b812744e847bc2d356" "released" by "nova.virt.libvirt.imagebackend.Qcow2.create_image.<locals>.create_qcow2_image" :: held 0.091s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:424
Oct 09 16:38:18 compute-0 nova_compute[117331]: 2025-10-09 16:38:18.774 2 DEBUG oslo_concurrency.processutils [None req-92e9cdf9-ad12-4559-b171-00e0a82ffffa 685b4924c5a04af7ae6f4a328bb50f14 a60acfe52e4b4b7f912654a59f0978b7 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/cea3dacfdea0f3734ae526b812744e847bc2d356 --force-share --output=json execute /usr/lib/python3.12/site-packages/oslo_concurrency/processutils.py:349
Oct 09 16:38:18 compute-0 nova_compute[117331]: 2025-10-09 16:38:18.826 2 DEBUG oslo_concurrency.processutils [None req-92e9cdf9-ad12-4559-b171-00e0a82ffffa 685b4924c5a04af7ae6f4a328bb50f14 a60acfe52e4b4b7f912654a59f0978b7 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/cea3dacfdea0f3734ae526b812744e847bc2d356 --force-share --output=json" returned: 0 in 0.052s execute /usr/lib/python3.12/site-packages/oslo_concurrency/processutils.py:372
Oct 09 16:38:18 compute-0 nova_compute[117331]: 2025-10-09 16:38:18.827 2 DEBUG nova.virt.disk.api [None req-92e9cdf9-ad12-4559-b171-00e0a82ffffa 685b4924c5a04af7ae6f4a328bb50f14 a60acfe52e4b4b7f912654a59f0978b7 - - default default] Checking if we can resize image /var/lib/nova/instances/56389b25-6fdb-41ad-ab62-b6872a816180/disk. size=1073741824 can_resize_image /usr/lib/python3.12/site-packages/nova/virt/disk/api.py:164
Oct 09 16:38:18 compute-0 nova_compute[117331]: 2025-10-09 16:38:18.827 2 DEBUG oslo_concurrency.processutils [None req-92e9cdf9-ad12-4559-b171-00e0a82ffffa 685b4924c5a04af7ae6f4a328bb50f14 a60acfe52e4b4b7f912654a59f0978b7 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/56389b25-6fdb-41ad-ab62-b6872a816180/disk --force-share --output=json execute /usr/lib/python3.12/site-packages/oslo_concurrency/processutils.py:349
Oct 09 16:38:18 compute-0 podman[152996]: 2025-10-09 16:38:18.865039991 +0000 UTC m=+0.090756415 container health_status 64fa251112d01abb34ec1bb8e22019815ddcaaaa391ce5ec91517147ac5a9994 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, io.openshift.expose-services=, container_name=openstack_network_exporter, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, maintainer=Red Hat, Inc., config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, name=ubi9-minimal, url=https://catalog.redhat.com/en/search?searchType=containers, config_id=edpm, com.redhat.component=ubi9-minimal-container, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., managed_by=edpm_ansible, release=1755695350, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.buildah.version=1.33.7, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., distribution-scope=public, version=9.6, architecture=x86_64, vcs-type=git, io.openshift.tags=minimal rhel9, build-date=2025-08-20T13:12:41)
Oct 09 16:38:18 compute-0 podman[152999]: 2025-10-09 16:38:18.879301642 +0000 UTC m=+0.095644679 container health_status d615e4f24ff62fbb14be4a63e6da568ec5b22309773c1c66103b6b4de64a8a2f (image=38.102.83.66:5001/podified-master-centos10/openstack-ovn-controller:watcher_latest, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251007, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': '38.102.83.66:5001/podified-master-centos10/openstack-ovn-controller:watcher_latest', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=watcher_latest, config_id=ovn_controller, container_name=ovn_controller, io.buildah.version=1.41.4)
Oct 09 16:38:18 compute-0 nova_compute[117331]: 2025-10-09 16:38:18.883 2 DEBUG oslo_concurrency.processutils [None req-92e9cdf9-ad12-4559-b171-00e0a82ffffa 685b4924c5a04af7ae6f4a328bb50f14 a60acfe52e4b4b7f912654a59f0978b7 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/56389b25-6fdb-41ad-ab62-b6872a816180/disk --force-share --output=json" returned: 0 in 0.055s execute /usr/lib/python3.12/site-packages/oslo_concurrency/processutils.py:372
Oct 09 16:38:18 compute-0 nova_compute[117331]: 2025-10-09 16:38:18.884 2 DEBUG nova.virt.disk.api [None req-92e9cdf9-ad12-4559-b171-00e0a82ffffa 685b4924c5a04af7ae6f4a328bb50f14 a60acfe52e4b4b7f912654a59f0978b7 - - default default] Cannot resize image /var/lib/nova/instances/56389b25-6fdb-41ad-ab62-b6872a816180/disk to a smaller size. can_resize_image /usr/lib/python3.12/site-packages/nova/virt/disk/api.py:170
Oct 09 16:38:18 compute-0 nova_compute[117331]: 2025-10-09 16:38:18.885 2 DEBUG nova.virt.libvirt.driver [None req-92e9cdf9-ad12-4559-b171-00e0a82ffffa 685b4924c5a04af7ae6f4a328bb50f14 a60acfe52e4b4b7f912654a59f0978b7 - - default default] [instance: 56389b25-6fdb-41ad-ab62-b6872a816180] Created local disks _create_image /usr/lib/python3.12/site-packages/nova/virt/libvirt/driver.py:5270
Oct 09 16:38:18 compute-0 nova_compute[117331]: 2025-10-09 16:38:18.886 2 DEBUG nova.virt.libvirt.driver [None req-92e9cdf9-ad12-4559-b171-00e0a82ffffa 685b4924c5a04af7ae6f4a328bb50f14 a60acfe52e4b4b7f912654a59f0978b7 - - default default] [instance: 56389b25-6fdb-41ad-ab62-b6872a816180] Ensure instance console log exists: /var/lib/nova/instances/56389b25-6fdb-41ad-ab62-b6872a816180/console.log _ensure_console_log_for_instance /usr/lib/python3.12/site-packages/nova/virt/libvirt/driver.py:5017
Oct 09 16:38:18 compute-0 nova_compute[117331]: 2025-10-09 16:38:18.887 2 DEBUG oslo_concurrency.lockutils [None req-92e9cdf9-ad12-4559-b171-00e0a82ffffa 685b4924c5a04af7ae6f4a328bb50f14 a60acfe52e4b4b7f912654a59f0978b7 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:405
Oct 09 16:38:18 compute-0 nova_compute[117331]: 2025-10-09 16:38:18.888 2 DEBUG oslo_concurrency.lockutils [None req-92e9cdf9-ad12-4559-b171-00e0a82ffffa 685b4924c5a04af7ae6f4a328bb50f14 a60acfe52e4b4b7f912654a59f0978b7 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.001s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:410
Oct 09 16:38:18 compute-0 nova_compute[117331]: 2025-10-09 16:38:18.889 2 DEBUG oslo_concurrency.lockutils [None req-92e9cdf9-ad12-4559-b171-00e0a82ffffa 685b4924c5a04af7ae6f4a328bb50f14 a60acfe52e4b4b7f912654a59f0978b7 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:424
Oct 09 16:38:19 compute-0 ovn_controller[19752]: 2025-10-09T16:38:19Z|00037|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:36:08:54 10.100.0.12
Oct 09 16:38:19 compute-0 ovn_controller[19752]: 2025-10-09T16:38:19Z|00038|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:36:08:54 10.100.0.12
Oct 09 16:38:20 compute-0 ovn_metadata_agent[28608]: 2025-10-09 16:38:20.419 28613 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=9954897f-aa83-45dd-8e84-289816676c2a, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '28'}),), if_exists=True) do_commit /usr/lib/python3.12/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 09 16:38:20 compute-0 nova_compute[117331]: 2025-10-09 16:38:20.535 2 DEBUG nova.network.neutron [None req-92e9cdf9-ad12-4559-b171-00e0a82ffffa 685b4924c5a04af7ae6f4a328bb50f14 a60acfe52e4b4b7f912654a59f0978b7 - - default default] [instance: 56389b25-6fdb-41ad-ab62-b6872a816180] Successfully updated port: 50e8afe7-f8b4-4c50-b99a-5c611b86fe7e _update_port /usr/lib/python3.12/site-packages/nova/network/neutron.py:588
Oct 09 16:38:20 compute-0 nova_compute[117331]: 2025-10-09 16:38:20.591 2 DEBUG nova.compute.manager [req-8ce41f99-b88d-404c-9b20-e3ff97520b6a req-22575b36-c38f-4b36-896e-156a32467c27 ee15961ad2be4bada6c106ac4f69f422 c076c42271004e7aa86e84416e33f826 - - default default] [instance: 56389b25-6fdb-41ad-ab62-b6872a816180] Received event network-changed-50e8afe7-f8b4-4c50-b99a-5c611b86fe7e external_instance_event /usr/lib/python3.12/site-packages/nova/compute/manager.py:11812
Oct 09 16:38:20 compute-0 nova_compute[117331]: 2025-10-09 16:38:20.592 2 DEBUG nova.compute.manager [req-8ce41f99-b88d-404c-9b20-e3ff97520b6a req-22575b36-c38f-4b36-896e-156a32467c27 ee15961ad2be4bada6c106ac4f69f422 c076c42271004e7aa86e84416e33f826 - - default default] [instance: 56389b25-6fdb-41ad-ab62-b6872a816180] Refreshing instance network info cache due to event network-changed-50e8afe7-f8b4-4c50-b99a-5c611b86fe7e. external_instance_event /usr/lib/python3.12/site-packages/nova/compute/manager.py:11817
Oct 09 16:38:20 compute-0 nova_compute[117331]: 2025-10-09 16:38:20.592 2 DEBUG oslo_concurrency.lockutils [req-8ce41f99-b88d-404c-9b20-e3ff97520b6a req-22575b36-c38f-4b36-896e-156a32467c27 ee15961ad2be4bada6c106ac4f69f422 c076c42271004e7aa86e84416e33f826 - - default default] Acquiring lock "refresh_cache-56389b25-6fdb-41ad-ab62-b6872a816180" lock /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:313
Oct 09 16:38:20 compute-0 nova_compute[117331]: 2025-10-09 16:38:20.592 2 DEBUG oslo_concurrency.lockutils [req-8ce41f99-b88d-404c-9b20-e3ff97520b6a req-22575b36-c38f-4b36-896e-156a32467c27 ee15961ad2be4bada6c106ac4f69f422 c076c42271004e7aa86e84416e33f826 - - default default] Acquired lock "refresh_cache-56389b25-6fdb-41ad-ab62-b6872a816180" lock /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:316
Oct 09 16:38:20 compute-0 nova_compute[117331]: 2025-10-09 16:38:20.592 2 DEBUG nova.network.neutron [req-8ce41f99-b88d-404c-9b20-e3ff97520b6a req-22575b36-c38f-4b36-896e-156a32467c27 ee15961ad2be4bada6c106ac4f69f422 c076c42271004e7aa86e84416e33f826 - - default default] [instance: 56389b25-6fdb-41ad-ab62-b6872a816180] Refreshing network info cache for port 50e8afe7-f8b4-4c50-b99a-5c611b86fe7e _get_instance_nw_info /usr/lib/python3.12/site-packages/nova/network/neutron.py:2067
Oct 09 16:38:21 compute-0 nova_compute[117331]: 2025-10-09 16:38:21.042 2 DEBUG oslo_concurrency.lockutils [None req-92e9cdf9-ad12-4559-b171-00e0a82ffffa 685b4924c5a04af7ae6f4a328bb50f14 a60acfe52e4b4b7f912654a59f0978b7 - - default default] Acquiring lock "refresh_cache-56389b25-6fdb-41ad-ab62-b6872a816180" lock /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:313
Oct 09 16:38:21 compute-0 nova_compute[117331]: 2025-10-09 16:38:21.098 2 WARNING neutronclient.v2_0.client [req-8ce41f99-b88d-404c-9b20-e3ff97520b6a req-22575b36-c38f-4b36-896e-156a32467c27 ee15961ad2be4bada6c106ac4f69f422 c076c42271004e7aa86e84416e33f826 - - default default] The python binding code in neutronclient is deprecated in favor of OpenstackSDK, please use that as this will be removed in a future release.
Oct 09 16:38:21 compute-0 nova_compute[117331]: 2025-10-09 16:38:21.410 2 DEBUG nova.network.neutron [req-8ce41f99-b88d-404c-9b20-e3ff97520b6a req-22575b36-c38f-4b36-896e-156a32467c27 ee15961ad2be4bada6c106ac4f69f422 c076c42271004e7aa86e84416e33f826 - - default default] [instance: 56389b25-6fdb-41ad-ab62-b6872a816180] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.12/site-packages/nova/network/neutron.py:3383
Oct 09 16:38:21 compute-0 nova_compute[117331]: 2025-10-09 16:38:21.560 2 DEBUG nova.network.neutron [req-8ce41f99-b88d-404c-9b20-e3ff97520b6a req-22575b36-c38f-4b36-896e-156a32467c27 ee15961ad2be4bada6c106ac4f69f422 c076c42271004e7aa86e84416e33f826 - - default default] [instance: 56389b25-6fdb-41ad-ab62-b6872a816180] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.12/site-packages/nova/network/neutron.py:116
Oct 09 16:38:22 compute-0 nova_compute[117331]: 2025-10-09 16:38:22.066 2 DEBUG oslo_concurrency.lockutils [req-8ce41f99-b88d-404c-9b20-e3ff97520b6a req-22575b36-c38f-4b36-896e-156a32467c27 ee15961ad2be4bada6c106ac4f69f422 c076c42271004e7aa86e84416e33f826 - - default default] Releasing lock "refresh_cache-56389b25-6fdb-41ad-ab62-b6872a816180" lock /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:334
Oct 09 16:38:22 compute-0 nova_compute[117331]: 2025-10-09 16:38:22.067 2 DEBUG oslo_concurrency.lockutils [None req-92e9cdf9-ad12-4559-b171-00e0a82ffffa 685b4924c5a04af7ae6f4a328bb50f14 a60acfe52e4b4b7f912654a59f0978b7 - - default default] Acquired lock "refresh_cache-56389b25-6fdb-41ad-ab62-b6872a816180" lock /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:316
Oct 09 16:38:22 compute-0 nova_compute[117331]: 2025-10-09 16:38:22.067 2 DEBUG nova.network.neutron [None req-92e9cdf9-ad12-4559-b171-00e0a82ffffa 685b4924c5a04af7ae6f4a328bb50f14 a60acfe52e4b4b7f912654a59f0978b7 - - default default] [instance: 56389b25-6fdb-41ad-ab62-b6872a816180] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.12/site-packages/nova/network/neutron.py:2070
Oct 09 16:38:22 compute-0 nova_compute[117331]: 2025-10-09 16:38:22.348 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 09 16:38:22 compute-0 nova_compute[117331]: 2025-10-09 16:38:22.843 2 DEBUG nova.network.neutron [None req-92e9cdf9-ad12-4559-b171-00e0a82ffffa 685b4924c5a04af7ae6f4a328bb50f14 a60acfe52e4b4b7f912654a59f0978b7 - - default default] [instance: 56389b25-6fdb-41ad-ab62-b6872a816180] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.12/site-packages/nova/network/neutron.py:3383
Oct 09 16:38:23 compute-0 nova_compute[117331]: 2025-10-09 16:38:23.075 2 WARNING neutronclient.v2_0.client [None req-92e9cdf9-ad12-4559-b171-00e0a82ffffa 685b4924c5a04af7ae6f4a328bb50f14 a60acfe52e4b4b7f912654a59f0978b7 - - default default] The python binding code in neutronclient is deprecated in favor of OpenstackSDK, please use that as this will be removed in a future release.
Oct 09 16:38:23 compute-0 nova_compute[117331]: 2025-10-09 16:38:23.081 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 09 16:38:23 compute-0 nova_compute[117331]: 2025-10-09 16:38:23.237 2 DEBUG nova.network.neutron [None req-92e9cdf9-ad12-4559-b171-00e0a82ffffa 685b4924c5a04af7ae6f4a328bb50f14 a60acfe52e4b4b7f912654a59f0978b7 - - default default] [instance: 56389b25-6fdb-41ad-ab62-b6872a816180] Updating instance_info_cache with network_info: [{"id": "50e8afe7-f8b4-4c50-b99a-5c611b86fe7e", "address": "fa:16:3e:48:bc:99", "network": {"id": "cf3aa351-4d26-41f3-8cb5-1ff2d3d995c8", "bridge": "br-int", "label": "tempest-TestExecuteWorkloadStabilizationStrategy-105109230-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "5b6d650a71ca47d58b10f5fd874c4898", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap50e8afe7-f8", "ovs_interfaceid": "50e8afe7-f8b4-4c50-b99a-5c611b86fe7e", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.12/site-packages/nova/network/neutron.py:116
Oct 09 16:38:23 compute-0 nova_compute[117331]: 2025-10-09 16:38:23.744 2 DEBUG oslo_concurrency.lockutils [None req-92e9cdf9-ad12-4559-b171-00e0a82ffffa 685b4924c5a04af7ae6f4a328bb50f14 a60acfe52e4b4b7f912654a59f0978b7 - - default default] Releasing lock "refresh_cache-56389b25-6fdb-41ad-ab62-b6872a816180" lock /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:334
Oct 09 16:38:23 compute-0 nova_compute[117331]: 2025-10-09 16:38:23.745 2 DEBUG nova.compute.manager [None req-92e9cdf9-ad12-4559-b171-00e0a82ffffa 685b4924c5a04af7ae6f4a328bb50f14 a60acfe52e4b4b7f912654a59f0978b7 - - default default] [instance: 56389b25-6fdb-41ad-ab62-b6872a816180] Instance network_info: |[{"id": "50e8afe7-f8b4-4c50-b99a-5c611b86fe7e", "address": "fa:16:3e:48:bc:99", "network": {"id": "cf3aa351-4d26-41f3-8cb5-1ff2d3d995c8", "bridge": "br-int", "label": "tempest-TestExecuteWorkloadStabilizationStrategy-105109230-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "5b6d650a71ca47d58b10f5fd874c4898", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap50e8afe7-f8", "ovs_interfaceid": "50e8afe7-f8b4-4c50-b99a-5c611b86fe7e", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.12/site-packages/nova/compute/manager.py:2031
Oct 09 16:38:23 compute-0 nova_compute[117331]: 2025-10-09 16:38:23.749 2 DEBUG nova.virt.libvirt.driver [None req-92e9cdf9-ad12-4559-b171-00e0a82ffffa 685b4924c5a04af7ae6f4a328bb50f14 a60acfe52e4b4b7f912654a59f0978b7 - - default default] [instance: 56389b25-6fdb-41ad-ab62-b6872a816180] Start _get_guest_xml network_info=[{"id": "50e8afe7-f8b4-4c50-b99a-5c611b86fe7e", "address": "fa:16:3e:48:bc:99", "network": {"id": "cf3aa351-4d26-41f3-8cb5-1ff2d3d995c8", "bridge": "br-int", "label": "tempest-TestExecuteWorkloadStabilizationStrategy-105109230-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "5b6d650a71ca47d58b10f5fd874c4898", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap50e8afe7-f8", "ovs_interfaceid": "50e8afe7-f8b4-4c50-b99a-5c611b86fe7e", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-09T16:07:02Z,direct_url=<?>,disk_format='qcow2',id=b7d6e0af-25e4-4227-9dc6-43143898ceee,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='b30e8cf5e10742f190212b4cb97ce2c9',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-09T16:07:04Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'device_type': 'disk', 'encrypted': False, 'size': 0, 'encryption_format': None, 'guest_format': None, 'boot_index': 0, 'device_name': '/dev/vda', 'disk_bus': 'virtio', 'encryption_options': None, 'encryption_secret_uuid': None, 'image_id': 'b7d6e0af-25e4-4227-9dc6-43143898ceee'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None}share_info=None _get_guest_xml /usr/lib/python3.12/site-packages/nova/virt/libvirt/driver.py:8046
Oct 09 16:38:23 compute-0 nova_compute[117331]: 2025-10-09 16:38:23.755 2 WARNING nova.virt.libvirt.driver [None req-92e9cdf9-ad12-4559-b171-00e0a82ffffa 685b4924c5a04af7ae6f4a328bb50f14 a60acfe52e4b4b7f912654a59f0978b7 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Oct 09 16:38:23 compute-0 nova_compute[117331]: 2025-10-09 16:38:23.756 2 DEBUG nova.virt.driver [None req-92e9cdf9-ad12-4559-b171-00e0a82ffffa 685b4924c5a04af7ae6f4a328bb50f14 a60acfe52e4b4b7f912654a59f0978b7 - - default default] InstanceDriverMetadata: InstanceDriverMetadata(root_type='image', root_id='b7d6e0af-25e4-4227-9dc6-43143898ceee', instance_meta=NovaInstanceMeta(name='tempest-TestExecuteWorkloadStabilizationStrategy-server-623877253', uuid='56389b25-6fdb-41ad-ab62-b6872a816180'), owner=OwnerMeta(userid='685b4924c5a04af7ae6f4a328bb50f14', username='tempest-TestExecuteWorkloadStabilizationStrategy-1000021673-project-admin', projectid='a60acfe52e4b4b7f912654a59f0978b7', projectname='tempest-TestExecuteWorkloadStabilizationStrategy-1000021673'), image=ImageMeta(id='b7d6e0af-25e4-4227-9dc6-43143898ceee', name=None, container_format='bare', disk_format='qcow2', min_disk=1, min_ram=0, properties={'hw_rng_model': 'virtio'}), flavor=FlavorMeta(name='m1.nano', flavorid='5aeb5bf9-70f3-4ab6-b418-2c3319a7d2b3', memory_mb=128, vcpus=1, root_gb=1, ephemeral_gb=0, extra_specs={'hw_rng:allowed': 'True'}, swap=0), network_info=[{"id": "50e8afe7-f8b4-4c50-b99a-5c611b86fe7e", "address": "fa:16:3e:48:bc:99", "network": {"id": "cf3aa351-4d26-41f3-8cb5-1ff2d3d995c8", "bridge": "br-int", "label": "tempest-TestExecuteWorkloadStabilizationStrategy-105109230-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "5b6d650a71ca47d58b10f5fd874c4898", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap50e8afe7-f8", "ovs_interfaceid": "50e8afe7-f8b4-4c50-b99a-5c611b86fe7e", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}], nova_package='32.1.0-0.20251008143712.076498e.el10', creation_time=1760027903.7565033) get_instance_driver_metadata /usr/lib/python3.12/site-packages/nova/virt/driver.py:437
Oct 09 16:38:23 compute-0 nova_compute[117331]: 2025-10-09 16:38:23.761 2 DEBUG nova.virt.libvirt.host [None req-92e9cdf9-ad12-4559-b171-00e0a82ffffa 685b4924c5a04af7ae6f4a328bb50f14 a60acfe52e4b4b7f912654a59f0978b7 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.12/site-packages/nova/virt/libvirt/host.py:1698
Oct 09 16:38:23 compute-0 nova_compute[117331]: 2025-10-09 16:38:23.761 2 DEBUG nova.virt.libvirt.host [None req-92e9cdf9-ad12-4559-b171-00e0a82ffffa 685b4924c5a04af7ae6f4a328bb50f14 a60acfe52e4b4b7f912654a59f0978b7 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.12/site-packages/nova/virt/libvirt/host.py:1708
Oct 09 16:38:23 compute-0 nova_compute[117331]: 2025-10-09 16:38:23.764 2 DEBUG nova.virt.libvirt.host [None req-92e9cdf9-ad12-4559-b171-00e0a82ffffa 685b4924c5a04af7ae6f4a328bb50f14 a60acfe52e4b4b7f912654a59f0978b7 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.12/site-packages/nova/virt/libvirt/host.py:1717
Oct 09 16:38:23 compute-0 nova_compute[117331]: 2025-10-09 16:38:23.765 2 DEBUG nova.virt.libvirt.host [None req-92e9cdf9-ad12-4559-b171-00e0a82ffffa 685b4924c5a04af7ae6f4a328bb50f14 a60acfe52e4b4b7f912654a59f0978b7 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.12/site-packages/nova/virt/libvirt/host.py:1724
Oct 09 16:38:23 compute-0 nova_compute[117331]: 2025-10-09 16:38:23.765 2 DEBUG nova.virt.libvirt.driver [None req-92e9cdf9-ad12-4559-b171-00e0a82ffffa 685b4924c5a04af7ae6f4a328bb50f14 a60acfe52e4b4b7f912654a59f0978b7 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.12/site-packages/nova/virt/libvirt/driver.py:5809
Oct 09 16:38:23 compute-0 nova_compute[117331]: 2025-10-09 16:38:23.765 2 DEBUG nova.virt.hardware [None req-92e9cdf9-ad12-4559-b171-00e0a82ffffa 685b4924c5a04af7ae6f4a328bb50f14 a60acfe52e4b4b7f912654a59f0978b7 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-10-09T16:07:00Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='5aeb5bf9-70f3-4ab6-b418-2c3319a7d2b3',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-09T16:07:02Z,direct_url=<?>,disk_format='qcow2',id=b7d6e0af-25e4-4227-9dc6-43143898ceee,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='b30e8cf5e10742f190212b4cb97ce2c9',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-09T16:07:04Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.12/site-packages/nova/virt/hardware.py:571
Oct 09 16:38:23 compute-0 nova_compute[117331]: 2025-10-09 16:38:23.766 2 DEBUG nova.virt.hardware [None req-92e9cdf9-ad12-4559-b171-00e0a82ffffa 685b4924c5a04af7ae6f4a328bb50f14 a60acfe52e4b4b7f912654a59f0978b7 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.12/site-packages/nova/virt/hardware.py:356
Oct 09 16:38:23 compute-0 nova_compute[117331]: 2025-10-09 16:38:23.766 2 DEBUG nova.virt.hardware [None req-92e9cdf9-ad12-4559-b171-00e0a82ffffa 685b4924c5a04af7ae6f4a328bb50f14 a60acfe52e4b4b7f912654a59f0978b7 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.12/site-packages/nova/virt/hardware.py:360
Oct 09 16:38:23 compute-0 nova_compute[117331]: 2025-10-09 16:38:23.766 2 DEBUG nova.virt.hardware [None req-92e9cdf9-ad12-4559-b171-00e0a82ffffa 685b4924c5a04af7ae6f4a328bb50f14 a60acfe52e4b4b7f912654a59f0978b7 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.12/site-packages/nova/virt/hardware.py:396
Oct 09 16:38:23 compute-0 nova_compute[117331]: 2025-10-09 16:38:23.766 2 DEBUG nova.virt.hardware [None req-92e9cdf9-ad12-4559-b171-00e0a82ffffa 685b4924c5a04af7ae6f4a328bb50f14 a60acfe52e4b4b7f912654a59f0978b7 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.12/site-packages/nova/virt/hardware.py:400
Oct 09 16:38:23 compute-0 nova_compute[117331]: 2025-10-09 16:38:23.767 2 DEBUG nova.virt.hardware [None req-92e9cdf9-ad12-4559-b171-00e0a82ffffa 685b4924c5a04af7ae6f4a328bb50f14 a60acfe52e4b4b7f912654a59f0978b7 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.12/site-packages/nova/virt/hardware.py:438
Oct 09 16:38:23 compute-0 nova_compute[117331]: 2025-10-09 16:38:23.767 2 DEBUG nova.virt.hardware [None req-92e9cdf9-ad12-4559-b171-00e0a82ffffa 685b4924c5a04af7ae6f4a328bb50f14 a60acfe52e4b4b7f912654a59f0978b7 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.12/site-packages/nova/virt/hardware.py:577
Oct 09 16:38:23 compute-0 nova_compute[117331]: 2025-10-09 16:38:23.768 2 DEBUG nova.virt.hardware [None req-92e9cdf9-ad12-4559-b171-00e0a82ffffa 685b4924c5a04af7ae6f4a328bb50f14 a60acfe52e4b4b7f912654a59f0978b7 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.12/site-packages/nova/virt/hardware.py:479
Oct 09 16:38:23 compute-0 nova_compute[117331]: 2025-10-09 16:38:23.768 2 DEBUG nova.virt.hardware [None req-92e9cdf9-ad12-4559-b171-00e0a82ffffa 685b4924c5a04af7ae6f4a328bb50f14 a60acfe52e4b4b7f912654a59f0978b7 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.12/site-packages/nova/virt/hardware.py:509
Oct 09 16:38:23 compute-0 nova_compute[117331]: 2025-10-09 16:38:23.768 2 DEBUG nova.virt.hardware [None req-92e9cdf9-ad12-4559-b171-00e0a82ffffa 685b4924c5a04af7ae6f4a328bb50f14 a60acfe52e4b4b7f912654a59f0978b7 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.12/site-packages/nova/virt/hardware.py:583
Oct 09 16:38:23 compute-0 nova_compute[117331]: 2025-10-09 16:38:23.769 2 DEBUG nova.virt.hardware [None req-92e9cdf9-ad12-4559-b171-00e0a82ffffa 685b4924c5a04af7ae6f4a328bb50f14 a60acfe52e4b4b7f912654a59f0978b7 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.12/site-packages/nova/virt/hardware.py:585
Oct 09 16:38:23 compute-0 nova_compute[117331]: 2025-10-09 16:38:23.773 2 DEBUG nova.virt.libvirt.vif [None req-92e9cdf9-ad12-4559-b171-00e0a82ffffa 685b4924c5a04af7ae6f4a328bb50f14 a60acfe52e4b4b7f912654a59f0978b7 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,compute_id=1,config_drive='',created_at=2025-10-09T16:38:12Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description=None,display_name='tempest-TestExecuteWorkloadStabilizationStrategy-server-623877253',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testexecuteworkloadstabilizationstrategy-server-6238772',id=29,image_ref='b7d6e0af-25e4-4227-9dc6-43143898ceee',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='a60acfe52e4b4b7f912654a59f0978b7',ramdisk_id='',reservation_id='r-zwyn0omz',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='admin,member,reader,manager',image_base_image_ref='b7d6e0af-25e4-4227-9dc6-43143898ceee',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestExecuteWorkloadStabilizationStrategy-1000021673',owner_user_name='tempest-TestExecuteWorkloadStabilizationStrategy-1000021673-project-admin'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-09T16:38:17Z,user_data=None,user_id='685b4924c5a04af7ae6f4a328bb50f14',uuid=56389b25-6fdb-41ad-ab62-b6872a816180,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "50e8afe7-f8b4-4c50-b99a-5c611b86fe7e", "address": "fa:16:3e:48:bc:99", "network": {"id": "cf3aa351-4d26-41f3-8cb5-1ff2d3d995c8", "bridge": "br-int", "label": "tempest-TestExecuteWorkloadStabilizationStrategy-105109230-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "5b6d650a71ca47d58b10f5fd874c4898", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap50e8afe7-f8", "ovs_interfaceid": "50e8afe7-f8b4-4c50-b99a-5c611b86fe7e", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.12/site-packages/nova/virt/libvirt/vif.py:574
Oct 09 16:38:23 compute-0 nova_compute[117331]: 2025-10-09 16:38:23.774 2 DEBUG nova.network.os_vif_util [None req-92e9cdf9-ad12-4559-b171-00e0a82ffffa 685b4924c5a04af7ae6f4a328bb50f14 a60acfe52e4b4b7f912654a59f0978b7 - - default default] Converting VIF {"id": "50e8afe7-f8b4-4c50-b99a-5c611b86fe7e", "address": "fa:16:3e:48:bc:99", "network": {"id": "cf3aa351-4d26-41f3-8cb5-1ff2d3d995c8", "bridge": "br-int", "label": "tempest-TestExecuteWorkloadStabilizationStrategy-105109230-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "5b6d650a71ca47d58b10f5fd874c4898", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap50e8afe7-f8", "ovs_interfaceid": "50e8afe7-f8b4-4c50-b99a-5c611b86fe7e", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.12/site-packages/nova/network/os_vif_util.py:511
Oct 09 16:38:23 compute-0 nova_compute[117331]: 2025-10-09 16:38:23.775 2 DEBUG nova.network.os_vif_util [None req-92e9cdf9-ad12-4559-b171-00e0a82ffffa 685b4924c5a04af7ae6f4a328bb50f14 a60acfe52e4b4b7f912654a59f0978b7 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:48:bc:99,bridge_name='br-int',has_traffic_filtering=True,id=50e8afe7-f8b4-4c50-b99a-5c611b86fe7e,network=Network(cf3aa351-4d26-41f3-8cb5-1ff2d3d995c8),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap50e8afe7-f8') nova_to_osvif_vif /usr/lib/python3.12/site-packages/nova/network/os_vif_util.py:548
Oct 09 16:38:23 compute-0 nova_compute[117331]: 2025-10-09 16:38:23.775 2 DEBUG nova.objects.instance [None req-92e9cdf9-ad12-4559-b171-00e0a82ffffa 685b4924c5a04af7ae6f4a328bb50f14 a60acfe52e4b4b7f912654a59f0978b7 - - default default] Lazy-loading 'pci_devices' on Instance uuid 56389b25-6fdb-41ad-ab62-b6872a816180 obj_load_attr /usr/lib/python3.12/site-packages/nova/objects/instance.py:1141
Oct 09 16:38:24 compute-0 nova_compute[117331]: 2025-10-09 16:38:24.283 2 DEBUG nova.virt.libvirt.driver [None req-92e9cdf9-ad12-4559-b171-00e0a82ffffa 685b4924c5a04af7ae6f4a328bb50f14 a60acfe52e4b4b7f912654a59f0978b7 - - default default] [instance: 56389b25-6fdb-41ad-ab62-b6872a816180] End _get_guest_xml xml=<domain type="kvm">
Oct 09 16:38:24 compute-0 nova_compute[117331]:   <uuid>56389b25-6fdb-41ad-ab62-b6872a816180</uuid>
Oct 09 16:38:24 compute-0 nova_compute[117331]:   <name>instance-0000001d</name>
Oct 09 16:38:24 compute-0 nova_compute[117331]:   <memory>131072</memory>
Oct 09 16:38:24 compute-0 nova_compute[117331]:   <vcpu>1</vcpu>
Oct 09 16:38:24 compute-0 nova_compute[117331]:   <metadata>
Oct 09 16:38:24 compute-0 nova_compute[117331]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Oct 09 16:38:24 compute-0 nova_compute[117331]:       <nova:package version="32.1.0-0.20251008143712.076498e.el10"/>
Oct 09 16:38:24 compute-0 nova_compute[117331]:       <nova:name>tempest-TestExecuteWorkloadStabilizationStrategy-server-623877253</nova:name>
Oct 09 16:38:24 compute-0 nova_compute[117331]:       <nova:creationTime>2025-10-09 16:38:23</nova:creationTime>
Oct 09 16:38:24 compute-0 nova_compute[117331]:       <nova:flavor name="m1.nano" id="5aeb5bf9-70f3-4ab6-b418-2c3319a7d2b3">
Oct 09 16:38:24 compute-0 nova_compute[117331]:         <nova:memory>128</nova:memory>
Oct 09 16:38:24 compute-0 nova_compute[117331]:         <nova:disk>1</nova:disk>
Oct 09 16:38:24 compute-0 nova_compute[117331]:         <nova:swap>0</nova:swap>
Oct 09 16:38:24 compute-0 nova_compute[117331]:         <nova:ephemeral>0</nova:ephemeral>
Oct 09 16:38:24 compute-0 nova_compute[117331]:         <nova:vcpus>1</nova:vcpus>
Oct 09 16:38:24 compute-0 nova_compute[117331]:         <nova:extraSpecs>
Oct 09 16:38:24 compute-0 nova_compute[117331]:           <nova:extraSpec name="hw_rng:allowed">True</nova:extraSpec>
Oct 09 16:38:24 compute-0 nova_compute[117331]:         </nova:extraSpecs>
Oct 09 16:38:24 compute-0 nova_compute[117331]:       </nova:flavor>
Oct 09 16:38:24 compute-0 nova_compute[117331]:       <nova:image uuid="b7d6e0af-25e4-4227-9dc6-43143898ceee">
Oct 09 16:38:24 compute-0 nova_compute[117331]:         <nova:containerFormat>bare</nova:containerFormat>
Oct 09 16:38:24 compute-0 nova_compute[117331]:         <nova:diskFormat>qcow2</nova:diskFormat>
Oct 09 16:38:24 compute-0 nova_compute[117331]:         <nova:minDisk>1</nova:minDisk>
Oct 09 16:38:24 compute-0 nova_compute[117331]:         <nova:minRam>0</nova:minRam>
Oct 09 16:38:24 compute-0 nova_compute[117331]:         <nova:properties>
Oct 09 16:38:24 compute-0 nova_compute[117331]:           <nova:property name="hw_rng_model">virtio</nova:property>
Oct 09 16:38:24 compute-0 nova_compute[117331]:         </nova:properties>
Oct 09 16:38:24 compute-0 nova_compute[117331]:       </nova:image>
Oct 09 16:38:24 compute-0 nova_compute[117331]:       <nova:owner>
Oct 09 16:38:24 compute-0 nova_compute[117331]:         <nova:user uuid="685b4924c5a04af7ae6f4a328bb50f14">tempest-TestExecuteWorkloadStabilizationStrategy-1000021673-project-admin</nova:user>
Oct 09 16:38:24 compute-0 nova_compute[117331]:         <nova:project uuid="a60acfe52e4b4b7f912654a59f0978b7">tempest-TestExecuteWorkloadStabilizationStrategy-1000021673</nova:project>
Oct 09 16:38:24 compute-0 nova_compute[117331]:       </nova:owner>
Oct 09 16:38:24 compute-0 nova_compute[117331]:       <nova:root type="image" uuid="b7d6e0af-25e4-4227-9dc6-43143898ceee"/>
Oct 09 16:38:24 compute-0 nova_compute[117331]:       <nova:ports>
Oct 09 16:38:24 compute-0 nova_compute[117331]:         <nova:port uuid="50e8afe7-f8b4-4c50-b99a-5c611b86fe7e">
Oct 09 16:38:24 compute-0 nova_compute[117331]:           <nova:ip type="fixed" address="10.100.0.11" ipVersion="4"/>
Oct 09 16:38:24 compute-0 nova_compute[117331]:         </nova:port>
Oct 09 16:38:24 compute-0 nova_compute[117331]:       </nova:ports>
Oct 09 16:38:24 compute-0 nova_compute[117331]:     </nova:instance>
Oct 09 16:38:24 compute-0 nova_compute[117331]:   </metadata>
Oct 09 16:38:24 compute-0 nova_compute[117331]:   <sysinfo type="smbios">
Oct 09 16:38:24 compute-0 nova_compute[117331]:     <system>
Oct 09 16:38:24 compute-0 nova_compute[117331]:       <entry name="manufacturer">RDO</entry>
Oct 09 16:38:24 compute-0 nova_compute[117331]:       <entry name="product">OpenStack Compute</entry>
Oct 09 16:38:24 compute-0 nova_compute[117331]:       <entry name="version">32.1.0-0.20251008143712.076498e.el10</entry>
Oct 09 16:38:24 compute-0 nova_compute[117331]:       <entry name="serial">56389b25-6fdb-41ad-ab62-b6872a816180</entry>
Oct 09 16:38:24 compute-0 nova_compute[117331]:       <entry name="uuid">56389b25-6fdb-41ad-ab62-b6872a816180</entry>
Oct 09 16:38:24 compute-0 nova_compute[117331]:       <entry name="family">Virtual Machine</entry>
Oct 09 16:38:24 compute-0 nova_compute[117331]:     </system>
Oct 09 16:38:24 compute-0 nova_compute[117331]:   </sysinfo>
Oct 09 16:38:24 compute-0 nova_compute[117331]:   <os>
Oct 09 16:38:24 compute-0 nova_compute[117331]:     <type arch="x86_64" machine="q35">hvm</type>
Oct 09 16:38:24 compute-0 nova_compute[117331]:     <boot dev="hd"/>
Oct 09 16:38:24 compute-0 nova_compute[117331]:     <smbios mode="sysinfo"/>
Oct 09 16:38:24 compute-0 nova_compute[117331]:   </os>
Oct 09 16:38:24 compute-0 nova_compute[117331]:   <features>
Oct 09 16:38:24 compute-0 nova_compute[117331]:     <acpi/>
Oct 09 16:38:24 compute-0 nova_compute[117331]:     <apic/>
Oct 09 16:38:24 compute-0 nova_compute[117331]:     <vmcoreinfo/>
Oct 09 16:38:24 compute-0 nova_compute[117331]:   </features>
Oct 09 16:38:24 compute-0 nova_compute[117331]:   <clock offset="utc">
Oct 09 16:38:24 compute-0 nova_compute[117331]:     <timer name="pit" tickpolicy="delay"/>
Oct 09 16:38:24 compute-0 nova_compute[117331]:     <timer name="rtc" tickpolicy="catchup"/>
Oct 09 16:38:24 compute-0 nova_compute[117331]:     <timer name="hpet" present="no"/>
Oct 09 16:38:24 compute-0 nova_compute[117331]:   </clock>
Oct 09 16:38:24 compute-0 nova_compute[117331]:   <cpu mode="host-model" match="exact">
Oct 09 16:38:24 compute-0 nova_compute[117331]:     <topology sockets="1" cores="1" threads="1"/>
Oct 09 16:38:24 compute-0 nova_compute[117331]:   </cpu>
Oct 09 16:38:24 compute-0 nova_compute[117331]:   <devices>
Oct 09 16:38:24 compute-0 nova_compute[117331]:     <disk type="file" device="disk">
Oct 09 16:38:24 compute-0 nova_compute[117331]:       <driver name="qemu" type="qcow2" cache="none"/>
Oct 09 16:38:24 compute-0 nova_compute[117331]:       <source file="/var/lib/nova/instances/56389b25-6fdb-41ad-ab62-b6872a816180/disk"/>
Oct 09 16:38:24 compute-0 nova_compute[117331]:       <target dev="vda" bus="virtio"/>
Oct 09 16:38:24 compute-0 nova_compute[117331]:     </disk>
Oct 09 16:38:24 compute-0 nova_compute[117331]:     <disk type="file" device="cdrom">
Oct 09 16:38:24 compute-0 nova_compute[117331]:       <driver name="qemu" type="raw" cache="none"/>
Oct 09 16:38:24 compute-0 nova_compute[117331]:       <source file="/var/lib/nova/instances/56389b25-6fdb-41ad-ab62-b6872a816180/disk.config"/>
Oct 09 16:38:24 compute-0 nova_compute[117331]:       <target dev="sda" bus="sata"/>
Oct 09 16:38:24 compute-0 nova_compute[117331]:     </disk>
Oct 09 16:38:24 compute-0 nova_compute[117331]:     <interface type="ethernet">
Oct 09 16:38:24 compute-0 nova_compute[117331]:       <mac address="fa:16:3e:48:bc:99"/>
Oct 09 16:38:24 compute-0 nova_compute[117331]:       <model type="virtio"/>
Oct 09 16:38:24 compute-0 nova_compute[117331]:       <driver name="vhost" rx_queue_size="512"/>
Oct 09 16:38:24 compute-0 nova_compute[117331]:       <mtu size="1442"/>
Oct 09 16:38:24 compute-0 nova_compute[117331]:       <target dev="tap50e8afe7-f8"/>
Oct 09 16:38:24 compute-0 nova_compute[117331]:     </interface>
Oct 09 16:38:24 compute-0 nova_compute[117331]:     <serial type="pty">
Oct 09 16:38:24 compute-0 nova_compute[117331]:       <log file="/var/lib/nova/instances/56389b25-6fdb-41ad-ab62-b6872a816180/console.log" append="off"/>
Oct 09 16:38:24 compute-0 nova_compute[117331]:     </serial>
Oct 09 16:38:24 compute-0 nova_compute[117331]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Oct 09 16:38:24 compute-0 nova_compute[117331]:     <video>
Oct 09 16:38:24 compute-0 nova_compute[117331]:       <model type="virtio"/>
Oct 09 16:38:24 compute-0 nova_compute[117331]:     </video>
Oct 09 16:38:24 compute-0 nova_compute[117331]:     <input type="tablet" bus="usb"/>
Oct 09 16:38:24 compute-0 nova_compute[117331]:     <rng model="virtio">
Oct 09 16:38:24 compute-0 nova_compute[117331]:       <backend model="random">/dev/urandom</backend>
Oct 09 16:38:24 compute-0 nova_compute[117331]:     </rng>
Oct 09 16:38:24 compute-0 nova_compute[117331]:     <controller type="pci" model="pcie-root"/>
Oct 09 16:38:24 compute-0 nova_compute[117331]:     <controller type="pci" model="pcie-root-port"/>
Oct 09 16:38:24 compute-0 nova_compute[117331]:     <controller type="pci" model="pcie-root-port"/>
Oct 09 16:38:24 compute-0 nova_compute[117331]:     <controller type="pci" model="pcie-root-port"/>
Oct 09 16:38:24 compute-0 nova_compute[117331]:     <controller type="pci" model="pcie-root-port"/>
Oct 09 16:38:24 compute-0 nova_compute[117331]:     <controller type="pci" model="pcie-root-port"/>
Oct 09 16:38:24 compute-0 nova_compute[117331]:     <controller type="pci" model="pcie-root-port"/>
Oct 09 16:38:24 compute-0 nova_compute[117331]:     <controller type="pci" model="pcie-root-port"/>
Oct 09 16:38:24 compute-0 nova_compute[117331]:     <controller type="pci" model="pcie-root-port"/>
Oct 09 16:38:24 compute-0 nova_compute[117331]:     <controller type="pci" model="pcie-root-port"/>
Oct 09 16:38:24 compute-0 nova_compute[117331]:     <controller type="pci" model="pcie-root-port"/>
Oct 09 16:38:24 compute-0 nova_compute[117331]:     <controller type="pci" model="pcie-root-port"/>
Oct 09 16:38:24 compute-0 nova_compute[117331]:     <controller type="pci" model="pcie-root-port"/>
Oct 09 16:38:24 compute-0 nova_compute[117331]:     <controller type="pci" model="pcie-root-port"/>
Oct 09 16:38:24 compute-0 nova_compute[117331]:     <controller type="pci" model="pcie-root-port"/>
Oct 09 16:38:24 compute-0 nova_compute[117331]:     <controller type="pci" model="pcie-root-port"/>
Oct 09 16:38:24 compute-0 nova_compute[117331]:     <controller type="pci" model="pcie-root-port"/>
Oct 09 16:38:24 compute-0 nova_compute[117331]:     <controller type="pci" model="pcie-root-port"/>
Oct 09 16:38:24 compute-0 nova_compute[117331]:     <controller type="pci" model="pcie-root-port"/>
Oct 09 16:38:24 compute-0 nova_compute[117331]:     <controller type="pci" model="pcie-root-port"/>
Oct 09 16:38:24 compute-0 nova_compute[117331]:     <controller type="pci" model="pcie-root-port"/>
Oct 09 16:38:24 compute-0 nova_compute[117331]:     <controller type="pci" model="pcie-root-port"/>
Oct 09 16:38:24 compute-0 nova_compute[117331]:     <controller type="pci" model="pcie-root-port"/>
Oct 09 16:38:24 compute-0 nova_compute[117331]:     <controller type="pci" model="pcie-root-port"/>
Oct 09 16:38:24 compute-0 nova_compute[117331]:     <controller type="pci" model="pcie-root-port"/>
Oct 09 16:38:24 compute-0 nova_compute[117331]:     <controller type="usb" index="0"/>
Oct 09 16:38:24 compute-0 nova_compute[117331]:     <memballoon model="virtio" autodeflate="on" freePageReporting="on">
Oct 09 16:38:24 compute-0 nova_compute[117331]:       <stats period="10"/>
Oct 09 16:38:24 compute-0 nova_compute[117331]:     </memballoon>
Oct 09 16:38:24 compute-0 nova_compute[117331]:   </devices>
Oct 09 16:38:24 compute-0 nova_compute[117331]: </domain>
Oct 09 16:38:24 compute-0 nova_compute[117331]:  _get_guest_xml /usr/lib/python3.12/site-packages/nova/virt/libvirt/driver.py:8052
Oct 09 16:38:24 compute-0 nova_compute[117331]: 2025-10-09 16:38:24.286 2 DEBUG nova.compute.manager [None req-92e9cdf9-ad12-4559-b171-00e0a82ffffa 685b4924c5a04af7ae6f4a328bb50f14 a60acfe52e4b4b7f912654a59f0978b7 - - default default] [instance: 56389b25-6fdb-41ad-ab62-b6872a816180] Preparing to wait for external event network-vif-plugged-50e8afe7-f8b4-4c50-b99a-5c611b86fe7e prepare_for_instance_event /usr/lib/python3.12/site-packages/nova/compute/manager.py:307
Oct 09 16:38:24 compute-0 nova_compute[117331]: 2025-10-09 16:38:24.286 2 DEBUG oslo_concurrency.lockutils [None req-92e9cdf9-ad12-4559-b171-00e0a82ffffa 685b4924c5a04af7ae6f4a328bb50f14 a60acfe52e4b4b7f912654a59f0978b7 - - default default] Acquiring lock "56389b25-6fdb-41ad-ab62-b6872a816180-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:405
Oct 09 16:38:24 compute-0 nova_compute[117331]: 2025-10-09 16:38:24.286 2 DEBUG oslo_concurrency.lockutils [None req-92e9cdf9-ad12-4559-b171-00e0a82ffffa 685b4924c5a04af7ae6f4a328bb50f14 a60acfe52e4b4b7f912654a59f0978b7 - - default default] Lock "56389b25-6fdb-41ad-ab62-b6872a816180-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:410
Oct 09 16:38:24 compute-0 nova_compute[117331]: 2025-10-09 16:38:24.287 2 DEBUG oslo_concurrency.lockutils [None req-92e9cdf9-ad12-4559-b171-00e0a82ffffa 685b4924c5a04af7ae6f4a328bb50f14 a60acfe52e4b4b7f912654a59f0978b7 - - default default] Lock "56389b25-6fdb-41ad-ab62-b6872a816180-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:424
Oct 09 16:38:24 compute-0 nova_compute[117331]: 2025-10-09 16:38:24.288 2 DEBUG nova.virt.libvirt.vif [None req-92e9cdf9-ad12-4559-b171-00e0a82ffffa 685b4924c5a04af7ae6f4a328bb50f14 a60acfe52e4b4b7f912654a59f0978b7 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,compute_id=1,config_drive='',created_at=2025-10-09T16:38:12Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description=None,display_name='tempest-TestExecuteWorkloadStabilizationStrategy-server-623877253',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testexecuteworkloadstabilizationstrategy-server-6238772',id=29,image_ref='b7d6e0af-25e4-4227-9dc6-43143898ceee',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='a60acfe52e4b4b7f912654a59f0978b7',ramdisk_id='',reservation_id='r-zwyn0omz',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='admin,member,reader,manager',image_base_image_ref='b7d6e0af-25e4-4227-9dc6-43143898ceee',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestExecuteWorkloadStabilizationStrategy-1000021673',owner_user_name='tempest-TestExecuteWorkloadStabilizationStrategy-1000021673-project-admin'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-09T16:38:17Z,user_data=None,user_id='685b4924c5a04af7ae6f4a328bb50f14',uuid=56389b25-6fdb-41ad-ab62-b6872a816180,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "50e8afe7-f8b4-4c50-b99a-5c611b86fe7e", "address": "fa:16:3e:48:bc:99", "network": {"id": "cf3aa351-4d26-41f3-8cb5-1ff2d3d995c8", "bridge": "br-int", "label": "tempest-TestExecuteWorkloadStabilizationStrategy-105109230-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "5b6d650a71ca47d58b10f5fd874c4898", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap50e8afe7-f8", "ovs_interfaceid": "50e8afe7-f8b4-4c50-b99a-5c611b86fe7e", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.12/site-packages/nova/virt/libvirt/vif.py:721
Oct 09 16:38:24 compute-0 nova_compute[117331]: 2025-10-09 16:38:24.288 2 DEBUG nova.network.os_vif_util [None req-92e9cdf9-ad12-4559-b171-00e0a82ffffa 685b4924c5a04af7ae6f4a328bb50f14 a60acfe52e4b4b7f912654a59f0978b7 - - default default] Converting VIF {"id": "50e8afe7-f8b4-4c50-b99a-5c611b86fe7e", "address": "fa:16:3e:48:bc:99", "network": {"id": "cf3aa351-4d26-41f3-8cb5-1ff2d3d995c8", "bridge": "br-int", "label": "tempest-TestExecuteWorkloadStabilizationStrategy-105109230-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "5b6d650a71ca47d58b10f5fd874c4898", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap50e8afe7-f8", "ovs_interfaceid": "50e8afe7-f8b4-4c50-b99a-5c611b86fe7e", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.12/site-packages/nova/network/os_vif_util.py:511
Oct 09 16:38:24 compute-0 nova_compute[117331]: 2025-10-09 16:38:24.289 2 DEBUG nova.network.os_vif_util [None req-92e9cdf9-ad12-4559-b171-00e0a82ffffa 685b4924c5a04af7ae6f4a328bb50f14 a60acfe52e4b4b7f912654a59f0978b7 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:48:bc:99,bridge_name='br-int',has_traffic_filtering=True,id=50e8afe7-f8b4-4c50-b99a-5c611b86fe7e,network=Network(cf3aa351-4d26-41f3-8cb5-1ff2d3d995c8),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap50e8afe7-f8') nova_to_osvif_vif /usr/lib/python3.12/site-packages/nova/network/os_vif_util.py:548
Oct 09 16:38:24 compute-0 nova_compute[117331]: 2025-10-09 16:38:24.290 2 DEBUG os_vif [None req-92e9cdf9-ad12-4559-b171-00e0a82ffffa 685b4924c5a04af7ae6f4a328bb50f14 a60acfe52e4b4b7f912654a59f0978b7 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:48:bc:99,bridge_name='br-int',has_traffic_filtering=True,id=50e8afe7-f8b4-4c50-b99a-5c611b86fe7e,network=Network(cf3aa351-4d26-41f3-8cb5-1ff2d3d995c8),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap50e8afe7-f8') plug /usr/lib/python3.12/site-packages/os_vif/__init__.py:76
Oct 09 16:38:24 compute-0 nova_compute[117331]: 2025-10-09 16:38:24.291 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 22 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 09 16:38:24 compute-0 nova_compute[117331]: 2025-10-09 16:38:24.291 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.12/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 09 16:38:24 compute-0 nova_compute[117331]: 2025-10-09 16:38:24.292 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.12/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 09 16:38:24 compute-0 nova_compute[117331]: 2025-10-09 16:38:24.293 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 22 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 09 16:38:24 compute-0 nova_compute[117331]: 2025-10-09 16:38:24.293 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbCreateCommand(_result=None, table=QoS, columns={'type': 'linux-noop', 'external_ids': {'id': 'a703f1ca-edba-53a8-8394-cbeb6d7571b6', '_type': 'linux-noop'}}, row=False) do_commit /usr/lib/python3.12/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 09 16:38:24 compute-0 nova_compute[117331]: 2025-10-09 16:38:24.295 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 09 16:38:24 compute-0 nova_compute[117331]: 2025-10-09 16:38:24.297 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 09 16:38:24 compute-0 nova_compute[117331]: 2025-10-09 16:38:24.300 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 22 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 09 16:38:24 compute-0 nova_compute[117331]: 2025-10-09 16:38:24.301 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap50e8afe7-f8, may_exist=True, interface_attrs={}) do_commit /usr/lib/python3.12/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 09 16:38:24 compute-0 nova_compute[117331]: 2025-10-09 16:38:24.301 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Port, record=tap50e8afe7-f8, col_values=(('qos', UUID('13cc15f9-ef9d-427c-8705-63a2d0220485')),), if_exists=True) do_commit /usr/lib/python3.12/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 09 16:38:24 compute-0 nova_compute[117331]: 2025-10-09 16:38:24.302 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=2): DbSetCommand(_result=None, table=Interface, record=tap50e8afe7-f8, col_values=(('external_ids', {'iface-id': '50e8afe7-f8b4-4c50-b99a-5c611b86fe7e', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:48:bc:99', 'vm-uuid': '56389b25-6fdb-41ad-ab62-b6872a816180'}),), if_exists=True) do_commit /usr/lib/python3.12/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 09 16:38:24 compute-0 nova_compute[117331]: 2025-10-09 16:38:24.304 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 09 16:38:24 compute-0 NetworkManager[1028]: <info>  [1760027904.3057] manager: (tap50e8afe7-f8): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/98)
Oct 09 16:38:24 compute-0 nova_compute[117331]: 2025-10-09 16:38:24.307 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:248
Oct 09 16:38:24 compute-0 nova_compute[117331]: 2025-10-09 16:38:24.313 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 09 16:38:24 compute-0 nova_compute[117331]: 2025-10-09 16:38:24.315 2 INFO os_vif [None req-92e9cdf9-ad12-4559-b171-00e0a82ffffa 685b4924c5a04af7ae6f4a328bb50f14 a60acfe52e4b4b7f912654a59f0978b7 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:48:bc:99,bridge_name='br-int',has_traffic_filtering=True,id=50e8afe7-f8b4-4c50-b99a-5c611b86fe7e,network=Network(cf3aa351-4d26-41f3-8cb5-1ff2d3d995c8),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap50e8afe7-f8')
Oct 09 16:38:25 compute-0 nova_compute[117331]: 2025-10-09 16:38:25.866 2 DEBUG nova.virt.libvirt.driver [None req-92e9cdf9-ad12-4559-b171-00e0a82ffffa 685b4924c5a04af7ae6f4a328bb50f14 a60acfe52e4b4b7f912654a59f0978b7 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.12/site-packages/nova/virt/libvirt/driver.py:13022
Oct 09 16:38:25 compute-0 nova_compute[117331]: 2025-10-09 16:38:25.867 2 DEBUG nova.virt.libvirt.driver [None req-92e9cdf9-ad12-4559-b171-00e0a82ffffa 685b4924c5a04af7ae6f4a328bb50f14 a60acfe52e4b4b7f912654a59f0978b7 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.12/site-packages/nova/virt/libvirt/driver.py:13022
Oct 09 16:38:25 compute-0 nova_compute[117331]: 2025-10-09 16:38:25.867 2 DEBUG nova.virt.libvirt.driver [None req-92e9cdf9-ad12-4559-b171-00e0a82ffffa 685b4924c5a04af7ae6f4a328bb50f14 a60acfe52e4b4b7f912654a59f0978b7 - - default default] No VIF found with MAC fa:16:3e:48:bc:99, not building metadata _build_interface_metadata /usr/lib/python3.12/site-packages/nova/virt/libvirt/driver.py:12998
Oct 09 16:38:25 compute-0 nova_compute[117331]: 2025-10-09 16:38:25.867 2 INFO nova.virt.libvirt.driver [None req-92e9cdf9-ad12-4559-b171-00e0a82ffffa 685b4924c5a04af7ae6f4a328bb50f14 a60acfe52e4b4b7f912654a59f0978b7 - - default default] [instance: 56389b25-6fdb-41ad-ab62-b6872a816180] Using config drive
Oct 09 16:38:26 compute-0 nova_compute[117331]: 2025-10-09 16:38:26.379 2 WARNING neutronclient.v2_0.client [None req-92e9cdf9-ad12-4559-b171-00e0a82ffffa 685b4924c5a04af7ae6f4a328bb50f14 a60acfe52e4b4b7f912654a59f0978b7 - - default default] The python binding code in neutronclient is deprecated in favor of OpenstackSDK, please use that as this will be removed in a future release.
Oct 09 16:38:26 compute-0 nova_compute[117331]: 2025-10-09 16:38:26.530 2 INFO nova.virt.libvirt.driver [None req-92e9cdf9-ad12-4559-b171-00e0a82ffffa 685b4924c5a04af7ae6f4a328bb50f14 a60acfe52e4b4b7f912654a59f0978b7 - - default default] [instance: 56389b25-6fdb-41ad-ab62-b6872a816180] Creating config drive at /var/lib/nova/instances/56389b25-6fdb-41ad-ab62-b6872a816180/disk.config
Oct 09 16:38:26 compute-0 nova_compute[117331]: 2025-10-09 16:38:26.536 2 DEBUG oslo_concurrency.processutils [None req-92e9cdf9-ad12-4559-b171-00e0a82ffffa 685b4924c5a04af7ae6f4a328bb50f14 a60acfe52e4b4b7f912654a59f0978b7 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/56389b25-6fdb-41ad-ab62-b6872a816180/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 32.1.0-0.20251008143712.076498e.el10 -quiet -J -r -V config-2 /tmp/tmpebjtvdct execute /usr/lib/python3.12/site-packages/oslo_concurrency/processutils.py:349
Oct 09 16:38:26 compute-0 nova_compute[117331]: 2025-10-09 16:38:26.668 2 DEBUG oslo_concurrency.processutils [None req-92e9cdf9-ad12-4559-b171-00e0a82ffffa 685b4924c5a04af7ae6f4a328bb50f14 a60acfe52e4b4b7f912654a59f0978b7 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/56389b25-6fdb-41ad-ab62-b6872a816180/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 32.1.0-0.20251008143712.076498e.el10 -quiet -J -r -V config-2 /tmp/tmpebjtvdct" returned: 0 in 0.132s execute /usr/lib/python3.12/site-packages/oslo_concurrency/processutils.py:372
Oct 09 16:38:26 compute-0 kernel: tap50e8afe7-f8: entered promiscuous mode
Oct 09 16:38:26 compute-0 NetworkManager[1028]: <info>  [1760027906.7334] manager: (tap50e8afe7-f8): new Tun device (/org/freedesktop/NetworkManager/Devices/99)
Oct 09 16:38:26 compute-0 nova_compute[117331]: 2025-10-09 16:38:26.733 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 09 16:38:26 compute-0 ovn_controller[19752]: 2025-10-09T16:38:26Z|00258|binding|INFO|Claiming lport 50e8afe7-f8b4-4c50-b99a-5c611b86fe7e for this chassis.
Oct 09 16:38:26 compute-0 ovn_controller[19752]: 2025-10-09T16:38:26Z|00259|binding|INFO|50e8afe7-f8b4-4c50-b99a-5c611b86fe7e: Claiming fa:16:3e:48:bc:99 10.100.0.11
Oct 09 16:38:26 compute-0 ovn_metadata_agent[28608]: 2025-10-09 16:38:26.742 28613 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:48:bc:99 10.100.0.11'], port_security=['fa:16:3e:48:bc:99 10.100.0.11'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.11/28', 'neutron:device_id': '56389b25-6fdb-41ad-ab62-b6872a816180', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-cf3aa351-4d26-41f3-8cb5-1ff2d3d995c8', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'a60acfe52e4b4b7f912654a59f0978b7', 'neutron:revision_number': '4', 'neutron:security_group_ids': 'c47ae156-dc3a-4eb3-8eb2-126be8ba5497', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=f19947fc-c2ef-4762-adfc-471bb24a038d, chassis=[<ovs.db.idl.Row object at 0x7f37b2576840>], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f37b2576840>], logical_port=50e8afe7-f8b4-4c50-b99a-5c611b86fe7e) old=Port_Binding(chassis=[]) matches /usr/lib/python3.12/site-packages/ovsdbapp/backend/ovs_idl/event.py:55
Oct 09 16:38:26 compute-0 ovn_metadata_agent[28608]: 2025-10-09 16:38:26.743 28613 INFO neutron.agent.ovn.metadata.agent [-] Port 50e8afe7-f8b4-4c50-b99a-5c611b86fe7e in datapath cf3aa351-4d26-41f3-8cb5-1ff2d3d995c8 bound to our chassis
Oct 09 16:38:26 compute-0 ovn_metadata_agent[28608]: 2025-10-09 16:38:26.744 28613 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network cf3aa351-4d26-41f3-8cb5-1ff2d3d995c8
Oct 09 16:38:26 compute-0 ovn_metadata_agent[28608]: 2025-10-09 16:38:26.765 139687 DEBUG oslo.privsep.daemon [-] privsep: reply[c4f6f921-8eee-4166-b2d8-29498f32f9dd]: (4, True) _call_back /usr/lib/python3.12/site-packages/oslo_privsep/daemon.py:515
Oct 09 16:38:26 compute-0 ovn_controller[19752]: 2025-10-09T16:38:26Z|00260|binding|INFO|Setting lport 50e8afe7-f8b4-4c50-b99a-5c611b86fe7e ovn-installed in OVS
Oct 09 16:38:26 compute-0 ovn_controller[19752]: 2025-10-09T16:38:26Z|00261|binding|INFO|Setting lport 50e8afe7-f8b4-4c50-b99a-5c611b86fe7e up in Southbound
Oct 09 16:38:26 compute-0 nova_compute[117331]: 2025-10-09 16:38:26.768 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 09 16:38:26 compute-0 systemd-udevd[153069]: Network interface NamePolicy= disabled on kernel command line.
Oct 09 16:38:26 compute-0 nova_compute[117331]: 2025-10-09 16:38:26.776 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 09 16:38:26 compute-0 NetworkManager[1028]: <info>  [1760027906.7865] device (tap50e8afe7-f8): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Oct 09 16:38:26 compute-0 NetworkManager[1028]: <info>  [1760027906.7886] device (tap50e8afe7-f8): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Oct 09 16:38:26 compute-0 systemd-machined[77487]: New machine qemu-23-instance-0000001d.
Oct 09 16:38:26 compute-0 ovn_metadata_agent[28608]: 2025-10-09 16:38:26.812 141048 DEBUG oslo.privsep.daemon [-] privsep: reply[42868059-bf6b-4eac-a62e-bec396d03082]: (4, None) _call_back /usr/lib/python3.12/site-packages/oslo_privsep/daemon.py:515
Oct 09 16:38:26 compute-0 systemd[1]: Started Virtual Machine qemu-23-instance-0000001d.
Oct 09 16:38:26 compute-0 ovn_metadata_agent[28608]: 2025-10-09 16:38:26.814 141048 DEBUG oslo.privsep.daemon [-] privsep: reply[33210151-cd48-46f6-876a-68a03ba2af3a]: (4, None) _call_back /usr/lib/python3.12/site-packages/oslo_privsep/daemon.py:515
Oct 09 16:38:26 compute-0 ovn_metadata_agent[28608]: 2025-10-09 16:38:26.844 141048 DEBUG oslo.privsep.daemon [-] privsep: reply[226d32f4-ad0f-4297-bd84-9689b464d5e8]: (4, None) _call_back /usr/lib/python3.12/site-packages/oslo_privsep/daemon.py:515
Oct 09 16:38:26 compute-0 ovn_metadata_agent[28608]: 2025-10-09 16:38:26.863 139687 DEBUG oslo.privsep.daemon [-] privsep: reply[80f95d5a-fad6-44b9-85d7-05c93c6a13a5]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapcf3aa351-41'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:2a:10:2c'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 10, 'tx_packets': 5, 'rx_bytes': 916, 'tx_bytes': 354, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 10, 'tx_packets': 5, 'rx_bytes': 916, 'tx_bytes': 354, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 73], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 282941, 'reachable_time': 24617, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 8, 'inoctets': 720, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 8, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 720, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 8, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 153080, 'error': None, 'target': 'ovnmeta-cf3aa351-4d26-41f3-8cb5-1ff2d3d995c8', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.12/site-packages/oslo_privsep/daemon.py:515
Oct 09 16:38:26 compute-0 ovn_metadata_agent[28608]: 2025-10-09 16:38:26.880 139687 DEBUG oslo.privsep.daemon [-] privsep: reply[674e6644-2a9d-449f-9005-427f83d6de27]: (4, ({'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tapcf3aa351-41'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 282955, 'tstamp': 282955}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 153085, 'error': None, 'target': 'ovnmeta-cf3aa351-4d26-41f3-8cb5-1ff2d3d995c8', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tapcf3aa351-41'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 282959, 'tstamp': 282959}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 153085, 'error': None, 'target': 'ovnmeta-cf3aa351-4d26-41f3-8cb5-1ff2d3d995c8', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.12/site-packages/oslo_privsep/daemon.py:515
Oct 09 16:38:26 compute-0 ovn_metadata_agent[28608]: 2025-10-09 16:38:26.881 28613 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapcf3aa351-40, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.12/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 09 16:38:26 compute-0 nova_compute[117331]: 2025-10-09 16:38:26.882 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 09 16:38:26 compute-0 nova_compute[117331]: 2025-10-09 16:38:26.883 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 09 16:38:26 compute-0 ovn_metadata_agent[28608]: 2025-10-09 16:38:26.883 28613 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapcf3aa351-40, may_exist=True, interface_attrs={}) do_commit /usr/lib/python3.12/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 09 16:38:26 compute-0 ovn_metadata_agent[28608]: 2025-10-09 16:38:26.884 28613 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.12/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 09 16:38:26 compute-0 ovn_metadata_agent[28608]: 2025-10-09 16:38:26.884 28613 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapcf3aa351-40, col_values=(('external_ids', {'iface-id': '7d2fd2fc-650d-4abc-8268-a14a8cdfd51e'}),), if_exists=True) do_commit /usr/lib/python3.12/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 09 16:38:26 compute-0 ovn_metadata_agent[28608]: 2025-10-09 16:38:26.884 28613 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.12/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 09 16:38:26 compute-0 ovn_metadata_agent[28608]: 2025-10-09 16:38:26.886 139687 DEBUG oslo.privsep.daemon [-] privsep: reply[1960e863-286f-4121-afe0-dbaa344bc745]: (4, '\nglobal\n    log         /dev/log local0 debug\n    log-tag     haproxy-metadata-proxy-cf3aa351-4d26-41f3-8cb5-1ff2d3d995c8\n    user        root\n    group       root\n    maxconn     1024\n    pidfile     /var/lib/neutron/external/pids/cf3aa351-4d26-41f3-8cb5-1ff2d3d995c8.pid.haproxy\n    daemon\n\ndefaults\n    log global\n    mode http\n    option httplog\n    option dontlognull\n    option http-server-close\n    option forwardfor\n    retries                 3\n    timeout http-request    30s\n    timeout connect         30s\n    timeout client          32s\n    timeout server          32s\n    timeout http-keep-alive 30s\n\nlisten listener\n    bind 169.254.169.254:80\n    \n    server metadata /var/lib/neutron/metadata_proxy\n\n    http-request add-header X-OVN-Network-ID cf3aa351-4d26-41f3-8cb5-1ff2d3d995c8\n') _call_back /usr/lib/python3.12/site-packages/oslo_privsep/daemon.py:515
Oct 09 16:38:27 compute-0 nova_compute[117331]: 2025-10-09 16:38:27.351 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 09 16:38:27 compute-0 nova_compute[117331]: 2025-10-09 16:38:27.511 2 DEBUG nova.compute.manager [req-f0a19907-82cb-4233-88ab-67fdb545bb40 req-a1fc2177-ba04-4315-ae54-49186ca71d85 ee15961ad2be4bada6c106ac4f69f422 c076c42271004e7aa86e84416e33f826 - - default default] [instance: 56389b25-6fdb-41ad-ab62-b6872a816180] Received event network-vif-plugged-50e8afe7-f8b4-4c50-b99a-5c611b86fe7e external_instance_event /usr/lib/python3.12/site-packages/nova/compute/manager.py:11812
Oct 09 16:38:27 compute-0 nova_compute[117331]: 2025-10-09 16:38:27.511 2 DEBUG oslo_concurrency.lockutils [req-f0a19907-82cb-4233-88ab-67fdb545bb40 req-a1fc2177-ba04-4315-ae54-49186ca71d85 ee15961ad2be4bada6c106ac4f69f422 c076c42271004e7aa86e84416e33f826 - - default default] Acquiring lock "56389b25-6fdb-41ad-ab62-b6872a816180-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:405
Oct 09 16:38:27 compute-0 nova_compute[117331]: 2025-10-09 16:38:27.512 2 DEBUG oslo_concurrency.lockutils [req-f0a19907-82cb-4233-88ab-67fdb545bb40 req-a1fc2177-ba04-4315-ae54-49186ca71d85 ee15961ad2be4bada6c106ac4f69f422 c076c42271004e7aa86e84416e33f826 - - default default] Lock "56389b25-6fdb-41ad-ab62-b6872a816180-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:410
Oct 09 16:38:27 compute-0 nova_compute[117331]: 2025-10-09 16:38:27.512 2 DEBUG oslo_concurrency.lockutils [req-f0a19907-82cb-4233-88ab-67fdb545bb40 req-a1fc2177-ba04-4315-ae54-49186ca71d85 ee15961ad2be4bada6c106ac4f69f422 c076c42271004e7aa86e84416e33f826 - - default default] Lock "56389b25-6fdb-41ad-ab62-b6872a816180-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:424
Oct 09 16:38:27 compute-0 nova_compute[117331]: 2025-10-09 16:38:27.512 2 DEBUG nova.compute.manager [req-f0a19907-82cb-4233-88ab-67fdb545bb40 req-a1fc2177-ba04-4315-ae54-49186ca71d85 ee15961ad2be4bada6c106ac4f69f422 c076c42271004e7aa86e84416e33f826 - - default default] [instance: 56389b25-6fdb-41ad-ab62-b6872a816180] Processing event network-vif-plugged-50e8afe7-f8b4-4c50-b99a-5c611b86fe7e _process_instance_event /usr/lib/python3.12/site-packages/nova/compute/manager.py:11572
Oct 09 16:38:27 compute-0 nova_compute[117331]: 2025-10-09 16:38:27.564 2 DEBUG nova.compute.manager [None req-92e9cdf9-ad12-4559-b171-00e0a82ffffa 685b4924c5a04af7ae6f4a328bb50f14 a60acfe52e4b4b7f912654a59f0978b7 - - default default] [instance: 56389b25-6fdb-41ad-ab62-b6872a816180] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.12/site-packages/nova/compute/manager.py:602
Oct 09 16:38:27 compute-0 nova_compute[117331]: 2025-10-09 16:38:27.569 2 DEBUG nova.virt.libvirt.driver [None req-92e9cdf9-ad12-4559-b171-00e0a82ffffa 685b4924c5a04af7ae6f4a328bb50f14 a60acfe52e4b4b7f912654a59f0978b7 - - default default] [instance: 56389b25-6fdb-41ad-ab62-b6872a816180] Guest created on hypervisor spawn /usr/lib/python3.12/site-packages/nova/virt/libvirt/driver.py:4816
Oct 09 16:38:27 compute-0 nova_compute[117331]: 2025-10-09 16:38:27.572 2 INFO nova.virt.libvirt.driver [-] [instance: 56389b25-6fdb-41ad-ab62-b6872a816180] Instance spawned successfully.
Oct 09 16:38:27 compute-0 nova_compute[117331]: 2025-10-09 16:38:27.573 2 DEBUG nova.virt.libvirt.driver [None req-92e9cdf9-ad12-4559-b171-00e0a82ffffa 685b4924c5a04af7ae6f4a328bb50f14 a60acfe52e4b4b7f912654a59f0978b7 - - default default] [instance: 56389b25-6fdb-41ad-ab62-b6872a816180] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.12/site-packages/nova/virt/libvirt/driver.py:985
Oct 09 16:38:28 compute-0 nova_compute[117331]: 2025-10-09 16:38:28.085 2 DEBUG nova.virt.libvirt.driver [None req-92e9cdf9-ad12-4559-b171-00e0a82ffffa 685b4924c5a04af7ae6f4a328bb50f14 a60acfe52e4b4b7f912654a59f0978b7 - - default default] [instance: 56389b25-6fdb-41ad-ab62-b6872a816180] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.12/site-packages/nova/virt/libvirt/driver.py:1014
Oct 09 16:38:28 compute-0 nova_compute[117331]: 2025-10-09 16:38:28.085 2 DEBUG nova.virt.libvirt.driver [None req-92e9cdf9-ad12-4559-b171-00e0a82ffffa 685b4924c5a04af7ae6f4a328bb50f14 a60acfe52e4b4b7f912654a59f0978b7 - - default default] [instance: 56389b25-6fdb-41ad-ab62-b6872a816180] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.12/site-packages/nova/virt/libvirt/driver.py:1014
Oct 09 16:38:28 compute-0 nova_compute[117331]: 2025-10-09 16:38:28.086 2 DEBUG nova.virt.libvirt.driver [None req-92e9cdf9-ad12-4559-b171-00e0a82ffffa 685b4924c5a04af7ae6f4a328bb50f14 a60acfe52e4b4b7f912654a59f0978b7 - - default default] [instance: 56389b25-6fdb-41ad-ab62-b6872a816180] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.12/site-packages/nova/virt/libvirt/driver.py:1014
Oct 09 16:38:28 compute-0 nova_compute[117331]: 2025-10-09 16:38:28.086 2 DEBUG nova.virt.libvirt.driver [None req-92e9cdf9-ad12-4559-b171-00e0a82ffffa 685b4924c5a04af7ae6f4a328bb50f14 a60acfe52e4b4b7f912654a59f0978b7 - - default default] [instance: 56389b25-6fdb-41ad-ab62-b6872a816180] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.12/site-packages/nova/virt/libvirt/driver.py:1014
Oct 09 16:38:28 compute-0 nova_compute[117331]: 2025-10-09 16:38:28.087 2 DEBUG nova.virt.libvirt.driver [None req-92e9cdf9-ad12-4559-b171-00e0a82ffffa 685b4924c5a04af7ae6f4a328bb50f14 a60acfe52e4b4b7f912654a59f0978b7 - - default default] [instance: 56389b25-6fdb-41ad-ab62-b6872a816180] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.12/site-packages/nova/virt/libvirt/driver.py:1014
Oct 09 16:38:28 compute-0 nova_compute[117331]: 2025-10-09 16:38:28.087 2 DEBUG nova.virt.libvirt.driver [None req-92e9cdf9-ad12-4559-b171-00e0a82ffffa 685b4924c5a04af7ae6f4a328bb50f14 a60acfe52e4b4b7f912654a59f0978b7 - - default default] [instance: 56389b25-6fdb-41ad-ab62-b6872a816180] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.12/site-packages/nova/virt/libvirt/driver.py:1014
Oct 09 16:38:28 compute-0 nova_compute[117331]: 2025-10-09 16:38:28.600 2 INFO nova.compute.manager [None req-92e9cdf9-ad12-4559-b171-00e0a82ffffa 685b4924c5a04af7ae6f4a328bb50f14 a60acfe52e4b4b7f912654a59f0978b7 - - default default] [instance: 56389b25-6fdb-41ad-ab62-b6872a816180] Took 10.00 seconds to spawn the instance on the hypervisor.
Oct 09 16:38:28 compute-0 nova_compute[117331]: 2025-10-09 16:38:28.601 2 DEBUG nova.compute.manager [None req-92e9cdf9-ad12-4559-b171-00e0a82ffffa 685b4924c5a04af7ae6f4a328bb50f14 a60acfe52e4b4b7f912654a59f0978b7 - - default default] [instance: 56389b25-6fdb-41ad-ab62-b6872a816180] Checking state _get_power_state /usr/lib/python3.12/site-packages/nova/compute/manager.py:1825
Oct 09 16:38:29 compute-0 nova_compute[117331]: 2025-10-09 16:38:29.134 2 INFO nova.compute.manager [None req-92e9cdf9-ad12-4559-b171-00e0a82ffffa 685b4924c5a04af7ae6f4a328bb50f14 a60acfe52e4b4b7f912654a59f0978b7 - - default default] [instance: 56389b25-6fdb-41ad-ab62-b6872a816180] Took 15.38 seconds to build instance.
Oct 09 16:38:29 compute-0 nova_compute[117331]: 2025-10-09 16:38:29.349 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 09 16:38:29 compute-0 nova_compute[117331]: 2025-10-09 16:38:29.569 2 DEBUG nova.compute.manager [req-0908af5a-4480-4ce5-a42c-e62105fd8d0d req-d8592da3-eca9-4975-9557-38f5f87c1a27 ee15961ad2be4bada6c106ac4f69f422 c076c42271004e7aa86e84416e33f826 - - default default] [instance: 56389b25-6fdb-41ad-ab62-b6872a816180] Received event network-vif-plugged-50e8afe7-f8b4-4c50-b99a-5c611b86fe7e external_instance_event /usr/lib/python3.12/site-packages/nova/compute/manager.py:11812
Oct 09 16:38:29 compute-0 nova_compute[117331]: 2025-10-09 16:38:29.570 2 DEBUG oslo_concurrency.lockutils [req-0908af5a-4480-4ce5-a42c-e62105fd8d0d req-d8592da3-eca9-4975-9557-38f5f87c1a27 ee15961ad2be4bada6c106ac4f69f422 c076c42271004e7aa86e84416e33f826 - - default default] Acquiring lock "56389b25-6fdb-41ad-ab62-b6872a816180-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:405
Oct 09 16:38:29 compute-0 nova_compute[117331]: 2025-10-09 16:38:29.570 2 DEBUG oslo_concurrency.lockutils [req-0908af5a-4480-4ce5-a42c-e62105fd8d0d req-d8592da3-eca9-4975-9557-38f5f87c1a27 ee15961ad2be4bada6c106ac4f69f422 c076c42271004e7aa86e84416e33f826 - - default default] Lock "56389b25-6fdb-41ad-ab62-b6872a816180-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:410
Oct 09 16:38:29 compute-0 nova_compute[117331]: 2025-10-09 16:38:29.570 2 DEBUG oslo_concurrency.lockutils [req-0908af5a-4480-4ce5-a42c-e62105fd8d0d req-d8592da3-eca9-4975-9557-38f5f87c1a27 ee15961ad2be4bada6c106ac4f69f422 c076c42271004e7aa86e84416e33f826 - - default default] Lock "56389b25-6fdb-41ad-ab62-b6872a816180-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:424
Oct 09 16:38:29 compute-0 nova_compute[117331]: 2025-10-09 16:38:29.570 2 DEBUG nova.compute.manager [req-0908af5a-4480-4ce5-a42c-e62105fd8d0d req-d8592da3-eca9-4975-9557-38f5f87c1a27 ee15961ad2be4bada6c106ac4f69f422 c076c42271004e7aa86e84416e33f826 - - default default] [instance: 56389b25-6fdb-41ad-ab62-b6872a816180] No waiting events found dispatching network-vif-plugged-50e8afe7-f8b4-4c50-b99a-5c611b86fe7e pop_instance_event /usr/lib/python3.12/site-packages/nova/compute/manager.py:344
Oct 09 16:38:29 compute-0 nova_compute[117331]: 2025-10-09 16:38:29.571 2 WARNING nova.compute.manager [req-0908af5a-4480-4ce5-a42c-e62105fd8d0d req-d8592da3-eca9-4975-9557-38f5f87c1a27 ee15961ad2be4bada6c106ac4f69f422 c076c42271004e7aa86e84416e33f826 - - default default] [instance: 56389b25-6fdb-41ad-ab62-b6872a816180] Received unexpected event network-vif-plugged-50e8afe7-f8b4-4c50-b99a-5c611b86fe7e for instance with vm_state active and task_state None.
Oct 09 16:38:29 compute-0 nova_compute[117331]: 2025-10-09 16:38:29.640 2 DEBUG oslo_concurrency.lockutils [None req-92e9cdf9-ad12-4559-b171-00e0a82ffffa 685b4924c5a04af7ae6f4a328bb50f14 a60acfe52e4b4b7f912654a59f0978b7 - - default default] Lock "56389b25-6fdb-41ad-ab62-b6872a816180" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 16.893s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:424
Oct 09 16:38:29 compute-0 podman[127775]: time="2025-10-09T16:38:29Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Oct 09 16:38:29 compute-0 podman[127775]: @ - - [09/Oct/2025:16:38:29 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 20749 "" "Go-http-client/1.1"
Oct 09 16:38:29 compute-0 podman[127775]: @ - - [09/Oct/2025:16:38:29 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 3495 "" "Go-http-client/1.1"
Oct 09 16:38:29 compute-0 podman[153095]: 2025-10-09 16:38:29.849705499 +0000 UTC m=+0.078206937 container health_status 270830b5167cafbab735d8118980f26987e673cd0fa464dd2202394830bcca66 (image=38.102.83.66:5001/podified-master-centos10/openstack-multipathd:watcher_latest, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, io.buildah.version=1.41.4, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 10 Base Image, tcib_build_tag=watcher_latest, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': '38.102.83.66:5001/podified-master-centos10/openstack-multipathd:watcher_latest', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd, org.label-schema.build-date=20251007, tcib_managed=true, config_id=multipathd)
Oct 09 16:38:31 compute-0 openstack_network_exporter[129925]: ERROR   16:38:31 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Oct 09 16:38:31 compute-0 openstack_network_exporter[129925]: ERROR   16:38:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Oct 09 16:38:31 compute-0 openstack_network_exporter[129925]: ERROR   16:38:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Oct 09 16:38:31 compute-0 openstack_network_exporter[129925]: ERROR   16:38:31 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Oct 09 16:38:31 compute-0 openstack_network_exporter[129925]: ERROR   16:38:31 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Oct 09 16:38:31 compute-0 nova_compute[117331]: 2025-10-09 16:38:31.812 2 DEBUG oslo_service.periodic_task [None req-92a13bed-c081-4c7b-87f3-bafebefb79f2 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.12/site-packages/oslo_service/periodic_task.py:210
Oct 09 16:38:31 compute-0 nova_compute[117331]: 2025-10-09 16:38:31.813 2 DEBUG oslo_service.periodic_task [None req-92a13bed-c081-4c7b-87f3-bafebefb79f2 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.12/site-packages/oslo_service/periodic_task.py:210
Oct 09 16:38:32 compute-0 nova_compute[117331]: 2025-10-09 16:38:32.353 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 09 16:38:34 compute-0 nova_compute[117331]: 2025-10-09 16:38:34.352 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 09 16:38:34 compute-0 podman[153117]: 2025-10-09 16:38:34.817042642 +0000 UTC m=+0.050745518 container health_status 86aa93e887899e54d383b0fc17fb3c5637680d1d8e146a130a557c837f0260fe (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter)
Oct 09 16:38:35 compute-0 nova_compute[117331]: 2025-10-09 16:38:35.306 2 DEBUG oslo_service.periodic_task [None req-92a13bed-c081-4c7b-87f3-bafebefb79f2 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.12/site-packages/oslo_service/periodic_task.py:210
Oct 09 16:38:35 compute-0 ovn_metadata_agent[28608]: 2025-10-09 16:38:35.335 28613 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:405
Oct 09 16:38:35 compute-0 ovn_metadata_agent[28608]: 2025-10-09 16:38:35.336 28613 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.000s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:410
Oct 09 16:38:35 compute-0 ovn_metadata_agent[28608]: 2025-10-09 16:38:35.336 28613 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:424
Oct 09 16:38:35 compute-0 nova_compute[117331]: 2025-10-09 16:38:35.819 2 DEBUG oslo_concurrency.lockutils [None req-92a13bed-c081-4c7b-87f3-bafebefb79f2 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:405
Oct 09 16:38:35 compute-0 nova_compute[117331]: 2025-10-09 16:38:35.821 2 DEBUG oslo_concurrency.lockutils [None req-92a13bed-c081-4c7b-87f3-bafebefb79f2 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:410
Oct 09 16:38:35 compute-0 nova_compute[117331]: 2025-10-09 16:38:35.821 2 DEBUG oslo_concurrency.lockutils [None req-92a13bed-c081-4c7b-87f3-bafebefb79f2 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.001s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:424
Oct 09 16:38:35 compute-0 nova_compute[117331]: 2025-10-09 16:38:35.821 2 DEBUG nova.compute.resource_tracker [None req-92a13bed-c081-4c7b-87f3-bafebefb79f2 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.12/site-packages/nova/compute/resource_tracker.py:937
Oct 09 16:38:36 compute-0 nova_compute[117331]: 2025-10-09 16:38:36.866 2 DEBUG oslo_concurrency.processutils [None req-92a13bed-c081-4c7b-87f3-bafebefb79f2 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/5963f55d-5e3f-4b07-86ef-554333267b8f/disk --force-share --output=json execute /usr/lib/python3.12/site-packages/oslo_concurrency/processutils.py:349
Oct 09 16:38:36 compute-0 nova_compute[117331]: 2025-10-09 16:38:36.926 2 DEBUG oslo_concurrency.processutils [None req-92a13bed-c081-4c7b-87f3-bafebefb79f2 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/5963f55d-5e3f-4b07-86ef-554333267b8f/disk --force-share --output=json" returned: 0 in 0.060s execute /usr/lib/python3.12/site-packages/oslo_concurrency/processutils.py:372
Oct 09 16:38:36 compute-0 nova_compute[117331]: 2025-10-09 16:38:36.928 2 DEBUG oslo_concurrency.processutils [None req-92a13bed-c081-4c7b-87f3-bafebefb79f2 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/5963f55d-5e3f-4b07-86ef-554333267b8f/disk --force-share --output=json execute /usr/lib/python3.12/site-packages/oslo_concurrency/processutils.py:349
Oct 09 16:38:36 compute-0 nova_compute[117331]: 2025-10-09 16:38:36.980 2 DEBUG oslo_concurrency.processutils [None req-92a13bed-c081-4c7b-87f3-bafebefb79f2 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/5963f55d-5e3f-4b07-86ef-554333267b8f/disk --force-share --output=json" returned: 0 in 0.052s execute /usr/lib/python3.12/site-packages/oslo_concurrency/processutils.py:372
Oct 09 16:38:36 compute-0 nova_compute[117331]: 2025-10-09 16:38:36.987 2 DEBUG oslo_concurrency.processutils [None req-92a13bed-c081-4c7b-87f3-bafebefb79f2 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/56389b25-6fdb-41ad-ab62-b6872a816180/disk --force-share --output=json execute /usr/lib/python3.12/site-packages/oslo_concurrency/processutils.py:349
Oct 09 16:38:37 compute-0 nova_compute[117331]: 2025-10-09 16:38:37.056 2 DEBUG oslo_concurrency.processutils [None req-92a13bed-c081-4c7b-87f3-bafebefb79f2 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/56389b25-6fdb-41ad-ab62-b6872a816180/disk --force-share --output=json" returned: 0 in 0.069s execute /usr/lib/python3.12/site-packages/oslo_concurrency/processutils.py:372
Oct 09 16:38:37 compute-0 nova_compute[117331]: 2025-10-09 16:38:37.057 2 DEBUG oslo_concurrency.processutils [None req-92a13bed-c081-4c7b-87f3-bafebefb79f2 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/56389b25-6fdb-41ad-ab62-b6872a816180/disk --force-share --output=json execute /usr/lib/python3.12/site-packages/oslo_concurrency/processutils.py:349
Oct 09 16:38:37 compute-0 nova_compute[117331]: 2025-10-09 16:38:37.132 2 DEBUG oslo_concurrency.processutils [None req-92a13bed-c081-4c7b-87f3-bafebefb79f2 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/56389b25-6fdb-41ad-ab62-b6872a816180/disk --force-share --output=json" returned: 0 in 0.074s execute /usr/lib/python3.12/site-packages/oslo_concurrency/processutils.py:372
Oct 09 16:38:37 compute-0 nova_compute[117331]: 2025-10-09 16:38:37.284 2 WARNING nova.virt.libvirt.driver [None req-92a13bed-c081-4c7b-87f3-bafebefb79f2 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Oct 09 16:38:37 compute-0 nova_compute[117331]: 2025-10-09 16:38:37.285 2 DEBUG oslo_concurrency.processutils [None req-92a13bed-c081-4c7b-87f3-bafebefb79f2 - - - - - -] Running cmd (subprocess): env LANG=C uptime execute /usr/lib/python3.12/site-packages/oslo_concurrency/processutils.py:349
Oct 09 16:38:37 compute-0 nova_compute[117331]: 2025-10-09 16:38:37.325 2 DEBUG oslo_concurrency.processutils [None req-92a13bed-c081-4c7b-87f3-bafebefb79f2 - - - - - -] CMD "env LANG=C uptime" returned: 0 in 0.040s execute /usr/lib/python3.12/site-packages/oslo_concurrency/processutils.py:372
Oct 09 16:38:37 compute-0 nova_compute[117331]: 2025-10-09 16:38:37.326 2 DEBUG nova.compute.resource_tracker [None req-92a13bed-c081-4c7b-87f3-bafebefb79f2 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=5817MB free_disk=73.21989059448242GB free_vcpus=6 pci_devices=[{"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.12/site-packages/nova/compute/resource_tracker.py:1136
Oct 09 16:38:37 compute-0 nova_compute[117331]: 2025-10-09 16:38:37.326 2 DEBUG oslo_concurrency.lockutils [None req-92a13bed-c081-4c7b-87f3-bafebefb79f2 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:405
Oct 09 16:38:37 compute-0 nova_compute[117331]: 2025-10-09 16:38:37.326 2 DEBUG oslo_concurrency.lockutils [None req-92a13bed-c081-4c7b-87f3-bafebefb79f2 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:410
Oct 09 16:38:37 compute-0 nova_compute[117331]: 2025-10-09 16:38:37.356 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 09 16:38:38 compute-0 nova_compute[117331]: 2025-10-09 16:38:38.410 2 DEBUG nova.compute.resource_tracker [None req-92a13bed-c081-4c7b-87f3-bafebefb79f2 - - - - - -] Instance 5963f55d-5e3f-4b07-86ef-554333267b8f actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.12/site-packages/nova/compute/resource_tracker.py:1740
Oct 09 16:38:38 compute-0 nova_compute[117331]: 2025-10-09 16:38:38.412 2 DEBUG nova.compute.resource_tracker [None req-92a13bed-c081-4c7b-87f3-bafebefb79f2 - - - - - -] Instance 56389b25-6fdb-41ad-ab62-b6872a816180 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.12/site-packages/nova/compute/resource_tracker.py:1740
Oct 09 16:38:38 compute-0 nova_compute[117331]: 2025-10-09 16:38:38.412 2 DEBUG nova.compute.resource_tracker [None req-92a13bed-c081-4c7b-87f3-bafebefb79f2 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 2 _report_final_resource_view /usr/lib/python3.12/site-packages/nova/compute/resource_tracker.py:1159
Oct 09 16:38:38 compute-0 nova_compute[117331]: 2025-10-09 16:38:38.413 2 DEBUG nova.compute.resource_tracker [None req-92a13bed-c081-4c7b-87f3-bafebefb79f2 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=768MB phys_disk=79GB used_disk=2GB total_vcpus=8 used_vcpus=2 pci_stats=[] stats={'failed_builds': '0', 'uptime': ' 16:38:37 up 47 min,  0 user,  load average: 0.65, 0.54, 0.48\n', 'num_instances': '2', 'num_vm_active': '2', 'num_task_None': '2', 'num_os_type_None': '2', 'num_proj_a60acfe52e4b4b7f912654a59f0978b7': '2', 'io_workload': '0'} _report_final_resource_view /usr/lib/python3.12/site-packages/nova/compute/resource_tracker.py:1168
Oct 09 16:38:38 compute-0 ovn_controller[19752]: 2025-10-09T16:38:38Z|00039|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:48:bc:99 10.100.0.11
Oct 09 16:38:38 compute-0 ovn_controller[19752]: 2025-10-09T16:38:38Z|00040|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:48:bc:99 10.100.0.11
Oct 09 16:38:38 compute-0 nova_compute[117331]: 2025-10-09 16:38:38.812 2 DEBUG nova.compute.provider_tree [None req-92a13bed-c081-4c7b-87f3-bafebefb79f2 - - - - - -] Inventory has not changed in ProviderTree for provider: 593051b8-2000-437f-a915-2616fc8b1671 update_inventory /usr/lib/python3.12/site-packages/nova/compute/provider_tree.py:180
Oct 09 16:38:39 compute-0 nova_compute[117331]: 2025-10-09 16:38:39.319 2 DEBUG nova.scheduler.client.report [None req-92a13bed-c081-4c7b-87f3-bafebefb79f2 - - - - - -] Inventory has not changed for provider 593051b8-2000-437f-a915-2616fc8b1671 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 79, 'reserved': 1, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.12/site-packages/nova/scheduler/client/report.py:958
Oct 09 16:38:39 compute-0 nova_compute[117331]: 2025-10-09 16:38:39.356 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 09 16:38:39 compute-0 nova_compute[117331]: 2025-10-09 16:38:39.829 2 DEBUG nova.compute.resource_tracker [None req-92a13bed-c081-4c7b-87f3-bafebefb79f2 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.12/site-packages/nova/compute/resource_tracker.py:1097
Oct 09 16:38:39 compute-0 nova_compute[117331]: 2025-10-09 16:38:39.830 2 DEBUG oslo_concurrency.lockutils [None req-92a13bed-c081-4c7b-87f3-bafebefb79f2 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 2.503s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:424
Oct 09 16:38:40 compute-0 podman[153167]: 2025-10-09 16:38:40.823289867 +0000 UTC m=+0.052518073 container health_status 0e966c7a283689daf23c5db5017f23f685ee836319240188a9f70ddd246563c5 (image=38.102.83.66:5001/podified-master-centos10/openstack-neutron-metadata-agent-ovn:watcher_latest, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=watcher_latest, io.buildah.version=1.41.4, managed_by=edpm_ansible, org.label-schema.build-date=20251007, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': '38.102.83.66:5001/podified-master-centos10/openstack-neutron-metadata-agent-ovn:watcher_latest', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.license=GPLv2, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.vendor=CentOS, container_name=ovn_metadata_agent, org.label-schema.schema-version=1.0)
Oct 09 16:38:40 compute-0 podman[153168]: 2025-10-09 16:38:40.824156335 +0000 UTC m=+0.051860293 container health_status 8227728d23f374cb9528be6ddf01dff38d8c792912a17b0d137932a18390b820 (image=38.102.83.66:5001/podified-master-centos10/openstack-iscsid:watcher_latest, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.build-date=20251007, org.label-schema.license=GPLv2, tcib_managed=true, org.label-schema.vendor=CentOS, tcib_build_tag=watcher_latest, config_id=iscsid, org.label-schema.name=CentOS Stream 10 Base Image, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': '38.102.83.66:5001/podified-master-centos10/openstack-iscsid:watcher_latest', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, container_name=iscsid, io.buildah.version=1.41.4, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team)
Oct 09 16:38:40 compute-0 nova_compute[117331]: 2025-10-09 16:38:40.826 2 DEBUG oslo_service.periodic_task [None req-92a13bed-c081-4c7b-87f3-bafebefb79f2 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.12/site-packages/oslo_service/periodic_task.py:210
Oct 09 16:38:40 compute-0 nova_compute[117331]: 2025-10-09 16:38:40.826 2 DEBUG oslo_service.periodic_task [None req-92a13bed-c081-4c7b-87f3-bafebefb79f2 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.12/site-packages/oslo_service/periodic_task.py:210
Oct 09 16:38:40 compute-0 nova_compute[117331]: 2025-10-09 16:38:40.826 2 DEBUG oslo_service.periodic_task [None req-92a13bed-c081-4c7b-87f3-bafebefb79f2 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.12/site-packages/oslo_service/periodic_task.py:210
Oct 09 16:38:40 compute-0 nova_compute[117331]: 2025-10-09 16:38:40.827 2 DEBUG nova.compute.manager [None req-92a13bed-c081-4c7b-87f3-bafebefb79f2 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.12/site-packages/nova/compute/manager.py:11228
Oct 09 16:38:41 compute-0 nova_compute[117331]: 2025-10-09 16:38:41.306 2 DEBUG oslo_service.periodic_task [None req-92a13bed-c081-4c7b-87f3-bafebefb79f2 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.12/site-packages/oslo_service/periodic_task.py:210
Oct 09 16:38:42 compute-0 nova_compute[117331]: 2025-10-09 16:38:42.302 2 DEBUG oslo_service.periodic_task [None req-92a13bed-c081-4c7b-87f3-bafebefb79f2 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.12/site-packages/oslo_service/periodic_task.py:210
Oct 09 16:38:42 compute-0 nova_compute[117331]: 2025-10-09 16:38:42.358 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 09 16:38:43 compute-0 nova_compute[117331]: 2025-10-09 16:38:43.306 2 DEBUG oslo_service.periodic_task [None req-92a13bed-c081-4c7b-87f3-bafebefb79f2 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.12/site-packages/oslo_service/periodic_task.py:210
Oct 09 16:38:44 compute-0 nova_compute[117331]: 2025-10-09 16:38:44.358 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 09 16:38:47 compute-0 nova_compute[117331]: 2025-10-09 16:38:47.361 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 09 16:38:49 compute-0 nova_compute[117331]: 2025-10-09 16:38:49.361 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 09 16:38:49 compute-0 podman[153202]: 2025-10-09 16:38:49.830202168 +0000 UTC m=+0.068590662 container health_status 64fa251112d01abb34ec1bb8e22019815ddcaaaa391ce5ec91517147ac5a9994 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.buildah.version=1.33.7, io.openshift.expose-services=, name=ubi9-minimal, release=1755695350, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., config_id=edpm, maintainer=Red Hat, Inc., com.redhat.component=ubi9-minimal-container, io.openshift.tags=minimal rhel9, architecture=x86_64, container_name=openstack_network_exporter, vendor=Red Hat, Inc., config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, version=9.6, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., url=https://catalog.redhat.com/en/search?searchType=containers, build-date=2025-08-20T13:12:41, managed_by=edpm_ansible, distribution-scope=public, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, vcs-type=git)
Oct 09 16:38:49 compute-0 podman[153203]: 2025-10-09 16:38:49.899666847 +0000 UTC m=+0.124009837 container health_status d615e4f24ff62fbb14be4a63e6da568ec5b22309773c1c66103b6b4de64a8a2f (image=38.102.83.66:5001/podified-master-centos10/openstack-ovn-controller:watcher_latest, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.build-date=20251007, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': '38.102.83.66:5001/podified-master-centos10/openstack-ovn-controller:watcher_latest', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, tcib_build_tag=watcher_latest, config_id=ovn_controller, tcib_managed=true, io.buildah.version=1.41.4, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.schema-version=1.0)
Oct 09 16:38:52 compute-0 nova_compute[117331]: 2025-10-09 16:38:52.363 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 09 16:38:54 compute-0 nova_compute[117331]: 2025-10-09 16:38:54.363 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 09 16:38:56 compute-0 nova_compute[117331]: 2025-10-09 16:38:56.603 2 DEBUG nova.virt.libvirt.driver [None req-b13a42ce-7734-4c51-aa6a-4687cb3b7ca3 005258eefa5345409efe13f6a1de22ab c076c42271004e7aa86e84416e33f826 - - default default] [instance: 5963f55d-5e3f-4b07-86ef-554333267b8f] Check if temp file /var/lib/nova/instances/tmp9ok_jhlg exists to indicate shared storage is being used for migration. Exists? False _check_shared_storage_test_file /usr/lib/python3.12/site-packages/nova/virt/libvirt/driver.py:10968
Oct 09 16:38:56 compute-0 nova_compute[117331]: 2025-10-09 16:38:56.608 2 DEBUG nova.compute.manager [None req-b13a42ce-7734-4c51-aa6a-4687cb3b7ca3 005258eefa5345409efe13f6a1de22ab c076c42271004e7aa86e84416e33f826 - - default default] source check data is LibvirtLiveMigrateData(bdms=<?>,block_migration=True,disk_available_mb=74752,disk_over_commit=<?>,dst_cpu_shared_set_info=set([]),dst_numa_info=<?>,dst_supports_mdev_live_migration=True,dst_supports_numa_live_migration=<?>,dst_wants_file_backed_memory=False,file_backed_memory_discard=<?>,filename='tmp9ok_jhlg',graphics_listen_addr_spice=127.0.0.1,graphics_listen_addr_vnc=::,image_type='qcow2',instance_relative_path='5963f55d-5e3f-4b07-86ef-554333267b8f',is_shared_block_storage=False,is_shared_instance_path=False,is_volume_backed=False,migration=<?>,old_vol_attachment_ids=<?>,pci_dev_map_src_dst=<?>,serial_listen_addr=None,serial_listen_ports=<?>,source_mdev_types=<?>,src_supports_native_luks=<?>,src_supports_numa_live_migration=<?>,supported_perf_events=<?>,target_connect_addr=<?>,target_mdevs=<?>,vifs=[VIFMigrateData],wait_for_vif_plugged=<?>) check_can_live_migrate_source /usr/lib/python3.12/site-packages/nova/compute/manager.py:9294
Oct 09 16:38:56 compute-0 nova_compute[117331]: 2025-10-09 16:38:56.797 2 DEBUG nova.virt.libvirt.driver [None req-dc1aa2ae-bf27-4733-9d1e-fa72df9f61f8 005258eefa5345409efe13f6a1de22ab c076c42271004e7aa86e84416e33f826 - - default default] [instance: 56389b25-6fdb-41ad-ab62-b6872a816180] Check if temp file /var/lib/nova/instances/tmpehauqfuv exists to indicate shared storage is being used for migration. Exists? False _check_shared_storage_test_file /usr/lib/python3.12/site-packages/nova/virt/libvirt/driver.py:10968
Oct 09 16:38:56 compute-0 nova_compute[117331]: 2025-10-09 16:38:56.801 2 DEBUG nova.compute.manager [None req-dc1aa2ae-bf27-4733-9d1e-fa72df9f61f8 005258eefa5345409efe13f6a1de22ab c076c42271004e7aa86e84416e33f826 - - default default] source check data is LibvirtLiveMigrateData(bdms=<?>,block_migration=True,disk_available_mb=74752,disk_over_commit=<?>,dst_cpu_shared_set_info=set([]),dst_numa_info=<?>,dst_supports_mdev_live_migration=True,dst_supports_numa_live_migration=<?>,dst_wants_file_backed_memory=False,file_backed_memory_discard=<?>,filename='tmpehauqfuv',graphics_listen_addr_spice=127.0.0.1,graphics_listen_addr_vnc=::,image_type='qcow2',instance_relative_path='56389b25-6fdb-41ad-ab62-b6872a816180',is_shared_block_storage=False,is_shared_instance_path=False,is_volume_backed=False,migration=<?>,old_vol_attachment_ids=<?>,pci_dev_map_src_dst=<?>,serial_listen_addr=None,serial_listen_ports=<?>,source_mdev_types=<?>,src_supports_native_luks=<?>,src_supports_numa_live_migration=<?>,supported_perf_events=<?>,target_connect_addr=<?>,target_mdevs=<?>,vifs=[VIFMigrateData],wait_for_vif_plugged=<?>) check_can_live_migrate_source /usr/lib/python3.12/site-packages/nova/compute/manager.py:9294
Oct 09 16:38:56 compute-0 ovn_controller[19752]: 2025-10-09T16:38:56Z|00262|memory_trim|INFO|Detected inactivity (last active 30007 ms ago): trimming memory
Oct 09 16:38:57 compute-0 nova_compute[117331]: 2025-10-09 16:38:57.364 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 09 16:38:59 compute-0 nova_compute[117331]: 2025-10-09 16:38:59.366 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 09 16:38:59 compute-0 podman[127775]: time="2025-10-09T16:38:59Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Oct 09 16:38:59 compute-0 podman[127775]: @ - - [09/Oct/2025:16:38:59 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 20749 "" "Go-http-client/1.1"
Oct 09 16:38:59 compute-0 podman[127775]: @ - - [09/Oct/2025:16:38:59 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 3497 "" "Go-http-client/1.1"
Oct 09 16:39:00 compute-0 podman[153249]: 2025-10-09 16:39:00.826925249 +0000 UTC m=+0.052856234 container health_status 270830b5167cafbab735d8118980f26987e673cd0fa464dd2202394830bcca66 (image=38.102.83.66:5001/podified-master-centos10/openstack-multipathd:watcher_latest, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, config_id=multipathd, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_build_tag=watcher_latest, tcib_managed=true, org.label-schema.build-date=20251007, org.label-schema.name=CentOS Stream 10 Base Image, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': '38.102.83.66:5001/podified-master-centos10/openstack-multipathd:watcher_latest', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, io.buildah.version=1.41.4, org.label-schema.vendor=CentOS)
Oct 09 16:39:01 compute-0 openstack_network_exporter[129925]: ERROR   16:39:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Oct 09 16:39:01 compute-0 openstack_network_exporter[129925]: ERROR   16:39:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Oct 09 16:39:01 compute-0 openstack_network_exporter[129925]: ERROR   16:39:01 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Oct 09 16:39:01 compute-0 openstack_network_exporter[129925]: ERROR   16:39:01 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Oct 09 16:39:01 compute-0 openstack_network_exporter[129925]: 
Oct 09 16:39:01 compute-0 openstack_network_exporter[129925]: ERROR   16:39:01 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Oct 09 16:39:01 compute-0 openstack_network_exporter[129925]: 
Oct 09 16:39:01 compute-0 nova_compute[117331]: 2025-10-09 16:39:01.700 2 DEBUG oslo_concurrency.processutils [None req-dc1aa2ae-bf27-4733-9d1e-fa72df9f61f8 005258eefa5345409efe13f6a1de22ab c076c42271004e7aa86e84416e33f826 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/56389b25-6fdb-41ad-ab62-b6872a816180/disk --force-share --output=json execute /usr/lib/python3.12/site-packages/oslo_concurrency/processutils.py:349
Oct 09 16:39:01 compute-0 nova_compute[117331]: 2025-10-09 16:39:01.764 2 DEBUG oslo_concurrency.processutils [None req-dc1aa2ae-bf27-4733-9d1e-fa72df9f61f8 005258eefa5345409efe13f6a1de22ab c076c42271004e7aa86e84416e33f826 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/56389b25-6fdb-41ad-ab62-b6872a816180/disk --force-share --output=json" returned: 0 in 0.064s execute /usr/lib/python3.12/site-packages/oslo_concurrency/processutils.py:372
Oct 09 16:39:01 compute-0 nova_compute[117331]: 2025-10-09 16:39:01.765 2 DEBUG oslo_concurrency.processutils [None req-dc1aa2ae-bf27-4733-9d1e-fa72df9f61f8 005258eefa5345409efe13f6a1de22ab c076c42271004e7aa86e84416e33f826 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/56389b25-6fdb-41ad-ab62-b6872a816180/disk --force-share --output=json execute /usr/lib/python3.12/site-packages/oslo_concurrency/processutils.py:349
Oct 09 16:39:01 compute-0 nova_compute[117331]: 2025-10-09 16:39:01.823 2 DEBUG oslo_concurrency.processutils [None req-dc1aa2ae-bf27-4733-9d1e-fa72df9f61f8 005258eefa5345409efe13f6a1de22ab c076c42271004e7aa86e84416e33f826 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/56389b25-6fdb-41ad-ab62-b6872a816180/disk --force-share --output=json" returned: 0 in 0.057s execute /usr/lib/python3.12/site-packages/oslo_concurrency/processutils.py:372
Oct 09 16:39:01 compute-0 nova_compute[117331]: 2025-10-09 16:39:01.824 2 DEBUG nova.compute.manager [None req-dc1aa2ae-bf27-4733-9d1e-fa72df9f61f8 005258eefa5345409efe13f6a1de22ab c076c42271004e7aa86e84416e33f826 - - default default] [instance: 56389b25-6fdb-41ad-ab62-b6872a816180] Preparing to wait for external event network-vif-plugged-50e8afe7-f8b4-4c50-b99a-5c611b86fe7e prepare_for_instance_event /usr/lib/python3.12/site-packages/nova/compute/manager.py:307
Oct 09 16:39:01 compute-0 nova_compute[117331]: 2025-10-09 16:39:01.824 2 DEBUG oslo_concurrency.lockutils [None req-dc1aa2ae-bf27-4733-9d1e-fa72df9f61f8 005258eefa5345409efe13f6a1de22ab c076c42271004e7aa86e84416e33f826 - - default default] Acquiring lock "56389b25-6fdb-41ad-ab62-b6872a816180-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:405
Oct 09 16:39:01 compute-0 nova_compute[117331]: 2025-10-09 16:39:01.824 2 DEBUG oslo_concurrency.lockutils [None req-dc1aa2ae-bf27-4733-9d1e-fa72df9f61f8 005258eefa5345409efe13f6a1de22ab c076c42271004e7aa86e84416e33f826 - - default default] Lock "56389b25-6fdb-41ad-ab62-b6872a816180-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:410
Oct 09 16:39:01 compute-0 nova_compute[117331]: 2025-10-09 16:39:01.825 2 DEBUG oslo_concurrency.lockutils [None req-dc1aa2ae-bf27-4733-9d1e-fa72df9f61f8 005258eefa5345409efe13f6a1de22ab c076c42271004e7aa86e84416e33f826 - - default default] Lock "56389b25-6fdb-41ad-ab62-b6872a816180-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:424
Oct 09 16:39:01 compute-0 anacron[106873]: Job `cron.weekly' started
Oct 09 16:39:01 compute-0 anacron[106873]: Job `cron.weekly' terminated
Oct 09 16:39:02 compute-0 nova_compute[117331]: 2025-10-09 16:39:02.368 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 09 16:39:04 compute-0 nova_compute[117331]: 2025-10-09 16:39:04.369 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 09 16:39:05 compute-0 podman[153278]: 2025-10-09 16:39:05.823679183 +0000 UTC m=+0.061323463 container health_status 86aa93e887899e54d383b0fc17fb3c5637680d1d8e146a130a557c837f0260fe (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible)
Oct 09 16:39:07 compute-0 nova_compute[117331]: 2025-10-09 16:39:07.368 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 09 16:39:07 compute-0 nova_compute[117331]: 2025-10-09 16:39:07.885 2 DEBUG nova.compute.manager [req-3257575d-ddf1-4392-9cdb-76dddbe7d591 req-ebee517c-aa9b-41c0-aaff-03a3d5c95cb7 ee15961ad2be4bada6c106ac4f69f422 c076c42271004e7aa86e84416e33f826 - - default default] [instance: 56389b25-6fdb-41ad-ab62-b6872a816180] Received event network-vif-unplugged-50e8afe7-f8b4-4c50-b99a-5c611b86fe7e external_instance_event /usr/lib/python3.12/site-packages/nova/compute/manager.py:11812
Oct 09 16:39:07 compute-0 nova_compute[117331]: 2025-10-09 16:39:07.886 2 DEBUG oslo_concurrency.lockutils [req-3257575d-ddf1-4392-9cdb-76dddbe7d591 req-ebee517c-aa9b-41c0-aaff-03a3d5c95cb7 ee15961ad2be4bada6c106ac4f69f422 c076c42271004e7aa86e84416e33f826 - - default default] Acquiring lock "56389b25-6fdb-41ad-ab62-b6872a816180-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:405
Oct 09 16:39:07 compute-0 nova_compute[117331]: 2025-10-09 16:39:07.887 2 DEBUG oslo_concurrency.lockutils [req-3257575d-ddf1-4392-9cdb-76dddbe7d591 req-ebee517c-aa9b-41c0-aaff-03a3d5c95cb7 ee15961ad2be4bada6c106ac4f69f422 c076c42271004e7aa86e84416e33f826 - - default default] Lock "56389b25-6fdb-41ad-ab62-b6872a816180-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:410
Oct 09 16:39:07 compute-0 nova_compute[117331]: 2025-10-09 16:39:07.887 2 DEBUG oslo_concurrency.lockutils [req-3257575d-ddf1-4392-9cdb-76dddbe7d591 req-ebee517c-aa9b-41c0-aaff-03a3d5c95cb7 ee15961ad2be4bada6c106ac4f69f422 c076c42271004e7aa86e84416e33f826 - - default default] Lock "56389b25-6fdb-41ad-ab62-b6872a816180-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:424
Oct 09 16:39:07 compute-0 nova_compute[117331]: 2025-10-09 16:39:07.888 2 DEBUG nova.compute.manager [req-3257575d-ddf1-4392-9cdb-76dddbe7d591 req-ebee517c-aa9b-41c0-aaff-03a3d5c95cb7 ee15961ad2be4bada6c106ac4f69f422 c076c42271004e7aa86e84416e33f826 - - default default] [instance: 56389b25-6fdb-41ad-ab62-b6872a816180] No event matching network-vif-unplugged-50e8afe7-f8b4-4c50-b99a-5c611b86fe7e in dict_keys([('network-vif-plugged', '50e8afe7-f8b4-4c50-b99a-5c611b86fe7e')]) pop_instance_event /usr/lib/python3.12/site-packages/nova/compute/manager.py:349
Oct 09 16:39:07 compute-0 nova_compute[117331]: 2025-10-09 16:39:07.889 2 DEBUG nova.compute.manager [req-3257575d-ddf1-4392-9cdb-76dddbe7d591 req-ebee517c-aa9b-41c0-aaff-03a3d5c95cb7 ee15961ad2be4bada6c106ac4f69f422 c076c42271004e7aa86e84416e33f826 - - default default] [instance: 56389b25-6fdb-41ad-ab62-b6872a816180] Received event network-vif-unplugged-50e8afe7-f8b4-4c50-b99a-5c611b86fe7e for instance with task_state migrating. _process_instance_event /usr/lib/python3.12/site-packages/nova/compute/manager.py:11590
Oct 09 16:39:08 compute-0 ovn_metadata_agent[28608]: 2025-10-09 16:39:08.399 28613 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=29, options={'arp_ns_explicit_output': 'true', 'fdb_removal_limit': '0', 'ignore_lsp_down': 'false', 'mac_binding_removal_limit': '0', 'mac_prefix': '4a:65:5f', 'max_tunid': '16711680', 'northd_internal_version': '24.09.4-20.37.0-77.8', 'svc_monitor_mac': '32:24:1e:e3:5d:e8'}, ipsec=False) old=SB_Global(nb_cfg=28) matches /usr/lib/python3.12/site-packages/ovsdbapp/backend/ovs_idl/event.py:55
Oct 09 16:39:08 compute-0 nova_compute[117331]: 2025-10-09 16:39:08.400 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 09 16:39:08 compute-0 ovn_metadata_agent[28608]: 2025-10-09 16:39:08.401 28613 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 3 seconds run /usr/lib/python3.12/site-packages/neutron/agent/ovn/metadata/agent.py:367
Oct 09 16:39:09 compute-0 nova_compute[117331]: 2025-10-09 16:39:09.372 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 09 16:39:09 compute-0 nova_compute[117331]: 2025-10-09 16:39:09.847 2 INFO nova.compute.manager [None req-dc1aa2ae-bf27-4733-9d1e-fa72df9f61f8 005258eefa5345409efe13f6a1de22ab c076c42271004e7aa86e84416e33f826 - - default default] [instance: 56389b25-6fdb-41ad-ab62-b6872a816180] Took 8.02 seconds for pre_live_migration on destination host compute-1.ctlplane.example.com.
Oct 09 16:39:09 compute-0 nova_compute[117331]: 2025-10-09 16:39:09.992 2 DEBUG nova.compute.manager [req-737bbe22-b3c5-433b-80a2-8737d08a321c req-5a1bf92a-f32a-45d2-8277-e9a1c4eada93 ee15961ad2be4bada6c106ac4f69f422 c076c42271004e7aa86e84416e33f826 - - default default] [instance: 56389b25-6fdb-41ad-ab62-b6872a816180] Received event network-vif-plugged-50e8afe7-f8b4-4c50-b99a-5c611b86fe7e external_instance_event /usr/lib/python3.12/site-packages/nova/compute/manager.py:11812
Oct 09 16:39:09 compute-0 nova_compute[117331]: 2025-10-09 16:39:09.992 2 DEBUG oslo_concurrency.lockutils [req-737bbe22-b3c5-433b-80a2-8737d08a321c req-5a1bf92a-f32a-45d2-8277-e9a1c4eada93 ee15961ad2be4bada6c106ac4f69f422 c076c42271004e7aa86e84416e33f826 - - default default] Acquiring lock "56389b25-6fdb-41ad-ab62-b6872a816180-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:405
Oct 09 16:39:09 compute-0 nova_compute[117331]: 2025-10-09 16:39:09.992 2 DEBUG oslo_concurrency.lockutils [req-737bbe22-b3c5-433b-80a2-8737d08a321c req-5a1bf92a-f32a-45d2-8277-e9a1c4eada93 ee15961ad2be4bada6c106ac4f69f422 c076c42271004e7aa86e84416e33f826 - - default default] Lock "56389b25-6fdb-41ad-ab62-b6872a816180-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:410
Oct 09 16:39:09 compute-0 nova_compute[117331]: 2025-10-09 16:39:09.993 2 DEBUG oslo_concurrency.lockutils [req-737bbe22-b3c5-433b-80a2-8737d08a321c req-5a1bf92a-f32a-45d2-8277-e9a1c4eada93 ee15961ad2be4bada6c106ac4f69f422 c076c42271004e7aa86e84416e33f826 - - default default] Lock "56389b25-6fdb-41ad-ab62-b6872a816180-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:424
Oct 09 16:39:09 compute-0 nova_compute[117331]: 2025-10-09 16:39:09.993 2 DEBUG nova.compute.manager [req-737bbe22-b3c5-433b-80a2-8737d08a321c req-5a1bf92a-f32a-45d2-8277-e9a1c4eada93 ee15961ad2be4bada6c106ac4f69f422 c076c42271004e7aa86e84416e33f826 - - default default] [instance: 56389b25-6fdb-41ad-ab62-b6872a816180] Processing event network-vif-plugged-50e8afe7-f8b4-4c50-b99a-5c611b86fe7e _process_instance_event /usr/lib/python3.12/site-packages/nova/compute/manager.py:11572
Oct 09 16:39:09 compute-0 nova_compute[117331]: 2025-10-09 16:39:09.993 2 DEBUG nova.compute.manager [req-737bbe22-b3c5-433b-80a2-8737d08a321c req-5a1bf92a-f32a-45d2-8277-e9a1c4eada93 ee15961ad2be4bada6c106ac4f69f422 c076c42271004e7aa86e84416e33f826 - - default default] [instance: 56389b25-6fdb-41ad-ab62-b6872a816180] Received event network-changed-50e8afe7-f8b4-4c50-b99a-5c611b86fe7e external_instance_event /usr/lib/python3.12/site-packages/nova/compute/manager.py:11812
Oct 09 16:39:09 compute-0 nova_compute[117331]: 2025-10-09 16:39:09.993 2 DEBUG nova.compute.manager [req-737bbe22-b3c5-433b-80a2-8737d08a321c req-5a1bf92a-f32a-45d2-8277-e9a1c4eada93 ee15961ad2be4bada6c106ac4f69f422 c076c42271004e7aa86e84416e33f826 - - default default] [instance: 56389b25-6fdb-41ad-ab62-b6872a816180] Refreshing instance network info cache due to event network-changed-50e8afe7-f8b4-4c50-b99a-5c611b86fe7e. external_instance_event /usr/lib/python3.12/site-packages/nova/compute/manager.py:11817
Oct 09 16:39:09 compute-0 nova_compute[117331]: 2025-10-09 16:39:09.994 2 DEBUG oslo_concurrency.lockutils [req-737bbe22-b3c5-433b-80a2-8737d08a321c req-5a1bf92a-f32a-45d2-8277-e9a1c4eada93 ee15961ad2be4bada6c106ac4f69f422 c076c42271004e7aa86e84416e33f826 - - default default] Acquiring lock "refresh_cache-56389b25-6fdb-41ad-ab62-b6872a816180" lock /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:313
Oct 09 16:39:09 compute-0 nova_compute[117331]: 2025-10-09 16:39:09.994 2 DEBUG oslo_concurrency.lockutils [req-737bbe22-b3c5-433b-80a2-8737d08a321c req-5a1bf92a-f32a-45d2-8277-e9a1c4eada93 ee15961ad2be4bada6c106ac4f69f422 c076c42271004e7aa86e84416e33f826 - - default default] Acquired lock "refresh_cache-56389b25-6fdb-41ad-ab62-b6872a816180" lock /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:316
Oct 09 16:39:09 compute-0 nova_compute[117331]: 2025-10-09 16:39:09.994 2 DEBUG nova.network.neutron [req-737bbe22-b3c5-433b-80a2-8737d08a321c req-5a1bf92a-f32a-45d2-8277-e9a1c4eada93 ee15961ad2be4bada6c106ac4f69f422 c076c42271004e7aa86e84416e33f826 - - default default] [instance: 56389b25-6fdb-41ad-ab62-b6872a816180] Refreshing network info cache for port 50e8afe7-f8b4-4c50-b99a-5c611b86fe7e _get_instance_nw_info /usr/lib/python3.12/site-packages/nova/network/neutron.py:2067
Oct 09 16:39:09 compute-0 nova_compute[117331]: 2025-10-09 16:39:09.995 2 DEBUG nova.compute.manager [None req-dc1aa2ae-bf27-4733-9d1e-fa72df9f61f8 005258eefa5345409efe13f6a1de22ab c076c42271004e7aa86e84416e33f826 - - default default] [instance: 56389b25-6fdb-41ad-ab62-b6872a816180] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.12/site-packages/nova/compute/manager.py:602
Oct 09 16:39:10 compute-0 nova_compute[117331]: 2025-10-09 16:39:10.501 2 WARNING neutronclient.v2_0.client [req-737bbe22-b3c5-433b-80a2-8737d08a321c req-5a1bf92a-f32a-45d2-8277-e9a1c4eada93 ee15961ad2be4bada6c106ac4f69f422 c076c42271004e7aa86e84416e33f826 - - default default] The python binding code in neutronclient is deprecated in favor of OpenstackSDK, please use that as this will be removed in a future release.
Oct 09 16:39:10 compute-0 nova_compute[117331]: 2025-10-09 16:39:10.506 2 DEBUG nova.compute.manager [None req-dc1aa2ae-bf27-4733-9d1e-fa72df9f61f8 005258eefa5345409efe13f6a1de22ab c076c42271004e7aa86e84416e33f826 - - default default] live_migration data is LibvirtLiveMigrateData(bdms=[],block_migration=True,disk_available_mb=74752,disk_over_commit=<?>,dst_cpu_shared_set_info=set([]),dst_numa_info=<?>,dst_supports_mdev_live_migration=True,dst_supports_numa_live_migration=<?>,dst_wants_file_backed_memory=False,file_backed_memory_discard=<?>,filename='tmpehauqfuv',graphics_listen_addr_spice=127.0.0.1,graphics_listen_addr_vnc=::,image_type='qcow2',instance_relative_path='56389b25-6fdb-41ad-ab62-b6872a816180',is_shared_block_storage=False,is_shared_instance_path=False,is_volume_backed=False,migration=Migration(604b7ad3-b402-4b63-a5d9-acef1fb93569),old_vol_attachment_ids={},pci_dev_map_src_dst={},serial_listen_addr=None,serial_listen_ports=[],source_mdev_types=<?>,src_supports_native_luks=<?>,src_supports_numa_live_migration=<?>,supported_perf_events=[],target_connect_addr=None,target_mdevs=<?>,vifs=[VIFMigrateData],wait_for_vif_plugged=True) _do_live_migration /usr/lib/python3.12/site-packages/nova/compute/manager.py:9659
Oct 09 16:39:11 compute-0 nova_compute[117331]: 2025-10-09 16:39:11.020 2 DEBUG nova.objects.instance [None req-dc1aa2ae-bf27-4733-9d1e-fa72df9f61f8 005258eefa5345409efe13f6a1de22ab c076c42271004e7aa86e84416e33f826 - - default default] Lazy-loading 'migration_context' on Instance uuid 56389b25-6fdb-41ad-ab62-b6872a816180 obj_load_attr /usr/lib/python3.12/site-packages/nova/objects/instance.py:1141
Oct 09 16:39:11 compute-0 nova_compute[117331]: 2025-10-09 16:39:11.021 2 DEBUG nova.virt.libvirt.driver [None req-dc1aa2ae-bf27-4733-9d1e-fa72df9f61f8 005258eefa5345409efe13f6a1de22ab c076c42271004e7aa86e84416e33f826 - - default default] [instance: 56389b25-6fdb-41ad-ab62-b6872a816180] Starting monitoring of live migration _live_migration /usr/lib/python3.12/site-packages/nova/virt/libvirt/driver.py:11543
Oct 09 16:39:11 compute-0 nova_compute[117331]: 2025-10-09 16:39:11.022 2 DEBUG nova.virt.libvirt.driver [None req-dc1aa2ae-bf27-4733-9d1e-fa72df9f61f8 005258eefa5345409efe13f6a1de22ab c076c42271004e7aa86e84416e33f826 - - default default] [instance: 56389b25-6fdb-41ad-ab62-b6872a816180] Operation thread is still running _live_migration_monitor /usr/lib/python3.12/site-packages/nova/virt/libvirt/driver.py:11343
Oct 09 16:39:11 compute-0 nova_compute[117331]: 2025-10-09 16:39:11.022 2 DEBUG nova.virt.libvirt.driver [None req-dc1aa2ae-bf27-4733-9d1e-fa72df9f61f8 005258eefa5345409efe13f6a1de22ab c076c42271004e7aa86e84416e33f826 - - default default] [instance: 56389b25-6fdb-41ad-ab62-b6872a816180] Migration not running yet _live_migration_monitor /usr/lib/python3.12/site-packages/nova/virt/libvirt/driver.py:11352
Oct 09 16:39:11 compute-0 ovn_metadata_agent[28608]: 2025-10-09 16:39:11.402 28613 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=9954897f-aa83-45dd-8e84-289816676c2a, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '29'}),), if_exists=True) do_commit /usr/lib/python3.12/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 09 16:39:11 compute-0 nova_compute[117331]: 2025-10-09 16:39:11.524 2 DEBUG nova.virt.libvirt.driver [None req-dc1aa2ae-bf27-4733-9d1e-fa72df9f61f8 005258eefa5345409efe13f6a1de22ab c076c42271004e7aa86e84416e33f826 - - default default] [instance: 56389b25-6fdb-41ad-ab62-b6872a816180] Operation thread is still running _live_migration_monitor /usr/lib/python3.12/site-packages/nova/virt/libvirt/driver.py:11343
Oct 09 16:39:11 compute-0 nova_compute[117331]: 2025-10-09 16:39:11.524 2 DEBUG nova.virt.libvirt.driver [None req-dc1aa2ae-bf27-4733-9d1e-fa72df9f61f8 005258eefa5345409efe13f6a1de22ab c076c42271004e7aa86e84416e33f826 - - default default] [instance: 56389b25-6fdb-41ad-ab62-b6872a816180] Migration not running yet _live_migration_monitor /usr/lib/python3.12/site-packages/nova/virt/libvirt/driver.py:11352
Oct 09 16:39:11 compute-0 nova_compute[117331]: 2025-10-09 16:39:11.565 2 DEBUG nova.virt.libvirt.vif [None req-dc1aa2ae-bf27-4733-9d1e-fa72df9f61f8 005258eefa5345409efe13f6a1de22ab c076c42271004e7aa86e84416e33f826 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,compute_id=1,config_drive='True',created_at=2025-10-09T16:38:12Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description=None,display_name='tempest-TestExecuteWorkloadStabilizationStrategy-server-623877253',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testexecuteworkloadstabilizationstrategy-server-6238772',id=29,image_ref='b7d6e0af-25e4-4227-9dc6-43143898ceee',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-10-09T16:38:28Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=1,progress=0,project_id='a60acfe52e4b4b7f912654a59f0978b7',ramdisk_id='',reservation_id='r-zwyn0omz',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='admin,member,reader,manager',image_base_image_ref='b7d6e0af-25e4-4227-9dc6-43143898ceee',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_
vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestExecuteWorkloadStabilizationStrategy-1000021673',owner_user_name='tempest-TestExecuteWorkloadStabilizationStrategy-1000021673-project-admin'},tags=<?>,task_state='migrating',terminated_at=None,trusted_certs=<?>,updated_at=2025-10-09T16:38:28Z,user_data=None,user_id='685b4924c5a04af7ae6f4a328bb50f14',uuid=56389b25-6fdb-41ad-ab62-b6872a816180,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "50e8afe7-f8b4-4c50-b99a-5c611b86fe7e", "address": "fa:16:3e:48:bc:99", "network": {"id": "cf3aa351-4d26-41f3-8cb5-1ff2d3d995c8", "bridge": "br-int", "label": "tempest-TestExecuteWorkloadStabilizationStrategy-105109230-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "5b6d650a71ca47d58b10f5fd874c4898", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system"}, "devname": "tap50e8afe7-f8", "ovs_interfaceid": "50e8afe7-f8b4-4c50-b99a-5c611b86fe7e", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {"os_vif_delegation": true}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.12/site-packages/nova/virt/libvirt/vif.py:574
Oct 09 16:39:11 compute-0 nova_compute[117331]: 2025-10-09 16:39:11.565 2 DEBUG nova.network.os_vif_util [None req-dc1aa2ae-bf27-4733-9d1e-fa72df9f61f8 005258eefa5345409efe13f6a1de22ab c076c42271004e7aa86e84416e33f826 - - default default] Converting VIF {"id": "50e8afe7-f8b4-4c50-b99a-5c611b86fe7e", "address": "fa:16:3e:48:bc:99", "network": {"id": "cf3aa351-4d26-41f3-8cb5-1ff2d3d995c8", "bridge": "br-int", "label": "tempest-TestExecuteWorkloadStabilizationStrategy-105109230-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "5b6d650a71ca47d58b10f5fd874c4898", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system"}, "devname": "tap50e8afe7-f8", "ovs_interfaceid": "50e8afe7-f8b4-4c50-b99a-5c611b86fe7e", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {"os_vif_delegation": true}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.12/site-packages/nova/network/os_vif_util.py:511
Oct 09 16:39:11 compute-0 nova_compute[117331]: 2025-10-09 16:39:11.566 2 DEBUG nova.network.os_vif_util [None req-dc1aa2ae-bf27-4733-9d1e-fa72df9f61f8 005258eefa5345409efe13f6a1de22ab c076c42271004e7aa86e84416e33f826 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:48:bc:99,bridge_name='br-int',has_traffic_filtering=True,id=50e8afe7-f8b4-4c50-b99a-5c611b86fe7e,network=Network(cf3aa351-4d26-41f3-8cb5-1ff2d3d995c8),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap50e8afe7-f8') nova_to_osvif_vif /usr/lib/python3.12/site-packages/nova/network/os_vif_util.py:548
Oct 09 16:39:11 compute-0 nova_compute[117331]: 2025-10-09 16:39:11.566 2 DEBUG nova.virt.libvirt.migration [None req-dc1aa2ae-bf27-4733-9d1e-fa72df9f61f8 005258eefa5345409efe13f6a1de22ab c076c42271004e7aa86e84416e33f826 - - default default] [instance: 56389b25-6fdb-41ad-ab62-b6872a816180] Updating guest XML with vif config: <interface type="ethernet">
Oct 09 16:39:11 compute-0 nova_compute[117331]:   <mac address="fa:16:3e:48:bc:99"/>
Oct 09 16:39:11 compute-0 nova_compute[117331]:   <model type="virtio"/>
Oct 09 16:39:11 compute-0 nova_compute[117331]:   <driver name="vhost" rx_queue_size="512"/>
Oct 09 16:39:11 compute-0 nova_compute[117331]:   <mtu size="1442"/>
Oct 09 16:39:11 compute-0 nova_compute[117331]:   <target dev="tap50e8afe7-f8"/>
Oct 09 16:39:11 compute-0 nova_compute[117331]: </interface>
Oct 09 16:39:11 compute-0 nova_compute[117331]:  _update_vif_xml /usr/lib/python3.12/site-packages/nova/virt/libvirt/migration.py:534
Oct 09 16:39:11 compute-0 nova_compute[117331]: 2025-10-09 16:39:11.567 2 DEBUG nova.virt.libvirt.migration [None req-dc1aa2ae-bf27-4733-9d1e-fa72df9f61f8 005258eefa5345409efe13f6a1de22ab c076c42271004e7aa86e84416e33f826 - - default default] _remove_cpu_shared_set_xml input xml=<domain type="kvm">
Oct 09 16:39:11 compute-0 nova_compute[117331]:   <name>instance-0000001d</name>
Oct 09 16:39:11 compute-0 nova_compute[117331]:   <uuid>56389b25-6fdb-41ad-ab62-b6872a816180</uuid>
Oct 09 16:39:11 compute-0 nova_compute[117331]:   <metadata>
Oct 09 16:39:11 compute-0 nova_compute[117331]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Oct 09 16:39:11 compute-0 nova_compute[117331]:       <nova:package version="32.1.0-0.20251008143712.076498e.el10"/>
Oct 09 16:39:11 compute-0 nova_compute[117331]:       <nova:name>tempest-TestExecuteWorkloadStabilizationStrategy-server-623877253</nova:name>
Oct 09 16:39:11 compute-0 nova_compute[117331]:       <nova:creationTime>2025-10-09 16:38:23</nova:creationTime>
Oct 09 16:39:11 compute-0 nova_compute[117331]:       <nova:flavor name="m1.nano" id="5aeb5bf9-70f3-4ab6-b418-2c3319a7d2b3">
Oct 09 16:39:11 compute-0 nova_compute[117331]:         <nova:memory>128</nova:memory>
Oct 09 16:39:11 compute-0 nova_compute[117331]:         <nova:disk>1</nova:disk>
Oct 09 16:39:11 compute-0 nova_compute[117331]:         <nova:swap>0</nova:swap>
Oct 09 16:39:11 compute-0 nova_compute[117331]:         <nova:ephemeral>0</nova:ephemeral>
Oct 09 16:39:11 compute-0 nova_compute[117331]:         <nova:vcpus>1</nova:vcpus>
Oct 09 16:39:11 compute-0 nova_compute[117331]:         <nova:extraSpecs>
Oct 09 16:39:11 compute-0 nova_compute[117331]:           <nova:extraSpec name="hw_rng:allowed">True</nova:extraSpec>
Oct 09 16:39:11 compute-0 nova_compute[117331]:         </nova:extraSpecs>
Oct 09 16:39:11 compute-0 nova_compute[117331]:       </nova:flavor>
Oct 09 16:39:11 compute-0 nova_compute[117331]:       <nova:image uuid="b7d6e0af-25e4-4227-9dc6-43143898ceee">
Oct 09 16:39:11 compute-0 nova_compute[117331]:         <nova:containerFormat>bare</nova:containerFormat>
Oct 09 16:39:11 compute-0 nova_compute[117331]:         <nova:diskFormat>qcow2</nova:diskFormat>
Oct 09 16:39:11 compute-0 nova_compute[117331]:         <nova:minDisk>1</nova:minDisk>
Oct 09 16:39:11 compute-0 nova_compute[117331]:         <nova:minRam>0</nova:minRam>
Oct 09 16:39:11 compute-0 nova_compute[117331]:         <nova:properties>
Oct 09 16:39:11 compute-0 nova_compute[117331]:           <nova:property name="hw_rng_model">virtio</nova:property>
Oct 09 16:39:11 compute-0 nova_compute[117331]:         </nova:properties>
Oct 09 16:39:11 compute-0 nova_compute[117331]:       </nova:image>
Oct 09 16:39:11 compute-0 nova_compute[117331]:       <nova:owner>
Oct 09 16:39:11 compute-0 nova_compute[117331]:         <nova:user uuid="685b4924c5a04af7ae6f4a328bb50f14">tempest-TestExecuteWorkloadStabilizationStrategy-1000021673-project-admin</nova:user>
Oct 09 16:39:11 compute-0 nova_compute[117331]:         <nova:project uuid="a60acfe52e4b4b7f912654a59f0978b7">tempest-TestExecuteWorkloadStabilizationStrategy-1000021673</nova:project>
Oct 09 16:39:11 compute-0 nova_compute[117331]:       </nova:owner>
Oct 09 16:39:11 compute-0 nova_compute[117331]:       <nova:root type="image" uuid="b7d6e0af-25e4-4227-9dc6-43143898ceee"/>
Oct 09 16:39:11 compute-0 nova_compute[117331]:       <nova:ports>
Oct 09 16:39:11 compute-0 nova_compute[117331]:         <nova:port uuid="50e8afe7-f8b4-4c50-b99a-5c611b86fe7e">
Oct 09 16:39:11 compute-0 nova_compute[117331]:           <nova:ip type="fixed" address="10.100.0.11" ipVersion="4"/>
Oct 09 16:39:11 compute-0 nova_compute[117331]:         </nova:port>
Oct 09 16:39:11 compute-0 nova_compute[117331]:       </nova:ports>
Oct 09 16:39:11 compute-0 nova_compute[117331]:     </nova:instance>
Oct 09 16:39:11 compute-0 nova_compute[117331]:   </metadata>
Oct 09 16:39:11 compute-0 nova_compute[117331]:   <memory unit="KiB">131072</memory>
Oct 09 16:39:11 compute-0 nova_compute[117331]:   <currentMemory unit="KiB">131072</currentMemory>
Oct 09 16:39:11 compute-0 nova_compute[117331]:   <vcpu placement="static">1</vcpu>
Oct 09 16:39:11 compute-0 nova_compute[117331]:   <resource>
Oct 09 16:39:11 compute-0 nova_compute[117331]:     <partition>/machine</partition>
Oct 09 16:39:11 compute-0 nova_compute[117331]:   </resource>
Oct 09 16:39:11 compute-0 nova_compute[117331]:   <sysinfo type="smbios">
Oct 09 16:39:11 compute-0 nova_compute[117331]:     <system>
Oct 09 16:39:11 compute-0 nova_compute[117331]:       <entry name="manufacturer">RDO</entry>
Oct 09 16:39:11 compute-0 nova_compute[117331]:       <entry name="product">OpenStack Compute</entry>
Oct 09 16:39:11 compute-0 nova_compute[117331]:       <entry name="version">32.1.0-0.20251008143712.076498e.el10</entry>
Oct 09 16:39:11 compute-0 nova_compute[117331]:       <entry name="serial">56389b25-6fdb-41ad-ab62-b6872a816180</entry>
Oct 09 16:39:11 compute-0 nova_compute[117331]:       <entry name="uuid">56389b25-6fdb-41ad-ab62-b6872a816180</entry>
Oct 09 16:39:11 compute-0 nova_compute[117331]:       <entry name="family">Virtual Machine</entry>
Oct 09 16:39:11 compute-0 nova_compute[117331]:     </system>
Oct 09 16:39:11 compute-0 nova_compute[117331]:   </sysinfo>
Oct 09 16:39:11 compute-0 nova_compute[117331]:   <os>
Oct 09 16:39:11 compute-0 nova_compute[117331]:     <type arch="x86_64" machine="pc-q35-rhel9.6.0">hvm</type>
Oct 09 16:39:11 compute-0 nova_compute[117331]:     <boot dev="hd"/>
Oct 09 16:39:11 compute-0 nova_compute[117331]:     <smbios mode="sysinfo"/>
Oct 09 16:39:11 compute-0 nova_compute[117331]:   </os>
Oct 09 16:39:11 compute-0 nova_compute[117331]:   <features>
Oct 09 16:39:11 compute-0 nova_compute[117331]:     <acpi/>
Oct 09 16:39:11 compute-0 nova_compute[117331]:     <apic/>
Oct 09 16:39:11 compute-0 nova_compute[117331]:     <vmcoreinfo state="on"/>
Oct 09 16:39:11 compute-0 nova_compute[117331]:   </features>
Oct 09 16:39:11 compute-0 nova_compute[117331]:   <cpu mode="host-model" check="partial">
Oct 09 16:39:11 compute-0 nova_compute[117331]:     <topology sockets="1" dies="1" clusters="1" cores="1" threads="1"/>
Oct 09 16:39:11 compute-0 nova_compute[117331]:   </cpu>
Oct 09 16:39:11 compute-0 nova_compute[117331]:   <clock offset="utc">
Oct 09 16:39:11 compute-0 nova_compute[117331]:     <timer name="pit" tickpolicy="delay"/>
Oct 09 16:39:11 compute-0 nova_compute[117331]:     <timer name="rtc" tickpolicy="catchup"/>
Oct 09 16:39:11 compute-0 nova_compute[117331]:     <timer name="hpet" present="no"/>
Oct 09 16:39:11 compute-0 nova_compute[117331]:   </clock>
Oct 09 16:39:11 compute-0 nova_compute[117331]:   <on_poweroff>destroy</on_poweroff>
Oct 09 16:39:11 compute-0 nova_compute[117331]:   <on_reboot>restart</on_reboot>
Oct 09 16:39:11 compute-0 nova_compute[117331]:   <on_crash>destroy</on_crash>
Oct 09 16:39:11 compute-0 nova_compute[117331]:   <devices>
Oct 09 16:39:11 compute-0 nova_compute[117331]:     <emulator>/usr/libexec/qemu-kvm</emulator>
Oct 09 16:39:11 compute-0 nova_compute[117331]:     <disk type="file" device="disk">
Oct 09 16:39:11 compute-0 nova_compute[117331]:       <driver name="qemu" type="qcow2" cache="none"/>
Oct 09 16:39:11 compute-0 nova_compute[117331]:       <source file="/var/lib/nova/instances/56389b25-6fdb-41ad-ab62-b6872a816180/disk"/>
Oct 09 16:39:11 compute-0 nova_compute[117331]:       <target dev="vda" bus="virtio"/>
Oct 09 16:39:11 compute-0 nova_compute[117331]:       <address type="pci" domain="0x0000" bus="0x03" slot="0x00" function="0x0"/>
Oct 09 16:39:11 compute-0 nova_compute[117331]:     </disk>
Oct 09 16:39:11 compute-0 nova_compute[117331]:     <disk type="file" device="cdrom">
Oct 09 16:39:11 compute-0 nova_compute[117331]:       <driver name="qemu" type="raw" cache="none"/>
Oct 09 16:39:11 compute-0 nova_compute[117331]:       <source file="/var/lib/nova/instances/56389b25-6fdb-41ad-ab62-b6872a816180/disk.config"/>
Oct 09 16:39:11 compute-0 nova_compute[117331]:       <target dev="sda" bus="sata"/>
Oct 09 16:39:11 compute-0 nova_compute[117331]:       <readonly/>
Oct 09 16:39:11 compute-0 nova_compute[117331]:       <address type="drive" controller="0" bus="0" target="0" unit="0"/>
Oct 09 16:39:11 compute-0 nova_compute[117331]:     </disk>
Oct 09 16:39:11 compute-0 nova_compute[117331]:     <controller type="pci" index="0" model="pcie-root"/>
Oct 09 16:39:11 compute-0 nova_compute[117331]:     <controller type="pci" index="1" model="pcie-root-port">
Oct 09 16:39:11 compute-0 nova_compute[117331]:       <model name="pcie-root-port"/>
Oct 09 16:39:11 compute-0 nova_compute[117331]:       <target chassis="1" port="0x10"/>
Oct 09 16:39:11 compute-0 nova_compute[117331]:       <address type="pci" domain="0x0000" bus="0x00" slot="0x02" function="0x0" multifunction="on"/>
Oct 09 16:39:11 compute-0 nova_compute[117331]:     </controller>
Oct 09 16:39:11 compute-0 nova_compute[117331]:     <controller type="pci" index="2" model="pcie-root-port">
Oct 09 16:39:11 compute-0 nova_compute[117331]:       <model name="pcie-root-port"/>
Oct 09 16:39:11 compute-0 nova_compute[117331]:       <target chassis="2" port="0x11"/>
Oct 09 16:39:11 compute-0 nova_compute[117331]:       <address type="pci" domain="0x0000" bus="0x00" slot="0x02" function="0x1"/>
Oct 09 16:39:11 compute-0 nova_compute[117331]:     </controller>
Oct 09 16:39:11 compute-0 nova_compute[117331]:     <controller type="pci" index="3" model="pcie-root-port">
Oct 09 16:39:11 compute-0 nova_compute[117331]:       <model name="pcie-root-port"/>
Oct 09 16:39:11 compute-0 nova_compute[117331]:       <target chassis="3" port="0x12"/>
Oct 09 16:39:11 compute-0 nova_compute[117331]:       <address type="pci" domain="0x0000" bus="0x00" slot="0x02" function="0x2"/>
Oct 09 16:39:11 compute-0 nova_compute[117331]:     </controller>
Oct 09 16:39:11 compute-0 nova_compute[117331]:     <controller type="pci" index="4" model="pcie-root-port">
Oct 09 16:39:11 compute-0 nova_compute[117331]:       <model name="pcie-root-port"/>
Oct 09 16:39:11 compute-0 nova_compute[117331]:       <target chassis="4" port="0x13"/>
Oct 09 16:39:11 compute-0 nova_compute[117331]:       <address type="pci" domain="0x0000" bus="0x00" slot="0x02" function="0x3"/>
Oct 09 16:39:11 compute-0 nova_compute[117331]:     </controller>
Oct 09 16:39:11 compute-0 nova_compute[117331]:     <controller type="pci" index="5" model="pcie-root-port">
Oct 09 16:39:11 compute-0 nova_compute[117331]:       <model name="pcie-root-port"/>
Oct 09 16:39:11 compute-0 nova_compute[117331]:       <target chassis="5" port="0x14"/>
Oct 09 16:39:11 compute-0 nova_compute[117331]:       <address type="pci" domain="0x0000" bus="0x00" slot="0x02" function="0x4"/>
Oct 09 16:39:11 compute-0 nova_compute[117331]:     </controller>
Oct 09 16:39:11 compute-0 nova_compute[117331]:     <controller type="pci" index="6" model="pcie-root-port">
Oct 09 16:39:11 compute-0 nova_compute[117331]:       <model name="pcie-root-port"/>
Oct 09 16:39:11 compute-0 nova_compute[117331]:       <target chassis="6" port="0x15"/>
Oct 09 16:39:11 compute-0 nova_compute[117331]:       <address type="pci" domain="0x0000" bus="0x00" slot="0x02" function="0x5"/>
Oct 09 16:39:11 compute-0 nova_compute[117331]:     </controller>
Oct 09 16:39:11 compute-0 nova_compute[117331]:     <controller type="pci" index="7" model="pcie-root-port">
Oct 09 16:39:11 compute-0 nova_compute[117331]:       <model name="pcie-root-port"/>
Oct 09 16:39:11 compute-0 nova_compute[117331]:       <target chassis="7" port="0x16"/>
Oct 09 16:39:11 compute-0 nova_compute[117331]:       <address type="pci" domain="0x0000" bus="0x00" slot="0x02" function="0x6"/>
Oct 09 16:39:11 compute-0 nova_compute[117331]:     </controller>
Oct 09 16:39:11 compute-0 nova_compute[117331]:     <controller type="pci" index="8" model="pcie-root-port">
Oct 09 16:39:11 compute-0 nova_compute[117331]:       <model name="pcie-root-port"/>
Oct 09 16:39:11 compute-0 nova_compute[117331]:       <target chassis="8" port="0x17"/>
Oct 09 16:39:11 compute-0 nova_compute[117331]:       <address type="pci" domain="0x0000" bus="0x00" slot="0x02" function="0x7"/>
Oct 09 16:39:11 compute-0 nova_compute[117331]:     </controller>
Oct 09 16:39:11 compute-0 nova_compute[117331]:     <controller type="pci" index="9" model="pcie-root-port">
Oct 09 16:39:11 compute-0 nova_compute[117331]:       <model name="pcie-root-port"/>
Oct 09 16:39:11 compute-0 nova_compute[117331]:       <target chassis="9" port="0x18"/>
Oct 09 16:39:11 compute-0 nova_compute[117331]:       <address type="pci" domain="0x0000" bus="0x00" slot="0x03" function="0x0" multifunction="on"/>
Oct 09 16:39:11 compute-0 nova_compute[117331]:     </controller>
Oct 09 16:39:11 compute-0 nova_compute[117331]:     <controller type="pci" index="10" model="pcie-root-port">
Oct 09 16:39:11 compute-0 nova_compute[117331]:       <model name="pcie-root-port"/>
Oct 09 16:39:11 compute-0 nova_compute[117331]:       <target chassis="10" port="0x19"/>
Oct 09 16:39:11 compute-0 nova_compute[117331]:       <address type="pci" domain="0x0000" bus="0x00" slot="0x03" function="0x1"/>
Oct 09 16:39:11 compute-0 nova_compute[117331]:     </controller>
Oct 09 16:39:11 compute-0 nova_compute[117331]:     <controller type="pci" index="11" model="pcie-root-port">
Oct 09 16:39:11 compute-0 nova_compute[117331]:       <model name="pcie-root-port"/>
Oct 09 16:39:11 compute-0 nova_compute[117331]:       <target chassis="11" port="0x1a"/>
Oct 09 16:39:11 compute-0 nova_compute[117331]:       <address type="pci" domain="0x0000" bus="0x00" slot="0x03" function="0x2"/>
Oct 09 16:39:11 compute-0 nova_compute[117331]:     </controller>
Oct 09 16:39:11 compute-0 nova_compute[117331]:     <controller type="pci" index="12" model="pcie-root-port">
Oct 09 16:39:11 compute-0 nova_compute[117331]:       <model name="pcie-root-port"/>
Oct 09 16:39:11 compute-0 nova_compute[117331]:       <target chassis="12" port="0x1b"/>
Oct 09 16:39:11 compute-0 nova_compute[117331]:       <address type="pci" domain="0x0000" bus="0x00" slot="0x03" function="0x3"/>
Oct 09 16:39:11 compute-0 nova_compute[117331]:     </controller>
Oct 09 16:39:11 compute-0 nova_compute[117331]:     <controller type="pci" index="13" model="pcie-root-port">
Oct 09 16:39:11 compute-0 nova_compute[117331]:       <model name="pcie-root-port"/>
Oct 09 16:39:11 compute-0 nova_compute[117331]:       <target chassis="13" port="0x1c"/>
Oct 09 16:39:11 compute-0 nova_compute[117331]:       <address type="pci" domain="0x0000" bus="0x00" slot="0x03" function="0x4"/>
Oct 09 16:39:11 compute-0 nova_compute[117331]:     </controller>
Oct 09 16:39:11 compute-0 nova_compute[117331]:     <controller type="pci" index="14" model="pcie-root-port">
Oct 09 16:39:11 compute-0 nova_compute[117331]:       <model name="pcie-root-port"/>
Oct 09 16:39:11 compute-0 nova_compute[117331]:       <target chassis="14" port="0x1d"/>
Oct 09 16:39:11 compute-0 nova_compute[117331]:       <address type="pci" domain="0x0000" bus="0x00" slot="0x03" function="0x5"/>
Oct 09 16:39:11 compute-0 nova_compute[117331]:     </controller>
Oct 09 16:39:11 compute-0 nova_compute[117331]:     <controller type="pci" index="15" model="pcie-root-port">
Oct 09 16:39:11 compute-0 nova_compute[117331]:       <model name="pcie-root-port"/>
Oct 09 16:39:11 compute-0 nova_compute[117331]:       <target chassis="15" port="0x1e"/>
Oct 09 16:39:11 compute-0 nova_compute[117331]:       <address type="pci" domain="0x0000" bus="0x00" slot="0x03" function="0x6"/>
Oct 09 16:39:11 compute-0 nova_compute[117331]:     </controller>
Oct 09 16:39:11 compute-0 nova_compute[117331]:     <controller type="pci" index="16" model="pcie-root-port">
Oct 09 16:39:11 compute-0 nova_compute[117331]:       <model name="pcie-root-port"/>
Oct 09 16:39:11 compute-0 nova_compute[117331]:       <target chassis="16" port="0x1f"/>
Oct 09 16:39:11 compute-0 nova_compute[117331]:       <address type="pci" domain="0x0000" bus="0x00" slot="0x03" function="0x7"/>
Oct 09 16:39:11 compute-0 nova_compute[117331]:     </controller>
Oct 09 16:39:11 compute-0 nova_compute[117331]:     <controller type="pci" index="17" model="pcie-root-port">
Oct 09 16:39:11 compute-0 nova_compute[117331]:       <model name="pcie-root-port"/>
Oct 09 16:39:11 compute-0 nova_compute[117331]:       <target chassis="17" port="0x20"/>
Oct 09 16:39:11 compute-0 nova_compute[117331]:       <address type="pci" domain="0x0000" bus="0x00" slot="0x04" function="0x0" multifunction="on"/>
Oct 09 16:39:11 compute-0 nova_compute[117331]:     </controller>
Oct 09 16:39:11 compute-0 nova_compute[117331]:     <controller type="pci" index="18" model="pcie-root-port">
Oct 09 16:39:11 compute-0 nova_compute[117331]:       <model name="pcie-root-port"/>
Oct 09 16:39:11 compute-0 nova_compute[117331]:       <target chassis="18" port="0x21"/>
Oct 09 16:39:11 compute-0 nova_compute[117331]:       <address type="pci" domain="0x0000" bus="0x00" slot="0x04" function="0x1"/>
Oct 09 16:39:11 compute-0 nova_compute[117331]:     </controller>
Oct 09 16:39:11 compute-0 nova_compute[117331]:     <controller type="pci" index="19" model="pcie-root-port">
Oct 09 16:39:11 compute-0 nova_compute[117331]:       <model name="pcie-root-port"/>
Oct 09 16:39:11 compute-0 nova_compute[117331]:       <target chassis="19" port="0x22"/>
Oct 09 16:39:11 compute-0 nova_compute[117331]:       <address type="pci" domain="0x0000" bus="0x00" slot="0x04" function="0x2"/>
Oct 09 16:39:11 compute-0 nova_compute[117331]:     </controller>
Oct 09 16:39:11 compute-0 nova_compute[117331]:     <controller type="pci" index="20" model="pcie-root-port">
Oct 09 16:39:11 compute-0 nova_compute[117331]:       <model name="pcie-root-port"/>
Oct 09 16:39:11 compute-0 nova_compute[117331]:       <target chassis="20" port="0x23"/>
Oct 09 16:39:11 compute-0 nova_compute[117331]:       <address type="pci" domain="0x0000" bus="0x00" slot="0x04" function="0x3"/>
Oct 09 16:39:11 compute-0 nova_compute[117331]:     </controller>
Oct 09 16:39:11 compute-0 nova_compute[117331]:     <controller type="pci" index="21" model="pcie-root-port">
Oct 09 16:39:11 compute-0 nova_compute[117331]:       <model name="pcie-root-port"/>
Oct 09 16:39:11 compute-0 nova_compute[117331]:       <target chassis="21" port="0x24"/>
Oct 09 16:39:11 compute-0 nova_compute[117331]:       <address type="pci" domain="0x0000" bus="0x00" slot="0x04" function="0x4"/>
Oct 09 16:39:11 compute-0 nova_compute[117331]:     </controller>
Oct 09 16:39:11 compute-0 nova_compute[117331]:     <controller type="pci" index="22" model="pcie-root-port">
Oct 09 16:39:11 compute-0 nova_compute[117331]:       <model name="pcie-root-port"/>
Oct 09 16:39:11 compute-0 nova_compute[117331]:       <target chassis="22" port="0x25"/>
Oct 09 16:39:11 compute-0 nova_compute[117331]:       <address type="pci" domain="0x0000" bus="0x00" slot="0x04" function="0x5"/>
Oct 09 16:39:11 compute-0 nova_compute[117331]:     </controller>
Oct 09 16:39:11 compute-0 nova_compute[117331]:     <controller type="pci" index="23" model="pcie-root-port">
Oct 09 16:39:11 compute-0 nova_compute[117331]:       <model name="pcie-root-port"/>
Oct 09 16:39:11 compute-0 nova_compute[117331]:       <target chassis="23" port="0x26"/>
Oct 09 16:39:11 compute-0 nova_compute[117331]:       <address type="pci" domain="0x0000" bus="0x00" slot="0x04" function="0x6"/>
Oct 09 16:39:11 compute-0 nova_compute[117331]:     </controller>
Oct 09 16:39:11 compute-0 nova_compute[117331]:     <controller type="pci" index="24" model="pcie-root-port">
Oct 09 16:39:11 compute-0 nova_compute[117331]:       <model name="pcie-root-port"/>
Oct 09 16:39:11 compute-0 nova_compute[117331]:       <target chassis="24" port="0x27"/>
Oct 09 16:39:11 compute-0 nova_compute[117331]:       <address type="pci" domain="0x0000" bus="0x00" slot="0x04" function="0x7"/>
Oct 09 16:39:11 compute-0 nova_compute[117331]:     </controller>
Oct 09 16:39:11 compute-0 nova_compute[117331]:     <controller type="pci" index="25" model="pcie-root-port">
Oct 09 16:39:11 compute-0 nova_compute[117331]:       <model name="pcie-root-port"/>
Oct 09 16:39:11 compute-0 nova_compute[117331]:       <target chassis="25" port="0x28"/>
Oct 09 16:39:11 compute-0 nova_compute[117331]:       <address type="pci" domain="0x0000" bus="0x00" slot="0x05" function="0x0"/>
Oct 09 16:39:11 compute-0 nova_compute[117331]:     </controller>
Oct 09 16:39:11 compute-0 nova_compute[117331]:     <controller type="pci" index="26" model="pcie-to-pci-bridge">
Oct 09 16:39:11 compute-0 nova_compute[117331]:       <model name="pcie-pci-bridge"/>
Oct 09 16:39:11 compute-0 nova_compute[117331]:       <address type="pci" domain="0x0000" bus="0x01" slot="0x00" function="0x0"/>
Oct 09 16:39:11 compute-0 nova_compute[117331]:     </controller>
Oct 09 16:39:11 compute-0 nova_compute[117331]:     <controller type="usb" index="0" model="piix3-uhci">
Oct 09 16:39:11 compute-0 nova_compute[117331]:       <address type="pci" domain="0x0000" bus="0x1a" slot="0x01" function="0x0"/>
Oct 09 16:39:11 compute-0 nova_compute[117331]:     </controller>
Oct 09 16:39:11 compute-0 nova_compute[117331]:     <controller type="sata" index="0">
Oct 09 16:39:11 compute-0 nova_compute[117331]:       <address type="pci" domain="0x0000" bus="0x00" slot="0x1f" function="0x2"/>
Oct 09 16:39:11 compute-0 nova_compute[117331]:     </controller>
Oct 09 16:39:11 compute-0 nova_compute[117331]:     <interface type="ethernet">
Oct 09 16:39:11 compute-0 nova_compute[117331]:       <mac address="fa:16:3e:48:bc:99"/>
Oct 09 16:39:11 compute-0 nova_compute[117331]:       <model type="virtio"/>
Oct 09 16:39:11 compute-0 nova_compute[117331]:       <driver name="vhost" rx_queue_size="512"/>
Oct 09 16:39:11 compute-0 nova_compute[117331]:       <mtu size="1442"/>
Oct 09 16:39:11 compute-0 nova_compute[117331]:       <target dev="tap50e8afe7-f8"/>
Oct 09 16:39:11 compute-0 nova_compute[117331]:       <address type="pci" domain="0x0000" bus="0x02" slot="0x00" function="0x0"/>
Oct 09 16:39:11 compute-0 nova_compute[117331]:     </interface>
Oct 09 16:39:11 compute-0 nova_compute[117331]:     <serial type="pty">
Oct 09 16:39:11 compute-0 nova_compute[117331]:       <log file="/var/lib/nova/instances/56389b25-6fdb-41ad-ab62-b6872a816180/console.log" append="off"/>
Oct 09 16:39:11 compute-0 nova_compute[117331]:       <target type="isa-serial" port="0">
Oct 09 16:39:11 compute-0 nova_compute[117331]:         <model name="isa-serial"/>
Oct 09 16:39:11 compute-0 nova_compute[117331]:       </target>
Oct 09 16:39:11 compute-0 nova_compute[117331]:     </serial>
Oct 09 16:39:11 compute-0 nova_compute[117331]:     <console type="pty">
Oct 09 16:39:11 compute-0 nova_compute[117331]:       <log file="/var/lib/nova/instances/56389b25-6fdb-41ad-ab62-b6872a816180/console.log" append="off"/>
Oct 09 16:39:11 compute-0 nova_compute[117331]:       <target type="serial" port="0"/>
Oct 09 16:39:11 compute-0 nova_compute[117331]:     </console>
Oct 09 16:39:11 compute-0 nova_compute[117331]:     <input type="tablet" bus="usb">
Oct 09 16:39:11 compute-0 nova_compute[117331]:       <address type="usb" bus="0" port="1"/>
Oct 09 16:39:11 compute-0 nova_compute[117331]:     </input>
Oct 09 16:39:11 compute-0 nova_compute[117331]:     <input type="mouse" bus="ps2"/>
Oct 09 16:39:11 compute-0 nova_compute[117331]:     <graphics type="vnc" port="-1" autoport="yes" listen="::">
Oct 09 16:39:11 compute-0 nova_compute[117331]:       <listen type="address" address="::"/>
Oct 09 16:39:11 compute-0 nova_compute[117331]:     </graphics>
Oct 09 16:39:11 compute-0 nova_compute[117331]:     <video>
Oct 09 16:39:11 compute-0 nova_compute[117331]:       <model type="virtio" heads="1" primary="yes"/>
Oct 09 16:39:11 compute-0 nova_compute[117331]:       <address type="pci" domain="0x0000" bus="0x00" slot="0x01" function="0x0"/>
Oct 09 16:39:11 compute-0 nova_compute[117331]:     </video>
Oct 09 16:39:11 compute-0 nova_compute[117331]:     <memballoon model="virtio" autodeflate="on" freePageReporting="on">
Oct 09 16:39:11 compute-0 nova_compute[117331]:       <stats period="10"/>
Oct 09 16:39:11 compute-0 nova_compute[117331]:       <address type="pci" domain="0x0000" bus="0x04" slot="0x00" function="0x0"/>
Oct 09 16:39:11 compute-0 nova_compute[117331]:     </memballoon>
Oct 09 16:39:11 compute-0 nova_compute[117331]:     <rng model="virtio">
Oct 09 16:39:11 compute-0 nova_compute[117331]:       <backend model="random">/dev/urandom</backend>
Oct 09 16:39:11 compute-0 nova_compute[117331]:       <address type="pci" domain="0x0000" bus="0x05" slot="0x00" function="0x0"/>
Oct 09 16:39:11 compute-0 nova_compute[117331]:     </rng>
Oct 09 16:39:11 compute-0 nova_compute[117331]:   </devices>
Oct 09 16:39:11 compute-0 nova_compute[117331]:   <seclabel type="dynamic" model="selinux" relabel="yes"/>
Oct 09 16:39:11 compute-0 nova_compute[117331]: </domain>
Oct 09 16:39:11 compute-0 nova_compute[117331]:  _remove_cpu_shared_set_xml /usr/lib/python3.12/site-packages/nova/virt/libvirt/migration.py:241
Oct 09 16:39:11 compute-0 nova_compute[117331]: 2025-10-09 16:39:11.568 2 DEBUG nova.virt.libvirt.migration [None req-dc1aa2ae-bf27-4733-9d1e-fa72df9f61f8 005258eefa5345409efe13f6a1de22ab c076c42271004e7aa86e84416e33f826 - - default default] _remove_cpu_shared_set_xml output xml=<domain type="kvm">
Oct 09 16:39:11 compute-0 nova_compute[117331]:   <name>instance-0000001d</name>
Oct 09 16:39:11 compute-0 nova_compute[117331]:   <uuid>56389b25-6fdb-41ad-ab62-b6872a816180</uuid>
Oct 09 16:39:11 compute-0 nova_compute[117331]:   <metadata>
Oct 09 16:39:11 compute-0 nova_compute[117331]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Oct 09 16:39:11 compute-0 nova_compute[117331]:       <nova:package version="32.1.0-0.20251008143712.076498e.el10"/>
Oct 09 16:39:11 compute-0 nova_compute[117331]:       <nova:name>tempest-TestExecuteWorkloadStabilizationStrategy-server-623877253</nova:name>
Oct 09 16:39:11 compute-0 nova_compute[117331]:       <nova:creationTime>2025-10-09 16:38:23</nova:creationTime>
Oct 09 16:39:11 compute-0 nova_compute[117331]:       <nova:flavor name="m1.nano" id="5aeb5bf9-70f3-4ab6-b418-2c3319a7d2b3">
Oct 09 16:39:11 compute-0 nova_compute[117331]:         <nova:memory>128</nova:memory>
Oct 09 16:39:11 compute-0 nova_compute[117331]:         <nova:disk>1</nova:disk>
Oct 09 16:39:11 compute-0 nova_compute[117331]:         <nova:swap>0</nova:swap>
Oct 09 16:39:11 compute-0 nova_compute[117331]:         <nova:ephemeral>0</nova:ephemeral>
Oct 09 16:39:11 compute-0 nova_compute[117331]:         <nova:vcpus>1</nova:vcpus>
Oct 09 16:39:11 compute-0 nova_compute[117331]:         <nova:extraSpecs>
Oct 09 16:39:11 compute-0 nova_compute[117331]:           <nova:extraSpec name="hw_rng:allowed">True</nova:extraSpec>
Oct 09 16:39:11 compute-0 nova_compute[117331]:         </nova:extraSpecs>
Oct 09 16:39:11 compute-0 nova_compute[117331]:       </nova:flavor>
Oct 09 16:39:11 compute-0 nova_compute[117331]:       <nova:image uuid="b7d6e0af-25e4-4227-9dc6-43143898ceee">
Oct 09 16:39:11 compute-0 nova_compute[117331]:         <nova:containerFormat>bare</nova:containerFormat>
Oct 09 16:39:11 compute-0 nova_compute[117331]:         <nova:diskFormat>qcow2</nova:diskFormat>
Oct 09 16:39:11 compute-0 nova_compute[117331]:         <nova:minDisk>1</nova:minDisk>
Oct 09 16:39:11 compute-0 nova_compute[117331]:         <nova:minRam>0</nova:minRam>
Oct 09 16:39:11 compute-0 nova_compute[117331]:         <nova:properties>
Oct 09 16:39:11 compute-0 nova_compute[117331]:           <nova:property name="hw_rng_model">virtio</nova:property>
Oct 09 16:39:11 compute-0 nova_compute[117331]:         </nova:properties>
Oct 09 16:39:11 compute-0 nova_compute[117331]:       </nova:image>
Oct 09 16:39:11 compute-0 nova_compute[117331]:       <nova:owner>
Oct 09 16:39:11 compute-0 nova_compute[117331]:         <nova:user uuid="685b4924c5a04af7ae6f4a328bb50f14">tempest-TestExecuteWorkloadStabilizationStrategy-1000021673-project-admin</nova:user>
Oct 09 16:39:11 compute-0 nova_compute[117331]:         <nova:project uuid="a60acfe52e4b4b7f912654a59f0978b7">tempest-TestExecuteWorkloadStabilizationStrategy-1000021673</nova:project>
Oct 09 16:39:11 compute-0 nova_compute[117331]:       </nova:owner>
Oct 09 16:39:11 compute-0 nova_compute[117331]:       <nova:root type="image" uuid="b7d6e0af-25e4-4227-9dc6-43143898ceee"/>
Oct 09 16:39:11 compute-0 nova_compute[117331]:       <nova:ports>
Oct 09 16:39:11 compute-0 nova_compute[117331]:         <nova:port uuid="50e8afe7-f8b4-4c50-b99a-5c611b86fe7e">
Oct 09 16:39:11 compute-0 nova_compute[117331]:           <nova:ip type="fixed" address="10.100.0.11" ipVersion="4"/>
Oct 09 16:39:11 compute-0 nova_compute[117331]:         </nova:port>
Oct 09 16:39:11 compute-0 nova_compute[117331]:       </nova:ports>
Oct 09 16:39:11 compute-0 nova_compute[117331]:     </nova:instance>
Oct 09 16:39:11 compute-0 nova_compute[117331]:   </metadata>
Oct 09 16:39:11 compute-0 nova_compute[117331]:   <memory unit="KiB">131072</memory>
Oct 09 16:39:11 compute-0 nova_compute[117331]:   <currentMemory unit="KiB">131072</currentMemory>
Oct 09 16:39:11 compute-0 nova_compute[117331]:   <vcpu placement="static">1</vcpu>
Oct 09 16:39:11 compute-0 nova_compute[117331]:   <resource>
Oct 09 16:39:11 compute-0 nova_compute[117331]:     <partition>/machine</partition>
Oct 09 16:39:11 compute-0 nova_compute[117331]:   </resource>
Oct 09 16:39:11 compute-0 nova_compute[117331]:   <sysinfo type="smbios">
Oct 09 16:39:11 compute-0 nova_compute[117331]:     <system>
Oct 09 16:39:11 compute-0 nova_compute[117331]:       <entry name="manufacturer">RDO</entry>
Oct 09 16:39:11 compute-0 nova_compute[117331]:       <entry name="product">OpenStack Compute</entry>
Oct 09 16:39:11 compute-0 nova_compute[117331]:       <entry name="version">32.1.0-0.20251008143712.076498e.el10</entry>
Oct 09 16:39:11 compute-0 nova_compute[117331]:       <entry name="serial">56389b25-6fdb-41ad-ab62-b6872a816180</entry>
Oct 09 16:39:11 compute-0 nova_compute[117331]:       <entry name="uuid">56389b25-6fdb-41ad-ab62-b6872a816180</entry>
Oct 09 16:39:11 compute-0 nova_compute[117331]:       <entry name="family">Virtual Machine</entry>
Oct 09 16:39:11 compute-0 nova_compute[117331]:     </system>
Oct 09 16:39:11 compute-0 nova_compute[117331]:   </sysinfo>
Oct 09 16:39:11 compute-0 nova_compute[117331]:   <os>
Oct 09 16:39:11 compute-0 nova_compute[117331]:     <type arch="x86_64" machine="pc-q35-rhel9.6.0">hvm</type>
Oct 09 16:39:11 compute-0 nova_compute[117331]:     <boot dev="hd"/>
Oct 09 16:39:11 compute-0 nova_compute[117331]:     <smbios mode="sysinfo"/>
Oct 09 16:39:11 compute-0 nova_compute[117331]:   </os>
Oct 09 16:39:11 compute-0 nova_compute[117331]:   <features>
Oct 09 16:39:11 compute-0 nova_compute[117331]:     <acpi/>
Oct 09 16:39:11 compute-0 nova_compute[117331]:     <apic/>
Oct 09 16:39:11 compute-0 nova_compute[117331]:     <vmcoreinfo state="on"/>
Oct 09 16:39:11 compute-0 nova_compute[117331]:   </features>
Oct 09 16:39:11 compute-0 nova_compute[117331]:   <cpu mode="host-model" check="partial">
Oct 09 16:39:11 compute-0 nova_compute[117331]:     <topology sockets="1" dies="1" clusters="1" cores="1" threads="1"/>
Oct 09 16:39:11 compute-0 nova_compute[117331]:   </cpu>
Oct 09 16:39:11 compute-0 nova_compute[117331]:   <clock offset="utc">
Oct 09 16:39:11 compute-0 nova_compute[117331]:     <timer name="pit" tickpolicy="delay"/>
Oct 09 16:39:11 compute-0 nova_compute[117331]:     <timer name="rtc" tickpolicy="catchup"/>
Oct 09 16:39:11 compute-0 nova_compute[117331]:     <timer name="hpet" present="no"/>
Oct 09 16:39:11 compute-0 nova_compute[117331]:   </clock>
Oct 09 16:39:11 compute-0 nova_compute[117331]:   <on_poweroff>destroy</on_poweroff>
Oct 09 16:39:11 compute-0 nova_compute[117331]:   <on_reboot>restart</on_reboot>
Oct 09 16:39:11 compute-0 nova_compute[117331]:   <on_crash>destroy</on_crash>
Oct 09 16:39:11 compute-0 nova_compute[117331]:   <devices>
Oct 09 16:39:11 compute-0 nova_compute[117331]:     <emulator>/usr/libexec/qemu-kvm</emulator>
Oct 09 16:39:11 compute-0 nova_compute[117331]:     <disk type="file" device="disk">
Oct 09 16:39:11 compute-0 nova_compute[117331]:       <driver name="qemu" type="qcow2" cache="none"/>
Oct 09 16:39:11 compute-0 nova_compute[117331]:       <source file="/var/lib/nova/instances/56389b25-6fdb-41ad-ab62-b6872a816180/disk"/>
Oct 09 16:39:11 compute-0 nova_compute[117331]:       <target dev="vda" bus="virtio"/>
Oct 09 16:39:11 compute-0 nova_compute[117331]:       <address type="pci" domain="0x0000" bus="0x03" slot="0x00" function="0x0"/>
Oct 09 16:39:11 compute-0 nova_compute[117331]:     </disk>
Oct 09 16:39:11 compute-0 nova_compute[117331]:     <disk type="file" device="cdrom">
Oct 09 16:39:11 compute-0 nova_compute[117331]:       <driver name="qemu" type="raw" cache="none"/>
Oct 09 16:39:11 compute-0 nova_compute[117331]:       <source file="/var/lib/nova/instances/56389b25-6fdb-41ad-ab62-b6872a816180/disk.config"/>
Oct 09 16:39:11 compute-0 nova_compute[117331]:       <target dev="sda" bus="sata"/>
Oct 09 16:39:11 compute-0 nova_compute[117331]:       <readonly/>
Oct 09 16:39:11 compute-0 nova_compute[117331]:       <address type="drive" controller="0" bus="0" target="0" unit="0"/>
Oct 09 16:39:11 compute-0 nova_compute[117331]:     </disk>
Oct 09 16:39:11 compute-0 nova_compute[117331]:     <controller type="pci" index="0" model="pcie-root"/>
Oct 09 16:39:11 compute-0 nova_compute[117331]:     <controller type="pci" index="1" model="pcie-root-port">
Oct 09 16:39:11 compute-0 nova_compute[117331]:       <model name="pcie-root-port"/>
Oct 09 16:39:11 compute-0 nova_compute[117331]:       <target chassis="1" port="0x10"/>
Oct 09 16:39:11 compute-0 nova_compute[117331]:       <address type="pci" domain="0x0000" bus="0x00" slot="0x02" function="0x0" multifunction="on"/>
Oct 09 16:39:11 compute-0 nova_compute[117331]:     </controller>
Oct 09 16:39:11 compute-0 nova_compute[117331]:     <controller type="pci" index="2" model="pcie-root-port">
Oct 09 16:39:11 compute-0 nova_compute[117331]:       <model name="pcie-root-port"/>
Oct 09 16:39:11 compute-0 nova_compute[117331]:       <target chassis="2" port="0x11"/>
Oct 09 16:39:11 compute-0 nova_compute[117331]:       <address type="pci" domain="0x0000" bus="0x00" slot="0x02" function="0x1"/>
Oct 09 16:39:11 compute-0 nova_compute[117331]:     </controller>
Oct 09 16:39:11 compute-0 nova_compute[117331]:     <controller type="pci" index="3" model="pcie-root-port">
Oct 09 16:39:11 compute-0 nova_compute[117331]:       <model name="pcie-root-port"/>
Oct 09 16:39:11 compute-0 nova_compute[117331]:       <target chassis="3" port="0x12"/>
Oct 09 16:39:11 compute-0 nova_compute[117331]:       <address type="pci" domain="0x0000" bus="0x00" slot="0x02" function="0x2"/>
Oct 09 16:39:11 compute-0 nova_compute[117331]:     </controller>
Oct 09 16:39:11 compute-0 nova_compute[117331]:     <controller type="pci" index="4" model="pcie-root-port">
Oct 09 16:39:11 compute-0 nova_compute[117331]:       <model name="pcie-root-port"/>
Oct 09 16:39:11 compute-0 nova_compute[117331]:       <target chassis="4" port="0x13"/>
Oct 09 16:39:11 compute-0 nova_compute[117331]:       <address type="pci" domain="0x0000" bus="0x00" slot="0x02" function="0x3"/>
Oct 09 16:39:11 compute-0 nova_compute[117331]:     </controller>
Oct 09 16:39:11 compute-0 nova_compute[117331]:     <controller type="pci" index="5" model="pcie-root-port">
Oct 09 16:39:11 compute-0 nova_compute[117331]:       <model name="pcie-root-port"/>
Oct 09 16:39:11 compute-0 nova_compute[117331]:       <target chassis="5" port="0x14"/>
Oct 09 16:39:11 compute-0 nova_compute[117331]:       <address type="pci" domain="0x0000" bus="0x00" slot="0x02" function="0x4"/>
Oct 09 16:39:11 compute-0 nova_compute[117331]:     </controller>
Oct 09 16:39:11 compute-0 nova_compute[117331]:     <controller type="pci" index="6" model="pcie-root-port">
Oct 09 16:39:11 compute-0 nova_compute[117331]:       <model name="pcie-root-port"/>
Oct 09 16:39:11 compute-0 nova_compute[117331]:       <target chassis="6" port="0x15"/>
Oct 09 16:39:11 compute-0 nova_compute[117331]:       <address type="pci" domain="0x0000" bus="0x00" slot="0x02" function="0x5"/>
Oct 09 16:39:11 compute-0 nova_compute[117331]:     </controller>
Oct 09 16:39:11 compute-0 nova_compute[117331]:     <controller type="pci" index="7" model="pcie-root-port">
Oct 09 16:39:11 compute-0 nova_compute[117331]:       <model name="pcie-root-port"/>
Oct 09 16:39:11 compute-0 nova_compute[117331]:       <target chassis="7" port="0x16"/>
Oct 09 16:39:11 compute-0 nova_compute[117331]:       <address type="pci" domain="0x0000" bus="0x00" slot="0x02" function="0x6"/>
Oct 09 16:39:11 compute-0 nova_compute[117331]:     </controller>
Oct 09 16:39:11 compute-0 nova_compute[117331]:     <controller type="pci" index="8" model="pcie-root-port">
Oct 09 16:39:11 compute-0 nova_compute[117331]:       <model name="pcie-root-port"/>
Oct 09 16:39:11 compute-0 nova_compute[117331]:       <target chassis="8" port="0x17"/>
Oct 09 16:39:11 compute-0 nova_compute[117331]:       <address type="pci" domain="0x0000" bus="0x00" slot="0x02" function="0x7"/>
Oct 09 16:39:11 compute-0 nova_compute[117331]:     </controller>
Oct 09 16:39:11 compute-0 nova_compute[117331]:     <controller type="pci" index="9" model="pcie-root-port">
Oct 09 16:39:11 compute-0 nova_compute[117331]:       <model name="pcie-root-port"/>
Oct 09 16:39:11 compute-0 nova_compute[117331]:       <target chassis="9" port="0x18"/>
Oct 09 16:39:11 compute-0 nova_compute[117331]:       <address type="pci" domain="0x0000" bus="0x00" slot="0x03" function="0x0" multifunction="on"/>
Oct 09 16:39:11 compute-0 nova_compute[117331]:     </controller>
Oct 09 16:39:11 compute-0 nova_compute[117331]:     <controller type="pci" index="10" model="pcie-root-port">
Oct 09 16:39:11 compute-0 nova_compute[117331]:       <model name="pcie-root-port"/>
Oct 09 16:39:11 compute-0 nova_compute[117331]:       <target chassis="10" port="0x19"/>
Oct 09 16:39:11 compute-0 nova_compute[117331]:       <address type="pci" domain="0x0000" bus="0x00" slot="0x03" function="0x1"/>
Oct 09 16:39:11 compute-0 nova_compute[117331]:     </controller>
Oct 09 16:39:11 compute-0 nova_compute[117331]:     <controller type="pci" index="11" model="pcie-root-port">
Oct 09 16:39:11 compute-0 nova_compute[117331]:       <model name="pcie-root-port"/>
Oct 09 16:39:11 compute-0 nova_compute[117331]:       <target chassis="11" port="0x1a"/>
Oct 09 16:39:11 compute-0 nova_compute[117331]:       <address type="pci" domain="0x0000" bus="0x00" slot="0x03" function="0x2"/>
Oct 09 16:39:11 compute-0 nova_compute[117331]:     </controller>
Oct 09 16:39:11 compute-0 nova_compute[117331]:     <controller type="pci" index="12" model="pcie-root-port">
Oct 09 16:39:11 compute-0 nova_compute[117331]:       <model name="pcie-root-port"/>
Oct 09 16:39:11 compute-0 nova_compute[117331]:       <target chassis="12" port="0x1b"/>
Oct 09 16:39:11 compute-0 nova_compute[117331]:       <address type="pci" domain="0x0000" bus="0x00" slot="0x03" function="0x3"/>
Oct 09 16:39:11 compute-0 nova_compute[117331]:     </controller>
Oct 09 16:39:11 compute-0 nova_compute[117331]:     <controller type="pci" index="13" model="pcie-root-port">
Oct 09 16:39:11 compute-0 nova_compute[117331]:       <model name="pcie-root-port"/>
Oct 09 16:39:11 compute-0 nova_compute[117331]:       <target chassis="13" port="0x1c"/>
Oct 09 16:39:11 compute-0 nova_compute[117331]:       <address type="pci" domain="0x0000" bus="0x00" slot="0x03" function="0x4"/>
Oct 09 16:39:11 compute-0 nova_compute[117331]:     </controller>
Oct 09 16:39:11 compute-0 nova_compute[117331]:     <controller type="pci" index="14" model="pcie-root-port">
Oct 09 16:39:11 compute-0 nova_compute[117331]:       <model name="pcie-root-port"/>
Oct 09 16:39:11 compute-0 nova_compute[117331]:       <target chassis="14" port="0x1d"/>
Oct 09 16:39:11 compute-0 nova_compute[117331]:       <address type="pci" domain="0x0000" bus="0x00" slot="0x03" function="0x5"/>
Oct 09 16:39:11 compute-0 nova_compute[117331]:     </controller>
Oct 09 16:39:11 compute-0 nova_compute[117331]:     <controller type="pci" index="15" model="pcie-root-port">
Oct 09 16:39:11 compute-0 nova_compute[117331]:       <model name="pcie-root-port"/>
Oct 09 16:39:11 compute-0 nova_compute[117331]:       <target chassis="15" port="0x1e"/>
Oct 09 16:39:11 compute-0 nova_compute[117331]:       <address type="pci" domain="0x0000" bus="0x00" slot="0x03" function="0x6"/>
Oct 09 16:39:11 compute-0 nova_compute[117331]:     </controller>
Oct 09 16:39:11 compute-0 nova_compute[117331]:     <controller type="pci" index="16" model="pcie-root-port">
Oct 09 16:39:11 compute-0 nova_compute[117331]:       <model name="pcie-root-port"/>
Oct 09 16:39:11 compute-0 nova_compute[117331]:       <target chassis="16" port="0x1f"/>
Oct 09 16:39:11 compute-0 nova_compute[117331]:       <address type="pci" domain="0x0000" bus="0x00" slot="0x03" function="0x7"/>
Oct 09 16:39:11 compute-0 nova_compute[117331]:     </controller>
Oct 09 16:39:11 compute-0 nova_compute[117331]:     <controller type="pci" index="17" model="pcie-root-port">
Oct 09 16:39:11 compute-0 nova_compute[117331]:       <model name="pcie-root-port"/>
Oct 09 16:39:11 compute-0 nova_compute[117331]:       <target chassis="17" port="0x20"/>
Oct 09 16:39:11 compute-0 nova_compute[117331]:       <address type="pci" domain="0x0000" bus="0x00" slot="0x04" function="0x0" multifunction="on"/>
Oct 09 16:39:11 compute-0 nova_compute[117331]:     </controller>
Oct 09 16:39:11 compute-0 nova_compute[117331]:     <controller type="pci" index="18" model="pcie-root-port">
Oct 09 16:39:11 compute-0 nova_compute[117331]:       <model name="pcie-root-port"/>
Oct 09 16:39:11 compute-0 nova_compute[117331]:       <target chassis="18" port="0x21"/>
Oct 09 16:39:11 compute-0 nova_compute[117331]:       <address type="pci" domain="0x0000" bus="0x00" slot="0x04" function="0x1"/>
Oct 09 16:39:11 compute-0 nova_compute[117331]:     </controller>
Oct 09 16:39:11 compute-0 nova_compute[117331]:     <controller type="pci" index="19" model="pcie-root-port">
Oct 09 16:39:11 compute-0 nova_compute[117331]:       <model name="pcie-root-port"/>
Oct 09 16:39:11 compute-0 nova_compute[117331]:       <target chassis="19" port="0x22"/>
Oct 09 16:39:11 compute-0 nova_compute[117331]:       <address type="pci" domain="0x0000" bus="0x00" slot="0x04" function="0x2"/>
Oct 09 16:39:11 compute-0 nova_compute[117331]:     </controller>
Oct 09 16:39:11 compute-0 nova_compute[117331]:     <controller type="pci" index="20" model="pcie-root-port">
Oct 09 16:39:11 compute-0 nova_compute[117331]:       <model name="pcie-root-port"/>
Oct 09 16:39:11 compute-0 nova_compute[117331]:       <target chassis="20" port="0x23"/>
Oct 09 16:39:11 compute-0 nova_compute[117331]:       <address type="pci" domain="0x0000" bus="0x00" slot="0x04" function="0x3"/>
Oct 09 16:39:11 compute-0 nova_compute[117331]:     </controller>
Oct 09 16:39:11 compute-0 nova_compute[117331]:     <controller type="pci" index="21" model="pcie-root-port">
Oct 09 16:39:11 compute-0 nova_compute[117331]:       <model name="pcie-root-port"/>
Oct 09 16:39:11 compute-0 nova_compute[117331]:       <target chassis="21" port="0x24"/>
Oct 09 16:39:11 compute-0 nova_compute[117331]:       <address type="pci" domain="0x0000" bus="0x00" slot="0x04" function="0x4"/>
Oct 09 16:39:11 compute-0 nova_compute[117331]:     </controller>
Oct 09 16:39:11 compute-0 nova_compute[117331]:     <controller type="pci" index="22" model="pcie-root-port">
Oct 09 16:39:11 compute-0 nova_compute[117331]:       <model name="pcie-root-port"/>
Oct 09 16:39:11 compute-0 nova_compute[117331]:       <target chassis="22" port="0x25"/>
Oct 09 16:39:11 compute-0 nova_compute[117331]:       <address type="pci" domain="0x0000" bus="0x00" slot="0x04" function="0x5"/>
Oct 09 16:39:11 compute-0 nova_compute[117331]:     </controller>
Oct 09 16:39:11 compute-0 nova_compute[117331]:     <controller type="pci" index="23" model="pcie-root-port">
Oct 09 16:39:11 compute-0 nova_compute[117331]:       <model name="pcie-root-port"/>
Oct 09 16:39:11 compute-0 nova_compute[117331]:       <target chassis="23" port="0x26"/>
Oct 09 16:39:11 compute-0 nova_compute[117331]:       <address type="pci" domain="0x0000" bus="0x00" slot="0x04" function="0x6"/>
Oct 09 16:39:11 compute-0 nova_compute[117331]:     </controller>
Oct 09 16:39:11 compute-0 nova_compute[117331]:     <controller type="pci" index="24" model="pcie-root-port">
Oct 09 16:39:11 compute-0 nova_compute[117331]:       <model name="pcie-root-port"/>
Oct 09 16:39:11 compute-0 nova_compute[117331]:       <target chassis="24" port="0x27"/>
Oct 09 16:39:11 compute-0 nova_compute[117331]:       <address type="pci" domain="0x0000" bus="0x00" slot="0x04" function="0x7"/>
Oct 09 16:39:11 compute-0 nova_compute[117331]:     </controller>
Oct 09 16:39:11 compute-0 nova_compute[117331]:     <controller type="pci" index="25" model="pcie-root-port">
Oct 09 16:39:11 compute-0 nova_compute[117331]:       <model name="pcie-root-port"/>
Oct 09 16:39:11 compute-0 nova_compute[117331]:       <target chassis="25" port="0x28"/>
Oct 09 16:39:11 compute-0 nova_compute[117331]:       <address type="pci" domain="0x0000" bus="0x00" slot="0x05" function="0x0"/>
Oct 09 16:39:11 compute-0 nova_compute[117331]:     </controller>
Oct 09 16:39:11 compute-0 nova_compute[117331]:     <controller type="pci" index="26" model="pcie-to-pci-bridge">
Oct 09 16:39:11 compute-0 nova_compute[117331]:       <model name="pcie-pci-bridge"/>
Oct 09 16:39:11 compute-0 nova_compute[117331]:       <address type="pci" domain="0x0000" bus="0x01" slot="0x00" function="0x0"/>
Oct 09 16:39:11 compute-0 nova_compute[117331]:     </controller>
Oct 09 16:39:11 compute-0 nova_compute[117331]:     <controller type="usb" index="0" model="piix3-uhci">
Oct 09 16:39:11 compute-0 nova_compute[117331]:       <address type="pci" domain="0x0000" bus="0x1a" slot="0x01" function="0x0"/>
Oct 09 16:39:11 compute-0 nova_compute[117331]:     </controller>
Oct 09 16:39:11 compute-0 nova_compute[117331]:     <controller type="sata" index="0">
Oct 09 16:39:11 compute-0 nova_compute[117331]:       <address type="pci" domain="0x0000" bus="0x00" slot="0x1f" function="0x2"/>
Oct 09 16:39:11 compute-0 nova_compute[117331]:     </controller>
Oct 09 16:39:11 compute-0 nova_compute[117331]:     <interface type="ethernet"><mac address="fa:16:3e:48:bc:99"/><model type="virtio"/><driver name="vhost" rx_queue_size="512"/><mtu size="1442"/><target dev="tap50e8afe7-f8"/><address type="pci" domain="0x0000" bus="0x02" slot="0x00" function="0x0"/>
Oct 09 16:39:11 compute-0 nova_compute[117331]:     </interface><serial type="pty">
Oct 09 16:39:11 compute-0 nova_compute[117331]:       <log file="/var/lib/nova/instances/56389b25-6fdb-41ad-ab62-b6872a816180/console.log" append="off"/>
Oct 09 16:39:11 compute-0 nova_compute[117331]:       <target type="isa-serial" port="0">
Oct 09 16:39:11 compute-0 nova_compute[117331]:         <model name="isa-serial"/>
Oct 09 16:39:11 compute-0 nova_compute[117331]:       </target>
Oct 09 16:39:11 compute-0 nova_compute[117331]:     </serial>
Oct 09 16:39:11 compute-0 nova_compute[117331]:     <console type="pty">
Oct 09 16:39:11 compute-0 nova_compute[117331]:       <log file="/var/lib/nova/instances/56389b25-6fdb-41ad-ab62-b6872a816180/console.log" append="off"/>
Oct 09 16:39:11 compute-0 nova_compute[117331]:       <target type="serial" port="0"/>
Oct 09 16:39:11 compute-0 nova_compute[117331]:     </console>
Oct 09 16:39:11 compute-0 nova_compute[117331]:     <input type="tablet" bus="usb">
Oct 09 16:39:11 compute-0 nova_compute[117331]:       <address type="usb" bus="0" port="1"/>
Oct 09 16:39:11 compute-0 nova_compute[117331]:     </input>
Oct 09 16:39:11 compute-0 nova_compute[117331]:     <input type="mouse" bus="ps2"/>
Oct 09 16:39:11 compute-0 nova_compute[117331]:     <graphics type="vnc" port="-1" autoport="yes" listen="::">
Oct 09 16:39:11 compute-0 nova_compute[117331]:       <listen type="address" address="::"/>
Oct 09 16:39:11 compute-0 nova_compute[117331]:     </graphics>
Oct 09 16:39:11 compute-0 nova_compute[117331]:     <video>
Oct 09 16:39:11 compute-0 nova_compute[117331]:       <model type="virtio" heads="1" primary="yes"/>
Oct 09 16:39:11 compute-0 nova_compute[117331]:       <address type="pci" domain="0x0000" bus="0x00" slot="0x01" function="0x0"/>
Oct 09 16:39:11 compute-0 nova_compute[117331]:     </video>
Oct 09 16:39:11 compute-0 nova_compute[117331]:     <memballoon model="virtio" autodeflate="on" freePageReporting="on">
Oct 09 16:39:11 compute-0 nova_compute[117331]:       <stats period="10"/>
Oct 09 16:39:11 compute-0 nova_compute[117331]:       <address type="pci" domain="0x0000" bus="0x04" slot="0x00" function="0x0"/>
Oct 09 16:39:11 compute-0 nova_compute[117331]:     </memballoon>
Oct 09 16:39:11 compute-0 nova_compute[117331]:     <rng model="virtio">
Oct 09 16:39:11 compute-0 nova_compute[117331]:       <backend model="random">/dev/urandom</backend>
Oct 09 16:39:11 compute-0 nova_compute[117331]:       <address type="pci" domain="0x0000" bus="0x05" slot="0x00" function="0x0"/>
Oct 09 16:39:11 compute-0 nova_compute[117331]:     </rng>
Oct 09 16:39:11 compute-0 nova_compute[117331]:   </devices>
Oct 09 16:39:11 compute-0 nova_compute[117331]:   <seclabel type="dynamic" model="selinux" relabel="yes"/>
Oct 09 16:39:11 compute-0 nova_compute[117331]: </domain>
Oct 09 16:39:11 compute-0 nova_compute[117331]:  _remove_cpu_shared_set_xml /usr/lib/python3.12/site-packages/nova/virt/libvirt/migration.py:250
Oct 09 16:39:11 compute-0 nova_compute[117331]: 2025-10-09 16:39:11.570 2 DEBUG nova.virt.libvirt.migration [None req-dc1aa2ae-bf27-4733-9d1e-fa72df9f61f8 005258eefa5345409efe13f6a1de22ab c076c42271004e7aa86e84416e33f826 - - default default] _update_pci_xml output xml=<domain type="kvm">
Oct 09 16:39:11 compute-0 nova_compute[117331]:   <name>instance-0000001d</name>
Oct 09 16:39:11 compute-0 nova_compute[117331]:   <uuid>56389b25-6fdb-41ad-ab62-b6872a816180</uuid>
Oct 09 16:39:11 compute-0 nova_compute[117331]:   <metadata>
Oct 09 16:39:11 compute-0 nova_compute[117331]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Oct 09 16:39:11 compute-0 nova_compute[117331]:       <nova:package version="32.1.0-0.20251008143712.076498e.el10"/>
Oct 09 16:39:11 compute-0 nova_compute[117331]:       <nova:name>tempest-TestExecuteWorkloadStabilizationStrategy-server-623877253</nova:name>
Oct 09 16:39:11 compute-0 nova_compute[117331]:       <nova:creationTime>2025-10-09 16:38:23</nova:creationTime>
Oct 09 16:39:11 compute-0 nova_compute[117331]:       <nova:flavor name="m1.nano" id="5aeb5bf9-70f3-4ab6-b418-2c3319a7d2b3">
Oct 09 16:39:11 compute-0 nova_compute[117331]:         <nova:memory>128</nova:memory>
Oct 09 16:39:11 compute-0 nova_compute[117331]:         <nova:disk>1</nova:disk>
Oct 09 16:39:11 compute-0 nova_compute[117331]:         <nova:swap>0</nova:swap>
Oct 09 16:39:11 compute-0 nova_compute[117331]:         <nova:ephemeral>0</nova:ephemeral>
Oct 09 16:39:11 compute-0 nova_compute[117331]:         <nova:vcpus>1</nova:vcpus>
Oct 09 16:39:11 compute-0 nova_compute[117331]:         <nova:extraSpecs>
Oct 09 16:39:11 compute-0 nova_compute[117331]:           <nova:extraSpec name="hw_rng:allowed">True</nova:extraSpec>
Oct 09 16:39:11 compute-0 nova_compute[117331]:         </nova:extraSpecs>
Oct 09 16:39:11 compute-0 nova_compute[117331]:       </nova:flavor>
Oct 09 16:39:11 compute-0 nova_compute[117331]:       <nova:image uuid="b7d6e0af-25e4-4227-9dc6-43143898ceee">
Oct 09 16:39:11 compute-0 nova_compute[117331]:         <nova:containerFormat>bare</nova:containerFormat>
Oct 09 16:39:11 compute-0 nova_compute[117331]:         <nova:diskFormat>qcow2</nova:diskFormat>
Oct 09 16:39:11 compute-0 nova_compute[117331]:         <nova:minDisk>1</nova:minDisk>
Oct 09 16:39:11 compute-0 nova_compute[117331]:         <nova:minRam>0</nova:minRam>
Oct 09 16:39:11 compute-0 nova_compute[117331]:         <nova:properties>
Oct 09 16:39:11 compute-0 nova_compute[117331]:           <nova:property name="hw_rng_model">virtio</nova:property>
Oct 09 16:39:11 compute-0 nova_compute[117331]:         </nova:properties>
Oct 09 16:39:11 compute-0 nova_compute[117331]:       </nova:image>
Oct 09 16:39:11 compute-0 nova_compute[117331]:       <nova:owner>
Oct 09 16:39:11 compute-0 nova_compute[117331]:         <nova:user uuid="685b4924c5a04af7ae6f4a328bb50f14">tempest-TestExecuteWorkloadStabilizationStrategy-1000021673-project-admin</nova:user>
Oct 09 16:39:11 compute-0 nova_compute[117331]:         <nova:project uuid="a60acfe52e4b4b7f912654a59f0978b7">tempest-TestExecuteWorkloadStabilizationStrategy-1000021673</nova:project>
Oct 09 16:39:11 compute-0 nova_compute[117331]:       </nova:owner>
Oct 09 16:39:11 compute-0 nova_compute[117331]:       <nova:root type="image" uuid="b7d6e0af-25e4-4227-9dc6-43143898ceee"/>
Oct 09 16:39:11 compute-0 nova_compute[117331]:       <nova:ports>
Oct 09 16:39:11 compute-0 nova_compute[117331]:         <nova:port uuid="50e8afe7-f8b4-4c50-b99a-5c611b86fe7e">
Oct 09 16:39:11 compute-0 nova_compute[117331]:           <nova:ip type="fixed" address="10.100.0.11" ipVersion="4"/>
Oct 09 16:39:11 compute-0 nova_compute[117331]:         </nova:port>
Oct 09 16:39:11 compute-0 nova_compute[117331]:       </nova:ports>
Oct 09 16:39:11 compute-0 nova_compute[117331]:     </nova:instance>
Oct 09 16:39:11 compute-0 nova_compute[117331]:   </metadata>
Oct 09 16:39:11 compute-0 nova_compute[117331]:   <memory unit="KiB">131072</memory>
Oct 09 16:39:11 compute-0 nova_compute[117331]:   <currentMemory unit="KiB">131072</currentMemory>
Oct 09 16:39:11 compute-0 nova_compute[117331]:   <vcpu placement="static">1</vcpu>
Oct 09 16:39:11 compute-0 nova_compute[117331]:   <resource>
Oct 09 16:39:11 compute-0 nova_compute[117331]:     <partition>/machine</partition>
Oct 09 16:39:11 compute-0 nova_compute[117331]:   </resource>
Oct 09 16:39:11 compute-0 nova_compute[117331]:   <sysinfo type="smbios">
Oct 09 16:39:11 compute-0 nova_compute[117331]:     <system>
Oct 09 16:39:11 compute-0 nova_compute[117331]:       <entry name="manufacturer">RDO</entry>
Oct 09 16:39:11 compute-0 nova_compute[117331]:       <entry name="product">OpenStack Compute</entry>
Oct 09 16:39:11 compute-0 nova_compute[117331]:       <entry name="version">32.1.0-0.20251008143712.076498e.el10</entry>
Oct 09 16:39:11 compute-0 nova_compute[117331]:       <entry name="serial">56389b25-6fdb-41ad-ab62-b6872a816180</entry>
Oct 09 16:39:11 compute-0 nova_compute[117331]:       <entry name="uuid">56389b25-6fdb-41ad-ab62-b6872a816180</entry>
Oct 09 16:39:11 compute-0 nova_compute[117331]:       <entry name="family">Virtual Machine</entry>
Oct 09 16:39:11 compute-0 nova_compute[117331]:     </system>
Oct 09 16:39:11 compute-0 nova_compute[117331]:   </sysinfo>
Oct 09 16:39:11 compute-0 nova_compute[117331]:   <os>
Oct 09 16:39:11 compute-0 nova_compute[117331]:     <type arch="x86_64" machine="pc-q35-rhel9.6.0">hvm</type>
Oct 09 16:39:11 compute-0 nova_compute[117331]:     <boot dev="hd"/>
Oct 09 16:39:11 compute-0 nova_compute[117331]:     <smbios mode="sysinfo"/>
Oct 09 16:39:11 compute-0 nova_compute[117331]:   </os>
Oct 09 16:39:11 compute-0 nova_compute[117331]:   <features>
Oct 09 16:39:11 compute-0 nova_compute[117331]:     <acpi/>
Oct 09 16:39:11 compute-0 nova_compute[117331]:     <apic/>
Oct 09 16:39:11 compute-0 nova_compute[117331]:     <vmcoreinfo state="on"/>
Oct 09 16:39:11 compute-0 nova_compute[117331]:   </features>
Oct 09 16:39:11 compute-0 nova_compute[117331]:   <cpu mode="host-model" check="partial">
Oct 09 16:39:11 compute-0 nova_compute[117331]:     <topology sockets="1" dies="1" clusters="1" cores="1" threads="1"/>
Oct 09 16:39:11 compute-0 nova_compute[117331]:   </cpu>
Oct 09 16:39:11 compute-0 nova_compute[117331]:   <clock offset="utc">
Oct 09 16:39:11 compute-0 nova_compute[117331]:     <timer name="pit" tickpolicy="delay"/>
Oct 09 16:39:11 compute-0 nova_compute[117331]:     <timer name="rtc" tickpolicy="catchup"/>
Oct 09 16:39:11 compute-0 nova_compute[117331]:     <timer name="hpet" present="no"/>
Oct 09 16:39:11 compute-0 nova_compute[117331]:   </clock>
Oct 09 16:39:11 compute-0 nova_compute[117331]:   <on_poweroff>destroy</on_poweroff>
Oct 09 16:39:11 compute-0 nova_compute[117331]:   <on_reboot>restart</on_reboot>
Oct 09 16:39:11 compute-0 nova_compute[117331]:   <on_crash>destroy</on_crash>
Oct 09 16:39:11 compute-0 nova_compute[117331]:   <devices>
Oct 09 16:39:11 compute-0 nova_compute[117331]:     <emulator>/usr/libexec/qemu-kvm</emulator>
Oct 09 16:39:11 compute-0 nova_compute[117331]:     <disk type="file" device="disk">
Oct 09 16:39:11 compute-0 nova_compute[117331]:       <driver name="qemu" type="qcow2" cache="none"/>
Oct 09 16:39:11 compute-0 nova_compute[117331]:       <source file="/var/lib/nova/instances/56389b25-6fdb-41ad-ab62-b6872a816180/disk"/>
Oct 09 16:39:11 compute-0 nova_compute[117331]:       <target dev="vda" bus="virtio"/>
Oct 09 16:39:11 compute-0 nova_compute[117331]:       <address type="pci" domain="0x0000" bus="0x03" slot="0x00" function="0x0"/>
Oct 09 16:39:11 compute-0 nova_compute[117331]:     </disk>
Oct 09 16:39:11 compute-0 nova_compute[117331]:     <disk type="file" device="cdrom">
Oct 09 16:39:11 compute-0 nova_compute[117331]:       <driver name="qemu" type="raw" cache="none"/>
Oct 09 16:39:11 compute-0 nova_compute[117331]:       <source file="/var/lib/nova/instances/56389b25-6fdb-41ad-ab62-b6872a816180/disk.config"/>
Oct 09 16:39:11 compute-0 nova_compute[117331]:       <target dev="sda" bus="sata"/>
Oct 09 16:39:11 compute-0 nova_compute[117331]:       <readonly/>
Oct 09 16:39:11 compute-0 nova_compute[117331]:       <address type="drive" controller="0" bus="0" target="0" unit="0"/>
Oct 09 16:39:11 compute-0 nova_compute[117331]:     </disk>
Oct 09 16:39:11 compute-0 nova_compute[117331]:     <controller type="pci" index="0" model="pcie-root"/>
Oct 09 16:39:11 compute-0 nova_compute[117331]:     <controller type="pci" index="1" model="pcie-root-port">
Oct 09 16:39:11 compute-0 nova_compute[117331]:       <model name="pcie-root-port"/>
Oct 09 16:39:11 compute-0 nova_compute[117331]:       <target chassis="1" port="0x10"/>
Oct 09 16:39:11 compute-0 nova_compute[117331]:       <address type="pci" domain="0x0000" bus="0x00" slot="0x02" function="0x0" multifunction="on"/>
Oct 09 16:39:11 compute-0 nova_compute[117331]:     </controller>
Oct 09 16:39:11 compute-0 nova_compute[117331]:     <controller type="pci" index="2" model="pcie-root-port">
Oct 09 16:39:11 compute-0 nova_compute[117331]:       <model name="pcie-root-port"/>
Oct 09 16:39:11 compute-0 nova_compute[117331]:       <target chassis="2" port="0x11"/>
Oct 09 16:39:11 compute-0 nova_compute[117331]:       <address type="pci" domain="0x0000" bus="0x00" slot="0x02" function="0x1"/>
Oct 09 16:39:11 compute-0 nova_compute[117331]:     </controller>
Oct 09 16:39:11 compute-0 nova_compute[117331]:     <controller type="pci" index="3" model="pcie-root-port">
Oct 09 16:39:11 compute-0 nova_compute[117331]:       <model name="pcie-root-port"/>
Oct 09 16:39:11 compute-0 nova_compute[117331]:       <target chassis="3" port="0x12"/>
Oct 09 16:39:11 compute-0 nova_compute[117331]:       <address type="pci" domain="0x0000" bus="0x00" slot="0x02" function="0x2"/>
Oct 09 16:39:11 compute-0 nova_compute[117331]:     </controller>
Oct 09 16:39:11 compute-0 nova_compute[117331]:     <controller type="pci" index="4" model="pcie-root-port">
Oct 09 16:39:11 compute-0 nova_compute[117331]:       <model name="pcie-root-port"/>
Oct 09 16:39:11 compute-0 nova_compute[117331]:       <target chassis="4" port="0x13"/>
Oct 09 16:39:11 compute-0 nova_compute[117331]:       <address type="pci" domain="0x0000" bus="0x00" slot="0x02" function="0x3"/>
Oct 09 16:39:11 compute-0 nova_compute[117331]:     </controller>
Oct 09 16:39:11 compute-0 nova_compute[117331]:     <controller type="pci" index="5" model="pcie-root-port">
Oct 09 16:39:11 compute-0 nova_compute[117331]:       <model name="pcie-root-port"/>
Oct 09 16:39:11 compute-0 nova_compute[117331]:       <target chassis="5" port="0x14"/>
Oct 09 16:39:11 compute-0 nova_compute[117331]:       <address type="pci" domain="0x0000" bus="0x00" slot="0x02" function="0x4"/>
Oct 09 16:39:11 compute-0 nova_compute[117331]:     </controller>
Oct 09 16:39:11 compute-0 nova_compute[117331]:     <controller type="pci" index="6" model="pcie-root-port">
Oct 09 16:39:11 compute-0 nova_compute[117331]:       <model name="pcie-root-port"/>
Oct 09 16:39:11 compute-0 nova_compute[117331]:       <target chassis="6" port="0x15"/>
Oct 09 16:39:11 compute-0 nova_compute[117331]:       <address type="pci" domain="0x0000" bus="0x00" slot="0x02" function="0x5"/>
Oct 09 16:39:11 compute-0 nova_compute[117331]:     </controller>
Oct 09 16:39:11 compute-0 nova_compute[117331]:     <controller type="pci" index="7" model="pcie-root-port">
Oct 09 16:39:11 compute-0 nova_compute[117331]:       <model name="pcie-root-port"/>
Oct 09 16:39:11 compute-0 nova_compute[117331]:       <target chassis="7" port="0x16"/>
Oct 09 16:39:11 compute-0 nova_compute[117331]:       <address type="pci" domain="0x0000" bus="0x00" slot="0x02" function="0x6"/>
Oct 09 16:39:11 compute-0 nova_compute[117331]:     </controller>
Oct 09 16:39:11 compute-0 nova_compute[117331]:     <controller type="pci" index="8" model="pcie-root-port">
Oct 09 16:39:11 compute-0 nova_compute[117331]:       <model name="pcie-root-port"/>
Oct 09 16:39:11 compute-0 nova_compute[117331]:       <target chassis="8" port="0x17"/>
Oct 09 16:39:11 compute-0 nova_compute[117331]:       <address type="pci" domain="0x0000" bus="0x00" slot="0x02" function="0x7"/>
Oct 09 16:39:11 compute-0 nova_compute[117331]:     </controller>
Oct 09 16:39:11 compute-0 nova_compute[117331]:     <controller type="pci" index="9" model="pcie-root-port">
Oct 09 16:39:11 compute-0 nova_compute[117331]:       <model name="pcie-root-port"/>
Oct 09 16:39:11 compute-0 nova_compute[117331]:       <target chassis="9" port="0x18"/>
Oct 09 16:39:11 compute-0 nova_compute[117331]:       <address type="pci" domain="0x0000" bus="0x00" slot="0x03" function="0x0" multifunction="on"/>
Oct 09 16:39:11 compute-0 nova_compute[117331]:     </controller>
Oct 09 16:39:11 compute-0 nova_compute[117331]:     <controller type="pci" index="10" model="pcie-root-port">
Oct 09 16:39:11 compute-0 nova_compute[117331]:       <model name="pcie-root-port"/>
Oct 09 16:39:11 compute-0 nova_compute[117331]:       <target chassis="10" port="0x19"/>
Oct 09 16:39:11 compute-0 nova_compute[117331]:       <address type="pci" domain="0x0000" bus="0x00" slot="0x03" function="0x1"/>
Oct 09 16:39:11 compute-0 nova_compute[117331]:     </controller>
Oct 09 16:39:11 compute-0 nova_compute[117331]:     <controller type="pci" index="11" model="pcie-root-port">
Oct 09 16:39:11 compute-0 nova_compute[117331]:       <model name="pcie-root-port"/>
Oct 09 16:39:11 compute-0 nova_compute[117331]:       <target chassis="11" port="0x1a"/>
Oct 09 16:39:11 compute-0 nova_compute[117331]:       <address type="pci" domain="0x0000" bus="0x00" slot="0x03" function="0x2"/>
Oct 09 16:39:11 compute-0 nova_compute[117331]:     </controller>
Oct 09 16:39:11 compute-0 nova_compute[117331]:     <controller type="pci" index="12" model="pcie-root-port">
Oct 09 16:39:11 compute-0 nova_compute[117331]:       <model name="pcie-root-port"/>
Oct 09 16:39:11 compute-0 nova_compute[117331]:       <target chassis="12" port="0x1b"/>
Oct 09 16:39:11 compute-0 nova_compute[117331]:       <address type="pci" domain="0x0000" bus="0x00" slot="0x03" function="0x3"/>
Oct 09 16:39:11 compute-0 nova_compute[117331]:     </controller>
Oct 09 16:39:11 compute-0 nova_compute[117331]:     <controller type="pci" index="13" model="pcie-root-port">
Oct 09 16:39:11 compute-0 nova_compute[117331]:       <model name="pcie-root-port"/>
Oct 09 16:39:11 compute-0 nova_compute[117331]:       <target chassis="13" port="0x1c"/>
Oct 09 16:39:11 compute-0 nova_compute[117331]:       <address type="pci" domain="0x0000" bus="0x00" slot="0x03" function="0x4"/>
Oct 09 16:39:11 compute-0 nova_compute[117331]:     </controller>
Oct 09 16:39:11 compute-0 nova_compute[117331]:     <controller type="pci" index="14" model="pcie-root-port">
Oct 09 16:39:11 compute-0 nova_compute[117331]:       <model name="pcie-root-port"/>
Oct 09 16:39:11 compute-0 nova_compute[117331]:       <target chassis="14" port="0x1d"/>
Oct 09 16:39:11 compute-0 nova_compute[117331]:       <address type="pci" domain="0x0000" bus="0x00" slot="0x03" function="0x5"/>
Oct 09 16:39:11 compute-0 nova_compute[117331]:     </controller>
Oct 09 16:39:11 compute-0 nova_compute[117331]:     <controller type="pci" index="15" model="pcie-root-port">
Oct 09 16:39:11 compute-0 nova_compute[117331]:       <model name="pcie-root-port"/>
Oct 09 16:39:11 compute-0 nova_compute[117331]:       <target chassis="15" port="0x1e"/>
Oct 09 16:39:11 compute-0 nova_compute[117331]:       <address type="pci" domain="0x0000" bus="0x00" slot="0x03" function="0x6"/>
Oct 09 16:39:11 compute-0 nova_compute[117331]:     </controller>
Oct 09 16:39:11 compute-0 nova_compute[117331]:     <controller type="pci" index="16" model="pcie-root-port">
Oct 09 16:39:11 compute-0 nova_compute[117331]:       <model name="pcie-root-port"/>
Oct 09 16:39:11 compute-0 nova_compute[117331]:       <target chassis="16" port="0x1f"/>
Oct 09 16:39:11 compute-0 nova_compute[117331]:       <address type="pci" domain="0x0000" bus="0x00" slot="0x03" function="0x7"/>
Oct 09 16:39:11 compute-0 nova_compute[117331]:     </controller>
Oct 09 16:39:11 compute-0 nova_compute[117331]:     <controller type="pci" index="17" model="pcie-root-port">
Oct 09 16:39:11 compute-0 nova_compute[117331]:       <model name="pcie-root-port"/>
Oct 09 16:39:11 compute-0 nova_compute[117331]:       <target chassis="17" port="0x20"/>
Oct 09 16:39:11 compute-0 nova_compute[117331]:       <address type="pci" domain="0x0000" bus="0x00" slot="0x04" function="0x0" multifunction="on"/>
Oct 09 16:39:11 compute-0 nova_compute[117331]:     </controller>
Oct 09 16:39:11 compute-0 nova_compute[117331]:     <controller type="pci" index="18" model="pcie-root-port">
Oct 09 16:39:11 compute-0 nova_compute[117331]:       <model name="pcie-root-port"/>
Oct 09 16:39:11 compute-0 nova_compute[117331]:       <target chassis="18" port="0x21"/>
Oct 09 16:39:11 compute-0 nova_compute[117331]:       <address type="pci" domain="0x0000" bus="0x00" slot="0x04" function="0x1"/>
Oct 09 16:39:11 compute-0 nova_compute[117331]:     </controller>
Oct 09 16:39:11 compute-0 nova_compute[117331]:     <controller type="pci" index="19" model="pcie-root-port">
Oct 09 16:39:11 compute-0 nova_compute[117331]:       <model name="pcie-root-port"/>
Oct 09 16:39:11 compute-0 nova_compute[117331]:       <target chassis="19" port="0x22"/>
Oct 09 16:39:11 compute-0 nova_compute[117331]:       <address type="pci" domain="0x0000" bus="0x00" slot="0x04" function="0x2"/>
Oct 09 16:39:11 compute-0 nova_compute[117331]:     </controller>
Oct 09 16:39:11 compute-0 nova_compute[117331]:     <controller type="pci" index="20" model="pcie-root-port">
Oct 09 16:39:11 compute-0 nova_compute[117331]:       <model name="pcie-root-port"/>
Oct 09 16:39:11 compute-0 nova_compute[117331]:       <target chassis="20" port="0x23"/>
Oct 09 16:39:11 compute-0 nova_compute[117331]:       <address type="pci" domain="0x0000" bus="0x00" slot="0x04" function="0x3"/>
Oct 09 16:39:11 compute-0 nova_compute[117331]:     </controller>
Oct 09 16:39:11 compute-0 nova_compute[117331]:     <controller type="pci" index="21" model="pcie-root-port">
Oct 09 16:39:11 compute-0 nova_compute[117331]:       <model name="pcie-root-port"/>
Oct 09 16:39:11 compute-0 nova_compute[117331]:       <target chassis="21" port="0x24"/>
Oct 09 16:39:11 compute-0 nova_compute[117331]:       <address type="pci" domain="0x0000" bus="0x00" slot="0x04" function="0x4"/>
Oct 09 16:39:11 compute-0 nova_compute[117331]:     </controller>
Oct 09 16:39:11 compute-0 nova_compute[117331]:     <controller type="pci" index="22" model="pcie-root-port">
Oct 09 16:39:11 compute-0 nova_compute[117331]:       <model name="pcie-root-port"/>
Oct 09 16:39:11 compute-0 nova_compute[117331]:       <target chassis="22" port="0x25"/>
Oct 09 16:39:11 compute-0 nova_compute[117331]:       <address type="pci" domain="0x0000" bus="0x00" slot="0x04" function="0x5"/>
Oct 09 16:39:11 compute-0 nova_compute[117331]:     </controller>
Oct 09 16:39:11 compute-0 nova_compute[117331]:     <controller type="pci" index="23" model="pcie-root-port">
Oct 09 16:39:11 compute-0 nova_compute[117331]:       <model name="pcie-root-port"/>
Oct 09 16:39:11 compute-0 nova_compute[117331]:       <target chassis="23" port="0x26"/>
Oct 09 16:39:11 compute-0 nova_compute[117331]:       <address type="pci" domain="0x0000" bus="0x00" slot="0x04" function="0x6"/>
Oct 09 16:39:11 compute-0 nova_compute[117331]:     </controller>
Oct 09 16:39:11 compute-0 nova_compute[117331]:     <controller type="pci" index="24" model="pcie-root-port">
Oct 09 16:39:11 compute-0 nova_compute[117331]:       <model name="pcie-root-port"/>
Oct 09 16:39:11 compute-0 nova_compute[117331]:       <target chassis="24" port="0x27"/>
Oct 09 16:39:11 compute-0 nova_compute[117331]:       <address type="pci" domain="0x0000" bus="0x00" slot="0x04" function="0x7"/>
Oct 09 16:39:11 compute-0 nova_compute[117331]:     </controller>
Oct 09 16:39:11 compute-0 nova_compute[117331]:     <controller type="pci" index="25" model="pcie-root-port">
Oct 09 16:39:11 compute-0 nova_compute[117331]:       <model name="pcie-root-port"/>
Oct 09 16:39:11 compute-0 nova_compute[117331]:       <target chassis="25" port="0x28"/>
Oct 09 16:39:11 compute-0 nova_compute[117331]:       <address type="pci" domain="0x0000" bus="0x00" slot="0x05" function="0x0"/>
Oct 09 16:39:11 compute-0 nova_compute[117331]:     </controller>
Oct 09 16:39:11 compute-0 nova_compute[117331]:     <controller type="pci" index="26" model="pcie-to-pci-bridge">
Oct 09 16:39:11 compute-0 nova_compute[117331]:       <model name="pcie-pci-bridge"/>
Oct 09 16:39:11 compute-0 nova_compute[117331]:       <address type="pci" domain="0x0000" bus="0x01" slot="0x00" function="0x0"/>
Oct 09 16:39:11 compute-0 nova_compute[117331]:     </controller>
Oct 09 16:39:11 compute-0 nova_compute[117331]:     <controller type="usb" index="0" model="piix3-uhci">
Oct 09 16:39:11 compute-0 nova_compute[117331]:       <address type="pci" domain="0x0000" bus="0x1a" slot="0x01" function="0x0"/>
Oct 09 16:39:11 compute-0 nova_compute[117331]:     </controller>
Oct 09 16:39:11 compute-0 nova_compute[117331]:     <controller type="sata" index="0">
Oct 09 16:39:11 compute-0 nova_compute[117331]:       <address type="pci" domain="0x0000" bus="0x00" slot="0x1f" function="0x2"/>
Oct 09 16:39:11 compute-0 nova_compute[117331]:     </controller>
Oct 09 16:39:11 compute-0 nova_compute[117331]:     <interface type="ethernet">
Oct 09 16:39:11 compute-0 nova_compute[117331]:       <mac address="fa:16:3e:48:bc:99"/>
Oct 09 16:39:11 compute-0 nova_compute[117331]:       <model type="virtio"/>
Oct 09 16:39:11 compute-0 nova_compute[117331]:       <driver name="vhost" rx_queue_size="512"/>
Oct 09 16:39:11 compute-0 nova_compute[117331]:       <mtu size="1442"/>
Oct 09 16:39:11 compute-0 nova_compute[117331]:       <target dev="tap50e8afe7-f8"/>
Oct 09 16:39:11 compute-0 nova_compute[117331]:       <address type="pci" domain="0x0000" bus="0x02" slot="0x00" function="0x0"/>
Oct 09 16:39:11 compute-0 nova_compute[117331]:     </interface>
Oct 09 16:39:11 compute-0 nova_compute[117331]:     <serial type="pty">
Oct 09 16:39:11 compute-0 nova_compute[117331]:       <log file="/var/lib/nova/instances/56389b25-6fdb-41ad-ab62-b6872a816180/console.log" append="off"/>
Oct 09 16:39:11 compute-0 nova_compute[117331]:       <target type="isa-serial" port="0">
Oct 09 16:39:11 compute-0 nova_compute[117331]:         <model name="isa-serial"/>
Oct 09 16:39:11 compute-0 nova_compute[117331]:       </target>
Oct 09 16:39:11 compute-0 nova_compute[117331]:     </serial>
Oct 09 16:39:11 compute-0 nova_compute[117331]:     <console type="pty">
Oct 09 16:39:11 compute-0 nova_compute[117331]:       <log file="/var/lib/nova/instances/56389b25-6fdb-41ad-ab62-b6872a816180/console.log" append="off"/>
Oct 09 16:39:11 compute-0 nova_compute[117331]:       <target type="serial" port="0"/>
Oct 09 16:39:11 compute-0 nova_compute[117331]:     </console>
Oct 09 16:39:11 compute-0 nova_compute[117331]:     <input type="tablet" bus="usb">
Oct 09 16:39:11 compute-0 nova_compute[117331]:       <address type="usb" bus="0" port="1"/>
Oct 09 16:39:11 compute-0 nova_compute[117331]:     </input>
Oct 09 16:39:11 compute-0 nova_compute[117331]:     <input type="mouse" bus="ps2"/>
Oct 09 16:39:11 compute-0 nova_compute[117331]:     <graphics type="vnc" port="-1" autoport="yes" listen="::">
Oct 09 16:39:11 compute-0 nova_compute[117331]:       <listen type="address" address="::"/>
Oct 09 16:39:11 compute-0 nova_compute[117331]:     </graphics>
Oct 09 16:39:11 compute-0 nova_compute[117331]:     <video>
Oct 09 16:39:11 compute-0 nova_compute[117331]:       <model type="virtio" heads="1" primary="yes"/>
Oct 09 16:39:11 compute-0 nova_compute[117331]:       <address type="pci" domain="0x0000" bus="0x00" slot="0x01" function="0x0"/>
Oct 09 16:39:11 compute-0 nova_compute[117331]:     </video>
Oct 09 16:39:11 compute-0 nova_compute[117331]:     <memballoon model="virtio" autodeflate="on" freePageReporting="on">
Oct 09 16:39:11 compute-0 nova_compute[117331]:       <stats period="10"/>
Oct 09 16:39:11 compute-0 nova_compute[117331]:       <address type="pci" domain="0x0000" bus="0x04" slot="0x00" function="0x0"/>
Oct 09 16:39:11 compute-0 nova_compute[117331]:     </memballoon>
Oct 09 16:39:11 compute-0 nova_compute[117331]:     <rng model="virtio">
Oct 09 16:39:11 compute-0 nova_compute[117331]:       <backend model="random">/dev/urandom</backend>
Oct 09 16:39:11 compute-0 nova_compute[117331]:       <address type="pci" domain="0x0000" bus="0x05" slot="0x00" function="0x0"/>
Oct 09 16:39:11 compute-0 nova_compute[117331]:     </rng>
Oct 09 16:39:11 compute-0 nova_compute[117331]:   </devices>
Oct 09 16:39:11 compute-0 nova_compute[117331]:   <seclabel type="dynamic" model="selinux" relabel="yes"/>
Oct 09 16:39:11 compute-0 nova_compute[117331]: </domain>
Oct 09 16:39:11 compute-0 nova_compute[117331]:  _update_pci_dev_xml /usr/lib/python3.12/site-packages/nova/virt/libvirt/migration.py:166
Oct 09 16:39:11 compute-0 nova_compute[117331]: 2025-10-09 16:39:11.570 2 DEBUG nova.virt.libvirt.driver [None req-dc1aa2ae-bf27-4733-9d1e-fa72df9f61f8 005258eefa5345409efe13f6a1de22ab c076c42271004e7aa86e84416e33f826 - - default default] [instance: 56389b25-6fdb-41ad-ab62-b6872a816180] About to invoke the migrate API _live_migration_operation /usr/lib/python3.12/site-packages/nova/virt/libvirt/driver.py:11175
Oct 09 16:39:11 compute-0 podman[153304]: 2025-10-09 16:39:11.83754577 +0000 UTC m=+0.054775805 container health_status 0e966c7a283689daf23c5db5017f23f685ee836319240188a9f70ddd246563c5 (image=38.102.83.66:5001/podified-master-centos10/openstack-neutron-metadata-agent-ovn:watcher_latest, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251007, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, container_name=ovn_metadata_agent, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': '38.102.83.66:5001/podified-master-centos10/openstack-neutron-metadata-agent-ovn:watcher_latest', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, io.buildah.version=1.41.4, org.label-schema.name=CentOS Stream 10 Base Image, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_build_tag=watcher_latest)
Oct 09 16:39:11 compute-0 podman[153305]: 2025-10-09 16:39:11.869618496 +0000 UTC m=+0.085190649 container health_status 8227728d23f374cb9528be6ddf01dff38d8c792912a17b0d137932a18390b820 (image=38.102.83.66:5001/podified-master-centos10/openstack-iscsid:watcher_latest, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, config_id=iscsid, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': '38.102.83.66:5001/podified-master-centos10/openstack-iscsid:watcher_latest', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, container_name=iscsid, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.41.4, org.label-schema.build-date=20251007, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=watcher_latest)
Oct 09 16:39:11 compute-0 nova_compute[117331]: 2025-10-09 16:39:11.877 2 WARNING neutronclient.v2_0.client [req-737bbe22-b3c5-433b-80a2-8737d08a321c req-5a1bf92a-f32a-45d2-8277-e9a1c4eada93 ee15961ad2be4bada6c106ac4f69f422 c076c42271004e7aa86e84416e33f826 - - default default] The python binding code in neutronclient is deprecated in favor of OpenstackSDK, please use that as this will be removed in a future release.
Oct 09 16:39:12 compute-0 nova_compute[117331]: 2025-10-09 16:39:12.027 2 DEBUG nova.virt.libvirt.migration [None req-dc1aa2ae-bf27-4733-9d1e-fa72df9f61f8 005258eefa5345409efe13f6a1de22ab c076c42271004e7aa86e84416e33f826 - - default default] [instance: 56389b25-6fdb-41ad-ab62-b6872a816180] Current None elapsed 1 steps [(0, 50), (300, 95), (600, 140), (900, 185), (1200, 230), (1500, 275), (1800, 320), (2100, 365), (2400, 410), (2700, 455), (3000, 500)] update_downtime /usr/lib/python3.12/site-packages/nova/virt/libvirt/migration.py:658
Oct 09 16:39:12 compute-0 nova_compute[117331]: 2025-10-09 16:39:12.028 2 INFO nova.virt.libvirt.migration [None req-dc1aa2ae-bf27-4733-9d1e-fa72df9f61f8 005258eefa5345409efe13f6a1de22ab c076c42271004e7aa86e84416e33f826 - - default default] [instance: 56389b25-6fdb-41ad-ab62-b6872a816180] Increasing downtime to 50 ms after 0 sec elapsed time
Oct 09 16:39:12 compute-0 nova_compute[117331]: 2025-10-09 16:39:12.040 2 DEBUG nova.network.neutron [req-737bbe22-b3c5-433b-80a2-8737d08a321c req-5a1bf92a-f32a-45d2-8277-e9a1c4eada93 ee15961ad2be4bada6c106ac4f69f422 c076c42271004e7aa86e84416e33f826 - - default default] [instance: 56389b25-6fdb-41ad-ab62-b6872a816180] Updated VIF entry in instance network info cache for port 50e8afe7-f8b4-4c50-b99a-5c611b86fe7e. _build_network_info_model /usr/lib/python3.12/site-packages/nova/network/neutron.py:3542
Oct 09 16:39:12 compute-0 nova_compute[117331]: 2025-10-09 16:39:12.041 2 DEBUG nova.network.neutron [req-737bbe22-b3c5-433b-80a2-8737d08a321c req-5a1bf92a-f32a-45d2-8277-e9a1c4eada93 ee15961ad2be4bada6c106ac4f69f422 c076c42271004e7aa86e84416e33f826 - - default default] [instance: 56389b25-6fdb-41ad-ab62-b6872a816180] Updating instance_info_cache with network_info: [{"id": "50e8afe7-f8b4-4c50-b99a-5c611b86fe7e", "address": "fa:16:3e:48:bc:99", "network": {"id": "cf3aa351-4d26-41f3-8cb5-1ff2d3d995c8", "bridge": "br-int", "label": "tempest-TestExecuteWorkloadStabilizationStrategy-105109230-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "5b6d650a71ca47d58b10f5fd874c4898", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap50e8afe7-f8", "ovs_interfaceid": "50e8afe7-f8b4-4c50-b99a-5c611b86fe7e", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {"migrating_to": "compute-1.ctlplane.example.com"}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.12/site-packages/nova/network/neutron.py:116
Oct 09 16:39:12 compute-0 nova_compute[117331]: 2025-10-09 16:39:12.370 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 09 16:39:12 compute-0 nova_compute[117331]: 2025-10-09 16:39:12.545 2 DEBUG oslo_concurrency.lockutils [req-737bbe22-b3c5-433b-80a2-8737d08a321c req-5a1bf92a-f32a-45d2-8277-e9a1c4eada93 ee15961ad2be4bada6c106ac4f69f422 c076c42271004e7aa86e84416e33f826 - - default default] Releasing lock "refresh_cache-56389b25-6fdb-41ad-ab62-b6872a816180" lock /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:334
Oct 09 16:39:13 compute-0 nova_compute[117331]: 2025-10-09 16:39:13.050 2 INFO nova.virt.libvirt.driver [None req-dc1aa2ae-bf27-4733-9d1e-fa72df9f61f8 005258eefa5345409efe13f6a1de22ab c076c42271004e7aa86e84416e33f826 - - default default] [instance: 56389b25-6fdb-41ad-ab62-b6872a816180] Migration running for 1 secs, memory 100% remaining (bytes processed=0, remaining=0, total=0); disk 100% remaining (bytes processed=0, remaining=0, total=0).
Oct 09 16:39:13 compute-0 nova_compute[117331]: 2025-10-09 16:39:13.562 2 DEBUG nova.virt.libvirt.migration [None req-dc1aa2ae-bf27-4733-9d1e-fa72df9f61f8 005258eefa5345409efe13f6a1de22ab c076c42271004e7aa86e84416e33f826 - - default default] [instance: 56389b25-6fdb-41ad-ab62-b6872a816180] Current 50 elapsed 2 steps [(0, 50), (300, 95), (600, 140), (900, 185), (1200, 230), (1500, 275), (1800, 320), (2100, 365), (2400, 410), (2700, 455), (3000, 500)] update_downtime /usr/lib/python3.12/site-packages/nova/virt/libvirt/migration.py:658
Oct 09 16:39:13 compute-0 nova_compute[117331]: 2025-10-09 16:39:13.563 2 DEBUG nova.virt.libvirt.migration [None req-dc1aa2ae-bf27-4733-9d1e-fa72df9f61f8 005258eefa5345409efe13f6a1de22ab c076c42271004e7aa86e84416e33f826 - - default default] [instance: 56389b25-6fdb-41ad-ab62-b6872a816180] Downtime does not need to change update_downtime /usr/lib/python3.12/site-packages/nova/virt/libvirt/migration.py:671
Oct 09 16:39:14 compute-0 kernel: tap50e8afe7-f8 (unregistering): left promiscuous mode
Oct 09 16:39:14 compute-0 NetworkManager[1028]: <info>  [1760027954.0671] device (tap50e8afe7-f8): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Oct 09 16:39:14 compute-0 ovn_controller[19752]: 2025-10-09T16:39:14Z|00263|binding|INFO|Releasing lport 50e8afe7-f8b4-4c50-b99a-5c611b86fe7e from this chassis (sb_readonly=0)
Oct 09 16:39:14 compute-0 ovn_controller[19752]: 2025-10-09T16:39:14Z|00264|binding|INFO|Setting lport 50e8afe7-f8b4-4c50-b99a-5c611b86fe7e down in Southbound
Oct 09 16:39:14 compute-0 ovn_controller[19752]: 2025-10-09T16:39:14Z|00265|binding|INFO|Removing iface tap50e8afe7-f8 ovn-installed in OVS
Oct 09 16:39:14 compute-0 nova_compute[117331]: 2025-10-09 16:39:14.078 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 09 16:39:14 compute-0 nova_compute[117331]: 2025-10-09 16:39:14.079 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 09 16:39:14 compute-0 ovn_metadata_agent[28608]: 2025-10-09 16:39:14.082 28613 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:48:bc:99 10.100.0.11'], port_security=['fa:16:3e:48:bc:99 10.100.0.11'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com,compute-1.ctlplane.example.com', 'activation-strategy': 'rarp', 'additional-chassis-activated': '2bd8bf21-1f6b-42c9-9656-9a72fa8dcbf3'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.11/28', 'neutron:device_id': '56389b25-6fdb-41ad-ab62-b6872a816180', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-cf3aa351-4d26-41f3-8cb5-1ff2d3d995c8', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'a60acfe52e4b4b7f912654a59f0978b7', 'neutron:revision_number': '10', 'neutron:security_group_ids': 'c47ae156-dc3a-4eb3-8eb2-126be8ba5497', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=f19947fc-c2ef-4762-adfc-471bb24a038d, chassis=[], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f37b2576840>], logical_port=50e8afe7-f8b4-4c50-b99a-5c611b86fe7e) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f37b2576840>]) matches /usr/lib/python3.12/site-packages/ovsdbapp/backend/ovs_idl/event.py:55
Oct 09 16:39:14 compute-0 ovn_metadata_agent[28608]: 2025-10-09 16:39:14.083 28613 INFO neutron.agent.ovn.metadata.agent [-] Port 50e8afe7-f8b4-4c50-b99a-5c611b86fe7e in datapath cf3aa351-4d26-41f3-8cb5-1ff2d3d995c8 unbound from our chassis
Oct 09 16:39:14 compute-0 ovn_metadata_agent[28608]: 2025-10-09 16:39:14.085 28613 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network cf3aa351-4d26-41f3-8cb5-1ff2d3d995c8
Oct 09 16:39:14 compute-0 nova_compute[117331]: 2025-10-09 16:39:14.090 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 09 16:39:14 compute-0 ovn_metadata_agent[28608]: 2025-10-09 16:39:14.099 139687 DEBUG oslo.privsep.daemon [-] privsep: reply[87fe7fba-b15d-40de-ba4a-b53b8ac590af]: (4, True) _call_back /usr/lib/python3.12/site-packages/oslo_privsep/daemon.py:515
Oct 09 16:39:14 compute-0 ovn_metadata_agent[28608]: 2025-10-09 16:39:14.125 141048 DEBUG oslo.privsep.daemon [-] privsep: reply[c4b8be23-0783-4042-8299-e330ff8d68fd]: (4, None) _call_back /usr/lib/python3.12/site-packages/oslo_privsep/daemon.py:515
Oct 09 16:39:14 compute-0 ovn_metadata_agent[28608]: 2025-10-09 16:39:14.128 141048 DEBUG oslo.privsep.daemon [-] privsep: reply[ccca8805-9979-4d93-921e-c23ed8534240]: (4, None) _call_back /usr/lib/python3.12/site-packages/oslo_privsep/daemon.py:515
Oct 09 16:39:14 compute-0 systemd[1]: machine-qemu\x2d23\x2dinstance\x2d0000001d.scope: Deactivated successfully.
Oct 09 16:39:14 compute-0 systemd[1]: machine-qemu\x2d23\x2dinstance\x2d0000001d.scope: Consumed 14.034s CPU time.
Oct 09 16:39:14 compute-0 systemd-machined[77487]: Machine qemu-23-instance-0000001d terminated.
Oct 09 16:39:14 compute-0 ovn_metadata_agent[28608]: 2025-10-09 16:39:14.155 141048 DEBUG oslo.privsep.daemon [-] privsep: reply[90e509c8-e076-439c-9acb-0422d748006f]: (4, None) _call_back /usr/lib/python3.12/site-packages/oslo_privsep/daemon.py:515
Oct 09 16:39:14 compute-0 ovn_metadata_agent[28608]: 2025-10-09 16:39:14.168 139687 DEBUG oslo.privsep.daemon [-] privsep: reply[a681007c-7f1a-4508-aae8-d479215bcff0]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapcf3aa351-41'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:2a:10:2c'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 12, 'tx_packets': 7, 'rx_bytes': 1000, 'tx_bytes': 438, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 12, 'tx_packets': 7, 'rx_bytes': 1000, 'tx_bytes': 438, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 73], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 282941, 'reachable_time': 24617, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 8, 'inoctets': 720, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 8, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 720, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 8, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 153366, 'error': None, 'target': 'ovnmeta-cf3aa351-4d26-41f3-8cb5-1ff2d3d995c8', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.12/site-packages/oslo_privsep/daemon.py:515
Oct 09 16:39:14 compute-0 ovn_metadata_agent[28608]: 2025-10-09 16:39:14.182 139687 DEBUG oslo.privsep.daemon [-] privsep: reply[9bfd1e97-7f84-46a2-9953-c2ac4a814ec6]: (4, ({'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tapcf3aa351-41'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 282955, 'tstamp': 282955}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 153367, 'error': None, 'target': 'ovnmeta-cf3aa351-4d26-41f3-8cb5-1ff2d3d995c8', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tapcf3aa351-41'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 282959, 'tstamp': 282959}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 153367, 'error': None, 'target': 'ovnmeta-cf3aa351-4d26-41f3-8cb5-1ff2d3d995c8', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.12/site-packages/oslo_privsep/daemon.py:515
Oct 09 16:39:14 compute-0 ovn_metadata_agent[28608]: 2025-10-09 16:39:14.183 28613 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapcf3aa351-40, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.12/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 09 16:39:14 compute-0 nova_compute[117331]: 2025-10-09 16:39:14.184 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 09 16:39:14 compute-0 nova_compute[117331]: 2025-10-09 16:39:14.188 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 09 16:39:14 compute-0 ovn_metadata_agent[28608]: 2025-10-09 16:39:14.188 28613 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapcf3aa351-40, may_exist=True, interface_attrs={}) do_commit /usr/lib/python3.12/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 09 16:39:14 compute-0 ovn_metadata_agent[28608]: 2025-10-09 16:39:14.188 28613 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.12/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 09 16:39:14 compute-0 ovn_metadata_agent[28608]: 2025-10-09 16:39:14.188 28613 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapcf3aa351-40, col_values=(('external_ids', {'iface-id': '7d2fd2fc-650d-4abc-8268-a14a8cdfd51e'}),), if_exists=True) do_commit /usr/lib/python3.12/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 09 16:39:14 compute-0 ovn_metadata_agent[28608]: 2025-10-09 16:39:14.188 28613 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.12/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 09 16:39:14 compute-0 ovn_metadata_agent[28608]: 2025-10-09 16:39:14.189 139687 DEBUG oslo.privsep.daemon [-] privsep: reply[52fd020f-2a0f-4dff-a0f8-4b522bcc29c5]: (4, '\nglobal\n    log         /dev/log local0 debug\n    log-tag     haproxy-metadata-proxy-cf3aa351-4d26-41f3-8cb5-1ff2d3d995c8\n    user        root\n    group       root\n    maxconn     1024\n    pidfile     /var/lib/neutron/external/pids/cf3aa351-4d26-41f3-8cb5-1ff2d3d995c8.pid.haproxy\n    daemon\n\ndefaults\n    log global\n    mode http\n    option httplog\n    option dontlognull\n    option http-server-close\n    option forwardfor\n    retries                 3\n    timeout http-request    30s\n    timeout connect         30s\n    timeout client          32s\n    timeout server          32s\n    timeout http-keep-alive 30s\n\nlisten listener\n    bind 169.254.169.254:80\n    \n    server metadata /var/lib/neutron/metadata_proxy\n\n    http-request add-header X-OVN-Network-ID cf3aa351-4d26-41f3-8cb5-1ff2d3d995c8\n') _call_back /usr/lib/python3.12/site-packages/oslo_privsep/daemon.py:515
Oct 09 16:39:14 compute-0 nova_compute[117331]: 2025-10-09 16:39:14.305 2 DEBUG nova.virt.libvirt.guest [None req-dc1aa2ae-bf27-4733-9d1e-fa72df9f61f8 005258eefa5345409efe13f6a1de22ab c076c42271004e7aa86e84416e33f826 - - default default] Domain has shutdown/gone away: Requested operation is not valid: domain is not running get_job_info /usr/lib/python3.12/site-packages/nova/virt/libvirt/guest.py:687
Oct 09 16:39:14 compute-0 nova_compute[117331]: 2025-10-09 16:39:14.306 2 INFO nova.virt.libvirt.driver [None req-dc1aa2ae-bf27-4733-9d1e-fa72df9f61f8 005258eefa5345409efe13f6a1de22ab c076c42271004e7aa86e84416e33f826 - - default default] [instance: 56389b25-6fdb-41ad-ab62-b6872a816180] Migration operation has completed
Oct 09 16:39:14 compute-0 nova_compute[117331]: 2025-10-09 16:39:14.306 2 INFO nova.compute.manager [None req-dc1aa2ae-bf27-4733-9d1e-fa72df9f61f8 005258eefa5345409efe13f6a1de22ab c076c42271004e7aa86e84416e33f826 - - default default] [instance: 56389b25-6fdb-41ad-ab62-b6872a816180] _post_live_migration() is started..
Oct 09 16:39:14 compute-0 nova_compute[117331]: 2025-10-09 16:39:14.309 2 DEBUG nova.virt.libvirt.driver [None req-dc1aa2ae-bf27-4733-9d1e-fa72df9f61f8 005258eefa5345409efe13f6a1de22ab c076c42271004e7aa86e84416e33f826 - - default default] [instance: 56389b25-6fdb-41ad-ab62-b6872a816180] Migrate API has completed _live_migration_operation /usr/lib/python3.12/site-packages/nova/virt/libvirt/driver.py:11182
Oct 09 16:39:14 compute-0 nova_compute[117331]: 2025-10-09 16:39:14.310 2 DEBUG nova.virt.libvirt.driver [None req-dc1aa2ae-bf27-4733-9d1e-fa72df9f61f8 005258eefa5345409efe13f6a1de22ab c076c42271004e7aa86e84416e33f826 - - default default] [instance: 56389b25-6fdb-41ad-ab62-b6872a816180] Migration operation thread has finished _live_migration_operation /usr/lib/python3.12/site-packages/nova/virt/libvirt/driver.py:11230
Oct 09 16:39:14 compute-0 nova_compute[117331]: 2025-10-09 16:39:14.310 2 DEBUG nova.virt.libvirt.driver [None req-dc1aa2ae-bf27-4733-9d1e-fa72df9f61f8 005258eefa5345409efe13f6a1de22ab c076c42271004e7aa86e84416e33f826 - - default default] [instance: 56389b25-6fdb-41ad-ab62-b6872a816180] Migration operation thread notification thread_finished /usr/lib/python3.12/site-packages/nova/virt/libvirt/driver.py:11533
Oct 09 16:39:14 compute-0 nova_compute[117331]: 2025-10-09 16:39:14.321 2 WARNING neutronclient.v2_0.client [None req-dc1aa2ae-bf27-4733-9d1e-fa72df9f61f8 005258eefa5345409efe13f6a1de22ab c076c42271004e7aa86e84416e33f826 - - default default] The python binding code in neutronclient is deprecated in favor of OpenstackSDK, please use that as this will be removed in a future release.
Oct 09 16:39:14 compute-0 nova_compute[117331]: 2025-10-09 16:39:14.321 2 WARNING neutronclient.v2_0.client [None req-dc1aa2ae-bf27-4733-9d1e-fa72df9f61f8 005258eefa5345409efe13f6a1de22ab c076c42271004e7aa86e84416e33f826 - - default default] The python binding code in neutronclient is deprecated in favor of OpenstackSDK, please use that as this will be removed in a future release.
Oct 09 16:39:14 compute-0 nova_compute[117331]: 2025-10-09 16:39:14.374 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 09 16:39:14 compute-0 nova_compute[117331]: 2025-10-09 16:39:14.562 2 DEBUG nova.compute.manager [req-6d95d631-e26c-4bbe-aaeb-f1551e396d83 req-9671882e-8391-4ffd-a096-d94d61c71874 ee15961ad2be4bada6c106ac4f69f422 c076c42271004e7aa86e84416e33f826 - - default default] [instance: 56389b25-6fdb-41ad-ab62-b6872a816180] Received event network-vif-unplugged-50e8afe7-f8b4-4c50-b99a-5c611b86fe7e external_instance_event /usr/lib/python3.12/site-packages/nova/compute/manager.py:11812
Oct 09 16:39:14 compute-0 nova_compute[117331]: 2025-10-09 16:39:14.563 2 DEBUG oslo_concurrency.lockutils [req-6d95d631-e26c-4bbe-aaeb-f1551e396d83 req-9671882e-8391-4ffd-a096-d94d61c71874 ee15961ad2be4bada6c106ac4f69f422 c076c42271004e7aa86e84416e33f826 - - default default] Acquiring lock "56389b25-6fdb-41ad-ab62-b6872a816180-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:405
Oct 09 16:39:14 compute-0 nova_compute[117331]: 2025-10-09 16:39:14.563 2 DEBUG oslo_concurrency.lockutils [req-6d95d631-e26c-4bbe-aaeb-f1551e396d83 req-9671882e-8391-4ffd-a096-d94d61c71874 ee15961ad2be4bada6c106ac4f69f422 c076c42271004e7aa86e84416e33f826 - - default default] Lock "56389b25-6fdb-41ad-ab62-b6872a816180-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:410
Oct 09 16:39:14 compute-0 nova_compute[117331]: 2025-10-09 16:39:14.563 2 DEBUG oslo_concurrency.lockutils [req-6d95d631-e26c-4bbe-aaeb-f1551e396d83 req-9671882e-8391-4ffd-a096-d94d61c71874 ee15961ad2be4bada6c106ac4f69f422 c076c42271004e7aa86e84416e33f826 - - default default] Lock "56389b25-6fdb-41ad-ab62-b6872a816180-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:424
Oct 09 16:39:14 compute-0 nova_compute[117331]: 2025-10-09 16:39:14.563 2 DEBUG nova.compute.manager [req-6d95d631-e26c-4bbe-aaeb-f1551e396d83 req-9671882e-8391-4ffd-a096-d94d61c71874 ee15961ad2be4bada6c106ac4f69f422 c076c42271004e7aa86e84416e33f826 - - default default] [instance: 56389b25-6fdb-41ad-ab62-b6872a816180] No waiting events found dispatching network-vif-unplugged-50e8afe7-f8b4-4c50-b99a-5c611b86fe7e pop_instance_event /usr/lib/python3.12/site-packages/nova/compute/manager.py:344
Oct 09 16:39:14 compute-0 nova_compute[117331]: 2025-10-09 16:39:14.564 2 DEBUG nova.compute.manager [req-6d95d631-e26c-4bbe-aaeb-f1551e396d83 req-9671882e-8391-4ffd-a096-d94d61c71874 ee15961ad2be4bada6c106ac4f69f422 c076c42271004e7aa86e84416e33f826 - - default default] [instance: 56389b25-6fdb-41ad-ab62-b6872a816180] Received event network-vif-unplugged-50e8afe7-f8b4-4c50-b99a-5c611b86fe7e for instance with task_state migrating. _process_instance_event /usr/lib/python3.12/site-packages/nova/compute/manager.py:11590
Oct 09 16:39:14 compute-0 nova_compute[117331]: 2025-10-09 16:39:14.842 2 DEBUG nova.network.neutron [None req-dc1aa2ae-bf27-4733-9d1e-fa72df9f61f8 005258eefa5345409efe13f6a1de22ab c076c42271004e7aa86e84416e33f826 - - default default] Activated binding for port 50e8afe7-f8b4-4c50-b99a-5c611b86fe7e and host compute-1.ctlplane.example.com migrate_instance_start /usr/lib/python3.12/site-packages/nova/network/neutron.py:3241
Oct 09 16:39:14 compute-0 nova_compute[117331]: 2025-10-09 16:39:14.842 2 DEBUG nova.compute.manager [None req-dc1aa2ae-bf27-4733-9d1e-fa72df9f61f8 005258eefa5345409efe13f6a1de22ab c076c42271004e7aa86e84416e33f826 - - default default] [instance: 56389b25-6fdb-41ad-ab62-b6872a816180] Calling driver.post_live_migration_at_source with original source VIFs from migrate_data: [{"id": "50e8afe7-f8b4-4c50-b99a-5c611b86fe7e", "address": "fa:16:3e:48:bc:99", "network": {"id": "cf3aa351-4d26-41f3-8cb5-1ff2d3d995c8", "bridge": "br-int", "label": "tempest-TestExecuteWorkloadStabilizationStrategy-105109230-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "5b6d650a71ca47d58b10f5fd874c4898", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap50e8afe7-f8", "ovs_interfaceid": "50e8afe7-f8b4-4c50-b99a-5c611b86fe7e", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] _post_live_migration /usr/lib/python3.12/site-packages/nova/compute/manager.py:10059
Oct 09 16:39:14 compute-0 nova_compute[117331]: 2025-10-09 16:39:14.844 2 DEBUG nova.virt.libvirt.vif [None req-dc1aa2ae-bf27-4733-9d1e-fa72df9f61f8 005258eefa5345409efe13f6a1de22ab c076c42271004e7aa86e84416e33f826 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,compute_id=1,config_drive='True',created_at=2025-10-09T16:38:12Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description=None,display_name='tempest-TestExecuteWorkloadStabilizationStrategy-server-623877253',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testexecuteworkloadstabilizationstrategy-server-6238772',id=29,image_ref='b7d6e0af-25e4-4227-9dc6-43143898ceee',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-10-09T16:38:28Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=1,progress=0,project_id='a60acfe52e4b4b7f912654a59f0978b7',ramdisk_id='',reservation_id='r-zwyn0omz',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='admin,member,reader,manager',image_base_image_ref='b7d6e0af-25e4-4227-9dc6-43143898ceee',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestExecuteWorkloadStabilizationStrategy-1000021673',owner_user_name='tempest-TestExecuteWorkloadStabilizationStrategy-1000021673-project-admin'},tags=<?>,task_state='migrating',terminated_at=None,trusted_certs=<?>,updated_at=2025-10-09T16:38:51Z,user_data=None,user_id='685b4924c5a04af7ae6f4a328bb50f14',uuid=56389b25-6fdb-41ad-ab62-b6872a816180,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "50e8afe7-f8b4-4c50-b99a-5c611b86fe7e", "address": "fa:16:3e:48:bc:99", "network": {"id": "cf3aa351-4d26-41f3-8cb5-1ff2d3d995c8", "bridge": "br-int", "label": "tempest-TestExecuteWorkloadStabilizationStrategy-105109230-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "5b6d650a71ca47d58b10f5fd874c4898", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap50e8afe7-f8", "ovs_interfaceid": "50e8afe7-f8b4-4c50-b99a-5c611b86fe7e", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.12/site-packages/nova/virt/libvirt/vif.py:839
Oct 09 16:39:14 compute-0 nova_compute[117331]: 2025-10-09 16:39:14.844 2 DEBUG nova.network.os_vif_util [None req-dc1aa2ae-bf27-4733-9d1e-fa72df9f61f8 005258eefa5345409efe13f6a1de22ab c076c42271004e7aa86e84416e33f826 - - default default] Converting VIF {"id": "50e8afe7-f8b4-4c50-b99a-5c611b86fe7e", "address": "fa:16:3e:48:bc:99", "network": {"id": "cf3aa351-4d26-41f3-8cb5-1ff2d3d995c8", "bridge": "br-int", "label": "tempest-TestExecuteWorkloadStabilizationStrategy-105109230-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "5b6d650a71ca47d58b10f5fd874c4898", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap50e8afe7-f8", "ovs_interfaceid": "50e8afe7-f8b4-4c50-b99a-5c611b86fe7e", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.12/site-packages/nova/network/os_vif_util.py:511
Oct 09 16:39:14 compute-0 nova_compute[117331]: 2025-10-09 16:39:14.846 2 DEBUG nova.network.os_vif_util [None req-dc1aa2ae-bf27-4733-9d1e-fa72df9f61f8 005258eefa5345409efe13f6a1de22ab c076c42271004e7aa86e84416e33f826 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:48:bc:99,bridge_name='br-int',has_traffic_filtering=True,id=50e8afe7-f8b4-4c50-b99a-5c611b86fe7e,network=Network(cf3aa351-4d26-41f3-8cb5-1ff2d3d995c8),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap50e8afe7-f8') nova_to_osvif_vif /usr/lib/python3.12/site-packages/nova/network/os_vif_util.py:548
Oct 09 16:39:14 compute-0 nova_compute[117331]: 2025-10-09 16:39:14.846 2 DEBUG os_vif [None req-dc1aa2ae-bf27-4733-9d1e-fa72df9f61f8 005258eefa5345409efe13f6a1de22ab c076c42271004e7aa86e84416e33f826 - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:48:bc:99,bridge_name='br-int',has_traffic_filtering=True,id=50e8afe7-f8b4-4c50-b99a-5c611b86fe7e,network=Network(cf3aa351-4d26-41f3-8cb5-1ff2d3d995c8),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap50e8afe7-f8') unplug /usr/lib/python3.12/site-packages/os_vif/__init__.py:109
Oct 09 16:39:14 compute-0 nova_compute[117331]: 2025-10-09 16:39:14.849 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 22 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 09 16:39:14 compute-0 nova_compute[117331]: 2025-10-09 16:39:14.850 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap50e8afe7-f8, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.12/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 09 16:39:14 compute-0 nova_compute[117331]: 2025-10-09 16:39:14.886 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 09 16:39:14 compute-0 nova_compute[117331]: 2025-10-09 16:39:14.889 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:248
Oct 09 16:39:14 compute-0 nova_compute[117331]: 2025-10-09 16:39:14.890 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 22 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 09 16:39:14 compute-0 nova_compute[117331]: 2025-10-09 16:39:14.890 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbDestroyCommand(_result=None, table=QoS, record=13cc15f9-ef9d-427c-8705-63a2d0220485) do_commit /usr/lib/python3.12/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 09 16:39:14 compute-0 nova_compute[117331]: 2025-10-09 16:39:14.891 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 09 16:39:14 compute-0 nova_compute[117331]: 2025-10-09 16:39:14.892 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 09 16:39:14 compute-0 nova_compute[117331]: 2025-10-09 16:39:14.895 2 INFO os_vif [None req-dc1aa2ae-bf27-4733-9d1e-fa72df9f61f8 005258eefa5345409efe13f6a1de22ab c076c42271004e7aa86e84416e33f826 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:48:bc:99,bridge_name='br-int',has_traffic_filtering=True,id=50e8afe7-f8b4-4c50-b99a-5c611b86fe7e,network=Network(cf3aa351-4d26-41f3-8cb5-1ff2d3d995c8),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap50e8afe7-f8')
Oct 09 16:39:14 compute-0 nova_compute[117331]: 2025-10-09 16:39:14.895 2 DEBUG oslo_concurrency.lockutils [None req-dc1aa2ae-bf27-4733-9d1e-fa72df9f61f8 005258eefa5345409efe13f6a1de22ab c076c42271004e7aa86e84416e33f826 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.free_pci_device_allocations_for_instance" inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:405
Oct 09 16:39:14 compute-0 nova_compute[117331]: 2025-10-09 16:39:14.896 2 DEBUG oslo_concurrency.lockutils [None req-dc1aa2ae-bf27-4733-9d1e-fa72df9f61f8 005258eefa5345409efe13f6a1de22ab c076c42271004e7aa86e84416e33f826 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.free_pci_device_allocations_for_instance" :: waited 0.001s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:410
Oct 09 16:39:14 compute-0 nova_compute[117331]: 2025-10-09 16:39:14.896 2 DEBUG oslo_concurrency.lockutils [None req-dc1aa2ae-bf27-4733-9d1e-fa72df9f61f8 005258eefa5345409efe13f6a1de22ab c076c42271004e7aa86e84416e33f826 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.free_pci_device_allocations_for_instance" :: held 0.001s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:424
Oct 09 16:39:14 compute-0 nova_compute[117331]: 2025-10-09 16:39:14.897 2 DEBUG nova.compute.manager [None req-dc1aa2ae-bf27-4733-9d1e-fa72df9f61f8 005258eefa5345409efe13f6a1de22ab c076c42271004e7aa86e84416e33f826 - - default default] [instance: 56389b25-6fdb-41ad-ab62-b6872a816180] Calling driver.cleanup from _post_live_migration _post_live_migration /usr/lib/python3.12/site-packages/nova/compute/manager.py:10082
Oct 09 16:39:14 compute-0 nova_compute[117331]: 2025-10-09 16:39:14.898 2 INFO nova.virt.libvirt.driver [None req-dc1aa2ae-bf27-4733-9d1e-fa72df9f61f8 005258eefa5345409efe13f6a1de22ab c076c42271004e7aa86e84416e33f826 - - default default] [instance: 56389b25-6fdb-41ad-ab62-b6872a816180] Deleting instance files /var/lib/nova/instances/56389b25-6fdb-41ad-ab62-b6872a816180_del
Oct 09 16:39:14 compute-0 nova_compute[117331]: 2025-10-09 16:39:14.899 2 INFO nova.virt.libvirt.driver [None req-dc1aa2ae-bf27-4733-9d1e-fa72df9f61f8 005258eefa5345409efe13f6a1de22ab c076c42271004e7aa86e84416e33f826 - - default default] [instance: 56389b25-6fdb-41ad-ab62-b6872a816180] Deletion of /var/lib/nova/instances/56389b25-6fdb-41ad-ab62-b6872a816180_del complete
Oct 09 16:39:16 compute-0 nova_compute[117331]: 2025-10-09 16:39:16.635 2 DEBUG nova.compute.manager [req-9e356e7a-575e-4238-9ade-7efeac529dbd req-512f780b-46c2-46df-b341-69e0842d39ac ee15961ad2be4bada6c106ac4f69f422 c076c42271004e7aa86e84416e33f826 - - default default] [instance: 56389b25-6fdb-41ad-ab62-b6872a816180] Received event network-vif-plugged-50e8afe7-f8b4-4c50-b99a-5c611b86fe7e external_instance_event /usr/lib/python3.12/site-packages/nova/compute/manager.py:11812
Oct 09 16:39:16 compute-0 nova_compute[117331]: 2025-10-09 16:39:16.636 2 DEBUG oslo_concurrency.lockutils [req-9e356e7a-575e-4238-9ade-7efeac529dbd req-512f780b-46c2-46df-b341-69e0842d39ac ee15961ad2be4bada6c106ac4f69f422 c076c42271004e7aa86e84416e33f826 - - default default] Acquiring lock "56389b25-6fdb-41ad-ab62-b6872a816180-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:405
Oct 09 16:39:16 compute-0 nova_compute[117331]: 2025-10-09 16:39:16.636 2 DEBUG oslo_concurrency.lockutils [req-9e356e7a-575e-4238-9ade-7efeac529dbd req-512f780b-46c2-46df-b341-69e0842d39ac ee15961ad2be4bada6c106ac4f69f422 c076c42271004e7aa86e84416e33f826 - - default default] Lock "56389b25-6fdb-41ad-ab62-b6872a816180-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:410
Oct 09 16:39:16 compute-0 nova_compute[117331]: 2025-10-09 16:39:16.637 2 DEBUG oslo_concurrency.lockutils [req-9e356e7a-575e-4238-9ade-7efeac529dbd req-512f780b-46c2-46df-b341-69e0842d39ac ee15961ad2be4bada6c106ac4f69f422 c076c42271004e7aa86e84416e33f826 - - default default] Lock "56389b25-6fdb-41ad-ab62-b6872a816180-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:424
Oct 09 16:39:16 compute-0 nova_compute[117331]: 2025-10-09 16:39:16.637 2 DEBUG nova.compute.manager [req-9e356e7a-575e-4238-9ade-7efeac529dbd req-512f780b-46c2-46df-b341-69e0842d39ac ee15961ad2be4bada6c106ac4f69f422 c076c42271004e7aa86e84416e33f826 - - default default] [instance: 56389b25-6fdb-41ad-ab62-b6872a816180] No waiting events found dispatching network-vif-plugged-50e8afe7-f8b4-4c50-b99a-5c611b86fe7e pop_instance_event /usr/lib/python3.12/site-packages/nova/compute/manager.py:344
Oct 09 16:39:16 compute-0 nova_compute[117331]: 2025-10-09 16:39:16.637 2 WARNING nova.compute.manager [req-9e356e7a-575e-4238-9ade-7efeac529dbd req-512f780b-46c2-46df-b341-69e0842d39ac ee15961ad2be4bada6c106ac4f69f422 c076c42271004e7aa86e84416e33f826 - - default default] [instance: 56389b25-6fdb-41ad-ab62-b6872a816180] Received unexpected event network-vif-plugged-50e8afe7-f8b4-4c50-b99a-5c611b86fe7e for instance with vm_state active and task_state migrating.
Oct 09 16:39:16 compute-0 nova_compute[117331]: 2025-10-09 16:39:16.638 2 DEBUG nova.compute.manager [req-9e356e7a-575e-4238-9ade-7efeac529dbd req-512f780b-46c2-46df-b341-69e0842d39ac ee15961ad2be4bada6c106ac4f69f422 c076c42271004e7aa86e84416e33f826 - - default default] [instance: 56389b25-6fdb-41ad-ab62-b6872a816180] Received event network-vif-unplugged-50e8afe7-f8b4-4c50-b99a-5c611b86fe7e external_instance_event /usr/lib/python3.12/site-packages/nova/compute/manager.py:11812
Oct 09 16:39:16 compute-0 nova_compute[117331]: 2025-10-09 16:39:16.638 2 DEBUG oslo_concurrency.lockutils [req-9e356e7a-575e-4238-9ade-7efeac529dbd req-512f780b-46c2-46df-b341-69e0842d39ac ee15961ad2be4bada6c106ac4f69f422 c076c42271004e7aa86e84416e33f826 - - default default] Acquiring lock "56389b25-6fdb-41ad-ab62-b6872a816180-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:405
Oct 09 16:39:16 compute-0 nova_compute[117331]: 2025-10-09 16:39:16.638 2 DEBUG oslo_concurrency.lockutils [req-9e356e7a-575e-4238-9ade-7efeac529dbd req-512f780b-46c2-46df-b341-69e0842d39ac ee15961ad2be4bada6c106ac4f69f422 c076c42271004e7aa86e84416e33f826 - - default default] Lock "56389b25-6fdb-41ad-ab62-b6872a816180-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:410
Oct 09 16:39:16 compute-0 nova_compute[117331]: 2025-10-09 16:39:16.638 2 DEBUG oslo_concurrency.lockutils [req-9e356e7a-575e-4238-9ade-7efeac529dbd req-512f780b-46c2-46df-b341-69e0842d39ac ee15961ad2be4bada6c106ac4f69f422 c076c42271004e7aa86e84416e33f826 - - default default] Lock "56389b25-6fdb-41ad-ab62-b6872a816180-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:424
Oct 09 16:39:16 compute-0 nova_compute[117331]: 2025-10-09 16:39:16.639 2 DEBUG nova.compute.manager [req-9e356e7a-575e-4238-9ade-7efeac529dbd req-512f780b-46c2-46df-b341-69e0842d39ac ee15961ad2be4bada6c106ac4f69f422 c076c42271004e7aa86e84416e33f826 - - default default] [instance: 56389b25-6fdb-41ad-ab62-b6872a816180] No waiting events found dispatching network-vif-unplugged-50e8afe7-f8b4-4c50-b99a-5c611b86fe7e pop_instance_event /usr/lib/python3.12/site-packages/nova/compute/manager.py:344
Oct 09 16:39:16 compute-0 nova_compute[117331]: 2025-10-09 16:39:16.639 2 DEBUG nova.compute.manager [req-9e356e7a-575e-4238-9ade-7efeac529dbd req-512f780b-46c2-46df-b341-69e0842d39ac ee15961ad2be4bada6c106ac4f69f422 c076c42271004e7aa86e84416e33f826 - - default default] [instance: 56389b25-6fdb-41ad-ab62-b6872a816180] Received event network-vif-unplugged-50e8afe7-f8b4-4c50-b99a-5c611b86fe7e for instance with task_state migrating. _process_instance_event /usr/lib/python3.12/site-packages/nova/compute/manager.py:11590
Oct 09 16:39:16 compute-0 nova_compute[117331]: 2025-10-09 16:39:16.639 2 DEBUG nova.compute.manager [req-9e356e7a-575e-4238-9ade-7efeac529dbd req-512f780b-46c2-46df-b341-69e0842d39ac ee15961ad2be4bada6c106ac4f69f422 c076c42271004e7aa86e84416e33f826 - - default default] [instance: 56389b25-6fdb-41ad-ab62-b6872a816180] Received event network-vif-plugged-50e8afe7-f8b4-4c50-b99a-5c611b86fe7e external_instance_event /usr/lib/python3.12/site-packages/nova/compute/manager.py:11812
Oct 09 16:39:16 compute-0 nova_compute[117331]: 2025-10-09 16:39:16.639 2 DEBUG oslo_concurrency.lockutils [req-9e356e7a-575e-4238-9ade-7efeac529dbd req-512f780b-46c2-46df-b341-69e0842d39ac ee15961ad2be4bada6c106ac4f69f422 c076c42271004e7aa86e84416e33f826 - - default default] Acquiring lock "56389b25-6fdb-41ad-ab62-b6872a816180-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:405
Oct 09 16:39:16 compute-0 nova_compute[117331]: 2025-10-09 16:39:16.640 2 DEBUG oslo_concurrency.lockutils [req-9e356e7a-575e-4238-9ade-7efeac529dbd req-512f780b-46c2-46df-b341-69e0842d39ac ee15961ad2be4bada6c106ac4f69f422 c076c42271004e7aa86e84416e33f826 - - default default] Lock "56389b25-6fdb-41ad-ab62-b6872a816180-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:410
Oct 09 16:39:16 compute-0 nova_compute[117331]: 2025-10-09 16:39:16.640 2 DEBUG oslo_concurrency.lockutils [req-9e356e7a-575e-4238-9ade-7efeac529dbd req-512f780b-46c2-46df-b341-69e0842d39ac ee15961ad2be4bada6c106ac4f69f422 c076c42271004e7aa86e84416e33f826 - - default default] Lock "56389b25-6fdb-41ad-ab62-b6872a816180-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:424
Oct 09 16:39:16 compute-0 nova_compute[117331]: 2025-10-09 16:39:16.640 2 DEBUG nova.compute.manager [req-9e356e7a-575e-4238-9ade-7efeac529dbd req-512f780b-46c2-46df-b341-69e0842d39ac ee15961ad2be4bada6c106ac4f69f422 c076c42271004e7aa86e84416e33f826 - - default default] [instance: 56389b25-6fdb-41ad-ab62-b6872a816180] No waiting events found dispatching network-vif-plugged-50e8afe7-f8b4-4c50-b99a-5c611b86fe7e pop_instance_event /usr/lib/python3.12/site-packages/nova/compute/manager.py:344
Oct 09 16:39:16 compute-0 nova_compute[117331]: 2025-10-09 16:39:16.640 2 WARNING nova.compute.manager [req-9e356e7a-575e-4238-9ade-7efeac529dbd req-512f780b-46c2-46df-b341-69e0842d39ac ee15961ad2be4bada6c106ac4f69f422 c076c42271004e7aa86e84416e33f826 - - default default] [instance: 56389b25-6fdb-41ad-ab62-b6872a816180] Received unexpected event network-vif-plugged-50e8afe7-f8b4-4c50-b99a-5c611b86fe7e for instance with vm_state active and task_state migrating.
Oct 09 16:39:17 compute-0 nova_compute[117331]: 2025-10-09 16:39:17.373 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 09 16:39:19 compute-0 nova_compute[117331]: 2025-10-09 16:39:19.893 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 09 16:39:20 compute-0 podman[153400]: 2025-10-09 16:39:20.849704537 +0000 UTC m=+0.085802967 container health_status 64fa251112d01abb34ec1bb8e22019815ddcaaaa391ce5ec91517147ac5a9994 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., com.redhat.component=ubi9-minimal-container, distribution-scope=public, io.buildah.version=1.33.7, maintainer=Red Hat, Inc., managed_by=edpm_ansible, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.expose-services=, name=ubi9-minimal, release=1755695350, version=9.6, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, container_name=openstack_network_exporter, architecture=x86_64, io.openshift.tags=minimal rhel9, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., vendor=Red Hat, Inc., url=https://catalog.redhat.com/en/search?searchType=containers, vcs-type=git, build-date=2025-08-20T13:12:41, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, config_id=edpm, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']})
Oct 09 16:39:20 compute-0 podman[153401]: 2025-10-09 16:39:20.868202703 +0000 UTC m=+0.092855731 container health_status d615e4f24ff62fbb14be4a63e6da568ec5b22309773c1c66103b6b4de64a8a2f (image=38.102.83.66:5001/podified-master-centos10/openstack-ovn-controller:watcher_latest, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, io.buildah.version=1.41.4, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251007, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=watcher_latest, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': '38.102.83.66:5001/podified-master-centos10/openstack-ovn-controller:watcher_latest', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, container_name=ovn_controller, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Oct 09 16:39:22 compute-0 nova_compute[117331]: 2025-10-09 16:39:22.374 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 09 16:39:23 compute-0 nova_compute[117331]: 2025-10-09 16:39:23.934 2 DEBUG oslo_concurrency.lockutils [None req-dc1aa2ae-bf27-4733-9d1e-fa72df9f61f8 005258eefa5345409efe13f6a1de22ab c076c42271004e7aa86e84416e33f826 - - default default] Acquiring lock "56389b25-6fdb-41ad-ab62-b6872a816180-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:405
Oct 09 16:39:23 compute-0 nova_compute[117331]: 2025-10-09 16:39:23.935 2 DEBUG oslo_concurrency.lockutils [None req-dc1aa2ae-bf27-4733-9d1e-fa72df9f61f8 005258eefa5345409efe13f6a1de22ab c076c42271004e7aa86e84416e33f826 - - default default] Lock "56389b25-6fdb-41ad-ab62-b6872a816180-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.001s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:410
Oct 09 16:39:23 compute-0 nova_compute[117331]: 2025-10-09 16:39:23.935 2 DEBUG oslo_concurrency.lockutils [None req-dc1aa2ae-bf27-4733-9d1e-fa72df9f61f8 005258eefa5345409efe13f6a1de22ab c076c42271004e7aa86e84416e33f826 - - default default] Lock "56389b25-6fdb-41ad-ab62-b6872a816180-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:424
Oct 09 16:39:24 compute-0 nova_compute[117331]: 2025-10-09 16:39:24.448 2 DEBUG oslo_concurrency.lockutils [None req-dc1aa2ae-bf27-4733-9d1e-fa72df9f61f8 005258eefa5345409efe13f6a1de22ab c076c42271004e7aa86e84416e33f826 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:405
Oct 09 16:39:24 compute-0 nova_compute[117331]: 2025-10-09 16:39:24.448 2 DEBUG oslo_concurrency.lockutils [None req-dc1aa2ae-bf27-4733-9d1e-fa72df9f61f8 005258eefa5345409efe13f6a1de22ab c076c42271004e7aa86e84416e33f826 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:410
Oct 09 16:39:24 compute-0 nova_compute[117331]: 2025-10-09 16:39:24.449 2 DEBUG oslo_concurrency.lockutils [None req-dc1aa2ae-bf27-4733-9d1e-fa72df9f61f8 005258eefa5345409efe13f6a1de22ab c076c42271004e7aa86e84416e33f826 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.001s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:424
Oct 09 16:39:24 compute-0 nova_compute[117331]: 2025-10-09 16:39:24.450 2 DEBUG nova.compute.resource_tracker [None req-dc1aa2ae-bf27-4733-9d1e-fa72df9f61f8 005258eefa5345409efe13f6a1de22ab c076c42271004e7aa86e84416e33f826 - - default default] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.12/site-packages/nova/compute/resource_tracker.py:937
Oct 09 16:39:24 compute-0 nova_compute[117331]: 2025-10-09 16:39:24.894 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 09 16:39:25 compute-0 nova_compute[117331]: 2025-10-09 16:39:25.496 2 DEBUG oslo_concurrency.processutils [None req-dc1aa2ae-bf27-4733-9d1e-fa72df9f61f8 005258eefa5345409efe13f6a1de22ab c076c42271004e7aa86e84416e33f826 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/5963f55d-5e3f-4b07-86ef-554333267b8f/disk --force-share --output=json execute /usr/lib/python3.12/site-packages/oslo_concurrency/processutils.py:349
Oct 09 16:39:25 compute-0 nova_compute[117331]: 2025-10-09 16:39:25.543 2 DEBUG oslo_concurrency.processutils [None req-dc1aa2ae-bf27-4733-9d1e-fa72df9f61f8 005258eefa5345409efe13f6a1de22ab c076c42271004e7aa86e84416e33f826 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/5963f55d-5e3f-4b07-86ef-554333267b8f/disk --force-share --output=json" returned: 0 in 0.047s execute /usr/lib/python3.12/site-packages/oslo_concurrency/processutils.py:372
Oct 09 16:39:25 compute-0 nova_compute[117331]: 2025-10-09 16:39:25.545 2 DEBUG oslo_concurrency.processutils [None req-dc1aa2ae-bf27-4733-9d1e-fa72df9f61f8 005258eefa5345409efe13f6a1de22ab c076c42271004e7aa86e84416e33f826 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/5963f55d-5e3f-4b07-86ef-554333267b8f/disk --force-share --output=json execute /usr/lib/python3.12/site-packages/oslo_concurrency/processutils.py:349
Oct 09 16:39:25 compute-0 nova_compute[117331]: 2025-10-09 16:39:25.622 2 DEBUG oslo_concurrency.processutils [None req-dc1aa2ae-bf27-4733-9d1e-fa72df9f61f8 005258eefa5345409efe13f6a1de22ab c076c42271004e7aa86e84416e33f826 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/5963f55d-5e3f-4b07-86ef-554333267b8f/disk --force-share --output=json" returned: 0 in 0.077s execute /usr/lib/python3.12/site-packages/oslo_concurrency/processutils.py:372
Oct 09 16:39:25 compute-0 nova_compute[117331]: 2025-10-09 16:39:25.795 2 WARNING nova.virt.libvirt.driver [None req-dc1aa2ae-bf27-4733-9d1e-fa72df9f61f8 005258eefa5345409efe13f6a1de22ab c076c42271004e7aa86e84416e33f826 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Oct 09 16:39:25 compute-0 nova_compute[117331]: 2025-10-09 16:39:25.797 2 DEBUG oslo_concurrency.processutils [None req-dc1aa2ae-bf27-4733-9d1e-fa72df9f61f8 005258eefa5345409efe13f6a1de22ab c076c42271004e7aa86e84416e33f826 - - default default] Running cmd (subprocess): env LANG=C uptime execute /usr/lib/python3.12/site-packages/oslo_concurrency/processutils.py:349
Oct 09 16:39:25 compute-0 nova_compute[117331]: 2025-10-09 16:39:25.817 2 DEBUG oslo_concurrency.processutils [None req-dc1aa2ae-bf27-4733-9d1e-fa72df9f61f8 005258eefa5345409efe13f6a1de22ab c076c42271004e7aa86e84416e33f826 - - default default] CMD "env LANG=C uptime" returned: 0 in 0.020s execute /usr/lib/python3.12/site-packages/oslo_concurrency/processutils.py:372
Oct 09 16:39:25 compute-0 nova_compute[117331]: 2025-10-09 16:39:25.818 2 DEBUG nova.compute.resource_tracker [None req-dc1aa2ae-bf27-4733-9d1e-fa72df9f61f8 005258eefa5345409efe13f6a1de22ab c076c42271004e7aa86e84416e33f826 - - default default] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=5964MB free_disk=73.22063446044922GB free_vcpus=7 pci_devices=[{"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.12/site-packages/nova/compute/resource_tracker.py:1136
Oct 09 16:39:25 compute-0 nova_compute[117331]: 2025-10-09 16:39:25.818 2 DEBUG oslo_concurrency.lockutils [None req-dc1aa2ae-bf27-4733-9d1e-fa72df9f61f8 005258eefa5345409efe13f6a1de22ab c076c42271004e7aa86e84416e33f826 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:405
Oct 09 16:39:25 compute-0 nova_compute[117331]: 2025-10-09 16:39:25.819 2 DEBUG oslo_concurrency.lockutils [None req-dc1aa2ae-bf27-4733-9d1e-fa72df9f61f8 005258eefa5345409efe13f6a1de22ab c076c42271004e7aa86e84416e33f826 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.001s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:410
Oct 09 16:39:26 compute-0 nova_compute[117331]: 2025-10-09 16:39:26.838 2 DEBUG nova.compute.resource_tracker [None req-dc1aa2ae-bf27-4733-9d1e-fa72df9f61f8 005258eefa5345409efe13f6a1de22ab c076c42271004e7aa86e84416e33f826 - - default default] Migration for instance 56389b25-6fdb-41ad-ab62-b6872a816180 refers to another host's instance! _pair_instances_to_migrations /usr/lib/python3.12/site-packages/nova/compute/resource_tracker.py:979
Oct 09 16:39:27 compute-0 nova_compute[117331]: 2025-10-09 16:39:27.346 2 DEBUG nova.compute.resource_tracker [None req-dc1aa2ae-bf27-4733-9d1e-fa72df9f61f8 005258eefa5345409efe13f6a1de22ab c076c42271004e7aa86e84416e33f826 - - default default] [instance: 56389b25-6fdb-41ad-ab62-b6872a816180] Skipping migration as instance is neither resizing nor live-migrating. _update_usage_from_migrations /usr/lib/python3.12/site-packages/nova/compute/resource_tracker.py:1596
Oct 09 16:39:27 compute-0 nova_compute[117331]: 2025-10-09 16:39:27.347 2 INFO nova.compute.resource_tracker [None req-dc1aa2ae-bf27-4733-9d1e-fa72df9f61f8 005258eefa5345409efe13f6a1de22ab c076c42271004e7aa86e84416e33f826 - - default default] [instance: 5963f55d-5e3f-4b07-86ef-554333267b8f] Updating resource usage from migration 23cf222b-3267-4a00-82ba-480cb5d9bc11
Oct 09 16:39:27 compute-0 nova_compute[117331]: 2025-10-09 16:39:27.374 2 DEBUG nova.compute.resource_tracker [None req-dc1aa2ae-bf27-4733-9d1e-fa72df9f61f8 005258eefa5345409efe13f6a1de22ab c076c42271004e7aa86e84416e33f826 - - default default] Migration 604b7ad3-b402-4b63-a5d9-acef1fb93569 is active on this compute host and has allocations in placement: {'resources': {'VCPU': 1, 'MEMORY_MB': 128, 'DISK_GB': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.12/site-packages/nova/compute/resource_tracker.py:1745
Oct 09 16:39:27 compute-0 nova_compute[117331]: 2025-10-09 16:39:27.374 2 DEBUG nova.compute.resource_tracker [None req-dc1aa2ae-bf27-4733-9d1e-fa72df9f61f8 005258eefa5345409efe13f6a1de22ab c076c42271004e7aa86e84416e33f826 - - default default] Migration 23cf222b-3267-4a00-82ba-480cb5d9bc11 is active on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.12/site-packages/nova/compute/resource_tracker.py:1745
Oct 09 16:39:27 compute-0 nova_compute[117331]: 2025-10-09 16:39:27.374 2 DEBUG nova.compute.resource_tracker [None req-dc1aa2ae-bf27-4733-9d1e-fa72df9f61f8 005258eefa5345409efe13f6a1de22ab c076c42271004e7aa86e84416e33f826 - - default default] Total usable vcpus: 8, total allocated vcpus: 1 _report_final_resource_view /usr/lib/python3.12/site-packages/nova/compute/resource_tracker.py:1159
Oct 09 16:39:27 compute-0 nova_compute[117331]: 2025-10-09 16:39:27.375 2 DEBUG nova.compute.resource_tracker [None req-dc1aa2ae-bf27-4733-9d1e-fa72df9f61f8 005258eefa5345409efe13f6a1de22ab c076c42271004e7aa86e84416e33f826 - - default default] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=640MB phys_disk=79GB used_disk=1GB total_vcpus=8 used_vcpus=1 pci_stats=[] stats={'failed_builds': '0', 'uptime': ' 16:39:25 up 48 min,  0 user,  load average: 0.38, 0.48, 0.46\n', 'num_instances': '1', 'num_vm_active': '1', 'num_task_migrating': '1', 'num_os_type_None': '1', 'num_proj_a60acfe52e4b4b7f912654a59f0978b7': '1', 'io_workload': '0'} _report_final_resource_view /usr/lib/python3.12/site-packages/nova/compute/resource_tracker.py:1168
Oct 09 16:39:27 compute-0 nova_compute[117331]: 2025-10-09 16:39:27.377 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 09 16:39:27 compute-0 nova_compute[117331]: 2025-10-09 16:39:27.616 2 DEBUG nova.compute.provider_tree [None req-dc1aa2ae-bf27-4733-9d1e-fa72df9f61f8 005258eefa5345409efe13f6a1de22ab c076c42271004e7aa86e84416e33f826 - - default default] Inventory has not changed in ProviderTree for provider: 593051b8-2000-437f-a915-2616fc8b1671 update_inventory /usr/lib/python3.12/site-packages/nova/compute/provider_tree.py:180
Oct 09 16:39:28 compute-0 nova_compute[117331]: 2025-10-09 16:39:28.122 2 DEBUG nova.scheduler.client.report [None req-dc1aa2ae-bf27-4733-9d1e-fa72df9f61f8 005258eefa5345409efe13f6a1de22ab c076c42271004e7aa86e84416e33f826 - - default default] Inventory has not changed for provider 593051b8-2000-437f-a915-2616fc8b1671 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 79, 'reserved': 1, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.12/site-packages/nova/scheduler/client/report.py:958
Oct 09 16:39:28 compute-0 nova_compute[117331]: 2025-10-09 16:39:28.632 2 DEBUG nova.compute.resource_tracker [None req-dc1aa2ae-bf27-4733-9d1e-fa72df9f61f8 005258eefa5345409efe13f6a1de22ab c076c42271004e7aa86e84416e33f826 - - default default] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.12/site-packages/nova/compute/resource_tracker.py:1097
Oct 09 16:39:28 compute-0 nova_compute[117331]: 2025-10-09 16:39:28.632 2 DEBUG oslo_concurrency.lockutils [None req-dc1aa2ae-bf27-4733-9d1e-fa72df9f61f8 005258eefa5345409efe13f6a1de22ab c076c42271004e7aa86e84416e33f826 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 2.814s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:424
Oct 09 16:39:28 compute-0 nova_compute[117331]: 2025-10-09 16:39:28.648 2 INFO nova.compute.manager [None req-dc1aa2ae-bf27-4733-9d1e-fa72df9f61f8 005258eefa5345409efe13f6a1de22ab c076c42271004e7aa86e84416e33f826 - - default default] [instance: 56389b25-6fdb-41ad-ab62-b6872a816180] Migrating instance to compute-1.ctlplane.example.com finished successfully.
Oct 09 16:39:29 compute-0 nova_compute[117331]: 2025-10-09 16:39:29.723 2 INFO nova.scheduler.client.report [None req-dc1aa2ae-bf27-4733-9d1e-fa72df9f61f8 005258eefa5345409efe13f6a1de22ab c076c42271004e7aa86e84416e33f826 - - default default] Deleted allocation for migration 604b7ad3-b402-4b63-a5d9-acef1fb93569
Oct 09 16:39:29 compute-0 nova_compute[117331]: 2025-10-09 16:39:29.724 2 DEBUG nova.virt.libvirt.driver [None req-dc1aa2ae-bf27-4733-9d1e-fa72df9f61f8 005258eefa5345409efe13f6a1de22ab c076c42271004e7aa86e84416e33f826 - - default default] [instance: 56389b25-6fdb-41ad-ab62-b6872a816180] Live migration monitoring is all done _live_migration /usr/lib/python3.12/site-packages/nova/virt/libvirt/driver.py:11566
Oct 09 16:39:29 compute-0 podman[127775]: time="2025-10-09T16:39:29Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Oct 09 16:39:29 compute-0 podman[127775]: @ - - [09/Oct/2025:16:39:29 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 20749 "" "Go-http-client/1.1"
Oct 09 16:39:29 compute-0 podman[127775]: @ - - [09/Oct/2025:16:39:29 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 3496 "" "Go-http-client/1.1"
Oct 09 16:39:29 compute-0 nova_compute[117331]: 2025-10-09 16:39:29.930 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 09 16:39:30 compute-0 nova_compute[117331]: 2025-10-09 16:39:30.741 2 DEBUG oslo_concurrency.processutils [None req-b13a42ce-7734-4c51-aa6a-4687cb3b7ca3 005258eefa5345409efe13f6a1de22ab c076c42271004e7aa86e84416e33f826 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/5963f55d-5e3f-4b07-86ef-554333267b8f/disk --force-share --output=json execute /usr/lib/python3.12/site-packages/oslo_concurrency/processutils.py:349
Oct 09 16:39:30 compute-0 nova_compute[117331]: 2025-10-09 16:39:30.832 2 DEBUG oslo_concurrency.processutils [None req-b13a42ce-7734-4c51-aa6a-4687cb3b7ca3 005258eefa5345409efe13f6a1de22ab c076c42271004e7aa86e84416e33f826 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/5963f55d-5e3f-4b07-86ef-554333267b8f/disk --force-share --output=json" returned: 0 in 0.091s execute /usr/lib/python3.12/site-packages/oslo_concurrency/processutils.py:372
Oct 09 16:39:30 compute-0 nova_compute[117331]: 2025-10-09 16:39:30.833 2 DEBUG oslo_concurrency.processutils [None req-b13a42ce-7734-4c51-aa6a-4687cb3b7ca3 005258eefa5345409efe13f6a1de22ab c076c42271004e7aa86e84416e33f826 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/5963f55d-5e3f-4b07-86ef-554333267b8f/disk --force-share --output=json execute /usr/lib/python3.12/site-packages/oslo_concurrency/processutils.py:349
Oct 09 16:39:30 compute-0 nova_compute[117331]: 2025-10-09 16:39:30.890 2 DEBUG oslo_concurrency.processutils [None req-b13a42ce-7734-4c51-aa6a-4687cb3b7ca3 005258eefa5345409efe13f6a1de22ab c076c42271004e7aa86e84416e33f826 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/5963f55d-5e3f-4b07-86ef-554333267b8f/disk --force-share --output=json" returned: 0 in 0.057s execute /usr/lib/python3.12/site-packages/oslo_concurrency/processutils.py:372
Oct 09 16:39:30 compute-0 nova_compute[117331]: 2025-10-09 16:39:30.892 2 DEBUG nova.compute.manager [None req-b13a42ce-7734-4c51-aa6a-4687cb3b7ca3 005258eefa5345409efe13f6a1de22ab c076c42271004e7aa86e84416e33f826 - - default default] [instance: 5963f55d-5e3f-4b07-86ef-554333267b8f] Preparing to wait for external event network-vif-plugged-0060f48c-30fd-4d98-810c-c9953d57fbb2 prepare_for_instance_event /usr/lib/python3.12/site-packages/nova/compute/manager.py:307
Oct 09 16:39:30 compute-0 nova_compute[117331]: 2025-10-09 16:39:30.892 2 DEBUG oslo_concurrency.lockutils [None req-b13a42ce-7734-4c51-aa6a-4687cb3b7ca3 005258eefa5345409efe13f6a1de22ab c076c42271004e7aa86e84416e33f826 - - default default] Acquiring lock "5963f55d-5e3f-4b07-86ef-554333267b8f-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:405
Oct 09 16:39:30 compute-0 nova_compute[117331]: 2025-10-09 16:39:30.892 2 DEBUG oslo_concurrency.lockutils [None req-b13a42ce-7734-4c51-aa6a-4687cb3b7ca3 005258eefa5345409efe13f6a1de22ab c076c42271004e7aa86e84416e33f826 - - default default] Lock "5963f55d-5e3f-4b07-86ef-554333267b8f-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:410
Oct 09 16:39:30 compute-0 nova_compute[117331]: 2025-10-09 16:39:30.892 2 DEBUG oslo_concurrency.lockutils [None req-b13a42ce-7734-4c51-aa6a-4687cb3b7ca3 005258eefa5345409efe13f6a1de22ab c076c42271004e7aa86e84416e33f826 - - default default] Lock "5963f55d-5e3f-4b07-86ef-554333267b8f-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:424
Oct 09 16:39:31 compute-0 nova_compute[117331]: 2025-10-09 16:39:31.306 2 DEBUG oslo_service.periodic_task [None req-92a13bed-c081-4c7b-87f3-bafebefb79f2 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.12/site-packages/oslo_service/periodic_task.py:210
Oct 09 16:39:31 compute-0 nova_compute[117331]: 2025-10-09 16:39:31.307 2 DEBUG oslo_service.periodic_task [None req-92a13bed-c081-4c7b-87f3-bafebefb79f2 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.12/site-packages/oslo_service/periodic_task.py:210
Oct 09 16:39:31 compute-0 openstack_network_exporter[129925]: ERROR   16:39:31 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Oct 09 16:39:31 compute-0 openstack_network_exporter[129925]: ERROR   16:39:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Oct 09 16:39:31 compute-0 openstack_network_exporter[129925]: ERROR   16:39:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Oct 09 16:39:31 compute-0 openstack_network_exporter[129925]: ERROR   16:39:31 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Oct 09 16:39:31 compute-0 openstack_network_exporter[129925]: 
Oct 09 16:39:31 compute-0 openstack_network_exporter[129925]: ERROR   16:39:31 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Oct 09 16:39:31 compute-0 openstack_network_exporter[129925]: 
Oct 09 16:39:31 compute-0 podman[153461]: 2025-10-09 16:39:31.864506029 +0000 UTC m=+0.085975573 container health_status 270830b5167cafbab735d8118980f26987e673cd0fa464dd2202394830bcca66 (image=38.102.83.66:5001/podified-master-centos10/openstack-multipathd:watcher_latest, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': '38.102.83.66:5001/podified-master-centos10/openstack-multipathd:watcher_latest', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.build-date=20251007, org.label-schema.name=CentOS Stream 10 Base Image, config_id=multipathd, container_name=multipathd, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, io.buildah.version=1.41.4, tcib_build_tag=watcher_latest)
Oct 09 16:39:32 compute-0 nova_compute[117331]: 2025-10-09 16:39:32.378 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 09 16:39:33 compute-0 sshd-session[153483]: Connection closed by 124.60.67.43 port 57648
Oct 09 16:39:34 compute-0 nova_compute[117331]: 2025-10-09 16:39:34.934 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 09 16:39:35 compute-0 ovn_metadata_agent[28608]: 2025-10-09 16:39:35.338 28613 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:405
Oct 09 16:39:35 compute-0 ovn_metadata_agent[28608]: 2025-10-09 16:39:35.338 28613 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:410
Oct 09 16:39:35 compute-0 ovn_metadata_agent[28608]: 2025-10-09 16:39:35.339 28613 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:424
Oct 09 16:39:36 compute-0 nova_compute[117331]: 2025-10-09 16:39:36.308 2 DEBUG oslo_service.periodic_task [None req-92a13bed-c081-4c7b-87f3-bafebefb79f2 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.12/site-packages/oslo_service/periodic_task.py:210
Oct 09 16:39:36 compute-0 nova_compute[117331]: 2025-10-09 16:39:36.575 2 DEBUG nova.compute.manager [req-751a89e2-e13b-4641-b991-c33b4f507066 req-46b19f6d-24c6-4f41-a41a-e5b71a5fdc85 ee15961ad2be4bada6c106ac4f69f422 c076c42271004e7aa86e84416e33f826 - - default default] [instance: 5963f55d-5e3f-4b07-86ef-554333267b8f] Received event network-vif-unplugged-0060f48c-30fd-4d98-810c-c9953d57fbb2 external_instance_event /usr/lib/python3.12/site-packages/nova/compute/manager.py:11812
Oct 09 16:39:36 compute-0 nova_compute[117331]: 2025-10-09 16:39:36.575 2 DEBUG oslo_concurrency.lockutils [req-751a89e2-e13b-4641-b991-c33b4f507066 req-46b19f6d-24c6-4f41-a41a-e5b71a5fdc85 ee15961ad2be4bada6c106ac4f69f422 c076c42271004e7aa86e84416e33f826 - - default default] Acquiring lock "5963f55d-5e3f-4b07-86ef-554333267b8f-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:405
Oct 09 16:39:36 compute-0 nova_compute[117331]: 2025-10-09 16:39:36.576 2 DEBUG oslo_concurrency.lockutils [req-751a89e2-e13b-4641-b991-c33b4f507066 req-46b19f6d-24c6-4f41-a41a-e5b71a5fdc85 ee15961ad2be4bada6c106ac4f69f422 c076c42271004e7aa86e84416e33f826 - - default default] Lock "5963f55d-5e3f-4b07-86ef-554333267b8f-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:410
Oct 09 16:39:36 compute-0 nova_compute[117331]: 2025-10-09 16:39:36.576 2 DEBUG oslo_concurrency.lockutils [req-751a89e2-e13b-4641-b991-c33b4f507066 req-46b19f6d-24c6-4f41-a41a-e5b71a5fdc85 ee15961ad2be4bada6c106ac4f69f422 c076c42271004e7aa86e84416e33f826 - - default default] Lock "5963f55d-5e3f-4b07-86ef-554333267b8f-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:424
Oct 09 16:39:36 compute-0 nova_compute[117331]: 2025-10-09 16:39:36.576 2 DEBUG nova.compute.manager [req-751a89e2-e13b-4641-b991-c33b4f507066 req-46b19f6d-24c6-4f41-a41a-e5b71a5fdc85 ee15961ad2be4bada6c106ac4f69f422 c076c42271004e7aa86e84416e33f826 - - default default] [instance: 5963f55d-5e3f-4b07-86ef-554333267b8f] No event matching network-vif-unplugged-0060f48c-30fd-4d98-810c-c9953d57fbb2 in dict_keys([('network-vif-plugged', '0060f48c-30fd-4d98-810c-c9953d57fbb2')]) pop_instance_event /usr/lib/python3.12/site-packages/nova/compute/manager.py:349
Oct 09 16:39:36 compute-0 nova_compute[117331]: 2025-10-09 16:39:36.577 2 DEBUG nova.compute.manager [req-751a89e2-e13b-4641-b991-c33b4f507066 req-46b19f6d-24c6-4f41-a41a-e5b71a5fdc85 ee15961ad2be4bada6c106ac4f69f422 c076c42271004e7aa86e84416e33f826 - - default default] [instance: 5963f55d-5e3f-4b07-86ef-554333267b8f] Received event network-vif-unplugged-0060f48c-30fd-4d98-810c-c9953d57fbb2 for instance with task_state migrating. _process_instance_event /usr/lib/python3.12/site-packages/nova/compute/manager.py:11590
Oct 09 16:39:36 compute-0 nova_compute[117331]: 2025-10-09 16:39:36.821 2 DEBUG oslo_concurrency.lockutils [None req-92a13bed-c081-4c7b-87f3-bafebefb79f2 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:405
Oct 09 16:39:36 compute-0 nova_compute[117331]: 2025-10-09 16:39:36.822 2 DEBUG oslo_concurrency.lockutils [None req-92a13bed-c081-4c7b-87f3-bafebefb79f2 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:410
Oct 09 16:39:36 compute-0 nova_compute[117331]: 2025-10-09 16:39:36.822 2 DEBUG oslo_concurrency.lockutils [None req-92a13bed-c081-4c7b-87f3-bafebefb79f2 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:424
Oct 09 16:39:36 compute-0 nova_compute[117331]: 2025-10-09 16:39:36.822 2 DEBUG nova.compute.resource_tracker [None req-92a13bed-c081-4c7b-87f3-bafebefb79f2 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.12/site-packages/nova/compute/resource_tracker.py:937
Oct 09 16:39:36 compute-0 podman[153487]: 2025-10-09 16:39:36.836575502 +0000 UTC m=+0.062062336 container health_status 86aa93e887899e54d383b0fc17fb3c5637680d1d8e146a130a557c837f0260fe (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter)
Oct 09 16:39:37 compute-0 nova_compute[117331]: 2025-10-09 16:39:37.380 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 09 16:39:37 compute-0 nova_compute[117331]: 2025-10-09 16:39:37.864 2 DEBUG oslo_concurrency.processutils [None req-92a13bed-c081-4c7b-87f3-bafebefb79f2 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/5963f55d-5e3f-4b07-86ef-554333267b8f/disk --force-share --output=json execute /usr/lib/python3.12/site-packages/oslo_concurrency/processutils.py:349
Oct 09 16:39:37 compute-0 nova_compute[117331]: 2025-10-09 16:39:37.927 2 DEBUG oslo_concurrency.processutils [None req-92a13bed-c081-4c7b-87f3-bafebefb79f2 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/5963f55d-5e3f-4b07-86ef-554333267b8f/disk --force-share --output=json" returned: 0 in 0.063s execute /usr/lib/python3.12/site-packages/oslo_concurrency/processutils.py:372
Oct 09 16:39:37 compute-0 nova_compute[117331]: 2025-10-09 16:39:37.928 2 DEBUG oslo_concurrency.processutils [None req-92a13bed-c081-4c7b-87f3-bafebefb79f2 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/5963f55d-5e3f-4b07-86ef-554333267b8f/disk --force-share --output=json execute /usr/lib/python3.12/site-packages/oslo_concurrency/processutils.py:349
Oct 09 16:39:37 compute-0 nova_compute[117331]: 2025-10-09 16:39:37.992 2 DEBUG oslo_concurrency.processutils [None req-92a13bed-c081-4c7b-87f3-bafebefb79f2 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/5963f55d-5e3f-4b07-86ef-554333267b8f/disk --force-share --output=json" returned: 0 in 0.064s execute /usr/lib/python3.12/site-packages/oslo_concurrency/processutils.py:372
Oct 09 16:39:38 compute-0 nova_compute[117331]: 2025-10-09 16:39:38.129 2 WARNING nova.virt.libvirt.driver [None req-92a13bed-c081-4c7b-87f3-bafebefb79f2 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Oct 09 16:39:38 compute-0 nova_compute[117331]: 2025-10-09 16:39:38.130 2 DEBUG oslo_concurrency.processutils [None req-92a13bed-c081-4c7b-87f3-bafebefb79f2 - - - - - -] Running cmd (subprocess): env LANG=C uptime execute /usr/lib/python3.12/site-packages/oslo_concurrency/processutils.py:349
Oct 09 16:39:38 compute-0 nova_compute[117331]: 2025-10-09 16:39:38.149 2 DEBUG oslo_concurrency.processutils [None req-92a13bed-c081-4c7b-87f3-bafebefb79f2 - - - - - -] CMD "env LANG=C uptime" returned: 0 in 0.020s execute /usr/lib/python3.12/site-packages/oslo_concurrency/processutils.py:372
Oct 09 16:39:38 compute-0 nova_compute[117331]: 2025-10-09 16:39:38.150 2 DEBUG nova.compute.resource_tracker [None req-92a13bed-c081-4c7b-87f3-bafebefb79f2 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=5959MB free_disk=73.22060012817383GB free_vcpus=7 pci_devices=[{"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": 
"label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.12/site-packages/nova/compute/resource_tracker.py:1136
Oct 09 16:39:38 compute-0 nova_compute[117331]: 2025-10-09 16:39:38.150 2 DEBUG oslo_concurrency.lockutils [None req-92a13bed-c081-4c7b-87f3-bafebefb79f2 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:405
Oct 09 16:39:38 compute-0 nova_compute[117331]: 2025-10-09 16:39:38.151 2 DEBUG oslo_concurrency.lockutils [None req-92a13bed-c081-4c7b-87f3-bafebefb79f2 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:410
Oct 09 16:39:38 compute-0 nova_compute[117331]: 2025-10-09 16:39:38.419 2 INFO nova.compute.manager [None req-b13a42ce-7734-4c51-aa6a-4687cb3b7ca3 005258eefa5345409efe13f6a1de22ab c076c42271004e7aa86e84416e33f826 - - default default] [instance: 5963f55d-5e3f-4b07-86ef-554333267b8f] Took 7.53 seconds for pre_live_migration on destination host compute-1.ctlplane.example.com.
Oct 09 16:39:38 compute-0 nova_compute[117331]: 2025-10-09 16:39:38.619 2 DEBUG nova.compute.manager [req-70b0d656-3337-4c93-8a16-56c408931f55 req-62443060-d358-4d80-8dcf-7d9adc3f6ae5 ee15961ad2be4bada6c106ac4f69f422 c076c42271004e7aa86e84416e33f826 - - default default] [instance: 5963f55d-5e3f-4b07-86ef-554333267b8f] Received event network-vif-plugged-0060f48c-30fd-4d98-810c-c9953d57fbb2 external_instance_event /usr/lib/python3.12/site-packages/nova/compute/manager.py:11812
Oct 09 16:39:38 compute-0 nova_compute[117331]: 2025-10-09 16:39:38.619 2 DEBUG oslo_concurrency.lockutils [req-70b0d656-3337-4c93-8a16-56c408931f55 req-62443060-d358-4d80-8dcf-7d9adc3f6ae5 ee15961ad2be4bada6c106ac4f69f422 c076c42271004e7aa86e84416e33f826 - - default default] Acquiring lock "5963f55d-5e3f-4b07-86ef-554333267b8f-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:405
Oct 09 16:39:38 compute-0 nova_compute[117331]: 2025-10-09 16:39:38.619 2 DEBUG oslo_concurrency.lockutils [req-70b0d656-3337-4c93-8a16-56c408931f55 req-62443060-d358-4d80-8dcf-7d9adc3f6ae5 ee15961ad2be4bada6c106ac4f69f422 c076c42271004e7aa86e84416e33f826 - - default default] Lock "5963f55d-5e3f-4b07-86ef-554333267b8f-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:410
Oct 09 16:39:38 compute-0 nova_compute[117331]: 2025-10-09 16:39:38.620 2 DEBUG oslo_concurrency.lockutils [req-70b0d656-3337-4c93-8a16-56c408931f55 req-62443060-d358-4d80-8dcf-7d9adc3f6ae5 ee15961ad2be4bada6c106ac4f69f422 c076c42271004e7aa86e84416e33f826 - - default default] Lock "5963f55d-5e3f-4b07-86ef-554333267b8f-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:424
Oct 09 16:39:38 compute-0 nova_compute[117331]: 2025-10-09 16:39:38.620 2 DEBUG nova.compute.manager [req-70b0d656-3337-4c93-8a16-56c408931f55 req-62443060-d358-4d80-8dcf-7d9adc3f6ae5 ee15961ad2be4bada6c106ac4f69f422 c076c42271004e7aa86e84416e33f826 - - default default] [instance: 5963f55d-5e3f-4b07-86ef-554333267b8f] Processing event network-vif-plugged-0060f48c-30fd-4d98-810c-c9953d57fbb2 _process_instance_event /usr/lib/python3.12/site-packages/nova/compute/manager.py:11572
Oct 09 16:39:38 compute-0 nova_compute[117331]: 2025-10-09 16:39:38.620 2 DEBUG nova.compute.manager [req-70b0d656-3337-4c93-8a16-56c408931f55 req-62443060-d358-4d80-8dcf-7d9adc3f6ae5 ee15961ad2be4bada6c106ac4f69f422 c076c42271004e7aa86e84416e33f826 - - default default] [instance: 5963f55d-5e3f-4b07-86ef-554333267b8f] Received event network-changed-0060f48c-30fd-4d98-810c-c9953d57fbb2 external_instance_event /usr/lib/python3.12/site-packages/nova/compute/manager.py:11812
Oct 09 16:39:38 compute-0 nova_compute[117331]: 2025-10-09 16:39:38.620 2 DEBUG nova.compute.manager [req-70b0d656-3337-4c93-8a16-56c408931f55 req-62443060-d358-4d80-8dcf-7d9adc3f6ae5 ee15961ad2be4bada6c106ac4f69f422 c076c42271004e7aa86e84416e33f826 - - default default] [instance: 5963f55d-5e3f-4b07-86ef-554333267b8f] Refreshing instance network info cache due to event network-changed-0060f48c-30fd-4d98-810c-c9953d57fbb2. external_instance_event /usr/lib/python3.12/site-packages/nova/compute/manager.py:11817
Oct 09 16:39:38 compute-0 nova_compute[117331]: 2025-10-09 16:39:38.620 2 DEBUG oslo_concurrency.lockutils [req-70b0d656-3337-4c93-8a16-56c408931f55 req-62443060-d358-4d80-8dcf-7d9adc3f6ae5 ee15961ad2be4bada6c106ac4f69f422 c076c42271004e7aa86e84416e33f826 - - default default] Acquiring lock "refresh_cache-5963f55d-5e3f-4b07-86ef-554333267b8f" lock /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:313
Oct 09 16:39:38 compute-0 nova_compute[117331]: 2025-10-09 16:39:38.620 2 DEBUG oslo_concurrency.lockutils [req-70b0d656-3337-4c93-8a16-56c408931f55 req-62443060-d358-4d80-8dcf-7d9adc3f6ae5 ee15961ad2be4bada6c106ac4f69f422 c076c42271004e7aa86e84416e33f826 - - default default] Acquired lock "refresh_cache-5963f55d-5e3f-4b07-86ef-554333267b8f" lock /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:316
Oct 09 16:39:38 compute-0 nova_compute[117331]: 2025-10-09 16:39:38.621 2 DEBUG nova.network.neutron [req-70b0d656-3337-4c93-8a16-56c408931f55 req-62443060-d358-4d80-8dcf-7d9adc3f6ae5 ee15961ad2be4bada6c106ac4f69f422 c076c42271004e7aa86e84416e33f826 - - default default] [instance: 5963f55d-5e3f-4b07-86ef-554333267b8f] Refreshing network info cache for port 0060f48c-30fd-4d98-810c-c9953d57fbb2 _get_instance_nw_info /usr/lib/python3.12/site-packages/nova/network/neutron.py:2067
Oct 09 16:39:38 compute-0 nova_compute[117331]: 2025-10-09 16:39:38.622 2 DEBUG nova.compute.manager [None req-b13a42ce-7734-4c51-aa6a-4687cb3b7ca3 005258eefa5345409efe13f6a1de22ab c076c42271004e7aa86e84416e33f826 - - default default] [instance: 5963f55d-5e3f-4b07-86ef-554333267b8f] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.12/site-packages/nova/compute/manager.py:602
Oct 09 16:39:39 compute-0 nova_compute[117331]: 2025-10-09 16:39:39.128 2 WARNING neutronclient.v2_0.client [req-70b0d656-3337-4c93-8a16-56c408931f55 req-62443060-d358-4d80-8dcf-7d9adc3f6ae5 ee15961ad2be4bada6c106ac4f69f422 c076c42271004e7aa86e84416e33f826 - - default default] The python binding code in neutronclient is deprecated in favor of OpenstackSDK, please use that as this will be removed in a future release.
Oct 09 16:39:39 compute-0 nova_compute[117331]: 2025-10-09 16:39:39.133 2 DEBUG nova.compute.manager [None req-b13a42ce-7734-4c51-aa6a-4687cb3b7ca3 005258eefa5345409efe13f6a1de22ab c076c42271004e7aa86e84416e33f826 - - default default] live_migration data is LibvirtLiveMigrateData(bdms=[],block_migration=True,disk_available_mb=74752,disk_over_commit=<?>,dst_cpu_shared_set_info=set([]),dst_numa_info=<?>,dst_supports_mdev_live_migration=True,dst_supports_numa_live_migration=<?>,dst_wants_file_backed_memory=False,file_backed_memory_discard=<?>,filename='tmp9ok_jhlg',graphics_listen_addr_spice=127.0.0.1,graphics_listen_addr_vnc=::,image_type='qcow2',instance_relative_path='5963f55d-5e3f-4b07-86ef-554333267b8f',is_shared_block_storage=False,is_shared_instance_path=False,is_volume_backed=False,migration=Migration(23cf222b-3267-4a00-82ba-480cb5d9bc11),old_vol_attachment_ids={},pci_dev_map_src_dst={},serial_listen_addr=None,serial_listen_ports=[],source_mdev_types=<?>,src_supports_native_luks=<?>,src_supports_numa_live_migration=<?>,supported_perf_events=[],target_connect_addr=None,target_mdevs=<?>,vifs=[VIFMigrateData],wait_for_vif_plugged=True) _do_live_migration /usr/lib/python3.12/site-packages/nova/compute/manager.py:9659
Oct 09 16:39:39 compute-0 nova_compute[117331]: 2025-10-09 16:39:39.170 2 INFO nova.compute.resource_tracker [None req-92a13bed-c081-4c7b-87f3-bafebefb79f2 - - - - - -] [instance: 5963f55d-5e3f-4b07-86ef-554333267b8f] Updating resource usage from migration 23cf222b-3267-4a00-82ba-480cb5d9bc11
Oct 09 16:39:39 compute-0 nova_compute[117331]: 2025-10-09 16:39:39.249 2 DEBUG nova.compute.resource_tracker [None req-92a13bed-c081-4c7b-87f3-bafebefb79f2 - - - - - -] Migration 23cf222b-3267-4a00-82ba-480cb5d9bc11 is active on this compute host and has allocations in placement: {'resources': {'VCPU': 1, 'MEMORY_MB': 128, 'DISK_GB': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.12/site-packages/nova/compute/resource_tracker.py:1745
Oct 09 16:39:39 compute-0 nova_compute[117331]: 2025-10-09 16:39:39.249 2 DEBUG nova.compute.resource_tracker [None req-92a13bed-c081-4c7b-87f3-bafebefb79f2 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 1 _report_final_resource_view /usr/lib/python3.12/site-packages/nova/compute/resource_tracker.py:1159
Oct 09 16:39:39 compute-0 nova_compute[117331]: 2025-10-09 16:39:39.249 2 DEBUG nova.compute.resource_tracker [None req-92a13bed-c081-4c7b-87f3-bafebefb79f2 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=640MB phys_disk=79GB used_disk=1GB total_vcpus=8 used_vcpus=1 pci_stats=[] stats={'failed_builds': '0', 'uptime': ' 16:39:38 up 48 min,  0 user,  load average: 0.29, 0.45, 0.45\n', 'num_instances': '1', 'num_vm_active': '1', 'num_task_migrating': '1', 'num_os_type_None': '1', 'num_proj_a60acfe52e4b4b7f912654a59f0978b7': '1', 'io_workload': '0'} _report_final_resource_view /usr/lib/python3.12/site-packages/nova/compute/resource_tracker.py:1168
Oct 09 16:39:39 compute-0 nova_compute[117331]: 2025-10-09 16:39:39.312 2 DEBUG nova.compute.provider_tree [None req-92a13bed-c081-4c7b-87f3-bafebefb79f2 - - - - - -] Inventory has not changed in ProviderTree for provider: 593051b8-2000-437f-a915-2616fc8b1671 update_inventory /usr/lib/python3.12/site-packages/nova/compute/provider_tree.py:180
Oct 09 16:39:39 compute-0 nova_compute[117331]: 2025-10-09 16:39:39.509 2 WARNING neutronclient.v2_0.client [req-70b0d656-3337-4c93-8a16-56c408931f55 req-62443060-d358-4d80-8dcf-7d9adc3f6ae5 ee15961ad2be4bada6c106ac4f69f422 c076c42271004e7aa86e84416e33f826 - - default default] The python binding code in neutronclient is deprecated in favor of OpenstackSDK, please use that as this will be removed in a future release.
Oct 09 16:39:39 compute-0 nova_compute[117331]: 2025-10-09 16:39:39.645 2 DEBUG nova.network.neutron [req-70b0d656-3337-4c93-8a16-56c408931f55 req-62443060-d358-4d80-8dcf-7d9adc3f6ae5 ee15961ad2be4bada6c106ac4f69f422 c076c42271004e7aa86e84416e33f826 - - default default] [instance: 5963f55d-5e3f-4b07-86ef-554333267b8f] Updated VIF entry in instance network info cache for port 0060f48c-30fd-4d98-810c-c9953d57fbb2. _build_network_info_model /usr/lib/python3.12/site-packages/nova/network/neutron.py:3542
Oct 09 16:39:39 compute-0 nova_compute[117331]: 2025-10-09 16:39:39.646 2 DEBUG nova.network.neutron [req-70b0d656-3337-4c93-8a16-56c408931f55 req-62443060-d358-4d80-8dcf-7d9adc3f6ae5 ee15961ad2be4bada6c106ac4f69f422 c076c42271004e7aa86e84416e33f826 - - default default] [instance: 5963f55d-5e3f-4b07-86ef-554333267b8f] Updating instance_info_cache with network_info: [{"id": "0060f48c-30fd-4d98-810c-c9953d57fbb2", "address": "fa:16:3e:36:08:54", "network": {"id": "cf3aa351-4d26-41f3-8cb5-1ff2d3d995c8", "bridge": "br-int", "label": "tempest-TestExecuteWorkloadStabilizationStrategy-105109230-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "5b6d650a71ca47d58b10f5fd874c4898", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap0060f48c-30", "ovs_interfaceid": "0060f48c-30fd-4d98-810c-c9953d57fbb2", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {"migrating_to": "compute-1.ctlplane.example.com"}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.12/site-packages/nova/network/neutron.py:116
Oct 09 16:39:39 compute-0 nova_compute[117331]: 2025-10-09 16:39:39.647 2 DEBUG nova.objects.instance [None req-b13a42ce-7734-4c51-aa6a-4687cb3b7ca3 005258eefa5345409efe13f6a1de22ab c076c42271004e7aa86e84416e33f826 - - default default] Lazy-loading 'migration_context' on Instance uuid 5963f55d-5e3f-4b07-86ef-554333267b8f obj_load_attr /usr/lib/python3.12/site-packages/nova/objects/instance.py:1141
Oct 09 16:39:39 compute-0 nova_compute[117331]: 2025-10-09 16:39:39.648 2 DEBUG nova.virt.libvirt.driver [None req-b13a42ce-7734-4c51-aa6a-4687cb3b7ca3 005258eefa5345409efe13f6a1de22ab c076c42271004e7aa86e84416e33f826 - - default default] [instance: 5963f55d-5e3f-4b07-86ef-554333267b8f] Starting monitoring of live migration _live_migration /usr/lib/python3.12/site-packages/nova/virt/libvirt/driver.py:11543
Oct 09 16:39:39 compute-0 nova_compute[117331]: 2025-10-09 16:39:39.650 2 DEBUG nova.virt.libvirt.driver [None req-b13a42ce-7734-4c51-aa6a-4687cb3b7ca3 005258eefa5345409efe13f6a1de22ab c076c42271004e7aa86e84416e33f826 - - default default] [instance: 5963f55d-5e3f-4b07-86ef-554333267b8f] Operation thread is still running _live_migration_monitor /usr/lib/python3.12/site-packages/nova/virt/libvirt/driver.py:11343
Oct 09 16:39:39 compute-0 nova_compute[117331]: 2025-10-09 16:39:39.651 2 DEBUG nova.virt.libvirt.driver [None req-b13a42ce-7734-4c51-aa6a-4687cb3b7ca3 005258eefa5345409efe13f6a1de22ab c076c42271004e7aa86e84416e33f826 - - default default] [instance: 5963f55d-5e3f-4b07-86ef-554333267b8f] Migration not running yet _live_migration_monitor /usr/lib/python3.12/site-packages/nova/virt/libvirt/driver.py:11352
Oct 09 16:39:39 compute-0 nova_compute[117331]: 2025-10-09 16:39:39.817 2 DEBUG nova.scheduler.client.report [None req-92a13bed-c081-4c7b-87f3-bafebefb79f2 - - - - - -] Inventory has not changed for provider 593051b8-2000-437f-a915-2616fc8b1671 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 79, 'reserved': 1, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.12/site-packages/nova/scheduler/client/report.py:958
Oct 09 16:39:39 compute-0 nova_compute[117331]: 2025-10-09 16:39:39.937 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 09 16:39:40 compute-0 nova_compute[117331]: 2025-10-09 16:39:40.152 2 DEBUG nova.virt.libvirt.driver [None req-b13a42ce-7734-4c51-aa6a-4687cb3b7ca3 005258eefa5345409efe13f6a1de22ab c076c42271004e7aa86e84416e33f826 - - default default] [instance: 5963f55d-5e3f-4b07-86ef-554333267b8f] Operation thread is still running _live_migration_monitor /usr/lib/python3.12/site-packages/nova/virt/libvirt/driver.py:11343
Oct 09 16:39:40 compute-0 nova_compute[117331]: 2025-10-09 16:39:40.153 2 DEBUG nova.virt.libvirt.driver [None req-b13a42ce-7734-4c51-aa6a-4687cb3b7ca3 005258eefa5345409efe13f6a1de22ab c076c42271004e7aa86e84416e33f826 - - default default] [instance: 5963f55d-5e3f-4b07-86ef-554333267b8f] Migration not running yet _live_migration_monitor /usr/lib/python3.12/site-packages/nova/virt/libvirt/driver.py:11352
Oct 09 16:39:40 compute-0 nova_compute[117331]: 2025-10-09 16:39:40.154 2 DEBUG oslo_concurrency.lockutils [req-70b0d656-3337-4c93-8a16-56c408931f55 req-62443060-d358-4d80-8dcf-7d9adc3f6ae5 ee15961ad2be4bada6c106ac4f69f422 c076c42271004e7aa86e84416e33f826 - - default default] Releasing lock "refresh_cache-5963f55d-5e3f-4b07-86ef-554333267b8f" lock /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:334
Oct 09 16:39:40 compute-0 nova_compute[117331]: 2025-10-09 16:39:40.160 2 DEBUG nova.virt.libvirt.vif [None req-b13a42ce-7734-4c51-aa6a-4687cb3b7ca3 005258eefa5345409efe13f6a1de22ab c076c42271004e7aa86e84416e33f826 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,compute_id=1,config_drive='True',created_at=2025-10-09T16:37:50Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description=None,display_name='tempest-TestExecuteWorkloadStabilizationStrategy-server-1807729568',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testexecuteworkloadstabilizationstrategy-server-1807729',id=28,image_ref='b7d6e0af-25e4-4227-9dc6-43143898ceee',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-10-09T16:38:07Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=1,progress=0,project_id='a60acfe52e4b4b7f912654a59f0978b7',ramdisk_id='',reservation_id='r-hkns0u0y',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='admin,member,reader,manager',image_base_image_ref='b7d6e0af-25e4-4227-9dc6-43143898ceee',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw
_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestExecuteWorkloadStabilizationStrategy-1000021673',owner_user_name='tempest-TestExecuteWorkloadStabilizationStrategy-1000021673-project-admin'},tags=<?>,task_state='migrating',terminated_at=None,trusted_certs=<?>,updated_at=2025-10-09T16:38:07Z,user_data=None,user_id='685b4924c5a04af7ae6f4a328bb50f14',uuid=5963f55d-5e3f-4b07-86ef-554333267b8f,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "0060f48c-30fd-4d98-810c-c9953d57fbb2", "address": "fa:16:3e:36:08:54", "network": {"id": "cf3aa351-4d26-41f3-8cb5-1ff2d3d995c8", "bridge": "br-int", "label": "tempest-TestExecuteWorkloadStabilizationStrategy-105109230-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "5b6d650a71ca47d58b10f5fd874c4898", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system"}, "devname": "tap0060f48c-30", "ovs_interfaceid": "0060f48c-30fd-4d98-810c-c9953d57fbb2", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {"os_vif_delegation": true}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.12/site-packages/nova/virt/libvirt/vif.py:574
Oct 09 16:39:40 compute-0 nova_compute[117331]: 2025-10-09 16:39:40.161 2 DEBUG nova.network.os_vif_util [None req-b13a42ce-7734-4c51-aa6a-4687cb3b7ca3 005258eefa5345409efe13f6a1de22ab c076c42271004e7aa86e84416e33f826 - - default default] Converting VIF {"id": "0060f48c-30fd-4d98-810c-c9953d57fbb2", "address": "fa:16:3e:36:08:54", "network": {"id": "cf3aa351-4d26-41f3-8cb5-1ff2d3d995c8", "bridge": "br-int", "label": "tempest-TestExecuteWorkloadStabilizationStrategy-105109230-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "5b6d650a71ca47d58b10f5fd874c4898", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system"}, "devname": "tap0060f48c-30", "ovs_interfaceid": "0060f48c-30fd-4d98-810c-c9953d57fbb2", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {"os_vif_delegation": true}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.12/site-packages/nova/network/os_vif_util.py:511
Oct 09 16:39:40 compute-0 nova_compute[117331]: 2025-10-09 16:39:40.162 2 DEBUG nova.network.os_vif_util [None req-b13a42ce-7734-4c51-aa6a-4687cb3b7ca3 005258eefa5345409efe13f6a1de22ab c076c42271004e7aa86e84416e33f826 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:36:08:54,bridge_name='br-int',has_traffic_filtering=True,id=0060f48c-30fd-4d98-810c-c9953d57fbb2,network=Network(cf3aa351-4d26-41f3-8cb5-1ff2d3d995c8),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap0060f48c-30') nova_to_osvif_vif /usr/lib/python3.12/site-packages/nova/network/os_vif_util.py:548
Oct 09 16:39:40 compute-0 nova_compute[117331]: 2025-10-09 16:39:40.163 2 DEBUG nova.virt.libvirt.migration [None req-b13a42ce-7734-4c51-aa6a-4687cb3b7ca3 005258eefa5345409efe13f6a1de22ab c076c42271004e7aa86e84416e33f826 - - default default] [instance: 5963f55d-5e3f-4b07-86ef-554333267b8f] Updating guest XML with vif config: <interface type="ethernet">
Oct 09 16:39:40 compute-0 nova_compute[117331]:   <mac address="fa:16:3e:36:08:54"/>
Oct 09 16:39:40 compute-0 nova_compute[117331]:   <model type="virtio"/>
Oct 09 16:39:40 compute-0 nova_compute[117331]:   <driver name="vhost" rx_queue_size="512"/>
Oct 09 16:39:40 compute-0 nova_compute[117331]:   <mtu size="1442"/>
Oct 09 16:39:40 compute-0 nova_compute[117331]:   <target dev="tap0060f48c-30"/>
Oct 09 16:39:40 compute-0 nova_compute[117331]: </interface>
Oct 09 16:39:40 compute-0 nova_compute[117331]:  _update_vif_xml /usr/lib/python3.12/site-packages/nova/virt/libvirt/migration.py:534
Oct 09 16:39:40 compute-0 nova_compute[117331]: 2025-10-09 16:39:40.164 2 DEBUG nova.virt.libvirt.migration [None req-b13a42ce-7734-4c51-aa6a-4687cb3b7ca3 005258eefa5345409efe13f6a1de22ab c076c42271004e7aa86e84416e33f826 - - default default] _remove_cpu_shared_set_xml input xml=<domain type="kvm">
Oct 09 16:39:40 compute-0 nova_compute[117331]:   <name>instance-0000001c</name>
Oct 09 16:39:40 compute-0 nova_compute[117331]:   <uuid>5963f55d-5e3f-4b07-86ef-554333267b8f</uuid>
Oct 09 16:39:40 compute-0 nova_compute[117331]:   <metadata>
Oct 09 16:39:40 compute-0 nova_compute[117331]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Oct 09 16:39:40 compute-0 nova_compute[117331]:       <nova:package version="32.1.0-0.20251008143712.076498e.el10"/>
Oct 09 16:39:40 compute-0 nova_compute[117331]:       <nova:name>tempest-TestExecuteWorkloadStabilizationStrategy-server-1807729568</nova:name>
Oct 09 16:39:40 compute-0 nova_compute[117331]:       <nova:creationTime>2025-10-09 16:38:02</nova:creationTime>
Oct 09 16:39:40 compute-0 nova_compute[117331]:       <nova:flavor name="m1.nano" id="5aeb5bf9-70f3-4ab6-b418-2c3319a7d2b3">
Oct 09 16:39:40 compute-0 nova_compute[117331]:         <nova:memory>128</nova:memory>
Oct 09 16:39:40 compute-0 nova_compute[117331]:         <nova:disk>1</nova:disk>
Oct 09 16:39:40 compute-0 nova_compute[117331]:         <nova:swap>0</nova:swap>
Oct 09 16:39:40 compute-0 nova_compute[117331]:         <nova:ephemeral>0</nova:ephemeral>
Oct 09 16:39:40 compute-0 nova_compute[117331]:         <nova:vcpus>1</nova:vcpus>
Oct 09 16:39:40 compute-0 nova_compute[117331]:         <nova:extraSpecs>
Oct 09 16:39:40 compute-0 nova_compute[117331]:           <nova:extraSpec name="hw_rng:allowed">True</nova:extraSpec>
Oct 09 16:39:40 compute-0 nova_compute[117331]:         </nova:extraSpecs>
Oct 09 16:39:40 compute-0 nova_compute[117331]:       </nova:flavor>
Oct 09 16:39:40 compute-0 nova_compute[117331]:       <nova:image uuid="b7d6e0af-25e4-4227-9dc6-43143898ceee">
Oct 09 16:39:40 compute-0 nova_compute[117331]:         <nova:containerFormat>bare</nova:containerFormat>
Oct 09 16:39:40 compute-0 nova_compute[117331]:         <nova:diskFormat>qcow2</nova:diskFormat>
Oct 09 16:39:40 compute-0 nova_compute[117331]:         <nova:minDisk>1</nova:minDisk>
Oct 09 16:39:40 compute-0 nova_compute[117331]:         <nova:minRam>0</nova:minRam>
Oct 09 16:39:40 compute-0 nova_compute[117331]:         <nova:properties>
Oct 09 16:39:40 compute-0 nova_compute[117331]:           <nova:property name="hw_rng_model">virtio</nova:property>
Oct 09 16:39:40 compute-0 nova_compute[117331]:         </nova:properties>
Oct 09 16:39:40 compute-0 nova_compute[117331]:       </nova:image>
Oct 09 16:39:40 compute-0 nova_compute[117331]:       <nova:owner>
Oct 09 16:39:40 compute-0 nova_compute[117331]:         <nova:user uuid="685b4924c5a04af7ae6f4a328bb50f14">tempest-TestExecuteWorkloadStabilizationStrategy-1000021673-project-admin</nova:user>
Oct 09 16:39:40 compute-0 nova_compute[117331]:         <nova:project uuid="a60acfe52e4b4b7f912654a59f0978b7">tempest-TestExecuteWorkloadStabilizationStrategy-1000021673</nova:project>
Oct 09 16:39:40 compute-0 nova_compute[117331]:       </nova:owner>
Oct 09 16:39:40 compute-0 nova_compute[117331]:       <nova:root type="image" uuid="b7d6e0af-25e4-4227-9dc6-43143898ceee"/>
Oct 09 16:39:40 compute-0 nova_compute[117331]:       <nova:ports>
Oct 09 16:39:40 compute-0 nova_compute[117331]:         <nova:port uuid="0060f48c-30fd-4d98-810c-c9953d57fbb2">
Oct 09 16:39:40 compute-0 nova_compute[117331]:           <nova:ip type="fixed" address="10.100.0.12" ipVersion="4"/>
Oct 09 16:39:40 compute-0 nova_compute[117331]:         </nova:port>
Oct 09 16:39:40 compute-0 nova_compute[117331]:       </nova:ports>
Oct 09 16:39:40 compute-0 nova_compute[117331]:     </nova:instance>
Oct 09 16:39:40 compute-0 nova_compute[117331]:   </metadata>
Oct 09 16:39:40 compute-0 nova_compute[117331]:   <memory unit="KiB">131072</memory>
Oct 09 16:39:40 compute-0 nova_compute[117331]:   <currentMemory unit="KiB">131072</currentMemory>
Oct 09 16:39:40 compute-0 nova_compute[117331]:   <vcpu placement="static">1</vcpu>
Oct 09 16:39:40 compute-0 nova_compute[117331]:   <resource>
Oct 09 16:39:40 compute-0 nova_compute[117331]:     <partition>/machine</partition>
Oct 09 16:39:40 compute-0 nova_compute[117331]:   </resource>
Oct 09 16:39:40 compute-0 nova_compute[117331]:   <sysinfo type="smbios">
Oct 09 16:39:40 compute-0 nova_compute[117331]:     <system>
Oct 09 16:39:40 compute-0 nova_compute[117331]:       <entry name="manufacturer">RDO</entry>
Oct 09 16:39:40 compute-0 nova_compute[117331]:       <entry name="product">OpenStack Compute</entry>
Oct 09 16:39:40 compute-0 nova_compute[117331]:       <entry name="version">32.1.0-0.20251008143712.076498e.el10</entry>
Oct 09 16:39:40 compute-0 nova_compute[117331]:       <entry name="serial">5963f55d-5e3f-4b07-86ef-554333267b8f</entry>
Oct 09 16:39:40 compute-0 nova_compute[117331]:       <entry name="uuid">5963f55d-5e3f-4b07-86ef-554333267b8f</entry>
Oct 09 16:39:40 compute-0 nova_compute[117331]:       <entry name="family">Virtual Machine</entry>
Oct 09 16:39:40 compute-0 nova_compute[117331]:     </system>
Oct 09 16:39:40 compute-0 nova_compute[117331]:   </sysinfo>
Oct 09 16:39:40 compute-0 nova_compute[117331]:   <os>
Oct 09 16:39:40 compute-0 nova_compute[117331]:     <type arch="x86_64" machine="pc-q35-rhel9.6.0">hvm</type>
Oct 09 16:39:40 compute-0 nova_compute[117331]:     <boot dev="hd"/>
Oct 09 16:39:40 compute-0 nova_compute[117331]:     <smbios mode="sysinfo"/>
Oct 09 16:39:40 compute-0 nova_compute[117331]:   </os>
Oct 09 16:39:40 compute-0 nova_compute[117331]:   <features>
Oct 09 16:39:40 compute-0 nova_compute[117331]:     <acpi/>
Oct 09 16:39:40 compute-0 nova_compute[117331]:     <apic/>
Oct 09 16:39:40 compute-0 nova_compute[117331]:     <vmcoreinfo state="on"/>
Oct 09 16:39:40 compute-0 nova_compute[117331]:   </features>
Oct 09 16:39:40 compute-0 nova_compute[117331]:   <cpu mode="host-model" check="partial">
Oct 09 16:39:40 compute-0 nova_compute[117331]:     <topology sockets="1" dies="1" clusters="1" cores="1" threads="1"/>
Oct 09 16:39:40 compute-0 nova_compute[117331]:   </cpu>
Oct 09 16:39:40 compute-0 nova_compute[117331]:   <clock offset="utc">
Oct 09 16:39:40 compute-0 nova_compute[117331]:     <timer name="pit" tickpolicy="delay"/>
Oct 09 16:39:40 compute-0 nova_compute[117331]:     <timer name="rtc" tickpolicy="catchup"/>
Oct 09 16:39:40 compute-0 nova_compute[117331]:     <timer name="hpet" present="no"/>
Oct 09 16:39:40 compute-0 nova_compute[117331]:   </clock>
Oct 09 16:39:40 compute-0 nova_compute[117331]:   <on_poweroff>destroy</on_poweroff>
Oct 09 16:39:40 compute-0 nova_compute[117331]:   <on_reboot>restart</on_reboot>
Oct 09 16:39:40 compute-0 nova_compute[117331]:   <on_crash>destroy</on_crash>
Oct 09 16:39:40 compute-0 nova_compute[117331]:   <devices>
Oct 09 16:39:40 compute-0 nova_compute[117331]:     <emulator>/usr/libexec/qemu-kvm</emulator>
Oct 09 16:39:40 compute-0 nova_compute[117331]:     <disk type="file" device="disk">
Oct 09 16:39:40 compute-0 nova_compute[117331]:       <driver name="qemu" type="qcow2" cache="none"/>
Oct 09 16:39:40 compute-0 nova_compute[117331]:       <source file="/var/lib/nova/instances/5963f55d-5e3f-4b07-86ef-554333267b8f/disk"/>
Oct 09 16:39:40 compute-0 nova_compute[117331]:       <target dev="vda" bus="virtio"/>
Oct 09 16:39:40 compute-0 nova_compute[117331]:       <address type="pci" domain="0x0000" bus="0x03" slot="0x00" function="0x0"/>
Oct 09 16:39:40 compute-0 nova_compute[117331]:     </disk>
Oct 09 16:39:40 compute-0 nova_compute[117331]:     <disk type="file" device="cdrom">
Oct 09 16:39:40 compute-0 nova_compute[117331]:       <driver name="qemu" type="raw" cache="none"/>
Oct 09 16:39:40 compute-0 nova_compute[117331]:       <source file="/var/lib/nova/instances/5963f55d-5e3f-4b07-86ef-554333267b8f/disk.config"/>
Oct 09 16:39:40 compute-0 nova_compute[117331]:       <target dev="sda" bus="sata"/>
Oct 09 16:39:40 compute-0 nova_compute[117331]:       <readonly/>
Oct 09 16:39:40 compute-0 nova_compute[117331]:       <address type="drive" controller="0" bus="0" target="0" unit="0"/>
Oct 09 16:39:40 compute-0 nova_compute[117331]:     </disk>
Oct 09 16:39:40 compute-0 nova_compute[117331]:     <controller type="pci" index="0" model="pcie-root"/>
Oct 09 16:39:40 compute-0 nova_compute[117331]:     <controller type="pci" index="1" model="pcie-root-port">
Oct 09 16:39:40 compute-0 nova_compute[117331]:       <model name="pcie-root-port"/>
Oct 09 16:39:40 compute-0 nova_compute[117331]:       <target chassis="1" port="0x10"/>
Oct 09 16:39:40 compute-0 nova_compute[117331]:       <address type="pci" domain="0x0000" bus="0x00" slot="0x02" function="0x0" multifunction="on"/>
Oct 09 16:39:40 compute-0 nova_compute[117331]:     </controller>
Oct 09 16:39:40 compute-0 nova_compute[117331]:     <controller type="pci" index="2" model="pcie-root-port">
Oct 09 16:39:40 compute-0 nova_compute[117331]:       <model name="pcie-root-port"/>
Oct 09 16:39:40 compute-0 nova_compute[117331]:       <target chassis="2" port="0x11"/>
Oct 09 16:39:40 compute-0 nova_compute[117331]:       <address type="pci" domain="0x0000" bus="0x00" slot="0x02" function="0x1"/>
Oct 09 16:39:40 compute-0 nova_compute[117331]:     </controller>
Oct 09 16:39:40 compute-0 nova_compute[117331]:     <controller type="pci" index="3" model="pcie-root-port">
Oct 09 16:39:40 compute-0 nova_compute[117331]:       <model name="pcie-root-port"/>
Oct 09 16:39:40 compute-0 nova_compute[117331]:       <target chassis="3" port="0x12"/>
Oct 09 16:39:40 compute-0 nova_compute[117331]:       <address type="pci" domain="0x0000" bus="0x00" slot="0x02" function="0x2"/>
Oct 09 16:39:40 compute-0 nova_compute[117331]:     </controller>
Oct 09 16:39:40 compute-0 nova_compute[117331]:     <controller type="pci" index="4" model="pcie-root-port">
Oct 09 16:39:40 compute-0 nova_compute[117331]:       <model name="pcie-root-port"/>
Oct 09 16:39:40 compute-0 nova_compute[117331]:       <target chassis="4" port="0x13"/>
Oct 09 16:39:40 compute-0 nova_compute[117331]:       <address type="pci" domain="0x0000" bus="0x00" slot="0x02" function="0x3"/>
Oct 09 16:39:40 compute-0 nova_compute[117331]:     </controller>
Oct 09 16:39:40 compute-0 nova_compute[117331]:     <controller type="pci" index="5" model="pcie-root-port">
Oct 09 16:39:40 compute-0 nova_compute[117331]:       <model name="pcie-root-port"/>
Oct 09 16:39:40 compute-0 nova_compute[117331]:       <target chassis="5" port="0x14"/>
Oct 09 16:39:40 compute-0 nova_compute[117331]:       <address type="pci" domain="0x0000" bus="0x00" slot="0x02" function="0x4"/>
Oct 09 16:39:40 compute-0 nova_compute[117331]:     </controller>
Oct 09 16:39:40 compute-0 nova_compute[117331]:     <controller type="pci" index="6" model="pcie-root-port">
Oct 09 16:39:40 compute-0 nova_compute[117331]:       <model name="pcie-root-port"/>
Oct 09 16:39:40 compute-0 nova_compute[117331]:       <target chassis="6" port="0x15"/>
Oct 09 16:39:40 compute-0 nova_compute[117331]:       <address type="pci" domain="0x0000" bus="0x00" slot="0x02" function="0x5"/>
Oct 09 16:39:40 compute-0 nova_compute[117331]:     </controller>
Oct 09 16:39:40 compute-0 nova_compute[117331]:     <controller type="pci" index="7" model="pcie-root-port">
Oct 09 16:39:40 compute-0 nova_compute[117331]:       <model name="pcie-root-port"/>
Oct 09 16:39:40 compute-0 nova_compute[117331]:       <target chassis="7" port="0x16"/>
Oct 09 16:39:40 compute-0 nova_compute[117331]:       <address type="pci" domain="0x0000" bus="0x00" slot="0x02" function="0x6"/>
Oct 09 16:39:40 compute-0 nova_compute[117331]:     </controller>
Oct 09 16:39:40 compute-0 nova_compute[117331]:     <controller type="pci" index="8" model="pcie-root-port">
Oct 09 16:39:40 compute-0 nova_compute[117331]:       <model name="pcie-root-port"/>
Oct 09 16:39:40 compute-0 nova_compute[117331]:       <target chassis="8" port="0x17"/>
Oct 09 16:39:40 compute-0 nova_compute[117331]:       <address type="pci" domain="0x0000" bus="0x00" slot="0x02" function="0x7"/>
Oct 09 16:39:40 compute-0 nova_compute[117331]:     </controller>
Oct 09 16:39:40 compute-0 nova_compute[117331]:     <controller type="pci" index="9" model="pcie-root-port">
Oct 09 16:39:40 compute-0 nova_compute[117331]:       <model name="pcie-root-port"/>
Oct 09 16:39:40 compute-0 nova_compute[117331]:       <target chassis="9" port="0x18"/>
Oct 09 16:39:40 compute-0 nova_compute[117331]:       <address type="pci" domain="0x0000" bus="0x00" slot="0x03" function="0x0" multifunction="on"/>
Oct 09 16:39:40 compute-0 nova_compute[117331]:     </controller>
Oct 09 16:39:40 compute-0 nova_compute[117331]:     <controller type="pci" index="10" model="pcie-root-port">
Oct 09 16:39:40 compute-0 nova_compute[117331]:       <model name="pcie-root-port"/>
Oct 09 16:39:40 compute-0 nova_compute[117331]:       <target chassis="10" port="0x19"/>
Oct 09 16:39:40 compute-0 nova_compute[117331]:       <address type="pci" domain="0x0000" bus="0x00" slot="0x03" function="0x1"/>
Oct 09 16:39:40 compute-0 nova_compute[117331]:     </controller>
Oct 09 16:39:40 compute-0 nova_compute[117331]:     <controller type="pci" index="11" model="pcie-root-port">
Oct 09 16:39:40 compute-0 nova_compute[117331]:       <model name="pcie-root-port"/>
Oct 09 16:39:40 compute-0 nova_compute[117331]:       <target chassis="11" port="0x1a"/>
Oct 09 16:39:40 compute-0 nova_compute[117331]:       <address type="pci" domain="0x0000" bus="0x00" slot="0x03" function="0x2"/>
Oct 09 16:39:40 compute-0 nova_compute[117331]:     </controller>
Oct 09 16:39:40 compute-0 nova_compute[117331]:     <controller type="pci" index="12" model="pcie-root-port">
Oct 09 16:39:40 compute-0 nova_compute[117331]:       <model name="pcie-root-port"/>
Oct 09 16:39:40 compute-0 nova_compute[117331]:       <target chassis="12" port="0x1b"/>
Oct 09 16:39:40 compute-0 nova_compute[117331]:       <address type="pci" domain="0x0000" bus="0x00" slot="0x03" function="0x3"/>
Oct 09 16:39:40 compute-0 nova_compute[117331]:     </controller>
Oct 09 16:39:40 compute-0 nova_compute[117331]:     <controller type="pci" index="13" model="pcie-root-port">
Oct 09 16:39:40 compute-0 nova_compute[117331]:       <model name="pcie-root-port"/>
Oct 09 16:39:40 compute-0 nova_compute[117331]:       <target chassis="13" port="0x1c"/>
Oct 09 16:39:40 compute-0 nova_compute[117331]:       <address type="pci" domain="0x0000" bus="0x00" slot="0x03" function="0x4"/>
Oct 09 16:39:40 compute-0 nova_compute[117331]:     </controller>
Oct 09 16:39:40 compute-0 nova_compute[117331]:     <controller type="pci" index="14" model="pcie-root-port">
Oct 09 16:39:40 compute-0 nova_compute[117331]:       <model name="pcie-root-port"/>
Oct 09 16:39:40 compute-0 nova_compute[117331]:       <target chassis="14" port="0x1d"/>
Oct 09 16:39:40 compute-0 nova_compute[117331]:       <address type="pci" domain="0x0000" bus="0x00" slot="0x03" function="0x5"/>
Oct 09 16:39:40 compute-0 nova_compute[117331]:     </controller>
Oct 09 16:39:40 compute-0 nova_compute[117331]:     <controller type="pci" index="15" model="pcie-root-port">
Oct 09 16:39:40 compute-0 nova_compute[117331]:       <model name="pcie-root-port"/>
Oct 09 16:39:40 compute-0 nova_compute[117331]:       <target chassis="15" port="0x1e"/>
Oct 09 16:39:40 compute-0 nova_compute[117331]:       <address type="pci" domain="0x0000" bus="0x00" slot="0x03" function="0x6"/>
Oct 09 16:39:40 compute-0 nova_compute[117331]:     </controller>
Oct 09 16:39:40 compute-0 nova_compute[117331]:     <controller type="pci" index="16" model="pcie-root-port">
Oct 09 16:39:40 compute-0 nova_compute[117331]:       <model name="pcie-root-port"/>
Oct 09 16:39:40 compute-0 nova_compute[117331]:       <target chassis="16" port="0x1f"/>
Oct 09 16:39:40 compute-0 nova_compute[117331]:       <address type="pci" domain="0x0000" bus="0x00" slot="0x03" function="0x7"/>
Oct 09 16:39:40 compute-0 nova_compute[117331]:     </controller>
Oct 09 16:39:40 compute-0 nova_compute[117331]:     <controller type="pci" index="17" model="pcie-root-port">
Oct 09 16:39:40 compute-0 nova_compute[117331]:       <model name="pcie-root-port"/>
Oct 09 16:39:40 compute-0 nova_compute[117331]:       <target chassis="17" port="0x20"/>
Oct 09 16:39:40 compute-0 nova_compute[117331]:       <address type="pci" domain="0x0000" bus="0x00" slot="0x04" function="0x0" multifunction="on"/>
Oct 09 16:39:40 compute-0 nova_compute[117331]:     </controller>
Oct 09 16:39:40 compute-0 nova_compute[117331]:     <controller type="pci" index="18" model="pcie-root-port">
Oct 09 16:39:40 compute-0 nova_compute[117331]:       <model name="pcie-root-port"/>
Oct 09 16:39:40 compute-0 nova_compute[117331]:       <target chassis="18" port="0x21"/>
Oct 09 16:39:40 compute-0 nova_compute[117331]:       <address type="pci" domain="0x0000" bus="0x00" slot="0x04" function="0x1"/>
Oct 09 16:39:40 compute-0 nova_compute[117331]:     </controller>
Oct 09 16:39:40 compute-0 nova_compute[117331]:     <controller type="pci" index="19" model="pcie-root-port">
Oct 09 16:39:40 compute-0 nova_compute[117331]:       <model name="pcie-root-port"/>
Oct 09 16:39:40 compute-0 nova_compute[117331]:       <target chassis="19" port="0x22"/>
Oct 09 16:39:40 compute-0 nova_compute[117331]:       <address type="pci" domain="0x0000" bus="0x00" slot="0x04" function="0x2"/>
Oct 09 16:39:40 compute-0 nova_compute[117331]:     </controller>
Oct 09 16:39:40 compute-0 nova_compute[117331]:     <controller type="pci" index="20" model="pcie-root-port">
Oct 09 16:39:40 compute-0 nova_compute[117331]:       <model name="pcie-root-port"/>
Oct 09 16:39:40 compute-0 nova_compute[117331]:       <target chassis="20" port="0x23"/>
Oct 09 16:39:40 compute-0 nova_compute[117331]:       <address type="pci" domain="0x0000" bus="0x00" slot="0x04" function="0x3"/>
Oct 09 16:39:40 compute-0 nova_compute[117331]:     </controller>
Oct 09 16:39:40 compute-0 nova_compute[117331]:     <controller type="pci" index="21" model="pcie-root-port">
Oct 09 16:39:40 compute-0 nova_compute[117331]:       <model name="pcie-root-port"/>
Oct 09 16:39:40 compute-0 nova_compute[117331]:       <target chassis="21" port="0x24"/>
Oct 09 16:39:40 compute-0 nova_compute[117331]:       <address type="pci" domain="0x0000" bus="0x00" slot="0x04" function="0x4"/>
Oct 09 16:39:40 compute-0 nova_compute[117331]:     </controller>
Oct 09 16:39:40 compute-0 nova_compute[117331]:     <controller type="pci" index="22" model="pcie-root-port">
Oct 09 16:39:40 compute-0 nova_compute[117331]:       <model name="pcie-root-port"/>
Oct 09 16:39:40 compute-0 nova_compute[117331]:       <target chassis="22" port="0x25"/>
Oct 09 16:39:40 compute-0 nova_compute[117331]:       <address type="pci" domain="0x0000" bus="0x00" slot="0x04" function="0x5"/>
Oct 09 16:39:40 compute-0 nova_compute[117331]:     </controller>
Oct 09 16:39:40 compute-0 nova_compute[117331]:     <controller type="pci" index="23" model="pcie-root-port">
Oct 09 16:39:40 compute-0 nova_compute[117331]:       <model name="pcie-root-port"/>
Oct 09 16:39:40 compute-0 nova_compute[117331]:       <target chassis="23" port="0x26"/>
Oct 09 16:39:40 compute-0 nova_compute[117331]:       <address type="pci" domain="0x0000" bus="0x00" slot="0x04" function="0x6"/>
Oct 09 16:39:40 compute-0 nova_compute[117331]:     </controller>
Oct 09 16:39:40 compute-0 nova_compute[117331]:     <controller type="pci" index="24" model="pcie-root-port">
Oct 09 16:39:40 compute-0 nova_compute[117331]:       <model name="pcie-root-port"/>
Oct 09 16:39:40 compute-0 nova_compute[117331]:       <target chassis="24" port="0x27"/>
Oct 09 16:39:40 compute-0 nova_compute[117331]:       <address type="pci" domain="0x0000" bus="0x00" slot="0x04" function="0x7"/>
Oct 09 16:39:40 compute-0 nova_compute[117331]:     </controller>
Oct 09 16:39:40 compute-0 nova_compute[117331]:     <controller type="pci" index="25" model="pcie-root-port">
Oct 09 16:39:40 compute-0 nova_compute[117331]:       <model name="pcie-root-port"/>
Oct 09 16:39:40 compute-0 nova_compute[117331]:       <target chassis="25" port="0x28"/>
Oct 09 16:39:40 compute-0 nova_compute[117331]:       <address type="pci" domain="0x0000" bus="0x00" slot="0x05" function="0x0"/>
Oct 09 16:39:40 compute-0 nova_compute[117331]:     </controller>
Oct 09 16:39:40 compute-0 nova_compute[117331]:     <controller type="pci" index="26" model="pcie-to-pci-bridge">
Oct 09 16:39:40 compute-0 nova_compute[117331]:       <model name="pcie-pci-bridge"/>
Oct 09 16:39:40 compute-0 nova_compute[117331]:       <address type="pci" domain="0x0000" bus="0x01" slot="0x00" function="0x0"/>
Oct 09 16:39:40 compute-0 nova_compute[117331]:     </controller>
Oct 09 16:39:40 compute-0 nova_compute[117331]:     <controller type="usb" index="0" model="piix3-uhci">
Oct 09 16:39:40 compute-0 nova_compute[117331]:       <address type="pci" domain="0x0000" bus="0x1a" slot="0x01" function="0x0"/>
Oct 09 16:39:40 compute-0 nova_compute[117331]:     </controller>
Oct 09 16:39:40 compute-0 nova_compute[117331]:     <controller type="sata" index="0">
Oct 09 16:39:40 compute-0 nova_compute[117331]:       <address type="pci" domain="0x0000" bus="0x00" slot="0x1f" function="0x2"/>
Oct 09 16:39:40 compute-0 nova_compute[117331]:     </controller>
Oct 09 16:39:40 compute-0 nova_compute[117331]:     <interface type="ethernet">
Oct 09 16:39:40 compute-0 nova_compute[117331]:       <mac address="fa:16:3e:36:08:54"/>
Oct 09 16:39:40 compute-0 nova_compute[117331]:       <model type="virtio"/>
Oct 09 16:39:40 compute-0 nova_compute[117331]:       <driver name="vhost" rx_queue_size="512"/>
Oct 09 16:39:40 compute-0 nova_compute[117331]:       <mtu size="1442"/>
Oct 09 16:39:40 compute-0 nova_compute[117331]:       <target dev="tap0060f48c-30"/>
Oct 09 16:39:40 compute-0 nova_compute[117331]:       <address type="pci" domain="0x0000" bus="0x02" slot="0x00" function="0x0"/>
Oct 09 16:39:40 compute-0 nova_compute[117331]:     </interface>
Oct 09 16:39:40 compute-0 nova_compute[117331]:     <serial type="pty">
Oct 09 16:39:40 compute-0 nova_compute[117331]:       <log file="/var/lib/nova/instances/5963f55d-5e3f-4b07-86ef-554333267b8f/console.log" append="off"/>
Oct 09 16:39:40 compute-0 nova_compute[117331]:       <target type="isa-serial" port="0">
Oct 09 16:39:40 compute-0 nova_compute[117331]:         <model name="isa-serial"/>
Oct 09 16:39:40 compute-0 nova_compute[117331]:       </target>
Oct 09 16:39:40 compute-0 nova_compute[117331]:     </serial>
Oct 09 16:39:40 compute-0 nova_compute[117331]:     <console type="pty">
Oct 09 16:39:40 compute-0 nova_compute[117331]:       <log file="/var/lib/nova/instances/5963f55d-5e3f-4b07-86ef-554333267b8f/console.log" append="off"/>
Oct 09 16:39:40 compute-0 nova_compute[117331]:       <target type="serial" port="0"/>
Oct 09 16:39:40 compute-0 nova_compute[117331]:     </console>
Oct 09 16:39:40 compute-0 nova_compute[117331]:     <input type="tablet" bus="usb">
Oct 09 16:39:40 compute-0 nova_compute[117331]:       <address type="usb" bus="0" port="1"/>
Oct 09 16:39:40 compute-0 nova_compute[117331]:     </input>
Oct 09 16:39:40 compute-0 nova_compute[117331]:     <input type="mouse" bus="ps2"/>
Oct 09 16:39:40 compute-0 nova_compute[117331]:     <graphics type="vnc" port="-1" autoport="yes" listen="::">
Oct 09 16:39:40 compute-0 nova_compute[117331]:       <listen type="address" address="::"/>
Oct 09 16:39:40 compute-0 nova_compute[117331]:     </graphics>
Oct 09 16:39:40 compute-0 nova_compute[117331]:     <video>
Oct 09 16:39:40 compute-0 nova_compute[117331]:       <model type="virtio" heads="1" primary="yes"/>
Oct 09 16:39:40 compute-0 nova_compute[117331]:       <address type="pci" domain="0x0000" bus="0x00" slot="0x01" function="0x0"/>
Oct 09 16:39:40 compute-0 nova_compute[117331]:     </video>
Oct 09 16:39:40 compute-0 nova_compute[117331]:     <memballoon model="virtio" autodeflate="on" freePageReporting="on">
Oct 09 16:39:40 compute-0 nova_compute[117331]:       <stats period="10"/>
Oct 09 16:39:40 compute-0 nova_compute[117331]:       <address type="pci" domain="0x0000" bus="0x04" slot="0x00" function="0x0"/>
Oct 09 16:39:40 compute-0 nova_compute[117331]:     </memballoon>
Oct 09 16:39:40 compute-0 nova_compute[117331]:     <rng model="virtio">
Oct 09 16:39:40 compute-0 nova_compute[117331]:       <backend model="random">/dev/urandom</backend>
Oct 09 16:39:40 compute-0 nova_compute[117331]:       <address type="pci" domain="0x0000" bus="0x05" slot="0x00" function="0x0"/>
Oct 09 16:39:40 compute-0 nova_compute[117331]:     </rng>
Oct 09 16:39:40 compute-0 nova_compute[117331]:   </devices>
Oct 09 16:39:40 compute-0 nova_compute[117331]:   <seclabel type="dynamic" model="selinux" relabel="yes"/>
Oct 09 16:39:40 compute-0 nova_compute[117331]: </domain>
Oct 09 16:39:40 compute-0 nova_compute[117331]:  _remove_cpu_shared_set_xml /usr/lib/python3.12/site-packages/nova/virt/libvirt/migration.py:241
Oct 09 16:39:40 compute-0 nova_compute[117331]: 2025-10-09 16:39:40.166 2 DEBUG nova.virt.libvirt.migration [None req-b13a42ce-7734-4c51-aa6a-4687cb3b7ca3 005258eefa5345409efe13f6a1de22ab c076c42271004e7aa86e84416e33f826 - - default default] _remove_cpu_shared_set_xml output xml=<domain type="kvm">
Oct 09 16:39:40 compute-0 nova_compute[117331]:   <name>instance-0000001c</name>
Oct 09 16:39:40 compute-0 nova_compute[117331]:   <uuid>5963f55d-5e3f-4b07-86ef-554333267b8f</uuid>
Oct 09 16:39:40 compute-0 nova_compute[117331]:   <metadata>
Oct 09 16:39:40 compute-0 nova_compute[117331]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Oct 09 16:39:40 compute-0 nova_compute[117331]:       <nova:package version="32.1.0-0.20251008143712.076498e.el10"/>
Oct 09 16:39:40 compute-0 nova_compute[117331]:       <nova:name>tempest-TestExecuteWorkloadStabilizationStrategy-server-1807729568</nova:name>
Oct 09 16:39:40 compute-0 nova_compute[117331]:       <nova:creationTime>2025-10-09 16:38:02</nova:creationTime>
Oct 09 16:39:40 compute-0 nova_compute[117331]:       <nova:flavor name="m1.nano" id="5aeb5bf9-70f3-4ab6-b418-2c3319a7d2b3">
Oct 09 16:39:40 compute-0 nova_compute[117331]:         <nova:memory>128</nova:memory>
Oct 09 16:39:40 compute-0 nova_compute[117331]:         <nova:disk>1</nova:disk>
Oct 09 16:39:40 compute-0 nova_compute[117331]:         <nova:swap>0</nova:swap>
Oct 09 16:39:40 compute-0 nova_compute[117331]:         <nova:ephemeral>0</nova:ephemeral>
Oct 09 16:39:40 compute-0 nova_compute[117331]:         <nova:vcpus>1</nova:vcpus>
Oct 09 16:39:40 compute-0 nova_compute[117331]:         <nova:extraSpecs>
Oct 09 16:39:40 compute-0 nova_compute[117331]:           <nova:extraSpec name="hw_rng:allowed">True</nova:extraSpec>
Oct 09 16:39:40 compute-0 nova_compute[117331]:         </nova:extraSpecs>
Oct 09 16:39:40 compute-0 nova_compute[117331]:       </nova:flavor>
Oct 09 16:39:40 compute-0 nova_compute[117331]:       <nova:image uuid="b7d6e0af-25e4-4227-9dc6-43143898ceee">
Oct 09 16:39:40 compute-0 nova_compute[117331]:         <nova:containerFormat>bare</nova:containerFormat>
Oct 09 16:39:40 compute-0 nova_compute[117331]:         <nova:diskFormat>qcow2</nova:diskFormat>
Oct 09 16:39:40 compute-0 nova_compute[117331]:         <nova:minDisk>1</nova:minDisk>
Oct 09 16:39:40 compute-0 nova_compute[117331]:         <nova:minRam>0</nova:minRam>
Oct 09 16:39:40 compute-0 nova_compute[117331]:         <nova:properties>
Oct 09 16:39:40 compute-0 nova_compute[117331]:           <nova:property name="hw_rng_model">virtio</nova:property>
Oct 09 16:39:40 compute-0 nova_compute[117331]:         </nova:properties>
Oct 09 16:39:40 compute-0 nova_compute[117331]:       </nova:image>
Oct 09 16:39:40 compute-0 nova_compute[117331]:       <nova:owner>
Oct 09 16:39:40 compute-0 nova_compute[117331]:         <nova:user uuid="685b4924c5a04af7ae6f4a328bb50f14">tempest-TestExecuteWorkloadStabilizationStrategy-1000021673-project-admin</nova:user>
Oct 09 16:39:40 compute-0 nova_compute[117331]:         <nova:project uuid="a60acfe52e4b4b7f912654a59f0978b7">tempest-TestExecuteWorkloadStabilizationStrategy-1000021673</nova:project>
Oct 09 16:39:40 compute-0 nova_compute[117331]:       </nova:owner>
Oct 09 16:39:40 compute-0 nova_compute[117331]:       <nova:root type="image" uuid="b7d6e0af-25e4-4227-9dc6-43143898ceee"/>
Oct 09 16:39:40 compute-0 nova_compute[117331]:       <nova:ports>
Oct 09 16:39:40 compute-0 nova_compute[117331]:         <nova:port uuid="0060f48c-30fd-4d98-810c-c9953d57fbb2">
Oct 09 16:39:40 compute-0 nova_compute[117331]:           <nova:ip type="fixed" address="10.100.0.12" ipVersion="4"/>
Oct 09 16:39:40 compute-0 nova_compute[117331]:         </nova:port>
Oct 09 16:39:40 compute-0 nova_compute[117331]:       </nova:ports>
Oct 09 16:39:40 compute-0 nova_compute[117331]:     </nova:instance>
Oct 09 16:39:40 compute-0 nova_compute[117331]:   </metadata>
Oct 09 16:39:40 compute-0 nova_compute[117331]:   <memory unit="KiB">131072</memory>
Oct 09 16:39:40 compute-0 nova_compute[117331]:   <currentMemory unit="KiB">131072</currentMemory>
Oct 09 16:39:40 compute-0 nova_compute[117331]:   <vcpu placement="static">1</vcpu>
Oct 09 16:39:40 compute-0 nova_compute[117331]:   <resource>
Oct 09 16:39:40 compute-0 nova_compute[117331]:     <partition>/machine</partition>
Oct 09 16:39:40 compute-0 nova_compute[117331]:   </resource>
Oct 09 16:39:40 compute-0 nova_compute[117331]:   <sysinfo type="smbios">
Oct 09 16:39:40 compute-0 nova_compute[117331]:     <system>
Oct 09 16:39:40 compute-0 nova_compute[117331]:       <entry name="manufacturer">RDO</entry>
Oct 09 16:39:40 compute-0 nova_compute[117331]:       <entry name="product">OpenStack Compute</entry>
Oct 09 16:39:40 compute-0 nova_compute[117331]:       <entry name="version">32.1.0-0.20251008143712.076498e.el10</entry>
Oct 09 16:39:40 compute-0 nova_compute[117331]:       <entry name="serial">5963f55d-5e3f-4b07-86ef-554333267b8f</entry>
Oct 09 16:39:40 compute-0 nova_compute[117331]:       <entry name="uuid">5963f55d-5e3f-4b07-86ef-554333267b8f</entry>
Oct 09 16:39:40 compute-0 nova_compute[117331]:       <entry name="family">Virtual Machine</entry>
Oct 09 16:39:40 compute-0 nova_compute[117331]:     </system>
Oct 09 16:39:40 compute-0 nova_compute[117331]:   </sysinfo>
Oct 09 16:39:40 compute-0 nova_compute[117331]:   <os>
Oct 09 16:39:40 compute-0 nova_compute[117331]:     <type arch="x86_64" machine="pc-q35-rhel9.6.0">hvm</type>
Oct 09 16:39:40 compute-0 nova_compute[117331]:     <boot dev="hd"/>
Oct 09 16:39:40 compute-0 nova_compute[117331]:     <smbios mode="sysinfo"/>
Oct 09 16:39:40 compute-0 nova_compute[117331]:   </os>
Oct 09 16:39:40 compute-0 nova_compute[117331]:   <features>
Oct 09 16:39:40 compute-0 nova_compute[117331]:     <acpi/>
Oct 09 16:39:40 compute-0 nova_compute[117331]:     <apic/>
Oct 09 16:39:40 compute-0 nova_compute[117331]:     <vmcoreinfo state="on"/>
Oct 09 16:39:40 compute-0 nova_compute[117331]:   </features>
Oct 09 16:39:40 compute-0 nova_compute[117331]:   <cpu mode="host-model" check="partial">
Oct 09 16:39:40 compute-0 nova_compute[117331]:     <topology sockets="1" dies="1" clusters="1" cores="1" threads="1"/>
Oct 09 16:39:40 compute-0 nova_compute[117331]:   </cpu>
Oct 09 16:39:40 compute-0 nova_compute[117331]:   <clock offset="utc">
Oct 09 16:39:40 compute-0 nova_compute[117331]:     <timer name="pit" tickpolicy="delay"/>
Oct 09 16:39:40 compute-0 nova_compute[117331]:     <timer name="rtc" tickpolicy="catchup"/>
Oct 09 16:39:40 compute-0 nova_compute[117331]:     <timer name="hpet" present="no"/>
Oct 09 16:39:40 compute-0 nova_compute[117331]:   </clock>
Oct 09 16:39:40 compute-0 nova_compute[117331]:   <on_poweroff>destroy</on_poweroff>
Oct 09 16:39:40 compute-0 nova_compute[117331]:   <on_reboot>restart</on_reboot>
Oct 09 16:39:40 compute-0 nova_compute[117331]:   <on_crash>destroy</on_crash>
Oct 09 16:39:40 compute-0 nova_compute[117331]:   <devices>
Oct 09 16:39:40 compute-0 nova_compute[117331]:     <emulator>/usr/libexec/qemu-kvm</emulator>
Oct 09 16:39:40 compute-0 nova_compute[117331]:     <disk type="file" device="disk">
Oct 09 16:39:40 compute-0 nova_compute[117331]:       <driver name="qemu" type="qcow2" cache="none"/>
Oct 09 16:39:40 compute-0 nova_compute[117331]:       <source file="/var/lib/nova/instances/5963f55d-5e3f-4b07-86ef-554333267b8f/disk"/>
Oct 09 16:39:40 compute-0 nova_compute[117331]:       <target dev="vda" bus="virtio"/>
Oct 09 16:39:40 compute-0 nova_compute[117331]:       <address type="pci" domain="0x0000" bus="0x03" slot="0x00" function="0x0"/>
Oct 09 16:39:40 compute-0 nova_compute[117331]:     </disk>
Oct 09 16:39:40 compute-0 nova_compute[117331]:     <disk type="file" device="cdrom">
Oct 09 16:39:40 compute-0 nova_compute[117331]:       <driver name="qemu" type="raw" cache="none"/>
Oct 09 16:39:40 compute-0 nova_compute[117331]:       <source file="/var/lib/nova/instances/5963f55d-5e3f-4b07-86ef-554333267b8f/disk.config"/>
Oct 09 16:39:40 compute-0 nova_compute[117331]:       <target dev="sda" bus="sata"/>
Oct 09 16:39:40 compute-0 nova_compute[117331]:       <readonly/>
Oct 09 16:39:40 compute-0 nova_compute[117331]:       <address type="drive" controller="0" bus="0" target="0" unit="0"/>
Oct 09 16:39:40 compute-0 nova_compute[117331]:     </disk>
Oct 09 16:39:40 compute-0 nova_compute[117331]:     <controller type="pci" index="0" model="pcie-root"/>
Oct 09 16:39:40 compute-0 nova_compute[117331]:     <controller type="pci" index="1" model="pcie-root-port">
Oct 09 16:39:40 compute-0 nova_compute[117331]:       <model name="pcie-root-port"/>
Oct 09 16:39:40 compute-0 nova_compute[117331]:       <target chassis="1" port="0x10"/>
Oct 09 16:39:40 compute-0 nova_compute[117331]:       <address type="pci" domain="0x0000" bus="0x00" slot="0x02" function="0x0" multifunction="on"/>
Oct 09 16:39:40 compute-0 nova_compute[117331]:     </controller>
Oct 09 16:39:40 compute-0 nova_compute[117331]:     <controller type="pci" index="2" model="pcie-root-port">
Oct 09 16:39:40 compute-0 nova_compute[117331]:       <model name="pcie-root-port"/>
Oct 09 16:39:40 compute-0 nova_compute[117331]:       <target chassis="2" port="0x11"/>
Oct 09 16:39:40 compute-0 nova_compute[117331]:       <address type="pci" domain="0x0000" bus="0x00" slot="0x02" function="0x1"/>
Oct 09 16:39:40 compute-0 nova_compute[117331]:     </controller>
Oct 09 16:39:40 compute-0 nova_compute[117331]:     <controller type="pci" index="3" model="pcie-root-port">
Oct 09 16:39:40 compute-0 nova_compute[117331]:       <model name="pcie-root-port"/>
Oct 09 16:39:40 compute-0 nova_compute[117331]:       <target chassis="3" port="0x12"/>
Oct 09 16:39:40 compute-0 nova_compute[117331]:       <address type="pci" domain="0x0000" bus="0x00" slot="0x02" function="0x2"/>
Oct 09 16:39:40 compute-0 nova_compute[117331]:     </controller>
Oct 09 16:39:40 compute-0 nova_compute[117331]:     <controller type="pci" index="4" model="pcie-root-port">
Oct 09 16:39:40 compute-0 nova_compute[117331]:       <model name="pcie-root-port"/>
Oct 09 16:39:40 compute-0 nova_compute[117331]:       <target chassis="4" port="0x13"/>
Oct 09 16:39:40 compute-0 nova_compute[117331]:       <address type="pci" domain="0x0000" bus="0x00" slot="0x02" function="0x3"/>
Oct 09 16:39:40 compute-0 nova_compute[117331]:     </controller>
Oct 09 16:39:40 compute-0 nova_compute[117331]:     <controller type="pci" index="5" model="pcie-root-port">
Oct 09 16:39:40 compute-0 nova_compute[117331]:       <model name="pcie-root-port"/>
Oct 09 16:39:40 compute-0 nova_compute[117331]:       <target chassis="5" port="0x14"/>
Oct 09 16:39:40 compute-0 nova_compute[117331]:       <address type="pci" domain="0x0000" bus="0x00" slot="0x02" function="0x4"/>
Oct 09 16:39:40 compute-0 nova_compute[117331]:     </controller>
Oct 09 16:39:40 compute-0 nova_compute[117331]:     <controller type="pci" index="6" model="pcie-root-port">
Oct 09 16:39:40 compute-0 nova_compute[117331]:       <model name="pcie-root-port"/>
Oct 09 16:39:40 compute-0 nova_compute[117331]:       <target chassis="6" port="0x15"/>
Oct 09 16:39:40 compute-0 nova_compute[117331]:       <address type="pci" domain="0x0000" bus="0x00" slot="0x02" function="0x5"/>
Oct 09 16:39:40 compute-0 nova_compute[117331]:     </controller>
Oct 09 16:39:40 compute-0 nova_compute[117331]:     <controller type="pci" index="7" model="pcie-root-port">
Oct 09 16:39:40 compute-0 nova_compute[117331]:       <model name="pcie-root-port"/>
Oct 09 16:39:40 compute-0 nova_compute[117331]:       <target chassis="7" port="0x16"/>
Oct 09 16:39:40 compute-0 nova_compute[117331]:       <address type="pci" domain="0x0000" bus="0x00" slot="0x02" function="0x6"/>
Oct 09 16:39:40 compute-0 nova_compute[117331]:     </controller>
Oct 09 16:39:40 compute-0 nova_compute[117331]:     <controller type="pci" index="8" model="pcie-root-port">
Oct 09 16:39:40 compute-0 nova_compute[117331]:       <model name="pcie-root-port"/>
Oct 09 16:39:40 compute-0 nova_compute[117331]:       <target chassis="8" port="0x17"/>
Oct 09 16:39:40 compute-0 nova_compute[117331]:       <address type="pci" domain="0x0000" bus="0x00" slot="0x02" function="0x7"/>
Oct 09 16:39:40 compute-0 nova_compute[117331]:     </controller>
Oct 09 16:39:40 compute-0 nova_compute[117331]:     <controller type="pci" index="9" model="pcie-root-port">
Oct 09 16:39:40 compute-0 nova_compute[117331]:       <model name="pcie-root-port"/>
Oct 09 16:39:40 compute-0 nova_compute[117331]:       <target chassis="9" port="0x18"/>
Oct 09 16:39:40 compute-0 nova_compute[117331]:       <address type="pci" domain="0x0000" bus="0x00" slot="0x03" function="0x0" multifunction="on"/>
Oct 09 16:39:40 compute-0 nova_compute[117331]:     </controller>
Oct 09 16:39:40 compute-0 nova_compute[117331]:     <controller type="pci" index="10" model="pcie-root-port">
Oct 09 16:39:40 compute-0 nova_compute[117331]:       <model name="pcie-root-port"/>
Oct 09 16:39:40 compute-0 nova_compute[117331]:       <target chassis="10" port="0x19"/>
Oct 09 16:39:40 compute-0 nova_compute[117331]:       <address type="pci" domain="0x0000" bus="0x00" slot="0x03" function="0x1"/>
Oct 09 16:39:40 compute-0 nova_compute[117331]:     </controller>
Oct 09 16:39:40 compute-0 nova_compute[117331]:     <controller type="pci" index="11" model="pcie-root-port">
Oct 09 16:39:40 compute-0 nova_compute[117331]:       <model name="pcie-root-port"/>
Oct 09 16:39:40 compute-0 nova_compute[117331]:       <target chassis="11" port="0x1a"/>
Oct 09 16:39:40 compute-0 nova_compute[117331]:       <address type="pci" domain="0x0000" bus="0x00" slot="0x03" function="0x2"/>
Oct 09 16:39:40 compute-0 nova_compute[117331]:     </controller>
Oct 09 16:39:40 compute-0 nova_compute[117331]:     <controller type="pci" index="12" model="pcie-root-port">
Oct 09 16:39:40 compute-0 nova_compute[117331]:       <model name="pcie-root-port"/>
Oct 09 16:39:40 compute-0 nova_compute[117331]:       <target chassis="12" port="0x1b"/>
Oct 09 16:39:40 compute-0 nova_compute[117331]:       <address type="pci" domain="0x0000" bus="0x00" slot="0x03" function="0x3"/>
Oct 09 16:39:40 compute-0 nova_compute[117331]:     </controller>
Oct 09 16:39:40 compute-0 nova_compute[117331]:     <controller type="pci" index="13" model="pcie-root-port">
Oct 09 16:39:40 compute-0 nova_compute[117331]:       <model name="pcie-root-port"/>
Oct 09 16:39:40 compute-0 nova_compute[117331]:       <target chassis="13" port="0x1c"/>
Oct 09 16:39:40 compute-0 nova_compute[117331]:       <address type="pci" domain="0x0000" bus="0x00" slot="0x03" function="0x4"/>
Oct 09 16:39:40 compute-0 nova_compute[117331]:     </controller>
Oct 09 16:39:40 compute-0 nova_compute[117331]:     <controller type="pci" index="14" model="pcie-root-port">
Oct 09 16:39:40 compute-0 nova_compute[117331]:       <model name="pcie-root-port"/>
Oct 09 16:39:40 compute-0 nova_compute[117331]:       <target chassis="14" port="0x1d"/>
Oct 09 16:39:40 compute-0 nova_compute[117331]:       <address type="pci" domain="0x0000" bus="0x00" slot="0x03" function="0x5"/>
Oct 09 16:39:40 compute-0 nova_compute[117331]:     </controller>
Oct 09 16:39:40 compute-0 nova_compute[117331]:     <controller type="pci" index="15" model="pcie-root-port">
Oct 09 16:39:40 compute-0 nova_compute[117331]:       <model name="pcie-root-port"/>
Oct 09 16:39:40 compute-0 nova_compute[117331]:       <target chassis="15" port="0x1e"/>
Oct 09 16:39:40 compute-0 nova_compute[117331]:       <address type="pci" domain="0x0000" bus="0x00" slot="0x03" function="0x6"/>
Oct 09 16:39:40 compute-0 nova_compute[117331]:     </controller>
Oct 09 16:39:40 compute-0 nova_compute[117331]:     <controller type="pci" index="16" model="pcie-root-port">
Oct 09 16:39:40 compute-0 nova_compute[117331]:       <model name="pcie-root-port"/>
Oct 09 16:39:40 compute-0 nova_compute[117331]:       <target chassis="16" port="0x1f"/>
Oct 09 16:39:40 compute-0 nova_compute[117331]:       <address type="pci" domain="0x0000" bus="0x00" slot="0x03" function="0x7"/>
Oct 09 16:39:40 compute-0 nova_compute[117331]:     </controller>
Oct 09 16:39:40 compute-0 nova_compute[117331]:     <controller type="pci" index="17" model="pcie-root-port">
Oct 09 16:39:40 compute-0 nova_compute[117331]:       <model name="pcie-root-port"/>
Oct 09 16:39:40 compute-0 nova_compute[117331]:       <target chassis="17" port="0x20"/>
Oct 09 16:39:40 compute-0 nova_compute[117331]:       <address type="pci" domain="0x0000" bus="0x00" slot="0x04" function="0x0" multifunction="on"/>
Oct 09 16:39:40 compute-0 nova_compute[117331]:     </controller>
Oct 09 16:39:40 compute-0 nova_compute[117331]:     <controller type="pci" index="18" model="pcie-root-port">
Oct 09 16:39:40 compute-0 nova_compute[117331]:       <model name="pcie-root-port"/>
Oct 09 16:39:40 compute-0 nova_compute[117331]:       <target chassis="18" port="0x21"/>
Oct 09 16:39:40 compute-0 nova_compute[117331]:       <address type="pci" domain="0x0000" bus="0x00" slot="0x04" function="0x1"/>
Oct 09 16:39:40 compute-0 nova_compute[117331]:     </controller>
Oct 09 16:39:40 compute-0 nova_compute[117331]:     <controller type="pci" index="19" model="pcie-root-port">
Oct 09 16:39:40 compute-0 nova_compute[117331]:       <model name="pcie-root-port"/>
Oct 09 16:39:40 compute-0 nova_compute[117331]:       <target chassis="19" port="0x22"/>
Oct 09 16:39:40 compute-0 nova_compute[117331]:       <address type="pci" domain="0x0000" bus="0x00" slot="0x04" function="0x2"/>
Oct 09 16:39:40 compute-0 nova_compute[117331]:     </controller>
Oct 09 16:39:40 compute-0 nova_compute[117331]:     <controller type="pci" index="20" model="pcie-root-port">
Oct 09 16:39:40 compute-0 nova_compute[117331]:       <model name="pcie-root-port"/>
Oct 09 16:39:40 compute-0 nova_compute[117331]:       <target chassis="20" port="0x23"/>
Oct 09 16:39:40 compute-0 nova_compute[117331]:       <address type="pci" domain="0x0000" bus="0x00" slot="0x04" function="0x3"/>
Oct 09 16:39:40 compute-0 nova_compute[117331]:     </controller>
Oct 09 16:39:40 compute-0 nova_compute[117331]:     <controller type="pci" index="21" model="pcie-root-port">
Oct 09 16:39:40 compute-0 nova_compute[117331]:       <model name="pcie-root-port"/>
Oct 09 16:39:40 compute-0 nova_compute[117331]:       <target chassis="21" port="0x24"/>
Oct 09 16:39:40 compute-0 nova_compute[117331]:       <address type="pci" domain="0x0000" bus="0x00" slot="0x04" function="0x4"/>
Oct 09 16:39:40 compute-0 nova_compute[117331]:     </controller>
Oct 09 16:39:40 compute-0 nova_compute[117331]:     <controller type="pci" index="22" model="pcie-root-port">
Oct 09 16:39:40 compute-0 nova_compute[117331]:       <model name="pcie-root-port"/>
Oct 09 16:39:40 compute-0 nova_compute[117331]:       <target chassis="22" port="0x25"/>
Oct 09 16:39:40 compute-0 nova_compute[117331]:       <address type="pci" domain="0x0000" bus="0x00" slot="0x04" function="0x5"/>
Oct 09 16:39:40 compute-0 nova_compute[117331]:     </controller>
Oct 09 16:39:40 compute-0 nova_compute[117331]:     <controller type="pci" index="23" model="pcie-root-port">
Oct 09 16:39:40 compute-0 nova_compute[117331]:       <model name="pcie-root-port"/>
Oct 09 16:39:40 compute-0 nova_compute[117331]:       <target chassis="23" port="0x26"/>
Oct 09 16:39:40 compute-0 nova_compute[117331]:       <address type="pci" domain="0x0000" bus="0x00" slot="0x04" function="0x6"/>
Oct 09 16:39:40 compute-0 nova_compute[117331]:     </controller>
Oct 09 16:39:40 compute-0 nova_compute[117331]:     <controller type="pci" index="24" model="pcie-root-port">
Oct 09 16:39:40 compute-0 nova_compute[117331]:       <model name="pcie-root-port"/>
Oct 09 16:39:40 compute-0 nova_compute[117331]:       <target chassis="24" port="0x27"/>
Oct 09 16:39:40 compute-0 nova_compute[117331]:       <address type="pci" domain="0x0000" bus="0x00" slot="0x04" function="0x7"/>
Oct 09 16:39:40 compute-0 nova_compute[117331]:     </controller>
Oct 09 16:39:40 compute-0 nova_compute[117331]:     <controller type="pci" index="25" model="pcie-root-port">
Oct 09 16:39:40 compute-0 nova_compute[117331]:       <model name="pcie-root-port"/>
Oct 09 16:39:40 compute-0 nova_compute[117331]:       <target chassis="25" port="0x28"/>
Oct 09 16:39:40 compute-0 nova_compute[117331]:       <address type="pci" domain="0x0000" bus="0x00" slot="0x05" function="0x0"/>
Oct 09 16:39:40 compute-0 nova_compute[117331]:     </controller>
Oct 09 16:39:40 compute-0 nova_compute[117331]:     <controller type="pci" index="26" model="pcie-to-pci-bridge">
Oct 09 16:39:40 compute-0 nova_compute[117331]:       <model name="pcie-pci-bridge"/>
Oct 09 16:39:40 compute-0 nova_compute[117331]:       <address type="pci" domain="0x0000" bus="0x01" slot="0x00" function="0x0"/>
Oct 09 16:39:40 compute-0 nova_compute[117331]:     </controller>
Oct 09 16:39:40 compute-0 nova_compute[117331]:     <controller type="usb" index="0" model="piix3-uhci">
Oct 09 16:39:40 compute-0 nova_compute[117331]:       <address type="pci" domain="0x0000" bus="0x1a" slot="0x01" function="0x0"/>
Oct 09 16:39:40 compute-0 nova_compute[117331]:     </controller>
Oct 09 16:39:40 compute-0 nova_compute[117331]:     <controller type="sata" index="0">
Oct 09 16:39:40 compute-0 nova_compute[117331]:       <address type="pci" domain="0x0000" bus="0x00" slot="0x1f" function="0x2"/>
Oct 09 16:39:40 compute-0 nova_compute[117331]:     </controller>
Oct 09 16:39:40 compute-0 nova_compute[117331]:     <interface type="ethernet"><mac address="fa:16:3e:36:08:54"/><model type="virtio"/><driver name="vhost" rx_queue_size="512"/><mtu size="1442"/><target dev="tap0060f48c-30"/><address type="pci" domain="0x0000" bus="0x02" slot="0x00" function="0x0"/>
Oct 09 16:39:40 compute-0 nova_compute[117331]:     </interface><serial type="pty">
Oct 09 16:39:40 compute-0 nova_compute[117331]:       <log file="/var/lib/nova/instances/5963f55d-5e3f-4b07-86ef-554333267b8f/console.log" append="off"/>
Oct 09 16:39:40 compute-0 nova_compute[117331]:       <target type="isa-serial" port="0">
Oct 09 16:39:40 compute-0 nova_compute[117331]:         <model name="isa-serial"/>
Oct 09 16:39:40 compute-0 nova_compute[117331]:       </target>
Oct 09 16:39:40 compute-0 nova_compute[117331]:     </serial>
Oct 09 16:39:40 compute-0 nova_compute[117331]:     <console type="pty">
Oct 09 16:39:40 compute-0 nova_compute[117331]:       <log file="/var/lib/nova/instances/5963f55d-5e3f-4b07-86ef-554333267b8f/console.log" append="off"/>
Oct 09 16:39:40 compute-0 nova_compute[117331]:       <target type="serial" port="0"/>
Oct 09 16:39:40 compute-0 nova_compute[117331]:     </console>
Oct 09 16:39:40 compute-0 nova_compute[117331]:     <input type="tablet" bus="usb">
Oct 09 16:39:40 compute-0 nova_compute[117331]:       <address type="usb" bus="0" port="1"/>
Oct 09 16:39:40 compute-0 nova_compute[117331]:     </input>
Oct 09 16:39:40 compute-0 nova_compute[117331]:     <input type="mouse" bus="ps2"/>
Oct 09 16:39:40 compute-0 nova_compute[117331]:     <graphics type="vnc" port="-1" autoport="yes" listen="::">
Oct 09 16:39:40 compute-0 nova_compute[117331]:       <listen type="address" address="::"/>
Oct 09 16:39:40 compute-0 nova_compute[117331]:     </graphics>
Oct 09 16:39:40 compute-0 nova_compute[117331]:     <video>
Oct 09 16:39:40 compute-0 nova_compute[117331]:       <model type="virtio" heads="1" primary="yes"/>
Oct 09 16:39:40 compute-0 nova_compute[117331]:       <address type="pci" domain="0x0000" bus="0x00" slot="0x01" function="0x0"/>
Oct 09 16:39:40 compute-0 nova_compute[117331]:     </video>
Oct 09 16:39:40 compute-0 nova_compute[117331]:     <memballoon model="virtio" autodeflate="on" freePageReporting="on">
Oct 09 16:39:40 compute-0 nova_compute[117331]:       <stats period="10"/>
Oct 09 16:39:40 compute-0 nova_compute[117331]:       <address type="pci" domain="0x0000" bus="0x04" slot="0x00" function="0x0"/>
Oct 09 16:39:40 compute-0 nova_compute[117331]:     </memballoon>
Oct 09 16:39:40 compute-0 nova_compute[117331]:     <rng model="virtio">
Oct 09 16:39:40 compute-0 nova_compute[117331]:       <backend model="random">/dev/urandom</backend>
Oct 09 16:39:40 compute-0 nova_compute[117331]:       <address type="pci" domain="0x0000" bus="0x05" slot="0x00" function="0x0"/>
Oct 09 16:39:40 compute-0 nova_compute[117331]:     </rng>
Oct 09 16:39:40 compute-0 nova_compute[117331]:   </devices>
Oct 09 16:39:40 compute-0 nova_compute[117331]:   <seclabel type="dynamic" model="selinux" relabel="yes"/>
Oct 09 16:39:40 compute-0 nova_compute[117331]: </domain>
Oct 09 16:39:40 compute-0 nova_compute[117331]:  _remove_cpu_shared_set_xml /usr/lib/python3.12/site-packages/nova/virt/libvirt/migration.py:250
Oct 09 16:39:40 compute-0 nova_compute[117331]: 2025-10-09 16:39:40.166 2 DEBUG nova.virt.libvirt.migration [None req-b13a42ce-7734-4c51-aa6a-4687cb3b7ca3 005258eefa5345409efe13f6a1de22ab c076c42271004e7aa86e84416e33f826 - - default default] _update_pci_xml output xml=<domain type="kvm">
Oct 09 16:39:40 compute-0 nova_compute[117331]:   <name>instance-0000001c</name>
Oct 09 16:39:40 compute-0 nova_compute[117331]:   <uuid>5963f55d-5e3f-4b07-86ef-554333267b8f</uuid>
Oct 09 16:39:40 compute-0 nova_compute[117331]:   <metadata>
Oct 09 16:39:40 compute-0 nova_compute[117331]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Oct 09 16:39:40 compute-0 nova_compute[117331]:       <nova:package version="32.1.0-0.20251008143712.076498e.el10"/>
Oct 09 16:39:40 compute-0 nova_compute[117331]:       <nova:name>tempest-TestExecuteWorkloadStabilizationStrategy-server-1807729568</nova:name>
Oct 09 16:39:40 compute-0 nova_compute[117331]:       <nova:creationTime>2025-10-09 16:38:02</nova:creationTime>
Oct 09 16:39:40 compute-0 nova_compute[117331]:       <nova:flavor name="m1.nano" id="5aeb5bf9-70f3-4ab6-b418-2c3319a7d2b3">
Oct 09 16:39:40 compute-0 nova_compute[117331]:         <nova:memory>128</nova:memory>
Oct 09 16:39:40 compute-0 nova_compute[117331]:         <nova:disk>1</nova:disk>
Oct 09 16:39:40 compute-0 nova_compute[117331]:         <nova:swap>0</nova:swap>
Oct 09 16:39:40 compute-0 nova_compute[117331]:         <nova:ephemeral>0</nova:ephemeral>
Oct 09 16:39:40 compute-0 nova_compute[117331]:         <nova:vcpus>1</nova:vcpus>
Oct 09 16:39:40 compute-0 nova_compute[117331]:         <nova:extraSpecs>
Oct 09 16:39:40 compute-0 nova_compute[117331]:           <nova:extraSpec name="hw_rng:allowed">True</nova:extraSpec>
Oct 09 16:39:40 compute-0 nova_compute[117331]:         </nova:extraSpecs>
Oct 09 16:39:40 compute-0 nova_compute[117331]:       </nova:flavor>
Oct 09 16:39:40 compute-0 nova_compute[117331]:       <nova:image uuid="b7d6e0af-25e4-4227-9dc6-43143898ceee">
Oct 09 16:39:40 compute-0 nova_compute[117331]:         <nova:containerFormat>bare</nova:containerFormat>
Oct 09 16:39:40 compute-0 nova_compute[117331]:         <nova:diskFormat>qcow2</nova:diskFormat>
Oct 09 16:39:40 compute-0 nova_compute[117331]:         <nova:minDisk>1</nova:minDisk>
Oct 09 16:39:40 compute-0 nova_compute[117331]:         <nova:minRam>0</nova:minRam>
Oct 09 16:39:40 compute-0 nova_compute[117331]:         <nova:properties>
Oct 09 16:39:40 compute-0 nova_compute[117331]:           <nova:property name="hw_rng_model">virtio</nova:property>
Oct 09 16:39:40 compute-0 nova_compute[117331]:         </nova:properties>
Oct 09 16:39:40 compute-0 nova_compute[117331]:       </nova:image>
Oct 09 16:39:40 compute-0 nova_compute[117331]:       <nova:owner>
Oct 09 16:39:40 compute-0 nova_compute[117331]:         <nova:user uuid="685b4924c5a04af7ae6f4a328bb50f14">tempest-TestExecuteWorkloadStabilizationStrategy-1000021673-project-admin</nova:user>
Oct 09 16:39:40 compute-0 nova_compute[117331]:         <nova:project uuid="a60acfe52e4b4b7f912654a59f0978b7">tempest-TestExecuteWorkloadStabilizationStrategy-1000021673</nova:project>
Oct 09 16:39:40 compute-0 nova_compute[117331]:       </nova:owner>
Oct 09 16:39:40 compute-0 nova_compute[117331]:       <nova:root type="image" uuid="b7d6e0af-25e4-4227-9dc6-43143898ceee"/>
Oct 09 16:39:40 compute-0 nova_compute[117331]:       <nova:ports>
Oct 09 16:39:40 compute-0 nova_compute[117331]:         <nova:port uuid="0060f48c-30fd-4d98-810c-c9953d57fbb2">
Oct 09 16:39:40 compute-0 nova_compute[117331]:           <nova:ip type="fixed" address="10.100.0.12" ipVersion="4"/>
Oct 09 16:39:40 compute-0 nova_compute[117331]:         </nova:port>
Oct 09 16:39:40 compute-0 nova_compute[117331]:       </nova:ports>
Oct 09 16:39:40 compute-0 nova_compute[117331]:     </nova:instance>
Oct 09 16:39:40 compute-0 nova_compute[117331]:   </metadata>
Oct 09 16:39:40 compute-0 nova_compute[117331]:   <memory unit="KiB">131072</memory>
Oct 09 16:39:40 compute-0 nova_compute[117331]:   <currentMemory unit="KiB">131072</currentMemory>
Oct 09 16:39:40 compute-0 nova_compute[117331]:   <vcpu placement="static">1</vcpu>
Oct 09 16:39:40 compute-0 nova_compute[117331]:   <resource>
Oct 09 16:39:40 compute-0 nova_compute[117331]:     <partition>/machine</partition>
Oct 09 16:39:40 compute-0 nova_compute[117331]:   </resource>
Oct 09 16:39:40 compute-0 nova_compute[117331]:   <sysinfo type="smbios">
Oct 09 16:39:40 compute-0 nova_compute[117331]:     <system>
Oct 09 16:39:40 compute-0 nova_compute[117331]:       <entry name="manufacturer">RDO</entry>
Oct 09 16:39:40 compute-0 nova_compute[117331]:       <entry name="product">OpenStack Compute</entry>
Oct 09 16:39:40 compute-0 nova_compute[117331]:       <entry name="version">32.1.0-0.20251008143712.076498e.el10</entry>
Oct 09 16:39:40 compute-0 nova_compute[117331]:       <entry name="serial">5963f55d-5e3f-4b07-86ef-554333267b8f</entry>
Oct 09 16:39:40 compute-0 nova_compute[117331]:       <entry name="uuid">5963f55d-5e3f-4b07-86ef-554333267b8f</entry>
Oct 09 16:39:40 compute-0 nova_compute[117331]:       <entry name="family">Virtual Machine</entry>
Oct 09 16:39:40 compute-0 nova_compute[117331]:     </system>
Oct 09 16:39:40 compute-0 nova_compute[117331]:   </sysinfo>
Oct 09 16:39:40 compute-0 nova_compute[117331]:   <os>
Oct 09 16:39:40 compute-0 nova_compute[117331]:     <type arch="x86_64" machine="pc-q35-rhel9.6.0">hvm</type>
Oct 09 16:39:40 compute-0 nova_compute[117331]:     <boot dev="hd"/>
Oct 09 16:39:40 compute-0 nova_compute[117331]:     <smbios mode="sysinfo"/>
Oct 09 16:39:40 compute-0 nova_compute[117331]:   </os>
Oct 09 16:39:40 compute-0 nova_compute[117331]:   <features>
Oct 09 16:39:40 compute-0 nova_compute[117331]:     <acpi/>
Oct 09 16:39:40 compute-0 nova_compute[117331]:     <apic/>
Oct 09 16:39:40 compute-0 nova_compute[117331]:     <vmcoreinfo state="on"/>
Oct 09 16:39:40 compute-0 nova_compute[117331]:   </features>
Oct 09 16:39:40 compute-0 nova_compute[117331]:   <cpu mode="host-model" check="partial">
Oct 09 16:39:40 compute-0 nova_compute[117331]:     <topology sockets="1" dies="1" clusters="1" cores="1" threads="1"/>
Oct 09 16:39:40 compute-0 nova_compute[117331]:   </cpu>
Oct 09 16:39:40 compute-0 nova_compute[117331]:   <clock offset="utc">
Oct 09 16:39:40 compute-0 nova_compute[117331]:     <timer name="pit" tickpolicy="delay"/>
Oct 09 16:39:40 compute-0 nova_compute[117331]:     <timer name="rtc" tickpolicy="catchup"/>
Oct 09 16:39:40 compute-0 nova_compute[117331]:     <timer name="hpet" present="no"/>
Oct 09 16:39:40 compute-0 nova_compute[117331]:   </clock>
Oct 09 16:39:40 compute-0 nova_compute[117331]:   <on_poweroff>destroy</on_poweroff>
Oct 09 16:39:40 compute-0 nova_compute[117331]:   <on_reboot>restart</on_reboot>
Oct 09 16:39:40 compute-0 nova_compute[117331]:   <on_crash>destroy</on_crash>
Oct 09 16:39:40 compute-0 nova_compute[117331]:   <devices>
Oct 09 16:39:40 compute-0 nova_compute[117331]:     <emulator>/usr/libexec/qemu-kvm</emulator>
Oct 09 16:39:40 compute-0 nova_compute[117331]:     <disk type="file" device="disk">
Oct 09 16:39:40 compute-0 nova_compute[117331]:       <driver name="qemu" type="qcow2" cache="none"/>
Oct 09 16:39:40 compute-0 nova_compute[117331]:       <source file="/var/lib/nova/instances/5963f55d-5e3f-4b07-86ef-554333267b8f/disk"/>
Oct 09 16:39:40 compute-0 nova_compute[117331]:       <target dev="vda" bus="virtio"/>
Oct 09 16:39:40 compute-0 nova_compute[117331]:       <address type="pci" domain="0x0000" bus="0x03" slot="0x00" function="0x0"/>
Oct 09 16:39:40 compute-0 nova_compute[117331]:     </disk>
Oct 09 16:39:40 compute-0 nova_compute[117331]:     <disk type="file" device="cdrom">
Oct 09 16:39:40 compute-0 nova_compute[117331]:       <driver name="qemu" type="raw" cache="none"/>
Oct 09 16:39:40 compute-0 nova_compute[117331]:       <source file="/var/lib/nova/instances/5963f55d-5e3f-4b07-86ef-554333267b8f/disk.config"/>
Oct 09 16:39:40 compute-0 nova_compute[117331]:       <target dev="sda" bus="sata"/>
Oct 09 16:39:40 compute-0 nova_compute[117331]:       <readonly/>
Oct 09 16:39:40 compute-0 nova_compute[117331]:       <address type="drive" controller="0" bus="0" target="0" unit="0"/>
Oct 09 16:39:40 compute-0 nova_compute[117331]:     </disk>
Oct 09 16:39:40 compute-0 nova_compute[117331]:     <controller type="pci" index="0" model="pcie-root"/>
Oct 09 16:39:40 compute-0 nova_compute[117331]:     <controller type="pci" index="1" model="pcie-root-port">
Oct 09 16:39:40 compute-0 nova_compute[117331]:       <model name="pcie-root-port"/>
Oct 09 16:39:40 compute-0 nova_compute[117331]:       <target chassis="1" port="0x10"/>
Oct 09 16:39:40 compute-0 nova_compute[117331]:       <address type="pci" domain="0x0000" bus="0x00" slot="0x02" function="0x0" multifunction="on"/>
Oct 09 16:39:40 compute-0 nova_compute[117331]:     </controller>
Oct 09 16:39:40 compute-0 nova_compute[117331]:     <controller type="pci" index="2" model="pcie-root-port">
Oct 09 16:39:40 compute-0 nova_compute[117331]:       <model name="pcie-root-port"/>
Oct 09 16:39:40 compute-0 nova_compute[117331]:       <target chassis="2" port="0x11"/>
Oct 09 16:39:40 compute-0 nova_compute[117331]:       <address type="pci" domain="0x0000" bus="0x00" slot="0x02" function="0x1"/>
Oct 09 16:39:40 compute-0 nova_compute[117331]:     </controller>
Oct 09 16:39:40 compute-0 nova_compute[117331]:     <controller type="pci" index="3" model="pcie-root-port">
Oct 09 16:39:40 compute-0 nova_compute[117331]:       <model name="pcie-root-port"/>
Oct 09 16:39:40 compute-0 nova_compute[117331]:       <target chassis="3" port="0x12"/>
Oct 09 16:39:40 compute-0 nova_compute[117331]:       <address type="pci" domain="0x0000" bus="0x00" slot="0x02" function="0x2"/>
Oct 09 16:39:40 compute-0 nova_compute[117331]:     </controller>
Oct 09 16:39:40 compute-0 nova_compute[117331]:     <controller type="pci" index="4" model="pcie-root-port">
Oct 09 16:39:40 compute-0 nova_compute[117331]:       <model name="pcie-root-port"/>
Oct 09 16:39:40 compute-0 nova_compute[117331]:       <target chassis="4" port="0x13"/>
Oct 09 16:39:40 compute-0 nova_compute[117331]:       <address type="pci" domain="0x0000" bus="0x00" slot="0x02" function="0x3"/>
Oct 09 16:39:40 compute-0 nova_compute[117331]:     </controller>
Oct 09 16:39:40 compute-0 nova_compute[117331]:     <controller type="pci" index="5" model="pcie-root-port">
Oct 09 16:39:40 compute-0 nova_compute[117331]:       <model name="pcie-root-port"/>
Oct 09 16:39:40 compute-0 nova_compute[117331]:       <target chassis="5" port="0x14"/>
Oct 09 16:39:40 compute-0 nova_compute[117331]:       <address type="pci" domain="0x0000" bus="0x00" slot="0x02" function="0x4"/>
Oct 09 16:39:40 compute-0 nova_compute[117331]:     </controller>
Oct 09 16:39:40 compute-0 nova_compute[117331]:     <controller type="pci" index="6" model="pcie-root-port">
Oct 09 16:39:40 compute-0 nova_compute[117331]:       <model name="pcie-root-port"/>
Oct 09 16:39:40 compute-0 nova_compute[117331]:       <target chassis="6" port="0x15"/>
Oct 09 16:39:40 compute-0 nova_compute[117331]:       <address type="pci" domain="0x0000" bus="0x00" slot="0x02" function="0x5"/>
Oct 09 16:39:40 compute-0 nova_compute[117331]:     </controller>
Oct 09 16:39:40 compute-0 nova_compute[117331]:     <controller type="pci" index="7" model="pcie-root-port">
Oct 09 16:39:40 compute-0 nova_compute[117331]:       <model name="pcie-root-port"/>
Oct 09 16:39:40 compute-0 nova_compute[117331]:       <target chassis="7" port="0x16"/>
Oct 09 16:39:40 compute-0 nova_compute[117331]:       <address type="pci" domain="0x0000" bus="0x00" slot="0x02" function="0x6"/>
Oct 09 16:39:40 compute-0 nova_compute[117331]:     </controller>
Oct 09 16:39:40 compute-0 nova_compute[117331]:     <controller type="pci" index="8" model="pcie-root-port">
Oct 09 16:39:40 compute-0 nova_compute[117331]:       <model name="pcie-root-port"/>
Oct 09 16:39:40 compute-0 nova_compute[117331]:       <target chassis="8" port="0x17"/>
Oct 09 16:39:40 compute-0 nova_compute[117331]:       <address type="pci" domain="0x0000" bus="0x00" slot="0x02" function="0x7"/>
Oct 09 16:39:40 compute-0 nova_compute[117331]:     </controller>
Oct 09 16:39:40 compute-0 nova_compute[117331]:     <controller type="pci" index="9" model="pcie-root-port">
Oct 09 16:39:40 compute-0 nova_compute[117331]:       <model name="pcie-root-port"/>
Oct 09 16:39:40 compute-0 nova_compute[117331]:       <target chassis="9" port="0x18"/>
Oct 09 16:39:40 compute-0 nova_compute[117331]:       <address type="pci" domain="0x0000" bus="0x00" slot="0x03" function="0x0" multifunction="on"/>
Oct 09 16:39:40 compute-0 nova_compute[117331]:     </controller>
Oct 09 16:39:40 compute-0 nova_compute[117331]:     <controller type="pci" index="10" model="pcie-root-port">
Oct 09 16:39:40 compute-0 nova_compute[117331]:       <model name="pcie-root-port"/>
Oct 09 16:39:40 compute-0 nova_compute[117331]:       <target chassis="10" port="0x19"/>
Oct 09 16:39:40 compute-0 nova_compute[117331]:       <address type="pci" domain="0x0000" bus="0x00" slot="0x03" function="0x1"/>
Oct 09 16:39:40 compute-0 nova_compute[117331]:     </controller>
Oct 09 16:39:40 compute-0 nova_compute[117331]:     <controller type="pci" index="11" model="pcie-root-port">
Oct 09 16:39:40 compute-0 nova_compute[117331]:       <model name="pcie-root-port"/>
Oct 09 16:39:40 compute-0 nova_compute[117331]:       <target chassis="11" port="0x1a"/>
Oct 09 16:39:40 compute-0 nova_compute[117331]:       <address type="pci" domain="0x0000" bus="0x00" slot="0x03" function="0x2"/>
Oct 09 16:39:40 compute-0 nova_compute[117331]:     </controller>
Oct 09 16:39:40 compute-0 nova_compute[117331]:     <controller type="pci" index="12" model="pcie-root-port">
Oct 09 16:39:40 compute-0 nova_compute[117331]:       <model name="pcie-root-port"/>
Oct 09 16:39:40 compute-0 nova_compute[117331]:       <target chassis="12" port="0x1b"/>
Oct 09 16:39:40 compute-0 nova_compute[117331]:       <address type="pci" domain="0x0000" bus="0x00" slot="0x03" function="0x3"/>
Oct 09 16:39:40 compute-0 nova_compute[117331]:     </controller>
Oct 09 16:39:40 compute-0 nova_compute[117331]:     <controller type="pci" index="13" model="pcie-root-port">
Oct 09 16:39:40 compute-0 nova_compute[117331]:       <model name="pcie-root-port"/>
Oct 09 16:39:40 compute-0 nova_compute[117331]:       <target chassis="13" port="0x1c"/>
Oct 09 16:39:40 compute-0 nova_compute[117331]:       <address type="pci" domain="0x0000" bus="0x00" slot="0x03" function="0x4"/>
Oct 09 16:39:40 compute-0 nova_compute[117331]:     </controller>
Oct 09 16:39:40 compute-0 nova_compute[117331]:     <controller type="pci" index="14" model="pcie-root-port">
Oct 09 16:39:40 compute-0 nova_compute[117331]:       <model name="pcie-root-port"/>
Oct 09 16:39:40 compute-0 nova_compute[117331]:       <target chassis="14" port="0x1d"/>
Oct 09 16:39:40 compute-0 nova_compute[117331]:       <address type="pci" domain="0x0000" bus="0x00" slot="0x03" function="0x5"/>
Oct 09 16:39:40 compute-0 nova_compute[117331]:     </controller>
Oct 09 16:39:40 compute-0 nova_compute[117331]:     <controller type="pci" index="15" model="pcie-root-port">
Oct 09 16:39:40 compute-0 nova_compute[117331]:       <model name="pcie-root-port"/>
Oct 09 16:39:40 compute-0 nova_compute[117331]:       <target chassis="15" port="0x1e"/>
Oct 09 16:39:40 compute-0 nova_compute[117331]:       <address type="pci" domain="0x0000" bus="0x00" slot="0x03" function="0x6"/>
Oct 09 16:39:40 compute-0 nova_compute[117331]:     </controller>
Oct 09 16:39:40 compute-0 nova_compute[117331]:     <controller type="pci" index="16" model="pcie-root-port">
Oct 09 16:39:40 compute-0 nova_compute[117331]:       <model name="pcie-root-port"/>
Oct 09 16:39:40 compute-0 nova_compute[117331]:       <target chassis="16" port="0x1f"/>
Oct 09 16:39:40 compute-0 nova_compute[117331]:       <address type="pci" domain="0x0000" bus="0x00" slot="0x03" function="0x7"/>
Oct 09 16:39:40 compute-0 nova_compute[117331]:     </controller>
Oct 09 16:39:40 compute-0 nova_compute[117331]:     <controller type="pci" index="17" model="pcie-root-port">
Oct 09 16:39:40 compute-0 nova_compute[117331]:       <model name="pcie-root-port"/>
Oct 09 16:39:40 compute-0 nova_compute[117331]:       <target chassis="17" port="0x20"/>
Oct 09 16:39:40 compute-0 nova_compute[117331]:       <address type="pci" domain="0x0000" bus="0x00" slot="0x04" function="0x0" multifunction="on"/>
Oct 09 16:39:40 compute-0 nova_compute[117331]:     </controller>
Oct 09 16:39:40 compute-0 nova_compute[117331]:     <controller type="pci" index="18" model="pcie-root-port">
Oct 09 16:39:40 compute-0 nova_compute[117331]:       <model name="pcie-root-port"/>
Oct 09 16:39:40 compute-0 nova_compute[117331]:       <target chassis="18" port="0x21"/>
Oct 09 16:39:40 compute-0 nova_compute[117331]:       <address type="pci" domain="0x0000" bus="0x00" slot="0x04" function="0x1"/>
Oct 09 16:39:40 compute-0 nova_compute[117331]:     </controller>
Oct 09 16:39:40 compute-0 nova_compute[117331]:     <controller type="pci" index="19" model="pcie-root-port">
Oct 09 16:39:40 compute-0 nova_compute[117331]:       <model name="pcie-root-port"/>
Oct 09 16:39:40 compute-0 nova_compute[117331]:       <target chassis="19" port="0x22"/>
Oct 09 16:39:40 compute-0 nova_compute[117331]:       <address type="pci" domain="0x0000" bus="0x00" slot="0x04" function="0x2"/>
Oct 09 16:39:40 compute-0 nova_compute[117331]:     </controller>
Oct 09 16:39:40 compute-0 nova_compute[117331]:     <controller type="pci" index="20" model="pcie-root-port">
Oct 09 16:39:40 compute-0 nova_compute[117331]:       <model name="pcie-root-port"/>
Oct 09 16:39:40 compute-0 nova_compute[117331]:       <target chassis="20" port="0x23"/>
Oct 09 16:39:40 compute-0 nova_compute[117331]:       <address type="pci" domain="0x0000" bus="0x00" slot="0x04" function="0x3"/>
Oct 09 16:39:40 compute-0 nova_compute[117331]:     </controller>
Oct 09 16:39:40 compute-0 nova_compute[117331]:     <controller type="pci" index="21" model="pcie-root-port">
Oct 09 16:39:40 compute-0 nova_compute[117331]:       <model name="pcie-root-port"/>
Oct 09 16:39:40 compute-0 nova_compute[117331]:       <target chassis="21" port="0x24"/>
Oct 09 16:39:40 compute-0 nova_compute[117331]:       <address type="pci" domain="0x0000" bus="0x00" slot="0x04" function="0x4"/>
Oct 09 16:39:40 compute-0 nova_compute[117331]:     </controller>
Oct 09 16:39:40 compute-0 nova_compute[117331]:     <controller type="pci" index="22" model="pcie-root-port">
Oct 09 16:39:40 compute-0 nova_compute[117331]:       <model name="pcie-root-port"/>
Oct 09 16:39:40 compute-0 nova_compute[117331]:       <target chassis="22" port="0x25"/>
Oct 09 16:39:40 compute-0 nova_compute[117331]:       <address type="pci" domain="0x0000" bus="0x00" slot="0x04" function="0x5"/>
Oct 09 16:39:40 compute-0 nova_compute[117331]:     </controller>
Oct 09 16:39:40 compute-0 nova_compute[117331]:     <controller type="pci" index="23" model="pcie-root-port">
Oct 09 16:39:40 compute-0 nova_compute[117331]:       <model name="pcie-root-port"/>
Oct 09 16:39:40 compute-0 nova_compute[117331]:       <target chassis="23" port="0x26"/>
Oct 09 16:39:40 compute-0 nova_compute[117331]:       <address type="pci" domain="0x0000" bus="0x00" slot="0x04" function="0x6"/>
Oct 09 16:39:40 compute-0 nova_compute[117331]:     </controller>
Oct 09 16:39:40 compute-0 nova_compute[117331]:     <controller type="pci" index="24" model="pcie-root-port">
Oct 09 16:39:40 compute-0 nova_compute[117331]:       <model name="pcie-root-port"/>
Oct 09 16:39:40 compute-0 nova_compute[117331]:       <target chassis="24" port="0x27"/>
Oct 09 16:39:40 compute-0 nova_compute[117331]:       <address type="pci" domain="0x0000" bus="0x00" slot="0x04" function="0x7"/>
Oct 09 16:39:40 compute-0 nova_compute[117331]:     </controller>
Oct 09 16:39:40 compute-0 nova_compute[117331]:     <controller type="pci" index="25" model="pcie-root-port">
Oct 09 16:39:40 compute-0 nova_compute[117331]:       <model name="pcie-root-port"/>
Oct 09 16:39:40 compute-0 nova_compute[117331]:       <target chassis="25" port="0x28"/>
Oct 09 16:39:40 compute-0 nova_compute[117331]:       <address type="pci" domain="0x0000" bus="0x00" slot="0x05" function="0x0"/>
Oct 09 16:39:40 compute-0 nova_compute[117331]:     </controller>
Oct 09 16:39:40 compute-0 nova_compute[117331]:     <controller type="pci" index="26" model="pcie-to-pci-bridge">
Oct 09 16:39:40 compute-0 nova_compute[117331]:       <model name="pcie-pci-bridge"/>
Oct 09 16:39:40 compute-0 nova_compute[117331]:       <address type="pci" domain="0x0000" bus="0x01" slot="0x00" function="0x0"/>
Oct 09 16:39:40 compute-0 nova_compute[117331]:     </controller>
Oct 09 16:39:40 compute-0 nova_compute[117331]:     <controller type="usb" index="0" model="piix3-uhci">
Oct 09 16:39:40 compute-0 nova_compute[117331]:       <address type="pci" domain="0x0000" bus="0x1a" slot="0x01" function="0x0"/>
Oct 09 16:39:40 compute-0 nova_compute[117331]:     </controller>
Oct 09 16:39:40 compute-0 nova_compute[117331]:     <controller type="sata" index="0">
Oct 09 16:39:40 compute-0 nova_compute[117331]:       <address type="pci" domain="0x0000" bus="0x00" slot="0x1f" function="0x2"/>
Oct 09 16:39:40 compute-0 nova_compute[117331]:     </controller>
Oct 09 16:39:40 compute-0 nova_compute[117331]:     <interface type="ethernet"><mac address="fa:16:3e:36:08:54"/><model type="virtio"/><driver name="vhost" rx_queue_size="512"/><mtu size="1442"/><target dev="tap0060f48c-30"/><address type="pci" domain="0x0000" bus="0x02" slot="0x00" function="0x0"/>
Oct 09 16:39:40 compute-0 nova_compute[117331]:     </interface><serial type="pty">
Oct 09 16:39:40 compute-0 nova_compute[117331]:       <log file="/var/lib/nova/instances/5963f55d-5e3f-4b07-86ef-554333267b8f/console.log" append="off"/>
Oct 09 16:39:40 compute-0 nova_compute[117331]:       <target type="isa-serial" port="0">
Oct 09 16:39:40 compute-0 nova_compute[117331]:         <model name="isa-serial"/>
Oct 09 16:39:40 compute-0 nova_compute[117331]:       </target>
Oct 09 16:39:40 compute-0 nova_compute[117331]:     </serial>
Oct 09 16:39:40 compute-0 nova_compute[117331]:     <console type="pty">
Oct 09 16:39:40 compute-0 nova_compute[117331]:       <log file="/var/lib/nova/instances/5963f55d-5e3f-4b07-86ef-554333267b8f/console.log" append="off"/>
Oct 09 16:39:40 compute-0 nova_compute[117331]:       <target type="serial" port="0"/>
Oct 09 16:39:40 compute-0 nova_compute[117331]:     </console>
Oct 09 16:39:40 compute-0 nova_compute[117331]:     <input type="tablet" bus="usb">
Oct 09 16:39:40 compute-0 nova_compute[117331]:       <address type="usb" bus="0" port="1"/>
Oct 09 16:39:40 compute-0 nova_compute[117331]:     </input>
Oct 09 16:39:40 compute-0 nova_compute[117331]:     <input type="mouse" bus="ps2"/>
Oct 09 16:39:40 compute-0 nova_compute[117331]:     <graphics type="vnc" port="-1" autoport="yes" listen="::">
Oct 09 16:39:40 compute-0 nova_compute[117331]:       <listen type="address" address="::"/>
Oct 09 16:39:40 compute-0 nova_compute[117331]:     </graphics>
Oct 09 16:39:40 compute-0 nova_compute[117331]:     <video>
Oct 09 16:39:40 compute-0 nova_compute[117331]:       <model type="virtio" heads="1" primary="yes"/>
Oct 09 16:39:40 compute-0 nova_compute[117331]:       <address type="pci" domain="0x0000" bus="0x00" slot="0x01" function="0x0"/>
Oct 09 16:39:40 compute-0 nova_compute[117331]:     </video>
Oct 09 16:39:40 compute-0 nova_compute[117331]:     <memballoon model="virtio" autodeflate="on" freePageReporting="on">
Oct 09 16:39:40 compute-0 nova_compute[117331]:       <stats period="10"/>
Oct 09 16:39:40 compute-0 nova_compute[117331]:       <address type="pci" domain="0x0000" bus="0x04" slot="0x00" function="0x0"/>
Oct 09 16:39:40 compute-0 nova_compute[117331]:     </memballoon>
Oct 09 16:39:40 compute-0 nova_compute[117331]:     <rng model="virtio">
Oct 09 16:39:40 compute-0 nova_compute[117331]:       <backend model="random">/dev/urandom</backend>
Oct 09 16:39:40 compute-0 nova_compute[117331]:       <address type="pci" domain="0x0000" bus="0x05" slot="0x00" function="0x0"/>
Oct 09 16:39:40 compute-0 nova_compute[117331]:     </rng>
Oct 09 16:39:40 compute-0 nova_compute[117331]:   </devices>
Oct 09 16:39:40 compute-0 nova_compute[117331]:   <seclabel type="dynamic" model="selinux" relabel="yes"/>
Oct 09 16:39:40 compute-0 nova_compute[117331]: </domain>
Oct 09 16:39:40 compute-0 nova_compute[117331]:  _update_pci_dev_xml /usr/lib/python3.12/site-packages/nova/virt/libvirt/migration.py:166
Oct 09 16:39:40 compute-0 nova_compute[117331]: 2025-10-09 16:39:40.167 2 DEBUG nova.virt.libvirt.driver [None req-b13a42ce-7734-4c51-aa6a-4687cb3b7ca3 005258eefa5345409efe13f6a1de22ab c076c42271004e7aa86e84416e33f826 - - default default] [instance: 5963f55d-5e3f-4b07-86ef-554333267b8f] About to invoke the migrate API _live_migration_operation /usr/lib/python3.12/site-packages/nova/virt/libvirt/driver.py:11175
Oct 09 16:39:40 compute-0 nova_compute[117331]: 2025-10-09 16:39:40.325 2 DEBUG nova.compute.resource_tracker [None req-92a13bed-c081-4c7b-87f3-bafebefb79f2 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.12/site-packages/nova/compute/resource_tracker.py:1097
Oct 09 16:39:40 compute-0 nova_compute[117331]: 2025-10-09 16:39:40.326 2 DEBUG oslo_concurrency.lockutils [None req-92a13bed-c081-4c7b-87f3-bafebefb79f2 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 2.175s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:424
Oct 09 16:39:40 compute-0 nova_compute[117331]: 2025-10-09 16:39:40.656 2 DEBUG nova.virt.libvirt.migration [None req-b13a42ce-7734-4c51-aa6a-4687cb3b7ca3 005258eefa5345409efe13f6a1de22ab c076c42271004e7aa86e84416e33f826 - - default default] [instance: 5963f55d-5e3f-4b07-86ef-554333267b8f] Current None elapsed 1 steps [(0, 50), (300, 95), (600, 140), (900, 185), (1200, 230), (1500, 275), (1800, 320), (2100, 365), (2400, 410), (2700, 455), (3000, 500)] update_downtime /usr/lib/python3.12/site-packages/nova/virt/libvirt/migration.py:658
Oct 09 16:39:40 compute-0 nova_compute[117331]: 2025-10-09 16:39:40.657 2 INFO nova.virt.libvirt.migration [None req-b13a42ce-7734-4c51-aa6a-4687cb3b7ca3 005258eefa5345409efe13f6a1de22ab c076c42271004e7aa86e84416e33f826 - - default default] [instance: 5963f55d-5e3f-4b07-86ef-554333267b8f] Increasing downtime to 50 ms after 0 sec elapsed time
Oct 09 16:39:41 compute-0 nova_compute[117331]: 2025-10-09 16:39:41.320 2 DEBUG oslo_service.periodic_task [None req-92a13bed-c081-4c7b-87f3-bafebefb79f2 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.12/site-packages/oslo_service/periodic_task.py:210
Oct 09 16:39:41 compute-0 nova_compute[117331]: 2025-10-09 16:39:41.321 2 DEBUG oslo_service.periodic_task [None req-92a13bed-c081-4c7b-87f3-bafebefb79f2 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.12/site-packages/oslo_service/periodic_task.py:210
Oct 09 16:39:41 compute-0 nova_compute[117331]: 2025-10-09 16:39:41.321 2 DEBUG oslo_service.periodic_task [None req-92a13bed-c081-4c7b-87f3-bafebefb79f2 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.12/site-packages/oslo_service/periodic_task.py:210
Oct 09 16:39:41 compute-0 nova_compute[117331]: 2025-10-09 16:39:41.321 2 DEBUG nova.compute.manager [None req-92a13bed-c081-4c7b-87f3-bafebefb79f2 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.12/site-packages/nova/compute/manager.py:11228
Oct 09 16:39:41 compute-0 nova_compute[117331]: 2025-10-09 16:39:41.676 2 INFO nova.virt.libvirt.driver [None req-b13a42ce-7734-4c51-aa6a-4687cb3b7ca3 005258eefa5345409efe13f6a1de22ab c076c42271004e7aa86e84416e33f826 - - default default] [instance: 5963f55d-5e3f-4b07-86ef-554333267b8f] Migration running for 1 secs, memory 100% remaining (bytes processed=0, remaining=0, total=0); disk 100% remaining (bytes processed=0, remaining=0, total=0).
Oct 09 16:39:42 compute-0 kernel: tap0060f48c-30 (unregistering): left promiscuous mode
Oct 09 16:39:42 compute-0 NetworkManager[1028]: <info>  [1760027982.0565] device (tap0060f48c-30): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Oct 09 16:39:42 compute-0 ovn_controller[19752]: 2025-10-09T16:39:42Z|00266|binding|INFO|Releasing lport 0060f48c-30fd-4d98-810c-c9953d57fbb2 from this chassis (sb_readonly=0)
Oct 09 16:39:42 compute-0 ovn_controller[19752]: 2025-10-09T16:39:42Z|00267|binding|INFO|Setting lport 0060f48c-30fd-4d98-810c-c9953d57fbb2 down in Southbound
Oct 09 16:39:42 compute-0 ovn_controller[19752]: 2025-10-09T16:39:42Z|00268|binding|INFO|Removing iface tap0060f48c-30 ovn-installed in OVS
Oct 09 16:39:42 compute-0 nova_compute[117331]: 2025-10-09 16:39:42.063 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 09 16:39:42 compute-0 nova_compute[117331]: 2025-10-09 16:39:42.064 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 09 16:39:42 compute-0 ovn_metadata_agent[28608]: 2025-10-09 16:39:42.069 28613 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:36:08:54 10.100.0.12'], port_security=['fa:16:3e:36:08:54 10.100.0.12'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com,compute-1.ctlplane.example.com', 'activation-strategy': 'rarp', 'additional-chassis-activated': '2bd8bf21-1f6b-42c9-9656-9a72fa8dcbf3'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.12/28', 'neutron:device_id': '5963f55d-5e3f-4b07-86ef-554333267b8f', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-cf3aa351-4d26-41f3-8cb5-1ff2d3d995c8', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'a60acfe52e4b4b7f912654a59f0978b7', 'neutron:revision_number': '10', 'neutron:security_group_ids': 'c47ae156-dc3a-4eb3-8eb2-126be8ba5497', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=f19947fc-c2ef-4762-adfc-471bb24a038d, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f37b2576840>], logical_port=0060f48c-30fd-4d98-810c-c9953d57fbb2) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f37b2576840>]) matches /usr/lib/python3.12/site-packages/ovsdbapp/backend/ovs_idl/event.py:55
Oct 09 16:39:42 compute-0 ovn_metadata_agent[28608]: 2025-10-09 16:39:42.070 28613 INFO neutron.agent.ovn.metadata.agent [-] Port 0060f48c-30fd-4d98-810c-c9953d57fbb2 in datapath cf3aa351-4d26-41f3-8cb5-1ff2d3d995c8 unbound from our chassis
Oct 09 16:39:42 compute-0 ovn_metadata_agent[28608]: 2025-10-09 16:39:42.071 28613 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network cf3aa351-4d26-41f3-8cb5-1ff2d3d995c8, tearing the namespace down if needed _get_provision_params /usr/lib/python3.12/site-packages/neutron/agent/ovn/metadata/agent.py:756
Oct 09 16:39:42 compute-0 ovn_metadata_agent[28608]: 2025-10-09 16:39:42.072 139687 DEBUG oslo.privsep.daemon [-] privsep: reply[2f7c2167-d787-47da-b148-ce38eafa007d]: (4, True) _call_back /usr/lib/python3.12/site-packages/oslo_privsep/daemon.py:515
Oct 09 16:39:42 compute-0 ovn_metadata_agent[28608]: 2025-10-09 16:39:42.073 28613 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-cf3aa351-4d26-41f3-8cb5-1ff2d3d995c8 namespace which is not needed anymore
Oct 09 16:39:42 compute-0 nova_compute[117331]: 2025-10-09 16:39:42.081 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 09 16:39:42 compute-0 systemd[1]: machine-qemu\x2d22\x2dinstance\x2d0000001c.scope: Deactivated successfully.
Oct 09 16:39:42 compute-0 systemd[1]: machine-qemu\x2d22\x2dinstance\x2d0000001c.scope: Consumed 16.391s CPU time.
Oct 09 16:39:42 compute-0 systemd-machined[77487]: Machine qemu-22-instance-0000001c terminated.
Oct 09 16:39:42 compute-0 podman[153533]: 2025-10-09 16:39:42.145146677 +0000 UTC m=+0.052779322 container health_status 0e966c7a283689daf23c5db5017f23f685ee836319240188a9f70ddd246563c5 (image=38.102.83.66:5001/podified-master-centos10/openstack-neutron-metadata-agent-ovn:watcher_latest, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': '38.102.83.66:5001/podified-master-centos10/openstack-neutron-metadata-agent-ovn:watcher_latest', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251007, org.label-schema.license=GPLv2, config_id=ovn_metadata_agent, io.buildah.version=1.41.4, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, container_name=ovn_metadata_agent, tcib_build_tag=watcher_latest)
Oct 09 16:39:42 compute-0 podman[153538]: 2025-10-09 16:39:42.152843361 +0000 UTC m=+0.057292075 container health_status 8227728d23f374cb9528be6ddf01dff38d8c792912a17b0d137932a18390b820 (image=38.102.83.66:5001/podified-master-centos10/openstack-iscsid:watcher_latest, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 10 Base Image, tcib_build_tag=watcher_latest, config_id=iscsid, container_name=iscsid, io.buildah.version=1.41.4, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': '38.102.83.66:5001/podified-master-centos10/openstack-iscsid:watcher_latest', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.build-date=20251007)
Oct 09 16:39:42 compute-0 neutron-haproxy-ovnmeta-cf3aa351-4d26-41f3-8cb5-1ff2d3d995c8[152921]: [NOTICE]   (152925) : haproxy version is 3.0.5-8e879a5
Oct 09 16:39:42 compute-0 neutron-haproxy-ovnmeta-cf3aa351-4d26-41f3-8cb5-1ff2d3d995c8[152921]: [NOTICE]   (152925) : path to executable is /usr/sbin/haproxy
Oct 09 16:39:42 compute-0 neutron-haproxy-ovnmeta-cf3aa351-4d26-41f3-8cb5-1ff2d3d995c8[152921]: [WARNING]  (152925) : Exiting Master process...
Oct 09 16:39:42 compute-0 podman[153590]: 2025-10-09 16:39:42.176910973 +0000 UTC m=+0.026991175 container kill ac329dbdf5122e2af537b1cabcd723e52c3045b322dc8e533feff49f9291e1f5 (image=38.102.83.66:5001/podified-master-centos10/openstack-neutron-metadata-agent-ovn:watcher_latest, name=neutron-haproxy-ovnmeta-cf3aa351-4d26-41f3-8cb5-1ff2d3d995c8, io.buildah.version=1.41.4, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251007, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=watcher_latest, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 10 Base Image)
Oct 09 16:39:42 compute-0 neutron-haproxy-ovnmeta-cf3aa351-4d26-41f3-8cb5-1ff2d3d995c8[152921]: [ALERT]    (152925) : Current worker (152927) exited with code 143 (Terminated)
Oct 09 16:39:42 compute-0 neutron-haproxy-ovnmeta-cf3aa351-4d26-41f3-8cb5-1ff2d3d995c8[152921]: [WARNING]  (152925) : All workers exited. Exiting... (0)
Oct 09 16:39:42 compute-0 systemd[1]: libpod-ac329dbdf5122e2af537b1cabcd723e52c3045b322dc8e533feff49f9291e1f5.scope: Deactivated successfully.
Oct 09 16:39:42 compute-0 podman[153607]: 2025-10-09 16:39:42.211326643 +0000 UTC m=+0.020824031 container died ac329dbdf5122e2af537b1cabcd723e52c3045b322dc8e533feff49f9291e1f5 (image=38.102.83.66:5001/podified-master-centos10/openstack-neutron-metadata-agent-ovn:watcher_latest, name=neutron-haproxy-ovnmeta-cf3aa351-4d26-41f3-8cb5-1ff2d3d995c8, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.4, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251007, org.label-schema.license=GPLv2, tcib_build_tag=watcher_latest, tcib_managed=true, org.label-schema.name=CentOS Stream 10 Base Image)
Oct 09 16:39:42 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-ac329dbdf5122e2af537b1cabcd723e52c3045b322dc8e533feff49f9291e1f5-userdata-shm.mount: Deactivated successfully.
Oct 09 16:39:42 compute-0 systemd[1]: var-lib-containers-storage-overlay-7f933023e944918b30dd1d938c5d51661bb6b80909b10c9cb85c412159a998cf-merged.mount: Deactivated successfully.
Oct 09 16:39:42 compute-0 kernel: tap0060f48c-30: entered promiscuous mode
Oct 09 16:39:42 compute-0 kernel: tap0060f48c-30 (unregistering): left promiscuous mode
Oct 09 16:39:42 compute-0 NetworkManager[1028]: <info>  [1760027982.2545] manager: (tap0060f48c-30): new Tun device (/org/freedesktop/NetworkManager/Devices/100)
Oct 09 16:39:42 compute-0 ovn_controller[19752]: 2025-10-09T16:39:42Z|00269|binding|INFO|Claiming lport 0060f48c-30fd-4d98-810c-c9953d57fbb2 for this chassis.
Oct 09 16:39:42 compute-0 ovn_controller[19752]: 2025-10-09T16:39:42Z|00270|binding|INFO|0060f48c-30fd-4d98-810c-c9953d57fbb2: Claiming fa:16:3e:36:08:54 10.100.0.12
Oct 09 16:39:42 compute-0 nova_compute[117331]: 2025-10-09 16:39:42.257 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 09 16:39:42 compute-0 ovn_metadata_agent[28608]: 2025-10-09 16:39:42.262 28613 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:36:08:54 10.100.0.12'], port_security=['fa:16:3e:36:08:54 10.100.0.12'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com,compute-1.ctlplane.example.com', 'activation-strategy': 'rarp', 'additional-chassis-activated': '2bd8bf21-1f6b-42c9-9656-9a72fa8dcbf3'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.12/28', 'neutron:device_id': '5963f55d-5e3f-4b07-86ef-554333267b8f', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-cf3aa351-4d26-41f3-8cb5-1ff2d3d995c8', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'a60acfe52e4b4b7f912654a59f0978b7', 'neutron:revision_number': '10', 'neutron:security_group_ids': 'c47ae156-dc3a-4eb3-8eb2-126be8ba5497', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=f19947fc-c2ef-4762-adfc-471bb24a038d, chassis=[<ovs.db.idl.Row object at 0x7f37b2576840>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f37b2576840>], logical_port=0060f48c-30fd-4d98-810c-c9953d57fbb2) old=Port_Binding(chassis=[]) matches /usr/lib/python3.12/site-packages/ovsdbapp/backend/ovs_idl/event.py:55
Oct 09 16:39:42 compute-0 podman[153607]: 2025-10-09 16:39:42.265775106 +0000 UTC m=+0.075272484 container cleanup ac329dbdf5122e2af537b1cabcd723e52c3045b322dc8e533feff49f9291e1f5 (image=38.102.83.66:5001/podified-master-centos10/openstack-neutron-metadata-agent-ovn:watcher_latest, name=neutron-haproxy-ovnmeta-cf3aa351-4d26-41f3-8cb5-1ff2d3d995c8, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.41.4, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251007, org.label-schema.schema-version=1.0, tcib_build_tag=watcher_latest)
Oct 09 16:39:42 compute-0 systemd[1]: libpod-conmon-ac329dbdf5122e2af537b1cabcd723e52c3045b322dc8e533feff49f9291e1f5.scope: Deactivated successfully.
Oct 09 16:39:42 compute-0 ovn_controller[19752]: 2025-10-09T16:39:42Z|00271|binding|INFO|Setting lport 0060f48c-30fd-4d98-810c-c9953d57fbb2 ovn-installed in OVS
Oct 09 16:39:42 compute-0 ovn_controller[19752]: 2025-10-09T16:39:42Z|00272|binding|INFO|Setting lport 0060f48c-30fd-4d98-810c-c9953d57fbb2 up in Southbound
Oct 09 16:39:42 compute-0 ovn_controller[19752]: 2025-10-09T16:39:42Z|00273|binding|INFO|Releasing lport 0060f48c-30fd-4d98-810c-c9953d57fbb2 from this chassis (sb_readonly=1)
Oct 09 16:39:42 compute-0 ovn_controller[19752]: 2025-10-09T16:39:42Z|00274|if_status|INFO|Dropped 2 log messages in last 686 seconds (most recently, 686 seconds ago) due to excessive rate
Oct 09 16:39:42 compute-0 ovn_controller[19752]: 2025-10-09T16:39:42Z|00275|if_status|INFO|Not setting lport 0060f48c-30fd-4d98-810c-c9953d57fbb2 down as sb is readonly
Oct 09 16:39:42 compute-0 ovn_controller[19752]: 2025-10-09T16:39:42Z|00276|binding|INFO|Removing iface tap0060f48c-30 ovn-installed in OVS
Oct 09 16:39:42 compute-0 nova_compute[117331]: 2025-10-09 16:39:42.272 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 09 16:39:42 compute-0 nova_compute[117331]: 2025-10-09 16:39:42.273 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 09 16:39:42 compute-0 ovn_controller[19752]: 2025-10-09T16:39:42Z|00277|binding|INFO|Releasing lport 0060f48c-30fd-4d98-810c-c9953d57fbb2 from this chassis (sb_readonly=0)
Oct 09 16:39:42 compute-0 ovn_controller[19752]: 2025-10-09T16:39:42Z|00278|binding|INFO|Setting lport 0060f48c-30fd-4d98-810c-c9953d57fbb2 down in Southbound
Oct 09 16:39:42 compute-0 ovn_metadata_agent[28608]: 2025-10-09 16:39:42.283 28613 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:36:08:54 10.100.0.12'], port_security=['fa:16:3e:36:08:54 10.100.0.12'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com,compute-1.ctlplane.example.com', 'activation-strategy': 'rarp', 'additional-chassis-activated': '2bd8bf21-1f6b-42c9-9656-9a72fa8dcbf3'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.12/28', 'neutron:device_id': '5963f55d-5e3f-4b07-86ef-554333267b8f', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-cf3aa351-4d26-41f3-8cb5-1ff2d3d995c8', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'a60acfe52e4b4b7f912654a59f0978b7', 'neutron:revision_number': '10', 'neutron:security_group_ids': 'c47ae156-dc3a-4eb3-8eb2-126be8ba5497', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=f19947fc-c2ef-4762-adfc-471bb24a038d, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f37b2576840>], logical_port=0060f48c-30fd-4d98-810c-c9953d57fbb2) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f37b2576840>]) matches /usr/lib/python3.12/site-packages/ovsdbapp/backend/ovs_idl/event.py:55
Oct 09 16:39:42 compute-0 nova_compute[117331]: 2025-10-09 16:39:42.287 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 09 16:39:42 compute-0 podman[153609]: 2025-10-09 16:39:42.293398071 +0000 UTC m=+0.093437529 container remove ac329dbdf5122e2af537b1cabcd723e52c3045b322dc8e533feff49f9291e1f5 (image=38.102.83.66:5001/podified-master-centos10/openstack-neutron-metadata-agent-ovn:watcher_latest, name=neutron-haproxy-ovnmeta-cf3aa351-4d26-41f3-8cb5-1ff2d3d995c8, org.label-schema.name=CentOS Stream 10 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251007, io.buildah.version=1.41.4, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=watcher_latest, tcib_managed=true)
Oct 09 16:39:42 compute-0 nova_compute[117331]: 2025-10-09 16:39:42.298 2 DEBUG nova.virt.libvirt.guest [None req-b13a42ce-7734-4c51-aa6a-4687cb3b7ca3 005258eefa5345409efe13f6a1de22ab c076c42271004e7aa86e84416e33f826 - - default default] Domain has shutdown/gone away: Requested operation is not valid: domain is not running get_job_info /usr/lib/python3.12/site-packages/nova/virt/libvirt/guest.py:687
Oct 09 16:39:42 compute-0 nova_compute[117331]: 2025-10-09 16:39:42.298 2 INFO nova.virt.libvirt.driver [None req-b13a42ce-7734-4c51-aa6a-4687cb3b7ca3 005258eefa5345409efe13f6a1de22ab c076c42271004e7aa86e84416e33f826 - - default default] [instance: 5963f55d-5e3f-4b07-86ef-554333267b8f] Migration operation has completed
Oct 09 16:39:42 compute-0 nova_compute[117331]: 2025-10-09 16:39:42.298 2 INFO nova.compute.manager [None req-b13a42ce-7734-4c51-aa6a-4687cb3b7ca3 005258eefa5345409efe13f6a1de22ab c076c42271004e7aa86e84416e33f826 - - default default] [instance: 5963f55d-5e3f-4b07-86ef-554333267b8f] _post_live_migration() is started..
Oct 09 16:39:42 compute-0 ovn_metadata_agent[28608]: 2025-10-09 16:39:42.299 139687 DEBUG oslo.privsep.daemon [-] privsep: reply[38d3a742-1c69-445f-9bbd-60efe17699d8]: (4, ("Thu Oct  9 04:39:42 PM UTC 2025 Sending signal '15' to neutron-haproxy-ovnmeta-cf3aa351-4d26-41f3-8cb5-1ff2d3d995c8 (ac329dbdf5122e2af537b1cabcd723e52c3045b322dc8e533feff49f9291e1f5)\nac329dbdf5122e2af537b1cabcd723e52c3045b322dc8e533feff49f9291e1f5\nThu Oct  9 04:39:42 PM UTC 2025 Deleting container neutron-haproxy-ovnmeta-cf3aa351-4d26-41f3-8cb5-1ff2d3d995c8 (ac329dbdf5122e2af537b1cabcd723e52c3045b322dc8e533feff49f9291e1f5)\nac329dbdf5122e2af537b1cabcd723e52c3045b322dc8e533feff49f9291e1f5\n", '', 0)) _call_back /usr/lib/python3.12/site-packages/oslo_privsep/daemon.py:515
Oct 09 16:39:42 compute-0 nova_compute[117331]: 2025-10-09 16:39:42.300 2 DEBUG nova.virt.libvirt.driver [None req-b13a42ce-7734-4c51-aa6a-4687cb3b7ca3 005258eefa5345409efe13f6a1de22ab c076c42271004e7aa86e84416e33f826 - - default default] [instance: 5963f55d-5e3f-4b07-86ef-554333267b8f] Migrate API has completed _live_migration_operation /usr/lib/python3.12/site-packages/nova/virt/libvirt/driver.py:11182
Oct 09 16:39:42 compute-0 nova_compute[117331]: 2025-10-09 16:39:42.301 2 DEBUG nova.virt.libvirt.driver [None req-b13a42ce-7734-4c51-aa6a-4687cb3b7ca3 005258eefa5345409efe13f6a1de22ab c076c42271004e7aa86e84416e33f826 - - default default] [instance: 5963f55d-5e3f-4b07-86ef-554333267b8f] Migration operation thread has finished _live_migration_operation /usr/lib/python3.12/site-packages/nova/virt/libvirt/driver.py:11230
Oct 09 16:39:42 compute-0 nova_compute[117331]: 2025-10-09 16:39:42.301 2 DEBUG nova.virt.libvirt.driver [None req-b13a42ce-7734-4c51-aa6a-4687cb3b7ca3 005258eefa5345409efe13f6a1de22ab c076c42271004e7aa86e84416e33f826 - - default default] [instance: 5963f55d-5e3f-4b07-86ef-554333267b8f] Migration operation thread notification thread_finished /usr/lib/python3.12/site-packages/nova/virt/libvirt/driver.py:11533
Oct 09 16:39:42 compute-0 ovn_metadata_agent[28608]: 2025-10-09 16:39:42.301 139687 DEBUG oslo.privsep.daemon [-] privsep: reply[354a079b-4c7c-45ad-aa61-50c5542d466e]: (4, None) _call_back /usr/lib/python3.12/site-packages/oslo_privsep/daemon.py:515
Oct 09 16:39:42 compute-0 ovn_metadata_agent[28608]: 2025-10-09 16:39:42.301 28613 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/cf3aa351-4d26-41f3-8cb5-1ff2d3d995c8.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/cf3aa351-4d26-41f3-8cb5-1ff2d3d995c8.pid.haproxy' get_value_from_file /usr/lib/python3.12/site-packages/neutron/agent/linux/utils.py:268
Oct 09 16:39:42 compute-0 ovn_metadata_agent[28608]: 2025-10-09 16:39:42.302 139687 DEBUG oslo.privsep.daemon [-] privsep: reply[5fd5b20b-2e9e-49d5-8a93-173a35095cc1]: (4, None) _call_back /usr/lib/python3.12/site-packages/oslo_privsep/daemon.py:515
Oct 09 16:39:42 compute-0 ovn_metadata_agent[28608]: 2025-10-09 16:39:42.302 28613 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapcf3aa351-40, bridge=None, if_exists=True) do_commit /usr/lib/python3.12/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 09 16:39:42 compute-0 nova_compute[117331]: 2025-10-09 16:39:42.303 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 09 16:39:42 compute-0 nova_compute[117331]: 2025-10-09 16:39:42.310 2 WARNING neutronclient.v2_0.client [None req-b13a42ce-7734-4c51-aa6a-4687cb3b7ca3 005258eefa5345409efe13f6a1de22ab c076c42271004e7aa86e84416e33f826 - - default default] The python binding code in neutronclient is deprecated in favor of OpenstackSDK, please use that as this will be removed in a future release.
Oct 09 16:39:42 compute-0 nova_compute[117331]: 2025-10-09 16:39:42.310 2 WARNING neutronclient.v2_0.client [None req-b13a42ce-7734-4c51-aa6a-4687cb3b7ca3 005258eefa5345409efe13f6a1de22ab c076c42271004e7aa86e84416e33f826 - - default default] The python binding code in neutronclient is deprecated in favor of OpenstackSDK, please use that as this will be removed in a future release.
Oct 09 16:39:42 compute-0 nova_compute[117331]: 2025-10-09 16:39:42.315 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 09 16:39:42 compute-0 kernel: tapcf3aa351-40: left promiscuous mode
Oct 09 16:39:42 compute-0 nova_compute[117331]: 2025-10-09 16:39:42.319 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 09 16:39:42 compute-0 ovn_metadata_agent[28608]: 2025-10-09 16:39:42.321 139687 DEBUG oslo.privsep.daemon [-] privsep: reply[e2b24523-f580-412a-8063-ff6fc82d735a]: (4, True) _call_back /usr/lib/python3.12/site-packages/oslo_privsep/daemon.py:515
Oct 09 16:39:42 compute-0 ovn_metadata_agent[28608]: 2025-10-09 16:39:42.361 139687 DEBUG oslo.privsep.daemon [-] privsep: reply[1fb62518-a994-453e-b9b1-557b76ee8b62]: (4, None) _call_back /usr/lib/python3.12/site-packages/oslo_privsep/daemon.py:515
Oct 09 16:39:42 compute-0 ovn_metadata_agent[28608]: 2025-10-09 16:39:42.362 139687 DEBUG oslo.privsep.daemon [-] privsep: reply[2bb888ae-12da-408f-9127-a0e4303c1380]: (4, True) _call_back /usr/lib/python3.12/site-packages/oslo_privsep/daemon.py:515
Oct 09 16:39:42 compute-0 ovn_metadata_agent[28608]: 2025-10-09 16:39:42.376 139687 DEBUG oslo.privsep.daemon [-] privsep: reply[3227d9c0-2fec-4902-9c30-346ce749bf3c]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 282933, 'reachable_time': 23800, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 153658, 'error': None, 'target': 'ovnmeta-cf3aa351-4d26-41f3-8cb5-1ff2d3d995c8', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.12/site-packages/oslo_privsep/daemon.py:515
Oct 09 16:39:42 compute-0 ovn_metadata_agent[28608]: 2025-10-09 16:39:42.378 28727 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-cf3aa351-4d26-41f3-8cb5-1ff2d3d995c8 deleted. remove_netns /usr/lib/python3.12/site-packages/neutron/privileged/agent/linux/ip_lib.py:603
Oct 09 16:39:42 compute-0 ovn_metadata_agent[28608]: 2025-10-09 16:39:42.378 28727 DEBUG oslo.privsep.daemon [-] privsep: reply[30e257fc-e64b-42a2-9751-1b94baa50459]: (4, None) _call_back /usr/lib/python3.12/site-packages/oslo_privsep/daemon.py:515
Oct 09 16:39:42 compute-0 ovn_metadata_agent[28608]: 2025-10-09 16:39:42.378 28613 INFO neutron.agent.ovn.metadata.agent [-] Port 0060f48c-30fd-4d98-810c-c9953d57fbb2 in datapath cf3aa351-4d26-41f3-8cb5-1ff2d3d995c8 unbound from our chassis
Oct 09 16:39:42 compute-0 ovn_metadata_agent[28608]: 2025-10-09 16:39:42.379 28613 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network cf3aa351-4d26-41f3-8cb5-1ff2d3d995c8, tearing the namespace down if needed _get_provision_params /usr/lib/python3.12/site-packages/neutron/agent/ovn/metadata/agent.py:756
Oct 09 16:39:42 compute-0 systemd[1]: run-netns-ovnmeta\x2dcf3aa351\x2d4d26\x2d41f3\x2d8cb5\x2d1ff2d3d995c8.mount: Deactivated successfully.
Oct 09 16:39:42 compute-0 ovn_metadata_agent[28608]: 2025-10-09 16:39:42.380 139687 DEBUG oslo.privsep.daemon [-] privsep: reply[bb6084d9-8a10-4317-a22d-60ddc6605cf8]: (4, False) _call_back /usr/lib/python3.12/site-packages/oslo_privsep/daemon.py:515
Oct 09 16:39:42 compute-0 ovn_metadata_agent[28608]: 2025-10-09 16:39:42.380 28613 INFO neutron.agent.ovn.metadata.agent [-] Port 0060f48c-30fd-4d98-810c-c9953d57fbb2 in datapath cf3aa351-4d26-41f3-8cb5-1ff2d3d995c8 unbound from our chassis
Oct 09 16:39:42 compute-0 ovn_metadata_agent[28608]: 2025-10-09 16:39:42.381 28613 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network cf3aa351-4d26-41f3-8cb5-1ff2d3d995c8, tearing the namespace down if needed _get_provision_params /usr/lib/python3.12/site-packages/neutron/agent/ovn/metadata/agent.py:756
Oct 09 16:39:42 compute-0 ovn_metadata_agent[28608]: 2025-10-09 16:39:42.381 139687 DEBUG oslo.privsep.daemon [-] privsep: reply[ae8f70bd-e211-4eb8-80e6-726658115e1f]: (4, False) _call_back /usr/lib/python3.12/site-packages/oslo_privsep/daemon.py:515
Oct 09 16:39:42 compute-0 nova_compute[117331]: 2025-10-09 16:39:42.381 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 09 16:39:42 compute-0 nova_compute[117331]: 2025-10-09 16:39:42.470 2 DEBUG nova.compute.manager [req-18e4bdc4-bea7-415c-aeb1-1f2f678139d1 req-7662a1e9-279d-4fa7-ba3f-ef734cd5d78f ee15961ad2be4bada6c106ac4f69f422 c076c42271004e7aa86e84416e33f826 - - default default] [instance: 5963f55d-5e3f-4b07-86ef-554333267b8f] Received event network-vif-unplugged-0060f48c-30fd-4d98-810c-c9953d57fbb2 external_instance_event /usr/lib/python3.12/site-packages/nova/compute/manager.py:11812
Oct 09 16:39:42 compute-0 nova_compute[117331]: 2025-10-09 16:39:42.470 2 DEBUG oslo_concurrency.lockutils [req-18e4bdc4-bea7-415c-aeb1-1f2f678139d1 req-7662a1e9-279d-4fa7-ba3f-ef734cd5d78f ee15961ad2be4bada6c106ac4f69f422 c076c42271004e7aa86e84416e33f826 - - default default] Acquiring lock "5963f55d-5e3f-4b07-86ef-554333267b8f-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:405
Oct 09 16:39:42 compute-0 nova_compute[117331]: 2025-10-09 16:39:42.470 2 DEBUG oslo_concurrency.lockutils [req-18e4bdc4-bea7-415c-aeb1-1f2f678139d1 req-7662a1e9-279d-4fa7-ba3f-ef734cd5d78f ee15961ad2be4bada6c106ac4f69f422 c076c42271004e7aa86e84416e33f826 - - default default] Lock "5963f55d-5e3f-4b07-86ef-554333267b8f-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:410
Oct 09 16:39:42 compute-0 nova_compute[117331]: 2025-10-09 16:39:42.471 2 DEBUG oslo_concurrency.lockutils [req-18e4bdc4-bea7-415c-aeb1-1f2f678139d1 req-7662a1e9-279d-4fa7-ba3f-ef734cd5d78f ee15961ad2be4bada6c106ac4f69f422 c076c42271004e7aa86e84416e33f826 - - default default] Lock "5963f55d-5e3f-4b07-86ef-554333267b8f-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:424
Oct 09 16:39:42 compute-0 nova_compute[117331]: 2025-10-09 16:39:42.472 2 DEBUG nova.compute.manager [req-18e4bdc4-bea7-415c-aeb1-1f2f678139d1 req-7662a1e9-279d-4fa7-ba3f-ef734cd5d78f ee15961ad2be4bada6c106ac4f69f422 c076c42271004e7aa86e84416e33f826 - - default default] [instance: 5963f55d-5e3f-4b07-86ef-554333267b8f] No waiting events found dispatching network-vif-unplugged-0060f48c-30fd-4d98-810c-c9953d57fbb2 pop_instance_event /usr/lib/python3.12/site-packages/nova/compute/manager.py:344
Oct 09 16:39:42 compute-0 nova_compute[117331]: 2025-10-09 16:39:42.472 2 DEBUG nova.compute.manager [req-18e4bdc4-bea7-415c-aeb1-1f2f678139d1 req-7662a1e9-279d-4fa7-ba3f-ef734cd5d78f ee15961ad2be4bada6c106ac4f69f422 c076c42271004e7aa86e84416e33f826 - - default default] [instance: 5963f55d-5e3f-4b07-86ef-554333267b8f] Received event network-vif-unplugged-0060f48c-30fd-4d98-810c-c9953d57fbb2 for instance with task_state migrating. _process_instance_event /usr/lib/python3.12/site-packages/nova/compute/manager.py:11590
Oct 09 16:39:42 compute-0 nova_compute[117331]: 2025-10-09 16:39:42.687 2 DEBUG nova.network.neutron [None req-b13a42ce-7734-4c51-aa6a-4687cb3b7ca3 005258eefa5345409efe13f6a1de22ab c076c42271004e7aa86e84416e33f826 - - default default] Activated binding for port 0060f48c-30fd-4d98-810c-c9953d57fbb2 and host compute-1.ctlplane.example.com migrate_instance_start /usr/lib/python3.12/site-packages/nova/network/neutron.py:3241
Oct 09 16:39:42 compute-0 nova_compute[117331]: 2025-10-09 16:39:42.687 2 DEBUG nova.compute.manager [None req-b13a42ce-7734-4c51-aa6a-4687cb3b7ca3 005258eefa5345409efe13f6a1de22ab c076c42271004e7aa86e84416e33f826 - - default default] [instance: 5963f55d-5e3f-4b07-86ef-554333267b8f] Calling driver.post_live_migration_at_source with original source VIFs from migrate_data: [{"id": "0060f48c-30fd-4d98-810c-c9953d57fbb2", "address": "fa:16:3e:36:08:54", "network": {"id": "cf3aa351-4d26-41f3-8cb5-1ff2d3d995c8", "bridge": "br-int", "label": "tempest-TestExecuteWorkloadStabilizationStrategy-105109230-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "5b6d650a71ca47d58b10f5fd874c4898", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap0060f48c-30", "ovs_interfaceid": "0060f48c-30fd-4d98-810c-c9953d57fbb2", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] _post_live_migration /usr/lib/python3.12/site-packages/nova/compute/manager.py:10059
Oct 09 16:39:42 compute-0 nova_compute[117331]: 2025-10-09 16:39:42.688 2 DEBUG nova.virt.libvirt.vif [None req-b13a42ce-7734-4c51-aa6a-4687cb3b7ca3 005258eefa5345409efe13f6a1de22ab c076c42271004e7aa86e84416e33f826 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,compute_id=1,config_drive='True',created_at=2025-10-09T16:37:50Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description=None,display_name='tempest-TestExecuteWorkloadStabilizationStrategy-server-1807729568',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testexecuteworkloadstabilizationstrategy-server-1807729',id=28,image_ref='b7d6e0af-25e4-4227-9dc6-43143898ceee',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-10-09T16:38:07Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=1,progress=0,project_id='a60acfe52e4b4b7f912654a59f0978b7',ramdisk_id='',reservation_id='r-hkns0u0y',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='admin,member,reader,manager',image_base_image_ref='b7d6e0af-25e4-4227-9dc6-43143898ceee',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw
_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestExecuteWorkloadStabilizationStrategy-1000021673',owner_user_name='tempest-TestExecuteWorkloadStabilizationStrategy-1000021673-project-admin'},tags=<?>,task_state='migrating',terminated_at=None,trusted_certs=<?>,updated_at=2025-10-09T16:38:51Z,user_data=None,user_id='685b4924c5a04af7ae6f4a328bb50f14',uuid=5963f55d-5e3f-4b07-86ef-554333267b8f,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "0060f48c-30fd-4d98-810c-c9953d57fbb2", "address": "fa:16:3e:36:08:54", "network": {"id": "cf3aa351-4d26-41f3-8cb5-1ff2d3d995c8", "bridge": "br-int", "label": "tempest-TestExecuteWorkloadStabilizationStrategy-105109230-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "5b6d650a71ca47d58b10f5fd874c4898", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap0060f48c-30", "ovs_interfaceid": "0060f48c-30fd-4d98-810c-c9953d57fbb2", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.12/site-packages/nova/virt/libvirt/vif.py:839
Oct 09 16:39:42 compute-0 nova_compute[117331]: 2025-10-09 16:39:42.688 2 DEBUG nova.network.os_vif_util [None req-b13a42ce-7734-4c51-aa6a-4687cb3b7ca3 005258eefa5345409efe13f6a1de22ab c076c42271004e7aa86e84416e33f826 - - default default] Converting VIF {"id": "0060f48c-30fd-4d98-810c-c9953d57fbb2", "address": "fa:16:3e:36:08:54", "network": {"id": "cf3aa351-4d26-41f3-8cb5-1ff2d3d995c8", "bridge": "br-int", "label": "tempest-TestExecuteWorkloadStabilizationStrategy-105109230-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "5b6d650a71ca47d58b10f5fd874c4898", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap0060f48c-30", "ovs_interfaceid": "0060f48c-30fd-4d98-810c-c9953d57fbb2", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.12/site-packages/nova/network/os_vif_util.py:511
Oct 09 16:39:42 compute-0 nova_compute[117331]: 2025-10-09 16:39:42.689 2 DEBUG nova.network.os_vif_util [None req-b13a42ce-7734-4c51-aa6a-4687cb3b7ca3 005258eefa5345409efe13f6a1de22ab c076c42271004e7aa86e84416e33f826 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:36:08:54,bridge_name='br-int',has_traffic_filtering=True,id=0060f48c-30fd-4d98-810c-c9953d57fbb2,network=Network(cf3aa351-4d26-41f3-8cb5-1ff2d3d995c8),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap0060f48c-30') nova_to_osvif_vif /usr/lib/python3.12/site-packages/nova/network/os_vif_util.py:548
Oct 09 16:39:42 compute-0 nova_compute[117331]: 2025-10-09 16:39:42.689 2 DEBUG os_vif [None req-b13a42ce-7734-4c51-aa6a-4687cb3b7ca3 005258eefa5345409efe13f6a1de22ab c076c42271004e7aa86e84416e33f826 - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:36:08:54,bridge_name='br-int',has_traffic_filtering=True,id=0060f48c-30fd-4d98-810c-c9953d57fbb2,network=Network(cf3aa351-4d26-41f3-8cb5-1ff2d3d995c8),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap0060f48c-30') unplug /usr/lib/python3.12/site-packages/os_vif/__init__.py:109
Oct 09 16:39:42 compute-0 nova_compute[117331]: 2025-10-09 16:39:42.691 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 22 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 09 16:39:42 compute-0 nova_compute[117331]: 2025-10-09 16:39:42.691 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap0060f48c-30, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.12/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 09 16:39:42 compute-0 nova_compute[117331]: 2025-10-09 16:39:42.692 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 09 16:39:42 compute-0 nova_compute[117331]: 2025-10-09 16:39:42.694 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 09 16:39:42 compute-0 nova_compute[117331]: 2025-10-09 16:39:42.695 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 22 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 09 16:39:42 compute-0 nova_compute[117331]: 2025-10-09 16:39:42.695 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbDestroyCommand(_result=None, table=QoS, record=cc13877d-48da-4cfc-af44-8d56694e6945) do_commit /usr/lib/python3.12/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 09 16:39:42 compute-0 nova_compute[117331]: 2025-10-09 16:39:42.695 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 09 16:39:42 compute-0 nova_compute[117331]: 2025-10-09 16:39:42.696 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 09 16:39:42 compute-0 nova_compute[117331]: 2025-10-09 16:39:42.698 2 INFO os_vif [None req-b13a42ce-7734-4c51-aa6a-4687cb3b7ca3 005258eefa5345409efe13f6a1de22ab c076c42271004e7aa86e84416e33f826 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:36:08:54,bridge_name='br-int',has_traffic_filtering=True,id=0060f48c-30fd-4d98-810c-c9953d57fbb2,network=Network(cf3aa351-4d26-41f3-8cb5-1ff2d3d995c8),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap0060f48c-30')
Oct 09 16:39:42 compute-0 nova_compute[117331]: 2025-10-09 16:39:42.698 2 DEBUG oslo_concurrency.lockutils [None req-b13a42ce-7734-4c51-aa6a-4687cb3b7ca3 005258eefa5345409efe13f6a1de22ab c076c42271004e7aa86e84416e33f826 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.free_pci_device_allocations_for_instance" inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:405
Oct 09 16:39:42 compute-0 nova_compute[117331]: 2025-10-09 16:39:42.699 2 DEBUG oslo_concurrency.lockutils [None req-b13a42ce-7734-4c51-aa6a-4687cb3b7ca3 005258eefa5345409efe13f6a1de22ab c076c42271004e7aa86e84416e33f826 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.free_pci_device_allocations_for_instance" :: waited 0.000s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:410
Oct 09 16:39:42 compute-0 nova_compute[117331]: 2025-10-09 16:39:42.699 2 DEBUG oslo_concurrency.lockutils [None req-b13a42ce-7734-4c51-aa6a-4687cb3b7ca3 005258eefa5345409efe13f6a1de22ab c076c42271004e7aa86e84416e33f826 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.free_pci_device_allocations_for_instance" :: held 0.000s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:424
Oct 09 16:39:42 compute-0 nova_compute[117331]: 2025-10-09 16:39:42.699 2 DEBUG nova.compute.manager [None req-b13a42ce-7734-4c51-aa6a-4687cb3b7ca3 005258eefa5345409efe13f6a1de22ab c076c42271004e7aa86e84416e33f826 - - default default] [instance: 5963f55d-5e3f-4b07-86ef-554333267b8f] Calling driver.cleanup from _post_live_migration _post_live_migration /usr/lib/python3.12/site-packages/nova/compute/manager.py:10082
Oct 09 16:39:42 compute-0 nova_compute[117331]: 2025-10-09 16:39:42.699 2 INFO nova.virt.libvirt.driver [None req-b13a42ce-7734-4c51-aa6a-4687cb3b7ca3 005258eefa5345409efe13f6a1de22ab c076c42271004e7aa86e84416e33f826 - - default default] [instance: 5963f55d-5e3f-4b07-86ef-554333267b8f] Deleting instance files /var/lib/nova/instances/5963f55d-5e3f-4b07-86ef-554333267b8f_del
Oct 09 16:39:42 compute-0 nova_compute[117331]: 2025-10-09 16:39:42.700 2 INFO nova.virt.libvirt.driver [None req-b13a42ce-7734-4c51-aa6a-4687cb3b7ca3 005258eefa5345409efe13f6a1de22ab c076c42271004e7aa86e84416e33f826 - - default default] [instance: 5963f55d-5e3f-4b07-86ef-554333267b8f] Deletion of /var/lib/nova/instances/5963f55d-5e3f-4b07-86ef-554333267b8f_del complete
Oct 09 16:39:43 compute-0 nova_compute[117331]: 2025-10-09 16:39:43.307 2 DEBUG oslo_service.periodic_task [None req-92a13bed-c081-4c7b-87f3-bafebefb79f2 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.12/site-packages/oslo_service/periodic_task.py:210
Oct 09 16:39:43 compute-0 sshd-session[153485]: Invalid user a from 124.60.67.43 port 36072
Oct 09 16:39:44 compute-0 nova_compute[117331]: 2025-10-09 16:39:44.306 2 DEBUG oslo_service.periodic_task [None req-92a13bed-c081-4c7b-87f3-bafebefb79f2 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.12/site-packages/oslo_service/periodic_task.py:210
Oct 09 16:39:44 compute-0 nova_compute[117331]: 2025-10-09 16:39:44.526 2 DEBUG nova.compute.manager [req-5126ae81-c3b2-42f6-9b43-a7953627428c req-8f1d4ba5-93ad-4bcd-a935-1e4fcf55bd59 ee15961ad2be4bada6c106ac4f69f422 c076c42271004e7aa86e84416e33f826 - - default default] [instance: 5963f55d-5e3f-4b07-86ef-554333267b8f] Received event network-vif-plugged-0060f48c-30fd-4d98-810c-c9953d57fbb2 external_instance_event /usr/lib/python3.12/site-packages/nova/compute/manager.py:11812
Oct 09 16:39:44 compute-0 nova_compute[117331]: 2025-10-09 16:39:44.526 2 DEBUG oslo_concurrency.lockutils [req-5126ae81-c3b2-42f6-9b43-a7953627428c req-8f1d4ba5-93ad-4bcd-a935-1e4fcf55bd59 ee15961ad2be4bada6c106ac4f69f422 c076c42271004e7aa86e84416e33f826 - - default default] Acquiring lock "5963f55d-5e3f-4b07-86ef-554333267b8f-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:405
Oct 09 16:39:44 compute-0 nova_compute[117331]: 2025-10-09 16:39:44.527 2 DEBUG oslo_concurrency.lockutils [req-5126ae81-c3b2-42f6-9b43-a7953627428c req-8f1d4ba5-93ad-4bcd-a935-1e4fcf55bd59 ee15961ad2be4bada6c106ac4f69f422 c076c42271004e7aa86e84416e33f826 - - default default] Lock "5963f55d-5e3f-4b07-86ef-554333267b8f-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:410
Oct 09 16:39:44 compute-0 nova_compute[117331]: 2025-10-09 16:39:44.527 2 DEBUG oslo_concurrency.lockutils [req-5126ae81-c3b2-42f6-9b43-a7953627428c req-8f1d4ba5-93ad-4bcd-a935-1e4fcf55bd59 ee15961ad2be4bada6c106ac4f69f422 c076c42271004e7aa86e84416e33f826 - - default default] Lock "5963f55d-5e3f-4b07-86ef-554333267b8f-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:424
Oct 09 16:39:44 compute-0 nova_compute[117331]: 2025-10-09 16:39:44.527 2 DEBUG nova.compute.manager [req-5126ae81-c3b2-42f6-9b43-a7953627428c req-8f1d4ba5-93ad-4bcd-a935-1e4fcf55bd59 ee15961ad2be4bada6c106ac4f69f422 c076c42271004e7aa86e84416e33f826 - - default default] [instance: 5963f55d-5e3f-4b07-86ef-554333267b8f] No waiting events found dispatching network-vif-plugged-0060f48c-30fd-4d98-810c-c9953d57fbb2 pop_instance_event /usr/lib/python3.12/site-packages/nova/compute/manager.py:344
Oct 09 16:39:44 compute-0 nova_compute[117331]: 2025-10-09 16:39:44.527 2 WARNING nova.compute.manager [req-5126ae81-c3b2-42f6-9b43-a7953627428c req-8f1d4ba5-93ad-4bcd-a935-1e4fcf55bd59 ee15961ad2be4bada6c106ac4f69f422 c076c42271004e7aa86e84416e33f826 - - default default] [instance: 5963f55d-5e3f-4b07-86ef-554333267b8f] Received unexpected event network-vif-plugged-0060f48c-30fd-4d98-810c-c9953d57fbb2 for instance with vm_state active and task_state migrating.
Oct 09 16:39:44 compute-0 nova_compute[117331]: 2025-10-09 16:39:44.527 2 DEBUG nova.compute.manager [req-5126ae81-c3b2-42f6-9b43-a7953627428c req-8f1d4ba5-93ad-4bcd-a935-1e4fcf55bd59 ee15961ad2be4bada6c106ac4f69f422 c076c42271004e7aa86e84416e33f826 - - default default] [instance: 5963f55d-5e3f-4b07-86ef-554333267b8f] Received event network-vif-unplugged-0060f48c-30fd-4d98-810c-c9953d57fbb2 external_instance_event /usr/lib/python3.12/site-packages/nova/compute/manager.py:11812
Oct 09 16:39:44 compute-0 nova_compute[117331]: 2025-10-09 16:39:44.528 2 DEBUG oslo_concurrency.lockutils [req-5126ae81-c3b2-42f6-9b43-a7953627428c req-8f1d4ba5-93ad-4bcd-a935-1e4fcf55bd59 ee15961ad2be4bada6c106ac4f69f422 c076c42271004e7aa86e84416e33f826 - - default default] Acquiring lock "5963f55d-5e3f-4b07-86ef-554333267b8f-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:405
Oct 09 16:39:44 compute-0 nova_compute[117331]: 2025-10-09 16:39:44.528 2 DEBUG oslo_concurrency.lockutils [req-5126ae81-c3b2-42f6-9b43-a7953627428c req-8f1d4ba5-93ad-4bcd-a935-1e4fcf55bd59 ee15961ad2be4bada6c106ac4f69f422 c076c42271004e7aa86e84416e33f826 - - default default] Lock "5963f55d-5e3f-4b07-86ef-554333267b8f-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:410
Oct 09 16:39:44 compute-0 nova_compute[117331]: 2025-10-09 16:39:44.528 2 DEBUG oslo_concurrency.lockutils [req-5126ae81-c3b2-42f6-9b43-a7953627428c req-8f1d4ba5-93ad-4bcd-a935-1e4fcf55bd59 ee15961ad2be4bada6c106ac4f69f422 c076c42271004e7aa86e84416e33f826 - - default default] Lock "5963f55d-5e3f-4b07-86ef-554333267b8f-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:424
Oct 09 16:39:44 compute-0 nova_compute[117331]: 2025-10-09 16:39:44.528 2 DEBUG nova.compute.manager [req-5126ae81-c3b2-42f6-9b43-a7953627428c req-8f1d4ba5-93ad-4bcd-a935-1e4fcf55bd59 ee15961ad2be4bada6c106ac4f69f422 c076c42271004e7aa86e84416e33f826 - - default default] [instance: 5963f55d-5e3f-4b07-86ef-554333267b8f] No waiting events found dispatching network-vif-unplugged-0060f48c-30fd-4d98-810c-c9953d57fbb2 pop_instance_event /usr/lib/python3.12/site-packages/nova/compute/manager.py:344
Oct 09 16:39:44 compute-0 nova_compute[117331]: 2025-10-09 16:39:44.528 2 DEBUG nova.compute.manager [req-5126ae81-c3b2-42f6-9b43-a7953627428c req-8f1d4ba5-93ad-4bcd-a935-1e4fcf55bd59 ee15961ad2be4bada6c106ac4f69f422 c076c42271004e7aa86e84416e33f826 - - default default] [instance: 5963f55d-5e3f-4b07-86ef-554333267b8f] Received event network-vif-unplugged-0060f48c-30fd-4d98-810c-c9953d57fbb2 for instance with task_state migrating. _process_instance_event /usr/lib/python3.12/site-packages/nova/compute/manager.py:11590
Oct 09 16:39:44 compute-0 nova_compute[117331]: 2025-10-09 16:39:44.529 2 DEBUG nova.compute.manager [req-5126ae81-c3b2-42f6-9b43-a7953627428c req-8f1d4ba5-93ad-4bcd-a935-1e4fcf55bd59 ee15961ad2be4bada6c106ac4f69f422 c076c42271004e7aa86e84416e33f826 - - default default] [instance: 5963f55d-5e3f-4b07-86ef-554333267b8f] Received event network-vif-plugged-0060f48c-30fd-4d98-810c-c9953d57fbb2 external_instance_event /usr/lib/python3.12/site-packages/nova/compute/manager.py:11812
Oct 09 16:39:44 compute-0 nova_compute[117331]: 2025-10-09 16:39:44.529 2 DEBUG oslo_concurrency.lockutils [req-5126ae81-c3b2-42f6-9b43-a7953627428c req-8f1d4ba5-93ad-4bcd-a935-1e4fcf55bd59 ee15961ad2be4bada6c106ac4f69f422 c076c42271004e7aa86e84416e33f826 - - default default] Acquiring lock "5963f55d-5e3f-4b07-86ef-554333267b8f-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:405
Oct 09 16:39:44 compute-0 nova_compute[117331]: 2025-10-09 16:39:44.529 2 DEBUG oslo_concurrency.lockutils [req-5126ae81-c3b2-42f6-9b43-a7953627428c req-8f1d4ba5-93ad-4bcd-a935-1e4fcf55bd59 ee15961ad2be4bada6c106ac4f69f422 c076c42271004e7aa86e84416e33f826 - - default default] Lock "5963f55d-5e3f-4b07-86ef-554333267b8f-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:410
Oct 09 16:39:44 compute-0 nova_compute[117331]: 2025-10-09 16:39:44.529 2 DEBUG oslo_concurrency.lockutils [req-5126ae81-c3b2-42f6-9b43-a7953627428c req-8f1d4ba5-93ad-4bcd-a935-1e4fcf55bd59 ee15961ad2be4bada6c106ac4f69f422 c076c42271004e7aa86e84416e33f826 - - default default] Lock "5963f55d-5e3f-4b07-86ef-554333267b8f-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:424
Oct 09 16:39:44 compute-0 nova_compute[117331]: 2025-10-09 16:39:44.530 2 DEBUG nova.compute.manager [req-5126ae81-c3b2-42f6-9b43-a7953627428c req-8f1d4ba5-93ad-4bcd-a935-1e4fcf55bd59 ee15961ad2be4bada6c106ac4f69f422 c076c42271004e7aa86e84416e33f826 - - default default] [instance: 5963f55d-5e3f-4b07-86ef-554333267b8f] No waiting events found dispatching network-vif-plugged-0060f48c-30fd-4d98-810c-c9953d57fbb2 pop_instance_event /usr/lib/python3.12/site-packages/nova/compute/manager.py:344
Oct 09 16:39:44 compute-0 nova_compute[117331]: 2025-10-09 16:39:44.530 2 WARNING nova.compute.manager [req-5126ae81-c3b2-42f6-9b43-a7953627428c req-8f1d4ba5-93ad-4bcd-a935-1e4fcf55bd59 ee15961ad2be4bada6c106ac4f69f422 c076c42271004e7aa86e84416e33f826 - - default default] [instance: 5963f55d-5e3f-4b07-86ef-554333267b8f] Received unexpected event network-vif-plugged-0060f48c-30fd-4d98-810c-c9953d57fbb2 for instance with vm_state active and task_state migrating.
Oct 09 16:39:44 compute-0 nova_compute[117331]: 2025-10-09 16:39:44.530 2 DEBUG nova.compute.manager [req-5126ae81-c3b2-42f6-9b43-a7953627428c req-8f1d4ba5-93ad-4bcd-a935-1e4fcf55bd59 ee15961ad2be4bada6c106ac4f69f422 c076c42271004e7aa86e84416e33f826 - - default default] [instance: 5963f55d-5e3f-4b07-86ef-554333267b8f] Received event network-vif-unplugged-0060f48c-30fd-4d98-810c-c9953d57fbb2 external_instance_event /usr/lib/python3.12/site-packages/nova/compute/manager.py:11812
Oct 09 16:39:44 compute-0 nova_compute[117331]: 2025-10-09 16:39:44.530 2 DEBUG oslo_concurrency.lockutils [req-5126ae81-c3b2-42f6-9b43-a7953627428c req-8f1d4ba5-93ad-4bcd-a935-1e4fcf55bd59 ee15961ad2be4bada6c106ac4f69f422 c076c42271004e7aa86e84416e33f826 - - default default] Acquiring lock "5963f55d-5e3f-4b07-86ef-554333267b8f-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:405
Oct 09 16:39:44 compute-0 nova_compute[117331]: 2025-10-09 16:39:44.530 2 DEBUG oslo_concurrency.lockutils [req-5126ae81-c3b2-42f6-9b43-a7953627428c req-8f1d4ba5-93ad-4bcd-a935-1e4fcf55bd59 ee15961ad2be4bada6c106ac4f69f422 c076c42271004e7aa86e84416e33f826 - - default default] Lock "5963f55d-5e3f-4b07-86ef-554333267b8f-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:410
Oct 09 16:39:44 compute-0 nova_compute[117331]: 2025-10-09 16:39:44.530 2 DEBUG oslo_concurrency.lockutils [req-5126ae81-c3b2-42f6-9b43-a7953627428c req-8f1d4ba5-93ad-4bcd-a935-1e4fcf55bd59 ee15961ad2be4bada6c106ac4f69f422 c076c42271004e7aa86e84416e33f826 - - default default] Lock "5963f55d-5e3f-4b07-86ef-554333267b8f-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:424
Oct 09 16:39:44 compute-0 nova_compute[117331]: 2025-10-09 16:39:44.531 2 DEBUG nova.compute.manager [req-5126ae81-c3b2-42f6-9b43-a7953627428c req-8f1d4ba5-93ad-4bcd-a935-1e4fcf55bd59 ee15961ad2be4bada6c106ac4f69f422 c076c42271004e7aa86e84416e33f826 - - default default] [instance: 5963f55d-5e3f-4b07-86ef-554333267b8f] No waiting events found dispatching network-vif-unplugged-0060f48c-30fd-4d98-810c-c9953d57fbb2 pop_instance_event /usr/lib/python3.12/site-packages/nova/compute/manager.py:344
Oct 09 16:39:44 compute-0 nova_compute[117331]: 2025-10-09 16:39:44.531 2 DEBUG nova.compute.manager [req-5126ae81-c3b2-42f6-9b43-a7953627428c req-8f1d4ba5-93ad-4bcd-a935-1e4fcf55bd59 ee15961ad2be4bada6c106ac4f69f422 c076c42271004e7aa86e84416e33f826 - - default default] [instance: 5963f55d-5e3f-4b07-86ef-554333267b8f] Received event network-vif-unplugged-0060f48c-30fd-4d98-810c-c9953d57fbb2 for instance with task_state migrating. _process_instance_event /usr/lib/python3.12/site-packages/nova/compute/manager.py:11590
Oct 09 16:39:44 compute-0 nova_compute[117331]: 2025-10-09 16:39:44.531 2 DEBUG nova.compute.manager [req-5126ae81-c3b2-42f6-9b43-a7953627428c req-8f1d4ba5-93ad-4bcd-a935-1e4fcf55bd59 ee15961ad2be4bada6c106ac4f69f422 c076c42271004e7aa86e84416e33f826 - - default default] [instance: 5963f55d-5e3f-4b07-86ef-554333267b8f] Received event network-vif-unplugged-0060f48c-30fd-4d98-810c-c9953d57fbb2 external_instance_event /usr/lib/python3.12/site-packages/nova/compute/manager.py:11812
Oct 09 16:39:44 compute-0 nova_compute[117331]: 2025-10-09 16:39:44.531 2 DEBUG oslo_concurrency.lockutils [req-5126ae81-c3b2-42f6-9b43-a7953627428c req-8f1d4ba5-93ad-4bcd-a935-1e4fcf55bd59 ee15961ad2be4bada6c106ac4f69f422 c076c42271004e7aa86e84416e33f826 - - default default] Acquiring lock "5963f55d-5e3f-4b07-86ef-554333267b8f-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:405
Oct 09 16:39:44 compute-0 nova_compute[117331]: 2025-10-09 16:39:44.531 2 DEBUG oslo_concurrency.lockutils [req-5126ae81-c3b2-42f6-9b43-a7953627428c req-8f1d4ba5-93ad-4bcd-a935-1e4fcf55bd59 ee15961ad2be4bada6c106ac4f69f422 c076c42271004e7aa86e84416e33f826 - - default default] Lock "5963f55d-5e3f-4b07-86ef-554333267b8f-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:410
Oct 09 16:39:44 compute-0 nova_compute[117331]: 2025-10-09 16:39:44.532 2 DEBUG oslo_concurrency.lockutils [req-5126ae81-c3b2-42f6-9b43-a7953627428c req-8f1d4ba5-93ad-4bcd-a935-1e4fcf55bd59 ee15961ad2be4bada6c106ac4f69f422 c076c42271004e7aa86e84416e33f826 - - default default] Lock "5963f55d-5e3f-4b07-86ef-554333267b8f-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:424
Oct 09 16:39:44 compute-0 nova_compute[117331]: 2025-10-09 16:39:44.532 2 DEBUG nova.compute.manager [req-5126ae81-c3b2-42f6-9b43-a7953627428c req-8f1d4ba5-93ad-4bcd-a935-1e4fcf55bd59 ee15961ad2be4bada6c106ac4f69f422 c076c42271004e7aa86e84416e33f826 - - default default] [instance: 5963f55d-5e3f-4b07-86ef-554333267b8f] No waiting events found dispatching network-vif-unplugged-0060f48c-30fd-4d98-810c-c9953d57fbb2 pop_instance_event /usr/lib/python3.12/site-packages/nova/compute/manager.py:344
Oct 09 16:39:44 compute-0 nova_compute[117331]: 2025-10-09 16:39:44.532 2 DEBUG nova.compute.manager [req-5126ae81-c3b2-42f6-9b43-a7953627428c req-8f1d4ba5-93ad-4bcd-a935-1e4fcf55bd59 ee15961ad2be4bada6c106ac4f69f422 c076c42271004e7aa86e84416e33f826 - - default default] [instance: 5963f55d-5e3f-4b07-86ef-554333267b8f] Received event network-vif-unplugged-0060f48c-30fd-4d98-810c-c9953d57fbb2 for instance with task_state migrating. _process_instance_event /usr/lib/python3.12/site-packages/nova/compute/manager.py:11590
Oct 09 16:39:44 compute-0 nova_compute[117331]: 2025-10-09 16:39:44.532 2 DEBUG nova.compute.manager [req-5126ae81-c3b2-42f6-9b43-a7953627428c req-8f1d4ba5-93ad-4bcd-a935-1e4fcf55bd59 ee15961ad2be4bada6c106ac4f69f422 c076c42271004e7aa86e84416e33f826 - - default default] [instance: 5963f55d-5e3f-4b07-86ef-554333267b8f] Received event network-vif-plugged-0060f48c-30fd-4d98-810c-c9953d57fbb2 external_instance_event /usr/lib/python3.12/site-packages/nova/compute/manager.py:11812
Oct 09 16:39:44 compute-0 nova_compute[117331]: 2025-10-09 16:39:44.532 2 DEBUG oslo_concurrency.lockutils [req-5126ae81-c3b2-42f6-9b43-a7953627428c req-8f1d4ba5-93ad-4bcd-a935-1e4fcf55bd59 ee15961ad2be4bada6c106ac4f69f422 c076c42271004e7aa86e84416e33f826 - - default default] Acquiring lock "5963f55d-5e3f-4b07-86ef-554333267b8f-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:405
Oct 09 16:39:44 compute-0 nova_compute[117331]: 2025-10-09 16:39:44.532 2 DEBUG oslo_concurrency.lockutils [req-5126ae81-c3b2-42f6-9b43-a7953627428c req-8f1d4ba5-93ad-4bcd-a935-1e4fcf55bd59 ee15961ad2be4bada6c106ac4f69f422 c076c42271004e7aa86e84416e33f826 - - default default] Lock "5963f55d-5e3f-4b07-86ef-554333267b8f-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:410
Oct 09 16:39:44 compute-0 nova_compute[117331]: 2025-10-09 16:39:44.533 2 DEBUG oslo_concurrency.lockutils [req-5126ae81-c3b2-42f6-9b43-a7953627428c req-8f1d4ba5-93ad-4bcd-a935-1e4fcf55bd59 ee15961ad2be4bada6c106ac4f69f422 c076c42271004e7aa86e84416e33f826 - - default default] Lock "5963f55d-5e3f-4b07-86ef-554333267b8f-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:424
Oct 09 16:39:44 compute-0 nova_compute[117331]: 2025-10-09 16:39:44.533 2 DEBUG nova.compute.manager [req-5126ae81-c3b2-42f6-9b43-a7953627428c req-8f1d4ba5-93ad-4bcd-a935-1e4fcf55bd59 ee15961ad2be4bada6c106ac4f69f422 c076c42271004e7aa86e84416e33f826 - - default default] [instance: 5963f55d-5e3f-4b07-86ef-554333267b8f] No waiting events found dispatching network-vif-plugged-0060f48c-30fd-4d98-810c-c9953d57fbb2 pop_instance_event /usr/lib/python3.12/site-packages/nova/compute/manager.py:344
Oct 09 16:39:44 compute-0 nova_compute[117331]: 2025-10-09 16:39:44.533 2 WARNING nova.compute.manager [req-5126ae81-c3b2-42f6-9b43-a7953627428c req-8f1d4ba5-93ad-4bcd-a935-1e4fcf55bd59 ee15961ad2be4bada6c106ac4f69f422 c076c42271004e7aa86e84416e33f826 - - default default] [instance: 5963f55d-5e3f-4b07-86ef-554333267b8f] Received unexpected event network-vif-plugged-0060f48c-30fd-4d98-810c-c9953d57fbb2 for instance with vm_state active and task_state migrating.
Oct 09 16:39:44 compute-0 nova_compute[117331]: 2025-10-09 16:39:44.533 2 DEBUG nova.compute.manager [req-5126ae81-c3b2-42f6-9b43-a7953627428c req-8f1d4ba5-93ad-4bcd-a935-1e4fcf55bd59 ee15961ad2be4bada6c106ac4f69f422 c076c42271004e7aa86e84416e33f826 - - default default] [instance: 5963f55d-5e3f-4b07-86ef-554333267b8f] Received event network-vif-plugged-0060f48c-30fd-4d98-810c-c9953d57fbb2 external_instance_event /usr/lib/python3.12/site-packages/nova/compute/manager.py:11812
Oct 09 16:39:44 compute-0 nova_compute[117331]: 2025-10-09 16:39:44.533 2 DEBUG oslo_concurrency.lockutils [req-5126ae81-c3b2-42f6-9b43-a7953627428c req-8f1d4ba5-93ad-4bcd-a935-1e4fcf55bd59 ee15961ad2be4bada6c106ac4f69f422 c076c42271004e7aa86e84416e33f826 - - default default] Acquiring lock "5963f55d-5e3f-4b07-86ef-554333267b8f-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:405
Oct 09 16:39:44 compute-0 nova_compute[117331]: 2025-10-09 16:39:44.534 2 DEBUG oslo_concurrency.lockutils [req-5126ae81-c3b2-42f6-9b43-a7953627428c req-8f1d4ba5-93ad-4bcd-a935-1e4fcf55bd59 ee15961ad2be4bada6c106ac4f69f422 c076c42271004e7aa86e84416e33f826 - - default default] Lock "5963f55d-5e3f-4b07-86ef-554333267b8f-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:410
Oct 09 16:39:44 compute-0 nova_compute[117331]: 2025-10-09 16:39:44.534 2 DEBUG oslo_concurrency.lockutils [req-5126ae81-c3b2-42f6-9b43-a7953627428c req-8f1d4ba5-93ad-4bcd-a935-1e4fcf55bd59 ee15961ad2be4bada6c106ac4f69f422 c076c42271004e7aa86e84416e33f826 - - default default] Lock "5963f55d-5e3f-4b07-86ef-554333267b8f-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:424
Oct 09 16:39:44 compute-0 nova_compute[117331]: 2025-10-09 16:39:44.534 2 DEBUG nova.compute.manager [req-5126ae81-c3b2-42f6-9b43-a7953627428c req-8f1d4ba5-93ad-4bcd-a935-1e4fcf55bd59 ee15961ad2be4bada6c106ac4f69f422 c076c42271004e7aa86e84416e33f826 - - default default] [instance: 5963f55d-5e3f-4b07-86ef-554333267b8f] No waiting events found dispatching network-vif-plugged-0060f48c-30fd-4d98-810c-c9953d57fbb2 pop_instance_event /usr/lib/python3.12/site-packages/nova/compute/manager.py:344
Oct 09 16:39:44 compute-0 nova_compute[117331]: 2025-10-09 16:39:44.534 2 WARNING nova.compute.manager [req-5126ae81-c3b2-42f6-9b43-a7953627428c req-8f1d4ba5-93ad-4bcd-a935-1e4fcf55bd59 ee15961ad2be4bada6c106ac4f69f422 c076c42271004e7aa86e84416e33f826 - - default default] [instance: 5963f55d-5e3f-4b07-86ef-554333267b8f] Received unexpected event network-vif-plugged-0060f48c-30fd-4d98-810c-c9953d57fbb2 for instance with vm_state active and task_state migrating.
Oct 09 16:39:45 compute-0 sshd-session[153485]: pam_unix(sshd:auth): check pass; user unknown
Oct 09 16:39:45 compute-0 sshd-session[153485]: pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=124.60.67.43
Oct 09 16:39:46 compute-0 sshd-session[153485]: Failed password for invalid user a from 124.60.67.43 port 36072 ssh2
Oct 09 16:39:47 compute-0 nova_compute[117331]: 2025-10-09 16:39:47.384 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 09 16:39:47 compute-0 nova_compute[117331]: 2025-10-09 16:39:47.727 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 09 16:39:49 compute-0 sshd-session[153485]: Connection closed by invalid user a 124.60.67.43 port 36072 [preauth]
Oct 09 16:39:51 compute-0 nova_compute[117331]: 2025-10-09 16:39:51.234 2 DEBUG oslo_concurrency.lockutils [None req-b13a42ce-7734-4c51-aa6a-4687cb3b7ca3 005258eefa5345409efe13f6a1de22ab c076c42271004e7aa86e84416e33f826 - - default default] Acquiring lock "5963f55d-5e3f-4b07-86ef-554333267b8f-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:405
Oct 09 16:39:51 compute-0 nova_compute[117331]: 2025-10-09 16:39:51.235 2 DEBUG oslo_concurrency.lockutils [None req-b13a42ce-7734-4c51-aa6a-4687cb3b7ca3 005258eefa5345409efe13f6a1de22ab c076c42271004e7aa86e84416e33f826 - - default default] Lock "5963f55d-5e3f-4b07-86ef-554333267b8f-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.002s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:410
Oct 09 16:39:51 compute-0 nova_compute[117331]: 2025-10-09 16:39:51.236 2 DEBUG oslo_concurrency.lockutils [None req-b13a42ce-7734-4c51-aa6a-4687cb3b7ca3 005258eefa5345409efe13f6a1de22ab c076c42271004e7aa86e84416e33f826 - - default default] Lock "5963f55d-5e3f-4b07-86ef-554333267b8f-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:424
Oct 09 16:39:51 compute-0 nova_compute[117331]: 2025-10-09 16:39:51.747 2 DEBUG oslo_concurrency.lockutils [None req-b13a42ce-7734-4c51-aa6a-4687cb3b7ca3 005258eefa5345409efe13f6a1de22ab c076c42271004e7aa86e84416e33f826 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:405
Oct 09 16:39:51 compute-0 nova_compute[117331]: 2025-10-09 16:39:51.747 2 DEBUG oslo_concurrency.lockutils [None req-b13a42ce-7734-4c51-aa6a-4687cb3b7ca3 005258eefa5345409efe13f6a1de22ab c076c42271004e7aa86e84416e33f826 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:410
Oct 09 16:39:51 compute-0 nova_compute[117331]: 2025-10-09 16:39:51.747 2 DEBUG oslo_concurrency.lockutils [None req-b13a42ce-7734-4c51-aa6a-4687cb3b7ca3 005258eefa5345409efe13f6a1de22ab c076c42271004e7aa86e84416e33f826 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:424
Oct 09 16:39:51 compute-0 nova_compute[117331]: 2025-10-09 16:39:51.748 2 DEBUG nova.compute.resource_tracker [None req-b13a42ce-7734-4c51-aa6a-4687cb3b7ca3 005258eefa5345409efe13f6a1de22ab c076c42271004e7aa86e84416e33f826 - - default default] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.12/site-packages/nova/compute/resource_tracker.py:937
Oct 09 16:39:51 compute-0 podman[153661]: 2025-10-09 16:39:51.865538673 +0000 UTC m=+0.086358655 container health_status 64fa251112d01abb34ec1bb8e22019815ddcaaaa391ce5ec91517147ac5a9994 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, io.openshift.tags=minimal rhel9, build-date=2025-08-20T13:12:41, container_name=openstack_network_exporter, io.openshift.expose-services=, version=9.6, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, maintainer=Red Hat, Inc., config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, distribution-scope=public, release=1755695350, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., url=https://catalog.redhat.com/en/search?searchType=containers, vendor=Red Hat, Inc., architecture=x86_64, io.buildah.version=1.33.7, vcs-type=git, com.redhat.component=ubi9-minimal-container, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, name=ubi9-minimal, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., config_id=edpm, managed_by=edpm_ansible)
Oct 09 16:39:51 compute-0 podman[153662]: 2025-10-09 16:39:51.928946468 +0000 UTC m=+0.141665963 container health_status d615e4f24ff62fbb14be4a63e6da568ec5b22309773c1c66103b6b4de64a8a2f (image=38.102.83.66:5001/podified-master-centos10/openstack-ovn-controller:watcher_latest, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251007, org.label-schema.name=CentOS Stream 10 Base Image, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': '38.102.83.66:5001/podified-master-centos10/openstack-ovn-controller:watcher_latest', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, container_name=ovn_controller, org.label-schema.license=GPLv2, io.buildah.version=1.41.4, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=watcher_latest)
Oct 09 16:39:51 compute-0 nova_compute[117331]: 2025-10-09 16:39:51.941 2 WARNING nova.virt.libvirt.driver [None req-b13a42ce-7734-4c51-aa6a-4687cb3b7ca3 005258eefa5345409efe13f6a1de22ab c076c42271004e7aa86e84416e33f826 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Oct 09 16:39:51 compute-0 nova_compute[117331]: 2025-10-09 16:39:51.942 2 DEBUG oslo_concurrency.processutils [None req-b13a42ce-7734-4c51-aa6a-4687cb3b7ca3 005258eefa5345409efe13f6a1de22ab c076c42271004e7aa86e84416e33f826 - - default default] Running cmd (subprocess): env LANG=C uptime execute /usr/lib/python3.12/site-packages/oslo_concurrency/processutils.py:349
Oct 09 16:39:51 compute-0 nova_compute[117331]: 2025-10-09 16:39:51.985 2 DEBUG oslo_concurrency.processutils [None req-b13a42ce-7734-4c51-aa6a-4687cb3b7ca3 005258eefa5345409efe13f6a1de22ab c076c42271004e7aa86e84416e33f826 - - default default] CMD "env LANG=C uptime" returned: 0 in 0.043s execute /usr/lib/python3.12/site-packages/oslo_concurrency/processutils.py:372
Oct 09 16:39:51 compute-0 nova_compute[117331]: 2025-10-09 16:39:51.986 2 DEBUG nova.compute.resource_tracker [None req-b13a42ce-7734-4c51-aa6a-4687cb3b7ca3 005258eefa5345409efe13f6a1de22ab c076c42271004e7aa86e84416e33f826 - - default default] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=6139MB free_disk=73.24930191040039GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.12/site-packages/nova/compute/resource_tracker.py:1136
Oct 09 16:39:51 compute-0 nova_compute[117331]: 2025-10-09 16:39:51.986 2 DEBUG oslo_concurrency.lockutils [None req-b13a42ce-7734-4c51-aa6a-4687cb3b7ca3 005258eefa5345409efe13f6a1de22ab c076c42271004e7aa86e84416e33f826 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:405
Oct 09 16:39:51 compute-0 nova_compute[117331]: 2025-10-09 16:39:51.986 2 DEBUG oslo_concurrency.lockutils [None req-b13a42ce-7734-4c51-aa6a-4687cb3b7ca3 005258eefa5345409efe13f6a1de22ab c076c42271004e7aa86e84416e33f826 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:410
Oct 09 16:39:52 compute-0 nova_compute[117331]: 2025-10-09 16:39:52.403 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 09 16:39:52 compute-0 nova_compute[117331]: 2025-10-09 16:39:52.730 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 09 16:39:53 compute-0 nova_compute[117331]: 2025-10-09 16:39:53.002 2 DEBUG nova.compute.resource_tracker [None req-b13a42ce-7734-4c51-aa6a-4687cb3b7ca3 005258eefa5345409efe13f6a1de22ab c076c42271004e7aa86e84416e33f826 - - default default] Migration for instance 5963f55d-5e3f-4b07-86ef-554333267b8f refers to another host's instance! _pair_instances_to_migrations /usr/lib/python3.12/site-packages/nova/compute/resource_tracker.py:979
Oct 09 16:39:53 compute-0 sshd-session[153659]: Invalid user nil from 124.60.67.43 port 60470
Oct 09 16:39:53 compute-0 nova_compute[117331]: 2025-10-09 16:39:53.511 2 DEBUG nova.compute.resource_tracker [None req-b13a42ce-7734-4c51-aa6a-4687cb3b7ca3 005258eefa5345409efe13f6a1de22ab c076c42271004e7aa86e84416e33f826 - - default default] [instance: 5963f55d-5e3f-4b07-86ef-554333267b8f] Skipping migration as instance is neither resizing nor live-migrating. _update_usage_from_migrations /usr/lib/python3.12/site-packages/nova/compute/resource_tracker.py:1596
Oct 09 16:39:53 compute-0 nova_compute[117331]: 2025-10-09 16:39:53.540 2 DEBUG nova.compute.resource_tracker [None req-b13a42ce-7734-4c51-aa6a-4687cb3b7ca3 005258eefa5345409efe13f6a1de22ab c076c42271004e7aa86e84416e33f826 - - default default] Migration 23cf222b-3267-4a00-82ba-480cb5d9bc11 is active on this compute host and has allocations in placement: {'resources': {'VCPU': 1, 'MEMORY_MB': 128, 'DISK_GB': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.12/site-packages/nova/compute/resource_tracker.py:1745
Oct 09 16:39:53 compute-0 nova_compute[117331]: 2025-10-09 16:39:53.541 2 DEBUG nova.compute.resource_tracker [None req-b13a42ce-7734-4c51-aa6a-4687cb3b7ca3 005258eefa5345409efe13f6a1de22ab c076c42271004e7aa86e84416e33f826 - - default default] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.12/site-packages/nova/compute/resource_tracker.py:1159
Oct 09 16:39:53 compute-0 nova_compute[117331]: 2025-10-09 16:39:53.542 2 DEBUG nova.compute.resource_tracker [None req-b13a42ce-7734-4c51-aa6a-4687cb3b7ca3 005258eefa5345409efe13f6a1de22ab c076c42271004e7aa86e84416e33f826 - - default default] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=79GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] stats={'failed_builds': '0', 'uptime': ' 16:39:51 up 48 min,  0 user,  load average: 0.25, 0.44, 0.45\n'} _report_final_resource_view /usr/lib/python3.12/site-packages/nova/compute/resource_tracker.py:1168
Oct 09 16:39:53 compute-0 nova_compute[117331]: 2025-10-09 16:39:53.575 2 DEBUG nova.compute.provider_tree [None req-b13a42ce-7734-4c51-aa6a-4687cb3b7ca3 005258eefa5345409efe13f6a1de22ab c076c42271004e7aa86e84416e33f826 - - default default] Inventory has not changed in ProviderTree for provider: 593051b8-2000-437f-a915-2616fc8b1671 update_inventory /usr/lib/python3.12/site-packages/nova/compute/provider_tree.py:180
Oct 09 16:39:54 compute-0 nova_compute[117331]: 2025-10-09 16:39:54.084 2 DEBUG nova.scheduler.client.report [None req-b13a42ce-7734-4c51-aa6a-4687cb3b7ca3 005258eefa5345409efe13f6a1de22ab c076c42271004e7aa86e84416e33f826 - - default default] Inventory has not changed for provider 593051b8-2000-437f-a915-2616fc8b1671 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 79, 'reserved': 1, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.12/site-packages/nova/scheduler/client/report.py:958
Oct 09 16:39:54 compute-0 nova_compute[117331]: 2025-10-09 16:39:54.595 2 DEBUG nova.compute.resource_tracker [None req-b13a42ce-7734-4c51-aa6a-4687cb3b7ca3 005258eefa5345409efe13f6a1de22ab c076c42271004e7aa86e84416e33f826 - - default default] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.12/site-packages/nova/compute/resource_tracker.py:1097
Oct 09 16:39:54 compute-0 nova_compute[117331]: 2025-10-09 16:39:54.595 2 DEBUG oslo_concurrency.lockutils [None req-b13a42ce-7734-4c51-aa6a-4687cb3b7ca3 005258eefa5345409efe13f6a1de22ab c076c42271004e7aa86e84416e33f826 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 2.609s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:424
Oct 09 16:39:54 compute-0 nova_compute[117331]: 2025-10-09 16:39:54.609 2 INFO nova.compute.manager [None req-b13a42ce-7734-4c51-aa6a-4687cb3b7ca3 005258eefa5345409efe13f6a1de22ab c076c42271004e7aa86e84416e33f826 - - default default] [instance: 5963f55d-5e3f-4b07-86ef-554333267b8f] Migrating instance to compute-1.ctlplane.example.com finished successfully.
Oct 09 16:39:55 compute-0 sshd-session[153659]: Failed none for invalid user nil from 124.60.67.43 port 60470 ssh2
Oct 09 16:39:55 compute-0 nova_compute[117331]: 2025-10-09 16:39:55.667 2 INFO nova.scheduler.client.report [None req-b13a42ce-7734-4c51-aa6a-4687cb3b7ca3 005258eefa5345409efe13f6a1de22ab c076c42271004e7aa86e84416e33f826 - - default default] Deleted allocation for migration 23cf222b-3267-4a00-82ba-480cb5d9bc11
Oct 09 16:39:55 compute-0 nova_compute[117331]: 2025-10-09 16:39:55.667 2 DEBUG nova.virt.libvirt.driver [None req-b13a42ce-7734-4c51-aa6a-4687cb3b7ca3 005258eefa5345409efe13f6a1de22ab c076c42271004e7aa86e84416e33f826 - - default default] [instance: 5963f55d-5e3f-4b07-86ef-554333267b8f] Live migration monitoring is all done _live_migration /usr/lib/python3.12/site-packages/nova/virt/libvirt/driver.py:11566
Oct 09 16:39:56 compute-0 sshd-session[153659]: Connection closed by invalid user nil 124.60.67.43 port 60470 [preauth]
Oct 09 16:39:57 compute-0 nova_compute[117331]: 2025-10-09 16:39:57.407 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 09 16:39:57 compute-0 nova_compute[117331]: 2025-10-09 16:39:57.732 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 09 16:39:59 compute-0 podman[127775]: time="2025-10-09T16:39:59Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Oct 09 16:39:59 compute-0 podman[127775]: @ - - [09/Oct/2025:16:39:59 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 19526 "" "Go-http-client/1.1"
Oct 09 16:39:59 compute-0 podman[127775]: @ - - [09/Oct/2025:16:39:59 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 3033 "" "Go-http-client/1.1"
Oct 09 16:40:01 compute-0 openstack_network_exporter[129925]: ERROR   16:40:01 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Oct 09 16:40:01 compute-0 openstack_network_exporter[129925]: ERROR   16:40:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Oct 09 16:40:01 compute-0 openstack_network_exporter[129925]: ERROR   16:40:01 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Oct 09 16:40:01 compute-0 openstack_network_exporter[129925]: 
Oct 09 16:40:01 compute-0 openstack_network_exporter[129925]: ERROR   16:40:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Oct 09 16:40:01 compute-0 openstack_network_exporter[129925]: ERROR   16:40:01 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Oct 09 16:40:01 compute-0 openstack_network_exporter[129925]: 
Oct 09 16:40:02 compute-0 nova_compute[117331]: 2025-10-09 16:40:02.408 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 09 16:40:02 compute-0 nova_compute[117331]: 2025-10-09 16:40:02.733 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 09 16:40:02 compute-0 systemd[1]: Starting system activity accounting tool...
Oct 09 16:40:02 compute-0 systemd[1]: sysstat-collect.service: Deactivated successfully.
Oct 09 16:40:02 compute-0 systemd[1]: Finished system activity accounting tool.
Oct 09 16:40:02 compute-0 podman[153710]: 2025-10-09 16:40:02.845424378 +0000 UTC m=+0.077807534 container health_status 270830b5167cafbab735d8118980f26987e673cd0fa464dd2202394830bcca66 (image=38.102.83.66:5001/podified-master-centos10/openstack-multipathd:watcher_latest, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': '38.102.83.66:5001/podified-master-centos10/openstack-multipathd:watcher_latest', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, container_name=multipathd, org.label-schema.build-date=20251007, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=watcher_latest, io.buildah.version=1.41.4, managed_by=edpm_ansible, org.label-schema.license=GPLv2)
Oct 09 16:40:07 compute-0 nova_compute[117331]: 2025-10-09 16:40:07.413 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 09 16:40:07 compute-0 nova_compute[117331]: 2025-10-09 16:40:07.768 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 09 16:40:07 compute-0 podman[153734]: 2025-10-09 16:40:07.860584515 +0000 UTC m=+0.062295971 container health_status 86aa93e887899e54d383b0fc17fb3c5637680d1d8e146a130a557c837f0260fe (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter)
Oct 09 16:40:10 compute-0 sshd-session[153732]: Invalid user admin from 124.60.67.43 port 40244
Oct 09 16:40:10 compute-0 ovn_metadata_agent[28608]: 2025-10-09 16:40:10.622 28613 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=30, options={'arp_ns_explicit_output': 'true', 'fdb_removal_limit': '0', 'ignore_lsp_down': 'false', 'mac_binding_removal_limit': '0', 'mac_prefix': '4a:65:5f', 'max_tunid': '16711680', 'northd_internal_version': '24.09.4-20.37.0-77.8', 'svc_monitor_mac': '32:24:1e:e3:5d:e8'}, ipsec=False) old=SB_Global(nb_cfg=29) matches /usr/lib/python3.12/site-packages/ovsdbapp/backend/ovs_idl/event.py:55
Oct 09 16:40:10 compute-0 ovn_metadata_agent[28608]: 2025-10-09 16:40:10.623 28613 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 3 seconds run /usr/lib/python3.12/site-packages/neutron/agent/ovn/metadata/agent.py:367
Oct 09 16:40:10 compute-0 nova_compute[117331]: 2025-10-09 16:40:10.622 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 09 16:40:12 compute-0 nova_compute[117331]: 2025-10-09 16:40:12.413 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 09 16:40:12 compute-0 nova_compute[117331]: 2025-10-09 16:40:12.770 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 09 16:40:12 compute-0 podman[153760]: 2025-10-09 16:40:12.860731654 +0000 UTC m=+0.079376114 container health_status 0e966c7a283689daf23c5db5017f23f685ee836319240188a9f70ddd246563c5 (image=38.102.83.66:5001/podified-master-centos10/openstack-neutron-metadata-agent-ovn:watcher_latest, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_build_tag=watcher_latest, tcib_managed=true, config_id=ovn_metadata_agent, org.label-schema.build-date=20251007, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': '38.102.83.66:5001/podified-master-centos10/openstack-neutron-metadata-agent-ovn:watcher_latest', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, io.buildah.version=1.41.4)
Oct 09 16:40:12 compute-0 podman[153761]: 2025-10-09 16:40:12.90844161 +0000 UTC m=+0.120386827 container health_status 8227728d23f374cb9528be6ddf01dff38d8c792912a17b0d137932a18390b820 (image=38.102.83.66:5001/podified-master-centos10/openstack-iscsid:watcher_latest, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.4, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=watcher_latest, org.label-schema.build-date=20251007, org.label-schema.name=CentOS Stream 10 Base Image, config_id=iscsid, container_name=iscsid, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': '38.102.83.66:5001/podified-master-centos10/openstack-iscsid:watcher_latest', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']})
Oct 09 16:40:13 compute-0 sshd-session[153732]: pam_unix(sshd:auth): check pass; user unknown
Oct 09 16:40:13 compute-0 sshd-session[153732]: pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=124.60.67.43
Oct 09 16:40:13 compute-0 nova_compute[117331]: 2025-10-09 16:40:13.550 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 09 16:40:13 compute-0 ovn_metadata_agent[28608]: 2025-10-09 16:40:13.624 28613 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=9954897f-aa83-45dd-8e84-289816676c2a, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '30'}),), if_exists=True) do_commit /usr/lib/python3.12/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 09 16:40:15 compute-0 sshd-session[153732]: Failed password for invalid user admin from 124.60.67.43 port 40244 ssh2
Oct 09 16:40:17 compute-0 sshd-session[153732]: Connection closed by invalid user admin 124.60.67.43 port 40244 [preauth]
Oct 09 16:40:17 compute-0 nova_compute[117331]: 2025-10-09 16:40:17.415 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 09 16:40:17 compute-0 nova_compute[117331]: 2025-10-09 16:40:17.818 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 09 16:40:22 compute-0 nova_compute[117331]: 2025-10-09 16:40:22.418 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 09 16:40:22 compute-0 nova_compute[117331]: 2025-10-09 16:40:22.819 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 09 16:40:22 compute-0 podman[153796]: 2025-10-09 16:40:22.857358735 +0000 UTC m=+0.078178485 container health_status 64fa251112d01abb34ec1bb8e22019815ddcaaaa391ce5ec91517147ac5a9994 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, url=https://catalog.redhat.com/en/search?searchType=containers, build-date=2025-08-20T13:12:41, container_name=openstack_network_exporter, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vcs-type=git, vendor=Red Hat, Inc., io.openshift.tags=minimal rhel9, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., com.redhat.component=ubi9-minimal-container, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, name=ubi9-minimal, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., release=1755695350, io.buildah.version=1.33.7, architecture=x86_64, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, distribution-scope=public, maintainer=Red Hat, Inc., config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, config_id=edpm, version=9.6, managed_by=edpm_ansible, io.openshift.expose-services=)
Oct 09 16:40:22 compute-0 podman[153797]: 2025-10-09 16:40:22.881276314 +0000 UTC m=+0.102248129 container health_status d615e4f24ff62fbb14be4a63e6da568ec5b22309773c1c66103b6b4de64a8a2f (image=38.102.83.66:5001/podified-master-centos10/openstack-ovn-controller:watcher_latest, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': '38.102.83.66:5001/podified-master-centos10/openstack-ovn-controller:watcher_latest', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, tcib_build_tag=watcher_latest, config_id=ovn_controller, container_name=ovn_controller, io.buildah.version=1.41.4, org.label-schema.build-date=20251007)
Oct 09 16:40:22 compute-0 ovn_metadata_agent[28608]: 2025-10-09 16:40:22.919 28613 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:6e:79:49 10.100.0.2'], port_security=[], type=localport, nat_addresses=[], virtual_parent=[], up=[False], options={}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.2/28', 'neutron:device_id': 'ovnmeta-faa2f899-e3f1-48c6-ac37-859a6fb5c6cc', 'neutron:device_owner': 'network:distributed', 'neutron:mtu': '', 'neutron:network_name': 'neutron-faa2f899-e3f1-48c6-ac37-859a6fb5c6cc', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '655616b8f80249ceab702bd3d943237d', 'neutron:revision_number': '2', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=0a2d5ec5-dae5-4df8-b843-efe172a6e533, chassis=[], tunnel_key=1, gateway_chassis=[], requested_chassis=[], logical_port=71d0467d-0f77-43e1-a20d-172a49f9f770) old=Port_Binding(mac=['fa:16:3e:6e:79:49'], external_ids={'neutron:cidrs': '', 'neutron:device_id': 'ovnmeta-faa2f899-e3f1-48c6-ac37-859a6fb5c6cc', 'neutron:device_owner': 'network:distributed', 'neutron:mtu': '', 'neutron:network_name': 'neutron-faa2f899-e3f1-48c6-ac37-859a6fb5c6cc', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '655616b8f80249ceab702bd3d943237d', 'neutron:revision_number': '1', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}) matches /usr/lib/python3.12/site-packages/ovsdbapp/backend/ovs_idl/event.py:55
Oct 09 16:40:22 compute-0 ovn_metadata_agent[28608]: 2025-10-09 16:40:22.920 28613 INFO neutron.agent.ovn.metadata.agent [-] Metadata Port 71d0467d-0f77-43e1-a20d-172a49f9f770 in datapath faa2f899-e3f1-48c6-ac37-859a6fb5c6cc updated
Oct 09 16:40:22 compute-0 ovn_metadata_agent[28608]: 2025-10-09 16:40:22.921 28613 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network faa2f899-e3f1-48c6-ac37-859a6fb5c6cc, tearing the namespace down if needed _get_provision_params /usr/lib/python3.12/site-packages/neutron/agent/ovn/metadata/agent.py:756
Oct 09 16:40:22 compute-0 ovn_metadata_agent[28608]: 2025-10-09 16:40:22.922 139687 DEBUG oslo.privsep.daemon [-] privsep: reply[38722652-7edf-4832-be2c-94feb9a9434b]: (4, False) _call_back /usr/lib/python3.12/site-packages/oslo_privsep/daemon.py:515
Oct 09 16:40:27 compute-0 nova_compute[117331]: 2025-10-09 16:40:27.420 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 09 16:40:27 compute-0 nova_compute[117331]: 2025-10-09 16:40:27.820 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 09 16:40:28 compute-0 unix_chkpwd[153844]: password check failed for user (root)
Oct 09 16:40:28 compute-0 sshd-session[153842]: pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=124.60.67.43  user=root
Oct 09 16:40:29 compute-0 podman[127775]: time="2025-10-09T16:40:29Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Oct 09 16:40:29 compute-0 podman[127775]: @ - - [09/Oct/2025:16:40:29 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 19526 "" "Go-http-client/1.1"
Oct 09 16:40:29 compute-0 podman[127775]: @ - - [09/Oct/2025:16:40:29 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 3029 "" "Go-http-client/1.1"
Oct 09 16:40:30 compute-0 sshd-session[153842]: Failed password for root from 124.60.67.43 port 56790 ssh2
Oct 09 16:40:31 compute-0 openstack_network_exporter[129925]: ERROR   16:40:31 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Oct 09 16:40:31 compute-0 openstack_network_exporter[129925]: ERROR   16:40:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Oct 09 16:40:31 compute-0 openstack_network_exporter[129925]: ERROR   16:40:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Oct 09 16:40:31 compute-0 openstack_network_exporter[129925]: ERROR   16:40:31 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Oct 09 16:40:31 compute-0 openstack_network_exporter[129925]: 
Oct 09 16:40:31 compute-0 openstack_network_exporter[129925]: ERROR   16:40:31 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Oct 09 16:40:31 compute-0 openstack_network_exporter[129925]: 
Oct 09 16:40:32 compute-0 nova_compute[117331]: 2025-10-09 16:40:32.308 2 DEBUG oslo_service.periodic_task [None req-92a13bed-c081-4c7b-87f3-bafebefb79f2 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.12/site-packages/oslo_service/periodic_task.py:210
Oct 09 16:40:32 compute-0 nova_compute[117331]: 2025-10-09 16:40:32.423 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 09 16:40:32 compute-0 nova_compute[117331]: 2025-10-09 16:40:32.823 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 09 16:40:33 compute-0 sshd-session[153842]: Connection closed by authenticating user root 124.60.67.43 port 56790 [preauth]
Oct 09 16:40:33 compute-0 nova_compute[117331]: 2025-10-09 16:40:33.307 2 DEBUG oslo_service.periodic_task [None req-92a13bed-c081-4c7b-87f3-bafebefb79f2 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.12/site-packages/oslo_service/periodic_task.py:210
Oct 09 16:40:33 compute-0 podman[153845]: 2025-10-09 16:40:33.856929847 +0000 UTC m=+0.081099438 container health_status 270830b5167cafbab735d8118980f26987e673cd0fa464dd2202394830bcca66 (image=38.102.83.66:5001/podified-master-centos10/openstack-multipathd:watcher_latest, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_build_tag=watcher_latest, org.label-schema.build-date=20251007, org.label-schema.license=GPLv2, config_id=multipathd, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.schema-version=1.0, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': '38.102.83.66:5001/podified-master-centos10/openstack-multipathd:watcher_latest', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd, io.buildah.version=1.41.4, managed_by=edpm_ansible)
Oct 09 16:40:34 compute-0 ovn_metadata_agent[28608]: 2025-10-09 16:40:34.532 28613 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:87:bf:c7 10.100.0.2'], port_security=[], type=localport, nat_addresses=[], virtual_parent=[], up=[False], options={}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.2/28', 'neutron:device_id': 'ovnmeta-0242360f-b57f-4d7a-909e-487c5b78497f', 'neutron:device_owner': 'network:distributed', 'neutron:mtu': '', 'neutron:network_name': 'neutron-0242360f-b57f-4d7a-909e-487c5b78497f', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'fca25ca2f317463c909e30c8b2c188d1', 'neutron:revision_number': '2', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=43e1804b-ec2f-4a5f-bea8-a38d4bedfa95, chassis=[], tunnel_key=1, gateway_chassis=[], requested_chassis=[], logical_port=2d000a2d-f6c6-45a8-ac0a-23d932224b1d) old=Port_Binding(mac=['fa:16:3e:87:bf:c7'], external_ids={'neutron:cidrs': '', 'neutron:device_id': 'ovnmeta-0242360f-b57f-4d7a-909e-487c5b78497f', 'neutron:device_owner': 'network:distributed', 'neutron:mtu': '', 'neutron:network_name': 'neutron-0242360f-b57f-4d7a-909e-487c5b78497f', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'fca25ca2f317463c909e30c8b2c188d1', 'neutron:revision_number': '1', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}) matches /usr/lib/python3.12/site-packages/ovsdbapp/backend/ovs_idl/event.py:55
Oct 09 16:40:34 compute-0 ovn_metadata_agent[28608]: 2025-10-09 16:40:34.533 28613 INFO neutron.agent.ovn.metadata.agent [-] Metadata Port 2d000a2d-f6c6-45a8-ac0a-23d932224b1d in datapath 0242360f-b57f-4d7a-909e-487c5b78497f updated
Oct 09 16:40:34 compute-0 ovn_metadata_agent[28608]: 2025-10-09 16:40:34.535 28613 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 0242360f-b57f-4d7a-909e-487c5b78497f, tearing the namespace down if needed _get_provision_params /usr/lib/python3.12/site-packages/neutron/agent/ovn/metadata/agent.py:756
Oct 09 16:40:34 compute-0 ovn_metadata_agent[28608]: 2025-10-09 16:40:34.536 139687 DEBUG oslo.privsep.daemon [-] privsep: reply[4eb28398-947b-467d-b297-1c7f64f1ebd8]: (4, False) _call_back /usr/lib/python3.12/site-packages/oslo_privsep/daemon.py:515
Oct 09 16:40:35 compute-0 ovn_metadata_agent[28608]: 2025-10-09 16:40:35.340 28613 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:405
Oct 09 16:40:35 compute-0 ovn_metadata_agent[28608]: 2025-10-09 16:40:35.340 28613 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.000s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:410
Oct 09 16:40:35 compute-0 ovn_metadata_agent[28608]: 2025-10-09 16:40:35.340 28613 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:424
Oct 09 16:40:37 compute-0 nova_compute[117331]: 2025-10-09 16:40:37.306 2 DEBUG oslo_service.periodic_task [None req-92a13bed-c081-4c7b-87f3-bafebefb79f2 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.12/site-packages/oslo_service/periodic_task.py:210
Oct 09 16:40:37 compute-0 nova_compute[117331]: 2025-10-09 16:40:37.306 2 DEBUG oslo_service.periodic_task [None req-92a13bed-c081-4c7b-87f3-bafebefb79f2 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.12/site-packages/oslo_service/periodic_task.py:210
Oct 09 16:40:37 compute-0 nova_compute[117331]: 2025-10-09 16:40:37.425 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 09 16:40:37 compute-0 nova_compute[117331]: 2025-10-09 16:40:37.825 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 09 16:40:37 compute-0 nova_compute[117331]: 2025-10-09 16:40:37.833 2 DEBUG oslo_concurrency.lockutils [None req-92a13bed-c081-4c7b-87f3-bafebefb79f2 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:405
Oct 09 16:40:37 compute-0 nova_compute[117331]: 2025-10-09 16:40:37.834 2 DEBUG oslo_concurrency.lockutils [None req-92a13bed-c081-4c7b-87f3-bafebefb79f2 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:410
Oct 09 16:40:37 compute-0 nova_compute[117331]: 2025-10-09 16:40:37.834 2 DEBUG oslo_concurrency.lockutils [None req-92a13bed-c081-4c7b-87f3-bafebefb79f2 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:424
Oct 09 16:40:37 compute-0 nova_compute[117331]: 2025-10-09 16:40:37.834 2 DEBUG nova.compute.resource_tracker [None req-92a13bed-c081-4c7b-87f3-bafebefb79f2 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.12/site-packages/nova/compute/resource_tracker.py:937
Oct 09 16:40:37 compute-0 nova_compute[117331]: 2025-10-09 16:40:37.997 2 WARNING nova.virt.libvirt.driver [None req-92a13bed-c081-4c7b-87f3-bafebefb79f2 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Oct 09 16:40:37 compute-0 nova_compute[117331]: 2025-10-09 16:40:37.998 2 DEBUG oslo_concurrency.processutils [None req-92a13bed-c081-4c7b-87f3-bafebefb79f2 - - - - - -] Running cmd (subprocess): env LANG=C uptime execute /usr/lib/python3.12/site-packages/oslo_concurrency/processutils.py:349
Oct 09 16:40:38 compute-0 nova_compute[117331]: 2025-10-09 16:40:38.030 2 DEBUG oslo_concurrency.processutils [None req-92a13bed-c081-4c7b-87f3-bafebefb79f2 - - - - - -] CMD "env LANG=C uptime" returned: 0 in 0.032s execute /usr/lib/python3.12/site-packages/oslo_concurrency/processutils.py:372
Oct 09 16:40:38 compute-0 nova_compute[117331]: 2025-10-09 16:40:38.031 2 DEBUG nova.compute.resource_tracker [None req-92a13bed-c081-4c7b-87f3-bafebefb79f2 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=6152MB free_disk=73.24929809570312GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.12/site-packages/nova/compute/resource_tracker.py:1136
Oct 09 16:40:38 compute-0 nova_compute[117331]: 2025-10-09 16:40:38.031 2 DEBUG oslo_concurrency.lockutils [None req-92a13bed-c081-4c7b-87f3-bafebefb79f2 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:405
Oct 09 16:40:38 compute-0 nova_compute[117331]: 2025-10-09 16:40:38.032 2 DEBUG oslo_concurrency.lockutils [None req-92a13bed-c081-4c7b-87f3-bafebefb79f2 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:410
Oct 09 16:40:38 compute-0 podman[153871]: 2025-10-09 16:40:38.823634174 +0000 UTC m=+0.055314768 container health_status 86aa93e887899e54d383b0fc17fb3c5637680d1d8e146a130a557c837f0260fe (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']})
Oct 09 16:40:39 compute-0 nova_compute[117331]: 2025-10-09 16:40:39.103 2 DEBUG nova.compute.resource_tracker [None req-92a13bed-c081-4c7b-87f3-bafebefb79f2 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.12/site-packages/nova/compute/resource_tracker.py:1159
Oct 09 16:40:39 compute-0 nova_compute[117331]: 2025-10-09 16:40:39.103 2 DEBUG nova.compute.resource_tracker [None req-92a13bed-c081-4c7b-87f3-bafebefb79f2 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=79GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] stats={'failed_builds': '0', 'uptime': ' 16:40:38 up 49 min,  0 user,  load average: 0.23, 0.41, 0.44\n'} _report_final_resource_view /usr/lib/python3.12/site-packages/nova/compute/resource_tracker.py:1168
Oct 09 16:40:39 compute-0 nova_compute[117331]: 2025-10-09 16:40:39.122 2 DEBUG nova.compute.provider_tree [None req-92a13bed-c081-4c7b-87f3-bafebefb79f2 - - - - - -] Inventory has not changed in ProviderTree for provider: 593051b8-2000-437f-a915-2616fc8b1671 update_inventory /usr/lib/python3.12/site-packages/nova/compute/provider_tree.py:180
Oct 09 16:40:39 compute-0 nova_compute[117331]: 2025-10-09 16:40:39.630 2 DEBUG nova.scheduler.client.report [None req-92a13bed-c081-4c7b-87f3-bafebefb79f2 - - - - - -] Inventory has not changed for provider 593051b8-2000-437f-a915-2616fc8b1671 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 79, 'reserved': 1, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.12/site-packages/nova/scheduler/client/report.py:958
Oct 09 16:40:40 compute-0 nova_compute[117331]: 2025-10-09 16:40:40.139 2 DEBUG nova.compute.resource_tracker [None req-92a13bed-c081-4c7b-87f3-bafebefb79f2 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.12/site-packages/nova/compute/resource_tracker.py:1097
Oct 09 16:40:40 compute-0 nova_compute[117331]: 2025-10-09 16:40:40.140 2 DEBUG oslo_concurrency.lockutils [None req-92a13bed-c081-4c7b-87f3-bafebefb79f2 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 2.109s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:424
Oct 09 16:40:41 compute-0 nova_compute[117331]: 2025-10-09 16:40:41.136 2 DEBUG oslo_service.periodic_task [None req-92a13bed-c081-4c7b-87f3-bafebefb79f2 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.12/site-packages/oslo_service/periodic_task.py:210
Oct 09 16:40:41 compute-0 nova_compute[117331]: 2025-10-09 16:40:41.137 2 DEBUG oslo_service.periodic_task [None req-92a13bed-c081-4c7b-87f3-bafebefb79f2 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.12/site-packages/oslo_service/periodic_task.py:210
Oct 09 16:40:41 compute-0 nova_compute[117331]: 2025-10-09 16:40:41.137 2 DEBUG nova.compute.manager [None req-92a13bed-c081-4c7b-87f3-bafebefb79f2 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.12/site-packages/nova/compute/manager.py:11228
Oct 09 16:40:41 compute-0 sshd-session[153866]: Invalid user orangepi from 124.60.67.43 port 60648
Oct 09 16:40:42 compute-0 nova_compute[117331]: 2025-10-09 16:40:42.425 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 09 16:40:42 compute-0 sshd-session[153866]: pam_unix(sshd:auth): check pass; user unknown
Oct 09 16:40:42 compute-0 sshd-session[153866]: pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=124.60.67.43
Oct 09 16:40:42 compute-0 nova_compute[117331]: 2025-10-09 16:40:42.826 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 09 16:40:43 compute-0 podman[153895]: 2025-10-09 16:40:43.824010281 +0000 UTC m=+0.057407686 container health_status 0e966c7a283689daf23c5db5017f23f685ee836319240188a9f70ddd246563c5 (image=38.102.83.66:5001/podified-master-centos10/openstack-neutron-metadata-agent-ovn:watcher_latest, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251007, org.label-schema.license=GPLv2, tcib_build_tag=watcher_latest, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': '38.102.83.66:5001/podified-master-centos10/openstack-neutron-metadata-agent-ovn:watcher_latest', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 10 Base Image, io.buildah.version=1.41.4, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, container_name=ovn_metadata_agent)
Oct 09 16:40:43 compute-0 podman[153896]: 2025-10-09 16:40:43.853775836 +0000 UTC m=+0.082797182 container health_status 8227728d23f374cb9528be6ddf01dff38d8c792912a17b0d137932a18390b820 (image=38.102.83.66:5001/podified-master-centos10/openstack-iscsid:watcher_latest, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, container_name=iscsid, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_build_tag=watcher_latest, config_id=iscsid, io.buildah.version=1.41.4, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': '38.102.83.66:5001/podified-master-centos10/openstack-iscsid:watcher_latest', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, org.label-schema.build-date=20251007, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.vendor=CentOS)
Oct 09 16:40:44 compute-0 nova_compute[117331]: 2025-10-09 16:40:44.306 2 DEBUG oslo_service.periodic_task [None req-92a13bed-c081-4c7b-87f3-bafebefb79f2 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.12/site-packages/oslo_service/periodic_task.py:210
Oct 09 16:40:44 compute-0 sshd-session[153866]: Failed password for invalid user orangepi from 124.60.67.43 port 60648 ssh2
Oct 09 16:40:45 compute-0 nova_compute[117331]: 2025-10-09 16:40:45.306 2 DEBUG oslo_service.periodic_task [None req-92a13bed-c081-4c7b-87f3-bafebefb79f2 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.12/site-packages/oslo_service/periodic_task.py:210
Oct 09 16:40:46 compute-0 nova_compute[117331]: 2025-10-09 16:40:46.303 2 DEBUG oslo_service.periodic_task [None req-92a13bed-c081-4c7b-87f3-bafebefb79f2 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.12/site-packages/oslo_service/periodic_task.py:210
Oct 09 16:40:46 compute-0 sshd-session[153866]: Connection closed by invalid user orangepi 124.60.67.43 port 60648 [preauth]
Oct 09 16:40:46 compute-0 ovn_controller[19752]: 2025-10-09T16:40:46Z|00279|memory_trim|INFO|Detected inactivity (last active 30002 ms ago): trimming memory
Oct 09 16:40:46 compute-0 nova_compute[117331]: 2025-10-09 16:40:46.848 2 DEBUG oslo_concurrency.lockutils [None req-0dc1d776-a9ae-4854-9727-02ae661565e0 05384d72bc894b20b1cc128bf76382b2 fca25ca2f317463c909e30c8b2c188d1 - - default default] Acquiring lock "e49a1e89-0c38-4887-9bdf-64f07982ab22" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:405
Oct 09 16:40:46 compute-0 nova_compute[117331]: 2025-10-09 16:40:46.848 2 DEBUG oslo_concurrency.lockutils [None req-0dc1d776-a9ae-4854-9727-02ae661565e0 05384d72bc894b20b1cc128bf76382b2 fca25ca2f317463c909e30c8b2c188d1 - - default default] Lock "e49a1e89-0c38-4887-9bdf-64f07982ab22" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:410
Oct 09 16:40:47 compute-0 nova_compute[117331]: 2025-10-09 16:40:47.355 2 DEBUG nova.compute.manager [None req-0dc1d776-a9ae-4854-9727-02ae661565e0 05384d72bc894b20b1cc128bf76382b2 fca25ca2f317463c909e30c8b2c188d1 - - default default] [instance: e49a1e89-0c38-4887-9bdf-64f07982ab22] Starting instance... _do_build_and_run_instance /usr/lib/python3.12/site-packages/nova/compute/manager.py:2472
Oct 09 16:40:47 compute-0 nova_compute[117331]: 2025-10-09 16:40:47.428 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 09 16:40:47 compute-0 nova_compute[117331]: 2025-10-09 16:40:47.829 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 09 16:40:47 compute-0 nova_compute[117331]: 2025-10-09 16:40:47.962 2 DEBUG oslo_concurrency.lockutils [None req-0dc1d776-a9ae-4854-9727-02ae661565e0 05384d72bc894b20b1cc128bf76382b2 fca25ca2f317463c909e30c8b2c188d1 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:405
Oct 09 16:40:47 compute-0 nova_compute[117331]: 2025-10-09 16:40:47.964 2 DEBUG oslo_concurrency.lockutils [None req-0dc1d776-a9ae-4854-9727-02ae661565e0 05384d72bc894b20b1cc128bf76382b2 fca25ca2f317463c909e30c8b2c188d1 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.002s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:410
Oct 09 16:40:47 compute-0 nova_compute[117331]: 2025-10-09 16:40:47.972 2 DEBUG nova.virt.hardware [None req-0dc1d776-a9ae-4854-9727-02ae661565e0 05384d72bc894b20b1cc128bf76382b2 fca25ca2f317463c909e30c8b2c188d1 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.12/site-packages/nova/virt/hardware.py:2528
Oct 09 16:40:47 compute-0 nova_compute[117331]: 2025-10-09 16:40:47.973 2 INFO nova.compute.claims [None req-0dc1d776-a9ae-4854-9727-02ae661565e0 05384d72bc894b20b1cc128bf76382b2 fca25ca2f317463c909e30c8b2c188d1 - - default default] [instance: e49a1e89-0c38-4887-9bdf-64f07982ab22] Claim successful on node compute-0.ctlplane.example.com
Oct 09 16:40:49 compute-0 nova_compute[117331]: 2025-10-09 16:40:49.024 2 DEBUG nova.compute.provider_tree [None req-0dc1d776-a9ae-4854-9727-02ae661565e0 05384d72bc894b20b1cc128bf76382b2 fca25ca2f317463c909e30c8b2c188d1 - - default default] Inventory has not changed in ProviderTree for provider: 593051b8-2000-437f-a915-2616fc8b1671 update_inventory /usr/lib/python3.12/site-packages/nova/compute/provider_tree.py:180
Oct 09 16:40:49 compute-0 nova_compute[117331]: 2025-10-09 16:40:49.531 2 DEBUG nova.scheduler.client.report [None req-0dc1d776-a9ae-4854-9727-02ae661565e0 05384d72bc894b20b1cc128bf76382b2 fca25ca2f317463c909e30c8b2c188d1 - - default default] Inventory has not changed for provider 593051b8-2000-437f-a915-2616fc8b1671 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 79, 'reserved': 1, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.12/site-packages/nova/scheduler/client/report.py:958
Oct 09 16:40:50 compute-0 nova_compute[117331]: 2025-10-09 16:40:50.044 2 DEBUG oslo_concurrency.lockutils [None req-0dc1d776-a9ae-4854-9727-02ae661565e0 05384d72bc894b20b1cc128bf76382b2 fca25ca2f317463c909e30c8b2c188d1 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 2.080s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:424
Oct 09 16:40:50 compute-0 nova_compute[117331]: 2025-10-09 16:40:50.046 2 DEBUG nova.compute.manager [None req-0dc1d776-a9ae-4854-9727-02ae661565e0 05384d72bc894b20b1cc128bf76382b2 fca25ca2f317463c909e30c8b2c188d1 - - default default] [instance: e49a1e89-0c38-4887-9bdf-64f07982ab22] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.12/site-packages/nova/compute/manager.py:2869
Oct 09 16:40:50 compute-0 nova_compute[117331]: 2025-10-09 16:40:50.558 2 DEBUG nova.compute.manager [None req-0dc1d776-a9ae-4854-9727-02ae661565e0 05384d72bc894b20b1cc128bf76382b2 fca25ca2f317463c909e30c8b2c188d1 - - default default] [instance: e49a1e89-0c38-4887-9bdf-64f07982ab22] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.12/site-packages/nova/compute/manager.py:2016
Oct 09 16:40:50 compute-0 nova_compute[117331]: 2025-10-09 16:40:50.559 2 DEBUG nova.network.neutron [None req-0dc1d776-a9ae-4854-9727-02ae661565e0 05384d72bc894b20b1cc128bf76382b2 fca25ca2f317463c909e30c8b2c188d1 - - default default] [instance: e49a1e89-0c38-4887-9bdf-64f07982ab22] allocate_for_instance() allocate_for_instance /usr/lib/python3.12/site-packages/nova/network/neutron.py:1208
Oct 09 16:40:50 compute-0 nova_compute[117331]: 2025-10-09 16:40:50.560 2 WARNING neutronclient.v2_0.client [None req-0dc1d776-a9ae-4854-9727-02ae661565e0 05384d72bc894b20b1cc128bf76382b2 fca25ca2f317463c909e30c8b2c188d1 - - default default] The python binding code in neutronclient is deprecated in favor of OpenstackSDK, please use that as this will be removed in a future release.
Oct 09 16:40:50 compute-0 nova_compute[117331]: 2025-10-09 16:40:50.561 2 WARNING neutronclient.v2_0.client [None req-0dc1d776-a9ae-4854-9727-02ae661565e0 05384d72bc894b20b1cc128bf76382b2 fca25ca2f317463c909e30c8b2c188d1 - - default default] The python binding code in neutronclient is deprecated in favor of OpenstackSDK, please use that as this will be removed in a future release.
Oct 09 16:40:51 compute-0 nova_compute[117331]: 2025-10-09 16:40:51.068 2 INFO nova.virt.libvirt.driver [None req-0dc1d776-a9ae-4854-9727-02ae661565e0 05384d72bc894b20b1cc128bf76382b2 fca25ca2f317463c909e30c8b2c188d1 - - default default] [instance: e49a1e89-0c38-4887-9bdf-64f07982ab22] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Oct 09 16:40:51 compute-0 nova_compute[117331]: 2025-10-09 16:40:51.580 2 DEBUG nova.compute.manager [None req-0dc1d776-a9ae-4854-9727-02ae661565e0 05384d72bc894b20b1cc128bf76382b2 fca25ca2f317463c909e30c8b2c188d1 - - default default] [instance: e49a1e89-0c38-4887-9bdf-64f07982ab22] Start building block device mappings for instance. _build_resources /usr/lib/python3.12/site-packages/nova/compute/manager.py:2904
Oct 09 16:40:52 compute-0 nova_compute[117331]: 2025-10-09 16:40:52.267 2 DEBUG nova.network.neutron [None req-0dc1d776-a9ae-4854-9727-02ae661565e0 05384d72bc894b20b1cc128bf76382b2 fca25ca2f317463c909e30c8b2c188d1 - - default default] [instance: e49a1e89-0c38-4887-9bdf-64f07982ab22] Successfully created port: 3bf58985-c8d6-4c25-bed7-5fe8a78a1239 _create_port_minimal /usr/lib/python3.12/site-packages/nova/network/neutron.py:550
Oct 09 16:40:52 compute-0 nova_compute[117331]: 2025-10-09 16:40:52.428 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 09 16:40:52 compute-0 nova_compute[117331]: 2025-10-09 16:40:52.600 2 DEBUG nova.compute.manager [None req-0dc1d776-a9ae-4854-9727-02ae661565e0 05384d72bc894b20b1cc128bf76382b2 fca25ca2f317463c909e30c8b2c188d1 - - default default] [instance: e49a1e89-0c38-4887-9bdf-64f07982ab22] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.12/site-packages/nova/compute/manager.py:2678
Oct 09 16:40:52 compute-0 nova_compute[117331]: 2025-10-09 16:40:52.601 2 DEBUG nova.virt.libvirt.driver [None req-0dc1d776-a9ae-4854-9727-02ae661565e0 05384d72bc894b20b1cc128bf76382b2 fca25ca2f317463c909e30c8b2c188d1 - - default default] [instance: e49a1e89-0c38-4887-9bdf-64f07982ab22] Creating instance directory _create_image /usr/lib/python3.12/site-packages/nova/virt/libvirt/driver.py:5138
Oct 09 16:40:52 compute-0 nova_compute[117331]: 2025-10-09 16:40:52.602 2 INFO nova.virt.libvirt.driver [None req-0dc1d776-a9ae-4854-9727-02ae661565e0 05384d72bc894b20b1cc128bf76382b2 fca25ca2f317463c909e30c8b2c188d1 - - default default] [instance: e49a1e89-0c38-4887-9bdf-64f07982ab22] Creating image(s)
Oct 09 16:40:52 compute-0 nova_compute[117331]: 2025-10-09 16:40:52.603 2 DEBUG oslo_concurrency.lockutils [None req-0dc1d776-a9ae-4854-9727-02ae661565e0 05384d72bc894b20b1cc128bf76382b2 fca25ca2f317463c909e30c8b2c188d1 - - default default] Acquiring lock "/var/lib/nova/instances/e49a1e89-0c38-4887-9bdf-64f07982ab22/disk.info" by "nova.virt.libvirt.imagebackend.Image.resolve_driver_format.<locals>.write_to_disk_info_file" inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:405
Oct 09 16:40:52 compute-0 nova_compute[117331]: 2025-10-09 16:40:52.603 2 DEBUG oslo_concurrency.lockutils [None req-0dc1d776-a9ae-4854-9727-02ae661565e0 05384d72bc894b20b1cc128bf76382b2 fca25ca2f317463c909e30c8b2c188d1 - - default default] Lock "/var/lib/nova/instances/e49a1e89-0c38-4887-9bdf-64f07982ab22/disk.info" acquired by "nova.virt.libvirt.imagebackend.Image.resolve_driver_format.<locals>.write_to_disk_info_file" :: waited 0.000s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:410
Oct 09 16:40:52 compute-0 nova_compute[117331]: 2025-10-09 16:40:52.604 2 DEBUG oslo_concurrency.lockutils [None req-0dc1d776-a9ae-4854-9727-02ae661565e0 05384d72bc894b20b1cc128bf76382b2 fca25ca2f317463c909e30c8b2c188d1 - - default default] Lock "/var/lib/nova/instances/e49a1e89-0c38-4887-9bdf-64f07982ab22/disk.info" "released" by "nova.virt.libvirt.imagebackend.Image.resolve_driver_format.<locals>.write_to_disk_info_file" :: held 0.001s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:424
Oct 09 16:40:52 compute-0 nova_compute[117331]: 2025-10-09 16:40:52.605 2 DEBUG oslo_utils.imageutils.format_inspector [None req-0dc1d776-a9ae-4854-9727-02ae661565e0 05384d72bc894b20b1cc128bf76382b2 fca25ca2f317463c909e30c8b2c188d1 - - default default] Format inspector for vmdk does not match, excluding from consideration (Signature KDMV not found: b'\xebH\x90\x00') _process_chunk /usr/lib/python3.12/site-packages/oslo_utils/imageutils/format_inspector.py:1365
Oct 09 16:40:52 compute-0 nova_compute[117331]: 2025-10-09 16:40:52.609 2 DEBUG oslo_utils.imageutils.format_inspector [None req-0dc1d776-a9ae-4854-9727-02ae661565e0 05384d72bc894b20b1cc128bf76382b2 fca25ca2f317463c909e30c8b2c188d1 - - default default] Format inspector for vhdx does not match, excluding from consideration (Region signature not found at 30000) _process_chunk /usr/lib/python3.12/site-packages/oslo_utils/imageutils/format_inspector.py:1365
Oct 09 16:40:52 compute-0 nova_compute[117331]: 2025-10-09 16:40:52.612 2 DEBUG oslo_concurrency.processutils [None req-0dc1d776-a9ae-4854-9727-02ae661565e0 05384d72bc894b20b1cc128bf76382b2 fca25ca2f317463c909e30c8b2c188d1 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/cea3dacfdea0f3734ae526b812744e847bc2d356 --force-share --output=json execute /usr/lib/python3.12/site-packages/oslo_concurrency/processutils.py:349
Oct 09 16:40:52 compute-0 nova_compute[117331]: 2025-10-09 16:40:52.699 2 DEBUG oslo_concurrency.processutils [None req-0dc1d776-a9ae-4854-9727-02ae661565e0 05384d72bc894b20b1cc128bf76382b2 fca25ca2f317463c909e30c8b2c188d1 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/cea3dacfdea0f3734ae526b812744e847bc2d356 --force-share --output=json" returned: 0 in 0.087s execute /usr/lib/python3.12/site-packages/oslo_concurrency/processutils.py:372
Oct 09 16:40:52 compute-0 nova_compute[117331]: 2025-10-09 16:40:52.700 2 DEBUG oslo_concurrency.lockutils [None req-0dc1d776-a9ae-4854-9727-02ae661565e0 05384d72bc894b20b1cc128bf76382b2 fca25ca2f317463c909e30c8b2c188d1 - - default default] Acquiring lock "cea3dacfdea0f3734ae526b812744e847bc2d356" by "nova.virt.libvirt.imagebackend.Qcow2.create_image.<locals>.create_qcow2_image" inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:405
Oct 09 16:40:52 compute-0 nova_compute[117331]: 2025-10-09 16:40:52.701 2 DEBUG oslo_concurrency.lockutils [None req-0dc1d776-a9ae-4854-9727-02ae661565e0 05384d72bc894b20b1cc128bf76382b2 fca25ca2f317463c909e30c8b2c188d1 - - default default] Lock "cea3dacfdea0f3734ae526b812744e847bc2d356" acquired by "nova.virt.libvirt.imagebackend.Qcow2.create_image.<locals>.create_qcow2_image" :: waited 0.001s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:410
Oct 09 16:40:52 compute-0 nova_compute[117331]: 2025-10-09 16:40:52.702 2 DEBUG oslo_utils.imageutils.format_inspector [None req-0dc1d776-a9ae-4854-9727-02ae661565e0 05384d72bc894b20b1cc128bf76382b2 fca25ca2f317463c909e30c8b2c188d1 - - default default] Format inspector for vmdk does not match, excluding from consideration (Signature KDMV not found: b'\xebH\x90\x00') _process_chunk /usr/lib/python3.12/site-packages/oslo_utils/imageutils/format_inspector.py:1365
Oct 09 16:40:52 compute-0 nova_compute[117331]: 2025-10-09 16:40:52.708 2 DEBUG oslo_utils.imageutils.format_inspector [None req-0dc1d776-a9ae-4854-9727-02ae661565e0 05384d72bc894b20b1cc128bf76382b2 fca25ca2f317463c909e30c8b2c188d1 - - default default] Format inspector for vhdx does not match, excluding from consideration (Region signature not found at 30000) _process_chunk /usr/lib/python3.12/site-packages/oslo_utils/imageutils/format_inspector.py:1365
Oct 09 16:40:52 compute-0 nova_compute[117331]: 2025-10-09 16:40:52.709 2 DEBUG oslo_concurrency.processutils [None req-0dc1d776-a9ae-4854-9727-02ae661565e0 05384d72bc894b20b1cc128bf76382b2 fca25ca2f317463c909e30c8b2c188d1 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/cea3dacfdea0f3734ae526b812744e847bc2d356 --force-share --output=json execute /usr/lib/python3.12/site-packages/oslo_concurrency/processutils.py:349
Oct 09 16:40:52 compute-0 nova_compute[117331]: 2025-10-09 16:40:52.785 2 DEBUG oslo_concurrency.processutils [None req-0dc1d776-a9ae-4854-9727-02ae661565e0 05384d72bc894b20b1cc128bf76382b2 fca25ca2f317463c909e30c8b2c188d1 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/cea3dacfdea0f3734ae526b812744e847bc2d356 --force-share --output=json" returned: 0 in 0.076s execute /usr/lib/python3.12/site-packages/oslo_concurrency/processutils.py:372
Oct 09 16:40:52 compute-0 nova_compute[117331]: 2025-10-09 16:40:52.786 2 DEBUG oslo_concurrency.processutils [None req-0dc1d776-a9ae-4854-9727-02ae661565e0 05384d72bc894b20b1cc128bf76382b2 fca25ca2f317463c909e30c8b2c188d1 - - default default] Running cmd (subprocess): env LC_ALL=C LANG=C qemu-img create -f qcow2 -o backing_file=/var/lib/nova/instances/_base/cea3dacfdea0f3734ae526b812744e847bc2d356,backing_fmt=raw /var/lib/nova/instances/e49a1e89-0c38-4887-9bdf-64f07982ab22/disk 1073741824 execute /usr/lib/python3.12/site-packages/oslo_concurrency/processutils.py:349
Oct 09 16:40:52 compute-0 nova_compute[117331]: 2025-10-09 16:40:52.821 2 DEBUG oslo_concurrency.processutils [None req-0dc1d776-a9ae-4854-9727-02ae661565e0 05384d72bc894b20b1cc128bf76382b2 fca25ca2f317463c909e30c8b2c188d1 - - default default] CMD "env LC_ALL=C LANG=C qemu-img create -f qcow2 -o backing_file=/var/lib/nova/instances/_base/cea3dacfdea0f3734ae526b812744e847bc2d356,backing_fmt=raw /var/lib/nova/instances/e49a1e89-0c38-4887-9bdf-64f07982ab22/disk 1073741824" returned: 0 in 0.035s execute /usr/lib/python3.12/site-packages/oslo_concurrency/processutils.py:372
Oct 09 16:40:52 compute-0 nova_compute[117331]: 2025-10-09 16:40:52.823 2 DEBUG oslo_concurrency.lockutils [None req-0dc1d776-a9ae-4854-9727-02ae661565e0 05384d72bc894b20b1cc128bf76382b2 fca25ca2f317463c909e30c8b2c188d1 - - default default] Lock "cea3dacfdea0f3734ae526b812744e847bc2d356" "released" by "nova.virt.libvirt.imagebackend.Qcow2.create_image.<locals>.create_qcow2_image" :: held 0.121s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:424
Oct 09 16:40:52 compute-0 nova_compute[117331]: 2025-10-09 16:40:52.823 2 DEBUG oslo_concurrency.processutils [None req-0dc1d776-a9ae-4854-9727-02ae661565e0 05384d72bc894b20b1cc128bf76382b2 fca25ca2f317463c909e30c8b2c188d1 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/cea3dacfdea0f3734ae526b812744e847bc2d356 --force-share --output=json execute /usr/lib/python3.12/site-packages/oslo_concurrency/processutils.py:349
Oct 09 16:40:52 compute-0 nova_compute[117331]: 2025-10-09 16:40:52.833 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 09 16:40:52 compute-0 nova_compute[117331]: 2025-10-09 16:40:52.876 2 DEBUG oslo_concurrency.processutils [None req-0dc1d776-a9ae-4854-9727-02ae661565e0 05384d72bc894b20b1cc128bf76382b2 fca25ca2f317463c909e30c8b2c188d1 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/cea3dacfdea0f3734ae526b812744e847bc2d356 --force-share --output=json" returned: 0 in 0.053s execute /usr/lib/python3.12/site-packages/oslo_concurrency/processutils.py:372
Oct 09 16:40:52 compute-0 nova_compute[117331]: 2025-10-09 16:40:52.877 2 DEBUG nova.virt.disk.api [None req-0dc1d776-a9ae-4854-9727-02ae661565e0 05384d72bc894b20b1cc128bf76382b2 fca25ca2f317463c909e30c8b2c188d1 - - default default] Checking if we can resize image /var/lib/nova/instances/e49a1e89-0c38-4887-9bdf-64f07982ab22/disk. size=1073741824 can_resize_image /usr/lib/python3.12/site-packages/nova/virt/disk/api.py:164
Oct 09 16:40:52 compute-0 nova_compute[117331]: 2025-10-09 16:40:52.877 2 DEBUG oslo_concurrency.processutils [None req-0dc1d776-a9ae-4854-9727-02ae661565e0 05384d72bc894b20b1cc128bf76382b2 fca25ca2f317463c909e30c8b2c188d1 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/e49a1e89-0c38-4887-9bdf-64f07982ab22/disk --force-share --output=json execute /usr/lib/python3.12/site-packages/oslo_concurrency/processutils.py:349
Oct 09 16:40:52 compute-0 nova_compute[117331]: 2025-10-09 16:40:52.931 2 DEBUG oslo_concurrency.processutils [None req-0dc1d776-a9ae-4854-9727-02ae661565e0 05384d72bc894b20b1cc128bf76382b2 fca25ca2f317463c909e30c8b2c188d1 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/e49a1e89-0c38-4887-9bdf-64f07982ab22/disk --force-share --output=json" returned: 0 in 0.054s execute /usr/lib/python3.12/site-packages/oslo_concurrency/processutils.py:372
Oct 09 16:40:52 compute-0 nova_compute[117331]: 2025-10-09 16:40:52.932 2 DEBUG nova.virt.disk.api [None req-0dc1d776-a9ae-4854-9727-02ae661565e0 05384d72bc894b20b1cc128bf76382b2 fca25ca2f317463c909e30c8b2c188d1 - - default default] Cannot resize image /var/lib/nova/instances/e49a1e89-0c38-4887-9bdf-64f07982ab22/disk to a smaller size. can_resize_image /usr/lib/python3.12/site-packages/nova/virt/disk/api.py:170
Oct 09 16:40:52 compute-0 nova_compute[117331]: 2025-10-09 16:40:52.933 2 DEBUG nova.virt.libvirt.driver [None req-0dc1d776-a9ae-4854-9727-02ae661565e0 05384d72bc894b20b1cc128bf76382b2 fca25ca2f317463c909e30c8b2c188d1 - - default default] [instance: e49a1e89-0c38-4887-9bdf-64f07982ab22] Created local disks _create_image /usr/lib/python3.12/site-packages/nova/virt/libvirt/driver.py:5270
Oct 09 16:40:52 compute-0 nova_compute[117331]: 2025-10-09 16:40:52.933 2 DEBUG nova.virt.libvirt.driver [None req-0dc1d776-a9ae-4854-9727-02ae661565e0 05384d72bc894b20b1cc128bf76382b2 fca25ca2f317463c909e30c8b2c188d1 - - default default] [instance: e49a1e89-0c38-4887-9bdf-64f07982ab22] Ensure instance console log exists: /var/lib/nova/instances/e49a1e89-0c38-4887-9bdf-64f07982ab22/console.log _ensure_console_log_for_instance /usr/lib/python3.12/site-packages/nova/virt/libvirt/driver.py:5017
Oct 09 16:40:52 compute-0 nova_compute[117331]: 2025-10-09 16:40:52.934 2 DEBUG oslo_concurrency.lockutils [None req-0dc1d776-a9ae-4854-9727-02ae661565e0 05384d72bc894b20b1cc128bf76382b2 fca25ca2f317463c909e30c8b2c188d1 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:405
Oct 09 16:40:52 compute-0 nova_compute[117331]: 2025-10-09 16:40:52.934 2 DEBUG oslo_concurrency.lockutils [None req-0dc1d776-a9ae-4854-9727-02ae661565e0 05384d72bc894b20b1cc128bf76382b2 fca25ca2f317463c909e30c8b2c188d1 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:410
Oct 09 16:40:52 compute-0 nova_compute[117331]: 2025-10-09 16:40:52.935 2 DEBUG oslo_concurrency.lockutils [None req-0dc1d776-a9ae-4854-9727-02ae661565e0 05384d72bc894b20b1cc128bf76382b2 fca25ca2f317463c909e30c8b2c188d1 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:424
Oct 09 16:40:53 compute-0 nova_compute[117331]: 2025-10-09 16:40:53.082 2 DEBUG nova.network.neutron [None req-0dc1d776-a9ae-4854-9727-02ae661565e0 05384d72bc894b20b1cc128bf76382b2 fca25ca2f317463c909e30c8b2c188d1 - - default default] [instance: e49a1e89-0c38-4887-9bdf-64f07982ab22] Successfully updated port: 3bf58985-c8d6-4c25-bed7-5fe8a78a1239 _update_port /usr/lib/python3.12/site-packages/nova/network/neutron.py:588
Oct 09 16:40:53 compute-0 nova_compute[117331]: 2025-10-09 16:40:53.143 2 DEBUG nova.compute.manager [req-1a23e084-5601-4f0b-8d64-ea9f77928cc4 req-0f9d9803-7d45-49fb-9105-fcb9fd61b89f ee15961ad2be4bada6c106ac4f69f422 c076c42271004e7aa86e84416e33f826 - - default default] [instance: e49a1e89-0c38-4887-9bdf-64f07982ab22] Received event network-changed-3bf58985-c8d6-4c25-bed7-5fe8a78a1239 external_instance_event /usr/lib/python3.12/site-packages/nova/compute/manager.py:11812
Oct 09 16:40:53 compute-0 nova_compute[117331]: 2025-10-09 16:40:53.143 2 DEBUG nova.compute.manager [req-1a23e084-5601-4f0b-8d64-ea9f77928cc4 req-0f9d9803-7d45-49fb-9105-fcb9fd61b89f ee15961ad2be4bada6c106ac4f69f422 c076c42271004e7aa86e84416e33f826 - - default default] [instance: e49a1e89-0c38-4887-9bdf-64f07982ab22] Refreshing instance network info cache due to event network-changed-3bf58985-c8d6-4c25-bed7-5fe8a78a1239. external_instance_event /usr/lib/python3.12/site-packages/nova/compute/manager.py:11817
Oct 09 16:40:53 compute-0 nova_compute[117331]: 2025-10-09 16:40:53.144 2 DEBUG oslo_concurrency.lockutils [req-1a23e084-5601-4f0b-8d64-ea9f77928cc4 req-0f9d9803-7d45-49fb-9105-fcb9fd61b89f ee15961ad2be4bada6c106ac4f69f422 c076c42271004e7aa86e84416e33f826 - - default default] Acquiring lock "refresh_cache-e49a1e89-0c38-4887-9bdf-64f07982ab22" lock /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:313
Oct 09 16:40:53 compute-0 nova_compute[117331]: 2025-10-09 16:40:53.144 2 DEBUG oslo_concurrency.lockutils [req-1a23e084-5601-4f0b-8d64-ea9f77928cc4 req-0f9d9803-7d45-49fb-9105-fcb9fd61b89f ee15961ad2be4bada6c106ac4f69f422 c076c42271004e7aa86e84416e33f826 - - default default] Acquired lock "refresh_cache-e49a1e89-0c38-4887-9bdf-64f07982ab22" lock /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:316
Oct 09 16:40:53 compute-0 nova_compute[117331]: 2025-10-09 16:40:53.144 2 DEBUG nova.network.neutron [req-1a23e084-5601-4f0b-8d64-ea9f77928cc4 req-0f9d9803-7d45-49fb-9105-fcb9fd61b89f ee15961ad2be4bada6c106ac4f69f422 c076c42271004e7aa86e84416e33f826 - - default default] [instance: e49a1e89-0c38-4887-9bdf-64f07982ab22] Refreshing network info cache for port 3bf58985-c8d6-4c25-bed7-5fe8a78a1239 _get_instance_nw_info /usr/lib/python3.12/site-packages/nova/network/neutron.py:2067
Oct 09 16:40:53 compute-0 nova_compute[117331]: 2025-10-09 16:40:53.588 2 DEBUG oslo_concurrency.lockutils [None req-0dc1d776-a9ae-4854-9727-02ae661565e0 05384d72bc894b20b1cc128bf76382b2 fca25ca2f317463c909e30c8b2c188d1 - - default default] Acquiring lock "refresh_cache-e49a1e89-0c38-4887-9bdf-64f07982ab22" lock /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:313
Oct 09 16:40:53 compute-0 nova_compute[117331]: 2025-10-09 16:40:53.651 2 WARNING neutronclient.v2_0.client [req-1a23e084-5601-4f0b-8d64-ea9f77928cc4 req-0f9d9803-7d45-49fb-9105-fcb9fd61b89f ee15961ad2be4bada6c106ac4f69f422 c076c42271004e7aa86e84416e33f826 - - default default] The python binding code in neutronclient is deprecated in favor of OpenstackSDK, please use that as this will be removed in a future release.
Oct 09 16:40:53 compute-0 podman[153948]: 2025-10-09 16:40:53.871723225 +0000 UTC m=+0.097225850 container health_status 64fa251112d01abb34ec1bb8e22019815ddcaaaa391ce5ec91517147ac5a9994 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.33.7, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, release=1755695350, build-date=2025-08-20T13:12:41, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, io.openshift.expose-services=, url=https://catalog.redhat.com/en/search?searchType=containers, vendor=Red Hat, Inc., summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., io.openshift.tags=minimal rhel9, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_id=edpm, maintainer=Red Hat, Inc., vcs-type=git, architecture=x86_64, container_name=openstack_network_exporter, managed_by=edpm_ansible, com.redhat.component=ubi9-minimal-container, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., distribution-scope=public, name=ubi9-minimal, version=9.6, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly.)
Oct 09 16:40:53 compute-0 podman[153949]: 2025-10-09 16:40:53.908788053 +0000 UTC m=+0.131670065 container health_status d615e4f24ff62fbb14be4a63e6da568ec5b22309773c1c66103b6b4de64a8a2f (image=38.102.83.66:5001/podified-master-centos10/openstack-ovn-controller:watcher_latest, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=watcher_latest, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': '38.102.83.66:5001/podified-master-centos10/openstack-ovn-controller:watcher_latest', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 10 Base Image, config_id=ovn_controller, io.buildah.version=1.41.4, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251007)
Oct 09 16:40:54 compute-0 nova_compute[117331]: 2025-10-09 16:40:54.367 2 DEBUG nova.network.neutron [req-1a23e084-5601-4f0b-8d64-ea9f77928cc4 req-0f9d9803-7d45-49fb-9105-fcb9fd61b89f ee15961ad2be4bada6c106ac4f69f422 c076c42271004e7aa86e84416e33f826 - - default default] [instance: e49a1e89-0c38-4887-9bdf-64f07982ab22] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.12/site-packages/nova/network/neutron.py:3383
Oct 09 16:40:54 compute-0 nova_compute[117331]: 2025-10-09 16:40:54.519 2 DEBUG nova.network.neutron [req-1a23e084-5601-4f0b-8d64-ea9f77928cc4 req-0f9d9803-7d45-49fb-9105-fcb9fd61b89f ee15961ad2be4bada6c106ac4f69f422 c076c42271004e7aa86e84416e33f826 - - default default] [instance: e49a1e89-0c38-4887-9bdf-64f07982ab22] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.12/site-packages/nova/network/neutron.py:116
Oct 09 16:40:55 compute-0 nova_compute[117331]: 2025-10-09 16:40:55.028 2 DEBUG oslo_concurrency.lockutils [req-1a23e084-5601-4f0b-8d64-ea9f77928cc4 req-0f9d9803-7d45-49fb-9105-fcb9fd61b89f ee15961ad2be4bada6c106ac4f69f422 c076c42271004e7aa86e84416e33f826 - - default default] Releasing lock "refresh_cache-e49a1e89-0c38-4887-9bdf-64f07982ab22" lock /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:334
Oct 09 16:40:55 compute-0 nova_compute[117331]: 2025-10-09 16:40:55.029 2 DEBUG oslo_concurrency.lockutils [None req-0dc1d776-a9ae-4854-9727-02ae661565e0 05384d72bc894b20b1cc128bf76382b2 fca25ca2f317463c909e30c8b2c188d1 - - default default] Acquired lock "refresh_cache-e49a1e89-0c38-4887-9bdf-64f07982ab22" lock /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:316
Oct 09 16:40:55 compute-0 nova_compute[117331]: 2025-10-09 16:40:55.029 2 DEBUG nova.network.neutron [None req-0dc1d776-a9ae-4854-9727-02ae661565e0 05384d72bc894b20b1cc128bf76382b2 fca25ca2f317463c909e30c8b2c188d1 - - default default] [instance: e49a1e89-0c38-4887-9bdf-64f07982ab22] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.12/site-packages/nova/network/neutron.py:2070
Oct 09 16:40:56 compute-0 nova_compute[117331]: 2025-10-09 16:40:56.284 2 DEBUG nova.network.neutron [None req-0dc1d776-a9ae-4854-9727-02ae661565e0 05384d72bc894b20b1cc128bf76382b2 fca25ca2f317463c909e30c8b2c188d1 - - default default] [instance: e49a1e89-0c38-4887-9bdf-64f07982ab22] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.12/site-packages/nova/network/neutron.py:3383
Oct 09 16:40:56 compute-0 nova_compute[117331]: 2025-10-09 16:40:56.473 2 WARNING neutronclient.v2_0.client [None req-0dc1d776-a9ae-4854-9727-02ae661565e0 05384d72bc894b20b1cc128bf76382b2 fca25ca2f317463c909e30c8b2c188d1 - - default default] The python binding code in neutronclient is deprecated in favor of OpenstackSDK, please use that as this will be removed in a future release.
Oct 09 16:40:56 compute-0 nova_compute[117331]: 2025-10-09 16:40:56.629 2 DEBUG nova.network.neutron [None req-0dc1d776-a9ae-4854-9727-02ae661565e0 05384d72bc894b20b1cc128bf76382b2 fca25ca2f317463c909e30c8b2c188d1 - - default default] [instance: e49a1e89-0c38-4887-9bdf-64f07982ab22] Updating instance_info_cache with network_info: [{"id": "3bf58985-c8d6-4c25-bed7-5fe8a78a1239", "address": "fa:16:3e:11:4c:7f", "network": {"id": "faa2f899-e3f1-48c6-ac37-859a6fb5c6cc", "bridge": "br-int", "label": "tempest-TestExecuteZoneMigrationStrategy-926478805-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "655616b8f80249ceab702bd3d943237d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap3bf58985-c8", "ovs_interfaceid": "3bf58985-c8d6-4c25-bed7-5fe8a78a1239", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.12/site-packages/nova/network/neutron.py:116
Oct 09 16:40:57 compute-0 nova_compute[117331]: 2025-10-09 16:40:57.135 2 DEBUG oslo_concurrency.lockutils [None req-0dc1d776-a9ae-4854-9727-02ae661565e0 05384d72bc894b20b1cc128bf76382b2 fca25ca2f317463c909e30c8b2c188d1 - - default default] Releasing lock "refresh_cache-e49a1e89-0c38-4887-9bdf-64f07982ab22" lock /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:334
Oct 09 16:40:57 compute-0 nova_compute[117331]: 2025-10-09 16:40:57.135 2 DEBUG nova.compute.manager [None req-0dc1d776-a9ae-4854-9727-02ae661565e0 05384d72bc894b20b1cc128bf76382b2 fca25ca2f317463c909e30c8b2c188d1 - - default default] [instance: e49a1e89-0c38-4887-9bdf-64f07982ab22] Instance network_info: |[{"id": "3bf58985-c8d6-4c25-bed7-5fe8a78a1239", "address": "fa:16:3e:11:4c:7f", "network": {"id": "faa2f899-e3f1-48c6-ac37-859a6fb5c6cc", "bridge": "br-int", "label": "tempest-TestExecuteZoneMigrationStrategy-926478805-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "655616b8f80249ceab702bd3d943237d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap3bf58985-c8", "ovs_interfaceid": "3bf58985-c8d6-4c25-bed7-5fe8a78a1239", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.12/site-packages/nova/compute/manager.py:2031
Oct 09 16:40:57 compute-0 nova_compute[117331]: 2025-10-09 16:40:57.138 2 DEBUG nova.virt.libvirt.driver [None req-0dc1d776-a9ae-4854-9727-02ae661565e0 05384d72bc894b20b1cc128bf76382b2 fca25ca2f317463c909e30c8b2c188d1 - - default default] [instance: e49a1e89-0c38-4887-9bdf-64f07982ab22] Start _get_guest_xml network_info=[{"id": "3bf58985-c8d6-4c25-bed7-5fe8a78a1239", "address": "fa:16:3e:11:4c:7f", "network": {"id": "faa2f899-e3f1-48c6-ac37-859a6fb5c6cc", "bridge": "br-int", "label": "tempest-TestExecuteZoneMigrationStrategy-926478805-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "655616b8f80249ceab702bd3d943237d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap3bf58985-c8", "ovs_interfaceid": "3bf58985-c8d6-4c25-bed7-5fe8a78a1239", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-09T16:07:02Z,direct_url=<?>,disk_format='qcow2',id=b7d6e0af-25e4-4227-9dc6-43143898ceee,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='b30e8cf5e10742f190212b4cb97ce2c9',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-09T16:07:04Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'device_type': 'disk', 'encrypted': False, 'size': 0, 'encryption_format': None, 'guest_format': None, 'boot_index': 0, 'device_name': '/dev/vda', 'disk_bus': 'virtio', 'encryption_options': None, 'encryption_secret_uuid': None, 'image_id': 'b7d6e0af-25e4-4227-9dc6-43143898ceee'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None}share_info=None _get_guest_xml /usr/lib/python3.12/site-packages/nova/virt/libvirt/driver.py:8046
Oct 09 16:40:57 compute-0 nova_compute[117331]: 2025-10-09 16:40:57.142 2 WARNING nova.virt.libvirt.driver [None req-0dc1d776-a9ae-4854-9727-02ae661565e0 05384d72bc894b20b1cc128bf76382b2 fca25ca2f317463c909e30c8b2c188d1 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Oct 09 16:40:57 compute-0 nova_compute[117331]: 2025-10-09 16:40:57.143 2 DEBUG nova.virt.driver [None req-0dc1d776-a9ae-4854-9727-02ae661565e0 05384d72bc894b20b1cc128bf76382b2 fca25ca2f317463c909e30c8b2c188d1 - - default default] InstanceDriverMetadata: InstanceDriverMetadata(root_type='image', root_id='b7d6e0af-25e4-4227-9dc6-43143898ceee', instance_meta=NovaInstanceMeta(name='tempest-TestExecuteZoneMigrationStrategy-server-484584536', uuid='e49a1e89-0c38-4887-9bdf-64f07982ab22'), owner=OwnerMeta(userid='05384d72bc894b20b1cc128bf76382b2', username='tempest-TestExecuteZoneMigrationStrategy-1067041161-project-admin', projectid='fca25ca2f317463c909e30c8b2c188d1', projectname='tempest-TestExecuteZoneMigrationStrategy-1067041161'), image=ImageMeta(id='b7d6e0af-25e4-4227-9dc6-43143898ceee', name=None, container_format='bare', disk_format='qcow2', min_disk=1, min_ram=0, properties={'hw_rng_model': 'virtio'}), flavor=FlavorMeta(name='m1.nano', flavorid='5aeb5bf9-70f3-4ab6-b418-2c3319a7d2b3', memory_mb=128, vcpus=1, root_gb=1, ephemeral_gb=0, extra_specs={'hw_rng:allowed': 'True'}, swap=0), network_info=[{"id": "3bf58985-c8d6-4c25-bed7-5fe8a78a1239", "address": "fa:16:3e:11:4c:7f", "network": {"id": "faa2f899-e3f1-48c6-ac37-859a6fb5c6cc", "bridge": "br-int", "label": "tempest-TestExecuteZoneMigrationStrategy-926478805-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "655616b8f80249ceab702bd3d943237d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap3bf58985-c8", "ovs_interfaceid": "3bf58985-c8d6-4c25-bed7-5fe8a78a1239", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}], nova_package='32.1.0-0.20251008143712.076498e.el10', creation_time=1760028057.1430779) get_instance_driver_metadata /usr/lib/python3.12/site-packages/nova/virt/driver.py:437
Oct 09 16:40:57 compute-0 nova_compute[117331]: 2025-10-09 16:40:57.146 2 DEBUG nova.virt.libvirt.host [None req-0dc1d776-a9ae-4854-9727-02ae661565e0 05384d72bc894b20b1cc128bf76382b2 fca25ca2f317463c909e30c8b2c188d1 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.12/site-packages/nova/virt/libvirt/host.py:1698
Oct 09 16:40:57 compute-0 nova_compute[117331]: 2025-10-09 16:40:57.147 2 DEBUG nova.virt.libvirt.host [None req-0dc1d776-a9ae-4854-9727-02ae661565e0 05384d72bc894b20b1cc128bf76382b2 fca25ca2f317463c909e30c8b2c188d1 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.12/site-packages/nova/virt/libvirt/host.py:1708
Oct 09 16:40:57 compute-0 nova_compute[117331]: 2025-10-09 16:40:57.150 2 DEBUG nova.virt.libvirt.host [None req-0dc1d776-a9ae-4854-9727-02ae661565e0 05384d72bc894b20b1cc128bf76382b2 fca25ca2f317463c909e30c8b2c188d1 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.12/site-packages/nova/virt/libvirt/host.py:1717
Oct 09 16:40:57 compute-0 nova_compute[117331]: 2025-10-09 16:40:57.150 2 DEBUG nova.virt.libvirt.host [None req-0dc1d776-a9ae-4854-9727-02ae661565e0 05384d72bc894b20b1cc128bf76382b2 fca25ca2f317463c909e30c8b2c188d1 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.12/site-packages/nova/virt/libvirt/host.py:1724
Oct 09 16:40:57 compute-0 nova_compute[117331]: 2025-10-09 16:40:57.150 2 DEBUG nova.virt.libvirt.driver [None req-0dc1d776-a9ae-4854-9727-02ae661565e0 05384d72bc894b20b1cc128bf76382b2 fca25ca2f317463c909e30c8b2c188d1 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.12/site-packages/nova/virt/libvirt/driver.py:5809
Oct 09 16:40:57 compute-0 nova_compute[117331]: 2025-10-09 16:40:57.151 2 DEBUG nova.virt.hardware [None req-0dc1d776-a9ae-4854-9727-02ae661565e0 05384d72bc894b20b1cc128bf76382b2 fca25ca2f317463c909e30c8b2c188d1 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-10-09T16:07:00Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='5aeb5bf9-70f3-4ab6-b418-2c3319a7d2b3',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-09T16:07:02Z,direct_url=<?>,disk_format='qcow2',id=b7d6e0af-25e4-4227-9dc6-43143898ceee,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='b30e8cf5e10742f190212b4cb97ce2c9',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-09T16:07:04Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.12/site-packages/nova/virt/hardware.py:571
Oct 09 16:40:57 compute-0 nova_compute[117331]: 2025-10-09 16:40:57.151 2 DEBUG nova.virt.hardware [None req-0dc1d776-a9ae-4854-9727-02ae661565e0 05384d72bc894b20b1cc128bf76382b2 fca25ca2f317463c909e30c8b2c188d1 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.12/site-packages/nova/virt/hardware.py:356
Oct 09 16:40:57 compute-0 nova_compute[117331]: 2025-10-09 16:40:57.151 2 DEBUG nova.virt.hardware [None req-0dc1d776-a9ae-4854-9727-02ae661565e0 05384d72bc894b20b1cc128bf76382b2 fca25ca2f317463c909e30c8b2c188d1 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.12/site-packages/nova/virt/hardware.py:360
Oct 09 16:40:57 compute-0 nova_compute[117331]: 2025-10-09 16:40:57.151 2 DEBUG nova.virt.hardware [None req-0dc1d776-a9ae-4854-9727-02ae661565e0 05384d72bc894b20b1cc128bf76382b2 fca25ca2f317463c909e30c8b2c188d1 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.12/site-packages/nova/virt/hardware.py:396
Oct 09 16:40:57 compute-0 nova_compute[117331]: 2025-10-09 16:40:57.152 2 DEBUG nova.virt.hardware [None req-0dc1d776-a9ae-4854-9727-02ae661565e0 05384d72bc894b20b1cc128bf76382b2 fca25ca2f317463c909e30c8b2c188d1 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.12/site-packages/nova/virt/hardware.py:400
Oct 09 16:40:57 compute-0 nova_compute[117331]: 2025-10-09 16:40:57.152 2 DEBUG nova.virt.hardware [None req-0dc1d776-a9ae-4854-9727-02ae661565e0 05384d72bc894b20b1cc128bf76382b2 fca25ca2f317463c909e30c8b2c188d1 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.12/site-packages/nova/virt/hardware.py:438
Oct 09 16:40:57 compute-0 nova_compute[117331]: 2025-10-09 16:40:57.152 2 DEBUG nova.virt.hardware [None req-0dc1d776-a9ae-4854-9727-02ae661565e0 05384d72bc894b20b1cc128bf76382b2 fca25ca2f317463c909e30c8b2c188d1 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.12/site-packages/nova/virt/hardware.py:577
Oct 09 16:40:57 compute-0 nova_compute[117331]: 2025-10-09 16:40:57.152 2 DEBUG nova.virt.hardware [None req-0dc1d776-a9ae-4854-9727-02ae661565e0 05384d72bc894b20b1cc128bf76382b2 fca25ca2f317463c909e30c8b2c188d1 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.12/site-packages/nova/virt/hardware.py:479
Oct 09 16:40:57 compute-0 nova_compute[117331]: 2025-10-09 16:40:57.152 2 DEBUG nova.virt.hardware [None req-0dc1d776-a9ae-4854-9727-02ae661565e0 05384d72bc894b20b1cc128bf76382b2 fca25ca2f317463c909e30c8b2c188d1 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.12/site-packages/nova/virt/hardware.py:509
Oct 09 16:40:57 compute-0 nova_compute[117331]: 2025-10-09 16:40:57.153 2 DEBUG nova.virt.hardware [None req-0dc1d776-a9ae-4854-9727-02ae661565e0 05384d72bc894b20b1cc128bf76382b2 fca25ca2f317463c909e30c8b2c188d1 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.12/site-packages/nova/virt/hardware.py:583
Oct 09 16:40:57 compute-0 nova_compute[117331]: 2025-10-09 16:40:57.153 2 DEBUG nova.virt.hardware [None req-0dc1d776-a9ae-4854-9727-02ae661565e0 05384d72bc894b20b1cc128bf76382b2 fca25ca2f317463c909e30c8b2c188d1 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.12/site-packages/nova/virt/hardware.py:585
Oct 09 16:40:57 compute-0 nova_compute[117331]: 2025-10-09 16:40:57.156 2 DEBUG nova.virt.libvirt.vif [None req-0dc1d776-a9ae-4854-9727-02ae661565e0 05384d72bc894b20b1cc128bf76382b2 fca25ca2f317463c909e30c8b2c188d1 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,compute_id=1,config_drive='',created_at=2025-10-09T16:40:45Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description=None,display_name='tempest-TestExecuteZoneMigrationStrategy-server-484584536',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testexecutezonemigrationstrategy-server-484584536',id=30,image_ref='b7d6e0af-25e4-4227-9dc6-43143898ceee',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='fca25ca2f317463c909e30c8b2c188d1',ramdisk_id='',reservation_id='r-4loi0xvy',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,manager,admin,member',image_base_image_ref='b7d6e0af-25e4-4227-9dc6-43143898ceee',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestExecuteZoneMigrationStrategy-1067041161',owner_user_name='tempest-TestExecuteZoneMigrationStrategy-1067041161-project-admin'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-09T16:40:51Z,user_data=None,user_id='05384d72bc894b20b1cc128bf76382b2',uuid=e49a1e89-0c38-4887-9bdf-64f07982ab22,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "3bf58985-c8d6-4c25-bed7-5fe8a78a1239", "address": "fa:16:3e:11:4c:7f", "network": {"id": "faa2f899-e3f1-48c6-ac37-859a6fb5c6cc", "bridge": "br-int", "label": "tempest-TestExecuteZoneMigrationStrategy-926478805-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "655616b8f80249ceab702bd3d943237d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap3bf58985-c8", "ovs_interfaceid": "3bf58985-c8d6-4c25-bed7-5fe8a78a1239", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.12/site-packages/nova/virt/libvirt/vif.py:574
Oct 09 16:40:57 compute-0 nova_compute[117331]: 2025-10-09 16:40:57.157 2 DEBUG nova.network.os_vif_util [None req-0dc1d776-a9ae-4854-9727-02ae661565e0 05384d72bc894b20b1cc128bf76382b2 fca25ca2f317463c909e30c8b2c188d1 - - default default] Converting VIF {"id": "3bf58985-c8d6-4c25-bed7-5fe8a78a1239", "address": "fa:16:3e:11:4c:7f", "network": {"id": "faa2f899-e3f1-48c6-ac37-859a6fb5c6cc", "bridge": "br-int", "label": "tempest-TestExecuteZoneMigrationStrategy-926478805-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "655616b8f80249ceab702bd3d943237d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap3bf58985-c8", "ovs_interfaceid": "3bf58985-c8d6-4c25-bed7-5fe8a78a1239", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.12/site-packages/nova/network/os_vif_util.py:511
Oct 09 16:40:57 compute-0 nova_compute[117331]: 2025-10-09 16:40:57.157 2 DEBUG nova.network.os_vif_util [None req-0dc1d776-a9ae-4854-9727-02ae661565e0 05384d72bc894b20b1cc128bf76382b2 fca25ca2f317463c909e30c8b2c188d1 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:11:4c:7f,bridge_name='br-int',has_traffic_filtering=True,id=3bf58985-c8d6-4c25-bed7-5fe8a78a1239,network=Network(faa2f899-e3f1-48c6-ac37-859a6fb5c6cc),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap3bf58985-c8') nova_to_osvif_vif /usr/lib/python3.12/site-packages/nova/network/os_vif_util.py:548
Oct 09 16:40:57 compute-0 nova_compute[117331]: 2025-10-09 16:40:57.158 2 DEBUG nova.objects.instance [None req-0dc1d776-a9ae-4854-9727-02ae661565e0 05384d72bc894b20b1cc128bf76382b2 fca25ca2f317463c909e30c8b2c188d1 - - default default] Lazy-loading 'pci_devices' on Instance uuid e49a1e89-0c38-4887-9bdf-64f07982ab22 obj_load_attr /usr/lib/python3.12/site-packages/nova/objects/instance.py:1141
Oct 09 16:40:57 compute-0 nova_compute[117331]: 2025-10-09 16:40:57.430 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 09 16:40:57 compute-0 nova_compute[117331]: 2025-10-09 16:40:57.667 2 DEBUG nova.virt.libvirt.driver [None req-0dc1d776-a9ae-4854-9727-02ae661565e0 05384d72bc894b20b1cc128bf76382b2 fca25ca2f317463c909e30c8b2c188d1 - - default default] [instance: e49a1e89-0c38-4887-9bdf-64f07982ab22] End _get_guest_xml xml=<domain type="kvm">
Oct 09 16:40:57 compute-0 nova_compute[117331]:   <uuid>e49a1e89-0c38-4887-9bdf-64f07982ab22</uuid>
Oct 09 16:40:57 compute-0 nova_compute[117331]:   <name>instance-0000001e</name>
Oct 09 16:40:57 compute-0 nova_compute[117331]:   <memory>131072</memory>
Oct 09 16:40:57 compute-0 nova_compute[117331]:   <vcpu>1</vcpu>
Oct 09 16:40:57 compute-0 nova_compute[117331]:   <metadata>
Oct 09 16:40:57 compute-0 nova_compute[117331]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Oct 09 16:40:57 compute-0 nova_compute[117331]:       <nova:package version="32.1.0-0.20251008143712.076498e.el10"/>
Oct 09 16:40:57 compute-0 nova_compute[117331]:       <nova:name>tempest-TestExecuteZoneMigrationStrategy-server-484584536</nova:name>
Oct 09 16:40:57 compute-0 nova_compute[117331]:       <nova:creationTime>2025-10-09 16:40:57</nova:creationTime>
Oct 09 16:40:57 compute-0 nova_compute[117331]:       <nova:flavor name="m1.nano" id="5aeb5bf9-70f3-4ab6-b418-2c3319a7d2b3">
Oct 09 16:40:57 compute-0 nova_compute[117331]:         <nova:memory>128</nova:memory>
Oct 09 16:40:57 compute-0 nova_compute[117331]:         <nova:disk>1</nova:disk>
Oct 09 16:40:57 compute-0 nova_compute[117331]:         <nova:swap>0</nova:swap>
Oct 09 16:40:57 compute-0 nova_compute[117331]:         <nova:ephemeral>0</nova:ephemeral>
Oct 09 16:40:57 compute-0 nova_compute[117331]:         <nova:vcpus>1</nova:vcpus>
Oct 09 16:40:57 compute-0 nova_compute[117331]:         <nova:extraSpecs>
Oct 09 16:40:57 compute-0 nova_compute[117331]:           <nova:extraSpec name="hw_rng:allowed">True</nova:extraSpec>
Oct 09 16:40:57 compute-0 nova_compute[117331]:         </nova:extraSpecs>
Oct 09 16:40:57 compute-0 nova_compute[117331]:       </nova:flavor>
Oct 09 16:40:57 compute-0 nova_compute[117331]:       <nova:image uuid="b7d6e0af-25e4-4227-9dc6-43143898ceee">
Oct 09 16:40:57 compute-0 nova_compute[117331]:         <nova:containerFormat>bare</nova:containerFormat>
Oct 09 16:40:57 compute-0 nova_compute[117331]:         <nova:diskFormat>qcow2</nova:diskFormat>
Oct 09 16:40:57 compute-0 nova_compute[117331]:         <nova:minDisk>1</nova:minDisk>
Oct 09 16:40:57 compute-0 nova_compute[117331]:         <nova:minRam>0</nova:minRam>
Oct 09 16:40:57 compute-0 nova_compute[117331]:         <nova:properties>
Oct 09 16:40:57 compute-0 nova_compute[117331]:           <nova:property name="hw_rng_model">virtio</nova:property>
Oct 09 16:40:57 compute-0 nova_compute[117331]:         </nova:properties>
Oct 09 16:40:57 compute-0 nova_compute[117331]:       </nova:image>
Oct 09 16:40:57 compute-0 nova_compute[117331]:       <nova:owner>
Oct 09 16:40:57 compute-0 nova_compute[117331]:         <nova:user uuid="05384d72bc894b20b1cc128bf76382b2">tempest-TestExecuteZoneMigrationStrategy-1067041161-project-admin</nova:user>
Oct 09 16:40:57 compute-0 nova_compute[117331]:         <nova:project uuid="fca25ca2f317463c909e30c8b2c188d1">tempest-TestExecuteZoneMigrationStrategy-1067041161</nova:project>
Oct 09 16:40:57 compute-0 nova_compute[117331]:       </nova:owner>
Oct 09 16:40:57 compute-0 nova_compute[117331]:       <nova:root type="image" uuid="b7d6e0af-25e4-4227-9dc6-43143898ceee"/>
Oct 09 16:40:57 compute-0 nova_compute[117331]:       <nova:ports>
Oct 09 16:40:57 compute-0 nova_compute[117331]:         <nova:port uuid="3bf58985-c8d6-4c25-bed7-5fe8a78a1239">
Oct 09 16:40:57 compute-0 nova_compute[117331]:           <nova:ip type="fixed" address="10.100.0.3" ipVersion="4"/>
Oct 09 16:40:57 compute-0 nova_compute[117331]:         </nova:port>
Oct 09 16:40:57 compute-0 nova_compute[117331]:       </nova:ports>
Oct 09 16:40:57 compute-0 nova_compute[117331]:     </nova:instance>
Oct 09 16:40:57 compute-0 nova_compute[117331]:   </metadata>
Oct 09 16:40:57 compute-0 nova_compute[117331]:   <sysinfo type="smbios">
Oct 09 16:40:57 compute-0 nova_compute[117331]:     <system>
Oct 09 16:40:57 compute-0 nova_compute[117331]:       <entry name="manufacturer">RDO</entry>
Oct 09 16:40:57 compute-0 nova_compute[117331]:       <entry name="product">OpenStack Compute</entry>
Oct 09 16:40:57 compute-0 nova_compute[117331]:       <entry name="version">32.1.0-0.20251008143712.076498e.el10</entry>
Oct 09 16:40:57 compute-0 nova_compute[117331]:       <entry name="serial">e49a1e89-0c38-4887-9bdf-64f07982ab22</entry>
Oct 09 16:40:57 compute-0 nova_compute[117331]:       <entry name="uuid">e49a1e89-0c38-4887-9bdf-64f07982ab22</entry>
Oct 09 16:40:57 compute-0 nova_compute[117331]:       <entry name="family">Virtual Machine</entry>
Oct 09 16:40:57 compute-0 nova_compute[117331]:     </system>
Oct 09 16:40:57 compute-0 nova_compute[117331]:   </sysinfo>
Oct 09 16:40:57 compute-0 nova_compute[117331]:   <os>
Oct 09 16:40:57 compute-0 nova_compute[117331]:     <type arch="x86_64" machine="q35">hvm</type>
Oct 09 16:40:57 compute-0 nova_compute[117331]:     <boot dev="hd"/>
Oct 09 16:40:57 compute-0 nova_compute[117331]:     <smbios mode="sysinfo"/>
Oct 09 16:40:57 compute-0 nova_compute[117331]:   </os>
Oct 09 16:40:57 compute-0 nova_compute[117331]:   <features>
Oct 09 16:40:57 compute-0 nova_compute[117331]:     <acpi/>
Oct 09 16:40:57 compute-0 nova_compute[117331]:     <apic/>
Oct 09 16:40:57 compute-0 nova_compute[117331]:     <vmcoreinfo/>
Oct 09 16:40:57 compute-0 nova_compute[117331]:   </features>
Oct 09 16:40:57 compute-0 nova_compute[117331]:   <clock offset="utc">
Oct 09 16:40:57 compute-0 nova_compute[117331]:     <timer name="pit" tickpolicy="delay"/>
Oct 09 16:40:57 compute-0 nova_compute[117331]:     <timer name="rtc" tickpolicy="catchup"/>
Oct 09 16:40:57 compute-0 nova_compute[117331]:     <timer name="hpet" present="no"/>
Oct 09 16:40:57 compute-0 nova_compute[117331]:   </clock>
Oct 09 16:40:57 compute-0 nova_compute[117331]:   <cpu mode="host-model" match="exact">
Oct 09 16:40:57 compute-0 nova_compute[117331]:     <topology sockets="1" cores="1" threads="1"/>
Oct 09 16:40:57 compute-0 nova_compute[117331]:   </cpu>
Oct 09 16:40:57 compute-0 nova_compute[117331]:   <devices>
Oct 09 16:40:57 compute-0 nova_compute[117331]:     <disk type="file" device="disk">
Oct 09 16:40:57 compute-0 nova_compute[117331]:       <driver name="qemu" type="qcow2" cache="none"/>
Oct 09 16:40:57 compute-0 nova_compute[117331]:       <source file="/var/lib/nova/instances/e49a1e89-0c38-4887-9bdf-64f07982ab22/disk"/>
Oct 09 16:40:57 compute-0 nova_compute[117331]:       <target dev="vda" bus="virtio"/>
Oct 09 16:40:57 compute-0 nova_compute[117331]:     </disk>
Oct 09 16:40:57 compute-0 nova_compute[117331]:     <disk type="file" device="cdrom">
Oct 09 16:40:57 compute-0 nova_compute[117331]:       <driver name="qemu" type="raw" cache="none"/>
Oct 09 16:40:57 compute-0 nova_compute[117331]:       <source file="/var/lib/nova/instances/e49a1e89-0c38-4887-9bdf-64f07982ab22/disk.config"/>
Oct 09 16:40:57 compute-0 nova_compute[117331]:       <target dev="sda" bus="sata"/>
Oct 09 16:40:57 compute-0 nova_compute[117331]:     </disk>
Oct 09 16:40:57 compute-0 nova_compute[117331]:     <interface type="ethernet">
Oct 09 16:40:57 compute-0 nova_compute[117331]:       <mac address="fa:16:3e:11:4c:7f"/>
Oct 09 16:40:57 compute-0 nova_compute[117331]:       <model type="virtio"/>
Oct 09 16:40:57 compute-0 nova_compute[117331]:       <driver name="vhost" rx_queue_size="512"/>
Oct 09 16:40:57 compute-0 nova_compute[117331]:       <mtu size="1442"/>
Oct 09 16:40:57 compute-0 nova_compute[117331]:       <target dev="tap3bf58985-c8"/>
Oct 09 16:40:57 compute-0 nova_compute[117331]:     </interface>
Oct 09 16:40:57 compute-0 nova_compute[117331]:     <serial type="pty">
Oct 09 16:40:57 compute-0 nova_compute[117331]:       <log file="/var/lib/nova/instances/e49a1e89-0c38-4887-9bdf-64f07982ab22/console.log" append="off"/>
Oct 09 16:40:57 compute-0 nova_compute[117331]:     </serial>
Oct 09 16:40:57 compute-0 nova_compute[117331]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Oct 09 16:40:57 compute-0 nova_compute[117331]:     <video>
Oct 09 16:40:57 compute-0 nova_compute[117331]:       <model type="virtio"/>
Oct 09 16:40:57 compute-0 nova_compute[117331]:     </video>
Oct 09 16:40:57 compute-0 nova_compute[117331]:     <input type="tablet" bus="usb"/>
Oct 09 16:40:57 compute-0 nova_compute[117331]:     <rng model="virtio">
Oct 09 16:40:57 compute-0 nova_compute[117331]:       <backend model="random">/dev/urandom</backend>
Oct 09 16:40:57 compute-0 nova_compute[117331]:     </rng>
Oct 09 16:40:57 compute-0 nova_compute[117331]:     <controller type="pci" model="pcie-root"/>
Oct 09 16:40:57 compute-0 nova_compute[117331]:     <controller type="pci" model="pcie-root-port"/>
Oct 09 16:40:57 compute-0 nova_compute[117331]:     <controller type="pci" model="pcie-root-port"/>
Oct 09 16:40:57 compute-0 nova_compute[117331]:     <controller type="pci" model="pcie-root-port"/>
Oct 09 16:40:57 compute-0 nova_compute[117331]:     <controller type="pci" model="pcie-root-port"/>
Oct 09 16:40:57 compute-0 nova_compute[117331]:     <controller type="pci" model="pcie-root-port"/>
Oct 09 16:40:57 compute-0 nova_compute[117331]:     <controller type="pci" model="pcie-root-port"/>
Oct 09 16:40:57 compute-0 nova_compute[117331]:     <controller type="pci" model="pcie-root-port"/>
Oct 09 16:40:57 compute-0 nova_compute[117331]:     <controller type="pci" model="pcie-root-port"/>
Oct 09 16:40:57 compute-0 nova_compute[117331]:     <controller type="pci" model="pcie-root-port"/>
Oct 09 16:40:57 compute-0 nova_compute[117331]:     <controller type="pci" model="pcie-root-port"/>
Oct 09 16:40:57 compute-0 nova_compute[117331]:     <controller type="pci" model="pcie-root-port"/>
Oct 09 16:40:57 compute-0 nova_compute[117331]:     <controller type="pci" model="pcie-root-port"/>
Oct 09 16:40:57 compute-0 nova_compute[117331]:     <controller type="pci" model="pcie-root-port"/>
Oct 09 16:40:57 compute-0 nova_compute[117331]:     <controller type="pci" model="pcie-root-port"/>
Oct 09 16:40:57 compute-0 nova_compute[117331]:     <controller type="pci" model="pcie-root-port"/>
Oct 09 16:40:57 compute-0 nova_compute[117331]:     <controller type="pci" model="pcie-root-port"/>
Oct 09 16:40:57 compute-0 nova_compute[117331]:     <controller type="pci" model="pcie-root-port"/>
Oct 09 16:40:57 compute-0 nova_compute[117331]:     <controller type="pci" model="pcie-root-port"/>
Oct 09 16:40:57 compute-0 nova_compute[117331]:     <controller type="pci" model="pcie-root-port"/>
Oct 09 16:40:57 compute-0 nova_compute[117331]:     <controller type="pci" model="pcie-root-port"/>
Oct 09 16:40:57 compute-0 nova_compute[117331]:     <controller type="pci" model="pcie-root-port"/>
Oct 09 16:40:57 compute-0 nova_compute[117331]:     <controller type="pci" model="pcie-root-port"/>
Oct 09 16:40:57 compute-0 nova_compute[117331]:     <controller type="pci" model="pcie-root-port"/>
Oct 09 16:40:57 compute-0 nova_compute[117331]:     <controller type="pci" model="pcie-root-port"/>
Oct 09 16:40:57 compute-0 nova_compute[117331]:     <controller type="usb" index="0"/>
Oct 09 16:40:57 compute-0 nova_compute[117331]:     <memballoon model="virtio" autodeflate="on" freePageReporting="on">
Oct 09 16:40:57 compute-0 nova_compute[117331]:       <stats period="10"/>
Oct 09 16:40:57 compute-0 nova_compute[117331]:     </memballoon>
Oct 09 16:40:57 compute-0 nova_compute[117331]:   </devices>
Oct 09 16:40:57 compute-0 nova_compute[117331]: </domain>
Oct 09 16:40:57 compute-0 nova_compute[117331]:  _get_guest_xml /usr/lib/python3.12/site-packages/nova/virt/libvirt/driver.py:8052
Oct 09 16:40:57 compute-0 nova_compute[117331]: 2025-10-09 16:40:57.669 2 DEBUG nova.compute.manager [None req-0dc1d776-a9ae-4854-9727-02ae661565e0 05384d72bc894b20b1cc128bf76382b2 fca25ca2f317463c909e30c8b2c188d1 - - default default] [instance: e49a1e89-0c38-4887-9bdf-64f07982ab22] Preparing to wait for external event network-vif-plugged-3bf58985-c8d6-4c25-bed7-5fe8a78a1239 prepare_for_instance_event /usr/lib/python3.12/site-packages/nova/compute/manager.py:307
Oct 09 16:40:57 compute-0 nova_compute[117331]: 2025-10-09 16:40:57.670 2 DEBUG oslo_concurrency.lockutils [None req-0dc1d776-a9ae-4854-9727-02ae661565e0 05384d72bc894b20b1cc128bf76382b2 fca25ca2f317463c909e30c8b2c188d1 - - default default] Acquiring lock "e49a1e89-0c38-4887-9bdf-64f07982ab22-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:405
Oct 09 16:40:57 compute-0 nova_compute[117331]: 2025-10-09 16:40:57.670 2 DEBUG oslo_concurrency.lockutils [None req-0dc1d776-a9ae-4854-9727-02ae661565e0 05384d72bc894b20b1cc128bf76382b2 fca25ca2f317463c909e30c8b2c188d1 - - default default] Lock "e49a1e89-0c38-4887-9bdf-64f07982ab22-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:410
Oct 09 16:40:57 compute-0 nova_compute[117331]: 2025-10-09 16:40:57.670 2 DEBUG oslo_concurrency.lockutils [None req-0dc1d776-a9ae-4854-9727-02ae661565e0 05384d72bc894b20b1cc128bf76382b2 fca25ca2f317463c909e30c8b2c188d1 - - default default] Lock "e49a1e89-0c38-4887-9bdf-64f07982ab22-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:424
Oct 09 16:40:57 compute-0 nova_compute[117331]: 2025-10-09 16:40:57.671 2 DEBUG nova.virt.libvirt.vif [None req-0dc1d776-a9ae-4854-9727-02ae661565e0 05384d72bc894b20b1cc128bf76382b2 fca25ca2f317463c909e30c8b2c188d1 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,compute_id=1,config_drive='',created_at=2025-10-09T16:40:45Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description=None,display_name='tempest-TestExecuteZoneMigrationStrategy-server-484584536',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testexecutezonemigrationstrategy-server-484584536',id=30,image_ref='b7d6e0af-25e4-4227-9dc6-43143898ceee',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='fca25ca2f317463c909e30c8b2c188d1',ramdisk_id='',reservation_id='r-4loi0xvy',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,manager,admin,member',image_base_image_ref='b7d6e0af-25e4-4227-9dc6-43143898ceee',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestExecuteZoneMigrationStrategy-1067041161',owner_user_name='tempest-TestExecuteZoneMigrationStrategy-1067041161-project-admin'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-09T16:40:51Z,user_data=None,user_id='05384d72bc894b20b1cc128bf76382b2',uuid=e49a1e89-0c38-4887-9bdf-64f07982ab22,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "3bf58985-c8d6-4c25-bed7-5fe8a78a1239", "address": "fa:16:3e:11:4c:7f", "network": {"id": "faa2f899-e3f1-48c6-ac37-859a6fb5c6cc", "bridge": "br-int", "label": "tempest-TestExecuteZoneMigrationStrategy-926478805-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "655616b8f80249ceab702bd3d943237d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap3bf58985-c8", "ovs_interfaceid": "3bf58985-c8d6-4c25-bed7-5fe8a78a1239", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.12/site-packages/nova/virt/libvirt/vif.py:721
Oct 09 16:40:57 compute-0 nova_compute[117331]: 2025-10-09 16:40:57.671 2 DEBUG nova.network.os_vif_util [None req-0dc1d776-a9ae-4854-9727-02ae661565e0 05384d72bc894b20b1cc128bf76382b2 fca25ca2f317463c909e30c8b2c188d1 - - default default] Converting VIF {"id": "3bf58985-c8d6-4c25-bed7-5fe8a78a1239", "address": "fa:16:3e:11:4c:7f", "network": {"id": "faa2f899-e3f1-48c6-ac37-859a6fb5c6cc", "bridge": "br-int", "label": "tempest-TestExecuteZoneMigrationStrategy-926478805-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "655616b8f80249ceab702bd3d943237d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap3bf58985-c8", "ovs_interfaceid": "3bf58985-c8d6-4c25-bed7-5fe8a78a1239", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.12/site-packages/nova/network/os_vif_util.py:511
Oct 09 16:40:57 compute-0 nova_compute[117331]: 2025-10-09 16:40:57.672 2 DEBUG nova.network.os_vif_util [None req-0dc1d776-a9ae-4854-9727-02ae661565e0 05384d72bc894b20b1cc128bf76382b2 fca25ca2f317463c909e30c8b2c188d1 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:11:4c:7f,bridge_name='br-int',has_traffic_filtering=True,id=3bf58985-c8d6-4c25-bed7-5fe8a78a1239,network=Network(faa2f899-e3f1-48c6-ac37-859a6fb5c6cc),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap3bf58985-c8') nova_to_osvif_vif /usr/lib/python3.12/site-packages/nova/network/os_vif_util.py:548
Oct 09 16:40:57 compute-0 nova_compute[117331]: 2025-10-09 16:40:57.672 2 DEBUG os_vif [None req-0dc1d776-a9ae-4854-9727-02ae661565e0 05384d72bc894b20b1cc128bf76382b2 fca25ca2f317463c909e30c8b2c188d1 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:11:4c:7f,bridge_name='br-int',has_traffic_filtering=True,id=3bf58985-c8d6-4c25-bed7-5fe8a78a1239,network=Network(faa2f899-e3f1-48c6-ac37-859a6fb5c6cc),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap3bf58985-c8') plug /usr/lib/python3.12/site-packages/os_vif/__init__.py:76
Oct 09 16:40:57 compute-0 nova_compute[117331]: 2025-10-09 16:40:57.672 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 22 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 09 16:40:57 compute-0 nova_compute[117331]: 2025-10-09 16:40:57.673 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.12/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 09 16:40:57 compute-0 nova_compute[117331]: 2025-10-09 16:40:57.673 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.12/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 09 16:40:57 compute-0 nova_compute[117331]: 2025-10-09 16:40:57.674 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 22 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 09 16:40:57 compute-0 nova_compute[117331]: 2025-10-09 16:40:57.674 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbCreateCommand(_result=None, table=QoS, columns={'type': 'linux-noop', 'external_ids': {'id': '8b1ffdc6-4299-585e-a4ef-c63ebbf8b4c7', '_type': 'linux-noop'}}, row=False) do_commit /usr/lib/python3.12/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 09 16:40:57 compute-0 nova_compute[117331]: 2025-10-09 16:40:57.675 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 09 16:40:57 compute-0 nova_compute[117331]: 2025-10-09 16:40:57.676 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:248
Oct 09 16:40:57 compute-0 nova_compute[117331]: 2025-10-09 16:40:57.678 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 09 16:40:57 compute-0 nova_compute[117331]: 2025-10-09 16:40:57.683 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 22 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 09 16:40:57 compute-0 nova_compute[117331]: 2025-10-09 16:40:57.683 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap3bf58985-c8, may_exist=True, interface_attrs={}) do_commit /usr/lib/python3.12/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 09 16:40:57 compute-0 nova_compute[117331]: 2025-10-09 16:40:57.684 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Port, record=tap3bf58985-c8, col_values=(('qos', UUID('2506c796-8703-4fd0-abc3-37633954b715')),), if_exists=True) do_commit /usr/lib/python3.12/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 09 16:40:57 compute-0 nova_compute[117331]: 2025-10-09 16:40:57.684 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=2): DbSetCommand(_result=None, table=Interface, record=tap3bf58985-c8, col_values=(('external_ids', {'iface-id': '3bf58985-c8d6-4c25-bed7-5fe8a78a1239', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:11:4c:7f', 'vm-uuid': 'e49a1e89-0c38-4887-9bdf-64f07982ab22'}),), if_exists=True) do_commit /usr/lib/python3.12/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 09 16:40:57 compute-0 nova_compute[117331]: 2025-10-09 16:40:57.686 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 09 16:40:57 compute-0 NetworkManager[1028]: <info>  [1760028057.6883] manager: (tap3bf58985-c8): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/101)
Oct 09 16:40:57 compute-0 nova_compute[117331]: 2025-10-09 16:40:57.689 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:248
Oct 09 16:40:57 compute-0 nova_compute[117331]: 2025-10-09 16:40:57.696 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 09 16:40:57 compute-0 nova_compute[117331]: 2025-10-09 16:40:57.696 2 INFO os_vif [None req-0dc1d776-a9ae-4854-9727-02ae661565e0 05384d72bc894b20b1cc128bf76382b2 fca25ca2f317463c909e30c8b2c188d1 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:11:4c:7f,bridge_name='br-int',has_traffic_filtering=True,id=3bf58985-c8d6-4c25-bed7-5fe8a78a1239,network=Network(faa2f899-e3f1-48c6-ac37-859a6fb5c6cc),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap3bf58985-c8')
Oct 09 16:40:59 compute-0 nova_compute[117331]: 2025-10-09 16:40:59.244 2 DEBUG nova.virt.libvirt.driver [None req-0dc1d776-a9ae-4854-9727-02ae661565e0 05384d72bc894b20b1cc128bf76382b2 fca25ca2f317463c909e30c8b2c188d1 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.12/site-packages/nova/virt/libvirt/driver.py:13022
Oct 09 16:40:59 compute-0 nova_compute[117331]: 2025-10-09 16:40:59.245 2 DEBUG nova.virt.libvirt.driver [None req-0dc1d776-a9ae-4854-9727-02ae661565e0 05384d72bc894b20b1cc128bf76382b2 fca25ca2f317463c909e30c8b2c188d1 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.12/site-packages/nova/virt/libvirt/driver.py:13022
Oct 09 16:40:59 compute-0 nova_compute[117331]: 2025-10-09 16:40:59.245 2 DEBUG nova.virt.libvirt.driver [None req-0dc1d776-a9ae-4854-9727-02ae661565e0 05384d72bc894b20b1cc128bf76382b2 fca25ca2f317463c909e30c8b2c188d1 - - default default] No VIF found with MAC fa:16:3e:11:4c:7f, not building metadata _build_interface_metadata /usr/lib/python3.12/site-packages/nova/virt/libvirt/driver.py:12998
Oct 09 16:40:59 compute-0 nova_compute[117331]: 2025-10-09 16:40:59.246 2 INFO nova.virt.libvirt.driver [None req-0dc1d776-a9ae-4854-9727-02ae661565e0 05384d72bc894b20b1cc128bf76382b2 fca25ca2f317463c909e30c8b2c188d1 - - default default] [instance: e49a1e89-0c38-4887-9bdf-64f07982ab22] Using config drive
Oct 09 16:40:59 compute-0 podman[127775]: time="2025-10-09T16:40:59Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Oct 09 16:40:59 compute-0 podman[127775]: @ - - [09/Oct/2025:16:40:59 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 19526 "" "Go-http-client/1.1"
Oct 09 16:40:59 compute-0 podman[127775]: @ - - [09/Oct/2025:16:40:59 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 3035 "" "Go-http-client/1.1"
Oct 09 16:40:59 compute-0 nova_compute[117331]: 2025-10-09 16:40:59.760 2 WARNING neutronclient.v2_0.client [None req-0dc1d776-a9ae-4854-9727-02ae661565e0 05384d72bc894b20b1cc128bf76382b2 fca25ca2f317463c909e30c8b2c188d1 - - default default] The python binding code in neutronclient is deprecated in favor of OpenstackSDK, please use that as this will be removed in a future release.
Oct 09 16:41:00 compute-0 nova_compute[117331]: 2025-10-09 16:41:00.498 2 INFO nova.virt.libvirt.driver [None req-0dc1d776-a9ae-4854-9727-02ae661565e0 05384d72bc894b20b1cc128bf76382b2 fca25ca2f317463c909e30c8b2c188d1 - - default default] [instance: e49a1e89-0c38-4887-9bdf-64f07982ab22] Creating config drive at /var/lib/nova/instances/e49a1e89-0c38-4887-9bdf-64f07982ab22/disk.config
Oct 09 16:41:00 compute-0 nova_compute[117331]: 2025-10-09 16:41:00.509 2 DEBUG oslo_concurrency.processutils [None req-0dc1d776-a9ae-4854-9727-02ae661565e0 05384d72bc894b20b1cc128bf76382b2 fca25ca2f317463c909e30c8b2c188d1 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/e49a1e89-0c38-4887-9bdf-64f07982ab22/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 32.1.0-0.20251008143712.076498e.el10 -quiet -J -r -V config-2 /tmp/tmpit8hwq4h execute /usr/lib/python3.12/site-packages/oslo_concurrency/processutils.py:349
Oct 09 16:41:00 compute-0 nova_compute[117331]: 2025-10-09 16:41:00.653 2 DEBUG oslo_concurrency.processutils [None req-0dc1d776-a9ae-4854-9727-02ae661565e0 05384d72bc894b20b1cc128bf76382b2 fca25ca2f317463c909e30c8b2c188d1 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/e49a1e89-0c38-4887-9bdf-64f07982ab22/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 32.1.0-0.20251008143712.076498e.el10 -quiet -J -r -V config-2 /tmp/tmpit8hwq4h" returned: 0 in 0.144s execute /usr/lib/python3.12/site-packages/oslo_concurrency/processutils.py:372
Oct 09 16:41:00 compute-0 kernel: tap3bf58985-c8: entered promiscuous mode
Oct 09 16:41:00 compute-0 NetworkManager[1028]: <info>  [1760028060.7395] manager: (tap3bf58985-c8): new Tun device (/org/freedesktop/NetworkManager/Devices/102)
Oct 09 16:41:00 compute-0 nova_compute[117331]: 2025-10-09 16:41:00.739 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 09 16:41:00 compute-0 ovn_controller[19752]: 2025-10-09T16:41:00Z|00280|binding|INFO|Claiming lport 3bf58985-c8d6-4c25-bed7-5fe8a78a1239 for this chassis.
Oct 09 16:41:00 compute-0 ovn_controller[19752]: 2025-10-09T16:41:00Z|00281|binding|INFO|3bf58985-c8d6-4c25-bed7-5fe8a78a1239: Claiming fa:16:3e:11:4c:7f 10.100.0.3
Oct 09 16:41:00 compute-0 nova_compute[117331]: 2025-10-09 16:41:00.751 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 09 16:41:00 compute-0 ovn_metadata_agent[28608]: 2025-10-09 16:41:00.770 28613 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:11:4c:7f 10.100.0.3'], port_security=['fa:16:3e:11:4c:7f 10.100.0.3'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.3/28', 'neutron:device_id': 'e49a1e89-0c38-4887-9bdf-64f07982ab22', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-faa2f899-e3f1-48c6-ac37-859a6fb5c6cc', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'fca25ca2f317463c909e30c8b2c188d1', 'neutron:revision_number': '4', 'neutron:security_group_ids': 'a3264ad7-df47-48af-b997-2022c89ca53a', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=0a2d5ec5-dae5-4df8-b843-efe172a6e533, chassis=[<ovs.db.idl.Row object at 0x7f37b2576840>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f37b2576840>], logical_port=3bf58985-c8d6-4c25-bed7-5fe8a78a1239) old=Port_Binding(chassis=[]) matches /usr/lib/python3.12/site-packages/ovsdbapp/backend/ovs_idl/event.py:55
Oct 09 16:41:00 compute-0 ovn_metadata_agent[28608]: 2025-10-09 16:41:00.772 28613 INFO neutron.agent.ovn.metadata.agent [-] Port 3bf58985-c8d6-4c25-bed7-5fe8a78a1239 in datapath faa2f899-e3f1-48c6-ac37-859a6fb5c6cc bound to our chassis
Oct 09 16:41:00 compute-0 ovn_metadata_agent[28608]: 2025-10-09 16:41:00.774 28613 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network faa2f899-e3f1-48c6-ac37-859a6fb5c6cc
Oct 09 16:41:00 compute-0 systemd-udevd[154014]: Network interface NamePolicy= disabled on kernel command line.
Oct 09 16:41:00 compute-0 systemd-machined[77487]: New machine qemu-24-instance-0000001e.
Oct 09 16:41:00 compute-0 ovn_metadata_agent[28608]: 2025-10-09 16:41:00.793 139687 DEBUG oslo.privsep.daemon [-] privsep: reply[5a078004-d8e5-40d2-97fc-cbd29ab656fb]: (4, False) _call_back /usr/lib/python3.12/site-packages/oslo_privsep/daemon.py:515
Oct 09 16:41:00 compute-0 ovn_metadata_agent[28608]: 2025-10-09 16:41:00.793 28613 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tapfaa2f899-e1 in ovnmeta-faa2f899-e3f1-48c6-ac37-859a6fb5c6cc namespace provision_datapath /usr/lib/python3.12/site-packages/neutron/agent/ovn/metadata/agent.py:794
Oct 09 16:41:00 compute-0 NetworkManager[1028]: <info>  [1760028060.7942] device (tap3bf58985-c8): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Oct 09 16:41:00 compute-0 NetworkManager[1028]: <info>  [1760028060.7949] device (tap3bf58985-c8): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Oct 09 16:41:00 compute-0 ovn_metadata_agent[28608]: 2025-10-09 16:41:00.796 139687 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tapfaa2f899-e0 not found in namespace None get_link_id /usr/lib/python3.12/site-packages/neutron/privileged/agent/linux/ip_lib.py:203
Oct 09 16:41:00 compute-0 ovn_metadata_agent[28608]: 2025-10-09 16:41:00.796 139687 DEBUG oslo.privsep.daemon [-] privsep: reply[5732d899-c5c5-4c7a-ba98-ff5c2a0ce25f]: (4, False) _call_back /usr/lib/python3.12/site-packages/oslo_privsep/daemon.py:515
Oct 09 16:41:00 compute-0 ovn_metadata_agent[28608]: 2025-10-09 16:41:00.797 139687 DEBUG oslo.privsep.daemon [-] privsep: reply[127bdc78-e072-406a-a549-68f92ba6f98a]: (4, False) _call_back /usr/lib/python3.12/site-packages/oslo_privsep/daemon.py:515
Oct 09 16:41:00 compute-0 ovn_metadata_agent[28608]: 2025-10-09 16:41:00.814 28727 DEBUG oslo.privsep.daemon [-] privsep: reply[f55982f4-46c4-4390-af51-3f300a7a3de4]: (4, None) _call_back /usr/lib/python3.12/site-packages/oslo_privsep/daemon.py:515
Oct 09 16:41:00 compute-0 ovn_controller[19752]: 2025-10-09T16:41:00Z|00282|binding|INFO|Setting lport 3bf58985-c8d6-4c25-bed7-5fe8a78a1239 ovn-installed in OVS
Oct 09 16:41:00 compute-0 ovn_controller[19752]: 2025-10-09T16:41:00Z|00283|binding|INFO|Setting lport 3bf58985-c8d6-4c25-bed7-5fe8a78a1239 up in Southbound
Oct 09 16:41:00 compute-0 systemd[1]: Started Virtual Machine qemu-24-instance-0000001e.
Oct 09 16:41:00 compute-0 nova_compute[117331]: 2025-10-09 16:41:00.829 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 09 16:41:00 compute-0 ovn_metadata_agent[28608]: 2025-10-09 16:41:00.837 139687 DEBUG oslo.privsep.daemon [-] privsep: reply[8314b56a-f0d8-4722-adfd-da023024b279]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.12/site-packages/oslo_privsep/daemon.py:515
Oct 09 16:41:00 compute-0 ovn_metadata_agent[28608]: 2025-10-09 16:41:00.875 141048 DEBUG oslo.privsep.daemon [-] privsep: reply[8cbd3c30-f1ec-4ea4-a06d-571bc426aa88]: (4, None) _call_back /usr/lib/python3.12/site-packages/oslo_privsep/daemon.py:515
Oct 09 16:41:00 compute-0 systemd-udevd[154018]: Network interface NamePolicy= disabled on kernel command line.
Oct 09 16:41:00 compute-0 ovn_metadata_agent[28608]: 2025-10-09 16:41:00.878 139687 DEBUG oslo.privsep.daemon [-] privsep: reply[98b4fcee-269f-42c1-9b7b-6a84a597170b]: (4, None) _call_back /usr/lib/python3.12/site-packages/oslo_privsep/daemon.py:515
Oct 09 16:41:00 compute-0 NetworkManager[1028]: <info>  [1760028060.8795] manager: (tapfaa2f899-e0): new Veth device (/org/freedesktop/NetworkManager/Devices/103)
Oct 09 16:41:00 compute-0 ovn_metadata_agent[28608]: 2025-10-09 16:41:00.917 141048 DEBUG oslo.privsep.daemon [-] privsep: reply[af9be0f5-5a73-46b1-9560-c41283036772]: (4, None) _call_back /usr/lib/python3.12/site-packages/oslo_privsep/daemon.py:515
Oct 09 16:41:00 compute-0 ovn_metadata_agent[28608]: 2025-10-09 16:41:00.920 141048 DEBUG oslo.privsep.daemon [-] privsep: reply[9eddee33-50ce-4f43-92c2-db3bf43beb3c]: (4, None) _call_back /usr/lib/python3.12/site-packages/oslo_privsep/daemon.py:515
Oct 09 16:41:00 compute-0 NetworkManager[1028]: <info>  [1760028060.9461] device (tapfaa2f899-e0): carrier: link connected
Oct 09 16:41:00 compute-0 ovn_metadata_agent[28608]: 2025-10-09 16:41:00.955 141048 DEBUG oslo.privsep.daemon [-] privsep: reply[abfb2cd6-f120-4dda-abf8-267e8c65d117]: (4, None) _call_back /usr/lib/python3.12/site-packages/oslo_privsep/daemon.py:515
Oct 09 16:41:00 compute-0 ovn_metadata_agent[28608]: 2025-10-09 16:41:00.979 139687 DEBUG oslo.privsep.daemon [-] privsep: reply[4a6a0c13-09ac-4c22-b8cc-cd1823399ada]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapfaa2f899-e1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:6e:79:49'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 78], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 300455, 'reachable_time': 22080, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 154048, 'error': None, 'target': 'ovnmeta-faa2f899-e3f1-48c6-ac37-859a6fb5c6cc', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.12/site-packages/oslo_privsep/daemon.py:515
Oct 09 16:41:01 compute-0 ovn_metadata_agent[28608]: 2025-10-09 16:41:01.000 139687 DEBUG oslo.privsep.daemon [-] privsep: reply[f5ec276c-17ae-4e04-9638-2f0bf8aa1120]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe6e:7949'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 300455, 'tstamp': 300455}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 154049, 'error': None, 'target': 'ovnmeta-faa2f899-e3f1-48c6-ac37-859a6fb5c6cc', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.12/site-packages/oslo_privsep/daemon.py:515
Oct 09 16:41:01 compute-0 ovn_metadata_agent[28608]: 2025-10-09 16:41:01.024 139687 DEBUG oslo.privsep.daemon [-] privsep: reply[d03dea66-2811-416d-899d-6911a7525e41]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapfaa2f899-e1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:6e:79:49'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 78], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 300455, 'reachable_time': 22080, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 154050, 'error': None, 'target': 'ovnmeta-faa2f899-e3f1-48c6-ac37-859a6fb5c6cc', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.12/site-packages/oslo_privsep/daemon.py:515
Oct 09 16:41:01 compute-0 ovn_metadata_agent[28608]: 2025-10-09 16:41:01.066 139687 DEBUG oslo.privsep.daemon [-] privsep: reply[c3fd77f2-0e6c-4722-baf1-b5100d73009a]: (4, None) _call_back /usr/lib/python3.12/site-packages/oslo_privsep/daemon.py:515
Oct 09 16:41:01 compute-0 ovn_metadata_agent[28608]: 2025-10-09 16:41:01.127 139687 DEBUG oslo.privsep.daemon [-] privsep: reply[ef8edcaa-683a-4574-81c9-424e75d6fad9]: (4, None) _call_back /usr/lib/python3.12/site-packages/oslo_privsep/daemon.py:515
Oct 09 16:41:01 compute-0 ovn_metadata_agent[28608]: 2025-10-09 16:41:01.128 28613 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapfaa2f899-e0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.12/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 09 16:41:01 compute-0 ovn_metadata_agent[28608]: 2025-10-09 16:41:01.129 28613 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.12/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 09 16:41:01 compute-0 ovn_metadata_agent[28608]: 2025-10-09 16:41:01.129 28613 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapfaa2f899-e0, may_exist=True, interface_attrs={}) do_commit /usr/lib/python3.12/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 09 16:41:01 compute-0 nova_compute[117331]: 2025-10-09 16:41:01.131 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 09 16:41:01 compute-0 NetworkManager[1028]: <info>  [1760028061.1319] manager: (tapfaa2f899-e0): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/104)
Oct 09 16:41:01 compute-0 kernel: tapfaa2f899-e0: entered promiscuous mode
Oct 09 16:41:01 compute-0 nova_compute[117331]: 2025-10-09 16:41:01.134 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 09 16:41:01 compute-0 ovn_metadata_agent[28608]: 2025-10-09 16:41:01.135 28613 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapfaa2f899-e0, col_values=(('external_ids', {'iface-id': '71d0467d-0f77-43e1-a20d-172a49f9f770'}),), if_exists=True) do_commit /usr/lib/python3.12/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 09 16:41:01 compute-0 nova_compute[117331]: 2025-10-09 16:41:01.136 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 09 16:41:01 compute-0 ovn_controller[19752]: 2025-10-09T16:41:01Z|00284|binding|INFO|Releasing lport 71d0467d-0f77-43e1-a20d-172a49f9f770 from this chassis (sb_readonly=0)
Oct 09 16:41:01 compute-0 nova_compute[117331]: 2025-10-09 16:41:01.150 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 09 16:41:01 compute-0 ovn_metadata_agent[28608]: 2025-10-09 16:41:01.152 139687 DEBUG oslo.privsep.daemon [-] privsep: reply[03b3d51d-267a-4432-bbc4-4abec60dbf48]: (4, '') _call_back /usr/lib/python3.12/site-packages/oslo_privsep/daemon.py:515
Oct 09 16:41:01 compute-0 ovn_metadata_agent[28608]: 2025-10-09 16:41:01.153 28613 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/faa2f899-e3f1-48c6-ac37-859a6fb5c6cc.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/faa2f899-e3f1-48c6-ac37-859a6fb5c6cc.pid.haproxy' get_value_from_file /usr/lib/python3.12/site-packages/neutron/agent/linux/utils.py:268
Oct 09 16:41:01 compute-0 ovn_metadata_agent[28608]: 2025-10-09 16:41:01.153 28613 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/faa2f899-e3f1-48c6-ac37-859a6fb5c6cc.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/faa2f899-e3f1-48c6-ac37-859a6fb5c6cc.pid.haproxy' get_value_from_file /usr/lib/python3.12/site-packages/neutron/agent/linux/utils.py:268
Oct 09 16:41:01 compute-0 ovn_metadata_agent[28608]: 2025-10-09 16:41:01.153 28613 DEBUG neutron.agent.linux.external_process [-] No haproxy process started for faa2f899-e3f1-48c6-ac37-859a6fb5c6cc disable /usr/lib/python3.12/site-packages/neutron/agent/linux/external_process.py:173
Oct 09 16:41:01 compute-0 ovn_metadata_agent[28608]: 2025-10-09 16:41:01.153 28613 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/faa2f899-e3f1-48c6-ac37-859a6fb5c6cc.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/faa2f899-e3f1-48c6-ac37-859a6fb5c6cc.pid.haproxy' get_value_from_file /usr/lib/python3.12/site-packages/neutron/agent/linux/utils.py:268
Oct 09 16:41:01 compute-0 ovn_metadata_agent[28608]: 2025-10-09 16:41:01.154 139687 DEBUG oslo.privsep.daemon [-] privsep: reply[a837ad64-a0a1-411d-aaae-8bc4687721d3]: (4, None) _call_back /usr/lib/python3.12/site-packages/oslo_privsep/daemon.py:515
Oct 09 16:41:01 compute-0 ovn_metadata_agent[28608]: 2025-10-09 16:41:01.154 28613 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/faa2f899-e3f1-48c6-ac37-859a6fb5c6cc.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/faa2f899-e3f1-48c6-ac37-859a6fb5c6cc.pid.haproxy' get_value_from_file /usr/lib/python3.12/site-packages/neutron/agent/linux/utils.py:268
Oct 09 16:41:01 compute-0 ovn_metadata_agent[28608]: 2025-10-09 16:41:01.155 139687 DEBUG oslo.privsep.daemon [-] privsep: reply[059dfbcc-0885-48e1-ad04-b48f31ab4423]: (4, None) _call_back /usr/lib/python3.12/site-packages/oslo_privsep/daemon.py:515
Oct 09 16:41:01 compute-0 ovn_metadata_agent[28608]: 2025-10-09 16:41:01.155 28613 DEBUG neutron.agent.metadata.driver_base [-] haproxy_cfg = 
Oct 09 16:41:01 compute-0 ovn_metadata_agent[28608]: global
Oct 09 16:41:01 compute-0 ovn_metadata_agent[28608]:     log         /dev/log local0 debug
Oct 09 16:41:01 compute-0 ovn_metadata_agent[28608]:     log-tag     haproxy-metadata-proxy-faa2f899-e3f1-48c6-ac37-859a6fb5c6cc
Oct 09 16:41:01 compute-0 ovn_metadata_agent[28608]:     user        root
Oct 09 16:41:01 compute-0 ovn_metadata_agent[28608]:     group       root
Oct 09 16:41:01 compute-0 ovn_metadata_agent[28608]:     maxconn     1024
Oct 09 16:41:01 compute-0 ovn_metadata_agent[28608]:     pidfile     /var/lib/neutron/external/pids/faa2f899-e3f1-48c6-ac37-859a6fb5c6cc.pid.haproxy
Oct 09 16:41:01 compute-0 ovn_metadata_agent[28608]:     daemon
Oct 09 16:41:01 compute-0 ovn_metadata_agent[28608]: 
Oct 09 16:41:01 compute-0 ovn_metadata_agent[28608]: defaults
Oct 09 16:41:01 compute-0 ovn_metadata_agent[28608]:     log global
Oct 09 16:41:01 compute-0 ovn_metadata_agent[28608]:     mode http
Oct 09 16:41:01 compute-0 ovn_metadata_agent[28608]:     option httplog
Oct 09 16:41:01 compute-0 ovn_metadata_agent[28608]:     option dontlognull
Oct 09 16:41:01 compute-0 ovn_metadata_agent[28608]:     option http-server-close
Oct 09 16:41:01 compute-0 ovn_metadata_agent[28608]:     option forwardfor
Oct 09 16:41:01 compute-0 ovn_metadata_agent[28608]:     retries                 3
Oct 09 16:41:01 compute-0 ovn_metadata_agent[28608]:     timeout http-request    30s
Oct 09 16:41:01 compute-0 ovn_metadata_agent[28608]:     timeout connect         30s
Oct 09 16:41:01 compute-0 ovn_metadata_agent[28608]:     timeout client          32s
Oct 09 16:41:01 compute-0 ovn_metadata_agent[28608]:     timeout server          32s
Oct 09 16:41:01 compute-0 ovn_metadata_agent[28608]:     timeout http-keep-alive 30s
Oct 09 16:41:01 compute-0 ovn_metadata_agent[28608]: 
Oct 09 16:41:01 compute-0 ovn_metadata_agent[28608]: listen listener
Oct 09 16:41:01 compute-0 ovn_metadata_agent[28608]:     bind 169.254.169.254:80
Oct 09 16:41:01 compute-0 ovn_metadata_agent[28608]:     
Oct 09 16:41:01 compute-0 ovn_metadata_agent[28608]:     server metadata /var/lib/neutron/metadata_proxy
Oct 09 16:41:01 compute-0 ovn_metadata_agent[28608]: 
Oct 09 16:41:01 compute-0 ovn_metadata_agent[28608]:     http-request add-header X-OVN-Network-ID faa2f899-e3f1-48c6-ac37-859a6fb5c6cc
Oct 09 16:41:01 compute-0 ovn_metadata_agent[28608]:  create_config_file /usr/lib/python3.12/site-packages/neutron/agent/metadata/driver_base.py:155
Oct 09 16:41:01 compute-0 ovn_metadata_agent[28608]: 2025-10-09 16:41:01.156 28613 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-faa2f899-e3f1-48c6-ac37-859a6fb5c6cc', 'env', 'PROCESS_TAG=haproxy-faa2f899-e3f1-48c6-ac37-859a6fb5c6cc', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/faa2f899-e3f1-48c6-ac37-859a6fb5c6cc.conf'] create_process /usr/lib/python3.12/site-packages/neutron/agent/linux/utils.py:84
Oct 09 16:41:01 compute-0 openstack_network_exporter[129925]: ERROR   16:41:01 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Oct 09 16:41:01 compute-0 openstack_network_exporter[129925]: ERROR   16:41:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Oct 09 16:41:01 compute-0 openstack_network_exporter[129925]: ERROR   16:41:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Oct 09 16:41:01 compute-0 openstack_network_exporter[129925]: ERROR   16:41:01 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Oct 09 16:41:01 compute-0 openstack_network_exporter[129925]: 
Oct 09 16:41:01 compute-0 openstack_network_exporter[129925]: ERROR   16:41:01 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Oct 09 16:41:01 compute-0 openstack_network_exporter[129925]: 
Oct 09 16:41:01 compute-0 nova_compute[117331]: 2025-10-09 16:41:01.533 2 DEBUG nova.compute.manager [req-5e9910c8-c217-4fda-8305-918f8d8c9ba4 req-6cc29839-6a0e-4ed4-ac16-e5473627f4e4 ee15961ad2be4bada6c106ac4f69f422 c076c42271004e7aa86e84416e33f826 - - default default] [instance: e49a1e89-0c38-4887-9bdf-64f07982ab22] Received event network-vif-plugged-3bf58985-c8d6-4c25-bed7-5fe8a78a1239 external_instance_event /usr/lib/python3.12/site-packages/nova/compute/manager.py:11812
Oct 09 16:41:01 compute-0 nova_compute[117331]: 2025-10-09 16:41:01.534 2 DEBUG oslo_concurrency.lockutils [req-5e9910c8-c217-4fda-8305-918f8d8c9ba4 req-6cc29839-6a0e-4ed4-ac16-e5473627f4e4 ee15961ad2be4bada6c106ac4f69f422 c076c42271004e7aa86e84416e33f826 - - default default] Acquiring lock "e49a1e89-0c38-4887-9bdf-64f07982ab22-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:405
Oct 09 16:41:01 compute-0 nova_compute[117331]: 2025-10-09 16:41:01.534 2 DEBUG oslo_concurrency.lockutils [req-5e9910c8-c217-4fda-8305-918f8d8c9ba4 req-6cc29839-6a0e-4ed4-ac16-e5473627f4e4 ee15961ad2be4bada6c106ac4f69f422 c076c42271004e7aa86e84416e33f826 - - default default] Lock "e49a1e89-0c38-4887-9bdf-64f07982ab22-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:410
Oct 09 16:41:01 compute-0 nova_compute[117331]: 2025-10-09 16:41:01.534 2 DEBUG oslo_concurrency.lockutils [req-5e9910c8-c217-4fda-8305-918f8d8c9ba4 req-6cc29839-6a0e-4ed4-ac16-e5473627f4e4 ee15961ad2be4bada6c106ac4f69f422 c076c42271004e7aa86e84416e33f826 - - default default] Lock "e49a1e89-0c38-4887-9bdf-64f07982ab22-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:424
Oct 09 16:41:01 compute-0 nova_compute[117331]: 2025-10-09 16:41:01.534 2 DEBUG nova.compute.manager [req-5e9910c8-c217-4fda-8305-918f8d8c9ba4 req-6cc29839-6a0e-4ed4-ac16-e5473627f4e4 ee15961ad2be4bada6c106ac4f69f422 c076c42271004e7aa86e84416e33f826 - - default default] [instance: e49a1e89-0c38-4887-9bdf-64f07982ab22] Processing event network-vif-plugged-3bf58985-c8d6-4c25-bed7-5fe8a78a1239 _process_instance_event /usr/lib/python3.12/site-packages/nova/compute/manager.py:11572
Oct 09 16:41:01 compute-0 podman[154090]: 2025-10-09 16:41:01.58067735 +0000 UTC m=+0.074013882 container create 8ec9ceba5cfec52d217c889396f91125c1b36ab6779db780d20ae539f500a89a (image=38.102.83.66:5001/podified-master-centos10/openstack-neutron-metadata-agent-ovn:watcher_latest, name=neutron-haproxy-ovnmeta-faa2f899-e3f1-48c6-ac37-859a6fb5c6cc, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_build_tag=watcher_latest, org.label-schema.build-date=20251007, tcib_managed=true, io.buildah.version=1.41.4, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.license=GPLv2)
Oct 09 16:41:01 compute-0 systemd[1]: Started libpod-conmon-8ec9ceba5cfec52d217c889396f91125c1b36ab6779db780d20ae539f500a89a.scope.
Oct 09 16:41:01 compute-0 podman[154090]: 2025-10-09 16:41:01.548849189 +0000 UTC m=+0.042185731 image pull 4e15c189a95c5dcd1203c76fe304de5c8f8616da9a258ca168cc54a517f49b87 38.102.83.66:5001/podified-master-centos10/openstack-neutron-metadata-agent-ovn:watcher_latest
Oct 09 16:41:01 compute-0 systemd[1]: Started libcrun container.
Oct 09 16:41:01 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3f156f522bd7cba66daf2ab5fa62f41a0e6dc1d093eddc3953bdf18206bc4790/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Oct 09 16:41:01 compute-0 podman[154090]: 2025-10-09 16:41:01.686354078 +0000 UTC m=+0.179690630 container init 8ec9ceba5cfec52d217c889396f91125c1b36ab6779db780d20ae539f500a89a (image=38.102.83.66:5001/podified-master-centos10/openstack-neutron-metadata-agent-ovn:watcher_latest, name=neutron-haproxy-ovnmeta-faa2f899-e3f1-48c6-ac37-859a6fb5c6cc, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_build_tag=watcher_latest, tcib_managed=true, io.buildah.version=1.41.4, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251007, org.label-schema.name=CentOS Stream 10 Base Image)
Oct 09 16:41:01 compute-0 podman[154090]: 2025-10-09 16:41:01.691821292 +0000 UTC m=+0.185157834 container start 8ec9ceba5cfec52d217c889396f91125c1b36ab6779db780d20ae539f500a89a (image=38.102.83.66:5001/podified-master-centos10/openstack-neutron-metadata-agent-ovn:watcher_latest, name=neutron-haproxy-ovnmeta-faa2f899-e3f1-48c6-ac37-859a6fb5c6cc, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=watcher_latest, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251007, tcib_managed=true, io.buildah.version=1.41.4)
Oct 09 16:41:01 compute-0 neutron-haproxy-ovnmeta-faa2f899-e3f1-48c6-ac37-859a6fb5c6cc[154105]: [NOTICE]   (154109) : New worker (154111) forked
Oct 09 16:41:01 compute-0 neutron-haproxy-ovnmeta-faa2f899-e3f1-48c6-ac37-859a6fb5c6cc[154105]: [NOTICE]   (154109) : Loading success.
Oct 09 16:41:01 compute-0 nova_compute[117331]: 2025-10-09 16:41:01.823 2 DEBUG nova.compute.manager [None req-0dc1d776-a9ae-4854-9727-02ae661565e0 05384d72bc894b20b1cc128bf76382b2 fca25ca2f317463c909e30c8b2c188d1 - - default default] [instance: e49a1e89-0c38-4887-9bdf-64f07982ab22] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.12/site-packages/nova/compute/manager.py:602
Oct 09 16:41:01 compute-0 nova_compute[117331]: 2025-10-09 16:41:01.827 2 DEBUG nova.virt.libvirt.driver [None req-0dc1d776-a9ae-4854-9727-02ae661565e0 05384d72bc894b20b1cc128bf76382b2 fca25ca2f317463c909e30c8b2c188d1 - - default default] [instance: e49a1e89-0c38-4887-9bdf-64f07982ab22] Guest created on hypervisor spawn /usr/lib/python3.12/site-packages/nova/virt/libvirt/driver.py:4816
Oct 09 16:41:01 compute-0 nova_compute[117331]: 2025-10-09 16:41:01.831 2 INFO nova.virt.libvirt.driver [-] [instance: e49a1e89-0c38-4887-9bdf-64f07982ab22] Instance spawned successfully.
Oct 09 16:41:01 compute-0 nova_compute[117331]: 2025-10-09 16:41:01.832 2 DEBUG nova.virt.libvirt.driver [None req-0dc1d776-a9ae-4854-9727-02ae661565e0 05384d72bc894b20b1cc128bf76382b2 fca25ca2f317463c909e30c8b2c188d1 - - default default] [instance: e49a1e89-0c38-4887-9bdf-64f07982ab22] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.12/site-packages/nova/virt/libvirt/driver.py:985
Oct 09 16:41:02 compute-0 nova_compute[117331]: 2025-10-09 16:41:02.347 2 DEBUG nova.virt.libvirt.driver [None req-0dc1d776-a9ae-4854-9727-02ae661565e0 05384d72bc894b20b1cc128bf76382b2 fca25ca2f317463c909e30c8b2c188d1 - - default default] [instance: e49a1e89-0c38-4887-9bdf-64f07982ab22] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.12/site-packages/nova/virt/libvirt/driver.py:1014
Oct 09 16:41:02 compute-0 nova_compute[117331]: 2025-10-09 16:41:02.348 2 DEBUG nova.virt.libvirt.driver [None req-0dc1d776-a9ae-4854-9727-02ae661565e0 05384d72bc894b20b1cc128bf76382b2 fca25ca2f317463c909e30c8b2c188d1 - - default default] [instance: e49a1e89-0c38-4887-9bdf-64f07982ab22] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.12/site-packages/nova/virt/libvirt/driver.py:1014
Oct 09 16:41:02 compute-0 nova_compute[117331]: 2025-10-09 16:41:02.348 2 DEBUG nova.virt.libvirt.driver [None req-0dc1d776-a9ae-4854-9727-02ae661565e0 05384d72bc894b20b1cc128bf76382b2 fca25ca2f317463c909e30c8b2c188d1 - - default default] [instance: e49a1e89-0c38-4887-9bdf-64f07982ab22] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.12/site-packages/nova/virt/libvirt/driver.py:1014
Oct 09 16:41:02 compute-0 nova_compute[117331]: 2025-10-09 16:41:02.349 2 DEBUG nova.virt.libvirt.driver [None req-0dc1d776-a9ae-4854-9727-02ae661565e0 05384d72bc894b20b1cc128bf76382b2 fca25ca2f317463c909e30c8b2c188d1 - - default default] [instance: e49a1e89-0c38-4887-9bdf-64f07982ab22] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.12/site-packages/nova/virt/libvirt/driver.py:1014
Oct 09 16:41:02 compute-0 nova_compute[117331]: 2025-10-09 16:41:02.350 2 DEBUG nova.virt.libvirt.driver [None req-0dc1d776-a9ae-4854-9727-02ae661565e0 05384d72bc894b20b1cc128bf76382b2 fca25ca2f317463c909e30c8b2c188d1 - - default default] [instance: e49a1e89-0c38-4887-9bdf-64f07982ab22] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.12/site-packages/nova/virt/libvirt/driver.py:1014
Oct 09 16:41:02 compute-0 nova_compute[117331]: 2025-10-09 16:41:02.350 2 DEBUG nova.virt.libvirt.driver [None req-0dc1d776-a9ae-4854-9727-02ae661565e0 05384d72bc894b20b1cc128bf76382b2 fca25ca2f317463c909e30c8b2c188d1 - - default default] [instance: e49a1e89-0c38-4887-9bdf-64f07982ab22] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.12/site-packages/nova/virt/libvirt/driver.py:1014
Oct 09 16:41:02 compute-0 nova_compute[117331]: 2025-10-09 16:41:02.430 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 09 16:41:02 compute-0 nova_compute[117331]: 2025-10-09 16:41:02.686 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 09 16:41:02 compute-0 nova_compute[117331]: 2025-10-09 16:41:02.860 2 INFO nova.compute.manager [None req-0dc1d776-a9ae-4854-9727-02ae661565e0 05384d72bc894b20b1cc128bf76382b2 fca25ca2f317463c909e30c8b2c188d1 - - default default] [instance: e49a1e89-0c38-4887-9bdf-64f07982ab22] Took 10.26 seconds to spawn the instance on the hypervisor.
Oct 09 16:41:02 compute-0 nova_compute[117331]: 2025-10-09 16:41:02.861 2 DEBUG nova.compute.manager [None req-0dc1d776-a9ae-4854-9727-02ae661565e0 05384d72bc894b20b1cc128bf76382b2 fca25ca2f317463c909e30c8b2c188d1 - - default default] [instance: e49a1e89-0c38-4887-9bdf-64f07982ab22] Checking state _get_power_state /usr/lib/python3.12/site-packages/nova/compute/manager.py:1825
Oct 09 16:41:03 compute-0 nova_compute[117331]: 2025-10-09 16:41:03.402 2 INFO nova.compute.manager [None req-0dc1d776-a9ae-4854-9727-02ae661565e0 05384d72bc894b20b1cc128bf76382b2 fca25ca2f317463c909e30c8b2c188d1 - - default default] [instance: e49a1e89-0c38-4887-9bdf-64f07982ab22] Took 15.54 seconds to build instance.
Oct 09 16:41:03 compute-0 nova_compute[117331]: 2025-10-09 16:41:03.593 2 DEBUG nova.compute.manager [req-b1bb45de-e986-4375-a624-ec74d15aef47 req-3ea81e8a-a2e7-4f12-ab63-710c0bcf34fc ee15961ad2be4bada6c106ac4f69f422 c076c42271004e7aa86e84416e33f826 - - default default] [instance: e49a1e89-0c38-4887-9bdf-64f07982ab22] Received event network-vif-plugged-3bf58985-c8d6-4c25-bed7-5fe8a78a1239 external_instance_event /usr/lib/python3.12/site-packages/nova/compute/manager.py:11812
Oct 09 16:41:03 compute-0 nova_compute[117331]: 2025-10-09 16:41:03.594 2 DEBUG oslo_concurrency.lockutils [req-b1bb45de-e986-4375-a624-ec74d15aef47 req-3ea81e8a-a2e7-4f12-ab63-710c0bcf34fc ee15961ad2be4bada6c106ac4f69f422 c076c42271004e7aa86e84416e33f826 - - default default] Acquiring lock "e49a1e89-0c38-4887-9bdf-64f07982ab22-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:405
Oct 09 16:41:03 compute-0 nova_compute[117331]: 2025-10-09 16:41:03.595 2 DEBUG oslo_concurrency.lockutils [req-b1bb45de-e986-4375-a624-ec74d15aef47 req-3ea81e8a-a2e7-4f12-ab63-710c0bcf34fc ee15961ad2be4bada6c106ac4f69f422 c076c42271004e7aa86e84416e33f826 - - default default] Lock "e49a1e89-0c38-4887-9bdf-64f07982ab22-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:410
Oct 09 16:41:03 compute-0 nova_compute[117331]: 2025-10-09 16:41:03.595 2 DEBUG oslo_concurrency.lockutils [req-b1bb45de-e986-4375-a624-ec74d15aef47 req-3ea81e8a-a2e7-4f12-ab63-710c0bcf34fc ee15961ad2be4bada6c106ac4f69f422 c076c42271004e7aa86e84416e33f826 - - default default] Lock "e49a1e89-0c38-4887-9bdf-64f07982ab22-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:424
Oct 09 16:41:03 compute-0 nova_compute[117331]: 2025-10-09 16:41:03.596 2 DEBUG nova.compute.manager [req-b1bb45de-e986-4375-a624-ec74d15aef47 req-3ea81e8a-a2e7-4f12-ab63-710c0bcf34fc ee15961ad2be4bada6c106ac4f69f422 c076c42271004e7aa86e84416e33f826 - - default default] [instance: e49a1e89-0c38-4887-9bdf-64f07982ab22] No waiting events found dispatching network-vif-plugged-3bf58985-c8d6-4c25-bed7-5fe8a78a1239 pop_instance_event /usr/lib/python3.12/site-packages/nova/compute/manager.py:344
Oct 09 16:41:03 compute-0 nova_compute[117331]: 2025-10-09 16:41:03.596 2 WARNING nova.compute.manager [req-b1bb45de-e986-4375-a624-ec74d15aef47 req-3ea81e8a-a2e7-4f12-ab63-710c0bcf34fc ee15961ad2be4bada6c106ac4f69f422 c076c42271004e7aa86e84416e33f826 - - default default] [instance: e49a1e89-0c38-4887-9bdf-64f07982ab22] Received unexpected event network-vif-plugged-3bf58985-c8d6-4c25-bed7-5fe8a78a1239 for instance with vm_state active and task_state None.
Oct 09 16:41:03 compute-0 nova_compute[117331]: 2025-10-09 16:41:03.909 2 DEBUG oslo_concurrency.lockutils [None req-0dc1d776-a9ae-4854-9727-02ae661565e0 05384d72bc894b20b1cc128bf76382b2 fca25ca2f317463c909e30c8b2c188d1 - - default default] Lock "e49a1e89-0c38-4887-9bdf-64f07982ab22" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 17.060s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:424
Oct 09 16:41:04 compute-0 podman[154120]: 2025-10-09 16:41:04.850753033 +0000 UTC m=+0.077810624 container health_status 270830b5167cafbab735d8118980f26987e673cd0fa464dd2202394830bcca66 (image=38.102.83.66:5001/podified-master-centos10/openstack-multipathd:watcher_latest, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.4, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=watcher_latest, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': '38.102.83.66:5001/podified-master-centos10/openstack-multipathd:watcher_latest', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251007, org.label-schema.schema-version=1.0, tcib_managed=true)
Oct 09 16:41:07 compute-0 nova_compute[117331]: 2025-10-09 16:41:07.433 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 09 16:41:07 compute-0 nova_compute[117331]: 2025-10-09 16:41:07.689 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 09 16:41:09 compute-0 podman[154142]: 2025-10-09 16:41:09.827329682 +0000 UTC m=+0.052316633 container health_status 86aa93e887899e54d383b0fc17fb3c5637680d1d8e146a130a557c837f0260fe (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible)
Oct 09 16:41:11 compute-0 ovn_metadata_agent[28608]: 2025-10-09 16:41:11.696 28613 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=31, options={'arp_ns_explicit_output': 'true', 'fdb_removal_limit': '0', 'ignore_lsp_down': 'false', 'mac_binding_removal_limit': '0', 'mac_prefix': '4a:65:5f', 'max_tunid': '16711680', 'northd_internal_version': '24.09.4-20.37.0-77.8', 'svc_monitor_mac': '32:24:1e:e3:5d:e8'}, ipsec=False) old=SB_Global(nb_cfg=30) matches /usr/lib/python3.12/site-packages/ovsdbapp/backend/ovs_idl/event.py:55
Oct 09 16:41:11 compute-0 nova_compute[117331]: 2025-10-09 16:41:11.697 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 09 16:41:11 compute-0 ovn_metadata_agent[28608]: 2025-10-09 16:41:11.698 28613 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 6 seconds run /usr/lib/python3.12/site-packages/neutron/agent/ovn/metadata/agent.py:367
Oct 09 16:41:12 compute-0 nova_compute[117331]: 2025-10-09 16:41:12.435 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 09 16:41:12 compute-0 nova_compute[117331]: 2025-10-09 16:41:12.692 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 09 16:41:14 compute-0 ovn_controller[19752]: 2025-10-09T16:41:14Z|00041|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:11:4c:7f 10.100.0.3
Oct 09 16:41:14 compute-0 ovn_controller[19752]: 2025-10-09T16:41:14Z|00042|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:11:4c:7f 10.100.0.3
Oct 09 16:41:14 compute-0 podman[154185]: 2025-10-09 16:41:14.859441437 +0000 UTC m=+0.081396278 container health_status 0e966c7a283689daf23c5db5017f23f685ee836319240188a9f70ddd246563c5 (image=38.102.83.66:5001/podified-master-centos10/openstack-neutron-metadata-agent-ovn:watcher_latest, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': '38.102.83.66:5001/podified-master-centos10/openstack-neutron-metadata-agent-ovn:watcher_latest', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, 
tcib_managed=true, org.label-schema.build-date=20251007, tcib_build_tag=watcher_latest, config_id=ovn_metadata_agent, org.label-schema.schema-version=1.0, io.buildah.version=1.41.4, managed_by=edpm_ansible, org.label-schema.license=GPLv2)
Oct 09 16:41:14 compute-0 podman[154186]: 2025-10-09 16:41:14.87935049 +0000 UTC m=+0.098878054 container health_status 8227728d23f374cb9528be6ddf01dff38d8c792912a17b0d137932a18390b820 (image=38.102.83.66:5001/podified-master-centos10/openstack-iscsid:watcher_latest, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251007, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': '38.102.83.66:5001/podified-master-centos10/openstack-iscsid:watcher_latest', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, config_id=iscsid, container_name=iscsid, managed_by=edpm_ansible, tcib_build_tag=watcher_latest, io.buildah.version=1.41.4, org.label-schema.license=GPLv2, tcib_managed=true)
Oct 09 16:41:17 compute-0 nova_compute[117331]: 2025-10-09 16:41:17.438 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 09 16:41:17 compute-0 ovn_metadata_agent[28608]: 2025-10-09 16:41:17.701 28613 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=9954897f-aa83-45dd-8e84-289816676c2a, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '31'}),), if_exists=True) do_commit /usr/lib/python3.12/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 09 16:41:17 compute-0 nova_compute[117331]: 2025-10-09 16:41:17.736 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 09 16:41:22 compute-0 nova_compute[117331]: 2025-10-09 16:41:22.442 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 09 16:41:22 compute-0 nova_compute[117331]: 2025-10-09 16:41:22.737 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 09 16:41:24 compute-0 podman[154224]: 2025-10-09 16:41:24.856706776 +0000 UTC m=+0.083914568 container health_status 64fa251112d01abb34ec1bb8e22019815ddcaaaa391ce5ec91517147ac5a9994 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, com.redhat.component=ubi9-minimal-container, managed_by=edpm_ansible, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, maintainer=Red Hat, Inc., vendor=Red Hat, Inc., io.openshift.tags=minimal rhel9, io.openshift.expose-services=, build-date=2025-08-20T13:12:41, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., version=9.6, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.buildah.version=1.33.7, name=ubi9-minimal, url=https://catalog.redhat.com/en/search?searchType=containers, architecture=x86_64, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., release=1755695350, container_name=openstack_network_exporter, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, config_id=edpm, distribution-scope=public, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, vcs-type=git)
Oct 09 16:41:24 compute-0 podman[154225]: 2025-10-09 16:41:24.876497135 +0000 UTC m=+0.104311876 container health_status d615e4f24ff62fbb14be4a63e6da568ec5b22309773c1c66103b6b4de64a8a2f (image=38.102.83.66:5001/podified-master-centos10/openstack-ovn-controller:watcher_latest, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_controller, io.buildah.version=1.41.4, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': '38.102.83.66:5001/podified-master-centos10/openstack-ovn-controller:watcher_latest', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.build-date=20251007, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=watcher_latest, config_id=ovn_controller)
Oct 09 16:41:27 compute-0 nova_compute[117331]: 2025-10-09 16:41:27.446 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 09 16:41:27 compute-0 nova_compute[117331]: 2025-10-09 16:41:27.739 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 09 16:41:29 compute-0 podman[127775]: time="2025-10-09T16:41:29Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Oct 09 16:41:29 compute-0 podman[127775]: @ - - [09/Oct/2025:16:41:29 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 20749 "" "Go-http-client/1.1"
Oct 09 16:41:29 compute-0 podman[127775]: @ - - [09/Oct/2025:16:41:29 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 3487 "" "Go-http-client/1.1"
Oct 09 16:41:31 compute-0 openstack_network_exporter[129925]: ERROR   16:41:31 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Oct 09 16:41:31 compute-0 openstack_network_exporter[129925]: ERROR   16:41:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Oct 09 16:41:31 compute-0 openstack_network_exporter[129925]: ERROR   16:41:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Oct 09 16:41:31 compute-0 openstack_network_exporter[129925]: ERROR   16:41:31 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Oct 09 16:41:31 compute-0 openstack_network_exporter[129925]: 
Oct 09 16:41:31 compute-0 openstack_network_exporter[129925]: ERROR   16:41:31 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Oct 09 16:41:31 compute-0 openstack_network_exporter[129925]: 
Oct 09 16:41:32 compute-0 nova_compute[117331]: 2025-10-09 16:41:32.307 2 DEBUG oslo_service.periodic_task [None req-92a13bed-c081-4c7b-87f3-bafebefb79f2 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.12/site-packages/oslo_service/periodic_task.py:210
Oct 09 16:41:32 compute-0 nova_compute[117331]: 2025-10-09 16:41:32.447 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 09 16:41:32 compute-0 nova_compute[117331]: 2025-10-09 16:41:32.741 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 09 16:41:34 compute-0 nova_compute[117331]: 2025-10-09 16:41:34.307 2 DEBUG oslo_service.periodic_task [None req-92a13bed-c081-4c7b-87f3-bafebefb79f2 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.12/site-packages/oslo_service/periodic_task.py:210
Oct 09 16:41:35 compute-0 ovn_metadata_agent[28608]: 2025-10-09 16:41:35.341 28613 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:405
Oct 09 16:41:35 compute-0 ovn_metadata_agent[28608]: 2025-10-09 16:41:35.342 28613 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.000s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:410
Oct 09 16:41:35 compute-0 ovn_metadata_agent[28608]: 2025-10-09 16:41:35.342 28613 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:424
Oct 09 16:41:35 compute-0 podman[154271]: 2025-10-09 16:41:35.848357657 +0000 UTC m=+0.075382156 container health_status 270830b5167cafbab735d8118980f26987e673cd0fa464dd2202394830bcca66 (image=38.102.83.66:5001/podified-master-centos10/openstack-multipathd:watcher_latest, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, tcib_build_tag=watcher_latest, container_name=multipathd, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': '38.102.83.66:5001/podified-master-centos10/openstack-multipathd:watcher_latest', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.vendor=CentOS, config_id=multipathd, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251007, io.buildah.version=1.41.4)
Oct 09 16:41:37 compute-0 nova_compute[117331]: 2025-10-09 16:41:37.225 2 DEBUG nova.virt.libvirt.driver [None req-9c6f36d9-f824-403c-8831-426b13d31fa0 005258eefa5345409efe13f6a1de22ab c076c42271004e7aa86e84416e33f826 - - default default] [instance: e49a1e89-0c38-4887-9bdf-64f07982ab22] Check if temp file /var/lib/nova/instances/tmpfkqomhp4 exists to indicate shared storage is being used for migration. Exists? False _check_shared_storage_test_file /usr/lib/python3.12/site-packages/nova/virt/libvirt/driver.py:10968
Oct 09 16:41:37 compute-0 nova_compute[117331]: 2025-10-09 16:41:37.232 2 DEBUG nova.compute.manager [None req-9c6f36d9-f824-403c-8831-426b13d31fa0 005258eefa5345409efe13f6a1de22ab c076c42271004e7aa86e84416e33f826 - - default default] source check data is LibvirtLiveMigrateData(bdms=<?>,block_migration=True,disk_available_mb=74752,disk_over_commit=<?>,dst_cpu_shared_set_info=set([]),dst_numa_info=<?>,dst_supports_mdev_live_migration=True,dst_supports_numa_live_migration=<?>,dst_wants_file_backed_memory=False,file_backed_memory_discard=<?>,filename='tmpfkqomhp4',graphics_listen_addr_spice=127.0.0.1,graphics_listen_addr_vnc=::,image_type='qcow2',instance_relative_path='e49a1e89-0c38-4887-9bdf-64f07982ab22',is_shared_block_storage=False,is_shared_instance_path=False,is_volume_backed=False,migration=<?>,old_vol_attachment_ids=<?>,pci_dev_map_src_dst=<?>,serial_listen_addr=None,serial_listen_ports=<?>,source_mdev_types=<?>,src_supports_native_luks=<?>,src_supports_numa_live_migration=<?>,supported_perf_events=<?>,target_connect_addr=<?>,target_mdevs=<?>,vifs=[VIFMigrateData],wait_for_vif_plugged=<?>) check_can_live_migrate_source /usr/lib/python3.12/site-packages/nova/compute/manager.py:9294
Oct 09 16:41:37 compute-0 nova_compute[117331]: 2025-10-09 16:41:37.449 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 09 16:41:37 compute-0 nova_compute[117331]: 2025-10-09 16:41:37.743 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 09 16:41:39 compute-0 nova_compute[117331]: 2025-10-09 16:41:39.302 2 DEBUG oslo_service.periodic_task [None req-92a13bed-c081-4c7b-87f3-bafebefb79f2 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.12/site-packages/oslo_service/periodic_task.py:210
Oct 09 16:41:39 compute-0 nova_compute[117331]: 2025-10-09 16:41:39.305 2 DEBUG oslo_service.periodic_task [None req-92a13bed-c081-4c7b-87f3-bafebefb79f2 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.12/site-packages/oslo_service/periodic_task.py:210
Oct 09 16:41:39 compute-0 nova_compute[117331]: 2025-10-09 16:41:39.305 2 DEBUG oslo_service.periodic_task [None req-92a13bed-c081-4c7b-87f3-bafebefb79f2 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.12/site-packages/oslo_service/periodic_task.py:210
Oct 09 16:41:39 compute-0 sshd-session[153931]: Invalid user support from 124.60.67.43 port 45482
Oct 09 16:41:39 compute-0 nova_compute[117331]: 2025-10-09 16:41:39.822 2 DEBUG oslo_concurrency.lockutils [None req-92a13bed-c081-4c7b-87f3-bafebefb79f2 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:405
Oct 09 16:41:39 compute-0 nova_compute[117331]: 2025-10-09 16:41:39.823 2 DEBUG oslo_concurrency.lockutils [None req-92a13bed-c081-4c7b-87f3-bafebefb79f2 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:410
Oct 09 16:41:39 compute-0 nova_compute[117331]: 2025-10-09 16:41:39.823 2 DEBUG oslo_concurrency.lockutils [None req-92a13bed-c081-4c7b-87f3-bafebefb79f2 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:424
Oct 09 16:41:39 compute-0 nova_compute[117331]: 2025-10-09 16:41:39.823 2 DEBUG nova.compute.resource_tracker [None req-92a13bed-c081-4c7b-87f3-bafebefb79f2 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.12/site-packages/nova/compute/resource_tracker.py:937
Oct 09 16:41:39 compute-0 podman[154293]: 2025-10-09 16:41:39.917313905 +0000 UTC m=+0.048888804 container health_status 86aa93e887899e54d383b0fc17fb3c5637680d1d8e146a130a557c837f0260fe (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter)
Oct 09 16:41:40 compute-0 nova_compute[117331]: 2025-10-09 16:41:40.865 2 DEBUG oslo_concurrency.processutils [None req-92a13bed-c081-4c7b-87f3-bafebefb79f2 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/e49a1e89-0c38-4887-9bdf-64f07982ab22/disk --force-share --output=json execute /usr/lib/python3.12/site-packages/oslo_concurrency/processutils.py:349
Oct 09 16:41:40 compute-0 nova_compute[117331]: 2025-10-09 16:41:40.927 2 DEBUG oslo_concurrency.processutils [None req-92a13bed-c081-4c7b-87f3-bafebefb79f2 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/e49a1e89-0c38-4887-9bdf-64f07982ab22/disk --force-share --output=json" returned: 0 in 0.062s execute /usr/lib/python3.12/site-packages/oslo_concurrency/processutils.py:372
Oct 09 16:41:40 compute-0 nova_compute[117331]: 2025-10-09 16:41:40.928 2 DEBUG oslo_concurrency.processutils [None req-92a13bed-c081-4c7b-87f3-bafebefb79f2 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/e49a1e89-0c38-4887-9bdf-64f07982ab22/disk --force-share --output=json execute /usr/lib/python3.12/site-packages/oslo_concurrency/processutils.py:349
Oct 09 16:41:40 compute-0 nova_compute[117331]: 2025-10-09 16:41:40.984 2 DEBUG oslo_concurrency.processutils [None req-92a13bed-c081-4c7b-87f3-bafebefb79f2 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/e49a1e89-0c38-4887-9bdf-64f07982ab22/disk --force-share --output=json" returned: 0 in 0.056s execute /usr/lib/python3.12/site-packages/oslo_concurrency/processutils.py:372
Oct 09 16:41:41 compute-0 nova_compute[117331]: 2025-10-09 16:41:41.149 2 WARNING nova.virt.libvirt.driver [None req-92a13bed-c081-4c7b-87f3-bafebefb79f2 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Oct 09 16:41:41 compute-0 nova_compute[117331]: 2025-10-09 16:41:41.150 2 DEBUG oslo_concurrency.processutils [None req-92a13bed-c081-4c7b-87f3-bafebefb79f2 - - - - - -] Running cmd (subprocess): env LANG=C uptime execute /usr/lib/python3.12/site-packages/oslo_concurrency/processutils.py:349
Oct 09 16:41:41 compute-0 nova_compute[117331]: 2025-10-09 16:41:41.171 2 DEBUG oslo_concurrency.processutils [None req-92a13bed-c081-4c7b-87f3-bafebefb79f2 - - - - - -] CMD "env LANG=C uptime" returned: 0 in 0.020s execute /usr/lib/python3.12/site-packages/oslo_concurrency/processutils.py:372
Oct 09 16:41:41 compute-0 nova_compute[117331]: 2025-10-09 16:41:41.172 2 DEBUG nova.compute.resource_tracker [None req-92a13bed-c081-4c7b-87f3-bafebefb79f2 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=5982MB free_disk=73.22043228149414GB free_vcpus=7 pci_devices=[{"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.12/site-packages/nova/compute/resource_tracker.py:1136
Oct 09 16:41:41 compute-0 nova_compute[117331]: 2025-10-09 16:41:41.172 2 DEBUG oslo_concurrency.lockutils [None req-92a13bed-c081-4c7b-87f3-bafebefb79f2 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:405
Oct 09 16:41:41 compute-0 nova_compute[117331]: 2025-10-09 16:41:41.173 2 DEBUG oslo_concurrency.lockutils [None req-92a13bed-c081-4c7b-87f3-bafebefb79f2 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:410
Oct 09 16:41:41 compute-0 nova_compute[117331]: 2025-10-09 16:41:41.457 2 DEBUG oslo_concurrency.processutils [None req-9c6f36d9-f824-403c-8831-426b13d31fa0 005258eefa5345409efe13f6a1de22ab c076c42271004e7aa86e84416e33f826 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/e49a1e89-0c38-4887-9bdf-64f07982ab22/disk --force-share --output=json execute /usr/lib/python3.12/site-packages/oslo_concurrency/processutils.py:349
Oct 09 16:41:41 compute-0 nova_compute[117331]: 2025-10-09 16:41:41.528 2 DEBUG oslo_concurrency.processutils [None req-9c6f36d9-f824-403c-8831-426b13d31fa0 005258eefa5345409efe13f6a1de22ab c076c42271004e7aa86e84416e33f826 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/e49a1e89-0c38-4887-9bdf-64f07982ab22/disk --force-share --output=json" returned: 0 in 0.071s execute /usr/lib/python3.12/site-packages/oslo_concurrency/processutils.py:372
Oct 09 16:41:41 compute-0 nova_compute[117331]: 2025-10-09 16:41:41.529 2 DEBUG oslo_concurrency.processutils [None req-9c6f36d9-f824-403c-8831-426b13d31fa0 005258eefa5345409efe13f6a1de22ab c076c42271004e7aa86e84416e33f826 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/e49a1e89-0c38-4887-9bdf-64f07982ab22/disk --force-share --output=json execute /usr/lib/python3.12/site-packages/oslo_concurrency/processutils.py:349
Oct 09 16:41:41 compute-0 nova_compute[117331]: 2025-10-09 16:41:41.587 2 DEBUG oslo_concurrency.processutils [None req-9c6f36d9-f824-403c-8831-426b13d31fa0 005258eefa5345409efe13f6a1de22ab c076c42271004e7aa86e84416e33f826 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/e49a1e89-0c38-4887-9bdf-64f07982ab22/disk --force-share --output=json" returned: 0 in 0.058s execute /usr/lib/python3.12/site-packages/oslo_concurrency/processutils.py:372
Oct 09 16:41:41 compute-0 nova_compute[117331]: 2025-10-09 16:41:41.588 2 DEBUG nova.compute.manager [None req-9c6f36d9-f824-403c-8831-426b13d31fa0 005258eefa5345409efe13f6a1de22ab c076c42271004e7aa86e84416e33f826 - - default default] [instance: e49a1e89-0c38-4887-9bdf-64f07982ab22] Preparing to wait for external event network-vif-plugged-3bf58985-c8d6-4c25-bed7-5fe8a78a1239 prepare_for_instance_event /usr/lib/python3.12/site-packages/nova/compute/manager.py:307
Oct 09 16:41:41 compute-0 nova_compute[117331]: 2025-10-09 16:41:41.589 2 DEBUG oslo_concurrency.lockutils [None req-9c6f36d9-f824-403c-8831-426b13d31fa0 005258eefa5345409efe13f6a1de22ab c076c42271004e7aa86e84416e33f826 - - default default] Acquiring lock "e49a1e89-0c38-4887-9bdf-64f07982ab22-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:405
Oct 09 16:41:41 compute-0 nova_compute[117331]: 2025-10-09 16:41:41.589 2 DEBUG oslo_concurrency.lockutils [None req-9c6f36d9-f824-403c-8831-426b13d31fa0 005258eefa5345409efe13f6a1de22ab c076c42271004e7aa86e84416e33f826 - - default default] Lock "e49a1e89-0c38-4887-9bdf-64f07982ab22-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:410
Oct 09 16:41:41 compute-0 nova_compute[117331]: 2025-10-09 16:41:41.589 2 DEBUG oslo_concurrency.lockutils [None req-9c6f36d9-f824-403c-8831-426b13d31fa0 005258eefa5345409efe13f6a1de22ab c076c42271004e7aa86e84416e33f826 - - default default] Lock "e49a1e89-0c38-4887-9bdf-64f07982ab22-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:424
Oct 09 16:41:42 compute-0 nova_compute[117331]: 2025-10-09 16:41:42.206 2 INFO nova.compute.resource_tracker [None req-92a13bed-c081-4c7b-87f3-bafebefb79f2 - - - - - -] [instance: e49a1e89-0c38-4887-9bdf-64f07982ab22] Updating resource usage from migration 789c03a1-9329-4a25-b1ed-48eab8eb714b
Oct 09 16:41:42 compute-0 nova_compute[117331]: 2025-10-09 16:41:42.231 2 DEBUG nova.compute.resource_tracker [None req-92a13bed-c081-4c7b-87f3-bafebefb79f2 - - - - - -] Migration 789c03a1-9329-4a25-b1ed-48eab8eb714b is active on this compute host and has allocations in placement: {'resources': {'VCPU': 1, 'MEMORY_MB': 128, 'DISK_GB': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.12/site-packages/nova/compute/resource_tracker.py:1745
Oct 09 16:41:42 compute-0 nova_compute[117331]: 2025-10-09 16:41:42.231 2 DEBUG nova.compute.resource_tracker [None req-92a13bed-c081-4c7b-87f3-bafebefb79f2 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 1 _report_final_resource_view /usr/lib/python3.12/site-packages/nova/compute/resource_tracker.py:1159
Oct 09 16:41:42 compute-0 nova_compute[117331]: 2025-10-09 16:41:42.232 2 DEBUG nova.compute.resource_tracker [None req-92a13bed-c081-4c7b-87f3-bafebefb79f2 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=640MB phys_disk=79GB used_disk=1GB total_vcpus=8 used_vcpus=1 pci_stats=[] stats={'failed_builds': '0', 'uptime': ' 16:41:41 up 50 min,  0 user,  load average: 0.57, 0.49, 0.46\n', 'num_instances': '1', 'num_vm_active': '1', 'num_task_migrating': '1', 'num_os_type_None': '1', 'num_proj_fca25ca2f317463c909e30c8b2c188d1': '1', 'io_workload': '0'} _report_final_resource_view /usr/lib/python3.12/site-packages/nova/compute/resource_tracker.py:1168
Oct 09 16:41:42 compute-0 nova_compute[117331]: 2025-10-09 16:41:42.247 2 DEBUG nova.scheduler.client.report [None req-92a13bed-c081-4c7b-87f3-bafebefb79f2 - - - - - -] Refreshing inventories for resource provider 593051b8-2000-437f-a915-2616fc8b1671 _refresh_associations /usr/lib/python3.12/site-packages/nova/scheduler/client/report.py:822
Oct 09 16:41:42 compute-0 nova_compute[117331]: 2025-10-09 16:41:42.260 2 DEBUG nova.scheduler.client.report [None req-92a13bed-c081-4c7b-87f3-bafebefb79f2 - - - - - -] Updating ProviderTree inventory for provider 593051b8-2000-437f-a915-2616fc8b1671 from _refresh_and_get_inventory using data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 79, 'reserved': 1, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} _refresh_and_get_inventory /usr/lib/python3.12/site-packages/nova/scheduler/client/report.py:786
Oct 09 16:41:42 compute-0 nova_compute[117331]: 2025-10-09 16:41:42.261 2 DEBUG nova.compute.provider_tree [None req-92a13bed-c081-4c7b-87f3-bafebefb79f2 - - - - - -] Updating inventory in ProviderTree for provider 593051b8-2000-437f-a915-2616fc8b1671 with inventory: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 79, 'reserved': 1, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} update_inventory /usr/lib/python3.12/site-packages/nova/compute/provider_tree.py:176
Oct 09 16:41:42 compute-0 nova_compute[117331]: 2025-10-09 16:41:42.273 2 DEBUG nova.scheduler.client.report [None req-92a13bed-c081-4c7b-87f3-bafebefb79f2 - - - - - -] Refreshing aggregate associations for resource provider 593051b8-2000-437f-a915-2616fc8b1671, aggregates: None _refresh_associations /usr/lib/python3.12/site-packages/nova/scheduler/client/report.py:831
Oct 09 16:41:42 compute-0 nova_compute[117331]: 2025-10-09 16:41:42.290 2 DEBUG nova.scheduler.client.report [None req-92a13bed-c081-4c7b-87f3-bafebefb79f2 - - - - - -] Refreshing trait associations for resource provider 593051b8-2000-437f-a915-2616fc8b1671, traits: HW_CPU_X86_AVX2,HW_CPU_X86_BMI,COMPUTE_SOUND_MODEL_ICH6,COMPUTE_IMAGE_TYPE_RAW,COMPUTE_ARCH_X86_64,COMPUTE_IMAGE_TYPE_AMI,HW_CPU_X86_SSSE3,COMPUTE_SECURITY_TPM_CRB,COMPUTE_GRAPHICS_MODEL_CIRRUS,COMPUTE_GRAPHICS_MODEL_NONE,COMPUTE_VIOMMU_MODEL_VIRTIO,HW_CPU_X86_AESNI,COMPUTE_NET_VIF_MODEL_VMXNET3,COMPUTE_SOUND_MODEL_PCSPK,COMPUTE_VOLUME_ATTACH_WITH_TAG,COMPUTE_SOUND_MODEL_ES1370,COMPUTE_NODE,COMPUTE_VIOMMU_MODEL_AUTO,COMPUTE_SOUND_MODEL_ICH9,COMPUTE_IMAGE_TYPE_QCOW2,COMPUTE_VOLUME_MULTI_ATTACH,COMPUTE_SECURITY_UEFI_SECURE_BOOT,COMPUTE_STORAGE_BUS_USB,COMPUTE_GRAPHICS_MODEL_BOCHS,COMPUTE_SECURITY_TPM_TIS,HW_ARCH_X86_64,COMPUTE_SOUND_MODEL_VIRTIO,HW_CPU_X86_BMI2,COMPUTE_STORAGE_BUS_FDC,COMPUTE_SOUND_MODEL_USB,COMPUTE_STORAGE_BUS_SATA,COMPUTE_SOUND_MODEL_AC97,COMPUTE_NET_VIF_MODEL_SPAPR_VLAN,COMPUTE_IMAGE_TYPE_ISO,COMPUTE_VIOMMU_MODEL_INTEL,COMPUTE_NET_VIF_MODEL_PCNET,HW_CPU_X86_F16C,COMPUTE_NET_VIF_MODEL_IGB,COMPUTE_ACCELERATORS,COMPUTE_NET_VIF_MODEL_RTL8139,HW_CPU_X86_AMD_SVM,COMPUTE_NET_VIF_MODEL_E1000E,HW_CPU_X86_CLMUL,COMPUTE_NET_VIF_MODEL_NE2K_PCI,HW_CPU_X86_SHA,COMPUTE_NET_VIF_MODEL_VIRTIO,HW_CPU_X86_AVX,COMPUTE_ADDRESS_SPACE_EMULATED,COMPUTE_NET_VIF_MODEL_E1000,COMPUTE_IMAGE_TYPE_ARI,COMPUTE_NET_ATTACH_INTERFACE,HW_CPU_X86_SSE4A,COMPUTE_TRUSTED_CERTS,COMPUTE_GRAPHICS_MODEL_VGA,HW_CPU_X86_SSE2,COMPUTE_GRAPHICS_MODEL_VIRTIO,COMPUTE_STORAGE_VIRTIO_FS,COMPUTE_SECURITY_STATELESS_FIRMWARE,COMPUTE_NET_ATTACH_INTERFACE_WITH_TAG,HW_CPU_X86_SSE,HW_CPU_X86_SVM,COMPUTE_STORAGE_BUS_VIRTIO,COMPUTE_NET_VIRTIO_PACKED,COMPUTE_SECURITY_TPM_2_0,HW_CPU_X86_MMX,HW_CPU_X86_FMA3,COMPUTE_VOLUME_EXTEND,COMPUTE_DEVICE_TAGGING,HW_CPU_X86_SSE42,COMPUTE_SOCKET_PCI_NUMA_AFFINITY,COMPUTE_IMAGE_TYPE_AKI,COMPUTE_STORAGE_BUS_SCSI,COMPUTE_ADDRESS_SPACE_PASSTHROUGH,COMPUTE_RESCUE_BFV,COMPUTE_SOUND_MODEL_SB16,HW_CPU_X86_SSE41,COMPUTE_STORAGE_BUS_IDE,HW_CPU_X86_ABM _refresh_associations /usr/lib/python3.12/site-packages/nova/scheduler/client/report.py:843
Oct 09 16:41:42 compute-0 nova_compute[117331]: 2025-10-09 16:41:42.326 2 DEBUG nova.compute.provider_tree [None req-92a13bed-c081-4c7b-87f3-bafebefb79f2 - - - - - -] Inventory has not changed in ProviderTree for provider: 593051b8-2000-437f-a915-2616fc8b1671 update_inventory /usr/lib/python3.12/site-packages/nova/compute/provider_tree.py:180
Oct 09 16:41:42 compute-0 nova_compute[117331]: 2025-10-09 16:41:42.451 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 09 16:41:42 compute-0 nova_compute[117331]: 2025-10-09 16:41:42.745 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 09 16:41:42 compute-0 nova_compute[117331]: 2025-10-09 16:41:42.832 2 DEBUG nova.scheduler.client.report [None req-92a13bed-c081-4c7b-87f3-bafebefb79f2 - - - - - -] Inventory has not changed for provider 593051b8-2000-437f-a915-2616fc8b1671 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 79, 'reserved': 1, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.12/site-packages/nova/scheduler/client/report.py:958
Oct 09 16:41:43 compute-0 nova_compute[117331]: 2025-10-09 16:41:43.341 2 DEBUG nova.compute.resource_tracker [None req-92a13bed-c081-4c7b-87f3-bafebefb79f2 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.12/site-packages/nova/compute/resource_tracker.py:1097
Oct 09 16:41:43 compute-0 nova_compute[117331]: 2025-10-09 16:41:43.342 2 DEBUG oslo_concurrency.lockutils [None req-92a13bed-c081-4c7b-87f3-bafebefb79f2 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 2.169s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:424
Oct 09 16:41:44 compute-0 nova_compute[117331]: 2025-10-09 16:41:44.343 2 DEBUG oslo_service.periodic_task [None req-92a13bed-c081-4c7b-87f3-bafebefb79f2 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.12/site-packages/oslo_service/periodic_task.py:210
Oct 09 16:41:44 compute-0 nova_compute[117331]: 2025-10-09 16:41:44.344 2 DEBUG nova.compute.manager [None req-92a13bed-c081-4c7b-87f3-bafebefb79f2 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.12/site-packages/nova/compute/manager.py:11228
Oct 09 16:41:45 compute-0 nova_compute[117331]: 2025-10-09 16:41:45.308 2 DEBUG oslo_service.periodic_task [None req-92a13bed-c081-4c7b-87f3-bafebefb79f2 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.12/site-packages/oslo_service/periodic_task.py:210
Oct 09 16:41:45 compute-0 podman[154332]: 2025-10-09 16:41:45.839293735 +0000 UTC m=+0.062094744 container health_status 0e966c7a283689daf23c5db5017f23f685ee836319240188a9f70ddd246563c5 (image=38.102.83.66:5001/podified-master-centos10/openstack-neutron-metadata-agent-ovn:watcher_latest, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': '38.102.83.66:5001/podified-master-centos10/openstack-neutron-metadata-agent-ovn:watcher_latest', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, tcib_build_tag=watcher_latest, io.buildah.version=1.41.4, managed_by=edpm_ansible, org.label-schema.license=GPLv2, config_id=ovn_metadata_agent, org.label-schema.build-date=20251007, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true)
Oct 09 16:41:45 compute-0 podman[154333]: 2025-10-09 16:41:45.855318434 +0000 UTC m=+0.073656561 container health_status 8227728d23f374cb9528be6ddf01dff38d8c792912a17b0d137932a18390b820 (image=38.102.83.66:5001/podified-master-centos10/openstack-iscsid:watcher_latest, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': '38.102.83.66:5001/podified-master-centos10/openstack-iscsid:watcher_latest', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, org.label-schema.build-date=20251007, config_id=iscsid, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=watcher_latest, tcib_managed=true, container_name=iscsid, io.buildah.version=1.41.4, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2)
Oct 09 16:41:46 compute-0 nova_compute[117331]: 2025-10-09 16:41:46.306 2 DEBUG oslo_service.periodic_task [None req-92a13bed-c081-4c7b-87f3-bafebefb79f2 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.12/site-packages/oslo_service/periodic_task.py:210
Oct 09 16:41:46 compute-0 nova_compute[117331]: 2025-10-09 16:41:46.595 2 DEBUG nova.compute.manager [req-834a68de-9494-40ee-b93c-a37f0546dced req-578b8b5f-a70b-4bc5-b15f-84fbe7d2e477 ee15961ad2be4bada6c106ac4f69f422 c076c42271004e7aa86e84416e33f826 - - default default] [instance: e49a1e89-0c38-4887-9bdf-64f07982ab22] Received event network-vif-unplugged-3bf58985-c8d6-4c25-bed7-5fe8a78a1239 external_instance_event /usr/lib/python3.12/site-packages/nova/compute/manager.py:11812
Oct 09 16:41:46 compute-0 nova_compute[117331]: 2025-10-09 16:41:46.595 2 DEBUG oslo_concurrency.lockutils [req-834a68de-9494-40ee-b93c-a37f0546dced req-578b8b5f-a70b-4bc5-b15f-84fbe7d2e477 ee15961ad2be4bada6c106ac4f69f422 c076c42271004e7aa86e84416e33f826 - - default default] Acquiring lock "e49a1e89-0c38-4887-9bdf-64f07982ab22-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:405
Oct 09 16:41:46 compute-0 nova_compute[117331]: 2025-10-09 16:41:46.595 2 DEBUG oslo_concurrency.lockutils [req-834a68de-9494-40ee-b93c-a37f0546dced req-578b8b5f-a70b-4bc5-b15f-84fbe7d2e477 ee15961ad2be4bada6c106ac4f69f422 c076c42271004e7aa86e84416e33f826 - - default default] Lock "e49a1e89-0c38-4887-9bdf-64f07982ab22-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:410
Oct 09 16:41:46 compute-0 nova_compute[117331]: 2025-10-09 16:41:46.596 2 DEBUG oslo_concurrency.lockutils [req-834a68de-9494-40ee-b93c-a37f0546dced req-578b8b5f-a70b-4bc5-b15f-84fbe7d2e477 ee15961ad2be4bada6c106ac4f69f422 c076c42271004e7aa86e84416e33f826 - - default default] Lock "e49a1e89-0c38-4887-9bdf-64f07982ab22-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:424
Oct 09 16:41:46 compute-0 nova_compute[117331]: 2025-10-09 16:41:46.596 2 DEBUG nova.compute.manager [req-834a68de-9494-40ee-b93c-a37f0546dced req-578b8b5f-a70b-4bc5-b15f-84fbe7d2e477 ee15961ad2be4bada6c106ac4f69f422 c076c42271004e7aa86e84416e33f826 - - default default] [instance: e49a1e89-0c38-4887-9bdf-64f07982ab22] No event matching network-vif-unplugged-3bf58985-c8d6-4c25-bed7-5fe8a78a1239 in dict_keys([('network-vif-plugged', '3bf58985-c8d6-4c25-bed7-5fe8a78a1239')]) pop_instance_event /usr/lib/python3.12/site-packages/nova/compute/manager.py:349
Oct 09 16:41:46 compute-0 nova_compute[117331]: 2025-10-09 16:41:46.596 2 DEBUG nova.compute.manager [req-834a68de-9494-40ee-b93c-a37f0546dced req-578b8b5f-a70b-4bc5-b15f-84fbe7d2e477 ee15961ad2be4bada6c106ac4f69f422 c076c42271004e7aa86e84416e33f826 - - default default] [instance: e49a1e89-0c38-4887-9bdf-64f07982ab22] Received event network-vif-unplugged-3bf58985-c8d6-4c25-bed7-5fe8a78a1239 for instance with task_state migrating. _process_instance_event /usr/lib/python3.12/site-packages/nova/compute/manager.py:11590
Oct 09 16:41:47 compute-0 nova_compute[117331]: 2025-10-09 16:41:47.455 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 09 16:41:47 compute-0 nova_compute[117331]: 2025-10-09 16:41:47.611 2 INFO nova.compute.manager [None req-9c6f36d9-f824-403c-8831-426b13d31fa0 005258eefa5345409efe13f6a1de22ab c076c42271004e7aa86e84416e33f826 - - default default] [instance: e49a1e89-0c38-4887-9bdf-64f07982ab22] Took 6.02 seconds for pre_live_migration on destination host compute-1.ctlplane.example.com.
Oct 09 16:41:47 compute-0 nova_compute[117331]: 2025-10-09 16:41:47.749 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 09 16:41:48 compute-0 nova_compute[117331]: 2025-10-09 16:41:48.663 2 DEBUG nova.compute.manager [req-23a7b619-9076-4a50-8fe6-64eff8a07341 req-71f87633-beda-4aa4-b9b5-cd457cae89f6 ee15961ad2be4bada6c106ac4f69f422 c076c42271004e7aa86e84416e33f826 - - default default] [instance: e49a1e89-0c38-4887-9bdf-64f07982ab22] Received event network-vif-plugged-3bf58985-c8d6-4c25-bed7-5fe8a78a1239 external_instance_event /usr/lib/python3.12/site-packages/nova/compute/manager.py:11812
Oct 09 16:41:48 compute-0 nova_compute[117331]: 2025-10-09 16:41:48.664 2 DEBUG oslo_concurrency.lockutils [req-23a7b619-9076-4a50-8fe6-64eff8a07341 req-71f87633-beda-4aa4-b9b5-cd457cae89f6 ee15961ad2be4bada6c106ac4f69f422 c076c42271004e7aa86e84416e33f826 - - default default] Acquiring lock "e49a1e89-0c38-4887-9bdf-64f07982ab22-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:405
Oct 09 16:41:48 compute-0 nova_compute[117331]: 2025-10-09 16:41:48.664 2 DEBUG oslo_concurrency.lockutils [req-23a7b619-9076-4a50-8fe6-64eff8a07341 req-71f87633-beda-4aa4-b9b5-cd457cae89f6 ee15961ad2be4bada6c106ac4f69f422 c076c42271004e7aa86e84416e33f826 - - default default] Lock "e49a1e89-0c38-4887-9bdf-64f07982ab22-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:410
Oct 09 16:41:48 compute-0 nova_compute[117331]: 2025-10-09 16:41:48.664 2 DEBUG oslo_concurrency.lockutils [req-23a7b619-9076-4a50-8fe6-64eff8a07341 req-71f87633-beda-4aa4-b9b5-cd457cae89f6 ee15961ad2be4bada6c106ac4f69f422 c076c42271004e7aa86e84416e33f826 - - default default] Lock "e49a1e89-0c38-4887-9bdf-64f07982ab22-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:424
Oct 09 16:41:48 compute-0 nova_compute[117331]: 2025-10-09 16:41:48.664 2 DEBUG nova.compute.manager [req-23a7b619-9076-4a50-8fe6-64eff8a07341 req-71f87633-beda-4aa4-b9b5-cd457cae89f6 ee15961ad2be4bada6c106ac4f69f422 c076c42271004e7aa86e84416e33f826 - - default default] [instance: e49a1e89-0c38-4887-9bdf-64f07982ab22] Processing event network-vif-plugged-3bf58985-c8d6-4c25-bed7-5fe8a78a1239 _process_instance_event /usr/lib/python3.12/site-packages/nova/compute/manager.py:11572
Oct 09 16:41:48 compute-0 nova_compute[117331]: 2025-10-09 16:41:48.665 2 DEBUG nova.compute.manager [req-23a7b619-9076-4a50-8fe6-64eff8a07341 req-71f87633-beda-4aa4-b9b5-cd457cae89f6 ee15961ad2be4bada6c106ac4f69f422 c076c42271004e7aa86e84416e33f826 - - default default] [instance: e49a1e89-0c38-4887-9bdf-64f07982ab22] Received event network-changed-3bf58985-c8d6-4c25-bed7-5fe8a78a1239 external_instance_event /usr/lib/python3.12/site-packages/nova/compute/manager.py:11812
Oct 09 16:41:48 compute-0 nova_compute[117331]: 2025-10-09 16:41:48.665 2 DEBUG nova.compute.manager [req-23a7b619-9076-4a50-8fe6-64eff8a07341 req-71f87633-beda-4aa4-b9b5-cd457cae89f6 ee15961ad2be4bada6c106ac4f69f422 c076c42271004e7aa86e84416e33f826 - - default default] [instance: e49a1e89-0c38-4887-9bdf-64f07982ab22] Refreshing instance network info cache due to event network-changed-3bf58985-c8d6-4c25-bed7-5fe8a78a1239. external_instance_event /usr/lib/python3.12/site-packages/nova/compute/manager.py:11817
Oct 09 16:41:48 compute-0 nova_compute[117331]: 2025-10-09 16:41:48.665 2 DEBUG oslo_concurrency.lockutils [req-23a7b619-9076-4a50-8fe6-64eff8a07341 req-71f87633-beda-4aa4-b9b5-cd457cae89f6 ee15961ad2be4bada6c106ac4f69f422 c076c42271004e7aa86e84416e33f826 - - default default] Acquiring lock "refresh_cache-e49a1e89-0c38-4887-9bdf-64f07982ab22" lock /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:313
Oct 09 16:41:48 compute-0 nova_compute[117331]: 2025-10-09 16:41:48.665 2 DEBUG oslo_concurrency.lockutils [req-23a7b619-9076-4a50-8fe6-64eff8a07341 req-71f87633-beda-4aa4-b9b5-cd457cae89f6 ee15961ad2be4bada6c106ac4f69f422 c076c42271004e7aa86e84416e33f826 - - default default] Acquired lock "refresh_cache-e49a1e89-0c38-4887-9bdf-64f07982ab22" lock /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:316
Oct 09 16:41:48 compute-0 nova_compute[117331]: 2025-10-09 16:41:48.666 2 DEBUG nova.network.neutron [req-23a7b619-9076-4a50-8fe6-64eff8a07341 req-71f87633-beda-4aa4-b9b5-cd457cae89f6 ee15961ad2be4bada6c106ac4f69f422 c076c42271004e7aa86e84416e33f826 - - default default] [instance: e49a1e89-0c38-4887-9bdf-64f07982ab22] Refreshing network info cache for port 3bf58985-c8d6-4c25-bed7-5fe8a78a1239 _get_instance_nw_info /usr/lib/python3.12/site-packages/nova/network/neutron.py:2067
Oct 09 16:41:48 compute-0 nova_compute[117331]: 2025-10-09 16:41:48.668 2 DEBUG nova.compute.manager [None req-9c6f36d9-f824-403c-8831-426b13d31fa0 005258eefa5345409efe13f6a1de22ab c076c42271004e7aa86e84416e33f826 - - default default] [instance: e49a1e89-0c38-4887-9bdf-64f07982ab22] Instance event wait completed in 1 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.12/site-packages/nova/compute/manager.py:602
Oct 09 16:41:49 compute-0 nova_compute[117331]: 2025-10-09 16:41:49.177 2 WARNING neutronclient.v2_0.client [req-23a7b619-9076-4a50-8fe6-64eff8a07341 req-71f87633-beda-4aa4-b9b5-cd457cae89f6 ee15961ad2be4bada6c106ac4f69f422 c076c42271004e7aa86e84416e33f826 - - default default] The python binding code in neutronclient is deprecated in favor of OpenstackSDK, please use that as this will be removed in a future release.
Oct 09 16:41:49 compute-0 nova_compute[117331]: 2025-10-09 16:41:49.181 2 DEBUG nova.compute.manager [None req-9c6f36d9-f824-403c-8831-426b13d31fa0 005258eefa5345409efe13f6a1de22ab c076c42271004e7aa86e84416e33f826 - - default default] live_migration data is LibvirtLiveMigrateData(bdms=[],block_migration=True,disk_available_mb=74752,disk_over_commit=<?>,dst_cpu_shared_set_info=set([]),dst_numa_info=<?>,dst_supports_mdev_live_migration=True,dst_supports_numa_live_migration=<?>,dst_wants_file_backed_memory=False,file_backed_memory_discard=<?>,filename='tmpfkqomhp4',graphics_listen_addr_spice=127.0.0.1,graphics_listen_addr_vnc=::,image_type='qcow2',instance_relative_path='e49a1e89-0c38-4887-9bdf-64f07982ab22',is_shared_block_storage=False,is_shared_instance_path=False,is_volume_backed=False,migration=Migration(789c03a1-9329-4a25-b1ed-48eab8eb714b),old_vol_attachment_ids={},pci_dev_map_src_dst={},serial_listen_addr=None,serial_listen_ports=[],source_mdev_types=<?>,src_supports_native_luks=<?>,src_supports_numa_live_migration=<?>,supported_perf_events=[],target_connect_addr=None,target_mdevs=<?>,vifs=[VIFMigrateData],wait_for_vif_plugged=True) _do_live_migration /usr/lib/python3.12/site-packages/nova/compute/manager.py:9659
Oct 09 16:41:49 compute-0 nova_compute[117331]: 2025-10-09 16:41:49.695 2 DEBUG nova.objects.instance [None req-9c6f36d9-f824-403c-8831-426b13d31fa0 005258eefa5345409efe13f6a1de22ab c076c42271004e7aa86e84416e33f826 - - default default] Lazy-loading 'migration_context' on Instance uuid e49a1e89-0c38-4887-9bdf-64f07982ab22 obj_load_attr /usr/lib/python3.12/site-packages/nova/objects/instance.py:1141
Oct 09 16:41:49 compute-0 nova_compute[117331]: 2025-10-09 16:41:49.695 2 DEBUG nova.virt.libvirt.driver [None req-9c6f36d9-f824-403c-8831-426b13d31fa0 005258eefa5345409efe13f6a1de22ab c076c42271004e7aa86e84416e33f826 - - default default] [instance: e49a1e89-0c38-4887-9bdf-64f07982ab22] Starting monitoring of live migration _live_migration /usr/lib/python3.12/site-packages/nova/virt/libvirt/driver.py:11543
Oct 09 16:41:49 compute-0 nova_compute[117331]: 2025-10-09 16:41:49.696 2 DEBUG nova.virt.libvirt.driver [None req-9c6f36d9-f824-403c-8831-426b13d31fa0 005258eefa5345409efe13f6a1de22ab c076c42271004e7aa86e84416e33f826 - - default default] [instance: e49a1e89-0c38-4887-9bdf-64f07982ab22] Operation thread is still running _live_migration_monitor /usr/lib/python3.12/site-packages/nova/virt/libvirt/driver.py:11343
Oct 09 16:41:49 compute-0 nova_compute[117331]: 2025-10-09 16:41:49.697 2 DEBUG nova.virt.libvirt.driver [None req-9c6f36d9-f824-403c-8831-426b13d31fa0 005258eefa5345409efe13f6a1de22ab c076c42271004e7aa86e84416e33f826 - - default default] [instance: e49a1e89-0c38-4887-9bdf-64f07982ab22] Migration not running yet _live_migration_monitor /usr/lib/python3.12/site-packages/nova/virt/libvirt/driver.py:11352
Oct 09 16:41:49 compute-0 nova_compute[117331]: 2025-10-09 16:41:49.765 2 WARNING neutronclient.v2_0.client [req-23a7b619-9076-4a50-8fe6-64eff8a07341 req-71f87633-beda-4aa4-b9b5-cd457cae89f6 ee15961ad2be4bada6c106ac4f69f422 c076c42271004e7aa86e84416e33f826 - - default default] The python binding code in neutronclient is deprecated in favor of OpenstackSDK, please use that as this will be removed in a future release.
Oct 09 16:41:50 compute-0 nova_compute[117331]: 2025-10-09 16:41:50.026 2 DEBUG nova.network.neutron [req-23a7b619-9076-4a50-8fe6-64eff8a07341 req-71f87633-beda-4aa4-b9b5-cd457cae89f6 ee15961ad2be4bada6c106ac4f69f422 c076c42271004e7aa86e84416e33f826 - - default default] [instance: e49a1e89-0c38-4887-9bdf-64f07982ab22] Updated VIF entry in instance network info cache for port 3bf58985-c8d6-4c25-bed7-5fe8a78a1239. _build_network_info_model /usr/lib/python3.12/site-packages/nova/network/neutron.py:3542
Oct 09 16:41:50 compute-0 nova_compute[117331]: 2025-10-09 16:41:50.026 2 DEBUG nova.network.neutron [req-23a7b619-9076-4a50-8fe6-64eff8a07341 req-71f87633-beda-4aa4-b9b5-cd457cae89f6 ee15961ad2be4bada6c106ac4f69f422 c076c42271004e7aa86e84416e33f826 - - default default] [instance: e49a1e89-0c38-4887-9bdf-64f07982ab22] Updating instance_info_cache with network_info: [{"id": "3bf58985-c8d6-4c25-bed7-5fe8a78a1239", "address": "fa:16:3e:11:4c:7f", "network": {"id": "faa2f899-e3f1-48c6-ac37-859a6fb5c6cc", "bridge": "br-int", "label": "tempest-TestExecuteZoneMigrationStrategy-926478805-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "655616b8f80249ceab702bd3d943237d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap3bf58985-c8", "ovs_interfaceid": "3bf58985-c8d6-4c25-bed7-5fe8a78a1239", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {"migrating_to": "compute-1.ctlplane.example.com"}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.12/site-packages/nova/network/neutron.py:116
Oct 09 16:41:50 compute-0 nova_compute[117331]: 2025-10-09 16:41:50.199 2 DEBUG nova.virt.libvirt.driver [None req-9c6f36d9-f824-403c-8831-426b13d31fa0 005258eefa5345409efe13f6a1de22ab c076c42271004e7aa86e84416e33f826 - - default default] [instance: e49a1e89-0c38-4887-9bdf-64f07982ab22] Operation thread is still running _live_migration_monitor /usr/lib/python3.12/site-packages/nova/virt/libvirt/driver.py:11343
Oct 09 16:41:50 compute-0 nova_compute[117331]: 2025-10-09 16:41:50.200 2 DEBUG nova.virt.libvirt.driver [None req-9c6f36d9-f824-403c-8831-426b13d31fa0 005258eefa5345409efe13f6a1de22ab c076c42271004e7aa86e84416e33f826 - - default default] [instance: e49a1e89-0c38-4887-9bdf-64f07982ab22] Migration not running yet _live_migration_monitor /usr/lib/python3.12/site-packages/nova/virt/libvirt/driver.py:11352
Oct 09 16:41:50 compute-0 nova_compute[117331]: 2025-10-09 16:41:50.207 2 DEBUG nova.virt.libvirt.vif [None req-9c6f36d9-f824-403c-8831-426b13d31fa0 005258eefa5345409efe13f6a1de22ab c076c42271004e7aa86e84416e33f826 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,compute_id=1,config_drive='True',created_at=2025-10-09T16:40:45Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description=None,display_name='tempest-TestExecuteZoneMigrationStrategy-server-484584536',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testexecutezonemigrationstrategy-server-484584536',id=30,image_ref='b7d6e0af-25e4-4227-9dc6-43143898ceee',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-10-09T16:41:02Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=1,progress=0,project_id='fca25ca2f317463c909e30c8b2c188d1',ramdisk_id='',reservation_id='r-4loi0xvy',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,manager,admin,member',image_base_image_ref='b7d6e0af-25e4-4227-9dc6-43143898ceee',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='vir
tio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestExecuteZoneMigrationStrategy-1067041161',owner_user_name='tempest-TestExecuteZoneMigrationStrategy-1067041161-project-admin'},tags=<?>,task_state='migrating',terminated_at=None,trusted_certs=<?>,updated_at=2025-10-09T16:41:02Z,user_data=None,user_id='05384d72bc894b20b1cc128bf76382b2',uuid=e49a1e89-0c38-4887-9bdf-64f07982ab22,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "3bf58985-c8d6-4c25-bed7-5fe8a78a1239", "address": "fa:16:3e:11:4c:7f", "network": {"id": "faa2f899-e3f1-48c6-ac37-859a6fb5c6cc", "bridge": "br-int", "label": "tempest-TestExecuteZoneMigrationStrategy-926478805-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "655616b8f80249ceab702bd3d943237d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system"}, "devname": "tap3bf58985-c8", "ovs_interfaceid": "3bf58985-c8d6-4c25-bed7-5fe8a78a1239", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {"os_vif_delegation": true}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.12/site-packages/nova/virt/libvirt/vif.py:574
Oct 09 16:41:50 compute-0 nova_compute[117331]: 2025-10-09 16:41:50.208 2 DEBUG nova.network.os_vif_util [None req-9c6f36d9-f824-403c-8831-426b13d31fa0 005258eefa5345409efe13f6a1de22ab c076c42271004e7aa86e84416e33f826 - - default default] Converting VIF {"id": "3bf58985-c8d6-4c25-bed7-5fe8a78a1239", "address": "fa:16:3e:11:4c:7f", "network": {"id": "faa2f899-e3f1-48c6-ac37-859a6fb5c6cc", "bridge": "br-int", "label": "tempest-TestExecuteZoneMigrationStrategy-926478805-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "655616b8f80249ceab702bd3d943237d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system"}, "devname": "tap3bf58985-c8", "ovs_interfaceid": "3bf58985-c8d6-4c25-bed7-5fe8a78a1239", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {"os_vif_delegation": true}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.12/site-packages/nova/network/os_vif_util.py:511
Oct 09 16:41:50 compute-0 nova_compute[117331]: 2025-10-09 16:41:50.209 2 DEBUG nova.network.os_vif_util [None req-9c6f36d9-f824-403c-8831-426b13d31fa0 005258eefa5345409efe13f6a1de22ab c076c42271004e7aa86e84416e33f826 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:11:4c:7f,bridge_name='br-int',has_traffic_filtering=True,id=3bf58985-c8d6-4c25-bed7-5fe8a78a1239,network=Network(faa2f899-e3f1-48c6-ac37-859a6fb5c6cc),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap3bf58985-c8') nova_to_osvif_vif /usr/lib/python3.12/site-packages/nova/network/os_vif_util.py:548
Oct 09 16:41:50 compute-0 nova_compute[117331]: 2025-10-09 16:41:50.210 2 DEBUG nova.virt.libvirt.migration [None req-9c6f36d9-f824-403c-8831-426b13d31fa0 005258eefa5345409efe13f6a1de22ab c076c42271004e7aa86e84416e33f826 - - default default] [instance: e49a1e89-0c38-4887-9bdf-64f07982ab22] Updating guest XML with vif config: <interface type="ethernet">
Oct 09 16:41:50 compute-0 nova_compute[117331]:   <mac address="fa:16:3e:11:4c:7f"/>
Oct 09 16:41:50 compute-0 nova_compute[117331]:   <model type="virtio"/>
Oct 09 16:41:50 compute-0 nova_compute[117331]:   <driver name="vhost" rx_queue_size="512"/>
Oct 09 16:41:50 compute-0 nova_compute[117331]:   <mtu size="1442"/>
Oct 09 16:41:50 compute-0 nova_compute[117331]:   <target dev="tap3bf58985-c8"/>
Oct 09 16:41:50 compute-0 nova_compute[117331]: </interface>
Oct 09 16:41:50 compute-0 nova_compute[117331]:  _update_vif_xml /usr/lib/python3.12/site-packages/nova/virt/libvirt/migration.py:534
Oct 09 16:41:50 compute-0 nova_compute[117331]: 2025-10-09 16:41:50.211 2 DEBUG nova.virt.libvirt.migration [None req-9c6f36d9-f824-403c-8831-426b13d31fa0 005258eefa5345409efe13f6a1de22ab c076c42271004e7aa86e84416e33f826 - - default default] _remove_cpu_shared_set_xml input xml=<domain type="kvm">
Oct 09 16:41:50 compute-0 nova_compute[117331]:   <name>instance-0000001e</name>
Oct 09 16:41:50 compute-0 nova_compute[117331]:   <uuid>e49a1e89-0c38-4887-9bdf-64f07982ab22</uuid>
Oct 09 16:41:50 compute-0 nova_compute[117331]:   <metadata>
Oct 09 16:41:50 compute-0 nova_compute[117331]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Oct 09 16:41:50 compute-0 nova_compute[117331]:       <nova:package version="32.1.0-0.20251008143712.076498e.el10"/>
Oct 09 16:41:50 compute-0 nova_compute[117331]:       <nova:name>tempest-TestExecuteZoneMigrationStrategy-server-484584536</nova:name>
Oct 09 16:41:50 compute-0 nova_compute[117331]:       <nova:creationTime>2025-10-09 16:40:57</nova:creationTime>
Oct 09 16:41:50 compute-0 nova_compute[117331]:       <nova:flavor name="m1.nano" id="5aeb5bf9-70f3-4ab6-b418-2c3319a7d2b3">
Oct 09 16:41:50 compute-0 nova_compute[117331]:         <nova:memory>128</nova:memory>
Oct 09 16:41:50 compute-0 nova_compute[117331]:         <nova:disk>1</nova:disk>
Oct 09 16:41:50 compute-0 nova_compute[117331]:         <nova:swap>0</nova:swap>
Oct 09 16:41:50 compute-0 nova_compute[117331]:         <nova:ephemeral>0</nova:ephemeral>
Oct 09 16:41:50 compute-0 nova_compute[117331]:         <nova:vcpus>1</nova:vcpus>
Oct 09 16:41:50 compute-0 nova_compute[117331]:         <nova:extraSpecs>
Oct 09 16:41:50 compute-0 nova_compute[117331]:           <nova:extraSpec name="hw_rng:allowed">True</nova:extraSpec>
Oct 09 16:41:50 compute-0 nova_compute[117331]:         </nova:extraSpecs>
Oct 09 16:41:50 compute-0 nova_compute[117331]:       </nova:flavor>
Oct 09 16:41:50 compute-0 nova_compute[117331]:       <nova:image uuid="b7d6e0af-25e4-4227-9dc6-43143898ceee">
Oct 09 16:41:50 compute-0 nova_compute[117331]:         <nova:containerFormat>bare</nova:containerFormat>
Oct 09 16:41:50 compute-0 nova_compute[117331]:         <nova:diskFormat>qcow2</nova:diskFormat>
Oct 09 16:41:50 compute-0 nova_compute[117331]:         <nova:minDisk>1</nova:minDisk>
Oct 09 16:41:50 compute-0 nova_compute[117331]:         <nova:minRam>0</nova:minRam>
Oct 09 16:41:50 compute-0 nova_compute[117331]:         <nova:properties>
Oct 09 16:41:50 compute-0 nova_compute[117331]:           <nova:property name="hw_rng_model">virtio</nova:property>
Oct 09 16:41:50 compute-0 nova_compute[117331]:         </nova:properties>
Oct 09 16:41:50 compute-0 nova_compute[117331]:       </nova:image>
Oct 09 16:41:50 compute-0 nova_compute[117331]:       <nova:owner>
Oct 09 16:41:50 compute-0 nova_compute[117331]:         <nova:user uuid="05384d72bc894b20b1cc128bf76382b2">tempest-TestExecuteZoneMigrationStrategy-1067041161-project-admin</nova:user>
Oct 09 16:41:50 compute-0 nova_compute[117331]:         <nova:project uuid="fca25ca2f317463c909e30c8b2c188d1">tempest-TestExecuteZoneMigrationStrategy-1067041161</nova:project>
Oct 09 16:41:50 compute-0 nova_compute[117331]:       </nova:owner>
Oct 09 16:41:50 compute-0 nova_compute[117331]:       <nova:root type="image" uuid="b7d6e0af-25e4-4227-9dc6-43143898ceee"/>
Oct 09 16:41:50 compute-0 nova_compute[117331]:       <nova:ports>
Oct 09 16:41:50 compute-0 nova_compute[117331]:         <nova:port uuid="3bf58985-c8d6-4c25-bed7-5fe8a78a1239">
Oct 09 16:41:50 compute-0 nova_compute[117331]:           <nova:ip type="fixed" address="10.100.0.3" ipVersion="4"/>
Oct 09 16:41:50 compute-0 nova_compute[117331]:         </nova:port>
Oct 09 16:41:50 compute-0 nova_compute[117331]:       </nova:ports>
Oct 09 16:41:50 compute-0 nova_compute[117331]:     </nova:instance>
Oct 09 16:41:50 compute-0 nova_compute[117331]:   </metadata>
Oct 09 16:41:50 compute-0 nova_compute[117331]:   <memory unit="KiB">131072</memory>
Oct 09 16:41:50 compute-0 nova_compute[117331]:   <currentMemory unit="KiB">131072</currentMemory>
Oct 09 16:41:50 compute-0 nova_compute[117331]:   <vcpu placement="static">1</vcpu>
Oct 09 16:41:50 compute-0 nova_compute[117331]:   <resource>
Oct 09 16:41:50 compute-0 nova_compute[117331]:     <partition>/machine</partition>
Oct 09 16:41:50 compute-0 nova_compute[117331]:   </resource>
Oct 09 16:41:50 compute-0 nova_compute[117331]:   <sysinfo type="smbios">
Oct 09 16:41:50 compute-0 nova_compute[117331]:     <system>
Oct 09 16:41:50 compute-0 nova_compute[117331]:       <entry name="manufacturer">RDO</entry>
Oct 09 16:41:50 compute-0 nova_compute[117331]:       <entry name="product">OpenStack Compute</entry>
Oct 09 16:41:50 compute-0 nova_compute[117331]:       <entry name="version">32.1.0-0.20251008143712.076498e.el10</entry>
Oct 09 16:41:50 compute-0 nova_compute[117331]:       <entry name="serial">e49a1e89-0c38-4887-9bdf-64f07982ab22</entry>
Oct 09 16:41:50 compute-0 nova_compute[117331]:       <entry name="uuid">e49a1e89-0c38-4887-9bdf-64f07982ab22</entry>
Oct 09 16:41:50 compute-0 nova_compute[117331]:       <entry name="family">Virtual Machine</entry>
Oct 09 16:41:50 compute-0 nova_compute[117331]:     </system>
Oct 09 16:41:50 compute-0 nova_compute[117331]:   </sysinfo>
Oct 09 16:41:50 compute-0 nova_compute[117331]:   <os>
Oct 09 16:41:50 compute-0 nova_compute[117331]:     <type arch="x86_64" machine="pc-q35-rhel9.6.0">hvm</type>
Oct 09 16:41:50 compute-0 nova_compute[117331]:     <boot dev="hd"/>
Oct 09 16:41:50 compute-0 nova_compute[117331]:     <smbios mode="sysinfo"/>
Oct 09 16:41:50 compute-0 nova_compute[117331]:   </os>
Oct 09 16:41:50 compute-0 nova_compute[117331]:   <features>
Oct 09 16:41:50 compute-0 nova_compute[117331]:     <acpi/>
Oct 09 16:41:50 compute-0 nova_compute[117331]:     <apic/>
Oct 09 16:41:50 compute-0 nova_compute[117331]:     <vmcoreinfo state="on"/>
Oct 09 16:41:50 compute-0 nova_compute[117331]:   </features>
Oct 09 16:41:50 compute-0 nova_compute[117331]:   <cpu mode="host-model" check="partial">
Oct 09 16:41:50 compute-0 nova_compute[117331]:     <topology sockets="1" dies="1" clusters="1" cores="1" threads="1"/>
Oct 09 16:41:50 compute-0 nova_compute[117331]:   </cpu>
Oct 09 16:41:50 compute-0 nova_compute[117331]:   <clock offset="utc">
Oct 09 16:41:50 compute-0 nova_compute[117331]:     <timer name="pit" tickpolicy="delay"/>
Oct 09 16:41:50 compute-0 nova_compute[117331]:     <timer name="rtc" tickpolicy="catchup"/>
Oct 09 16:41:50 compute-0 nova_compute[117331]:     <timer name="hpet" present="no"/>
Oct 09 16:41:50 compute-0 nova_compute[117331]:   </clock>
Oct 09 16:41:50 compute-0 nova_compute[117331]:   <on_poweroff>destroy</on_poweroff>
Oct 09 16:41:50 compute-0 nova_compute[117331]:   <on_reboot>restart</on_reboot>
Oct 09 16:41:50 compute-0 nova_compute[117331]:   <on_crash>destroy</on_crash>
Oct 09 16:41:50 compute-0 nova_compute[117331]:   <devices>
Oct 09 16:41:50 compute-0 nova_compute[117331]:     <emulator>/usr/libexec/qemu-kvm</emulator>
Oct 09 16:41:50 compute-0 nova_compute[117331]:     <disk type="file" device="disk">
Oct 09 16:41:50 compute-0 nova_compute[117331]:       <driver name="qemu" type="qcow2" cache="none"/>
Oct 09 16:41:50 compute-0 nova_compute[117331]:       <source file="/var/lib/nova/instances/e49a1e89-0c38-4887-9bdf-64f07982ab22/disk"/>
Oct 09 16:41:50 compute-0 nova_compute[117331]:       <target dev="vda" bus="virtio"/>
Oct 09 16:41:50 compute-0 nova_compute[117331]:       <address type="pci" domain="0x0000" bus="0x03" slot="0x00" function="0x0"/>
Oct 09 16:41:50 compute-0 nova_compute[117331]:     </disk>
Oct 09 16:41:50 compute-0 nova_compute[117331]:     <disk type="file" device="cdrom">
Oct 09 16:41:50 compute-0 nova_compute[117331]:       <driver name="qemu" type="raw" cache="none"/>
Oct 09 16:41:50 compute-0 nova_compute[117331]:       <source file="/var/lib/nova/instances/e49a1e89-0c38-4887-9bdf-64f07982ab22/disk.config"/>
Oct 09 16:41:50 compute-0 nova_compute[117331]:       <target dev="sda" bus="sata"/>
Oct 09 16:41:50 compute-0 nova_compute[117331]:       <readonly/>
Oct 09 16:41:50 compute-0 nova_compute[117331]:       <address type="drive" controller="0" bus="0" target="0" unit="0"/>
Oct 09 16:41:50 compute-0 nova_compute[117331]:     </disk>
Oct 09 16:41:50 compute-0 nova_compute[117331]:     <controller type="pci" index="0" model="pcie-root"/>
Oct 09 16:41:50 compute-0 nova_compute[117331]:     <controller type="pci" index="1" model="pcie-root-port">
Oct 09 16:41:50 compute-0 nova_compute[117331]:       <model name="pcie-root-port"/>
Oct 09 16:41:50 compute-0 nova_compute[117331]:       <target chassis="1" port="0x10"/>
Oct 09 16:41:50 compute-0 nova_compute[117331]:       <address type="pci" domain="0x0000" bus="0x00" slot="0x02" function="0x0" multifunction="on"/>
Oct 09 16:41:50 compute-0 nova_compute[117331]:     </controller>
Oct 09 16:41:50 compute-0 nova_compute[117331]:     <controller type="pci" index="2" model="pcie-root-port">
Oct 09 16:41:50 compute-0 nova_compute[117331]:       <model name="pcie-root-port"/>
Oct 09 16:41:50 compute-0 nova_compute[117331]:       <target chassis="2" port="0x11"/>
Oct 09 16:41:50 compute-0 nova_compute[117331]:       <address type="pci" domain="0x0000" bus="0x00" slot="0x02" function="0x1"/>
Oct 09 16:41:50 compute-0 nova_compute[117331]:     </controller>
Oct 09 16:41:50 compute-0 nova_compute[117331]:     <controller type="pci" index="3" model="pcie-root-port">
Oct 09 16:41:50 compute-0 nova_compute[117331]:       <model name="pcie-root-port"/>
Oct 09 16:41:50 compute-0 nova_compute[117331]:       <target chassis="3" port="0x12"/>
Oct 09 16:41:50 compute-0 nova_compute[117331]:       <address type="pci" domain="0x0000" bus="0x00" slot="0x02" function="0x2"/>
Oct 09 16:41:50 compute-0 nova_compute[117331]:     </controller>
Oct 09 16:41:50 compute-0 nova_compute[117331]:     <controller type="pci" index="4" model="pcie-root-port">
Oct 09 16:41:50 compute-0 nova_compute[117331]:       <model name="pcie-root-port"/>
Oct 09 16:41:50 compute-0 nova_compute[117331]:       <target chassis="4" port="0x13"/>
Oct 09 16:41:50 compute-0 nova_compute[117331]:       <address type="pci" domain="0x0000" bus="0x00" slot="0x02" function="0x3"/>
Oct 09 16:41:50 compute-0 nova_compute[117331]:     </controller>
Oct 09 16:41:50 compute-0 nova_compute[117331]:     <controller type="pci" index="5" model="pcie-root-port">
Oct 09 16:41:50 compute-0 nova_compute[117331]:       <model name="pcie-root-port"/>
Oct 09 16:41:50 compute-0 nova_compute[117331]:       <target chassis="5" port="0x14"/>
Oct 09 16:41:50 compute-0 nova_compute[117331]:       <address type="pci" domain="0x0000" bus="0x00" slot="0x02" function="0x4"/>
Oct 09 16:41:50 compute-0 nova_compute[117331]:     </controller>
Oct 09 16:41:50 compute-0 nova_compute[117331]:     <controller type="pci" index="6" model="pcie-root-port">
Oct 09 16:41:50 compute-0 nova_compute[117331]:       <model name="pcie-root-port"/>
Oct 09 16:41:50 compute-0 nova_compute[117331]:       <target chassis="6" port="0x15"/>
Oct 09 16:41:50 compute-0 nova_compute[117331]:       <address type="pci" domain="0x0000" bus="0x00" slot="0x02" function="0x5"/>
Oct 09 16:41:50 compute-0 nova_compute[117331]:     </controller>
Oct 09 16:41:50 compute-0 nova_compute[117331]:     <controller type="pci" index="7" model="pcie-root-port">
Oct 09 16:41:50 compute-0 nova_compute[117331]:       <model name="pcie-root-port"/>
Oct 09 16:41:50 compute-0 nova_compute[117331]:       <target chassis="7" port="0x16"/>
Oct 09 16:41:50 compute-0 nova_compute[117331]:       <address type="pci" domain="0x0000" bus="0x00" slot="0x02" function="0x6"/>
Oct 09 16:41:50 compute-0 nova_compute[117331]:     </controller>
Oct 09 16:41:50 compute-0 nova_compute[117331]:     <controller type="pci" index="8" model="pcie-root-port">
Oct 09 16:41:50 compute-0 nova_compute[117331]:       <model name="pcie-root-port"/>
Oct 09 16:41:50 compute-0 nova_compute[117331]:       <target chassis="8" port="0x17"/>
Oct 09 16:41:50 compute-0 nova_compute[117331]:       <address type="pci" domain="0x0000" bus="0x00" slot="0x02" function="0x7"/>
Oct 09 16:41:50 compute-0 nova_compute[117331]:     </controller>
Oct 09 16:41:50 compute-0 nova_compute[117331]:     <controller type="pci" index="9" model="pcie-root-port">
Oct 09 16:41:50 compute-0 nova_compute[117331]:       <model name="pcie-root-port"/>
Oct 09 16:41:50 compute-0 nova_compute[117331]:       <target chassis="9" port="0x18"/>
Oct 09 16:41:50 compute-0 nova_compute[117331]:       <address type="pci" domain="0x0000" bus="0x00" slot="0x03" function="0x0" multifunction="on"/>
Oct 09 16:41:50 compute-0 nova_compute[117331]:     </controller>
Oct 09 16:41:50 compute-0 nova_compute[117331]:     <controller type="pci" index="10" model="pcie-root-port">
Oct 09 16:41:50 compute-0 nova_compute[117331]:       <model name="pcie-root-port"/>
Oct 09 16:41:50 compute-0 nova_compute[117331]:       <target chassis="10" port="0x19"/>
Oct 09 16:41:50 compute-0 nova_compute[117331]:       <address type="pci" domain="0x0000" bus="0x00" slot="0x03" function="0x1"/>
Oct 09 16:41:50 compute-0 nova_compute[117331]:     </controller>
Oct 09 16:41:50 compute-0 nova_compute[117331]:     <controller type="pci" index="11" model="pcie-root-port">
Oct 09 16:41:50 compute-0 nova_compute[117331]:       <model name="pcie-root-port"/>
Oct 09 16:41:50 compute-0 nova_compute[117331]:       <target chassis="11" port="0x1a"/>
Oct 09 16:41:50 compute-0 nova_compute[117331]:       <address type="pci" domain="0x0000" bus="0x00" slot="0x03" function="0x2"/>
Oct 09 16:41:50 compute-0 nova_compute[117331]:     </controller>
Oct 09 16:41:50 compute-0 nova_compute[117331]:     <controller type="pci" index="12" model="pcie-root-port">
Oct 09 16:41:50 compute-0 nova_compute[117331]:       <model name="pcie-root-port"/>
Oct 09 16:41:50 compute-0 nova_compute[117331]:       <target chassis="12" port="0x1b"/>
Oct 09 16:41:50 compute-0 nova_compute[117331]:       <address type="pci" domain="0x0000" bus="0x00" slot="0x03" function="0x3"/>
Oct 09 16:41:50 compute-0 nova_compute[117331]:     </controller>
Oct 09 16:41:50 compute-0 nova_compute[117331]:     <controller type="pci" index="13" model="pcie-root-port">
Oct 09 16:41:50 compute-0 nova_compute[117331]:       <model name="pcie-root-port"/>
Oct 09 16:41:50 compute-0 nova_compute[117331]:       <target chassis="13" port="0x1c"/>
Oct 09 16:41:50 compute-0 nova_compute[117331]:       <address type="pci" domain="0x0000" bus="0x00" slot="0x03" function="0x4"/>
Oct 09 16:41:50 compute-0 nova_compute[117331]:     </controller>
Oct 09 16:41:50 compute-0 nova_compute[117331]:     <controller type="pci" index="14" model="pcie-root-port">
Oct 09 16:41:50 compute-0 nova_compute[117331]:       <model name="pcie-root-port"/>
Oct 09 16:41:50 compute-0 nova_compute[117331]:       <target chassis="14" port="0x1d"/>
Oct 09 16:41:50 compute-0 nova_compute[117331]:       <address type="pci" domain="0x0000" bus="0x00" slot="0x03" function="0x5"/>
Oct 09 16:41:50 compute-0 nova_compute[117331]:     </controller>
Oct 09 16:41:50 compute-0 nova_compute[117331]:     <controller type="pci" index="15" model="pcie-root-port">
Oct 09 16:41:50 compute-0 nova_compute[117331]:       <model name="pcie-root-port"/>
Oct 09 16:41:50 compute-0 nova_compute[117331]:       <target chassis="15" port="0x1e"/>
Oct 09 16:41:50 compute-0 nova_compute[117331]:       <address type="pci" domain="0x0000" bus="0x00" slot="0x03" function="0x6"/>
Oct 09 16:41:50 compute-0 nova_compute[117331]:     </controller>
Oct 09 16:41:50 compute-0 nova_compute[117331]:     <controller type="pci" index="16" model="pcie-root-port">
Oct 09 16:41:50 compute-0 nova_compute[117331]:       <model name="pcie-root-port"/>
Oct 09 16:41:50 compute-0 nova_compute[117331]:       <target chassis="16" port="0x1f"/>
Oct 09 16:41:50 compute-0 nova_compute[117331]:       <address type="pci" domain="0x0000" bus="0x00" slot="0x03" function="0x7"/>
Oct 09 16:41:50 compute-0 nova_compute[117331]:     </controller>
Oct 09 16:41:50 compute-0 nova_compute[117331]:     <controller type="pci" index="17" model="pcie-root-port">
Oct 09 16:41:50 compute-0 nova_compute[117331]:       <model name="pcie-root-port"/>
Oct 09 16:41:50 compute-0 nova_compute[117331]:       <target chassis="17" port="0x20"/>
Oct 09 16:41:50 compute-0 nova_compute[117331]:       <address type="pci" domain="0x0000" bus="0x00" slot="0x04" function="0x0" multifunction="on"/>
Oct 09 16:41:50 compute-0 nova_compute[117331]:     </controller>
Oct 09 16:41:50 compute-0 nova_compute[117331]:     <controller type="pci" index="18" model="pcie-root-port">
Oct 09 16:41:50 compute-0 nova_compute[117331]:       <model name="pcie-root-port"/>
Oct 09 16:41:50 compute-0 nova_compute[117331]:       <target chassis="18" port="0x21"/>
Oct 09 16:41:50 compute-0 nova_compute[117331]:       <address type="pci" domain="0x0000" bus="0x00" slot="0x04" function="0x1"/>
Oct 09 16:41:50 compute-0 nova_compute[117331]:     </controller>
Oct 09 16:41:50 compute-0 nova_compute[117331]:     <controller type="pci" index="19" model="pcie-root-port">
Oct 09 16:41:50 compute-0 nova_compute[117331]:       <model name="pcie-root-port"/>
Oct 09 16:41:50 compute-0 nova_compute[117331]:       <target chassis="19" port="0x22"/>
Oct 09 16:41:50 compute-0 nova_compute[117331]:       <address type="pci" domain="0x0000" bus="0x00" slot="0x04" function="0x2"/>
Oct 09 16:41:50 compute-0 nova_compute[117331]:     </controller>
Oct 09 16:41:50 compute-0 nova_compute[117331]:     <controller type="pci" index="20" model="pcie-root-port">
Oct 09 16:41:50 compute-0 nova_compute[117331]:       <model name="pcie-root-port"/>
Oct 09 16:41:50 compute-0 nova_compute[117331]:       <target chassis="20" port="0x23"/>
Oct 09 16:41:50 compute-0 nova_compute[117331]:       <address type="pci" domain="0x0000" bus="0x00" slot="0x04" function="0x3"/>
Oct 09 16:41:50 compute-0 nova_compute[117331]:     </controller>
Oct 09 16:41:50 compute-0 nova_compute[117331]:     <controller type="pci" index="21" model="pcie-root-port">
Oct 09 16:41:50 compute-0 nova_compute[117331]:       <model name="pcie-root-port"/>
Oct 09 16:41:50 compute-0 nova_compute[117331]:       <target chassis="21" port="0x24"/>
Oct 09 16:41:50 compute-0 nova_compute[117331]:       <address type="pci" domain="0x0000" bus="0x00" slot="0x04" function="0x4"/>
Oct 09 16:41:50 compute-0 nova_compute[117331]:     </controller>
Oct 09 16:41:50 compute-0 nova_compute[117331]:     <controller type="pci" index="22" model="pcie-root-port">
Oct 09 16:41:50 compute-0 nova_compute[117331]:       <model name="pcie-root-port"/>
Oct 09 16:41:50 compute-0 nova_compute[117331]:       <target chassis="22" port="0x25"/>
Oct 09 16:41:50 compute-0 nova_compute[117331]:       <address type="pci" domain="0x0000" bus="0x00" slot="0x04" function="0x5"/>
Oct 09 16:41:50 compute-0 nova_compute[117331]:     </controller>
Oct 09 16:41:50 compute-0 nova_compute[117331]:     <controller type="pci" index="23" model="pcie-root-port">
Oct 09 16:41:50 compute-0 nova_compute[117331]:       <model name="pcie-root-port"/>
Oct 09 16:41:50 compute-0 nova_compute[117331]:       <target chassis="23" port="0x26"/>
Oct 09 16:41:50 compute-0 nova_compute[117331]:       <address type="pci" domain="0x0000" bus="0x00" slot="0x04" function="0x6"/>
Oct 09 16:41:50 compute-0 nova_compute[117331]:     </controller>
Oct 09 16:41:50 compute-0 nova_compute[117331]:     <controller type="pci" index="24" model="pcie-root-port">
Oct 09 16:41:50 compute-0 nova_compute[117331]:       <model name="pcie-root-port"/>
Oct 09 16:41:50 compute-0 nova_compute[117331]:       <target chassis="24" port="0x27"/>
Oct 09 16:41:50 compute-0 nova_compute[117331]:       <address type="pci" domain="0x0000" bus="0x00" slot="0x04" function="0x7"/>
Oct 09 16:41:50 compute-0 nova_compute[117331]:     </controller>
Oct 09 16:41:50 compute-0 nova_compute[117331]:     <controller type="pci" index="25" model="pcie-root-port">
Oct 09 16:41:50 compute-0 nova_compute[117331]:       <model name="pcie-root-port"/>
Oct 09 16:41:50 compute-0 nova_compute[117331]:       <target chassis="25" port="0x28"/>
Oct 09 16:41:50 compute-0 nova_compute[117331]:       <address type="pci" domain="0x0000" bus="0x00" slot="0x05" function="0x0"/>
Oct 09 16:41:50 compute-0 nova_compute[117331]:     </controller>
Oct 09 16:41:50 compute-0 nova_compute[117331]:     <controller type="pci" index="26" model="pcie-to-pci-bridge">
Oct 09 16:41:50 compute-0 nova_compute[117331]:       <model name="pcie-pci-bridge"/>
Oct 09 16:41:50 compute-0 nova_compute[117331]:       <address type="pci" domain="0x0000" bus="0x01" slot="0x00" function="0x0"/>
Oct 09 16:41:50 compute-0 nova_compute[117331]:     </controller>
Oct 09 16:41:50 compute-0 nova_compute[117331]:     <controller type="usb" index="0" model="piix3-uhci">
Oct 09 16:41:50 compute-0 nova_compute[117331]:       <address type="pci" domain="0x0000" bus="0x1a" slot="0x01" function="0x0"/>
Oct 09 16:41:50 compute-0 nova_compute[117331]:     </controller>
Oct 09 16:41:50 compute-0 nova_compute[117331]:     <controller type="sata" index="0">
Oct 09 16:41:50 compute-0 nova_compute[117331]:       <address type="pci" domain="0x0000" bus="0x00" slot="0x1f" function="0x2"/>
Oct 09 16:41:50 compute-0 nova_compute[117331]:     </controller>
Oct 09 16:41:50 compute-0 nova_compute[117331]:     <interface type="ethernet"><mac address="fa:16:3e:11:4c:7f"/><model type="virtio"/><driver name="vhost" rx_queue_size="512"/><mtu size="1442"/><target dev="tap3bf58985-c8"/><address type="pci" domain="0x0000" bus="0x02" slot="0x00" function="0x0"/>
Oct 09 16:41:50 compute-0 nova_compute[117331]:     </interface><serial type="pty">
Oct 09 16:41:50 compute-0 nova_compute[117331]:       <log file="/var/lib/nova/instances/e49a1e89-0c38-4887-9bdf-64f07982ab22/console.log" append="off"/>
Oct 09 16:41:50 compute-0 nova_compute[117331]:       <target type="isa-serial" port="0">
Oct 09 16:41:50 compute-0 nova_compute[117331]:         <model name="isa-serial"/>
Oct 09 16:41:50 compute-0 nova_compute[117331]:       </target>
Oct 09 16:41:50 compute-0 nova_compute[117331]:     </serial>
Oct 09 16:41:50 compute-0 nova_compute[117331]:     <console type="pty">
Oct 09 16:41:50 compute-0 nova_compute[117331]:       <log file="/var/lib/nova/instances/e49a1e89-0c38-4887-9bdf-64f07982ab22/console.log" append="off"/>
Oct 09 16:41:50 compute-0 nova_compute[117331]:       <target type="serial" port="0"/>
Oct 09 16:41:50 compute-0 nova_compute[117331]:     </console>
Oct 09 16:41:50 compute-0 nova_compute[117331]:     <input type="tablet" bus="usb">
Oct 09 16:41:50 compute-0 nova_compute[117331]:       <address type="usb" bus="0" port="1"/>
Oct 09 16:41:50 compute-0 nova_compute[117331]:     </input>
Oct 09 16:41:50 compute-0 nova_compute[117331]:     <input type="mouse" bus="ps2"/>
Oct 09 16:41:50 compute-0 nova_compute[117331]:     <graphics type="vnc" port="-1" autoport="yes" listen="::">
Oct 09 16:41:50 compute-0 nova_compute[117331]:       <listen type="address" address="::"/>
Oct 09 16:41:50 compute-0 nova_compute[117331]:     </graphics>
Oct 09 16:41:50 compute-0 nova_compute[117331]:     <video>
Oct 09 16:41:50 compute-0 nova_compute[117331]:       <model type="virtio" heads="1" primary="yes"/>
Oct 09 16:41:50 compute-0 nova_compute[117331]:       <address type="pci" domain="0x0000" bus="0x00" slot="0x01" function="0x0"/>
Oct 09 16:41:50 compute-0 nova_compute[117331]:     </video>
Oct 09 16:41:50 compute-0 nova_compute[117331]:     <memballoon model="virtio" autodeflate="on" freePageReporting="on">
Oct 09 16:41:50 compute-0 nova_compute[117331]:       <stats period="10"/>
Oct 09 16:41:50 compute-0 nova_compute[117331]:       <address type="pci" domain="0x0000" bus="0x04" slot="0x00" function="0x0"/>
Oct 09 16:41:50 compute-0 nova_compute[117331]:     </memballoon>
Oct 09 16:41:50 compute-0 nova_compute[117331]:     <rng model="virtio">
Oct 09 16:41:50 compute-0 nova_compute[117331]:       <backend model="random">/dev/urandom</backend>
Oct 09 16:41:50 compute-0 nova_compute[117331]:       <address type="pci" domain="0x0000" bus="0x05" slot="0x00" function="0x0"/>
Oct 09 16:41:50 compute-0 nova_compute[117331]:     </rng>
Oct 09 16:41:50 compute-0 nova_compute[117331]:   </devices>
Oct 09 16:41:50 compute-0 nova_compute[117331]:   <seclabel type="dynamic" model="selinux" relabel="yes"/>
Oct 09 16:41:50 compute-0 nova_compute[117331]: </domain>
Oct 09 16:41:50 compute-0 nova_compute[117331]:  _remove_cpu_shared_set_xml /usr/lib/python3.12/site-packages/nova/virt/libvirt/migration.py:241
Oct 09 16:41:50 compute-0 nova_compute[117331]: 2025-10-09 16:41:50.213 2 DEBUG nova.virt.libvirt.migration [None req-9c6f36d9-f824-403c-8831-426b13d31fa0 005258eefa5345409efe13f6a1de22ab c076c42271004e7aa86e84416e33f826 - - default default] _remove_cpu_shared_set_xml output xml=<domain type="kvm">
Oct 09 16:41:50 compute-0 nova_compute[117331]:   <name>instance-0000001e</name>
Oct 09 16:41:50 compute-0 nova_compute[117331]:   <uuid>e49a1e89-0c38-4887-9bdf-64f07982ab22</uuid>
Oct 09 16:41:50 compute-0 nova_compute[117331]:   <metadata>
Oct 09 16:41:50 compute-0 nova_compute[117331]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Oct 09 16:41:50 compute-0 nova_compute[117331]:       <nova:package version="32.1.0-0.20251008143712.076498e.el10"/>
Oct 09 16:41:50 compute-0 nova_compute[117331]:       <nova:name>tempest-TestExecuteZoneMigrationStrategy-server-484584536</nova:name>
Oct 09 16:41:50 compute-0 nova_compute[117331]:       <nova:creationTime>2025-10-09 16:40:57</nova:creationTime>
Oct 09 16:41:50 compute-0 nova_compute[117331]:       <nova:flavor name="m1.nano" id="5aeb5bf9-70f3-4ab6-b418-2c3319a7d2b3">
Oct 09 16:41:50 compute-0 nova_compute[117331]:         <nova:memory>128</nova:memory>
Oct 09 16:41:50 compute-0 nova_compute[117331]:         <nova:disk>1</nova:disk>
Oct 09 16:41:50 compute-0 nova_compute[117331]:         <nova:swap>0</nova:swap>
Oct 09 16:41:50 compute-0 nova_compute[117331]:         <nova:ephemeral>0</nova:ephemeral>
Oct 09 16:41:50 compute-0 nova_compute[117331]:         <nova:vcpus>1</nova:vcpus>
Oct 09 16:41:50 compute-0 nova_compute[117331]:         <nova:extraSpecs>
Oct 09 16:41:50 compute-0 nova_compute[117331]:           <nova:extraSpec name="hw_rng:allowed">True</nova:extraSpec>
Oct 09 16:41:50 compute-0 nova_compute[117331]:         </nova:extraSpecs>
Oct 09 16:41:50 compute-0 nova_compute[117331]:       </nova:flavor>
Oct 09 16:41:50 compute-0 nova_compute[117331]:       <nova:image uuid="b7d6e0af-25e4-4227-9dc6-43143898ceee">
Oct 09 16:41:50 compute-0 nova_compute[117331]:         <nova:containerFormat>bare</nova:containerFormat>
Oct 09 16:41:50 compute-0 nova_compute[117331]:         <nova:diskFormat>qcow2</nova:diskFormat>
Oct 09 16:41:50 compute-0 nova_compute[117331]:         <nova:minDisk>1</nova:minDisk>
Oct 09 16:41:50 compute-0 nova_compute[117331]:         <nova:minRam>0</nova:minRam>
Oct 09 16:41:50 compute-0 nova_compute[117331]:         <nova:properties>
Oct 09 16:41:50 compute-0 nova_compute[117331]:           <nova:property name="hw_rng_model">virtio</nova:property>
Oct 09 16:41:50 compute-0 nova_compute[117331]:         </nova:properties>
Oct 09 16:41:50 compute-0 nova_compute[117331]:       </nova:image>
Oct 09 16:41:50 compute-0 nova_compute[117331]:       <nova:owner>
Oct 09 16:41:50 compute-0 nova_compute[117331]:         <nova:user uuid="05384d72bc894b20b1cc128bf76382b2">tempest-TestExecuteZoneMigrationStrategy-1067041161-project-admin</nova:user>
Oct 09 16:41:50 compute-0 nova_compute[117331]:         <nova:project uuid="fca25ca2f317463c909e30c8b2c188d1">tempest-TestExecuteZoneMigrationStrategy-1067041161</nova:project>
Oct 09 16:41:50 compute-0 nova_compute[117331]:       </nova:owner>
Oct 09 16:41:50 compute-0 nova_compute[117331]:       <nova:root type="image" uuid="b7d6e0af-25e4-4227-9dc6-43143898ceee"/>
Oct 09 16:41:50 compute-0 nova_compute[117331]:       <nova:ports>
Oct 09 16:41:50 compute-0 nova_compute[117331]:         <nova:port uuid="3bf58985-c8d6-4c25-bed7-5fe8a78a1239">
Oct 09 16:41:50 compute-0 nova_compute[117331]:           <nova:ip type="fixed" address="10.100.0.3" ipVersion="4"/>
Oct 09 16:41:50 compute-0 nova_compute[117331]:         </nova:port>
Oct 09 16:41:50 compute-0 nova_compute[117331]:       </nova:ports>
Oct 09 16:41:50 compute-0 nova_compute[117331]:     </nova:instance>
Oct 09 16:41:50 compute-0 nova_compute[117331]:   </metadata>
Oct 09 16:41:50 compute-0 nova_compute[117331]:   <memory unit="KiB">131072</memory>
Oct 09 16:41:50 compute-0 nova_compute[117331]:   <currentMemory unit="KiB">131072</currentMemory>
Oct 09 16:41:50 compute-0 nova_compute[117331]:   <vcpu placement="static">1</vcpu>
Oct 09 16:41:50 compute-0 nova_compute[117331]:   <resource>
Oct 09 16:41:50 compute-0 nova_compute[117331]:     <partition>/machine</partition>
Oct 09 16:41:50 compute-0 nova_compute[117331]:   </resource>
Oct 09 16:41:50 compute-0 nova_compute[117331]:   <sysinfo type="smbios">
Oct 09 16:41:50 compute-0 nova_compute[117331]:     <system>
Oct 09 16:41:50 compute-0 nova_compute[117331]:       <entry name="manufacturer">RDO</entry>
Oct 09 16:41:50 compute-0 nova_compute[117331]:       <entry name="product">OpenStack Compute</entry>
Oct 09 16:41:50 compute-0 nova_compute[117331]:       <entry name="version">32.1.0-0.20251008143712.076498e.el10</entry>
Oct 09 16:41:50 compute-0 nova_compute[117331]:       <entry name="serial">e49a1e89-0c38-4887-9bdf-64f07982ab22</entry>
Oct 09 16:41:50 compute-0 nova_compute[117331]:       <entry name="uuid">e49a1e89-0c38-4887-9bdf-64f07982ab22</entry>
Oct 09 16:41:50 compute-0 nova_compute[117331]:       <entry name="family">Virtual Machine</entry>
Oct 09 16:41:50 compute-0 nova_compute[117331]:     </system>
Oct 09 16:41:50 compute-0 nova_compute[117331]:   </sysinfo>
Oct 09 16:41:50 compute-0 nova_compute[117331]:   <os>
Oct 09 16:41:50 compute-0 nova_compute[117331]:     <type arch="x86_64" machine="pc-q35-rhel9.6.0">hvm</type>
Oct 09 16:41:50 compute-0 nova_compute[117331]:     <boot dev="hd"/>
Oct 09 16:41:50 compute-0 nova_compute[117331]:     <smbios mode="sysinfo"/>
Oct 09 16:41:50 compute-0 nova_compute[117331]:   </os>
Oct 09 16:41:50 compute-0 nova_compute[117331]:   <features>
Oct 09 16:41:50 compute-0 nova_compute[117331]:     <acpi/>
Oct 09 16:41:50 compute-0 nova_compute[117331]:     <apic/>
Oct 09 16:41:50 compute-0 nova_compute[117331]:     <vmcoreinfo state="on"/>
Oct 09 16:41:50 compute-0 nova_compute[117331]:   </features>
Oct 09 16:41:50 compute-0 nova_compute[117331]:   <cpu mode="host-model" check="partial">
Oct 09 16:41:50 compute-0 nova_compute[117331]:     <topology sockets="1" dies="1" clusters="1" cores="1" threads="1"/>
Oct 09 16:41:50 compute-0 nova_compute[117331]:   </cpu>
Oct 09 16:41:50 compute-0 nova_compute[117331]:   <clock offset="utc">
Oct 09 16:41:50 compute-0 nova_compute[117331]:     <timer name="pit" tickpolicy="delay"/>
Oct 09 16:41:50 compute-0 nova_compute[117331]:     <timer name="rtc" tickpolicy="catchup"/>
Oct 09 16:41:50 compute-0 nova_compute[117331]:     <timer name="hpet" present="no"/>
Oct 09 16:41:50 compute-0 nova_compute[117331]:   </clock>
Oct 09 16:41:50 compute-0 nova_compute[117331]:   <on_poweroff>destroy</on_poweroff>
Oct 09 16:41:50 compute-0 nova_compute[117331]:   <on_reboot>restart</on_reboot>
Oct 09 16:41:50 compute-0 nova_compute[117331]:   <on_crash>destroy</on_crash>
Oct 09 16:41:50 compute-0 nova_compute[117331]:   <devices>
Oct 09 16:41:50 compute-0 nova_compute[117331]:     <emulator>/usr/libexec/qemu-kvm</emulator>
Oct 09 16:41:50 compute-0 nova_compute[117331]:     <disk type="file" device="disk">
Oct 09 16:41:50 compute-0 nova_compute[117331]:       <driver name="qemu" type="qcow2" cache="none"/>
Oct 09 16:41:50 compute-0 nova_compute[117331]:       <source file="/var/lib/nova/instances/e49a1e89-0c38-4887-9bdf-64f07982ab22/disk"/>
Oct 09 16:41:50 compute-0 nova_compute[117331]:       <target dev="vda" bus="virtio"/>
Oct 09 16:41:50 compute-0 nova_compute[117331]:       <address type="pci" domain="0x0000" bus="0x03" slot="0x00" function="0x0"/>
Oct 09 16:41:50 compute-0 nova_compute[117331]:     </disk>
Oct 09 16:41:50 compute-0 nova_compute[117331]:     <disk type="file" device="cdrom">
Oct 09 16:41:50 compute-0 nova_compute[117331]:       <driver name="qemu" type="raw" cache="none"/>
Oct 09 16:41:50 compute-0 nova_compute[117331]:       <source file="/var/lib/nova/instances/e49a1e89-0c38-4887-9bdf-64f07982ab22/disk.config"/>
Oct 09 16:41:50 compute-0 nova_compute[117331]:       <target dev="sda" bus="sata"/>
Oct 09 16:41:50 compute-0 nova_compute[117331]:       <readonly/>
Oct 09 16:41:50 compute-0 nova_compute[117331]:       <address type="drive" controller="0" bus="0" target="0" unit="0"/>
Oct 09 16:41:50 compute-0 nova_compute[117331]:     </disk>
Oct 09 16:41:50 compute-0 nova_compute[117331]:     <controller type="pci" index="0" model="pcie-root"/>
Oct 09 16:41:50 compute-0 nova_compute[117331]:     <controller type="pci" index="1" model="pcie-root-port">
Oct 09 16:41:50 compute-0 nova_compute[117331]:       <model name="pcie-root-port"/>
Oct 09 16:41:50 compute-0 nova_compute[117331]:       <target chassis="1" port="0x10"/>
Oct 09 16:41:50 compute-0 nova_compute[117331]:       <address type="pci" domain="0x0000" bus="0x00" slot="0x02" function="0x0" multifunction="on"/>
Oct 09 16:41:50 compute-0 nova_compute[117331]:     </controller>
Oct 09 16:41:50 compute-0 nova_compute[117331]:     <controller type="pci" index="2" model="pcie-root-port">
Oct 09 16:41:50 compute-0 nova_compute[117331]:       <model name="pcie-root-port"/>
Oct 09 16:41:50 compute-0 nova_compute[117331]:       <target chassis="2" port="0x11"/>
Oct 09 16:41:50 compute-0 nova_compute[117331]:       <address type="pci" domain="0x0000" bus="0x00" slot="0x02" function="0x1"/>
Oct 09 16:41:50 compute-0 nova_compute[117331]:     </controller>
Oct 09 16:41:50 compute-0 nova_compute[117331]:     <controller type="pci" index="3" model="pcie-root-port">
Oct 09 16:41:50 compute-0 nova_compute[117331]:       <model name="pcie-root-port"/>
Oct 09 16:41:50 compute-0 nova_compute[117331]:       <target chassis="3" port="0x12"/>
Oct 09 16:41:50 compute-0 nova_compute[117331]:       <address type="pci" domain="0x0000" bus="0x00" slot="0x02" function="0x2"/>
Oct 09 16:41:50 compute-0 nova_compute[117331]:     </controller>
Oct 09 16:41:50 compute-0 nova_compute[117331]:     <controller type="pci" index="4" model="pcie-root-port">
Oct 09 16:41:50 compute-0 nova_compute[117331]:       <model name="pcie-root-port"/>
Oct 09 16:41:50 compute-0 nova_compute[117331]:       <target chassis="4" port="0x13"/>
Oct 09 16:41:50 compute-0 nova_compute[117331]:       <address type="pci" domain="0x0000" bus="0x00" slot="0x02" function="0x3"/>
Oct 09 16:41:50 compute-0 nova_compute[117331]:     </controller>
Oct 09 16:41:50 compute-0 nova_compute[117331]:     <controller type="pci" index="5" model="pcie-root-port">
Oct 09 16:41:50 compute-0 nova_compute[117331]:       <model name="pcie-root-port"/>
Oct 09 16:41:50 compute-0 nova_compute[117331]:       <target chassis="5" port="0x14"/>
Oct 09 16:41:50 compute-0 nova_compute[117331]:       <address type="pci" domain="0x0000" bus="0x00" slot="0x02" function="0x4"/>
Oct 09 16:41:50 compute-0 nova_compute[117331]:     </controller>
Oct 09 16:41:50 compute-0 nova_compute[117331]:     <controller type="pci" index="6" model="pcie-root-port">
Oct 09 16:41:50 compute-0 nova_compute[117331]:       <model name="pcie-root-port"/>
Oct 09 16:41:50 compute-0 nova_compute[117331]:       <target chassis="6" port="0x15"/>
Oct 09 16:41:50 compute-0 nova_compute[117331]:       <address type="pci" domain="0x0000" bus="0x00" slot="0x02" function="0x5"/>
Oct 09 16:41:50 compute-0 nova_compute[117331]:     </controller>
Oct 09 16:41:50 compute-0 nova_compute[117331]:     <controller type="pci" index="7" model="pcie-root-port">
Oct 09 16:41:50 compute-0 nova_compute[117331]:       <model name="pcie-root-port"/>
Oct 09 16:41:50 compute-0 nova_compute[117331]:       <target chassis="7" port="0x16"/>
Oct 09 16:41:50 compute-0 nova_compute[117331]:       <address type="pci" domain="0x0000" bus="0x00" slot="0x02" function="0x6"/>
Oct 09 16:41:50 compute-0 nova_compute[117331]:     </controller>
Oct 09 16:41:50 compute-0 nova_compute[117331]:     <controller type="pci" index="8" model="pcie-root-port">
Oct 09 16:41:50 compute-0 nova_compute[117331]:       <model name="pcie-root-port"/>
Oct 09 16:41:50 compute-0 nova_compute[117331]:       <target chassis="8" port="0x17"/>
Oct 09 16:41:50 compute-0 nova_compute[117331]:       <address type="pci" domain="0x0000" bus="0x00" slot="0x02" function="0x7"/>
Oct 09 16:41:50 compute-0 nova_compute[117331]:     </controller>
Oct 09 16:41:50 compute-0 nova_compute[117331]:     <controller type="pci" index="9" model="pcie-root-port">
Oct 09 16:41:50 compute-0 nova_compute[117331]:       <model name="pcie-root-port"/>
Oct 09 16:41:50 compute-0 nova_compute[117331]:       <target chassis="9" port="0x18"/>
Oct 09 16:41:50 compute-0 nova_compute[117331]:       <address type="pci" domain="0x0000" bus="0x00" slot="0x03" function="0x0" multifunction="on"/>
Oct 09 16:41:50 compute-0 nova_compute[117331]:     </controller>
Oct 09 16:41:50 compute-0 nova_compute[117331]:     <controller type="pci" index="10" model="pcie-root-port">
Oct 09 16:41:50 compute-0 nova_compute[117331]:       <model name="pcie-root-port"/>
Oct 09 16:41:50 compute-0 nova_compute[117331]:       <target chassis="10" port="0x19"/>
Oct 09 16:41:50 compute-0 nova_compute[117331]:       <address type="pci" domain="0x0000" bus="0x00" slot="0x03" function="0x1"/>
Oct 09 16:41:50 compute-0 nova_compute[117331]:     </controller>
Oct 09 16:41:50 compute-0 nova_compute[117331]:     <controller type="pci" index="11" model="pcie-root-port">
Oct 09 16:41:50 compute-0 nova_compute[117331]:       <model name="pcie-root-port"/>
Oct 09 16:41:50 compute-0 nova_compute[117331]:       <target chassis="11" port="0x1a"/>
Oct 09 16:41:50 compute-0 nova_compute[117331]:       <address type="pci" domain="0x0000" bus="0x00" slot="0x03" function="0x2"/>
Oct 09 16:41:50 compute-0 nova_compute[117331]:     </controller>
Oct 09 16:41:50 compute-0 nova_compute[117331]:     <controller type="pci" index="12" model="pcie-root-port">
Oct 09 16:41:50 compute-0 nova_compute[117331]:       <model name="pcie-root-port"/>
Oct 09 16:41:50 compute-0 nova_compute[117331]:       <target chassis="12" port="0x1b"/>
Oct 09 16:41:50 compute-0 nova_compute[117331]:       <address type="pci" domain="0x0000" bus="0x00" slot="0x03" function="0x3"/>
Oct 09 16:41:50 compute-0 nova_compute[117331]:     </controller>
Oct 09 16:41:50 compute-0 nova_compute[117331]:     <controller type="pci" index="13" model="pcie-root-port">
Oct 09 16:41:50 compute-0 nova_compute[117331]:       <model name="pcie-root-port"/>
Oct 09 16:41:50 compute-0 nova_compute[117331]:       <target chassis="13" port="0x1c"/>
Oct 09 16:41:50 compute-0 nova_compute[117331]:       <address type="pci" domain="0x0000" bus="0x00" slot="0x03" function="0x4"/>
Oct 09 16:41:50 compute-0 nova_compute[117331]:     </controller>
Oct 09 16:41:50 compute-0 nova_compute[117331]:     <controller type="pci" index="14" model="pcie-root-port">
Oct 09 16:41:50 compute-0 nova_compute[117331]:       <model name="pcie-root-port"/>
Oct 09 16:41:50 compute-0 nova_compute[117331]:       <target chassis="14" port="0x1d"/>
Oct 09 16:41:50 compute-0 nova_compute[117331]:       <address type="pci" domain="0x0000" bus="0x00" slot="0x03" function="0x5"/>
Oct 09 16:41:50 compute-0 nova_compute[117331]:     </controller>
Oct 09 16:41:50 compute-0 nova_compute[117331]:     <controller type="pci" index="15" model="pcie-root-port">
Oct 09 16:41:50 compute-0 nova_compute[117331]:       <model name="pcie-root-port"/>
Oct 09 16:41:50 compute-0 nova_compute[117331]:       <target chassis="15" port="0x1e"/>
Oct 09 16:41:50 compute-0 nova_compute[117331]:       <address type="pci" domain="0x0000" bus="0x00" slot="0x03" function="0x6"/>
Oct 09 16:41:50 compute-0 nova_compute[117331]:     </controller>
Oct 09 16:41:50 compute-0 nova_compute[117331]:     <controller type="pci" index="16" model="pcie-root-port">
Oct 09 16:41:50 compute-0 nova_compute[117331]:       <model name="pcie-root-port"/>
Oct 09 16:41:50 compute-0 nova_compute[117331]:       <target chassis="16" port="0x1f"/>
Oct 09 16:41:50 compute-0 nova_compute[117331]:       <address type="pci" domain="0x0000" bus="0x00" slot="0x03" function="0x7"/>
Oct 09 16:41:50 compute-0 nova_compute[117331]:     </controller>
Oct 09 16:41:50 compute-0 nova_compute[117331]:     <controller type="pci" index="17" model="pcie-root-port">
Oct 09 16:41:50 compute-0 nova_compute[117331]:       <model name="pcie-root-port"/>
Oct 09 16:41:50 compute-0 nova_compute[117331]:       <target chassis="17" port="0x20"/>
Oct 09 16:41:50 compute-0 nova_compute[117331]:       <address type="pci" domain="0x0000" bus="0x00" slot="0x04" function="0x0" multifunction="on"/>
Oct 09 16:41:50 compute-0 nova_compute[117331]:     </controller>
Oct 09 16:41:50 compute-0 nova_compute[117331]:     <controller type="pci" index="18" model="pcie-root-port">
Oct 09 16:41:50 compute-0 nova_compute[117331]:       <model name="pcie-root-port"/>
Oct 09 16:41:50 compute-0 nova_compute[117331]:       <target chassis="18" port="0x21"/>
Oct 09 16:41:50 compute-0 nova_compute[117331]:       <address type="pci" domain="0x0000" bus="0x00" slot="0x04" function="0x1"/>
Oct 09 16:41:50 compute-0 nova_compute[117331]:     </controller>
Oct 09 16:41:50 compute-0 nova_compute[117331]:     <controller type="pci" index="19" model="pcie-root-port">
Oct 09 16:41:50 compute-0 nova_compute[117331]:       <model name="pcie-root-port"/>
Oct 09 16:41:50 compute-0 nova_compute[117331]:       <target chassis="19" port="0x22"/>
Oct 09 16:41:50 compute-0 nova_compute[117331]:       <address type="pci" domain="0x0000" bus="0x00" slot="0x04" function="0x2"/>
Oct 09 16:41:50 compute-0 nova_compute[117331]:     </controller>
Oct 09 16:41:50 compute-0 nova_compute[117331]:     <controller type="pci" index="20" model="pcie-root-port">
Oct 09 16:41:50 compute-0 nova_compute[117331]:       <model name="pcie-root-port"/>
Oct 09 16:41:50 compute-0 nova_compute[117331]:       <target chassis="20" port="0x23"/>
Oct 09 16:41:50 compute-0 nova_compute[117331]:       <address type="pci" domain="0x0000" bus="0x00" slot="0x04" function="0x3"/>
Oct 09 16:41:50 compute-0 nova_compute[117331]:     </controller>
Oct 09 16:41:50 compute-0 nova_compute[117331]:     <controller type="pci" index="21" model="pcie-root-port">
Oct 09 16:41:50 compute-0 nova_compute[117331]:       <model name="pcie-root-port"/>
Oct 09 16:41:50 compute-0 nova_compute[117331]:       <target chassis="21" port="0x24"/>
Oct 09 16:41:50 compute-0 nova_compute[117331]:       <address type="pci" domain="0x0000" bus="0x00" slot="0x04" function="0x4"/>
Oct 09 16:41:50 compute-0 nova_compute[117331]:     </controller>
Oct 09 16:41:50 compute-0 nova_compute[117331]:     <controller type="pci" index="22" model="pcie-root-port">
Oct 09 16:41:50 compute-0 nova_compute[117331]:       <model name="pcie-root-port"/>
Oct 09 16:41:50 compute-0 nova_compute[117331]:       <target chassis="22" port="0x25"/>
Oct 09 16:41:50 compute-0 nova_compute[117331]:       <address type="pci" domain="0x0000" bus="0x00" slot="0x04" function="0x5"/>
Oct 09 16:41:50 compute-0 nova_compute[117331]:     </controller>
Oct 09 16:41:50 compute-0 nova_compute[117331]:     <controller type="pci" index="23" model="pcie-root-port">
Oct 09 16:41:50 compute-0 nova_compute[117331]:       <model name="pcie-root-port"/>
Oct 09 16:41:50 compute-0 nova_compute[117331]:       <target chassis="23" port="0x26"/>
Oct 09 16:41:50 compute-0 nova_compute[117331]:       <address type="pci" domain="0x0000" bus="0x00" slot="0x04" function="0x6"/>
Oct 09 16:41:50 compute-0 nova_compute[117331]:     </controller>
Oct 09 16:41:50 compute-0 nova_compute[117331]:     <controller type="pci" index="24" model="pcie-root-port">
Oct 09 16:41:50 compute-0 nova_compute[117331]:       <model name="pcie-root-port"/>
Oct 09 16:41:50 compute-0 nova_compute[117331]:       <target chassis="24" port="0x27"/>
Oct 09 16:41:50 compute-0 nova_compute[117331]:       <address type="pci" domain="0x0000" bus="0x00" slot="0x04" function="0x7"/>
Oct 09 16:41:50 compute-0 nova_compute[117331]:     </controller>
Oct 09 16:41:50 compute-0 nova_compute[117331]:     <controller type="pci" index="25" model="pcie-root-port">
Oct 09 16:41:50 compute-0 nova_compute[117331]:       <model name="pcie-root-port"/>
Oct 09 16:41:50 compute-0 nova_compute[117331]:       <target chassis="25" port="0x28"/>
Oct 09 16:41:50 compute-0 nova_compute[117331]:       <address type="pci" domain="0x0000" bus="0x00" slot="0x05" function="0x0"/>
Oct 09 16:41:50 compute-0 nova_compute[117331]:     </controller>
Oct 09 16:41:50 compute-0 nova_compute[117331]:     <controller type="pci" index="26" model="pcie-to-pci-bridge">
Oct 09 16:41:50 compute-0 nova_compute[117331]:       <model name="pcie-pci-bridge"/>
Oct 09 16:41:50 compute-0 nova_compute[117331]:       <address type="pci" domain="0x0000" bus="0x01" slot="0x00" function="0x0"/>
Oct 09 16:41:50 compute-0 nova_compute[117331]:     </controller>
Oct 09 16:41:50 compute-0 nova_compute[117331]:     <controller type="usb" index="0" model="piix3-uhci">
Oct 09 16:41:50 compute-0 nova_compute[117331]:       <address type="pci" domain="0x0000" bus="0x1a" slot="0x01" function="0x0"/>
Oct 09 16:41:50 compute-0 nova_compute[117331]:     </controller>
Oct 09 16:41:50 compute-0 nova_compute[117331]:     <controller type="sata" index="0">
Oct 09 16:41:50 compute-0 nova_compute[117331]:       <address type="pci" domain="0x0000" bus="0x00" slot="0x1f" function="0x2"/>
Oct 09 16:41:50 compute-0 nova_compute[117331]:     </controller>
Oct 09 16:41:50 compute-0 nova_compute[117331]:     <interface type="ethernet">
Oct 09 16:41:50 compute-0 nova_compute[117331]:       <mac address="fa:16:3e:11:4c:7f"/>
Oct 09 16:41:50 compute-0 nova_compute[117331]:       <model type="virtio"/>
Oct 09 16:41:50 compute-0 nova_compute[117331]:       <driver name="vhost" rx_queue_size="512"/>
Oct 09 16:41:50 compute-0 nova_compute[117331]:       <mtu size="1442"/>
Oct 09 16:41:50 compute-0 nova_compute[117331]:       <target dev="tap3bf58985-c8"/>
Oct 09 16:41:50 compute-0 nova_compute[117331]:       <address type="pci" domain="0x0000" bus="0x02" slot="0x00" function="0x0"/>
Oct 09 16:41:50 compute-0 nova_compute[117331]:     </interface>
Oct 09 16:41:50 compute-0 nova_compute[117331]:     <serial type="pty">
Oct 09 16:41:50 compute-0 nova_compute[117331]:       <log file="/var/lib/nova/instances/e49a1e89-0c38-4887-9bdf-64f07982ab22/console.log" append="off"/>
Oct 09 16:41:50 compute-0 nova_compute[117331]:       <target type="isa-serial" port="0">
Oct 09 16:41:50 compute-0 nova_compute[117331]:         <model name="isa-serial"/>
Oct 09 16:41:50 compute-0 nova_compute[117331]:       </target>
Oct 09 16:41:50 compute-0 nova_compute[117331]:     </serial>
Oct 09 16:41:50 compute-0 nova_compute[117331]:     <console type="pty">
Oct 09 16:41:50 compute-0 nova_compute[117331]:       <log file="/var/lib/nova/instances/e49a1e89-0c38-4887-9bdf-64f07982ab22/console.log" append="off"/>
Oct 09 16:41:50 compute-0 nova_compute[117331]:       <target type="serial" port="0"/>
Oct 09 16:41:50 compute-0 nova_compute[117331]:     </console>
Oct 09 16:41:50 compute-0 nova_compute[117331]:     <input type="tablet" bus="usb">
Oct 09 16:41:50 compute-0 nova_compute[117331]:       <address type="usb" bus="0" port="1"/>
Oct 09 16:41:50 compute-0 nova_compute[117331]:     </input>
Oct 09 16:41:50 compute-0 nova_compute[117331]:     <input type="mouse" bus="ps2"/>
Oct 09 16:41:50 compute-0 nova_compute[117331]:     <graphics type="vnc" port="-1" autoport="yes" listen="::">
Oct 09 16:41:50 compute-0 nova_compute[117331]:       <listen type="address" address="::"/>
Oct 09 16:41:50 compute-0 nova_compute[117331]:     </graphics>
Oct 09 16:41:50 compute-0 nova_compute[117331]:     <video>
Oct 09 16:41:50 compute-0 nova_compute[117331]:       <model type="virtio" heads="1" primary="yes"/>
Oct 09 16:41:50 compute-0 nova_compute[117331]:       <address type="pci" domain="0x0000" bus="0x00" slot="0x01" function="0x0"/>
Oct 09 16:41:50 compute-0 nova_compute[117331]:     </video>
Oct 09 16:41:50 compute-0 nova_compute[117331]:     <memballoon model="virtio" autodeflate="on" freePageReporting="on">
Oct 09 16:41:50 compute-0 nova_compute[117331]:       <stats period="10"/>
Oct 09 16:41:50 compute-0 nova_compute[117331]:       <address type="pci" domain="0x0000" bus="0x04" slot="0x00" function="0x0"/>
Oct 09 16:41:50 compute-0 nova_compute[117331]:     </memballoon>
Oct 09 16:41:50 compute-0 nova_compute[117331]:     <rng model="virtio">
Oct 09 16:41:50 compute-0 nova_compute[117331]:       <backend model="random">/dev/urandom</backend>
Oct 09 16:41:50 compute-0 nova_compute[117331]:       <address type="pci" domain="0x0000" bus="0x05" slot="0x00" function="0x0"/>
Oct 09 16:41:50 compute-0 nova_compute[117331]:     </rng>
Oct 09 16:41:50 compute-0 nova_compute[117331]:   </devices>
Oct 09 16:41:50 compute-0 nova_compute[117331]:   <seclabel type="dynamic" model="selinux" relabel="yes"/>
Oct 09 16:41:50 compute-0 nova_compute[117331]: </domain>
Oct 09 16:41:50 compute-0 nova_compute[117331]:  _remove_cpu_shared_set_xml /usr/lib/python3.12/site-packages/nova/virt/libvirt/migration.py:250
Oct 09 16:41:50 compute-0 nova_compute[117331]: 2025-10-09 16:41:50.214 2 DEBUG nova.virt.libvirt.migration [None req-9c6f36d9-f824-403c-8831-426b13d31fa0 005258eefa5345409efe13f6a1de22ab c076c42271004e7aa86e84416e33f826 - - default default] _update_pci_xml output xml=<domain type="kvm">
Oct 09 16:41:50 compute-0 nova_compute[117331]:   <name>instance-0000001e</name>
Oct 09 16:41:50 compute-0 nova_compute[117331]:   <uuid>e49a1e89-0c38-4887-9bdf-64f07982ab22</uuid>
Oct 09 16:41:50 compute-0 nova_compute[117331]:   <metadata>
Oct 09 16:41:50 compute-0 nova_compute[117331]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Oct 09 16:41:50 compute-0 nova_compute[117331]:       <nova:package version="32.1.0-0.20251008143712.076498e.el10"/>
Oct 09 16:41:50 compute-0 nova_compute[117331]:       <nova:name>tempest-TestExecuteZoneMigrationStrategy-server-484584536</nova:name>
Oct 09 16:41:50 compute-0 nova_compute[117331]:       <nova:creationTime>2025-10-09 16:40:57</nova:creationTime>
Oct 09 16:41:50 compute-0 nova_compute[117331]:       <nova:flavor name="m1.nano" id="5aeb5bf9-70f3-4ab6-b418-2c3319a7d2b3">
Oct 09 16:41:50 compute-0 nova_compute[117331]:         <nova:memory>128</nova:memory>
Oct 09 16:41:50 compute-0 nova_compute[117331]:         <nova:disk>1</nova:disk>
Oct 09 16:41:50 compute-0 nova_compute[117331]:         <nova:swap>0</nova:swap>
Oct 09 16:41:50 compute-0 nova_compute[117331]:         <nova:ephemeral>0</nova:ephemeral>
Oct 09 16:41:50 compute-0 nova_compute[117331]:         <nova:vcpus>1</nova:vcpus>
Oct 09 16:41:50 compute-0 nova_compute[117331]:         <nova:extraSpecs>
Oct 09 16:41:50 compute-0 nova_compute[117331]:           <nova:extraSpec name="hw_rng:allowed">True</nova:extraSpec>
Oct 09 16:41:50 compute-0 nova_compute[117331]:         </nova:extraSpecs>
Oct 09 16:41:50 compute-0 nova_compute[117331]:       </nova:flavor>
Oct 09 16:41:50 compute-0 nova_compute[117331]:       <nova:image uuid="b7d6e0af-25e4-4227-9dc6-43143898ceee">
Oct 09 16:41:50 compute-0 nova_compute[117331]:         <nova:containerFormat>bare</nova:containerFormat>
Oct 09 16:41:50 compute-0 nova_compute[117331]:         <nova:diskFormat>qcow2</nova:diskFormat>
Oct 09 16:41:50 compute-0 nova_compute[117331]:         <nova:minDisk>1</nova:minDisk>
Oct 09 16:41:50 compute-0 nova_compute[117331]:         <nova:minRam>0</nova:minRam>
Oct 09 16:41:50 compute-0 nova_compute[117331]:         <nova:properties>
Oct 09 16:41:50 compute-0 nova_compute[117331]:           <nova:property name="hw_rng_model">virtio</nova:property>
Oct 09 16:41:50 compute-0 nova_compute[117331]:         </nova:properties>
Oct 09 16:41:50 compute-0 nova_compute[117331]:       </nova:image>
Oct 09 16:41:50 compute-0 nova_compute[117331]:       <nova:owner>
Oct 09 16:41:50 compute-0 nova_compute[117331]:         <nova:user uuid="05384d72bc894b20b1cc128bf76382b2">tempest-TestExecuteZoneMigrationStrategy-1067041161-project-admin</nova:user>
Oct 09 16:41:50 compute-0 nova_compute[117331]:         <nova:project uuid="fca25ca2f317463c909e30c8b2c188d1">tempest-TestExecuteZoneMigrationStrategy-1067041161</nova:project>
Oct 09 16:41:50 compute-0 nova_compute[117331]:       </nova:owner>
Oct 09 16:41:50 compute-0 nova_compute[117331]:       <nova:root type="image" uuid="b7d6e0af-25e4-4227-9dc6-43143898ceee"/>
Oct 09 16:41:50 compute-0 nova_compute[117331]:       <nova:ports>
Oct 09 16:41:50 compute-0 nova_compute[117331]:         <nova:port uuid="3bf58985-c8d6-4c25-bed7-5fe8a78a1239">
Oct 09 16:41:50 compute-0 nova_compute[117331]:           <nova:ip type="fixed" address="10.100.0.3" ipVersion="4"/>
Oct 09 16:41:50 compute-0 nova_compute[117331]:         </nova:port>
Oct 09 16:41:50 compute-0 nova_compute[117331]:       </nova:ports>
Oct 09 16:41:50 compute-0 nova_compute[117331]:     </nova:instance>
Oct 09 16:41:50 compute-0 nova_compute[117331]:   </metadata>
Oct 09 16:41:50 compute-0 nova_compute[117331]:   <memory unit="KiB">131072</memory>
Oct 09 16:41:50 compute-0 nova_compute[117331]:   <currentMemory unit="KiB">131072</currentMemory>
Oct 09 16:41:50 compute-0 nova_compute[117331]:   <vcpu placement="static">1</vcpu>
Oct 09 16:41:50 compute-0 nova_compute[117331]:   <resource>
Oct 09 16:41:50 compute-0 nova_compute[117331]:     <partition>/machine</partition>
Oct 09 16:41:50 compute-0 nova_compute[117331]:   </resource>
Oct 09 16:41:50 compute-0 nova_compute[117331]:   <sysinfo type="smbios">
Oct 09 16:41:50 compute-0 nova_compute[117331]:     <system>
Oct 09 16:41:50 compute-0 nova_compute[117331]:       <entry name="manufacturer">RDO</entry>
Oct 09 16:41:50 compute-0 nova_compute[117331]:       <entry name="product">OpenStack Compute</entry>
Oct 09 16:41:50 compute-0 nova_compute[117331]:       <entry name="version">32.1.0-0.20251008143712.076498e.el10</entry>
Oct 09 16:41:50 compute-0 nova_compute[117331]:       <entry name="serial">e49a1e89-0c38-4887-9bdf-64f07982ab22</entry>
Oct 09 16:41:50 compute-0 nova_compute[117331]:       <entry name="uuid">e49a1e89-0c38-4887-9bdf-64f07982ab22</entry>
Oct 09 16:41:50 compute-0 nova_compute[117331]:       <entry name="family">Virtual Machine</entry>
Oct 09 16:41:50 compute-0 nova_compute[117331]:     </system>
Oct 09 16:41:50 compute-0 nova_compute[117331]:   </sysinfo>
Oct 09 16:41:50 compute-0 nova_compute[117331]:   <os>
Oct 09 16:41:50 compute-0 nova_compute[117331]:     <type arch="x86_64" machine="pc-q35-rhel9.6.0">hvm</type>
Oct 09 16:41:50 compute-0 nova_compute[117331]:     <boot dev="hd"/>
Oct 09 16:41:50 compute-0 nova_compute[117331]:     <smbios mode="sysinfo"/>
Oct 09 16:41:50 compute-0 nova_compute[117331]:   </os>
Oct 09 16:41:50 compute-0 nova_compute[117331]:   <features>
Oct 09 16:41:50 compute-0 nova_compute[117331]:     <acpi/>
Oct 09 16:41:50 compute-0 nova_compute[117331]:     <apic/>
Oct 09 16:41:50 compute-0 nova_compute[117331]:     <vmcoreinfo state="on"/>
Oct 09 16:41:50 compute-0 nova_compute[117331]:   </features>
Oct 09 16:41:50 compute-0 nova_compute[117331]:   <cpu mode="host-model" check="partial">
Oct 09 16:41:50 compute-0 nova_compute[117331]:     <topology sockets="1" dies="1" clusters="1" cores="1" threads="1"/>
Oct 09 16:41:50 compute-0 nova_compute[117331]:   </cpu>
Oct 09 16:41:50 compute-0 nova_compute[117331]:   <clock offset="utc">
Oct 09 16:41:50 compute-0 nova_compute[117331]:     <timer name="pit" tickpolicy="delay"/>
Oct 09 16:41:50 compute-0 nova_compute[117331]:     <timer name="rtc" tickpolicy="catchup"/>
Oct 09 16:41:50 compute-0 nova_compute[117331]:     <timer name="hpet" present="no"/>
Oct 09 16:41:50 compute-0 nova_compute[117331]:   </clock>
Oct 09 16:41:50 compute-0 nova_compute[117331]:   <on_poweroff>destroy</on_poweroff>
Oct 09 16:41:50 compute-0 nova_compute[117331]:   <on_reboot>restart</on_reboot>
Oct 09 16:41:50 compute-0 nova_compute[117331]:   <on_crash>destroy</on_crash>
Oct 09 16:41:50 compute-0 nova_compute[117331]:   <devices>
Oct 09 16:41:50 compute-0 nova_compute[117331]:     <emulator>/usr/libexec/qemu-kvm</emulator>
Oct 09 16:41:50 compute-0 nova_compute[117331]:     <disk type="file" device="disk">
Oct 09 16:41:50 compute-0 nova_compute[117331]:       <driver name="qemu" type="qcow2" cache="none"/>
Oct 09 16:41:50 compute-0 nova_compute[117331]:       <source file="/var/lib/nova/instances/e49a1e89-0c38-4887-9bdf-64f07982ab22/disk"/>
Oct 09 16:41:50 compute-0 nova_compute[117331]:       <target dev="vda" bus="virtio"/>
Oct 09 16:41:50 compute-0 nova_compute[117331]:       <address type="pci" domain="0x0000" bus="0x03" slot="0x00" function="0x0"/>
Oct 09 16:41:50 compute-0 nova_compute[117331]:     </disk>
Oct 09 16:41:50 compute-0 nova_compute[117331]:     <disk type="file" device="cdrom">
Oct 09 16:41:50 compute-0 nova_compute[117331]:       <driver name="qemu" type="raw" cache="none"/>
Oct 09 16:41:50 compute-0 nova_compute[117331]:       <source file="/var/lib/nova/instances/e49a1e89-0c38-4887-9bdf-64f07982ab22/disk.config"/>
Oct 09 16:41:50 compute-0 nova_compute[117331]:       <target dev="sda" bus="sata"/>
Oct 09 16:41:50 compute-0 nova_compute[117331]:       <readonly/>
Oct 09 16:41:50 compute-0 nova_compute[117331]:       <address type="drive" controller="0" bus="0" target="0" unit="0"/>
Oct 09 16:41:50 compute-0 nova_compute[117331]:     </disk>
Oct 09 16:41:50 compute-0 nova_compute[117331]:     <controller type="pci" index="0" model="pcie-root"/>
Oct 09 16:41:50 compute-0 nova_compute[117331]:     <controller type="pci" index="1" model="pcie-root-port">
Oct 09 16:41:50 compute-0 nova_compute[117331]:       <model name="pcie-root-port"/>
Oct 09 16:41:50 compute-0 nova_compute[117331]:       <target chassis="1" port="0x10"/>
Oct 09 16:41:50 compute-0 nova_compute[117331]:       <address type="pci" domain="0x0000" bus="0x00" slot="0x02" function="0x0" multifunction="on"/>
Oct 09 16:41:50 compute-0 nova_compute[117331]:     </controller>
Oct 09 16:41:50 compute-0 nova_compute[117331]:     <controller type="pci" index="2" model="pcie-root-port">
Oct 09 16:41:50 compute-0 nova_compute[117331]:       <model name="pcie-root-port"/>
Oct 09 16:41:50 compute-0 nova_compute[117331]:       <target chassis="2" port="0x11"/>
Oct 09 16:41:50 compute-0 nova_compute[117331]:       <address type="pci" domain="0x0000" bus="0x00" slot="0x02" function="0x1"/>
Oct 09 16:41:50 compute-0 nova_compute[117331]:     </controller>
Oct 09 16:41:50 compute-0 nova_compute[117331]:     <controller type="pci" index="3" model="pcie-root-port">
Oct 09 16:41:50 compute-0 nova_compute[117331]:       <model name="pcie-root-port"/>
Oct 09 16:41:50 compute-0 nova_compute[117331]:       <target chassis="3" port="0x12"/>
Oct 09 16:41:50 compute-0 nova_compute[117331]:       <address type="pci" domain="0x0000" bus="0x00" slot="0x02" function="0x2"/>
Oct 09 16:41:50 compute-0 nova_compute[117331]:     </controller>
Oct 09 16:41:50 compute-0 nova_compute[117331]:     <controller type="pci" index="4" model="pcie-root-port">
Oct 09 16:41:50 compute-0 nova_compute[117331]:       <model name="pcie-root-port"/>
Oct 09 16:41:50 compute-0 nova_compute[117331]:       <target chassis="4" port="0x13"/>
Oct 09 16:41:50 compute-0 nova_compute[117331]:       <address type="pci" domain="0x0000" bus="0x00" slot="0x02" function="0x3"/>
Oct 09 16:41:50 compute-0 nova_compute[117331]:     </controller>
Oct 09 16:41:50 compute-0 nova_compute[117331]:     <controller type="pci" index="5" model="pcie-root-port">
Oct 09 16:41:50 compute-0 nova_compute[117331]:       <model name="pcie-root-port"/>
Oct 09 16:41:50 compute-0 nova_compute[117331]:       <target chassis="5" port="0x14"/>
Oct 09 16:41:50 compute-0 nova_compute[117331]:       <address type="pci" domain="0x0000" bus="0x00" slot="0x02" function="0x4"/>
Oct 09 16:41:50 compute-0 nova_compute[117331]:     </controller>
Oct 09 16:41:50 compute-0 nova_compute[117331]:     <controller type="pci" index="6" model="pcie-root-port">
Oct 09 16:41:50 compute-0 nova_compute[117331]:       <model name="pcie-root-port"/>
Oct 09 16:41:50 compute-0 nova_compute[117331]:       <target chassis="6" port="0x15"/>
Oct 09 16:41:50 compute-0 nova_compute[117331]:       <address type="pci" domain="0x0000" bus="0x00" slot="0x02" function="0x5"/>
Oct 09 16:41:50 compute-0 nova_compute[117331]:     </controller>
Oct 09 16:41:50 compute-0 nova_compute[117331]:     <controller type="pci" index="7" model="pcie-root-port">
Oct 09 16:41:50 compute-0 nova_compute[117331]:       <model name="pcie-root-port"/>
Oct 09 16:41:50 compute-0 nova_compute[117331]:       <target chassis="7" port="0x16"/>
Oct 09 16:41:50 compute-0 nova_compute[117331]:       <address type="pci" domain="0x0000" bus="0x00" slot="0x02" function="0x6"/>
Oct 09 16:41:50 compute-0 nova_compute[117331]:     </controller>
Oct 09 16:41:50 compute-0 nova_compute[117331]:     <controller type="pci" index="8" model="pcie-root-port">
Oct 09 16:41:50 compute-0 nova_compute[117331]:       <model name="pcie-root-port"/>
Oct 09 16:41:50 compute-0 nova_compute[117331]:       <target chassis="8" port="0x17"/>
Oct 09 16:41:50 compute-0 nova_compute[117331]:       <address type="pci" domain="0x0000" bus="0x00" slot="0x02" function="0x7"/>
Oct 09 16:41:50 compute-0 nova_compute[117331]:     </controller>
Oct 09 16:41:50 compute-0 nova_compute[117331]:     <controller type="pci" index="9" model="pcie-root-port">
Oct 09 16:41:50 compute-0 nova_compute[117331]:       <model name="pcie-root-port"/>
Oct 09 16:41:50 compute-0 nova_compute[117331]:       <target chassis="9" port="0x18"/>
Oct 09 16:41:50 compute-0 nova_compute[117331]:       <address type="pci" domain="0x0000" bus="0x00" slot="0x03" function="0x0" multifunction="on"/>
Oct 09 16:41:50 compute-0 nova_compute[117331]:     </controller>
Oct 09 16:41:50 compute-0 nova_compute[117331]:     <controller type="pci" index="10" model="pcie-root-port">
Oct 09 16:41:50 compute-0 nova_compute[117331]:       <model name="pcie-root-port"/>
Oct 09 16:41:50 compute-0 nova_compute[117331]:       <target chassis="10" port="0x19"/>
Oct 09 16:41:50 compute-0 nova_compute[117331]:       <address type="pci" domain="0x0000" bus="0x00" slot="0x03" function="0x1"/>
Oct 09 16:41:50 compute-0 nova_compute[117331]:     </controller>
Oct 09 16:41:50 compute-0 nova_compute[117331]:     <controller type="pci" index="11" model="pcie-root-port">
Oct 09 16:41:50 compute-0 nova_compute[117331]:       <model name="pcie-root-port"/>
Oct 09 16:41:50 compute-0 nova_compute[117331]:       <target chassis="11" port="0x1a"/>
Oct 09 16:41:50 compute-0 nova_compute[117331]:       <address type="pci" domain="0x0000" bus="0x00" slot="0x03" function="0x2"/>
Oct 09 16:41:50 compute-0 nova_compute[117331]:     </controller>
Oct 09 16:41:50 compute-0 nova_compute[117331]:     <controller type="pci" index="12" model="pcie-root-port">
Oct 09 16:41:50 compute-0 nova_compute[117331]:       <model name="pcie-root-port"/>
Oct 09 16:41:50 compute-0 nova_compute[117331]:       <target chassis="12" port="0x1b"/>
Oct 09 16:41:50 compute-0 nova_compute[117331]:       <address type="pci" domain="0x0000" bus="0x00" slot="0x03" function="0x3"/>
Oct 09 16:41:50 compute-0 nova_compute[117331]:     </controller>
Oct 09 16:41:50 compute-0 nova_compute[117331]:     <controller type="pci" index="13" model="pcie-root-port">
Oct 09 16:41:50 compute-0 nova_compute[117331]:       <model name="pcie-root-port"/>
Oct 09 16:41:50 compute-0 nova_compute[117331]:       <target chassis="13" port="0x1c"/>
Oct 09 16:41:50 compute-0 nova_compute[117331]:       <address type="pci" domain="0x0000" bus="0x00" slot="0x03" function="0x4"/>
Oct 09 16:41:50 compute-0 nova_compute[117331]:     </controller>
Oct 09 16:41:50 compute-0 nova_compute[117331]:     <controller type="pci" index="14" model="pcie-root-port">
Oct 09 16:41:50 compute-0 nova_compute[117331]:       <model name="pcie-root-port"/>
Oct 09 16:41:50 compute-0 nova_compute[117331]:       <target chassis="14" port="0x1d"/>
Oct 09 16:41:50 compute-0 nova_compute[117331]:       <address type="pci" domain="0x0000" bus="0x00" slot="0x03" function="0x5"/>
Oct 09 16:41:50 compute-0 nova_compute[117331]:     </controller>
Oct 09 16:41:50 compute-0 nova_compute[117331]:     <controller type="pci" index="15" model="pcie-root-port">
Oct 09 16:41:50 compute-0 nova_compute[117331]:       <model name="pcie-root-port"/>
Oct 09 16:41:50 compute-0 nova_compute[117331]:       <target chassis="15" port="0x1e"/>
Oct 09 16:41:50 compute-0 nova_compute[117331]:       <address type="pci" domain="0x0000" bus="0x00" slot="0x03" function="0x6"/>
Oct 09 16:41:50 compute-0 nova_compute[117331]:     </controller>
Oct 09 16:41:50 compute-0 nova_compute[117331]:     <controller type="pci" index="16" model="pcie-root-port">
Oct 09 16:41:50 compute-0 nova_compute[117331]:       <model name="pcie-root-port"/>
Oct 09 16:41:50 compute-0 nova_compute[117331]:       <target chassis="16" port="0x1f"/>
Oct 09 16:41:50 compute-0 nova_compute[117331]:       <address type="pci" domain="0x0000" bus="0x00" slot="0x03" function="0x7"/>
Oct 09 16:41:50 compute-0 nova_compute[117331]:     </controller>
Oct 09 16:41:50 compute-0 nova_compute[117331]:     <controller type="pci" index="17" model="pcie-root-port">
Oct 09 16:41:50 compute-0 nova_compute[117331]:       <model name="pcie-root-port"/>
Oct 09 16:41:50 compute-0 nova_compute[117331]:       <target chassis="17" port="0x20"/>
Oct 09 16:41:50 compute-0 nova_compute[117331]:       <address type="pci" domain="0x0000" bus="0x00" slot="0x04" function="0x0" multifunction="on"/>
Oct 09 16:41:50 compute-0 nova_compute[117331]:     </controller>
Oct 09 16:41:50 compute-0 nova_compute[117331]:     <controller type="pci" index="18" model="pcie-root-port">
Oct 09 16:41:50 compute-0 nova_compute[117331]:       <model name="pcie-root-port"/>
Oct 09 16:41:50 compute-0 nova_compute[117331]:       <target chassis="18" port="0x21"/>
Oct 09 16:41:50 compute-0 nova_compute[117331]:       <address type="pci" domain="0x0000" bus="0x00" slot="0x04" function="0x1"/>
Oct 09 16:41:50 compute-0 nova_compute[117331]:     </controller>
Oct 09 16:41:50 compute-0 nova_compute[117331]:     <controller type="pci" index="19" model="pcie-root-port">
Oct 09 16:41:50 compute-0 nova_compute[117331]:       <model name="pcie-root-port"/>
Oct 09 16:41:50 compute-0 nova_compute[117331]:       <target chassis="19" port="0x22"/>
Oct 09 16:41:50 compute-0 nova_compute[117331]:       <address type="pci" domain="0x0000" bus="0x00" slot="0x04" function="0x2"/>
Oct 09 16:41:50 compute-0 nova_compute[117331]:     </controller>
Oct 09 16:41:50 compute-0 nova_compute[117331]:     <controller type="pci" index="20" model="pcie-root-port">
Oct 09 16:41:50 compute-0 nova_compute[117331]:       <model name="pcie-root-port"/>
Oct 09 16:41:50 compute-0 nova_compute[117331]:       <target chassis="20" port="0x23"/>
Oct 09 16:41:50 compute-0 nova_compute[117331]:       <address type="pci" domain="0x0000" bus="0x00" slot="0x04" function="0x3"/>
Oct 09 16:41:50 compute-0 nova_compute[117331]:     </controller>
Oct 09 16:41:50 compute-0 nova_compute[117331]:     <controller type="pci" index="21" model="pcie-root-port">
Oct 09 16:41:50 compute-0 nova_compute[117331]:       <model name="pcie-root-port"/>
Oct 09 16:41:50 compute-0 nova_compute[117331]:       <target chassis="21" port="0x24"/>
Oct 09 16:41:50 compute-0 nova_compute[117331]:       <address type="pci" domain="0x0000" bus="0x00" slot="0x04" function="0x4"/>
Oct 09 16:41:50 compute-0 nova_compute[117331]:     </controller>
Oct 09 16:41:50 compute-0 nova_compute[117331]:     <controller type="pci" index="22" model="pcie-root-port">
Oct 09 16:41:50 compute-0 nova_compute[117331]:       <model name="pcie-root-port"/>
Oct 09 16:41:50 compute-0 nova_compute[117331]:       <target chassis="22" port="0x25"/>
Oct 09 16:41:50 compute-0 nova_compute[117331]:       <address type="pci" domain="0x0000" bus="0x00" slot="0x04" function="0x5"/>
Oct 09 16:41:50 compute-0 nova_compute[117331]:     </controller>
Oct 09 16:41:50 compute-0 nova_compute[117331]:     <controller type="pci" index="23" model="pcie-root-port">
Oct 09 16:41:50 compute-0 nova_compute[117331]:       <model name="pcie-root-port"/>
Oct 09 16:41:50 compute-0 nova_compute[117331]:       <target chassis="23" port="0x26"/>
Oct 09 16:41:50 compute-0 nova_compute[117331]:       <address type="pci" domain="0x0000" bus="0x00" slot="0x04" function="0x6"/>
Oct 09 16:41:50 compute-0 nova_compute[117331]:     </controller>
Oct 09 16:41:50 compute-0 nova_compute[117331]:     <controller type="pci" index="24" model="pcie-root-port">
Oct 09 16:41:50 compute-0 nova_compute[117331]:       <model name="pcie-root-port"/>
Oct 09 16:41:50 compute-0 nova_compute[117331]:       <target chassis="24" port="0x27"/>
Oct 09 16:41:50 compute-0 nova_compute[117331]:       <address type="pci" domain="0x0000" bus="0x00" slot="0x04" function="0x7"/>
Oct 09 16:41:50 compute-0 nova_compute[117331]:     </controller>
Oct 09 16:41:50 compute-0 nova_compute[117331]:     <controller type="pci" index="25" model="pcie-root-port">
Oct 09 16:41:50 compute-0 nova_compute[117331]:       <model name="pcie-root-port"/>
Oct 09 16:41:50 compute-0 nova_compute[117331]:       <target chassis="25" port="0x28"/>
Oct 09 16:41:50 compute-0 nova_compute[117331]:       <address type="pci" domain="0x0000" bus="0x00" slot="0x05" function="0x0"/>
Oct 09 16:41:50 compute-0 nova_compute[117331]:     </controller>
Oct 09 16:41:50 compute-0 nova_compute[117331]:     <controller type="pci" index="26" model="pcie-to-pci-bridge">
Oct 09 16:41:50 compute-0 nova_compute[117331]:       <model name="pcie-pci-bridge"/>
Oct 09 16:41:50 compute-0 nova_compute[117331]:       <address type="pci" domain="0x0000" bus="0x01" slot="0x00" function="0x0"/>
Oct 09 16:41:50 compute-0 nova_compute[117331]:     </controller>
Oct 09 16:41:50 compute-0 nova_compute[117331]:     <controller type="usb" index="0" model="piix3-uhci">
Oct 09 16:41:50 compute-0 nova_compute[117331]:       <address type="pci" domain="0x0000" bus="0x1a" slot="0x01" function="0x0"/>
Oct 09 16:41:50 compute-0 nova_compute[117331]:     </controller>
Oct 09 16:41:50 compute-0 nova_compute[117331]:     <controller type="sata" index="0">
Oct 09 16:41:50 compute-0 nova_compute[117331]:       <address type="pci" domain="0x0000" bus="0x00" slot="0x1f" function="0x2"/>
Oct 09 16:41:50 compute-0 nova_compute[117331]:     </controller>
Oct 09 16:41:50 compute-0 nova_compute[117331]:     <interface type="ethernet">
Oct 09 16:41:50 compute-0 nova_compute[117331]:       <mac address="fa:16:3e:11:4c:7f"/>
Oct 09 16:41:50 compute-0 nova_compute[117331]:       <model type="virtio"/>
Oct 09 16:41:50 compute-0 nova_compute[117331]:       <driver name="vhost" rx_queue_size="512"/>
Oct 09 16:41:50 compute-0 nova_compute[117331]:       <mtu size="1442"/>
Oct 09 16:41:50 compute-0 nova_compute[117331]:       <target dev="tap3bf58985-c8"/>
Oct 09 16:41:50 compute-0 nova_compute[117331]:       <address type="pci" domain="0x0000" bus="0x02" slot="0x00" function="0x0"/>
Oct 09 16:41:50 compute-0 nova_compute[117331]:     </interface>
Oct 09 16:41:50 compute-0 nova_compute[117331]:     <serial type="pty">
Oct 09 16:41:50 compute-0 nova_compute[117331]:       <log file="/var/lib/nova/instances/e49a1e89-0c38-4887-9bdf-64f07982ab22/console.log" append="off"/>
Oct 09 16:41:50 compute-0 nova_compute[117331]:       <target type="isa-serial" port="0">
Oct 09 16:41:50 compute-0 nova_compute[117331]:         <model name="isa-serial"/>
Oct 09 16:41:50 compute-0 nova_compute[117331]:       </target>
Oct 09 16:41:50 compute-0 nova_compute[117331]:     </serial>
Oct 09 16:41:50 compute-0 nova_compute[117331]:     <console type="pty">
Oct 09 16:41:50 compute-0 nova_compute[117331]:       <log file="/var/lib/nova/instances/e49a1e89-0c38-4887-9bdf-64f07982ab22/console.log" append="off"/>
Oct 09 16:41:50 compute-0 nova_compute[117331]:       <target type="serial" port="0"/>
Oct 09 16:41:50 compute-0 nova_compute[117331]:     </console>
Oct 09 16:41:50 compute-0 nova_compute[117331]:     <input type="tablet" bus="usb">
Oct 09 16:41:50 compute-0 nova_compute[117331]:       <address type="usb" bus="0" port="1"/>
Oct 09 16:41:50 compute-0 nova_compute[117331]:     </input>
Oct 09 16:41:50 compute-0 nova_compute[117331]:     <input type="mouse" bus="ps2"/>
Oct 09 16:41:50 compute-0 nova_compute[117331]:     <graphics type="vnc" port="-1" autoport="yes" listen="::">
Oct 09 16:41:50 compute-0 nova_compute[117331]:       <listen type="address" address="::"/>
Oct 09 16:41:50 compute-0 nova_compute[117331]:     </graphics>
Oct 09 16:41:50 compute-0 nova_compute[117331]:     <video>
Oct 09 16:41:50 compute-0 nova_compute[117331]:       <model type="virtio" heads="1" primary="yes"/>
Oct 09 16:41:50 compute-0 nova_compute[117331]:       <address type="pci" domain="0x0000" bus="0x00" slot="0x01" function="0x0"/>
Oct 09 16:41:50 compute-0 nova_compute[117331]:     </video>
Oct 09 16:41:50 compute-0 nova_compute[117331]:     <memballoon model="virtio" autodeflate="on" freePageReporting="on">
Oct 09 16:41:50 compute-0 nova_compute[117331]:       <stats period="10"/>
Oct 09 16:41:50 compute-0 nova_compute[117331]:       <address type="pci" domain="0x0000" bus="0x04" slot="0x00" function="0x0"/>
Oct 09 16:41:50 compute-0 nova_compute[117331]:     </memballoon>
Oct 09 16:41:50 compute-0 nova_compute[117331]:     <rng model="virtio">
Oct 09 16:41:50 compute-0 nova_compute[117331]:       <backend model="random">/dev/urandom</backend>
Oct 09 16:41:50 compute-0 nova_compute[117331]:       <address type="pci" domain="0x0000" bus="0x05" slot="0x00" function="0x0"/>
Oct 09 16:41:50 compute-0 nova_compute[117331]:     </rng>
Oct 09 16:41:50 compute-0 nova_compute[117331]:   </devices>
Oct 09 16:41:50 compute-0 nova_compute[117331]:   <seclabel type="dynamic" model="selinux" relabel="yes"/>
Oct 09 16:41:50 compute-0 nova_compute[117331]: </domain>
Oct 09 16:41:50 compute-0 nova_compute[117331]:  _update_pci_dev_xml /usr/lib/python3.12/site-packages/nova/virt/libvirt/migration.py:166
Oct 09 16:41:50 compute-0 nova_compute[117331]: 2025-10-09 16:41:50.214 2 DEBUG nova.virt.libvirt.driver [None req-9c6f36d9-f824-403c-8831-426b13d31fa0 005258eefa5345409efe13f6a1de22ab c076c42271004e7aa86e84416e33f826 - - default default] [instance: e49a1e89-0c38-4887-9bdf-64f07982ab22] About to invoke the migrate API _live_migration_operation /usr/lib/python3.12/site-packages/nova/virt/libvirt/driver.py:11175
Oct 09 16:41:50 compute-0 nova_compute[117331]: 2025-10-09 16:41:50.531 2 DEBUG oslo_concurrency.lockutils [req-23a7b619-9076-4a50-8fe6-64eff8a07341 req-71f87633-beda-4aa4-b9b5-cd457cae89f6 ee15961ad2be4bada6c106ac4f69f422 c076c42271004e7aa86e84416e33f826 - - default default] Releasing lock "refresh_cache-e49a1e89-0c38-4887-9bdf-64f07982ab22" lock /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:334
Oct 09 16:41:50 compute-0 nova_compute[117331]: 2025-10-09 16:41:50.703 2 DEBUG nova.virt.libvirt.migration [None req-9c6f36d9-f824-403c-8831-426b13d31fa0 005258eefa5345409efe13f6a1de22ab c076c42271004e7aa86e84416e33f826 - - default default] [instance: e49a1e89-0c38-4887-9bdf-64f07982ab22] Current None elapsed 1 steps [(0, 50), (300, 95), (600, 140), (900, 185), (1200, 230), (1500, 275), (1800, 320), (2100, 365), (2400, 410), (2700, 455), (3000, 500)] update_downtime /usr/lib/python3.12/site-packages/nova/virt/libvirt/migration.py:658
Oct 09 16:41:50 compute-0 nova_compute[117331]: 2025-10-09 16:41:50.703 2 INFO nova.virt.libvirt.migration [None req-9c6f36d9-f824-403c-8831-426b13d31fa0 005258eefa5345409efe13f6a1de22ab c076c42271004e7aa86e84416e33f826 - - default default] [instance: e49a1e89-0c38-4887-9bdf-64f07982ab22] Increasing downtime to 50 ms after 0 sec elapsed time
Oct 09 16:41:50 compute-0 ovn_controller[19752]: 2025-10-09T16:41:50Z|00285|memory_trim|INFO|Detected inactivity (last active 30002 ms ago): trimming memory
Oct 09 16:41:51 compute-0 nova_compute[117331]: 2025-10-09 16:41:51.725 2 INFO nova.virt.libvirt.driver [None req-9c6f36d9-f824-403c-8831-426b13d31fa0 005258eefa5345409efe13f6a1de22ab c076c42271004e7aa86e84416e33f826 - - default default] [instance: e49a1e89-0c38-4887-9bdf-64f07982ab22] Migration running for 1 secs, memory 100% remaining (bytes processed=0, remaining=0, total=0); disk 100% remaining (bytes processed=0, remaining=0, total=0).
Oct 09 16:41:52 compute-0 nova_compute[117331]: 2025-10-09 16:41:52.229 2 DEBUG nova.virt.libvirt.migration [None req-9c6f36d9-f824-403c-8831-426b13d31fa0 005258eefa5345409efe13f6a1de22ab c076c42271004e7aa86e84416e33f826 - - default default] [instance: e49a1e89-0c38-4887-9bdf-64f07982ab22] Current 50 elapsed 2 steps [(0, 50), (300, 95), (600, 140), (900, 185), (1200, 230), (1500, 275), (1800, 320), (2100, 365), (2400, 410), (2700, 455), (3000, 500)] update_downtime /usr/lib/python3.12/site-packages/nova/virt/libvirt/migration.py:658
Oct 09 16:41:52 compute-0 nova_compute[117331]: 2025-10-09 16:41:52.229 2 DEBUG nova.virt.libvirt.migration [None req-9c6f36d9-f824-403c-8831-426b13d31fa0 005258eefa5345409efe13f6a1de22ab c076c42271004e7aa86e84416e33f826 - - default default] [instance: e49a1e89-0c38-4887-9bdf-64f07982ab22] Downtime does not need to change update_downtime /usr/lib/python3.12/site-packages/nova/virt/libvirt/migration.py:671
Oct 09 16:41:52 compute-0 kernel: tap3bf58985-c8 (unregistering): left promiscuous mode
Oct 09 16:41:52 compute-0 NetworkManager[1028]: <info>  [1760028112.4078] device (tap3bf58985-c8): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Oct 09 16:41:52 compute-0 nova_compute[117331]: 2025-10-09 16:41:52.415 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 09 16:41:52 compute-0 ovn_controller[19752]: 2025-10-09T16:41:52Z|00286|binding|INFO|Releasing lport 3bf58985-c8d6-4c25-bed7-5fe8a78a1239 from this chassis (sb_readonly=0)
Oct 09 16:41:52 compute-0 ovn_controller[19752]: 2025-10-09T16:41:52Z|00287|binding|INFO|Setting lport 3bf58985-c8d6-4c25-bed7-5fe8a78a1239 down in Southbound
Oct 09 16:41:52 compute-0 ovn_controller[19752]: 2025-10-09T16:41:52Z|00288|binding|INFO|Removing iface tap3bf58985-c8 ovn-installed in OVS
Oct 09 16:41:52 compute-0 nova_compute[117331]: 2025-10-09 16:41:52.419 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 09 16:41:52 compute-0 ovn_metadata_agent[28608]: 2025-10-09 16:41:52.423 28613 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:11:4c:7f 10.100.0.3'], port_security=['fa:16:3e:11:4c:7f 10.100.0.3'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com,compute-1.ctlplane.example.com', 'activation-strategy': 'rarp', 'additional-chassis-activated': '2bd8bf21-1f6b-42c9-9656-9a72fa8dcbf3'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.3/28', 'neutron:device_id': 'e49a1e89-0c38-4887-9bdf-64f07982ab22', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-faa2f899-e3f1-48c6-ac37-859a6fb5c6cc', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'fca25ca2f317463c909e30c8b2c188d1', 'neutron:revision_number': '10', 'neutron:security_group_ids': 'a3264ad7-df47-48af-b997-2022c89ca53a', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=0a2d5ec5-dae5-4df8-b843-efe172a6e533, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f37b2576840>], logical_port=3bf58985-c8d6-4c25-bed7-5fe8a78a1239) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f37b2576840>]) matches /usr/lib/python3.12/site-packages/ovsdbapp/backend/ovs_idl/event.py:55
Oct 09 16:41:52 compute-0 ovn_metadata_agent[28608]: 2025-10-09 16:41:52.424 28613 INFO neutron.agent.ovn.metadata.agent [-] Port 3bf58985-c8d6-4c25-bed7-5fe8a78a1239 in datapath faa2f899-e3f1-48c6-ac37-859a6fb5c6cc unbound from our chassis
Oct 09 16:41:52 compute-0 ovn_metadata_agent[28608]: 2025-10-09 16:41:52.426 28613 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network faa2f899-e3f1-48c6-ac37-859a6fb5c6cc, tearing the namespace down if needed _get_provision_params /usr/lib/python3.12/site-packages/neutron/agent/ovn/metadata/agent.py:756
Oct 09 16:41:52 compute-0 ovn_metadata_agent[28608]: 2025-10-09 16:41:52.428 139687 DEBUG oslo.privsep.daemon [-] privsep: reply[cf6ed388-8330-44cf-8634-2b91dc5eb0a5]: (4, True) _call_back /usr/lib/python3.12/site-packages/oslo_privsep/daemon.py:515
Oct 09 16:41:52 compute-0 ovn_metadata_agent[28608]: 2025-10-09 16:41:52.429 28613 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-faa2f899-e3f1-48c6-ac37-859a6fb5c6cc namespace which is not needed anymore
Oct 09 16:41:52 compute-0 nova_compute[117331]: 2025-10-09 16:41:52.440 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 09 16:41:52 compute-0 nova_compute[117331]: 2025-10-09 16:41:52.455 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 09 16:41:52 compute-0 systemd[1]: machine-qemu\x2d24\x2dinstance\x2d0000001e.scope: Deactivated successfully.
Oct 09 16:41:52 compute-0 systemd[1]: machine-qemu\x2d24\x2dinstance\x2d0000001e.scope: Consumed 14.809s CPU time.
Oct 09 16:41:52 compute-0 systemd-machined[77487]: Machine qemu-24-instance-0000001e terminated.
Oct 09 16:41:52 compute-0 neutron-haproxy-ovnmeta-faa2f899-e3f1-48c6-ac37-859a6fb5c6cc[154105]: [NOTICE]   (154109) : haproxy version is 3.0.5-8e879a5
Oct 09 16:41:52 compute-0 neutron-haproxy-ovnmeta-faa2f899-e3f1-48c6-ac37-859a6fb5c6cc[154105]: [NOTICE]   (154109) : path to executable is /usr/sbin/haproxy
Oct 09 16:41:52 compute-0 neutron-haproxy-ovnmeta-faa2f899-e3f1-48c6-ac37-859a6fb5c6cc[154105]: [WARNING]  (154109) : Exiting Master process...
Oct 09 16:41:52 compute-0 podman[154399]: 2025-10-09 16:41:52.534760636 +0000 UTC m=+0.025401837 container kill 8ec9ceba5cfec52d217c889396f91125c1b36ab6779db780d20ae539f500a89a (image=38.102.83.66:5001/podified-master-centos10/openstack-neutron-metadata-agent-ovn:watcher_latest, name=neutron-haproxy-ovnmeta-faa2f899-e3f1-48c6-ac37-859a6fb5c6cc, org.label-schema.build-date=20251007, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 10 Base Image, tcib_build_tag=watcher_latest, io.buildah.version=1.41.4)
Oct 09 16:41:52 compute-0 neutron-haproxy-ovnmeta-faa2f899-e3f1-48c6-ac37-859a6fb5c6cc[154105]: [ALERT]    (154109) : Current worker (154111) exited with code 143 (Terminated)
Oct 09 16:41:52 compute-0 neutron-haproxy-ovnmeta-faa2f899-e3f1-48c6-ac37-859a6fb5c6cc[154105]: [WARNING]  (154109) : All workers exited. Exiting... (0)
Oct 09 16:41:52 compute-0 systemd[1]: libpod-8ec9ceba5cfec52d217c889396f91125c1b36ab6779db780d20ae539f500a89a.scope: Deactivated successfully.
Oct 09 16:41:52 compute-0 podman[154415]: 2025-10-09 16:41:52.578740395 +0000 UTC m=+0.024678576 container died 8ec9ceba5cfec52d217c889396f91125c1b36ab6779db780d20ae539f500a89a (image=38.102.83.66:5001/podified-master-centos10/openstack-neutron-metadata-agent-ovn:watcher_latest, name=neutron-haproxy-ovnmeta-faa2f899-e3f1-48c6-ac37-859a6fb5c6cc, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 10 Base Image, tcib_build_tag=watcher_latest, org.label-schema.build-date=20251007, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.41.4, org.label-schema.schema-version=1.0, tcib_managed=true)
Oct 09 16:41:52 compute-0 nova_compute[117331]: 2025-10-09 16:41:52.604 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 09 16:41:52 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-8ec9ceba5cfec52d217c889396f91125c1b36ab6779db780d20ae539f500a89a-userdata-shm.mount: Deactivated successfully.
Oct 09 16:41:52 compute-0 systemd[1]: var-lib-containers-storage-overlay-3f156f522bd7cba66daf2ab5fa62f41a0e6dc1d093eddc3953bdf18206bc4790-merged.mount: Deactivated successfully.
Oct 09 16:41:52 compute-0 nova_compute[117331]: 2025-10-09 16:41:52.611 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 09 16:41:52 compute-0 podman[154415]: 2025-10-09 16:41:52.620281174 +0000 UTC m=+0.066219315 container cleanup 8ec9ceba5cfec52d217c889396f91125c1b36ab6779db780d20ae539f500a89a (image=38.102.83.66:5001/podified-master-centos10/openstack-neutron-metadata-agent-ovn:watcher_latest, name=neutron-haproxy-ovnmeta-faa2f899-e3f1-48c6-ac37-859a6fb5c6cc, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.vendor=CentOS, tcib_build_tag=watcher_latest, io.buildah.version=1.41.4, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251007)
Oct 09 16:41:52 compute-0 systemd[1]: libpod-conmon-8ec9ceba5cfec52d217c889396f91125c1b36ab6779db780d20ae539f500a89a.scope: Deactivated successfully.
Oct 09 16:41:52 compute-0 podman[154416]: 2025-10-09 16:41:52.638276577 +0000 UTC m=+0.080922054 container remove 8ec9ceba5cfec52d217c889396f91125c1b36ab6779db780d20ae539f500a89a (image=38.102.83.66:5001/podified-master-centos10/openstack-neutron-metadata-agent-ovn:watcher_latest, name=neutron-haproxy-ovnmeta-faa2f899-e3f1-48c6-ac37-859a6fb5c6cc, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 10 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=watcher_latest, org.label-schema.build-date=20251007, tcib_managed=true, io.buildah.version=1.41.4)
Oct 09 16:41:52 compute-0 nova_compute[117331]: 2025-10-09 16:41:52.646 2 DEBUG nova.virt.libvirt.driver [None req-9c6f36d9-f824-403c-8831-426b13d31fa0 005258eefa5345409efe13f6a1de22ab c076c42271004e7aa86e84416e33f826 - - default default] [instance: e49a1e89-0c38-4887-9bdf-64f07982ab22] Migrate API has completed _live_migration_operation /usr/lib/python3.12/site-packages/nova/virt/libvirt/driver.py:11182
Oct 09 16:41:52 compute-0 nova_compute[117331]: 2025-10-09 16:41:52.646 2 DEBUG nova.virt.libvirt.driver [None req-9c6f36d9-f824-403c-8831-426b13d31fa0 005258eefa5345409efe13f6a1de22ab c076c42271004e7aa86e84416e33f826 - - default default] [instance: e49a1e89-0c38-4887-9bdf-64f07982ab22] Migration operation thread has finished _live_migration_operation /usr/lib/python3.12/site-packages/nova/virt/libvirt/driver.py:11230
Oct 09 16:41:52 compute-0 nova_compute[117331]: 2025-10-09 16:41:52.646 2 DEBUG nova.virt.libvirt.driver [None req-9c6f36d9-f824-403c-8831-426b13d31fa0 005258eefa5345409efe13f6a1de22ab c076c42271004e7aa86e84416e33f826 - - default default] [instance: e49a1e89-0c38-4887-9bdf-64f07982ab22] Migration operation thread notification thread_finished /usr/lib/python3.12/site-packages/nova/virt/libvirt/driver.py:11533
Oct 09 16:41:52 compute-0 ovn_metadata_agent[28608]: 2025-10-09 16:41:52.646 139687 DEBUG oslo.privsep.daemon [-] privsep: reply[d34cb796-3f46-43bf-8a33-58c26c110c82]: (4, ("Thu Oct  9 04:41:52 PM UTC 2025 Sending signal '15' to neutron-haproxy-ovnmeta-faa2f899-e3f1-48c6-ac37-859a6fb5c6cc (8ec9ceba5cfec52d217c889396f91125c1b36ab6779db780d20ae539f500a89a)\n8ec9ceba5cfec52d217c889396f91125c1b36ab6779db780d20ae539f500a89a\nThu Oct  9 04:41:52 PM UTC 2025 Deleting container neutron-haproxy-ovnmeta-faa2f899-e3f1-48c6-ac37-859a6fb5c6cc (8ec9ceba5cfec52d217c889396f91125c1b36ab6779db780d20ae539f500a89a)\n8ec9ceba5cfec52d217c889396f91125c1b36ab6779db780d20ae539f500a89a\n", '', 0)) _call_back /usr/lib/python3.12/site-packages/oslo_privsep/daemon.py:515
Oct 09 16:41:52 compute-0 ovn_metadata_agent[28608]: 2025-10-09 16:41:52.647 139687 DEBUG oslo.privsep.daemon [-] privsep: reply[83a55493-09b8-474a-9136-02cbb644fae2]: (4, None) _call_back /usr/lib/python3.12/site-packages/oslo_privsep/daemon.py:515
Oct 09 16:41:52 compute-0 ovn_metadata_agent[28608]: 2025-10-09 16:41:52.648 28613 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/faa2f899-e3f1-48c6-ac37-859a6fb5c6cc.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/faa2f899-e3f1-48c6-ac37-859a6fb5c6cc.pid.haproxy' get_value_from_file /usr/lib/python3.12/site-packages/neutron/agent/linux/utils.py:268
Oct 09 16:41:52 compute-0 ovn_metadata_agent[28608]: 2025-10-09 16:41:52.648 139687 DEBUG oslo.privsep.daemon [-] privsep: reply[7bf25b6c-92b3-4872-9d93-73288638277b]: (4, None) _call_back /usr/lib/python3.12/site-packages/oslo_privsep/daemon.py:515
Oct 09 16:41:52 compute-0 ovn_metadata_agent[28608]: 2025-10-09 16:41:52.648 28613 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapfaa2f899-e0, bridge=None, if_exists=True) do_commit /usr/lib/python3.12/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 09 16:41:52 compute-0 nova_compute[117331]: 2025-10-09 16:41:52.649 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 09 16:41:52 compute-0 kernel: tapfaa2f899-e0: left promiscuous mode
Oct 09 16:41:52 compute-0 nova_compute[117331]: 2025-10-09 16:41:52.664 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 09 16:41:52 compute-0 ovn_metadata_agent[28608]: 2025-10-09 16:41:52.666 139687 DEBUG oslo.privsep.daemon [-] privsep: reply[fee33690-76f4-4a94-ae55-71927cc4870a]: (4, True) _call_back /usr/lib/python3.12/site-packages/oslo_privsep/daemon.py:515
Oct 09 16:41:52 compute-0 ovn_metadata_agent[28608]: 2025-10-09 16:41:52.701 139687 DEBUG oslo.privsep.daemon [-] privsep: reply[d09ac4c8-9abf-425e-8b68-8a5fc128e24a]: (4, None) _call_back /usr/lib/python3.12/site-packages/oslo_privsep/daemon.py:515
Oct 09 16:41:52 compute-0 ovn_metadata_agent[28608]: 2025-10-09 16:41:52.702 139687 DEBUG oslo.privsep.daemon [-] privsep: reply[4312c0b0-4bb3-4afa-8b5a-d55909801ae6]: (4, True) _call_back /usr/lib/python3.12/site-packages/oslo_privsep/daemon.py:515
Oct 09 16:41:52 compute-0 ovn_metadata_agent[28608]: 2025-10-09 16:41:52.722 139687 DEBUG oslo.privsep.daemon [-] privsep: reply[092369c1-4e05-469e-80d5-63cf3fa38464]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 300447, 'reachable_time': 15877, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 154466, 'error': None, 'target': 'ovnmeta-faa2f899-e3f1-48c6-ac37-859a6fb5c6cc', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.12/site-packages/oslo_privsep/daemon.py:515
Oct 09 16:41:52 compute-0 ovn_metadata_agent[28608]: 2025-10-09 16:41:52.724 28727 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-faa2f899-e3f1-48c6-ac37-859a6fb5c6cc deleted. remove_netns /usr/lib/python3.12/site-packages/neutron/privileged/agent/linux/ip_lib.py:603
Oct 09 16:41:52 compute-0 ovn_metadata_agent[28608]: 2025-10-09 16:41:52.725 28727 DEBUG oslo.privsep.daemon [-] privsep: reply[1c323799-d7fe-417d-95f8-fd3c0c9c10ea]: (4, None) _call_back /usr/lib/python3.12/site-packages/oslo_privsep/daemon.py:515
Oct 09 16:41:52 compute-0 systemd[1]: run-netns-ovnmeta\x2dfaa2f899\x2de3f1\x2d48c6\x2dac37\x2d859a6fb5c6cc.mount: Deactivated successfully.
Oct 09 16:41:52 compute-0 nova_compute[117331]: 2025-10-09 16:41:52.731 2 DEBUG nova.virt.libvirt.guest [None req-9c6f36d9-f824-403c-8831-426b13d31fa0 005258eefa5345409efe13f6a1de22ab c076c42271004e7aa86e84416e33f826 - - default default] Domain has shutdown/gone away: Domain not found: no domain with matching uuid 'e49a1e89-0c38-4887-9bdf-64f07982ab22' (instance-0000001e) get_job_info /usr/lib/python3.12/site-packages/nova/virt/libvirt/guest.py:687
Oct 09 16:41:52 compute-0 nova_compute[117331]: 2025-10-09 16:41:52.732 2 INFO nova.virt.libvirt.driver [None req-9c6f36d9-f824-403c-8831-426b13d31fa0 005258eefa5345409efe13f6a1de22ab c076c42271004e7aa86e84416e33f826 - - default default] [instance: e49a1e89-0c38-4887-9bdf-64f07982ab22] Migration operation has completed
Oct 09 16:41:52 compute-0 nova_compute[117331]: 2025-10-09 16:41:52.732 2 INFO nova.compute.manager [None req-9c6f36d9-f824-403c-8831-426b13d31fa0 005258eefa5345409efe13f6a1de22ab c076c42271004e7aa86e84416e33f826 - - default default] [instance: e49a1e89-0c38-4887-9bdf-64f07982ab22] _post_live_migration() is started..
Oct 09 16:41:52 compute-0 nova_compute[117331]: 2025-10-09 16:41:52.747 2 WARNING neutronclient.v2_0.client [None req-9c6f36d9-f824-403c-8831-426b13d31fa0 005258eefa5345409efe13f6a1de22ab c076c42271004e7aa86e84416e33f826 - - default default] The python binding code in neutronclient is deprecated in favor of OpenstackSDK, please use that as this will be removed in a future release.
Oct 09 16:41:52 compute-0 nova_compute[117331]: 2025-10-09 16:41:52.748 2 WARNING neutronclient.v2_0.client [None req-9c6f36d9-f824-403c-8831-426b13d31fa0 005258eefa5345409efe13f6a1de22ab c076c42271004e7aa86e84416e33f826 - - default default] The python binding code in neutronclient is deprecated in favor of OpenstackSDK, please use that as this will be removed in a future release.
Oct 09 16:41:52 compute-0 nova_compute[117331]: 2025-10-09 16:41:52.752 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 09 16:41:52 compute-0 nova_compute[117331]: 2025-10-09 16:41:52.963 2 DEBUG nova.compute.manager [req-94a97492-c3dd-4b55-910b-31698357684a req-a255c73c-a5a5-4b70-908c-ec0b91594917 ee15961ad2be4bada6c106ac4f69f422 c076c42271004e7aa86e84416e33f826 - - default default] [instance: e49a1e89-0c38-4887-9bdf-64f07982ab22] Received event network-vif-unplugged-3bf58985-c8d6-4c25-bed7-5fe8a78a1239 external_instance_event /usr/lib/python3.12/site-packages/nova/compute/manager.py:11812
Oct 09 16:41:52 compute-0 nova_compute[117331]: 2025-10-09 16:41:52.964 2 DEBUG oslo_concurrency.lockutils [req-94a97492-c3dd-4b55-910b-31698357684a req-a255c73c-a5a5-4b70-908c-ec0b91594917 ee15961ad2be4bada6c106ac4f69f422 c076c42271004e7aa86e84416e33f826 - - default default] Acquiring lock "e49a1e89-0c38-4887-9bdf-64f07982ab22-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:405
Oct 09 16:41:52 compute-0 nova_compute[117331]: 2025-10-09 16:41:52.964 2 DEBUG oslo_concurrency.lockutils [req-94a97492-c3dd-4b55-910b-31698357684a req-a255c73c-a5a5-4b70-908c-ec0b91594917 ee15961ad2be4bada6c106ac4f69f422 c076c42271004e7aa86e84416e33f826 - - default default] Lock "e49a1e89-0c38-4887-9bdf-64f07982ab22-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:410
Oct 09 16:41:52 compute-0 nova_compute[117331]: 2025-10-09 16:41:52.968 2 DEBUG oslo_concurrency.lockutils [req-94a97492-c3dd-4b55-910b-31698357684a req-a255c73c-a5a5-4b70-908c-ec0b91594917 ee15961ad2be4bada6c106ac4f69f422 c076c42271004e7aa86e84416e33f826 - - default default] Lock "e49a1e89-0c38-4887-9bdf-64f07982ab22-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.004s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:424
Oct 09 16:41:52 compute-0 nova_compute[117331]: 2025-10-09 16:41:52.970 2 DEBUG nova.compute.manager [req-94a97492-c3dd-4b55-910b-31698357684a req-a255c73c-a5a5-4b70-908c-ec0b91594917 ee15961ad2be4bada6c106ac4f69f422 c076c42271004e7aa86e84416e33f826 - - default default] [instance: e49a1e89-0c38-4887-9bdf-64f07982ab22] No waiting events found dispatching network-vif-unplugged-3bf58985-c8d6-4c25-bed7-5fe8a78a1239 pop_instance_event /usr/lib/python3.12/site-packages/nova/compute/manager.py:344
Oct 09 16:41:52 compute-0 nova_compute[117331]: 2025-10-09 16:41:52.971 2 DEBUG nova.compute.manager [req-94a97492-c3dd-4b55-910b-31698357684a req-a255c73c-a5a5-4b70-908c-ec0b91594917 ee15961ad2be4bada6c106ac4f69f422 c076c42271004e7aa86e84416e33f826 - - default default] [instance: e49a1e89-0c38-4887-9bdf-64f07982ab22] Received event network-vif-unplugged-3bf58985-c8d6-4c25-bed7-5fe8a78a1239 for instance with task_state migrating. _process_instance_event /usr/lib/python3.12/site-packages/nova/compute/manager.py:11590
Oct 09 16:41:53 compute-0 ovn_metadata_agent[28608]: 2025-10-09 16:41:53.146 28613 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=32, options={'arp_ns_explicit_output': 'true', 'fdb_removal_limit': '0', 'ignore_lsp_down': 'false', 'mac_binding_removal_limit': '0', 'mac_prefix': '4a:65:5f', 'max_tunid': '16711680', 'northd_internal_version': '24.09.4-20.37.0-77.8', 'svc_monitor_mac': '32:24:1e:e3:5d:e8'}, ipsec=False) old=SB_Global(nb_cfg=31) matches /usr/lib/python3.12/site-packages/ovsdbapp/backend/ovs_idl/event.py:55
Oct 09 16:41:53 compute-0 ovn_metadata_agent[28608]: 2025-10-09 16:41:53.147 28613 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 10 seconds run /usr/lib/python3.12/site-packages/neutron/agent/ovn/metadata/agent.py:367
Oct 09 16:41:53 compute-0 nova_compute[117331]: 2025-10-09 16:41:53.148 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 09 16:41:53 compute-0 nova_compute[117331]: 2025-10-09 16:41:53.339 2 DEBUG nova.network.neutron [None req-9c6f36d9-f824-403c-8831-426b13d31fa0 005258eefa5345409efe13f6a1de22ab c076c42271004e7aa86e84416e33f826 - - default default] Activated binding for port 3bf58985-c8d6-4c25-bed7-5fe8a78a1239 and host compute-1.ctlplane.example.com migrate_instance_start /usr/lib/python3.12/site-packages/nova/network/neutron.py:3241
Oct 09 16:41:53 compute-0 nova_compute[117331]: 2025-10-09 16:41:53.340 2 DEBUG nova.compute.manager [None req-9c6f36d9-f824-403c-8831-426b13d31fa0 005258eefa5345409efe13f6a1de22ab c076c42271004e7aa86e84416e33f826 - - default default] [instance: e49a1e89-0c38-4887-9bdf-64f07982ab22] Calling driver.post_live_migration_at_source with original source VIFs from migrate_data: [{"id": "3bf58985-c8d6-4c25-bed7-5fe8a78a1239", "address": "fa:16:3e:11:4c:7f", "network": {"id": "faa2f899-e3f1-48c6-ac37-859a6fb5c6cc", "bridge": "br-int", "label": "tempest-TestExecuteZoneMigrationStrategy-926478805-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "655616b8f80249ceab702bd3d943237d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap3bf58985-c8", "ovs_interfaceid": "3bf58985-c8d6-4c25-bed7-5fe8a78a1239", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] _post_live_migration /usr/lib/python3.12/site-packages/nova/compute/manager.py:10059
Oct 09 16:41:53 compute-0 nova_compute[117331]: 2025-10-09 16:41:53.342 2 DEBUG nova.virt.libvirt.vif [None req-9c6f36d9-f824-403c-8831-426b13d31fa0 005258eefa5345409efe13f6a1de22ab c076c42271004e7aa86e84416e33f826 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,compute_id=1,config_drive='True',created_at=2025-10-09T16:40:45Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description=None,display_name='tempest-TestExecuteZoneMigrationStrategy-server-484584536',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testexecutezonemigrationstrategy-server-484584536',id=30,image_ref='b7d6e0af-25e4-4227-9dc6-43143898ceee',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-10-09T16:41:02Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=1,progress=0,project_id='fca25ca2f317463c909e30c8b2c188d1',ramdisk_id='',reservation_id='r-4loi0xvy',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,manager,admin,member',image_base_image_ref='b7d6e0af-25e4-4227-9dc6-43143898ceee',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestExecuteZoneMigrationStrategy-1067041161',owner_user_name='tempest-TestExecuteZoneMigrationStrategy-1067041161-project-admin'},tags=<?>,task_state='migrating',terminated_at=None,trusted_certs=<?>,updated_at=2025-10-09T16:41:31Z,user_data=None,user_id='05384d72bc894b20b1cc128bf76382b2',uuid=e49a1e89-0c38-4887-9bdf-64f07982ab22,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "3bf58985-c8d6-4c25-bed7-5fe8a78a1239", "address": "fa:16:3e:11:4c:7f", "network": {"id": "faa2f899-e3f1-48c6-ac37-859a6fb5c6cc", "bridge": "br-int", "label": "tempest-TestExecuteZoneMigrationStrategy-926478805-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "655616b8f80249ceab702bd3d943237d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap3bf58985-c8", "ovs_interfaceid": "3bf58985-c8d6-4c25-bed7-5fe8a78a1239", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.12/site-packages/nova/virt/libvirt/vif.py:839
Oct 09 16:41:53 compute-0 nova_compute[117331]: 2025-10-09 16:41:53.342 2 DEBUG nova.network.os_vif_util [None req-9c6f36d9-f824-403c-8831-426b13d31fa0 005258eefa5345409efe13f6a1de22ab c076c42271004e7aa86e84416e33f826 - - default default] Converting VIF {"id": "3bf58985-c8d6-4c25-bed7-5fe8a78a1239", "address": "fa:16:3e:11:4c:7f", "network": {"id": "faa2f899-e3f1-48c6-ac37-859a6fb5c6cc", "bridge": "br-int", "label": "tempest-TestExecuteZoneMigrationStrategy-926478805-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "655616b8f80249ceab702bd3d943237d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap3bf58985-c8", "ovs_interfaceid": "3bf58985-c8d6-4c25-bed7-5fe8a78a1239", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.12/site-packages/nova/network/os_vif_util.py:511
Oct 09 16:41:53 compute-0 nova_compute[117331]: 2025-10-09 16:41:53.343 2 DEBUG nova.network.os_vif_util [None req-9c6f36d9-f824-403c-8831-426b13d31fa0 005258eefa5345409efe13f6a1de22ab c076c42271004e7aa86e84416e33f826 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:11:4c:7f,bridge_name='br-int',has_traffic_filtering=True,id=3bf58985-c8d6-4c25-bed7-5fe8a78a1239,network=Network(faa2f899-e3f1-48c6-ac37-859a6fb5c6cc),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap3bf58985-c8') nova_to_osvif_vif /usr/lib/python3.12/site-packages/nova/network/os_vif_util.py:548
Oct 09 16:41:53 compute-0 nova_compute[117331]: 2025-10-09 16:41:53.344 2 DEBUG os_vif [None req-9c6f36d9-f824-403c-8831-426b13d31fa0 005258eefa5345409efe13f6a1de22ab c076c42271004e7aa86e84416e33f826 - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:11:4c:7f,bridge_name='br-int',has_traffic_filtering=True,id=3bf58985-c8d6-4c25-bed7-5fe8a78a1239,network=Network(faa2f899-e3f1-48c6-ac37-859a6fb5c6cc),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap3bf58985-c8') unplug /usr/lib/python3.12/site-packages/os_vif/__init__.py:109
Oct 09 16:41:53 compute-0 nova_compute[117331]: 2025-10-09 16:41:53.348 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 22 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 09 16:41:53 compute-0 nova_compute[117331]: 2025-10-09 16:41:53.349 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap3bf58985-c8, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.12/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 09 16:41:53 compute-0 nova_compute[117331]: 2025-10-09 16:41:53.394 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 09 16:41:53 compute-0 nova_compute[117331]: 2025-10-09 16:41:53.396 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 09 16:41:53 compute-0 nova_compute[117331]: 2025-10-09 16:41:53.398 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 22 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 09 16:41:53 compute-0 nova_compute[117331]: 2025-10-09 16:41:53.398 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbDestroyCommand(_result=None, table=QoS, record=2506c796-8703-4fd0-abc3-37633954b715) do_commit /usr/lib/python3.12/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 09 16:41:53 compute-0 nova_compute[117331]: 2025-10-09 16:41:53.399 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 09 16:41:53 compute-0 nova_compute[117331]: 2025-10-09 16:41:53.401 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:248
Oct 09 16:41:53 compute-0 nova_compute[117331]: 2025-10-09 16:41:53.403 2 INFO os_vif [None req-9c6f36d9-f824-403c-8831-426b13d31fa0 005258eefa5345409efe13f6a1de22ab c076c42271004e7aa86e84416e33f826 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:11:4c:7f,bridge_name='br-int',has_traffic_filtering=True,id=3bf58985-c8d6-4c25-bed7-5fe8a78a1239,network=Network(faa2f899-e3f1-48c6-ac37-859a6fb5c6cc),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap3bf58985-c8')
Oct 09 16:41:53 compute-0 nova_compute[117331]: 2025-10-09 16:41:53.404 2 DEBUG oslo_concurrency.lockutils [None req-9c6f36d9-f824-403c-8831-426b13d31fa0 005258eefa5345409efe13f6a1de22ab c076c42271004e7aa86e84416e33f826 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.free_pci_device_allocations_for_instance" inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:405
Oct 09 16:41:53 compute-0 nova_compute[117331]: 2025-10-09 16:41:53.404 2 DEBUG oslo_concurrency.lockutils [None req-9c6f36d9-f824-403c-8831-426b13d31fa0 005258eefa5345409efe13f6a1de22ab c076c42271004e7aa86e84416e33f826 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.free_pci_device_allocations_for_instance" :: waited 0.000s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:410
Oct 09 16:41:53 compute-0 nova_compute[117331]: 2025-10-09 16:41:53.404 2 DEBUG oslo_concurrency.lockutils [None req-9c6f36d9-f824-403c-8831-426b13d31fa0 005258eefa5345409efe13f6a1de22ab c076c42271004e7aa86e84416e33f826 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.free_pci_device_allocations_for_instance" :: held 0.000s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:424
Oct 09 16:41:53 compute-0 nova_compute[117331]: 2025-10-09 16:41:53.405 2 DEBUG nova.compute.manager [None req-9c6f36d9-f824-403c-8831-426b13d31fa0 005258eefa5345409efe13f6a1de22ab c076c42271004e7aa86e84416e33f826 - - default default] [instance: e49a1e89-0c38-4887-9bdf-64f07982ab22] Calling driver.cleanup from _post_live_migration _post_live_migration /usr/lib/python3.12/site-packages/nova/compute/manager.py:10082
Oct 09 16:41:53 compute-0 nova_compute[117331]: 2025-10-09 16:41:53.405 2 INFO nova.virt.libvirt.driver [None req-9c6f36d9-f824-403c-8831-426b13d31fa0 005258eefa5345409efe13f6a1de22ab c076c42271004e7aa86e84416e33f826 - - default default] [instance: e49a1e89-0c38-4887-9bdf-64f07982ab22] Deleting instance files /var/lib/nova/instances/e49a1e89-0c38-4887-9bdf-64f07982ab22_del
Oct 09 16:41:53 compute-0 nova_compute[117331]: 2025-10-09 16:41:53.406 2 INFO nova.virt.libvirt.driver [None req-9c6f36d9-f824-403c-8831-426b13d31fa0 005258eefa5345409efe13f6a1de22ab c076c42271004e7aa86e84416e33f826 - - default default] [instance: e49a1e89-0c38-4887-9bdf-64f07982ab22] Deletion of /var/lib/nova/instances/e49a1e89-0c38-4887-9bdf-64f07982ab22_del complete
Oct 09 16:41:55 compute-0 nova_compute[117331]: 2025-10-09 16:41:55.033 2 DEBUG nova.compute.manager [req-70167012-7aea-4cae-bfc8-509fd923dceb req-79efc67a-5a35-43e9-9326-473208f30110 ee15961ad2be4bada6c106ac4f69f422 c076c42271004e7aa86e84416e33f826 - - default default] [instance: e49a1e89-0c38-4887-9bdf-64f07982ab22] Received event network-vif-plugged-3bf58985-c8d6-4c25-bed7-5fe8a78a1239 external_instance_event /usr/lib/python3.12/site-packages/nova/compute/manager.py:11812
Oct 09 16:41:55 compute-0 nova_compute[117331]: 2025-10-09 16:41:55.034 2 DEBUG oslo_concurrency.lockutils [req-70167012-7aea-4cae-bfc8-509fd923dceb req-79efc67a-5a35-43e9-9326-473208f30110 ee15961ad2be4bada6c106ac4f69f422 c076c42271004e7aa86e84416e33f826 - - default default] Acquiring lock "e49a1e89-0c38-4887-9bdf-64f07982ab22-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:405
Oct 09 16:41:55 compute-0 nova_compute[117331]: 2025-10-09 16:41:55.035 2 DEBUG oslo_concurrency.lockutils [req-70167012-7aea-4cae-bfc8-509fd923dceb req-79efc67a-5a35-43e9-9326-473208f30110 ee15961ad2be4bada6c106ac4f69f422 c076c42271004e7aa86e84416e33f826 - - default default] Lock "e49a1e89-0c38-4887-9bdf-64f07982ab22-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:410
Oct 09 16:41:55 compute-0 nova_compute[117331]: 2025-10-09 16:41:55.035 2 DEBUG oslo_concurrency.lockutils [req-70167012-7aea-4cae-bfc8-509fd923dceb req-79efc67a-5a35-43e9-9326-473208f30110 ee15961ad2be4bada6c106ac4f69f422 c076c42271004e7aa86e84416e33f826 - - default default] Lock "e49a1e89-0c38-4887-9bdf-64f07982ab22-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:424
Oct 09 16:41:55 compute-0 nova_compute[117331]: 2025-10-09 16:41:55.035 2 DEBUG nova.compute.manager [req-70167012-7aea-4cae-bfc8-509fd923dceb req-79efc67a-5a35-43e9-9326-473208f30110 ee15961ad2be4bada6c106ac4f69f422 c076c42271004e7aa86e84416e33f826 - - default default] [instance: e49a1e89-0c38-4887-9bdf-64f07982ab22] No waiting events found dispatching network-vif-plugged-3bf58985-c8d6-4c25-bed7-5fe8a78a1239 pop_instance_event /usr/lib/python3.12/site-packages/nova/compute/manager.py:344
Oct 09 16:41:55 compute-0 nova_compute[117331]: 2025-10-09 16:41:55.036 2 WARNING nova.compute.manager [req-70167012-7aea-4cae-bfc8-509fd923dceb req-79efc67a-5a35-43e9-9326-473208f30110 ee15961ad2be4bada6c106ac4f69f422 c076c42271004e7aa86e84416e33f826 - - default default] [instance: e49a1e89-0c38-4887-9bdf-64f07982ab22] Received unexpected event network-vif-plugged-3bf58985-c8d6-4c25-bed7-5fe8a78a1239 for instance with vm_state active and task_state migrating.
Oct 09 16:41:55 compute-0 nova_compute[117331]: 2025-10-09 16:41:55.036 2 DEBUG nova.compute.manager [req-70167012-7aea-4cae-bfc8-509fd923dceb req-79efc67a-5a35-43e9-9326-473208f30110 ee15961ad2be4bada6c106ac4f69f422 c076c42271004e7aa86e84416e33f826 - - default default] [instance: e49a1e89-0c38-4887-9bdf-64f07982ab22] Received event network-vif-unplugged-3bf58985-c8d6-4c25-bed7-5fe8a78a1239 external_instance_event /usr/lib/python3.12/site-packages/nova/compute/manager.py:11812
Oct 09 16:41:55 compute-0 nova_compute[117331]: 2025-10-09 16:41:55.036 2 DEBUG oslo_concurrency.lockutils [req-70167012-7aea-4cae-bfc8-509fd923dceb req-79efc67a-5a35-43e9-9326-473208f30110 ee15961ad2be4bada6c106ac4f69f422 c076c42271004e7aa86e84416e33f826 - - default default] Acquiring lock "e49a1e89-0c38-4887-9bdf-64f07982ab22-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:405
Oct 09 16:41:55 compute-0 nova_compute[117331]: 2025-10-09 16:41:55.037 2 DEBUG oslo_concurrency.lockutils [req-70167012-7aea-4cae-bfc8-509fd923dceb req-79efc67a-5a35-43e9-9326-473208f30110 ee15961ad2be4bada6c106ac4f69f422 c076c42271004e7aa86e84416e33f826 - - default default] Lock "e49a1e89-0c38-4887-9bdf-64f07982ab22-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:410
Oct 09 16:41:55 compute-0 nova_compute[117331]: 2025-10-09 16:41:55.037 2 DEBUG oslo_concurrency.lockutils [req-70167012-7aea-4cae-bfc8-509fd923dceb req-79efc67a-5a35-43e9-9326-473208f30110 ee15961ad2be4bada6c106ac4f69f422 c076c42271004e7aa86e84416e33f826 - - default default] Lock "e49a1e89-0c38-4887-9bdf-64f07982ab22-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:424
Oct 09 16:41:55 compute-0 nova_compute[117331]: 2025-10-09 16:41:55.037 2 DEBUG nova.compute.manager [req-70167012-7aea-4cae-bfc8-509fd923dceb req-79efc67a-5a35-43e9-9326-473208f30110 ee15961ad2be4bada6c106ac4f69f422 c076c42271004e7aa86e84416e33f826 - - default default] [instance: e49a1e89-0c38-4887-9bdf-64f07982ab22] No waiting events found dispatching network-vif-unplugged-3bf58985-c8d6-4c25-bed7-5fe8a78a1239 pop_instance_event /usr/lib/python3.12/site-packages/nova/compute/manager.py:344
Oct 09 16:41:55 compute-0 nova_compute[117331]: 2025-10-09 16:41:55.038 2 DEBUG nova.compute.manager [req-70167012-7aea-4cae-bfc8-509fd923dceb req-79efc67a-5a35-43e9-9326-473208f30110 ee15961ad2be4bada6c106ac4f69f422 c076c42271004e7aa86e84416e33f826 - - default default] [instance: e49a1e89-0c38-4887-9bdf-64f07982ab22] Received event network-vif-unplugged-3bf58985-c8d6-4c25-bed7-5fe8a78a1239 for instance with task_state migrating. _process_instance_event /usr/lib/python3.12/site-packages/nova/compute/manager.py:11590
Oct 09 16:41:55 compute-0 nova_compute[117331]: 2025-10-09 16:41:55.038 2 DEBUG nova.compute.manager [req-70167012-7aea-4cae-bfc8-509fd923dceb req-79efc67a-5a35-43e9-9326-473208f30110 ee15961ad2be4bada6c106ac4f69f422 c076c42271004e7aa86e84416e33f826 - - default default] [instance: e49a1e89-0c38-4887-9bdf-64f07982ab22] Received event network-vif-plugged-3bf58985-c8d6-4c25-bed7-5fe8a78a1239 external_instance_event /usr/lib/python3.12/site-packages/nova/compute/manager.py:11812
Oct 09 16:41:55 compute-0 nova_compute[117331]: 2025-10-09 16:41:55.038 2 DEBUG oslo_concurrency.lockutils [req-70167012-7aea-4cae-bfc8-509fd923dceb req-79efc67a-5a35-43e9-9326-473208f30110 ee15961ad2be4bada6c106ac4f69f422 c076c42271004e7aa86e84416e33f826 - - default default] Acquiring lock "e49a1e89-0c38-4887-9bdf-64f07982ab22-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:405
Oct 09 16:41:55 compute-0 nova_compute[117331]: 2025-10-09 16:41:55.039 2 DEBUG oslo_concurrency.lockutils [req-70167012-7aea-4cae-bfc8-509fd923dceb req-79efc67a-5a35-43e9-9326-473208f30110 ee15961ad2be4bada6c106ac4f69f422 c076c42271004e7aa86e84416e33f826 - - default default] Lock "e49a1e89-0c38-4887-9bdf-64f07982ab22-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:410
Oct 09 16:41:55 compute-0 nova_compute[117331]: 2025-10-09 16:41:55.039 2 DEBUG oslo_concurrency.lockutils [req-70167012-7aea-4cae-bfc8-509fd923dceb req-79efc67a-5a35-43e9-9326-473208f30110 ee15961ad2be4bada6c106ac4f69f422 c076c42271004e7aa86e84416e33f826 - - default default] Lock "e49a1e89-0c38-4887-9bdf-64f07982ab22-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:424
Oct 09 16:41:55 compute-0 nova_compute[117331]: 2025-10-09 16:41:55.039 2 DEBUG nova.compute.manager [req-70167012-7aea-4cae-bfc8-509fd923dceb req-79efc67a-5a35-43e9-9326-473208f30110 ee15961ad2be4bada6c106ac4f69f422 c076c42271004e7aa86e84416e33f826 - - default default] [instance: e49a1e89-0c38-4887-9bdf-64f07982ab22] No waiting events found dispatching network-vif-plugged-3bf58985-c8d6-4c25-bed7-5fe8a78a1239 pop_instance_event /usr/lib/python3.12/site-packages/nova/compute/manager.py:344
Oct 09 16:41:55 compute-0 nova_compute[117331]: 2025-10-09 16:41:55.040 2 WARNING nova.compute.manager [req-70167012-7aea-4cae-bfc8-509fd923dceb req-79efc67a-5a35-43e9-9326-473208f30110 ee15961ad2be4bada6c106ac4f69f422 c076c42271004e7aa86e84416e33f826 - - default default] [instance: e49a1e89-0c38-4887-9bdf-64f07982ab22] Received unexpected event network-vif-plugged-3bf58985-c8d6-4c25-bed7-5fe8a78a1239 for instance with vm_state active and task_state migrating.
Oct 09 16:41:55 compute-0 podman[154468]: 2025-10-09 16:41:55.856107059 +0000 UTC m=+0.079442786 container health_status 64fa251112d01abb34ec1bb8e22019815ddcaaaa391ce5ec91517147ac5a9994 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, container_name=openstack_network_exporter, managed_by=edpm_ansible, vendor=Red Hat, Inc., io.openshift.expose-services=, vcs-type=git, config_id=edpm, io.openshift.tags=minimal rhel9, com.redhat.component=ubi9-minimal-container, build-date=2025-08-20T13:12:41, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', 
'/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, io.buildah.version=1.33.7, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., distribution-scope=public, version=9.6, maintainer=Red Hat, Inc., summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., url=https://catalog.redhat.com/en/search?searchType=containers, name=ubi9-minimal, release=1755695350)
Oct 09 16:41:55 compute-0 podman[154469]: 2025-10-09 16:41:55.867231992 +0000 UTC m=+0.096362072 container health_status d615e4f24ff62fbb14be4a63e6da568ec5b22309773c1c66103b6b4de64a8a2f (image=38.102.83.66:5001/podified-master-centos10/openstack-ovn-controller:watcher_latest, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_controller, org.label-schema.build-date=20251007, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, tcib_build_tag=watcher_latest, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': '38.102.83.66:5001/podified-master-centos10/openstack-ovn-controller:watcher_latest', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.4, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.schema-version=1.0, container_name=ovn_controller, org.label-schema.license=GPLv2)
Oct 09 16:41:57 compute-0 nova_compute[117331]: 2025-10-09 16:41:57.456 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 09 16:41:58 compute-0 nova_compute[117331]: 2025-10-09 16:41:58.400 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 09 16:41:59 compute-0 podman[127775]: time="2025-10-09T16:41:59Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Oct 09 16:41:59 compute-0 podman[127775]: @ - - [09/Oct/2025:16:41:59 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 19526 "" "Go-http-client/1.1"
Oct 09 16:41:59 compute-0 podman[127775]: @ - - [09/Oct/2025:16:41:59 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 3031 "" "Go-http-client/1.1"
Oct 09 16:42:00 compute-0 nova_compute[117331]: 2025-10-09 16:42:00.307 2 DEBUG oslo_service.periodic_task [None req-92a13bed-c081-4c7b-87f3-bafebefb79f2 - - - - - -] Running periodic task ComputeManager._run_image_cache_manager_pass run_periodic_tasks /usr/lib/python3.12/site-packages/oslo_service/periodic_task.py:210
Oct 09 16:42:00 compute-0 nova_compute[117331]: 2025-10-09 16:42:00.307 2 DEBUG oslo_concurrency.lockutils [None req-92a13bed-c081-4c7b-87f3-bafebefb79f2 - - - - - -] Acquiring lock "storage-registry-lock" by "nova.virt.storage_users.register_storage_use.<locals>.do_register_storage_use" inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:405
Oct 09 16:42:00 compute-0 nova_compute[117331]: 2025-10-09 16:42:00.308 2 DEBUG oslo_concurrency.lockutils [None req-92a13bed-c081-4c7b-87f3-bafebefb79f2 - - - - - -] Lock "storage-registry-lock" acquired by "nova.virt.storage_users.register_storage_use.<locals>.do_register_storage_use" :: waited 0.001s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:410
Oct 09 16:42:00 compute-0 nova_compute[117331]: 2025-10-09 16:42:00.308 2 DEBUG oslo_concurrency.lockutils [None req-92a13bed-c081-4c7b-87f3-bafebefb79f2 - - - - - -] Lock "storage-registry-lock" "released" by "nova.virt.storage_users.register_storage_use.<locals>.do_register_storage_use" :: held 0.000s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:424
Oct 09 16:42:00 compute-0 nova_compute[117331]: 2025-10-09 16:42:00.309 2 DEBUG oslo_concurrency.lockutils [None req-92a13bed-c081-4c7b-87f3-bafebefb79f2 - - - - - -] Acquiring lock "storage-registry-lock" by "nova.virt.storage_users.get_storage_users.<locals>.do_get_storage_users" inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:405
Oct 09 16:42:00 compute-0 nova_compute[117331]: 2025-10-09 16:42:00.309 2 DEBUG oslo_concurrency.lockutils [None req-92a13bed-c081-4c7b-87f3-bafebefb79f2 - - - - - -] Lock "storage-registry-lock" acquired by "nova.virt.storage_users.get_storage_users.<locals>.do_get_storage_users" :: waited 0.000s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:410
Oct 09 16:42:00 compute-0 nova_compute[117331]: 2025-10-09 16:42:00.309 2 DEBUG oslo_concurrency.lockutils [None req-92a13bed-c081-4c7b-87f3-bafebefb79f2 - - - - - -] Lock "storage-registry-lock" "released" by "nova.virt.storage_users.get_storage_users.<locals>.do_get_storage_users" :: held 0.000s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:424
Oct 09 16:42:01 compute-0 nova_compute[117331]: 2025-10-09 16:42:01.327 2 DEBUG nova.virt.libvirt.imagecache [None req-92a13bed-c081-4c7b-87f3-bafebefb79f2 - - - - - -] Verify base images _age_and_verify_cached_images /usr/lib/python3.12/site-packages/nova/virt/libvirt/imagecache.py:314
Oct 09 16:42:01 compute-0 nova_compute[117331]: 2025-10-09 16:42:01.328 2 DEBUG nova.virt.libvirt.imagecache [None req-92a13bed-c081-4c7b-87f3-bafebefb79f2 - - - - - -] Image id b7d6e0af-25e4-4227-9dc6-43143898ceee yields fingerprint cea3dacfdea0f3734ae526b812744e847bc2d356 _age_and_verify_cached_images /usr/lib/python3.12/site-packages/nova/virt/libvirt/imagecache.py:319
Oct 09 16:42:01 compute-0 nova_compute[117331]: 2025-10-09 16:42:01.328 2 INFO nova.virt.libvirt.imagecache [None req-92a13bed-c081-4c7b-87f3-bafebefb79f2 - - - - - -] image b7d6e0af-25e4-4227-9dc6-43143898ceee at (/var/lib/nova/instances/_base/cea3dacfdea0f3734ae526b812744e847bc2d356): checking
Oct 09 16:42:01 compute-0 nova_compute[117331]: 2025-10-09 16:42:01.328 2 DEBUG nova.virt.libvirt.imagecache [None req-92a13bed-c081-4c7b-87f3-bafebefb79f2 - - - - - -] image b7d6e0af-25e4-4227-9dc6-43143898ceee at (/var/lib/nova/instances/_base/cea3dacfdea0f3734ae526b812744e847bc2d356): image is in use _mark_in_use /usr/lib/python3.12/site-packages/nova/virt/libvirt/imagecache.py:279
Oct 09 16:42:01 compute-0 nova_compute[117331]: 2025-10-09 16:42:01.331 2 DEBUG nova.virt.libvirt.imagecache [None req-92a13bed-c081-4c7b-87f3-bafebefb79f2 - - - - - -] Image id  yields fingerprint da39a3ee5e6b4b0d3255bfef95601890afd80709 _age_and_verify_cached_images /usr/lib/python3.12/site-packages/nova/virt/libvirt/imagecache.py:319
Oct 09 16:42:01 compute-0 nova_compute[117331]: 2025-10-09 16:42:01.332 2 INFO nova.virt.libvirt.imagecache [None req-92a13bed-c081-4c7b-87f3-bafebefb79f2 - - - - - -] Active base files: /var/lib/nova/instances/_base/cea3dacfdea0f3734ae526b812744e847bc2d356
Oct 09 16:42:01 compute-0 nova_compute[117331]: 2025-10-09 16:42:01.333 2 DEBUG nova.virt.libvirt.imagecache [None req-92a13bed-c081-4c7b-87f3-bafebefb79f2 - - - - - -] Verification complete _age_and_verify_cached_images /usr/lib/python3.12/site-packages/nova/virt/libvirt/imagecache.py:350
Oct 09 16:42:01 compute-0 nova_compute[117331]: 2025-10-09 16:42:01.333 2 DEBUG nova.virt.libvirt.imagecache [None req-92a13bed-c081-4c7b-87f3-bafebefb79f2 - - - - - -] Verify swap images _age_and_verify_swap_images /usr/lib/python3.12/site-packages/nova/virt/libvirt/imagecache.py:299
Oct 09 16:42:01 compute-0 nova_compute[117331]: 2025-10-09 16:42:01.334 2 DEBUG nova.virt.libvirt.imagecache [None req-92a13bed-c081-4c7b-87f3-bafebefb79f2 - - - - - -] Verify ephemeral images _age_and_verify_ephemeral_images /usr/lib/python3.12/site-packages/nova/virt/libvirt/imagecache.py:284
Oct 09 16:42:01 compute-0 openstack_network_exporter[129925]: ERROR   16:42:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Oct 09 16:42:01 compute-0 openstack_network_exporter[129925]: ERROR   16:42:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Oct 09 16:42:01 compute-0 openstack_network_exporter[129925]: ERROR   16:42:01 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Oct 09 16:42:01 compute-0 openstack_network_exporter[129925]: ERROR   16:42:01 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Oct 09 16:42:01 compute-0 openstack_network_exporter[129925]: 
Oct 09 16:42:01 compute-0 openstack_network_exporter[129925]: ERROR   16:42:01 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Oct 09 16:42:01 compute-0 openstack_network_exporter[129925]: 
Oct 09 16:42:02 compute-0 sshd-session[153931]: pam_unix(sshd:auth): check pass; user unknown
Oct 09 16:42:02 compute-0 sshd-session[153931]: pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=124.60.67.43
Oct 09 16:42:02 compute-0 nova_compute[117331]: 2025-10-09 16:42:02.450 2 DEBUG oslo_concurrency.lockutils [None req-9c6f36d9-f824-403c-8831-426b13d31fa0 005258eefa5345409efe13f6a1de22ab c076c42271004e7aa86e84416e33f826 - - default default] Acquiring lock "e49a1e89-0c38-4887-9bdf-64f07982ab22-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:405
Oct 09 16:42:02 compute-0 nova_compute[117331]: 2025-10-09 16:42:02.451 2 DEBUG oslo_concurrency.lockutils [None req-9c6f36d9-f824-403c-8831-426b13d31fa0 005258eefa5345409efe13f6a1de22ab c076c42271004e7aa86e84416e33f826 - - default default] Lock "e49a1e89-0c38-4887-9bdf-64f07982ab22-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.001s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:410
Oct 09 16:42:02 compute-0 nova_compute[117331]: 2025-10-09 16:42:02.452 2 DEBUG oslo_concurrency.lockutils [None req-9c6f36d9-f824-403c-8831-426b13d31fa0 005258eefa5345409efe13f6a1de22ab c076c42271004e7aa86e84416e33f826 - - default default] Lock "e49a1e89-0c38-4887-9bdf-64f07982ab22-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:424
Oct 09 16:42:02 compute-0 nova_compute[117331]: 2025-10-09 16:42:02.459 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 09 16:42:03 compute-0 nova_compute[117331]: 2025-10-09 16:42:03.132 2 DEBUG oslo_concurrency.lockutils [None req-9c6f36d9-f824-403c-8831-426b13d31fa0 005258eefa5345409efe13f6a1de22ab c076c42271004e7aa86e84416e33f826 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:405
Oct 09 16:42:03 compute-0 nova_compute[117331]: 2025-10-09 16:42:03.133 2 DEBUG oslo_concurrency.lockutils [None req-9c6f36d9-f824-403c-8831-426b13d31fa0 005258eefa5345409efe13f6a1de22ab c076c42271004e7aa86e84416e33f826 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:410
Oct 09 16:42:03 compute-0 nova_compute[117331]: 2025-10-09 16:42:03.133 2 DEBUG oslo_concurrency.lockutils [None req-9c6f36d9-f824-403c-8831-426b13d31fa0 005258eefa5345409efe13f6a1de22ab c076c42271004e7aa86e84416e33f826 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:424
Oct 09 16:42:03 compute-0 nova_compute[117331]: 2025-10-09 16:42:03.134 2 DEBUG nova.compute.resource_tracker [None req-9c6f36d9-f824-403c-8831-426b13d31fa0 005258eefa5345409efe13f6a1de22ab c076c42271004e7aa86e84416e33f826 - - default default] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.12/site-packages/nova/compute/resource_tracker.py:937
Oct 09 16:42:03 compute-0 ovn_metadata_agent[28608]: 2025-10-09 16:42:03.148 28613 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=9954897f-aa83-45dd-8e84-289816676c2a, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '32'}),), if_exists=True) do_commit /usr/lib/python3.12/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 09 16:42:03 compute-0 nova_compute[117331]: 2025-10-09 16:42:03.301 2 WARNING nova.virt.libvirt.driver [None req-9c6f36d9-f824-403c-8831-426b13d31fa0 005258eefa5345409efe13f6a1de22ab c076c42271004e7aa86e84416e33f826 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Oct 09 16:42:03 compute-0 nova_compute[117331]: 2025-10-09 16:42:03.302 2 DEBUG oslo_concurrency.processutils [None req-9c6f36d9-f824-403c-8831-426b13d31fa0 005258eefa5345409efe13f6a1de22ab c076c42271004e7aa86e84416e33f826 - - default default] Running cmd (subprocess): env LANG=C uptime execute /usr/lib/python3.12/site-packages/oslo_concurrency/processutils.py:349
Oct 09 16:42:03 compute-0 nova_compute[117331]: 2025-10-09 16:42:03.331 2 DEBUG oslo_concurrency.processutils [None req-9c6f36d9-f824-403c-8831-426b13d31fa0 005258eefa5345409efe13f6a1de22ab c076c42271004e7aa86e84416e33f826 - - default default] CMD "env LANG=C uptime" returned: 0 in 0.029s execute /usr/lib/python3.12/site-packages/oslo_concurrency/processutils.py:372
Oct 09 16:42:03 compute-0 nova_compute[117331]: 2025-10-09 16:42:03.333 2 DEBUG nova.compute.resource_tracker [None req-9c6f36d9-f824-403c-8831-426b13d31fa0 005258eefa5345409efe13f6a1de22ab c076c42271004e7aa86e84416e33f826 - - default default] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=6144MB free_disk=73.24531936645508GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.12/site-packages/nova/compute/resource_tracker.py:1136
Oct 09 16:42:03 compute-0 nova_compute[117331]: 2025-10-09 16:42:03.333 2 DEBUG oslo_concurrency.lockutils [None req-9c6f36d9-f824-403c-8831-426b13d31fa0 005258eefa5345409efe13f6a1de22ab c076c42271004e7aa86e84416e33f826 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:405
Oct 09 16:42:03 compute-0 nova_compute[117331]: 2025-10-09 16:42:03.334 2 DEBUG oslo_concurrency.lockutils [None req-9c6f36d9-f824-403c-8831-426b13d31fa0 005258eefa5345409efe13f6a1de22ab c076c42271004e7aa86e84416e33f826 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.001s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:410
Oct 09 16:42:03 compute-0 nova_compute[117331]: 2025-10-09 16:42:03.403 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 09 16:42:04 compute-0 sshd-session[153931]: Failed password for invalid user support from 124.60.67.43 port 45482 ssh2
Oct 09 16:42:04 compute-0 nova_compute[117331]: 2025-10-09 16:42:04.357 2 DEBUG nova.compute.resource_tracker [None req-9c6f36d9-f824-403c-8831-426b13d31fa0 005258eefa5345409efe13f6a1de22ab c076c42271004e7aa86e84416e33f826 - - default default] Migration for instance e49a1e89-0c38-4887-9bdf-64f07982ab22 refers to another host's instance! _pair_instances_to_migrations /usr/lib/python3.12/site-packages/nova/compute/resource_tracker.py:979
Oct 09 16:42:04 compute-0 nova_compute[117331]: 2025-10-09 16:42:04.865 2 DEBUG nova.compute.resource_tracker [None req-9c6f36d9-f824-403c-8831-426b13d31fa0 005258eefa5345409efe13f6a1de22ab c076c42271004e7aa86e84416e33f826 - - default default] [instance: e49a1e89-0c38-4887-9bdf-64f07982ab22] Skipping migration as instance is neither resizing nor live-migrating. _update_usage_from_migrations /usr/lib/python3.12/site-packages/nova/compute/resource_tracker.py:1596
Oct 09 16:42:04 compute-0 nova_compute[117331]: 2025-10-09 16:42:04.896 2 DEBUG nova.compute.resource_tracker [None req-9c6f36d9-f824-403c-8831-426b13d31fa0 005258eefa5345409efe13f6a1de22ab c076c42271004e7aa86e84416e33f826 - - default default] Migration 789c03a1-9329-4a25-b1ed-48eab8eb714b is active on this compute host and has allocations in placement: {'resources': {'VCPU': 1, 'MEMORY_MB': 128, 'DISK_GB': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.12/site-packages/nova/compute/resource_tracker.py:1745
Oct 09 16:42:04 compute-0 nova_compute[117331]: 2025-10-09 16:42:04.897 2 DEBUG nova.compute.resource_tracker [None req-9c6f36d9-f824-403c-8831-426b13d31fa0 005258eefa5345409efe13f6a1de22ab c076c42271004e7aa86e84416e33f826 - - default default] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.12/site-packages/nova/compute/resource_tracker.py:1159
Oct 09 16:42:04 compute-0 nova_compute[117331]: 2025-10-09 16:42:04.897 2 DEBUG nova.compute.resource_tracker [None req-9c6f36d9-f824-403c-8831-426b13d31fa0 005258eefa5345409efe13f6a1de22ab c076c42271004e7aa86e84416e33f826 - - default default] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=79GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] stats={'failed_builds': '0', 'uptime': ' 16:42:03 up 51 min,  0 user,  load average: 0.51, 0.48, 0.46\n'} _report_final_resource_view /usr/lib/python3.12/site-packages/nova/compute/resource_tracker.py:1168
Oct 09 16:42:04 compute-0 nova_compute[117331]: 2025-10-09 16:42:04.932 2 DEBUG nova.compute.provider_tree [None req-9c6f36d9-f824-403c-8831-426b13d31fa0 005258eefa5345409efe13f6a1de22ab c076c42271004e7aa86e84416e33f826 - - default default] Inventory has not changed in ProviderTree for provider: 593051b8-2000-437f-a915-2616fc8b1671 update_inventory /usr/lib/python3.12/site-packages/nova/compute/provider_tree.py:180
Oct 09 16:42:05 compute-0 sshd-session[153931]: Connection closed by invalid user support 124.60.67.43 port 45482 [preauth]
Oct 09 16:42:05 compute-0 nova_compute[117331]: 2025-10-09 16:42:05.441 2 DEBUG nova.scheduler.client.report [None req-9c6f36d9-f824-403c-8831-426b13d31fa0 005258eefa5345409efe13f6a1de22ab c076c42271004e7aa86e84416e33f826 - - default default] Inventory has not changed for provider 593051b8-2000-437f-a915-2616fc8b1671 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 79, 'reserved': 1, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.12/site-packages/nova/scheduler/client/report.py:958
Oct 09 16:42:05 compute-0 nova_compute[117331]: 2025-10-09 16:42:05.956 2 DEBUG nova.compute.resource_tracker [None req-9c6f36d9-f824-403c-8831-426b13d31fa0 005258eefa5345409efe13f6a1de22ab c076c42271004e7aa86e84416e33f826 - - default default] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.12/site-packages/nova/compute/resource_tracker.py:1097
Oct 09 16:42:05 compute-0 nova_compute[117331]: 2025-10-09 16:42:05.957 2 DEBUG oslo_concurrency.lockutils [None req-9c6f36d9-f824-403c-8831-426b13d31fa0 005258eefa5345409efe13f6a1de22ab c076c42271004e7aa86e84416e33f826 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 2.623s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:424
Oct 09 16:42:05 compute-0 nova_compute[117331]: 2025-10-09 16:42:05.973 2 INFO nova.compute.manager [None req-9c6f36d9-f824-403c-8831-426b13d31fa0 005258eefa5345409efe13f6a1de22ab c076c42271004e7aa86e84416e33f826 - - default default] [instance: e49a1e89-0c38-4887-9bdf-64f07982ab22] Migrating instance to compute-1.ctlplane.example.com finished successfully.
Oct 09 16:42:06 compute-0 podman[154520]: 2025-10-09 16:42:06.840114915 +0000 UTC m=+0.071044769 container health_status 270830b5167cafbab735d8118980f26987e673cd0fa464dd2202394830bcca66 (image=38.102.83.66:5001/podified-master-centos10/openstack-multipathd:watcher_latest, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.schema-version=1.0, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': '38.102.83.66:5001/podified-master-centos10/openstack-multipathd:watcher_latest', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd, config_id=multipathd, org.label-schema.build-date=20251007, org.label-schema.vendor=CentOS, tcib_build_tag=watcher_latest, tcib_managed=true, io.buildah.version=1.41.4, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2)
Oct 09 16:42:07 compute-0 nova_compute[117331]: 2025-10-09 16:42:07.038 2 INFO nova.scheduler.client.report [None req-9c6f36d9-f824-403c-8831-426b13d31fa0 005258eefa5345409efe13f6a1de22ab c076c42271004e7aa86e84416e33f826 - - default default] Deleted allocation for migration 789c03a1-9329-4a25-b1ed-48eab8eb714b
Oct 09 16:42:07 compute-0 nova_compute[117331]: 2025-10-09 16:42:07.039 2 DEBUG nova.virt.libvirt.driver [None req-9c6f36d9-f824-403c-8831-426b13d31fa0 005258eefa5345409efe13f6a1de22ab c076c42271004e7aa86e84416e33f826 - - default default] [instance: e49a1e89-0c38-4887-9bdf-64f07982ab22] Live migration monitoring is all done _live_migration /usr/lib/python3.12/site-packages/nova/virt/libvirt/driver.py:11566
Oct 09 16:42:07 compute-0 nova_compute[117331]: 2025-10-09 16:42:07.463 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 09 16:42:08 compute-0 nova_compute[117331]: 2025-10-09 16:42:08.407 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 09 16:42:10 compute-0 podman[154541]: 2025-10-09 16:42:10.841573938 +0000 UTC m=+0.071485972 container health_status 86aa93e887899e54d383b0fc17fb3c5637680d1d8e146a130a557c837f0260fe (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']})
Oct 09 16:42:12 compute-0 nova_compute[117331]: 2025-10-09 16:42:12.467 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 09 16:42:13 compute-0 nova_compute[117331]: 2025-10-09 16:42:13.409 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 09 16:42:16 compute-0 podman[154565]: 2025-10-09 16:42:16.867202243 +0000 UTC m=+0.089098722 container health_status 0e966c7a283689daf23c5db5017f23f685ee836319240188a9f70ddd246563c5 (image=38.102.83.66:5001/podified-master-centos10/openstack-neutron-metadata-agent-ovn:watcher_latest, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': '38.102.83.66:5001/podified-master-centos10/openstack-neutron-metadata-agent-ovn:watcher_latest', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251007, tcib_build_tag=watcher_latest, tcib_managed=true, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, io.buildah.version=1.41.4, maintainer=OpenStack Kubernetes Operator team)
Oct 09 16:42:16 compute-0 podman[154566]: 2025-10-09 16:42:16.901802253 +0000 UTC m=+0.119237861 container health_status 8227728d23f374cb9528be6ddf01dff38d8c792912a17b0d137932a18390b820 (image=38.102.83.66:5001/podified-master-centos10/openstack-iscsid:watcher_latest, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251007, org.label-schema.schema-version=1.0, tcib_build_tag=watcher_latest, io.buildah.version=1.41.4, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, container_name=iscsid, managed_by=edpm_ansible, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': '38.102.83.66:5001/podified-master-centos10/openstack-iscsid:watcher_latest', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, config_id=iscsid)
Oct 09 16:42:17 compute-0 nova_compute[117331]: 2025-10-09 16:42:17.469 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 09 16:42:18 compute-0 nova_compute[117331]: 2025-10-09 16:42:18.414 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 09 16:42:22 compute-0 nova_compute[117331]: 2025-10-09 16:42:22.470 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 09 16:42:23 compute-0 sshd-session[154518]: Invalid user ubnt from 124.60.67.43 port 53432
Oct 09 16:42:23 compute-0 sshd-session[154518]: pam_unix(sshd:auth): check pass; user unknown
Oct 09 16:42:23 compute-0 sshd-session[154518]: pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=124.60.67.43
Oct 09 16:42:23 compute-0 nova_compute[117331]: 2025-10-09 16:42:23.418 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 09 16:42:24 compute-0 sshd-session[154518]: Failed password for invalid user ubnt from 124.60.67.43 port 53432 ssh2
Oct 09 16:42:25 compute-0 sshd-session[154518]: Connection closed by invalid user ubnt 124.60.67.43 port 53432 [preauth]
Oct 09 16:42:26 compute-0 podman[154609]: 2025-10-09 16:42:26.86681554 +0000 UTC m=+0.086447518 container health_status 64fa251112d01abb34ec1bb8e22019815ddcaaaa391ce5ec91517147ac5a9994 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.tags=minimal rhel9, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., vendor=Red Hat, Inc., com.redhat.component=ubi9-minimal-container, name=ubi9-minimal, release=1755695350, vcs-type=git, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., build-date=2025-08-20T13:12:41, io.buildah.version=1.33.7, managed_by=edpm_ansible, architecture=x86_64, config_id=edpm, container_name=openstack_network_exporter, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, version=9.6, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, io.openshift.expose-services=, maintainer=Red Hat, Inc., url=https://catalog.redhat.com/en/search?searchType=containers, distribution-scope=public)
Oct 09 16:42:26 compute-0 podman[154610]: 2025-10-09 16:42:26.898953142 +0000 UTC m=+0.111615838 container health_status d615e4f24ff62fbb14be4a63e6da568ec5b22309773c1c66103b6b4de64a8a2f (image=38.102.83.66:5001/podified-master-centos10/openstack-ovn-controller:watcher_latest, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251007, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=watcher_latest, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': '38.102.83.66:5001/podified-master-centos10/openstack-ovn-controller:watcher_latest', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, container_name=ovn_controller, org.label-schema.name=CentOS Stream 10 Base Image, config_id=ovn_controller, io.buildah.version=1.41.4)
Oct 09 16:42:27 compute-0 nova_compute[117331]: 2025-10-09 16:42:27.473 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 09 16:42:28 compute-0 nova_compute[117331]: 2025-10-09 16:42:28.420 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 09 16:42:28 compute-0 sshd-session[154607]: Invalid user user from 124.60.67.43 port 43600
Oct 09 16:42:29 compute-0 podman[127775]: time="2025-10-09T16:42:29Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Oct 09 16:42:29 compute-0 podman[127775]: @ - - [09/Oct/2025:16:42:29 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 19526 "" "Go-http-client/1.1"
Oct 09 16:42:29 compute-0 podman[127775]: @ - - [09/Oct/2025:16:42:29 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 3034 "" "Go-http-client/1.1"
Oct 09 16:42:31 compute-0 openstack_network_exporter[129925]: ERROR   16:42:31 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Oct 09 16:42:31 compute-0 openstack_network_exporter[129925]: ERROR   16:42:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Oct 09 16:42:31 compute-0 openstack_network_exporter[129925]: ERROR   16:42:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Oct 09 16:42:31 compute-0 openstack_network_exporter[129925]: ERROR   16:42:31 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Oct 09 16:42:31 compute-0 openstack_network_exporter[129925]: 
Oct 09 16:42:31 compute-0 openstack_network_exporter[129925]: ERROR   16:42:31 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Oct 09 16:42:31 compute-0 openstack_network_exporter[129925]: 
Oct 09 16:42:31 compute-0 sshd-session[154607]: pam_unix(sshd:auth): check pass; user unknown
Oct 09 16:42:31 compute-0 sshd-session[154607]: pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=124.60.67.43
Oct 09 16:42:32 compute-0 nova_compute[117331]: 2025-10-09 16:42:32.475 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 09 16:42:33 compute-0 nova_compute[117331]: 2025-10-09 16:42:33.422 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 09 16:42:33 compute-0 sshd-session[154607]: Failed password for invalid user user from 124.60.67.43 port 43600 ssh2
Oct 09 16:42:35 compute-0 nova_compute[117331]: 2025-10-09 16:42:35.047 2 DEBUG oslo_concurrency.lockutils [None req-759fa734-0ef0-49db-bf6d-f0afc3bde2c9 05384d72bc894b20b1cc128bf76382b2 fca25ca2f317463c909e30c8b2c188d1 - - default default] Acquiring lock "31e9639e-ea7e-41f5-8bd3-2f0344062f99" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:405
Oct 09 16:42:35 compute-0 nova_compute[117331]: 2025-10-09 16:42:35.047 2 DEBUG oslo_concurrency.lockutils [None req-759fa734-0ef0-49db-bf6d-f0afc3bde2c9 05384d72bc894b20b1cc128bf76382b2 fca25ca2f317463c909e30c8b2c188d1 - - default default] Lock "31e9639e-ea7e-41f5-8bd3-2f0344062f99" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:410
Oct 09 16:42:35 compute-0 nova_compute[117331]: 2025-10-09 16:42:35.334 2 DEBUG oslo_service.periodic_task [None req-92a13bed-c081-4c7b-87f3-bafebefb79f2 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.12/site-packages/oslo_service/periodic_task.py:210
Oct 09 16:42:35 compute-0 ovn_metadata_agent[28608]: 2025-10-09 16:42:35.346 28613 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:405
Oct 09 16:42:35 compute-0 ovn_metadata_agent[28608]: 2025-10-09 16:42:35.346 28613 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:410
Oct 09 16:42:35 compute-0 ovn_metadata_agent[28608]: 2025-10-09 16:42:35.346 28613 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:424
Oct 09 16:42:35 compute-0 nova_compute[117331]: 2025-10-09 16:42:35.554 2 DEBUG nova.compute.manager [None req-759fa734-0ef0-49db-bf6d-f0afc3bde2c9 05384d72bc894b20b1cc128bf76382b2 fca25ca2f317463c909e30c8b2c188d1 - - default default] [instance: 31e9639e-ea7e-41f5-8bd3-2f0344062f99] Starting instance... _do_build_and_run_instance /usr/lib/python3.12/site-packages/nova/compute/manager.py:2472
Oct 09 16:42:35 compute-0 sshd-session[154607]: Connection closed by invalid user user 124.60.67.43 port 43600 [preauth]
Oct 09 16:42:36 compute-0 nova_compute[117331]: 2025-10-09 16:42:36.107 2 DEBUG oslo_concurrency.lockutils [None req-759fa734-0ef0-49db-bf6d-f0afc3bde2c9 05384d72bc894b20b1cc128bf76382b2 fca25ca2f317463c909e30c8b2c188d1 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:405
Oct 09 16:42:36 compute-0 nova_compute[117331]: 2025-10-09 16:42:36.107 2 DEBUG oslo_concurrency.lockutils [None req-759fa734-0ef0-49db-bf6d-f0afc3bde2c9 05384d72bc894b20b1cc128bf76382b2 fca25ca2f317463c909e30c8b2c188d1 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:410
Oct 09 16:42:36 compute-0 nova_compute[117331]: 2025-10-09 16:42:36.116 2 DEBUG nova.virt.hardware [None req-759fa734-0ef0-49db-bf6d-f0afc3bde2c9 05384d72bc894b20b1cc128bf76382b2 fca25ca2f317463c909e30c8b2c188d1 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.12/site-packages/nova/virt/hardware.py:2528
Oct 09 16:42:36 compute-0 nova_compute[117331]: 2025-10-09 16:42:36.116 2 INFO nova.compute.claims [None req-759fa734-0ef0-49db-bf6d-f0afc3bde2c9 05384d72bc894b20b1cc128bf76382b2 fca25ca2f317463c909e30c8b2c188d1 - - default default] [instance: 31e9639e-ea7e-41f5-8bd3-2f0344062f99] Claim successful on node compute-0.ctlplane.example.com
Oct 09 16:42:36 compute-0 nova_compute[117331]: 2025-10-09 16:42:36.306 2 DEBUG oslo_service.periodic_task [None req-92a13bed-c081-4c7b-87f3-bafebefb79f2 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.12/site-packages/oslo_service/periodic_task.py:210
Oct 09 16:42:37 compute-0 nova_compute[117331]: 2025-10-09 16:42:37.179 2 DEBUG nova.compute.provider_tree [None req-759fa734-0ef0-49db-bf6d-f0afc3bde2c9 05384d72bc894b20b1cc128bf76382b2 fca25ca2f317463c909e30c8b2c188d1 - - default default] Inventory has not changed in ProviderTree for provider: 593051b8-2000-437f-a915-2616fc8b1671 update_inventory /usr/lib/python3.12/site-packages/nova/compute/provider_tree.py:180
Oct 09 16:42:37 compute-0 nova_compute[117331]: 2025-10-09 16:42:37.477 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 09 16:42:37 compute-0 nova_compute[117331]: 2025-10-09 16:42:37.688 2 DEBUG nova.scheduler.client.report [None req-759fa734-0ef0-49db-bf6d-f0afc3bde2c9 05384d72bc894b20b1cc128bf76382b2 fca25ca2f317463c909e30c8b2c188d1 - - default default] Inventory has not changed for provider 593051b8-2000-437f-a915-2616fc8b1671 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 79, 'reserved': 1, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.12/site-packages/nova/scheduler/client/report.py:958
Oct 09 16:42:37 compute-0 podman[154659]: 2025-10-09 16:42:37.867677392 +0000 UTC m=+0.087234273 container health_status 270830b5167cafbab735d8118980f26987e673cd0fa464dd2202394830bcca66 (image=38.102.83.66:5001/podified-master-centos10/openstack-multipathd:watcher_latest, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, container_name=multipathd, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=watcher_latest, tcib_managed=true, io.buildah.version=1.41.4, managed_by=edpm_ansible, org.label-schema.build-date=20251007, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': '38.102.83.66:5001/podified-master-centos10/openstack-multipathd:watcher_latest', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, maintainer=OpenStack Kubernetes Operator team)
Oct 09 16:42:38 compute-0 nova_compute[117331]: 2025-10-09 16:42:38.210 2 DEBUG oslo_concurrency.lockutils [None req-759fa734-0ef0-49db-bf6d-f0afc3bde2c9 05384d72bc894b20b1cc128bf76382b2 fca25ca2f317463c909e30c8b2c188d1 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 2.102s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:424
Oct 09 16:42:38 compute-0 nova_compute[117331]: 2025-10-09 16:42:38.211 2 DEBUG nova.compute.manager [None req-759fa734-0ef0-49db-bf6d-f0afc3bde2c9 05384d72bc894b20b1cc128bf76382b2 fca25ca2f317463c909e30c8b2c188d1 - - default default] [instance: 31e9639e-ea7e-41f5-8bd3-2f0344062f99] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.12/site-packages/nova/compute/manager.py:2869
Oct 09 16:42:38 compute-0 nova_compute[117331]: 2025-10-09 16:42:38.451 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 09 16:42:38 compute-0 nova_compute[117331]: 2025-10-09 16:42:38.719 2 DEBUG nova.compute.manager [None req-759fa734-0ef0-49db-bf6d-f0afc3bde2c9 05384d72bc894b20b1cc128bf76382b2 fca25ca2f317463c909e30c8b2c188d1 - - default default] [instance: 31e9639e-ea7e-41f5-8bd3-2f0344062f99] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.12/site-packages/nova/compute/manager.py:2016
Oct 09 16:42:38 compute-0 nova_compute[117331]: 2025-10-09 16:42:38.720 2 DEBUG nova.network.neutron [None req-759fa734-0ef0-49db-bf6d-f0afc3bde2c9 05384d72bc894b20b1cc128bf76382b2 fca25ca2f317463c909e30c8b2c188d1 - - default default] [instance: 31e9639e-ea7e-41f5-8bd3-2f0344062f99] allocate_for_instance() allocate_for_instance /usr/lib/python3.12/site-packages/nova/network/neutron.py:1208
Oct 09 16:42:38 compute-0 nova_compute[117331]: 2025-10-09 16:42:38.720 2 WARNING neutronclient.v2_0.client [None req-759fa734-0ef0-49db-bf6d-f0afc3bde2c9 05384d72bc894b20b1cc128bf76382b2 fca25ca2f317463c909e30c8b2c188d1 - - default default] The python binding code in neutronclient is deprecated in favor of OpenstackSDK, please use that as this will be removed in a future release.
Oct 09 16:42:38 compute-0 nova_compute[117331]: 2025-10-09 16:42:38.720 2 WARNING neutronclient.v2_0.client [None req-759fa734-0ef0-49db-bf6d-f0afc3bde2c9 05384d72bc894b20b1cc128bf76382b2 fca25ca2f317463c909e30c8b2c188d1 - - default default] The python binding code in neutronclient is deprecated in favor of OpenstackSDK, please use that as this will be removed in a future release.
Oct 09 16:42:39 compute-0 nova_compute[117331]: 2025-10-09 16:42:39.227 2 INFO nova.virt.libvirt.driver [None req-759fa734-0ef0-49db-bf6d-f0afc3bde2c9 05384d72bc894b20b1cc128bf76382b2 fca25ca2f317463c909e30c8b2c188d1 - - default default] [instance: 31e9639e-ea7e-41f5-8bd3-2f0344062f99] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Oct 09 16:42:39 compute-0 nova_compute[117331]: 2025-10-09 16:42:39.303 2 DEBUG oslo_service.periodic_task [None req-92a13bed-c081-4c7b-87f3-bafebefb79f2 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.12/site-packages/oslo_service/periodic_task.py:210
Oct 09 16:42:39 compute-0 nova_compute[117331]: 2025-10-09 16:42:39.453 2 DEBUG nova.network.neutron [None req-759fa734-0ef0-49db-bf6d-f0afc3bde2c9 05384d72bc894b20b1cc128bf76382b2 fca25ca2f317463c909e30c8b2c188d1 - - default default] [instance: 31e9639e-ea7e-41f5-8bd3-2f0344062f99] Successfully created port: 75dea49d-ac7b-45ed-8521-44a081d00648 _create_port_minimal /usr/lib/python3.12/site-packages/nova/network/neutron.py:550
Oct 09 16:42:39 compute-0 nova_compute[117331]: 2025-10-09 16:42:39.736 2 DEBUG nova.compute.manager [None req-759fa734-0ef0-49db-bf6d-f0afc3bde2c9 05384d72bc894b20b1cc128bf76382b2 fca25ca2f317463c909e30c8b2c188d1 - - default default] [instance: 31e9639e-ea7e-41f5-8bd3-2f0344062f99] Start building block device mappings for instance. _build_resources /usr/lib/python3.12/site-packages/nova/compute/manager.py:2904
Oct 09 16:42:40 compute-0 nova_compute[117331]: 2025-10-09 16:42:40.305 2 DEBUG oslo_service.periodic_task [None req-92a13bed-c081-4c7b-87f3-bafebefb79f2 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.12/site-packages/oslo_service/periodic_task.py:210
Oct 09 16:42:40 compute-0 nova_compute[117331]: 2025-10-09 16:42:40.761 2 DEBUG nova.compute.manager [None req-759fa734-0ef0-49db-bf6d-f0afc3bde2c9 05384d72bc894b20b1cc128bf76382b2 fca25ca2f317463c909e30c8b2c188d1 - - default default] [instance: 31e9639e-ea7e-41f5-8bd3-2f0344062f99] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.12/site-packages/nova/compute/manager.py:2678
Oct 09 16:42:40 compute-0 nova_compute[117331]: 2025-10-09 16:42:40.763 2 DEBUG nova.virt.libvirt.driver [None req-759fa734-0ef0-49db-bf6d-f0afc3bde2c9 05384d72bc894b20b1cc128bf76382b2 fca25ca2f317463c909e30c8b2c188d1 - - default default] [instance: 31e9639e-ea7e-41f5-8bd3-2f0344062f99] Creating instance directory _create_image /usr/lib/python3.12/site-packages/nova/virt/libvirt/driver.py:5138
Oct 09 16:42:40 compute-0 nova_compute[117331]: 2025-10-09 16:42:40.763 2 INFO nova.virt.libvirt.driver [None req-759fa734-0ef0-49db-bf6d-f0afc3bde2c9 05384d72bc894b20b1cc128bf76382b2 fca25ca2f317463c909e30c8b2c188d1 - - default default] [instance: 31e9639e-ea7e-41f5-8bd3-2f0344062f99] Creating image(s)
Oct 09 16:42:40 compute-0 nova_compute[117331]: 2025-10-09 16:42:40.764 2 DEBUG oslo_concurrency.lockutils [None req-759fa734-0ef0-49db-bf6d-f0afc3bde2c9 05384d72bc894b20b1cc128bf76382b2 fca25ca2f317463c909e30c8b2c188d1 - - default default] Acquiring lock "/var/lib/nova/instances/31e9639e-ea7e-41f5-8bd3-2f0344062f99/disk.info" by "nova.virt.libvirt.imagebackend.Image.resolve_driver_format.<locals>.write_to_disk_info_file" inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:405
Oct 09 16:42:40 compute-0 nova_compute[117331]: 2025-10-09 16:42:40.764 2 DEBUG oslo_concurrency.lockutils [None req-759fa734-0ef0-49db-bf6d-f0afc3bde2c9 05384d72bc894b20b1cc128bf76382b2 fca25ca2f317463c909e30c8b2c188d1 - - default default] Lock "/var/lib/nova/instances/31e9639e-ea7e-41f5-8bd3-2f0344062f99/disk.info" acquired by "nova.virt.libvirt.imagebackend.Image.resolve_driver_format.<locals>.write_to_disk_info_file" :: waited 0.000s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:410
Oct 09 16:42:40 compute-0 nova_compute[117331]: 2025-10-09 16:42:40.765 2 DEBUG oslo_concurrency.lockutils [None req-759fa734-0ef0-49db-bf6d-f0afc3bde2c9 05384d72bc894b20b1cc128bf76382b2 fca25ca2f317463c909e30c8b2c188d1 - - default default] Lock "/var/lib/nova/instances/31e9639e-ea7e-41f5-8bd3-2f0344062f99/disk.info" "released" by "nova.virt.libvirt.imagebackend.Image.resolve_driver_format.<locals>.write_to_disk_info_file" :: held 0.001s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:424
Oct 09 16:42:40 compute-0 nova_compute[117331]: 2025-10-09 16:42:40.765 2 DEBUG oslo_utils.imageutils.format_inspector [None req-759fa734-0ef0-49db-bf6d-f0afc3bde2c9 05384d72bc894b20b1cc128bf76382b2 fca25ca2f317463c909e30c8b2c188d1 - - default default] Format inspector for vmdk does not match, excluding from consideration (Signature KDMV not found: b'\xebH\x90\x00') _process_chunk /usr/lib/python3.12/site-packages/oslo_utils/imageutils/format_inspector.py:1365
Oct 09 16:42:40 compute-0 nova_compute[117331]: 2025-10-09 16:42:40.770 2 DEBUG oslo_utils.imageutils.format_inspector [None req-759fa734-0ef0-49db-bf6d-f0afc3bde2c9 05384d72bc894b20b1cc128bf76382b2 fca25ca2f317463c909e30c8b2c188d1 - - default default] Format inspector for vhdx does not match, excluding from consideration (Region signature not found at 30000) _process_chunk /usr/lib/python3.12/site-packages/oslo_utils/imageutils/format_inspector.py:1365
Oct 09 16:42:40 compute-0 nova_compute[117331]: 2025-10-09 16:42:40.773 2 DEBUG oslo_concurrency.processutils [None req-759fa734-0ef0-49db-bf6d-f0afc3bde2c9 05384d72bc894b20b1cc128bf76382b2 fca25ca2f317463c909e30c8b2c188d1 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/cea3dacfdea0f3734ae526b812744e847bc2d356 --force-share --output=json execute /usr/lib/python3.12/site-packages/oslo_concurrency/processutils.py:349
Oct 09 16:42:40 compute-0 nova_compute[117331]: 2025-10-09 16:42:40.833 2 DEBUG oslo_concurrency.processutils [None req-759fa734-0ef0-49db-bf6d-f0afc3bde2c9 05384d72bc894b20b1cc128bf76382b2 fca25ca2f317463c909e30c8b2c188d1 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/cea3dacfdea0f3734ae526b812744e847bc2d356 --force-share --output=json" returned: 0 in 0.060s execute /usr/lib/python3.12/site-packages/oslo_concurrency/processutils.py:372
Oct 09 16:42:40 compute-0 nova_compute[117331]: 2025-10-09 16:42:40.835 2 DEBUG oslo_concurrency.lockutils [None req-759fa734-0ef0-49db-bf6d-f0afc3bde2c9 05384d72bc894b20b1cc128bf76382b2 fca25ca2f317463c909e30c8b2c188d1 - - default default] Acquiring lock "cea3dacfdea0f3734ae526b812744e847bc2d356" by "nova.virt.libvirt.imagebackend.Qcow2.create_image.<locals>.create_qcow2_image" inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:405
Oct 09 16:42:40 compute-0 nova_compute[117331]: 2025-10-09 16:42:40.836 2 DEBUG oslo_concurrency.lockutils [None req-759fa734-0ef0-49db-bf6d-f0afc3bde2c9 05384d72bc894b20b1cc128bf76382b2 fca25ca2f317463c909e30c8b2c188d1 - - default default] Lock "cea3dacfdea0f3734ae526b812744e847bc2d356" acquired by "nova.virt.libvirt.imagebackend.Qcow2.create_image.<locals>.create_qcow2_image" :: waited 0.001s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:410
Oct 09 16:42:40 compute-0 nova_compute[117331]: 2025-10-09 16:42:40.837 2 DEBUG oslo_utils.imageutils.format_inspector [None req-759fa734-0ef0-49db-bf6d-f0afc3bde2c9 05384d72bc894b20b1cc128bf76382b2 fca25ca2f317463c909e30c8b2c188d1 - - default default] Format inspector for vmdk does not match, excluding from consideration (Signature KDMV not found: b'\xebH\x90\x00') _process_chunk /usr/lib/python3.12/site-packages/oslo_utils/imageutils/format_inspector.py:1365
Oct 09 16:42:40 compute-0 nova_compute[117331]: 2025-10-09 16:42:40.843 2 DEBUG oslo_utils.imageutils.format_inspector [None req-759fa734-0ef0-49db-bf6d-f0afc3bde2c9 05384d72bc894b20b1cc128bf76382b2 fca25ca2f317463c909e30c8b2c188d1 - - default default] Format inspector for vhdx does not match, excluding from consideration (Region signature not found at 30000) _process_chunk /usr/lib/python3.12/site-packages/oslo_utils/imageutils/format_inspector.py:1365
Oct 09 16:42:40 compute-0 nova_compute[117331]: 2025-10-09 16:42:40.844 2 DEBUG oslo_concurrency.processutils [None req-759fa734-0ef0-49db-bf6d-f0afc3bde2c9 05384d72bc894b20b1cc128bf76382b2 fca25ca2f317463c909e30c8b2c188d1 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/cea3dacfdea0f3734ae526b812744e847bc2d356 --force-share --output=json execute /usr/lib/python3.12/site-packages/oslo_concurrency/processutils.py:349
Oct 09 16:42:40 compute-0 nova_compute[117331]: 2025-10-09 16:42:40.905 2 DEBUG oslo_concurrency.processutils [None req-759fa734-0ef0-49db-bf6d-f0afc3bde2c9 05384d72bc894b20b1cc128bf76382b2 fca25ca2f317463c909e30c8b2c188d1 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/cea3dacfdea0f3734ae526b812744e847bc2d356 --force-share --output=json" returned: 0 in 0.061s execute /usr/lib/python3.12/site-packages/oslo_concurrency/processutils.py:372
Oct 09 16:42:40 compute-0 nova_compute[117331]: 2025-10-09 16:42:40.906 2 DEBUG oslo_concurrency.processutils [None req-759fa734-0ef0-49db-bf6d-f0afc3bde2c9 05384d72bc894b20b1cc128bf76382b2 fca25ca2f317463c909e30c8b2c188d1 - - default default] Running cmd (subprocess): env LC_ALL=C LANG=C qemu-img create -f qcow2 -o backing_file=/var/lib/nova/instances/_base/cea3dacfdea0f3734ae526b812744e847bc2d356,backing_fmt=raw /var/lib/nova/instances/31e9639e-ea7e-41f5-8bd3-2f0344062f99/disk 1073741824 execute /usr/lib/python3.12/site-packages/oslo_concurrency/processutils.py:349
Oct 09 16:42:40 compute-0 nova_compute[117331]: 2025-10-09 16:42:40.941 2 DEBUG oslo_concurrency.processutils [None req-759fa734-0ef0-49db-bf6d-f0afc3bde2c9 05384d72bc894b20b1cc128bf76382b2 fca25ca2f317463c909e30c8b2c188d1 - - default default] CMD "env LC_ALL=C LANG=C qemu-img create -f qcow2 -o backing_file=/var/lib/nova/instances/_base/cea3dacfdea0f3734ae526b812744e847bc2d356,backing_fmt=raw /var/lib/nova/instances/31e9639e-ea7e-41f5-8bd3-2f0344062f99/disk 1073741824" returned: 0 in 0.035s execute /usr/lib/python3.12/site-packages/oslo_concurrency/processutils.py:372
Oct 09 16:42:40 compute-0 nova_compute[117331]: 2025-10-09 16:42:40.942 2 DEBUG oslo_concurrency.lockutils [None req-759fa734-0ef0-49db-bf6d-f0afc3bde2c9 05384d72bc894b20b1cc128bf76382b2 fca25ca2f317463c909e30c8b2c188d1 - - default default] Lock "cea3dacfdea0f3734ae526b812744e847bc2d356" "released" by "nova.virt.libvirt.imagebackend.Qcow2.create_image.<locals>.create_qcow2_image" :: held 0.106s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:424
Oct 09 16:42:40 compute-0 nova_compute[117331]: 2025-10-09 16:42:40.942 2 DEBUG oslo_concurrency.processutils [None req-759fa734-0ef0-49db-bf6d-f0afc3bde2c9 05384d72bc894b20b1cc128bf76382b2 fca25ca2f317463c909e30c8b2c188d1 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/cea3dacfdea0f3734ae526b812744e847bc2d356 --force-share --output=json execute /usr/lib/python3.12/site-packages/oslo_concurrency/processutils.py:349
Oct 09 16:42:40 compute-0 nova_compute[117331]: 2025-10-09 16:42:40.996 2 DEBUG oslo_concurrency.processutils [None req-759fa734-0ef0-49db-bf6d-f0afc3bde2c9 05384d72bc894b20b1cc128bf76382b2 fca25ca2f317463c909e30c8b2c188d1 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/cea3dacfdea0f3734ae526b812744e847bc2d356 --force-share --output=json" returned: 0 in 0.054s execute /usr/lib/python3.12/site-packages/oslo_concurrency/processutils.py:372
Oct 09 16:42:40 compute-0 nova_compute[117331]: 2025-10-09 16:42:40.998 2 DEBUG nova.virt.disk.api [None req-759fa734-0ef0-49db-bf6d-f0afc3bde2c9 05384d72bc894b20b1cc128bf76382b2 fca25ca2f317463c909e30c8b2c188d1 - - default default] Checking if we can resize image /var/lib/nova/instances/31e9639e-ea7e-41f5-8bd3-2f0344062f99/disk. size=1073741824 can_resize_image /usr/lib/python3.12/site-packages/nova/virt/disk/api.py:164
Oct 09 16:42:40 compute-0 nova_compute[117331]: 2025-10-09 16:42:40.998 2 DEBUG oslo_concurrency.processutils [None req-759fa734-0ef0-49db-bf6d-f0afc3bde2c9 05384d72bc894b20b1cc128bf76382b2 fca25ca2f317463c909e30c8b2c188d1 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/31e9639e-ea7e-41f5-8bd3-2f0344062f99/disk --force-share --output=json execute /usr/lib/python3.12/site-packages/oslo_concurrency/processutils.py:349
Oct 09 16:42:41 compute-0 nova_compute[117331]: 2025-10-09 16:42:41.052 2 DEBUG oslo_concurrency.processutils [None req-759fa734-0ef0-49db-bf6d-f0afc3bde2c9 05384d72bc894b20b1cc128bf76382b2 fca25ca2f317463c909e30c8b2c188d1 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/31e9639e-ea7e-41f5-8bd3-2f0344062f99/disk --force-share --output=json" returned: 0 in 0.054s execute /usr/lib/python3.12/site-packages/oslo_concurrency/processutils.py:372
Oct 09 16:42:41 compute-0 nova_compute[117331]: 2025-10-09 16:42:41.054 2 DEBUG nova.virt.disk.api [None req-759fa734-0ef0-49db-bf6d-f0afc3bde2c9 05384d72bc894b20b1cc128bf76382b2 fca25ca2f317463c909e30c8b2c188d1 - - default default] Cannot resize image /var/lib/nova/instances/31e9639e-ea7e-41f5-8bd3-2f0344062f99/disk to a smaller size. can_resize_image /usr/lib/python3.12/site-packages/nova/virt/disk/api.py:170
Oct 09 16:42:41 compute-0 nova_compute[117331]: 2025-10-09 16:42:41.055 2 DEBUG nova.virt.libvirt.driver [None req-759fa734-0ef0-49db-bf6d-f0afc3bde2c9 05384d72bc894b20b1cc128bf76382b2 fca25ca2f317463c909e30c8b2c188d1 - - default default] [instance: 31e9639e-ea7e-41f5-8bd3-2f0344062f99] Created local disks _create_image /usr/lib/python3.12/site-packages/nova/virt/libvirt/driver.py:5270
Oct 09 16:42:41 compute-0 nova_compute[117331]: 2025-10-09 16:42:41.055 2 DEBUG nova.virt.libvirt.driver [None req-759fa734-0ef0-49db-bf6d-f0afc3bde2c9 05384d72bc894b20b1cc128bf76382b2 fca25ca2f317463c909e30c8b2c188d1 - - default default] [instance: 31e9639e-ea7e-41f5-8bd3-2f0344062f99] Ensure instance console log exists: /var/lib/nova/instances/31e9639e-ea7e-41f5-8bd3-2f0344062f99/console.log _ensure_console_log_for_instance /usr/lib/python3.12/site-packages/nova/virt/libvirt/driver.py:5017
Oct 09 16:42:41 compute-0 nova_compute[117331]: 2025-10-09 16:42:41.056 2 DEBUG oslo_concurrency.lockutils [None req-759fa734-0ef0-49db-bf6d-f0afc3bde2c9 05384d72bc894b20b1cc128bf76382b2 fca25ca2f317463c909e30c8b2c188d1 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:405
Oct 09 16:42:41 compute-0 nova_compute[117331]: 2025-10-09 16:42:41.057 2 DEBUG oslo_concurrency.lockutils [None req-759fa734-0ef0-49db-bf6d-f0afc3bde2c9 05384d72bc894b20b1cc128bf76382b2 fca25ca2f317463c909e30c8b2c188d1 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.001s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:410
Oct 09 16:42:41 compute-0 nova_compute[117331]: 2025-10-09 16:42:41.057 2 DEBUG oslo_concurrency.lockutils [None req-759fa734-0ef0-49db-bf6d-f0afc3bde2c9 05384d72bc894b20b1cc128bf76382b2 fca25ca2f317463c909e30c8b2c188d1 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.001s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:424
Oct 09 16:42:41 compute-0 nova_compute[117331]: 2025-10-09 16:42:41.306 2 DEBUG oslo_service.periodic_task [None req-92a13bed-c081-4c7b-87f3-bafebefb79f2 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.12/site-packages/oslo_service/periodic_task.py:210
Oct 09 16:42:41 compute-0 nova_compute[117331]: 2025-10-09 16:42:41.471 2 DEBUG nova.network.neutron [None req-759fa734-0ef0-49db-bf6d-f0afc3bde2c9 05384d72bc894b20b1cc128bf76382b2 fca25ca2f317463c909e30c8b2c188d1 - - default default] [instance: 31e9639e-ea7e-41f5-8bd3-2f0344062f99] Successfully updated port: 75dea49d-ac7b-45ed-8521-44a081d00648 _update_port /usr/lib/python3.12/site-packages/nova/network/neutron.py:588
Oct 09 16:42:41 compute-0 nova_compute[117331]: 2025-10-09 16:42:41.515 2 DEBUG nova.compute.manager [req-7d6203f4-fadb-4687-ba08-18532bc11239 req-a23ac7b7-bf79-4139-b184-f31e82a8895e ee15961ad2be4bada6c106ac4f69f422 c076c42271004e7aa86e84416e33f826 - - default default] [instance: 31e9639e-ea7e-41f5-8bd3-2f0344062f99] Received event network-changed-75dea49d-ac7b-45ed-8521-44a081d00648 external_instance_event /usr/lib/python3.12/site-packages/nova/compute/manager.py:11812
Oct 09 16:42:41 compute-0 nova_compute[117331]: 2025-10-09 16:42:41.516 2 DEBUG nova.compute.manager [req-7d6203f4-fadb-4687-ba08-18532bc11239 req-a23ac7b7-bf79-4139-b184-f31e82a8895e ee15961ad2be4bada6c106ac4f69f422 c076c42271004e7aa86e84416e33f826 - - default default] [instance: 31e9639e-ea7e-41f5-8bd3-2f0344062f99] Refreshing instance network info cache due to event network-changed-75dea49d-ac7b-45ed-8521-44a081d00648. external_instance_event /usr/lib/python3.12/site-packages/nova/compute/manager.py:11817
Oct 09 16:42:41 compute-0 nova_compute[117331]: 2025-10-09 16:42:41.516 2 DEBUG oslo_concurrency.lockutils [req-7d6203f4-fadb-4687-ba08-18532bc11239 req-a23ac7b7-bf79-4139-b184-f31e82a8895e ee15961ad2be4bada6c106ac4f69f422 c076c42271004e7aa86e84416e33f826 - - default default] Acquiring lock "refresh_cache-31e9639e-ea7e-41f5-8bd3-2f0344062f99" lock /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:313
Oct 09 16:42:41 compute-0 nova_compute[117331]: 2025-10-09 16:42:41.516 2 DEBUG oslo_concurrency.lockutils [req-7d6203f4-fadb-4687-ba08-18532bc11239 req-a23ac7b7-bf79-4139-b184-f31e82a8895e ee15961ad2be4bada6c106ac4f69f422 c076c42271004e7aa86e84416e33f826 - - default default] Acquired lock "refresh_cache-31e9639e-ea7e-41f5-8bd3-2f0344062f99" lock /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:316
Oct 09 16:42:41 compute-0 nova_compute[117331]: 2025-10-09 16:42:41.516 2 DEBUG nova.network.neutron [req-7d6203f4-fadb-4687-ba08-18532bc11239 req-a23ac7b7-bf79-4139-b184-f31e82a8895e ee15961ad2be4bada6c106ac4f69f422 c076c42271004e7aa86e84416e33f826 - - default default] [instance: 31e9639e-ea7e-41f5-8bd3-2f0344062f99] Refreshing network info cache for port 75dea49d-ac7b-45ed-8521-44a081d00648 _get_instance_nw_info /usr/lib/python3.12/site-packages/nova/network/neutron.py:2067
Oct 09 16:42:41 compute-0 nova_compute[117331]: 2025-10-09 16:42:41.819 2 DEBUG oslo_concurrency.lockutils [None req-92a13bed-c081-4c7b-87f3-bafebefb79f2 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:405
Oct 09 16:42:41 compute-0 nova_compute[117331]: 2025-10-09 16:42:41.819 2 DEBUG oslo_concurrency.lockutils [None req-92a13bed-c081-4c7b-87f3-bafebefb79f2 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:410
Oct 09 16:42:41 compute-0 nova_compute[117331]: 2025-10-09 16:42:41.820 2 DEBUG oslo_concurrency.lockutils [None req-92a13bed-c081-4c7b-87f3-bafebefb79f2 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:424
Oct 09 16:42:41 compute-0 nova_compute[117331]: 2025-10-09 16:42:41.820 2 DEBUG nova.compute.resource_tracker [None req-92a13bed-c081-4c7b-87f3-bafebefb79f2 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.12/site-packages/nova/compute/resource_tracker.py:937
Oct 09 16:42:41 compute-0 podman[154694]: 2025-10-09 16:42:41.842160469 +0000 UTC m=+0.072132153 container health_status 86aa93e887899e54d383b0fc17fb3c5637680d1d8e146a130a557c837f0260fe (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible)
Oct 09 16:42:41 compute-0 nova_compute[117331]: 2025-10-09 16:42:41.971 2 WARNING nova.virt.libvirt.driver [None req-92a13bed-c081-4c7b-87f3-bafebefb79f2 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Oct 09 16:42:41 compute-0 nova_compute[117331]: 2025-10-09 16:42:41.973 2 DEBUG oslo_concurrency.processutils [None req-92a13bed-c081-4c7b-87f3-bafebefb79f2 - - - - - -] Running cmd (subprocess): env LANG=C uptime execute /usr/lib/python3.12/site-packages/oslo_concurrency/processutils.py:349
Oct 09 16:42:41 compute-0 nova_compute[117331]: 2025-10-09 16:42:41.982 2 DEBUG oslo_concurrency.lockutils [None req-759fa734-0ef0-49db-bf6d-f0afc3bde2c9 05384d72bc894b20b1cc128bf76382b2 fca25ca2f317463c909e30c8b2c188d1 - - default default] Acquiring lock "refresh_cache-31e9639e-ea7e-41f5-8bd3-2f0344062f99" lock /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:313
Oct 09 16:42:42 compute-0 nova_compute[117331]: 2025-10-09 16:42:42.012 2 DEBUG oslo_concurrency.processutils [None req-92a13bed-c081-4c7b-87f3-bafebefb79f2 - - - - - -] CMD "env LANG=C uptime" returned: 0 in 0.040s execute /usr/lib/python3.12/site-packages/oslo_concurrency/processutils.py:372
Oct 09 16:42:42 compute-0 nova_compute[117331]: 2025-10-09 16:42:42.013 2 DEBUG nova.compute.resource_tracker [None req-92a13bed-c081-4c7b-87f3-bafebefb79f2 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=6167MB free_disk=73.2451400756836GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.12/site-packages/nova/compute/resource_tracker.py:1136
Oct 09 16:42:42 compute-0 nova_compute[117331]: 2025-10-09 16:42:42.013 2 DEBUG oslo_concurrency.lockutils [None req-92a13bed-c081-4c7b-87f3-bafebefb79f2 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:405
Oct 09 16:42:42 compute-0 nova_compute[117331]: 2025-10-09 16:42:42.013 2 DEBUG oslo_concurrency.lockutils [None req-92a13bed-c081-4c7b-87f3-bafebefb79f2 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:410
Oct 09 16:42:42 compute-0 nova_compute[117331]: 2025-10-09 16:42:42.021 2 WARNING neutronclient.v2_0.client [req-7d6203f4-fadb-4687-ba08-18532bc11239 req-a23ac7b7-bf79-4139-b184-f31e82a8895e ee15961ad2be4bada6c106ac4f69f422 c076c42271004e7aa86e84416e33f826 - - default default] The python binding code in neutronclient is deprecated in favor of OpenstackSDK, please use that as this will be removed in a future release.
Oct 09 16:42:42 compute-0 nova_compute[117331]: 2025-10-09 16:42:42.104 2 DEBUG nova.network.neutron [req-7d6203f4-fadb-4687-ba08-18532bc11239 req-a23ac7b7-bf79-4139-b184-f31e82a8895e ee15961ad2be4bada6c106ac4f69f422 c076c42271004e7aa86e84416e33f826 - - default default] [instance: 31e9639e-ea7e-41f5-8bd3-2f0344062f99] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.12/site-packages/nova/network/neutron.py:3383
Oct 09 16:42:42 compute-0 nova_compute[117331]: 2025-10-09 16:42:42.310 2 DEBUG nova.network.neutron [req-7d6203f4-fadb-4687-ba08-18532bc11239 req-a23ac7b7-bf79-4139-b184-f31e82a8895e ee15961ad2be4bada6c106ac4f69f422 c076c42271004e7aa86e84416e33f826 - - default default] [instance: 31e9639e-ea7e-41f5-8bd3-2f0344062f99] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.12/site-packages/nova/network/neutron.py:116
Oct 09 16:42:42 compute-0 nova_compute[117331]: 2025-10-09 16:42:42.478 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 09 16:42:42 compute-0 nova_compute[117331]: 2025-10-09 16:42:42.816 2 DEBUG oslo_concurrency.lockutils [req-7d6203f4-fadb-4687-ba08-18532bc11239 req-a23ac7b7-bf79-4139-b184-f31e82a8895e ee15961ad2be4bada6c106ac4f69f422 c076c42271004e7aa86e84416e33f826 - - default default] Releasing lock "refresh_cache-31e9639e-ea7e-41f5-8bd3-2f0344062f99" lock /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:334
Oct 09 16:42:42 compute-0 nova_compute[117331]: 2025-10-09 16:42:42.818 2 DEBUG oslo_concurrency.lockutils [None req-759fa734-0ef0-49db-bf6d-f0afc3bde2c9 05384d72bc894b20b1cc128bf76382b2 fca25ca2f317463c909e30c8b2c188d1 - - default default] Acquired lock "refresh_cache-31e9639e-ea7e-41f5-8bd3-2f0344062f99" lock /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:316
Oct 09 16:42:42 compute-0 nova_compute[117331]: 2025-10-09 16:42:42.818 2 DEBUG nova.network.neutron [None req-759fa734-0ef0-49db-bf6d-f0afc3bde2c9 05384d72bc894b20b1cc128bf76382b2 fca25ca2f317463c909e30c8b2c188d1 - - default default] [instance: 31e9639e-ea7e-41f5-8bd3-2f0344062f99] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.12/site-packages/nova/network/neutron.py:2070
Oct 09 16:42:43 compute-0 nova_compute[117331]: 2025-10-09 16:42:43.057 2 DEBUG nova.compute.resource_tracker [None req-92a13bed-c081-4c7b-87f3-bafebefb79f2 - - - - - -] Instance 31e9639e-ea7e-41f5-8bd3-2f0344062f99 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.12/site-packages/nova/compute/resource_tracker.py:1740
Oct 09 16:42:43 compute-0 nova_compute[117331]: 2025-10-09 16:42:43.058 2 DEBUG nova.compute.resource_tracker [None req-92a13bed-c081-4c7b-87f3-bafebefb79f2 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 1 _report_final_resource_view /usr/lib/python3.12/site-packages/nova/compute/resource_tracker.py:1159
Oct 09 16:42:43 compute-0 nova_compute[117331]: 2025-10-09 16:42:43.059 2 DEBUG nova.compute.resource_tracker [None req-92a13bed-c081-4c7b-87f3-bafebefb79f2 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=640MB phys_disk=79GB used_disk=1GB total_vcpus=8 used_vcpus=1 pci_stats=[] stats={'failed_builds': '0', 'uptime': ' 16:42:42 up 51 min,  0 user,  load average: 0.28, 0.43, 0.44\n', 'num_instances': '1', 'num_vm_building': '1', 'num_task_spawning': '1', 'num_os_type_None': '1', 'num_proj_fca25ca2f317463c909e30c8b2c188d1': '1', 'io_workload': '1'} _report_final_resource_view /usr/lib/python3.12/site-packages/nova/compute/resource_tracker.py:1168
Oct 09 16:42:43 compute-0 nova_compute[117331]: 2025-10-09 16:42:43.103 2 DEBUG nova.compute.provider_tree [None req-92a13bed-c081-4c7b-87f3-bafebefb79f2 - - - - - -] Inventory has not changed in ProviderTree for provider: 593051b8-2000-437f-a915-2616fc8b1671 update_inventory /usr/lib/python3.12/site-packages/nova/compute/provider_tree.py:180
Oct 09 16:42:43 compute-0 nova_compute[117331]: 2025-10-09 16:42:43.453 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 09 16:42:43 compute-0 nova_compute[117331]: 2025-10-09 16:42:43.474 2 DEBUG nova.network.neutron [None req-759fa734-0ef0-49db-bf6d-f0afc3bde2c9 05384d72bc894b20b1cc128bf76382b2 fca25ca2f317463c909e30c8b2c188d1 - - default default] [instance: 31e9639e-ea7e-41f5-8bd3-2f0344062f99] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.12/site-packages/nova/network/neutron.py:3383
Oct 09 16:42:43 compute-0 nova_compute[117331]: 2025-10-09 16:42:43.611 2 DEBUG nova.scheduler.client.report [None req-92a13bed-c081-4c7b-87f3-bafebefb79f2 - - - - - -] Inventory has not changed for provider 593051b8-2000-437f-a915-2616fc8b1671 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 79, 'reserved': 1, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.12/site-packages/nova/scheduler/client/report.py:958
Oct 09 16:42:43 compute-0 nova_compute[117331]: 2025-10-09 16:42:43.677 2 WARNING neutronclient.v2_0.client [None req-759fa734-0ef0-49db-bf6d-f0afc3bde2c9 05384d72bc894b20b1cc128bf76382b2 fca25ca2f317463c909e30c8b2c188d1 - - default default] The python binding code in neutronclient is deprecated in favor of OpenstackSDK, please use that as this will be removed in a future release.
Oct 09 16:42:43 compute-0 nova_compute[117331]: 2025-10-09 16:42:43.813 2 DEBUG nova.network.neutron [None req-759fa734-0ef0-49db-bf6d-f0afc3bde2c9 05384d72bc894b20b1cc128bf76382b2 fca25ca2f317463c909e30c8b2c188d1 - - default default] [instance: 31e9639e-ea7e-41f5-8bd3-2f0344062f99] Updating instance_info_cache with network_info: [{"id": "75dea49d-ac7b-45ed-8521-44a081d00648", "address": "fa:16:3e:a9:96:a6", "network": {"id": "faa2f899-e3f1-48c6-ac37-859a6fb5c6cc", "bridge": "br-int", "label": "tempest-TestExecuteZoneMigrationStrategy-926478805-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "655616b8f80249ceab702bd3d943237d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap75dea49d-ac", "ovs_interfaceid": "75dea49d-ac7b-45ed-8521-44a081d00648", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.12/site-packages/nova/network/neutron.py:116
Oct 09 16:42:44 compute-0 nova_compute[117331]: 2025-10-09 16:42:44.127 2 DEBUG nova.compute.resource_tracker [None req-92a13bed-c081-4c7b-87f3-bafebefb79f2 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.12/site-packages/nova/compute/resource_tracker.py:1097
Oct 09 16:42:44 compute-0 nova_compute[117331]: 2025-10-09 16:42:44.127 2 DEBUG oslo_concurrency.lockutils [None req-92a13bed-c081-4c7b-87f3-bafebefb79f2 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 2.114s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:424
Oct 09 16:42:44 compute-0 nova_compute[117331]: 2025-10-09 16:42:44.320 2 DEBUG oslo_concurrency.lockutils [None req-759fa734-0ef0-49db-bf6d-f0afc3bde2c9 05384d72bc894b20b1cc128bf76382b2 fca25ca2f317463c909e30c8b2c188d1 - - default default] Releasing lock "refresh_cache-31e9639e-ea7e-41f5-8bd3-2f0344062f99" lock /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:334
Oct 09 16:42:44 compute-0 nova_compute[117331]: 2025-10-09 16:42:44.321 2 DEBUG nova.compute.manager [None req-759fa734-0ef0-49db-bf6d-f0afc3bde2c9 05384d72bc894b20b1cc128bf76382b2 fca25ca2f317463c909e30c8b2c188d1 - - default default] [instance: 31e9639e-ea7e-41f5-8bd3-2f0344062f99] Instance network_info: |[{"id": "75dea49d-ac7b-45ed-8521-44a081d00648", "address": "fa:16:3e:a9:96:a6", "network": {"id": "faa2f899-e3f1-48c6-ac37-859a6fb5c6cc", "bridge": "br-int", "label": "tempest-TestExecuteZoneMigrationStrategy-926478805-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "655616b8f80249ceab702bd3d943237d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap75dea49d-ac", "ovs_interfaceid": "75dea49d-ac7b-45ed-8521-44a081d00648", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.12/site-packages/nova/compute/manager.py:2031
Oct 09 16:42:44 compute-0 nova_compute[117331]: 2025-10-09 16:42:44.325 2 DEBUG nova.virt.libvirt.driver [None req-759fa734-0ef0-49db-bf6d-f0afc3bde2c9 05384d72bc894b20b1cc128bf76382b2 fca25ca2f317463c909e30c8b2c188d1 - - default default] [instance: 31e9639e-ea7e-41f5-8bd3-2f0344062f99] Start _get_guest_xml network_info=[{"id": "75dea49d-ac7b-45ed-8521-44a081d00648", "address": "fa:16:3e:a9:96:a6", "network": {"id": "faa2f899-e3f1-48c6-ac37-859a6fb5c6cc", "bridge": "br-int", "label": "tempest-TestExecuteZoneMigrationStrategy-926478805-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "655616b8f80249ceab702bd3d943237d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap75dea49d-ac", "ovs_interfaceid": "75dea49d-ac7b-45ed-8521-44a081d00648", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-09T16:07:02Z,direct_url=<?>,disk_format='qcow2',id=b7d6e0af-25e4-4227-9dc6-43143898ceee,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='b30e8cf5e10742f190212b4cb97ce2c9',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-09T16:07:04Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'device_type': 'disk', 'encrypted': False, 'size': 0, 'encryption_format': None, 'guest_format': None, 'boot_index': 0, 'device_name': '/dev/vda', 'disk_bus': 'virtio', 'encryption_options': None, 'encryption_secret_uuid': None, 'image_id': 'b7d6e0af-25e4-4227-9dc6-43143898ceee'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None}share_info=None _get_guest_xml /usr/lib/python3.12/site-packages/nova/virt/libvirt/driver.py:8046
Oct 09 16:42:44 compute-0 nova_compute[117331]: 2025-10-09 16:42:44.331 2 WARNING nova.virt.libvirt.driver [None req-759fa734-0ef0-49db-bf6d-f0afc3bde2c9 05384d72bc894b20b1cc128bf76382b2 fca25ca2f317463c909e30c8b2c188d1 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Oct 09 16:42:44 compute-0 nova_compute[117331]: 2025-10-09 16:42:44.333 2 DEBUG nova.virt.driver [None req-759fa734-0ef0-49db-bf6d-f0afc3bde2c9 05384d72bc894b20b1cc128bf76382b2 fca25ca2f317463c909e30c8b2c188d1 - - default default] InstanceDriverMetadata: InstanceDriverMetadata(root_type='image', root_id='b7d6e0af-25e4-4227-9dc6-43143898ceee', instance_meta=NovaInstanceMeta(name='tempest-TestExecuteZoneMigrationStrategy-server-1385446209', uuid='31e9639e-ea7e-41f5-8bd3-2f0344062f99'), owner=OwnerMeta(userid='05384d72bc894b20b1cc128bf76382b2', username='tempest-TestExecuteZoneMigrationStrategy-1067041161-project-admin', projectid='fca25ca2f317463c909e30c8b2c188d1', projectname='tempest-TestExecuteZoneMigrationStrategy-1067041161'), image=ImageMeta(id='b7d6e0af-25e4-4227-9dc6-43143898ceee', name=None, container_format='bare', disk_format='qcow2', min_disk=1, min_ram=0, properties={'hw_rng_model': 'virtio'}), flavor=FlavorMeta(name='m1.nano', flavorid='5aeb5bf9-70f3-4ab6-b418-2c3319a7d2b3', memory_mb=128, vcpus=1, root_gb=1, ephemeral_gb=0, extra_specs={'hw_rng:allowed': 'True'}, swap=0), network_info=[{"id": "75dea49d-ac7b-45ed-8521-44a081d00648", "address": "fa:16:3e:a9:96:a6", "network": {"id": "faa2f899-e3f1-48c6-ac37-859a6fb5c6cc", "bridge": "br-int", "label": "tempest-TestExecuteZoneMigrationStrategy-926478805-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "655616b8f80249ceab702bd3d943237d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap75dea49d-ac", "ovs_interfaceid": 
"75dea49d-ac7b-45ed-8521-44a081d00648", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}], nova_package='32.1.0-0.20251008143712.076498e.el10', creation_time=1760028164.3334568) get_instance_driver_metadata /usr/lib/python3.12/site-packages/nova/virt/driver.py:437
Oct 09 16:42:44 compute-0 nova_compute[117331]: 2025-10-09 16:42:44.338 2 DEBUG nova.virt.libvirt.host [None req-759fa734-0ef0-49db-bf6d-f0afc3bde2c9 05384d72bc894b20b1cc128bf76382b2 fca25ca2f317463c909e30c8b2c188d1 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.12/site-packages/nova/virt/libvirt/host.py:1698
Oct 09 16:42:44 compute-0 nova_compute[117331]: 2025-10-09 16:42:44.339 2 DEBUG nova.virt.libvirt.host [None req-759fa734-0ef0-49db-bf6d-f0afc3bde2c9 05384d72bc894b20b1cc128bf76382b2 fca25ca2f317463c909e30c8b2c188d1 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.12/site-packages/nova/virt/libvirt/host.py:1708
Oct 09 16:42:44 compute-0 nova_compute[117331]: 2025-10-09 16:42:44.342 2 DEBUG nova.virt.libvirt.host [None req-759fa734-0ef0-49db-bf6d-f0afc3bde2c9 05384d72bc894b20b1cc128bf76382b2 fca25ca2f317463c909e30c8b2c188d1 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.12/site-packages/nova/virt/libvirt/host.py:1717
Oct 09 16:42:44 compute-0 nova_compute[117331]: 2025-10-09 16:42:44.343 2 DEBUG nova.virt.libvirt.host [None req-759fa734-0ef0-49db-bf6d-f0afc3bde2c9 05384d72bc894b20b1cc128bf76382b2 fca25ca2f317463c909e30c8b2c188d1 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.12/site-packages/nova/virt/libvirt/host.py:1724
Oct 09 16:42:44 compute-0 nova_compute[117331]: 2025-10-09 16:42:44.344 2 DEBUG nova.virt.libvirt.driver [None req-759fa734-0ef0-49db-bf6d-f0afc3bde2c9 05384d72bc894b20b1cc128bf76382b2 fca25ca2f317463c909e30c8b2c188d1 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.12/site-packages/nova/virt/libvirt/driver.py:5809
Oct 09 16:42:44 compute-0 nova_compute[117331]: 2025-10-09 16:42:44.344 2 DEBUG nova.virt.hardware [None req-759fa734-0ef0-49db-bf6d-f0afc3bde2c9 05384d72bc894b20b1cc128bf76382b2 fca25ca2f317463c909e30c8b2c188d1 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-10-09T16:07:00Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='5aeb5bf9-70f3-4ab6-b418-2c3319a7d2b3',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-09T16:07:02Z,direct_url=<?>,disk_format='qcow2',id=b7d6e0af-25e4-4227-9dc6-43143898ceee,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='b30e8cf5e10742f190212b4cb97ce2c9',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-09T16:07:04Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.12/site-packages/nova/virt/hardware.py:571
Oct 09 16:42:44 compute-0 nova_compute[117331]: 2025-10-09 16:42:44.345 2 DEBUG nova.virt.hardware [None req-759fa734-0ef0-49db-bf6d-f0afc3bde2c9 05384d72bc894b20b1cc128bf76382b2 fca25ca2f317463c909e30c8b2c188d1 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.12/site-packages/nova/virt/hardware.py:356
Oct 09 16:42:44 compute-0 nova_compute[117331]: 2025-10-09 16:42:44.345 2 DEBUG nova.virt.hardware [None req-759fa734-0ef0-49db-bf6d-f0afc3bde2c9 05384d72bc894b20b1cc128bf76382b2 fca25ca2f317463c909e30c8b2c188d1 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.12/site-packages/nova/virt/hardware.py:360
Oct 09 16:42:44 compute-0 nova_compute[117331]: 2025-10-09 16:42:44.346 2 DEBUG nova.virt.hardware [None req-759fa734-0ef0-49db-bf6d-f0afc3bde2c9 05384d72bc894b20b1cc128bf76382b2 fca25ca2f317463c909e30c8b2c188d1 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.12/site-packages/nova/virt/hardware.py:396
Oct 09 16:42:44 compute-0 nova_compute[117331]: 2025-10-09 16:42:44.346 2 DEBUG nova.virt.hardware [None req-759fa734-0ef0-49db-bf6d-f0afc3bde2c9 05384d72bc894b20b1cc128bf76382b2 fca25ca2f317463c909e30c8b2c188d1 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.12/site-packages/nova/virt/hardware.py:400
Oct 09 16:42:44 compute-0 nova_compute[117331]: 2025-10-09 16:42:44.346 2 DEBUG nova.virt.hardware [None req-759fa734-0ef0-49db-bf6d-f0afc3bde2c9 05384d72bc894b20b1cc128bf76382b2 fca25ca2f317463c909e30c8b2c188d1 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.12/site-packages/nova/virt/hardware.py:438
Oct 09 16:42:44 compute-0 nova_compute[117331]: 2025-10-09 16:42:44.347 2 DEBUG nova.virt.hardware [None req-759fa734-0ef0-49db-bf6d-f0afc3bde2c9 05384d72bc894b20b1cc128bf76382b2 fca25ca2f317463c909e30c8b2c188d1 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.12/site-packages/nova/virt/hardware.py:577
Oct 09 16:42:44 compute-0 nova_compute[117331]: 2025-10-09 16:42:44.347 2 DEBUG nova.virt.hardware [None req-759fa734-0ef0-49db-bf6d-f0afc3bde2c9 05384d72bc894b20b1cc128bf76382b2 fca25ca2f317463c909e30c8b2c188d1 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.12/site-packages/nova/virt/hardware.py:479
Oct 09 16:42:44 compute-0 nova_compute[117331]: 2025-10-09 16:42:44.348 2 DEBUG nova.virt.hardware [None req-759fa734-0ef0-49db-bf6d-f0afc3bde2c9 05384d72bc894b20b1cc128bf76382b2 fca25ca2f317463c909e30c8b2c188d1 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.12/site-packages/nova/virt/hardware.py:509
Oct 09 16:42:44 compute-0 nova_compute[117331]: 2025-10-09 16:42:44.348 2 DEBUG nova.virt.hardware [None req-759fa734-0ef0-49db-bf6d-f0afc3bde2c9 05384d72bc894b20b1cc128bf76382b2 fca25ca2f317463c909e30c8b2c188d1 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.12/site-packages/nova/virt/hardware.py:583
Oct 09 16:42:44 compute-0 nova_compute[117331]: 2025-10-09 16:42:44.348 2 DEBUG nova.virt.hardware [None req-759fa734-0ef0-49db-bf6d-f0afc3bde2c9 05384d72bc894b20b1cc128bf76382b2 fca25ca2f317463c909e30c8b2c188d1 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.12/site-packages/nova/virt/hardware.py:585
Oct 09 16:42:44 compute-0 nova_compute[117331]: 2025-10-09 16:42:44.355 2 DEBUG nova.virt.libvirt.vif [None req-759fa734-0ef0-49db-bf6d-f0afc3bde2c9 05384d72bc894b20b1cc128bf76382b2 fca25ca2f317463c909e30c8b2c188d1 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,compute_id=1,config_drive='',created_at=2025-10-09T16:42:33Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description=None,display_name='tempest-TestExecuteZoneMigrationStrategy-server-1385446209',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testexecutezonemigrationstrategy-server-1385446209',id=32,image_ref='b7d6e0af-25e4-4227-9dc6-43143898ceee',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='fca25ca2f317463c909e30c8b2c188d1',ramdisk_id='',reservation_id='r-a21ifu1r',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,manager,admin,member',image_base_image_ref='b7d6e0af-25e4-4227-9dc6-43143898ceee',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestExecuteZoneMigrationStrategy-1067041161',owner_user_name='tempest-TestExecuteZ
oneMigrationStrategy-1067041161-project-admin'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-09T16:42:39Z,user_data=None,user_id='05384d72bc894b20b1cc128bf76382b2',uuid=31e9639e-ea7e-41f5-8bd3-2f0344062f99,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "75dea49d-ac7b-45ed-8521-44a081d00648", "address": "fa:16:3e:a9:96:a6", "network": {"id": "faa2f899-e3f1-48c6-ac37-859a6fb5c6cc", "bridge": "br-int", "label": "tempest-TestExecuteZoneMigrationStrategy-926478805-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "655616b8f80249ceab702bd3d943237d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap75dea49d-ac", "ovs_interfaceid": "75dea49d-ac7b-45ed-8521-44a081d00648", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.12/site-packages/nova/virt/libvirt/vif.py:574
Oct 09 16:42:44 compute-0 nova_compute[117331]: 2025-10-09 16:42:44.355 2 DEBUG nova.network.os_vif_util [None req-759fa734-0ef0-49db-bf6d-f0afc3bde2c9 05384d72bc894b20b1cc128bf76382b2 fca25ca2f317463c909e30c8b2c188d1 - - default default] Converting VIF {"id": "75dea49d-ac7b-45ed-8521-44a081d00648", "address": "fa:16:3e:a9:96:a6", "network": {"id": "faa2f899-e3f1-48c6-ac37-859a6fb5c6cc", "bridge": "br-int", "label": "tempest-TestExecuteZoneMigrationStrategy-926478805-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "655616b8f80249ceab702bd3d943237d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap75dea49d-ac", "ovs_interfaceid": "75dea49d-ac7b-45ed-8521-44a081d00648", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.12/site-packages/nova/network/os_vif_util.py:511
Oct 09 16:42:44 compute-0 nova_compute[117331]: 2025-10-09 16:42:44.356 2 DEBUG nova.network.os_vif_util [None req-759fa734-0ef0-49db-bf6d-f0afc3bde2c9 05384d72bc894b20b1cc128bf76382b2 fca25ca2f317463c909e30c8b2c188d1 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:a9:96:a6,bridge_name='br-int',has_traffic_filtering=True,id=75dea49d-ac7b-45ed-8521-44a081d00648,network=Network(faa2f899-e3f1-48c6-ac37-859a6fb5c6cc),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap75dea49d-ac') nova_to_osvif_vif /usr/lib/python3.12/site-packages/nova/network/os_vif_util.py:548
Oct 09 16:42:44 compute-0 nova_compute[117331]: 2025-10-09 16:42:44.358 2 DEBUG nova.objects.instance [None req-759fa734-0ef0-49db-bf6d-f0afc3bde2c9 05384d72bc894b20b1cc128bf76382b2 fca25ca2f317463c909e30c8b2c188d1 - - default default] Lazy-loading 'pci_devices' on Instance uuid 31e9639e-ea7e-41f5-8bd3-2f0344062f99 obj_load_attr /usr/lib/python3.12/site-packages/nova/objects/instance.py:1141
Oct 09 16:42:44 compute-0 nova_compute[117331]: 2025-10-09 16:42:44.868 2 DEBUG nova.virt.libvirt.driver [None req-759fa734-0ef0-49db-bf6d-f0afc3bde2c9 05384d72bc894b20b1cc128bf76382b2 fca25ca2f317463c909e30c8b2c188d1 - - default default] [instance: 31e9639e-ea7e-41f5-8bd3-2f0344062f99] End _get_guest_xml xml=<domain type="kvm">
Oct 09 16:42:44 compute-0 nova_compute[117331]:   <uuid>31e9639e-ea7e-41f5-8bd3-2f0344062f99</uuid>
Oct 09 16:42:44 compute-0 nova_compute[117331]:   <name>instance-00000020</name>
Oct 09 16:42:44 compute-0 nova_compute[117331]:   <memory>131072</memory>
Oct 09 16:42:44 compute-0 nova_compute[117331]:   <vcpu>1</vcpu>
Oct 09 16:42:44 compute-0 nova_compute[117331]:   <metadata>
Oct 09 16:42:44 compute-0 nova_compute[117331]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Oct 09 16:42:44 compute-0 nova_compute[117331]:       <nova:package version="32.1.0-0.20251008143712.076498e.el10"/>
Oct 09 16:42:44 compute-0 nova_compute[117331]:       <nova:name>tempest-TestExecuteZoneMigrationStrategy-server-1385446209</nova:name>
Oct 09 16:42:44 compute-0 nova_compute[117331]:       <nova:creationTime>2025-10-09 16:42:44</nova:creationTime>
Oct 09 16:42:44 compute-0 nova_compute[117331]:       <nova:flavor name="m1.nano" id="5aeb5bf9-70f3-4ab6-b418-2c3319a7d2b3">
Oct 09 16:42:44 compute-0 nova_compute[117331]:         <nova:memory>128</nova:memory>
Oct 09 16:42:44 compute-0 nova_compute[117331]:         <nova:disk>1</nova:disk>
Oct 09 16:42:44 compute-0 nova_compute[117331]:         <nova:swap>0</nova:swap>
Oct 09 16:42:44 compute-0 nova_compute[117331]:         <nova:ephemeral>0</nova:ephemeral>
Oct 09 16:42:44 compute-0 nova_compute[117331]:         <nova:vcpus>1</nova:vcpus>
Oct 09 16:42:44 compute-0 nova_compute[117331]:         <nova:extraSpecs>
Oct 09 16:42:44 compute-0 nova_compute[117331]:           <nova:extraSpec name="hw_rng:allowed">True</nova:extraSpec>
Oct 09 16:42:44 compute-0 nova_compute[117331]:         </nova:extraSpecs>
Oct 09 16:42:44 compute-0 nova_compute[117331]:       </nova:flavor>
Oct 09 16:42:44 compute-0 nova_compute[117331]:       <nova:image uuid="b7d6e0af-25e4-4227-9dc6-43143898ceee">
Oct 09 16:42:44 compute-0 nova_compute[117331]:         <nova:containerFormat>bare</nova:containerFormat>
Oct 09 16:42:44 compute-0 nova_compute[117331]:         <nova:diskFormat>qcow2</nova:diskFormat>
Oct 09 16:42:44 compute-0 nova_compute[117331]:         <nova:minDisk>1</nova:minDisk>
Oct 09 16:42:44 compute-0 nova_compute[117331]:         <nova:minRam>0</nova:minRam>
Oct 09 16:42:44 compute-0 nova_compute[117331]:         <nova:properties>
Oct 09 16:42:44 compute-0 nova_compute[117331]:           <nova:property name="hw_rng_model">virtio</nova:property>
Oct 09 16:42:44 compute-0 nova_compute[117331]:         </nova:properties>
Oct 09 16:42:44 compute-0 nova_compute[117331]:       </nova:image>
Oct 09 16:42:44 compute-0 nova_compute[117331]:       <nova:owner>
Oct 09 16:42:44 compute-0 nova_compute[117331]:         <nova:user uuid="05384d72bc894b20b1cc128bf76382b2">tempest-TestExecuteZoneMigrationStrategy-1067041161-project-admin</nova:user>
Oct 09 16:42:44 compute-0 nova_compute[117331]:         <nova:project uuid="fca25ca2f317463c909e30c8b2c188d1">tempest-TestExecuteZoneMigrationStrategy-1067041161</nova:project>
Oct 09 16:42:44 compute-0 nova_compute[117331]:       </nova:owner>
Oct 09 16:42:44 compute-0 nova_compute[117331]:       <nova:root type="image" uuid="b7d6e0af-25e4-4227-9dc6-43143898ceee"/>
Oct 09 16:42:44 compute-0 nova_compute[117331]:       <nova:ports>
Oct 09 16:42:44 compute-0 nova_compute[117331]:         <nova:port uuid="75dea49d-ac7b-45ed-8521-44a081d00648">
Oct 09 16:42:44 compute-0 nova_compute[117331]:           <nova:ip type="fixed" address="10.100.0.9" ipVersion="4"/>
Oct 09 16:42:44 compute-0 nova_compute[117331]:         </nova:port>
Oct 09 16:42:44 compute-0 nova_compute[117331]:       </nova:ports>
Oct 09 16:42:44 compute-0 nova_compute[117331]:     </nova:instance>
Oct 09 16:42:44 compute-0 nova_compute[117331]:   </metadata>
Oct 09 16:42:44 compute-0 nova_compute[117331]:   <sysinfo type="smbios">
Oct 09 16:42:44 compute-0 nova_compute[117331]:     <system>
Oct 09 16:42:44 compute-0 nova_compute[117331]:       <entry name="manufacturer">RDO</entry>
Oct 09 16:42:44 compute-0 nova_compute[117331]:       <entry name="product">OpenStack Compute</entry>
Oct 09 16:42:44 compute-0 nova_compute[117331]:       <entry name="version">32.1.0-0.20251008143712.076498e.el10</entry>
Oct 09 16:42:44 compute-0 nova_compute[117331]:       <entry name="serial">31e9639e-ea7e-41f5-8bd3-2f0344062f99</entry>
Oct 09 16:42:44 compute-0 nova_compute[117331]:       <entry name="uuid">31e9639e-ea7e-41f5-8bd3-2f0344062f99</entry>
Oct 09 16:42:44 compute-0 nova_compute[117331]:       <entry name="family">Virtual Machine</entry>
Oct 09 16:42:44 compute-0 nova_compute[117331]:     </system>
Oct 09 16:42:44 compute-0 nova_compute[117331]:   </sysinfo>
Oct 09 16:42:44 compute-0 nova_compute[117331]:   <os>
Oct 09 16:42:44 compute-0 nova_compute[117331]:     <type arch="x86_64" machine="q35">hvm</type>
Oct 09 16:42:44 compute-0 nova_compute[117331]:     <boot dev="hd"/>
Oct 09 16:42:44 compute-0 nova_compute[117331]:     <smbios mode="sysinfo"/>
Oct 09 16:42:44 compute-0 nova_compute[117331]:   </os>
Oct 09 16:42:44 compute-0 nova_compute[117331]:   <features>
Oct 09 16:42:44 compute-0 nova_compute[117331]:     <acpi/>
Oct 09 16:42:44 compute-0 nova_compute[117331]:     <apic/>
Oct 09 16:42:44 compute-0 nova_compute[117331]:     <vmcoreinfo/>
Oct 09 16:42:44 compute-0 nova_compute[117331]:   </features>
Oct 09 16:42:44 compute-0 nova_compute[117331]:   <clock offset="utc">
Oct 09 16:42:44 compute-0 nova_compute[117331]:     <timer name="pit" tickpolicy="delay"/>
Oct 09 16:42:44 compute-0 nova_compute[117331]:     <timer name="rtc" tickpolicy="catchup"/>
Oct 09 16:42:44 compute-0 nova_compute[117331]:     <timer name="hpet" present="no"/>
Oct 09 16:42:44 compute-0 nova_compute[117331]:   </clock>
Oct 09 16:42:44 compute-0 nova_compute[117331]:   <cpu mode="host-model" match="exact">
Oct 09 16:42:44 compute-0 nova_compute[117331]:     <topology sockets="1" cores="1" threads="1"/>
Oct 09 16:42:44 compute-0 nova_compute[117331]:   </cpu>
Oct 09 16:42:44 compute-0 nova_compute[117331]:   <devices>
Oct 09 16:42:44 compute-0 nova_compute[117331]:     <disk type="file" device="disk">
Oct 09 16:42:44 compute-0 nova_compute[117331]:       <driver name="qemu" type="qcow2" cache="none"/>
Oct 09 16:42:44 compute-0 nova_compute[117331]:       <source file="/var/lib/nova/instances/31e9639e-ea7e-41f5-8bd3-2f0344062f99/disk"/>
Oct 09 16:42:44 compute-0 nova_compute[117331]:       <target dev="vda" bus="virtio"/>
Oct 09 16:42:44 compute-0 nova_compute[117331]:     </disk>
Oct 09 16:42:44 compute-0 nova_compute[117331]:     <disk type="file" device="cdrom">
Oct 09 16:42:44 compute-0 nova_compute[117331]:       <driver name="qemu" type="raw" cache="none"/>
Oct 09 16:42:44 compute-0 nova_compute[117331]:       <source file="/var/lib/nova/instances/31e9639e-ea7e-41f5-8bd3-2f0344062f99/disk.config"/>
Oct 09 16:42:44 compute-0 nova_compute[117331]:       <target dev="sda" bus="sata"/>
Oct 09 16:42:44 compute-0 nova_compute[117331]:     </disk>
Oct 09 16:42:44 compute-0 nova_compute[117331]:     <interface type="ethernet">
Oct 09 16:42:44 compute-0 nova_compute[117331]:       <mac address="fa:16:3e:a9:96:a6"/>
Oct 09 16:42:44 compute-0 nova_compute[117331]:       <model type="virtio"/>
Oct 09 16:42:44 compute-0 nova_compute[117331]:       <driver name="vhost" rx_queue_size="512"/>
Oct 09 16:42:44 compute-0 nova_compute[117331]:       <mtu size="1442"/>
Oct 09 16:42:44 compute-0 nova_compute[117331]:       <target dev="tap75dea49d-ac"/>
Oct 09 16:42:44 compute-0 nova_compute[117331]:     </interface>
Oct 09 16:42:44 compute-0 nova_compute[117331]:     <serial type="pty">
Oct 09 16:42:44 compute-0 nova_compute[117331]:       <log file="/var/lib/nova/instances/31e9639e-ea7e-41f5-8bd3-2f0344062f99/console.log" append="off"/>
Oct 09 16:42:44 compute-0 nova_compute[117331]:     </serial>
Oct 09 16:42:44 compute-0 nova_compute[117331]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Oct 09 16:42:44 compute-0 nova_compute[117331]:     <video>
Oct 09 16:42:44 compute-0 nova_compute[117331]:       <model type="virtio"/>
Oct 09 16:42:44 compute-0 nova_compute[117331]:     </video>
Oct 09 16:42:44 compute-0 nova_compute[117331]:     <input type="tablet" bus="usb"/>
Oct 09 16:42:44 compute-0 nova_compute[117331]:     <rng model="virtio">
Oct 09 16:42:44 compute-0 nova_compute[117331]:       <backend model="random">/dev/urandom</backend>
Oct 09 16:42:44 compute-0 nova_compute[117331]:     </rng>
Oct 09 16:42:44 compute-0 nova_compute[117331]:     <controller type="pci" model="pcie-root"/>
Oct 09 16:42:44 compute-0 nova_compute[117331]:     <controller type="pci" model="pcie-root-port"/>
Oct 09 16:42:44 compute-0 nova_compute[117331]:     <controller type="pci" model="pcie-root-port"/>
Oct 09 16:42:44 compute-0 nova_compute[117331]:     <controller type="pci" model="pcie-root-port"/>
Oct 09 16:42:44 compute-0 nova_compute[117331]:     <controller type="pci" model="pcie-root-port"/>
Oct 09 16:42:44 compute-0 nova_compute[117331]:     <controller type="pci" model="pcie-root-port"/>
Oct 09 16:42:44 compute-0 nova_compute[117331]:     <controller type="pci" model="pcie-root-port"/>
Oct 09 16:42:44 compute-0 nova_compute[117331]:     <controller type="pci" model="pcie-root-port"/>
Oct 09 16:42:44 compute-0 nova_compute[117331]:     <controller type="pci" model="pcie-root-port"/>
Oct 09 16:42:44 compute-0 nova_compute[117331]:     <controller type="pci" model="pcie-root-port"/>
Oct 09 16:42:44 compute-0 nova_compute[117331]:     <controller type="pci" model="pcie-root-port"/>
Oct 09 16:42:44 compute-0 nova_compute[117331]:     <controller type="pci" model="pcie-root-port"/>
Oct 09 16:42:44 compute-0 nova_compute[117331]:     <controller type="pci" model="pcie-root-port"/>
Oct 09 16:42:44 compute-0 nova_compute[117331]:     <controller type="pci" model="pcie-root-port"/>
Oct 09 16:42:44 compute-0 nova_compute[117331]:     <controller type="pci" model="pcie-root-port"/>
Oct 09 16:42:44 compute-0 nova_compute[117331]:     <controller type="pci" model="pcie-root-port"/>
Oct 09 16:42:44 compute-0 nova_compute[117331]:     <controller type="pci" model="pcie-root-port"/>
Oct 09 16:42:44 compute-0 nova_compute[117331]:     <controller type="pci" model="pcie-root-port"/>
Oct 09 16:42:44 compute-0 nova_compute[117331]:     <controller type="pci" model="pcie-root-port"/>
Oct 09 16:42:44 compute-0 nova_compute[117331]:     <controller type="pci" model="pcie-root-port"/>
Oct 09 16:42:44 compute-0 nova_compute[117331]:     <controller type="pci" model="pcie-root-port"/>
Oct 09 16:42:44 compute-0 nova_compute[117331]:     <controller type="pci" model="pcie-root-port"/>
Oct 09 16:42:44 compute-0 nova_compute[117331]:     <controller type="pci" model="pcie-root-port"/>
Oct 09 16:42:44 compute-0 nova_compute[117331]:     <controller type="pci" model="pcie-root-port"/>
Oct 09 16:42:44 compute-0 nova_compute[117331]:     <controller type="pci" model="pcie-root-port"/>
Oct 09 16:42:44 compute-0 nova_compute[117331]:     <controller type="usb" index="0"/>
Oct 09 16:42:44 compute-0 nova_compute[117331]:     <memballoon model="virtio" autodeflate="on" freePageReporting="on">
Oct 09 16:42:44 compute-0 nova_compute[117331]:       <stats period="10"/>
Oct 09 16:42:44 compute-0 nova_compute[117331]:     </memballoon>
Oct 09 16:42:44 compute-0 nova_compute[117331]:   </devices>
Oct 09 16:42:44 compute-0 nova_compute[117331]: </domain>
Oct 09 16:42:44 compute-0 nova_compute[117331]:  _get_guest_xml /usr/lib/python3.12/site-packages/nova/virt/libvirt/driver.py:8052
Oct 09 16:42:44 compute-0 nova_compute[117331]: 2025-10-09 16:42:44.870 2 DEBUG nova.compute.manager [None req-759fa734-0ef0-49db-bf6d-f0afc3bde2c9 05384d72bc894b20b1cc128bf76382b2 fca25ca2f317463c909e30c8b2c188d1 - - default default] [instance: 31e9639e-ea7e-41f5-8bd3-2f0344062f99] Preparing to wait for external event network-vif-plugged-75dea49d-ac7b-45ed-8521-44a081d00648 prepare_for_instance_event /usr/lib/python3.12/site-packages/nova/compute/manager.py:307
Oct 09 16:42:44 compute-0 nova_compute[117331]: 2025-10-09 16:42:44.870 2 DEBUG oslo_concurrency.lockutils [None req-759fa734-0ef0-49db-bf6d-f0afc3bde2c9 05384d72bc894b20b1cc128bf76382b2 fca25ca2f317463c909e30c8b2c188d1 - - default default] Acquiring lock "31e9639e-ea7e-41f5-8bd3-2f0344062f99-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:405
Oct 09 16:42:44 compute-0 nova_compute[117331]: 2025-10-09 16:42:44.871 2 DEBUG oslo_concurrency.lockutils [None req-759fa734-0ef0-49db-bf6d-f0afc3bde2c9 05384d72bc894b20b1cc128bf76382b2 fca25ca2f317463c909e30c8b2c188d1 - - default default] Lock "31e9639e-ea7e-41f5-8bd3-2f0344062f99-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.001s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:410
Oct 09 16:42:44 compute-0 nova_compute[117331]: 2025-10-09 16:42:44.871 2 DEBUG oslo_concurrency.lockutils [None req-759fa734-0ef0-49db-bf6d-f0afc3bde2c9 05384d72bc894b20b1cc128bf76382b2 fca25ca2f317463c909e30c8b2c188d1 - - default default] Lock "31e9639e-ea7e-41f5-8bd3-2f0344062f99-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:424
Oct 09 16:42:44 compute-0 nova_compute[117331]: 2025-10-09 16:42:44.873 2 DEBUG nova.virt.libvirt.vif [None req-759fa734-0ef0-49db-bf6d-f0afc3bde2c9 05384d72bc894b20b1cc128bf76382b2 fca25ca2f317463c909e30c8b2c188d1 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,compute_id=1,config_drive='',created_at=2025-10-09T16:42:33Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description=None,display_name='tempest-TestExecuteZoneMigrationStrategy-server-1385446209',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testexecutezonemigrationstrategy-server-1385446209',id=32,image_ref='b7d6e0af-25e4-4227-9dc6-43143898ceee',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='fca25ca2f317463c909e30c8b2c188d1',ramdisk_id='',reservation_id='r-a21ifu1r',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,manager,admin,member',image_base_image_ref='b7d6e0af-25e4-4227-9dc6-43143898ceee',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestExecuteZoneMigrationStrategy-1067041161',owner_user_name='tempest-TestExecuteZoneMigrationStrategy-1067041161-project-admin'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-09T16:42:39Z,user_data=None,user_id='05384d72bc894b20b1cc128bf76382b2',uuid=31e9639e-ea7e-41f5-8bd3-2f0344062f99,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "75dea49d-ac7b-45ed-8521-44a081d00648", "address": "fa:16:3e:a9:96:a6", "network": {"id": "faa2f899-e3f1-48c6-ac37-859a6fb5c6cc", "bridge": "br-int", "label": "tempest-TestExecuteZoneMigrationStrategy-926478805-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "655616b8f80249ceab702bd3d943237d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap75dea49d-ac", "ovs_interfaceid": "75dea49d-ac7b-45ed-8521-44a081d00648", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.12/site-packages/nova/virt/libvirt/vif.py:721
Oct 09 16:42:44 compute-0 nova_compute[117331]: 2025-10-09 16:42:44.873 2 DEBUG nova.network.os_vif_util [None req-759fa734-0ef0-49db-bf6d-f0afc3bde2c9 05384d72bc894b20b1cc128bf76382b2 fca25ca2f317463c909e30c8b2c188d1 - - default default] Converting VIF {"id": "75dea49d-ac7b-45ed-8521-44a081d00648", "address": "fa:16:3e:a9:96:a6", "network": {"id": "faa2f899-e3f1-48c6-ac37-859a6fb5c6cc", "bridge": "br-int", "label": "tempest-TestExecuteZoneMigrationStrategy-926478805-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "655616b8f80249ceab702bd3d943237d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap75dea49d-ac", "ovs_interfaceid": "75dea49d-ac7b-45ed-8521-44a081d00648", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.12/site-packages/nova/network/os_vif_util.py:511
Oct 09 16:42:44 compute-0 nova_compute[117331]: 2025-10-09 16:42:44.874 2 DEBUG nova.network.os_vif_util [None req-759fa734-0ef0-49db-bf6d-f0afc3bde2c9 05384d72bc894b20b1cc128bf76382b2 fca25ca2f317463c909e30c8b2c188d1 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:a9:96:a6,bridge_name='br-int',has_traffic_filtering=True,id=75dea49d-ac7b-45ed-8521-44a081d00648,network=Network(faa2f899-e3f1-48c6-ac37-859a6fb5c6cc),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap75dea49d-ac') nova_to_osvif_vif /usr/lib/python3.12/site-packages/nova/network/os_vif_util.py:548
Oct 09 16:42:44 compute-0 nova_compute[117331]: 2025-10-09 16:42:44.875 2 DEBUG os_vif [None req-759fa734-0ef0-49db-bf6d-f0afc3bde2c9 05384d72bc894b20b1cc128bf76382b2 fca25ca2f317463c909e30c8b2c188d1 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:a9:96:a6,bridge_name='br-int',has_traffic_filtering=True,id=75dea49d-ac7b-45ed-8521-44a081d00648,network=Network(faa2f899-e3f1-48c6-ac37-859a6fb5c6cc),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap75dea49d-ac') plug /usr/lib/python3.12/site-packages/os_vif/__init__.py:76
Oct 09 16:42:44 compute-0 nova_compute[117331]: 2025-10-09 16:42:44.876 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 22 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 09 16:42:44 compute-0 nova_compute[117331]: 2025-10-09 16:42:44.877 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.12/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 09 16:42:44 compute-0 nova_compute[117331]: 2025-10-09 16:42:44.878 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.12/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 09 16:42:44 compute-0 nova_compute[117331]: 2025-10-09 16:42:44.879 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 22 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 09 16:42:44 compute-0 nova_compute[117331]: 2025-10-09 16:42:44.880 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbCreateCommand(_result=None, table=QoS, columns={'type': 'linux-noop', 'external_ids': {'id': 'ad376efd-3965-5506-9053-edadc3fdb902', '_type': 'linux-noop'}}, row=False) do_commit /usr/lib/python3.12/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 09 16:42:44 compute-0 nova_compute[117331]: 2025-10-09 16:42:44.886 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:248
Oct 09 16:42:44 compute-0 nova_compute[117331]: 2025-10-09 16:42:44.889 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 22 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 09 16:42:44 compute-0 nova_compute[117331]: 2025-10-09 16:42:44.889 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap75dea49d-ac, may_exist=True, interface_attrs={}) do_commit /usr/lib/python3.12/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 09 16:42:44 compute-0 nova_compute[117331]: 2025-10-09 16:42:44.890 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Port, record=tap75dea49d-ac, col_values=(('qos', UUID('9da3cdba-ee13-497b-a87c-364d229d1ec5')),), if_exists=True) do_commit /usr/lib/python3.12/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 09 16:42:44 compute-0 nova_compute[117331]: 2025-10-09 16:42:44.891 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=2): DbSetCommand(_result=None, table=Interface, record=tap75dea49d-ac, col_values=(('external_ids', {'iface-id': '75dea49d-ac7b-45ed-8521-44a081d00648', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:a9:96:a6', 'vm-uuid': '31e9639e-ea7e-41f5-8bd3-2f0344062f99'}),), if_exists=True) do_commit /usr/lib/python3.12/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 09 16:42:44 compute-0 NetworkManager[1028]: <info>  [1760028164.8938] manager: (tap75dea49d-ac): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/105)
Oct 09 16:42:44 compute-0 nova_compute[117331]: 2025-10-09 16:42:44.897 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:248
Oct 09 16:42:44 compute-0 nova_compute[117331]: 2025-10-09 16:42:44.902 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 09 16:42:44 compute-0 nova_compute[117331]: 2025-10-09 16:42:44.903 2 INFO os_vif [None req-759fa734-0ef0-49db-bf6d-f0afc3bde2c9 05384d72bc894b20b1cc128bf76382b2 fca25ca2f317463c909e30c8b2c188d1 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:a9:96:a6,bridge_name='br-int',has_traffic_filtering=True,id=75dea49d-ac7b-45ed-8521-44a081d00648,network=Network(faa2f899-e3f1-48c6-ac37-859a6fb5c6cc),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap75dea49d-ac')
Oct 09 16:42:45 compute-0 nova_compute[117331]: 2025-10-09 16:42:45.128 2 DEBUG oslo_service.periodic_task [None req-92a13bed-c081-4c7b-87f3-bafebefb79f2 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.12/site-packages/oslo_service/periodic_task.py:210
Oct 09 16:42:45 compute-0 nova_compute[117331]: 2025-10-09 16:42:45.128 2 DEBUG nova.compute.manager [None req-92a13bed-c081-4c7b-87f3-bafebefb79f2 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.12/site-packages/nova/compute/manager.py:11228
Oct 09 16:42:46 compute-0 nova_compute[117331]: 2025-10-09 16:42:46.307 2 DEBUG oslo_service.periodic_task [None req-92a13bed-c081-4c7b-87f3-bafebefb79f2 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.12/site-packages/oslo_service/periodic_task.py:210
Oct 09 16:42:46 compute-0 nova_compute[117331]: 2025-10-09 16:42:46.443 2 DEBUG nova.virt.libvirt.driver [None req-759fa734-0ef0-49db-bf6d-f0afc3bde2c9 05384d72bc894b20b1cc128bf76382b2 fca25ca2f317463c909e30c8b2c188d1 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.12/site-packages/nova/virt/libvirt/driver.py:13022
Oct 09 16:42:46 compute-0 nova_compute[117331]: 2025-10-09 16:42:46.445 2 DEBUG nova.virt.libvirt.driver [None req-759fa734-0ef0-49db-bf6d-f0afc3bde2c9 05384d72bc894b20b1cc128bf76382b2 fca25ca2f317463c909e30c8b2c188d1 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.12/site-packages/nova/virt/libvirt/driver.py:13022
Oct 09 16:42:46 compute-0 nova_compute[117331]: 2025-10-09 16:42:46.445 2 DEBUG nova.virt.libvirt.driver [None req-759fa734-0ef0-49db-bf6d-f0afc3bde2c9 05384d72bc894b20b1cc128bf76382b2 fca25ca2f317463c909e30c8b2c188d1 - - default default] No VIF found with MAC fa:16:3e:a9:96:a6, not building metadata _build_interface_metadata /usr/lib/python3.12/site-packages/nova/virt/libvirt/driver.py:12998
Oct 09 16:42:46 compute-0 nova_compute[117331]: 2025-10-09 16:42:46.446 2 INFO nova.virt.libvirt.driver [None req-759fa734-0ef0-49db-bf6d-f0afc3bde2c9 05384d72bc894b20b1cc128bf76382b2 fca25ca2f317463c909e30c8b2c188d1 - - default default] [instance: 31e9639e-ea7e-41f5-8bd3-2f0344062f99] Using config drive
Oct 09 16:42:46 compute-0 nova_compute[117331]: 2025-10-09 16:42:46.959 2 WARNING neutronclient.v2_0.client [None req-759fa734-0ef0-49db-bf6d-f0afc3bde2c9 05384d72bc894b20b1cc128bf76382b2 fca25ca2f317463c909e30c8b2c188d1 - - default default] The python binding code in neutronclient is deprecated in favor of OpenstackSDK, please use that as this will be removed in a future release.
Oct 09 16:42:47 compute-0 nova_compute[117331]: 2025-10-09 16:42:47.303 2 DEBUG oslo_service.periodic_task [None req-92a13bed-c081-4c7b-87f3-bafebefb79f2 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.12/site-packages/oslo_service/periodic_task.py:210
Oct 09 16:42:47 compute-0 nova_compute[117331]: 2025-10-09 16:42:47.480 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 09 16:42:47 compute-0 nova_compute[117331]: 2025-10-09 16:42:47.557 2 INFO nova.virt.libvirt.driver [None req-759fa734-0ef0-49db-bf6d-f0afc3bde2c9 05384d72bc894b20b1cc128bf76382b2 fca25ca2f317463c909e30c8b2c188d1 - - default default] [instance: 31e9639e-ea7e-41f5-8bd3-2f0344062f99] Creating config drive at /var/lib/nova/instances/31e9639e-ea7e-41f5-8bd3-2f0344062f99/disk.config
Oct 09 16:42:47 compute-0 nova_compute[117331]: 2025-10-09 16:42:47.568 2 DEBUG oslo_concurrency.processutils [None req-759fa734-0ef0-49db-bf6d-f0afc3bde2c9 05384d72bc894b20b1cc128bf76382b2 fca25ca2f317463c909e30c8b2c188d1 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/31e9639e-ea7e-41f5-8bd3-2f0344062f99/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 32.1.0-0.20251008143712.076498e.el10 -quiet -J -r -V config-2 /tmp/tmp0p0njdv9 execute /usr/lib/python3.12/site-packages/oslo_concurrency/processutils.py:349
Oct 09 16:42:47 compute-0 nova_compute[117331]: 2025-10-09 16:42:47.714 2 DEBUG oslo_concurrency.processutils [None req-759fa734-0ef0-49db-bf6d-f0afc3bde2c9 05384d72bc894b20b1cc128bf76382b2 fca25ca2f317463c909e30c8b2c188d1 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/31e9639e-ea7e-41f5-8bd3-2f0344062f99/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 32.1.0-0.20251008143712.076498e.el10 -quiet -J -r -V config-2 /tmp/tmp0p0njdv9" returned: 0 in 0.145s execute /usr/lib/python3.12/site-packages/oslo_concurrency/processutils.py:372
Oct 09 16:42:47 compute-0 kernel: tap75dea49d-ac: entered promiscuous mode
Oct 09 16:42:47 compute-0 ovn_controller[19752]: 2025-10-09T16:42:47Z|00289|binding|INFO|Claiming lport 75dea49d-ac7b-45ed-8521-44a081d00648 for this chassis.
Oct 09 16:42:47 compute-0 ovn_controller[19752]: 2025-10-09T16:42:47Z|00290|binding|INFO|75dea49d-ac7b-45ed-8521-44a081d00648: Claiming fa:16:3e:a9:96:a6 10.100.0.9
Oct 09 16:42:47 compute-0 NetworkManager[1028]: <info>  [1760028167.8104] manager: (tap75dea49d-ac): new Tun device (/org/freedesktop/NetworkManager/Devices/106)
Oct 09 16:42:47 compute-0 nova_compute[117331]: 2025-10-09 16:42:47.810 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 09 16:42:47 compute-0 nova_compute[117331]: 2025-10-09 16:42:47.817 2 DEBUG oslo_service.periodic_task [None req-92a13bed-c081-4c7b-87f3-bafebefb79f2 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.12/site-packages/oslo_service/periodic_task.py:210
Oct 09 16:42:47 compute-0 ovn_metadata_agent[28608]: 2025-10-09 16:42:47.819 28613 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:a9:96:a6 10.100.0.9'], port_security=['fa:16:3e:a9:96:a6 10.100.0.9'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.9/28', 'neutron:device_id': '31e9639e-ea7e-41f5-8bd3-2f0344062f99', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-faa2f899-e3f1-48c6-ac37-859a6fb5c6cc', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'fca25ca2f317463c909e30c8b2c188d1', 'neutron:revision_number': '4', 'neutron:security_group_ids': 'a3264ad7-df47-48af-b997-2022c89ca53a', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=0a2d5ec5-dae5-4df8-b843-efe172a6e533, chassis=[<ovs.db.idl.Row object at 0x7f37b2576840>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f37b2576840>], logical_port=75dea49d-ac7b-45ed-8521-44a081d00648) old=Port_Binding(chassis=[]) matches /usr/lib/python3.12/site-packages/ovsdbapp/backend/ovs_idl/event.py:55
Oct 09 16:42:47 compute-0 ovn_metadata_agent[28608]: 2025-10-09 16:42:47.820 28613 INFO neutron.agent.ovn.metadata.agent [-] Port 75dea49d-ac7b-45ed-8521-44a081d00648 in datapath faa2f899-e3f1-48c6-ac37-859a6fb5c6cc bound to our chassis
Oct 09 16:42:47 compute-0 ovn_metadata_agent[28608]: 2025-10-09 16:42:47.822 28613 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network faa2f899-e3f1-48c6-ac37-859a6fb5c6cc
Oct 09 16:42:47 compute-0 ovn_controller[19752]: 2025-10-09T16:42:47Z|00291|binding|INFO|Setting lport 75dea49d-ac7b-45ed-8521-44a081d00648 ovn-installed in OVS
Oct 09 16:42:47 compute-0 ovn_controller[19752]: 2025-10-09T16:42:47Z|00292|binding|INFO|Setting lport 75dea49d-ac7b-45ed-8521-44a081d00648 up in Southbound
Oct 09 16:42:47 compute-0 nova_compute[117331]: 2025-10-09 16:42:47.838 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 09 16:42:47 compute-0 nova_compute[117331]: 2025-10-09 16:42:47.841 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 09 16:42:47 compute-0 ovn_metadata_agent[28608]: 2025-10-09 16:42:47.842 139687 DEBUG oslo.privsep.daemon [-] privsep: reply[69db092e-f29d-4077-924e-aef55bf5df90]: (4, False) _call_back /usr/lib/python3.12/site-packages/oslo_privsep/daemon.py:515
Oct 09 16:42:47 compute-0 ovn_metadata_agent[28608]: 2025-10-09 16:42:47.843 28613 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tapfaa2f899-e1 in ovnmeta-faa2f899-e3f1-48c6-ac37-859a6fb5c6cc namespace provision_datapath /usr/lib/python3.12/site-packages/neutron/agent/ovn/metadata/agent.py:794
Oct 09 16:42:47 compute-0 ovn_metadata_agent[28608]: 2025-10-09 16:42:47.844 139687 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tapfaa2f899-e0 not found in namespace None get_link_id /usr/lib/python3.12/site-packages/neutron/privileged/agent/linux/ip_lib.py:203
Oct 09 16:42:47 compute-0 ovn_metadata_agent[28608]: 2025-10-09 16:42:47.844 139687 DEBUG oslo.privsep.daemon [-] privsep: reply[faf03e1f-3a17-46e9-b09d-1458429c2f88]: (4, False) _call_back /usr/lib/python3.12/site-packages/oslo_privsep/daemon.py:515
Oct 09 16:42:47 compute-0 ovn_metadata_agent[28608]: 2025-10-09 16:42:47.845 139687 DEBUG oslo.privsep.daemon [-] privsep: reply[a745c29d-04db-4c13-8a23-9cd14fb63c86]: (4, False) _call_back /usr/lib/python3.12/site-packages/oslo_privsep/daemon.py:515
Oct 09 16:42:47 compute-0 systemd-udevd[154753]: Network interface NamePolicy= disabled on kernel command line.
Oct 09 16:42:47 compute-0 systemd-machined[77487]: New machine qemu-25-instance-00000020.
Oct 09 16:42:47 compute-0 ovn_metadata_agent[28608]: 2025-10-09 16:42:47.860 28727 DEBUG oslo.privsep.daemon [-] privsep: reply[3434a583-d309-4a24-a75c-2e83863cbf71]: (4, None) _call_back /usr/lib/python3.12/site-packages/oslo_privsep/daemon.py:515
Oct 09 16:42:47 compute-0 NetworkManager[1028]: <info>  [1760028167.8656] device (tap75dea49d-ac): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Oct 09 16:42:47 compute-0 NetworkManager[1028]: <info>  [1760028167.8665] device (tap75dea49d-ac): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Oct 09 16:42:47 compute-0 ovn_metadata_agent[28608]: 2025-10-09 16:42:47.867 139687 DEBUG oslo.privsep.daemon [-] privsep: reply[7e865517-d512-4eeb-a19c-c65c894df2d2]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.12/site-packages/oslo_privsep/daemon.py:515
Oct 09 16:42:47 compute-0 systemd[1]: Started Virtual Machine qemu-25-instance-00000020.
Oct 09 16:42:47 compute-0 ovn_metadata_agent[28608]: 2025-10-09 16:42:47.900 141048 DEBUG oslo.privsep.daemon [-] privsep: reply[8cb06812-c2e4-4738-b7cb-1826feccc416]: (4, None) _call_back /usr/lib/python3.12/site-packages/oslo_privsep/daemon.py:515
Oct 09 16:42:47 compute-0 ovn_metadata_agent[28608]: 2025-10-09 16:42:47.905 139687 DEBUG oslo.privsep.daemon [-] privsep: reply[850b2d18-72a7-4380-b071-19e1bcb95740]: (4, None) _call_back /usr/lib/python3.12/site-packages/oslo_privsep/daemon.py:515
Oct 09 16:42:47 compute-0 NetworkManager[1028]: <info>  [1760028167.9066] manager: (tapfaa2f899-e0): new Veth device (/org/freedesktop/NetworkManager/Devices/107)
Oct 09 16:42:47 compute-0 podman[154729]: 2025-10-09 16:42:47.919387483 +0000 UTC m=+0.126796099 container health_status 0e966c7a283689daf23c5db5017f23f685ee836319240188a9f70ddd246563c5 (image=38.102.83.66:5001/podified-master-centos10/openstack-neutron-metadata-agent-ovn:watcher_latest, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.4, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=watcher_latest, config_id=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.build-date=20251007, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': '38.102.83.66:5001/podified-master-centos10/openstack-neutron-metadata-agent-ovn:watcher_latest', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent)
Oct 09 16:42:47 compute-0 ovn_metadata_agent[28608]: 2025-10-09 16:42:47.941 141048 DEBUG oslo.privsep.daemon [-] privsep: reply[a1bae74d-2ce7-44d1-ac14-7e99f3438d59]: (4, None) _call_back /usr/lib/python3.12/site-packages/oslo_privsep/daemon.py:515
Oct 09 16:42:47 compute-0 podman[154731]: 2025-10-09 16:42:47.943527841 +0000 UTC m=+0.138345077 container health_status 8227728d23f374cb9528be6ddf01dff38d8c792912a17b0d137932a18390b820 (image=38.102.83.66:5001/podified-master-centos10/openstack-iscsid:watcher_latest, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, io.buildah.version=1.41.4, org.label-schema.schema-version=1.0, tcib_build_tag=watcher_latest, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': '38.102.83.66:5001/podified-master-centos10/openstack-iscsid:watcher_latest', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, container_name=iscsid, config_id=iscsid, managed_by=edpm_ansible, org.label-schema.build-date=20251007)
Oct 09 16:42:47 compute-0 ovn_metadata_agent[28608]: 2025-10-09 16:42:47.944 141048 DEBUG oslo.privsep.daemon [-] privsep: reply[c60cfb39-901e-4e44-bbf2-f88bf5240e6a]: (4, None) _call_back /usr/lib/python3.12/site-packages/oslo_privsep/daemon.py:515
Oct 09 16:42:47 compute-0 NetworkManager[1028]: <info>  [1760028167.9693] device (tapfaa2f899-e0): carrier: link connected
Oct 09 16:42:47 compute-0 ovn_metadata_agent[28608]: 2025-10-09 16:42:47.976 141048 DEBUG oslo.privsep.daemon [-] privsep: reply[9f519d60-6637-4fed-ac2c-24b2dfb7ee77]: (4, None) _call_back /usr/lib/python3.12/site-packages/oslo_privsep/daemon.py:515
Oct 09 16:42:48 compute-0 ovn_metadata_agent[28608]: 2025-10-09 16:42:47.999 139687 DEBUG oslo.privsep.daemon [-] privsep: reply[3820d4af-4713-4505-94fa-0c1223254d17]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapfaa2f899-e1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:6e:79:49'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 81], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 311157, 'reachable_time': 27856, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 154807, 'error': None, 'target': 'ovnmeta-faa2f899-e3f1-48c6-ac37-859a6fb5c6cc', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.12/site-packages/oslo_privsep/daemon.py:515
Oct 09 16:42:48 compute-0 ovn_metadata_agent[28608]: 2025-10-09 16:42:48.014 139687 DEBUG oslo.privsep.daemon [-] privsep: reply[740e2492-4964-4fae-a82d-5d0cf30c2cd6]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe6e:7949'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 311157, 'tstamp': 311157}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 154808, 'error': None, 'target': 'ovnmeta-faa2f899-e3f1-48c6-ac37-859a6fb5c6cc', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.12/site-packages/oslo_privsep/daemon.py:515
Oct 09 16:42:48 compute-0 ovn_metadata_agent[28608]: 2025-10-09 16:42:48.033 139687 DEBUG oslo.privsep.daemon [-] privsep: reply[f4868ee6-498f-41a5-aaa3-ab6c403c5aa7]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapfaa2f899-e1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:6e:79:49'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 81], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 311157, 'reachable_time': 27856, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 154809, 'error': None, 'target': 'ovnmeta-faa2f899-e3f1-48c6-ac37-859a6fb5c6cc', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.12/site-packages/oslo_privsep/daemon.py:515
Oct 09 16:42:48 compute-0 ovn_metadata_agent[28608]: 2025-10-09 16:42:48.067 139687 DEBUG oslo.privsep.daemon [-] privsep: reply[8961c6f4-bba3-4b33-8e92-a0279a8ff26e]: (4, None) _call_back /usr/lib/python3.12/site-packages/oslo_privsep/daemon.py:515
Oct 09 16:42:48 compute-0 ovn_metadata_agent[28608]: 2025-10-09 16:42:48.128 139687 DEBUG oslo.privsep.daemon [-] privsep: reply[32f1b545-e617-48a7-a10e-909419be23f9]: (4, None) _call_back /usr/lib/python3.12/site-packages/oslo_privsep/daemon.py:515
Oct 09 16:42:48 compute-0 ovn_metadata_agent[28608]: 2025-10-09 16:42:48.130 28613 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapfaa2f899-e0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.12/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 09 16:42:48 compute-0 ovn_metadata_agent[28608]: 2025-10-09 16:42:48.130 28613 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.12/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 09 16:42:48 compute-0 ovn_metadata_agent[28608]: 2025-10-09 16:42:48.131 28613 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapfaa2f899-e0, may_exist=True, interface_attrs={}) do_commit /usr/lib/python3.12/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 09 16:42:48 compute-0 NetworkManager[1028]: <info>  [1760028168.1334] manager: (tapfaa2f899-e0): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/108)
Oct 09 16:42:48 compute-0 kernel: tapfaa2f899-e0: entered promiscuous mode
Oct 09 16:42:48 compute-0 nova_compute[117331]: 2025-10-09 16:42:48.134 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 09 16:42:48 compute-0 ovn_metadata_agent[28608]: 2025-10-09 16:42:48.142 28613 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapfaa2f899-e0, col_values=(('external_ids', {'iface-id': '71d0467d-0f77-43e1-a20d-172a49f9f770'}),), if_exists=True) do_commit /usr/lib/python3.12/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 09 16:42:48 compute-0 nova_compute[117331]: 2025-10-09 16:42:48.143 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 09 16:42:48 compute-0 ovn_controller[19752]: 2025-10-09T16:42:48Z|00293|binding|INFO|Releasing lport 71d0467d-0f77-43e1-a20d-172a49f9f770 from this chassis (sb_readonly=0)
Oct 09 16:42:48 compute-0 nova_compute[117331]: 2025-10-09 16:42:48.144 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 09 16:42:48 compute-0 ovn_metadata_agent[28608]: 2025-10-09 16:42:48.161 139687 DEBUG oslo.privsep.daemon [-] privsep: reply[8b56c338-cf68-43c0-9389-bc864a03ee12]: (4, '') _call_back /usr/lib/python3.12/site-packages/oslo_privsep/daemon.py:515
Oct 09 16:42:48 compute-0 ovn_metadata_agent[28608]: 2025-10-09 16:42:48.162 28613 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/faa2f899-e3f1-48c6-ac37-859a6fb5c6cc.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/faa2f899-e3f1-48c6-ac37-859a6fb5c6cc.pid.haproxy' get_value_from_file /usr/lib/python3.12/site-packages/neutron/agent/linux/utils.py:268
Oct 09 16:42:48 compute-0 ovn_metadata_agent[28608]: 2025-10-09 16:42:48.162 28613 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/faa2f899-e3f1-48c6-ac37-859a6fb5c6cc.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/faa2f899-e3f1-48c6-ac37-859a6fb5c6cc.pid.haproxy' get_value_from_file /usr/lib/python3.12/site-packages/neutron/agent/linux/utils.py:268
Oct 09 16:42:48 compute-0 ovn_metadata_agent[28608]: 2025-10-09 16:42:48.163 28613 DEBUG neutron.agent.linux.external_process [-] No haproxy process started for faa2f899-e3f1-48c6-ac37-859a6fb5c6cc disable /usr/lib/python3.12/site-packages/neutron/agent/linux/external_process.py:173
Oct 09 16:42:48 compute-0 ovn_metadata_agent[28608]: 2025-10-09 16:42:48.163 28613 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/faa2f899-e3f1-48c6-ac37-859a6fb5c6cc.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/faa2f899-e3f1-48c6-ac37-859a6fb5c6cc.pid.haproxy' get_value_from_file /usr/lib/python3.12/site-packages/neutron/agent/linux/utils.py:268
Oct 09 16:42:48 compute-0 ovn_metadata_agent[28608]: 2025-10-09 16:42:48.164 139687 DEBUG oslo.privsep.daemon [-] privsep: reply[f57942c4-ce0d-48c6-88ce-e1d66c811145]: (4, None) _call_back /usr/lib/python3.12/site-packages/oslo_privsep/daemon.py:515
Oct 09 16:42:48 compute-0 nova_compute[117331]: 2025-10-09 16:42:48.166 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 09 16:42:48 compute-0 ovn_metadata_agent[28608]: 2025-10-09 16:42:48.166 28613 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/faa2f899-e3f1-48c6-ac37-859a6fb5c6cc.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/faa2f899-e3f1-48c6-ac37-859a6fb5c6cc.pid.haproxy' get_value_from_file /usr/lib/python3.12/site-packages/neutron/agent/linux/utils.py:268
Oct 09 16:42:48 compute-0 ovn_metadata_agent[28608]: 2025-10-09 16:42:48.167 139687 DEBUG oslo.privsep.daemon [-] privsep: reply[3343a7da-b832-4bf1-8375-8f18587f9c55]: (4, None) _call_back /usr/lib/python3.12/site-packages/oslo_privsep/daemon.py:515
Oct 09 16:42:48 compute-0 ovn_metadata_agent[28608]: 2025-10-09 16:42:48.168 28613 DEBUG neutron.agent.metadata.driver_base [-] haproxy_cfg = 
Oct 09 16:42:48 compute-0 ovn_metadata_agent[28608]: global
Oct 09 16:42:48 compute-0 ovn_metadata_agent[28608]:     log         /dev/log local0 debug
Oct 09 16:42:48 compute-0 ovn_metadata_agent[28608]:     log-tag     haproxy-metadata-proxy-faa2f899-e3f1-48c6-ac37-859a6fb5c6cc
Oct 09 16:42:48 compute-0 ovn_metadata_agent[28608]:     user        root
Oct 09 16:42:48 compute-0 ovn_metadata_agent[28608]:     group       root
Oct 09 16:42:48 compute-0 ovn_metadata_agent[28608]:     maxconn     1024
Oct 09 16:42:48 compute-0 ovn_metadata_agent[28608]:     pidfile     /var/lib/neutron/external/pids/faa2f899-e3f1-48c6-ac37-859a6fb5c6cc.pid.haproxy
Oct 09 16:42:48 compute-0 ovn_metadata_agent[28608]:     daemon
Oct 09 16:42:48 compute-0 ovn_metadata_agent[28608]: 
Oct 09 16:42:48 compute-0 ovn_metadata_agent[28608]: defaults
Oct 09 16:42:48 compute-0 ovn_metadata_agent[28608]:     log global
Oct 09 16:42:48 compute-0 ovn_metadata_agent[28608]:     mode http
Oct 09 16:42:48 compute-0 ovn_metadata_agent[28608]:     option httplog
Oct 09 16:42:48 compute-0 ovn_metadata_agent[28608]:     option dontlognull
Oct 09 16:42:48 compute-0 ovn_metadata_agent[28608]:     option http-server-close
Oct 09 16:42:48 compute-0 ovn_metadata_agent[28608]:     option forwardfor
Oct 09 16:42:48 compute-0 ovn_metadata_agent[28608]:     retries                 3
Oct 09 16:42:48 compute-0 ovn_metadata_agent[28608]:     timeout http-request    30s
Oct 09 16:42:48 compute-0 ovn_metadata_agent[28608]:     timeout connect         30s
Oct 09 16:42:48 compute-0 ovn_metadata_agent[28608]:     timeout client          32s
Oct 09 16:42:48 compute-0 ovn_metadata_agent[28608]:     timeout server          32s
Oct 09 16:42:48 compute-0 ovn_metadata_agent[28608]:     timeout http-keep-alive 30s
Oct 09 16:42:48 compute-0 ovn_metadata_agent[28608]: 
Oct 09 16:42:48 compute-0 ovn_metadata_agent[28608]: listen listener
Oct 09 16:42:48 compute-0 ovn_metadata_agent[28608]:     bind 169.254.169.254:80
Oct 09 16:42:48 compute-0 ovn_metadata_agent[28608]:     
Oct 09 16:42:48 compute-0 ovn_metadata_agent[28608]:     server metadata /var/lib/neutron/metadata_proxy
Oct 09 16:42:48 compute-0 ovn_metadata_agent[28608]: 
Oct 09 16:42:48 compute-0 ovn_metadata_agent[28608]:     http-request add-header X-OVN-Network-ID faa2f899-e3f1-48c6-ac37-859a6fb5c6cc
Oct 09 16:42:48 compute-0 ovn_metadata_agent[28608]:  create_config_file /usr/lib/python3.12/site-packages/neutron/agent/metadata/driver_base.py:155
Oct 09 16:42:48 compute-0 ovn_metadata_agent[28608]: 2025-10-09 16:42:48.169 28613 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-faa2f899-e3f1-48c6-ac37-859a6fb5c6cc', 'env', 'PROCESS_TAG=haproxy-faa2f899-e3f1-48c6-ac37-859a6fb5c6cc', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/faa2f899-e3f1-48c6-ac37-859a6fb5c6cc.conf'] create_process /usr/lib/python3.12/site-packages/neutron/agent/linux/utils.py:84
Oct 09 16:42:48 compute-0 nova_compute[117331]: 2025-10-09 16:42:48.306 2 DEBUG oslo_service.periodic_task [None req-92a13bed-c081-4c7b-87f3-bafebefb79f2 - - - - - -] Running periodic task ComputeManager._run_pending_deletes run_periodic_tasks /usr/lib/python3.12/site-packages/oslo_service/periodic_task.py:210
Oct 09 16:42:48 compute-0 nova_compute[117331]: 2025-10-09 16:42:48.307 2 DEBUG nova.compute.manager [None req-92a13bed-c081-4c7b-87f3-bafebefb79f2 - - - - - -] Cleaning up deleted instances _run_pending_deletes /usr/lib/python3.12/site-packages/nova/compute/manager.py:11909
Oct 09 16:42:48 compute-0 nova_compute[117331]: 2025-10-09 16:42:48.623 2 DEBUG nova.compute.manager [req-b477c15c-1ec4-4dda-99ec-f827cbba9ff4 req-62ca14b2-c598-402c-b587-0309db57516d ee15961ad2be4bada6c106ac4f69f422 c076c42271004e7aa86e84416e33f826 - - default default] [instance: 31e9639e-ea7e-41f5-8bd3-2f0344062f99] Received event network-vif-plugged-75dea49d-ac7b-45ed-8521-44a081d00648 external_instance_event /usr/lib/python3.12/site-packages/nova/compute/manager.py:11812
Oct 09 16:42:48 compute-0 nova_compute[117331]: 2025-10-09 16:42:48.624 2 DEBUG oslo_concurrency.lockutils [req-b477c15c-1ec4-4dda-99ec-f827cbba9ff4 req-62ca14b2-c598-402c-b587-0309db57516d ee15961ad2be4bada6c106ac4f69f422 c076c42271004e7aa86e84416e33f826 - - default default] Acquiring lock "31e9639e-ea7e-41f5-8bd3-2f0344062f99-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:405
Oct 09 16:42:48 compute-0 nova_compute[117331]: 2025-10-09 16:42:48.625 2 DEBUG oslo_concurrency.lockutils [req-b477c15c-1ec4-4dda-99ec-f827cbba9ff4 req-62ca14b2-c598-402c-b587-0309db57516d ee15961ad2be4bada6c106ac4f69f422 c076c42271004e7aa86e84416e33f826 - - default default] Lock "31e9639e-ea7e-41f5-8bd3-2f0344062f99-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:410
Oct 09 16:42:48 compute-0 nova_compute[117331]: 2025-10-09 16:42:48.625 2 DEBUG oslo_concurrency.lockutils [req-b477c15c-1ec4-4dda-99ec-f827cbba9ff4 req-62ca14b2-c598-402c-b587-0309db57516d ee15961ad2be4bada6c106ac4f69f422 c076c42271004e7aa86e84416e33f826 - - default default] Lock "31e9639e-ea7e-41f5-8bd3-2f0344062f99-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:424
Oct 09 16:42:48 compute-0 nova_compute[117331]: 2025-10-09 16:42:48.625 2 DEBUG nova.compute.manager [req-b477c15c-1ec4-4dda-99ec-f827cbba9ff4 req-62ca14b2-c598-402c-b587-0309db57516d ee15961ad2be4bada6c106ac4f69f422 c076c42271004e7aa86e84416e33f826 - - default default] [instance: 31e9639e-ea7e-41f5-8bd3-2f0344062f99] Processing event network-vif-plugged-75dea49d-ac7b-45ed-8521-44a081d00648 _process_instance_event /usr/lib/python3.12/site-packages/nova/compute/manager.py:11572
Oct 09 16:42:48 compute-0 podman[154841]: 2025-10-09 16:42:48.639167716 +0000 UTC m=+0.089895888 container create e82e36b6d13e903c7a63064a3fbcc5731eb7e1705a6a1ef6aa0e714bf6069afa (image=38.102.83.66:5001/podified-master-centos10/openstack-neutron-metadata-agent-ovn:watcher_latest, name=neutron-haproxy-ovnmeta-faa2f899-e3f1-48c6-ac37-859a6fb5c6cc, org.label-schema.build-date=20251007, tcib_build_tag=watcher_latest, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, io.buildah.version=1.41.4, maintainer=OpenStack Kubernetes Operator team)
Oct 09 16:42:48 compute-0 systemd[1]: Started libpod-conmon-e82e36b6d13e903c7a63064a3fbcc5731eb7e1705a6a1ef6aa0e714bf6069afa.scope.
Oct 09 16:42:48 compute-0 podman[154841]: 2025-10-09 16:42:48.59431045 +0000 UTC m=+0.045038722 image pull 4e15c189a95c5dcd1203c76fe304de5c8f8616da9a258ca168cc54a517f49b87 38.102.83.66:5001/podified-master-centos10/openstack-neutron-metadata-agent-ovn:watcher_latest
Oct 09 16:42:48 compute-0 systemd[1]: Started libcrun container.
Oct 09 16:42:48 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/53bdfca7c784173200ba835c343cae764c285f15e713fac7f44d9e5bfca32170/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Oct 09 16:42:48 compute-0 podman[154841]: 2025-10-09 16:42:48.710285546 +0000 UTC m=+0.161013818 container init e82e36b6d13e903c7a63064a3fbcc5731eb7e1705a6a1ef6aa0e714bf6069afa (image=38.102.83.66:5001/podified-master-centos10/openstack-neutron-metadata-agent-ovn:watcher_latest, name=neutron-haproxy-ovnmeta-faa2f899-e3f1-48c6-ac37-859a6fb5c6cc, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=watcher_latest, io.buildah.version=1.41.4, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251007)
Oct 09 16:42:48 compute-0 podman[154841]: 2025-10-09 16:42:48.723025171 +0000 UTC m=+0.173753353 container start e82e36b6d13e903c7a63064a3fbcc5731eb7e1705a6a1ef6aa0e714bf6069afa (image=38.102.83.66:5001/podified-master-centos10/openstack-neutron-metadata-agent-ovn:watcher_latest, name=neutron-haproxy-ovnmeta-faa2f899-e3f1-48c6-ac37-859a6fb5c6cc, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251007, org.label-schema.schema-version=1.0, tcib_build_tag=watcher_latest, tcib_managed=true, org.label-schema.license=GPLv2, io.buildah.version=1.41.4, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 10 Base Image)
Oct 09 16:42:48 compute-0 neutron-haproxy-ovnmeta-faa2f899-e3f1-48c6-ac37-859a6fb5c6cc[154862]: [NOTICE]   (154866) : New worker (154868) forked
Oct 09 16:42:48 compute-0 neutron-haproxy-ovnmeta-faa2f899-e3f1-48c6-ac37-859a6fb5c6cc[154862]: [NOTICE]   (154866) : Loading success.
Oct 09 16:42:48 compute-0 nova_compute[117331]: 2025-10-09 16:42:48.815 2 DEBUG nova.compute.manager [None req-92a13bed-c081-4c7b-87f3-bafebefb79f2 - - - - - -] There are 0 instances to clean _run_pending_deletes /usr/lib/python3.12/site-packages/nova/compute/manager.py:11918
Oct 09 16:42:48 compute-0 nova_compute[117331]: 2025-10-09 16:42:48.816 2 DEBUG oslo_service.periodic_task [None req-92a13bed-c081-4c7b-87f3-bafebefb79f2 - - - - - -] Running periodic task ComputeManager._cleanup_expired_console_auth_tokens run_periodic_tasks /usr/lib/python3.12/site-packages/oslo_service/periodic_task.py:210
Oct 09 16:42:49 compute-0 nova_compute[117331]: 2025-10-09 16:42:49.096 2 DEBUG nova.compute.manager [None req-759fa734-0ef0-49db-bf6d-f0afc3bde2c9 05384d72bc894b20b1cc128bf76382b2 fca25ca2f317463c909e30c8b2c188d1 - - default default] [instance: 31e9639e-ea7e-41f5-8bd3-2f0344062f99] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.12/site-packages/nova/compute/manager.py:602
Oct 09 16:42:49 compute-0 nova_compute[117331]: 2025-10-09 16:42:49.100 2 DEBUG nova.virt.libvirt.driver [None req-759fa734-0ef0-49db-bf6d-f0afc3bde2c9 05384d72bc894b20b1cc128bf76382b2 fca25ca2f317463c909e30c8b2c188d1 - - default default] [instance: 31e9639e-ea7e-41f5-8bd3-2f0344062f99] Guest created on hypervisor spawn /usr/lib/python3.12/site-packages/nova/virt/libvirt/driver.py:4816
Oct 09 16:42:49 compute-0 nova_compute[117331]: 2025-10-09 16:42:49.103 2 INFO nova.virt.libvirt.driver [-] [instance: 31e9639e-ea7e-41f5-8bd3-2f0344062f99] Instance spawned successfully.
Oct 09 16:42:49 compute-0 nova_compute[117331]: 2025-10-09 16:42:49.104 2 DEBUG nova.virt.libvirt.driver [None req-759fa734-0ef0-49db-bf6d-f0afc3bde2c9 05384d72bc894b20b1cc128bf76382b2 fca25ca2f317463c909e30c8b2c188d1 - - default default] [instance: 31e9639e-ea7e-41f5-8bd3-2f0344062f99] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.12/site-packages/nova/virt/libvirt/driver.py:985
Oct 09 16:42:49 compute-0 nova_compute[117331]: 2025-10-09 16:42:49.619 2 DEBUG nova.virt.libvirt.driver [None req-759fa734-0ef0-49db-bf6d-f0afc3bde2c9 05384d72bc894b20b1cc128bf76382b2 fca25ca2f317463c909e30c8b2c188d1 - - default default] [instance: 31e9639e-ea7e-41f5-8bd3-2f0344062f99] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.12/site-packages/nova/virt/libvirt/driver.py:1014
Oct 09 16:42:49 compute-0 nova_compute[117331]: 2025-10-09 16:42:49.620 2 DEBUG nova.virt.libvirt.driver [None req-759fa734-0ef0-49db-bf6d-f0afc3bde2c9 05384d72bc894b20b1cc128bf76382b2 fca25ca2f317463c909e30c8b2c188d1 - - default default] [instance: 31e9639e-ea7e-41f5-8bd3-2f0344062f99] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.12/site-packages/nova/virt/libvirt/driver.py:1014
Oct 09 16:42:49 compute-0 nova_compute[117331]: 2025-10-09 16:42:49.621 2 DEBUG nova.virt.libvirt.driver [None req-759fa734-0ef0-49db-bf6d-f0afc3bde2c9 05384d72bc894b20b1cc128bf76382b2 fca25ca2f317463c909e30c8b2c188d1 - - default default] [instance: 31e9639e-ea7e-41f5-8bd3-2f0344062f99] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.12/site-packages/nova/virt/libvirt/driver.py:1014
Oct 09 16:42:49 compute-0 nova_compute[117331]: 2025-10-09 16:42:49.621 2 DEBUG nova.virt.libvirt.driver [None req-759fa734-0ef0-49db-bf6d-f0afc3bde2c9 05384d72bc894b20b1cc128bf76382b2 fca25ca2f317463c909e30c8b2c188d1 - - default default] [instance: 31e9639e-ea7e-41f5-8bd3-2f0344062f99] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.12/site-packages/nova/virt/libvirt/driver.py:1014
Oct 09 16:42:49 compute-0 nova_compute[117331]: 2025-10-09 16:42:49.622 2 DEBUG nova.virt.libvirt.driver [None req-759fa734-0ef0-49db-bf6d-f0afc3bde2c9 05384d72bc894b20b1cc128bf76382b2 fca25ca2f317463c909e30c8b2c188d1 - - default default] [instance: 31e9639e-ea7e-41f5-8bd3-2f0344062f99] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.12/site-packages/nova/virt/libvirt/driver.py:1014
Oct 09 16:42:49 compute-0 nova_compute[117331]: 2025-10-09 16:42:49.623 2 DEBUG nova.virt.libvirt.driver [None req-759fa734-0ef0-49db-bf6d-f0afc3bde2c9 05384d72bc894b20b1cc128bf76382b2 fca25ca2f317463c909e30c8b2c188d1 - - default default] [instance: 31e9639e-ea7e-41f5-8bd3-2f0344062f99] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.12/site-packages/nova/virt/libvirt/driver.py:1014
Oct 09 16:42:49 compute-0 nova_compute[117331]: 2025-10-09 16:42:49.894 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 09 16:42:50 compute-0 nova_compute[117331]: 2025-10-09 16:42:50.134 2 INFO nova.compute.manager [None req-759fa734-0ef0-49db-bf6d-f0afc3bde2c9 05384d72bc894b20b1cc128bf76382b2 fca25ca2f317463c909e30c8b2c188d1 - - default default] [instance: 31e9639e-ea7e-41f5-8bd3-2f0344062f99] Took 9.37 seconds to spawn the instance on the hypervisor.
Oct 09 16:42:50 compute-0 nova_compute[117331]: 2025-10-09 16:42:50.134 2 DEBUG nova.compute.manager [None req-759fa734-0ef0-49db-bf6d-f0afc3bde2c9 05384d72bc894b20b1cc128bf76382b2 fca25ca2f317463c909e30c8b2c188d1 - - default default] [instance: 31e9639e-ea7e-41f5-8bd3-2f0344062f99] Checking state _get_power_state /usr/lib/python3.12/site-packages/nova/compute/manager.py:1825
Oct 09 16:42:50 compute-0 nova_compute[117331]: 2025-10-09 16:42:50.671 2 INFO nova.compute.manager [None req-759fa734-0ef0-49db-bf6d-f0afc3bde2c9 05384d72bc894b20b1cc128bf76382b2 fca25ca2f317463c909e30c8b2c188d1 - - default default] [instance: 31e9639e-ea7e-41f5-8bd3-2f0344062f99] Took 14.61 seconds to build instance.
Oct 09 16:42:50 compute-0 nova_compute[117331]: 2025-10-09 16:42:50.697 2 DEBUG nova.compute.manager [req-03720fea-210f-487e-b8eb-d32a5780b0f9 req-7972630a-46d8-4efb-8829-ab371a07b048 ee15961ad2be4bada6c106ac4f69f422 c076c42271004e7aa86e84416e33f826 - - default default] [instance: 31e9639e-ea7e-41f5-8bd3-2f0344062f99] Received event network-vif-plugged-75dea49d-ac7b-45ed-8521-44a081d00648 external_instance_event /usr/lib/python3.12/site-packages/nova/compute/manager.py:11812
Oct 09 16:42:50 compute-0 nova_compute[117331]: 2025-10-09 16:42:50.698 2 DEBUG oslo_concurrency.lockutils [req-03720fea-210f-487e-b8eb-d32a5780b0f9 req-7972630a-46d8-4efb-8829-ab371a07b048 ee15961ad2be4bada6c106ac4f69f422 c076c42271004e7aa86e84416e33f826 - - default default] Acquiring lock "31e9639e-ea7e-41f5-8bd3-2f0344062f99-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:405
Oct 09 16:42:50 compute-0 nova_compute[117331]: 2025-10-09 16:42:50.698 2 DEBUG oslo_concurrency.lockutils [req-03720fea-210f-487e-b8eb-d32a5780b0f9 req-7972630a-46d8-4efb-8829-ab371a07b048 ee15961ad2be4bada6c106ac4f69f422 c076c42271004e7aa86e84416e33f826 - - default default] Lock "31e9639e-ea7e-41f5-8bd3-2f0344062f99-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:410
Oct 09 16:42:50 compute-0 nova_compute[117331]: 2025-10-09 16:42:50.699 2 DEBUG oslo_concurrency.lockutils [req-03720fea-210f-487e-b8eb-d32a5780b0f9 req-7972630a-46d8-4efb-8829-ab371a07b048 ee15961ad2be4bada6c106ac4f69f422 c076c42271004e7aa86e84416e33f826 - - default default] Lock "31e9639e-ea7e-41f5-8bd3-2f0344062f99-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:424
Oct 09 16:42:50 compute-0 nova_compute[117331]: 2025-10-09 16:42:50.699 2 DEBUG nova.compute.manager [req-03720fea-210f-487e-b8eb-d32a5780b0f9 req-7972630a-46d8-4efb-8829-ab371a07b048 ee15961ad2be4bada6c106ac4f69f422 c076c42271004e7aa86e84416e33f826 - - default default] [instance: 31e9639e-ea7e-41f5-8bd3-2f0344062f99] No waiting events found dispatching network-vif-plugged-75dea49d-ac7b-45ed-8521-44a081d00648 pop_instance_event /usr/lib/python3.12/site-packages/nova/compute/manager.py:344
Oct 09 16:42:50 compute-0 nova_compute[117331]: 2025-10-09 16:42:50.700 2 WARNING nova.compute.manager [req-03720fea-210f-487e-b8eb-d32a5780b0f9 req-7972630a-46d8-4efb-8829-ab371a07b048 ee15961ad2be4bada6c106ac4f69f422 c076c42271004e7aa86e84416e33f826 - - default default] [instance: 31e9639e-ea7e-41f5-8bd3-2f0344062f99] Received unexpected event network-vif-plugged-75dea49d-ac7b-45ed-8521-44a081d00648 for instance with vm_state active and task_state None.
Oct 09 16:42:51 compute-0 nova_compute[117331]: 2025-10-09 16:42:51.179 2 DEBUG oslo_concurrency.lockutils [None req-759fa734-0ef0-49db-bf6d-f0afc3bde2c9 05384d72bc894b20b1cc128bf76382b2 fca25ca2f317463c909e30c8b2c188d1 - - default default] Lock "31e9639e-ea7e-41f5-8bd3-2f0344062f99" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 16.132s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:424
Oct 09 16:42:52 compute-0 nova_compute[117331]: 2025-10-09 16:42:52.483 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 09 16:42:54 compute-0 nova_compute[117331]: 2025-10-09 16:42:54.898 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 09 16:42:57 compute-0 nova_compute[117331]: 2025-10-09 16:42:57.486 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 09 16:42:57 compute-0 podman[154877]: 2025-10-09 16:42:57.836438766 +0000 UTC m=+0.065358308 container health_status 64fa251112d01abb34ec1bb8e22019815ddcaaaa391ce5ec91517147ac5a9994 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.33.7, managed_by=edpm_ansible, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, container_name=openstack_network_exporter, release=1755695350, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, vendor=Red Hat, Inc., architecture=x86_64, com.redhat.component=ubi9-minimal-container, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., build-date=2025-08-20T13:12:41, name=ubi9-minimal, vcs-type=git, maintainer=Red Hat, Inc., version=9.6, config_id=edpm, distribution-scope=public, io.openshift.expose-services=, url=https://catalog.redhat.com/en/search?searchType=containers, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., io.openshift.tags=minimal rhel9, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']})
Oct 09 16:42:57 compute-0 podman[154878]: 2025-10-09 16:42:57.871468939 +0000 UTC m=+0.099598576 container health_status d615e4f24ff62fbb14be4a63e6da568ec5b22309773c1c66103b6b4de64a8a2f (image=38.102.83.66:5001/podified-master-centos10/openstack-ovn-controller:watcher_latest, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_controller, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, tcib_build_tag=watcher_latest, tcib_managed=true, io.buildah.version=1.41.4, org.label-schema.build-date=20251007, org.label-schema.schema-version=1.0, container_name=ovn_controller, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': '38.102.83.66:5001/podified-master-centos10/openstack-ovn-controller:watcher_latest', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']})
Oct 09 16:42:58 compute-0 sshd-session[154657]: Connection closed by authenticating user root 124.60.67.43 port 39842 [preauth]
Oct 09 16:42:58 compute-0 ovn_metadata_agent[28608]: 2025-10-09 16:42:58.727 28613 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=33, options={'arp_ns_explicit_output': 'true', 'fdb_removal_limit': '0', 'ignore_lsp_down': 'false', 'mac_binding_removal_limit': '0', 'mac_prefix': '4a:65:5f', 'max_tunid': '16711680', 'northd_internal_version': '24.09.4-20.37.0-77.8', 'svc_monitor_mac': '32:24:1e:e3:5d:e8'}, ipsec=False) old=SB_Global(nb_cfg=32) matches /usr/lib/python3.12/site-packages/ovsdbapp/backend/ovs_idl/event.py:55
Oct 09 16:42:58 compute-0 nova_compute[117331]: 2025-10-09 16:42:58.728 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 09 16:42:58 compute-0 ovn_metadata_agent[28608]: 2025-10-09 16:42:58.728 28613 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 4 seconds run /usr/lib/python3.12/site-packages/neutron/agent/ovn/metadata/agent.py:367
Oct 09 16:42:59 compute-0 podman[127775]: time="2025-10-09T16:42:59Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Oct 09 16:42:59 compute-0 podman[127775]: @ - - [09/Oct/2025:16:42:59 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 20749 "" "Go-http-client/1.1"
Oct 09 16:42:59 compute-0 podman[127775]: @ - - [09/Oct/2025:16:42:59 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 3492 "" "Go-http-client/1.1"
Oct 09 16:42:59 compute-0 nova_compute[117331]: 2025-10-09 16:42:59.901 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 09 16:43:00 compute-0 nova_compute[117331]: 2025-10-09 16:43:00.813 2 DEBUG oslo_service.periodic_task [None req-92a13bed-c081-4c7b-87f3-bafebefb79f2 - - - - - -] Running periodic task ComputeManager._cleanup_incomplete_migrations run_periodic_tasks /usr/lib/python3.12/site-packages/oslo_service/periodic_task.py:210
Oct 09 16:43:00 compute-0 nova_compute[117331]: 2025-10-09 16:43:00.813 2 DEBUG nova.compute.manager [None req-92a13bed-c081-4c7b-87f3-bafebefb79f2 - - - - - -] Cleaning up deleted instances with incomplete migration  _cleanup_incomplete_migrations /usr/lib/python3.12/site-packages/nova/compute/manager.py:11947
Oct 09 16:43:01 compute-0 ovn_controller[19752]: 2025-10-09T16:43:01Z|00043|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:a9:96:a6 10.100.0.9
Oct 09 16:43:01 compute-0 ovn_controller[19752]: 2025-10-09T16:43:01Z|00044|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:a9:96:a6 10.100.0.9
Oct 09 16:43:01 compute-0 openstack_network_exporter[129925]: ERROR   16:43:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Oct 09 16:43:01 compute-0 openstack_network_exporter[129925]: ERROR   16:43:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Oct 09 16:43:01 compute-0 openstack_network_exporter[129925]: ERROR   16:43:01 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Oct 09 16:43:01 compute-0 openstack_network_exporter[129925]: ERROR   16:43:01 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Oct 09 16:43:01 compute-0 openstack_network_exporter[129925]: 
Oct 09 16:43:01 compute-0 openstack_network_exporter[129925]: ERROR   16:43:01 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Oct 09 16:43:01 compute-0 openstack_network_exporter[129925]: 
Oct 09 16:43:02 compute-0 nova_compute[117331]: 2025-10-09 16:43:02.488 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 09 16:43:02 compute-0 ovn_metadata_agent[28608]: 2025-10-09 16:43:02.730 28613 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=9954897f-aa83-45dd-8e84-289816676c2a, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '33'}),), if_exists=True) do_commit /usr/lib/python3.12/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 09 16:43:04 compute-0 nova_compute[117331]: 2025-10-09 16:43:04.905 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 09 16:43:06 compute-0 unix_chkpwd[154941]: password check failed for user (root)
Oct 09 16:43:06 compute-0 sshd-session[154939]: pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=193.46.255.217  user=root
Oct 09 16:43:07 compute-0 nova_compute[117331]: 2025-10-09 16:43:07.490 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 09 16:43:08 compute-0 sshd-session[154939]: Failed password for root from 193.46.255.217 port 56590 ssh2
Oct 09 16:43:08 compute-0 podman[154942]: 2025-10-09 16:43:08.866471878 +0000 UTC m=+0.069712516 container health_status 270830b5167cafbab735d8118980f26987e673cd0fa464dd2202394830bcca66 (image=38.102.83.66:5001/podified-master-centos10/openstack-multipathd:watcher_latest, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_build_tag=watcher_latest, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, container_name=multipathd, io.buildah.version=1.41.4, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': '38.102.83.66:5001/podified-master-centos10/openstack-multipathd:watcher_latest', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, org.label-schema.build-date=20251007, org.label-schema.name=CentOS Stream 10 Base Image)
Oct 09 16:43:09 compute-0 nova_compute[117331]: 2025-10-09 16:43:09.909 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 09 16:43:10 compute-0 unix_chkpwd[154962]: password check failed for user (root)
Oct 09 16:43:10 compute-0 sshd-session[154926]: pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=124.60.67.43  user=root
Oct 09 16:43:10 compute-0 unix_chkpwd[154963]: password check failed for user (root)
Oct 09 16:43:12 compute-0 nova_compute[117331]: 2025-10-09 16:43:12.492 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 09 16:43:12 compute-0 podman[154964]: 2025-10-09 16:43:12.837543336 +0000 UTC m=+0.063692375 container health_status 86aa93e887899e54d383b0fc17fb3c5637680d1d8e146a130a557c837f0260fe (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm)
Oct 09 16:43:12 compute-0 sshd-session[154926]: Failed password for root from 124.60.67.43 port 42962 ssh2
Oct 09 16:43:13 compute-0 sshd-session[154939]: Failed password for root from 193.46.255.217 port 56590 ssh2
Oct 09 16:43:14 compute-0 nova_compute[117331]: 2025-10-09 16:43:14.912 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 09 16:43:15 compute-0 unix_chkpwd[154989]: password check failed for user (root)
Oct 09 16:43:17 compute-0 sshd-session[154939]: Failed password for root from 193.46.255.217 port 56590 ssh2
Oct 09 16:43:17 compute-0 sshd-session[154926]: Connection closed by authenticating user root 124.60.67.43 port 42962 [preauth]
Oct 09 16:43:17 compute-0 nova_compute[117331]: 2025-10-09 16:43:17.494 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 09 16:43:18 compute-0 podman[154992]: 2025-10-09 16:43:18.828351885 +0000 UTC m=+0.060986049 container health_status 0e966c7a283689daf23c5db5017f23f685ee836319240188a9f70ddd246563c5 (image=38.102.83.66:5001/podified-master-centos10/openstack-neutron-metadata-agent-ovn:watcher_latest, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': '38.102.83.66:5001/podified-master-centos10/openstack-neutron-metadata-agent-ovn:watcher_latest', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 10 Base Image, io.buildah.version=1.41.4, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, 
org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=watcher_latest, tcib_managed=true, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.build-date=20251007)
Oct 09 16:43:18 compute-0 podman[154993]: 2025-10-09 16:43:18.862599543 +0000 UTC m=+0.090240648 container health_status 8227728d23f374cb9528be6ddf01dff38d8c792912a17b0d137932a18390b820 (image=38.102.83.66:5001/podified-master-centos10/openstack-iscsid:watcher_latest, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, container_name=iscsid, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': '38.102.83.66:5001/podified-master-centos10/openstack-iscsid:watcher_latest', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, org.label-schema.build-date=20251007, io.buildah.version=1.41.4, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_id=iscsid, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 10 Base Image, tcib_build_tag=watcher_latest)
Oct 09 16:43:19 compute-0 sshd-session[154939]: Received disconnect from 193.46.255.217 port 56590:11:  [preauth]
Oct 09 16:43:19 compute-0 sshd-session[154939]: Disconnected from authenticating user root 193.46.255.217 port 56590 [preauth]
Oct 09 16:43:19 compute-0 sshd-session[154939]: PAM 2 more authentication failures; logname= uid=0 euid=0 tty=ssh ruser= rhost=193.46.255.217  user=root
Oct 09 16:43:19 compute-0 nova_compute[117331]: 2025-10-09 16:43:19.914 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 09 16:43:20 compute-0 unix_chkpwd[155032]: password check failed for user (root)
Oct 09 16:43:20 compute-0 sshd-session[155030]: pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=193.46.255.217  user=root
Oct 09 16:43:22 compute-0 sshd-session[155030]: Failed password for root from 193.46.255.217 port 36010 ssh2
Oct 09 16:43:22 compute-0 unix_chkpwd[155033]: password check failed for user (root)
Oct 09 16:43:22 compute-0 nova_compute[117331]: 2025-10-09 16:43:22.495 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 09 16:43:22 compute-0 nova_compute[117331]: 2025-10-09 16:43:22.974 2 DEBUG nova.virt.libvirt.driver [None req-ceed530a-e290-4a44-afa3-258d274da1ed 005258eefa5345409efe13f6a1de22ab c076c42271004e7aa86e84416e33f826 - - default default] [instance: 31e9639e-ea7e-41f5-8bd3-2f0344062f99] Check if temp file /var/lib/nova/instances/tmpbst17rr3 exists to indicate shared storage is being used for migration. Exists? False _check_shared_storage_test_file /usr/lib/python3.12/site-packages/nova/virt/libvirt/driver.py:10968
Oct 09 16:43:22 compute-0 nova_compute[117331]: 2025-10-09 16:43:22.981 2 DEBUG nova.compute.manager [None req-ceed530a-e290-4a44-afa3-258d274da1ed 005258eefa5345409efe13f6a1de22ab c076c42271004e7aa86e84416e33f826 - - default default] source check data is LibvirtLiveMigrateData(bdms=<?>,block_migration=True,disk_available_mb=74752,disk_over_commit=<?>,dst_cpu_shared_set_info=set([]),dst_numa_info=<?>,dst_supports_mdev_live_migration=True,dst_supports_numa_live_migration=<?>,dst_wants_file_backed_memory=False,file_backed_memory_discard=<?>,filename='tmpbst17rr3',graphics_listen_addr_spice=127.0.0.1,graphics_listen_addr_vnc=::,image_type='qcow2',instance_relative_path='31e9639e-ea7e-41f5-8bd3-2f0344062f99',is_shared_block_storage=False,is_shared_instance_path=False,is_volume_backed=False,migration=<?>,old_vol_attachment_ids=<?>,pci_dev_map_src_dst=<?>,serial_listen_addr=None,serial_listen_ports=<?>,source_mdev_types=<?>,src_supports_native_luks=<?>,src_supports_numa_live_migration=<?>,supported_perf_events=<?>,target_connect_addr=<?>,target_mdevs=<?>,vifs=[VIFMigrateData],wait_for_vif_plugged=<?>) check_can_live_migrate_source /usr/lib/python3.12/site-packages/nova/compute/manager.py:9294
Oct 09 16:43:24 compute-0 sshd-session[155030]: Failed password for root from 193.46.255.217 port 36010 ssh2
Oct 09 16:43:24 compute-0 nova_compute[117331]: 2025-10-09 16:43:24.918 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 09 16:43:26 compute-0 unix_chkpwd[155034]: password check failed for user (root)
Oct 09 16:43:27 compute-0 nova_compute[117331]: 2025-10-09 16:43:27.498 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 09 16:43:27 compute-0 nova_compute[117331]: 2025-10-09 16:43:27.697 2 DEBUG oslo_concurrency.processutils [None req-ceed530a-e290-4a44-afa3-258d274da1ed 005258eefa5345409efe13f6a1de22ab c076c42271004e7aa86e84416e33f826 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/31e9639e-ea7e-41f5-8bd3-2f0344062f99/disk --force-share --output=json execute /usr/lib/python3.12/site-packages/oslo_concurrency/processutils.py:349
Oct 09 16:43:27 compute-0 nova_compute[117331]: 2025-10-09 16:43:27.756 2 DEBUG oslo_concurrency.processutils [None req-ceed530a-e290-4a44-afa3-258d274da1ed 005258eefa5345409efe13f6a1de22ab c076c42271004e7aa86e84416e33f826 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/31e9639e-ea7e-41f5-8bd3-2f0344062f99/disk --force-share --output=json" returned: 0 in 0.059s execute /usr/lib/python3.12/site-packages/oslo_concurrency/processutils.py:372
Oct 09 16:43:27 compute-0 nova_compute[117331]: 2025-10-09 16:43:27.757 2 DEBUG oslo_concurrency.processutils [None req-ceed530a-e290-4a44-afa3-258d274da1ed 005258eefa5345409efe13f6a1de22ab c076c42271004e7aa86e84416e33f826 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/31e9639e-ea7e-41f5-8bd3-2f0344062f99/disk --force-share --output=json execute /usr/lib/python3.12/site-packages/oslo_concurrency/processutils.py:349
Oct 09 16:43:27 compute-0 nova_compute[117331]: 2025-10-09 16:43:27.827 2 DEBUG oslo_concurrency.processutils [None req-ceed530a-e290-4a44-afa3-258d274da1ed 005258eefa5345409efe13f6a1de22ab c076c42271004e7aa86e84416e33f826 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/31e9639e-ea7e-41f5-8bd3-2f0344062f99/disk --force-share --output=json" returned: 0 in 0.070s execute /usr/lib/python3.12/site-packages/oslo_concurrency/processutils.py:372
Oct 09 16:43:27 compute-0 nova_compute[117331]: 2025-10-09 16:43:27.828 2 DEBUG nova.compute.manager [None req-ceed530a-e290-4a44-afa3-258d274da1ed 005258eefa5345409efe13f6a1de22ab c076c42271004e7aa86e84416e33f826 - - default default] [instance: 31e9639e-ea7e-41f5-8bd3-2f0344062f99] Preparing to wait for external event network-vif-plugged-75dea49d-ac7b-45ed-8521-44a081d00648 prepare_for_instance_event /usr/lib/python3.12/site-packages/nova/compute/manager.py:307
Oct 09 16:43:27 compute-0 nova_compute[117331]: 2025-10-09 16:43:27.829 2 DEBUG oslo_concurrency.lockutils [None req-ceed530a-e290-4a44-afa3-258d274da1ed 005258eefa5345409efe13f6a1de22ab c076c42271004e7aa86e84416e33f826 - - default default] Acquiring lock "31e9639e-ea7e-41f5-8bd3-2f0344062f99-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:405
Oct 09 16:43:27 compute-0 nova_compute[117331]: 2025-10-09 16:43:27.829 2 DEBUG oslo_concurrency.lockutils [None req-ceed530a-e290-4a44-afa3-258d274da1ed 005258eefa5345409efe13f6a1de22ab c076c42271004e7aa86e84416e33f826 - - default default] Lock "31e9639e-ea7e-41f5-8bd3-2f0344062f99-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:410
Oct 09 16:43:27 compute-0 nova_compute[117331]: 2025-10-09 16:43:27.830 2 DEBUG oslo_concurrency.lockutils [None req-ceed530a-e290-4a44-afa3-258d274da1ed 005258eefa5345409efe13f6a1de22ab c076c42271004e7aa86e84416e33f826 - - default default] Lock "31e9639e-ea7e-41f5-8bd3-2f0344062f99-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:424
Oct 09 16:43:28 compute-0 sshd-session[155030]: Failed password for root from 193.46.255.217 port 36010 ssh2
Oct 09 16:43:28 compute-0 podman[155041]: 2025-10-09 16:43:28.867174198 +0000 UTC m=+0.087925266 container health_status 64fa251112d01abb34ec1bb8e22019815ddcaaaa391ce5ec91517147ac5a9994 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, maintainer=Red Hat, Inc., name=ubi9-minimal, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., vendor=Red Hat, Inc., vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, container_name=openstack_network_exporter, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, io.openshift.tags=minimal rhel9, version=9.6, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, com.redhat.component=ubi9-minimal-container, release=1755695350, url=https://catalog.redhat.com/en/search?searchType=containers, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., io.openshift.expose-services=, architecture=x86_64, distribution-scope=public, build-date=2025-08-20T13:12:41, managed_by=edpm_ansible, config_id=edpm, vcs-type=git, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.buildah.version=1.33.7)
Oct 09 16:43:28 compute-0 podman[155042]: 2025-10-09 16:43:28.98997052 +0000 UTC m=+0.206468342 container health_status d615e4f24ff62fbb14be4a63e6da568ec5b22309773c1c66103b6b4de64a8a2f (image=38.102.83.66:5001/podified-master-centos10/openstack-ovn-controller:watcher_latest, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_controller, managed_by=edpm_ansible, org.label-schema.build-date=20251007, org.label-schema.schema-version=1.0, tcib_build_tag=watcher_latest, config_id=ovn_controller, io.buildah.version=1.41.4, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': '38.102.83.66:5001/podified-master-centos10/openstack-ovn-controller:watcher_latest', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true)
Oct 09 16:43:29 compute-0 sshd-session[154990]: Invalid user admin from 124.60.67.43 port 46378
Oct 09 16:43:29 compute-0 podman[127775]: time="2025-10-09T16:43:29Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Oct 09 16:43:29 compute-0 podman[127775]: @ - - [09/Oct/2025:16:43:29 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 20749 "" "Go-http-client/1.1"
Oct 09 16:43:29 compute-0 podman[127775]: @ - - [09/Oct/2025:16:43:29 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 3495 "" "Go-http-client/1.1"
Oct 09 16:43:29 compute-0 nova_compute[117331]: 2025-10-09 16:43:29.922 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 09 16:43:30 compute-0 sshd-session[154990]: pam_unix(sshd:auth): check pass; user unknown
Oct 09 16:43:30 compute-0 sshd-session[154990]: pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=124.60.67.43
Oct 09 16:43:30 compute-0 sshd-session[155030]: Received disconnect from 193.46.255.217 port 36010:11:  [preauth]
Oct 09 16:43:30 compute-0 sshd-session[155030]: Disconnected from authenticating user root 193.46.255.217 port 36010 [preauth]
Oct 09 16:43:30 compute-0 sshd-session[155030]: PAM 2 more authentication failures; logname= uid=0 euid=0 tty=ssh ruser= rhost=193.46.255.217  user=root
Oct 09 16:43:31 compute-0 openstack_network_exporter[129925]: ERROR   16:43:31 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Oct 09 16:43:31 compute-0 openstack_network_exporter[129925]: ERROR   16:43:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Oct 09 16:43:31 compute-0 openstack_network_exporter[129925]: ERROR   16:43:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Oct 09 16:43:31 compute-0 openstack_network_exporter[129925]: ERROR   16:43:31 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Oct 09 16:43:31 compute-0 openstack_network_exporter[129925]: ERROR   16:43:31 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Oct 09 16:43:31 compute-0 unix_chkpwd[155090]: password check failed for user (root)
Oct 09 16:43:31 compute-0 sshd-session[155088]: pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=193.46.255.217  user=root
Oct 09 16:43:32 compute-0 sshd-session[154990]: Failed password for invalid user admin from 124.60.67.43 port 46378 ssh2
Oct 09 16:43:32 compute-0 nova_compute[117331]: 2025-10-09 16:43:32.502 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 09 16:43:33 compute-0 sshd-session[155088]: Failed password for root from 193.46.255.217 port 17022 ssh2
Oct 09 16:43:33 compute-0 unix_chkpwd[155091]: password check failed for user (root)
Oct 09 16:43:34 compute-0 nova_compute[117331]: 2025-10-09 16:43:34.766 2 DEBUG nova.compute.manager [req-7b00259f-0d0b-4787-95a7-4f3702af8532 req-309e5ea6-07fc-4d5c-9c7a-a8f691f7fb24 ee15961ad2be4bada6c106ac4f69f422 c076c42271004e7aa86e84416e33f826 - - default default] [instance: 31e9639e-ea7e-41f5-8bd3-2f0344062f99] Received event network-vif-unplugged-75dea49d-ac7b-45ed-8521-44a081d00648 external_instance_event /usr/lib/python3.12/site-packages/nova/compute/manager.py:11812
Oct 09 16:43:34 compute-0 nova_compute[117331]: 2025-10-09 16:43:34.767 2 DEBUG oslo_concurrency.lockutils [req-7b00259f-0d0b-4787-95a7-4f3702af8532 req-309e5ea6-07fc-4d5c-9c7a-a8f691f7fb24 ee15961ad2be4bada6c106ac4f69f422 c076c42271004e7aa86e84416e33f826 - - default default] Acquiring lock "31e9639e-ea7e-41f5-8bd3-2f0344062f99-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:405
Oct 09 16:43:34 compute-0 nova_compute[117331]: 2025-10-09 16:43:34.767 2 DEBUG oslo_concurrency.lockutils [req-7b00259f-0d0b-4787-95a7-4f3702af8532 req-309e5ea6-07fc-4d5c-9c7a-a8f691f7fb24 ee15961ad2be4bada6c106ac4f69f422 c076c42271004e7aa86e84416e33f826 - - default default] Lock "31e9639e-ea7e-41f5-8bd3-2f0344062f99-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:410
Oct 09 16:43:34 compute-0 nova_compute[117331]: 2025-10-09 16:43:34.767 2 DEBUG oslo_concurrency.lockutils [req-7b00259f-0d0b-4787-95a7-4f3702af8532 req-309e5ea6-07fc-4d5c-9c7a-a8f691f7fb24 ee15961ad2be4bada6c106ac4f69f422 c076c42271004e7aa86e84416e33f826 - - default default] Lock "31e9639e-ea7e-41f5-8bd3-2f0344062f99-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:424
Oct 09 16:43:34 compute-0 nova_compute[117331]: 2025-10-09 16:43:34.768 2 DEBUG nova.compute.manager [req-7b00259f-0d0b-4787-95a7-4f3702af8532 req-309e5ea6-07fc-4d5c-9c7a-a8f691f7fb24 ee15961ad2be4bada6c106ac4f69f422 c076c42271004e7aa86e84416e33f826 - - default default] [instance: 31e9639e-ea7e-41f5-8bd3-2f0344062f99] No event matching network-vif-unplugged-75dea49d-ac7b-45ed-8521-44a081d00648 in dict_keys([('network-vif-plugged', '75dea49d-ac7b-45ed-8521-44a081d00648')]) pop_instance_event /usr/lib/python3.12/site-packages/nova/compute/manager.py:349
Oct 09 16:43:34 compute-0 nova_compute[117331]: 2025-10-09 16:43:34.768 2 DEBUG nova.compute.manager [req-7b00259f-0d0b-4787-95a7-4f3702af8532 req-309e5ea6-07fc-4d5c-9c7a-a8f691f7fb24 ee15961ad2be4bada6c106ac4f69f422 c076c42271004e7aa86e84416e33f826 - - default default] [instance: 31e9639e-ea7e-41f5-8bd3-2f0344062f99] Received event network-vif-unplugged-75dea49d-ac7b-45ed-8521-44a081d00648 for instance with task_state migrating. _process_instance_event /usr/lib/python3.12/site-packages/nova/compute/manager.py:11590
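The `No event matching network-vif-unplugged-…` line above shows the prepare/pop protocol at work: the manager registered only a `('network-vif-plugged', <port>)` key (at 16:43:27), so the unsolicited `network-vif-unplugged` event finds no waiter and is merely logged. A hedged sketch of that registry, with `(event_name, tag)` keys as in the `dict_keys` dump (class and method names here are simplified, not Nova's exact API):

```python
import threading

class InstanceEvents:
    """Track externally-triggered events a compute operation is waiting on."""

    def __init__(self):
        self._events = {}          # (name, tag) -> threading.Event
        self._lock = threading.Lock()

    def prepare(self, name, tag):
        """Register interest in an event before starting the operation."""
        with self._lock:
            return self._events.setdefault((name, tag), threading.Event())

    def pop(self, name, tag):
        """Deliver an incoming event; returns None if nobody was waiting."""
        with self._lock:
            ev = self._events.pop((name, tag), None)
        if ev is not None:
            ev.set()               # wake the waiter
        return ev
```

An event popped with no matching key (the unplugged case here) is harmless; the plugged event prepared earlier is what `pre_live_migration` actually blocks on.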
Oct 09 16:43:34 compute-0 nova_compute[117331]: 2025-10-09 16:43:34.811 2 DEBUG oslo_service.periodic_task [None req-92a13bed-c081-4c7b-87f3-bafebefb79f2 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.12/site-packages/oslo_service/periodic_task.py:210
Oct 09 16:43:34 compute-0 nova_compute[117331]: 2025-10-09 16:43:34.926 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 09 16:43:35 compute-0 sshd-session[154990]: Connection closed by invalid user admin 124.60.67.43 port 46378 [preauth]
Oct 09 16:43:35 compute-0 ovn_metadata_agent[28608]: 2025-10-09 16:43:35.347 28613 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:405
Oct 09 16:43:35 compute-0 ovn_metadata_agent[28608]: 2025-10-09 16:43:35.348 28613 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.000s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:410
Oct 09 16:43:35 compute-0 ovn_metadata_agent[28608]: 2025-10-09 16:43:35.349 28613 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:424
Oct 09 16:43:35 compute-0 ovn_controller[19752]: 2025-10-09T16:43:35Z|00294|memory_trim|INFO|Detected inactivity (last active 30001 ms ago): trimming memory
Oct 09 16:43:36 compute-0 sshd-session[155088]: Failed password for root from 193.46.255.217 port 17022 ssh2
Oct 09 16:43:36 compute-0 nova_compute[117331]: 2025-10-09 16:43:36.830 2 DEBUG nova.compute.manager [req-d8f828b7-99b5-44c1-984a-be3b59d836e7 req-7c4f7f21-32cf-49dd-b642-6a88eeb65f5f ee15961ad2be4bada6c106ac4f69f422 c076c42271004e7aa86e84416e33f826 - - default default] [instance: 31e9639e-ea7e-41f5-8bd3-2f0344062f99] Received event network-vif-plugged-75dea49d-ac7b-45ed-8521-44a081d00648 external_instance_event /usr/lib/python3.12/site-packages/nova/compute/manager.py:11812
Oct 09 16:43:36 compute-0 nova_compute[117331]: 2025-10-09 16:43:36.831 2 DEBUG oslo_concurrency.lockutils [req-d8f828b7-99b5-44c1-984a-be3b59d836e7 req-7c4f7f21-32cf-49dd-b642-6a88eeb65f5f ee15961ad2be4bada6c106ac4f69f422 c076c42271004e7aa86e84416e33f826 - - default default] Acquiring lock "31e9639e-ea7e-41f5-8bd3-2f0344062f99-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:405
Oct 09 16:43:36 compute-0 nova_compute[117331]: 2025-10-09 16:43:36.831 2 DEBUG oslo_concurrency.lockutils [req-d8f828b7-99b5-44c1-984a-be3b59d836e7 req-7c4f7f21-32cf-49dd-b642-6a88eeb65f5f ee15961ad2be4bada6c106ac4f69f422 c076c42271004e7aa86e84416e33f826 - - default default] Lock "31e9639e-ea7e-41f5-8bd3-2f0344062f99-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:410
Oct 09 16:43:36 compute-0 nova_compute[117331]: 2025-10-09 16:43:36.832 2 DEBUG oslo_concurrency.lockutils [req-d8f828b7-99b5-44c1-984a-be3b59d836e7 req-7c4f7f21-32cf-49dd-b642-6a88eeb65f5f ee15961ad2be4bada6c106ac4f69f422 c076c42271004e7aa86e84416e33f826 - - default default] Lock "31e9639e-ea7e-41f5-8bd3-2f0344062f99-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:424
Oct 09 16:43:36 compute-0 nova_compute[117331]: 2025-10-09 16:43:36.832 2 DEBUG nova.compute.manager [req-d8f828b7-99b5-44c1-984a-be3b59d836e7 req-7c4f7f21-32cf-49dd-b642-6a88eeb65f5f ee15961ad2be4bada6c106ac4f69f422 c076c42271004e7aa86e84416e33f826 - - default default] [instance: 31e9639e-ea7e-41f5-8bd3-2f0344062f99] Processing event network-vif-plugged-75dea49d-ac7b-45ed-8521-44a081d00648 _process_instance_event /usr/lib/python3.12/site-packages/nova/compute/manager.py:11572
Oct 09 16:43:36 compute-0 nova_compute[117331]: 2025-10-09 16:43:36.833 2 DEBUG nova.compute.manager [req-d8f828b7-99b5-44c1-984a-be3b59d836e7 req-7c4f7f21-32cf-49dd-b642-6a88eeb65f5f ee15961ad2be4bada6c106ac4f69f422 c076c42271004e7aa86e84416e33f826 - - default default] [instance: 31e9639e-ea7e-41f5-8bd3-2f0344062f99] Received event network-changed-75dea49d-ac7b-45ed-8521-44a081d00648 external_instance_event /usr/lib/python3.12/site-packages/nova/compute/manager.py:11812
Oct 09 16:43:36 compute-0 nova_compute[117331]: 2025-10-09 16:43:36.833 2 DEBUG nova.compute.manager [req-d8f828b7-99b5-44c1-984a-be3b59d836e7 req-7c4f7f21-32cf-49dd-b642-6a88eeb65f5f ee15961ad2be4bada6c106ac4f69f422 c076c42271004e7aa86e84416e33f826 - - default default] [instance: 31e9639e-ea7e-41f5-8bd3-2f0344062f99] Refreshing instance network info cache due to event network-changed-75dea49d-ac7b-45ed-8521-44a081d00648. external_instance_event /usr/lib/python3.12/site-packages/nova/compute/manager.py:11817
Oct 09 16:43:36 compute-0 nova_compute[117331]: 2025-10-09 16:43:36.834 2 DEBUG oslo_concurrency.lockutils [req-d8f828b7-99b5-44c1-984a-be3b59d836e7 req-7c4f7f21-32cf-49dd-b642-6a88eeb65f5f ee15961ad2be4bada6c106ac4f69f422 c076c42271004e7aa86e84416e33f826 - - default default] Acquiring lock "refresh_cache-31e9639e-ea7e-41f5-8bd3-2f0344062f99" lock /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:313
Oct 09 16:43:36 compute-0 nova_compute[117331]: 2025-10-09 16:43:36.834 2 DEBUG oslo_concurrency.lockutils [req-d8f828b7-99b5-44c1-984a-be3b59d836e7 req-7c4f7f21-32cf-49dd-b642-6a88eeb65f5f ee15961ad2be4bada6c106ac4f69f422 c076c42271004e7aa86e84416e33f826 - - default default] Acquired lock "refresh_cache-31e9639e-ea7e-41f5-8bd3-2f0344062f99" lock /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:316
Oct 09 16:43:36 compute-0 nova_compute[117331]: 2025-10-09 16:43:36.834 2 DEBUG nova.network.neutron [req-d8f828b7-99b5-44c1-984a-be3b59d836e7 req-7c4f7f21-32cf-49dd-b642-6a88eeb65f5f ee15961ad2be4bada6c106ac4f69f422 c076c42271004e7aa86e84416e33f826 - - default default] [instance: 31e9639e-ea7e-41f5-8bd3-2f0344062f99] Refreshing network info cache for port 75dea49d-ac7b-45ed-8521-44a081d00648 _get_instance_nw_info /usr/lib/python3.12/site-packages/nova/network/neutron.py:2067
Oct 09 16:43:37 compute-0 nova_compute[117331]: 2025-10-09 16:43:37.341 2 WARNING neutronclient.v2_0.client [req-d8f828b7-99b5-44c1-984a-be3b59d836e7 req-7c4f7f21-32cf-49dd-b642-6a88eeb65f5f ee15961ad2be4bada6c106ac4f69f422 c076c42271004e7aa86e84416e33f826 - - default default] The python binding code in neutronclient is deprecated in favor of OpenstackSDK, please use that as this will be removed in a future release.
Oct 09 16:43:37 compute-0 nova_compute[117331]: 2025-10-09 16:43:37.377 2 INFO nova.compute.manager [None req-ceed530a-e290-4a44-afa3-258d274da1ed 005258eefa5345409efe13f6a1de22ab c076c42271004e7aa86e84416e33f826 - - default default] [instance: 31e9639e-ea7e-41f5-8bd3-2f0344062f99] Took 9.55 seconds for pre_live_migration on destination host compute-1.ctlplane.example.com.
Oct 09 16:43:37 compute-0 nova_compute[117331]: 2025-10-09 16:43:37.378 2 DEBUG nova.compute.manager [None req-ceed530a-e290-4a44-afa3-258d274da1ed 005258eefa5345409efe13f6a1de22ab c076c42271004e7aa86e84416e33f826 - - default default] [instance: 31e9639e-ea7e-41f5-8bd3-2f0344062f99] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.12/site-packages/nova/compute/manager.py:602
Oct 09 16:43:37 compute-0 nova_compute[117331]: 2025-10-09 16:43:37.505 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 09 16:43:37 compute-0 nova_compute[117331]: 2025-10-09 16:43:37.758 2 WARNING neutronclient.v2_0.client [req-d8f828b7-99b5-44c1-984a-be3b59d836e7 req-7c4f7f21-32cf-49dd-b642-6a88eeb65f5f ee15961ad2be4bada6c106ac4f69f422 c076c42271004e7aa86e84416e33f826 - - default default] The python binding code in neutronclient is deprecated in favor of OpenstackSDK, please use that as this will be removed in a future release.
Oct 09 16:43:37 compute-0 nova_compute[117331]: 2025-10-09 16:43:37.885 2 DEBUG nova.compute.manager [None req-ceed530a-e290-4a44-afa3-258d274da1ed 005258eefa5345409efe13f6a1de22ab c076c42271004e7aa86e84416e33f826 - - default default] live_migration data is LibvirtLiveMigrateData(bdms=[],block_migration=True,disk_available_mb=74752,disk_over_commit=<?>,dst_cpu_shared_set_info=set([]),dst_numa_info=<?>,dst_supports_mdev_live_migration=True,dst_supports_numa_live_migration=<?>,dst_wants_file_backed_memory=False,file_backed_memory_discard=<?>,filename='tmpbst17rr3',graphics_listen_addr_spice=127.0.0.1,graphics_listen_addr_vnc=::,image_type='qcow2',instance_relative_path='31e9639e-ea7e-41f5-8bd3-2f0344062f99',is_shared_block_storage=False,is_shared_instance_path=False,is_volume_backed=False,migration=Migration(7e258fbc-dd52-4649-b774-c9432413d429),old_vol_attachment_ids={},pci_dev_map_src_dst={},serial_listen_addr=None,serial_listen_ports=[],source_mdev_types=<?>,src_supports_native_luks=<?>,src_supports_numa_live_migration=<?>,supported_perf_events=[],target_connect_addr=None,target_mdevs=<?>,vifs=[VIFMigrateData],wait_for_vif_plugged=True) _do_live_migration /usr/lib/python3.12/site-packages/nova/compute/manager.py:9659
Oct 09 16:43:37 compute-0 nova_compute[117331]: 2025-10-09 16:43:37.992 2 DEBUG nova.network.neutron [req-d8f828b7-99b5-44c1-984a-be3b59d836e7 req-7c4f7f21-32cf-49dd-b642-6a88eeb65f5f ee15961ad2be4bada6c106ac4f69f422 c076c42271004e7aa86e84416e33f826 - - default default] [instance: 31e9639e-ea7e-41f5-8bd3-2f0344062f99] Updated VIF entry in instance network info cache for port 75dea49d-ac7b-45ed-8521-44a081d00648. _build_network_info_model /usr/lib/python3.12/site-packages/nova/network/neutron.py:3542
Oct 09 16:43:37 compute-0 nova_compute[117331]: 2025-10-09 16:43:37.993 2 DEBUG nova.network.neutron [req-d8f828b7-99b5-44c1-984a-be3b59d836e7 req-7c4f7f21-32cf-49dd-b642-6a88eeb65f5f ee15961ad2be4bada6c106ac4f69f422 c076c42271004e7aa86e84416e33f826 - - default default] [instance: 31e9639e-ea7e-41f5-8bd3-2f0344062f99] Updating instance_info_cache with network_info: [{"id": "75dea49d-ac7b-45ed-8521-44a081d00648", "address": "fa:16:3e:a9:96:a6", "network": {"id": "faa2f899-e3f1-48c6-ac37-859a6fb5c6cc", "bridge": "br-int", "label": "tempest-TestExecuteZoneMigrationStrategy-926478805-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "655616b8f80249ceab702bd3d943237d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap75dea49d-ac", "ovs_interfaceid": "75dea49d-ac7b-45ed-8521-44a081d00648", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {"migrating_to": "compute-1.ctlplane.example.com"}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.12/site-packages/nova/network/neutron.py:116
Oct 09 16:43:38 compute-0 unix_chkpwd[155093]: password check failed for user (root)
Oct 09 16:43:38 compute-0 nova_compute[117331]: 2025-10-09 16:43:38.306 2 DEBUG oslo_service.periodic_task [None req-92a13bed-c081-4c7b-87f3-bafebefb79f2 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.12/site-packages/oslo_service/periodic_task.py:210
Oct 09 16:43:38 compute-0 nova_compute[117331]: 2025-10-09 16:43:38.415 2 DEBUG nova.objects.instance [None req-ceed530a-e290-4a44-afa3-258d274da1ed 005258eefa5345409efe13f6a1de22ab c076c42271004e7aa86e84416e33f826 - - default default] Lazy-loading 'migration_context' on Instance uuid 31e9639e-ea7e-41f5-8bd3-2f0344062f99 obj_load_attr /usr/lib/python3.12/site-packages/nova/objects/instance.py:1141
Oct 09 16:43:38 compute-0 nova_compute[117331]: 2025-10-09 16:43:38.416 2 DEBUG nova.virt.libvirt.driver [None req-ceed530a-e290-4a44-afa3-258d274da1ed 005258eefa5345409efe13f6a1de22ab c076c42271004e7aa86e84416e33f826 - - default default] [instance: 31e9639e-ea7e-41f5-8bd3-2f0344062f99] Starting monitoring of live migration _live_migration /usr/lib/python3.12/site-packages/nova/virt/libvirt/driver.py:11543
Oct 09 16:43:38 compute-0 nova_compute[117331]: 2025-10-09 16:43:38.418 2 DEBUG nova.virt.libvirt.driver [None req-ceed530a-e290-4a44-afa3-258d274da1ed 005258eefa5345409efe13f6a1de22ab c076c42271004e7aa86e84416e33f826 - - default default] [instance: 31e9639e-ea7e-41f5-8bd3-2f0344062f99] Operation thread is still running _live_migration_monitor /usr/lib/python3.12/site-packages/nova/virt/libvirt/driver.py:11343
Oct 09 16:43:38 compute-0 nova_compute[117331]: 2025-10-09 16:43:38.419 2 DEBUG nova.virt.libvirt.driver [None req-ceed530a-e290-4a44-afa3-258d274da1ed 005258eefa5345409efe13f6a1de22ab c076c42271004e7aa86e84416e33f826 - - default default] [instance: 31e9639e-ea7e-41f5-8bd3-2f0344062f99] Migration not running yet _live_migration_monitor /usr/lib/python3.12/site-packages/nova/virt/libvirt/driver.py:11352
Oct 09 16:43:38 compute-0 nova_compute[117331]: 2025-10-09 16:43:38.499 2 DEBUG oslo_concurrency.lockutils [req-d8f828b7-99b5-44c1-984a-be3b59d836e7 req-7c4f7f21-32cf-49dd-b642-6a88eeb65f5f ee15961ad2be4bada6c106ac4f69f422 c076c42271004e7aa86e84416e33f826 - - default default] Releasing lock "refresh_cache-31e9639e-ea7e-41f5-8bd3-2f0344062f99" lock /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:334
Oct 09 16:43:38 compute-0 nova_compute[117331]: 2025-10-09 16:43:38.921 2 DEBUG nova.virt.libvirt.driver [None req-ceed530a-e290-4a44-afa3-258d274da1ed 005258eefa5345409efe13f6a1de22ab c076c42271004e7aa86e84416e33f826 - - default default] [instance: 31e9639e-ea7e-41f5-8bd3-2f0344062f99] Operation thread is still running _live_migration_monitor /usr/lib/python3.12/site-packages/nova/virt/libvirt/driver.py:11343
Oct 09 16:43:38 compute-0 nova_compute[117331]: 2025-10-09 16:43:38.922 2 DEBUG nova.virt.libvirt.driver [None req-ceed530a-e290-4a44-afa3-258d274da1ed 005258eefa5345409efe13f6a1de22ab c076c42271004e7aa86e84416e33f826 - - default default] [instance: 31e9639e-ea7e-41f5-8bd3-2f0344062f99] Migration not running yet _live_migration_monitor /usr/lib/python3.12/site-packages/nova/virt/libvirt/driver.py:11352
Oct 09 16:43:38 compute-0 nova_compute[117331]: 2025-10-09 16:43:38.928 2 DEBUG nova.virt.libvirt.vif [None req-ceed530a-e290-4a44-afa3-258d274da1ed 005258eefa5345409efe13f6a1de22ab c076c42271004e7aa86e84416e33f826 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,compute_id=1,config_drive='True',created_at=2025-10-09T16:42:33Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description=None,display_name='tempest-TestExecuteZoneMigrationStrategy-server-1385446209',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testexecutezonemigrationstrategy-server-1385446209',id=32,image_ref='b7d6e0af-25e4-4227-9dc6-43143898ceee',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-10-09T16:42:50Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=1,progress=0,project_id='fca25ca2f317463c909e30c8b2c188d1',ramdisk_id='',reservation_id='r-a21ifu1r',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,manager,admin,member',image_base_image_ref='b7d6e0af-25e4-4227-9dc6-43143898ceee',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='v
irtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestExecuteZoneMigrationStrategy-1067041161',owner_user_name='tempest-TestExecuteZoneMigrationStrategy-1067041161-project-admin'},tags=<?>,task_state='migrating',terminated_at=None,trusted_certs=<?>,updated_at=2025-10-09T16:42:50Z,user_data=None,user_id='05384d72bc894b20b1cc128bf76382b2',uuid=31e9639e-ea7e-41f5-8bd3-2f0344062f99,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "75dea49d-ac7b-45ed-8521-44a081d00648", "address": "fa:16:3e:a9:96:a6", "network": {"id": "faa2f899-e3f1-48c6-ac37-859a6fb5c6cc", "bridge": "br-int", "label": "tempest-TestExecuteZoneMigrationStrategy-926478805-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "655616b8f80249ceab702bd3d943237d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system"}, "devname": "tap75dea49d-ac", "ovs_interfaceid": "75dea49d-ac7b-45ed-8521-44a081d00648", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {"os_vif_delegation": true}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.12/site-packages/nova/virt/libvirt/vif.py:574
Oct 09 16:43:38 compute-0 nova_compute[117331]: 2025-10-09 16:43:38.928 2 DEBUG nova.network.os_vif_util [None req-ceed530a-e290-4a44-afa3-258d274da1ed 005258eefa5345409efe13f6a1de22ab c076c42271004e7aa86e84416e33f826 - - default default] Converting VIF {"id": "75dea49d-ac7b-45ed-8521-44a081d00648", "address": "fa:16:3e:a9:96:a6", "network": {"id": "faa2f899-e3f1-48c6-ac37-859a6fb5c6cc", "bridge": "br-int", "label": "tempest-TestExecuteZoneMigrationStrategy-926478805-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "655616b8f80249ceab702bd3d943237d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system"}, "devname": "tap75dea49d-ac", "ovs_interfaceid": "75dea49d-ac7b-45ed-8521-44a081d00648", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {"os_vif_delegation": true}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.12/site-packages/nova/network/os_vif_util.py:511
Oct 09 16:43:38 compute-0 nova_compute[117331]: 2025-10-09 16:43:38.929 2 DEBUG nova.network.os_vif_util [None req-ceed530a-e290-4a44-afa3-258d274da1ed 005258eefa5345409efe13f6a1de22ab c076c42271004e7aa86e84416e33f826 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:a9:96:a6,bridge_name='br-int',has_traffic_filtering=True,id=75dea49d-ac7b-45ed-8521-44a081d00648,network=Network(faa2f899-e3f1-48c6-ac37-859a6fb5c6cc),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap75dea49d-ac') nova_to_osvif_vif /usr/lib/python3.12/site-packages/nova/network/os_vif_util.py:548
Oct 09 16:43:38 compute-0 nova_compute[117331]: 2025-10-09 16:43:38.929 2 DEBUG nova.virt.libvirt.migration [None req-ceed530a-e290-4a44-afa3-258d274da1ed 005258eefa5345409efe13f6a1de22ab c076c42271004e7aa86e84416e33f826 - - default default] [instance: 31e9639e-ea7e-41f5-8bd3-2f0344062f99] Updating guest XML with vif config: <interface type="ethernet">
Oct 09 16:43:38 compute-0 nova_compute[117331]:   <mac address="fa:16:3e:a9:96:a6"/>
Oct 09 16:43:38 compute-0 nova_compute[117331]:   <model type="virtio"/>
Oct 09 16:43:38 compute-0 nova_compute[117331]:   <driver name="vhost" rx_queue_size="512"/>
Oct 09 16:43:38 compute-0 nova_compute[117331]:   <mtu size="1442"/>
Oct 09 16:43:38 compute-0 nova_compute[117331]:   <target dev="tap75dea49d-ac"/>
Oct 09 16:43:38 compute-0 nova_compute[117331]: </interface>
Oct 09 16:43:38 compute-0 nova_compute[117331]:  _update_vif_xml /usr/lib/python3.12/site-packages/nova/virt/libvirt/migration.py:534
Oct 09 16:43:38 compute-0 nova_compute[117331]: 2025-10-09 16:43:38.930 2 DEBUG nova.virt.libvirt.migration [None req-ceed530a-e290-4a44-afa3-258d274da1ed 005258eefa5345409efe13f6a1de22ab c076c42271004e7aa86e84416e33f826 - - default default] _remove_cpu_shared_set_xml input xml=<domain type="kvm">
Oct 09 16:43:38 compute-0 nova_compute[117331]:   <name>instance-00000020</name>
Oct 09 16:43:38 compute-0 nova_compute[117331]:   <uuid>31e9639e-ea7e-41f5-8bd3-2f0344062f99</uuid>
Oct 09 16:43:38 compute-0 nova_compute[117331]:   <metadata>
Oct 09 16:43:38 compute-0 nova_compute[117331]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Oct 09 16:43:38 compute-0 nova_compute[117331]:       <nova:package version="32.1.0-0.20251008143712.076498e.el10"/>
Oct 09 16:43:38 compute-0 nova_compute[117331]:       <nova:name>tempest-TestExecuteZoneMigrationStrategy-server-1385446209</nova:name>
Oct 09 16:43:38 compute-0 nova_compute[117331]:       <nova:creationTime>2025-10-09 16:42:44</nova:creationTime>
Oct 09 16:43:38 compute-0 nova_compute[117331]:       <nova:flavor name="m1.nano" id="5aeb5bf9-70f3-4ab6-b418-2c3319a7d2b3">
Oct 09 16:43:38 compute-0 nova_compute[117331]:         <nova:memory>128</nova:memory>
Oct 09 16:43:38 compute-0 nova_compute[117331]:         <nova:disk>1</nova:disk>
Oct 09 16:43:38 compute-0 nova_compute[117331]:         <nova:swap>0</nova:swap>
Oct 09 16:43:38 compute-0 nova_compute[117331]:         <nova:ephemeral>0</nova:ephemeral>
Oct 09 16:43:38 compute-0 nova_compute[117331]:         <nova:vcpus>1</nova:vcpus>
Oct 09 16:43:38 compute-0 nova_compute[117331]:         <nova:extraSpecs>
Oct 09 16:43:38 compute-0 nova_compute[117331]:           <nova:extraSpec name="hw_rng:allowed">True</nova:extraSpec>
Oct 09 16:43:38 compute-0 nova_compute[117331]:         </nova:extraSpecs>
Oct 09 16:43:38 compute-0 nova_compute[117331]:       </nova:flavor>
Oct 09 16:43:38 compute-0 nova_compute[117331]:       <nova:image uuid="b7d6e0af-25e4-4227-9dc6-43143898ceee">
Oct 09 16:43:38 compute-0 nova_compute[117331]:         <nova:containerFormat>bare</nova:containerFormat>
Oct 09 16:43:38 compute-0 nova_compute[117331]:         <nova:diskFormat>qcow2</nova:diskFormat>
Oct 09 16:43:38 compute-0 nova_compute[117331]:         <nova:minDisk>1</nova:minDisk>
Oct 09 16:43:38 compute-0 nova_compute[117331]:         <nova:minRam>0</nova:minRam>
Oct 09 16:43:38 compute-0 nova_compute[117331]:         <nova:properties>
Oct 09 16:43:38 compute-0 nova_compute[117331]:           <nova:property name="hw_rng_model">virtio</nova:property>
Oct 09 16:43:38 compute-0 nova_compute[117331]:         </nova:properties>
Oct 09 16:43:38 compute-0 nova_compute[117331]:       </nova:image>
Oct 09 16:43:38 compute-0 nova_compute[117331]:       <nova:owner>
Oct 09 16:43:38 compute-0 nova_compute[117331]:         <nova:user uuid="05384d72bc894b20b1cc128bf76382b2">tempest-TestExecuteZoneMigrationStrategy-1067041161-project-admin</nova:user>
Oct 09 16:43:38 compute-0 nova_compute[117331]:         <nova:project uuid="fca25ca2f317463c909e30c8b2c188d1">tempest-TestExecuteZoneMigrationStrategy-1067041161</nova:project>
Oct 09 16:43:38 compute-0 nova_compute[117331]:       </nova:owner>
Oct 09 16:43:38 compute-0 nova_compute[117331]:       <nova:root type="image" uuid="b7d6e0af-25e4-4227-9dc6-43143898ceee"/>
Oct 09 16:43:38 compute-0 nova_compute[117331]:       <nova:ports>
Oct 09 16:43:38 compute-0 nova_compute[117331]:         <nova:port uuid="75dea49d-ac7b-45ed-8521-44a081d00648">
Oct 09 16:43:38 compute-0 nova_compute[117331]:           <nova:ip type="fixed" address="10.100.0.9" ipVersion="4"/>
Oct 09 16:43:38 compute-0 nova_compute[117331]:         </nova:port>
Oct 09 16:43:38 compute-0 nova_compute[117331]:       </nova:ports>
Oct 09 16:43:38 compute-0 nova_compute[117331]:     </nova:instance>
Oct 09 16:43:38 compute-0 nova_compute[117331]:   </metadata>
Oct 09 16:43:38 compute-0 nova_compute[117331]:   <memory unit="KiB">131072</memory>
Oct 09 16:43:38 compute-0 nova_compute[117331]:   <currentMemory unit="KiB">131072</currentMemory>
Oct 09 16:43:38 compute-0 nova_compute[117331]:   <vcpu placement="static">1</vcpu>
Oct 09 16:43:38 compute-0 nova_compute[117331]:   <resource>
Oct 09 16:43:38 compute-0 nova_compute[117331]:     <partition>/machine</partition>
Oct 09 16:43:38 compute-0 nova_compute[117331]:   </resource>
Oct 09 16:43:38 compute-0 nova_compute[117331]:   <sysinfo type="smbios">
Oct 09 16:43:38 compute-0 nova_compute[117331]:     <system>
Oct 09 16:43:38 compute-0 nova_compute[117331]:       <entry name="manufacturer">RDO</entry>
Oct 09 16:43:38 compute-0 nova_compute[117331]:       <entry name="product">OpenStack Compute</entry>
Oct 09 16:43:38 compute-0 nova_compute[117331]:       <entry name="version">32.1.0-0.20251008143712.076498e.el10</entry>
Oct 09 16:43:38 compute-0 nova_compute[117331]:       <entry name="serial">31e9639e-ea7e-41f5-8bd3-2f0344062f99</entry>
Oct 09 16:43:38 compute-0 nova_compute[117331]:       <entry name="uuid">31e9639e-ea7e-41f5-8bd3-2f0344062f99</entry>
Oct 09 16:43:38 compute-0 nova_compute[117331]:       <entry name="family">Virtual Machine</entry>
Oct 09 16:43:38 compute-0 nova_compute[117331]:     </system>
Oct 09 16:43:38 compute-0 nova_compute[117331]:   </sysinfo>
Oct 09 16:43:38 compute-0 nova_compute[117331]:   <os>
Oct 09 16:43:38 compute-0 nova_compute[117331]:     <type arch="x86_64" machine="pc-q35-rhel9.6.0">hvm</type>
Oct 09 16:43:38 compute-0 nova_compute[117331]:     <boot dev="hd"/>
Oct 09 16:43:38 compute-0 nova_compute[117331]:     <smbios mode="sysinfo"/>
Oct 09 16:43:38 compute-0 nova_compute[117331]:   </os>
Oct 09 16:43:38 compute-0 nova_compute[117331]:   <features>
Oct 09 16:43:38 compute-0 nova_compute[117331]:     <acpi/>
Oct 09 16:43:38 compute-0 nova_compute[117331]:     <apic/>
Oct 09 16:43:38 compute-0 nova_compute[117331]:     <vmcoreinfo state="on"/>
Oct 09 16:43:38 compute-0 nova_compute[117331]:   </features>
Oct 09 16:43:38 compute-0 nova_compute[117331]:   <cpu mode="host-model" check="partial">
Oct 09 16:43:38 compute-0 nova_compute[117331]:     <topology sockets="1" dies="1" clusters="1" cores="1" threads="1"/>
Oct 09 16:43:38 compute-0 nova_compute[117331]:   </cpu>
Oct 09 16:43:38 compute-0 nova_compute[117331]:   <clock offset="utc">
Oct 09 16:43:38 compute-0 nova_compute[117331]:     <timer name="pit" tickpolicy="delay"/>
Oct 09 16:43:38 compute-0 nova_compute[117331]:     <timer name="rtc" tickpolicy="catchup"/>
Oct 09 16:43:38 compute-0 nova_compute[117331]:     <timer name="hpet" present="no"/>
Oct 09 16:43:38 compute-0 nova_compute[117331]:   </clock>
Oct 09 16:43:38 compute-0 nova_compute[117331]:   <on_poweroff>destroy</on_poweroff>
Oct 09 16:43:38 compute-0 nova_compute[117331]:   <on_reboot>restart</on_reboot>
Oct 09 16:43:38 compute-0 nova_compute[117331]:   <on_crash>destroy</on_crash>
Oct 09 16:43:38 compute-0 nova_compute[117331]:   <devices>
Oct 09 16:43:38 compute-0 nova_compute[117331]:     <emulator>/usr/libexec/qemu-kvm</emulator>
Oct 09 16:43:38 compute-0 nova_compute[117331]:     <disk type="file" device="disk">
Oct 09 16:43:38 compute-0 nova_compute[117331]:       <driver name="qemu" type="qcow2" cache="none"/>
Oct 09 16:43:38 compute-0 nova_compute[117331]:       <source file="/var/lib/nova/instances/31e9639e-ea7e-41f5-8bd3-2f0344062f99/disk"/>
Oct 09 16:43:38 compute-0 nova_compute[117331]:       <target dev="vda" bus="virtio"/>
Oct 09 16:43:38 compute-0 nova_compute[117331]:       <address type="pci" domain="0x0000" bus="0x03" slot="0x00" function="0x0"/>
Oct 09 16:43:38 compute-0 nova_compute[117331]:     </disk>
Oct 09 16:43:38 compute-0 nova_compute[117331]:     <disk type="file" device="cdrom">
Oct 09 16:43:38 compute-0 nova_compute[117331]:       <driver name="qemu" type="raw" cache="none"/>
Oct 09 16:43:38 compute-0 nova_compute[117331]:       <source file="/var/lib/nova/instances/31e9639e-ea7e-41f5-8bd3-2f0344062f99/disk.config"/>
Oct 09 16:43:38 compute-0 nova_compute[117331]:       <target dev="sda" bus="sata"/>
Oct 09 16:43:38 compute-0 nova_compute[117331]:       <readonly/>
Oct 09 16:43:38 compute-0 nova_compute[117331]:       <address type="drive" controller="0" bus="0" target="0" unit="0"/>
Oct 09 16:43:38 compute-0 nova_compute[117331]:     </disk>
Oct 09 16:43:38 compute-0 nova_compute[117331]:     <controller type="pci" index="0" model="pcie-root"/>
Oct 09 16:43:38 compute-0 nova_compute[117331]:     <controller type="pci" index="1" model="pcie-root-port">
Oct 09 16:43:38 compute-0 nova_compute[117331]:       <model name="pcie-root-port"/>
Oct 09 16:43:38 compute-0 nova_compute[117331]:       <target chassis="1" port="0x10"/>
Oct 09 16:43:38 compute-0 nova_compute[117331]:       <address type="pci" domain="0x0000" bus="0x00" slot="0x02" function="0x0" multifunction="on"/>
Oct 09 16:43:38 compute-0 nova_compute[117331]:     </controller>
Oct 09 16:43:38 compute-0 nova_compute[117331]:     <controller type="pci" index="2" model="pcie-root-port">
Oct 09 16:43:38 compute-0 nova_compute[117331]:       <model name="pcie-root-port"/>
Oct 09 16:43:38 compute-0 nova_compute[117331]:       <target chassis="2" port="0x11"/>
Oct 09 16:43:38 compute-0 nova_compute[117331]:       <address type="pci" domain="0x0000" bus="0x00" slot="0x02" function="0x1"/>
Oct 09 16:43:38 compute-0 nova_compute[117331]:     </controller>
Oct 09 16:43:38 compute-0 nova_compute[117331]:     <controller type="pci" index="3" model="pcie-root-port">
Oct 09 16:43:38 compute-0 nova_compute[117331]:       <model name="pcie-root-port"/>
Oct 09 16:43:38 compute-0 nova_compute[117331]:       <target chassis="3" port="0x12"/>
Oct 09 16:43:38 compute-0 nova_compute[117331]:       <address type="pci" domain="0x0000" bus="0x00" slot="0x02" function="0x2"/>
Oct 09 16:43:38 compute-0 nova_compute[117331]:     </controller>
Oct 09 16:43:38 compute-0 nova_compute[117331]:     <controller type="pci" index="4" model="pcie-root-port">
Oct 09 16:43:38 compute-0 nova_compute[117331]:       <model name="pcie-root-port"/>
Oct 09 16:43:38 compute-0 nova_compute[117331]:       <target chassis="4" port="0x13"/>
Oct 09 16:43:38 compute-0 nova_compute[117331]:       <address type="pci" domain="0x0000" bus="0x00" slot="0x02" function="0x3"/>
Oct 09 16:43:38 compute-0 nova_compute[117331]:     </controller>
Oct 09 16:43:38 compute-0 nova_compute[117331]:     <controller type="pci" index="5" model="pcie-root-port">
Oct 09 16:43:38 compute-0 nova_compute[117331]:       <model name="pcie-root-port"/>
Oct 09 16:43:38 compute-0 nova_compute[117331]:       <target chassis="5" port="0x14"/>
Oct 09 16:43:38 compute-0 nova_compute[117331]:       <address type="pci" domain="0x0000" bus="0x00" slot="0x02" function="0x4"/>
Oct 09 16:43:38 compute-0 nova_compute[117331]:     </controller>
Oct 09 16:43:38 compute-0 nova_compute[117331]:     <controller type="pci" index="6" model="pcie-root-port">
Oct 09 16:43:38 compute-0 nova_compute[117331]:       <model name="pcie-root-port"/>
Oct 09 16:43:38 compute-0 nova_compute[117331]:       <target chassis="6" port="0x15"/>
Oct 09 16:43:38 compute-0 nova_compute[117331]:       <address type="pci" domain="0x0000" bus="0x00" slot="0x02" function="0x5"/>
Oct 09 16:43:38 compute-0 nova_compute[117331]:     </controller>
Oct 09 16:43:38 compute-0 nova_compute[117331]:     <controller type="pci" index="7" model="pcie-root-port">
Oct 09 16:43:38 compute-0 nova_compute[117331]:       <model name="pcie-root-port"/>
Oct 09 16:43:38 compute-0 nova_compute[117331]:       <target chassis="7" port="0x16"/>
Oct 09 16:43:38 compute-0 nova_compute[117331]:       <address type="pci" domain="0x0000" bus="0x00" slot="0x02" function="0x6"/>
Oct 09 16:43:38 compute-0 nova_compute[117331]:     </controller>
Oct 09 16:43:38 compute-0 nova_compute[117331]:     <controller type="pci" index="8" model="pcie-root-port">
Oct 09 16:43:38 compute-0 nova_compute[117331]:       <model name="pcie-root-port"/>
Oct 09 16:43:38 compute-0 nova_compute[117331]:       <target chassis="8" port="0x17"/>
Oct 09 16:43:38 compute-0 nova_compute[117331]:       <address type="pci" domain="0x0000" bus="0x00" slot="0x02" function="0x7"/>
Oct 09 16:43:38 compute-0 nova_compute[117331]:     </controller>
Oct 09 16:43:38 compute-0 nova_compute[117331]:     <controller type="pci" index="9" model="pcie-root-port">
Oct 09 16:43:38 compute-0 nova_compute[117331]:       <model name="pcie-root-port"/>
Oct 09 16:43:38 compute-0 nova_compute[117331]:       <target chassis="9" port="0x18"/>
Oct 09 16:43:38 compute-0 nova_compute[117331]:       <address type="pci" domain="0x0000" bus="0x00" slot="0x03" function="0x0" multifunction="on"/>
Oct 09 16:43:38 compute-0 nova_compute[117331]:     </controller>
Oct 09 16:43:38 compute-0 nova_compute[117331]:     <controller type="pci" index="10" model="pcie-root-port">
Oct 09 16:43:38 compute-0 nova_compute[117331]:       <model name="pcie-root-port"/>
Oct 09 16:43:38 compute-0 nova_compute[117331]:       <target chassis="10" port="0x19"/>
Oct 09 16:43:38 compute-0 nova_compute[117331]:       <address type="pci" domain="0x0000" bus="0x00" slot="0x03" function="0x1"/>
Oct 09 16:43:38 compute-0 nova_compute[117331]:     </controller>
Oct 09 16:43:38 compute-0 nova_compute[117331]:     <controller type="pci" index="11" model="pcie-root-port">
Oct 09 16:43:38 compute-0 nova_compute[117331]:       <model name="pcie-root-port"/>
Oct 09 16:43:38 compute-0 nova_compute[117331]:       <target chassis="11" port="0x1a"/>
Oct 09 16:43:38 compute-0 nova_compute[117331]:       <address type="pci" domain="0x0000" bus="0x00" slot="0x03" function="0x2"/>
Oct 09 16:43:38 compute-0 nova_compute[117331]:     </controller>
Oct 09 16:43:38 compute-0 nova_compute[117331]:     <controller type="pci" index="12" model="pcie-root-port">
Oct 09 16:43:38 compute-0 nova_compute[117331]:       <model name="pcie-root-port"/>
Oct 09 16:43:38 compute-0 nova_compute[117331]:       <target chassis="12" port="0x1b"/>
Oct 09 16:43:38 compute-0 nova_compute[117331]:       <address type="pci" domain="0x0000" bus="0x00" slot="0x03" function="0x3"/>
Oct 09 16:43:38 compute-0 nova_compute[117331]:     </controller>
Oct 09 16:43:38 compute-0 nova_compute[117331]:     <controller type="pci" index="13" model="pcie-root-port">
Oct 09 16:43:38 compute-0 nova_compute[117331]:       <model name="pcie-root-port"/>
Oct 09 16:43:38 compute-0 nova_compute[117331]:       <target chassis="13" port="0x1c"/>
Oct 09 16:43:38 compute-0 nova_compute[117331]:       <address type="pci" domain="0x0000" bus="0x00" slot="0x03" function="0x4"/>
Oct 09 16:43:38 compute-0 nova_compute[117331]:     </controller>
Oct 09 16:43:38 compute-0 nova_compute[117331]:     <controller type="pci" index="14" model="pcie-root-port">
Oct 09 16:43:38 compute-0 nova_compute[117331]:       <model name="pcie-root-port"/>
Oct 09 16:43:38 compute-0 nova_compute[117331]:       <target chassis="14" port="0x1d"/>
Oct 09 16:43:38 compute-0 nova_compute[117331]:       <address type="pci" domain="0x0000" bus="0x00" slot="0x03" function="0x5"/>
Oct 09 16:43:38 compute-0 nova_compute[117331]:     </controller>
Oct 09 16:43:38 compute-0 nova_compute[117331]:     <controller type="pci" index="15" model="pcie-root-port">
Oct 09 16:43:38 compute-0 nova_compute[117331]:       <model name="pcie-root-port"/>
Oct 09 16:43:38 compute-0 nova_compute[117331]:       <target chassis="15" port="0x1e"/>
Oct 09 16:43:38 compute-0 nova_compute[117331]:       <address type="pci" domain="0x0000" bus="0x00" slot="0x03" function="0x6"/>
Oct 09 16:43:38 compute-0 nova_compute[117331]:     </controller>
Oct 09 16:43:38 compute-0 nova_compute[117331]:     <controller type="pci" index="16" model="pcie-root-port">
Oct 09 16:43:38 compute-0 nova_compute[117331]:       <model name="pcie-root-port"/>
Oct 09 16:43:38 compute-0 nova_compute[117331]:       <target chassis="16" port="0x1f"/>
Oct 09 16:43:38 compute-0 nova_compute[117331]:       <address type="pci" domain="0x0000" bus="0x00" slot="0x03" function="0x7"/>
Oct 09 16:43:38 compute-0 nova_compute[117331]:     </controller>
Oct 09 16:43:38 compute-0 nova_compute[117331]:     <controller type="pci" index="17" model="pcie-root-port">
Oct 09 16:43:38 compute-0 nova_compute[117331]:       <model name="pcie-root-port"/>
Oct 09 16:43:38 compute-0 nova_compute[117331]:       <target chassis="17" port="0x20"/>
Oct 09 16:43:38 compute-0 nova_compute[117331]:       <address type="pci" domain="0x0000" bus="0x00" slot="0x04" function="0x0" multifunction="on"/>
Oct 09 16:43:38 compute-0 nova_compute[117331]:     </controller>
Oct 09 16:43:38 compute-0 nova_compute[117331]:     <controller type="pci" index="18" model="pcie-root-port">
Oct 09 16:43:38 compute-0 nova_compute[117331]:       <model name="pcie-root-port"/>
Oct 09 16:43:38 compute-0 nova_compute[117331]:       <target chassis="18" port="0x21"/>
Oct 09 16:43:38 compute-0 nova_compute[117331]:       <address type="pci" domain="0x0000" bus="0x00" slot="0x04" function="0x1"/>
Oct 09 16:43:38 compute-0 nova_compute[117331]:     </controller>
Oct 09 16:43:38 compute-0 nova_compute[117331]:     <controller type="pci" index="19" model="pcie-root-port">
Oct 09 16:43:38 compute-0 nova_compute[117331]:       <model name="pcie-root-port"/>
Oct 09 16:43:38 compute-0 nova_compute[117331]:       <target chassis="19" port="0x22"/>
Oct 09 16:43:38 compute-0 nova_compute[117331]:       <address type="pci" domain="0x0000" bus="0x00" slot="0x04" function="0x2"/>
Oct 09 16:43:38 compute-0 nova_compute[117331]:     </controller>
Oct 09 16:43:38 compute-0 nova_compute[117331]:     <controller type="pci" index="20" model="pcie-root-port">
Oct 09 16:43:38 compute-0 nova_compute[117331]:       <model name="pcie-root-port"/>
Oct 09 16:43:38 compute-0 nova_compute[117331]:       <target chassis="20" port="0x23"/>
Oct 09 16:43:38 compute-0 nova_compute[117331]:       <address type="pci" domain="0x0000" bus="0x00" slot="0x04" function="0x3"/>
Oct 09 16:43:38 compute-0 nova_compute[117331]:     </controller>
Oct 09 16:43:38 compute-0 nova_compute[117331]:     <controller type="pci" index="21" model="pcie-root-port">
Oct 09 16:43:38 compute-0 nova_compute[117331]:       <model name="pcie-root-port"/>
Oct 09 16:43:38 compute-0 nova_compute[117331]:       <target chassis="21" port="0x24"/>
Oct 09 16:43:38 compute-0 nova_compute[117331]:       <address type="pci" domain="0x0000" bus="0x00" slot="0x04" function="0x4"/>
Oct 09 16:43:38 compute-0 nova_compute[117331]:     </controller>
Oct 09 16:43:38 compute-0 nova_compute[117331]:     <controller type="pci" index="22" model="pcie-root-port">
Oct 09 16:43:38 compute-0 nova_compute[117331]:       <model name="pcie-root-port"/>
Oct 09 16:43:38 compute-0 nova_compute[117331]:       <target chassis="22" port="0x25"/>
Oct 09 16:43:38 compute-0 nova_compute[117331]:       <address type="pci" domain="0x0000" bus="0x00" slot="0x04" function="0x5"/>
Oct 09 16:43:38 compute-0 nova_compute[117331]:     </controller>
Oct 09 16:43:38 compute-0 nova_compute[117331]:     <controller type="pci" index="23" model="pcie-root-port">
Oct 09 16:43:38 compute-0 nova_compute[117331]:       <model name="pcie-root-port"/>
Oct 09 16:43:38 compute-0 nova_compute[117331]:       <target chassis="23" port="0x26"/>
Oct 09 16:43:38 compute-0 nova_compute[117331]:       <address type="pci" domain="0x0000" bus="0x00" slot="0x04" function="0x6"/>
Oct 09 16:43:38 compute-0 nova_compute[117331]:     </controller>
Oct 09 16:43:38 compute-0 nova_compute[117331]:     <controller type="pci" index="24" model="pcie-root-port">
Oct 09 16:43:38 compute-0 nova_compute[117331]:       <model name="pcie-root-port"/>
Oct 09 16:43:38 compute-0 nova_compute[117331]:       <target chassis="24" port="0x27"/>
Oct 09 16:43:38 compute-0 nova_compute[117331]:       <address type="pci" domain="0x0000" bus="0x00" slot="0x04" function="0x7"/>
Oct 09 16:43:38 compute-0 nova_compute[117331]:     </controller>
Oct 09 16:43:38 compute-0 nova_compute[117331]:     <controller type="pci" index="25" model="pcie-root-port">
Oct 09 16:43:38 compute-0 nova_compute[117331]:       <model name="pcie-root-port"/>
Oct 09 16:43:38 compute-0 nova_compute[117331]:       <target chassis="25" port="0x28"/>
Oct 09 16:43:38 compute-0 nova_compute[117331]:       <address type="pci" domain="0x0000" bus="0x00" slot="0x05" function="0x0"/>
Oct 09 16:43:38 compute-0 nova_compute[117331]:     </controller>
Oct 09 16:43:38 compute-0 nova_compute[117331]:     <controller type="pci" index="26" model="pcie-to-pci-bridge">
Oct 09 16:43:38 compute-0 nova_compute[117331]:       <model name="pcie-pci-bridge"/>
Oct 09 16:43:38 compute-0 nova_compute[117331]:       <address type="pci" domain="0x0000" bus="0x01" slot="0x00" function="0x0"/>
Oct 09 16:43:38 compute-0 nova_compute[117331]:     </controller>
Oct 09 16:43:38 compute-0 nova_compute[117331]:     <controller type="usb" index="0" model="piix3-uhci">
Oct 09 16:43:38 compute-0 nova_compute[117331]:       <address type="pci" domain="0x0000" bus="0x1a" slot="0x01" function="0x0"/>
Oct 09 16:43:38 compute-0 nova_compute[117331]:     </controller>
Oct 09 16:43:38 compute-0 nova_compute[117331]:     <controller type="sata" index="0">
Oct 09 16:43:38 compute-0 nova_compute[117331]:       <address type="pci" domain="0x0000" bus="0x00" slot="0x1f" function="0x2"/>
Oct 09 16:43:38 compute-0 nova_compute[117331]:     </controller>
Oct 09 16:43:38 compute-0 nova_compute[117331]:     <interface type="ethernet"><mac address="fa:16:3e:a9:96:a6"/><model type="virtio"/><driver name="vhost" rx_queue_size="512"/><mtu size="1442"/><target dev="tap75dea49d-ac"/><address type="pci" domain="0x0000" bus="0x02" slot="0x00" function="0x0"/>
Oct 09 16:43:38 compute-0 nova_compute[117331]:     </interface><serial type="pty">
Oct 09 16:43:38 compute-0 nova_compute[117331]:       <log file="/var/lib/nova/instances/31e9639e-ea7e-41f5-8bd3-2f0344062f99/console.log" append="off"/>
Oct 09 16:43:38 compute-0 nova_compute[117331]:       <target type="isa-serial" port="0">
Oct 09 16:43:38 compute-0 nova_compute[117331]:         <model name="isa-serial"/>
Oct 09 16:43:38 compute-0 nova_compute[117331]:       </target>
Oct 09 16:43:38 compute-0 nova_compute[117331]:     </serial>
Oct 09 16:43:38 compute-0 nova_compute[117331]:     <console type="pty">
Oct 09 16:43:38 compute-0 nova_compute[117331]:       <log file="/var/lib/nova/instances/31e9639e-ea7e-41f5-8bd3-2f0344062f99/console.log" append="off"/>
Oct 09 16:43:38 compute-0 nova_compute[117331]:       <target type="serial" port="0"/>
Oct 09 16:43:38 compute-0 nova_compute[117331]:     </console>
Oct 09 16:43:38 compute-0 nova_compute[117331]:     <input type="tablet" bus="usb">
Oct 09 16:43:38 compute-0 nova_compute[117331]:       <address type="usb" bus="0" port="1"/>
Oct 09 16:43:38 compute-0 nova_compute[117331]:     </input>
Oct 09 16:43:38 compute-0 nova_compute[117331]:     <input type="mouse" bus="ps2"/>
Oct 09 16:43:38 compute-0 nova_compute[117331]:     <graphics type="vnc" port="-1" autoport="yes" listen="::">
Oct 09 16:43:38 compute-0 nova_compute[117331]:       <listen type="address" address="::"/>
Oct 09 16:43:38 compute-0 nova_compute[117331]:     </graphics>
Oct 09 16:43:38 compute-0 nova_compute[117331]:     <video>
Oct 09 16:43:38 compute-0 nova_compute[117331]:       <model type="virtio" heads="1" primary="yes"/>
Oct 09 16:43:38 compute-0 nova_compute[117331]:       <address type="pci" domain="0x0000" bus="0x00" slot="0x01" function="0x0"/>
Oct 09 16:43:38 compute-0 nova_compute[117331]:     </video>
Oct 09 16:43:38 compute-0 nova_compute[117331]:     <memballoon model="virtio" autodeflate="on" freePageReporting="on">
Oct 09 16:43:38 compute-0 nova_compute[117331]:       <stats period="10"/>
Oct 09 16:43:38 compute-0 nova_compute[117331]:       <address type="pci" domain="0x0000" bus="0x04" slot="0x00" function="0x0"/>
Oct 09 16:43:38 compute-0 nova_compute[117331]:     </memballoon>
Oct 09 16:43:38 compute-0 nova_compute[117331]:     <rng model="virtio">
Oct 09 16:43:38 compute-0 nova_compute[117331]:       <backend model="random">/dev/urandom</backend>
Oct 09 16:43:38 compute-0 nova_compute[117331]:       <address type="pci" domain="0x0000" bus="0x05" slot="0x00" function="0x0"/>
Oct 09 16:43:38 compute-0 nova_compute[117331]:     </rng>
Oct 09 16:43:38 compute-0 nova_compute[117331]:   </devices>
Oct 09 16:43:38 compute-0 nova_compute[117331]:   <seclabel type="dynamic" model="selinux" relabel="yes"/>
Oct 09 16:43:38 compute-0 nova_compute[117331]: </domain>
Oct 09 16:43:38 compute-0 nova_compute[117331]:  _remove_cpu_shared_set_xml /usr/lib/python3.12/site-packages/nova/virt/libvirt/migration.py:241
Oct 09 16:43:38 compute-0 nova_compute[117331]: 2025-10-09 16:43:38.931 2 DEBUG nova.virt.libvirt.migration [None req-ceed530a-e290-4a44-afa3-258d274da1ed 005258eefa5345409efe13f6a1de22ab c076c42271004e7aa86e84416e33f826 - - default default] _remove_cpu_shared_set_xml output xml=<domain type="kvm">
Oct 09 16:43:38 compute-0 nova_compute[117331]:   <name>instance-00000020</name>
Oct 09 16:43:38 compute-0 nova_compute[117331]:   <uuid>31e9639e-ea7e-41f5-8bd3-2f0344062f99</uuid>
Oct 09 16:43:38 compute-0 nova_compute[117331]:   <metadata>
Oct 09 16:43:38 compute-0 nova_compute[117331]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Oct 09 16:43:38 compute-0 nova_compute[117331]:       <nova:package version="32.1.0-0.20251008143712.076498e.el10"/>
Oct 09 16:43:38 compute-0 nova_compute[117331]:       <nova:name>tempest-TestExecuteZoneMigrationStrategy-server-1385446209</nova:name>
Oct 09 16:43:38 compute-0 nova_compute[117331]:       <nova:creationTime>2025-10-09 16:42:44</nova:creationTime>
Oct 09 16:43:38 compute-0 nova_compute[117331]:       <nova:flavor name="m1.nano" id="5aeb5bf9-70f3-4ab6-b418-2c3319a7d2b3">
Oct 09 16:43:38 compute-0 nova_compute[117331]:         <nova:memory>128</nova:memory>
Oct 09 16:43:38 compute-0 nova_compute[117331]:         <nova:disk>1</nova:disk>
Oct 09 16:43:38 compute-0 nova_compute[117331]:         <nova:swap>0</nova:swap>
Oct 09 16:43:38 compute-0 nova_compute[117331]:         <nova:ephemeral>0</nova:ephemeral>
Oct 09 16:43:38 compute-0 nova_compute[117331]:         <nova:vcpus>1</nova:vcpus>
Oct 09 16:43:38 compute-0 nova_compute[117331]:         <nova:extraSpecs>
Oct 09 16:43:38 compute-0 nova_compute[117331]:           <nova:extraSpec name="hw_rng:allowed">True</nova:extraSpec>
Oct 09 16:43:38 compute-0 nova_compute[117331]:         </nova:extraSpecs>
Oct 09 16:43:38 compute-0 nova_compute[117331]:       </nova:flavor>
Oct 09 16:43:38 compute-0 nova_compute[117331]:       <nova:image uuid="b7d6e0af-25e4-4227-9dc6-43143898ceee">
Oct 09 16:43:38 compute-0 nova_compute[117331]:         <nova:containerFormat>bare</nova:containerFormat>
Oct 09 16:43:38 compute-0 nova_compute[117331]:         <nova:diskFormat>qcow2</nova:diskFormat>
Oct 09 16:43:38 compute-0 nova_compute[117331]:         <nova:minDisk>1</nova:minDisk>
Oct 09 16:43:38 compute-0 nova_compute[117331]:         <nova:minRam>0</nova:minRam>
Oct 09 16:43:38 compute-0 nova_compute[117331]:         <nova:properties>
Oct 09 16:43:38 compute-0 nova_compute[117331]:           <nova:property name="hw_rng_model">virtio</nova:property>
Oct 09 16:43:38 compute-0 nova_compute[117331]:         </nova:properties>
Oct 09 16:43:38 compute-0 nova_compute[117331]:       </nova:image>
Oct 09 16:43:38 compute-0 nova_compute[117331]:       <nova:owner>
Oct 09 16:43:38 compute-0 nova_compute[117331]:         <nova:user uuid="05384d72bc894b20b1cc128bf76382b2">tempest-TestExecuteZoneMigrationStrategy-1067041161-project-admin</nova:user>
Oct 09 16:43:38 compute-0 nova_compute[117331]:         <nova:project uuid="fca25ca2f317463c909e30c8b2c188d1">tempest-TestExecuteZoneMigrationStrategy-1067041161</nova:project>
Oct 09 16:43:38 compute-0 nova_compute[117331]:       </nova:owner>
Oct 09 16:43:38 compute-0 nova_compute[117331]:       <nova:root type="image" uuid="b7d6e0af-25e4-4227-9dc6-43143898ceee"/>
Oct 09 16:43:38 compute-0 nova_compute[117331]:       <nova:ports>
Oct 09 16:43:38 compute-0 nova_compute[117331]:         <nova:port uuid="75dea49d-ac7b-45ed-8521-44a081d00648">
Oct 09 16:43:38 compute-0 nova_compute[117331]:           <nova:ip type="fixed" address="10.100.0.9" ipVersion="4"/>
Oct 09 16:43:38 compute-0 nova_compute[117331]:         </nova:port>
Oct 09 16:43:38 compute-0 nova_compute[117331]:       </nova:ports>
Oct 09 16:43:38 compute-0 nova_compute[117331]:     </nova:instance>
Oct 09 16:43:38 compute-0 nova_compute[117331]:   </metadata>
Oct 09 16:43:38 compute-0 nova_compute[117331]:   <memory unit="KiB">131072</memory>
Oct 09 16:43:38 compute-0 nova_compute[117331]:   <currentMemory unit="KiB">131072</currentMemory>
Oct 09 16:43:38 compute-0 nova_compute[117331]:   <vcpu placement="static">1</vcpu>
Oct 09 16:43:38 compute-0 nova_compute[117331]:   <resource>
Oct 09 16:43:38 compute-0 nova_compute[117331]:     <partition>/machine</partition>
Oct 09 16:43:38 compute-0 nova_compute[117331]:   </resource>
Oct 09 16:43:38 compute-0 nova_compute[117331]:   <sysinfo type="smbios">
Oct 09 16:43:38 compute-0 nova_compute[117331]:     <system>
Oct 09 16:43:38 compute-0 nova_compute[117331]:       <entry name="manufacturer">RDO</entry>
Oct 09 16:43:38 compute-0 nova_compute[117331]:       <entry name="product">OpenStack Compute</entry>
Oct 09 16:43:38 compute-0 nova_compute[117331]:       <entry name="version">32.1.0-0.20251008143712.076498e.el10</entry>
Oct 09 16:43:38 compute-0 nova_compute[117331]:       <entry name="serial">31e9639e-ea7e-41f5-8bd3-2f0344062f99</entry>
Oct 09 16:43:38 compute-0 nova_compute[117331]:       <entry name="uuid">31e9639e-ea7e-41f5-8bd3-2f0344062f99</entry>
Oct 09 16:43:38 compute-0 nova_compute[117331]:       <entry name="family">Virtual Machine</entry>
Oct 09 16:43:38 compute-0 nova_compute[117331]:     </system>
Oct 09 16:43:38 compute-0 nova_compute[117331]:   </sysinfo>
Oct 09 16:43:38 compute-0 nova_compute[117331]:   <os>
Oct 09 16:43:38 compute-0 nova_compute[117331]:     <type arch="x86_64" machine="pc-q35-rhel9.6.0">hvm</type>
Oct 09 16:43:38 compute-0 nova_compute[117331]:     <boot dev="hd"/>
Oct 09 16:43:38 compute-0 nova_compute[117331]:     <smbios mode="sysinfo"/>
Oct 09 16:43:38 compute-0 nova_compute[117331]:   </os>
Oct 09 16:43:38 compute-0 nova_compute[117331]:   <features>
Oct 09 16:43:38 compute-0 nova_compute[117331]:     <acpi/>
Oct 09 16:43:38 compute-0 nova_compute[117331]:     <apic/>
Oct 09 16:43:38 compute-0 nova_compute[117331]:     <vmcoreinfo state="on"/>
Oct 09 16:43:38 compute-0 nova_compute[117331]:   </features>
Oct 09 16:43:38 compute-0 nova_compute[117331]:   <cpu mode="host-model" check="partial">
Oct 09 16:43:38 compute-0 nova_compute[117331]:     <topology sockets="1" dies="1" clusters="1" cores="1" threads="1"/>
Oct 09 16:43:38 compute-0 nova_compute[117331]:   </cpu>
Oct 09 16:43:38 compute-0 nova_compute[117331]:   <clock offset="utc">
Oct 09 16:43:38 compute-0 nova_compute[117331]:     <timer name="pit" tickpolicy="delay"/>
Oct 09 16:43:38 compute-0 nova_compute[117331]:     <timer name="rtc" tickpolicy="catchup"/>
Oct 09 16:43:38 compute-0 nova_compute[117331]:     <timer name="hpet" present="no"/>
Oct 09 16:43:38 compute-0 nova_compute[117331]:   </clock>
Oct 09 16:43:38 compute-0 nova_compute[117331]:   <on_poweroff>destroy</on_poweroff>
Oct 09 16:43:38 compute-0 nova_compute[117331]:   <on_reboot>restart</on_reboot>
Oct 09 16:43:38 compute-0 nova_compute[117331]:   <on_crash>destroy</on_crash>
Oct 09 16:43:38 compute-0 nova_compute[117331]:   <devices>
Oct 09 16:43:38 compute-0 nova_compute[117331]:     <emulator>/usr/libexec/qemu-kvm</emulator>
Oct 09 16:43:38 compute-0 nova_compute[117331]:     <disk type="file" device="disk">
Oct 09 16:43:38 compute-0 nova_compute[117331]:       <driver name="qemu" type="qcow2" cache="none"/>
Oct 09 16:43:38 compute-0 nova_compute[117331]:       <source file="/var/lib/nova/instances/31e9639e-ea7e-41f5-8bd3-2f0344062f99/disk"/>
Oct 09 16:43:38 compute-0 nova_compute[117331]:       <target dev="vda" bus="virtio"/>
Oct 09 16:43:38 compute-0 nova_compute[117331]:       <address type="pci" domain="0x0000" bus="0x03" slot="0x00" function="0x0"/>
Oct 09 16:43:38 compute-0 nova_compute[117331]:     </disk>
Oct 09 16:43:38 compute-0 nova_compute[117331]:     <disk type="file" device="cdrom">
Oct 09 16:43:38 compute-0 nova_compute[117331]:       <driver name="qemu" type="raw" cache="none"/>
Oct 09 16:43:38 compute-0 nova_compute[117331]:       <source file="/var/lib/nova/instances/31e9639e-ea7e-41f5-8bd3-2f0344062f99/disk.config"/>
Oct 09 16:43:38 compute-0 nova_compute[117331]:       <target dev="sda" bus="sata"/>
Oct 09 16:43:38 compute-0 nova_compute[117331]:       <readonly/>
Oct 09 16:43:38 compute-0 nova_compute[117331]:       <address type="drive" controller="0" bus="0" target="0" unit="0"/>
Oct 09 16:43:38 compute-0 nova_compute[117331]:     </disk>
Oct 09 16:43:38 compute-0 nova_compute[117331]:     <controller type="pci" index="0" model="pcie-root"/>
Oct 09 16:43:38 compute-0 nova_compute[117331]:     <controller type="pci" index="1" model="pcie-root-port">
Oct 09 16:43:38 compute-0 nova_compute[117331]:       <model name="pcie-root-port"/>
Oct 09 16:43:38 compute-0 nova_compute[117331]:       <target chassis="1" port="0x10"/>
Oct 09 16:43:38 compute-0 nova_compute[117331]:       <address type="pci" domain="0x0000" bus="0x00" slot="0x02" function="0x0" multifunction="on"/>
Oct 09 16:43:38 compute-0 nova_compute[117331]:     </controller>
Oct 09 16:43:38 compute-0 nova_compute[117331]:     <controller type="pci" index="2" model="pcie-root-port">
Oct 09 16:43:38 compute-0 nova_compute[117331]:       <model name="pcie-root-port"/>
Oct 09 16:43:38 compute-0 nova_compute[117331]:       <target chassis="2" port="0x11"/>
Oct 09 16:43:38 compute-0 nova_compute[117331]:       <address type="pci" domain="0x0000" bus="0x00" slot="0x02" function="0x1"/>
Oct 09 16:43:38 compute-0 nova_compute[117331]:     </controller>
Oct 09 16:43:38 compute-0 nova_compute[117331]:     <controller type="pci" index="3" model="pcie-root-port">
Oct 09 16:43:38 compute-0 nova_compute[117331]:       <model name="pcie-root-port"/>
Oct 09 16:43:38 compute-0 nova_compute[117331]:       <target chassis="3" port="0x12"/>
Oct 09 16:43:38 compute-0 nova_compute[117331]:       <address type="pci" domain="0x0000" bus="0x00" slot="0x02" function="0x2"/>
Oct 09 16:43:38 compute-0 nova_compute[117331]:     </controller>
Oct 09 16:43:38 compute-0 nova_compute[117331]:     <controller type="pci" index="4" model="pcie-root-port">
Oct 09 16:43:38 compute-0 nova_compute[117331]:       <model name="pcie-root-port"/>
Oct 09 16:43:38 compute-0 nova_compute[117331]:       <target chassis="4" port="0x13"/>
Oct 09 16:43:38 compute-0 nova_compute[117331]:       <address type="pci" domain="0x0000" bus="0x00" slot="0x02" function="0x3"/>
Oct 09 16:43:38 compute-0 nova_compute[117331]:     </controller>
Oct 09 16:43:38 compute-0 nova_compute[117331]:     <controller type="pci" index="5" model="pcie-root-port">
Oct 09 16:43:38 compute-0 nova_compute[117331]:       <model name="pcie-root-port"/>
Oct 09 16:43:38 compute-0 nova_compute[117331]:       <target chassis="5" port="0x14"/>
Oct 09 16:43:38 compute-0 nova_compute[117331]:       <address type="pci" domain="0x0000" bus="0x00" slot="0x02" function="0x4"/>
Oct 09 16:43:38 compute-0 nova_compute[117331]:     </controller>
Oct 09 16:43:38 compute-0 nova_compute[117331]:     <controller type="pci" index="6" model="pcie-root-port">
Oct 09 16:43:38 compute-0 nova_compute[117331]:       <model name="pcie-root-port"/>
Oct 09 16:43:38 compute-0 nova_compute[117331]:       <target chassis="6" port="0x15"/>
Oct 09 16:43:38 compute-0 nova_compute[117331]:       <address type="pci" domain="0x0000" bus="0x00" slot="0x02" function="0x5"/>
Oct 09 16:43:38 compute-0 nova_compute[117331]:     </controller>
Oct 09 16:43:38 compute-0 nova_compute[117331]:     <controller type="pci" index="7" model="pcie-root-port">
Oct 09 16:43:38 compute-0 nova_compute[117331]:       <model name="pcie-root-port"/>
Oct 09 16:43:38 compute-0 nova_compute[117331]:       <target chassis="7" port="0x16"/>
Oct 09 16:43:38 compute-0 nova_compute[117331]:       <address type="pci" domain="0x0000" bus="0x00" slot="0x02" function="0x6"/>
Oct 09 16:43:38 compute-0 nova_compute[117331]:     </controller>
Oct 09 16:43:38 compute-0 nova_compute[117331]:     <controller type="pci" index="8" model="pcie-root-port">
Oct 09 16:43:38 compute-0 nova_compute[117331]:       <model name="pcie-root-port"/>
Oct 09 16:43:38 compute-0 nova_compute[117331]:       <target chassis="8" port="0x17"/>
Oct 09 16:43:38 compute-0 nova_compute[117331]:       <address type="pci" domain="0x0000" bus="0x00" slot="0x02" function="0x7"/>
Oct 09 16:43:38 compute-0 nova_compute[117331]:     </controller>
Oct 09 16:43:38 compute-0 nova_compute[117331]:     <controller type="pci" index="9" model="pcie-root-port">
Oct 09 16:43:38 compute-0 nova_compute[117331]:       <model name="pcie-root-port"/>
Oct 09 16:43:38 compute-0 nova_compute[117331]:       <target chassis="9" port="0x18"/>
Oct 09 16:43:38 compute-0 nova_compute[117331]:       <address type="pci" domain="0x0000" bus="0x00" slot="0x03" function="0x0" multifunction="on"/>
Oct 09 16:43:38 compute-0 nova_compute[117331]:     </controller>
Oct 09 16:43:38 compute-0 nova_compute[117331]:     <controller type="pci" index="10" model="pcie-root-port">
Oct 09 16:43:38 compute-0 nova_compute[117331]:       <model name="pcie-root-port"/>
Oct 09 16:43:38 compute-0 nova_compute[117331]:       <target chassis="10" port="0x19"/>
Oct 09 16:43:38 compute-0 nova_compute[117331]:       <address type="pci" domain="0x0000" bus="0x00" slot="0x03" function="0x1"/>
Oct 09 16:43:38 compute-0 nova_compute[117331]:     </controller>
Oct 09 16:43:38 compute-0 nova_compute[117331]:     <controller type="pci" index="11" model="pcie-root-port">
Oct 09 16:43:38 compute-0 nova_compute[117331]:       <model name="pcie-root-port"/>
Oct 09 16:43:38 compute-0 nova_compute[117331]:       <target chassis="11" port="0x1a"/>
Oct 09 16:43:38 compute-0 nova_compute[117331]:       <address type="pci" domain="0x0000" bus="0x00" slot="0x03" function="0x2"/>
Oct 09 16:43:38 compute-0 nova_compute[117331]:     </controller>
Oct 09 16:43:38 compute-0 nova_compute[117331]:     <controller type="pci" index="12" model="pcie-root-port">
Oct 09 16:43:38 compute-0 nova_compute[117331]:       <model name="pcie-root-port"/>
Oct 09 16:43:38 compute-0 nova_compute[117331]:       <target chassis="12" port="0x1b"/>
Oct 09 16:43:38 compute-0 nova_compute[117331]:       <address type="pci" domain="0x0000" bus="0x00" slot="0x03" function="0x3"/>
Oct 09 16:43:38 compute-0 nova_compute[117331]:     </controller>
Oct 09 16:43:38 compute-0 nova_compute[117331]:     <controller type="pci" index="13" model="pcie-root-port">
Oct 09 16:43:38 compute-0 nova_compute[117331]:       <model name="pcie-root-port"/>
Oct 09 16:43:38 compute-0 nova_compute[117331]:       <target chassis="13" port="0x1c"/>
Oct 09 16:43:38 compute-0 nova_compute[117331]:       <address type="pci" domain="0x0000" bus="0x00" slot="0x03" function="0x4"/>
Oct 09 16:43:38 compute-0 nova_compute[117331]:     </controller>
Oct 09 16:43:38 compute-0 nova_compute[117331]:     <controller type="pci" index="14" model="pcie-root-port">
Oct 09 16:43:38 compute-0 nova_compute[117331]:       <model name="pcie-root-port"/>
Oct 09 16:43:38 compute-0 nova_compute[117331]:       <target chassis="14" port="0x1d"/>
Oct 09 16:43:38 compute-0 nova_compute[117331]:       <address type="pci" domain="0x0000" bus="0x00" slot="0x03" function="0x5"/>
Oct 09 16:43:38 compute-0 nova_compute[117331]:     </controller>
Oct 09 16:43:38 compute-0 nova_compute[117331]:     <controller type="pci" index="15" model="pcie-root-port">
Oct 09 16:43:38 compute-0 nova_compute[117331]:       <model name="pcie-root-port"/>
Oct 09 16:43:38 compute-0 nova_compute[117331]:       <target chassis="15" port="0x1e"/>
Oct 09 16:43:38 compute-0 nova_compute[117331]:       <address type="pci" domain="0x0000" bus="0x00" slot="0x03" function="0x6"/>
Oct 09 16:43:38 compute-0 nova_compute[117331]:     </controller>
Oct 09 16:43:38 compute-0 nova_compute[117331]:     <controller type="pci" index="16" model="pcie-root-port">
Oct 09 16:43:38 compute-0 nova_compute[117331]:       <model name="pcie-root-port"/>
Oct 09 16:43:38 compute-0 nova_compute[117331]:       <target chassis="16" port="0x1f"/>
Oct 09 16:43:38 compute-0 nova_compute[117331]:       <address type="pci" domain="0x0000" bus="0x00" slot="0x03" function="0x7"/>
Oct 09 16:43:38 compute-0 nova_compute[117331]:     </controller>
Oct 09 16:43:38 compute-0 nova_compute[117331]:     <controller type="pci" index="17" model="pcie-root-port">
Oct 09 16:43:38 compute-0 nova_compute[117331]:       <model name="pcie-root-port"/>
Oct 09 16:43:38 compute-0 nova_compute[117331]:       <target chassis="17" port="0x20"/>
Oct 09 16:43:38 compute-0 nova_compute[117331]:       <address type="pci" domain="0x0000" bus="0x00" slot="0x04" function="0x0" multifunction="on"/>
Oct 09 16:43:38 compute-0 nova_compute[117331]:     </controller>
Oct 09 16:43:38 compute-0 nova_compute[117331]:     <controller type="pci" index="18" model="pcie-root-port">
Oct 09 16:43:38 compute-0 nova_compute[117331]:       <model name="pcie-root-port"/>
Oct 09 16:43:38 compute-0 nova_compute[117331]:       <target chassis="18" port="0x21"/>
Oct 09 16:43:38 compute-0 nova_compute[117331]:       <address type="pci" domain="0x0000" bus="0x00" slot="0x04" function="0x1"/>
Oct 09 16:43:38 compute-0 nova_compute[117331]:     </controller>
Oct 09 16:43:38 compute-0 nova_compute[117331]:     <controller type="pci" index="19" model="pcie-root-port">
Oct 09 16:43:38 compute-0 nova_compute[117331]:       <model name="pcie-root-port"/>
Oct 09 16:43:38 compute-0 nova_compute[117331]:       <target chassis="19" port="0x22"/>
Oct 09 16:43:38 compute-0 nova_compute[117331]:       <address type="pci" domain="0x0000" bus="0x00" slot="0x04" function="0x2"/>
Oct 09 16:43:38 compute-0 nova_compute[117331]:     </controller>
Oct 09 16:43:38 compute-0 nova_compute[117331]:     <controller type="pci" index="20" model="pcie-root-port">
Oct 09 16:43:38 compute-0 nova_compute[117331]:       <model name="pcie-root-port"/>
Oct 09 16:43:38 compute-0 nova_compute[117331]:       <target chassis="20" port="0x23"/>
Oct 09 16:43:38 compute-0 nova_compute[117331]:       <address type="pci" domain="0x0000" bus="0x00" slot="0x04" function="0x3"/>
Oct 09 16:43:38 compute-0 nova_compute[117331]:     </controller>
Oct 09 16:43:38 compute-0 nova_compute[117331]:     <controller type="pci" index="21" model="pcie-root-port">
Oct 09 16:43:38 compute-0 nova_compute[117331]:       <model name="pcie-root-port"/>
Oct 09 16:43:38 compute-0 nova_compute[117331]:       <target chassis="21" port="0x24"/>
Oct 09 16:43:38 compute-0 nova_compute[117331]:       <address type="pci" domain="0x0000" bus="0x00" slot="0x04" function="0x4"/>
Oct 09 16:43:38 compute-0 nova_compute[117331]:     </controller>
Oct 09 16:43:38 compute-0 nova_compute[117331]:     <controller type="pci" index="22" model="pcie-root-port">
Oct 09 16:43:38 compute-0 nova_compute[117331]:       <model name="pcie-root-port"/>
Oct 09 16:43:38 compute-0 nova_compute[117331]:       <target chassis="22" port="0x25"/>
Oct 09 16:43:38 compute-0 nova_compute[117331]:       <address type="pci" domain="0x0000" bus="0x00" slot="0x04" function="0x5"/>
Oct 09 16:43:38 compute-0 nova_compute[117331]:     </controller>
Oct 09 16:43:38 compute-0 nova_compute[117331]:     <controller type="pci" index="23" model="pcie-root-port">
Oct 09 16:43:38 compute-0 nova_compute[117331]:       <model name="pcie-root-port"/>
Oct 09 16:43:38 compute-0 nova_compute[117331]:       <target chassis="23" port="0x26"/>
Oct 09 16:43:38 compute-0 nova_compute[117331]:       <address type="pci" domain="0x0000" bus="0x00" slot="0x04" function="0x6"/>
Oct 09 16:43:38 compute-0 nova_compute[117331]:     </controller>
Oct 09 16:43:38 compute-0 nova_compute[117331]:     <controller type="pci" index="24" model="pcie-root-port">
Oct 09 16:43:38 compute-0 nova_compute[117331]:       <model name="pcie-root-port"/>
Oct 09 16:43:38 compute-0 nova_compute[117331]:       <target chassis="24" port="0x27"/>
Oct 09 16:43:38 compute-0 nova_compute[117331]:       <address type="pci" domain="0x0000" bus="0x00" slot="0x04" function="0x7"/>
Oct 09 16:43:38 compute-0 nova_compute[117331]:     </controller>
Oct 09 16:43:38 compute-0 nova_compute[117331]:     <controller type="pci" index="25" model="pcie-root-port">
Oct 09 16:43:38 compute-0 nova_compute[117331]:       <model name="pcie-root-port"/>
Oct 09 16:43:38 compute-0 nova_compute[117331]:       <target chassis="25" port="0x28"/>
Oct 09 16:43:38 compute-0 nova_compute[117331]:       <address type="pci" domain="0x0000" bus="0x00" slot="0x05" function="0x0"/>
Oct 09 16:43:38 compute-0 nova_compute[117331]:     </controller>
Oct 09 16:43:38 compute-0 nova_compute[117331]:     <controller type="pci" index="26" model="pcie-to-pci-bridge">
Oct 09 16:43:38 compute-0 nova_compute[117331]:       <model name="pcie-pci-bridge"/>
Oct 09 16:43:38 compute-0 nova_compute[117331]:       <address type="pci" domain="0x0000" bus="0x01" slot="0x00" function="0x0"/>
Oct 09 16:43:38 compute-0 nova_compute[117331]:     </controller>
Oct 09 16:43:38 compute-0 nova_compute[117331]:     <controller type="usb" index="0" model="piix3-uhci">
Oct 09 16:43:38 compute-0 nova_compute[117331]:       <address type="pci" domain="0x0000" bus="0x1a" slot="0x01" function="0x0"/>
Oct 09 16:43:38 compute-0 nova_compute[117331]:     </controller>
Oct 09 16:43:38 compute-0 nova_compute[117331]:     <controller type="sata" index="0">
Oct 09 16:43:38 compute-0 nova_compute[117331]:       <address type="pci" domain="0x0000" bus="0x00" slot="0x1f" function="0x2"/>
Oct 09 16:43:38 compute-0 nova_compute[117331]:     </controller>
Oct 09 16:43:38 compute-0 nova_compute[117331]:     <interface type="ethernet">
Oct 09 16:43:38 compute-0 nova_compute[117331]:       <mac address="fa:16:3e:a9:96:a6"/>
Oct 09 16:43:38 compute-0 nova_compute[117331]:       <model type="virtio"/>
Oct 09 16:43:38 compute-0 nova_compute[117331]:       <driver name="vhost" rx_queue_size="512"/>
Oct 09 16:43:38 compute-0 nova_compute[117331]:       <mtu size="1442"/>
Oct 09 16:43:38 compute-0 nova_compute[117331]:       <target dev="tap75dea49d-ac"/>
Oct 09 16:43:38 compute-0 nova_compute[117331]:       <address type="pci" domain="0x0000" bus="0x02" slot="0x00" function="0x0"/>
Oct 09 16:43:38 compute-0 nova_compute[117331]:     </interface>
Oct 09 16:43:38 compute-0 nova_compute[117331]:     <serial type="pty">
Oct 09 16:43:38 compute-0 nova_compute[117331]:       <log file="/var/lib/nova/instances/31e9639e-ea7e-41f5-8bd3-2f0344062f99/console.log" append="off"/>
Oct 09 16:43:38 compute-0 nova_compute[117331]:       <target type="isa-serial" port="0">
Oct 09 16:43:38 compute-0 nova_compute[117331]:         <model name="isa-serial"/>
Oct 09 16:43:38 compute-0 nova_compute[117331]:       </target>
Oct 09 16:43:38 compute-0 nova_compute[117331]:     </serial>
Oct 09 16:43:38 compute-0 nova_compute[117331]:     <console type="pty">
Oct 09 16:43:38 compute-0 nova_compute[117331]:       <log file="/var/lib/nova/instances/31e9639e-ea7e-41f5-8bd3-2f0344062f99/console.log" append="off"/>
Oct 09 16:43:38 compute-0 nova_compute[117331]:       <target type="serial" port="0"/>
Oct 09 16:43:38 compute-0 nova_compute[117331]:     </console>
Oct 09 16:43:38 compute-0 nova_compute[117331]:     <input type="tablet" bus="usb">
Oct 09 16:43:38 compute-0 nova_compute[117331]:       <address type="usb" bus="0" port="1"/>
Oct 09 16:43:38 compute-0 nova_compute[117331]:     </input>
Oct 09 16:43:38 compute-0 nova_compute[117331]:     <input type="mouse" bus="ps2"/>
Oct 09 16:43:38 compute-0 nova_compute[117331]:     <graphics type="vnc" port="-1" autoport="yes" listen="::">
Oct 09 16:43:38 compute-0 nova_compute[117331]:       <listen type="address" address="::"/>
Oct 09 16:43:38 compute-0 nova_compute[117331]:     </graphics>
Oct 09 16:43:38 compute-0 nova_compute[117331]:     <video>
Oct 09 16:43:38 compute-0 nova_compute[117331]:       <model type="virtio" heads="1" primary="yes"/>
Oct 09 16:43:38 compute-0 nova_compute[117331]:       <address type="pci" domain="0x0000" bus="0x00" slot="0x01" function="0x0"/>
Oct 09 16:43:38 compute-0 nova_compute[117331]:     </video>
Oct 09 16:43:38 compute-0 nova_compute[117331]:     <memballoon model="virtio" autodeflate="on" freePageReporting="on">
Oct 09 16:43:38 compute-0 nova_compute[117331]:       <stats period="10"/>
Oct 09 16:43:38 compute-0 nova_compute[117331]:       <address type="pci" domain="0x0000" bus="0x04" slot="0x00" function="0x0"/>
Oct 09 16:43:38 compute-0 nova_compute[117331]:     </memballoon>
Oct 09 16:43:38 compute-0 nova_compute[117331]:     <rng model="virtio">
Oct 09 16:43:38 compute-0 nova_compute[117331]:       <backend model="random">/dev/urandom</backend>
Oct 09 16:43:38 compute-0 nova_compute[117331]:       <address type="pci" domain="0x0000" bus="0x05" slot="0x00" function="0x0"/>
Oct 09 16:43:38 compute-0 nova_compute[117331]:     </rng>
Oct 09 16:43:38 compute-0 nova_compute[117331]:   </devices>
Oct 09 16:43:38 compute-0 nova_compute[117331]:   <seclabel type="dynamic" model="selinux" relabel="yes"/>
Oct 09 16:43:38 compute-0 nova_compute[117331]: </domain>
Oct 09 16:43:38 compute-0 nova_compute[117331]:  _remove_cpu_shared_set_xml /usr/lib/python3.12/site-packages/nova/virt/libvirt/migration.py:250
Oct 09 16:43:38 compute-0 nova_compute[117331]: 2025-10-09 16:43:38.932 2 DEBUG nova.virt.libvirt.migration [None req-ceed530a-e290-4a44-afa3-258d274da1ed 005258eefa5345409efe13f6a1de22ab c076c42271004e7aa86e84416e33f826 - - default default] _update_pci_xml output xml=<domain type="kvm">
Oct 09 16:43:38 compute-0 nova_compute[117331]:   <name>instance-00000020</name>
Oct 09 16:43:38 compute-0 nova_compute[117331]:   <uuid>31e9639e-ea7e-41f5-8bd3-2f0344062f99</uuid>
Oct 09 16:43:38 compute-0 nova_compute[117331]:   <metadata>
Oct 09 16:43:38 compute-0 nova_compute[117331]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Oct 09 16:43:38 compute-0 nova_compute[117331]:       <nova:package version="32.1.0-0.20251008143712.076498e.el10"/>
Oct 09 16:43:38 compute-0 nova_compute[117331]:       <nova:name>tempest-TestExecuteZoneMigrationStrategy-server-1385446209</nova:name>
Oct 09 16:43:38 compute-0 nova_compute[117331]:       <nova:creationTime>2025-10-09 16:42:44</nova:creationTime>
Oct 09 16:43:38 compute-0 nova_compute[117331]:       <nova:flavor name="m1.nano" id="5aeb5bf9-70f3-4ab6-b418-2c3319a7d2b3">
Oct 09 16:43:38 compute-0 nova_compute[117331]:         <nova:memory>128</nova:memory>
Oct 09 16:43:38 compute-0 nova_compute[117331]:         <nova:disk>1</nova:disk>
Oct 09 16:43:38 compute-0 nova_compute[117331]:         <nova:swap>0</nova:swap>
Oct 09 16:43:38 compute-0 nova_compute[117331]:         <nova:ephemeral>0</nova:ephemeral>
Oct 09 16:43:38 compute-0 nova_compute[117331]:         <nova:vcpus>1</nova:vcpus>
Oct 09 16:43:38 compute-0 nova_compute[117331]:         <nova:extraSpecs>
Oct 09 16:43:38 compute-0 nova_compute[117331]:           <nova:extraSpec name="hw_rng:allowed">True</nova:extraSpec>
Oct 09 16:43:38 compute-0 nova_compute[117331]:         </nova:extraSpecs>
Oct 09 16:43:38 compute-0 nova_compute[117331]:       </nova:flavor>
Oct 09 16:43:38 compute-0 nova_compute[117331]:       <nova:image uuid="b7d6e0af-25e4-4227-9dc6-43143898ceee">
Oct 09 16:43:38 compute-0 nova_compute[117331]:         <nova:containerFormat>bare</nova:containerFormat>
Oct 09 16:43:38 compute-0 nova_compute[117331]:         <nova:diskFormat>qcow2</nova:diskFormat>
Oct 09 16:43:38 compute-0 nova_compute[117331]:         <nova:minDisk>1</nova:minDisk>
Oct 09 16:43:38 compute-0 nova_compute[117331]:         <nova:minRam>0</nova:minRam>
Oct 09 16:43:38 compute-0 nova_compute[117331]:         <nova:properties>
Oct 09 16:43:38 compute-0 nova_compute[117331]:           <nova:property name="hw_rng_model">virtio</nova:property>
Oct 09 16:43:38 compute-0 nova_compute[117331]:         </nova:properties>
Oct 09 16:43:38 compute-0 nova_compute[117331]:       </nova:image>
Oct 09 16:43:38 compute-0 nova_compute[117331]:       <nova:owner>
Oct 09 16:43:38 compute-0 nova_compute[117331]:         <nova:user uuid="05384d72bc894b20b1cc128bf76382b2">tempest-TestExecuteZoneMigrationStrategy-1067041161-project-admin</nova:user>
Oct 09 16:43:38 compute-0 nova_compute[117331]:         <nova:project uuid="fca25ca2f317463c909e30c8b2c188d1">tempest-TestExecuteZoneMigrationStrategy-1067041161</nova:project>
Oct 09 16:43:38 compute-0 nova_compute[117331]:       </nova:owner>
Oct 09 16:43:38 compute-0 nova_compute[117331]:       <nova:root type="image" uuid="b7d6e0af-25e4-4227-9dc6-43143898ceee"/>
Oct 09 16:43:38 compute-0 nova_compute[117331]:       <nova:ports>
Oct 09 16:43:38 compute-0 nova_compute[117331]:         <nova:port uuid="75dea49d-ac7b-45ed-8521-44a081d00648">
Oct 09 16:43:38 compute-0 nova_compute[117331]:           <nova:ip type="fixed" address="10.100.0.9" ipVersion="4"/>
Oct 09 16:43:38 compute-0 nova_compute[117331]:         </nova:port>
Oct 09 16:43:38 compute-0 nova_compute[117331]:       </nova:ports>
Oct 09 16:43:38 compute-0 nova_compute[117331]:     </nova:instance>
Oct 09 16:43:38 compute-0 nova_compute[117331]:   </metadata>
Oct 09 16:43:38 compute-0 nova_compute[117331]:   <memory unit="KiB">131072</memory>
Oct 09 16:43:38 compute-0 nova_compute[117331]:   <currentMemory unit="KiB">131072</currentMemory>
Oct 09 16:43:38 compute-0 nova_compute[117331]:   <vcpu placement="static">1</vcpu>
Oct 09 16:43:38 compute-0 nova_compute[117331]:   <resource>
Oct 09 16:43:38 compute-0 nova_compute[117331]:     <partition>/machine</partition>
Oct 09 16:43:38 compute-0 nova_compute[117331]:   </resource>
Oct 09 16:43:38 compute-0 nova_compute[117331]:   <sysinfo type="smbios">
Oct 09 16:43:38 compute-0 nova_compute[117331]:     <system>
Oct 09 16:43:38 compute-0 nova_compute[117331]:       <entry name="manufacturer">RDO</entry>
Oct 09 16:43:38 compute-0 nova_compute[117331]:       <entry name="product">OpenStack Compute</entry>
Oct 09 16:43:38 compute-0 nova_compute[117331]:       <entry name="version">32.1.0-0.20251008143712.076498e.el10</entry>
Oct 09 16:43:38 compute-0 nova_compute[117331]:       <entry name="serial">31e9639e-ea7e-41f5-8bd3-2f0344062f99</entry>
Oct 09 16:43:38 compute-0 nova_compute[117331]:       <entry name="uuid">31e9639e-ea7e-41f5-8bd3-2f0344062f99</entry>
Oct 09 16:43:38 compute-0 nova_compute[117331]:       <entry name="family">Virtual Machine</entry>
Oct 09 16:43:38 compute-0 nova_compute[117331]:     </system>
Oct 09 16:43:38 compute-0 nova_compute[117331]:   </sysinfo>
Oct 09 16:43:38 compute-0 nova_compute[117331]:   <os>
Oct 09 16:43:38 compute-0 nova_compute[117331]:     <type arch="x86_64" machine="pc-q35-rhel9.6.0">hvm</type>
Oct 09 16:43:38 compute-0 nova_compute[117331]:     <boot dev="hd"/>
Oct 09 16:43:38 compute-0 nova_compute[117331]:     <smbios mode="sysinfo"/>
Oct 09 16:43:38 compute-0 nova_compute[117331]:   </os>
Oct 09 16:43:38 compute-0 nova_compute[117331]:   <features>
Oct 09 16:43:38 compute-0 nova_compute[117331]:     <acpi/>
Oct 09 16:43:38 compute-0 nova_compute[117331]:     <apic/>
Oct 09 16:43:38 compute-0 nova_compute[117331]:     <vmcoreinfo state="on"/>
Oct 09 16:43:38 compute-0 nova_compute[117331]:   </features>
Oct 09 16:43:38 compute-0 nova_compute[117331]:   <cpu mode="host-model" check="partial">
Oct 09 16:43:38 compute-0 nova_compute[117331]:     <topology sockets="1" dies="1" clusters="1" cores="1" threads="1"/>
Oct 09 16:43:38 compute-0 nova_compute[117331]:   </cpu>
Oct 09 16:43:38 compute-0 nova_compute[117331]:   <clock offset="utc">
Oct 09 16:43:38 compute-0 nova_compute[117331]:     <timer name="pit" tickpolicy="delay"/>
Oct 09 16:43:38 compute-0 nova_compute[117331]:     <timer name="rtc" tickpolicy="catchup"/>
Oct 09 16:43:38 compute-0 nova_compute[117331]:     <timer name="hpet" present="no"/>
Oct 09 16:43:38 compute-0 nova_compute[117331]:   </clock>
Oct 09 16:43:38 compute-0 nova_compute[117331]:   <on_poweroff>destroy</on_poweroff>
Oct 09 16:43:38 compute-0 nova_compute[117331]:   <on_reboot>restart</on_reboot>
Oct 09 16:43:38 compute-0 nova_compute[117331]:   <on_crash>destroy</on_crash>
Oct 09 16:43:38 compute-0 nova_compute[117331]:   <devices>
Oct 09 16:43:38 compute-0 nova_compute[117331]:     <emulator>/usr/libexec/qemu-kvm</emulator>
Oct 09 16:43:38 compute-0 nova_compute[117331]:     <disk type="file" device="disk">
Oct 09 16:43:38 compute-0 nova_compute[117331]:       <driver name="qemu" type="qcow2" cache="none"/>
Oct 09 16:43:38 compute-0 nova_compute[117331]:       <source file="/var/lib/nova/instances/31e9639e-ea7e-41f5-8bd3-2f0344062f99/disk"/>
Oct 09 16:43:38 compute-0 nova_compute[117331]:       <target dev="vda" bus="virtio"/>
Oct 09 16:43:38 compute-0 nova_compute[117331]:       <address type="pci" domain="0x0000" bus="0x03" slot="0x00" function="0x0"/>
Oct 09 16:43:38 compute-0 nova_compute[117331]:     </disk>
Oct 09 16:43:38 compute-0 nova_compute[117331]:     <disk type="file" device="cdrom">
Oct 09 16:43:38 compute-0 nova_compute[117331]:       <driver name="qemu" type="raw" cache="none"/>
Oct 09 16:43:38 compute-0 nova_compute[117331]:       <source file="/var/lib/nova/instances/31e9639e-ea7e-41f5-8bd3-2f0344062f99/disk.config"/>
Oct 09 16:43:38 compute-0 nova_compute[117331]:       <target dev="sda" bus="sata"/>
Oct 09 16:43:38 compute-0 nova_compute[117331]:       <readonly/>
Oct 09 16:43:38 compute-0 nova_compute[117331]:       <address type="drive" controller="0" bus="0" target="0" unit="0"/>
Oct 09 16:43:38 compute-0 nova_compute[117331]:     </disk>
Oct 09 16:43:38 compute-0 nova_compute[117331]:     <controller type="pci" index="0" model="pcie-root"/>
Oct 09 16:43:38 compute-0 nova_compute[117331]:     <controller type="pci" index="1" model="pcie-root-port">
Oct 09 16:43:38 compute-0 nova_compute[117331]:       <model name="pcie-root-port"/>
Oct 09 16:43:38 compute-0 nova_compute[117331]:       <target chassis="1" port="0x10"/>
Oct 09 16:43:38 compute-0 nova_compute[117331]:       <address type="pci" domain="0x0000" bus="0x00" slot="0x02" function="0x0" multifunction="on"/>
Oct 09 16:43:38 compute-0 nova_compute[117331]:     </controller>
Oct 09 16:43:38 compute-0 nova_compute[117331]:     <controller type="pci" index="2" model="pcie-root-port">
Oct 09 16:43:38 compute-0 nova_compute[117331]:       <model name="pcie-root-port"/>
Oct 09 16:43:38 compute-0 nova_compute[117331]:       <target chassis="2" port="0x11"/>
Oct 09 16:43:38 compute-0 nova_compute[117331]:       <address type="pci" domain="0x0000" bus="0x00" slot="0x02" function="0x1"/>
Oct 09 16:43:38 compute-0 nova_compute[117331]:     </controller>
Oct 09 16:43:38 compute-0 nova_compute[117331]:     <controller type="pci" index="3" model="pcie-root-port">
Oct 09 16:43:38 compute-0 nova_compute[117331]:       <model name="pcie-root-port"/>
Oct 09 16:43:38 compute-0 nova_compute[117331]:       <target chassis="3" port="0x12"/>
Oct 09 16:43:38 compute-0 nova_compute[117331]:       <address type="pci" domain="0x0000" bus="0x00" slot="0x02" function="0x2"/>
Oct 09 16:43:38 compute-0 nova_compute[117331]:     </controller>
Oct 09 16:43:38 compute-0 nova_compute[117331]:     <controller type="pci" index="4" model="pcie-root-port">
Oct 09 16:43:38 compute-0 nova_compute[117331]:       <model name="pcie-root-port"/>
Oct 09 16:43:38 compute-0 nova_compute[117331]:       <target chassis="4" port="0x13"/>
Oct 09 16:43:38 compute-0 nova_compute[117331]:       <address type="pci" domain="0x0000" bus="0x00" slot="0x02" function="0x3"/>
Oct 09 16:43:38 compute-0 nova_compute[117331]:     </controller>
Oct 09 16:43:38 compute-0 nova_compute[117331]:     <controller type="pci" index="5" model="pcie-root-port">
Oct 09 16:43:38 compute-0 nova_compute[117331]:       <model name="pcie-root-port"/>
Oct 09 16:43:38 compute-0 nova_compute[117331]:       <target chassis="5" port="0x14"/>
Oct 09 16:43:38 compute-0 nova_compute[117331]:       <address type="pci" domain="0x0000" bus="0x00" slot="0x02" function="0x4"/>
Oct 09 16:43:38 compute-0 nova_compute[117331]:     </controller>
Oct 09 16:43:38 compute-0 nova_compute[117331]:     <controller type="pci" index="6" model="pcie-root-port">
Oct 09 16:43:38 compute-0 nova_compute[117331]:       <model name="pcie-root-port"/>
Oct 09 16:43:38 compute-0 nova_compute[117331]:       <target chassis="6" port="0x15"/>
Oct 09 16:43:38 compute-0 nova_compute[117331]:       <address type="pci" domain="0x0000" bus="0x00" slot="0x02" function="0x5"/>
Oct 09 16:43:38 compute-0 nova_compute[117331]:     </controller>
Oct 09 16:43:38 compute-0 nova_compute[117331]:     <controller type="pci" index="7" model="pcie-root-port">
Oct 09 16:43:38 compute-0 nova_compute[117331]:       <model name="pcie-root-port"/>
Oct 09 16:43:38 compute-0 nova_compute[117331]:       <target chassis="7" port="0x16"/>
Oct 09 16:43:38 compute-0 nova_compute[117331]:       <address type="pci" domain="0x0000" bus="0x00" slot="0x02" function="0x6"/>
Oct 09 16:43:38 compute-0 nova_compute[117331]:     </controller>
Oct 09 16:43:38 compute-0 nova_compute[117331]:     <controller type="pci" index="8" model="pcie-root-port">
Oct 09 16:43:38 compute-0 nova_compute[117331]:       <model name="pcie-root-port"/>
Oct 09 16:43:38 compute-0 nova_compute[117331]:       <target chassis="8" port="0x17"/>
Oct 09 16:43:38 compute-0 nova_compute[117331]:       <address type="pci" domain="0x0000" bus="0x00" slot="0x02" function="0x7"/>
Oct 09 16:43:38 compute-0 nova_compute[117331]:     </controller>
Oct 09 16:43:38 compute-0 nova_compute[117331]:     <controller type="pci" index="9" model="pcie-root-port">
Oct 09 16:43:38 compute-0 nova_compute[117331]:       <model name="pcie-root-port"/>
Oct 09 16:43:38 compute-0 nova_compute[117331]:       <target chassis="9" port="0x18"/>
Oct 09 16:43:38 compute-0 nova_compute[117331]:       <address type="pci" domain="0x0000" bus="0x00" slot="0x03" function="0x0" multifunction="on"/>
Oct 09 16:43:38 compute-0 nova_compute[117331]:     </controller>
Oct 09 16:43:38 compute-0 nova_compute[117331]:     <controller type="pci" index="10" model="pcie-root-port">
Oct 09 16:43:38 compute-0 nova_compute[117331]:       <model name="pcie-root-port"/>
Oct 09 16:43:38 compute-0 nova_compute[117331]:       <target chassis="10" port="0x19"/>
Oct 09 16:43:38 compute-0 nova_compute[117331]:       <address type="pci" domain="0x0000" bus="0x00" slot="0x03" function="0x1"/>
Oct 09 16:43:38 compute-0 nova_compute[117331]:     </controller>
Oct 09 16:43:38 compute-0 nova_compute[117331]:     <controller type="pci" index="11" model="pcie-root-port">
Oct 09 16:43:38 compute-0 nova_compute[117331]:       <model name="pcie-root-port"/>
Oct 09 16:43:38 compute-0 nova_compute[117331]:       <target chassis="11" port="0x1a"/>
Oct 09 16:43:38 compute-0 nova_compute[117331]:       <address type="pci" domain="0x0000" bus="0x00" slot="0x03" function="0x2"/>
Oct 09 16:43:38 compute-0 nova_compute[117331]:     </controller>
Oct 09 16:43:38 compute-0 nova_compute[117331]:     <controller type="pci" index="12" model="pcie-root-port">
Oct 09 16:43:38 compute-0 nova_compute[117331]:       <model name="pcie-root-port"/>
Oct 09 16:43:38 compute-0 nova_compute[117331]:       <target chassis="12" port="0x1b"/>
Oct 09 16:43:38 compute-0 nova_compute[117331]:       <address type="pci" domain="0x0000" bus="0x00" slot="0x03" function="0x3"/>
Oct 09 16:43:38 compute-0 nova_compute[117331]:     </controller>
Oct 09 16:43:38 compute-0 nova_compute[117331]:     <controller type="pci" index="13" model="pcie-root-port">
Oct 09 16:43:38 compute-0 nova_compute[117331]:       <model name="pcie-root-port"/>
Oct 09 16:43:38 compute-0 nova_compute[117331]:       <target chassis="13" port="0x1c"/>
Oct 09 16:43:38 compute-0 nova_compute[117331]:       <address type="pci" domain="0x0000" bus="0x00" slot="0x03" function="0x4"/>
Oct 09 16:43:38 compute-0 nova_compute[117331]:     </controller>
Oct 09 16:43:38 compute-0 nova_compute[117331]:     <controller type="pci" index="14" model="pcie-root-port">
Oct 09 16:43:38 compute-0 nova_compute[117331]:       <model name="pcie-root-port"/>
Oct 09 16:43:38 compute-0 nova_compute[117331]:       <target chassis="14" port="0x1d"/>
Oct 09 16:43:38 compute-0 nova_compute[117331]:       <address type="pci" domain="0x0000" bus="0x00" slot="0x03" function="0x5"/>
Oct 09 16:43:38 compute-0 nova_compute[117331]:     </controller>
Oct 09 16:43:38 compute-0 nova_compute[117331]:     <controller type="pci" index="15" model="pcie-root-port">
Oct 09 16:43:38 compute-0 nova_compute[117331]:       <model name="pcie-root-port"/>
Oct 09 16:43:38 compute-0 nova_compute[117331]:       <target chassis="15" port="0x1e"/>
Oct 09 16:43:38 compute-0 nova_compute[117331]:       <address type="pci" domain="0x0000" bus="0x00" slot="0x03" function="0x6"/>
Oct 09 16:43:38 compute-0 nova_compute[117331]:     </controller>
Oct 09 16:43:38 compute-0 nova_compute[117331]:     <controller type="pci" index="16" model="pcie-root-port">
Oct 09 16:43:38 compute-0 nova_compute[117331]:       <model name="pcie-root-port"/>
Oct 09 16:43:38 compute-0 nova_compute[117331]:       <target chassis="16" port="0x1f"/>
Oct 09 16:43:38 compute-0 nova_compute[117331]:       <address type="pci" domain="0x0000" bus="0x00" slot="0x03" function="0x7"/>
Oct 09 16:43:38 compute-0 nova_compute[117331]:     </controller>
Oct 09 16:43:38 compute-0 nova_compute[117331]:     <controller type="pci" index="17" model="pcie-root-port">
Oct 09 16:43:38 compute-0 nova_compute[117331]:       <model name="pcie-root-port"/>
Oct 09 16:43:38 compute-0 nova_compute[117331]:       <target chassis="17" port="0x20"/>
Oct 09 16:43:38 compute-0 nova_compute[117331]:       <address type="pci" domain="0x0000" bus="0x00" slot="0x04" function="0x0" multifunction="on"/>
Oct 09 16:43:38 compute-0 nova_compute[117331]:     </controller>
Oct 09 16:43:38 compute-0 nova_compute[117331]:     <controller type="pci" index="18" model="pcie-root-port">
Oct 09 16:43:38 compute-0 nova_compute[117331]:       <model name="pcie-root-port"/>
Oct 09 16:43:38 compute-0 nova_compute[117331]:       <target chassis="18" port="0x21"/>
Oct 09 16:43:38 compute-0 nova_compute[117331]:       <address type="pci" domain="0x0000" bus="0x00" slot="0x04" function="0x1"/>
Oct 09 16:43:38 compute-0 nova_compute[117331]:     </controller>
Oct 09 16:43:38 compute-0 nova_compute[117331]:     <controller type="pci" index="19" model="pcie-root-port">
Oct 09 16:43:38 compute-0 nova_compute[117331]:       <model name="pcie-root-port"/>
Oct 09 16:43:38 compute-0 nova_compute[117331]:       <target chassis="19" port="0x22"/>
Oct 09 16:43:38 compute-0 nova_compute[117331]:       <address type="pci" domain="0x0000" bus="0x00" slot="0x04" function="0x2"/>
Oct 09 16:43:38 compute-0 nova_compute[117331]:     </controller>
Oct 09 16:43:38 compute-0 nova_compute[117331]:     <controller type="pci" index="20" model="pcie-root-port">
Oct 09 16:43:38 compute-0 nova_compute[117331]:       <model name="pcie-root-port"/>
Oct 09 16:43:38 compute-0 nova_compute[117331]:       <target chassis="20" port="0x23"/>
Oct 09 16:43:38 compute-0 nova_compute[117331]:       <address type="pci" domain="0x0000" bus="0x00" slot="0x04" function="0x3"/>
Oct 09 16:43:38 compute-0 nova_compute[117331]:     </controller>
Oct 09 16:43:38 compute-0 nova_compute[117331]:     <controller type="pci" index="21" model="pcie-root-port">
Oct 09 16:43:38 compute-0 nova_compute[117331]:       <model name="pcie-root-port"/>
Oct 09 16:43:38 compute-0 nova_compute[117331]:       <target chassis="21" port="0x24"/>
Oct 09 16:43:38 compute-0 nova_compute[117331]:       <address type="pci" domain="0x0000" bus="0x00" slot="0x04" function="0x4"/>
Oct 09 16:43:38 compute-0 nova_compute[117331]:     </controller>
Oct 09 16:43:38 compute-0 nova_compute[117331]:     <controller type="pci" index="22" model="pcie-root-port">
Oct 09 16:43:38 compute-0 nova_compute[117331]:       <model name="pcie-root-port"/>
Oct 09 16:43:38 compute-0 nova_compute[117331]:       <target chassis="22" port="0x25"/>
Oct 09 16:43:38 compute-0 nova_compute[117331]:       <address type="pci" domain="0x0000" bus="0x00" slot="0x04" function="0x5"/>
Oct 09 16:43:38 compute-0 nova_compute[117331]:     </controller>
Oct 09 16:43:38 compute-0 nova_compute[117331]:     <controller type="pci" index="23" model="pcie-root-port">
Oct 09 16:43:38 compute-0 nova_compute[117331]:       <model name="pcie-root-port"/>
Oct 09 16:43:38 compute-0 nova_compute[117331]:       <target chassis="23" port="0x26"/>
Oct 09 16:43:38 compute-0 nova_compute[117331]:       <address type="pci" domain="0x0000" bus="0x00" slot="0x04" function="0x6"/>
Oct 09 16:43:38 compute-0 nova_compute[117331]:     </controller>
Oct 09 16:43:38 compute-0 nova_compute[117331]:     <controller type="pci" index="24" model="pcie-root-port">
Oct 09 16:43:38 compute-0 nova_compute[117331]:       <model name="pcie-root-port"/>
Oct 09 16:43:38 compute-0 nova_compute[117331]:       <target chassis="24" port="0x27"/>
Oct 09 16:43:38 compute-0 nova_compute[117331]:       <address type="pci" domain="0x0000" bus="0x00" slot="0x04" function="0x7"/>
Oct 09 16:43:38 compute-0 nova_compute[117331]:     </controller>
Oct 09 16:43:38 compute-0 nova_compute[117331]:     <controller type="pci" index="25" model="pcie-root-port">
Oct 09 16:43:38 compute-0 nova_compute[117331]:       <model name="pcie-root-port"/>
Oct 09 16:43:38 compute-0 nova_compute[117331]:       <target chassis="25" port="0x28"/>
Oct 09 16:43:38 compute-0 nova_compute[117331]:       <address type="pci" domain="0x0000" bus="0x00" slot="0x05" function="0x0"/>
Oct 09 16:43:38 compute-0 nova_compute[117331]:     </controller>
Oct 09 16:43:38 compute-0 nova_compute[117331]:     <controller type="pci" index="26" model="pcie-to-pci-bridge">
Oct 09 16:43:38 compute-0 nova_compute[117331]:       <model name="pcie-pci-bridge"/>
Oct 09 16:43:38 compute-0 nova_compute[117331]:       <address type="pci" domain="0x0000" bus="0x01" slot="0x00" function="0x0"/>
Oct 09 16:43:38 compute-0 nova_compute[117331]:     </controller>
Oct 09 16:43:38 compute-0 nova_compute[117331]:     <controller type="usb" index="0" model="piix3-uhci">
Oct 09 16:43:38 compute-0 nova_compute[117331]:       <address type="pci" domain="0x0000" bus="0x1a" slot="0x01" function="0x0"/>
Oct 09 16:43:38 compute-0 nova_compute[117331]:     </controller>
Oct 09 16:43:38 compute-0 nova_compute[117331]:     <controller type="sata" index="0">
Oct 09 16:43:38 compute-0 nova_compute[117331]:       <address type="pci" domain="0x0000" bus="0x00" slot="0x1f" function="0x2"/>
Oct 09 16:43:38 compute-0 nova_compute[117331]:     </controller>
Oct 09 16:43:38 compute-0 nova_compute[117331]:     <interface type="ethernet">
Oct 09 16:43:38 compute-0 nova_compute[117331]:       <mac address="fa:16:3e:a9:96:a6"/>
Oct 09 16:43:38 compute-0 nova_compute[117331]:       <model type="virtio"/>
Oct 09 16:43:38 compute-0 nova_compute[117331]:       <driver name="vhost" rx_queue_size="512"/>
Oct 09 16:43:38 compute-0 nova_compute[117331]:       <mtu size="1442"/>
Oct 09 16:43:38 compute-0 nova_compute[117331]:       <target dev="tap75dea49d-ac"/>
Oct 09 16:43:38 compute-0 nova_compute[117331]:       <address type="pci" domain="0x0000" bus="0x02" slot="0x00" function="0x0"/>
Oct 09 16:43:38 compute-0 nova_compute[117331]:     </interface>
Oct 09 16:43:38 compute-0 nova_compute[117331]:     <serial type="pty">
Oct 09 16:43:38 compute-0 nova_compute[117331]:       <log file="/var/lib/nova/instances/31e9639e-ea7e-41f5-8bd3-2f0344062f99/console.log" append="off"/>
Oct 09 16:43:38 compute-0 nova_compute[117331]:       <target type="isa-serial" port="0">
Oct 09 16:43:38 compute-0 nova_compute[117331]:         <model name="isa-serial"/>
Oct 09 16:43:38 compute-0 nova_compute[117331]:       </target>
Oct 09 16:43:38 compute-0 nova_compute[117331]:     </serial>
Oct 09 16:43:38 compute-0 nova_compute[117331]:     <console type="pty">
Oct 09 16:43:38 compute-0 nova_compute[117331]:       <log file="/var/lib/nova/instances/31e9639e-ea7e-41f5-8bd3-2f0344062f99/console.log" append="off"/>
Oct 09 16:43:38 compute-0 nova_compute[117331]:       <target type="serial" port="0"/>
Oct 09 16:43:38 compute-0 nova_compute[117331]:     </console>
Oct 09 16:43:38 compute-0 nova_compute[117331]:     <input type="tablet" bus="usb">
Oct 09 16:43:38 compute-0 nova_compute[117331]:       <address type="usb" bus="0" port="1"/>
Oct 09 16:43:38 compute-0 nova_compute[117331]:     </input>
Oct 09 16:43:38 compute-0 nova_compute[117331]:     <input type="mouse" bus="ps2"/>
Oct 09 16:43:38 compute-0 nova_compute[117331]:     <graphics type="vnc" port="-1" autoport="yes" listen="::">
Oct 09 16:43:38 compute-0 nova_compute[117331]:       <listen type="address" address="::"/>
Oct 09 16:43:38 compute-0 nova_compute[117331]:     </graphics>
Oct 09 16:43:38 compute-0 nova_compute[117331]:     <video>
Oct 09 16:43:38 compute-0 nova_compute[117331]:       <model type="virtio" heads="1" primary="yes"/>
Oct 09 16:43:38 compute-0 nova_compute[117331]:       <address type="pci" domain="0x0000" bus="0x00" slot="0x01" function="0x0"/>
Oct 09 16:43:38 compute-0 nova_compute[117331]:     </video>
Oct 09 16:43:38 compute-0 nova_compute[117331]:     <memballoon model="virtio" autodeflate="on" freePageReporting="on">
Oct 09 16:43:38 compute-0 nova_compute[117331]:       <stats period="10"/>
Oct 09 16:43:38 compute-0 nova_compute[117331]:       <address type="pci" domain="0x0000" bus="0x04" slot="0x00" function="0x0"/>
Oct 09 16:43:38 compute-0 nova_compute[117331]:     </memballoon>
Oct 09 16:43:38 compute-0 nova_compute[117331]:     <rng model="virtio">
Oct 09 16:43:38 compute-0 nova_compute[117331]:       <backend model="random">/dev/urandom</backend>
Oct 09 16:43:38 compute-0 nova_compute[117331]:       <address type="pci" domain="0x0000" bus="0x05" slot="0x00" function="0x0"/>
Oct 09 16:43:38 compute-0 nova_compute[117331]:     </rng>
Oct 09 16:43:38 compute-0 nova_compute[117331]:   </devices>
Oct 09 16:43:38 compute-0 nova_compute[117331]:   <seclabel type="dynamic" model="selinux" relabel="yes"/>
Oct 09 16:43:38 compute-0 nova_compute[117331]: </domain>
Oct 09 16:43:38 compute-0 nova_compute[117331]:  _update_pci_dev_xml /usr/lib/python3.12/site-packages/nova/virt/libvirt/migration.py:166
Oct 09 16:43:38 compute-0 nova_compute[117331]: 2025-10-09 16:43:38.932 2 DEBUG nova.virt.libvirt.driver [None req-ceed530a-e290-4a44-afa3-258d274da1ed 005258eefa5345409efe13f6a1de22ab c076c42271004e7aa86e84416e33f826 - - default default] [instance: 31e9639e-ea7e-41f5-8bd3-2f0344062f99] About to invoke the migrate API _live_migration_operation /usr/lib/python3.12/site-packages/nova/virt/libvirt/driver.py:11175
Oct 09 16:43:39 compute-0 nova_compute[117331]: 2025-10-09 16:43:39.425 2 DEBUG nova.virt.libvirt.migration [None req-ceed530a-e290-4a44-afa3-258d274da1ed 005258eefa5345409efe13f6a1de22ab c076c42271004e7aa86e84416e33f826 - - default default] [instance: 31e9639e-ea7e-41f5-8bd3-2f0344062f99] Current None elapsed 1 steps [(0, 50), (300, 95), (600, 140), (900, 185), (1200, 230), (1500, 275), (1800, 320), (2100, 365), (2400, 410), (2700, 455), (3000, 500)] update_downtime /usr/lib/python3.12/site-packages/nova/virt/libvirt/migration.py:658
Oct 09 16:43:39 compute-0 nova_compute[117331]: 2025-10-09 16:43:39.426 2 INFO nova.virt.libvirt.migration [None req-ceed530a-e290-4a44-afa3-258d274da1ed 005258eefa5345409efe13f6a1de22ab c076c42271004e7aa86e84416e33f826 - - default default] [instance: 31e9639e-ea7e-41f5-8bd3-2f0344062f99] Increasing downtime to 50 ms after 0 sec elapsed time
Oct 09 16:43:39 compute-0 podman[155094]: 2025-10-09 16:43:39.875843517 +0000 UTC m=+0.101833587 container health_status 270830b5167cafbab735d8118980f26987e673cd0fa464dd2202394830bcca66 (image=38.102.83.66:5001/podified-master-centos10/openstack-multipathd:watcher_latest, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.4, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 10 Base Image, tcib_build_tag=watcher_latest, managed_by=edpm_ansible, org.label-schema.build-date=20251007, org.label-schema.license=GPLv2, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': '38.102.83.66:5001/podified-master-centos10/openstack-multipathd:watcher_latest', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, config_id=multipathd)
Oct 09 16:43:39 compute-0 nova_compute[117331]: 2025-10-09 16:43:39.929 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 09 16:43:40 compute-0 nova_compute[117331]: 2025-10-09 16:43:40.446 2 INFO nova.virt.libvirt.driver [None req-ceed530a-e290-4a44-afa3-258d274da1ed 005258eefa5345409efe13f6a1de22ab c076c42271004e7aa86e84416e33f826 - - default default] [instance: 31e9639e-ea7e-41f5-8bd3-2f0344062f99] Migration running for 1 secs, memory 100% remaining (bytes processed=0, remaining=0, total=0); disk 100% remaining (bytes processed=0, remaining=0, total=0).
Oct 09 16:43:40 compute-0 sshd-session[155088]: Failed password for root from 193.46.255.217 port 17022 ssh2
Oct 09 16:43:40 compute-0 nova_compute[117331]: 2025-10-09 16:43:40.951 2 DEBUG nova.virt.libvirt.migration [None req-ceed530a-e290-4a44-afa3-258d274da1ed 005258eefa5345409efe13f6a1de22ab c076c42271004e7aa86e84416e33f826 - - default default] [instance: 31e9639e-ea7e-41f5-8bd3-2f0344062f99] Current 50 elapsed 2 steps [(0, 50), (300, 95), (600, 140), (900, 185), (1200, 230), (1500, 275), (1800, 320), (2100, 365), (2400, 410), (2700, 455), (3000, 500)] update_downtime /usr/lib/python3.12/site-packages/nova/virt/libvirt/migration.py:658
Oct 09 16:43:40 compute-0 nova_compute[117331]: 2025-10-09 16:43:40.952 2 DEBUG nova.virt.libvirt.migration [None req-ceed530a-e290-4a44-afa3-258d274da1ed 005258eefa5345409efe13f6a1de22ab c076c42271004e7aa86e84416e33f826 - - default default] [instance: 31e9639e-ea7e-41f5-8bd3-2f0344062f99] Downtime does not need to change update_downtime /usr/lib/python3.12/site-packages/nova/virt/libvirt/migration.py:671
Oct 09 16:43:41 compute-0 kernel: tap75dea49d-ac (unregistering): left promiscuous mode
Oct 09 16:43:41 compute-0 NetworkManager[1028]: <info>  [1760028221.2472] device (tap75dea49d-ac): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Oct 09 16:43:41 compute-0 ovn_controller[19752]: 2025-10-09T16:43:41Z|00295|binding|INFO|Releasing lport 75dea49d-ac7b-45ed-8521-44a081d00648 from this chassis (sb_readonly=0)
Oct 09 16:43:41 compute-0 ovn_controller[19752]: 2025-10-09T16:43:41Z|00296|binding|INFO|Setting lport 75dea49d-ac7b-45ed-8521-44a081d00648 down in Southbound
Oct 09 16:43:41 compute-0 ovn_controller[19752]: 2025-10-09T16:43:41Z|00297|binding|INFO|Removing iface tap75dea49d-ac ovn-installed in OVS
Oct 09 16:43:41 compute-0 nova_compute[117331]: 2025-10-09 16:43:41.258 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 09 16:43:41 compute-0 ovn_metadata_agent[28608]: 2025-10-09 16:43:41.267 28613 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:a9:96:a6 10.100.0.9'], port_security=['fa:16:3e:a9:96:a6 10.100.0.9'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com,compute-1.ctlplane.example.com', 'activation-strategy': 'rarp', 'additional-chassis-activated': '2bd8bf21-1f6b-42c9-9656-9a72fa8dcbf3'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.9/28', 'neutron:device_id': '31e9639e-ea7e-41f5-8bd3-2f0344062f99', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-faa2f899-e3f1-48c6-ac37-859a6fb5c6cc', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'fca25ca2f317463c909e30c8b2c188d1', 'neutron:revision_number': '10', 'neutron:security_group_ids': 'a3264ad7-df47-48af-b997-2022c89ca53a', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=0a2d5ec5-dae5-4df8-b843-efe172a6e533, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f37b2576840>], logical_port=75dea49d-ac7b-45ed-8521-44a081d00648) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f37b2576840>]) matches /usr/lib/python3.12/site-packages/ovsdbapp/backend/ovs_idl/event.py:55
Oct 09 16:43:41 compute-0 ovn_metadata_agent[28608]: 2025-10-09 16:43:41.268 28613 INFO neutron.agent.ovn.metadata.agent [-] Port 75dea49d-ac7b-45ed-8521-44a081d00648 in datapath faa2f899-e3f1-48c6-ac37-859a6fb5c6cc unbound from our chassis
Oct 09 16:43:41 compute-0 ovn_metadata_agent[28608]: 2025-10-09 16:43:41.270 28613 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network faa2f899-e3f1-48c6-ac37-859a6fb5c6cc, tearing the namespace down if needed _get_provision_params /usr/lib/python3.12/site-packages/neutron/agent/ovn/metadata/agent.py:756
Oct 09 16:43:41 compute-0 ovn_metadata_agent[28608]: 2025-10-09 16:43:41.271 139687 DEBUG oslo.privsep.daemon [-] privsep: reply[cf3fa15e-cadf-4105-9c05-a1e333495982]: (4, True) _call_back /usr/lib/python3.12/site-packages/oslo_privsep/daemon.py:515
Oct 09 16:43:41 compute-0 ovn_metadata_agent[28608]: 2025-10-09 16:43:41.272 28613 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-faa2f899-e3f1-48c6-ac37-859a6fb5c6cc namespace which is not needed anymore
Oct 09 16:43:41 compute-0 nova_compute[117331]: 2025-10-09 16:43:41.275 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 09 16:43:41 compute-0 nova_compute[117331]: 2025-10-09 16:43:41.302 2 DEBUG oslo_service.periodic_task [None req-92a13bed-c081-4c7b-87f3-bafebefb79f2 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.12/site-packages/oslo_service/periodic_task.py:210
Oct 09 16:43:41 compute-0 nova_compute[117331]: 2025-10-09 16:43:41.305 2 DEBUG oslo_service.periodic_task [None req-92a13bed-c081-4c7b-87f3-bafebefb79f2 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.12/site-packages/oslo_service/periodic_task.py:210
Oct 09 16:43:41 compute-0 systemd[1]: machine-qemu\x2d25\x2dinstance\x2d00000020.scope: Deactivated successfully.
Oct 09 16:43:41 compute-0 systemd[1]: machine-qemu\x2d25\x2dinstance\x2d00000020.scope: Consumed 15.047s CPU time.
Oct 09 16:43:41 compute-0 systemd-machined[77487]: Machine qemu-25-instance-00000020 terminated.
Oct 09 16:43:41 compute-0 neutron-haproxy-ovnmeta-faa2f899-e3f1-48c6-ac37-859a6fb5c6cc[154862]: [NOTICE]   (154866) : haproxy version is 3.0.5-8e879a5
Oct 09 16:43:41 compute-0 podman[155152]: 2025-10-09 16:43:41.4119505 +0000 UTC m=+0.034775367 container kill e82e36b6d13e903c7a63064a3fbcc5731eb7e1705a6a1ef6aa0e714bf6069afa (image=38.102.83.66:5001/podified-master-centos10/openstack-neutron-metadata-agent-ovn:watcher_latest, name=neutron-haproxy-ovnmeta-faa2f899-e3f1-48c6-ac37-859a6fb5c6cc, tcib_managed=true, io.buildah.version=1.41.4, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251007, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=watcher_latest, org.label-schema.name=CentOS Stream 10 Base Image)
Oct 09 16:43:41 compute-0 neutron-haproxy-ovnmeta-faa2f899-e3f1-48c6-ac37-859a6fb5c6cc[154862]: [NOTICE]   (154866) : path to executable is /usr/sbin/haproxy
Oct 09 16:43:41 compute-0 neutron-haproxy-ovnmeta-faa2f899-e3f1-48c6-ac37-859a6fb5c6cc[154862]: [WARNING]  (154866) : Exiting Master process...
Oct 09 16:43:41 compute-0 neutron-haproxy-ovnmeta-faa2f899-e3f1-48c6-ac37-859a6fb5c6cc[154862]: [ALERT]    (154866) : Current worker (154868) exited with code 143 (Terminated)
Oct 09 16:43:41 compute-0 neutron-haproxy-ovnmeta-faa2f899-e3f1-48c6-ac37-859a6fb5c6cc[154862]: [WARNING]  (154866) : All workers exited. Exiting... (0)
Oct 09 16:43:41 compute-0 systemd[1]: libpod-e82e36b6d13e903c7a63064a3fbcc5731eb7e1705a6a1ef6aa0e714bf6069afa.scope: Deactivated successfully.
Oct 09 16:43:41 compute-0 kernel: tap75dea49d-ac: entered promiscuous mode
Oct 09 16:43:41 compute-0 ovn_controller[19752]: 2025-10-09T16:43:41Z|00298|binding|INFO|Claiming lport 75dea49d-ac7b-45ed-8521-44a081d00648 for this chassis.
Oct 09 16:43:41 compute-0 ovn_controller[19752]: 2025-10-09T16:43:41Z|00299|binding|INFO|75dea49d-ac7b-45ed-8521-44a081d00648: Claiming fa:16:3e:a9:96:a6 10.100.0.9
Oct 09 16:43:41 compute-0 systemd-udevd[155133]: Network interface NamePolicy= disabled on kernel command line.
Oct 09 16:43:41 compute-0 nova_compute[117331]: 2025-10-09 16:43:41.452 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 09 16:43:41 compute-0 NetworkManager[1028]: <info>  [1760028221.4545] manager: (tap75dea49d-ac): new Tun device (/org/freedesktop/NetworkManager/Devices/109)
Oct 09 16:43:41 compute-0 ovn_metadata_agent[28608]: 2025-10-09 16:43:41.457 28613 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:a9:96:a6 10.100.0.9'], port_security=['fa:16:3e:a9:96:a6 10.100.0.9'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com,compute-1.ctlplane.example.com', 'activation-strategy': 'rarp', 'additional-chassis-activated': '2bd8bf21-1f6b-42c9-9656-9a72fa8dcbf3'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.9/28', 'neutron:device_id': '31e9639e-ea7e-41f5-8bd3-2f0344062f99', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-faa2f899-e3f1-48c6-ac37-859a6fb5c6cc', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'fca25ca2f317463c909e30c8b2c188d1', 'neutron:revision_number': '10', 'neutron:security_group_ids': 'a3264ad7-df47-48af-b997-2022c89ca53a', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=0a2d5ec5-dae5-4df8-b843-efe172a6e533, chassis=[<ovs.db.idl.Row object at 0x7f37b2576840>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f37b2576840>], logical_port=75dea49d-ac7b-45ed-8521-44a081d00648) old=Port_Binding(chassis=[]) matches /usr/lib/python3.12/site-packages/ovsdbapp/backend/ovs_idl/event.py:55
Oct 09 16:43:41 compute-0 kernel: tap75dea49d-ac (unregistering): left promiscuous mode
Oct 09 16:43:41 compute-0 virtnodedevd[117673]: libvirt version: 10.10.0, package: 15.el9 (builder@centos.org, 2025-08-18-13:22:20, )
Oct 09 16:43:41 compute-0 virtnodedevd[117673]: hostname: compute-0
Oct 09 16:43:41 compute-0 virtnodedevd[117673]: ethtool ioctl error on tap75dea49d-ac: No such device
Oct 09 16:43:41 compute-0 podman[155168]: 2025-10-09 16:43:41.476260583 +0000 UTC m=+0.041072816 container died e82e36b6d13e903c7a63064a3fbcc5731eb7e1705a6a1ef6aa0e714bf6069afa (image=38.102.83.66:5001/podified-master-centos10/openstack-neutron-metadata-agent-ovn:watcher_latest, name=neutron-haproxy-ovnmeta-faa2f899-e3f1-48c6-ac37-859a6fb5c6cc, io.buildah.version=1.41.4, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251007, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=watcher_latest, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 10 Base Image, tcib_managed=true)
Oct 09 16:43:41 compute-0 ovn_controller[19752]: 2025-10-09T16:43:41Z|00300|binding|INFO|Setting lport 75dea49d-ac7b-45ed-8521-44a081d00648 ovn-installed in OVS
Oct 09 16:43:41 compute-0 ovn_controller[19752]: 2025-10-09T16:43:41Z|00301|binding|INFO|Setting lport 75dea49d-ac7b-45ed-8521-44a081d00648 up in Southbound
Oct 09 16:43:41 compute-0 nova_compute[117331]: 2025-10-09 16:43:41.478 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 09 16:43:41 compute-0 virtnodedevd[117673]: ethtool ioctl error on tap75dea49d-ac: No such device
Oct 09 16:43:41 compute-0 ovn_controller[19752]: 2025-10-09T16:43:41Z|00302|binding|INFO|Releasing lport 75dea49d-ac7b-45ed-8521-44a081d00648 from this chassis (sb_readonly=0)
Oct 09 16:43:41 compute-0 ovn_controller[19752]: 2025-10-09T16:43:41Z|00303|binding|INFO|Setting lport 75dea49d-ac7b-45ed-8521-44a081d00648 down in Southbound
Oct 09 16:43:41 compute-0 ovn_controller[19752]: 2025-10-09T16:43:41Z|00304|binding|INFO|Removing iface tap75dea49d-ac ovn-installed in OVS
Oct 09 16:43:41 compute-0 nova_compute[117331]: 2025-10-09 16:43:41.486 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 09 16:43:41 compute-0 virtnodedevd[117673]: ethtool ioctl error on tap75dea49d-ac: No such device
Oct 09 16:43:41 compute-0 ovn_metadata_agent[28608]: 2025-10-09 16:43:41.491 28613 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:a9:96:a6 10.100.0.9'], port_security=['fa:16:3e:a9:96:a6 10.100.0.9'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com,compute-1.ctlplane.example.com', 'activation-strategy': 'rarp', 'additional-chassis-activated': '2bd8bf21-1f6b-42c9-9656-9a72fa8dcbf3'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.9/28', 'neutron:device_id': '31e9639e-ea7e-41f5-8bd3-2f0344062f99', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-faa2f899-e3f1-48c6-ac37-859a6fb5c6cc', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'fca25ca2f317463c909e30c8b2c188d1', 'neutron:revision_number': '10', 'neutron:security_group_ids': 'a3264ad7-df47-48af-b997-2022c89ca53a', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=0a2d5ec5-dae5-4df8-b843-efe172a6e533, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f37b2576840>], logical_port=75dea49d-ac7b-45ed-8521-44a081d00648) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f37b2576840>]) matches /usr/lib/python3.12/site-packages/ovsdbapp/backend/ovs_idl/event.py:55
Oct 09 16:43:41 compute-0 virtnodedevd[117673]: ethtool ioctl error on tap75dea49d-ac: No such device
Oct 09 16:43:41 compute-0 virtnodedevd[117673]: ethtool ioctl error on tap75dea49d-ac: No such device
Oct 09 16:43:41 compute-0 nova_compute[117331]: 2025-10-09 16:43:41.501 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 09 16:43:41 compute-0 virtnodedevd[117673]: ethtool ioctl error on tap75dea49d-ac: No such device
Oct 09 16:43:41 compute-0 virtnodedevd[117673]: ethtool ioctl error on tap75dea49d-ac: No such device
Oct 09 16:43:41 compute-0 nova_compute[117331]: 2025-10-09 16:43:41.516 2 DEBUG nova.virt.libvirt.guest [None req-ceed530a-e290-4a44-afa3-258d274da1ed 005258eefa5345409efe13f6a1de22ab c076c42271004e7aa86e84416e33f826 - - default default] Domain has shutdown/gone away: Requested operation is not valid: domain is not running get_job_info /usr/lib/python3.12/site-packages/nova/virt/libvirt/guest.py:687
Oct 09 16:43:41 compute-0 nova_compute[117331]: 2025-10-09 16:43:41.516 2 INFO nova.virt.libvirt.driver [None req-ceed530a-e290-4a44-afa3-258d274da1ed 005258eefa5345409efe13f6a1de22ab c076c42271004e7aa86e84416e33f826 - - default default] [instance: 31e9639e-ea7e-41f5-8bd3-2f0344062f99] Migration operation has completed
Oct 09 16:43:41 compute-0 nova_compute[117331]: 2025-10-09 16:43:41.517 2 INFO nova.compute.manager [None req-ceed530a-e290-4a44-afa3-258d274da1ed 005258eefa5345409efe13f6a1de22ab c076c42271004e7aa86e84416e33f826 - - default default] [instance: 31e9639e-ea7e-41f5-8bd3-2f0344062f99] _post_live_migration() is started..
Oct 09 16:43:41 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-e82e36b6d13e903c7a63064a3fbcc5731eb7e1705a6a1ef6aa0e714bf6069afa-userdata-shm.mount: Deactivated successfully.
Oct 09 16:43:41 compute-0 virtnodedevd[117673]: ethtool ioctl error on tap75dea49d-ac: No such device
Oct 09 16:43:41 compute-0 systemd[1]: var-lib-containers-storage-overlay-53bdfca7c784173200ba835c343cae764c285f15e713fac7f44d9e5bfca32170-merged.mount: Deactivated successfully.
Oct 09 16:43:41 compute-0 nova_compute[117331]: 2025-10-09 16:43:41.520 2 DEBUG nova.virt.libvirt.driver [None req-ceed530a-e290-4a44-afa3-258d274da1ed 005258eefa5345409efe13f6a1de22ab c076c42271004e7aa86e84416e33f826 - - default default] [instance: 31e9639e-ea7e-41f5-8bd3-2f0344062f99] Migrate API has completed _live_migration_operation /usr/lib/python3.12/site-packages/nova/virt/libvirt/driver.py:11182
Oct 09 16:43:41 compute-0 nova_compute[117331]: 2025-10-09 16:43:41.520 2 DEBUG nova.virt.libvirt.driver [None req-ceed530a-e290-4a44-afa3-258d274da1ed 005258eefa5345409efe13f6a1de22ab c076c42271004e7aa86e84416e33f826 - - default default] [instance: 31e9639e-ea7e-41f5-8bd3-2f0344062f99] Migration operation thread has finished _live_migration_operation /usr/lib/python3.12/site-packages/nova/virt/libvirt/driver.py:11230
Oct 09 16:43:41 compute-0 nova_compute[117331]: 2025-10-09 16:43:41.521 2 DEBUG nova.virt.libvirt.driver [None req-ceed530a-e290-4a44-afa3-258d274da1ed 005258eefa5345409efe13f6a1de22ab c076c42271004e7aa86e84416e33f826 - - default default] [instance: 31e9639e-ea7e-41f5-8bd3-2f0344062f99] Migration operation thread notification thread_finished /usr/lib/python3.12/site-packages/nova/virt/libvirt/driver.py:11533
Oct 09 16:43:41 compute-0 nova_compute[117331]: 2025-10-09 16:43:41.530 2 DEBUG nova.compute.manager [req-58c4eb65-1cae-44b0-9c3f-83510c2ac503 req-c97fe5d3-8961-48ef-98d0-b6f300087512 ee15961ad2be4bada6c106ac4f69f422 c076c42271004e7aa86e84416e33f826 - - default default] [instance: 31e9639e-ea7e-41f5-8bd3-2f0344062f99] Received event network-vif-unplugged-75dea49d-ac7b-45ed-8521-44a081d00648 external_instance_event /usr/lib/python3.12/site-packages/nova/compute/manager.py:11812
Oct 09 16:43:41 compute-0 nova_compute[117331]: 2025-10-09 16:43:41.530 2 DEBUG oslo_concurrency.lockutils [req-58c4eb65-1cae-44b0-9c3f-83510c2ac503 req-c97fe5d3-8961-48ef-98d0-b6f300087512 ee15961ad2be4bada6c106ac4f69f422 c076c42271004e7aa86e84416e33f826 - - default default] Acquiring lock "31e9639e-ea7e-41f5-8bd3-2f0344062f99-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:405
Oct 09 16:43:41 compute-0 nova_compute[117331]: 2025-10-09 16:43:41.530 2 DEBUG oslo_concurrency.lockutils [req-58c4eb65-1cae-44b0-9c3f-83510c2ac503 req-c97fe5d3-8961-48ef-98d0-b6f300087512 ee15961ad2be4bada6c106ac4f69f422 c076c42271004e7aa86e84416e33f826 - - default default] Lock "31e9639e-ea7e-41f5-8bd3-2f0344062f99-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:410
Oct 09 16:43:41 compute-0 nova_compute[117331]: 2025-10-09 16:43:41.531 2 DEBUG oslo_concurrency.lockutils [req-58c4eb65-1cae-44b0-9c3f-83510c2ac503 req-c97fe5d3-8961-48ef-98d0-b6f300087512 ee15961ad2be4bada6c106ac4f69f422 c076c42271004e7aa86e84416e33f826 - - default default] Lock "31e9639e-ea7e-41f5-8bd3-2f0344062f99-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:424
Oct 09 16:43:41 compute-0 nova_compute[117331]: 2025-10-09 16:43:41.531 2 DEBUG nova.compute.manager [req-58c4eb65-1cae-44b0-9c3f-83510c2ac503 req-c97fe5d3-8961-48ef-98d0-b6f300087512 ee15961ad2be4bada6c106ac4f69f422 c076c42271004e7aa86e84416e33f826 - - default default] [instance: 31e9639e-ea7e-41f5-8bd3-2f0344062f99] No waiting events found dispatching network-vif-unplugged-75dea49d-ac7b-45ed-8521-44a081d00648 pop_instance_event /usr/lib/python3.12/site-packages/nova/compute/manager.py:344
Oct 09 16:43:41 compute-0 nova_compute[117331]: 2025-10-09 16:43:41.531 2 DEBUG nova.compute.manager [req-58c4eb65-1cae-44b0-9c3f-83510c2ac503 req-c97fe5d3-8961-48ef-98d0-b6f300087512 ee15961ad2be4bada6c106ac4f69f422 c076c42271004e7aa86e84416e33f826 - - default default] [instance: 31e9639e-ea7e-41f5-8bd3-2f0344062f99] Received event network-vif-unplugged-75dea49d-ac7b-45ed-8521-44a081d00648 for instance with task_state migrating. _process_instance_event /usr/lib/python3.12/site-packages/nova/compute/manager.py:11590
Oct 09 16:43:41 compute-0 nova_compute[117331]: 2025-10-09 16:43:41.531 2 WARNING neutronclient.v2_0.client [None req-ceed530a-e290-4a44-afa3-258d274da1ed 005258eefa5345409efe13f6a1de22ab c076c42271004e7aa86e84416e33f826 - - default default] The python binding code in neutronclient is deprecated in favor of OpenstackSDK, please use that as this will be removed in a future release.
Oct 09 16:43:41 compute-0 nova_compute[117331]: 2025-10-09 16:43:41.532 2 WARNING neutronclient.v2_0.client [None req-ceed530a-e290-4a44-afa3-258d274da1ed 005258eefa5345409efe13f6a1de22ab c076c42271004e7aa86e84416e33f826 - - default default] The python binding code in neutronclient is deprecated in favor of OpenstackSDK, please use that as this will be removed in a future release.
Oct 09 16:43:41 compute-0 podman[155168]: 2025-10-09 16:43:41.534234705 +0000 UTC m=+0.099046878 container cleanup e82e36b6d13e903c7a63064a3fbcc5731eb7e1705a6a1ef6aa0e714bf6069afa (image=38.102.83.66:5001/podified-master-centos10/openstack-neutron-metadata-agent-ovn:watcher_latest, name=neutron-haproxy-ovnmeta-faa2f899-e3f1-48c6-ac37-859a6fb5c6cc, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_managed=true, org.label-schema.build-date=20251007, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=watcher_latest, org.label-schema.name=CentOS Stream 10 Base Image, io.buildah.version=1.41.4)
Oct 09 16:43:41 compute-0 systemd[1]: libpod-conmon-e82e36b6d13e903c7a63064a3fbcc5731eb7e1705a6a1ef6aa0e714bf6069afa.scope: Deactivated successfully.
Oct 09 16:43:41 compute-0 podman[155170]: 2025-10-09 16:43:41.554732417 +0000 UTC m=+0.112097953 container remove e82e36b6d13e903c7a63064a3fbcc5731eb7e1705a6a1ef6aa0e714bf6069afa (image=38.102.83.66:5001/podified-master-centos10/openstack-neutron-metadata-agent-ovn:watcher_latest, name=neutron-haproxy-ovnmeta-faa2f899-e3f1-48c6-ac37-859a6fb5c6cc, org.label-schema.license=GPLv2, io.buildah.version=1.41.4, org.label-schema.build-date=20251007, org.label-schema.name=CentOS Stream 10 Base Image, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=watcher_latest, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Oct 09 16:43:41 compute-0 ovn_metadata_agent[28608]: 2025-10-09 16:43:41.578 139687 DEBUG oslo.privsep.daemon [-] privsep: reply[c8186996-16de-4540-956c-26d2aa8ceeb5]: (4, ("Thu Oct  9 04:43:41 PM UTC 2025 Sending signal '15' to neutron-haproxy-ovnmeta-faa2f899-e3f1-48c6-ac37-859a6fb5c6cc (e82e36b6d13e903c7a63064a3fbcc5731eb7e1705a6a1ef6aa0e714bf6069afa)\ne82e36b6d13e903c7a63064a3fbcc5731eb7e1705a6a1ef6aa0e714bf6069afa\nThu Oct  9 04:43:41 PM UTC 2025 Deleting container neutron-haproxy-ovnmeta-faa2f899-e3f1-48c6-ac37-859a6fb5c6cc (e82e36b6d13e903c7a63064a3fbcc5731eb7e1705a6a1ef6aa0e714bf6069afa)\ne82e36b6d13e903c7a63064a3fbcc5731eb7e1705a6a1ef6aa0e714bf6069afa\n", '', 0)) _call_back /usr/lib/python3.12/site-packages/oslo_privsep/daemon.py:515
Oct 09 16:43:41 compute-0 ovn_metadata_agent[28608]: 2025-10-09 16:43:41.579 139687 DEBUG oslo.privsep.daemon [-] privsep: reply[5320dc46-5a41-40a4-b5bc-00da053e7f12]: (4, None) _call_back /usr/lib/python3.12/site-packages/oslo_privsep/daemon.py:515
Oct 09 16:43:41 compute-0 ovn_metadata_agent[28608]: 2025-10-09 16:43:41.580 28613 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/faa2f899-e3f1-48c6-ac37-859a6fb5c6cc.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/faa2f899-e3f1-48c6-ac37-859a6fb5c6cc.pid.haproxy' get_value_from_file /usr/lib/python3.12/site-packages/neutron/agent/linux/utils.py:268
Oct 09 16:43:41 compute-0 ovn_metadata_agent[28608]: 2025-10-09 16:43:41.581 139687 DEBUG oslo.privsep.daemon [-] privsep: reply[a495ace8-3733-4f55-a95a-2a7ea55a4feb]: (4, None) _call_back /usr/lib/python3.12/site-packages/oslo_privsep/daemon.py:515
Oct 09 16:43:41 compute-0 ovn_metadata_agent[28608]: 2025-10-09 16:43:41.582 28613 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapfaa2f899-e0, bridge=None, if_exists=True) do_commit /usr/lib/python3.12/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 09 16:43:41 compute-0 nova_compute[117331]: 2025-10-09 16:43:41.583 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 09 16:43:41 compute-0 kernel: tapfaa2f899-e0: left promiscuous mode
Oct 09 16:43:41 compute-0 nova_compute[117331]: 2025-10-09 16:43:41.606 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 09 16:43:41 compute-0 ovn_metadata_agent[28608]: 2025-10-09 16:43:41.610 139687 DEBUG oslo.privsep.daemon [-] privsep: reply[f74e9509-8552-4cb7-bdc9-fb15728a1ba3]: (4, True) _call_back /usr/lib/python3.12/site-packages/oslo_privsep/daemon.py:515
Oct 09 16:43:41 compute-0 ovn_metadata_agent[28608]: 2025-10-09 16:43:41.638 139687 DEBUG oslo.privsep.daemon [-] privsep: reply[740fc61b-644c-4eb9-9ce9-46997cc0ab1d]: (4, None) _call_back /usr/lib/python3.12/site-packages/oslo_privsep/daemon.py:515
Oct 09 16:43:41 compute-0 ovn_metadata_agent[28608]: 2025-10-09 16:43:41.639 139687 DEBUG oslo.privsep.daemon [-] privsep: reply[df3eb8e8-494e-4573-a647-b26bfaa1c2ba]: (4, True) _call_back /usr/lib/python3.12/site-packages/oslo_privsep/daemon.py:515
Oct 09 16:43:41 compute-0 ovn_metadata_agent[28608]: 2025-10-09 16:43:41.664 139687 DEBUG oslo.privsep.daemon [-] privsep: reply[b7b675c5-9df7-4170-89d2-8ce8a2f2e7fb]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 311150, 'reachable_time': 18754, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 155223, 'error': None, 'target': 'ovnmeta-faa2f899-e3f1-48c6-ac37-859a6fb5c6cc', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.12/site-packages/oslo_privsep/daemon.py:515
Oct 09 16:43:41 compute-0 ovn_metadata_agent[28608]: 2025-10-09 16:43:41.668 28727 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-faa2f899-e3f1-48c6-ac37-859a6fb5c6cc deleted. remove_netns /usr/lib/python3.12/site-packages/neutron/privileged/agent/linux/ip_lib.py:603
Oct 09 16:43:41 compute-0 ovn_metadata_agent[28608]: 2025-10-09 16:43:41.668 28727 DEBUG oslo.privsep.daemon [-] privsep: reply[594f7e76-f09c-4fbf-8713-3512a61cf5d6]: (4, None) _call_back /usr/lib/python3.12/site-packages/oslo_privsep/daemon.py:515
Oct 09 16:43:41 compute-0 systemd[1]: run-netns-ovnmeta\x2dfaa2f899\x2de3f1\x2d48c6\x2dac37\x2d859a6fb5c6cc.mount: Deactivated successfully.
Oct 09 16:43:41 compute-0 ovn_metadata_agent[28608]: 2025-10-09 16:43:41.669 28613 INFO neutron.agent.ovn.metadata.agent [-] Port 75dea49d-ac7b-45ed-8521-44a081d00648 in datapath faa2f899-e3f1-48c6-ac37-859a6fb5c6cc unbound from our chassis
Oct 09 16:43:41 compute-0 ovn_metadata_agent[28608]: 2025-10-09 16:43:41.671 28613 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network faa2f899-e3f1-48c6-ac37-859a6fb5c6cc, tearing the namespace down if needed _get_provision_params /usr/lib/python3.12/site-packages/neutron/agent/ovn/metadata/agent.py:756
Oct 09 16:43:41 compute-0 ovn_metadata_agent[28608]: 2025-10-09 16:43:41.672 139687 DEBUG oslo.privsep.daemon [-] privsep: reply[ed2039a4-4831-4a2c-bf7b-fc41aa6c4fca]: (4, False) _call_back /usr/lib/python3.12/site-packages/oslo_privsep/daemon.py:515
Oct 09 16:43:41 compute-0 ovn_metadata_agent[28608]: 2025-10-09 16:43:41.672 28613 INFO neutron.agent.ovn.metadata.agent [-] Port 75dea49d-ac7b-45ed-8521-44a081d00648 in datapath faa2f899-e3f1-48c6-ac37-859a6fb5c6cc unbound from our chassis
Oct 09 16:43:41 compute-0 ovn_metadata_agent[28608]: 2025-10-09 16:43:41.674 28613 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network faa2f899-e3f1-48c6-ac37-859a6fb5c6cc, tearing the namespace down if needed _get_provision_params /usr/lib/python3.12/site-packages/neutron/agent/ovn/metadata/agent.py:756
Oct 09 16:43:41 compute-0 ovn_metadata_agent[28608]: 2025-10-09 16:43:41.675 139687 DEBUG oslo.privsep.daemon [-] privsep: reply[078e4188-8a63-41ca-881d-f1b8e6a10395]: (4, False) _call_back /usr/lib/python3.12/site-packages/oslo_privsep/daemon.py:515
Oct 09 16:43:41 compute-0 nova_compute[117331]: 2025-10-09 16:43:41.944 2 DEBUG nova.compute.manager [req-275d9d8b-3ae3-4453-b744-d01ad05141bc req-f1b1da36-0257-4283-aae1-488f5eb7ba4d ee15961ad2be4bada6c106ac4f69f422 c076c42271004e7aa86e84416e33f826 - - default default] [instance: 31e9639e-ea7e-41f5-8bd3-2f0344062f99] Received event network-vif-unplugged-75dea49d-ac7b-45ed-8521-44a081d00648 external_instance_event /usr/lib/python3.12/site-packages/nova/compute/manager.py:11812
Oct 09 16:43:41 compute-0 nova_compute[117331]: 2025-10-09 16:43:41.945 2 DEBUG oslo_concurrency.lockutils [req-275d9d8b-3ae3-4453-b744-d01ad05141bc req-f1b1da36-0257-4283-aae1-488f5eb7ba4d ee15961ad2be4bada6c106ac4f69f422 c076c42271004e7aa86e84416e33f826 - - default default] Acquiring lock "31e9639e-ea7e-41f5-8bd3-2f0344062f99-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:405
Oct 09 16:43:41 compute-0 nova_compute[117331]: 2025-10-09 16:43:41.945 2 DEBUG oslo_concurrency.lockutils [req-275d9d8b-3ae3-4453-b744-d01ad05141bc req-f1b1da36-0257-4283-aae1-488f5eb7ba4d ee15961ad2be4bada6c106ac4f69f422 c076c42271004e7aa86e84416e33f826 - - default default] Lock "31e9639e-ea7e-41f5-8bd3-2f0344062f99-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:410
Oct 09 16:43:41 compute-0 nova_compute[117331]: 2025-10-09 16:43:41.946 2 DEBUG oslo_concurrency.lockutils [req-275d9d8b-3ae3-4453-b744-d01ad05141bc req-f1b1da36-0257-4283-aae1-488f5eb7ba4d ee15961ad2be4bada6c106ac4f69f422 c076c42271004e7aa86e84416e33f826 - - default default] Lock "31e9639e-ea7e-41f5-8bd3-2f0344062f99-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:424
Oct 09 16:43:41 compute-0 nova_compute[117331]: 2025-10-09 16:43:41.946 2 DEBUG nova.compute.manager [req-275d9d8b-3ae3-4453-b744-d01ad05141bc req-f1b1da36-0257-4283-aae1-488f5eb7ba4d ee15961ad2be4bada6c106ac4f69f422 c076c42271004e7aa86e84416e33f826 - - default default] [instance: 31e9639e-ea7e-41f5-8bd3-2f0344062f99] No waiting events found dispatching network-vif-unplugged-75dea49d-ac7b-45ed-8521-44a081d00648 pop_instance_event /usr/lib/python3.12/site-packages/nova/compute/manager.py:344
Oct 09 16:43:41 compute-0 nova_compute[117331]: 2025-10-09 16:43:41.947 2 DEBUG nova.compute.manager [req-275d9d8b-3ae3-4453-b744-d01ad05141bc req-f1b1da36-0257-4283-aae1-488f5eb7ba4d ee15961ad2be4bada6c106ac4f69f422 c076c42271004e7aa86e84416e33f826 - - default default] [instance: 31e9639e-ea7e-41f5-8bd3-2f0344062f99] Received event network-vif-unplugged-75dea49d-ac7b-45ed-8521-44a081d00648 for instance with task_state migrating. _process_instance_event /usr/lib/python3.12/site-packages/nova/compute/manager.py:11590
Oct 09 16:43:42 compute-0 nova_compute[117331]: 2025-10-09 16:43:42.022 2 DEBUG nova.network.neutron [None req-ceed530a-e290-4a44-afa3-258d274da1ed 005258eefa5345409efe13f6a1de22ab c076c42271004e7aa86e84416e33f826 - - default default] Activated binding for port 75dea49d-ac7b-45ed-8521-44a081d00648 and host compute-1.ctlplane.example.com migrate_instance_start /usr/lib/python3.12/site-packages/nova/network/neutron.py:3241
Oct 09 16:43:42 compute-0 nova_compute[117331]: 2025-10-09 16:43:42.023 2 DEBUG nova.compute.manager [None req-ceed530a-e290-4a44-afa3-258d274da1ed 005258eefa5345409efe13f6a1de22ab c076c42271004e7aa86e84416e33f826 - - default default] [instance: 31e9639e-ea7e-41f5-8bd3-2f0344062f99] Calling driver.post_live_migration_at_source with original source VIFs from migrate_data: [{"id": "75dea49d-ac7b-45ed-8521-44a081d00648", "address": "fa:16:3e:a9:96:a6", "network": {"id": "faa2f899-e3f1-48c6-ac37-859a6fb5c6cc", "bridge": "br-int", "label": "tempest-TestExecuteZoneMigrationStrategy-926478805-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "655616b8f80249ceab702bd3d943237d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap75dea49d-ac", "ovs_interfaceid": "75dea49d-ac7b-45ed-8521-44a081d00648", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] _post_live_migration /usr/lib/python3.12/site-packages/nova/compute/manager.py:10059
Oct 09 16:43:42 compute-0 nova_compute[117331]: 2025-10-09 16:43:42.024 2 DEBUG nova.virt.libvirt.vif [None req-ceed530a-e290-4a44-afa3-258d274da1ed 005258eefa5345409efe13f6a1de22ab c076c42271004e7aa86e84416e33f826 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,compute_id=1,config_drive='True',created_at=2025-10-09T16:42:33Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description=None,display_name='tempest-TestExecuteZoneMigrationStrategy-server-1385446209',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testexecutezonemigrationstrategy-server-1385446209',id=32,image_ref='b7d6e0af-25e4-4227-9dc6-43143898ceee',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-10-09T16:42:50Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=1,progress=0,project_id='fca25ca2f317463c909e30c8b2c188d1',ramdisk_id='',reservation_id='r-a21ifu1r',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,manager,admin,member',image_base_image_ref='b7d6e0af-25e4-4227-9dc6-43143898ceee',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestExecuteZoneMigrationStrategy-1067041161',owner_user_name='tempest-TestExecuteZoneMigrationStrategy-1067041161-project-admin'},tags=<?>,task_state='migrating',terminated_at=None,trusted_certs=<?>,updated_at=2025-10-09T16:43:18Z,user_data=None,user_id='05384d72bc894b20b1cc128bf76382b2',uuid=31e9639e-ea7e-41f5-8bd3-2f0344062f99,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "75dea49d-ac7b-45ed-8521-44a081d00648", "address": "fa:16:3e:a9:96:a6", "network": {"id": "faa2f899-e3f1-48c6-ac37-859a6fb5c6cc", "bridge": "br-int", "label": "tempest-TestExecuteZoneMigrationStrategy-926478805-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "655616b8f80249ceab702bd3d943237d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap75dea49d-ac", "ovs_interfaceid": "75dea49d-ac7b-45ed-8521-44a081d00648", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.12/site-packages/nova/virt/libvirt/vif.py:839
Oct 09 16:43:42 compute-0 nova_compute[117331]: 2025-10-09 16:43:42.024 2 DEBUG nova.network.os_vif_util [None req-ceed530a-e290-4a44-afa3-258d274da1ed 005258eefa5345409efe13f6a1de22ab c076c42271004e7aa86e84416e33f826 - - default default] Converting VIF {"id": "75dea49d-ac7b-45ed-8521-44a081d00648", "address": "fa:16:3e:a9:96:a6", "network": {"id": "faa2f899-e3f1-48c6-ac37-859a6fb5c6cc", "bridge": "br-int", "label": "tempest-TestExecuteZoneMigrationStrategy-926478805-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "655616b8f80249ceab702bd3d943237d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap75dea49d-ac", "ovs_interfaceid": "75dea49d-ac7b-45ed-8521-44a081d00648", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.12/site-packages/nova/network/os_vif_util.py:511
Oct 09 16:43:42 compute-0 nova_compute[117331]: 2025-10-09 16:43:42.025 2 DEBUG nova.network.os_vif_util [None req-ceed530a-e290-4a44-afa3-258d274da1ed 005258eefa5345409efe13f6a1de22ab c076c42271004e7aa86e84416e33f826 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:a9:96:a6,bridge_name='br-int',has_traffic_filtering=True,id=75dea49d-ac7b-45ed-8521-44a081d00648,network=Network(faa2f899-e3f1-48c6-ac37-859a6fb5c6cc),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap75dea49d-ac') nova_to_osvif_vif /usr/lib/python3.12/site-packages/nova/network/os_vif_util.py:548
Oct 09 16:43:42 compute-0 nova_compute[117331]: 2025-10-09 16:43:42.025 2 DEBUG os_vif [None req-ceed530a-e290-4a44-afa3-258d274da1ed 005258eefa5345409efe13f6a1de22ab c076c42271004e7aa86e84416e33f826 - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:a9:96:a6,bridge_name='br-int',has_traffic_filtering=True,id=75dea49d-ac7b-45ed-8521-44a081d00648,network=Network(faa2f899-e3f1-48c6-ac37-859a6fb5c6cc),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap75dea49d-ac') unplug /usr/lib/python3.12/site-packages/os_vif/__init__.py:109
Oct 09 16:43:42 compute-0 nova_compute[117331]: 2025-10-09 16:43:42.027 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 22 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 09 16:43:42 compute-0 nova_compute[117331]: 2025-10-09 16:43:42.027 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap75dea49d-ac, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.12/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 09 16:43:42 compute-0 nova_compute[117331]: 2025-10-09 16:43:42.029 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 09 16:43:42 compute-0 nova_compute[117331]: 2025-10-09 16:43:42.031 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 09 16:43:42 compute-0 nova_compute[117331]: 2025-10-09 16:43:42.032 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 22 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 09 16:43:42 compute-0 nova_compute[117331]: 2025-10-09 16:43:42.032 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbDestroyCommand(_result=None, table=QoS, record=9da3cdba-ee13-497b-a87c-364d229d1ec5) do_commit /usr/lib/python3.12/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 09 16:43:42 compute-0 nova_compute[117331]: 2025-10-09 16:43:42.033 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 09 16:43:42 compute-0 nova_compute[117331]: 2025-10-09 16:43:42.033 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 09 16:43:42 compute-0 nova_compute[117331]: 2025-10-09 16:43:42.036 2 INFO os_vif [None req-ceed530a-e290-4a44-afa3-258d274da1ed 005258eefa5345409efe13f6a1de22ab c076c42271004e7aa86e84416e33f826 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:a9:96:a6,bridge_name='br-int',has_traffic_filtering=True,id=75dea49d-ac7b-45ed-8521-44a081d00648,network=Network(faa2f899-e3f1-48c6-ac37-859a6fb5c6cc),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap75dea49d-ac')
Oct 09 16:43:42 compute-0 nova_compute[117331]: 2025-10-09 16:43:42.037 2 DEBUG oslo_concurrency.lockutils [None req-ceed530a-e290-4a44-afa3-258d274da1ed 005258eefa5345409efe13f6a1de22ab c076c42271004e7aa86e84416e33f826 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.free_pci_device_allocations_for_instance" inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:405
Oct 09 16:43:42 compute-0 nova_compute[117331]: 2025-10-09 16:43:42.037 2 DEBUG oslo_concurrency.lockutils [None req-ceed530a-e290-4a44-afa3-258d274da1ed 005258eefa5345409efe13f6a1de22ab c076c42271004e7aa86e84416e33f826 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.free_pci_device_allocations_for_instance" :: waited 0.000s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:410
Oct 09 16:43:42 compute-0 nova_compute[117331]: 2025-10-09 16:43:42.037 2 DEBUG oslo_concurrency.lockutils [None req-ceed530a-e290-4a44-afa3-258d274da1ed 005258eefa5345409efe13f6a1de22ab c076c42271004e7aa86e84416e33f826 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.free_pci_device_allocations_for_instance" :: held 0.000s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:424
Oct 09 16:43:42 compute-0 nova_compute[117331]: 2025-10-09 16:43:42.037 2 DEBUG nova.compute.manager [None req-ceed530a-e290-4a44-afa3-258d274da1ed 005258eefa5345409efe13f6a1de22ab c076c42271004e7aa86e84416e33f826 - - default default] [instance: 31e9639e-ea7e-41f5-8bd3-2f0344062f99] Calling driver.cleanup from _post_live_migration _post_live_migration /usr/lib/python3.12/site-packages/nova/compute/manager.py:10082
Oct 09 16:43:42 compute-0 nova_compute[117331]: 2025-10-09 16:43:42.038 2 INFO nova.virt.libvirt.driver [None req-ceed530a-e290-4a44-afa3-258d274da1ed 005258eefa5345409efe13f6a1de22ab c076c42271004e7aa86e84416e33f826 - - default default] [instance: 31e9639e-ea7e-41f5-8bd3-2f0344062f99] Deleting instance files /var/lib/nova/instances/31e9639e-ea7e-41f5-8bd3-2f0344062f99_del
Oct 09 16:43:42 compute-0 nova_compute[117331]: 2025-10-09 16:43:42.039 2 INFO nova.virt.libvirt.driver [None req-ceed530a-e290-4a44-afa3-258d274da1ed 005258eefa5345409efe13f6a1de22ab c076c42271004e7aa86e84416e33f826 - - default default] [instance: 31e9639e-ea7e-41f5-8bd3-2f0344062f99] Deletion of /var/lib/nova/instances/31e9639e-ea7e-41f5-8bd3-2f0344062f99_del complete
Oct 09 16:43:42 compute-0 nova_compute[117331]: 2025-10-09 16:43:42.306 2 DEBUG oslo_service.periodic_task [None req-92a13bed-c081-4c7b-87f3-bafebefb79f2 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.12/site-packages/oslo_service/periodic_task.py:210
Oct 09 16:43:42 compute-0 nova_compute[117331]: 2025-10-09 16:43:42.307 2 DEBUG nova.compute.manager [None req-92a13bed-c081-4c7b-87f3-bafebefb79f2 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.12/site-packages/nova/compute/manager.py:11228
Oct 09 16:43:42 compute-0 nova_compute[117331]: 2025-10-09 16:43:42.507 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 09 16:43:42 compute-0 sshd-session[155088]: Received disconnect from 193.46.255.217 port 17022:11:  [preauth]
Oct 09 16:43:42 compute-0 sshd-session[155088]: Disconnected from authenticating user root 193.46.255.217 port 17022 [preauth]
Oct 09 16:43:42 compute-0 sshd-session[155088]: PAM 2 more authentication failures; logname= uid=0 euid=0 tty=ssh ruser= rhost=193.46.255.217  user=root
Oct 09 16:43:43 compute-0 nova_compute[117331]: 2025-10-09 16:43:43.307 2 DEBUG oslo_service.periodic_task [None req-92a13bed-c081-4c7b-87f3-bafebefb79f2 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.12/site-packages/oslo_service/periodic_task.py:210
Oct 09 16:43:43 compute-0 nova_compute[117331]: 2025-10-09 16:43:43.598 2 DEBUG nova.compute.manager [req-a42939a3-bcab-4f91-a726-8f649afbbe38 req-329844d6-5387-435b-b01c-647dcf64f7a8 ee15961ad2be4bada6c106ac4f69f422 c076c42271004e7aa86e84416e33f826 - - default default] [instance: 31e9639e-ea7e-41f5-8bd3-2f0344062f99] Received event network-vif-plugged-75dea49d-ac7b-45ed-8521-44a081d00648 external_instance_event /usr/lib/python3.12/site-packages/nova/compute/manager.py:11812
Oct 09 16:43:43 compute-0 nova_compute[117331]: 2025-10-09 16:43:43.598 2 DEBUG oslo_concurrency.lockutils [req-a42939a3-bcab-4f91-a726-8f649afbbe38 req-329844d6-5387-435b-b01c-647dcf64f7a8 ee15961ad2be4bada6c106ac4f69f422 c076c42271004e7aa86e84416e33f826 - - default default] Acquiring lock "31e9639e-ea7e-41f5-8bd3-2f0344062f99-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:405
Oct 09 16:43:43 compute-0 nova_compute[117331]: 2025-10-09 16:43:43.599 2 DEBUG oslo_concurrency.lockutils [req-a42939a3-bcab-4f91-a726-8f649afbbe38 req-329844d6-5387-435b-b01c-647dcf64f7a8 ee15961ad2be4bada6c106ac4f69f422 c076c42271004e7aa86e84416e33f826 - - default default] Lock "31e9639e-ea7e-41f5-8bd3-2f0344062f99-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:410
Oct 09 16:43:43 compute-0 nova_compute[117331]: 2025-10-09 16:43:43.599 2 DEBUG oslo_concurrency.lockutils [req-a42939a3-bcab-4f91-a726-8f649afbbe38 req-329844d6-5387-435b-b01c-647dcf64f7a8 ee15961ad2be4bada6c106ac4f69f422 c076c42271004e7aa86e84416e33f826 - - default default] Lock "31e9639e-ea7e-41f5-8bd3-2f0344062f99-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:424
Oct 09 16:43:43 compute-0 nova_compute[117331]: 2025-10-09 16:43:43.600 2 DEBUG nova.compute.manager [req-a42939a3-bcab-4f91-a726-8f649afbbe38 req-329844d6-5387-435b-b01c-647dcf64f7a8 ee15961ad2be4bada6c106ac4f69f422 c076c42271004e7aa86e84416e33f826 - - default default] [instance: 31e9639e-ea7e-41f5-8bd3-2f0344062f99] No waiting events found dispatching network-vif-plugged-75dea49d-ac7b-45ed-8521-44a081d00648 pop_instance_event /usr/lib/python3.12/site-packages/nova/compute/manager.py:344
Oct 09 16:43:43 compute-0 nova_compute[117331]: 2025-10-09 16:43:43.600 2 WARNING nova.compute.manager [req-a42939a3-bcab-4f91-a726-8f649afbbe38 req-329844d6-5387-435b-b01c-647dcf64f7a8 ee15961ad2be4bada6c106ac4f69f422 c076c42271004e7aa86e84416e33f826 - - default default] [instance: 31e9639e-ea7e-41f5-8bd3-2f0344062f99] Received unexpected event network-vif-plugged-75dea49d-ac7b-45ed-8521-44a081d00648 for instance with vm_state active and task_state migrating.
Oct 09 16:43:43 compute-0 nova_compute[117331]: 2025-10-09 16:43:43.601 2 DEBUG nova.compute.manager [req-a42939a3-bcab-4f91-a726-8f649afbbe38 req-329844d6-5387-435b-b01c-647dcf64f7a8 ee15961ad2be4bada6c106ac4f69f422 c076c42271004e7aa86e84416e33f826 - - default default] [instance: 31e9639e-ea7e-41f5-8bd3-2f0344062f99] Received event network-vif-unplugged-75dea49d-ac7b-45ed-8521-44a081d00648 external_instance_event /usr/lib/python3.12/site-packages/nova/compute/manager.py:11812
Oct 09 16:43:43 compute-0 nova_compute[117331]: 2025-10-09 16:43:43.601 2 DEBUG oslo_concurrency.lockutils [req-a42939a3-bcab-4f91-a726-8f649afbbe38 req-329844d6-5387-435b-b01c-647dcf64f7a8 ee15961ad2be4bada6c106ac4f69f422 c076c42271004e7aa86e84416e33f826 - - default default] Acquiring lock "31e9639e-ea7e-41f5-8bd3-2f0344062f99-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:405
Oct 09 16:43:43 compute-0 nova_compute[117331]: 2025-10-09 16:43:43.602 2 DEBUG oslo_concurrency.lockutils [req-a42939a3-bcab-4f91-a726-8f649afbbe38 req-329844d6-5387-435b-b01c-647dcf64f7a8 ee15961ad2be4bada6c106ac4f69f422 c076c42271004e7aa86e84416e33f826 - - default default] Lock "31e9639e-ea7e-41f5-8bd3-2f0344062f99-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:410
Oct 09 16:43:43 compute-0 nova_compute[117331]: 2025-10-09 16:43:43.602 2 DEBUG oslo_concurrency.lockutils [req-a42939a3-bcab-4f91-a726-8f649afbbe38 req-329844d6-5387-435b-b01c-647dcf64f7a8 ee15961ad2be4bada6c106ac4f69f422 c076c42271004e7aa86e84416e33f826 - - default default] Lock "31e9639e-ea7e-41f5-8bd3-2f0344062f99-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:424
Oct 09 16:43:43 compute-0 nova_compute[117331]: 2025-10-09 16:43:43.602 2 DEBUG nova.compute.manager [req-a42939a3-bcab-4f91-a726-8f649afbbe38 req-329844d6-5387-435b-b01c-647dcf64f7a8 ee15961ad2be4bada6c106ac4f69f422 c076c42271004e7aa86e84416e33f826 - - default default] [instance: 31e9639e-ea7e-41f5-8bd3-2f0344062f99] No waiting events found dispatching network-vif-unplugged-75dea49d-ac7b-45ed-8521-44a081d00648 pop_instance_event /usr/lib/python3.12/site-packages/nova/compute/manager.py:344
Oct 09 16:43:43 compute-0 nova_compute[117331]: 2025-10-09 16:43:43.603 2 DEBUG nova.compute.manager [req-a42939a3-bcab-4f91-a726-8f649afbbe38 req-329844d6-5387-435b-b01c-647dcf64f7a8 ee15961ad2be4bada6c106ac4f69f422 c076c42271004e7aa86e84416e33f826 - - default default] [instance: 31e9639e-ea7e-41f5-8bd3-2f0344062f99] Received event network-vif-unplugged-75dea49d-ac7b-45ed-8521-44a081d00648 for instance with task_state migrating. _process_instance_event /usr/lib/python3.12/site-packages/nova/compute/manager.py:11590
Oct 09 16:43:43 compute-0 nova_compute[117331]: 2025-10-09 16:43:43.603 2 DEBUG nova.compute.manager [req-a42939a3-bcab-4f91-a726-8f649afbbe38 req-329844d6-5387-435b-b01c-647dcf64f7a8 ee15961ad2be4bada6c106ac4f69f422 c076c42271004e7aa86e84416e33f826 - - default default] [instance: 31e9639e-ea7e-41f5-8bd3-2f0344062f99] Received event network-vif-plugged-75dea49d-ac7b-45ed-8521-44a081d00648 external_instance_event /usr/lib/python3.12/site-packages/nova/compute/manager.py:11812
Oct 09 16:43:43 compute-0 nova_compute[117331]: 2025-10-09 16:43:43.604 2 DEBUG oslo_concurrency.lockutils [req-a42939a3-bcab-4f91-a726-8f649afbbe38 req-329844d6-5387-435b-b01c-647dcf64f7a8 ee15961ad2be4bada6c106ac4f69f422 c076c42271004e7aa86e84416e33f826 - - default default] Acquiring lock "31e9639e-ea7e-41f5-8bd3-2f0344062f99-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:405
Oct 09 16:43:43 compute-0 nova_compute[117331]: 2025-10-09 16:43:43.604 2 DEBUG oslo_concurrency.lockutils [req-a42939a3-bcab-4f91-a726-8f649afbbe38 req-329844d6-5387-435b-b01c-647dcf64f7a8 ee15961ad2be4bada6c106ac4f69f422 c076c42271004e7aa86e84416e33f826 - - default default] Lock "31e9639e-ea7e-41f5-8bd3-2f0344062f99-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:410
Oct 09 16:43:43 compute-0 nova_compute[117331]: 2025-10-09 16:43:43.604 2 DEBUG oslo_concurrency.lockutils [req-a42939a3-bcab-4f91-a726-8f649afbbe38 req-329844d6-5387-435b-b01c-647dcf64f7a8 ee15961ad2be4bada6c106ac4f69f422 c076c42271004e7aa86e84416e33f826 - - default default] Lock "31e9639e-ea7e-41f5-8bd3-2f0344062f99-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:424
Oct 09 16:43:43 compute-0 nova_compute[117331]: 2025-10-09 16:43:43.605 2 DEBUG nova.compute.manager [req-a42939a3-bcab-4f91-a726-8f649afbbe38 req-329844d6-5387-435b-b01c-647dcf64f7a8 ee15961ad2be4bada6c106ac4f69f422 c076c42271004e7aa86e84416e33f826 - - default default] [instance: 31e9639e-ea7e-41f5-8bd3-2f0344062f99] No waiting events found dispatching network-vif-plugged-75dea49d-ac7b-45ed-8521-44a081d00648 pop_instance_event /usr/lib/python3.12/site-packages/nova/compute/manager.py:344
Oct 09 16:43:43 compute-0 nova_compute[117331]: 2025-10-09 16:43:43.605 2 WARNING nova.compute.manager [req-a42939a3-bcab-4f91-a726-8f649afbbe38 req-329844d6-5387-435b-b01c-647dcf64f7a8 ee15961ad2be4bada6c106ac4f69f422 c076c42271004e7aa86e84416e33f826 - - default default] [instance: 31e9639e-ea7e-41f5-8bd3-2f0344062f99] Received unexpected event network-vif-plugged-75dea49d-ac7b-45ed-8521-44a081d00648 for instance with vm_state active and task_state migrating.
Oct 09 16:43:43 compute-0 nova_compute[117331]: 2025-10-09 16:43:43.606 2 DEBUG nova.compute.manager [req-a42939a3-bcab-4f91-a726-8f649afbbe38 req-329844d6-5387-435b-b01c-647dcf64f7a8 ee15961ad2be4bada6c106ac4f69f422 c076c42271004e7aa86e84416e33f826 - - default default] [instance: 31e9639e-ea7e-41f5-8bd3-2f0344062f99] Received event network-vif-unplugged-75dea49d-ac7b-45ed-8521-44a081d00648 external_instance_event /usr/lib/python3.12/site-packages/nova/compute/manager.py:11812
Oct 09 16:43:43 compute-0 nova_compute[117331]: 2025-10-09 16:43:43.606 2 DEBUG oslo_concurrency.lockutils [req-a42939a3-bcab-4f91-a726-8f649afbbe38 req-329844d6-5387-435b-b01c-647dcf64f7a8 ee15961ad2be4bada6c106ac4f69f422 c076c42271004e7aa86e84416e33f826 - - default default] Acquiring lock "31e9639e-ea7e-41f5-8bd3-2f0344062f99-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:405
Oct 09 16:43:43 compute-0 nova_compute[117331]: 2025-10-09 16:43:43.606 2 DEBUG oslo_concurrency.lockutils [req-a42939a3-bcab-4f91-a726-8f649afbbe38 req-329844d6-5387-435b-b01c-647dcf64f7a8 ee15961ad2be4bada6c106ac4f69f422 c076c42271004e7aa86e84416e33f826 - - default default] Lock "31e9639e-ea7e-41f5-8bd3-2f0344062f99-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:410
Oct 09 16:43:43 compute-0 nova_compute[117331]: 2025-10-09 16:43:43.607 2 DEBUG oslo_concurrency.lockutils [req-a42939a3-bcab-4f91-a726-8f649afbbe38 req-329844d6-5387-435b-b01c-647dcf64f7a8 ee15961ad2be4bada6c106ac4f69f422 c076c42271004e7aa86e84416e33f826 - - default default] Lock "31e9639e-ea7e-41f5-8bd3-2f0344062f99-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:424
Oct 09 16:43:43 compute-0 nova_compute[117331]: 2025-10-09 16:43:43.607 2 DEBUG nova.compute.manager [req-a42939a3-bcab-4f91-a726-8f649afbbe38 req-329844d6-5387-435b-b01c-647dcf64f7a8 ee15961ad2be4bada6c106ac4f69f422 c076c42271004e7aa86e84416e33f826 - - default default] [instance: 31e9639e-ea7e-41f5-8bd3-2f0344062f99] No waiting events found dispatching network-vif-unplugged-75dea49d-ac7b-45ed-8521-44a081d00648 pop_instance_event /usr/lib/python3.12/site-packages/nova/compute/manager.py:344
Oct 09 16:43:43 compute-0 nova_compute[117331]: 2025-10-09 16:43:43.608 2 DEBUG nova.compute.manager [req-a42939a3-bcab-4f91-a726-8f649afbbe38 req-329844d6-5387-435b-b01c-647dcf64f7a8 ee15961ad2be4bada6c106ac4f69f422 c076c42271004e7aa86e84416e33f826 - - default default] [instance: 31e9639e-ea7e-41f5-8bd3-2f0344062f99] Received event network-vif-unplugged-75dea49d-ac7b-45ed-8521-44a081d00648 for instance with task_state migrating. _process_instance_event /usr/lib/python3.12/site-packages/nova/compute/manager.py:11590
Oct 09 16:43:43 compute-0 nova_compute[117331]: 2025-10-09 16:43:43.608 2 DEBUG nova.compute.manager [req-a42939a3-bcab-4f91-a726-8f649afbbe38 req-329844d6-5387-435b-b01c-647dcf64f7a8 ee15961ad2be4bada6c106ac4f69f422 c076c42271004e7aa86e84416e33f826 - - default default] [instance: 31e9639e-ea7e-41f5-8bd3-2f0344062f99] Received event network-vif-plugged-75dea49d-ac7b-45ed-8521-44a081d00648 external_instance_event /usr/lib/python3.12/site-packages/nova/compute/manager.py:11812
Oct 09 16:43:43 compute-0 nova_compute[117331]: 2025-10-09 16:43:43.609 2 DEBUG oslo_concurrency.lockutils [req-a42939a3-bcab-4f91-a726-8f649afbbe38 req-329844d6-5387-435b-b01c-647dcf64f7a8 ee15961ad2be4bada6c106ac4f69f422 c076c42271004e7aa86e84416e33f826 - - default default] Acquiring lock "31e9639e-ea7e-41f5-8bd3-2f0344062f99-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:405
Oct 09 16:43:43 compute-0 nova_compute[117331]: 2025-10-09 16:43:43.609 2 DEBUG oslo_concurrency.lockutils [req-a42939a3-bcab-4f91-a726-8f649afbbe38 req-329844d6-5387-435b-b01c-647dcf64f7a8 ee15961ad2be4bada6c106ac4f69f422 c076c42271004e7aa86e84416e33f826 - - default default] Lock "31e9639e-ea7e-41f5-8bd3-2f0344062f99-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:410
Oct 09 16:43:43 compute-0 nova_compute[117331]: 2025-10-09 16:43:43.610 2 DEBUG oslo_concurrency.lockutils [req-a42939a3-bcab-4f91-a726-8f649afbbe38 req-329844d6-5387-435b-b01c-647dcf64f7a8 ee15961ad2be4bada6c106ac4f69f422 c076c42271004e7aa86e84416e33f826 - - default default] Lock "31e9639e-ea7e-41f5-8bd3-2f0344062f99-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:424
Oct 09 16:43:43 compute-0 nova_compute[117331]: 2025-10-09 16:43:43.610 2 DEBUG nova.compute.manager [req-a42939a3-bcab-4f91-a726-8f649afbbe38 req-329844d6-5387-435b-b01c-647dcf64f7a8 ee15961ad2be4bada6c106ac4f69f422 c076c42271004e7aa86e84416e33f826 - - default default] [instance: 31e9639e-ea7e-41f5-8bd3-2f0344062f99] No waiting events found dispatching network-vif-plugged-75dea49d-ac7b-45ed-8521-44a081d00648 pop_instance_event /usr/lib/python3.12/site-packages/nova/compute/manager.py:344
Oct 09 16:43:43 compute-0 nova_compute[117331]: 2025-10-09 16:43:43.610 2 WARNING nova.compute.manager [req-a42939a3-bcab-4f91-a726-8f649afbbe38 req-329844d6-5387-435b-b01c-647dcf64f7a8 ee15961ad2be4bada6c106ac4f69f422 c076c42271004e7aa86e84416e33f826 - - default default] [instance: 31e9639e-ea7e-41f5-8bd3-2f0344062f99] Received unexpected event network-vif-plugged-75dea49d-ac7b-45ed-8521-44a081d00648 for instance with vm_state active and task_state migrating.
Oct 09 16:43:43 compute-0 nova_compute[117331]: 2025-10-09 16:43:43.611 2 DEBUG nova.compute.manager [req-a42939a3-bcab-4f91-a726-8f649afbbe38 req-329844d6-5387-435b-b01c-647dcf64f7a8 ee15961ad2be4bada6c106ac4f69f422 c076c42271004e7aa86e84416e33f826 - - default default] [instance: 31e9639e-ea7e-41f5-8bd3-2f0344062f99] Received event network-vif-plugged-75dea49d-ac7b-45ed-8521-44a081d00648 external_instance_event /usr/lib/python3.12/site-packages/nova/compute/manager.py:11812
Oct 09 16:43:43 compute-0 nova_compute[117331]: 2025-10-09 16:43:43.611 2 DEBUG oslo_concurrency.lockutils [req-a42939a3-bcab-4f91-a726-8f649afbbe38 req-329844d6-5387-435b-b01c-647dcf64f7a8 ee15961ad2be4bada6c106ac4f69f422 c076c42271004e7aa86e84416e33f826 - - default default] Acquiring lock "31e9639e-ea7e-41f5-8bd3-2f0344062f99-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:405
Oct 09 16:43:43 compute-0 nova_compute[117331]: 2025-10-09 16:43:43.612 2 DEBUG oslo_concurrency.lockutils [req-a42939a3-bcab-4f91-a726-8f649afbbe38 req-329844d6-5387-435b-b01c-647dcf64f7a8 ee15961ad2be4bada6c106ac4f69f422 c076c42271004e7aa86e84416e33f826 - - default default] Lock "31e9639e-ea7e-41f5-8bd3-2f0344062f99-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:410
Oct 09 16:43:43 compute-0 nova_compute[117331]: 2025-10-09 16:43:43.612 2 DEBUG oslo_concurrency.lockutils [req-a42939a3-bcab-4f91-a726-8f649afbbe38 req-329844d6-5387-435b-b01c-647dcf64f7a8 ee15961ad2be4bada6c106ac4f69f422 c076c42271004e7aa86e84416e33f826 - - default default] Lock "31e9639e-ea7e-41f5-8bd3-2f0344062f99-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:424
Oct 09 16:43:43 compute-0 nova_compute[117331]: 2025-10-09 16:43:43.613 2 DEBUG nova.compute.manager [req-a42939a3-bcab-4f91-a726-8f649afbbe38 req-329844d6-5387-435b-b01c-647dcf64f7a8 ee15961ad2be4bada6c106ac4f69f422 c076c42271004e7aa86e84416e33f826 - - default default] [instance: 31e9639e-ea7e-41f5-8bd3-2f0344062f99] No waiting events found dispatching network-vif-plugged-75dea49d-ac7b-45ed-8521-44a081d00648 pop_instance_event /usr/lib/python3.12/site-packages/nova/compute/manager.py:344
Oct 09 16:43:43 compute-0 nova_compute[117331]: 2025-10-09 16:43:43.613 2 WARNING nova.compute.manager [req-a42939a3-bcab-4f91-a726-8f649afbbe38 req-329844d6-5387-435b-b01c-647dcf64f7a8 ee15961ad2be4bada6c106ac4f69f422 c076c42271004e7aa86e84416e33f826 - - default default] [instance: 31e9639e-ea7e-41f5-8bd3-2f0344062f99] Received unexpected event network-vif-plugged-75dea49d-ac7b-45ed-8521-44a081d00648 for instance with vm_state active and task_state migrating.
Oct 09 16:43:43 compute-0 nova_compute[117331]: 2025-10-09 16:43:43.820 2 DEBUG oslo_concurrency.lockutils [None req-92a13bed-c081-4c7b-87f3-bafebefb79f2 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:405
Oct 09 16:43:43 compute-0 nova_compute[117331]: 2025-10-09 16:43:43.821 2 DEBUG oslo_concurrency.lockutils [None req-92a13bed-c081-4c7b-87f3-bafebefb79f2 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:410
Oct 09 16:43:43 compute-0 nova_compute[117331]: 2025-10-09 16:43:43.821 2 DEBUG oslo_concurrency.lockutils [None req-92a13bed-c081-4c7b-87f3-bafebefb79f2 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:424
Oct 09 16:43:43 compute-0 nova_compute[117331]: 2025-10-09 16:43:43.821 2 DEBUG nova.compute.resource_tracker [None req-92a13bed-c081-4c7b-87f3-bafebefb79f2 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.12/site-packages/nova/compute/resource_tracker.py:937
Oct 09 16:43:43 compute-0 podman[155224]: 2025-10-09 16:43:43.827528029 +0000 UTC m=+0.058280073 container health_status 86aa93e887899e54d383b0fc17fb3c5637680d1d8e146a130a557c837f0260fe (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible)
Oct 09 16:43:43 compute-0 nova_compute[117331]: 2025-10-09 16:43:43.980 2 WARNING nova.virt.libvirt.driver [None req-92a13bed-c081-4c7b-87f3-bafebefb79f2 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Oct 09 16:43:43 compute-0 nova_compute[117331]: 2025-10-09 16:43:43.981 2 DEBUG oslo_concurrency.processutils [None req-92a13bed-c081-4c7b-87f3-bafebefb79f2 - - - - - -] Running cmd (subprocess): env LANG=C uptime execute /usr/lib/python3.12/site-packages/oslo_concurrency/processutils.py:349
Oct 09 16:43:43 compute-0 nova_compute[117331]: 2025-10-09 16:43:43.998 2 DEBUG oslo_concurrency.processutils [None req-92a13bed-c081-4c7b-87f3-bafebefb79f2 - - - - - -] CMD "env LANG=C uptime" returned: 0 in 0.017s execute /usr/lib/python3.12/site-packages/oslo_concurrency/processutils.py:372
Oct 09 16:43:43 compute-0 nova_compute[117331]: 2025-10-09 16:43:43.999 2 DEBUG nova.compute.resource_tracker [None req-92a13bed-c081-4c7b-87f3-bafebefb79f2 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=6129MB free_disk=73.24531555175781GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.12/site-packages/nova/compute/resource_tracker.py:1136
Oct 09 16:43:43 compute-0 nova_compute[117331]: 2025-10-09 16:43:43.999 2 DEBUG oslo_concurrency.lockutils [None req-92a13bed-c081-4c7b-87f3-bafebefb79f2 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:405
Oct 09 16:43:43 compute-0 nova_compute[117331]: 2025-10-09 16:43:43.999 2 DEBUG oslo_concurrency.lockutils [None req-92a13bed-c081-4c7b-87f3-bafebefb79f2 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:410
Oct 09 16:43:45 compute-0 nova_compute[117331]: 2025-10-09 16:43:45.020 2 INFO nova.compute.resource_tracker [None req-92a13bed-c081-4c7b-87f3-bafebefb79f2 - - - - - -] [instance: 31e9639e-ea7e-41f5-8bd3-2f0344062f99] Updating resource usage from migration 7e258fbc-dd52-4649-b774-c9432413d429
Oct 09 16:43:45 compute-0 nova_compute[117331]: 2025-10-09 16:43:45.104 2 DEBUG nova.compute.resource_tracker [None req-92a13bed-c081-4c7b-87f3-bafebefb79f2 - - - - - -] Migration 7e258fbc-dd52-4649-b774-c9432413d429 is active on this compute host and has allocations in placement: {'resources': {'VCPU': 1, 'MEMORY_MB': 128, 'DISK_GB': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.12/site-packages/nova/compute/resource_tracker.py:1745
Oct 09 16:43:45 compute-0 nova_compute[117331]: 2025-10-09 16:43:45.105 2 DEBUG nova.compute.resource_tracker [None req-92a13bed-c081-4c7b-87f3-bafebefb79f2 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 1 _report_final_resource_view /usr/lib/python3.12/site-packages/nova/compute/resource_tracker.py:1159
Oct 09 16:43:45 compute-0 nova_compute[117331]: 2025-10-09 16:43:45.105 2 DEBUG nova.compute.resource_tracker [None req-92a13bed-c081-4c7b-87f3-bafebefb79f2 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=640MB phys_disk=79GB used_disk=1GB total_vcpus=8 used_vcpus=1 pci_stats=[] stats={'failed_builds': '0', 'uptime': ' 16:43:43 up 52 min,  0 user,  load average: 0.32, 0.41, 0.44\n', 'num_instances': '1', 'num_vm_active': '1', 'num_task_migrating': '1', 'num_os_type_None': '1', 'num_proj_fca25ca2f317463c909e30c8b2c188d1': '1', 'io_workload': '0'} _report_final_resource_view /usr/lib/python3.12/site-packages/nova/compute/resource_tracker.py:1168
Oct 09 16:43:45 compute-0 nova_compute[117331]: 2025-10-09 16:43:45.144 2 DEBUG nova.compute.provider_tree [None req-92a13bed-c081-4c7b-87f3-bafebefb79f2 - - - - - -] Inventory has not changed in ProviderTree for provider: 593051b8-2000-437f-a915-2616fc8b1671 update_inventory /usr/lib/python3.12/site-packages/nova/compute/provider_tree.py:180
Oct 09 16:43:45 compute-0 nova_compute[117331]: 2025-10-09 16:43:45.654 2 DEBUG nova.scheduler.client.report [None req-92a13bed-c081-4c7b-87f3-bafebefb79f2 - - - - - -] Inventory has not changed for provider 593051b8-2000-437f-a915-2616fc8b1671 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 79, 'reserved': 1, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.12/site-packages/nova/scheduler/client/report.py:958
Oct 09 16:43:46 compute-0 nova_compute[117331]: 2025-10-09 16:43:46.167 2 DEBUG nova.compute.resource_tracker [None req-92a13bed-c081-4c7b-87f3-bafebefb79f2 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.12/site-packages/nova/compute/resource_tracker.py:1097
Oct 09 16:43:46 compute-0 nova_compute[117331]: 2025-10-09 16:43:46.168 2 DEBUG oslo_concurrency.lockutils [None req-92a13bed-c081-4c7b-87f3-bafebefb79f2 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 2.168s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:424
Oct 09 16:43:47 compute-0 nova_compute[117331]: 2025-10-09 16:43:47.034 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 09 16:43:47 compute-0 nova_compute[117331]: 2025-10-09 16:43:47.510 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 09 16:43:49 compute-0 podman[155251]: 2025-10-09 16:43:49.864993991 +0000 UTC m=+0.080914912 container health_status 0e966c7a283689daf23c5db5017f23f685ee836319240188a9f70ddd246563c5 (image=38.102.83.66:5001/podified-master-centos10/openstack-neutron-metadata-agent-ovn:watcher_latest, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.4, org.label-schema.build-date=20251007, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=watcher_latest, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': '38.102.83.66:5001/podified-master-centos10/openstack-neutron-metadata-agent-ovn:watcher_latest', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, org.label-schema.name=CentOS Stream 10 Base Image, tcib_managed=true, managed_by=edpm_ansible, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2)
Oct 09 16:43:49 compute-0 podman[155252]: 2025-10-09 16:43:49.874759892 +0000 UTC m=+0.092076857 container health_status 8227728d23f374cb9528be6ddf01dff38d8c792912a17b0d137932a18390b820 (image=38.102.83.66:5001/podified-master-centos10/openstack-iscsid:watcher_latest, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=watcher_latest, tcib_managed=true, config_id=iscsid, container_name=iscsid, io.buildah.version=1.41.4, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251007, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': '38.102.83.66:5001/podified-master-centos10/openstack-iscsid:watcher_latest', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, managed_by=edpm_ansible)
Oct 09 16:43:50 compute-0 nova_compute[117331]: 2025-10-09 16:43:50.168 2 DEBUG oslo_service.periodic_task [None req-92a13bed-c081-4c7b-87f3-bafebefb79f2 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.12/site-packages/oslo_service/periodic_task.py:210
Oct 09 16:43:50 compute-0 nova_compute[117331]: 2025-10-09 16:43:50.169 2 DEBUG oslo_service.periodic_task [None req-92a13bed-c081-4c7b-87f3-bafebefb79f2 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.12/site-packages/oslo_service/periodic_task.py:210
Oct 09 16:43:52 compute-0 nova_compute[117331]: 2025-10-09 16:43:52.036 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 09 16:43:52 compute-0 nova_compute[117331]: 2025-10-09 16:43:52.513 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 09 16:43:52 compute-0 nova_compute[117331]: 2025-10-09 16:43:52.576 2 DEBUG oslo_concurrency.lockutils [None req-ceed530a-e290-4a44-afa3-258d274da1ed 005258eefa5345409efe13f6a1de22ab c076c42271004e7aa86e84416e33f826 - - default default] Acquiring lock "31e9639e-ea7e-41f5-8bd3-2f0344062f99-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:405
Oct 09 16:43:52 compute-0 nova_compute[117331]: 2025-10-09 16:43:52.576 2 DEBUG oslo_concurrency.lockutils [None req-ceed530a-e290-4a44-afa3-258d274da1ed 005258eefa5345409efe13f6a1de22ab c076c42271004e7aa86e84416e33f826 - - default default] Lock "31e9639e-ea7e-41f5-8bd3-2f0344062f99-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.001s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:410
Oct 09 16:43:52 compute-0 nova_compute[117331]: 2025-10-09 16:43:52.577 2 DEBUG oslo_concurrency.lockutils [None req-ceed530a-e290-4a44-afa3-258d274da1ed 005258eefa5345409efe13f6a1de22ab c076c42271004e7aa86e84416e33f826 - - default default] Lock "31e9639e-ea7e-41f5-8bd3-2f0344062f99-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:424
Oct 09 16:43:53 compute-0 nova_compute[117331]: 2025-10-09 16:43:53.088 2 DEBUG oslo_concurrency.lockutils [None req-ceed530a-e290-4a44-afa3-258d274da1ed 005258eefa5345409efe13f6a1de22ab c076c42271004e7aa86e84416e33f826 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:405
Oct 09 16:43:53 compute-0 nova_compute[117331]: 2025-10-09 16:43:53.089 2 DEBUG oslo_concurrency.lockutils [None req-ceed530a-e290-4a44-afa3-258d274da1ed 005258eefa5345409efe13f6a1de22ab c076c42271004e7aa86e84416e33f826 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:410
Oct 09 16:43:53 compute-0 nova_compute[117331]: 2025-10-09 16:43:53.089 2 DEBUG oslo_concurrency.lockutils [None req-ceed530a-e290-4a44-afa3-258d274da1ed 005258eefa5345409efe13f6a1de22ab c076c42271004e7aa86e84416e33f826 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:424
Oct 09 16:43:53 compute-0 nova_compute[117331]: 2025-10-09 16:43:53.090 2 DEBUG nova.compute.resource_tracker [None req-ceed530a-e290-4a44-afa3-258d274da1ed 005258eefa5345409efe13f6a1de22ab c076c42271004e7aa86e84416e33f826 - - default default] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.12/site-packages/nova/compute/resource_tracker.py:937
Oct 09 16:43:53 compute-0 nova_compute[117331]: 2025-10-09 16:43:53.247 2 WARNING nova.virt.libvirt.driver [None req-ceed530a-e290-4a44-afa3-258d274da1ed 005258eefa5345409efe13f6a1de22ab c076c42271004e7aa86e84416e33f826 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Oct 09 16:43:53 compute-0 nova_compute[117331]: 2025-10-09 16:43:53.248 2 DEBUG oslo_concurrency.processutils [None req-ceed530a-e290-4a44-afa3-258d274da1ed 005258eefa5345409efe13f6a1de22ab c076c42271004e7aa86e84416e33f826 - - default default] Running cmd (subprocess): env LANG=C uptime execute /usr/lib/python3.12/site-packages/oslo_concurrency/processutils.py:349
Oct 09 16:43:53 compute-0 nova_compute[117331]: 2025-10-09 16:43:53.270 2 DEBUG oslo_concurrency.processutils [None req-ceed530a-e290-4a44-afa3-258d274da1ed 005258eefa5345409efe13f6a1de22ab c076c42271004e7aa86e84416e33f826 - - default default] CMD "env LANG=C uptime" returned: 0 in 0.022s execute /usr/lib/python3.12/site-packages/oslo_concurrency/processutils.py:372
Oct 09 16:43:53 compute-0 nova_compute[117331]: 2025-10-09 16:43:53.271 2 DEBUG nova.compute.resource_tracker [None req-ceed530a-e290-4a44-afa3-258d274da1ed 005258eefa5345409efe13f6a1de22ab c076c42271004e7aa86e84416e33f826 - - default default] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=6146MB free_disk=73.24538040161133GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.12/site-packages/nova/compute/resource_tracker.py:1136
Oct 09 16:43:53 compute-0 nova_compute[117331]: 2025-10-09 16:43:53.272 2 DEBUG oslo_concurrency.lockutils [None req-ceed530a-e290-4a44-afa3-258d274da1ed 005258eefa5345409efe13f6a1de22ab c076c42271004e7aa86e84416e33f826 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:405
Oct 09 16:43:53 compute-0 nova_compute[117331]: 2025-10-09 16:43:53.272 2 DEBUG oslo_concurrency.lockutils [None req-ceed530a-e290-4a44-afa3-258d274da1ed 005258eefa5345409efe13f6a1de22ab c076c42271004e7aa86e84416e33f826 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:410
Oct 09 16:43:54 compute-0 nova_compute[117331]: 2025-10-09 16:43:54.295 2 DEBUG nova.compute.resource_tracker [None req-ceed530a-e290-4a44-afa3-258d274da1ed 005258eefa5345409efe13f6a1de22ab c076c42271004e7aa86e84416e33f826 - - default default] Migration for instance 31e9639e-ea7e-41f5-8bd3-2f0344062f99 refers to another host's instance! _pair_instances_to_migrations /usr/lib/python3.12/site-packages/nova/compute/resource_tracker.py:979
Oct 09 16:43:54 compute-0 nova_compute[117331]: 2025-10-09 16:43:54.804 2 DEBUG nova.compute.resource_tracker [None req-ceed530a-e290-4a44-afa3-258d274da1ed 005258eefa5345409efe13f6a1de22ab c076c42271004e7aa86e84416e33f826 - - default default] [instance: 31e9639e-ea7e-41f5-8bd3-2f0344062f99] Skipping migration as instance is neither resizing nor live-migrating. _update_usage_from_migrations /usr/lib/python3.12/site-packages/nova/compute/resource_tracker.py:1596
Oct 09 16:43:54 compute-0 nova_compute[117331]: 2025-10-09 16:43:54.830 2 DEBUG nova.compute.resource_tracker [None req-ceed530a-e290-4a44-afa3-258d274da1ed 005258eefa5345409efe13f6a1de22ab c076c42271004e7aa86e84416e33f826 - - default default] Migration 7e258fbc-dd52-4649-b774-c9432413d429 is active on this compute host and has allocations in placement: {'resources': {'VCPU': 1, 'MEMORY_MB': 128, 'DISK_GB': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.12/site-packages/nova/compute/resource_tracker.py:1745
Oct 09 16:43:54 compute-0 nova_compute[117331]: 2025-10-09 16:43:54.831 2 DEBUG nova.compute.resource_tracker [None req-ceed530a-e290-4a44-afa3-258d274da1ed 005258eefa5345409efe13f6a1de22ab c076c42271004e7aa86e84416e33f826 - - default default] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.12/site-packages/nova/compute/resource_tracker.py:1159
Oct 09 16:43:54 compute-0 nova_compute[117331]: 2025-10-09 16:43:54.831 2 DEBUG nova.compute.resource_tracker [None req-ceed530a-e290-4a44-afa3-258d274da1ed 005258eefa5345409efe13f6a1de22ab c076c42271004e7aa86e84416e33f826 - - default default] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=79GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] stats={'failed_builds': '0', 'uptime': ' 16:43:53 up 52 min,  0 user,  load average: 0.27, 0.40, 0.43\n'} _report_final_resource_view /usr/lib/python3.12/site-packages/nova/compute/resource_tracker.py:1168
Oct 09 16:43:54 compute-0 nova_compute[117331]: 2025-10-09 16:43:54.882 2 DEBUG nova.compute.provider_tree [None req-ceed530a-e290-4a44-afa3-258d274da1ed 005258eefa5345409efe13f6a1de22ab c076c42271004e7aa86e84416e33f826 - - default default] Inventory has not changed in ProviderTree for provider: 593051b8-2000-437f-a915-2616fc8b1671 update_inventory /usr/lib/python3.12/site-packages/nova/compute/provider_tree.py:180
Oct 09 16:43:55 compute-0 nova_compute[117331]: 2025-10-09 16:43:55.389 2 DEBUG nova.scheduler.client.report [None req-ceed530a-e290-4a44-afa3-258d274da1ed 005258eefa5345409efe13f6a1de22ab c076c42271004e7aa86e84416e33f826 - - default default] Inventory has not changed for provider 593051b8-2000-437f-a915-2616fc8b1671 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 79, 'reserved': 1, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.12/site-packages/nova/scheduler/client/report.py:958
Oct 09 16:43:55 compute-0 nova_compute[117331]: 2025-10-09 16:43:55.899 2 DEBUG nova.compute.resource_tracker [None req-ceed530a-e290-4a44-afa3-258d274da1ed 005258eefa5345409efe13f6a1de22ab c076c42271004e7aa86e84416e33f826 - - default default] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.12/site-packages/nova/compute/resource_tracker.py:1097
Oct 09 16:43:55 compute-0 nova_compute[117331]: 2025-10-09 16:43:55.900 2 DEBUG oslo_concurrency.lockutils [None req-ceed530a-e290-4a44-afa3-258d274da1ed 005258eefa5345409efe13f6a1de22ab c076c42271004e7aa86e84416e33f826 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 2.627s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:424
Oct 09 16:43:55 compute-0 nova_compute[117331]: 2025-10-09 16:43:55.921 2 INFO nova.compute.manager [None req-ceed530a-e290-4a44-afa3-258d274da1ed 005258eefa5345409efe13f6a1de22ab c076c42271004e7aa86e84416e33f826 - - default default] [instance: 31e9639e-ea7e-41f5-8bd3-2f0344062f99] Migrating instance to compute-1.ctlplane.example.com finished successfully.
Oct 09 16:43:57 compute-0 nova_compute[117331]: 2025-10-09 16:43:57.004 2 INFO nova.scheduler.client.report [None req-ceed530a-e290-4a44-afa3-258d274da1ed 005258eefa5345409efe13f6a1de22ab c076c42271004e7aa86e84416e33f826 - - default default] Deleted allocation for migration 7e258fbc-dd52-4649-b774-c9432413d429
Oct 09 16:43:57 compute-0 nova_compute[117331]: 2025-10-09 16:43:57.005 2 DEBUG nova.virt.libvirt.driver [None req-ceed530a-e290-4a44-afa3-258d274da1ed 005258eefa5345409efe13f6a1de22ab c076c42271004e7aa86e84416e33f826 - - default default] [instance: 31e9639e-ea7e-41f5-8bd3-2f0344062f99] Live migration monitoring is all done _live_migration /usr/lib/python3.12/site-packages/nova/virt/libvirt/driver.py:11566
Oct 09 16:43:57 compute-0 nova_compute[117331]: 2025-10-09 16:43:57.039 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 09 16:43:57 compute-0 nova_compute[117331]: 2025-10-09 16:43:57.516 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 09 16:43:58 compute-0 nova_compute[117331]: 2025-10-09 16:43:58.657 2 DEBUG oslo_service.periodic_task [None req-92a13bed-c081-4c7b-87f3-bafebefb79f2 - - - - - -] Running periodic task ComputeManager._sync_power_states run_periodic_tasks /usr/lib/python3.12/site-packages/oslo_service/periodic_task.py:210
Oct 09 16:43:59 compute-0 podman[127775]: time="2025-10-09T16:43:59Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Oct 09 16:43:59 compute-0 podman[127775]: @ - - [09/Oct/2025:16:43:59 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 19526 "" "Go-http-client/1.1"
Oct 09 16:43:59 compute-0 podman[127775]: @ - - [09/Oct/2025:16:43:59 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 3037 "" "Go-http-client/1.1"
Oct 09 16:43:59 compute-0 podman[155291]: 2025-10-09 16:43:59.863929737 +0000 UTC m=+0.079287241 container health_status 64fa251112d01abb34ec1bb8e22019815ddcaaaa391ce5ec91517147ac5a9994 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, release=1755695350, build-date=2025-08-20T13:12:41, config_id=edpm, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., managed_by=edpm_ansible, distribution-scope=public, url=https://catalog.redhat.com/en/search?searchType=containers, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., vendor=Red Hat, Inc., description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.expose-services=, maintainer=Red Hat, Inc., container_name=openstack_network_exporter, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.tags=minimal rhel9, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, io.buildah.version=1.33.7, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, architecture=x86_64, name=ubi9-minimal, vcs-type=git, com.redhat.component=ubi9-minimal-container, version=9.6)
Oct 09 16:43:59 compute-0 podman[155292]: 2025-10-09 16:43:59.903223175 +0000 UTC m=+0.113341023 container health_status d615e4f24ff62fbb14be4a63e6da568ec5b22309773c1c66103b6b4de64a8a2f (image=38.102.83.66:5001/podified-master-centos10/openstack-ovn-controller:watcher_latest, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_id=ovn_controller, tcib_build_tag=watcher_latest, container_name=ovn_controller, managed_by=edpm_ansible, org.label-schema.build-date=20251007, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': '38.102.83.66:5001/podified-master-centos10/openstack-ovn-controller:watcher_latest', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 10 Base Image, io.buildah.version=1.41.4, org.label-schema.schema-version=1.0)
Oct 09 16:44:01 compute-0 openstack_network_exporter[129925]: ERROR   16:44:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Oct 09 16:44:01 compute-0 openstack_network_exporter[129925]: ERROR   16:44:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Oct 09 16:44:01 compute-0 openstack_network_exporter[129925]: ERROR   16:44:01 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Oct 09 16:44:01 compute-0 openstack_network_exporter[129925]: ERROR   16:44:01 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Oct 09 16:44:01 compute-0 openstack_network_exporter[129925]: 
Oct 09 16:44:01 compute-0 openstack_network_exporter[129925]: ERROR   16:44:01 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Oct 09 16:44:01 compute-0 openstack_network_exporter[129925]: 
Oct 09 16:44:02 compute-0 nova_compute[117331]: 2025-10-09 16:44:02.041 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 09 16:44:02 compute-0 nova_compute[117331]: 2025-10-09 16:44:02.518 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 09 16:44:03 compute-0 ovn_metadata_agent[28608]: 2025-10-09 16:44:03.682 28613 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=34, options={'arp_ns_explicit_output': 'true', 'fdb_removal_limit': '0', 'ignore_lsp_down': 'false', 'mac_binding_removal_limit': '0', 'mac_prefix': '4a:65:5f', 'max_tunid': '16711680', 'northd_internal_version': '24.09.4-20.37.0-77.8', 'svc_monitor_mac': '32:24:1e:e3:5d:e8'}, ipsec=False) old=SB_Global(nb_cfg=33) matches /usr/lib/python3.12/site-packages/ovsdbapp/backend/ovs_idl/event.py:55
Oct 09 16:44:03 compute-0 nova_compute[117331]: 2025-10-09 16:44:03.683 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 09 16:44:03 compute-0 ovn_metadata_agent[28608]: 2025-10-09 16:44:03.684 28613 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 4 seconds run /usr/lib/python3.12/site-packages/neutron/agent/ovn/metadata/agent.py:367
Oct 09 16:44:07 compute-0 nova_compute[117331]: 2025-10-09 16:44:07.043 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 09 16:44:07 compute-0 nova_compute[117331]: 2025-10-09 16:44:07.520 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 09 16:44:07 compute-0 ovn_metadata_agent[28608]: 2025-10-09 16:44:07.685 28613 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=9954897f-aa83-45dd-8e84-289816676c2a, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '34'}),), if_exists=True) do_commit /usr/lib/python3.12/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 09 16:44:10 compute-0 podman[155340]: 2025-10-09 16:44:10.89384782 +0000 UTC m=+0.121615932 container health_status 270830b5167cafbab735d8118980f26987e673cd0fa464dd2202394830bcca66 (image=38.102.83.66:5001/podified-master-centos10/openstack-multipathd:watcher_latest, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251007, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 10 Base Image, io.buildah.version=1.41.4, config_id=multipathd, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=watcher_latest, container_name=multipathd, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': '38.102.83.66:5001/podified-master-centos10/openstack-multipathd:watcher_latest', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']})
Oct 09 16:44:12 compute-0 nova_compute[117331]: 2025-10-09 16:44:12.095 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 09 16:44:12 compute-0 nova_compute[117331]: 2025-10-09 16:44:12.522 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 09 16:44:14 compute-0 sshd-session[155338]: Invalid user admin from 124.60.67.43 port 50350
Oct 09 16:44:14 compute-0 podman[155360]: 2025-10-09 16:44:14.856648771 +0000 UTC m=+0.087774213 container health_status 86aa93e887899e54d383b0fc17fb3c5637680d1d8e146a130a557c837f0260fe (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm)
Oct 09 16:44:15 compute-0 sshd-session[155338]: pam_unix(sshd:auth): check pass; user unknown
Oct 09 16:44:15 compute-0 sshd-session[155338]: pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=124.60.67.43
Oct 09 16:44:17 compute-0 nova_compute[117331]: 2025-10-09 16:44:17.098 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 09 16:44:17 compute-0 nova_compute[117331]: 2025-10-09 16:44:17.573 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 09 16:44:17 compute-0 sshd-session[155338]: Failed password for invalid user admin from 124.60.67.43 port 50350 ssh2
Oct 09 16:44:20 compute-0 podman[155386]: 2025-10-09 16:44:20.84646837 +0000 UTC m=+0.067554203 container health_status 8227728d23f374cb9528be6ddf01dff38d8c792912a17b0d137932a18390b820 (image=38.102.83.66:5001/podified-master-centos10/openstack-iscsid:watcher_latest, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, config_id=iscsid, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=watcher_latest, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': '38.102.83.66:5001/podified-master-centos10/openstack-iscsid:watcher_latest', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, container_name=iscsid, io.buildah.version=1.41.4, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251007, org.label-schema.schema-version=1.0)
Oct 09 16:44:20 compute-0 podman[155385]: 2025-10-09 16:44:20.846331356 +0000 UTC m=+0.071841951 container health_status 0e966c7a283689daf23c5db5017f23f685ee836319240188a9f70ddd246563c5 (image=38.102.83.66:5001/podified-master-centos10/openstack-neutron-metadata-agent-ovn:watcher_latest, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.4, tcib_managed=true, container_name=ovn_metadata_agent, org.label-schema.build-date=20251007, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_build_tag=watcher_latest, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': '38.102.83.66:5001/podified-master-centos10/openstack-neutron-metadata-agent-ovn:watcher_latest', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 10 Base Image)
Oct 09 16:44:21 compute-0 sshd-session[155338]: Connection closed by invalid user admin 124.60.67.43 port 50350 [preauth]
Oct 09 16:44:22 compute-0 nova_compute[117331]: 2025-10-09 16:44:22.101 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 09 16:44:22 compute-0 nova_compute[117331]: 2025-10-09 16:44:22.574 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 09 16:44:26 compute-0 nova_compute[117331]: 2025-10-09 16:44:26.744 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 09 16:44:27 compute-0 nova_compute[117331]: 2025-10-09 16:44:27.119 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 09 16:44:27 compute-0 nova_compute[117331]: 2025-10-09 16:44:27.575 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 09 16:44:29 compute-0 podman[127775]: time="2025-10-09T16:44:29Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Oct 09 16:44:29 compute-0 podman[127775]: @ - - [09/Oct/2025:16:44:29 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 19526 "" "Go-http-client/1.1"
Oct 09 16:44:29 compute-0 podman[127775]: @ - - [09/Oct/2025:16:44:29 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 3030 "" "Go-http-client/1.1"
Oct 09 16:44:30 compute-0 podman[155426]: 2025-10-09 16:44:30.843235409 +0000 UTC m=+0.074488336 container health_status 64fa251112d01abb34ec1bb8e22019815ddcaaaa391ce5ec91517147ac5a9994 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., release=1755695350, build-date=2025-08-20T13:12:41, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., name=ubi9-minimal, url=https://catalog.redhat.com/en/search?searchType=containers, version=9.6, io.openshift.expose-services=, io.buildah.version=1.33.7, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, vcs-type=git, distribution-scope=public, managed_by=edpm_ansible, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.tags=minimal rhel9, vendor=Red Hat, Inc., maintainer=Red Hat, Inc., config_id=edpm, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, com.redhat.component=ubi9-minimal-container, container_name=openstack_network_exporter, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI)
Oct 09 16:44:30 compute-0 podman[155427]: 2025-10-09 16:44:30.900290704 +0000 UTC m=+0.117509980 container health_status d615e4f24ff62fbb14be4a63e6da568ec5b22309773c1c66103b6b4de64a8a2f (image=38.102.83.66:5001/podified-master-centos10/openstack-ovn-controller:watcher_latest, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, tcib_build_tag=watcher_latest, config_id=ovn_controller, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251007, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': '38.102.83.66:5001/podified-master-centos10/openstack-ovn-controller:watcher_latest', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.4)
Oct 09 16:44:31 compute-0 openstack_network_exporter[129925]: ERROR   16:44:31 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Oct 09 16:44:31 compute-0 openstack_network_exporter[129925]: ERROR   16:44:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Oct 09 16:44:31 compute-0 openstack_network_exporter[129925]: ERROR   16:44:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Oct 09 16:44:31 compute-0 openstack_network_exporter[129925]: ERROR   16:44:31 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Oct 09 16:44:31 compute-0 openstack_network_exporter[129925]: 
Oct 09 16:44:31 compute-0 openstack_network_exporter[129925]: ERROR   16:44:31 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Oct 09 16:44:31 compute-0 openstack_network_exporter[129925]: 
Oct 09 16:44:32 compute-0 nova_compute[117331]: 2025-10-09 16:44:32.122 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 09 16:44:32 compute-0 nova_compute[117331]: 2025-10-09 16:44:32.578 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 09 16:44:35 compute-0 ovn_metadata_agent[28608]: 2025-10-09 16:44:35.350 28613 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:405
Oct 09 16:44:35 compute-0 ovn_metadata_agent[28608]: 2025-10-09 16:44:35.350 28613 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.000s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:410
Oct 09 16:44:35 compute-0 ovn_metadata_agent[28608]: 2025-10-09 16:44:35.350 28613 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:424
Oct 09 16:44:35 compute-0 ovn_metadata_agent[28608]: 2025-10-09 16:44:35.544 28613 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:81:0b:60 10.100.0.2'], port_security=[], type=localport, nat_addresses=[], virtual_parent=[], up=[False], options={}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.2/28', 'neutron:device_id': 'ovnmeta-052ac062-eaae-4bf0-a40e-d100eb37efd8', 'neutron:device_owner': 'network:distributed', 'neutron:mtu': '', 'neutron:network_name': 'neutron-052ac062-eaae-4bf0-a40e-d100eb37efd8', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '35a62963f2914ca4a7e5f2a6b9a36e93', 'neutron:revision_number': '2', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=d539eb18-5c5b-45b9-86bd-feaec6a9254a, chassis=[], tunnel_key=1, gateway_chassis=[], requested_chassis=[], logical_port=6594e9c2-70fd-4a14-afe1-fd5006870857) old=Port_Binding(mac=['fa:16:3e:81:0b:60'], external_ids={'neutron:cidrs': '', 'neutron:device_id': 'ovnmeta-052ac062-eaae-4bf0-a40e-d100eb37efd8', 'neutron:device_owner': 'network:distributed', 'neutron:mtu': '', 'neutron:network_name': 'neutron-052ac062-eaae-4bf0-a40e-d100eb37efd8', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '35a62963f2914ca4a7e5f2a6b9a36e93', 'neutron:revision_number': '1', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}) matches /usr/lib/python3.12/site-packages/ovsdbapp/backend/ovs_idl/event.py:55
Oct 09 16:44:35 compute-0 ovn_metadata_agent[28608]: 2025-10-09 16:44:35.546 28613 INFO neutron.agent.ovn.metadata.agent [-] Metadata Port 6594e9c2-70fd-4a14-afe1-fd5006870857 in datapath 052ac062-eaae-4bf0-a40e-d100eb37efd8 updated
Oct 09 16:44:35 compute-0 ovn_metadata_agent[28608]: 2025-10-09 16:44:35.547 28613 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 052ac062-eaae-4bf0-a40e-d100eb37efd8, tearing the namespace down if needed _get_provision_params /usr/lib/python3.12/site-packages/neutron/agent/ovn/metadata/agent.py:756
Oct 09 16:44:35 compute-0 ovn_metadata_agent[28608]: 2025-10-09 16:44:35.549 139687 DEBUG oslo.privsep.daemon [-] privsep: reply[d8bf5339-baf2-436d-941c-32ff95095512]: (4, False) _call_back /usr/lib/python3.12/site-packages/oslo_privsep/daemon.py:515
Oct 09 16:44:36 compute-0 nova_compute[117331]: 2025-10-09 16:44:36.816 2 DEBUG oslo_service.periodic_task [None req-92a13bed-c081-4c7b-87f3-bafebefb79f2 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.12/site-packages/oslo_service/periodic_task.py:210
Oct 09 16:44:37 compute-0 nova_compute[117331]: 2025-10-09 16:44:37.123 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 09 16:44:37 compute-0 nova_compute[117331]: 2025-10-09 16:44:37.620 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 09 16:44:38 compute-0 nova_compute[117331]: 2025-10-09 16:44:38.306 2 DEBUG oslo_service.periodic_task [None req-92a13bed-c081-4c7b-87f3-bafebefb79f2 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.12/site-packages/oslo_service/periodic_task.py:210
Oct 09 16:44:41 compute-0 ovn_metadata_agent[28608]: 2025-10-09 16:44:41.706 28613 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:bb:21:f5 10.100.0.2'], port_security=[], type=localport, nat_addresses=[], virtual_parent=[], up=[False], options={}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.2/28', 'neutron:device_id': 'ovnmeta-d6111d54-cd54-4140-9f2f-a3a07bc0698c', 'neutron:device_owner': 'network:distributed', 'neutron:mtu': '', 'neutron:network_name': 'neutron-d6111d54-cd54-4140-9f2f-a3a07bc0698c', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'dbc5b562dfbf46d888285482e4fe52bb', 'neutron:revision_number': '2', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=19692532-ef61-4a13-aefe-26b540afb6e9, chassis=[], tunnel_key=1, gateway_chassis=[], requested_chassis=[], logical_port=13d5f326-576a-4145-8e43-0b6b9681dab8) old=Port_Binding(mac=['fa:16:3e:bb:21:f5'], external_ids={'neutron:cidrs': '', 'neutron:device_id': 'ovnmeta-d6111d54-cd54-4140-9f2f-a3a07bc0698c', 'neutron:device_owner': 'network:distributed', 'neutron:mtu': '', 'neutron:network_name': 'neutron-d6111d54-cd54-4140-9f2f-a3a07bc0698c', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'dbc5b562dfbf46d888285482e4fe52bb', 'neutron:revision_number': '1', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}) matches /usr/lib/python3.12/site-packages/ovsdbapp/backend/ovs_idl/event.py:55
Oct 09 16:44:41 compute-0 ovn_metadata_agent[28608]: 2025-10-09 16:44:41.707 28613 INFO neutron.agent.ovn.metadata.agent [-] Metadata Port 13d5f326-576a-4145-8e43-0b6b9681dab8 in datapath d6111d54-cd54-4140-9f2f-a3a07bc0698c updated
Oct 09 16:44:41 compute-0 ovn_metadata_agent[28608]: 2025-10-09 16:44:41.709 28613 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network d6111d54-cd54-4140-9f2f-a3a07bc0698c, tearing the namespace down if needed _get_provision_params /usr/lib/python3.12/site-packages/neutron/agent/ovn/metadata/agent.py:756
Oct 09 16:44:41 compute-0 ovn_metadata_agent[28608]: 2025-10-09 16:44:41.709 139687 DEBUG oslo.privsep.daemon [-] privsep: reply[afa6d5b4-235d-4d7e-8ca1-bebe8e60bffb]: (4, False) _call_back /usr/lib/python3.12/site-packages/oslo_privsep/daemon.py:515
Oct 09 16:44:41 compute-0 podman[155473]: 2025-10-09 16:44:41.861460238 +0000 UTC m=+0.086874715 container health_status 270830b5167cafbab735d8118980f26987e673cd0fa464dd2202394830bcca66 (image=38.102.83.66:5001/podified-master-centos10/openstack-multipathd:watcher_latest, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.schema-version=1.0, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': '38.102.83.66:5001/podified-master-centos10/openstack-multipathd:watcher_latest', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.license=GPLv2, tcib_build_tag=watcher_latest, config_id=multipathd, io.buildah.version=1.41.4, container_name=multipathd, managed_by=edpm_ansible, org.label-schema.build-date=20251007)
Oct 09 16:44:42 compute-0 nova_compute[117331]: 2025-10-09 16:44:42.125 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 09 16:44:42 compute-0 nova_compute[117331]: 2025-10-09 16:44:42.306 2 DEBUG oslo_service.periodic_task [None req-92a13bed-c081-4c7b-87f3-bafebefb79f2 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.12/site-packages/oslo_service/periodic_task.py:210
Oct 09 16:44:42 compute-0 nova_compute[117331]: 2025-10-09 16:44:42.623 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 09 16:44:43 compute-0 nova_compute[117331]: 2025-10-09 16:44:43.302 2 DEBUG oslo_service.periodic_task [None req-92a13bed-c081-4c7b-87f3-bafebefb79f2 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.12/site-packages/oslo_service/periodic_task.py:210
Oct 09 16:44:43 compute-0 nova_compute[117331]: 2025-10-09 16:44:43.306 2 DEBUG oslo_service.periodic_task [None req-92a13bed-c081-4c7b-87f3-bafebefb79f2 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.12/site-packages/oslo_service/periodic_task.py:210
Oct 09 16:44:43 compute-0 nova_compute[117331]: 2025-10-09 16:44:43.306 2 DEBUG nova.compute.manager [None req-92a13bed-c081-4c7b-87f3-bafebefb79f2 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.12/site-packages/nova/compute/manager.py:11228
Oct 09 16:44:43 compute-0 nova_compute[117331]: 2025-10-09 16:44:43.306 2 DEBUG oslo_service.periodic_task [None req-92a13bed-c081-4c7b-87f3-bafebefb79f2 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.12/site-packages/oslo_service/periodic_task.py:210
Oct 09 16:44:43 compute-0 nova_compute[117331]: 2025-10-09 16:44:43.820 2 DEBUG oslo_concurrency.lockutils [None req-92a13bed-c081-4c7b-87f3-bafebefb79f2 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:405
Oct 09 16:44:43 compute-0 nova_compute[117331]: 2025-10-09 16:44:43.821 2 DEBUG oslo_concurrency.lockutils [None req-92a13bed-c081-4c7b-87f3-bafebefb79f2 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:410
Oct 09 16:44:43 compute-0 nova_compute[117331]: 2025-10-09 16:44:43.821 2 DEBUG oslo_concurrency.lockutils [None req-92a13bed-c081-4c7b-87f3-bafebefb79f2 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:424
Oct 09 16:44:43 compute-0 nova_compute[117331]: 2025-10-09 16:44:43.821 2 DEBUG nova.compute.resource_tracker [None req-92a13bed-c081-4c7b-87f3-bafebefb79f2 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.12/site-packages/nova/compute/resource_tracker.py:937
Oct 09 16:44:44 compute-0 nova_compute[117331]: 2025-10-09 16:44:44.000 2 WARNING nova.virt.libvirt.driver [None req-92a13bed-c081-4c7b-87f3-bafebefb79f2 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Oct 09 16:44:44 compute-0 nova_compute[117331]: 2025-10-09 16:44:44.002 2 DEBUG oslo_concurrency.processutils [None req-92a13bed-c081-4c7b-87f3-bafebefb79f2 - - - - - -] Running cmd (subprocess): env LANG=C uptime execute /usr/lib/python3.12/site-packages/oslo_concurrency/processutils.py:349
Oct 09 16:44:44 compute-0 nova_compute[117331]: 2025-10-09 16:44:44.037 2 DEBUG oslo_concurrency.processutils [None req-92a13bed-c081-4c7b-87f3-bafebefb79f2 - - - - - -] CMD "env LANG=C uptime" returned: 0 in 0.036s execute /usr/lib/python3.12/site-packages/oslo_concurrency/processutils.py:372
Oct 09 16:44:44 compute-0 nova_compute[117331]: 2025-10-09 16:44:44.038 2 DEBUG nova.compute.resource_tracker [None req-92a13bed-c081-4c7b-87f3-bafebefb79f2 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=6179MB free_disk=73.24538040161133GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": 
"label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.12/site-packages/nova/compute/resource_tracker.py:1136
Oct 09 16:44:44 compute-0 nova_compute[117331]: 2025-10-09 16:44:44.038 2 DEBUG oslo_concurrency.lockutils [None req-92a13bed-c081-4c7b-87f3-bafebefb79f2 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:405
Oct 09 16:44:44 compute-0 nova_compute[117331]: 2025-10-09 16:44:44.039 2 DEBUG oslo_concurrency.lockutils [None req-92a13bed-c081-4c7b-87f3-bafebefb79f2 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:410
Oct 09 16:44:45 compute-0 nova_compute[117331]: 2025-10-09 16:44:45.126 2 DEBUG nova.compute.resource_tracker [None req-92a13bed-c081-4c7b-87f3-bafebefb79f2 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.12/site-packages/nova/compute/resource_tracker.py:1159
Oct 09 16:44:45 compute-0 nova_compute[117331]: 2025-10-09 16:44:45.126 2 DEBUG nova.compute.resource_tracker [None req-92a13bed-c081-4c7b-87f3-bafebefb79f2 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=79GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] stats={'failed_builds': '0', 'uptime': ' 16:44:44 up 53 min,  0 user,  load average: 0.16, 0.35, 0.42\n'} _report_final_resource_view /usr/lib/python3.12/site-packages/nova/compute/resource_tracker.py:1168
Oct 09 16:44:45 compute-0 nova_compute[117331]: 2025-10-09 16:44:45.163 2 DEBUG nova.compute.provider_tree [None req-92a13bed-c081-4c7b-87f3-bafebefb79f2 - - - - - -] Inventory has not changed in ProviderTree for provider: 593051b8-2000-437f-a915-2616fc8b1671 update_inventory /usr/lib/python3.12/site-packages/nova/compute/provider_tree.py:180
Oct 09 16:44:45 compute-0 nova_compute[117331]: 2025-10-09 16:44:45.673 2 DEBUG nova.scheduler.client.report [None req-92a13bed-c081-4c7b-87f3-bafebefb79f2 - - - - - -] Inventory has not changed for provider 593051b8-2000-437f-a915-2616fc8b1671 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 79, 'reserved': 1, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.12/site-packages/nova/scheduler/client/report.py:958
Oct 09 16:44:45 compute-0 podman[155494]: 2025-10-09 16:44:45.848568221 +0000 UTC m=+0.073886957 container health_status 86aa93e887899e54d383b0fc17fb3c5637680d1d8e146a130a557c837f0260fe (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible)
Oct 09 16:44:46 compute-0 nova_compute[117331]: 2025-10-09 16:44:46.183 2 DEBUG nova.compute.resource_tracker [None req-92a13bed-c081-4c7b-87f3-bafebefb79f2 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.12/site-packages/nova/compute/resource_tracker.py:1097
Oct 09 16:44:46 compute-0 nova_compute[117331]: 2025-10-09 16:44:46.183 2 DEBUG oslo_concurrency.lockutils [None req-92a13bed-c081-4c7b-87f3-bafebefb79f2 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 2.145s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:424
Oct 09 16:44:47 compute-0 nova_compute[117331]: 2025-10-09 16:44:47.126 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 09 16:44:47 compute-0 nova_compute[117331]: 2025-10-09 16:44:47.625 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 09 16:44:49 compute-0 sshd-session[155424]: Invalid user pi from 124.60.67.43 port 40208
Oct 09 16:44:50 compute-0 sshd-session[155424]: pam_unix(sshd:auth): check pass; user unknown
Oct 09 16:44:50 compute-0 sshd-session[155424]: pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=124.60.67.43
Oct 09 16:44:50 compute-0 podman[155518]: 2025-10-09 16:44:50.999603285 +0000 UTC m=+0.066941823 container health_status 0e966c7a283689daf23c5db5017f23f685ee836319240188a9f70ddd246563c5 (image=38.102.83.66:5001/podified-master-centos10/openstack-neutron-metadata-agent-ovn:watcher_latest, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, io.buildah.version=1.41.4, managed_by=edpm_ansible, org.label-schema.build-date=20251007, org.label-schema.license=GPLv2, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': '38.102.83.66:5001/podified-master-centos10/openstack-neutron-metadata-agent-ovn:watcher_latest', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, tcib_build_tag=watcher_latest, tcib_managed=true, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 10 Base Image)
Oct 09 16:44:51 compute-0 podman[155519]: 2025-10-09 16:44:51.0174617 +0000 UTC m=+0.074789366 container health_status 8227728d23f374cb9528be6ddf01dff38d8c792912a17b0d137932a18390b820 (image=38.102.83.66:5001/podified-master-centos10/openstack-iscsid:watcher_latest, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, tcib_build_tag=watcher_latest, io.buildah.version=1.41.4, managed_by=edpm_ansible, org.label-schema.build-date=20251007, config_id=iscsid, org.label-schema.schema-version=1.0, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': '38.102.83.66:5001/podified-master-centos10/openstack-iscsid:watcher_latest', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 10 Base Image, tcib_managed=true, container_name=iscsid)
Oct 09 16:44:51 compute-0 nova_compute[117331]: 2025-10-09 16:44:51.183 2 DEBUG oslo_service.periodic_task [None req-92a13bed-c081-4c7b-87f3-bafebefb79f2 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.12/site-packages/oslo_service/periodic_task.py:210
Oct 09 16:44:51 compute-0 nova_compute[117331]: 2025-10-09 16:44:51.719 2 DEBUG oslo_service.periodic_task [None req-92a13bed-c081-4c7b-87f3-bafebefb79f2 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.12/site-packages/oslo_service/periodic_task.py:210
Oct 09 16:44:51 compute-0 nova_compute[117331]: 2025-10-09 16:44:51.720 2 DEBUG oslo_service.periodic_task [None req-92a13bed-c081-4c7b-87f3-bafebefb79f2 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.12/site-packages/oslo_service/periodic_task.py:210
Oct 09 16:44:52 compute-0 nova_compute[117331]: 2025-10-09 16:44:52.083 2 DEBUG oslo_concurrency.lockutils [None req-90a86517-4844-4bdd-af5f-521a32a05ae4 edda4b76bea746b7aec969bc10f68f14 dbc5b562dfbf46d888285482e4fe52bb - - default default] Acquiring lock "d9cb0583-a454-4df1-80f9-dc9f184101f2" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:405
Oct 09 16:44:52 compute-0 nova_compute[117331]: 2025-10-09 16:44:52.084 2 DEBUG oslo_concurrency.lockutils [None req-90a86517-4844-4bdd-af5f-521a32a05ae4 edda4b76bea746b7aec969bc10f68f14 dbc5b562dfbf46d888285482e4fe52bb - - default default] Lock "d9cb0583-a454-4df1-80f9-dc9f184101f2" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:410
Oct 09 16:44:52 compute-0 nova_compute[117331]: 2025-10-09 16:44:52.128 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 09 16:44:52 compute-0 nova_compute[117331]: 2025-10-09 16:44:52.589 2 DEBUG nova.compute.manager [None req-90a86517-4844-4bdd-af5f-521a32a05ae4 edda4b76bea746b7aec969bc10f68f14 dbc5b562dfbf46d888285482e4fe52bb - - default default] [instance: d9cb0583-a454-4df1-80f9-dc9f184101f2] Starting instance... _do_build_and_run_instance /usr/lib/python3.12/site-packages/nova/compute/manager.py:2472
Oct 09 16:44:52 compute-0 nova_compute[117331]: 2025-10-09 16:44:52.626 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 09 16:44:53 compute-0 nova_compute[117331]: 2025-10-09 16:44:53.162 2 DEBUG oslo_concurrency.lockutils [None req-90a86517-4844-4bdd-af5f-521a32a05ae4 edda4b76bea746b7aec969bc10f68f14 dbc5b562dfbf46d888285482e4fe52bb - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:405
Oct 09 16:44:53 compute-0 nova_compute[117331]: 2025-10-09 16:44:53.163 2 DEBUG oslo_concurrency.lockutils [None req-90a86517-4844-4bdd-af5f-521a32a05ae4 edda4b76bea746b7aec969bc10f68f14 dbc5b562dfbf46d888285482e4fe52bb - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:410
Oct 09 16:44:53 compute-0 nova_compute[117331]: 2025-10-09 16:44:53.180 2 DEBUG nova.virt.hardware [None req-90a86517-4844-4bdd-af5f-521a32a05ae4 edda4b76bea746b7aec969bc10f68f14 dbc5b562dfbf46d888285482e4fe52bb - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.12/site-packages/nova/virt/hardware.py:2528
Oct 09 16:44:53 compute-0 nova_compute[117331]: 2025-10-09 16:44:53.181 2 INFO nova.compute.claims [None req-90a86517-4844-4bdd-af5f-521a32a05ae4 edda4b76bea746b7aec969bc10f68f14 dbc5b562dfbf46d888285482e4fe52bb - - default default] [instance: d9cb0583-a454-4df1-80f9-dc9f184101f2] Claim successful on node compute-0.ctlplane.example.com
Oct 09 16:44:53 compute-0 sshd-session[155424]: Failed password for invalid user pi from 124.60.67.43 port 40208 ssh2
Oct 09 16:44:54 compute-0 nova_compute[117331]: 2025-10-09 16:44:54.236 2 DEBUG nova.compute.provider_tree [None req-90a86517-4844-4bdd-af5f-521a32a05ae4 edda4b76bea746b7aec969bc10f68f14 dbc5b562dfbf46d888285482e4fe52bb - - default default] Inventory has not changed in ProviderTree for provider: 593051b8-2000-437f-a915-2616fc8b1671 update_inventory /usr/lib/python3.12/site-packages/nova/compute/provider_tree.py:180
Oct 09 16:44:54 compute-0 nova_compute[117331]: 2025-10-09 16:44:54.742 2 DEBUG nova.scheduler.client.report [None req-90a86517-4844-4bdd-af5f-521a32a05ae4 edda4b76bea746b7aec969bc10f68f14 dbc5b562dfbf46d888285482e4fe52bb - - default default] Inventory has not changed for provider 593051b8-2000-437f-a915-2616fc8b1671 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 79, 'reserved': 1, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.12/site-packages/nova/scheduler/client/report.py:958
Oct 09 16:44:55 compute-0 nova_compute[117331]: 2025-10-09 16:44:55.253 2 DEBUG oslo_concurrency.lockutils [None req-90a86517-4844-4bdd-af5f-521a32a05ae4 edda4b76bea746b7aec969bc10f68f14 dbc5b562dfbf46d888285482e4fe52bb - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 2.091s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:424
Oct 09 16:44:55 compute-0 nova_compute[117331]: 2025-10-09 16:44:55.255 2 DEBUG nova.compute.manager [None req-90a86517-4844-4bdd-af5f-521a32a05ae4 edda4b76bea746b7aec969bc10f68f14 dbc5b562dfbf46d888285482e4fe52bb - - default default] [instance: d9cb0583-a454-4df1-80f9-dc9f184101f2] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.12/site-packages/nova/compute/manager.py:2869
Oct 09 16:44:55 compute-0 sshd-session[155424]: Connection closed by invalid user pi 124.60.67.43 port 40208 [preauth]
Oct 09 16:44:55 compute-0 nova_compute[117331]: 2025-10-09 16:44:55.773 2 DEBUG nova.compute.manager [None req-90a86517-4844-4bdd-af5f-521a32a05ae4 edda4b76bea746b7aec969bc10f68f14 dbc5b562dfbf46d888285482e4fe52bb - - default default] [instance: d9cb0583-a454-4df1-80f9-dc9f184101f2] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.12/site-packages/nova/compute/manager.py:2016
Oct 09 16:44:55 compute-0 nova_compute[117331]: 2025-10-09 16:44:55.774 2 DEBUG nova.network.neutron [None req-90a86517-4844-4bdd-af5f-521a32a05ae4 edda4b76bea746b7aec969bc10f68f14 dbc5b562dfbf46d888285482e4fe52bb - - default default] [instance: d9cb0583-a454-4df1-80f9-dc9f184101f2] allocate_for_instance() allocate_for_instance /usr/lib/python3.12/site-packages/nova/network/neutron.py:1208
Oct 09 16:44:55 compute-0 nova_compute[117331]: 2025-10-09 16:44:55.774 2 WARNING neutronclient.v2_0.client [None req-90a86517-4844-4bdd-af5f-521a32a05ae4 edda4b76bea746b7aec969bc10f68f14 dbc5b562dfbf46d888285482e4fe52bb - - default default] The python binding code in neutronclient is deprecated in favor of OpenstackSDK, please use that as this will be removed in a future release.
Oct 09 16:44:55 compute-0 nova_compute[117331]: 2025-10-09 16:44:55.775 2 WARNING neutronclient.v2_0.client [None req-90a86517-4844-4bdd-af5f-521a32a05ae4 edda4b76bea746b7aec969bc10f68f14 dbc5b562dfbf46d888285482e4fe52bb - - default default] The python binding code in neutronclient is deprecated in favor of OpenstackSDK, please use that as this will be removed in a future release.
Oct 09 16:44:56 compute-0 nova_compute[117331]: 2025-10-09 16:44:56.284 2 INFO nova.virt.libvirt.driver [None req-90a86517-4844-4bdd-af5f-521a32a05ae4 edda4b76bea746b7aec969bc10f68f14 dbc5b562dfbf46d888285482e4fe52bb - - default default] [instance: d9cb0583-a454-4df1-80f9-dc9f184101f2] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Oct 09 16:44:56 compute-0 nova_compute[117331]: 2025-10-09 16:44:56.794 2 DEBUG nova.compute.manager [None req-90a86517-4844-4bdd-af5f-521a32a05ae4 edda4b76bea746b7aec969bc10f68f14 dbc5b562dfbf46d888285482e4fe52bb - - default default] [instance: d9cb0583-a454-4df1-80f9-dc9f184101f2] Start building block device mappings for instance. _build_resources /usr/lib/python3.12/site-packages/nova/compute/manager.py:2904
Oct 09 16:44:57 compute-0 nova_compute[117331]: 2025-10-09 16:44:57.131 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 09 16:44:57 compute-0 nova_compute[117331]: 2025-10-09 16:44:57.628 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 09 16:44:57 compute-0 nova_compute[117331]: 2025-10-09 16:44:57.812 2 DEBUG nova.compute.manager [None req-90a86517-4844-4bdd-af5f-521a32a05ae4 edda4b76bea746b7aec969bc10f68f14 dbc5b562dfbf46d888285482e4fe52bb - - default default] [instance: d9cb0583-a454-4df1-80f9-dc9f184101f2] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.12/site-packages/nova/compute/manager.py:2678
Oct 09 16:44:57 compute-0 nova_compute[117331]: 2025-10-09 16:44:57.814 2 DEBUG nova.virt.libvirt.driver [None req-90a86517-4844-4bdd-af5f-521a32a05ae4 edda4b76bea746b7aec969bc10f68f14 dbc5b562dfbf46d888285482e4fe52bb - - default default] [instance: d9cb0583-a454-4df1-80f9-dc9f184101f2] Creating instance directory _create_image /usr/lib/python3.12/site-packages/nova/virt/libvirt/driver.py:5138
Oct 09 16:44:57 compute-0 nova_compute[117331]: 2025-10-09 16:44:57.814 2 INFO nova.virt.libvirt.driver [None req-90a86517-4844-4bdd-af5f-521a32a05ae4 edda4b76bea746b7aec969bc10f68f14 dbc5b562dfbf46d888285482e4fe52bb - - default default] [instance: d9cb0583-a454-4df1-80f9-dc9f184101f2] Creating image(s)
Oct 09 16:44:57 compute-0 nova_compute[117331]: 2025-10-09 16:44:57.815 2 DEBUG oslo_concurrency.lockutils [None req-90a86517-4844-4bdd-af5f-521a32a05ae4 edda4b76bea746b7aec969bc10f68f14 dbc5b562dfbf46d888285482e4fe52bb - - default default] Acquiring lock "/var/lib/nova/instances/d9cb0583-a454-4df1-80f9-dc9f184101f2/disk.info" by "nova.virt.libvirt.imagebackend.Image.resolve_driver_format.<locals>.write_to_disk_info_file" inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:405
Oct 09 16:44:57 compute-0 nova_compute[117331]: 2025-10-09 16:44:57.815 2 DEBUG oslo_concurrency.lockutils [None req-90a86517-4844-4bdd-af5f-521a32a05ae4 edda4b76bea746b7aec969bc10f68f14 dbc5b562dfbf46d888285482e4fe52bb - - default default] Lock "/var/lib/nova/instances/d9cb0583-a454-4df1-80f9-dc9f184101f2/disk.info" acquired by "nova.virt.libvirt.imagebackend.Image.resolve_driver_format.<locals>.write_to_disk_info_file" :: waited 0.000s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:410
Oct 09 16:44:57 compute-0 nova_compute[117331]: 2025-10-09 16:44:57.816 2 DEBUG oslo_concurrency.lockutils [None req-90a86517-4844-4bdd-af5f-521a32a05ae4 edda4b76bea746b7aec969bc10f68f14 dbc5b562dfbf46d888285482e4fe52bb - - default default] Lock "/var/lib/nova/instances/d9cb0583-a454-4df1-80f9-dc9f184101f2/disk.info" "released" by "nova.virt.libvirt.imagebackend.Image.resolve_driver_format.<locals>.write_to_disk_info_file" :: held 0.001s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:424
Oct 09 16:44:57 compute-0 nova_compute[117331]: 2025-10-09 16:44:57.816 2 DEBUG oslo_utils.imageutils.format_inspector [None req-90a86517-4844-4bdd-af5f-521a32a05ae4 edda4b76bea746b7aec969bc10f68f14 dbc5b562dfbf46d888285482e4fe52bb - - default default] Format inspector for vmdk does not match, excluding from consideration (Signature KDMV not found: b'\xebH\x90\x00') _process_chunk /usr/lib/python3.12/site-packages/oslo_utils/imageutils/format_inspector.py:1365
Oct 09 16:44:57 compute-0 nova_compute[117331]: 2025-10-09 16:44:57.819 2 DEBUG oslo_utils.imageutils.format_inspector [None req-90a86517-4844-4bdd-af5f-521a32a05ae4 edda4b76bea746b7aec969bc10f68f14 dbc5b562dfbf46d888285482e4fe52bb - - default default] Format inspector for vhdx does not match, excluding from consideration (Region signature not found at 30000) _process_chunk /usr/lib/python3.12/site-packages/oslo_utils/imageutils/format_inspector.py:1365
Oct 09 16:44:57 compute-0 nova_compute[117331]: 2025-10-09 16:44:57.821 2 DEBUG oslo_concurrency.processutils [None req-90a86517-4844-4bdd-af5f-521a32a05ae4 edda4b76bea746b7aec969bc10f68f14 dbc5b562dfbf46d888285482e4fe52bb - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/cea3dacfdea0f3734ae526b812744e847bc2d356 --force-share --output=json execute /usr/lib/python3.12/site-packages/oslo_concurrency/processutils.py:349
Oct 09 16:44:57 compute-0 nova_compute[117331]: 2025-10-09 16:44:57.911 2 DEBUG oslo_concurrency.processutils [None req-90a86517-4844-4bdd-af5f-521a32a05ae4 edda4b76bea746b7aec969bc10f68f14 dbc5b562dfbf46d888285482e4fe52bb - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/cea3dacfdea0f3734ae526b812744e847bc2d356 --force-share --output=json" returned: 0 in 0.090s execute /usr/lib/python3.12/site-packages/oslo_concurrency/processutils.py:372
Oct 09 16:44:57 compute-0 nova_compute[117331]: 2025-10-09 16:44:57.912 2 DEBUG oslo_concurrency.lockutils [None req-90a86517-4844-4bdd-af5f-521a32a05ae4 edda4b76bea746b7aec969bc10f68f14 dbc5b562dfbf46d888285482e4fe52bb - - default default] Acquiring lock "cea3dacfdea0f3734ae526b812744e847bc2d356" by "nova.virt.libvirt.imagebackend.Qcow2.create_image.<locals>.create_qcow2_image" inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:405
Oct 09 16:44:57 compute-0 nova_compute[117331]: 2025-10-09 16:44:57.913 2 DEBUG oslo_concurrency.lockutils [None req-90a86517-4844-4bdd-af5f-521a32a05ae4 edda4b76bea746b7aec969bc10f68f14 dbc5b562dfbf46d888285482e4fe52bb - - default default] Lock "cea3dacfdea0f3734ae526b812744e847bc2d356" acquired by "nova.virt.libvirt.imagebackend.Qcow2.create_image.<locals>.create_qcow2_image" :: waited 0.001s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:410
Oct 09 16:44:57 compute-0 nova_compute[117331]: 2025-10-09 16:44:57.914 2 DEBUG oslo_utils.imageutils.format_inspector [None req-90a86517-4844-4bdd-af5f-521a32a05ae4 edda4b76bea746b7aec969bc10f68f14 dbc5b562dfbf46d888285482e4fe52bb - - default default] Format inspector for vmdk does not match, excluding from consideration (Signature KDMV not found: b'\xebH\x90\x00') _process_chunk /usr/lib/python3.12/site-packages/oslo_utils/imageutils/format_inspector.py:1365
Oct 09 16:44:57 compute-0 nova_compute[117331]: 2025-10-09 16:44:57.917 2 DEBUG oslo_utils.imageutils.format_inspector [None req-90a86517-4844-4bdd-af5f-521a32a05ae4 edda4b76bea746b7aec969bc10f68f14 dbc5b562dfbf46d888285482e4fe52bb - - default default] Format inspector for vhdx does not match, excluding from consideration (Region signature not found at 30000) _process_chunk /usr/lib/python3.12/site-packages/oslo_utils/imageutils/format_inspector.py:1365
Oct 09 16:44:57 compute-0 nova_compute[117331]: 2025-10-09 16:44:57.917 2 DEBUG oslo_concurrency.processutils [None req-90a86517-4844-4bdd-af5f-521a32a05ae4 edda4b76bea746b7aec969bc10f68f14 dbc5b562dfbf46d888285482e4fe52bb - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/cea3dacfdea0f3734ae526b812744e847bc2d356 --force-share --output=json execute /usr/lib/python3.12/site-packages/oslo_concurrency/processutils.py:349
Oct 09 16:44:57 compute-0 nova_compute[117331]: 2025-10-09 16:44:57.992 2 DEBUG oslo_concurrency.processutils [None req-90a86517-4844-4bdd-af5f-521a32a05ae4 edda4b76bea746b7aec969bc10f68f14 dbc5b562dfbf46d888285482e4fe52bb - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/cea3dacfdea0f3734ae526b812744e847bc2d356 --force-share --output=json" returned: 0 in 0.075s execute /usr/lib/python3.12/site-packages/oslo_concurrency/processutils.py:372
Oct 09 16:44:57 compute-0 nova_compute[117331]: 2025-10-09 16:44:57.994 2 DEBUG oslo_concurrency.processutils [None req-90a86517-4844-4bdd-af5f-521a32a05ae4 edda4b76bea746b7aec969bc10f68f14 dbc5b562dfbf46d888285482e4fe52bb - - default default] Running cmd (subprocess): env LC_ALL=C LANG=C qemu-img create -f qcow2 -o backing_file=/var/lib/nova/instances/_base/cea3dacfdea0f3734ae526b812744e847bc2d356,backing_fmt=raw /var/lib/nova/instances/d9cb0583-a454-4df1-80f9-dc9f184101f2/disk 1073741824 execute /usr/lib/python3.12/site-packages/oslo_concurrency/processutils.py:349
Oct 09 16:44:58 compute-0 nova_compute[117331]: 2025-10-09 16:44:58.050 2 DEBUG oslo_concurrency.processutils [None req-90a86517-4844-4bdd-af5f-521a32a05ae4 edda4b76bea746b7aec969bc10f68f14 dbc5b562dfbf46d888285482e4fe52bb - - default default] CMD "env LC_ALL=C LANG=C qemu-img create -f qcow2 -o backing_file=/var/lib/nova/instances/_base/cea3dacfdea0f3734ae526b812744e847bc2d356,backing_fmt=raw /var/lib/nova/instances/d9cb0583-a454-4df1-80f9-dc9f184101f2/disk 1073741824" returned: 0 in 0.056s execute /usr/lib/python3.12/site-packages/oslo_concurrency/processutils.py:372
Oct 09 16:44:58 compute-0 nova_compute[117331]: 2025-10-09 16:44:58.053 2 DEBUG oslo_concurrency.lockutils [None req-90a86517-4844-4bdd-af5f-521a32a05ae4 edda4b76bea746b7aec969bc10f68f14 dbc5b562dfbf46d888285482e4fe52bb - - default default] Lock "cea3dacfdea0f3734ae526b812744e847bc2d356" "released" by "nova.virt.libvirt.imagebackend.Qcow2.create_image.<locals>.create_qcow2_image" :: held 0.140s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:424
Oct 09 16:44:58 compute-0 nova_compute[117331]: 2025-10-09 16:44:58.054 2 DEBUG oslo_concurrency.processutils [None req-90a86517-4844-4bdd-af5f-521a32a05ae4 edda4b76bea746b7aec969bc10f68f14 dbc5b562dfbf46d888285482e4fe52bb - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/cea3dacfdea0f3734ae526b812744e847bc2d356 --force-share --output=json execute /usr/lib/python3.12/site-packages/oslo_concurrency/processutils.py:349
Oct 09 16:44:58 compute-0 nova_compute[117331]: 2025-10-09 16:44:58.132 2 DEBUG oslo_concurrency.processutils [None req-90a86517-4844-4bdd-af5f-521a32a05ae4 edda4b76bea746b7aec969bc10f68f14 dbc5b562dfbf46d888285482e4fe52bb - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/cea3dacfdea0f3734ae526b812744e847bc2d356 --force-share --output=json" returned: 0 in 0.078s execute /usr/lib/python3.12/site-packages/oslo_concurrency/processutils.py:372
Oct 09 16:44:58 compute-0 nova_compute[117331]: 2025-10-09 16:44:58.134 2 DEBUG nova.virt.disk.api [None req-90a86517-4844-4bdd-af5f-521a32a05ae4 edda4b76bea746b7aec969bc10f68f14 dbc5b562dfbf46d888285482e4fe52bb - - default default] Checking if we can resize image /var/lib/nova/instances/d9cb0583-a454-4df1-80f9-dc9f184101f2/disk. size=1073741824 can_resize_image /usr/lib/python3.12/site-packages/nova/virt/disk/api.py:164
Oct 09 16:44:58 compute-0 nova_compute[117331]: 2025-10-09 16:44:58.135 2 DEBUG oslo_concurrency.processutils [None req-90a86517-4844-4bdd-af5f-521a32a05ae4 edda4b76bea746b7aec969bc10f68f14 dbc5b562dfbf46d888285482e4fe52bb - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/d9cb0583-a454-4df1-80f9-dc9f184101f2/disk --force-share --output=json execute /usr/lib/python3.12/site-packages/oslo_concurrency/processutils.py:349
Oct 09 16:44:58 compute-0 nova_compute[117331]: 2025-10-09 16:44:58.222 2 DEBUG oslo_concurrency.processutils [None req-90a86517-4844-4bdd-af5f-521a32a05ae4 edda4b76bea746b7aec969bc10f68f14 dbc5b562dfbf46d888285482e4fe52bb - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/d9cb0583-a454-4df1-80f9-dc9f184101f2/disk --force-share --output=json" returned: 0 in 0.087s execute /usr/lib/python3.12/site-packages/oslo_concurrency/processutils.py:372
Oct 09 16:44:58 compute-0 nova_compute[117331]: 2025-10-09 16:44:58.223 2 DEBUG nova.virt.disk.api [None req-90a86517-4844-4bdd-af5f-521a32a05ae4 edda4b76bea746b7aec969bc10f68f14 dbc5b562dfbf46d888285482e4fe52bb - - default default] Cannot resize image /var/lib/nova/instances/d9cb0583-a454-4df1-80f9-dc9f184101f2/disk to a smaller size. can_resize_image /usr/lib/python3.12/site-packages/nova/virt/disk/api.py:170
Oct 09 16:44:58 compute-0 nova_compute[117331]: 2025-10-09 16:44:58.224 2 DEBUG nova.virt.libvirt.driver [None req-90a86517-4844-4bdd-af5f-521a32a05ae4 edda4b76bea746b7aec969bc10f68f14 dbc5b562dfbf46d888285482e4fe52bb - - default default] [instance: d9cb0583-a454-4df1-80f9-dc9f184101f2] Created local disks _create_image /usr/lib/python3.12/site-packages/nova/virt/libvirt/driver.py:5270
Oct 09 16:44:58 compute-0 nova_compute[117331]: 2025-10-09 16:44:58.225 2 DEBUG nova.virt.libvirt.driver [None req-90a86517-4844-4bdd-af5f-521a32a05ae4 edda4b76bea746b7aec969bc10f68f14 dbc5b562dfbf46d888285482e4fe52bb - - default default] [instance: d9cb0583-a454-4df1-80f9-dc9f184101f2] Ensure instance console log exists: /var/lib/nova/instances/d9cb0583-a454-4df1-80f9-dc9f184101f2/console.log _ensure_console_log_for_instance /usr/lib/python3.12/site-packages/nova/virt/libvirt/driver.py:5017
Oct 09 16:44:58 compute-0 nova_compute[117331]: 2025-10-09 16:44:58.226 2 DEBUG oslo_concurrency.lockutils [None req-90a86517-4844-4bdd-af5f-521a32a05ae4 edda4b76bea746b7aec969bc10f68f14 dbc5b562dfbf46d888285482e4fe52bb - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:405
Oct 09 16:44:58 compute-0 nova_compute[117331]: 2025-10-09 16:44:58.226 2 DEBUG oslo_concurrency.lockutils [None req-90a86517-4844-4bdd-af5f-521a32a05ae4 edda4b76bea746b7aec969bc10f68f14 dbc5b562dfbf46d888285482e4fe52bb - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:410
Oct 09 16:44:58 compute-0 nova_compute[117331]: 2025-10-09 16:44:58.227 2 DEBUG oslo_concurrency.lockutils [None req-90a86517-4844-4bdd-af5f-521a32a05ae4 edda4b76bea746b7aec969bc10f68f14 dbc5b562dfbf46d888285482e4fe52bb - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:424
Oct 09 16:44:58 compute-0 nova_compute[117331]: 2025-10-09 16:44:58.541 2 DEBUG nova.network.neutron [None req-90a86517-4844-4bdd-af5f-521a32a05ae4 edda4b76bea746b7aec969bc10f68f14 dbc5b562dfbf46d888285482e4fe52bb - - default default] [instance: d9cb0583-a454-4df1-80f9-dc9f184101f2] Successfully created port: b7eb3265-ba4d-4b59-9f4f-d449f3a8d71e _create_port_minimal /usr/lib/python3.12/site-packages/nova/network/neutron.py:550
Oct 09 16:44:59 compute-0 podman[127775]: time="2025-10-09T16:44:59Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Oct 09 16:44:59 compute-0 podman[127775]: @ - - [09/Oct/2025:16:44:59 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 19526 "" "Go-http-client/1.1"
Oct 09 16:44:59 compute-0 podman[127775]: @ - - [09/Oct/2025:16:44:59 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 3034 "" "Go-http-client/1.1"
Oct 09 16:44:59 compute-0 nova_compute[117331]: 2025-10-09 16:44:59.815 2 DEBUG nova.network.neutron [None req-90a86517-4844-4bdd-af5f-521a32a05ae4 edda4b76bea746b7aec969bc10f68f14 dbc5b562dfbf46d888285482e4fe52bb - - default default] [instance: d9cb0583-a454-4df1-80f9-dc9f184101f2] Successfully updated port: b7eb3265-ba4d-4b59-9f4f-d449f3a8d71e _update_port /usr/lib/python3.12/site-packages/nova/network/neutron.py:588
Oct 09 16:44:59 compute-0 nova_compute[117331]: 2025-10-09 16:44:59.875 2 DEBUG nova.compute.manager [req-bac6aef1-fde4-4968-9ecb-b902b5f36348 req-31b9ac72-e8af-4846-80c6-8f16c7d34a56 ee15961ad2be4bada6c106ac4f69f422 c076c42271004e7aa86e84416e33f826 - - default default] [instance: d9cb0583-a454-4df1-80f9-dc9f184101f2] Received event network-changed-b7eb3265-ba4d-4b59-9f4f-d449f3a8d71e external_instance_event /usr/lib/python3.12/site-packages/nova/compute/manager.py:11812
Oct 09 16:44:59 compute-0 nova_compute[117331]: 2025-10-09 16:44:59.876 2 DEBUG nova.compute.manager [req-bac6aef1-fde4-4968-9ecb-b902b5f36348 req-31b9ac72-e8af-4846-80c6-8f16c7d34a56 ee15961ad2be4bada6c106ac4f69f422 c076c42271004e7aa86e84416e33f826 - - default default] [instance: d9cb0583-a454-4df1-80f9-dc9f184101f2] Refreshing instance network info cache due to event network-changed-b7eb3265-ba4d-4b59-9f4f-d449f3a8d71e. external_instance_event /usr/lib/python3.12/site-packages/nova/compute/manager.py:11817
Oct 09 16:44:59 compute-0 nova_compute[117331]: 2025-10-09 16:44:59.876 2 DEBUG oslo_concurrency.lockutils [req-bac6aef1-fde4-4968-9ecb-b902b5f36348 req-31b9ac72-e8af-4846-80c6-8f16c7d34a56 ee15961ad2be4bada6c106ac4f69f422 c076c42271004e7aa86e84416e33f826 - - default default] Acquiring lock "refresh_cache-d9cb0583-a454-4df1-80f9-dc9f184101f2" lock /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:313
Oct 09 16:44:59 compute-0 nova_compute[117331]: 2025-10-09 16:44:59.876 2 DEBUG oslo_concurrency.lockutils [req-bac6aef1-fde4-4968-9ecb-b902b5f36348 req-31b9ac72-e8af-4846-80c6-8f16c7d34a56 ee15961ad2be4bada6c106ac4f69f422 c076c42271004e7aa86e84416e33f826 - - default default] Acquired lock "refresh_cache-d9cb0583-a454-4df1-80f9-dc9f184101f2" lock /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:316
Oct 09 16:44:59 compute-0 nova_compute[117331]: 2025-10-09 16:44:59.877 2 DEBUG nova.network.neutron [req-bac6aef1-fde4-4968-9ecb-b902b5f36348 req-31b9ac72-e8af-4846-80c6-8f16c7d34a56 ee15961ad2be4bada6c106ac4f69f422 c076c42271004e7aa86e84416e33f826 - - default default] [instance: d9cb0583-a454-4df1-80f9-dc9f184101f2] Refreshing network info cache for port b7eb3265-ba4d-4b59-9f4f-d449f3a8d71e _get_instance_nw_info /usr/lib/python3.12/site-packages/nova/network/neutron.py:2067
Oct 09 16:45:00 compute-0 ovn_controller[19752]: 2025-10-09T16:45:00Z|00305|memory_trim|INFO|Detected inactivity (last active 30001 ms ago): trimming memory
Oct 09 16:45:00 compute-0 nova_compute[117331]: 2025-10-09 16:45:00.321 2 DEBUG oslo_concurrency.lockutils [None req-90a86517-4844-4bdd-af5f-521a32a05ae4 edda4b76bea746b7aec969bc10f68f14 dbc5b562dfbf46d888285482e4fe52bb - - default default] Acquiring lock "refresh_cache-d9cb0583-a454-4df1-80f9-dc9f184101f2" lock /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:313
Oct 09 16:45:00 compute-0 nova_compute[117331]: 2025-10-09 16:45:00.384 2 WARNING neutronclient.v2_0.client [req-bac6aef1-fde4-4968-9ecb-b902b5f36348 req-31b9ac72-e8af-4846-80c6-8f16c7d34a56 ee15961ad2be4bada6c106ac4f69f422 c076c42271004e7aa86e84416e33f826 - - default default] The python binding code in neutronclient is deprecated in favor of OpenstackSDK, please use that as this will be removed in a future release.
Oct 09 16:45:00 compute-0 nova_compute[117331]: 2025-10-09 16:45:00.531 2 DEBUG nova.network.neutron [req-bac6aef1-fde4-4968-9ecb-b902b5f36348 req-31b9ac72-e8af-4846-80c6-8f16c7d34a56 ee15961ad2be4bada6c106ac4f69f422 c076c42271004e7aa86e84416e33f826 - - default default] [instance: d9cb0583-a454-4df1-80f9-dc9f184101f2] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.12/site-packages/nova/network/neutron.py:3383
Oct 09 16:45:00 compute-0 nova_compute[117331]: 2025-10-09 16:45:00.733 2 DEBUG nova.network.neutron [req-bac6aef1-fde4-4968-9ecb-b902b5f36348 req-31b9ac72-e8af-4846-80c6-8f16c7d34a56 ee15961ad2be4bada6c106ac4f69f422 c076c42271004e7aa86e84416e33f826 - - default default] [instance: d9cb0583-a454-4df1-80f9-dc9f184101f2] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.12/site-packages/nova/network/neutron.py:116
Oct 09 16:45:01 compute-0 nova_compute[117331]: 2025-10-09 16:45:01.242 2 DEBUG oslo_concurrency.lockutils [req-bac6aef1-fde4-4968-9ecb-b902b5f36348 req-31b9ac72-e8af-4846-80c6-8f16c7d34a56 ee15961ad2be4bada6c106ac4f69f422 c076c42271004e7aa86e84416e33f826 - - default default] Releasing lock "refresh_cache-d9cb0583-a454-4df1-80f9-dc9f184101f2" lock /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:334
Oct 09 16:45:01 compute-0 nova_compute[117331]: 2025-10-09 16:45:01.243 2 DEBUG oslo_concurrency.lockutils [None req-90a86517-4844-4bdd-af5f-521a32a05ae4 edda4b76bea746b7aec969bc10f68f14 dbc5b562dfbf46d888285482e4fe52bb - - default default] Acquired lock "refresh_cache-d9cb0583-a454-4df1-80f9-dc9f184101f2" lock /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:316
Oct 09 16:45:01 compute-0 nova_compute[117331]: 2025-10-09 16:45:01.244 2 DEBUG nova.network.neutron [None req-90a86517-4844-4bdd-af5f-521a32a05ae4 edda4b76bea746b7aec969bc10f68f14 dbc5b562dfbf46d888285482e4fe52bb - - default default] [instance: d9cb0583-a454-4df1-80f9-dc9f184101f2] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.12/site-packages/nova/network/neutron.py:2070
Oct 09 16:45:01 compute-0 podman[155575]: 2025-10-09 16:45:01.328171917 +0000 UTC m=+0.064085042 container health_status 64fa251112d01abb34ec1bb8e22019815ddcaaaa391ce5ec91517147ac5a9994 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, vcs-type=git, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., vendor=Red Hat, Inc., name=ubi9-minimal, version=9.6, maintainer=Red Hat, Inc., url=https://catalog.redhat.com/en/search?searchType=containers, build-date=2025-08-20T13:12:41, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, container_name=openstack_network_exporter, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., architecture=x86_64, managed_by=edpm_ansible, release=1755695350, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.expose-services=, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, com.redhat.component=ubi9-minimal-container, io.buildah.version=1.33.7, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, config_id=edpm, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., distribution-scope=public, io.openshift.tags=minimal rhel9)
Oct 09 16:45:01 compute-0 podman[155576]: 2025-10-09 16:45:01.355455954 +0000 UTC m=+0.089303643 container health_status d615e4f24ff62fbb14be4a63e6da568ec5b22309773c1c66103b6b4de64a8a2f (image=38.102.83.66:5001/podified-master-centos10/openstack-ovn-controller:watcher_latest, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_id=ovn_controller, container_name=ovn_controller, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': '38.102.83.66:5001/podified-master-centos10/openstack-ovn-controller:watcher_latest', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.build-date=20251007, org.label-schema.license=GPLv2, tcib_build_tag=watcher_latest, io.buildah.version=1.41.4, maintainer=OpenStack Kubernetes Operator team)
Oct 09 16:45:01 compute-0 openstack_network_exporter[129925]: ERROR   16:45:01 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Oct 09 16:45:01 compute-0 openstack_network_exporter[129925]: ERROR   16:45:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Oct 09 16:45:01 compute-0 openstack_network_exporter[129925]: ERROR   16:45:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Oct 09 16:45:01 compute-0 openstack_network_exporter[129925]: ERROR   16:45:01 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Oct 09 16:45:01 compute-0 openstack_network_exporter[129925]: 
Oct 09 16:45:01 compute-0 openstack_network_exporter[129925]: ERROR   16:45:01 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Oct 09 16:45:01 compute-0 openstack_network_exporter[129925]: 
Oct 09 16:45:01 compute-0 nova_compute[117331]: 2025-10-09 16:45:01.828 2 DEBUG nova.network.neutron [None req-90a86517-4844-4bdd-af5f-521a32a05ae4 edda4b76bea746b7aec969bc10f68f14 dbc5b562dfbf46d888285482e4fe52bb - - default default] [instance: d9cb0583-a454-4df1-80f9-dc9f184101f2] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.12/site-packages/nova/network/neutron.py:3383
Oct 09 16:45:02 compute-0 nova_compute[117331]: 2025-10-09 16:45:02.132 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 09 16:45:02 compute-0 nova_compute[117331]: 2025-10-09 16:45:02.516 2 WARNING neutronclient.v2_0.client [None req-90a86517-4844-4bdd-af5f-521a32a05ae4 edda4b76bea746b7aec969bc10f68f14 dbc5b562dfbf46d888285482e4fe52bb - - default default] The python binding code in neutronclient is deprecated in favor of OpenstackSDK, please use that as this will be removed in a future release.
Oct 09 16:45:02 compute-0 nova_compute[117331]: 2025-10-09 16:45:02.631 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 09 16:45:02 compute-0 nova_compute[117331]: 2025-10-09 16:45:02.769 2 DEBUG nova.network.neutron [None req-90a86517-4844-4bdd-af5f-521a32a05ae4 edda4b76bea746b7aec969bc10f68f14 dbc5b562dfbf46d888285482e4fe52bb - - default default] [instance: d9cb0583-a454-4df1-80f9-dc9f184101f2] Updating instance_info_cache with network_info: [{"id": "b7eb3265-ba4d-4b59-9f4f-d449f3a8d71e", "address": "fa:16:3e:8b:c4:76", "network": {"id": "052ac062-eaae-4bf0-a40e-d100eb37efd8", "bridge": "br-int", "label": "tempest-TestExecuteZoneMigrationStrategyVolume-1366874524-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "35a62963f2914ca4a7e5f2a6b9a36e93", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb7eb3265-ba", "ovs_interfaceid": "b7eb3265-ba4d-4b59-9f4f-d449f3a8d71e", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.12/site-packages/nova/network/neutron.py:116
Oct 09 16:45:03 compute-0 nova_compute[117331]: 2025-10-09 16:45:03.279 2 DEBUG oslo_concurrency.lockutils [None req-90a86517-4844-4bdd-af5f-521a32a05ae4 edda4b76bea746b7aec969bc10f68f14 dbc5b562dfbf46d888285482e4fe52bb - - default default] Releasing lock "refresh_cache-d9cb0583-a454-4df1-80f9-dc9f184101f2" lock /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:334
Oct 09 16:45:03 compute-0 nova_compute[117331]: 2025-10-09 16:45:03.280 2 DEBUG nova.compute.manager [None req-90a86517-4844-4bdd-af5f-521a32a05ae4 edda4b76bea746b7aec969bc10f68f14 dbc5b562dfbf46d888285482e4fe52bb - - default default] [instance: d9cb0583-a454-4df1-80f9-dc9f184101f2] Instance network_info: |[{"id": "b7eb3265-ba4d-4b59-9f4f-d449f3a8d71e", "address": "fa:16:3e:8b:c4:76", "network": {"id": "052ac062-eaae-4bf0-a40e-d100eb37efd8", "bridge": "br-int", "label": "tempest-TestExecuteZoneMigrationStrategyVolume-1366874524-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "35a62963f2914ca4a7e5f2a6b9a36e93", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb7eb3265-ba", "ovs_interfaceid": "b7eb3265-ba4d-4b59-9f4f-d449f3a8d71e", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.12/site-packages/nova/compute/manager.py:2031
Oct 09 16:45:03 compute-0 nova_compute[117331]: 2025-10-09 16:45:03.283 2 DEBUG nova.virt.libvirt.driver [None req-90a86517-4844-4bdd-af5f-521a32a05ae4 edda4b76bea746b7aec969bc10f68f14 dbc5b562dfbf46d888285482e4fe52bb - - default default] [instance: d9cb0583-a454-4df1-80f9-dc9f184101f2] Start _get_guest_xml network_info=[{"id": "b7eb3265-ba4d-4b59-9f4f-d449f3a8d71e", "address": "fa:16:3e:8b:c4:76", "network": {"id": "052ac062-eaae-4bf0-a40e-d100eb37efd8", "bridge": "br-int", "label": "tempest-TestExecuteZoneMigrationStrategyVolume-1366874524-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "35a62963f2914ca4a7e5f2a6b9a36e93", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb7eb3265-ba", "ovs_interfaceid": "b7eb3265-ba4d-4b59-9f4f-d449f3a8d71e", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-09T16:07:02Z,direct_url=<?>,disk_format='qcow2',id=b7d6e0af-25e4-4227-9dc6-43143898ceee,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='b30e8cf5e10742f190212b4cb97ce2c9',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-09T16:07:04Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'device_type': 'disk', 'encrypted': False, 'size': 0, 'encryption_format': None, 'guest_format': None, 'boot_index': 0, 'device_name': '/dev/vda', 'disk_bus': 'virtio', 'encryption_options': None, 'encryption_secret_uuid': None, 'image_id': 'b7d6e0af-25e4-4227-9dc6-43143898ceee'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None}share_info=None _get_guest_xml /usr/lib/python3.12/site-packages/nova/virt/libvirt/driver.py:8046
Oct 09 16:45:03 compute-0 nova_compute[117331]: 2025-10-09 16:45:03.288 2 WARNING nova.virt.libvirt.driver [None req-90a86517-4844-4bdd-af5f-521a32a05ae4 edda4b76bea746b7aec969bc10f68f14 dbc5b562dfbf46d888285482e4fe52bb - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Oct 09 16:45:03 compute-0 nova_compute[117331]: 2025-10-09 16:45:03.290 2 DEBUG nova.virt.driver [None req-90a86517-4844-4bdd-af5f-521a32a05ae4 edda4b76bea746b7aec969bc10f68f14 dbc5b562dfbf46d888285482e4fe52bb - - default default] InstanceDriverMetadata: InstanceDriverMetadata(root_type='image', root_id='b7d6e0af-25e4-4227-9dc6-43143898ceee', instance_meta=NovaInstanceMeta(name='tempest-TestExecuteZoneMigrationStrategyVolume-server-1045211055', uuid='d9cb0583-a454-4df1-80f9-dc9f184101f2'), owner=OwnerMeta(userid='edda4b76bea746b7aec969bc10f68f14', username='tempest-TestExecuteZoneMigrationStrategyVolume-905162571-project-admin', projectid='dbc5b562dfbf46d888285482e4fe52bb', projectname='tempest-TestExecuteZoneMigrationStrategyVolume-905162571'), image=ImageMeta(id='b7d6e0af-25e4-4227-9dc6-43143898ceee', name=None, container_format='bare', disk_format='qcow2', min_disk=1, min_ram=0, properties={'hw_rng_model': 'virtio'}), flavor=FlavorMeta(name='m1.nano', flavorid='5aeb5bf9-70f3-4ab6-b418-2c3319a7d2b3', memory_mb=128, vcpus=1, root_gb=1, ephemeral_gb=0, extra_specs={'hw_rng:allowed': 'True'}, swap=0), network_info=[{"id": "b7eb3265-ba4d-4b59-9f4f-d449f3a8d71e", "address": "fa:16:3e:8b:c4:76", "network": {"id": "052ac062-eaae-4bf0-a40e-d100eb37efd8", "bridge": "br-int", "label": "tempest-TestExecuteZoneMigrationStrategyVolume-1366874524-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "35a62963f2914ca4a7e5f2a6b9a36e93", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb7eb3265-ba", "ovs_interfaceid": 
"b7eb3265-ba4d-4b59-9f4f-d449f3a8d71e", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}], nova_package='32.1.0-0.20251008143712.076498e.el10', creation_time=1760028303.2905114) get_instance_driver_metadata /usr/lib/python3.12/site-packages/nova/virt/driver.py:437
Oct 09 16:45:03 compute-0 nova_compute[117331]: 2025-10-09 16:45:03.298 2 DEBUG nova.virt.libvirt.host [None req-90a86517-4844-4bdd-af5f-521a32a05ae4 edda4b76bea746b7aec969bc10f68f14 dbc5b562dfbf46d888285482e4fe52bb - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.12/site-packages/nova/virt/libvirt/host.py:1698
Oct 09 16:45:03 compute-0 nova_compute[117331]: 2025-10-09 16:45:03.298 2 DEBUG nova.virt.libvirt.host [None req-90a86517-4844-4bdd-af5f-521a32a05ae4 edda4b76bea746b7aec969bc10f68f14 dbc5b562dfbf46d888285482e4fe52bb - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.12/site-packages/nova/virt/libvirt/host.py:1708
Oct 09 16:45:03 compute-0 nova_compute[117331]: 2025-10-09 16:45:03.302 2 DEBUG nova.virt.libvirt.host [None req-90a86517-4844-4bdd-af5f-521a32a05ae4 edda4b76bea746b7aec969bc10f68f14 dbc5b562dfbf46d888285482e4fe52bb - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.12/site-packages/nova/virt/libvirt/host.py:1717
Oct 09 16:45:03 compute-0 nova_compute[117331]: 2025-10-09 16:45:03.302 2 DEBUG nova.virt.libvirt.host [None req-90a86517-4844-4bdd-af5f-521a32a05ae4 edda4b76bea746b7aec969bc10f68f14 dbc5b562dfbf46d888285482e4fe52bb - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.12/site-packages/nova/virt/libvirt/host.py:1724
Oct 09 16:45:03 compute-0 nova_compute[117331]: 2025-10-09 16:45:03.303 2 DEBUG nova.virt.libvirt.driver [None req-90a86517-4844-4bdd-af5f-521a32a05ae4 edda4b76bea746b7aec969bc10f68f14 dbc5b562dfbf46d888285482e4fe52bb - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.12/site-packages/nova/virt/libvirt/driver.py:5809
Oct 09 16:45:03 compute-0 nova_compute[117331]: 2025-10-09 16:45:03.303 2 DEBUG nova.virt.hardware [None req-90a86517-4844-4bdd-af5f-521a32a05ae4 edda4b76bea746b7aec969bc10f68f14 dbc5b562dfbf46d888285482e4fe52bb - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-10-09T16:07:00Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='5aeb5bf9-70f3-4ab6-b418-2c3319a7d2b3',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-09T16:07:02Z,direct_url=<?>,disk_format='qcow2',id=b7d6e0af-25e4-4227-9dc6-43143898ceee,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='b30e8cf5e10742f190212b4cb97ce2c9',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-09T16:07:04Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.12/site-packages/nova/virt/hardware.py:571
Oct 09 16:45:03 compute-0 nova_compute[117331]: 2025-10-09 16:45:03.303 2 DEBUG nova.virt.hardware [None req-90a86517-4844-4bdd-af5f-521a32a05ae4 edda4b76bea746b7aec969bc10f68f14 dbc5b562dfbf46d888285482e4fe52bb - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.12/site-packages/nova/virt/hardware.py:356
Oct 09 16:45:03 compute-0 nova_compute[117331]: 2025-10-09 16:45:03.304 2 DEBUG nova.virt.hardware [None req-90a86517-4844-4bdd-af5f-521a32a05ae4 edda4b76bea746b7aec969bc10f68f14 dbc5b562dfbf46d888285482e4fe52bb - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.12/site-packages/nova/virt/hardware.py:360
Oct 09 16:45:03 compute-0 nova_compute[117331]: 2025-10-09 16:45:03.304 2 DEBUG nova.virt.hardware [None req-90a86517-4844-4bdd-af5f-521a32a05ae4 edda4b76bea746b7aec969bc10f68f14 dbc5b562dfbf46d888285482e4fe52bb - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.12/site-packages/nova/virt/hardware.py:396
Oct 09 16:45:03 compute-0 nova_compute[117331]: 2025-10-09 16:45:03.304 2 DEBUG nova.virt.hardware [None req-90a86517-4844-4bdd-af5f-521a32a05ae4 edda4b76bea746b7aec969bc10f68f14 dbc5b562dfbf46d888285482e4fe52bb - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.12/site-packages/nova/virt/hardware.py:400
Oct 09 16:45:03 compute-0 nova_compute[117331]: 2025-10-09 16:45:03.304 2 DEBUG nova.virt.hardware [None req-90a86517-4844-4bdd-af5f-521a32a05ae4 edda4b76bea746b7aec969bc10f68f14 dbc5b562dfbf46d888285482e4fe52bb - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.12/site-packages/nova/virt/hardware.py:438
Oct 09 16:45:03 compute-0 nova_compute[117331]: 2025-10-09 16:45:03.304 2 DEBUG nova.virt.hardware [None req-90a86517-4844-4bdd-af5f-521a32a05ae4 edda4b76bea746b7aec969bc10f68f14 dbc5b562dfbf46d888285482e4fe52bb - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.12/site-packages/nova/virt/hardware.py:577
Oct 09 16:45:03 compute-0 nova_compute[117331]: 2025-10-09 16:45:03.305 2 DEBUG nova.virt.hardware [None req-90a86517-4844-4bdd-af5f-521a32a05ae4 edda4b76bea746b7aec969bc10f68f14 dbc5b562dfbf46d888285482e4fe52bb - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.12/site-packages/nova/virt/hardware.py:479
Oct 09 16:45:03 compute-0 nova_compute[117331]: 2025-10-09 16:45:03.305 2 DEBUG nova.virt.hardware [None req-90a86517-4844-4bdd-af5f-521a32a05ae4 edda4b76bea746b7aec969bc10f68f14 dbc5b562dfbf46d888285482e4fe52bb - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.12/site-packages/nova/virt/hardware.py:509
Oct 09 16:45:03 compute-0 nova_compute[117331]: 2025-10-09 16:45:03.305 2 DEBUG nova.virt.hardware [None req-90a86517-4844-4bdd-af5f-521a32a05ae4 edda4b76bea746b7aec969bc10f68f14 dbc5b562dfbf46d888285482e4fe52bb - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.12/site-packages/nova/virt/hardware.py:583
Oct 09 16:45:03 compute-0 nova_compute[117331]: 2025-10-09 16:45:03.306 2 DEBUG nova.virt.hardware [None req-90a86517-4844-4bdd-af5f-521a32a05ae4 edda4b76bea746b7aec969bc10f68f14 dbc5b562dfbf46d888285482e4fe52bb - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.12/site-packages/nova/virt/hardware.py:585
Oct 09 16:45:03 compute-0 nova_compute[117331]: 2025-10-09 16:45:03.310 2 DEBUG nova.virt.libvirt.vif [None req-90a86517-4844-4bdd-af5f-521a32a05ae4 edda4b76bea746b7aec969bc10f68f14 dbc5b562dfbf46d888285482e4fe52bb - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,compute_id=1,config_drive='',created_at=2025-10-09T16:44:50Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description=None,display_name='tempest-TestExecuteZoneMigrationStrategyVolume-server-1045211055',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testexecutezonemigrationstrategyvolume-server-104521105',id=34,image_ref='b7d6e0af-25e4-4227-9dc6-43143898ceee',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='dbc5b562dfbf46d888285482e4fe52bb',ramdisk_id='',reservation_id='r-8s3dawuh',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member,manager,admin',image_base_image_ref='b7d6e0af-25e4-4227-9dc6-43143898ceee',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestExecuteZoneMigrationStrategyVolume-905162571',owner_user_name='temp
est-TestExecuteZoneMigrationStrategyVolume-905162571-project-admin'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-09T16:44:56Z,user_data=None,user_id='edda4b76bea746b7aec969bc10f68f14',uuid=d9cb0583-a454-4df1-80f9-dc9f184101f2,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "b7eb3265-ba4d-4b59-9f4f-d449f3a8d71e", "address": "fa:16:3e:8b:c4:76", "network": {"id": "052ac062-eaae-4bf0-a40e-d100eb37efd8", "bridge": "br-int", "label": "tempest-TestExecuteZoneMigrationStrategyVolume-1366874524-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "35a62963f2914ca4a7e5f2a6b9a36e93", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb7eb3265-ba", "ovs_interfaceid": "b7eb3265-ba4d-4b59-9f4f-d449f3a8d71e", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.12/site-packages/nova/virt/libvirt/vif.py:574
Oct 09 16:45:03 compute-0 nova_compute[117331]: 2025-10-09 16:45:03.311 2 DEBUG nova.network.os_vif_util [None req-90a86517-4844-4bdd-af5f-521a32a05ae4 edda4b76bea746b7aec969bc10f68f14 dbc5b562dfbf46d888285482e4fe52bb - - default default] Converting VIF {"id": "b7eb3265-ba4d-4b59-9f4f-d449f3a8d71e", "address": "fa:16:3e:8b:c4:76", "network": {"id": "052ac062-eaae-4bf0-a40e-d100eb37efd8", "bridge": "br-int", "label": "tempest-TestExecuteZoneMigrationStrategyVolume-1366874524-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "35a62963f2914ca4a7e5f2a6b9a36e93", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb7eb3265-ba", "ovs_interfaceid": "b7eb3265-ba4d-4b59-9f4f-d449f3a8d71e", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.12/site-packages/nova/network/os_vif_util.py:511
Oct 09 16:45:03 compute-0 nova_compute[117331]: 2025-10-09 16:45:03.312 2 DEBUG nova.network.os_vif_util [None req-90a86517-4844-4bdd-af5f-521a32a05ae4 edda4b76bea746b7aec969bc10f68f14 dbc5b562dfbf46d888285482e4fe52bb - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:8b:c4:76,bridge_name='br-int',has_traffic_filtering=True,id=b7eb3265-ba4d-4b59-9f4f-d449f3a8d71e,network=Network(052ac062-eaae-4bf0-a40e-d100eb37efd8),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapb7eb3265-ba') nova_to_osvif_vif /usr/lib/python3.12/site-packages/nova/network/os_vif_util.py:548
Oct 09 16:45:03 compute-0 nova_compute[117331]: 2025-10-09 16:45:03.312 2 DEBUG nova.objects.instance [None req-90a86517-4844-4bdd-af5f-521a32a05ae4 edda4b76bea746b7aec969bc10f68f14 dbc5b562dfbf46d888285482e4fe52bb - - default default] Lazy-loading 'pci_devices' on Instance uuid d9cb0583-a454-4df1-80f9-dc9f184101f2 obj_load_attr /usr/lib/python3.12/site-packages/nova/objects/instance.py:1141
Oct 09 16:45:03 compute-0 nova_compute[117331]: 2025-10-09 16:45:03.820 2 DEBUG nova.virt.libvirt.driver [None req-90a86517-4844-4bdd-af5f-521a32a05ae4 edda4b76bea746b7aec969bc10f68f14 dbc5b562dfbf46d888285482e4fe52bb - - default default] [instance: d9cb0583-a454-4df1-80f9-dc9f184101f2] End _get_guest_xml xml=<domain type="kvm">
Oct 09 16:45:03 compute-0 nova_compute[117331]:   <uuid>d9cb0583-a454-4df1-80f9-dc9f184101f2</uuid>
Oct 09 16:45:03 compute-0 nova_compute[117331]:   <name>instance-00000022</name>
Oct 09 16:45:03 compute-0 nova_compute[117331]:   <memory>131072</memory>
Oct 09 16:45:03 compute-0 nova_compute[117331]:   <vcpu>1</vcpu>
Oct 09 16:45:03 compute-0 nova_compute[117331]:   <metadata>
Oct 09 16:45:03 compute-0 nova_compute[117331]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Oct 09 16:45:03 compute-0 nova_compute[117331]:       <nova:package version="32.1.0-0.20251008143712.076498e.el10"/>
Oct 09 16:45:03 compute-0 nova_compute[117331]:       <nova:name>tempest-TestExecuteZoneMigrationStrategyVolume-server-1045211055</nova:name>
Oct 09 16:45:03 compute-0 nova_compute[117331]:       <nova:creationTime>2025-10-09 16:45:03</nova:creationTime>
Oct 09 16:45:03 compute-0 nova_compute[117331]:       <nova:flavor name="m1.nano" id="5aeb5bf9-70f3-4ab6-b418-2c3319a7d2b3">
Oct 09 16:45:03 compute-0 nova_compute[117331]:         <nova:memory>128</nova:memory>
Oct 09 16:45:03 compute-0 nova_compute[117331]:         <nova:disk>1</nova:disk>
Oct 09 16:45:03 compute-0 nova_compute[117331]:         <nova:swap>0</nova:swap>
Oct 09 16:45:03 compute-0 nova_compute[117331]:         <nova:ephemeral>0</nova:ephemeral>
Oct 09 16:45:03 compute-0 nova_compute[117331]:         <nova:vcpus>1</nova:vcpus>
Oct 09 16:45:03 compute-0 nova_compute[117331]:         <nova:extraSpecs>
Oct 09 16:45:03 compute-0 nova_compute[117331]:           <nova:extraSpec name="hw_rng:allowed">True</nova:extraSpec>
Oct 09 16:45:03 compute-0 nova_compute[117331]:         </nova:extraSpecs>
Oct 09 16:45:03 compute-0 nova_compute[117331]:       </nova:flavor>
Oct 09 16:45:03 compute-0 nova_compute[117331]:       <nova:image uuid="b7d6e0af-25e4-4227-9dc6-43143898ceee">
Oct 09 16:45:03 compute-0 nova_compute[117331]:         <nova:containerFormat>bare</nova:containerFormat>
Oct 09 16:45:03 compute-0 nova_compute[117331]:         <nova:diskFormat>qcow2</nova:diskFormat>
Oct 09 16:45:03 compute-0 nova_compute[117331]:         <nova:minDisk>1</nova:minDisk>
Oct 09 16:45:03 compute-0 nova_compute[117331]:         <nova:minRam>0</nova:minRam>
Oct 09 16:45:03 compute-0 nova_compute[117331]:         <nova:properties>
Oct 09 16:45:03 compute-0 nova_compute[117331]:           <nova:property name="hw_rng_model">virtio</nova:property>
Oct 09 16:45:03 compute-0 nova_compute[117331]:         </nova:properties>
Oct 09 16:45:03 compute-0 nova_compute[117331]:       </nova:image>
Oct 09 16:45:03 compute-0 nova_compute[117331]:       <nova:owner>
Oct 09 16:45:03 compute-0 nova_compute[117331]:         <nova:user uuid="edda4b76bea746b7aec969bc10f68f14">tempest-TestExecuteZoneMigrationStrategyVolume-905162571-project-admin</nova:user>
Oct 09 16:45:03 compute-0 nova_compute[117331]:         <nova:project uuid="dbc5b562dfbf46d888285482e4fe52bb">tempest-TestExecuteZoneMigrationStrategyVolume-905162571</nova:project>
Oct 09 16:45:03 compute-0 nova_compute[117331]:       </nova:owner>
Oct 09 16:45:03 compute-0 nova_compute[117331]:       <nova:root type="image" uuid="b7d6e0af-25e4-4227-9dc6-43143898ceee"/>
Oct 09 16:45:03 compute-0 nova_compute[117331]:       <nova:ports>
Oct 09 16:45:03 compute-0 nova_compute[117331]:         <nova:port uuid="b7eb3265-ba4d-4b59-9f4f-d449f3a8d71e">
Oct 09 16:45:03 compute-0 nova_compute[117331]:           <nova:ip type="fixed" address="10.100.0.9" ipVersion="4"/>
Oct 09 16:45:03 compute-0 nova_compute[117331]:         </nova:port>
Oct 09 16:45:03 compute-0 nova_compute[117331]:       </nova:ports>
Oct 09 16:45:03 compute-0 nova_compute[117331]:     </nova:instance>
Oct 09 16:45:03 compute-0 nova_compute[117331]:   </metadata>
Oct 09 16:45:03 compute-0 nova_compute[117331]:   <sysinfo type="smbios">
Oct 09 16:45:03 compute-0 nova_compute[117331]:     <system>
Oct 09 16:45:03 compute-0 nova_compute[117331]:       <entry name="manufacturer">RDO</entry>
Oct 09 16:45:03 compute-0 nova_compute[117331]:       <entry name="product">OpenStack Compute</entry>
Oct 09 16:45:03 compute-0 nova_compute[117331]:       <entry name="version">32.1.0-0.20251008143712.076498e.el10</entry>
Oct 09 16:45:03 compute-0 nova_compute[117331]:       <entry name="serial">d9cb0583-a454-4df1-80f9-dc9f184101f2</entry>
Oct 09 16:45:03 compute-0 nova_compute[117331]:       <entry name="uuid">d9cb0583-a454-4df1-80f9-dc9f184101f2</entry>
Oct 09 16:45:03 compute-0 nova_compute[117331]:       <entry name="family">Virtual Machine</entry>
Oct 09 16:45:03 compute-0 nova_compute[117331]:     </system>
Oct 09 16:45:03 compute-0 nova_compute[117331]:   </sysinfo>
Oct 09 16:45:03 compute-0 nova_compute[117331]:   <os>
Oct 09 16:45:03 compute-0 nova_compute[117331]:     <type arch="x86_64" machine="q35">hvm</type>
Oct 09 16:45:03 compute-0 nova_compute[117331]:     <boot dev="hd"/>
Oct 09 16:45:03 compute-0 nova_compute[117331]:     <smbios mode="sysinfo"/>
Oct 09 16:45:03 compute-0 nova_compute[117331]:   </os>
Oct 09 16:45:03 compute-0 nova_compute[117331]:   <features>
Oct 09 16:45:03 compute-0 nova_compute[117331]:     <acpi/>
Oct 09 16:45:03 compute-0 nova_compute[117331]:     <apic/>
Oct 09 16:45:03 compute-0 nova_compute[117331]:     <vmcoreinfo/>
Oct 09 16:45:03 compute-0 nova_compute[117331]:   </features>
Oct 09 16:45:03 compute-0 nova_compute[117331]:   <clock offset="utc">
Oct 09 16:45:03 compute-0 nova_compute[117331]:     <timer name="pit" tickpolicy="delay"/>
Oct 09 16:45:03 compute-0 nova_compute[117331]:     <timer name="rtc" tickpolicy="catchup"/>
Oct 09 16:45:03 compute-0 nova_compute[117331]:     <timer name="hpet" present="no"/>
Oct 09 16:45:03 compute-0 nova_compute[117331]:   </clock>
Oct 09 16:45:03 compute-0 nova_compute[117331]:   <cpu mode="host-model" match="exact">
Oct 09 16:45:03 compute-0 nova_compute[117331]:     <topology sockets="1" cores="1" threads="1"/>
Oct 09 16:45:03 compute-0 nova_compute[117331]:   </cpu>
Oct 09 16:45:03 compute-0 nova_compute[117331]:   <devices>
Oct 09 16:45:03 compute-0 nova_compute[117331]:     <disk type="file" device="disk">
Oct 09 16:45:03 compute-0 nova_compute[117331]:       <driver name="qemu" type="qcow2" cache="none"/>
Oct 09 16:45:03 compute-0 nova_compute[117331]:       <source file="/var/lib/nova/instances/d9cb0583-a454-4df1-80f9-dc9f184101f2/disk"/>
Oct 09 16:45:03 compute-0 nova_compute[117331]:       <target dev="vda" bus="virtio"/>
Oct 09 16:45:03 compute-0 nova_compute[117331]:     </disk>
Oct 09 16:45:03 compute-0 nova_compute[117331]:     <disk type="file" device="cdrom">
Oct 09 16:45:03 compute-0 nova_compute[117331]:       <driver name="qemu" type="raw" cache="none"/>
Oct 09 16:45:03 compute-0 nova_compute[117331]:       <source file="/var/lib/nova/instances/d9cb0583-a454-4df1-80f9-dc9f184101f2/disk.config"/>
Oct 09 16:45:03 compute-0 nova_compute[117331]:       <target dev="sda" bus="sata"/>
Oct 09 16:45:03 compute-0 nova_compute[117331]:     </disk>
Oct 09 16:45:03 compute-0 nova_compute[117331]:     <interface type="ethernet">
Oct 09 16:45:03 compute-0 nova_compute[117331]:       <mac address="fa:16:3e:8b:c4:76"/>
Oct 09 16:45:03 compute-0 nova_compute[117331]:       <model type="virtio"/>
Oct 09 16:45:03 compute-0 nova_compute[117331]:       <driver name="vhost" rx_queue_size="512"/>
Oct 09 16:45:03 compute-0 nova_compute[117331]:       <mtu size="1442"/>
Oct 09 16:45:03 compute-0 nova_compute[117331]:       <target dev="tapb7eb3265-ba"/>
Oct 09 16:45:03 compute-0 nova_compute[117331]:     </interface>
Oct 09 16:45:03 compute-0 nova_compute[117331]:     <serial type="pty">
Oct 09 16:45:03 compute-0 nova_compute[117331]:       <log file="/var/lib/nova/instances/d9cb0583-a454-4df1-80f9-dc9f184101f2/console.log" append="off"/>
Oct 09 16:45:03 compute-0 nova_compute[117331]:     </serial>
Oct 09 16:45:03 compute-0 nova_compute[117331]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Oct 09 16:45:03 compute-0 nova_compute[117331]:     <video>
Oct 09 16:45:03 compute-0 nova_compute[117331]:       <model type="virtio"/>
Oct 09 16:45:03 compute-0 nova_compute[117331]:     </video>
Oct 09 16:45:03 compute-0 nova_compute[117331]:     <input type="tablet" bus="usb"/>
Oct 09 16:45:03 compute-0 nova_compute[117331]:     <rng model="virtio">
Oct 09 16:45:03 compute-0 nova_compute[117331]:       <backend model="random">/dev/urandom</backend>
Oct 09 16:45:03 compute-0 nova_compute[117331]:     </rng>
Oct 09 16:45:03 compute-0 nova_compute[117331]:     <controller type="pci" model="pcie-root"/>
Oct 09 16:45:03 compute-0 nova_compute[117331]:     <controller type="pci" model="pcie-root-port"/>
Oct 09 16:45:03 compute-0 nova_compute[117331]:     <controller type="pci" model="pcie-root-port"/>
Oct 09 16:45:03 compute-0 nova_compute[117331]:     <controller type="pci" model="pcie-root-port"/>
Oct 09 16:45:03 compute-0 nova_compute[117331]:     <controller type="pci" model="pcie-root-port"/>
Oct 09 16:45:03 compute-0 nova_compute[117331]:     <controller type="pci" model="pcie-root-port"/>
Oct 09 16:45:03 compute-0 nova_compute[117331]:     <controller type="pci" model="pcie-root-port"/>
Oct 09 16:45:03 compute-0 nova_compute[117331]:     <controller type="pci" model="pcie-root-port"/>
Oct 09 16:45:03 compute-0 nova_compute[117331]:     <controller type="pci" model="pcie-root-port"/>
Oct 09 16:45:03 compute-0 nova_compute[117331]:     <controller type="pci" model="pcie-root-port"/>
Oct 09 16:45:03 compute-0 nova_compute[117331]:     <controller type="pci" model="pcie-root-port"/>
Oct 09 16:45:03 compute-0 nova_compute[117331]:     <controller type="pci" model="pcie-root-port"/>
Oct 09 16:45:03 compute-0 nova_compute[117331]:     <controller type="pci" model="pcie-root-port"/>
Oct 09 16:45:03 compute-0 nova_compute[117331]:     <controller type="pci" model="pcie-root-port"/>
Oct 09 16:45:03 compute-0 nova_compute[117331]:     <controller type="pci" model="pcie-root-port"/>
Oct 09 16:45:03 compute-0 nova_compute[117331]:     <controller type="pci" model="pcie-root-port"/>
Oct 09 16:45:03 compute-0 nova_compute[117331]:     <controller type="pci" model="pcie-root-port"/>
Oct 09 16:45:03 compute-0 nova_compute[117331]:     <controller type="pci" model="pcie-root-port"/>
Oct 09 16:45:03 compute-0 nova_compute[117331]:     <controller type="pci" model="pcie-root-port"/>
Oct 09 16:45:03 compute-0 nova_compute[117331]:     <controller type="pci" model="pcie-root-port"/>
Oct 09 16:45:03 compute-0 nova_compute[117331]:     <controller type="pci" model="pcie-root-port"/>
Oct 09 16:45:03 compute-0 nova_compute[117331]:     <controller type="pci" model="pcie-root-port"/>
Oct 09 16:45:03 compute-0 nova_compute[117331]:     <controller type="pci" model="pcie-root-port"/>
Oct 09 16:45:03 compute-0 nova_compute[117331]:     <controller type="pci" model="pcie-root-port"/>
Oct 09 16:45:03 compute-0 nova_compute[117331]:     <controller type="pci" model="pcie-root-port"/>
Oct 09 16:45:03 compute-0 nova_compute[117331]:     <controller type="usb" index="0"/>
Oct 09 16:45:03 compute-0 nova_compute[117331]:     <memballoon model="virtio" autodeflate="on" freePageReporting="on">
Oct 09 16:45:03 compute-0 nova_compute[117331]:       <stats period="10"/>
Oct 09 16:45:03 compute-0 nova_compute[117331]:     </memballoon>
Oct 09 16:45:03 compute-0 nova_compute[117331]:   </devices>
Oct 09 16:45:03 compute-0 nova_compute[117331]: </domain>
Oct 09 16:45:03 compute-0 nova_compute[117331]:  _get_guest_xml /usr/lib/python3.12/site-packages/nova/virt/libvirt/driver.py:8052
Oct 09 16:45:03 compute-0 nova_compute[117331]: 2025-10-09 16:45:03.822 2 DEBUG nova.compute.manager [None req-90a86517-4844-4bdd-af5f-521a32a05ae4 edda4b76bea746b7aec969bc10f68f14 dbc5b562dfbf46d888285482e4fe52bb - - default default] [instance: d9cb0583-a454-4df1-80f9-dc9f184101f2] Preparing to wait for external event network-vif-plugged-b7eb3265-ba4d-4b59-9f4f-d449f3a8d71e prepare_for_instance_event /usr/lib/python3.12/site-packages/nova/compute/manager.py:307
Oct 09 16:45:03 compute-0 nova_compute[117331]: 2025-10-09 16:45:03.823 2 DEBUG oslo_concurrency.lockutils [None req-90a86517-4844-4bdd-af5f-521a32a05ae4 edda4b76bea746b7aec969bc10f68f14 dbc5b562dfbf46d888285482e4fe52bb - - default default] Acquiring lock "d9cb0583-a454-4df1-80f9-dc9f184101f2-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:405
Oct 09 16:45:03 compute-0 nova_compute[117331]: 2025-10-09 16:45:03.823 2 DEBUG oslo_concurrency.lockutils [None req-90a86517-4844-4bdd-af5f-521a32a05ae4 edda4b76bea746b7aec969bc10f68f14 dbc5b562dfbf46d888285482e4fe52bb - - default default] Lock "d9cb0583-a454-4df1-80f9-dc9f184101f2-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:410
Oct 09 16:45:03 compute-0 nova_compute[117331]: 2025-10-09 16:45:03.824 2 DEBUG oslo_concurrency.lockutils [None req-90a86517-4844-4bdd-af5f-521a32a05ae4 edda4b76bea746b7aec969bc10f68f14 dbc5b562dfbf46d888285482e4fe52bb - - default default] Lock "d9cb0583-a454-4df1-80f9-dc9f184101f2-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:424
Oct 09 16:45:03 compute-0 nova_compute[117331]: 2025-10-09 16:45:03.825 2 DEBUG nova.virt.libvirt.vif [None req-90a86517-4844-4bdd-af5f-521a32a05ae4 edda4b76bea746b7aec969bc10f68f14 dbc5b562dfbf46d888285482e4fe52bb - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,compute_id=1,config_drive='',created_at=2025-10-09T16:44:50Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description=None,display_name='tempest-TestExecuteZoneMigrationStrategyVolume-server-1045211055',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testexecutezonemigrationstrategyvolume-server-104521105',id=34,image_ref='b7d6e0af-25e4-4227-9dc6-43143898ceee',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='dbc5b562dfbf46d888285482e4fe52bb',ramdisk_id='',reservation_id='r-8s3dawuh',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member,manager,admin',image_base_image_ref='b7d6e0af-25e4-4227-9dc6-43143898ceee',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestExecuteZoneMigrationStrategyVolume-905162571',owner_user_name='tempest-TestExecuteZoneMigrationStrategyVolume-905162571-project-admin'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-09T16:44:56Z,user_data=None,user_id='edda4b76bea746b7aec969bc10f68f14',uuid=d9cb0583-a454-4df1-80f9-dc9f184101f2,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "b7eb3265-ba4d-4b59-9f4f-d449f3a8d71e", "address": "fa:16:3e:8b:c4:76", "network": {"id": "052ac062-eaae-4bf0-a40e-d100eb37efd8", "bridge": "br-int", "label": "tempest-TestExecuteZoneMigrationStrategyVolume-1366874524-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "35a62963f2914ca4a7e5f2a6b9a36e93", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb7eb3265-ba", "ovs_interfaceid": "b7eb3265-ba4d-4b59-9f4f-d449f3a8d71e", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.12/site-packages/nova/virt/libvirt/vif.py:721
Oct 09 16:45:03 compute-0 nova_compute[117331]: 2025-10-09 16:45:03.826 2 DEBUG nova.network.os_vif_util [None req-90a86517-4844-4bdd-af5f-521a32a05ae4 edda4b76bea746b7aec969bc10f68f14 dbc5b562dfbf46d888285482e4fe52bb - - default default] Converting VIF {"id": "b7eb3265-ba4d-4b59-9f4f-d449f3a8d71e", "address": "fa:16:3e:8b:c4:76", "network": {"id": "052ac062-eaae-4bf0-a40e-d100eb37efd8", "bridge": "br-int", "label": "tempest-TestExecuteZoneMigrationStrategyVolume-1366874524-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "35a62963f2914ca4a7e5f2a6b9a36e93", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb7eb3265-ba", "ovs_interfaceid": "b7eb3265-ba4d-4b59-9f4f-d449f3a8d71e", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.12/site-packages/nova/network/os_vif_util.py:511
Oct 09 16:45:03 compute-0 nova_compute[117331]: 2025-10-09 16:45:03.827 2 DEBUG nova.network.os_vif_util [None req-90a86517-4844-4bdd-af5f-521a32a05ae4 edda4b76bea746b7aec969bc10f68f14 dbc5b562dfbf46d888285482e4fe52bb - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:8b:c4:76,bridge_name='br-int',has_traffic_filtering=True,id=b7eb3265-ba4d-4b59-9f4f-d449f3a8d71e,network=Network(052ac062-eaae-4bf0-a40e-d100eb37efd8),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapb7eb3265-ba') nova_to_osvif_vif /usr/lib/python3.12/site-packages/nova/network/os_vif_util.py:548
Oct 09 16:45:03 compute-0 nova_compute[117331]: 2025-10-09 16:45:03.828 2 DEBUG os_vif [None req-90a86517-4844-4bdd-af5f-521a32a05ae4 edda4b76bea746b7aec969bc10f68f14 dbc5b562dfbf46d888285482e4fe52bb - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:8b:c4:76,bridge_name='br-int',has_traffic_filtering=True,id=b7eb3265-ba4d-4b59-9f4f-d449f3a8d71e,network=Network(052ac062-eaae-4bf0-a40e-d100eb37efd8),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapb7eb3265-ba') plug /usr/lib/python3.12/site-packages/os_vif/__init__.py:76
Oct 09 16:45:03 compute-0 nova_compute[117331]: 2025-10-09 16:45:03.829 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 22 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 09 16:45:03 compute-0 nova_compute[117331]: 2025-10-09 16:45:03.830 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.12/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 09 16:45:03 compute-0 nova_compute[117331]: 2025-10-09 16:45:03.831 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.12/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 09 16:45:03 compute-0 nova_compute[117331]: 2025-10-09 16:45:03.836 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 22 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 09 16:45:03 compute-0 nova_compute[117331]: 2025-10-09 16:45:03.837 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbCreateCommand(_result=None, table=QoS, columns={'type': 'linux-noop', 'external_ids': {'id': 'c1969730-a938-58fd-a104-7e7945c955e0', '_type': 'linux-noop'}}, row=False) do_commit /usr/lib/python3.12/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 09 16:45:03 compute-0 nova_compute[117331]: 2025-10-09 16:45:03.840 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 09 16:45:03 compute-0 nova_compute[117331]: 2025-10-09 16:45:03.842 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 09 16:45:03 compute-0 nova_compute[117331]: 2025-10-09 16:45:03.846 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 22 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 09 16:45:03 compute-0 nova_compute[117331]: 2025-10-09 16:45:03.847 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapb7eb3265-ba, may_exist=True, interface_attrs={}) do_commit /usr/lib/python3.12/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 09 16:45:03 compute-0 nova_compute[117331]: 2025-10-09 16:45:03.848 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Port, record=tapb7eb3265-ba, col_values=(('qos', UUID('afa3e686-bd02-466d-bec1-42f93daaed13')),), if_exists=True) do_commit /usr/lib/python3.12/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 09 16:45:03 compute-0 nova_compute[117331]: 2025-10-09 16:45:03.848 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=2): DbSetCommand(_result=None, table=Interface, record=tapb7eb3265-ba, col_values=(('external_ids', {'iface-id': 'b7eb3265-ba4d-4b59-9f4f-d449f3a8d71e', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:8b:c4:76', 'vm-uuid': 'd9cb0583-a454-4df1-80f9-dc9f184101f2'}),), if_exists=True) do_commit /usr/lib/python3.12/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 09 16:45:03 compute-0 nova_compute[117331]: 2025-10-09 16:45:03.850 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 09 16:45:03 compute-0 NetworkManager[1028]: <info>  [1760028303.8511] manager: (tapb7eb3265-ba): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/110)
Oct 09 16:45:03 compute-0 nova_compute[117331]: 2025-10-09 16:45:03.853 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:248
Oct 09 16:45:03 compute-0 nova_compute[117331]: 2025-10-09 16:45:03.858 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 09 16:45:03 compute-0 nova_compute[117331]: 2025-10-09 16:45:03.859 2 INFO os_vif [None req-90a86517-4844-4bdd-af5f-521a32a05ae4 edda4b76bea746b7aec969bc10f68f14 dbc5b562dfbf46d888285482e4fe52bb - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:8b:c4:76,bridge_name='br-int',has_traffic_filtering=True,id=b7eb3265-ba4d-4b59-9f4f-d449f3a8d71e,network=Network(052ac062-eaae-4bf0-a40e-d100eb37efd8),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapb7eb3265-ba')
Oct 09 16:45:04 compute-0 sshd-session[155558]: Invalid user debian from 124.60.67.43 port 50478
Oct 09 16:45:05 compute-0 nova_compute[117331]: 2025-10-09 16:45:05.405 2 DEBUG nova.virt.libvirt.driver [None req-90a86517-4844-4bdd-af5f-521a32a05ae4 edda4b76bea746b7aec969bc10f68f14 dbc5b562dfbf46d888285482e4fe52bb - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.12/site-packages/nova/virt/libvirt/driver.py:13022
Oct 09 16:45:05 compute-0 nova_compute[117331]: 2025-10-09 16:45:05.406 2 DEBUG nova.virt.libvirt.driver [None req-90a86517-4844-4bdd-af5f-521a32a05ae4 edda4b76bea746b7aec969bc10f68f14 dbc5b562dfbf46d888285482e4fe52bb - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.12/site-packages/nova/virt/libvirt/driver.py:13022
Oct 09 16:45:05 compute-0 nova_compute[117331]: 2025-10-09 16:45:05.406 2 DEBUG nova.virt.libvirt.driver [None req-90a86517-4844-4bdd-af5f-521a32a05ae4 edda4b76bea746b7aec969bc10f68f14 dbc5b562dfbf46d888285482e4fe52bb - - default default] No VIF found with MAC fa:16:3e:8b:c4:76, not building metadata _build_interface_metadata /usr/lib/python3.12/site-packages/nova/virt/libvirt/driver.py:12998
Oct 09 16:45:05 compute-0 nova_compute[117331]: 2025-10-09 16:45:05.406 2 INFO nova.virt.libvirt.driver [None req-90a86517-4844-4bdd-af5f-521a32a05ae4 edda4b76bea746b7aec969bc10f68f14 dbc5b562dfbf46d888285482e4fe52bb - - default default] [instance: d9cb0583-a454-4df1-80f9-dc9f184101f2] Using config drive
Oct 09 16:45:05 compute-0 nova_compute[117331]: 2025-10-09 16:45:05.915 2 WARNING neutronclient.v2_0.client [None req-90a86517-4844-4bdd-af5f-521a32a05ae4 edda4b76bea746b7aec969bc10f68f14 dbc5b562dfbf46d888285482e4fe52bb - - default default] The python binding code in neutronclient is deprecated in favor of OpenstackSDK, please use that as this will be removed in a future release.
Oct 09 16:45:06 compute-0 nova_compute[117331]: 2025-10-09 16:45:06.354 2 INFO nova.virt.libvirt.driver [None req-90a86517-4844-4bdd-af5f-521a32a05ae4 edda4b76bea746b7aec969bc10f68f14 dbc5b562dfbf46d888285482e4fe52bb - - default default] [instance: d9cb0583-a454-4df1-80f9-dc9f184101f2] Creating config drive at /var/lib/nova/instances/d9cb0583-a454-4df1-80f9-dc9f184101f2/disk.config
Oct 09 16:45:06 compute-0 nova_compute[117331]: 2025-10-09 16:45:06.359 2 DEBUG oslo_concurrency.processutils [None req-90a86517-4844-4bdd-af5f-521a32a05ae4 edda4b76bea746b7aec969bc10f68f14 dbc5b562dfbf46d888285482e4fe52bb - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/d9cb0583-a454-4df1-80f9-dc9f184101f2/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 32.1.0-0.20251008143712.076498e.el10 -quiet -J -r -V config-2 /tmp/tmpaowue2k5 execute /usr/lib/python3.12/site-packages/oslo_concurrency/processutils.py:349
Oct 09 16:45:06 compute-0 nova_compute[117331]: 2025-10-09 16:45:06.499 2 DEBUG oslo_concurrency.processutils [None req-90a86517-4844-4bdd-af5f-521a32a05ae4 edda4b76bea746b7aec969bc10f68f14 dbc5b562dfbf46d888285482e4fe52bb - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/d9cb0583-a454-4df1-80f9-dc9f184101f2/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 32.1.0-0.20251008143712.076498e.el10 -quiet -J -r -V config-2 /tmp/tmpaowue2k5" returned: 0 in 0.139s execute /usr/lib/python3.12/site-packages/oslo_concurrency/processutils.py:372
Oct 09 16:45:06 compute-0 kernel: tapb7eb3265-ba: entered promiscuous mode
Oct 09 16:45:06 compute-0 NetworkManager[1028]: <info>  [1760028306.5526] manager: (tapb7eb3265-ba): new Tun device (/org/freedesktop/NetworkManager/Devices/111)
Oct 09 16:45:06 compute-0 ovn_controller[19752]: 2025-10-09T16:45:06Z|00306|binding|INFO|Claiming lport b7eb3265-ba4d-4b59-9f4f-d449f3a8d71e for this chassis.
Oct 09 16:45:06 compute-0 ovn_controller[19752]: 2025-10-09T16:45:06Z|00307|binding|INFO|b7eb3265-ba4d-4b59-9f4f-d449f3a8d71e: Claiming fa:16:3e:8b:c4:76 10.100.0.9
Oct 09 16:45:06 compute-0 nova_compute[117331]: 2025-10-09 16:45:06.554 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 09 16:45:06 compute-0 systemd-udevd[155639]: Network interface NamePolicy= disabled on kernel command line.
Oct 09 16:45:06 compute-0 ovn_metadata_agent[28608]: 2025-10-09 16:45:06.587 28613 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:8b:c4:76 10.100.0.9'], port_security=['fa:16:3e:8b:c4:76 10.100.0.9'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.9/28', 'neutron:device_id': 'd9cb0583-a454-4df1-80f9-dc9f184101f2', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-052ac062-eaae-4bf0-a40e-d100eb37efd8', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'dbc5b562dfbf46d888285482e4fe52bb', 'neutron:revision_number': '4', 'neutron:security_group_ids': '279998fa-fe6f-4d20-a70e-de82ebfbc63a', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=d539eb18-5c5b-45b9-86bd-feaec6a9254a, chassis=[<ovs.db.idl.Row object at 0x7f37b2576840>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f37b2576840>], logical_port=b7eb3265-ba4d-4b59-9f4f-d449f3a8d71e) old=Port_Binding(chassis=[]) matches /usr/lib/python3.12/site-packages/ovsdbapp/backend/ovs_idl/event.py:55
Oct 09 16:45:06 compute-0 ovn_metadata_agent[28608]: 2025-10-09 16:45:06.588 28613 INFO neutron.agent.ovn.metadata.agent [-] Port b7eb3265-ba4d-4b59-9f4f-d449f3a8d71e in datapath 052ac062-eaae-4bf0-a40e-d100eb37efd8 bound to our chassis
Oct 09 16:45:06 compute-0 ovn_metadata_agent[28608]: 2025-10-09 16:45:06.589 28613 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 052ac062-eaae-4bf0-a40e-d100eb37efd8
Oct 09 16:45:06 compute-0 NetworkManager[1028]: <info>  [1760028306.6008] device (tapb7eb3265-ba): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Oct 09 16:45:06 compute-0 NetworkManager[1028]: <info>  [1760028306.6017] device (tapb7eb3265-ba): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Oct 09 16:45:06 compute-0 ovn_metadata_agent[28608]: 2025-10-09 16:45:06.602 139687 DEBUG oslo.privsep.daemon [-] privsep: reply[598e9e59-b723-4d0b-b2a3-279b7252deb1]: (4, False) _call_back /usr/lib/python3.12/site-packages/oslo_privsep/daemon.py:515
Oct 09 16:45:06 compute-0 ovn_metadata_agent[28608]: 2025-10-09 16:45:06.603 28613 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap052ac062-e1 in ovnmeta-052ac062-eaae-4bf0-a40e-d100eb37efd8 namespace provision_datapath /usr/lib/python3.12/site-packages/neutron/agent/ovn/metadata/agent.py:794
Oct 09 16:45:06 compute-0 ovn_metadata_agent[28608]: 2025-10-09 16:45:06.604 139687 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap052ac062-e0 not found in namespace None get_link_id /usr/lib/python3.12/site-packages/neutron/privileged/agent/linux/ip_lib.py:203
Oct 09 16:45:06 compute-0 ovn_metadata_agent[28608]: 2025-10-09 16:45:06.604 139687 DEBUG oslo.privsep.daemon [-] privsep: reply[64d1c2f3-aeb7-44c3-8427-293cf30fa4a2]: (4, False) _call_back /usr/lib/python3.12/site-packages/oslo_privsep/daemon.py:515
Oct 09 16:45:06 compute-0 ovn_metadata_agent[28608]: 2025-10-09 16:45:06.605 139687 DEBUG oslo.privsep.daemon [-] privsep: reply[3aa0a5f8-a04d-481f-8c93-af70bf247899]: (4, False) _call_back /usr/lib/python3.12/site-packages/oslo_privsep/daemon.py:515
Oct 09 16:45:06 compute-0 ovn_metadata_agent[28608]: 2025-10-09 16:45:06.616 28727 DEBUG oslo.privsep.daemon [-] privsep: reply[85a2f51f-49c2-4dc0-951b-1bf716fad20b]: (4, None) _call_back /usr/lib/python3.12/site-packages/oslo_privsep/daemon.py:515
Oct 09 16:45:06 compute-0 systemd-machined[77487]: New machine qemu-26-instance-00000022.
Oct 09 16:45:06 compute-0 nova_compute[117331]: 2025-10-09 16:45:06.628 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 09 16:45:06 compute-0 nova_compute[117331]: 2025-10-09 16:45:06.633 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 09 16:45:06 compute-0 ovn_controller[19752]: 2025-10-09T16:45:06Z|00308|binding|INFO|Setting lport b7eb3265-ba4d-4b59-9f4f-d449f3a8d71e ovn-installed in OVS
Oct 09 16:45:06 compute-0 ovn_controller[19752]: 2025-10-09T16:45:06Z|00309|binding|INFO|Setting lport b7eb3265-ba4d-4b59-9f4f-d449f3a8d71e up in Southbound
Oct 09 16:45:06 compute-0 nova_compute[117331]: 2025-10-09 16:45:06.637 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 09 16:45:06 compute-0 ovn_metadata_agent[28608]: 2025-10-09 16:45:06.639 139687 DEBUG oslo.privsep.daemon [-] privsep: reply[97f9f91d-485d-4243-a9ce-e630f52a5cad]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.12/site-packages/oslo_privsep/daemon.py:515
Oct 09 16:45:06 compute-0 systemd[1]: Started Virtual Machine qemu-26-instance-00000022.
Oct 09 16:45:06 compute-0 ovn_metadata_agent[28608]: 2025-10-09 16:45:06.672 141048 DEBUG oslo.privsep.daemon [-] privsep: reply[45fec94e-205e-4671-a601-ef1e67bd7cf5]: (4, None) _call_back /usr/lib/python3.12/site-packages/oslo_privsep/daemon.py:515
Oct 09 16:45:06 compute-0 ovn_metadata_agent[28608]: 2025-10-09 16:45:06.676 139687 DEBUG oslo.privsep.daemon [-] privsep: reply[4c50769f-5977-4ba3-92fe-0ff977375407]: (4, None) _call_back /usr/lib/python3.12/site-packages/oslo_privsep/daemon.py:515
Oct 09 16:45:06 compute-0 NetworkManager[1028]: <info>  [1760028306.6773] manager: (tap052ac062-e0): new Veth device (/org/freedesktop/NetworkManager/Devices/112)
Oct 09 16:45:06 compute-0 systemd-udevd[155644]: Network interface NamePolicy= disabled on kernel command line.
Oct 09 16:45:06 compute-0 ovn_metadata_agent[28608]: 2025-10-09 16:45:06.708 141048 DEBUG oslo.privsep.daemon [-] privsep: reply[f4164051-43de-4001-ad95-e84cb16f4125]: (4, None) _call_back /usr/lib/python3.12/site-packages/oslo_privsep/daemon.py:515
Oct 09 16:45:06 compute-0 ovn_metadata_agent[28608]: 2025-10-09 16:45:06.711 141048 DEBUG oslo.privsep.daemon [-] privsep: reply[acfdfe0b-5ef8-4b8d-9b75-4b31f65693c0]: (4, None) _call_back /usr/lib/python3.12/site-packages/oslo_privsep/daemon.py:515
Oct 09 16:45:06 compute-0 NetworkManager[1028]: <info>  [1760028306.7374] device (tap052ac062-e0): carrier: link connected
Oct 09 16:45:06 compute-0 ovn_metadata_agent[28608]: 2025-10-09 16:45:06.744 141048 DEBUG oslo.privsep.daemon [-] privsep: reply[93a8a96a-cead-44bb-a31d-46c1cb5def41]: (4, None) _call_back /usr/lib/python3.12/site-packages/oslo_privsep/daemon.py:515
Oct 09 16:45:06 compute-0 ovn_metadata_agent[28608]: 2025-10-09 16:45:06.762 139687 DEBUG oslo.privsep.daemon [-] privsep: reply[7b5f3095-37f1-45ec-8161-c21e5eb3c7f9]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap052ac062-e1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:81:0b:60'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 84], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 325034, 'reachable_time': 33767, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 155675, 'error': None, 'target': 'ovnmeta-052ac062-eaae-4bf0-a40e-d100eb37efd8', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.12/site-packages/oslo_privsep/daemon.py:515
Oct 09 16:45:06 compute-0 ovn_metadata_agent[28608]: 2025-10-09 16:45:06.783 139687 DEBUG oslo.privsep.daemon [-] privsep: reply[4d032c49-0a4c-4208-99f8-ca560b3f7f67]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe81:b60'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 325034, 'tstamp': 325034}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 155676, 'error': None, 'target': 'ovnmeta-052ac062-eaae-4bf0-a40e-d100eb37efd8', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.12/site-packages/oslo_privsep/daemon.py:515
Oct 09 16:45:06 compute-0 ovn_metadata_agent[28608]: 2025-10-09 16:45:06.802 139687 DEBUG oslo.privsep.daemon [-] privsep: reply[46b0b4f8-80a2-446f-8e4e-2c68efeb4391]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap052ac062-e1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:81:0b:60'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 84], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 325034, 'reachable_time': 33767, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 155677, 'error': None, 'target': 'ovnmeta-052ac062-eaae-4bf0-a40e-d100eb37efd8', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.12/site-packages/oslo_privsep/daemon.py:515
Oct 09 16:45:06 compute-0 sshd-session[155558]: pam_unix(sshd:auth): check pass; user unknown
Oct 09 16:45:06 compute-0 sshd-session[155558]: pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=124.60.67.43
Oct 09 16:45:06 compute-0 nova_compute[117331]: 2025-10-09 16:45:06.810 2 DEBUG nova.compute.manager [req-576ed723-0e26-4bae-bd94-90d2cccba3a3 req-12e2df78-05b0-41b7-b275-41813b5441b7 ee15961ad2be4bada6c106ac4f69f422 c076c42271004e7aa86e84416e33f826 - - default default] [instance: d9cb0583-a454-4df1-80f9-dc9f184101f2] Received event network-vif-plugged-b7eb3265-ba4d-4b59-9f4f-d449f3a8d71e external_instance_event /usr/lib/python3.12/site-packages/nova/compute/manager.py:11812
Oct 09 16:45:06 compute-0 nova_compute[117331]: 2025-10-09 16:45:06.810 2 DEBUG oslo_concurrency.lockutils [req-576ed723-0e26-4bae-bd94-90d2cccba3a3 req-12e2df78-05b0-41b7-b275-41813b5441b7 ee15961ad2be4bada6c106ac4f69f422 c076c42271004e7aa86e84416e33f826 - - default default] Acquiring lock "d9cb0583-a454-4df1-80f9-dc9f184101f2-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:405
Oct 09 16:45:06 compute-0 nova_compute[117331]: 2025-10-09 16:45:06.810 2 DEBUG oslo_concurrency.lockutils [req-576ed723-0e26-4bae-bd94-90d2cccba3a3 req-12e2df78-05b0-41b7-b275-41813b5441b7 ee15961ad2be4bada6c106ac4f69f422 c076c42271004e7aa86e84416e33f826 - - default default] Lock "d9cb0583-a454-4df1-80f9-dc9f184101f2-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:410
Oct 09 16:45:06 compute-0 nova_compute[117331]: 2025-10-09 16:45:06.810 2 DEBUG oslo_concurrency.lockutils [req-576ed723-0e26-4bae-bd94-90d2cccba3a3 req-12e2df78-05b0-41b7-b275-41813b5441b7 ee15961ad2be4bada6c106ac4f69f422 c076c42271004e7aa86e84416e33f826 - - default default] Lock "d9cb0583-a454-4df1-80f9-dc9f184101f2-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:424
Oct 09 16:45:06 compute-0 nova_compute[117331]: 2025-10-09 16:45:06.811 2 DEBUG nova.compute.manager [req-576ed723-0e26-4bae-bd94-90d2cccba3a3 req-12e2df78-05b0-41b7-b275-41813b5441b7 ee15961ad2be4bada6c106ac4f69f422 c076c42271004e7aa86e84416e33f826 - - default default] [instance: d9cb0583-a454-4df1-80f9-dc9f184101f2] Processing event network-vif-plugged-b7eb3265-ba4d-4b59-9f4f-d449f3a8d71e _process_instance_event /usr/lib/python3.12/site-packages/nova/compute/manager.py:11572
Oct 09 16:45:06 compute-0 ovn_metadata_agent[28608]: 2025-10-09 16:45:06.839 139687 DEBUG oslo.privsep.daemon [-] privsep: reply[62cbdb7d-bd7f-40f9-90e7-36587e52f53e]: (4, None) _call_back /usr/lib/python3.12/site-packages/oslo_privsep/daemon.py:515
Oct 09 16:45:06 compute-0 ovn_metadata_agent[28608]: 2025-10-09 16:45:06.903 139687 DEBUG oslo.privsep.daemon [-] privsep: reply[506ec250-809a-4998-ae28-e98b99a03798]: (4, None) _call_back /usr/lib/python3.12/site-packages/oslo_privsep/daemon.py:515
Oct 09 16:45:06 compute-0 ovn_metadata_agent[28608]: 2025-10-09 16:45:06.905 28613 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap052ac062-e0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.12/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 09 16:45:06 compute-0 ovn_metadata_agent[28608]: 2025-10-09 16:45:06.905 28613 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.12/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 09 16:45:06 compute-0 ovn_metadata_agent[28608]: 2025-10-09 16:45:06.905 28613 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap052ac062-e0, may_exist=True, interface_attrs={}) do_commit /usr/lib/python3.12/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 09 16:45:06 compute-0 NetworkManager[1028]: <info>  [1760028306.9074] manager: (tap052ac062-e0): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/113)
Oct 09 16:45:06 compute-0 nova_compute[117331]: 2025-10-09 16:45:06.907 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 09 16:45:06 compute-0 kernel: tap052ac062-e0: entered promiscuous mode
Oct 09 16:45:06 compute-0 ovn_metadata_agent[28608]: 2025-10-09 16:45:06.909 28613 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap052ac062-e0, col_values=(('external_ids', {'iface-id': '6594e9c2-70fd-4a14-afe1-fd5006870857'}),), if_exists=True) do_commit /usr/lib/python3.12/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 09 16:45:06 compute-0 ovn_controller[19752]: 2025-10-09T16:45:06Z|00310|binding|INFO|Releasing lport 6594e9c2-70fd-4a14-afe1-fd5006870857 from this chassis (sb_readonly=0)
Oct 09 16:45:06 compute-0 nova_compute[117331]: 2025-10-09 16:45:06.910 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 09 16:45:06 compute-0 nova_compute[117331]: 2025-10-09 16:45:06.921 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 09 16:45:06 compute-0 ovn_metadata_agent[28608]: 2025-10-09 16:45:06.922 139687 DEBUG oslo.privsep.daemon [-] privsep: reply[6550213b-6fcb-480c-81e7-a11a5f94c8ee]: (4, '') _call_back /usr/lib/python3.12/site-packages/oslo_privsep/daemon.py:515
Oct 09 16:45:06 compute-0 ovn_metadata_agent[28608]: 2025-10-09 16:45:06.924 28613 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/052ac062-eaae-4bf0-a40e-d100eb37efd8.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/052ac062-eaae-4bf0-a40e-d100eb37efd8.pid.haproxy' get_value_from_file /usr/lib/python3.12/site-packages/neutron/agent/linux/utils.py:268
Oct 09 16:45:06 compute-0 ovn_metadata_agent[28608]: 2025-10-09 16:45:06.924 28613 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/052ac062-eaae-4bf0-a40e-d100eb37efd8.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/052ac062-eaae-4bf0-a40e-d100eb37efd8.pid.haproxy' get_value_from_file /usr/lib/python3.12/site-packages/neutron/agent/linux/utils.py:268
Oct 09 16:45:06 compute-0 ovn_metadata_agent[28608]: 2025-10-09 16:45:06.924 28613 DEBUG neutron.agent.linux.external_process [-] No haproxy process started for 052ac062-eaae-4bf0-a40e-d100eb37efd8 disable /usr/lib/python3.12/site-packages/neutron/agent/linux/external_process.py:173
Oct 09 16:45:06 compute-0 ovn_metadata_agent[28608]: 2025-10-09 16:45:06.924 28613 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/052ac062-eaae-4bf0-a40e-d100eb37efd8.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/052ac062-eaae-4bf0-a40e-d100eb37efd8.pid.haproxy' get_value_from_file /usr/lib/python3.12/site-packages/neutron/agent/linux/utils.py:268
Oct 09 16:45:06 compute-0 ovn_metadata_agent[28608]: 2025-10-09 16:45:06.924 139687 DEBUG oslo.privsep.daemon [-] privsep: reply[0353a2bf-86d8-451c-a580-87dc1528fefd]: (4, None) _call_back /usr/lib/python3.12/site-packages/oslo_privsep/daemon.py:515
Oct 09 16:45:06 compute-0 ovn_metadata_agent[28608]: 2025-10-09 16:45:06.925 28613 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/052ac062-eaae-4bf0-a40e-d100eb37efd8.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/052ac062-eaae-4bf0-a40e-d100eb37efd8.pid.haproxy' get_value_from_file /usr/lib/python3.12/site-packages/neutron/agent/linux/utils.py:268
Oct 09 16:45:06 compute-0 ovn_metadata_agent[28608]: 2025-10-09 16:45:06.925 139687 DEBUG oslo.privsep.daemon [-] privsep: reply[1e9b9d2c-f79d-4b71-bd73-61eea768376d]: (4, None) _call_back /usr/lib/python3.12/site-packages/oslo_privsep/daemon.py:515
Oct 09 16:45:06 compute-0 ovn_metadata_agent[28608]: 2025-10-09 16:45:06.925 28613 DEBUG neutron.agent.metadata.driver_base [-] haproxy_cfg = 
Oct 09 16:45:06 compute-0 ovn_metadata_agent[28608]: global
Oct 09 16:45:06 compute-0 ovn_metadata_agent[28608]:     log         /dev/log local0 debug
Oct 09 16:45:06 compute-0 ovn_metadata_agent[28608]:     log-tag     haproxy-metadata-proxy-052ac062-eaae-4bf0-a40e-d100eb37efd8
Oct 09 16:45:06 compute-0 ovn_metadata_agent[28608]:     user        root
Oct 09 16:45:06 compute-0 ovn_metadata_agent[28608]:     group       root
Oct 09 16:45:06 compute-0 ovn_metadata_agent[28608]:     maxconn     1024
Oct 09 16:45:06 compute-0 ovn_metadata_agent[28608]:     pidfile     /var/lib/neutron/external/pids/052ac062-eaae-4bf0-a40e-d100eb37efd8.pid.haproxy
Oct 09 16:45:06 compute-0 ovn_metadata_agent[28608]:     daemon
Oct 09 16:45:06 compute-0 ovn_metadata_agent[28608]: 
Oct 09 16:45:06 compute-0 ovn_metadata_agent[28608]: defaults
Oct 09 16:45:06 compute-0 ovn_metadata_agent[28608]:     log global
Oct 09 16:45:06 compute-0 ovn_metadata_agent[28608]:     mode http
Oct 09 16:45:06 compute-0 ovn_metadata_agent[28608]:     option httplog
Oct 09 16:45:06 compute-0 ovn_metadata_agent[28608]:     option dontlognull
Oct 09 16:45:06 compute-0 ovn_metadata_agent[28608]:     option http-server-close
Oct 09 16:45:06 compute-0 ovn_metadata_agent[28608]:     option forwardfor
Oct 09 16:45:06 compute-0 ovn_metadata_agent[28608]:     retries                 3
Oct 09 16:45:06 compute-0 ovn_metadata_agent[28608]:     timeout http-request    30s
Oct 09 16:45:06 compute-0 ovn_metadata_agent[28608]:     timeout connect         30s
Oct 09 16:45:06 compute-0 ovn_metadata_agent[28608]:     timeout client          32s
Oct 09 16:45:06 compute-0 ovn_metadata_agent[28608]:     timeout server          32s
Oct 09 16:45:06 compute-0 ovn_metadata_agent[28608]:     timeout http-keep-alive 30s
Oct 09 16:45:06 compute-0 ovn_metadata_agent[28608]: 
Oct 09 16:45:06 compute-0 ovn_metadata_agent[28608]: listen listener
Oct 09 16:45:06 compute-0 ovn_metadata_agent[28608]:     bind 169.254.169.254:80
Oct 09 16:45:06 compute-0 ovn_metadata_agent[28608]:     
Oct 09 16:45:06 compute-0 ovn_metadata_agent[28608]:     server metadata /var/lib/neutron/metadata_proxy
Oct 09 16:45:06 compute-0 ovn_metadata_agent[28608]: 
Oct 09 16:45:06 compute-0 ovn_metadata_agent[28608]:     http-request add-header X-OVN-Network-ID 052ac062-eaae-4bf0-a40e-d100eb37efd8
Oct 09 16:45:06 compute-0 ovn_metadata_agent[28608]:  create_config_file /usr/lib/python3.12/site-packages/neutron/agent/metadata/driver_base.py:155
Oct 09 16:45:06 compute-0 ovn_metadata_agent[28608]: 2025-10-09 16:45:06.926 28613 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-052ac062-eaae-4bf0-a40e-d100eb37efd8', 'env', 'PROCESS_TAG=haproxy-052ac062-eaae-4bf0-a40e-d100eb37efd8', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/052ac062-eaae-4bf0-a40e-d100eb37efd8.conf'] create_process /usr/lib/python3.12/site-packages/neutron/agent/linux/utils.py:84
Oct 09 16:45:07 compute-0 podman[155716]: 2025-10-09 16:45:07.312120106 +0000 UTC m=+0.053184072 container create a0afccb9e54c26db32c31c4bdf89bfc2c97f5f764fff8cd14b9042522835cdfe (image=38.102.83.66:5001/podified-master-centos10/openstack-neutron-metadata-agent-ovn:watcher_latest, name=neutron-haproxy-ovnmeta-052ac062-eaae-4bf0-a40e-d100eb37efd8, org.label-schema.build-date=20251007, io.buildah.version=1.41.4, tcib_managed=true, org.label-schema.schema-version=1.0, tcib_build_tag=watcher_latest, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 10 Base Image, maintainer=OpenStack Kubernetes Operator team)
Oct 09 16:45:07 compute-0 nova_compute[117331]: 2025-10-09 16:45:07.338 2 DEBUG nova.compute.manager [None req-90a86517-4844-4bdd-af5f-521a32a05ae4 edda4b76bea746b7aec969bc10f68f14 dbc5b562dfbf46d888285482e4fe52bb - - default default] [instance: d9cb0583-a454-4df1-80f9-dc9f184101f2] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.12/site-packages/nova/compute/manager.py:602
Oct 09 16:45:07 compute-0 nova_compute[117331]: 2025-10-09 16:45:07.346 2 DEBUG nova.virt.libvirt.driver [None req-90a86517-4844-4bdd-af5f-521a32a05ae4 edda4b76bea746b7aec969bc10f68f14 dbc5b562dfbf46d888285482e4fe52bb - - default default] [instance: d9cb0583-a454-4df1-80f9-dc9f184101f2] Guest created on hypervisor spawn /usr/lib/python3.12/site-packages/nova/virt/libvirt/driver.py:4816
Oct 09 16:45:07 compute-0 nova_compute[117331]: 2025-10-09 16:45:07.351 2 INFO nova.virt.libvirt.driver [-] [instance: d9cb0583-a454-4df1-80f9-dc9f184101f2] Instance spawned successfully.
Oct 09 16:45:07 compute-0 nova_compute[117331]: 2025-10-09 16:45:07.352 2 DEBUG nova.virt.libvirt.driver [None req-90a86517-4844-4bdd-af5f-521a32a05ae4 edda4b76bea746b7aec969bc10f68f14 dbc5b562dfbf46d888285482e4fe52bb - - default default] [instance: d9cb0583-a454-4df1-80f9-dc9f184101f2] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.12/site-packages/nova/virt/libvirt/driver.py:985
Oct 09 16:45:07 compute-0 systemd[1]: Started libpod-conmon-a0afccb9e54c26db32c31c4bdf89bfc2c97f5f764fff8cd14b9042522835cdfe.scope.
Oct 09 16:45:07 compute-0 podman[155716]: 2025-10-09 16:45:07.282765201 +0000 UTC m=+0.023829147 image pull 4e15c189a95c5dcd1203c76fe304de5c8f8616da9a258ca168cc54a517f49b87 38.102.83.66:5001/podified-master-centos10/openstack-neutron-metadata-agent-ovn:watcher_latest
Oct 09 16:45:07 compute-0 systemd[1]: Started libcrun container.
Oct 09 16:45:07 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ba56a72677d74942829417ac261d69735349962a64bb8195beb8fd6da3538b0e/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Oct 09 16:45:07 compute-0 podman[155716]: 2025-10-09 16:45:07.428590531 +0000 UTC m=+0.169654507 container init a0afccb9e54c26db32c31c4bdf89bfc2c97f5f764fff8cd14b9042522835cdfe (image=38.102.83.66:5001/podified-master-centos10/openstack-neutron-metadata-agent-ovn:watcher_latest, name=neutron-haproxy-ovnmeta-052ac062-eaae-4bf0-a40e-d100eb37efd8, org.label-schema.schema-version=1.0, tcib_build_tag=watcher_latest, org.label-schema.vendor=CentOS, io.buildah.version=1.41.4, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251007, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 10 Base Image)
Oct 09 16:45:07 compute-0 podman[155716]: 2025-10-09 16:45:07.438592882 +0000 UTC m=+0.179656848 container start a0afccb9e54c26db32c31c4bdf89bfc2c97f5f764fff8cd14b9042522835cdfe (image=38.102.83.66:5001/podified-master-centos10/openstack-neutron-metadata-agent-ovn:watcher_latest, name=neutron-haproxy-ovnmeta-052ac062-eaae-4bf0-a40e-d100eb37efd8, org.label-schema.schema-version=1.0, tcib_build_tag=watcher_latest, org.label-schema.build-date=20251007, io.buildah.version=1.41.4, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 10 Base Image, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS)
Oct 09 16:45:07 compute-0 neutron-haproxy-ovnmeta-052ac062-eaae-4bf0-a40e-d100eb37efd8[155731]: [NOTICE]   (155735) : New worker (155737) forked
Oct 09 16:45:07 compute-0 neutron-haproxy-ovnmeta-052ac062-eaae-4bf0-a40e-d100eb37efd8[155731]: [NOTICE]   (155735) : Loading success.
Oct 09 16:45:07 compute-0 ovn_metadata_agent[28608]: 2025-10-09 16:45:07.585 28613 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=35, options={'arp_ns_explicit_output': 'true', 'fdb_removal_limit': '0', 'ignore_lsp_down': 'false', 'mac_binding_removal_limit': '0', 'mac_prefix': '4a:65:5f', 'max_tunid': '16711680', 'northd_internal_version': '24.09.4-20.37.0-77.8', 'svc_monitor_mac': '32:24:1e:e3:5d:e8'}, ipsec=False) old=SB_Global(nb_cfg=34) matches /usr/lib/python3.12/site-packages/ovsdbapp/backend/ovs_idl/event.py:55
Oct 09 16:45:07 compute-0 ovn_metadata_agent[28608]: 2025-10-09 16:45:07.586 28613 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 6 seconds run /usr/lib/python3.12/site-packages/neutron/agent/ovn/metadata/agent.py:367
Oct 09 16:45:07 compute-0 nova_compute[117331]: 2025-10-09 16:45:07.590 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 09 16:45:07 compute-0 nova_compute[117331]: 2025-10-09 16:45:07.634 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 09 16:45:07 compute-0 nova_compute[117331]: 2025-10-09 16:45:07.869 2 DEBUG nova.virt.libvirt.driver [None req-90a86517-4844-4bdd-af5f-521a32a05ae4 edda4b76bea746b7aec969bc10f68f14 dbc5b562dfbf46d888285482e4fe52bb - - default default] [instance: d9cb0583-a454-4df1-80f9-dc9f184101f2] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.12/site-packages/nova/virt/libvirt/driver.py:1014
Oct 09 16:45:07 compute-0 nova_compute[117331]: 2025-10-09 16:45:07.870 2 DEBUG nova.virt.libvirt.driver [None req-90a86517-4844-4bdd-af5f-521a32a05ae4 edda4b76bea746b7aec969bc10f68f14 dbc5b562dfbf46d888285482e4fe52bb - - default default] [instance: d9cb0583-a454-4df1-80f9-dc9f184101f2] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.12/site-packages/nova/virt/libvirt/driver.py:1014
Oct 09 16:45:07 compute-0 nova_compute[117331]: 2025-10-09 16:45:07.870 2 DEBUG nova.virt.libvirt.driver [None req-90a86517-4844-4bdd-af5f-521a32a05ae4 edda4b76bea746b7aec969bc10f68f14 dbc5b562dfbf46d888285482e4fe52bb - - default default] [instance: d9cb0583-a454-4df1-80f9-dc9f184101f2] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.12/site-packages/nova/virt/libvirt/driver.py:1014
Oct 09 16:45:07 compute-0 nova_compute[117331]: 2025-10-09 16:45:07.871 2 DEBUG nova.virt.libvirt.driver [None req-90a86517-4844-4bdd-af5f-521a32a05ae4 edda4b76bea746b7aec969bc10f68f14 dbc5b562dfbf46d888285482e4fe52bb - - default default] [instance: d9cb0583-a454-4df1-80f9-dc9f184101f2] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.12/site-packages/nova/virt/libvirt/driver.py:1014
Oct 09 16:45:07 compute-0 nova_compute[117331]: 2025-10-09 16:45:07.871 2 DEBUG nova.virt.libvirt.driver [None req-90a86517-4844-4bdd-af5f-521a32a05ae4 edda4b76bea746b7aec969bc10f68f14 dbc5b562dfbf46d888285482e4fe52bb - - default default] [instance: d9cb0583-a454-4df1-80f9-dc9f184101f2] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.12/site-packages/nova/virt/libvirt/driver.py:1014
Oct 09 16:45:07 compute-0 nova_compute[117331]: 2025-10-09 16:45:07.872 2 DEBUG nova.virt.libvirt.driver [None req-90a86517-4844-4bdd-af5f-521a32a05ae4 edda4b76bea746b7aec969bc10f68f14 dbc5b562dfbf46d888285482e4fe52bb - - default default] [instance: d9cb0583-a454-4df1-80f9-dc9f184101f2] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.12/site-packages/nova/virt/libvirt/driver.py:1014
Oct 09 16:45:08 compute-0 nova_compute[117331]: 2025-10-09 16:45:08.382 2 INFO nova.compute.manager [None req-90a86517-4844-4bdd-af5f-521a32a05ae4 edda4b76bea746b7aec969bc10f68f14 dbc5b562dfbf46d888285482e4fe52bb - - default default] [instance: d9cb0583-a454-4df1-80f9-dc9f184101f2] Took 10.57 seconds to spawn the instance on the hypervisor.
Oct 09 16:45:08 compute-0 nova_compute[117331]: 2025-10-09 16:45:08.383 2 DEBUG nova.compute.manager [None req-90a86517-4844-4bdd-af5f-521a32a05ae4 edda4b76bea746b7aec969bc10f68f14 dbc5b562dfbf46d888285482e4fe52bb - - default default] [instance: d9cb0583-a454-4df1-80f9-dc9f184101f2] Checking state _get_power_state /usr/lib/python3.12/site-packages/nova/compute/manager.py:1825
Oct 09 16:45:08 compute-0 sshd-session[155558]: Failed password for invalid user debian from 124.60.67.43 port 50478 ssh2
Oct 09 16:45:08 compute-0 sshd-session[155558]: Connection closed by invalid user debian 124.60.67.43 port 50478 [preauth]
Oct 09 16:45:08 compute-0 nova_compute[117331]: 2025-10-09 16:45:08.851 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 09 16:45:08 compute-0 nova_compute[117331]: 2025-10-09 16:45:08.872 2 DEBUG nova.compute.manager [req-612ff250-8d65-44cc-aae2-871e57fa7e26 req-f6ca9f19-1442-412d-9a81-5cc97fa24179 ee15961ad2be4bada6c106ac4f69f422 c076c42271004e7aa86e84416e33f826 - - default default] [instance: d9cb0583-a454-4df1-80f9-dc9f184101f2] Received event network-vif-plugged-b7eb3265-ba4d-4b59-9f4f-d449f3a8d71e external_instance_event /usr/lib/python3.12/site-packages/nova/compute/manager.py:11812
Oct 09 16:45:08 compute-0 nova_compute[117331]: 2025-10-09 16:45:08.873 2 DEBUG oslo_concurrency.lockutils [req-612ff250-8d65-44cc-aae2-871e57fa7e26 req-f6ca9f19-1442-412d-9a81-5cc97fa24179 ee15961ad2be4bada6c106ac4f69f422 c076c42271004e7aa86e84416e33f826 - - default default] Acquiring lock "d9cb0583-a454-4df1-80f9-dc9f184101f2-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:405
Oct 09 16:45:08 compute-0 nova_compute[117331]: 2025-10-09 16:45:08.874 2 DEBUG oslo_concurrency.lockutils [req-612ff250-8d65-44cc-aae2-871e57fa7e26 req-f6ca9f19-1442-412d-9a81-5cc97fa24179 ee15961ad2be4bada6c106ac4f69f422 c076c42271004e7aa86e84416e33f826 - - default default] Lock "d9cb0583-a454-4df1-80f9-dc9f184101f2-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:410
Oct 09 16:45:08 compute-0 nova_compute[117331]: 2025-10-09 16:45:08.874 2 DEBUG oslo_concurrency.lockutils [req-612ff250-8d65-44cc-aae2-871e57fa7e26 req-f6ca9f19-1442-412d-9a81-5cc97fa24179 ee15961ad2be4bada6c106ac4f69f422 c076c42271004e7aa86e84416e33f826 - - default default] Lock "d9cb0583-a454-4df1-80f9-dc9f184101f2-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:424
Oct 09 16:45:08 compute-0 nova_compute[117331]: 2025-10-09 16:45:08.875 2 DEBUG nova.compute.manager [req-612ff250-8d65-44cc-aae2-871e57fa7e26 req-f6ca9f19-1442-412d-9a81-5cc97fa24179 ee15961ad2be4bada6c106ac4f69f422 c076c42271004e7aa86e84416e33f826 - - default default] [instance: d9cb0583-a454-4df1-80f9-dc9f184101f2] No waiting events found dispatching network-vif-plugged-b7eb3265-ba4d-4b59-9f4f-d449f3a8d71e pop_instance_event /usr/lib/python3.12/site-packages/nova/compute/manager.py:344
Oct 09 16:45:08 compute-0 nova_compute[117331]: 2025-10-09 16:45:08.875 2 WARNING nova.compute.manager [req-612ff250-8d65-44cc-aae2-871e57fa7e26 req-f6ca9f19-1442-412d-9a81-5cc97fa24179 ee15961ad2be4bada6c106ac4f69f422 c076c42271004e7aa86e84416e33f826 - - default default] [instance: d9cb0583-a454-4df1-80f9-dc9f184101f2] Received unexpected event network-vif-plugged-b7eb3265-ba4d-4b59-9f4f-d449f3a8d71e for instance with vm_state active and task_state None.
Oct 09 16:45:08 compute-0 nova_compute[117331]: 2025-10-09 16:45:08.971 2 INFO nova.compute.manager [None req-90a86517-4844-4bdd-af5f-521a32a05ae4 edda4b76bea746b7aec969bc10f68f14 dbc5b562dfbf46d888285482e4fe52bb - - default default] [instance: d9cb0583-a454-4df1-80f9-dc9f184101f2] Took 15.87 seconds to build instance.
Oct 09 16:45:09 compute-0 nova_compute[117331]: 2025-10-09 16:45:09.477 2 DEBUG oslo_concurrency.lockutils [None req-90a86517-4844-4bdd-af5f-521a32a05ae4 edda4b76bea746b7aec969bc10f68f14 dbc5b562dfbf46d888285482e4fe52bb - - default default] Lock "d9cb0583-a454-4df1-80f9-dc9f184101f2" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 17.393s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:424
Oct 09 16:45:12 compute-0 nova_compute[117331]: 2025-10-09 16:45:12.238 2 DEBUG oslo_concurrency.lockutils [None req-15568c02-2290-4f5f-b609-67d94bc90b39 edda4b76bea746b7aec969bc10f68f14 dbc5b562dfbf46d888285482e4fe52bb - - default default] Acquiring lock "d9cb0583-a454-4df1-80f9-dc9f184101f2" by "nova.compute.manager.ComputeManager.reserve_block_device_name.<locals>.do_reserve" inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:405
Oct 09 16:45:12 compute-0 nova_compute[117331]: 2025-10-09 16:45:12.239 2 DEBUG oslo_concurrency.lockutils [None req-15568c02-2290-4f5f-b609-67d94bc90b39 edda4b76bea746b7aec969bc10f68f14 dbc5b562dfbf46d888285482e4fe52bb - - default default] Lock "d9cb0583-a454-4df1-80f9-dc9f184101f2" acquired by "nova.compute.manager.ComputeManager.reserve_block_device_name.<locals>.do_reserve" :: waited 0.001s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:410
Oct 09 16:45:12 compute-0 nova_compute[117331]: 2025-10-09 16:45:12.637 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 09 16:45:12 compute-0 nova_compute[117331]: 2025-10-09 16:45:12.750 2 DEBUG nova.objects.instance [None req-15568c02-2290-4f5f-b609-67d94bc90b39 edda4b76bea746b7aec969bc10f68f14 dbc5b562dfbf46d888285482e4fe52bb - - default default] Lazy-loading 'flavor' on Instance uuid d9cb0583-a454-4df1-80f9-dc9f184101f2 obj_load_attr /usr/lib/python3.12/site-packages/nova/objects/instance.py:1141
Oct 09 16:45:12 compute-0 podman[155749]: 2025-10-09 16:45:12.860520859 +0000 UTC m=+0.085977747 container health_status 270830b5167cafbab735d8118980f26987e673cd0fa464dd2202394830bcca66 (image=38.102.83.66:5001/podified-master-centos10/openstack-multipathd:watcher_latest, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, config_id=multipathd, io.buildah.version=1.41.4, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251007, org.label-schema.name=CentOS Stream 10 Base Image, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_build_tag=watcher_latest, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': '38.102.83.66:5001/podified-master-centos10/openstack-multipathd:watcher_latest', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd, org.label-schema.schema-version=1.0)
Oct 09 16:45:13 compute-0 ovn_metadata_agent[28608]: 2025-10-09 16:45:13.588 28613 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=9954897f-aa83-45dd-8e84-289816676c2a, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '35'}),), if_exists=True) do_commit /usr/lib/python3.12/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 09 16:45:13 compute-0 nova_compute[117331]: 2025-10-09 16:45:13.767 2 DEBUG oslo_concurrency.lockutils [None req-15568c02-2290-4f5f-b609-67d94bc90b39 edda4b76bea746b7aec969bc10f68f14 dbc5b562dfbf46d888285482e4fe52bb - - default default] Lock "d9cb0583-a454-4df1-80f9-dc9f184101f2" "released" by "nova.compute.manager.ComputeManager.reserve_block_device_name.<locals>.do_reserve" :: held 1.528s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:424
Oct 09 16:45:13 compute-0 nova_compute[117331]: 2025-10-09 16:45:13.854 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 09 16:45:13 compute-0 sshd-session[155747]: Invalid user pi from 124.60.67.43 port 34492
Oct 09 16:45:14 compute-0 sshd-session[155747]: pam_unix(sshd:auth): check pass; user unknown
Oct 09 16:45:14 compute-0 sshd-session[155747]: pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=124.60.67.43
Oct 09 16:45:14 compute-0 nova_compute[117331]: 2025-10-09 16:45:14.928 2 DEBUG oslo_concurrency.lockutils [None req-15568c02-2290-4f5f-b609-67d94bc90b39 edda4b76bea746b7aec969bc10f68f14 dbc5b562dfbf46d888285482e4fe52bb - - default default] Acquiring lock "d9cb0583-a454-4df1-80f9-dc9f184101f2" by "nova.compute.manager.ComputeManager.attach_volume.<locals>.do_attach_volume" inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:405
Oct 09 16:45:14 compute-0 nova_compute[117331]: 2025-10-09 16:45:14.931 2 DEBUG oslo_concurrency.lockutils [None req-15568c02-2290-4f5f-b609-67d94bc90b39 edda4b76bea746b7aec969bc10f68f14 dbc5b562dfbf46d888285482e4fe52bb - - default default] Lock "d9cb0583-a454-4df1-80f9-dc9f184101f2" acquired by "nova.compute.manager.ComputeManager.attach_volume.<locals>.do_attach_volume" :: waited 0.003s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:410
Oct 09 16:45:14 compute-0 nova_compute[117331]: 2025-10-09 16:45:14.932 2 INFO nova.compute.manager [None req-15568c02-2290-4f5f-b609-67d94bc90b39 edda4b76bea746b7aec969bc10f68f14 dbc5b562dfbf46d888285482e4fe52bb - - default default] [instance: d9cb0583-a454-4df1-80f9-dc9f184101f2] Attaching volume e877a3bb-b53d-4cbb-b2a6-4c88f284da4f to /dev/vdb
Oct 09 16:45:14 compute-0 nova_compute[117331]: 2025-10-09 16:45:14.933 2 DEBUG nova.objects.instance [None req-15568c02-2290-4f5f-b609-67d94bc90b39 edda4b76bea746b7aec969bc10f68f14 dbc5b562dfbf46d888285482e4fe52bb - - default default] Lazy-loading 'flavor' on Instance uuid d9cb0583-a454-4df1-80f9-dc9f184101f2 obj_load_attr /usr/lib/python3.12/site-packages/nova/objects/instance.py:1141
Oct 09 16:45:15 compute-0 nova_compute[117331]: 2025-10-09 16:45:15.586 2 DEBUG os_brick.utils [None req-15568c02-2290-4f5f-b609-67d94bc90b39 edda4b76bea746b7aec969bc10f68f14 dbc5b562dfbf46d888285482e4fe52bb - - default default] ==> get_connector_properties: call "{'root_helper': 'sudo nova-rootwrap /etc/nova/rootwrap.conf', 'my_ip': '192.168.122.100', 'multipath': True, 'enforce_multipath': True, 'host': 'compute-0.ctlplane.example.com', 'execute': None}" trace_logging_wrapper /usr/lib/python3.12/site-packages/os_brick/utils.py:177
Oct 09 16:45:15 compute-0 nova_compute[117331]: 2025-10-09 16:45:15.588 2 INFO oslo.privsep.daemon [None req-15568c02-2290-4f5f-b609-67d94bc90b39 edda4b76bea746b7aec969bc10f68f14 dbc5b562dfbf46d888285482e4fe52bb - - default default] Running privsep helper: ['sudo', 'nova-rootwrap', '/etc/nova/rootwrap.conf', 'privsep-helper', '--config-file', '/etc/nova/nova.conf', '--config-file', '/etc/nova/nova-compute.conf', '--config-dir', '/etc/nova/nova.conf.d', '--privsep_context', 'os_brick.privileged.default', '--privsep_sock_path', '/tmp/tmpoiayrkn6/privsep.sock']
Oct 09 16:45:16 compute-0 nova_compute[117331]: 2025-10-09 16:45:16.358 2 INFO oslo.privsep.daemon [None req-15568c02-2290-4f5f-b609-67d94bc90b39 edda4b76bea746b7aec969bc10f68f14 dbc5b562dfbf46d888285482e4fe52bb - - default default] Spawned new privsep daemon via rootwrap
Oct 09 16:45:16 compute-0 nova_compute[117331]: 2025-10-09 16:45:16.212 843 INFO oslo.privsep.daemon [-] privsep daemon starting
Oct 09 16:45:16 compute-0 nova_compute[117331]: 2025-10-09 16:45:16.217 843 INFO oslo.privsep.daemon [-] privsep process running with uid/gid: 0/0
Oct 09 16:45:16 compute-0 nova_compute[117331]: 2025-10-09 16:45:16.220 843 INFO oslo.privsep.daemon [-] privsep process running with capabilities (eff/prm/inh): CAP_DAC_READ_SEARCH|CAP_SYS_ADMIN/CAP_DAC_READ_SEARCH|CAP_SYS_ADMIN/none
Oct 09 16:45:16 compute-0 nova_compute[117331]: 2025-10-09 16:45:16.220 843 INFO oslo.privsep.daemon [-] privsep daemon running as pid 843
Oct 09 16:45:16 compute-0 nova_compute[117331]: 2025-10-09 16:45:16.362 843 DEBUG oslo.privsep.daemon [-] privsep: reply[317e8b9b-1e41-44fc-8016-95ae35cb2fd8]: (2,) _call_back /usr/lib/python3.12/site-packages/oslo_privsep/daemon.py:515
Oct 09 16:45:16 compute-0 sshd-session[155747]: Failed password for invalid user pi from 124.60.67.43 port 34492 ssh2
Oct 09 16:45:16 compute-0 nova_compute[117331]: 2025-10-09 16:45:16.452 843 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): cat /etc/iscsi/initiatorname.iscsi execute /usr/lib/python3.12/site-packages/oslo_concurrency/processutils.py:349
Oct 09 16:45:16 compute-0 nova_compute[117331]: 2025-10-09 16:45:16.460 843 DEBUG oslo_concurrency.processutils [-] CMD "cat /etc/iscsi/initiatorname.iscsi" returned: 0 in 0.008s execute /usr/lib/python3.12/site-packages/oslo_concurrency/processutils.py:372
Oct 09 16:45:16 compute-0 nova_compute[117331]: 2025-10-09 16:45:16.460 843 DEBUG oslo.privsep.daemon [-] privsep: reply[8a35a8f5-87cf-4401-ab95-870f080065df]: (4, ('InitiatorName=iqn.1994-05.com.redhat:e0e4bf961d5d', '')) _call_back /usr/lib/python3.12/site-packages/oslo_privsep/daemon.py:515
Oct 09 16:45:16 compute-0 nova_compute[117331]: 2025-10-09 16:45:16.462 843 DEBUG oslo.privsep.daemon [-] privsep: Exception during request[4e689258-a92c-4597-96b6-2e37a39e37b5]: [Errno 2] No such file or directory: '/dev/scini' _process_cmd /usr/lib/python3.12/site-packages/oslo_privsep/daemon.py:492
Oct 09 16:45:16 compute-0 nova_compute[117331]: Traceback (most recent call last):
Oct 09 16:45:16 compute-0 nova_compute[117331]:   File "/usr/lib/python3.12/site-packages/oslo_privsep/daemon.py", line 489, in _process_cmd
Oct 09 16:45:16 compute-0 nova_compute[117331]:     ret = func(*f_args, **f_kwargs)
Oct 09 16:45:16 compute-0 nova_compute[117331]:           ^^^^^^^^^^^^^^^^^^^^^^^^^
Oct 09 16:45:16 compute-0 nova_compute[117331]:   File "/usr/lib/python3.12/site-packages/oslo_privsep/priv_context.py", line 270, in _wrap
Oct 09 16:45:16 compute-0 nova_compute[117331]:     return func(*args, **kwargs)
Oct 09 16:45:16 compute-0 nova_compute[117331]:            ^^^^^^^^^^^^^^^^^^^^^
Oct 09 16:45:16 compute-0 nova_compute[117331]:   File "/usr/lib/python3.12/site-packages/os_brick/privileged/scaleio.py", line 57, in get_guid
Oct 09 16:45:16 compute-0 nova_compute[117331]:     with open_scini_device() as fd:
Oct 09 16:45:16 compute-0 nova_compute[117331]:          ^^^^^^^^^^^^^^^^^^^
Oct 09 16:45:16 compute-0 nova_compute[117331]:   File "/usr/lib64/python3.12/contextlib.py", line 137, in __enter__
Oct 09 16:45:16 compute-0 nova_compute[117331]:     return next(self.gen)
Oct 09 16:45:16 compute-0 nova_compute[117331]:            ^^^^^^^^^^^^^^
Oct 09 16:45:16 compute-0 nova_compute[117331]:   File "/usr/lib/python3.12/site-packages/os_brick/privileged/scaleio.py", line 40, in open_scini_device
Oct 09 16:45:16 compute-0 nova_compute[117331]:     fd = os.open(SCINI_DEVICE_PATH, os.O_RDWR)
Oct 09 16:45:16 compute-0 nova_compute[117331]:          ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
Oct 09 16:45:16 compute-0 nova_compute[117331]: FileNotFoundError: [Errno 2] No such file or directory: '/dev/scini'
Oct 09 16:45:16 compute-0 nova_compute[117331]: 2025-10-09 16:45:16.465 843 DEBUG oslo.privsep.daemon [-] privsep: reply[4e689258-a92c-4597-96b6-2e37a39e37b5]: (5, 'builtins.FileNotFoundError', (2, 'No such file or directory'), 'Traceback (most recent call last):\n  File "/usr/lib/python3.12/site-packages/oslo_privsep/daemon.py", line 489, in _process_cmd\n    ret = func(*f_args, **f_kwargs)\n          ^^^^^^^^^^^^^^^^^^^^^^^^^\n  File "/usr/lib/python3.12/site-packages/oslo_privsep/priv_context.py", line 270, in _wrap\n    return func(*args, **kwargs)\n           ^^^^^^^^^^^^^^^^^^^^^\n  File "/usr/lib/python3.12/site-packages/os_brick/privileged/scaleio.py", line 57, in get_guid\n    with open_scini_device() as fd:\n         ^^^^^^^^^^^^^^^^^^^\n  File "/usr/lib64/python3.12/contextlib.py", line 137, in __enter__\n    return next(self.gen)\n           ^^^^^^^^^^^^^^\n  File "/usr/lib/python3.12/site-packages/os_brick/privileged/scaleio.py", line 40, in open_scini_device\n    fd = os.open(SCINI_DEVICE_PATH, os.O_RDWR)\n         ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\nFileNotFoundError: [Errno 2] No such file or directory: \'/dev/scini\'\n') _call_back /usr/lib/python3.12/site-packages/oslo_privsep/daemon.py:515
Oct 09 16:45:16 compute-0 nova_compute[117331]: 2025-10-09 16:45:16.468 2 ERROR os_brick.initiator.connectors.scaleio [None req-15568c02-2290-4f5f-b609-67d94bc90b39 edda4b76bea746b7aec969bc10f68f14 dbc5b562dfbf46d888285482e4fe52bb - - default default] Error querying sdc guid: [Errno 2] No such file or directory: FileNotFoundError: [Errno 2] No such file or directory
Oct 09 16:45:16 compute-0 nova_compute[117331]: 2025-10-09 16:45:16.469 2 INFO os_brick.initiator.connectors.scaleio [None req-15568c02-2290-4f5f-b609-67d94bc90b39 edda4b76bea746b7aec969bc10f68f14 dbc5b562dfbf46d888285482e4fe52bb - - default default] Unable to find SDC guid: Error querying sdc guid: [Errno 2] No such file or directory
Oct 09 16:45:16 compute-0 nova_compute[117331]: 2025-10-09 16:45:16.470 843 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): findmnt -v / -n -o SOURCE execute /usr/lib/python3.12/site-packages/oslo_concurrency/processutils.py:349
Oct 09 16:45:16 compute-0 nova_compute[117331]: 2025-10-09 16:45:16.483 843 DEBUG oslo_concurrency.processutils [-] CMD "findmnt -v / -n -o SOURCE" returned: 0 in 0.013s execute /usr/lib/python3.12/site-packages/oslo_concurrency/processutils.py:372
Oct 09 16:45:16 compute-0 nova_compute[117331]: 2025-10-09 16:45:16.484 843 DEBUG oslo.privsep.daemon [-] privsep: reply[9e47f38d-d0c2-452a-8815-5ae115998fbf]: (4, ('overlay\n', '')) _call_back /usr/lib/python3.12/site-packages/oslo_privsep/daemon.py:515
Oct 09 16:45:16 compute-0 nova_compute[117331]: 2025-10-09 16:45:16.485 843 DEBUG oslo.privsep.daemon [-] privsep: reply[93100ae6-67b8-4ea1-9907-493183480d3d]: (4, '656dbd27-58bc-413d-a3c8-085dadc82fd6') _call_back /usr/lib/python3.12/site-packages/oslo_privsep/daemon.py:515
Oct 09 16:45:16 compute-0 nova_compute[117331]: 2025-10-09 16:45:16.486 2 DEBUG oslo_concurrency.processutils [None req-15568c02-2290-4f5f-b609-67d94bc90b39 edda4b76bea746b7aec969bc10f68f14 dbc5b562dfbf46d888285482e4fe52bb - - default default] Running cmd (subprocess): nvme version execute /usr/lib/python3.12/site-packages/oslo_concurrency/processutils.py:349
Oct 09 16:45:16 compute-0 nova_compute[117331]: 2025-10-09 16:45:16.518 2 DEBUG oslo_concurrency.processutils [None req-15568c02-2290-4f5f-b609-67d94bc90b39 edda4b76bea746b7aec969bc10f68f14 dbc5b562dfbf46d888285482e4fe52bb - - default default] CMD "nvme version" returned: 0 in 0.032s execute /usr/lib/python3.12/site-packages/oslo_concurrency/processutils.py:372
Oct 09 16:45:16 compute-0 nova_compute[117331]: 2025-10-09 16:45:16.522 2 DEBUG os_brick.initiator.connectors.lightos [None req-15568c02-2290-4f5f-b609-67d94bc90b39 edda4b76bea746b7aec969bc10f68f14 dbc5b562dfbf46d888285482e4fe52bb - - default default] LIGHTOS: [Errno 111] ECONNREFUSED find_dsc /usr/lib/python3.12/site-packages/os_brick/initiator/connectors/lightos.py:132
Oct 09 16:45:16 compute-0 nova_compute[117331]: 2025-10-09 16:45:16.525 2 INFO os_brick.initiator.connectors.lightos [None req-15568c02-2290-4f5f-b609-67d94bc90b39 edda4b76bea746b7aec969bc10f68f14 dbc5b562dfbf46d888285482e4fe52bb - - default default] Current host hostNQN nqn.2014-08.org.nvmexpress:uuid:2f7d2450-18ac-43a6-80ee-9caa4a7736e0 and IP(s) are ['38.102.83.110', '172.17.0.101', '192.168.122.100', '172.19.0.101', '172.18.0.101', 'fe80::e485:78ff:fecc:b265', 'fe80::fc16:3eff:fe8b:c476', 'fe80::504d:1dff:fe40:feee'] 
Oct 09 16:45:16 compute-0 nova_compute[117331]: 2025-10-09 16:45:16.526 2 DEBUG os_brick.initiator.connectors.lightos [None req-15568c02-2290-4f5f-b609-67d94bc90b39 edda4b76bea746b7aec969bc10f68f14 dbc5b562dfbf46d888285482e4fe52bb - - default default] LIGHTOS: did not find dsc, continuing anyway. get_connector_properties /usr/lib/python3.12/site-packages/os_brick/initiator/connectors/lightos.py:109
Oct 09 16:45:16 compute-0 nova_compute[117331]: 2025-10-09 16:45:16.526 2 DEBUG os_brick.initiator.connectors.lightos [None req-15568c02-2290-4f5f-b609-67d94bc90b39 edda4b76bea746b7aec969bc10f68f14 dbc5b562dfbf46d888285482e4fe52bb - - default default] LIGHTOS: finally hostnqn: nqn.2014-08.org.nvmexpress:uuid:2f7d2450-18ac-43a6-80ee-9caa4a7736e0 dsc:  get_connector_properties /usr/lib/python3.12/site-packages/os_brick/initiator/connectors/lightos.py:112
Oct 09 16:45:16 compute-0 nova_compute[117331]: 2025-10-09 16:45:16.526 2 DEBUG os_brick.utils [None req-15568c02-2290-4f5f-b609-67d94bc90b39 edda4b76bea746b7aec969bc10f68f14 dbc5b562dfbf46d888285482e4fe52bb - - default default] <== get_connector_properties: return (939ms) {'platform': 'x86_64', 'os_type': 'linux', 'ip': '192.168.122.100', 'host': 'compute-0.ctlplane.example.com', 'multipath': True, 'enforce_multipath': True, 'initiator': 'iqn.1994-05.com.redhat:e0e4bf961d5d', 'do_local_attach': False, 'nvme_hostid': '2f7d2450-18ac-43a6-80ee-9caa4a7736e0', 'system uuid': '656dbd27-58bc-413d-a3c8-085dadc82fd6', 'nqn': 'nqn.2014-08.org.nvmexpress:uuid:2f7d2450-18ac-43a6-80ee-9caa4a7736e0', 'nvme_native_multipath': True, 'found_dsc': '', 'host_ips': ['38.102.83.110', '172.17.0.101', '192.168.122.100', '172.19.0.101', '172.18.0.101', 'fe80::e485:78ff:fecc:b265', 'fe80::fc16:3eff:fe8b:c476', 'fe80::504d:1dff:fe40:feee']} trace_logging_wrapper /usr/lib/python3.12/site-packages/os_brick/utils.py:204
Oct 09 16:45:16 compute-0 nova_compute[117331]: 2025-10-09 16:45:16.527 2 DEBUG nova.virt.block_device [None req-15568c02-2290-4f5f-b609-67d94bc90b39 edda4b76bea746b7aec969bc10f68f14 dbc5b562dfbf46d888285482e4fe52bb - - default default] [instance: d9cb0583-a454-4df1-80f9-dc9f184101f2] Updating existing volume attachment record: a31cd34a-4b9f-44a9-907a-52fb215d376c _volume_attach /usr/lib/python3.12/site-packages/nova/virt/block_device.py:666
Oct 09 16:45:16 compute-0 podman[155779]: 2025-10-09 16:45:16.854931206 +0000 UTC m=+0.088866748 container health_status 86aa93e887899e54d383b0fc17fb3c5637680d1d8e146a130a557c837f0260fe (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter)
Oct 09 16:45:17 compute-0 nova_compute[117331]: 2025-10-09 16:45:17.639 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 09 16:45:17 compute-0 ovn_controller[19752]: 2025-10-09T16:45:17Z|00045|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:8b:c4:76 10.100.0.9
Oct 09 16:45:17 compute-0 ovn_controller[19752]: 2025-10-09T16:45:17Z|00046|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:8b:c4:76 10.100.0.9
Oct 09 16:45:18 compute-0 nova_compute[117331]: 2025-10-09 16:45:18.628 2 DEBUG oslo_concurrency.lockutils [None req-15568c02-2290-4f5f-b609-67d94bc90b39 edda4b76bea746b7aec969bc10f68f14 dbc5b562dfbf46d888285482e4fe52bb - - default default] Acquiring lock "cache_volume_driver" by "nova.virt.libvirt.driver.LibvirtDriver._get_volume_driver.<locals>._cache_volume_driver" inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:405
Oct 09 16:45:18 compute-0 nova_compute[117331]: 2025-10-09 16:45:18.629 2 DEBUG oslo_concurrency.lockutils [None req-15568c02-2290-4f5f-b609-67d94bc90b39 edda4b76bea746b7aec969bc10f68f14 dbc5b562dfbf46d888285482e4fe52bb - - default default] Lock "cache_volume_driver" acquired by "nova.virt.libvirt.driver.LibvirtDriver._get_volume_driver.<locals>._cache_volume_driver" :: waited 0.000s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:410
Oct 09 16:45:18 compute-0 nova_compute[117331]: 2025-10-09 16:45:18.629 2 DEBUG oslo_concurrency.lockutils [None req-15568c02-2290-4f5f-b609-67d94bc90b39 edda4b76bea746b7aec969bc10f68f14 dbc5b562dfbf46d888285482e4fe52bb - - default default] Lock "cache_volume_driver" "released" by "nova.virt.libvirt.driver.LibvirtDriver._get_volume_driver.<locals>._cache_volume_driver" :: held 0.000s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:424
Oct 09 16:45:18 compute-0 nova_compute[117331]: 2025-10-09 16:45:18.629 2 DEBUG nova.virt.libvirt.volume.mount [None req-15568c02-2290-4f5f-b609-67d94bc90b39 edda4b76bea746b7aec969bc10f68f14 dbc5b562dfbf46d888285482e4fe52bb - - default default] Got _HostMountState generation 0 get_state /usr/lib/python3.12/site-packages/nova/virt/libvirt/volume/mount.py:91
Oct 09 16:45:18 compute-0 nova_compute[117331]: 2025-10-09 16:45:18.630 2 DEBUG nova.virt.libvirt.volume.mount [None req-15568c02-2290-4f5f-b609-67d94bc90b39 edda4b76bea746b7aec969bc10f68f14 dbc5b562dfbf46d888285482e4fe52bb - - default default] [instance: d9cb0583-a454-4df1-80f9-dc9f184101f2] _HostMountState.mount(fstype=nfs, export=172.18.0.101:/data/cinder_backend_2, vol_name=volume-e877a3bb-b53d-4cbb-b2a6-4c88f284da4f, /var/lib/nova/mnt/56491fbd7607c943b28b644cf81cf3ff, options=[]) generation 0 mount /usr/lib/python3.12/site-packages/nova/virt/libvirt/volume/mount.py:288
Oct 09 16:45:18 compute-0 nova_compute[117331]: 2025-10-09 16:45:18.630 2 DEBUG nova.virt.libvirt.volume.mount [None req-15568c02-2290-4f5f-b609-67d94bc90b39 edda4b76bea746b7aec969bc10f68f14 dbc5b562dfbf46d888285482e4fe52bb - - default default] [instance: d9cb0583-a454-4df1-80f9-dc9f184101f2] Mounting /var/lib/nova/mnt/56491fbd7607c943b28b644cf81cf3ff generation 0 mount /usr/lib/python3.12/site-packages/nova/virt/libvirt/volume/mount.py:301
Oct 09 16:45:18 compute-0 sshd-session[155747]: Connection closed by invalid user pi 124.60.67.43 port 34492 [preauth]
Oct 09 16:45:18 compute-0 kernel: FS-Cache: Loaded
Oct 09 16:45:18 compute-0 nova_compute[117331]: 2025-10-09 16:45:18.858 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 09 16:45:18 compute-0 kernel: Key type dns_resolver registered
Oct 09 16:45:19 compute-0 kernel: NFS: Registering the id_resolver key type
Oct 09 16:45:19 compute-0 kernel: Key type id_resolver registered
Oct 09 16:45:19 compute-0 kernel: Key type id_legacy registered
Oct 09 16:45:19 compute-0 nfsrahead[155829]: setting /var/lib/nova/mnt/56491fbd7607c943b28b644cf81cf3ff readahead to 128
Oct 09 16:45:19 compute-0 nova_compute[117331]: ========================================================================
Oct 09 16:45:19 compute-0 nova_compute[117331]: ====                        Guru Meditation                         ====
Oct 09 16:45:19 compute-0 nova_compute[117331]: ========================================================================
Oct 09 16:45:19 compute-0 nova_compute[117331]: ||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||
Oct 09 16:45:19 compute-0 nova_compute[117331]: 
Oct 09 16:45:19 compute-0 nova_compute[117331]: 
Oct 09 16:45:19 compute-0 nova_compute[117331]: ========================================================================
Oct 09 16:45:19 compute-0 nova_compute[117331]: ====                            Package                             ====
Oct 09 16:45:19 compute-0 nova_compute[117331]: ========================================================================
Oct 09 16:45:19 compute-0 nova_compute[117331]: product = OpenStack Compute
Oct 09 16:45:19 compute-0 nova_compute[117331]: vendor = RDO
Oct 09 16:45:19 compute-0 nova_compute[117331]: version = 32.1.0-0.20251008143712.076498e.el10
Oct 09 16:45:19 compute-0 nova_compute[117331]: ========================================================================
Oct 09 16:45:19 compute-0 nova_compute[117331]: ====                            Threads                             ====
Oct 09 16:45:19 compute-0 nova_compute[117331]: ========================================================================
Oct 09 16:45:19 compute-0 nova_compute[117331]: ------                  Thread #139857179420352                   ------
Oct 09 16:45:19 compute-0 nova_compute[117331]: 
Oct 09 16:45:19 compute-0 nova_compute[117331]: /usr/lib64/python3.12/threading.py:1032 in _bootstrap
Oct 09 16:45:19 compute-0 nova_compute[117331]:     `self._bootstrap_inner()`
Oct 09 16:45:19 compute-0 nova_compute[117331]: 
Oct 09 16:45:19 compute-0 nova_compute[117331]: /usr/lib64/python3.12/threading.py:1075 in _bootstrap_inner
Oct 09 16:45:19 compute-0 nova_compute[117331]:     `self.run()`
Oct 09 16:45:19 compute-0 nova_compute[117331]: 
Oct 09 16:45:19 compute-0 nova_compute[117331]: /usr/lib64/python3.12/threading.py:1012 in run
Oct 09 16:45:19 compute-0 nova_compute[117331]:     `self._target(*self._args, **self._kwargs)`
Oct 09 16:45:19 compute-0 nova_compute[117331]: 
Oct 09 16:45:19 compute-0 nova_compute[117331]: /usr/lib/python3.12/site-packages/nova/virt/libvirt/host.py:214 in _native_thread
Oct 09 16:45:19 compute-0 nova_compute[117331]:     `libvirt.virEventRunDefaultImpl()`
Oct 09 16:45:19 compute-0 nova_compute[117331]: 
Oct 09 16:45:19 compute-0 nova_compute[117331]: /usr/lib64/python3.12/site-packages/libvirt.py:441 in virEventRunDefaultImpl
Oct 09 16:45:19 compute-0 nova_compute[117331]:     `ret = libvirtmod.virEventRunDefaultImpl()`
Oct 09 16:45:19 compute-0 nova_compute[117331]: 
Oct 09 16:45:19 compute-0 nova_compute[117331]: ------                  Thread #139857187813056                   ------
Oct 09 16:45:19 compute-0 nova_compute[117331]: 
Oct 09 16:45:19 compute-0 nova_compute[117331]: /usr/lib64/python3.12/threading.py:1032 in _bootstrap
Oct 09 16:45:19 compute-0 nova_compute[117331]:     `self._bootstrap_inner()`
Oct 09 16:45:19 compute-0 nova_compute[117331]: 
Oct 09 16:45:19 compute-0 nova_compute[117331]: /usr/lib64/python3.12/threading.py:1075 in _bootstrap_inner
Oct 09 16:45:19 compute-0 nova_compute[117331]:     `self.run()`
Oct 09 16:45:19 compute-0 nova_compute[117331]: 
Oct 09 16:45:19 compute-0 nova_compute[117331]: /usr/lib64/python3.12/threading.py:1012 in run
Oct 09 16:45:19 compute-0 nova_compute[117331]:     `self._target(*self._args, **self._kwargs)`
Oct 09 16:45:19 compute-0 nova_compute[117331]: 
Oct 09 16:45:19 compute-0 nova_compute[117331]: /usr/lib/python3.12/site-packages/eventlet/tpool.py:74 in tworker
Oct 09 16:45:19 compute-0 nova_compute[117331]:     `msg = _reqq.get()`
Oct 09 16:45:19 compute-0 nova_compute[117331]: 
Oct 09 16:45:19 compute-0 nova_compute[117331]: /usr/lib64/python3.12/queue.py:171 in get
Oct 09 16:45:19 compute-0 nova_compute[117331]:     `self.not_empty.wait()`
Oct 09 16:45:19 compute-0 nova_compute[117331]: 
Oct 09 16:45:19 compute-0 nova_compute[117331]: /usr/lib64/python3.12/threading.py:355 in wait
Oct 09 16:45:19 compute-0 nova_compute[117331]:     `waiter.acquire()`
Oct 09 16:45:19 compute-0 nova_compute[117331]: 
Oct 09 16:45:19 compute-0 nova_compute[117331]: ------                  Thread #139857196205760                   ------
Oct 09 16:45:19 compute-0 nova_compute[117331]: 
Oct 09 16:45:19 compute-0 nova_compute[117331]: /usr/lib64/python3.12/threading.py:1032 in _bootstrap
Oct 09 16:45:19 compute-0 nova_compute[117331]:     `self._bootstrap_inner()`
Oct 09 16:45:19 compute-0 nova_compute[117331]: 
Oct 09 16:45:19 compute-0 nova_compute[117331]: /usr/lib64/python3.12/threading.py:1075 in _bootstrap_inner
Oct 09 16:45:19 compute-0 nova_compute[117331]:     `self.run()`
Oct 09 16:45:19 compute-0 nova_compute[117331]: 
Oct 09 16:45:19 compute-0 nova_compute[117331]: /usr/lib64/python3.12/threading.py:1012 in run
Oct 09 16:45:19 compute-0 nova_compute[117331]:     `self._target(*self._args, **self._kwargs)`
Oct 09 16:45:19 compute-0 nova_compute[117331]: 
Oct 09 16:45:19 compute-0 nova_compute[117331]: /usr/lib/python3.12/site-packages/eventlet/tpool.py:74 in tworker
Oct 09 16:45:19 compute-0 nova_compute[117331]:     `msg = _reqq.get()`
Oct 09 16:45:19 compute-0 nova_compute[117331]: 
Oct 09 16:45:19 compute-0 nova_compute[117331]: /usr/lib64/python3.12/queue.py:171 in get
Oct 09 16:45:19 compute-0 nova_compute[117331]:     `self.not_empty.wait()`
Oct 09 16:45:19 compute-0 nova_compute[117331]: 
Oct 09 16:45:19 compute-0 nova_compute[117331]: /usr/lib64/python3.12/threading.py:355 in wait
Oct 09 16:45:19 compute-0 nova_compute[117331]:     `waiter.acquire()`
Oct 09 16:45:19 compute-0 nova_compute[117331]: 
Oct 09 16:45:19 compute-0 nova_compute[117331]: ------                  Thread #139857204598464                   ------
Oct 09 16:45:19 compute-0 nova_compute[117331]: 
Oct 09 16:45:19 compute-0 nova_compute[117331]: /usr/lib64/python3.12/threading.py:1032 in _bootstrap
Oct 09 16:45:19 compute-0 nova_compute[117331]:     `self._bootstrap_inner()`
Oct 09 16:45:19 compute-0 nova_compute[117331]: 
Oct 09 16:45:19 compute-0 nova_compute[117331]: /usr/lib64/python3.12/threading.py:1075 in _bootstrap_inner
Oct 09 16:45:19 compute-0 nova_compute[117331]:     `self.run()`
Oct 09 16:45:19 compute-0 nova_compute[117331]: 
Oct 09 16:45:19 compute-0 nova_compute[117331]: /usr/lib64/python3.12/threading.py:1012 in run
Oct 09 16:45:19 compute-0 nova_compute[117331]:     `self._target(*self._args, **self._kwargs)`
Oct 09 16:45:19 compute-0 nova_compute[117331]: 
Oct 09 16:45:19 compute-0 nova_compute[117331]: /usr/lib/python3.12/site-packages/eventlet/tpool.py:74 in tworker
Oct 09 16:45:19 compute-0 nova_compute[117331]:     `msg = _reqq.get()`
Oct 09 16:45:19 compute-0 nova_compute[117331]: 
Oct 09 16:45:19 compute-0 nova_compute[117331]: /usr/lib64/python3.12/queue.py:171 in get
Oct 09 16:45:19 compute-0 nova_compute[117331]:     `self.not_empty.wait()`
Oct 09 16:45:19 compute-0 nova_compute[117331]: 
Oct 09 16:45:19 compute-0 nova_compute[117331]: /usr/lib64/python3.12/threading.py:355 in wait
Oct 09 16:45:19 compute-0 nova_compute[117331]:     `waiter.acquire()`
Oct 09 16:45:19 compute-0 nova_compute[117331]: 
Oct 09 16:45:19 compute-0 nova_compute[117331]: ------                  Thread #139857212991168                   ------
Oct 09 16:45:19 compute-0 nova_compute[117331]: 
Oct 09 16:45:19 compute-0 nova_compute[117331]: /usr/lib64/python3.12/threading.py:1032 in _bootstrap
Oct 09 16:45:19 compute-0 nova_compute[117331]:     `self._bootstrap_inner()`
Oct 09 16:45:19 compute-0 nova_compute[117331]: 
Oct 09 16:45:19 compute-0 nova_compute[117331]: /usr/lib64/python3.12/threading.py:1075 in _bootstrap_inner
Oct 09 16:45:19 compute-0 nova_compute[117331]:     `self.run()`
Oct 09 16:45:19 compute-0 nova_compute[117331]: 
Oct 09 16:45:19 compute-0 nova_compute[117331]: /usr/lib64/python3.12/threading.py:1012 in run
Oct 09 16:45:19 compute-0 nova_compute[117331]:     `self._target(*self._args, **self._kwargs)`
Oct 09 16:45:19 compute-0 nova_compute[117331]: 
Oct 09 16:45:19 compute-0 nova_compute[117331]: /usr/lib/python3.12/site-packages/eventlet/tpool.py:74 in tworker
Oct 09 16:45:19 compute-0 nova_compute[117331]:     `msg = _reqq.get()`
Oct 09 16:45:19 compute-0 nova_compute[117331]: 
Oct 09 16:45:19 compute-0 nova_compute[117331]: /usr/lib64/python3.12/queue.py:171 in get
Oct 09 16:45:19 compute-0 nova_compute[117331]:     `self.not_empty.wait()`
Oct 09 16:45:19 compute-0 nova_compute[117331]: 
Oct 09 16:45:19 compute-0 nova_compute[117331]: /usr/lib64/python3.12/threading.py:355 in wait
Oct 09 16:45:19 compute-0 nova_compute[117331]:     `waiter.acquire()`
Oct 09 16:45:19 compute-0 nova_compute[117331]: 
Oct 09 16:45:19 compute-0 nova_compute[117331]: ------                  Thread #139857221383872                   ------
Oct 09 16:45:19 compute-0 nova_compute[117331]: 
Oct 09 16:45:19 compute-0 nova_compute[117331]: /usr/lib64/python3.12/threading.py:1032 in _bootstrap
Oct 09 16:45:19 compute-0 nova_compute[117331]:     `self._bootstrap_inner()`
Oct 09 16:45:19 compute-0 nova_compute[117331]: 
Oct 09 16:45:19 compute-0 nova_compute[117331]: /usr/lib64/python3.12/threading.py:1075 in _bootstrap_inner
Oct 09 16:45:19 compute-0 nova_compute[117331]:     `self.run()`
Oct 09 16:45:19 compute-0 nova_compute[117331]: 
Oct 09 16:45:19 compute-0 nova_compute[117331]: /usr/lib64/python3.12/threading.py:1012 in run
Oct 09 16:45:19 compute-0 nova_compute[117331]:     `self._target(*self._args, **self._kwargs)`
Oct 09 16:45:19 compute-0 nova_compute[117331]: 
Oct 09 16:45:19 compute-0 nova_compute[117331]: /usr/lib/python3.12/site-packages/eventlet/tpool.py:74 in tworker
Oct 09 16:45:19 compute-0 nova_compute[117331]:     `msg = _reqq.get()`
Oct 09 16:45:19 compute-0 nova_compute[117331]: 
Oct 09 16:45:19 compute-0 nova_compute[117331]: /usr/lib64/python3.12/queue.py:171 in get
Oct 09 16:45:19 compute-0 nova_compute[117331]:     `self.not_empty.wait()`
Oct 09 16:45:19 compute-0 nova_compute[117331]: 
Oct 09 16:45:19 compute-0 nova_compute[117331]: /usr/lib64/python3.12/threading.py:355 in wait
Oct 09 16:45:19 compute-0 nova_compute[117331]:     `waiter.acquire()`
Oct 09 16:45:19 compute-0 nova_compute[117331]: 
Oct 09 16:45:19 compute-0 nova_compute[117331]: ------                  Thread #139857372354240                   ------
Oct 09 16:45:19 compute-0 nova_compute[117331]: 
Oct 09 16:45:19 compute-0 nova_compute[117331]: /usr/lib64/python3.12/threading.py:1032 in _bootstrap
Oct 09 16:45:19 compute-0 nova_compute[117331]:     `self._bootstrap_inner()`
Oct 09 16:45:19 compute-0 nova_compute[117331]: 
Oct 09 16:45:19 compute-0 nova_compute[117331]: /usr/lib64/python3.12/threading.py:1075 in _bootstrap_inner
Oct 09 16:45:19 compute-0 nova_compute[117331]:     `self.run()`
Oct 09 16:45:19 compute-0 nova_compute[117331]: 
Oct 09 16:45:19 compute-0 nova_compute[117331]: /usr/lib64/python3.12/threading.py:1012 in run
Oct 09 16:45:19 compute-0 nova_compute[117331]:     `self._target(*self._args, **self._kwargs)`
Oct 09 16:45:19 compute-0 nova_compute[117331]: 
Oct 09 16:45:19 compute-0 nova_compute[117331]: /usr/lib/python3.12/site-packages/eventlet/tpool.py:74 in tworker
Oct 09 16:45:19 compute-0 nova_compute[117331]:     `msg = _reqq.get()`
Oct 09 16:45:19 compute-0 nova_compute[117331]: 
Oct 09 16:45:19 compute-0 nova_compute[117331]: /usr/lib64/python3.12/queue.py:171 in get
Oct 09 16:45:19 compute-0 nova_compute[117331]:     `self.not_empty.wait()`
Oct 09 16:45:19 compute-0 nova_compute[117331]: 
Oct 09 16:45:19 compute-0 nova_compute[117331]: /usr/lib64/python3.12/threading.py:355 in wait
Oct 09 16:45:19 compute-0 nova_compute[117331]:     `waiter.acquire()`
Oct 09 16:45:19 compute-0 nova_compute[117331]: 
Oct 09 16:45:19 compute-0 nova_compute[117331]: ------                  Thread #139857380746944                   ------
Oct 09 16:45:19 compute-0 nova_compute[117331]: 
Oct 09 16:45:19 compute-0 nova_compute[117331]: /usr/lib64/python3.12/threading.py:1032 in _bootstrap
Oct 09 16:45:19 compute-0 nova_compute[117331]:     `self._bootstrap_inner()`
Oct 09 16:45:19 compute-0 nova_compute[117331]: 
Oct 09 16:45:19 compute-0 nova_compute[117331]: /usr/lib64/python3.12/threading.py:1075 in _bootstrap_inner
Oct 09 16:45:19 compute-0 nova_compute[117331]:     `self.run()`
Oct 09 16:45:19 compute-0 nova_compute[117331]: 
Oct 09 16:45:19 compute-0 nova_compute[117331]: /usr/lib64/python3.12/threading.py:1012 in run
Oct 09 16:45:19 compute-0 nova_compute[117331]:     `self._target(*self._args, **self._kwargs)`
Oct 09 16:45:19 compute-0 nova_compute[117331]: 
Oct 09 16:45:19 compute-0 nova_compute[117331]: /usr/lib/python3.12/site-packages/eventlet/tpool.py:74 in tworker
Oct 09 16:45:19 compute-0 nova_compute[117331]:     `msg = _reqq.get()`
Oct 09 16:45:19 compute-0 nova_compute[117331]: 
Oct 09 16:45:19 compute-0 nova_compute[117331]: /usr/lib64/python3.12/queue.py:171 in get
Oct 09 16:45:19 compute-0 nova_compute[117331]:     `self.not_empty.wait()`
Oct 09 16:45:19 compute-0 nova_compute[117331]: 
Oct 09 16:45:19 compute-0 nova_compute[117331]: /usr/lib64/python3.12/threading.py:355 in wait
Oct 09 16:45:19 compute-0 nova_compute[117331]:     `waiter.acquire()`
Oct 09 16:45:19 compute-0 nova_compute[117331]: 
Oct 09 16:45:19 compute-0 nova_compute[117331]: ------                  Thread #139857389139648                   ------
Oct 09 16:45:19 compute-0 nova_compute[117331]: 
Oct 09 16:45:19 compute-0 nova_compute[117331]: /usr/lib64/python3.12/threading.py:1032 in _bootstrap
Oct 09 16:45:19 compute-0 nova_compute[117331]:     `self._bootstrap_inner()`
Oct 09 16:45:19 compute-0 nova_compute[117331]: 
Oct 09 16:45:19 compute-0 nova_compute[117331]: /usr/lib64/python3.12/threading.py:1075 in _bootstrap_inner
Oct 09 16:45:19 compute-0 nova_compute[117331]:     `self.run()`
Oct 09 16:45:19 compute-0 nova_compute[117331]: 
Oct 09 16:45:19 compute-0 nova_compute[117331]: /usr/lib64/python3.12/threading.py:1012 in run
Oct 09 16:45:19 compute-0 nova_compute[117331]:     `self._target(*self._args, **self._kwargs)`
Oct 09 16:45:19 compute-0 nova_compute[117331]: 
Oct 09 16:45:19 compute-0 nova_compute[117331]: /usr/lib/python3.12/site-packages/eventlet/tpool.py:74 in tworker
Oct 09 16:45:19 compute-0 nova_compute[117331]:     `msg = _reqq.get()`
Oct 09 16:45:19 compute-0 nova_compute[117331]: 
Oct 09 16:45:19 compute-0 nova_compute[117331]: /usr/lib64/python3.12/queue.py:171 in get
Oct 09 16:45:19 compute-0 nova_compute[117331]:     `self.not_empty.wait()`
Oct 09 16:45:19 compute-0 nova_compute[117331]: 
Oct 09 16:45:19 compute-0 nova_compute[117331]: /usr/lib64/python3.12/threading.py:355 in wait
Oct 09 16:45:19 compute-0 nova_compute[117331]:     `waiter.acquire()`
Oct 09 16:45:19 compute-0 nova_compute[117331]: 
Oct 09 16:45:19 compute-0 nova_compute[117331]: ------                  Thread #139857397532352                   ------
Oct 09 16:45:19 compute-0 nova_compute[117331]: 
Oct 09 16:45:19 compute-0 nova_compute[117331]: /usr/lib64/python3.12/threading.py:1032 in _bootstrap
Oct 09 16:45:19 compute-0 nova_compute[117331]:     `self._bootstrap_inner()`
Oct 09 16:45:19 compute-0 nova_compute[117331]: 
Oct 09 16:45:19 compute-0 nova_compute[117331]: /usr/lib64/python3.12/threading.py:1075 in _bootstrap_inner
Oct 09 16:45:19 compute-0 nova_compute[117331]:     `self.run()`
Oct 09 16:45:19 compute-0 nova_compute[117331]: 
Oct 09 16:45:19 compute-0 nova_compute[117331]: /usr/lib64/python3.12/threading.py:1012 in run
Oct 09 16:45:19 compute-0 nova_compute[117331]:     `self._target(*self._args, **self._kwargs)`
Oct 09 16:45:19 compute-0 nova_compute[117331]: 
Oct 09 16:45:19 compute-0 nova_compute[117331]: /usr/lib/python3.12/site-packages/eventlet/tpool.py:74 in tworker
Oct 09 16:45:19 compute-0 nova_compute[117331]:     `msg = _reqq.get()`
Oct 09 16:45:19 compute-0 nova_compute[117331]: 
Oct 09 16:45:19 compute-0 nova_compute[117331]: /usr/lib64/python3.12/queue.py:171 in get
Oct 09 16:45:19 compute-0 nova_compute[117331]:     `self.not_empty.wait()`
Oct 09 16:45:19 compute-0 nova_compute[117331]: 
Oct 09 16:45:19 compute-0 nova_compute[117331]: /usr/lib64/python3.12/threading.py:355 in wait
Oct 09 16:45:19 compute-0 nova_compute[117331]:     `waiter.acquire()`
Oct 09 16:45:19 compute-0 nova_compute[117331]: 
Oct 09 16:45:19 compute-0 nova_compute[117331]: ------                  Thread #139857405925056                   ------
Oct 09 16:45:19 compute-0 nova_compute[117331]: 
Oct 09 16:45:19 compute-0 nova_compute[117331]: /usr/lib64/python3.12/threading.py:1032 in _bootstrap
Oct 09 16:45:19 compute-0 nova_compute[117331]:     `self._bootstrap_inner()`
Oct 09 16:45:19 compute-0 nova_compute[117331]: 
Oct 09 16:45:19 compute-0 nova_compute[117331]: /usr/lib64/python3.12/threading.py:1075 in _bootstrap_inner
Oct 09 16:45:19 compute-0 nova_compute[117331]:     `self.run()`
Oct 09 16:45:19 compute-0 nova_compute[117331]: 
Oct 09 16:45:19 compute-0 nova_compute[117331]: /usr/lib64/python3.12/threading.py:1012 in run
Oct 09 16:45:19 compute-0 nova_compute[117331]:     `self._target(*self._args, **self._kwargs)`
Oct 09 16:45:19 compute-0 nova_compute[117331]: 
Oct 09 16:45:19 compute-0 nova_compute[117331]: /usr/lib/python3.12/site-packages/eventlet/tpool.py:74 in tworker
Oct 09 16:45:19 compute-0 nova_compute[117331]:     `msg = _reqq.get()`
Oct 09 16:45:19 compute-0 nova_compute[117331]: 
Oct 09 16:45:19 compute-0 nova_compute[117331]: /usr/lib64/python3.12/queue.py:171 in get
Oct 09 16:45:19 compute-0 nova_compute[117331]:     `self.not_empty.wait()`
Oct 09 16:45:19 compute-0 nova_compute[117331]: 
Oct 09 16:45:19 compute-0 nova_compute[117331]: /usr/lib64/python3.12/threading.py:355 in wait
Oct 09 16:45:19 compute-0 nova_compute[117331]:     `waiter.acquire()`
Oct 09 16:45:19 compute-0 nova_compute[117331]: 
Oct 09 16:45:19 compute-0 nova_compute[117331]: ------                  Thread #139857414317760                   ------
Oct 09 16:45:19 compute-0 nova_compute[117331]: 
Oct 09 16:45:19 compute-0 nova_compute[117331]: /usr/lib64/python3.12/threading.py:1032 in _bootstrap
Oct 09 16:45:19 compute-0 nova_compute[117331]:     `self._bootstrap_inner()`
Oct 09 16:45:19 compute-0 nova_compute[117331]: 
Oct 09 16:45:19 compute-0 nova_compute[117331]: /usr/lib64/python3.12/threading.py:1075 in _bootstrap_inner
Oct 09 16:45:19 compute-0 nova_compute[117331]:     `self.run()`
Oct 09 16:45:19 compute-0 nova_compute[117331]: 
Oct 09 16:45:19 compute-0 nova_compute[117331]: /usr/lib64/python3.12/threading.py:1012 in run
Oct 09 16:45:19 compute-0 nova_compute[117331]:     `self._target(*self._args, **self._kwargs)`
Oct 09 16:45:19 compute-0 nova_compute[117331]: 
Oct 09 16:45:19 compute-0 nova_compute[117331]: /usr/lib/python3.12/site-packages/eventlet/tpool.py:74 in tworker
Oct 09 16:45:19 compute-0 nova_compute[117331]:     `msg = _reqq.get()`
Oct 09 16:45:19 compute-0 nova_compute[117331]: 
Oct 09 16:45:19 compute-0 nova_compute[117331]: /usr/lib64/python3.12/queue.py:171 in get
Oct 09 16:45:19 compute-0 nova_compute[117331]:     `self.not_empty.wait()`
Oct 09 16:45:19 compute-0 nova_compute[117331]: 
Oct 09 16:45:19 compute-0 nova_compute[117331]: /usr/lib64/python3.12/threading.py:355 in wait
Oct 09 16:45:19 compute-0 nova_compute[117331]:     `waiter.acquire()`
Oct 09 16:45:19 compute-0 nova_compute[117331]: 
Oct 09 16:45:19 compute-0 nova_compute[117331]: ------                  Thread #139857422710464                   ------
Oct 09 16:45:19 compute-0 nova_compute[117331]: 
Oct 09 16:45:19 compute-0 nova_compute[117331]: /usr/lib64/python3.12/threading.py:1032 in _bootstrap
Oct 09 16:45:19 compute-0 nova_compute[117331]:     `self._bootstrap_inner()`
Oct 09 16:45:19 compute-0 nova_compute[117331]: 
Oct 09 16:45:19 compute-0 nova_compute[117331]: /usr/lib64/python3.12/threading.py:1075 in _bootstrap_inner
Oct 09 16:45:19 compute-0 nova_compute[117331]:     `self.run()`
Oct 09 16:45:19 compute-0 nova_compute[117331]: 
Oct 09 16:45:19 compute-0 nova_compute[117331]: /usr/lib64/python3.12/threading.py:1012 in run
Oct 09 16:45:19 compute-0 nova_compute[117331]:     `self._target(*self._args, **self._kwargs)`
Oct 09 16:45:19 compute-0 nova_compute[117331]: 
Oct 09 16:45:19 compute-0 nova_compute[117331]: /usr/lib/python3.12/site-packages/eventlet/tpool.py:74 in tworker
Oct 09 16:45:19 compute-0 nova_compute[117331]:     `msg = _reqq.get()`
Oct 09 16:45:19 compute-0 nova_compute[117331]: 
Oct 09 16:45:19 compute-0 nova_compute[117331]: /usr/lib64/python3.12/queue.py:171 in get
Oct 09 16:45:19 compute-0 nova_compute[117331]:     `self.not_empty.wait()`
Oct 09 16:45:19 compute-0 nova_compute[117331]: 
Oct 09 16:45:19 compute-0 nova_compute[117331]: /usr/lib64/python3.12/threading.py:355 in wait
Oct 09 16:45:19 compute-0 nova_compute[117331]:     `waiter.acquire()`
Oct 09 16:45:19 compute-0 nova_compute[117331]: 
Oct 09 16:45:19 compute-0 nova_compute[117331]: ------                  Thread #139857500206784                   ------
Oct 09 16:45:19 compute-0 nova_compute[117331]: 
Oct 09 16:45:19 compute-0 nova_compute[117331]: /usr/lib64/python3.12/threading.py:1032 in _bootstrap
Oct 09 16:45:19 compute-0 nova_compute[117331]:     `self._bootstrap_inner()`
Oct 09 16:45:19 compute-0 nova_compute[117331]: 
Oct 09 16:45:19 compute-0 nova_compute[117331]: /usr/lib64/python3.12/threading.py:1075 in _bootstrap_inner
Oct 09 16:45:19 compute-0 nova_compute[117331]:     `self.run()`
Oct 09 16:45:19 compute-0 nova_compute[117331]: 
Oct 09 16:45:19 compute-0 nova_compute[117331]: /usr/lib64/python3.12/threading.py:1012 in run
Oct 09 16:45:19 compute-0 nova_compute[117331]:     `self._target(*self._args, **self._kwargs)`
Oct 09 16:45:19 compute-0 nova_compute[117331]: 
Oct 09 16:45:19 compute-0 nova_compute[117331]: /usr/lib/python3.12/site-packages/eventlet/tpool.py:74 in tworker
Oct 09 16:45:19 compute-0 nova_compute[117331]:     `msg = _reqq.get()`
Oct 09 16:45:19 compute-0 nova_compute[117331]: 
Oct 09 16:45:19 compute-0 nova_compute[117331]: /usr/lib64/python3.12/queue.py:171 in get
Oct 09 16:45:19 compute-0 nova_compute[117331]:     `self.not_empty.wait()`
Oct 09 16:45:19 compute-0 nova_compute[117331]: 
Oct 09 16:45:19 compute-0 nova_compute[117331]: /usr/lib64/python3.12/threading.py:355 in wait
Oct 09 16:45:19 compute-0 nova_compute[117331]:     `waiter.acquire()`
Oct 09 16:45:19 compute-0 nova_compute[117331]: 
Oct 09 16:45:19 compute-0 nova_compute[117331]: ------                  Thread #139857508615872                   ------
Oct 09 16:45:19 compute-0 nova_compute[117331]: 
Oct 09 16:45:19 compute-0 nova_compute[117331]: /usr/lib64/python3.12/threading.py:1032 in _bootstrap
Oct 09 16:45:19 compute-0 nova_compute[117331]:     `self._bootstrap_inner()`
Oct 09 16:45:19 compute-0 nova_compute[117331]: 
Oct 09 16:45:19 compute-0 nova_compute[117331]: /usr/lib64/python3.12/threading.py:1075 in _bootstrap_inner
Oct 09 16:45:19 compute-0 nova_compute[117331]:     `self.run()`
Oct 09 16:45:19 compute-0 nova_compute[117331]: 
Oct 09 16:45:19 compute-0 nova_compute[117331]: /usr/lib64/python3.12/threading.py:1012 in run
Oct 09 16:45:19 compute-0 nova_compute[117331]:     `self._target(*self._args, **self._kwargs)`
Oct 09 16:45:19 compute-0 nova_compute[117331]: 
Oct 09 16:45:19 compute-0 nova_compute[117331]: /usr/lib/python3.12/site-packages/eventlet/tpool.py:74 in tworker
Oct 09 16:45:19 compute-0 nova_compute[117331]:     `msg = _reqq.get()`
Oct 09 16:45:19 compute-0 nova_compute[117331]: 
Oct 09 16:45:19 compute-0 nova_compute[117331]: /usr/lib64/python3.12/queue.py:171 in get
Oct 09 16:45:19 compute-0 nova_compute[117331]:     `self.not_empty.wait()`
Oct 09 16:45:19 compute-0 nova_compute[117331]: 
Oct 09 16:45:19 compute-0 nova_compute[117331]: /usr/lib64/python3.12/threading.py:355 in wait
Oct 09 16:45:19 compute-0 nova_compute[117331]:     `waiter.acquire()`
Oct 09 16:45:19 compute-0 nova_compute[117331]: 
Oct 09 16:45:19 compute-0 nova_compute[117331]: ------                  Thread #139857517024960                   ------
Oct 09 16:45:19 compute-0 nova_compute[117331]: 
Oct 09 16:45:19 compute-0 nova_compute[117331]: /usr/lib64/python3.12/threading.py:1032 in _bootstrap
Oct 09 16:45:19 compute-0 nova_compute[117331]:     `self._bootstrap_inner()`
Oct 09 16:45:19 compute-0 nova_compute[117331]: 
Oct 09 16:45:19 compute-0 nova_compute[117331]: /usr/lib64/python3.12/threading.py:1075 in _bootstrap_inner
Oct 09 16:45:19 compute-0 nova_compute[117331]:     `self.run()`
Oct 09 16:45:19 compute-0 nova_compute[117331]: 
Oct 09 16:45:19 compute-0 nova_compute[117331]: /usr/lib64/python3.12/threading.py:1012 in run
Oct 09 16:45:19 compute-0 nova_compute[117331]:     `self._target(*self._args, **self._kwargs)`
Oct 09 16:45:19 compute-0 nova_compute[117331]: 
Oct 09 16:45:19 compute-0 nova_compute[117331]: /usr/lib/python3.12/site-packages/eventlet/tpool.py:74 in tworker
Oct 09 16:45:19 compute-0 nova_compute[117331]:     `msg = _reqq.get()`
Oct 09 16:45:19 compute-0 nova_compute[117331]: 
Oct 09 16:45:19 compute-0 nova_compute[117331]: /usr/lib64/python3.12/queue.py:171 in get
Oct 09 16:45:19 compute-0 nova_compute[117331]:     `self.not_empty.wait()`
Oct 09 16:45:19 compute-0 nova_compute[117331]: 
Oct 09 16:45:19 compute-0 nova_compute[117331]: /usr/lib64/python3.12/threading.py:355 in wait
Oct 09 16:45:19 compute-0 nova_compute[117331]:     `waiter.acquire()`
Oct 09 16:45:19 compute-0 nova_compute[117331]: 
Oct 09 16:45:19 compute-0 nova_compute[117331]: ------                  Thread #139857525434048                   ------
Oct 09 16:45:19 compute-0 nova_compute[117331]: 
Oct 09 16:45:19 compute-0 nova_compute[117331]: /usr/lib64/python3.12/threading.py:1032 in _bootstrap
Oct 09 16:45:19 compute-0 nova_compute[117331]:     `self._bootstrap_inner()`
Oct 09 16:45:19 compute-0 nova_compute[117331]: 
Oct 09 16:45:19 compute-0 nova_compute[117331]: /usr/lib64/python3.12/threading.py:1075 in _bootstrap_inner
Oct 09 16:45:19 compute-0 nova_compute[117331]:     `self.run()`
Oct 09 16:45:19 compute-0 nova_compute[117331]: 
Oct 09 16:45:19 compute-0 nova_compute[117331]: /usr/lib64/python3.12/threading.py:1012 in run
Oct 09 16:45:19 compute-0 nova_compute[117331]:     `self._target(*self._args, **self._kwargs)`
Oct 09 16:45:19 compute-0 nova_compute[117331]: 
Oct 09 16:45:19 compute-0 nova_compute[117331]: /usr/lib/python3.12/site-packages/eventlet/tpool.py:74 in tworker
Oct 09 16:45:19 compute-0 nova_compute[117331]:     `msg = _reqq.get()`
Oct 09 16:45:19 compute-0 nova_compute[117331]: 
Oct 09 16:45:19 compute-0 nova_compute[117331]: /usr/lib64/python3.12/queue.py:171 in get
Oct 09 16:45:19 compute-0 nova_compute[117331]:     `self.not_empty.wait()`
Oct 09 16:45:19 compute-0 nova_compute[117331]: 
Oct 09 16:45:19 compute-0 nova_compute[117331]: /usr/lib64/python3.12/threading.py:355 in wait
Oct 09 16:45:19 compute-0 nova_compute[117331]:     `waiter.acquire()`
Oct 09 16:45:19 compute-0 nova_compute[117331]: 
Oct 09 16:45:19 compute-0 nova_compute[117331]: ------                  Thread #139857533843136                   ------
Oct 09 16:45:19 compute-0 nova_compute[117331]: 
Oct 09 16:45:19 compute-0 nova_compute[117331]: /usr/lib64/python3.12/threading.py:1032 in _bootstrap
Oct 09 16:45:19 compute-0 nova_compute[117331]:     `self._bootstrap_inner()`
Oct 09 16:45:19 compute-0 nova_compute[117331]: 
Oct 09 16:45:19 compute-0 nova_compute[117331]: /usr/lib64/python3.12/threading.py:1075 in _bootstrap_inner
Oct 09 16:45:19 compute-0 nova_compute[117331]:     `self.run()`
Oct 09 16:45:19 compute-0 nova_compute[117331]: 
Oct 09 16:45:19 compute-0 nova_compute[117331]: /usr/lib64/python3.12/threading.py:1012 in run
Oct 09 16:45:19 compute-0 nova_compute[117331]:     `self._target(*self._args, **self._kwargs)`
Oct 09 16:45:19 compute-0 nova_compute[117331]: 
Oct 09 16:45:19 compute-0 nova_compute[117331]: /usr/lib/python3.12/site-packages/eventlet/tpool.py:74 in tworker
Oct 09 16:45:19 compute-0 nova_compute[117331]:     `msg = _reqq.get()`
Oct 09 16:45:19 compute-0 nova_compute[117331]: 
Oct 09 16:45:19 compute-0 nova_compute[117331]: /usr/lib64/python3.12/queue.py:171 in get
Oct 09 16:45:19 compute-0 nova_compute[117331]:     `self.not_empty.wait()`
Oct 09 16:45:19 compute-0 nova_compute[117331]: 
Oct 09 16:45:19 compute-0 nova_compute[117331]: /usr/lib64/python3.12/threading.py:355 in wait
Oct 09 16:45:19 compute-0 nova_compute[117331]:     `waiter.acquire()`
Oct 09 16:45:19 compute-0 nova_compute[117331]: 
Oct 09 16:45:19 compute-0 nova_compute[117331]: ------                  Thread #139857542252224                   ------
Oct 09 16:45:19 compute-0 nova_compute[117331]: 
Oct 09 16:45:19 compute-0 nova_compute[117331]: /usr/lib64/python3.12/threading.py:1032 in _bootstrap
Oct 09 16:45:19 compute-0 nova_compute[117331]:     `self._bootstrap_inner()`
Oct 09 16:45:19 compute-0 nova_compute[117331]: 
Oct 09 16:45:19 compute-0 nova_compute[117331]: /usr/lib64/python3.12/threading.py:1075 in _bootstrap_inner
Oct 09 16:45:19 compute-0 nova_compute[117331]:     `self.run()`
Oct 09 16:45:19 compute-0 nova_compute[117331]: 
Oct 09 16:45:19 compute-0 nova_compute[117331]: /usr/lib64/python3.12/threading.py:1012 in run
Oct 09 16:45:19 compute-0 nova_compute[117331]:     `self._target(*self._args, **self._kwargs)`
Oct 09 16:45:19 compute-0 nova_compute[117331]: 
Oct 09 16:45:19 compute-0 nova_compute[117331]: /usr/lib/python3.12/site-packages/eventlet/tpool.py:74 in tworker
Oct 09 16:45:19 compute-0 nova_compute[117331]:     `msg = _reqq.get()`
Oct 09 16:45:19 compute-0 nova_compute[117331]: 
Oct 09 16:45:19 compute-0 nova_compute[117331]: /usr/lib64/python3.12/queue.py:171 in get
Oct 09 16:45:19 compute-0 nova_compute[117331]:     `self.not_empty.wait()`
Oct 09 16:45:19 compute-0 nova_compute[117331]: 
Oct 09 16:45:19 compute-0 nova_compute[117331]: /usr/lib64/python3.12/threading.py:355 in wait
Oct 09 16:45:19 compute-0 nova_compute[117331]:     `waiter.acquire()`
Oct 09 16:45:19 compute-0 nova_compute[117331]: 
Oct 09 16:45:19 compute-0 nova_compute[117331]: ------                  Thread #139857550661312                   ------
Oct 09 16:45:19 compute-0 nova_compute[117331]: 
Oct 09 16:45:19 compute-0 nova_compute[117331]: /usr/lib64/python3.12/threading.py:1032 in _bootstrap
Oct 09 16:45:19 compute-0 nova_compute[117331]:     `self._bootstrap_inner()`
Oct 09 16:45:19 compute-0 nova_compute[117331]: 
Oct 09 16:45:19 compute-0 nova_compute[117331]: /usr/lib64/python3.12/threading.py:1075 in _bootstrap_inner
Oct 09 16:45:19 compute-0 nova_compute[117331]:     `self.run()`
Oct 09 16:45:19 compute-0 nova_compute[117331]: 
Oct 09 16:45:19 compute-0 nova_compute[117331]: /usr/lib64/python3.12/threading.py:1012 in run
Oct 09 16:45:19 compute-0 nova_compute[117331]:     `self._target(*self._args, **self._kwargs)`
Oct 09 16:45:19 compute-0 nova_compute[117331]: 
Oct 09 16:45:19 compute-0 nova_compute[117331]: /usr/lib/python3.12/site-packages/eventlet/tpool.py:74 in tworker
Oct 09 16:45:19 compute-0 nova_compute[117331]:     `msg = _reqq.get()`
Oct 09 16:45:19 compute-0 nova_compute[117331]: 
Oct 09 16:45:19 compute-0 nova_compute[117331]: /usr/lib64/python3.12/queue.py:171 in get
Oct 09 16:45:19 compute-0 nova_compute[117331]:     `self.not_empty.wait()`
Oct 09 16:45:19 compute-0 nova_compute[117331]: 
Oct 09 16:45:19 compute-0 nova_compute[117331]: /usr/lib64/python3.12/threading.py:355 in wait
Oct 09 16:45:19 compute-0 nova_compute[117331]:     `waiter.acquire()`
Oct 09 16:45:19 compute-0 nova_compute[117331]: 
Oct 09 16:45:19 compute-0 nova_compute[117331]: ------                  Thread #139857559070400                   ------
Oct 09 16:45:19 compute-0 nova_compute[117331]: 
Oct 09 16:45:19 compute-0 nova_compute[117331]: /usr/lib64/python3.12/threading.py:1032 in _bootstrap
Oct 09 16:45:19 compute-0 nova_compute[117331]:     `self._bootstrap_inner()`
Oct 09 16:45:19 compute-0 nova_compute[117331]: 
Oct 09 16:45:19 compute-0 nova_compute[117331]: /usr/lib64/python3.12/threading.py:1075 in _bootstrap_inner
Oct 09 16:45:19 compute-0 nova_compute[117331]:     `self.run()`
Oct 09 16:45:19 compute-0 nova_compute[117331]: 
Oct 09 16:45:19 compute-0 nova_compute[117331]: /usr/lib64/python3.12/threading.py:1012 in run
Oct 09 16:45:19 compute-0 nova_compute[117331]:     `self._target(*self._args, **self._kwargs)`
Oct 09 16:45:19 compute-0 nova_compute[117331]: 
Oct 09 16:45:19 compute-0 nova_compute[117331]: /usr/lib/python3.12/site-packages/eventlet/tpool.py:74 in tworker
Oct 09 16:45:19 compute-0 nova_compute[117331]:     `msg = _reqq.get()`
Oct 09 16:45:19 compute-0 nova_compute[117331]: 
Oct 09 16:45:19 compute-0 nova_compute[117331]: /usr/lib64/python3.12/queue.py:171 in get
Oct 09 16:45:19 compute-0 nova_compute[117331]:     `self.not_empty.wait()`
Oct 09 16:45:19 compute-0 nova_compute[117331]: 
Oct 09 16:45:19 compute-0 nova_compute[117331]: /usr/lib64/python3.12/threading.py:355 in wait
Oct 09 16:45:19 compute-0 nova_compute[117331]:     `waiter.acquire()`
Oct 09 16:45:19 compute-0 nova_compute[117331]: 
Oct 09 16:45:19 compute-0 nova_compute[117331]: ------                  Thread #139857720299136                   ------
Oct 09 16:45:19 compute-0 nova_compute[117331]: 
Oct 09 16:45:19 compute-0 nova_compute[117331]: /usr/lib/python3.12/site-packages/eventlet/green/thread.py:48 in __thread_body
Oct 09 16:45:19 compute-0 nova_compute[117331]:     `func(*args, **kwargs)`
Oct 09 16:45:19 compute-0 nova_compute[117331]: 
Oct 09 16:45:19 compute-0 nova_compute[117331]: /usr/lib64/python3.12/threading.py:1032 in _bootstrap
Oct 09 16:45:19 compute-0 nova_compute[117331]:     `self._bootstrap_inner()`
Oct 09 16:45:19 compute-0 nova_compute[117331]: 
Oct 09 16:45:19 compute-0 nova_compute[117331]: /usr/lib/python3.12/site-packages/eventlet/green/thread.py:100 in wrap_bootstrap_inner
Oct 09 16:45:19 compute-0 nova_compute[117331]:     `bootstrap_inner()`
Oct 09 16:45:19 compute-0 nova_compute[117331]: 
Oct 09 16:45:19 compute-0 nova_compute[117331]: /usr/lib64/python3.12/threading.py:1075 in _bootstrap_inner
Oct 09 16:45:19 compute-0 nova_compute[117331]:     `self.run()`
Oct 09 16:45:19 compute-0 nova_compute[117331]: 
Oct 09 16:45:19 compute-0 nova_compute[117331]: /usr/lib64/python3.12/threading.py:1012 in run
Oct 09 16:45:19 compute-0 nova_compute[117331]:     `self._target(*self._args, **self._kwargs)`
Oct 09 16:45:19 compute-0 nova_compute[117331]: 
Oct 09 16:45:19 compute-0 nova_compute[117331]: /usr/lib/python3.12/site-packages/oslo_reports/guru_meditation_report.py:178 in _handler
Oct 09 16:45:19 compute-0 nova_compute[117331]:     `cls.handle_signal(version, service_name, log_dir, None)`
Oct 09 16:45:19 compute-0 nova_compute[117331]: 
Oct 09 16:45:19 compute-0 nova_compute[117331]: /usr/lib/python3.12/site-packages/oslo_reports/guru_meditation_report.py:217 in handle_signal
Oct 09 16:45:19 compute-0 nova_compute[117331]:     `res = cls(version, frame).run()`
Oct 09 16:45:19 compute-0 nova_compute[117331]: 
Oct 09 16:45:19 compute-0 nova_compute[117331]: /usr/lib/python3.12/site-packages/oslo_reports/guru_meditation_report.py:266 in run
Oct 09 16:45:19 compute-0 nova_compute[117331]:     `return super().run()`
Oct 09 16:45:19 compute-0 nova_compute[117331]: 
Oct 09 16:45:19 compute-0 nova_compute[117331]: /usr/lib/python3.12/site-packages/oslo_reports/report.py:76 in run
Oct 09 16:45:19 compute-0 nova_compute[117331]:     `return "\n".join(str(sect) for sect in self.sections)`
Oct 09 16:45:19 compute-0 nova_compute[117331]: 
Oct 09 16:45:19 compute-0 nova_compute[117331]: /usr/lib/python3.12/site-packages/oslo_reports/report.py:76 in <genexpr>
Oct 09 16:45:19 compute-0 nova_compute[117331]:     `return "\n".join(str(sect) for sect in self.sections)`
Oct 09 16:45:19 compute-0 nova_compute[117331]: 
Oct 09 16:45:19 compute-0 nova_compute[117331]: /usr/lib/python3.12/site-packages/oslo_reports/report.py:101 in __str__
Oct 09 16:45:19 compute-0 nova_compute[117331]:     `return self.view(self.generator())`
Oct 09 16:45:19 compute-0 nova_compute[117331]: 
Oct 09 16:45:19 compute-0 nova_compute[117331]: /usr/lib/python3.12/site-packages/oslo_reports/report.py:130 in newgen
Oct 09 16:45:19 compute-0 nova_compute[117331]:     `res = gen()`
Oct 09 16:45:19 compute-0 nova_compute[117331]: 
Oct 09 16:45:19 compute-0 nova_compute[117331]: /usr/lib/python3.12/site-packages/oslo_reports/generators/threading.py:67 in __call__
Oct 09 16:45:19 compute-0 nova_compute[117331]:     `thread_id: tm.ThreadModel(thread_id, stack)`
Oct 09 16:45:19 compute-0 nova_compute[117331]: 
Oct 09 16:45:19 compute-0 nova_compute[117331]: ========================================================================
Oct 09 16:45:19 compute-0 nova_compute[117331]: ====                         Green Threads                          ====
Oct 09 16:45:19 compute-0 nova_compute[117331]: ========================================================================
Oct 09 16:45:19 compute-0 nova_compute[117331]: ------                        Green Thread                        ------
Oct 09 16:45:19 compute-0 nova_compute[117331]: 
Oct 09 16:45:19 compute-0 nova_compute[117331]: /usr/bin/nova-compute:8 in <module>
Oct 09 16:45:19 compute-0 nova_compute[117331]:     `sys.exit(main())`
Oct 09 16:45:19 compute-0 nova_compute[117331]: 
Oct 09 16:45:19 compute-0 nova_compute[117331]: /usr/lib/python3.12/site-packages/nova/cmd/compute.py:62 in main
Oct 09 16:45:19 compute-0 nova_compute[117331]:     `service.wait()`
Oct 09 16:45:19 compute-0 nova_compute[117331]: 
Oct 09 16:45:19 compute-0 nova_compute[117331]: /usr/lib/python3.12/site-packages/nova/service.py:335 in wait
Oct 09 16:45:19 compute-0 nova_compute[117331]:     `_launcher.wait()`
Oct 09 16:45:19 compute-0 nova_compute[117331]: 
Oct 09 16:45:19 compute-0 nova_compute[117331]: /usr/lib/python3.12/site-packages/oslo_service/backend/_eventlet/service.py:300 in wait
Oct 09 16:45:19 compute-0 nova_compute[117331]:     `status, signo = self._wait_for_exit_or_signal()`
Oct 09 16:45:19 compute-0 nova_compute[117331]: 
Oct 09 16:45:19 compute-0 nova_compute[117331]: /usr/lib/python3.12/site-packages/oslo_service/backend/_eventlet/service.py:278 in _wait_for_exit_or_signal
Oct 09 16:45:19 compute-0 nova_compute[117331]:     `super().wait()`
Oct 09 16:45:19 compute-0 nova_compute[117331]: 
Oct 09 16:45:19 compute-0 nova_compute[117331]: /usr/lib/python3.12/site-packages/oslo_service/backend/_eventlet/service.py:213 in wait
Oct 09 16:45:19 compute-0 nova_compute[117331]:     `self.services.wait()`
Oct 09 16:45:19 compute-0 nova_compute[117331]: 
Oct 09 16:45:19 compute-0 nova_compute[117331]: /usr/lib/python3.12/site-packages/oslo_service/backend/_eventlet/service.py:690 in wait
Oct 09 16:45:19 compute-0 nova_compute[117331]:     `self.tg.wait()`
Oct 09 16:45:19 compute-0 nova_compute[117331]: 
Oct 09 16:45:19 compute-0 nova_compute[117331]: /usr/lib/python3.12/site-packages/oslo_service/backend/_eventlet/threadgroup.py:368 in wait
Oct 09 16:45:19 compute-0 nova_compute[117331]:     `self._wait_threads()`
Oct 09 16:45:19 compute-0 nova_compute[117331]: 
Oct 09 16:45:19 compute-0 nova_compute[117331]: /usr/lib/python3.12/site-packages/oslo_service/backend/_eventlet/threadgroup.py:343 in _wait_threads
Oct 09 16:45:19 compute-0 nova_compute[117331]:     `self._perform_action_on_threads(`
Oct 09 16:45:19 compute-0 nova_compute[117331]: 
Oct 09 16:45:19 compute-0 nova_compute[117331]: /usr/lib/python3.12/site-packages/oslo_service/backend/_eventlet/threadgroup.py:270 in _perform_action_on_threads
Oct 09 16:45:19 compute-0 nova_compute[117331]:     `action_func(x)`
Oct 09 16:45:19 compute-0 nova_compute[117331]: 
Oct 09 16:45:19 compute-0 nova_compute[117331]: /usr/lib/python3.12/site-packages/oslo_service/backend/_eventlet/threadgroup.py:344 in <lambda>
Oct 09 16:45:19 compute-0 nova_compute[117331]:     `lambda x: x.wait(),`
Oct 09 16:45:19 compute-0 nova_compute[117331]: 
Oct 09 16:45:19 compute-0 nova_compute[117331]: /usr/lib/python3.12/site-packages/oslo_service/backend/_eventlet/threadgroup.py:63 in wait
Oct 09 16:45:19 compute-0 nova_compute[117331]:     `return self.thread.wait()`
Oct 09 16:45:19 compute-0 nova_compute[117331]: 
Oct 09 16:45:19 compute-0 nova_compute[117331]: /usr/lib/python3.12/site-packages/eventlet/greenthread.py:232 in wait
Oct 09 16:45:19 compute-0 nova_compute[117331]:     `return self._exit_event.wait()`
Oct 09 16:45:19 compute-0 nova_compute[117331]: 
Oct 09 16:45:19 compute-0 nova_compute[117331]: /usr/lib/python3.12/site-packages/eventlet/event.py:124 in wait
Oct 09 16:45:19 compute-0 nova_compute[117331]:     `result = hub.switch()`
Oct 09 16:45:19 compute-0 nova_compute[117331]: 
Oct 09 16:45:19 compute-0 nova_compute[117331]: /usr/lib/python3.12/site-packages/eventlet/hubs/hub.py:310 in switch
Oct 09 16:45:19 compute-0 nova_compute[117331]:     `return self.greenlet.switch()`
Oct 09 16:45:19 compute-0 nova_compute[117331]: 
Oct 09 16:45:19 compute-0 nova_compute[117331]: 2025-10-09 16:45:19.348 2 DEBUG nova.virt.libvirt.volume.mount [None req-15568c02-2290-4f5f-b609-67d94bc90b39 edda4b76bea746b7aec969bc10f68f14 dbc5b562dfbf46d888285482e4fe52bb - - default default] [instance: d9cb0583-a454-4df1-80f9-dc9f184101f2] _HostMountState.mount() for /var/lib/nova/mnt/56491fbd7607c943b28b644cf81cf3ff generation 0 completed successfully mount /usr/lib/python3.12/site-packages/nova/virt/libvirt/volume/mount.py:334
Oct 09 16:45:19 compute-0 nova_compute[117331]: ------                        Green Thread                        ------
Oct 09 16:45:19 compute-0 nova_compute[117331]: 
Oct 09 16:45:19 compute-0 nova_compute[117331]: /usr/lib/python3.12/site-packages/eventlet/green/thread.py:48 in __thread_body
Oct 09 16:45:19 compute-0 nova_compute[117331]:     `func(*args, **kwargs)`
Oct 09 16:45:19 compute-0 nova_compute[117331]: 
Oct 09 16:45:19 compute-0 nova_compute[117331]: /usr/lib64/python3.12/threading.py:1032 in _bootstrap
Oct 09 16:45:19 compute-0 nova_compute[117331]:     `self._bootstrap_inner()`
Oct 09 16:45:19 compute-0 nova_compute[117331]: 
Oct 09 16:45:19 compute-0 nova_compute[117331]: /usr/lib/python3.12/site-packages/eventlet/green/thread.py:100 in wrap_bootstrap_inner
Oct 09 16:45:19 compute-0 nova_compute[117331]:     `bootstrap_inner()`
Oct 09 16:45:19 compute-0 nova_compute[117331]: 
Oct 09 16:45:19 compute-0 nova_compute[117331]: /usr/lib64/python3.12/threading.py:1075 in _bootstrap_inner
Oct 09 16:45:19 compute-0 nova_compute[117331]:     `self.run()`
Oct 09 16:45:19 compute-0 nova_compute[117331]: 
Oct 09 16:45:19 compute-0 nova_compute[117331]: /usr/lib64/python3.12/threading.py:1012 in run
Oct 09 16:45:19 compute-0 nova_compute[117331]:     `self._target(*self._args, **self._kwargs)`
Oct 09 16:45:19 compute-0 nova_compute[117331]: 
Oct 09 16:45:19 compute-0 nova_compute[117331]: /usr/lib/python3.12/site-packages/oslo_messaging/_drivers/amqpdriver.py:577 in poll
Oct 09 16:45:19 compute-0 nova_compute[117331]:     `self.conn.consume(timeout=current_timeout)`
Oct 09 16:45:19 compute-0 nova_compute[117331]: 
Oct 09 16:45:19 compute-0 nova_compute[117331]: /usr/lib/python3.12/site-packages/oslo_messaging/_drivers/impl_rabbit.py:1477 in consume
Oct 09 16:45:19 compute-0 nova_compute[117331]:     `self.ensure(_consume,`
Oct 09 16:45:19 compute-0 nova_compute[117331]: 
Oct 09 16:45:19 compute-0 nova_compute[117331]: /usr/lib/python3.12/site-packages/oslo_messaging/_drivers/impl_rabbit.py:1173 in ensure
Oct 09 16:45:19 compute-0 nova_compute[117331]:     `ret, channel = autoretry_method()`
Oct 09 16:45:19 compute-0 nova_compute[117331]: 
Oct 09 16:45:19 compute-0 nova_compute[117331]: /usr/lib/python3.12/site-packages/kombu/connection.py:556 in _ensured
Oct 09 16:45:19 compute-0 nova_compute[117331]:     `return fun(*args, **kwargs)`
Oct 09 16:45:19 compute-0 nova_compute[117331]: 
Oct 09 16:45:19 compute-0 nova_compute[117331]: /usr/lib/python3.12/site-packages/kombu/connection.py:639 in __call__
Oct 09 16:45:19 compute-0 nova_compute[117331]:     `return fun(*args, channel=channels[0], **kwargs), channels[0]`
Oct 09 16:45:19 compute-0 nova_compute[117331]: 
Oct 09 16:45:19 compute-0 nova_compute[117331]: /usr/lib/python3.12/site-packages/oslo_messaging/_drivers/impl_rabbit.py:1162 in execute_method
Oct 09 16:45:19 compute-0 nova_compute[117331]:     `method()`
Oct 09 16:45:19 compute-0 nova_compute[117331]: 
Oct 09 16:45:19 compute-0 nova_compute[117331]: /usr/lib/python3.12/site-packages/oslo_messaging/_drivers/impl_rabbit.py:1464 in _consume
Oct 09 16:45:19 compute-0 nova_compute[117331]:     `self.connection.drain_events(timeout=poll_timeout)`
Oct 09 16:45:19 compute-0 nova_compute[117331]: 
Oct 09 16:45:19 compute-0 nova_compute[117331]: /usr/lib/python3.12/site-packages/kombu/connection.py:341 in drain_events
Oct 09 16:45:19 compute-0 nova_compute[117331]:     `return self.transport.drain_events(self.connection, **kwargs)`
Oct 09 16:45:19 compute-0 nova_compute[117331]: 
Oct 09 16:45:19 compute-0 nova_compute[117331]: /usr/lib/python3.12/site-packages/kombu/transport/pyamqp.py:171 in drain_events
Oct 09 16:45:19 compute-0 nova_compute[117331]:     `return connection.drain_events(**kwargs)`
Oct 09 16:45:19 compute-0 nova_compute[117331]: 
Oct 09 16:45:19 compute-0 nova_compute[117331]: /usr/lib/python3.12/site-packages/amqp/connection.py:526 in drain_events
Oct 09 16:45:19 compute-0 nova_compute[117331]:     `while not self.blocking_read(timeout):`
Oct 09 16:45:19 compute-0 nova_compute[117331]: 
Oct 09 16:45:19 compute-0 nova_compute[117331]: /usr/lib/python3.12/site-packages/amqp/connection.py:531 in blocking_read
Oct 09 16:45:19 compute-0 nova_compute[117331]:     `frame = self.transport.read_frame()`
Oct 09 16:45:19 compute-0 nova_compute[117331]: 
Oct 09 16:45:19 compute-0 nova_compute[117331]: /usr/lib/python3.12/site-packages/amqp/transport.py:294 in read_frame
Oct 09 16:45:19 compute-0 nova_compute[117331]:     `frame_header = read(7, True)`
Oct 09 16:45:19 compute-0 nova_compute[117331]: 
Oct 09 16:45:19 compute-0 nova_compute[117331]: /usr/lib/python3.12/site-packages/amqp/transport.py:574 in _read
Oct 09 16:45:19 compute-0 nova_compute[117331]:     `s = recv(n - len(rbuf))  # see note above`
Oct 09 16:45:19 compute-0 nova_compute[117331]: 
Oct 09 16:45:19 compute-0 nova_compute[117331]: /usr/lib/python3.12/site-packages/eventlet/green/ssl.py:196 in read
Oct 09 16:45:19 compute-0 nova_compute[117331]:     `return self._call_trampolining(`
Oct 09 16:45:19 compute-0 nova_compute[117331]: 
Oct 09 16:45:19 compute-0 nova_compute[117331]: /usr/lib/python3.12/site-packages/eventlet/green/ssl.py:169 in _call_trampolining
Oct 09 16:45:19 compute-0 nova_compute[117331]:     `trampoline(self,`
Oct 09 16:45:19 compute-0 nova_compute[117331]: 
Oct 09 16:45:19 compute-0 nova_compute[117331]: /usr/lib/python3.12/site-packages/eventlet/hubs/__init__.py:157 in trampoline
Oct 09 16:45:19 compute-0 nova_compute[117331]:     `return hub.switch()`
Oct 09 16:45:19 compute-0 nova_compute[117331]: 
Oct 09 16:45:19 compute-0 nova_compute[117331]: /usr/lib/python3.12/site-packages/eventlet/hubs/hub.py:310 in switch
Oct 09 16:45:19 compute-0 nova_compute[117331]:     `return self.greenlet.switch()`
Oct 09 16:45:19 compute-0 nova_compute[117331]: 
Oct 09 16:45:19 compute-0 nova_compute[117331]: ------                        Green Thread                        ------
Oct 09 16:45:19 compute-0 nova_compute[117331]: 
Oct 09 16:45:19 compute-0 nova_compute[117331]: /usr/lib/python3.12/site-packages/eventlet/green/thread.py:48 in __thread_body
Oct 09 16:45:19 compute-0 nova_compute[117331]:     `func(*args, **kwargs)`
Oct 09 16:45:19 compute-0 nova_compute[117331]: 
Oct 09 16:45:19 compute-0 nova_compute[117331]: /usr/lib64/python3.12/threading.py:1032 in _bootstrap
Oct 09 16:45:19 compute-0 nova_compute[117331]:     `self._bootstrap_inner()`
Oct 09 16:45:19 compute-0 nova_compute[117331]: 
Oct 09 16:45:19 compute-0 nova_compute[117331]: /usr/lib/python3.12/site-packages/eventlet/green/thread.py:100 in wrap_bootstrap_inner
Oct 09 16:45:19 compute-0 nova_compute[117331]:     `bootstrap_inner()`
Oct 09 16:45:19 compute-0 nova_compute[117331]: 
Oct 09 16:45:19 compute-0 nova_compute[117331]: /usr/lib64/python3.12/threading.py:1075 in _bootstrap_inner
Oct 09 16:45:19 compute-0 nova_compute[117331]:     `self.run()`
Oct 09 16:45:19 compute-0 nova_compute[117331]: 
Oct 09 16:45:19 compute-0 nova_compute[117331]: /usr/lib64/python3.12/threading.py:1012 in run
Oct 09 16:45:19 compute-0 nova_compute[117331]:     `self._target(*self._args, **self._kwargs)`
Oct 09 16:45:19 compute-0 nova_compute[117331]: 
Oct 09 16:45:19 compute-0 nova_compute[117331]: /usr/lib/python3.12/site-packages/oslo_messaging/_drivers/impl_rabbit.py:1393 in _heartbeat_thread_job
Oct 09 16:45:19 compute-0 nova_compute[117331]:     `self._heartbeat_exit_event.wait(`
Oct 09 16:45:19 compute-0 nova_compute[117331]: 
Oct 09 16:45:19 compute-0 nova_compute[117331]: /usr/lib64/python3.12/threading.py:655 in wait
Oct 09 16:45:19 compute-0 nova_compute[117331]:     `signaled = self._cond.wait(timeout)`
Oct 09 16:45:19 compute-0 nova_compute[117331]: 
Oct 09 16:45:19 compute-0 nova_compute[117331]: /usr/lib64/python3.12/threading.py:359 in wait
Oct 09 16:45:19 compute-0 nova_compute[117331]:     `gotit = waiter.acquire(True, timeout)`
Oct 09 16:45:19 compute-0 nova_compute[117331]: 
Oct 09 16:45:19 compute-0 nova_compute[117331]: /usr/lib/python3.12/site-packages/eventlet/semaphore.py:107 in acquire
Oct 09 16:45:19 compute-0 nova_compute[117331]:     `hubs.get_hub().switch()`
Oct 09 16:45:19 compute-0 nova_compute[117331]: 
Oct 09 16:45:19 compute-0 nova_compute[117331]: /usr/lib/python3.12/site-packages/eventlet/hubs/hub.py:310 in switch
Oct 09 16:45:19 compute-0 nova_compute[117331]:     `return self.greenlet.switch()`
Oct 09 16:45:19 compute-0 nova_compute[117331]: 
Oct 09 16:45:19 compute-0 nova_compute[117331]: ------                        Green Thread                        ------
Oct 09 16:45:19 compute-0 nova_compute[117331]: 
Oct 09 16:45:19 compute-0 nova_compute[117331]: /usr/lib/python3.12/site-packages/eventlet/green/thread.py:48 in __thread_body
Oct 09 16:45:19 compute-0 nova_compute[117331]:     `func(*args, **kwargs)`
Oct 09 16:45:19 compute-0 nova_compute[117331]: 
Oct 09 16:45:19 compute-0 nova_compute[117331]: /usr/lib64/python3.12/threading.py:1032 in _bootstrap
Oct 09 16:45:19 compute-0 nova_compute[117331]:     `self._bootstrap_inner()`
Oct 09 16:45:19 compute-0 nova_compute[117331]: 
Oct 09 16:45:19 compute-0 nova_compute[117331]: /usr/lib/python3.12/site-packages/eventlet/green/thread.py:100 in wrap_bootstrap_inner
Oct 09 16:45:19 compute-0 nova_compute[117331]:     `bootstrap_inner()`
Oct 09 16:45:19 compute-0 nova_compute[117331]: 
Oct 09 16:45:19 compute-0 nova_compute[117331]: /usr/lib64/python3.12/threading.py:1075 in _bootstrap_inner
Oct 09 16:45:19 compute-0 nova_compute[117331]:     `self.run()`
Oct 09 16:45:19 compute-0 nova_compute[117331]: 
Oct 09 16:45:19 compute-0 nova_compute[117331]: /usr/lib64/python3.12/threading.py:1012 in run
Oct 09 16:45:19 compute-0 nova_compute[117331]:     `self._target(*self._args, **self._kwargs)`
Oct 09 16:45:19 compute-0 nova_compute[117331]: 
Oct 09 16:45:19 compute-0 nova_compute[117331]: /usr/lib/python3.12/site-packages/oslo_messaging/_drivers/impl_rabbit.py:1393 in _heartbeat_thread_job
Oct 09 16:45:19 compute-0 nova_compute[117331]:     `self._heartbeat_exit_event.wait(`
Oct 09 16:45:19 compute-0 nova_compute[117331]: 
Oct 09 16:45:19 compute-0 nova_compute[117331]: /usr/lib64/python3.12/threading.py:655 in wait
Oct 09 16:45:19 compute-0 nova_compute[117331]:     `signaled = self._cond.wait(timeout)`
Oct 09 16:45:19 compute-0 nova_compute[117331]: 
Oct 09 16:45:19 compute-0 nova_compute[117331]: /usr/lib64/python3.12/threading.py:359 in wait
Oct 09 16:45:19 compute-0 nova_compute[117331]:     `gotit = waiter.acquire(True, timeout)`
Oct 09 16:45:19 compute-0 nova_compute[117331]: 
Oct 09 16:45:19 compute-0 nova_compute[117331]: /usr/lib/python3.12/site-packages/eventlet/semaphore.py:107 in acquire
Oct 09 16:45:19 compute-0 nova_compute[117331]:     `hubs.get_hub().switch()`
Oct 09 16:45:19 compute-0 nova_compute[117331]: 
Oct 09 16:45:19 compute-0 nova_compute[117331]: /usr/lib/python3.12/site-packages/eventlet/hubs/hub.py:310 in switch
Oct 09 16:45:19 compute-0 nova_compute[117331]:     `return self.greenlet.switch()`
Oct 09 16:45:19 compute-0 nova_compute[117331]: 
Oct 09 16:45:19 compute-0 nova_compute[117331]: ------                        Green Thread                        ------
Oct 09 16:45:19 compute-0 nova_compute[117331]: 
Oct 09 16:45:19 compute-0 nova_compute[117331]: /usr/lib/python3.12/site-packages/eventlet/green/thread.py:48 in __thread_body
Oct 09 16:45:19 compute-0 nova_compute[117331]:     `func(*args, **kwargs)`
Oct 09 16:45:19 compute-0 nova_compute[117331]: 
Oct 09 16:45:19 compute-0 nova_compute[117331]: /usr/lib64/python3.12/threading.py:1032 in _bootstrap
Oct 09 16:45:19 compute-0 nova_compute[117331]:     `self._bootstrap_inner()`
Oct 09 16:45:19 compute-0 nova_compute[117331]: 
Oct 09 16:45:19 compute-0 nova_compute[117331]: /usr/lib/python3.12/site-packages/eventlet/green/thread.py:100 in wrap_bootstrap_inner
Oct 09 16:45:19 compute-0 nova_compute[117331]:     `bootstrap_inner()`
Oct 09 16:45:19 compute-0 nova_compute[117331]: 
Oct 09 16:45:19 compute-0 nova_compute[117331]: /usr/lib64/python3.12/threading.py:1075 in _bootstrap_inner
Oct 09 16:45:19 compute-0 nova_compute[117331]:     `self.run()`
Oct 09 16:45:19 compute-0 nova_compute[117331]: 
Oct 09 16:45:19 compute-0 nova_compute[117331]: /usr/lib64/python3.12/threading.py:1012 in run
Oct 09 16:45:19 compute-0 nova_compute[117331]:     `self._target(*self._args, **self._kwargs)`
Oct 09 16:45:19 compute-0 nova_compute[117331]: 
Oct 09 16:45:19 compute-0 nova_compute[117331]: /usr/lib/python3.12/site-packages/oslo_messaging/_drivers/impl_rabbit.py:1393 in _heartbeat_thread_job
Oct 09 16:45:19 compute-0 nova_compute[117331]:     `self._heartbeat_exit_event.wait(`
Oct 09 16:45:19 compute-0 nova_compute[117331]: 
Oct 09 16:45:19 compute-0 nova_compute[117331]: /usr/lib64/python3.12/threading.py:655 in wait
Oct 09 16:45:19 compute-0 nova_compute[117331]:     `signaled = self._cond.wait(timeout)`
Oct 09 16:45:19 compute-0 nova_compute[117331]: 
Oct 09 16:45:19 compute-0 nova_compute[117331]: /usr/lib64/python3.12/threading.py:359 in wait
Oct 09 16:45:19 compute-0 nova_compute[117331]:     `gotit = waiter.acquire(True, timeout)`
Oct 09 16:45:19 compute-0 nova_compute[117331]: 
Oct 09 16:45:19 compute-0 nova_compute[117331]: /usr/lib/python3.12/site-packages/eventlet/semaphore.py:107 in acquire
Oct 09 16:45:19 compute-0 nova_compute[117331]:     `hubs.get_hub().switch()`
Oct 09 16:45:19 compute-0 nova_compute[117331]: 
Oct 09 16:45:19 compute-0 nova_compute[117331]: /usr/lib/python3.12/site-packages/eventlet/hubs/hub.py:310 in switch
Oct 09 16:45:19 compute-0 nova_compute[117331]:     `return self.greenlet.switch()`
Oct 09 16:45:19 compute-0 nova_compute[117331]: 
Oct 09 16:45:19 compute-0 nova_compute[117331]: ------                        Green Thread                        ------
Oct 09 16:45:19 compute-0 nova_compute[117331]: 
Oct 09 16:45:19 compute-0 nova_compute[117331]: /usr/lib/python3.12/site-packages/eventlet/green/thread.py:48 in __thread_body
Oct 09 16:45:19 compute-0 nova_compute[117331]:     `func(*args, **kwargs)`
Oct 09 16:45:19 compute-0 nova_compute[117331]: 
Oct 09 16:45:19 compute-0 nova_compute[117331]: /usr/lib64/python3.12/threading.py:1032 in _bootstrap
Oct 09 16:45:19 compute-0 nova_compute[117331]:     `self._bootstrap_inner()`
Oct 09 16:45:19 compute-0 nova_compute[117331]: 
Oct 09 16:45:19 compute-0 nova_compute[117331]: /usr/lib/python3.12/site-packages/eventlet/green/thread.py:100 in wrap_bootstrap_inner
Oct 09 16:45:19 compute-0 nova_compute[117331]:     `bootstrap_inner()`
Oct 09 16:45:19 compute-0 nova_compute[117331]: 
Oct 09 16:45:19 compute-0 nova_compute[117331]: /usr/lib64/python3.12/threading.py:1075 in _bootstrap_inner
Oct 09 16:45:19 compute-0 nova_compute[117331]:     `self.run()`
Oct 09 16:45:19 compute-0 nova_compute[117331]: 
Oct 09 16:45:19 compute-0 nova_compute[117331]: /usr/lib64/python3.12/threading.py:1012 in run
Oct 09 16:45:19 compute-0 nova_compute[117331]:     `self._target(*self._args, **self._kwargs)`
Oct 09 16:45:19 compute-0 nova_compute[117331]: 
Oct 09 16:45:19 compute-0 nova_compute[117331]: /usr/lib/python3.12/site-packages/oslo_privsep/comm.py:154 in _reader_main
Oct 09 16:45:19 compute-0 nova_compute[117331]:     `for msg in reader:`
Oct 09 16:45:19 compute-0 nova_compute[117331]: 
Oct 09 16:45:19 compute-0 nova_compute[117331]: /usr/lib/python3.12/site-packages/oslo_privsep/comm.py:91 in __next__
Oct 09 16:45:19 compute-0 nova_compute[117331]:     `buf = self.readsock.recv(4096)`
Oct 09 16:45:19 compute-0 nova_compute[117331]: 
Oct 09 16:45:19 compute-0 nova_compute[117331]: /usr/lib/python3.12/site-packages/eventlet/greenio/base.py:352 in recv
Oct 09 16:45:19 compute-0 nova_compute[117331]:     `return self._recv_loop(self.fd.recv, b'', bufsize, flags)`
Oct 09 16:45:19 compute-0 nova_compute[117331]: 
Oct 09 16:45:19 compute-0 nova_compute[117331]: /usr/lib/python3.12/site-packages/eventlet/greenio/base.py:346 in _recv_loop
Oct 09 16:45:19 compute-0 nova_compute[117331]:     `self._read_trampoline()`
Oct 09 16:45:19 compute-0 nova_compute[117331]: 
Oct 09 16:45:19 compute-0 nova_compute[117331]: /usr/lib/python3.12/site-packages/eventlet/greenio/base.py:314 in _read_trampoline
Oct 09 16:45:19 compute-0 nova_compute[117331]:     `self._trampoline(`
Oct 09 16:45:19 compute-0 nova_compute[117331]: 
Oct 09 16:45:19 compute-0 nova_compute[117331]: /usr/lib/python3.12/site-packages/eventlet/greenio/base.py:206 in _trampoline
Oct 09 16:45:19 compute-0 nova_compute[117331]:     `return trampoline(fd, read=read, write=write, timeout=timeout,`
Oct 09 16:45:19 compute-0 nova_compute[117331]: 
Oct 09 16:45:19 compute-0 nova_compute[117331]: /usr/lib/python3.12/site-packages/eventlet/hubs/__init__.py:157 in trampoline
Oct 09 16:45:19 compute-0 nova_compute[117331]:     `return hub.switch()`
Oct 09 16:45:19 compute-0 nova_compute[117331]: 
Oct 09 16:45:19 compute-0 nova_compute[117331]: /usr/lib/python3.12/site-packages/eventlet/hubs/hub.py:310 in switch
Oct 09 16:45:19 compute-0 nova_compute[117331]:     `return self.greenlet.switch()`
Oct 09 16:45:19 compute-0 nova_compute[117331]: 
Oct 09 16:45:19 compute-0 nova_compute[117331]: ------                        Green Thread                        ------
Oct 09 16:45:19 compute-0 nova_compute[117331]: 
Oct 09 16:45:19 compute-0 nova_compute[117331]: /usr/lib/python3.12/site-packages/eventlet/green/thread.py:48 in __thread_body
Oct 09 16:45:19 compute-0 nova_compute[117331]:     `func(*args, **kwargs)`
Oct 09 16:45:19 compute-0 nova_compute[117331]: 
Oct 09 16:45:19 compute-0 nova_compute[117331]: /usr/lib64/python3.12/threading.py:1032 in _bootstrap
Oct 09 16:45:19 compute-0 nova_compute[117331]:     `self._bootstrap_inner()`
Oct 09 16:45:19 compute-0 nova_compute[117331]: 
Oct 09 16:45:19 compute-0 nova_compute[117331]: /usr/lib/python3.12/site-packages/eventlet/green/thread.py:100 in wrap_bootstrap_inner
Oct 09 16:45:19 compute-0 nova_compute[117331]:     `bootstrap_inner()`
Oct 09 16:45:19 compute-0 nova_compute[117331]: 
Oct 09 16:45:19 compute-0 nova_compute[117331]: /usr/lib64/python3.12/threading.py:1075 in _bootstrap_inner
Oct 09 16:45:19 compute-0 nova_compute[117331]:     `self.run()`
Oct 09 16:45:19 compute-0 nova_compute[117331]: 
Oct 09 16:45:19 compute-0 nova_compute[117331]: /usr/lib64/python3.12/threading.py:1012 in run
Oct 09 16:45:19 compute-0 nova_compute[117331]:     `self._target(*self._args, **self._kwargs)`
Oct 09 16:45:19 compute-0 nova_compute[117331]: 
Oct 09 16:45:19 compute-0 nova_compute[117331]: /usr/lib/python3.12/site-packages/oslo_privsep/comm.py:154 in _reader_main
Oct 09 16:45:19 compute-0 nova_compute[117331]:     `for msg in reader:`
Oct 09 16:45:19 compute-0 nova_compute[117331]: 
Oct 09 16:45:19 compute-0 nova_compute[117331]: /usr/lib/python3.12/site-packages/oslo_privsep/comm.py:91 in __next__
Oct 09 16:45:19 compute-0 nova_compute[117331]:     `buf = self.readsock.recv(4096)`
Oct 09 16:45:19 compute-0 nova_compute[117331]: 
Oct 09 16:45:19 compute-0 nova_compute[117331]: /usr/lib/python3.12/site-packages/eventlet/greenio/base.py:352 in recv
Oct 09 16:45:19 compute-0 nova_compute[117331]:     `return self._recv_loop(self.fd.recv, b'', bufsize, flags)`
Oct 09 16:45:19 compute-0 nova_compute[117331]: 
Oct 09 16:45:19 compute-0 nova_compute[117331]: /usr/lib/python3.12/site-packages/eventlet/greenio/base.py:346 in _recv_loop
Oct 09 16:45:19 compute-0 nova_compute[117331]:     `self._read_trampoline()`
Oct 09 16:45:19 compute-0 nova_compute[117331]: 
Oct 09 16:45:19 compute-0 nova_compute[117331]: /usr/lib/python3.12/site-packages/eventlet/greenio/base.py:314 in _read_trampoline
Oct 09 16:45:19 compute-0 nova_compute[117331]:     `self._trampoline(`
Oct 09 16:45:19 compute-0 nova_compute[117331]: 
Oct 09 16:45:19 compute-0 nova_compute[117331]: /usr/lib/python3.12/site-packages/eventlet/greenio/base.py:206 in _trampoline
Oct 09 16:45:19 compute-0 nova_compute[117331]:     `return trampoline(fd, read=read, write=write, timeout=timeout,`
Oct 09 16:45:19 compute-0 nova_compute[117331]: 
Oct 09 16:45:19 compute-0 nova_compute[117331]: /usr/lib/python3.12/site-packages/eventlet/hubs/__init__.py:157 in trampoline
Oct 09 16:45:19 compute-0 nova_compute[117331]:     `return hub.switch()`
Oct 09 16:45:19 compute-0 nova_compute[117331]: 
Oct 09 16:45:19 compute-0 nova_compute[117331]: /usr/lib/python3.12/site-packages/eventlet/hubs/hub.py:310 in switch
Oct 09 16:45:19 compute-0 nova_compute[117331]:     `return self.greenlet.switch()`
Oct 09 16:45:19 compute-0 nova_compute[117331]: 
Oct 09 16:45:19 compute-0 nova_compute[117331]: ------                        Green Thread                        ------
Oct 09 16:45:19 compute-0 nova_compute[117331]: 
Oct 09 16:45:19 compute-0 nova_compute[117331]: /usr/lib/python3.12/site-packages/eventlet/green/thread.py:48 in __thread_body
Oct 09 16:45:19 compute-0 nova_compute[117331]:     `func(*args, **kwargs)`
Oct 09 16:45:19 compute-0 nova_compute[117331]: 
Oct 09 16:45:19 compute-0 nova_compute[117331]: /usr/lib64/python3.12/threading.py:1032 in _bootstrap
Oct 09 16:45:19 compute-0 nova_compute[117331]:     `self._bootstrap_inner()`
Oct 09 16:45:19 compute-0 nova_compute[117331]: 
Oct 09 16:45:19 compute-0 nova_compute[117331]: /usr/lib/python3.12/site-packages/eventlet/green/thread.py:100 in wrap_bootstrap_inner
Oct 09 16:45:19 compute-0 nova_compute[117331]:     `bootstrap_inner()`
Oct 09 16:45:19 compute-0 nova_compute[117331]: 
Oct 09 16:45:19 compute-0 nova_compute[117331]: /usr/lib64/python3.12/threading.py:1075 in _bootstrap_inner
Oct 09 16:45:19 compute-0 nova_compute[117331]:     `self.run()`
Oct 09 16:45:19 compute-0 nova_compute[117331]: 
Oct 09 16:45:19 compute-0 nova_compute[117331]: /usr/lib64/python3.12/threading.py:1012 in run
Oct 09 16:45:19 compute-0 nova_compute[117331]:     `self._target(*self._args, **self._kwargs)`
Oct 09 16:45:19 compute-0 nova_compute[117331]: 
Oct 09 16:45:19 compute-0 nova_compute[117331]: /usr/lib/python3.12/site-packages/oslo_privsep/comm.py:154 in _reader_main
Oct 09 16:45:19 compute-0 nova_compute[117331]:     `for msg in reader:`
Oct 09 16:45:19 compute-0 nova_compute[117331]: 
Oct 09 16:45:19 compute-0 nova_compute[117331]: /usr/lib/python3.12/site-packages/oslo_privsep/comm.py:91 in __next__
Oct 09 16:45:19 compute-0 nova_compute[117331]:     `buf = self.readsock.recv(4096)`
Oct 09 16:45:19 compute-0 nova_compute[117331]: 
Oct 09 16:45:19 compute-0 nova_compute[117331]: /usr/lib/python3.12/site-packages/eventlet/greenio/base.py:352 in recv
Oct 09 16:45:19 compute-0 nova_compute[117331]:     `return self._recv_loop(self.fd.recv, b'', bufsize, flags)`
Oct 09 16:45:19 compute-0 nova_compute[117331]: 
Oct 09 16:45:19 compute-0 nova_compute[117331]: /usr/lib/python3.12/site-packages/eventlet/greenio/base.py:346 in _recv_loop
Oct 09 16:45:19 compute-0 nova_compute[117331]:     `self._read_trampoline()`
Oct 09 16:45:19 compute-0 nova_compute[117331]: 
Oct 09 16:45:19 compute-0 nova_compute[117331]: /usr/lib/python3.12/site-packages/eventlet/greenio/base.py:314 in _read_trampoline
Oct 09 16:45:19 compute-0 nova_compute[117331]:     `self._trampoline(`
Oct 09 16:45:19 compute-0 nova_compute[117331]: 
Oct 09 16:45:19 compute-0 nova_compute[117331]: /usr/lib/python3.12/site-packages/eventlet/greenio/base.py:206 in _trampoline
Oct 09 16:45:19 compute-0 nova_compute[117331]:     `return trampoline(fd, read=read, write=write, timeout=timeout,`
Oct 09 16:45:19 compute-0 nova_compute[117331]: 
Oct 09 16:45:19 compute-0 nova_compute[117331]: /usr/lib/python3.12/site-packages/eventlet/hubs/__init__.py:157 in trampoline
Oct 09 16:45:19 compute-0 nova_compute[117331]:     `return hub.switch()`
Oct 09 16:45:19 compute-0 nova_compute[117331]: 
Oct 09 16:45:19 compute-0 nova_compute[117331]: /usr/lib/python3.12/site-packages/eventlet/hubs/hub.py:310 in switch
Oct 09 16:45:19 compute-0 nova_compute[117331]:     `return self.greenlet.switch()`
Oct 09 16:45:19 compute-0 nova_compute[117331]: 
Oct 09 16:45:19 compute-0 nova_compute[117331]: ------                        Green Thread                        ------
Oct 09 16:45:19 compute-0 nova_compute[117331]: 
Oct 09 16:45:19 compute-0 nova_compute[117331]: /usr/lib/python3.12/site-packages/eventlet/green/thread.py:48 in __thread_body
Oct 09 16:45:19 compute-0 nova_compute[117331]:     `func(*args, **kwargs)`
Oct 09 16:45:19 compute-0 nova_compute[117331]: 
Oct 09 16:45:19 compute-0 nova_compute[117331]: /usr/lib64/python3.12/threading.py:1032 in _bootstrap
Oct 09 16:45:19 compute-0 nova_compute[117331]:     `self._bootstrap_inner()`
Oct 09 16:45:19 compute-0 nova_compute[117331]: 
Oct 09 16:45:19 compute-0 nova_compute[117331]: /usr/lib/python3.12/site-packages/eventlet/green/thread.py:100 in wrap_bootstrap_inner
Oct 09 16:45:19 compute-0 nova_compute[117331]:     `bootstrap_inner()`
Oct 09 16:45:19 compute-0 nova_compute[117331]: 
Oct 09 16:45:19 compute-0 nova_compute[117331]: /usr/lib64/python3.12/threading.py:1075 in _bootstrap_inner
Oct 09 16:45:19 compute-0 nova_compute[117331]:     `self.run()`
Oct 09 16:45:19 compute-0 nova_compute[117331]: 
Oct 09 16:45:19 compute-0 nova_compute[117331]: /usr/lib64/python3.12/threading.py:1012 in run
Oct 09 16:45:19 compute-0 nova_compute[117331]:     `self._target(*self._args, **self._kwargs)`
Oct 09 16:45:19 compute-0 nova_compute[117331]: 
Oct 09 16:45:19 compute-0 nova_compute[117331]: /usr/lib/python3.12/site-packages/oslo_privsep/daemon.py:267 in logger
Oct 09 16:45:19 compute-0 nova_compute[117331]:     `for line in f:`
Oct 09 16:45:19 compute-0 nova_compute[117331]: 
Oct 09 16:45:19 compute-0 nova_compute[117331]: /usr/lib/python3.12/site-packages/eventlet/greenio/py3.py:105 in readinto
Oct 09 16:45:19 compute-0 nova_compute[117331]:     `data = self.read(up_to)`
Oct 09 16:45:19 compute-0 nova_compute[117331]: 
Oct 09 16:45:19 compute-0 nova_compute[117331]: /usr/lib/python3.12/site-packages/eventlet/greenio/py3.py:84 in read
Oct 09 16:45:19 compute-0 nova_compute[117331]:     `return _original_os.read(self._fileno, size)`
Oct 09 16:45:19 compute-0 nova_compute[117331]: 
Oct 09 16:45:19 compute-0 nova_compute[117331]: /usr/lib/python3.12/site-packages/eventlet/green/os.py:47 in read
Oct 09 16:45:19 compute-0 nova_compute[117331]:     `hubs.trampoline(fd, read=True)`
Oct 09 16:45:19 compute-0 nova_compute[117331]: 
Oct 09 16:45:19 compute-0 nova_compute[117331]: /usr/lib/python3.12/site-packages/eventlet/hubs/__init__.py:157 in trampoline
Oct 09 16:45:19 compute-0 nova_compute[117331]:     `return hub.switch()`
Oct 09 16:45:19 compute-0 nova_compute[117331]: 
Oct 09 16:45:19 compute-0 nova_compute[117331]: /usr/lib/python3.12/site-packages/eventlet/hubs/hub.py:310 in switch
Oct 09 16:45:19 compute-0 nova_compute[117331]:     `return self.greenlet.switch()`
Oct 09 16:45:19 compute-0 nova_compute[117331]: 
Oct 09 16:45:19 compute-0 nova_compute[117331]: ------                        Green Thread                        ------
Oct 09 16:45:19 compute-0 nova_compute[117331]: 
Oct 09 16:45:19 compute-0 nova_compute[117331]: /usr/lib/python3.12/site-packages/eventlet/green/thread.py:48 in __thread_body
Oct 09 16:45:19 compute-0 nova_compute[117331]:     `func(*args, **kwargs)`
Oct 09 16:45:19 compute-0 nova_compute[117331]: 
Oct 09 16:45:19 compute-0 nova_compute[117331]: /usr/lib64/python3.12/threading.py:1032 in _bootstrap
Oct 09 16:45:19 compute-0 nova_compute[117331]:     `self._bootstrap_inner()`
Oct 09 16:45:19 compute-0 nova_compute[117331]: 
Oct 09 16:45:19 compute-0 nova_compute[117331]: /usr/lib/python3.12/site-packages/eventlet/green/thread.py:100 in wrap_bootstrap_inner
Oct 09 16:45:19 compute-0 nova_compute[117331]:     `bootstrap_inner()`
Oct 09 16:45:19 compute-0 nova_compute[117331]: 
Oct 09 16:45:19 compute-0 nova_compute[117331]: /usr/lib64/python3.12/threading.py:1075 in _bootstrap_inner
Oct 09 16:45:19 compute-0 nova_compute[117331]:     `self.run()`
Oct 09 16:45:19 compute-0 nova_compute[117331]: 
Oct 09 16:45:19 compute-0 nova_compute[117331]: /usr/lib64/python3.12/threading.py:1012 in run
Oct 09 16:45:19 compute-0 nova_compute[117331]:     `self._target(*self._args, **self._kwargs)`
Oct 09 16:45:19 compute-0 nova_compute[117331]: 
Oct 09 16:45:19 compute-0 nova_compute[117331]: /usr/lib/python3.12/site-packages/oslo_privsep/daemon.py:267 in logger
Oct 09 16:45:19 compute-0 nova_compute[117331]:     `for line in f:`
Oct 09 16:45:19 compute-0 nova_compute[117331]: 
Oct 09 16:45:19 compute-0 nova_compute[117331]: /usr/lib/python3.12/site-packages/eventlet/greenio/py3.py:105 in readinto
Oct 09 16:45:19 compute-0 nova_compute[117331]:     `data = self.read(up_to)`
Oct 09 16:45:19 compute-0 nova_compute[117331]: 
Oct 09 16:45:19 compute-0 nova_compute[117331]: /usr/lib/python3.12/site-packages/eventlet/greenio/py3.py:84 in read
Oct 09 16:45:19 compute-0 nova_compute[117331]:     `return _original_os.read(self._fileno, size)`
Oct 09 16:45:19 compute-0 nova_compute[117331]: 
Oct 09 16:45:19 compute-0 nova_compute[117331]: /usr/lib/python3.12/site-packages/eventlet/green/os.py:47 in read
Oct 09 16:45:19 compute-0 nova_compute[117331]:     `hubs.trampoline(fd, read=True)`
Oct 09 16:45:19 compute-0 nova_compute[117331]: 
Oct 09 16:45:19 compute-0 nova_compute[117331]: /usr/lib/python3.12/site-packages/eventlet/hubs/__init__.py:157 in trampoline
Oct 09 16:45:19 compute-0 nova_compute[117331]:     `return hub.switch()`
Oct 09 16:45:19 compute-0 nova_compute[117331]: 
Oct 09 16:45:19 compute-0 nova_compute[117331]: /usr/lib/python3.12/site-packages/eventlet/hubs/hub.py:310 in switch
Oct 09 16:45:19 compute-0 nova_compute[117331]:     `return self.greenlet.switch()`
Oct 09 16:45:19 compute-0 nova_compute[117331]: 
Oct 09 16:45:19 compute-0 nova_compute[117331]: ------                        Green Thread                        ------
Oct 09 16:45:19 compute-0 nova_compute[117331]: 
Oct 09 16:45:19 compute-0 nova_compute[117331]: /usr/lib/python3.12/site-packages/eventlet/green/thread.py:48 in __thread_body
Oct 09 16:45:19 compute-0 nova_compute[117331]:     `func(*args, **kwargs)`
Oct 09 16:45:19 compute-0 nova_compute[117331]: 
Oct 09 16:45:19 compute-0 nova_compute[117331]: /usr/lib64/python3.12/threading.py:1032 in _bootstrap
Oct 09 16:45:19 compute-0 nova_compute[117331]:     `self._bootstrap_inner()`
Oct 09 16:45:19 compute-0 nova_compute[117331]: 
Oct 09 16:45:19 compute-0 nova_compute[117331]: /usr/lib/python3.12/site-packages/eventlet/green/thread.py:100 in wrap_bootstrap_inner
Oct 09 16:45:19 compute-0 nova_compute[117331]:     `bootstrap_inner()`
Oct 09 16:45:19 compute-0 nova_compute[117331]: 
Oct 09 16:45:19 compute-0 nova_compute[117331]: /usr/lib64/python3.12/threading.py:1075 in _bootstrap_inner
Oct 09 16:45:19 compute-0 nova_compute[117331]:     `self.run()`
Oct 09 16:45:19 compute-0 nova_compute[117331]: 
Oct 09 16:45:19 compute-0 nova_compute[117331]: /usr/lib64/python3.12/threading.py:1012 in run
Oct 09 16:45:19 compute-0 nova_compute[117331]:     `self._target(*self._args, **self._kwargs)`
Oct 09 16:45:19 compute-0 nova_compute[117331]: 
Oct 09 16:45:19 compute-0 nova_compute[117331]: /usr/lib/python3.12/site-packages/oslo_privsep/daemon.py:267 in logger
Oct 09 16:45:19 compute-0 nova_compute[117331]:     `for line in f:`
Oct 09 16:45:19 compute-0 nova_compute[117331]: 
Oct 09 16:45:19 compute-0 nova_compute[117331]: /usr/lib/python3.12/site-packages/eventlet/greenio/py3.py:105 in readinto
Oct 09 16:45:19 compute-0 nova_compute[117331]:     `data = self.read(up_to)`
Oct 09 16:45:19 compute-0 nova_compute[117331]: 
Oct 09 16:45:19 compute-0 nova_compute[117331]: /usr/lib/python3.12/site-packages/eventlet/greenio/py3.py:84 in read
Oct 09 16:45:19 compute-0 nova_compute[117331]:     `return _original_os.read(self._fileno, size)`
Oct 09 16:45:19 compute-0 nova_compute[117331]: 
Oct 09 16:45:19 compute-0 nova_compute[117331]: /usr/lib/python3.12/site-packages/eventlet/green/os.py:47 in read
Oct 09 16:45:19 compute-0 nova_compute[117331]:     `hubs.trampoline(fd, read=True)`
Oct 09 16:45:19 compute-0 nova_compute[117331]: 
Oct 09 16:45:19 compute-0 nova_compute[117331]: /usr/lib/python3.12/site-packages/eventlet/hubs/__init__.py:157 in trampoline
Oct 09 16:45:19 compute-0 nova_compute[117331]:     `return hub.switch()`
Oct 09 16:45:19 compute-0 nova_compute[117331]: 
Oct 09 16:45:19 compute-0 nova_compute[117331]: /usr/lib/python3.12/site-packages/eventlet/hubs/hub.py:310 in switch
Oct 09 16:45:19 compute-0 nova_compute[117331]:     `return self.greenlet.switch()`
Oct 09 16:45:19 compute-0 nova_compute[117331]: 
Oct 09 16:45:19 compute-0 systemd[1]: Starting libvirt secret daemon...
Oct 09 16:45:19 compute-0 nova_compute[117331]: ------                        Green Thread                        ------
Oct 09 16:45:19 compute-0 nova_compute[117331]: 
Oct 09 16:45:19 compute-0 nova_compute[117331]: /usr/lib/python3.12/site-packages/eventlet/green/thread.py:48 in __thread_body
Oct 09 16:45:19 compute-0 nova_compute[117331]:     `func(*args, **kwargs)`
Oct 09 16:45:19 compute-0 nova_compute[117331]: 
Oct 09 16:45:19 compute-0 nova_compute[117331]: /usr/lib64/python3.12/threading.py:1032 in _bootstrap
Oct 09 16:45:19 compute-0 nova_compute[117331]:     `self._bootstrap_inner()`
Oct 09 16:45:19 compute-0 nova_compute[117331]: 
Oct 09 16:45:19 compute-0 nova_compute[117331]: /usr/lib/python3.12/site-packages/eventlet/green/thread.py:100 in wrap_bootstrap_inner
Oct 09 16:45:19 compute-0 nova_compute[117331]:     `bootstrap_inner()`
Oct 09 16:45:19 compute-0 nova_compute[117331]: 
Oct 09 16:45:19 compute-0 nova_compute[117331]: /usr/lib64/python3.12/threading.py:1075 in _bootstrap_inner
Oct 09 16:45:19 compute-0 nova_compute[117331]:     `self.run()`
Oct 09 16:45:19 compute-0 nova_compute[117331]: 
Oct 09 16:45:19 compute-0 nova_compute[117331]: /usr/lib64/python3.12/threading.py:1012 in run
Oct 09 16:45:19 compute-0 nova_compute[117331]:     `self._target(*self._args, **self._kwargs)`
Oct 09 16:45:19 compute-0 nova_compute[117331]: 
Oct 09 16:45:19 compute-0 nova_compute[117331]: /usr/lib/python3.12/site-packages/oslo_utils/excutils.py:257 in wrapper
Oct 09 16:45:19 compute-0 nova_compute[117331]:     `return infunc(*args, **kwargs)`
Oct 09 16:45:19 compute-0 nova_compute[117331]: 
Oct 09 16:45:19 compute-0 nova_compute[117331]: /usr/lib/python3.12/site-packages/oslo_messaging/_drivers/base.py:294 in _runner
Oct 09 16:45:19 compute-0 nova_compute[117331]:     `incoming = self._poll_style_listener.poll(`
Oct 09 16:45:19 compute-0 nova_compute[117331]: 
Oct 09 16:45:19 compute-0 nova_compute[117331]: /usr/lib/python3.12/site-packages/oslo_messaging/_drivers/base.py:42 in wrapper
Oct 09 16:45:19 compute-0 nova_compute[117331]:     `message = func(in_self, timeout=watch.leftover(True))`
Oct 09 16:45:19 compute-0 nova_compute[117331]: 
Oct 09 16:45:19 compute-0 nova_compute[117331]: /usr/lib/python3.12/site-packages/oslo_messaging/_drivers/amqpdriver.py:429 in poll
Oct 09 16:45:19 compute-0 nova_compute[117331]:     `self.conn.consume(timeout=min(self._current_timeout, left))`
Oct 09 16:45:19 compute-0 nova_compute[117331]: 
Oct 09 16:45:19 compute-0 nova_compute[117331]: /usr/lib/python3.12/site-packages/oslo_messaging/_drivers/impl_rabbit.py:1477 in consume
Oct 09 16:45:19 compute-0 nova_compute[117331]:     `self.ensure(_consume,`
Oct 09 16:45:19 compute-0 nova_compute[117331]: 
Oct 09 16:45:19 compute-0 nova_compute[117331]: /usr/lib/python3.12/site-packages/oslo_messaging/_drivers/impl_rabbit.py:1173 in ensure
Oct 09 16:45:19 compute-0 nova_compute[117331]:     `ret, channel = autoretry_method()`
Oct 09 16:45:19 compute-0 nova_compute[117331]: 
Oct 09 16:45:19 compute-0 nova_compute[117331]: /usr/lib/python3.12/site-packages/kombu/connection.py:556 in _ensured
Oct 09 16:45:19 compute-0 nova_compute[117331]:     `return fun(*args, **kwargs)`
Oct 09 16:45:19 compute-0 nova_compute[117331]: 
Oct 09 16:45:19 compute-0 nova_compute[117331]: /usr/lib/python3.12/site-packages/kombu/connection.py:639 in __call__
Oct 09 16:45:19 compute-0 nova_compute[117331]:     `return fun(*args, channel=channels[0], **kwargs), channels[0]`
Oct 09 16:45:19 compute-0 nova_compute[117331]: 
Oct 09 16:45:19 compute-0 nova_compute[117331]: /usr/lib/python3.12/site-packages/oslo_messaging/_drivers/impl_rabbit.py:1162 in execute_method
Oct 09 16:45:19 compute-0 nova_compute[117331]:     `method()`
Oct 09 16:45:19 compute-0 nova_compute[117331]: 
Oct 09 16:45:19 compute-0 nova_compute[117331]: /usr/lib/python3.12/site-packages/oslo_messaging/_drivers/impl_rabbit.py:1464 in _consume
Oct 09 16:45:19 compute-0 nova_compute[117331]:     `self.connection.drain_events(timeout=poll_timeout)`
Oct 09 16:45:19 compute-0 nova_compute[117331]: 
Oct 09 16:45:19 compute-0 nova_compute[117331]: /usr/lib/python3.12/site-packages/kombu/connection.py:341 in drain_events
Oct 09 16:45:19 compute-0 nova_compute[117331]:     `return self.transport.drain_events(self.connection, **kwargs)`
Oct 09 16:45:19 compute-0 nova_compute[117331]: 
Oct 09 16:45:19 compute-0 nova_compute[117331]: /usr/lib/python3.12/site-packages/kombu/transport/pyamqp.py:171 in drain_events
Oct 09 16:45:19 compute-0 nova_compute[117331]:     `return connection.drain_events(**kwargs)`
Oct 09 16:45:19 compute-0 nova_compute[117331]: 
Oct 09 16:45:19 compute-0 nova_compute[117331]: /usr/lib/python3.12/site-packages/amqp/connection.py:526 in drain_events
Oct 09 16:45:19 compute-0 nova_compute[117331]:     `while not self.blocking_read(timeout):`
Oct 09 16:45:19 compute-0 nova_compute[117331]: 
Oct 09 16:45:19 compute-0 nova_compute[117331]: /usr/lib/python3.12/site-packages/amqp/connection.py:531 in blocking_read
Oct 09 16:45:19 compute-0 nova_compute[117331]:     `frame = self.transport.read_frame()`
Oct 09 16:45:19 compute-0 nova_compute[117331]: 
Oct 09 16:45:19 compute-0 nova_compute[117331]: /usr/lib/python3.12/site-packages/amqp/transport.py:294 in read_frame
Oct 09 16:45:19 compute-0 nova_compute[117331]:     `frame_header = read(7, True)`
Oct 09 16:45:19 compute-0 nova_compute[117331]: 
Oct 09 16:45:19 compute-0 nova_compute[117331]: /usr/lib/python3.12/site-packages/amqp/transport.py:574 in _read
Oct 09 16:45:19 compute-0 nova_compute[117331]:     `s = recv(n - len(rbuf))  # see note above`
Oct 09 16:45:19 compute-0 nova_compute[117331]: 
Oct 09 16:45:19 compute-0 nova_compute[117331]: /usr/lib/python3.12/site-packages/eventlet/green/ssl.py:196 in read
Oct 09 16:45:19 compute-0 nova_compute[117331]:     `return self._call_trampolining(`
Oct 09 16:45:19 compute-0 nova_compute[117331]: 
Oct 09 16:45:19 compute-0 nova_compute[117331]: /usr/lib/python3.12/site-packages/eventlet/green/ssl.py:169 in _call_trampolining
Oct 09 16:45:19 compute-0 nova_compute[117331]:     `trampoline(self,`
Oct 09 16:45:19 compute-0 nova_compute[117331]: 
Oct 09 16:45:19 compute-0 nova_compute[117331]: /usr/lib/python3.12/site-packages/eventlet/hubs/__init__.py:157 in trampoline
Oct 09 16:45:19 compute-0 nova_compute[117331]:     `return hub.switch()`
Oct 09 16:45:19 compute-0 nova_compute[117331]: 
Oct 09 16:45:19 compute-0 nova_compute[117331]: /usr/lib/python3.12/site-packages/eventlet/hubs/hub.py:310 in switch
Oct 09 16:45:19 compute-0 nova_compute[117331]:     `return self.greenlet.switch()`
Oct 09 16:45:19 compute-0 nova_compute[117331]: 
Oct 09 16:45:19 compute-0 nova_compute[117331]: ------                        Green Thread                        ------
Oct 09 16:45:19 compute-0 nova_compute[117331]: 
Oct 09 16:45:19 compute-0 nova_compute[117331]: /usr/lib/python3.12/site-packages/eventlet/green/thread.py:48 in __thread_body
Oct 09 16:45:19 compute-0 nova_compute[117331]:     `func(*args, **kwargs)`
Oct 09 16:45:19 compute-0 nova_compute[117331]: 
Oct 09 16:45:19 compute-0 nova_compute[117331]: /usr/lib64/python3.12/threading.py:1032 in _bootstrap
Oct 09 16:45:19 compute-0 nova_compute[117331]:     `self._bootstrap_inner()`
Oct 09 16:45:19 compute-0 nova_compute[117331]: 
Oct 09 16:45:19 compute-0 nova_compute[117331]: /usr/lib/python3.12/site-packages/eventlet/green/thread.py:100 in wrap_bootstrap_inner
Oct 09 16:45:19 compute-0 nova_compute[117331]:     `bootstrap_inner()`
Oct 09 16:45:19 compute-0 nova_compute[117331]: 
Oct 09 16:45:19 compute-0 nova_compute[117331]: /usr/lib64/python3.12/threading.py:1075 in _bootstrap_inner
Oct 09 16:45:19 compute-0 nova_compute[117331]:     `self.run()`
Oct 09 16:45:19 compute-0 nova_compute[117331]: 
Oct 09 16:45:19 compute-0 nova_compute[117331]: /usr/lib64/python3.12/threading.py:1012 in run
Oct 09 16:45:19 compute-0 nova_compute[117331]:     `self._target(*self._args, **self._kwargs)`
Oct 09 16:45:19 compute-0 nova_compute[117331]: 
Oct 09 16:45:19 compute-0 nova_compute[117331]: /usr/lib/python3.12/site-packages/ovsdbapp/backend/ovs_idl/connection.py:108 in run
Oct 09 16:45:19 compute-0 nova_compute[117331]:     `self.poller.block()`
Oct 09 16:45:19 compute-0 nova_compute[117331]: 
Oct 09 16:45:19 compute-0 nova_compute[117331]: /usr/lib64/python3.12/site-packages/ovs/poller.py:231 in block
Oct 09 16:45:19 compute-0 nova_compute[117331]:     `events = self.poll.poll(self.timeout)`
Oct 09 16:45:19 compute-0 nova_compute[117331]: 
Oct 09 16:45:19 compute-0 nova_compute[117331]: /usr/lib64/python3.12/site-packages/ovs/poller.py:137 in poll
Oct 09 16:45:19 compute-0 nova_compute[117331]:     `rlist, wlist, xlist = select.select(self.rlist,`
Oct 09 16:45:19 compute-0 nova_compute[117331]: 
Oct 09 16:45:19 compute-0 nova_compute[117331]: /usr/lib/python3.12/site-packages/eventlet/green/select.py:80 in select
Oct 09 16:45:19 compute-0 nova_compute[117331]:     `return hub.switch()`
Oct 09 16:45:19 compute-0 nova_compute[117331]: 
Oct 09 16:45:19 compute-0 nova_compute[117331]: /usr/lib/python3.12/site-packages/eventlet/hubs/hub.py:310 in switch
Oct 09 16:45:19 compute-0 nova_compute[117331]:     `return self.greenlet.switch()`
Oct 09 16:45:19 compute-0 nova_compute[117331]: 
Oct 09 16:45:19 compute-0 nova_compute[117331]: ------                        Green Thread                        ------
Oct 09 16:45:19 compute-0 nova_compute[117331]: 
Oct 09 16:45:19 compute-0 nova_compute[117331]: /usr/lib/python3.12/site-packages/eventlet/greenpool.py:87 in _spawn_n_impl
Oct 09 16:45:19 compute-0 nova_compute[117331]:     `func(*args, **kwargs)`
Oct 09 16:45:19 compute-0 nova_compute[117331]: 
Oct 09 16:45:19 compute-0 nova_compute[117331]: /usr/lib/python3.12/site-packages/futurist/_green.py:69 in __call__
Oct 09 16:45:19 compute-0 nova_compute[117331]:     `self.work.run()`
Oct 09 16:45:19 compute-0 nova_compute[117331]: 
Oct 09 16:45:19 compute-0 nova_compute[117331]: /usr/lib/python3.12/site-packages/futurist/_utils.py:45 in run
Oct 09 16:45:19 compute-0 nova_compute[117331]:     `result = self.fn(*self.args, **self.kwargs)`
Oct 09 16:45:19 compute-0 nova_compute[117331]: 
Oct 09 16:45:19 compute-0 nova_compute[117331]: /usr/lib/python3.12/site-packages/nova/utils.py:584 in context_wrapper
Oct 09 16:45:19 compute-0 nova_compute[117331]:     `return func(*args, **kwargs)`
Oct 09 16:45:19 compute-0 nova_compute[117331]: 
Oct 09 16:45:19 compute-0 nova_compute[117331]: /usr/lib/python3.12/site-packages/nova/virt/libvirt/host.py:225 in _dispatch_thread
Oct 09 16:45:19 compute-0 nova_compute[117331]:     `self._dispatch_events()`
Oct 09 16:45:19 compute-0 nova_compute[117331]: 
Oct 09 16:45:19 compute-0 nova_compute[117331]: /usr/lib/python3.12/site-packages/nova/virt/libvirt/host.py:393 in _dispatch_events
Oct 09 16:45:19 compute-0 nova_compute[117331]:     `_c = self._event_notify_recv.read(1)`
Oct 09 16:45:19 compute-0 nova_compute[117331]: 
Oct 09 16:45:19 compute-0 nova_compute[117331]: /usr/lib/python3.12/site-packages/eventlet/greenio/py3.py:84 in read
Oct 09 16:45:19 compute-0 nova_compute[117331]:     `return _original_os.read(self._fileno, size)`
Oct 09 16:45:19 compute-0 nova_compute[117331]: 
Oct 09 16:45:19 compute-0 nova_compute[117331]: /usr/lib/python3.12/site-packages/eventlet/green/os.py:47 in read
Oct 09 16:45:19 compute-0 nova_compute[117331]:     `hubs.trampoline(fd, read=True)`
Oct 09 16:45:19 compute-0 nova_compute[117331]: 
Oct 09 16:45:19 compute-0 nova_compute[117331]: /usr/lib/python3.12/site-packages/eventlet/hubs/__init__.py:157 in trampoline
Oct 09 16:45:19 compute-0 nova_compute[117331]:     `return hub.switch()`
Oct 09 16:45:19 compute-0 nova_compute[117331]: 
Oct 09 16:45:19 compute-0 nova_compute[117331]: /usr/lib/python3.12/site-packages/eventlet/hubs/hub.py:310 in switch
Oct 09 16:45:19 compute-0 nova_compute[117331]:     `return self.greenlet.switch()`
Oct 09 16:45:19 compute-0 nova_compute[117331]: 
Oct 09 16:45:19 compute-0 nova_compute[117331]: ------                        Green Thread                        ------
Oct 09 16:45:19 compute-0 nova_compute[117331]: 
Oct 09 16:45:19 compute-0 nova_compute[117331]: /usr/lib/python3.12/site-packages/eventlet/greenpool.py:87 in _spawn_n_impl
Oct 09 16:45:19 compute-0 nova_compute[117331]:     `func(*args, **kwargs)`
Oct 09 16:45:19 compute-0 nova_compute[117331]: 
Oct 09 16:45:19 compute-0 nova_compute[117331]: /usr/lib/python3.12/site-packages/futurist/_green.py:69 in __call__
Oct 09 16:45:19 compute-0 nova_compute[117331]:     `self.work.run()`
Oct 09 16:45:19 compute-0 nova_compute[117331]: 
Oct 09 16:45:19 compute-0 nova_compute[117331]: /usr/lib/python3.12/site-packages/futurist/_utils.py:45 in run
Oct 09 16:45:19 compute-0 nova_compute[117331]:     `result = self.fn(*self.args, **self.kwargs)`
Oct 09 16:45:19 compute-0 nova_compute[117331]: 
Oct 09 16:45:19 compute-0 nova_compute[117331]: /usr/lib/python3.12/site-packages/nova/utils.py:584 in context_wrapper
Oct 09 16:45:19 compute-0 nova_compute[117331]:     `return func(*args, **kwargs)`
Oct 09 16:45:19 compute-0 nova_compute[117331]: 
Oct 09 16:45:19 compute-0 nova_compute[117331]: /usr/lib/python3.12/site-packages/nova/virt/libvirt/host.py:233 in _conn_event_thread
Oct 09 16:45:19 compute-0 nova_compute[117331]:     `self._dispatch_conn_event()`
Oct 09 16:45:19 compute-0 nova_compute[117331]: 
Oct 09 16:45:19 compute-0 nova_compute[117331]: /usr/lib/python3.12/site-packages/nova/virt/libvirt/host.py:239 in _dispatch_conn_event
Oct 09 16:45:19 compute-0 nova_compute[117331]:     `handler = self._conn_event_handler_queue.get()`
Oct 09 16:45:19 compute-0 nova_compute[117331]: 
Oct 09 16:45:19 compute-0 nova_compute[117331]: /usr/lib/python3.12/site-packages/eventlet/queue.py:321 in get
Oct 09 16:45:19 compute-0 nova_compute[117331]:     `return waiter.wait()`
Oct 09 16:45:19 compute-0 nova_compute[117331]: 
Oct 09 16:45:19 compute-0 nova_compute[117331]: /usr/lib/python3.12/site-packages/eventlet/queue.py:140 in wait
Oct 09 16:45:19 compute-0 nova_compute[117331]:     `return get_hub().switch()`
Oct 09 16:45:19 compute-0 nova_compute[117331]: 
Oct 09 16:45:19 compute-0 nova_compute[117331]: /usr/lib/python3.12/site-packages/eventlet/hubs/hub.py:310 in switch
Oct 09 16:45:19 compute-0 nova_compute[117331]:     `return self.greenlet.switch()`
Oct 09 16:45:19 compute-0 nova_compute[117331]: 
Oct 09 16:45:19 compute-0 nova_compute[117331]: ------                        Green Thread                        ------
Oct 09 16:45:19 compute-0 nova_compute[117331]: 
Oct 09 16:45:19 compute-0 nova_compute[117331]: /usr/lib/python3.12/site-packages/eventlet/greenpool.py:87 in _spawn_n_impl
Oct 09 16:45:19 compute-0 nova_compute[117331]:     `func(*args, **kwargs)`
Oct 09 16:45:19 compute-0 nova_compute[117331]: 
Oct 09 16:45:19 compute-0 nova_compute[117331]: /usr/lib/python3.12/site-packages/futurist/_green.py:69 in __call__
Oct 09 16:45:19 compute-0 nova_compute[117331]:     `self.work.run()`
Oct 09 16:45:19 compute-0 nova_compute[117331]: 
Oct 09 16:45:19 compute-0 nova_compute[117331]: /usr/lib/python3.12/site-packages/futurist/_utils.py:45 in run
Oct 09 16:45:19 compute-0 nova_compute[117331]:     `result = self.fn(*self.args, **self.kwargs)`
Oct 09 16:45:19 compute-0 nova_compute[117331]: 
Oct 09 16:45:19 compute-0 nova_compute[117331]: /usr/lib/python3.12/site-packages/oslo_messaging/rpc/server.py:174 in _process_incoming
Oct 09 16:45:19 compute-0 nova_compute[117331]:     `res = self.dispatcher.dispatch(message)`
Oct 09 16:45:19 compute-0 nova_compute[117331]: 
Oct 09 16:45:19 compute-0 nova_compute[117331]: /usr/lib/python3.12/site-packages/oslo_messaging/rpc/dispatcher.py:309 in dispatch
Oct 09 16:45:19 compute-0 nova_compute[117331]:     `return self._do_dispatch(endpoint, method, ctxt, args)`
Oct 09 16:45:19 compute-0 nova_compute[117331]: 
Oct 09 16:45:19 compute-0 nova_compute[117331]: /usr/lib/python3.12/site-packages/oslo_messaging/rpc/dispatcher.py:229 in _do_dispatch
Oct 09 16:45:19 compute-0 nova_compute[117331]:     `result = func(ctxt, **new_args)`
Oct 09 16:45:19 compute-0 nova_compute[117331]: 
Oct 09 16:45:19 compute-0 nova_compute[117331]: /usr/lib/python3.12/site-packages/nova/exception_wrapper.py:63 in wrapped
Oct 09 16:45:19 compute-0 nova_compute[117331]:     `return f(self, context, *args, **kw)`
Oct 09 16:45:19 compute-0 nova_compute[117331]: 
Oct 09 16:45:19 compute-0 nova_compute[117331]: /usr/lib/python3.12/site-packages/nova/compute/utils.py:1483 in decorated_function
Oct 09 16:45:19 compute-0 nova_compute[117331]:     `return function(self, context, *args, **kwargs)`
Oct 09 16:45:19 compute-0 nova_compute[117331]: 
Oct 09 16:45:19 compute-0 nova_compute[117331]: /usr/lib/python3.12/site-packages/nova/compute/manager.py:202 in decorated_function
Oct 09 16:45:19 compute-0 nova_compute[117331]:     `return function(self, context, *args, **kwargs)`
Oct 09 16:45:19 compute-0 nova_compute[117331]: 
Oct 09 16:45:19 compute-0 nova_compute[117331]: /usr/lib/python3.12/site-packages/nova/compute/manager.py:8094 in attach_volume
Oct 09 16:45:19 compute-0 nova_compute[117331]:     `do_attach_volume(context, instance, driver_bdm)`
Oct 09 16:45:19 compute-0 nova_compute[117331]: 
Oct 09 16:45:19 compute-0 nova_compute[117331]: /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:415 in inner
Oct 09 16:45:19 compute-0 nova_compute[117331]:     `return f(*args, **kwargs)`
Oct 09 16:45:19 compute-0 nova_compute[117331]: 
Oct 09 16:45:19 compute-0 nova_compute[117331]: /usr/lib/python3.12/site-packages/nova/compute/manager.py:8089 in do_attach_volume
Oct 09 16:45:19 compute-0 nova_compute[117331]:     `return self._attach_volume(context, instance, driver_bdm)`
Oct 09 16:45:19 compute-0 nova_compute[117331]: 
Oct 09 16:45:19 compute-0 nova_compute[117331]: /usr/lib/python3.12/site-packages/nova/compute/manager.py:8108 in _attach_volume
Oct 09 16:45:19 compute-0 nova_compute[117331]:     `bdm.attach(context, instance, self.volume_api, self.driver,`
Oct 09 16:45:19 compute-0 nova_compute[117331]: 
Oct 09 16:45:19 compute-0 nova_compute[117331]: /usr/lib/python3.12/site-packages/nova/virt/block_device.py:46 in wrapped
Oct 09 16:45:19 compute-0 nova_compute[117331]:     `ret_val = method(obj, context, *args, **kwargs)`
Oct 09 16:45:19 compute-0 nova_compute[117331]: 
Oct 09 16:45:19 compute-0 nova_compute[117331]: /usr/lib/python3.12/site-packages/nova/virt/block_device.py:769 in attach
Oct 09 16:45:19 compute-0 nova_compute[117331]:     `self._do_attach(context, instance, volume, volume_api,`
Oct 09 16:45:19 compute-0 nova_compute[117331]: 
Oct 09 16:45:19 compute-0 nova_compute[117331]: /usr/lib/python3.12/site-packages/nova/virt/block_device.py:754 in _do_attach
Oct 09 16:45:19 compute-0 nova_compute[117331]:     `self._volume_attach(context, volume, connector, instance,`
Oct 09 16:45:19 compute-0 nova_compute[117331]: 
Oct 09 16:45:19 compute-0 nova_compute[117331]: /usr/lib/python3.12/site-packages/nova/virt/block_device.py:692 in _volume_attach
Oct 09 16:45:19 compute-0 nova_compute[117331]:     `virt_driver.attach_volume(`
Oct 09 16:45:19 compute-0 nova_compute[117331]: 
Oct 09 16:45:19 compute-0 nova_compute[117331]: /usr/lib/python3.12/site-packages/nova/virt/libvirt/driver.py:2293 in attach_volume
Oct 09 16:45:19 compute-0 nova_compute[117331]:     `self._connect_volume(context, connection_info, instance,`
Oct 09 16:45:19 compute-0 nova_compute[117331]: 
Oct 09 16:45:19 compute-0 nova_compute[117331]: /usr/lib/python3.12/site-packages/nova/virt/libvirt/driver.py:2041 in _connect_volume
Oct 09 16:45:19 compute-0 nova_compute[117331]:     `vol_driver.connect_volume(connection_info, instance)`
Oct 09 16:45:19 compute-0 nova_compute[117331]: 
Oct 09 16:45:19 compute-0 nova_compute[117331]: /usr/lib/python3.12/site-packages/nova/virt/libvirt/volume/fs.py:113 in connect_volume
Oct 09 16:45:19 compute-0 nova_compute[117331]:     `mount.mount(self.fstype, export, vol_name, mountpoint, instance,`
Oct 09 16:45:19 compute-0 nova_compute[117331]: 
Oct 09 16:45:19 compute-0 nova_compute[117331]: /usr/lib/python3.12/site-packages/nova/virt/libvirt/volume/mount.py:414 in mount
Oct 09 16:45:19 compute-0 nova_compute[117331]:     `mount_state.mount(fstype, export, vol_name, mountpoint, instance,`
Oct 09 16:45:19 compute-0 nova_compute[117331]: 
Oct 09 16:45:19 compute-0 nova_compute[117331]: /usr/lib/python3.12/site-packages/nova/virt/libvirt/volume/mount.py:308 in mount
Oct 09 16:45:19 compute-0 nova_compute[117331]:     `nova.privsep.fs.mount(fstype, export, mountpoint, options)`
Oct 09 16:45:19 compute-0 nova_compute[117331]: 
Oct 09 16:45:19 compute-0 nova_compute[117331]: /usr/lib/python3.12/site-packages/oslo_privsep/priv_context.py:267 in _wrap
Oct 09 16:45:19 compute-0 nova_compute[117331]:     `return self.channel.remote_call(name, args, kwargs,`
Oct 09 16:45:19 compute-0 nova_compute[117331]: 
Oct 09 16:45:19 compute-0 nova_compute[117331]: /usr/lib/python3.12/site-packages/oslo_privsep/daemon.py:213 in remote_call
Oct 09 16:45:19 compute-0 nova_compute[117331]:     `result = self.send_recv((comm.Message.CALL.value, name, args, kwargs),`
Oct 09 16:45:19 compute-0 nova_compute[117331]: 
Oct 09 16:45:19 compute-0 nova_compute[117331]: /usr/lib/python3.12/site-packages/oslo_privsep/comm.py:194 in send_recv
Oct 09 16:45:19 compute-0 nova_compute[117331]:     `reply = future.result()`
Oct 09 16:45:19 compute-0 nova_compute[117331]: 
Oct 09 16:45:19 compute-0 nova_compute[117331]: /usr/lib/python3.12/site-packages/oslo_privsep/comm.py:121 in result
Oct 09 16:45:19 compute-0 nova_compute[117331]:     `if not self.condvar.wait(timeout=self.timeout):`
Oct 09 16:45:19 compute-0 nova_compute[117331]: 
Oct 09 16:45:19 compute-0 nova_compute[117331]: /usr/lib64/python3.12/threading.py:355 in wait
Oct 09 16:45:19 compute-0 nova_compute[117331]:     `waiter.acquire()`
Oct 09 16:45:19 compute-0 nova_compute[117331]: 
Oct 09 16:45:19 compute-0 nova_compute[117331]: /usr/lib/python3.12/site-packages/eventlet/semaphore.py:115 in acquire
Oct 09 16:45:19 compute-0 nova_compute[117331]:     `hubs.get_hub().switch()`
Oct 09 16:45:19 compute-0 nova_compute[117331]: 
Oct 09 16:45:19 compute-0 nova_compute[117331]: /usr/lib/python3.12/site-packages/eventlet/hubs/hub.py:310 in switch
Oct 09 16:45:19 compute-0 nova_compute[117331]:     `return self.greenlet.switch()`
Oct 09 16:45:19 compute-0 nova_compute[117331]: 
Oct 09 16:45:19 compute-0 nova_compute[117331]: ------                        Green Thread                        ------
Oct 09 16:45:19 compute-0 nova_compute[117331]: 
Oct 09 16:45:19 compute-0 nova_compute[117331]: /usr/lib/python3.12/site-packages/eventlet/greenthread.py:272 in main
Oct 09 16:45:19 compute-0 nova_compute[117331]:     `result = function(*args, **kwargs)`
Oct 09 16:45:19 compute-0 nova_compute[117331]: 
Oct 09 16:45:19 compute-0 nova_compute[117331]: /usr/lib/python3.12/site-packages/oslo_service/backend/_eventlet/loopingcall.py:161 in _run_loop
Oct 09 16:45:19 compute-0 nova_compute[117331]:     `self._sleep(idle)`
Oct 09 16:45:19 compute-0 nova_compute[117331]: 
Oct 09 16:45:19 compute-0 nova_compute[117331]: /usr/lib/python3.12/site-packages/oslo_service/backend/_eventlet/loopingcall.py:109 in _sleep
Oct 09 16:45:19 compute-0 nova_compute[117331]:     `self._abort.wait(timeout)`
Oct 09 16:45:19 compute-0 nova_compute[117331]: 
Oct 09 16:45:19 compute-0 nova_compute[117331]: /usr/lib/python3.12/site-packages/oslo_utils/eventletutils.py:178 in wait
Oct 09 16:45:19 compute-0 nova_compute[117331]:     `event.wait()`
Oct 09 16:45:19 compute-0 nova_compute[117331]: 
Oct 09 16:45:19 compute-0 nova_compute[117331]: /usr/lib/python3.12/site-packages/eventlet/event.py:124 in wait
Oct 09 16:45:19 compute-0 nova_compute[117331]:     `result = hub.switch()`
Oct 09 16:45:19 compute-0 nova_compute[117331]: 
Oct 09 16:45:19 compute-0 nova_compute[117331]: /usr/lib/python3.12/site-packages/eventlet/hubs/hub.py:310 in switch
Oct 09 16:45:19 compute-0 nova_compute[117331]:     `return self.greenlet.switch()`
Oct 09 16:45:19 compute-0 nova_compute[117331]: 
Oct 09 16:45:19 compute-0 nova_compute[117331]: ------                        Green Thread                        ------
Oct 09 16:45:19 compute-0 nova_compute[117331]: 
Oct 09 16:45:19 compute-0 nova_compute[117331]: /usr/lib/python3.12/site-packages/eventlet/greenthread.py:272 in main
Oct 09 16:45:19 compute-0 nova_compute[117331]:     `result = function(*args, **kwargs)`
Oct 09 16:45:19 compute-0 nova_compute[117331]: 
Oct 09 16:45:19 compute-0 nova_compute[117331]: /usr/lib/python3.12/site-packages/oslo_service/backend/_eventlet/loopingcall.py:161 in _run_loop
Oct 09 16:45:19 compute-0 nova_compute[117331]:     `self._sleep(idle)`
Oct 09 16:45:19 compute-0 nova_compute[117331]: 
Oct 09 16:45:19 compute-0 nova_compute[117331]: /usr/lib/python3.12/site-packages/oslo_service/backend/_eventlet/loopingcall.py:109 in _sleep
Oct 09 16:45:19 compute-0 nova_compute[117331]:     `self._abort.wait(timeout)`
Oct 09 16:45:19 compute-0 nova_compute[117331]: 
Oct 09 16:45:19 compute-0 nova_compute[117331]: /usr/lib/python3.12/site-packages/oslo_utils/eventletutils.py:178 in wait
Oct 09 16:45:19 compute-0 nova_compute[117331]:     `event.wait()`
Oct 09 16:45:19 compute-0 nova_compute[117331]: 
Oct 09 16:45:19 compute-0 nova_compute[117331]: /usr/lib/python3.12/site-packages/eventlet/event.py:124 in wait
Oct 09 16:45:19 compute-0 nova_compute[117331]:     `result = hub.switch()`
Oct 09 16:45:19 compute-0 nova_compute[117331]: 
Oct 09 16:45:19 compute-0 nova_compute[117331]: /usr/lib/python3.12/site-packages/eventlet/hubs/hub.py:310 in switch
Oct 09 16:45:19 compute-0 nova_compute[117331]:     `return self.greenlet.switch()`
Oct 09 16:45:19 compute-0 nova_compute[117331]: 
Oct 09 16:45:19 compute-0 nova_compute[117331]: ------                        Green Thread                        ------
Oct 09 16:45:19 compute-0 nova_compute[117331]: 
Oct 09 16:45:19 compute-0 nova_compute[117331]: /usr/lib/python3.12/site-packages/eventlet/greenthread.py:272 in main
Oct 09 16:45:19 compute-0 nova_compute[117331]:     `result = function(*args, **kwargs)`
Oct 09 16:45:19 compute-0 nova_compute[117331]: 
Oct 09 16:45:19 compute-0 nova_compute[117331]: /usr/lib/python3.12/site-packages/oslo_service/backend/_eventlet/service.py:725 in run_service
Oct 09 16:45:19 compute-0 nova_compute[117331]:     `done.wait()`
Oct 09 16:45:19 compute-0 nova_compute[117331]: 
Oct 09 16:45:19 compute-0 nova_compute[117331]: /usr/lib/python3.12/site-packages/eventlet/event.py:124 in wait
Oct 09 16:45:19 compute-0 nova_compute[117331]:     `result = hub.switch()`
Oct 09 16:45:19 compute-0 nova_compute[117331]: 
Oct 09 16:45:19 compute-0 nova_compute[117331]: /usr/lib/python3.12/site-packages/eventlet/hubs/hub.py:310 in switch
Oct 09 16:45:19 compute-0 nova_compute[117331]:     `return self.greenlet.switch()`
Oct 09 16:45:19 compute-0 nova_compute[117331]: 
Oct 09 16:45:19 compute-0 nova_compute[117331]: ------                        Green Thread                        ------
Oct 09 16:45:19 compute-0 nova_compute[117331]: 
Oct 09 16:45:19 compute-0 nova_compute[117331]: /usr/lib/python3.12/site-packages/eventlet/hubs/hub.py:352 in run
Oct 09 16:45:19 compute-0 nova_compute[117331]:     `self.fire_timers(self.clock())`
Oct 09 16:45:19 compute-0 nova_compute[117331]: 
Oct 09 16:45:19 compute-0 nova_compute[117331]: /usr/lib/python3.12/site-packages/eventlet/hubs/hub.py:471 in fire_timers
Oct 09 16:45:19 compute-0 nova_compute[117331]:     `timer()`
Oct 09 16:45:19 compute-0 nova_compute[117331]: 
Oct 09 16:45:19 compute-0 nova_compute[117331]: /usr/lib/python3.12/site-packages/eventlet/hubs/timer.py:59 in __call__
Oct 09 16:45:19 compute-0 nova_compute[117331]:     `cb(*args, **kw)`
Oct 09 16:45:19 compute-0 nova_compute[117331]: 
Oct 09 16:45:19 compute-0 nova_compute[117331]: ------                        Green Thread                        ------
Oct 09 16:45:19 compute-0 nova_compute[117331]: 
Oct 09 16:45:19 compute-0 nova_compute[117331]: /usr/lib/python3.12/site-packages/eventlet/tpool.py:56 in tpool_trampoline
Oct 09 16:45:19 compute-0 nova_compute[117331]:     `_c = _rsock.recv(1)`
Oct 09 16:45:19 compute-0 nova_compute[117331]: 
Oct 09 16:45:19 compute-0 nova_compute[117331]: /usr/lib/python3.12/site-packages/eventlet/greenio/base.py:352 in recv
Oct 09 16:45:19 compute-0 nova_compute[117331]:     `return self._recv_loop(self.fd.recv, b'', bufsize, flags)`
Oct 09 16:45:19 compute-0 nova_compute[117331]: 
Oct 09 16:45:19 compute-0 nova_compute[117331]: /usr/lib/python3.12/site-packages/eventlet/greenio/base.py:346 in _recv_loop
Oct 09 16:45:19 compute-0 nova_compute[117331]:     `self._read_trampoline()`
Oct 09 16:45:19 compute-0 nova_compute[117331]: 
Oct 09 16:45:19 compute-0 nova_compute[117331]: /usr/lib/python3.12/site-packages/eventlet/greenio/base.py:314 in _read_trampoline
Oct 09 16:45:19 compute-0 nova_compute[117331]:     `self._trampoline(`
Oct 09 16:45:19 compute-0 nova_compute[117331]: 
Oct 09 16:45:19 compute-0 nova_compute[117331]: /usr/lib/python3.12/site-packages/eventlet/greenio/base.py:206 in _trampoline
Oct 09 16:45:19 compute-0 nova_compute[117331]:     `return trampoline(fd, read=read, write=write, timeout=timeout,`
Oct 09 16:45:19 compute-0 nova_compute[117331]: 
Oct 09 16:45:19 compute-0 nova_compute[117331]: /usr/lib/python3.12/site-packages/eventlet/hubs/__init__.py:157 in trampoline
Oct 09 16:45:19 compute-0 nova_compute[117331]:     `return hub.switch()`
Oct 09 16:45:19 compute-0 nova_compute[117331]: 
Oct 09 16:45:19 compute-0 nova_compute[117331]: /usr/lib/python3.12/site-packages/eventlet/hubs/hub.py:310 in switch
Oct 09 16:45:19 compute-0 nova_compute[117331]:     `return self.greenlet.switch()`
Oct 09 16:45:19 compute-0 nova_compute[117331]: 
Oct 09 16:45:19 compute-0 nova_compute[117331]: ------                        Green Thread                        ------
Oct 09 16:45:19 compute-0 nova_compute[117331]: 
Oct 09 16:45:19 compute-0 nova_compute[117331]: No Traceback!
Oct 09 16:45:19 compute-0 nova_compute[117331]: 
Oct 09 16:45:19 compute-0 nova_compute[117331]: ------                        Green Thread                        ------
Oct 09 16:45:19 compute-0 nova_compute[117331]: 
Oct 09 16:45:19 compute-0 nova_compute[117331]: No Traceback!
Oct 09 16:45:19 compute-0 nova_compute[117331]: 
Oct 09 16:45:19 compute-0 nova_compute[117331]: ------                        Green Thread                        ------
Oct 09 16:45:19 compute-0 nova_compute[117331]: 
Oct 09 16:45:19 compute-0 nova_compute[117331]: No Traceback!
Oct 09 16:45:19 compute-0 nova_compute[117331]: 
Oct 09 16:45:19 compute-0 nova_compute[117331]: ------                        Green Thread                        ------
Oct 09 16:45:19 compute-0 nova_compute[117331]: 
Oct 09 16:45:19 compute-0 nova_compute[117331]: No Traceback!
Oct 09 16:45:19 compute-0 nova_compute[117331]: 
Oct 09 16:45:19 compute-0 nova_compute[117331]: ------                        Green Thread                        ------
Oct 09 16:45:19 compute-0 nova_compute[117331]: 
Oct 09 16:45:19 compute-0 nova_compute[117331]: No Traceback!
Oct 09 16:45:19 compute-0 nova_compute[117331]: 
Oct 09 16:45:19 compute-0 nova_compute[117331]: ========================================================================
Oct 09 16:45:19 compute-0 nova_compute[117331]: ====                           Processes                            ====
Oct 09 16:45:19 compute-0 nova_compute[117331]: ========================================================================
Oct 09 16:45:19 compute-0 nova_compute[117331]: Process 2 (under 1) [ run by: nova (42436), state: running ]
Oct 09 16:45:19 compute-0 nova_compute[117331]: 
Oct 09 16:45:19 compute-0 nova_compute[117331]: ========================================================================
Oct 09 16:45:19 compute-0 nova_compute[117331]: ====                         Configuration                          ====
Oct 09 16:45:19 compute-0 nova_compute[117331]: ========================================================================
Oct 09 16:45:19 compute-0 nova_compute[117331]: 
Oct 09 16:45:19 compute-0 nova_compute[117331]: api: 
Oct 09 16:45:19 compute-0 nova_compute[117331]:   compute_link_prefix = None
Oct 09 16:45:19 compute-0 nova_compute[117331]:   config_drive_skip_versions = 1.0 2007-01-19 2007-03-01 2007-08-29 2007-10-10 2007-12-15 2008-02-01 2008-09-01
Oct 09 16:45:19 compute-0 nova_compute[117331]:   dhcp_domain = 
Oct 09 16:45:19 compute-0 nova_compute[117331]:   enable_instance_password = True
Oct 09 16:45:19 compute-0 nova_compute[117331]:   glance_link_prefix = None
Oct 09 16:45:19 compute-0 nova_compute[117331]:   instance_list_cells_batch_fixed_size = 100
Oct 09 16:45:19 compute-0 nova_compute[117331]:   instance_list_cells_batch_strategy = distributed
Oct 09 16:45:19 compute-0 nova_compute[117331]:   instance_list_per_project_cells = False
Oct 09 16:45:19 compute-0 nova_compute[117331]:   list_records_by_skipping_down_cells = True
Oct 09 16:45:19 compute-0 nova_compute[117331]:   local_metadata_per_cell = False
Oct 09 16:45:19 compute-0 nova_compute[117331]:   max_limit = 1000
Oct 09 16:45:19 compute-0 nova_compute[117331]:   metadata_cache_expiration = 15
Oct 09 16:45:19 compute-0 nova_compute[117331]:   neutron_default_project_id = default
Oct 09 16:45:19 compute-0 nova_compute[117331]:   response_validation = warn
Oct 09 16:45:19 compute-0 nova_compute[117331]:   use_neutron_default_nets = False
Oct 09 16:45:19 compute-0 nova_compute[117331]:   vendordata_dynamic_connect_timeout = 5
Oct 09 16:45:19 compute-0 nova_compute[117331]:   vendordata_dynamic_failure_fatal = False
Oct 09 16:45:19 compute-0 nova_compute[117331]:   vendordata_dynamic_read_timeout = 5
Oct 09 16:45:19 compute-0 nova_compute[117331]:   vendordata_dynamic_ssl_certfile = 
Oct 09 16:45:19 compute-0 nova_compute[117331]:   vendordata_dynamic_targets = 
Oct 09 16:45:19 compute-0 nova_compute[117331]:   vendordata_jsonfile_path = None
Oct 09 16:45:19 compute-0 nova_compute[117331]:   vendordata_providers = 
Oct 09 16:45:19 compute-0 nova_compute[117331]:     StaticJSON
Oct 09 16:45:19 compute-0 nova_compute[117331]: 
Oct 09 16:45:19 compute-0 nova_compute[117331]: api_database: 
Oct 09 16:45:19 compute-0 nova_compute[117331]:   asyncio_connection = ***
Oct 09 16:45:19 compute-0 nova_compute[117331]:   asyncio_slave_connection = ***
Oct 09 16:45:19 compute-0 nova_compute[117331]:   backend = sqlalchemy
Oct 09 16:45:19 compute-0 nova_compute[117331]:   connection = ***
Oct 09 16:45:19 compute-0 nova_compute[117331]:   connection_debug = 0
Oct 09 16:45:19 compute-0 nova_compute[117331]:   connection_parameters = 
Oct 09 16:45:19 compute-0 nova_compute[117331]:   connection_recycle_time = 3600
Oct 09 16:45:19 compute-0 nova_compute[117331]:   connection_trace = False
Oct 09 16:45:19 compute-0 nova_compute[117331]:   db_inc_retry_interval = True
Oct 09 16:45:19 compute-0 nova_compute[117331]:   db_max_retries = 20
Oct 09 16:45:19 compute-0 nova_compute[117331]:   db_max_retry_interval = 10
Oct 09 16:45:19 compute-0 nova_compute[117331]:   db_retry_interval = 1
Oct 09 16:45:19 compute-0 nova_compute[117331]:   max_overflow = 50
Oct 09 16:45:19 compute-0 nova_compute[117331]:   max_pool_size = 5
Oct 09 16:45:19 compute-0 nova_compute[117331]:   max_retries = 10
Oct 09 16:45:19 compute-0 nova_compute[117331]:   mysql_sql_mode = TRADITIONAL
Oct 09 16:45:19 compute-0 nova_compute[117331]:   mysql_wsrep_sync_wait = None
Oct 09 16:45:19 compute-0 nova_compute[117331]:   pool_timeout = None
Oct 09 16:45:19 compute-0 nova_compute[117331]:   retry_interval = 10
Oct 09 16:45:19 compute-0 nova_compute[117331]:   slave_connection = ***
Oct 09 16:45:19 compute-0 nova_compute[117331]:   sqlite_synchronous = True
Oct 09 16:45:19 compute-0 nova_compute[117331]: 
Oct 09 16:45:19 compute-0 nova_compute[117331]: barbican: 
Oct 09 16:45:19 compute-0 nova_compute[117331]:   auth_endpoint = http://localhost/identity/v3
Oct 09 16:45:19 compute-0 nova_compute[117331]:   barbican_api_version = None
Oct 09 16:45:19 compute-0 nova_compute[117331]:   barbican_endpoint = None
Oct 09 16:45:19 compute-0 nova_compute[117331]:   barbican_endpoint_type = internal
Oct 09 16:45:19 compute-0 nova_compute[117331]:   barbican_region_name = None
Oct 09 16:45:19 compute-0 nova_compute[117331]:   cafile = None
Oct 09 16:45:19 compute-0 nova_compute[117331]:   certfile = None
Oct 09 16:45:19 compute-0 nova_compute[117331]:   collect-timing = False
Oct 09 16:45:19 compute-0 nova_compute[117331]:   insecure = False
Oct 09 16:45:19 compute-0 nova_compute[117331]:   keyfile = None
Oct 09 16:45:19 compute-0 nova_compute[117331]:   number_of_retries = 60
Oct 09 16:45:19 compute-0 nova_compute[117331]:   retry_delay = 1
Oct 09 16:45:19 compute-0 nova_compute[117331]:   send_service_user_token = False
Oct 09 16:45:19 compute-0 nova_compute[117331]:   split-loggers = False
Oct 09 16:45:19 compute-0 nova_compute[117331]:   timeout = None
Oct 09 16:45:19 compute-0 nova_compute[117331]:   verify_ssl = True
Oct 09 16:45:19 compute-0 nova_compute[117331]:   verify_ssl_path = None
Oct 09 16:45:19 compute-0 nova_compute[117331]: 
Oct 09 16:45:19 compute-0 nova_compute[117331]: barbican_service_user: 
Oct 09 16:45:19 compute-0 nova_compute[117331]:   auth_section = None
Oct 09 16:45:19 compute-0 nova_compute[117331]:   auth_type = None
Oct 09 16:45:19 compute-0 nova_compute[117331]:   cafile = None
Oct 09 16:45:19 compute-0 nova_compute[117331]:   certfile = None
Oct 09 16:45:19 compute-0 nova_compute[117331]:   collect-timing = False
Oct 09 16:45:19 compute-0 nova_compute[117331]:   insecure = False
Oct 09 16:45:19 compute-0 nova_compute[117331]:   keyfile = None
Oct 09 16:45:19 compute-0 nova_compute[117331]:   split-loggers = False
Oct 09 16:45:19 compute-0 nova_compute[117331]:   timeout = None
Oct 09 16:45:19 compute-0 nova_compute[117331]: 
Oct 09 16:45:19 compute-0 nova_compute[117331]: cache: 
Oct 09 16:45:19 compute-0 nova_compute[117331]:   backend = oslo_cache.dict
Oct 09 16:45:19 compute-0 nova_compute[117331]:   backend_argument = ***
Oct 09 16:45:19 compute-0 nova_compute[117331]:   backend_expiration_time = None
Oct 09 16:45:19 compute-0 nova_compute[117331]:   config_prefix = cache.oslo
Oct 09 16:45:19 compute-0 nova_compute[117331]:   dead_timeout = 60.0
Oct 09 16:45:19 compute-0 nova_compute[117331]:   debug_cache_backend = False
Oct 09 16:45:19 compute-0 nova_compute[117331]:   enable_retry_client = False
Oct 09 16:45:19 compute-0 nova_compute[117331]:   enable_socket_keepalive = False
Oct 09 16:45:19 compute-0 nova_compute[117331]:   enabled = True
Oct 09 16:45:19 compute-0 nova_compute[117331]:   enforce_fips_mode = False
Oct 09 16:45:19 compute-0 nova_compute[117331]:   expiration_time = 600
Oct 09 16:45:19 compute-0 nova_compute[117331]:   hashclient_retry_attempts = 2
Oct 09 16:45:19 compute-0 nova_compute[117331]:   hashclient_retry_delay = 1.0
Oct 09 16:45:19 compute-0 nova_compute[117331]:   memcache_dead_retry = 300
Oct 09 16:45:19 compute-0 nova_compute[117331]:   memcache_password = ***
Oct 09 16:45:19 compute-0 nova_compute[117331]:   memcache_pool_connection_get_timeout = 10
Oct 09 16:45:19 compute-0 nova_compute[117331]:   memcache_pool_flush_on_reconnect = False
Oct 09 16:45:19 compute-0 nova_compute[117331]:   memcache_pool_maxsize = 10
Oct 09 16:45:19 compute-0 nova_compute[117331]:   memcache_pool_unused_timeout = 60
Oct 09 16:45:19 compute-0 nova_compute[117331]:   memcache_sasl_enabled = False
Oct 09 16:45:19 compute-0 nova_compute[117331]:   memcache_servers = 
Oct 09 16:45:19 compute-0 nova_compute[117331]:     localhost:11211
Oct 09 16:45:19 compute-0 nova_compute[117331]:   memcache_socket_timeout = 1.0
Oct 09 16:45:19 compute-0 nova_compute[117331]:   memcache_username = None
Oct 09 16:45:19 compute-0 nova_compute[117331]:   proxies = 
Oct 09 16:45:19 compute-0 nova_compute[117331]:   redis_db = 0
Oct 09 16:45:19 compute-0 nova_compute[117331]:   redis_password = ***
Oct 09 16:45:19 compute-0 nova_compute[117331]:   redis_sentinel_service_name = mymaster
Oct 09 16:45:19 compute-0 nova_compute[117331]:   redis_sentinels = 
Oct 09 16:45:19 compute-0 nova_compute[117331]:     localhost:26379
Oct 09 16:45:19 compute-0 nova_compute[117331]:   redis_server = localhost:6379
Oct 09 16:45:19 compute-0 nova_compute[117331]:   redis_socket_timeout = 1.0
Oct 09 16:45:19 compute-0 nova_compute[117331]:   redis_username = None
Oct 09 16:45:19 compute-0 nova_compute[117331]:   retry_attempts = 2
Oct 09 16:45:19 compute-0 nova_compute[117331]:   retry_delay = 0.0
Oct 09 16:45:19 compute-0 nova_compute[117331]:   socket_keepalive_count = 1
Oct 09 16:45:19 compute-0 nova_compute[117331]:   socket_keepalive_idle = 1
Oct 09 16:45:19 compute-0 nova_compute[117331]:   socket_keepalive_interval = 1
Oct 09 16:45:19 compute-0 nova_compute[117331]:   tls_allowed_ciphers = None
Oct 09 16:45:19 compute-0 nova_compute[117331]:   tls_cafile = None
Oct 09 16:45:19 compute-0 nova_compute[117331]:   tls_certfile = None
Oct 09 16:45:19 compute-0 nova_compute[117331]:   tls_enabled = False
Oct 09 16:45:19 compute-0 nova_compute[117331]:   tls_keyfile = None
Oct 09 16:45:19 compute-0 nova_compute[117331]: 
Oct 09 16:45:19 compute-0 nova_compute[117331]: cinder: 
Oct 09 16:45:19 compute-0 nova_compute[117331]:   auth_section = None
Oct 09 16:45:19 compute-0 nova_compute[117331]:   auth_type = password
Oct 09 16:45:19 compute-0 nova_compute[117331]:   cafile = None
Oct 09 16:45:19 compute-0 nova_compute[117331]:   catalog_info = volumev3:cinderv3:internalURL
Oct 09 16:45:19 compute-0 nova_compute[117331]:   certfile = None
Oct 09 16:45:19 compute-0 nova_compute[117331]:   collect-timing = False
Oct 09 16:45:19 compute-0 nova_compute[117331]:   cross_az_attach = True
Oct 09 16:45:19 compute-0 nova_compute[117331]:   debug = False
Oct 09 16:45:19 compute-0 nova_compute[117331]:   endpoint_template = None
Oct 09 16:45:19 compute-0 nova_compute[117331]:   http_retries = 3
Oct 09 16:45:19 compute-0 nova_compute[117331]:   insecure = False
Oct 09 16:45:19 compute-0 nova_compute[117331]:   keyfile = None
Oct 09 16:45:19 compute-0 nova_compute[117331]:   os_region_name = None
Oct 09 16:45:19 compute-0 nova_compute[117331]:   split-loggers = False
Oct 09 16:45:19 compute-0 nova_compute[117331]:   timeout = None
Oct 09 16:45:19 compute-0 nova_compute[117331]: 
Oct 09 16:45:19 compute-0 nova_compute[117331]: compute: 
Oct 09 16:45:19 compute-0 nova_compute[117331]:   consecutive_build_service_disable_threshold = 10
Oct 09 16:45:19 compute-0 nova_compute[117331]:   cpu_dedicated_set = None
Oct 09 16:45:19 compute-0 nova_compute[117331]:   cpu_shared_set = None
Oct 09 16:45:19 compute-0 nova_compute[117331]:   image_type_exclude_list = 
Oct 09 16:45:19 compute-0 nova_compute[117331]:   live_migration_wait_for_vif_plug = True
Oct 09 16:45:19 compute-0 systemd[1]: Started libvirt secret daemon.
Oct 09 16:45:19 compute-0 nova_compute[117331]:   max_concurrent_disk_ops = 0
Oct 09 16:45:19 compute-0 nova_compute[117331]:   max_disk_devices_to_attach = -1
Oct 09 16:45:19 compute-0 nova_compute[117331]:   packing_host_numa_cells_allocation_strategy = False
Oct 09 16:45:19 compute-0 nova_compute[117331]:   provider_config_location = /etc/nova/provider_config/
Oct 09 16:45:19 compute-0 nova_compute[117331]:   resource_provider_association_refresh = 300
Oct 09 16:45:19 compute-0 nova_compute[117331]:   sharing_providers_max_uuids_per_request = 200
Oct 09 16:45:19 compute-0 nova_compute[117331]:   shutdown_retry_interval = 10
Oct 09 16:45:19 compute-0 nova_compute[117331]:   vmdk_allowed_types = 
Oct 09 16:45:19 compute-0 nova_compute[117331]:     monolithicSparse
Oct 09 16:45:19 compute-0 nova_compute[117331]:     streamOptimized
Oct 09 16:45:19 compute-0 nova_compute[117331]: 
Oct 09 16:45:19 compute-0 nova_compute[117331]: conductor: 
Oct 09 16:45:19 compute-0 nova_compute[117331]:   workers = None
Oct 09 16:45:19 compute-0 nova_compute[117331]: 
Oct 09 16:45:19 compute-0 nova_compute[117331]: console: 
Oct 09 16:45:19 compute-0 nova_compute[117331]:   allowed_origins = 
Oct 09 16:45:19 compute-0 nova_compute[117331]:   ssl_ciphers = None
Oct 09 16:45:19 compute-0 nova_compute[117331]:   ssl_minimum_version = default
Oct 09 16:45:19 compute-0 nova_compute[117331]: 
Oct 09 16:45:19 compute-0 nova_compute[117331]: consoleauth: 
Oct 09 16:45:19 compute-0 nova_compute[117331]:   enforce_session_timeout = False
Oct 09 16:45:19 compute-0 nova_compute[117331]:   token_ttl = 600
Oct 09 16:45:19 compute-0 nova_compute[117331]: 
Oct 09 16:45:19 compute-0 nova_compute[117331]: cyborg: 
Oct 09 16:45:19 compute-0 nova_compute[117331]:   cafile = None
Oct 09 16:45:19 compute-0 nova_compute[117331]:   certfile = None
Oct 09 16:45:19 compute-0 nova_compute[117331]:   collect-timing = False
Oct 09 16:45:19 compute-0 nova_compute[117331]:   connect-retries = None
Oct 09 16:45:19 compute-0 nova_compute[117331]:   connect-retry-delay = None
Oct 09 16:45:19 compute-0 nova_compute[117331]:   endpoint-override = None
Oct 09 16:45:19 compute-0 nova_compute[117331]:   insecure = False
Oct 09 16:45:19 compute-0 nova_compute[117331]:   keyfile = None
Oct 09 16:45:19 compute-0 nova_compute[117331]:   max_version = None
Oct 09 16:45:19 compute-0 nova_compute[117331]:   min_version = None
Oct 09 16:45:19 compute-0 nova_compute[117331]:   region-name = None
Oct 09 16:45:19 compute-0 nova_compute[117331]:   retriable-status-codes = None
Oct 09 16:45:19 compute-0 nova_compute[117331]:   service-name = None
Oct 09 16:45:19 compute-0 nova_compute[117331]:   service-type = accelerator
Oct 09 16:45:19 compute-0 nova_compute[117331]:   split-loggers = False
Oct 09 16:45:19 compute-0 nova_compute[117331]:   status-code-retries = None
Oct 09 16:45:19 compute-0 nova_compute[117331]:   status-code-retry-delay = None
Oct 09 16:45:19 compute-0 nova_compute[117331]:   timeout = None
Oct 09 16:45:19 compute-0 nova_compute[117331]:   valid-interfaces = 
Oct 09 16:45:19 compute-0 nova_compute[117331]:     internal
Oct 09 16:45:19 compute-0 nova_compute[117331]:     public
Oct 09 16:45:19 compute-0 nova_compute[117331]:   version = None
Oct 09 16:45:19 compute-0 nova_compute[117331]: 
Oct 09 16:45:19 compute-0 nova_compute[117331]: database: 
Oct 09 16:45:19 compute-0 nova_compute[117331]:   asyncio_connection = ***
Oct 09 16:45:19 compute-0 nova_compute[117331]:   asyncio_slave_connection = ***
Oct 09 16:45:19 compute-0 nova_compute[117331]:   backend = sqlalchemy
Oct 09 16:45:19 compute-0 nova_compute[117331]:   connection = ***
Oct 09 16:45:19 compute-0 nova_compute[117331]:   connection_debug = 0
Oct 09 16:45:19 compute-0 nova_compute[117331]:   connection_parameters = 
Oct 09 16:45:19 compute-0 nova_compute[117331]:   connection_recycle_time = 3600
Oct 09 16:45:19 compute-0 nova_compute[117331]:   connection_trace = False
Oct 09 16:45:19 compute-0 nova_compute[117331]:   db_inc_retry_interval = True
Oct 09 16:45:19 compute-0 nova_compute[117331]:   db_max_retries = 20
Oct 09 16:45:19 compute-0 nova_compute[117331]:   db_max_retry_interval = 10
Oct 09 16:45:19 compute-0 nova_compute[117331]:   db_retry_interval = 1
Oct 09 16:45:19 compute-0 nova_compute[117331]:   max_overflow = 50
Oct 09 16:45:19 compute-0 nova_compute[117331]:   max_pool_size = 5
Oct 09 16:45:19 compute-0 nova_compute[117331]:   max_retries = 10
Oct 09 16:45:19 compute-0 nova_compute[117331]:   mysql_sql_mode = TRADITIONAL
Oct 09 16:45:19 compute-0 nova_compute[117331]:   mysql_wsrep_sync_wait = None
Oct 09 16:45:19 compute-0 nova_compute[117331]:   pool_timeout = None
Oct 09 16:45:19 compute-0 nova_compute[117331]:   retry_interval = 10
Oct 09 16:45:19 compute-0 nova_compute[117331]:   slave_connection = ***
Oct 09 16:45:19 compute-0 nova_compute[117331]:   sqlite_synchronous = True
Oct 09 16:45:19 compute-0 nova_compute[117331]: 
Oct 09 16:45:19 compute-0 nova_compute[117331]: default: 
Oct 09 16:45:19 compute-0 nova_compute[117331]:   allow_resize_to_same_host = False
Oct 09 16:45:19 compute-0 nova_compute[117331]:   arq_binding_timeout = 300
Oct 09 16:45:19 compute-0 nova_compute[117331]:   backdoor_port = None
Oct 09 16:45:19 compute-0 nova_compute[117331]:   backdoor_socket = None
Oct 09 16:45:19 compute-0 nova_compute[117331]:   block_device_allocate_retries = 60
Oct 09 16:45:19 compute-0 nova_compute[117331]:   block_device_allocate_retries_interval = 3
Oct 09 16:45:19 compute-0 nova_compute[117331]:   cell_worker_thread_pool_size = 5
Oct 09 16:45:19 compute-0 nova_compute[117331]:   cert = self.pem
Oct 09 16:45:19 compute-0 nova_compute[117331]:   compute_driver = libvirt.LibvirtDriver
Oct 09 16:45:19 compute-0 nova_compute[117331]:   compute_monitors = 
Oct 09 16:45:19 compute-0 nova_compute[117331]:   config-dir = 
Oct 09 16:45:19 compute-0 nova_compute[117331]:     /etc/nova/nova.conf.d
Oct 09 16:45:19 compute-0 nova_compute[117331]:   config-file = 
Oct 09 16:45:19 compute-0 nova_compute[117331]:     /etc/nova/nova-compute.conf
Oct 09 16:45:19 compute-0 nova_compute[117331]:     /etc/nova/nova.conf
Oct 09 16:45:19 compute-0 nova_compute[117331]:   config_drive_format = iso9660
Oct 09 16:45:19 compute-0 nova_compute[117331]:   config_source = 
Oct 09 16:45:19 compute-0 nova_compute[117331]:   console_host = compute-0
Oct 09 16:45:19 compute-0 nova_compute[117331]:   control_exchange = nova
Oct 09 16:45:19 compute-0 nova_compute[117331]:   cpu_allocation_ratio = None
Oct 09 16:45:19 compute-0 nova_compute[117331]:   daemon = False
Oct 09 16:45:19 compute-0 nova_compute[117331]:   debug = True
Oct 09 16:45:19 compute-0 nova_compute[117331]:   default_access_ip_network_name = None
Oct 09 16:45:19 compute-0 nova_compute[117331]:   default_availability_zone = nova
Oct 09 16:45:19 compute-0 nova_compute[117331]:   default_ephemeral_format = None
Oct 09 16:45:19 compute-0 nova_compute[117331]:   default_green_pool_size = 1000
Oct 09 16:45:19 compute-0 nova_compute[117331]:   default_log_levels = 
Oct 09 16:45:19 compute-0 nova_compute[117331]:     amqp=WARN
Oct 09 16:45:19 compute-0 nova_compute[117331]:     amqplib=WARN
Oct 09 16:45:19 compute-0 nova_compute[117331]:     boto=WARN
Oct 09 16:45:19 compute-0 nova_compute[117331]:     dogpile.core.dogpile=INFO
Oct 09 16:45:19 compute-0 nova_compute[117331]:     glanceclient=WARN
Oct 09 16:45:19 compute-0 nova_compute[117331]:     iso8601=WARN
Oct 09 16:45:19 compute-0 nova_compute[117331]:     keystoneauth=WARN
Oct 09 16:45:19 compute-0 nova_compute[117331]:     keystonemiddleware=WARN
Oct 09 16:45:19 compute-0 nova_compute[117331]:     oslo.cache=INFO
Oct 09 16:45:19 compute-0 nova_compute[117331]:     oslo.messaging=INFO
Oct 09 16:45:19 compute-0 nova_compute[117331]:     oslo.privsep.daemon=INFO
Oct 09 16:45:19 compute-0 nova_compute[117331]:     oslo_messaging=INFO
Oct 09 16:45:19 compute-0 nova_compute[117331]:     oslo_policy=INFO
Oct 09 16:45:19 compute-0 nova_compute[117331]:     qpid=WARN
Oct 09 16:45:19 compute-0 nova_compute[117331]:     requests.packages.urllib3.connectionpool=WARN
Oct 09 16:45:19 compute-0 nova_compute[117331]:     requests.packages.urllib3.util.retry=WARN
Oct 09 16:45:19 compute-0 nova_compute[117331]:     routes.middleware=WARN
Oct 09 16:45:19 compute-0 nova_compute[117331]:     sqlalchemy=WARN
Oct 09 16:45:19 compute-0 nova_compute[117331]:     stevedore=WARN
Oct 09 16:45:19 compute-0 nova_compute[117331]:     suds=INFO
Oct 09 16:45:19 compute-0 nova_compute[117331]:     taskflow=WARN
Oct 09 16:45:19 compute-0 nova_compute[117331]:     urllib3.connectionpool=WARN
Oct 09 16:45:19 compute-0 nova_compute[117331]:     urllib3.util.retry=WARN
Oct 09 16:45:19 compute-0 nova_compute[117331]:     websocket=WARN
Oct 09 16:45:19 compute-0 nova_compute[117331]:   default_schedule_zone = None
Oct 09 16:45:19 compute-0 nova_compute[117331]:   default_thread_pool_size = 10
Oct 09 16:45:19 compute-0 nova_compute[117331]:   disk_allocation_ratio = None
Oct 09 16:45:19 compute-0 nova_compute[117331]:   enable_new_services = True
Oct 09 16:45:19 compute-0 nova_compute[117331]:   executor_thread_pool_size = 64
Oct 09 16:45:19 compute-0 nova_compute[117331]:   fatal_deprecations = False
Oct 09 16:45:19 compute-0 nova_compute[117331]:   flat_injected = False
Oct 09 16:45:19 compute-0 nova_compute[117331]:   force_config_drive = True
Oct 09 16:45:19 compute-0 nova_compute[117331]:   force_raw_images = True
Oct 09 16:45:19 compute-0 nova_compute[117331]:   graceful_shutdown_timeout = 60
Oct 09 16:45:19 compute-0 nova_compute[117331]:   heal_instance_info_cache_interval = -1
Oct 09 16:45:19 compute-0 nova_compute[117331]:   host = compute-0.ctlplane.example.com
Oct 09 16:45:19 compute-0 nova_compute[117331]:   initial_cpu_allocation_ratio = 4.0
Oct 09 16:45:19 compute-0 nova_compute[117331]:   initial_disk_allocation_ratio = 0.9
Oct 09 16:45:19 compute-0 nova_compute[117331]:   initial_ram_allocation_ratio = 1.0
Oct 09 16:45:19 compute-0 nova_compute[117331]:   injected_network_template = /usr/lib/python3.12/site-packages/nova/virt/interfaces.template
Oct 09 16:45:19 compute-0 nova_compute[117331]:   instance_build_timeout = 0
Oct 09 16:45:19 compute-0 nova_compute[117331]:   instance_delete_interval = 300
Oct 09 16:45:19 compute-0 nova_compute[117331]:   instance_format = [instance: %(uuid)s] 
Oct 09 16:45:19 compute-0 nova_compute[117331]:   instance_name_template = instance-%08x
Oct 09 16:45:19 compute-0 nova_compute[117331]:   instance_usage_audit = False
Oct 09 16:45:19 compute-0 nova_compute[117331]:   instance_usage_audit_period = month
Oct 09 16:45:19 compute-0 nova_compute[117331]:   instance_uuid_format = [instance: %(uuid)s] 
Oct 09 16:45:19 compute-0 nova_compute[117331]:   instances_path = /var/lib/nova/instances
Oct 09 16:45:19 compute-0 nova_compute[117331]:   internal_service_availability_zone = internal
Oct 09 16:45:19 compute-0 nova_compute[117331]:   key = None
Oct 09 16:45:19 compute-0 nova_compute[117331]:   live_migration_retry_count = 30
Oct 09 16:45:19 compute-0 nova_compute[117331]:   log-config-append = None
Oct 09 16:45:19 compute-0 nova_compute[117331]:   log-date-format = %Y-%m-%d %H:%M:%S
Oct 09 16:45:19 compute-0 nova_compute[117331]:   log-dir = None
Oct 09 16:45:19 compute-0 nova_compute[117331]:   log-file = None
Oct 09 16:45:19 compute-0 nova_compute[117331]:   log_color = False
Oct 09 16:45:19 compute-0 nova_compute[117331]:   log_options = True
Oct 09 16:45:19 compute-0 nova_compute[117331]:   log_rotate_interval = 1
Oct 09 16:45:19 compute-0 nova_compute[117331]:   log_rotate_interval_type = days
Oct 09 16:45:19 compute-0 nova_compute[117331]:   log_rotation_type = size
Oct 09 16:45:19 compute-0 nova_compute[117331]:   logging_context_format_string = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [%(global_request_id)s %(request_id)s %(user_identity)s] %(instance)s%(message)s
Oct 09 16:45:19 compute-0 nova_compute[117331]:   logging_debug_format_suffix = %(funcName)s %(pathname)s:%(lineno)d
Oct 09 16:45:19 compute-0 nova_compute[117331]:   logging_default_format_string = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [-] %(instance)s%(message)s
Oct 09 16:45:19 compute-0 nova_compute[117331]:   logging_exception_prefix = %(asctime)s.%(msecs)03d %(process)d ERROR %(name)s %(instance)s
Oct 09 16:45:19 compute-0 nova_compute[117331]:   logging_user_identity_format = %(user)s %(project)s %(domain)s %(system_scope)s %(user_domain)s %(project_domain)s
Oct 09 16:45:19 compute-0 nova_compute[117331]:   long_rpc_timeout = 1800
Oct 09 16:45:19 compute-0 nova_compute[117331]:   max_concurrent_builds = 10
Oct 09 16:45:19 compute-0 nova_compute[117331]:   max_concurrent_live_migrations = 1
Oct 09 16:45:19 compute-0 nova_compute[117331]:   max_concurrent_snapshots = 5
Oct 09 16:45:19 compute-0 nova_compute[117331]:   max_local_block_devices = 3
Oct 09 16:45:19 compute-0 nova_compute[117331]:   max_logfile_count = 1
Oct 09 16:45:19 compute-0 nova_compute[117331]:   max_logfile_size_mb = 20
Oct 09 16:45:19 compute-0 nova_compute[117331]:   maximum_instance_delete_attempts = 5
Oct 09 16:45:19 compute-0 nova_compute[117331]:   migrate_max_retries = -1
Oct 09 16:45:19 compute-0 nova_compute[117331]:   mkisofs_cmd = /usr/bin/mkisofs
Oct 09 16:45:19 compute-0 nova_compute[117331]:   my_block_storage_ip = 192.168.122.100
Oct 09 16:45:19 compute-0 nova_compute[117331]:   my_ip = 192.168.122.100
Oct 09 16:45:19 compute-0 nova_compute[117331]:   my_shared_fs_storage_ip = 192.168.122.100
Oct 09 16:45:19 compute-0 nova_compute[117331]:   network_allocate_retries = 0
Oct 09 16:45:19 compute-0 nova_compute[117331]:   non_inheritable_image_properties = 
Oct 09 16:45:19 compute-0 nova_compute[117331]:     bittorrent
Oct 09 16:45:19 compute-0 nova_compute[117331]:     cache_in_nova
Oct 09 16:45:19 compute-0 nova_compute[117331]:   osapi_compute_unique_server_name_scope = 
Oct 09 16:45:19 compute-0 nova_compute[117331]:   password_length = 12
Oct 09 16:45:19 compute-0 nova_compute[117331]:   periodic_enable = True
Oct 09 16:45:19 compute-0 nova_compute[117331]:   periodic_fuzzy_delay = 60
Oct 09 16:45:19 compute-0 nova_compute[117331]:   pointer_model = usbtablet
Oct 09 16:45:19 compute-0 nova_compute[117331]:   preallocate_images = none
Oct 09 16:45:19 compute-0 nova_compute[117331]:   publish_errors = False
Oct 09 16:45:19 compute-0 nova_compute[117331]:   pybasedir = /usr/lib/python3.12/site-packages
Oct 09 16:45:19 compute-0 nova_compute[117331]:   ram_allocation_ratio = None
Oct 09 16:45:19 compute-0 nova_compute[117331]:   rate_limit_burst = 0
Oct 09 16:45:19 compute-0 nova_compute[117331]:   rate_limit_except_level = CRITICAL
Oct 09 16:45:19 compute-0 nova_compute[117331]:   rate_limit_interval = 0
Oct 09 16:45:19 compute-0 nova_compute[117331]:   reboot_timeout = 0
Oct 09 16:45:19 compute-0 nova_compute[117331]:   reclaim_instance_interval = 0
Oct 09 16:45:19 compute-0 nova_compute[117331]:   record = None
Oct 09 16:45:19 compute-0 nova_compute[117331]:   reimage_timeout_per_gb = 20
Oct 09 16:45:19 compute-0 nova_compute[117331]:   report_interval = 10
Oct 09 16:45:19 compute-0 nova_compute[117331]:   rescue_timeout = 0
Oct 09 16:45:19 compute-0 nova_compute[117331]:   reserved_host_cpus = 0
Oct 09 16:45:19 compute-0 nova_compute[117331]:   reserved_host_disk_mb = 0
Oct 09 16:45:19 compute-0 nova_compute[117331]:   reserved_host_memory_mb = 512
Oct 09 16:45:19 compute-0 nova_compute[117331]:   reserved_huge_pages = None
Oct 09 16:45:19 compute-0 nova_compute[117331]:   resize_confirm_window = 0
Oct 09 16:45:19 compute-0 nova_compute[117331]:   resize_fs_using_block_device = False
Oct 09 16:45:19 compute-0 nova_compute[117331]:   resume_guests_state_on_host_boot = False
Oct 09 16:45:19 compute-0 nova_compute[117331]:   rootwrap_config = /etc/nova/rootwrap.conf
Oct 09 16:45:19 compute-0 nova_compute[117331]:   rpc_ping_enabled = False
Oct 09 16:45:19 compute-0 nova_compute[117331]:   rpc_response_timeout = 60
Oct 09 16:45:19 compute-0 nova_compute[117331]:   run_external_periodic_tasks = True
Oct 09 16:45:19 compute-0 nova_compute[117331]:   running_deleted_instance_action = reap
Oct 09 16:45:19 compute-0 nova_compute[117331]:   running_deleted_instance_poll_interval = 1800
Oct 09 16:45:19 compute-0 nova_compute[117331]:   running_deleted_instance_timeout = 0
Oct 09 16:45:19 compute-0 nova_compute[117331]:   scheduler_instance_sync_interval = 120
Oct 09 16:45:19 compute-0 nova_compute[117331]:   service_down_time = 60
Oct 09 16:45:19 compute-0 nova_compute[117331]:   servicegroup_driver = db
Oct 09 16:45:19 compute-0 nova_compute[117331]:   shell_completion = None
Oct 09 16:45:19 compute-0 nova_compute[117331]:   shelved_offload_time = 0
Oct 09 16:45:19 compute-0 nova_compute[117331]:   shelved_poll_interval = 3600
Oct 09 16:45:19 compute-0 nova_compute[117331]:   shutdown_timeout = 60
Oct 09 16:45:19 compute-0 nova_compute[117331]:   source_is_ipv6 = False
Oct 09 16:45:19 compute-0 nova_compute[117331]:   ssl_only = False
Oct 09 16:45:19 compute-0 nova_compute[117331]:   state_path = /var/lib/nova
Oct 09 16:45:19 compute-0 nova_compute[117331]:   sync_power_state_interval = 600
Oct 09 16:45:19 compute-0 nova_compute[117331]:   sync_power_state_pool_size = 1000
Oct 09 16:45:19 compute-0 nova_compute[117331]:   syslog-log-facility = LOG_USER
Oct 09 16:45:19 compute-0 nova_compute[117331]:   tempdir = None
Oct 09 16:45:19 compute-0 nova_compute[117331]:   thread_pool_statistic_period = -1
Oct 09 16:45:19 compute-0 nova_compute[117331]:   timeout_nbd = 10
Oct 09 16:45:19 compute-0 nova_compute[117331]:   transport_url = ***
Oct 09 16:45:19 compute-0 nova_compute[117331]:   update_resources_interval = 0
Oct 09 16:45:19 compute-0 nova_compute[117331]:   use-journal = False
Oct 09 16:45:19 compute-0 nova_compute[117331]:   use-json = False
Oct 09 16:45:19 compute-0 nova_compute[117331]:   use-syslog = False
Oct 09 16:45:19 compute-0 nova_compute[117331]:   use_cow_images = True
Oct 09 16:45:19 compute-0 nova_compute[117331]:   use_rootwrap_daemon = False
Oct 09 16:45:19 compute-0 nova_compute[117331]:   use_stderr = False
Oct 09 16:45:19 compute-0 nova_compute[117331]:   vcpu_pin_set = None
Oct 09 16:45:19 compute-0 nova_compute[117331]:   vif_plugging_is_fatal = True
Oct 09 16:45:19 compute-0 nova_compute[117331]:   vif_plugging_timeout = 300
Oct 09 16:45:19 compute-0 nova_compute[117331]:   virt_mkfs = 
Oct 09 16:45:19 compute-0 nova_compute[117331]:   volume_usage_poll_interval = 0
Oct 09 16:45:19 compute-0 nova_compute[117331]:   watch-log-file = False
Oct 09 16:45:19 compute-0 nova_compute[117331]:   web = /usr/share/spice-html5
Oct 09 16:45:19 compute-0 nova_compute[117331]: 
Oct 09 16:45:19 compute-0 nova_compute[117331]: devices: 
Oct 09 16:45:19 compute-0 nova_compute[117331]:   enabled_mdev_types = 
Oct 09 16:45:19 compute-0 nova_compute[117331]: 
Oct 09 16:45:19 compute-0 nova_compute[117331]: ephemeral_storage_encryption: 
Oct 09 16:45:19 compute-0 nova_compute[117331]:   cipher = aes-xts-plain64
Oct 09 16:45:19 compute-0 nova_compute[117331]:   default_format = luks
Oct 09 16:45:19 compute-0 nova_compute[117331]:   enabled = False
Oct 09 16:45:19 compute-0 nova_compute[117331]:   key_size = 512
Oct 09 16:45:19 compute-0 nova_compute[117331]: 
Oct 09 16:45:19 compute-0 nova_compute[117331]: filter_scheduler: 
Oct 09 16:45:19 compute-0 nova_compute[117331]:   aggregate_image_properties_isolation_namespace = None
Oct 09 16:45:19 compute-0 nova_compute[117331]:   aggregate_image_properties_isolation_separator = .
Oct 09 16:45:19 compute-0 nova_compute[117331]:   available_filters = 
Oct 09 16:45:19 compute-0 nova_compute[117331]:     nova.scheduler.filters.all_filters
Oct 09 16:45:19 compute-0 nova_compute[117331]:   build_failure_weight_multiplier = 1000000.0
Oct 09 16:45:19 compute-0 nova_compute[117331]:   cpu_weight_multiplier = 1.0
Oct 09 16:45:19 compute-0 nova_compute[117331]:   cross_cell_move_weight_multiplier = 1000000.0
Oct 09 16:45:19 compute-0 nova_compute[117331]:   disk_weight_multiplier = 1.0
Oct 09 16:45:19 compute-0 nova_compute[117331]:   enabled_filters = 
Oct 09 16:45:19 compute-0 nova_compute[117331]:     ComputeCapabilitiesFilter
Oct 09 16:45:19 compute-0 nova_compute[117331]:     ComputeFilter
Oct 09 16:45:19 compute-0 nova_compute[117331]:     ImagePropertiesFilter
Oct 09 16:45:19 compute-0 nova_compute[117331]:     ServerGroupAffinityFilter
Oct 09 16:45:19 compute-0 nova_compute[117331]:     ServerGroupAntiAffinityFilter
Oct 09 16:45:19 compute-0 nova_compute[117331]:   host_subset_size = 1
Oct 09 16:45:19 compute-0 nova_compute[117331]:   hypervisor_version_weight_multiplier = 1.0
Oct 09 16:45:19 compute-0 nova_compute[117331]:   image_properties_default_architecture = None
Oct 09 16:45:19 compute-0 nova_compute[117331]:   image_props_weight_multiplier = 0.0
Oct 09 16:45:19 compute-0 nova_compute[117331]:   image_props_weight_setting = 
Oct 09 16:45:19 compute-0 nova_compute[117331]:   io_ops_weight_multiplier = -1.0
Oct 09 16:45:19 compute-0 nova_compute[117331]:   isolated_hosts = 
Oct 09 16:45:19 compute-0 nova_compute[117331]:   isolated_images = 
Oct 09 16:45:19 compute-0 nova_compute[117331]:   max_instances_per_host = 50
Oct 09 16:45:19 compute-0 nova_compute[117331]:   max_io_ops_per_host = 8
Oct 09 16:45:19 compute-0 nova_compute[117331]:   num_instances_weight_multiplier = 0.0
Oct 09 16:45:19 compute-0 nova_compute[117331]:   pci_in_placement = False
Oct 09 16:45:19 compute-0 nova_compute[117331]:   pci_weight_multiplier = 1.0
Oct 09 16:45:19 compute-0 nova_compute[117331]:   ram_weight_multiplier = 1.0
Oct 09 16:45:19 compute-0 nova_compute[117331]:   restrict_isolated_hosts_to_isolated_images = True
Oct 09 16:45:19 compute-0 nova_compute[117331]:   shuffle_best_same_weighed_hosts = False
Oct 09 16:45:19 compute-0 nova_compute[117331]:   soft_affinity_weight_multiplier = 1.0
Oct 09 16:45:19 compute-0 nova_compute[117331]:   soft_anti_affinity_weight_multiplier = 1.0
Oct 09 16:45:19 compute-0 nova_compute[117331]:   track_instance_changes = True
Oct 09 16:45:19 compute-0 nova_compute[117331]:   weight_classes = 
Oct 09 16:45:19 compute-0 nova_compute[117331]:     nova.scheduler.weights.all_weighers
Oct 09 16:45:19 compute-0 nova_compute[117331]: 
Oct 09 16:45:19 compute-0 nova_compute[117331]: glance: 
Oct 09 16:45:19 compute-0 nova_compute[117331]:   api_servers = None
Oct 09 16:45:19 compute-0 nova_compute[117331]:   cafile = None
Oct 09 16:45:19 compute-0 nova_compute[117331]:   certfile = None
Oct 09 16:45:19 compute-0 nova_compute[117331]:   collect-timing = False
Oct 09 16:45:19 compute-0 nova_compute[117331]:   connect-retries = None
Oct 09 16:45:19 compute-0 nova_compute[117331]:   connect-retry-delay = None
Oct 09 16:45:19 compute-0 nova_compute[117331]:   debug = False
Oct 09 16:45:19 compute-0 nova_compute[117331]:   default_trusted_certificate_ids = 
Oct 09 16:45:19 compute-0 nova_compute[117331]:   enable_certificate_validation = False
Oct 09 16:45:19 compute-0 nova_compute[117331]:   enable_rbd_download = False
Oct 09 16:45:19 compute-0 nova_compute[117331]:   endpoint-override = None
Oct 09 16:45:19 compute-0 nova_compute[117331]:   insecure = False
Oct 09 16:45:19 compute-0 nova_compute[117331]:   keyfile = None
Oct 09 16:45:19 compute-0 nova_compute[117331]:   max_version = None
Oct 09 16:45:19 compute-0 nova_compute[117331]:   min_version = None
Oct 09 16:45:19 compute-0 nova_compute[117331]:   num_retries = 3
Oct 09 16:45:19 compute-0 nova_compute[117331]:   rbd_ceph_conf = 
Oct 09 16:45:19 compute-0 nova_compute[117331]:   rbd_connect_timeout = 5
Oct 09 16:45:19 compute-0 nova_compute[117331]:   rbd_pool = 
Oct 09 16:45:19 compute-0 nova_compute[117331]:   rbd_user = 
Oct 09 16:45:19 compute-0 nova_compute[117331]:   region-name = regionOne
Oct 09 16:45:19 compute-0 nova_compute[117331]:   retriable-status-codes = None
Oct 09 16:45:19 compute-0 nova_compute[117331]:   service-name = None
Oct 09 16:45:19 compute-0 nova_compute[117331]:   service-type = image
Oct 09 16:45:19 compute-0 nova_compute[117331]:   split-loggers = False
Oct 09 16:45:19 compute-0 nova_compute[117331]:   status-code-retries = None
Oct 09 16:45:19 compute-0 nova_compute[117331]:   status-code-retry-delay = None
Oct 09 16:45:19 compute-0 nova_compute[117331]:   timeout = None
Oct 09 16:45:19 compute-0 nova_compute[117331]:   valid-interfaces = 
Oct 09 16:45:19 compute-0 nova_compute[117331]:     internal
Oct 09 16:45:19 compute-0 nova_compute[117331]:   verify_glance_signatures = False
Oct 09 16:45:19 compute-0 nova_compute[117331]:   version = None
Oct 09 16:45:19 compute-0 nova_compute[117331]: 
Oct 09 16:45:19 compute-0 nova_compute[117331]: guestfs: 
Oct 09 16:45:19 compute-0 nova_compute[117331]:   debug = False
Oct 09 16:45:19 compute-0 nova_compute[117331]: 
Oct 09 16:45:19 compute-0 nova_compute[117331]: image_cache: 
Oct 09 16:45:19 compute-0 nova_compute[117331]:   manager_interval = 2400
Oct 09 16:45:19 compute-0 nova_compute[117331]:   precache_concurrency = 1
Oct 09 16:45:19 compute-0 nova_compute[117331]:   remove_unused_base_images = True
Oct 09 16:45:19 compute-0 nova_compute[117331]:   remove_unused_original_minimum_age_seconds = 86400
Oct 09 16:45:19 compute-0 nova_compute[117331]:   remove_unused_resized_minimum_age_seconds = 3600
Oct 09 16:45:19 compute-0 nova_compute[117331]:   subdirectory_name = _base
Oct 09 16:45:19 compute-0 nova_compute[117331]: 2025-10-09 16:45:19.418 2 DEBUG nova.virt.libvirt.guest [None req-15568c02-2290-4f5f-b609-67d94bc90b39 edda4b76bea746b7aec969bc10f68f14 dbc5b562dfbf46d888285482e4fe52bb - - default default] attach device xml: <disk type="file" device="disk">
Oct 09 16:45:19 compute-0 nova_compute[117331]:   <driver name="qemu" type="raw" cache="none" io="native"/>
Oct 09 16:45:19 compute-0 nova_compute[117331]:   <alias name="ua-e877a3bb-b53d-4cbb-b2a6-4c88f284da4f"/>
Oct 09 16:45:19 compute-0 nova_compute[117331]:   <source file="/var/lib/nova/mnt/56491fbd7607c943b28b644cf81cf3ff/volume-e877a3bb-b53d-4cbb-b2a6-4c88f284da4f"/>
Oct 09 16:45:19 compute-0 nova_compute[117331]:   <target dev="vdb" bus="virtio"/>
Oct 09 16:45:19 compute-0 nova_compute[117331]:   <serial>e877a3bb-b53d-4cbb-b2a6-4c88f284da4f</serial>
Oct 09 16:45:19 compute-0 nova_compute[117331]: </disk>
Oct 09 16:45:19 compute-0 nova_compute[117331]:  attach_device /usr/lib/python3.12/site-packages/nova/virt/libvirt/guest.py:336
Oct 09 16:45:19 compute-0 nova_compute[117331]: 
Oct 09 16:45:19 compute-0 nova_compute[117331]: ironic: 
Oct 09 16:45:19 compute-0 nova_compute[117331]:   api_max_retries = 60
Oct 09 16:45:19 compute-0 nova_compute[117331]:   api_retry_interval = 2
Oct 09 16:45:19 compute-0 nova_compute[117331]:   auth_section = None
Oct 09 16:45:19 compute-0 nova_compute[117331]:   auth_type = None
Oct 09 16:45:19 compute-0 nova_compute[117331]:   cafile = None
Oct 09 16:45:19 compute-0 nova_compute[117331]:   certfile = None
Oct 09 16:45:19 compute-0 nova_compute[117331]:   collect-timing = False
Oct 09 16:45:19 compute-0 nova_compute[117331]:   conductor_group = None
Oct 09 16:45:19 compute-0 nova_compute[117331]:   connect-retries = None
Oct 09 16:45:19 compute-0 nova_compute[117331]:   connect-retry-delay = None
Oct 09 16:45:19 compute-0 nova_compute[117331]:   endpoint-override = None
Oct 09 16:45:19 compute-0 nova_compute[117331]:   insecure = False
Oct 09 16:45:19 compute-0 nova_compute[117331]:   keyfile = None
Oct 09 16:45:19 compute-0 nova_compute[117331]:   max_version = None
Oct 09 16:45:19 compute-0 nova_compute[117331]:   min_version = None
Oct 09 16:45:19 compute-0 nova_compute[117331]:   peer_list = 
Oct 09 16:45:19 compute-0 nova_compute[117331]:   region-name = None
Oct 09 16:45:19 compute-0 nova_compute[117331]:   retriable-status-codes = None
Oct 09 16:45:19 compute-0 nova_compute[117331]:   serial_console_state_timeout = 10
Oct 09 16:45:19 compute-0 nova_compute[117331]:   service-name = None
Oct 09 16:45:19 compute-0 nova_compute[117331]:   service-type = baremetal
Oct 09 16:45:19 compute-0 nova_compute[117331]:   shard = None
Oct 09 16:45:19 compute-0 nova_compute[117331]:   split-loggers = False
Oct 09 16:45:19 compute-0 nova_compute[117331]:   status-code-retries = None
Oct 09 16:45:19 compute-0 nova_compute[117331]:   status-code-retry-delay = None
Oct 09 16:45:19 compute-0 nova_compute[117331]:   timeout = None
Oct 09 16:45:19 compute-0 nova_compute[117331]:   valid-interfaces = 
Oct 09 16:45:19 compute-0 nova_compute[117331]:     internal
Oct 09 16:45:19 compute-0 nova_compute[117331]:     public
Oct 09 16:45:19 compute-0 nova_compute[117331]:   version = None
Oct 09 16:45:19 compute-0 nova_compute[117331]: 
Oct 09 16:45:19 compute-0 nova_compute[117331]: key_manager: 
Oct 09 16:45:19 compute-0 nova_compute[117331]:   backend = barbican
Oct 09 16:45:19 compute-0 nova_compute[117331]:   fixed_key = ***
Oct 09 16:45:19 compute-0 nova_compute[117331]: 
Oct 09 16:45:19 compute-0 nova_compute[117331]: keystone: 
Oct 09 16:45:19 compute-0 nova_compute[117331]:   cafile = None
Oct 09 16:45:19 compute-0 nova_compute[117331]:   certfile = None
Oct 09 16:45:19 compute-0 nova_compute[117331]:   collect-timing = False
Oct 09 16:45:19 compute-0 nova_compute[117331]:   connect-retries = None
Oct 09 16:45:19 compute-0 nova_compute[117331]:   connect-retry-delay = None
Oct 09 16:45:19 compute-0 nova_compute[117331]:   endpoint-override = None
Oct 09 16:45:19 compute-0 nova_compute[117331]:   insecure = False
Oct 09 16:45:19 compute-0 nova_compute[117331]:   keyfile = None
Oct 09 16:45:19 compute-0 nova_compute[117331]:   max_version = None
Oct 09 16:45:19 compute-0 nova_compute[117331]:   min_version = None
Oct 09 16:45:19 compute-0 nova_compute[117331]:   region-name = None
Oct 09 16:45:19 compute-0 nova_compute[117331]:   retriable-status-codes = None
Oct 09 16:45:19 compute-0 nova_compute[117331]:   service-name = None
Oct 09 16:45:19 compute-0 nova_compute[117331]:   service-type = identity
Oct 09 16:45:19 compute-0 nova_compute[117331]:   split-loggers = False
Oct 09 16:45:19 compute-0 nova_compute[117331]:   status-code-retries = None
Oct 09 16:45:19 compute-0 nova_compute[117331]:   status-code-retry-delay = None
Oct 09 16:45:19 compute-0 nova_compute[117331]:   timeout = None
Oct 09 16:45:19 compute-0 nova_compute[117331]:   valid-interfaces = 
Oct 09 16:45:19 compute-0 nova_compute[117331]:     internal
Oct 09 16:45:19 compute-0 nova_compute[117331]:     public
Oct 09 16:45:19 compute-0 nova_compute[117331]:   version = None
Oct 09 16:45:19 compute-0 nova_compute[117331]: 
Oct 09 16:45:19 compute-0 nova_compute[117331]: libvirt: 
Oct 09 16:45:19 compute-0 nova_compute[117331]:   ceph_mount_options = None
Oct 09 16:45:19 compute-0 nova_compute[117331]:   ceph_mount_point_base = /var/lib/nova/mnt
Oct 09 16:45:19 compute-0 nova_compute[117331]:   connection_uri = 
Oct 09 16:45:19 compute-0 nova_compute[117331]:   cpu_mode = host-model
Oct 09 16:45:19 compute-0 nova_compute[117331]:   cpu_model_extra_flags = 
Oct 09 16:45:19 compute-0 nova_compute[117331]:   cpu_models = 
Oct 09 16:45:19 compute-0 nova_compute[117331]:   cpu_power_governor_high = performance
Oct 09 16:45:19 compute-0 nova_compute[117331]:   cpu_power_governor_low = powersave
Oct 09 16:45:19 compute-0 nova_compute[117331]:   cpu_power_management = False
Oct 09 16:45:19 compute-0 nova_compute[117331]:   cpu_power_management_strategy = cpu_state
Oct 09 16:45:19 compute-0 nova_compute[117331]:   device_detach_attempts = 8
Oct 09 16:45:19 compute-0 nova_compute[117331]:   device_detach_timeout = 20
Oct 09 16:45:19 compute-0 nova_compute[117331]:   disk_cachemodes = 
Oct 09 16:45:19 compute-0 nova_compute[117331]:   disk_prefix = None
Oct 09 16:45:19 compute-0 nova_compute[117331]:   enabled_perf_events = 
Oct 09 16:45:19 compute-0 nova_compute[117331]:   file_backed_memory = 0
Oct 09 16:45:19 compute-0 nova_compute[117331]:   gid_maps = 
Oct 09 16:45:19 compute-0 nova_compute[117331]:   hw_disk_discard = None
Oct 09 16:45:19 compute-0 nova_compute[117331]:   hw_machine_type = 
Oct 09 16:45:19 compute-0 nova_compute[117331]:     x86_64=q35
Oct 09 16:45:19 compute-0 nova_compute[117331]:   images_rbd_ceph_conf = 
Oct 09 16:45:19 compute-0 nova_compute[117331]:   images_rbd_glance_copy_poll_interval = 15
Oct 09 16:45:19 compute-0 nova_compute[117331]:   images_rbd_glance_copy_timeout = 600
Oct 09 16:45:19 compute-0 nova_compute[117331]:   images_rbd_glance_store_name = 
Oct 09 16:45:19 compute-0 nova_compute[117331]:   images_rbd_pool = rbd
Oct 09 16:45:19 compute-0 nova_compute[117331]:   images_type = qcow2
Oct 09 16:45:19 compute-0 nova_compute[117331]:   images_volume_group = None
Oct 09 16:45:19 compute-0 nova_compute[117331]:   inject_key = False
Oct 09 16:45:19 compute-0 nova_compute[117331]:   inject_partition = -2
Oct 09 16:45:19 compute-0 nova_compute[117331]:   inject_password = False
Oct 09 16:45:19 compute-0 nova_compute[117331]:   iscsi_iface = None
Oct 09 16:45:19 compute-0 nova_compute[117331]:   iser_use_multipath = False
Oct 09 16:45:19 compute-0 nova_compute[117331]:   live_migration_bandwidth = 0
Oct 09 16:45:19 compute-0 nova_compute[117331]:   live_migration_completion_timeout = 800
Oct 09 16:45:19 compute-0 nova_compute[117331]:   live_migration_downtime = 500
Oct 09 16:45:19 compute-0 nova_compute[117331]:   live_migration_downtime_delay = 75
Oct 09 16:45:19 compute-0 nova_compute[117331]:   live_migration_downtime_steps = 10
Oct 09 16:45:19 compute-0 nova_compute[117331]:   live_migration_inbound_addr = None
Oct 09 16:45:19 compute-0 nova_compute[117331]:   live_migration_permit_auto_converge = True
Oct 09 16:45:19 compute-0 nova_compute[117331]:   live_migration_permit_post_copy = True
Oct 09 16:45:19 compute-0 nova_compute[117331]:   live_migration_scheme = None
Oct 09 16:45:19 compute-0 nova_compute[117331]:   live_migration_timeout_action = force_complete
Oct 09 16:45:19 compute-0 nova_compute[117331]:   live_migration_tunnelled = False
Oct 09 16:45:19 compute-0 nova_compute[117331]:   live_migration_uri = qemu+tls://%s/system
Oct 09 16:45:19 compute-0 nova_compute[117331]:   live_migration_with_native_tls = True
Oct 09 16:45:19 compute-0 nova_compute[117331]:   max_queues = None
Oct 09 16:45:19 compute-0 nova_compute[117331]:   mem_stats_period_seconds = 10
Oct 09 16:45:19 compute-0 nova_compute[117331]:   migration_inbound_addr = 192.168.122.100
Oct 09 16:45:19 compute-0 nova_compute[117331]:   nfs_mount_options = None
Oct 09 16:45:19 compute-0 nova_compute[117331]:   nfs_mount_point_base = /var/lib/nova/mnt
Oct 09 16:45:19 compute-0 nova_compute[117331]:   num_aoe_discover_tries = 3
Oct 09 16:45:19 compute-0 nova_compute[117331]:   num_iser_scan_tries = 5
Oct 09 16:45:19 compute-0 nova_compute[117331]:   num_memory_encrypted_guests = None
Oct 09 16:45:19 compute-0 nova_compute[117331]:   num_nvme_discover_tries = 5
Oct 09 16:45:19 compute-0 nova_compute[117331]:   num_pcie_ports = 24
Oct 09 16:45:19 compute-0 nova_compute[117331]:   num_volume_scan_tries = 5
Oct 09 16:45:19 compute-0 nova_compute[117331]:   pmem_namespaces = 
Oct 09 16:45:19 compute-0 nova_compute[117331]:   quobyte_client_cfg = None
Oct 09 16:45:19 compute-0 nova_compute[117331]:   quobyte_mount_point_base = /var/lib/nova/mnt
Oct 09 16:45:19 compute-0 nova_compute[117331]:   rbd_connect_timeout = 5
Oct 09 16:45:19 compute-0 nova_compute[117331]:   rbd_destroy_volume_retries = 12
Oct 09 16:45:19 compute-0 nova_compute[117331]:   rbd_destroy_volume_retry_interval = 5
Oct 09 16:45:19 compute-0 nova_compute[117331]:   rbd_secret_uuid = None
Oct 09 16:45:19 compute-0 nova_compute[117331]:   rbd_user = None
Oct 09 16:45:19 compute-0 nova_compute[117331]:   realtime_scheduler_priority = 1
Oct 09 16:45:19 compute-0 nova_compute[117331]:   remote_filesystem_transport = ssh
Oct 09 16:45:19 compute-0 nova_compute[117331]:   rescue_image_id = None
Oct 09 16:45:19 compute-0 nova_compute[117331]:   rescue_kernel_id = None
Oct 09 16:45:19 compute-0 nova_compute[117331]:   rescue_ramdisk_id = None
Oct 09 16:45:19 compute-0 nova_compute[117331]:   rng_dev_path = /dev/urandom
Oct 09 16:45:19 compute-0 nova_compute[117331]:   rx_queue_size = 512
Oct 09 16:45:19 compute-0 nova_compute[117331]:   smbfs_mount_options = 
Oct 09 16:45:19 compute-0 nova_compute[117331]:   smbfs_mount_point_base = /var/lib/nova/mnt
Oct 09 16:45:19 compute-0 nova_compute[117331]:   snapshot_compression = False
Oct 09 16:45:19 compute-0 nova_compute[117331]:   snapshot_image_format = None
Oct 09 16:45:19 compute-0 nova_compute[117331]:   snapshots_directory = /var/lib/nova/instances/snapshots
Oct 09 16:45:19 compute-0 nova_compute[117331]:   sparse_logical_volumes = False
Oct 09 16:45:19 compute-0 nova_compute[117331]:   swtpm_enabled = True
Oct 09 16:45:19 compute-0 nova_compute[117331]:   swtpm_group = tss
Oct 09 16:45:19 compute-0 nova_compute[117331]:   swtpm_user = tss
Oct 09 16:45:19 compute-0 nova_compute[117331]:   sysinfo_serial = unique
Oct 09 16:45:19 compute-0 nova_compute[117331]:   tb_cache_size = None
Oct 09 16:45:19 compute-0 nova_compute[117331]:   tx_queue_size = 512
Oct 09 16:45:19 compute-0 nova_compute[117331]:   uid_maps = 
Oct 09 16:45:19 compute-0 nova_compute[117331]:   use_virtio_for_bridges = True
Oct 09 16:45:19 compute-0 nova_compute[117331]:   virt_type = kvm
Oct 09 16:45:19 compute-0 nova_compute[117331]:   volume_clear = zero
Oct 09 16:45:19 compute-0 nova_compute[117331]:   volume_clear_size = 0
Oct 09 16:45:19 compute-0 nova_compute[117331]:   volume_enforce_multipath = False
Oct 09 16:45:19 compute-0 nova_compute[117331]:   volume_use_multipath = True
Oct 09 16:45:19 compute-0 nova_compute[117331]:   vzstorage_cache_path = None
Oct 09 16:45:19 compute-0 nova_compute[117331]:   vzstorage_log_path = /var/log/vstorage/%(cluster_name)s/nova.log.gz
Oct 09 16:45:19 compute-0 nova_compute[117331]:   vzstorage_mount_group = qemu
Oct 09 16:45:19 compute-0 nova_compute[117331]:   vzstorage_mount_opts = 
Oct 09 16:45:19 compute-0 nova_compute[117331]:   vzstorage_mount_perms = 0770
Oct 09 16:45:19 compute-0 nova_compute[117331]:   vzstorage_mount_point_base = /var/lib/nova/mnt
Oct 09 16:45:19 compute-0 nova_compute[117331]:   vzstorage_mount_user = stack
Oct 09 16:45:19 compute-0 nova_compute[117331]:   wait_soft_reboot_seconds = 120
Oct 09 16:45:19 compute-0 nova_compute[117331]: 
Oct 09 16:45:19 compute-0 nova_compute[117331]: manila: 
Oct 09 16:45:19 compute-0 nova_compute[117331]:   auth_section = None
Oct 09 16:45:19 compute-0 nova_compute[117331]:   auth_type = None
Oct 09 16:45:19 compute-0 nova_compute[117331]:   cafile = None
Oct 09 16:45:19 compute-0 nova_compute[117331]:   certfile = None
Oct 09 16:45:19 compute-0 nova_compute[117331]:   collect-timing = False
Oct 09 16:45:19 compute-0 nova_compute[117331]:   connect-retries = None
Oct 09 16:45:19 compute-0 nova_compute[117331]:   connect-retry-delay = None
Oct 09 16:45:19 compute-0 nova_compute[117331]:   endpoint-override = None
Oct 09 16:45:19 compute-0 nova_compute[117331]:   insecure = False
Oct 09 16:45:19 compute-0 nova_compute[117331]:   keyfile = None
Oct 09 16:45:19 compute-0 nova_compute[117331]:   max_version = None
Oct 09 16:45:19 compute-0 nova_compute[117331]:   min_version = None
Oct 09 16:45:19 compute-0 nova_compute[117331]:   region-name = None
Oct 09 16:45:19 compute-0 nova_compute[117331]:   retriable-status-codes = None
Oct 09 16:45:19 compute-0 nova_compute[117331]:   service-name = None
Oct 09 16:45:19 compute-0 nova_compute[117331]:   service-type = shared-file-system
Oct 09 16:45:19 compute-0 nova_compute[117331]:   share_apply_policy_timeout = 10
Oct 09 16:45:19 compute-0 nova_compute[117331]:   split-loggers = False
Oct 09 16:45:19 compute-0 nova_compute[117331]:   status-code-retries = None
Oct 09 16:45:19 compute-0 nova_compute[117331]:   status-code-retry-delay = None
Oct 09 16:45:19 compute-0 nova_compute[117331]:   timeout = None
Oct 09 16:45:19 compute-0 nova_compute[117331]:   valid-interfaces = 
Oct 09 16:45:19 compute-0 nova_compute[117331]:     internal
Oct 09 16:45:19 compute-0 nova_compute[117331]:     public
Oct 09 16:45:19 compute-0 nova_compute[117331]:   version = None
Oct 09 16:45:19 compute-0 nova_compute[117331]: 
Oct 09 16:45:19 compute-0 nova_compute[117331]: metrics: 
Oct 09 16:45:19 compute-0 nova_compute[117331]:   required = True
Oct 09 16:45:19 compute-0 nova_compute[117331]:   weight_multiplier = 1.0
Oct 09 16:45:19 compute-0 nova_compute[117331]:   weight_of_unavailable = -10000.0
Oct 09 16:45:19 compute-0 nova_compute[117331]:   weight_setting = 
Oct 09 16:45:19 compute-0 nova_compute[117331]: 
Oct 09 16:45:19 compute-0 nova_compute[117331]: mks: 
Oct 09 16:45:19 compute-0 nova_compute[117331]:   enabled = False
Oct 09 16:45:19 compute-0 nova_compute[117331]:   mksproxy_base_url = http://127.0.0.1:6090/
Oct 09 16:45:19 compute-0 nova_compute[117331]: 
Oct 09 16:45:19 compute-0 nova_compute[117331]: neutron: 
Oct 09 16:45:19 compute-0 nova_compute[117331]:   auth-url = https://keystone-internal.openstack.svc:5000
Oct 09 16:45:19 compute-0 nova_compute[117331]:   auth_section = None
Oct 09 16:45:19 compute-0 nova_compute[117331]:   auth_type = password
Oct 09 16:45:19 compute-0 nova_compute[117331]:   cafile = None
Oct 09 16:45:19 compute-0 nova_compute[117331]:   certfile = None
Oct 09 16:45:19 compute-0 nova_compute[117331]:   collect-timing = False
Oct 09 16:45:19 compute-0 nova_compute[117331]:   connect-retries = None
Oct 09 16:45:19 compute-0 nova_compute[117331]:   connect-retry-delay = None
Oct 09 16:45:19 compute-0 nova_compute[117331]:   default-domain-id = None
Oct 09 16:45:19 compute-0 nova_compute[117331]:   default-domain-name = None
Oct 09 16:45:19 compute-0 nova_compute[117331]:   default_floating_pool = nova
Oct 09 16:45:19 compute-0 nova_compute[117331]:   domain-id = None
Oct 09 16:45:19 compute-0 nova_compute[117331]:   domain-name = None
Oct 09 16:45:19 compute-0 nova_compute[117331]:   endpoint-override = None
Oct 09 16:45:19 compute-0 nova_compute[117331]:   extension_sync_interval = 600
Oct 09 16:45:19 compute-0 nova_compute[117331]:   http_retries = 3
Oct 09 16:45:19 compute-0 nova_compute[117331]:   insecure = False
Oct 09 16:45:19 compute-0 nova_compute[117331]:   keyfile = None
Oct 09 16:45:19 compute-0 nova_compute[117331]:   max_version = None
Oct 09 16:45:19 compute-0 nova_compute[117331]:   metadata_proxy_shared_secret = ***
Oct 09 16:45:19 compute-0 nova_compute[117331]:   min_version = None
Oct 09 16:45:19 compute-0 nova_compute[117331]:   ovs_bridge = br-int
Oct 09 16:45:19 compute-0 nova_compute[117331]:   password = ***
Oct 09 16:45:19 compute-0 nova_compute[117331]:   physnets = 
Oct 09 16:45:19 compute-0 nova_compute[117331]:   project-domain-id = None
Oct 09 16:45:19 compute-0 nova_compute[117331]:   project-domain-name = Default
Oct 09 16:45:19 compute-0 nova_compute[117331]:   project-id = None
Oct 09 16:45:19 compute-0 nova_compute[117331]:   project-name = service
Oct 09 16:45:19 compute-0 nova_compute[117331]:   region-name = regionOne
Oct 09 16:45:19 compute-0 nova_compute[117331]:   retriable-status-codes = None
Oct 09 16:45:19 compute-0 nova_compute[117331]:   service-name = None
Oct 09 16:45:19 compute-0 nova_compute[117331]:   service-type = network
Oct 09 16:45:19 compute-0 nova_compute[117331]:   service_metadata_proxy = True
Oct 09 16:45:19 compute-0 nova_compute[117331]:   split-loggers = False
Oct 09 16:45:19 compute-0 nova_compute[117331]:   status-code-retries = None
Oct 09 16:45:19 compute-0 nova_compute[117331]:   status-code-retry-delay = None
Oct 09 16:45:19 compute-0 nova_compute[117331]:   system-scope = None
Oct 09 16:45:19 compute-0 nova_compute[117331]:   timeout = None
Oct 09 16:45:19 compute-0 nova_compute[117331]:   trust-id = None
Oct 09 16:45:19 compute-0 nova_compute[117331]:   user-domain-id = None
Oct 09 16:45:19 compute-0 nova_compute[117331]:   user-domain-name = Default
Oct 09 16:45:19 compute-0 nova_compute[117331]:   user-id = None
Oct 09 16:45:19 compute-0 nova_compute[117331]:   username = nova
Oct 09 16:45:19 compute-0 nova_compute[117331]:   valid-interfaces = 
Oct 09 16:45:19 compute-0 nova_compute[117331]:     internal
Oct 09 16:45:19 compute-0 nova_compute[117331]:   version = None
Oct 09 16:45:19 compute-0 nova_compute[117331]: 
Oct 09 16:45:19 compute-0 nova_compute[117331]: neutron_tunnel: 
Oct 09 16:45:19 compute-0 nova_compute[117331]:   numa_nodes = 
Oct 09 16:45:19 compute-0 nova_compute[117331]: 
Oct 09 16:45:19 compute-0 nova_compute[117331]: notifications: 
Oct 09 16:45:19 compute-0 nova_compute[117331]:   bdms_in_notifications = False
Oct 09 16:45:19 compute-0 nova_compute[117331]:   default_level = INFO
Oct 09 16:45:19 compute-0 nova_compute[117331]:   include_share_mapping = False
Oct 09 16:45:19 compute-0 nova_compute[117331]:   notification_format = both
Oct 09 16:45:19 compute-0 nova_compute[117331]:   notify_on_state_change = vm_and_task_state
Oct 09 16:45:19 compute-0 nova_compute[117331]:   versioned_notifications_topics = 
Oct 09 16:45:19 compute-0 nova_compute[117331]:     versioned_notifications
Oct 09 16:45:19 compute-0 nova_compute[117331]: 
Oct 09 16:45:19 compute-0 nova_compute[117331]: nova_sys_admin: 
Oct 09 16:45:19 compute-0 nova_compute[117331]:   capabilities = 
Oct 09 16:45:19 compute-0 nova_compute[117331]:     0
Oct 09 16:45:19 compute-0 nova_compute[117331]:     1
Oct 09 16:45:19 compute-0 nova_compute[117331]:     12
Oct 09 16:45:19 compute-0 nova_compute[117331]:     2
Oct 09 16:45:19 compute-0 nova_compute[117331]:     21
Oct 09 16:45:19 compute-0 nova_compute[117331]:     3
Oct 09 16:45:19 compute-0 nova_compute[117331]:   group = None
Oct 09 16:45:19 compute-0 nova_compute[117331]:   helper_command = None
Oct 09 16:45:19 compute-0 nova_compute[117331]:   log_daemon_traceback = False
Oct 09 16:45:19 compute-0 nova_compute[117331]:   logger_name = oslo_privsep.daemon
Oct 09 16:45:19 compute-0 nova_compute[117331]:   thread_pool_size = 8
Oct 09 16:45:19 compute-0 nova_compute[117331]:   user = None
Oct 09 16:45:19 compute-0 nova_compute[117331]: 
Oct 09 16:45:19 compute-0 nova_compute[117331]: os_brick: 
Oct 09 16:45:19 compute-0 nova_compute[117331]:   lock_path = /var/lib/nova/tmp
Oct 09 16:45:19 compute-0 nova_compute[117331]:   wait_mpath_device_attempts = 4
Oct 09 16:45:19 compute-0 nova_compute[117331]:   wait_mpath_device_interval = 1
Oct 09 16:45:19 compute-0 nova_compute[117331]: 
Oct 09 16:45:19 compute-0 nova_compute[117331]: os_vif_linux_bridge: 
Oct 09 16:45:19 compute-0 nova_compute[117331]:   flat_interface = None
Oct 09 16:45:19 compute-0 nova_compute[117331]:   forward_bridge_interface = 
Oct 09 16:45:19 compute-0 nova_compute[117331]:     all
Oct 09 16:45:19 compute-0 nova_compute[117331]:   iptables_bottom_regex = 
Oct 09 16:45:19 compute-0 nova_compute[117331]:   iptables_drop_action = DROP
Oct 09 16:45:19 compute-0 nova_compute[117331]:   iptables_top_regex = 
Oct 09 16:45:19 compute-0 nova_compute[117331]:   network_device_mtu = 1500
Oct 09 16:45:19 compute-0 nova_compute[117331]:   use_ipv6 = False
Oct 09 16:45:19 compute-0 nova_compute[117331]:   vlan_interface = None
Oct 09 16:45:19 compute-0 nova_compute[117331]: 
Oct 09 16:45:19 compute-0 nova_compute[117331]: os_vif_ovs: 
Oct 09 16:45:19 compute-0 nova_compute[117331]:   default_qos_type = linux-noop
Oct 09 16:45:19 compute-0 nova_compute[117331]:   isolate_vif = False
Oct 09 16:45:19 compute-0 nova_compute[117331]:   network_device_mtu = 1500
Oct 09 16:45:19 compute-0 nova_compute[117331]:   ovs_vsctl_timeout = 120
Oct 09 16:45:19 compute-0 nova_compute[117331]:   ovsdb_connection = tcp:127.0.0.1:6640
Oct 09 16:45:19 compute-0 nova_compute[117331]:   ovsdb_interface = native
Oct 09 16:45:19 compute-0 nova_compute[117331]:   per_port_bridge = False
Oct 09 16:45:19 compute-0 nova_compute[117331]: 
Oct 09 16:45:19 compute-0 nova_compute[117331]: oslo_concurrency: 
Oct 09 16:45:19 compute-0 nova_compute[117331]:   disable_process_locking = False
Oct 09 16:45:19 compute-0 nova_compute[117331]:   lock_path = /var/lib/nova/tmp
Oct 09 16:45:19 compute-0 nova_compute[117331]: 
Oct 09 16:45:19 compute-0 nova_compute[117331]: oslo_limit: 
Oct 09 16:45:19 compute-0 nova_compute[117331]:   auth-url = https://keystone-internal.openstack.svc:5000
Oct 09 16:45:19 compute-0 nova_compute[117331]:   auth_section = None
Oct 09 16:45:19 compute-0 nova_compute[117331]:   auth_type = password
Oct 09 16:45:19 compute-0 nova_compute[117331]:   cafile = None
Oct 09 16:45:19 compute-0 nova_compute[117331]:   certfile = None
Oct 09 16:45:19 compute-0 nova_compute[117331]:   collect-timing = False
Oct 09 16:45:19 compute-0 nova_compute[117331]:   connect-retries = None
Oct 09 16:45:19 compute-0 nova_compute[117331]:   connect-retry-delay = None
Oct 09 16:45:19 compute-0 nova_compute[117331]:   default-domain-id = None
Oct 09 16:45:19 compute-0 nova_compute[117331]:   default-domain-name = None
Oct 09 16:45:19 compute-0 nova_compute[117331]:   domain-id = None
Oct 09 16:45:19 compute-0 nova_compute[117331]:   domain-name = None
Oct 09 16:45:19 compute-0 nova_compute[117331]:   endpoint-override = None
Oct 09 16:45:19 compute-0 nova_compute[117331]:   endpoint_id = None
Oct 09 16:45:19 compute-0 nova_compute[117331]:   endpoint_interface = internal
Oct 09 16:45:19 compute-0 nova_compute[117331]:   endpoint_region_name = regionOne
Oct 09 16:45:19 compute-0 nova_compute[117331]:   endpoint_service_name = None
Oct 09 16:45:19 compute-0 nova_compute[117331]:   endpoint_service_type = compute
Oct 09 16:45:19 compute-0 nova_compute[117331]:   insecure = False
Oct 09 16:45:19 compute-0 nova_compute[117331]:   keyfile = None
Oct 09 16:45:19 compute-0 nova_compute[117331]:   max-version = None
Oct 09 16:45:19 compute-0 nova_compute[117331]:   min-version = None
Oct 09 16:45:19 compute-0 nova_compute[117331]:   password = ***
Oct 09 16:45:19 compute-0 nova_compute[117331]:   project-domain-id = None
Oct 09 16:45:19 compute-0 nova_compute[117331]:   project-domain-name = None
Oct 09 16:45:19 compute-0 nova_compute[117331]:   project-id = None
Oct 09 16:45:19 compute-0 nova_compute[117331]:   project-name = None
Oct 09 16:45:19 compute-0 nova_compute[117331]:   region-name = None
Oct 09 16:45:19 compute-0 nova_compute[117331]:   retriable-status-codes = None
Oct 09 16:45:19 compute-0 nova_compute[117331]:   service-name = None
Oct 09 16:45:19 compute-0 nova_compute[117331]:   service-type = None
Oct 09 16:45:19 compute-0 nova_compute[117331]:   split-loggers = False
Oct 09 16:45:19 compute-0 nova_compute[117331]:   status-code-retries = None
Oct 09 16:45:19 compute-0 nova_compute[117331]:   status-code-retry-delay = None
Oct 09 16:45:19 compute-0 nova_compute[117331]:   system-scope = all
Oct 09 16:45:19 compute-0 nova_compute[117331]:   timeout = None
Oct 09 16:45:19 compute-0 nova_compute[117331]:   trust-id = None
Oct 09 16:45:19 compute-0 nova_compute[117331]:   user-domain-id = None
Oct 09 16:45:19 compute-0 nova_compute[117331]:   user-domain-name = Default
Oct 09 16:45:19 compute-0 nova_compute[117331]:   user-id = None
Oct 09 16:45:19 compute-0 nova_compute[117331]:   username = nova
Oct 09 16:45:19 compute-0 nova_compute[117331]:   valid-interfaces = None
Oct 09 16:45:19 compute-0 nova_compute[117331]:   version = None
Oct 09 16:45:19 compute-0 nova_compute[117331]: 
Oct 09 16:45:19 compute-0 nova_compute[117331]: oslo_messaging_metrics: 
Oct 09 16:45:19 compute-0 nova_compute[117331]:   metrics_buffer_size = 1000
Oct 09 16:45:19 compute-0 nova_compute[117331]:   metrics_enabled = False
Oct 09 16:45:19 compute-0 nova_compute[117331]:   metrics_process_name = 
Oct 09 16:45:19 compute-0 nova_compute[117331]:   metrics_socket_file = /var/tmp/metrics_collector.sock
Oct 09 16:45:19 compute-0 nova_compute[117331]:   metrics_thread_stop_timeout = 10
Oct 09 16:45:19 compute-0 nova_compute[117331]: 
Oct 09 16:45:19 compute-0 nova_compute[117331]: oslo_messaging_notifications: 
Oct 09 16:45:19 compute-0 nova_compute[117331]:   driver = 
Oct 09 16:45:19 compute-0 nova_compute[117331]:     messagingv2
Oct 09 16:45:19 compute-0 nova_compute[117331]:   retry = -1
Oct 09 16:45:19 compute-0 nova_compute[117331]:   topics = 
Oct 09 16:45:19 compute-0 nova_compute[117331]:     notifications
Oct 09 16:45:19 compute-0 nova_compute[117331]:   transport_url = ***
Oct 09 16:45:19 compute-0 nova_compute[117331]: 
Oct 09 16:45:19 compute-0 nova_compute[117331]: oslo_messaging_rabbit: 
Oct 09 16:45:19 compute-0 nova_compute[117331]:   amqp_auto_delete = False
Oct 09 16:45:19 compute-0 nova_compute[117331]:   amqp_durable_queues = True
Oct 09 16:45:19 compute-0 nova_compute[117331]:   conn_pool_min_size = 2
Oct 09 16:45:19 compute-0 nova_compute[117331]:   conn_pool_ttl = 1200
Oct 09 16:45:19 compute-0 nova_compute[117331]:   direct_mandatory_flag = True
Oct 09 16:45:19 compute-0 nova_compute[117331]:   enable_cancel_on_failover = False
Oct 09 16:45:19 compute-0 nova_compute[117331]:   heartbeat_in_pthread = False
Oct 09 16:45:19 compute-0 nova_compute[117331]:   heartbeat_rate = 3
Oct 09 16:45:19 compute-0 nova_compute[117331]:   heartbeat_timeout_threshold = 60
Oct 09 16:45:19 compute-0 nova_compute[117331]:   hostname = compute-0
Oct 09 16:45:19 compute-0 nova_compute[117331]:   kombu_compression = None
Oct 09 16:45:19 compute-0 nova_compute[117331]:   kombu_failover_strategy = round-robin
Oct 09 16:45:19 compute-0 nova_compute[117331]:   kombu_missing_consumer_retry_timeout = 60
Oct 09 16:45:19 compute-0 nova_compute[117331]:   kombu_reconnect_delay = 1.0
Oct 09 16:45:19 compute-0 nova_compute[117331]:   kombu_reconnect_splay = 0.0
Oct 09 16:45:19 compute-0 nova_compute[117331]:   processname = nova-compute
Oct 09 16:45:19 compute-0 nova_compute[117331]:   rabbit_ha_queues = False
Oct 09 16:45:19 compute-0 nova_compute[117331]:   rabbit_interval_max = 30
Oct 09 16:45:19 compute-0 nova_compute[117331]:   rabbit_login_method = AMQPLAIN
Oct 09 16:45:19 compute-0 nova_compute[117331]:   rabbit_qos_prefetch_count = 0
Oct 09 16:45:19 compute-0 nova_compute[117331]:   rabbit_quorum_delivery_limit = 0
Oct 09 16:45:19 compute-0 nova_compute[117331]:   rabbit_quorum_max_memory_bytes = 0
Oct 09 16:45:19 compute-0 nova_compute[117331]:   rabbit_quorum_max_memory_length = 0
Oct 09 16:45:19 compute-0 nova_compute[117331]:   rabbit_quorum_queue = True
Oct 09 16:45:19 compute-0 nova_compute[117331]:   rabbit_retry_backoff = 2
Oct 09 16:45:19 compute-0 nova_compute[117331]:   rabbit_retry_interval = 1
Oct 09 16:45:19 compute-0 nova_compute[117331]:   rabbit_stream_fanout = False
Oct 09 16:45:19 compute-0 nova_compute[117331]:   rabbit_transient_queues_ttl = 1800
Oct 09 16:45:19 compute-0 nova_compute[117331]:   rabbit_transient_quorum_queue = True
Oct 09 16:45:19 compute-0 nova_compute[117331]:   rpc_conn_pool_size = 30
Oct 09 16:45:19 compute-0 nova_compute[117331]:   ssl = False
Oct 09 16:45:19 compute-0 nova_compute[117331]:   ssl_ca_file = 
Oct 09 16:45:19 compute-0 nova_compute[117331]:   ssl_cert_file = 
Oct 09 16:45:19 compute-0 nova_compute[117331]:   ssl_enforce_fips_mode = False
Oct 09 16:45:19 compute-0 nova_compute[117331]:   ssl_key_file = 
Oct 09 16:45:19 compute-0 nova_compute[117331]:   ssl_version = 
Oct 09 16:45:19 compute-0 nova_compute[117331]:   use_queue_manager = False
Oct 09 16:45:19 compute-0 nova_compute[117331]: 
Oct 09 16:45:19 compute-0 nova_compute[117331]: oslo_middleware: 
Oct 09 16:45:19 compute-0 nova_compute[117331]:   http_basic_auth_user_file = /etc/htpasswd
Oct 09 16:45:19 compute-0 nova_compute[117331]: 
Oct 09 16:45:19 compute-0 nova_compute[117331]: oslo_policy: 
Oct 09 16:45:19 compute-0 nova_compute[117331]:   enforce_new_defaults = True
Oct 09 16:45:19 compute-0 nova_compute[117331]:   enforce_scope = True
Oct 09 16:45:19 compute-0 nova_compute[117331]:   policy_default_rule = default
Oct 09 16:45:19 compute-0 nova_compute[117331]:   policy_dirs = 
Oct 09 16:45:19 compute-0 nova_compute[117331]:     policy.d
Oct 09 16:45:19 compute-0 nova_compute[117331]:   policy_file = policy.yaml
Oct 09 16:45:19 compute-0 nova_compute[117331]:   remote_content_type = application/x-www-form-urlencoded
Oct 09 16:45:19 compute-0 nova_compute[117331]:   remote_ssl_ca_crt_file = None
Oct 09 16:45:19 compute-0 nova_compute[117331]:   remote_ssl_client_crt_file = None
Oct 09 16:45:19 compute-0 nova_compute[117331]:   remote_ssl_client_key_file = None
Oct 09 16:45:19 compute-0 nova_compute[117331]:   remote_ssl_verify_server_crt = False
Oct 09 16:45:19 compute-0 nova_compute[117331]:   remote_timeout = 60.0
Oct 09 16:45:19 compute-0 nova_compute[117331]: 
Oct 09 16:45:19 compute-0 nova_compute[117331]: oslo_reports: 
Oct 09 16:45:19 compute-0 nova_compute[117331]:   file_event_handler = /var/lib/nova
Oct 09 16:45:19 compute-0 nova_compute[117331]:   file_event_handler_interval = 1
Oct 09 16:45:19 compute-0 nova_compute[117331]:   log_dir = None
Oct 09 16:45:19 compute-0 nova_compute[117331]: 
Oct 09 16:45:19 compute-0 nova_compute[117331]: oslo_versionedobjects: 
Oct 09 16:45:19 compute-0 nova_compute[117331]:   fatal_exception_format_errors = False
Oct 09 16:45:19 compute-0 nova_compute[117331]: 
Oct 09 16:45:19 compute-0 nova_compute[117331]: pci: 
Oct 09 16:45:19 compute-0 nova_compute[117331]:   alias = 
Oct 09 16:45:19 compute-0 nova_compute[117331]:   device_spec = 
Oct 09 16:45:19 compute-0 nova_compute[117331]:   report_in_placement = False
Oct 09 16:45:19 compute-0 nova_compute[117331]: 
Oct 09 16:45:19 compute-0 nova_compute[117331]: placement: 
Oct 09 16:45:19 compute-0 nova_compute[117331]:   auth-url = https://keystone-internal.openstack.svc:5000
Oct 09 16:45:19 compute-0 nova_compute[117331]:   auth_section = None
Oct 09 16:45:19 compute-0 nova_compute[117331]:   auth_type = password
Oct 09 16:45:19 compute-0 nova_compute[117331]:   cafile = None
Oct 09 16:45:19 compute-0 nova_compute[117331]:   certfile = None
Oct 09 16:45:19 compute-0 nova_compute[117331]:   collect-timing = False
Oct 09 16:45:19 compute-0 nova_compute[117331]:   connect-retries = None
Oct 09 16:45:19 compute-0 nova_compute[117331]:   connect-retry-delay = None
Oct 09 16:45:19 compute-0 nova_compute[117331]:   default-domain-id = None
Oct 09 16:45:19 compute-0 nova_compute[117331]:   default-domain-name = None
Oct 09 16:45:19 compute-0 nova_compute[117331]:   domain-id = None
Oct 09 16:45:19 compute-0 nova_compute[117331]:   domain-name = None
Oct 09 16:45:19 compute-0 nova_compute[117331]:   endpoint-override = None
Oct 09 16:45:19 compute-0 nova_compute[117331]:   insecure = False
Oct 09 16:45:19 compute-0 nova_compute[117331]:   keyfile = None
Oct 09 16:45:19 compute-0 nova_compute[117331]:   max_version = None
Oct 09 16:45:19 compute-0 nova_compute[117331]:   min_version = None
Oct 09 16:45:19 compute-0 nova_compute[117331]:   password = ***
Oct 09 16:45:19 compute-0 nova_compute[117331]:   project-domain-id = None
Oct 09 16:45:19 compute-0 nova_compute[117331]:   project-domain-name = Default
Oct 09 16:45:19 compute-0 nova_compute[117331]:   project-id = None
Oct 09 16:45:19 compute-0 nova_compute[117331]:   project-name = service
Oct 09 16:45:19 compute-0 nova_compute[117331]:   region-name = regionOne
Oct 09 16:45:19 compute-0 nova_compute[117331]:   retriable-status-codes = None
Oct 09 16:45:19 compute-0 nova_compute[117331]:   service-name = None
Oct 09 16:45:19 compute-0 nova_compute[117331]:   service-type = placement
Oct 09 16:45:19 compute-0 nova_compute[117331]:   split-loggers = False
Oct 09 16:45:19 compute-0 nova_compute[117331]:   status-code-retries = None
Oct 09 16:45:19 compute-0 nova_compute[117331]:   status-code-retry-delay = None
Oct 09 16:45:19 compute-0 nova_compute[117331]:   system-scope = None
Oct 09 16:45:19 compute-0 nova_compute[117331]:   timeout = None
Oct 09 16:45:19 compute-0 nova_compute[117331]:   trust-id = None
Oct 09 16:45:19 compute-0 nova_compute[117331]:   user-domain-id = None
Oct 09 16:45:19 compute-0 nova_compute[117331]:   user-domain-name = Default
Oct 09 16:45:19 compute-0 nova_compute[117331]:   user-id = None
Oct 09 16:45:19 compute-0 nova_compute[117331]:   username = nova
Oct 09 16:45:19 compute-0 nova_compute[117331]:   valid-interfaces = 
Oct 09 16:45:19 compute-0 nova_compute[117331]:     internal
Oct 09 16:45:19 compute-0 nova_compute[117331]:   version = None
Oct 09 16:45:19 compute-0 nova_compute[117331]: 
Oct 09 16:45:19 compute-0 nova_compute[117331]: privsep_osbrick: 
Oct 09 16:45:19 compute-0 nova_compute[117331]:   capabilities = 
Oct 09 16:45:19 compute-0 nova_compute[117331]:     2
Oct 09 16:45:19 compute-0 nova_compute[117331]:     21
Oct 09 16:45:19 compute-0 nova_compute[117331]:   group = None
Oct 09 16:45:19 compute-0 nova_compute[117331]:   helper_command = None
Oct 09 16:45:19 compute-0 nova_compute[117331]:   log_daemon_traceback = False
Oct 09 16:45:19 compute-0 nova_compute[117331]:   logger_name = os_brick.privileged
Oct 09 16:45:19 compute-0 nova_compute[117331]:   thread_pool_size = 8
Oct 09 16:45:19 compute-0 nova_compute[117331]:   user = None
Oct 09 16:45:19 compute-0 nova_compute[117331]: 
Oct 09 16:45:19 compute-0 nova_compute[117331]: quota: 
Oct 09 16:45:19 compute-0 nova_compute[117331]:   cores = 20
Oct 09 16:45:19 compute-0 nova_compute[117331]:   count_usage_from_placement = False
Oct 09 16:45:19 compute-0 nova_compute[117331]:   driver = nova.quota.DbQuotaDriver
Oct 09 16:45:19 compute-0 nova_compute[117331]:   injected_file_content_bytes = 10240
Oct 09 16:45:19 compute-0 nova_compute[117331]:   injected_file_path_length = 255
Oct 09 16:45:19 compute-0 nova_compute[117331]:   injected_files = 5
Oct 09 16:45:19 compute-0 nova_compute[117331]:   instances = 10
Oct 09 16:45:19 compute-0 nova_compute[117331]:   key_pairs = 100
Oct 09 16:45:19 compute-0 nova_compute[117331]:   metadata_items = 128
Oct 09 16:45:19 compute-0 nova_compute[117331]:   ram = 51200
Oct 09 16:45:19 compute-0 nova_compute[117331]:   recheck_quota = True
Oct 09 16:45:19 compute-0 nova_compute[117331]:   server_group_members = 10
Oct 09 16:45:19 compute-0 nova_compute[117331]:   server_groups = 10
Oct 09 16:45:19 compute-0 nova_compute[117331]:   unified_limits_resource_list = 
Oct 09 16:45:19 compute-0 nova_compute[117331]:     servers
Oct 09 16:45:19 compute-0 nova_compute[117331]:   unified_limits_resource_strategy = require
Oct 09 16:45:19 compute-0 nova_compute[117331]: 
Oct 09 16:45:19 compute-0 nova_compute[117331]: scheduler: 
Oct 09 16:45:19 compute-0 nova_compute[117331]:   discover_hosts_in_cells_interval = -1
Oct 09 16:45:19 compute-0 nova_compute[117331]:   enable_isolated_aggregate_filtering = False
Oct 09 16:45:19 compute-0 nova_compute[117331]:   image_metadata_prefilter = False
Oct 09 16:45:19 compute-0 nova_compute[117331]:   limit_tenants_to_placement_aggregate = False
Oct 09 16:45:19 compute-0 nova_compute[117331]:   max_attempts = 3
Oct 09 16:45:19 compute-0 nova_compute[117331]:   max_placement_results = 1000
Oct 09 16:45:19 compute-0 nova_compute[117331]:   placement_aggregate_required_for_tenants = False
Oct 09 16:45:19 compute-0 nova_compute[117331]:   query_placement_for_image_type_support = False
Oct 09 16:45:19 compute-0 nova_compute[117331]:   query_placement_for_routed_network_aggregates = False
Oct 09 16:45:19 compute-0 nova_compute[117331]:   workers = None
Oct 09 16:45:19 compute-0 nova_compute[117331]: 
Oct 09 16:45:19 compute-0 nova_compute[117331]: serial_console: 
Oct 09 16:45:19 compute-0 nova_compute[117331]:   base_url = ws://127.0.0.1:6083/
Oct 09 16:45:19 compute-0 nova_compute[117331]:   enabled = False
Oct 09 16:45:19 compute-0 nova_compute[117331]:   port_range = 10000:20000
Oct 09 16:45:19 compute-0 nova_compute[117331]:   proxyclient_address = 127.0.0.1
Oct 09 16:45:19 compute-0 nova_compute[117331]:   serialproxy_host = 0.0.0.0
Oct 09 16:45:19 compute-0 nova_compute[117331]:   serialproxy_port = 6083
Oct 09 16:45:19 compute-0 nova_compute[117331]: 
Oct 09 16:45:19 compute-0 nova_compute[117331]: service_user: 
Oct 09 16:45:19 compute-0 nova_compute[117331]:   auth-url = https://keystone-internal.openstack.svc:5000
Oct 09 16:45:19 compute-0 nova_compute[117331]:   auth_section = None
Oct 09 16:45:19 compute-0 nova_compute[117331]:   auth_type = password
Oct 09 16:45:19 compute-0 nova_compute[117331]:   cafile = None
Oct 09 16:45:19 compute-0 nova_compute[117331]:   certfile = None
Oct 09 16:45:19 compute-0 nova_compute[117331]:   collect-timing = False
Oct 09 16:45:19 compute-0 nova_compute[117331]:   default-domain-id = None
Oct 09 16:45:19 compute-0 nova_compute[117331]:   default-domain-name = None
Oct 09 16:45:19 compute-0 nova_compute[117331]:   domain-id = None
Oct 09 16:45:19 compute-0 nova_compute[117331]:   domain-name = None
Oct 09 16:45:19 compute-0 nova_compute[117331]:   insecure = False
Oct 09 16:45:19 compute-0 nova_compute[117331]:   keyfile = None
Oct 09 16:45:19 compute-0 nova_compute[117331]:   password = ***
Oct 09 16:45:19 compute-0 nova_compute[117331]:   project-domain-id = None
Oct 09 16:45:19 compute-0 nova_compute[117331]:   project-domain-name = Default
Oct 09 16:45:19 compute-0 nova_compute[117331]:   project-id = None
Oct 09 16:45:19 compute-0 nova_compute[117331]:   project-name = service
Oct 09 16:45:19 compute-0 nova_compute[117331]:   send_service_user_token = True
Oct 09 16:45:19 compute-0 nova_compute[117331]:   split-loggers = False
Oct 09 16:45:19 compute-0 nova_compute[117331]:   system-scope = None
Oct 09 16:45:19 compute-0 nova_compute[117331]:   timeout = None
Oct 09 16:45:19 compute-0 nova_compute[117331]:   trust-id = None
Oct 09 16:45:19 compute-0 nova_compute[117331]:   user-domain-id = None
Oct 09 16:45:19 compute-0 nova_compute[117331]:   user-domain-name = Default
Oct 09 16:45:19 compute-0 nova_compute[117331]:   user-id = None
Oct 09 16:45:19 compute-0 nova_compute[117331]:   username = nova
Oct 09 16:45:19 compute-0 nova_compute[117331]: 
Oct 09 16:45:19 compute-0 nova_compute[117331]: spice: 
Oct 09 16:45:19 compute-0 nova_compute[117331]:   agent_enabled = True
Oct 09 16:45:19 compute-0 nova_compute[117331]:   enabled = False
Oct 09 16:45:19 compute-0 nova_compute[117331]:   html5proxy_base_url = http://127.0.0.1:6082/spice_auto.html
Oct 09 16:45:19 compute-0 nova_compute[117331]:   html5proxy_host = 0.0.0.0
Oct 09 16:45:19 compute-0 nova_compute[117331]:   html5proxy_port = 6082
Oct 09 16:45:19 compute-0 nova_compute[117331]:   image_compression = None
Oct 09 16:45:19 compute-0 nova_compute[117331]:   jpeg_compression = None
Oct 09 16:45:19 compute-0 nova_compute[117331]:   playback_compression = None
Oct 09 16:45:19 compute-0 nova_compute[117331]:   require_secure = False
Oct 09 16:45:19 compute-0 nova_compute[117331]:   server_listen = 127.0.0.1
Oct 09 16:45:19 compute-0 nova_compute[117331]:   server_proxyclient_address = 127.0.0.1
Oct 09 16:45:19 compute-0 nova_compute[117331]:   spice_direct_proxy_base_url = http://127.0.0.1:13002/nova
Oct 09 16:45:19 compute-0 nova_compute[117331]:   streaming_mode = None
Oct 09 16:45:19 compute-0 nova_compute[117331]:   zlib_compression = None
Oct 09 16:45:19 compute-0 nova_compute[117331]: 
Oct 09 16:45:19 compute-0 nova_compute[117331]: upgrade_levels: 
Oct 09 16:45:19 compute-0 nova_compute[117331]:   baseapi = None
Oct 09 16:45:19 compute-0 nova_compute[117331]:   compute = auto
Oct 09 16:45:19 compute-0 nova_compute[117331]:   conductor = None
Oct 09 16:45:19 compute-0 nova_compute[117331]:   scheduler = None
Oct 09 16:45:19 compute-0 nova_compute[117331]: 
Oct 09 16:45:19 compute-0 nova_compute[117331]: vault: 
Oct 09 16:45:19 compute-0 nova_compute[117331]:   approle_role_id = ***
Oct 09 16:45:19 compute-0 nova_compute[117331]:   approle_secret_id = ***
Oct 09 16:45:19 compute-0 nova_compute[117331]:   kv_mountpoint = secret
Oct 09 16:45:19 compute-0 nova_compute[117331]:   kv_path = None
Oct 09 16:45:19 compute-0 nova_compute[117331]:   kv_version = 2
Oct 09 16:45:19 compute-0 nova_compute[117331]:   namespace = None
Oct 09 16:45:19 compute-0 nova_compute[117331]:   root_token_id = ***
Oct 09 16:45:19 compute-0 nova_compute[117331]:   ssl_ca_crt_file = None
Oct 09 16:45:19 compute-0 nova_compute[117331]:   timeout = 60.0
Oct 09 16:45:19 compute-0 nova_compute[117331]:   use_ssl = False
Oct 09 16:45:19 compute-0 nova_compute[117331]:   vault_url = http://127.0.0.1:8200
Oct 09 16:45:19 compute-0 nova_compute[117331]: 
Oct 09 16:45:19 compute-0 nova_compute[117331]: vendordata_dynamic_auth: 
Oct 09 16:45:19 compute-0 nova_compute[117331]:   auth_section = None
Oct 09 16:45:19 compute-0 nova_compute[117331]:   auth_type = None
Oct 09 16:45:19 compute-0 nova_compute[117331]:   cafile = None
Oct 09 16:45:19 compute-0 nova_compute[117331]:   certfile = None
Oct 09 16:45:19 compute-0 nova_compute[117331]:   collect-timing = False
Oct 09 16:45:19 compute-0 nova_compute[117331]:   insecure = False
Oct 09 16:45:19 compute-0 nova_compute[117331]:   keyfile = None
Oct 09 16:45:19 compute-0 nova_compute[117331]:   split-loggers = False
Oct 09 16:45:19 compute-0 nova_compute[117331]:   timeout = None
Oct 09 16:45:19 compute-0 nova_compute[117331]: 
Oct 09 16:45:19 compute-0 nova_compute[117331]: vif_plug_linux_bridge_privileged: 
Oct 09 16:45:19 compute-0 nova_compute[117331]:   capabilities = 
Oct 09 16:45:19 compute-0 nova_compute[117331]:     12
Oct 09 16:45:19 compute-0 nova_compute[117331]:   group = None
Oct 09 16:45:19 compute-0 nova_compute[117331]:   helper_command = None
Oct 09 16:45:19 compute-0 nova_compute[117331]:   log_daemon_traceback = False
Oct 09 16:45:19 compute-0 nova_compute[117331]:   logger_name = oslo_privsep.daemon
Oct 09 16:45:19 compute-0 nova_compute[117331]:   thread_pool_size = 8
Oct 09 16:45:19 compute-0 nova_compute[117331]:   user = None
Oct 09 16:45:19 compute-0 nova_compute[117331]: 
Oct 09 16:45:19 compute-0 nova_compute[117331]: vif_plug_ovs_privileged: 
Oct 09 16:45:19 compute-0 nova_compute[117331]:   capabilities = 
Oct 09 16:45:19 compute-0 nova_compute[117331]:     1
Oct 09 16:45:19 compute-0 nova_compute[117331]:     12
Oct 09 16:45:19 compute-0 nova_compute[117331]:   group = None
Oct 09 16:45:19 compute-0 nova_compute[117331]:   helper_command = None
Oct 09 16:45:19 compute-0 nova_compute[117331]:   log_daemon_traceback = False
Oct 09 16:45:19 compute-0 nova_compute[117331]:   logger_name = oslo_privsep.daemon
Oct 09 16:45:19 compute-0 nova_compute[117331]:   thread_pool_size = 8
Oct 09 16:45:19 compute-0 nova_compute[117331]:   user = None
Oct 09 16:45:19 compute-0 nova_compute[117331]: 
Oct 09 16:45:19 compute-0 nova_compute[117331]: vmware: 
Oct 09 16:45:19 compute-0 nova_compute[117331]:   api_retry_count = 10
Oct 09 16:45:19 compute-0 nova_compute[117331]:   ca_file = None
Oct 09 16:45:19 compute-0 nova_compute[117331]:   cache_prefix = None
Oct 09 16:45:19 compute-0 nova_compute[117331]:   cluster_name = None
Oct 09 16:45:19 compute-0 nova_compute[117331]:   connection_pool_size = 10
Oct 09 16:45:19 compute-0 nova_compute[117331]:   console_delay_seconds = None
Oct 09 16:45:19 compute-0 nova_compute[117331]:   datastore_regex = None
Oct 09 16:45:19 compute-0 nova_compute[117331]:   host_ip = None
Oct 09 16:45:19 compute-0 nova_compute[117331]:   host_password = ***
Oct 09 16:45:19 compute-0 nova_compute[117331]:   host_port = 443
Oct 09 16:45:19 compute-0 nova_compute[117331]:   host_username = None
Oct 09 16:45:19 compute-0 nova_compute[117331]:   insecure = False
Oct 09 16:45:19 compute-0 nova_compute[117331]:   integration_bridge = None
Oct 09 16:45:19 compute-0 nova_compute[117331]:   maximum_objects = 100
Oct 09 16:45:19 compute-0 nova_compute[117331]:   pbm_default_policy = None
Oct 09 16:45:19 compute-0 nova_compute[117331]:   pbm_enabled = False
Oct 09 16:45:19 compute-0 nova_compute[117331]:   pbm_wsdl_location = None
Oct 09 16:45:19 compute-0 nova_compute[117331]:   serial_log_dir = /opt/vmware/vspc
Oct 09 16:45:19 compute-0 nova_compute[117331]:   serial_port_proxy_uri = None
Oct 09 16:45:19 compute-0 nova_compute[117331]:   serial_port_service_uri = None
Oct 09 16:45:19 compute-0 nova_compute[117331]:   task_poll_interval = 0.5
Oct 09 16:45:19 compute-0 nova_compute[117331]:   use_linked_clone = True
Oct 09 16:45:19 compute-0 nova_compute[117331]:   vnc_keymap = en-us
Oct 09 16:45:19 compute-0 nova_compute[117331]:   vnc_port = 5900
Oct 09 16:45:19 compute-0 nova_compute[117331]:   vnc_port_total = 10000
Oct 09 16:45:19 compute-0 nova_compute[117331]: 
Oct 09 16:45:19 compute-0 nova_compute[117331]: vnc: 
Oct 09 16:45:19 compute-0 nova_compute[117331]:   auth_schemes = 
Oct 09 16:45:19 compute-0 nova_compute[117331]:     none
Oct 09 16:45:19 compute-0 nova_compute[117331]:   enabled = True
Oct 09 16:45:19 compute-0 nova_compute[117331]:   novncproxy_base_url = https://nova-novncproxy-cell1-public-openstack.apps-crc.testing/vnc_lite.html
Oct 09 16:45:19 compute-0 nova_compute[117331]:   novncproxy_host = 0.0.0.0
Oct 09 16:45:19 compute-0 nova_compute[117331]:   novncproxy_port = 6080
Oct 09 16:45:19 compute-0 nova_compute[117331]:   server_listen = ::0
Oct 09 16:45:19 compute-0 nova_compute[117331]:   server_proxyclient_address = 192.168.122.100
Oct 09 16:45:19 compute-0 nova_compute[117331]:   vencrypt_ca_certs = None
Oct 09 16:45:19 compute-0 nova_compute[117331]:   vencrypt_client_cert = None
Oct 09 16:45:19 compute-0 nova_compute[117331]:   vencrypt_client_key = None
Oct 09 16:45:19 compute-0 nova_compute[117331]: 
Oct 09 16:45:19 compute-0 nova_compute[117331]: workarounds: 
Oct 09 16:45:19 compute-0 nova_compute[117331]:   disable_compute_service_check_for_ffu = False
Oct 09 16:45:19 compute-0 nova_compute[117331]:   disable_deep_image_inspection = False
Oct 09 16:45:19 compute-0 nova_compute[117331]:   disable_fallback_pcpu_query = False
Oct 09 16:45:19 compute-0 nova_compute[117331]:   disable_group_policy_check_upcall = False
Oct 09 16:45:19 compute-0 nova_compute[117331]:   disable_libvirt_livesnapshot = False
Oct 09 16:45:19 compute-0 nova_compute[117331]:   disable_rootwrap = False
Oct 09 16:45:19 compute-0 nova_compute[117331]:   enable_numa_live_migration = False
Oct 09 16:45:19 compute-0 nova_compute[117331]:   enable_qemu_monitor_announce_self = True
Oct 09 16:45:19 compute-0 nova_compute[117331]:   ensure_libvirt_rbd_instance_dir_cleanup = False
Oct 09 16:45:19 compute-0 nova_compute[117331]:   handle_virt_lifecycle_events = True
Oct 09 16:45:19 compute-0 nova_compute[117331]:   libvirt_disable_apic = False
Oct 09 16:45:19 compute-0 nova_compute[117331]:   never_download_image_if_on_rbd = False
Oct 09 16:45:19 compute-0 nova_compute[117331]:   qemu_monitor_announce_self_count = 3
Oct 09 16:45:19 compute-0 nova_compute[117331]:   qemu_monitor_announce_self_interval = 1
Oct 09 16:45:19 compute-0 nova_compute[117331]:   reserve_disk_resource_for_image_cache = True
Oct 09 16:45:19 compute-0 nova_compute[117331]:   skip_cpu_compare_at_startup = False
Oct 09 16:45:19 compute-0 nova_compute[117331]:   skip_cpu_compare_on_dest = True
Oct 09 16:45:19 compute-0 nova_compute[117331]:   skip_hypervisor_version_check_on_lm = False
Oct 09 16:45:19 compute-0 nova_compute[117331]:   skip_reserve_in_use_ironic_nodes = False
Oct 09 16:45:19 compute-0 nova_compute[117331]:   unified_limits_count_pcpu_as_vcpu = False
Oct 09 16:45:19 compute-0 nova_compute[117331]:   wait_for_vif_plugged_event_during_hard_reboot = 
Oct 09 16:45:19 compute-0 nova_compute[117331]: 
Oct 09 16:45:19 compute-0 nova_compute[117331]: wsgi: 
Oct 09 16:45:19 compute-0 nova_compute[117331]:   api_paste_config = api-paste.ini
Oct 09 16:45:19 compute-0 nova_compute[117331]:   secure_proxy_ssl_header = None
Oct 09 16:45:19 compute-0 nova_compute[117331]: 
Oct 09 16:45:19 compute-0 nova_compute[117331]: zvm: 
Oct 09 16:45:19 compute-0 nova_compute[117331]:   ca_file = None
Oct 09 16:45:19 compute-0 nova_compute[117331]:   cloud_connector_url = None
Oct 09 16:45:19 compute-0 nova_compute[117331]:   image_tmp_path = /var/lib/nova/images
Oct 09 16:45:19 compute-0 nova_compute[117331]:   reachable_timeout = 300
Oct 09 16:45:21 compute-0 nova_compute[117331]: 2025-10-09 16:45:21.117 2 DEBUG nova.virt.libvirt.driver [None req-15568c02-2290-4f5f-b609-67d94bc90b39 edda4b76bea746b7aec969bc10f68f14 dbc5b562dfbf46d888285482e4fe52bb - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.12/site-packages/nova/virt/libvirt/driver.py:13022
Oct 09 16:45:21 compute-0 nova_compute[117331]: 2025-10-09 16:45:21.118 2 DEBUG nova.virt.libvirt.driver [None req-15568c02-2290-4f5f-b609-67d94bc90b39 edda4b76bea746b7aec969bc10f68f14 dbc5b562dfbf46d888285482e4fe52bb - - default default] No BDM found with device name vdb, not building metadata. _build_disk_metadata /usr/lib/python3.12/site-packages/nova/virt/libvirt/driver.py:13022
Oct 09 16:45:21 compute-0 nova_compute[117331]: 2025-10-09 16:45:21.118 2 DEBUG nova.virt.libvirt.driver [None req-15568c02-2290-4f5f-b609-67d94bc90b39 edda4b76bea746b7aec969bc10f68f14 dbc5b562dfbf46d888285482e4fe52bb - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.12/site-packages/nova/virt/libvirt/driver.py:13022
Oct 09 16:45:21 compute-0 nova_compute[117331]: 2025-10-09 16:45:21.118 2 DEBUG nova.virt.libvirt.driver [None req-15568c02-2290-4f5f-b609-67d94bc90b39 edda4b76bea746b7aec969bc10f68f14 dbc5b562dfbf46d888285482e4fe52bb - - default default] No VIF found with MAC fa:16:3e:8b:c4:76, not building metadata _build_interface_metadata /usr/lib/python3.12/site-packages/nova/virt/libvirt/driver.py:12998
Oct 09 16:45:21 compute-0 podman[155852]: 2025-10-09 16:45:21.838860307 +0000 UTC m=+0.070283052 container health_status 8227728d23f374cb9528be6ddf01dff38d8c792912a17b0d137932a18390b820 (image=38.102.83.66:5001/podified-master-centos10/openstack-iscsid:watcher_latest, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, container_name=iscsid, tcib_build_tag=watcher_latest, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, io.buildah.version=1.41.4, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 10 Base Image, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': '38.102.83.66:5001/podified-master-centos10/openstack-iscsid:watcher_latest', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, config_id=iscsid, org.label-schema.build-date=20251007, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Oct 09 16:45:21 compute-0 podman[155851]: 2025-10-09 16:45:21.840094696 +0000 UTC m=+0.069807396 container health_status 0e966c7a283689daf23c5db5017f23f685ee836319240188a9f70ddd246563c5 (image=38.102.83.66:5001/podified-master-centos10/openstack-neutron-metadata-agent-ovn:watcher_latest, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': '38.102.83.66:5001/podified-master-centos10/openstack-neutron-metadata-agent-ovn:watcher_latest', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.4, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, org.label-schema.build-date=20251007, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=watcher_latest, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, managed_by=edpm_ansible)
Oct 09 16:45:21 compute-0 sshd-session[155830]: Invalid user localadmin from 124.60.67.43 port 53844
Oct 09 16:45:22 compute-0 sshd-session[155830]: pam_unix(sshd:auth): check pass; user unknown
Oct 09 16:45:22 compute-0 sshd-session[155830]: pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=124.60.67.43
Oct 09 16:45:22 compute-0 nova_compute[117331]: 2025-10-09 16:45:22.643 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 09 16:45:22 compute-0 nova_compute[117331]: 2025-10-09 16:45:22.828 2 DEBUG oslo_concurrency.lockutils [None req-15568c02-2290-4f5f-b609-67d94bc90b39 edda4b76bea746b7aec969bc10f68f14 dbc5b562dfbf46d888285482e4fe52bb - - default default] Lock "d9cb0583-a454-4df1-80f9-dc9f184101f2" "released" by "nova.compute.manager.ComputeManager.attach_volume.<locals>.do_attach_volume" :: held 7.897s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:424
Oct 09 16:45:23 compute-0 nova_compute[117331]: 2025-10-09 16:45:23.861 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 09 16:45:23 compute-0 sshd-session[155830]: Failed password for invalid user localadmin from 124.60.67.43 port 53844 ssh2
Oct 09 16:45:27 compute-0 nova_compute[117331]: 2025-10-09 16:45:27.646 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 09 16:45:28 compute-0 unix_chkpwd[155890]: password check failed for user (root)
Oct 09 16:45:28 compute-0 sshd-session[155888]: pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=124.60.67.43  user=root
Oct 09 16:45:28 compute-0 nova_compute[117331]: 2025-10-09 16:45:28.865 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 09 16:45:29 compute-0 podman[127775]: time="2025-10-09T16:45:29Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Oct 09 16:45:29 compute-0 podman[127775]: @ - - [09/Oct/2025:16:45:29 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 20749 "" "Go-http-client/1.1"
Oct 09 16:45:29 compute-0 podman[127775]: @ - - [09/Oct/2025:16:45:29 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 3496 "" "Go-http-client/1.1"
Oct 09 16:45:30 compute-0 sshd-session[155888]: Failed password for root from 124.60.67.43 port 58728 ssh2
Oct 09 16:45:31 compute-0 openstack_network_exporter[129925]: ERROR   16:45:31 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Oct 09 16:45:31 compute-0 openstack_network_exporter[129925]: ERROR   16:45:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Oct 09 16:45:31 compute-0 openstack_network_exporter[129925]: ERROR   16:45:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Oct 09 16:45:31 compute-0 openstack_network_exporter[129925]: ERROR   16:45:31 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Oct 09 16:45:31 compute-0 openstack_network_exporter[129925]: 
Oct 09 16:45:31 compute-0 openstack_network_exporter[129925]: ERROR   16:45:31 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Oct 09 16:45:31 compute-0 openstack_network_exporter[129925]: 
Oct 09 16:45:31 compute-0 sshd-session[155830]: Connection closed by invalid user localadmin 124.60.67.43 port 53844 [preauth]
Oct 09 16:45:31 compute-0 podman[155891]: 2025-10-09 16:45:31.871480077 +0000 UTC m=+0.103450478 container health_status 64fa251112d01abb34ec1bb8e22019815ddcaaaa391ce5ec91517147ac5a9994 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., distribution-scope=public, version=9.6, io.openshift.expose-services=, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vendor=Red Hat, Inc., io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, managed_by=edpm_ansible, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, container_name=openstack_network_exporter, com.redhat.component=ubi9-minimal-container, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, name=ubi9-minimal, architecture=x86_64, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, io.buildah.version=1.33.7, io.openshift.tags=minimal rhel9, maintainer=Red Hat, Inc., url=https://catalog.redhat.com/en/search?searchType=containers, release=1755695350, config_id=edpm, build-date=2025-08-20T13:12:41, vcs-type=git)
Oct 09 16:45:31 compute-0 podman[155892]: 2025-10-09 16:45:31.891818341 +0000 UTC m=+0.116968452 container health_status d615e4f24ff62fbb14be4a63e6da568ec5b22309773c1c66103b6b4de64a8a2f (image=38.102.83.66:5001/podified-master-centos10/openstack-ovn-controller:watcher_latest, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, io.buildah.version=1.41.4, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=watcher_latest, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': '38.102.83.66:5001/podified-master-centos10/openstack-ovn-controller:watcher_latest', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, org.label-schema.build-date=20251007, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.schema-version=1.0, config_id=ovn_controller)
Oct 09 16:45:32 compute-0 nova_compute[117331]: 2025-10-09 16:45:32.650 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 09 16:45:32 compute-0 sshd-session[155888]: Connection closed by authenticating user root 124.60.67.43 port 58728 [preauth]
Oct 09 16:45:33 compute-0 nova_compute[117331]: 2025-10-09 16:45:33.868 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 09 16:45:35 compute-0 ovn_metadata_agent[28608]: 2025-10-09 16:45:35.352 28613 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:405
Oct 09 16:45:35 compute-0 ovn_metadata_agent[28608]: 2025-10-09 16:45:35.352 28613 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.000s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:410
Oct 09 16:45:35 compute-0 ovn_metadata_agent[28608]: 2025-10-09 16:45:35.353 28613 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:424
Oct 09 16:45:36 compute-0 ovn_controller[19752]: 2025-10-09T16:45:36Z|00311|memory_trim|INFO|Detected inactivity (last active 30001 ms ago): trimming memory
Oct 09 16:45:37 compute-0 nova_compute[117331]: 2025-10-09 16:45:37.307 2 DEBUG oslo_service.periodic_task [None req-92a13bed-c081-4c7b-87f3-bafebefb79f2 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.12/site-packages/oslo_service/periodic_task.py:210
Oct 09 16:45:37 compute-0 nova_compute[117331]: 2025-10-09 16:45:37.652 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 09 16:45:38 compute-0 nova_compute[117331]: 2025-10-09 16:45:38.306 2 DEBUG oslo_service.periodic_task [None req-92a13bed-c081-4c7b-87f3-bafebefb79f2 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.12/site-packages/oslo_service/periodic_task.py:210
Oct 09 16:45:38 compute-0 nova_compute[117331]: 2025-10-09 16:45:38.910 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 09 16:45:39 compute-0 sshd-session[155934]: Invalid user ubuntu from 124.60.67.43 port 46256
Oct 09 16:45:41 compute-0 sshd-session[155934]: pam_unix(sshd:auth): check pass; user unknown
Oct 09 16:45:41 compute-0 sshd-session[155934]: pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=124.60.67.43
Oct 09 16:45:41 compute-0 nova_compute[117331]: 2025-10-09 16:45:41.383 2 DEBUG oslo_concurrency.lockutils [None req-9e152268-f981-438b-818c-45a72b93797c edda4b76bea746b7aec969bc10f68f14 dbc5b562dfbf46d888285482e4fe52bb - - default default] Acquiring lock "d9cb0583-a454-4df1-80f9-dc9f184101f2" by "nova.compute.manager.ComputeManager.detach_volume.<locals>.do_detach_volume" inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:405
Oct 09 16:45:41 compute-0 nova_compute[117331]: 2025-10-09 16:45:41.384 2 DEBUG oslo_concurrency.lockutils [None req-9e152268-f981-438b-818c-45a72b93797c edda4b76bea746b7aec969bc10f68f14 dbc5b562dfbf46d888285482e4fe52bb - - default default] Lock "d9cb0583-a454-4df1-80f9-dc9f184101f2" acquired by "nova.compute.manager.ComputeManager.detach_volume.<locals>.do_detach_volume" :: waited 0.001s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:410
Oct 09 16:45:41 compute-0 nova_compute[117331]: 2025-10-09 16:45:41.892 2 DEBUG nova.objects.instance [None req-9e152268-f981-438b-818c-45a72b93797c edda4b76bea746b7aec969bc10f68f14 dbc5b562dfbf46d888285482e4fe52bb - - default default] Lazy-loading 'flavor' on Instance uuid d9cb0583-a454-4df1-80f9-dc9f184101f2 obj_load_attr /usr/lib/python3.12/site-packages/nova/objects/instance.py:1141
Oct 09 16:45:42 compute-0 nova_compute[117331]: 2025-10-09 16:45:42.306 2 DEBUG oslo_service.periodic_task [None req-92a13bed-c081-4c7b-87f3-bafebefb79f2 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.12/site-packages/oslo_service/periodic_task.py:210
Oct 09 16:45:42 compute-0 nova_compute[117331]: 2025-10-09 16:45:42.408 2 INFO nova.compute.manager [None req-9e152268-f981-438b-818c-45a72b93797c edda4b76bea746b7aec969bc10f68f14 dbc5b562dfbf46d888285482e4fe52bb - - default default] [instance: d9cb0583-a454-4df1-80f9-dc9f184101f2] Detaching volume e877a3bb-b53d-4cbb-b2a6-4c88f284da4f
Oct 09 16:45:42 compute-0 nova_compute[117331]: 2025-10-09 16:45:42.485 2 INFO nova.virt.block_device [None req-9e152268-f981-438b-818c-45a72b93797c edda4b76bea746b7aec969bc10f68f14 dbc5b562dfbf46d888285482e4fe52bb - - default default] [instance: d9cb0583-a454-4df1-80f9-dc9f184101f2] Attempting to driver detach volume e877a3bb-b53d-4cbb-b2a6-4c88f284da4f from mountpoint /dev/vdb
Oct 09 16:45:42 compute-0 nova_compute[117331]: 2025-10-09 16:45:42.493 2 DEBUG nova.virt.libvirt.driver [None req-9e152268-f981-438b-818c-45a72b93797c edda4b76bea746b7aec969bc10f68f14 dbc5b562dfbf46d888285482e4fe52bb - - default default] Found disk vdb by alias ua-e877a3bb-b53d-4cbb-b2a6-4c88f284da4f _get_guest_disk_device /usr/lib/python3.12/site-packages/nova/virt/libvirt/driver.py:2825
Oct 09 16:45:42 compute-0 nova_compute[117331]: 2025-10-09 16:45:42.496 2 DEBUG nova.virt.libvirt.driver [None req-9e152268-f981-438b-818c-45a72b93797c edda4b76bea746b7aec969bc10f68f14 dbc5b562dfbf46d888285482e4fe52bb - - default default] Found disk vdb by alias ua-e877a3bb-b53d-4cbb-b2a6-4c88f284da4f _get_guest_disk_device /usr/lib/python3.12/site-packages/nova/virt/libvirt/driver.py:2825
Oct 09 16:45:42 compute-0 nova_compute[117331]: 2025-10-09 16:45:42.496 2 DEBUG nova.virt.libvirt.driver [None req-9e152268-f981-438b-818c-45a72b93797c edda4b76bea746b7aec969bc10f68f14 dbc5b562dfbf46d888285482e4fe52bb - - default default] Attempting to detach device vdb from instance d9cb0583-a454-4df1-80f9-dc9f184101f2 from the persistent domain config. _detach_from_persistent /usr/lib/python3.12/site-packages/nova/virt/libvirt/driver.py:2576
Oct 09 16:45:42 compute-0 nova_compute[117331]: 2025-10-09 16:45:42.496 2 DEBUG nova.virt.libvirt.guest [None req-9e152268-f981-438b-818c-45a72b93797c edda4b76bea746b7aec969bc10f68f14 dbc5b562dfbf46d888285482e4fe52bb - - default default] detach device xml: <disk type="file" device="disk">
Oct 09 16:45:42 compute-0 nova_compute[117331]:   <driver name="qemu" type="raw" cache="none" io="native"/>
Oct 09 16:45:42 compute-0 nova_compute[117331]:   <alias name="ua-e877a3bb-b53d-4cbb-b2a6-4c88f284da4f"/>
Oct 09 16:45:42 compute-0 nova_compute[117331]:   <source file="/var/lib/nova/mnt/56491fbd7607c943b28b644cf81cf3ff/volume-e877a3bb-b53d-4cbb-b2a6-4c88f284da4f"/>
Oct 09 16:45:42 compute-0 nova_compute[117331]:   <target dev="vdb" bus="virtio"/>
Oct 09 16:45:42 compute-0 nova_compute[117331]:   <serial>e877a3bb-b53d-4cbb-b2a6-4c88f284da4f</serial>
Oct 09 16:45:42 compute-0 nova_compute[117331]:   <address type="pci" domain="0x0000" bus="0x06" slot="0x00" function="0x0"/>
Oct 09 16:45:42 compute-0 nova_compute[117331]: </disk>
Oct 09 16:45:42 compute-0 nova_compute[117331]:  detach_device /usr/lib/python3.12/site-packages/nova/virt/libvirt/guest.py:466
Oct 09 16:45:42 compute-0 nova_compute[117331]: 2025-10-09 16:45:42.504 2 DEBUG nova.virt.libvirt.driver [None req-9e152268-f981-438b-818c-45a72b93797c edda4b76bea746b7aec969bc10f68f14 dbc5b562dfbf46d888285482e4fe52bb - - default default] Found disk vdb by alias ua-e877a3bb-b53d-4cbb-b2a6-4c88f284da4f _get_guest_disk_device /usr/lib/python3.12/site-packages/nova/virt/libvirt/driver.py:2825
Oct 09 16:45:42 compute-0 nova_compute[117331]: 2025-10-09 16:45:42.504 2 WARNING nova.virt.libvirt.driver [None req-9e152268-f981-438b-818c-45a72b93797c edda4b76bea746b7aec969bc10f68f14 dbc5b562dfbf46d888285482e4fe52bb - - default default] Failed to detach device vdb from instance d9cb0583-a454-4df1-80f9-dc9f184101f2 from the persistent domain config. Libvirt did not report any error but the device is still in the config.
Oct 09 16:45:42 compute-0 nova_compute[117331]: 2025-10-09 16:45:42.504 2 DEBUG nova.virt.libvirt.driver [None req-9e152268-f981-438b-818c-45a72b93797c edda4b76bea746b7aec969bc10f68f14 dbc5b562dfbf46d888285482e4fe52bb - - default default] (1/8): Attempting to detach device vdb with device alias ua-e877a3bb-b53d-4cbb-b2a6-4c88f284da4f from instance d9cb0583-a454-4df1-80f9-dc9f184101f2 from the live domain config. _detach_from_live_with_retry /usr/lib/python3.12/site-packages/nova/virt/libvirt/driver.py:2612
Oct 09 16:45:42 compute-0 nova_compute[117331]: 2025-10-09 16:45:42.505 2 DEBUG nova.virt.libvirt.guest [None req-9e152268-f981-438b-818c-45a72b93797c edda4b76bea746b7aec969bc10f68f14 dbc5b562dfbf46d888285482e4fe52bb - - default default] detach device xml: <disk type="file" device="disk">
Oct 09 16:45:42 compute-0 nova_compute[117331]:   <driver name="qemu" type="raw" cache="none" io="native"/>
Oct 09 16:45:42 compute-0 nova_compute[117331]:   <alias name="ua-e877a3bb-b53d-4cbb-b2a6-4c88f284da4f"/>
Oct 09 16:45:42 compute-0 nova_compute[117331]:   <source file="/var/lib/nova/mnt/56491fbd7607c943b28b644cf81cf3ff/volume-e877a3bb-b53d-4cbb-b2a6-4c88f284da4f"/>
Oct 09 16:45:42 compute-0 nova_compute[117331]:   <target dev="vdb" bus="virtio"/>
Oct 09 16:45:42 compute-0 nova_compute[117331]:   <serial>e877a3bb-b53d-4cbb-b2a6-4c88f284da4f</serial>
Oct 09 16:45:42 compute-0 nova_compute[117331]:   <address type="pci" domain="0x0000" bus="0x06" slot="0x00" function="0x0"/>
Oct 09 16:45:42 compute-0 nova_compute[117331]: </disk>
Oct 09 16:45:42 compute-0 nova_compute[117331]:  detach_device /usr/lib/python3.12/site-packages/nova/virt/libvirt/guest.py:466
Oct 09 16:45:42 compute-0 nova_compute[117331]: 2025-10-09 16:45:42.618 2 DEBUG nova.virt.libvirt.driver [None req-9e152268-f981-438b-818c-45a72b93797c edda4b76bea746b7aec969bc10f68f14 dbc5b562dfbf46d888285482e4fe52bb - - default default] Start waiting for the detach event from libvirt for device vdb with device alias ua-e877a3bb-b53d-4cbb-b2a6-4c88f284da4f for instance d9cb0583-a454-4df1-80f9-dc9f184101f2 _detach_from_live_and_wait_for_event /usr/lib/python3.12/site-packages/nova/virt/libvirt/driver.py:2688
Oct 09 16:45:42 compute-0 nova_compute[117331]: 2025-10-09 16:45:42.655 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 09 16:45:42 compute-0 sshd-session[155934]: Failed password for invalid user ubuntu from 124.60.67.43 port 46256 ssh2
Oct 09 16:45:43 compute-0 nova_compute[117331]: 2025-10-09 16:45:43.302 2 DEBUG oslo_service.periodic_task [None req-92a13bed-c081-4c7b-87f3-bafebefb79f2 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.12/site-packages/oslo_service/periodic_task.py:210
Oct 09 16:45:43 compute-0 nova_compute[117331]: 2025-10-09 16:45:43.306 2 DEBUG oslo_service.periodic_task [None req-92a13bed-c081-4c7b-87f3-bafebefb79f2 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.12/site-packages/oslo_service/periodic_task.py:210
Oct 09 16:45:43 compute-0 nova_compute[117331]: 2025-10-09 16:45:43.819 2 DEBUG oslo_concurrency.lockutils [None req-92a13bed-c081-4c7b-87f3-bafebefb79f2 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:405
Oct 09 16:45:43 compute-0 nova_compute[117331]: 2025-10-09 16:45:43.819 2 DEBUG oslo_concurrency.lockutils [None req-92a13bed-c081-4c7b-87f3-bafebefb79f2 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:410
Oct 09 16:45:43 compute-0 nova_compute[117331]: 2025-10-09 16:45:43.819 2 DEBUG oslo_concurrency.lockutils [None req-92a13bed-c081-4c7b-87f3-bafebefb79f2 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:424
Oct 09 16:45:43 compute-0 nova_compute[117331]: 2025-10-09 16:45:43.820 2 DEBUG nova.compute.resource_tracker [None req-92a13bed-c081-4c7b-87f3-bafebefb79f2 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.12/site-packages/nova/compute/resource_tracker.py:937
Oct 09 16:45:43 compute-0 podman[155939]: 2025-10-09 16:45:43.843096096 +0000 UTC m=+0.071736459 container health_status 270830b5167cafbab735d8118980f26987e673cd0fa464dd2202394830bcca66 (image=38.102.83.66:5001/podified-master-centos10/openstack-multipathd:watcher_latest, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 10 Base Image, managed_by=edpm_ansible, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': '38.102.83.66:5001/podified-master-centos10/openstack-multipathd:watcher_latest', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.vendor=CentOS, tcib_build_tag=watcher_latest, io.buildah.version=1.41.4, org.label-schema.build-date=20251007, org.label-schema.schema-version=1.0, tcib_managed=true, config_id=multipathd, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2)
Oct 09 16:45:43 compute-0 nova_compute[117331]: 2025-10-09 16:45:43.912 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 09 16:45:44 compute-0 sshd-session[155934]: Connection closed by invalid user ubuntu 124.60.67.43 port 46256 [preauth]
Oct 09 16:45:44 compute-0 nova_compute[117331]: 2025-10-09 16:45:44.872 2 DEBUG oslo_concurrency.processutils [None req-92a13bed-c081-4c7b-87f3-bafebefb79f2 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/d9cb0583-a454-4df1-80f9-dc9f184101f2/disk --force-share --output=json execute /usr/lib/python3.12/site-packages/oslo_concurrency/processutils.py:349
Oct 09 16:45:44 compute-0 nova_compute[117331]: 2025-10-09 16:45:44.945 2 DEBUG oslo_concurrency.processutils [None req-92a13bed-c081-4c7b-87f3-bafebefb79f2 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/d9cb0583-a454-4df1-80f9-dc9f184101f2/disk --force-share --output=json" returned: 0 in 0.073s execute /usr/lib/python3.12/site-packages/oslo_concurrency/processutils.py:372
Oct 09 16:45:44 compute-0 nova_compute[117331]: 2025-10-09 16:45:44.945 2 DEBUG oslo_concurrency.processutils [None req-92a13bed-c081-4c7b-87f3-bafebefb79f2 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/d9cb0583-a454-4df1-80f9-dc9f184101f2/disk --force-share --output=json execute /usr/lib/python3.12/site-packages/oslo_concurrency/processutils.py:349
Oct 09 16:45:45 compute-0 nova_compute[117331]: 2025-10-09 16:45:45.008 2 DEBUG oslo_concurrency.processutils [None req-92a13bed-c081-4c7b-87f3-bafebefb79f2 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/d9cb0583-a454-4df1-80f9-dc9f184101f2/disk --force-share --output=json" returned: 0 in 0.063s execute /usr/lib/python3.12/site-packages/oslo_concurrency/processutils.py:372
Oct 09 16:45:45 compute-0 nova_compute[117331]: 2025-10-09 16:45:45.169 2 WARNING nova.virt.libvirt.driver [None req-92a13bed-c081-4c7b-87f3-bafebefb79f2 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Oct 09 16:45:45 compute-0 nova_compute[117331]: 2025-10-09 16:45:45.171 2 DEBUG oslo_concurrency.processutils [None req-92a13bed-c081-4c7b-87f3-bafebefb79f2 - - - - - -] Running cmd (subprocess): env LANG=C uptime execute /usr/lib/python3.12/site-packages/oslo_concurrency/processutils.py:349
Oct 09 16:45:45 compute-0 nova_compute[117331]: 2025-10-09 16:45:45.192 2 DEBUG oslo_concurrency.processutils [None req-92a13bed-c081-4c7b-87f3-bafebefb79f2 - - - - - -] CMD "env LANG=C uptime" returned: 0 in 0.021s execute /usr/lib/python3.12/site-packages/oslo_concurrency/processutils.py:372
Oct 09 16:45:45 compute-0 nova_compute[117331]: 2025-10-09 16:45:45.193 2 DEBUG nova.compute.resource_tracker [None req-92a13bed-c081-4c7b-87f3-bafebefb79f2 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=5885MB free_disk=73.21649551391602GB free_vcpus=7 pci_devices=[{"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": 
"label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.12/site-packages/nova/compute/resource_tracker.py:1136
Oct 09 16:45:45 compute-0 nova_compute[117331]: 2025-10-09 16:45:45.193 2 DEBUG oslo_concurrency.lockutils [None req-92a13bed-c081-4c7b-87f3-bafebefb79f2 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:405
Oct 09 16:45:45 compute-0 nova_compute[117331]: 2025-10-09 16:45:45.193 2 DEBUG oslo_concurrency.lockutils [None req-92a13bed-c081-4c7b-87f3-bafebefb79f2 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:410
Oct 09 16:45:45 compute-0 unix_chkpwd[155968]: password check failed for user (root)
Oct 09 16:45:45 compute-0 sshd-session[155556]: pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=120.48.149.106  user=root
Oct 09 16:45:46 compute-0 nova_compute[117331]: 2025-10-09 16:45:46.242 2 DEBUG nova.compute.resource_tracker [None req-92a13bed-c081-4c7b-87f3-bafebefb79f2 - - - - - -] Instance d9cb0583-a454-4df1-80f9-dc9f184101f2 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.12/site-packages/nova/compute/resource_tracker.py:1740
Oct 09 16:45:46 compute-0 nova_compute[117331]: 2025-10-09 16:45:46.243 2 DEBUG nova.compute.resource_tracker [None req-92a13bed-c081-4c7b-87f3-bafebefb79f2 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 1 _report_final_resource_view /usr/lib/python3.12/site-packages/nova/compute/resource_tracker.py:1159
Oct 09 16:45:46 compute-0 nova_compute[117331]: 2025-10-09 16:45:46.243 2 DEBUG nova.compute.resource_tracker [None req-92a13bed-c081-4c7b-87f3-bafebefb79f2 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=640MB phys_disk=79GB used_disk=1GB total_vcpus=8 used_vcpus=1 pci_stats=[] stats={'failed_builds': '0', 'uptime': ' 16:45:45 up 54 min,  0 user,  load average: 0.35, 0.38, 0.42\n', 'num_instances': '1', 'num_vm_active': '1', 'num_task_None': '1', 'num_os_type_None': '1', 'num_proj_dbc5b562dfbf46d888285482e4fe52bb': '1', 'io_workload': '0'} _report_final_resource_view /usr/lib/python3.12/site-packages/nova/compute/resource_tracker.py:1168
Oct 09 16:45:46 compute-0 nova_compute[117331]: 2025-10-09 16:45:46.278 2 DEBUG nova.compute.provider_tree [None req-92a13bed-c081-4c7b-87f3-bafebefb79f2 - - - - - -] Inventory has not changed in ProviderTree for provider: 593051b8-2000-437f-a915-2616fc8b1671 update_inventory /usr/lib/python3.12/site-packages/nova/compute/provider_tree.py:180
Oct 09 16:45:46 compute-0 nova_compute[117331]: 2025-10-09 16:45:46.784 2 DEBUG nova.scheduler.client.report [None req-92a13bed-c081-4c7b-87f3-bafebefb79f2 - - - - - -] Inventory has not changed for provider 593051b8-2000-437f-a915-2616fc8b1671 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 79, 'reserved': 1, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.12/site-packages/nova/scheduler/client/report.py:958
Oct 09 16:45:46 compute-0 sshd-session[155556]: Failed password for root from 120.48.149.106 port 53170 ssh2
Oct 09 16:45:47 compute-0 nova_compute[117331]: 2025-10-09 16:45:47.306 2 DEBUG nova.compute.resource_tracker [None req-92a13bed-c081-4c7b-87f3-bafebefb79f2 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.12/site-packages/nova/compute/resource_tracker.py:1097
Oct 09 16:45:47 compute-0 nova_compute[117331]: 2025-10-09 16:45:47.307 2 DEBUG oslo_concurrency.lockutils [None req-92a13bed-c081-4c7b-87f3-bafebefb79f2 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 2.114s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:424
Oct 09 16:45:47 compute-0 nova_compute[117331]: 2025-10-09 16:45:47.657 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 09 16:45:47 compute-0 podman[155971]: 2025-10-09 16:45:47.851819134 +0000 UTC m=+0.075061535 container health_status 86aa93e887899e54d383b0fc17fb3c5637680d1d8e146a130a557c837f0260fe (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter)
Oct 09 16:45:48 compute-0 sshd-session[155556]: Connection closed by authenticating user root 120.48.149.106 port 53170 [preauth]
Oct 09 16:45:48 compute-0 nova_compute[117331]: 2025-10-09 16:45:48.308 2 DEBUG oslo_service.periodic_task [None req-92a13bed-c081-4c7b-87f3-bafebefb79f2 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.12/site-packages/oslo_service/periodic_task.py:210
Oct 09 16:45:48 compute-0 nova_compute[117331]: 2025-10-09 16:45:48.308 2 DEBUG nova.compute.manager [None req-92a13bed-c081-4c7b-87f3-bafebefb79f2 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.12/site-packages/nova/compute/manager.py:11228
Oct 09 16:45:48 compute-0 nova_compute[117331]: 2025-10-09 16:45:48.915 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 09 16:45:49 compute-0 nova_compute[117331]: 2025-10-09 16:45:49.306 2 DEBUG oslo_service.periodic_task [None req-92a13bed-c081-4c7b-87f3-bafebefb79f2 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.12/site-packages/oslo_service/periodic_task.py:210
Oct 09 16:45:51 compute-0 nova_compute[117331]: 2025-10-09 16:45:51.306 2 DEBUG oslo_service.periodic_task [None req-92a13bed-c081-4c7b-87f3-bafebefb79f2 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.12/site-packages/oslo_service/periodic_task.py:210
Oct 09 16:45:52 compute-0 nova_compute[117331]: 2025-10-09 16:45:52.660 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 09 16:45:52 compute-0 podman[155996]: 2025-10-09 16:45:52.853902528 +0000 UTC m=+0.079990253 container health_status 8227728d23f374cb9528be6ddf01dff38d8c792912a17b0d137932a18390b820 (image=38.102.83.66:5001/podified-master-centos10/openstack-iscsid:watcher_latest, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.4, tcib_build_tag=watcher_latest, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': '38.102.83.66:5001/podified-master-centos10/openstack-iscsid:watcher_latest', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.build-date=20251007, tcib_managed=true, config_id=iscsid, container_name=iscsid)
Oct 09 16:45:52 compute-0 podman[155995]: 2025-10-09 16:45:52.857927608 +0000 UTC m=+0.085745969 container health_status 0e966c7a283689daf23c5db5017f23f685ee836319240188a9f70ddd246563c5 (image=38.102.83.66:5001/podified-master-centos10/openstack-neutron-metadata-agent-ovn:watcher_latest, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.build-date=20251007, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=watcher_latest, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': '38.102.83.66:5001/podified-master-centos10/openstack-neutron-metadata-agent-ovn:watcher_latest', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', 
'/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.41.4, maintainer=OpenStack Kubernetes Operator team)
Oct 09 16:45:53 compute-0 nova_compute[117331]: 2025-10-09 16:45:53.918 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 09 16:45:57 compute-0 nova_compute[117331]: 2025-10-09 16:45:57.683 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 09 16:45:58 compute-0 nova_compute[117331]: 2025-10-09 16:45:58.920 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 09 16:45:59 compute-0 podman[127775]: time="2025-10-09T16:45:59Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Oct 09 16:45:59 compute-0 podman[127775]: @ - - [09/Oct/2025:16:45:59 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 20749 "" "Go-http-client/1.1"
Oct 09 16:45:59 compute-0 podman[127775]: @ - - [09/Oct/2025:16:45:59 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 3496 "" "Go-http-client/1.1"
Oct 09 16:46:01 compute-0 openstack_network_exporter[129925]: ERROR   16:46:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Oct 09 16:46:01 compute-0 openstack_network_exporter[129925]: ERROR   16:46:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Oct 09 16:46:01 compute-0 openstack_network_exporter[129925]: ERROR   16:46:01 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Oct 09 16:46:01 compute-0 openstack_network_exporter[129925]: ERROR   16:46:01 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Oct 09 16:46:01 compute-0 openstack_network_exporter[129925]: 
Oct 09 16:46:01 compute-0 openstack_network_exporter[129925]: ERROR   16:46:01 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Oct 09 16:46:01 compute-0 openstack_network_exporter[129925]: 
Oct 09 16:46:02 compute-0 nova_compute[117331]: 2025-10-09 16:46:02.621 2 WARNING nova.virt.libvirt.driver [None req-9e152268-f981-438b-818c-45a72b93797c edda4b76bea746b7aec969bc10f68f14 dbc5b562dfbf46d888285482e4fe52bb - - default default] Waiting for libvirt event about the detach of device vdb with device alias ua-e877a3bb-b53d-4cbb-b2a6-4c88f284da4f from instance d9cb0583-a454-4df1-80f9-dc9f184101f2 is timed out.
Oct 09 16:46:02 compute-0 nova_compute[117331]: 2025-10-09 16:46:02.630 2 INFO nova.virt.libvirt.driver [None req-9e152268-f981-438b-818c-45a72b93797c edda4b76bea746b7aec969bc10f68f14 dbc5b562dfbf46d888285482e4fe52bb - - default default] Successfully detached device vdb from instance d9cb0583-a454-4df1-80f9-dc9f184101f2 from the live domain config.
Oct 09 16:46:02 compute-0 nova_compute[117331]: 2025-10-09 16:46:02.634 2 DEBUG nova.virt.libvirt.volume.mount [None req-9e152268-f981-438b-818c-45a72b93797c edda4b76bea746b7aec969bc10f68f14 dbc5b562dfbf46d888285482e4fe52bb - - default default] Got _HostMountState generation 0 get_state /usr/lib/python3.12/site-packages/nova/virt/libvirt/volume/mount.py:91
Oct 09 16:46:02 compute-0 nova_compute[117331]: 2025-10-09 16:46:02.634 2 DEBUG nova.virt.libvirt.volume.mount [None req-9e152268-f981-438b-818c-45a72b93797c edda4b76bea746b7aec969bc10f68f14 dbc5b562dfbf46d888285482e4fe52bb - - default default] [instance: d9cb0583-a454-4df1-80f9-dc9f184101f2] _HostMountState.umount(vol_name=volume-e877a3bb-b53d-4cbb-b2a6-4c88f284da4f, mountpoint=/var/lib/nova/mnt/56491fbd7607c943b28b644cf81cf3ff) generation 0 umount /usr/lib/python3.12/site-packages/nova/virt/libvirt/volume/mount.py:349
Oct 09 16:46:02 compute-0 nova_compute[117331]: 2025-10-09 16:46:02.636 2 DEBUG nova.virt.libvirt.volume.mount [None req-9e152268-f981-438b-818c-45a72b93797c edda4b76bea746b7aec969bc10f68f14 dbc5b562dfbf46d888285482e4fe52bb - - default default] Unmounting /var/lib/nova/mnt/56491fbd7607c943b28b644cf81cf3ff generation 0 _real_umount /usr/lib/python3.12/site-packages/nova/virt/libvirt/volume/mount.py:382
Oct 09 16:46:02 compute-0 systemd[1]: var-lib-nova-mnt-56491fbd7607c943b28b644cf81cf3ff.mount: Deactivated successfully.
Oct 09 16:46:02 compute-0 nova_compute[117331]: 2025-10-09 16:46:02.685 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 09 16:46:02 compute-0 nova_compute[117331]: 2025-10-09 16:46:02.688 2 DEBUG nova.virt.libvirt.volume.mount [None req-9e152268-f981-438b-818c-45a72b93797c edda4b76bea746b7aec969bc10f68f14 dbc5b562dfbf46d888285482e4fe52bb - - default default] [instance: d9cb0583-a454-4df1-80f9-dc9f184101f2] _HostMountState.umount() for /var/lib/nova/mnt/56491fbd7607c943b28b644cf81cf3ff generation 0 completed successfully umount /usr/lib/python3.12/site-packages/nova/virt/libvirt/volume/mount.py:372
Oct 09 16:46:02 compute-0 podman[156037]: 2025-10-09 16:46:02.799767792 +0000 UTC m=+0.098072854 container health_status 64fa251112d01abb34ec1bb8e22019815ddcaaaa391ce5ec91517147ac5a9994 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., url=https://catalog.redhat.com/en/search?searchType=containers, vendor=Red Hat, Inc., architecture=x86_64, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., io.openshift.expose-services=, managed_by=edpm_ansible, container_name=openstack_network_exporter, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.tags=minimal rhel9, vcs-type=git, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, name=ubi9-minimal, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, io.buildah.version=1.33.7, config_id=edpm, maintainer=Red Hat, Inc., version=9.6, build-date=2025-08-20T13:12:41, com.redhat.component=ubi9-minimal-container, distribution-scope=public, release=1755695350, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']})
Oct 09 16:46:02 compute-0 podman[156038]: 2025-10-09 16:46:02.836165243 +0000 UTC m=+0.127449880 container health_status d615e4f24ff62fbb14be4a63e6da568ec5b22309773c1c66103b6b4de64a8a2f (image=38.102.83.66:5001/podified-master-centos10/openstack-ovn-controller:watcher_latest, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=watcher_latest, container_name=ovn_controller, io.buildah.version=1.41.4, config_id=ovn_controller, org.label-schema.build-date=20251007, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': '38.102.83.66:5001/podified-master-centos10/openstack-ovn-controller:watcher_latest', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team)
Oct 09 16:46:03 compute-0 nova_compute[117331]: 2025-10-09 16:46:03.891 2 DEBUG oslo_concurrency.lockutils [None req-9e152268-f981-438b-818c-45a72b93797c edda4b76bea746b7aec969bc10f68f14 dbc5b562dfbf46d888285482e4fe52bb - - default default] Lock "d9cb0583-a454-4df1-80f9-dc9f184101f2" "released" by "nova.compute.manager.ComputeManager.detach_volume.<locals>.do_detach_volume" :: held 22.507s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:424
Oct 09 16:46:03 compute-0 nova_compute[117331]: 2025-10-09 16:46:03.922 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 09 16:46:05 compute-0 nova_compute[117331]: 2025-10-09 16:46:05.823 2 DEBUG oslo_concurrency.lockutils [None req-87cda506-0c06-4a45-8b5c-f177ed3d9d55 edda4b76bea746b7aec969bc10f68f14 dbc5b562dfbf46d888285482e4fe52bb - - default default] Acquiring lock "d9cb0583-a454-4df1-80f9-dc9f184101f2" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:405
Oct 09 16:46:05 compute-0 nova_compute[117331]: 2025-10-09 16:46:05.824 2 DEBUG oslo_concurrency.lockutils [None req-87cda506-0c06-4a45-8b5c-f177ed3d9d55 edda4b76bea746b7aec969bc10f68f14 dbc5b562dfbf46d888285482e4fe52bb - - default default] Lock "d9cb0583-a454-4df1-80f9-dc9f184101f2" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:410
Oct 09 16:46:05 compute-0 nova_compute[117331]: 2025-10-09 16:46:05.824 2 DEBUG oslo_concurrency.lockutils [None req-87cda506-0c06-4a45-8b5c-f177ed3d9d55 edda4b76bea746b7aec969bc10f68f14 dbc5b562dfbf46d888285482e4fe52bb - - default default] Acquiring lock "d9cb0583-a454-4df1-80f9-dc9f184101f2-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:405
Oct 09 16:46:05 compute-0 nova_compute[117331]: 2025-10-09 16:46:05.825 2 DEBUG oslo_concurrency.lockutils [None req-87cda506-0c06-4a45-8b5c-f177ed3d9d55 edda4b76bea746b7aec969bc10f68f14 dbc5b562dfbf46d888285482e4fe52bb - - default default] Lock "d9cb0583-a454-4df1-80f9-dc9f184101f2-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:410
Oct 09 16:46:05 compute-0 nova_compute[117331]: 2025-10-09 16:46:05.825 2 DEBUG oslo_concurrency.lockutils [None req-87cda506-0c06-4a45-8b5c-f177ed3d9d55 edda4b76bea746b7aec969bc10f68f14 dbc5b562dfbf46d888285482e4fe52bb - - default default] Lock "d9cb0583-a454-4df1-80f9-dc9f184101f2-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:424
Oct 09 16:46:05 compute-0 nova_compute[117331]: 2025-10-09 16:46:05.840 2 INFO nova.compute.manager [None req-87cda506-0c06-4a45-8b5c-f177ed3d9d55 edda4b76bea746b7aec969bc10f68f14 dbc5b562dfbf46d888285482e4fe52bb - - default default] [instance: d9cb0583-a454-4df1-80f9-dc9f184101f2] Terminating instance
Oct 09 16:46:06 compute-0 nova_compute[117331]: 2025-10-09 16:46:06.356 2 DEBUG nova.compute.manager [None req-87cda506-0c06-4a45-8b5c-f177ed3d9d55 edda4b76bea746b7aec969bc10f68f14 dbc5b562dfbf46d888285482e4fe52bb - - default default] [instance: d9cb0583-a454-4df1-80f9-dc9f184101f2] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.12/site-packages/nova/compute/manager.py:3197
Oct 09 16:46:06 compute-0 kernel: tapb7eb3265-ba (unregistering): left promiscuous mode
Oct 09 16:46:06 compute-0 NetworkManager[1028]: <info>  [1760028366.3819] device (tapb7eb3265-ba): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Oct 09 16:46:06 compute-0 ovn_controller[19752]: 2025-10-09T16:46:06Z|00312|binding|INFO|Releasing lport b7eb3265-ba4d-4b59-9f4f-d449f3a8d71e from this chassis (sb_readonly=0)
Oct 09 16:46:06 compute-0 ovn_controller[19752]: 2025-10-09T16:46:06Z|00313|binding|INFO|Setting lport b7eb3265-ba4d-4b59-9f4f-d449f3a8d71e down in Southbound
Oct 09 16:46:06 compute-0 ovn_controller[19752]: 2025-10-09T16:46:06Z|00314|binding|INFO|Removing iface tapb7eb3265-ba ovn-installed in OVS
Oct 09 16:46:06 compute-0 nova_compute[117331]: 2025-10-09 16:46:06.396 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 09 16:46:06 compute-0 ovn_metadata_agent[28608]: 2025-10-09 16:46:06.401 28613 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:8b:c4:76 10.100.0.9'], port_security=['fa:16:3e:8b:c4:76 10.100.0.9'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.9/28', 'neutron:device_id': 'd9cb0583-a454-4df1-80f9-dc9f184101f2', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-052ac062-eaae-4bf0-a40e-d100eb37efd8', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'dbc5b562dfbf46d888285482e4fe52bb', 'neutron:revision_number': '5', 'neutron:security_group_ids': '279998fa-fe6f-4d20-a70e-de82ebfbc63a', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=d539eb18-5c5b-45b9-86bd-feaec6a9254a, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f37b2576840>], logical_port=b7eb3265-ba4d-4b59-9f4f-d449f3a8d71e) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f37b2576840>]) matches /usr/lib/python3.12/site-packages/ovsdbapp/backend/ovs_idl/event.py:55
Oct 09 16:46:06 compute-0 ovn_metadata_agent[28608]: 2025-10-09 16:46:06.402 28613 INFO neutron.agent.ovn.metadata.agent [-] Port b7eb3265-ba4d-4b59-9f4f-d449f3a8d71e in datapath 052ac062-eaae-4bf0-a40e-d100eb37efd8 unbound from our chassis
Oct 09 16:46:06 compute-0 ovn_metadata_agent[28608]: 2025-10-09 16:46:06.403 28613 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 052ac062-eaae-4bf0-a40e-d100eb37efd8, tearing the namespace down if needed _get_provision_params /usr/lib/python3.12/site-packages/neutron/agent/ovn/metadata/agent.py:756
Oct 09 16:46:06 compute-0 ovn_metadata_agent[28608]: 2025-10-09 16:46:06.412 139687 DEBUG oslo.privsep.daemon [-] privsep: reply[a35818e6-6e2f-46e6-9c39-46db1cbb52a7]: (4, True) _call_back /usr/lib/python3.12/site-packages/oslo_privsep/daemon.py:515
Oct 09 16:46:06 compute-0 ovn_metadata_agent[28608]: 2025-10-09 16:46:06.412 28613 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-052ac062-eaae-4bf0-a40e-d100eb37efd8 namespace which is not needed anymore
Oct 09 16:46:06 compute-0 nova_compute[117331]: 2025-10-09 16:46:06.426 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 09 16:46:06 compute-0 systemd[1]: machine-qemu\x2d26\x2dinstance\x2d00000022.scope: Deactivated successfully.
Oct 09 16:46:06 compute-0 systemd[1]: machine-qemu\x2d26\x2dinstance\x2d00000022.scope: Consumed 13.956s CPU time.
Oct 09 16:46:06 compute-0 systemd-machined[77487]: Machine qemu-26-instance-00000022 terminated.
Oct 09 16:46:06 compute-0 nova_compute[117331]: 2025-10-09 16:46:06.537 2 DEBUG nova.compute.manager [req-d797f359-9c17-44ac-a999-3d9a695c968c req-f6ac993a-bfaf-4582-b412-53c043c0b5c0 ee15961ad2be4bada6c106ac4f69f422 c076c42271004e7aa86e84416e33f826 - - default default] [instance: d9cb0583-a454-4df1-80f9-dc9f184101f2] Received event network-vif-unplugged-b7eb3265-ba4d-4b59-9f4f-d449f3a8d71e external_instance_event /usr/lib/python3.12/site-packages/nova/compute/manager.py:11812
Oct 09 16:46:06 compute-0 nova_compute[117331]: 2025-10-09 16:46:06.537 2 DEBUG oslo_concurrency.lockutils [req-d797f359-9c17-44ac-a999-3d9a695c968c req-f6ac993a-bfaf-4582-b412-53c043c0b5c0 ee15961ad2be4bada6c106ac4f69f422 c076c42271004e7aa86e84416e33f826 - - default default] Acquiring lock "d9cb0583-a454-4df1-80f9-dc9f184101f2-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:405
Oct 09 16:46:06 compute-0 nova_compute[117331]: 2025-10-09 16:46:06.538 2 DEBUG oslo_concurrency.lockutils [req-d797f359-9c17-44ac-a999-3d9a695c968c req-f6ac993a-bfaf-4582-b412-53c043c0b5c0 ee15961ad2be4bada6c106ac4f69f422 c076c42271004e7aa86e84416e33f826 - - default default] Lock "d9cb0583-a454-4df1-80f9-dc9f184101f2-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:410
Oct 09 16:46:06 compute-0 nova_compute[117331]: 2025-10-09 16:46:06.538 2 DEBUG oslo_concurrency.lockutils [req-d797f359-9c17-44ac-a999-3d9a695c968c req-f6ac993a-bfaf-4582-b412-53c043c0b5c0 ee15961ad2be4bada6c106ac4f69f422 c076c42271004e7aa86e84416e33f826 - - default default] Lock "d9cb0583-a454-4df1-80f9-dc9f184101f2-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:424
Oct 09 16:46:06 compute-0 nova_compute[117331]: 2025-10-09 16:46:06.538 2 DEBUG nova.compute.manager [req-d797f359-9c17-44ac-a999-3d9a695c968c req-f6ac993a-bfaf-4582-b412-53c043c0b5c0 ee15961ad2be4bada6c106ac4f69f422 c076c42271004e7aa86e84416e33f826 - - default default] [instance: d9cb0583-a454-4df1-80f9-dc9f184101f2] No waiting events found dispatching network-vif-unplugged-b7eb3265-ba4d-4b59-9f4f-d449f3a8d71e pop_instance_event /usr/lib/python3.12/site-packages/nova/compute/manager.py:344
Oct 09 16:46:06 compute-0 nova_compute[117331]: 2025-10-09 16:46:06.538 2 DEBUG nova.compute.manager [req-d797f359-9c17-44ac-a999-3d9a695c968c req-f6ac993a-bfaf-4582-b412-53c043c0b5c0 ee15961ad2be4bada6c106ac4f69f422 c076c42271004e7aa86e84416e33f826 - - default default] [instance: d9cb0583-a454-4df1-80f9-dc9f184101f2] Received event network-vif-unplugged-b7eb3265-ba4d-4b59-9f4f-d449f3a8d71e for instance with task_state deleting. _process_instance_event /usr/lib/python3.12/site-packages/nova/compute/manager.py:11590
Oct 09 16:46:06 compute-0 neutron-haproxy-ovnmeta-052ac062-eaae-4bf0-a40e-d100eb37efd8[155731]: [NOTICE]   (155735) : haproxy version is 3.0.5-8e879a5
Oct 09 16:46:06 compute-0 neutron-haproxy-ovnmeta-052ac062-eaae-4bf0-a40e-d100eb37efd8[155731]: [NOTICE]   (155735) : path to executable is /usr/sbin/haproxy
Oct 09 16:46:06 compute-0 neutron-haproxy-ovnmeta-052ac062-eaae-4bf0-a40e-d100eb37efd8[155731]: [WARNING]  (155735) : Exiting Master process...
Oct 09 16:46:06 compute-0 podman[156110]: 2025-10-09 16:46:06.573740579 +0000 UTC m=+0.039410507 container kill a0afccb9e54c26db32c31c4bdf89bfc2c97f5f764fff8cd14b9042522835cdfe (image=38.102.83.66:5001/podified-master-centos10/openstack-neutron-metadata-agent-ovn:watcher_latest, name=neutron-haproxy-ovnmeta-052ac062-eaae-4bf0-a40e-d100eb37efd8, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=watcher_latest, io.buildah.version=1.41.4, org.label-schema.build-date=20251007, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 10 Base Image)
Oct 09 16:46:06 compute-0 neutron-haproxy-ovnmeta-052ac062-eaae-4bf0-a40e-d100eb37efd8[155731]: [ALERT]    (155735) : Current worker (155737) exited with code 143 (Terminated)
Oct 09 16:46:06 compute-0 neutron-haproxy-ovnmeta-052ac062-eaae-4bf0-a40e-d100eb37efd8[155731]: [WARNING]  (155735) : All workers exited. Exiting... (0)
Oct 09 16:46:06 compute-0 systemd[1]: libpod-a0afccb9e54c26db32c31c4bdf89bfc2c97f5f764fff8cd14b9042522835cdfe.scope: Deactivated successfully.
Oct 09 16:46:06 compute-0 podman[156129]: 2025-10-09 16:46:06.62537364 +0000 UTC m=+0.028665113 container died a0afccb9e54c26db32c31c4bdf89bfc2c97f5f764fff8cd14b9042522835cdfe (image=38.102.83.66:5001/podified-master-centos10/openstack-neutron-metadata-agent-ovn:watcher_latest, name=neutron-haproxy-ovnmeta-052ac062-eaae-4bf0-a40e-d100eb37efd8, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251007, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, tcib_build_tag=watcher_latest, tcib_managed=true, io.buildah.version=1.41.4, maintainer=OpenStack Kubernetes Operator team)
Oct 09 16:46:06 compute-0 nova_compute[117331]: 2025-10-09 16:46:06.638 2 INFO nova.virt.libvirt.driver [-] [instance: d9cb0583-a454-4df1-80f9-dc9f184101f2] Instance destroyed successfully.
Oct 09 16:46:06 compute-0 nova_compute[117331]: 2025-10-09 16:46:06.639 2 DEBUG nova.objects.instance [None req-87cda506-0c06-4a45-8b5c-f177ed3d9d55 edda4b76bea746b7aec969bc10f68f14 dbc5b562dfbf46d888285482e4fe52bb - - default default] Lazy-loading 'resources' on Instance uuid d9cb0583-a454-4df1-80f9-dc9f184101f2 obj_load_attr /usr/lib/python3.12/site-packages/nova/objects/instance.py:1141
Oct 09 16:46:06 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-a0afccb9e54c26db32c31c4bdf89bfc2c97f5f764fff8cd14b9042522835cdfe-userdata-shm.mount: Deactivated successfully.
Oct 09 16:46:06 compute-0 systemd[1]: var-lib-containers-storage-overlay-ba56a72677d74942829417ac261d69735349962a64bb8195beb8fd6da3538b0e-merged.mount: Deactivated successfully.
Oct 09 16:46:06 compute-0 podman[156129]: 2025-10-09 16:46:06.676701121 +0000 UTC m=+0.079992614 container cleanup a0afccb9e54c26db32c31c4bdf89bfc2c97f5f764fff8cd14b9042522835cdfe (image=38.102.83.66:5001/podified-master-centos10/openstack-neutron-metadata-agent-ovn:watcher_latest, name=neutron-haproxy-ovnmeta-052ac062-eaae-4bf0-a40e-d100eb37efd8, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.build-date=20251007, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=watcher_latest, tcib_managed=true, io.buildah.version=1.41.4)
Oct 09 16:46:06 compute-0 systemd[1]: libpod-conmon-a0afccb9e54c26db32c31c4bdf89bfc2c97f5f764fff8cd14b9042522835cdfe.scope: Deactivated successfully.
Oct 09 16:46:06 compute-0 podman[156143]: 2025-10-09 16:46:06.69813288 +0000 UTC m=+0.069006250 container remove a0afccb9e54c26db32c31c4bdf89bfc2c97f5f764fff8cd14b9042522835cdfe (image=38.102.83.66:5001/podified-master-centos10/openstack-neutron-metadata-agent-ovn:watcher_latest, name=neutron-haproxy-ovnmeta-052ac062-eaae-4bf0-a40e-d100eb37efd8, tcib_build_tag=watcher_latest, io.buildah.version=1.41.4, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251007, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, tcib_managed=true, org.label-schema.name=CentOS Stream 10 Base Image)
Oct 09 16:46:06 compute-0 ovn_metadata_agent[28608]: 2025-10-09 16:46:06.706 139687 DEBUG oslo.privsep.daemon [-] privsep: reply[bf5251d8-ed52-4f8c-9465-065a42077476]: (4, ("Thu Oct  9 04:46:06 PM UTC 2025 Sending signal '15' to neutron-haproxy-ovnmeta-052ac062-eaae-4bf0-a40e-d100eb37efd8 (a0afccb9e54c26db32c31c4bdf89bfc2c97f5f764fff8cd14b9042522835cdfe)\na0afccb9e54c26db32c31c4bdf89bfc2c97f5f764fff8cd14b9042522835cdfe\nThu Oct  9 04:46:06 PM UTC 2025 Deleting container neutron-haproxy-ovnmeta-052ac062-eaae-4bf0-a40e-d100eb37efd8 (a0afccb9e54c26db32c31c4bdf89bfc2c97f5f764fff8cd14b9042522835cdfe)\na0afccb9e54c26db32c31c4bdf89bfc2c97f5f764fff8cd14b9042522835cdfe\n", '', 0)) _call_back /usr/lib/python3.12/site-packages/oslo_privsep/daemon.py:515
Oct 09 16:46:06 compute-0 ovn_metadata_agent[28608]: 2025-10-09 16:46:06.707 139687 DEBUG oslo.privsep.daemon [-] privsep: reply[d7af722c-5921-463c-9cda-a8b64e616166]: (4, None) _call_back /usr/lib/python3.12/site-packages/oslo_privsep/daemon.py:515
Oct 09 16:46:06 compute-0 ovn_metadata_agent[28608]: 2025-10-09 16:46:06.708 28613 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/052ac062-eaae-4bf0-a40e-d100eb37efd8.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/052ac062-eaae-4bf0-a40e-d100eb37efd8.pid.haproxy' get_value_from_file /usr/lib/python3.12/site-packages/neutron/agent/linux/utils.py:268
Oct 09 16:46:06 compute-0 ovn_metadata_agent[28608]: 2025-10-09 16:46:06.708 139687 DEBUG oslo.privsep.daemon [-] privsep: reply[334f8ddc-c079-41e1-9c8b-7b3eda7abbd0]: (4, None) _call_back /usr/lib/python3.12/site-packages/oslo_privsep/daemon.py:515
Oct 09 16:46:06 compute-0 ovn_metadata_agent[28608]: 2025-10-09 16:46:06.709 28613 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap052ac062-e0, bridge=None, if_exists=True) do_commit /usr/lib/python3.12/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 09 16:46:06 compute-0 nova_compute[117331]: 2025-10-09 16:46:06.710 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 09 16:46:06 compute-0 kernel: tap052ac062-e0: left promiscuous mode
Oct 09 16:46:06 compute-0 nova_compute[117331]: 2025-10-09 16:46:06.742 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 09 16:46:06 compute-0 ovn_metadata_agent[28608]: 2025-10-09 16:46:06.744 139687 DEBUG oslo.privsep.daemon [-] privsep: reply[2a658a3d-48d9-4c7a-bcbc-0ee79da2a632]: (4, True) _call_back /usr/lib/python3.12/site-packages/oslo_privsep/daemon.py:515
Oct 09 16:46:06 compute-0 ovn_metadata_agent[28608]: 2025-10-09 16:46:06.783 139687 DEBUG oslo.privsep.daemon [-] privsep: reply[19cfc1bc-7e11-4b0e-8cac-d5b9e149b995]: (4, None) _call_back /usr/lib/python3.12/site-packages/oslo_privsep/daemon.py:515
Oct 09 16:46:06 compute-0 ovn_metadata_agent[28608]: 2025-10-09 16:46:06.784 139687 DEBUG oslo.privsep.daemon [-] privsep: reply[c289ede2-214c-47ef-8651-63d9cb811b6c]: (4, True) _call_back /usr/lib/python3.12/site-packages/oslo_privsep/daemon.py:515
Oct 09 16:46:06 compute-0 ovn_metadata_agent[28608]: 2025-10-09 16:46:06.808 139687 DEBUG oslo.privsep.daemon [-] privsep: reply[8339e864-a709-405a-83b8-ff5eb824698c]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 325027, 'reachable_time': 44305, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 156181, 'error': None, 'target': 'ovnmeta-052ac062-eaae-4bf0-a40e-d100eb37efd8', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.12/site-packages/oslo_privsep/daemon.py:515
Oct 09 16:46:06 compute-0 systemd[1]: run-netns-ovnmeta\x2d052ac062\x2deaae\x2d4bf0\x2da40e\x2dd100eb37efd8.mount: Deactivated successfully.
Oct 09 16:46:06 compute-0 ovn_metadata_agent[28608]: 2025-10-09 16:46:06.814 28727 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-052ac062-eaae-4bf0-a40e-d100eb37efd8 deleted. remove_netns /usr/lib/python3.12/site-packages/neutron/privileged/agent/linux/ip_lib.py:603
Oct 09 16:46:06 compute-0 ovn_metadata_agent[28608]: 2025-10-09 16:46:06.814 28727 DEBUG oslo.privsep.daemon [-] privsep: reply[ceac70f9-4128-46ef-a862-64297d74bbda]: (4, None) _call_back /usr/lib/python3.12/site-packages/oslo_privsep/daemon.py:515
Oct 09 16:46:07 compute-0 nova_compute[117331]: 2025-10-09 16:46:07.145 2 DEBUG nova.virt.libvirt.vif [None req-87cda506-0c06-4a45-8b5c-f177ed3d9d55 edda4b76bea746b7aec969bc10f68f14 dbc5b562dfbf46d888285482e4fe52bb - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,compute_id=1,config_drive='True',created_at=2025-10-09T16:44:50Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description=None,display_name='tempest-TestExecuteZoneMigrationStrategyVolume-server-1045211055',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testexecutezonemigrationstrategyvolume-server-104521105',id=34,image_ref='b7d6e0af-25e4-4227-9dc6-43143898ceee',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-10-09T16:45:08Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='dbc5b562dfbf46d888285482e4fe52bb',ramdisk_id='',reservation_id='r-8s3dawuh',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member,manager,admin',image_base_image_ref='b7d6e0af-25e4-4227-9dc6-43143898ceee',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio'
,image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestExecuteZoneMigrationStrategyVolume-905162571',owner_user_name='tempest-TestExecuteZoneMigrationStrategyVolume-905162571-project-admin'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-10-09T16:45:08Z,user_data=None,user_id='edda4b76bea746b7aec969bc10f68f14',uuid=d9cb0583-a454-4df1-80f9-dc9f184101f2,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "b7eb3265-ba4d-4b59-9f4f-d449f3a8d71e", "address": "fa:16:3e:8b:c4:76", "network": {"id": "052ac062-eaae-4bf0-a40e-d100eb37efd8", "bridge": "br-int", "label": "tempest-TestExecuteZoneMigrationStrategyVolume-1366874524-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "35a62963f2914ca4a7e5f2a6b9a36e93", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb7eb3265-ba", "ovs_interfaceid": "b7eb3265-ba4d-4b59-9f4f-d449f3a8d71e", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.12/site-packages/nova/virt/libvirt/vif.py:839
Oct 09 16:46:07 compute-0 nova_compute[117331]: 2025-10-09 16:46:07.146 2 DEBUG nova.network.os_vif_util [None req-87cda506-0c06-4a45-8b5c-f177ed3d9d55 edda4b76bea746b7aec969bc10f68f14 dbc5b562dfbf46d888285482e4fe52bb - - default default] Converting VIF {"id": "b7eb3265-ba4d-4b59-9f4f-d449f3a8d71e", "address": "fa:16:3e:8b:c4:76", "network": {"id": "052ac062-eaae-4bf0-a40e-d100eb37efd8", "bridge": "br-int", "label": "tempest-TestExecuteZoneMigrationStrategyVolume-1366874524-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "35a62963f2914ca4a7e5f2a6b9a36e93", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb7eb3265-ba", "ovs_interfaceid": "b7eb3265-ba4d-4b59-9f4f-d449f3a8d71e", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.12/site-packages/nova/network/os_vif_util.py:511
Oct 09 16:46:07 compute-0 nova_compute[117331]: 2025-10-09 16:46:07.147 2 DEBUG nova.network.os_vif_util [None req-87cda506-0c06-4a45-8b5c-f177ed3d9d55 edda4b76bea746b7aec969bc10f68f14 dbc5b562dfbf46d888285482e4fe52bb - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:8b:c4:76,bridge_name='br-int',has_traffic_filtering=True,id=b7eb3265-ba4d-4b59-9f4f-d449f3a8d71e,network=Network(052ac062-eaae-4bf0-a40e-d100eb37efd8),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapb7eb3265-ba') nova_to_osvif_vif /usr/lib/python3.12/site-packages/nova/network/os_vif_util.py:548
Oct 09 16:46:07 compute-0 nova_compute[117331]: 2025-10-09 16:46:07.148 2 DEBUG os_vif [None req-87cda506-0c06-4a45-8b5c-f177ed3d9d55 edda4b76bea746b7aec969bc10f68f14 dbc5b562dfbf46d888285482e4fe52bb - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:8b:c4:76,bridge_name='br-int',has_traffic_filtering=True,id=b7eb3265-ba4d-4b59-9f4f-d449f3a8d71e,network=Network(052ac062-eaae-4bf0-a40e-d100eb37efd8),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapb7eb3265-ba') unplug /usr/lib/python3.12/site-packages/os_vif/__init__.py:109
Oct 09 16:46:07 compute-0 nova_compute[117331]: 2025-10-09 16:46:07.151 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 22 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 09 16:46:07 compute-0 nova_compute[117331]: 2025-10-09 16:46:07.151 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapb7eb3265-ba, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.12/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 09 16:46:07 compute-0 nova_compute[117331]: 2025-10-09 16:46:07.155 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 09 16:46:07 compute-0 nova_compute[117331]: 2025-10-09 16:46:07.156 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 22 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 09 16:46:07 compute-0 nova_compute[117331]: 2025-10-09 16:46:07.157 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbDestroyCommand(_result=None, table=QoS, record=afa3e686-bd02-466d-bec1-42f93daaed13) do_commit /usr/lib/python3.12/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 09 16:46:07 compute-0 nova_compute[117331]: 2025-10-09 16:46:07.158 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 09 16:46:07 compute-0 nova_compute[117331]: 2025-10-09 16:46:07.159 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 09 16:46:07 compute-0 nova_compute[117331]: 2025-10-09 16:46:07.162 2 INFO os_vif [None req-87cda506-0c06-4a45-8b5c-f177ed3d9d55 edda4b76bea746b7aec969bc10f68f14 dbc5b562dfbf46d888285482e4fe52bb - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:8b:c4:76,bridge_name='br-int',has_traffic_filtering=True,id=b7eb3265-ba4d-4b59-9f4f-d449f3a8d71e,network=Network(052ac062-eaae-4bf0-a40e-d100eb37efd8),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapb7eb3265-ba')
Oct 09 16:46:07 compute-0 nova_compute[117331]: 2025-10-09 16:46:07.163 2 INFO nova.virt.libvirt.driver [None req-87cda506-0c06-4a45-8b5c-f177ed3d9d55 edda4b76bea746b7aec969bc10f68f14 dbc5b562dfbf46d888285482e4fe52bb - - default default] [instance: d9cb0583-a454-4df1-80f9-dc9f184101f2] Deleting instance files /var/lib/nova/instances/d9cb0583-a454-4df1-80f9-dc9f184101f2_del
Oct 09 16:46:07 compute-0 nova_compute[117331]: 2025-10-09 16:46:07.164 2 INFO nova.virt.libvirt.driver [None req-87cda506-0c06-4a45-8b5c-f177ed3d9d55 edda4b76bea746b7aec969bc10f68f14 dbc5b562dfbf46d888285482e4fe52bb - - default default] [instance: d9cb0583-a454-4df1-80f9-dc9f184101f2] Deletion of /var/lib/nova/instances/d9cb0583-a454-4df1-80f9-dc9f184101f2_del complete
Oct 09 16:46:07 compute-0 nova_compute[117331]: 2025-10-09 16:46:07.681 2 INFO nova.compute.manager [None req-87cda506-0c06-4a45-8b5c-f177ed3d9d55 edda4b76bea746b7aec969bc10f68f14 dbc5b562dfbf46d888285482e4fe52bb - - default default] [instance: d9cb0583-a454-4df1-80f9-dc9f184101f2] Took 1.32 seconds to destroy the instance on the hypervisor.
Oct 09 16:46:07 compute-0 nova_compute[117331]: 2025-10-09 16:46:07.682 2 DEBUG oslo.service.backend._eventlet.loopingcall [None req-87cda506-0c06-4a45-8b5c-f177ed3d9d55 edda4b76bea746b7aec969bc10f68f14 dbc5b562dfbf46d888285482e4fe52bb - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.12/site-packages/oslo_service/backend/_eventlet/loopingcall.py:437
Oct 09 16:46:07 compute-0 nova_compute[117331]: 2025-10-09 16:46:07.683 2 DEBUG nova.compute.manager [-] [instance: d9cb0583-a454-4df1-80f9-dc9f184101f2] Deallocating network for instance _deallocate_network /usr/lib/python3.12/site-packages/nova/compute/manager.py:2324
Oct 09 16:46:07 compute-0 nova_compute[117331]: 2025-10-09 16:46:07.683 2 DEBUG nova.network.neutron [-] [instance: d9cb0583-a454-4df1-80f9-dc9f184101f2] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.12/site-packages/nova/network/neutron.py:1863
Oct 09 16:46:07 compute-0 nova_compute[117331]: 2025-10-09 16:46:07.683 2 WARNING neutronclient.v2_0.client [-] The python binding code in neutronclient is deprecated in favor of OpenstackSDK, please use that as this will be removed in a future release.
Oct 09 16:46:07 compute-0 nova_compute[117331]: 2025-10-09 16:46:07.690 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 09 16:46:08 compute-0 nova_compute[117331]: 2025-10-09 16:46:08.554 2 WARNING neutronclient.v2_0.client [-] The python binding code in neutronclient is deprecated in favor of OpenstackSDK, please use that as this will be removed in a future release.
Oct 09 16:46:08 compute-0 nova_compute[117331]: 2025-10-09 16:46:08.618 2 DEBUG nova.compute.manager [req-d3789313-d0b9-4537-81d8-dd1f1110d2c4 req-e5919cb7-48de-4589-a23d-708bf28c85ad ee15961ad2be4bada6c106ac4f69f422 c076c42271004e7aa86e84416e33f826 - - default default] [instance: d9cb0583-a454-4df1-80f9-dc9f184101f2] Received event network-vif-unplugged-b7eb3265-ba4d-4b59-9f4f-d449f3a8d71e external_instance_event /usr/lib/python3.12/site-packages/nova/compute/manager.py:11812
Oct 09 16:46:08 compute-0 nova_compute[117331]: 2025-10-09 16:46:08.619 2 DEBUG oslo_concurrency.lockutils [req-d3789313-d0b9-4537-81d8-dd1f1110d2c4 req-e5919cb7-48de-4589-a23d-708bf28c85ad ee15961ad2be4bada6c106ac4f69f422 c076c42271004e7aa86e84416e33f826 - - default default] Acquiring lock "d9cb0583-a454-4df1-80f9-dc9f184101f2-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:405
Oct 09 16:46:08 compute-0 nova_compute[117331]: 2025-10-09 16:46:08.620 2 DEBUG oslo_concurrency.lockutils [req-d3789313-d0b9-4537-81d8-dd1f1110d2c4 req-e5919cb7-48de-4589-a23d-708bf28c85ad ee15961ad2be4bada6c106ac4f69f422 c076c42271004e7aa86e84416e33f826 - - default default] Lock "d9cb0583-a454-4df1-80f9-dc9f184101f2-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:410
Oct 09 16:46:08 compute-0 nova_compute[117331]: 2025-10-09 16:46:08.620 2 DEBUG oslo_concurrency.lockutils [req-d3789313-d0b9-4537-81d8-dd1f1110d2c4 req-e5919cb7-48de-4589-a23d-708bf28c85ad ee15961ad2be4bada6c106ac4f69f422 c076c42271004e7aa86e84416e33f826 - - default default] Lock "d9cb0583-a454-4df1-80f9-dc9f184101f2-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:424
Oct 09 16:46:08 compute-0 nova_compute[117331]: 2025-10-09 16:46:08.621 2 DEBUG nova.compute.manager [req-d3789313-d0b9-4537-81d8-dd1f1110d2c4 req-e5919cb7-48de-4589-a23d-708bf28c85ad ee15961ad2be4bada6c106ac4f69f422 c076c42271004e7aa86e84416e33f826 - - default default] [instance: d9cb0583-a454-4df1-80f9-dc9f184101f2] No waiting events found dispatching network-vif-unplugged-b7eb3265-ba4d-4b59-9f4f-d449f3a8d71e pop_instance_event /usr/lib/python3.12/site-packages/nova/compute/manager.py:344
Oct 09 16:46:08 compute-0 nova_compute[117331]: 2025-10-09 16:46:08.621 2 DEBUG nova.compute.manager [req-d3789313-d0b9-4537-81d8-dd1f1110d2c4 req-e5919cb7-48de-4589-a23d-708bf28c85ad ee15961ad2be4bada6c106ac4f69f422 c076c42271004e7aa86e84416e33f826 - - default default] [instance: d9cb0583-a454-4df1-80f9-dc9f184101f2] Received event network-vif-unplugged-b7eb3265-ba4d-4b59-9f4f-d449f3a8d71e for instance with task_state deleting. _process_instance_event /usr/lib/python3.12/site-packages/nova/compute/manager.py:11590
Oct 09 16:46:09 compute-0 nova_compute[117331]: 2025-10-09 16:46:09.449 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 09 16:46:09 compute-0 ovn_metadata_agent[28608]: 2025-10-09 16:46:09.449 28613 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=36, options={'arp_ns_explicit_output': 'true', 'fdb_removal_limit': '0', 'ignore_lsp_down': 'false', 'mac_binding_removal_limit': '0', 'mac_prefix': '4a:65:5f', 'max_tunid': '16711680', 'northd_internal_version': '24.09.4-20.37.0-77.8', 'svc_monitor_mac': '32:24:1e:e3:5d:e8'}, ipsec=False) old=SB_Global(nb_cfg=35) matches /usr/lib/python3.12/site-packages/ovsdbapp/backend/ovs_idl/event.py:55
Oct 09 16:46:09 compute-0 ovn_metadata_agent[28608]: 2025-10-09 16:46:09.451 28613 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 6 seconds run /usr/lib/python3.12/site-packages/neutron/agent/ovn/metadata/agent.py:367
Oct 09 16:46:09 compute-0 nova_compute[117331]: 2025-10-09 16:46:09.594 2 DEBUG nova.compute.manager [req-709076ad-d7a5-4f11-854f-e36f0a4dfe6b req-9b7913dc-7ba7-4df6-8657-6b4e45e90688 ee15961ad2be4bada6c106ac4f69f422 c076c42271004e7aa86e84416e33f826 - - default default] [instance: d9cb0583-a454-4df1-80f9-dc9f184101f2] Received event network-vif-deleted-b7eb3265-ba4d-4b59-9f4f-d449f3a8d71e external_instance_event /usr/lib/python3.12/site-packages/nova/compute/manager.py:11812
Oct 09 16:46:09 compute-0 nova_compute[117331]: 2025-10-09 16:46:09.594 2 INFO nova.compute.manager [req-709076ad-d7a5-4f11-854f-e36f0a4dfe6b req-9b7913dc-7ba7-4df6-8657-6b4e45e90688 ee15961ad2be4bada6c106ac4f69f422 c076c42271004e7aa86e84416e33f826 - - default default] [instance: d9cb0583-a454-4df1-80f9-dc9f184101f2] Neutron deleted interface b7eb3265-ba4d-4b59-9f4f-d449f3a8d71e; detaching it from the instance and deleting it from the info cache
Oct 09 16:46:09 compute-0 nova_compute[117331]: 2025-10-09 16:46:09.595 2 DEBUG nova.network.neutron [req-709076ad-d7a5-4f11-854f-e36f0a4dfe6b req-9b7913dc-7ba7-4df6-8657-6b4e45e90688 ee15961ad2be4bada6c106ac4f69f422 c076c42271004e7aa86e84416e33f826 - - default default] [instance: d9cb0583-a454-4df1-80f9-dc9f184101f2] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.12/site-packages/nova/network/neutron.py:116
Oct 09 16:46:10 compute-0 nova_compute[117331]: 2025-10-09 16:46:10.035 2 DEBUG nova.network.neutron [-] [instance: d9cb0583-a454-4df1-80f9-dc9f184101f2] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.12/site-packages/nova/network/neutron.py:116
Oct 09 16:46:10 compute-0 nova_compute[117331]: 2025-10-09 16:46:10.104 2 DEBUG nova.compute.manager [req-709076ad-d7a5-4f11-854f-e36f0a4dfe6b req-9b7913dc-7ba7-4df6-8657-6b4e45e90688 ee15961ad2be4bada6c106ac4f69f422 c076c42271004e7aa86e84416e33f826 - - default default] [instance: d9cb0583-a454-4df1-80f9-dc9f184101f2] Detach interface failed, port_id=b7eb3265-ba4d-4b59-9f4f-d449f3a8d71e, reason: Instance d9cb0583-a454-4df1-80f9-dc9f184101f2 could not be found. _process_instance_vif_deleted_event /usr/lib/python3.12/site-packages/nova/compute/manager.py:11646
Oct 09 16:46:10 compute-0 nova_compute[117331]: 2025-10-09 16:46:10.543 2 INFO nova.compute.manager [-] [instance: d9cb0583-a454-4df1-80f9-dc9f184101f2] Took 2.86 seconds to deallocate network for instance.
Oct 09 16:46:11 compute-0 nova_compute[117331]: 2025-10-09 16:46:11.070 2 DEBUG oslo_concurrency.lockutils [None req-87cda506-0c06-4a45-8b5c-f177ed3d9d55 edda4b76bea746b7aec969bc10f68f14 dbc5b562dfbf46d888285482e4fe52bb - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:405
Oct 09 16:46:11 compute-0 nova_compute[117331]: 2025-10-09 16:46:11.071 2 DEBUG oslo_concurrency.lockutils [None req-87cda506-0c06-4a45-8b5c-f177ed3d9d55 edda4b76bea746b7aec969bc10f68f14 dbc5b562dfbf46d888285482e4fe52bb - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:410
Oct 09 16:46:11 compute-0 nova_compute[117331]: 2025-10-09 16:46:11.127 2 DEBUG nova.compute.provider_tree [None req-87cda506-0c06-4a45-8b5c-f177ed3d9d55 edda4b76bea746b7aec969bc10f68f14 dbc5b562dfbf46d888285482e4fe52bb - - default default] Inventory has not changed in ProviderTree for provider: 593051b8-2000-437f-a915-2616fc8b1671 update_inventory /usr/lib/python3.12/site-packages/nova/compute/provider_tree.py:180
Oct 09 16:46:11 compute-0 nova_compute[117331]: 2025-10-09 16:46:11.635 2 DEBUG nova.scheduler.client.report [None req-87cda506-0c06-4a45-8b5c-f177ed3d9d55 edda4b76bea746b7aec969bc10f68f14 dbc5b562dfbf46d888285482e4fe52bb - - default default] Inventory has not changed for provider 593051b8-2000-437f-a915-2616fc8b1671 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 79, 'reserved': 1, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.12/site-packages/nova/scheduler/client/report.py:958
Oct 09 16:46:12 compute-0 nova_compute[117331]: 2025-10-09 16:46:12.148 2 DEBUG oslo_concurrency.lockutils [None req-87cda506-0c06-4a45-8b5c-f177ed3d9d55 edda4b76bea746b7aec969bc10f68f14 dbc5b562dfbf46d888285482e4fe52bb - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 1.077s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:424
Oct 09 16:46:12 compute-0 nova_compute[117331]: 2025-10-09 16:46:12.158 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 09 16:46:12 compute-0 nova_compute[117331]: 2025-10-09 16:46:12.191 2 INFO nova.scheduler.client.report [None req-87cda506-0c06-4a45-8b5c-f177ed3d9d55 edda4b76bea746b7aec969bc10f68f14 dbc5b562dfbf46d888285482e4fe52bb - - default default] Deleted allocations for instance d9cb0583-a454-4df1-80f9-dc9f184101f2
Oct 09 16:46:12 compute-0 nova_compute[117331]: 2025-10-09 16:46:12.726 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 09 16:46:13 compute-0 nova_compute[117331]: 2025-10-09 16:46:13.255 2 DEBUG oslo_concurrency.lockutils [None req-87cda506-0c06-4a45-8b5c-f177ed3d9d55 edda4b76bea746b7aec969bc10f68f14 dbc5b562dfbf46d888285482e4fe52bb - - default default] Lock "d9cb0583-a454-4df1-80f9-dc9f184101f2" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 7.431s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:424
Oct 09 16:46:14 compute-0 podman[156183]: 2025-10-09 16:46:14.8696483 +0000 UTC m=+0.088983683 container health_status 270830b5167cafbab735d8118980f26987e673cd0fa464dd2202394830bcca66 (image=38.102.83.66:5001/podified-master-centos10/openstack-multipathd:watcher_latest, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, org.label-schema.build-date=20251007, org.label-schema.name=CentOS Stream 10 Base Image, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': '38.102.83.66:5001/podified-master-centos10/openstack-multipathd:watcher_latest', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, io.buildah.version=1.41.4, org.label-schema.schema-version=1.0, tcib_build_tag=watcher_latest)
Oct 09 16:46:15 compute-0 ovn_metadata_agent[28608]: 2025-10-09 16:46:15.453 28613 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=9954897f-aa83-45dd-8e84-289816676c2a, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '36'}),), if_exists=True) do_commit /usr/lib/python3.12/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 09 16:46:17 compute-0 nova_compute[117331]: 2025-10-09 16:46:17.114 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 09 16:46:17 compute-0 nova_compute[117331]: 2025-10-09 16:46:17.160 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 09 16:46:17 compute-0 nova_compute[117331]: 2025-10-09 16:46:17.783 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 09 16:46:18 compute-0 podman[156204]: 2025-10-09 16:46:18.850851873 +0000 UTC m=+0.076606967 container health_status 86aa93e887899e54d383b0fc17fb3c5637680d1d8e146a130a557c837f0260fe (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm)
Oct 09 16:46:22 compute-0 nova_compute[117331]: 2025-10-09 16:46:22.162 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 09 16:46:22 compute-0 nova_compute[117331]: 2025-10-09 16:46:22.784 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 09 16:46:23 compute-0 podman[156230]: 2025-10-09 16:46:23.87480665 +0000 UTC m=+0.098827119 container health_status 0e966c7a283689daf23c5db5017f23f685ee836319240188a9f70ddd246563c5 (image=38.102.83.66:5001/podified-master-centos10/openstack-neutron-metadata-agent-ovn:watcher_latest, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.4, org.label-schema.build-date=20251007, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 10 Base Image, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': '38.102.83.66:5001/podified-master-centos10/openstack-neutron-metadata-agent-ovn:watcher_latest', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, tcib_build_tag=watcher_latest, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_managed=true)
Oct 09 16:46:23 compute-0 podman[156231]: 2025-10-09 16:46:23.880232045 +0000 UTC m=+0.083809127 container health_status 8227728d23f374cb9528be6ddf01dff38d8c792912a17b0d137932a18390b820 (image=38.102.83.66:5001/podified-master-centos10/openstack-iscsid:watcher_latest, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': '38.102.83.66:5001/podified-master-centos10/openstack-iscsid:watcher_latest', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, config_id=iscsid, io.buildah.version=1.41.4, container_name=iscsid, org.label-schema.build-date=20251007, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 10 Base Image, tcib_build_tag=watcher_latest, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible)
Oct 09 16:46:27 compute-0 nova_compute[117331]: 2025-10-09 16:46:27.165 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 09 16:46:27 compute-0 nova_compute[117331]: 2025-10-09 16:46:27.787 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 09 16:46:29 compute-0 podman[127775]: time="2025-10-09T16:46:29Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Oct 09 16:46:29 compute-0 podman[127775]: @ - - [09/Oct/2025:16:46:29 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 19526 "" "Go-http-client/1.1"
Oct 09 16:46:29 compute-0 podman[127775]: @ - - [09/Oct/2025:16:46:29 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 3034 "" "Go-http-client/1.1"
Oct 09 16:46:31 compute-0 openstack_network_exporter[129925]: ERROR   16:46:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Oct 09 16:46:31 compute-0 openstack_network_exporter[129925]: ERROR   16:46:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Oct 09 16:46:31 compute-0 openstack_network_exporter[129925]: ERROR   16:46:31 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Oct 09 16:46:31 compute-0 openstack_network_exporter[129925]: ERROR   16:46:31 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Oct 09 16:46:31 compute-0 openstack_network_exporter[129925]: 
Oct 09 16:46:31 compute-0 openstack_network_exporter[129925]: ERROR   16:46:31 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Oct 09 16:46:31 compute-0 openstack_network_exporter[129925]: 
Oct 09 16:46:32 compute-0 nova_compute[117331]: 2025-10-09 16:46:32.166 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 09 16:46:32 compute-0 nova_compute[117331]: 2025-10-09 16:46:32.864 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 09 16:46:33 compute-0 podman[156268]: 2025-10-09 16:46:33.840152001 +0000 UTC m=+0.079275761 container health_status 64fa251112d01abb34ec1bb8e22019815ddcaaaa391ce5ec91517147ac5a9994 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.33.7, build-date=2025-08-20T13:12:41, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vcs-type=git, io.openshift.tags=minimal rhel9, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, url=https://catalog.redhat.com/en/search?searchType=containers, container_name=openstack_network_exporter, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, managed_by=edpm_ansible, version=9.6, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.expose-services=, name=ubi9-minimal, com.redhat.component=ubi9-minimal-container, maintainer=Red Hat, Inc., config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, config_id=edpm, distribution-scope=public, release=1755695350, vendor=Red Hat, Inc., summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., architecture=x86_64)
Oct 09 16:46:33 compute-0 podman[156269]: 2025-10-09 16:46:33.84821003 +0000 UTC m=+0.084597912 container health_status d615e4f24ff62fbb14be4a63e6da568ec5b22309773c1c66103b6b4de64a8a2f (image=38.102.83.66:5001/podified-master-centos10/openstack-ovn-controller:watcher_latest, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 10 Base Image, tcib_managed=true, org.label-schema.schema-version=1.0, tcib_build_tag=watcher_latest, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': '38.102.83.66:5001/podified-master-centos10/openstack-ovn-controller:watcher_latest', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, io.buildah.version=1.41.4, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251007, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, container_name=ovn_controller, managed_by=edpm_ansible)
Oct 09 16:46:35 compute-0 ovn_metadata_agent[28608]: 2025-10-09 16:46:35.354 28613 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:405
Oct 09 16:46:35 compute-0 ovn_metadata_agent[28608]: 2025-10-09 16:46:35.354 28613 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.000s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:410
Oct 09 16:46:35 compute-0 ovn_metadata_agent[28608]: 2025-10-09 16:46:35.354 28613 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:424
Oct 09 16:46:36 compute-0 sshd-session[155969]: Invalid user pi from 124.60.67.43 port 47794
Oct 09 16:46:37 compute-0 nova_compute[117331]: 2025-10-09 16:46:37.169 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 09 16:46:37 compute-0 nova_compute[117331]: 2025-10-09 16:46:37.927 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 09 16:46:38 compute-0 nova_compute[117331]: 2025-10-09 16:46:38.315 2 DEBUG oslo_service.periodic_task [None req-92a13bed-c081-4c7b-87f3-bafebefb79f2 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.12/site-packages/oslo_service/periodic_task.py:210
Oct 09 16:46:38 compute-0 nova_compute[117331]: 2025-10-09 16:46:38.316 2 DEBUG oslo_service.periodic_task [None req-92a13bed-c081-4c7b-87f3-bafebefb79f2 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.12/site-packages/oslo_service/periodic_task.py:210
Oct 09 16:46:42 compute-0 nova_compute[117331]: 2025-10-09 16:46:42.173 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 09 16:46:42 compute-0 nova_compute[117331]: 2025-10-09 16:46:42.308 2 DEBUG oslo_service.periodic_task [None req-92a13bed-c081-4c7b-87f3-bafebefb79f2 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.12/site-packages/oslo_service/periodic_task.py:210
Oct 09 16:46:42 compute-0 nova_compute[117331]: 2025-10-09 16:46:42.935 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 09 16:46:43 compute-0 sshd-session[155969]: pam_unix(sshd:auth): check pass; user unknown
Oct 09 16:46:43 compute-0 sshd-session[155969]: pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=124.60.67.43
Oct 09 16:46:44 compute-0 nova_compute[117331]: 2025-10-09 16:46:44.308 2 DEBUG oslo_service.periodic_task [None req-92a13bed-c081-4c7b-87f3-bafebefb79f2 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.12/site-packages/oslo_service/periodic_task.py:210
Oct 09 16:46:44 compute-0 nova_compute[117331]: 2025-10-09 16:46:44.308 2 DEBUG oslo_service.periodic_task [None req-92a13bed-c081-4c7b-87f3-bafebefb79f2 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.12/site-packages/oslo_service/periodic_task.py:210
Oct 09 16:46:44 compute-0 nova_compute[117331]: 2025-10-09 16:46:44.308 2 DEBUG nova.compute.manager [None req-92a13bed-c081-4c7b-87f3-bafebefb79f2 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.12/site-packages/nova/compute/manager.py:11228
Oct 09 16:46:45 compute-0 nova_compute[117331]: 2025-10-09 16:46:45.307 2 DEBUG oslo_service.periodic_task [None req-92a13bed-c081-4c7b-87f3-bafebefb79f2 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.12/site-packages/oslo_service/periodic_task.py:210
Oct 09 16:46:45 compute-0 sshd-session[156229]: error: kex_exchange_identification: read: Connection timed out
Oct 09 16:46:45 compute-0 sshd-session[156229]: banner exchange: Connection from 120.48.149.106 port 33920: Connection timed out
Oct 09 16:46:45 compute-0 sshd-session[155969]: Failed password for invalid user pi from 124.60.67.43 port 47794 ssh2
Oct 09 16:46:45 compute-0 nova_compute[117331]: 2025-10-09 16:46:45.821 2 DEBUG oslo_concurrency.lockutils [None req-92a13bed-c081-4c7b-87f3-bafebefb79f2 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:405
Oct 09 16:46:45 compute-0 nova_compute[117331]: 2025-10-09 16:46:45.822 2 DEBUG oslo_concurrency.lockutils [None req-92a13bed-c081-4c7b-87f3-bafebefb79f2 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:410
Oct 09 16:46:45 compute-0 nova_compute[117331]: 2025-10-09 16:46:45.822 2 DEBUG oslo_concurrency.lockutils [None req-92a13bed-c081-4c7b-87f3-bafebefb79f2 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:424
Oct 09 16:46:45 compute-0 nova_compute[117331]: 2025-10-09 16:46:45.822 2 DEBUG nova.compute.resource_tracker [None req-92a13bed-c081-4c7b-87f3-bafebefb79f2 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.12/site-packages/nova/compute/resource_tracker.py:937
Oct 09 16:46:45 compute-0 podman[156316]: 2025-10-09 16:46:45.880989505 +0000 UTC m=+0.106997272 container health_status 270830b5167cafbab735d8118980f26987e673cd0fa464dd2202394830bcca66 (image=38.102.83.66:5001/podified-master-centos10/openstack-multipathd:watcher_latest, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': '38.102.83.66:5001/podified-master-centos10/openstack-multipathd:watcher_latest', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.build-date=20251007, org.label-schema.schema-version=1.0, tcib_build_tag=watcher_latest, config_id=multipathd, container_name=multipathd, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.41.4, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true)
Oct 09 16:46:46 compute-0 nova_compute[117331]: 2025-10-09 16:46:46.046 2 WARNING nova.virt.libvirt.driver [None req-92a13bed-c081-4c7b-87f3-bafebefb79f2 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Oct 09 16:46:46 compute-0 nova_compute[117331]: 2025-10-09 16:46:46.047 2 DEBUG oslo_concurrency.processutils [None req-92a13bed-c081-4c7b-87f3-bafebefb79f2 - - - - - -] Running cmd (subprocess): env LANG=C uptime execute /usr/lib/python3.12/site-packages/oslo_concurrency/processutils.py:349
Oct 09 16:46:46 compute-0 nova_compute[117331]: 2025-10-09 16:46:46.075 2 DEBUG oslo_concurrency.processutils [None req-92a13bed-c081-4c7b-87f3-bafebefb79f2 - - - - - -] CMD "env LANG=C uptime" returned: 0 in 0.028s execute /usr/lib/python3.12/site-packages/oslo_concurrency/processutils.py:372
Oct 09 16:46:46 compute-0 nova_compute[117331]: 2025-10-09 16:46:46.076 2 DEBUG nova.compute.resource_tracker [None req-92a13bed-c081-4c7b-87f3-bafebefb79f2 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=6074MB free_disk=73.24528503417969GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.12/site-packages/nova/compute/resource_tracker.py:1136
Oct 09 16:46:46 compute-0 nova_compute[117331]: 2025-10-09 16:46:46.077 2 DEBUG oslo_concurrency.lockutils [None req-92a13bed-c081-4c7b-87f3-bafebefb79f2 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:405
Oct 09 16:46:46 compute-0 nova_compute[117331]: 2025-10-09 16:46:46.077 2 DEBUG oslo_concurrency.lockutils [None req-92a13bed-c081-4c7b-87f3-bafebefb79f2 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.001s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:410
Oct 09 16:46:47 compute-0 nova_compute[117331]: 2025-10-09 16:46:47.176 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 09 16:46:47 compute-0 nova_compute[117331]: 2025-10-09 16:46:47.180 2 DEBUG nova.compute.resource_tracker [None req-92a13bed-c081-4c7b-87f3-bafebefb79f2 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.12/site-packages/nova/compute/resource_tracker.py:1159
Oct 09 16:46:47 compute-0 nova_compute[117331]: 2025-10-09 16:46:47.180 2 DEBUG nova.compute.resource_tracker [None req-92a13bed-c081-4c7b-87f3-bafebefb79f2 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=79GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] stats={'failed_builds': '0', 'uptime': ' 16:46:46 up 55 min,  0 user,  load average: 0.18, 0.32, 0.40\n'} _report_final_resource_view /usr/lib/python3.12/site-packages/nova/compute/resource_tracker.py:1168
Oct 09 16:46:47 compute-0 nova_compute[117331]: 2025-10-09 16:46:47.199 2 DEBUG nova.scheduler.client.report [None req-92a13bed-c081-4c7b-87f3-bafebefb79f2 - - - - - -] Refreshing inventories for resource provider 593051b8-2000-437f-a915-2616fc8b1671 _refresh_associations /usr/lib/python3.12/site-packages/nova/scheduler/client/report.py:822
Oct 09 16:46:47 compute-0 nova_compute[117331]: 2025-10-09 16:46:47.213 2 DEBUG nova.scheduler.client.report [None req-92a13bed-c081-4c7b-87f3-bafebefb79f2 - - - - - -] Updating ProviderTree inventory for provider 593051b8-2000-437f-a915-2616fc8b1671 from _refresh_and_get_inventory using data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 79, 'reserved': 1, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} _refresh_and_get_inventory /usr/lib/python3.12/site-packages/nova/scheduler/client/report.py:786
Oct 09 16:46:47 compute-0 nova_compute[117331]: 2025-10-09 16:46:47.214 2 DEBUG nova.compute.provider_tree [None req-92a13bed-c081-4c7b-87f3-bafebefb79f2 - - - - - -] Updating inventory in ProviderTree for provider 593051b8-2000-437f-a915-2616fc8b1671 with inventory: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 79, 'reserved': 1, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} update_inventory /usr/lib/python3.12/site-packages/nova/compute/provider_tree.py:176
Oct 09 16:46:47 compute-0 nova_compute[117331]: 2025-10-09 16:46:47.229 2 DEBUG nova.scheduler.client.report [None req-92a13bed-c081-4c7b-87f3-bafebefb79f2 - - - - - -] Refreshing aggregate associations for resource provider 593051b8-2000-437f-a915-2616fc8b1671, aggregates: None _refresh_associations /usr/lib/python3.12/site-packages/nova/scheduler/client/report.py:831
Oct 09 16:46:47 compute-0 nova_compute[117331]: 2025-10-09 16:46:47.249 2 DEBUG nova.scheduler.client.report [None req-92a13bed-c081-4c7b-87f3-bafebefb79f2 - - - - - -] Refreshing trait associations for resource provider 593051b8-2000-437f-a915-2616fc8b1671, traits: HW_CPU_X86_AVX2,HW_CPU_X86_BMI,COMPUTE_SOUND_MODEL_ICH6,COMPUTE_IMAGE_TYPE_RAW,COMPUTE_ARCH_X86_64,COMPUTE_IMAGE_TYPE_AMI,HW_CPU_X86_SSSE3,COMPUTE_SECURITY_TPM_CRB,COMPUTE_GRAPHICS_MODEL_CIRRUS,COMPUTE_GRAPHICS_MODEL_NONE,COMPUTE_VIOMMU_MODEL_VIRTIO,HW_CPU_X86_AESNI,COMPUTE_NET_VIF_MODEL_VMXNET3,COMPUTE_SOUND_MODEL_PCSPK,COMPUTE_VOLUME_ATTACH_WITH_TAG,COMPUTE_SOUND_MODEL_ES1370,COMPUTE_NODE,COMPUTE_VIOMMU_MODEL_AUTO,COMPUTE_SOUND_MODEL_ICH9,COMPUTE_IMAGE_TYPE_QCOW2,COMPUTE_VOLUME_MULTI_ATTACH,COMPUTE_SECURITY_UEFI_SECURE_BOOT,COMPUTE_STORAGE_BUS_USB,COMPUTE_GRAPHICS_MODEL_BOCHS,COMPUTE_SECURITY_TPM_TIS,HW_ARCH_X86_64,COMPUTE_SOUND_MODEL_VIRTIO,HW_CPU_X86_BMI2,COMPUTE_STORAGE_BUS_FDC,COMPUTE_SOUND_MODEL_USB,COMPUTE_STORAGE_BUS_SATA,COMPUTE_SOUND_MODEL_AC97,COMPUTE_NET_VIF_MODEL_SPAPR_VLAN,COMPUTE_IMAGE_TYPE_ISO,COMPUTE_VIOMMU_MODEL_INTEL,COMPUTE_NET_VIF_MODEL_PCNET,HW_CPU_X86_F16C,COMPUTE_NET_VIF_MODEL_IGB,COMPUTE_ACCELERATORS,COMPUTE_NET_VIF_MODEL_RTL8139,HW_CPU_X86_AMD_SVM,COMPUTE_NET_VIF_MODEL_E1000E,HW_CPU_X86_CLMUL,COMPUTE_NET_VIF_MODEL_NE2K_PCI,HW_CPU_X86_SHA,COMPUTE_NET_VIF_MODEL_VIRTIO,HW_CPU_X86_AVX,COMPUTE_ADDRESS_SPACE_EMULATED,COMPUTE_NET_VIF_MODEL_E1000,COMPUTE_IMAGE_TYPE_ARI,COMPUTE_NET_ATTACH_INTERFACE,HW_CPU_X86_SSE4A,COMPUTE_TRUSTED_CERTS,COMPUTE_GRAPHICS_MODEL_VGA,HW_CPU_X86_SSE2,COMPUTE_GRAPHICS_MODEL_VIRTIO,COMPUTE_STORAGE_VIRTIO_FS,COMPUTE_SECURITY_STATELESS_FIRMWARE,COMPUTE_NET_ATTACH_INTERFACE_WITH_TAG,HW_CPU_X86_SSE,HW_CPU_X86_SVM,COMPUTE_STORAGE_BUS_VIRTIO,COMPUTE_NET_VIRTIO_PACKED,COMPUTE_SECURITY_TPM_2_0,HW_CPU_X86_MMX,HW_CPU_X86_FMA3,COMPUTE_VOLUME_EXTEND,COMPUTE_DEVICE_TAGGING,HW_CPU_X86_SSE42,COMPUTE_SOCKET_PCI_NUMA_AFFINITY,COMPUTE_IMAGE_TYPE_AKI,COMPUTE_STORAGE_BUS_SCSI,COM
PUTE_ADDRESS_SPACE_PASSTHROUGH,COMPUTE_RESCUE_BFV,COMPUTE_SOUND_MODEL_SB16,HW_CPU_X86_SSE41,COMPUTE_STORAGE_BUS_IDE,HW_CPU_X86_ABM _refresh_associations /usr/lib/python3.12/site-packages/nova/scheduler/client/report.py:843
Oct 09 16:46:47 compute-0 nova_compute[117331]: 2025-10-09 16:46:47.270 2 DEBUG nova.compute.provider_tree [None req-92a13bed-c081-4c7b-87f3-bafebefb79f2 - - - - - -] Inventory has not changed in ProviderTree for provider: 593051b8-2000-437f-a915-2616fc8b1671 update_inventory /usr/lib/python3.12/site-packages/nova/compute/provider_tree.py:180
Oct 09 16:46:47 compute-0 nova_compute[117331]: 2025-10-09 16:46:47.778 2 DEBUG nova.scheduler.client.report [None req-92a13bed-c081-4c7b-87f3-bafebefb79f2 - - - - - -] Inventory has not changed for provider 593051b8-2000-437f-a915-2616fc8b1671 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 79, 'reserved': 1, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.12/site-packages/nova/scheduler/client/report.py:958
Oct 09 16:46:47 compute-0 nova_compute[117331]: 2025-10-09 16:46:47.987 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 09 16:46:48 compute-0 nova_compute[117331]: 2025-10-09 16:46:48.291 2 DEBUG nova.compute.resource_tracker [None req-92a13bed-c081-4c7b-87f3-bafebefb79f2 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.12/site-packages/nova/compute/resource_tracker.py:1097
Oct 09 16:46:48 compute-0 nova_compute[117331]: 2025-10-09 16:46:48.292 2 DEBUG oslo_concurrency.lockutils [None req-92a13bed-c081-4c7b-87f3-bafebefb79f2 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 2.214s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:424
Oct 09 16:46:49 compute-0 podman[156340]: 2025-10-09 16:46:49.835256503 +0000 UTC m=+0.063927379 container health_status 86aa93e887899e54d383b0fc17fb3c5637680d1d8e146a130a557c837f0260fe (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible)
Oct 09 16:46:50 compute-0 ovn_controller[19752]: 2025-10-09T16:46:50Z|00315|memory_trim|INFO|Detected inactivity (last active 30013 ms ago): trimming memory
Oct 09 16:46:52 compute-0 nova_compute[117331]: 2025-10-09 16:46:52.179 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 09 16:46:53 compute-0 nova_compute[117331]: 2025-10-09 16:46:53.032 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 09 16:46:54 compute-0 nova_compute[117331]: 2025-10-09 16:46:54.292 2 DEBUG oslo_service.periodic_task [None req-92a13bed-c081-4c7b-87f3-bafebefb79f2 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.12/site-packages/oslo_service/periodic_task.py:210
Oct 09 16:46:54 compute-0 nova_compute[117331]: 2025-10-09 16:46:54.292 2 DEBUG oslo_service.periodic_task [None req-92a13bed-c081-4c7b-87f3-bafebefb79f2 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.12/site-packages/oslo_service/periodic_task.py:210
Oct 09 16:46:54 compute-0 nova_compute[117331]: 2025-10-09 16:46:54.302 2 DEBUG oslo_service.periodic_task [None req-92a13bed-c081-4c7b-87f3-bafebefb79f2 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.12/site-packages/oslo_service/periodic_task.py:210
Oct 09 16:46:54 compute-0 podman[156367]: 2025-10-09 16:46:54.839660131 +0000 UTC m=+0.063316678 container health_status 0e966c7a283689daf23c5db5017f23f685ee836319240188a9f70ddd246563c5 (image=38.102.83.66:5001/podified-master-centos10/openstack-neutron-metadata-agent-ovn:watcher_latest, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251007, tcib_build_tag=watcher_latest, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': '38.102.83.66:5001/podified-master-centos10/openstack-neutron-metadata-agent-ovn:watcher_latest', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.4, maintainer=OpenStack Kubernetes Operator team, 
org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 10 Base Image, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, managed_by=edpm_ansible)
Oct 09 16:46:54 compute-0 podman[156368]: 2025-10-09 16:46:54.881560108 +0000 UTC m=+0.092201637 container health_status 8227728d23f374cb9528be6ddf01dff38d8c792912a17b0d137932a18390b820 (image=38.102.83.66:5001/podified-master-centos10/openstack-iscsid:watcher_latest, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 10 Base Image, tcib_managed=true, config_id=iscsid, managed_by=edpm_ansible, org.label-schema.build-date=20251007, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': '38.102.83.66:5001/podified-master-centos10/openstack-iscsid:watcher_latest', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, container_name=iscsid, io.buildah.version=1.41.4, org.label-schema.vendor=CentOS, tcib_build_tag=watcher_latest)
Oct 09 16:46:57 compute-0 nova_compute[117331]: 2025-10-09 16:46:57.181 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 09 16:46:58 compute-0 nova_compute[117331]: 2025-10-09 16:46:58.034 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 09 16:46:59 compute-0 podman[127775]: time="2025-10-09T16:46:59Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Oct 09 16:46:59 compute-0 podman[127775]: @ - - [09/Oct/2025:16:46:59 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 19526 "" "Go-http-client/1.1"
Oct 09 16:46:59 compute-0 podman[127775]: @ - - [09/Oct/2025:16:46:59 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 3034 "" "Go-http-client/1.1"
Oct 09 16:47:00 compute-0 sshd-session[156365]: Invalid user test from 124.60.67.43 port 36828
Oct 09 16:47:00 compute-0 sshd-session[156365]: pam_unix(sshd:auth): check pass; user unknown
Oct 09 16:47:00 compute-0 sshd-session[156365]: pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=124.60.67.43
Oct 09 16:47:01 compute-0 openstack_network_exporter[129925]: ERROR   16:47:01 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Oct 09 16:47:01 compute-0 openstack_network_exporter[129925]: ERROR   16:47:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Oct 09 16:47:01 compute-0 openstack_network_exporter[129925]: ERROR   16:47:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Oct 09 16:47:01 compute-0 openstack_network_exporter[129925]: ERROR   16:47:01 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Oct 09 16:47:01 compute-0 openstack_network_exporter[129925]: 
Oct 09 16:47:01 compute-0 openstack_network_exporter[129925]: ERROR   16:47:01 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Oct 09 16:47:01 compute-0 openstack_network_exporter[129925]: 
Oct 09 16:47:02 compute-0 nova_compute[117331]: 2025-10-09 16:47:02.183 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 09 16:47:02 compute-0 sshd-session[156365]: Failed password for invalid user test from 124.60.67.43 port 36828 ssh2
Oct 09 16:47:03 compute-0 nova_compute[117331]: 2025-10-09 16:47:03.036 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 09 16:47:03 compute-0 sshd-session[156365]: Connection closed by invalid user test 124.60.67.43 port 36828 [preauth]
Oct 09 16:47:04 compute-0 sshd-session[155969]: Connection closed by invalid user pi 124.60.67.43 port 47794 [preauth]
Oct 09 16:47:04 compute-0 podman[156412]: 2025-10-09 16:47:04.855844335 +0000 UTC m=+0.082628208 container health_status 64fa251112d01abb34ec1bb8e22019815ddcaaaa391ce5ec91517147ac5a9994 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.expose-services=, io.openshift.tags=minimal rhel9, url=https://catalog.redhat.com/en/search?searchType=containers, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, config_id=edpm, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, container_name=openstack_network_exporter, io.buildah.version=1.33.7, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., version=9.6, name=ubi9-minimal, vcs-type=git, vendor=Red Hat, Inc., io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, distribution-scope=public, release=1755695350, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, build-date=2025-08-20T13:12:41, architecture=x86_64, com.redhat.component=ubi9-minimal-container, maintainer=Red Hat, Inc., managed_by=edpm_ansible)
Oct 09 16:47:04 compute-0 podman[156413]: 2025-10-09 16:47:04.874270938 +0000 UTC m=+0.104645837 container health_status d615e4f24ff62fbb14be4a63e6da568ec5b22309773c1c66103b6b4de64a8a2f (image=38.102.83.66:5001/podified-master-centos10/openstack-ovn-controller:watcher_latest, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': '38.102.83.66:5001/podified-master-centos10/openstack-ovn-controller:watcher_latest', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.4, org.label-schema.vendor=CentOS, config_id=ovn_controller, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=watcher_latest, container_name=ovn_controller, org.label-schema.build-date=20251007, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, tcib_managed=true)
Oct 09 16:47:07 compute-0 nova_compute[117331]: 2025-10-09 16:47:07.186 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 09 16:47:08 compute-0 nova_compute[117331]: 2025-10-09 16:47:08.039 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 09 16:47:11 compute-0 sshd-session[156460]: Accepted publickey for zuul from 192.168.122.10 port 54918 ssh2: ECDSA SHA256:2Vdz7kVNDZnmAnEBdeIC9De7MGoQwU7bxSCyJABiYXo
Oct 09 16:47:11 compute-0 systemd-logind[841]: New session 13 of user zuul.
Oct 09 16:47:11 compute-0 systemd[1]: Created slice User Slice of UID 1000.
Oct 09 16:47:11 compute-0 systemd[1]: Starting User Runtime Directory /run/user/1000...
Oct 09 16:47:11 compute-0 systemd[1]: Finished User Runtime Directory /run/user/1000.
Oct 09 16:47:11 compute-0 systemd[1]: Starting User Manager for UID 1000...
Oct 09 16:47:11 compute-0 systemd[156464]: pam_unix(systemd-user:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Oct 09 16:47:11 compute-0 unix_chkpwd[156467]: password check failed for user (root)
Oct 09 16:47:11 compute-0 sshd-session[156410]: pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=124.60.67.43  user=root
Oct 09 16:47:11 compute-0 systemd[156464]: Queued start job for default target Main User Target.
Oct 09 16:47:11 compute-0 systemd[156464]: Created slice User Application Slice.
Oct 09 16:47:11 compute-0 systemd[156464]: Started Mark boot as successful after the user session has run 2 minutes.
Oct 09 16:47:11 compute-0 systemd[156464]: Started Daily Cleanup of User's Temporary Directories.
Oct 09 16:47:11 compute-0 systemd[156464]: Reached target Paths.
Oct 09 16:47:11 compute-0 systemd[156464]: Reached target Timers.
Oct 09 16:47:11 compute-0 systemd[156464]: Starting D-Bus User Message Bus Socket...
Oct 09 16:47:11 compute-0 systemd[156464]: Starting Create User's Volatile Files and Directories...
Oct 09 16:47:11 compute-0 systemd[156464]: Finished Create User's Volatile Files and Directories.
Oct 09 16:47:11 compute-0 systemd[156464]: Listening on D-Bus User Message Bus Socket.
Oct 09 16:47:11 compute-0 systemd[156464]: Reached target Sockets.
Oct 09 16:47:11 compute-0 systemd[156464]: Reached target Basic System.
Oct 09 16:47:11 compute-0 systemd[156464]: Reached target Main User Target.
Oct 09 16:47:11 compute-0 systemd[156464]: Startup finished in 150ms.
Oct 09 16:47:11 compute-0 systemd[1]: Started User Manager for UID 1000.
Oct 09 16:47:11 compute-0 systemd[1]: Started Session 13 of User zuul.
Oct 09 16:47:11 compute-0 sshd-session[156460]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Oct 09 16:47:12 compute-0 sudo[156481]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/bash -c 'rm -rf /var/tmp/sos-osp && mkdir /var/tmp/sos-osp && sos report --batch --all-logs --tmp-dir=/var/tmp/sos-osp -p container,openstack_edpm,system,storage,virt'
Oct 09 16:47:12 compute-0 sudo[156481]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 09 16:47:12 compute-0 nova_compute[117331]: 2025-10-09 16:47:12.189 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 09 16:47:13 compute-0 nova_compute[117331]: 2025-10-09 16:47:13.041 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 09 16:47:13 compute-0 sshd-session[156410]: Failed password for root from 124.60.67.43 port 59454 ssh2
Oct 09 16:47:14 compute-0 sshd-session[156410]: Connection closed by authenticating user root 124.60.67.43 port 59454 [preauth]
Oct 09 16:47:16 compute-0 podman[156628]: 2025-10-09 16:47:16.086617519 +0000 UTC m=+0.080153270 container health_status 270830b5167cafbab735d8118980f26987e673cd0fa464dd2202394830bcca66 (image=38.102.83.66:5001/podified-master-centos10/openstack-multipathd:watcher_latest, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, config_id=multipathd, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': '38.102.83.66:5001/podified-master-centos10/openstack-multipathd:watcher_latest', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, io.buildah.version=1.41.4, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=watcher_latest, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251007, org.label-schema.license=GPLv2)
Oct 09 16:47:17 compute-0 ovs-vsctl[156674]: ovs|00001|db_ctl_base|ERR|no key "dpdk-init" in Open_vSwitch record "." column other_config
Oct 09 16:47:17 compute-0 nova_compute[117331]: 2025-10-09 16:47:17.192 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 09 16:47:17 compute-0 systemd[1]: proc-sys-fs-binfmt_misc.automount: Got automount request for /proc/sys/fs/binfmt_misc, triggered by 156505 (sos)
Oct 09 16:47:17 compute-0 systemd[1]: Mounting Arbitrary Executable File Formats File System...
Oct 09 16:47:17 compute-0 systemd[1]: Mounted Arbitrary Executable File Formats File System.
Oct 09 16:47:18 compute-0 nova_compute[117331]: 2025-10-09 16:47:18.043 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 09 16:47:18 compute-0 virtqemud[117629]: Failed to connect socket to '/var/run/libvirt/virtnetworkd-sock-ro': No such file or directory
Oct 09 16:47:18 compute-0 virtqemud[117629]: Failed to connect socket to '/var/run/libvirt/virtnwfilterd-sock-ro': No such file or directory
Oct 09 16:47:18 compute-0 virtqemud[117629]: Failed to connect socket to '/var/run/libvirt/virtstoraged-sock-ro': No such file or directory
Oct 09 16:47:18 compute-0 kernel: block sr0: the capability attribute has been deprecated.
Oct 09 16:47:19 compute-0 crontab[157100]: (root) LIST (root)
Oct 09 16:47:19 compute-0 rsyslogd[1282]: imjournal: journal files changed, reloading...  [v8.2506.0-2.el9 try https://www.rsyslog.com/e/0 ]
Oct 09 16:47:19 compute-0 rsyslogd[1282]: imjournal: journal files changed, reloading...  [v8.2506.0-2.el9 try https://www.rsyslog.com/e/0 ]
Oct 09 16:47:20 compute-0 podman[157185]: 2025-10-09 16:47:20.8236813 +0000 UTC m=+0.059030200 container health_status 86aa93e887899e54d383b0fc17fb3c5637680d1d8e146a130a557c837f0260fe (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm)
Oct 09 16:47:21 compute-0 systemd[1]: Starting Hostname Service...
Oct 09 16:47:21 compute-0 systemd[1]: Started Hostname Service.
Oct 09 16:47:22 compute-0 nova_compute[117331]: 2025-10-09 16:47:22.195 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 09 16:47:23 compute-0 nova_compute[117331]: 2025-10-09 16:47:23.087 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 09 16:47:25 compute-0 kernel: cfg80211: Loading compiled-in X.509 certificates for regulatory database
Oct 09 16:47:25 compute-0 kernel: Loaded X.509 cert 'sforshee: 00b28ddf47aef9cea7'
Oct 09 16:47:25 compute-0 kernel: Loaded X.509 cert 'wens: 61c038651aabdcf94bd0ac7ff06c7248db18c600'
Oct 09 16:47:25 compute-0 kernel: platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
Oct 09 16:47:25 compute-0 kernel: cfg80211: failed to load regulatory.db
Oct 09 16:47:25 compute-0 podman[157586]: 2025-10-09 16:47:25.850280252 +0000 UTC m=+0.072584525 container health_status 8227728d23f374cb9528be6ddf01dff38d8c792912a17b0d137932a18390b820 (image=38.102.83.66:5001/podified-master-centos10/openstack-iscsid:watcher_latest, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, org.label-schema.build-date=20251007, org.label-schema.license=GPLv2, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': '38.102.83.66:5001/podified-master-centos10/openstack-iscsid:watcher_latest', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, config_id=iscsid, io.buildah.version=1.41.4, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=watcher_latest, container_name=iscsid, org.label-schema.name=CentOS Stream 10 Base Image)
Oct 09 16:47:25 compute-0 podman[157580]: 2025-10-09 16:47:25.850311433 +0000 UTC m=+0.071325655 container health_status 0e966c7a283689daf23c5db5017f23f685ee836319240188a9f70ddd246563c5 (image=38.102.83.66:5001/podified-master-centos10/openstack-neutron-metadata-agent-ovn:watcher_latest, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251007, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': '38.102.83.66:5001/podified-master-centos10/openstack-neutron-metadata-agent-ovn:watcher_latest', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, 
org.label-schema.license=GPLv2, tcib_managed=true, config_id=ovn_metadata_agent, io.buildah.version=1.41.4, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 10 Base Image, tcib_build_tag=watcher_latest, container_name=ovn_metadata_agent)
Oct 09 16:47:27 compute-0 nova_compute[117331]: 2025-10-09 16:47:27.201 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 09 16:47:27 compute-0 ovs-appctl[158272]: ovs|00001|daemon_unix|WARN|/var/run/openvswitch/ovs-monitor-ipsec.pid: open: No such file or directory
Oct 09 16:47:27 compute-0 ovs-appctl[158277]: ovs|00001|daemon_unix|WARN|/var/run/openvswitch/ovs-monitor-ipsec.pid: open: No such file or directory
Oct 09 16:47:27 compute-0 ovs-appctl[158287]: ovs|00001|daemon_unix|WARN|/var/run/openvswitch/ovs-monitor-ipsec.pid: open: No such file or directory
Oct 09 16:47:28 compute-0 nova_compute[117331]: 2025-10-09 16:47:28.087 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 09 16:47:28 compute-0 unix_chkpwd[158749]: password check failed for user (root)
Oct 09 16:47:28 compute-0 sshd-session[156623]: pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=124.60.67.43  user=root
Oct 09 16:47:29 compute-0 podman[127775]: time="2025-10-09T16:47:29Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Oct 09 16:47:29 compute-0 podman[127775]: @ - - [09/Oct/2025:16:47:29 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 19526 "" "Go-http-client/1.1"
Oct 09 16:47:29 compute-0 podman[127775]: @ - - [09/Oct/2025:16:47:29 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 3035 "" "Go-http-client/1.1"
Oct 09 16:47:31 compute-0 sshd-session[156623]: Failed password for root from 124.60.67.43 port 57578 ssh2
Oct 09 16:47:31 compute-0 openstack_network_exporter[129925]: ERROR   16:47:31 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Oct 09 16:47:31 compute-0 openstack_network_exporter[129925]: ERROR   16:47:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Oct 09 16:47:31 compute-0 openstack_network_exporter[129925]: ERROR   16:47:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Oct 09 16:47:31 compute-0 openstack_network_exporter[129925]: ERROR   16:47:31 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Oct 09 16:47:31 compute-0 openstack_network_exporter[129925]: 
Oct 09 16:47:31 compute-0 openstack_network_exporter[129925]: ERROR   16:47:31 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Oct 09 16:47:31 compute-0 openstack_network_exporter[129925]: 
Oct 09 16:47:32 compute-0 nova_compute[117331]: 2025-10-09 16:47:32.203 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 09 16:47:33 compute-0 nova_compute[117331]: 2025-10-09 16:47:33.090 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 09 16:47:34 compute-0 podman[159488]: 2025-10-09 16:47:34.977609812 +0000 UTC m=+0.081773191 container health_status 64fa251112d01abb34ec1bb8e22019815ddcaaaa391ce5ec91517147ac5a9994 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, architecture=x86_64, build-date=2025-08-20T13:12:41, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-type=git, release=1755695350, container_name=openstack_network_exporter, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.tags=minimal rhel9, managed_by=edpm_ansible, name=ubi9-minimal, vendor=Red Hat, Inc., version=9.6, io.openshift.expose-services=, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, com.redhat.component=ubi9-minimal-container, io.buildah.version=1.33.7, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', 
'/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, maintainer=Red Hat, Inc., url=https://catalog.redhat.com/en/search?searchType=containers, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., config_id=edpm, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., distribution-scope=public)
Oct 09 16:47:34 compute-0 podman[159499]: 2025-10-09 16:47:34.989312378 +0000 UTC m=+0.087523656 container health_status d615e4f24ff62fbb14be4a63e6da568ec5b22309773c1c66103b6b4de64a8a2f (image=38.102.83.66:5001/podified-master-centos10/openstack-ovn-controller:watcher_latest, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=watcher_latest, tcib_managed=true, container_name=ovn_controller, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, io.buildah.version=1.41.4, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 10 Base Image, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': '38.102.83.66:5001/podified-master-centos10/openstack-ovn-controller:watcher_latest', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.build-date=20251007)
Oct 09 16:47:35 compute-0 virtqemud[117629]: Failed to connect socket to '/var/run/libvirt/virtstoraged-sock-ro': No such file or directory
Oct 09 16:47:35 compute-0 ovn_metadata_agent[28608]: 2025-10-09 16:47:35.355 28613 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:405
Oct 09 16:47:35 compute-0 ovn_metadata_agent[28608]: 2025-10-09 16:47:35.356 28613 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:410
Oct 09 16:47:35 compute-0 ovn_metadata_agent[28608]: 2025-10-09 16:47:35.356 28613 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:424
Oct 09 16:47:37 compute-0 nova_compute[117331]: 2025-10-09 16:47:37.205 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 09 16:47:37 compute-0 systemd[1]: Starting Time & Date Service...
Oct 09 16:47:37 compute-0 sshd-session[159244]: Invalid user admin from 124.60.67.43 port 43662
Oct 09 16:47:37 compute-0 systemd[1]: Started Time & Date Service.
Oct 09 16:47:37 compute-0 sshd-session[159244]: pam_unix(sshd:auth): check pass; user unknown
Oct 09 16:47:37 compute-0 sshd-session[159244]: pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=124.60.67.43
Oct 09 16:47:38 compute-0 nova_compute[117331]: 2025-10-09 16:47:38.092 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 09 16:47:39 compute-0 nova_compute[117331]: 2025-10-09 16:47:39.307 2 DEBUG oslo_service.periodic_task [None req-92a13bed-c081-4c7b-87f3-bafebefb79f2 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.12/site-packages/oslo_service/periodic_task.py:210
Oct 09 16:47:39 compute-0 sshd-session[159244]: Failed password for invalid user admin from 124.60.67.43 port 43662 ssh2
Oct 09 16:47:40 compute-0 nova_compute[117331]: 2025-10-09 16:47:40.306 2 DEBUG oslo_service.periodic_task [None req-92a13bed-c081-4c7b-87f3-bafebefb79f2 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.12/site-packages/oslo_service/periodic_task.py:210
Oct 09 16:47:40 compute-0 sshd-session[159244]: Connection closed by invalid user admin 124.60.67.43 port 43662 [preauth]
Oct 09 16:47:42 compute-0 nova_compute[117331]: 2025-10-09 16:47:42.207 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 09 16:47:43 compute-0 nova_compute[117331]: 2025-10-09 16:47:43.097 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 09 16:47:43 compute-0 sshd-session[159752]: Invalid user guest from 124.60.67.43 port 49996
Oct 09 16:47:44 compute-0 nova_compute[117331]: 2025-10-09 16:47:44.306 2 DEBUG oslo_service.periodic_task [None req-92a13bed-c081-4c7b-87f3-bafebefb79f2 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.12/site-packages/oslo_service/periodic_task.py:210
Oct 09 16:47:44 compute-0 nova_compute[117331]: 2025-10-09 16:47:44.307 2 DEBUG oslo_service.periodic_task [None req-92a13bed-c081-4c7b-87f3-bafebefb79f2 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.12/site-packages/oslo_service/periodic_task.py:210
Oct 09 16:47:44 compute-0 nova_compute[117331]: 2025-10-09 16:47:44.307 2 DEBUG nova.compute.manager [None req-92a13bed-c081-4c7b-87f3-bafebefb79f2 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.12/site-packages/nova/compute/manager.py:11228
Oct 09 16:47:44 compute-0 sshd-session[159752]: pam_unix(sshd:auth): check pass; user unknown
Oct 09 16:47:44 compute-0 sshd-session[159752]: pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=124.60.67.43
Oct 09 16:47:45 compute-0 nova_compute[117331]: 2025-10-09 16:47:45.303 2 DEBUG oslo_service.periodic_task [None req-92a13bed-c081-4c7b-87f3-bafebefb79f2 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.12/site-packages/oslo_service/periodic_task.py:210
Oct 09 16:47:46 compute-0 sshd-session[156623]: Connection closed by authenticating user root 124.60.67.43 port 57578 [preauth]
Oct 09 16:47:46 compute-0 nova_compute[117331]: 2025-10-09 16:47:46.306 2 DEBUG oslo_service.periodic_task [None req-92a13bed-c081-4c7b-87f3-bafebefb79f2 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.12/site-packages/oslo_service/periodic_task.py:210
Oct 09 16:47:46 compute-0 sshd-session[159752]: Failed password for invalid user guest from 124.60.67.43 port 49996 ssh2
Oct 09 16:47:46 compute-0 nova_compute[117331]: 2025-10-09 16:47:46.823 2 DEBUG oslo_concurrency.lockutils [None req-92a13bed-c081-4c7b-87f3-bafebefb79f2 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:405
Oct 09 16:47:46 compute-0 nova_compute[117331]: 2025-10-09 16:47:46.823 2 DEBUG oslo_concurrency.lockutils [None req-92a13bed-c081-4c7b-87f3-bafebefb79f2 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:410
Oct 09 16:47:46 compute-0 nova_compute[117331]: 2025-10-09 16:47:46.824 2 DEBUG oslo_concurrency.lockutils [None req-92a13bed-c081-4c7b-87f3-bafebefb79f2 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:424
Oct 09 16:47:46 compute-0 nova_compute[117331]: 2025-10-09 16:47:46.824 2 DEBUG nova.compute.resource_tracker [None req-92a13bed-c081-4c7b-87f3-bafebefb79f2 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.12/site-packages/nova/compute/resource_tracker.py:937
Oct 09 16:47:46 compute-0 podman[159758]: 2025-10-09 16:47:46.83635119 +0000 UTC m=+0.071311765 container health_status 270830b5167cafbab735d8118980f26987e673cd0fa464dd2202394830bcca66 (image=38.102.83.66:5001/podified-master-centos10/openstack-multipathd:watcher_latest, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, config_id=multipathd, container_name=multipathd, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=watcher_latest, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': '38.102.83.66:5001/podified-master-centos10/openstack-multipathd:watcher_latest', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.build-date=20251007, org.label-schema.schema-version=1.0, io.buildah.version=1.41.4, maintainer=OpenStack Kubernetes Operator team)
Oct 09 16:47:47 compute-0 nova_compute[117331]: 2025-10-09 16:47:47.011 2 WARNING nova.virt.libvirt.driver [None req-92a13bed-c081-4c7b-87f3-bafebefb79f2 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Oct 09 16:47:47 compute-0 nova_compute[117331]: 2025-10-09 16:47:47.013 2 DEBUG oslo_concurrency.processutils [None req-92a13bed-c081-4c7b-87f3-bafebefb79f2 - - - - - -] Running cmd (subprocess): env LANG=C uptime execute /usr/lib/python3.12/site-packages/oslo_concurrency/processutils.py:349
Oct 09 16:47:47 compute-0 nova_compute[117331]: 2025-10-09 16:47:47.059 2 DEBUG oslo_concurrency.processutils [None req-92a13bed-c081-4c7b-87f3-bafebefb79f2 - - - - - -] CMD "env LANG=C uptime" returned: 0 in 0.045s execute /usr/lib/python3.12/site-packages/oslo_concurrency/processutils.py:372
Oct 09 16:47:47 compute-0 nova_compute[117331]: 2025-10-09 16:47:47.060 2 DEBUG nova.compute.resource_tracker [None req-92a13bed-c081-4c7b-87f3-bafebefb79f2 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=5684MB free_disk=73.00170516967773GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": 
"label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.12/site-packages/nova/compute/resource_tracker.py:1136
Oct 09 16:47:47 compute-0 nova_compute[117331]: 2025-10-09 16:47:47.060 2 DEBUG oslo_concurrency.lockutils [None req-92a13bed-c081-4c7b-87f3-bafebefb79f2 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:405
Oct 09 16:47:47 compute-0 nova_compute[117331]: 2025-10-09 16:47:47.060 2 DEBUG oslo_concurrency.lockutils [None req-92a13bed-c081-4c7b-87f3-bafebefb79f2 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:410
Oct 09 16:47:47 compute-0 nova_compute[117331]: 2025-10-09 16:47:47.210 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 09 16:47:48 compute-0 nova_compute[117331]: 2025-10-09 16:47:48.145 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 09 16:47:48 compute-0 nova_compute[117331]: 2025-10-09 16:47:48.171 2 DEBUG nova.compute.resource_tracker [None req-92a13bed-c081-4c7b-87f3-bafebefb79f2 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.12/site-packages/nova/compute/resource_tracker.py:1159
Oct 09 16:47:48 compute-0 nova_compute[117331]: 2025-10-09 16:47:48.172 2 DEBUG nova.compute.resource_tracker [None req-92a13bed-c081-4c7b-87f3-bafebefb79f2 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=79GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] stats={'failed_builds': '0', 'uptime': ' 16:47:47 up 56 min,  0 user,  load average: 0.68, 0.42, 0.42\n'} _report_final_resource_view /usr/lib/python3.12/site-packages/nova/compute/resource_tracker.py:1168
Oct 09 16:47:48 compute-0 nova_compute[117331]: 2025-10-09 16:47:48.219 2 DEBUG nova.compute.provider_tree [None req-92a13bed-c081-4c7b-87f3-bafebefb79f2 - - - - - -] Inventory has not changed in ProviderTree for provider: 593051b8-2000-437f-a915-2616fc8b1671 update_inventory /usr/lib/python3.12/site-packages/nova/compute/provider_tree.py:180
Oct 09 16:47:48 compute-0 nova_compute[117331]: 2025-10-09 16:47:48.730 2 DEBUG nova.scheduler.client.report [None req-92a13bed-c081-4c7b-87f3-bafebefb79f2 - - - - - -] Inventory has not changed for provider 593051b8-2000-437f-a915-2616fc8b1671 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 79, 'reserved': 1, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.12/site-packages/nova/scheduler/client/report.py:958
Oct 09 16:47:49 compute-0 nova_compute[117331]: 2025-10-09 16:47:49.246 2 DEBUG nova.compute.resource_tracker [None req-92a13bed-c081-4c7b-87f3-bafebefb79f2 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.12/site-packages/nova/compute/resource_tracker.py:1097
Oct 09 16:47:49 compute-0 nova_compute[117331]: 2025-10-09 16:47:49.248 2 DEBUG oslo_concurrency.lockutils [None req-92a13bed-c081-4c7b-87f3-bafebefb79f2 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 2.188s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:424
Oct 09 16:47:49 compute-0 sshd-session[159752]: Connection closed by invalid user guest 124.60.67.43 port 49996 [preauth]
Oct 09 16:47:51 compute-0 podman[159779]: 2025-10-09 16:47:51.452526853 +0000 UTC m=+0.071959695 container health_status 86aa93e887899e54d383b0fc17fb3c5637680d1d8e146a130a557c837f0260fe (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm)
Oct 09 16:47:52 compute-0 nova_compute[117331]: 2025-10-09 16:47:52.216 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 09 16:47:53 compute-0 nova_compute[117331]: 2025-10-09 16:47:53.148 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 09 16:47:54 compute-0 nova_compute[117331]: 2025-10-09 16:47:54.306 2 DEBUG oslo_service.periodic_task [None req-92a13bed-c081-4c7b-87f3-bafebefb79f2 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.12/site-packages/oslo_service/periodic_task.py:210
Oct 09 16:47:54 compute-0 nova_compute[117331]: 2025-10-09 16:47:54.307 2 DEBUG oslo_service.periodic_task [None req-92a13bed-c081-4c7b-87f3-bafebefb79f2 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.12/site-packages/oslo_service/periodic_task.py:210
Oct 09 16:47:54 compute-0 nova_compute[117331]: 2025-10-09 16:47:54.307 2 DEBUG oslo_service.periodic_task [None req-92a13bed-c081-4c7b-87f3-bafebefb79f2 - - - - - -] Running periodic task ComputeManager._run_pending_deletes run_periodic_tasks /usr/lib/python3.12/site-packages/oslo_service/periodic_task.py:210
Oct 09 16:47:54 compute-0 nova_compute[117331]: 2025-10-09 16:47:54.307 2 DEBUG nova.compute.manager [None req-92a13bed-c081-4c7b-87f3-bafebefb79f2 - - - - - -] Cleaning up deleted instances _run_pending_deletes /usr/lib/python3.12/site-packages/nova/compute/manager.py:11909
Oct 09 16:47:54 compute-0 nova_compute[117331]: 2025-10-09 16:47:54.814 2 DEBUG nova.compute.manager [None req-92a13bed-c081-4c7b-87f3-bafebefb79f2 - - - - - -] There are 0 instances to clean _run_pending_deletes /usr/lib/python3.12/site-packages/nova/compute/manager.py:11918
Oct 09 16:47:56 compute-0 sudo[156481]: pam_unix(sudo:session): session closed for user root
Oct 09 16:47:56 compute-0 sshd-session[156480]: Received disconnect from 192.168.122.10 port 54918:11: disconnected by user
Oct 09 16:47:56 compute-0 sshd-session[156480]: Disconnected from user zuul 192.168.122.10 port 54918
Oct 09 16:47:56 compute-0 sshd-session[156460]: pam_unix(sshd:session): session closed for user zuul
Oct 09 16:47:56 compute-0 systemd[1]: session-13.scope: Deactivated successfully.
Oct 09 16:47:56 compute-0 systemd[1]: session-13.scope: Consumed 1min 13.020s CPU time, 577.9M memory peak, read 173.1M from disk, written 19.3M to disk.
Oct 09 16:47:56 compute-0 systemd-logind[841]: Session 13 logged out. Waiting for processes to exit.
Oct 09 16:47:56 compute-0 systemd-logind[841]: Removed session 13.
Oct 09 16:47:56 compute-0 sshd-session[159807]: Accepted publickey for zuul from 192.168.122.10 port 58922 ssh2: ECDSA SHA256:2Vdz7kVNDZnmAnEBdeIC9De7MGoQwU7bxSCyJABiYXo
Oct 09 16:47:56 compute-0 systemd-logind[841]: New session 15 of user zuul.
Oct 09 16:47:56 compute-0 systemd[1]: Started Session 15 of User zuul.
Oct 09 16:47:56 compute-0 sshd-session[159807]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Oct 09 16:47:56 compute-0 podman[159808]: 2025-10-09 16:47:56.789500887 +0000 UTC m=+0.170762682 container health_status 8227728d23f374cb9528be6ddf01dff38d8c792912a17b0d137932a18390b820 (image=38.102.83.66:5001/podified-master-centos10/openstack-iscsid:watcher_latest, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=watcher_latest, config_id=iscsid, io.buildah.version=1.41.4, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': '38.102.83.66:5001/podified-master-centos10/openstack-iscsid:watcher_latest', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, container_name=iscsid, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, org.label-schema.build-date=20251007, org.label-schema.name=CentOS Stream 10 Base Image)
Oct 09 16:47:56 compute-0 podman[159806]: 2025-10-09 16:47:56.795790239 +0000 UTC m=+0.174545714 container health_status 0e966c7a283689daf23c5db5017f23f685ee836319240188a9f70ddd246563c5 (image=38.102.83.66:5001/podified-master-centos10/openstack-neutron-metadata-agent-ovn:watcher_latest, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251007, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=watcher_latest, config_id=ovn_metadata_agent, io.buildah.version=1.41.4, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, org.label-schema.license=GPLv2, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': '38.102.83.66:5001/podified-master-centos10/openstack-neutron-metadata-agent-ovn:watcher_latest', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, managed_by=edpm_ansible)
Oct 09 16:47:56 compute-0 sudo[159847]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/cat /var/tmp/sos-osp/sosreport-compute-0-2025-10-09-ynzfqkc.tar.xz
Oct 09 16:47:56 compute-0 sudo[159847]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 09 16:47:56 compute-0 sudo[159847]: pam_unix(sudo:session): session closed for user root
Oct 09 16:47:56 compute-0 sshd-session[159846]: Received disconnect from 192.168.122.10 port 58922:11: disconnected by user
Oct 09 16:47:56 compute-0 sshd-session[159846]: Disconnected from user zuul 192.168.122.10 port 58922
Oct 09 16:47:56 compute-0 sshd-session[159807]: pam_unix(sshd:session): session closed for user zuul
Oct 09 16:47:56 compute-0 systemd[1]: session-15.scope: Deactivated successfully.
Oct 09 16:47:56 compute-0 systemd-logind[841]: Session 15 logged out. Waiting for processes to exit.
Oct 09 16:47:56 compute-0 systemd-logind[841]: Removed session 15.
Oct 09 16:47:56 compute-0 sshd-session[159872]: Accepted publickey for zuul from 192.168.122.10 port 58934 ssh2: ECDSA SHA256:2Vdz7kVNDZnmAnEBdeIC9De7MGoQwU7bxSCyJABiYXo
Oct 09 16:47:56 compute-0 systemd-logind[841]: New session 16 of user zuul.
Oct 09 16:47:57 compute-0 systemd[1]: Started Session 16 of User zuul.
Oct 09 16:47:57 compute-0 sshd-session[159872]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Oct 09 16:47:57 compute-0 sudo[159876]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/rm -rf /var/tmp/sos-osp
Oct 09 16:47:57 compute-0 sudo[159876]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 09 16:47:57 compute-0 sudo[159876]: pam_unix(sudo:session): session closed for user root
Oct 09 16:47:57 compute-0 sshd-session[159875]: Received disconnect from 192.168.122.10 port 58934:11: disconnected by user
Oct 09 16:47:57 compute-0 sshd-session[159875]: Disconnected from user zuul 192.168.122.10 port 58934
Oct 09 16:47:57 compute-0 sshd-session[159872]: pam_unix(sshd:session): session closed for user zuul
Oct 09 16:47:57 compute-0 systemd[1]: session-16.scope: Deactivated successfully.
Oct 09 16:47:57 compute-0 systemd-logind[841]: Session 16 logged out. Waiting for processes to exit.
Oct 09 16:47:57 compute-0 systemd-logind[841]: Removed session 16.
Oct 09 16:47:57 compute-0 nova_compute[117331]: 2025-10-09 16:47:57.218 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 09 16:47:58 compute-0 nova_compute[117331]: 2025-10-09 16:47:58.151 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 09 16:47:58 compute-0 nova_compute[117331]: 2025-10-09 16:47:58.306 2 DEBUG oslo_service.periodic_task [None req-92a13bed-c081-4c7b-87f3-bafebefb79f2 - - - - - -] Running periodic task ComputeManager._cleanup_expired_console_auth_tokens run_periodic_tasks /usr/lib/python3.12/site-packages/oslo_service/periodic_task.py:210
Oct 09 16:47:59 compute-0 podman[127775]: time="2025-10-09T16:47:59Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Oct 09 16:47:59 compute-0 podman[127775]: @ - - [09/Oct/2025:16:47:59 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 19526 "" "Go-http-client/1.1"
Oct 09 16:47:59 compute-0 podman[127775]: @ - - [09/Oct/2025:16:47:59 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 3037 "" "Go-http-client/1.1"
Oct 09 16:48:01 compute-0 openstack_network_exporter[129925]: ERROR   16:48:01 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Oct 09 16:48:01 compute-0 openstack_network_exporter[129925]: ERROR   16:48:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Oct 09 16:48:01 compute-0 openstack_network_exporter[129925]: ERROR   16:48:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Oct 09 16:48:01 compute-0 openstack_network_exporter[129925]: ERROR   16:48:01 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Oct 09 16:48:01 compute-0 openstack_network_exporter[129925]: 
Oct 09 16:48:01 compute-0 openstack_network_exporter[129925]: ERROR   16:48:01 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Oct 09 16:48:01 compute-0 openstack_network_exporter[129925]: 
Oct 09 16:48:02 compute-0 nova_compute[117331]: 2025-10-09 16:48:02.220 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 09 16:48:03 compute-0 nova_compute[117331]: 2025-10-09 16:48:03.153 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 09 16:48:05 compute-0 podman[159902]: 2025-10-09 16:48:05.890758088 +0000 UTC m=+0.105567137 container health_status 64fa251112d01abb34ec1bb8e22019815ddcaaaa391ce5ec91517147ac5a9994 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, container_name=openstack_network_exporter, build-date=2025-08-20T13:12:41, managed_by=edpm_ansible, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, io.openshift.tags=minimal rhel9, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., url=https://catalog.redhat.com/en/search?searchType=containers, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, vcs-type=git, distribution-scope=public, io.openshift.expose-services=, config_id=edpm, release=1755695350, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.buildah.version=1.33.7, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., version=9.6, maintainer=Red Hat, Inc., architecture=x86_64, com.redhat.component=ubi9-minimal-container, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, name=ubi9-minimal, vendor=Red Hat, Inc.)
Oct 09 16:48:05 compute-0 podman[159903]: 2025-10-09 16:48:05.940032602 +0000 UTC m=+0.148867599 container health_status d615e4f24ff62fbb14be4a63e6da568ec5b22309773c1c66103b6b4de64a8a2f (image=38.102.83.66:5001/podified-master-centos10/openstack-ovn-controller:watcher_latest, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 10 Base Image, tcib_build_tag=watcher_latest, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': '38.102.83.66:5001/podified-master-centos10/openstack-ovn-controller:watcher_latest', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, managed_by=edpm_ansible, org.label-schema.license=GPLv2, io.buildah.version=1.41.4, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251007, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=ovn_controller)
Oct 09 16:48:07 compute-0 nova_compute[117331]: 2025-10-09 16:48:07.222 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 09 16:48:07 compute-0 systemd[1]: Stopping User Manager for UID 1000...
Oct 09 16:48:07 compute-0 systemd[156464]: Activating special unit Exit the Session...
Oct 09 16:48:07 compute-0 systemd[156464]: Stopped target Main User Target.
Oct 09 16:48:07 compute-0 systemd[156464]: Stopped target Basic System.
Oct 09 16:48:07 compute-0 systemd[156464]: Stopped target Paths.
Oct 09 16:48:07 compute-0 systemd[156464]: Stopped target Sockets.
Oct 09 16:48:07 compute-0 systemd[156464]: Stopped target Timers.
Oct 09 16:48:07 compute-0 systemd[156464]: Stopped Mark boot as successful after the user session has run 2 minutes.
Oct 09 16:48:07 compute-0 systemd[156464]: Stopped Daily Cleanup of User's Temporary Directories.
Oct 09 16:48:07 compute-0 systemd[156464]: Closed D-Bus User Message Bus Socket.
Oct 09 16:48:07 compute-0 systemd[156464]: Stopped Create User's Volatile Files and Directories.
Oct 09 16:48:07 compute-0 systemd[156464]: Removed slice User Application Slice.
Oct 09 16:48:07 compute-0 systemd[156464]: Reached target Shutdown.
Oct 09 16:48:07 compute-0 systemd[156464]: Finished Exit the Session.
Oct 09 16:48:07 compute-0 systemd[156464]: Reached target Exit the Session.
Oct 09 16:48:07 compute-0 systemd[1]: user@1000.service: Deactivated successfully.
Oct 09 16:48:07 compute-0 systemd[1]: Stopped User Manager for UID 1000.
Oct 09 16:48:07 compute-0 systemd[1]: Stopping User Runtime Directory /run/user/1000...
Oct 09 16:48:07 compute-0 systemd[1]: run-user-1000.mount: Deactivated successfully.
Oct 09 16:48:07 compute-0 systemd[1]: user-runtime-dir@1000.service: Deactivated successfully.
Oct 09 16:48:07 compute-0 systemd[1]: Stopped User Runtime Directory /run/user/1000.
Oct 09 16:48:07 compute-0 systemd[1]: Removed slice User Slice of UID 1000.
Oct 09 16:48:07 compute-0 systemd[1]: user-1000.slice: Consumed 1min 13.518s CPU time, 584.0M memory peak, read 173.1M from disk, written 19.3M to disk.
Oct 09 16:48:07 compute-0 systemd[1]: systemd-timedated.service: Deactivated successfully.
Oct 09 16:48:07 compute-0 systemd[1]: systemd-hostnamed.service: Deactivated successfully.
Oct 09 16:48:08 compute-0 nova_compute[117331]: 2025-10-09 16:48:08.187 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 09 16:48:12 compute-0 nova_compute[117331]: 2025-10-09 16:48:12.225 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 09 16:48:12 compute-0 sshd-session[159955]: Invalid user  from 196.251.88.103 port 60716
Oct 09 16:48:13 compute-0 nova_compute[117331]: 2025-10-09 16:48:13.188 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 09 16:48:14 compute-0 nova_compute[117331]: 2025-10-09 16:48:14.834 2 DEBUG oslo_service.periodic_task [None req-92a13bed-c081-4c7b-87f3-bafebefb79f2 - - - - - -] Running periodic task ComputeManager._cleanup_incomplete_migrations run_periodic_tasks /usr/lib/python3.12/site-packages/oslo_service/periodic_task.py:210
Oct 09 16:48:14 compute-0 nova_compute[117331]: 2025-10-09 16:48:14.835 2 DEBUG nova.compute.manager [None req-92a13bed-c081-4c7b-87f3-bafebefb79f2 - - - - - -] Cleaning up deleted instances with incomplete migration  _cleanup_incomplete_migrations /usr/lib/python3.12/site-packages/nova/compute/manager.py:11947
Oct 09 16:48:17 compute-0 nova_compute[117331]: 2025-10-09 16:48:17.227 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 09 16:48:17 compute-0 podman[159957]: 2025-10-09 16:48:17.847784058 +0000 UTC m=+0.077110832 container health_status 270830b5167cafbab735d8118980f26987e673cd0fa464dd2202394830bcca66 (image=38.102.83.66:5001/podified-master-centos10/openstack-multipathd:watcher_latest, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 10 Base Image, config_id=multipathd, org.label-schema.build-date=20251007, tcib_build_tag=watcher_latest, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': '38.102.83.66:5001/podified-master-centos10/openstack-multipathd:watcher_latest', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, io.buildah.version=1.41.4, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, container_name=multipathd)
Oct 09 16:48:18 compute-0 nova_compute[117331]: 2025-10-09 16:48:18.190 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 09 16:48:20 compute-0 sshd-session[159955]: Connection closed by invalid user  196.251.88.103 port 60716 [preauth]
Oct 09 16:48:21 compute-0 podman[159977]: 2025-10-09 16:48:21.833344241 +0000 UTC m=+0.064727544 container health_status 86aa93e887899e54d383b0fc17fb3c5637680d1d8e146a130a557c837f0260fe (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']})
Oct 09 16:48:22 compute-0 nova_compute[117331]: 2025-10-09 16:48:22.228 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 09 16:48:23 compute-0 nova_compute[117331]: 2025-10-09 16:48:23.191 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 09 16:48:27 compute-0 nova_compute[117331]: 2025-10-09 16:48:27.230 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 09 16:48:27 compute-0 podman[160002]: 2025-10-09 16:48:27.8499184 +0000 UTC m=+0.067234143 container health_status 8227728d23f374cb9528be6ddf01dff38d8c792912a17b0d137932a18390b820 (image=38.102.83.66:5001/podified-master-centos10/openstack-iscsid:watcher_latest, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': '38.102.83.66:5001/podified-master-centos10/openstack-iscsid:watcher_latest', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 10 Base Image, config_id=iscsid, container_name=iscsid, io.buildah.version=1.41.4, org.label-schema.vendor=CentOS, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, tcib_build_tag=watcher_latest, org.label-schema.build-date=20251007)
Oct 09 16:48:27 compute-0 podman[160001]: 2025-10-09 16:48:27.853243377 +0000 UTC m=+0.065264180 container health_status 0e966c7a283689daf23c5db5017f23f685ee836319240188a9f70ddd246563c5 (image=38.102.83.66:5001/podified-master-centos10/openstack-neutron-metadata-agent-ovn:watcher_latest, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': '38.102.83.66:5001/podified-master-centos10/openstack-neutron-metadata-agent-ovn:watcher_latest', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, org.label-schema.license=GPLv2, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, config_id=ovn_metadata_agent, io.buildah.version=1.41.4, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.build-date=20251007, org.label-schema.vendor=CentOS, tcib_build_tag=watcher_latest, tcib_managed=true)
Oct 09 16:48:28 compute-0 nova_compute[117331]: 2025-10-09 16:48:28.193 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 09 16:48:29 compute-0 podman[127775]: time="2025-10-09T16:48:29Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Oct 09 16:48:29 compute-0 podman[127775]: @ - - [09/Oct/2025:16:48:29 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 19526 "" "Go-http-client/1.1"
Oct 09 16:48:29 compute-0 podman[127775]: @ - - [09/Oct/2025:16:48:29 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 3036 "" "Go-http-client/1.1"
Oct 09 16:48:31 compute-0 openstack_network_exporter[129925]: ERROR   16:48:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Oct 09 16:48:31 compute-0 openstack_network_exporter[129925]: ERROR   16:48:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Oct 09 16:48:31 compute-0 openstack_network_exporter[129925]: ERROR   16:48:31 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Oct 09 16:48:31 compute-0 openstack_network_exporter[129925]: ERROR   16:48:31 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Oct 09 16:48:31 compute-0 openstack_network_exporter[129925]: 
Oct 09 16:48:31 compute-0 openstack_network_exporter[129925]: ERROR   16:48:31 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Oct 09 16:48:31 compute-0 openstack_network_exporter[129925]: 
Oct 09 16:48:32 compute-0 nova_compute[117331]: 2025-10-09 16:48:32.232 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 09 16:48:33 compute-0 nova_compute[117331]: 2025-10-09 16:48:33.194 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 09 16:48:35 compute-0 ovn_metadata_agent[28608]: 2025-10-09 16:48:35.357 28613 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:405
Oct 09 16:48:35 compute-0 ovn_metadata_agent[28608]: 2025-10-09 16:48:35.358 28613 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:410
Oct 09 16:48:35 compute-0 ovn_metadata_agent[28608]: 2025-10-09 16:48:35.358 28613 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:424
Oct 09 16:48:36 compute-0 podman[160041]: 2025-10-09 16:48:36.876892103 +0000 UTC m=+0.096990510 container health_status 64fa251112d01abb34ec1bb8e22019815ddcaaaa391ce5ec91517147ac5a9994 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, build-date=2025-08-20T13:12:41, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., config_id=edpm, io.openshift.tags=minimal rhel9, version=9.6, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, architecture=x86_64, distribution-scope=public, name=ubi9-minimal, io.buildah.version=1.33.7, release=1755695350, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-type=git, com.redhat.component=ubi9-minimal-container, managed_by=edpm_ansible, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., container_name=openstack_network_exporter, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, maintainer=Red Hat, Inc., io.openshift.expose-services=, url=https://catalog.redhat.com/en/search?searchType=containers)
Oct 09 16:48:36 compute-0 podman[160042]: 2025-10-09 16:48:36.957354961 +0000 UTC m=+0.172239281 container health_status d615e4f24ff62fbb14be4a63e6da568ec5b22309773c1c66103b6b4de64a8a2f (image=38.102.83.66:5001/podified-master-centos10/openstack-ovn-controller:watcher_latest, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_controller, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 10 Base Image, tcib_build_tag=watcher_latest, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': '38.102.83.66:5001/podified-master-centos10/openstack-ovn-controller:watcher_latest', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.4, tcib_managed=true, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251007, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Oct 09 16:48:37 compute-0 nova_compute[117331]: 2025-10-09 16:48:37.234 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 09 16:48:38 compute-0 nova_compute[117331]: 2025-10-09 16:48:38.252 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 09 16:48:40 compute-0 nova_compute[117331]: 2025-10-09 16:48:40.814 2 DEBUG oslo_service.periodic_task [None req-92a13bed-c081-4c7b-87f3-bafebefb79f2 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.12/site-packages/oslo_service/periodic_task.py:210
Oct 09 16:48:41 compute-0 nova_compute[117331]: 2025-10-09 16:48:41.306 2 DEBUG oslo_service.periodic_task [None req-92a13bed-c081-4c7b-87f3-bafebefb79f2 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.12/site-packages/oslo_service/periodic_task.py:210
Oct 09 16:48:42 compute-0 nova_compute[117331]: 2025-10-09 16:48:42.237 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 09 16:48:43 compute-0 nova_compute[117331]: 2025-10-09 16:48:43.255 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 09 16:48:45 compute-0 nova_compute[117331]: 2025-10-09 16:48:45.306 2 DEBUG oslo_service.periodic_task [None req-92a13bed-c081-4c7b-87f3-bafebefb79f2 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.12/site-packages/oslo_service/periodic_task.py:210
Oct 09 16:48:46 compute-0 nova_compute[117331]: 2025-10-09 16:48:46.306 2 DEBUG oslo_service.periodic_task [None req-92a13bed-c081-4c7b-87f3-bafebefb79f2 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.12/site-packages/oslo_service/periodic_task.py:210
Oct 09 16:48:46 compute-0 nova_compute[117331]: 2025-10-09 16:48:46.307 2 DEBUG nova.compute.manager [None req-92a13bed-c081-4c7b-87f3-bafebefb79f2 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.12/site-packages/nova/compute/manager.py:11228
Oct 09 16:48:47 compute-0 sshd-session[156407]: Connection closed by 120.48.149.106 port 37046 [preauth]
Oct 09 16:48:47 compute-0 nova_compute[117331]: 2025-10-09 16:48:47.239 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 09 16:48:47 compute-0 nova_compute[117331]: 2025-10-09 16:48:47.302 2 DEBUG oslo_service.periodic_task [None req-92a13bed-c081-4c7b-87f3-bafebefb79f2 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.12/site-packages/oslo_service/periodic_task.py:210
Oct 09 16:48:47 compute-0 nova_compute[117331]: 2025-10-09 16:48:47.305 2 DEBUG oslo_service.periodic_task [None req-92a13bed-c081-4c7b-87f3-bafebefb79f2 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.12/site-packages/oslo_service/periodic_task.py:210
Oct 09 16:48:47 compute-0 nova_compute[117331]: 2025-10-09 16:48:47.818 2 DEBUG oslo_concurrency.lockutils [None req-92a13bed-c081-4c7b-87f3-bafebefb79f2 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:405
Oct 09 16:48:47 compute-0 nova_compute[117331]: 2025-10-09 16:48:47.818 2 DEBUG oslo_concurrency.lockutils [None req-92a13bed-c081-4c7b-87f3-bafebefb79f2 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:410
Oct 09 16:48:47 compute-0 nova_compute[117331]: 2025-10-09 16:48:47.818 2 DEBUG oslo_concurrency.lockutils [None req-92a13bed-c081-4c7b-87f3-bafebefb79f2 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:424
Oct 09 16:48:47 compute-0 nova_compute[117331]: 2025-10-09 16:48:47.818 2 DEBUG nova.compute.resource_tracker [None req-92a13bed-c081-4c7b-87f3-bafebefb79f2 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.12/site-packages/nova/compute/resource_tracker.py:937
Oct 09 16:48:47 compute-0 nova_compute[117331]: 2025-10-09 16:48:47.967 2 WARNING nova.virt.libvirt.driver [None req-92a13bed-c081-4c7b-87f3-bafebefb79f2 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Oct 09 16:48:47 compute-0 nova_compute[117331]: 2025-10-09 16:48:47.968 2 DEBUG oslo_concurrency.processutils [None req-92a13bed-c081-4c7b-87f3-bafebefb79f2 - - - - - -] Running cmd (subprocess): env LANG=C uptime execute /usr/lib/python3.12/site-packages/oslo_concurrency/processutils.py:349
Oct 09 16:48:48 compute-0 nova_compute[117331]: 2025-10-09 16:48:48.007 2 DEBUG oslo_concurrency.processutils [None req-92a13bed-c081-4c7b-87f3-bafebefb79f2 - - - - - -] CMD "env LANG=C uptime" returned: 0 in 0.039s execute /usr/lib/python3.12/site-packages/oslo_concurrency/processutils.py:372
Oct 09 16:48:48 compute-0 nova_compute[117331]: 2025-10-09 16:48:48.009 2 DEBUG nova.compute.resource_tracker [None req-92a13bed-c081-4c7b-87f3-bafebefb79f2 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=6019MB free_disk=73.2380256652832GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.12/site-packages/nova/compute/resource_tracker.py:1136
Oct 09 16:48:48 compute-0 nova_compute[117331]: 2025-10-09 16:48:48.009 2 DEBUG oslo_concurrency.lockutils [None req-92a13bed-c081-4c7b-87f3-bafebefb79f2 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:405
Oct 09 16:48:48 compute-0 nova_compute[117331]: 2025-10-09 16:48:48.010 2 DEBUG oslo_concurrency.lockutils [None req-92a13bed-c081-4c7b-87f3-bafebefb79f2 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.001s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:410
Oct 09 16:48:48 compute-0 sshd-session[160087]: Invalid user ramp from 120.48.149.106 port 48888
Oct 09 16:48:48 compute-0 nova_compute[117331]: 2025-10-09 16:48:48.290 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 09 16:48:48 compute-0 podman[160090]: 2025-10-09 16:48:48.355993763 +0000 UTC m=+0.104126081 container health_status 270830b5167cafbab735d8118980f26987e673cd0fa464dd2202394830bcca66 (image=38.102.83.66:5001/podified-master-centos10/openstack-multipathd:watcher_latest, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.4, managed_by=edpm_ansible, org.label-schema.build-date=20251007, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.vendor=CentOS, config_id=multipathd, container_name=multipathd, tcib_build_tag=watcher_latest, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': '38.102.83.66:5001/podified-master-centos10/openstack-multipathd:watcher_latest', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']})
Oct 09 16:48:48 compute-0 sshd-session[160087]: pam_unix(sshd:auth): check pass; user unknown
Oct 09 16:48:48 compute-0 sshd-session[160087]: pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=120.48.149.106
Oct 09 16:48:49 compute-0 nova_compute[117331]: 2025-10-09 16:48:49.263 2 DEBUG nova.compute.resource_tracker [None req-92a13bed-c081-4c7b-87f3-bafebefb79f2 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.12/site-packages/nova/compute/resource_tracker.py:1159
Oct 09 16:48:49 compute-0 nova_compute[117331]: 2025-10-09 16:48:49.264 2 DEBUG nova.compute.resource_tracker [None req-92a13bed-c081-4c7b-87f3-bafebefb79f2 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=79GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] stats={'failed_builds': '0', 'uptime': ' 16:48:47 up 57 min,  0 user,  load average: 0.36, 0.39, 0.41\n'} _report_final_resource_view /usr/lib/python3.12/site-packages/nova/compute/resource_tracker.py:1168
Oct 09 16:48:49 compute-0 nova_compute[117331]: 2025-10-09 16:48:49.294 2 DEBUG nova.compute.provider_tree [None req-92a13bed-c081-4c7b-87f3-bafebefb79f2 - - - - - -] Inventory has not changed in ProviderTree for provider: 593051b8-2000-437f-a915-2616fc8b1671 update_inventory /usr/lib/python3.12/site-packages/nova/compute/provider_tree.py:180
Oct 09 16:48:49 compute-0 nova_compute[117331]: 2025-10-09 16:48:49.802 2 DEBUG nova.scheduler.client.report [None req-92a13bed-c081-4c7b-87f3-bafebefb79f2 - - - - - -] Inventory has not changed for provider 593051b8-2000-437f-a915-2616fc8b1671 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 79, 'reserved': 1, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.12/site-packages/nova/scheduler/client/report.py:958
Oct 09 16:48:50 compute-0 nova_compute[117331]: 2025-10-09 16:48:50.321 2 DEBUG nova.compute.resource_tracker [None req-92a13bed-c081-4c7b-87f3-bafebefb79f2 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.12/site-packages/nova/compute/resource_tracker.py:1097
Oct 09 16:48:50 compute-0 nova_compute[117331]: 2025-10-09 16:48:50.322 2 DEBUG oslo_concurrency.lockutils [None req-92a13bed-c081-4c7b-87f3-bafebefb79f2 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 2.312s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:424
Oct 09 16:48:50 compute-0 sshd-session[160087]: Failed password for invalid user ramp from 120.48.149.106 port 48888 ssh2
Oct 09 16:48:52 compute-0 nova_compute[117331]: 2025-10-09 16:48:52.241 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 09 16:48:52 compute-0 podman[160110]: 2025-10-09 16:48:52.861020212 +0000 UTC m=+0.082656480 container health_status 86aa93e887899e54d383b0fc17fb3c5637680d1d8e146a130a557c837f0260fe (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter)
Oct 09 16:48:53 compute-0 nova_compute[117331]: 2025-10-09 16:48:53.292 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 09 16:48:55 compute-0 nova_compute[117331]: 2025-10-09 16:48:55.323 2 DEBUG oslo_service.periodic_task [None req-92a13bed-c081-4c7b-87f3-bafebefb79f2 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.12/site-packages/oslo_service/periodic_task.py:210
Oct 09 16:48:55 compute-0 nova_compute[117331]: 2025-10-09 16:48:55.838 2 DEBUG oslo_service.periodic_task [None req-92a13bed-c081-4c7b-87f3-bafebefb79f2 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.12/site-packages/oslo_service/periodic_task.py:210
Oct 09 16:48:55 compute-0 nova_compute[117331]: 2025-10-09 16:48:55.838 2 DEBUG oslo_service.periodic_task [None req-92a13bed-c081-4c7b-87f3-bafebefb79f2 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.12/site-packages/oslo_service/periodic_task.py:210
Oct 09 16:48:57 compute-0 nova_compute[117331]: 2025-10-09 16:48:57.244 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 09 16:48:58 compute-0 nova_compute[117331]: 2025-10-09 16:48:58.293 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 09 16:48:58 compute-0 podman[160134]: 2025-10-09 16:48:58.844684104 +0000 UTC m=+0.071125119 container health_status 0e966c7a283689daf23c5db5017f23f685ee836319240188a9f70ddd246563c5 (image=38.102.83.66:5001/podified-master-centos10/openstack-neutron-metadata-agent-ovn:watcher_latest, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': '38.102.83.66:5001/podified-master-centos10/openstack-neutron-metadata-agent-ovn:watcher_latest', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.4, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251007, tcib_build_tag=watcher_latest, tcib_managed=true, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent)
Oct 09 16:48:58 compute-0 podman[160135]: 2025-10-09 16:48:58.863732976 +0000 UTC m=+0.083021990 container health_status 8227728d23f374cb9528be6ddf01dff38d8c792912a17b0d137932a18390b820 (image=38.102.83.66:5001/podified-master-centos10/openstack-iscsid:watcher_latest, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251007, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=watcher_latest, config_id=iscsid, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': '38.102.83.66:5001/podified-master-centos10/openstack-iscsid:watcher_latest', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 10 Base Image, container_name=iscsid, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, io.buildah.version=1.41.4, maintainer=OpenStack Kubernetes Operator team)
Oct 09 16:48:59 compute-0 podman[127775]: time="2025-10-09T16:48:59Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Oct 09 16:48:59 compute-0 podman[127775]: @ - - [09/Oct/2025:16:48:59 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 19526 "" "Go-http-client/1.1"
Oct 09 16:48:59 compute-0 podman[127775]: @ - - [09/Oct/2025:16:48:59 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 3035 "" "Go-http-client/1.1"
Oct 09 16:49:01 compute-0 openstack_network_exporter[129925]: ERROR   16:49:01 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Oct 09 16:49:01 compute-0 openstack_network_exporter[129925]: ERROR   16:49:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Oct 09 16:49:01 compute-0 openstack_network_exporter[129925]: ERROR   16:49:01 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Oct 09 16:49:01 compute-0 openstack_network_exporter[129925]: 
Oct 09 16:49:01 compute-0 openstack_network_exporter[129925]: ERROR   16:49:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Oct 09 16:49:01 compute-0 openstack_network_exporter[129925]: ERROR   16:49:01 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Oct 09 16:49:01 compute-0 openstack_network_exporter[129925]: 
Oct 09 16:49:02 compute-0 nova_compute[117331]: 2025-10-09 16:49:02.246 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 09 16:49:03 compute-0 nova_compute[117331]: 2025-10-09 16:49:03.296 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 09 16:49:07 compute-0 nova_compute[117331]: 2025-10-09 16:49:07.289 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 09 16:49:07 compute-0 podman[160175]: 2025-10-09 16:49:07.855521617 +0000 UTC m=+0.079496358 container health_status 64fa251112d01abb34ec1bb8e22019815ddcaaaa391ce5ec91517147ac5a9994 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., name=ubi9-minimal, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, io.buildah.version=1.33.7, container_name=openstack_network_exporter, version=9.6, config_id=edpm, io.openshift.tags=minimal rhel9, architecture=x86_64, com.redhat.component=ubi9-minimal-container, release=1755695350, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, vendor=Red Hat, Inc., build-date=2025-08-20T13:12:41, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., maintainer=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-type=git, url=https://catalog.redhat.com/en/search?searchType=containers, io.openshift.expose-services=, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., distribution-scope=public, managed_by=edpm_ansible)
Oct 09 16:49:07 compute-0 podman[160176]: 2025-10-09 16:49:07.915746544 +0000 UTC m=+0.129554657 container health_status d615e4f24ff62fbb14be4a63e6da568ec5b22309773c1c66103b6b4de64a8a2f (image=38.102.83.66:5001/podified-master-centos10/openstack-ovn-controller:watcher_latest, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': '38.102.83.66:5001/podified-master-centos10/openstack-ovn-controller:watcher_latest', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.4, maintainer=OpenStack Kubernetes Operator team, container_name=ovn_controller, org.label-schema.build-date=20251007, org.label-schema.schema-version=1.0, tcib_build_tag=watcher_latest, config_id=ovn_controller, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.vendor=CentOS)
Oct 09 16:49:08 compute-0 nova_compute[117331]: 2025-10-09 16:49:08.298 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 09 16:49:10 compute-0 sshd-session[160172]: error: kex_exchange_identification: read: Connection timed out
Oct 09 16:49:10 compute-0 sshd-session[160172]: banner exchange: Connection from 120.48.149.106 port 50238: Connection timed out
Oct 09 16:49:12 compute-0 nova_compute[117331]: 2025-10-09 16:49:12.292 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 09 16:49:13 compute-0 nova_compute[117331]: 2025-10-09 16:49:13.301 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 09 16:49:17 compute-0 nova_compute[117331]: 2025-10-09 16:49:17.294 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 09 16:49:18 compute-0 nova_compute[117331]: 2025-10-09 16:49:18.303 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 09 16:49:18 compute-0 podman[160224]: 2025-10-09 16:49:18.854984194 +0000 UTC m=+0.077751683 container health_status 270830b5167cafbab735d8118980f26987e673cd0fa464dd2202394830bcca66 (image=38.102.83.66:5001/podified-master-centos10/openstack-multipathd:watcher_latest, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': '38.102.83.66:5001/podified-master-centos10/openstack-multipathd:watcher_latest', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.build-date=20251007, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, managed_by=edpm_ansible, tcib_build_tag=watcher_latest, tcib_managed=true, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 10 Base Image, config_id=multipathd, io.buildah.version=1.41.4, org.label-schema.vendor=CentOS)
Oct 09 16:49:22 compute-0 nova_compute[117331]: 2025-10-09 16:49:22.299 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 09 16:49:23 compute-0 nova_compute[117331]: 2025-10-09 16:49:23.304 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 09 16:49:23 compute-0 podman[160244]: 2025-10-09 16:49:23.834706088 +0000 UTC m=+0.064942648 container health_status 86aa93e887899e54d383b0fc17fb3c5637680d1d8e146a130a557c837f0260fe (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible)
Oct 09 16:49:27 compute-0 nova_compute[117331]: 2025-10-09 16:49:27.302 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 09 16:49:28 compute-0 nova_compute[117331]: 2025-10-09 16:49:28.308 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 09 16:49:28 compute-0 sshd-session[160222]: ssh_dispatch_run_fatal: Connection from 120.48.149.106 port 51624: Connection timed out [preauth]
Oct 09 16:49:29 compute-0 podman[127775]: time="2025-10-09T16:49:29Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Oct 09 16:49:29 compute-0 podman[127775]: @ - - [09/Oct/2025:16:49:29 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 19526 "" "Go-http-client/1.1"
Oct 09 16:49:29 compute-0 podman[127775]: @ - - [09/Oct/2025:16:49:29 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 3038 "" "Go-http-client/1.1"
Oct 09 16:49:29 compute-0 podman[160268]: 2025-10-09 16:49:29.835218772 +0000 UTC m=+0.057295885 container health_status 0e966c7a283689daf23c5db5017f23f685ee836319240188a9f70ddd246563c5 (image=38.102.83.66:5001/podified-master-centos10/openstack-neutron-metadata-agent-ovn:watcher_latest, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, container_name=ovn_metadata_agent, io.buildah.version=1.41.4, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': '38.102.83.66:5001/podified-master-centos10/openstack-neutron-metadata-agent-ovn:watcher_latest', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, org.label-schema.build-date=20251007, tcib_build_tag=watcher_latest, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 10 Base Image)
Oct 09 16:49:29 compute-0 podman[160269]: 2025-10-09 16:49:29.857286491 +0000 UTC m=+0.070209139 container health_status 8227728d23f374cb9528be6ddf01dff38d8c792912a17b0d137932a18390b820 (image=38.102.83.66:5001/podified-master-centos10/openstack-iscsid:watcher_latest, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=watcher_latest, config_id=iscsid, io.buildah.version=1.41.4, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': '38.102.83.66:5001/podified-master-centos10/openstack-iscsid:watcher_latest', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, container_name=iscsid, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251007)
Oct 09 16:49:31 compute-0 openstack_network_exporter[129925]: ERROR   16:49:31 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Oct 09 16:49:31 compute-0 openstack_network_exporter[129925]: ERROR   16:49:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Oct 09 16:49:31 compute-0 openstack_network_exporter[129925]: ERROR   16:49:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Oct 09 16:49:31 compute-0 openstack_network_exporter[129925]: ERROR   16:49:31 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Oct 09 16:49:31 compute-0 openstack_network_exporter[129925]: 
Oct 09 16:49:31 compute-0 openstack_network_exporter[129925]: ERROR   16:49:31 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Oct 09 16:49:31 compute-0 openstack_network_exporter[129925]: 
Oct 09 16:49:32 compute-0 nova_compute[117331]: 2025-10-09 16:49:32.306 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 09 16:49:33 compute-0 nova_compute[117331]: 2025-10-09 16:49:33.310 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 09 16:49:35 compute-0 ovn_metadata_agent[28608]: 2025-10-09 16:49:35.359 28613 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:405
Oct 09 16:49:35 compute-0 ovn_metadata_agent[28608]: 2025-10-09 16:49:35.359 28613 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.000s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:410
Oct 09 16:49:35 compute-0 ovn_metadata_agent[28608]: 2025-10-09 16:49:35.359 28613 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:424
Oct 09 16:49:37 compute-0 nova_compute[117331]: 2025-10-09 16:49:37.308 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 09 16:49:38 compute-0 nova_compute[117331]: 2025-10-09 16:49:38.314 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 09 16:49:38 compute-0 podman[160307]: 2025-10-09 16:49:38.851278323 +0000 UTC m=+0.072552515 container health_status 64fa251112d01abb34ec1bb8e22019815ddcaaaa391ce5ec91517147ac5a9994 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, com.redhat.component=ubi9-minimal-container, container_name=openstack_network_exporter, managed_by=edpm_ansible, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, config_id=edpm, io.openshift.expose-services=, release=1755695350, architecture=x86_64, name=ubi9-minimal, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, version=9.6, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., maintainer=Red Hat, Inc., vendor=Red Hat, Inc., io.openshift.tags=minimal rhel9, io.buildah.version=1.33.7, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, build-date=2025-08-20T13:12:41, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., distribution-scope=public, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., url=https://catalog.redhat.com/en/search?searchType=containers)
Oct 09 16:49:38 compute-0 podman[160308]: 2025-10-09 16:49:38.896703604 +0000 UTC m=+0.116717105 container health_status d615e4f24ff62fbb14be4a63e6da568ec5b22309773c1c66103b6b4de64a8a2f (image=38.102.83.66:5001/podified-master-centos10/openstack-ovn-controller:watcher_latest, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251007, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=watcher_latest, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': '38.102.83.66:5001/podified-master-centos10/openstack-ovn-controller:watcher_latest', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, container_name=ovn_controller, io.buildah.version=1.41.4, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, managed_by=edpm_ansible)
Oct 09 16:49:41 compute-0 nova_compute[117331]: 2025-10-09 16:49:41.307 2 DEBUG oslo_service.periodic_task [None req-92a13bed-c081-4c7b-87f3-bafebefb79f2 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.12/site-packages/oslo_service/periodic_task.py:210
Oct 09 16:49:42 compute-0 nova_compute[117331]: 2025-10-09 16:49:42.307 2 DEBUG oslo_service.periodic_task [None req-92a13bed-c081-4c7b-87f3-bafebefb79f2 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.12/site-packages/oslo_service/periodic_task.py:210
Oct 09 16:49:42 compute-0 nova_compute[117331]: 2025-10-09 16:49:42.310 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 09 16:49:43 compute-0 nova_compute[117331]: 2025-10-09 16:49:43.317 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 09 16:49:45 compute-0 nova_compute[117331]: 2025-10-09 16:49:45.308 2 DEBUG oslo_service.periodic_task [None req-92a13bed-c081-4c7b-87f3-bafebefb79f2 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.12/site-packages/oslo_service/periodic_task.py:210
Oct 09 16:49:46 compute-0 nova_compute[117331]: 2025-10-09 16:49:46.307 2 DEBUG oslo_service.periodic_task [None req-92a13bed-c081-4c7b-87f3-bafebefb79f2 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.12/site-packages/oslo_service/periodic_task.py:210
Oct 09 16:49:46 compute-0 nova_compute[117331]: 2025-10-09 16:49:46.308 2 DEBUG nova.compute.manager [None req-92a13bed-c081-4c7b-87f3-bafebefb79f2 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.12/site-packages/nova/compute/manager.py:11228
Oct 09 16:49:47 compute-0 nova_compute[117331]: 2025-10-09 16:49:47.311 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 09 16:49:48 compute-0 nova_compute[117331]: 2025-10-09 16:49:48.319 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 09 16:49:49 compute-0 nova_compute[117331]: 2025-10-09 16:49:49.301 2 DEBUG oslo_service.periodic_task [None req-92a13bed-c081-4c7b-87f3-bafebefb79f2 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.12/site-packages/oslo_service/periodic_task.py:210
Oct 09 16:49:49 compute-0 nova_compute[117331]: 2025-10-09 16:49:49.305 2 DEBUG oslo_service.periodic_task [None req-92a13bed-c081-4c7b-87f3-bafebefb79f2 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.12/site-packages/oslo_service/periodic_task.py:210
Oct 09 16:49:49 compute-0 nova_compute[117331]: 2025-10-09 16:49:49.816 2 DEBUG oslo_concurrency.lockutils [None req-92a13bed-c081-4c7b-87f3-bafebefb79f2 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:405
Oct 09 16:49:49 compute-0 nova_compute[117331]: 2025-10-09 16:49:49.817 2 DEBUG oslo_concurrency.lockutils [None req-92a13bed-c081-4c7b-87f3-bafebefb79f2 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:410
Oct 09 16:49:49 compute-0 nova_compute[117331]: 2025-10-09 16:49:49.817 2 DEBUG oslo_concurrency.lockutils [None req-92a13bed-c081-4c7b-87f3-bafebefb79f2 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:424
Oct 09 16:49:49 compute-0 nova_compute[117331]: 2025-10-09 16:49:49.817 2 DEBUG nova.compute.resource_tracker [None req-92a13bed-c081-4c7b-87f3-bafebefb79f2 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.12/site-packages/nova/compute/resource_tracker.py:937
Oct 09 16:49:49 compute-0 podman[160354]: 2025-10-09 16:49:49.852744095 +0000 UTC m=+0.087735284 container health_status 270830b5167cafbab735d8118980f26987e673cd0fa464dd2202394830bcca66 (image=38.102.83.66:5001/podified-master-centos10/openstack-multipathd:watcher_latest, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': '38.102.83.66:5001/podified-master-centos10/openstack-multipathd:watcher_latest', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, managed_by=edpm_ansible, container_name=multipathd, tcib_build_tag=watcher_latest, tcib_managed=true, io.buildah.version=1.41.4, org.label-schema.build-date=20251007, org.label-schema.schema-version=1.0, config_id=multipathd, maintainer=OpenStack Kubernetes Operator team)
Oct 09 16:49:50 compute-0 nova_compute[117331]: 2025-10-09 16:49:50.018 2 WARNING nova.virt.libvirt.driver [None req-92a13bed-c081-4c7b-87f3-bafebefb79f2 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Oct 09 16:49:50 compute-0 nova_compute[117331]: 2025-10-09 16:49:50.020 2 DEBUG oslo_concurrency.processutils [None req-92a13bed-c081-4c7b-87f3-bafebefb79f2 - - - - - -] Running cmd (subprocess): env LANG=C uptime execute /usr/lib/python3.12/site-packages/oslo_concurrency/processutils.py:349
Oct 09 16:49:50 compute-0 nova_compute[117331]: 2025-10-09 16:49:50.041 2 DEBUG oslo_concurrency.processutils [None req-92a13bed-c081-4c7b-87f3-bafebefb79f2 - - - - - -] CMD "env LANG=C uptime" returned: 0 in 0.021s execute /usr/lib/python3.12/site-packages/oslo_concurrency/processutils.py:372
Oct 09 16:49:50 compute-0 nova_compute[117331]: 2025-10-09 16:49:50.042 2 DEBUG nova.compute.resource_tracker [None req-92a13bed-c081-4c7b-87f3-bafebefb79f2 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=6072MB free_disk=73.23800277709961GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": 
"label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.12/site-packages/nova/compute/resource_tracker.py:1136
Oct 09 16:49:50 compute-0 nova_compute[117331]: 2025-10-09 16:49:50.043 2 DEBUG oslo_concurrency.lockutils [None req-92a13bed-c081-4c7b-87f3-bafebefb79f2 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:405
Oct 09 16:49:50 compute-0 nova_compute[117331]: 2025-10-09 16:49:50.044 2 DEBUG oslo_concurrency.lockutils [None req-92a13bed-c081-4c7b-87f3-bafebefb79f2 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.001s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:410
Oct 09 16:49:51 compute-0 nova_compute[117331]: 2025-10-09 16:49:51.174 2 DEBUG nova.compute.resource_tracker [None req-92a13bed-c081-4c7b-87f3-bafebefb79f2 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.12/site-packages/nova/compute/resource_tracker.py:1159
Oct 09 16:49:51 compute-0 nova_compute[117331]: 2025-10-09 16:49:51.174 2 DEBUG nova.compute.resource_tracker [None req-92a13bed-c081-4c7b-87f3-bafebefb79f2 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=79GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] stats={'failed_builds': '0', 'uptime': ' 16:49:50 up 58 min,  0 user,  load average: 0.19, 0.33, 0.39\n'} _report_final_resource_view /usr/lib/python3.12/site-packages/nova/compute/resource_tracker.py:1168
Oct 09 16:49:51 compute-0 nova_compute[117331]: 2025-10-09 16:49:51.209 2 DEBUG nova.compute.provider_tree [None req-92a13bed-c081-4c7b-87f3-bafebefb79f2 - - - - - -] Inventory has not changed in ProviderTree for provider: 593051b8-2000-437f-a915-2616fc8b1671 update_inventory /usr/lib/python3.12/site-packages/nova/compute/provider_tree.py:180
Oct 09 16:49:51 compute-0 nova_compute[117331]: 2025-10-09 16:49:51.722 2 DEBUG nova.scheduler.client.report [None req-92a13bed-c081-4c7b-87f3-bafebefb79f2 - - - - - -] Inventory has not changed for provider 593051b8-2000-437f-a915-2616fc8b1671 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 79, 'reserved': 1, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.12/site-packages/nova/scheduler/client/report.py:958
Oct 09 16:49:52 compute-0 sshd-session[160377]: Invalid user admin from 196.251.88.103 port 49430
Oct 09 16:49:52 compute-0 nova_compute[117331]: 2025-10-09 16:49:52.233 2 DEBUG nova.compute.resource_tracker [None req-92a13bed-c081-4c7b-87f3-bafebefb79f2 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.12/site-packages/nova/compute/resource_tracker.py:1097
Oct 09 16:49:52 compute-0 nova_compute[117331]: 2025-10-09 16:49:52.233 2 DEBUG oslo_concurrency.lockutils [None req-92a13bed-c081-4c7b-87f3-bafebefb79f2 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 2.190s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:424
Oct 09 16:49:52 compute-0 nova_compute[117331]: 2025-10-09 16:49:52.314 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 09 16:49:52 compute-0 sshd-session[160377]: pam_unix(sshd:auth): check pass; user unknown
Oct 09 16:49:52 compute-0 sshd-session[160377]: pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=196.251.88.103
Oct 09 16:49:53 compute-0 nova_compute[117331]: 2025-10-09 16:49:53.330 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 09 16:49:53 compute-0 sshd-session[160352]: Invalid user testuser from 120.48.149.106 port 53886
Oct 09 16:49:53 compute-0 sshd-session[160352]: pam_unix(sshd:auth): check pass; user unknown
Oct 09 16:49:53 compute-0 sshd-session[160352]: pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=120.48.149.106
Oct 09 16:49:54 compute-0 podman[160379]: 2025-10-09 16:49:54.100198654 +0000 UTC m=+0.094212969 container health_status 86aa93e887899e54d383b0fc17fb3c5637680d1d8e146a130a557c837f0260fe (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>)
Oct 09 16:49:55 compute-0 nova_compute[117331]: 2025-10-09 16:49:55.239 2 DEBUG oslo_service.periodic_task [None req-92a13bed-c081-4c7b-87f3-bafebefb79f2 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.12/site-packages/oslo_service/periodic_task.py:210
Oct 09 16:49:55 compute-0 nova_compute[117331]: 2025-10-09 16:49:55.240 2 DEBUG oslo_service.periodic_task [None req-92a13bed-c081-4c7b-87f3-bafebefb79f2 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.12/site-packages/oslo_service/periodic_task.py:210
Oct 09 16:49:55 compute-0 sshd-session[160377]: Failed password for invalid user admin from 196.251.88.103 port 49430 ssh2
Oct 09 16:49:55 compute-0 sshd-session[160352]: Failed password for invalid user testuser from 120.48.149.106 port 53886 ssh2
Oct 09 16:49:57 compute-0 nova_compute[117331]: 2025-10-09 16:49:57.315 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 09 16:49:57 compute-0 sshd-session[160377]: Connection closed by invalid user admin 196.251.88.103 port 49430 [preauth]
Oct 09 16:49:58 compute-0 nova_compute[117331]: 2025-10-09 16:49:58.332 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 09 16:49:59 compute-0 podman[127775]: time="2025-10-09T16:49:59Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Oct 09 16:49:59 compute-0 podman[127775]: @ - - [09/Oct/2025:16:49:59 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 19526 "" "Go-http-client/1.1"
Oct 09 16:49:59 compute-0 podman[127775]: @ - - [09/Oct/2025:16:49:59 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 3038 "" "Go-http-client/1.1"
Oct 09 16:50:00 compute-0 systemd[1]: Starting system activity accounting tool...
Oct 09 16:50:00 compute-0 sshd-session[160403]: Invalid user dolphinscheduler from 196.251.88.103 port 43802
Oct 09 16:50:00 compute-0 systemd[1]: sysstat-collect.service: Deactivated successfully.
Oct 09 16:50:00 compute-0 systemd[1]: Finished system activity accounting tool.
Oct 09 16:50:00 compute-0 podman[160405]: 2025-10-09 16:50:00.411046625 +0000 UTC m=+0.090334634 container health_status 0e966c7a283689daf23c5db5017f23f685ee836319240188a9f70ddd246563c5 (image=38.102.83.66:5001/podified-master-centos10/openstack-neutron-metadata-agent-ovn:watcher_latest, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 10 Base Image, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': '38.102.83.66:5001/podified-master-centos10/openstack-neutron-metadata-agent-ovn:watcher_latest', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.vendor=CentOS, tcib_build_tag=watcher_latest, tcib_managed=true, config_id=ovn_metadata_agent, 
maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, io.buildah.version=1.41.4, container_name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.build-date=20251007, org.label-schema.license=GPLv2)
Oct 09 16:50:00 compute-0 podman[160406]: 2025-10-09 16:50:00.41619674 +0000 UTC m=+0.091192253 container health_status 8227728d23f374cb9528be6ddf01dff38d8c792912a17b0d137932a18390b820 (image=38.102.83.66:5001/podified-master-centos10/openstack-iscsid:watcher_latest, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': '38.102.83.66:5001/podified-master-centos10/openstack-iscsid:watcher_latest', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, io.buildah.version=1.41.4, org.label-schema.license=GPLv2, config_id=iscsid, org.label-schema.schema-version=1.0, tcib_build_tag=watcher_latest, tcib_managed=true, container_name=iscsid, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251007)
Oct 09 16:50:01 compute-0 sshd-session[160403]: pam_unix(sshd:auth): check pass; user unknown
Oct 09 16:50:01 compute-0 sshd-session[160403]: pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=196.251.88.103
Oct 09 16:50:01 compute-0 openstack_network_exporter[129925]: ERROR   16:50:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Oct 09 16:50:01 compute-0 openstack_network_exporter[129925]: ERROR   16:50:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Oct 09 16:50:01 compute-0 openstack_network_exporter[129925]: ERROR   16:50:01 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Oct 09 16:50:01 compute-0 openstack_network_exporter[129925]: ERROR   16:50:01 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Oct 09 16:50:01 compute-0 openstack_network_exporter[129925]: 
Oct 09 16:50:01 compute-0 openstack_network_exporter[129925]: ERROR   16:50:01 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Oct 09 16:50:01 compute-0 openstack_network_exporter[129925]: 
Oct 09 16:50:02 compute-0 nova_compute[117331]: 2025-10-09 16:50:02.319 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 09 16:50:02 compute-0 sshd-session[160403]: Failed password for invalid user dolphinscheduler from 196.251.88.103 port 43802 ssh2
Oct 09 16:50:03 compute-0 nova_compute[117331]: 2025-10-09 16:50:03.335 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 09 16:50:05 compute-0 sshd-session[160403]: Connection closed by invalid user dolphinscheduler 196.251.88.103 port 43802 [preauth]
Oct 09 16:50:07 compute-0 nova_compute[117331]: 2025-10-09 16:50:07.321 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 09 16:50:08 compute-0 nova_compute[117331]: 2025-10-09 16:50:08.339 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 09 16:50:09 compute-0 sshd-session[160444]: Invalid user dev from 196.251.88.103 port 38170
Oct 09 16:50:09 compute-0 podman[160447]: 2025-10-09 16:50:09.220223413 +0000 UTC m=+0.090186940 container health_status 64fa251112d01abb34ec1bb8e22019815ddcaaaa391ce5ec91517147ac5a9994 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, io.openshift.tags=minimal rhel9, build-date=2025-08-20T13:12:41, com.redhat.component=ubi9-minimal-container, url=https://catalog.redhat.com/en/search?searchType=containers, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, io.openshift.expose-services=, name=ubi9-minimal, version=9.6, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, container_name=openstack_network_exporter, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., io.buildah.version=1.33.7, distribution-scope=public, maintainer=Red Hat, Inc., io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vcs-type=git, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, managed_by=edpm_ansible, config_id=edpm, release=1755695350, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., vendor=Red Hat, Inc., architecture=x86_64)
Oct 09 16:50:09 compute-0 podman[160448]: 2025-10-09 16:50:09.244974971 +0000 UTC m=+0.111619452 container health_status d615e4f24ff62fbb14be4a63e6da568ec5b22309773c1c66103b6b4de64a8a2f (image=38.102.83.66:5001/podified-master-centos10/openstack-ovn-controller:watcher_latest, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_controller, org.label-schema.build-date=20251007, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.41.4, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.vendor=CentOS, container_name=ovn_controller, tcib_build_tag=watcher_latest, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': '38.102.83.66:5001/podified-master-centos10/openstack-ovn-controller:watcher_latest', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']})
Oct 09 16:50:09 compute-0 sshd-session[160444]: pam_unix(sshd:auth): check pass; user unknown
Oct 09 16:50:09 compute-0 sshd-session[160444]: pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=196.251.88.103
Oct 09 16:50:12 compute-0 sshd-session[160444]: Failed password for invalid user dev from 196.251.88.103 port 38170 ssh2
Oct 09 16:50:12 compute-0 nova_compute[117331]: 2025-10-09 16:50:12.322 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 09 16:50:13 compute-0 sshd-session[160444]: Connection closed by invalid user dev 196.251.88.103 port 38170 [preauth]
Oct 09 16:50:13 compute-0 nova_compute[117331]: 2025-10-09 16:50:13.341 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 09 16:50:15 compute-0 sshd-session[160493]: Invalid user deploy from 196.251.88.103 port 60766
Oct 09 16:50:15 compute-0 sshd-session[160493]: pam_unix(sshd:auth): check pass; user unknown
Oct 09 16:50:15 compute-0 sshd-session[160493]: pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=196.251.88.103
Oct 09 16:50:17 compute-0 sshd-session[160493]: Failed password for invalid user deploy from 196.251.88.103 port 60766 ssh2
Oct 09 16:50:17 compute-0 nova_compute[117331]: 2025-10-09 16:50:17.325 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 09 16:50:17 compute-0 sshd-session[160493]: Connection closed by invalid user deploy 196.251.88.103 port 60766 [preauth]
Oct 09 16:50:18 compute-0 nova_compute[117331]: 2025-10-09 16:50:18.344 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 09 16:50:20 compute-0 podman[160497]: 2025-10-09 16:50:20.858177292 +0000 UTC m=+0.087852016 container health_status 270830b5167cafbab735d8118980f26987e673cd0fa464dd2202394830bcca66 (image=38.102.83.66:5001/podified-master-centos10/openstack-multipathd:watcher_latest, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, container_name=multipathd, managed_by=edpm_ansible, tcib_build_tag=watcher_latest, tcib_managed=true, org.label-schema.build-date=20251007, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 10 Base Image, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': '38.102.83.66:5001/podified-master-centos10/openstack-multipathd:watcher_latest', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, io.buildah.version=1.41.4, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Oct 09 16:50:22 compute-0 nova_compute[117331]: 2025-10-09 16:50:22.326 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 09 16:50:23 compute-0 nova_compute[117331]: 2025-10-09 16:50:23.349 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 09 16:50:23 compute-0 unix_chkpwd[160518]: password check failed for user (ftp)
Oct 09 16:50:23 compute-0 sshd-session[160495]: pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=196.251.88.103  user=ftp
Oct 09 16:50:24 compute-0 podman[160519]: 2025-10-09 16:50:24.840803387 +0000 UTC m=+0.073691557 container health_status 86aa93e887899e54d383b0fc17fb3c5637680d1d8e146a130a557c837f0260fe (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible)
Oct 09 16:50:25 compute-0 sshd-session[160495]: Failed password for ftp from 196.251.88.103 port 55124 ssh2
Oct 09 16:50:26 compute-0 sshd-session[160495]: Connection closed by authenticating user ftp 196.251.88.103 port 55124 [preauth]
Oct 09 16:50:27 compute-0 nova_compute[117331]: 2025-10-09 16:50:27.328 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 09 16:50:28 compute-0 sshd-session[160542]: Invalid user plex from 196.251.88.103 port 49492
Oct 09 16:50:28 compute-0 sshd-session[160542]: pam_unix(sshd:auth): check pass; user unknown
Oct 09 16:50:28 compute-0 sshd-session[160542]: pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=196.251.88.103
Oct 09 16:50:28 compute-0 nova_compute[117331]: 2025-10-09 16:50:28.350 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 09 16:50:29 compute-0 podman[127775]: time="2025-10-09T16:50:29Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Oct 09 16:50:29 compute-0 podman[127775]: @ - - [09/Oct/2025:16:50:29 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 19526 "" "Go-http-client/1.1"
Oct 09 16:50:29 compute-0 podman[127775]: @ - - [09/Oct/2025:16:50:29 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 3031 "" "Go-http-client/1.1"
Oct 09 16:50:30 compute-0 podman[160545]: 2025-10-09 16:50:30.879675636 +0000 UTC m=+0.097346037 container health_status 8227728d23f374cb9528be6ddf01dff38d8c792912a17b0d137932a18390b820 (image=38.102.83.66:5001/podified-master-centos10/openstack-iscsid:watcher_latest, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.4, managed_by=edpm_ansible, org.label-schema.license=GPLv2, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': '38.102.83.66:5001/podified-master-centos10/openstack-iscsid:watcher_latest', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, config_id=iscsid, org.label-schema.build-date=20251007, container_name=iscsid, org.label-schema.name=CentOS Stream 10 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=watcher_latest, tcib_managed=true)
Oct 09 16:50:30 compute-0 podman[160544]: 2025-10-09 16:50:30.879864922 +0000 UTC m=+0.103042078 container health_status 0e966c7a283689daf23c5db5017f23f685ee836319240188a9f70ddd246563c5 (image=38.102.83.66:5001/podified-master-centos10/openstack-neutron-metadata-agent-ovn:watcher_latest, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_metadata_agent, org.label-schema.build-date=20251007, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': '38.102.83.66:5001/podified-master-centos10/openstack-neutron-metadata-agent-ovn:watcher_latest', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', 
'/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, managed_by=edpm_ansible, tcib_build_tag=watcher_latest, container_name=ovn_metadata_agent, io.buildah.version=1.41.4, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0)
Oct 09 16:50:31 compute-0 sshd-session[160542]: Failed password for invalid user plex from 196.251.88.103 port 49492 ssh2
Oct 09 16:50:31 compute-0 openstack_network_exporter[129925]: ERROR   16:50:31 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Oct 09 16:50:31 compute-0 openstack_network_exporter[129925]: ERROR   16:50:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Oct 09 16:50:31 compute-0 openstack_network_exporter[129925]: ERROR   16:50:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Oct 09 16:50:31 compute-0 openstack_network_exporter[129925]: ERROR   16:50:31 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Oct 09 16:50:31 compute-0 openstack_network_exporter[129925]: 
Oct 09 16:50:31 compute-0 openstack_network_exporter[129925]: ERROR   16:50:31 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Oct 09 16:50:31 compute-0 openstack_network_exporter[129925]: 
Oct 09 16:50:32 compute-0 nova_compute[117331]: 2025-10-09 16:50:32.330 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 09 16:50:32 compute-0 sshd-session[160542]: Connection closed by invalid user plex 196.251.88.103 port 49492 [preauth]
Oct 09 16:50:33 compute-0 sshd-session[160352]: ssh_dispatch_run_fatal: Connection from invalid user testuser 120.48.149.106 port 53886: Connection timed out [preauth]
Oct 09 16:50:33 compute-0 nova_compute[117331]: 2025-10-09 16:50:33.353 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 09 16:50:34 compute-0 sshd-session[160583]: Invalid user dmdba from 196.251.88.103 port 43856
Oct 09 16:50:34 compute-0 sshd-session[160583]: pam_unix(sshd:auth): check pass; user unknown
Oct 09 16:50:34 compute-0 sshd-session[160583]: pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=196.251.88.103
Oct 09 16:50:35 compute-0 ovn_metadata_agent[28608]: 2025-10-09 16:50:35.360 28613 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:405
Oct 09 16:50:35 compute-0 ovn_metadata_agent[28608]: 2025-10-09 16:50:35.360 28613 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.000s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:410
Oct 09 16:50:35 compute-0 ovn_metadata_agent[28608]: 2025-10-09 16:50:35.361 28613 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:424
Oct 09 16:50:36 compute-0 sshd-session[160583]: Failed password for invalid user dmdba from 196.251.88.103 port 43856 ssh2
Oct 09 16:50:37 compute-0 nova_compute[117331]: 2025-10-09 16:50:37.333 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 09 16:50:37 compute-0 sshd-session[160583]: Connection closed by invalid user dmdba 196.251.88.103 port 43856 [preauth]
Oct 09 16:50:38 compute-0 nova_compute[117331]: 2025-10-09 16:50:38.356 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 09 16:50:39 compute-0 sshd-session[160586]: Invalid user esuser from 196.251.88.103 port 38220
Oct 09 16:50:39 compute-0 podman[160588]: 2025-10-09 16:50:39.864164971 +0000 UTC m=+0.083719584 container health_status 64fa251112d01abb34ec1bb8e22019815ddcaaaa391ce5ec91517147ac5a9994 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, io.openshift.tags=minimal rhel9, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, container_name=openstack_network_exporter, vcs-type=git, build-date=2025-08-20T13:12:41, config_id=edpm, maintainer=Red Hat, Inc., architecture=x86_64, com.redhat.component=ubi9-minimal-container, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., name=ubi9-minimal, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., url=https://catalog.redhat.com/en/search?searchType=containers, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', 
'/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, managed_by=edpm_ansible, io.openshift.expose-services=, release=1755695350, io.buildah.version=1.33.7, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vendor=Red Hat, Inc., version=9.6, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, distribution-scope=public)
Oct 09 16:50:39 compute-0 podman[160589]: 2025-10-09 16:50:39.875313506 +0000 UTC m=+0.097220354 container health_status d615e4f24ff62fbb14be4a63e6da568ec5b22309773c1c66103b6b4de64a8a2f (image=38.102.83.66:5001/podified-master-centos10/openstack-ovn-controller:watcher_latest, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': '38.102.83.66:5001/podified-master-centos10/openstack-ovn-controller:watcher_latest', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, managed_by=edpm_ansible, config_id=ovn_controller, org.label-schema.build-date=20251007, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, tcib_managed=true, io.buildah.version=1.41.4, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_build_tag=watcher_latest)
Oct 09 16:50:39 compute-0 sshd-session[160586]: pam_unix(sshd:auth): check pass; user unknown
Oct 09 16:50:39 compute-0 sshd-session[160586]: pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=196.251.88.103
Oct 09 16:50:42 compute-0 nova_compute[117331]: 2025-10-09 16:50:42.307 2 DEBUG oslo_service.periodic_task [None req-92a13bed-c081-4c7b-87f3-bafebefb79f2 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.12/site-packages/oslo_service/periodic_task.py:210
Oct 09 16:50:42 compute-0 nova_compute[117331]: 2025-10-09 16:50:42.334 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 09 16:50:42 compute-0 sshd-session[160586]: Failed password for invalid user esuser from 196.251.88.103 port 38220 ssh2
Oct 09 16:50:43 compute-0 sshd-session[160586]: Connection closed by invalid user esuser 196.251.88.103 port 38220 [preauth]
Oct 09 16:50:43 compute-0 nova_compute[117331]: 2025-10-09 16:50:43.306 2 DEBUG oslo_service.periodic_task [None req-92a13bed-c081-4c7b-87f3-bafebefb79f2 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.12/site-packages/oslo_service/periodic_task.py:210
Oct 09 16:50:43 compute-0 nova_compute[117331]: 2025-10-09 16:50:43.357 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 09 16:50:45 compute-0 nova_compute[117331]: 2025-10-09 16:50:45.307 2 DEBUG oslo_service.periodic_task [None req-92a13bed-c081-4c7b-87f3-bafebefb79f2 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.12/site-packages/oslo_service/periodic_task.py:210
Oct 09 16:50:45 compute-0 unix_chkpwd[160633]: password check failed for user (root)
Oct 09 16:50:45 compute-0 sshd-session[160631]: pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=196.251.88.103  user=root
Oct 09 16:50:47 compute-0 nova_compute[117331]: 2025-10-09 16:50:47.306 2 DEBUG oslo_service.periodic_task [None req-92a13bed-c081-4c7b-87f3-bafebefb79f2 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.12/site-packages/oslo_service/periodic_task.py:210
Oct 09 16:50:47 compute-0 nova_compute[117331]: 2025-10-09 16:50:47.306 2 DEBUG nova.compute.manager [None req-92a13bed-c081-4c7b-87f3-bafebefb79f2 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.12/site-packages/nova/compute/manager.py:11228
Oct 09 16:50:47 compute-0 nova_compute[117331]: 2025-10-09 16:50:47.336 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 09 16:50:47 compute-0 sshd-session[160631]: Failed password for root from 196.251.88.103 port 60812 ssh2
Oct 09 16:50:48 compute-0 sshd-session[160631]: Connection closed by authenticating user root 196.251.88.103 port 60812 [preauth]
Oct 09 16:50:48 compute-0 nova_compute[117331]: 2025-10-09 16:50:48.359 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 09 16:50:49 compute-0 sshd[52903]: Timeout before authentication for connection from 120.48.149.106 to 38.102.83.110, pid = 160087
Oct 09 16:50:50 compute-0 nova_compute[117331]: 2025-10-09 16:50:50.302 2 DEBUG oslo_service.periodic_task [None req-92a13bed-c081-4c7b-87f3-bafebefb79f2 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.12/site-packages/oslo_service/periodic_task.py:210
Oct 09 16:50:50 compute-0 nova_compute[117331]: 2025-10-09 16:50:50.305 2 DEBUG oslo_service.periodic_task [None req-92a13bed-c081-4c7b-87f3-bafebefb79f2 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.12/site-packages/oslo_service/periodic_task.py:210
Oct 09 16:50:50 compute-0 nova_compute[117331]: 2025-10-09 16:50:50.819 2 DEBUG oslo_concurrency.lockutils [None req-92a13bed-c081-4c7b-87f3-bafebefb79f2 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:405
Oct 09 16:50:50 compute-0 nova_compute[117331]: 2025-10-09 16:50:50.820 2 DEBUG oslo_concurrency.lockutils [None req-92a13bed-c081-4c7b-87f3-bafebefb79f2 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:410
Oct 09 16:50:50 compute-0 nova_compute[117331]: 2025-10-09 16:50:50.820 2 DEBUG oslo_concurrency.lockutils [None req-92a13bed-c081-4c7b-87f3-bafebefb79f2 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:424
Oct 09 16:50:50 compute-0 nova_compute[117331]: 2025-10-09 16:50:50.820 2 DEBUG nova.compute.resource_tracker [None req-92a13bed-c081-4c7b-87f3-bafebefb79f2 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.12/site-packages/nova/compute/resource_tracker.py:937
Oct 09 16:50:51 compute-0 nova_compute[117331]: 2025-10-09 16:50:51.001 2 WARNING nova.virt.libvirt.driver [None req-92a13bed-c081-4c7b-87f3-bafebefb79f2 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Oct 09 16:50:51 compute-0 nova_compute[117331]: 2025-10-09 16:50:51.002 2 DEBUG oslo_concurrency.processutils [None req-92a13bed-c081-4c7b-87f3-bafebefb79f2 - - - - - -] Running cmd (subprocess): env LANG=C uptime execute /usr/lib/python3.12/site-packages/oslo_concurrency/processutils.py:349
Oct 09 16:50:51 compute-0 nova_compute[117331]: 2025-10-09 16:50:51.037 2 DEBUG oslo_concurrency.processutils [None req-92a13bed-c081-4c7b-87f3-bafebefb79f2 - - - - - -] CMD "env LANG=C uptime" returned: 0 in 0.034s execute /usr/lib/python3.12/site-packages/oslo_concurrency/processutils.py:372
Oct 09 16:50:51 compute-0 nova_compute[117331]: 2025-10-09 16:50:51.037 2 DEBUG nova.compute.resource_tracker [None req-92a13bed-c081-4c7b-87f3-bafebefb79f2 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=6085MB free_disk=73.23800277709961GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.12/site-packages/nova/compute/resource_tracker.py:1136
Oct 09 16:50:51 compute-0 nova_compute[117331]: 2025-10-09 16:50:51.038 2 DEBUG oslo_concurrency.lockutils [None req-92a13bed-c081-4c7b-87f3-bafebefb79f2 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:405
Oct 09 16:50:51 compute-0 nova_compute[117331]: 2025-10-09 16:50:51.038 2 DEBUG oslo_concurrency.lockutils [None req-92a13bed-c081-4c7b-87f3-bafebefb79f2 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:410
Oct 09 16:50:51 compute-0 podman[160635]: 2025-10-09 16:50:51.866280533 +0000 UTC m=+0.095101676 container health_status 270830b5167cafbab735d8118980f26987e673cd0fa464dd2202394830bcca66 (image=38.102.83.66:5001/podified-master-centos10/openstack-multipathd:watcher_latest, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 10 Base Image, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': '38.102.83.66:5001/podified-master-centos10/openstack-multipathd:watcher_latest', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd, tcib_build_tag=watcher_latest, org.label-schema.build-date=20251007, org.label-schema.license=GPLv2, config_id=multipathd, io.buildah.version=1.41.4, managed_by=edpm_ansible, org.label-schema.schema-version=1.0)
Oct 09 16:50:52 compute-0 nova_compute[117331]: 2025-10-09 16:50:52.173 2 DEBUG nova.compute.resource_tracker [None req-92a13bed-c081-4c7b-87f3-bafebefb79f2 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.12/site-packages/nova/compute/resource_tracker.py:1159
Oct 09 16:50:52 compute-0 nova_compute[117331]: 2025-10-09 16:50:52.173 2 DEBUG nova.compute.resource_tracker [None req-92a13bed-c081-4c7b-87f3-bafebefb79f2 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=79GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] stats={'failed_builds': '0', 'uptime': ' 16:50:51 up 59 min,  0 user,  load average: 0.07, 0.27, 0.36\n'} _report_final_resource_view /usr/lib/python3.12/site-packages/nova/compute/resource_tracker.py:1168
Oct 09 16:50:52 compute-0 nova_compute[117331]: 2025-10-09 16:50:52.328 2 DEBUG nova.compute.provider_tree [None req-92a13bed-c081-4c7b-87f3-bafebefb79f2 - - - - - -] Inventory has not changed in ProviderTree for provider: 593051b8-2000-437f-a915-2616fc8b1671 update_inventory /usr/lib/python3.12/site-packages/nova/compute/provider_tree.py:180
Oct 09 16:50:52 compute-0 unix_chkpwd[160658]: password check failed for user (root)
Oct 09 16:50:52 compute-0 sshd-session[160646]: pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=196.251.88.103  user=root
Oct 09 16:50:52 compute-0 nova_compute[117331]: 2025-10-09 16:50:52.339 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 09 16:50:52 compute-0 nova_compute[117331]: 2025-10-09 16:50:52.895 2 DEBUG nova.scheduler.client.report [None req-92a13bed-c081-4c7b-87f3-bafebefb79f2 - - - - - -] Inventory has not changed for provider 593051b8-2000-437f-a915-2616fc8b1671 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 79, 'reserved': 1, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.12/site-packages/nova/scheduler/client/report.py:958
Oct 09 16:50:53 compute-0 nova_compute[117331]: 2025-10-09 16:50:53.362 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 09 16:50:53 compute-0 nova_compute[117331]: 2025-10-09 16:50:53.406 2 DEBUG nova.compute.resource_tracker [None req-92a13bed-c081-4c7b-87f3-bafebefb79f2 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.12/site-packages/nova/compute/resource_tracker.py:1097
Oct 09 16:50:53 compute-0 nova_compute[117331]: 2025-10-09 16:50:53.407 2 DEBUG oslo_concurrency.lockutils [None req-92a13bed-c081-4c7b-87f3-bafebefb79f2 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 2.369s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:424
Oct 09 16:50:54 compute-0 sshd-session[160646]: Failed password for root from 196.251.88.103 port 55178 ssh2
Oct 09 16:50:55 compute-0 nova_compute[117331]: 2025-10-09 16:50:55.408 2 DEBUG oslo_service.periodic_task [None req-92a13bed-c081-4c7b-87f3-bafebefb79f2 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.12/site-packages/oslo_service/periodic_task.py:210
Oct 09 16:50:55 compute-0 nova_compute[117331]: 2025-10-09 16:50:55.409 2 DEBUG oslo_service.periodic_task [None req-92a13bed-c081-4c7b-87f3-bafebefb79f2 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.12/site-packages/oslo_service/periodic_task.py:210
Oct 09 16:50:55 compute-0 podman[160659]: 2025-10-09 16:50:55.857327246 +0000 UTC m=+0.079559102 container health_status 86aa93e887899e54d383b0fc17fb3c5637680d1d8e146a130a557c837f0260fe (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible)
Oct 09 16:50:56 compute-0 nova_compute[117331]: 2025-10-09 16:50:56.303 2 DEBUG oslo_service.periodic_task [None req-92a13bed-c081-4c7b-87f3-bafebefb79f2 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.12/site-packages/oslo_service/periodic_task.py:210
Oct 09 16:50:56 compute-0 sshd-session[160646]: Connection closed by authenticating user root 196.251.88.103 port 55178 [preauth]
Oct 09 16:50:57 compute-0 nova_compute[117331]: 2025-10-09 16:50:57.340 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 09 16:50:58 compute-0 sshd-session[160683]: Invalid user oracle from 196.251.88.103 port 49548
Oct 09 16:50:58 compute-0 sshd-session[160683]: pam_unix(sshd:auth): check pass; user unknown
Oct 09 16:50:58 compute-0 sshd-session[160683]: pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=196.251.88.103
Oct 09 16:50:58 compute-0 nova_compute[117331]: 2025-10-09 16:50:58.365 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 09 16:50:59 compute-0 podman[127775]: time="2025-10-09T16:50:59Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Oct 09 16:50:59 compute-0 podman[127775]: @ - - [09/Oct/2025:16:50:59 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 19526 "" "Go-http-client/1.1"
Oct 09 16:50:59 compute-0 podman[127775]: @ - - [09/Oct/2025:16:50:59 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 3037 "" "Go-http-client/1.1"
Oct 09 16:51:00 compute-0 sshd-session[160683]: Failed password for invalid user oracle from 196.251.88.103 port 49548 ssh2
Oct 09 16:51:01 compute-0 openstack_network_exporter[129925]: ERROR   16:51:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Oct 09 16:51:01 compute-0 openstack_network_exporter[129925]: ERROR   16:51:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Oct 09 16:51:01 compute-0 openstack_network_exporter[129925]: ERROR   16:51:01 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Oct 09 16:51:01 compute-0 openstack_network_exporter[129925]: ERROR   16:51:01 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Oct 09 16:51:01 compute-0 openstack_network_exporter[129925]: 
Oct 09 16:51:01 compute-0 openstack_network_exporter[129925]: ERROR   16:51:01 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Oct 09 16:51:01 compute-0 openstack_network_exporter[129925]: 
Oct 09 16:51:01 compute-0 podman[160685]: 2025-10-09 16:51:01.522929449 +0000 UTC m=+0.067930142 container health_status 0e966c7a283689daf23c5db5017f23f685ee836319240188a9f70ddd246563c5 (image=38.102.83.66:5001/podified-master-centos10/openstack-neutron-metadata-agent-ovn:watcher_latest, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.4, org.label-schema.build-date=20251007, tcib_build_tag=watcher_latest, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 10 Base Image, container_name=ovn_metadata_agent, org.label-schema.license=GPLv2, tcib_managed=true, config_id=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': '38.102.83.66:5001/podified-master-centos10/openstack-neutron-metadata-agent-ovn:watcher_latest', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']})
Oct 09 16:51:01 compute-0 podman[160686]: 2025-10-09 16:51:01.535757288 +0000 UTC m=+0.072036183 container health_status 8227728d23f374cb9528be6ddf01dff38d8c792912a17b0d137932a18390b820 (image=38.102.83.66:5001/podified-master-centos10/openstack-iscsid:watcher_latest, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, tcib_build_tag=watcher_latest, io.buildah.version=1.41.4, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.schema-version=1.0, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': '38.102.83.66:5001/podified-master-centos10/openstack-iscsid:watcher_latest', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251007, tcib_managed=true, config_id=iscsid, container_name=iscsid, managed_by=edpm_ansible)
Oct 09 16:51:02 compute-0 sshd-session[160683]: Connection closed by invalid user oracle 196.251.88.103 port 49548 [preauth]
Oct 09 16:51:02 compute-0 nova_compute[117331]: 2025-10-09 16:51:02.342 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 09 16:51:03 compute-0 nova_compute[117331]: 2025-10-09 16:51:03.369 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 09 16:51:04 compute-0 unix_chkpwd[160726]: password check failed for user (root)
Oct 09 16:51:04 compute-0 sshd-session[160724]: pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=196.251.88.103  user=root
Oct 09 16:51:05 compute-0 nova_compute[117331]: 2025-10-09 16:51:05.473 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 09 16:51:05 compute-0 ovn_metadata_agent[28608]: 2025-10-09 16:51:05.474 28613 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=37, options={'arp_ns_explicit_output': 'true', 'fdb_removal_limit': '0', 'ignore_lsp_down': 'false', 'mac_binding_removal_limit': '0', 'mac_prefix': '4a:65:5f', 'max_tunid': '16711680', 'northd_internal_version': '24.09.4-20.37.0-77.8', 'svc_monitor_mac': '32:24:1e:e3:5d:e8'}, ipsec=False) old=SB_Global(nb_cfg=36) matches /usr/lib/python3.12/site-packages/ovsdbapp/backend/ovs_idl/event.py:55
Oct 09 16:51:05 compute-0 ovn_metadata_agent[28608]: 2025-10-09 16:51:05.476 28613 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 5 seconds run /usr/lib/python3.12/site-packages/neutron/agent/ovn/metadata/agent.py:367
Oct 09 16:51:06 compute-0 sshd-session[160724]: Failed password for root from 196.251.88.103 port 43918 ssh2
Oct 09 16:51:07 compute-0 nova_compute[117331]: 2025-10-09 16:51:07.343 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 09 16:51:08 compute-0 nova_compute[117331]: 2025-10-09 16:51:08.370 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 09 16:51:09 compute-0 sshd-session[160724]: Connection closed by authenticating user root 196.251.88.103 port 43918 [preauth]
Oct 09 16:51:10 compute-0 ovn_metadata_agent[28608]: 2025-10-09 16:51:10.478 28613 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=9954897f-aa83-45dd-8e84-289816676c2a, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '37'}),), if_exists=True) do_commit /usr/lib/python3.12/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 09 16:51:10 compute-0 podman[160730]: 2025-10-09 16:51:10.838613501 +0000 UTC m=+0.068034985 container health_status 64fa251112d01abb34ec1bb8e22019815ddcaaaa391ce5ec91517147ac5a9994 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, build-date=2025-08-20T13:12:41, vendor=Red Hat, Inc., vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, architecture=x86_64, config_id=edpm, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.buildah.version=1.33.7, name=ubi9-minimal, managed_by=edpm_ansible, com.redhat.component=ubi9-minimal-container, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, version=9.6, container_name=openstack_network_exporter, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., release=1755695350, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, io.openshift.expose-services=, io.openshift.tags=minimal rhel9, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., maintainer=Red Hat, Inc., distribution-scope=public, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, url=https://catalog.redhat.com/en/search?searchType=containers, vcs-type=git)
Oct 09 16:51:10 compute-0 podman[160731]: 2025-10-09 16:51:10.86623768 +0000 UTC m=+0.090495700 container health_status d615e4f24ff62fbb14be4a63e6da568ec5b22309773c1c66103b6b4de64a8a2f (image=38.102.83.66:5001/podified-master-centos10/openstack-ovn-controller:watcher_latest, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': '38.102.83.66:5001/podified-master-centos10/openstack-ovn-controller:watcher_latest', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.4, org.label-schema.build-date=20251007, org.label-schema.vendor=CentOS, tcib_build_tag=watcher_latest, config_id=ovn_controller, container_name=ovn_controller, org.label-schema.schema-version=1.0, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 10 Base Image)
Oct 09 16:51:11 compute-0 unix_chkpwd[160776]: password check failed for user (root)
Oct 09 16:51:11 compute-0 sshd-session[160728]: pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=196.251.88.103  user=root
Oct 09 16:51:12 compute-0 nova_compute[117331]: 2025-10-09 16:51:12.346 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 09 16:51:13 compute-0 nova_compute[117331]: 2025-10-09 16:51:13.371 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 09 16:51:13 compute-0 sshd-session[160728]: Failed password for root from 196.251.88.103 port 38282 ssh2
Oct 09 16:51:13 compute-0 sshd-session[160728]: Connection closed by authenticating user root 196.251.88.103 port 38282 [preauth]
Oct 09 16:51:17 compute-0 sshd-session[160779]: Invalid user myuser from 196.251.88.103 port 60880
Oct 09 16:51:17 compute-0 nova_compute[117331]: 2025-10-09 16:51:17.348 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 09 16:51:17 compute-0 sshd-session[160779]: pam_unix(sshd:auth): check pass; user unknown
Oct 09 16:51:17 compute-0 sshd-session[160779]: pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=196.251.88.103
Oct 09 16:51:18 compute-0 nova_compute[117331]: 2025-10-09 16:51:18.374 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 09 16:51:19 compute-0 sshd-session[160779]: Failed password for invalid user myuser from 196.251.88.103 port 60880 ssh2
Oct 09 16:51:19 compute-0 unix_chkpwd[160783]: password check failed for user (root)
Oct 09 16:51:19 compute-0 sshd-session[160781]: pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=193.46.255.103  user=root
Oct 09 16:51:20 compute-0 sshd-session[160779]: Connection closed by invalid user myuser 196.251.88.103 port 60880 [preauth]
Oct 09 16:51:21 compute-0 sshd-session[160781]: Failed password for root from 193.46.255.103 port 56784 ssh2
Oct 09 16:51:22 compute-0 unix_chkpwd[160784]: password check failed for user (root)
Oct 09 16:51:22 compute-0 nova_compute[117331]: 2025-10-09 16:51:22.351 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 09 16:51:22 compute-0 podman[160785]: 2025-10-09 16:51:22.840536739 +0000 UTC m=+0.065599059 container health_status 270830b5167cafbab735d8118980f26987e673cd0fa464dd2202394830bcca66 (image=38.102.83.66:5001/podified-master-centos10/openstack-multipathd:watcher_latest, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251007, org.label-schema.schema-version=1.0, io.buildah.version=1.41.4, org.label-schema.name=CentOS Stream 10 Base Image, tcib_build_tag=watcher_latest, tcib_managed=true, config_id=multipathd, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': '38.102.83.66:5001/podified-master-centos10/openstack-multipathd:watcher_latest', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd)
Oct 09 16:51:23 compute-0 nova_compute[117331]: 2025-10-09 16:51:23.376 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 09 16:51:23 compute-0 sshd-session[160806]: Invalid user app from 196.251.88.103 port 55242
Oct 09 16:51:23 compute-0 sshd-session[160806]: pam_unix(sshd:auth): check pass; user unknown
Oct 09 16:51:23 compute-0 sshd-session[160806]: pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=196.251.88.103
Oct 09 16:51:24 compute-0 sshd-session[160781]: Failed password for root from 193.46.255.103 port 56784 ssh2
Oct 09 16:51:25 compute-0 sshd-session[160806]: Failed password for invalid user app from 196.251.88.103 port 55242 ssh2
Oct 09 16:51:26 compute-0 unix_chkpwd[160808]: password check failed for user (root)
Oct 09 16:51:26 compute-0 podman[160809]: 2025-10-09 16:51:26.857836137 +0000 UTC m=+0.080289586 container health_status 86aa93e887899e54d383b0fc17fb3c5637680d1d8e146a130a557c837f0260fe (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>)
Oct 09 16:51:27 compute-0 sshd-session[160806]: Connection closed by invalid user app 196.251.88.103 port 55242 [preauth]
Oct 09 16:51:27 compute-0 nova_compute[117331]: 2025-10-09 16:51:27.356 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 09 16:51:28 compute-0 nova_compute[117331]: 2025-10-09 16:51:28.388 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 09 16:51:28 compute-0 sshd-session[160781]: Failed password for root from 193.46.255.103 port 56784 ssh2
Oct 09 16:51:29 compute-0 podman[127775]: time="2025-10-09T16:51:29Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Oct 09 16:51:29 compute-0 podman[127775]: @ - - [09/Oct/2025:16:51:29 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 19526 "" "Go-http-client/1.1"
Oct 09 16:51:29 compute-0 podman[127775]: @ - - [09/Oct/2025:16:51:29 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 3039 "" "Go-http-client/1.1"
Oct 09 16:51:30 compute-0 sshd-session[160833]: Invalid user rocky from 196.251.88.103 port 49608
Oct 09 16:51:30 compute-0 sshd-session[160833]: pam_unix(sshd:auth): check pass; user unknown
Oct 09 16:51:30 compute-0 sshd-session[160833]: pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=196.251.88.103
Oct 09 16:51:30 compute-0 sshd-session[160781]: Received disconnect from 193.46.255.103 port 56784:11:  [preauth]
Oct 09 16:51:30 compute-0 sshd-session[160781]: Disconnected from authenticating user root 193.46.255.103 port 56784 [preauth]
Oct 09 16:51:30 compute-0 sshd-session[160781]: PAM 2 more authentication failures; logname= uid=0 euid=0 tty=ssh ruser= rhost=193.46.255.103  user=root
Oct 09 16:51:31 compute-0 openstack_network_exporter[129925]: ERROR   16:51:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Oct 09 16:51:31 compute-0 openstack_network_exporter[129925]: ERROR   16:51:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Oct 09 16:51:31 compute-0 openstack_network_exporter[129925]: ERROR   16:51:31 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Oct 09 16:51:31 compute-0 openstack_network_exporter[129925]: ERROR   16:51:31 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Oct 09 16:51:31 compute-0 openstack_network_exporter[129925]: 
Oct 09 16:51:31 compute-0 openstack_network_exporter[129925]: ERROR   16:51:31 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Oct 09 16:51:31 compute-0 openstack_network_exporter[129925]: 
Oct 09 16:51:31 compute-0 unix_chkpwd[160837]: password check failed for user (root)
Oct 09 16:51:31 compute-0 sshd-session[160835]: pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=193.46.255.103  user=root
Oct 09 16:51:31 compute-0 sshd-session[160833]: Failed password for invalid user rocky from 196.251.88.103 port 49608 ssh2
Oct 09 16:51:31 compute-0 podman[160838]: 2025-10-09 16:51:31.848267732 +0000 UTC m=+0.068476719 container health_status 0e966c7a283689daf23c5db5017f23f685ee836319240188a9f70ddd246563c5 (image=38.102.83.66:5001/podified-master-centos10/openstack-neutron-metadata-agent-ovn:watcher_latest, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': '38.102.83.66:5001/podified-master-centos10/openstack-neutron-metadata-agent-ovn:watcher_latest', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, maintainer=OpenStack 
Kubernetes Operator team, tcib_build_tag=watcher_latest, container_name=ovn_metadata_agent, org.label-schema.vendor=CentOS, io.buildah.version=1.41.4, managed_by=edpm_ansible, org.label-schema.build-date=20251007, org.label-schema.license=GPLv2)
Oct 09 16:51:31 compute-0 podman[160839]: 2025-10-09 16:51:31.885637991 +0000 UTC m=+0.105999183 container health_status 8227728d23f374cb9528be6ddf01dff38d8c792912a17b0d137932a18390b820 (image=38.102.83.66:5001/podified-master-centos10/openstack-iscsid:watcher_latest, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.4, org.label-schema.name=CentOS Stream 10 Base Image, tcib_managed=true, container_name=iscsid, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251007, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, org.label-schema.license=GPLv2, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': '38.102.83.66:5001/podified-master-centos10/openstack-iscsid:watcher_latest', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, config_id=iscsid, org.label-schema.schema-version=1.0, tcib_build_tag=watcher_latest)
Oct 09 16:51:32 compute-0 nova_compute[117331]: 2025-10-09 16:51:32.360 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 09 16:51:32 compute-0 sshd-session[160833]: Connection closed by invalid user rocky 196.251.88.103 port 49608 [preauth]
Oct 09 16:51:32 compute-0 sshd-session[160835]: Failed password for root from 193.46.255.103 port 37210 ssh2
Oct 09 16:51:33 compute-0 nova_compute[117331]: 2025-10-09 16:51:33.392 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 09 16:51:33 compute-0 unix_chkpwd[160877]: password check failed for user (root)
Oct 09 16:51:34 compute-0 systemd[1]: Starting dnf makecache...
Oct 09 16:51:34 compute-0 sshd-session[160878]: Invalid user appuser from 196.251.88.103 port 43974
Oct 09 16:51:35 compute-0 sshd-session[160878]: pam_unix(sshd:auth): check pass; user unknown
Oct 09 16:51:35 compute-0 sshd-session[160878]: pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=196.251.88.103
Oct 09 16:51:35 compute-0 dnf[160880]: Repository 'gating-repo' is missing name in configuration, using id.
Oct 09 16:51:35 compute-0 dnf[160880]: Metadata cache refreshed recently.
Oct 09 16:51:35 compute-0 systemd[1]: dnf-makecache.service: Deactivated successfully.
Oct 09 16:51:35 compute-0 systemd[1]: Finished dnf makecache.
Oct 09 16:51:35 compute-0 ovn_metadata_agent[28608]: 2025-10-09 16:51:35.363 28613 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:405
Oct 09 16:51:35 compute-0 ovn_metadata_agent[28608]: 2025-10-09 16:51:35.364 28613 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:410
Oct 09 16:51:35 compute-0 ovn_metadata_agent[28608]: 2025-10-09 16:51:35.364 28613 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:424
Oct 09 16:51:35 compute-0 sshd-session[160835]: Failed password for root from 193.46.255.103 port 37210 ssh2
Oct 09 16:51:35 compute-0 unix_chkpwd[160882]: password check failed for user (root)
Oct 09 16:51:37 compute-0 sshd-session[160878]: Failed password for invalid user appuser from 196.251.88.103 port 43974 ssh2
Oct 09 16:51:37 compute-0 nova_compute[117331]: 2025-10-09 16:51:37.364 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 09 16:51:37 compute-0 sshd-session[160878]: Connection closed by invalid user appuser 196.251.88.103 port 43974 [preauth]
Oct 09 16:51:37 compute-0 sshd-session[160835]: Failed password for root from 193.46.255.103 port 37210 ssh2
Oct 09 16:51:38 compute-0 sshd-session[160835]: Received disconnect from 193.46.255.103 port 37210:11:  [preauth]
Oct 09 16:51:38 compute-0 sshd-session[160835]: Disconnected from authenticating user root 193.46.255.103 port 37210 [preauth]
Oct 09 16:51:38 compute-0 sshd-session[160835]: PAM 2 more authentication failures; logname= uid=0 euid=0 tty=ssh ruser= rhost=193.46.255.103  user=root
Oct 09 16:51:38 compute-0 nova_compute[117331]: 2025-10-09 16:51:38.407 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 09 16:51:38 compute-0 unix_chkpwd[160885]: password check failed for user (root)
Oct 09 16:51:38 compute-0 sshd-session[160777]: pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=170.64.149.169  user=root
Oct 09 16:51:39 compute-0 unix_chkpwd[160886]: password check failed for user (root)
Oct 09 16:51:39 compute-0 sshd-session[160883]: pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=193.46.255.103  user=root
Oct 09 16:51:40 compute-0 sshd-session[160883]: Failed password for root from 193.46.255.103 port 24140 ssh2
Oct 09 16:51:41 compute-0 sshd-session[160777]: Failed password for root from 170.64.149.169 port 34064 ssh2
Oct 09 16:51:41 compute-0 unix_chkpwd[160889]: password check failed for user (root)
Oct 09 16:51:41 compute-0 sshd-session[160887]: Invalid user tom from 196.251.88.103 port 38338
Oct 09 16:51:41 compute-0 podman[160890]: 2025-10-09 16:51:41.492017531 +0000 UTC m=+0.076261147 container health_status 64fa251112d01abb34ec1bb8e22019815ddcaaaa391ce5ec91517147ac5a9994 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, build-date=2025-08-20T13:12:41, config_id=edpm, io.openshift.expose-services=, com.redhat.component=ubi9-minimal-container, io.openshift.tags=minimal rhel9, distribution-scope=public, vendor=Red Hat, Inc., version=9.6, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, architecture=x86_64, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, name=ubi9-minimal, url=https://catalog.redhat.com/en/search?searchType=containers, managed_by=edpm_ansible, container_name=openstack_network_exporter, maintainer=Red Hat, Inc., config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.buildah.version=1.33.7, release=1755695350, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9.)
Oct 09 16:51:41 compute-0 podman[160891]: 2025-10-09 16:51:41.514016201 +0000 UTC m=+0.092499134 container health_status d615e4f24ff62fbb14be4a63e6da568ec5b22309773c1c66103b6b4de64a8a2f (image=38.102.83.66:5001/podified-master-centos10/openstack-ovn-controller:watcher_latest, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': '38.102.83.66:5001/podified-master-centos10/openstack-ovn-controller:watcher_latest', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, managed_by=edpm_ansible, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251007, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 10 Base Image, container_name=ovn_controller, io.buildah.version=1.41.4, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=watcher_latest)
Oct 09 16:51:41 compute-0 sshd-session[160887]: pam_unix(sshd:auth): check pass; user unknown
Oct 09 16:51:41 compute-0 sshd-session[160887]: pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=196.251.88.103
Oct 09 16:51:42 compute-0 nova_compute[117331]: 2025-10-09 16:51:42.308 2 DEBUG oslo_service.periodic_task [None req-92a13bed-c081-4c7b-87f3-bafebefb79f2 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.12/site-packages/oslo_service/periodic_task.py:210
Oct 09 16:51:42 compute-0 nova_compute[117331]: 2025-10-09 16:51:42.367 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 09 16:51:43 compute-0 nova_compute[117331]: 2025-10-09 16:51:43.307 2 DEBUG oslo_service.periodic_task [None req-92a13bed-c081-4c7b-87f3-bafebefb79f2 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.12/site-packages/oslo_service/periodic_task.py:210
Oct 09 16:51:43 compute-0 nova_compute[117331]: 2025-10-09 16:51:43.410 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 09 16:51:43 compute-0 sshd-session[160883]: Failed password for root from 193.46.255.103 port 24140 ssh2
Oct 09 16:51:43 compute-0 sshd-session[160887]: Failed password for invalid user tom from 196.251.88.103 port 38338 ssh2
Oct 09 16:51:45 compute-0 unix_chkpwd[160934]: password check failed for user (root)
Oct 09 16:51:46 compute-0 sshd-session[160887]: Connection closed by invalid user tom 196.251.88.103 port 38338 [preauth]
Oct 09 16:51:47 compute-0 nova_compute[117331]: 2025-10-09 16:51:47.306 2 DEBUG oslo_service.periodic_task [None req-92a13bed-c081-4c7b-87f3-bafebefb79f2 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.12/site-packages/oslo_service/periodic_task.py:210
Oct 09 16:51:47 compute-0 nova_compute[117331]: 2025-10-09 16:51:47.308 2 DEBUG oslo_service.periodic_task [None req-92a13bed-c081-4c7b-87f3-bafebefb79f2 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.12/site-packages/oslo_service/periodic_task.py:210
Oct 09 16:51:47 compute-0 nova_compute[117331]: 2025-10-09 16:51:47.308 2 DEBUG nova.compute.manager [None req-92a13bed-c081-4c7b-87f3-bafebefb79f2 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.12/site-packages/nova/compute/manager.py:11228
Oct 09 16:51:47 compute-0 nova_compute[117331]: 2025-10-09 16:51:47.370 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 09 16:51:47 compute-0 sshd-session[160883]: Failed password for root from 193.46.255.103 port 24140 ssh2
Oct 09 16:51:47 compute-0 sshd-session[160883]: Received disconnect from 193.46.255.103 port 24140:11:  [preauth]
Oct 09 16:51:47 compute-0 sshd-session[160883]: Disconnected from authenticating user root 193.46.255.103 port 24140 [preauth]
Oct 09 16:51:47 compute-0 sshd-session[160883]: PAM 2 more authentication failures; logname= uid=0 euid=0 tty=ssh ruser= rhost=193.46.255.103  user=root
Oct 09 16:51:48 compute-0 nova_compute[117331]: 2025-10-09 16:51:48.412 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 09 16:51:48 compute-0 sshd-session[160935]: Invalid user dev from 196.251.88.103 port 60934
Oct 09 16:51:49 compute-0 sshd-session[160935]: pam_unix(sshd:auth): check pass; user unknown
Oct 09 16:51:49 compute-0 sshd-session[160935]: pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=196.251.88.103
Oct 09 16:51:51 compute-0 nova_compute[117331]: 2025-10-09 16:51:51.304 2 DEBUG oslo_service.periodic_task [None req-92a13bed-c081-4c7b-87f3-bafebefb79f2 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.12/site-packages/oslo_service/periodic_task.py:210
Oct 09 16:51:51 compute-0 sshd-session[160935]: Failed password for invalid user dev from 196.251.88.103 port 60934 ssh2
Oct 09 16:51:52 compute-0 nova_compute[117331]: 2025-10-09 16:51:52.306 2 DEBUG oslo_service.periodic_task [None req-92a13bed-c081-4c7b-87f3-bafebefb79f2 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.12/site-packages/oslo_service/periodic_task.py:210
Oct 09 16:51:52 compute-0 nova_compute[117331]: 2025-10-09 16:51:52.372 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 09 16:51:52 compute-0 sshd-session[160935]: Connection closed by invalid user dev 196.251.88.103 port 60934 [preauth]
Oct 09 16:51:52 compute-0 nova_compute[117331]: 2025-10-09 16:51:52.818 2 DEBUG oslo_concurrency.lockutils [None req-92a13bed-c081-4c7b-87f3-bafebefb79f2 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:405
Oct 09 16:51:52 compute-0 nova_compute[117331]: 2025-10-09 16:51:52.819 2 DEBUG oslo_concurrency.lockutils [None req-92a13bed-c081-4c7b-87f3-bafebefb79f2 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:410
Oct 09 16:51:52 compute-0 nova_compute[117331]: 2025-10-09 16:51:52.819 2 DEBUG oslo_concurrency.lockutils [None req-92a13bed-c081-4c7b-87f3-bafebefb79f2 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:424
Oct 09 16:51:52 compute-0 nova_compute[117331]: 2025-10-09 16:51:52.820 2 DEBUG nova.compute.resource_tracker [None req-92a13bed-c081-4c7b-87f3-bafebefb79f2 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.12/site-packages/nova/compute/resource_tracker.py:937
Oct 09 16:51:53 compute-0 nova_compute[117331]: 2025-10-09 16:51:53.056 2 WARNING nova.virt.libvirt.driver [None req-92a13bed-c081-4c7b-87f3-bafebefb79f2 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Oct 09 16:51:53 compute-0 nova_compute[117331]: 2025-10-09 16:51:53.057 2 DEBUG oslo_concurrency.processutils [None req-92a13bed-c081-4c7b-87f3-bafebefb79f2 - - - - - -] Running cmd (subprocess): env LANG=C uptime execute /usr/lib/python3.12/site-packages/oslo_concurrency/processutils.py:349
Oct 09 16:51:53 compute-0 nova_compute[117331]: 2025-10-09 16:51:53.086 2 DEBUG oslo_concurrency.processutils [None req-92a13bed-c081-4c7b-87f3-bafebefb79f2 - - - - - -] CMD "env LANG=C uptime" returned: 0 in 0.028s execute /usr/lib/python3.12/site-packages/oslo_concurrency/processutils.py:372
Oct 09 16:51:53 compute-0 nova_compute[117331]: 2025-10-09 16:51:53.087 2 DEBUG nova.compute.resource_tracker [None req-92a13bed-c081-4c7b-87f3-bafebefb79f2 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=6078MB free_disk=73.23813247680664GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": 
"label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.12/site-packages/nova/compute/resource_tracker.py:1136
Oct 09 16:51:53 compute-0 nova_compute[117331]: 2025-10-09 16:51:53.087 2 DEBUG oslo_concurrency.lockutils [None req-92a13bed-c081-4c7b-87f3-bafebefb79f2 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:405
Oct 09 16:51:53 compute-0 nova_compute[117331]: 2025-10-09 16:51:53.087 2 DEBUG oslo_concurrency.lockutils [None req-92a13bed-c081-4c7b-87f3-bafebefb79f2 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.001s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:410
Oct 09 16:51:53 compute-0 nova_compute[117331]: 2025-10-09 16:51:53.447 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 09 16:51:53 compute-0 podman[160940]: 2025-10-09 16:51:53.862792901 +0000 UTC m=+0.082898750 container health_status 270830b5167cafbab735d8118980f26987e673cd0fa464dd2202394830bcca66 (image=38.102.83.66:5001/podified-master-centos10/openstack-multipathd:watcher_latest, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': '38.102.83.66:5001/podified-master-centos10/openstack-multipathd:watcher_latest', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.schema-version=1.0, tcib_build_tag=watcher_latest, tcib_managed=true, config_id=multipathd, container_name=multipathd, io.buildah.version=1.41.4, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251007, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2)
Oct 09 16:51:54 compute-0 nova_compute[117331]: 2025-10-09 16:51:54.133 2 DEBUG nova.compute.resource_tracker [None req-92a13bed-c081-4c7b-87f3-bafebefb79f2 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.12/site-packages/nova/compute/resource_tracker.py:1159
Oct 09 16:51:54 compute-0 nova_compute[117331]: 2025-10-09 16:51:54.133 2 DEBUG nova.compute.resource_tracker [None req-92a13bed-c081-4c7b-87f3-bafebefb79f2 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=79GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] stats={'failed_builds': '0', 'uptime': ' 16:51:53 up  1:00,  0 user,  load average: 0.10, 0.24, 0.35\n'} _report_final_resource_view /usr/lib/python3.12/site-packages/nova/compute/resource_tracker.py:1168
Oct 09 16:51:54 compute-0 nova_compute[117331]: 2025-10-09 16:51:54.151 2 DEBUG nova.scheduler.client.report [None req-92a13bed-c081-4c7b-87f3-bafebefb79f2 - - - - - -] Refreshing inventories for resource provider 593051b8-2000-437f-a915-2616fc8b1671 _refresh_associations /usr/lib/python3.12/site-packages/nova/scheduler/client/report.py:822
Oct 09 16:51:54 compute-0 nova_compute[117331]: 2025-10-09 16:51:54.167 2 DEBUG nova.scheduler.client.report [None req-92a13bed-c081-4c7b-87f3-bafebefb79f2 - - - - - -] Updating ProviderTree inventory for provider 593051b8-2000-437f-a915-2616fc8b1671 from _refresh_and_get_inventory using data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 79, 'reserved': 1, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} _refresh_and_get_inventory /usr/lib/python3.12/site-packages/nova/scheduler/client/report.py:786
Oct 09 16:51:54 compute-0 nova_compute[117331]: 2025-10-09 16:51:54.167 2 DEBUG nova.compute.provider_tree [None req-92a13bed-c081-4c7b-87f3-bafebefb79f2 - - - - - -] Updating inventory in ProviderTree for provider 593051b8-2000-437f-a915-2616fc8b1671 with inventory: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 79, 'reserved': 1, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} update_inventory /usr/lib/python3.12/site-packages/nova/compute/provider_tree.py:176
Oct 09 16:51:54 compute-0 nova_compute[117331]: 2025-10-09 16:51:54.186 2 DEBUG nova.scheduler.client.report [None req-92a13bed-c081-4c7b-87f3-bafebefb79f2 - - - - - -] Refreshing aggregate associations for resource provider 593051b8-2000-437f-a915-2616fc8b1671, aggregates: None _refresh_associations /usr/lib/python3.12/site-packages/nova/scheduler/client/report.py:831
Oct 09 16:51:54 compute-0 nova_compute[117331]: 2025-10-09 16:51:54.205 2 DEBUG nova.scheduler.client.report [None req-92a13bed-c081-4c7b-87f3-bafebefb79f2 - - - - - -] Refreshing trait associations for resource provider 593051b8-2000-437f-a915-2616fc8b1671, traits: HW_CPU_X86_AVX2,HW_CPU_X86_BMI,COMPUTE_SOUND_MODEL_ICH6,COMPUTE_IMAGE_TYPE_RAW,COMPUTE_ARCH_X86_64,COMPUTE_IMAGE_TYPE_AMI,HW_CPU_X86_SSSE3,COMPUTE_SECURITY_TPM_CRB,COMPUTE_GRAPHICS_MODEL_CIRRUS,COMPUTE_GRAPHICS_MODEL_NONE,COMPUTE_VIOMMU_MODEL_VIRTIO,HW_CPU_X86_AESNI,COMPUTE_NET_VIF_MODEL_VMXNET3,COMPUTE_SOUND_MODEL_PCSPK,COMPUTE_VOLUME_ATTACH_WITH_TAG,COMPUTE_SOUND_MODEL_ES1370,COMPUTE_NODE,COMPUTE_VIOMMU_MODEL_AUTO,COMPUTE_SOUND_MODEL_ICH9,COMPUTE_IMAGE_TYPE_QCOW2,COMPUTE_VOLUME_MULTI_ATTACH,COMPUTE_SECURITY_UEFI_SECURE_BOOT,COMPUTE_STORAGE_BUS_USB,COMPUTE_GRAPHICS_MODEL_BOCHS,COMPUTE_SECURITY_TPM_TIS,HW_ARCH_X86_64,COMPUTE_SOUND_MODEL_VIRTIO,HW_CPU_X86_BMI2,COMPUTE_STORAGE_BUS_FDC,COMPUTE_SOUND_MODEL_USB,COMPUTE_STORAGE_BUS_SATA,COMPUTE_SOUND_MODEL_AC97,COMPUTE_NET_VIF_MODEL_SPAPR_VLAN,COMPUTE_IMAGE_TYPE_ISO,COMPUTE_VIOMMU_MODEL_INTEL,COMPUTE_NET_VIF_MODEL_PCNET,HW_CPU_X86_F16C,COMPUTE_NET_VIF_MODEL_IGB,COMPUTE_ACCELERATORS,COMPUTE_NET_VIF_MODEL_RTL8139,HW_CPU_X86_AMD_SVM,COMPUTE_NET_VIF_MODEL_E1000E,HW_CPU_X86_CLMUL,COMPUTE_NET_VIF_MODEL_NE2K_PCI,HW_CPU_X86_SHA,COMPUTE_NET_VIF_MODEL_VIRTIO,HW_CPU_X86_AVX,COMPUTE_ADDRESS_SPACE_EMULATED,COMPUTE_NET_VIF_MODEL_E1000,COMPUTE_IMAGE_TYPE_ARI,COMPUTE_NET_ATTACH_INTERFACE,HW_CPU_X86_SSE4A,COMPUTE_TRUSTED_CERTS,COMPUTE_GRAPHICS_MODEL_VGA,HW_CPU_X86_SSE2,COMPUTE_GRAPHICS_MODEL_VIRTIO,COMPUTE_STORAGE_VIRTIO_FS,COMPUTE_SECURITY_STATELESS_FIRMWARE,COMPUTE_NET_ATTACH_INTERFACE_WITH_TAG,HW_CPU_X86_SSE,HW_CPU_X86_SVM,COMPUTE_STORAGE_BUS_VIRTIO,COMPUTE_NET_VIRTIO_PACKED,COMPUTE_SECURITY_TPM_2_0,HW_CPU_X86_MMX,HW_CPU_X86_FMA3,COMPUTE_VOLUME_EXTEND,COMPUTE_DEVICE_TAGGING,HW_CPU_X86_SSE42,COMPUTE_SOCKET_PCI_NUMA_AFFINITY,COMPUTE_IMAGE_TYPE_AKI,COMPUTE_STORAGE_BUS_SCSI,COM
PUTE_ADDRESS_SPACE_PASSTHROUGH,COMPUTE_RESCUE_BFV,COMPUTE_SOUND_MODEL_SB16,HW_CPU_X86_SSE41,COMPUTE_STORAGE_BUS_IDE,HW_CPU_X86_ABM _refresh_associations /usr/lib/python3.12/site-packages/nova/scheduler/client/report.py:843
Oct 09 16:51:54 compute-0 nova_compute[117331]: 2025-10-09 16:51:54.227 2 DEBUG nova.compute.provider_tree [None req-92a13bed-c081-4c7b-87f3-bafebefb79f2 - - - - - -] Inventory has not changed in ProviderTree for provider: 593051b8-2000-437f-a915-2616fc8b1671 update_inventory /usr/lib/python3.12/site-packages/nova/compute/provider_tree.py:180
Oct 09 16:51:54 compute-0 nova_compute[117331]: 2025-10-09 16:51:54.735 2 DEBUG nova.scheduler.client.report [None req-92a13bed-c081-4c7b-87f3-bafebefb79f2 - - - - - -] Inventory has not changed for provider 593051b8-2000-437f-a915-2616fc8b1671 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 79, 'reserved': 1, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.12/site-packages/nova/scheduler/client/report.py:958
Oct 09 16:51:54 compute-0 sshd-session[160937]: Invalid user server from 196.251.88.103 port 55298
Oct 09 16:51:55 compute-0 nova_compute[117331]: 2025-10-09 16:51:55.246 2 DEBUG nova.compute.resource_tracker [None req-92a13bed-c081-4c7b-87f3-bafebefb79f2 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.12/site-packages/nova/compute/resource_tracker.py:1097
Oct 09 16:51:55 compute-0 nova_compute[117331]: 2025-10-09 16:51:55.247 2 DEBUG oslo_concurrency.lockutils [None req-92a13bed-c081-4c7b-87f3-bafebefb79f2 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 2.159s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:424
Oct 09 16:51:55 compute-0 sshd-session[160937]: pam_unix(sshd:auth): check pass; user unknown
Oct 09 16:51:55 compute-0 sshd-session[160937]: pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=196.251.88.103
Oct 09 16:51:56 compute-0 sshd-session[160937]: Failed password for invalid user server from 196.251.88.103 port 55298 ssh2
Oct 09 16:51:57 compute-0 nova_compute[117331]: 2025-10-09 16:51:57.247 2 DEBUG oslo_service.periodic_task [None req-92a13bed-c081-4c7b-87f3-bafebefb79f2 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.12/site-packages/oslo_service/periodic_task.py:210
Oct 09 16:51:57 compute-0 nova_compute[117331]: 2025-10-09 16:51:57.248 2 DEBUG oslo_service.periodic_task [None req-92a13bed-c081-4c7b-87f3-bafebefb79f2 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.12/site-packages/oslo_service/periodic_task.py:210
Oct 09 16:51:57 compute-0 nova_compute[117331]: 2025-10-09 16:51:57.375 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 09 16:51:57 compute-0 sshd-session[160937]: Connection closed by invalid user server 196.251.88.103 port 55298 [preauth]
Oct 09 16:51:57 compute-0 podman[160960]: 2025-10-09 16:51:57.823724876 +0000 UTC m=+0.060031751 container health_status 86aa93e887899e54d383b0fc17fb3c5637680d1d8e146a130a557c837f0260fe (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible)
Oct 09 16:51:58 compute-0 nova_compute[117331]: 2025-10-09 16:51:58.449 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 09 16:51:59 compute-0 podman[127775]: time="2025-10-09T16:51:59Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Oct 09 16:51:59 compute-0 podman[127775]: @ - - [09/Oct/2025:16:51:59 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 19526 "" "Go-http-client/1.1"
Oct 09 16:51:59 compute-0 podman[127775]: @ - - [09/Oct/2025:16:51:59 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 3035 "" "Go-http-client/1.1"
Oct 09 16:52:00 compute-0 sshd-session[160777]: Connection closed by authenticating user root 170.64.149.169 port 34064 [preauth]
Oct 09 16:52:01 compute-0 openstack_network_exporter[129925]: ERROR   16:52:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Oct 09 16:52:01 compute-0 openstack_network_exporter[129925]: ERROR   16:52:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Oct 09 16:52:01 compute-0 openstack_network_exporter[129925]: ERROR   16:52:01 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Oct 09 16:52:01 compute-0 openstack_network_exporter[129925]: ERROR   16:52:01 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Oct 09 16:52:01 compute-0 openstack_network_exporter[129925]: 
Oct 09 16:52:01 compute-0 openstack_network_exporter[129925]: ERROR   16:52:01 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Oct 09 16:52:01 compute-0 openstack_network_exporter[129925]: 
Oct 09 16:52:01 compute-0 sshd-session[160986]: Invalid user test from 196.251.88.103 port 49660
Oct 09 16:52:02 compute-0 sshd-session[160986]: pam_unix(sshd:auth): check pass; user unknown
Oct 09 16:52:02 compute-0 sshd-session[160986]: pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=196.251.88.103
Oct 09 16:52:02 compute-0 podman[160989]: 2025-10-09 16:52:02.276898391 +0000 UTC m=+0.088222249 container health_status 8227728d23f374cb9528be6ddf01dff38d8c792912a17b0d137932a18390b820 (image=38.102.83.66:5001/podified-master-centos10/openstack-iscsid:watcher_latest, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, tcib_build_tag=watcher_latest, tcib_managed=true, container_name=iscsid, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251007, org.label-schema.name=CentOS Stream 10 Base Image, config_id=iscsid, managed_by=edpm_ansible, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': '38.102.83.66:5001/podified-master-centos10/openstack-iscsid:watcher_latest', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, io.buildah.version=1.41.4, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Oct 09 16:52:02 compute-0 podman[160988]: 2025-10-09 16:52:02.290763002 +0000 UTC m=+0.098921669 container health_status 0e966c7a283689daf23c5db5017f23f685ee836319240188a9f70ddd246563c5 (image=38.102.83.66:5001/podified-master-centos10/openstack-neutron-metadata-agent-ovn:watcher_latest, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': '38.102.83.66:5001/podified-master-centos10/openstack-neutron-metadata-agent-ovn:watcher_latest', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.build-date=20251007, config_id=ovn_metadata_agent, org.label-schema.license=GPLv2, tcib_build_tag=watcher_latest, container_name=ovn_metadata_agent, io.buildah.version=1.41.4, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_managed=true)
Oct 09 16:52:02 compute-0 nova_compute[117331]: 2025-10-09 16:52:02.377 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 09 16:52:03 compute-0 nova_compute[117331]: 2025-10-09 16:52:03.451 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 09 16:52:03 compute-0 sshd-session[160986]: Failed password for invalid user test from 196.251.88.103 port 49660 ssh2
Oct 09 16:52:05 compute-0 sshd-session[160986]: Connection closed by invalid user test 196.251.88.103 port 49660 [preauth]
Oct 09 16:52:05 compute-0 sshd[52903]: drop connection #2 from [120.48.149.106]:40918 on [38.102.83.110]:22 penalty: exceeded LoginGraceTime
Oct 09 16:52:06 compute-0 sshd[52903]: drop connection #2 from [120.48.149.106]:41732 on [38.102.83.110]:22 penalty: exceeded LoginGraceTime
Oct 09 16:52:07 compute-0 nova_compute[117331]: 2025-10-09 16:52:07.380 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 09 16:52:07 compute-0 unix_chkpwd[161030]: password check failed for user (root)
Oct 09 16:52:07 compute-0 sshd-session[161028]: pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=196.251.88.103  user=root
Oct 09 16:52:08 compute-0 nova_compute[117331]: 2025-10-09 16:52:08.458 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 09 16:52:08 compute-0 sshd[52903]: Timeout before authentication for connection from 120.48.149.106 to 38.102.83.110, pid = 160443
Oct 09 16:52:09 compute-0 sshd-session[161028]: Failed password for root from 196.251.88.103 port 44024 ssh2
Oct 09 16:52:09 compute-0 sshd-session[161028]: Connection closed by authenticating user root 196.251.88.103 port 44024 [preauth]
Oct 09 16:52:11 compute-0 podman[161031]: 2025-10-09 16:52:11.889486648 +0000 UTC m=+0.112435658 container health_status 64fa251112d01abb34ec1bb8e22019815ddcaaaa391ce5ec91517147ac5a9994 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, architecture=x86_64, io.openshift.expose-services=, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., release=1755695350, name=ubi9-minimal, vendor=Red Hat, Inc., com.redhat.component=ubi9-minimal-container, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, vcs-type=git, build-date=2025-08-20T13:12:41, config_id=edpm, distribution-scope=public, managed_by=edpm_ansible, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, container_name=openstack_network_exporter, maintainer=Red Hat, Inc., io.buildah.version=1.33.7, version=9.6, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.tags=minimal rhel9, url=https://catalog.redhat.com/en/search?searchType=containers)
Oct 09 16:52:11 compute-0 podman[161032]: 2025-10-09 16:52:11.937297539 +0000 UTC m=+0.155537169 container health_status d615e4f24ff62fbb14be4a63e6da568ec5b22309773c1c66103b6b4de64a8a2f (image=38.102.83.66:5001/podified-master-centos10/openstack-ovn-controller:watcher_latest, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251007, tcib_managed=true, config_id=ovn_controller, container_name=ovn_controller, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 10 Base Image, tcib_build_tag=watcher_latest, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': '38.102.83.66:5001/podified-master-centos10/openstack-ovn-controller:watcher_latest', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.4)
Oct 09 16:52:12 compute-0 nova_compute[117331]: 2025-10-09 16:52:12.383 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 09 16:52:13 compute-0 sshd-session[161077]: Invalid user deploy from 196.251.88.103 port 38388
Oct 09 16:52:13 compute-0 sshd-session[161077]: pam_unix(sshd:auth): check pass; user unknown
Oct 09 16:52:13 compute-0 sshd-session[161077]: pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=196.251.88.103
Oct 09 16:52:13 compute-0 nova_compute[117331]: 2025-10-09 16:52:13.459 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 09 16:52:15 compute-0 sshd-session[161077]: Failed password for invalid user deploy from 196.251.88.103 port 38388 ssh2
Oct 09 16:52:15 compute-0 sshd-session[161077]: Connection closed by invalid user deploy 196.251.88.103 port 38388 [preauth]
Oct 09 16:52:17 compute-0 nova_compute[117331]: 2025-10-09 16:52:17.385 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 09 16:52:18 compute-0 nova_compute[117331]: 2025-10-09 16:52:18.461 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 09 16:52:19 compute-0 unix_chkpwd[161081]: password check failed for user (root)
Oct 09 16:52:19 compute-0 sshd-session[161079]: pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=196.251.88.103  user=root
Oct 09 16:52:20 compute-0 sshd-session[161079]: Failed password for root from 196.251.88.103 port 60990 ssh2
Oct 09 16:52:21 compute-0 sshd-session[161079]: Connection closed by authenticating user root 196.251.88.103 port 60990 [preauth]
Oct 09 16:52:22 compute-0 nova_compute[117331]: 2025-10-09 16:52:22.386 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 09 16:52:23 compute-0 nova_compute[117331]: 2025-10-09 16:52:23.467 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 09 16:52:24 compute-0 podman[161082]: 2025-10-09 16:52:24.836749257 +0000 UTC m=+0.070751012 container health_status 270830b5167cafbab735d8118980f26987e673cd0fa464dd2202394830bcca66 (image=38.102.83.66:5001/podified-master-centos10/openstack-multipathd:watcher_latest, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=watcher_latest, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': '38.102.83.66:5001/podified-master-centos10/openstack-multipathd:watcher_latest', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251007, org.label-schema.name=CentOS Stream 10 Base Image, container_name=multipathd, io.buildah.version=1.41.4, managed_by=edpm_ansible)
Oct 09 16:52:25 compute-0 unix_chkpwd[161104]: password check failed for user (root)
Oct 09 16:52:25 compute-0 sshd-session[161102]: pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=196.251.88.103  user=root
Oct 09 16:52:27 compute-0 nova_compute[117331]: 2025-10-09 16:52:27.389 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 09 16:52:27 compute-0 sshd[52903]: drop connection #2 from [120.48.149.106]:41914 on [38.102.83.110]:22 penalty: exceeded LoginGraceTime
Oct 09 16:52:27 compute-0 sshd-session[161102]: Failed password for root from 196.251.88.103 port 55354 ssh2
Oct 09 16:52:28 compute-0 nova_compute[117331]: 2025-10-09 16:52:28.468 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 09 16:52:28 compute-0 podman[161106]: 2025-10-09 16:52:28.843109658 +0000 UTC m=+0.074985677 container health_status 86aa93e887899e54d383b0fc17fb3c5637680d1d8e146a130a557c837f0260fe (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm)
Oct 09 16:52:29 compute-0 podman[127775]: time="2025-10-09T16:52:29Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Oct 09 16:52:29 compute-0 podman[127775]: @ - - [09/Oct/2025:16:52:29 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 19526 "" "Go-http-client/1.1"
Oct 09 16:52:29 compute-0 podman[127775]: @ - - [09/Oct/2025:16:52:29 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 3032 "" "Go-http-client/1.1"
Oct 09 16:52:29 compute-0 sshd-session[161102]: Connection closed by authenticating user root 196.251.88.103 port 55354 [preauth]
Oct 09 16:52:31 compute-0 openstack_network_exporter[129925]: ERROR   16:52:31 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Oct 09 16:52:31 compute-0 openstack_network_exporter[129925]: ERROR   16:52:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Oct 09 16:52:31 compute-0 openstack_network_exporter[129925]: ERROR   16:52:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Oct 09 16:52:31 compute-0 openstack_network_exporter[129925]: ERROR   16:52:31 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Oct 09 16:52:31 compute-0 openstack_network_exporter[129925]: 
Oct 09 16:52:31 compute-0 openstack_network_exporter[129925]: ERROR   16:52:31 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Oct 09 16:52:31 compute-0 openstack_network_exporter[129925]: 
Oct 09 16:52:31 compute-0 unix_chkpwd[161133]: password check failed for user (root)
Oct 09 16:52:31 compute-0 sshd-session[161131]: pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=196.251.88.103  user=root
Oct 09 16:52:32 compute-0 nova_compute[117331]: 2025-10-09 16:52:32.391 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 09 16:52:32 compute-0 podman[161134]: 2025-10-09 16:52:32.835777673 +0000 UTC m=+0.059584527 container health_status 0e966c7a283689daf23c5db5017f23f685ee836319240188a9f70ddd246563c5 (image=38.102.83.66:5001/podified-master-centos10/openstack-neutron-metadata-agent-ovn:watcher_latest, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_metadata_agent, managed_by=edpm_ansible, tcib_managed=true, config_id=ovn_metadata_agent, io.buildah.version=1.41.4, org.label-schema.build-date=20251007, org.label-schema.schema-version=1.0, tcib_build_tag=watcher_latest, org.label-schema.name=CentOS Stream 10 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': '38.102.83.66:5001/podified-master-centos10/openstack-neutron-metadata-agent-ovn:watcher_latest', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']})
Oct 09 16:52:32 compute-0 podman[161135]: 2025-10-09 16:52:32.840431741 +0000 UTC m=+0.067311523 container health_status 8227728d23f374cb9528be6ddf01dff38d8c792912a17b0d137932a18390b820 (image=38.102.83.66:5001/podified-master-centos10/openstack-iscsid:watcher_latest, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.4, maintainer=OpenStack Kubernetes Operator team, container_name=iscsid, org.label-schema.license=GPLv2, tcib_build_tag=watcher_latest, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': '38.102.83.66:5001/podified-master-centos10/openstack-iscsid:watcher_latest', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, org.label-schema.build-date=20251007, org.label-schema.schema-version=1.0, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.vendor=CentOS, config_id=iscsid)
Oct 09 16:52:33 compute-0 sshd-session[161131]: Failed password for root from 196.251.88.103 port 49716 ssh2
Oct 09 16:52:33 compute-0 nova_compute[117331]: 2025-10-09 16:52:33.470 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 09 16:52:33 compute-0 sshd-session[161131]: Connection closed by authenticating user root 196.251.88.103 port 49716 [preauth]
Oct 09 16:52:35 compute-0 ovn_metadata_agent[28608]: 2025-10-09 16:52:35.365 28613 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:405
Oct 09 16:52:35 compute-0 ovn_metadata_agent[28608]: 2025-10-09 16:52:35.365 28613 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.000s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:410
Oct 09 16:52:35 compute-0 ovn_metadata_agent[28608]: 2025-10-09 16:52:35.366 28613 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:424
Oct 09 16:52:37 compute-0 nova_compute[117331]: 2025-10-09 16:52:37.394 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 09 16:52:37 compute-0 sshd-session[161177]: Invalid user odoo from 196.251.88.103 port 44080
Oct 09 16:52:37 compute-0 sshd-session[161177]: pam_unix(sshd:auth): check pass; user unknown
Oct 09 16:52:37 compute-0 sshd-session[161177]: pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=196.251.88.103
Oct 09 16:52:38 compute-0 nova_compute[117331]: 2025-10-09 16:52:38.473 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 09 16:52:39 compute-0 sshd-session[161177]: Failed password for invalid user odoo from 196.251.88.103 port 44080 ssh2
Oct 09 16:52:40 compute-0 sshd-session[161177]: Connection closed by invalid user odoo 196.251.88.103 port 44080 [preauth]
Oct 09 16:52:42 compute-0 nova_compute[117331]: 2025-10-09 16:52:42.307 2 DEBUG oslo_service.periodic_task [None req-92a13bed-c081-4c7b-87f3-bafebefb79f2 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.12/site-packages/oslo_service/periodic_task.py:210
Oct 09 16:52:42 compute-0 nova_compute[117331]: 2025-10-09 16:52:42.396 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 09 16:52:42 compute-0 podman[161179]: 2025-10-09 16:52:42.829602878 +0000 UTC m=+0.058472091 container health_status 64fa251112d01abb34ec1bb8e22019815ddcaaaa391ce5ec91517147ac5a9994 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, build-date=2025-08-20T13:12:41, name=ubi9-minimal, version=9.6, distribution-scope=public, io.openshift.tags=minimal rhel9, managed_by=edpm_ansible, vendor=Red Hat, Inc., container_name=openstack_network_exporter, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., architecture=x86_64, url=https://catalog.redhat.com/en/search?searchType=containers, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, config_id=edpm, com.redhat.component=ubi9-minimal-container, maintainer=Red Hat, Inc., release=1755695350, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.buildah.version=1.33.7, io.openshift.expose-services=, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, vcs-type=git)
Oct 09 16:52:42 compute-0 podman[161180]: 2025-10-09 16:52:42.891778686 +0000 UTC m=+0.113156411 container health_status d615e4f24ff62fbb14be4a63e6da568ec5b22309773c1c66103b6b4de64a8a2f (image=38.102.83.66:5001/podified-master-centos10/openstack-ovn-controller:watcher_latest, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_controller, org.label-schema.name=CentOS Stream 10 Base Image, tcib_build_tag=watcher_latest, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': '38.102.83.66:5001/podified-master-centos10/openstack-ovn-controller:watcher_latest', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.build-date=20251007, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.41.4, tcib_managed=true)
Oct 09 16:52:43 compute-0 nova_compute[117331]: 2025-10-09 16:52:43.474 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 09 16:52:43 compute-0 sshd-session[161227]: Invalid user guest from 196.251.88.103 port 38446
Oct 09 16:52:43 compute-0 sshd-session[161227]: pam_unix(sshd:auth): check pass; user unknown
Oct 09 16:52:43 compute-0 sshd-session[161227]: pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=196.251.88.103
Oct 09 16:52:45 compute-0 nova_compute[117331]: 2025-10-09 16:52:45.306 2 DEBUG oslo_service.periodic_task [None req-92a13bed-c081-4c7b-87f3-bafebefb79f2 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.12/site-packages/oslo_service/periodic_task.py:210
Oct 09 16:52:45 compute-0 sshd-session[161227]: Failed password for invalid user guest from 196.251.88.103 port 38446 ssh2
Oct 09 16:52:45 compute-0 sshd-session[161227]: Connection closed by invalid user guest 196.251.88.103 port 38446 [preauth]
Oct 09 16:52:47 compute-0 nova_compute[117331]: 2025-10-09 16:52:47.398 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 09 16:52:48 compute-0 nova_compute[117331]: 2025-10-09 16:52:48.476 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 09 16:52:49 compute-0 nova_compute[117331]: 2025-10-09 16:52:49.306 2 DEBUG oslo_service.periodic_task [None req-92a13bed-c081-4c7b-87f3-bafebefb79f2 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.12/site-packages/oslo_service/periodic_task.py:210
Oct 09 16:52:49 compute-0 nova_compute[117331]: 2025-10-09 16:52:49.306 2 DEBUG oslo_service.periodic_task [None req-92a13bed-c081-4c7b-87f3-bafebefb79f2 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.12/site-packages/oslo_service/periodic_task.py:210
Oct 09 16:52:49 compute-0 nova_compute[117331]: 2025-10-09 16:52:49.307 2 DEBUG nova.compute.manager [None req-92a13bed-c081-4c7b-87f3-bafebefb79f2 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.12/site-packages/nova/compute/manager.py:11228
Oct 09 16:52:49 compute-0 unix_chkpwd[161231]: password check failed for user (root)
Oct 09 16:52:49 compute-0 sshd-session[161229]: pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=196.251.88.103  user=root
Oct 09 16:52:51 compute-0 sshd-session[160984]: Connection reset by 170.64.149.169 port 43648 [preauth]
Oct 09 16:52:51 compute-0 nova_compute[117331]: 2025-10-09 16:52:51.302 2 DEBUG oslo_service.periodic_task [None req-92a13bed-c081-4c7b-87f3-bafebefb79f2 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.12/site-packages/oslo_service/periodic_task.py:210
Oct 09 16:52:51 compute-0 sshd-session[161229]: Failed password for root from 196.251.88.103 port 34546 ssh2
Oct 09 16:52:52 compute-0 nova_compute[117331]: 2025-10-09 16:52:52.400 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 09 16:52:53 compute-0 unix_chkpwd[161234]: password check failed for user (root)
Oct 09 16:52:53 compute-0 sshd-session[161232]: pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=170.64.149.169  user=root
Oct 09 16:52:53 compute-0 nova_compute[117331]: 2025-10-09 16:52:53.480 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 09 16:52:53 compute-0 sshd-session[161229]: Connection closed by authenticating user root 196.251.88.103 port 34546 [preauth]
Oct 09 16:52:54 compute-0 nova_compute[117331]: 2025-10-09 16:52:54.306 2 DEBUG oslo_service.periodic_task [None req-92a13bed-c081-4c7b-87f3-bafebefb79f2 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.12/site-packages/oslo_service/periodic_task.py:210
Oct 09 16:52:54 compute-0 nova_compute[117331]: 2025-10-09 16:52:54.832 2 DEBUG oslo_concurrency.lockutils [None req-92a13bed-c081-4c7b-87f3-bafebefb79f2 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:405
Oct 09 16:52:54 compute-0 nova_compute[117331]: 2025-10-09 16:52:54.833 2 DEBUG oslo_concurrency.lockutils [None req-92a13bed-c081-4c7b-87f3-bafebefb79f2 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:410
Oct 09 16:52:54 compute-0 nova_compute[117331]: 2025-10-09 16:52:54.833 2 DEBUG oslo_concurrency.lockutils [None req-92a13bed-c081-4c7b-87f3-bafebefb79f2 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:424
Oct 09 16:52:54 compute-0 nova_compute[117331]: 2025-10-09 16:52:54.833 2 DEBUG nova.compute.resource_tracker [None req-92a13bed-c081-4c7b-87f3-bafebefb79f2 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.12/site-packages/nova/compute/resource_tracker.py:937
Oct 09 16:52:55 compute-0 nova_compute[117331]: 2025-10-09 16:52:55.031 2 WARNING nova.virt.libvirt.driver [None req-92a13bed-c081-4c7b-87f3-bafebefb79f2 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Oct 09 16:52:55 compute-0 nova_compute[117331]: 2025-10-09 16:52:55.033 2 DEBUG oslo_concurrency.processutils [None req-92a13bed-c081-4c7b-87f3-bafebefb79f2 - - - - - -] Running cmd (subprocess): env LANG=C uptime execute /usr/lib/python3.12/site-packages/oslo_concurrency/processutils.py:349
Oct 09 16:52:55 compute-0 nova_compute[117331]: 2025-10-09 16:52:55.052 2 DEBUG oslo_concurrency.processutils [None req-92a13bed-c081-4c7b-87f3-bafebefb79f2 - - - - - -] CMD "env LANG=C uptime" returned: 0 in 0.019s execute /usr/lib/python3.12/site-packages/oslo_concurrency/processutils.py:372
Oct 09 16:52:55 compute-0 nova_compute[117331]: 2025-10-09 16:52:55.053 2 DEBUG nova.compute.resource_tracker [None req-92a13bed-c081-4c7b-87f3-bafebefb79f2 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=6095MB free_disk=73.23811340332031GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": 
"label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.12/site-packages/nova/compute/resource_tracker.py:1136
Oct 09 16:52:55 compute-0 nova_compute[117331]: 2025-10-09 16:52:55.053 2 DEBUG oslo_concurrency.lockutils [None req-92a13bed-c081-4c7b-87f3-bafebefb79f2 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:405
Oct 09 16:52:55 compute-0 nova_compute[117331]: 2025-10-09 16:52:55.053 2 DEBUG oslo_concurrency.lockutils [None req-92a13bed-c081-4c7b-87f3-bafebefb79f2 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:410
Oct 09 16:52:55 compute-0 sshd-session[161235]: Invalid user niaoyun from 196.251.88.103 port 55416
Oct 09 16:52:55 compute-0 podman[161238]: 2025-10-09 16:52:55.403462879 +0000 UTC m=+0.076017110 container health_status 270830b5167cafbab735d8118980f26987e673cd0fa464dd2202394830bcca66 (image=38.102.83.66:5001/podified-master-centos10/openstack-multipathd:watcher_latest, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251007, org.label-schema.name=CentOS Stream 10 Base Image, tcib_managed=true, config_id=multipathd, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': '38.102.83.66:5001/podified-master-centos10/openstack-multipathd:watcher_latest', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.41.4, org.label-schema.vendor=CentOS, tcib_build_tag=watcher_latest)
Oct 09 16:52:55 compute-0 sshd-session[161235]: pam_unix(sshd:auth): check pass; user unknown
Oct 09 16:52:55 compute-0 sshd-session[161235]: pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=196.251.88.103
Oct 09 16:52:55 compute-0 sshd-session[161232]: Failed password for root from 170.64.149.169 port 41878 ssh2
Oct 09 16:52:56 compute-0 nova_compute[117331]: 2025-10-09 16:52:56.095 2 DEBUG nova.compute.resource_tracker [None req-92a13bed-c081-4c7b-87f3-bafebefb79f2 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.12/site-packages/nova/compute/resource_tracker.py:1159
Oct 09 16:52:56 compute-0 nova_compute[117331]: 2025-10-09 16:52:56.096 2 DEBUG nova.compute.resource_tracker [None req-92a13bed-c081-4c7b-87f3-bafebefb79f2 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=79GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] stats={'failed_builds': '0', 'uptime': ' 16:52:55 up  1:01,  0 user,  load average: 0.04, 0.20, 0.32\n'} _report_final_resource_view /usr/lib/python3.12/site-packages/nova/compute/resource_tracker.py:1168
Oct 09 16:52:56 compute-0 nova_compute[117331]: 2025-10-09 16:52:56.118 2 DEBUG nova.compute.provider_tree [None req-92a13bed-c081-4c7b-87f3-bafebefb79f2 - - - - - -] Inventory has not changed in ProviderTree for provider: 593051b8-2000-437f-a915-2616fc8b1671 update_inventory /usr/lib/python3.12/site-packages/nova/compute/provider_tree.py:180
Oct 09 16:52:56 compute-0 nova_compute[117331]: 2025-10-09 16:52:56.626 2 DEBUG nova.scheduler.client.report [None req-92a13bed-c081-4c7b-87f3-bafebefb79f2 - - - - - -] Inventory has not changed for provider 593051b8-2000-437f-a915-2616fc8b1671 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 79, 'reserved': 1, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.12/site-packages/nova/scheduler/client/report.py:958
Oct 09 16:52:57 compute-0 sshd-session[161235]: Failed password for invalid user niaoyun from 196.251.88.103 port 55416 ssh2
Oct 09 16:52:57 compute-0 nova_compute[117331]: 2025-10-09 16:52:57.134 2 DEBUG nova.compute.resource_tracker [None req-92a13bed-c081-4c7b-87f3-bafebefb79f2 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.12/site-packages/nova/compute/resource_tracker.py:1097
Oct 09 16:52:57 compute-0 nova_compute[117331]: 2025-10-09 16:52:57.135 2 DEBUG oslo_concurrency.lockutils [None req-92a13bed-c081-4c7b-87f3-bafebefb79f2 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 2.081s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:424
Oct 09 16:52:57 compute-0 sshd-session[161235]: Connection closed by invalid user niaoyun 196.251.88.103 port 55416 [preauth]
Oct 09 16:52:57 compute-0 nova_compute[117331]: 2025-10-09 16:52:57.402 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 09 16:52:57 compute-0 unix_chkpwd[161258]: password check failed for user (root)
Oct 09 16:52:58 compute-0 nova_compute[117331]: 2025-10-09 16:52:58.135 2 DEBUG oslo_service.periodic_task [None req-92a13bed-c081-4c7b-87f3-bafebefb79f2 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.12/site-packages/oslo_service/periodic_task.py:210
Oct 09 16:52:58 compute-0 nova_compute[117331]: 2025-10-09 16:52:58.135 2 DEBUG oslo_service.periodic_task [None req-92a13bed-c081-4c7b-87f3-bafebefb79f2 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.12/site-packages/oslo_service/periodic_task.py:210
Oct 09 16:52:58 compute-0 nova_compute[117331]: 2025-10-09 16:52:58.306 2 DEBUG oslo_service.periodic_task [None req-92a13bed-c081-4c7b-87f3-bafebefb79f2 - - - - - -] Running periodic task ComputeManager._run_pending_deletes run_periodic_tasks /usr/lib/python3.12/site-packages/oslo_service/periodic_task.py:210
Oct 09 16:52:58 compute-0 nova_compute[117331]: 2025-10-09 16:52:58.307 2 DEBUG nova.compute.manager [None req-92a13bed-c081-4c7b-87f3-bafebefb79f2 - - - - - -] Cleaning up deleted instances _run_pending_deletes /usr/lib/python3.12/site-packages/nova/compute/manager.py:11909
Oct 09 16:52:58 compute-0 nova_compute[117331]: 2025-10-09 16:52:58.480 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 09 16:52:58 compute-0 nova_compute[117331]: 2025-10-09 16:52:58.813 2 DEBUG nova.compute.manager [None req-92a13bed-c081-4c7b-87f3-bafebefb79f2 - - - - - -] There are 0 instances to clean _run_pending_deletes /usr/lib/python3.12/site-packages/nova/compute/manager.py:11918
Oct 09 16:52:59 compute-0 podman[127775]: time="2025-10-09T16:52:59Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Oct 09 16:52:59 compute-0 sshd-session[161232]: Failed password for root from 170.64.149.169 port 41878 ssh2
Oct 09 16:52:59 compute-0 podman[127775]: @ - - [09/Oct/2025:16:52:59 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 19526 "" "Go-http-client/1.1"
Oct 09 16:52:59 compute-0 podman[127775]: @ - - [09/Oct/2025:16:52:59 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 3035 "" "Go-http-client/1.1"
Oct 09 16:52:59 compute-0 podman[161259]: 2025-10-09 16:52:59.842097996 +0000 UTC m=+0.071583538 container health_status 86aa93e887899e54d383b0fc17fb3c5637680d1d8e146a130a557c837f0260fe (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>)
Oct 09 16:53:00 compute-0 unix_chkpwd[161284]: password check failed for user (root)
Oct 09 16:53:00 compute-0 nova_compute[117331]: 2025-10-09 16:53:00.307 2 DEBUG oslo_service.periodic_task [None req-92a13bed-c081-4c7b-87f3-bafebefb79f2 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.12/site-packages/oslo_service/periodic_task.py:210
Oct 09 16:53:00 compute-0 nova_compute[117331]: 2025-10-09 16:53:00.818 2 DEBUG oslo_service.periodic_task [None req-92a13bed-c081-4c7b-87f3-bafebefb79f2 - - - - - -] Running periodic task ComputeManager._cleanup_expired_console_auth_tokens run_periodic_tasks /usr/lib/python3.12/site-packages/oslo_service/periodic_task.py:210
Oct 09 16:53:01 compute-0 sshd-session[161285]: Invalid user testuser from 196.251.88.103 port 49782
Oct 09 16:53:01 compute-0 sshd-session[161285]: pam_unix(sshd:auth): check pass; user unknown
Oct 09 16:53:01 compute-0 sshd-session[161285]: pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=196.251.88.103
Oct 09 16:53:01 compute-0 openstack_network_exporter[129925]: ERROR   16:53:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Oct 09 16:53:01 compute-0 openstack_network_exporter[129925]: ERROR   16:53:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Oct 09 16:53:01 compute-0 openstack_network_exporter[129925]: ERROR   16:53:01 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Oct 09 16:53:01 compute-0 openstack_network_exporter[129925]: ERROR   16:53:01 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Oct 09 16:53:01 compute-0 openstack_network_exporter[129925]: 
Oct 09 16:53:01 compute-0 openstack_network_exporter[129925]: ERROR   16:53:01 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Oct 09 16:53:01 compute-0 openstack_network_exporter[129925]: 
Oct 09 16:53:02 compute-0 sshd-session[161232]: Failed password for root from 170.64.149.169 port 41878 ssh2
Oct 09 16:53:02 compute-0 nova_compute[117331]: 2025-10-09 16:53:02.403 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 09 16:53:02 compute-0 unix_chkpwd[161287]: password check failed for user (root)
Oct 09 16:53:03 compute-0 sshd-session[161285]: Failed password for invalid user testuser from 196.251.88.103 port 49782 ssh2
Oct 09 16:53:03 compute-0 nova_compute[117331]: 2025-10-09 16:53:03.482 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 09 16:53:03 compute-0 podman[161289]: 2025-10-09 16:53:03.833078271 +0000 UTC m=+0.057091868 container health_status 8227728d23f374cb9528be6ddf01dff38d8c792912a17b0d137932a18390b820 (image=38.102.83.66:5001/podified-master-centos10/openstack-iscsid:watcher_latest, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, io.buildah.version=1.41.4, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=watcher_latest, config_id=iscsid, container_name=iscsid, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': '38.102.83.66:5001/podified-master-centos10/openstack-iscsid:watcher_latest', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, org.label-schema.build-date=20251007, org.label-schema.license=GPLv2)
Oct 09 16:53:03 compute-0 podman[161288]: 2025-10-09 16:53:03.833533365 +0000 UTC m=+0.064472013 container health_status 0e966c7a283689daf23c5db5017f23f685ee836319240188a9f70ddd246563c5 (image=38.102.83.66:5001/podified-master-centos10/openstack-neutron-metadata-agent-ovn:watcher_latest, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, io.buildah.version=1.41.4, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': '38.102.83.66:5001/podified-master-centos10/openstack-neutron-metadata-agent-ovn:watcher_latest', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, 
config_id=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.license=GPLv2, container_name=ovn_metadata_agent, org.label-schema.build-date=20251007, org.label-schema.name=CentOS Stream 10 Base Image, tcib_build_tag=watcher_latest, tcib_managed=true)
Oct 09 16:53:04 compute-0 sshd-session[161232]: Failed password for root from 170.64.149.169 port 41878 ssh2
Oct 09 16:53:04 compute-0 unix_chkpwd[161327]: password check failed for user (root)
Oct 09 16:53:05 compute-0 sshd-session[161285]: Connection closed by invalid user testuser 196.251.88.103 port 49782 [preauth]
Oct 09 16:53:06 compute-0 sshd-session[161232]: Failed password for root from 170.64.149.169 port 41878 ssh2
Oct 09 16:53:07 compute-0 unix_chkpwd[161330]: password check failed for user (root)
Oct 09 16:53:07 compute-0 nova_compute[117331]: 2025-10-09 16:53:07.406 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 09 16:53:07 compute-0 sshd-session[161328]: Invalid user runner from 196.251.88.103 port 44144
Oct 09 16:53:07 compute-0 sshd-session[161328]: pam_unix(sshd:auth): check pass; user unknown
Oct 09 16:53:07 compute-0 sshd-session[161328]: pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=196.251.88.103
Oct 09 16:53:08 compute-0 nova_compute[117331]: 2025-10-09 16:53:08.485 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 09 16:53:08 compute-0 sshd-session[161232]: Failed password for root from 170.64.149.169 port 41878 ssh2
Oct 09 16:53:09 compute-0 sshd-session[161232]: error: maximum authentication attempts exceeded for root from 170.64.149.169 port 41878 ssh2 [preauth]
Oct 09 16:53:09 compute-0 sshd-session[161232]: Disconnecting authenticating user root 170.64.149.169 port 41878: Too many authentication failures [preauth]
Oct 09 16:53:09 compute-0 sshd-session[161232]: PAM 5 more authentication failures; logname= uid=0 euid=0 tty=ssh ruser= rhost=170.64.149.169  user=root
Oct 09 16:53:09 compute-0 sshd-session[161232]: PAM service(sshd) ignoring max retries; 6 > 3
Oct 09 16:53:09 compute-0 sshd-session[161328]: Failed password for invalid user runner from 196.251.88.103 port 44144 ssh2
Oct 09 16:53:10 compute-0 unix_chkpwd[161333]: password check failed for user (root)
Oct 09 16:53:10 compute-0 sshd-session[161331]: pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=170.64.149.169  user=root
Oct 09 16:53:11 compute-0 sshd[52903]: drop connection #2 from [120.48.149.106]:48118 on [38.102.83.110]:22 penalty: exceeded LoginGraceTime
Oct 09 16:53:11 compute-0 sshd[52903]: drop connection #2 from [120.48.149.106]:48658 on [38.102.83.110]:22 penalty: exceeded LoginGraceTime
Oct 09 16:53:11 compute-0 sshd-session[161328]: Connection closed by invalid user runner 196.251.88.103 port 44144 [preauth]
Oct 09 16:53:12 compute-0 nova_compute[117331]: 2025-10-09 16:53:12.411 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 09 16:53:13 compute-0 sshd-session[161331]: Failed password for root from 170.64.149.169 port 58786 ssh2
Oct 09 16:53:13 compute-0 nova_compute[117331]: 2025-10-09 16:53:13.488 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 09 16:53:13 compute-0 unix_chkpwd[161336]: password check failed for user (root)
Oct 09 16:53:13 compute-0 sshd-session[161334]: pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=196.251.88.103  user=root
Oct 09 16:53:13 compute-0 podman[161337]: 2025-10-09 16:53:13.839012608 +0000 UTC m=+0.066849109 container health_status 64fa251112d01abb34ec1bb8e22019815ddcaaaa391ce5ec91517147ac5a9994 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, name=ubi9-minimal, release=1755695350, com.redhat.component=ubi9-minimal-container, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, maintainer=Red Hat, Inc., io.buildah.version=1.33.7, vcs-type=git, io.openshift.tags=minimal rhel9, url=https://catalog.redhat.com/en/search?searchType=containers, architecture=x86_64, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., build-date=2025-08-20T13:12:41, managed_by=edpm_ansible, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_id=edpm, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, container_name=openstack_network_exporter, version=9.6, distribution-scope=public, io.openshift.expose-services=, vendor=Red Hat, Inc.)
Oct 09 16:53:13 compute-0 podman[161338]: 2025-10-09 16:53:13.89784566 +0000 UTC m=+0.118585155 container health_status d615e4f24ff62fbb14be4a63e6da568ec5b22309773c1c66103b6b4de64a8a2f (image=38.102.83.66:5001/podified-master-centos10/openstack-ovn-controller:watcher_latest, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_controller, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=watcher_latest, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': '38.102.83.66:5001/podified-master-centos10/openstack-ovn-controller:watcher_latest', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.4, org.label-schema.name=CentOS Stream 10 Base Image, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251007, tcib_managed=true)
Oct 09 16:53:15 compute-0 unix_chkpwd[161384]: password check failed for user (root)
Oct 09 16:53:15 compute-0 sshd-session[161334]: Failed password for root from 196.251.88.103 port 38506 ssh2
Oct 09 16:53:16 compute-0 sshd-session[161331]: Failed password for root from 170.64.149.169 port 58786 ssh2
Oct 09 16:53:17 compute-0 unix_chkpwd[161385]: password check failed for user (root)
Oct 09 16:53:17 compute-0 nova_compute[117331]: 2025-10-09 16:53:17.413 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 09 16:53:17 compute-0 sshd-session[161334]: Connection closed by authenticating user root 196.251.88.103 port 38506 [preauth]
Oct 09 16:53:18 compute-0 nova_compute[117331]: 2025-10-09 16:53:18.493 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 09 16:53:19 compute-0 unix_chkpwd[161388]: password check failed for user (root)
Oct 09 16:53:19 compute-0 sshd-session[161386]: pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=196.251.88.103  user=root
Oct 09 16:53:19 compute-0 sshd-session[161331]: Failed password for root from 170.64.149.169 port 58786 ssh2
Oct 09 16:53:21 compute-0 sshd-session[161386]: Failed password for root from 196.251.88.103 port 32872 ssh2
Oct 09 16:53:21 compute-0 unix_chkpwd[161389]: password check failed for user (root)
Oct 09 16:53:22 compute-0 nova_compute[117331]: 2025-10-09 16:53:22.415 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 09 16:53:23 compute-0 sshd-session[161386]: Connection closed by authenticating user root 196.251.88.103 port 32872 [preauth]
Oct 09 16:53:23 compute-0 nova_compute[117331]: 2025-10-09 16:53:23.529 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 09 16:53:23 compute-0 sshd-session[161331]: Failed password for root from 170.64.149.169 port 58786 ssh2
Oct 09 16:53:24 compute-0 sshd-session[161390]: Invalid user www from 196.251.88.103 port 55468
Oct 09 16:53:24 compute-0 sshd-session[161390]: pam_unix(sshd:auth): check pass; user unknown
Oct 09 16:53:24 compute-0 sshd-session[161390]: pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=196.251.88.103
Oct 09 16:53:25 compute-0 podman[161392]: 2025-10-09 16:53:25.847667448 +0000 UTC m=+0.077951161 container health_status 270830b5167cafbab735d8118980f26987e673cd0fa464dd2202394830bcca66 (image=38.102.83.66:5001/podified-master-centos10/openstack-multipathd:watcher_latest, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.4, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251007, org.label-schema.vendor=CentOS, config_id=multipathd, managed_by=edpm_ansible, tcib_build_tag=watcher_latest, container_name=multipathd, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 10 Base Image, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': '38.102.83.66:5001/podified-master-centos10/openstack-multipathd:watcher_latest', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']})
Oct 09 16:53:26 compute-0 unix_chkpwd[161412]: password check failed for user (root)
Oct 09 16:53:26 compute-0 nova_compute[117331]: 2025-10-09 16:53:26.810 2 DEBUG oslo_service.periodic_task [None req-92a13bed-c081-4c7b-87f3-bafebefb79f2 - - - - - -] Running periodic task ComputeManager._cleanup_incomplete_migrations run_periodic_tasks /usr/lib/python3.12/site-packages/oslo_service/periodic_task.py:210
Oct 09 16:53:26 compute-0 nova_compute[117331]: 2025-10-09 16:53:26.810 2 DEBUG nova.compute.manager [None req-92a13bed-c081-4c7b-87f3-bafebefb79f2 - - - - - -] Cleaning up deleted instances with incomplete migration  _cleanup_incomplete_migrations /usr/lib/python3.12/site-packages/nova/compute/manager.py:11947
Oct 09 16:53:26 compute-0 sshd-session[161390]: Failed password for invalid user www from 196.251.88.103 port 55468 ssh2
Oct 09 16:53:27 compute-0 nova_compute[117331]: 2025-10-09 16:53:27.416 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 09 16:53:27 compute-0 sshd-session[161390]: Connection closed by invalid user www 196.251.88.103 port 55468 [preauth]
Oct 09 16:53:28 compute-0 sshd-session[161331]: Failed password for root from 170.64.149.169 port 58786 ssh2
Oct 09 16:53:28 compute-0 unix_chkpwd[161413]: password check failed for user (root)
Oct 09 16:53:28 compute-0 nova_compute[117331]: 2025-10-09 16:53:28.532 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 09 16:53:29 compute-0 podman[127775]: time="2025-10-09T16:53:29Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Oct 09 16:53:29 compute-0 podman[127775]: @ - - [09/Oct/2025:16:53:29 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 19526 "" "Go-http-client/1.1"
Oct 09 16:53:29 compute-0 podman[127775]: @ - - [09/Oct/2025:16:53:29 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 3033 "" "Go-http-client/1.1"
Oct 09 16:53:30 compute-0 sshd-session[161331]: Failed password for root from 170.64.149.169 port 58786 ssh2
Oct 09 16:53:30 compute-0 podman[161416]: 2025-10-09 16:53:30.840005044 +0000 UTC m=+0.064677909 container health_status 86aa93e887899e54d383b0fc17fb3c5637680d1d8e146a130a557c837f0260fe (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>)
Oct 09 16:53:31 compute-0 openstack_network_exporter[129925]: ERROR   16:53:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Oct 09 16:53:31 compute-0 openstack_network_exporter[129925]: ERROR   16:53:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Oct 09 16:53:31 compute-0 openstack_network_exporter[129925]: ERROR   16:53:31 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Oct 09 16:53:31 compute-0 openstack_network_exporter[129925]: ERROR   16:53:31 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Oct 09 16:53:31 compute-0 openstack_network_exporter[129925]: 
Oct 09 16:53:31 compute-0 openstack_network_exporter[129925]: ERROR   16:53:31 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Oct 09 16:53:31 compute-0 openstack_network_exporter[129925]: 
Oct 09 16:53:31 compute-0 unix_chkpwd[161440]: password check failed for user (root)
Oct 09 16:53:31 compute-0 sshd-session[161414]: pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=196.251.88.103  user=root
Oct 09 16:53:32 compute-0 nova_compute[117331]: 2025-10-09 16:53:32.418 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 09 16:53:32 compute-0 sshd-session[161331]: error: maximum authentication attempts exceeded for root from 170.64.149.169 port 58786 ssh2 [preauth]
Oct 09 16:53:32 compute-0 sshd-session[161331]: Disconnecting authenticating user root 170.64.149.169 port 58786: Too many authentication failures [preauth]
Oct 09 16:53:32 compute-0 sshd-session[161331]: PAM 5 more authentication failures; logname= uid=0 euid=0 tty=ssh ruser= rhost=170.64.149.169  user=root
Oct 09 16:53:32 compute-0 sshd-session[161331]: PAM service(sshd) ignoring max retries; 6 > 3
Oct 09 16:53:33 compute-0 nova_compute[117331]: 2025-10-09 16:53:33.534 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 09 16:53:33 compute-0 sshd-session[161414]: Failed password for root from 196.251.88.103 port 49830 ssh2
Oct 09 16:53:34 compute-0 unix_chkpwd[161443]: password check failed for user (root)
Oct 09 16:53:34 compute-0 sshd-session[161441]: pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=170.64.149.169  user=root
Oct 09 16:53:34 compute-0 podman[161444]: 2025-10-09 16:53:34.805217748 +0000 UTC m=+0.043139973 container health_status 0e966c7a283689daf23c5db5017f23f685ee836319240188a9f70ddd246563c5 (image=38.102.83.66:5001/podified-master-centos10/openstack-neutron-metadata-agent-ovn:watcher_latest, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 10 Base Image, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': '38.102.83.66:5001/podified-master-centos10/openstack-neutron-metadata-agent-ovn:watcher_latest', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, 
org.label-schema.build-date=20251007, tcib_build_tag=watcher_latest, container_name=ovn_metadata_agent, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true, io.buildah.version=1.41.4, maintainer=OpenStack Kubernetes Operator team)
Oct 09 16:53:34 compute-0 podman[161445]: 2025-10-09 16:53:34.826949441 +0000 UTC m=+0.059154024 container health_status 8227728d23f374cb9528be6ddf01dff38d8c792912a17b0d137932a18390b820 (image=38.102.83.66:5001/podified-master-centos10/openstack-iscsid:watcher_latest, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, config_id=iscsid, io.buildah.version=1.41.4, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.vendor=CentOS, container_name=iscsid, org.label-schema.build-date=20251007, org.label-schema.schema-version=1.0, tcib_build_tag=watcher_latest, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': '38.102.83.66:5001/podified-master-centos10/openstack-iscsid:watcher_latest', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team)
Oct 09 16:53:35 compute-0 ovn_metadata_agent[28608]: 2025-10-09 16:53:35.367 28613 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:405
Oct 09 16:53:35 compute-0 ovn_metadata_agent[28608]: 2025-10-09 16:53:35.367 28613 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:410
Oct 09 16:53:35 compute-0 ovn_metadata_agent[28608]: 2025-10-09 16:53:35.367 28613 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:424
Oct 09 16:53:35 compute-0 sshd-session[161414]: Connection closed by authenticating user root 196.251.88.103 port 49830 [preauth]
Oct 09 16:53:36 compute-0 sshd-session[161441]: Failed password for root from 170.64.149.169 port 53032 ssh2
Oct 09 16:53:36 compute-0 unix_chkpwd[161483]: password check failed for user (root)
Oct 09 16:53:37 compute-0 nova_compute[117331]: 2025-10-09 16:53:37.420 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 09 16:53:37 compute-0 sshd-session[161484]: Invalid user user from 196.251.88.103 port 44198
Oct 09 16:53:37 compute-0 sshd-session[161484]: pam_unix(sshd:auth): check pass; user unknown
Oct 09 16:53:37 compute-0 sshd-session[161484]: pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=196.251.88.103
Oct 09 16:53:38 compute-0 sshd-session[161441]: Failed password for root from 170.64.149.169 port 53032 ssh2
Oct 09 16:53:38 compute-0 nova_compute[117331]: 2025-10-09 16:53:38.537 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 09 16:53:38 compute-0 unix_chkpwd[161486]: password check failed for user (root)
Oct 09 16:53:39 compute-0 sshd-session[161484]: Failed password for invalid user user from 196.251.88.103 port 44198 ssh2
Oct 09 16:53:39 compute-0 sshd-session[161487]: Accepted publickey for zuul from 192.168.122.10 port 41070 ssh2: ECDSA SHA256:2Vdz7kVNDZnmAnEBdeIC9De7MGoQwU7bxSCyJABiYXo
Oct 09 16:53:39 compute-0 systemd[1]: Created slice User Slice of UID 1000.
Oct 09 16:53:39 compute-0 systemd[1]: Starting User Runtime Directory /run/user/1000...
Oct 09 16:53:39 compute-0 systemd-logind[841]: New session 17 of user zuul.
Oct 09 16:53:39 compute-0 systemd[1]: Finished User Runtime Directory /run/user/1000.
Oct 09 16:53:39 compute-0 systemd[1]: Starting User Manager for UID 1000...
Oct 09 16:53:39 compute-0 systemd[161491]: pam_unix(systemd-user:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Oct 09 16:53:39 compute-0 sshd-session[161484]: Connection closed by invalid user user 196.251.88.103 port 44198 [preauth]
Oct 09 16:53:39 compute-0 systemd[161491]: Queued start job for default target Main User Target.
Oct 09 16:53:39 compute-0 systemd[161491]: Created slice User Application Slice.
Oct 09 16:53:39 compute-0 systemd[161491]: Started Mark boot as successful after the user session has run 2 minutes.
Oct 09 16:53:39 compute-0 systemd[161491]: Started Daily Cleanup of User's Temporary Directories.
Oct 09 16:53:39 compute-0 systemd[161491]: Reached target Paths.
Oct 09 16:53:39 compute-0 systemd[161491]: Reached target Timers.
Oct 09 16:53:39 compute-0 systemd[161491]: Starting D-Bus User Message Bus Socket...
Oct 09 16:53:39 compute-0 systemd[161491]: Starting Create User's Volatile Files and Directories...
Oct 09 16:53:39 compute-0 systemd[161491]: Listening on D-Bus User Message Bus Socket.
Oct 09 16:53:39 compute-0 systemd[161491]: Reached target Sockets.
Oct 09 16:53:39 compute-0 systemd[161491]: Finished Create User's Volatile Files and Directories.
Oct 09 16:53:39 compute-0 systemd[161491]: Reached target Basic System.
Oct 09 16:53:39 compute-0 systemd[161491]: Reached target Main User Target.
Oct 09 16:53:39 compute-0 systemd[161491]: Startup finished in 172ms.
Oct 09 16:53:39 compute-0 systemd[1]: Started User Manager for UID 1000.
Oct 09 16:53:39 compute-0 systemd[1]: Started Session 17 of User zuul.
Oct 09 16:53:39 compute-0 sshd-session[161487]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Oct 09 16:53:39 compute-0 sudo[161507]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/bash -c 'rm -rf /var/tmp/sos-osp && mkdir /var/tmp/sos-osp && sos report --batch --all-logs --tmp-dir=/var/tmp/sos-osp -p container,openstack_edpm,system,storage,virt'
Oct 09 16:53:39 compute-0 sudo[161507]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 09 16:53:41 compute-0 sshd-session[161441]: Failed password for root from 170.64.149.169 port 53032 ssh2
Oct 09 16:53:42 compute-0 sshd-session[161645]: Invalid user root1 from 196.251.88.103 port 38564
Oct 09 16:53:42 compute-0 nova_compute[117331]: 2025-10-09 16:53:42.422 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 09 16:53:42 compute-0 sshd-session[161645]: pam_unix(sshd:auth): check pass; user unknown
Oct 09 16:53:42 compute-0 sshd-session[161645]: pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=196.251.88.103
Oct 09 16:53:42 compute-0 nova_compute[117331]: 2025-10-09 16:53:42.812 2 DEBUG oslo_service.periodic_task [None req-92a13bed-c081-4c7b-87f3-bafebefb79f2 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.12/site-packages/oslo_service/periodic_task.py:210
Oct 09 16:53:43 compute-0 unix_chkpwd[161653]: password check failed for user (root)
Oct 09 16:53:43 compute-0 nova_compute[117331]: 2025-10-09 16:53:43.538 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 09 16:53:44 compute-0 sshd-session[161645]: Failed password for invalid user root1 from 196.251.88.103 port 38564 ssh2
Oct 09 16:53:44 compute-0 ovs-vsctl[161683]: ovs|00001|db_ctl_base|ERR|no key "dpdk-init" in Open_vSwitch record "." column other_config
Oct 09 16:53:44 compute-0 podman[161705]: 2025-10-09 16:53:44.839920152 +0000 UTC m=+0.068871002 container health_status 64fa251112d01abb34ec1bb8e22019815ddcaaaa391ce5ec91517147ac5a9994 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, vcs-type=git, io.openshift.tags=minimal rhel9, com.redhat.component=ubi9-minimal-container, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, architecture=x86_64, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., build-date=2025-08-20T13:12:41, config_id=edpm, io.openshift.expose-services=, managed_by=edpm_ansible, url=https://catalog.redhat.com/en/search?searchType=containers, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, version=9.6, distribution-scope=public, container_name=openstack_network_exporter, name=ubi9-minimal, release=1755695350, maintainer=Red Hat, Inc., description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vendor=Red Hat, Inc., io.buildah.version=1.33.7, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']})
Oct 09 16:53:44 compute-0 podman[161706]: 2025-10-09 16:53:44.885240154 +0000 UTC m=+0.107992367 container health_status d615e4f24ff62fbb14be4a63e6da568ec5b22309773c1c66103b6b4de64a8a2f (image=38.102.83.66:5001/podified-master-centos10/openstack-ovn-controller:watcher_latest, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.build-date=20251007, org.label-schema.vendor=CentOS, config_id=ovn_controller, io.buildah.version=1.41.4, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 10 Base Image, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': '38.102.83.66:5001/podified-master-centos10/openstack-ovn-controller:watcher_latest', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_build_tag=watcher_latest, container_name=ovn_controller)
Oct 09 16:53:45 compute-0 sshd-session[161441]: Failed password for root from 170.64.149.169 port 53032 ssh2
Oct 09 16:53:45 compute-0 sshd-session[161645]: Connection closed by invalid user root1 196.251.88.103 port 38564 [preauth]
Oct 09 16:53:45 compute-0 virtqemud[117629]: Failed to connect socket to '/var/run/libvirt/virtnetworkd-sock-ro': No such file or directory
Oct 09 16:53:45 compute-0 virtqemud[117629]: Failed to connect socket to '/var/run/libvirt/virtnwfilterd-sock-ro': No such file or directory
Oct 09 16:53:45 compute-0 virtqemud[117629]: Failed to connect socket to '/var/run/libvirt/virtstoraged-sock-ro': No such file or directory
Oct 09 16:53:46 compute-0 nova_compute[117331]: 2025-10-09 16:53:46.306 2 DEBUG oslo_service.periodic_task [None req-92a13bed-c081-4c7b-87f3-bafebefb79f2 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.12/site-packages/oslo_service/periodic_task.py:210
Oct 09 16:53:46 compute-0 crontab[162158]: (root) LIST (root)
Oct 09 16:53:47 compute-0 nova_compute[117331]: 2025-10-09 16:53:47.424 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 09 16:53:47 compute-0 unix_chkpwd[162231]: password check failed for user (root)
Oct 09 16:53:48 compute-0 sshd-session[162235]: Invalid user admin from 196.251.88.103 port 32928
Oct 09 16:53:48 compute-0 nova_compute[117331]: 2025-10-09 16:53:48.539 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 09 16:53:48 compute-0 sshd-session[162235]: pam_unix(sshd:auth): check pass; user unknown
Oct 09 16:53:48 compute-0 sshd-session[162235]: pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=196.251.88.103
Oct 09 16:53:48 compute-0 systemd[1]: Starting Hostname Service...
Oct 09 16:53:49 compute-0 systemd[1]: Started Hostname Service.
Oct 09 16:53:49 compute-0 nova_compute[117331]: 2025-10-09 16:53:49.308 2 DEBUG oslo_service.periodic_task [None req-92a13bed-c081-4c7b-87f3-bafebefb79f2 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.12/site-packages/oslo_service/periodic_task.py:210
Oct 09 16:53:50 compute-0 sshd-session[161441]: Failed password for root from 170.64.149.169 port 53032 ssh2
Oct 09 16:53:50 compute-0 sshd-session[162235]: Failed password for invalid user admin from 196.251.88.103 port 32928 ssh2
Oct 09 16:53:50 compute-0 sshd-session[162235]: Connection closed by invalid user admin 196.251.88.103 port 32928 [preauth]
Oct 09 16:53:51 compute-0 nova_compute[117331]: 2025-10-09 16:53:51.306 2 DEBUG oslo_service.periodic_task [None req-92a13bed-c081-4c7b-87f3-bafebefb79f2 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.12/site-packages/oslo_service/periodic_task.py:210
Oct 09 16:53:51 compute-0 nova_compute[117331]: 2025-10-09 16:53:51.306 2 DEBUG nova.compute.manager [None req-92a13bed-c081-4c7b-87f3-bafebefb79f2 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.12/site-packages/nova/compute/manager.py:11228
Oct 09 16:53:51 compute-0 unix_chkpwd[162456]: password check failed for user (root)
Oct 09 16:53:52 compute-0 nova_compute[117331]: 2025-10-09 16:53:52.426 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 09 16:53:53 compute-0 nova_compute[117331]: 2025-10-09 16:53:53.302 2 DEBUG oslo_service.periodic_task [None req-92a13bed-c081-4c7b-87f3-bafebefb79f2 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.12/site-packages/oslo_service/periodic_task.py:210
Oct 09 16:53:53 compute-0 nova_compute[117331]: 2025-10-09 16:53:53.541 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 09 16:53:53 compute-0 sshd-session[161441]: Failed password for root from 170.64.149.169 port 53032 ssh2
